[jira] [Commented] (HDFS-13448) HDFS Block Placement - Ignore Locality for First Block Replica

2018-07-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16554895#comment-16554895
 ] 

Hudson commented on HDFS-13448:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #14631 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14631/])
HDFS-13448. HDFS Block Placement - Ignore Locality for First Block (templedf: 
rev 849c45db187224095b13fe297a4d7377fbb9d2cd)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirWriteFileOp.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/AddBlockFlag.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/proto/ClientNamenodeProtocol.proto
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CreateFlag.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyDefault.java
* (add) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSDirWriteFileOp.java


> HDFS Block Placement - Ignore Locality for First Block Replica
> --
>
> Key: HDFS-13448
> URL: https://issues.apache.org/jira/browse/HDFS-13448
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: block placement, hdfs-client
>Affects Versions: 2.9.0, 3.0.1
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HDFS-13448.10.patch, HDFS-13448.11.patch, 
> HDFS-13448.12.patch, HDFS-13448.13.patch, HDFS-13448.14.patch, 
> HDFS-13448.6.patch, HDFS-13448.7.patch, HDFS-13448.8.patch
>
>
> According to the HDFS Block Placement Rules:
> {quote}
> /**
>  * The replica placement strategy is that if the writer is on a datanode,
>  * the 1st replica is placed on the local machine, 
>  * otherwise a random datanode. The 2nd replica is placed on a datanode
>  * that is on a different rack. The 3rd replica is placed on a datanode
>  * which is on a different node of the same rack as the second replica.
>  */
> {quote}
> However, there is a hint for the hdfs-client that allows the block placement 
> request to not put a block replica on the local datanode _where 'local' means 
> the same host as the client is being run on._
> {quote}
>   /**
>* Advise that a block replica NOT be written to the local DataNode where
>* 'local' means the same host as the client is being run on.
>*
>* @see CreateFlag#NO_LOCAL_WRITE
>*/
> {quote}
> I propose that we add a new flag that allows the hdfs-client to request that 
> the first block replica be placed on a random DataNode in the cluster.  The 
> subsequent block replicas should follow the normal block placement rules.
> The issue is that when {{NO_LOCAL_WRITE}} is enabled, the first block 
> replica is not placed on the local node, but it is still placed on the local 
> rack.  This comes into play when you have, for example, a Flume 
> agent that is loading data into HDFS.
> If the Flume agent is running on a DataNode, then by default the DataNode 
> local to the Flume agent will always get the first block replica, and this 
> leads to uneven block placement, with the local node always filling up 
> faster than any other node in the cluster.
> Modifying this example, if the DataNode is removed from the host where the 
> Flume agent is running, or {{NO_LOCAL_WRITE}} is enabled by Flume, then 
> the default block placement policy will still prefer the local rack.  This 
> remedies the situation only insofar as the first block replica will now always 
> be distributed to a DataNode on the local rack.
> This new flag would allow a single Flume agent to distribute the blocks 
> randomly, evenly, over the entire cluster instead of hot-spotting the local 
> node or the local rack.
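For a rough illustration of how a client could request this behavior once the 
flag exists, here is a minimal sketch; the flag name {{IGNORE_CLIENT_LOCALITY}} 
used below is an assumption and should be verified against the committed 
{{CreateFlag}}:

{code}
import java.io.IOException;
import java.util.EnumSet;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class RandomFirstReplicaWrite {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path file = new Path("/flume/events/event.log");
    // IGNORE_CLIENT_LOCALITY is an assumed name; check CreateFlag.java.
    EnumSet<CreateFlag> flags = EnumSet.of(CreateFlag.CREATE,
        CreateFlag.OVERWRITE, CreateFlag.IGNORE_CLIENT_LOCALITY);
    try (FSDataOutputStream out = fs.create(file,
        FsPermission.getFileDefault(), flags, 4096 /* buffer size */,
        (short) 3 /* replication */, fs.getDefaultBlockSize(file),
        null /* no progress callback */)) {
      out.writeBytes("event data\n");
    }
  }
}
{code}

With the flag set, only the first replica's placement changes; the second and 
third replicas still follow the default policy.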



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13448) HDFS Block Placement - Ignore Locality for First Block Replica

2018-07-24 Thread Daniel Templeton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HDFS-13448:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.4
   3.1.1
   3.2.0
   Status: Resolved  (was: Patch Available)

Thanks for the patch, [~belugabehr].  Committed to trunk, branch-3.1, and 
branch-3.0.  Let me know if we need to push this back into branch-2 as well.

> HDFS Block Placement - Ignore Locality for First Block Replica
> --
>
> Key: HDFS-13448
> URL: https://issues.apache.org/jira/browse/HDFS-13448
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: block placement, hdfs-client
>Affects Versions: 2.9.0, 3.0.1
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Fix For: 3.2.0, 3.1.1, 3.0.4
>
> Attachments: HDFS-13448.10.patch, HDFS-13448.11.patch, 
> HDFS-13448.12.patch, HDFS-13448.13.patch, HDFS-13448.14.patch, 
> HDFS-13448.6.patch, HDFS-13448.7.patch, HDFS-13448.8.patch
>
>
> According to the HDFS Block Placement Rules:
> {quote}
> /**
>  * The replica placement strategy is that if the writer is on a datanode,
>  * the 1st replica is placed on the local machine, 
>  * otherwise a random datanode. The 2nd replica is placed on a datanode
>  * that is on a different rack. The 3rd replica is placed on a datanode
>  * which is on a different node of the same rack as the second replica.
>  */
> {quote}
> However, there is a hint for the hdfs-client that allows the block placement 
> request to not put a block replica on the local datanode _where 'local' means 
> the same host as the client is being run on._
> {quote}
>   /**
>* Advise that a block replica NOT be written to the local DataNode where
>* 'local' means the same host as the client is being run on.
>*
>* @see CreateFlag#NO_LOCAL_WRITE
>*/
> {quote}
> I propose that we add a new flag that allows the hdfs-client to request that 
> the first block replica be placed on a random DataNode in the cluster.  The 
> subsequent block replicas should follow the normal block placement rules.
> The issue is that when {{NO_LOCAL_WRITE}} is enabled, the first block 
> replica is not placed on the local node, but it is still placed on the local 
> rack.  This comes into play when you have, for example, a Flume 
> agent that is loading data into HDFS.
> If the Flume agent is running on a DataNode, then by default the DataNode 
> local to the Flume agent will always get the first block replica, and this 
> leads to uneven block placement, with the local node always filling up 
> faster than any other node in the cluster.
> Modifying this example, if the DataNode is removed from the host where the 
> Flume agent is running, or {{NO_LOCAL_WRITE}} is enabled by Flume, then 
> the default block placement policy will still prefer the local rack.  This 
> remedies the situation only insofar as the first block replica will now always 
> be distributed to a DataNode on the local rack.
> This new flag would allow a single Flume agent to distribute the blocks 
> randomly, evenly, over the entire cluster instead of hot-spotting the local 
> node or the local rack.






[jira] [Updated] (HDFS-13688) Introduce msync API call

2018-07-24 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-13688:
--
Attachment: (was: HDFS-13688-HDFS-12943.003.patch)

> Introduce msync API call
> 
>
> Key: HDFS-13688
> URL: https://issues.apache.org/jira/browse/HDFS-13688
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-13688-HDFS-12943.001.patch, 
> HDFS-13688-HDFS-12943.002.patch, HDFS-13688-HDFS-12943.002.patch, 
> HDFS-13688-HDFS-12943.WIP.002.patch, HDFS-13688-HDFS-12943.WIP.patch
>
>
> As mentioned in the design doc in HDFS-12943, to ensure consistent reads, we 
> need to introduce an RPC call {{msync}}. Specifically, a client can issue an 
> msync call to the Observer node along with a transaction ID. The msync call 
> will only return when the Observer's transaction ID has caught up to the 
> given ID. This JIRA is to add this API.
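A hedged sketch of the call pattern this enables: the client tracks the 
transaction ID of its last write and calls msync before reading from the 
Observer. The types and method placement below are illustrative stand-ins, not 
the final API:

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative stand-in for the interface under review.
interface ObserverClient {
  /** Blocks until the Observer has applied transactions up to txid. */
  void msync(long txid) throws IOException;
}

class ConsistentReads {
  /** Read metadata from the Observer only after it has caught up. */
  static FileStatus readAfterMsync(ObserverClient observer,
      FileSystem observerFs, Path path, long lastSeenTxid) throws IOException {
    observer.msync(lastSeenTxid); // returns once Observer txid >= lastSeenTxid
    return observerFs.getFileStatus(path);
  }
}
{code}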






[jira] [Updated] (HDFS-13688) Introduce msync API call

2018-07-24 Thread Chen Liang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-13688:
--
Attachment: HDFS-13688-HDFS-12943.003.patch

> Introduce msync API call
> 
>
> Key: HDFS-13688
> URL: https://issues.apache.org/jira/browse/HDFS-13688
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-13688-HDFS-12943.001.patch, 
> HDFS-13688-HDFS-12943.002.patch, HDFS-13688-HDFS-12943.002.patch, 
> HDFS-13688-HDFS-12943.003.patch, HDFS-13688-HDFS-12943.WIP.002.patch, 
> HDFS-13688-HDFS-12943.WIP.patch
>
>
> As mentioned in the design doc in HDFS-12943, to ensure consistent reads, we 
> need to introduce an RPC call {{msync}}. Specifically, a client can issue an 
> msync call to the Observer node along with a transaction ID. The msync call 
> will only return when the Observer's transaction ID has caught up to the 
> given ID. This JIRA is to add this API.






[jira] [Updated] (HDDS-288) Fix bugs in OpenContainerBlockMap

2018-07-24 Thread Xiaoyu Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-288:

Fix Version/s: 0.2.1

> Fix bugs in OpenContainerBlockMap
> -
>
> Key: HDDS-288
> URL: https://issues.apache.org/jira/browse/HDDS-288
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-288.20180724.patch
>
>
> - OpenContainerBlockMap should not be synchronized, for better performance. 
> - addChunkToMap may add the same chunk twice.  See the comments below.
> {code}
>   // (1) when the id is absent, putIfAbsent inserts a KeyData built from info:
>   keyDataSet.putIfAbsent(blockID.getLocalID(), getKeyData(info, blockID));
>   // (2) now the id is present, so computeIfPresent adds the chunk again:
>   keyDataSet.computeIfPresent(blockID.getLocalID(), (key, value) -> {
>     value.addChunk(info);
>     return value;
>   });
> {code}
> - In removeContainer(..), use remove(..) instead of computeIfPresent(..).
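For contrast, a hedged sketch of collapsing the create-then-add into one atomic 
step, reusing the names from the snippet above. The empty-KeyData helper is 
hypothetical; if getKeyData(info, blockID) already embeds the chunk, it must 
not be reused here or the chunk would again be duplicated:

{code}
// Create the entry if absent, then add the chunk exactly once, whether or
// not the entry already existed.
KeyData keyData = keyDataSet.computeIfAbsent(
    blockID.getLocalID(),
    id -> newKeyDataWithoutChunks(blockID)); // hypothetical helper
keyData.addChunk(info);
{code}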






[jira] [Commented] (HDDS-285) Create a generic Metadata Iterator

2018-07-24 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16554919#comment-16554919
 ] 

genericqa commented on HDDS-285:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 54s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
52s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDDS-285 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12932965/HDDS-285.04.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux dc53651de853 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ea2c6c8 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDDS-Build/617/testReport/ |
| Max. process+thread count | 301 (vs. ulimit of 1) |
| modules | C: hadoop-hdds/common U: hadoop-hdds/common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDDS-Build/617/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Create a generic Metadata Iterator
> --
>
> Key: HDDS-285
> URL: https://issues.apache.org/jira/browse/HDDS-285
> Project: Hadoop Distributed Data Store

[jira] [Commented] (HDDS-266) Integrate checksum into .container file

2018-07-24 Thread Bharat Viswanadham (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16554924#comment-16554924
 ] 

Bharat Viswanadham commented on HDDS-266:
-

Hi [~hanishakoneru],

Thanks for the updated patch.

+1, looks good to me. The comment below can be addressed while committing the patch.

1. Javadoc is missing in ContainerDataYaml for the getYamlForContainerType method.

> Integrate checksum into .container file
> ---
>
> Key: HDDS-266
> URL: https://issues.apache.org/jira/browse/HDDS-266
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-266.001.patch, HDDS-266.002.patch, 
> HDDS-266.003.patch, HDDS-266.004.patch
>
>
> Currently, each container's metadata consists of 2 files: a .container file 
> and a .checksum file.
> In this Jira, we propose to integrate the checksum into the .container file 
> itself. This will help with synchronization during container updates.






[jira] [Commented] (HDFS-13761) Add toString Method to AclFeature Class

2018-07-24 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16554929#comment-16554929
 ] 

Shweta commented on HDFS-13761:
---

Hi [~xiaochen],

Thank you for the valuable suggestion. Yes, without the class name being 
present it is hard to understand the message. 
 An example message displayed by the toString():

{{AclFeature : a8148e04, Size of entries : 43}}

As seen above, I have also displayed the hashCode in hexadecimal format, as per 
your suggestion, to make it more canonical.

I have submitted the patch with these changes. Please review. Thank you.
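For reference, a minimal sketch of such an override, assuming AclFeature keeps 
its entries in an int[] field; the committed patch may format differently:

{code}
@Override
public String toString() {
  return "AclFeature : " + Integer.toHexString(hashCode())
      + ", Size of entries : " + entries.length;
}
{code}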

> Add toString Method to AclFeature Class
> ---
>
> Key: HDFS-13761
> URL: https://issues.apache.org/jira/browse/HDFS-13761
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Shweta
>Assignee: Shweta
>Priority: Minor
> Attachments: HDFS-13761.01.patch
>
>







[jira] [Comment Edited] (HDFS-13761) Add toString Method to AclFeature Class

2018-07-24 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16554929#comment-16554929
 ] 

Shweta edited comment on HDFS-13761 at 7/25/18 12:16 AM:
-

Hi [~xiaochen],

Thank you for the valuable suggestion. Yes, without the class name being 
present it is hard to understand the message. 
 An example message displayed by the toString():

{{AclFeature : a8148e04, Size of entries : 43}}

As seen above, I have also displayed the hashCode in hexadecimal format, as per 
your suggestion, to make it more canonical.

I have submitted the patch with these changes. Please review. Thank you.


was (Author: shwetayakkali):
Hi [Xiao 
Chen|applewebdata://DD3962E0-6739-459F-861D-E56B186D84D4/jira/secure/ViewProfile.jspa?name=xiaochen],

Thank you for the valuable suggestion. Yes, without the Class name being 
present it is hard to tell or understand the message. 
 An example message displayed by the toString() :

{{AclFeature : a8148e04, Size of entries : 43 }}

As seen above I have displayed the hashCode too in Hexadecimal format as per 
your suggestion to make it more canonical.

I have submitted the patch with these changes. Please review. Thank you.

> Add toString Method to AclFeature Class
> ---
>
> Key: HDFS-13761
> URL: https://issues.apache.org/jira/browse/HDFS-13761
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Shweta
>Assignee: Shweta
>Priority: Minor
> Attachments: HDFS-13761.01.patch, HDFS-13761.02.patch
>
>







[jira] [Updated] (HDFS-13761) Add toString Method to AclFeature Class

2018-07-24 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HDFS-13761:
--
Attachment: HDFS-13761.02.patch

> Add toString Method to AclFeature Class
> ---
>
> Key: HDFS-13761
> URL: https://issues.apache.org/jira/browse/HDFS-13761
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Shweta
>Assignee: Shweta
>Priority: Minor
> Attachments: HDFS-13761.01.patch, HDFS-13761.02.patch
>
>







[jira] [Commented] (HDDS-288) Fix bugs in OpenContainerBlockMap

2018-07-24 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16554931#comment-16554931
 ] 

genericqa commented on HDDS-288:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-ozone/integration-test {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
40s{color} | {color:red} hadoop-hdds_container-service generated 1 new + 2 
unchanged - 0 fixed = 3 total (was 2) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
57s{color} | {color:green} container-service in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 40s{color} 
| {color:red} integration-test in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
44s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}126m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDDS-288 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12932962/HDDS-288.20180724.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
uni

[jira] [Commented] (HDFS-13622) mkdir should not print the directory being created in the error message when parent directories do not exist

2018-07-24 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16554933#comment-16554933
 ] 

Shweta commented on HDFS-13622:
---

Hi [~xiaochen],

Thank you for pointing out the failing tests. I have made the changes to 
the test configuration file, which fixes this issue. I have added the changes 
and submitted the patch. Please review. Thank you.

> mkdir should not print the directory being created in the error message when 
> parent directories do not exist
> 
>
> Key: HDFS-13622
> URL: https://issues.apache.org/jira/browse/HDFS-13622
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Shweta
>Priority: Major
> Attachments: HDFS-13622.02.patch, HDFS-13622.03.patch, 
> HDFS-13622.04.patch, HDFS-13622.05.patch, HDFS-13622.06.patch
>
>
> this is a bit misleading:
> {code}
> $ hdfs  dfs -mkdir /nonexistent/newdir
> mkdir: `/nonexistent/newdir': No such file or directory
> {code}
> I think this command should fail because "/nonexistent" doesn't exist...
> the correct output would be:
> {code}
> $ hdfs  dfs -mkdir /nonexistent/newdir
> mkdir: `/nonexistent': No such file or directory
> {code}
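A hedged sketch of the kind of check that produces the corrected message: walk 
up from the target until the first missing ancestor is found. The helper shape 
is illustrative, not the patch's:

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class MkdirErrors {
  /** Returns the highest ancestor of target that does not exist,
   *  e.g. /nonexistent for /nonexistent/newdir. */
  static Path firstMissingAncestor(FileSystem fs, Path target)
      throws IOException {
    Path missing = target;
    for (Path p = target; p != null && !fs.exists(p); p = p.getParent()) {
      missing = p; // remember the path closest to the root that is missing
    }
    return missing;
  }
}
{code}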






[jira] [Updated] (HDFS-13622) mkdir should not print the directory being created in the error message when parent directories do not exist

2018-07-24 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HDFS-13622:
--
Attachment: HDFS-13622.06.patch

> mkdir should not print the directory being created in the error message when 
> parent directories do not exist
> 
>
> Key: HDFS-13622
> URL: https://issues.apache.org/jira/browse/HDFS-13622
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Shweta
>Priority: Major
> Attachments: HDFS-13622.02.patch, HDFS-13622.03.patch, 
> HDFS-13622.04.patch, HDFS-13622.05.patch, HDFS-13622.06.patch
>
>
> this is a bit misleading:
> {code}
> $ hdfs  dfs -mkdir /nonexistent/newdir
> mkdir: `/nonexistent/newdir': No such file or directory
> {code}
> I think this command should fail because "/nonexistent" doesn't exist...
> the correct output would be:
> {code}
> $ hdfs  dfs -mkdir /nonexistent/newdir
> mkdir: `/nonexistent': No such file or directory
> {code}






[jira] [Commented] (HDFS-13735) Make QJM HTTP URL connection timeout configurable

2018-07-24 Thread Konstantin Shvachko (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16554951#comment-16554951
 ] 

Konstantin Shvachko commented on HDFS-13735:


Looked at your patch, Chao. I am not a big fan of adding new timeout 
configuration parameters for every type of connection; QJM and Hadoop in 
general already have a bunch of those. So my questions are:
# Can we reuse an existing parameter for this purpose?
# If we cannot reuse an existing one, should we make the new ones public, keep 
them undocumented, or use a reasonable hard-coded constant?
# If we introduce a new parameter, we should give it a reasonable default 
value. What is a reasonable timeout here? You set it to the old default.
# The best solution would be to take the HTTP call ({{readOp()}}) out of the 
global lock. Can it be done?

> Make QJM HTTP URL connection timeout configurable
> -
>
> Key: HDFS-13735
> URL: https://issues.apache.org/jira/browse/HDFS-13735
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: qjm
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Minor
> Attachments: HDFS-13735.000.patch, HDFS-13735.001.patch
>
>
> We've seen "connect timed out" happen internally when QJM tries to open HTTP 
> connections to JNs. This currently uses {{newDefaultURLConnectionFactory}}, 
> which uses the default timeout of 60s and is not configurable.
> It would be better for this to be configurable, especially for 
> ObserverNameNode (HDFS-12943), where latency is important and 60s may not be 
> a good value.
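For concreteness, a minimal sketch of the shape such a knob could take; the 
configuration key below is hypothetical (exactly the kind of new parameter 
being debated above), while 60s mirrors the current hard-coded default:

{code}
import java.io.IOException;
import java.net.URL;
import java.net.URLConnection;

import org.apache.hadoop.conf.Configuration;

class QjmHttpTimeouts {
  static URLConnection openWithTimeout(URL url, Configuration conf)
      throws IOException {
    // Hypothetical key; the review above will settle the real name, if any.
    int timeoutMs =
        conf.getInt("dfs.qjournal.http.connection.timeout.ms", 60_000);
    URLConnection conn = url.openConnection();
    conn.setConnectTimeout(timeoutMs);
    conn.setReadTimeout(timeoutMs);
    return conn;
  }
}
{code}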






[jira] [Updated] (HDDS-268) Add SCM close container watcher

2018-07-24 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-268:

Attachment: HDDS-268.00.patch

> Add SCM close container watcher
> ---
>
> Key: HDDS-268
> URL: https://issues.apache.org/jira/browse/HDDS-268
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HDDS-268.00.patch
>
>







[jira] [Commented] (HDFS-13688) Introduce msync API call

2018-07-24 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16555017#comment-16555017
 ] 

genericqa commented on HDFS-13688:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
4s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 29m 
32s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 31m 
21s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
9s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
47s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
17s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
56s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 29m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 29m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 10m 
53s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
59s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
17s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
17s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 47s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 32m 10s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m  4s{color} 
| {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
58s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}202m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs

[jira] [Assigned] (HDDS-286) Fix NodeReportPublisher.getReport NPE

2018-07-24 Thread Junping Du (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du reassigned HDDS-286:
---

Assignee: Junjie Chen

> Fix NodeReportPublisher.getReport NPE
> -
>
> Key: HDDS-286
> URL: https://issues.apache.org/jira/browse/HDDS-286
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Junjie Chen
>Priority: Major
>  Labels: newbie
> Fix For: 0.2.1
>
>
> This can be reproduced with TestKeys#testPutKey:
> {code}
> 2018-07-23 21:33:55,598 WARN  concurrent.ExecutorHelper 
> (ExecutorHelper.java:logThrowableFromAfterExecute(63)) - Caught exception in 
> thread Datanode ReportManager Thread - 0: 
> java.lang.NullPointerException
>   at org.apache.hadoop.ozone.container.common.volume.VolumeInfo.getScmUsed(VolumeInfo.java:107)
>   at org.apache.hadoop.ozone.container.common.volume.VolumeSet.getNodeReport(VolumeSet.java:350)
>   at org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.getNodeReport(OzoneContainer.java:260)
>   at org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:64)
>   at org.apache.hadoop.ozone.container.common.report.NodeReportPublisher.getReport(NodeReportPublisher.java:39)
>   at org.apache.hadoop.ozone.container.common.report.ReportPublisher.publishReport(ReportPublisher.java:86)
>   at org.apache.hadoop.ozone.container.common.report.ReportPublisher.run(ReportPublisher.java:73)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
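The trace points at {{VolumeInfo.getScmUsed}} (VolumeInfo.java:107) 
dereferencing a field that can still be null while the volume is initializing 
or after it has been shut down. A hedged sketch of the kind of guard that 
would avoid the NPE; the field name follows the trace, and the real fix may 
differ:

{code}
// Sketch only: assumes a `usage` field that may be null before the volume
// is fully initialized or after shutdown.
public long getScmUsed() throws IOException {
  if (usage == null) {
    throw new IOException("Volume usage information is not available");
  }
  return usage.getUsed();
}
{code}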






[jira] [Commented] (HDFS-12716) 'dfs.datanode.failed.volumes.tolerated' to support minimum number of volumes to be available

2018-07-24 Thread Yiqun Lin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16555050#comment-16555050
 ] 

Yiqun Lin commented on HDFS-12716:
--

LGTM, +1. [~RANith], would you mind attaching a patch for branch-2? I plan to 
commit this to branch-2 as well. I will hold off committing until tomorrow in 
case there are other comments.


>  'dfs.datanode.failed.volumes.tolerated' to support minimum number of volumes 
> to be available
> -
>
> Key: HDFS-12716
> URL: https://issues.apache.org/jira/browse/HDFS-12716
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: usharani
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-12716.002.patch, HDFS-12716.003.patch, 
> HDFS-12716.004.patch, HDFS-12716.005.patch, HDFS-12716.patch
>
>
>   Currently 'dfs.datanode.failed.volumes.tolerated' lets the number of 
> tolerated failed volumes be specified. Changing this configuration requires 
> a restart of the datanode. Since datanode volumes can be changed dynamically, 
> keeping this configuration the same for all datanodes may not be a good idea.
> Support 'dfs.datanode.failed.volumes.tolerated' accepting a special 
> negative value 'x' to tolerate failures of up to "n-x" volumes.
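A worked sketch of the proposed interpretation (not the committed logic):

{code}
class VolumeFailureCheck {
  //   value >= 0 : tolerate at most `value` failed volumes (current behavior);
  //   value <  0 : require at least |value| healthy volumes, i.e. tolerate up
  //                to n - |value| failures on a node with n volumes.
  static boolean hasEnoughVolumes(int configuredTolerated, int totalVolumes,
      int failedVolumes) {
    if (configuredTolerated >= 0) {
      return failedVolumes <= configuredTolerated;
    }
    int minimumHealthy = -configuredTolerated;
    return (totalVolumes - failedVolumes) >= minimumHealthy;
  }
  // Example: with 8 volumes and a configured value of -1, up to 7 volume
  // failures are tolerated as long as one volume remains healthy.
}
{code}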






[jira] [Commented] (HDFS-13761) Add toString Method to AclFeature Class

2018-07-24 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16555076#comment-16555076
 ] 

genericqa commented on HDFS-13761:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 24s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 11s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}165m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSClientRetries |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | HDFS-13761 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12932974/HDFS-13761.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0a509c30bc7b 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 849c45d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24651/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/24651/testReport/ |
| Max. process+thread count | 2968 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.

[jira] [Commented] (HDFS-13688) Introduce msync API call

2018-07-24 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16555108#comment-16555108
 ] 

genericqa commented on HDFS-13688:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-12943 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
32s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
26s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
37s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 32s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
24s{color} | {color:green} HDFS-12943 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
26s{color} | {color:green} HDFS-12943 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
53s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 28m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  4s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
57s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
25s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m 47s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}102m 33s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 
47s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}268m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-client |
|  |  Dead store to e in 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.msync()  
At 
ClientNamenodeProtocolTranslatorPB.java:o

[jira] [Commented] (HDFS-13761) Add toString Method to AclFeature Class

2018-07-24 Thread Xiao Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16555119#comment-16555119
 ] 

Xiao Chen commented on HDFS-13761:
--

Thanks for revving, Shweta. +1 on patch 2.
The failed test looks unrelated; this is a message update, so no unit test is 
needed. Committing this.

> Add toString Method to AclFeature Class
> ---
>
> Key: HDFS-13761
> URL: https://issues.apache.org/jira/browse/HDFS-13761
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Shweta
>Assignee: Shweta
>Priority: Minor
> Attachments: HDFS-13761.01.patch, HDFS-13761.02.patch
>
>







[jira] [Updated] (HDFS-13761) Add toString Method to AclFeature Class

2018-07-24 Thread Xiao Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-13761:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thank you, Shweta!

> Add toString Method to AclFeature Class
> ---
>
> Key: HDFS-13761
> URL: https://issues.apache.org/jira/browse/HDFS-13761
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Shweta
>Assignee: Shweta
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HDFS-13761.01.patch, HDFS-13761.02.patch
>
>







[jira] [Commented] (HDFS-12716) 'dfs.datanode.failed.volumes.tolerated' to support minimum number of volumes to be available

2018-07-24 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16555130#comment-16555130
 ] 

Brahma Reddy Battula commented on HDFS-12716:
-

Thanks for updating the patch. Apart from the following minor nits, the patch 
LGTM. Sorry for delaying the review.

i) Can you change "MAX_VOLUME_FAILURE_LIMIT" to 
"MAX_VOLUME_FAILURE_TOLERATED_LIMIT"? [~linyiqun], do you think the same?

ii) Can you change the following message in *DataNode.java#startDataNode*: 
"Value configured is either less than 0" to "Value configured is either 
greater than -1".

iii) Remove the following in *FsDatasetImpl.java#hasEnoughResource()*:

623 // OS behavior

>  'dfs.datanode.failed.volumes.tolerated' to support minimum number of volumes 
> to be available
> -
>
> Key: HDFS-12716
> URL: https://issues.apache.org/jira/browse/HDFS-12716
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: usharani
>Assignee: Ranith Sardar
>Priority: Major
> Attachments: HDFS-12716.002.patch, HDFS-12716.003.patch, 
> HDFS-12716.004.patch, HDFS-12716.005.patch, HDFS-12716.patch
>
>
>   Currently 'dfs.datanode.failed.volumes.tolerated' lets the number of 
> tolerated failed volumes be specified. Changing this configuration requires 
> a restart of the datanode. Since datanode volumes can be changed dynamically, 
> keeping this configuration the same for all datanodes may not be a good idea.
> Support 'dfs.datanode.failed.volumes.tolerated' accepting a special 
> negative value 'x' to tolerate failures of up to "n-x" volumes.






[jira] [Updated] (HDDS-203) Add getCommittedBlockLength API in datanode

2018-07-24 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-203:
-
Attachment: HDDS-203.06.patch

> Add getCommittedBlockLength API in datanode
> ---
>
> Key: HDDS-203
> URL: https://issues.apache.org/jira/browse/HDDS-203
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client, Ozone Datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-203.00.patch, HDDS-203.01.patch, HDDS-203.02.patch, 
> HDDS-203.03.patch, HDDS-203.04.patch, HDDS-203.05.patch, HDDS-203.06.patch
>
>
> When a container gets closed on the Datanode while active writes are 
> happening from OzoneClient, client write requests will fail with 
> ContainerClosedException. In such a case, the Ozone client needs to query the 
> last committed block length from the DataNodes and update the OzoneMaster with 
> the updated length for the block. This Jira proposes to add an RPC call to 
> get the last committed length of a block on a Datanode.
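A hedged sketch of the client-side recovery flow this enables; all types below 
are illustrative stand-ins, not the committed API:

{code}
import java.io.IOException;

// Proposed RPC surface, as described above (names are placeholders).
interface DatanodeClient {
  /** Length of the last committed block on the Datanode. */
  long getCommittedBlockLength(long containerID, long localID)
      throws IOException;
}

interface OzoneMasterClient {
  void updateBlockLength(long containerID, long localID, long committedLength)
      throws IOException;
}

class ClosedContainerRecovery {
  /** On ContainerClosedException, reconcile the block length; the client
   *  can then continue writing into a block in an open container. */
  static void reconcile(DatanodeClient dn, OzoneMasterClient om,
      long containerID, long localID) throws IOException {
    long committed = dn.getCommittedBlockLength(containerID, localID);
    om.updateBlockLength(containerID, localID, committed);
  }
}
{code}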






[jira] [Commented] (HDDS-203) Add getCommittedBlockLength API in datanode

2018-07-24 Thread Shashikant Banerjee (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16555138#comment-16555138
 ] 

Shashikant Banerjee commented on HDDS-203:
--

Thanks [~msingh], [~szetszwo] for the review comments. Patch v6 addresses your 
review comments and also fixes the reported javadoc issues. The test failures 
are unrelated, as the tests pass on my local machine.

> Add getCommittedBlockLength API in datanode
> ---
>
> Key: HDDS-203
> URL: https://issues.apache.org/jira/browse/HDDS-203
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client, Ozone Datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-203.00.patch, HDDS-203.01.patch, HDDS-203.02.patch, 
> HDDS-203.03.patch, HDDS-203.04.patch, HDDS-203.05.patch, HDDS-203.06.patch
>
>
> When a container gets closed on the Datanode while active writes are 
> happening from the OzoneClient, client write requests will fail with 
> ContainerClosedException. In such a case, the Ozone client needs to query 
> the last committed block length from the datanodes and update the 
> OzoneMaster with that length for the block. This Jira proposes to add an 
> RPC call to get the last committed length of a block on a Datanode.






[jira] [Commented] (HDFS-13761) Add toString Method to AclFeature Class

2018-07-24 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16555139#comment-16555139
 ] 

Hudson commented on HDFS-13761:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14632 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14632/])
HDFS-13761. Add toString Method to AclFeature Class. Contributed by (xiao: rev 
26864471c24bf389ab8fc913decc3d069404688b)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/AclFeature.java
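
A plausible shape for the change, purely illustrative since only the commit's 
file list is shown here; the actual fields and output format are whatever 
AclFeature.java defines:

{code:java}
// Illustrative sketch: a typical toString() override for a feature class,
// summarizing identity and size rather than dumping every entry.
class AclFeatureSketch {
  private final int[] entries; // stand-in for the packed ACL entries

  AclFeatureSketch(int[] entries) {
    this.entries = entries;
  }

  @Override
  public String toString() {
    return "AclFeature : " + Integer.toHexString(hashCode())
        + ", entries: " + entries.length;
  }
}
{code}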


> Add toString Method to AclFeature Class
> ---
>
> Key: HDFS-13761
> URL: https://issues.apache.org/jira/browse/HDFS-13761
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Shweta
>Assignee: Shweta
>Priority: Minor
> Fix For: 3.2.0
>
> Attachments: HDFS-13761.01.patch, HDFS-13761.02.patch
>
>







[jira] [Created] (HDFS-13764) [DOC] update flag is not necessary to avoid verifying checksums

2018-07-24 Thread Yuexin Zhang (JIRA)
Yuexin Zhang created HDFS-13764:
---

 Summary: [DOC] update flag is not necessary to avoid verifying 
checksums
 Key: HDFS-13764
 URL: https://issues.apache.org/jira/browse/HDFS-13764
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.3
Reporter: Yuexin Zhang


We mention using the "-update" option to avoid checksum verification in the following doc:

[https://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-hdfs/TransparentEncryption.html#Copying_between_encrypted_and_unencrypted_locations]
{code:java}
// Copying between encrypted and unencrypted locations
By default, distcp compares checksums provided by the filesystem to verify that 
the data was successfully copied to the destination. When copying between an 
unencrypted and encrypted location, the filesystem checksums will not match 
since the underlying block data is different. In this case, specify the 
-skipcrccheck and -update distcp flags to avoid verifying checksums.
{code}
 

But actually, the "-update" option is not necessary; only "-skipcrccheck" is needed.

 






[jira] [Updated] (HDFS-13764) [DOC] update flag is not necessary to avoid verifying checksums

2018-07-24 Thread Yuexin Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuexin Zhang updated HDFS-13764:

Affects Version/s: 2.7.0

> [DOC] update flag is not necessary to avoid verifying checksums
> ---
>
> Key: HDFS-13764
> URL: https://issues.apache.org/jira/browse/HDFS-13764
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.0, 2.7.3
>Reporter: Yuexin Zhang
>Priority: Major
>
> We mention using the "-update" option to avoid checksum verification in the following doc:
> [https://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-hdfs/TransparentEncryption.html#Copying_between_encrypted_and_unencrypted_locations]
> {code:java}
> // Copying between encrypted and unencrypted locations
> By default, distcp compares checksums provided by the filesystem to verify 
> that the data was successfully copied to the destination. When copying 
> between an unencrypted and encrypted location, the filesystem checksums will 
> not match since the underlying block data is different. In this case, specify 
> the -skipcrccheck and -update distcp flags to avoid verifying checksums.
> {code}
>  
> But actually, the "-update" option is not necessary; only "-skipcrccheck" is 
> needed.
>  






[jira] [Updated] (HDFS-13764) [DOC] update flag is not necessary to avoid verifying checksums

2018-07-24 Thread Yuexin Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuexin Zhang updated HDFS-13764:

Description: 
We mention using the "-update" option to avoid checksum verification in the following doc:

[https://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-hdfs/TransparentEncryption.html#Copying_between_encrypted_and_unencrypted_locations]
{code:java}
Copying between encrypted and unencrypted locations
By default, distcp compares checksums provided by the filesystem to verify that 
the data was successfully copied to the destination. When copying between an 
unencrypted and encrypted location, the filesystem checksums will not match 
since the underlying block data is different. In this case, specify the 
-skipcrccheck and -update distcp flags to avoid verifying checksums.
{code}
 

But actually, the "-update" option is not necessary; only "-skipcrccheck" is 
needed. Can we change it to:

 
{code:java}
Copying between encrypted and unencrypted locations
By default, distcp compares checksums provided by the filesystem to verify that 
the data was successfully copied to the destination. When copying between an 
unencrypted and encrypted location, the filesystem checksums will not match 
since the underlying block data is different. In this case, specify the 
-skipcrccheck flag to avoid verifying checksums.
{code}
 

  was:
We mention using the "-update" option to avoid checksum verification in the following doc:

[https://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-hdfs/TransparentEncryption.html#Copying_between_encrypted_and_unencrypted_locations]
{code:java}
// Copying between encrypted and unencrypted locations
By default, distcp compares checksums provided by the filesystem to verify that 
the data was successfully copied to the destination. When copying between an 
unencrypted and encrypted location, the filesystem checksums will not match 
since the underlying block data is different. In this case, specify the 
-skipcrccheck and -update distcp flags to avoid verifying checksums.
{code}
 

But actually, the "-update" option is not necessary; only "-skipcrccheck" is needed.

 


> [DOC] update flag is not necessary to avoid verifying checksums
> ---
>
> Key: HDFS-13764
> URL: https://issues.apache.org/jira/browse/HDFS-13764
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.0, 2.7.3
>Reporter: Yuexin Zhang
>Priority: Major
>
> We mention using the "-update" option to avoid checksum verification in the following doc:
> [https://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-hdfs/TransparentEncryption.html#Copying_between_encrypted_and_unencrypted_locations]
> {code:java}
> Copying between encrypted and unencrypted locations
> By default, distcp compares checksums provided by the filesystem to verify 
> that the data was successfully copied to the destination. When copying 
> between an unencrypted and encrypted location, the filesystem checksums will 
> not match since the underlying block data is different. In this case, specify 
> the -skipcrccheck and -update distcp flags to avoid verifying checksums.
> {code}
>  
> But actually, the "-update" option is not necessary; only "-skipcrccheck" is 
> needed. Can we change it to:
>  
> {code:java}
> Copying between encrypted and unencrypted locations
> By default, distcp compares checksums provided by the filesystem to verify 
> that the data was successfully copied to the destination. When copying 
> between an unencrypted and encrypted location, the filesystem checksums will 
> not match since the underlying block data is different. In this case, specify 
> the -skipcrccheck flag to avoid verifying checksums.
> {code}
>  






[jira] [Updated] (HDFS-13764) [DOC] update flag is not necessary to avoid verifying checksums

2018-07-24 Thread Yuexin Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuexin Zhang updated HDFS-13764:

Attachment: HDFS-13764_DOC1.patch

> [DOC] update flag is not necessary to avoid verifying checksums
> ---
>
> Key: HDFS-13764
> URL: https://issues.apache.org/jira/browse/HDFS-13764
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.0, 2.7.3
>Reporter: Yuexin Zhang
>Priority: Major
> Attachments: HDFS-13764_DOC1.patch
>
>
> We mention using the "-update" option to avoid checksum verification in the following doc:
> [https://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-hdfs/TransparentEncryption.html#Copying_between_encrypted_and_unencrypted_locations]
> {code:java}
> Copying between encrypted and unencrypted locations
> By default, distcp compares checksums provided by the filesystem to verify 
> that the data was successfully copied to the destination. When copying 
> between an unencrypted and encrypted location, the filesystem checksums will 
> not match since the underlying block data is different. In this case, specify 
> the -skipcrccheck and -update distcp flags to avoid verifying checksums.
> {code}
>  
> But actually, the "-update" option is not necessary; only "-skipcrccheck" is 
> needed. Can we change it to:
>  
> {code:java}
> Copying between encrypted and unencrypted locations
> By default, distcp compares checksums provided by the filesystem to verify 
> that the data was successfully copied to the destination. When copying 
> between an unencrypted and encrypted location, the filesystem checksums will 
> not match since the underlying block data is different. In this case, specify 
> the -skipcrccheck flag to avoid verifying checksums.
> {code}
>  





