[jira] [Created] (HDFS-11588) Output Avro format in the offline editlog viewer

2017-03-28 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-11588:
-

 Summary: Output Avro format in the offline editlog viewer
 Key: HDFS-11588
 URL: https://issues.apache.org/jira/browse/HDFS-11588
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai


We have found it handy to import the edit logs into query engines (e.g., 
Hive / Presto) to understand how a cluster is used. Example questions include:

* The size of the data and the number of files written into a directory.
* The distribution of operations across different directories.
* The number of files created by each user.

The answers to these questions give insight into cluster usage and are 
valuable for capacity planning.

Importing the edit log into a query engine makes these questions both simpler 
and more efficient to answer.

While the Offline Editlog Viewer (OEV) can output edit logs in XML format, we 
found it time-consuming to transform that XML into formats that query engines 
recognize. In our environment it takes minutes to turn a 100MB editlog file 
into a corresponding Parquet file.

This jira proposes to extend the OEV to output Avro files to make this 
process efficient. Like the XML output, the Avro output is an internal tool 
format: it has certain pre-defined schemas but carries no backward 
compatibility guarantee.
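
For illustration, a minimal sketch of writing one edit-log operation as an 
Avro record with the plain Avro Java API; the schema fields ({{txid}}, 
{{opCode}}, {{path}}) and the output file name are hypothetical, not the 
schemas this jira would define:

{code}
import java.io.File;
import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;
import org.apache.avro.file.DataFileWriter;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;

public class EditLogAvroSketch {
  public static void main(String[] args) throws Exception {
    // Hypothetical record schema for a single edit-log operation.
    Schema schema = SchemaBuilder.record("EditLogOp").fields()
        .requiredLong("txid")
        .requiredString("opCode")
        .optionalString("path")
        .endRecord();

    try (DataFileWriter<GenericRecord> writer =
        new DataFileWriter<>(new GenericDatumWriter<GenericRecord>(schema))) {
      writer.create(schema, new File("editlog.avro"));
      GenericRecord op = new GenericData.Record(schema);
      op.put("txid", 1L);
      op.put("opCode", "OP_ADD");
      op.put("path", "/user/alice/file");
      writer.append(op);
    }
  }
}
{code}

Hive and Presto can both read Avro files directly, which is what makes this 
output format attractive compared to XML.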







--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11541) Call RawErasureEncoder and RawErasureDecoder release() methods

2017-03-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946596#comment-15946596
 ] 

Hudson commented on HDFS-11541:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11487 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11487/])
HDFS-11541. Call RawErasureEncoder and RawErasureDecoder release() (rakeshr: 
rev 84d787b9d51196010495d51dc5ebf66c01c340ab)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/StripedBlockChecksumReconstructor.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/StripedBlockReconstructor.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/erasurecode/StripedReconstructor.java


> Call RawErasureEncoder and RawErasureDecoder release() methods
> --
>
> Key: HDFS-11541
> URL: https://issues.apache.org/jira/browse/HDFS-11541
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha2
>Reporter: László Bence Nagy
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-nice-to-have
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-11541-01.patch, HDFS-11541-02.patch, 
> HDFS-11541-03.patch
>
>
> The *RawErasureEncoder* and *RawErasureDecoder* classes have _release()_ 
> methods which are not called from the source code. These methods should be 
> called when an encoding or decoding operation is finished so that the 
> dynamically allocated resources can be freed. Underlying native plugins can 
> also rely on these functions to release their resources.
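
For context, a minimal sketch of the usage pattern this change enables, 
assuming an RS(6,3) encoder obtained through {{CodecUtil}}; the buffer sizes 
are illustrative:

{code}
import java.nio.ByteBuffer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.erasurecode.CodecUtil;
import org.apache.hadoop.io.erasurecode.ErasureCoderOptions;
import org.apache.hadoop.io.erasurecode.rawcoder.RawErasureEncoder;

public class ReleaseSketch {
  public static void main(String[] args) throws Exception {
    ErasureCoderOptions options = new ErasureCoderOptions(6, 3);
    RawErasureEncoder encoder =
        CodecUtil.createRawEncoder(new Configuration(), "rs", options);

    ByteBuffer[] data = new ByteBuffer[6];
    ByteBuffer[] parity = new ByteBuffer[3];
    for (int i = 0; i < data.length; i++) {
      data[i] = ByteBuffer.allocateDirect(1024);
    }
    for (int i = 0; i < parity.length; i++) {
      parity[i] = ByteBuffer.allocateDirect(1024);
    }

    try {
      encoder.encode(data, parity);
    } finally {
      // The point of this JIRA: release() frees dynamically allocated
      // resources, which native coder implementations rely on.
      encoder.release();
    }
  }
}
{code}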






[jira] [Updated] (HDFS-11541) Call RawErasureEncoder and RawErasureDecoder release() methods

2017-03-28 Thread Rakesh R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh R updated HDFS-11541:

   Resolution: Fixed
Fix Version/s: 3.0.0-alpha3
   Status: Resolved  (was: Patch Available)

Thanks [~Lac21] for reporting this.

Committed to trunk.

> Call RawErasureEncoder and RawErasureDecoder release() methods
> --
>
> Key: HDFS-11541
> URL: https://issues.apache.org/jira/browse/HDFS-11541
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha2
>Reporter: László Bence Nagy
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-nice-to-have
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-11541-01.patch, HDFS-11541-02.patch, 
> HDFS-11541-03.patch
>
>
> The *RawErasureEncoder* and *RawErasureDecoder* classes have _release()_ 
> methods which are not called from the source code. These methods should be 
> called when an encoding or decoding operation is finished so that the 
> dynamically allocated resources can be freed. Underlying native plugins can 
> also rely on these functions to release their resources.






[jira] [Commented] (HDFS-11541) Call RawErasureEncoder and RawErasureDecoder release() methods

2017-03-28 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946559#comment-15946559
 ] 

Rakesh R commented on HDFS-11541:
-

Test case failures are unrelated to this patch.

+1, lgtm. I'll commit this shortly. Thanks [~Sammi] for the contribution.


> Call RawErasureEncoder and RawErasureDecoder release() methods
> --
>
> Key: HDFS-11541
> URL: https://issues.apache.org/jira/browse/HDFS-11541
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha2
>Reporter: László Bence Nagy
>Assignee: SammiChen
>  Labels: hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-11541-01.patch, HDFS-11541-02.patch, 
> HDFS-11541-03.patch
>
>
> The *RawErasureEncoder* and *RawErasureDecoder* classes have _release()_ 
> methods which are not called from the source code. These methods should be 
> called when an encoding or decoding operation is finished so that the 
> dynamically allocated resources can be freed. Underlying native plugins can 
> also rely on these functions to release their resources.






[jira] [Commented] (HDFS-10971) Distcp should not copy replication factor if source file is erasure coded

2017-03-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946551#comment-15946551
 ] 

Hudson commented on HDFS-10971:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11486 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11486/])
HDFS-10971. Distcp should not copy replication factor if source file is (wang: 
rev 0e6f8e4bc6642f90dc7b33848bfb1129ec20ee49)
* (edit) 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/util/TestDistCpUtils.java
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java
* (edit) 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestCopyListingFileStatus.java
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyListingFileStatus.java


> Distcp should not copy replication factor if source file is erasure coded
> -
>
> Key: HDFS-10971
> URL: https://issues.apache.org/jira/browse/HDFS-10971
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: distcp
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-10971.01.patch, HDFS-10971.02.patch, 
> HDFS-10971.testcase.patch
>
>
> The current erasure coding implementation uses the replication factor field 
> to store the erasure coding policy.
> Distcp copies the source file's replication factor to the destination if 
> {{-pr}} is specified. However, if the source file is EC, the replication 
> factor (which is really the EC policy) should not be replicated to the 
> destination file. When an HdfsFileStatus is converted to a FileStatus, the 
> replication factor is set to 0 if it is an EC file.
> In fact, I will attach a test case showing that trying to replicate the 
> replication factor of an EC file results in an IOException: "Requested 
> replication factor of 0 is less than the required minimum of 1 for 
> /tmp/dst/dest2"
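
For illustration, a hedged sketch of the kind of guard the fix needs (the 
helper and variable names are hypothetical, not the actual patch): since an 
EC file's FileStatus reports a replication factor of 0, skip preserving 
replication for such files instead of failing:

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PreserveReplicationSketch {
  // Hypothetical helper: only propagate a meaningful replication factor.
  static void maybePreserveReplication(FileSystem targetFS, Path target,
      FileStatus srcStatus) throws IOException {
    if (srcStatus.getReplication() > 0) {
      targetFS.setReplication(target, srcStatus.getReplication());
    }
  }
}
{code}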






[jira] [Updated] (HDFS-10971) Distcp should not copy replication factor if source file is erasure coded

2017-03-28 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10971:
---
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha3
   Status: Resolved  (was: Patch Available)

Committed to trunk, thanks again Manoj for the patch and Wei-chiu for 
identifying this issue originally!

> Distcp should not copy replication factor if source file is erasure coded
> -
>
> Key: HDFS-10971
> URL: https://issues.apache.org/jira/browse/HDFS-10971
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: distcp
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-10971.01.patch, HDFS-10971.02.patch, 
> HDFS-10971.testcase.patch
>
>
> The current erasure coding implementation uses the replication factor field 
> to store the erasure coding policy.
> Distcp copies the source file's replication factor to the destination if 
> {{-pr}} is specified. However, if the source file is EC, the replication 
> factor (which is really the EC policy) should not be replicated to the 
> destination file. When an HdfsFileStatus is converted to a FileStatus, the 
> replication factor is set to 0 if it is an EC file.
> In fact, I will attach a test case showing that trying to replicate the 
> replication factor of an EC file results in an IOException: "Requested 
> replication factor of 0 is less than the required minimum of 1 for 
> /tmp/dst/dest2"






[jira] [Commented] (HDFS-10971) Distcp should not copy replication factor if source file is erasure coded

2017-03-28 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946532#comment-15946532
 ] 

Andrew Wang commented on HDFS-10971:


LGTM, will commit this shortly. Thanks for working on this Manoj!

Want to file a follow-on JIRA for the cleanup of #getDefaultECPolicy and 
#enableDefaultECPolicy? Should be easy to track down usages in test code.

> Distcp should not copy replication factor if source file is erasure coded
> -
>
> Key: HDFS-10971
> URL: https://issues.apache.org/jira/browse/HDFS-10971
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: distcp
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-10971.01.patch, HDFS-10971.02.patch, 
> HDFS-10971.testcase.patch
>
>
> The current erasure coding implementation uses the replication factor field 
> to store the erasure coding policy.
> Distcp copies the source file's replication factor to the destination if 
> {{-pr}} is specified. However, if the source file is EC, the replication 
> factor (which is really the EC policy) should not be replicated to the 
> destination file. When an HdfsFileStatus is converted to a FileStatus, the 
> replication factor is set to 0 if it is an EC file.
> In fact, I will attach a test case showing that trying to replicate the 
> replication factor of an EC file results in an IOException: "Requested 
> replication factor of 0 is less than the required minimum of 1 for 
> /tmp/dst/dest2"






[jira] [Updated] (HDFS-9705) Refine the behaviour of getFileChecksum when length = 0

2017-03-28 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-9705:
--
   Resolution: Fixed
Fix Version/s: 2.8.1
   Status: Resolved  (was: Patch Available)

Thanks Sammi and Kai, committed to branch-2 and branch-2.8 :)

> Refine the behaviour of getFileChecksum when length = 0
> ---
>
> Key: HDFS-9705
> URL: https://issues.apache.org/jira/browse/HDFS-9705
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: SammiChen
>Priority: Minor
> Fix For: 3.0.0-alpha3, 2.8.1
>
> Attachments: HDFS-9705-branch-2.001.patch, 
> HDFS-9705-branch-2.002.patch, HDFS-9705-v1.patch, HDFS-9705-v2.patch, 
> HDFS-9705-v3.patch, HDFS-9705-v4.patch, HDFS-9705-v5.patch, 
> HDFS-9705-v6.patch, HDFS-9705-v7.patch
>
>
> {{FileSystem#getFileChecksum}} accepts a {{length}} parameter, and 0 is a 
> valid value. Currently it returns {{null}} when the length is 0, in the 
> following code block:
> {code}
> // compute file MD5
> final MD5Hash fileMD5 = MD5Hash.digest(md5out.getData());
> switch (crcType) {
> case CRC32:
>   return new MD5MD5CRC32GzipFileChecksum(bytesPerCRC,
>       crcPerBlock, fileMD5);
> case CRC32C:
>   return new MD5MD5CRC32CastagnoliFileChecksum(bytesPerCRC,
>       crcPerBlock, fileMD5);
> default:
>   // If there is no block allocated for the file,
>   // return one with the magic entry that matches what previous
>   // hdfs versions return.
>   if (locatedblocks.size() == 0) {
>     return new MD5MD5CRC32GzipFileChecksum(0, 0, fileMD5);
>   }
>   // we should never get here since the validity was checked
>   // when getCrcType() was called above.
>   return null;
> }
> {code}
> The comment says "we should never get here since the validity was checked", 
> but we do get here. Since we are using the MD5-MD5-X approach, empty content 
> is actually a valid case whose MD5 value is 
> {{d41d8cd98f00b204e9800998ecf8427e}}, so we suggest returning a reasonable 
> value other than null. At least some useful information, such as values from 
> the block checksum header, could then be seen in the returned value.
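
As a quick sanity check of the empty-content MD5 value quoted above (plain 
JDK, nothing HDFS-specific):

{code}
import java.security.MessageDigest;

public class EmptyMd5 {
  public static void main(String[] args) throws Exception {
    // MD5 over zero bytes of input.
    byte[] digest = MessageDigest.getInstance("MD5").digest(new byte[0]);
    StringBuilder hex = new StringBuilder();
    for (byte b : digest) {
      hex.append(String.format("%02x", b));
    }
    System.out.println(hex);  // prints d41d8cd98f00b204e9800998ecf8427e
  }
}
{code}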






[jira] [Commented] (HDFS-10881) Federation State Store Driver API

2017-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946484#comment-15946484
 ] 

Hadoop QA commented on HDFS-10881:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
28s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m  5s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}100m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestFsDatasetCache |
|   | hadoop.hdfs.TestBlockStoragePolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-10881 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860979/HDFS-10881-HDFS-10467-013.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 77a85d6c3c9f 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10467 / 1a8a170 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18881/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18881/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18881/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Federation State Store Driver API
> -
>
> Key: HDFS-10881
> URL: https://issues.apache.org/jira/browse/HDFS-10881

[jira] [Commented] (HDFS-10971) Distcp should not copy replication factor if source file is erasure coded

2017-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946461#comment-15946461
 ] 

Hadoop QA commented on HDFS-10971:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} hadoop-tools/hadoop-distcp: The patch generated 0 
new + 211 unchanged - 1 fixed = 211 total (was 212) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 
42s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-10971 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860983/HDFS-10971.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c14f3db40b3a 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 82fb9ce |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18883/testReport/ |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18883/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Distcp should not copy replication factor if source file is erasure coded
> -
>
> Key: HDFS-10971
> URL: https://issues.apache.org/jira/browse/HDFS-10971
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: distcp
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
>   

[jira] [Updated] (HDFS-10971) Distcp should not copy replication factor if source file is erasure coded

2017-03-28 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HDFS-10971:
--
Attachment: HDFS-10971.02.patch

Thanks for the review [~jojochuang] and [~andrew.wang]. Attaching the v02 
patch with the following comments addressed. Please take a look.

Code comments fixed as per the suggestion.

bq. It'd be good to add messages to the asserts as a form of documentation.
Added

bq. The test is also named "testPreserve..." whereas we might want to name it 
"testReplFactorNotPreserved..." or "...Ignored..." for clarity
Done. Test renamed to testReplFactorNotPreservedOnErasureCodedFile

bq. Consider doing a static import on the asserts to make them a little more 
concise
Done

bq. We test EC src and repl dest, should we also test repl src and EC dst? 
Also add EC to EC test with different EC policies for completeness?
Done


> Distcp should not copy replication factor if source file is erasure coded
> -
>
> Key: HDFS-10971
> URL: https://issues.apache.org/jira/browse/HDFS-10971
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: distcp
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-10971.01.patch, HDFS-10971.02.patch, 
> HDFS-10971.testcase.patch
>
>
> The current erasure coding implementation uses the replication factor field 
> to store the erasure coding policy.
> Distcp copies the source file's replication factor to the destination if 
> {{-pr}} is specified. However, if the source file is EC, the replication 
> factor (which is really the EC policy) should not be replicated to the 
> destination file. When an HdfsFileStatus is converted to a FileStatus, the 
> replication factor is set to 0 if it is an EC file.
> In fact, I will attach a test case showing that trying to replicate the 
> replication factor of an EC file results in an IOException: "Requested 
> replication factor of 0 is less than the required minimum of 1 for 
> /tmp/dst/dest2"






[jira] [Commented] (HDFS-11566) Ozone: Document missing metrics for container operations

2017-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946444#comment-15946444
 ] 

Hadoop QA commented on HDFS-11566:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 1s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11566 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860981/HDFS-11566-HDFS-7240.003.patch
 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux c439b082cf08 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 8617fda |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18882/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: Document missing metrics for container operations
> 
>
> Key: HDFS-11566
> URL: https://issues.apache.org/jira/browse/HDFS-11566
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation, ozone
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11566-HDFS-7240.001.patch, 
> HDFS-11566-HDFS-7240.002.patch, HDFS-11566-HDFS-7240.003.patch
>
>
> HDFS-11463 added a number of metrics for container operations, which can be 
> exported over JMX, but they haven't been documented in {{Metrics.md}}. 
> Documenting these metrics will be helpful for users.
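
For reference, a hedged sketch of how per-operation metrics are typically 
declared with Hadoop's metrics2 annotations; the metric names here are 
illustrative, and the real names from HDFS-11463 are what {{Metrics.md}} 
needs to list:

{code}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

@Metrics(about = "Container operation metrics", context = "dfs")
public class ContainerMetricsSketch {
  // Each @Metric field is exported over JMX once the source is
  // registered with the metrics system.
  @Metric("Number of create container operations")
  private MutableCounterLong numCreateContainer;

  @Metric("Number of read container operations")
  private MutableCounterLong numReadContainer;
}
{code}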






[jira] [Updated] (HDFS-11537) Block Storage : add cache layer

2017-03-28 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-11537:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks [~vagarychen] for the contribution. I've committed the patch to the 
feature branch.

> Block Storage : add cache layer
> ---
>
> Key: HDFS-11537
> URL: https://issues.apache.org/jira/browse/HDFS-11537
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Chen Liang
>Assignee: Chen Liang
> Attachments: HDFS-11537-HDFS-7240.004.patch, 
> HDFS-11537-HDFS-7240.005.patch, HDFS-11537-HDSF-7240.001.patch, 
> HDFS-11537-HDSF-7240.002.patch, HDFS-11537-HDSF-7240.003.patch
>
>
> This JIRA adds the cache layer. Specifically, it implements the cache 
> interface from HDFS-11361 and adds the code that actually talks to 
> containers. The upper layer can simply view the storage as a cache with a 
> put/get interface, while in the backend the puts and gets actually talk to 
> containers. This layer is critical to CBlock performance.
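
A hypothetical sketch of the cache abstraction described above; the interface 
and method names are illustrative, not the ones from HDFS-11361 or this 
patch:

{code}
import java.io.IOException;

public interface BlockCacheSketch {
  /** Upper layer writes a block; the backend persists it to a container. */
  void put(String blockKey, byte[] data) throws IOException;

  /** Upper layer reads a block; the backend fetches it from a container. */
  byte[] get(String blockKey) throws IOException;
}
{code}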






[jira] [Updated] (HDFS-11566) Ozone: Document missing metrics for container operations

2017-03-28 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11566:
-
Attachment: HDFS-11566-HDFS-7240.003.patch

Thanks [~anu] for the quick review. The comment makes sense to me. Attaching 
the new patch.

> Ozone: Document missing metrics for container operations
> 
>
> Key: HDFS-11566
> URL: https://issues.apache.org/jira/browse/HDFS-11566
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation, ozone
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11566-HDFS-7240.001.patch, 
> HDFS-11566-HDFS-7240.002.patch, HDFS-11566-HDFS-7240.003.patch
>
>
> HDFS-11463 added a number of metrics for container operations, which can be 
> exported over JMX, but they haven't been documented in {{Metrics.md}}. 
> Documenting these metrics will be helpful for users.






[jira] [Commented] (HDFS-11546) Federation Router RPC server

2017-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946424#comment-15946424
 ] 

Hadoop QA commented on HDFS-11546:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
12s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
53s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 37s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 10 new + 403 unchanged - 0 fixed = 413 total (was 403) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 47s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}100m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.federation.router.TestRouterRpc |
|   | hadoop.hdfs.server.namenode.TestFileTruncate |
|   | hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination |
|   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11546 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860969/HDFS-11546-HDFS-10467-000.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux cf21550258c5 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10467 / 1a8a170 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18880/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18880/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18880

[jira] [Updated] (HDFS-10881) Federation State Store Driver API

2017-03-28 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HDFS-10881:
---
Attachment: HDFS-10881-HDFS-10467-013.patch

Addressed some of [~chris.douglas]'s comments:
* Reused the creation time for {{BaseRecord}}
* Fixed javadoc for {{remove(T)}}
* Added annotations to {{StateStoreRecordOperations}}


> Federation State Store Driver API
> -
>
> Key: HDFS-10881
> URL: https://issues.apache.org/jira/browse/HDFS-10881
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Jason Kace
>Assignee: Jason Kace
> Attachments: HDFS-10881-HDFS-10467-001.patch, 
> HDFS-10881-HDFS-10467-002.patch, HDFS-10881-HDFS-10467-003.patch, 
> HDFS-10881-HDFS-10467-004.patch, HDFS-10881-HDFS-10467-005.patch, 
> HDFS-10881-HDFS-10467-006.patch, HDFS-10881-HDFS-10467-007.patch, 
> HDFS-10881-HDFS-10467-008.patch, HDFS-10881-HDFS-10467-009.patch, 
> HDFS-10881-HDFS-10467-010.patch, HDFS-10881-HDFS-10467-011.patch, 
> HDFS-10881-HDFS-10467-012.patch, HDFS-10881-HDFS-10467-013.patch
>
>
> The API interfaces and minimal classes required to support a state store data 
> backend such as ZooKeeper or a file system.
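
A hypothetical sketch of the shape such a driver API could take; the names 
are illustrative (only {{BaseRecord}} and {{remove}} appear in the comments 
above):

{code}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.conf.Configuration;

// Hypothetical driver shape; implementations might back onto ZooKeeper
// or a file system, as the description suggests.
public interface StateStoreDriverSketch {
  boolean init(Configuration conf);
  <T> List<T> get(Class<T> recordClass) throws IOException;
  <T> boolean put(T record) throws IOException;
  <T> boolean remove(T record) throws IOException;
}
{code}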






[jira] [Commented] (HDFS-10881) Federation State Store Driver API

2017-03-28 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946405#comment-15946405
 ] 

Inigo Goiri commented on HDFS-10881:


[~subru], it would be good if you could take a look at the interfaces.

> Federation State Store Driver API
> -
>
> Key: HDFS-10881
> URL: https://issues.apache.org/jira/browse/HDFS-10881
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Jason Kace
>Assignee: Jason Kace
> Attachments: HDFS-10881-HDFS-10467-001.patch, 
> HDFS-10881-HDFS-10467-002.patch, HDFS-10881-HDFS-10467-003.patch, 
> HDFS-10881-HDFS-10467-004.patch, HDFS-10881-HDFS-10467-005.patch, 
> HDFS-10881-HDFS-10467-006.patch, HDFS-10881-HDFS-10467-007.patch, 
> HDFS-10881-HDFS-10467-008.patch, HDFS-10881-HDFS-10467-009.patch, 
> HDFS-10881-HDFS-10467-010.patch, HDFS-10881-HDFS-10467-011.patch, 
> HDFS-10881-HDFS-10467-012.patch, HDFS-10881-HDFS-10467-013.patch
>
>
> The API interfaces and minimal classes required to support a state store data 
> backend such as ZooKeeper or a file system.






[jira] [Commented] (HDFS-9705) Refine the behaviour of getFileChecksum when length = 0

2017-03-28 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946397#comment-15946397
 ] 

Kai Zheng commented on HDFS-9705:
-

LGTM too and +1. Thanks Andrew and Sammi.

> Refine the behaviour of getFileChecksum when length = 0
> ---
>
> Key: HDFS-9705
> URL: https://issues.apache.org/jira/browse/HDFS-9705
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: SammiChen
>Priority: Minor
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-9705-branch-2.001.patch, 
> HDFS-9705-branch-2.002.patch, HDFS-9705-v1.patch, HDFS-9705-v2.patch, 
> HDFS-9705-v3.patch, HDFS-9705-v4.patch, HDFS-9705-v5.patch, 
> HDFS-9705-v6.patch, HDFS-9705-v7.patch
>
>
> {{FileSystem#getFileChecksum}} accepts a {{length}} parameter, and 0 is a 
> valid value. Currently it returns {{null}} when the length is 0, in the 
> following code block:
> {code}
> // compute file MD5
> final MD5Hash fileMD5 = MD5Hash.digest(md5out.getData());
> switch (crcType) {
> case CRC32:
>   return new MD5MD5CRC32GzipFileChecksum(bytesPerCRC,
>       crcPerBlock, fileMD5);
> case CRC32C:
>   return new MD5MD5CRC32CastagnoliFileChecksum(bytesPerCRC,
>       crcPerBlock, fileMD5);
> default:
>   // If there is no block allocated for the file,
>   // return one with the magic entry that matches what previous
>   // hdfs versions return.
>   if (locatedblocks.size() == 0) {
>     return new MD5MD5CRC32GzipFileChecksum(0, 0, fileMD5);
>   }
>   // we should never get here since the validity was checked
>   // when getCrcType() was called above.
>   return null;
> }
> {code}
> The comment says "we should never get here since the validity was checked", 
> but we do get here. Since we are using the MD5-MD5-X approach, empty content 
> is actually a valid case whose MD5 value is 
> {{d41d8cd98f00b204e9800998ecf8427e}}, so we suggest returning a reasonable 
> value other than null. At least some useful information, such as values from 
> the block checksum header, could then be seen in the returned value.






[jira] [Updated] (HDFS-9651) All web UIs should include a robots.txt file

2017-03-28 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-9651:

Priority: Minor  (was: Trivial)

> All web UIs should include a robots.txt file
> 
>
> Key: HDFS-9651
> URL: https://issues.apache.org/jira/browse/HDFS-9651
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lars Francke
>Assignee: Lars Francke
>Priority: Minor
> Attachments: HDFS-9651.1.patch, HDFS-9651.2.patch
>
>
> Similar to HDFS-330, so that public UIs don't get crawled.
> I can provide a patch that includes a simple robots.txt. An alternative is 
> probably a Filter that provides one automatically for all UIs, but I don't 
> have time to do that.
> If anyone wants to take over, please go ahead.
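
For reference, the conventional robots.txt that blocks all well-behaved 
crawlers from every path:

{code}
User-agent: *
Disallow: /
{code}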






[jira] [Commented] (HDFS-10881) Federation State Store Driver API

2017-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946382#comment-15946382
 ] 

Hadoop QA commented on HDFS-10881:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
27s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 37s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}105m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-10881 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860962/HDFS-10881-HDFS-10467-012.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a7fd95501ae2 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10467 / 1a8a170 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18878/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18878/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18878/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Federation State Store Driver API
> -
>
> Key: HDFS-10881
> URL: https://issues.apache.org/jira/browse/HDFS-10881
> Project: Hadoop HDFS
> 

[jira] [Commented] (HDFS-9705) Refine the behaviour of getFileChecksum when length = 0

2017-03-28 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946346#comment-15946346
 ] 

Andrew Wang commented on HDFS-9705:
---

LGTM, Kai do you want to review too?

> Refine the behaviour of getFileChecksum when length = 0
> ---
>
> Key: HDFS-9705
> URL: https://issues.apache.org/jira/browse/HDFS-9705
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: SammiChen
>Priority: Minor
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-9705-branch-2.001.patch, 
> HDFS-9705-branch-2.002.patch, HDFS-9705-v1.patch, HDFS-9705-v2.patch, 
> HDFS-9705-v3.patch, HDFS-9705-v4.patch, HDFS-9705-v5.patch, 
> HDFS-9705-v6.patch, HDFS-9705-v7.patch
>
>
> {{FileSystem#getFileChecksum}} accepts a {{length}} parameter, and 0 is a 
> valid value. Currently it returns {{null}} when the length is 0, in the 
> following code block:
> {code}
> // compute file MD5
> final MD5Hash fileMD5 = MD5Hash.digest(md5out.getData());
> switch (crcType) {
> case CRC32:
>   return new MD5MD5CRC32GzipFileChecksum(bytesPerCRC,
>       crcPerBlock, fileMD5);
> case CRC32C:
>   return new MD5MD5CRC32CastagnoliFileChecksum(bytesPerCRC,
>       crcPerBlock, fileMD5);
> default:
>   // If there is no block allocated for the file,
>   // return one with the magic entry that matches what previous
>   // hdfs versions return.
>   if (locatedblocks.size() == 0) {
>     return new MD5MD5CRC32GzipFileChecksum(0, 0, fileMD5);
>   }
>   // we should never get here since the validity was checked
>   // when getCrcType() was called above.
>   return null;
> }
> {code}
> The comment says "we should never get here since the validity was checked", 
> but we do get here. Since we are using the MD5-MD5-X approach, empty content 
> is actually a valid case whose MD5 value is 
> {{d41d8cd98f00b204e9800998ecf8427e}}, so we suggest returning a reasonable 
> value other than null. At least some useful information, such as values from 
> the block checksum header, could then be seen in the returned value.






[jira] [Commented] (HDFS-10675) Datanode support to read from external stores.

2017-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946323#comment-15946323
 ] 

Hadoop QA commented on HDFS-10675:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
52s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
13s{color} | {color:green} HDFS-9806 passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 13m  
8s{color} | {color:red} root in HDFS-9806 failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
28s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
55s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
 6s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m  
1s{color} | {color:green} HDFS-9806 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
39s{color} | {color:green} HDFS-9806 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
25s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 12m 
58s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 12m 58s{color} | 
{color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 12m 58s{color} 
| {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 20s{color} | {color:orange} root: The patch generated 23 new + 1075 
unchanged - 6 fixed = 1098 total (was 1081) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
14s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 47s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
38s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}159m 15s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}272m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |
|   | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.blockmanagement.TestReplicationPolicy |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
| Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 |
|   | org.apache.hadoop.hdfs.server.b

[jira] [Commented] (HDFS-11302) Improve Logging for SSLHostnameVerifier

2017-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946312#comment-15946312
 ] 

Hadoop QA commented on HDFS-11302:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
39s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
15s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 44s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 10 new + 287 unchanged - 1 fixed = 297 total (was 288) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
18s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11302 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12846694/HDFS-11302.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux aeb560d43e58 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 063b513 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18877/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18877/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18877/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Improve Logging for SSLHostnameVerifier
> ---
>
> Key: HDFS-11302
> URL: https://issues.apache.org/jira/browse/HDFS-11302
> Project: Hadoop HDFS
>  Issu

[jira] [Updated] (HDFS-11546) Federation Router RPC server

2017-03-28 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HDFS-11546:
---
 Assignee: Inigo Goiri
Affects Version/s: HDFS-10467
   Status: Patch Available  (was: Open)

> Federation Router RPC server
> 
>
> Key: HDFS-11546
> URL: https://issues.apache.org/jira/browse/HDFS-11546
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Affects Versions: HDFS-10467
>Reporter: Inigo Goiri
>Assignee: Inigo Goiri
> Attachments: HDFS-11546-HDFS-10467-000.patch
>
>
> RPC server side of the Federation Router implements ClientProtocol.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11546) Federation Router RPC server

2017-03-28 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HDFS-11546:
---
Attachment: HDFS-11546-HDFS-10467-000.patch

Adding Router RPC server and client to the Federation HDFS Router.

> Federation Router RPC server
> 
>
> Key: HDFS-11546
> URL: https://issues.apache.org/jira/browse/HDFS-11546
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Inigo Goiri
> Attachments: HDFS-11546-HDFS-10467-000.patch
>
>
> RPC server side of the Federation Router implements ClientProtocol.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11529) libHDFS still does not return appropriate error information in many cases

2017-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946270#comment-15946270
 ] 

Hadoop QA commented on HDFS-11529:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
35s{color} | {color:green} hadoop-hdfs-native-client in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11529 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860961/HDFS-11529.002.patch |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 978f86eb03d7 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 063b513 |
| Default Java | 1.8.0_121 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18879/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18879/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> libHDFS still does not return appropriate error information in many cases
> -
>
> Key: HDFS-11529
> URL: https://issues.apache.org/jira/browse/HDFS-11529
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 2.6.0
>Reporter: Sailesh Mukil
>Assignee: Sailesh Mukil
>Priority: Critical
>  Labels: errorhandling, libhdfs
> Attachments: HDFS-11529.000.patch, HDFS-11529.001.patch, 
> HDFS-11529.002.patch
>
>
> libHDFS uses a table to compare exceptions against and returns a 
> corresponding error code to the application in case of an error.
> However, this table is populated manually and is often forgotten when new 
> exceptions are added.
> This causes libHDFS to return EINTERNAL (or Unknown Error(255)) whenever 
> these exceptions are hit. These are some examples of exceptions that have 
> been observed on an Error(255):
> org.apache.hadoop.ipc.StandbyException (Operation category WRITE is not 
> supported in state standby)
> java.io.EOFExcept

[jira] [Commented] (HDFS-9651) All web UIs should include a robots.txt file

2017-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946248#comment-15946248
 ] 

Hadoop QA commented on HDFS-9651:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 81m 23s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-9651 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12786766/HDFS-9651.2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 650ab4d1d6f7 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 01aca54 |
| Default Java | 1.8.0_121 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18876/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18876/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18876/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> All web UIs should include a robots.txt file
> 
>
> Key: HDFS-9651
> URL: https://issues.apache.org/jira/browse/HDFS-9651
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lars Francke
>Assignee: Lars Francke
>Priority: Trivial
> Attachments: HDFS-9651.1.patch, HDFS-9651.2.patch
>
>
> Similar to HDFS-330. So that public UIs don't get crawled.
> I can provide a patch that includes a simple robots.txt. Another alternative 
> is probably a Filter that provides one automat

[jira] [Commented] (HDFS-10881) Federation State Store Driver API

2017-03-28 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946228#comment-15946228
 ] 

Chris Douglas commented on HDFS-10881:
--

* If the create/mod time needs to be set, can it at least use the same 
timestamp? That call can be expensive.
* If {{remove(T)}} is called on a record that isn't in the state store, is the 
result successful? Please update the javadoc.
* If an implementation should maintain invariant properties (atomicity, 
idempotence, etc.) for these calls, it might be worth calling those out. For 
example, if {{putAll}} fails partway through, should the state store guarantee 
rollback, or does the client just skip collisions? I don't know if the 
annotations added in HDFS-4974 are still used, or whether they apply to the 
state store implementations this supports.
** The API is very generic. In context with the implementation, it may be 
possible to simplify it.
* Walking the type hierarchy in {{StateStoreUtils#getRecordClass}} by classname 
is unfortunate. Since it'd be cross-cutting across subsequent patches, we can 
revisit it later.

+1 for committing to the branch

> Federation State Store Driver API
> -
>
> Key: HDFS-10881
> URL: https://issues.apache.org/jira/browse/HDFS-10881
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Jason Kace
>Assignee: Jason Kace
> Attachments: HDFS-10881-HDFS-10467-001.patch, 
> HDFS-10881-HDFS-10467-002.patch, HDFS-10881-HDFS-10467-003.patch, 
> HDFS-10881-HDFS-10467-004.patch, HDFS-10881-HDFS-10467-005.patch, 
> HDFS-10881-HDFS-10467-006.patch, HDFS-10881-HDFS-10467-007.patch, 
> HDFS-10881-HDFS-10467-008.patch, HDFS-10881-HDFS-10467-009.patch, 
> HDFS-10881-HDFS-10467-010.patch, HDFS-10881-HDFS-10467-011.patch, 
> HDFS-10881-HDFS-10467-012.patch
>
>
> The API interfaces and minimal classes required to support a state store data 
> backend such as ZooKeeper or a file system.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11529) libHDFS still does not return appropriate error information in many cases

2017-03-28 Thread Sailesh Mukil (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946190#comment-15946190
 ] 

Sailesh Mukil commented on HDFS-11529:
--

{quote}
Nice improvement.
{quote}
Thanks for the review [~cmccabe]!

{quote}
printExceptionAndFreeV: This function is intended to print exceptions and then 
free them. If you are overloading it to set thread-local data, you should 
change the name to reflect that. Something like handleExceptionAndFree would 
work. You also need to document this information in the function doxygen, found 
in exception.h.
{quote}
Done. I've renamed all the printException*() functions to handleException*().

{quote}
It seems to me that the thread-local exception should be set regardless of 
whether noPrint is true or not. noPrint was intended to avoid spammy logging 
for things we expected to happen, but not to skip setting the error return. The 
thread-local storage is essentially an out-of-band way of returning more error 
data, so I don't see why it should be affected by noPrint.
{quote}
Yes, you're right. Done.

{quote}
You need to document what a NULL return means here.
{quote}
Done.

{quote}
getJNIEnv should free and zero out these thread-local pointers. Otherwise the 
exception text from one call may bleed into another, since there are still some 
code paths that don't set the thread-local error status.
{quote}
Yes, I've added code to do that now.

{quote}
It is not related to your patch, but I just noticed that hdfsGetHosts doesn't 
set errno on failure. Do you mind fixing that?
{quote}
Done.
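
For readers following the thread: the mechanism under discussion is a 
per-thread "last error" slot that the exception-handling path fills and that 
the proposed hdfsGetLastException() reads back. The actual patch is C code in 
libhdfs, but the pattern is the same as Java's ThreadLocal; a minimal 
illustrative sketch with hypothetical names, not the patch itself:
{code:java}
// Hypothetical Java illustration of the thread-local last-error pattern the
// patch applies in libhdfs (the real change uses C thread-local storage).
public final class LastError {
  private static final ThreadLocal<String> LAST_EXCEPTION = new ThreadLocal<>();

  // Called from the exception-handling path; records the error regardless of
  // whether printing is suppressed (noPrint), since this is out-of-band data.
  static void record(Throwable t) {
    LAST_EXCEPTION.set(t.toString());
  }

  // Analogue of the proposed hdfsGetLastException(): a null return means no
  // error has been recorded on this thread since the last clear().
  public static String getLastException() {
    return LAST_EXCEPTION.get();
  }

  // Analogue of getJNIEnv() zeroing stale state so one call's error text does
  // not bleed into the next call on the same thread.
  static void clear() {
    LAST_EXCEPTION.remove();
  }
}
{code}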

> libHDFS still does not return appropriate error information in many cases
> -
>
> Key: HDFS-11529
> URL: https://issues.apache.org/jira/browse/HDFS-11529
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 2.6.0
>Reporter: Sailesh Mukil
>Assignee: Sailesh Mukil
>Priority: Critical
>  Labels: errorhandling, libhdfs
> Attachments: HDFS-11529.000.patch, HDFS-11529.001.patch, 
> HDFS-11529.002.patch
>
>
> libHDFS uses a table to compare exceptions against and returns a 
> corresponding error code to the application in case of an error.
> However, this table is populated manually and is often forgotten when new 
> exceptions are added.
> This causes libHDFS to return EINTERNAL (or Unknown Error(255)) whenever 
> these exceptions are hit. These are some examples of exceptions that have 
> been observed on an Error(255):
> org.apache.hadoop.ipc.StandbyException (Operation category WRITE is not 
> supported in state standby)
> java.io.EOFException: Cannot seek after EOF
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> It is of course not possible to have an error code for each and every type of 
> exception, so one suggestion of how this can be addressed is by having a call 
> such as hdfsGetLastException() that would return the last exception that a 
> libHDFS thread encountered. This way, an application may choose to call 
> hdfsGetLastException() if it receives EINTERNAL.
> We can make use of the Thread Local Storage to store this information. Also, 
> this makes sure that the current functionality is preserved.
> This is a follow up from HDFS-4997.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10881) Federation State Store Driver API

2017-03-28 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HDFS-10881:
---
Attachment: HDFS-10881-HDFS-10467-012.patch

Tackling [~chris.douglas]'s comments:
* Moved {{updateOrCreate()}} to {{put()}}
* Fields in {{QueryResult}} are now final
* We need to set the creation and mod time for all the records; doing it in 
{{BaseRecord#initDefaultTimes}} saves some work
* Removed {{BaseRecord#isPrimaryKeyFolder}}
* For the delete (now {{remove()}}), the semantics are as follows (see the 
sketch below):
** Removing a particular record succeeds just for that record
** Removing with a query returns how many records it removed
** Removing all succeeds only if it's able to remove all of them
* {{getPrimaryKeys}} moved to a regular {{Map}}
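
A minimal sketch of the resulting driver contract (illustrative only; names, 
generics, and the query type may differ from the actual HDFS-10881 patch):
{code:java}
import java.io.IOException;
import java.util.Map;

// Sketch of the put()/remove() semantics listed above, not the committed API.
public interface StateStoreDriver {
  // Upsert: insert the record, or overwrite the existing one (this replaces
  // the old updateOrCreate()).
  <T extends BaseRecord> boolean put(T record) throws IOException;

  // Removing a particular record succeeds just for that record.
  <T extends BaseRecord> boolean remove(T record) throws IOException;

  // Removing by query returns how many records were removed.
  <T extends BaseRecord> int remove(Class<T> clazz, Map<String, String> query)
      throws IOException;

  // Removing all records of a type succeeds only if all of them are removed.
  <T extends BaseRecord> boolean removeAll(Class<T> clazz) throws IOException;
}
{code}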

> Federation State Store Driver API
> -
>
> Key: HDFS-10881
> URL: https://issues.apache.org/jira/browse/HDFS-10881
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Jason Kace
>Assignee: Jason Kace
> Attachments: HDFS-10881-HDFS-10467-001.patch, 
> HDFS-10881-HDFS-10467-002.patch, HDFS-10881-HDFS-10467-003.patch, 
> HDFS-10881-HDFS-10467-004.patch, HDFS-10881-HDFS-10467-005.patch, 
> HDFS-10881-HDFS-10467-006.patch, HDFS-10881-HDFS-10467-007.patch, 
> HDFS-10881-HDFS-10467-008.patch, HDFS-10881-HDFS-10467-009.patch, 
> HDFS-10881-HDFS-10467-010.patch, HDFS-10881-HDFS-10467-011.patch, 
> HDFS-10881-HDFS-10467-012.patch
>
>
> The API interfaces and minimal classes required to support a state store data 
> backend such as ZooKeeper or a file system.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11557) Empty directories may be recursively deleted without being listable

2017-03-28 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946167#comment-15946167
 ] 

Chen Liang commented on HDFS-11557:
---

Thanks [~dmtucker] for showing the steps! Very helpful; I did manage to 
reproduce the issue by following them.

After checking the code, I think this is because the code is written such that, 
if the directory to be deleted is empty, the permission check is (again) 
bypassed. After I created a file under the directory, it prevented me from 
deleting the directory:
{code}
$ ./bin/hdfs dfs -ls /test/dir
ls: Permission denied: user=someone2, access=READ_EXECUTE, 
inode="/test/dir":someone2:supergroup:d-w--w--w-
$ ./bin/hdfs dfs -chmod 777 /test/dir
$ ./bin/hdfs dfs -touchz /test/dir/file
$ ./bin/hdfs dfs -chmod 222 /test/dir
$ ./bin/hdfs dfs -ls /test/dir
ls: Permission denied: user=someone2, access=READ_EXECUTE, 
inode="/test/dir":someone2:supergroup:d-w--w--w-
$ ./bin/hdfs dfs -rm -r /test/dir
rm: Permission denied: user=someone2, access=ALL, 
inode="/test/dir":someone2:supergroup:d-w--w--w-
{code}

> Empty directories may be recursively deleted without being listable
> ---
>
> Key: HDFS-11557
> URL: https://issues.apache.org/jira/browse/HDFS-11557
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.3
>Reporter: David Tucker
>Assignee: Chen Liang
>
> To reproduce, create a directory without read and/or execute permissions 
> (i.e. 0666, 0333, or 0222), then call delete on it with can_recurse=True. 
> Note that the delete succeeds even though the client is unable to check for 
> emptiness and, therefore, cannot otherwise know that any/all children are 
> deletable.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11402) HDFS Snapshots should capture point-in-time copies of OPEN files

2017-03-28 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946165#comment-15946165
 ] 

Yongjun Zhang commented on HDFS-11402:
--

Hi [~manojg],

Thanks a lot for your work here and very sorry for my delayed review.

The patch looks largely good to me. Below are some comments, mostly cosmetic. 

1. We can put the parameters leaseManager and freezeOpenFiles together in the 
API signatures, since they are used together for an optional feature. For 
example, in INodeDirectory
{code}
public Snapshot addSnapshot(final LeaseManager leaseManager,
  int id, String name, boolean freezeOpenFiles)
{code}
we can change it to
{code}
public Snapshot addSnapshot(int id, String name,
  final LeaseManager leaseManager,
  final boolean freezeOpenFiles)
{code}
2. Share common code in the two {{INodesInPath$fromINode}} methods.
3. Change the method name {{isAncestor}} to {{isDescendent}} in 
{{INodesInPath}}.
4. In {{LeaseManager}}:
* INODE_FILTER_WORKER_COUNT is only used in a single method; it's better not to 
define it as public, and maybe we can just move it into that method.
* Change {{getINodeWithLeases(final INodeDirectory restrictFilesFromDir)}}
to {{getINodesWithLease(final INodeDirectory ancestorDir)}},
and javadoc the behavior when ancestorDir is null or not null.
* Optionally, just use the above COUNT as a cap and have a way to dynamically 
decide how big the thread pool is, especially when the number of files open 
for write is small. This can be considered in the future when needed.
* Add a private method (like {{getINodesInLease}}, sketched after this list) 
to wrap 
{code}
   synchronized (this) {
  inodes = new INode[leasesById.size()];
  for (long inodeId : leasesById.keySet()) {
inodes[inodeCount] = fsnamesystem.getFSDirectory().getInode(inodeId);
inodeCount++;
  }
}
{code}

5. In hdfs-default.xml, add a note describing that the file length captured in 
a snapshot is what's recorded in the NameNode; it may be shorter than what the 
client has written. To capture the length the client has written, the client 
needs to call hflush/hsync on the file.
6. I suggest adding a test for snapshot diff.
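
For item 4, a minimal sketch of the suggested wrapper (hypothetical; the name 
follows the suggestion above, not necessarily the committed code):
{code:java}
// Sketch of the getINodesInLease() wrapper suggested in item 4: it isolates
// the synchronized copy of the lease-holding inodes into one private method.
private INode[] getINodesInLease() {
  int inodeCount = 0;
  final INode[] inodes;
  synchronized (this) {
    inodes = new INode[leasesById.size()];
    for (long inodeId : leasesById.keySet()) {
      inodes[inodeCount] = fsnamesystem.getFSDirectory().getInode(inodeId);
      inodeCount++;
    }
  }
  return inodes;
}
{code}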

Hi [~jingzhao], I wonder if you could help with a review too. Much appreciated.

Thanks.





> HDFS Snapshots should capture point-in-time copies of OPEN files
> 
>
> Key: HDFS-11402
> URL: https://issues.apache.org/jira/browse/HDFS-11402
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Affects Versions: 2.6.0
>Reporter: Manoj Govindassamy
>Assignee: Manoj Govindassamy
> Attachments: HDFS-11402.01.patch, HDFS-11402.02.patch
>
>
> *Problem:*
> 1. When files are being written and HDFS Snapshots are taken in parallel, 
> Snapshots do capture all these files, but the files being written do not have 
> their point-in-time file length captured in the Snapshots. That is, these 
> open files are not frozen in HDFS Snapshots. These open files grow/shrink in 
> length, just like the original file, even after the snapshot 
> time.
> 2. At the time of File close or any other meta data modification operation on 
> these files, HDFS reconciles the file length and records the modification in 
> the last taken Snapshot. All the previously taken Snapshots continue to have 
> those open Files with no modification recorded. So, all those previous 
> snapshots end up using the final modification record in the last snapshot. 
> Thus, after the file close, file lengths in all those snapshots will end up 
> the same.
> Assume File1 is opened for write and a total of 1MB written to it. While the 
> writes are happening, snapshots are taken in parallel.
> {noformat}
> |---Time---T1---T2-T3T4-->
> |---Snap1--Snap2-Snap3--->
> |---File1.open---write-write---close->
> {noformat}
> Then at time,
> T2:
> Snap1.File1.length = 0
> T3:
> Snap1.File1.length = 0
> Snap2.File1.length = 0
> 
> T4:
> Snap1.File1.length = 1MB
> Snap2.File1.length = 1MB
> Snap3.File1.length = 1MB
> *Proposal*
> 1. At the time of taking Snapshot, {{SnapshotManager#createSnapshot}} can 
> optionally request {{DirectorySnapshottableFeature#addSnapshot}} to freeze 
> open files. 
> 2. {{DirectorySnapshottableFeature#addSnapshot}} can consult with 
> {{LeaseManager}} and get a list INodesInPath for all open files under the 
> snapshot dir. 
> 3. {{DirectorySnapshottableFeature#addSnapshot}} after the Snapshot creation, 
> Diff creation and updating modification time, can invoke 
> {{INodeFile#recordModification}} for each of the open files. This way, the 
> Snapshot just taken will have a {{FileDiff}} with {{fileSize}} captured for 
> each of the o

[jira] [Updated] (HDFS-11529) libHDFS still does not return appropriate error information in many cases

2017-03-28 Thread Sailesh Mukil (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sailesh Mukil updated HDFS-11529:
-
Status: Patch Available  (was: Open)

> libHDFS still does not return appropriate error information in many cases
> -
>
> Key: HDFS-11529
> URL: https://issues.apache.org/jira/browse/HDFS-11529
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 2.6.0
>Reporter: Sailesh Mukil
>Assignee: Sailesh Mukil
>Priority: Critical
>  Labels: errorhandling, libhdfs
> Attachments: HDFS-11529.000.patch, HDFS-11529.001.patch, 
> HDFS-11529.002.patch
>
>
> libHDFS uses a table to compare exceptions against and returns a 
> corresponding error code to the application in case of an error.
> However, this table is populated manually and is often forgotten when new 
> exceptions are added.
> This causes libHDFS to return EINTERNAL (or Unknown Error(255)) whenever 
> these exceptions are hit. These are some examples of exceptions that have 
> been observed on an Error(255):
> org.apache.hadoop.ipc.StandbyException (Operation category WRITE is not 
> supported in state standby)
> java.io.EOFException: Cannot seek after EOF
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> It is of course not possible to have an error code for each and every type of 
> exception, so one suggestion of how this can be addressed is by having a call 
> such as hdfsGetLastException() that would return the last exception that a 
> libHDFS thread encountered. This way, an application may choose to call 
> hdfsGetLastException() if it receives EINTERNAL.
> We can make use of the Thread Local Storage to store this information. Also, 
> this makes sure that the current functionality is preserved.
> This is a follow up from HDFS-4997.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10675) Datanode support to read from external stores.

2017-03-28 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946162#comment-15946162
 ] 

Chris Douglas commented on HDFS-10675:
--

+1 on committing to the HDFS-9806 branch (assuming Jenkins comes back clean)

> Datanode support to read from external stores. 
> ---
>
> Key: HDFS-10675
> URL: https://issues.apache.org/jira/browse/HDFS-10675
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-10675-HDFS-9806.001.patch, 
> HDFS-10675-HDFS-9806.002.patch, HDFS-10675-HDFS-9806.003.patch, 
> HDFS-10675-HDFS-9806.004.patch, HDFS-10675-HDFS-9806.005.patch, 
> HDFS-10675-HDFS-9806.006.patch, HDFS-10675-HDFS-9806.007.patch, 
> HDFS-10675-HDFS-9806.008.patch, HDFS-10675-HDFS-9806.009.patch
>
>
> This JIRA introduces a new {{PROVIDED}} {{StorageType}} to represent external 
> stores, along with enabling the Datanode to read from such stores using a 
> {{ProvidedReplica}} and a {{ProvidedVolume}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11529) libHDFS still does not return appropriate error information in many cases

2017-03-28 Thread Sailesh Mukil (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sailesh Mukil updated HDFS-11529:
-
Attachment: HDFS-11529.002.patch

> libHDFS still does not return appropriate error information in many cases
> -
>
> Key: HDFS-11529
> URL: https://issues.apache.org/jira/browse/HDFS-11529
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 2.6.0
>Reporter: Sailesh Mukil
>Assignee: Sailesh Mukil
>Priority: Critical
>  Labels: errorhandling, libhdfs
> Attachments: HDFS-11529.000.patch, HDFS-11529.001.patch, 
> HDFS-11529.002.patch
>
>
> libHDFS uses a table to compare exceptions against and returns a 
> corresponding error code to the application in case of an error.
> However, this table is populated manually and is often forgotten when new 
> exceptions are added.
> This causes libHDFS to return EINTERNAL (or Unknown Error(255)) whenever 
> these exceptions are hit. These are some examples of exceptions that have 
> been observed on an Error(255):
> org.apache.hadoop.ipc.StandbyException (Operation category WRITE is not 
> supported in state standby)
> java.io.EOFException: Cannot seek after EOF
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> It is of course not possible to have an error code for each and every type of 
> exception, so one suggestion of how this can be addressed is by having a call 
> such as hdfsGetLastException() that would return the last exception that a 
> libHDFS thread encountered. This way, an application may choose to call 
> hdfsGetLastException() if it receives EINTERNAL.
> We can make use of the Thread Local Storage to store this information. Also, 
> this makes sure that the current functionality is preserved.
> This is a follow up from HDFS-4997.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11529) libHDFS still does not return appropriate error information in many cases

2017-03-28 Thread Sailesh Mukil (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sailesh Mukil updated HDFS-11529:
-
Status: Open  (was: Patch Available)

> libHDFS still does not return appropriate error information in many cases
> -
>
> Key: HDFS-11529
> URL: https://issues.apache.org/jira/browse/HDFS-11529
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 2.6.0
>Reporter: Sailesh Mukil
>Assignee: Sailesh Mukil
>Priority: Critical
>  Labels: errorhandling, libhdfs
> Attachments: HDFS-11529.000.patch, HDFS-11529.001.patch, 
> HDFS-11529.002.patch
>
>
> libHDFS uses a table to compare exceptions against and returns a 
> corresponding error code to the application in case of an error.
> However, this table is populated manually and is often forgotten when new 
> exceptions are added.
> This causes libHDFS to return EINTERNAL (or Unknown Error(255)) whenever 
> these exceptions are hit. These are some examples of exceptions that have 
> been observed on an Error(255):
> org.apache.hadoop.ipc.StandbyException (Operation category WRITE is not 
> supported in state standby)
> java.io.EOFException: Cannot seek after EOF
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> It is of course not possible to have an error code for each and every type of 
> exception, so one suggestion of how this can be addressed is by having a call 
> such as hdfsGetLastException() that would return the last exception that a 
> libHDFS thread encountered. This way, an application may choose to call 
> hdfsGetLastException() if it receives EINTERNAL.
> We can make use of the Thread Local Storage to store this information. Also, 
> this makes sure that the current functionality is preserved.
> This is a follow up from HDFS-4997.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10971) Distcp should not copy replication factor if source file is erasure coded

2017-03-28 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946138#comment-15946138
 ] 

Andrew Wang commented on HDFS-10971:


This looks great. A couple of nitpicks:

{noformat}
170   // If the target directory is Erasure Coded, target FileSystem 
takes care
171   // of creating the erasure coded file where the file replication 
factor
172   // is not applicable and the passed in replication factor is 
ignored.
{noformat}

No need to capitalize "Erasure Coded". Also, to be a little more accurate, I'd 
say something like:

If there is an erasure coding policy set on the target directory, files will be 
written to the target directory using this EC policy. The replication factor of 
the source file is ignored and not preserved.

{noformat}
239 // Preserve the replication attribute only when both source and
240 // destination files are not Erasure Coded.
{noformat}

I'd expand this comment a bit to cover all the conditions in the if statement. 
You could also restate the current comment as "The replication factor can only 
be preserved for replicated files. It is ignored when either the source or 
target file is erasure coded."

Finally, in the new unit test:
* It'd be good to add messages to the asserts as a form of documentation.
* The test is named "testPreserve..." whereas we might want to name it 
"testReplFactorNotPreserved..." or "...Ignored..." for clarity.
* Consider doing a static import on the asserts to make them a little more 
concise.
* We test an EC source and a replicated destination; should we also test a 
replicated source and an EC destination? Also, should we add an EC-to-EC test 
with different EC policies for completeness?
* We added StripedFileTestUtil#getDefaultECPolicy; maybe we should also add a 
helper function like DFSTestUtil#enableAllECPolicies, but as 
#enableDefaultECPolicy? Perhaps I should have originally put 
#enableAllECPolicies in StripedFileTestUtil as well. This could also be a 
cleanup JIRA for later.
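
For readers following along, a minimal sketch of the guard described above 
(illustrative only; it assumes DistCp's {{FileAttribute}} enum and 
{{FileStatus#isErasureCoded()}} from Hadoop 3, and is not the actual patch):
{code:java}
import java.util.EnumSet;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.tools.DistCpOptions.FileAttribute;

// Illustrative only: the replication factor is preserved solely when it was
// requested and neither the source nor the target file is erasure coded.
private static boolean shouldPreserveReplication(
    EnumSet<FileAttribute> attributes, FileStatus src, FileStatus dst) {
  return attributes.contains(FileAttribute.REPLICATION)
      && !src.isErasureCoded()
      && !dst.isErasureCoded();
}
{code}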

> Distcp should not copy replication factor if source file is erasure coded
> -
>
> Key: HDFS-10971
> URL: https://issues.apache.org/jira/browse/HDFS-10971
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: distcp
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-10971.01.patch, HDFS-10971.testcase.patch
>
>
> The current erasure coding implementation uses replication factor field to 
> store erasure coding policy.
> Distcp copies the source file's replication factor to the destination if 
> {{-pr}} is specified. However, if the source file is EC, the replication 
> factor (which is EC policy) should not be replicated to the destination file. 
> When a HdfsFileStatus is converted to FileStatus, the replication factor is 
> set to 0 if it's an EC file.
> In fact, I will attach a test case that shows trying to replicate the 
> replication factor of an EC file results in an IOException: "Requested 
> replication factor of 0 is less than the required minimum of 1 for 
> /tmp/dst/dest2"



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-11010) Update to Protobuf 3 in trunk

2017-03-28 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDFS-11010.
-
Resolution: Duplicate

Let us keep all further discussion in HADOOP-13363.

> Update to Protobuf 3 in trunk
> -
>
> Key: HDFS-11010
> URL: https://issues.apache.org/jira/browse/HDFS-11010
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.0.0-alpha2
>Reporter: Anu Engineer
>
> This JIRA proposes that we move from protobuf 2.5 to protobuf 3, which 
> allows us to stay current with newer versions of protobuf.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10881) Federation State Store Driver API

2017-03-28 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946100#comment-15946100
 ] 

Chris Douglas commented on HDFS-10881:
--

Mostly minor things. I suspect many of the points I'm confused on would be 
clearer in context.
* Naming: since {{updateOrCreate}} sets a policy for existing values, would 
{{put}} be clearer?
* Can the fields on {{QueryResult}} be final?
* What is the intent behind {{BaseRecord#initDefaultTimes}}? Would a sentinel 
value (e.g., \-1) serve the same purpose?
* {{BaseRecord#isPrimaryKeyFolder}}: it's not clear what this means.
* The semantics of {{delete}} (rename to {{remove}}?) are unclear. If the item 
doesn't exist, does this succeed? If some records in the filter are removed, or 
if no records match, is the operation successful?
* Why does {{getPrimaryKeys}} need to be a {{SortedMap}}? Is that important to 
its semantics?

> Federation State Store Driver API
> -
>
> Key: HDFS-10881
> URL: https://issues.apache.org/jira/browse/HDFS-10881
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Jason Kace
>Assignee: Jason Kace
> Attachments: HDFS-10881-HDFS-10467-001.patch, 
> HDFS-10881-HDFS-10467-002.patch, HDFS-10881-HDFS-10467-003.patch, 
> HDFS-10881-HDFS-10467-004.patch, HDFS-10881-HDFS-10467-005.patch, 
> HDFS-10881-HDFS-10467-006.patch, HDFS-10881-HDFS-10467-007.patch, 
> HDFS-10881-HDFS-10467-008.patch, HDFS-10881-HDFS-10467-009.patch, 
> HDFS-10881-HDFS-10467-010.patch, HDFS-10881-HDFS-10467-011.patch
>
>
> The API interfaces and minimal classes required to support a state store data 
> backend such as ZooKeeper or a file system.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11302) Improve Logging for SSLHostnameVerifier

2017-03-28 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-11302:
--
Target Version/s: 2.8.1  (was: 2.8.0)

> Improve Logging for SSLHostnameVerifier
> ---
>
> Key: HDFS-11302
> URL: https://issues.apache.org/jira/browse/HDFS-11302
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: security
>Reporter: Xiaoyu Yao
>Assignee: Chen Liang
> Attachments: HDFS-11302.001.patch
>
>
> The SSLHostnameVerifier interface/class was copied from other projects 
> without any logging to help troubleshoot SSL certificate-related issues. For 
> a misconfigured SSL truststore, we may get a very confusing error message 
> like
> {code}
> >hdfs dfs -cat swebhdfs://NNl/tmp/test1.txt
> ...
> cause:java.io.IOException: DN2:50475: HTTPS hostname wrong:  should be 
> cat: DN2:50475: HTTPS hostname wrong:  should be 
> {code}
> This ticket is opened to add tracing to give more useful context information 
> around SSL certificate verification failures inside the following code.
> {code}AbstractVerifier#check(String[] host, X509Certificate cert) {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11557) Empty directories may be recursively deleted without being listable

2017-03-28 Thread David Tucker (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946049#comment-15946049
 ] 

David Tucker edited comment on HDFS-11557 at 3/28/17 10:02 PM:
---

Indeed, I am able to reproduce with an internal Snakebite-like client and with 
the regular client:

{code:none}
>>> import pydoofus
>>> super = pydoofus.namenode.v9.Client('namenode', 8020, auth={'effective_user': 'hdfs'})
>>> client = pydoofus.namenode.v9.Client('namenode', 8020, auth={'effective_user': 'nobody'})
>>> super.mkdirs('/test', 0777)
True
>>> client.mkdirs('/test/empty', 0222)
True
>>> client.get_listing('/test/empty')
Traceback (most recent call last):
  File "", line 1, in 
  File "/usr/lib/python2.7/site-packages/pydoofus/namenode/v9.py", line 666, in 
get_listing
self.invoke('getListing', request, response)
  File "/usr/lib/python2.7/site-packages/pydoofus/namenode/v9.py", line 490, in 
invoke
blob = self.channel.receive()
  File "/usr/lib/python2.7/site-packages/pydoofus/namenode/v9.py", line 310, in 
receive
raise exceptions.create_exception(err_type, err_msg, call_id, err_code)
pydoofus.exceptions.AccessControlException: ERROR_APPLICATION: Permission 
denied: user=nobody, access=READ_EXECUTE, 
inode="/test/empty":nobody:supergroup:d-w--w--w-
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:219)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1728)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1712)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPathAccess(FSDirectory.java:1686)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getListingInt(FSDirStatAndListingOp.java:76)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4486)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:999)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:634)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

>>> client.delete('/test/empty', can_recurse=True)
True
{code}

{code:none}
[hdfs@rketcherside-hdponlynext-1 ~]$ hdfs dfs -mkdir /test
[hdfs@rketcherside-hdponlynext-1 ~]$ hdfs dfs -chmod 777 /test
[hdfs@rketcherside-hdponlynext-1 ~]$ hdfs dfs -ls /
Found 9 items
drwxrwxrwx   - yarn   hadoop          0 2017-03-27 10:20 /app-logs
drwxr-xr-x   - hdfs   hdfs            0 2017-03-27 10:20 /apps
drwxr-xr-x   - yarn   hadoop          0 2017-03-27 10:20 /ats
drwxr-xr-x   - hdfs   hdfs            0 2017-03-27 10:20 /hdp
drwxr-xr-x   - mapred hdfs            0 2017-03-27 10:20 /mapred
drwxrwxrwx   - mapred hadoop          0 2017-03-27 10:20 /mr-history
drwxrwxrwx   - hdfs   hdfs            0 2017-03-28 14:55 /test
drwxrwxrwx   - hdfs   hdfs            0 2017-03-28 09:21 /tmp
drwxr-xr-x   - hdfs   hdfs            0 2017-03-28 09:21 /user

[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -mkdir /test/empty
[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -chmod 222 /test/empty
[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -ls /test/empty
ls: Permission denied: user=ambari-qa, access=READ_EXECUTE, 
inode="/test/empty":ambari-qa:hdfs:d-w--w--w-
[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -rm -r /test/empty
17/03/28 14:57:45 INFO fs.TrashPolicyDefault: Moved: 
'hdfs://rketcherside-hdponlynext-1.west.isilon.com:8020/test/empty' to trash 
at: 
hdfs://rketcherside-hdponlynext-1.west.isilon.com:8020/user/ambari-qa/.Trash/Current/test/empty
[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -ls /test
[ambari-qa@rketcherside-hdponlynext-1 ~]$ 
{code}


was (Author: dmtucker):
Indeed, I am able to reproduce with an internal Snakebite-like client and with 
the regular

[jira] [Commented] (HDFS-10971) Distcp should not copy replication factor if source file is erasure coded

2017-03-28 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946050#comment-15946050
 ] 

Wei-Chiu Chuang commented on HDFS-10971:


Thanks [~manojg] for the patch. Looks good to me. Would anyone else like to 
review and comment on the patch?

> Distcp should not copy replication factor if source file is erasure coded
> -
>
> Key: HDFS-10971
> URL: https://issues.apache.org/jira/browse/HDFS-10971
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: distcp
>Affects Versions: 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Manoj Govindassamy
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-10971.01.patch, HDFS-10971.testcase.patch
>
>
> The current erasure coding implementation uses replication factor field to 
> store erasure coding policy.
> Distcp copies the source file's replication factor to the destination if 
> {{-pr}} is specified. However, if the source file is EC, the replication 
> factor (which is EC policy) should not be replicated to the destination file. 
> When a HdfsFileStatus is converted to FileStatus, the replication factor is 
> set to 0 if it's an EC file.
> In fact, I will attach a test case that shows trying to replicate the 
> replication factor of an EC file results in an IOException: "Requested 
> replication factor of 0 is less than the required minimum of 1 for 
> /tmp/dst/dest2"



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-11557) Empty directories may be recursively deleted without being listable

2017-03-28 Thread David Tucker (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946049#comment-15946049
 ] 

David Tucker edited comment on HDFS-11557 at 3/28/17 10:02 PM:
---

Indeed, I am able to reproduce with an internal Snakebite-like client and with 
the regular client:

{code}
>>> import pydoofus
>>> super = pydoofus.namenode.v9.Client('namenode', 8020, auth={'effective_user': 'hdfs'})
>>> client = pydoofus.namenode.v9.Client('namenode', 8020, auth={'effective_user': 'nobody'})
>>> super.mkdirs('/test', 0777)
True
>>> client.mkdirs('/test/empty', 0222)
True
>>> client.get_listing('/test/empty')
Traceback (most recent call last):
  File "", line 1, in 
  File "/usr/lib/python2.7/site-packages/pydoofus/namenode/v9.py", line 666, in 
get_listing
self.invoke('getListing', request, response)
  File "/usr/lib/python2.7/site-packages/pydoofus/namenode/v9.py", line 490, in 
invoke
blob = self.channel.receive()
  File "/usr/lib/python2.7/site-packages/pydoofus/namenode/v9.py", line 310, in 
receive
raise exceptions.create_exception(err_type, err_msg, call_id, err_code)
pydoofus.exceptions.AccessControlException: ERROR_APPLICATION: Permission 
denied: user=nobody, access=READ_EXECUTE, 
inode="/test/empty":nobody:supergroup:d-w--w--w-
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:219)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1728)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1712)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPathAccess(FSDirectory.java:1686)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getListingInt(FSDirStatAndListingOp.java:76)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4486)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:999)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:634)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

>>> client.delete('/test/empty', can_recurse=True)
True
{code}

{code}
[hdfs@rketcherside-hdponlynext-1 ~]$ hdfs dfs -mkdir /test
[hdfs@rketcherside-hdponlynext-1 ~]$ hdfs dfs -chmod 777 /test
[hdfs@rketcherside-hdponlynext-1 ~]$ hdfs dfs -ls /
Found 9 items
drwxrwxrwx   - yarn   hadoop  0 2017-03-27 10:20 /app-logs
drwxr-xr-x   - hdfs   hdfs    0 2017-03-27 10:20 /apps
drwxr-xr-x   - yarn   hadoop  0 2017-03-27 10:20 /ats
drwxr-xr-x   - hdfs   hdfs    0 2017-03-27 10:20 /hdp
drwxr-xr-x   - mapred hdfs    0 2017-03-27 10:20 /mapred
drwxrwxrwx   - mapred hadoop  0 2017-03-27 10:20 /mr-history
drwxrwxrwx   - hdfs   hdfs    0 2017-03-28 14:55 /test
drwxrwxrwx   - hdfs   hdfs    0 2017-03-28 09:21 /tmp
drwxr-xr-x   - hdfs   hdfs    0 2017-03-28 09:21 /user

[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -mkdir /test/empty
[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -chmod 222 /test/empty
[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -ls /test/empty
ls: Permission denied: user=ambari-qa, access=READ_EXECUTE, 
inode="/test/empty":ambari-qa:hdfs:d-w--w--w-
[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -rm -r /test/empty
17/03/28 14:57:45 INFO fs.TrashPolicyDefault: Moved: 
'hdfs://rketcherside-hdponlynext-1.west.isilon.com:8020/test/empty' to trash 
at: 
hdfs://rketcherside-hdponlynext-1.west.isilon.com:8020/user/ambari-qa/.Trash/Current/test/empty
[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -ls /test
[ambari-qa@rketcherside-hdponlynext-1 ~]$ 
{code}


was (Author: dmtucker):
Indeed, I am able to reproduce with an internal Snakebite-like client and with 
the regular client:


[jira] [Commented] (HDFS-11557) Empty directories may be recursively deleted without being listable

2017-03-28 Thread David Tucker (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946049#comment-15946049
 ] 

David Tucker commented on HDFS-11557:
-

Indeed, I am able to reproduce with an internal Snakebite-like client and with 
the regular client:

{code:python}
>>> import pydoofus
>>> super = pydoofus.namenode.v9.Client('namenode', 8020, auth={'effective_user': 'hdfs'})
>>> client = pydoofus.namenode.v9.Client('namenode', 8020, auth={'effective_user': 'nobody'})
>>> super.mkdirs('/test', 0777)
True
>>> client.mkdirs('/test/empty', 0222)
True
>>> client.get_listing('/test/empty')
Traceback (most recent call last):
  File "", line 1, in 
  File "/usr/lib/python2.7/site-packages/pydoofus/namenode/v9.py", line 666, in 
get_listing
self.invoke('getListing', request, response)
  File "/usr/lib/python2.7/site-packages/pydoofus/namenode/v9.py", line 490, in 
invoke
blob = self.channel.receive()
  File "/usr/lib/python2.7/site-packages/pydoofus/namenode/v9.py", line 310, in 
receive
raise exceptions.create_exception(err_type, err_msg, call_id, err_code)
pydoofus.exceptions.AccessControlException: ERROR_APPLICATION: Permission 
denied: user=nobody, access=READ_EXECUTE, 
inode="/test/empty":nobody:supergroup:d-w--w--w-
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:219)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1728)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1712)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPathAccess(FSDirectory.java:1686)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getListingInt(FSDirStatAndListingOp.java:76)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4486)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:999)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:634)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

>>> client.delete('/test/empty', can_recurse=True)
True
{code}

{code}
[hdfs@rketcherside-hdponlynext-1 ~]$ hdfs dfs -mkdir /test
[hdfs@rketcherside-hdponlynext-1 ~]$ hdfs dfs -chmod 777 /test
[hdfs@rketcherside-hdponlynext-1 ~]$ hdfs dfs -ls /
Found 9 items
drwxrwxrwx   - yarn   hadoop  0 2017-03-27 10:20 /app-logs
drwxr-xr-x   - hdfs   hdfs    0 2017-03-27 10:20 /apps
drwxr-xr-x   - yarn   hadoop  0 2017-03-27 10:20 /ats
drwxr-xr-x   - hdfs   hdfs    0 2017-03-27 10:20 /hdp
drwxr-xr-x   - mapred hdfs    0 2017-03-27 10:20 /mapred
drwxrwxrwx   - mapred hadoop  0 2017-03-27 10:20 /mr-history
drwxrwxrwx   - hdfs   hdfs    0 2017-03-28 14:55 /test
drwxrwxrwx   - hdfs   hdfs    0 2017-03-28 09:21 /tmp
drwxr-xr-x   - hdfs   hdfs    0 2017-03-28 09:21 /user

[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -mkdir /test/empty
[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -chmod 222 /test/empty
[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -ls /test/empty
ls: Permission denied: user=ambari-qa, access=READ_EXECUTE, 
inode="/test/empty":ambari-qa:hdfs:d-w--w--w-
[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -rm -r /test/empty
17/03/28 14:57:45 INFO fs.TrashPolicyDefault: Moved: 
'hdfs://rketcherside-hdponlynext-1.west.isilon.com:8020/test/empty' to trash 
at: 
hdfs://rketcherside-hdponlynext-1.west.isilon.com:8020/user/ambari-qa/.Trash/Current/test/empty
[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -ls /test
[ambari-qa@rketcherside-hdponlynext-1 ~]$ 
{code}

> Empty directories may be recursively deleted without being listable
> ---
>
> Key: HDFS-11557

[jira] [Comment Edited] (HDFS-11557) Empty directories may be recursively deleted without being listable

2017-03-28 Thread David Tucker (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946049#comment-15946049
 ] 

David Tucker edited comment on HDFS-11557 at 3/28/17 10:02 PM:
---

Indeed, I am able to reproduce with an internal Snakebite-like client and with 
the regular client:

{code:none}
>>> import pydoofus
>>> super = pydoofus.namenode.v9.Client('namenode', 8020, auth={'effective_user': 'hdfs'})
>>> client = pydoofus.namenode.v9.Client('namenode', 8020, auth={'effective_user': 'nobody'})
>>> super.mkdirs('/test', 0777)
True
>>> client.mkdirs('/test/empty', 0222)
True
>>> client.get_listing('/test/empty')
Traceback (most recent call last):
  File "", line 1, in 
  File "/usr/lib/python2.7/site-packages/pydoofus/namenode/v9.py", line 666, in 
get_listing
self.invoke('getListing', request, response)
  File "/usr/lib/python2.7/site-packages/pydoofus/namenode/v9.py", line 490, in 
invoke
blob = self.channel.receive()
  File "/usr/lib/python2.7/site-packages/pydoofus/namenode/v9.py", line 310, in 
receive
raise exceptions.create_exception(err_type, err_msg, call_id, err_code)
pydoofus.exceptions.AccessControlException: ERROR_APPLICATION: Permission 
denied: user=nobody, access=READ_EXECUTE, 
inode="/test/empty":nobody:supergroup:d-w--w--w-
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:219)
at 
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1728)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1712)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPathAccess(FSDirectory.java:1686)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getListingInt(FSDirStatAndListingOp.java:76)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4486)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:999)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getListing(ClientNamenodeProtocolServerSideTranslatorPB.java:634)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

>>> client.delete('/test/empty', can_recurse=True)
True
{code}

{code}
[hdfs@rketcherside-hdponlynext-1 ~]$ hdfs dfs -mkdir /test
[hdfs@rketcherside-hdponlynext-1 ~]$ hdfs dfs -chmod 777 /test
[hdfs@rketcherside-hdponlynext-1 ~]$ hdfs dfs -ls /
Found 9 items
drwxrwxrwx   - yarn   hadoop  0 2017-03-27 10:20 /app-logs
drwxr-xr-x   - hdfs   hdfs    0 2017-03-27 10:20 /apps
drwxr-xr-x   - yarn   hadoop  0 2017-03-27 10:20 /ats
drwxr-xr-x   - hdfs   hdfs    0 2017-03-27 10:20 /hdp
drwxr-xr-x   - mapred hdfs    0 2017-03-27 10:20 /mapred
drwxrwxrwx   - mapred hadoop  0 2017-03-27 10:20 /mr-history
drwxrwxrwx   - hdfs   hdfs    0 2017-03-28 14:55 /test
drwxrwxrwx   - hdfs   hdfs    0 2017-03-28 09:21 /tmp
drwxr-xr-x   - hdfs   hdfs    0 2017-03-28 09:21 /user

[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -mkdir /test/empty
[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -chmod 222 /test/empty
[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -ls /test/empty
ls: Permission denied: user=ambari-qa, access=READ_EXECUTE, 
inode="/test/empty":ambari-qa:hdfs:d-w--w--w-
[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -rm -r /test/empty
17/03/28 14:57:45 INFO fs.TrashPolicyDefault: Moved: 
'hdfs://rketcherside-hdponlynext-1.west.isilon.com:8020/test/empty' to trash 
at: 
hdfs://rketcherside-hdponlynext-1.west.isilon.com:8020/user/ambari-qa/.Trash/Current/test/empty
[ambari-qa@rketcherside-hdponlynext-1 ~]$ hdfs dfs -ls /test
[ambari-qa@rketcherside-hdponlynext-1 ~]$ 
{code}


was (Author: dmtucker):
Indeed, I am able to reproduce with an internal Snakebite-like client and with 
the regular client:

[jira] [Commented] (HDFS-9651) All web UIs should include a robots.txt file

2017-03-28 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946042#comment-15946042
 ] 

Junping Du commented on HDFS-9651:
--

Patch LGTM. Will commit it in the next couple of days if there are no 
objections.

> All web UIs should include a robots.txt file
> 
>
> Key: HDFS-9651
> URL: https://issues.apache.org/jira/browse/HDFS-9651
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lars Francke
>Assignee: Lars Francke
>Priority: Trivial
> Attachments: HDFS-9651.1.patch, HDFS-9651.2.patch
>
>
> Similar to HDFS-330, so that public UIs don't get crawled.
> I can provide a patch that includes a simple robots.txt. An alternative 
> would be a Filter that serves one automatically for all UIs, but I don't 
> have time to do that.
> If anyone wants to take over, please go ahead.
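
For the Filter alternative mentioned above, a minimal sketch against the 
standard javax.servlet API (illustrative only, not part of any attached 
patch):

{code:java}
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/** Answers /robots.txt for any web UI it is mapped to. */
public class RobotsTxtFilter implements Filter {
  @Override public void init(FilterConfig cfg) {}
  @Override public void destroy() {}

  @Override
  public void doFilter(ServletRequest req, ServletResponse resp,
      FilterChain chain) throws IOException, ServletException {
    if ("/robots.txt".equals(((HttpServletRequest) req).getRequestURI())) {
      HttpServletResponse httpResp = (HttpServletResponse) resp;
      httpResp.setContentType("text/plain");
      httpResp.getWriter().write("User-agent: *\nDisallow: /\n");
      return;                      // handled; do not pass down the chain
    }
    chain.doFilter(req, resp);     // everything else proceeds normally
  }
}
{code}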



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10629) Federation Router

2017-03-28 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15946026#comment-15946026
 ] 

Inigo Goiri commented on HDFS-10629:


Committed to HDFS-10467.
Thanks [~jakace] for working on this and [~chris.douglas] and [~subru] for the 
reviews!

> Federation Router
> -
>
> Key: HDFS-10629
> URL: https://issues.apache.org/jira/browse/HDFS-10629
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Inigo Goiri
>Assignee: Jason Kace
> Fix For: HDFS-10467
>
> Attachments: HDFS-10629.000.patch, HDFS-10629.001.patch, 
> HDFS-10629-HDFS-10467-002.patch, HDFS-10629-HDFS-10467-003.patch, 
> HDFS-10629-HDFS-10467-004.patch, HDFS-10629-HDFS-10467-005.patch, 
> HDFS-10629-HDFS-10467-006.patch, HDFS-10629-HDFS-10467-007.patch, 
> HDFS-10629-HDFS-10467-008.patch, HDFS-10629-HDFS-10467-009.patch, 
> HDFS-10629-HDFS-10467-010.patch, HDFS-10629-HDFS-10467-011.patch, 
> HDFS-10629-HDFS-10467-012.patch, HDFS-10629-HDFS-10467-013.patch, 
> routerlatency.png
>
>
> Component that routes calls from the clients to the right Namespace.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10629) Federation Router

2017-03-28 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-10629:
-
Fix Version/s: HDFS-10467

> Federation Router
> -
>
> Key: HDFS-10629
> URL: https://issues.apache.org/jira/browse/HDFS-10629
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Inigo Goiri
>Assignee: Jason Kace
> Fix For: HDFS-10467
>
> Attachments: HDFS-10629.000.patch, HDFS-10629.001.patch, 
> HDFS-10629-HDFS-10467-002.patch, HDFS-10629-HDFS-10467-003.patch, 
> HDFS-10629-HDFS-10467-004.patch, HDFS-10629-HDFS-10467-005.patch, 
> HDFS-10629-HDFS-10467-006.patch, HDFS-10629-HDFS-10467-007.patch, 
> HDFS-10629-HDFS-10467-008.patch, HDFS-10629-HDFS-10467-009.patch, 
> HDFS-10629-HDFS-10467-010.patch, HDFS-10629-HDFS-10467-011.patch, 
> HDFS-10629-HDFS-10467-012.patch, HDFS-10629-HDFS-10467-013.patch, 
> routerlatency.png
>
>
> Component that routes calls from the clients to the right Namespace.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10629) Federation Router

2017-03-28 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HDFS-10629:
---
  Resolution: Resolved
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> Federation Router
> -
>
> Key: HDFS-10629
> URL: https://issues.apache.org/jira/browse/HDFS-10629
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Inigo Goiri
>Assignee: Jason Kace
> Attachments: HDFS-10629.000.patch, HDFS-10629.001.patch, 
> HDFS-10629-HDFS-10467-002.patch, HDFS-10629-HDFS-10467-003.patch, 
> HDFS-10629-HDFS-10467-004.patch, HDFS-10629-HDFS-10467-005.patch, 
> HDFS-10629-HDFS-10467-006.patch, HDFS-10629-HDFS-10467-007.patch, 
> HDFS-10629-HDFS-10467-008.patch, HDFS-10629-HDFS-10467-009.patch, 
> HDFS-10629-HDFS-10467-010.patch, HDFS-10629-HDFS-10467-011.patch, 
> HDFS-10629-HDFS-10467-012.patch, HDFS-10629-HDFS-10467-013.patch, 
> routerlatency.png
>
>
> Component that routes calls from the clients to the right Namespace.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11551) Handle SlowDiskReport from DataNode at the NameNode

2017-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15945984#comment-15945984
 ] 

Hadoop QA commented on HDFS-11551:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
54s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 33s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}110m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11551 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860915/HDFS-11551.007.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ff272ee85927 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6b09336 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18874/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18874/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client 
hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
| Console 

[jira] [Commented] (HDFS-10629) Federation Router

2017-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15945967#comment-15945967
 ] 

Hadoop QA commented on HDFS-10629:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
9s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
59s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
13s{color} | {color:green} The patch generated 0 new + 98 unchanged - 1 fixed = 
98 total (was 99) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m  0s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}108m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-10629 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860912/HDFS-10629-HDFS-10467-013.patch
 |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  compile  
javac  javadoc  mvninstall  findbugs  checkstyle  xml  |
| uname | Linux eaadf4c64df8 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10467 / 6c399a8 |
| Default Java | 1.8.0_121 |
| shellcheck | v0.4.5 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18873/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18873/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Conso

[jira] [Updated] (HDFS-11472) Fix inconsistent replica size after a data pipeline failure

2017-03-28 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-11472:
---
Attachment: HDFS-11472.001.patch

Attaching my first patch.

This patch adds extra handling in {{FsDatasetImpl#recoverRbwImpl}} for the 
case where the on-disk byte count is less than the acknowledged length. In 
that case, it looks at the block file, and if the data written into the block 
file exceeds the acknowledged length, it updates the in-memory bytesOnDisk 
and truncates the block file to match the acknowledged length.
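
For reference, a self-contained sketch of that reconciliation step 
(illustrative names; the real change lives in 
{{FsDatasetImpl#recoverRbwImpl}} and also refreshes the checksum file, which 
is omitted here):

{code:java}
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

class RbwRecoverySketch {
  // Returns the reconciled bytesOnDisk for a replica whose in-memory
  // counter fell behind the acknowledged length.
  static long reconcile(File blockFile, long bytesOnDisk, long bytesAcked)
      throws IOException {
    if (bytesOnDisk >= bytesAcked) {
      return bytesOnDisk;               // nothing to repair
    }
    if (blockFile.length() < bytesAcked) {
      // Less data on disk than was acknowledged: recovery must fail.
      throw new IOException("Block file shorter than acknowledged length");
    }
    // The bytes reached disk but the in-memory counter was never updated
    // (e.g. an exception inside BlockReceiver#receivePacket). Truncate to
    // the acknowledged length and adopt it as the new bytesOnDisk.
    try (RandomAccessFile raf = new RandomAccessFile(blockFile, "rw")) {
      raf.setLength(bytesAcked);
    }
    return bytesAcked;
  }
}
{code}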

The test case verifies two scenarios: (1) block file length >= acknowledged 
length, and (2) block file length < acknowledged length. In the latter case 
the recovery attempt will fail.

Looking forward to comments. Thanks!

> Fix inconsistent replica size after a data pipeline failure
> ---
>
> Key: HDFS-11472
> URL: https://issues.apache.org/jira/browse/HDFS-11472
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Critical
> Attachments: HDFS-11472.001.patch, HDFS-11472.testcase.patch
>
>
> We observed a case where a replica's on-disk length is less than its 
> acknowledged length, breaking an assumption in the recovery code.
> {noformat}
> 2017-01-08 01:41:03,532 WARN 
> org.apache.hadoop.hdfs.server.protocol.InterDatanodeProtocol: Failed to 
> obtain replica info for block 
> (=BP-947993742-10.204.0.136-1362248978912:blk_2526438952_1101394519586) from 
> datanode (=DatanodeInfoWithStorage[10.204.138.17:1004,null,null])
> java.io.IOException: THIS IS NOT SUPPOSED TO HAPPEN: getBytesOnDisk() < 
> getVisibleLength(), rip=ReplicaBeingWritten, blk_2526438952_1101394519586, RBW
>   getNumBytes() = 27530
>   getBytesOnDisk()  = 27006
>   getVisibleLength()= 27268
>   getVolume()   = /data/6/hdfs/datanode/current
>   getBlockFile()= 
> /data/6/hdfs/datanode/current/BP-947993742-10.204.0.136-1362248978912/current/rbw/blk_2526438952
>   bytesAcked=27268
>   bytesOnDisk=27006
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2284)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.initReplicaRecovery(FsDatasetImpl.java:2260)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:2566)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.callInitReplicaRecovery(DataNode.java:2577)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:2645)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.access$400(DataNode.java:245)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode$5.run(DataNode.java:2551)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> It turns out that if an exception is thrown within 
> {{BlockReceiver#receivePacket}}, the in-memory replica's on-disk length may 
> not be updated, even though the data is written to disk anyway.
> For example, here's one exception we observed
> {noformat}
> 2017-01-08 01:40:59,512 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Exception for 
> BP-947993742-10.204.0.136-1362248978912:blk_2526438952_1101394499067
> java.nio.channels.ClosedByInterruptException
> at 
> java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
> at sun.nio.ch.FileChannelImpl.position(FileChannelImpl.java:269)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.adjustCrcChannelPosition(FsDatasetImpl.java:1484)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.adjustCrcFilePosition(BlockReceiver.java:994)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:670)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:857)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:797)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:169)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:106)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:244)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> There are potentially other places and causes where an exception is thrown 
> within {{BlockReceiver#receivePacket}}, so it may not make much sense to 
> work around this particular exception alone. Instead, we should improve the 
> replica recovery code to handle the case where the on-disk size is less 
> than the acknowledged size, and update the in-memory checksum accordingly.



--
This message

[jira] [Commented] (HDFS-11557) Empty directories may be recursively deleted without being listable

2017-03-28 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15945954#comment-15945954
 ] 

Chen Liang commented on HDFS-11557:
---

Hi [~dmtucker],

After checking the code, it turns out that if the user is a superuser, the 
code simply bypasses the permission check. So I wonder: does this issue 
happen if the user is not a superuser?
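
The short-circuit in question has roughly this shape (a paraphrased sketch, 
not the exact {{FSPermissionChecker}} source):

{code:java}
import java.nio.file.AccessDeniedException;

class PermissionSketch {
  // A superuser caller returns before any per-inode logic runs, which is
  // why the repro above only shows up for non-superusers.
  static void checkListing(boolean isSuperUser, boolean dirIsReadable,
      String path) throws AccessDeniedException {
    if (isSuperUser) {
      return;                   // superusers bypass all permission checks
    }
    if (!dirIsReadable) {       // listing a dir requires READ_EXECUTE on it
      throw new AccessDeniedException(path, null, "Permission denied");
    }
  }
}
{code}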

> Empty directories may be recursively deleted without being listable
> ---
>
> Key: HDFS-11557
> URL: https://issues.apache.org/jira/browse/HDFS-11557
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.3
>Reporter: David Tucker
>Assignee: Chen Liang
>
> To reproduce, create a directory without read and/or execute permissions 
> (e.g. 0666, 0333, or 0222), then call delete on it with can_recurse=True. 
> Note that the delete succeeds even though the client is unable to check for 
> emptiness and, therefore, cannot otherwise know that any/all children are 
> deletable.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10675) Datanode support to read from external stores.

2017-03-28 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HDFS-10675:
-
Attachment: HDFS-10675-HDFS-9806.009.patch

> Datanode support to read from external stores. 
> ---
>
> Key: HDFS-10675
> URL: https://issues.apache.org/jira/browse/HDFS-10675
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
> Attachments: HDFS-10675-HDFS-9806.001.patch, 
> HDFS-10675-HDFS-9806.002.patch, HDFS-10675-HDFS-9806.003.patch, 
> HDFS-10675-HDFS-9806.004.patch, HDFS-10675-HDFS-9806.005.patch, 
> HDFS-10675-HDFS-9806.006.patch, HDFS-10675-HDFS-9806.007.patch, 
> HDFS-10675-HDFS-9806.008.patch, HDFS-10675-HDFS-9806.009.patch
>
>
> This JIRA introduces a new {{PROVIDED}} {{StorageType}} to represent external 
> stores, along with enabling the Datanode to read from such stores using a 
> {{ProvidedReplica}} and a {{ProvidedVolume}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11558) BPServiceActor thread name is too long

2017-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15945876#comment-15945876
 ] 

Hadoop QA commented on HDFS-11558:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 42s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}108m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11558 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860909/HDFS-11558.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 79438641890a 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6b09336 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18872/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18872/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18872/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> BPServiceActor thread name is too long
> --
>
> Key: HDFS-11558
> URL: https://issues.apache.org/jira/browse/HDFS-11558
>  

[jira] [Commented] (HDFS-11531) Expose hedged read metrics via libHDFS API

2017-03-28 Thread Sailesh Mukil (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15945827#comment-15945827
 ] 

Sailesh Mukil commented on HDFS-11531:
--

Thanks for the comments [~cmccabe], I will address them after clarifying the 
point below.

> Why not just return the hdfsHedgedReadMetrics pointer? The error is in errno 
> anyway on a failure.

I agree. However, I was following the same API style exposed by 
hdfsFileGetReadStatistics() and hdfsFileFreeReadStatistics():
https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/hdfs.c#L96

If you still think I should change it to return the struct, I can go ahead and 
do so.

> Expose hedged read metrics via libHDFS API
> --
>
> Key: HDFS-11531
> URL: https://issues.apache.org/jira/browse/HDFS-11531
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs
>Affects Versions: 2.6.0
>Reporter: Sailesh Mukil
>Assignee: Sailesh Mukil
> Attachments: HDFS-11531.000.patch, HDFS-11531.001.patch
>
>
> It would be good to expose the DFSHedgedReadMetrics via a libHDFS API for 
> applications to retrieve.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11551) Handle SlowDiskReport from DataNode at the NameNode

2017-03-28 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-11551:
--
Attachment: HDFS-11551.007.patch

Fixed checkstyle and findbugs errors in patch v07.
The failed unit tests pass locally.

> Handle SlowDiskReport from DataNode at the NameNode
> ---
>
> Key: HDFS-11551
> URL: https://issues.apache.org/jira/browse/HDFS-11551
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-11551.001.patch, HDFS-11551.002.patch, 
> HDFS-11551.003.patch, HDFS-11551.004.patch, HDFS-11551.005.patch, 
> HDFS-11551.006.patch, HDFS-11551.007.patch
>
>
> DataNodes send slow disk reports via heartbeats. Handle these reports at the 
> NameNode to find the topN slow disks.
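
To illustrate the aggregation idea, a minimal sketch (hypothetical names; 
the actual patch's bookkeeping and reporting format differ):

{code:java}
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors;

class SlowDiskTrackerSketch {
  // Latest reported average latency per disk, keyed by "datanode:disk".
  private final Map<String, Double> latestLatency = new ConcurrentHashMap<>();

  // Called for each disk entry carried in a DataNode heartbeat.
  void addReport(String disk, double avgLatencyMs) {
    latestLatency.put(disk, avgLatencyMs);
  }

  // The n slowest disks seen so far, slowest first.
  List<String> topN(int n) {
    return latestLatency.entrySet().stream()
        .sorted(Map.Entry.<String, Double>comparingByValue(
            Comparator.reverseOrder()))
        .limit(n)
        .map(Map.Entry::getKey)
        .collect(Collectors.toList());
  }
}
{code}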



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10629) Federation Router

2017-03-28 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HDFS-10629:
---
Attachment: HDFS-10629-HDFS-10467-013.patch

Fixed checkstyle warnings and the main() approach.

> Federation Router
> -
>
> Key: HDFS-10629
> URL: https://issues.apache.org/jira/browse/HDFS-10629
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Inigo Goiri
>Assignee: Jason Kace
> Attachments: HDFS-10629.000.patch, HDFS-10629.001.patch, 
> HDFS-10629-HDFS-10467-002.patch, HDFS-10629-HDFS-10467-003.patch, 
> HDFS-10629-HDFS-10467-004.patch, HDFS-10629-HDFS-10467-005.patch, 
> HDFS-10629-HDFS-10467-006.patch, HDFS-10629-HDFS-10467-007.patch, 
> HDFS-10629-HDFS-10467-008.patch, HDFS-10629-HDFS-10467-009.patch, 
> HDFS-10629-HDFS-10467-010.patch, HDFS-10629-HDFS-10467-011.patch, 
> HDFS-10629-HDFS-10467-012.patch, HDFS-10629-HDFS-10467-013.patch, 
> routerlatency.png
>
>
> Component that routes calls from the clients to the right Namespace.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11557) Empty directories may be recursively deleted without being listable

2017-03-28 Thread David Tucker (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15945705#comment-15945705
 ] 

David Tucker commented on HDFS-11557:
-

[~vagarychen], please, by all means! I have nothing substantial to add at the 
moment.

> Empty directories may be recursively deleted without being listable
> ---
>
> Key: HDFS-11557
> URL: https://issues.apache.org/jira/browse/HDFS-11557
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.3
>Reporter: David Tucker
>Assignee: Chen Liang
>
> To reproduce, create a directory without read and/or execute permissions 
> (e.g. 0666, 0333, or 0222), then call delete on it with can_recurse=True. 
> Note that the delete succeeds even though the client is unable to check for 
> emptiness and, therefore, cannot otherwise know that any/all children are 
> deletable.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11558) BPServiceActor thread name is too long

2017-03-28 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-11558:
-
Attachment: HDFS-11558.004.patch

v4 fixed some compile issues.

> BPServiceActor thread name is too long
> --
>
> Key: HDFS-11558
> URL: https://issues.apache.org/jira/browse/HDFS-11558
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
>Priority: Minor
> Attachments: HDFS-11558.000.patch, HDFS-11558.001.patch, 
> HDFS-11558.002.patch, HDFS-11558.003.patch, HDFS-11558.004.patch
>
>
> Currently, the thread name looks like
> {code}
> 2017-03-20 18:32:22,022 [DataNode: 
> [[[DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data0,
>  
> [DISK]file:/Users/szetszwo/hadoop/t2/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/dn1_data1]]
>   heartbeating to localhost/127.0.0.1:51772] INFO  ...
> {code}
> which contains the full path for each storage dir.  It is unnecessarily long.
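
Purely for illustration, one way to drop the per-directory paths (a 
hypothetical shortening, not necessarily what the patch does):

{code:java}
class ThreadNameSketch {
  // e.g. "DataNode: [2 volumes] heartbeating to localhost/127.0.0.1:51772"
  static String actorThreadName(int numVolumes, String nnAddr) {
    return "DataNode: [" + numVolumes + " volumes] heartbeating to " + nnAddr;
  }
}
{code}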



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10629) Federation Router

2017-03-28 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15945601#comment-15945601
 ] 

Chris Douglas commented on HDFS-10629:
--

Thanks for the quick turnaround. The {{initAndStartRouter}} pattern could be 
simplified/made more consistent with the service framework, but it's fine 
as-is. ACK on the other points.

+1

> Federation Router
> -
>
> Key: HDFS-10629
> URL: https://issues.apache.org/jira/browse/HDFS-10629
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Inigo Goiri
>Assignee: Jason Kace
> Attachments: HDFS-10629.000.patch, HDFS-10629.001.patch, 
> HDFS-10629-HDFS-10467-002.patch, HDFS-10629-HDFS-10467-003.patch, 
> HDFS-10629-HDFS-10467-004.patch, HDFS-10629-HDFS-10467-005.patch, 
> HDFS-10629-HDFS-10467-006.patch, HDFS-10629-HDFS-10467-007.patch, 
> HDFS-10629-HDFS-10467-008.patch, HDFS-10629-HDFS-10467-009.patch, 
> HDFS-10629-HDFS-10467-010.patch, HDFS-10629-HDFS-10467-011.patch, 
> HDFS-10629-HDFS-10467-012.patch, routerlatency.png
>
>
> Component that routes calls from the clients to the right Namespace.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-11557) Empty directories may be recursively deleted without being listable

2017-03-28 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang reassigned HDFS-11557:
-

Assignee: Chen Liang

> Empty directories may be recursively deleted without being listable
> ---
>
> Key: HDFS-11557
> URL: https://issues.apache.org/jira/browse/HDFS-11557
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.7.3
>Reporter: David Tucker
>Assignee: Chen Liang
>
> To reproduce, create a directory without read and/or execute permissions 
> (e.g. 0666, 0333, or 0222), then call delete on it with can_recurse=True. 
> Note that the delete succeeds even though the client is unable to check for 
> emptiness and, therefore, cannot otherwise know that any/all children are 
> deletable.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11577) Combine the old and the new chooseRandom for better performance

2017-03-28 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15945556#comment-15945556
 ] 

Chen Liang commented on HDFS-11577:
---

Thanks [~linyiqun] for adding the {{LOG.isDebugEnabled}} check and for 
committing the patch!

> Combine the old and the new chooseRandom for better performance
> ---
>
> Key: HDFS-11577
> URL: https://issues.apache.org/jira/browse/HDFS-11577
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-11577.001.patch, HDFS-11577.002.patch, 
> HDFS-11577.003.patch
>
>
> As discussed in HDFS-11535, this JIRA adds a new function combining both the 
> new and the old chooseRandom methods for better performance.
> More specifically, when choosing a random node with a storage type 
> requirement, the combined method first tries the old method of blindly 
> picking a random node. If this node satisfies the requirement, it is 
> returned. Otherwise, the new chooseRandom is called, which is guaranteed to 
> find an eligible node in one call (if one exists at all).
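
A generic sketch of the combined strategy (type-parameterized for brevity; 
the real code lives in {{DFSNetworkTopology}} and walks the topology tree 
rather than a flat list):

{code:java}
import java.util.List;
import java.util.Random;
import java.util.function.Predicate;
import java.util.stream.Collectors;

class CombinedChooser<N> {
  private final Random rand = new Random();

  N chooseRandom(List<N> all, Predicate<N> eligible) {
    if (all.isEmpty()) {
      return null;
    }
    // Fast path: one blind pick. Cheap, and usually enough when most
    // nodes satisfy the storage-type requirement.
    N candidate = all.get(rand.nextInt(all.size()));
    if (eligible.test(candidate)) {
      return candidate;
    }
    // Slow path: restrict to eligible nodes first, so a single pick is
    // guaranteed to succeed if any eligible node exists.
    List<N> ok = all.stream().filter(eligible).collect(Collectors.toList());
    return ok.isEmpty() ? null : ok.get(rand.nextInt(ok.size()));
  }
}
{code}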



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11566) Ozone: Document missing metrics for container operations

2017-03-28 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15945546#comment-15945546
 ] 

Anu Engineer commented on HDFS-11566:
-

bq. Storage-container metrics are off by default. They can be enabled by 
setting `ozone.enabled` to true.

I would suggest that we rewrite this:
{noformat}
Storage container is an optional service that can be enabled by setting 
'ozone.enabled' to true. These metrics are only available when ozone is enabled.
{noformat}

> Ozone: Document missing metrics for container operations
> 
>
> Key: HDFS-11566
> URL: https://issues.apache.org/jira/browse/HDFS-11566
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation, ozone
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11566-HDFS-7240.001.patch, 
> HDFS-11566-HDFS-7240.002.patch
>
>
> HDFS-11463 added some metrics for container operations that can be exported 
> over JMX, but they haven't been documented in {{Metrics.md}}. Many metrics 
> were added for containers; documenting them will be helpful for users.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-11571) Typo in DataStorage exception message

2017-03-28 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton reassigned HDFS-11571:
---

Assignee: Anna Budai  (was: Daniel Templeton)

> Typo in DataStorage exception message
> -
>
> Key: HDFS-11571
> URL: https://issues.apache.org/jira/browse/HDFS-11571
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0-alpha2
>Reporter: Daniel Templeton
>Assignee: Anna Budai
>Priority: Minor
>  Labels: newbie
>
> The message, "All specified directories are failed to load," has some 
> language issues.  It also appears in TestDataStorage.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11566) Ozone: Document missing metrics for container operations

2017-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15945431#comment-15945431
 ] 

Hadoop QA commented on HDFS-11566:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
55s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11566 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860891/HDFS-11566-HDFS-7240.002.patch
 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 85a038df0f06 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 8f4d8c4 |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18871/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Ozone: Document missing metrics for container operations
> 
>
> Key: HDFS-11566
> URL: https://issues.apache.org/jira/browse/HDFS-11566
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation, ozone
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11566-HDFS-7240.001.patch, 
> HDFS-11566-HDFS-7240.002.patch
>
>
> HDFS-11463 added some metrics for container operations that can be exported 
> over JMX, but they haven't been documented in {{Metrics.md}}. Many metrics 
> were added for containers; documenting them will be helpful for users.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11566) Ozone: Document missing metrics for container operations

2017-03-28 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11566:
-
Attachment: HDFS-11566-HDFS-7240.002.patch

Thanks [~anu] for the review.
{quote}
should we create an Ozonemetrics.md instead of putting it into the HDFS metrics 
file ?
{quote}
It's a good idea. We can put the ozone metrics into this file before merging.
Attaching a new patch.

> Ozone: Document missing metrics for container operations
> 
>
> Key: HDFS-11566
> URL: https://issues.apache.org/jira/browse/HDFS-11566
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: documentation, ozone
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
>Priority: Minor
> Attachments: HDFS-11566-HDFS-7240.001.patch, 
> HDFS-11566-HDFS-7240.002.patch
>
>
> HDFS-11463 added some metrics for container operations that can be exported 
> over JMX, but they haven't been documented in {{Metrics.md}}. Many metrics 
> were added for containers; documenting them will be helpful for users.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11577) Combine the old and the new chooseRandom for better performance

2017-03-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15945392#comment-15945392
 ] 

Hudson commented on HDFS-11577:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11482 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11482/])
HDFS-11577. Combine the old and the new chooseRandom for better (yqlin: rev 
6b0933643835d7696ced011cfdb8b74f63022e8b)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/net/TestDFSNetworkTopology.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/net/DFSNetworkTopology.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java


> Combine the old and the new chooseRandom for better performance
> ---
>
> Key: HDFS-11577
> URL: https://issues.apache.org/jira/browse/HDFS-11577
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-11577.001.patch, HDFS-11577.002.patch, 
> HDFS-11577.003.patch
>
>
> As discussed in HDFS-11535, this JIRA adds a new function combining both the 
> new and the old chooseRandom methods for better performance.
> More specifically, when choosing a random node with a storage type 
> requirement, the combined method first tries the old method of blindly 
> picking a random node. If this node satisfies the requirement, it is 
> returned. Otherwise, the new chooseRandom is called, which guarantees to 
> find an eligible node in one call (if there is one at all).
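
A minimal, self-contained sketch of the combined strategy described above. The 
types and helpers are illustrative stand-ins, not the actual DFSNetworkTopology 
API:
{code:java}
import java.util.List;
import java.util.Random;
import java.util.stream.Collectors;

// Illustrative stand-in for the combined chooseRandom strategy;
// not the actual DFSNetworkTopology code.
class CombinedChooser {
  enum StorageType { DISK, SSD, ARCHIVE }

  static class Node {
    final List<StorageType> storages;
    Node(List<StorageType> storages) { this.storages = storages; }
  }

  private final List<Node> nodes;
  private final Random rand = new Random();

  CombinedChooser(List<Node> nodes) { this.nodes = nodes; }

  Node chooseRandom(StorageType required) {
    if (nodes.isEmpty()) {
      return null;
    }
    // Fast path (old method): blindly pick one random node.
    Node candidate = nodes.get(rand.nextInt(nodes.size()));
    if (candidate.storages.contains(required)) {
      return candidate; // the blind pick already satisfies the requirement
    }
    // Slow path (new method): guaranteed to find an eligible node in one
    // call if one exists; a simple scan stands in for the topology walk.
    List<Node> eligible = nodes.stream()
        .filter(n -> n.storages.contains(required))
        .collect(Collectors.toList());
    return eligible.isEmpty()
        ? null : eligible.get(rand.nextInt(eligible.size()));
  }
}
{code}
When most nodes carry the required storage type, the fast path succeeds with a 
single random pick; the guaranteed slow path only pays its cost in the rare 
case where the blind pick misses.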



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11577) Combine the old and the new chooseRandom for better performance

2017-03-28 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11577:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Combine the old and the new chooseRandom for better performance
> ---
>
> Key: HDFS-11577
> URL: https://issues.apache.org/jira/browse/HDFS-11577
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-11577.001.patch, HDFS-11577.002.patch, 
> HDFS-11577.003.patch
>
>
> As discussed in HDFS-11535, this JIRA adds a new function combining both the 
> new and the old chooseRandom methods for better performance.
> More specifically, when choosing a random node with a storage type 
> requirement, the combined method first tries the old method of blindly 
> picking a random node. If this node satisfies the requirement, it is 
> returned. Otherwise, the new chooseRandom is called, which guarantees to 
> find an eligible node in one call (if there is one at all).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11577) Combine the old and the new chooseRandom for better performance

2017-03-28 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-11577:
-
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha3

The test failures are unrelated. Committed this to trunk. Thanks 
[~vagarychen] for the contribution!

> Combine the old and the new chooseRandom for better performance
> ---
>
> Key: HDFS-11577
> URL: https://issues.apache.org/jira/browse/HDFS-11577
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Chen Liang
>Assignee: Chen Liang
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-11577.001.patch, HDFS-11577.002.patch, 
> HDFS-11577.003.patch
>
>
> As discussed in HDFS-11535, this JIRA adds a new function combining both the 
> new and the old chooseRandom methods for better performance.
> More specifically, when choosing a random node with a storage type 
> requirement, the combined method first tries the old method of blindly 
> picking a random node. If this node satisfies the requirement, it is 
> returned. Otherwise, the new chooseRandom is called, which guarantees to 
> find an eligible node in one call (if there is one at all).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11587) Spelling errors in the Java source

2017-03-28 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-11587:

Attachment: HDFS-11587.001.patch

> Spelling errors in the Java source
> --
>
> Key: HDFS-11587
> URL: https://issues.apache.org/jira/browse/HDFS-11587
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Doris Gu
>Priority: Minor
> Attachments: HDFS-11587.001.patch
>
>
> Found some spelling errors.
> Examples are:
> seperated -> separated 
> seperator -> separator



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11587) Spelling errors in the Java source

2017-03-28 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-11587:

Attachment: (was: HDFS-11587.001.patch)

> Spelling errors in the Java source
> --
>
> Key: HDFS-11587
> URL: https://issues.apache.org/jira/browse/HDFS-11587
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Doris Gu
>Priority: Minor
>
> Found some spelling errors.
> Examples are:
> seperated -> separated 
> seperator -> separator



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-7343) HDFS smart storage management

2017-03-28 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15945026#comment-15945026
 ] 

Kai Zheng commented on HDFS-7343:
-

Thanks [~eddyxu] for your time reviewing this and reading the docs! Very good 
questions. :)

bq. SSM needs to maintain stats of each file / block in NN ...
This needs a bit of clarification. The SSM server maintains the state of each 
file on the SSM side. The NN side only keeps access count records for recently 
accessed files. For example, if 100 files are read during a time period, the 
NN AccessCounterMap holds only 100 entries; when SSM polls and retrieves them, 
the NN removes those 100 entries if necessary (given a threshold). It is the 
SSM side that maintains and aggregates the access counts for all files.
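
A minimal sketch of the NN-side bookkeeping described above, assuming a map 
keyed by file path and one plausible reading of the threshold-based eviction; 
this is an illustrative stand-in, not actual NameNode code:
{code:java}
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Illustrative stand-in for the NN-side AccessCounterMap described above.
class AccessCounterMap {
  private final Map<String, Long> counts = new HashMap<>();

  // Called on each file read: only recently accessed files get an entry,
  // so the map grows with recent activity, not with the total file count.
  synchronized void recordAccess(String path) {
    counts.merge(path, 1L, Long::sum);
  }

  // Called when SSM polls: hand over the counts and evict entries that
  // reached the threshold, leaving aggregation to the SSM side.
  synchronized Map<String, Long> pollAndEvict(long threshold) {
    Map<String, Long> snapshot = new HashMap<>(counts);
    Iterator<Map.Entry<String, Long>> it = counts.entrySet().iterator();
    while (it.hasNext()) {
      if (it.next().getValue() >= threshold) {
        it.remove();
      }
    }
    return snapshot;
  }
}
{code}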

We want SSM to maintain state and access count aggregates for all files, but 
in a compact and concise way that avoids O(n) memory consumption. This is 
possible because, unlike NN, SSM can group and share the stored state info 
based on common folders and access patterns.
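
A rough sketch of how such grouping might keep memory sub-O(n); the 
folder-based scheme below is an assumption for illustration, not the actual 
SSM design:
{code:java}
import java.util.HashMap;
import java.util.Map;

// Assumed folder-based grouping: one shared state record per directory,
// so memory grows with distinct folders/patterns rather than with files.
class GroupedFileState {
  private final Map<String, String> dirState = new HashMap<>();

  void setDirState(String dir, String state) {
    dirState.put(dir, state);
  }

  // A file inherits the state of its nearest ancestor directory.
  String lookup(String path) {
    String p = path;
    while (!p.isEmpty()) {
      int slash = p.lastIndexOf('/');
      p = (slash <= 0) ? "" : p.substring(0, slash);
      String state = dirState.get(p.isEmpty() ? "/" : p);
      if (state != null) {
        return state;
      }
    }
    return null;
  }
}
{code}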

bq. when SSM pulling these stats from NN, what kind of overhead we are 
expecting?
The approach mentioned above is to let SSM fetch the file meta lists in many 
iterations across the inode tree, at a speed the NN can afford.

bq. I think that in the general design, we should work on define a good 
interface time series store for metrics, instead of specifying RocksDB. RocksDB 
might be a good implementation for now.
I agree that a generic interface abstracting away the concrete database 
implementation is a good idea. RocksDB may not be a good fit; you're right 
that we're still thinking about a better one. Initially we wanted to use an 
embedded SQL database to store the collected file states for flexible queries, 
but there seems to be no good option in the Apache space. The needed time 
series features (like continuous aggregation/compaction and sliding window 
counts) would be easy to build for our purpose given a SQL database. Do you 
have any SQL or time series database to recommend? Thanks!

bq. it is not clear to me why both SSM and NN need to persist metrics in two 
separated rocksdb? ...
The current HDFS NameNode already collects many kinds of metrics from DNs, and 
we also want the NN to record certain file access counts. Such metrics can be 
lost during an NN failover or restart. To minimize the memory consumption, we 
suggest enhancing the NN to spill metrics to a local RocksDB. But I think we 
can consider this separately to avoid confusion in the SSM design. Sounds 
good?

bq. How stale or extensible the syntax will be? would the syntax be easy for 
other applications to generate / parse / validate and etc?
We want to make the rules extensible, concise, and flexible. JSON and YAML are 
good data formats, but they can be very verbose for describing a rule, which 
is why I haven't yet seen a DSL well written in them. You make a good 
suggestion here: to let applications generate/validate rules, a Java library 
would be needed. Sounds good?

bq. How to update the accessCount ? how many samples of accessCount to be 
maintained the accessCount during an interval? What if SSM failed to pull for 
the the accessCount?
How to implement {{accessCount}} is worth addressing separately, and we'll 
follow up here later. One thing to note now is that SSM and its applications 
should be able to tolerate access count loss. If some files lose some read 
counts, the impact is that SSM may not adjust their storage options at the 
first opportunity, which should be fine since no data is harmed. In any case 
we need a good trade-off between access count accuracy and system 
complexity/cost.

bq. Maybe we can define SSM rules as soft rules, while make HSM/EC and etc 
rules as hard rules?
If I follow you correctly, I agree that SSM rules are soft rules that may not 
have to happen, but I don't quite understand treating HSM/EC rules as hard 
rules. I thought SSM aims to ease the deployment and admin operations for the 
HSM/EC facilities.

> HDFS smart storage management
> -
>
> Key: HDFS-7343
> URL: https://issues.apache.org/jira/browse/HDFS-7343
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Kai Zheng
>Assignee: Wei Zhou
> Attachments: HDFSSmartStorageManagement-General-20170315.pdf, 
> HDFS-Smart-Storage-Management.pdf, 
> HDFSSmartStorageManagement-Phase1-20170315.pdf, 
> HDFS-Smart-Storage-Management-update.pdf, move.jpg
>
>
> As discussed in HDFS-7285, it would be better to have a comprehensive and 
> flexible storage policy engine considering file attributes, metadata, data 
> temperature, storage type, EC codec, available hardware capabilities, 
> user/application preferences, etc.
> Modified the title to re-purpose this JIRA.
> We'd extend this effor

[jira] [Commented] (HDFS-11284) [SPS]: fix issue of moving blocks with satisfier while changing replication factor

2017-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15944928#comment-15944928
 ] 

Hadoop QA commented on HDFS-11284:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
 6s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}120m 37s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}145m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestHAStateTransitions |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.server.namenode.TestFSImage |
|   | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
|   | hadoop.hdfs.TestFileChecksum |
|   | hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 |
|   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11284 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860825/HDFS-11284-HDFS-10285.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e5fb8c7927ec 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10285 / ff9ccfe |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18870/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18870/testReport/ |
| modules | C: hadoop-hdfs-

[jira] [Assigned] (HDFS-11581) Ozone: Support force delete a container

2017-03-28 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu reassigned HDFS-11581:
-

Assignee: Yuanbo Liu

> Ozone: Support force delete a container
> ---
>
> Key: HDFS-11581
> URL: https://issues.apache.org/jira/browse/HDFS-11581
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Yuanbo Liu
>
> On some occasions, we may want to forcibly delete a container regardless of 
> whether the deletion condition (e.g., the container is empty) is satisfied. 
> This way we can make a best effort to clean up containers. Note that only a 
> CLOSED container can be force deleted.
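
A minimal sketch of the proposed semantics; the types and names below are 
hypothetical illustrations, not the actual Ozone container API:
{code:java}
import java.io.IOException;

// Hypothetical sketch of force-delete semantics for containers;
// names and types are illustrative, not the actual Ozone API.
class ContainerManagerSketch {
  enum LifeCycleState { OPEN, CLOSED }

  static class Container {
    LifeCycleState state;
    long keyCount;
    Container(LifeCycleState state, long keyCount) {
      this.state = state;
      this.keyCount = keyCount;
    }
  }

  void deleteContainer(Container c, boolean force) throws IOException {
    // Only a CLOSED container may be deleted, even with force.
    if (c.state != LifeCycleState.CLOSED) {
      throw new IOException("Container is not CLOSED");
    }
    // Without force, keep the existing safety check: the container
    // must be empty before deletion.
    if (!force && c.keyCount > 0) {
      throw new IOException("Container is not empty; pass force to override");
    }
    // ... best-effort cleanup of the container data would go here ...
  }
}
{code}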



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11486) Client close() should not fail fast if the last block is being decommissioned

2017-03-28 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HDFS-11486:

   Resolution: Fixed
Fix Version/s: 2.8.1
   3.0.0-alpha3
   2.7.4
   2.9.0
   Status: Resolved  (was: Patch Available)

Committed this. Thanks, [~jojochuang] and [~linyiqun].

> Client close() should not fail fast if the last block is being decommissioned
> -
>
> Key: HDFS-11486
> URL: https://issues.apache.org/jira/browse/HDFS-11486
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Fix For: 2.9.0, 2.7.4, 3.0.0-alpha3, 2.8.1
>
> Attachments: HDF-11486.test.patch, HDFS-11486.001.patch, 
> HDFS-11486.002.patch, HDFS-11486.003.patch, HDFS-11486-branch-2.8.003.patch, 
> HDFS-11486.test-inmaintenance.patch
>
>
> If a DFS client closes a file while the last block is being decommissioned, 
> the close() may fail if the decommissioning of the block does not complete 
> in a few seconds.
> When a DataNode is being decommissioned, the NameNode marks the DN's state 
> as DECOMMISSION_INPROGRESS, and blocks with replicas on these DataNodes 
> become under-replicated immediately. A close() call which attempts to 
> complete the last open block will fail if the number of live replicas is 
> below the minimal replication factor, due to too many replicas residing on 
> the decommissioning DataNodes.
> The client internally retries completing the last open block up to 5 times 
> by default, which takes roughly 12 seconds. After that, close() throws an 
> exception like the following, which is typically not handled properly.
> {noformat}
> java.io.IOException: Unable to close file because the last 
> blockBP-33575088-10.0.0.200-1488410554081:blk_1073741827_1003 does not have 
> enough number of replicas.
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:864)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:827)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:793)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
>   at 
> org.apache.hadoop.hdfs.TestDecommission.testCloseWhileDecommission(TestDecommission.java:708)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}
> Once the exception is thrown, the client usually does not attempt to close 
> again, so the file remains open and the last block remains under-replicated.
> Subsequently, the administrator runs the recoverLease tool to salvage the 
> file, but the attempt fails because the block remains under-replicated. It 
> is not clear why the block is never re-replicated, though. Administrators 
> then assume the file is corrupt, because it remains open per fsck 
> -openforwrite and its modification time is hours old.
> In summary, I do not think close() should fail because the last block is 
> being decommissioned. The block has a sufficient number of replicas; it's 
> just that some of them are being decommissioned. Decommissioning should be 
> transparent to clients.
> This issue seems to be more prominent on very large clusters with the min 
> replication factor set to 2.
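
Until a fix lands, a client-side mitigation can be sketched as below. Two 
assumptions: that {{dfs.client.block.write.locateFollowingBlock.retries}} is 
the knob behind the "5 times by default" behavior above, and that close() can 
be retried after this failure (which is version-dependent):
{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;

// Sketch of a client-side mitigation; see the assumptions in the text above.
class CloseRetryExample {
  static Configuration newConf() {
    Configuration conf = new Configuration();
    // Assumption: give completeFile() more attempts so a slow
    // decommission can finish before the client gives up.
    conf.setInt("dfs.client.block.write.locateFollowingBlock.retries", 10);
    return conf;
  }

  // Retry close() itself instead of giving up on the first
  // "does not have enough number of replicas" IOException.
  static void closeWithRetry(FSDataOutputStream out, int attempts)
      throws IOException, InterruptedException {
    for (int i = 1; ; i++) {
      try {
        out.close();
        return;
      } catch (IOException e) {
        if (i >= attempts) {
          throw e;
        }
        Thread.sleep(3000L * i); // simple linear backoff before retrying
      }
    }
  }
}
{code}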



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9705) Refine the behaviour of getFileChecksum when length = 0

2017-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15944867#comment-15944867
 ] 

Hadoop QA commented on HDFS-9705:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 4s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
39s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
38s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
38s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
31s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
6s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
23s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 39s{color} | {color:orange} hadoop-hdfs-project: The patch generated 2 new + 
116 unchanged - 3 fixed = 118 total (was 119) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
3s{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_121. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 56m 41s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}167m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_121 Timed out junit tests | 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
| JDK v1.7.0_121 Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain |
| 

[jira] [Commented] (HDFS-11541) Call RawErasureEncoder and RawErasureDecoder release() methods

2017-03-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15944766#comment-15944766
 ] 

Hadoop QA commented on HDFS-11541:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
2s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 41s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}125m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11541 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860813/HDFS-11541-03.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d5e2381215c6 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 253e3e7 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/18868/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreComm

[jira] [Updated] (HDFS-11284) [SPS]: fix issue of moving blocks with satisfier while changing replication factor

2017-03-28 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-11284:
--
Status: Patch Available  (was: Open)

> [SPS]: fix issue of moving blocks with satisfier while changing replication 
> factor 
> ---
>
> Key: HDFS-11284
> URL: https://issues.apache.org/jira/browse/HDFS-11284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: HDFS-11284-HDFS-10285.001.patch, TestSatisfier.java
>
>
>  When the real replication count of a block doesn't match the replication 
> factor (for example, the real replication is 2 while the replication factor 
> is 3), the satisfier may encounter issues.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11284) [SPS]: fix issue of moving blocks with satisfier while changing replication factor

2017-03-28 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HDFS-11284:
--
Attachment: HDFS-11284-HDFS-10285.001.patch

Currently I cannot reproduce this defect. Attaching a test case that guards 
against it.
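
For reference, a sketch of the scenario the test targets. It assumes the 
HDFS-10285 branch exposes satisfyStoragePolicy() on DistributedFileSystem; 
treat the calls as illustrative:
{code:java}
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

// Sketch of the under-replication + satisfier scenario; assumes the
// HDFS-10285 branch API, so treat satisfyStoragePolicy() as illustrative.
class SatisfierScenarioSketch {
  static void run(DistributedFileSystem dfs, Path file) throws Exception {
    // Precondition: the file has replication factor 3 but currently holds
    // only 2 live replicas (e.g., after a DataNode failure).
    dfs.setStoragePolicy(file, "COLD");
    // Trigger the satisfier while the block is still under-replicated;
    // the defect, if present, would surface during the block moves.
    dfs.satisfyStoragePolicy(file);
  }
}
{code}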

> [SPS]: fix issue of moving blocks with satisfier while changing replication 
> factor 
> ---
>
> Key: HDFS-11284
> URL: https://issues.apache.org/jira/browse/HDFS-11284
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: HDFS-11284-HDFS-10285.001.patch, TestSatisfier.java
>
>
>  When the real replication count of a block doesn't match the replication 
> factor (for example, the real replication is 2 while the replication factor 
> is 3), the satisfier may encounter issues.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11587) Spelling errors in the Java source

2017-03-28 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-11587:

Status: Open  (was: Patch Available)

> Spelling errors in the Java source
> --
>
> Key: HDFS-11587
> URL: https://issues.apache.org/jira/browse/HDFS-11587
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Doris Gu
>Priority: Minor
> Attachments: HDFS-11587.001.patch
>
>
> Found some spelling errors.
> Examples are:
> seperated -> separated 
> seperator -> separator



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11587) Spelling errors in the Java source

2017-03-28 Thread Doris Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15944712#comment-15944712
 ] 

Doris Gu commented on HDFS-11587:
-

Fixed some spelling errors, please check. Thanks.

> Spelling errors in the Java source
> --
>
> Key: HDFS-11587
> URL: https://issues.apache.org/jira/browse/HDFS-11587
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Doris Gu
>Priority: Minor
> Attachments: HDFS-11587.001.patch
>
>
> Found some spelling errors.
> Examples are:
> seperated -> separated 
> seperator -> separator



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11587) Spelling errors in the Java source

2017-03-28 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-11587:

Status: Patch Available  (was: Open)

> Spelling errors in the Java source
> --
>
> Key: HDFS-11587
> URL: https://issues.apache.org/jira/browse/HDFS-11587
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Doris Gu
>Priority: Minor
> Attachments: HDFS-11587.001.patch
>
>
> Found some spelling errors.
> Examples are:
> seperated -> separated 
> seperator -> separator



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11587) Spelling errors in the Java source

2017-03-28 Thread Doris Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doris Gu updated HDFS-11587:

Attachment: HDFS-11587.001.patch

> Spelling errors in the Java source
> --
>
> Key: HDFS-11587
> URL: https://issues.apache.org/jira/browse/HDFS-11587
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Doris Gu
>Priority: Minor
> Attachments: HDFS-11587.001.patch
>
>
> Found some spelling errors.
> Examples are:
> seperated -> separated 
> seperator -> separator



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-11587) Spelling errors in the Java source

2017-03-28 Thread Doris Gu (JIRA)
Doris Gu created HDFS-11587:
---

 Summary: Spelling errors in the Java source
 Key: HDFS-11587
 URL: https://issues.apache.org/jira/browse/HDFS-11587
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Doris Gu
Priority: Minor


Found some spelling errors.
Examples are:
seperated -> separated 
seperator -> separator




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org