[jira] [Created] (HDFS-12258) ec -listPolicies should list all policies in system, no matter it's enabled or disabled

2017-08-03 Thread SammiChen (JIRA)
SammiChen created HDFS-12258:


 Summary: ec -listPolicies should list all policies in system, no 
matter it's enabled or disabled
 Key: HDFS-12258
 URL: https://issues.apache.org/jira/browse/HDFS-12258
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: SammiChen
Assignee: Wei Zhou


ec -listPolicies should list all policies in system, no matter it's enabled or 
disabled
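As a hedged sketch of the requested behavior, the listing should return every policy together with its enabled/disabled state instead of filtering disabled ones out. Class and method names below are hypothetical, not the actual `hdfs ec` implementation.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch only: names and types are hypothetical,
// not the actual hdfs ec tooling.
public class ListPoliciesSketch {
  // policy name -> enabled?
  private final Map<String, Boolean> policies = new LinkedHashMap<>();

  public void addPolicy(String name, boolean enabled) {
    policies.put(name, enabled);
  }

  /** Proposed behavior: list every policy, annotated with its state. */
  public List<String> listAll() {
    List<String> out = new ArrayList<>();
    for (Map.Entry<String, Boolean> e : policies.entrySet()) {
      out.add(e.getKey() + " (" + (e.getValue() ? "enabled" : "disabled") + ")");
    }
    return out;
  }
}
```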



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10880) Federation Mount Table State Store internal API

2017-08-03 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HDFS-10880:
---
Attachment: HDFS-10880-HDFS-10467-007.patch

Addressed [~chris.douglas]'s comments on {{MountTableResolver}}.

> Federation Mount Table State Store internal API
> ---
>
> Key: HDFS-10880
> URL: https://issues.apache.org/jira/browse/HDFS-10880
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Jason Kace
>Assignee: Inigo Goiri
> Attachments: HDFS-10880-HDFS-10467-000.patch, 
> HDFS-10880-HDFS-10467-001.patch, HDFS-10880-HDFS-10467-002.patch, 
> HDFS-10880-HDFS-10467-003.patch, HDFS-10880-HDFS-10467-004.patch, 
> HDFS-10880-HDFS-10467-005.patch, HDFS-10880-HDFS-10467-006.patch, 
> HDFS-10880-HDFS-10467-007.patch
>
>
> The Federation Mount Table State encapsulates the mapping of file paths in 
> the global namespace to a specific NN(nameservice) and local NN path.  The 
> mount table is shared by all router instances and represents a unified view 
> of the global namespace.   The state store API for the mount table allows the 
> related records to be queried, updated and deleted.
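The description above amounts to a longest-prefix lookup from global paths to a nameservice and local path. The following is an illustrative sketch under assumed names (MountTableSketch, addEntry, resolve), not the actual MountTableResolver API from the patch.

```java
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch of mount-table resolution; names are illustrative,
// not the HDFS-10880 API.
public class MountTableSketch {
  // global-namespace path prefix -> "nameservice!localPath"
  private final TreeMap<String, String> mounts = new TreeMap<>();

  public void addEntry(String globalPrefix, String nameservice, String localPath) {
    mounts.put(globalPrefix, nameservice + "!" + localPath);
  }

  /** Resolve a global path via its longest matching mount prefix. */
  public String resolve(String path) {
    String bestPrefix = null;
    for (String prefix : mounts.keySet()) {
      boolean matches = path.equals(prefix)
          || path.startsWith(prefix.endsWith("/") ? prefix : prefix + "/");
      if (matches && (bestPrefix == null || prefix.length() > bestPrefix.length())) {
        bestPrefix = prefix;
      }
    }
    if (bestPrefix == null) {
      return null; // no mount covers this path
    }
    String[] target = mounts.get(bestPrefix).split("!", 2);
    String remainder = path.substring(bestPrefix.length());
    return target[0] + ":" + target[1] + remainder;
  }
}
```

For example, with mounts `/data -> ns0:/data` and `/data/logs -> ns1:/logs`, a path under `/data/logs` resolves to `ns1` by the longer prefix.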






[jira] [Commented] (HDFS-12134) libhdfs++: Add a synchronization interface for the GSSAPI

2017-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113910#comment-16113910
 ] 

Hadoop QA commented on HDFS-12134:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 
46s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-8707 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
24s{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
29s{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
37s{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
55s{color} | {color:green} HDFS-8707 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
44s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 13m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
43s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 13m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
27s{color} | {color:green} hadoop-hdfs-native-client in the patch passed with 
JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}148m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:3117e2a |
| JIRA Issue | HDFS-12134 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880303/HDFS-12134.HDFS-8707.004.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 2ea7664decaf 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / 3117e2a |
| Default Java | 1.7.0_131 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_144 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_131 |
| JDK v1.7.0_131  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20552/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20552/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: Add a synchronization interface for the GSSAPI
> -
>
> Key: HDFS-12134
> URL: https://issues.apache.org/jira/browse/HDFS-12134
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-12134.HDFS-8707.000.patch, 
> 

[jira] [Commented] (HDFS-11082) Erasure Coding : Provide replicated EC policy to just replicating the files

2017-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113908#comment-16113908
 ] 

Hadoop QA commented on HDFS-11082:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
3s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
33s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
46s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 9 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
32s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 10m 32s{color} 
| {color:red} root generated 2 new + 1418 unchanged - 0 fixed = 1420 total (was 
1418) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 50s{color} | {color:orange} root: The patch generated 1 new + 40 unchanged - 
0 fixed = 41 total (was 40) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 17s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
26s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 56s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}147m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestDNS |
|   | hadoop.hdfs.TestErasureCodingPolicies |
|   | hadoop.hdfs.TestErasureCodingPolicyWithSnapshotWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA 

[jira] [Commented] (HDFS-10880) Federation Mount Table State Store internal API

2017-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113875#comment-16113875
 ] 

Hadoop QA commented on HDFS-10880:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-10467 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
23s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} HDFS-10467 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m  
1s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-10467 has 10 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} HDFS-10467 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m 47s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}127m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication |
|   | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-10880 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880320/HDFS-10880-HDFS-10467-006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 18cde8118343 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10467 / 4d63e4a |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20550/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20550/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20550/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20550/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HDFS-12224) Add tests to TestJournalNodeSync for sync after JN downtime

2017-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113854#comment-16113854
 ] 

Hadoop QA commented on HDFS-12224:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
50s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 9 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 55s{color} 
| {color:red} hadoop-hdfs-project_hadoop-hdfs generated 1 new + 441 unchanged - 
1 fixed = 442 total (was 442) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 33s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 5 new + 47 unchanged - 0 fixed = 52 total (was 47) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 37s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12224 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880302/HDFS-12224.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 0ba970442d96 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f4c6b00 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20549/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| javac | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20549/artifact/patchprocess/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20549/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20549/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20549/testReport/ |
| modules | C: 

[jira] [Commented] (HDFS-10899) Add functionality to re-encrypt EDEKs

2017-08-03 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113844#comment-16113844
 ] 

Xiao Chen commented on HDFS-10899:
--

Thanks a lot for making time to get to this, [~daryn].

I will go through the patch and make sure the suggestions are applied. A few 
clarifications:
bq. Why can't a file with a re-encrypting EDEK be renamed?
Details were added to HDFS-11203's description. Basically, we can't easily 
guarantee that re-encryption doesn't miss any renames. Can we discuss this 
further on that jira, to keep the discussions focused?
bq. tracking via path components v.s. via inodes
(Also replied in HDFS-11203, but this could use some clarification.) Tracking 
inodes would work, but that means we would have to scan from ROOT_INODE_ID to 
the largest inode ID regardless of which EZ it is, right? That makes sense if 
the EZ is {{/}}, but for a small EZ we don't want to iterate over all inodes. 
Please elaborate if I misunderstood.
bq. bullet points from the spiel
Thanks a lot for the advice! I think all of those were considered, except for 
{{Sending more edits than can be buffered without a sync will cause a sync 
while holding the write lock}}. I will look at this carefully. For quick 
reference, the batch size defaults to 1000.
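The batching point above can be sketched as follows. This is an assumption-laden illustration (the ReencryptBatching class is hypothetical, not the HDFS-10899 implementation); it only models how a zone's files would be split into lock-scoped batches of the default size 1000, so edits can be synced between batches rather than buffered for the whole zone.

```java
import java.util.ArrayList;
import java.util.List;

// Hedged sketch: each batch models one hold of the namesystem write lock
// followed by an edit-log sync. Names are illustrative only.
public class ReencryptBatching {
  static final int BATCH_SIZE = 1000; // the patch's default, per the comment above

  /** Splits an encryption zone's files into fixed-size batches. */
  public static List<List<String>> toBatches(List<String> zoneFiles) {
    List<List<String>> batches = new ArrayList<>();
    for (int i = 0; i < zoneFiles.size(); i += BATCH_SIZE) {
      batches.add(new ArrayList<>(
          zoneFiles.subList(i, Math.min(i + BATCH_SIZE, zoneFiles.size()))));
    }
    return batches;
  }
}
```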

I will collect more performance numbers from different angles, so everyone can 
rest easy :) Quick question: what tool did you use to blast the NN? Is it 
shareable?

> Add functionality to re-encrypt EDEKs
> -
>
> Key: HDFS-10899
> URL: https://issues.apache.org/jira/browse/HDFS-10899
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption, kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: editsStored, HDFS-10899.01.patch, HDFS-10899.02.patch, 
> HDFS-10899.03.patch, HDFS-10899.04.patch, HDFS-10899.05.patch, 
> HDFS-10899.06.patch, HDFS-10899.07.patch, HDFS-10899.08.patch, 
> HDFS-10899.09.patch, HDFS-10899.10.patch, HDFS-10899.10.wip.patch, 
> HDFS-10899.11.patch, HDFS-10899.12.patch, HDFS-10899.wip.2.patch, 
> HDFS-10899.wip.patch, Re-encrypt edek design doc.pdf, Re-encrypt edek design 
> doc V2.pdf
>
>
> Currently when an encryption zone (EZ) key is rotated, it only takes effect 
> on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key 
> rotation, for improved security.






[jira] [Commented] (HDFS-11082) Erasure Coding : Provide replicated EC policy to just replicating the files

2017-08-03 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113829#comment-16113829
 ] 

SammiChen commented on HDFS-11082:
--

Thanks [~andrew.wang] for the quick review! I just realized that the document is 
not updated; I will update it later.
{quote}
Also need to think about the behavior of getErasureCodingPolicy. Right now it 
returns "null" to mean replication. With this patch, a user would have to check 
both for "null" and "replication-1-2-64K" to know if it's replicated. It'd be 
good to choose one or the other to make it simpler for downstreams. "null" 
would be more compatible, and it'd hide the special replicated EC policy from 
non-admin users which I like.
{quote}
Currently, the replication policy can only be set on a directory, not on a 
file, because in the current file header format the replication factor and the 
EC policy ID share the same bits. So a file can be either traditionally 
replicated or effectively erasure-coded; it cannot carry the replication EC 
policy itself.
For getErasureCodingPolicy on a directory, returning "null" or 
"replication-1-2-64k" each has pros and cons. If we return "null" for the 
replication EC policy,
Pros: 1. It's easy for downstream applications to check whether a path is 
effectively EC or replicated.
Cons: 1. After the replication EC policy is set on a directory, it can no 
longer be retrieved, so from the user's point of view there is no way to unset 
the policy or even be aware of it. The user cannot distinguish a traditional 
replication directory from a replication-EC-policy directory.
If we return "replication-1-2-64k", the pros and cons are reversed. So it's a 
design choice: one option gives the user all the information and lets them 
decide; the other handles it internally on the user's behalf.
I'm inclined to give the user all the information, but I'm OK with the "null" 
solution if it will clearly benefit users more. I think you have more 
experience on this. You make the call.
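The bit-sharing constraint mentioned above can be sketched like this; the field width and flag position are assumptions for illustration only, not the actual HDFS inode header layout.

```java
// Illustrative sketch: one shared bit field in the file header holds either the
// replication factor or the EC policy ID, so a file cannot carry both.
// FIELD_BITS and the flag bit are assumed values, not the real layout.
public class HeaderBitsSketch {
  static final int FIELD_BITS = 12;                 // assumed shared-field width
  static final long FIELD_MASK = (1L << FIELD_BITS) - 1;

  /** Pack: the shared field holds the EC policy ID for striped files,
   *  or the replication factor otherwise. */
  static long pack(boolean striped, int value) {
    long header = value & FIELD_MASK;
    if (striped) {
      header |= (1L << FIELD_BITS); // assumed "striped" flag bit
    }
    return header;
  }

  static boolean isStriped(long header) {
    return ((header >> FIELD_BITS) & 1) == 1;
  }

  static int sharedField(long header) {
    return (int) (header & FIELD_MASK);
  }
}
```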

{quote}
This is not directly related (and I think we discussed this a bit on another 
JIRA) but I'm not happy with our getECPolicy API right now. Right now it 
returns the effective EC policy. Without being able to query the actual EC 
policy, the behavior when setting/unsetting is kind of tricky. Should we add an 
"getActualECPolicy" API? Can be a follow-on JIRA.
{quote}
Do you mean {{getErasureCodingPolicy}} when you say {{getECPolicy}}? I don't 
quite remember when we discussed this issue. Can you give me more hints?

The suggestions in all other comments will be addressed in the next patch.

> Erasure Coding : Provide replicated EC policy to just replicating the files
> ---
>
> Key: HDFS-11082
> URL: https://issues.apache.org/jira/browse/HDFS-11082
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Rakesh R
>Assignee: SammiChen
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11082.001.patch
>
>
> The idea of this jira is to provide a new {{replicated EC policy}} so that we 
> can override the EC policy on a parent directory and go back to just 
> replicating the files based on replication factors.
> Thanks [~andrew.wang] for the 
> [discussions|https://issues.apache.org/jira/browse/HDFS-11072?focusedCommentId=15620743=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15620743].






[jira] [Comment Edited] (HDFS-11082) Erasure Coding : Provide replicated EC policy to just replicating the files

2017-08-03 Thread SammiChen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110201#comment-16110201
 ] 

SammiChen edited comment on HDFS-11082 at 8/4/17 1:53 AM:
--

Hi [~andrew.wang], I'm preparing the patch. Here are my overall thoughts on how 
to provide the replicated EC policy:
1. The replicated EC policy is one of the system built-in policies; like other 
built-in policies such as RS-6-3-64k, it can be listed, enabled, and disabled, 
but cannot be removed.
2. The replicated EC policy will have the codec name "replication" with the 
(1,1) parameter combination.
3. The replicated EC policy can be set, unset, and queried on a directory, but 
not on a file.
4. The replicated EC policy should be treated specially during block allocation 
and DFSClient-side data reads/writes.

I'd like to hear your opinions.
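Points 1 and 2 above could be sketched as follows; class and method names are hypothetical, not Hadoop's actual ErasureCodingPolicy API, and the naming scheme is only an illustration.

```java
// Hedged sketch: the special policy is just a built-in policy whose codec is
// "replication" with a (1,1) schema; no real striping or parity math applies.
public class ReplicatedEcPolicySketch {
  final String codec;
  final int dataUnits;
  final int parityUnits;
  final int cellSize;

  public ReplicatedEcPolicySketch(String codec, int dataUnits,
                                  int parityUnits, int cellSize) {
    this.codec = codec;
    this.dataUnits = dataUnits;
    this.parityUnits = parityUnits;
    this.cellSize = cellSize;
  }

  /** True only for the special replication policy: codec "replication", (1,1). */
  public boolean isReplicationPolicy() {
    return "replication".equals(codec) && dataUnits == 1 && parityUnits == 1;
  }

  /** Name in the "codec-data-parity-cellsize" style used in this discussion. */
  public String getName() {
    return codec + "-" + dataUnits + "-" + parityUnits + "-"
        + (cellSize / 1024) + "k";
  }
}
```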




was (Author: sammi):
Hi [~andrew.wang], I'm preparing the patch. Here are my overall thoughts on how 
to provide the replicated EC policy:
1. The replicated EC policy is one of the system built-in policies; like other 
built-in policies such as RS-6-3-64k, it can be listed, enabled, and disabled, 
but cannot be removed.
2. The replicated EC policy will have the codec name "replication" with the 
(1,1) parameter combination.
3. The replicated EC policy can be set, unset, and queried on both directories 
and files.
4. The replicated EC policy should be treated specially during block allocation 
and DFSClient-side data reads/writes.

I'd like to hear your opinions.



> Erasure Coding : Provide replicated EC policy to just replicating the files
> ---
>
> Key: HDFS-11082
> URL: https://issues.apache.org/jira/browse/HDFS-11082
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Rakesh R
>Assignee: SammiChen
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11082.001.patch
>
>
> The idea of this jira is to provide a new {{replicated EC policy}} so that we 
> can override the EC policy on a parent directory and go back to just 
> replicating the files based on replication factors.
> Thanks [~andrew.wang] for the 
> [discussions|https://issues.apache.org/jira/browse/HDFS-11072?focusedCommentId=15620743=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15620743].






[jira] [Commented] (HDFS-12251) Add document for StreamCapabilities

2017-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113797#comment-16113797
 ] 

Hadoop QA commented on HDFS-12251:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 17s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 10s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12251 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12880272/HDFS-12251.02.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 9cd925ec4f5c 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f4c6b00 |
| modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/20548/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add document for StreamCapabilities
> ---
>
> Key: HDFS-12251
> URL: https://issues.apache.org/jira/browse/HDFS-12251
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha4
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-12251.00.patch, HDFS-12251.01.patch, 
> HDFS-12251.02.patch
>
>
> Update filesystem docs to describe the purpose and usage of 
> {{StreamCapabilities}}.






[jira] [Commented] (HDFS-12221) Replace xcerces in XmlEditsVisitor

2017-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16113770#comment-16113770
 ] 

Hadoop QA commented on HDFS-12221:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} | {color:red} HDFS-12221 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-12221 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12880277/edits_hdfs-12221.patch |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/20547/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Replace xcerces in XmlEditsVisitor 
> ---
>
> Key: HDFS-12221
> URL: https://issues.apache.org/jira/browse/HDFS-12221
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha4
>Reporter: Lei (Eddy) Xu
>Assignee: Ajay Yadav
> Attachments: edits_hdfs-12221.patch, fsimage_hdfs-12221.xml, 
> HDFS-12221.01.patch
>
>
> XmlEditsVisitor should use the new XML capabilities in the newer JDK to make 
> JAR shading easier (HADOOP-14672)






[jira] [Commented] (HDFS-12036) Add audit log for getErasureCodingPolicy, getErasureCodingPolicies, getErasureCodingCodecs

2017-08-03 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16113758#comment-16113758
 ] 

Wei-Chiu Chuang commented on HDFS-12036:


LGTM

> Add audit log for getErasureCodingPolicy, getErasureCodingPolicies, 
> getErasureCodingCodecs
> --
>
> Key: HDFS-12036
> URL: https://issues.apache.org/jira/browse/HDFS-12036
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.0.0-alpha4
>Reporter: Wei-Chiu Chuang
>Assignee: Huafeng Wang
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-12036.001.patch, HDFS-12036.002.patch
>
>
> These three FSNameSystem operations do not yet record audit logs. I am not 
> sure how useful these audit logs would be, but thought I should file them so 
> that they don't get dropped if they turn out to be needed.






[jira] [Commented] (HDFS-10880) Federation Mount Table State Store internal API

2017-08-03 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16113751#comment-16113751
 ] 

Chris Douglas commented on HDFS-10880:
--

+1 overall. Optional, minor things before commit:
* {{computeIfAbsent}} can use a method reference instead of creating a new 
{{Function}} object, as in:
{code:java}
private PathLocation lookupLocation(String path) {
  // content of #apply in v006
}

@Override
public PathLocation getDestinationForPath(final String path)
    throws IOException {
  verifyMountTable();
  readLock.lock();
  try {
    return this.locationCache.computeIfAbsent(
        path, this::lookupLocation);
  } finally {
    readLock.unlock();
  }
}
{code}
* It's probably still correct to hold the read lock before calling 
{{computeIfAbsent}}, so this doesn't add items after they've been removed while 
holding the write lock.
* {{MountTableResolver#buildLocation}} can be static
* {{MountTableResolver#toString}} should hold the read lock
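For illustration, the read-lock-plus-{{computeIfAbsent}} pattern discussed above can be exercised in a stand-alone sketch. This is not the actual MountTableResolver code; the class, field, and method names below are illustrative stand-ins using only java.util classes:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LocationCacheSketch {
  private final Map<String, String> locationCache = new ConcurrentHashMap<>();
  private final ReadWriteLock lock = new ReentrantReadWriteLock();
  int lookups = 0;  // counts actual resolutions; exposed only for the demo

  // Stand-in for the expensive path resolution (the body of #apply in v006).
  private String lookupLocation(String path) {
    lookups++;
    return "nn1:" + path;
  }

  public String getDestinationForPath(String path) {
    // Hold the read lock so a concurrent invalidation (done under the
    // write lock) cannot interleave with the cache fill.
    lock.readLock().lock();
    try {
      return locationCache.computeIfAbsent(path, this::lookupLocation);
    } finally {
      lock.readLock().unlock();
    }
  }

  public static void main(String[] args) {
    LocationCacheSketch cache = new LocationCacheSketch();
    cache.getDestinationForPath("/tmp/a");
    cache.getDestinationForPath("/tmp/a");  // second call is served from the cache
    System.out.println(cache.lookups);      // prints 1
  }
}
```

The method reference ({{this::lookupLocation}}) avoids allocating a new {{Function}} object per call, and {{ConcurrentHashMap.computeIfAbsent}} guarantees the mapping function runs at most once per missing key.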

> Federation Mount Table State Store internal API
> ---
>
> Key: HDFS-10880
> URL: https://issues.apache.org/jira/browse/HDFS-10880
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Jason Kace
>Assignee: Inigo Goiri
> Attachments: HDFS-10880-HDFS-10467-000.patch, 
> HDFS-10880-HDFS-10467-001.patch, HDFS-10880-HDFS-10467-002.patch, 
> HDFS-10880-HDFS-10467-003.patch, HDFS-10880-HDFS-10467-004.patch, 
> HDFS-10880-HDFS-10467-005.patch, HDFS-10880-HDFS-10467-006.patch
>
>
> The Federation Mount Table State encapsulates the mapping of file paths in 
> the global namespace to a specific NN(nameservice) and local NN path.  The 
> mount table is shared by all router instances and represents a unified view 
> of the global namespace.   The state store API for the mount table allows the 
> related records to be queried, updated and deleted.






[jira] [Commented] (HDFS-12189) TestPread#testPreadFailureWithChangedBlockLocations fails intermittently

2017-08-03 Thread Ajay Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16113703#comment-16113703
 ] 

Ajay Yadav commented on HDFS-12189:
---

Hi [~brahmareddy], I tested it in a loop (20 times) and it didn't fail. Do you 
have more information about the failure of this case?

> TestPread#testPreadFailureWithChangedBlockLocations fails intermittently
> 
>
> Key: HDFS-12189
> URL: https://issues.apache.org/jira/browse/HDFS-12189
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Brahma Reddy Battula
>
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException: 1
>   at 
> org.apache.hadoop.hdfs.TestPread.doPreadTestWithChangedLocations(TestPread.java:656)
>   at 
> org.apache.hadoop.hdfs.TestPread.testPreadFailureWithChangedBlockLocations(TestPread.java:566)
> {noformat}






[jira] [Commented] (HDFS-12131) Add some of the FSNamesystem JMX values as metrics

2017-08-03 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16113661#comment-16113661
 ] 

Hudson commented on HDFS-12131:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12114 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12114/])
HDFS-12131. Add some of the FSNamesystem JMX values as metrics. (wang: rev 
f4c6b00a9f48ae7667db4035b641769efc3bb7cf)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/metrics/TestNameNodeMetrics.java
* (edit) hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java


> Add some of the FSNamesystem JMX values as metrics
> --
>
> Key: HDFS-12131
> URL: https://issues.apache.org/jira/browse/HDFS-12131
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-beta1, 2.8.3
>
> Attachments: HDFS-12131.000.patch, HDFS-12131.001.patch, 
> HDFS-12131.002.patch, HDFS-12131.002.patch, HDFS-12131.003.patch, 
> HDFS-12131.004.patch, HDFS-12131.005.patch, HDFS-12131.006.patch, 
> HDFS-12131-branch-2.006.patch, HDFS-12131-branch-2.8.006.patch
>
>
> A number of useful numbers are emitted via the FSNamesystem JMX, but not 
> through the metrics system. These would be useful to be able to track over 
> time, e.g. to alert on via standard metrics systems or to view trends and 
> rate changes:
> * NumLiveDataNodes
> * NumDeadDataNodes
> * NumDecomLiveDataNodes
> * NumDecomDeadDataNodes
> * NumDecommissioningDataNodes
> * NumStaleStorages
> * VolumeFailuresTotal
> * EstimatedCapacityLostTotal
> * NumInMaintenanceLiveDataNodes
> * NumInMaintenanceDeadDataNodes
> * NumEnteringMaintenanceDataNodes
> This is a simple change that just requires annotating the JMX methods with 
> {{@Metric}}.






[jira] [Commented] (HDFS-8893) DNs with failed volumes stop serving during rolling upgrade

2017-08-03 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16113651#comment-16113651
 ] 

Andrew Wang commented on HDFS-8893:
---

Ping since this one is still on the critical list. Any progress Rushabh, Daryn?

> DNs with failed volumes stop serving during rolling upgrade
> ---
>
> Key: HDFS-8893
> URL: https://issues.apache.org/jira/browse/HDFS-8893
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Rushabh S Shah
>Assignee: Daryn Sharp
>Priority: Critical
>
> When a rolling upgrade starts, all DNs try to write a rolling_upgrade marker 
> to each of their volumes. If one of the volumes is bad, this will fail. When 
> this failure happens, the DN does not update the key it received from the NN.
> Unfortunately we had one failed volume on all the 3 datanodes which were 
> having replica.
> Keys expire after 20 hours so at about 20 hours into the rolling upgrade, the 
> DNs with failed volumes will stop serving clients.
> Here is the stack trace on the datanode size:
> {noformat}
> 2015-08-11 07:32:28,827 [DataNode: heartbeating to 8020] WARN 
> datanode.DataNode: IOException in offerService
> java.io.IOException: Read-only file system
> at java.io.UnixFileSystem.createFileExclusively(Native Method)
> at java.io.File.createNewFile(File.java:947)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.setRollingUpgradeMarkers(BlockPoolSliceStorage.java:721)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.setRollingUpgradeMarker(DataStorage.java:173)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.setRollingUpgradeMarker(FsDatasetImpl.java:2357)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.signalRollingUpgrade(BPOfferService.java:480)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.handleRollingUpgradeStatus(BPServiceActor.java:626)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:677)
> at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:833)
> at java.lang.Thread.run(Thread.java:722)
> {noformat}






[jira] [Commented] (HDFS-11885) createEncryptionZone should not block on initializing EDEK cache

2017-08-03 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16113644#comment-16113644
 ] 

Andrew Wang commented on HDFS-11885:


I'm fine with this plan; it was an oversight not to use RetryStartFileException 
in this case. Though I'm wondering: how do you trigger the async fetch in 
startFile without keeping this code?

> createEncryptionZone should not block on initializing EDEK cache
> 
>
> Key: HDFS-11885
> URL: https://issues.apache.org/jira/browse/HDFS-11885
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.6.5
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Critical
> Attachments: HDFS-11885.001.patch, HDFS-11885.002.patch, 
> HDFS-11885.003.patch, HDFS-11885.004.patch
>
>
> When creating an encryption zone, we call {{ensureKeyIsInitialized}}, which 
> calls {{provider.warmUpEncryptedKeys(keyName)}}. This is a blocking call, 
> which attempts to fill the key cache up to the low watermark.
> If the KMS is down or slow, this can take a very long time, and cause the 
> createZone RPC to fail with a timeout.






[jira] [Updated] (HDFS-10880) Federation Mount Table State Store internal API

2017-08-03 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HDFS-10880:
---
Attachment: HDFS-10880-HDFS-10467-006.patch

Small fix after testing in production.

> Federation Mount Table State Store internal API
> ---
>
> Key: HDFS-10880
> URL: https://issues.apache.org/jira/browse/HDFS-10880
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Jason Kace
>Assignee: Inigo Goiri
> Attachments: HDFS-10880-HDFS-10467-000.patch, 
> HDFS-10880-HDFS-10467-001.patch, HDFS-10880-HDFS-10467-002.patch, 
> HDFS-10880-HDFS-10467-003.patch, HDFS-10880-HDFS-10467-004.patch, 
> HDFS-10880-HDFS-10467-005.patch, HDFS-10880-HDFS-10467-006.patch
>
>
> The Federation Mount Table State encapsulates the mapping of file paths in 
> the global namespace to a specific NN(nameservice) and local NN path.  The 
> mount table is shared by all router instances and represents a unified view 
> of the global namespace.   The state store API for the mount table allows the 
> related records to be queried, updated and deleted.






[jira] [Updated] (HDFS-12131) Add some of the FSNamesystem JMX values as metrics

2017-08-03 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12131:
---
   Resolution: Fixed
Fix Version/s: 2.8.3
   3.0.0-beta1
   2.9.0
   Status: Resolved  (was: Patch Available)

Wonderful! Thanks Erik, I've committed this to trunk, branch-2, branch-2.8.

> Add some of the FSNamesystem JMX values as metrics
> --
>
> Key: HDFS-12131
> URL: https://issues.apache.org/jira/browse/HDFS-12131
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-beta1, 2.8.3
>
> Attachments: HDFS-12131.000.patch, HDFS-12131.001.patch, 
> HDFS-12131.002.patch, HDFS-12131.002.patch, HDFS-12131.003.patch, 
> HDFS-12131.004.patch, HDFS-12131.005.patch, HDFS-12131.006.patch, 
> HDFS-12131-branch-2.006.patch, HDFS-12131-branch-2.8.006.patch
>
>
> A number of useful numbers are emitted via the FSNamesystem JMX, but not 
> through the metrics system. These would be useful to be able to track over 
> time, e.g. to alert on via standard metrics systems or to view trends and 
> rate changes:
> * NumLiveDataNodes
> * NumDeadDataNodes
> * NumDecomLiveDataNodes
> * NumDecomDeadDataNodes
> * NumDecommissioningDataNodes
> * NumStaleStorages
> * VolumeFailuresTotal
> * EstimatedCapacityLostTotal
> * NumInMaintenanceLiveDataNodes
> * NumInMaintenanceDeadDataNodes
> * NumEnteringMaintenanceDataNodes
> This is a simple change that just requires annotating the JMX methods with 
> {{@Metric}}.






[jira] [Commented] (HDFS-12221) Replace xcerces in XmlEditsVisitor

2017-08-03 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16113612#comment-16113612
 ] 

Andrew Wang commented on HDFS-12221:


Please do! Thanks Ajay.

> Replace xcerces in XmlEditsVisitor 
> ---
>
> Key: HDFS-12221
> URL: https://issues.apache.org/jira/browse/HDFS-12221
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha4
>Reporter: Lei (Eddy) Xu
>Assignee: Ajay Yadav
> Attachments: edits_hdfs-12221.patch, fsimage_hdfs-12221.xml, 
> HDFS-12221.01.patch
>
>
> XmlEditsVisitor should use the new XML capabilities in the newer JDK to make 
> JAR shading easier (HADOOP-14672)






[jira] [Commented] (HDFS-12224) Add tests to TestJournalNodeSync for sync after JN downtime

2017-08-03 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16113566#comment-16113566
 ] 

Arpit Agarwal commented on HDFS-12224:
--

The \@Metrics annotation on JournalNodeSyncer looks unnecessary.

+1 with that addressed.

> Add tests to TestJournalNodeSync for sync after JN downtime
> ---
>
> Key: HDFS-12224
> URL: https://issues.apache.org/jira/browse/HDFS-12224
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-12224.001.patch, HDFS-12224.002.patch, 
> HDFS-12224.003.patch, HDFS-12224.004.patch
>
>
> Adding unit tests for testing JN sync when the JN has a downtime and is 
> formatted.






[jira] [Commented] (HDFS-12221) Replace xcerces in XmlEditsVisitor

2017-08-03 Thread Ajay Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16113581#comment-16113581
 ] 

Ajay Yadav commented on HDFS-12221:
---

[~eddyxu], [~andrew.wang]: if we agree on removing the xerces dependency, then I 
can update the patch with changes in the pom files.

> Replace xcerces in XmlEditsVisitor 
> ---
>
> Key: HDFS-12221
> URL: https://issues.apache.org/jira/browse/HDFS-12221
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha4
>Reporter: Lei (Eddy) Xu
>Assignee: Ajay Yadav
> Attachments: edits_hdfs-12221.patch, fsimage_hdfs-12221.xml, 
> HDFS-12221.01.patch
>
>
> XmlEditsVisitor should use the new XML capabilities in the newer JDK to make 
> JAR shading easier (HADOOP-14672)






[jira] [Commented] (HDFS-11082) Erasure Coding : Provide replicated EC policy to just replicating the files

2017-08-03 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16113552#comment-16113552
 ] 

Andrew Wang commented on HDFS-11082:


Hi Sammi, this looks good overall, thanks for working on this! A few review 
comments:

* We should add documentation and javadocs describing this new special policy 
so users and admins can be aware
* Also need to think about the behavior of {{getErasureCodingPolicy}}. Right 
now it returns "null" to mean replication. With this patch, a user would have 
to check both for "null" and "replication-1-2-64K" to know if it's replicated. 
It'd be good to choose one or the other to make it simpler for downstreams. 
"null" would be more compatible, and it'd hide the special replicated EC policy 
from non-admin users which I like.
* Please add messages to the asserts in the tests to help with later debugging
* Is this policy enabled by default? I think it should be if not.
* Would be nice to rename the paths in the test cases to be more descriptive. 
As an example, right now we have:

{code}
final Path rootPath = new Path("/striped");
final Path childPath = new Path(rootPath, "replica");
final Path subChildPath = new Path(childPath, "replica");
final Path filePath = new Path(childPath, "file");
final Path filePath2 = new Path(subChildPath, "file");
{code}

Instead, perhaps something more like:

{code}
final Path rootPath = new Path("/striped");
final Path replicaPath = new Path(rootPath, "replica");
final Path subReplicaPath = new Path(replicaPath, "subreplica");
final Path replicaFilePath = new Path(replicaPath, "file");
final Path subReplicaFilePath = new Path(subReplicaPath, "file");
{code}

This is not directly related (and I think we discussed this a bit on another 
JIRA) but I'm not happy with our getECPolicy API right now. Right now it 
returns the effective EC policy. Without being able to query the actual EC 
policy, the behavior when setting/unsetting is kind of tricky. Should we add an 
"getActualECPolicy" API? Can be a follow-on JIRA.

If you don't mind, one immediate improvement we could make is documenting in 
the {{getECPolicy}} javadoc that it returns the effective EC policy.

> Erasure Coding : Provide replicated EC policy to just replicating the files
> ---
>
> Key: HDFS-11082
> URL: https://issues.apache.org/jira/browse/HDFS-11082
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Rakesh R
>Assignee: SammiChen
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11082.001.patch
>
>
> The idea of this jira is to provide a new {{replicated EC policy}} so that we 
> can override the EC policy on a parent directory and go back to just 
> replicating the files based on replication factors.
> Thanks [~andrew.wang] for the 
> [discussions|https://issues.apache.org/jira/browse/HDFS-11072?focusedCommentId=15620743&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15620743].






[jira] [Commented] (HDFS-12196) Ozone: DeleteKey-2: Implement container recycling service to delete stale blocks at background

2017-08-03 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16113492#comment-16113492
 ] 

Chen Liang commented on HDFS-12196:
---

Thanks [~cheersyang] for the reply and the updated patch!

I'm still concerned about the result caching though... I agree with you that it 
can save RPC calls, but the thing is that it can be very tricky to get the 
caching to work properly. More specifically:

1. One potential issue is that there seems to be no purging of the cache, i.e. 
once an entry gets added to the {{resultList}}, it never gets removed. So the 
size of {{resultList}} will monotonically increase to the point where no more 
entries can be added, and then the result cache will no longer be helpful for 
any further entry. Also, since entries are only added while there is space, if 
there are no more queries for those cached results, then without purging we may 
end up just holding a bunch of objects in memory that will never be useful. I 
think ideally the cache should be removing entries based on factors such as 
access time, frequency, etc. In short, the cache needs purging.

2. There seems to be no easy way to check the current {{resultList}} 
efficiently; any service using it will have to scan the entire cache. Namely, 
since it is a list, even if some service wants to check whether the result of a 
call is cached before making the call, it still needs to iterate over all the 
entries, so if the list is large enough, this could be even less efficient than 
contacting the datanodes. In short, the cache needs fast lookup. Another side 
issue is that the entire {{resultList}} object is exposed and returned to the 
caller, so a bug in the caller can easily corrupt the list...

I'm not against adding a cache here, just saying we should probably pay a little 
attention. If we really want a cache here, it shouldn't be hard to implement one 
on top of either 
[LRUMap|https://commons.apache.org/proper/commons-collections/apidocs/org/apache/commons/collections4/map/LRUMap.html]
 or [Guava loading cache|https://github.com/google/guava/wiki/CachesExplained], 
which should make purging and fast lookup very easy, even trivial. (One example 
is the cache in {{XceiverClientManager}}, just FYI.)
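For reference, a bounded LRU cache with automatic purging and O(1) lookup takes only a few lines even without those libraries. This is a minimal stand-alone sketch using only java.util (the class name is hypothetical, not Ozone code); the real LRUMap and Guava caches add features like time-based expiry on top of the same idea:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Access-ordered LinkedHashMap: removeEldestEntry evicts the least recently
// used entry once the bound is exceeded, giving purging plus O(1) lookup.
public class LruResultCache<K, V> extends LinkedHashMap<K, V> {
  private final int maxEntries;

  public LruResultCache(int maxEntries) {
    super(16, 0.75f, true);  // accessOrder = true => LRU iteration order
    this.maxEntries = maxEntries;
  }

  @Override
  protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
    return size() > maxEntries;  // purge once we go over the bound
  }

  public static void main(String[] args) {
    LruResultCache<String, String> cache = new LruResultCache<>(2);
    cache.put("a", "1");
    cache.put("b", "2");
    cache.get("a");          // touch "a" so "b" becomes the eldest entry
    cache.put("c", "3");     // exceeds the bound, evicts "b"
    System.out.println(cache.keySet());  // prints [a, c]
  }
}
```

Unlike the plain {{resultList}}, lookups are by key and stale entries fall out on their own; wrapping the map (rather than returning it to callers) would also address the exposure concern above.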

> Ozone: DeleteKey-2: Implement container recycling service to delete stale 
> blocks at background
> --
>
> Key: HDFS-12196
> URL: https://issues.apache.org/jira/browse/HDFS-12196
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
> Attachments: HDFS-12196-HDFS-7240.001.patch, 
> HDFS-12196-HDFS-7240.002.patch
>
>
> Implement a recycling service running on the datanode to delete stale blocks. 
> The recycling service scans stale blocks for each container and deletes 
> chunks and references periodically.






[jira] [Commented] (HDFS-12251) Add document for StreamCapabilities

2017-08-03 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16113483#comment-16113483
 ] 

Sean Busbey commented on HDFS-12251:


Questions I have after reading the current draft:

Do all output streams returned from FileSystem implement StreamCapabilities? (I 
think no.)

Presuming no, what should I assume about an output stream that gets returned to 
me that doesn't implement StreamCapabilities? (I think it's up to the 
application, but data-loss-sensitive applications need to presume that no 
operations actually work.)
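The conservative pattern a data-loss-sensitive application would use can be sketched as below. The real interface is org.apache.hadoop.fs.StreamCapabilities with a boolean hasCapability(String) method; the stand-in interface and class names here are illustrative only, to keep the sketch self-contained:

```java
import java.io.ByteArrayOutputStream;
import java.io.OutputStream;

public class CapabilityCheckSketch {
  // Stand-in for org.apache.hadoop.fs.StreamCapabilities.
  interface StreamCapabilities {
    boolean hasCapability(String capability);
  }

  // Example stream that advertises hflush support.
  static class SyncableStream extends ByteArrayOutputStream
      implements StreamCapabilities {
    @Override
    public boolean hasCapability(String capability) {
      return "hflush".equals(capability);
    }
  }

  // Conservative check: if the stream does not implement the interface at
  // all, presume the capability is absent rather than guessing it works.
  static boolean supportsHflush(OutputStream out) {
    return out instanceof StreamCapabilities
        && ((StreamCapabilities) out).hasCapability("hflush");
  }

  public static void main(String[] args) {
    System.out.println(supportsHflush(new SyncableStream()));        // prints true
    System.out.println(supportsHflush(new ByteArrayOutputStream())); // prints false
  }
}
```

This matches the "presume no operations actually work" default: a plain OutputStream that never opted into the interface simply reports no capabilities.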

> Add document for StreamCapabilities
> ---
>
> Key: HDFS-12251
> URL: https://issues.apache.org/jira/browse/HDFS-12251
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha4
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-12251.00.patch, HDFS-12251.01.patch, 
> HDFS-12251.02.patch
>
>
> Update filesystem docs to describe the purpose and usage of 
> {{StreamCapabilities}}.






[jira] [Commented] (HDFS-11644) Support for querying outputstream capabilities

2017-08-03 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16113479#comment-16113479
 ] 

Sean Busbey commented on HDFS-11644:


{quote}
In the meantime, I have a question for hbase. As StreamCapabilities is bind to 
an OutputStream, Hbase needs to firstly open a file for write (i.e., getting 
the output stream, before it can query the capabilities. Would this satisfy the 
needs from hbase side?
{quote}

Yeah that's fine I think. I'll come complain if implementing use of it makes me 
change my mind. ;)

> Support for querying outputstream capabilities
> --
>
> Key: HDFS-11644
> URL: https://issues.apache.org/jira/browse/HDFS-11644
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Manoj Govindassamy
>  Labels: hdfs-ec-3.0-must-do
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: HDFS-11644.01.patch, HDFS-11644.02.patch, 
> HDFS-11644.03.patch, HDFS-11644-branch-2.01.patch
>
>
> FSDataOutputStream#hsync checks if a stream implements Syncable, and if so, 
> calls hsync. Otherwise, it just calls flush. This is used, for instance, by 
> YARN's FileSystemTimelineWriter.
> DFSStripedOutputStream extends DFSOutputStream, which implements Syncable. 
> However, DFSStripedOS throws a runtime exception when the Syncable methods 
> are called.
> We should refactor the inheritance structure so DFSStripedOS does not 
> implement Syncable.






[jira] [Commented] (HDFS-12251) Add document for StreamCapabilities

2017-08-03 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16113475#comment-16113475
 ] 

Sean Busbey commented on HDFS-12251:


Am I correctly deducing from these docs changes that there isn't an ability to 
query about {{append}}? Should there be?

> Add document for StreamCapabilities
> ---
>
> Key: HDFS-12251
> URL: https://issues.apache.org/jira/browse/HDFS-12251
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha4
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-12251.00.patch, HDFS-12251.01.patch, 
> HDFS-12251.02.patch
>
>
> Update filesystem docs to describe the purpose and usage of 
> {{StreamCapabilities}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12251) Add document for StreamCapabilities

2017-08-03 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113472#comment-16113472
 ] 

Sean Busbey commented on HDFS-12251:


{code}
1233 * `StreamCapabilties.HFLUSH` ("*hflush*"): the capability to flush out 
the data
1234 in client's buffer.
1235 * `StreamCapabilities.HSYNC` ("*hsync*"): capability to flush out the 
data in
1236 client's buffer and the disk device.
{code}

StreamCapabilities.StreamCapability isn't public, so we shouldn't refer to it in 
downstream-facing documentation. Just list the strings.

> Add document for StreamCapabilities
> ---
>
> Key: HDFS-12251
> URL: https://issues.apache.org/jira/browse/HDFS-12251
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha4
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-12251.00.patch, HDFS-12251.01.patch, 
> HDFS-12251.02.patch
>
>
> Update filesystem docs to describe the purpose and usage of 
> {{StreamCapabilities}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12134) libhdfs++: Add a synchronization interface for the GSSAPI

2017-08-03 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-12134:
---
Attachment: HDFS-12134.HDFS-8707.004.patch

It seems like one of the recent patches to the build system sometimes causes the 
hadoop-hdfs-native-client tests to run only the libhdfs (JVM/JNI) side of things.  
I'll look into what could be causing this.

Doing a clean rebuild with Maven showed a deterministic error in a test that 
wasn't modified to reflect one of the code changes [~mdeepak] suggested.  Patch 
004 should address it.

> libhdfs++: Add a synchronization interface for the GSSAPI
> -
>
> Key: HDFS-12134
> URL: https://issues.apache.org/jira/browse/HDFS-12134
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-12134.HDFS-8707.000.patch, 
> HDFS-12134.HDFS-8707.001.patch, HDFS-12134.HDFS-8707.002.patch, 
> HDFS-12134.HDFS-8707.003.patch, HDFS-12134.HDFS-8707.004.patch
>
>
> Bits of the GSSAPI that Cyrus Sasl uses aren't thread safe.  There needs to 
> be a way for a client application to share a lock with this library in order 
> to prevent race conditions.  It can be done using event callbacks through the 
> C API but we can provide something more robust (RAII) in the C++ API.
> Proposed: a client-supplied lock, pretty much the C++17 Lockable concept, with a 
> default used if one isn't provided.  This would be scoped at the process level 
> since it's unlikely that multiple instances of libgssapi will be loaded unless 
> someone puts some effort in with dlopen/dlsym.
> {code}
> class LockProvider
> {
>  public:
>   virtual ~LockProvider() {}
>   // allow the client application to deny access to the lock
>   virtual bool try_lock() = 0;
>   virtual void unlock() = 0;
> };
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12224) Add tests to TestJournalNodeSync for sync after JN downtime

2017-08-03 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDFS-12224:
--
Attachment: HDFS-12224.004.patch

Thanks for the review [~arpitagarwal]. 
Patch v04 addresses your comments and fixes the failed unit test.

> Add tests to TestJournalNodeSync for sync after JN downtime
> ---
>
> Key: HDFS-12224
> URL: https://issues.apache.org/jira/browse/HDFS-12224
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HDFS-12224.001.patch, HDFS-12224.002.patch, 
> HDFS-12224.003.patch, HDFS-12224.004.patch
>
>
> Adding unit tests for testing JN sync when the JN has a downtime and is 
> formatted.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-12257) Expose getSnapshottableDirListing in as a public API HdfsAdmin

2017-08-03 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-12257:
--

 Summary: Expose getSnapshottableDirListing in as a public API 
HdfsAdmin
 Key: HDFS-12257
 URL: https://issues.apache.org/jira/browse/HDFS-12257
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: snapshots
Affects Versions: 2.6.5
Reporter: Andrew Wang


Found at HIVE-16294. We have a CLI API for listing snapshottable dirs, but no 
programmatic API. Other snapshot APIs are exposed in HdfsAdmin; I think we 
should expose listing there as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12257) Expose getSnapshottableDirListing as a public API in HdfsAdmin

2017-08-03 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-12257:
---
Summary: Expose getSnapshottableDirListing as a public API in HdfsAdmin  
(was: Expose getSnapshottableDirListing in as a public API HdfsAdmin)

> Expose getSnapshottableDirListing as a public API in HdfsAdmin
> --
>
> Key: HDFS-12257
> URL: https://issues.apache.org/jira/browse/HDFS-12257
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Affects Versions: 2.6.5
>Reporter: Andrew Wang
>
> Found at HIVE-16294. We have a CLI API for listing snapshottable dirs, but no 
> programmatic API. Other snapshot APIs are exposed in HdfsAdmin; I think we 
> should expose listing there as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12134) libhdfs++: Add a synchronization interface for the GSSAPI

2017-08-03 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-12134:
---
Attachment: HDFS-12134.HDFS-8707.003.patch

Thanks for reviewing again [~mdeepak].  I also have some burn-in now with 
external tests that used to hit gssapi issues and it looks like this took care 
of them.

Reuploading the same patch since it looks like the CI build had issues.  If 
that goes well I'll commit this to HDFS-8707.

> libhdfs++: Add a synchronization interface for the GSSAPI
> -
>
> Key: HDFS-12134
> URL: https://issues.apache.org/jira/browse/HDFS-12134
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-12134.HDFS-8707.000.patch, 
> HDFS-12134.HDFS-8707.001.patch, HDFS-12134.HDFS-8707.002.patch, 
> HDFS-12134.HDFS-8707.003.patch
>
>
> Bits of the GSSAPI that Cyrus Sasl uses aren't thread safe.  There needs to 
> be a way for a client application to share a lock with this library in order 
> to prevent race conditions.  It can be done using event callbacks through the 
> C API but we can provide something more robust (RAII) in the C++ API.
> Proposed: a client-supplied lock, pretty much the C++17 Lockable concept, with a 
> default used if one isn't provided.  This would be scoped at the process level 
> since it's unlikely that multiple instances of libgssapi will be loaded unless 
> someone puts some effort in with dlopen/dlsym.
> {code}
> class LockProvider
> {
>  public:
>   virtual ~LockProvider() {}
>   // allow the client application to deny access to the lock
>   virtual bool try_lock() = 0;
>   virtual void unlock() = 0;
> };
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10391) Always enable NameNode service RPC port

2017-08-03 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113394#comment-16113394
 ] 

Arpit Agarwal commented on HDFS-10391:
--

The TestNameNodeHttpServerXFrame failure is caused by the patch.

It may have uncovered a real issue; I will take a look at it.

> Always enable NameNode service RPC port
> ---
>
> Key: HDFS-10391
> URL: https://issues.apache.org/jira/browse/HDFS-10391
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode, namenode
>Reporter: Arpit Agarwal
>Assignee: Gergely Novák
>  Labels: Incompatible
> Attachments: HDFS-10391.001.patch, HDFS-10391.002.patch, 
> HDFS-10391.003.patch, HDFS-10391.004.patch, HDFS-10391.005.patch, 
> HDFS-10391.006.patch, HDFS-10391.007.patch, HDFS-10391.008.patch, 
> HDFS-10391.009.patch, HDFS-10391.v5-v6-delta.patch
>
>
> The NameNode should always be setup with a service RPC port so that it does 
> not have to be explicitly enabled by an administrator.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10880) Federation Mount Table State Store internal API

2017-08-03 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HDFS-10880:
---
Attachment: HDFS-10880-HDFS-10467-005.patch

Fixed checkstyles.

> Federation Mount Table State Store internal API
> ---
>
> Key: HDFS-10880
> URL: https://issues.apache.org/jira/browse/HDFS-10880
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Jason Kace
>Assignee: Inigo Goiri
> Attachments: HDFS-10880-HDFS-10467-000.patch, 
> HDFS-10880-HDFS-10467-001.patch, HDFS-10880-HDFS-10467-002.patch, 
> HDFS-10880-HDFS-10467-003.patch, HDFS-10880-HDFS-10467-004.patch, 
> HDFS-10880-HDFS-10467-005.patch
>
>
> The Federation Mount Table State encapsulates the mapping of file paths in 
> the global namespace to a specific NN(nameservice) and local NN path.  The 
> mount table is shared by all router instances and represents a unified view 
> of the global namespace.   The state store API for the mount table allows the 
> related records to be queried, updated and deleted.
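The path-to-nameservice mapping described above can be sketched as a longest-prefix lookup. This is an illustrative toy, not the actual {{MountTableResolver}} from the HDFS-10467 branch, which is state-store backed, cached, and thread-safe:

```java
import java.util.TreeMap;

// Toy mount table: resolve a global path to "nameservice:localPath" by the
// longest matching mount-point prefix, as a unified global namespace requires.
class MountTable {
    private final TreeMap<String, String> mounts = new TreeMap<>();

    void addEntry(String mountPoint, String target) {
        mounts.put(mountPoint, target);
    }

    String resolve(String path) {
        // Walk up the path components until a registered mount point matches.
        for (String mount = path; !mount.isEmpty();
             mount = mount.substring(0, Math.max(0, mount.lastIndexOf('/')))) {
            String target = mounts.get(mount);
            if (target != null) {
                return target + path.substring(mount.length());
            }
        }
        return mounts.getOrDefault("/", "default:") + path;
    }
}

public class MountDemo {
    public static void main(String[] args) {
        MountTable mt = new MountTable();
        mt.addEntry("/data", "ns1:/data");
        mt.addEntry("/data/logs", "ns2:/logs");
        System.out.println(mt.resolve("/data/logs/2017")); // deeper mount wins
        System.out.println(mt.resolve("/data/raw"));       // falls back to /data
    }
}
```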



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12131) Add some of the FSNamesystem JMX values as metrics

2017-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113384#comment-16113384
 ] 

Hadoop QA commented on HDFS-12131:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.8 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
35s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
5s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
59s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
41s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
12s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
44s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
39s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
58s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
44s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_131. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 46m 20s{color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}174m 27s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_144 Failed junit tests | hadoop.net.TestDNS |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestDistributedFileSystem |
| JDK v1.7.0_131 Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:d946387 |
| JIRA Issue | HDFS-12131 |
| JIRA Patch URL | 

[jira] [Updated] (HDFS-12247) Ozone: KeySpaceManager should unregister KSMMetrics upon stop

2017-08-03 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-12247:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-7240
   Status: Resolved  (was: Patch Available)

Thanks [~linyiqun] for the contribution. I've committed the fix to the feature 
branch. 
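The failure mode behind this fix can be sketched generically. The names below are hypothetical, not Hadoop's actual MetricsSystem API; the point is that a service that never unregisters its metrics source on stop leaves a stale first instance registered, so later reads see zeros:

```java
import java.util.HashMap;
import java.util.Map;

// Toy registry: duplicate registrations under the same name keep the old
// (stale) source, mimicking why the restarted KSM's counters read as zero.
class MetricsRegistry {
    private final Map<String, long[]> sources = new HashMap<>();

    void register(String name, long[] counter) {
        sources.putIfAbsent(name, counter); // a stale source shadows the new one
    }

    void unregister(String name) {
        sources.remove(name);
    }

    long read(String name) {
        return sources.get(name)[0];
    }
}

public class LifecycleDemo {
    public static void main(String[] args) {
        MetricsRegistry ms = new MetricsRegistry();
        long[] first = {0};
        ms.register("KSMMetrics", first);          // first instance starts
        // BUG path: stop without unregister, then a new instance registers.
        long[] second = {0};
        ms.register("KSMMetrics", second);         // ignored: stale source kept
        second[0] = 6;                             // new instance counts 6 ops
        System.out.println(ms.read("KSMMetrics")); // stale 0, not 6
        // FIX path: unregister on stop before the next register.
        ms.unregister("KSMMetrics");
        ms.register("KSMMetrics", second);
        System.out.println(ms.read("KSMMetrics"));
    }
}
```

This mirrors the assertion failure in the description: expected:<6> but was:<0>.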

> Ozone: KeySpaceManager should unregister KSMMetrics upon stop
> -
>
> Key: HDFS-12247
> URL: https://issues.apache.org/jira/browse/HDFS-12247
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Fix For: HDFS-7240
>
> Attachments: HDFS-12247-HDFS-7240.001.patch, 
> HDFS-12247-HDFS-7240.002.patch
>
>
> The test {{TestKSMMetrcis#[.testVolumeOps,.testKeyOps]}} has been failing 
> consistently recently. The stack info:
> {noformat}
> java.lang.AssertionError: Bad value for metric NumVolumeOps expected:<6> but 
> was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.test.MetricsAsserts.assertCounter(MetricsAsserts.java:227)
>   at 
> org.apache.hadoop.ozone.ksm.TestKSMMetrcis.testVolumeOps(TestKSMMetrcis.java:89)
> {noformat}
> The failures seem to have appeared after the commit of HDFS-12034.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12247) Ozone: KeySpaceManager should unregister KSMMetrics upon stop

2017-08-03 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-12247:
--
Summary: Ozone: KeySpaceManager should unregister KSMMetrics upon stop  
(was: Ozone: TestKSMMetrcis fails constantly)

> Ozone: KeySpaceManager should unregister KSMMetrics upon stop
> -
>
> Key: HDFS-12247
> URL: https://issues.apache.org/jira/browse/HDFS-12247
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12247-HDFS-7240.001.patch, 
> HDFS-12247-HDFS-7240.002.patch
>
>
> The test {{TestKSMMetrcis#[.testVolumeOps,.testKeyOps]}} has been failing 
> consistently recently. The stack info:
> {noformat}
> java.lang.AssertionError: Bad value for metric NumVolumeOps expected:<6> but 
> was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.test.MetricsAsserts.assertCounter(MetricsAsserts.java:227)
>   at 
> org.apache.hadoop.ozone.ksm.TestKSMMetrcis.testVolumeOps(TestKSMMetrcis.java:89)
> {noformat}
> The failures seem to have appeared after the commit of HDFS-12034.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11885) createEncryptionZone should not block on initializing EDEK cache

2017-08-03 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113358#comment-16113358
 ] 

Daryn Sharp commented on HDFS-11885:


bq. I think there's still some value here, since we're looking at use cases 
involving HSM key providers. HSMs are a lot slower at generating DEKs than 
/dev/random. To give you an idea, during testing we blew the 60s RPC timeout 
while waiting for the cache to fill to the low watermark (30 keys). We'd really 
like the cache to be warmed up before a workload hits the EZ.

With the current design, I see the motivation.  However, if Rushabh posts the 
patch we've been running internally, which basically throws retriable if no EDEK 
is available but has already kicked off an async fetch, can we get rid of the 
warmup code entirely?
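The "throw retriable, fetch asynchronously" idea can be sketched as follows. All names here are hypothetical, not Hadoop's actual KMS/EDEK code; the point is that an RPC handler never blocks on a slow KMS/HSM:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Client sees this and retries the RPC later instead of timing out.
class RetriableException extends RuntimeException {}

class EdekCache {
    private final Queue<String> cache = new ConcurrentLinkedQueue<>();
    private final ExecutorService fetcher = Executors.newSingleThreadExecutor();

    String getEdek() {
        String edek = cache.poll();
        if (edek != null) {
            return edek;
        }
        // Kick off an async refill and make the caller retry rather than
        // blocking the handler thread while the cache warms up.
        fetcher.submit(() -> cache.add("edek-from-kms"));
        throw new RetriableException();
    }

    void close() throws InterruptedException {
        fetcher.shutdown();
        fetcher.awaitTermination(5, TimeUnit.SECONDS);
    }
}

public class EdekDemo {
    public static void main(String[] args) throws Exception {
        EdekCache c = new EdekCache();
        try {
            c.getEdek();
        } catch (RetriableException e) {
            System.out.println("retry");     // first call: cache is empty
        }
        c.close();                           // async fetch has completed
        System.out.println(c.getEdek());     // retried call now succeeds
    }
}
```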

> createEncryptionZone should not block on initializing EDEK cache
> 
>
> Key: HDFS-11885
> URL: https://issues.apache.org/jira/browse/HDFS-11885
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.6.5
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Critical
> Attachments: HDFS-11885.001.patch, HDFS-11885.002.patch, 
> HDFS-11885.003.patch, HDFS-11885.004.patch
>
>
> When creating an encryption zone, we call {{ensureKeyIsInitialized}}, which 
> calls {{provider.warmUpEncryptedKeys(keyName)}}. This is a blocking call, 
> which attempts to fill the key cache up to the low watermark.
> If the KMS is down or slow, this can take a very long time, and cause the 
> createZone RPC to fail with a timeout.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-12221) Replace xcerces in XmlEditsVisitor

2017-08-03 Thread Ajay Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113348#comment-16113348
 ] 

Ajay Yadav edited comment on HDFS-12221 at 8/3/17 7:34 PM:
---

[~andrew.wang] I checked for references within hdfs and other projects but 
can't find one.  I agree, we shall remove it as a dependency if it's not used 
anywhere else. 


was (Author: ajayydv):
[~andrew.wang] I checked for references within hdfs and other projects but 
can't find one.  I agree, we should remove at as dependency if its not used 
anywhere else. 

> Replace xcerces in XmlEditsVisitor 
> ---
>
> Key: HDFS-12221
> URL: https://issues.apache.org/jira/browse/HDFS-12221
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha4
>Reporter: Lei (Eddy) Xu
>Assignee: Ajay Yadav
> Attachments: edits_hdfs-12221.patch, fsimage_hdfs-12221.xml, 
> HDFS-12221.01.patch
>
>
> XmlEditsVisitor should use the new XML capabilities in newer JDKs, to make JAR 
> shading easier (HADOOP-14672)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12221) Replace xcerces in XmlEditsVisitor

2017-08-03 Thread Ajay Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113348#comment-16113348
 ] 

Ajay Yadav commented on HDFS-12221:
---

[~andrew.wang] I checked for references within hdfs and other projects but 
can't find one.  I agree, we should remove it as a dependency if it's not used 
anywhere else. 

> Replace xcerces in XmlEditsVisitor 
> ---
>
> Key: HDFS-12221
> URL: https://issues.apache.org/jira/browse/HDFS-12221
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha4
>Reporter: Lei (Eddy) Xu
>Assignee: Ajay Yadav
> Attachments: edits_hdfs-12221.patch, fsimage_hdfs-12221.xml, 
> HDFS-12221.01.patch
>
>
> XmlEditsVisitor should use the new XML capabilities in newer JDKs, to make JAR 
> shading easier (HADOOP-14672)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12162) Update listStatus document to describe the behavior when the argument is a file

2017-08-03 Thread Ajay Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Yadav updated HDFS-12162:
--
Attachment: HDFS-12162.01.patch
Screen Shot 2017-08-03 at 11.02.19 AM.png
Screen Shot 2017-08-03 at 11.01.46 AM.png

> Update listStatus document to describe the behavior when the argument is a 
> file
> ---
>
> Key: HDFS-12162
> URL: https://issues.apache.org/jira/browse/HDFS-12162
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, httpfs
>Reporter: Yongjun Zhang
>Assignee: Ajay Yadav
> Attachments: HDFS-12162.01.patch, Screen Shot 2017-08-03 at 11.01.46 
> AM.png, Screen Shot 2017-08-03 at 11.02.19 AM.png
>
>
> The listStatus method can take either a directory path or a file path as 
> input; however, currently both the javadoc and the external documentation 
> describe it as taking only a directory as input. This jira is to update the 
> documentation about the behavior when the argument is a file path.
> Thanks [~xiaochen] for the review and discussion in HDFS-12139, creating this 
> jira is the result of our discussion there.
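The contract being documented can be illustrated with {{java.io.File}} as a local stand-in (a hypothetical analog, not Hadoop code): a directory argument lists its children, while a file argument yields a one-element array holding the file's own status.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ListStatusDemo {
    // Hypothetical analog of FileSystem#listStatus semantics.
    static File[] listStatusLike(File f) {
        return f.isDirectory() ? f.listFiles() : new File[] { f };
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("liststatus-demo");
        Path file = Files.createFile(dir.resolve("a.txt"));
        System.out.println(listStatusLike(dir.toFile()).length);  // dir: 1 child
        System.out.println(listStatusLike(file.toFile()).length); // file: itself
    }
}
```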



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-12162) Update listStatus document to describe the behavior when the argument is a file

2017-08-03 Thread Ajay Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Yadav updated HDFS-12162:
--
Status: Patch Available  (was: Open)

[~yzhangal] Attaching the patch with the change in WebHDFS.md (added an entry for 
listing a file) and the javadoc change in FSOperations. Please review it.

> Update listStatus document to describe the behavior when the argument is a 
> file
> ---
>
> Key: HDFS-12162
> URL: https://issues.apache.org/jira/browse/HDFS-12162
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, httpfs
>Reporter: Yongjun Zhang
>Assignee: Ajay Yadav
>
> The listStatus method can take either a directory path or a file path as 
> input; however, currently both the javadoc and the external documentation 
> describe it as taking only a directory as input. This jira is to update the 
> documentation about the behavior when the argument is a file path.
> Thanks [~xiaochen] for the review and discussion in HDFS-12139, creating this 
> jira is the result of our discussion there.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12247) Ozone: TestKSMMetrcis fails constantly

2017-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113342#comment-16113342
 ] 

Hadoop QA commented on HDFS-12247:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
33s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}117m  8s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
21s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}148m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.cblock.TestBufferManager |
|   | 
hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness
 |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.cblock.TestCBlockReadWrite |
|   | hadoop.ozone.container.ozoneimpl.TestRatisManager |
| Timed out junit tests | org.apache.hadoop.ozone.web.client.TestKeysRatis |
|   | org.apache.hadoop.ozone.container.ozoneimpl.TestOzoneContainerRatis |
|   | org.apache.hadoop.cblock.TestLocalBlockCache |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12247 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880190/HDFS-12247-HDFS-7240.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 315fbb7d4bd5 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 92945d0 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20543/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20543/testReport/ |
| asflicense | 

[jira] [Commented] (HDFS-12221) Replace xcerces in XmlEditsVisitor

2017-08-03 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113291#comment-16113291
 ] 

Andrew Wang commented on HDFS-12221:


Can we drop the xerces dependency after this change?

> Replace xcerces in XmlEditsVisitor 
> ---
>
> Key: HDFS-12221
> URL: https://issues.apache.org/jira/browse/HDFS-12221
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha4
>Reporter: Lei (Eddy) Xu
>Assignee: Ajay Yadav
> Attachments: edits_hdfs-12221.patch, fsimage_hdfs-12221.xml, 
> HDFS-12221.01.patch
>
>
> XmlEditsVisitor should use the new XML capabilities in the newer JDK, to make
> JAR shading easier (HADOOP-14672)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12246) Ozone: potential thread leaks

2017-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113272#comment-16113272
 ] 

Hadoop QA commented on HDFS-12246:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
40s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
35s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
34s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
16s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 36s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}105m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.ozone.container.ozoneimpl.TestOzoneContainerRatis 
|
|   | hadoop.cblock.TestBufferManager |
|   | hadoop.ozone.ksm.TestKSMMetrcis |
|   | hadoop.ozone.scm.TestXceiverClientManager |
|   | hadoop.cblock.TestCBlockReadWrite |
|   | hadoop.ozone.web.client.TestKeys |
| Timed out junit tests | org.apache.hadoop.ozone.web.client.TestKeysRatis |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12246 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880196/HDFS-12246-HDFS-7240.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b57e8750c314 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-7240 / 92945d0 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 

[jira] [Commented] (HDFS-10899) Add functionality to re-encrypt EDEKs

2017-08-03 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113267#comment-16113267
 ] 

Andrew Wang commented on HDFS-10899:


A quick comment on Wei-chiu's review:

bq. By definition, methods in HdfsAdmin are superuser only.

This is not actually true; the class is just for HDFS-specific operations.
Putting "Admin" in the name is a misnomer, and since this continues to be
confusing, maybe we should enhance the class javadoc to make this explicit.

> Add functionality to re-encrypt EDEKs
> -
>
> Key: HDFS-10899
> URL: https://issues.apache.org/jira/browse/HDFS-10899
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption, kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: editsStored, HDFS-10899.01.patch, HDFS-10899.02.patch, 
> HDFS-10899.03.patch, HDFS-10899.04.patch, HDFS-10899.05.patch, 
> HDFS-10899.06.patch, HDFS-10899.07.patch, HDFS-10899.08.patch, 
> HDFS-10899.09.patch, HDFS-10899.10.patch, HDFS-10899.10.wip.patch, 
> HDFS-10899.11.patch, HDFS-10899.12.patch, HDFS-10899.wip.2.patch, 
> HDFS-10899.wip.patch, Re-encrypt edek design doc.pdf, Re-encrypt edek design 
> doc V2.pdf
>
>
> Currently when an encryption zone (EZ) key is rotated, it only takes effect 
> on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key 
> rotation, for improved security.






[jira] [Commented] (HDFS-10880) Federation Mount Table State Store internal API

2017-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113263#comment-16113263
 ] 

Hadoop QA commented on HDFS-10880:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-10467 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
57s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} HDFS-10467 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
52s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-10467 has 10 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} HDFS-10467 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 34s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 6 new + 403 unchanged - 0 fixed = 409 total (was 403) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 12s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}101m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-10880 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880262/HDFS-10880-HDFS-10467-004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux 0030f4540a47 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-10467 / 4d63e4a |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20544/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20544/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20544/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/20544/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 

[jira] [Commented] (HDFS-10899) Add functionality to re-encrypt EDEKs

2017-08-03 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113251#comment-16113251
 ] 

Daryn Sharp commented on HDFS-10899:


I'm really trying to find cycles for this jira.  Since I keep seeing locking 
being mentioned, I've skimmed the patch.  A few quick observations:
* I see a lot of full path reconstruction.  This is not cheap.  Avoid requiring 
the full path if possible.
* Why can't a file with a re-encrypting EDEK be renamed?
* The batching behavior is a bit worrisome to me.  Reading this: "my gut 
feeling is this will not significantly block NN (or pause)" makes my gut drop. 
I need more proof than a feeling that this very desirable feature will not tank 
a production cluster.   See below.
* The management of the depth-first scan claims that tracking a list of path
components is cheaper than tracking inodes.  How/why?  Creating a list of boxed
byte arrays has a lot more overhead than tracking inodes.  Resolving path
components to inodes and back again is not cheap at all.

It's not clear to me that proper locking is always being used.  I'll spell out
what might already be known, but it's what I'll look for:
* Holding the fsdir lock technically requires holding the fsn lock.
* If the fsn lock is released & reacquired, checkOperation must always be 
called to ensure the NN hasn't dropped into standby.  Logging an edit as a 
standby would be very very bad.
* IIPs cannot be resolved w/o the lock and are null and void if the lock is 
released.
* Edit logs are surprisingly not thread safe.  logEdit must hold the fsn write 
lock or rolling can corrupt the logs.
* logSync must never be called with the write lock.

Regarding edits, you don't always have to sync immediately.  Syncing is
technically only required before sending a client a response, to ensure
durability.  If you just log edits, they will be batched and eventually synced
by the next write op.  It looks like if the NN were to crash during
re-encryption and buffered edits were lost, it would correctly resume.

I didn't check how big the batches are.  Lock time in general definitely
becomes a concern, read lock or not.  Another consideration is an edit flood:
sending more edits than can be buffered without a sync will cause a sync while
holding the write lock.  Not good.

As for performance: I'm concerned you are focusing only on raw re-encrypt
performance, which means little to me.  Throughput during the process matters.
Here are some rudimentary ways to measure performance on a quiescent NN:
* Blast the NN with read ops like getFileInfo.  Try to peg it.  Should be easy 
to achieve a steady state of 100-300k ops/sec.  Measure ops/sec during a 
lengthy reencrypt.
* Blast the NN with write ops.  Rename files back and forth in a directory.  
Try to peg it and depending on hw, maybe a steady 5-10k ops/sec.  Measure 
ops/sec during a lengthy reencrypt.
This will give us an idea of how costly the implementation is.  To make up some
numbers: during an emergency, a 10% hit might be acceptable, but 10% is
unacceptable for a proactive roll.

I'll comment further after I review more.
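To make the locking rules above concrete, here is a minimal, hypothetical sketch (illustrative names only, not the actual NameNode code) of a batch worker that re-checks the HA state after re-acquiring the namesystem write lock, logs edits while holding it, and defers syncing until the lock is released:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

/**
 * Hypothetical illustration of the locking discipline described above:
 * re-check HA state under the re-acquired lock, log edits with the write
 * lock held, and only sync after releasing it.
 */
public class ReencryptBatchSketch {
    enum HAState { ACTIVE, STANDBY }

    private final ReentrantReadWriteLock fsnLock = new ReentrantReadWriteLock();
    private volatile HAState haState = HAState.ACTIVE;
    private final List<String> editBuffer = new ArrayList<>();
    private int syncedEdits = 0;

    /** Re-check HA state; logging an edit as a standby would corrupt the log. */
    void checkOperation() {
        if (haState != HAState.ACTIVE) {
            throw new IllegalStateException("NameNode dropped into standby");
        }
    }

    /** Process one batch of files; returns the number of edits logged. */
    public int processBatch(List<String> files) {
        fsnLock.writeLock().lock();
        try {
            checkOperation();                      // must re-check under the lock
            for (String f : files) {
                editBuffer.add("reencrypt " + f);  // logEdit: write lock held
            }
        } finally {
            fsnLock.writeLock().unlock();
        }
        // logSync stands in here: it must never run under the write lock,
        // so other operations are not blocked on edit-log I/O.
        syncedEdits = editBuffer.size();
        return files.size();
    }

    public int syncedEdits() { return syncedEdits; }
}
```

Deferring the sync this way also matches the point about batching: edits logged without an immediate sync are durably flushed by the next synced write op.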

> Add functionality to re-encrypt EDEKs
> -
>
> Key: HDFS-10899
> URL: https://issues.apache.org/jira/browse/HDFS-10899
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption, kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: editsStored, HDFS-10899.01.patch, HDFS-10899.02.patch, 
> HDFS-10899.03.patch, HDFS-10899.04.patch, HDFS-10899.05.patch, 
> HDFS-10899.06.patch, HDFS-10899.07.patch, HDFS-10899.08.patch, 
> HDFS-10899.09.patch, HDFS-10899.10.patch, HDFS-10899.10.wip.patch, 
> HDFS-10899.11.patch, HDFS-10899.12.patch, HDFS-10899.wip.2.patch, 
> HDFS-10899.wip.patch, Re-encrypt edek design doc.pdf, Re-encrypt edek design 
> doc V2.pdf
>
>
> Currently when an encryption zone (EZ) key is rotated, it only takes effect 
> on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key 
> rotation, for improved security.






[jira] [Updated] (HDFS-12221) Replace xcerces in XmlEditsVisitor

2017-08-03 Thread Ajay Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Yadav updated HDFS-12221:
--
Attachment: edits_hdfs-12221.patch
fsimage_hdfs-12221.xml
HDFS-12221.01.patch

> Replace xcerces in XmlEditsVisitor 
> ---
>
> Key: HDFS-12221
> URL: https://issues.apache.org/jira/browse/HDFS-12221
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha4
>Reporter: Lei (Eddy) Xu
>Assignee: Ajay Yadav
> Attachments: edits_hdfs-12221.patch, fsimage_hdfs-12221.xml, 
> HDFS-12221.01.patch
>
>
> XmlEditsVisitor should use the new XML capabilities in the newer JDK, to make
> JAR shading easier (HADOOP-14672)






[jira] [Updated] (HDFS-12221) Replace xcerces in XmlEditsVisitor

2017-08-03 Thread Ajay Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Yadav updated HDFS-12221:
--
Status: Patch Available  (was: Open)

> Replace xcerces in XmlEditsVisitor 
> ---
>
> Key: HDFS-12221
> URL: https://issues.apache.org/jira/browse/HDFS-12221
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha4
>Reporter: Lei (Eddy) Xu
>Assignee: Ajay Yadav
> Attachments: edits_hdfs-12221.patch, fsimage_hdfs-12221.xml, 
> HDFS-12221.01.patch
>
>
> XmlEditsVisitor should use the new XML capabilities in the newer JDK, to make
> JAR shading easier (HADOOP-14672)






[jira] [Commented] (HDFS-12221) Replace xcerces in XmlEditsVisitor

2017-08-03 Thread Ajay Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113177#comment-16113177
 ] 

Ajay Yadav commented on HDFS-12221:
---

[~eddyxu] Thanks for confirming. Attaching the patch and output of fsimage and 
edit logs after the changes. Please review.

> Replace xcerces in XmlEditsVisitor 
> ---
>
> Key: HDFS-12221
> URL: https://issues.apache.org/jira/browse/HDFS-12221
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha4
>Reporter: Lei (Eddy) Xu
>Assignee: Ajay Yadav
>
> XmlEditsVisitor should use the new XML capabilities in the newer JDK, to make
> JAR shading easier (HADOOP-14672)






[jira] [Updated] (HDFS-12256) Ozone : handle inactive containers on DataNode

2017-08-03 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12256:
--
Summary: Ozone : handle inactive containers on DataNode  (was: Ozone : 
handle inactive containers on DataNode side)

> Ozone : handle inactive containers on DataNode
> --
>
> Key: HDFS-12256
> URL: https://issues.apache.org/jira/browse/HDFS-12256
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>
> When a container gets created, corresponding metadata gets added to 
> {{ContainerManagerImpl#containerMap}}. What {{containerMap}} stores is a 
> containerName to {{ContainerStatus}} instance map. When datanode starts, it 
> also loads this map from disk file metadata. As long as the containerName is 
> found in this map, it is considered an existing container.
> An issue we saw was that, occasionally, when the container creation on 
> datanode fails, the metadata of the failed container may still get added to 
> {{containerMap}}, with active flag set to false. But currently such 
> containers are not being handled; containers with active=false are just
> treated as normal containers. Then when someone tries to write to such a
> container, failures can happen.






[jira] [Updated] (HDFS-12251) Add document for StreamCapabilities

2017-08-03 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HDFS-12251:
-
Attachment: HDFS-12251.02.patch

Updated to add links between the EC docs and the {{StreamCapabilities}} doc.

> Add document for StreamCapabilities
> ---
>
> Key: HDFS-12251
> URL: https://issues.apache.org/jira/browse/HDFS-12251
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha4
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HDFS-12251.00.patch, HDFS-12251.01.patch, 
> HDFS-12251.02.patch
>
>
> Update filesystem docs to describe the purpose and usage of 
> {{StreamCapabilities}}.






[jira] [Updated] (HDFS-12256) Ozone : handle inactive containers on DataNode side

2017-08-03 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-12256:
--
Issue Type: Sub-task  (was: Bug)
Parent: HDFS-7240

> Ozone : handle inactive containers on DataNode side
> ---
>
> Key: HDFS-12256
> URL: https://issues.apache.org/jira/browse/HDFS-12256
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Chen Liang
>
> When a container gets created, corresponding metadata gets added to 
> {{ContainerManagerImpl#containerMap}}. What {{containerMap}} stores is a 
> containerName to {{ContainerStatus}} instance map. When datanode starts, it 
> also loads this map from disk file metadata. As long as the containerName is 
> found in this map, it is considered an existing container.
> An issue we saw was that, occasionally, when the container creation on 
> datanode fails, the metadata of the failed container may still get added to 
> {{containerMap}}, with active flag set to false. But currently such 
> containers are not being handled; containers with active=false are just
> treated as normal containers. Then when someone tries to write to such a
> container, failures can happen.






[jira] [Created] (HDFS-12256) Ozone : handle inactive containers on DataNode side

2017-08-03 Thread Chen Liang (JIRA)
Chen Liang created HDFS-12256:
-

 Summary: Ozone : handle inactive containers on DataNode side
 Key: HDFS-12256
 URL: https://issues.apache.org/jira/browse/HDFS-12256
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Chen Liang


When a container gets created, corresponding metadata gets added to 
{{ContainerManagerImpl#containerMap}}. What {{containerMap}} stores is a 
containerName to {{ContainerStatus}} instance map. When datanode starts, it 
also loads this map from disk file metadata. As long as the containerName is 
found in this map, it is considered an existing container.

An issue we saw was that, occasionally, when container creation on the datanode
fails, the metadata of the failed container may still get added to
{{containerMap}}, with the active flag set to false. But currently such
containers are not being handled; containers with active=false are just treated
as normal containers. Then when someone tries to write to such a container,
failures can happen.
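As a sketch of the handling this issue asks for — hypothetical names, not the actual Ozone classes — a containerMap lookup could reject containers whose creation failed (active=false) instead of treating them as writable:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Illustrative sketch only: a containerMap that records an "active" flag
 * and refuses writes to containers whose creation previously failed,
 * rather than treating them as normal containers.
 */
public class ContainerMapSketch {
    static final class ContainerStatus {
        final String name;
        final boolean active;
        ContainerStatus(String name, boolean active) {
            this.name = name;
            this.active = active;
        }
    }

    private final Map<String, ContainerStatus> containerMap =
        new ConcurrentHashMap<>();

    /** Register a container, e.g. on creation or when loading from disk. */
    public void register(String name, boolean active) {
        containerMap.put(name, new ContainerStatus(name, active));
    }

    /** True only for containers that exist and were created successfully. */
    public boolean canWrite(String name) {
        ContainerStatus status = containerMap.get(name);
        return status != null && status.active;
    }

    /** Look up a container for writing; inactive entries are rejected early. */
    public ContainerStatus getForWrite(String name) {
        ContainerStatus status = containerMap.get(name);
        if (status == null) {
            throw new IllegalArgumentException("No such container: " + name);
        }
        if (!status.active) {
            // Rejecting here surfaces the failed creation immediately,
            // instead of letting the write fail later in the pipeline.
            throw new IllegalStateException("Container inactive: " + name);
        }
        return status;
    }
}
```

Rejecting at lookup time (or garbage-collecting inactive entries when the map is loaded from disk) would both avoid the late write failures described above.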






[jira] [Commented] (HDFS-11885) createEncryptionZone should not block on initializing EDEK cache

2017-08-03 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113156#comment-16113156
 ] 

Xiao Chen commented on HDFS-11885:
--

Thanks Andrew for rebasing. The checkstyle warning does look related (an extra
line break in the TestEncryptionZonesWithKMS class).

The test failure {{TestAclsEndToEnd#testCreateEncryptionZone}} appears to be
related, though.

> createEncryptionZone should not block on initializing EDEK cache
> 
>
> Key: HDFS-11885
> URL: https://issues.apache.org/jira/browse/HDFS-11885
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.6.5
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Critical
> Attachments: HDFS-11885.001.patch, HDFS-11885.002.patch, 
> HDFS-11885.003.patch, HDFS-11885.004.patch
>
>
> When creating an encryption zone, we call {{ensureKeyIsInitialized}}, which 
> calls {{provider.warmUpEncryptedKeys(keyName)}}. This is a blocking call, 
> which attempts to fill the key cache up to the low watermark.
> If the KMS is down or slow, this can take a very long time, and cause the 
> createZone RPC to fail with a timeout.
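One possible direction for the fix described above, sketched with hypothetical names (not the actual NameNode/KMS API): move the warm-up off the RPC path so the createZone call returns immediately while the EDEK cache fills in the background.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

/**
 * Hypothetical sketch: run the EDEK cache warm-up on a background thread
 * so creating an encryption zone does not block on a slow or down KMS.
 */
public class AsyncWarmUpSketch {
    private final ExecutorService pool = Executors.newSingleThreadExecutor();
    private volatile boolean warmedUp = false;

    /** Stand-in for provider.warmUpEncryptedKeys(keyName), possibly slow. */
    private void warmUpEncryptedKeys(String keyName) {
        try {
            Thread.sleep(50);  // simulate a slow KMS round trip
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();  // warm-up is best effort
        }
        warmedUp = true;
    }

    /** Returns immediately; the cache warm-up continues in the background. */
    public Future<?> createEncryptionZone(String zone, String keyName) {
        return pool.submit(() -> warmUpEncryptedKeys(keyName));
    }

    public boolean isWarmedUp() { return warmedUp; }

    public void shutdown() {
        pool.shutdown();
        try {
            pool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

The trade-off is that the first generateEncryptedKey calls after zone creation may still hit the KMS directly until the background warm-up completes.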






[jira] [Commented] (HDFS-12255) Block Storage: Cblock should generated unique trace ID for the ops

2017-08-03 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113099#comment-16113099
 ] 

Anu Engineer commented on HDFS-12255:
-

Thanks for filing this. If you are not working on it, feel free to assign this
issue to me.

> Block Storage: Cblock should generated unique trace ID for the ops
> --
>
> Key: HDFS-12255
> URL: https://issues.apache.org/jira/browse/HDFS-12255
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
>
> CBlock tests fail because CBlock does not generate a unique trace ID for
> each op.
> {code}
> java.lang.AssertionError: expected:<0> but was:<1051>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.cblock.TestBufferManager.testRepeatedBlockWrites(TestBufferManager.java:448)
> {code}
> This failure is because of the following error.
> {code}
> 017-08-02 17:50:34,569 [Cache Block Writer Thread #4] ERROR 
> scm.XceiverClientHandler (XceiverClientHandler.java:sendCommandAsync(134)) - 
> Command with Trace already exists. Ignoring this command. . Previous Command: 
> java.util.concurrent.CompletableFuture@7847fc2d[Not completed, 1 dependents]
> 2017-08-02 17:50:34,569 [Cache Block Writer Thread #4] ERROR 
> jscsiHelper.ContainerCacheFlusher (BlockWriterTask.java:run(108)) - Writing 
> of block:44 failed, We have attempted to write this block 7 tim
> es to the container container2483304118.Trace ID:
> java.lang.IllegalStateException: Duplicate trace ID. Command with this trace 
> ID is already executing. Please ensure that trace IDs are not reused. ID: 
> at 
> org.apache.hadoop.scm.XceiverClientHandler.sendCommandAsync(XceiverClientHandler.java:139)
> at 
> org.apache.hadoop.scm.XceiverClientHandler.sendCommand(XceiverClientHandler.java:114)
> at 
> org.apache.hadoop.scm.XceiverClient.sendCommand(XceiverClient.java:132)
> at 
> org.apache.hadoop.scm.storage.ContainerProtocolCalls.writeSmallFile(ContainerProtocolCalls.java:225)
> at 
> org.apache.hadoop.cblock.jscsiHelper.BlockWriterTask.run(BlockWriterTask.java:97)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
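One simple way to get the unique trace IDs this report calls for — a hypothetical sketch, not the actual CBlock API — is a per-process prefix combined with an atomic counter, so even retried writes of the same block carry distinct IDs:

```java
import java.util.concurrent.atomic.AtomicLong;

/**
 * Hypothetical trace-ID generator: a stable per-process prefix plus a
 * monotonically increasing counter. Each call yields a never-repeated ID,
 * so the "Duplicate trace ID" check in the transport layer cannot trip
 * on retries of the same operation.
 */
public class TraceIdGenerator {
    private final String prefix;
    private final AtomicLong counter = new AtomicLong();

    public TraceIdGenerator(String prefix) {
        this.prefix = prefix;
    }

    /** Thread-safe; every call returns a new, unique trace ID. */
    public String nextId() {
        return prefix + "-" + counter.incrementAndGet();
    }
}
```

The prefix would need to be unique per process (e.g. host plus PID or a startup UUID) for IDs to stay unique across nodes.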






[jira] [Assigned] (HDFS-12254) Upgrade JUnit from 4 to 5 in hadoop-hdfs

2017-08-03 Thread Ajay Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Yadav reassigned HDFS-12254:
-

Assignee: Ajay Yadav

> Upgrade JUnit from 4 to 5 in hadoop-hdfs
> 
>
> Key: HDFS-12254
> URL: https://issues.apache.org/jira/browse/HDFS-12254
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Ajay Yadav
>
> Feel free to create sub-tasks for each module.






[jira] [Updated] (HDFS-10880) Federation Mount Table State Store internal API

2017-08-03 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated HDFS-10880:
---
Attachment: HDFS-10880-HDFS-10467-004.patch

* Fixed checkstyle
* Fixed whitespace
* Fixed unit tests

> Federation Mount Table State Store internal API
> ---
>
> Key: HDFS-10880
> URL: https://issues.apache.org/jira/browse/HDFS-10880
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Jason Kace
>Assignee: Inigo Goiri
> Attachments: HDFS-10880-HDFS-10467-000.patch, 
> HDFS-10880-HDFS-10467-001.patch, HDFS-10880-HDFS-10467-002.patch, 
> HDFS-10880-HDFS-10467-003.patch, HDFS-10880-HDFS-10467-004.patch
>
>
> The Federation Mount Table State encapsulates the mapping of file paths in 
> the global namespace to a specific NN(nameservice) and local NN path.  The 
> mount table is shared by all router instances and represents a unified view 
> of the global namespace.   The state store API for the mount table allows the 
> related records to be queried, updated and deleted.






[jira] [Commented] (HDFS-12246) Ozone: potential thread leaks

2017-08-03 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113042#comment-16113042
 ] 

Xiaoyu Yao commented on HDFS-12246:
---

Thanks for the update [~cheersyang]. 
+1 for v2 patch pending Jenkins.

> Ozone: potential thread leaks
> -
>
> Key: HDFS-12246
> URL: https://issues.apache.org/jira/browse/HDFS-12246
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
> Attachments: HDFS-12246-HDFS-7240.001.patch, 
> HDFS-12246-HDFS-7240.002.patch
>
>
> Per discussion in HDFS-12163, there might be some places that potentially leak 
> threads; we will use this jira to track the work to fix those leaks.
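For reference, the usual shape of a fix for this kind of leak is making the thread-pool owner close its executor so worker threads do not outlive it. The sketch below is a generic, hypothetical illustration of that pattern; the class and method names are not the actual Ozone code.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class LeakFreeService implements AutoCloseable {
  private final ExecutorService pool = Executors.newFixedThreadPool(2);

  public void submitWork(Runnable task) {
    pool.submit(task);
  }

  @Override
  public void close() {
    pool.shutdown();                         // stop accepting new tasks
    try {
      if (!pool.awaitTermination(5, TimeUnit.SECONDS)) {
        pool.shutdownNow();                  // interrupt stragglers
      }
    } catch (InterruptedException e) {
      pool.shutdownNow();
      Thread.currentThread().interrupt();    // preserve interrupt status
    }
  }
}
```

Callers (or try-with-resources) invoke close() when the component shuts down, so no pool thread survives the owner.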






[jira] [Commented] (HDFS-12247) Ozone: TestKSMMetrcis fails constantly

2017-08-03 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113033#comment-16113033
 ] 

Xiaoyu Yao commented on HDFS-12247:
---

Thanks for updating the patch [~linyiqun]. 
+1 v2 patch pending Jenkins.

> Ozone: TestKSMMetrcis fails constantly
> --
>
> Key: HDFS-12247
> URL: https://issues.apache.org/jira/browse/HDFS-12247
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12247-HDFS-7240.001.patch, 
> HDFS-12247-HDFS-7240.002.patch
>
>
> The test {{TestKSMMetrcis#[.testVolumeOps,.testKeyOps]}} has been failing 
> constantly recently. The stack info:
> {noformat}
> java.lang.AssertionError: Bad value for metric NumVolumeOps expected:<6> but 
> was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.test.MetricsAsserts.assertCounter(MetricsAsserts.java:227)
>   at 
> org.apache.hadoop.ozone.ksm.TestKSMMetrcis.testVolumeOps(TestKSMMetrcis.java:89)
> {noformat}
> It seems the failures appeared after the commit of HDFS-12034.






[jira] [Commented] (HDFS-12248) SNN will not upload fsimage on IOE and Interrupted exceptions

2017-08-03 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16113034#comment-16113034
 ] 

Rushabh S Shah commented on HDFS-12248:
---

[~brahmareddy]: can you please add a test case?


> SNN will not upload fsimage on IOE and Interrupted exceptions
> -
>
> Key: HDFS-12248
> URL: https://issues.apache.org/jira/browse/HDFS-12248
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rolling upgrades
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Critical
> Attachments: HDFS-12248.patch
>
>
> Related to HDFS-9787. While the fsimage is being uploaded to the ANN, if an 
> interrupt or IOE occurs, {{isPrimaryCheckPointer}} is set to {{false}}. If a 
> rolling upgrade is triggered at the same time, the checkpoint is done without 
> sending the fsimage since {{sendRequest}} will be {{false}}.
> So the {{rollback}} image will not be sent to the ANN.
> {code}
>   } catch (ExecutionException e) {
> ioe = new IOException("Exception during image upload: " + 
> e.getMessage(),
> e.getCause());
> break;
>   } catch (InterruptedException e) {
> ie = e;
> break;
>   }
> }
> lastUploadTime = monotonicNow();
> // we are primary if we successfully updated the ANN
> this.isPrimaryCheckPointer = success;
> {code}






[jira] [Updated] (HDFS-12131) Add some of the FSNamesystem JMX values as metrics

2017-08-03 Thread Erik Krogen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Krogen updated HDFS-12131:
---
Attachment: HDFS-12131-branch-2.8.006.patch
HDFS-12131-branch-2.006.patch
HDFS-12131.006.patch

Great [~andrew.wang], thanks! When putting together the branch-2 patch I 
noticed two small issues in my trunk patch - a few of the new additions to 
Metrics.md did not have correct formatting (missing a pipe at the end of the 
line) and I didn't close the {{FsVolumeReferences}} object that I obtained in 
{{TestNameNodeMetrics#testVolumeFailures()}}. Fixed those two in v006 patch, 
and attached corresponding branch-2 and branch-2.8 patches.

> Add some of the FSNamesystem JMX values as metrics
> --
>
> Key: HDFS-12131
> URL: https://issues.apache.org/jira/browse/HDFS-12131
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Minor
> Attachments: HDFS-12131.000.patch, HDFS-12131.001.patch, 
> HDFS-12131.002.patch, HDFS-12131.002.patch, HDFS-12131.003.patch, 
> HDFS-12131.004.patch, HDFS-12131.005.patch, HDFS-12131.006.patch, 
> HDFS-12131-branch-2.006.patch, HDFS-12131-branch-2.8.006.patch
>
>
> A number of useful numbers are emitted via the FSNamesystem JMX, but not 
> through the metrics system. These would be useful to be able to track over 
> time, e.g. to alert on via standard metrics systems or to view trends and 
> rate changes:
> * NumLiveDataNodes
> * NumDeadDataNodes
> * NumDecomLiveDataNodes
> * NumDecomDeadDataNodes
> * NumDecommissioningDataNodes
> * NumStaleStorages
> * VolumeFailuresTotal
> * EstimatedCapacityLostTotal
> * NumInMaintenanceLiveDataNodes
> * NumInMaintenanceDeadDataNodes
> * NumEnteringMaintenanceDataNodes
> This is a simple change that just requires annotating the JMX methods with 
> {{@Metric}}.
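The change described above is literally just an annotation on existing JMX getters. As a self-contained emulation of what that pattern does (the real annotation is org.apache.hadoop.metrics2.annotation.Metric and registration is handled by Hadoop's metrics system, not by hand; everything below is an illustrative stand-in, not FSNamesystem code):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.LinkedHashMap;
import java.util.Map;

public class MetricScanSketch {
  @Retention(RetentionPolicy.RUNTIME)
  @Target(ElementType.METHOD)
  @interface Metric {
    String value();  // metric description, as in Hadoop's annotation
  }

  /** Stand-in for FSNamesystem: two getters exported, one left JMX-only. */
  static class Namesystem {
    @Metric("Number of live DataNodes")
    public int getNumLiveDataNodes() { return 3; }

    @Metric("Number of dead DataNodes")
    public int getNumDeadDataNodes() { return 1; }

    public int getNumStaleStorages() { return 0; }  // not annotated
  }

  /** Collect name -> value for every public getter carrying @Metric. */
  static Map<String, Object> snapshot(Object source) {
    Map<String, Object> out = new LinkedHashMap<>();
    for (Method m : source.getClass().getMethods()) {
      if (m.getAnnotation(Metric.class) != null) {
        try {
          out.put(m.getName().replaceFirst("^get", ""), m.invoke(source));
        } catch (ReflectiveOperationException e) {
          throw new IllegalStateException(e);
        }
      }
    }
    return out;
  }
}
```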






[jira] [Commented] (HDFS-10899) Add functionality to re-encrypt EDEKs

2017-08-03 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16112837#comment-16112837
 ] 

Wei-Chiu Chuang commented on HDFS-10899:


Still halfway through my review. Posting my notes for reference:

ReencryptionStatus#listReencryptionStatus
{code}
final boolean hasMore = (numResp < tailMap.size());
{code}
This seems untrue. If the system has 1001 (tailMap.size()) encryption zones, 
we get 1 (count) encryption zone starting from the 1000th, and we expect to 
get 100 results (numResp), then hasMore should be false.
It looks like it would cause an infinite loop between client and NN because 
hasMore is _always_ true.
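A minimal, hypothetical sketch of the check this comment argues for: compute hasMore from the entries remaining after the current page, not from the total size. The names (zones, prevId, batchSize, makeZones) are illustrative, not the actual ReencryptionStatus code.

```java
import java.util.SortedMap;
import java.util.TreeMap;

public class ListPagination {
  /** More results exist only if entries remain beyond the returned page. */
  static boolean hasMore(TreeMap<Long, String> zones, long prevId, int batchSize) {
    // Entries strictly after the cursor position.
    SortedMap<Long, String> tail = zones.tailMap(prevId, false);
    return tail.size() > batchSize;
  }

  /** Build n zones keyed 0..n-1 (test helper). */
  static TreeMap<Long, String> makeZones(int n) {
    TreeMap<Long, String> zones = new TreeMap<>();
    for (long i = 0; i < n; i++) {
      zones.put(i, "zone" + i);
    }
    return zones;
  }
}
```

With this shape, listing from the 1000th of 1001 zones returns one entry and hasMore is false, so the client stops iterating.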

Javadoc for HdfsAdmin#listReencryptionStatus:
This method can only be called by HDFS superusers.

This is a little unnecessary and misleading. 
By definition, methods in HdfsAdmin are superuser only.
Also, the permission check is enforced on the namenode side, not the client side.

ZoneReencryptionStatus

Add an @InterfaceAudience.Private annotation for ZoneReencryptionStatus.


It seems EncryptionZoneManager#reencryptionHandler can be null if the NN does 
not have any key provider configured.
So every access to reencryptionHandler should be null-checked to avoid an NPE.


EncryptionZoneManager#fixReencryptionZoneNames()
Suggest a more appropriate method name; "fixReencryptionZoneNames" feels like 
a hack for a bug.


EncryptionZoneManager#cancelReencryptEncryptionZone: the Javadoc “If the given 
path If the given path is not the root of an encryption zone,” has duplicate 
words

Would ReencryptionStatus#getZoneStatus ever return null? 
The method is called by multiple callers so I can't be sure if it is ever 
possible. But if it could return null, some null checking would be necessary 
in the callers.


ReencryptionHandler#submitCurrentBatch
IntelliJ is complaining that this code
{code}
TreeMap batch = (TreeMap) ((TreeMap) currentBatch).clone();
{code}
is an unchecked cast.
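A hypothetical way to avoid the unchecked cast: TreeMap's copy constructor produces the same shallow snapshot with full type safety. The key/value types below are placeholders, not the actual ReencryptionHandler types.

```java
import java.util.TreeMap;

public class BatchCopySketch {
  /** Typed shallow snapshot; equivalent to clone() but without the cast. */
  static <K, V> TreeMap<K, V> snapshotBatch(TreeMap<K, V> currentBatch) {
    return new TreeMap<>(currentBatch);
  }
}
```

The copy constructor also preserves the source map's comparator, so ordering of the batch is unchanged.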

ReencryptionHandler#run()
Append “millisecond” at the end of log message "Starting up re-encrypt thread 
with interval={}”

A number of typos
“inode cannot be resolve to a full path” : “resolve” —> resolved


> Add functionality to re-encrypt EDEKs
> -
>
> Key: HDFS-10899
> URL: https://issues.apache.org/jira/browse/HDFS-10899
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: encryption, kms
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: editsStored, HDFS-10899.01.patch, HDFS-10899.02.patch, 
> HDFS-10899.03.patch, HDFS-10899.04.patch, HDFS-10899.05.patch, 
> HDFS-10899.06.patch, HDFS-10899.07.patch, HDFS-10899.08.patch, 
> HDFS-10899.09.patch, HDFS-10899.10.patch, HDFS-10899.10.wip.patch, 
> HDFS-10899.11.patch, HDFS-10899.12.patch, HDFS-10899.wip.2.patch, 
> HDFS-10899.wip.patch, Re-encrypt edek design doc.pdf, Re-encrypt edek design 
> doc V2.pdf
>
>
> Currently when an encryption zone (EZ) key is rotated, it only takes effect 
> on new EDEKs. We should provide a way to re-encrypt EDEKs after the EZ key 
> rotation, for improved security.






[jira] [Commented] (HDFS-12157) Do fsyncDirectory(..) outside of FSDataset lock

2017-08-03 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16112587#comment-16112587
 ] 

Vinayakumar B commented on HDFS-12157:
--

No test failures are related to this change.

> Do fsyncDirectory(..) outside of FSDataset lock
> ---
>
> Key: HDFS-12157
> URL: https://issues.apache.org/jira/browse/HDFS-12157
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Critical
> Attachments: HDFS-12157-01.patch, HDFS-12157-branch-2-01.patch, 
> HDFS-12157-branch-2.7-01.patch, HDFS-12157-branch-2.7-01.patch
>
>
> HDFS-5042 introduced fsyncDirectory(..) to save blocks from power failure. 
> Do it outside of FSDataset lock to avoid overall performance degradation if 
> disk takes more time to sync.
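As a hedged sketch of the pattern the description calls for: hold the dataset lock only for the fast in-memory update, then fsync the directory after releasing it. The lock object, method names, and the FileChannel-based directory fsync are illustrative assumptions, not the actual FsDatasetImpl code.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class FsyncOutsideLock {
  private final Object datasetLock = new Object();

  /** Update in-memory state under the lock; sync the disk outside it. */
  void finalizeBlock(Path blockDir) {
    synchronized (datasetLock) {
      // ... fast in-memory replica-map update happens here ...
    }
    try {
      // The slow part: if the disk is busy, only this writer waits, not
      // every operation contending on datasetLock.
      fsyncDirectory(blockDir);
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }

  /** Durably persist directory metadata (POSIX-style: fsync the dir fd). */
  static void fsyncDirectory(Path dir) throws IOException {
    try (FileChannel ch = FileChannel.open(dir, StandardOpenOption.READ)) {
      ch.force(true);
    }
  }
}
```

Note that opening a directory for read to force() it works on Linux; platforms that disallow it would need a native fallback.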






[jira] [Created] (HDFS-12255) Block Storage: Cblock should generate unique trace ID for the ops

2017-08-03 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDFS-12255:


 Summary: Block Storage: Cblock should generate unique trace ID 
for the ops
 Key: HDFS-12255
 URL: https://issues.apache.org/jira/browse/HDFS-12255
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh
 Fix For: HDFS-7240


Cblock tests fail because cblock does not generate a unique trace ID for each op.

{code}
java.lang.AssertionError: expected:<0> but was:<1051>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.cblock.TestBufferManager.testRepeatedBlockWrites(TestBufferManager.java:448)
{code}

This failure is caused by the following error.

{code}
017-08-02 17:50:34,569 [Cache Block Writer Thread #4] ERROR 
scm.XceiverClientHandler (XceiverClientHandler.java:sendCommandAsync(134)) - 
Command with Trace already exists. Ignoring this command. . Previous Command: 
java.util.concurrent.CompletableFuture@7847fc2d[Not completed, 1 dependents]
2017-08-02 17:50:34,569 [Cache Block Writer Thread #4] ERROR 
jscsiHelper.ContainerCacheFlusher (BlockWriterTask.java:run(108)) - Writing of 
block:44 failed, We have attempted to write this block 7 times to the 
container container2483304118.Trace ID:
java.lang.IllegalStateException: Duplicate trace ID. Command with this trace ID 
is already executing. Please ensure that trace IDs are not reused. ID: 
at 
org.apache.hadoop.scm.XceiverClientHandler.sendCommandAsync(XceiverClientHandler.java:139)
at 
org.apache.hadoop.scm.XceiverClientHandler.sendCommand(XceiverClientHandler.java:114)
at 
org.apache.hadoop.scm.XceiverClient.sendCommand(XceiverClient.java:132)
at 
org.apache.hadoop.scm.storage.ContainerProtocolCalls.writeSmallFile(ContainerProtocolCalls.java:225)
at 
org.apache.hadoop.cblock.jscsiHelper.BlockWriterTask.run(BlockWriterTask.java:97)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}
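Given the duplicate-ID failure above, one minimal way to guarantee per-op uniqueness is a shared monotonic counter behind a stable prefix. This is only an illustration of the idea; the class and names are hypothetical, not the actual CBlock fix.

```java
import java.util.concurrent.atomic.AtomicLong;

/**
 * Hypothetical helper: a stable prefix plus a monotonic counter, so every
 * op (including retries of the same block) gets a distinct trace ID.
 */
public class TraceIdGenerator {
  private final String prefix;
  private final AtomicLong counter = new AtomicLong();

  public TraceIdGenerator(String prefix) {
    this.prefix = prefix;
  }

  /** Thread-safe: concurrent writer threads never observe the same ID. */
  public String nextId() {
    return prefix + "-" + counter.incrementAndGet();
  }
}
```

A shared instance per client would make XceiverClientHandler's duplicate-trace check pass even when the same block is retried.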






[jira] [Created] (HDFS-12254) Upgrade JUnit from 4 to 5 in hadoop-hdfs

2017-08-03 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HDFS-12254:


 Summary: Upgrade JUnit from 4 to 5 in hadoop-hdfs
 Key: HDFS-12254
 URL: https://issues.apache.org/jira/browse/HDFS-12254
 Project: Hadoop HDFS
  Issue Type: Test
  Components: test
Reporter: Akira Ajisaka


Feel free to create sub-tasks for each module.






[jira] [Updated] (HDFS-11082) Erasure Coding : Provide replicated EC policy to just replicating the files

2017-08-03 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HDFS-11082:
-
Status: Patch Available  (was: Open)

> Erasure Coding : Provide replicated EC policy to just replicating the files
> ---
>
> Key: HDFS-11082
> URL: https://issues.apache.org/jira/browse/HDFS-11082
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Rakesh R
>Assignee: SammiChen
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11082.001.patch
>
>
> The idea of this jira is to provide a new {{replicated EC policy}} so that we 
> can override the EC policy on a parent directory and go back to just 
> replicating the files based on replication factors.
> Thanks [~andrew.wang] for the 
> [discussions|https://issues.apache.org/jira/browse/HDFS-11072?focusedCommentId=15620743=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15620743].






[jira] [Updated] (HDFS-11082) Erasure Coding : Provide replicated EC policy to just replicating the files

2017-08-03 Thread SammiChen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SammiChen updated HDFS-11082:
-
Attachment: HDFS-11082.001.patch

Initial patch

> Erasure Coding : Provide replicated EC policy to just replicating the files
> ---
>
> Key: HDFS-11082
> URL: https://issues.apache.org/jira/browse/HDFS-11082
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Rakesh R
>Assignee: SammiChen
>Priority: Critical
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-11082.001.patch
>
>
> The idea of this jira is to provide a new {{replicated EC policy}} so that we 
> can override the EC policy on a parent directory and go back to just 
> replicating the files based on replication factors.
> Thanks [~andrew.wang] for the 
> [discussions|https://issues.apache.org/jira/browse/HDFS-11072?focusedCommentId=15620743=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15620743].






[jira] [Commented] (HDFS-12246) Ozone: potential thread leaks

2017-08-03 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-12246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16112411#comment-16112411
 ] 

Weiwei Yang commented on HDFS-12246:


Hi [~xyao]

Thanks for the review. These tests are failing even without the patch; it 
looks like they are related to HDFS-11580. I have addressed your comment 1 in 
the v2 patch.

> Ozone: potential thread leaks
> -
>
> Key: HDFS-12246
> URL: https://issues.apache.org/jira/browse/HDFS-12246
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
> Attachments: HDFS-12246-HDFS-7240.001.patch, 
> HDFS-12246-HDFS-7240.002.patch
>
>
> Per discussion in HDFS-12163, there might be some places that potentially leak 
> threads; we will use this jira to track the work to fix those leaks.






[jira] [Updated] (HDFS-12246) Ozone: potential thread leaks

2017-08-03 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-12246:
---
Attachment: HDFS-12246-HDFS-7240.002.patch

> Ozone: potential thread leaks
> -
>
> Key: HDFS-12246
> URL: https://issues.apache.org/jira/browse/HDFS-12246
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
> Attachments: HDFS-12246-HDFS-7240.001.patch, 
> HDFS-12246-HDFS-7240.002.patch
>
>
> Per discussion in HDFS-12163, there might be some places that potentially leak 
> threads; we will use this jira to track the work to fix those leaks.






[jira] [Updated] (HDFS-12247) Ozone: TestKSMMetrcis fails constantly

2017-08-03 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12247:
-
Attachment: HDFS-12247-HDFS-7240.002.patch

> Ozone: TestKSMMetrcis fails constantly
> --
>
> Key: HDFS-12247
> URL: https://issues.apache.org/jira/browse/HDFS-12247
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12247-HDFS-7240.001.patch, 
> HDFS-12247-HDFS-7240.002.patch
>
>
> The test {{TestKSMMetrcis#[.testVolumeOps,.testKeyOps]}} has been failing 
> constantly recently. The stack info:
> {noformat}
> java.lang.AssertionError: Bad value for metric NumVolumeOps expected:<6> but 
> was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.test.MetricsAsserts.assertCounter(MetricsAsserts.java:227)
>   at 
> org.apache.hadoop.ozone.ksm.TestKSMMetrcis.testVolumeOps(TestKSMMetrcis.java:89)
> {noformat}
> It seems the failures appeared after the commit of HDFS-12034.






[jira] [Updated] (HDFS-12247) Ozone: TestKSMMetrcis fails constantly

2017-08-03 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12247:
-
Attachment: (was: HDFS-12247-HDFS-7240.002.patch)

> Ozone: TestKSMMetrcis fails constantly
> --
>
> Key: HDFS-12247
> URL: https://issues.apache.org/jira/browse/HDFS-12247
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12247-HDFS-7240.001.patch
>
>
> The test {{TestKSMMetrcis#[.testVolumeOps,.testKeyOps]}} has been failing 
> constantly recently. The stack info:
> {noformat}
> java.lang.AssertionError: Bad value for metric NumVolumeOps expected:<6> but 
> was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.test.MetricsAsserts.assertCounter(MetricsAsserts.java:227)
>   at 
> org.apache.hadoop.ozone.ksm.TestKSMMetrcis.testVolumeOps(TestKSMMetrcis.java:89)
> {noformat}
> It seems the failures appeared after the commit of HDFS-12034.






[jira] [Updated] (HDFS-12231) Ozone: KSM: Add creation time field in volume info

2017-08-03 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12231:
-
Attachment: HDFS-12231-HDFS-7240.002.patch

Attached the new patch to fix the checkstyle warning.

> Ozone: KSM: Add creation time field in volume info
> --
>
> Key: HDFS-12231
> URL: https://issues.apache.org/jira/browse/HDFS-12231
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12231-HDFS-7240.001.patch, 
> HDFS-12231-HDFS-7240.002.patch
>
>
> This JIRA is similar to HDFS-12230. The creation time is also expected to be 
> returned, as in the design doc:
> {noformat}
> {
> "owner" : {
> "name" : "bilbo"
> },
> "quota" : {
> "unit" : "TB",
> "size" : 100
> },
> "volumeName" : "shire",
> "createdOn" : "Mon, Apr 04 2016 06:22:00 GMT",
> "createdBy" : "hdfs"
> }
> {noformat}






[jira] [Updated] (HDFS-12247) Ozone: TestKSMMetrcis fails constantly

2017-08-03 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-12247:
-
Attachment: HDFS-12247-HDFS-7240.002.patch

Thanks [~xyao] for the review and comment.
Attach the updated patch.

> Ozone: TestKSMMetrcis fails constantly
> --
>
> Key: HDFS-12247
> URL: https://issues.apache.org/jira/browse/HDFS-12247
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Affects Versions: HDFS-7240
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-12247-HDFS-7240.001.patch, 
> HDFS-12247-HDFS-7240.002.patch
>
>
> The test {{TestKSMMetrcis#[.testVolumeOps,.testKeyOps]}} has been failing 
> constantly recently. The stack info:
> {noformat}
> java.lang.AssertionError: Bad value for metric NumVolumeOps expected:<6> but 
> was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.test.MetricsAsserts.assertCounter(MetricsAsserts.java:227)
>   at 
> org.apache.hadoop.ozone.ksm.TestKSMMetrcis.testVolumeOps(TestKSMMetrcis.java:89)
> {noformat}
> It seems the failures appeared after the commit of HDFS-12034.






[jira] [Commented] (HDFS-11975) Provide a system-default EC policy

2017-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16112278#comment-16112278
 ] 

Hadoop QA commented on HDFS-11975:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 1s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
23s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
39s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 9 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 45s{color} | {color:orange} hadoop-hdfs-project: The patch generated 3 new + 
571 unchanged - 0 fixed = 574 total (was 571) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
12s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 48s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}106m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 |
|   | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-11975 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880160/HDFS-11975-009.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  xml  |
| uname | Linux fb7f919d0397 3.13.0-119-generic