[jira] [Commented] (HDFS-10242) Cannot create space quota of zero

2016-05-17 Thread Takashi Ohnishi (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15286165#comment-15286165
 ] 

Takashi Ohnishi commented on HDFS-10242:


Thanks [~ajisakaa] for reviewing and committing! :)

> Cannot create space quota of zero
> -
>
> Key: HDFS-10242
> URL: https://issues.apache.org/jira/browse/HDFS-10242
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, fs
>Affects Versions: 3.0.0-alpha1
>Reporter: Takashi Ohnishi
>Assignee: Takashi Ohnishi
> Fix For: 2.8.0
>
> Attachments: HDFS-10242.1.patch
>
>
> The `HDFS Quotas Guide` says
> {noformat}
> A quota of zero still permits files to be created, but no blocks can be added 
> to the files.
> {noformat}
> But this actually is not so. When I tried it, the dfsadmin command failed 
> with the message below.
> {noformat}
> $ hdfs dfsadmin -setSpaceQuota 0 /user/alice/dirWithSpaceQuota
> setSpaceQuota: Invalid values for quota : 9223372036854775807 and 0
> {noformat}
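
The error comes from client-side validation of the two quota values. A 
minimal, self-contained sketch of that kind of check (hypothetical names, not 
the committed HDFS-10242 fix) reproduces the message above:

{code}
// Hypothetical sketch of the validation implied by the error message;
// names and constants are illustrative only.
public final class QuotaCheck {
  static final long QUOTA_DONT_SET = Long.MAX_VALUE; // 9223372036854775807
  static final long QUOTA_RESET = -1L;               // clears an existing quota

  static void validate(long nsQuota, long ssQuota) {
    boolean nsOk = nsQuota > 0 || nsQuota == QUOTA_DONT_SET || nsQuota == QUOTA_RESET;
    boolean ssOk = ssQuota > 0 || ssQuota == QUOTA_DONT_SET || ssQuota == QUOTA_RESET;
    if (!nsOk || !ssOk) {
      throw new IllegalArgumentException(
          "Invalid values for quota : " + nsQuota + " and " + ssQuota);
    }
  }

  public static void main(String[] args) {
    // -setSpaceQuota 0 leaves the name quota untouched (QUOTA_DONT_SET) and
    // passes 0 as the space quota, which this check rejects.
    validate(QUOTA_DONT_SET, 0);
  }
}
{code}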






[jira] [Updated] (HDFS-10400) hdfs dfs -put exits with zero on error

2016-05-17 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10400:
-
Attachment: HDFS-10400.002.patch

The failed unit tests were caused by my patch changing the exit code for 
errors. It looks like many places in these failing tests would need to change. 
Maybe we can file another JIRA for that work and keep the exit code unchanged 
here. Updating the patch to fix the failed unit tests related to this JIRA.
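
For context, the contract under discussion is roughly the following (a hedged 
sketch with hypothetical names, not the actual FsShell code): a detected 
failure must surface as a non-zero process exit code.

{code}
import java.io.IOException;

// Minimal sketch of the expected exit-code contract; PutCommand and
// copyFromLocal are illustrative stand-ins, not the Hadoop implementation.
public final class PutCommand {
  static void copyFromLocal(String src, String dst) throws IOException {
    // Stand-in for the real upload; fails the way a full filesystem might.
    throw new IOException("Premature EOF: no length prefix available");
  }

  public static void main(String[] args) {
    int exitCode = 0;
    try {
      copyFromLocal("local.txt", "/user/alice/local.txt");
    } catch (IOException e) {
      System.err.println("put: " + e.getMessage());
      exitCode = 1; // never report success after a detected failure
    }
    System.exit(exitCode);
  }
}
{code}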

> hdfs dfs -put exits with zero on error
> --
>
> Key: HDFS-10400
> URL: https://issues.apache.org/jira/browse/HDFS-10400
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jo Desmet
>Assignee: Yiqun Lin
> Attachments: HDFS-10400.001.patch, HDFS-10400.002.patch
>
>
> On a filesystem that is about to fill up, execute "hdfs dfs -put" for a file 
> that is big enough to go over the limit. As a result, the command fails with 
> an exception; however, the command terminates normally (exit code 0).
> The expectation is that any detectable failure generates an exit code 
> different from zero.
> Documentation on 
> https://hadoop.apache.org/docs/r1.2.1/file_system_shell.html#put states:
> Exit Code:
> Returns 0 on success and -1 on error.
> Following is the exception generated:
> 16/05/11 13:37:07 INFO hdfs.DFSClient: Exception in createBlockOutputStream
> java.io.EOFException: Premature EOF: no length prefix available
> at 
> org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2282)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1352)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1271)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:464)
> 16/05/11 13:37:07 INFO hdfs.DFSClient: Abandoning 
> BP-1964113808-130.8.138.99-1446787670498:blk_1073835906_95114
> 16/05/11 13:37:08 INFO hdfs.DFSClient: Excluding datanode 
> DatanodeInfoWithStorage[130.8.138.99:50010,DS-eed7039a-8031-499e-85a5-7216b9d766a8,DISK]






[jira] [Commented] (HDFS-10242) Cannot create space quota of zero

2016-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15286196#comment-15286196
 ] 

Hudson commented on HDFS-10242:
---

ABORTED: Integrated in Hadoop-trunk-Commit #9788 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9788/])
HDFS-10242. Cannot create space quota of zero. Contributed by Takashi Ohnishi 
(aajisaka: rev 9fe5828f05accc6746cb08a43916f7557dac533a)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/QuotaUsage.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestQuota.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java


> Cannot create space quota of zero
> -
>
> Key: HDFS-10242
> URL: https://issues.apache.org/jira/browse/HDFS-10242
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, fs
>Affects Versions: 3.0.0-alpha1
>Reporter: Takashi Ohnishi
>Assignee: Takashi Ohnishi
> Fix For: 2.8.0
>
> Attachments: HDFS-10242.1.patch
>
>
> The `HDFS Quotas Guide` says
> {noformat}
> A quota of zero still permits files to be created, but no blocks can be added 
> to the files.
> {noformat}
> But this actually is not so. When I tried it, the dfsadmin command failed 
> with the message below.
> {noformat}
> $ hdfs dfsadmin -setSpaceQuota 0 /user/alice/dirWithSpaceQuota
> setSpaceQuota: Invalid values for quota : 9223372036854775807 and 0
> {noformat}






[jira] [Commented] (HDFS-10400) hdfs dfs -put exits with zero on error

2016-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15286212#comment-15286212
 ] 

Hadoop QA commented on HDFS-10400:
--

| (x) -1 overall |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 16s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| +1 | mvninstall | 7m 19s | trunk passed |
| +1 | compile | 7m 24s | trunk passed |
| +1 | checkstyle | 0m 23s | trunk passed |
| +1 | mvnsite | 0m 54s | trunk passed |
| +1 | mvneclipse | 0m 13s | trunk passed |
| +1 | findbugs | 1m 23s | trunk passed |
| +1 | javadoc | 0m 56s | trunk passed |
| +1 | mvninstall | 0m 43s | the patch passed |
| +1 | compile | 7m 16s | the patch passed |
| +1 | javac | 7m 16s | the patch passed |
| +1 | checkstyle | 0m 24s | hadoop-common-project/hadoop-common: patch generated 0 new + 85 unchanged - 1 fixed = 85 total (was 86) |
| +1 | mvnsite | 0m 57s | the patch passed |
| +1 | mvneclipse | 0m 14s | the patch passed |
| +1 | whitespace | 0m 0s | Patch has no whitespace issues. |
| +1 | findbugs | 1m 40s | the patch passed |
| +1 | javadoc | 1m 0s | the patch passed |
| -1 | unit | 8m 1s | hadoop-common in the patch failed. |
| +1 | asflicense | 0m 20s | Patch does not generate ASF License warnings. |
| | | 40m 16s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestDNS |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12804362/HDFS-10400.002.patch |
| JIRA Issue | HDFS-10400 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux f90b077365f1 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9fe5828 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/15459/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt |
| unit test logs | https://builds.apache.org/job/PreCommit-HDFS-Build/15459/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/15459/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/15459/console |
| Powered by | Apache Yetus 0.2.0 http://yetus.apache.org |


This message was automatically generated.



> hdfs dfs -put exits with zero on error
> --
>
> Key: HDFS-10400
> URL: https://issues.apache.org/jira/browse/HDFS-10400

[jira] [Created] (HDFS-10418) NPE in TestDistributedFileSystem.testDFSCloseOrdering

2016-05-17 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-10418:
-

 Summary: NPE in TestDistributedFileSystem.testDFSCloseOrdering
 Key: HDFS-10418
 URL: https://issues.apache.org/jira/browse/HDFS-10418
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs, test
Affects Versions: 2.8.0
 Environment: Jenkins
Reporter: Steve Loughran
Priority: Critical


Jenkins is failing with an NPE in close(): the close op assumes there's always 
a StorageStatistics instance. If there isn't, you get a stack trace.






[jira] [Commented] (HDFS-10418) NPE in TestDistributedFileSystem.testDFSCloseOrdering

2016-05-17 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15286316#comment-15286316
 ] 

Steve Loughran commented on HDFS-10418:
---

{code}
java.lang.NullPointerException: null
at 
org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:790)
at 
org.apache.hadoop.fs.FileSystem.processDeleteOnExit(FileSystem.java:1417)
at org.apache.hadoop.fs.FileSystem.close(FileSystem.java:2084)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1187)
at 
org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSCloseOrdering(TestDistributedFileSystem.java:217)
{code}
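
A guard of roughly this shape would avoid the NPE (a minimal sketch under the 
assumption that the statistics object can legitimately be absent; the field 
and counter names are hypothetical, not the real DistributedFileSystem API):

{code}
// Hypothetical sketch of a null-safe delete path.
final class SafeDeleteSketch {
  interface Stats { void incrementOpCounter(String op); }

  private final Stats storageStatistics; // may be null in some test setups

  SafeDeleteSketch(Stats stats) { this.storageStatistics = stats; }

  boolean delete(String path) {
    if (storageStatistics != null) { // guard instead of assuming presence
      storageStatistics.incrementOpCounter("op_delete");
    }
    return true; // stand-in for the actual delete logic
  }
}
{code}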

> NPE in TestDistributedFileSystem.testDFSCloseOrdering
> -
>
> Key: HDFS-10418
> URL: https://issues.apache.org/jira/browse/HDFS-10418
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, test
>Affects Versions: 2.8.0
> Environment: Jenkins
>Reporter: Steve Loughran
>Priority: Critical
>
> Jenkins is failing with an NPE in close(): the close op assumes there's 
> always a StorageStatistics instance. If there isn't, you get a stack trace.






[jira] [Commented] (HDFS-9732) Remove DelegationTokenIdentifier.toString() —for better logging output

2016-05-17 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15286324#comment-15286324
 ] 

Steve Loughran commented on HDFS-9732:
--

I'll let you handle it then...

> Remove DelegationTokenIdentifier.toString() —for better logging output
> --
>
> Key: HDFS-9732
> URL: https://issues.apache.org/jira/browse/HDFS-9732
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Yongjun Zhang
> Attachments: HADOOP-12752-001.patch, HDFS-9732-000.patch, 
> HDFS-9732.001.patch, HDFS-9732.002.patch, HDFS-9732.003.patch, 
> HDFS-9732.004.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> HDFS {{DelegationTokenIdentifier.toString()}} adds some diagnostics info: 
> owner, sequence number. But its superclass, 
> {{AbstractDelegationTokenIdentifier}}, contains a lot more information, 
> including token issue and expiry times.
> Because {{DelegationTokenIdentifier.toString()}} doesn't include this data, 
> information that is potentially useful for Kerberos diagnostics is lost.
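
A toString() along these lines would carry the missing fields (a sketch with 
hypothetical field names; the real identifier classes expose similar getters):

{code}
import java.util.Date;

// Sketch only: the kind of output being asked for, with issue and expiry
// times included so Kerberos diagnostics are not lost.
final class TokenIdentSketch {
  String owner = "alice";
  int sequenceNumber = 42;
  long issueDate = 1463500000000L; // millis since epoch
  long maxDate = 1464100000000L;

  @Override
  public String toString() {
    return "token for " + owner
        + ": seq=" + sequenceNumber
        + ", issued=" + new Date(issueDate)
        + ", expires=" + new Date(maxDate);
  }
}
{code}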






[jira] [Commented] (HDFS-8914) Document HA support in the HDFS HdfsDesign.md

2016-05-17 Thread Ravindra Babu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15286419#comment-15286419
 ] 

Ravindra Babu commented on HDFS-8914:
-

This patch has not been deployed yet. Please let me know when it will be deployed.

> Document HA support in the HDFS HdfsDesign.md
> -
>
> Key: HDFS-8914
> URL: https://issues.apache.org/jira/browse/HDFS-8914
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
> Environment: Documentation page in live
>Reporter: Ravindra Babu
>Assignee: Lars Francke
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HDFS-8914.1.patch, HDFS-8914.2.patch
>
>
> Please refer to these two links and correct one of them.
> http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html
> The NameNode machine is a single point of failure for an HDFS cluster. If the 
> NameNode machine fails, manual intervention is necessary. Currently, 
> automatic restart and failover of the NameNode software to another machine is 
> not supported.
> http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html
> The HDFS High Availability feature addresses the above problems by providing 
> the option of running two redundant NameNodes in the same cluster in an 
> Active/Passive configuration with a hot standby. This allows a fast failover 
> to a new NameNode in the case that a machine crashes, or a graceful 
> administrator-initiated failover for the purpose of planned maintenance.
> Please update the HdfsDesign article with the same facts to avoid confusion 
> in the reader's mind.






[jira] [Reopened] (HDFS-8914) Document HA support in the HDFS HdfsDesign.md

2016-05-17 Thread Ravindra Babu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Babu reopened HDFS-8914:
-

The documentation page still contains the "single point of failure" comment at 

http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html



> Document HA support in the HDFS HdfsDesign.md
> -
>
> Key: HDFS-8914
> URL: https://issues.apache.org/jira/browse/HDFS-8914
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
> Environment: Documentation page in live
>Reporter: Ravindra Babu
>Assignee: Lars Francke
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HDFS-8914.1.patch, HDFS-8914.2.patch
>
>
> Please refer to these two links and correct one of them.
> http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html
> The NameNode machine is a single point of failure for an HDFS cluster. If the 
> NameNode machine fails, manual intervention is necessary. Currently, 
> automatic restart and failover of the NameNode software to another machine is 
> not supported.
> http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html
> The HDFS High Availability feature addresses the above problems by providing 
> the option of running two redundant NameNodes in the same cluster in an 
> Active/Passive configuration with a hot standby. This allows a fast failover 
> to a new NameNode in the case that a machine crashes, or a graceful 
> administrator-initiated failover for the purpose of planned maintenance.
> Please update the HdfsDesign article with the same facts to avoid confusion 
> in the reader's mind.






[jira] [Commented] (HDFS-8914) Document HA support in the HDFS HdfsDesign.md

2016-05-17 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15286424#comment-15286424
 ] 

Akira AJISAKA commented on HDFS-8914:
-

It will be deployed when Apache Hadoop 2.7.3 or a higher version is released. 
I think Hadoop 2.7.3 will be released soon. Stay tuned!

> Document HA support in the HDFS HdfsDesign.md
> -
>
> Key: HDFS-8914
> URL: https://issues.apache.org/jira/browse/HDFS-8914
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
> Environment: Documentation page in live
>Reporter: Ravindra Babu
>Assignee: Lars Francke
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HDFS-8914.1.patch, HDFS-8914.2.patch
>
>
> Please refer to these two links and correct one of them.
> http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html
> The NameNode machine is a single point of failure for an HDFS cluster. If the 
> NameNode machine fails, manual intervention is necessary. Currently, 
> automatic restart and failover of the NameNode software to another machine is 
> not supported.
> http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html
> The HDFS High Availability feature addresses the above problems by providing 
> the option of running two redundant NameNodes in the same cluster in an 
> Active/Passive configuration with a hot standby. This allows a fast failover 
> to a new NameNode in the case that a machine crashes, or a graceful 
> administrator-initiated failover for the purpose of planned maintenance.
> Please update the HdfsDesign article with the same facts to avoid confusion 
> in the reader's mind.






[jira] [Commented] (HDFS-10303) DataStreamer#ResponseProcessor calculate packet acknowledge duration wrongly.

2016-05-17 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15286636#comment-15286636
 ] 

Kihwal Lee commented on HDFS-10303:
---

+1 The latest patch looks good. The failed test cases are unrelated to this 
change.

> DataStreamer#ResponseProcessor calculate packet acknowledge duration wrongly.
> -
>
> Key: HDFS-10303
> URL: https://issues.apache.org/jira/browse/HDFS-10303
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.2
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-10303-001.patch, HDFS-10303-002.patch
>
>
> Packet acknowledge duration should be calculated based on the packet send 
> time.
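
In sketch form, the corrected measurement anchors each ack's latency on the 
corresponding packet's own send time (hypothetical names; the real 
DataStreamer records this on its packet objects):

{code}
// Minimal sketch: latency is measured from the acked packet's own send
// time, not from whenever the previous ack happened to arrive.
final class AckLatencySketch {
  private long packetSendTimeNanos;

  void onPacketSend() {
    packetSendTimeNanos = System.nanoTime(); // record per-packet send time
  }

  long onAckReceived() {
    return System.nanoTime() - packetSendTimeNanos; // duration since send
  }
}
{code}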






[jira] [Updated] (HDFS-10303) DataStreamer#ResponseProcessor calculates packet ack latency incorrectly.

2016-05-17 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-10303:
--
Summary: DataStreamer#ResponseProcessor calculates packet ack latency 
incorrectly.  (was: DataStreamer#ResponseProcessor calculate packet acknowledge 
duration wrongly.)

> DataStreamer#ResponseProcessor calculates packet ack latency incorrectly.
> -
>
> Key: HDFS-10303
> URL: https://issues.apache.org/jira/browse/HDFS-10303
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.2
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Attachments: HDFS-10303-001.patch, HDFS-10303-002.patch
>
>
> Packet acknowledge duration should be calculated based on the packet send 
> time.






[jira] [Updated] (HDFS-10303) DataStreamer#ResponseProcessor calculates packet ack latency incorrectly.

2016-05-17 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-10303:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I've committed this to trunk, branch-2 and branch-2.8. Thanks for fixing the 
issue, [~surendrasingh].

> DataStreamer#ResponseProcessor calculates packet ack latency incorrectly.
> -
>
> Key: HDFS-10303
> URL: https://issues.apache.org/jira/browse/HDFS-10303
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.2
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 2.8.0
>
> Attachments: HDFS-10303-001.patch, HDFS-10303-002.patch
>
>
> Packet acknowledge duration should be calculated based on the packet send 
> time.






[jira] [Commented] (HDFS-10303) DataStreamer#ResponseProcessor calculates packet ack latency incorrectly.

2016-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15286681#comment-15286681
 ] 

Hudson commented on HDFS-10303:
---

FAILURE: Integrated in Hadoop-trunk-Commit #9792 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9792/])
HDFS-10303. DataStreamer#ResponseProcessor calculates packet ack latency 
(kihwal: rev 4a5819dae2b0ca8f8b6d94ef464882d079d86593)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDataStream.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java


> DataStreamer#ResponseProcessor calculates packet ack latency incorrectly.
> -
>
> Key: HDFS-10303
> URL: https://issues.apache.org/jira/browse/HDFS-10303
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.7.2
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
> Fix For: 2.8.0
>
> Attachments: HDFS-10303-001.patch, HDFS-10303-002.patch
>
>
> Packet acknowledge duration should be calculated based on the packet send 
> time.






[jira] [Commented] (HDFS-10418) NPE in TestDistributedFileSystem.testDFSCloseOrdering

2016-05-17 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15286686#comment-15286686
 ] 

Rushabh S Shah commented on HDFS-10418:
---

Maybe a dup of HDFS-10415 ?

> NPE in TestDistributedFileSystem.testDFSCloseOrdering
> -
>
> Key: HDFS-10418
> URL: https://issues.apache.org/jira/browse/HDFS-10418
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, test
>Affects Versions: 2.8.0
> Environment: Jenkins
>Reporter: Steve Loughran
>Priority: Critical
>
> Jenkins is failing with an NPE in close(): the close op assumes there's 
> always a StorageStatistics instance. If there isn't, you get a stack trace.






[jira] [Commented] (HDFS-10400) hdfs dfs -put exits with zero on error

2016-05-17 Thread Koji Noguchi (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15286769#comment-15286769
 ] 

Koji Noguchi commented on HDFS-10400:
-

I'm confused by this jira. Do we have a test case where hdfs dfs -put fails 
but still exits with zero?

An uncaught exception on the JVM would definitely exit with a non-zero value. 
Also, the stack trace in the description is only at INFO level, and usually 
the client goes through a couple of retries before erroring out with 
ERROR/FATAL. [~jo_des...@yahoo.com], how did you verify that dfs -put failed?


> hdfs dfs -put exits with zero on error
> --
>
> Key: HDFS-10400
> URL: https://issues.apache.org/jira/browse/HDFS-10400
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Jo Desmet
>Assignee: Yiqun Lin
> Attachments: HDFS-10400.001.patch, HDFS-10400.002.patch
>
>
> On a filesystem that is about to fill up, execute "hdfs dfs -put" for a file 
> that is big enough to go over the limit. As a result, the command fails with 
> an exception; however, the command terminates normally (exit code 0).
> The expectation is that any detectable failure generates an exit code 
> different from zero.
> Documentation on 
> https://hadoop.apache.org/docs/r1.2.1/file_system_shell.html#put states:
> Exit Code:
> Returns 0 on success and -1 on error.
> Following is the exception generated:
> 16/05/11 13:37:07 INFO hdfs.DFSClient: Exception in createBlockOutputStream
> java.io.EOFException: Premature EOF: no length prefix available
> at 
> org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2282)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1352)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1271)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:464)
> 16/05/11 13:37:07 INFO hdfs.DFSClient: Abandoning 
> BP-1964113808-130.8.138.99-1446787670498:blk_1073835906_95114
> 16/05/11 13:37:08 INFO hdfs.DFSClient: Excluding datanode 
> DatanodeInfoWithStorage[130.8.138.99:50010,DS-eed7039a-8031-499e-85a5-7216b9d766a8,DISK]






[jira] [Commented] (HDFS-10414) allow disabling trash on per-directory basis

2016-05-17 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15286790#comment-15286790
 ] 

Rushabh S Shah commented on HDFS-10414:
---

[~sershe]: Can you please elaborate on your use case?
You can use -skipTrash to disable trash per invocation.
Am I missing something?
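
For reference, trash can already be bypassed per invocation with the -rm flag:

{noformat}
$ hdfs dfs -rm -r -skipTrash /path/to/intermediate/data
{noformat}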

> allow disabling trash on per-directory basis
> 
>
> Key: HDFS-10414
> URL: https://issues.apache.org/jira/browse/HDFS-10414
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>
> For ETL, it might be useful to disable trash for certain directories only to 
> avoid the overhead, while keeping it enabled for rest of the cluster.






[jira] [Commented] (HDFS-10410) RedundantEditLogInputStream#LOG is set to wrong class

2016-05-17 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15286820#comment-15286820
 ] 

John Zhuge commented on HDFS-10410:
---

Thanks [~eddyxu]!

> RedundantEditLogInputStream#LOG is set to wrong class
> -
>
> Key: HDFS-10410
> URL: https://issues.apache.org/jira/browse/HDFS-10410
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
>  Labels: supportability
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: HDFS-10410.001.patch
>
>
> Found the issue while analyzing a log message that points to the wrong class.
> {code}
> class RedundantEditLogInputStream extends EditLogInputStream {
>   public static final Log LOG = 
> LogFactory.getLog(EditLogInputStream.class.getName());
> {code}
> should be changed to:
> {code}
>   public static final Log LOG = 
> LogFactory.getLog(RedundantEditLogInputStream.class.getName());
> {code}






[jira] [Commented] (HDFS-10188) libhdfs++: Implement debug allocators

2016-05-17 Thread Xiaowei Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15286912#comment-15286912
 ] 

Xiaowei Zhu commented on HDFS-10188:


When we allocate memory with new, we actually allocate sizeof(mem_struct) + 
size. So when we delete, I think it's not enough to free only the memory p 
points to; we should also free the mem_struct header.
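
In other words, something like the following layout (a C++ sketch of the idea 
under discussion; the names are illustrative, not the patch itself):

{code}
#include <cstddef>
#include <cstdlib>
#include <cstring>
#include <new>

// Sketch: a small header records the allocation size so the delete side
// can find and poison the whole block.
struct mem_struct {
  std::size_t mem_size;
};

void* debug_new(std::size_t size) {
  void* raw = std::malloc(sizeof(mem_struct) + size);
  if (!raw) throw std::bad_alloc();
  static_cast<mem_struct*>(raw)->mem_size = size;
  return static_cast<mem_struct*>(raw) + 1; // hand out memory past the header
}

void debug_delete(void* p) {
  if (!p) return;
  mem_struct* m = static_cast<mem_struct*>(p) - 1;      // step back to the header
  std::memset(m, 0, sizeof(mem_struct) + m->mem_size);  // poison freed memory
  std::free(m);                                         // free header + payload
}
{code}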

> libhdfs++: Implement debug allocators
> -
>
> Key: HDFS-10188
> URL: https://issues.apache.org/jira/browse/HDFS-10188
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Xiaowei Zhu
> Attachments: HDFS-10188.HDFS-8707.000.patch, 
> HDFS-10188.HDFS-8707.001.patch, HDFS-10188.HDFS-8707.002.patch
>
>
> I propose implementing a set of memory new/delete pairs with additional 
> checking to detect double deletes, read-after-delete, and write-after-deletes 
> to help debug resource ownership issues and prevent new ones from entering 
> the library.
> One of the most common issues we have is use-after-free. The continuation 
> pattern makes these really tricky to debug, because by the time a SIGSEGV is 
> raised, the context of what caused the error is long gone.
> The plan is to add allocators that can be turned on that can do the 
> following, in order of runtime cost.
> 1: no-op, forward through to default new/delete
> 2: make sure the memory given to the constructor is dirty, memset free'd 
> memory to 0
> 3: implement operator new with mmap, lock that region of memory once it's 
> been deleted; obviously this can't be left to run forever because the memory 
> is never unmapped
> This should also put some groundwork in place for implementing specialized 
> allocators for tiny objects that we churn through like std::string.






[jira] [Commented] (HDFS-10188) libhdfs++: Implement debug allocators

2016-05-17 Thread Xiaowei Zhu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15286914#comment-15286914
 ] 

Xiaowei Zhu commented on HDFS-10188:


Sorry. I think for new/delete, mem_struct is not needed at all, but it may 
still be needed for new[] and delete[].

> libhdfs++: Implement debug allocators
> -
>
> Key: HDFS-10188
> URL: https://issues.apache.org/jira/browse/HDFS-10188
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Xiaowei Zhu
> Attachments: HDFS-10188.HDFS-8707.000.patch, 
> HDFS-10188.HDFS-8707.001.patch, HDFS-10188.HDFS-8707.002.patch
>
>
> I propose implementing a set of memory new/delete pairs with additional 
> checking to detect double deletes, read-after-delete, and write-after-deletes 
> to help debug resource ownership issues and prevent new ones from entering 
> the library.
> One of the most common issues we have is use-after-free. The continuation 
> pattern makes these really tricky to debug, because by the time a SIGSEGV is 
> raised, the context of what caused the error is long gone.
> The plan is to add allocators that can be turned on that can do the 
> following, in order of runtime cost.
> 1: no-op, forward through to default new/delete
> 2: make sure the memory given to the constructor is dirty, memset free'd 
> memory to 0
> 3: implement operator new with mmap, lock that region of memory once it's 
> been deleted; obviously this can't be left to run forever because the memory 
> is never unmapped
> This should also put some groundwork in place for implementing specialized 
> allocators for tiny objects that we churn through like std::string.






[jira] [Commented] (HDFS-10363) Ozone: Introduce new config keys for SCM service endpoints

2016-05-17 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15286929#comment-15286929
 ] 

Anu Engineer commented on HDFS-10363:
-

Hi [~arpitagarwal], thanks for the patch. The code looks really nice; I really 
like the usage of optionals, so +1 on the state of the code.
It might fail on Jenkins, since the patch would be applied against trunk; 
renaming it to *HDFS-10363-HDFS-7240.003.patch* might help.






> Ozone: Introduce new config keys for SCM service endpoints
> --
>
> Key: HDFS-10363
> URL: https://issues.apache.org/jira/browse/HDFS-10363
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-10363.01.patch, HDFS-10363.02.patch, 
> OzoneScmEndpointconfiguration.pdf, ozone-site.xml
>
>
> The SCM should have its own config keys to specify service addresses.






[jira] [Commented] (HDFS-10415) TestDistributedFileSystem#testDFSCloseOrdering() fails on branch-2

2016-05-17 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15286960#comment-15286960
 ] 

Steve Loughran commented on HDFS-10415:
---

happens on Jenkins too

> TestDistributedFileSystem#testDFSCloseOrdering() fails on branch-2
> --
>
> Key: HDFS-10415
> URL: https://issues.apache.org/jira/browse/HDFS-10415
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.9.0
>Reporter: Sangjin Lee
>
> {noformat}
> Tests run: 24, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 51.096 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.TestDistributedFileSystem
> testDFSCloseOrdering(org.apache.hadoop.hdfs.TestDistributedFileSystem)  Time 
> elapsed: 0.045 sec  <<< ERROR!
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:790)
>   at 
> org.apache.hadoop.fs.FileSystem.processDeleteOnExit(FileSystem.java:1417)
>   at org.apache.hadoop.fs.FileSystem.close(FileSystem.java:2084)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1187)
>   at 
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSCloseOrdering(TestDistributedFileSystem.java:217)
> {noformat}
> This is with Java 8 on Mac. It passes fine on trunk. I haven't tried other 
> combinations. 






[jira] [Updated] (HDFS-10415) TestDistributedFileSystem#testDFSCloseOrdering() fails on branch-2

2016-05-17 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-10415:
--
Environment: jenkins

> TestDistributedFileSystem#testDFSCloseOrdering() fails on branch-2
> --
>
> Key: HDFS-10415
> URL: https://issues.apache.org/jira/browse/HDFS-10415
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.9.0
> Environment: jenkins
>Reporter: Sangjin Lee
>
> {noformat}
> Tests run: 24, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 51.096 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.TestDistributedFileSystem
> testDFSCloseOrdering(org.apache.hadoop.hdfs.TestDistributedFileSystem)  Time 
> elapsed: 0.045 sec  <<< ERROR!
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:790)
>   at 
> org.apache.hadoop.fs.FileSystem.processDeleteOnExit(FileSystem.java:1417)
>   at org.apache.hadoop.fs.FileSystem.close(FileSystem.java:2084)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1187)
>   at 
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSCloseOrdering(TestDistributedFileSystem.java:217)
> {noformat}
> This is with Java 8 on Mac. It passes fine on trunk. I haven't tried other 
> combinations. 






[jira] [Resolved] (HDFS-10418) NPE in TestDistributedFileSystem.testDFSCloseOrdering

2016-05-17 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-10418.
---
Resolution: Duplicate

you're right: closing

> NPE in TestDistributedFileSystem.testDFSCloseOrdering
> -
>
> Key: HDFS-10418
> URL: https://issues.apache.org/jira/browse/HDFS-10418
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, test
>Affects Versions: 2.8.0
> Environment: Jenkins
>Reporter: Steve Loughran
>Priority: Critical
>
> Jenkins is failing with an NPE in close(): the close op assumes there's 
> always a StorageStatistics instance. If there isn't, you get a stack trace.






[jira] [Commented] (HDFS-10363) Ozone: Introduce new config keys for SCM service endpoints

2016-05-17 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15286992#comment-15286992
 ] 

Arpit Agarwal commented on HDFS-10363:
--

Thanks for the quick review [~anu], I really appreciate it! I have renamed the 
patch and triggered a Jenkins run.

I am sure it will come back with a lot of failures; at least any Ozone tests 
using MiniDFSCluster will break. I am going to look at the test failures 
today, but barring any issues beyond the tests, I will commit this patch later 
today.

> Ozone: Introduce new config keys for SCM service endpoints
> --
>
> Key: HDFS-10363
> URL: https://issues.apache.org/jira/browse/HDFS-10363
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-10363-HDFS-7240.02.patch, HDFS-10363.01.patch, 
> HDFS-10363.02.patch, OzoneScmEndpointconfiguration.pdf, ozone-site.xml
>
>
> The SCM should have its own config keys to specify service addresses.






[jira] [Updated] (HDFS-9732) Improve DelegationTokenIdentifier.toString() —for better logging

2016-05-17 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-9732:

Summary: Improve DelegationTokenIdentifier.toString() —for better logging  
(was: Remove DelegationTokenIdentifier.toString() —for better logging output)

> Improve DelegationTokenIdentifier.toString() —for better logging
> 
>
> Key: HDFS-9732
> URL: https://issues.apache.org/jira/browse/HDFS-9732
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Yongjun Zhang
> Attachments: HADOOP-12752-001.patch, HDFS-9732-000.patch, 
> HDFS-9732.001.patch, HDFS-9732.002.patch, HDFS-9732.003.patch, 
> HDFS-9732.004.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> HDFS {{DelegationTokenIdentifier.toString()}} adds some diagnostics info: 
> owner, sequence number. But its superclass, 
> {{AbstractDelegationTokenIdentifier}}, contains a lot more information, 
> including token issue and expiry times.
> Because {{DelegationTokenIdentifier.toString()}} doesn't include this data, 
> information that is potentially useful for Kerberos diagnostics is lost.






[jira] [Updated] (HDFS-10363) Ozone: Introduce new config keys for SCM service endpoints

2016-05-17 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-10363:
-
Status: Patch Available  (was: Open)

> Ozone: Introduce new config keys for SCM service endpoints
> --
>
> Key: HDFS-10363
> URL: https://issues.apache.org/jira/browse/HDFS-10363
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-10363-HDFS-7240.02.patch, HDFS-10363.01.patch, 
> HDFS-10363.02.patch, OzoneScmEndpointconfiguration.pdf, ozone-site.xml
>
>
> The SCM should have its own config keys to specify service addresses.






[jira] [Updated] (HDFS-9732) Improve DelegationTokenIdentifier.toString() —for better logging

2016-05-17 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-9732:

Target Version/s:   (was: )
  Labels: supportability  (was: )

> Improve DelegationTokenIdentifier.toString() —for better logging
> 
>
> Key: HDFS-9732
> URL: https://issues.apache.org/jira/browse/HDFS-9732
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Yongjun Zhang
>  Labels: supportability
> Attachments: HADOOP-12752-001.patch, HDFS-9732-000.patch, 
> HDFS-9732.001.patch, HDFS-9732.002.patch, HDFS-9732.003.patch, 
> HDFS-9732.004.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> HDFS {{DelegationTokenIdentifier.toString()}} adds some diagnostics info: 
> owner, sequence number. But its superclass, 
> {{AbstractDelegationTokenIdentifier}}, contains a lot more information, 
> including token issue and expiry times.
> Because {{DelegationTokenIdentifier.toString()}} doesn't include this data, 
> information that is potentially useful for Kerberos diagnostics is lost.






[jira] [Updated] (HDFS-10363) Ozone: Introduce new config keys for SCM service endpoints

2016-05-17 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-10363:
-
Attachment: HDFS-10363-HDFS-7240.02.patch

> Ozone: Introduce new config keys for SCM service endpoints
> --
>
> Key: HDFS-10363
> URL: https://issues.apache.org/jira/browse/HDFS-10363
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-10363-HDFS-7240.02.patch, HDFS-10363.01.patch, 
> HDFS-10363.02.patch, OzoneScmEndpointconfiguration.pdf, ozone-site.xml
>
>
> The SCM should have its own config keys to specify service addresses.






[jira] [Updated] (HDFS-9732) Improve DelegationTokenIdentifier.toString() for better logging

2016-05-17 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-9732:

Summary: Improve DelegationTokenIdentifier.toString() for better logging  
(was: Improve DelegationTokenIdentifier.toString() —for better logging)

> Improve DelegationTokenIdentifier.toString() for better logging
> ---
>
> Key: HDFS-9732
> URL: https://issues.apache.org/jira/browse/HDFS-9732
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Yongjun Zhang
>  Labels: supportability
> Attachments: HADOOP-12752-001.patch, HDFS-9732-000.patch, 
> HDFS-9732.001.patch, HDFS-9732.002.patch, HDFS-9732.003.patch, 
> HDFS-9732.004.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> HDFS {{DelegationTokenIdentifier.toString()}} adds some diagnostics info: 
> owner, sequence number. But its superclass, 
> {{AbstractDelegationTokenIdentifier}}, contains a lot more information, 
> including token issue and expiry times.
> Because {{DelegationTokenIdentifier.toString()}} doesn't include this data, 
> information that is potentially useful for Kerberos diagnostics is lost.






[jira] [Updated] (HDFS-10363) Ozone: Introduce new config keys for SCM service endpoints

2016-05-17 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-10363:
-
Attachment: HDFS-10363-HDFS-7240.03.patch

v03 patch removes tabs from ozone-default.xml.

> Ozone: Introduce new config keys for SCM service endpoints
> --
>
> Key: HDFS-10363
> URL: https://issues.apache.org/jira/browse/HDFS-10363
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-10363-HDFS-7240.02.patch, 
> HDFS-10363-HDFS-7240.03.patch, HDFS-10363.01.patch, HDFS-10363.02.patch, 
> OzoneScmEndpointconfiguration.pdf, ozone-site.xml
>
>
> The SCM should have its own config keys to specify service addresses.






[jira] [Updated] (HDFS-10390) Implement asynchronous setAcl/getAclStatus for DistributedFileSystem

2016-05-17 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10390:
-
Attachment: (was: HDFS-10390-HDFS-9924.002.patch)

> Implement asynchronous setAcl/getAclStatus for DistributedFileSystem
> 
>
> Key: HDFS-10390
> URL: https://issues.apache.org/jira/browse/HDFS-10390
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10390-HDFS-9924.000.patch, 
> HDFS-10390-HDFS-9924.001.patch
>
>
> This is proposed to implement asynchronous setAcl/getAclStatus.
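
The call shape being proposed is roughly the following (a hedged sketch; the 
HDFS-9924 branch defines its own Future-returning API, so the names here are 
assumptions):

{code}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch only: an asynchronous setAcl that returns immediately with a
// future, instead of blocking on the NameNode round trip.
final class AsyncAclSketch {
  private final ExecutorService pool = Executors.newSingleThreadExecutor();

  CompletableFuture<Void> setAcl(String path, String aclSpec) {
    return CompletableFuture.runAsync(
        () -> System.out.println("setAcl(" + path + ", " + aclSpec + ")"), pool);
  }

  public static void main(String[] args) throws Exception {
    AsyncAclSketch fs = new AsyncAclSketch();
    fs.setAcl("/data", "user:alice:rwx").get(); // wait only when needed
    fs.pool.shutdown();
  }
}
{code}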






[jira] [Updated] (HDFS-10390) Implement asynchronous setAcl/getAclStatus for DistributedFileSystem

2016-05-17 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10390:
-
Attachment: HDFS-10390-HDFS-9924.003.patch

> Implement asynchronous setAcl/getAclStatus for DistributedFileSystem
> 
>
> Key: HDFS-10390
> URL: https://issues.apache.org/jira/browse/HDFS-10390
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10390-HDFS-9924.000.patch, 
> HDFS-10390-HDFS-9924.001.patch, HDFS-10390-HDFS-9924.002.patch, 
> HDFS-10390-HDFS-9924.003.patch
>
>
> This is proposed to implement asynchronous setAcl/getAclStatus.






[jira] [Updated] (HDFS-10390) Implement asynchronous setAcl/getAclStatus for DistributedFileSystem

2016-05-17 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10390:
-
Attachment: HDFS-10390-HDFS-9924.002.patch

> Implement asynchronous setAcl/getAclStatus for DistributedFileSystem
> 
>
> Key: HDFS-10390
> URL: https://issues.apache.org/jira/browse/HDFS-10390
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10390-HDFS-9924.000.patch, 
> HDFS-10390-HDFS-9924.001.patch, HDFS-10390-HDFS-9924.002.patch, 
> HDFS-10390-HDFS-9924.003.patch
>
>
> This is proposed to implement asynchronous setAcl/getAclStatus.






[jira] [Commented] (HDFS-10390) Implement asynchronous setAcl/getAclStatus for DistributedFileSystem

2016-05-17 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15287074#comment-15287074
 ] 

Xiaobing Zhou commented on HDFS-10390:
--

v002 is posted again to trigger another build. v003 has some minor changes.

> Implement asynchronous setAcl/getAclStatus for DistributedFileSystem
> 
>
> Key: HDFS-10390
> URL: https://issues.apache.org/jira/browse/HDFS-10390
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10390-HDFS-9924.000.patch, 
> HDFS-10390-HDFS-9924.001.patch, HDFS-10390-HDFS-9924.002.patch, 
> HDFS-10390-HDFS-9924.003.patch
>
>
> This is proposed to implement asynchronous setAcl/getAclStatus.






[jira] [Commented] (HDFS-10414) allow disabling trash on per-directory basis

2016-05-17 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15287091#comment-15287091
 ] 

Sergey Shelukhin commented on HDFS-10414:
-

Suppose there's an ETL process consisting of a sequence of Hive queries or 
other tools that write intermediate data to an HDFS directory hierarchy; it 
would be nice to disable trash for the root of that hierarchy, so that the 
intermediate data is not preserved in the trash when it's deleted or moved to 
a different FS, for example.
However, we don't want to disable the trash for the entire cluster, because 
there is also production data there for which it should be enabled.

> allow disabling trash on per-directory basis
> 
>
> Key: HDFS-10414
> URL: https://issues.apache.org/jira/browse/HDFS-10414
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>
> For ETL, it might be useful to disable trash for certain directories only to 
> avoid the overhead, while keeping it enabled for rest of the cluster.






[jira] [Updated] (HDFS-10397) Distcp should ignore -delete option if -diff option is provided instead of exiting

2016-05-17 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-10397:
-
Attachment: HDFS-10397.003.patch

Uploading the same patch for a pre-commit Jenkins run.

> Distcp should ignore -delete option if -diff option is provided instead of 
> exiting
> --
>
> Key: HDFS-10397
> URL: https://issues.apache.org/jira/browse/HDFS-10397
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10397.000.patch, HDFS-10397.001.patch, 
> HDFS-10397.002.patch, HDFS-10397.003.patch, HDFS-10397.003.patch
>
>
> In distcp, the {{-delete}} and {{-diff}} options are mutually exclusive. 
> [HDFS-8828] introduced strict checking, which makes existing applications 
> (or scripts) that previously worked just fine with both the {{-delete}} and 
> {{-diff}} options stop working because of the 
> {{java.lang.IllegalArgumentException: Diff is valid only with update 
> options}} exception.
> To keep this backward compatible, we can ignore the {{-delete}} option, 
> given the {{-diff}} option, instead of exiting the program. Along with that, 
> we can print a warning message saying that _Diff is valid only with update 
> options, and -delete option is ignored_.
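
A sketch of that leniency (hypothetical field names, not the distcp option 
classes):

{code}
// Sketch only: with -diff present, drop -delete and warn instead of
// throwing IllegalArgumentException.
final class DistCpOptionsSketch {
  boolean useDiff;
  boolean deleteMissing;

  void validate() {
    if (useDiff && deleteMissing) {
      deleteMissing = false; // ignore rather than fail
      System.err.println("WARN: Diff is valid only with update options, "
          + "and -delete option is ignored");
    }
  }
}
{code}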






[jira] [Updated] (HDFS-9271) Implement basic NN operations

2016-05-17 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-9271:
--
Attachment: HDFS-9271.HDFS-8707.001.patch

I haven't had time to spend on this, so I'm attaching the half-baked patch I 
have sitting around for safekeeping (not worth reviewing).

> Implement basic NN operations
> -
>
> Key: HDFS-9271
> URL: https://issues.apache.org/jira/browse/HDFS-9271
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: Bob Hansen
>Assignee: James Clampffer
> Attachments: HDFS-9271.HDFS-8707.000.patch, 
> HDFS-9271.HDFS-8707.001.patch
>
>
> Expose via C and C++ API:
> * mkdirs
> * rename
> * delete
> * stat
> * chmod
> * chown
> * getListing
> * setOwner
> * fsync



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8829) Make SO_RCVBUF and SO_SNDBUF size configurable for DataTransferProtocol sockets and allow configuring auto-tuning

2016-05-17 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15287150#comment-15287150
 ] 

Xiao Chen commented on HDFS-8829:
-

Thanks all for the contribution and nice discussions.
Any reason the interface {{PeerServer#getReceiveBufferSize}} is not public?

> Make SO_RCVBUF and SO_SNDBUF size configurable for DataTransferProtocol 
> sockets and allow configuring auto-tuning
> -
>
> Key: HDFS-8829
> URL: https://issues.apache.org/jira/browse/HDFS-8829
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.3.0, 2.6.0
>Reporter: He Tianyi
>Assignee: He Tianyi
> Fix For: 2.8.0
>
> Attachments: HDFS-8829.0001.patch, HDFS-8829.0002.patch, 
> HDFS-8829.0003.patch, HDFS-8829.0004.patch, HDFS-8829.0005.patch, 
> HDFS-8829.0006.patch
>
>
> {code:java}
>   private void initDataXceiver(Configuration conf) throws IOException {
>     // find free port or use privileged port provided
>     TcpPeerServer tcpPeerServer;
>     if (secureResources != null) {
>       tcpPeerServer = new TcpPeerServer(secureResources);
>     } else {
>       tcpPeerServer = new TcpPeerServer(dnConf.socketWriteTimeout,
>         DataNode.getStreamingAddr(conf));
>     }
>     tcpPeerServer.setReceiveBufferSize(HdfsConstants.DEFAULT_DATA_SOCKET_SIZE);
> {code}
> The last line sets SO_RCVBUF explicitly, thus disabling TCP auto-tuning on 
> some systems.
> Shall we make this behavior configurable?
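
For illustration, a hedged sketch of one way this could be made configurable 
(the configuration key name below is an assumption for illustration, and a 
non-positive value is taken to mean "leave auto-tuning on"):

{code:java}
// Hedged sketch, not the committed patch: only set SO_RCVBUF when a
// positive size is configured; size <= 0 keeps OS TCP auto-tuning active.
int recvBufferSize = conf.getInt(
    "dfs.datanode.transfer.socket.recv.buffer.size",  // assumed key name
    HdfsConstants.DEFAULT_DATA_SOCKET_SIZE);
if (recvBufferSize > 0) {
  tcpPeerServer.setReceiveBufferSize(recvBufferSize);
}
{code}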



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10404) CacheAdmin command usage message not shows completely

2016-05-17 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15287149#comment-15287149
 ] 

Colin Patrick McCabe commented on HDFS-10404:
-

+1.  Thanks, [~linyiqun].

> CacheAdmin command usage message not shows completely
> -
>
> Key: HDFS-10404
> URL: https://issues.apache.org/jira/browse/HDFS-10404
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-10404.001.patch, HDFS-10404.002.patch
>
>
> In {{CacheAdmin}}, there are two places where the command usage message is 
> not shown completely.
> {code}
> $ hdfs cacheadmin
> Usage: bin/hdfs cacheadmin [COMMAND]
>   [-addDirective -path <path> -pool <pool-name> [-force] 
> [-replication <replication>] [-ttl <time-to-live>]]
>   [-modifyDirective -id <id> [-path <path>] [-force] [-replication 
> <replication>] [-pool <pool-name>] [-ttl <time-to-live>]]
>   [-listDirectives [-stats] [-path <path>] [-pool <pool>] [-id <id>]
>   [-removeDirective <id>]
>   [-removeDirectives -path <path>]
>   [-addPool <name> [-owner <owner>] [-group <group>] [-mode <mode>] 
> [-limit <limit>] [-maxTtl <maxTtl>]
> {code}
> The commands {{-listDirectives}} and {{-addPool}} are not shown completely: 
> they are both lacking a ']' at the end of the line.
> There is a similar problem in {{CentralizedCacheManagement.md}}; the 
> {{CentralizedCacheManagement}} page shows it as well: 
> https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html
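
For reference, the fixed lines would simply append the missing bracket:

{code}
  [-listDirectives [-stats] [-path <path>] [-pool <pool>] [-id <id>]]
  [-addPool <name> [-owner <owner>] [-group <group>] [-mode <mode>] [-limit <limit>] [-maxTtl <maxTtl>]]
{code}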



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10404) Fix formatting of CacheAdmin command usage help text

2016-05-17 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-10404:

Summary: Fix formatting of CacheAdmin command usage help text  (was: 
CacheAdmin command usage message not shows completely)

> Fix formatting of CacheAdmin command usage help text
> 
>
> Key: HDFS-10404
> URL: https://issues.apache.org/jira/browse/HDFS-10404
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Attachments: HDFS-10404.001.patch, HDFS-10404.002.patch
>
>
> In {{CacheAdmin}}, there are two places where the command usage message is 
> not shown completely.
> {code}
> $ hdfs cacheadmin
> Usage: bin/hdfs cacheadmin [COMMAND]
>   [-addDirective -path <path> -pool <pool-name> [-force] 
> [-replication <replication>] [-ttl <time-to-live>]]
>   [-modifyDirective -id <id> [-path <path>] [-force] [-replication 
> <replication>] [-pool <pool-name>] [-ttl <time-to-live>]]
>   [-listDirectives [-stats] [-path <path>] [-pool <pool>] [-id <id>]
>   [-removeDirective <id>]
>   [-removeDirectives -path <path>]
>   [-addPool <name> [-owner <owner>] [-group <group>] [-mode <mode>] 
> [-limit <limit>] [-maxTtl <maxTtl>]
> {code}
> The commands {{-listDirectives}} and {{-addPool}} are not shown completely: 
> they are both lacking a ']' at the end of the line.
> There is a similar problem in {{CentralizedCacheManagement.md}}; the 
> {{CentralizedCacheManagement}} page shows it as well: 
> https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10414) allow disabling trash on per-directory basis

2016-05-17 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15287155#comment-15287155
 ] 

Rushabh S Shah commented on HDFS-10414:
---

bq. it would be nice to disable trash for the root of that hierarchy, so that 
the intermediate data is not preserved in the trash if it's deleted or moved to 
a different FS
If it is deleted programmatically, then it is not stored in the Trash directory.
If it is deleted via the fs shell, then you can use -skipTrash.
Am I missing anything?
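
A minimal sketch of that distinction (the path is hypothetical; 
{{FileSystem#delete}} and {{Trash#moveToAppropriateTrash}} are the existing 
APIs):

{code:java}
// Programmatic delete: removes the path directly; the trash is never
// consulted, regardless of fs.trash.interval.
fs.delete(new Path("/warehouse/etl/intermediate"), true /* recursive */);

// Trash-aware delete, which is what `hdfs dfs -rm -r <path>` does by
// default; passing -skipTrash on the shell bypasses this and deletes
// the path directly instead.
Trash.moveToAppropriateTrash(fs, new Path("/warehouse/etl/intermediate"), conf);
{code}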


> allow disabling trash on per-directory basis
> 
>
> Key: HDFS-10414
> URL: https://issues.apache.org/jira/browse/HDFS-10414
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>
> For ETL, it might be useful to disable trash for certain directories only to 
> avoid the overhead, while keeping it enabled for the rest of the cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10370) Allow DataNode to be started with numactl

2016-05-17 Thread Dave Marion (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Marion updated HDFS-10370:
---
Attachment: HDFS-10370-2.patch

Tested locally by applying similar changes to 2.7.0.

> Allow DataNode to be started with numactl
> -
>
> Key: HDFS-10370
> URL: https://issues.apache.org/jira/browse/HDFS-10370
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Dave Marion
>Assignee: Dave Marion
> Attachments: HDFS-10370-1.patch, HDFS-10370-2.patch
>
>
> Allow numactl constraints to be applied to the datanode process. The 
> implementation I have in mind involves two environment variables (enable and 
> parameters) in the datanode startup process. Basically, if enabled and 
> numactl exists on the system, then start the java process using it. Provide a 
> default set of parameters, and allow the user to override the default. Wiring 
> this up for the non-jsvc use case seems straightforward. Not sure how this 
> can be supported using jsvc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10397) Distcp should ignore -delete option if -diff option is provided instead of exiting

2016-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15287202#comment-15287202
 ] 

Hadoop QA commented on HDFS-10397:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
10s {color} | {color:green} hadoop-tools/hadoop-distcp: patch generated 0 new + 
76 unchanged - 11 fixed = 76 total (was 87) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 5s 
{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 51s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804477/HDFS-10397.003.patch |
| JIRA Issue | HDFS-10397 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2bed8f9571b2 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 34fddd1 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15462/testReport/ |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15462/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Distcp should ignore -delete option if -diff option is provided instead of 
> exiting
> --
>
> Key: HDFS-10397
> URL: https://issues.apache.org/jira/browse/HDFS-10397
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10397.000.patch, HDFS-10397.001.patch, 
> HDFS-10397.002.patch, HDFS-10397.003.patch, HDFS-10397.003

[jira] [Updated] (HDFS-10188) libhdfs++: Implement debug allocators

2016-05-17 Thread Xiaowei Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaowei Zhu updated HDFS-10188:
---
Attachment: HDFS-10188.HDFS-8707.003.patch

New patch using decltype as suggested by James. For new[] and delete[], the 
mem_struct usage is kept.

> libhdfs++: Implement debug allocators
> -
>
> Key: HDFS-10188
> URL: https://issues.apache.org/jira/browse/HDFS-10188
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Xiaowei Zhu
> Attachments: HDFS-10188.HDFS-8707.000.patch, 
> HDFS-10188.HDFS-8707.001.patch, HDFS-10188.HDFS-8707.002.patch, 
> HDFS-10188.HDFS-8707.003.patch
>
>
> I propose implementing a set of memory new/delete pairs with additional 
> checking to detect double-deletes, reads-after-delete, and writes-after-delete 
> to help debug resource ownership issues and prevent new ones from entering 
> the library.
> One of the most common issues we have is use-after-free bugs.  The 
> continuation pattern makes these really tricky to debug, because by the time a 
> SIGSEGV is raised the context of what caused the error is long gone.
> The plan is to add allocators that can be turned on to do the 
> following, in order of runtime cost:
> 1: no-op, forward through to the default new/delete
> 2: make sure the memory given to the constructor is dirty, and memset freed 
> memory to 0
> 3: implement operator new with mmap, and lock that region of memory once it's 
> been deleted; obviously this can't be left to run forever because the memory 
> is never unmapped
> This should also put some groundwork in place for implementing specialized 
> allocators for tiny objects that we churn through, like std::string.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-10415) TestDistributedFileSystem#testDFSCloseOrdering() fails on branch-2

2016-05-17 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu reassigned HDFS-10415:


Assignee: Mingliang Liu

> TestDistributedFileSystem#testDFSCloseOrdering() fails on branch-2
> --
>
> Key: HDFS-10415
> URL: https://issues.apache.org/jira/browse/HDFS-10415
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.9.0
> Environment: jenkins
>Reporter: Sangjin Lee
>Assignee: Mingliang Liu
>
> {noformat}
> Tests run: 24, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 51.096 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.TestDistributedFileSystem
> testDFSCloseOrdering(org.apache.hadoop.hdfs.TestDistributedFileSystem)  Time 
> elapsed: 0.045 sec  <<< ERROR!
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:790)
>   at 
> org.apache.hadoop.fs.FileSystem.processDeleteOnExit(FileSystem.java:1417)
>   at org.apache.hadoop.fs.FileSystem.close(FileSystem.java:2084)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1187)
>   at 
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSCloseOrdering(TestDistributedFileSystem.java:217)
> {noformat}
> This is with Java 8 on Mac. It passes fine on trunk. I haven't tried other 
> combinations. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9833) Erasure coding: recomputing block checksum on the fly by reconstructing the missed/corrupt block data

2016-05-17 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15287275#comment-15287275
 ] 

Kai Zheng commented on HDFS-9833:
-

The big patch looks pretty good. Thanks Rakesh! Some minor comments so far 
from a quick look.

* Unexpected change in PBHelperClient?
{code}
-case ENTERING_MAINTENANCE:
-  return DatanodeInfoProto.AdminState.ENTERING_MAINTENANCE;
-case IN_MAINTENANCE:
-  return DatanodeInfoProto.AdminState.IN_MAINTENANCE;
{code}

* Good idea to have {{StripedBlockReconstructor}} and 
{{StripedBlockChecksumReconstructor}} extend {{StripedReconstructor}}. 
For StripedBlockChecksumReconstructor, {{md5Writer}} could be renamed to 
something more general like {{checksumWriter}}, and {{reconstructAndTransfer}} 
could be {{reconstruct}} or {{reconstructChecksum}}, since no transferring 
happens here.

* I think the original main comments in StripedReconstructor would be better 
left where they are, because the rough idea still applies to the common base 
and can be shared by both subclasses.

Looking forward to the formal patch!
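
To make the structure under discussion concrete, a rough sketch with the 
renames suggested above (class responsibilities follow this comment; the field 
type and method bodies are assumptions, not the actual patch):

{code:java}
// Hedged sketch: shared reconstruction machinery lives in the base class,
// with subclasses differing only in what they do with the decoded data.
abstract class StripedReconstructor {
  // shared state: erasure decoder, source block readers, block group, etc.
  abstract void reconstruct() throws IOException;
}

// Rebuilds the missing block and transfers it to a target DataNode.
class StripedBlockReconstructor extends StripedReconstructor {
  @Override
  void reconstruct() throws IOException { /* decode + transfer to target */ }
}

// Rebuilds the missing block only to feed its bytes into a checksum
// computation; nothing is transferred to another DataNode.
class StripedBlockChecksumReconstructor extends StripedReconstructor {
  private DataOutputBuffer checksumWriter;  // was md5Writer in the draft

  @Override
  void reconstruct() throws IOException { /* decode + update checksumWriter */ }
}
{code}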

> Erasure coding: recomputing block checksum on the fly by reconstructing the 
> missed/corrupt block data
> -
>
> Key: HDFS-9833
> URL: https://issues.apache.org/jira/browse/HDFS-9833
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Rakesh R
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HDFS-9833-00-draft.patch
>
>
> As discussed in HDFS-8430 and HDFS-9694, to compute striped file checksum 
> even some of striped blocks are missed, we need to consider recomputing block 
> checksum on the fly for the missed/corrupt blocks. To recompute the block 
> checksum, the block data needs to be reconstructed by erasure decoding, and 
> the main needed codes for the block reconstruction could be borrowed from 
> HDFS-9719, the refactoring of the existing {{ErasureCodingWorker}}. In EC 
> worker, reconstructed blocks need to be written out to target datanodes, but 
> here in this case, the remote writing isn't necessary, as the reconstructed 
> block data is only used to recompute the checksum.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10415) TestDistributedFileSystem#testDFSCloseOrdering() fails on branch-2

2016-05-17 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-10415:
-
Attachment: HDFS-10415-branch-2.000.patch

Thanks for reporting this, [~sjlee0] and [~ste...@apache.org].

I think the problem is that we added a new {{StorageStatistics}} object to 
{{DistributedFileSystem}} in [HADOOP-13065], and it needs to be initialized in 
the {{initialize()}} method. The test simply creates a new instance by calling 
the constructor instead of using a factory method like {{FileSystem#get()}}, so 
the {{initialize()}} method is never called.

As a fix, I see two possibilities:
# Call the {{initialize()}} method explicitly before mocking the {{dfs}} field. 
This way, the newly added {{StorageStatistics}} object will be initialized 
before it is used.
# For the {{InOrder}} unit test, use spied objects instead of mocked objects. 
This way, we don't need to create our own test file system 
{{MyDistributedFileSystem}}.

I prefer the 2nd option, as the v0 patch does.
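
A hedged sketch of the second option (a sketch under assumptions: 
{{DistributedFileSystem#getClient}} is used as the accessor, and some test hook 
swaps the spy back into the file system; this is not the v0 patch verbatim):

{code:java}
// Spy a fully initialized file system's client instead of mocking a
// subclass, so initialize() has actually run before close() is tested.
DistributedFileSystem fs =
    (DistributedFileSystem) FileSystem.get(uri, conf);  // runs initialize()
DFSClient spyClient = Mockito.spy(fs.getClient());
// ...assume a test hook injects spyClient back into fs...

fs.deleteOnExit(new Path("/tmp/a"));
fs.close();

InOrder inOrder = Mockito.inOrder(spyClient);
inOrder.verify(spyClient).delete("/tmp/a", true);  // deleteOnExit first
inOrder.verify(spyClient).close();                 // then the client close
{code}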

> TestDistributedFileSystem#testDFSCloseOrdering() fails on branch-2
> --
>
> Key: HDFS-10415
> URL: https://issues.apache.org/jira/browse/HDFS-10415
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.9.0
> Environment: jenkins
>Reporter: Sangjin Lee
>Assignee: Mingliang Liu
> Attachments: HDFS-10415-branch-2.000.patch
>
>
> {noformat}
> Tests run: 24, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 51.096 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.TestDistributedFileSystem
> testDFSCloseOrdering(org.apache.hadoop.hdfs.TestDistributedFileSystem)  Time 
> elapsed: 0.045 sec  <<< ERROR!
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:790)
>   at 
> org.apache.hadoop.fs.FileSystem.processDeleteOnExit(FileSystem.java:1417)
>   at org.apache.hadoop.fs.FileSystem.close(FileSystem.java:2084)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1187)
>   at 
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSCloseOrdering(TestDistributedFileSystem.java:217)
> {noformat}
> This is with Java 8 on Mac. It passes fine on trunk. I haven't tried other 
> combinations. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10404) Fix formatting of CacheAdmin command usage help text

2016-05-17 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-10404:

  Resolution: Fixed
   Fix Version/s: 2.9.0
Target Version/s: 2.9.0
  Status: Resolved  (was: Patch Available)

> Fix formatting of CacheAdmin command usage help text
> 
>
> Key: HDFS-10404
> URL: https://issues.apache.org/jira/browse/HDFS-10404
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Fix For: 2.9.0
>
> Attachments: HDFS-10404.001.patch, HDFS-10404.002.patch
>
>
> In {{CacheAdmin}}, there are two places where the command usage message is 
> not shown completely.
> {code}
> $ hdfs cacheadmin
> Usage: bin/hdfs cacheadmin [COMMAND]
>   [-addDirective -path <path> -pool <pool-name> [-force] 
> [-replication <replication>] [-ttl <time-to-live>]]
>   [-modifyDirective -id <id> [-path <path>] [-force] [-replication 
> <replication>] [-pool <pool-name>] [-ttl <time-to-live>]]
>   [-listDirectives [-stats] [-path <path>] [-pool <pool>] [-id <id>]
>   [-removeDirective <id>]
>   [-removeDirectives -path <path>]
>   [-addPool <name> [-owner <owner>] [-group <group>] [-mode <mode>] 
> [-limit <limit>] [-maxTtl <maxTtl>]
> {code}
> The commands {{-listDirectives}} and {{-addPool}} are not shown completely: 
> they are both lacking a ']' at the end of the line.
> There is a similar problem in {{CentralizedCacheManagement.md}}; the 
> {{CentralizedCacheManagement}} page shows it as well: 
> https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10188) libhdfs++: Implement debug allocators

2016-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15287294#comment-15287294
 ] 

Hadoop QA commented on HDFS-10188:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 5m 0s {color} 
| {color:red} Docker failed to build yetus/hadoop:0cf5e66. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804483/HDFS-10188.HDFS-8707.003.patch
 |
| JIRA Issue | HDFS-10188 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15463/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: Implement debug allocators
> -
>
> Key: HDFS-10188
> URL: https://issues.apache.org/jira/browse/HDFS-10188
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Xiaowei Zhu
> Attachments: HDFS-10188.HDFS-8707.000.patch, 
> HDFS-10188.HDFS-8707.001.patch, HDFS-10188.HDFS-8707.002.patch, 
> HDFS-10188.HDFS-8707.003.patch
>
>
> I propose implementing a set of memory new/delete pairs with additional 
> checking to detect double-deletes, reads-after-delete, and writes-after-delete 
> to help debug resource ownership issues and prevent new ones from entering 
> the library.
> One of the most common issues we have is use-after-free bugs.  The 
> continuation pattern makes these really tricky to debug, because by the time a 
> SIGSEGV is raised the context of what caused the error is long gone.
> The plan is to add allocators that can be turned on to do the 
> following, in order of runtime cost:
> 1: no-op, forward through to the default new/delete
> 2: make sure the memory given to the constructor is dirty, and memset freed 
> memory to 0
> 3: implement operator new with mmap, and lock that region of memory once it's 
> been deleted; obviously this can't be left to run forever because the memory 
> is never unmapped
> This should also put some groundwork in place for implementing specialized 
> allocators for tiny objects that we churn through, like std::string.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10188) libhdfs++: Implement debug allocators

2016-05-17 Thread Xiaowei Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaowei Zhu updated HDFS-10188:
---
Attachment: (was: HDFS-10188.HDFS-8707.003.patch)

> libhdfs++: Implement debug allocators
> -
>
> Key: HDFS-10188
> URL: https://issues.apache.org/jira/browse/HDFS-10188
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Xiaowei Zhu
> Attachments: HDFS-10188.HDFS-8707.000.patch, 
> HDFS-10188.HDFS-8707.001.patch, HDFS-10188.HDFS-8707.002.patch, 
> HDFS-10188.HDFS-8707.003.patch
>
>
> I propose implementing a set of memory new/delete pairs with additional 
> checking to detect double-deletes, reads-after-delete, and writes-after-delete 
> to help debug resource ownership issues and prevent new ones from entering 
> the library.
> One of the most common issues we have is use-after-free bugs.  The 
> continuation pattern makes these really tricky to debug, because by the time a 
> SIGSEGV is raised the context of what caused the error is long gone.
> The plan is to add allocators that can be turned on to do the 
> following, in order of runtime cost:
> 1: no-op, forward through to the default new/delete
> 2: make sure the memory given to the constructor is dirty, and memset freed 
> memory to 0
> 3: implement operator new with mmap, and lock that region of memory once it's 
> been deleted; obviously this can't be left to run forever because the memory 
> is never unmapped
> This should also put some groundwork in place for implementing specialized 
> allocators for tiny objects that we churn through, like std::string.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10188) libhdfs++: Implement debug allocators

2016-05-17 Thread Xiaowei Zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaowei Zhu updated HDFS-10188:
---
Attachment: HDFS-10188.HDFS-8707.003.patch

> libhdfs++: Implement debug allocators
> -
>
> Key: HDFS-10188
> URL: https://issues.apache.org/jira/browse/HDFS-10188
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: Xiaowei Zhu
> Attachments: HDFS-10188.HDFS-8707.000.patch, 
> HDFS-10188.HDFS-8707.001.patch, HDFS-10188.HDFS-8707.002.patch, 
> HDFS-10188.HDFS-8707.003.patch
>
>
> I propose implementing a set of memory new/delete pairs with additional 
> checking to detect double-deletes, reads-after-delete, and writes-after-delete 
> to help debug resource ownership issues and prevent new ones from entering 
> the library.
> One of the most common issues we have is use-after-free bugs.  The 
> continuation pattern makes these really tricky to debug, because by the time a 
> SIGSEGV is raised the context of what caused the error is long gone.
> The plan is to add allocators that can be turned on to do the 
> following, in order of runtime cost:
> 1: no-op, forward through to the default new/delete
> 2: make sure the memory given to the constructor is dirty, and memset freed 
> memory to 0
> 3: implement operator new with mmap, and lock that region of memory once it's 
> been deleted; obviously this can't be left to run forever because the memory 
> is never unmapped
> This should also put some groundwork in place for implementing specialized 
> allocators for tiny objects that we churn through, like std::string.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10404) Fix formatting of CacheAdmin command usage help text

2016-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15287313#comment-15287313
 ] 

Hudson commented on HDFS-10404:
---

FAILURE: Integrated in Hadoop-trunk-Commit #9803 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9803/])
HDFS-10404. Fix formatting of CacheAdmin command usage help text (Yiqun Lin) 
(cmccabe: rev 7cd5ae62f639a857f24f5463f2aefc099c631a14)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/CentralizedCacheManagement.md
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/CacheAdmin.java
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md


> Fix formatting of CacheAdmin command usage help text
> 
>
> Key: HDFS-10404
> URL: https://issues.apache.org/jira/browse/HDFS-10404
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: caching
>Affects Versions: 2.7.1
>Reporter: Yiqun Lin
>Assignee: Yiqun Lin
> Fix For: 2.9.0
>
> Attachments: HDFS-10404.001.patch, HDFS-10404.002.patch
>
>
> In {{CacheAdmin}}, there are two places where the command usage message is 
> not shown completely.
> {code}
> $ hdfs cacheadmin
> Usage: bin/hdfs cacheadmin [COMMAND]
>   [-addDirective -path <path> -pool <pool-name> [-force] 
> [-replication <replication>] [-ttl <time-to-live>]]
>   [-modifyDirective -id <id> [-path <path>] [-force] [-replication 
> <replication>] [-pool <pool-name>] [-ttl <time-to-live>]]
>   [-listDirectives [-stats] [-path <path>] [-pool <pool>] [-id <id>]
>   [-removeDirective <id>]
>   [-removeDirectives -path <path>]
>   [-addPool <name> [-owner <owner>] [-group <group>] [-mode <mode>] 
> [-limit <limit>] [-maxTtl <maxTtl>]
> {code}
> The commands {{-listDirectives}} and {{-addPool}} are not shown completely: 
> they are both lacking a ']' at the end of the line.
> There is a similar problem in {{CentralizedCacheManagement.md}}; the 
> {{CentralizedCacheManagement}} page shows it as well: 
> https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10370) Allow DataNode to be started with numactl

2016-05-17 Thread Dave Marion (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Marion updated HDFS-10370:
---
Attachment: HDFS-10370-3.patch

> Allow DataNode to be started with numactl
> -
>
> Key: HDFS-10370
> URL: https://issues.apache.org/jira/browse/HDFS-10370
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Dave Marion
>Assignee: Dave Marion
> Attachments: HDFS-10370-1.patch, HDFS-10370-2.patch, 
> HDFS-10370-3.patch
>
>
> Allow numactl constraints to be applied to the datanode process. The 
> implementation I have in mind involves two environment variables (enable and 
> parameters) in the datanode startup process. Basically, if enabled and 
> numactl exists on the system, then start the java process using it. Provide a 
> default set of parameters, and allow the user to override the default. Wiring 
> this up for the non-jsvc use case seems straightforward. Not sure how this 
> can be supported using jsvc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10415) TestDistributedFileSystem#testDFSCloseOrdering() fails on branch-2

2016-05-17 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-10415:
-
Status: Patch Available  (was: Open)

If the patch looks good, I think we can apply it to the {{trunk}} branch as 
well. Thanks.

> TestDistributedFileSystem#testDFSCloseOrdering() fails on branch-2
> --
>
> Key: HDFS-10415
> URL: https://issues.apache.org/jira/browse/HDFS-10415
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.9.0
> Environment: jenkins
>Reporter: Sangjin Lee
>Assignee: Mingliang Liu
> Attachments: HDFS-10415-branch-2.000.patch
>
>
> {noformat}
> Tests run: 24, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 51.096 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.TestDistributedFileSystem
> testDFSCloseOrdering(org.apache.hadoop.hdfs.TestDistributedFileSystem)  Time 
> elapsed: 0.045 sec  <<< ERROR!
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:790)
>   at 
> org.apache.hadoop.fs.FileSystem.processDeleteOnExit(FileSystem.java:1417)
>   at org.apache.hadoop.fs.FileSystem.close(FileSystem.java:2084)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1187)
>   at 
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSCloseOrdering(TestDistributedFileSystem.java:217)
> {noformat}
> This is with Java 8 on Mac. It passes fine on trunk. I haven't tried other 
> combinations. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10390) Implement asynchronous setAcl/getAclStatus for DistributedFileSystem

2016-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15287398#comment-15287398
 ] 

Hadoop QA commented on HDFS-10390:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 42s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 25s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 27s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
58s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 41s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 26s 
{color} | {color:red} root: patch generated 5 new + 418 unchanged - 0 fixed = 
423 total (was 418) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 52s 
{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 56s 
{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 59m 0s 
{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 115m 13s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804475/HDFS-10390-HDFS-9924.003.patch
 |
| JIRA Issue | HDFS-10390 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c175e6f1f332 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 34fddd1 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15461/artifact/patchprocess/diff-checkstyle-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15461/testReport/ |
| modules | C:  hadoop-common-project/hadoop-common   
hadoop-hdfs-project/hadoop-hdfs-client   hadoop-hdfs-project/hadoop-hdfs  U: . |
| Console out

[jira] [Commented] (HDFS-10363) Ozone: Introduce new config keys for SCM service endpoints

2016-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15287397#comment-15287397
 ] 

Hadoop QA commented on HDFS-10363:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
5s {color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s 
{color} | {color:green} HDFS-7240 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s 
{color} | {color:green} HDFS-7240 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
19s {color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 37s 
{color} | {color:green} HDFS-7240 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 35s 
{color} | {color:green} HDFS-7240 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 22s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: patch generated 16 new + 
179 unchanged - 1 fixed = 195 total (was 180) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 5 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 28s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 32s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 34s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 43s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
26s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 160m 46s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | hadoop.ozone.TestStorageContainerManager |
|   | hadoop.ozone.web.TestOzoneVolumes |
|   | hadoop.ozone.web.client.TestKeys |
|   | hadoop.ozone.web.client.TestVolume |
|   | hadoop.ozone.web.TestOzoneWebAccess |
|   | hadoop.ozone.container.common.impl.TestContainerPersistence |
|   | hadoop.ozone.container.oz

[jira] [Commented] (HDFS-1208) DFSClient swallows InterruptedException

2016-05-17 Thread Bob Tiernay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-1208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15287435#comment-15287435
 ] 

Bob Tiernay commented on HDFS-1208:
---

Any word on this? My application is blocking because it cannot be interrupted. 

> DFSClient swallows InterruptedException
> ---
>
> Key: HDFS-1208
> URL: https://issues.apache.org/jira/browse/HDFS-1208
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Zheng Shao
>
> DFSClient sometimes swallowed InterruptedException. Is that intended?
>  
> According to the article below, we should never swallow an 
> InterruptedException. Applications might use InterruptedException for thread 
> cooperation.
> http://www.ibm.com/developerworks/java/library/j-jtp05236.html
>  
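
For illustration, the usual alternative to swallowing the exception (a generic 
sketch, not DFSClient's actual code; the sleep interval is hypothetical):

{code:java}
try {
  Thread.sleep(retryIntervalMs);
} catch (InterruptedException e) {
  // Don't swallow: restore the interrupt status so cooperating threads
  // and callers can observe it, then abort the current operation.
  Thread.currentThread().interrupt();
  throw new InterruptedIOException("Interrupted while waiting to retry");
}
{code}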



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10415) TestDistributedFileSystem#testDFSCloseOrdering() fails on branch-2

2016-05-17 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15287439#comment-15287439
 ] 

Steve Loughran commented on HDFS-10415:
---

We could also consider making dfs.close() resilient to invocation prior to 
initialize() being called. Clearly, it used to be. Given that delete() is 
being called in teardown, why not have it skip the counting?

Of course, normally you wouldn't have any files to delete in an FS that was 
never inited, in which case this problem never arises in the real system.

What happens if {{initialize()}} is called in the test?
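
A hedged sketch of what that resilience could look like (field names and 
superclass behavior are assumed; this is not the actual branch-2 code):

{code:java}
// Guard close() so it tolerates a file system on which initialize()
// was never called (the internal dfs client would still be null).
@Override
public void close() throws IOException {
  if (dfs == null) {      // never initialized: nothing to flush or count
    return;
  }
  try {
    super.close();        // processes deleteOnExit paths, among other things
  } finally {
    dfs.close();
  }
}
{code}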

> TestDistributedFileSystem#testDFSCloseOrdering() fails on branch-2
> --
>
> Key: HDFS-10415
> URL: https://issues.apache.org/jira/browse/HDFS-10415
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.9.0
> Environment: jenkins
>Reporter: Sangjin Lee
>Assignee: Mingliang Liu
> Attachments: HDFS-10415-branch-2.000.patch
>
>
> {noformat}
> Tests run: 24, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 51.096 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.TestDistributedFileSystem
> testDFSCloseOrdering(org.apache.hadoop.hdfs.TestDistributedFileSystem)  Time 
> elapsed: 0.045 sec  <<< ERROR!
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:790)
>   at 
> org.apache.hadoop.fs.FileSystem.processDeleteOnExit(FileSystem.java:1417)
>   at org.apache.hadoop.fs.FileSystem.close(FileSystem.java:2084)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1187)
>   at 
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSCloseOrdering(TestDistributedFileSystem.java:217)
> {noformat}
> This is with Java 8 on Mac. It passes fine on trunk. I haven't tried other 
> combinations. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-10419) Building HDFS on top of Ozone's storage containers

2016-05-17 Thread Jing Zhao (JIRA)
Jing Zhao created HDFS-10419:


 Summary: Building HDFS on top of Ozone's storage containers
 Key: HDFS-10419
 URL: https://issues.apache.org/jira/browse/HDFS-10419
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Jing Zhao
Assignee: Jing Zhao


In HDFS-7240, Ozone defines storage containers to store both the data and the 
metadata. The storage container layer provides an object storage interface and 
aims to manage data/metadata in a distributed manner. More details about 
storage containers can be found in the design doc in HDFS-7240.

HDFS can adopt the storage containers to store and manage blocks. The general 
idea is:
# Each block can be treated as an object and the block ID is the object's key.
# Blocks will still be stored in DataNodes but as objects in storage containers.
# The block management work can be separated out of the NameNode and will be 
handled by the storage container layer in a more distributed way. The NameNode 
will only manage the namespace (i.e., files and directories).
# For each file, the NameNode only needs to record a list of block IDs which 
are used as keys to obtain real data from storage containers.
# A new DFSClient implementation talks to both NameNode and the storage
container layer to read/write.

HDFS, especially the NameNode, can get much better scalability from this 
design. Currently the NameNode's heaviest workload comes from block 
management, which includes maintaining the block-DataNode mapping, receiving 
full/incremental block reports, tracking block states (under/over/mis-replicated), 
and joining every write pipeline protocol to guarantee data consistency. This 
work brings a high memory footprint and makes the NameNode suffer from GC 
pressure. HDFS-5477 already proposes converting the BlockManager into a 
service. If we can build HDFS on top of the storage container layer, we not 
only separate the BlockManager out of the NameNode, but also replace it with a 
new distributed management scheme.

The storage container work is currently in progress in HDFS-7240, and the work 
proposed here is still at an experimental/exploratory stage. We can do this 
experiment in a feature branch so that interested people can get involved.

A design doc will be uploaded later explaining more details.
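
To make the proposed split concrete, a hedged pseudocode sketch of a read path 
under this design (every name below is invented for illustration; the real 
interfaces are TBD until the design doc is posted):

{code:java}
// Hypothetical interfaces only, to show the division of labor.
interface Namespace {                   // what remains in the NameNode
  List<Long> getBlockIds(String path);  // files/directories -> block IDs
}
interface StorageContainerLayer {       // distributed block management
  byte[] readBlock(long blockId);       // locates and reads the object
}

static void readFile(Namespace nn, StorageContainerLayer containers,
                     String path, OutputStream out) throws IOException {
  for (long blockId : nn.getBlockIds(path)) {  // NameNode: namespace only
    out.write(containers.readBlock(blockId));  // container layer: the data
  }
}
{code}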



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10419) Building HDFS on top of Ozone's storage containers

2016-05-17 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-10419:
-
Description: 
In HDFS-7240, Ozone defines storage containers to store both the data and the 
metadata. The storage container layer provides an object storage interface and 
aims to manage data/metadata in a distributed manner. More details about 
storage containers can be found in the design doc in HDFS-7240.

HDFS can adopt the storage containers to store and manage blocks. The general 
idea is:
# Each block can be treated as an object and the block ID is the object's key.
# Blocks will still be stored in DataNodes but as objects in storage containers.
# The block management work can be separated out of the NameNode and will be 
handled by the storage container layer in a more distributed way. The NameNode 
will only manage the namespace (i.e., files and directories).
# For each file, the NameNode only needs to record a list of block IDs which 
are used as keys to obtain real data from storage containers.
# A new DFSClient implementation talks to both NameNode and the storage
container layer to read/write.

HDFS, especially the NameNode, can get much better scalability from this 
design. Currently the NameNode's heaviest workload comes from block 
management, which includes maintaining the block-DataNode mapping, receiving 
full/incremental block reports, tracking block states (under/over/mis-replicated), 
and joining every write pipeline protocol to guarantee data consistency. This 
work brings a high memory footprint and makes the NameNode suffer from GC 
pressure. HDFS-5477 already proposes converting the BlockManager into a 
service. If we can build HDFS on top of the storage container layer, we not 
only separate the BlockManager out of the NameNode, but also replace it with a 
new distributed management scheme.

The storage container work is currently in progress in HDFS-7240, and the work 
proposed here is still at an experimental/exploratory stage. We can do this 
experiment in a feature branch so that interested people can get involved.

A design doc will be uploaded later explaining more details.

  was:
In HDFS-7240, Ozone defines storage containers to store both the data and the 
metadata. The storage container layer provides an object storage interface and 
aims to manage data/metadata in a distributed manner. More details about 
storage containers can be found in the design doc in HDFS-7240.

HDFS can adopt the storage containers to store and manage blocks. The general 
idea is:
# Each block can be treated as an object and the block ID is the object's key.
# Blocks will still be stored in DataNodes but as objects in storage containers.
# The block management work can be separated out of the NameNode and will be 
handled by the storage container layer in a more distributed way. The NameNode 
will only manage the namespace (i.e., files and directories).
# For each file, the NameNode only needs to record a list of block IDs which 
are used as keys to obtain real data from storage containers.
# A new DFSClient implementation talks to both NameNode and the storage
container layer to read/write.

HDFS, especially the NameNode, can get much better scalability from this 
design. Currently the NameNode's heaviest workload comes from block 
management, which includes maintaining the block-DataNode mapping, receiving 
full/incremental block reports, tracking block states (under/over/mis-replicated), 
and joining every write pipeline protocol to guarantee data consistency. This 
work brings a high memory footprint and makes the NameNode suffer from GC 
pressure. HDFS-5477 already proposes converting the BlockManager into a 
service. If we can build HDFS on top of the storage container layer, we not 
only separate the BlockManager out of the NameNode, but also replace it with a 
new distributed management scheme.

The storage container work is currently in progress in HDFS-7240, and the work 
proposed here is still at an experimental/exploratory stage. We can do this 
experiment in a feature branch so that interested people can get involved.

A design doc will be uploaded later explaining more details.


> Building HDFS on top of Ozone's storage containers
> --
>
> Key: HDFS-10419
> URL: https://issues.apache.org/jira/browse/HDFS-10419
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>
> In HDFS-7240, Ozone defines storage containers to store both the data and the 
> metadata. The storage container layer provides an object storage interface 
> and aims to manage data/metadata in a distributed manner. More details about 
> storage containers can be found in the design doc in HDFS-7240.
> HDFS can adopt the storage containers to store and manage blocks. The general 
> idea

[jira] [Updated] (HDFS-10419) Building HDFS on top of Ozone's storage containers

2016-05-17 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-10419:
-
Description: 
In HDFS-7240, Ozone defines storage containers to store both the data and the 
metadata. The storage container layer provides an object storage interface and 
aims to manage data/metadata in a distributed manner. More details about 
storage containers can be found in the design doc in HDFS-7240.

HDFS can adopt the storage containers to store and manage blocks. The general 
idea is:
# Each block can be treated as an object and the block ID is the object's key.
# Blocks will still be stored in DataNodes but as objects in storage containers.
# The block management work can be separated out of the NameNode and will be 
handled by the storage container layer in a more distributed way. The NameNode 
will only manage the namespace (i.e., files and directories).
# For each file, the NameNode only needs to record a list of block IDs which 
are used as keys to obtain real data from storage containers.
# A new DFSClient implementation talks to both NameNode and the storage 
container layer to read/write.

HDFS, especially the NameNode, can get much better scalability from this 
design. Currently the NameNode's heaviest workload comes from block 
management, which includes maintaining the block-DataNode mapping, receiving 
full/incremental block reports, tracking block states (under/over/mis- 
replicated), and joining every write pipeline protocol to guarantee data 
consistency. This work brings a high memory footprint and makes the NameNode 
suffer from GC. HDFS-5477 already proposes converting the BlockManager into a 
service. If we can build HDFS on top of the storage container layer, we not 
only separate the BlockManager out of the NameNode but also replace it with a 
new distributed management scheme.

The storage container work is currently in progress in HDFS-7240, and the work 
proposed here is still at an experimental/exploratory stage. We can do this 
experiment in a feature branch so that interested people can get involved.

A design doc will be uploaded later explaining more details.
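
To make the idea above concrete, here is a minimal sketch of the contract a 
block-as-object design implies. This is an editor's illustration only: the 
interface name and method signatures are assumptions for exposition, not the 
HDFS-7240 API.

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;

/**
 * Hypothetical client-side view of the storage container layer. Each block
 * is an object whose key is the block ID, so the NameNode only needs to
 * record the list of IDs per file.
 */
public interface StorageContainerClient {
  /** Store one block's bytes under its block ID (the object key). */
  void putBlock(long blockId, ByteBuffer data) throws IOException;

  /** Fetch the object stored for the given block ID. */
  ByteBuffer getBlock(long blockId) throws IOException;

  /** Remove the object once no file references the block any more. */
  void deleteBlock(long blockId) throws IOException;
}
{code}

A new DFSClient would then resolve a path to block IDs through the NameNode 
and issue getBlock/putBlock calls directly against the container layer, which 
is exactly the split described in the list above.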



> Building HDFS on top of Ozone's storage containers
> --
>
> Key: HDFS-10419
> URL: https://issues.apache.org/jira/browse/HDFS-10419
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jing Zhao
>Assignee: Jing Zhao
>

[jira] [Updated] (HDFS-10417) Actionable msgs for checkBlockLocalPathAccess

2016-05-17 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-10417:
--
Assignee: Tianyin Xu

> Actionable msgs for checkBlockLocalPathAccess
> -
>
> Key: HDFS-10417
> URL: https://issues.apache.org/jira/browse/HDFS-10417
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.2
>Reporter: Tianyin Xu
>Assignee: Tianyin Xu
>Priority: Minor
> Attachments: HDFS-10417.000.patch
>
>
> The exception message thrown by {{checkBlockLocalPathAccess}} is very 
> specific to an implementation detail. It's really hard for users to 
> understand unless they read and understand the code.
> The code is shown as follows:
> {code:title=org.apache.hadoop.hdfs.server.datanode.DataNode|borderStyle=solid}
>   private void checkBlockLocalPathAccess() throws IOException {
> checkKerberosAuthMethod("getBlockLocalPathInfo()");
> String currentUser = 
> UserGroupInformation.getCurrentUser().getShortUserName();
> if (!usersWithLocalPathAccess.contains(currentUser)) {
>   throw new AccessControlException(
>   "Can't continue with getBlockLocalPathInfo() "
>   + "authorization. The user " + currentUser
>   + " is not allowed to call getBlockLocalPathInfo");
> }
>   }
> {code}
> (basically, the user needs to understand the code logic of getBlockLocalPathInfo)
> \\
> Note that {{usersWithLocalPathAccess}} is a *private final* field populated 
> purely from the configuration setting {{dfs.block.local-path-access.user}}:
> {code:title=org.apache.hadoop.hdfs.server.datanode.DataNode|borderStyle=solid}
> private final List<String> usersWithLocalPathAccess;
> 
> this.usersWithLocalPathAccess = Arrays.asList(
> 
> conf.getTrimmedStrings(DFSConfigKeys.DFS_BLOCK_LOCAL_PATH_ACCESS_USER_KEY));
> {code}
> In other words, the check fails simply because the current user is not 
> specified in the configuration setting 
> {{dfs.block.local-path-access.user}}. The log message should be much 
> clearer, to make it easy for users to take action, as demonstrated in the 
> attached patch.
> Thanks!
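
The attached patch is not reproduced in this thread, but the direction is easy 
to sketch: name the configuration key in the exception so the operator knows 
what to change. The class below is an editor's illustration with assumed 
values, not the patch itself.

{code:java}
import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.security.AccessControlException;

public class LocalPathAccessSketch {
  // Stand-ins for the DataNode's config key and field; values illustrative.
  private static final String KEY = "dfs.block.local-path-access.user";
  private static final List<String> usersWithLocalPathAccess =
      Arrays.asList("hbase");

  static void checkBlockLocalPathAccess(String currentUser)
      throws AccessControlException {
    if (!usersWithLocalPathAccess.contains(currentUser)) {
      // Point the operator at the setting rather than an internal method.
      throw new AccessControlException("getBlockLocalPathInfo() is not"
          + " allowed for user " + currentUser + "; add the user to "
          + KEY + " to permit legacy short-circuit access.");
    }
  }
}
{code}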






[jira] [Updated] (HDFS-10417) Improve error message from checkBlockLocalPathAccess

2016-05-17 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-10417:
--
Summary: Improve error message from checkBlockLocalPathAccess  (was: 
Actionable msgs for checkBlockLocalPathAccess)

> Improve error message from checkBlockLocalPathAccess
> 
>
> Key: HDFS-10417
> URL: https://issues.apache.org/jira/browse/HDFS-10417
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.2
>Reporter: Tianyin Xu
>Assignee: Tianyin Xu
>Priority: Minor
> Attachments: HDFS-10417.000.patch
>






[jira] [Commented] (HDFS-10417) Improve error message from checkBlockLocalPathAccess

2016-05-17 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15287461#comment-15287461
 ] 

Kihwal Lee commented on HDFS-10417:
---

+1.  BTW, this is for the "legacy" short-circuit reads. The newer short-circuit 
read is more flexible and does not require special user setup.

> Improve error message from checkBlockLocalPathAccess
> 
>
> Key: HDFS-10417
> URL: https://issues.apache.org/jira/browse/HDFS-10417
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.2
>Reporter: Tianyin Xu
>Assignee: Tianyin Xu
>Priority: Minor
> Attachments: HDFS-10417.000.patch
>






[jira] [Updated] (HDFS-10417) Improve error message from checkBlockLocalPathAccess

2016-05-17 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-10417:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-2 and branch-2.8. Thanks for improving the error 
message, [~tianyin].

> Improve error message from checkBlockLocalPathAccess
> 
>
> Key: HDFS-10417
> URL: https://issues.apache.org/jira/browse/HDFS-10417
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.2
>Reporter: Tianyin Xu
>Assignee: Tianyin Xu
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-10417.000.patch
>






[jira] [Commented] (HDFS-10417) Improve error message from checkBlockLocalPathAccess

2016-05-17 Thread Tianyin Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15287465#comment-15287465
 ] 

Tianyin Xu commented on HDFS-10417:
---

Thanks a lot, [~kihwal]! Yes, I understand. The related docs are here:
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/ShortCircuitLocalReads.html

> Improve error message from checkBlockLocalPathAccess
> 
>
> Key: HDFS-10417
> URL: https://issues.apache.org/jira/browse/HDFS-10417
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.2
>Reporter: Tianyin Xu
>Assignee: Tianyin Xu
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-10417.000.patch
>






[jira] [Reopened] (HDFS-10417) Improve error message from checkBlockLocalPathAccess

2016-05-17 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee reopened HDFS-10417:
---

> Improve error message from checkBlockLocalPathAccess
> 
>
> Key: HDFS-10417
> URL: https://issues.apache.org/jira/browse/HDFS-10417
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.2
>Reporter: Tianyin Xu
>Assignee: Tianyin Xu
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-10417.000.patch
>






[jira] [Issue Comment Deleted] (HDFS-10417) Improve error message from checkBlockLocalPathAccess

2016-05-17 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-10417:
--
Comment: was deleted

(was: Committed to trunk, branch-2 and branch-2.8. Thanks for improving the 
error message, [~tianyin].)

> Improve error message from checkBlockLocalPathAccess
> 
>
> Key: HDFS-10417
> URL: https://issues.apache.org/jira/browse/HDFS-10417
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.2
>Reporter: Tianyin Xu
>Assignee: Tianyin Xu
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-10417.000.patch
>






[jira] [Commented] (HDFS-10417) Improve error message from checkBlockLocalPathAccess

2016-05-17 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15287488#comment-15287488
 ] 

Kihwal Lee commented on HDFS-10417:
---

Sorry, I reverted it. It breaks a test case that compares the error string. It 
will be a simple fix. Please provide an updated patch.

> Improve error message from checkBlockLocalPathAccess
> 
>
> Key: HDFS-10417
> URL: https://issues.apache.org/jira/browse/HDFS-10417
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.2
>Reporter: Tianyin Xu
>Assignee: Tianyin Xu
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-10417.000.patch
>






[jira] [Commented] (HDFS-10188) libhdfs++: Implement debug allocators

2016-05-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15287491#comment-15287491
 ] 

Hadoop QA commented on HDFS-10188:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 47s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
10s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 5s 
{color} | {color:green} HDFS-8707 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 52s 
{color} | {color:green} HDFS-8707 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s 
{color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} HDFS-8707 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 47s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 56s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 4m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 5m 35s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 5m 33s 
{color} | {color:green} hadoop-hdfs-native-client in the patch passed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 46s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0cf5e66 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12804493/HDFS-10188.HDFS-8707.003.patch
 |
| JIRA Issue | HDFS-10188 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 0b3c6ca6f66a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HDFS-8707 / d187112 |
| Default Java | 1.7.0_101 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_91 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_101 |
| JDK v1.7.0_101  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15464/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/15464/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> libhdfs++: Implement debug allocators
> -
>
> Key: HDFS-10188
> URL: https://issues.apache.org/jira/browse/HDFS-10188
> Project: Hadoop HDFS
>  Issue Type: 

[jira] [Comment Edited] (HDFS-10417) Improve error message from checkBlockLocalPathAccess

2016-05-17 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15287488#comment-15287488
 ] 

Kihwal Lee edited comment on HDFS-10417 at 5/17/16 8:24 PM:


Sorry, I reverted it. It breaks a test case that compares the error string. It 
will be a simple fix. Please provide an updated patch.

See {{TestShortCircuitLocalRead.testDeprecatedGetBlockLocalPathInfoRpc}}.


was (Author: kihwal):
Sorry, I reverted it. It breaks a test case that compares the error string. It 
will be a simple fix. Please provide an updated patch.

> Improve error message from checkBlockLocalPathAccess
> 
>
> Key: HDFS-10417
> URL: https://issues.apache.org/jira/browse/HDFS-10417
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.2
>Reporter: Tianyin Xu
>Assignee: Tianyin Xu
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-10417.000.patch
>






[jira] [Commented] (HDFS-10417) Improve error message from checkBlockLocalPathAccess

2016-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15287522#comment-15287522
 ] 

Hudson commented on HDFS-10417:
---

SUCCESS: Integrated in Hadoop-trunk-Commit #9806 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9806/])
HDFS-10417. Improve error message from checkBlockLocalPathAccess. (kihwal: rev 
0942954e8ab6bef636c2648d0d5a0227a50f799c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
Revert "HDFS-10417. Improve error message from (kihwal: rev 
1356cbe9941c1692c6614201e63187916a3eebb0)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java


> Improve error message from checkBlockLocalPathAccess
> 
>
> Key: HDFS-10417
> URL: https://issues.apache.org/jira/browse/HDFS-10417
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.2
>Reporter: Tianyin Xu
>Assignee: Tianyin Xu
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-10417.000.patch
>






[jira] [Updated] (HDFS-10363) Ozone: Introduce new config keys for SCM service endpoints

2016-05-17 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-10363:
-
Attachment: HDFS-10363-HDFS-7240.04.patch

The v04 patch fixes checkstyle issues and includes a bugfix in 
SCM#updateListenAddress, where the listener port number was not updated 
correctly. I found the second issue while debugging MiniOzoneCluster failures.

Will post a patch to fully fix the Mini{DFS,Ozone}Cluster test failures in a 
separate Jira.

> Ozone: Introduce new config keys for SCM service endpoints
> --
>
> Key: HDFS-10363
> URL: https://issues.apache.org/jira/browse/HDFS-10363
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-10363-HDFS-7240.02.patch, 
> HDFS-10363-HDFS-7240.03.patch, HDFS-10363-HDFS-7240.04.patch, 
> HDFS-10363.01.patch, HDFS-10363.02.patch, OzoneScmEndpointconfiguration.pdf, 
> ozone-site.xml
>
>
> The SCM should have its own config keys to specify service addresses.






[jira] [Commented] (HDFS-9782) RollingFileSystemSink should have configurable roll interval

2016-05-17 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15287586#comment-15287586
 ] 

Karthik Kambatla commented on HDFS-9782:


Sorry for the delay in getting to this. Looks mostly good. Some comments:
# Is the empty constructor there so Reflection works?
# The Javadoc for stringifySecurityProperty, findCurrentDirectory, 
createOrAppendLogFile, and doTestGetRollInterval is broken. Mind fixing it?
# Nit: Should checkForProperty be renamed to checkIfPropertyExists for more 
clarity?
# RollingFileSystemSink#setInitialFlushTime is quite confusing to me. Can we 
clarify all the funkiness going on there? Maybe add more comments? Maybe use a 
more meaningful variable name than millis?

> RollingFileSystemSink should have configurable roll interval
> 
>
> Key: HDFS-9782
> URL: https://issues.apache.org/jira/browse/HDFS-9782
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HDFS-9782.001.patch, HDFS-9782.002.patch, 
> HDFS-9782.003.patch, HDFS-9782.004.patch, HDFS-9782.005.patch, 
> HDFS-9782.006.patch, HDFS-9782.007.patch
>
>
> Right now it defaults to rolling at the top of every hour. Instead, that 
> interval should be configurable. The interval should also allow for some 
> play so that all hosts don't try to flush their files simultaneously.
> I'm filing this in HDFS because I suspect it will involve touching the HDFS 
> tests.  If it turns out not to, I'll move it into common instead.






[jira] [Commented] (HDFS-9732) Improve DelegationTokenIdentifier.toString() for better logging

2016-05-17 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15287588#comment-15287588
 ] 

Yongjun Zhang commented on HDFS-9732:
-

Thanks [~steve_l], [~cnauroth] and [~aw] again!

I just committed to trunk.

I attempted to cherry-pick to branch-2 etc. and found quite a few conflicts. 
Then I found that one dependent jira is itself incompatible and thus never 
made its way to branch-2. See:

https://issues.apache.org/jira/browse/HDFS-9085?focusedCommentId=14791503&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14791503

Even if I got this into branch-2, there are still other jiras missing from 
branch-2 that make the backport unclean. The test I used in 
TestDelegationTokenFetcher doesn't exist in branch-2.

At this point, I decided to commit to trunk only, since we are working on 3.0 
anyway.


> Improve DelegationTokenIdentifier.toString() for better logging
> ---
>
> Key: HDFS-9732
> URL: https://issues.apache.org/jira/browse/HDFS-9732
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Yongjun Zhang
>  Labels: supportability
> Attachments: HADOOP-12752-001.patch, HDFS-9732-000.patch, 
> HDFS-9732.001.patch, HDFS-9732.002.patch, HDFS-9732.003.patch, 
> HDFS-9732.004.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> HDFS {{DelegationTokenIdentifier.toString()}} adds some diagnostic info: 
> owner, sequence number. But its superclass, 
> {{AbstractDelegationTokenIdentifier}}, contains a lot more information, 
> including token issue and expiry times.
> Because {{DelegationTokenIdentifier.toString()}} doesn't include this data, 
> information that is potentially useful for Kerberos diagnostics is lost.
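
To illustrate what a richer string could carry, here is an editor's sketch 
(not the committed patch) that pulls in the issue/expiry fields inherited from 
AbstractDelegationTokenIdentifier:

{code:java}
import java.util.Date;
import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;

public final class TokenIdToStringSketch {
  /** Surface the inherited issue/expiry data that toString() drops today. */
  static String verbose(DelegationTokenIdentifier id) {
    return "HDFS_DELEGATION_TOKEN token " + id.getSequenceNumber()
        + " for " + id.getUser().getShortUserName()
        + ", issued " + new Date(id.getIssueDate())
        + ", max date " + new Date(id.getMaxDate());
  }
}
{code}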






[jira] [Updated] (HDFS-9732) Improve DelegationTokenIdentifier.toString() for better logging

2016-05-17 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-9732:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha1
   Status: Resolved  (was: Patch Available)

> Improve DelegationTokenIdentifier.toString() for better logging
> ---
>
> Key: HDFS-9732
> URL: https://issues.apache.org/jira/browse/HDFS-9732
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Yongjun Zhang
>  Labels: supportability
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12752-001.patch, HDFS-9732-000.patch, 
> HDFS-9732.001.patch, HDFS-9732.002.patch, HDFS-9732.003.patch, 
> HDFS-9732.004.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>






[jira] [Updated] (HDFS-10417) Improve error message from checkBlockLocalPathAccess

2016-05-17 Thread Tianyin Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tianyin Xu updated HDFS-10417:
--
Attachment: HDFS-10417.001.patch

New patch with the unit test adjusted.

> Improve error message from checkBlockLocalPathAccess
> 
>
> Key: HDFS-10417
> URL: https://issues.apache.org/jira/browse/HDFS-10417
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.2
>Reporter: Tianyin Xu
>Assignee: Tianyin Xu
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-10417.000.patch, HDFS-10417.001.patch
>






[jira] [Commented] (HDFS-10417) Improve error message from checkBlockLocalPathAccess

2016-05-17 Thread Tianyin Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15287606#comment-15287606
 ] 

Tianyin Xu commented on HDFS-10417:
---

Sorry for that. I changed the unit test, and TestShortCircuitLocalRead now 
passes. Thanks again, [~kihwal]!

> Improve error message from checkBlockLocalPathAccess
> 
>
> Key: HDFS-10417
> URL: https://issues.apache.org/jira/browse/HDFS-10417
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.2
>Reporter: Tianyin Xu
>Assignee: Tianyin Xu
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-10417.000.patch, HDFS-10417.001.patch
>






[jira] [Updated] (HDFS-10360) DataNode may format directory and lose blocks if current/VERSION is missing

2016-05-17 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-10360:
---
Attachment: HDFS-10360.007.patch

Many thanks to [~eddyxu] for the comments. Here's a new patch that includes a 
test. The test validates that the DN reports the failed volume in its JMX 
message, and that the NN sees the same failure.

Please review again!

> DataNode may format directory and lose blocks if current/VERSION is missing
> ---
>
> Key: HDFS-10360
> URL: https://issues.apache.org/jira/browse/HDFS-10360
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HDFS-10360.001.patch, HDFS-10360.002.patch, 
> HDFS-10360.003.patch, HDFS-10360.004.patch, HDFS-10360.004.patch, 
> HDFS-10360.005.patch, HDFS-10360.007.patch
>
>
> Under certain circumstances, if the current/VERSION of a storage directory is 
> missing, DataNode may format the storage directory even though _block files 
> are not missing_.
> This is very easy to reproduce. Simply launch an HDFS cluster and create some 
> files. Delete current/VERSION, and restart the data node.
> After the restart, the data node will format the directory and remove all 
> existing block files:
> {noformat}
> 2016-05-03 12:57:15,387 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Lock on /data/dfs/dn/in_use.lock acquired by nodename 
> 5...@weichiu-dn-2.vpc.cloudera.com
> 2016-05-03 12:57:15,389 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Storage directory /data/dfs/dn is not formatted for 
> BP-787466439-172.26.24.43-1462305406642
> 2016-05-03 12:57:15,389 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Formatting ...
> 2016-05-03 12:57:15,464 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Analyzing storage directories for bpid BP-787466439-172.26.24.43-1462305406642
> 2016-05-03 12:57:15,464 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Locking is disabled for 
> /data/dfs/dn/current/BP-787466439-172.26.24.43-1462305406642
> 2016-05-03 12:57:15,465 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Block pool storage directory 
> /data/dfs/dn/current/BP-787466439-172.26.24.43-1462305406642 is not formatted 
> for BP-787466439-172
> .26.24.43-1462305406642
> 2016-05-03 12:57:15,465 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Formatting ...
> 2016-05-03 12:57:15,465 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Formatting block pool BP-787466439-172.26.24.43-1462305406642 directory 
> /data/dfs/dn/current/BP-787466439-172.26.24.43-1462305406642/current
> {noformat}
> The bug is: DataNode assumes that if none of {{current/VERSION}}, 
> {{previous/}}, {{previous.tmp/}}, {{removed.tmp/}}, {{finalized.tmp/}} and 
> {{lastcheckpoint.tmp/}} exists, the storage directory contains nothing 
> important to HDFS and decides to format it. 
> https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java#L526-L545
> However, block files may still exist, and in my opinion, we should do 
> everything possible to retain the block files.
> I have two suggestions:
> # Check whether the {{current/}} directory is empty. If not, throw an 
> InconsistentFSStateException in {{Storage#analyzeStorage}} instead of 
> assuming it is not formatted (see the sketch after this list). Or,
> # In {{Storage#clearDirectory}}, before it formats the storage directory, 
> rename or move the {{current/}} directory. Also, log whatever is being 
> renamed/moved.
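
The first suggestion can be sketched as a simple guard before formatting. This 
is an editor's illustration using plain java.io, not the real 
Storage#analyzeStorage signature:

{code:java}
import java.io.File;
import java.io.IOException;

public class StorageFormatGuardSketch {
  /** If current/VERSION is gone but current/ still holds data, fail loudly
   *  instead of formatting away the block files. */
  static void checkBeforeFormat(File storageRoot) throws IOException {
    File current = new File(storageRoot, "current");
    File version = new File(current, "VERSION");
    String[] contents = current.list();
    if (!version.exists() && contents != null && contents.length > 0) {
      throw new IOException("Inconsistent storage state: " + version
          + " is missing but " + current + " is not empty;"
          + " refusing to format " + storageRoot);
    }
  }
}
{code}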






[jira] [Updated] (HDFS-10323) transient deleteOnExit failure in ViewFileSystem due to close() ordering

2016-05-17 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HDFS-10323:
--
Affects Version/s: 2.6.0
 Target Version/s: 3.0.0-beta1

Agree, seems incompatible. Targeting for 3.0.

> transient deleteOnExit failure in ViewFileSystem due to close() ordering
> 
>
> Key: HDFS-10323
> URL: https://issues.apache.org/jira/browse/HDFS-10323
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: federation
>Affects Versions: 2.6.0
>Reporter: Ben Podgursky
>
> After switching to using a ViewFileSystem, fs.deleteOnExit calls began 
> failing frequently, displaying this error on failure:
> 16/04/21 13:56:24 INFO fs.FileSystem: Ignoring failure to deleteOnExit for 
> path /tmp/delete_on_exit_test_123/a438afc0-a3ca-44f1-9eb5-010ca4a62d84
> Since FileSystem swallows the error involved, it is difficult to be sure what 
> the error is, but I believe what is happening is that the ViewFileSystem's 
> child FileSystems are being close()'d before the ViewFileSystem, because 
> ClientFinalizer closes FileSystems in a random order; so when the 
> ViewFileSystem tries to close(), it forwards the delete() calls to the 
> appropriate child and fails because the child is already closed.
> I’m unsure how to write an actual Hadoop test to reproduce this, since it 
> involves testing behavior on actual JVM shutdown.  However, I can verify that 
> while
> {code:java}
> fs.deleteOnExit(randomTemporaryDir);
> {code}
> regularly (~50% of the time) fails to delete the temporary directory, this 
> code:
> {code:java}
> ViewFileSystem viewfs = (ViewFileSystem) fs1;
> for (FileSystem fileSystem : viewfs.getChildFileSystems()) {
>   if (fileSystem.exists(randomTemporaryDir)) {
>     fileSystem.deleteOnExit(randomTemporaryDir);
>   }
> }
> {code}
> always successfully deletes the temporary directory on JVM shutdown.
> I am not very familiar with FileSystem inheritance hierarchies, but at first 
> glance I see two ways to fix this behavior:
> 1) ViewFileSystem could forward deleteOnExit calls to the appropriate child 
> FileSystem, and not hold onto that path itself (option 1 is sketched below).
> 2) FileSystem.Cache.closeAll could first close all ViewFileSystems, then all 
> other FileSystems.
> Would appreciate any thoughts on whether this seems accurate, and thoughts 
> (or help) on the fix.
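
Option 1 follows directly from the reporter's workaround above. Here is an 
editor's sketch of that idea (an illustration, not the eventual fix):

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.viewfs.ViewFileSystem;

public final class DeleteOnExitForwardingSketch {
  /** Register deleteOnExit with the child filesystem that owns the path,
   *  so the deletion no longer depends on close() ordering at shutdown. */
  static void deleteOnExitViaChild(FileSystem fs, Path path)
      throws IOException {
    if (fs instanceof ViewFileSystem) {
      for (FileSystem child : ((ViewFileSystem) fs).getChildFileSystems()) {
        if (child.exists(path)) {
          child.deleteOnExit(path);
          return;
        }
      }
    }
    fs.deleteOnExit(path);  // fall back to the original behavior
  }
}
{code}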






[jira] [Updated] (HDFS-9782) RollingFileSystemSink should have configurable roll interval

2016-05-17 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HDFS-9782:
---
Status: Open  (was: Patch Available)

Canceling patch to address comments. 

> RollingFileSystemSink should have configurable roll interval
> 
>
> Key: HDFS-9782
> URL: https://issues.apache.org/jira/browse/HDFS-9782
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HDFS-9782.001.patch, HDFS-9782.002.patch, 
> HDFS-9782.003.patch, HDFS-9782.004.patch, HDFS-9782.005.patch, 
> HDFS-9782.006.patch, HDFS-9782.007.patch
>
>
> Right now it defaults to rolling at the top of every hour.  Instead, that 
> interval should be configurable.  The interval should also allow for some 
> play so that all hosts don't try to flush their files simultaneously.
> I'm filing this in HDFS because I suspect it will involve touching the HDFS 
> tests.  If it turns out not to, I'll move it into common instead.
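> A minimal sketch of one way to pick the next roll time with a configurable 
> interval plus random jitter (the names and the 10% jitter policy are 
> illustrative assumptions, not the final implementation):
> {code:java}
> import java.util.concurrent.ThreadLocalRandom;
> import java.util.concurrent.TimeUnit;
> 
> public class RollScheduleSketch {
>   // Next roll at the next interval boundary, plus up to 10% random
>   // offset so that all hosts don't flush their files simultaneously.
>   static long nextRollMillis(long nowMillis, long intervalMinutes) {
>     long interval = TimeUnit.MINUTES.toMillis(intervalMinutes);
>     long jitter = ThreadLocalRandom.current().nextLong(interval / 10 + 1);
>     return (nowMillis / interval + 1) * interval + jitter;
>   }
> }
> {code}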



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10415) TestDistributedFileSystem#testDFSCloseOrdering() fails on branch-2

2016-05-17 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15287671#comment-15287671
 ] 

Mingliang Liu commented on HDFS-10415:
--

# If we call the {{initialize()}} method in the test, it will pass. Of course 
we have to do this before mocking the {{DFSClient}} object.
{code:java|title=Solution 2 - calling initialize() explicitly}
 private static class MyDistributedFileSystem extends DistributedFileSystem {
   MyDistributedFileSystem() {
-    statistics = new FileSystem.Statistics("myhdfs"); // can't mock finals
+    initialize(new URI("hdfs://localhost"), new HdfsConfiguration()); // exception may be thrown
     dfs = mock(DFSClient.class);
   }
{code}
# {{DistributedFileSystem#close()}} per se is not resilient to being invoked 
before {{initialize()}} is called. Meanwhile, {{MyDistributedFileSystem}} also 
has to create a {{Statistics}} object explicitly, which is part of what the 
{{initialize()}} method does. To me, this is not ideal. It also has to work 
around {{deleteOnExit()}} by returning true for any path. I'm more comfortable 
simply using a real DFS object and validating the order of implicit operations 
when closing (see the sketch after the list).
# Anyway, we can avoid calling the {{initialize()}} method by constructing the 
{{storageStatistics}} object in {{MyDistributedFileSystem}} as follows:
{code:title=Solution 3 - constructing the storageStatistics}
 MyDistributedFileSystem() {
   statistics = new FileSystem.Statistics("myhdfs"); // can't mock finals
+  storageStatistics = new DFSOpsCountStatistics(); // field needs to be protected
   dfs = mock(DFSClient.class);
 }
{code}
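
A rough sketch of the real-DFS approach from point 2, assuming a 
MiniDFSCluster-based test (illustrative, not the attached patch):
{code:java|title=Sketch - using a real DFS object}
// Hypothetical test body: close() runs against a fully initialized
// DistributedFileSystem, so processDeleteOnExit() can be validated
// without mocking finals or tricking deleteOnExit().
MiniDFSCluster cluster = new MiniDFSCluster.Builder(new HdfsConfiguration()).build();
try {
  DistributedFileSystem fs = cluster.getFileSystem();
  Path tmp = new Path("/tmp/close-ordering");
  fs.mkdirs(tmp);
  fs.deleteOnExit(tmp);
  fs.close(); // delete-on-exit paths must be processed before dfs shuts down
  assertFalse(cluster.getFileSystem().exists(tmp));
} finally {
  cluster.shutdown();
}
{code}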


> TestDistributedFileSystem#testDFSCloseOrdering() fails on branch-2
> --
>
> Key: HDFS-10415
> URL: https://issues.apache.org/jira/browse/HDFS-10415
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.9.0
> Environment: jenkins
>Reporter: Sangjin Lee
>Assignee: Mingliang Liu
> Attachments: HDFS-10415-branch-2.000.patch
>
>
> {noformat}
> Tests run: 24, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 51.096 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.TestDistributedFileSystem
> testDFSCloseOrdering(org.apache.hadoop.hdfs.TestDistributedFileSystem)  Time 
> elapsed: 0.045 sec  <<< ERROR!
> java.lang.NullPointerException: null
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:790)
>   at 
> org.apache.hadoop.fs.FileSystem.processDeleteOnExit(FileSystem.java:1417)
>   at org.apache.hadoop.fs.FileSystem.close(FileSystem.java:2084)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1187)
>   at 
> org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSCloseOrdering(TestDistributedFileSystem.java:217)
> {noformat}
> This is with Java 8 on Mac. It passes fine on trunk. I haven't tried other 
> combinations. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-9732) Improve DelegationTokenIdentifier.toString() for better logging

2016-05-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15287687#comment-15287687
 ] 

Hudson commented on HDFS-9732:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #9807 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9807/])
HDFS-9732. Improve DelegationTokenIdentifier.toString() for better (yzhang: rev 
e24fe2641b4117601105fa097c8848d82b96b74c)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenIdentifier.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DelegationTokenFetcher.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/security/token/delegation/DelegationTokenIdentifier.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/TestDelegationToken.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDelegationTokenFetcher.java


> Improve DelegationTokenIdentifier.toString() for better logging
> ---
>
> Key: HDFS-9732
> URL: https://issues.apache.org/jira/browse/HDFS-9732
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Yongjun Zhang
>  Labels: supportability
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12752-001.patch, HDFS-9732-000.patch, 
> HDFS-9732.001.patch, HDFS-9732.002.patch, HDFS-9732.003.patch, 
> HDFS-9732.004.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> HDFS {{DelegationTokenIdentifier.toString()}} adds some diagnostic info 
> (owner, sequence number). But its superclass, 
> {{AbstractDelegationTokenIdentifier}}, contains a lot more information, 
> including token issue and expiry times.
> Because {{DelegationTokenIdentifier.toString()}} doesn't include this data, 
> information that is potentially useful for Kerberos diagnostics is lost.
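> A minimal sketch of the kind of output being asked for (an illustrative 
> override using the superclass getters; not the committed patch):
> {code:java}
> // Hypothetical toString() surfacing the AbstractDelegationTokenIdentifier
> // fields that matter for Kerberos diagnostics: owner, renewer, sequence
> // number, and the issue/expiry times.
> @Override
> public String toString() {
>   return getKind() + " token " + getSequenceNumber()
>       + " for " + getUser().getShortUserName()
>       + " with renewer " + getRenewer()
>       + ", issued " + new Date(getIssueDate())
>       + ", expires " + new Date(getMaxDate());
> }
> {code}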



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-8457) Ozone: Refactor FsDatasetSpi to pull up HDFS-agnostic functionality into parent interface

2016-05-17 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reopened HDFS-8457:
-

> Ozone: Refactor FsDatasetSpi to pull up HDFS-agnostic functionality into 
> parent interface
> -
>
> Key: HDFS-8457
> URL: https://issues.apache.org/jira/browse/HDFS-8457
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-8457-HDFS-7240.01.patch, 
> HDFS-8457-HDFS-7240.02.patch, HDFS-8457-HDFS-7240.03.patch, 
> HDFS-8457-HDFS-7240.04.patch, HDFS-8457-HDFS-7240.05.patch, 
> HDFS-8457-HDFS-7240.06.patch, HDFS-8457-HDFS-7240.07.patch
>
>
> FsDatasetSpi can be split up into HDFS-specific and HDFS-agnostic parts. The 
> HDFS-specific parts can remain in FsDatasetSpi, while those 
> relating to volume management, block pools and upgrade can be moved to a 
> parent interface.
> There will be no change to implementations of FsDatasetSpi.
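> A rough sketch of the proposed split (the parent interface name and the exact 
> method placement below are illustrative assumptions):
> {code:java}
> // Hypothetical HDFS-agnostic parent: volume management, block pool and
> // upgrade operations pulled up out of FsDatasetSpi.
> public interface DatasetSpi<V extends FsVolumeSpi> {
>   void addVolume(StorageLocation location, List<NamespaceInfo> nsInfos)
>       throws IOException;
>   void addBlockPool(String bpid, Configuration conf) throws IOException;
>   void shutdownBlockPool(String bpid);
> }
> 
> // FsDatasetSpi keeps only the HDFS-specific block-file operations.
> public interface FsDatasetSpi<V extends FsVolumeSpi> extends DatasetSpi<V> {
>   ReplicaHandler createTemporary(StorageType storageType, ExtendedBlock b)
>       throws IOException;
> }
> {code}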



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-8457) Ozone: Refactor FsDatasetSpi to pull up HDFS-agnostic functionality into parent interface

2016-05-17 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDFS-8457.
-
Resolution: Later

These changes were discarded in the HDFS-7240 branch due to merge resolution 
complexity.

We may consider reintroducing them later.

> Ozone: Refactor FsDatasetSpi to pull up HDFS-agnostic functionality into 
> parent interface
> -
>
> Key: HDFS-8457
> URL: https://issues.apache.org/jira/browse/HDFS-8457
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-8457-HDFS-7240.01.patch, 
> HDFS-8457-HDFS-7240.02.patch, HDFS-8457-HDFS-7240.03.patch, 
> HDFS-8457-HDFS-7240.04.patch, HDFS-8457-HDFS-7240.05.patch, 
> HDFS-8457-HDFS-7240.06.patch, HDFS-8457-HDFS-7240.07.patch
>
>
> FsDatasetSpi can be split up into HDFS-specific and HDFS-agnostic parts. The 
> HDFS-specific parts can remain in FsDatasetSpi, while those 
> relating to volume management, block pools and upgrade can be moved to a 
> parent interface.
> There will be no change to implementations of FsDatasetSpi.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-8677) Ozone: Introduce KeyValueContainerDatasetSpi

2016-05-17 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reopened HDFS-8677:
-

> Ozone: Introduce KeyValueContainerDatasetSpi
> 
>
> Key: HDFS-8677
> URL: https://issues.apache.org/jira/browse/HDFS-8677
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-8677-HDFS-7240.01.patch, 
> HDFS-8677-HDFS-7240.02.patch, HDFS-8677-HDFS-7240.03.patch, 
> HDFS-8677-HDFS-7240.04.patch, HDFS-8677-HDFS-7240.05.patch
>
>
> KeyValueContainerDatasetSpi will be a new interface for Ozone containers, 
> just as FsDatasetSpi is an interface for manipulating HDFS block files.
> The interface will have support for both key-value containers for storing 
> Ozone metadata and blobs for storing user data.
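> A rough sketch of what such an interface might expose (method names are 
> illustrative assumptions, not the attached patches):
> {code:java}
> // Hypothetical: key-value container operations for Ozone metadata,
> // analogous to how FsDatasetSpi manipulates HDFS block files.
> public interface KeyValueContainerDatasetSpi {
>   void createContainer(String containerName) throws IOException;
>   void put(String containerName, byte[] key, byte[] value) throws IOException;
>   byte[] get(String containerName, byte[] key) throws IOException;
>   void deleteKey(String containerName, byte[] key) throws IOException;
> }
> {code}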



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-8679) Move DatasetSpi to new package

2016-05-17 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDFS-8679.
-
Resolution: Later

These changes were discarded in the HDFS-7240 branch due to merge resolution 
complexity.

We may consider reintroducing them later.

> Move DatasetSpi to new package
> --
>
> Key: HDFS-8679
> URL: https://issues.apache.org/jira/browse/HDFS-8679
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-8679-HDFS-7240.01.patch, 
> HDFS-8679-HDFS-7240.02.patch
>
>
> The DatasetSpi and VolumeSpi interfaces are currently in 
> {{org.apache.hadoop.hdfs.server.datanode.fsdataset}}. They can be moved to a 
> new package {{org.apache.hadoop.hdfs.server.datanode.dataset}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-8679) Move DatasetSpi to new package

2016-05-17 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reopened HDFS-8679:
-

> Move DatasetSpi to new package
> --
>
> Key: HDFS-8679
> URL: https://issues.apache.org/jira/browse/HDFS-8679
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-8679-HDFS-7240.01.patch, 
> HDFS-8679-HDFS-7240.02.patch
>
>
> The DatasetSpi and VolumeSpi interfaces are currently in 
> {{org.apache.hadoop.hdfs.server.datanode.fsdataset}}. They can be moved to a 
> new package {{org.apache.hadoop.hdfs.server.datanode.dataset}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8661) DataNode should filter the set of NameSpaceInfos passed to Datasets

2016-05-17 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15287717#comment-15287717
 ] 

Arpit Agarwal commented on HDFS-8661:
-

These changes were discarded in the HDFS-7240 branch due to merge resolution 
complexity.

We may consider reintroducing them later.

> DataNode should filter the set of NameSpaceInfos passed to Datasets
> ---
>
> Key: HDFS-8661
> URL: https://issues.apache.org/jira/browse/HDFS-8661
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: HDFS-7240
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-8661-HDFS-7240.01.patch, 
> HDFS-8661-HDFS-7240.02.patch, HDFS-8661-HDFS-7240.03.patch, 
> HDFS-8661-HDFS-7240.04.patch, v03-v04.diff
>
>
> {{DataNode#refreshVolumes}} passes the list of NamespaceInfos to each dataset 
> when adding new volumes.
> This list should be filtered by the correct NodeType(s) for each dataset. 
> e.g. in a shared HDFS+Ozone cluster, FsDatasets would be notified of NN block 
> pools and Ozone datasets would be notified of Ozone block pool(s).
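> A minimal sketch of the proposed filtering (the NodeType accessors below are 
> illustrative assumptions):
> {code:java}
> // Hypothetical: hand each dataset only the NamespaceInfos matching its
> // node type, so FsDatasets see NN block pools and Ozone datasets see
> // Ozone block pools.
> List<NamespaceInfo> filtered = new ArrayList<>();
> for (NamespaceInfo nsInfo : nsInfos) {
>   if (dataset.getSupportedNodeTypes().contains(nsInfo.getNodeType())) {
>     filtered.add(nsInfo);
>   }
> }
> dataset.addVolume(location, filtered);
> {code}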



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-8661) DataNode should filter the set of NameSpaceInfos passed to Datasets

2016-05-17 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDFS-8661.
-
Resolution: Later

> DataNode should filter the set of NameSpaceInfos passed to Datasets
> ---
>
> Key: HDFS-8661
> URL: https://issues.apache.org/jira/browse/HDFS-8661
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: HDFS-7240
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-8661-HDFS-7240.01.patch, 
> HDFS-8661-HDFS-7240.02.patch, HDFS-8661-HDFS-7240.03.patch, 
> HDFS-8661-HDFS-7240.04.patch, v03-v04.diff
>
>
> {{DataNode#refreshVolumes}} passes the list of NamespaceInfos to each dataset 
> when adding new volumes.
> This list should be filtered by the correct NodeType(s) for each dataset. 
> e.g. in a shared HDFS+Ozone cluster, FsDatasets would be notified of NN block 
> pools and Ozone datasets would be notified of Ozone block pool(s).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Reopened] (HDFS-8661) DataNode should filter the set of NameSpaceInfos passed to Datasets

2016-05-17 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reopened HDFS-8661:
-

> DataNode should filter the set of NameSpaceInfos passed to Datasets
> ---
>
> Key: HDFS-8661
> URL: https://issues.apache.org/jira/browse/HDFS-8661
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: HDFS-7240
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-8661-HDFS-7240.01.patch, 
> HDFS-8661-HDFS-7240.02.patch, HDFS-8661-HDFS-7240.03.patch, 
> HDFS-8661-HDFS-7240.04.patch, v03-v04.diff
>
>
> {{DataNode#refreshVolumes}} passes the list of NamespaceInfos to each dataset 
> when adding new volumes.
> This list should be filtered by the correct NodeType(s) for each dataset. 
> e.g. in a shared HDFS+Ozone cluster, FsDatasets would be notified of NN block 
> pools and Ozone datasets would be notified of Ozone block pool(s).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10383) Safely close resources in DFSTestUtil

2016-05-17 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15287715#comment-15287715
 ] 

Xiaoyu Yao commented on HDFS-10383:
---

Thanks [~liuml07] for updating the patch. +1 for v3 patch and I will commit it 
shortly.

> Safely close resources in DFSTestUtil
> -
>
> Key: HDFS-10383
> URL: https://issues.apache.org/jira/browse/HDFS-10383
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10383.000.patch, HDFS-10383.001.patch, 
> HDFS-10383.002.patch, HDFS-10383.003.patch
>
>
> There are a few methods in {{DFSTestUtil}} that do not close their resources 
> safely or elegantly. We can use the try-with-resources statement to address 
> this problem.
> Specifically, as {{DFSTestUtil}} is widely used in tests, we need to preserve 
> any exception thrown while processing the resource while still guaranteeing 
> the resource is closed in the end. For example, the current implementation of 
> {{DFSTestUtil#createFile()}} closes the FSDataOutputStream in the {{finally}} 
> block, and if the internal {{DFSOutputStream#close()}} throws an exception 
> when closing, which it often does, the exception thrown during processing 
> will be lost. See this [test 
> failure|https://builds.apache.org/job/PreCommit-HADOOP-Build/9320/testReport/org.apache.hadoop.hdfs/TestAsyncDFSRename/testAggressiveConcurrentAsyncRenameWithOverwrite/],
>  where we have to guess the root cause.
> Using try-with-resources, we can close the resources safely, and the 
> exceptions thrown both during processing and during closing will be available 
> (the closing exception will be suppressed). Besides try-with-resources, if a 
> stream is not necessary, don't create or close it at all.
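> A minimal sketch of the pattern, mirroring what {{createFile()}} does 
> (illustrative, not the attached patch):
> {code:java}
> // try-with-resources closes the stream automatically; an exception
> // thrown by close() is attached as a suppressed exception instead of
> // masking the exception thrown while writing.
> try (FSDataOutputStream out = fs.create(path)) {
>   out.write(data);
> }
> {code}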



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-8677) Ozone: Introduce KeyValueContainerDatasetSpi

2016-05-17 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDFS-8677.
-
Resolution: Later

> Ozone: Introduce KeyValueContainerDatasetSpi
> 
>
> Key: HDFS-8677
> URL: https://issues.apache.org/jira/browse/HDFS-8677
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-8677-HDFS-7240.01.patch, 
> HDFS-8677-HDFS-7240.02.patch, HDFS-8677-HDFS-7240.03.patch, 
> HDFS-8677-HDFS-7240.04.patch, HDFS-8677-HDFS-7240.05.patch
>
>
> KeyValueContainerDatasetSpi will be a new interface for Ozone containers, 
> just as FsDatasetSpi is an interface for manipulating HDFS block files.
> The interface will have support for both key-value containers for storing 
> Ozone metadata and blobs for storing user data.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-8677) Ozone: Introduce KeyValueContainerDatasetSpi

2016-05-17 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15287714#comment-15287714
 ] 

Arpit Agarwal commented on HDFS-8677:
-

These changes were discarded in the HDFS-7240 branch due to merge resolution 
complexity.

We may consider reintroducing them later.

> Ozone: Introduce KeyValueContainerDatasetSpi
> 
>
> Key: HDFS-8677
> URL: https://issues.apache.org/jira/browse/HDFS-8677
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-8677-HDFS-7240.01.patch, 
> HDFS-8677-HDFS-7240.02.patch, HDFS-8677-HDFS-7240.03.patch, 
> HDFS-8677-HDFS-7240.04.patch, HDFS-8677-HDFS-7240.05.patch
>
>
> KeyValueContainerDatasetSpi will be a new interface for Ozone containers, 
> just as FsDatasetSpi is an interface for manipulating HDFS block files.
> The interface will have support for both key-value containers for storing 
> Ozone metadata and blobs for storing user data.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Closed] (HDFS-8661) DataNode should filter the set of NameSpaceInfos passed to Datasets

2016-05-17 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal closed HDFS-8661.
---

> DataNode should filter the set of NameSpaceInfos passed to Datasets
> ---
>
> Key: HDFS-8661
> URL: https://issues.apache.org/jira/browse/HDFS-8661
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: HDFS-7240
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-8661-HDFS-7240.01.patch, 
> HDFS-8661-HDFS-7240.02.patch, HDFS-8661-HDFS-7240.03.patch, 
> HDFS-8661-HDFS-7240.04.patch, v03-v04.diff
>
>
> {{DataNode#refreshVolumes}} passes the list of NamespaceInfos to each dataset 
> when adding new volumes.
> This list should be filtered by the correct NodeType(s) for each dataset. 
> e.g. in a shared HDFS+Ozone cluster, FsDatasets would be notified of NN block 
> pools and Ozone datasets would be notified of Ozone block pool(s).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Closed] (HDFS-8457) Ozone: Refactor FsDatasetSpi to pull up HDFS-agnostic functionality into parent interface

2016-05-17 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal closed HDFS-8457.
---

> Ozone: Refactor FsDatasetSpi to pull up HDFS-agnostic functionality into 
> parent interface
> -
>
> Key: HDFS-8457
> URL: https://issues.apache.org/jira/browse/HDFS-8457
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-8457-HDFS-7240.01.patch, 
> HDFS-8457-HDFS-7240.02.patch, HDFS-8457-HDFS-7240.03.patch, 
> HDFS-8457-HDFS-7240.04.patch, HDFS-8457-HDFS-7240.05.patch, 
> HDFS-8457-HDFS-7240.06.patch, HDFS-8457-HDFS-7240.07.patch
>
>
> FsDatasetSpi can be split up into HDFS-specific and HDFS-agnostic parts. The 
> HDFS-specific parts can remain in FsDatasetSpi, while those 
> relating to volume management, block pools and upgrade can be moved to a 
> parent interface.
> There will be no change to implementations of FsDatasetSpi.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Closed] (HDFS-8677) Ozone: Introduce KeyValueContainerDatasetSpi

2016-05-17 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal closed HDFS-8677.
---

> Ozone: Introduce KeyValueContainerDatasetSpi
> 
>
> Key: HDFS-8677
> URL: https://issues.apache.org/jira/browse/HDFS-8677
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-8677-HDFS-7240.01.patch, 
> HDFS-8677-HDFS-7240.02.patch, HDFS-8677-HDFS-7240.03.patch, 
> HDFS-8677-HDFS-7240.04.patch, HDFS-8677-HDFS-7240.05.patch
>
>
> KeyValueContainerDatasetSpi will be a new interface for Ozone containers, 
> just as FsDatasetSpi is an interface for manipulating HDFS block files.
> The interface will have support for both key-value containers for storing 
> Ozone metadata and blobs for storing user data.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Closed] (HDFS-8679) Move DatasetSpi to new package

2016-05-17 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal closed HDFS-8679.
---

> Move DatasetSpi to new package
> --
>
> Key: HDFS-8679
> URL: https://issues.apache.org/jira/browse/HDFS-8679
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-8679-HDFS-7240.01.patch, 
> HDFS-8679-HDFS-7240.02.patch
>
>
> The DatasetSpi and VolumeSpi interfaces are currently in 
> {{org.apache.hadoop.hdfs.server.datanode.fsdataset}}. They can be moved to a 
> new package {{org.apache.hadoop.hdfs.server.datanode.dataset}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-10381) DataStreamer DataNode exclusion log message should be warning

2016-05-17 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15287730#comment-15287730
 ] 

Yongjun Zhang commented on HDFS-10381:
--

Hi [~jzhuge],

Thanks for reporting and working on this issue. The patch looks good to me. 
+1. I will wait until tomorrow before committing, in case [~jingzhao] has 
further comments.


> DataStreamer DataNode exclusion log message should be warning
> -
>
> Key: HDFS-10381
> URL: https://issues.apache.org/jira/browse/HDFS-10381
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Minor
> Attachments: HDFS-10381.001.patch
>
>
> When adding a DN to {{excludedNodes}}, it should log a warning message 
> instead of info.
> {code}
>   success = createBlockOutputStream(nodes, storageTypes, 0L, false);
>   if (!success) {
> LOG.info("Abandoning " + block);
> dfsClient.namenode.abandonBlock(block, stat.getFileId(), src,
> dfsClient.clientName);
> block = null;
> final DatanodeInfo badNode = nodes[errorState.getBadNodeIndex()];
> LOG.info("Excluding datanode " + badNode);
> excludedNodes.put(badNode, badNode);
>   }
> {code}
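> A sketch of the kind of change being proposed (raising the level of the 
> exclusion message):
> {code}
> -LOG.info("Excluding datanode " + badNode);
> +LOG.warn("Excluding datanode " + badNode);
> {code}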



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


