[jira] [Commented] (HDFS-10284) o.a.h.hdfs.server.blockmanagement.TestBlockManagerSafeMode.testCheckSafeMode fails intermittently

2016-04-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15240705#comment-15240705
 ] 

Hadoop QA commented on HDFS-10284:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s {color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 0s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s {color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 2s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s {color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s {color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 2s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 106m 51s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_77. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 103m 43s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 27s {color} | {color:green} Patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 239m 6s {color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_77 Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.TestDFSUpgradeFromImage |
|   | hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.namenode.TestParallelImageWrite |
|   | hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead |
|   | hadoop.hdfs.server.namenode.TestSecondaryNameNodeUpgrade |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.server.da

[jira] [Updated] (HDFS-9670) DistCp throws NPE when source is root

2016-04-13 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-9670:
-
Component/s: distcp

> DistCp throws NPE when source is root
> -
>
> Key: HDFS-9670
> URL: https://issues.apache.org/jira/browse/HDFS-9670
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 2.6.0
>Reporter: Yongjun Zhang
>Assignee: John Zhuge
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HDFS-9670.001.patch
>
>
> Symptom:
> {quote}
> [root@vb0724 ~]# hadoop distcp hdfs://X:8020/ hdfs://Y:8020/
> 16/01/20 11:33:33 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
> copyStrategy='uniformsize', sourceFileListing=null, 
> sourcePaths=[hdfs://X:8020/], targetPath=hdfs://Y:8020/, 
> targetPathExists=true, preserveRawXattrs=false, filtersFile='null'}
> 16/01/20 11:33:33 INFO client.RMProxy: Connecting to ResourceManager at Z:8032
> 16/01/20 11:33:33 ERROR tools.DistCp: Exception encountered 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.tools.util.DistCpUtils.getRelativePath(DistCpUtils.java:144)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.writeToFileListing(SimpleCopyListing.java:598)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.writeToFileListingRoot(SimpleCopyListing.java:583)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.doBuildListing(SimpleCopyListing.java:313)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.doBuildListing(SimpleCopyListing.java:174)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:90)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:365)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:171)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:122)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:429)
> {quote}
> Relevant code:
> {code}
>   private Path computeSourceRootPath(FileStatus sourceStatus,
>                                      DistCpOptions options) throws IOException {
>     Path target = options.getTargetPath();
>     FileSystem targetFS = target.getFileSystem(getConf());
>     final boolean targetPathExists = options.getTargetPathExists();
>     boolean solitaryFile = options.getSourcePaths().size() == 1
>         && !sourceStatus.isDirectory();
>     if (solitaryFile) {
>       if (targetFS.isFile(target) || !targetPathExists) {
>         return sourceStatus.getPath();
>       } else {
>         return sourceStatus.getPath().getParent();
>       }
>     } else {
>       boolean specialHandling = (options.getSourcePaths().size() == 1
>           && !targetPathExists) || options.shouldSyncFolder()
>           || options.shouldOverwrite();
>       return specialHandling && sourceStatus.isDirectory()
>           ? sourceStatus.getPath()
>           : sourceStatus.getPath().getParent();
>     }
>   }
> {code}
> We can see that it can return null at the end: when the source is the root 
> path, {{sourceStatus.getPath().getParent()}} returns null, which then causes 
> the NPE in {{DistCpUtils.getRelativePath}}.
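
For context, the root cause is easy to demonstrate with an illustrative snippet (not part of any patch):

{code}
// org.apache.hadoop.fs.Path: the parent of the root path is null.
Path root = new Path("hdfs://X:8020/");
System.out.println(root.getParent());  // prints "null"; getRelativePath later dereferences it
{code}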



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9670) DistCp throws NPE when source is root

2016-04-13 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-9670:
-
Fix Version/s: 2.8.0
Affects Version/s: 2.6.0
 Target Version/s: 2.8.0
   Status: Patch Available  (was: In Progress)

> DistCp throws NPE when source is root
> -
>
> Key: HDFS-9670
> URL: https://issues.apache.org/jira/browse/HDFS-9670
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Yongjun Zhang
>Assignee: John Zhuge
>  Labels: supportability
> Fix For: 2.8.0
>
> Attachments: HDFS-9670.001.patch
>
>
> Symptom:
> {quote}
> [root@vb0724 ~]# hadoop distcp hdfs://X:8020/ hdfs://Y:8020/
> 16/01/20 11:33:33 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
> copyStrategy='uniformsize', sourceFileListing=null, 
> sourcePaths=[hdfs://X:8020/], targetPath=hdfs://Y:8020/, 
> targetPathExists=true, preserveRawXattrs=false, filtersFile='null'}
> 16/01/20 11:33:33 INFO client.RMProxy: Connecting to ResourceManager at Z:8032
> 16/01/20 11:33:33 ERROR tools.DistCp: Exception encountered 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.tools.util.DistCpUtils.getRelativePath(DistCpUtils.java:144)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.writeToFileListing(SimpleCopyListing.java:598)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.writeToFileListingRoot(SimpleCopyListing.java:583)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.doBuildListing(SimpleCopyListing.java:313)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.doBuildListing(SimpleCopyListing.java:174)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:90)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:365)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:171)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:122)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:429)
> {quote}
> Relevant code:
> {code}
>   private Path computeSourceRootPath(FileStatus sourceStatus,
>                                      DistCpOptions options) throws IOException {
>     Path target = options.getTargetPath();
>     FileSystem targetFS = target.getFileSystem(getConf());
>     final boolean targetPathExists = options.getTargetPathExists();
>     boolean solitaryFile = options.getSourcePaths().size() == 1
>         && !sourceStatus.isDirectory();
>     if (solitaryFile) {
>       if (targetFS.isFile(target) || !targetPathExists) {
>         return sourceStatus.getPath();
>       } else {
>         return sourceStatus.getPath().getParent();
>       }
>     } else {
>       boolean specialHandling = (options.getSourcePaths().size() == 1
>           && !targetPathExists) || options.shouldSyncFolder()
>           || options.shouldOverwrite();
>       return specialHandling && sourceStatus.isDirectory()
>           ? sourceStatus.getPath()
>           : sourceStatus.getPath().getParent();
>     }
>   }
> {code}
> We can see that it can return null at the end: when the source is the root 
> path, {{sourceStatus.getPath().getParent()}} returns null, which then causes 
> the NPE in {{DistCpUtils.getRelativePath}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9670) DistCp throws NPE when source is root

2016-04-13 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-9670:
-
Attachment: HDFS-9670.001.patch

Patch 001:
* {{computeSourceRootPath}} now returns the source path itself instead of its 
parent when the source is root.
* Added unit test {{TestDistCpSystem#testSourceRoot}}.
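
For reference, a minimal sketch of that guard (illustrative only, not the exact patch hunk):

{code}
// Sketch: Path#getParent() returns null for the root path "/",
// so fall back to the path itself when the source is root.
Path sourcePath = sourceStatus.getPath();
Path parent = sourcePath.getParent();
return parent == null ? sourcePath : parent;
{code}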

> DistCp throws NPE when source is root
> -
>
> Key: HDFS-9670
> URL: https://issues.apache.org/jira/browse/HDFS-9670
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: John Zhuge
>  Labels: supportability
> Attachments: HDFS-9670.001.patch
>
>
> Symptom:
> {quote}
> [root@vb0724 ~]# hadoop distcp hdfs://X:8020/ hdfs://Y:8020/
> 16/01/20 11:33:33 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
> copyStrategy='uniformsize', sourceFileListing=null, 
> sourcePaths=[hdfs://X:8020/], targetPath=hdfs://Y:8020/, 
> targetPathExists=true, preserveRawXattrs=false, filtersFile='null'}
> 16/01/20 11:33:33 INFO client.RMProxy: Connecting to ResourceManager at Z:8032
> 16/01/20 11:33:33 ERROR tools.DistCp: Exception encountered 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.tools.util.DistCpUtils.getRelativePath(DistCpUtils.java:144)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.writeToFileListing(SimpleCopyListing.java:598)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.writeToFileListingRoot(SimpleCopyListing.java:583)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.doBuildListing(SimpleCopyListing.java:313)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.doBuildListing(SimpleCopyListing.java:174)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:90)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:365)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:171)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:122)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:429)
> {quote}
> Relevant code:
> {code}
>   private Path computeSourceRootPath(FileStatus sourceStatus,
>                                      DistCpOptions options) throws IOException {
>     Path target = options.getTargetPath();
>     FileSystem targetFS = target.getFileSystem(getConf());
>     final boolean targetPathExists = options.getTargetPathExists();
>     boolean solitaryFile = options.getSourcePaths().size() == 1
>         && !sourceStatus.isDirectory();
>     if (solitaryFile) {
>       if (targetFS.isFile(target) || !targetPathExists) {
>         return sourceStatus.getPath();
>       } else {
>         return sourceStatus.getPath().getParent();
>       }
>     } else {
>       boolean specialHandling = (options.getSourcePaths().size() == 1
>           && !targetPathExists) || options.shouldSyncFolder()
>           || options.shouldOverwrite();
>       return specialHandling && sourceStatus.isDirectory()
>           ? sourceStatus.getPath()
>           : sourceStatus.getPath().getParent();
>     }
>   }
> {code}
> We can see that it can return null at the end: when the source is the root 
> path, {{sourceStatus.getPath().getParent()}} returns null, which then causes 
> the NPE in {{DistCpUtils.getRelativePath}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-8986) Add option to -du to calculate directory space usage excluding snapshots

2016-04-13 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HDFS-8986:

Attachment: HDFS-8986.01.patch

I'm attaching a preliminary patch 1 to show the general idea of this change.

I think Chris' comment about modifying {{-count}} together with {{-du}} makes 
perfect sense, since they are the two usages of {{FileSystem#getContentSummary}} 
in the shell. (Except for {{-rm}}'s internal usage, which is out of scope for 
this exclude-snapshot discussion.)

The general idea is to add a {{-x}} flag to the shell commands, to exclude 
snapshots from calculation.

Implementation-wise, since {{getContentSummary}} is a public API on 
{{FileSystem}}, I don't think we can change anything there. Instead, I 
added fields and methods to {{ContentSummary}} (which is 
{{InterfaceStability.Evolving}} - and I think the changes here don't break 
compatibility anyway) to store snapshot-related values in the same object. 
Then, at the caller of {{getContentSummary}}, one can subtract the 
snapshot-related values from the totals to get the desired {{-x}} result.
The calculation is done by adding a snapshot-specific {{ContentCounts}} to the 
{{ContentSummaryComputationContext}} object.

Please review and provide feedback regarding this approach. I plan to polish 
the tests/docs in a later rev. Thanks very much!
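
To make the approach concrete, here is a rough sketch of how a shell caller could apply the {{-x}} flag; the snapshot accessors on {{ContentSummary}} are hypothetical names for the proposed fields, not an existing API:

{code}
// Sketch only: getSnapshotLength()/getSnapshotSpaceConsumed() are assumed
// accessors for the proposed snapshot-related fields on ContentSummary.
ContentSummary summary = fs.getContentSummary(path);
long length = summary.getLength();
long spaceConsumed = summary.getSpaceConsumed();
if (excludeSnapshots) {  // true when the -x flag is given
  length -= summary.getSnapshotLength();
  spaceConsumed -= summary.getSnapshotSpaceConsumed();
}
{code}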

> Add option to -du to calculate directory space usage excluding snapshots
> 
>
> Key: HDFS-8986
> URL: https://issues.apache.org/jira/browse/HDFS-8986
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: snapshots
>Reporter: Gautam Gopalakrishnan
>Assignee: Xiao Chen
> Attachments: HDFS-8986.01.patch
>
>
> When running {{hadoop fs -du}} on a snapshotted directory (or one of its 
> children), the report includes space consumed by blocks that are only present 
> in the snapshots. This is confusing for end users.
> {noformat}
> $  hadoop fs -du -h -s /tmp/parent /tmp/parent/*
> 799.7 M  2.3 G  /tmp/parent
> 799.7 M  2.3 G  /tmp/parent/sub1
> $ hdfs dfs -createSnapshot /tmp/parent snap1
> Created snapshot /tmp/parent/.snapshot/snap1
> $ hadoop fs -rm -skipTrash /tmp/parent/sub1/*
> ...
> $ hadoop fs -du -h -s /tmp/parent /tmp/parent/*
> 799.7 M  2.3 G  /tmp/parent
> 799.7 M  2.3 G  /tmp/parent/sub1
> $ hdfs dfs -deleteSnapshot /tmp/parent snap1
> $ hadoop fs -du -h -s /tmp/parent /tmp/parent/*
> 0  0  /tmp/parent
> 0  0  /tmp/parent/sub1
> {noformat}
> It would be helpful if we had a flag, say -X, to exclude any snapshot-related 
> disk usage in the output.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10288) MiniYARNCluster should implement AutoCloseable

2016-04-13 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-10288:
--
Description: 
{{MiniYARNCluster}} should implement {{AutoCloseable}} in order to support 
[try-with-resources|https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html].
 It will make test code a little cleaner and more reliable.

Since {{AutoCloseable}} only exists in Java 1.7 or later, this cannot be 
backported to Hadoop versions prior to 2.7.

  was:
{{MiniDFSCluster}} should implement {{AutoCloseable}} in order to support 
[try-with-resources|https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html].
 It will make test code a little cleaner and more reliable.

Since {{AutoCloseable}} only exists in Java 1.7 or later, this cannot be 
backported to Hadoop versions prior to 2.7.


> MiniYARNCluster should implement AutoCloseable
> --
>
> Key: HDFS-10288
> URL: https://issues.apache.org/jira/browse/HDFS-10288
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Trivial
>
> {{MiniYARNCluster}} should implement {{AutoCloseable}} in order to support 
> [try-with-resources|https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html].
>  It will make test code a little cleaner and more reliable.
> Since {{AutoCloseable}} only exists in Java 1.7 or later, this cannot be 
> backported to Hadoop versions prior to 2.7.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-10288) MiniYARNCluster should implement AutoCloseable

2016-04-13 Thread John Zhuge (JIRA)
John Zhuge created HDFS-10288:
-

 Summary: MiniYARNCluster should implement AutoCloseable
 Key: HDFS-10288
 URL: https://issues.apache.org/jira/browse/HDFS-10288
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Affects Versions: 2.7.0
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Trivial


{{MiniDFSCluster}} should implement {{AutoCloseable}} in order to support 
[try-with-resources|https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html].
 It will make test code a little cleaner and more reliable.

Since {{AutoCloseable}} only exists in Java 1.7 or later, this cannot be 
backported to Hadoop versions prior to 2.7.
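
A sketch of the intended usage, assuming {{close()}} simply delegates to the existing {{stop()}}:

{code}
// Illustrative only; constructor arguments: test name, NMs, local dirs, log dirs.
try (MiniYARNCluster cluster = new MiniYARNCluster("test", 1, 1, 1)) {
  cluster.init(new YarnConfiguration());
  cluster.start();
  // ... test body ...
}  // closed automatically, even when the test body throws
{code}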



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9670) DistCp throws NPE when source is root

2016-04-13 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-9670:
-
Summary: DistCp throws NPE when source is root  (was: DistCp throws NPE 
when source is root and target exists)

> DistCp throws NPE when source is root
> -
>
> Key: HDFS-9670
> URL: https://issues.apache.org/jira/browse/HDFS-9670
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: John Zhuge
>  Labels: supportability
>
> Symptom:
> {quote}
> [root@vb0724 ~]# hadoop distcp hdfs://X:8020/ hdfs://Y:8020/
> 16/01/20 11:33:33 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
> copyStrategy='uniformsize', sourceFileListing=null, 
> sourcePaths=[hdfs://X:8020/], targetPath=hdfs://Y:8020/, 
> targetPathExists=true, preserveRawXattrs=false, filtersFile='null'}
> 16/01/20 11:33:33 INFO client.RMProxy: Connecting to ResourceManager at Z:8032
> 16/01/20 11:33:33 ERROR tools.DistCp: Exception encountered 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.tools.util.DistCpUtils.getRelativePath(DistCpUtils.java:144)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.writeToFileListing(SimpleCopyListing.java:598)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.writeToFileListingRoot(SimpleCopyListing.java:583)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.doBuildListing(SimpleCopyListing.java:313)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.doBuildListing(SimpleCopyListing.java:174)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:90)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:365)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:171)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:122)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:429)
> {quote}
> Relevant code:
> {code}
>   private Path computeSourceRootPath(FileStatus sourceStatus,
>                                      DistCpOptions options) throws IOException {
>     Path target = options.getTargetPath();
>     FileSystem targetFS = target.getFileSystem(getConf());
>     final boolean targetPathExists = options.getTargetPathExists();
>     boolean solitaryFile = options.getSourcePaths().size() == 1
>         && !sourceStatus.isDirectory();
>     if (solitaryFile) {
>       if (targetFS.isFile(target) || !targetPathExists) {
>         return sourceStatus.getPath();
>       } else {
>         return sourceStatus.getPath().getParent();
>       }
>     } else {
>       boolean specialHandling = (options.getSourcePaths().size() == 1
>           && !targetPathExists) || options.shouldSyncFolder()
>           || options.shouldOverwrite();
>       return specialHandling && sourceStatus.isDirectory()
>           ? sourceStatus.getPath()
>           : sourceStatus.getPath().getParent();
>     }
>   }
> {code}
> We can see that it can return null at the end: when the source is the root 
> path, {{sourceStatus.getPath().getParent()}} returns null, which then causes 
> the NPE in {{DistCpUtils.getRelativePath}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-10287) MiniDFSCluster should implement AutoCloseable

2016-04-13 Thread John Zhuge (JIRA)
John Zhuge created HDFS-10287:
-

 Summary: MiniDFSCluster should implement AutoCloseable
 Key: HDFS-10287
 URL: https://issues.apache.org/jira/browse/HDFS-10287
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: test
Affects Versions: 2.7.0
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Trivial


{{MiniDFSCluster}} should implement {{AutoCloseable}} in order to support 
[try-with-resources|https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html].
 It will make test code a little cleaner and more reliable.

Since {{AutoCloseable}} only exists in Java 1.7 or later, this cannot be 
backported to Hadoop versions prior to 2.7.
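
Likewise, a sketch for {{MiniDFSCluster}}, assuming {{close()}} delegates to {{shutdown()}}:

{code}
// Illustrative only.
try (MiniDFSCluster cluster =
    new MiniDFSCluster.Builder(conf).numDataNodes(1).build()) {
  cluster.waitActive();
  FileSystem fs = cluster.getFileSystem();
  // ... test body ...
}  // shut down automatically, even when the test body throws
{code}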



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10286) Fix TestDFSAdmin#testNameNodeGetReconfigurableProperties

2016-04-13 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15240631#comment-15240631
 ] 

Xiaoyu Yao commented on HDFS-10286:
---

Patch looks good to me. +1 pending Jenkins.

> Fix TestDFSAdmin#testNameNodeGetReconfigurableProperties
> 
>
> Key: HDFS-10286
> URL: https://issues.apache.org/jira/browse/HDFS-10286
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10286.000.patch
>
>
> HDFS-10209 introduced a new reconfigurable property, which requires an 
> update to the validation in 
> TestDFSAdmin#testNameNodeGetReconfigurableProperties.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10209) Support enable caller context in HDFS namenode audit log without restart namenode

2016-04-13 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15240630#comment-15240630
 ] 

Xiaobing Zhou commented on HDFS-10209:
--

I have posted the v000 patch to HDFS-10286.

> Support enable caller context in HDFS namenode audit log without restart 
> namenode
> -
>
> Key: HDFS-10209
> URL: https://issues.apache.org/jira/browse/HDFS-10209
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaobing Zhou
> Fix For: 2.9.0
>
> Attachments: HDFS-10209-HDFS-9000.000.patch, 
> HDFS-10209-HDFS-9000.001.patch, HDFS-10209-HDFS-9000.UT-fix.patch
>
>
> RPC caller context is a useful feature for tracking down the origin of a 
> caller; it helps identify "bad" jobs that overload the namenode. This 
> ticket is opened to allow enabling the caller context without a namenode restart.
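
For illustration, enabling it on a live NameNode would then look roughly like this (the property name and address placeholder are assumptions about the final patch):

{noformat}
# Set hadoop.caller.context.enabled=true in the NameNode's hdfs-site.xml, then:
hdfs dfsadmin -reconfig namenode <nn-host:ipc-port> start
hdfs dfsadmin -reconfig namenode <nn-host:ipc-port> status
{noformat}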



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10286) Fix TestDFSAdmin#testNameNodeGetReconfigurableProperties

2016-04-13 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10286:
-
Attachment: HDFS-10286.000.patch

I posted the v000 patch; please review, thanks.

> Fix TestDFSAdmin#testNameNodeGetReconfigurableProperties
> 
>
> Key: HDFS-10286
> URL: https://issues.apache.org/jira/browse/HDFS-10286
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10286.000.patch
>
>
> HDFS-10209 introduced a new reconfigurable property, which requires an 
> update to the validation in 
> TestDFSAdmin#testNameNodeGetReconfigurableProperties.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10286) Fix TestDFSAdmin#testNameNodeGetReconfigurableProperties

2016-04-13 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10286:
-
Status: Patch Available  (was: Open)

> Fix TestDFSAdmin#testNameNodeGetReconfigurableProperties
> 
>
> Key: HDFS-10286
> URL: https://issues.apache.org/jira/browse/HDFS-10286
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10286.000.patch
>
>
> HDFS-10209 introduced a new reconfigurable property, which requires an 
> update to the validation in 
> TestDFSAdmin#testNameNodeGetReconfigurableProperties.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10281) o.a.h.hdfs.server.namenode.ha.TestPendingCorruptDnMessages fails intermittently

2016-04-13 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15240618#comment-15240618
 ] 

Jitendra Nath Pandey commented on HDFS-10281:
-

+1, pending Jenkins.

> o.a.h.hdfs.server.namenode.ha.TestPendingCorruptDnMessages fails 
> intermittently
> ---
>
> Key: HDFS-10281
> URL: https://issues.apache.org/jira/browse/HDFS-10281
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10281.000.patch
>
>
> In our daily unit test runs, we found that 
> {{TestPendingCorruptDnMessages#testChangedStorageId}} fails intermittently; 
> see the following information:
> *Error Message*
> expected:<1> but was:<0>
> *Stacktrace*
> {code}
> java.lang.AssertionError: expected:<1> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages.getRegisteredDatanodeUid(TestPendingCorruptDnMessages.java:124)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages.testChangedStorageId(TestPendingCorruptDnMessages.java:103)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10209) Support enable caller context in HDFS namenode audit log without restart namenode

2016-04-13 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15240592#comment-15240592
 ] 

Xiaoyu Yao commented on HDFS-10209:
---

I opened HDFS-10286 and attached your patch to it. 

> Support enable caller context in HDFS namenode audit log without restart 
> namenode
> -
>
> Key: HDFS-10209
> URL: https://issues.apache.org/jira/browse/HDFS-10209
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaobing Zhou
> Fix For: 2.9.0
>
> Attachments: HDFS-10209-HDFS-9000.000.patch, 
> HDFS-10209-HDFS-9000.001.patch, HDFS-10209-HDFS-9000.UT-fix.patch
>
>
> RPC caller context is a useful feature for tracking down the origin of a 
> caller; it helps identify "bad" jobs that overload the namenode. This 
> ticket is opened to allow enabling the caller context without a namenode restart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-10286) Fix TestDFSAdmin#testNameNodeGetReconfigurableProperties

2016-04-13 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-10286:
-

 Summary: Fix TestDFSAdmin#testNameNodeGetReconfigurableProperties
 Key: HDFS-10286
 URL: https://issues.apache.org/jira/browse/HDFS-10286
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaobing Zhou


HDFS-10209 introduced a new reconfigurable property, which requires an update 
to the validation in TestDFSAdmin#testNameNodeGetReconfigurableProperties.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10209) Support enable caller context in HDFS namenode audit log without restart namenode

2016-04-13 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15240588#comment-15240588
 ] 

Xiaoyu Yao commented on HDFS-10209:
---

[~xiaobingo], please open a separate ticket for the unit test fix and link it 
to HDFS-10209. Thanks.

> Support enable caller context in HDFS namenode audit log without restart 
> namenode
> -
>
> Key: HDFS-10209
> URL: https://issues.apache.org/jira/browse/HDFS-10209
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaobing Zhou
> Fix For: 2.9.0
>
> Attachments: HDFS-10209-HDFS-9000.000.patch, 
> HDFS-10209-HDFS-9000.001.patch, HDFS-10209-HDFS-9000.UT-fix.patch
>
>
> RPC caller context is a useful feature for tracking down the origin of a 
> caller; it helps identify "bad" jobs that overload the namenode. This 
> ticket is opened to allow enabling the caller context without a namenode restart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10209) Support enable caller context in HDFS namenode audit log without restart namenode

2016-04-13 Thread Xiaobing Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaobing Zhou updated HDFS-10209:
-
Attachment: HDFS-10209-HDFS-9000.UT-fix.patch

> Support enable caller context in HDFS namenode audit log without restart 
> namenode
> -
>
> Key: HDFS-10209
> URL: https://issues.apache.org/jira/browse/HDFS-10209
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaobing Zhou
> Fix For: 2.9.0
>
> Attachments: HDFS-10209-HDFS-9000.000.patch, 
> HDFS-10209-HDFS-9000.001.patch, HDFS-10209-HDFS-9000.UT-fix.patch
>
>
> RPC caller context is a useful feature for tracking down the origin of a 
> caller; it helps identify "bad" jobs that overload the namenode. This 
> ticket is opened to allow enabling the caller context without a namenode restart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10209) Support enable caller context in HDFS namenode audit log without restart namenode

2016-04-13 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15240575#comment-15240575
 ] 

Xiaobing Zhou commented on HDFS-10209:
--

[~xyao], thanks for committing it. I just noticed that one of the unit test 
failures is related to patch v001. I posted another patch 
to fix that. Sorry for catching it so late.

> Support enable caller context in HDFS namenode audit log without restart 
> namenode
> -
>
> Key: HDFS-10209
> URL: https://issues.apache.org/jira/browse/HDFS-10209
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaobing Zhou
> Fix For: 2.9.0
>
> Attachments: HDFS-10209-HDFS-9000.000.patch, 
> HDFS-10209-HDFS-9000.001.patch, HDFS-10209-HDFS-9000.UT-fix.patch
>
>
> RPC caller context is a useful feature for tracking down the origin of a 
> caller; it helps identify "bad" jobs that overload the namenode. This 
> ticket is opened to allow enabling the caller context without a namenode restart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10232) Ozone: Make config key naming consistent

2016-04-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15240554#comment-15240554
 ] 

Hadoop QA commented on HDFS-10232:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 9m 38s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 11 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 50s {color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s {color} | {color:green} HDFS-7240 passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s {color} | {color:green} HDFS-7240 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s {color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s {color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 9s {color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 34s {color} | {color:green} HDFS-7240 passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 33s {color} | {color:green} HDFS-7240 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s {color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 30s {color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 29s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 57m 38s {color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.8.0_77. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 54m 4s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s {color} | {color:green} Patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 149m 55s {color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_95 Failed junit tests | hadoop.hdfs.server.namenode.TestFsck |
|   | hadoop.hdfs.TestHFlush |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12798624/HDFS-10232-HDFS-7240.002.patch |
| JIRA Issue | HDFS-10232 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  unit  findbugs  checkstyle  |
| uname | Linux 24c15124c61c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/

[jira] [Commented] (HDFS-9016) Display upgrade domain information in fsck

2016-04-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15240544#comment-15240544
 ] 

Hadoop QA commented on HDFS-9016:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 31s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s {color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 53s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s {color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 39s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s {color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s {color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 41s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 29s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_77. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 51s {color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s {color} | {color:green} Patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 164m 52s {color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_77 Failed junit tests | hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead |
|   | hadoop.security.TestPermissionSymlinks |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.fs.TestSymlinkHdfsFileSystem |
|   | hadoop.hdfs.tools.TestDFSAdmin |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.fs.TestFcHdfsSetUMask |
|   | hadoop.fs.contract.hdfs.TestHDFSContractRename |
|   | hadoop.hdfs.TestHDFSTrash |
|   | hadoop.hdfs.TestCrcCorruption |
|   | hadoop.hdfs.TestErasureCodeBenchmarkThroughput |
|   | hadoop.hdfs.TestDFSUpgradeFromImage |
| JDK v1.8.0_77 Timed out junit tests | org.apache.hadoop.hdfs.

[jira] [Updated] (HDFS-9412) getBlocks occupies FSLock and takes too long to complete

2016-04-13 Thread He Tianyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Tianyi updated HDFS-9412:

Attachment: HDFS-9412.0002.patch

Fix code style, whitespace, and unit test issues.

> getBlocks occupies FSLock and takes too long to complete
> 
>
> Key: HDFS-9412
> URL: https://issues.apache.org/jira/browse/HDFS-9412
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: He Tianyi
>Assignee: He Tianyi
> Attachments: HDFS-9412..patch, HDFS-9412.0001.patch, 
> HDFS-9412.0002.patch
>
>
> {{getBlocks}} in {{NameNodeRpcServer}} acquires a read lock and then may take 
> a long time to complete (possibly several seconds, if the number of blocks is 
> too large). 
> During this period, other threads attempting to acquire the write lock will wait. 
> In an extreme case, RPC handlers are occupied by one reader thread calling 
> {{getBlocks}} while all other threads wait for the write lock, so the RPC 
> server appears to hang. Unfortunately, this tends to happen in heavily loaded 
> clusters, since read operations come and go quickly (they do not need to 
> wait), leaving write operations waiting.
> It looks like we can optimize this the way the DN block report was optimized 
> in the past: split the operation into smaller sub-operations, and let other 
> threads do their work between each sub-operation. The whole result is still 
> returned at once, though (one thing different from the DN block report). 
> I am not sure whether this will work. Any better idea?
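
A rough sketch of the splitting idea (helper names are hypothetical, not the actual patch):

{code}
// Illustrative only: hold the read lock for one bounded chunk at a time.
List<BlockWithLocations> results = new ArrayList<>();
while (hasMoreBlocks()) {        // hypothetical cursor over the block iterator
  namesystem.readLock();
  try {
    scanNextChunk(results);      // bounded amount of work per lock hold
  } finally {
    namesystem.readUnlock();     // writers can make progress between chunks
  }
}
return results;                  // the whole result is still returned at once
{code}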



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10216) distcp -diff relative path exception

2016-04-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15240506#comment-15240506
 ] 

Hadoop QA commented on HDFS-10216:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s {color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s {color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s {color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s {color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s {color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s {color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s {color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 1s {color} | {color:green} hadoop-distcp in the patch passed with JDK v1.8.0_77. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 11s {color} | {color:green} hadoop-distcp in the patch passed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s {color} | {color:green} Patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 6s {color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12798629/HDFS-10216.3.patch |
| JIRA Issue | HDFS-10216 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  unit  findbugs  checkstyle  |
| uname | Linux 3fa88677d830 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 27b131e |
| Default Java | 1.7.0_95 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle

[jira] [Updated] (HDFS-9670) DistCp throws NPE when source is root and target exists

2016-04-13 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-9670:
-
Summary: DistCp throws NPE when source is root and target exists  (was: 
DistCp throws NPE when source is root "/")

> DistCp throws NPE when source is root and target exists
> ---
>
> Key: HDFS-9670
> URL: https://issues.apache.org/jira/browse/HDFS-9670
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: John Zhuge
>  Labels: supportability
>
> Symptom:
> {quote}
> [root@vb0724 ~]# hadoop distcp hdfs://X:8020/ hdfs://Y:8020/
> 16/01/20 11:33:33 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
> copyStrategy='uniformsize', sourceFileListing=null, 
> sourcePaths=[hdfs://X:8020/], targetPath=hdfs://Y:8020/, 
> targetPathExists=true, preserveRawXattrs=false, filtersFile='null'}
> 16/01/20 11:33:33 INFO client.RMProxy: Connecting to ResourceManager at Z:8032
> 16/01/20 11:33:33 ERROR tools.DistCp: Exception encountered 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.tools.util.DistCpUtils.getRelativePath(DistCpUtils.java:144)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.writeToFileListing(SimpleCopyListing.java:598)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.writeToFileListingRoot(SimpleCopyListing.java:583)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.doBuildListing(SimpleCopyListing.java:313)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.doBuildListing(SimpleCopyListing.java:174)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:90)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:365)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:171)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:122)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:429)
> {quote}
> Relevant code:
> {code}
>   private Path computeSourceRootPath(FileStatus sourceStatus,
>                                      DistCpOptions options) throws IOException {
>     Path target = options.getTargetPath();
>     FileSystem targetFS = target.getFileSystem(getConf());
>     final boolean targetPathExists = options.getTargetPathExists();
>     boolean solitaryFile = options.getSourcePaths().size() == 1
>         && !sourceStatus.isDirectory();
>     if (solitaryFile) {
>       if (targetFS.isFile(target) || !targetPathExists) {
>         return sourceStatus.getPath();
>       } else {
>         return sourceStatus.getPath().getParent();
>       }
>     } else {
>       boolean specialHandling = (options.getSourcePaths().size() == 1
>           && !targetPathExists) || options.shouldSyncFolder()
>           || options.shouldOverwrite();
>       return specialHandling && sourceStatus.isDirectory()
>           ? sourceStatus.getPath()
>           : sourceStatus.getPath().getParent();
>     }
>   }
> {code}
> We can see that it can return null at the end: when the source is the root 
> path, {{sourceStatus.getPath().getParent()}} returns null, which then causes 
> the NPE in {{DistCpUtils.getRelativePath}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9670) DistCp throws NPE when source is root "/"

2016-04-13 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15240490#comment-15240490
 ] 

John Zhuge commented on HDFS-9670:
--

Source file system can also be different, e.g., {{s3a}}.
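
For example (bucket name illustrative):

{noformat}
hadoop distcp s3a://bucket/ hdfs://Y:8020/
{noformat}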

> DistCp throws NPE when source is root "/"
> -
>
> Key: HDFS-9670
> URL: https://issues.apache.org/jira/browse/HDFS-9670
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: John Zhuge
>  Labels: supportability
>
> Symptom:
> {quote}
> [root@vb0724 ~]# hadoop distcp hdfs://X:8020/ hdfs://Y:8020/
> 16/01/20 11:33:33 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
> copyStrategy='uniformsize', sourceFileListing=null, 
> sourcePaths=[hdfs://X:8020/], targetPath=hdfs://Y:8020/, 
> targetPathExists=true, preserveRawXattrs=false, filtersFile='null'}
> 16/01/20 11:33:33 INFO client.RMProxy: Connecting to ResourceManager at Z:8032
> 16/01/20 11:33:33 ERROR tools.DistCp: Exception encountered 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.tools.util.DistCpUtils.getRelativePath(DistCpUtils.java:144)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.writeToFileListing(SimpleCopyListing.java:598)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.writeToFileListingRoot(SimpleCopyListing.java:583)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.doBuildListing(SimpleCopyListing.java:313)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.doBuildListing(SimpleCopyListing.java:174)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:90)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:365)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:171)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:122)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:429)
> {quote}
> Relevant code:
> {code}
>   private Path computeSourceRootPath(FileStatus sourceStatus,
>                                      DistCpOptions options) throws IOException {
>     Path target = options.getTargetPath();
>     FileSystem targetFS = target.getFileSystem(getConf());
>     final boolean targetPathExists = options.getTargetPathExists();
>     boolean solitaryFile = options.getSourcePaths().size() == 1
>         && !sourceStatus.isDirectory();
>     if (solitaryFile) {
>       if (targetFS.isFile(target) || !targetPathExists) {
>         return sourceStatus.getPath();
>       } else {
>         return sourceStatus.getPath().getParent();
>       }
>     } else {
>       boolean specialHandling = (options.getSourcePaths().size() == 1
>           && !targetPathExists)
>           || options.shouldSyncFolder() || options.shouldOverwrite();
>       return specialHandling && sourceStatus.isDirectory()
>           ? sourceStatus.getPath()
>           : sourceStatus.getPath().getParent();
>     }
>   }
> {code}
> We can see that it can return null at the end, when calling 
> {{sourceStatus.getPath().getParent()}} on the root path.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Work started] (HDFS-9670) DistCp throws NPE when source is root "/"

2016-04-13 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-9670 started by John Zhuge.

> DistCp throws NPE when source is root "/"
> -
>
> Key: HDFS-9670
> URL: https://issues.apache.org/jira/browse/HDFS-9670
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: John Zhuge
>  Labels: supportability
>
> Symptom:
> {quote}
> [root@vb0724 ~]# hadoop distcp hdfs://X:8020/ hdfs://Y:8020/
> 16/01/20 11:33:33 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
> copyStrategy='uniformsize', sourceFileListing=null, 
> sourcePaths=[hdfs://X:8020/], targetPath=hdfs://Y:8020/, 
> targetPathExists=true, preserveRawXattrs=false, filtersFile='null'}
> 16/01/20 11:33:33 INFO client.RMProxy: Connecting to ResourceManager at Z:8032
> 16/01/20 11:33:33 ERROR tools.DistCp: Exception encountered 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.tools.util.DistCpUtils.getRelativePath(DistCpUtils.java:144)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.writeToFileListing(SimpleCopyListing.java:598)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.writeToFileListingRoot(SimpleCopyListing.java:583)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.doBuildListing(SimpleCopyListing.java:313)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.doBuildListing(SimpleCopyListing.java:174)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:90)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:365)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:171)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:122)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:429)
> {quote}
> Relevant code:
> {code}
>   private Path computeSourceRootPath(FileStatus sourceStatus,
>                                      DistCpOptions options) throws IOException {
>     Path target = options.getTargetPath();
>     FileSystem targetFS = target.getFileSystem(getConf());
>     final boolean targetPathExists = options.getTargetPathExists();
>     boolean solitaryFile = options.getSourcePaths().size() == 1
>         && !sourceStatus.isDirectory();
>     if (solitaryFile) {
>       if (targetFS.isFile(target) || !targetPathExists) {
>         return sourceStatus.getPath();
>       } else {
>         return sourceStatus.getPath().getParent();
>       }
>     } else {
>       boolean specialHandling = (options.getSourcePaths().size() == 1
>           && !targetPathExists)
>           || options.shouldSyncFolder() || options.shouldOverwrite();
>       return specialHandling && sourceStatus.isDirectory()
>           ? sourceStatus.getPath()
>           : sourceStatus.getPath().getParent();
>     }
>   }
> {code}
> We can see that it can return null at the end, when calling 
> {{sourceStatus.getPath().getParent()}} on the root path.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9670) DistCp throws NPE when source is root "/"

2016-04-13 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15240486#comment-15240486
 ] 

John Zhuge commented on HDFS-9670:
--

Reproduced the issue in different ways:
* The target dir must exist.
* The target can be:
** on a different cluster (the jira test case),
** on the same cluster,
** or on a different file system, e.g., {{s3a}}.
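For concreteness, a hedged sketch of the reproduction through the DistCp Java 
API (a standalone driver assumed here for illustration; X and Y are the 
placeholder cluster addresses from the report):

{code}
import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.tools.DistCp;
import org.apache.hadoop.tools.DistCpOptions;

public class RootDistCpRepro {
  public static void main(String[] args) throws Exception {
    // Source is the root of one file system; the target directory exists.
    DistCpOptions options = new DistCpOptions(
        Collections.singletonList(new Path("hdfs://X:8020/")),
        new Path("hdfs://Y:8020/"));
    // Before a fix, this path hits the NPE in DistCpUtils.getRelativePath().
    new DistCp(new Configuration(), options).execute();
  }
}
{code}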

> DistCp throws NPE when source is root "/"
> -
>
> Key: HDFS-9670
> URL: https://issues.apache.org/jira/browse/HDFS-9670
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>Assignee: John Zhuge
>  Labels: supportability
>
> Symptom:
> {quote}
> [root@vb0724 ~]# hadoop distcp hdfs://X:8020/ hdfs://Y:8020/
> 16/01/20 11:33:33 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
> copyStrategy='uniformsize', sourceFileListing=null, 
> sourcePaths=[hdfs://X:8020/], targetPath=hdfs://Y:8020/, 
> targetPathExists=true, preserveRawXattrs=false, filtersFile='null'}
> 16/01/20 11:33:33 INFO client.RMProxy: Connecting to ResourceManager at Z:8032
> 16/01/20 11:33:33 ERROR tools.DistCp: Exception encountered 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.tools.util.DistCpUtils.getRelativePath(DistCpUtils.java:144)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.writeToFileListing(SimpleCopyListing.java:598)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.writeToFileListingRoot(SimpleCopyListing.java:583)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.doBuildListing(SimpleCopyListing.java:313)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.doBuildListing(SimpleCopyListing.java:174)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:90)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:365)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:171)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:122)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:429)
> {quote}
> Relevant code:
> {code}
>   private Path computeSourceRootPath(FileStatus sourceStatus,
>                                      DistCpOptions options) throws IOException {
>     Path target = options.getTargetPath();
>     FileSystem targetFS = target.getFileSystem(getConf());
>     final boolean targetPathExists = options.getTargetPathExists();
>     boolean solitaryFile = options.getSourcePaths().size() == 1
>         && !sourceStatus.isDirectory();
>     if (solitaryFile) {
>       if (targetFS.isFile(target) || !targetPathExists) {
>         return sourceStatus.getPath();
>       } else {
>         return sourceStatus.getPath().getParent();
>       }
>     } else {
>       boolean specialHandling = (options.getSourcePaths().size() == 1
>           && !targetPathExists)
>           || options.shouldSyncFolder() || options.shouldOverwrite();
>       return specialHandling && sourceStatus.isDirectory()
>           ? sourceStatus.getPath()
>           : sourceStatus.getPath().getParent();
>     }
>   }
> {code}
> We can see that it can return null at the end, when calling 
> {{sourceStatus.getPath().getParent()}} on the root path.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10280) Document new dfsadmin command -evictWriters

2016-04-13 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-10280:
---
Status: Open  (was: Patch Available)

> Document new dfsadmin command -evictWriters
> ---
>
> Key: HDFS-10280
> URL: https://issues.apache.org/jira/browse/HDFS-10280
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: supportability
> Attachments: HDFS-10280.001.patch
>
>
> HDFS-9945 added a new dfsadmin command -evictWriters, which is great.
> However, I noticed that typing {{dfs dfsadmin}} does not show a command line 
> help summary for it. It is shown only when I type {{dfs dfsadmin -help}}.
> Also, it would be great to document it in the {{HDFS Commands Guide}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10280) Document new dfsadmin command -evictWriters

2016-04-13 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-10280:
---
Status: Patch Available  (was: Open)

Jenkins seems to be stuck. Resubmitting again.

> Document new dfsadmin command -evictWriters
> ---
>
> Key: HDFS-10280
> URL: https://issues.apache.org/jira/browse/HDFS-10280
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: supportability
> Attachments: HDFS-10280.001.patch
>
>
> HDFS-9945 added a new dfsadmin command -evictWriters, which is great.
> However, I noticed that typing {{dfs dfsadmin}} does not show a command line 
> help summary for it. It is shown only when I type {{dfs dfsadmin -help}}.
> Also, it would be great to document it in the {{HDFS Commands Guide}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9349) Support reconfiguring fs.protected.directories without NN restart

2016-04-13 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15240474#comment-15240474
 ] 

Xiaobing Zhou commented on HDFS-9349:
-

[~arpitagarwal] Is there any reason why this is not in trunk? Thanks.

> Support reconfiguring fs.protected.directories without NN restart
> -
>
> Key: HDFS-9349
> URL: https://issues.apache.org/jira/browse/HDFS-9349
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Reporter: Xiaobing Zhou
>Assignee: Xiaobing Zhou
> Fix For: 2.9.0
>
> Attachments: HDFS-9349-HDFS-9000.003.patch, 
> HDFS-9349-HDFS-9000.004.patch, HDFS-9349-HDFS-9000.005.patch, 
> HDFS-9349-HDFS-9000.006.patch, HDFS-9349-HDFS-9000.007.patch, 
> HDFS-9349-HDFS-9000.008.patch, HDFS-9349.001.patch, HDFS-9349.002.patch
>
>
> This is to reconfigure
> {code}
> fs.protected.directories
> {code}
> without restarting NN.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10283) o.a.h.hdfs.server.namenode.TestFSImageWithSnapshot#testSaveLoadImageWithAppending fails intermittently

2016-04-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15240471#comment-15240471
 ] 

Hadoop QA commented on HDFS-10283:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 57s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 58s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
0s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 40s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 50s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_77. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 31s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 200m 30s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_77 Failed junit tests | hadoop.hdfs.TestHFlush |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.namenode.TestFSEditLogLoader |
|   | hadoop.hdfs.tools.TestDFSAdmin |
|   | hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot |
|   | hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.TestDistributedFileSystem |
| JDK v1.8.0_77 Timed out junit tests | 
org.apache.hadoop.hdfs.TestWriteReadStripedFile |
|   | org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding |
| JDK v1.7.0_95 Failed junit tests | hadoop.hdfs.TestHFlush

[jira] [Created] (HDFS-10285) Storage Policy Satisfier in Namenode

2016-04-13 Thread Uma Maheswara Rao G (JIRA)
Uma Maheswara Rao G created HDFS-10285:
--

 Summary: Storage Policy Satisfier in Namenode
 Key: HDFS-10285
 URL: https://issues.apache.org/jira/browse/HDFS-10285
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, namenode
Affects Versions: 2.7.2
Reporter: Uma Maheswara Rao G
Assignee: Uma Maheswara Rao G


Heterogeneous storage in HDFS introduced the concept of storage policies. These 
policies can be set on a directory or file to specify the user's preference for 
where the physical blocks should be stored. When the user sets the storage 
policy before writing data, the blocks can take advantage of the policy 
preferences and the physical blocks are placed accordingly.

If the user sets the storage policy after writing and completing the file, the 
blocks will already have been written with the default storage policy (namely 
DISK). The user then has to run the 'Mover tool' explicitly, specifying all 
such file names as a list. In some distributed system scenarios (e.g. HBase) it 
would be difficult to collect all the files and run the tool, as different 
nodes can write files separately and the files can have different paths.

Another scenario: when the user renames a file from a directory with one 
effective storage policy (inherited from the parent directory) into a directory 
with a different storage policy, the rename does not carry the inherited 
storage policy over from the source, so the file takes its effective policy 
from the destination's parent. This rename operation is just a metadata change 
in the Namenode; the physical blocks still remain placed according to the 
source storage policy.

So tracking down all such file names from distributed nodes (e.g. region 
servers) based on business logic, and then running the Mover tool, could be 
difficult for admins.

The proposal here is to provide an API in the Namenode itself to trigger 
storage policy satisfaction. A daemon thread inside the Namenode would track 
such calls and issue movement commands to the DNs.

Will post the detailed design thoughts document soon.
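To make the gap concrete, a hedged sketch (the path is hypothetical; the calls 
are the existing HDFS client API, assuming {{fs.defaultFS}} points at HDFS):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class PolicyAfterWriteDemo {
  public static void main(String[] args) throws Exception {
    // Assumes fs.defaultFS points at an HDFS cluster.
    DistributedFileSystem dfs =
        (DistributedFileSystem) FileSystem.get(new Configuration());
    // Setting a policy after the file is written is a metadata-only change:
    // the already-written blocks stay where the default (DISK) policy put them.
    Path file = new Path("/data/table1");   // hypothetical path
    dfs.setStoragePolicy(file, "ALL_SSD");
    // Today the admin must separately run the Mover on each such path; the
    // proposed API would let the Namenode itself schedule the block movement.
  }
}
{code}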



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10216) distcp -diff relative path exception

2016-04-13 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15240461#comment-15240461
 ] 

John Zhuge commented on HDFS-10216:
---

Very nice. Consider it reviewed after 2 tiny updates:
* Line 709 is too long.
* Lines 710-711 can be merged into 1 line: {{new DistCp(...).execute()}}

> distcp -diff relative path exception
> 
>
> Key: HDFS-10216
> URL: https://issues.apache.org/jira/browse/HDFS-10216
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: Takashi Ohnishi
>Priority: Critical
> Attachments: HDFS-10216.1.patch, HDFS-10216.2.patch, 
> HDFS-10216.3.patch
>
>
> Got this exception when running {{distcp -diff}} with relative paths:
> {code}
> $ hadoop distcp -update -diff s1 s2 d1 d2
> 16/03/25 09:45:40 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=true, deleteMissing=false, 
> ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
> copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[d1], 
> targetPath=d2, targetPathExists=true, preserveRawXattrs=false, 
> filtersFile='null'}
> 16/03/25 09:45:40 INFO client.RMProxy: Connecting to ResourceManager at 
> jzhuge-balancer-1.vpc.cloudera.com/172.26.21.70:8032
> 16/03/25 09:45:41 ERROR tools.DistCp: Exception encountered 
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative 
> path in absolute URI: 
> hdfs://jzhuge-balancer-1.vpc.cloudera.com:8020./d1/.snapshot/s2
>   at org.apache.hadoop.fs.Path.initialize(Path.java:206)
>   at org.apache.hadoop.fs.Path.<init>(Path.java:197)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.getPathWithSchemeAndAuthority(SimpleCopyListing.java:193)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.addToFileListing(SimpleCopyListing.java:202)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.doBuildListingWithSnapshotDiff(SimpleCopyListing.java:243)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.doBuildListing(SimpleCopyListing.java:172)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at 
> org.apache.hadoop.tools.DistCp.createInputFileListingWithDiff(DistCp.java:388)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:164)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:123)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:436)
> Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://jzhuge-balancer-1.vpc.cloudera.com:8020./d1/.snapshot/s2
>   at java.net.URI.checkPath(URI.java:1804)
>   at java.net.URI.<init>(URI.java:752)
>   at org.apache.hadoop.fs.Path.initialize(Path.java:203)
>   ... 11 more
> {code}
> But these commands worked:
> * Absolute path: {{hadoop distcp -update -diff s1 s2 /user/systest/d1 
> /user/systest/d2}}
> * No {{-diff}}: {{hadoop distcp -update d1 d2}}
> However, everything was fine when I ran {{hadoop distcp -update -diff s1 s2 
> d1 d2}} again. I am not sure whether the problem only exists with option 
> {{-diff}}. Trying to reproduce.
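The root cause shows up even without Hadoop: {{java.net.URI}} rejects an 
authority combined with a relative path, which is exactly the string DistCp 
assembled from the relative source path. A minimal JDK-only sketch (the host 
name is a placeholder):

{code}
import java.net.URI;
import java.net.URISyntaxException;

public class RelativeUriDemo {
  public static void main(String[] args) throws URISyntaxException {
    // URI.checkPath() requires the path to start with '/' when a scheme is
    // present; this throws: "Relative path in absolute URI:
    // hdfs://nn.example.com:8020./d1/.snapshot/s2", matching the trace above.
    new URI("hdfs", "nn.example.com:8020", "./d1/.snapshot/s2", null, null);
  }
}
{code}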



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10282) The VolumeScanner should warn about replica files which are misplaced

2016-04-13 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15240458#comment-15240458
 ] 

Mingliang Liu commented on HDFS-10282:
--

The failing test {{hadoop.hdfs.server.blockmanagement.TestBlockManagerSafeMode}} 
is not related. I filed [HDFS-10284] to address it. Thanks.

> The VolumeScanner should warn about replica files which are misplaced
> -
>
> Key: HDFS-10282
> URL: https://issues.apache.org/jira/browse/HDFS-10282
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.6.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-10282.001.patch, HDFS-10282.002.patch
>
>
> The VolumeScanner should warn about replica files which are misplaced



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10284) o.a.h.hdfs.server.blockmanagement.TestBlockManagerSafeMode.testCheckSafeMode fails intermittently

2016-04-13 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-10284:
-
Status: Patch Available  (was: Open)

> o.a.h.hdfs.server.blockmanagement.TestBlockManagerSafeMode.testCheckSafeMode 
> fails intermittently
> -
>
> Key: HDFS-10284
> URL: https://issues.apache.org/jira/browse/HDFS-10284
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.9.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: HDFS-10284.000.patch
>
>
> *Stacktrace*
> {code}
> org.mockito.exceptions.misusing.UnfinishedStubbingException: 
> Unfinished stubbing detected here:
> -> at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManagerSafeMode.testCheckSafeMode(TestBlockManagerSafeMode.java:169)
> E.g. thenReturn() may be missing.
> Examples of correct stubbing:
> when(mock.isOk()).thenReturn(true);
> when(mock.isOk()).thenThrow(exception);
> doThrow(exception).when(mock).someVoidMethod();
> Hints:
>  1. missing thenReturn()
>  2. although stubbed methods may return mocks, you cannot inline mock 
> creation (mock()) call inside a thenReturn method (see issue 53)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManagerSafeMode.testCheckSafeMode(TestBlockManagerSafeMode.java:169)
> {code}
> Sample failing pre-commit UT: 
> https://builds.apache.org/job/PreCommit-HDFS-Build/15153/testReport/org.apache.hadoop.hdfs.server.blockmanagement/TestBlockManagerSafeMode/testCheckSafeMode/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10284) o.a.h.hdfs.server.blockmanagement.TestBlockManagerSafeMode.testCheckSafeMode fails intermittently

2016-04-13 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-10284:
-
Attachment: HDFS-10284.000.patch

For testing the safe mode state machine transitions ({{testCheckSafeMode()}}), 
the v0 patch uses separate test methods instead of a single one, because a 
previous state transition may have side effects that should be reset before 
the next test case.
For example, {{BlockManagerSafeMode}} has a safe mode monitor thread that is 
created when the BlockManagerSafeMode is constructed. It starts the first time 
safe mode enters the extension stage, and it stops once safe mode leaves to 
the OFF state. Across different test cases, this thread should be reset.
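A generic, hedged illustration of that isolation strategy (illustrative names 
and plain JUnit 4, not code from the patch):

{code}
import static org.junit.Assert.assertFalse;

import org.junit.Before;
import org.junit.Test;

public class PerTestResetExample {
  private volatile boolean running;
  private Thread monitor;   // stands in for the safe mode monitor thread

  @Before
  public void setUp() {
    // Rebuild the thread-owning state for every test method, so nothing
    // started by one transition test leaks into the next.
    running = false;
    monitor = new Thread(() -> { while (running) { Thread.yield(); } });
  }

  @Test
  public void monitorNotStartedByDefault() {
    assertFalse(monitor.isAlive());   // a prior test cannot have started it
  }

  @Test
  public void monitorStopsOnExit() throws InterruptedException {
    running = true;
    monitor.start();
    running = false;                  // analogous to safe mode leaving to OFF
    monitor.join(1000);
    assertFalse(monitor.isAlive());
  }
}
{code}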

> o.a.h.hdfs.server.blockmanagement.TestBlockManagerSafeMode.testCheckSafeMode 
> fails intermittently
> -
>
> Key: HDFS-10284
> URL: https://issues.apache.org/jira/browse/HDFS-10284
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.9.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: HDFS-10284.000.patch
>
>
> *Stacktrace*
> {code}
> org.mockito.exceptions.misusing.UnfinishedStubbingException: 
> Unfinished stubbing detected here:
> -> at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManagerSafeMode.testCheckSafeMode(TestBlockManagerSafeMode.java:169)
> E.g. thenReturn() may be missing.
> Examples of correct stubbing:
> when(mock.isOk()).thenReturn(true);
> when(mock.isOk()).thenThrow(exception);
> doThrow(exception).when(mock).someVoidMethod();
> Hints:
>  1. missing thenReturn()
>  2. although stubbed methods may return mocks, you cannot inline mock 
> creation (mock()) call inside a thenReturn method (see issue 53)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManagerSafeMode.testCheckSafeMode(TestBlockManagerSafeMode.java:169)
> {code}
> Sample failing pre-commit UT: 
> https://builds.apache.org/job/PreCommit-HDFS-Build/15153/testReport/org.apache.hadoop.hdfs.server.blockmanagement/TestBlockManagerSafeMode/testCheckSafeMode/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9412) getBlocks occupies FSLock and takes too long to complete

2016-04-13 Thread Walter Su (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15240455#comment-15240455
 ] 

Walter Su commented on HDFS-9412:
-

Thank you for updating. The test {{TestGetBlocks}} failed. Do you mind changing 
the test accordingly, and fixing the checkstyle issue as well?

> getBlocks occupies FSLock and takes too long to complete
> 
>
> Key: HDFS-9412
> URL: https://issues.apache.org/jira/browse/HDFS-9412
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: He Tianyi
>Assignee: He Tianyi
> Attachments: HDFS-9412..patch, HDFS-9412.0001.patch
>
>
> {{getBlocks}} in {{NameNodeRpcServer}} acquires a read lock and then may take 
> a long time to complete (possibly several seconds if the number of blocks is 
> very large).
> During this period, other threads attempting to acquire the write lock will 
> wait. In an extreme case, the RPC handlers are occupied by one reader thread 
> calling {{getBlocks}} while all the other threads wait for the write lock, so 
> the RPC server appears to hang. Unfortunately, this tends to happen in a 
> heavily loaded cluster, since read operations come and go quickly (they do 
> not need to wait), leaving write operations waiting.
> It looks like we can optimize this the way the DN block report was optimized 
> in the past: split the operation into smaller sub-operations, and let other 
> threads do their work between each sub-operation. The whole result is still 
> returned at once, though (one thing different from the DN block report).
> I am not sure whether this will work. Any better idea?
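A hedged, generic sketch of that batching idea (plain Java locks, not the 
actual NN code): hold the read lock only per batch so a writer can interleave, 
while the caller still gets one combined result.

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class BatchedReadDemo {
  private static final ReentrantReadWriteLock LOCK = new ReentrantReadWriteLock();

  static List<Integer> getBlocksBatched(List<Integer> all, int batchSize) {
    List<Integer> out = new ArrayList<>();
    for (int i = 0; i < all.size(); i += batchSize) {
      LOCK.readLock().lock();         // short-lived read lock per batch
      try {
        out.addAll(all.subList(i, Math.min(i + batchSize, all.size())));
      } finally {
        LOCK.readLock().unlock();     // waiting writers can acquire here
      }
    }
    return out;                       // the whole result is returned at once
  }

  public static void main(String[] args) {
    List<Integer> blocks = new ArrayList<>();
    for (int i = 0; i < 10_000; i++) {
      blocks.add(i);
    }
    System.out.println(getBlocksBatched(blocks, 1024).size());  // 10000
  }
}
{code}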



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10284) o.a.h.hdfs.server.blockmanagement.TestBlockManagerSafeMode.testCheckSafeMode fails intermittently

2016-04-13 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15240439#comment-15240439
 ] 

Mingliang Liu commented on HDFS-10284:
--

I think it's due to mocking {{fsn}} while it is being concurrently accessed by 
another thread ({{smmthread}}).
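A hedged sketch of how that race can surface as {{UnfinishedStubbingException}} 
(an illustrative stand-in interface, not the real {{Namesystem}}):

{code}
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

public class UnfinishedStubbingDemo {
  public interface Namesystem { boolean isRunning(); }  // illustrative stand-in

  public static void main(String[] args) throws InterruptedException {
    final Namesystem fsn = mock(Namesystem.class);
    Thread smmthread = new Thread(() -> {
      for (int i = 0; i < 1_000_000; i++) {
        fsn.isRunning();              // concurrent access to the mock
      }
    });
    smmthread.start();
    // Mockito's stubbing state is not thread-safe: if smmthread invokes the
    // mock between when(...) and thenReturn(...), Mockito can report
    // UnfinishedStubbingException, matching the intermittent failure here.
    when(fsn.isRunning()).thenReturn(true);
    smmthread.join();
  }
}
{code}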


> o.a.h.hdfs.server.blockmanagement.TestBlockManagerSafeMode.testCheckSafeMode 
> fails intermittently
> -
>
> Key: HDFS-10284
> URL: https://issues.apache.org/jira/browse/HDFS-10284
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.9.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Minor
>
> *Stacktrace*
> {code}
> org.mockito.exceptions.misusing.UnfinishedStubbingException: 
> Unfinished stubbing detected here:
> -> at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManagerSafeMode.testCheckSafeMode(TestBlockManagerSafeMode.java:169)
> E.g. thenReturn() may be missing.
> Examples of correct stubbing:
> when(mock.isOk()).thenReturn(true);
> when(mock.isOk()).thenThrow(exception);
> doThrow(exception).when(mock).someVoidMethod();
> Hints:
>  1. missing thenReturn()
>  2. although stubbed methods may return mocks, you cannot inline mock 
> creation (mock()) call inside a thenReturn method (see issue 53)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManagerSafeMode.testCheckSafeMode(TestBlockManagerSafeMode.java:169)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10216) distcp -diff relative path exception

2016-04-13 Thread Takashi Ohnishi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takashi Ohnishi updated HDFS-10216:
---
Status: Open  (was: Patch Available)

> distcp -diff relative path exception
> 
>
> Key: HDFS-10216
> URL: https://issues.apache.org/jira/browse/HDFS-10216
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: Takashi Ohnishi
>Priority: Critical
> Attachments: HDFS-10216.1.patch, HDFS-10216.2.patch, 
> HDFS-10216.3.patch
>
>
> Got this exception when running {{distcp -diff}} with relative paths:
> {code}
> $ hadoop distcp -update -diff s1 s2 d1 d2
> 16/03/25 09:45:40 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=true, deleteMissing=false, 
> ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
> copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[d1], 
> targetPath=d2, targetPathExists=true, preserveRawXattrs=false, 
> filtersFile='null'}
> 16/03/25 09:45:40 INFO client.RMProxy: Connecting to ResourceManager at 
> jzhuge-balancer-1.vpc.cloudera.com/172.26.21.70:8032
> 16/03/25 09:45:41 ERROR tools.DistCp: Exception encountered 
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative 
> path in absolute URI: 
> hdfs://jzhuge-balancer-1.vpc.cloudera.com:8020./d1/.snapshot/s2
>   at org.apache.hadoop.fs.Path.initialize(Path.java:206)
>   at org.apache.hadoop.fs.Path.<init>(Path.java:197)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.getPathWithSchemeAndAuthority(SimpleCopyListing.java:193)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.addToFileListing(SimpleCopyListing.java:202)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.doBuildListingWithSnapshotDiff(SimpleCopyListing.java:243)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.doBuildListing(SimpleCopyListing.java:172)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at 
> org.apache.hadoop.tools.DistCp.createInputFileListingWithDiff(DistCp.java:388)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:164)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:123)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:436)
> Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://jzhuge-balancer-1.vpc.cloudera.com:8020./d1/.snapshot/s2
>   at java.net.URI.checkPath(URI.java:1804)
>   at java.net.URI.<init>(URI.java:752)
>   at org.apache.hadoop.fs.Path.initialize(Path.java:203)
>   ... 11 more
> {code}
> But these commands worked:
> * Absolute path: {{hadoop distcp -update -diff s1 s2 /user/systest/d1 
> /user/systest/d2}}
> * No {{-diff}}: {{hadoop distcp -update d1 d2}}
> However, everything was fine when I ran {{hadoop distcp -update -diff s1 s2 
> d1 d2}} again. I am not sure whether the problem only exists with option 
> {{-diff}}. Trying to reproduce.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10216) distcp -diff relative path exception

2016-04-13 Thread Takashi Ohnishi (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15240438#comment-15240438
 ] 

Takashi Ohnishi commented on HDFS-10216:


Thank you, [~jzhuge], for the helpful review! :)

I have updated the patch with your advice. Please see the attached one.

{quote}
  1.  683: foo is unused
{quote}

Oops. I intended to create f3 under the foo dir, but did not do so.

{quote}
   2. 697: use dfs.getWorkingDirectory(), e.g.:
{code}
Path sourcePath = new Path(dfs.getWorkingDirectory(), "source");
{code}
{quote}
Ah, that's very smart.

{quote}
  3. 708: use target.toString() instead of "/target".
  4. 710-714: try block is not necessary; junit will catch and print full 
stacktrace.
In general, avoid hard-coded "/", use Path.SEPARATOR instead.
{quote}
I have fixed them.


> distcp -diff relative path exception
> 
>
> Key: HDFS-10216
> URL: https://issues.apache.org/jira/browse/HDFS-10216
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: Takashi Ohnishi
>Priority: Critical
> Attachments: HDFS-10216.1.patch, HDFS-10216.2.patch, 
> HDFS-10216.3.patch
>
>
> Got this exception when running {{distcp -diff}} with relative paths:
> {code}
> $ hadoop distcp -update -diff s1 s2 d1 d2
> 16/03/25 09:45:40 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=true, deleteMissing=false, 
> ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
> copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[d1], 
> targetPath=d2, targetPathExists=true, preserveRawXattrs=false, 
> filtersFile='null'}
> 16/03/25 09:45:40 INFO client.RMProxy: Connecting to ResourceManager at 
> jzhuge-balancer-1.vpc.cloudera.com/172.26.21.70:8032
> 16/03/25 09:45:41 ERROR tools.DistCp: Exception encountered 
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative 
> path in absolute URI: 
> hdfs://jzhuge-balancer-1.vpc.cloudera.com:8020./d1/.snapshot/s2
>   at org.apache.hadoop.fs.Path.initialize(Path.java:206)
>   at org.apache.hadoop.fs.Path.<init>(Path.java:197)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.getPathWithSchemeAndAuthority(SimpleCopyListing.java:193)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.addToFileListing(SimpleCopyListing.java:202)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.doBuildListingWithSnapshotDiff(SimpleCopyListing.java:243)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.doBuildListing(SimpleCopyListing.java:172)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at 
> org.apache.hadoop.tools.DistCp.createInputFileListingWithDiff(DistCp.java:388)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:164)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:123)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:436)
> Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://jzhuge-balancer-1.vpc.cloudera.com:8020./d1/.snapshot/s2
>   at java.net.URI.checkPath(URI.java:1804)
>   at java.net.URI.<init>(URI.java:752)
>   at org.apache.hadoop.fs.Path.initialize(Path.java:203)
>   ... 11 more
> {code}
> But these commands worked:
> * Absolute path: {{hadoop distcp -update -diff s1 s2 /user/systest/d1 
> /user/systest/d2}}
> * No {{-diff}}: {{hadoop distcp -update d1 d2}}
> However, everything was fine when I ran {{hadoop distcp -update -diff s1 s2 
> d1 d2}} again. I am not sure whether the problem only exists with option 
> {{-diff}}. Trying to reproduce.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10216) distcp -diff relative path exception

2016-04-13 Thread Takashi Ohnishi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takashi Ohnishi updated HDFS-10216:
---
Attachment: HDFS-10216.3.patch

> distcp -diff relative path exception
> 
>
> Key: HDFS-10216
> URL: https://issues.apache.org/jira/browse/HDFS-10216
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: Takashi Ohnishi
>Priority: Critical
> Attachments: HDFS-10216.1.patch, HDFS-10216.2.patch, 
> HDFS-10216.3.patch
>
>
> Got this exception when running {{distcp -diff}} with relative paths:
> {code}
> $ hadoop distcp -update -diff s1 s2 d1 d2
> 16/03/25 09:45:40 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=true, deleteMissing=false, 
> ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
> copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[d1], 
> targetPath=d2, targetPathExists=true, preserveRawXattrs=false, 
> filtersFile='null'}
> 16/03/25 09:45:40 INFO client.RMProxy: Connecting to ResourceManager at 
> jzhuge-balancer-1.vpc.cloudera.com/172.26.21.70:8032
> 16/03/25 09:45:41 ERROR tools.DistCp: Exception encountered 
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative 
> path in absolute URI: 
> hdfs://jzhuge-balancer-1.vpc.cloudera.com:8020./d1/.snapshot/s2
>   at org.apache.hadoop.fs.Path.initialize(Path.java:206)
>   at org.apache.hadoop.fs.Path.<init>(Path.java:197)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.getPathWithSchemeAndAuthority(SimpleCopyListing.java:193)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.addToFileListing(SimpleCopyListing.java:202)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.doBuildListingWithSnapshotDiff(SimpleCopyListing.java:243)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.doBuildListing(SimpleCopyListing.java:172)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at 
> org.apache.hadoop.tools.DistCp.createInputFileListingWithDiff(DistCp.java:388)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:164)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:123)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:436)
> Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://jzhuge-balancer-1.vpc.cloudera.com:8020./d1/.snapshot/s2
>   at java.net.URI.checkPath(URI.java:1804)
>   at java.net.URI.<init>(URI.java:752)
>   at org.apache.hadoop.fs.Path.initialize(Path.java:203)
>   ... 11 more
> {code}
> But these commands worked:
> * Absolute path: {{hadoop distcp -update -diff s1 s2 /user/systest/d1 
> /user/systest/d2}}
> * No {{-diff}}: {{hadoop distcp -update d1 d2}}
> However, everything was fine when I ran {{hadoop distcp -update -diff s1 s2 
> d1 d2}} again. I am not sure whether the problem only exists with option 
> {{-diff}}. Trying to reproduce.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10284) o.a.h.hdfs.server.blockmanagement.TestBlockManagerSafeMode.testCheckSafeMode fails intermittently

2016-04-13 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-10284:
-
Description: 
*Stacktrace*

{code}
org.mockito.exceptions.misusing.UnfinishedStubbingException: 
Unfinished stubbing detected here:
-> at 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManagerSafeMode.testCheckSafeMode(TestBlockManagerSafeMode.java:169)

E.g. thenReturn() may be missing.
Examples of correct stubbing:
when(mock.isOk()).thenReturn(true);
when(mock.isOk()).thenThrow(exception);
doThrow(exception).when(mock).someVoidMethod();
Hints:
 1. missing thenReturn()
 2. although stubbed methods may return mocks, you cannot inline mock creation 
(mock()) call inside a thenReturn method (see issue 53)

at 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManagerSafeMode.testCheckSafeMode(TestBlockManagerSafeMode.java:169)
{code}

Sample failing pre-commit UT: 
https://builds.apache.org/job/PreCommit-HDFS-Build/15153/testReport/org.apache.hadoop.hdfs.server.blockmanagement/TestBlockManagerSafeMode/testCheckSafeMode/

  was:
*Stacktrace*

{code}
org.mockito.exceptions.misusing.UnfinishedStubbingException: 
Unfinished stubbing detected here:
-> at 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManagerSafeMode.testCheckSafeMode(TestBlockManagerSafeMode.java:169)

E.g. thenReturn() may be missing.
Examples of correct stubbing:
when(mock.isOk()).thenReturn(true);
when(mock.isOk()).thenThrow(exception);
doThrow(exception).when(mock).someVoidMethod();
Hints:
 1. missing thenReturn()
 2. although stubbed methods may return mocks, you cannot inline mock creation 
(mock()) call inside a thenReturn method (see issue 53)

at 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManagerSafeMode.testCheckSafeMode(TestBlockManagerSafeMode.java:169)
{code}


> o.a.h.hdfs.server.blockmanagement.TestBlockManagerSafeMode.testCheckSafeMode 
> fails intermittently
> -
>
> Key: HDFS-10284
> URL: https://issues.apache.org/jira/browse/HDFS-10284
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.9.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Minor
>
> *Stacktrace*
> {code}
> org.mockito.exceptions.misusing.UnfinishedStubbingException: 
> Unfinished stubbing detected here:
> -> at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManagerSafeMode.testCheckSafeMode(TestBlockManagerSafeMode.java:169)
> E.g. thenReturn() may be missing.
> Examples of correct stubbing:
> when(mock.isOk()).thenReturn(true);
> when(mock.isOk()).thenThrow(exception);
> doThrow(exception).when(mock).someVoidMethod();
> Hints:
>  1. missing thenReturn()
>  2. although stubbed methods may return mocks, you cannot inline mock 
> creation (mock()) call inside a thenReturn method (see issue 53)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManagerSafeMode.testCheckSafeMode(TestBlockManagerSafeMode.java:169)
> {code}
> Sample failing pre-commit UT: 
> https://builds.apache.org/job/PreCommit-HDFS-Build/15153/testReport/org.apache.hadoop.hdfs.server.blockmanagement/TestBlockManagerSafeMode/testCheckSafeMode/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10216) distcp -diff relative path exception

2016-04-13 Thread Takashi Ohnishi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takashi Ohnishi updated HDFS-10216:
---
Status: Patch Available  (was: Open)

> distcp -diff relative path exception
> 
>
> Key: HDFS-10216
> URL: https://issues.apache.org/jira/browse/HDFS-10216
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: Takashi Ohnishi
>Priority: Critical
> Attachments: HDFS-10216.1.patch, HDFS-10216.2.patch, 
> HDFS-10216.3.patch
>
>
> Got this exception when running {{distcp -diff}} with relative paths:
> {code}
> $ hadoop distcp -update -diff s1 s2 d1 d2
> 16/03/25 09:45:40 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=true, deleteMissing=false, 
> ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
> copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[d1], 
> targetPath=d2, targetPathExists=true, preserveRawXattrs=false, 
> filtersFile='null'}
> 16/03/25 09:45:40 INFO client.RMProxy: Connecting to ResourceManager at 
> jzhuge-balancer-1.vpc.cloudera.com/172.26.21.70:8032
> 16/03/25 09:45:41 ERROR tools.DistCp: Exception encountered 
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative 
> path in absolute URI: 
> hdfs://jzhuge-balancer-1.vpc.cloudera.com:8020./d1/.snapshot/s2
>   at org.apache.hadoop.fs.Path.initialize(Path.java:206)
>   at org.apache.hadoop.fs.Path.<init>(Path.java:197)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.getPathWithSchemeAndAuthority(SimpleCopyListing.java:193)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.addToFileListing(SimpleCopyListing.java:202)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.doBuildListingWithSnapshotDiff(SimpleCopyListing.java:243)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.doBuildListing(SimpleCopyListing.java:172)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at 
> org.apache.hadoop.tools.DistCp.createInputFileListingWithDiff(DistCp.java:388)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:164)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:123)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:436)
> Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://jzhuge-balancer-1.vpc.cloudera.com:8020./d1/.snapshot/s2
>   at java.net.URI.checkPath(URI.java:1804)
>   at java.net.URI.<init>(URI.java:752)
>   at org.apache.hadoop.fs.Path.initialize(Path.java:203)
>   ... 11 more
> {code}
> But these commands worked:
> * Absolute path: {{hadoop distcp -update -diff s1 s2 /user/systest/d1 
> /user/systest/d2}}
> * No {{-diff}}: {{hadoop distcp -update d1 d2}}
> However, everything was fine when I ran {{hadoop distcp -update -diff s1 s2 
> d1 d2}} again. I am not sure whether the problem only exists with option 
> {{-diff}}. Trying to reproduce.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10284) o.a.h.hdfs.server.blockmanagement.TestBlockManagerSafeMode.testCheckSafeMode fails intermittently

2016-04-13 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-10284:
-
Priority: Minor  (was: Major)

> o.a.h.hdfs.server.blockmanagement.TestBlockManagerSafeMode.testCheckSafeMode 
> fails intermittently
> -
>
> Key: HDFS-10284
> URL: https://issues.apache.org/jira/browse/HDFS-10284
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.9.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Minor
>
> *Stacktrace*
> {code}
> org.mockito.exceptions.misusing.UnfinishedStubbingException: 
> Unfinished stubbing detected here:
> -> at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManagerSafeMode.testCheckSafeMode(TestBlockManagerSafeMode.java:169)
> E.g. thenReturn() may be missing.
> Examples of correct stubbing:
> when(mock.isOk()).thenReturn(true);
> when(mock.isOk()).thenThrow(exception);
> doThrow(exception).when(mock).someVoidMethod();
> Hints:
>  1. missing thenReturn()
>  2. although stubbed methods may return mocks, you cannot inline mock 
> creation (mock()) call inside a thenReturn method (see issue 53)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManagerSafeMode.testCheckSafeMode(TestBlockManagerSafeMode.java:169)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-10284) o.a.h.hdfs.server.blockmanagement.TestBlockManagerSafeMode.testCheckSafeMode fails intermittently

2016-04-13 Thread Mingliang Liu (JIRA)
Mingliang Liu created HDFS-10284:


 Summary: 
o.a.h.hdfs.server.blockmanagement.TestBlockManagerSafeMode.testCheckSafeMode 
fails intermittently
 Key: HDFS-10284
 URL: https://issues.apache.org/jira/browse/HDFS-10284
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.9.0
Reporter: Mingliang Liu
Assignee: Mingliang Liu


*Stacktrace*

{code}
org.mockito.exceptions.misusing.UnfinishedStubbingException: 
Unfinished stubbing detected here:
-> at 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManagerSafeMode.testCheckSafeMode(TestBlockManagerSafeMode.java:169)

E.g. thenReturn() may be missing.
Examples of correct stubbing:
when(mock.isOk()).thenReturn(true);
when(mock.isOk()).thenThrow(exception);
doThrow(exception).when(mock).someVoidMethod();
Hints:
 1. missing thenReturn()
 2. although stubbed methods may return mocks, you cannot inline mock creation 
(mock()) call inside a thenReturn method (see issue 53)

at 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManagerSafeMode.testCheckSafeMode(TestBlockManagerSafeMode.java:169)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9772) TestBlockReplacement#testThrottler doesn't work as expected

2016-04-13 Thread Lin Yiqun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15240420#comment-15240420
 ] 

Lin Yiqun commented on HDFS-9772:
-

Thanks [~walter.k.su] for the commit!

> TestBlockReplacement#testThrottler doesn't work as expected
> ---
>
> Key: HDFS-9772
> URL: https://issues.apache.org/jira/browse/HDFS-9772
> Project: Hadoop HDFS
>  Issue Type: Test
>Affects Versions: 2.7.1
>Reporter: Lin Yiqun
>Assignee: Lin Yiqun
>Priority: Minor
>  Labels: test
> Fix For: 2.7.3
>
> Attachments: HDFS.001.patch
>
>
> In {{TestBlockReplacement#testThrottler}}, the wrong variable is used to 
> calculate the resulting bandwidth: the local variable {{totalBytes}} rather 
> than the final variable {{TOTAL_BYTES}}, whose value is assigned to 
> {{bytesToSend}}. The {{totalBytes}} variable is never updated, so 
> {{totalBytes*1000/(end-start)}} is always 0 and the comparison is always 
> true. The method code is below:
> {code}
> @Test
> public void testThrottler() throws IOException {
>   Configuration conf = new HdfsConfiguration();
>   FileSystem.setDefaultUri(conf, "hdfs://localhost:0");
>   long bandwidthPerSec = 1024*1024L;
>   final long TOTAL_BYTES = 6*bandwidthPerSec;
>   long bytesToSend = TOTAL_BYTES;
>   long start = Time.monotonicNow();
>   DataTransferThrottler throttler = new DataTransferThrottler(bandwidthPerSec);
>   long totalBytes = 0L;
>   long bytesSent = 1024*512L; // 0.5MB
>   throttler.throttle(bytesSent);
>   bytesToSend -= bytesSent;
>   bytesSent = 1024*768L; // 0.75MB
>   throttler.throttle(bytesSent);
>   bytesToSend -= bytesSent;
>   try {
>     Thread.sleep(1000);
>   } catch (InterruptedException ignored) {}
>   throttler.throttle(bytesToSend);
>   long end = Time.monotonicNow();
>   assertTrue(totalBytes*1000/(end-start)<=bandwidthPerSec);
> }
> {code}
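The fix implied above would be to assert on the bytes actually pushed through 
the throttler; a hedged sketch of the corrected final assertion:

{code}
// Use TOTAL_BYTES (the bytes actually sent) instead of the never-updated
// local totalBytes, so the bandwidth check really constrains the throttler.
assertTrue(TOTAL_BYTES * 1000 / (end - start) <= bandwidthPerSec);
{code}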



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10279) Improve validation of the configured number of tolerated failed volumes

2016-04-13 Thread Lin Yiqun (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15240418#comment-15240418
 ] 

Lin Yiqun commented on HDFS-10279:
--

Thanks [~andrew.wang] for the commit!

> Improve validation of the configured number of tolerated failed volumes
> ---
>
> Key: HDFS-10279
> URL: https://issues.apache.org/jira/browse/HDFS-10279
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Lin Yiqun
>Assignee: Lin Yiqun
> Fix For: 2.8.0
>
> Attachments: HDFS-10279.001.patch, HDFS-10279.002.patch
>
>
> Currently a misconfiguration of dfs.datanode.failed.volumes.tolerated is 
> detected too late and is not easy to find. We can move the validation logic 
> for tolerated volumes to an earlier point, before the datanode registers 
> with the namenode. This will let us detect the misconfiguration sooner and 
> more easily.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10232) Ozone: Make config key naming consistent

2016-04-13 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10232:

Attachment: HDFS-10232-HDFS-7240.002.patch

[~arpitagarwal] Thanks for the review. I have fixed the keys that still had the 
dfs.storage. prefix instead of the container prefix.

> Ozone: Make config key naming consistent
> 
>
> Key: HDFS-10232
> URL: https://issues.apache.org/jira/browse/HDFS-10232
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Trivial
> Attachments: HDFS-10232-HDFS-7240.001.patch, 
> HDFS-10232-HDFS-7240.002.patch
>
>
> We seem to use StorageHandler, ozone, Objectstore, etc. as prefixes. We 
> should pick one (ideally ozone) and use it consistently as the prefix for 
> ozone config management.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9016) Display upgrade domain information in fsck

2016-04-13 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-9016:
--
Description: This will make it easy for people to use fsck to check block 
placement when upgrade domain is enabled.  (was: We can add a new command line 
parameter called "-upgradedomain" to fsck to display the upgrade domain value 
of each replica.)

> Display upgrade domain information in fsck
> --
>
> Key: HDFS-9016
> URL: https://issues.apache.org/jira/browse/HDFS-9016
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ming Ma
>Assignee: Ming Ma
> Attachments: HDFS-9016.patch
>
>
> This will make it easy for people to use fsck to check block placement when 
> upgrade domain is enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9016) Display upgrade domain information in fsck

2016-04-13 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-9016:
--
Assignee: Ming Ma
  Status: Patch Available  (was: Open)

> Display upgrade domain information in fsck
> --
>
> Key: HDFS-9016
> URL: https://issues.apache.org/jira/browse/HDFS-9016
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ming Ma
>Assignee: Ming Ma
> Attachments: HDFS-9016.patch
>
>
> We can add a new command line parameter called "-upgradedomain" to fsck to 
> display the upgrade domain value of each replica.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-9016) Display upgrade domain information in fsck

2016-04-13 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-9016:
--
Attachment: HDFS-9016.patch

Here is the draft patch. Given that there is no backward-compatibility issue, it 
seems unnecessary to introduce a separate upgrade domain flag for fsck. fsck will 
return the upgrade domain when it is set and one or more of {"-locations", 
"-racks", "-replicadetails"} are specified.

> Display upgrade domain information in fsck
> --
>
> Key: HDFS-9016
> URL: https://issues.apache.org/jira/browse/HDFS-9016
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ming Ma
> Attachments: HDFS-9016.patch
>
>
> We can add a new command line parameter called "-upgradedomain" to fsck to 
> display the upgrade domain value of each replica.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10209) Support enable caller context in HDFS namenode audit log without restart namenode

2016-04-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15240296#comment-15240296
 ] 

Hudson commented on HDFS-10209:
---

SUCCESS: Integrated in Hadoop-trunk-Commit #9606 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9606/])
Revert "HDFS-10209. Support enable caller context in HDFS namenode audit (xyao: 
rev 4895c73dd493a53eab43f0d16e92c19af15c460b)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeReconfigure.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
HDFS-10209. Support enable caller context in HDFS namenode audit log (xyao: rev 
5566177c9af913baf380811dbbb1fa7e70235491)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeReconfigure.java


> Support enable caller context in HDFS namenode audit log without restart 
> namenode
> -
>
> Key: HDFS-10209
> URL: https://issues.apache.org/jira/browse/HDFS-10209
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaobing Zhou
> Fix For: 2.9.0
>
> Attachments: HDFS-10209-HDFS-9000.000.patch, 
> HDFS-10209-HDFS-9000.001.patch
>
>
> RPC caller context is a useful feature for tracking down the origin of a 
> caller; for example, it can identify "bad" jobs that overload the namenode. 
> This ticket is opened to allow enabling the caller context without a namenode 
> restart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10279) Improve validation of the configured number of tolerated failed volumes

2016-04-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15240295#comment-15240295
 ] 

Hudson commented on HDFS-10279:
---

SUCCESS: Integrated in Hadoop-trunk-Commit #9606 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9606/])
HDFS-10279. Improve validation of the configured number of tolerated (wang: rev 
314aa21a89134fac68ac3cb95efdeb56bd3d7b05)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureToleration.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DNConf.java


> Improve validation of the configured number of tolerated failed volumes
> ---
>
> Key: HDFS-10279
> URL: https://issues.apache.org/jira/browse/HDFS-10279
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Lin Yiqun
>Assignee: Lin Yiqun
> Fix For: 2.8.0
>
> Attachments: HDFS-10279.001.patch, HDFS-10279.002.patch
>
>
> Currently a misconfiguration of dfs.datanode.failed.volumes.tolerated is 
> detected too late and is not easily found. We can move the validation logic 
> for tolerated volumes to an earlier point, before the datanode registers with 
> the namenode. This lets us detect the misconfiguration sooner and more easily.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10209) Support enable caller context in HDFS namenode audit log without restart namenode

2016-04-13 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-10209:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

Thanks [~xiaobingo] for the contribution. I've committed the patch to trunk and 
branch-2.9.

> Support enable caller context in HDFS namenode audit log without restart 
> namenode
> -
>
> Key: HDFS-10209
> URL: https://issues.apache.org/jira/browse/HDFS-10209
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaobing Zhou
> Fix For: 2.9.0
>
> Attachments: HDFS-10209-HDFS-9000.000.patch, 
> HDFS-10209-HDFS-9000.001.patch
>
>
> RPC caller context is a useful feature for tracking down the origin of a 
> caller; for example, it can identify "bad" jobs that overload the namenode. 
> This ticket is opened to allow enabling the caller context without a namenode 
> restart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10279) Improve validation of the configured number of tolerated failed volumes

2016-04-13 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10279:
---
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Thanks again [~linyiqun], committed to trunk, branch-2, branch-2.8.

> Improve validation of the configured number of tolerated failed volumes
> ---
>
> Key: HDFS-10279
> URL: https://issues.apache.org/jira/browse/HDFS-10279
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Lin Yiqun
>Assignee: Lin Yiqun
> Fix For: 2.8.0
>
> Attachments: HDFS-10279.001.patch, HDFS-10279.002.patch
>
>
> Currently a misconfiguration of dfs.datanode.failed.volumes.tolerated is 
> detected too late and is not easily found. We can move the validation logic 
> for tolerated volumes to an earlier point, before the datanode registers with 
> the namenode. This lets us detect the misconfiguration sooner and more easily.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10279) Improve validation of the configured number of tolerated failed volumes

2016-04-13 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-10279:
---
Summary: Improve validation of the configured number of tolerated failed 
volumes  (was: Improve the validation for tolerated volumes)

> Improve validation of the configured number of tolerated failed volumes
> ---
>
> Key: HDFS-10279
> URL: https://issues.apache.org/jira/browse/HDFS-10279
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Lin Yiqun
>Assignee: Lin Yiqun
> Attachments: HDFS-10279.001.patch, HDFS-10279.002.patch
>
>
> Currently a misconfiguration of dfs.datanode.failed.volumes.tolerated is 
> detected too late and is not easily found. We can move the validation logic 
> for tolerated volumes to an earlier point, before the datanode registers with 
> the namenode. This lets us detect the misconfiguration sooner and more easily.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10209) Support enable caller context in HDFS namenode audit log without restart namenode

2016-04-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15240245#comment-15240245
 ] 

Hudson commented on HDFS-10209:
---

FAILURE: Integrated in Hadoop-trunk-Commit #9605 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9605/])
HDFS-10209. Support enable caller context in HDFS namenode audit log (xyao: rev 
192112d5a2e7ce4ec8eb47e21ab744b34c848893)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeReconfigure.java


> Support enable caller context in HDFS namenode audit log without restart 
> namenode
> -
>
> Key: HDFS-10209
> URL: https://issues.apache.org/jira/browse/HDFS-10209
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10209-HDFS-9000.000.patch, 
> HDFS-10209-HDFS-9000.001.patch
>
>
> RPC caller context is a useful feature for tracking down the origin of a 
> caller; for example, it can identify "bad" jobs that overload the namenode. 
> This ticket is opened to allow enabling the caller context without a namenode 
> restart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10208) Addendum for HDFS-9579: to handle the case when client machine can't resolve network path

2016-04-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15240216#comment-15240216
 ] 

Hadoop QA commented on HDFS-10208:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
51s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 55s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 47s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
7s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 28s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 7s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 21s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
58s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 1s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 57s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 24s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 22s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 55s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_77. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 1s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_77. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 90m 8s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_77. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 30s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 7s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_95. {color} |
| {color:red}-1{color} | 

[jira] [Commented] (HDFS-10282) The VolumeScanner should warn about replica files which are misplaced

2016-04-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15240202#comment-15240202
 ] 

Hadoop QA commented on HDFS-10282:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
7s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 48s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 44s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 1s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_77. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 107m 4s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
56s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 229m 7s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_77 Failed junit tests | hadoop.hdfs.TestHFlush |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
|   | hadoop.hdfs.TestRemoteBlockReader |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
|   | hadoop.hdfs.shortcircuit.TestShortCircuitLocalRead |
|   | hadoop.hdfs.TestDFSUpgradeFromImage |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.TestInjectionForSimulatedStorage |
| JDK v1.8.0_77 Timed out junit tests | 
org

[jira] [Commented] (HDFS-10232) Ozone: Make config key naming consistent

2016-04-13 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15240184#comment-15240184
 ] 

Arpit Agarwal commented on HDFS-10232:
--

bq. I wanted to differentiate the SCM keys from the ozone keys. Do you think I 
should add Ozone to these? The container keys relate to the management of 
containers, and the ozone keys relate to ozone.
Thanks, that makes sense. Neither the DFS nor the Ozone prefix may be right for 
container-layer keys, so we can leave them alone until we have a better idea.

> Ozone: Make config key naming consistent
> 
>
> Key: HDFS-10232
> URL: https://issues.apache.org/jira/browse/HDFS-10232
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Trivial
> Attachments: HDFS-10232-HDFS-7240.001.patch
>
>
> We seem to use StorageHandler, ozone, Objectstore, etc. as prefixes. We should 
> pick one -- ideally ozone -- and use it consistently as the prefix for ozone 
> config management.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10209) Support enable caller context in HDFS namenode audit log without restart namenode

2016-04-13 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15240173#comment-15240173
 ] 

Xiaoyu Yao commented on HDFS-10209:
---

The test failure does not seem to be related to this patch. 
+1 for the 001 patch; I will commit this shortly. 
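
For context, a hedged sketch of how the committed feature would be exercised at 
runtime, assuming the generic reconfiguration CLI that 
{{TestNameNodeReconfigure}} covers and the standard 
{{hadoop.caller.context.enabled}} key:
{code}
# After updating hadoop.caller.context.enabled in the NameNode's
# configuration file, ask the running NameNode to pick it up -- no restart:
hdfs dfsadmin -reconfig namenode nn-host:8020 start
hdfs dfsadmin -reconfig namenode nn-host:8020 status
{code}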

> Support enable caller context in HDFS namenode audit log without restart 
> namenode
> -
>
> Key: HDFS-10209
> URL: https://issues.apache.org/jira/browse/HDFS-10209
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaobing Zhou
> Attachments: HDFS-10209-HDFS-9000.000.patch, 
> HDFS-10209-HDFS-9000.001.patch
>
>
> RPC caller context is a useful feature for tracking down the origin of a 
> caller; for example, it can identify "bad" jobs that overload the namenode. 
> This ticket is opened to allow enabling the caller context without a namenode 
> restart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10283) o.a.h.hdfs.server.namenode.TestFSImageWithSnapshot#testSaveLoadImageWithAppending fails intermittently

2016-04-13 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15240168#comment-15240168
 ] 

Jing Zhao commented on HDFS-10283:
--

+1 pending jenkins.

> o.a.h.hdfs.server.namenode.TestFSImageWithSnapshot#testSaveLoadImageWithAppending
>  fails intermittently
> --
>
> Key: HDFS-10283
> URL: https://issues.apache.org/jira/browse/HDFS-10283
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10283.000.patch
>
>
> The test fails with the following exception: 
> {code}
> java.io.IOException: Failed to replace a bad datanode on the existing 
> pipeline due to no more good datanodes being available to try. (Nodes: 
> current=[DatanodeInfoWithStorage[127.0.0.1:47227,DS-dd109c14-79e5-4380-ac5e-4434cd7e25b5,DISK],
>  
> DatanodeInfoWithStorage[127.0.0.1:56949,DS-6c0be75e-a78c-41b9-bfd0-7ee0cdefaa0e,DISK]],
>  
> original=[DatanodeInfoWithStorage[127.0.0.1:47227,DS-dd109c14-79e5-4380-ac5e-4434cd7e25b5,DISK],
>  
> DatanodeInfoWithStorage[127.0.0.1:56949,DS-6c0be75e-a78c-41b9-bfd0-7ee0cdefaa0e,DISK]]).
>  The current failed datanode replacement policy is DEFAULT, and a client may 
> configure this via 
> 'dfs.client.block.write.replace-datanode-on-failure.policy' in its 
> configuration.
>   at 
> org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1162)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1232)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1423)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1338)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1321)
>   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:599)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10281) o.a.h.hdfs.server.namenode.ha.TestPendingCorruptDnMessages fails intermittently

2016-04-13 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-10281:
-
Status: Patch Available  (was: Open)

> o.a.h.hdfs.server.namenode.ha.TestPendingCorruptDnMessages fails 
> intermittently
> ---
>
> Key: HDFS-10281
> URL: https://issues.apache.org/jira/browse/HDFS-10281
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10281.000.patch
>
>
> In our daily UT runs, we found that 
> {{TestPendingCorruptDnMessages#testChangedStorageId}} fails intermittently; 
> see the following information:
> *Error Message*
> expected:<1> but was:<0>
> *Stacktrace*
> {code}
> java.lang.AssertionError: expected:<1> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages.getRegisteredDatanodeUid(TestPendingCorruptDnMessages.java:124)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages.testChangedStorageId(TestPendingCorruptDnMessages.java:103)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10281) o.a.h.hdfs.server.namenode.ha.TestPendingCorruptDnMessages fails intermittently

2016-04-13 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-10281:
-
Status: Open  (was: Patch Available)

> o.a.h.hdfs.server.namenode.ha.TestPendingCorruptDnMessages fails 
> intermittently
> ---
>
> Key: HDFS-10281
> URL: https://issues.apache.org/jira/browse/HDFS-10281
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10281.000.patch
>
>
> In our daily UT runs, we found that 
> {{TestPendingCorruptDnMessages#testChangedStorageId}} fails intermittently; 
> see the following information:
> *Error Message*
> expected:<1> but was:<0>
> *Stacktrace*
> {code}
> java.lang.AssertionError: expected:<1> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages.getRegisteredDatanodeUid(TestPendingCorruptDnMessages.java:124)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages.testChangedStorageId(TestPendingCorruptDnMessages.java:103)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10283) o.a.h.hdfs.server.namenode.TestFSImageWithSnapshot#testSaveLoadImageWithAppending fails intermittently

2016-04-13 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-10283:
-
Attachment: HDFS-10283.000.patch

A simple fix is to disable the replace-datanode-on-failure feature by setting 
the {{dfs.client.block.write.replace-datanode-on-failure.enable}} key to false.

Actually, the test does not depend on the write (append) pipeline itself. When 
creating files, we can simply reduce the replication factor from 3 to 1 while 
keeping the number of datanodes at 3 for placement. Another benefit is that the 
test will be faster, as the pipeline is much shorter.
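
As a hedged illustration of both options (a test-scoped sketch, not the 
attached patch; {{cluster}} stands for the test's MiniDFSCluster):
{code}
// Option 1: disable datanode replacement on pipeline failure entirely,
// via the standard client-side enable key.
Configuration conf = new HdfsConfiguration();
conf.setBoolean(
    "dfs.client.block.write.replace-datanode-on-failure.enable", false);

// Option 2: write with replication 1 while keeping 3 datanodes up, so a
// replacement is always available if one pipeline node fails.
DistributedFileSystem fs = cluster.getFileSystem();
DFSTestUtil.createFile(fs, new Path("/test/file"), 1024L, (short) 1, 0L);
{code}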

> o.a.h.hdfs.server.namenode.TestFSImageWithSnapshot#testSaveLoadImageWithAppending
>  fails intermittently
> --
>
> Key: HDFS-10283
> URL: https://issues.apache.org/jira/browse/HDFS-10283
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10283.000.patch
>
>
> The test fails with the following exception: 
> {code}
> java.io.IOException: Failed to replace a bad datanode on the existing 
> pipeline due to no more good datanodes being available to try. (Nodes: 
> current=[DatanodeInfoWithStorage[127.0.0.1:47227,DS-dd109c14-79e5-4380-ac5e-4434cd7e25b5,DISK],
>  
> DatanodeInfoWithStorage[127.0.0.1:56949,DS-6c0be75e-a78c-41b9-bfd0-7ee0cdefaa0e,DISK]],
>  
> original=[DatanodeInfoWithStorage[127.0.0.1:47227,DS-dd109c14-79e5-4380-ac5e-4434cd7e25b5,DISK],
>  
> DatanodeInfoWithStorage[127.0.0.1:56949,DS-6c0be75e-a78c-41b9-bfd0-7ee0cdefaa0e,DISK]]).
>  The current failed datanode replacement policy is DEFAULT, and a client may 
> configure this via 
> 'dfs.client.block.write.replace-datanode-on-failure.policy' in its 
> configuration.
>   at 
> org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1162)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1232)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1423)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1338)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1321)
>   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:599)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10283) o.a.h.hdfs.server.namenode.TestFSImageWithSnapshot#testSaveLoadImageWithAppending fails intermittently

2016-04-13 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HDFS-10283:
-
Status: Patch Available  (was: Open)

> o.a.h.hdfs.server.namenode.TestFSImageWithSnapshot#testSaveLoadImageWithAppending
>  fails intermittently
> --
>
> Key: HDFS-10283
> URL: https://issues.apache.org/jira/browse/HDFS-10283
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Attachments: HDFS-10283.000.patch
>
>
> The test fails with the following exception: 
> {code}
> java.io.IOException: Failed to replace a bad datanode on the existing 
> pipeline due to no more good datanodes being available to try. (Nodes: 
> current=[DatanodeInfoWithStorage[127.0.0.1:47227,DS-dd109c14-79e5-4380-ac5e-4434cd7e25b5,DISK],
>  
> DatanodeInfoWithStorage[127.0.0.1:56949,DS-6c0be75e-a78c-41b9-bfd0-7ee0cdefaa0e,DISK]],
>  
> original=[DatanodeInfoWithStorage[127.0.0.1:47227,DS-dd109c14-79e5-4380-ac5e-4434cd7e25b5,DISK],
>  
> DatanodeInfoWithStorage[127.0.0.1:56949,DS-6c0be75e-a78c-41b9-bfd0-7ee0cdefaa0e,DISK]]).
>  The current failed datanode replacement policy is DEFAULT, and a client may 
> configure this via 
> 'dfs.client.block.write.replace-datanode-on-failure.policy' in its 
> configuration.
>   at 
> org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1162)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1232)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1423)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1338)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1321)
>   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:599)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10283) o.a.h.hdfs.server.namenode.TestFSImageWithSnapshot#testSaveLoadImageWithAppending fails intermittently

2016-04-13 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15240097#comment-15240097
 ] 

Mingliang Liu commented on HDFS-10283:
--

It happens in our internal daily UT Jenkins runs, and in recent Apache trunk 
pre-commit builds (e.g. 
https://builds.apache.org/job/PreCommit-HDFS-Build/15140/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-jdk1.8.0_77.txt)

Before this exception, the NN had complained about not having enough replicas, 
as follows:
{code}
2016-04-12 13:21:30,511 WARN  blockmanagement.BlockPlacementPolicy 
(BlockPlacementPolicyDefault.java:chooseTarget(380)) - Failed to place enough 
replicas, still in need of 1 to reach 3 (unavailableStorages=[], 
storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], 
creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) For more 
information, please enable DEBUG log level on 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2016-04-12 13:21:30,511 WARN  blockmanagement.BlockPlacementPolicy 
(BlockPlacementPolicyDefault.java:chooseTarget(380)) - Failed to place enough 
replicas, still in need of 1 to reach 3 (unavailableStorages=[DISK], 
storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], 
creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) For more 
information, please enable DEBUG log level on 
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2016-04-12 13:21:30,511 WARN  protocol.BlockStoragePolicy 
(BlockStoragePolicy.java:chooseStorageTypes(162)) - Failed to place enough 
replicas: expected size is 1 but only 0 storage types can be selected 
(replication=3, selected=[], unavailable=[DISK, ARCHIVE], removed=[DISK], 
policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], 
replicationFallbacks=[ARCHIVE]})
2016-04-12 13:21:30,512 WARN  blockmanagement.BlockPlacementPolicy 
(BlockPlacementPolicyDefault.java:chooseTarget(380)) - Failed to place enough 
replicas, still in need of 1 to reach 3 (unavailableStorages=[DISK, ARCHIVE], 
storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], 
creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=false) All 
required storage types are unavailable:  unavailableStorages=[DISK, ARCHIVE], 
storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], 
creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
{code}

The basic problem here is that the number of datanodes equals the replication 
factor (3 in this test). If there is a failure in the write (append) pipeline, 
there are no more good datanodes to substitute in. The block manager logs the 
above warnings but does not fail the request; that is why we did not see 
exceptions thrown by the NN. The client side detects that no new DN was 
allocated to replace the failed node in the pipeline, so the DataStreamer 
throws java.io.IOException: Failed to replace a bad datanode on the existing 
pipeline due to no more good datanodes being available to try.

> o.a.h.hdfs.server.namenode.TestFSImageWithSnapshot#testSaveLoadImageWithAppending
>  fails intermittently
> --
>
> Key: HDFS-10283
> URL: https://issues.apache.org/jira/browse/HDFS-10283
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>
> The test fails with the following exception: 
> {code}
> java.io.IOException: Failed to replace a bad datanode on the existing 
> pipeline due to no more good datanodes being available to try. (Nodes: 
> current=[DatanodeInfoWithStorage[127.0.0.1:47227,DS-dd109c14-79e5-4380-ac5e-4434cd7e25b5,DISK],
>  
> DatanodeInfoWithStorage[127.0.0.1:56949,DS-6c0be75e-a78c-41b9-bfd0-7ee0cdefaa0e,DISK]],
>  
> original=[DatanodeInfoWithStorage[127.0.0.1:47227,DS-dd109c14-79e5-4380-ac5e-4434cd7e25b5,DISK],
>  
> DatanodeInfoWithStorage[127.0.0.1:56949,DS-6c0be75e-a78c-41b9-bfd0-7ee0cdefaa0e,DISK]]).
>  The current failed datanode replacement policy is DEFAULT, and a client may 
> configure this via 
> 'dfs.client.block.write.replace-datanode-on-failure.policy' in its 
> configuration.
>   at 
> org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1162)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1232)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1423)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1338)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1321)
>   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:599)
> {co

[jira] [Created] (HDFS-10283) o.a.h.hdfs.server.namenode.TestFSImageWithSnapshot#testSaveLoadImageWithAppending fails intermittently

2016-04-13 Thread Mingliang Liu (JIRA)
Mingliang Liu created HDFS-10283:


 Summary: 
o.a.h.hdfs.server.namenode.TestFSImageWithSnapshot#testSaveLoadImageWithAppending
 fails intermittently
 Key: HDFS-10283
 URL: https://issues.apache.org/jira/browse/HDFS-10283
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.8.0
Reporter: Mingliang Liu
Assignee: Mingliang Liu


The test fails with the following exception: 

{code}
java.io.IOException: Failed to replace a bad datanode on the existing pipeline 
due to no more good datanodes being available to try. (Nodes: 
current=[DatanodeInfoWithStorage[127.0.0.1:47227,DS-dd109c14-79e5-4380-ac5e-4434cd7e25b5,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:56949,DS-6c0be75e-a78c-41b9-bfd0-7ee0cdefaa0e,DISK]],
 
original=[DatanodeInfoWithStorage[127.0.0.1:47227,DS-dd109c14-79e5-4380-ac5e-4434cd7e25b5,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:56949,DS-6c0be75e-a78c-41b9-bfd0-7ee0cdefaa0e,DISK]]).
 The current failed datanode replacement policy is DEFAULT, and a client may 
configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' 
in its configuration.
at 
org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1162)
at 
org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1232)
at 
org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1423)
at 
org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1338)
at 
org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1321)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:599)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10232) Ozone: Make config key naming consistent

2016-04-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15239997#comment-15239997
 ] 

Hadoop QA commented on HDFS-10232:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
45s {color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} HDFS-7240 passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s 
{color} | {color:green} HDFS-7240 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
27s {color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
20s {color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 18s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in HDFS-7240 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 35s 
{color} | {color:green} HDFS-7240 passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 28s 
{color} | {color:green} HDFS-7240 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 27s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 29s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 0s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_77. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 57m 30s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
26s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 149m 32s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_77 Failed junit tests | 
hadoop.ozone.web.TestOzoneRestWithMiniCluster |
|   | hadoop.hdfs.server.namenode.TestEditLog |
|   | hadoop.hdfs.TestHFlush |
| JDK v1.7.0_95 Failed junit tests | hadoop.hdfs.server.namenode.TestEditLog |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12798538/HDFS-10232-HDFS-7240.001.patch
 |
| JIRA Issue | HDFS-10232 |
| Optional Tests |  asflicense  compi

[jira] [Commented] (HDFS-9820) Improve distcp to support efficient restore to an earlier snapshot

2016-04-13 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15239955#comment-15239955
 ] 

Jing Zhao commented on HDFS-9820:
-

Thanks again for working on this, Yongjun. Some comments:
# Even without HDFS-10263, it looks like we can still achieve the goal by 
computing the diff with reversed snapshot names. I guess we only need to skip 
{{DistCpSync#prepareDiffList}} here. Can you confirm? Or do you see any other 
issue?
# When using snapshot diff for distcp, our basic assumption is "there is no 
change between snapshot {{from}} and the current state of the target
file system." This is our best effort to avoid applying the diff on a wrong 
state of the target. Now with the patch this check is disabled for rdiff. 
However, in the rdiff scenario, the current state of the target is always 
mapped to a snapshot in both source and target, so it looks to me like we do 
not need to make any changes to the command options.
# Because of HDFS-10263, we only need to understand which snapshot is earlier, 
the {{from}} or the {{to}}. This information can be retrieved from the 
modification times of the two snapshots (i.e., by calling {{getListing}} 
against {{path-of-snapshottable-dir/.snapshot}}); see the sketch below.
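
A hedged sketch of point 3 ({{dfs}}, {{snapshottableDir}}, and the snapshot 
names are assumptions):
{code}
// Compare the two snapshots' modification times from a listing of the
// .snapshot directory to decide which one is earlier.
Path snapshotRoot = new Path(snapshottableDir, ".snapshot");
long fromTime = -1, toTime = -1;
for (FileStatus s : dfs.listStatus(snapshotRoot)) {
  if (s.getPath().getName().equals(fromSnapshot)) {
    fromTime = s.getModificationTime();
  } else if (s.getPath().getName().equals(toSnapshot)) {
    toTime = s.getModificationTime();
  }
}
boolean fromIsEarlier = fromTime < toTime;
{code}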

> Improve distcp to support efficient restore to an earlier snapshot
> --
>
> Key: HDFS-9820
> URL: https://issues.apache.org/jira/browse/HDFS-9820
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: distcp
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HDFS-9820.001.patch, HDFS-9820.002.patch, 
> HDFS-9820.003.patch, HDFS-9820.004.patch
>
>
> HDFS-4167 intends to restore HDFS to the most recent snapshot, and there are 
> some complexities and challenges. 
> HDFS-7535 improved distcp performance by avoiding copying files that changed 
> name since the last backup.
> On top of HDFS-7535, HDFS-8828 improved distcp performance when copying data 
> from the source to the target cluster, by copying only the files changed 
> since the last backup. The way it works is to use snapshot diff to find all 
> changed files and copy only those.
> See 
> https://blog.cloudera.com/blog/2015/12/distcp-performance-improvements-in-apache-hadoop/
> This jira proposes a variation of HDFS-8828: find the files changed in the 
> target cluster since the last snapshot sx, and copy them from the source 
> cluster's same snapshot sx, to restore the target cluster to sx.
> If a file/dir was
> - renamed, rename it back
> - created in the target cluster, delete it
> - modified, put it on the copy list
> Then run distcp with the copy list, copying from the source cluster's 
> corresponding snapshot.
> This could be a new command line switch -rdiff in distcp.
> HDFS-4167 would still be nice to have. It just seems to me that HDFS-9820 
> would hopefully be easier to implement.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10216) distcp -diff relative path exception

2016-04-13 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15239952#comment-15239952
 ] 

John Zhuge commented on HDFS-10216:
---

Great job. Just minor nitpicks:
* {{TestDistCpSync.java}}
## 683: foo is unused
## 697: use {{dfs.getWorkingDirectory()}}, e.g.:
{code}
Path sourcePath = new Path(dfs.getWorkingDirectory(), "source");
{code}
## 708: use {{target.toString()}} instead of "/target".
## 710-714: the try block is not necessary; junit will catch and print the full 
stacktrace.
* In general, avoid hard-coding "/"; use {{Path.SEPARATOR}} instead (see the 
snippet below).
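
Illustrative snippet for the last point (names are examples only):
{code}
// Build test paths without hard-coding "/".
Path source = new Path(dfs.getWorkingDirectory(), "source");
Path nested = new Path(source, "sub" + Path.SEPARATOR + "dir");
{code}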

> distcp -diff relative path exception
> 
>
> Key: HDFS-10216
> URL: https://issues.apache.org/jira/browse/HDFS-10216
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: Takashi Ohnishi
>Priority: Critical
> Attachments: HDFS-10216.1.patch, HDFS-10216.2.patch
>
>
> Got this exception when running {{distcp -diff}} with relative paths:
> {code}
> $ hadoop distcp -update -diff s1 s2 d1 d2
> 16/03/25 09:45:40 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=true, deleteMissing=false, 
> ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
> copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[d1], 
> targetPath=d2, targetPathExists=true, preserveRawXattrs=false, 
> filtersFile='null'}
> 16/03/25 09:45:40 INFO client.RMProxy: Connecting to ResourceManager at 
> jzhuge-balancer-1.vpc.cloudera.com/172.26.21.70:8032
> 16/03/25 09:45:41 ERROR tools.DistCp: Exception encountered 
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative 
> path in absolute URI: 
> hdfs://jzhuge-balancer-1.vpc.cloudera.com:8020./d1/.snapshot/s2
>   at org.apache.hadoop.fs.Path.initialize(Path.java:206)
>   at org.apache.hadoop.fs.Path.(Path.java:197)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.getPathWithSchemeAndAuthority(SimpleCopyListing.java:193)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.addToFileListing(SimpleCopyListing.java:202)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.doBuildListingWithSnapshotDiff(SimpleCopyListing.java:243)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.doBuildListing(SimpleCopyListing.java:172)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at 
> org.apache.hadoop.tools.DistCp.createInputFileListingWithDiff(DistCp.java:388)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:164)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:123)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:436)
> Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://jzhuge-balancer-1.vpc.cloudera.com:8020./d1/.snapshot/s2
>   at java.net.URI.checkPath(URI.java:1804)
>   at java.net.URI.(URI.java:752)
>   at org.apache.hadoop.fs.Path.initialize(Path.java:203)
>   ... 11 more
> {code}
> But these commands worked:
> * Absolute path: {{hadoop distcp -update -diff s1 s2 /user/systest/d1 
> /user/systest/d2}}
> * No {{-diff}}: {{hadoop distcp -update d1 d2}}
> However, everything was fine when I ran {{hadoop distcp -update -diff s1 s2 
> d1 d2}} again. I am not sure whether the problem only exists with the 
> {{-diff}} option. Trying to reproduce.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10282) The VolumeScanner should warn about replica files which are misplaced

2016-04-13 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-10282:

Attachment: HDFS-10282.002.patch

OK. The new revision makes the DirectoryScanner message about a misplaced 
block more consistent with the VolumeScanner one.

It makes more sense to point out that the block is misplaced rather than to 
claim that it "has to be upgraded." For example, some sysadmins copy blocks 
around manually -- in that case, the misplacement has nothing to do with an 
upgrade.

> The VolumeScanner should warn about replica files which are misplaced
> -
>
> Key: HDFS-10282
> URL: https://issues.apache.org/jira/browse/HDFS-10282
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.6.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-10282.001.patch, HDFS-10282.002.patch
>
>
> The VolumeScanner should warn about replica files which are misplaced



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10208) Addendum for HDFS-9579: to handle the case when client machine can't resolve network path

2016-04-13 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-10208:
---
Attachment: HDFS-10208-5.patch

Here is the updated patch. Thank you [~sjlee0] and [~brahmareddy].

> Addendum for HDFS-9579: to handle the case when client machine can't resolve 
> network path
> -
>
> Key: HDFS-10208
> URL: https://issues.apache.org/jira/browse/HDFS-10208
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ming Ma
>Assignee: Ming Ma
> Attachments: HDFS-10208-2.patch, HDFS-10208-3.patch, 
> HDFS-10208-4.patch, HDFS-10208-5.patch, HDFS-10208.patch
>
>
> If DFSClient runs on a machine that can't resolve the network path, 
> {{DNSToSwitchMapping}} will return {{DEFAULT_RACK}}. In addition, if somehow 
> {{dnsToSwitchMapping.resolve}} returns null, that will cause an exception when 
> it tries to create {{clientNode}}. In either case, there is no need to create 
> {{clientNode}}, and we should treat its network distance to any datanode as 
> Integer.MAX_VALUE.
> {noformat}
> clientNode = new NodeBase(clientHostName,
> dnsToSwitchMapping.resolve(nodes).get(0));
> {noformat}
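
For illustration, a hedged sketch of the guard the description calls for (not 
necessarily what the attached patch does):
{code}
// Skip creating clientNode when resolution fails or yields the default
// rack; callers then treat the network distance to any datanode as
// Integer.MAX_VALUE.
List<String> resolved = dnsToSwitchMapping.resolve(nodes);
Node clientNode = null;
if (resolved != null
    && !NetworkTopology.DEFAULT_RACK.equals(resolved.get(0))) {
  clientNode = new NodeBase(clientHostName, resolved.get(0));
}
{code}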



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10232) Ozone: Make config key naming consistent

2016-04-13 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15239679#comment-15239679
 ] 

Anu Engineer commented on HDFS-10232:
-

I wanted to differentiate the SCM keys from the ozone keys. Do you think I 
should add Ozone to these? The container keys relate to the management of 
containers, and the ozone keys relate to ozone. Do you think it would be 
clearer if we had ozone in all key names? Another reason I did not add ozone to 
these is that we have ozone-site.xml.


> Ozone: Make config key naming consistent
> 
>
> Key: HDFS-10232
> URL: https://issues.apache.org/jira/browse/HDFS-10232
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Trivial
> Attachments: HDFS-10232-HDFS-7240.001.patch
>
>
> We seem to use StorageHandler, ozone, Objectstore, etc. as prefixes. We should 
> pick one -- ideally ozone -- and use it consistently as the prefix for ozone 
> config management.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10232) Ozone: Make config key naming consistent

2016-04-13 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15239670#comment-15239670
 ] 

Arpit Agarwal commented on HDFS-10232:
--

Some keys are still missing "ozone" as part of their name. Was that 
intentional? e.g.

{code}
  public static final String DFS_CONTAINER_LOCATION_RPC_ADDRESS_KEY =
  "dfs.container.location.rpc-address";
  public static final int DFS_CONTAINER_LOCATION_RPC_DEFAULT_PORT = 50200;
  public static final String DFS_CONTAINER_LOCATION_RPC_ADDRESS_DEFAULT =
  "0.0.0.0:" + DFS_CONTAINER_LOCATION_RPC_DEFAULT_PORT;
  public static final String DFS_CONTAINER_LOCATION_RPC_BIND_HOST_KEY =
  "dfs.storage.rpc-bind-host";
  public static final String DFS_CONTAINER_LOCATION_HANDLER_COUNT_KEY =
  "dfs.storage.handler.count";
  public static final int DFS_STORAGE_HANDLER_COUNT_DEFAULT = 10;
{code}
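
For illustration only -- not from the patch -- a single agreed prefix would 
make such keys self-describing:
{code}
// Hypothetical naming sketch; the actual names are up to the patch.
public static final String OZONE_CONTAINER_LOCATION_RPC_ADDRESS_KEY =
    "ozone.container.location.rpc-address";
{code}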

> Ozone: Make config key naming consistent
> 
>
> Key: HDFS-10232
> URL: https://issues.apache.org/jira/browse/HDFS-10232
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Trivial
> Attachments: HDFS-10232-HDFS-7240.001.patch
>
>
> We seem to use StorageHandler, ozone, Objectstore, etc. as prefixes. We should 
> pick one -- ideally ozone -- and use it consistently as the prefix for ozone 
> config management.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10232) Ozone: Make config key naming consistent

2016-04-13 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15239660#comment-15239660
 ] 

Arpit Agarwal commented on HDFS-10232:
--

Hi Anu, IMO we don't need the DFS_ prefix for Ozone keys.

> Ozone: Make config key naming consistent
> 
>
> Key: HDFS-10232
> URL: https://issues.apache.org/jira/browse/HDFS-10232
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Trivial
> Attachments: HDFS-10232-HDFS-7240.001.patch
>
>
> We seem to use StorageHandler, ozone, Objectstore, etc. as prefixes. We should 
> pick one -- ideally ozone -- and use it consistently as the prefix for ozone 
> config management.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10232) Ozone: Make config key naming consistent

2016-04-13 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10232:

Attachment: HDFS-10232-HDFS-7240.001.patch

> Ozone: Make config key naming consistent
> 
>
> Key: HDFS-10232
> URL: https://issues.apache.org/jira/browse/HDFS-10232
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Trivial
> Attachments: HDFS-10232-HDFS-7240.001.patch
>
>
> We seem to use StorageHandler, ozone, Objectstore, etc. as prefixes. We should 
> pick one -- ideally ozone -- and use it consistently as the prefix for ozone 
> config management.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10232) Ozone: Make config key naming consistent

2016-04-13 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDFS-10232:

Status: Patch Available  (was: Open)

> Ozone: Make config key naming consistent
> 
>
> Key: HDFS-10232
> URL: https://issues.apache.org/jira/browse/HDFS-10232
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Trivial
> Attachments: HDFS-10232-HDFS-7240.001.patch
>
>
> We seem to use StorageHandler, ozone, Objectstore, etc. as prefixes. We should 
> pick one -- ideally ozone -- and use it consistently as the prefix for ozone 
> config management.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9412) getBlocks occupies FSLock and takes too long to complete

2016-04-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15239642#comment-15239642
 ] 

Hadoop QA commented on HDFS-9412:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 28s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
2s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 29s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs: patch generated 1 new + 
130 unchanged - 0 fixed = 131 total (was 130) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 48s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 11s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 115m 4s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_77. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 107m 12s 
{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
31s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 256m 9s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_77 Failed junit tests | 
hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.TestBlockStoragePolicy |
|   | hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.serve

[jira] [Commented] (HDFS-10208) Addendum for HDFS-9579: to handle the case when client machine can't resolve network path

2016-04-13 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15239606#comment-15239606
 ] 

Brahma Reddy Battula commented on HDFS-10208:
-

LGTM. [~sjlee0], thanks for pinging me.

bq. applicable for most existing properties defined in 
CommonConfigurationKeysPublic.

Yes. Maybe, to make Jenkins happy, the patch can be updated with the checkstyle 
fixes. :)

> Addendum for HDFS-9579: to handle the case when client machine can't resolve 
> network path
> -
>
> Key: HDFS-10208
> URL: https://issues.apache.org/jira/browse/HDFS-10208
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ming Ma
>Assignee: Ming Ma
> Attachments: HDFS-10208-2.patch, HDFS-10208-3.patch, 
> HDFS-10208-4.patch, HDFS-10208.patch
>
>
> If DFSClient runs on a machine that can't resolve network path, 
> {{DNSToSwitchMapping}} will return {{DEFAULT_RACK}}. In addition, if somehow 
> {{dnsToSwitchMapping.resolve}} returns null, that will cause exception when 
> it tries to create {{clientNode}}. In either case, there is no need to create 
> {{clientNode}} and we should treat its network distance with any datanode as 
> Integer.MAX_VALUE.
> {noformat}
> clientNode = new NodeBase(clientHostName,
> dnsToSwitchMapping.resolve(nodes).get(0));
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-9530) huge Non-DFS Used in hadoop 2.6.2 & 2.7.1

2016-04-13 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15239582#comment-15239582
 ] 

Brahma Reddy Battula commented on HDFS-9530:


Could anybody review the attached patch?

> huge Non-DFS Used in hadoop 2.6.2 & 2.7.1
> -
>
> Key: HDFS-9530
> URL: https://issues.apache.org/jira/browse/HDFS-9530
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Fei Hui
> Attachments: HDFS-9530-01.patch
>
>
> I think there are bugs in HDFS.
> ===
> Here is the config:
> <property>
>   <name>dfs.datanode.data.dir</name>
>   <value>file:///mnt/disk4,file:///mnt/disk1,file:///mnt/disk3,file:///mnt/disk2</value>
> </property>
> Here is the dfsadmin report:
> [hadoop@worker-1 ~]$ hadoop dfsadmin -report
> DEPRECATED: Use of this script to execute hdfs command is deprecated.
> Instead use the hdfs command for it.
> Configured Capacity: 240769253376 (224.23 GB)
> Present Capacity: 238604832768 (222.22 GB)
> DFS Remaining: 215772954624 (200.95 GB)
> DFS Used: 22831878144 (21.26 GB)
> DFS Used%: 9.57%
> Under replicated blocks: 4
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> -
> Live datanodes (3):
> Name: 10.117.60.59:50010 (worker-2)
> Hostname: worker-2
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 7190958080 (6.70 GB)
> Non DFS Used: 721473536 (688.05 MB)
> DFS Remaining: 72343986176 (67.38 GB)
> DFS Used%: 8.96%
> DFS Remaining%: 90.14%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 1
> Last contact: Wed Dec 09 15:55:02 CST 2015
> Name: 10.168.156.0:50010 (worker-3)
> Hostname: worker-3
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 7219073024 (6.72 GB)
> Non DFS Used: 721473536 (688.05 MB)
> DFS Remaining: 72315871232 (67.35 GB)
> DFS Used%: 9.00%
> DFS Remaining%: 90.11%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 1
> Last contact: Wed Dec 09 15:55:03 CST 2015
> Name: 10.117.15.38:50010 (worker-1)
> Hostname: worker-1
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 8421847040 (7.84 GB)
> Non DFS Used: 721473536 (688.05 MB)
> DFS Remaining: 71113097216 (66.23 GB)
> DFS Used%: 10.49%
> DFS Remaining%: 88.61%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 1
> Last contact: Wed Dec 09 15:55:03 CST 2015
> 
> when running hive job , dfsadmin report as follows
> [hadoop@worker-1 ~]$ hadoop dfsadmin -report
> DEPRECATED: Use of this script to execute hdfs command is deprecated.
> Instead use the hdfs command for it.
> Configured Capacity: 240769253376 (224.23 GB)
> Present Capacity: 108266011136 (100.83 GB)
> DFS Remaining: 80078416384 (74.58 GB)
> DFS Used: 28187594752 (26.25 GB)
> DFS Used%: 26.04%
> Under replicated blocks: 7
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> -
> Live datanodes (3):
> Name: 10.117.60.59:50010 (worker-2)
> Hostname: worker-2
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 9015627776 (8.40 GB)
> Non DFS Used: 44303742464 (41.26 GB)
> DFS Remaining: 26937047552 (25.09 GB)
> DFS Used%: 11.23%
> DFS Remaining%: 33.56%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 693
> Last contact: Wed Dec 09 15:37:35 CST 2015
> Name: 10.168.156.0:50010 (worker-3)
> Hostname: worker-3
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 9163116544 (8.53 GB)
> Non DFS Used: 47895897600 (44.61 GB)
> DFS Remaining: 23197403648 (21.60 GB)
> DFS Used%: 11.42%
> DFS Remaining%: 28.90%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 750
> Last contact: Wed Dec 09 15:37:36 CST 2015
> Name: 10.117.15.38:50010 (worker-1)
> Hostname: worker-1
> Decommission Status : Normal
> Configured Capacity: 80256417792 (74.74 GB)
> DFS Used: 10008850432 (9.32 GB)
> Non DFS Used: 40303602176 (37.54 GB)
> DFS Remaining: 29943965184 (27.89 GB)
> DFS Used%: 12.47%
> DFS Remaining%: 37.31%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 632
> Last conta
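
For what it is worth, the Non DFS Used figures in these reports are consistent 
with the value being derived as capacity minus DFS used minus remaining, which is 
why a running job's local and intermediate data shows up there. A quick sanity 
check in Java using the worker-2 numbers above:

{code}
// NonDfsUsed = Configured Capacity - DFS Used - DFS Remaining (derived value)
long idle = 80256417792L - 7190958080L - 72343986176L; // =   721473536 (688.05 MB)
long busy = 80256417792L - 9015627776L - 26937047552L; // = 44303742464 (41.26 GB)
{code}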

[jira] [Commented] (HDFS-10270) TestJMXGet:testNameNode() fails

2016-04-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15239566#comment-15239566
 ] 

Hudson commented on HDFS-10270:
---

FAILURE: Integrated in Hadoop-trunk-Commit #9603 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9603/])
HDFS-10270. TestJMXGet:testNameNode() fails. Contributed by Gergely (kihwal: 
rev d2f3bbc29046435904ad9418073795439c71b441)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestJMXGet.java


> TestJMXGet:testNameNode() fails
> ---
>
> Key: HDFS-10270
> URL: https://issues.apache.org/jira/browse/HDFS-10270
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.8.0
>Reporter: Andras Bokor
>Assignee: Gergely Novák
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-10270.001.patch, TestJMXGet.log, TestJMXGetFails.log
>
>
> It fails with java.util.concurrent.TimeoutException. The problem here is that 
> we expect the NumOpenConnections metric to be 2 but it is only 1, so the test 
> waits 60 seconds and then fails.
> Please find the Maven output with the stack trace attached 
> ([^TestJMXGetFails.log]).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10270) TestJMXGet:testNameNode() fails

2016-04-13 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-10270:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed it to trunk, branch-2 and branch-2.8. Thanks for fixing this, Gergely.

> TestJMXGet:testNameNode() fails
> ---
>
> Key: HDFS-10270
> URL: https://issues.apache.org/jira/browse/HDFS-10270
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.8.0
>Reporter: Andras Bokor
>Assignee: Gergely Novák
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-10270.001.patch, TestJMXGet.log, TestJMXGetFails.log
>
>
> It fails with java.util.concurrent.TimeoutException. The problem here is that 
> we expect the NumOpenConnections metric to be 2 but it is only 1, so the test 
> waits 60 seconds and then fails.
> Please find the Maven output with the stack trace attached 
> ([^TestJMXGetFails.log]).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10270) TestJMXGet:testNameNode() fails

2016-04-13 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15239532#comment-15239532
 ] 

Kihwal Lee commented on HDFS-10270:
---

bq. Do you see any disadvantages in removing the NumOpenConnections check?
I was trying to keep whatever the original author did, but in terms of the value 
of the check, I think it is fine to remove. So +1 for the patch.

> TestJMXGet:testNameNode() fails
> ---
>
> Key: HDFS-10270
> URL: https://issues.apache.org/jira/browse/HDFS-10270
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.8.0
>Reporter: Andras Bokor
>Assignee: Gergely Novák
>Priority: Minor
> Attachments: HDFS-10270.001.patch, TestJMXGet.log, TestJMXGetFails.log
>
>
> It fails with java.util.concurrent.TimeoutException. The problem here is that 
> we expect the NumOpenConnections metric to be 2 but it is only 1, so the test 
> waits 60 seconds and then fails.
> Please find the Maven output with the stack trace attached 
> ([^TestJMXGetFails.log]).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10270) TestJMXGet:testNameNode() fails

2016-04-13 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15239525#comment-15239525
 ] 

Kihwal Lee commented on HDFS-10270:
---

bq. In addition, playing with timeouts makes a JUnit test more complicated than 
a JUnit test should be. What is your opinion about these?
Totally agree, but in reality Hadoop unit tests are riddled with unreliable 
timeouts, and many of them are not really unit tests. They exist for many 
reasons, including the inherent difficulty of unit testing distributed 
functionality, test-unfriendly designs, and simply bad test design. Many are 
aware of the issue.
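
As a sketch of the alternative (assuming {{GenericTestUtils.waitFor}} and the 
{{JMXGet}} helper already used by this test; the threshold is illustrative), 
polling the metric bounds the wait without asserting a fixed connection count:

{code}
GenericTestUtils.waitFor(new Supplier<Boolean>() {
  @Override
  public Boolean get() {
    try {
      // Succeed as soon as at least one open connection is reported.
      return Integer.parseInt(jmx.getValue("NumOpenConnections")) >= 1;
    } catch (Exception e) {
      return false;
    }
  }
}, 100, 60000);
{code}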

> TestJMXGet:testNameNode() fails
> ---
>
> Key: HDFS-10270
> URL: https://issues.apache.org/jira/browse/HDFS-10270
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.8.0
>Reporter: Andras Bokor
>Assignee: Gergely Novák
>Priority: Minor
> Attachments: HDFS-10270.001.patch, TestJMXGet.log, TestJMXGetFails.log
>
>
> It fails with java.util.concurrent.TimeoutException. The problem here is that 
> we expect the NumOpenConnections metric to be 2 but it is only 1, so the test 
> waits 60 seconds and then fails.
> Please find the Maven output with the stack trace attached 
> ([^TestJMXGetFails.log]).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6600) fsck -move fails in secured clusters.

2016-04-13 Thread Amitabha Karmakar (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15239420#comment-15239420
 ] 

Amitabha Karmakar commented on HDFS-6600:
-

Hi,
  Did we do anything further with this patch? Right now it is causing a lot of 
problems for us.

Thanks,
Amitabha.
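
For context, the failure described below is that the embedded DFSClient has no 
credentials in a Kerberized cluster. A rough sketch of one possible direction 
(hypothetical, not the attached patches):

{code}
// Run the lost+found copy as the fsck server's login user so the embedded
// DFSClient can authenticate with the server's Kerberos credentials.
UserGroupInformation ugi = UserGroupInformation.getLoginUser();
ugi.doAs(new PrivilegedExceptionAction<Void>() {
  @Override
  public Void run() throws IOException {
    // Open the DFSClient and copy the corrupt file's blocks to /lost+found here.
    return null;
  }
});
{code}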

> fsck -move fails in secured clusters.
> -
>
> Key: HDFS-6600
> URL: https://issues.apache.org/jira/browse/HDFS-6600
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security, tools
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Chris Nauroth
>Assignee: Krish Dey
>  Labels: BB2015-05-TBR
> Attachments: HDFS-6600.1.PATCH, HDFS-6600.2.patch
>
>
> In a secured cluster, running hdfs fsck -move fails.  When trying to move the 
> recovered blocks to lost+found, fsck tries to start using a DFSClient, but it 
> doesn't have the credentials to authenticate that client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10247) libhdfs++: Datanode protocol version mismatch

2016-04-13 Thread James Clampffer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clampffer updated HDFS-10247:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> libhdfs++: Datanode protocol version mismatch
> -
>
> Key: HDFS-10247
> URL: https://issues.apache.org/jira/browse/HDFS-10247
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-10247.HDFS-8707.000.patch, 
> HDFS-10247.HDFS-8707.001.patch, HDFS-10247.HDFS-8707.002.patch
>
>
> Occasionally "Version Mismatch (Expected: 28, Received: 22794 )" shows up in 
> the logs.  This doesn't happen much at all with less than 500 concurrent 
> reads and starts happening often enough to be an issue at 1000 concurrent 
> reads.
> I've seen 3 distinct numbers: 23050 (most common), 22538, and 22794.  If you 
> break these shorts into bytes you get
> {code}
> 23050 -> [90,10]
> 22794 -> [89,10]
> 22538 -> [88,10]
> {code}
> Interestingly enough, if we dump the buffers holding protobuf messages just 
> before they hit the wire, we see things like the following, with the first two 
> bytes as 90,10:
> {code}
> buffer 
> ={90,10,82,10,64,10,52,10,37,66,80,45,49,51,56,49,48,51,51,57,57,49,45,49,50,55,46,48,46,48,46,49,45,49,52,53,57,53,50,53,54,49,53,55,50,53,16,-127,-128,-128,-128,4,24,-23,7,32,-128,-128,64,18,8,10,0,18,0,26,0,34,0,18,14,108,105,98,104,100,102,115,43,43,95,75,67,43,49,16,0,24,23,32,1}
> {code}
> The first 3 bytes the DN is expecting for an unsecured read block request = 
> {code}
> {0,28,81} //[0, 28]->a short for protocol, 81 is read block opcode
> {code}
> This seems like either connections are getting swapped between readers or
> the header isn't being sent for some reason but the protobuf message is.
> I've ruled out memory stomps on the header data (see HDFS-10241) by sticking 
> the 3-byte header in its own static buffer that all requests use.
> Some notes:
> -The mismatched number will stay the same for the duration of a stress test.
> -The mismatch is distributed fairly evenly throughout the logs
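
The byte arithmetic above checks out: decomposing the bogus version shorts 
big-endian reproduces exactly the leading bytes of the protobuf buffer (a quick 
sketch):

{code}
for (int v : new int[] {23050, 22794, 22538}) {
  // Prints 23050 -> [90,10], 22794 -> [89,10], 22538 -> [88,10]
  System.out.printf("%d -> [%d,%d]%n", v, (v >> 8) & 0xff, v & 0xff);
}
{code}

which supports the theory that the 3-byte header is being skipped and the 
datanode is reading the start of the protobuf message as the protocol version.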



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10247) libhdfs++: Datanode protocol version mismatch

2016-04-13 Thread James Clampffer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15239379#comment-15239379
 ] 

James Clampffer commented on HDFS-10247:


Thanks for the reviews and feedback, Bob!

I've committed this to the HDFS-8707 branch.

> libhdfs++: Datanode protocol version mismatch
> -
>
> Key: HDFS-10247
> URL: https://issues.apache.org/jira/browse/HDFS-10247
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-10247.HDFS-8707.000.patch, 
> HDFS-10247.HDFS-8707.001.patch, HDFS-10247.HDFS-8707.002.patch
>
>
> Occasionally "Version Mismatch (Expected: 28, Received: 22794 )" shows up in 
> the logs.  This doesn't happen much at all with less than 500 concurrent 
> reads and starts happening often enough to be an issue at 1000 concurrent 
> reads.
> I've seen 3 distinct numbers: 23050 (most common), 22538, and 22794.  If you 
> break these shorts into bytes you get
> {code}
> 23050 -> [90,10]
> 22794 -> [89,10]
> 22538 -> [88,10]
> {code}
> Interestingly enough, if we dump the buffers holding protobuf messages just 
> before they hit the wire, we see things like the following, with the first two 
> bytes as 90,10:
> {code}
> buffer 
> ={90,10,82,10,64,10,52,10,37,66,80,45,49,51,56,49,48,51,51,57,57,49,45,49,50,55,46,48,46,48,46,49,45,49,52,53,57,53,50,53,54,49,53,55,50,53,16,-127,-128,-128,-128,4,24,-23,7,32,-128,-128,64,18,8,10,0,18,0,26,0,34,0,18,14,108,105,98,104,100,102,115,43,43,95,75,67,43,49,16,0,24,23,32,1}
> {code}
> The first 3 bytes the DN is expecting for an unsecured read block request = 
> {code}
> {0,28,81} //[0, 28]->a short for protocol, 81 is read block opcode
> {code}
> This seems like either connections are getting swapped between readers or
> the header isn't being sent for some reason but the protobuf message is.
> I've ruled out memory stomps on the header data (see HDFS-10241) by sticking 
> the 3-byte header in its own static buffer that all requests use.
> Some notes:
> -The mismatched number will stay the same for the duration of a stress test.
> -The mismatch is distributed fairly evenly throughout the logs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10216) distcp -diff relative path exception

2016-04-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15239297#comment-15239297
 ] 

Hadoop QA commented on HDFS-10216:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
57s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 49s 
{color} | {color:green} hadoop-distcp in the patch passed with JDK v1.8.0_77. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 54s 
{color} | {color:green} hadoop-distcp in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 56s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12798508/HDFS-10216.2.patch |
| JIRA Issue | HDFS-10216 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ea3cd1104fd7 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 903428b |
| Default Java | 1.7.0_95 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-orac

[jira] [Commented] (HDFS-3702) Add an option for NOT writing the blocks locally if there is a datanode on the same box as the client

2016-04-13 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15239291#comment-15239291
 ] 

stack commented on HDFS-3702:
-

bq. Suppose we find that the CreateFlag.NO_LOCAL_WRITE is bad. How do we remove 
it, i.e. what is the procedure to remove it? I believe we cannot simply remove 
it since it probably will break HBASE compilation.

Just remove it. HBase has loads of practice dealing with stuff being 
moved/removed and changed under it by HDFS.

You could also just leave the flag in place, since there is no obligation that 
any filesystem respect the flag. It is a suggestion only (see 
http://linux.die.net/man/2/open / create for the long, interesting set of flags 
it has).

 bq. Another possible case: suppose that we find the disfavorNodes feature is 
very useful later on. How do we add it?

The same way you'd add any feature, and HBase would look for it the way it does 
now: peeking for the presence of the extra facility with if/else on the hdfs 
version, reflection, try/catches of NoSuchMethodException, etc. We have lots of 
practice doing this as well. We'd keep using the NO_LOCAL_WRITE flag though, 
unless it is purged, since it does what we want. As I understand it, 
disfavoredNodes would require a lot more work from HBase to get the same 
functionality that NO_LOCAL_WRITE provides.

bq. It seems that the "whatever proofing" is to let the community try the 
features for a period of time. Then, we may add it to the FileSystem API.

Sorry, 'whatever proofing' was overly expansive. We are just adding a flag. I 
just meant: if the tests added here are not sufficient, or you want some other 
proof it works pre-commit, just say. No problem.

Also, the community has been running with this 'feature' for years (see 
HBASE-6435), so there is no need for us to take the suggested disruptive 
'indirection' just to add a filesystem 'hint' with the attendant mess in HDFS -- 
extra params on create -- that cannot subsequently be removed.

Thanks [~szetszwo]

What do you think of adding the LimitedPrivate and Evolving attributes to the 
flag? Would that be indicator enough for you?
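
For reference, a minimal sketch of how a client such as HBase might request the 
behavior, assuming the flag ships as proposed here (buffer size, replication, 
and the path are illustrative):

{code}
EnumSet<CreateFlag> flags = EnumSet.of(CreateFlag.CREATE,
    CreateFlag.OVERWRITE, CreateFlag.NO_LOCAL_WRITE);
FSDataOutputStream out = fs.create(walPath, FsPermission.getFileDefault(),
    flags, 4096, (short) 3, fs.getDefaultBlockSize(walPath), null);
{code}

Since the flag is only a hint, a filesystem that does not understand it can 
simply ignore it, which is what makes non-support safe.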

> Add an option for NOT writing the blocks locally if there is a datanode on 
> the same box as the client
> -
>
> Key: HDFS-3702
> URL: https://issues.apache.org/jira/browse/HDFS-3702
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 2.5.1
>Reporter: Nicolas Liochon
>Assignee: Lei (Eddy) Xu
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HDFS-3702.000.patch, HDFS-3702.001.patch, 
> HDFS-3702.002.patch, HDFS-3702.003.patch, HDFS-3702.004.patch, 
> HDFS-3702.005.patch, HDFS-3702.006.patch, HDFS-3702.007.patch, 
> HDFS-3702.008.patch, HDFS-3702.009.patch, HDFS-3702.010.patch, 
> HDFS-3702.011.patch, HDFS-3702_Design.pdf
>
>
> This is useful for Write-Ahead-Logs: these files are written for recovery 
> only, and are not read when there are no failures.
> Taking HBase as an example, these files will be read only if the process that 
> wrote them (the 'HBase regionserver') dies. This will likely come from a 
> hardware failure, hence the corresponding datanode will be dead as well. So 
> we're writing 3 replicas, but in reality only 2 of them are really useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10247) libhdfs++: Datanode protocol version mismatch

2016-04-13 Thread Bob Hansen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15239254#comment-15239254
 ] 

Bob Hansen commented on HDFS-10247:
---

Fair enough.  +1

> libhdfs++: Datanode protocol version mismatch
> -
>
> Key: HDFS-10247
> URL: https://issues.apache.org/jira/browse/HDFS-10247
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Reporter: James Clampffer
>Assignee: James Clampffer
> Attachments: HDFS-10247.HDFS-8707.000.patch, 
> HDFS-10247.HDFS-8707.001.patch, HDFS-10247.HDFS-8707.002.patch
>
>
> Occasionally "Version Mismatch (Expected: 28, Received: 22794 )" shows up in 
> the logs.  This doesn't happen much at all with less than 500 concurrent 
> reads and starts happening often enough to be an issue at 1000 concurrent 
> reads.
> I've seen 3 distinct numbers: 23050 (most common), 22538, and 22794.  If you 
> break these shorts into bytes you get
> {code}
> 23050 -> [90,10]
> 22794 -> [89,10]
> 22538 -> [88,10]
> {code}
> Interestingly enough, if we dump the buffers holding protobuf messages just 
> before they hit the wire, we see things like the following, with the first two 
> bytes as 90,10:
> {code}
> buffer 
> ={90,10,82,10,64,10,52,10,37,66,80,45,49,51,56,49,48,51,51,57,57,49,45,49,50,55,46,48,46,48,46,49,45,49,52,53,57,53,50,53,54,49,53,55,50,53,16,-127,-128,-128,-128,4,24,-23,7,32,-128,-128,64,18,8,10,0,18,0,26,0,34,0,18,14,108,105,98,104,100,102,115,43,43,95,75,67,43,49,16,0,24,23,32,1}
> {code}
> The first 3 bytes the DN is expecting for an unsecured read block request = 
> {code}
> {0,28,81} //[0, 28]->a short for protocol, 81 is read block opcode
> {code}
> This seems like either connections are getting swapped between readers or
> the header isn't being sent for some reason but the protobuf message is.
> I've ruled out memory stomps on the header data (see HDFS-10241) by sticking 
> the 3-byte header in its own static buffer that all requests use.
> Some notes:
> -The mismatched number will stay the same for the duration of a stress test.
> -The mismatch is distributed fairly evenly throughout the logs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10280) Document new dfsadmin command -evictWriters

2016-04-13 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15239244#comment-15239244
 ] 

Kihwal Lee commented on HDFS-10280:
---

+1 pending jenkins.

> Document new dfsadmin command -evictWriters
> ---
>
> Key: HDFS-10280
> URL: https://issues.apache.org/jira/browse/HDFS-10280
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: supportability
> Attachments: HDFS-10280.001.patch
>
>
> HDFS-9945 added a new dfsadmin command, -evictWriters, which is great.
> However, I noticed that typing {{dfs dfsadmin}} does not show a command line 
> help summary; it is shown only when I type {{dfs dfsadmin -help}}.
> Also, it would be great to document it in the {{HDFS Commands Guide}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10280) Document new dfsadmin command -evictWriters

2016-04-13 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-10280:
--
Status: Patch Available  (was: Open)

> Document new dfsadmin command -evictWriters
> ---
>
> Key: HDFS-10280
> URL: https://issues.apache.org/jira/browse/HDFS-10280
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.8.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: supportability
> Attachments: HDFS-10280.001.patch
>
>
> HDFS-9945 added a new dfsadmin command, -evictWriters, which is great.
> However, I noticed that typing {{dfs dfsadmin}} does not show a command line 
> help summary; it is shown only when I type {{dfs dfsadmin -help}}.
> Also, it would be great to document it in the {{HDFS Commands Guide}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-10282) The VolumeScanner should warn about replica files which are misplaced

2016-04-13 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15239237#comment-15239237
 ] 

Kihwal Lee edited comment on HDFS-10282 at 4/13/16 1:34 PM:


Can you make the equivalent log message in DirectoryScanner more consistent in 
terms of log level and content? 


was (Author: kihwal):
Can you make the equivalent log message in DirectoryScanner more consistent in 
terms of log level and message? 

> The VolumeScanner should warn about replica files which are misplaced
> -
>
> Key: HDFS-10282
> URL: https://issues.apache.org/jira/browse/HDFS-10282
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.6.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-10282.001.patch
>
>
> The VolumeScanner should warn about replica files which are misplaced



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10270) TestJMXGet:testNameNode() fails

2016-04-13 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15239238#comment-15239238
 ] 

Andras Bokor commented on HDFS-10270:
-

In addition, playing with timeouts makes a JUnit test more complicated than a 
JUnit test should be. What is your opinion about these?

> TestJMXGet:testNameNode() fails
> ---
>
> Key: HDFS-10270
> URL: https://issues.apache.org/jira/browse/HDFS-10270
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0, 2.8.0
>Reporter: Andras Bokor
>Assignee: Gergely Novák
>Priority: Minor
> Attachments: HDFS-10270.001.patch, TestJMXGet.log, TestJMXGetFails.log
>
>
> It fails with java.util.concurrent.TimeoutException. The problem here is that 
> we expect the NumOpenConnections metric to be 2 but it is only 1, so the test 
> waits 60 seconds and then fails.
> Please find the Maven output with the stack trace attached 
> ([^TestJMXGetFails.log]).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10282) The VolumeScanner should warn about replica files which are misplaced

2016-04-13 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15239237#comment-15239237
 ] 

Kihwal Lee commented on HDFS-10282:
---

Can you make the equivalent log message in DirectoryScanner more consistent in 
terms of log level and message? 
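
For illustration, a warning at a matching level and with matching content in 
both scanners might look like this (names are hypothetical):

{code}
LOG.warn("Replica file {} is misplaced: expected it under directory {}",
    replicaFile, expectedBlockDir);
{code}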

> The VolumeScanner should warn about replica files which are misplaced
> -
>
> Key: HDFS-10282
> URL: https://issues.apache.org/jira/browse/HDFS-10282
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.6.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HDFS-10282.001.patch
>
>
> The VolumeScanner should warn about replica files which are misplaced



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10216) distcp -diff relative path exception

2016-04-13 Thread Takashi Ohnishi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takashi Ohnishi updated HDFS-10216:
---
Attachment: HDFS-10216.2.patch

Attached the updated patch.
I have added a unit test for the relative path problem in TestDistCpSync.
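
For reference, a common way to avoid the "Relative path in absolute URI" failure 
is to fully qualify user-supplied paths before building the copy listing; a 
sketch of the idea (not necessarily the exact change in the patch):

{code}
// Resolves "d1" against the filesystem URI and working directory, e.g.
// hdfs://host:8020/user/<user>/d1
Path source = new Path("d1").makeQualified(fs.getUri(),
    fs.getWorkingDirectory());
{code}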

> distcp -diff relative path exception
> 
>
> Key: HDFS-10216
> URL: https://issues.apache.org/jira/browse/HDFS-10216
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: Takashi Ohnishi
>Priority: Critical
> Attachments: HDFS-10216.1.patch, HDFS-10216.2.patch
>
>
> Got this exception when running {{distcp -diff}} with relative paths:
> {code}
> $ hadoop distcp -update -diff s1 s2 d1 d2
> 16/03/25 09:45:40 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=true, deleteMissing=false, 
> ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
> copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[d1], 
> targetPath=d2, targetPathExists=true, preserveRawXattrs=false, 
> filtersFile='null'}
> 16/03/25 09:45:40 INFO client.RMProxy: Connecting to ResourceManager at 
> jzhuge-balancer-1.vpc.cloudera.com/172.26.21.70:8032
> 16/03/25 09:45:41 ERROR tools.DistCp: Exception encountered 
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative 
> path in absolute URI: 
> hdfs://jzhuge-balancer-1.vpc.cloudera.com:8020./d1/.snapshot/s2
>   at org.apache.hadoop.fs.Path.initialize(Path.java:206)
>   at org.apache.hadoop.fs.Path.(Path.java:197)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.getPathWithSchemeAndAuthority(SimpleCopyListing.java:193)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.addToFileListing(SimpleCopyListing.java:202)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.doBuildListingWithSnapshotDiff(SimpleCopyListing.java:243)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.doBuildListing(SimpleCopyListing.java:172)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at 
> org.apache.hadoop.tools.DistCp.createInputFileListingWithDiff(DistCp.java:388)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:164)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:123)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:436)
> Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://jzhuge-balancer-1.vpc.cloudera.com:8020./d1/.snapshot/s2
>   at java.net.URI.checkPath(URI.java:1804)
>   at java.net.URI.(URI.java:752)
>   at org.apache.hadoop.fs.Path.initialize(Path.java:203)
>   ... 11 more
> {code}
> But these commands worked:
> * Absolute path: {{hadoop distcp -update -diff s1 s2 /user/systest/d1 
> /user/systest/d2}}
> * No {{-diff}}: {{hadoop distcp -update d1 d2}}
> However, everything was fine when I ran {{hadoop distcp -update -diff s1 s2 
> d1 d2}} again. I am not sure the problem only exists with option {{-diff}}. 
> Trying to reproduce.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10216) distcp -diff relative path exception

2016-04-13 Thread Takashi Ohnishi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takashi Ohnishi updated HDFS-10216:
---
Status: Patch Available  (was: Open)

> distcp -diff relative path exception
> 
>
> Key: HDFS-10216
> URL: https://issues.apache.org/jira/browse/HDFS-10216
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: Takashi Ohnishi
>Priority: Critical
> Attachments: HDFS-10216.1.patch, HDFS-10216.2.patch
>
>
> Got this exception when running {{distcp -diff}} with relative paths:
> {code}
> $ hadoop distcp -update -diff s1 s2 d1 d2
> 16/03/25 09:45:40 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=true, deleteMissing=false, 
> ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
> copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[d1], 
> targetPath=d2, targetPathExists=true, preserveRawXattrs=false, 
> filtersFile='null'}
> 16/03/25 09:45:40 INFO client.RMProxy: Connecting to ResourceManager at 
> jzhuge-balancer-1.vpc.cloudera.com/172.26.21.70:8032
> 16/03/25 09:45:41 ERROR tools.DistCp: Exception encountered 
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative 
> path in absolute URI: 
> hdfs://jzhuge-balancer-1.vpc.cloudera.com:8020./d1/.snapshot/s2
>   at org.apache.hadoop.fs.Path.initialize(Path.java:206)
>   at org.apache.hadoop.fs.Path.(Path.java:197)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.getPathWithSchemeAndAuthority(SimpleCopyListing.java:193)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.addToFileListing(SimpleCopyListing.java:202)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.doBuildListingWithSnapshotDiff(SimpleCopyListing.java:243)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.doBuildListing(SimpleCopyListing.java:172)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at 
> org.apache.hadoop.tools.DistCp.createInputFileListingWithDiff(DistCp.java:388)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:164)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:123)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:436)
> Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://jzhuge-balancer-1.vpc.cloudera.com:8020./d1/.snapshot/s2
>   at java.net.URI.checkPath(URI.java:1804)
>   at java.net.URI.(URI.java:752)
>   at org.apache.hadoop.fs.Path.initialize(Path.java:203)
>   ... 11 more
> {code}
> But these commands worked:
> * Absolute path: {{hadoop distcp -update -diff s1 s2 /user/systest/d1 
> /user/systest/d2}}
> * No {{-diff}}: {{hadoop distcp -update d1 d2}}
> However, everything was fine when I ran {{hadoop distcp -update -diff s1 s2 
> d1 d2}} again. I am not sure the problem only exists with option {{-diff}}. 
> Trying to reproduce.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-10216) distcp -diff relative path exception

2016-04-13 Thread Takashi Ohnishi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takashi Ohnishi updated HDFS-10216:
---
Status: Open  (was: Patch Available)

> distcp -diff relative path exception
> 
>
> Key: HDFS-10216
> URL: https://issues.apache.org/jira/browse/HDFS-10216
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: Takashi Ohnishi
>Priority: Critical
> Attachments: HDFS-10216.1.patch
>
>
> Got this exception when running {{distcp -diff}} with relative paths:
> {code}
> $ hadoop distcp -update -diff s1 s2 d1 d2
> 16/03/25 09:45:40 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=true, deleteMissing=false, 
> ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
> copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[d1], 
> targetPath=d2, targetPathExists=true, preserveRawXattrs=false, 
> filtersFile='null'}
> 16/03/25 09:45:40 INFO client.RMProxy: Connecting to ResourceManager at 
> jzhuge-balancer-1.vpc.cloudera.com/172.26.21.70:8032
> 16/03/25 09:45:41 ERROR tools.DistCp: Exception encountered 
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative 
> path in absolute URI: 
> hdfs://jzhuge-balancer-1.vpc.cloudera.com:8020./d1/.snapshot/s2
>   at org.apache.hadoop.fs.Path.initialize(Path.java:206)
>   at org.apache.hadoop.fs.Path.(Path.java:197)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.getPathWithSchemeAndAuthority(SimpleCopyListing.java:193)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.addToFileListing(SimpleCopyListing.java:202)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.doBuildListingWithSnapshotDiff(SimpleCopyListing.java:243)
>   at 
> org.apache.hadoop.tools.SimpleCopyListing.doBuildListing(SimpleCopyListing.java:172)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>   at 
> org.apache.hadoop.tools.DistCp.createInputFileListingWithDiff(DistCp.java:388)
>   at org.apache.hadoop.tools.DistCp.execute(DistCp.java:164)
>   at org.apache.hadoop.tools.DistCp.run(DistCp.java:123)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.tools.DistCp.main(DistCp.java:436)
> Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
> hdfs://jzhuge-balancer-1.vpc.cloudera.com:8020./d1/.snapshot/s2
>   at java.net.URI.checkPath(URI.java:1804)
>   at java.net.URI.(URI.java:752)
>   at org.apache.hadoop.fs.Path.initialize(Path.java:203)
>   ... 11 more
> {code}
> But these commands worked:
> * Absolute path: {{hadoop distcp -update -diff s1 s2 /user/systest/d1 
> /user/systest/d2}}
> * No {{-diff}}: {{hadoop distcp -update d1 d2}}
> However, everything was fine when I ran {{hadoop distcp -update -diff s1 s2 
> d1 d2}} again. I am not sure the problem only exists with option {{-diff}}. 
> Trying to reproduce.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-10276) Different results for exist call for file.ext/name

2016-04-13 Thread Kevin Cox (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15239233#comment-15239233
 ] 

Kevin Cox commented on HDFS-10276:
--

I have tested on two moderately recent versions of Hadoop, and the following 
commands will reproduce the issue. The chmod is unnecessary in most cases, but I 
have included it just to be sure. The following outputs are on HDFS; the last 
command shows the result when run on a local path.

{code}
% hdfs version
Hadoop 2.7.2
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r 
b165c4fe8a74265c792ce23f546c64604acf0e41
Compiled by jenkins on 2016-01-26T00:08Z
Compiled with protoc 2.5.0
From source with checksum d0fda26633fa762bff87ec759ebe689c
This command was run using 
/usr/local/Cellar/hadoop/2.7.2/libexec/share/hadoop/common/hadoop-common-2.7.2.jar
% hdfs --config hadoop dfs -put <(echo "test") /test
2016-04-13 09:24:58,851 WARN  [main] util.NativeCodeLoader 
(NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for 
your platform... using builtin-java classes where applicable
hdfs --config hadoop dfs -put <(echo "test") /test  2.74s user 0.21s system 
180% cpu 1.638 total
% hdfs --config hadoop dfs -chmod 777 /test 
2016-04-13 09:25:11,737 WARN  [main] util.NativeCodeLoader 
(NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for 
your platform... using builtin-java classes where applicable
% hdfs --config hadoop dfs -cat /test/bar   
2016-04-13 09:25:18,905 WARN  [main] util.NativeCodeLoader 
(NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for 
your platform... using builtin-java classes where applicable
cat: `/test/bar': No such file or directory
% hdfs --config hadoop dfs -cat file:///home/kevincox/test/foo
2016-04-13 09:29:34,150 WARN  [main] util.NativeCodeLoader 
(NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for 
your platform... using builtin-java classes where applicable
cat: `file:///Users/kevincox/test/foo': No such file or directory
{code}

This isn't a huge issue for the command line client, because it returns the same 
exit code, but for the {{FileSystem.exists()}} method in particular this is the 
difference between returning false and throwing an error.
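
A minimal sketch of the API-level divergence (it assumes {{/file}} already 
exists as a regular file):

{code}
FileSystem fs = FileSystem.get(new Configuration());
Path p = new Path("/file/whatever");
try {
  System.out.println(fs.exists(p));  // LocalFileSystem prints false
} catch (AccessControlException e) {
  // DistributedFileSystem can throw instead: access=EXECUTE on /file
}
{code}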

> Different results for exist call for file.ext/name
> --
>
> Key: HDFS-10276
> URL: https://issues.apache.org/jira/browse/HDFS-10276
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Kevin Cox
>Assignee: Yuanbo Liu
>
> Given you have a file {{/file}}, an existence check for the path 
> {{/file/whatever}} will give different responses for different 
> implementations of FileSystem.
> LocalFileSystem will return false while DistributedFileSystem will throw 
> {{org.apache.hadoop.security.AccessControlException: Permission denied: ..., 
> access=EXECUTE, ...}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

