[jira] [Updated] (HDFS-6235) TestFileJournalManager can fail on Windows due to file locking if tests run out of order.

2014-04-10 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-6235:


Attachment: HDFS-6235.1.patch

The easiest thing to do here is simply to make sure that each test in the suite 
uses a unique storage directory.  That way, there is no chance of collision on 
locked files between multiple tests in the suite.  At the end of the test 
suite, all of these file handles will get released automatically during process 
exit.  I'm attaching a patch that changes the storage directory names to match 
the names of the individual tests.
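
A minimal sketch of the idea, for illustration only (the class, helper, and base directory names below are hypothetical, not the ones used in the attached patch):

{code}
import java.io.File;
import java.io.IOException;
import org.junit.Test;

public class UniqueStorageDirSketch {
  // Base directory for test data, following the usual Hadoop test convention.
  private static final File TEST_BASE =
      new File(System.getProperty("test.build.data", "target/test-data"));

  // Deriving the storage directory from the test name means a file left open
  // (and therefore locked on Windows) by one test can never block directory
  // cleanup in another test, regardless of execution order.
  private static File storageDirForTest(String testName) {
    return new File(TEST_BASE, "filejournaltest-" + testName);
  }

  @Test
  public void testInprogressRecovery() throws IOException {
    File storageDir = storageDirForTest("testInprogressRecovery");
    // ... create edit log segments under storageDir and run assertions ...
  }
}
{code}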

> TestFileJournalManager can fail on Windows due to file locking if tests run 
> out of order.
> -
>
> Key: HDFS-6235
> URL: https://issues.apache.org/jira/browse/HDFS-6235
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode, test
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-6235.1.patch
>
>
> {{TestFileJournalManager}} has multiple tests that reuse the same storage 
> directory: /filejournaltest2.  The last test in the suite intentionally 
> leaves a file open to test the behavior of an unclosed edit log.  In some 
> environments, though, tests within a suite execute out of order.  In this 
> case, a lock is still held on /filejournaltest2, and subsequent tests fail 
> while trying to delete the directory.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6235) TestFileJournalManager can fail on Windows due to file locking if tests run out of order.

2014-04-10 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-6235:
---

 Summary: TestFileJournalManager can fail on Windows due to file 
locking if tests run out of order.
 Key: HDFS-6235
 URL: https://issues.apache.org/jira/browse/HDFS-6235
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode, test
Affects Versions: 2.4.0, 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth


{{TestFileJournalManager}} has multiple tests that reuse the same storage 
directory: /filejournaltest2.  The last test in the suite intentionally leaves 
a file open to test the behavior of an unclosed edit log.  In some environments, 
though, tests within a suite execute out of order.  In this case, a lock is 
still held on /filejournaltest2, and subsequent tests fail while trying to delete 
the directory.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6232) OfflineEditsViewer throws a NPE on edits containing ACL modifications

2014-04-10 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966291#comment-13966291
 ] 

Akira AJISAKA commented on HDFS-6232:
-

I confirmed the patch fixed the issue locally.
{code}
[root@trunk hadoop-3.0.0-SNAPSHOT]# bin/hdfs oev -i edits_inprogress_0005251 -o fsedits.out
[root@trunk hadoop-3.0.0-SNAPSHOT]# cat fsedits.out
<?xml version="1.0" encoding="UTF-8"?>
<EDITS>
  <EDITS_VERSION>-56</EDITS_VERSION>
  <RECORD>
    <OPCODE>OP_START_LOG_SEGMENT</OPCODE>
    <DATA>
      <TXID>5251</TXID>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_SET_ACL</OPCODE>
    <DATA>
      <TXID>5252</TXID>
      <SRC>/user/root</SRC>
      <ENTRY>
        <SCOPE>ACCESS</SCOPE>
        <TYPE>USER</TYPE>
        <PERM>rwx</PERM>
      </ENTRY>
    </DATA>
  </RECORD>
</EDITS>
{code}

> OfflineEditsViewer throws a NPE on edits containing ACL modifications
> -
>
> Key: HDFS-6232
> URL: https://issues.apache.org/jira/browse/HDFS-6232
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Stephen Chu
>Assignee: Akira AJISAKA
> Attachments: HDFS-6232.patch
>
>
> The OfflineEditsViewer using the XML parser will throw an NPE when processing an 
> edit with a SET_ACL op.
> {code}
> [root@hdfs-nfs current]# hdfs oev -i 
> edits_001-007 -o fsedits.out
> 14/04/10 14:14:18 ERROR offlineEditsViewer.OfflineEditsBinaryLoader: Got 
> RuntimeException at position 505
> Encountered exception. Exiting: null
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.util.XMLUtils.mangleXmlString(XMLUtils.java:122)
>   at org.apache.hadoop.hdfs.util.XMLUtils.addSaxString(XMLUtils.java:193)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.appendAclEntriesToXml(FSEditLogOp.java:4085)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.access$3300(FSEditLogOp.java:132)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$SetAclOp.toXml(FSEditLogOp.java:3528)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.outputToXml(FSEditLogOp.java:3928)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.XmlEditsVisitor.visitOp(XmlEditsVisitor.java:116)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.OfflineEditsBinaryLoader.loadEdits(OfflineEditsBinaryLoader.java:80)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.OfflineEditsViewer.go(OfflineEditsViewer.java:142)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.OfflineEditsViewer.run(OfflineEditsViewer.java:228)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.OfflineEditsViewer.main(OfflineEditsViewer.java:237)
> [root@hdfs-nfs current]# 
> {code}
> This is reproducible by setting an ACL on a file and then running the OEV on 
> the edits_inprogress file.
> The stats and binary parsers run OK.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6234) TestDatanodeConfig#testMemlockLimit fails on Windows due to invalid file path.

2014-04-10 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966289#comment-13966289
 ] 

Jing Zhao commented on HDFS-6234:
-

Patch looks good to me. +1. Also thanks for cleaning the test code, [~cnauroth]!

> TestDatanodeConfig#testMemlockLimit fails on Windows due to invalid file path.
> --
>
> Key: HDFS-6234
> URL: https://issues.apache.org/jira/browse/HDFS-6234
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, test
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Trivial
> Attachments: HDFS-6234.1.patch
>
>
> {{TestDatanodeConfig#testMemlockLimit}} fails to initialize a {{DataNode}} 
> due to an invalid URI configured in {{dfs.datanode.data.dir}}.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6232) OfflineEditsViewer throws a NPE on edits containing ACL modifications

2014-04-10 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6232:


Status: Patch Available  (was: Open)

> OfflineEditsViewer throws a NPE on edits containing ACL modifications
> -
>
> Key: HDFS-6232
> URL: https://issues.apache.org/jira/browse/HDFS-6232
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Stephen Chu
>Assignee: Akira AJISAKA
> Attachments: HDFS-6232.patch
>
>
> The OfflineEditsViewer using the XML parser will throw an NPE when processing an 
> edit with a SET_ACL op.
> {code}
> [root@hdfs-nfs current]# hdfs oev -i 
> edits_001-007 -o fsedits.out
> 14/04/10 14:14:18 ERROR offlineEditsViewer.OfflineEditsBinaryLoader: Got 
> RuntimeException at position 505
> Encountered exception. Exiting: null
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.util.XMLUtils.mangleXmlString(XMLUtils.java:122)
>   at org.apache.hadoop.hdfs.util.XMLUtils.addSaxString(XMLUtils.java:193)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.appendAclEntriesToXml(FSEditLogOp.java:4085)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.access$3300(FSEditLogOp.java:132)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$SetAclOp.toXml(FSEditLogOp.java:3528)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.outputToXml(FSEditLogOp.java:3928)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.XmlEditsVisitor.visitOp(XmlEditsVisitor.java:116)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.OfflineEditsBinaryLoader.loadEdits(OfflineEditsBinaryLoader.java:80)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.OfflineEditsViewer.go(OfflineEditsViewer.java:142)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.OfflineEditsViewer.run(OfflineEditsViewer.java:228)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.OfflineEditsViewer.main(OfflineEditsViewer.java:237)
> [root@hdfs-nfs current]# 
> {code}
> This is reproducible by setting an ACL on a file and then running the OEV on 
> the edits_inprogress file.
> The stats and binary parsers run OK.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6232) OfflineEditsViewer throws a NPE on edits containing ACL modifications

2014-04-10 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966282#comment-13966282
 ] 

Akira AJISAKA commented on HDFS-6232:
-

I reproduced the error. It occurs because {{XMLUtils.addSaxString}} can't 
handle a null ACL entry name. The name is an optional value, so it can be null.
I attached a patch to add a null check.
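
For illustration, a simplified sketch of the kind of guard involved; the element names and surrounding structure here are illustrative, not the exact code in the attached patch:

{code}
import java.util.List;
import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.hdfs.util.XMLUtils;
import org.xml.sax.ContentHandler;
import org.xml.sax.SAXException;
import org.xml.sax.helpers.AttributesImpl;

class AclXmlSketch {
  // AclEntry#getName() may return null because the name component is optional,
  // so the NAME element is only written when a name is actually present.
  static void appendAclEntriesToXml(ContentHandler contentHandler,
      List<AclEntry> aclEntries) throws SAXException {
    for (AclEntry e : aclEntries) {
      contentHandler.startElement("", "", "ENTRY", new AttributesImpl());
      XMLUtils.addSaxString(contentHandler, "SCOPE", e.getScope().name());
      XMLUtils.addSaxString(contentHandler, "TYPE", e.getType().name());
      if (e.getName() != null) {
        XMLUtils.addSaxString(contentHandler, "NAME", e.getName());
      }
      XMLUtils.addSaxString(contentHandler, "PERM", e.getPermission().SYMBOL);
      contentHandler.endElement("", "", "ENTRY");
    }
  }
}
{code}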

> OfflineEditsViewer throws a NPE on edits containing ACL modifications
> -
>
> Key: HDFS-6232
> URL: https://issues.apache.org/jira/browse/HDFS-6232
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Stephen Chu
>Assignee: Akira AJISAKA
> Attachments: HDFS-6232.patch
>
>
> The OfflineEditsViewer using the XML parser will throw an NPE when processing an 
> edit with a SET_ACL op.
> {code}
> [root@hdfs-nfs current]# hdfs oev -i 
> edits_001-007 -o fsedits.out
> 14/04/10 14:14:18 ERROR offlineEditsViewer.OfflineEditsBinaryLoader: Got 
> RuntimeException at position 505
> Encountered exception. Exiting: null
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.util.XMLUtils.mangleXmlString(XMLUtils.java:122)
>   at org.apache.hadoop.hdfs.util.XMLUtils.addSaxString(XMLUtils.java:193)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.appendAclEntriesToXml(FSEditLogOp.java:4085)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.access$3300(FSEditLogOp.java:132)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$SetAclOp.toXml(FSEditLogOp.java:3528)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.outputToXml(FSEditLogOp.java:3928)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.XmlEditsVisitor.visitOp(XmlEditsVisitor.java:116)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.OfflineEditsBinaryLoader.loadEdits(OfflineEditsBinaryLoader.java:80)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.OfflineEditsViewer.go(OfflineEditsViewer.java:142)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.OfflineEditsViewer.run(OfflineEditsViewer.java:228)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.OfflineEditsViewer.main(OfflineEditsViewer.java:237)
> [root@hdfs-nfs current]# 
> {code}
> This is reproducible by setting an ACL on a file and then running the OEV on 
> the edits_inprogress file.
> The stats and binary parsers run OK.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6232) OfflineEditsViewer throws a NPE on edits containing ACL modifications

2014-04-10 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6232:


Attachment: HDFS-6232.patch

> OfflineEditsViewer throws a NPE on edits containing ACL modifications
> -
>
> Key: HDFS-6232
> URL: https://issues.apache.org/jira/browse/HDFS-6232
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Stephen Chu
>Assignee: Akira AJISAKA
> Attachments: HDFS-6232.patch
>
>
> The OfflineEditsViewer using the XML parser will throw an NPE when processing an 
> edit with a SET_ACL op.
> {code}
> [root@hdfs-nfs current]# hdfs oev -i 
> edits_001-007 -o fsedits.out
> 14/04/10 14:14:18 ERROR offlineEditsViewer.OfflineEditsBinaryLoader: Got 
> RuntimeException at position 505
> Encountered exception. Exiting: null
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.util.XMLUtils.mangleXmlString(XMLUtils.java:122)
>   at org.apache.hadoop.hdfs.util.XMLUtils.addSaxString(XMLUtils.java:193)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.appendAclEntriesToXml(FSEditLogOp.java:4085)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.access$3300(FSEditLogOp.java:132)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$SetAclOp.toXml(FSEditLogOp.java:3528)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.outputToXml(FSEditLogOp.java:3928)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.XmlEditsVisitor.visitOp(XmlEditsVisitor.java:116)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.OfflineEditsBinaryLoader.loadEdits(OfflineEditsBinaryLoader.java:80)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.OfflineEditsViewer.go(OfflineEditsViewer.java:142)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.OfflineEditsViewer.run(OfflineEditsViewer.java:228)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.OfflineEditsViewer.main(OfflineEditsViewer.java:237)
> [root@hdfs-nfs current]# 
> {code}
> This is reproducible by setting an ACL on a file and then running the OEV on 
> the edits_inprogress file.
> The stats and binary parsers run OK.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6234) TestDatanodeConfig#testMemlockLimit fails on Windows due to invalid file path.

2014-04-10 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-6234:


Attachment: HDFS-6234.1.patch

I'm attaching a patch that sets a valid URI in {{dfs.datanode.data.dir}}.  
While I was in here, I also made some minor changes to make sure every created 
{{DataNode}} gets shut down.  I ran the test successfully on Mac and Windows 
with this patch.
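
As a rough sketch of both ideas (not the patch itself; the helper below is hypothetical and simply uses one of the public {{DataNode}} factory methods):

{code}
import java.io.File;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.server.datanode.DataNode;

public class MemlockTestSketch {
  // Two ideas combined: (1) configure dfs.datanode.data.dir as a URI built
  // from a File, which stays valid on Windows, and (2) always shut the
  // DataNode down, even if an assertion in the middle fails.
  static void runWithDataNode(Configuration conf, File dataDir) throws Exception {
    conf.set(DFSConfigKeys.DFS_DATANODE_DATA_DIR_KEY,
        dataDir.toURI().toString());
    DataNode dn = null;
    try {
      dn = DataNode.createDataNode(new String[0], conf);
      // ... assertions against the running DataNode go here ...
    } finally {
      if (dn != null) {
        dn.shutdown();
      }
    }
  }
}
{code}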

> TestDatanodeConfig#testMemlockLimit fails on Windows due to invalid file path.
> --
>
> Key: HDFS-6234
> URL: https://issues.apache.org/jira/browse/HDFS-6234
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, test
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Trivial
> Attachments: HDFS-6234.1.patch
>
>
> {{TestDatanodeConfig#testMemlockLimit}} fails to initialize a {{DataNode}} 
> due to an invalid URI configured in {{dfs.datanode.data.dir}}.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6234) TestDatanodeConfig#testMemlockLimit fails on Windows due to invalid file path.

2014-04-10 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-6234:


Status: Patch Available  (was: Open)

> TestDatanodeConfig#testMemlockLimit fails on Windows due to invalid file path.
> --
>
> Key: HDFS-6234
> URL: https://issues.apache.org/jira/browse/HDFS-6234
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, test
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Trivial
> Attachments: HDFS-6234.1.patch
>
>
> {{TestDatanodeConfig#testMemlockLimit}} fails to initialize a {{DataNode}} 
> due to an invalid URI configured in {{dfs.datanode.data.dir}}.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6234) TestDatanodeConfig#testMemlockLimit fails on Windows due to invalid file path.

2014-04-10 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966274#comment-13966274
 ] 

Chris Nauroth commented on HDFS-6234:
-

Here is the default for {{dfs.datanode.data.dir}}:

{code}
file://${hadoop.tmp.dir}/dfs/data
{code}

{{hadoop.tmp.dir}} will be a Windows file system path with backslashes.  The 
problem is that prepending "file://" to a Windows file system path does not 
necessarily yield a valid URI, because of the backslashes.  The test 
fails with this exception:

{code}
java.lang.IllegalArgumentException: Failed to parse conf property 
dfs.datanode.data.dir: 
file://D:\w\hbk\hadoop-hdfs-project\hadoop-hdfs\target/test/dfs/data
at java.io.WinNTFileSystem.canonicalize0(Native Method)
at java.io.Win32FileSystem.canonicalize(Win32FileSystem.java:414)
at java.io.File.getCanonicalPath(File.java:589)
at java.io.File.getCanonicalFile(File.java:614)
at org.apache.hadoop.hdfs.server.common.Util.fileAsURI(Util.java:73)
at org.apache.hadoop.hdfs.server.common.Util.stringAsURI(Util.java:58)
at 
org.apache.hadoop.hdfs.server.datanode.StorageLocation.parse(StorageLocation.java:94)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.getStorageLocations(DataNode.java:1784)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1768)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1812)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1802)
at 
org.apache.hadoop.hdfs.TestDatanodeConfig.testMemlockLimit(TestDatanodeConfig.java:133)
{code}
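
A small standalone illustration of the difference (the path below is copied from the exception above; the class itself is made up):

{code}
import java.io.File;
import java.net.URI;

public class WindowsUriSketch {
  public static void main(String[] args) {
    String winPath = "D:\\w\\hbk\\hadoop-hdfs-project\\hadoop-hdfs\\target\\test\\dfs\\data";
    // Naive concatenation: backslashes are not legal URI characters, so parsing
    // this string as a URI fails, which is what the test trips over.
    String concatenated = "file://" + winPath;
    // Building the URI from a File escapes the path correctly; on Windows this
    // prints something like file:/D:/w/hbk/hadoop-hdfs-project/hadoop-hdfs/target/test/dfs/data
    URI fromFile = new File(winPath).toURI();
    System.out.println(concatenated);
    System.out.println(fromFile);
  }
}
{code}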


> TestDatanodeConfig#testMemlockLimit fails on Windows due to invalid file path.
> --
>
> Key: HDFS-6234
> URL: https://issues.apache.org/jira/browse/HDFS-6234
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, test
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Trivial
>
> {{TestDatanodeConfig#testMemlockLimit}} fails to initialize a {{DataNode}} 
> due to an invalid URI configured in {{dfs.datanode.data.dir}}.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6233) Datanode upgrade in Windows from 1.x to 2.4 fails with symlink error.

2014-04-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966275#comment-13966275
 ] 

Hadoop QA commented on HDFS-6233:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12639738/HDFS-6233.01.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.fs.TestHardLink

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6649//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6649//console

This message is automatically generated.

> Datanode upgrade in Windows from 1.x to 2.4 fails with symlink error.
> -
>
> Key: HDFS-6233
> URL: https://issues.apache.org/jira/browse/HDFS-6233
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, tools
>Affects Versions: 2.4.0
> Environment: Windows
>Reporter: Huan Huang
>Assignee: Arpit Agarwal
> Attachments: HDFS-6233.01.patch
>
>
> I tried to upgrade Hadoop from 1.x to 2.4, but the DataNode failed to start due 
> to a hard link exception.
> Repro steps:
> *Installed Hadoop 1.x
> *hadoop dfsadmin -safemode enter
> *hadoop dfsadmin -saveNamespace
> *hadoop namenode -finalize
> *Stop all services
> *Uninstall Hadoop 1.x 
> *Install Hadoop 2.4 
> *Start namenode with -upgrade option
> *Try to start the datanode, begin to see the Hardlink exception in the datanode 
> service log.
> {code}
> 2014-04-10 22:47:11,655 INFO org.apache.hadoop.ipc.Server: IPC Server 
> listener on 8010: starting
> 2014-04-10 22:47:11,656 INFO org.apache.hadoop.ipc.Server: IPC Server 
> Responder: starting
> 2014-04-10 22:47:11,999 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Data-node version: -55 and name-node layout version: -56
> 2014-04-10 22:47:12,008 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Lock on d:\hadoop\data\hdfs\dn\in_use.lock acquired by nodename 7268@myhost
> 2014-04-10 22:47:12,011 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Recovering storage directory D:\hadoop\data\hdfs\dn from previous upgrade
> 2014-04-10 22:47:12,017 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Upgrading storage directory d:\hadoop\data\hdfs\dn.
>old LV = -44; old CTime = 0.
>new LV = -55; new CTime = 1397168400373
> 2014-04-10 22:47:12,021 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Formatting block pool BP-39008719-10.0.0.1-1397168400092 directory 
> d:\hadoop\data\hdfs\dn\current\BP-39008719-10.0.0.1-1397168400092\current
> 2014-04-10 22:47:12,254 FATAL 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for 
> block pool Block pool  (Datanode Uuid unassigned) service to 
> myhost/10.0.0.1:8020
> java.io.IOException: Usage: hardlink create [LINKNAME] [FILENAME] |Incorrect 
> command line arguments.
>   at org.apache.hadoop.fs.HardLink.createHardLinkMult(HardLink.java:479)
>   at org.apache.hadoop.fs.HardLink.createHardLinkMult(HardLink.java:416)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.linkBlocks(DataStorage.java:816)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.linkAllBlocks(DataStorage.java:759)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.doUpgrade(DataStorage.java:566)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:486)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:225)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:249)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.jav

[jira] [Created] (HDFS-6234) TestDatanodeConfig#testMemlockLimit fails on Windows due to invalid file path.

2014-04-10 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-6234:
---

 Summary: TestDatanodeConfig#testMemlockLimit fails on Windows due 
to invalid file path.
 Key: HDFS-6234
 URL: https://issues.apache.org/jira/browse/HDFS-6234
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, test
Affects Versions: 2.4.0, 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Trivial


{{TestDatanodeConfig#testMemlockLimit}} fails to initialize a {{DataNode}} due 
to an invalid URI configured in {{dfs.datanode.data.dir}}.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6233) Datanode upgrade in Windows from 1.x to 2.4 fails with symlink error.

2014-04-10 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966272#comment-13966272
 ] 

Tsz Wo Nicholas Sze commented on HDFS-6233:
---

Please also change HardLink.createHardLinkMult so that, if the command fails, 
hardLinkCommand is included in the exception message.
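
Something along these lines, for example (the parameter names are only illustrative, not the actual fields in {{HardLink}}):

{code}
import java.io.IOException;
import java.util.Arrays;

public class HardLinkErrorSketch {
  // Suggested shape of the error handling: when the hard-link command exits
  // non-zero, put the command itself into the exception message so that
  // failures like the one in the description are diagnosable.
  static void checkExitValue(String[] hardLinkCommand, int exitValue,
      String errorOutput) throws IOException {
    if (exitValue != 0) {
      throw new IOException("Hard-link command "
          + Arrays.toString(hardLinkCommand) + " failed with exit value "
          + exitValue + ": " + errorOutput);
    }
  }
}
{code}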

> Datanode upgrade in Windows from 1.x to 2.4 fails with symlink error.
> -
>
> Key: HDFS-6233
> URL: https://issues.apache.org/jira/browse/HDFS-6233
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, tools
>Affects Versions: 2.4.0
> Environment: Windows
>Reporter: Huan Huang
>Assignee: Arpit Agarwal
> Attachments: HDFS-6233.01.patch
>
>
> I tried to upgrade Hadoop from 1.x to 2.4, but the DataNode failed to start due 
> to a hard link exception.
> Repro steps:
> *Installed Hadoop 1.x
> *hadoop dfsadmin -safemode enter
> *hadoop dfsadmin -saveNamespace
> *hadoop namenode -finalize
> *Stop all services
> *Uninstall Hadoop 1.x 
> *Install Hadoop 2.4 
> *Start namenode with -upgrade option
> *Try to start the datanode, begin to see the Hardlink exception in the datanode 
> service log.
> {code}
> 2014-04-10 22:47:11,655 INFO org.apache.hadoop.ipc.Server: IPC Server 
> listener on 8010: starting
> 2014-04-10 22:47:11,656 INFO org.apache.hadoop.ipc.Server: IPC Server 
> Responder: starting
> 2014-04-10 22:47:11,999 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Data-node version: -55 and name-node layout version: -56
> 2014-04-10 22:47:12,008 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Lock on d:\hadoop\data\hdfs\dn\in_use.lock acquired by nodename 7268@myhost
> 2014-04-10 22:47:12,011 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Recovering storage directory D:\hadoop\data\hdfs\dn from previous upgrade
> 2014-04-10 22:47:12,017 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Upgrading storage directory d:\hadoop\data\hdfs\dn.
>old LV = -44; old CTime = 0.
>new LV = -55; new CTime = 1397168400373
> 2014-04-10 22:47:12,021 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Formatting block pool BP-39008719-10.0.0.1-1397168400092 directory 
> d:\hadoop\data\hdfs\dn\current\BP-39008719-10.0.0.1-1397168400092\current
> 2014-04-10 22:47:12,254 FATAL 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for 
> block pool Block pool  (Datanode Uuid unassigned) service to 
> myhost/10.0.0.1:8020
> java.io.IOException: Usage: hardlink create [LINKNAME] [FILENAME] |Incorrect 
> command line arguments.
>   at org.apache.hadoop.fs.HardLink.createHardLinkMult(HardLink.java:479)
>   at org.apache.hadoop.fs.HardLink.createHardLinkMult(HardLink.java:416)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.linkBlocks(DataStorage.java:816)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.linkAllBlocks(DataStorage.java:759)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.doUpgrade(DataStorage.java:566)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:486)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:225)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:249)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:929)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:900)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:274)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:815)
>   at java.lang.Thread.run(Thread.java:722)
> 2014-04-10 22:47:12,258 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Ending block pool service for: Block pool  (Datanode Uuid 
> unassigned) service to myhost/10.0.0.1:8020
> 2014-04-10 22:47:12,359 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Block pool ID needed, but service not yet registered with NN
> java.lang.Exception: trace
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:143)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.remove(BlockPoolManager.java:91)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownBlockPool(DataNode.java:859)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.shutdownActor(BPOfferService.java:350)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.cleanUp(BPServiceActor.java:619)
>   at 
> org.apache.hadoop

[jira] [Commented] (HDFS-6233) Datanode upgrade in Windows from 1.x to 2.4 fails with symlink error.

2014-04-10 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966263#comment-13966263
 ] 

Chris Nauroth commented on HDFS-6233:
-

Good point.  :-)  Thanks again.

> Datanode upgrade in Windows from 1.x to 2.4 fails with symlink error.
> -
>
> Key: HDFS-6233
> URL: https://issues.apache.org/jira/browse/HDFS-6233
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, tools
>Affects Versions: 2.4.0
> Environment: Windows
>Reporter: Huan Huang
>Assignee: Arpit Agarwal
> Attachments: HDFS-6233.01.patch
>
>
> I tried to upgrade Hadoop from 1.x to 2.4, but the DataNode failed to start due 
> to a hard link exception.
> Repro steps:
> *Installed Hadoop 1.x
> *hadoop dfsadmin -safemode enter
> *hadoop dfsadmin -saveNamespace
> *hadoop namenode -finalize
> *Stop all services
> *Uninstall Hadoop 1.x 
> *Install Hadoop 2.4 
> *Start namenode with -upgrade option
> *Try to start the datanode, begin to see the Hardlink exception in the datanode 
> service log.
> {code}
> 2014-04-10 22:47:11,655 INFO org.apache.hadoop.ipc.Server: IPC Server 
> listener on 8010: starting
> 2014-04-10 22:47:11,656 INFO org.apache.hadoop.ipc.Server: IPC Server 
> Responder: starting
> 2014-04-10 22:47:11,999 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Data-node version: -55 and name-node layout version: -56
> 2014-04-10 22:47:12,008 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Lock on d:\hadoop\data\hdfs\dn\in_use.lock acquired by nodename 7268@myhost
> 2014-04-10 22:47:12,011 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Recovering storage directory D:\hadoop\data\hdfs\dn from previous upgrade
> 2014-04-10 22:47:12,017 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Upgrading storage directory d:\hadoop\data\hdfs\dn.
>old LV = -44; old CTime = 0.
>new LV = -55; new CTime = 1397168400373
> 2014-04-10 22:47:12,021 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Formatting block pool BP-39008719-10.0.0.1-1397168400092 directory 
> d:\hadoop\data\hdfs\dn\current\BP-39008719-10.0.0.1-1397168400092\current
> 2014-04-10 22:47:12,254 FATAL 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for 
> block pool Block pool  (Datanode Uuid unassigned) service to 
> myhost/10.0.0.1:8020
> java.io.IOException: Usage: hardlink create [LINKNAME] [FILENAME] |Incorrect 
> command line arguments.
>   at org.apache.hadoop.fs.HardLink.createHardLinkMult(HardLink.java:479)
>   at org.apache.hadoop.fs.HardLink.createHardLinkMult(HardLink.java:416)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.linkBlocks(DataStorage.java:816)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.linkAllBlocks(DataStorage.java:759)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.doUpgrade(DataStorage.java:566)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:486)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:225)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:249)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:929)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:900)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:274)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:815)
>   at java.lang.Thread.run(Thread.java:722)
> 2014-04-10 22:47:12,258 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Ending block pool service for: Block pool  (Datanode Uuid 
> unassigned) service to myhost/10.0.0.1:8020
> 2014-04-10 22:47:12,359 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Block pool ID needed, but service not yet registered with NN
> java.lang.Exception: trace
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:143)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.remove(BlockPoolManager.java:91)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownBlockPool(DataNode.java:859)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.shutdownActor(BPOfferService.java:350)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.cleanUp(BPServiceActor.java:619)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:837)
>   at java.lang.Thread.run(Thread.jav

[jira] [Commented] (HDFS-6233) Datanode upgrade in Windows from 1.x to 2.4 fails with symlink error.

2014-04-10 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966259#comment-13966259
 ] 

Arpit Agarwal commented on HDFS-6233:
-

Our comments crossed, thanks for the quick review Chris! :-)

I'd also like to add a unit test, will look into it tomorrow.

> Datanode upgrade in Windows from 1.x to 2.4 fails with symlink error.
> -
>
> Key: HDFS-6233
> URL: https://issues.apache.org/jira/browse/HDFS-6233
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, tools
>Affects Versions: 2.4.0
> Environment: Windows
>Reporter: Huan Huang
>Assignee: Arpit Agarwal
> Attachments: HDFS-6233.01.patch
>
>
> I tried to upgrade Hadoop from 1.x to 2.4, but the DataNode failed to start due 
> to a hard link exception.
> Repro steps:
> *Installed Hadoop 1.x
> *hadoop dfsadmin -safemode enter
> *hadoop dfsadmin -saveNamespace
> *hadoop namenode -finalize
> *Stop all services
> *Uninstall Hadoop 1.x 
> *Install Hadoop 2.4 
> *Start namenode with -upgrade option
> *Try to start the datanode, begin to see the Hardlink exception in the datanode 
> service log.
> {code}
> 2014-04-10 22:47:11,655 INFO org.apache.hadoop.ipc.Server: IPC Server 
> listener on 8010: starting
> 2014-04-10 22:47:11,656 INFO org.apache.hadoop.ipc.Server: IPC Server 
> Responder: starting
> 2014-04-10 22:47:11,999 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Data-node version: -55 and name-node layout version: -56
> 2014-04-10 22:47:12,008 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Lock on d:\hadoop\data\hdfs\dn\in_use.lock acquired by nodename 7268@myhost
> 2014-04-10 22:47:12,011 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Recovering storage directory D:\hadoop\data\hdfs\dn from previous upgrade
> 2014-04-10 22:47:12,017 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Upgrading storage directory d:\hadoop\data\hdfs\dn.
>old LV = -44; old CTime = 0.
>new LV = -55; new CTime = 1397168400373
> 2014-04-10 22:47:12,021 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Formatting block pool BP-39008719-10.0.0.1-1397168400092 directory 
> d:\hadoop\data\hdfs\dn\current\BP-39008719-10.0.0.1-1397168400092\current
> 2014-04-10 22:47:12,254 FATAL 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for 
> block pool Block pool  (Datanode Uuid unassigned) service to 
> myhost/10.0.0.1:8020
> java.io.IOException: Usage: hardlink create [LINKNAME] [FILENAME] |Incorrect 
> command line arguments.
>   at org.apache.hadoop.fs.HardLink.createHardLinkMult(HardLink.java:479)
>   at org.apache.hadoop.fs.HardLink.createHardLinkMult(HardLink.java:416)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.linkBlocks(DataStorage.java:816)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.linkAllBlocks(DataStorage.java:759)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.doUpgrade(DataStorage.java:566)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:486)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:225)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:249)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:929)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:900)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:274)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:815)
>   at java.lang.Thread.run(Thread.java:722)
> 2014-04-10 22:47:12,258 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Ending block pool service for: Block pool  (Datanode Uuid 
> unassigned) service to myhost/10.0.0.1:8020
> 2014-04-10 22:47:12,359 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Block pool ID needed, but service not yet registered with NN
> java.lang.Exception: trace
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:143)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.remove(BlockPoolManager.java:91)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownBlockPool(DataNode.java:859)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.shutdownActor(BPOfferService.java:350)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.cleanUp(BPServiceActor.java:619)
>   at 
> org.apache.hadoop.hdfs.server.dat

[jira] [Commented] (HDFS-6233) Datanode upgrade in Windows from 1.x to 2.4 fails with symlink error.

2014-04-10 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966258#comment-13966258
 ] 

Arpit Agarwal commented on HDFS-6233:
-

Initial patch, I will probably add a unit test before it's ready for review.

> Datanode upgrade in Windows from 1.x to 2.4 fails with symlink error.
> -
>
> Key: HDFS-6233
> URL: https://issues.apache.org/jira/browse/HDFS-6233
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, tools
>Affects Versions: 2.4.0
> Environment: Windows
>Reporter: Huan Huang
>Assignee: Arpit Agarwal
> Attachments: HDFS-6233.01.patch
>
>
> I tried to upgrade Hadoop from 1.x to 2.4, but the DataNode failed to start due 
> to a hard link exception.
> Repro steps:
> *Installed Hadoop 1.x
> *hadoop dfsadmin -safemode enter
> *hadoop dfsadmin -saveNamespace
> *hadoop namenode -finalize
> *Stop all services
> *Uninstall Hadoop 1.x 
> *Install Hadoop 2.4 
> *Start namenode with -upgrade option
> *Try to start the datanode, begin to see the Hardlink exception in the datanode 
> service log.
> {code}
> 2014-04-10 22:47:11,655 INFO org.apache.hadoop.ipc.Server: IPC Server 
> listener on 8010: starting
> 2014-04-10 22:47:11,656 INFO org.apache.hadoop.ipc.Server: IPC Server 
> Responder: starting
> 2014-04-10 22:47:11,999 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Data-node version: -55 and name-node layout version: -56
> 2014-04-10 22:47:12,008 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Lock on d:\hadoop\data\hdfs\dn\in_use.lock acquired by nodename 7268@myhost
> 2014-04-10 22:47:12,011 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Recovering storage directory D:\hadoop\data\hdfs\dn from previous upgrade
> 2014-04-10 22:47:12,017 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Upgrading storage directory d:\hadoop\data\hdfs\dn.
>old LV = -44; old CTime = 0.
>new LV = -55; new CTime = 1397168400373
> 2014-04-10 22:47:12,021 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Formatting block pool BP-39008719-10.0.0.1-1397168400092 directory 
> d:\hadoop\data\hdfs\dn\current\BP-39008719-10.0.0.1-1397168400092\current
> 2014-04-10 22:47:12,254 FATAL 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for 
> block pool Block pool  (Datanode Uuid unassigned) service to 
> myhost/10.0.0.1:8020
> java.io.IOException: Usage: hardlink create [LINKNAME] [FILENAME] |Incorrect 
> command line arguments.
>   at org.apache.hadoop.fs.HardLink.createHardLinkMult(HardLink.java:479)
>   at org.apache.hadoop.fs.HardLink.createHardLinkMult(HardLink.java:416)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.linkBlocks(DataStorage.java:816)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.linkAllBlocks(DataStorage.java:759)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.doUpgrade(DataStorage.java:566)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:486)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:225)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:249)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:929)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:900)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:274)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:815)
>   at java.lang.Thread.run(Thread.java:722)
> 2014-04-10 22:47:12,258 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Ending block pool service for: Block pool  (Datanode Uuid 
> unassigned) service to myhost/10.0.0.1:8020
> 2014-04-10 22:47:12,359 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Block pool ID needed, but service not yet registered with NN
> java.lang.Exception: trace
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:143)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.remove(BlockPoolManager.java:91)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownBlockPool(DataNode.java:859)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.shutdownActor(BPOfferService.java:350)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.cleanUp(BPServiceActor.java:619)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:837

[jira] [Updated] (HDFS-6233) Datanode upgrade in Windows from 1.x to 2.4 fails with symlink error.

2014-04-10 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-6233:


Hadoop Flags: Reviewed

+1 for the patch, pending Jenkins run.  Thanks a lot for tracking down this 
tricky bug!

> Datanode upgrade in Windows from 1.x to 2.4 fails with symlink error.
> -
>
> Key: HDFS-6233
> URL: https://issues.apache.org/jira/browse/HDFS-6233
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, tools
>Affects Versions: 2.4.0
> Environment: Windows
>Reporter: Huan Huang
>Assignee: Arpit Agarwal
> Attachments: HDFS-6233.01.patch
>
>
> I tried to upgrade Hadoop from 1.x to 2.4, but the DataNode failed to start due 
> to a hard link exception.
> Repro steps:
> *Installed Hadoop 1.x
> *hadoop dfsadmin -safemode enter
> *hadoop dfsadmin -saveNamespace
> *hadoop namenode -finalize
> *Stop all services
> *Uninstall Hadoop 1.x 
> *Install Hadoop 2.4 
> *Start namenode with -upgrade option
> *Try to start the datanode, begin to see the Hardlink exception in the datanode 
> service log.
> {code}
> 2014-04-10 22:47:11,655 INFO org.apache.hadoop.ipc.Server: IPC Server 
> listener on 8010: starting
> 2014-04-10 22:47:11,656 INFO org.apache.hadoop.ipc.Server: IPC Server 
> Responder: starting
> 2014-04-10 22:47:11,999 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Data-node version: -55 and name-node layout version: -56
> 2014-04-10 22:47:12,008 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Lock on d:\hadoop\data\hdfs\dn\in_use.lock acquired by nodename 7268@myhost
> 2014-04-10 22:47:12,011 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Recovering storage directory D:\hadoop\data\hdfs\dn from previous upgrade
> 2014-04-10 22:47:12,017 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Upgrading storage directory d:\hadoop\data\hdfs\dn.
>old LV = -44; old CTime = 0.
>new LV = -55; new CTime = 1397168400373
> 2014-04-10 22:47:12,021 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Formatting block pool BP-39008719-10.0.0.1-1397168400092 directory 
> d:\hadoop\data\hdfs\dn\current\BP-39008719-10.0.0.1-1397168400092\current
> 2014-04-10 22:47:12,254 FATAL 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for 
> block pool Block pool  (Datanode Uuid unassigned) service to 
> myhost/10.0.0.1:8020
> java.io.IOException: Usage: hardlink create [LINKNAME] [FILENAME] |Incorrect 
> command line arguments.
>   at org.apache.hadoop.fs.HardLink.createHardLinkMult(HardLink.java:479)
>   at org.apache.hadoop.fs.HardLink.createHardLinkMult(HardLink.java:416)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.linkBlocks(DataStorage.java:816)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.linkAllBlocks(DataStorage.java:759)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.doUpgrade(DataStorage.java:566)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:486)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:225)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:249)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:929)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:900)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:274)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:815)
>   at java.lang.Thread.run(Thread.java:722)
> 2014-04-10 22:47:12,258 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Ending block pool service for: Block pool  (Datanode Uuid 
> unassigned) service to myhost/10.0.0.1:8020
> 2014-04-10 22:47:12,359 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Block pool ID needed, but service not yet registered with NN
> java.lang.Exception: trace
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:143)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.remove(BlockPoolManager.java:91)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownBlockPool(DataNode.java:859)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.shutdownActor(BPOfferService.java:350)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.cleanUp(BPServiceActor.java:619)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:837)
>   at jav

[jira] [Updated] (HDFS-6233) Datanode upgrade in Windows from 1.x to 2.4 fails with symlink error.

2014-04-10 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-6233:


Status: Patch Available  (was: Open)

> Datanode upgrade in Windows from 1.x to 2.4 fails with symlink error.
> -
>
> Key: HDFS-6233
> URL: https://issues.apache.org/jira/browse/HDFS-6233
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, tools
>Affects Versions: 2.4.0
> Environment: Windows
>Reporter: Huan Huang
>Assignee: Arpit Agarwal
> Attachments: HDFS-6233.01.patch
>
>
> I tried to upgrade Hadoop from 1.x to 2.4, but the DataNode failed to start due 
> to a hard link exception.
> Repro steps:
> *Installed Hadoop 1.x
> *hadoop dfsadmin -safemode enter
> *hadoop dfsadmin -saveNamespace
> *hadoop namenode -finalize
> *Stop all services
> *Uninstall Hadoop 1.x 
> *Install Hadoop 2.4 
> *Start namenode with -upgrade option
> *Try to start the datanode, begin to see the Hardlink exception in the datanode 
> service log.
> {code}
> 2014-04-10 22:47:11,655 INFO org.apache.hadoop.ipc.Server: IPC Server 
> listener on 8010: starting
> 2014-04-10 22:47:11,656 INFO org.apache.hadoop.ipc.Server: IPC Server 
> Responder: starting
> 2014-04-10 22:47:11,999 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Data-node version: -55 and name-node layout version: -56
> 2014-04-10 22:47:12,008 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Lock on d:\hadoop\data\hdfs\dn\in_use.lock acquired by nodename 7268@myhost
> 2014-04-10 22:47:12,011 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Recovering storage directory D:\hadoop\data\hdfs\dn from previous upgrade
> 2014-04-10 22:47:12,017 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Upgrading storage directory d:\hadoop\data\hdfs\dn.
>old LV = -44; old CTime = 0.
>new LV = -55; new CTime = 1397168400373
> 2014-04-10 22:47:12,021 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Formatting block pool BP-39008719-10.0.0.1-1397168400092 directory 
> d:\hadoop\data\hdfs\dn\current\BP-39008719-10.0.0.1-1397168400092\current
> 2014-04-10 22:47:12,254 FATAL 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for 
> block pool Block pool  (Datanode Uuid unassigned) service to 
> myhost/10.0.0.1:8020
> java.io.IOException: Usage: hardlink create [LINKNAME] [FILENAME] |Incorrect 
> command line arguments.
>   at org.apache.hadoop.fs.HardLink.createHardLinkMult(HardLink.java:479)
>   at org.apache.hadoop.fs.HardLink.createHardLinkMult(HardLink.java:416)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.linkBlocks(DataStorage.java:816)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.linkAllBlocks(DataStorage.java:759)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.doUpgrade(DataStorage.java:566)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:486)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:225)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:249)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:929)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:900)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:274)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:815)
>   at java.lang.Thread.run(Thread.java:722)
> 2014-04-10 22:47:12,258 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Ending block pool service for: Block pool  (Datanode Uuid 
> unassigned) service to myhost/10.0.0.1:8020
> 2014-04-10 22:47:12,359 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Block pool ID needed, but service not yet registered with NN
> java.lang.Exception: trace
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:143)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.remove(BlockPoolManager.java:91)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownBlockPool(DataNode.java:859)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.shutdownActor(BPOfferService.java:350)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.cleanUp(BPServiceActor.java:619)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:837)
>   at java.lang.Thread.run(Thread.java:722)
> 2014-04-10 22:47:12,359 INFO org.apache

[jira] [Updated] (HDFS-6233) Datanode upgrade in Windows from 1.x to 2.4 fails with symlink error.

2014-04-10 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-6233:


Attachment: HDFS-6233.01.patch

The "1>NUL" is passed as a parameter to the winutils command instead of 
being interpreted by the shell. The simplest fix is to just remove it.
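
To illustrate why (this sketch uses {{ProcessBuilder}} and made-up arguments, not the actual Hadoop shell-execution code):

{code}
import java.io.IOException;

public class RedirectSketch {
  // Arguments in the command array are handed to the child process verbatim;
  // no shell parses them.  "1>NUL" is therefore not a redirection here, it is
  // just an extra argument that winutils rejects with its usage message, which
  // is exactly the IOException text in the datanode log above.
  static Process broken() throws IOException {
    String[] cmd = {"winutils.exe", "hardlink", "create", "link", "target", "1>NUL"};
    return new ProcessBuilder(cmd).start();
  }

  // The fix: drop the pseudo-redirection; the caller can read or ignore the
  // process output in Java instead.
  static Process fixed() throws IOException {
    String[] cmd = {"winutils.exe", "hardlink", "create", "link", "target"};
    return new ProcessBuilder(cmd).start();
  }
}
{code}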

> Datanode upgrade in Windows from 1.x to 2.4 fails with symlink error.
> -
>
> Key: HDFS-6233
> URL: https://issues.apache.org/jira/browse/HDFS-6233
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, tools
>Affects Versions: 2.4.0
> Environment: Windows
>Reporter: Huan Huang
>Assignee: Arpit Agarwal
> Attachments: HDFS-6233.01.patch
>
>
> I tried to upgrade Hadoop from 1.x to 2.4, but the DataNode failed to start due 
> to a hard link exception.
> Repro steps:
> *Installed Hadoop 1.x
> *hadoop dfsadmin -safemode enter
> *hadoop dfsadmin -saveNamespace
> *hadoop namenode -finalize
> *Stop all services
> *Uninstall Hadoop 1.x 
> *Install Hadoop 2.4 
> *Start namenode with -upgrade option
> *Try to start the datanode, begin to see the Hardlink exception in the datanode 
> service log.
> {code}
> 2014-04-10 22:47:11,655 INFO org.apache.hadoop.ipc.Server: IPC Server 
> listener on 8010: starting
> 2014-04-10 22:47:11,656 INFO org.apache.hadoop.ipc.Server: IPC Server 
> Responder: starting
> 2014-04-10 22:47:11,999 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Data-node version: -55 and name-node layout version: -56
> 2014-04-10 22:47:12,008 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Lock on d:\hadoop\data\hdfs\dn\in_use.lock acquired by nodename 7268@myhost
> 2014-04-10 22:47:12,011 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Recovering storage directory D:\hadoop\data\hdfs\dn from previous upgrade
> 2014-04-10 22:47:12,017 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Upgrading storage directory d:\hadoop\data\hdfs\dn.
>old LV = -44; old CTime = 0.
>new LV = -55; new CTime = 1397168400373
> 2014-04-10 22:47:12,021 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Formatting block pool BP-39008719-10.0.0.1-1397168400092 directory 
> d:\hadoop\data\hdfs\dn\current\BP-39008719-10.0.0.1-1397168400092\current
> 2014-04-10 22:47:12,254 FATAL 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for 
> block pool Block pool  (Datanode Uuid unassigned) service to 
> myhost/10.0.0.1:8020
> java.io.IOException: Usage: hardlink create [LINKNAME] [FILENAME] |Incorrect 
> command line arguments.
>   at org.apache.hadoop.fs.HardLink.createHardLinkMult(HardLink.java:479)
>   at org.apache.hadoop.fs.HardLink.createHardLinkMult(HardLink.java:416)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.linkBlocks(DataStorage.java:816)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.linkAllBlocks(DataStorage.java:759)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.doUpgrade(DataStorage.java:566)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:486)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:225)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:249)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:929)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:900)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:274)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:815)
>   at java.lang.Thread.run(Thread.java:722)
> 2014-04-10 22:47:12,258 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Ending block pool service for: Block pool  (Datanode Uuid 
> unassigned) service to myhost/10.0.0.1:8020
> 2014-04-10 22:47:12,359 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Block pool ID needed, but service not yet registered with NN
> java.lang.Exception: trace
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:143)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.remove(BlockPoolManager.java:91)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownBlockPool(DataNode.java:859)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.shutdownActor(BPOfferService.java:350)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.cleanUp(BPServiceActor.java:619)
>   at 
> org.apache.hadoop.hdfs.server.

[jira] [Assigned] (HDFS-6233) Datanode upgrade in Windows from 1.x to 2.4 fails with symlink error.

2014-04-10 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reassigned HDFS-6233:
---

Assignee: Arpit Agarwal

> Datanode upgrade in Windows from 1.x to 2.4 fails with symlink error.
> -
>
> Key: HDFS-6233
> URL: https://issues.apache.org/jira/browse/HDFS-6233
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, tools
>Affects Versions: 2.4.0
> Environment: Windows
>Reporter: Huan Huang
>Assignee: Arpit Agarwal
>
> I tried to upgrade Hadoop from 1.x to 2.4, but the DataNode failed to start due 
> to a hard link exception.
> Repro steps:
> *Installed Hadoop 1.x
> *hadoop dfsadmin -safemode enter
> *hadoop dfsadmin -saveNamespace
> *hadoop namenode -finalize
> *Stop all services
> *Uninstall Hadoop 1.x 
> *Install Hadoop 2.4 
> *Start namenode with -upgrade option
> *Try to start datanode, begin to see Hardlink exception in datanode service 
> log.
> {code}
> 2014-04-10 22:47:11,655 INFO org.apache.hadoop.ipc.Server: IPC Server 
> listener on 8010: starting
> 2014-04-10 22:47:11,656 INFO org.apache.hadoop.ipc.Server: IPC Server 
> Responder: starting
> 2014-04-10 22:47:11,999 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Data-node version: -55 and name-node layout version: -56
> 2014-04-10 22:47:12,008 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Lock on d:\hadoop\data\hdfs\dn\in_use.lock acquired by nodename 7268@myhost
> 2014-04-10 22:47:12,011 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Recovering storage directory D:\hadoop\data\hdfs\dn from previous upgrade
> 2014-04-10 22:47:12,017 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Upgrading storage directory d:\hadoop\data\hdfs\dn.
>old LV = -44; old CTime = 0.
>new LV = -55; new CTime = 1397168400373
> 2014-04-10 22:47:12,021 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Formatting block pool BP-39008719-10.0.0.1-1397168400092 directory 
> d:\hadoop\data\hdfs\dn\current\BP-39008719-10.0.0.1-1397168400092\current
> 2014-04-10 22:47:12,254 FATAL 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for 
> block pool Block pool  (Datanode Uuid unassigned) service to 
> myhost/10.0.0.1:8020
> java.io.IOException: Usage: hardlink create [LINKNAME] [FILENAME] |Incorrect 
> command line arguments.
>   at org.apache.hadoop.fs.HardLink.createHardLinkMult(HardLink.java:479)
>   at org.apache.hadoop.fs.HardLink.createHardLinkMult(HardLink.java:416)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.linkBlocks(DataStorage.java:816)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.linkAllBlocks(DataStorage.java:759)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.doUpgrade(DataStorage.java:566)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:486)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:225)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:249)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:929)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:900)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:274)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:815)
>   at java.lang.Thread.run(Thread.java:722)
> 2014-04-10 22:47:12,258 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Ending block pool service for: Block pool  (Datanode Uuid 
> unassigned) service to myhost/10.0.0.1:8020
> 2014-04-10 22:47:12,359 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Block pool ID needed, but service not yet registered with NN
> java.lang.Exception: trace
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:143)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.remove(BlockPoolManager.java:91)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownBlockPool(DataNode.java:859)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.shutdownActor(BPOfferService.java:350)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.cleanUp(BPServiceActor.java:619)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:837)
>   at java.lang.Thread.run(Thread.java:722)
> 2014-04-10 22:47:12,359 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Removed B

[jira] [Updated] (HDFS-6233) Datanode upgrade in Windows from 1.x to 2.4 fails with symlink error.

2014-04-10 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-6233:


Component/s: datanode

> Datanode upgrade in Windows from 1.x to 2.4 fails with symlink error.
> -
>
> Key: HDFS-6233
> URL: https://issues.apache.org/jira/browse/HDFS-6233
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, tools
>Affects Versions: 2.4.0
> Environment: Windows
>Reporter: Huan Huang
>Assignee: Arpit Agarwal
>
> I tried to upgrade Hadoop from 1.x to 2.4, but the DataNode failed to start due 
> to a hard link exception.
> Repro steps:
> *Installed Hadoop 1.x
> *hadoop dfsadmin -safemode enter
> *hadoop dfsadmin -saveNamespace
> *hadoop namenode -finalize
> *Stop all services
> *Uninstall Hadoop 1.x 
> *Install Hadoop 2.4 
> *Start namenode with -upgrade option
> *Try to start datanode, begin to see Hardlink exception in datanode service 
> log.
> {code}
> 2014-04-10 22:47:11,655 INFO org.apache.hadoop.ipc.Server: IPC Server 
> listener on 8010: starting
> 2014-04-10 22:47:11,656 INFO org.apache.hadoop.ipc.Server: IPC Server 
> Responder: starting
> 2014-04-10 22:47:11,999 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Data-node version: -55 and name-node layout version: -56
> 2014-04-10 22:47:12,008 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Lock on d:\hadoop\data\hdfs\dn\in_use.lock acquired by nodename 7268@myhost
> 2014-04-10 22:47:12,011 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Recovering storage directory D:\hadoop\data\hdfs\dn from previous upgrade
> 2014-04-10 22:47:12,017 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Upgrading storage directory d:\hadoop\data\hdfs\dn.
>old LV = -44; old CTime = 0.
>new LV = -55; new CTime = 1397168400373
> 2014-04-10 22:47:12,021 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Formatting block pool BP-39008719-10.0.0.1-1397168400092 directory 
> d:\hadoop\data\hdfs\dn\current\BP-39008719-10.0.0.1-1397168400092\current
> 2014-04-10 22:47:12,254 FATAL 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for 
> block pool Block pool  (Datanode Uuid unassigned) service to 
> myhost/10.0.0.1:8020
> java.io.IOException: Usage: hardlink create [LINKNAME] [FILENAME] |Incorrect 
> command line arguments.
>   at org.apache.hadoop.fs.HardLink.createHardLinkMult(HardLink.java:479)
>   at org.apache.hadoop.fs.HardLink.createHardLinkMult(HardLink.java:416)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.linkBlocks(DataStorage.java:816)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.linkAllBlocks(DataStorage.java:759)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.doUpgrade(DataStorage.java:566)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:486)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:225)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:249)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:929)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:900)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:274)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:815)
>   at java.lang.Thread.run(Thread.java:722)
> 2014-04-10 22:47:12,258 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Ending block pool service for: Block pool  (Datanode Uuid 
> unassigned) service to myhost/10.0.0.1:8020
> 2014-04-10 22:47:12,359 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Block pool ID needed, but service not yet registered with NN
> java.lang.Exception: trace
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:143)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.remove(BlockPoolManager.java:91)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownBlockPool(DataNode.java:859)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.shutdownActor(BPOfferService.java:350)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.cleanUp(BPServiceActor.java:619)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:837)
>   at java.lang.Thread.run(Thread.java:722)
> 2014-04-10 22:47:12,359 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Removed Block poo

[jira] [Updated] (HDFS-6233) Datanode upgrade in Windows from 1.x to 2.4 fails with symlink error.

2014-04-10 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-6233:


Component/s: (was: datanode)
 tools

> Datanode upgrade in Windows from 1.x to 2.4 fails with symlink error.
> -
>
> Key: HDFS-6233
> URL: https://issues.apache.org/jira/browse/HDFS-6233
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, tools
>Affects Versions: 2.4.0
> Environment: Windows
>Reporter: Huan Huang
>Assignee: Arpit Agarwal
>
> I tried to upgrade Hadoop from 1.x to 2.4, but the DataNode failed to start due 
> to a hard link exception.
> Repro steps:
> *Installed Hadoop 1.x
> *hadoop dfsadmin -safemode enter
> *hadoop dfsadmin -saveNamespace
> *hadoop namenode -finalize
> *Stop all services
> *Uninstall Hadoop 1.x 
> *Install Hadoop 2.4 
> *Start namenode with -upgrade option
> *Try to start datanode, begin to see Hardlink exception in datanode service 
> log.
> {code}
> 2014-04-10 22:47:11,655 INFO org.apache.hadoop.ipc.Server: IPC Server 
> listener on 8010: starting
> 2014-04-10 22:47:11,656 INFO org.apache.hadoop.ipc.Server: IPC Server 
> Responder: starting
> 2014-04-10 22:47:11,999 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Data-node version: -55 and name-node layout version: -56
> 2014-04-10 22:47:12,008 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Lock on d:\hadoop\data\hdfs\dn\in_use.lock acquired by nodename 7268@myhost
> 2014-04-10 22:47:12,011 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Recovering storage directory D:\hadoop\data\hdfs\dn from previous upgrade
> 2014-04-10 22:47:12,017 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Upgrading storage directory d:\hadoop\data\hdfs\dn.
>old LV = -44; old CTime = 0.
>new LV = -55; new CTime = 1397168400373
> 2014-04-10 22:47:12,021 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Formatting block pool BP-39008719-10.0.0.1-1397168400092 directory 
> d:\hadoop\data\hdfs\dn\current\BP-39008719-10.0.0.1-1397168400092\current
> 2014-04-10 22:47:12,254 FATAL 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for 
> block pool Block pool  (Datanode Uuid unassigned) service to 
> myhost/10.0.0.1:8020
> java.io.IOException: Usage: hardlink create [LINKNAME] [FILENAME] |Incorrect 
> command line arguments.
>   at org.apache.hadoop.fs.HardLink.createHardLinkMult(HardLink.java:479)
>   at org.apache.hadoop.fs.HardLink.createHardLinkMult(HardLink.java:416)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.linkBlocks(DataStorage.java:816)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.linkAllBlocks(DataStorage.java:759)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.doUpgrade(DataStorage.java:566)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:486)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:225)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:249)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:929)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:900)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:274)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:815)
>   at java.lang.Thread.run(Thread.java:722)
> 2014-04-10 22:47:12,258 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Ending block pool service for: Block pool  (Datanode Uuid 
> unassigned) service to myhost/10.0.0.1:8020
> 2014-04-10 22:47:12,359 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Block pool ID needed, but service not yet registered with NN
> java.lang.Exception: trace
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:143)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.remove(BlockPoolManager.java:91)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownBlockPool(DataNode.java:859)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPOfferService.shutdownActor(BPOfferService.java:350)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.cleanUp(BPServiceActor.java:619)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:837)
>   at java.lang.Thread.run(Thread.java:722)
> 2014-04-10 22:47:12,359 INFO org.apache.hadoop.hdfs.server.datan

[jira] [Created] (HDFS-6233) Datanode upgrade in Windows from 1.x to 2.4 fails with symlink error.

2014-04-10 Thread Huan Huang (JIRA)
Huan Huang created HDFS-6233:


 Summary: Datanode upgrade in Windows from 1.x to 2.4 fails with 
symlink error.
 Key: HDFS-6233
 URL: https://issues.apache.org/jira/browse/HDFS-6233
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.4.0
 Environment: Windows
Reporter: Huan Huang


I tried to upgrade Hadoop from 1.x to 2.4, but the DataNode failed to start due to 
a hard link exception.
Repro steps:
*Installed Hadoop 1.x
*hadoop dfsadmin -safemode enter
*hadoop dfsadmin -saveNamespace
*hadoop namenode -finalize
*Stop all services
*Uninstall Hadoop 1.x 
*Install Hadoop 2.4 
*Start namenode with -upgrade option
*Try to start datanode, begin to see Hardlink exception in datanode service log.

{code}

2014-04-10 22:47:11,655 INFO org.apache.hadoop.ipc.Server: IPC Server listener 
on 8010: starting
2014-04-10 22:47:11,656 INFO org.apache.hadoop.ipc.Server: IPC Server 
Responder: starting
2014-04-10 22:47:11,999 INFO org.apache.hadoop.hdfs.server.common.Storage: 
Data-node version: -55 and name-node layout version: -56
2014-04-10 22:47:12,008 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock 
on d:\hadoop\data\hdfs\dn\in_use.lock acquired by nodename 7268@myhost
2014-04-10 22:47:12,011 INFO org.apache.hadoop.hdfs.server.common.Storage: 
Recovering storage directory D:\hadoop\data\hdfs\dn from previous upgrade
2014-04-10 22:47:12,017 INFO org.apache.hadoop.hdfs.server.common.Storage: 
Upgrading storage directory d:\hadoop\data\hdfs\dn.
   old LV = -44; old CTime = 0.
   new LV = -55; new CTime = 1397168400373
2014-04-10 22:47:12,021 INFO org.apache.hadoop.hdfs.server.common.Storage: 
Formatting block pool BP-39008719-10.0.0.1-1397168400092 directory 
d:\hadoop\data\hdfs\dn\current\BP-39008719-10.0.0.1-1397168400092\current
2014-04-10 22:47:12,254 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: 
Initialization failed for block pool Block pool  (Datanode Uuid 
unassigned) service to myhost/10.0.0.1:8020
java.io.IOException: Usage: hardlink create [LINKNAME] [FILENAME] |Incorrect 
command line arguments.
at org.apache.hadoop.fs.HardLink.createHardLinkMult(HardLink.java:479)
at org.apache.hadoop.fs.HardLink.createHardLinkMult(HardLink.java:416)
at 
org.apache.hadoop.hdfs.server.datanode.DataStorage.linkBlocks(DataStorage.java:816)
at 
org.apache.hadoop.hdfs.server.datanode.DataStorage.linkAllBlocks(DataStorage.java:759)
at 
org.apache.hadoop.hdfs.server.datanode.DataStorage.doUpgrade(DataStorage.java:566)
at 
org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:486)
at 
org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:225)
at 
org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:249)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:929)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:900)
at 
org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:274)
at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:815)
at java.lang.Thread.run(Thread.java:722)
2014-04-10 22:47:12,258 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
Ending block pool service for: Block pool  (Datanode Uuid 
unassigned) service to myhost/10.0.0.1:8020
2014-04-10 22:47:12,359 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
Block pool ID needed, but service not yet registered with NN
java.lang.Exception: trace
at 
org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:143)
at 
org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.remove(BlockPoolManager.java:91)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownBlockPool(DataNode.java:859)
at 
org.apache.hadoop.hdfs.server.datanode.BPOfferService.shutdownActor(BPOfferService.java:350)
at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.cleanUp(BPServiceActor.java:619)
at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:837)
at java.lang.Thread.run(Thread.java:722)
2014-04-10 22:47:12,359 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Removed Block pool  (Datanode Uuid unassigned)
2014-04-10 22:47:12,360 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
Block pool ID needed, but service not yet registered with NN
java.lang.Exception: trace
at 
org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:143)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownBlockPool(DataN

[jira] [Commented] (HDFS-2831) Description of dfs.namenode.name.dir should be changed

2014-04-10 Thread J.Andreina (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966243#comment-13966243
 ] 

J.Andreina commented on HDFS-2831:
--

Thanks everyone for explaining. I got the difference. I too agree with your 
point. 

> Description of dfs.namenode.name.dir should be changed 
> ---
>
> Key: HDFS-2831
> URL: https://issues.apache.org/jira/browse/HDFS-2831
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 0.21.0, 0.23.0
> Environment: NA
>Reporter: J.Andreina
>Priority: Minor
> Fix For: 0.24.0
>
>
> {noformat}
> 
>   dfs.namenode.name.dir
>   file://${hadoop.tmp.dir}/dfs/name
>   Determines where on the local filesystem the DFS name node
>   should store the name table(fsimage).  If this is a comma-delimited list
>   of directories then the name table is replicated in all of the
>   directories, for redundancy. 
> 
> {noformat}
> In the above property the description part is given as "Determines where on 
> the local filesystem the DFS name node should store the name table(fsimage).  
> " but it stores both name table(If nametable means only fsimage) and edits 
> file. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6231) DFSClient hangs infinitely if using hedged reads and all eligible datanodes die.

2014-04-10 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966228#comment-13966228
 ] 

stack commented on HDFS-6231:
-

+1

[~cnauroth] Thanks for the fix-up.

> DFSClient hangs infinitely if using hedged reads and all eligible datanodes 
> die.
> 
>
> Key: HDFS-6231
> URL: https://issues.apache.org/jira/browse/HDFS-6231
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 3.0.0, 2.4.1
>
> Attachments: HDFS-6231.1.patch
>
>
> When using hedged reads, and all eligible datanodes for the read get flagged 
> as dead or ignored, then the client is supposed to refetch block locations 
> from the NameNode to retry the read.  Instead, we've seen that the client can 
> hang indefinitely.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6231) DFSClient hangs infinitely if using hedged reads and all eligible datanodes die.

2014-04-10 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-6231:


   Resolution: Fixed
Fix Version/s: 2.4.1
   3.0.0
   Status: Resolved  (was: Patch Available)

Nicholas, thank you for the code review.  I have committed this to trunk, 
branch-2 and branch-2.4.

> DFSClient hangs infinitely if using hedged reads and all eligible datanodes 
> die.
> 
>
> Key: HDFS-6231
> URL: https://issues.apache.org/jira/browse/HDFS-6231
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 3.0.0, 2.4.1
>
> Attachments: HDFS-6231.1.patch
>
>
> When using hedged reads, and all eligible datanodes for the read get flagged 
> as dead or ignored, then the client is supposed to refetch block locations 
> from the NameNode to retry the read.  Instead, we've seen that the client can 
> hang indefinitely.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6231) DFSClient hangs infinitely if using hedged reads and all eligible datanodes die.

2014-04-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966199#comment-13966199
 ] 

Hudson commented on HDFS-6231:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #5496 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5496/])
HDFS-6231. DFSClient hangs infinitely if using hedged reads and all eligible 
datanodes die. Contributed by Chris Nauroth. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586551)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestPread.java


> DFSClient hangs infinitely if using hedged reads and all eligible datanodes 
> die.
> 
>
> Key: HDFS-6231
> URL: https://issues.apache.org/jira/browse/HDFS-6231
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-6231.1.patch
>
>
> When using hedged reads, and all eligible datanodes for the read get flagged 
> as dead or ignored, then the client is supposed to refetch block locations 
> from the NameNode to retry the read.  Instead, we've seen that the client can 
> hang indefinitely.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6224) Add a unit test to TestAuditLogger for file permissions passed to logAuditEvent

2014-04-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966118#comment-13966118
 ] 

Hudson commented on HDFS-6224:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #5494 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5494/])
Undo accidental FSNamesystem change introduced in HDFS-6224 commit. (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586515)
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


> Add a unit test to TestAuditLogger for file permissions passed to 
> logAuditEvent
> ---
>
> Key: HDFS-6224
> URL: https://issues.apache.org/jira/browse/HDFS-6224
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Charles Lamb
>Assignee: Charles Lamb
>Priority: Minor
> Fix For: 2.5.0
>
> Attachments: HDFS-6224.001.patch, HDFS-6224.002.patch, 
> HDFS-6224.003.patch, HDFS-6224.004.patch
>
>
> Add a unit test which verifies behavior of HADOOP-9155. Specifically, ensure 
> that during a setPermission operation the permission returned is the one that 
> was just set, not the permission before the operation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HDFS-6232) OfflineEditsViewer throws a NPE on edits containing ACL modifications

2014-04-10 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA reassigned HDFS-6232:
---

Assignee: Akira AJISAKA

> OfflineEditsViewer throws a NPE on edits containing ACL modifications
> -
>
> Key: HDFS-6232
> URL: https://issues.apache.org/jira/browse/HDFS-6232
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Stephen Chu
>Assignee: Akira AJISAKA
>
> The OfflineEditsViewer using the XML parser will throw an NPE when parsing 
> edits that contain a SET_ACL op.
> {code}
> [root@hdfs-nfs current]# hdfs oev -i 
> edits_001-007 -o fsedits.out
> 14/04/10 14:14:18 ERROR offlineEditsViewer.OfflineEditsBinaryLoader: Got 
> RuntimeException at position 505
> Encountered exception. Exiting: null
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.util.XMLUtils.mangleXmlString(XMLUtils.java:122)
>   at org.apache.hadoop.hdfs.util.XMLUtils.addSaxString(XMLUtils.java:193)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.appendAclEntriesToXml(FSEditLogOp.java:4085)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.access$3300(FSEditLogOp.java:132)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$SetAclOp.toXml(FSEditLogOp.java:3528)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.outputToXml(FSEditLogOp.java:3928)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.XmlEditsVisitor.visitOp(XmlEditsVisitor.java:116)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.OfflineEditsBinaryLoader.loadEdits(OfflineEditsBinaryLoader.java:80)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.OfflineEditsViewer.go(OfflineEditsViewer.java:142)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.OfflineEditsViewer.run(OfflineEditsViewer.java:228)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>   at 
> org.apache.hadoop.hdfs.tools.offlineEditsViewer.OfflineEditsViewer.main(OfflineEditsViewer.java:237)
> [root@hdfs-nfs current]# 
> {code}
> This is reproducible by setting an ACL on a file and then running the OEV on 
> the in-progress edits file.
> The stats and binary parsers run OK.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6229) Race condition in failover can cause RetryCache fail to work

2014-04-10 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966109#comment-13966109
 ] 

Jing Zhao commented on HDFS-6229:
-

The failed unit test should be unrelated.

> Race condition in failover can cause RetryCache fail to work
> 
>
> Key: HDFS-6229
> URL: https://issues.apache.org/jira/browse/HDFS-6229
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 2.1.0-beta
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-6229.000.patch, retrycache-race.patch
>
>
> Currently when NN failover happens, the old SBN first sets its state to 
> active, then starts the active services (including tailing all the remaining 
> editlog and building a complete retry cache based on the editlog). If a retry 
> request, which has already succeeded in the old ANN (but the client fails to 
> receive the response), comes in between, this retry may still get served by 
> the new ANN but miss the retry cache.
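
To make the window concrete, here is an illustrative Java sketch (not NameNode code; the class and field names are invented): if the active flag is published before the retry cache has been rebuilt from the edit log, a retried call that lands in between is re-executed instead of being answered from the cache.

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class FailoverRaceSketch {
  private volatile boolean active = false;
  private final Map<String, String> retryCache = new ConcurrentHashMap<>();

  void becomeActive(Map<String, String> entriesFromEditLog) {
    active = true;                         // step 1: state flips to active
    // <-- a retried client call landing here sees active == true but an
    //     empty retryCache, so it re-runs the operation
    retryCache.putAll(entriesFromEditLog); // step 2: cache rebuilt from edits
  }

  String handle(String callId, String op) {
    if (!active) {
      throw new IllegalStateException("standby");
    }
    // Served from the cache only if the entry was already rebuilt.
    return retryCache.computeIfAbsent(callId, id -> "re-executed " + op);
  }
}
{code}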



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6229) Race condition in failover can cause RetryCache fail to work

2014-04-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966106#comment-13966106
 ] 

Hadoop QA commented on HDFS-6229:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12639639/HDFS-6229.000.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6647//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6647//console

This message is automatically generated.

> Race condition in failover can cause RetryCache fail to work
> 
>
> Key: HDFS-6229
> URL: https://issues.apache.org/jira/browse/HDFS-6229
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 2.1.0-beta
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-6229.000.patch, retrycache-race.patch
>
>
> Currently when NN failover happens, the old SBN first sets its state to 
> active, then starts the active services (including tailing all the remaining 
> editlog and building a complete retry cache based on the editlog). If a retry 
> request, which has already succeeded in the old ANN (but the client fails to 
> receive the response), comes in between, this retry may still get served by 
> the new ANN but miss the retry cache.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6103) FSImage file system image version check throw a (slightly) wrong parameter.

2014-04-10 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-6103:


  Resolution: Duplicate
Assignee: (was: Akira AJISAKA)
Target Version/s:   (was: 2.4.0)
  Status: Resolved  (was: Patch Available)

This issue was fixed by HDFS-6215. Closing.

> FSImage file system image version check throw a (slightly) wrong parameter.
> ---
>
> Key: HDFS-6103
> URL: https://issues.apache.org/jira/browse/HDFS-6103
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.0.0, 2.4.0
>Reporter: jun aoki
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-6103.patch
>
>
> Trivial error message issue:
> When upgrading hdfs, say from 2.0.5 to 2.2.0, users will need to start 
> namenode with "upgrade" option.
> e.g. 
> {code}
> sudo service namenode upgrade
> {code}
> That said, the actual error printed when starting without the option says 
> "-upgrade" (with a hyphen).
> {code}
> 2014-03-13 23:38:15,488 FATAL 
> org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
> java.io.IOException:
> File system image contains an old layout version -40.
> An upgrade to version -47 is required.
> Please restart NameNode with -upgrade option.
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:221)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:787)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:568)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:443)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:491)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:684)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:669)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1254)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1320)
> 2014-03-13 23:38:15,492 INFO org.apache.hadoop.util.ExitUtil: Exiting with 
> status 1
> 2014-03-13 23:38:15,493 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: 
> SHUTDOWN_MSG:
> /
> SHUTDOWN_MSG: Shutting down NameNode at nn1/192.168.2.202
> /
> ~
> {code}
> I'm referring to 2.0.5 above, 
> https://github.com/apache/hadoop-common/blob/branch-2.0.5/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java#L225
> I haven't tried trunk, but it seems to return "UPGRADE" (all upper case), 
> which again is another slightly wrong error description.
> https://github.com/apache/hadoop-common/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java#L232



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6231) DFSClient hangs infinitely if using hedged reads and all eligible datanodes die.

2014-04-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966102#comment-13966102
 ] 

Hadoop QA commented on HDFS-6231:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12639652/HDFS-6231.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6648//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6648//console

This message is automatically generated.

> DFSClient hangs infinitely if using hedged reads and all eligible datanodes 
> die.
> 
>
> Key: HDFS-6231
> URL: https://issues.apache.org/jira/browse/HDFS-6231
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-6231.1.patch
>
>
> When using hedged reads, and all eligible datanodes for the read get flagged 
> as dead or ignored, then the client is supposed to refetch block locations 
> from the NameNode to retry the read.  Instead, we've seen that the client can 
> hang indefinitely.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6217) Webhdfs PUT operations may not work via a http proxy

2014-04-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966078#comment-13966078
 ] 

Hadoop QA commented on HDFS-6217:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12639620/HDFS-6217.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6646//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6646//console

This message is automatically generated.

> Webhdfs PUT operations may not work via a http proxy
> 
>
> Key: HDFS-6217
> URL: https://issues.apache.org/jira/browse/HDFS-6217
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: webhdfs
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-6217.patch
>
>
> Most of webhdfs's PUT operations have no message body.  The HTTP/1.1 spec is 
> fuzzy about how PUT requests with no body should be handled.  If the request 
> does not specify chunking or Content-Length, the server _may_ consider the 
> request to have no body.  However, popular proxies such as Apache Traffic 
> Server will reject PUT requests with no body unless Content-Length: 0 is 
> specified.
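
As a hedged illustration of the workaround the description implies (plain Java, not the webhdfs client code; the URL and operation are placeholders), a body-less PUT can declare an explicit zero-length body so that strict proxies such as Apache Traffic Server accept it:

{code}
import java.net.HttpURLConnection;
import java.net.URL;

public class EmptyBodyPut {
  public static void main(String[] args) throws Exception {
    URL url = new URL("http://namenode:50070/webhdfs/v1/tmp/dir?op=MKDIRS");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("PUT");
    conn.setDoOutput(true);
    // Sends "Content-Length: 0" instead of omitting both Content-Length and
    // Transfer-Encoding, which some proxies reject for PUT requests.
    conn.setFixedLengthStreamingMode(0);
    conn.getOutputStream().close();
    System.out.println("HTTP " + conn.getResponseCode());
    conn.disconnect();
  }
}
{code}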



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6224) Add a unit test to TestAuditLogger for file permissions passed to logAuditEvent

2014-04-10 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966073#comment-13966073
 ] 

Andrew Wang commented on HDFS-6224:
---

I messed up the commit a bit and had to revert out some accidental changes to 
FSNamesystem, so there's an additional fixup commit in trunk and branch-2 as 
well. Sorry for the fuss.

> Add a unit test to TestAuditLogger for file permissions passed to 
> logAuditEvent
> ---
>
> Key: HDFS-6224
> URL: https://issues.apache.org/jira/browse/HDFS-6224
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Charles Lamb
>Assignee: Charles Lamb
>Priority: Minor
> Fix For: 2.5.0
>
> Attachments: HDFS-6224.001.patch, HDFS-6224.002.patch, 
> HDFS-6224.003.patch, HDFS-6224.004.patch
>
>
> Add a unit test which verifies behavior of HADOOP-9155. Specifically, ensure 
> that during a setPermission operation the permission returned is the one that 
> was just set, not the permission before the operation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6224) Add a unit test to TestAuditLogger for file permissions passed to logAuditEvent

2014-04-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965999#comment-13965999
 ] 

Hudson commented on HDFS-6224:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #5493 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5493/])
HDFS-6224. Add a unit test to TestAuditLogger for file permissions passed to 
logAuditEvent. Contributed by Charles Lamb. (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586490)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestAuditLogger.java


> Add a unit test to TestAuditLogger for file permissions passed to 
> logAuditEvent
> ---
>
> Key: HDFS-6224
> URL: https://issues.apache.org/jira/browse/HDFS-6224
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Charles Lamb
>Assignee: Charles Lamb
>Priority: Minor
> Fix For: 2.5.0
>
> Attachments: HDFS-6224.001.patch, HDFS-6224.002.patch, 
> HDFS-6224.003.patch, HDFS-6224.004.patch
>
>
> Add a unit test which verifies behavior of HADOOP-9155. Specifically, ensure 
> that during a setPermission operation the permission returned is the one that 
> was just set, not the permission before the operation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6224) Add a unit test to TestAuditLogger for file permissions passed to logAuditEvent

2014-04-10 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-6224:
--

   Resolution: Fixed
Fix Version/s: 2.5.0
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2, thanks Charles for the patch and Nicholas for 
the additional review.

> Add a unit test to TestAuditLogger for file permissions passed to 
> logAuditEvent
> ---
>
> Key: HDFS-6224
> URL: https://issues.apache.org/jira/browse/HDFS-6224
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Charles Lamb
>Assignee: Charles Lamb
>Priority: Minor
> Fix For: 2.5.0
>
> Attachments: HDFS-6224.001.patch, HDFS-6224.002.patch, 
> HDFS-6224.003.patch, HDFS-6224.004.patch
>
>
> Add a unit test which verifies behavior of HADOOP-9155. Specifically, ensure 
> that during a setPermission operation the permission returned is the one that 
> was just set, not the permission before the operation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6224) Add a unit test to TestAuditLogger for file permissions passed to logAuditEvent

2014-04-10 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965935#comment-13965935
 ] 

Andrew Wang commented on HDFS-6224:
---

Latest patch just went through Jenkins and LGTM, will commit shortly.

> Add a unit test to TestAuditLogger for file permissions passed to 
> logAuditEvent
> ---
>
> Key: HDFS-6224
> URL: https://issues.apache.org/jira/browse/HDFS-6224
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Charles Lamb
>Assignee: Charles Lamb
>Priority: Minor
> Attachments: HDFS-6224.001.patch, HDFS-6224.002.patch, 
> HDFS-6224.003.patch, HDFS-6224.004.patch
>
>
> Add a unit test which verifies behavior of HADOOP-9155. Specifically, ensure 
> that during a setPermission operation the permission returned is the one that 
> was just set, not the permission before the operation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6224) Add a unit test to TestAuditLogger for file permissions passed to logAuditEvent

2014-04-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965933#comment-13965933
 ] 

Hadoop QA commented on HDFS-6224:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12639634/HDFS-6224.004.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6645//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6645//console

This message is automatically generated.

> Add a unit test to TestAuditLogger for file permissions passed to 
> logAuditEvent
> ---
>
> Key: HDFS-6224
> URL: https://issues.apache.org/jira/browse/HDFS-6224
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Charles Lamb
>Assignee: Charles Lamb
>Priority: Minor
> Attachments: HDFS-6224.001.patch, HDFS-6224.002.patch, 
> HDFS-6224.003.patch, HDFS-6224.004.patch
>
>
> Add a unit test which verifies behavior of HADOOP-9155. Specifically, ensure 
> that during a setPermission operation the permission returned is the one that 
> was just set, not the permission before the operation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6168) Remove deprecated methods in DistributedFileSystem

2014-04-10 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965926#comment-13965926
 ] 

Andrew Wang commented on HDFS-6168:
---

DFS is kind of a special case since a lot of users downcast FileSystem to get at 
HDFS-specific methods.

This is me being conservative about compatibility. Strictly by the annotations 
this is okay, but this kind of change has a small benefit in exchange for 
potentially a lot of pain. Can we restrict it to trunk, and leave it out of 
branch-2?
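
A minimal sketch of the downcast pattern referred to above (not from the Hadoop source; the configuration and safe-mode check are just examples of HDFS-specific usage): application code like this is why removing deprecated DistributedFileSystem methods can ripple out to users even when the FileSystem base contract is unchanged.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;

public class DowncastExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    if (fs instanceof DistributedFileSystem) {
      // Downcast to reach methods that plain FileSystem does not expose.
      DistributedFileSystem dfs = (DistributedFileSystem) fs;
      boolean safeMode = dfs.setSafeMode(HdfsConstants.SafeModeAction.SAFEMODE_GET);
      System.out.println("NameNode in safe mode: " + safeMode);
    }
  }
}
{code}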

> Remove deprecated methods in DistributedFileSystem
> --
>
> Key: HDFS-6168
> URL: https://issues.apache.org/jira/browse/HDFS-6168
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Fix For: 2.5.0
>
> Attachments: h6168_20140327.patch, h6168_20140327b.patch
>
>
> Some methods in DistributedFileSystem have been deprecated for a long time. 
> They should be removed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6224) Add a unit test to TestAuditLogger for file permissions passed to logAuditEvent

2014-04-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965921#comment-13965921
 ] 

Hadoop QA commented on HDFS-6224:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12639631/HDFS-6224.003.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6644//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6644//console

This message is automatically generated.

> Add a unit test to TestAuditLogger for file permissions passed to 
> logAuditEvent
> ---
>
> Key: HDFS-6224
> URL: https://issues.apache.org/jira/browse/HDFS-6224
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Charles Lamb
>Assignee: Charles Lamb
>Priority: Minor
> Attachments: HDFS-6224.001.patch, HDFS-6224.002.patch, 
> HDFS-6224.003.patch, HDFS-6224.004.patch
>
>
> Add a unit test which verifies behavior of HADOOP-9155. Specifically, ensure 
> that during a setPermission operation the permission returned is the one that 
> was just set, not the permission before the operation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6224) Add a unit test to TestAuditLogger for file permissions passed to logAuditEvent

2014-04-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965879#comment-13965879
 ] 

Hadoop QA commented on HDFS-6224:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12639624/HDFS-6224.002.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6643//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6643//console

This message is automatically generated.

> Add a unit test to TestAuditLogger for file permissions passed to 
> logAuditEvent
> ---
>
> Key: HDFS-6224
> URL: https://issues.apache.org/jira/browse/HDFS-6224
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Charles Lamb
>Assignee: Charles Lamb
>Priority: Minor
> Attachments: HDFS-6224.001.patch, HDFS-6224.002.patch, 
> HDFS-6224.003.patch, HDFS-6224.004.patch
>
>
> Add a unit test which verifies behavior of HADOOP-9155. Specifically, ensure 
> that during a setPermission operation the permission returned is the one that 
> was just set, not the permission before the operation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6232) OfflineEditsViewer throws a NPE on edits containing ACL modifications

2014-04-10 Thread Stephen Chu (JIRA)
Stephen Chu created HDFS-6232:
-

 Summary: OfflineEditsViewer throws a NPE on edits containing ACL 
modifications
 Key: HDFS-6232
 URL: https://issues.apache.org/jira/browse/HDFS-6232
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: tools
Affects Versions: 2.4.0, 3.0.0
Reporter: Stephen Chu


The OfflineEditsViewer using the XML parser will throw an NPE when parsing 
edits that contain a SET_ACL op.

{code}
[root@hdfs-nfs current]# hdfs oev -i 
edits_001-007 -o fsedits.out
14/04/10 14:14:18 ERROR offlineEditsViewer.OfflineEditsBinaryLoader: Got 
RuntimeException at position 505
Encountered exception. Exiting: null
java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.util.XMLUtils.mangleXmlString(XMLUtils.java:122)
at org.apache.hadoop.hdfs.util.XMLUtils.addSaxString(XMLUtils.java:193)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.appendAclEntriesToXml(FSEditLogOp.java:4085)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.access$3300(FSEditLogOp.java:132)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$SetAclOp.toXml(FSEditLogOp.java:3528)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogOp.outputToXml(FSEditLogOp.java:3928)
at 
org.apache.hadoop.hdfs.tools.offlineEditsViewer.XmlEditsVisitor.visitOp(XmlEditsVisitor.java:116)
at 
org.apache.hadoop.hdfs.tools.offlineEditsViewer.OfflineEditsBinaryLoader.loadEdits(OfflineEditsBinaryLoader.java:80)
at 
org.apache.hadoop.hdfs.tools.offlineEditsViewer.OfflineEditsViewer.go(OfflineEditsViewer.java:142)
at 
org.apache.hadoop.hdfs.tools.offlineEditsViewer.OfflineEditsViewer.run(OfflineEditsViewer.java:228)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at 
org.apache.hadoop.hdfs.tools.offlineEditsViewer.OfflineEditsViewer.main(OfflineEditsViewer.java:237)
[root@hdfs-nfs current]# 
{code}

This is reproducible by setting an ACL on a file and then running the OEV on 
the in-progress edits file.

The stats and binary parsers run OK.
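
A hedged repro sketch in Java (the path, user name, and txid are placeholders; modifyAclEntries is one way to get a SET_ACL op into the in-progress edits):

{code}
import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.fs.permission.AclEntryScope;
import org.apache.hadoop.fs.permission.AclEntryType;
import org.apache.hadoop.fs.permission.FsAction;

public class SetAclRepro {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    AclEntry entry = new AclEntry.Builder()
        .setScope(AclEntryScope.ACCESS)
        .setType(AclEntryType.USER)
        .setName("user1")
        .setPermission(FsAction.ALL)
        .build();
    // Writes an ACL modification (logged as a SET_ACL op) into the current
    // in-progress edit log segment.
    fs.modifyAclEntries(new Path("/tmp/aclfile"), Collections.singletonList(entry));
    // Then, against the NameNode's current directory:
    //   hdfs oev -i edits_inprogress_<txid> -o fsedits.out
  }
}
{code}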



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6231) DFSClient hangs infinitely if using hedged reads and all eligible datanodes die.

2014-04-10 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-6231:
--

Hadoop Flags: Reviewed

+1 on the patch.  Good catch! 

> DFSClient hangs infinitely if using hedged reads and all eligible datanodes 
> die.
> 
>
> Key: HDFS-6231
> URL: https://issues.apache.org/jira/browse/HDFS-6231
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-6231.1.patch
>
>
> When using hedged reads, and all eligible datanodes for the read get flagged 
> as dead or ignored, then the client is supposed to refetch block locations 
> from the NameNode to retry the read.  Instead, we've seen that the client can 
> hang indefinitely.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5776) Support 'hedged' reads in DFSClient

2014-04-10 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965839#comment-13965839
 ] 

Chris Nauroth commented on HDFS-5776:
-

FYI, I've discovered that the {{DFSClient}} can hang infinitely if using hedged 
reads and all eligible datanodes die.  This bug is present in 2.4.0.  I've 
posted a patch on HDFS-6231 to fix it, hopefully for inclusion in 2.4.1.

> Support 'hedged' reads in DFSClient
> ---
>
> Key: HDFS-5776
> URL: https://issues.apache.org/jira/browse/HDFS-5776
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Affects Versions: 3.0.0
>Reporter: Liang Xie
>Assignee: Liang Xie
> Fix For: 2.4.0
>
> Attachments: HDFS-5776-v10.txt, HDFS-5776-v11.txt, HDFS-5776-v12.txt, 
> HDFS-5776-v12.txt, HDFS-5776-v13.wip.txt, HDFS-5776-v14.txt, 
> HDFS-5776-v15.txt, HDFS-5776-v17.txt, HDFS-5776-v17.txt, HDFS-5776-v2.txt, 
> HDFS-5776-v3.txt, HDFS-5776-v4.txt, HDFS-5776-v5.txt, HDFS-5776-v6.txt, 
> HDFS-5776-v7.txt, HDFS-5776-v8.txt, HDFS-5776-v9.txt, HDFS-5776.txt, 
> HDFS-5776v18.txt, HDFS-5776v21-branch2.txt, HDFS-5776v21.txt
>
>
> This is a placeholder for the HDFS-related portion of the backport from 
> https://issues.apache.org/jira/browse/HBASE-7509
> The quorum read ability should be helpful, especially for optimizing read outliers.
> We can use "dfs.dfsclient.quorum.read.threshold.millis" and 
> "dfs.dfsclient.quorum.read.threadpool.size" to enable/disable the hedged read 
> ability from the client side (e.g. HBase), and by using DFSQuorumReadMetrics we 
> could export the metric values of interest into the client system (e.g. HBase's 
> regionserver metrics).
> The core logic is in the pread code path, where we decide whether to go to the 
> original fetchBlockByteRange or the newly introduced fetchBlockByteRangeSpeculative 
> based on the above config items.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6231) DFSClient hangs infinitely if using hedged reads and all eligible datanodes die.

2014-04-10 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-6231:


Attachment: HDFS-6231.1.patch

I found this problem from observing runs of {{TestPread}} that were hanging.  
It turns out that on most fast machines, {{TestPread}} doesn't actually end up 
triggering a hedged read.  The initial read completes before the hedged read 
threshold, so we don't bother.  On one of my slower VMs, I was seeing the test 
hang.  I was then able to repro even on my fast machines by aggressively 
down-tuning the hedged read threshold.

Here is a patch to fix the bug.
# {{DFSInputStream#getFromOneDataNode}}: This was the main problem.  The 
returned {{Callable}} needs to release a {{CountDownLatch}}, but it wasn't 
doing it in the failure case.  It was only doing it in the success case.  I 
changed it to release the latch inside a finally clause.
# {{DFSInputStream#hedgedFetchBlockByteRange}}: After I applied the first 
change, it exposed another problem here.  If all datanodes die, then we need to 
refetch block locations from the NameNode.  That wasn't happening, because this 
code used the condition {{futures == null}} to decide whether or not to refetch 
block locations via a call to {{chooseDataNode}}.  After a hedged read has been 
issued, {{futures}} is always non-null, so this wasn't sufficient.  I changed 
the code to check for empty {{futures}}.  The reason this works is that 
{{getFirstToComplete}} removes failed futures from the list.  This means that 
if all datanodes die, then {{futures}} drops back to an empty list, and then we 
go into {{chooseDataNode}} to refetch block locations.
# In {{TestPread}}, I downtuned the hedged read threshold a lot so that this 
test really does issue hedged reads even on fast machines.  That ought to help 
us catch regressions in the future.  Now that hedged reads are really happening 
during the test runs, I found that I needed to reset the metrics counts in 
order to satisfy some assertions.  This is required because the metrics 
instance is static/global.

I've had multiple successful test runs of {{TestPread}} with this patch on both 
my fast Mac and my slow Windows VM.
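
To illustrate the first point, here is a minimal, hypothetical sketch of the 
latch-release pattern (not the attached patch; the helper names are made up): the 
{{Callable}} counts down in a finally block so the waiting thread is released whether 
the hedged read succeeds or fails.

{code}
import java.util.concurrent.Callable;
import java.util.concurrent.CountDownLatch;

// Hypothetical sketch of the pattern described above; doActualRead() is just a
// placeholder for the real read logic.
class HedgedReadSketch {
  static Callable<byte[]> getFromOneDataNode(final CountDownLatch hasReceivedResult) {
    return new Callable<byte[]>() {
      @Override
      public byte[] call() throws Exception {
        try {
          return doActualRead();            // may throw if the datanode fails
        } finally {
          // Count down on success *and* failure; otherwise the caller waiting
          // on the latch blocks forever once all eligible datanodes die.
          hasReceivedResult.countDown();
        }
      }
    };
  }

  private static byte[] doActualRead() {
    return new byte[0];                     // placeholder for the real read
  }
}
{code}

The second point then amounts to testing {{futures.isEmpty()}} instead of 
{{futures == null}} before falling back to {{chooseDataNode}} to refetch block 
locations.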

> DFSClient hangs infinitely if using hedged reads and all eligible datanodes 
> die.
> 
>
> Key: HDFS-6231
> URL: https://issues.apache.org/jira/browse/HDFS-6231
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-6231.1.patch
>
>
> When using hedged reads, and all eligible datanodes for the read get flagged 
> as dead or ignored, then the client is supposed to refetch block locations 
> from the NameNode to retry the read.  Instead, we've seen that the client can 
> hang indefinitely.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6231) DFSClient hangs infinitely if using hedged reads and all eligible datanodes die.

2014-04-10 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-6231:


Status: Patch Available  (was: Open)

> DFSClient hangs infinitely if using hedged reads and all eligible datanodes 
> die.
> 
>
> Key: HDFS-6231
> URL: https://issues.apache.org/jira/browse/HDFS-6231
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-6231.1.patch
>
>
> When using hedged reads, and all eligible datanodes for the read get flagged 
> as dead or ignored, then the client is supposed to refetch block locations 
> from the NameNode to retry the read.  Instead, we've seen that the client can 
> hang indefinitely.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6203) check other namenode's state before HAadmin transitionToActive

2014-04-10 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965820#comment-13965820
 ] 

Kihwal Lee commented on HDFS-6203:
--

> I think this is the same as HDFS-2949; there is some discussion there
Yes, you are right, Uma.  I will dupe this jira to HDFS-2949 and work on it.

> check other namenode's state before HAadmin transitionToActive
> --
>
> Key: HDFS-6203
> URL: https://issues.apache.org/jira/browse/HDFS-6203
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha
>Affects Versions: 2.3.0
>Reporter: patrick white
>Assignee: Kihwal Lee
>
> Current behavior is that the HAadmin -transitionToActive command will 
> complete the transition to Active even if the other namenode is already in 
> Active state. This is not an allowed condition and should be handled by 
> fencing; however, setting both namenodes active can happen accidentally with 
> relative ease, especially in a production environment when performing manual 
> maintenance operations. 
> If this situation does occur it is very serious and will likely cause data 
> loss, or best case, require a difficult recovery to avoid data loss.
> This is requesting an enhancement to haadmin's -transitionToActive command, 
> to have HAadmin check the Active state of the other namenode before 
> completing the transition. If the other namenode is Active, then fail the 
> request due to other nn already-active.
> Not sure if there is a scenario where both namenodes being Active is valid 
> or desired, but to maintain functional compatibility a 'force' parameter 
> could be added to override this check and allow the previous behavior.
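
As a rough illustration of the proposed check (a hypothetical sketch, not an attached 
patch), haadmin could query the other NameNode's HA state before issuing the 
transition and refuse unless a force option is given:

{code}
import org.apache.hadoop.ha.HAServiceProtocol;
import org.apache.hadoop.ha.HAServiceProtocol.HAServiceState;

// Hypothetical sketch: 'otherNode' is an HAServiceProtocol proxy to the other
// NameNode and 'force' is the proposed override parameter.
final class TransitionToActiveCheck {
  static void checkOtherNodeNotActive(HAServiceProtocol otherNode, boolean force)
      throws Exception {
    HAServiceState otherState = otherNode.getServiceStatus().getState();
    if (otherState == HAServiceState.ACTIVE && !force) {
      throw new IllegalStateException(
          "Refusing transitionToActive: the other NameNode is already active. "
              + "Use the force option to override this check.");
    }
  }
}
{code}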



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6231) DFSClient hangs infinitely if using hedged reads and all eligible datanodes die.

2014-04-10 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-6231:
---

 Summary: DFSClient hangs infinitely if using hedged reads and all 
eligible datanodes die.
 Key: HDFS-6231
 URL: https://issues.apache.org/jira/browse/HDFS-6231
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.4.0, 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth


When using hedged reads, and all eligible datanodes for the read get flagged as 
dead or ignored, then the client is supposed to refetch block locations from 
the NameNode to retry the read.  Instead, we've seen that the client can hang 
indefinitely.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6229) Race condition in failover can cause RetryCache fail to work

2014-04-10 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-6229:


Status: Patch Available  (was: Open)

> Race condition in failover can cause RetryCache fail to work
> 
>
> Key: HDFS-6229
> URL: https://issues.apache.org/jira/browse/HDFS-6229
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 2.1.0-beta
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-6229.000.patch, retrycache-race.patch
>
>
> Currently when NN failover happens, the old SBN first sets its state to 
> active, then starts the active services (including tailing all the remaining 
> editlog and building a complete retry cache based on the editlog). If a retry 
> request, which has already succeeded in the old ANN (but the client fails to 
> receive the response), comes in between, this retry may still get served by 
> the new ANN but miss the retry cache.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6229) Race condition in failover can cause RetryCache fail to work

2014-04-10 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-6229:


Attachment: HDFS-6229.000.patch

We can let HAState#setStateInternal hold the locks of both FSNamesystem and 
FSNamesystem#retryCache. That way the retry cache cannot be read in the middle of an 
NN failover. Uploading an initial patch to fix this.
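
In pseudo-form, the idea is roughly the following (a hypothetical sketch, not the 
attached patch; fsn stands for the FSNamesystem instance, and the state-transition 
calls follow the HAState API only loosely):

{code}
// Hold the namesystem lock and the retry cache monitor across the state flip,
// so a retried request cannot be looked up in the cache mid-failover.
fsn.writeLock();
try {
  synchronized (retryCache) {
    exitState(context);
    context.setState(newState);
    // Entering the new state starts the active services: tail the remaining
    // editlog and rebuild the retry cache before any lookup can proceed.
    newState.enterState(context);
  }
} finally {
  fsn.writeUnlock();
}
{code}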

> Race condition in failover can cause RetryCache fail to work
> 
>
> Key: HDFS-6229
> URL: https://issues.apache.org/jira/browse/HDFS-6229
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 2.1.0-beta
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: HDFS-6229.000.patch, retrycache-race.patch
>
>
> Currently when NN failover happens, the old SBN first sets its state to 
> active, then starts the active services (including tailing all the remaining 
> editlog and building a complete retry cache based on the editlog). If a retry 
> request, which has already succeeded in the old ANN (but the client fails to 
> receive the response), comes in between, this retry may still get served by 
> the new ANN but miss the retry cache.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6217) Webhdfs PUT operations may not work via a http proxy

2014-04-10 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-6217:
--

Status: Patch Available  (was: Open)

> Webhdfs PUT operations may not work via a http proxy
> 
>
> Key: HDFS-6217
> URL: https://issues.apache.org/jira/browse/HDFS-6217
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: webhdfs
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-6217.patch
>
>
> Most of webhdfs's PUT operations have no message body.  The HTTP/1.1 spec is 
> fuzzy in how PUT requests with no body should be handled.  If the request 
> does not specify chunking or Content-Length, the server _may_ consider the 
> request to have no body.  However, popular proxies such as Apache Traffic 
> Server will reject PUT requests with no body unless Content-Length: 0 is 
> specified.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6230) Expose upgrade status through NameNode web UI

2014-04-10 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-6230:


Description: The NameNode web UI does not show upgrade information anymore. 
Hadoop 2.0 also does not have the _hadoop dfsadmin -upgradeProgress_ command to 
check the upgrade status.  (was: The NameNode web UI does not show upgrade 
information anymore. Hadoop 2.0 also does not have the _hadoop dfsadmin 
-upgradeProgress_ command to check the upgrade status.

The status should be exposed via the web UI.)

> Expose upgrade status through NameNode web UI
> -
>
> Key: HDFS-6230
> URL: https://issues.apache.org/jira/browse/HDFS-6230
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.4.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>
> The NameNode web UI does not show upgrade information anymore. Hadoop 2.0 
> also does not have the _hadoop dfsadmin -upgradeProgress_ command to check 
> the upgrade status.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6230) Expose upgrade status through NameNode web UI

2014-04-10 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-6230:
---

 Summary: Expose upgrade status through NameNode web UI
 Key: HDFS-6230
 URL: https://issues.apache.org/jira/browse/HDFS-6230
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.4.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


The NameNode web UI does not show upgrade information anymore. Hadoop 2.0 also 
does not have the _hadoop dfsadmin -upgradeProgress_ command to check the 
upgrade status.

The status should be exposed via the web UI.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6224) Add a unit test to TestAuditLogger for file permissions passed to logAuditEvent

2014-04-10 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb updated HDFS-6224:
---

Attachment: HDFS-6224.004.patch

Removed LOG and the unused imports. Fixed indentation.

> Add a unit test to TestAuditLogger for file permissions passed to 
> logAuditEvent
> ---
>
> Key: HDFS-6224
> URL: https://issues.apache.org/jira/browse/HDFS-6224
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Charles Lamb
>Assignee: Charles Lamb
>Priority: Minor
> Attachments: HDFS-6224.001.patch, HDFS-6224.002.patch, 
> HDFS-6224.003.patch, HDFS-6224.004.patch
>
>
> Add a unit test which verifies behavior of HADOOP-9155. Specifically, ensure 
> that during a setPermission operation the permission returned is the one that 
> was just set, not the permission before the operation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6224) Add a unit test to TestAuditLogger for file permissions passed to logAuditEvent

2014-04-10 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965736#comment-13965736
 ] 

Tsz Wo Nicholas Sze commented on HDFS-6224:
---

Why add LOG?  Could you also clean up the unused imports?

> Add a unit test to TestAuditLogger for file permissions passed to 
> logAuditEvent
> ---
>
> Key: HDFS-6224
> URL: https://issues.apache.org/jira/browse/HDFS-6224
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Charles Lamb
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: HDFS-6224.001.patch, HDFS-6224.002.patch, 
> HDFS-6224.003.patch
>
>
> Add a unit test which verifies behavior of HADOOP-9155. Specifically, ensure 
> that during a setPermission operation the permission returned is the one that 
> was just set, not the permission before the operation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6224) Add a unit test to TestAuditLogger for file permissions passed to logAuditEvent

2014-04-10 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HDFS-6224:
--

Assignee: Charles Lamb  (was: Andrew Wang)

> Add a unit test to TestAuditLogger for file permissions passed to 
> logAuditEvent
> ---
>
> Key: HDFS-6224
> URL: https://issues.apache.org/jira/browse/HDFS-6224
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Charles Lamb
>Assignee: Charles Lamb
>Priority: Minor
> Attachments: HDFS-6224.001.patch, HDFS-6224.002.patch, 
> HDFS-6224.003.patch
>
>
> Add a unit test which verifies behavior of HADOOP-9155. Specifically, ensure 
> that during a setPermission operation the permission returned is the one that 
> was just set, not the permission before the operation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6204) TestRBWBlockInvalidation may fail

2014-04-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965719#comment-13965719
 ] 

Hudson commented on HDFS-6204:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1728 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1728/])
HDFS-6204. Fix TestRBWBlockInvalidation: change the last sleep to a loop. 
(szetszwo: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586039)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestRBWBlockInvalidation.java


> TestRBWBlockInvalidation may fail
> -
>
> Key: HDFS-6204
> URL: https://issues.apache.org/jira/browse/HDFS-6204
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Fix For: 2.4.1
>
> Attachments: h6204_20140408.patch
>
>
> {code}
> java.lang.AssertionError: There should not be any replica in the 
> corruptReplicasMap expected:<0> but was:<1>
>   at org.junit.Assert.fail(Assert.java:93)
>   at org.junit.Assert.failNotEquals(Assert.java:647)
>   at org.junit.Assert.assertEquals(Assert.java:128)
>   at org.junit.Assert.assertEquals(Assert.java:472)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation.testBlockInvalidationWhenRBWReplicaMissedInDN(TestRBWBlockInvalidation.java:137)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6206) DFSUtil.substituteForWildcardAddress may throw NPE

2014-04-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965724#comment-13965724
 ] 

Hudson commented on HDFS-6206:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1728 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1728/])
HDFS-6206. Fix NullPointerException in DFSUtil.substituteForWildcardAddress. 
(szetszwo: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586034)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java


> DFSUtil.substituteForWildcardAddress may throw NPE
> --
>
> Key: HDFS-6206
> URL: https://issues.apache.org/jira/browse/HDFS-6206
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Fix For: 2.4.1
>
> Attachments: h6206_20140408.patch
>
>
> InetSocketAddress.getAddress() may return null if the address is unresolved.  
> In such a case, DFSUtil.substituteForWildcardAddress may throw an NPE.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6228) comments typo fix for FsDatasetImpl.java

2014-04-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965723#comment-13965723
 ] 

Hudson commented on HDFS-6228:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1728 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1728/])
HDFS-6228. comments typo fix for FsDatasetImpl.java Contributed by 
zhaoyunjiong. (umamahesh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586264)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java


> comments typo fix for FsDatasetImpl.java
> 
>
> Key: HDFS-6228
> URL: https://issues.apache.org/jira/browse/HDFS-6228
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: zhaoyunjiong
>Assignee: zhaoyunjiong
>Priority: Trivial
> Fix For: 3.0.0
>
> Attachments: HDFS-6228.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6208) DataNode caching can leak file descriptors.

2014-04-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965714#comment-13965714
 ] 

Hudson commented on HDFS-6208:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1728 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1728/])
HDFS-6208. DataNode caching can leak file descriptors. Contributed by Chris 
Nauroth. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586154)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/MappableBlock.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCacheDirectives.java


> DataNode caching can leak file descriptors.
> ---
>
> Key: HDFS-6208
> URL: https://issues.apache.org/jira/browse/HDFS-6208
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.4.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 3.0.0, 2.4.1
>
> Attachments: HDFS-6208.1.patch
>
>
> In the DataNode, management of mmap'd/mlock'd block files can leak file 
> descriptors.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6215) Wrong error message for upgrade

2014-04-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965720#comment-13965720
 ] 

Hudson commented on HDFS-6215:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1728 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1728/])
HDFS-6215. Wrong error message for upgrade. (Kihwal Lee via jeagles) (jeagles: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586011)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java


> Wrong error message for upgrade
> ---
>
> Key: HDFS-6215
> URL: https://issues.apache.org/jira/browse/HDFS-6215
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Minor
> Fix For: 3.0.0, 2.5.0, 2.4.1
>
> Attachments: HDFS-6215.patch
>
>
> UPGRADE is printed instead of -upgrade.
> {panel}
> File system image contains an old layout version -51.
> An upgrade to version -56 is required.
> Please restart NameNode with the "-rollingUpgrade started" option if a rolling
> upgraded is already started; or restart NameNode with the "UPGRADE" to start 
> a new upgrade.
> {panel}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6225) Remove the o.a.h.hdfs.server.common.UpgradeStatusReport

2014-04-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965718#comment-13965718
 ] 

Hudson commented on HDFS-6225:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1728 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1728/])
HDFS-6225. Remove the o.a.h.hdfs.server.common.UpgradeStatusReport. Contributed 
by Haohui Mai. (wheat9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586181)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/UpgradeStatusReport.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/TestJspHelper.java


> Remove the o.a.h.hdfs.server.common.UpgradeStatusReport
> ---
>
> Key: HDFS-6225
> URL: https://issues.apache.org/jira/browse/HDFS-6225
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.5.0
>
> Attachments: HDFS-6225.000.patch
>
>
> The class o.a.h.hdfs.server.common.UpgradeStatusReport has been dead since 
> HDFS-2686. This jira proposes to remove it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6209) Fix flaky test TestValidateConfigurationSettings.testThatDifferentRPCandHttpPortsAreOK

2014-04-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965712#comment-13965712
 ] 

Hudson commented on HDFS-6209:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1728 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1728/])
HDFS-6209. TestValidateConfigurationSettings should use random ports.  
Contributed by Arpit Agarwal (szetszwo: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586079)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestValidateConfigurationSettings.java


> Fix flaky test 
> TestValidateConfigurationSettings.testThatDifferentRPCandHttpPortsAreOK
> --
>
> Key: HDFS-6209
> URL: https://issues.apache.org/jira/browse/HDFS-6209
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.4.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
>Priority: Minor
> Fix For: 2.4.1
>
> Attachments: HDFS-6209.01.patch
>
>
> The test depends on hard-coded port numbers being available. It should retry 
> if the chosen port is in use.
> Exception details below in a comment.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6170) Support GETFILESTATUS operation in WebImageViewer

2014-04-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965707#comment-13965707
 ] 

Hudson commented on HDFS-6170:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1728 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1728/])
HDFS-6170. Support GETFILESTATUS operation in WebImageViewer. Contributed by 
Akira Ajisaka. (wheat9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586152)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageHandler.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java


> Support GETFILESTATUS operation in WebImageViewer
> -
>
> Key: HDFS-6170
> URL: https://issues.apache.org/jira/browse/HDFS-6170
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: tools
>Affects Versions: 2.5.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>  Labels: newbie
> Fix For: 2.5.0
>
> Attachments: HDFS-6170.2.patch, HDFS-6170.patch
>
>
> WebImageViewer was created by HDFS-5978 but currently supports only the 
> {{LISTSTATUS}} operation. The {{GETFILESTATUS}} operation is required for users to 
> execute "hdfs dfs -ls webhdfs://foo" against WebImageViewer.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6160) TestSafeMode occasionally fails

2014-04-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965709#comment-13965709
 ] 

Hudson commented on HDFS-6160:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1728 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1728/])
HDFS-6160. TestSafeMode occasionally fails. (Contributed by Arpit Agarwal) 
(arp: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586007)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/metrics/NameNodeMetrics.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestSafeMode.java


> TestSafeMode occasionally fails
> ---
>
> Key: HDFS-6160
> URL: https://issues.apache.org/jira/browse/HDFS-6160
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.4.0
>Reporter: Ted Yu
>Assignee: Arpit Agarwal
> Fix For: 3.0.0, 2.5.0
>
> Attachments: HDFS-6160.01.patch
>
>
> From 
> https://builds.apache.org/job/PreCommit-HDFS-Build/6511//testReport/org.apache.hadoop.hdfs/TestSafeMode/testInitializeReplQueuesEarly/
>  :
> {code}
> java.lang.AssertionError: expected:<13> but was:<0>
>   at org.junit.Assert.fail(Assert.java:93)
>   at org.junit.Assert.failNotEquals(Assert.java:647)
>   at org.junit.Assert.assertEquals(Assert.java:128)
>   at org.junit.Assert.assertEquals(Assert.java:472)
>   at org.junit.Assert.assertEquals(Assert.java:456)
>   at 
> org.apache.hadoop.hdfs.TestSafeMode.testInitializeReplQueuesEarly(TestSafeMode.java:212)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6168) Remove deprecated methods in DistributedFileSystem

2014-04-10 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965704#comment-13965704
 ] 

Tsz Wo Nicholas Sze commented on HDFS-6168:
---

DistributedFileSystem is not only LimitedPrivate but also an Unstable API.  Do 
we need to keep Unstable APIs stable?

> Remove deprecated methods in DistributedFileSystem
> --
>
> Key: HDFS-6168
> URL: https://issues.apache.org/jira/browse/HDFS-6168
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Fix For: 2.5.0
>
> Attachments: h6168_20140327.patch, h6168_20140327b.patch
>
>
> Some methods in DistributedFileSystem have been deprecated for a long time. 
> They should be removed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6224) Add a unit test to TestAuditLogger for file permissions passed to logAuditEvent

2014-04-10 Thread Charles Lamb (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965699#comment-13965699
 ] 

Charles Lamb commented on HDFS-6224:


I removed the whitespace changes at the end.

Thanks for the review Andrew.



> Add a unit test to TestAuditLogger for file permissions passed to 
> logAuditEvent
> ---
>
> Key: HDFS-6224
> URL: https://issues.apache.org/jira/browse/HDFS-6224
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Charles Lamb
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: HDFS-6224.001.patch, HDFS-6224.002.patch, 
> HDFS-6224.003.patch
>
>
> Add a unit test which verifies behavior of HADOOP-9155. Specifically, ensure 
> that during a setPermission operation the permission returned is the one that 
> was just set, not the permission before the operation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6224) Add a unit test to TestAuditLogger for file permissions passed to logAuditEvent

2014-04-10 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb updated HDFS-6224:
---

Attachment: HDFS-6224.003.patch

> Add a unit test to TestAuditLogger for file permissions passed to 
> logAuditEvent
> ---
>
> Key: HDFS-6224
> URL: https://issues.apache.org/jira/browse/HDFS-6224
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Charles Lamb
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: HDFS-6224.001.patch, HDFS-6224.002.patch, 
> HDFS-6224.003.patch
>
>
> Add a unit test which verifies behavior of HADOOP-9155. Specifically, ensure 
> that during a setPermission operation the permission returned is the one that 
> was just set, not the permission before the operation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6224) Add a unit test to TestAuditLogger for file permissions passed to logAuditEvent

2014-04-10 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965689#comment-13965689
 ] 

Andrew Wang commented on HDFS-6224:
---

Thanks Charles, looks good with just one nit: please remove the whitespace 
changes at the end of the file to keep the size of the diff down. +1 pending 
that and Jenkins.

> Add a unit test to TestAuditLogger for file permissions passed to 
> logAuditEvent
> ---
>
> Key: HDFS-6224
> URL: https://issues.apache.org/jira/browse/HDFS-6224
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Charles Lamb
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: HDFS-6224.001.patch, HDFS-6224.002.patch
>
>
> Add a unit test which verifies behavior of HADOOP-9155. Specifically, ensure 
> that during a setPermission operation the permission returned is the one that 
> was just set, not the permission before the operation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6224) Add a unit test to TestAuditLogger for file permissions passed to logAuditEvent

2014-04-10 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb updated HDFS-6224:
---

Attachment: HDFS-6224.002.patch

Well, now that's embarrassing. I uploaded the wrong diffs the first time (don't 
ask).

Attached you'll find the correct ones. I believe it will address your concerns.

Thanks for the review.


> Add a unit test to TestAuditLogger for file permissions passed to 
> logAuditEvent
> ---
>
> Key: HDFS-6224
> URL: https://issues.apache.org/jira/browse/HDFS-6224
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Charles Lamb
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: HDFS-6224.001.patch, HDFS-6224.002.patch
>
>
> Add a unit test which verifies behavior of HADOOP-9155. Specifically, ensure 
> that during a setPermission operation the permission returned is the one that 
> was just set, not the permission before the operation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-3828) Block Scanner rescans blocks too frequently

2014-04-10 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965664#comment-13965664
 ] 

Eric Payne commented on HDFS-3828:
--

Since this patch has already been merged to trunk and branch-2, is this patch 
still needed in 0.23? If not, can we close this issue?

> Block Scanner rescans blocks too frequently
> ---
>
> Key: HDFS-3828
> URL: https://issues.apache.org/jira/browse/HDFS-3828
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 0.23.0
>Reporter: Andy Isaacson
>Assignee: Andy Isaacson
> Fix For: 2.5.0
>
> Attachments: hdfs-3828-1.txt, hdfs-3828-2.txt, hdfs-3828-3.txt, 
> hdfs3828.txt
>
>
> {{BlockPoolSliceScanner#scan}} calls cleanUp every time it's invoked from 
> {{DataBlockScanner#run}} via {{scanBlockPoolSlice}}.  But cleanUp 
> unconditionally roll()s the verificationLogs, so after two iterations we have 
> lost the first iteration of block verification times.  As a result a cluster 
> with just one block repeatedly rescans it every 10 seconds:
> {noformat}
> 2012-08-16 15:59:57,884 INFO  datanode.BlockPoolSliceScanner 
> (BlockPoolSliceScanner.java:verifyBlock(391)) - Verification succeeded for 
> BP-2101131164-172.29.122.91-1337906886255:blk_7919273167187535506_4915
> 2012-08-16 16:00:07,904 INFO  datanode.BlockPoolSliceScanner 
> (BlockPoolSliceScanner.java:verifyBlock(391)) - Verification succeeded for 
> BP-2101131164-172.29.122.91-1337906886255:blk_7919273167187535506_4915
> 2012-08-16 16:00:17,925 INFO  datanode.BlockPoolSliceScanner 
> (BlockPoolSliceScanner.java:verifyBlock(391)) - Verification succeeded for 
> BP-2101131164-172.29.122.91-1337906886255:blk_7919273167187535506_4915
> {noformat}
> To fix this, we need to avoid roll()ing the logs multiple times per period.
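
A minimal sketch of the direction suggested above (hypothetical; not one of the 
attached patches, and the field names are made up) is to gate the roll on the time 
elapsed since the previous roll:

{code}
// Hypothetical sketch: roll the verification log at most once per scan period.
// 'lastRollTimeMs' and 'verificationLog' are illustrative names only.
private long lastRollTimeMs = 0L;

private void rollVerificationLogsIfDue(long scanPeriodMs) {
  long now = System.currentTimeMillis();
  if (now - lastRollTimeMs >= scanPeriodMs) {
    verificationLog.roll();      // keep the previous period's entries intact
    lastRollTimeMs = now;
  }
}
{code}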



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6217) Webhdfs PUT operations may not work via a http proxy

2014-04-10 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HDFS-6217:
--

Attachment: HDFS-6217.patch

> Webhdfs PUT operations may not work via a http proxy
> 
>
> Key: HDFS-6217
> URL: https://issues.apache.org/jira/browse/HDFS-6217
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: webhdfs
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Attachments: HDFS-6217.patch
>
>
> Most of webhdfs's PUT operations have no message body.  The HTTP/1.1 spec is 
> fuzzy in how PUT requests with no body should be handled.  If the request 
> does not specify chunking or Content-Length, the server _may_ consider the 
> request to have no body.  However, popular proxies such as Apache Traffic 
> Server will reject PUT requests with no body unless Content-Length: 0 is 
> specified.
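
For reference, a minimal hypothetical sketch (not the attached patch; the URL is 
illustrative) of sending an explicit Content-Length: 0 on a body-less PUT with 
java.net.HttpURLConnection:

{code}
import java.net.HttpURLConnection;
import java.net.URL;

public class EmptyPutSketch {
  public static void main(String[] args) throws Exception {
    URL url = new URL("http://nn.example.com:50070/webhdfs/v1/tmp/d?op=MKDIRS");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("PUT");
    conn.setDoOutput(true);
    // Advertise a zero-length body explicitly so proxies such as Apache
    // Traffic Server do not reject the request.
    conn.setFixedLengthStreamingMode(0);
    conn.getOutputStream().close();       // send the empty body
    System.out.println("HTTP " + conn.getResponseCode());
    conn.disconnect();
  }
}
{code}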



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5669) Storage#tryLock() should check for null before logging successfull message

2014-04-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965661#comment-13965661
 ] 

Hudson commented on HDFS-5669:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #5490 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5490/])
HDFS-5669. Storage#tryLock() should check for null before logging successfull 
message. Contributed by Vinayakumar B (umamahesh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586392)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/Storage.java


> Storage#tryLock() should check for null before logging successfull message
> --
>
> Key: HDFS-5669
> URL: https://issues.apache.org/jira/browse/HDFS-5669
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.2.0
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Fix For: 3.0.0, 2.5.0
>
> Attachments: HDFS-5669.patch, HDFS-5669.patch
>
>
> In the following code in Storage#tryLock(), there is a possibility that 
> {{file.getChannel().tryLock()}} returns null if the lock is acquired by some 
> other process. In that case, even though the return value is null, a success 
> message is still logged, which is misleading.
> {code}try {
> res = file.getChannel().tryLock();
> file.write(jvmName.getBytes(Charsets.UTF_8));
> LOG.info("Lock on " + lockF + " acquired by nodename " + jvmName);
>   } catch(OverlappingFileLockException oe) {{code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6194) Create new tests for {{ByteRangeInputStream}}

2014-04-10 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965658#comment-13965658
 ] 

Haohui Mai commented on HDFS-6194:
--

{code}
+doNothing().when(mockConnection).connect();
+doNothing().when(mockConnection).disconnect();
{code}

They are no-ops.

{code}
+Whitebox.setInternalState(bris, "resolvedURL", rMock);
+Whitebox.setInternalState(bris, "startPos", 0);
+Whitebox.setInternalState(bris, "currentPos", 0);
+Whitebox.setInternalState(bris, "status",
+  ByteRangeInputStream.StreamStatus.SEEK);

+assertEquals("Initial call made incorrectly (offset check)",
+0, bris.startPos);
{code}

The high-level goal is to verify that the seek() method is written correctly. 
The test code should not depend on the internal state of the 
{{ByteRangeInputStream}}. Instead, it should verify that {{seek()}} calls the 
methods of the underlying objects (i.e., {{URLOpener}}, {{URLConnection}}) correctly. 
Please check how this was done before HDFS-5570.
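
As a generic illustration of that style (a hypothetical sketch with made-up stand-ins 
for URLOpener and ByteRangeInputStream, not the real API), the test exercises the 
stream and then verifies the interaction with the mocked opener instead of poking at 
private fields:

{code}
import static org.mockito.Mockito.*;

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Made-up collaborator and stream, only to show interaction-based verification.
interface Opener {
  InputStream openAt(long offset) throws IOException;
}

class RangeStream {
  private final Opener opener;
  private long pos;
  private InputStream in;

  RangeStream(Opener opener) { this.opener = opener; }

  void seek(long newPos) {
    if (newPos != pos) { in = null; pos = newPos; }   // force a lazy reopen
  }

  int read() throws IOException {
    if (in == null) { in = opener.openAt(pos); }
    return in.read();
  }
}

class SeekInteractionSketch {
  public static void main(String[] args) throws Exception {
    Opener opener = mock(Opener.class);
    when(opener.openAt(anyLong()))
        .thenReturn(new ByteArrayInputStream(new byte[16]));

    RangeStream stream = new RangeStream(opener);
    stream.seek(10);
    stream.read();

    verify(opener).openAt(10L);   // behaviour checked via the collaborator
  }
}
{code}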

> Create new tests for {{ByteRangeInputStream}}
> -
>
> Key: HDFS-6194
> URL: https://issues.apache.org/jira/browse/HDFS-6194
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Akira AJISAKA
> Attachments: HDFS-6194.2.patch, HDFS-6194.patch
>
>
> HDFS-5570 removes old tests for {{ByteRangeInputStream}}, because the tests 
> only are tightly coupled with hftp / hsftp. New tests need to be written 
> because the same class is also used by {{WebHdfsFileSystem}}.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6229) Race condition in failover can cause RetryCache fail to work

2014-04-10 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-6229:


Attachment: retrycache-race.patch

The patch can mimic the scenario and cause unit tests in TestRetryCacheWithHA 
to time out.

> Race condition in failover can cause RetryCache fail to work
> 
>
> Key: HDFS-6229
> URL: https://issues.apache.org/jira/browse/HDFS-6229
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 2.1.0-beta
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Attachments: retrycache-race.patch
>
>
> Currently when NN failover happens, the old SBN first sets its state to 
> active, then starts the active services (including tailing all the remaining 
> editlog and building a complete retry cache based on the editlog). If a retry 
> request, which has already succeeded in the old ANN (but the client fails to 
> receive the response), comes in between, this retry may still get served by 
> the new ANN but miss the retry cache.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5669) Storage#tryLock() should check for null before logging successfull message

2014-04-10 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HDFS-5669:
--

   Resolution: Fixed
Fix Version/s: 2.5.0
   3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I have just committed this to trunk and branch-2

> Storage#tryLock() should check for null before logging successfull message
> --
>
> Key: HDFS-5669
> URL: https://issues.apache.org/jira/browse/HDFS-5669
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.2.0
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Fix For: 3.0.0, 2.5.0
>
> Attachments: HDFS-5669.patch, HDFS-5669.patch
>
>
> In the following code in Storage#tryLock(), there is a possibility that 
> {{file.getChannel().tryLock()}} returns null if the lock is acquired by some 
> other process. In that case, even though the return value is null, a success 
> message is still logged, which is misleading.
> {code}try {
> res = file.getChannel().tryLock();
> file.write(jvmName.getBytes(Charsets.UTF_8));
> LOG.info("Lock on " + lockF + " acquired by nodename " + jvmName);
>   } catch(OverlappingFileLockException oe) {{code}
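
For reference, a condensed sketch of the null check under discussion (hypothetical; 
not necessarily the committed patch verbatim):

{code}
res = file.getChannel().tryLock();
if (res == null) {
  // The lock is held by another process, so don't log the success message.
  throw new OverlappingFileLockException();
}
file.write(jvmName.getBytes(Charsets.UTF_8));
LOG.info("Lock on " + lockF + " acquired by nodename " + jvmName);
{code}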



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5669) Storage#tryLock() should check for null before logging successfull message

2014-04-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965632#comment-13965632
 ] 

Hadoop QA commented on HDFS-5669:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12639615/HDFS-5669.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6642//console

This message is automatically generated.

> Storage#tryLock() should check for null before logging successfull message
> --
>
> Key: HDFS-5669
> URL: https://issues.apache.org/jira/browse/HDFS-5669
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.2.0
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-5669.patch, HDFS-5669.patch
>
>
> In the following code in Storage#tryLock(), there is a possibility that 
> {{file.getChannel().tryLock()}} returns null if the lock is acquired by some 
> other process. In that case, even though the return value is null, a success 
> message is still logged, which is misleading.
> {code}try {
> res = file.getChannel().tryLock();
> file.write(jvmName.getBytes(Charsets.UTF_8));
> LOG.info("Lock on " + lockF + " acquired by nodename " + jvmName);
>   } catch(OverlappingFileLockException oe) {{code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6168) Remove deprecated methods in DistributedFileSystem

2014-04-10 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965631#comment-13965631
 ] 

Andrew Wang commented on HDFS-6168:
---

Why was this incompatible change committed to branch-2? Normally, we deprecate 
methods for one major release and then remove in the next. Since 2.2 went GA, 
we shouldn't be removing any deprecated methods until 3.x.

I know that DFS is limited private to MapReduce and HBase, but we need to be 
serious about compatibility now that we're GA.

> Remove deprecated methods in DistributedFileSystem
> --
>
> Key: HDFS-6168
> URL: https://issues.apache.org/jira/browse/HDFS-6168
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Fix For: 2.5.0
>
> Attachments: h6168_20140327.patch, h6168_20140327b.patch
>
>
> Some methods in DistributedFileSystem have been deprecated for a long time. 
> They should be removed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5669) Storage#tryLock() should check for null before logging successfull message

2014-04-10 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965627#comment-13965627
 ] 

Uma Maheswara Rao G commented on HDFS-5669:
---

Oops, I forgot about this completely. Thanks for the rebase. 
I will commit the patch momentarily.
+1

> Storage#tryLock() should check for null before logging successfull message
> --
>
> Key: HDFS-5669
> URL: https://issues.apache.org/jira/browse/HDFS-5669
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.2.0
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-5669.patch, HDFS-5669.patch
>
>
> In the following code in Storage#tryLock(), there is a possibility that 
> {{file.getChannel().tryLock()}} returns null if the lock is acquired by some 
> other process. In that case, even though the return value is null, a success 
> message is still logged, which is misleading.
> {code}try {
> res = file.getChannel().tryLock();
> file.write(jvmName.getBytes(Charsets.UTF_8));
> LOG.info("Lock on " + lockF + " acquired by nodename " + jvmName);
>   } catch(OverlappingFileLockException oe) {{code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6229) Race condition in failover can cause RetryCache fail to work

2014-04-10 Thread Jing Zhao (JIRA)
Jing Zhao created HDFS-6229:
---

 Summary: Race condition in failover can cause RetryCache fail to 
work
 Key: HDFS-6229
 URL: https://issues.apache.org/jira/browse/HDFS-6229
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha
Affects Versions: 2.1.0-beta
Reporter: Jing Zhao
Assignee: Jing Zhao


Currently when NN failover happens, the old SBN first sets its state to active, 
then starts the active services (including tailing all the remaining editlog 
and building a complete retry cache based on the editlog). If a retry request, 
which has already succeeded in the old ANN (but the client fails to receive the 
response), comes in between, this retry may still get served by the new ANN but 
miss the retry cache.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-5669) Storage#tryLock() should check for null before logging successfull message

2014-04-10 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-5669:


Attachment: HDFS-5669.patch

Rebased the patch.

> Storage#tryLock() should check for null before logging successfull message
> --
>
> Key: HDFS-5669
> URL: https://issues.apache.org/jira/browse/HDFS-5669
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.2.0
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HDFS-5669.patch, HDFS-5669.patch
>
>
> In the following code in Storage#tryLock(), there is a possibility that 
> {{file.getChannel().tryLock()}} returns null if the lock is acquired by some 
> other process. In that case, even though the return value is null, a success 
> message is still logged, which is misleading.
> {code}try {
> res = file.getChannel().tryLock();
> file.write(jvmName.getBytes(Charsets.UTF_8));
> LOG.info("Lock on " + lockF + " acquired by nodename " + jvmName);
>   } catch(OverlappingFileLockException oe) {{code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6228) comments typo fix for FsDatasetImpl.java

2014-04-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965601#comment-13965601
 ] 

Hudson commented on HDFS-6228:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #536 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/536/])
HDFS-6228. comments typo fix for FsDatasetImpl.java Contributed by 
zhaoyunjiong. (umamahesh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586264)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java


> comments typo fix for FsDatasetImpl.java
> 
>
> Key: HDFS-6228
> URL: https://issues.apache.org/jira/browse/HDFS-6228
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: zhaoyunjiong
>Assignee: zhaoyunjiong
>Priority: Trivial
> Fix For: 3.0.0
>
> Attachments: HDFS-6228.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6224) Add a unit test to TestAuditLogger for file permissions passed to logAuditEvent

2014-04-10 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965603#comment-13965603
 ] 

Andrew Wang commented on HDFS-6224:
---

Hey Charles, I took a quick look at the patch, and it seems like it's just 
checking that it logs twice, but not the actual content of the audit log 
messages. Shouldn't we be verifying that too? It'd also be good to have a more 
descriptive comment than "Tests that AuditLogger works as expected."
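
A rough sketch of what that verification could look like (hypothetical, not the 
attached patch): a test {{AuditLogger}} records the permission carried in the 
{{FileStatus}}, and the test compares it against the permission that was just set.

{code}
import java.net.InetAddress;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.hdfs.server.namenode.AuditLogger;

// Hypothetical capturing audit logger for the test.
public class PermissionCapturingAuditLogger implements AuditLogger {
  static volatile FsPermission lastPermission;

  @Override
  public void initialize(Configuration conf) {
    // no configuration needed for the test
  }

  @Override
  public void logAuditEvent(boolean succeeded, String userName, InetAddress addr,
      String cmd, String src, String dst, FileStatus stat) {
    if ("setPermission".equals(cmd) && stat != null) {
      lastPermission = stat.getPermission();
    }
  }
}
{code}

The test would register this logger via dfs.namenode.audit.loggers, call 
setPermission with a known FsPermission, and assert that lastPermission equals the 
permission just set rather than the file's previous permission.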

> Add a unit test to TestAuditLogger for file permissions passed to 
> logAuditEvent
> ---
>
> Key: HDFS-6224
> URL: https://issues.apache.org/jira/browse/HDFS-6224
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Charles Lamb
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: HDFS-6224.001.patch
>
>
> Add a unit test which verifies behavior of HADOOP-9155. Specifically, ensure 
> that during a setPermission operation the permission returned is the one that 
> was just set, not the permission before the operation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HDFS-6224) Add a unit test to TestAuditLogger for file permissions passed to logAuditEvent

2014-04-10 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang reassigned HDFS-6224:
-

Assignee: Andrew Wang  (was: Charles Lamb)

> Add a unit test to TestAuditLogger for file permissions passed to 
> logAuditEvent
> ---
>
> Key: HDFS-6224
> URL: https://issues.apache.org/jira/browse/HDFS-6224
> Project: Hadoop HDFS
>  Issue Type: Test
>  Components: test
>Reporter: Charles Lamb
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: HDFS-6224.001.patch
>
>
> Add a unit test which verifies behavior of HADOOP-9155. Specifically, ensure 
> that during a setPermission operation the permission returned is the one that 
> was just set, not the permission before the operation.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6226) Replace logging calls that use StringUtils.stringifyException to pass the exception instance to the log call.

2014-04-10 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965598#comment-13965598
 ] 

Chris Nauroth commented on HDFS-6226:
-

bq. ...loses data which different appenders ...

Yes, agreed.  Just to expand on this a bit, I was prompted to file this jira by 
a use case for a custom appender that wants to forward the stack trace, but not 
the log message, to an external system for tracking.  Right now, this ends up 
missing out on the exceptions that were packed into the message field.

bq. This could also be a time to move to SLF4J for a logging api?

This seems reasonable as long as there is no risk of impacting library 
dependencies for downstream projects.  I see we already have slf4j as a 
dependency in hadoop-common, so I expect this is not a problem.
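
As a small illustration of the cleanup being proposed here (a sketch, not code 
from any particular patch):

{code}
// Before: the stack trace is flattened into the message string, so appenders
// that treat throwables specially never see the exception object.
LOG.error("Failed to process request: " + StringUtils.stringifyException(e));

// After: pass the exception instance to the logger; the rendered text is the
// same, but the Throwable remains available to every appender.
LOG.error("Failed to process request", e);
{code}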

> Replace logging calls that use StringUtils.stringifyException to pass the 
> exception instance to the log call.
> -
>
> Key: HDFS-6226
> URL: https://issues.apache.org/jira/browse/HDFS-6226
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Chris Nauroth
>
> There are multiple places in the code that call 
> {{StringUtils#stringifyException}} to capture the stack trace in a string and 
> pass the result to a logging call.  The resulting information is identical to 
> passing the exception instance directly to the logger, i.e. 
> {{LOG.error("fail", e)}}, so we can simplify the code.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6203) check other namenode's state before HAadmin transitionToActive

2014-04-10 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965525#comment-13965525
 ] 

Uma Maheswara Rao G commented on HDFS-6203:
---

I think this is same as HDFS-2949 and some discussion there

> check other namenode's state before HAadmin transitionToActive
> --
>
> Key: HDFS-6203
> URL: https://issues.apache.org/jira/browse/HDFS-6203
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha
>Affects Versions: 2.3.0
>Reporter: patrick white
>Assignee: Kihwal Lee
>
> Current behavior is that the HAadmin -transitionToActive command will 
> complete the transition to Active even if the other namenode is already in 
> Active state. This is not an allowed condition and should be handled by 
> fencing; however, setting both namenodes active can happen accidentally with 
> relative ease, especially in a production environment when performing manual 
> maintenance operations. 
> If this situation does occur it is very serious and will likely cause data 
> loss, or at best require a difficult recovery to avoid data loss.
> This is requesting an enhancement to haadmin's -transitionToActive command, 
> to have HAadmin check the Active state of the other namenode before 
> completing the transition. If the other namenode is Active, then fail the 
> request because that namenode is already active.
> Not sure if there is a scenario where both namenodes being Active is valid 
> or desired, but to maintain functional compatibility a 'force' parameter 
> could be added to override this check and allow the previous behavior.
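
A rough sketch of the guard being requested, checking the peer namenode's HA 
state before allowing the transition; the helper name and timeout field below 
are illustrative, and the real command would also need the 'force' escape 
hatch described above:

{code}
// Sketch only: before transitioning the target NN to active, ask each other
// configured NN for its HA state and refuse if one is already active.
private boolean isAnotherNameNodeActive(Collection<HAServiceTarget> others,
    Configuration conf) throws IOException {
  for (HAServiceTarget other : others) {
    HAServiceProtocol proxy = other.getProxy(conf, rpcTimeoutMs /* illustrative */);
    if (proxy.getServiceStatus().getState()
        == HAServiceProtocol.HAServiceState.ACTIVE) {
      return true;  // peer already active; fail unless a force flag is given
    }
  }
  return false;
}
{code}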



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HDFS-4489) Use InodeID as as an identifier of a file in HDFS protocols and APIs

2014-04-10 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDFS-4489.
-

  Resolution: Fixed
   Fix Version/s: (was: 2.5.0)
  2.1.0-beta
  3.0.0
Target Version/s: 2.1.0-beta

Resolving to avoid spurious version updates.

> Use InodeID as as an identifier of a file in HDFS protocols and APIs
> 
>
> Key: HDFS-4489
> URL: https://issues.apache.org/jira/browse/HDFS-4489
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Brandon Li
>Assignee: Brandon Li
> Fix For: 3.0.0, 2.1.0-beta
>
> Attachments: 4434.optimized.patch
>
>
> The benefit of using InodeID to uniquely identify a file can be multiple 
> folds. Here are a few of them:
> 1. uniquely identify a file cross rename, related JIRAs include HDFS-4258, 
> HDFS-4437.
> 2. modification checks in tools like distcp. Since a file could have been 
> replaced or renamed, the file name and size combination is not reliable, 
> but the combination of file id and size is unique.
> 3. id based protocol support (e.g., NFS)
> 4. to make the pluggable block placement policy use fileid instead of 
> filename (HDFS-385).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HDFS-2831) Description of dfs.namenode.name.dir should be changed

2014-04-10 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G resolved HDFS-2831.
---

Resolution: Fixed

Let's resolve this.  Feel free to reopen with the reason if you disagree.

> Description of dfs.namenode.name.dir should be changed 
> ---
>
> Key: HDFS-2831
> URL: https://issues.apache.org/jira/browse/HDFS-2831
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 0.21.0, 0.23.0
> Environment: NA
>Reporter: J.Andreina
>Priority: Minor
> Fix For: 0.24.0
>
>
> {noformat}
> 
>   dfs.namenode.name.dir
>   file://${hadoop.tmp.dir}/dfs/name
>   Determines where on the local filesystem the DFS name node
>   should store the name table(fsimage).  If this is a comma-delimited list
>   of directories then the name table is replicated in all of the
>   directories, for redundancy. 
> 
> {noformat}
> In the above property the description is given as "Determines where on 
> the local filesystem the DFS name node should store the name table(fsimage).  
> ", but the directory stores both the name table (if name table means only 
> the fsimage) and the edits files. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-2831) Description of dfs.namenode.name.dir should be changed

2014-04-10 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965505#comment-13965505
 ] 

Rushabh S Shah commented on HDFS-2831:
--

Hey,
I was just going through the backlog of the 0.23.x versions and came across 
this jira.
I too agree with [~qwertymaniac]'s point.
[~andreina]: Can you please put forward your view on Harsh's comment so that we 
can resolve it?


> Description of dfs.namenode.name.dir should be changed 
> ---
>
> Key: HDFS-2831
> URL: https://issues.apache.org/jira/browse/HDFS-2831
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 0.21.0, 0.23.0
> Environment: NA
>Reporter: J.Andreina
>Priority: Minor
> Fix For: 0.24.0
>
>
> {noformat}
> 
>   dfs.namenode.name.dir
>   file://${hadoop.tmp.dir}/dfs/name
>   Determines where on the local filesystem the DFS name node
>   should store the name table(fsimage).  If this is a comma-delimited list
>   of directories then the name table is replicated in all of the
>   directories, for redundancy. 
> 
> {noformat}
> In the above property the description is given as "Determines where on 
> the local filesystem the DFS name node should store the name table(fsimage).  
> ", but the directory stores both the name table (if name table means only 
> the fsimage) and the edits files. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-3594) ListPathsServlet should not log a warning for paths that do not exist

2014-04-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965480#comment-13965480
 ] 

Hadoop QA commented on HDFS-3594:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12536047/HDFS-3594.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6641//console

This message is automatically generated.

> ListPathsServlet should not log a warning for paths that do not exist
> -
>
> Key: HDFS-3594
> URL: https://issues.apache.org/jira/browse/HDFS-3594
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 0.23.3
>Reporter: Robert Joseph Evans
> Attachments: HDFS-3594.patch, HDFS-3594.patch
>
>
> ListPathsServlet logs a warning message every time someone requests a listing 
> for a directory that does not exist.  This should be a debug or at most an 
> info message, because this is expected behavior.  People will ask for things 
> that do not exist.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-3594) ListPathsServlet should not log a warning for paths that do not exist

2014-04-10 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965474#comment-13965474
 ] 

Eric Payne commented on HDFS-3594:
--

This looks like it is trying to solve the same problem as HADOOP-10015. Should 
this JIRA be duped to that one?

> ListPathsServlet should not log a warning for paths that do not exist
> -
>
> Key: HDFS-3594
> URL: https://issues.apache.org/jira/browse/HDFS-3594
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 0.23.3
>Reporter: Robert Joseph Evans
> Attachments: HDFS-3594.patch, HDFS-3594.patch
>
>
> ListPathsServlet logs a warning message every time someone requests a listing 
> for a directory that does not exist.  This should be a debug or at most an 
> info message, because this is expected behavior.  People will ask for things 
> that do not exist.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6203) check other namenode's state before HAadmin transitionToActive

2014-04-10 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-6203:
-

Target Version/s: 2.5.0

> check other namenode's state before HAadmin transitionToActive
> --
>
> Key: HDFS-6203
> URL: https://issues.apache.org/jira/browse/HDFS-6203
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha
>Affects Versions: 2.3.0
>Reporter: patrick white
>Assignee: Kihwal Lee
>
> Current behavior is that the HAadmin -transitionToActive command will 
> complete the transition to Active even if the other namenode is already in 
> Active state. This is not an allowed condition and should be handled by 
> fencing; however, setting both namenodes active can happen accidentally with 
> relative ease, especially in a production environment when performing manual 
> maintenance operations. 
> If this situation does occur it is very serious and will likely cause data 
> loss, or at best require a difficult recovery to avoid data loss.
> This is requesting an enhancement to haadmin's -transitionToActive command, 
> to have HAadmin check the Active state of the other namenode before 
> completing the transition. If the other namenode is Active, then fail the 
> request because that namenode is already active.
> Not sure if there is a scenario where both namenodes being Active is valid 
> or desired, but to maintain functional compatibility a 'force' parameter 
> could be added to override this check and allow the previous behavior.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HDFS-6203) check other namenode's state before HAadmin transitionToActive

2014-04-10 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee reassigned HDFS-6203:


Assignee: Kihwal Lee

> check other namenode's state before HAadmin transitionToActive
> --
>
> Key: HDFS-6203
> URL: https://issues.apache.org/jira/browse/HDFS-6203
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha
>Affects Versions: 2.3.0
>Reporter: patrick white
>Assignee: Kihwal Lee
>
> Current behavior is that the HAadmin -transitionToActive command will 
> complete the transition to Active even if the other namenode is already in 
> Active state. This is not an allowed condition and should be handled by 
> fencing; however, setting both namenodes active can happen accidentally with 
> relative ease, especially in a production environment when performing manual 
> maintenance operations. 
> If this situation does occur it is very serious and will likely cause data 
> loss, or at best require a difficult recovery to avoid data loss.
> This is requesting an enhancement to haadmin's -transitionToActive command, 
> to have HAadmin check the Active state of the other namenode before 
> completing the transition. If the other namenode is Active, then fail the 
> request because that namenode is already active.
> Not sure if there is a scenario where both namenodes being Active is valid 
> or desired, but to maintain functional compatibility a 'force' parameter 
> could be added to override this check and allow the previous behavior.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6215) Wrong error message for upgrade

2014-04-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965348#comment-13965348
 ] 

Hudson commented on HDFS-6215:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1753 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1753/])
HDFS-6215. Wrong error message for upgrade. (Kihwal Lee via jeagles) (jeagles: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586011)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java


> Wrong error message for upgrade
> ---
>
> Key: HDFS-6215
> URL: https://issues.apache.org/jira/browse/HDFS-6215
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>Priority: Minor
> Fix For: 3.0.0, 2.5.0, 2.4.1
>
> Attachments: HDFS-6215.patch
>
>
> UPGRADE is printed instead of -upgrade.
> {panel}
> File system image contains an old layout version -51.
> An upgrade to version -56 is required.
> Please restart NameNode with the "-rollingUpgrade started" option if a rolling
> upgraded is already started; or restart NameNode with the "UPGRADE" to start 
> a new upgrade.
> {panel}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6225) Remove the o.a.h.hdfs.server.common.UpgradeStatusReport

2014-04-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965346#comment-13965346
 ] 

Hudson commented on HDFS-6225:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1753 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1753/])
HDFS-6225. Remove the o.a.h.hdfs.server.common.UpgradeStatusReport. Contributed 
by Haohui Mai. (wheat9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586181)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/UpgradeStatusReport.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/common/TestJspHelper.java


> Remove the o.a.h.hdfs.server.common.UpgradeStatusReport
> ---
>
> Key: HDFS-6225
> URL: https://issues.apache.org/jira/browse/HDFS-6225
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.5.0
>
> Attachments: HDFS-6225.000.patch
>
>
> The class o.a.h.hdfs.server.common.UpgradeStatusReport has been dead since 
> HDFS-2686. This jira proposes to remove it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6228) comments typo fix for FsDatasetImpl.java

2014-04-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965350#comment-13965350
 ] 

Hudson commented on HDFS-6228:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1753 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1753/])
HDFS-6228. comments typo fix for FsDatasetImpl.java Contributed by 
zhaoyunjiong. (umamahesh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586264)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java


> comments typo fix for FsDatasetImpl.java
> 
>
> Key: HDFS-6228
> URL: https://issues.apache.org/jira/browse/HDFS-6228
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: zhaoyunjiong
>Assignee: zhaoyunjiong
>Priority: Trivial
> Fix For: 3.0.0
>
> Attachments: HDFS-6228.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6204) TestRBWBlockInvalidation may fail

2014-04-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965347#comment-13965347
 ] 

Hudson commented on HDFS-6204:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1753 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1753/])
HDFS-6204. Fix TestRBWBlockInvalidation: change the last sleep to a loop. 
(szetszwo: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586039)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestRBWBlockInvalidation.java


> TestRBWBlockInvalidation may fail
> -
>
> Key: HDFS-6204
> URL: https://issues.apache.org/jira/browse/HDFS-6204
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>Priority: Minor
> Fix For: 2.4.1
>
> Attachments: h6204_20140408.patch
>
>
> {code}
> java.lang.AssertionError: There should not be any replica in the 
> corruptReplicasMap expected:<0> but was:<1>
>   at org.junit.Assert.fail(Assert.java:93)
>   at org.junit.Assert.failNotEquals(Assert.java:647)
>   at org.junit.Assert.assertEquals(Assert.java:128)
>   at org.junit.Assert.assertEquals(Assert.java:472)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation.testBlockInvalidationWhenRBWReplicaMissedInDN(TestRBWBlockInvalidation.java:137)
> {code}
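
The fix referenced above replaces a fixed sleep with a polling loop; a hedged 
sketch of that general pattern (not the literal patch), with a hypothetical 
predicate standing in for the corruptReplicasMap check:

{code}
import com.google.common.base.Supplier;
import org.apache.hadoop.test.GenericTestUtils;

// Poll until the condition holds (or time out) instead of sleeping a fixed
// amount and asserting once; getCorruptReplicaCount() is a hypothetical helper.
GenericTestUtils.waitFor(new Supplier<Boolean>() {
  @Override
  public Boolean get() {
    return getCorruptReplicaCount() == 0;
  }
}, 100, 30000);
{code}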



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6206) DFSUtil.substituteForWildcardAddress may throw NPE

2014-04-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965351#comment-13965351
 ] 

Hudson commented on HDFS-6206:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1753 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1753/])
HDFS-6206. Fix NullPointerException in DFSUtil.substituteForWildcardAddress. 
(szetszwo: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586034)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java


> DFSUtil.substituteForWildcardAddress may throw NPE
> --
>
> Key: HDFS-6206
> URL: https://issues.apache.org/jira/browse/HDFS-6206
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
> Fix For: 2.4.1
>
> Attachments: h6206_20140408.patch
>
>
> InetSocketAddress.getAddress() may return null if the address is unresolved.  
> In such a case, DFSUtil.substituteForWildcardAddress may throw an NPE.
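
A minimal illustration of the failure mode and the obvious guard (a sketch 
under the assumption that the method resolves the configured address with 
NetUtils; not the committed fix):

{code}
InetSocketAddress sockAddr = NetUtils.createSocketAddr(configuredAddress);
// getAddress() is null when the host could not be resolved; calling
// isAnyLocalAddress() on it directly would throw a NullPointerException.
if (sockAddr.getAddress() != null && sockAddr.getAddress().isAnyLocalAddress()) {
  // substitute the wildcard host with the default hostname here
}
// otherwise return configuredAddress unchanged
{code}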



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6208) DataNode caching can leak file descriptors.

2014-04-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13965342#comment-13965342
 ] 

Hudson commented on HDFS-6208:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1753 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1753/])
HDFS-6208. DataNode caching can leak file descriptors. Contributed by Chris 
Nauroth. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1586154)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetCache.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/MappableBlock.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCacheDirectives.java


> DataNode caching can leak file descriptors.
> ---
>
> Key: HDFS-6208
> URL: https://issues.apache.org/jira/browse/HDFS-6208
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.4.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 3.0.0, 2.4.1
>
> Attachments: HDFS-6208.1.patch
>
>
> In the DataNode, management of mmap'd/mlock'd block files can leak file 
> descriptors.
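
For context, the general pattern involved is shown below as a sketch (assuming 
the mapping is created with FileChannel.map, whose MappedByteBuffer stays 
valid after the channel is closed); this is not the literal patch:

{code}
FileInputStream blockIn = null;
try {
  blockIn = new FileInputStream(blockFile);
  MappedByteBuffer mmap =
      blockIn.getChannel().map(FileChannel.MapMode.READ_ONLY, 0, length);
  // ... mlock the buffer, verify the checksum, hand it to the cache ...
} finally {
  // The mapping stays valid after close; forgetting this leaks one file
  // descriptor per cached block replica.
  IOUtils.closeQuietly(blockIn);  // org.apache.commons.io.IOUtils
}
{code}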



--
This message was sent by Atlassian JIRA
(v6.2#6252)

