[jira] [Commented] (HDFS-4632) globStatus using backslash for escaping does not work on Windows

2013-08-15 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13740732#comment-13740732
 ] 

Ivan Mitic commented on HDFS-4632:
--

I am also fine with Chris’ proposal #1 from above. Changing path behavior would 
indeed break many downstream projects. The fact that java.io.File converts all 
paths to backslash-paths on Windows complicates things further. 

I also did an investigation into Windows paths in Hadoop a while back, and this 
issue came up. On top of the above 3 proposals, we have 2 other options:
4. Replace a backslash with a forward slash only if the backslash is not 
followed by a meta-character ('*', '?', etc.). This can be achieved with a 
negative lookahead regular expression (regex = (?!(\\*|\\?))). 
5. Use '^' as the escape character on the Windows platform
 - On the negative side, this model would push the platform differences onto 
projects built on top of Hadoop instead of keeping them consistent. Broadly 
available Hadoop documentation could still lead someone to make the mistake 
described in HADOOP-8139. Just adding this for completeness; I don’t think we 
want to go this route.

#3 seems like a nice long-term solution. We would have to do a deeper 
dive/prototype to see what it would take, but it seems like a good proposal at 
a high level. 
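For illustration, option #4 could be sketched as follows. This is only a sketch of the lookahead idea, not actual Hadoop code, and the metacharacter set in the pattern is an assumption for the example rather than the real glob set:

```java
// Sketch of option #4: rewrite Windows path separators to forward slashes,
// but leave a backslash alone when it escapes a glob metacharacter.
// The metacharacter set [*?[]{}] is assumed here for illustration.
public final class WindowsGlobPaths {

    // A backslash NOT followed by a glob metacharacter (negative lookahead)
    // is treated as a path separator and rewritten to '/'.
    private static final String SEPARATOR_PATTERN = "\\\\(?![*?\\[\\]{}])";

    public static String normalize(String path) {
        return path.replaceAll(SEPARATOR_PATTERN, "/");
    }
}
```

With this rule, `C:\data\logs` becomes `C:/data/logs`, while the escaping backslash in `C:\data\*.txt` survives as `C:/data\*.txt`.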


 globStatus using backslash for escaping does not work on Windows
 

 Key: HDFS-4632
 URL: https://issues.apache.org/jira/browse/HDFS-4632
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.1-beta
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HDFS-4632-trunk.patch


 {{Path}} normalizes backslashes to forward slashes on Windows.  Later, when 
 the path is passed to {{FileSystem#globStatus}}, the backslash is no longer 
 treated as an escape character.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-5065) TestSymlinkHdfsDisable fails on Windows

2013-08-10 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13736171#comment-13736171
 ] 

Ivan Mitic commented on HDFS-5065:
--

Thanks Chris for the review! Will commit this shortly. 

 TestSymlinkHdfsDisable fails on Windows
 ---

 Key: HDFS-5065
 URL: https://issues.apache.org/jira/browse/HDFS-5065
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-5065.patch


 {noformat}
 Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 8.798 sec  
 FAILURE!
 testSymlinkHdfsDisable(org.apache.hadoop.fs.TestSymlinkHdfsDisable)  Time 
 elapsed: 8704 sec   ERROR!
 java.lang.IllegalArgumentException: Pathname 
 /I:/svn/tr/hadoop-hdfs-project/hadoop-hdfs/target/test/data/tO9GO35Iup from 
 hdfs://testhostname:34452/I:/svn/tr/hadoop-hdfs-project/hadoop-hdfs/target/test/data/tO9GO35Iup
  is not a valid DFS filename.
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:184)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.access$1(DistributedFileSystem.java:180)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:816)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:1)
   at 
 org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:830)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:805)
   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1932)
   at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:232)
   at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:224)
   at 
 org.apache.hadoop.fs.TestSymlinkHdfsDisable.testSymlinkHdfsDisable(TestSymlinkHdfsDisable.java:49)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
   at 
 org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:62)
 {noformat}



[jira] [Updated] (HDFS-5065) TestSymlinkHdfsDisable fails on Windows

2013-08-10 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-5065:
-

  Resolution: Fixed
   Fix Version/s: 2.3.0
  3.0.0
Target Version/s: 3.0.0, 2.3.0  (was: 3.0.0, 2.1.1-beta)
  Status: Resolved  (was: Patch Available)

Patch committed to trunk and branch-2.

 TestSymlinkHdfsDisable fails on Windows
 ---

 Key: HDFS-5065
 URL: https://issues.apache.org/jira/browse/HDFS-5065
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Fix For: 3.0.0, 2.3.0

 Attachments: HDFS-5065.patch




[jira] [Updated] (HDFS-5065) TestSymlinkHdfsDisable fails on Windows

2013-08-10 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-5065:
-

Affects Version/s: (was: 2.1.0-beta)
   2.3.0

 TestSymlinkHdfsDisable fails on Windows
 ---

 Key: HDFS-5065
 URL: https://issues.apache.org/jira/browse/HDFS-5065
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 2.3.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Fix For: 3.0.0, 2.3.0

 Attachments: HDFS-5065.patch




[jira] [Moved] (HDFS-5065) TestSymlinkHdfsDisable fails on Windows

2013-08-05 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic moved HADOOP-9824 to HDFS-5065:
--

Affects Version/s: (was: 2.1.0-beta)
   (was: 3.0.0)
   2.1.0-beta
   3.0.0
  Key: HDFS-5065  (was: HADOOP-9824)
  Project: Hadoop HDFS  (was: Hadoop Common)

 TestSymlinkHdfsDisable fails on Windows
 ---

 Key: HDFS-5065
 URL: https://issues.apache.org/jira/browse/HDFS-5065
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Ivan Mitic
Assignee: Ivan Mitic



[jira] [Updated] (HDFS-5065) TestSymlinkHdfsDisable fails on Windows

2013-08-05 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-5065:
-

Attachment: HDFS-5065.patch

Attaching the fix. This is a common issue with paths on Windows; the fix is to 
avoid using a local file system path in the context of HDFS. The test was 
added recently, which is why this was not caught earlier. 
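As an illustrative stand-in for the check that rejects the path in the stack trace above (the real DistributedFileSystem#getPathName validation differs in detail; this class and method are hypothetical):

```java
// Simplified sketch of why "/I:/svn/tr/..." is "not a valid DFS filename":
// a DFS pathname must be absolute and slash-separated, and must not embed
// a Windows drive spec the way a reused local test directory does.
public final class DfsPathCheck {

    public static boolean isValidDfsName(String path) {
        return path.startsWith("/")            // must be absolute
            && !path.contains("\\")            // no Windows separators
            && !path.matches("^/[A-Za-z]:/.*"); // no drive component like /I:/
    }
}
```

The fix described above sidesteps the problem by building the test path relative to the DFS working directory instead of the local build directory.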

 TestSymlinkHdfsDisable fails on Windows
 ---

 Key: HDFS-5065
 URL: https://issues.apache.org/jira/browse/HDFS-5065
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-5065.patch




[jira] [Updated] (HDFS-5065) TestSymlinkHdfsDisable fails on Windows

2013-08-05 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-5065:
-

Status: Patch Available  (was: Open)

 TestSymlinkHdfsDisable fails on Windows
 ---

 Key: HDFS-5065
 URL: https://issues.apache.org/jira/browse/HDFS-5065
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-5065.patch




[jira] [Updated] (HDFS-4677) Editlog should support synchronous writes

2013-06-08 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4677:
-

Attachment: HDFS-4677.3.patch

Uploading the latest patch again.

Thanks Chris for the review, will commit this once Jenkins comes back with a +1.

 Editlog should support synchronous writes
 -

 Key: HDFS-4677
 URL: https://issues.apache.org/jira/browse/HDFS-4677
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 1-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4677.2.patch, HDFS-4677.3.patch, HDFS-4677.3.patch, 
 HDFS-4677.branch-1-win.patch, HDFS-4677.branch-2.patch, HDFS-4677.patch


 In the current implementation, the NameNode editlog syncs to persistent 
 storage using the {{FileChannel#force}} Java API. This API is documented to 
 be slower than the alternative where {{RandomAccessFile}} is opened with the 
 rws flags (synchronous writes). 
 We instrumented {{FileChannel#force}} on Windows, and in some 
 software/hardware configurations it can perform significantly slower than the 
 “rws” alternative.
 In terms of the Windows APIs, FileChannel#force internally calls 
 [FlushFileBuffers|http://msdn.microsoft.com/en-us/library/windows/desktop/aa364439(v=vs.85).aspx]
  while RandomAccessFile (“rws”) opens the file with the 
 [FILE_FLAG_WRITE_THROUGH flag|http://support.microsoft.com/kb/99794]. 
 With this Jira I'd like to introduce a flag that provides a means to 
 configure the NameNode to use synchronous writes. There is a catch, though: 
 the behavior of the rws flags is platform and hardware specific and might not 
 provide the same level of guarantees as {{FileChannel#force}} w.r.t. flushing 
 the on-disk cache. This is an expert-level setting and should be documented 
 as such.
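The two sync strategies being compared can be sketched as follows. The class and method names here are illustrative, not the actual FSEditLog code:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

// Illustrative comparison of the two durability strategies discussed above.
public final class EditLogSync {

    // Current behavior: write through the channel, then force() to flush
    // file content (and, with true, metadata) to the storage device.
    static void writeWithForce(RandomAccessFile f, byte[] b) throws IOException {
        FileChannel ch = f.getChannel();
        ch.write(ByteBuffer.wrap(b));
        ch.force(true); // maps to FlushFileBuffers on Windows
    }

    // Proposed alternative: open with "rws" so every write is synchronous
    // (FILE_FLAG_WRITE_THROUGH on Windows); no explicit force() needed.
    static RandomAccessFile openSynchronous(String path) throws IOException {
        return new RandomAccessFile(path, "rws");
    }
}
```

Note the caveat from the description: whether "rws" actually flushes the on-disk cache is platform and hardware specific.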



[jira] [Updated] (HDFS-4677) Editlog should support synchronous writes

2013-06-08 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4677:
-

Attachment: (was: HDFS-4677.branch-2.patch)

 Editlog should support synchronous writes
 -

 Key: HDFS-4677
 URL: https://issues.apache.org/jira/browse/HDFS-4677
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 1-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4677.2.patch, HDFS-4677.3.patch, HDFS-4677.3.patch, 
 HDFS-4677.branch-1-win.patch, HDFS-4677.patch




[jira] [Commented] (HDFS-4677) Editlog should support synchronous writes

2013-06-08 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13678868#comment-13678868
 ] 

Ivan Mitic commented on HDFS-4677:
--

After the recent merges to branch-2, trunk patch now directly applies. I 
deleted the branch-2 patch from the Jira.

 Editlog should support synchronous writes
 -

 Key: HDFS-4677
 URL: https://issues.apache.org/jira/browse/HDFS-4677
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 1-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4677.2.patch, HDFS-4677.3.patch, HDFS-4677.3.patch, 
 HDFS-4677.branch-1-win.patch, HDFS-4677.patch




[jira] [Updated] (HDFS-4677) Editlog should support synchronous writes

2013-06-08 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4677:
-

Attachment: HDFS-4677.branch-1-win.2.patch

Rebasing the branch-1-win patch.

 Editlog should support synchronous writes
 -

 Key: HDFS-4677
 URL: https://issues.apache.org/jira/browse/HDFS-4677
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 1-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4677.2.patch, HDFS-4677.3.patch, HDFS-4677.3.patch, 
 HDFS-4677.branch-1-win.2.patch, HDFS-4677.branch-1-win.patch, HDFS-4677.patch




[jira] [Updated] (HDFS-4677) Editlog should support synchronous writes

2013-06-08 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4677:
-

   Resolution: Fixed
Fix Version/s: 2.1.0-beta
   1-win
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Patch committed to trunk, branch-2, branch-2.1-beta and branch-1-win.

Thank you Chris for the quality review!

 Editlog should support synchronous writes
 -

 Key: HDFS-4677
 URL: https://issues.apache.org/jira/browse/HDFS-4677
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 1-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Fix For: 1-win, 2.1.0-beta

 Attachments: HDFS-4677.2.patch, HDFS-4677.3.patch, HDFS-4677.3.patch, 
 HDFS-4677.branch-1-win.2.patch, HDFS-4677.branch-1-win.patch, HDFS-4677.patch




[jira] [Commented] (HDFS-4871) Skip failing commons tests on Windows

2013-06-04 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13674661#comment-13674661
 ] 

Ivan Mitic commented on HDFS-4871:
--

Thanks Arpit, +1 on the proposal. Getting CI working on Windows is a great step 
forward for Hadoop on Windows!

bq. JIRAs for remaining failing tests to follow soon.
My personal preference would be to file a single parent task Jira, and as we 
fix individual tests we can link their Jiras to the parent Jira.

bq. However I am running into some problems with using assumptions to skip 
tests in branch-2, I am not sure if this is due to a different version of 
jUnit. I am looking into it.
Can't we do this in a less intrusive manner, for example in test-patch (or its 
Windows equivalent), or some other script external to the test code?

 Skip failing commons tests on Windows
 -

 Key: HDFS-4871
 URL: https://issues.apache.org/jira/browse/HDFS-4871
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.1.0-beta
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 2.1.0-beta


 This is a temporary fix proposed to get CI working. We will skip the 
 following failing tests on Windows:
 # TestChRootedFs
 # TestFSMainOperationsLocalFileSystem
 # TestFcCreateMkdirLocalFs
 # TestFcMainOperationsLocalFs
 # TestFcPermissionsLocalFs
 # TestLocalFSFileContextSymlink - HADOOP-9527
 # TestLocalFileSystem
 # TestShellCommandFencer - HADOOP-9526
 # TestSocketIOWithTimeout - HADOOP-8982
 # TestViewFsLocalFs
 # TestViewFsTrash
 # TestViewFsWithAuthorityLocalFs
 The tests will be re-enabled as we fix each. JIRAs for remaining failing 
 tests to follow soon.



[jira] [Updated] (HDFS-4677) Editlog should support synchronous writes

2013-05-23 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4677:
-

Attachment: HDFS-4677.branch-2.patch

Attaching the branch-2 compatible patch.

 Editlog should support synchronous writes
 -

 Key: HDFS-4677
 URL: https://issues.apache.org/jira/browse/HDFS-4677
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 1-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4677.2.patch, HDFS-4677.3.patch, 
 HDFS-4677.branch-1-win.patch, HDFS-4677.branch-2.patch, HDFS-4677.patch





[jira] [Commented] (HDFS-4677) Editlog should support synchronous writes

2013-05-23 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13664923#comment-13664923
 ] 

Ivan Mitic commented on HDFS-4677:
--

bq. It would be possible to get this into branch-2, considering that the patch 
doesn't really have any Windows-specific code. If we put it in branch-2 right 
now, then it would be less code to merge later.
Thanks Chris. It was straightforward to rebase the patch for branch-2; I just 
attached it.

 Editlog should support synchronous writes
 -

 Key: HDFS-4677
 URL: https://issues.apache.org/jira/browse/HDFS-4677
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 1-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4677.2.patch, HDFS-4677.3.patch, 
 HDFS-4677.branch-1-win.patch, HDFS-4677.branch-2.patch, HDFS-4677.patch





[jira] [Commented] (HDFS-4839) add NativeIO#mkdirs, that provides an error message on failure

2013-05-21 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13662737#comment-13662737
 ] 

Ivan Mitic commented on HDFS-4839:
--

Thanks Colin for providing the additional context. Some comments inline.

bq. It's not difficult to give relevant error messages on all platforms, 
including Windows and all the UNIXes.
I was never arguing that it’s hard to give error messages on all platforms. 
However, each time we add platform-dependent code, it adds cost over time and 
makes cross-platform support more complex. IOW, let's do this only when 
absolutely necessary. 

bq. I have had to diagnose issues in production clusters where rename or mkdir 
failed, and the logs did not reveal why. It's not fun. And it can lead to very 
serious code and/or system administration problems getting misdiagnosed.
Thanks, this is a valid point, I agree. 
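
The limitation under discussion is easy to reproduce with the plain JDK: 
{{File#mkdirs}} reports failure as a bare boolean, with no errno or message 
(a self-contained sketch, not the proposed NativeIO code):

 {code}
import java.io.File;
import java.io.IOException;

public class MkdirsDemo {
    public static void main(String[] args) throws IOException {
        // Create a regular file, then ask mkdirs to create a directory
        // "under" it: java.io.File#mkdirs just returns false, with no
        // indication of why it failed.
        File notADir = File.createTempFile("not-a-dir", ".tmp");
        notADir.deleteOnExit();
        File child = new File(notADir, "sub");

        boolean ok = child.mkdirs();
        System.out.println("mkdirs succeeded: " + ok);
    }
}
 {code}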



 add NativeIO#mkdirs, that provides an error message on failure
 --

 Key: HDFS-4839
 URL: https://issues.apache.org/jira/browse/HDFS-4839
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.5-beta
Reporter: Colin Patrick McCabe
Priority: Minor

 It would be nice to have a variant of mkdirs that provided an error message 
 explaining why it failed.  This would make it easier to debug certain failing 
 unit tests that rely on mkdir / mkdirs-- the ChecksumFilesystem tests, for 
 example.



[jira] [Commented] (HDFS-4837) Allow DFSAdmin to run when HDFS is not the default file system

2013-05-21 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13662772#comment-13662772
 ] 

Ivan Mitic commented on HDFS-4837:
--

+1 on the proposal

I reviewed the patch; the approach looks good to me, with a few 
comments/questions below:
1. One thing worth checking is what this means for HA-enabled clusters, where 
you have two configured namenodes.
2. Should we also query for DFS in DFSAdmin#setBalancerBandwidth()?
3. It would be good to add a unit test for the new functionality. TestDFSShell 
looks like a good place since it already has a test case for DFSAdmin (see 
testInvalidShell).



 Allow DFSAdmin to run when HDFS is not the default file system
 --

 Key: HDFS-4837
 URL: https://issues.apache.org/jira/browse/HDFS-4837
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: Mostafa Elhemali
Assignee: Mostafa Elhemali
 Attachments: HDFS-4837.patch


 When Hadoop is running a different default file system than HDFS, but still 
 has an HDFS namenode running, we are unable to run dfsadmin commands.
 I suggest that DFSAdmin use the same mechanism as NameNode does today to get 
 its address: look at dfs.namenode.rpc-address, and if not set, fall back to 
 getting it from the default file system.
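
 The proposed lookup order might look roughly like this (an illustrative 
 sketch with a hypothetical class name, not the actual DFSAdmin patch):

 {code}
import java.net.URI;

public class NameNodeAddressResolver {
    // Prefer an explicit dfs.namenode.rpc-address value; otherwise fall
    // back to the authority of the default file system URI (fs.defaultFS).
    static String resolve(String rpcAddress, String defaultFs) {
        if (rpcAddress != null && !rpcAddress.isEmpty()) {
            return rpcAddress;
        }
        URI uri = URI.create(defaultFs);
        return uri.getHost() + ":" + uri.getPort();
    }

    public static void main(String[] args) {
        // HDFS is not the default FS, but the RPC address is set explicitly.
        System.out.println(resolve("nn-host:8020", "wasb://container@account"));
        // No explicit address: fall back to the default FS authority.
        System.out.println(resolve(null, "hdfs://nn-host:8020"));
    }
}
 {code}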



[jira] [Commented] (HDFS-4677) Editlog should support synchronous writes

2013-05-21 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13663651#comment-13663651
 ] 

Ivan Mitic commented on HDFS-4677:
--

Awesome, big thanks Chris! Looks good, +1. Let me prepare the branch-1-win 
patch.

Quick question, should this go to branch-2 as well, given that there was a bit 
of refactoring going on? This would make things easier for future backports. I 
just checked, and the patch is almost completely compatible. 



 Editlog should support synchronous writes
 -

 Key: HDFS-4677
 URL: https://issues.apache.org/jira/browse/HDFS-4677
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 1-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4677.2.patch, HDFS-4677.3.patch, HDFS-4677.patch





[jira] [Updated] (HDFS-4677) Editlog should support synchronous writes

2013-05-21 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4677:
-

Attachment: HDFS-4677.branch-1-win.patch

Attaching the branch-1-win compatible patch. 


 Editlog should support synchronous writes
 -

 Key: HDFS-4677
 URL: https://issues.apache.org/jira/browse/HDFS-4677
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 1-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4677.2.patch, HDFS-4677.3.patch, 
 HDFS-4677.branch-1-win.patch, HDFS-4677.patch





[jira] [Commented] (HDFS-4677) Editlog should support synchronous writes

2013-05-21 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13663689#comment-13663689
 ] 

Ivan Mitic commented on HDFS-4677:
--

bq. -1 overall. Here are the results of testing the latest attachment 
This is expected: Jenkins tried to apply the branch-1 patch to trunk and 
failed. The trunk-compatible patch (HDFS-4677.3.patch) already received a +1 
from Jenkins.

 Editlog should support synchronous writes
 -

 Key: HDFS-4677
 URL: https://issues.apache.org/jira/browse/HDFS-4677
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 1-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4677.2.patch, HDFS-4677.3.patch, 
 HDFS-4677.branch-1-win.patch, HDFS-4677.patch





[jira] [Commented] (HDFS-4839) add NativeIO#mkdirs, that provides an error message on failure

2013-05-21 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13663711#comment-13663711
 ] 

Ivan Mitic commented on HDFS-4839:
--

Thanks Colin and Chris.

bq. Let's create JIRAs to use the JDK7 APIs when they become available to us.
Good idea! I created HADOOP-9590 and documented all the problems we ran into 
w.r.t. file operations on JDK6. Feel free to add to the Jira if I missed 
something.


 add NativeIO#mkdirs, that provides an error message on failure
 --

 Key: HDFS-4839
 URL: https://issues.apache.org/jira/browse/HDFS-4839
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.5-beta
Reporter: Colin Patrick McCabe
Priority: Minor




[jira] [Commented] (HDFS-4839) add NativeIO#mkdirs, that provides an error message on failure

2013-05-20 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13662515#comment-13662515
 ] 

Ivan Mitic commented on HDFS-4839:
--

Folks, I am sorry to say this, but my vote is not to go in this direction. I 
see NativeIO as a means to implement functionality that simply cannot be 
achieved in Java in a cross-platform-friendly way. Java already made a 
trade-off not to return the error code from mkdirs/rename and similar APIs, as 
it would be hard to achieve this in a cross-platform-consistent way. I would 
argue that an error code for debugging unit tests is not a strong enough case 
to introduce this discrepancy between platforms. There will be many other 
places where going down the stack gains us greater control/flexibility; let's 
do this only when it is absolutely needed.

I saw that we already introduced rename for similar reasons. My initial thought 
when I saw a Jira to implement the same functionality on Windows was, why not 
just remove the native implementation altogether :)

Feel free to comment with your views. I am just sharing my honest opinion.

 add NativeIO#mkdirs, that provides an error message on failure
 --

 Key: HDFS-4839
 URL: https://issues.apache.org/jira/browse/HDFS-4839
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.5-beta
Reporter: Colin Patrick McCabe
Priority: Minor




[jira] [Updated] (HDFS-4677) Editlog should support synchronous writes

2013-05-18 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4677:
-

Attachment: HDFS-4677.2.patch

Attaching the updated patch with the following changes:
 - Addressing Chris' feedback to pass the Configuration object from FSEditLog 
down to EditLogFileOutputStream
 - Changing the new config name

One thing that bothers me in the latest patch is now having two constructors 
for FileJournalManager and EditLogFileOutputStream, one with conf and one 
without. Given that the one with conf is the right choice for most cases, it 
might make sense to lose the other one. However, making this change would 
further increase the scope of this simple Jira, so I’m deferring this question 
to the community. 


 Editlog should support synchronous writes
 -

 Key: HDFS-4677
 URL: https://issues.apache.org/jira/browse/HDFS-4677
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 1-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4677.2.patch, HDFS-4677.patch





[jira] [Commented] (HDFS-4610) Move to using common utils FileUtil#setReadable/Writable/Executable and FileUtil#canRead/Write/Execute

2013-04-28 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13644223#comment-13644223
 ] 

Ivan Mitic commented on HDFS-4610:
--

Thanks Chris! As a follow-up on your question about TestCheckpoint and 
TestNNStorageRetentionFunctional, I filed HADOOP-9525. Please check it out.

 Move to using common utils FileUtil#setReadable/Writable/Executable and 
 FileUtil#canRead/Write/Execute
 --

 Key: HDFS-4610
 URL: https://issues.apache.org/jira/browse/HDFS-4610
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4610.commonfileutils.2.patch, 
 HDFS-4610.commonfileutils.patch


 Switch to using common utils described in HADOOP-9413 that work well 
 cross-platform.



[jira] [Updated] (HDFS-4610) Move to using common utils FileUtil#setReadable/Writable/Executable and FileUtil#canRead/Write/Execute

2013-04-28 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4610:
-

Attachment: HDFS-4610.commonfileutils.3.patch

Rebasing the patch. 

 Move to using common utils FileUtil#setReadable/Writable/Executable and 
 FileUtil#canRead/Write/Execute
 --

 Key: HDFS-4610
 URL: https://issues.apache.org/jira/browse/HDFS-4610
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4610.commonfileutils.2.patch, 
 HDFS-4610.commonfileutils.3.patch, HDFS-4610.commonfileutils.patch





[jira] [Commented] (HDFS-4610) Move to using common utils FileUtil#setReadable/Writable/Executable and FileUtil#canRead/Write/Execute

2013-04-26 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13643519#comment-13643519
 ] 

Ivan Mitic commented on HDFS-4610:
--

Thanks Chris and Arpit for the review and comments.

bq. I do see that this patch is making changes in TestCheckpoint and 
TestNNStorageRetentionFunctional though. Ivan, can you clarify if this patch 
makes these 2 tests pass for you?
Thanks, let me take a look. I did not explicitly try to debug every unit test I 
changed that was already failing. My main goal was to add the missing 
functionality for Windows and set us up for better cross-platform support. 

 Move to using common utils FileUtil#setReadable/Writable/Executable and 
 FileUtil#canRead/Write/Execute
 --

 Key: HDFS-4610
 URL: https://issues.apache.org/jira/browse/HDFS-4610
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4610.commonfileutils.2.patch, 
 HDFS-4610.commonfileutils.patch





[jira] [Commented] (HDFS-4610) Move to using common utils FileUtil#setReadable/Writable/Executable and FileUtil#canRead/Write/Execute

2013-04-24 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13640888#comment-13640888
 ] 

Ivan Mitic commented on HDFS-4610:
--

bq. TestNNStorageRetentionFunctional is another HDFS test that may benefit from 
the new utils. It currently uses File#setExecutable.
Thanks Chris, I prepared the updated patch, did not attach it yet, will do so 
now.

 Move to using common utils FileUtil#setReadable/Writable/Executable and 
 FileUtil#canRead/Write/Execute
 --

 Key: HDFS-4610
 URL: https://issues.apache.org/jira/browse/HDFS-4610
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4610.commonfileutils.2.patch, 
 HDFS-4610.commonfileutils.patch





[jira] [Updated] (HDFS-4610) Move to using common utils FileUtil#setReadable/Writable/Executable and FileUtil#canRead/Write/Execute

2013-04-24 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4610:
-

Attachment: HDFS-4610.commonfileutils.2.patch

Attaching the updated patch after rebase.

 Move to using common utils FileUtil#setReadable/Writable/Executable and 
 FileUtil#canRead/Write/Execute
 --

 Key: HDFS-4610
 URL: https://issues.apache.org/jira/browse/HDFS-4610
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4610.commonfileutils.2.patch, 
 HDFS-4610.commonfileutils.patch





[jira] [Commented] (HDFS-4722) TestGetConf#testFederation times out on Windows

2013-04-22 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13638260#comment-13638260
 ] 

Ivan Mitic commented on HDFS-4722:
--

Thanks Daryn!

 TestGetConf#testFederation times out on Windows
 ---

 Key: HDFS-4722
 URL: https://issues.apache.org/jira/browse/HDFS-4722
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4722.patch


 Test times out on the below stack:
 {code}
 java.lang.Exception: test timed out after 1 milliseconds
   at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
   at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:849)
   at java.net.InetAddress.getAddressFromNameService(InetAddress.java:1202)
   at java.net.InetAddress.getAllByName0(InetAddress.java:1153)
   at java.net.InetAddress.getAllByName(InetAddress.java:1083)
   at java.net.InetAddress.getAllByName(InetAddress.java:1019)
   at java.net.InetAddress.getByName(InetAddress.java:969)
   at 
 org.apache.hadoop.security.SecurityUtil$StandardHostResolver.getByName(SecurityUtil.java:543)
   at 
 org.apache.hadoop.security.SecurityUtil.getByName(SecurityUtil.java:530)
   at 
 org.apache.hadoop.net.NetUtils.createSocketAddrForHost(NetUtils.java:232)
   at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:212)
   at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:160)
   at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:149)
   at 
 org.apache.hadoop.hdfs.DFSUtil.getAddressesForNameserviceId(DFSUtil.java:483)
   at org.apache.hadoop.hdfs.DFSUtil.getAddresses(DFSUtil.java:466)
   at 
 org.apache.hadoop.hdfs.DFSUtil.getNNServiceRpcAddresses(DFSUtil.java:592)
   at 
 org.apache.hadoop.hdfs.tools.TestGetConf.getAddressListFromConf(TestGetConf.java:109)
   at 
 org.apache.hadoop.hdfs.tools.TestGetConf.verifyAddresses(TestGetConf.java:209)
   at 
 org.apache.hadoop.hdfs.tools.TestGetConf.testFederation(TestGetConf.java:313)
 {code} 



[jira] [Created] (HDFS-4722) TestGetConf#testFederation times out on Windows

2013-04-21 Thread Ivan Mitic (JIRA)
Ivan Mitic created HDFS-4722:


 Summary: TestGetConf#testFederation times out on Windows
 Key: HDFS-4722
 URL: https://issues.apache.org/jira/browse/HDFS-4722
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic






[jira] [Updated] (HDFS-4722) TestGetConf#testFederation times out on Windows

2013-04-21 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4722:
-

Attachment: HDFS-4722.patch

The test times out because hostname resolution for non-existent hosts can be 
slow.

Attaching a fix which adds the test hostnames (namenode addresses) to the 
static resolution list, thereby avoiding the slow DNS resolution step.
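Roughly, the idea looks like this. Hadoop's real helper for this is {{NetUtils.addStaticResolution()}}; the class below is a hypothetical stand-in (names and addresses are illustrative) just to show the pattern of registering test hosts up front so lookups never hit slow DNS:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of the static-resolution idea: test host names are mapped
// to known addresses before the test runs, so resolving them never touches
// (slow) DNS. Stand-in for NetUtils.addStaticResolution(); not Hadoop code.
public class StaticResolver {
    private static final Map<String, String> RESOLUTIONS = new ConcurrentHashMap<>();

    public static void addStaticResolution(String host, String address) {
        RESOLUTIONS.put(host, address);
    }

    // Returns the pre-registered address, or null to fall back to real DNS.
    public static String resolve(String host) {
        return RESOLUTIONS.get(host);
    }

    public static void main(String[] args) {
        // Register the federation test hosts before running the test body.
        addStaticResolution("nn1.example.com", "127.0.0.1");
        addStaticResolution("nn2.example.com", "127.0.0.1");
        System.out.println(resolve("nn1.example.com"));
    }
}
```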

 TestGetConf#testFederation times out on Windows
 ---

 Key: HDFS-4722
 URL: https://issues.apache.org/jira/browse/HDFS-4722
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4722.patch





[jira] [Updated] (HDFS-4722) TestGetConf#testFederation times out on Windows

2013-04-21 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4722:
-

Status: Patch Available  (was: Open)

 TestGetConf#testFederation times out on Windows
 ---

 Key: HDFS-4722
 URL: https://issues.apache.org/jira/browse/HDFS-4722
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4722.patch





[jira] [Commented] (HDFS-4705) Address HDFS test failures on Windows because of invalid dfs.namenode.name.dir

2013-04-21 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13637658#comment-13637658
 ] 

Ivan Mitic commented on HDFS-4705:
--

Thanks Chris for the investigation and review!

TestQuorumJournalManager appears to be a flaky test; I haven't changed anything 
in this area. The test fails in the shutdown step, which ties back to 
HDFS-4643. I will look into it separately from this Jira.

 Address HDFS test failures on Windows because of invalid dfs.namenode.name.dir
 --

 Key: HDFS-4705
 URL: https://issues.apache.org/jira/browse/HDFS-4705
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
Priority: Minor
 Attachments: HDFS-4705.1.patch, HDFS-4705.2.patch


 Test fails on Windows with the below exception:
 {code}
 testFormatShouldBeIgnoredForNonFileBasedDirs(org.apache.hadoop.hdfs.server.namenode.TestAllowFormat)
   Time elapsed: 49 sec   ERROR!
 java.io.IOException: No image directories available!
   at 
 org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:912)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:905)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:151)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:758)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:259)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestAllowFormat.testFormatShouldBeIgnoredForNonFileBasedDirs(TestAllowFormat.java:181)
 {code}



[jira] [Updated] (HDFS-4705) TestAllowFormat fails on Windows because of invalid dfs.namenode.name.dir

2013-04-20 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4705:
-

Attachment: HDFS-4705.2.patch

Hi Chris,

I spent some time thinking here, and wasn't able to come up with anything 
better than what you suggested. Basically, the value for 
{{dfs.namenode.name.dir}} ends up being an invalid URI on Windows because of 
how its value is expanded. In production, passing a valid URI will work, so we 
don't have any problems. In tests, a reasonable approach is to be explicit and 
set {{dfs.namenode.name.dir}} to a value that is a valid URI on both the Unix 
and Windows platforms.
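As a sketch of that test-setup approach: let {{java.io.File#toURI}} build the value instead of concatenating "file://" with a platform path, so the result is a valid URI on both Unix and Windows. The base-dir choice and helper name below are illustrative, not the actual patch:

```java
import java.io.File;

// Sketch: build a dfs.namenode.name.dir value via File#toURI so it is a
// valid URI on every platform. toURI() converts separators to '/' and
// escapes as needed, so a Windows path like C:\x\y yields file:/C:/x/y
// instead of an invalid mixed-slash string. Illustrative helper only.
public class NameDirUri {
    public static String nameDirUri(File baseDir) {
        return new File(baseDir, "dfs/name").toURI().toString();
    }

    public static void main(String[] args) {
        String uri = nameDirUri(new File(System.getProperty("java.io.tmpdir")));
        // e.g. conf.set("dfs.namenode.name.dir", uri);
        System.out.println(uri);
    }
}
```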

I went ahead and fixed all of the above tests with a similar approach. This 
will get all of them to pass, with the exception of TestCheckpoint, which will 
continue to fail on Windows for a different reason.

Let me know if you have any feedback on the patch. And big thanks for your 
proactive help! 

 TestAllowFormat fails on Windows because of invalid dfs.namenode.name.dir
 -

 Key: HDFS-4705
 URL: https://issues.apache.org/jira/browse/HDFS-4705
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
Priority: Minor
 Attachments: HDFS-4705.1.patch, HDFS-4705.2.patch





[jira] [Updated] (HDFS-4705) Address HDFS test failures on Windows because of invalid dfs.namenode.name.dir

2013-04-20 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4705:
-

Summary: Address HDFS test failures on Windows because of invalid 
dfs.namenode.name.dir  (was: TestAllowFormat fails on Windows because of 
invalid dfs.namenode.name.dir)

 Address HDFS test failures on Windows because of invalid dfs.namenode.name.dir
 --

 Key: HDFS-4705
 URL: https://issues.apache.org/jira/browse/HDFS-4705
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
Priority: Minor
 Attachments: HDFS-4705.1.patch, HDFS-4705.2.patch





[jira] [Commented] (HDFS-4705) TestAllowFormat fails on Windows because of invalid dfs.namenode.name.dir

2013-04-18 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13635857#comment-13635857
 ] 

Ivan Mitic commented on HDFS-4705:
--

Hey Chris, no worries at all, and thanks for attaching the patch!

I started with a similar approach to yours and then saw the other test 
failures, so I thought it was worth spending some time to see whether it makes 
sense to fix them all at once.

I was thinking along the lines of changing {{Util#fileAsURI}} such that it 
converts the given File to a Path and then from the Path to the URI. However, 
on top of this we'd also have to change the default value for 
{{dfs.namenode.name.dir}}, as file://$hadoop.tmp.dir/dfs/name is actually not 
a valid local URI (there should be 3 forward slashes after file:, IOW: 
file:///$hadoop.tmp.dir/dfs/name). I haven't tested this out, so it might be 
that there are some problems here.

You're welcome to take this up if you want :)
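The File -> Path -> URI idea could be sketched roughly like this; it is a hypothetical illustration of the separator normalization Hadoop's Path performs, not the actual {{Util#fileAsURI}} code:

```java
import java.io.File;
import java.net.URI;
import java.net.URISyntaxException;

// Hypothetical sketch of the File -> Path -> URI conversion: normalize the
// platform separators (as Hadoop's Path does) before building the URI, so a
// Windows path like C:\x\y becomes file:/C:/x/y instead of failing
// canonicalization. Not the real Util#fileAsURI implementation.
public class FileAsUri {
    public static URI fileAsURI(File f) {
        String p = f.getAbsolutePath().replace(File.separatorChar, '/');
        if (!p.startsWith("/")) {
            p = "/" + p;  // Windows drive-letter paths: C:/x -> /C:/x
        }
        try {
            return new URI("file", null, p, null);
        } catch (URISyntaxException e) {
            throw new IllegalArgumentException("Cannot convert " + f, e);
        }
    }

    public static void main(String[] args) {
        System.out.println(fileAsURI(new File("/tmp/dfs/name")));
    }
}
```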

 TestAllowFormat fails on Windows because of invalid dfs.namenode.name.dir
 -

 Key: HDFS-4705
 URL: https://issues.apache.org/jira/browse/HDFS-4705
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
Priority: Minor
 Attachments: HDFS-4705.1.patch





[jira] [Commented] (HDFS-4705) TestAllowFormat fails on Windows because of invalid dfs.namenode.name.dir

2013-04-18 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13635859#comment-13635859
 ] 

Ivan Mitic commented on HDFS-4705:
--

PS. I think the approach from the current patch is actually a fine way to 
address the problem; I just thought it was worth checking whether there are 
better ways.

 TestAllowFormat fails on Windows because of invalid dfs.namenode.name.dir
 -

 Key: HDFS-4705
 URL: https://issues.apache.org/jira/browse/HDFS-4705
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
Priority: Minor
 Attachments: HDFS-4705.1.patch





[jira] [Commented] (HDFS-4705) TestAllowFormat fails on Windows because of invalid dfs.namenode.name.dir

2013-04-18 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13635865#comment-13635865
 ] 

Ivan Mitic commented on HDFS-4705:
--

bq. However, on top of this we'd also have to change the default value for 
dfs.namenode.name.dir as file://$hadoop.tmp.dir/dfs/name is actually not a 
valid local URI (there should be 3 forward slashes after file:, IOW: 
file:///$hadoop.tmp.dir/dfs/name).
Correcting myself here. The above URI is actually valid since $hadoop.tmp.dir 
contains one forward slash at the beginning. The rest of what I said there 
should still make sense.

 TestAllowFormat fails on Windows because of invalid dfs.namenode.name.dir
 -

 Key: HDFS-4705
 URL: https://issues.apache.org/jira/browse/HDFS-4705
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
Priority: Minor
 Attachments: HDFS-4705.1.patch





[jira] [Created] (HDFS-4705) TestAllowFormat fails on Windows because of invalid dfs.namenode.name.dir

2013-04-17 Thread Ivan Mitic (JIRA)
Ivan Mitic created HDFS-4705:


 Summary: TestAllowFormat fails on Windows because of invalid 
dfs.namenode.name.dir
 Key: HDFS-4705
 URL: https://issues.apache.org/jira/browse/HDFS-4705
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
Priority: Minor


Test fails on Windows with the below exception:

{code}
testFormatShouldBeIgnoredForNonFileBasedDirs(org.apache.hadoop.hdfs.server.namenode.TestAllowFormat)
  Time elapsed: 49 sec   ERROR!
java.io.IOException: No image directories available!
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:912)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:905)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:151)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:758)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:259)
at 
org.apache.hadoop.hdfs.server.namenode.TestAllowFormat.testFormatShouldBeIgnoredForNonFileBasedDirs(TestAllowFormat.java:181)
{code}



[jira] [Commented] (HDFS-4705) TestAllowFormat fails on Windows because of invalid dfs.namenode.name.dir

2013-04-17 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633839#comment-13633839
 ] 

Ivan Mitic commented on HDFS-4705:
--

The test fails on Windows because {{dfs.namenode.name.dir}} resolves to 
file://I:\svn\tr\hadoop-hdfs-project\hadoop-hdfs\target/test/dfs/name in the 
test, which fails the namenode storage dir URI conversion (see the exception 
below). The {{dfs.namenode.name.dir}} default value is based on 
file://$hadoop.tmp.dir/dfs/name, and this expands into the invalid path above. 

{code}
Error while processing URI: 
file://I:\svn\tr\hadoop-hdfs-project\hadoop-hdfs\target/test/dfs/name
java.io.IOException: The filename, directory name, or volume label syntax is 
incorrect
at java.io.WinNTFileSystem.canonicalize0(Native Method)
at java.io.Win32FileSystem.canonicalize(Win32FileSystem.java:396)
at java.io.File.getCanonicalPath(File.java:559)
at java.io.File.getCanonicalFile(File.java:583)
at org.apache.hadoop.hdfs.server.common.Util.fileAsURI(Util.java:73)
at org.apache.hadoop.hdfs.server.common.Util.stringAsURI(Util.java:58)
at 
org.apache.hadoop.hdfs.server.common.Util.stringCollectionAsURIs(Util.java:98)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getStorageDirs(FSNamesystem.java:959)
{code}
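A quick way to see why the expanded value is rejected: backslashes are not legal URI characters, so the mixed-slash string cannot even be parsed as a URI. A small standalone check (illustrative, not Hadoop code; the example strings are made up):

```java
import java.net.URI;
import java.net.URISyntaxException;

// Backslash is not a legal URI character, so a value in the shape of
// file://I:\dir\name fails URI parsing, while a forward-slash form parses
// fine. Illustrative check only.
public class UriCheck {
    public static boolean isValidUri(String s) {
        try {
            new URI(s);
            return true;
        } catch (URISyntaxException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isValidUri("file://host\\dir\\name"));  // backslashes: invalid
        System.out.println(isValidUri("file:///tmp/dfs/name"));    // valid
    }
}
```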

 TestAllowFormat fails on Windows because of invalid dfs.namenode.name.dir
 -

 Key: HDFS-4705
 URL: https://issues.apache.org/jira/browse/HDFS-4705
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
Priority: Minor




[jira] [Commented] (HDFS-4705) TestAllowFormat fails on Windows because of invalid dfs.namenode.name.dir

2013-04-17 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13633843#comment-13633843
 ] 

Ivan Mitic commented on HDFS-4705:
--

The following tests fail with the same "No image directories available" error 
on Windows:

TestCheckpoint
TestFSNamesystem
TestNameEditsConfigs
TestNNThroughputBenchmark
TestValidateConfigurationSettings

Will check if it makes sense to address all at the same time.

 TestAllowFormat fails on Windows because of invalid dfs.namenode.name.dir
 -

 Key: HDFS-4705
 URL: https://issues.apache.org/jira/browse/HDFS-4705
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
Priority: Minor




[jira] [Created] (HDFS-4695) TestEditLog leaks open file handles between tests

2013-04-15 Thread Ivan Mitic (JIRA)
Ivan Mitic created HDFS-4695:


 Summary: TestEditLog leaks open file handles between tests
 Key: HDFS-4695
 URL: https://issues.apache.org/jira/browse/HDFS-4695
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic


The test leaks open file handles, causing subsequent test cases to fail on 
Windows (a common cross-platform issue we've seen multiple times so far).



[jira] [Updated] (HDFS-4695) TestEditLog leaks open file handles between tests

2013-04-15 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4695:
-

Attachment: HDFS-4695.patch

Attaching the patch.

With the patch, all test cases will pass on Windows except for 
{{TestEditLog#testFailedOpen}}, which depends on HDFS-4610.

 TestEditLog leaks open file handles between tests
 -

 Key: HDFS-4695
 URL: https://issues.apache.org/jira/browse/HDFS-4695
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4695.patch





[jira] [Updated] (HDFS-4695) TestEditLog leaks open file handles between tests

2013-04-15 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4695:
-

Status: Patch Available  (was: Open)

 TestEditLog leaks open file handles between tests
 -

 Key: HDFS-4695
 URL: https://issues.apache.org/jira/browse/HDFS-4695
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4695.patch





[jira] [Updated] (HDFS-4695) TestEditLog leaks open file handles between tests

2013-04-15 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4695:
-

Attachment: HDFS-4695.2.patch

Thanks Colin and Chris for the review! 

Attaching the updated patch. I switched to {{IOUtils.cleanup()}} as I was not 
able to find an {{IOUtils.closeQuietly()}} overload that accepts a list of 
closeables. Colin, let me know if this looks good.
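The multi-stream cleanup pattern under discussion can be sketched as follows; this is a simplified stand-in for {{org.apache.hadoop.io.IOUtils.cleanup()}}, which additionally takes a Log for reporting failures, not the real implementation:

```java
import java.io.Closeable;
import java.io.IOException;

// Simplified stand-in for the varargs cleanup: attempt to close every
// stream, tolerating nulls and swallowing per-stream failures so one bad
// close() does not leak the remaining handles. Hadoop's IOUtils.cleanup()
// also logs the swallowed exceptions.
public class CloseAll {
    public static void cleanup(Closeable... closeables) {
        for (Closeable c : closeables) {
            if (c == null) {
                continue;  // tolerate streams that were never opened
            }
            try {
                c.close();
            } catch (IOException e) {
                // Swallowed here; logged in the real implementation.
            }
        }
    }
}
```

In a test teardown this lets one call, e.g. {{cleanup(in, out, channel)}}, replace a chain of individually guarded close() calls.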

 TestEditLog leaks open file handles between tests
 -

 Key: HDFS-4695
 URL: https://issues.apache.org/jira/browse/HDFS-4695
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4695.2.patch, HDFS-4695.patch





[jira] [Commented] (HDFS-4695) TestEditLog leaks open file handles between tests

2013-04-15 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13632482#comment-13632482
 ] 

Ivan Mitic commented on HDFS-4695:
--

bq. There are actually two IOUtils classes in common use-- 
org.apache.commons.io.IOUtils and org.apache.hadoop.io.IOUtils. The former has 
closeQuietly, and the latter has cleanup. Either function works here, and if 
you're already using the org.apache.hadoop.io class, it's probably easiest to 
stick with that, just as you did here.
Thanks Colin. I also noticed {{org.apache.commons.io.IOUtils}}; however, I did 
not find a {{closeQuietly}} overload that accepts a list of closeables. The 
current patch should be good then, thanks!

 TestEditLog leaks open file handles between tests
 -

 Key: HDFS-4695
 URL: https://issues.apache.org/jira/browse/HDFS-4695
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4695.2.patch, HDFS-4695.patch





[jira] [Updated] (HDFS-4677) Editlog should support synchronous writes

2013-04-14 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4677:
-

Attachment: HDFS-4677.patch

Attaching the patch. Will also attach the branch-1 compatible patch once this 
one is reviewed.

I see that {{EditLogFileOutputStream}} already has a 
{{shouldSkipFsyncForTests}} flag which is used to speed up test execution. I 
did a quick experiment on my Ubuntu box to see if we can remove this test hook 
in favor of the newly introduced config and still keep the test runtime 
reasonably low. It turned out that we cannot: TestEditLog, which normally runs 
in ~3 minutes, took more than 15 minutes with the new config (it actually 
timed out), so I am keeping the existing logic as-is.
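The two sync strategies being compared can be sketched as follows; the boolean switch stands in for the proposed config flag (name not final), and this is illustrative rather than the patch itself:

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

// Sketch of the two durability strategies: "rws" mode makes every write
// synchronous (content + metadata), while "rw" plus an explicit
// FileChannel#force flushes on demand. The boolean stands in for the
// proposed NameNode config flag; illustrative only.
public class EditLogSyncSketch {
    public static void writeDurably(File f, byte[] data, boolean syncWrites)
            throws IOException {
        String mode = syncWrites ? "rws" : "rw";
        try (RandomAccessFile raf = new RandomAccessFile(f, mode)) {
            raf.write(data);
            if (!syncWrites) {
                // Default path: flush file content and metadata explicitly.
                raf.getChannel().force(true);
            }
        }
    }
}
```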

 Editlog should support synchronous writes
 -

 Key: HDFS-4677
 URL: https://issues.apache.org/jira/browse/HDFS-4677
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 1-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4677.patch


 In the current implementation, NameNode editlog performs syncs to the 
 persistent storage using the {{FileChannel#force}} Java APIs. This API is 
 documented to be slower than an alternative where {{RandomAccessFile}} is 
 opened with "rws" flags (synchronous writes). 
 We instrumented {{FileChannel#force}} on Windows, and in some 
 software/hardware configurations it can perform significantly slower than the 
 "rws" alternative.
 In terms of the Windows APIs, FileChannel#force internally calls 
 [FlushFileBuffers|http://msdn.microsoft.com/en-us/library/windows/desktop/aa364439(v=vs.85).aspx]
  while RandomAccessFile ("rws") opens the file with the 
 [FILE_FLAG_WRITE_THROUGH flag|http://support.microsoft.com/kb/99794]. 
 With this Jira I'd like to introduce a flag that provides a means to 
 configure the NameNode to use synchronous writes. There is a catch though: 
 the behavior of the "rws" flags is platform and hardware specific and might 
 not provide the same level of guarantees as {{FileChannel#force}} w.r.t. 
 flushing the on-disk cache. This is an expert-level setting, and it should be 
 documented as such.



[jira] [Updated] (HDFS-4677) Editlog should support synchronous writes

2013-04-14 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4677:
-

Status: Patch Available  (was: Open)

 Editlog should support synchronous writes
 -

 Key: HDFS-4677
 URL: https://issues.apache.org/jira/browse/HDFS-4677
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 1-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4677.patch


 In the current implementation, the NameNode editlog syncs to 
 persistent storage using the {{FileChannel#force}} Java API. This API is 
 documented to be slower than an alternative where {{RandomAccessFile}} 
 is opened with the rws flags (synchronous writes). 
 We instrumented {{FileChannel#force}} on Windows, and in some 
 software/hardware configurations it can perform significantly slower than the 
 “rws” alternative.
 In terms of the Windows APIs, FileChannel#force internally calls 
 [FlushFileBuffers|http://msdn.microsoft.com/en-us/library/windows/desktop/aa364439(v=vs.85).aspx]
  while RandomAccessFile (“rws”) opens the file with the 
 [FILE_FLAG_WRITE_THROUGH flag|http://support.microsoft.com/kb/99794]. 
 With this Jira I'd like to introduce a flag that provides a means to configure 
 the NameNode to use synchronous writes. There is a catch, though: the behavior 
 of the rws flags is platform- and hardware-specific and might not provide the 
 same level of guarantees as {{FileChannel#force}} w.r.t. flushing the on-disk 
 cache. This is an expert-level setting, and it should be documented as such.



[jira] [Created] (HDFS-4677) Editlog should support synchronous writes

2013-04-09 Thread Ivan Mitic (JIRA)
Ivan Mitic created HDFS-4677:


 Summary: Editlog should support synchronous writes
 Key: HDFS-4677
 URL: https://issues.apache.org/jira/browse/HDFS-4677
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 1-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic


In the current implementation, the NameNode editlog syncs to persistent 
storage using the {{FileChannel#force}} Java API. This API is documented to be 
slower than an alternative where {{RandomAccessFile}} is opened with the rws 
flags (synchronous writes). 

We instrumented {{FileChannel#force}} on Windows, and in some software/hardware 
configurations it can perform significantly slower than the “rws” alternative.

In terms of the Windows APIs, FileChannel#force internally calls 
[FlushFileBuffers|http://msdn.microsoft.com/en-us/library/windows/desktop/aa364439(v=vs.85).aspx]
 while RandomAccessFile (“rws”) opens the file with the 
[FILE_FLAG_WRITE_THROUGH flag|http://support.microsoft.com/kb/99794]. 

With this Jira I'd like to introduce a flag that provides a means to configure 
the NameNode to use synchronous writes. There is a catch, though: the behavior 
of the rws flags is platform- and hardware-specific and might not provide the 
same level of guarantees as {{FileChannel#force}} w.r.t. flushing the on-disk 
cache. This is an expert-level setting, and it should be documented as such.
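
For illustration, the force() vs. "rws" difference can be sketched with plain 
JDK calls. This is a minimal, hypothetical micro-benchmark, not the editlog 
code itself; SyncWriteSketch and timedWrites are made-up names:

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;

public class SyncWriteSketch {

    // Writes 'reps' small records, syncing after each one, and returns elapsed millis.
    // mode "rw" + explicitForce=true models the current FileChannel#force behavior;
    // mode "rws" + explicitForce=false models the proposed synchronous-write flag.
    static long timedWrites(File f, String mode, boolean explicitForce, int reps)
            throws Exception {
        byte[] record = new byte[512];
        long start = System.nanoTime();
        try (RandomAccessFile raf = new RandomAccessFile(f, mode)) {
            FileChannel ch = raf.getChannel();
            for (int i = 0; i < reps; i++) {
                raf.write(record);
                if (explicitForce) {
                    ch.force(false); // flush data only, not metadata
                }
            }
        }
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws Exception {
        File f1 = File.createTempFile("edits-force", ".tmp");
        File f2 = File.createTempFile("edits-rws", ".tmp");
        System.out.println("force: " + timedWrites(f1, "rw", true, 50) + " ms");
        System.out.println("rws:   " + timedWrites(f2, "rws", false, 50) + " ms");
        f1.delete();
        f2.delete();
    }
}
```

The relative numbers are platform- and hardware-specific, which is exactly why 
the proposed setting would be expert-level.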



[jira] [Commented] (HDFS-4677) Editlog should support synchronous writes

2013-04-09 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13627273#comment-13627273
 ] 

Ivan Mitic commented on HDFS-4677:
--

bq. 
http://stas-blogspot.blogspot.com/2011/11/java-file-flushing-performance.html
Thanks, this is indeed interesting data. The diff on Windows between rwd (which 
is also based on FILE_FLAG_WRITE_THROUGH) and {{FileChannel#force}} looks 
reasonable.

 Editlog should support synchronous writes
 -

 Key: HDFS-4677
 URL: https://issues.apache.org/jira/browse/HDFS-4677
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 1-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic

 In the current implementation, the NameNode editlog syncs to 
 persistent storage using the {{FileChannel#force}} Java API. This API is 
 documented to be slower than an alternative where {{RandomAccessFile}} 
 is opened with the rws flags (synchronous writes). 
 We instrumented {{FileChannel#force}} on Windows, and in some 
 software/hardware configurations it can perform significantly slower than the 
 “rws” alternative.
 In terms of the Windows APIs, FileChannel#force internally calls 
 [FlushFileBuffers|http://msdn.microsoft.com/en-us/library/windows/desktop/aa364439(v=vs.85).aspx]
  while RandomAccessFile (“rws”) opens the file with the 
 [FILE_FLAG_WRITE_THROUGH flag|http://support.microsoft.com/kb/99794]. 
 With this Jira I'd like to introduce a flag that provides a means to configure 
 the NameNode to use synchronous writes. There is a catch, though: the behavior 
 of the rws flags is platform- and hardware-specific and might not provide the 
 same level of guarantees as {{FileChannel#force}} w.r.t. flushing the on-disk 
 cache. This is an expert-level setting, and it should be documented as such.



[jira] [Commented] (HDFS-4625) Make TestNNWithQJM#testNewNamenodeTakesOverWriter work on Windows

2013-04-04 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13622926#comment-13622926
 ] 

Ivan Mitic commented on HDFS-4625:
--

Thanks Suresh for the commit and Arpit for the review!

 Make TestNNWithQJM#testNewNamenodeTakesOverWriter work on Windows
 -

 Key: HDFS-4625
 URL: https://issues.apache.org/jira/browse/HDFS-4625
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
Assignee: Ivan Mitic
Priority: Minor
  Labels: windows
 Fix For: 3.0.0

 Attachments: HDFS-4625.patch


 This test is being skipped on Windows since we are unable to read from locked 
 files. Filing the Jira to keep track of the skipped test.



[jira] [Updated] (HDFS-4625) Make TestNNWithQJM#testNewNamenodeTakesOverWriter work on Windows

2013-03-24 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4625:
-

Attachment: HDFS-4625.patch

Attaching the patch.

I decoupled creation of the image dir from what the test is trying to validate. 
This allows the test to run on Windows as the image dir is not locked by the 
namenode when we try to make its copy.

 Make TestNNWithQJM#testNewNamenodeTakesOverWriter work on Windows
 -

 Key: HDFS-4625
 URL: https://issues.apache.org/jira/browse/HDFS-4625
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
Assignee: Ivan Mitic
Priority: Minor
  Labels: windows
 Attachments: HDFS-4625.patch


 This test is being skipped on Windows since we are unable to read from locked 
 files. Filing the Jira to keep track of the skipped test.



[jira] [Updated] (HDFS-4625) Make TestNNWithQJM#testNewNamenodeTakesOverWriter work on Windows

2013-03-24 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4625:
-

Status: Patch Available  (was: Open)

 Make TestNNWithQJM#testNewNamenodeTakesOverWriter work on Windows
 -

 Key: HDFS-4625
 URL: https://issues.apache.org/jira/browse/HDFS-4625
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
Assignee: Ivan Mitic
Priority: Minor
  Labels: windows
 Attachments: HDFS-4625.patch


 This test is being skipped on Windows since we are unable to read from locked 
 files. Filing the Jira to keep track of the skipped test.





[jira] [Commented] (HDFS-4584) Fix TestNNWithQJM failures on Windows

2013-03-21 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13608736#comment-13608736
 ] 

Ivan Mitic commented on HDFS-4584:
--

Hi Arpit, thanks for the patch. 

I was wondering whether it's possible to alter the problematic test 
{{testNewNamenodeTakesOverWriter}} so that it can still run on 
Windows. What do you think about doing the following:
1. cluster1#format(true)#build() - same as now, this will generate the local 
image file and namespace id
2. cluster1#shutdown() - release file locks
3. FileUtil#copy() - 
4. cluster1#format(false)#build()
5. cluster1#FileSystem#mkdirs() - move mkdirs here so that it is saved in the 
edits log
6. cluster2#format(false)#build()

This should preserve the intent of the test and still allow it to run on 
Windows, I think. Let me know if this works. I am fine with the current patch 
if not.

 Fix TestNNWithQJM failures on Windows
 -

 Key: HDFS-4584
 URL: https://issues.apache.org/jira/browse/HDFS-4584
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
 Environment: Windows
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-4584.1.patch, HDFS-4584.2.patch, HDFS-4584.3.patch


 Multiple test cases fail in TestNNWithQJM.
 List of failing test cases:
 -  
 testNewNamenodeTakesOverWriter(org.apache.hadoop.hdfs.qjournal.TestNNWithQJM):
  The process cannot access the file because another process has locked a 
 portion of the file
 -  testMismatchedNNIsRejected(org.apache.hadoop.hdfs.qjournal.TestNNWithQJM): 
 Could not format one or more JournalNodes. 1 exceptions thrown:
 -  testWebPageHasQjmInfo(org.apache.hadoop.hdfs.qjournal.TestNNWithQJM): 
 Could not format one or more JournalNodes. 1 exceptions thrown:



[jira] [Assigned] (HDFS-4625) Make TestNNWithQJM#testNewNamenodeTakesOverWriter work on Windows

2013-03-21 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic reassigned HDFS-4625:


Assignee: Ivan Mitic

 Make TestNNWithQJM#testNewNamenodeTakesOverWriter work on Windows
 -

 Key: HDFS-4625
 URL: https://issues.apache.org/jira/browse/HDFS-4625
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
Assignee: Ivan Mitic
Priority: Minor
  Labels: windows

 This test is being skipped on Windows since we are unable to read from locked 
 files. Filing the Jira to keep track of the skipped test.



[jira] [Commented] (HDFS-4584) Fix TestNNWithQJM failures on Windows

2013-03-21 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13609775#comment-13609775
 ] 

Ivan Mitic commented on HDFS-4584:
--

bq. I think the test is fragile by nature, see the commented shutdown() call in 
the original test. I have filed HDFS-4625 and linked it to track the skipped 
test case and hope this is a reasonable alternative for now.
Thanks Arpit for trying this out, sounds good.

 Fix TestNNWithQJM failures on Windows
 -

 Key: HDFS-4584
 URL: https://issues.apache.org/jira/browse/HDFS-4584
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
 Environment: Windows
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 2.0.5-beta

 Attachments: HDFS-4584.1.patch, HDFS-4584.2.patch, HDFS-4584.3.patch


 Multiple test cases fail in TestNNWithQJM.
 List of failing test cases:
 -  
 testNewNamenodeTakesOverWriter(org.apache.hadoop.hdfs.qjournal.TestNNWithQJM):
  The process cannot access the file because another process has locked a 
 portion of the file
 -  testMismatchedNNIsRejected(org.apache.hadoop.hdfs.qjournal.TestNNWithQJM): 
 Could not format one or more JournalNodes. 1 exceptions thrown:
 -  testWebPageHasQjmInfo(org.apache.hadoop.hdfs.qjournal.TestNNWithQJM): 
 Could not format one or more JournalNodes. 1 exceptions thrown:



[jira] [Commented] (HDFS-4607) TestGetConf#testGetSpecificKey fails on Windows

2013-03-21 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13609778#comment-13609778
 ] 

Ivan Mitic commented on HDFS-4607:
--

Thanks Nicholas for committing and Chris for the review!

 TestGetConf#testGetSpecificKey fails on Windows
 ---

 Key: HDFS-4607
 URL: https://issues.apache.org/jira/browse/HDFS-4607
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
Priority: Minor
 Fix For: 2.0.5-beta

 Attachments: HDFS-4607.2.patch, HDFS-4607.patch


 Test fails on the below stack:
 {code}
 testGetSpecificKey(org.apache.hadoop.hdfs.tools.TestGetConf)  Time elapsed: 
 37 sec   FAILURE!
 java.lang.AssertionError: 
   at org.junit.Assert.fail(Assert.java:91)
   at org.junit.Assert.assertTrue(Assert.java:43)
   at org.junit.Assert.assertTrue(Assert.java:54)
   at 
 org.apache.hadoop.hdfs.tools.TestGetConf.testGetSpecificKey(TestGetConf.java:341)
 {code}



[jira] [Updated] (HDFS-4607) TestGetConf#testGetSpecificKey fails on Windows

2013-03-18 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4607:
-

Attachment: HDFS-4607.2.patch

Attaching updated patch that addresses Chris' comment.

 TestGetConf#testGetSpecificKey fails on Windows
 ---

 Key: HDFS-4607
 URL: https://issues.apache.org/jira/browse/HDFS-4607
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4607.2.patch, HDFS-4607.patch


 Test fails on the below stack:
 {code}
 testGetSpecificKey(org.apache.hadoop.hdfs.tools.TestGetConf)  Time elapsed: 
 37 sec   FAILURE!
 java.lang.AssertionError: 
   at org.junit.Assert.fail(Assert.java:91)
   at org.junit.Assert.assertTrue(Assert.java:43)
   at org.junit.Assert.assertTrue(Assert.java:54)
   at 
 org.apache.hadoop.hdfs.tools.TestGetConf.testGetSpecificKey(TestGetConf.java:341)
 {code}



[jira] [Commented] (HDFS-4607) TestGetConf#testGetSpecificKey fails on Windows

2013-03-18 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13605781#comment-13605781
 ] 

Ivan Mitic commented on HDFS-4607:
--

Thanks a lot Chris for testing this out on Mac. I agree, we should understand 
why the test is taking so long. I have attached the updated patch. 
testFederation now consistently fails on Windows with a timeout, so we won't 
forget about the problem. I will file a Jira and investigate the timeout 
problem separately. 

 TestGetConf#testGetSpecificKey fails on Windows
 ---

 Key: HDFS-4607
 URL: https://issues.apache.org/jira/browse/HDFS-4607
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4607.2.patch, HDFS-4607.patch


 Test fails on the below stack:
 {code}
 testGetSpecificKey(org.apache.hadoop.hdfs.tools.TestGetConf)  Time elapsed: 
 37 sec   FAILURE!
 java.lang.AssertionError: 
   at org.junit.Assert.fail(Assert.java:91)
   at org.junit.Assert.assertTrue(Assert.java:43)
   at org.junit.Assert.assertTrue(Assert.java:54)
   at 
 org.apache.hadoop.hdfs.tools.TestGetConf.testGetSpecificKey(TestGetConf.java:341)
 {code}



[jira] [Commented] (HDFS-4607) TestGetConf#testGetSpecificKey fails on Windows

2013-03-18 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13605863#comment-13605863
 ] 

Ivan Mitic commented on HDFS-4607:
--

bq. Digging into the issue, it appears to be a problem of slow response trying 
to resolve an unknown host name. The test generates multiple bogus host names 
of the form nn1, nn2, nn3, etc. Each call to TestGetConf#setupAddress ends up 
iterating through each of 10 such addresses, ultimately calling 
InetAddress#getByName. The same thing happens again later for each call to 
TestGetConf#verifyAddresses. On Mac, this comes back nearly instantaneously 
with host not found. On my Windows VM, each host name takes ~2.25s before 
responding with host not found. Simply running ping nn1 on the command prompt 
confirms this behavior. Do you know if we're experiencing a Windows network 
misconfiguration?
Thanks Chris. I also looked deeper into the issue. I don't think we can do 
anything here, and actually, I am not sure this is a problem. It should be 
OK for unknown host resolution to take a few seconds (please correct me if 
needed, I am no expert here :)). I have an idea of how to fix this so that 
the test runs faster on Windows, but I will do that via a separate patch to 
keep this fix focused.
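
The per-name delay is easy to measure with a small JDK-only sketch. Nothing 
Hadoop-specific here; ResolveTiming and timeLookup are made-up names:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class ResolveTiming {

    // Returns the milliseconds spent attempting to resolve a host name;
    // for a bogus name like "nn1" this is dominated by the resolver's
    // host-not-found latency, which is what the test keeps paying for.
    static long timeLookup(String host) {
        long start = System.nanoTime();
        try {
            InetAddress.getByName(host);
        } catch (UnknownHostException expected) {
            // host-not-found is the expected outcome for a bogus name
        }
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        // "nn1" mirrors the bogus names the test generates.
        System.out.println("nn1 lookup took " + timeLookup("nn1") + " ms");
    }
}
```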

 TestGetConf#testGetSpecificKey fails on Windows
 ---

 Key: HDFS-4607
 URL: https://issues.apache.org/jira/browse/HDFS-4607
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4607.2.patch, HDFS-4607.patch


 Test fails on the below stack:
 {code}
 testGetSpecificKey(org.apache.hadoop.hdfs.tools.TestGetConf)  Time elapsed: 
 37 sec   FAILURE!
 java.lang.AssertionError: 
   at org.junit.Assert.fail(Assert.java:91)
   at org.junit.Assert.assertTrue(Assert.java:43)
   at org.junit.Assert.assertTrue(Assert.java:54)
   at 
 org.apache.hadoop.hdfs.tools.TestGetConf.testGetSpecificKey(TestGetConf.java:341)
 {code}



[jira] [Created] (HDFS-4609) TestAuditLogs should release log handles between tests

2013-03-18 Thread Ivan Mitic (JIRA)
Ivan Mitic created HDFS-4609:


 Summary: TestAuditLogs should release log handles between tests
 Key: HDFS-4609
 URL: https://issues.apache.org/jira/browse/HDFS-4609
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic


TestAuditLogs does not release the audit log file handle before moving on to 
the next test. This causes many test cases to fail on Windows, as the log file 
cannot be deleted in TestAuditLogs#setupAuditLogs, which the test expects to 
succeed.



[jira] [Updated] (HDFS-4609) TestAuditLogs should release log handles between tests

2013-03-18 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4609:
-

Attachment: HDFS-4609.patch

Attaching the patch. 

The fix is to shut down the LogManager in the test. Unfortunately, the Apache 
Commons Logging library does not provide a means to release the underlying 
loggers. For additional info, see the Commons Logging FAQ 
(http://wiki.apache.org/commons/Logging/FrequentlyAskedQuestions), entry How 
can I close loggers when using Commons-Logging?
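
The actual patch shuts down Log4j's LogManager, but the same open-handle 
problem can be demonstrated with plain java.util.logging. This is an analogous 
sketch, not the patch itself; the names are illustrative:

```java
import java.io.File;
import java.util.logging.FileHandler;
import java.util.logging.Logger;

public class LogHandleRelease {
    public static void main(String[] args) throws Exception {
        File log = File.createTempFile("audit", ".log");
        Logger logger = Logger.getLogger("audit-sketch");
        FileHandler handler = new FileHandler(log.getAbsolutePath());
        logger.addHandler(handler);
        logger.info("test entry");

        // While the handler is open, Windows keeps the log file locked, so a
        // File#delete() in the next test's setup fails. Closing the handler
        // first releases the handle; shutting down the LogManager between
        // tests achieves the same thing for Log4j-backed loggers.
        handler.close();
        logger.removeHandler(handler);

        System.out.println("deleted=" + log.delete());
    }
}
```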

 TestAuditLogs should release log handles between tests
 --

 Key: HDFS-4609
 URL: https://issues.apache.org/jira/browse/HDFS-4609
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4609.patch


 TestAuditLogs does not release the audit log file handle before moving on to 
 the next test. This causes many test cases to fail on Windows, as the log file 
 cannot be deleted in TestAuditLogs#setupAuditLogs, which the test expects to 
 succeed.



[jira] [Updated] (HDFS-4609) TestAuditLogs should release log handles between tests

2013-03-18 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4609:
-

Status: Patch Available  (was: Open)

 TestAuditLogs should release log handles between tests
 --

 Key: HDFS-4609
 URL: https://issues.apache.org/jira/browse/HDFS-4609
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4609.patch


 TestAuditLogs does not release the audit log file handle before moving on to 
 the next test. This causes many test cases to fail on Windows, as the log file 
 cannot be deleted in TestAuditLogs#setupAuditLogs, which the test expects to 
 succeed.



[jira] [Created] (HDFS-4610) Move to using common utils FileUtil#setReadable/Writable/Executable and FileUtil#canRead/Write/Execute

2013-03-18 Thread Ivan Mitic (JIRA)
Ivan Mitic created HDFS-4610:


 Summary: Move to using common utils 
FileUtil#setReadable/Writable/Executable and FileUtil#canRead/Write/Execute
 Key: HDFS-4610
 URL: https://issues.apache.org/jira/browse/HDFS-4610
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic


Switch to using common utils described in HADOOP-9413 that work well 
cross-platform.



[jira] [Updated] (HDFS-4610) Move to using common utils FileUtil#setReadable/Writable/Executable and FileUtil#canRead/Write/Execute

2013-03-18 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4610:
-

Attachment: HDFS-4610.commonfileutils.patch

Attaching the patch for demonstration.

This is dependent on Jira HADOOP-9413.

 Move to using common utils FileUtil#setReadable/Writable/Executable and 
 FileUtil#canRead/Write/Execute
 --

 Key: HDFS-4610
 URL: https://issues.apache.org/jira/browse/HDFS-4610
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4610.commonfileutils.patch


 Switch to using common utils described in HADOOP-9413 that work well 
 cross-platform.



[jira] [Created] (HDFS-4607) TestGetConf#testGetSpecificKey fails on Windows

2013-03-17 Thread Ivan Mitic (JIRA)
Ivan Mitic created HDFS-4607:


 Summary: TestGetConf#testGetSpecificKey fails on Windows
 Key: HDFS-4607
 URL: https://issues.apache.org/jira/browse/HDFS-4607
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic


Test fails on the below stack:

{code}
testGetSpecificKey(org.apache.hadoop.hdfs.tools.TestGetConf)  Time elapsed: 37 
sec   FAILURE!
java.lang.AssertionError: 
at org.junit.Assert.fail(Assert.java:91)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.hadoop.hdfs.tools.TestGetConf.testGetSpecificKey(TestGetConf.java:341)
{code}



[jira] [Updated] (HDFS-4607) TestGetConf#testGetSpecificKey fails on Windows

2013-03-17 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4607:
-

Attachment: HDFS-4607.patch

Attaching the patch. 

Simple fix: the test fails because a line ending does not match.

Also adding timeout annotations to all test cases. testFederation consistently 
takes 254s to run on my machine, so I set the timeout to 300 seconds.

 TestGetConf#testGetSpecificKey fails on Windows
 ---

 Key: HDFS-4607
 URL: https://issues.apache.org/jira/browse/HDFS-4607
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4607.patch


 Test fails on the below stack:
 {code}
 testGetSpecificKey(org.apache.hadoop.hdfs.tools.TestGetConf)  Time elapsed: 
 37 sec   FAILURE!
 java.lang.AssertionError: 
   at org.junit.Assert.fail(Assert.java:91)
   at org.junit.Assert.assertTrue(Assert.java:43)
   at org.junit.Assert.assertTrue(Assert.java:54)
   at 
 org.apache.hadoop.hdfs.tools.TestGetConf.testGetSpecificKey(TestGetConf.java:341)
 {code}



[jira] [Updated] (HDFS-4607) TestGetConf#testGetSpecificKey fails on Windows

2013-03-17 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4607:
-

Attachment: (was: HDFS-4607.patch)

 TestGetConf#testGetSpecificKey fails on Windows
 ---

 Key: HDFS-4607
 URL: https://issues.apache.org/jira/browse/HDFS-4607
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4607.patch


 Test fails on the below stack:
 {code}
 testGetSpecificKey(org.apache.hadoop.hdfs.tools.TestGetConf)  Time elapsed: 
 37 sec   FAILURE!
 java.lang.AssertionError: 
   at org.junit.Assert.fail(Assert.java:91)
   at org.junit.Assert.assertTrue(Assert.java:43)
   at org.junit.Assert.assertTrue(Assert.java:54)
   at 
 org.apache.hadoop.hdfs.tools.TestGetConf.testGetSpecificKey(TestGetConf.java:341)
 {code}



[jira] [Updated] (HDFS-4607) TestGetConf#testGetSpecificKey fails on Windows

2013-03-17 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4607:
-

Attachment: HDFS-4607.patch

 TestGetConf#testGetSpecificKey fails on Windows
 ---

 Key: HDFS-4607
 URL: https://issues.apache.org/jira/browse/HDFS-4607
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4607.patch


 Test fails on the below stack:
 {code}
 testGetSpecificKey(org.apache.hadoop.hdfs.tools.TestGetConf)  Time elapsed: 
 37 sec   FAILURE!
 java.lang.AssertionError: 
   at org.junit.Assert.fail(Assert.java:91)
   at org.junit.Assert.assertTrue(Assert.java:43)
   at org.junit.Assert.assertTrue(Assert.java:54)
   at 
 org.apache.hadoop.hdfs.tools.TestGetConf.testGetSpecificKey(TestGetConf.java:341)
 {code}



[jira] [Updated] (HDFS-4607) TestGetConf#testGetSpecificKey fails on Windows

2013-03-17 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4607:
-

Status: Patch Available  (was: Open)

 TestGetConf#testGetSpecificKey fails on Windows
 ---

 Key: HDFS-4607
 URL: https://issues.apache.org/jira/browse/HDFS-4607
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4607.patch


 Test fails on the below stack:
 {code}
 testGetSpecificKey(org.apache.hadoop.hdfs.tools.TestGetConf)  Time elapsed: 
 37 sec   FAILURE!
 java.lang.AssertionError: 
   at org.junit.Assert.fail(Assert.java:91)
   at org.junit.Assert.assertTrue(Assert.java:43)
   at org.junit.Assert.assertTrue(Assert.java:54)
   at 
 org.apache.hadoop.hdfs.tools.TestGetConf.testGetSpecificKey(TestGetConf.java:341)
 {code}



[jira] [Created] (HDFS-4603) TestMiniDFSCluster fails on Windows

2013-03-15 Thread Ivan Mitic (JIRA)
Ivan Mitic created HDFS-4603:


 Summary: TestMiniDFSCluster fails on Windows
 Key: HDFS-4603
 URL: https://issues.apache.org/jira/browse/HDFS-4603
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic


Test fails on the below assert:
{code}
testClusterWithoutSystemProperties(org.apache.hadoop.hdfs.TestMiniDFSCluster)  
Time elapsed: 8252 sec   FAILURE!
org.junit.ComparisonFailure: expected:...t\test\data\cluster1[/]data but 
was:...t\test\data\cluster1[\]data
at org.junit.Assert.assertEquals(Assert.java:123)
at org.junit.Assert.assertEquals(Assert.java:145)
at 
org.apache.hadoop.hdfs.TestMiniDFSCluster.testClusterWithoutSystemProperties(TestMiniDFSCluster.java:77)
{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4603) TestMiniDFSCluster fails on Windows

2013-03-15 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4603:
-

Status: Patch Available  (was: Open)

 TestMiniDFSCluster fails on Windows
 ---

 Key: HDFS-4603
 URL: https://issues.apache.org/jira/browse/HDFS-4603
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4603.patch


 Test fails on the below assert:
 {code}
 testClusterWithoutSystemProperties(org.apache.hadoop.hdfs.TestMiniDFSCluster) 
  Time elapsed: 8252 sec   FAILURE!
 org.junit.ComparisonFailure: expected:...t\test\data\cluster1[/]data but 
 was:...t\test\data\cluster1[\]data
   at org.junit.Assert.assertEquals(Assert.java:123)
   at org.junit.Assert.assertEquals(Assert.java:145)
   at 
 org.apache.hadoop.hdfs.TestMiniDFSCluster.testClusterWithoutSystemProperties(TestMiniDFSCluster.java:77)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4603) TestMiniDFSCluster fails on Windows

2013-03-15 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4603:
-

Attachment: HDFS-4603.patch

Attaching the patch. This is a common problem we've seen, where the slash 
direction in the expected and actual paths mismatches. The fix is to use 
java.io.File for the comparison, since it normalizes local paths in a 
consistent way. 
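To make the normalization point concrete, here is a minimal JDK-only sketch (written for illustration, not the actual patch; the class and method names are invented):

```java
import java.io.File;

/**
 * Sketch of why comparing raw path strings fails on Windows: one side may
 * use '/' while the other uses '\'. Wrapping both sides in java.io.File
 * normalizes the separators to the platform convention before comparing.
 */
public class PathCompareSketch {

    // Raw string comparison: separator-sensitive.
    static boolean rawEquals(String expected, String actual) {
        return expected.equals(actual);
    }

    // java.io.File normalizes separators, so "a/b" and "a\\b" compare
    // equal on Windows (on Unix a backslash is an ordinary character).
    static boolean fileEquals(String expected, String actual) {
        return new File(expected).equals(new File(actual));
    }

    public static void main(String[] args) {
        String expected = "data/cluster1/data";
        String actual = expected.replace('/', File.separatorChar);
        System.out.println("raw=" + rawEquals(expected, actual)
                + " file=" + fileEquals(expected, actual));
    }
}
```

On Windows the raw comparison reports a mismatch while the File-based one matches; on Unix both match because the separator is already '/'.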

 TestMiniDFSCluster fails on Windows
 ---

 Key: HDFS-4603
 URL: https://issues.apache.org/jira/browse/HDFS-4603
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4603.patch


 Test fails on the below assert:
 {code}
 testClusterWithoutSystemProperties(org.apache.hadoop.hdfs.TestMiniDFSCluster) 
  Time elapsed: 8252 sec   FAILURE!
 org.junit.ComparisonFailure: expected:...t\test\data\cluster1[/]data but 
 was:...t\test\data\cluster1[\]data
   at org.junit.Assert.assertEquals(Assert.java:123)
   at org.junit.Assert.assertEquals(Assert.java:145)
   at 
 org.apache.hadoop.hdfs.TestMiniDFSCluster.testClusterWithoutSystemProperties(TestMiniDFSCluster.java:77)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4604) TestJournalNode fails on Windows

2013-03-15 Thread Ivan Mitic (JIRA)
Ivan Mitic created HDFS-4604:


 Summary: TestJournalNode fails on Windows
 Key: HDFS-4604
 URL: https://issues.apache.org/jira/browse/HDFS-4604
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic


The test fails with the below assertion:

{code}
testFailToStartWithBadConfig(org.apache.hadoop.hdfs.qjournal.server.TestJournalNode)
  Time elapsed: 209 sec   FAILURE!
java.lang.AssertionError: Expected to find 'is not a directory' but got 
unexpected exception:java.lang.IllegalArgumentException: Journal dir 
'\dev\null' should be an absolute path
at 
org.apache.hadoop.hdfs.qjournal.server.JournalNode.validateAndCreateJournalDir(JournalNode.java:96)
at 
org.apache.hadoop.hdfs.qjournal.server.JournalNode.start(JournalNode.java:132)
at 
org.apache.hadoop.hdfs.qjournal.server.TestJournalNode.assertJNFailsToStart(TestJournalNode.java:294)
at 
org.apache.hadoop.hdfs.qjournal.server.TestJournalNode.testFailToStartWithBadConfig(TestJournalNode.java:281)
{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4604) TestJournalNode fails on Windows

2013-03-15 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4604:
-

Attachment: HDFS-4604.patch

Test fails on Windows because it assumes certain Unix local paths (like 
{{/dev/null}} and {{/proc/does-not-exist}}).

While working on the fix I noticed that the logic in 
JournalNode#validateAndCreateJournalDir uses java.io.File#canWrite(), which does 
not work as expected on Windows. To address that, I replaced most of the logic 
in validateAndCreateJournalDir with a call into DiskChecker#checkDir. I think 
this gets us the behavior we want here. Please comment if this sounds good.

One incremental improvement remains: the patch currently matches specific 
exception strings coming back from DiskChecker, so if someone later changes 
those strings, the HDFS tests could break by accident. 
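For reference, the shape of such a check is roughly the following JDK-only sketch (an approximation written for illustration, not Hadoop's actual DiskChecker code; names and exception messages are invented):

```java
import java.io.File;
import java.io.IOException;

/**
 * Illustrative sketch of a directory validation check: verify the path
 * exists (creating it if needed), is a directory, and is writable.
 * Writability is probed by actually creating a file, since
 * java.io.File#canWrite() is unreliable on Windows.
 */
public class JournalDirCheckSketch {

    static void checkDir(File dir) throws IOException {
        if (!dir.exists() && !dir.mkdirs()) {
            throw new IOException("Cannot create directory: " + dir);
        }
        if (!dir.isDirectory()) {
            throw new IOException(dir + " is not a directory");
        }
        // Probe writability with a real file instead of trusting canWrite().
        File probe = File.createTempFile("probe", null, dir);
        if (!probe.delete()) {
            throw new IOException("Cannot delete probe file in " + dir);
        }
    }

    public static void main(String[] args) throws IOException {
        File tmp = new File(System.getProperty("java.io.tmpdir"));
        checkDir(tmp);
        System.out.println("ok: " + tmp);
    }
}
```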

 TestJournalNode fails on Windows
 

 Key: HDFS-4604
 URL: https://issues.apache.org/jira/browse/HDFS-4604
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4604.patch


 The test fails with the below assertion:
 {code}
 testFailToStartWithBadConfig(org.apache.hadoop.hdfs.qjournal.server.TestJournalNode)
   Time elapsed: 209 sec   FAILURE!
 java.lang.AssertionError: Expected to find 'is not a directory' but got 
 unexpected exception:java.lang.IllegalArgumentException: Journal dir 
 '\dev\null' should be an absolute path
   at 
 org.apache.hadoop.hdfs.qjournal.server.JournalNode.validateAndCreateJournalDir(JournalNode.java:96)
   at 
 org.apache.hadoop.hdfs.qjournal.server.JournalNode.start(JournalNode.java:132)
   at 
 org.apache.hadoop.hdfs.qjournal.server.TestJournalNode.assertJNFailsToStart(TestJournalNode.java:294)
   at 
 org.apache.hadoop.hdfs.qjournal.server.TestJournalNode.testFailToStartWithBadConfig(TestJournalNode.java:281)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4582) TestHostsFiles fails on Windows

2013-03-13 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13601443#comment-13601443
 ] 

Ivan Mitic commented on HDFS-4582:
--

bq. Ivan, you can just name the patches as jira.patch. No need to mention 
trunk in the patch file name.
Thanks, will do so from now on.

Big thanks for committing this patch (and many others)!

 TestHostsFiles fails on Windows
 ---

 Key: HDFS-4582
 URL: https://issues.apache.org/jira/browse/HDFS-4582
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4582.trunk.patch


 Test failure details: 
 java.lang.AssertionError: dfshealth should contain localhost, got:!DOCTYPE 
 html
 html
 head
 link rel=stylesheet type=text/css href=/static/hadoop.css
 titleHadoop NameNode 127.0.0.1:53373/title
 /head
 body
 h1NameNode '127.0.0.1:53373' (active)/h1
 div class='dfstable'table
   trtd class='col1'Started:/tdtdSun Mar 10 22:45:28 PDT 2013/td/tr
   trtd class='col1'Version:/tdtd3.0.0-SNAPSHOT, 
 6094bfab6459e20eba44304ffc7e65c6416dfe18/td/tr
   trtd class='col1'Compiled:/tdtd2013-03-11T05:42Z by ivanmi from 
 trunk/td/tr
   trtd class='col1'Cluster ID:/tdtdtestClusterID/td/tr
   trtd class='col1'Block Pool 
 ID:/tdtdBP-549950874-10.120.2.171-1362980728518/td/tr
 /table/divbr /
 ba href=/nn_browsedfscontent.jspBrowse the filesystem/a/bbr
 ba href=/logs/NameNode Logs/a/b
 hr
 h3Cluster Summary/h3
 b div class=securitySecurity is emOFF/em/div/b
 b /b
 b div2 files and directories, 1 blocks = 3 total filesystem 
 objects./divdivHeap Memory used 90.61 MB is  49% of Commited Heap Memory 
 183.81 MB. Max Heap Memory is 2.66 GB. /divdivNon Heap Memory used 36.67 
 MB is 96% of  Commited Non Heap Memory 37.81 MB. Max Non Heap Memory is 130 
 MB./div/b
 div class=dfstable table
 tr class=rowAlt td id=col1 Configured Capacitytd id=col2 :td 
 id=col3 670.73 GBtr class=rowNormal td id=col1 DFS Usedtd 
 id=col2 :td id=col3 1.09 KBtr class=rowAlt td id=col1 Non DFS 
 Usedtd id=col2 :td id=col3 513.45 GBtr class=rowNormal td 
 id=col1 DFS Remainingtd id=col2 :td id=col3 157.28 GBtr 
 class=rowAlt td id=col1 DFS Used%td id=col2 :td id=col3 
 0.00%tr class=rowNormal td id=col1 DFS Remaining%td id=col2 :td 
 id=col3 23.45%tr class=rowAlt td id=col1 Block Pool Usedtd 
 id=col2 :td id=col3 1.09 KBtr class=rowNormal td id=col1 Block 
 Pool Used%td id=col2 :td id=col3 0.00%tr class=rowAlt td 
 id=col1 DataNodes usagestd id=col2 :td id=col3 Min %td id=col4 
 Median %td id=col5 Max %td id=col6 stdev %tr class=rowNormal td 
 id=col1 td id=col2 td id=col3 0.00%td id=col4 0.00%td 
 id=col5 0.00%td id=col6 0.00%tr class=rowAlt td id=col1 a 
 href=dfsnodelist.jsp?whatNodes=LIVELive Nodes/a td id=col2 :td 
 id=col3 4 (Decommissioned: 1)tr class=rowNormal td id=col1 a 
 href=dfsnodelist.jsp?whatNodes=DEADDead Nodes/a td id=col2 :td 
 id=col3 0 (Decommissioned: 0)tr class=rowAlt td id=col1 a 
 href=dfsnodelist.jsp?whatNodes=DECOMMISSIONINGDecommissioning Nodes/a 
 td id=col2 :td id=col3 0tr class=rowNormal td id=col1 
 title=Excludes missing blocks. Number of Under-Replicated Blockstd 
 id=col2 :td id=col3 0/table/divbr
 h3 NameNode Journal Status: /h3
 bCurrent transaction ID:/b 6br/
 div class=dfstable
 table class=storage title=NameNode Journals
 theadtrtdbJournal Manager/b/tdtdbState/b/td/tr/thead
 trtdFileJournalManager(root=I:\svn\tr\hadoop-common-project\hadoop-common\target\test\data\dfs\name1)/tdtdEditLogFileOutputStream(I:\svn\tr\hadoop-common-project\hadoop-common\target\test\data\dfs\name1\current\edits_inprogress_001)
 /td/tr
 trtdFileJournalManager(root=I:\svn\tr\hadoop-common-project\hadoop-common\target\test\data\dfs\name2)/tdtdEditLogFileOutputStream(I:\svn\tr\hadoop-common-project\hadoop-common\target\test\data\dfs\name2\current\edits_inprogress_001)
 /td/tr
 /table/div
 hr/
 h3 NameNode Storage: /h3div class=dfstable table class=storage 
 title=NameNode Storage
 theadtrtdbStorage 
 Directory/b/tdtdbType/b/tdtdbState/b/td/tr/theadtrtdI:\svn\tr\hadoop-common-project\hadoop-common\target\test\data\dfs\name1/tdtdIMAGE_AND_EDITS/tdtdActive/td/trtrtdI:\svn\tr\hadoop-common-project\hadoop-common\target\test\data\dfs\name2/tdtdIMAGE_AND_EDITS/tdtdActive/td/tr/table/div
 hr
 hr /
 a href='http://hadoop.apache.org/core'Hadoop/a, 2013.
 /body/html
   at org.junit.Assert.fail(Assert.java:91)
   at org.junit.Assert.assertTrue(Assert.java:43)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestHostsFiles.testHostsExcludeDfshealthJsp(TestHostsFiles.java:127)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 

[jira] [Commented] (HDFS-4287) HTTPFS tests fail on Windows

2013-03-12 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13599769#comment-13599769
 ] 

Ivan Mitic commented on HDFS-4287:
--

Thanks Chris, looks good! +1

 HTTPFS tests fail on Windows
 

 Key: HDFS-4287
 URL: https://issues.apache.org/jira/browse/HDFS-4287
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HDFS-4287.1.patch, HDFS-4287.2.patch, HDFS-4287.3.patch, 
 HDFS-4287.4.patch


 The HTTPFS tests have some platform-specific assumptions that cause the tests 
 to fail when run on Windows.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4583) TestNodeCount fails with: Timeout: excess replica count not equal to 2

2013-03-12 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13600247#comment-13600247
 ] 

Ivan Mitic commented on HDFS-4583:
--

Thanks for the review!

bq. The fix makes sense, but I didn't follow why the test consistently passes 
on Linux even without this patch. Is it because we get a different iteration 
order from bm.blocksMap.nodeIterator on Linux, and it just happens to get a 
non-excess node by coincidence?
Good question. I haven't debugged this side-by-side, but I suspect that is what 
is happening. The iterator goes over the datanodes that hold the block, so 
there is likely some timing involved as well. The test did pass for me once on 
Windows while I was debugging.
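To make the suspected mechanism concrete, here is a toy JDK-only sketch (invented names, not TestNodeCount's actual code) of how filtering by the excess set removes the dependence on iteration order:

```java
import java.util.List;
import java.util.Set;

/**
 * Toy illustration of the suspected failure mode: picking the first
 * datanode from an iterator with unspecified order may return one that
 * already holds an excess replica. Filtering against the excess set makes
 * the choice correct regardless of iteration order.
 */
public class NonExcessPickSketch {

    // Returns the first node that is not carrying an excess replica,
    // or null if every node is in the excess set.
    static String pickNonExcess(Iterable<String> nodes, Set<String> excess) {
        for (String node : nodes) {
            if (!excess.contains(node)) {
                return node;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(pickNonExcess(List.of("dn1", "dn2", "dn3"),
                Set.of("dn1")));
    }
}
```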

 TestNodeCount fails with: Timeout: excess replica count not equal to 2
 --

 Key: HDFS-4583
 URL: https://issues.apache.org/jira/browse/HDFS-4583
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4583.trunk.patch


 Test fails on the following assertion:
 java.util.concurrent.TimeoutException: Timeout: excess replica count not 
 equal to 2 for block blk_6432712012621304004_1002 after 2 msec.  Last 
 counts: live = 2, excess = 1, corrupt = 0
   at 
 org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.checkTimeout(TestNodeCount.java:155)
   at 
 org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.checkTimeout(TestNodeCount.java:149)
   at 
 org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.testNodeCount(TestNodeCount.java:133)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
   at 
 org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
   at 
 org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
   at 
 org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4586) TestDataDirs.testGetDataDirsFromURIs fails with all directories in dfs.datanode.data.dir are invalid

2013-03-12 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13600479#comment-13600479
 ] 

Ivan Mitic commented on HDFS-4586:
--

Nice, thanks Chris and Aaron for the review and commit!

 TestDataDirs.testGetDataDirsFromURIs fails with all directories in 
 dfs.datanode.data.dir are invalid
 

 Key: HDFS-4586
 URL: https://issues.apache.org/jira/browse/HDFS-4586
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Fix For: 3.0.0

 Attachments: HDFS-4586.trunk.2.patch, HDFS-4586.trunk.3.patch, 
 HDFS-4586.trunk.patch


 Error Message
 All directories in dfs.datanode.data.dir are invalid: /p1 /p2 /p3 
  Stacktrace
 {code}
 java.io.IOException: All directories in dfs.datanode.data.dir are invalid: 
 /p1 /p2 /p3 
   at 
 org.apache.hadoop.hdfs.server.datanode.DataNode.getDataDirsFromURIs(DataNode.java:1668)
   at 
 org.apache.hadoop.hdfs.server.datanode.TestDataDirs.testGetDataDirsFromURIs(TestDataDirs.java:53)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 {code}
 Seems like Jenkins will return -1 on all HDFS patches because of this (check 
 HDFS-4583)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-4582) TestHostsFiles fails on Windows

2013-03-11 Thread Ivan Mitic (JIRA)
Ivan Mitic created HDFS-4582:


 Summary: TestHostsFiles fails on Windows
 Key: HDFS-4582
 URL: https://issues.apache.org/jira/browse/HDFS-4582
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic


Test failure details: 

[dfshealth HTML dump and stack trace omitted; identical to the output quoted in 
the HDFS-4582 comment above]

[jira] [Updated] (HDFS-4582) TestHostsFiles fails on Windows

2013-03-11 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4582:
-

Status: Patch Available  (was: Open)

 TestHostsFiles fails on Windows
 ---

 Key: HDFS-4582
 URL: https://issues.apache.org/jira/browse/HDFS-4582
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4582.trunk.patch


 Test failure details: 
 [dfshealth HTML dump and stack trace omitted; identical to the output quoted 
 in the HDFS-4582 comment above]

[jira] [Updated] (HDFS-4582) TestHostsFiles fails on Windows

2013-03-11 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4582:
-

Attachment: HDFS-4582.trunk.patch

Attaching the patch.

The issue is related to localhost resolving to 127.0.0.1 on Windows. The 
behavior was discussed in detail in HADOOP-8414. 

The fix is to update the test not to assume the literal localhost when 
comparing the output.
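A minimal sketch of the more tolerant assertion (a hypothetical helper written for illustration; the actual test change may differ):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

/**
 * Sketch: the dfshealth page may render the local node as "localhost" or
 * as its loopback address (e.g. "127.0.0.1"), depending on platform name
 * resolution. A robust check accepts either form instead of assuming the
 * literal string "localhost".
 */
public class LocalhostCheckSketch {

    static boolean mentionsLocalAddress(String pageText) {
        String loopback;
        try {
            loopback = InetAddress.getByName("localhost").getHostAddress();
        } catch (UnknownHostException e) {
            loopback = "127.0.0.1";  // conservative fallback
        }
        return pageText.contains("localhost")
                || pageText.contains(loopback)
                || pageText.contains("127.0.0.1");
    }

    public static void main(String[] args) {
        System.out.println(
                mentionsLocalAddress("NameNode '127.0.0.1:53373' (active)"));
    }
}
```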

 TestHostsFiles fails on Windows
 ---

 Key: HDFS-4582
 URL: https://issues.apache.org/jira/browse/HDFS-4582
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4582.trunk.patch


 Test failure details: 
 [dfshealth HTML dump and stack trace omitted; identical to the output quoted 
 in the HDFS-4582 comment above]

[jira] [Created] (HDFS-4583) TestNodeCount fails with: Timeout: excess replica count not equal to 2

2013-03-11 Thread Ivan Mitic (JIRA)
Ivan Mitic created HDFS-4583:


 Summary: TestNodeCount fails with: Timeout: excess replica count 
not equal to 2
 Key: HDFS-4583
 URL: https://issues.apache.org/jira/browse/HDFS-4583
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic


Test fails on the following assertion:

java.util.concurrent.TimeoutException: Timeout: excess replica count not equal 
to 2 for block blk_6432712012621304004_1002 after 2 msec.  Last counts: 
live = 2, excess = 1, corrupt = 0
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.checkTimeout(TestNodeCount.java:155)
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.checkTimeout(TestNodeCount.java:149)
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.testNodeCount(TestNodeCount.java:133)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at 
org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
at 
org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
at 
org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4583) TestNodeCount fails with: Timeout: excess replica count not equal to 2

2013-03-11 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4583:
-

Status: Patch Available  (was: Open)

 TestNodeCount fails with: Timeout: excess replica count not equal to 2
 --

 Key: HDFS-4583
 URL: https://issues.apache.org/jira/browse/HDFS-4583
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4583.trunk.patch





[jira] [Updated] (HDFS-4583) TestNodeCount fails with: Timeout: excess replica count not equal to 2

2013-03-11 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4583:
-

Attachment: HDFS-4583.trunk.patch

Attaching the patch.

The test consistently fails on Windows because it incorrectly picks the excess 
datanode at the point where it is explicitly looking for a non-excess node. 
This happens because {{blocks.contains()}} tries to match an ExtendedBlock 
object against a collection of Block objects, which always fails.

From the test log:
{quote}
2013-03-11 01:26:48,890 INFO  BlockStateChange 
(BlockManager.java:chooseExcessReplicates(2439)) - BLOCK* 
chooseExcessReplicates: (127.0.0.1:61959, blk_6432712012621304004_1002) is 
added to invalidated blocks set
...
...
...
2013-03-11 01:26:48,989 INFO  hdfs.MiniDFSCluster 
(MiniDFSCluster.java:stopDataNode(1607)) - DN name= 127.0.0.1:61959  found 
DN=DataNode{data=FSDataset 
{quote}
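This equals()/contains() mismatch can be illustrated with a minimal,
self-contained sketch. The Block and ExtendedBlock classes below are simplified
stand-ins for Hadoop's real org.apache.hadoop.hdfs.protocol types, and the
"fix" only illustrates the idea of comparing against the wrapped local block,
not the actual patch:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

/** Simplified stand-in for Hadoop's Block (identity: block id only). */
class Block {
    final long id;
    Block(long id) { this.id = id; }
    @Override public boolean equals(Object o) {
        return (o instanceof Block) && ((Block) o).id == id;
    }
    @Override public int hashCode() { return Objects.hash(id); }
}

/** Simplified stand-in for ExtendedBlock: a Block plus a block pool id. */
class ExtendedBlock {
    final String poolId;
    final Block block;
    ExtendedBlock(String poolId, Block block) { this.poolId = poolId; this.block = block; }
    @Override public boolean equals(Object o) {
        if (!(o instanceof ExtendedBlock)) return false; // a plain Block never matches
        ExtendedBlock e = (ExtendedBlock) o;
        return poolId.equals(e.poolId) && block.equals(e.block);
    }
    @Override public int hashCode() { return Objects.hash(poolId, block); }
}

public class BlockContainsDemo {
    /** The bug: List.contains() invokes the argument's equals(), so an
     *  ExtendedBlock is compared against Block elements and never matches. */
    static boolean buggyContains() {
        List<Block> blocks = new ArrayList<>();
        blocks.add(new Block(1002L));
        return blocks.contains(new ExtendedBlock("pool-1", new Block(1002L)));
    }

    /** The idea behind the fix: compare against the wrapped local Block. */
    static boolean fixedContains() {
        List<Block> blocks = new ArrayList<>();
        blocks.add(new Block(1002L));
        ExtendedBlock eb = new ExtendedBlock("pool-1", new Block(1002L));
        return blocks.contains(eb.block);
    }

    public static void main(String[] args) {
        System.out.println(buggyContains() + " " + fixedContains()); // prints "false true"
    }
}
```

Because contains() delegates to the argument's equals(), the mismatch is silent:
the call compiles and simply always returns false.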


 TestNodeCount fails with: Timeout: excess replica count not equal to 2
 --

 Key: HDFS-4583
 URL: https://issues.apache.org/jira/browse/HDFS-4583
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4583.trunk.patch





[jira] [Commented] (HDFS-4583) TestNodeCount fails with: Timeout: excess replica count not equal to 2

2013-03-11 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13599222#comment-13599222
 ] 

Ivan Mitic commented on HDFS-4583:
--

I was able to repro the TestDataDirs failure; it seems orthogonal to this 
patch. I will take a look at what is going on.

No repro for TestBlocksWithNotEnoughRacks.

 TestNodeCount fails with: Timeout: excess replica count not equal to 2
 --

 Key: HDFS-4583
 URL: https://issues.apache.org/jira/browse/HDFS-4583
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4583.trunk.patch





[jira] [Created] (HDFS-4586) TestDataDirs.testGetDataDirsFromURIs fails with all directories in dfs.datanode.data.dir are invalid

2013-03-11 Thread Ivan Mitic (JIRA)
Ivan Mitic created HDFS-4586:


 Summary: TestDataDirs.testGetDataDirsFromURIs fails with all 
directories in dfs.datanode.data.dir are invalid
 Key: HDFS-4586
 URL: https://issues.apache.org/jira/browse/HDFS-4586
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic


Error Message
All directories in dfs.datanode.data.dir are invalid: /p1 /p2 /p3 
 Stacktrace
{code}
java.io.IOException: All directories in dfs.datanode.data.dir are invalid: 
/p1 /p2 /p3 
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.getDataDirsFromURIs(DataNode.java:1668)
at 
org.apache.hadoop.hdfs.server.datanode.TestDataDirs.testGetDataDirsFromURIs(TestDataDirs.java:53)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
{code}

It seems like Jenkins will return -1 on all HDFS patches because of this (see 
HDFS-4583).



[jira] [Commented] (HDFS-4586) TestDataDirs.testGetDataDirsFromURIs fails with all directories in dfs.datanode.data.dir are invalid

2013-03-11 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13599261#comment-13599261
 ] 

Ivan Mitic commented on HDFS-4586:
--

This seems to be a regression introduced with HADOOP-8973. TestDataDirs mocks 
based on the previous implementation of DiskChecker#checkDir. This breaks the 
layering between projects; to prevent it from happening again, I will mock 
DiskChecker#checkDir instead. Will post a patch for this soon. Please comment 
if you think differently.


 TestDataDirs.testGetDataDirsFromURIs fails with all directories in 
 dfs.datanode.data.dir are invalid
 

 Key: HDFS-4586
 URL: https://issues.apache.org/jira/browse/HDFS-4586
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic




[jira] [Commented] (HDFS-4586) TestDataDirs.testGetDataDirsFromURIs fails with all directories in dfs.datanode.data.dir are invalid

2013-03-11 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13599292#comment-13599292
 ] 

Ivan Mitic commented on HDFS-4586:
--

Thanks, although I'm not sure if mocking of a static method is feasible in 
Mockito :(

 TestDataDirs.testGetDataDirsFromURIs fails with all directories in 
 dfs.datanode.data.dir are invalid
 

 Key: HDFS-4586
 URL: https://issues.apache.org/jira/browse/HDFS-4586
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic




[jira] [Updated] (HDFS-4586) TestDataDirs.testGetDataDirsFromURIs fails with all directories in dfs.datanode.data.dir are invalid

2013-03-11 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4586:
-

Attachment: HDFS-4586.trunk.patch

Attaching the patch.

The current version of Mockito used in Hadoop does not support mocking of 
static methods. There is some controversy about adding this support in general 
(as discussed in HADOOP-8973), so this was not an option. Instead, I abstracted 
the DiskChecker static method out into a wrapper object that allowed me to mock 
it.

I believe this addresses the layering issue I mentioned above and is in line 
with guidelines around mocking. The downside is the addition of a new wrapper 
helper class. Please comment if you have suggestions for improvement.
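The wrapper approach can be sketched as follows. All class and method names
below are hypothetical stand-ins (the real DiskChecker lives in
org.apache.hadoop.util, and this is not the actual HDFS-4586 patch); the fake
shown in main() is just one way to substitute the wrapper — a Mockito mock of
DiskCheckerWrapper works the same way:

```java
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

/** Simplified stand-in for the static org.apache.hadoop.util.DiskChecker#checkDir. */
class DiskChecker {
    static void checkDir(File dir) throws IOException {
        if (!dir.isDirectory()) throw new IOException("Invalid directory: " + dir);
    }
}

/** Thin instance wrapper: instance methods can be mocked, statics cannot. */
class DiskCheckerWrapper {
    void checkDir(File dir) throws IOException {
        DiskChecker.checkDir(dir);
    }
}

public class DataDirsDemo {
    /** Code under test depends on the wrapper instead of the static method. */
    static List<File> getValidDirs(List<File> candidates, DiskCheckerWrapper checker) {
        List<File> valid = new ArrayList<>();
        for (File d : candidates) {
            try {
                checker.checkDir(d);
                valid.add(d);
            } catch (IOException ignored) {
                // skip directories that fail the check
            }
        }
        return valid;
    }

    public static void main(String[] args) {
        // A test substitutes a fake (or a Mockito mock of DiskCheckerWrapper)
        // so nonexistent paths such as /p1, /p2, /p3 pass the check.
        DiskCheckerWrapper fake = new DiskCheckerWrapper() {
            @Override void checkDir(File dir) { /* accept every directory */ }
        };
        List<File> dirs = Arrays.asList(new File("/p1"), new File("/p2"), new File("/p3"));
        System.out.println(getValidDirs(dirs, fake).size()); // prints 3
    }
}
```

With this shape, a test can write `mock(DiskCheckerWrapper.class)` and stub 
`checkDir` per test case, with no static-mocking support required.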

 TestDataDirs.testGetDataDirsFromURIs fails with all directories in 
 dfs.datanode.data.dir are invalid
 

 Key: HDFS-4586
 URL: https://issues.apache.org/jira/browse/HDFS-4586
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4586.trunk.patch





[jira] [Commented] (HDFS-4583) TestNodeCount fails with: Timeout: excess replica count not equal to 2

2013-03-11 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13599400#comment-13599400
 ] 

Ivan Mitic commented on HDFS-4583:
--

HDFS-4586 tracks the failure in TestDataDirs.

 TestNodeCount fails with: Timeout: excess replica count not equal to 2
 --

 Key: HDFS-4583
 URL: https://issues.apache.org/jira/browse/HDFS-4583
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4583.trunk.patch





[jira] [Updated] (HDFS-4586) TestDataDirs.testGetDataDirsFromURIs fails with all directories in dfs.datanode.data.dir are invalid

2013-03-11 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4586:
-

Attachment: HDFS-4586.trunk.2.patch

Thanks Chris, good catch! Attaching updated patch. 

 TestDataDirs.testGetDataDirsFromURIs fails with all directories in 
 dfs.datanode.data.dir are invalid
 

 Key: HDFS-4586
 URL: https://issues.apache.org/jira/browse/HDFS-4586
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4586.trunk.2.patch, HDFS-4586.trunk.patch





[jira] [Commented] (HDFS-4287) HTTPFS tests fail on Windows

2013-03-11 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13599418#comment-13599418
 ] 

Ivan Mitic commented on HDFS-4287:
--

Patch looks good overall Chris.

One minor comment though: isn't File#isAbsolute() more appropriate below, given 
that we're talking about local paths (not DFS paths)?
{code}
if (!new Path(value).isUriPathAbsolute()) {
{code}

 HTTPFS tests fail on Windows
 

 Key: HDFS-4287
 URL: https://issues.apache.org/jira/browse/HDFS-4287
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HDFS-4287.1.patch, HDFS-4287.2.patch


 The HTTPFS tests have some platform-specific assumptions that cause the tests 
 to fail when run on Windows.



[jira] [Commented] (HDFS-4287) HTTPFS tests fail on Windows

2013-03-11 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13599419#comment-13599419
 ] 

Ivan Mitic commented on HDFS-4287:
--

PS. Same comment for TestDirHelper.

 HTTPFS tests fail on Windows
 

 Key: HDFS-4287
 URL: https://issues.apache.org/jira/browse/HDFS-4287
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HDFS-4287.1.patch, HDFS-4287.2.patch


 The HTTPFS tests have some platform-specific assumptions that cause the tests 
 to fail when run on Windows.



[jira] [Commented] (HDFS-4572) Fix TestJournal failures on Windows

2013-03-11 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13599441#comment-13599441
 ] 

Ivan Mitic commented on HDFS-4572:
--

Hi Arpit, we already filed HDFS-4586 for the TestDataDirs failure. Will resolve 
yours as a duplicate.

 Fix TestJournal failures on Windows
 ---

 Key: HDFS-4572
 URL: https://issues.apache.org/jira/browse/HDFS-4572
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode, test
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal
 Fix For: 3.0.0

 Attachments: HDFS-4572.patch, HDFS-4572.patch, HDFS-4572.patch


 Multiple test failures in TestJournal. Windows is stricter about restricting 
 access to in-use files.



[jira] [Resolved] (HDFS-4589) TestDataDirs fails on trunk

2013-03-11 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic resolved HDFS-4589.
--

Resolution: Duplicate

Dupe of HDFS-4586.

 TestDataDirs fails on trunk
 ---

 Key: HDFS-4589
 URL: https://issues.apache.org/jira/browse/HDFS-4589
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Arpit Agarwal
 Fix For: 3.0.0


Exception details:
java.io.IOException: All directories in dfs.datanode.data.dir are invalid: /p1 /p2 /p3
  at org.apache.hadoop.hdfs.server.datanode.DataNode.getDataDirsFromURIs(DataNode.java:1668)
  at org.apache.hadoop.hdfs.server.datanode.TestDataDirs.testGetDataDirsFromURIs(TestDataDirs.java:53)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
  at java.lang.reflect.Method.invoke(Method.java:597)
  at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
  at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
  at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
  at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
  at org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
  at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
  at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
  at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
  at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
  at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
  at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
  at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
  at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
  at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
  at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
  at java.lang.reflect.Method.invoke(Method.java:597)
  at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
  at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
  at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
  at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
  at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HDFS-4586) TestDataDirs.testGetDataDirsFromURIs fails with all directories in dfs.datanode.data.dir are invalid

2013-03-11 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4586:
-

Status: Patch Available  (was: Open)

 TestDataDirs.testGetDataDirsFromURIs fails with all directories in 
 dfs.datanode.data.dir are invalid
 

 Key: HDFS-4586
 URL: https://issues.apache.org/jira/browse/HDFS-4586
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4586.trunk.2.patch, HDFS-4586.trunk.patch


 Error Message
 All directories in dfs.datanode.data.dir are invalid: /p1 /p2 /p3 
  Stacktrace
 {code}
 java.io.IOException: All directories in dfs.datanode.data.dir are invalid: /p1 /p2 /p3
   at org.apache.hadoop.hdfs.server.datanode.DataNode.getDataDirsFromURIs(DataNode.java:1668)
   at org.apache.hadoop.hdfs.server.datanode.TestDataDirs.testGetDataDirsFromURIs(TestDataDirs.java:53)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 {code}
 It seems Jenkins will return -1 on all HDFS patches because of this (see HDFS-4583).



[jira] [Updated] (HDFS-4586) TestDataDirs.testGetDataDirsFromURIs fails with all directories in dfs.datanode.data.dir are invalid

2013-03-11 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HDFS-4586:
-

Attachment: HDFS-4586.trunk.3.patch

New patch, addressing the test timeout.

 TestDataDirs.testGetDataDirsFromURIs fails with all directories in 
 dfs.datanode.data.dir are invalid
 

 Key: HDFS-4586
 URL: https://issues.apache.org/jira/browse/HDFS-4586
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HDFS-4586.trunk.2.patch, HDFS-4586.trunk.3.patch, 
 HDFS-4586.trunk.patch


 Error Message
 All directories in dfs.datanode.data.dir are invalid: /p1 /p2 /p3 
  Stacktrace
 {code}
 java.io.IOException: All directories in dfs.datanode.data.dir are invalid: /p1 /p2 /p3
   at org.apache.hadoop.hdfs.server.datanode.DataNode.getDataDirsFromURIs(DataNode.java:1668)
   at org.apache.hadoop.hdfs.server.datanode.TestDataDirs.testGetDataDirsFromURIs(TestDataDirs.java:53)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 {code}
 It seems Jenkins will return -1 on all HDFS patches because of this (see HDFS-4583).



[jira] [Commented] (HDFS-4287) HTTPFS tests fail on Windows

2013-03-11 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13599556#comment-13599556
 ] 

Ivan Mitic commented on HDFS-4287:
--

Thanks Chris for addressing the comments! One minor comment, otherwise +1 from me.

 - TestServer#contructorsGetters(): I think we can avoid the {{if WINDOWS}} check and the hardcoded {{C:/}} if we have a {{File}} object for a {{drive}} and then pass {{new File(drive, child).getAbsolutePath()}} to the {{Server}} constructor and the asserts. You can use {{test.build.data}} for a test base dir. Make sense?

I verified that the tests now pass on my box.
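To illustrate the suggestion above, here is a minimal sketch of building the expected path with {{java.io.File}} instead of hardcoding a drive letter. Names like {{baseDir}} and {{child}} are illustrative, not the actual TestServer code:

```java
import java.io.File;

public class PathExample {
    public static void main(String[] args) {
        // Hypothetical test base dir; a real test would read the
        // test.build.data system property set by the build.
        File baseDir = new File(System.getProperty("test.build.data", "target/test-data"));
        String child = "server";

        // Letting java.io.File resolve the absolute path keeps the test
        // platform-independent: on Windows the absolute path naturally
        // carries its drive letter and native separators, so no
        // "if WINDOWS" branch or hardcoded "C:/" prefix is needed.
        String expected = new File(baseDir, child).getAbsolutePath();
        System.out.println(expected);
    }
}
```

The same {{expected}} string can then be passed to both the constructor under test and the assertions, so the two always agree regardless of platform.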

 HTTPFS tests fail on Windows
 

 Key: HDFS-4287
 URL: https://issues.apache.org/jira/browse/HDFS-4287
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HDFS-4287.1.patch, HDFS-4287.2.patch, HDFS-4287.3.patch


 The HTTPFS tests have some platform-specific assumptions that cause the tests 
 to fail when run on Windows.


