[jira] [Updated] (HADOOP-10142) Avoid groups lookup for unprivileged users such as dr.who

2013-12-10 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xi Fang updated HADOOP-10142:
-

Attachment: HADOOP-10142-branch-1.2.patch

 Avoid groups lookup for unprivileged users such as dr.who
 ---

 Key: HADOOP-10142
 URL: https://issues.apache.org/jira/browse/HADOOP-10142
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Vinay
Assignee: Vinay
 Fix For: 2.3.0

 Attachments: HADOOP-10142-branch-1.2.patch, 
 HADOOP-10142-branch-1.patch, HADOOP-10142.patch, HADOOP-10142.patch, 
 HADOOP-10142.patch, HADOOP-10142.patch


 Reduce the logs generated by ShellBasedUnixGroupsMapping.
 For example, using WebHDFS from Windows generates the following log for each request:
 {noformat}
 2013-12-03 11:34:56,589 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping: got exception trying to get groups for user dr.who
 org.apache.hadoop.util.Shell$ExitCodeException: id: dr.who: No such user
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:504)
 at org.apache.hadoop.util.Shell.run(Shell.java:417)
 at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:636)
 at org.apache.hadoop.util.Shell.execCommand(Shell.java:725)
 at org.apache.hadoop.util.Shell.execCommand(Shell.java:708)
 at 
 org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:83)
 at 
 org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:52)
 at 
 org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
 at org.apache.hadoop.security.Groups.getGroups(Groups.java:95)
 at 
 org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1376)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.init(FSPermissionChecker.java:63)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getPermissionChecker(FSNamesystem.java:3228)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(FSNamesystem.java:4063)
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListing(FSNamesystem.java:4052)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getListing(NameNodeRpcServer.java:748)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getDirectoryListing(NamenodeWebHdfsMethods.java:715)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getListingStream(NamenodeWebHdfsMethods.java:727)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.get(NamenodeWebHdfsMethods.java:675)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.access$400(NamenodeWebHdfsMethods.java:114)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$3.run(NamenodeWebHdfsMethods.java:623)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods$3.run(NamenodeWebHdfsMethods.java:618)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1515)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.get(NamenodeWebHdfsMethods.java:618)
 at 
 org.apache.hadoop.hdfs.server.namenode.web.resources.NamenodeWebHdfsMethods.getRoot(NamenodeWebHdfsMethods.java:586)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at 
 com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
 at 
 com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
 at 
 com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
 at 
 com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
 at 
 com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
 at 
 com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
 at ...
 {noformat}

[jira] [Commented] (HADOOP-10142) Avoid groups lookup for unprivileged users such as dr.who

2013-12-10 Thread Xi Fang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13844946#comment-13844946
 ] 

Xi Fang commented on HADOOP-10142:
--

Hi [~cnauroth], thanks for pointing this out. I made a new patch 
(HADOOP-10142-branch-1.2.patch) and tried to keep the formatting as close to the 
original as possible. 

Thanks!

 Avoid groups lookup for unprivileged users such as dr.who
 ---

 Key: HADOOP-10142
 URL: https://issues.apache.org/jira/browse/HADOOP-10142
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Vinay
Assignee: Vinay
 Fix For: 2.3.0

 Attachments: HADOOP-10142-branch-1.2.patch, 
 HADOOP-10142-branch-1.patch, HADOOP-10142.patch, HADOOP-10142.patch, 
 HADOOP-10142.patch, HADOOP-10142.patch



[jira] [Updated] (HADOOP-10142) Avoid groups lookup for unprivileged users such as dr.who

2013-12-09 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xi Fang updated HADOOP-10142:
-

Attachment: HADOOP-10142-branch-1.patch

Thanks, Vinay. I backported this to branch-1-win and branch-1 
(HADOOP-10142-branch-1.patch).

 Avoid groups lookup for unprivileged users such as dr.who
 ---

 Key: HADOOP-10142
 URL: https://issues.apache.org/jira/browse/HADOOP-10142
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Vinay
Assignee: Vinay
 Fix For: 2.3.0

 Attachments: HADOOP-10142-branch-1.patch, HADOOP-10142.patch, 
 HADOOP-10142.patch, HADOOP-10142.patch, HADOOP-10142.patch



[jira] [Commented] (HADOOP-10142) Avoid groups lookup for unprivileged users such as dr.who

2013-12-06 Thread Xi Fang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13841842#comment-13841842
 ] 

Xi Fang commented on HADOOP-10142:
--

Thanks Vinay and Chris. I think this patch also solves our static user-to-group 
mapping problem in certain Windows deployments. It would be good to have this 
patch in the Hadoop codebase.
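
The general idea behind the fix can be sketched as follows. This is an illustrative sketch only, not the actual patch: the class name and the hard-coded user set are hypothetical, and the real change wires the unprivileged-user list through configuration. The point is to short-circuit the shell lookup for users with no OS account, returning an empty group list instead of shelling out to `id` and logging a warning on every request.

```java
import java.util.Collections;
import java.util.List;
import java.util.Set;

// Hypothetical sketch: skip the shell group lookup entirely for known
// unprivileged users such as "dr.who" (the WebHDFS anonymous user).
public class StaticUserGroupMapping {

  // In the real fix this set would come from configuration; it is
  // hard-coded here for illustration only.
  private static final Set<String> UNPRIVILEGED_USERS = Set.of("dr.who");

  public static List<String> getGroups(String user) {
    if (UNPRIVILEGED_USERS.contains(user)) {
      // No shell call, no warning logged: the user has no OS account.
      return Collections.emptyList();
    }
    return lookupViaShell(user);
  }

  private static List<String> lookupViaShell(String user) {
    // Stand-in for ShellBasedUnixGroupsMapping.getUnixGroups(user),
    // which shells out to "id" and throws for unknown users.
    return List.of(user);
  }
}
```

With such a short-circuit in place, a WebHDFS request from the anonymous dr.who user never reaches the shell, so the warning and stack trace above are never produced.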

 Avoid groups lookup for unprivileged users such as dr.who
 ---

 Key: HADOOP-10142
 URL: https://issues.apache.org/jira/browse/HADOOP-10142
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Vinay
Assignee: Vinay
 Attachments: HADOOP-10142.patch, HADOOP-10142.patch, 
 HADOOP-10142.patch, HADOOP-10142.patch



[jira] [Commented] (HADOOP-9802) Support Snappy codec on Windows.

2013-08-09 Thread Xi Fang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13735510#comment-13735510
 ] 

Xi Fang commented on HADOOP-9802:
-

It looks good to me. Thanks, Chris.

 Support Snappy codec on Windows.
 

 Key: HADOOP-9802
 URL: https://issues.apache.org/jira/browse/HADOOP-9802
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 3.0.0, 1-win, 2.1.1-beta
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-9802-branch-1-win.1.patch, 
 HADOOP-9802-trunk.1.patch, HADOOP-9802-trunk.2.patch, 
 HADOOP-9802-trunk.3.patch


 Build and test the existing Snappy codec on Windows.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9790) Job token path is not unquoted properly

2013-08-07 Thread Xi Fang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13732903#comment-13732903
 ] 

Xi Fang commented on HADOOP-9790:
-

Thanks Chuan and Chris

 Job token path is not unquoted properly
 ---

 Key: HADOOP-9790
 URL: https://issues.apache.org/jira/browse/HADOOP-9790
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Assignee: Xi Fang
 Attachments: HADOOP-9790.1.patch, HADOOP-9790.2.patch, stderr.txt


 Found during oozie unit tests (TestDistCpActionExecutor) and oozie ad-hoc 
 testing (distcp action).
 The error is:
 Exception reading 
 file:/D:/git/Monarch/project/oozie-monarch/core/target/test-data/minicluster/mapred/local/0_0/taskTracker/test/jobcache/job_20130725105336682_0001/jobToken.



[jira] [Updated] (HADOOP-9790) Job token path is not unquoted properly

2013-07-31 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xi Fang updated HADOOP-9790:


Attachment: HADOOP-9790.2.patch

 Job token path is not unquoted properly
 ---

 Key: HADOOP-9790
 URL: https://issues.apache.org/jira/browse/HADOOP-9790
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Assignee: Xi Fang
 Attachments: HADOOP-9790.1.patch, HADOOP-9790.2.patch, stderr.txt




[jira] [Commented] (HADOOP-9790) Job token path is not unquoted properly

2013-07-31 Thread Xi Fang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13725747#comment-13725747
 ] 

Xi Fang commented on HADOOP-9790:
-

A new patch was attached. I refactored the code using a common method in 
Shell.java.

 Job token path is not unquoted properly
 ---

 Key: HADOOP-9790
 URL: https://issues.apache.org/jira/browse/HADOOP-9790
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Assignee: Xi Fang
 Attachments: HADOOP-9790.1.patch, HADOOP-9790.2.patch, stderr.txt




[jira] [Updated] (HADOOP-9790) Job token path is not unquoted properly

2013-07-30 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xi Fang updated HADOOP-9790:


Attachment: stderr.txt

 Job token path is not unquoted properly
 ---

 Key: HADOOP-9790
 URL: https://issues.apache.org/jira/browse/HADOOP-9790
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Assignee: Xi Fang
 Attachments: HADOOP-9790.1.patch, stderr.txt




[jira] [Commented] (HADOOP-9790) Job token path is not unquoted properly

2013-07-30 Thread Xi Fang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13724122#comment-13724122
 ] 

Xi Fang commented on HADOOP-9790:
-

Thanks, [~daryn]. I am sorry, I didn't make the description clear. Attached is 
the stderr information. Basically, TestDistCpActionExecutor invokes DistCp in a 
MapReduce job, and a minicluster is launched to execute it. However, the 
jobToken file can't be found while the mapred job is running. See the error: 

file:/D:/git/Monarch/project/oozie-monarch/core/target/test-data/minicluster/mapred/local/0_0/taskTracker/test/jobcache/job_20130725105336682_0001/jobToken


The cause is the surrounding quotes around the path. As I mentioned in my 
previous comment, on Windows, when setting up environment variables for child 
tasks, we add quotes to the environment variable values to avoid having to 
escape special characters. In the Cmd shell, the quotation marks become part of 
the environment variable value, so when we read the environment variables in 
Hadoop code, we must remove the surrounding quotes.

Please let me know if you have any concerns or questions.
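
The unquoting step described above can be sketched as a small helper. The class and method names here are hypothetical, chosen for illustration; a later comment in this thread notes the logic was refactored into a common method in Shell.java.

```java
// Hypothetical sketch of the unquoting step described above. On Windows,
// an environment variable set as  set VAR="D:\some path"  keeps the
// quotation marks in its value, so consumers must strip them before use.
public class EnvUnquote {

  public static String stripSurroundingQuotes(String value) {
    if (value != null && value.length() >= 2
        && value.startsWith("\"") && value.endsWith("\"")) {
      return value.substring(1, value.length() - 1);
    }
    // Unquoted (or too short to hold a quote pair): return unchanged.
    return value;
  }
}
```

A consumer such as the DistCp code above would apply this to the value of `System.getenv(HADOOP_TOKEN_FILE_LOCATION)` before storing the path in the job configuration.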

 Job token path is not unquoted properly
 ---

 Key: HADOOP-9790
 URL: https://issues.apache.org/jira/browse/HADOOP-9790
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Assignee: Xi Fang
 Attachments: HADOOP-9790.1.patch, stderr.txt




[jira] [Created] (HADOOP-9790) Job token path is not unquoted properly

2013-07-29 Thread Xi Fang (JIRA)
Xi Fang created HADOOP-9790:
---

 Summary: Job token path is not unquoted properly
 Key: HADOOP-9790
 URL: https://issues.apache.org/jira/browse/HADOOP-9790
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Assignee: Xi Fang


Found during oozie unit tests (TestDistCpActionExecutor) and oozie ad-hoc 
testing (distcp action).
The error is:
Exception reading 
file:/D:/git/Monarch/project/oozie-monarch/core/target/test-data/minicluster/mapred/local/0_0/taskTracker/test/jobcache/job_20130725105336682_0001/jobToken.



[jira] [Commented] (HADOOP-9790) Job token path is not unquoted properly

2013-07-29 Thread Xi Fang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13723236#comment-13723236
 ] 

Xi Fang commented on HADOOP-9790:
-

This is a Hadoop bug related to 
https://issues.apache.org/jira/browse/MAPREDUCE-4374. On Windows, when setting 
up environment variables for child tasks, we add quotes to the environment 
variable values to avoid having to escape special characters. In the Cmd shell, 
the quotation marks become part of the environment variable value, so when we 
read the environment variables in Hadoop code, we should remove the 
surrounding quotes. We already have code in several places to do so; I suspect 
this is something we missed in a previous patch.

Debugging this issue directly is involved, because we would need to pass debug 
opts to Oozie, Hadoop, and the child tasks, which is difficult. I tried a 
different approach instead: searching for System.getenv() across the Oozie and 
Hadoop projects, I found two suspicious places.
In DistCp.java
{code}
  private static JobConf createJobConf(Configuration conf) {
    ...
    String tokenFile = System.getenv(HADOOP_TOKEN_FILE_LOCATION);
    if (tokenFile != null) {
      LOG.info("Setting env property for mapreduce.job.credentials.binary to: "
          + tokenFile);
      jobconf.set("mapreduce.job.credentials.binary", tokenFile);
    }
{code}

and in UserGroupInformation.java:
{code}
static UserGroupInformation getLoginUser() throws IOException {
  ...
  String fileLocation = System.getenv(HADOOP_TOKEN_FILE_LOCATION);
  if (fileLocation != null && isSecurityEnabled()) {
    // load the token storage file and put all of the tokens into the
    // user.
    Credentials cred = Credentials.readTokenStorageFile(
        new Path("file:///" + fileLocation), conf);
{code}

The first one is very likely the root cause of this Oozie test failure; see 
JobClient.readTokensFromFiles():
{code}
String binaryTokenFilename =
  conf.get("mapreduce.job.credentials.binary");
if (binaryTokenFilename != null) {
  Credentials binary =
    Credentials.readTokenStorageFile(new Path("file:///" +
        binaryTokenFilename), conf);
  credentials.addAll(binary);
}
{code}

The second one looks like a potential bug we may run into. We need to fix both.


 Job token path is not unquoted properly
 ---

 Key: HADOOP-9790
 URL: https://issues.apache.org/jira/browse/HADOOP-9790
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Assignee: Xi Fang



[jira] [Updated] (HADOOP-9790) Job token path is not unquoted properly

2013-07-29 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xi Fang updated HADOOP-9790:


Attachment: HADOOP-9790.1.patch

 Job token path is not unquoted properly
 ---

 Key: HADOOP-9790
 URL: https://issues.apache.org/jira/browse/HADOOP-9790
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Assignee: Xi Fang
 Attachments: HADOOP-9790.1.patch




[jira] [Commented] (HADOOP-9790) Job token path is not unquoted properly

2013-07-29 Thread Xi Fang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13723255#comment-13723255
 ] 

Xi Fang commented on HADOOP-9790:
-

I attached a patch based on my previous root-cause analysis. The Oozie test 
passed after this fix. If we see many such problems, an improvement would be to 
write a common function that removes quotes from file paths. 
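
Such a common function might look like the following at a call site that turns the token path into a file URI. The names are hypothetical, and a plain string stands in for org.apache.hadoop.fs.Path to keep the sketch self-contained.

```java
// Hypothetical sketch of the proposed common helper, applied where the
// token path from the environment is turned into a file:// URI.
public class TokenPathFix {

  // Strip one pair of surrounding double quotes, if present.
  static String unquote(String s) {
    return (s != null && s.length() >= 2
        && s.startsWith("\"") && s.endsWith("\""))
        ? s.substring(1, s.length() - 1) : s;
  }

  // The real code builds an org.apache.hadoop.fs.Path; a plain string
  // URI stands in here so the sketch is self-contained.
  static String tokenFileUri(String envValue) {
    return "file:///" + unquote(envValue);
  }
}
```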

 Job token path is not unquoted properly
 ---

 Key: HADOOP-9790
 URL: https://issues.apache.org/jira/browse/HADOOP-9790
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Assignee: Xi Fang
 Attachments: HADOOP-9790.1.patch


 Found during Oozie unit tests (TestDistCpActionExecutor) and Oozie ad-hoc 
 testing (the distcp action).
 The error is:
 Exception reading 
 file:/D:/git/Monarch/project/oozie-monarch/core/target/test-data/minicluster/mapred/local/0_0/taskTracker/test/jobcache/job_20130725105336682_0001/jobToken.



[jira] [Created] (HADOOP-9739) Branch-1-Win TestNNThroughputBenchmark failed

2013-07-16 Thread Xi Fang (JIRA)
Xi Fang created HADOOP-9739:
---

 Summary: Branch-1-Win TestNNThroughputBenchmark failed
 Key: HADOOP-9739
 URL: https://issues.apache.org/jira/browse/HADOOP-9739
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Xi Fang
Assignee: Xi Fang
Priority: Minor
 Fix For: 1-win


This test failed on both Windows and Linux.
Here is the error information.

Testcase: testNNThroughput took 36.221 sec
Caused an ERROR
NNThroughputBenchmark: cannot mkdir 
D:\condor\condor\build\test\dfs\hosts\exclude
java.io.IOException: NNThroughputBenchmark: cannot mkdir 
D:\condor\condor\build\test\dfs\hosts\exclude
at 
org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.<init>(NNThroughputBenchmark.java:111)
at 
org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.runBenchmark(NNThroughputBenchmark.java:1168)
at 
org.apache.hadoop.hdfs.server.namenode.TestNNThroughputBenchmark.testNNThroughput(TestNNThroughputBenchmark.java:38)

This test may not fail for the first run, but will fail for the second one.

The root cause is in the constructor of NNThroughputBenchmark

{code}
NNThroughputBenchmark(Configuration conf) throws IOException, LoginException {
...
  config.set("dfs.hosts.exclude", "${hadoop.tmp.dir}/dfs/hosts/exclude");
  File excludeFile = new File(config.get("dfs.hosts.exclude", "exclude"));
  if (!excludeFile.exists()) {
    if (!excludeFile.getParentFile().mkdirs())
      throw new IOException("NNThroughputBenchmark: cannot mkdir " + excludeFile);
  }
  new FileOutputStream(excludeFile).close();
{code}

excludeFile.getParentFile() may already exist, in which case 
excludeFile.getParentFile().mkdirs() returns false; the benchmark then 
incorrectly treats this as an error.
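A minimal sketch of the kind of fix described (an assumption about the patch's shape, not its exact diff): treat a false return from mkdirs() as an error only when the parent directory is genuinely missing.

```java
import java.io.File;
import java.io.IOException;

public class ExcludeFileSetup {
    // Fix sketch: mkdirs() returning false for an already-existing directory
    // is normal, so only throw when the parent does not exist at all.
    public static void ensureParentDir(File excludeFile) throws IOException {
        File parent = excludeFile.getParentFile();
        if (!parent.exists() && !parent.mkdirs()) {
            throw new IOException("NNThroughputBenchmark: cannot mkdir " + excludeFile);
        }
    }

    public static void main(String[] args) throws IOException {
        File exclude = new File(System.getProperty("java.io.tmpdir"),
                "nnbench-test/dfs/hosts/exclude");
        ensureParentDir(exclude);  // first run: creates the parent
        ensureParentDir(exclude);  // second run: parent exists, no exception
    }
}
```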



[jira] [Work started] (HADOOP-9739) Branch-1-Win TestNNThroughputBenchmark failed

2013-07-16 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-9739 started by Xi Fang.

 Branch-1-Win TestNNThroughputBenchmark failed
 -

 Key: HADOOP-9739
 URL: https://issues.apache.org/jira/browse/HADOOP-9739
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Xi Fang
Assignee: Xi Fang
Priority: Minor
 Fix For: 1-win

 Attachments: HADOOP-9739.1.patch


 This test failed on both Windows and Linux.
 Here is the error information.
 Testcase: testNNThroughput took 36.221 sec
   Caused an ERROR
 NNThroughputBenchmark: cannot mkdir 
 D:\condor\condor\build\test\dfs\hosts\exclude
 java.io.IOException: NNThroughputBenchmark: cannot mkdir 
 D:\condor\condor\build\test\dfs\hosts\exclude
   at 
 org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.<init>(NNThroughputBenchmark.java:111)
   at 
 org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.runBenchmark(NNThroughputBenchmark.java:1168)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestNNThroughputBenchmark.testNNThroughput(TestNNThroughputBenchmark.java:38)
 This test may not fail for the first run, but will fail for the second one.
 The root cause is in the constructor of NNThroughputBenchmark
 {code}
 NNThroughputBenchmark(Configuration conf) throws IOException, LoginException {
 ...
   config.set("dfs.hosts.exclude", "${hadoop.tmp.dir}/dfs/hosts/exclude");
   File excludeFile = new File(config.get("dfs.hosts.exclude", "exclude"));
   if (!excludeFile.exists()) {
     if (!excludeFile.getParentFile().mkdirs())
       throw new IOException("NNThroughputBenchmark: cannot mkdir " + excludeFile);
   }
   new FileOutputStream(excludeFile).close();
 {code}
 excludeFile.getParentFile() may already exist, in which case 
 excludeFile.getParentFile().mkdirs() returns false; the benchmark then 
 incorrectly treats this as an error.



[jira] [Updated] (HADOOP-9739) Branch-1-Win TestNNThroughputBenchmark failed

2013-07-16 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xi Fang updated HADOOP-9739:


Attachment: HADOOP-9739.1.patch

 Branch-1-Win TestNNThroughputBenchmark failed
 -

 Key: HADOOP-9739
 URL: https://issues.apache.org/jira/browse/HADOOP-9739
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Xi Fang
Assignee: Xi Fang
Priority: Minor
 Fix For: 1-win

 Attachments: HADOOP-9739.1.patch


 This test failed on both Windows and Linux.
 Here is the error information.
 Testcase: testNNThroughput took 36.221 sec
   Caused an ERROR
 NNThroughputBenchmark: cannot mkdir 
 D:\condor\condor\build\test\dfs\hosts\exclude
 java.io.IOException: NNThroughputBenchmark: cannot mkdir 
 D:\condor\condor\build\test\dfs\hosts\exclude
   at 
 org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.<init>(NNThroughputBenchmark.java:111)
   at 
 org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.runBenchmark(NNThroughputBenchmark.java:1168)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestNNThroughputBenchmark.testNNThroughput(TestNNThroughputBenchmark.java:38)
 This test may not fail for the first run, but will fail for the second one.
 The root cause is in the constructor of NNThroughputBenchmark
 {code}
 NNThroughputBenchmark(Configuration conf) throws IOException, LoginException {
 ...
   config.set("dfs.hosts.exclude", "${hadoop.tmp.dir}/dfs/hosts/exclude");
   File excludeFile = new File(config.get("dfs.hosts.exclude", "exclude"));
   if (!excludeFile.exists()) {
     if (!excludeFile.getParentFile().mkdirs())
       throw new IOException("NNThroughputBenchmark: cannot mkdir " + excludeFile);
   }
   new FileOutputStream(excludeFile).close();
 {code}
 excludeFile.getParentFile() may already exist, in which case 
 excludeFile.getParentFile().mkdirs() returns false; the benchmark then 
 incorrectly treats this as an error.



[jira] [Commented] (HADOOP-9739) Branch-1-Win TestNNThroughputBenchmark failed

2013-07-16 Thread Xi Fang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13710408#comment-13710408
 ] 

Xi Fang commented on HADOOP-9739:
-

A patch is attached. I added a check for the existence of the directory before 
calling mkdirs(). 

 Branch-1-Win TestNNThroughputBenchmark failed
 -

 Key: HADOOP-9739
 URL: https://issues.apache.org/jira/browse/HADOOP-9739
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Xi Fang
Assignee: Xi Fang
Priority: Minor
 Fix For: 1-win

 Attachments: HADOOP-9739.1.patch


 This test failed on both Windows and Linux.
 Here is the error information.
 Testcase: testNNThroughput took 36.221 sec
   Caused an ERROR
 NNThroughputBenchmark: cannot mkdir 
 D:\condor\condor\build\test\dfs\hosts\exclude
 java.io.IOException: NNThroughputBenchmark: cannot mkdir 
 D:\condor\condor\build\test\dfs\hosts\exclude
   at 
 org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.<init>(NNThroughputBenchmark.java:111)
   at 
 org.apache.hadoop.hdfs.server.namenode.NNThroughputBenchmark.runBenchmark(NNThroughputBenchmark.java:1168)
   at 
 org.apache.hadoop.hdfs.server.namenode.TestNNThroughputBenchmark.testNNThroughput(TestNNThroughputBenchmark.java:38)
 This test may not fail for the first run, but will fail for the second one.
 The root cause is in the constructor of NNThroughputBenchmark
 {code}
 NNThroughputBenchmark(Configuration conf) throws IOException, LoginException {
 ...
   config.set("dfs.hosts.exclude", "${hadoop.tmp.dir}/dfs/hosts/exclude");
   File excludeFile = new File(config.get("dfs.hosts.exclude", "exclude"));
   if (!excludeFile.exists()) {
     if (!excludeFile.getParentFile().mkdirs())
       throw new IOException("NNThroughputBenchmark: cannot mkdir " + excludeFile);
   }
   new FileOutputStream(excludeFile).close();
 {code}
 excludeFile.getParentFile() may already exist, in which case 
 excludeFile.getParentFile().mkdirs() returns false; the benchmark then 
 incorrectly treats this as an error.



[jira] [Commented] (HADOOP-9722) Branch-1-win TestNativeIO failed caused by Window incompatible test case

2013-07-11 Thread Xi Fang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13705526#comment-13705526
 ] 

Xi Fang commented on HADOOP-9722:
-

Thanks Chris.

 Branch-1-win TestNativeIO failed caused by Window incompatible test case
 

 Key: HADOOP-9722
 URL: https://issues.apache.org/jira/browse/HADOOP-9722
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Assignee: Xi Fang
Priority: Minor
 Fix For: 1-win

 Attachments: HADOOP-9722.patch


 org.apache.hadoop.io.nativeio.TestNativeIO#testPosixFadvise() failed on 
 Windows. Here is the error information.
 \dev\zero (The system cannot find the path specified)
 java.io.FileNotFoundException: \dev\zero (The system cannot find the path 
 specified)
 at java.io.FileInputStream.open(Native Method)
 at java.io.FileInputStream.<init>(FileInputStream.java:120)
 at java.io.FileInputStream.<init>(FileInputStream.java:79)
 at 
 org.apache.hadoop.io.nativeio.TestNativeIO.testPosixFadvise(TestNativeIO.java:277)
 The root cause is that /dev/zero is used, and Windows does not have 
 devices like the Unix /dev/zero or /dev/random.



[jira] [Created] (HADOOP-9718) Branch-1-win TestGroupFallback#testGroupWithFallback() failed caused by java.lang.UnsatisfiedLinkError

2013-07-10 Thread Xi Fang (JIRA)
Xi Fang created HADOOP-9718:
---

 Summary: Branch-1-win TestGroupFallback#testGroupWithFallback() 
failed caused by java.lang.UnsatisfiedLinkError
 Key: HADOOP-9718
 URL: https://issues.apache.org/jira/browse/HADOOP-9718
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Assignee: Xi Fang
 Fix For: 1-win


Here is the error information:
org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Ljava/lang/String;)[Ljava/lang/String;
java.lang.UnsatisfiedLinkError: 
org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Ljava/lang/String;)[Ljava/lang/String;
at org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Native 
Method)
at 
org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroups(JniBasedUnixGroupsMapping.java:53)
at 
org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
at org.apache.hadoop.security.Groups.getGroups(Groups.java:79)
at 
org.apache.hadoop.security.TestGroupFallback.testGroupWithFallback(TestGroupFallback.java:77)
This is related to https://issues.apache.org/jira/browse/HADOOP-9232.



[jira] [Work started] (HADOOP-9718) Branch-1-win TestGroupFallback#testGroupWithFallback() failed caused by java.lang.UnsatisfiedLinkError

2013-07-10 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-9718 started by Xi Fang.

 Branch-1-win TestGroupFallback#testGroupWithFallback() failed caused by 
 java.lang.UnsatisfiedLinkError
 --

 Key: HADOOP-9718
 URL: https://issues.apache.org/jira/browse/HADOOP-9718
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Assignee: Xi Fang
 Fix For: 1-win

 Attachments: HADOOP-9718.patch


 Here is the error information:
 org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Ljava/lang/String;)[Ljava/lang/String;
 java.lang.UnsatisfiedLinkError: 
 org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Ljava/lang/String;)[Ljava/lang/String;
 at 
 org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Native 
 Method)
 at 
 org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroups(JniBasedUnixGroupsMapping.java:53)
 at 
 org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
 at org.apache.hadoop.security.Groups.getGroups(Groups.java:79)
 at 
 org.apache.hadoop.security.TestGroupFallback.testGroupWithFallback(TestGroupFallback.java:77)
 This is related to https://issues.apache.org/jira/browse/HADOOP-9232.



[jira] [Updated] (HADOOP-9718) Branch-1-win TestGroupFallback#testGroupWithFallback() failed caused by java.lang.UnsatisfiedLinkError

2013-07-10 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xi Fang updated HADOOP-9718:


Attachment: HADOOP-9718.patch

 Branch-1-win TestGroupFallback#testGroupWithFallback() failed caused by 
 java.lang.UnsatisfiedLinkError
 --

 Key: HADOOP-9718
 URL: https://issues.apache.org/jira/browse/HADOOP-9718
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Assignee: Xi Fang
 Fix For: 1-win

 Attachments: HADOOP-9718.patch


 Here is the error information:
 org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Ljava/lang/String;)[Ljava/lang/String;
 java.lang.UnsatisfiedLinkError: 
 org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Ljava/lang/String;)[Ljava/lang/String;
 at 
 org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Native 
 Method)
 at 
 org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroups(JniBasedUnixGroupsMapping.java:53)
 at 
 org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
 at org.apache.hadoop.security.Groups.getGroups(Groups.java:79)
 at 
 org.apache.hadoop.security.TestGroupFallback.testGroupWithFallback(TestGroupFallback.java:77)
 This is related to https://issues.apache.org/jira/browse/HADOOP-9232.



[jira] [Commented] (HADOOP-9718) Branch-1-win TestGroupFallback#testGroupWithFallback() failed caused by java.lang.UnsatisfiedLinkError

2013-07-10 Thread Xi Fang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13704831#comment-13704831
 ] 

Xi Fang commented on HADOOP-9718:
-

Backporting https://issues.apache.org/jira/browse/HADOOP-9232

 Branch-1-win TestGroupFallback#testGroupWithFallback() failed caused by 
 java.lang.UnsatisfiedLinkError
 --

 Key: HADOOP-9718
 URL: https://issues.apache.org/jira/browse/HADOOP-9718
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Assignee: Xi Fang
 Fix For: 1-win

 Attachments: HADOOP-9718.patch


 Here is the error information:
 org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Ljava/lang/String;)[Ljava/lang/String;
 java.lang.UnsatisfiedLinkError: 
 org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Ljava/lang/String;)[Ljava/lang/String;
 at 
 org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Native 
 Method)
 at 
 org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroups(JniBasedUnixGroupsMapping.java:53)
 at 
 org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
 at org.apache.hadoop.security.Groups.getGroups(Groups.java:79)
 at 
 org.apache.hadoop.security.TestGroupFallback.testGroupWithFallback(TestGroupFallback.java:77)
 This is related to https://issues.apache.org/jira/browse/HADOOP-9232.



[jira] [Commented] (HADOOP-9718) Branch-1-win TestGroupFallback#testGroupWithFallback() failed caused by java.lang.UnsatisfiedLinkError

2013-07-10 Thread Xi Fang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13704889#comment-13704889
 ] 

Xi Fang commented on HADOOP-9718:
-

Thanks Chris!

 Branch-1-win TestGroupFallback#testGroupWithFallback() failed caused by 
 java.lang.UnsatisfiedLinkError
 --

 Key: HADOOP-9718
 URL: https://issues.apache.org/jira/browse/HADOOP-9718
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Assignee: Xi Fang
 Fix For: 1-win

 Attachments: HADOOP-9718.patch


 Here is the error information:
 org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Ljava/lang/String;)[Ljava/lang/String;
 java.lang.UnsatisfiedLinkError: 
 org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Ljava/lang/String;)[Ljava/lang/String;
 at 
 org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroupForUser(Native 
 Method)
 at 
 org.apache.hadoop.security.JniBasedUnixGroupsMapping.getGroups(JniBasedUnixGroupsMapping.java:53)
 at 
 org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.getGroups(JniBasedUnixGroupsMappingWithFallback.java:50)
 at org.apache.hadoop.security.Groups.getGroups(Groups.java:79)
 at 
 org.apache.hadoop.security.TestGroupFallback.testGroupWithFallback(TestGroupFallback.java:77)
 This is related to https://issues.apache.org/jira/browse/HADOOP-9232.



[jira] [Created] (HADOOP-9719) Branch-1-win TestFsShellReturnCode#testChgrp() failed caused by incorrect exit codes

2013-07-10 Thread Xi Fang (JIRA)
Xi Fang created HADOOP-9719:
---

 Summary: Branch-1-win TestFsShellReturnCode#testChgrp() failed 
caused by incorrect exit codes
 Key: HADOOP-9719
 URL: https://issues.apache.org/jira/browse/HADOOP-9719
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 1-win
Reporter: Xi Fang
Assignee: Xi Fang
Priority: Minor
 Fix For: 1-win


TestFsShellReturnCode#testChgrp() failed when we try to use -chgrp to change 
group association of files to admin.
// Test 1: exit code for chgrp on existing file is 0
String argv[] = { "-chgrp", "admin", f1 };
verify(fs, "-chgrp", argv, 1, fsShell, 0);
.
On Windows, this is the error information:
org.apache.hadoop.util.Shell$ExitCodeException: GetSidFromAcctName error 
(1332): No mapping between account names and security IDs was done.
Invalid group name: admin
This test case passed previously, but it looks like it passed incorrectly 
because of another bug in FsShell#runCmdHandler 
(https://issues.apache.org/jira/browse/HADOOP-9502). The original code in 
FsShell#runCmdHandler may not return error exit codes for some exceptions (see 
private static int runCmdHandler(CmdHandler handler, FileStatus stat, 
FileSystem srcFs, boolean recursive) throws IOException {}). Therefore, in the 
original Branch-1-win, even if admin is not a valid group, no error is caught. 
The fix of HADOOP-9502 makes this test fail.

This test also failed on Linux.



[jira] [Work started] (HADOOP-9719) Branch-1-win TestFsShellReturnCode#testChgrp() failed caused by incorrect exit codes

2013-07-10 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-9719 started by Xi Fang.

 Branch-1-win TestFsShellReturnCode#testChgrp() failed caused by incorrect 
 exit codes
 

 Key: HADOOP-9719
 URL: https://issues.apache.org/jira/browse/HADOOP-9719
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 1-win
Reporter: Xi Fang
Assignee: Xi Fang
Priority: Minor
  Labels: test
 Fix For: 1-win


 TestFsShellReturnCode#testChgrp() failed when we try to use -chgrp to 
 change group association of files to admin.
 // Test 1: exit code for chgrp on existing file is 0
 String argv[] = { "-chgrp", "admin", f1 };
 verify(fs, "-chgrp", argv, 1, fsShell, 0);
 .
 On Windows, this is the error information:
 org.apache.hadoop.util.Shell$ExitCodeException: GetSidFromAcctName error 
 (1332): No mapping between account names and security IDs was done.
 Invalid group name: admin
 This test case passed previously, but it looks like it passed incorrectly 
 because of another bug in FsShell#runCmdHandler 
 (https://issues.apache.org/jira/browse/HADOOP-9502). The original code in 
 FsShell#runCmdHandler may not return error exit codes for some exceptions 
 (see private static int runCmdHandler(CmdHandler handler, FileStatus stat, 
 FileSystem srcFs, boolean recursive) throws IOException {}). Therefore, in 
 the original Branch-1-win, even if admin is not a valid group, no error is 
 caught. The fix of HADOOP-9502 makes this test fail.
 This test also failed on Linux.



[jira] [Updated] (HADOOP-9719) Branch-1-win TestFsShellReturnCode#testChgrp() failed caused by incorrect exit codes

2013-07-10 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xi Fang updated HADOOP-9719:


Attachment: HADOOP-9719.patch

 Branch-1-win TestFsShellReturnCode#testChgrp() failed caused by incorrect 
 exit codes
 

 Key: HADOOP-9719
 URL: https://issues.apache.org/jira/browse/HADOOP-9719
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 1-win
Reporter: Xi Fang
Assignee: Xi Fang
Priority: Minor
  Labels: test
 Fix For: 1-win

 Attachments: HADOOP-9719.patch


 TestFsShellReturnCode#testChgrp() failed when we try to use -chgrp to 
 change group association of files to admin.
 // Test 1: exit code for chgrp on existing file is 0
 String argv[] = { "-chgrp", "admin", f1 };
 verify(fs, "-chgrp", argv, 1, fsShell, 0);
 .
 On Windows, this is the error information:
 org.apache.hadoop.util.Shell$ExitCodeException: GetSidFromAcctName error 
 (1332): No mapping between account names and security IDs was done.
 Invalid group name: admin
 This test case passed previously, but it looks like it passed incorrectly 
 because of another bug in FsShell#runCmdHandler 
 (https://issues.apache.org/jira/browse/HADOOP-9502). The original code in 
 FsShell#runCmdHandler may not return error exit codes for some exceptions 
 (see private static int runCmdHandler(CmdHandler handler, FileStatus stat, 
 FileSystem srcFs, boolean recursive) throws IOException {}). Therefore, in 
 the original Branch-1-win, even if admin is not a valid group, no error is 
 caught. The fix of HADOOP-9502 makes this test fail.
 This test also failed on Linux.



[jira] [Updated] (HADOOP-9719) Branch-1-win TestFsShellReturnCode#testChgrp() failed caused by incorrect exit codes

2013-07-10 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xi Fang updated HADOOP-9719:


Description: 
TestFsShellReturnCode#testChgrp() failed when we try to use -chgrp to change 
group association of files to admin.
// Test 1: exit code for chgrp on existing file is 0
String argv[] = { "-chgrp", "admin", f1 };
verify(fs, "-chgrp", argv, 1, fsShell, 0);
.
On Windows, this is the error information:
org.apache.hadoop.util.Shell$ExitCodeException: GetSidFromAcctName error 
(1332): No mapping between account names and security IDs was done.
Invalid group name: admin
This test case passed previously, but it looks like it passed incorrectly 
because of another bug in FsShell#runCmdHandler 
(https://issues.apache.org/jira/browse/HADOOP-9502). The original code in 
FsShell#runCmdHandler may not return error exit codes for some exceptions (see 
private static int runCmdHandler(CmdHandler handler, FileStatus stat, 
FileSystem srcFs, boolean recursive) throws IOException {}). Therefore, in the 
original Branch-1-win, even if admin is not a valid group, no error is caught. 
The fix of HADOOP-9502 makes this test fail.

This test also failed on Linux.

  was:
TestFsShellReturnCode#testChgrp() failed when we try to use -chgrp to change 
group association of files to admin.
// Test 1: exit code for chgrp on existing file is 0
String argv[] = { "-chgrp", "admin", f1 };
verify(fs, "-chgrp", argv, 1, fsShell, 0);
.
On Windows, this is the error information:
org.apache.hadoop.util.Shell$ExitCodeException: GetSidFromAcctName error 
(1332): No mapping between account names and security IDs was done.
Invalid group name: admin
This test case passed previously, but it looks like it passed incorrectly 
because of another bug in FsShell#runCmdHandler 
(https://issues.apache.org/jira/browse/HADOOP-9502). The original code in 
FsShell#runCmdHandler may not return error exit codes for some exceptions (see 
private static int runCmdHandler(CmdHandler handler, FileStatus stat, 
FileSystem srcFs, boolean recursive) throws IOException {}). Therefore, in the 
original Branch-1-win, even if admin is not a valid group, no error is caught. 
The fix of HADOOP-9502 makes this test fail.

This test also failed on Linux.


 Branch-1-win TestFsShellReturnCode#testChgrp() failed caused by incorrect 
 exit codes
 

 Key: HADOOP-9719
 URL: https://issues.apache.org/jira/browse/HADOOP-9719
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 1-win
Reporter: Xi Fang
Assignee: Xi Fang
Priority: Minor
  Labels: test
 Fix For: 1-win

 Attachments: HADOOP-9719.patch


 TestFsShellReturnCode#testChgrp() failed when we try to use -chgrp to 
 change group association of files to admin.
 // Test 1: exit code for chgrp on existing file is 0
 String argv[] = { "-chgrp", "admin", f1 };
 verify(fs, "-chgrp", argv, 1, fsShell, 0);
 .
 On Windows, this is the error information:
 org.apache.hadoop.util.Shell$ExitCodeException: GetSidFromAcctName error 
 (1332): No mapping between account names and security IDs was done.
 Invalid group name: admin
 This test case passed previously, but it looks like it passed incorrectly 
 because of another bug in FsShell#runCmdHandler 
 (https://issues.apache.org/jira/browse/HADOOP-9502). The original code in 
 FsShell#runCmdHandler may not return error exit codes for some exceptions 
 (see private static int runCmdHandler(CmdHandler handler, FileStatus stat, 
 FileSystem srcFs, boolean recursive) throws IOException {}). Therefore, in 
 the original Branch-1-win, even if admin is not a valid group, no error is 
 caught. The fix of HADOOP-9502 makes this test fail.
 This test also failed on Linux.



[jira] [Commented] (HADOOP-9719) Branch-1-win TestFsShellReturnCode#testChgrp() failed caused by incorrect exit codes

2013-07-10 Thread Xi Fang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13705025#comment-13705025
 ] 

Xi Fang commented on HADOOP-9719:
-

A patch was attached. In testChgrp(), I replaced the hardcoded admin group with 
the group of the current user. I also found that admin in testChown() was not 
correct, although the test passed; I changed that as well. 
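The idea of deriving the test group from the current user can be sketched as follows; the group list here is a hypothetical stand-in, since the real patch would presumably query Hadoop's UserGroupInformation.getCurrentUser().getGroupNames():

```java
public class ChgrpTestGroup {
    // Hypothetical stand-in for Hadoop's
    // UserGroupInformation.getCurrentUser().getGroupNames().
    static String[] currentUserGroups() {
        return new String[] { "staff", "everyone" };
    }

    // Pick the current user's primary group instead of a hardcoded "admin",
    // so a chgrp test stays valid on any machine.
    public static String groupForChgrpTest() {
        String[] groups = currentUserGroups();
        if (groups.length == 0) {
            throw new IllegalStateException("current user belongs to no groups");
        }
        return groups[0];
    }

    public static void main(String[] args) {
        System.out.println(groupForChgrpTest());
    }
}
```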

 Branch-1-win TestFsShellReturnCode#testChgrp() failed caused by incorrect 
 exit codes
 

 Key: HADOOP-9719
 URL: https://issues.apache.org/jira/browse/HADOOP-9719
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 1-win
Reporter: Xi Fang
Assignee: Xi Fang
Priority: Minor
  Labels: test
 Fix For: 1-win

 Attachments: HADOOP-9719.patch


 TestFsShellReturnCode#testChgrp() failed when we try to use -chgrp to 
 change group association of files to admin.
 // Test 1: exit code for chgrp on existing file is 0
 String argv[] = { "-chgrp", "admin", f1 };
 verify(fs, "-chgrp", argv, 1, fsShell, 0);
 .
 On Windows, this is the error information:
 org.apache.hadoop.util.Shell$ExitCodeException: GetSidFromAcctName error 
 (1332): No mapping between account names and security IDs was done.
 Invalid group name: admin
 This test case passed previously, but it looks like it passed incorrectly 
 because of another bug in FsShell#runCmdHandler 
 (https://issues.apache.org/jira/browse/HADOOP-9502). The original code in 
 FsShell#runCmdHandler may not return error exit codes for some exceptions 
 (see private static int runCmdHandler(CmdHandler handler, FileStatus stat, 
 FileSystem srcFs, boolean recursive) throws IOException {}). Therefore, in 
 the original Branch-1-win, even if admin is not a valid group, no error is 
 caught. The fix of HADOOP-9502 makes this test fail.
 This test also failed on Linux.



[jira] [Updated] (HADOOP-9719) Branch-1-win TestFsShellReturnCode#testChgrp() failed caused by incorrect exit codes

2013-07-10 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xi Fang updated HADOOP-9719:


Description: 
TestFsShellReturnCode#testChgrp() failed when we tried to use -chgrp to change 
the group association of files to "admin".
{code}
// Test 1: exit code for chgrp on existing file is 0
String argv[] = { "-chgrp", "admin", "f1" };
verify(fs, "-chgrp", argv, 1, fsShell, 0);
{code}
On Windows, this is the error information:
org.apache.hadoop.util.Shell$ExitCodeException: GetSidFromAcctName error 
(1332): No mapping between account names and security IDs was done.
Invalid group name: admin
This test case passed previously, but it looks like it passed incorrectly 
because of another bug in FsShell#runCmdHandler 
(https://issues.apache.org/jira/browse/HADOOP-9502). The original code in 
FsShell#runCmdHandler may not return error exit codes for some exceptions (see 
private static int runCmdHandler(CmdHandler handler, FileStatus stat, 
FileSystem srcFs, boolean recursive) throws IOException {}). Therefore, in the 
previous Branch-1-win, even if "admin" was not a valid group, no error was 
caught. The fix for HADOOP-9502 makes this test fail.

This test also failed on Linux.
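The exit-code problem can be sketched in isolation (all names below are invented for illustration; this is not the actual FsShell code): a handler that catches an exception without counting it reports success, so a chgrp to an invalid group yields exit code 0 and the test passes spuriously.

```java
// Minimal sketch of an exit code swallowed by an over-broad catch block.
public class ExitCodeDemo {
    interface CmdHandler { void run(String group) throws java.io.IOException; }

    // Buggy variant: the catch block swallows the failure, so errors stays 0.
    static int runCmdHandlerBuggy(CmdHandler h, String group) {
        int errors = 0;
        try {
            h.run(group);
        } catch (java.io.IOException e) {
            // BUG: error not counted -> caller sees exit code 0
        }
        return errors;
    }

    // Fixed variant: any exception bumps the error count.
    static int runCmdHandlerFixed(CmdHandler h, String group) {
        int errors = 0;
        try {
            h.run(group);
        } catch (java.io.IOException e) {
            errors++;  // propagate the failure as a nonzero exit code
        }
        return errors;
    }

    // Stand-in for a chgrp that rejects unknown groups.
    static final CmdHandler CHGRP = group -> {
        if (!"validgroup".equals(group)) {
            throw new java.io.IOException("Invalid group name: " + group);
        }
    };

    public static void main(String[] args) {
        System.out.println(runCmdHandlerBuggy(CHGRP, "admin")); // 0: spurious success
        System.out.println(runCmdHandlerFixed(CHGRP, "admin")); // 1: real failure
    }
}
```

With the fixed handler in place, an invalid group finally surfaces as a nonzero exit code, which is exactly why the HADOOP-9502 fix made this previously "passing" test fail.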




[jira] [Commented] (HADOOP-9719) Branch-1-win TestFsShellReturnCode#testChgrp() failed caused by incorrect exit codes

2013-07-10 Thread Xi Fang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13705257#comment-13705257
 ] 

Xi Fang commented on HADOOP-9719:
-

Thanks Chris!



[jira] [Created] (HADOOP-9722) Branch-1-win TestNativeIO failed caused by Window incompatible test case

2013-07-10 Thread Xi Fang (JIRA)
Xi Fang created HADOOP-9722:
---

 Summary: Branch-1-win TestNativeIO failed caused by Window 
incompatible test case
 Key: HADOOP-9722
 URL: https://issues.apache.org/jira/browse/HADOOP-9722
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Assignee: Xi Fang
Priority: Minor
 Fix For: 1-win


org.apache.hadoop.io.nativeio.TestNativeIO#testPosixFadvise() failed on 
Windows. Here is the error information:
{noformat}
java.io.FileNotFoundException: \dev\zero (The system cannot find the path 
specified)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:120)
at java.io.FileInputStream.<init>(FileInputStream.java:79)
at 
org.apache.hadoop.io.nativeio.TestNativeIO.testPosixFadvise(TestNativeIO.java:277)
{noformat}
The root cause is that the test uses /dev/zero, and Windows has no devices 
like the Unix /dev/zero or /dev/random.



[jira] [Updated] (HADOOP-9722) Branch-1-win TestNativeIO failed caused by Window incompatible test case

2013-07-10 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xi Fang updated HADOOP-9722:


Attachment: HADOOP-9722.patch

A patch was attached. On Windows, we skip testPosixFadvise. 
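The guard can be sketched as follows (an assumed shape for illustration, not the exact patch): detect Windows from the os.name system property and skip the body of testPosixFadvise there, since /dev/zero cannot exist.

```java
import java.io.File;

// Sketch of skipping a /dev/zero-dependent test on Windows.
public class SkipOnWindowsDemo {
    // Windows JVMs report an os.name beginning with "Windows".
    static boolean isWindows() {
        return System.getProperty("os.name").startsWith("Windows");
    }

    public static void main(String[] args) {
        if (isWindows()) {
            // Windows has no device files like the Unix /dev/zero, so the
            // fadvise test cannot open its input and must be skipped.
            System.out.println("skipping testPosixFadvise: no /dev/zero on Windows");
            return;
        }
        // On Unix-like systems the device file should be present.
        System.out.println("/dev/zero present: " + new File("/dev/zero").exists());
    }
}
```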



[jira] [Work started] (HADOOP-9722) Branch-1-win TestNativeIO failed caused by Window incompatible test case

2013-07-10 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-9722 started by Xi Fang.



[jira] [Created] (HADOOP-9714) Branch-1-win TestReplicationPolicy failed caused by stale data node handling

2013-07-09 Thread Xi Fang (JIRA)
Xi Fang created HADOOP-9714:
---

 Summary: Branch-1-win TestReplicationPolicy failed caused by stale 
data node handling
 Key: HADOOP-9714
 URL: https://issues.apache.org/jira/browse/HADOOP-9714
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Xi Fang
Assignee: Xi Fang
 Fix For: 1-win


Condor-Branch-1 TestReplicationPolicy failed on 
* testChooseTargetWithMoreThanAvailableNodes()
* testChooseTargetWithStaleNodes()
* testChooseTargetWithHalfStaleNodes()

The root cause of the testChooseTargetWithMoreThanAvailableNodes failure is 
the following. In BlockPlacementPolicyDefault#chooseTarget():
{code}
  chooseRandom(numOfReplicas, NodeBase.ROOT, excludedNodes, 
      blocksize, maxNodesPerRack, results);
} catch (NotEnoughReplicasException e) {
  FSNamesystem.LOG.warn("Not able to place enough replicas, still in need of " 
      + numOfReplicas);
{code}
However, numOfReplicas is passed into chooseRandom() as an int (a primitive 
type in Java), i.e. by value. Updates to numOfReplicas inside chooseRandom() 
therefore do not change the value seen in chooseTarget(). 

The root cause for testChooseTargetWithStaleNodes() and 
testChooseTargetWithHalfStaleNodes() is that the current 
BlockPlacementPolicyDefault#chooseTarget() does not check whether a node is 
stale.  
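The pass-by-value pitfall can be seen in a minimal standalone sketch (method names are illustrative stand-ins, not the Hadoop ones): mutating an int parameter is invisible to the caller, so the updated replica count must be communicated back some other way, for example via a return value.

```java
// Java passes primitives by value: a callee's update to an int parameter
// is never observed by the caller.
public class PassByValueDemo {
    // Mimics chooseRandom(numOfReplicas, ...): the decrement is local only.
    static void chooseRandomLike(int numOfReplicas) {
        numOfReplicas -= 2;  // the caller never sees this change
    }

    // One possible fix: return the updated count instead.
    static int chooseRandomReturning(int numOfReplicas) {
        return numOfReplicas - 2;
    }

    public static void main(String[] args) {
        int numOfReplicas = 3;
        chooseRandomLike(numOfReplicas);
        System.out.println(numOfReplicas);  // still 3: the update was lost
        numOfReplicas = chooseRandomReturning(numOfReplicas);
        System.out.println(numOfReplicas);  // now 1: the update survived
    }
}
```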



[jira] [Work started] (HADOOP-9714) Branch-1-win TestReplicationPolicy failed caused by stale data node handling

2013-07-09 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-9714 started by Xi Fang.



[jira] [Commented] (HADOOP-9714) Branch-1-win TestReplicationPolicy failed caused by stale data node handling

2013-07-09 Thread Xi Fang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13703912#comment-13703912
 ] 

Xi Fang commented on HADOOP-9714:
-

It looks like 
https://issues.apache.org/jira/secure/attachment/12563083/hdfs-4351-branch-1-1.patch
 and the changes for BlockPlacementPolicyDefault in 
https://issues.apache.org/jira/secure/attachment/12549392/HDFS-3912-branch-1.patch
 are missing. A patch was made based on these two patches.



[jira] [Updated] (HADOOP-9714) Branch-1-win TestReplicationPolicy failed caused by stale data node handling

2013-07-09 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xi Fang updated HADOOP-9714:


Attachment: HADOOP-9714.1.patch



[jira] [Updated] (HADOOP-9714) Branch-1-win TestReplicationPolicy failed caused by stale data node handling

2013-07-09 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xi Fang updated HADOOP-9714:


Description: 
TestReplicationPolicy failed on 
* testChooseTargetWithMoreThanAvailableNodes()
* testChooseTargetWithStaleNodes()
* testChooseTargetWithHalfStaleNodes()

The root cause of the testChooseTargetWithMoreThanAvailableNodes failure is 
the following. In BlockPlacementPolicyDefault#chooseTarget():
{code}
  chooseRandom(numOfReplicas, NodeBase.ROOT, excludedNodes, 
      blocksize, maxNodesPerRack, results);
} catch (NotEnoughReplicasException e) {
  FSNamesystem.LOG.warn("Not able to place enough replicas, still in need of " 
      + numOfReplicas);
{code}
However, numOfReplicas is passed into chooseRandom() as an int (a primitive 
type in Java), i.e. by value. Updates to numOfReplicas inside chooseRandom() 
therefore do not change the value seen in chooseTarget(). 

The root cause for testChooseTargetWithStaleNodes() and 
testChooseTargetWithHalfStaleNodes() is that the current 
BlockPlacementPolicyDefault#chooseTarget() does not check whether a node is 
stale.  



[jira] [Commented] (HADOOP-9687) branch-1-win TestJobTrackerQuiescence and TestFileLengthOnClusterRestart failed caused by incorrect DFS path construction on Windows

2013-07-05 Thread Xi Fang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13701152#comment-13701152
 ] 

Xi Fang commented on HADOOP-9687:
-

Thanks Chris!

 branch-1-win TestJobTrackerQuiescence and TestFileLengthOnClusterRestart 
 failed caused by incorrect DFS path construction on Windows
 

 Key: HADOOP-9687
 URL: https://issues.apache.org/jira/browse/HADOOP-9687
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Assignee: Xi Fang
Priority: Minor
 Fix For: 1-win

 Attachments: HADOOP-9687.1.patch


 TestJobTrackerQuiescence is a test case introduced in 
 https://issues.apache.org/jira/browse/MAPREDUCE-4328. 
 Here is the code generating a file path on DFS:
 {code}
 final Path testDir = 
     new Path(System.getProperty("test.build.data", "/tmp"), "jt-safemode");
 {code}
 This doesn't work on Windows because test.build.data would contain a drive 
 name with a colon (e.g. D:/hadoop/build/test), which is not a valid path 
 name on DFS because the colon is disallowed (see DFSUtil#isValidName()).
 A similar problem happens in 
 TestFileLengthOnClusterRestart#testFileLengthWithHSyncAndClusterRestartWithOutDNsRegister():
 {code}
 Path path = new Path(MiniDFSCluster.getBaseDir().getPath(), "test");
 {code}



[jira] [Commented] (HADOOP-9677) TestSetupAndCleanupFailure#testWithDFS fails on Windows

2013-07-02 Thread Xi Fang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13698202#comment-13698202
 ] 

Xi Fang commented on HADOOP-9677:
-

Thanks Chris!

 TestSetupAndCleanupFailure#testWithDFS fails on Windows
 ---

 Key: HADOOP-9677
 URL: https://issues.apache.org/jira/browse/HADOOP-9677
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Ivan Mitic
Assignee: Xi Fang
 Fix For: 1-win

 Attachments: HADOOP-9677.patch


 Exception:
 {noformat}
 junit.framework.AssertionFailedError: expected:<2> but was:<3>
   at 
 org.apache.hadoop.mapred.TestSetupAndCleanupFailure.testSetupAndCleanupKill(TestSetupAndCleanupFailure.java:219)
   at 
 org.apache.hadoop.mapred.TestSetupAndCleanupFailure.testWithDFS(TestSetupAndCleanupFailure.java:282)
 {noformat}



[jira] [Created] (HADOOP-9687) Condor-Branch-1 TestJobTrackerQuiescence and TestFileLengthOnClusterRestart failed caused by incorrect DFS path construction on Windows

2013-07-02 Thread Xi Fang (JIRA)
Xi Fang created HADOOP-9687:
---

 Summary: Condor-Branch-1 TestJobTrackerQuiescence and 
TestFileLengthOnClusterRestart failed caused by incorrect DFS path construction 
on Windows
 Key: HADOOP-9687
 URL: https://issues.apache.org/jira/browse/HADOOP-9687
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Assignee: Xi Fang
Priority: Minor
 Fix For: 1-win


TestJobTrackerQuiescence is a test case introduced in 
https://issues.apache.org/jira/browse/MAPREDUCE-4328. 
Here is the code generating a file path on DFS:
{code}
final Path testDir = 
    new Path(System.getProperty("test.build.data", "/tmp"), "jt-safemode");
{code}

This doesn't work on Windows because test.build.data would contain a drive 
name with a colon (e.g. D:/hadoop/build/test), which is not a valid path name 
on DFS because the colon is disallowed (see DFSUtil#isValidName()).

A similar problem happens in 
TestFileLengthOnClusterRestart#testFileLengthWithHSyncAndClusterRestartWithOutDNsRegister():
{code}
Path path = new Path(MiniDFSCluster.getBaseDir().getPath(), "test");
{code}
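The colon restriction can be illustrated with a simplified stand-in for DFSUtil#isValidName (the real check does more; this sketch captures only the absolute-path and no-colon rules assumed here):

```java
// Simplified DFS path validation: paths must be absolute and no
// component may contain a colon, which rules out Windows drive letters.
public class DfsPathDemo {
    static boolean isValidDfsName(String src) {
        if (!src.startsWith("/")) {
            return false;  // DFS paths must be absolute
        }
        for (String component : src.split("/")) {
            if (component.contains(":")) {
                return false;  // colon (e.g. from "D:") is disallowed
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // What the test builds on Windows: test.build.data carries a drive letter.
        System.out.println(isValidDfsName("D:/hadoop/build/test/jt-safemode")); // false
        // One possible fix: use a fixed DFS-side directory instead.
        System.out.println(isValidDfsName("/tmp/jt-safemode"));                 // true
    }
}
```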



[jira] [Work started] (HADOOP-9687) Condor-Branch-1 TestJobTrackerQuiescence and TestFileLengthOnClusterRestart failed caused by incorrect DFS path construction on Windows

2013-07-02 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-9687 started by Xi Fang.



[jira] [Commented] (HADOOP-9687) Condor-Branch-1 TestJobTrackerQuiescence and TestFileLengthOnClusterRestart failed caused by incorrect DFS path construction on Windows

2013-07-02 Thread Xi Fang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13698368#comment-13698368
 ] 

Xi Fang commented on HADOOP-9687:
-

A patch was attached to solve this problem.



[jira] [Updated] (HADOOP-9687) Condor-Branch-1 TestJobTrackerQuiescence and TestFileLengthOnClusterRestart failed caused by incorrect DFS path construction on Windows

2013-07-02 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xi Fang updated HADOOP-9687:


Attachment: HADOOP-9687.1.patch



[jira] [Updated] (HADOOP-9677) TestSetupAndCleanupFailure#testWithDFS fails on Windows

2013-07-01 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xi Fang updated HADOOP-9677:


Attachment: HADOOP-9677.patch



[jira] [Work started] (HADOOP-9677) TestSetupAndCleanupFailure#testWithDFS fails on Windows

2013-07-01 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-9677 started by Xi Fang.

 TestSetupAndCleanupFailure#testWithDFS fails on Windows
 ---

 Key: HADOOP-9677
 URL: https://issues.apache.org/jira/browse/HADOOP-9677
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Ivan Mitic
Assignee: Xi Fang
 Attachments: HADOOP-9677.patch


 Exception:
 {noformat}
 junit.framework.AssertionFailedError: expected:<2> but was:<3>
   at 
 org.apache.hadoop.mapred.TestSetupAndCleanupFailure.testSetupAndCleanupKill(TestSetupAndCleanupFailure.java:219)
   at 
 org.apache.hadoop.mapred.TestSetupAndCleanupFailure.testWithDFS(TestSetupAndCleanupFailure.java:282)
 {noformat}



[jira] [Commented] (HADOOP-9677) TestSetupAndCleanupFailure#testWithDFS fails on Windows

2013-07-01 Thread Xi Fang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13696973#comment-13696973
 ] 

Xi Fang commented on HADOOP-9677:
-

The failure was introduced by the patch fixing MAPREDUCE-5330. We used tests 
to verify that the patch for MAPREDUCE-5330 does work. In that patch, on 
Windows, a delayed kill is used (see JVMManager#kill()) to kill the JVM, and 
Signal.TERM is ignored. Setting 
mapred.tasktracker.tasks.sleeptime-before-sigkill to zero in this patch 
ensures that in the unit test the kill is executed with no delay, so the 
setup/cleanup attempts are killed immediately.
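In configuration terms, the test-side change amounts to setting the kill delay to zero. A sketch using a plain Properties object as a stand-in for the job configuration (the real test would set this on a Hadoop JobConf; the class name here is illustrative):

```java
import java.util.Properties;

// Stand-in for the job configuration: the unit test sets the SIGKILL delay
// to zero so setup/cleanup task attempts are killed immediately on Windows.
public class KillDelayConfig {
    static Properties testConf() {
        Properties conf = new Properties();
        conf.setProperty("mapred.tasktracker.tasks.sleeptime-before-sigkill", "0");
        return conf;
    }
}
```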

 TestSetupAndCleanupFailure#testWithDFS fails on Windows
 ---

 Key: HADOOP-9677
 URL: https://issues.apache.org/jira/browse/HADOOP-9677
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Ivan Mitic
Assignee: Xi Fang
 Attachments: HADOOP-9677.patch


 Exception:
 {noformat}
 junit.framework.AssertionFailedError: expected:<2> but was:<3>
   at 
 org.apache.hadoop.mapred.TestSetupAndCleanupFailure.testSetupAndCleanupKill(TestSetupAndCleanupFailure.java:219)
   at 
 org.apache.hadoop.mapred.TestSetupAndCleanupFailure.testWithDFS(TestSetupAndCleanupFailure.java:282)
 {noformat}



[jira] [Updated] (HADOOP-9624) TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root path has X in its name

2013-06-18 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xi Fang updated HADOOP-9624:


Attachment: HADOOP-9624.trunk.patch

 TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root 
 path has X in its name
 ---

 Key: HADOOP-9624
 URL: https://issues.apache.org/jira/browse/HADOOP-9624
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Assignee: Xi Fang
Priority: Minor
  Labels: test
 Attachments: HADOOP-9624.patch, HADOOP-9624.trunk.patch


 TestFSMainOperationsLocalFileSystem extends class FSMainOperationsBaseTest. 
 The PathFilter FSMainOperationsBaseTest#TEST_X_FILTER checks whether a path 
 has "x" or "X" in its name. 
 {code}
 final private static PathFilter TEST_X_FILTER = new PathFilter() {
   public boolean accept(Path file) {
     if (file.getName().contains("x") || file.toString().contains("X"))
       return true;
     else
       return false;
   }
 };
 {code}
 Some of the test cases construct a path by combining the path TEST_ROOT_DIR 
 with a customized partial path. 
 The problem is that once the enlistment root path has "X" in its name, 
 TEST_ROOT_DIR will also have "X" in its name. The path check will then pass 
 even if the customized partial path doesn't have "X"; however, in that case 
 the path filter is supposed to reject the path.
 An easy fix is to change file.toString().contains("X") to 
 file.getName().contains("X"). Note that org.apache.hadoop.fs.Path.getName() 
 only returns the final component of the path.
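The difference between the two checks can be illustrated with plain strings. A sketch (the class and method names are illustrative; `name` mimics org.apache.hadoop.fs.Path#getName by taking the final path component):

```java
// Demonstrates why file.toString().contains("X") is wrong: the full path
// string includes the test root dir, so an "X" in the enlistment root makes
// the filter accept paths it should reject.
public class XFilterDemo {
    // Mimics org.apache.hadoop.fs.Path#getName: the final path component.
    static String name(String path) {
        return path.substring(path.lastIndexOf('/') + 1);
    }
    // Original (buggy) check: matches "X" anywhere in the full path.
    static boolean acceptOriginal(String path) {
        return name(path).contains("x") || path.contains("X");
    }
    // Fixed check: both cases look only at the final component.
    static boolean acceptFixed(String path) {
        return name(path).contains("x") || name(path).contains("X");
    }
}
```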



[jira] [Updated] (HADOOP-9624) TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root path has X in its name

2013-06-18 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xi Fang updated HADOOP-9624:


Attachment: HADOOP-9624.branch-1.patch

 TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root 
 path has X in its name
 ---

 Key: HADOOP-9624
 URL: https://issues.apache.org/jira/browse/HADOOP-9624
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Assignee: Xi Fang
Priority: Minor
  Labels: test
 Attachments: HADOOP-9624.branch-1.patch, HADOOP-9624.patch, 
 HADOOP-9624.trunk.patch


 TestFSMainOperationsLocalFileSystem extends class FSMainOperationsBaseTest. 
 The PathFilter FSMainOperationsBaseTest#TEST_X_FILTER checks whether a path 
 has "x" or "X" in its name. 
 {code}
 final private static PathFilter TEST_X_FILTER = new PathFilter() {
   public boolean accept(Path file) {
     if (file.getName().contains("x") || file.toString().contains("X"))
       return true;
     else
       return false;
   }
 };
 {code}
 Some of the test cases construct a path by combining the path TEST_ROOT_DIR 
 with a customized partial path. 
 The problem is that once the enlistment root path has "X" in its name, 
 TEST_ROOT_DIR will also have "X" in its name. The path check will then pass 
 even if the customized partial path doesn't have "X"; however, in that case 
 the path filter is supposed to reject the path.
 An easy fix is to change file.toString().contains("X") to 
 file.getName().contains("X"). Note that org.apache.hadoop.fs.Path.getName() 
 only returns the final component of the path.



[jira] [Updated] (HADOOP-9624) TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root path has X in its name

2013-06-18 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xi Fang updated HADOOP-9624:


Attachment: HAOOP-9624.branch-1.patch

 TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root 
 path has X in its name
 ---

 Key: HADOOP-9624
 URL: https://issues.apache.org/jira/browse/HADOOP-9624
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Assignee: Xi Fang
Priority: Minor
  Labels: test
 Attachments: HADOOP-9624.branch-1.patch, HADOOP-9624.patch, 
 HADOOP-9624.trunk.patch


 TestFSMainOperationsLocalFileSystem extends class FSMainOperationsBaseTest. 
 The PathFilter FSMainOperationsBaseTest#TEST_X_FILTER checks whether a path 
 has "x" or "X" in its name. 
 {code}
 final private static PathFilter TEST_X_FILTER = new PathFilter() {
   public boolean accept(Path file) {
     if (file.getName().contains("x") || file.toString().contains("X"))
       return true;
     else
       return false;
   }
 };
 {code}
 Some of the test cases construct a path by combining the path TEST_ROOT_DIR 
 with a customized partial path. 
 The problem is that once the enlistment root path has "X" in its name, 
 TEST_ROOT_DIR will also have "X" in its name. The path check will then pass 
 even if the customized partial path doesn't have "X"; however, in that case 
 the path filter is supposed to reject the path.
 An easy fix is to change file.toString().contains("X") to 
 file.getName().contains("X"). Note that org.apache.hadoop.fs.Path.getName() 
 only returns the final component of the path.



[jira] [Updated] (HADOOP-9624) TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root path has X in its name

2013-06-18 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xi Fang updated HADOOP-9624:


Attachment: (was: HAOOP-9624.branch-1.patch)

 TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root 
 path has X in its name
 ---

 Key: HADOOP-9624
 URL: https://issues.apache.org/jira/browse/HADOOP-9624
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Assignee: Xi Fang
Priority: Minor
  Labels: test
 Attachments: HADOOP-9624.branch-1.patch, HADOOP-9624.patch, 
 HADOOP-9624.trunk.patch


 TestFSMainOperationsLocalFileSystem extends class FSMainOperationsBaseTest. 
 The PathFilter FSMainOperationsBaseTest#TEST_X_FILTER checks whether a path 
 has "x" or "X" in its name. 
 {code}
 final private static PathFilter TEST_X_FILTER = new PathFilter() {
   public boolean accept(Path file) {
     if (file.getName().contains("x") || file.toString().contains("X"))
       return true;
     else
       return false;
   }
 };
 {code}
 Some of the test cases construct a path by combining the path TEST_ROOT_DIR 
 with a customized partial path. 
 The problem is that once the enlistment root path has "X" in its name, 
 TEST_ROOT_DIR will also have "X" in its name. The path check will then pass 
 even if the customized partial path doesn't have "X"; however, in that case 
 the path filter is supposed to reject the path.
 An easy fix is to change file.toString().contains("X") to 
 file.getName().contains("X"). Note that org.apache.hadoop.fs.Path.getName() 
 only returns the final component of the path.



[jira] [Commented] (HADOOP-9624) TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root path has X in its name

2013-06-18 Thread Xi Fang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687162#comment-13687162
 ] 

Xi Fang commented on HADOOP-9624:
-

Thanks Chris. Two patches have been attached. 

 TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root 
 path has X in its name
 ---

 Key: HADOOP-9624
 URL: https://issues.apache.org/jira/browse/HADOOP-9624
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Assignee: Xi Fang
Priority: Minor
  Labels: test
 Attachments: HADOOP-9624.branch-1.patch, HADOOP-9624.patch, 
 HADOOP-9624.trunk.patch


 TestFSMainOperationsLocalFileSystem extends class FSMainOperationsBaseTest. 
 The PathFilter FSMainOperationsBaseTest#TEST_X_FILTER checks whether a path 
 has "x" or "X" in its name. 
 {code}
 final private static PathFilter TEST_X_FILTER = new PathFilter() {
   public boolean accept(Path file) {
     if (file.getName().contains("x") || file.toString().contains("X"))
       return true;
     else
       return false;
   }
 };
 {code}
 Some of the test cases construct a path by combining the path TEST_ROOT_DIR 
 with a customized partial path. 
 The problem is that once the enlistment root path has "X" in its name, 
 TEST_ROOT_DIR will also have "X" in its name. The path check will then pass 
 even if the customized partial path doesn't have "X"; however, in that case 
 the path filter is supposed to reject the path.
 An easy fix is to change file.toString().contains("X") to 
 file.getName().contains("X"). Note that org.apache.hadoop.fs.Path.getName() 
 only returns the final component of the path.



[jira] [Commented] (HADOOP-9624) TestFSMainOperationsLocalFileSystem failed when the Hadoop test root path has X in its name

2013-06-18 Thread Xi Fang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13687389#comment-13687389
 ] 

Xi Fang commented on HADOOP-9624:
-

Chris, thanks for your review and comments.

 TestFSMainOperationsLocalFileSystem failed when the Hadoop test root path has 
 X in its name
 -

 Key: HADOOP-9624
 URL: https://issues.apache.org/jira/browse/HADOOP-9624
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Affects Versions: 3.0.0, 1-win, 2.1.0-beta, 1.3.0
 Environment: Windows
Reporter: Xi Fang
Assignee: Xi Fang
Priority: Minor
  Labels: test
 Fix For: 3.0.0, 1-win, 2.1.0-beta, 1.3.0

 Attachments: HADOOP-9624.branch-1.2.patch, 
 HADOOP-9624.branch-1.patch, HADOOP-9624.patch, HADOOP-9624.trunk.patch


 TestFSMainOperationsLocalFileSystem extends class FSMainOperationsBaseTest. 
 The PathFilter FSMainOperationsBaseTest#TEST_X_FILTER checks whether a path 
 has "x" or "X" in its name. 
 {code}
 final private static PathFilter TEST_X_FILTER = new PathFilter() {
   public boolean accept(Path file) {
     if (file.getName().contains("x") || file.toString().contains("X"))
       return true;
     else
       return false;
   }
 };
 {code}
 Some of the test cases construct a path by combining the path TEST_ROOT_DIR 
 with a customized partial path. 
 The problem is that TEST_ROOT_DIR may also have "X" in its name. The path 
 check will then pass even if the customized partial path doesn't have "X"; 
 however, in that case the path filter is supposed to reject the path.
 An easy fix is to change file.toString().contains("X") to 
 file.getName().contains("X"). Note that org.apache.hadoop.fs.Path.getName() 
 only returns the final component of the path.



[jira] [Commented] (HADOOP-9633) An incorrect data node might be added to the network topology, an exception is thrown though

2013-06-14 Thread Xi Fang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13683984#comment-13683984
 ] 

Xi Fang commented on HADOOP-9633:
-

Thanks, Aaron. HDFS-4521 has been reopened. 

 An incorrect data node might be added to the network topology, an exception 
 is thrown though
 

 Key: HADOOP-9633
 URL: https://issues.apache.org/jira/browse/HADOOP-9633
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.3.0
Reporter: Xi Fang
Priority: Minor

 In NetworkTopology#add(Node node), an incorrect node may be added to the 
 cluster even if an exception is thrown.
 This is the original code:
 {code}
   if (clusterMap.add(node)) {
     LOG.info("Adding a new node: " + NodeBase.getPath(node));
     if (rack == null) {
       numOfRacks++;
     }
     if (!(node instanceof InnerNode)) {
       if (depthOfAllLeaves == -1) {
         depthOfAllLeaves = node.getLevel();
       } else {
         if (depthOfAllLeaves != node.getLevel()) {
           LOG.error("Error: can't add leaf node at depth " +
               node.getLevel() + " to topology:\n" + oldTopoStr);
           throw new InvalidTopologyException("Invalid network topology. " +
               "You cannot have a rack and a non-rack node at the same " +
               "level of the network topology.");
         }
       }
     }
   }
 {code}
 This is a potential bug, because the wrong leaf node has already been added 
 to the cluster before the exception is thrown. However, we can't check 
 (depthOfAllLeaves != node.getLevel()) before if (clusterMap.add(node)), 
 because node.getLevel() works correctly only after clusterMap.add(node) has 
 been executed.
 A possible solution is to check depthOfAllLeaves inside clusterMap.add(node). 
 Note that this is a recursive call; the check should be put at the bottom of 
 the recursion. If the check fails, don't add this leaf or any of its upstream 
 racks. 



[jira] [Commented] (HADOOP-9624) TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root path has X in its name

2013-06-10 Thread Xi Fang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13680180#comment-13680180
 ] 

Xi Fang commented on HADOOP-9624:
-

Thanks Aaron!

 TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root 
 path has X in its name
 ---

 Key: HADOOP-9624
 URL: https://issues.apache.org/jira/browse/HADOOP-9624
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Assignee: Xi Fang
Priority: Minor
  Labels: test
 Attachments: HADOOP-9624.patch


 TestFSMainOperationsLocalFileSystem extends class FSMainOperationsBaseTest. 
 The PathFilter FSMainOperationsBaseTest#TEST_X_FILTER checks whether a path 
 has "x" or "X" in its name. 
 {code}
 final private static PathFilter TEST_X_FILTER = new PathFilter() {
   public boolean accept(Path file) {
     if (file.getName().contains("x") || file.toString().contains("X"))
       return true;
     else
       return false;
   }
 };
 {code}
 Some of the test cases construct a path by combining the path TEST_ROOT_DIR 
 with a customized partial path. 
 The problem is that once the enlistment root path has "X" in its name, 
 TEST_ROOT_DIR will also have "X" in its name. The path check will then pass 
 even if the customized partial path doesn't have "X"; however, in that case 
 the path filter is supposed to reject the path.
 An easy fix is to change file.toString().contains("X") to 
 file.getName().contains("X"). Note that org.apache.hadoop.fs.Path.getName() 
 only returns the final component of the path.



[jira] [Created] (HADOOP-9633) An incorrect data node might be added to the network topology, an exception is thrown though

2013-06-07 Thread Xi Fang (JIRA)
Xi Fang created HADOOP-9633:
---

 Summary: An incorrect data node might be added to the network 
topology, an exception is thrown though
 Key: HADOOP-9633
 URL: https://issues.apache.org/jira/browse/HADOOP-9633
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.3.0
Reporter: Xi Fang
Priority: Minor


In NetworkTopology#add(Node node), an incorrect node may be added to the 
cluster even if an exception is thrown.
This is the original code:
{code}
  if (clusterMap.add(node)) {
    LOG.info("Adding a new node: " + NodeBase.getPath(node));
    if (rack == null) {
      numOfRacks++;
    }
    if (!(node instanceof InnerNode)) {
      if (depthOfAllLeaves == -1) {
        depthOfAllLeaves = node.getLevel();
      } else {
        if (depthOfAllLeaves != node.getLevel()) {
          LOG.error("Error: can't add leaf node at depth " +
              node.getLevel() + " to topology:\n" + oldTopoStr);
          throw new InvalidTopologyException("Invalid network topology. " +
              "You cannot have a rack and a non-rack node at the same " +
              "level of the network topology.");
        }
      }
    }
  }
{code}
This is a potential bug, because the wrong leaf node has already been added to 
the cluster before the exception is thrown. However, we can't check 
(depthOfAllLeaves != node.getLevel()) before if (clusterMap.add(node)), because 
node.getLevel() works correctly only after clusterMap.add(node) has been 
executed.
A possible solution is to check depthOfAllLeaves inside clusterMap.add(node). 
Note that this is a recursive call; the check should be put at the bottom of 
the recursion. If the check fails, don't add this leaf or any of its upstream 
racks. 
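The check-before-commit idea can be sketched with a simplified leaf list standing in for the cluster map (the names here are illustrative, not NetworkTopology's actual internals): the depth is validated before any state is mutated, so a failed add leaves nothing behind.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for NetworkTopology's leaf bookkeeping: validate the
// leaf depth *before* mutating any state, so a failed add leaves nothing behind.
public class TopologyDemo {
    static int depthOfAllLeaves = -1;
    static final List<String> leaves = new ArrayList<>();

    // Depth = number of path components; stands in for Node#getLevel().
    static int depth(String path) {
        return path.split("/").length - 1;
    }

    static void addLeaf(String path) {
        int d = depth(path);
        if (depthOfAllLeaves != -1 && depthOfAllLeaves != d) {
            // Check first: state is untouched when the topology is invalid.
            throw new IllegalStateException(
                "Invalid network topology: leaf at depth " + d);
        }
        depthOfAllLeaves = d;
        leaves.add(path);
    }
}
```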



[jira] [Updated] (HADOOP-9624) TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root path has X in its name

2013-06-06 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xi Fang updated HADOOP-9624:


Attachment: HADOOP-9624.patch

A patch is attached.

 TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root 
 path has X in its name
 ---

 Key: HADOOP-9624
 URL: https://issues.apache.org/jira/browse/HADOOP-9624
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Priority: Minor
  Labels: test
 Attachments: HADOOP-9624.patch


 TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root 
 path has "X" in its name. Here is the root cause of the failures.
 TestFSMainOperationsLocalFileSystem extends class FSMainOperationsBaseTest. 
 The PathFilter FSMainOperationsBaseTest#TEST_X_FILTER checks whether a path 
 has "X" in its name. Some of the test cases construct a path by combining 
 the path TEST_ROOT_DIR with a customized partial path. The problem is that 
 once the enlistment root path has "X" in its name, TEST_ROOT_DIR will also 
 have "X" in its name. The path check will then pass even if the customized 
 partial path doesn't have "X"; however, in that case the path filter is 
 supposed to reject the path.
 An easy fix is to use a more complicated character sequence rather than the 
 single character "X".



[jira] [Updated] (HADOOP-9624) TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root path has X in its name

2013-06-06 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xi Fang updated HADOOP-9624:


Description: 
TestFSMainOperationsLocalFileSystem extends class FSMainOperationsBaseTest. 
The PathFilter FSMainOperationsBaseTest#TEST_X_FILTER checks whether a path 
has "x" or "X" in its name. 
{code}
final private static PathFilter TEST_X_FILTER = new PathFilter() {
  public boolean accept(Path file) {
    if (file.getName().contains("x") || file.toString().contains("X"))
      return true;
    else
      return false;
  }
};
{code}

Some of the test cases construct a path by combining the path TEST_ROOT_DIR 
with a customized partial path. 
The problem is that once the enlistment root path has "X" in its name, 
TEST_ROOT_DIR will also have "X" in its name. The path check will then pass 
even if the customized partial path doesn't have "X"; however, in that case 
the path filter is supposed to reject the path.

An easy fix is to change file.toString().contains("X") to 
file.getName().contains("X"). Note that org.apache.hadoop.fs.Path.getName() 
only returns the final component of the path.


  was:
TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root 
path has "X" in its name. Here is the root cause of the failures.

TestFSMainOperationsLocalFileSystem extends class FSMainOperationsBaseTest. 
The PathFilter FSMainOperationsBaseTest#TEST_X_FILTER checks whether a path 
has "X" in its name. Some of the test cases construct a path by combining 
the path TEST_ROOT_DIR with a customized partial path. The problem is that 
once the enlistment root path has "X" in its name, TEST_ROOT_DIR will also 
have "X" in its name. The path check will then pass even if the customized 
partial path doesn't have "X"; however, in that case the path filter is 
supposed to reject the path.

An easy fix is to use a more complicated character sequence rather than the 
single character "X".


 TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root 
 path has X in its name
 ---

 Key: HADOOP-9624
 URL: https://issues.apache.org/jira/browse/HADOOP-9624
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Priority: Minor
  Labels: test
 Attachments: HADOOP-9624.patch


 TestFSMainOperationsLocalFileSystem extends class FSMainOperationsBaseTest. 
 The PathFilter FSMainOperationsBaseTest#TEST_X_FILTER checks whether a path 
 has "x" or "X" in its name. 
 {code}
 final private static PathFilter TEST_X_FILTER = new PathFilter() {
   public boolean accept(Path file) {
     if (file.getName().contains("x") || file.toString().contains("X"))
       return true;
     else
       return false;
   }
 };
 {code}
 Some of the test cases construct a path by combining the path TEST_ROOT_DIR 
 with a customized partial path. 
 The problem is that once the enlistment root path has "X" in its name, 
 TEST_ROOT_DIR will also have "X" in its name. The path check will then pass 
 even if the customized partial path doesn't have "X"; however, in that case 
 the path filter is supposed to reject the path.
 An easy fix is to change file.toString().contains("X") to 
 file.getName().contains("X"). Note that org.apache.hadoop.fs.Path.getName() 
 only returns the final component of the path.



[jira] [Created] (HADOOP-9624) TestFSMainOperationsLocalFileSystem and TestNativeAzureFileSystemOperationsMocked failed when the Hadoop enlistment root path has x or X in its name

2013-06-05 Thread Xi Fang (JIRA)
Xi Fang created HADOOP-9624:
---

 Summary: TestFSMainOperationsLocalFileSystem and 
TestNativeAzureFileSystemOperationsMocked failed when the Hadoop enlistment 
root path has x or X in its name
 Key: HADOOP-9624
 URL: https://issues.apache.org/jira/browse/HADOOP-9624
 Project: Hadoop Common
  Issue Type: Test
  Components: test
 Environment: Windows
Reporter: Xi Fang
Priority: Minor


TestFSMainOperationsLocalFileSystem and 
TestNativeAzureFileSystemOperationsMocked failed when the Hadoop enlistment 
root path has "x" or "X" in its name. Here is the root cause of the 
failures.

Both classes extend class FSMainOperationsBaseTest. The PathFilter 
FSMainOperationsBaseTest#TEST_X_FILTER checks whether a path has "x" or "X" 
in its name. Some of the test cases construct a path by combining the path 
TEST_ROOT_DIR with a customized partial path. 
The problem is that once the enlistment root path has "x" or "X" in its name, 
TEST_ROOT_DIR will also have "x" or "X" in its name. The path check will then 
pass even if the customized partial path doesn't have "x" or "X"; however, 
the path filter is supposed to reject the path when the customized partial 
path doesn't have "x" or "X".

An easy fix is to use a more complicated character sequence rather than the 
single character "x" or "X".



[jira] [Updated] (HADOOP-9624) TestFSMainOperationsLocalFileSystem and TestNativeAzureFileSystemOperationsMocked failed when the Hadoop enlistment root path has x or X in its name

2013-06-05 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xi Fang updated HADOOP-9624:


Affects Version/s: 1-win

 TestFSMainOperationsLocalFileSystem and 
 TestNativeAzureFileSystemOperationsMocked failed when the Hadoop enlistment 
 root path has x or X in its name
 

 Key: HADOOP-9624
 URL: https://issues.apache.org/jira/browse/HADOOP-9624
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Priority: Minor
  Labels: test

 TestFSMainOperationsLocalFileSystem and 
 TestNativeAzureFileSystemOperationsMocked failed when the Hadoop enlistment 
 root path has "x" or "X" in its name. Here is the root cause of the 
 failures.
 Both classes extend class FSMainOperationsBaseTest. The PathFilter 
 FSMainOperationsBaseTest#TEST_X_FILTER checks whether a path has "x" or "X" 
 in its name. Some of the test cases construct a path by combining the path 
 TEST_ROOT_DIR with a customized partial path. 
 The problem is that once the enlistment root path has "x" or "X" in its 
 name, TEST_ROOT_DIR will also have "x" or "X" in its name. The path check 
 will then pass even if the customized partial path doesn't have "x" or "X"; 
 however, the path filter is supposed to reject the path when the customized 
 partial path doesn't have "x" or "X".
 An easy fix is to use a more complicated character sequence rather than the 
 single character "x" or "X".



[jira] [Updated] (HADOOP-9624) TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root path has x or X in its name

2013-06-05 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xi Fang updated HADOOP-9624:


Summary: TestFSMainOperationsLocalFileSystem failed when the Hadoop 
enlistment root path has x or X in its name  (was: 
TestFSMainOperationsLocalFileSystem and 
TestNativeAzureFileSystemOperationsMocked failed when the Hadoop enlistment 
root path has x or X in its name)

 TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root 
 path has x or X in its name
 --

 Key: HADOOP-9624
 URL: https://issues.apache.org/jira/browse/HADOOP-9624
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Priority: Minor
  Labels: test

 TestFSMainOperationsLocalFileSystem and 
 TestNativeAzureFileSystemOperationsMocked failed when the Hadoop enlistment 
 root path has "x" or "X" in its name. Here is the root cause of the 
 failures.
 Both classes extend class FSMainOperationsBaseTest. The PathFilter 
 FSMainOperationsBaseTest#TEST_X_FILTER checks whether a path has "x" or "X" 
 in its name. Some of the test cases construct a path by combining the path 
 TEST_ROOT_DIR with a customized partial path. 
 The problem is that once the enlistment root path has "x" or "X" in its 
 name, TEST_ROOT_DIR will also have "x" or "X" in its name. The path check 
 will then pass even if the customized partial path doesn't have "x" or "X"; 
 however, the path filter is supposed to reject the path when the customized 
 partial path doesn't have "x" or "X".
 An easy fix is to use a more complicated character sequence rather than the 
 single character "x" or "X".



[jira] [Updated] (HADOOP-9624) TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root path has x or X in its name

2013-06-05 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xi Fang updated HADOOP-9624:


Description: 
TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root path 
has x or X in its name. Here is the root cause of the failures.

TestFSMainOperationsLocalFileSystem extends FSMainOperationsBaseTest. The 
PathFilter FSMainOperationsBaseTest#TEST_X_FILTER checks whether a path has x 
or X in its name. Some of the test cases construct a path by combining the 
path TEST_ROOT_DIR with a customized partial path. The problem is that once 
the enlistment root path has x or X in its name, TEST_ROOT_DIR will also 
have x or X in its name. The path check will then pass even if the 
customized partial path doesn't have x or X, whereas the filter is supposed 
to reject such a path.

An easy fix is to use a more complicated character sequence rather than a 
single character x or X.

  was:
TestFSMainOperationsLocalFileSystem and 
TestNativeAzureFileSystemOperationsMocked failed when the Hadoop enlistment 
root path has x or X in its name. Here is the root cause of the failures.

Both classes extend FSMainOperationsBaseTest. The PathFilter 
FSMainOperationsBaseTest#TEST_X_FILTER checks whether a path has x or X in 
its name. Some of the test cases construct a path by combining the path 
TEST_ROOT_DIR with a customized partial path.
The problem is that once the enlistment root path has x or X in its name, 
TEST_ROOT_DIR will also have x or X in its name. The path check will then 
pass even if the customized partial path doesn't have x or X, whereas the 
filter is supposed to reject such a path.

An easy fix is to use a more complicated character sequence rather than a 
single character x or X.


 TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root 
 path has x or X in its name
 --

 Key: HADOOP-9624
 URL: https://issues.apache.org/jira/browse/HADOOP-9624
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Priority: Minor
  Labels: test

 TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root 
 path has x or X in its name. Here is the root cause of the failures.
 TestFSMainOperationsLocalFileSystem extends FSMainOperationsBaseTest. The 
 PathFilter FSMainOperationsBaseTest#TEST_X_FILTER checks whether a path has 
 x or X in its name. Some of the test cases construct a path by combining 
 the path TEST_ROOT_DIR with a customized partial path. The problem is that 
 once the enlistment root path has x or X in its name, TEST_ROOT_DIR will 
 also have x or X in its name. The path check will then pass even if the 
 customized partial path doesn't have x or X, whereas the filter is supposed 
 to reject such a path.
 An easy fix is to use a more complicated character sequence rather than a 
 single character x or X.

--


[jira] [Updated] (HADOOP-9624) TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root path has x or X in its name

2013-06-05 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xi Fang updated HADOOP-9624:


Description: 
TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root path 
has x or X in its name. Here is the root cause of the failures.

TestFSMainOperationsLocalFileSystem extends FSMainOperationsBaseTest. The 
PathFilter FSMainOperationsBaseTest#TEST_X_FILTER checks whether a path has x 
or X in its name. Some of the test cases construct a path by combining the 
path TEST_ROOT_DIR with a customized partial path. The problem is that once 
the enlistment root path has x or X in its name, TEST_ROOT_DIR will also 
have x or X in its name. The path check will then pass even if the 
customized partial path doesn't have x or X; in this case the filter is 
supposed to reject the path.

An easy fix is to use a more complicated character sequence rather than a 
single character x or X.

  was:
TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root path 
has x or X in its name. Here is the root cause of the failures.

TestFSMainOperationsLocalFileSystem extends FSMainOperationsBaseTest. The 
PathFilter FSMainOperationsBaseTest#TEST_X_FILTER checks whether a path has x 
or X in its name. Some of the test cases construct a path by combining the 
path TEST_ROOT_DIR with a customized partial path. The problem is that once 
the enlistment root path has x or X in its name, TEST_ROOT_DIR will also 
have x or X in its name. The path check will then pass even if the 
customized partial path doesn't have x or X, whereas the filter is supposed 
to reject such a path.

An easy fix is to use a more complicated character sequence rather than a 
single character x or X.


 TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root 
 path has x or X in its name
 --

 Key: HADOOP-9624
 URL: https://issues.apache.org/jira/browse/HADOOP-9624
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Priority: Minor
  Labels: test

 TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root 
 path has x or X in its name. Here is the root cause of the failures.
 TestFSMainOperationsLocalFileSystem extends FSMainOperationsBaseTest. The 
 PathFilter FSMainOperationsBaseTest#TEST_X_FILTER checks whether a path has 
 x or X in its name. Some of the test cases construct a path by combining 
 the path TEST_ROOT_DIR with a customized partial path. The problem is that 
 once the enlistment root path has x or X in its name, TEST_ROOT_DIR will 
 also have x or X in its name. The path check will then pass even if the 
 customized partial path doesn't have x or X; in this case the filter is 
 supposed to reject the path.
 An easy fix is to use a more complicated character sequence rather than a 
 single character x or X.

--


[jira] [Updated] (HADOOP-9624) TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root path has X in its name

2013-06-05 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xi Fang updated HADOOP-9624:


Summary: TestFSMainOperationsLocalFileSystem failed when the Hadoop 
enlistment root path has X in its name  (was: 
TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root path 
has x or X in its name)

 TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root 
 path has X in its name
 ---

 Key: HADOOP-9624
 URL: https://issues.apache.org/jira/browse/HADOOP-9624
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Priority: Minor
  Labels: test

 TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root 
 path has x or X in its name. Here is the root cause of the failures.
 TestFSMainOperationsLocalFileSystem extends FSMainOperationsBaseTest. The 
 PathFilter FSMainOperationsBaseTest#TEST_X_FILTER checks whether a path has 
 x or X in its name. Some of the test cases construct a path by combining 
 the path TEST_ROOT_DIR with a customized partial path. The problem is that 
 once the enlistment root path has x or X in its name, TEST_ROOT_DIR will 
 also have x or X in its name. The path check will then pass even if the 
 customized partial path doesn't have x or X; in this case the filter is 
 supposed to reject the path.
 An easy fix is to use a more complicated character sequence rather than a 
 single character x or X.

--


[jira] [Updated] (HADOOP-9624) TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root path has X in its name

2013-06-05 Thread Xi Fang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xi Fang updated HADOOP-9624:


Description: 
TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root path 
has X in its name. Here is the root cause of the failures.

TestFSMainOperationsLocalFileSystem extends FSMainOperationsBaseTest. The 
PathFilter FSMainOperationsBaseTest#TEST_X_FILTER checks whether a path has X 
in its name. Some of the test cases construct a path by combining the path 
TEST_ROOT_DIR with a customized partial path. The problem is that once the 
enlistment root path has X in its name, TEST_ROOT_DIR will also have X in 
its name. The path check will then pass even if the customized partial path 
doesn't have X; in this case the filter is supposed to reject the path.

An easy fix is to use a more complicated character sequence rather than a 
single character X.

  was:
TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root path 
has x or X in its name. Here is the root cause of the failures.

TestFSMainOperationsLocalFileSystem extends FSMainOperationsBaseTest. The 
PathFilter FSMainOperationsBaseTest#TEST_X_FILTER checks whether a path has x 
or X in its name. Some of the test cases construct a path by combining the 
path TEST_ROOT_DIR with a customized partial path. The problem is that once 
the enlistment root path has x or X in its name, TEST_ROOT_DIR will also 
have x or X in its name. The path check will then pass even if the 
customized partial path doesn't have x or X; in this case the filter is 
supposed to reject the path.

An easy fix is to use a more complicated character sequence rather than a 
single character x or X.


 TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root 
 path has X in its name
 ---

 Key: HADOOP-9624
 URL: https://issues.apache.org/jira/browse/HADOOP-9624
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Affects Versions: 1-win
 Environment: Windows
Reporter: Xi Fang
Priority: Minor
  Labels: test

 TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root 
 path has X in its name. Here is the root cause of the failures.
 TestFSMainOperationsLocalFileSystem extends FSMainOperationsBaseTest. The 
 PathFilter FSMainOperationsBaseTest#TEST_X_FILTER checks whether a path has 
 X in its name. Some of the test cases construct a path by combining the 
 path TEST_ROOT_DIR with a customized partial path. The problem is that once 
 the enlistment root path has X in its name, TEST_ROOT_DIR will also have 
 X in its name. The path check will then pass even if the customized partial 
 path doesn't have X; in this case the filter is supposed to reject the path.
 An easy fix is to use a more complicated character sequence rather than a 
 single character X.

--