[jira] [Commented] (HADOOP-8787) KerberosAuthenticationHandler should include missing property names in configuration

2012-09-12 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454650#comment-13454650
 ] 

Alejandro Abdelnur commented on HADOOP-8787:


Ted,


On AuthenticationFilter.

{code}
  if (configPrefix.isEmpty()) {
    errorMessage = "Unable to init AuthenticationHandler because of the "
        + "following exception.";
  } else {
    errorMessage = "Unable to init AuthenticationHandler of '" + configPrefix
        + "*' properties because of the following exception.";
  }
  throw new ServletException(errorMessage, ex);
{code}

I'd do:

{code}
  errorMessage = "Unable to init AuthenticationHandler";
  if (!configPrefix.isEmpty()) {
errorMessage += " with '" + configPrefix + "*' properties";
  }
  throw new ServletException(errorMessage + ": " + ex.getMessage(), ex);
{code}


On

{code}
  LOG.warn("'" + configPrefix + "signature.secret' configuration not set, 
using a random value as secret");
{code}

use the {{SIGNATURE_SECRET}} constant instead.
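For example (a sketch, assuming the existing {{SIGNATURE_SECRET}} constant in 
{{AuthenticationFilter}}, whose value is "signature.secret"):

{code}
  LOG.warn("'" + configPrefix + SIGNATURE_SECRET
      + "' configuration not set, using a random value as secret");
{code}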


On the {{AuthenticationFilter.getConfiguration()}} method I would add the 
{{configPrefix}} as a 'config.prefix' property in the {{Properties}} object.

This will allow you to generate the full property name in the exception thrown 
from the KerberosAuthenticationHandler.
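A minimal sketch of that idea (the 'config.prefix' key is the suggestion above; 
the {{PRINCIPAL}} constant usage and surrounding code are illustrative, not the 
final patch):

{code}
  // In AuthenticationFilter.getConfiguration():
  Properties props = new Properties();
  props.setProperty("config.prefix", configPrefix);

  // Later, in KerberosAuthenticationHandler.init(), the full property
  // name can be reconstructed for the error message:
  String prefix = config.getProperty("config.prefix", "");
  if (principal == null) {
    throw new ServletException(
        "Principal not defined in configuration: '" + prefix + PRINCIPAL + "'");
  }
{code}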



> KerberosAuthenticationHandler should include missing property names in 
> configuration
> 
>
> Key: HADOOP-8787
> URL: https://issues.apache.org/jira/browse/HADOOP-8787
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 1.0.3, 3.0.0, 2.0.1-alpha
>Reporter: Todd Lipcon
>Assignee: Ted Malaska
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-8787-0.patch, HADOOP-8787-1.patch, 
> HADOOP-8787-2.patch
>
>
> Currently, if the spnego keytab is missing from the configuration, the user 
> gets an error like: "javax.servlet.ServletException: Principal not defined in 
> configuration". This should be augmented to actually show the configuration 
> variable which is missing. Otherwise it is hard for a user to know what to 
> fix.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8755) Print thread dump when tests fail due to timeout

2012-09-12 Thread Andrey Klochkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Klochkov updated HADOOP-8755:


Attachment: HADOOP-8755.patch

Fixed pom files to avoid warnings from Maven.

> Print thread dump when tests fail due to timeout 
> -
>
> Key: HADOOP-8755
> URL: https://issues.apache.org/jira/browse/HADOOP-8755
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 1.0.3, 0.23.1, 2.0.0-alpha
>Reporter: Andrey Klochkov
>Assignee: Andrey Klochkov
> Attachments: HADOOP-8755.patch, HADOOP-8755.patch, 
> HDFS-3762-branch-0.23.patch, HDFS-3762.patch, HDFS-3762.patch, 
> HDFS-3762.patch, HDFS-3762.patch, HDFS-3762.patch
>
>
> When a test fails due to a timeout it's often not clear what the root cause is. 
> See HDFS-3364 as an example.
> We can print a dump of all threads in this case; this may help in finding the cause.
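As a rough sketch of the idea (illustrative only, not the attached patch), all 
live threads can be dumped from within the JVM via 
{{Thread.getAllStackTraces()}}:

{code}
import java.util.Map;

public class ThreadDumper {
  /** Minimal sketch: print a stack trace for every live thread to stderr. */
  public static void dumpAllThreads() {
    for (Map.Entry<Thread, StackTraceElement[]> e
        : Thread.getAllStackTraces().entrySet()) {
      System.err.println("Thread: " + e.getKey().getName());
      for (StackTraceElement frame : e.getValue()) {
        System.err.println("    at " + frame);
      }
    }
  }
}
{code}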

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8755) Print thread dump when tests fail due to timeout

2012-09-12 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454617#comment-13454617
 ] 

Aaron T. Myers commented on HADOOP-8755:


Thanks for posting an updated patch, Andrey. The test failures are most likely 
unrelated, but the javac warnings seem to be due to this patch. Mind looking 
into that?

> Print thread dump when tests fail due to timeout 
> -
>
> Key: HADOOP-8755
> URL: https://issues.apache.org/jira/browse/HADOOP-8755
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 1.0.3, 0.23.1, 2.0.0-alpha
>Reporter: Andrey Klochkov
>Assignee: Andrey Klochkov
> Attachments: HADOOP-8755.patch, HDFS-3762-branch-0.23.patch, 
> HDFS-3762.patch, HDFS-3762.patch, HDFS-3762.patch, HDFS-3762.patch, 
> HDFS-3762.patch
>
>
> When a test fails due to a timeout it's often not clear what the root cause is. 
> See HDFS-3364 as an example.
> We can print a dump of all threads in this case; this may help in finding the cause.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8755) Print thread dump when tests fail due to timeout

2012-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454615#comment-13454615
 ] 

Hadoop QA commented on HADOOP-8755:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12544912/HADOOP-8755.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 2 new or modified test 
files.

-1 javac.  The applied patch generated 2067 javac compiler warnings (more 
than the trunk's current 2056 warnings).

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-hdfs-project/hadoop-hdfs-httpfs:

  org.apache.hadoop.ha.TestZKFailoverController
  org.apache.hadoop.hdfs.TestPersistBlocks

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1450//testReport/
Javac warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1450//artifact/trunk/patchprocess/diffJavacWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1450//console

This message is automatically generated.

> Print thread dump when tests fail due to timeout 
> -
>
> Key: HADOOP-8755
> URL: https://issues.apache.org/jira/browse/HADOOP-8755
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 1.0.3, 0.23.1, 2.0.0-alpha
>Reporter: Andrey Klochkov
>Assignee: Andrey Klochkov
> Attachments: HADOOP-8755.patch, HDFS-3762-branch-0.23.patch, 
> HDFS-3762.patch, HDFS-3762.patch, HDFS-3762.patch, HDFS-3762.patch, 
> HDFS-3762.patch
>
>
> When a test fails due to a timeout it's often not clear what the root cause is. 
> See HDFS-3364 as an example.
> We can print a dump of all threads in this case; this may help in finding the cause.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8694) Create true symbolic links on Windows

2012-09-12 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454562#comment-13454562
 ] 

Bikas Saha commented on HADOOP-8694:


The vcproj changes look as if every line has been edited. Is it a line-ending 
issue? Could you run the file through dos2unix?

> Create true symbolic links on Windows
> -
>
> Key: HADOOP-8694
> URL: https://issues.apache.org/jira/browse/HADOOP-8694
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1-win
>Reporter: Chuan Liu
>Assignee: Chuan Liu
> Attachments: HADOOP-8694-branch-1-win-2.patch, 
> HADOOP-8694-branch-1-win.patch, secpol.png
>
>
> In branch-1-win, we currently copy files for symbolic links in Hadoop on 
> Windows. We have talked to [~davidlao], who made the original fix, and did 
> some investigation on Windows. Windows began to support symbolic links 
> (symlinks) with Vista/Server 2008. The original reason to copy files instead 
> of creating actual symlinks is that only Administrators have the privilege to 
> create symlinks on Windows _by default_. After talking to NTFS folks, we 
> learned that the reason is mostly security, and this default behavior may not 
> change in the near future. The behavior can, however, be changed via the 
> Local Security Policy management console, i.e. secpol.msc, under Security 
> Settings\Local Policies\User Rights Assignment\Create symbolic links.
>  
> In Hadoop, symlinks are mostly used for the DistributedCache and attempt 
> logs. We felt these usages are important enough for us to provide true 
> symlink support, and users need to have the symlink creation privilege 
> enabled on Windows to use Hadoop.
> This JIRA is created to track symlink support on Windows.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8694) Create true symbolic links on Windows

2012-09-12 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454560#comment-13454560
 ] 

Bikas Saha commented on HADOOP-8694:


+1 looks good.

> Create true symbolic links on Windows
> -
>
> Key: HADOOP-8694
> URL: https://issues.apache.org/jira/browse/HADOOP-8694
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1-win
>Reporter: Chuan Liu
>Assignee: Chuan Liu
> Attachments: HADOOP-8694-branch-1-win-2.patch, 
> HADOOP-8694-branch-1-win.patch, secpol.png
>
>
> In branch-1-win, we currently copy files for symbolic links in Hadoop on 
> Windows. We have talked to [~davidlao], who made the original fix, and did 
> some investigation on Windows. Windows began to support symbolic links 
> (symlinks) with Vista/Server 2008. The original reason to copy files instead 
> of creating actual symlinks is that only Administrators have the privilege to 
> create symlinks on Windows _by default_. After talking to NTFS folks, we 
> learned that the reason is mostly security, and this default behavior may not 
> change in the near future. The behavior can, however, be changed via the 
> Local Security Policy management console, i.e. secpol.msc, under Security 
> Settings\Local Policies\User Rights Assignment\Create symbolic links.
>  
> In Hadoop, symlinks are mostly used for the DistributedCache and attempt 
> logs. We felt these usages are important enough for us to provide true 
> symlink support, and users need to have the symlink creation privilege 
> enabled on Windows to use Hadoop.
> This JIRA is created to track symlink support on Windows.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8763) Set group owner on Windows failed

2012-09-12 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454553#comment-13454553
 ] 

Bikas Saha commented on HADOOP-8763:


The following code seems to be an unrelated change - is it? Also, do you mean 
BUILDIN or BUILTIN?
{code}
+  // Empty name is invalid. However, LookupAccountName() function will return a
+  // false Sid, i.e. Sid for 'BUILDIN', for an empty name instead failing. We
+  // report the error before calling LookupAccountName() function for this
+  // special case.
+  //
+  if (wcslen(acctName) == 0)
+return FALSE;
{code}

Do you see any unexpected behavior for users because of the following?
{code}
+On Linux, if a colon but no group name follows the user name, the group of\n\
+the files is changed to that user\'s login group. Windows has no concept of\n\
+a user's login group. So we do not change the group owner in this case.\n",
 program)
{code}

> Set group owner on Windows failed
> -
>
> Key: HADOOP-8763
> URL: https://issues.apache.org/jira/browse/HADOOP-8763
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chuan Liu
>Assignee: Chuan Liu
>Priority: Minor
> Fix For: 1-win
>
> Attachments: HADOOP-8763-branch-1-win-2.patch, 
> HADOOP-8763-branch-1-win.patch
>
>
> RawLocalFileSystem.setOwner() method may incorrectly set the group owner of a 
> file on Windows.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8795) BASH tab completion doesn't look in PATH, assumes path to executable is specified

2012-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454542#comment-13454542
 ] 

Hadoop QA commented on HADOOP-8795:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12544913/HADOOP-8795.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1449//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1449//console

This message is automatically generated.

> BASH tab completion doesn't look in PATH, assumes path to executable is 
> specified
> -
>
> Key: HADOOP-8795
> URL: https://issues.apache.org/jira/browse/HADOOP-8795
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sean Mackrory
> Attachments: HADOOP-8795.patch
>
>
> bash-tab-completion/hadoop.sh checks that the first token in the command is 
> an existing, executable file - which assumes that the path to the hadoop 
> executable is specified (or that it's in the working directory). If the 
> executable is somewhere else in PATH, tab completion will not work.
> I propose that the first token be passed through 'which' so that any 
> executables in the PATH also get detected. I've tested that this technique 
> also works when relative and absolute paths are used.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8734) LocalJobRunner does not support private distributed cache

2012-09-12 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454539#comment-13454539
 ] 

Bikas Saha commented on HADOOP-8734:


So if I understand this right, this fixes a generic deficiency in 
LocalJobRunner which wasn't showing up because, by default, files are publicly 
readable on a Linux FS, so LocalJobRunner would not see issues in accessing a 
private distributed cache from the local FS.
Also, would this make the change to TestMRWithDistributedCache unnecessary?

> LocalJobRunner does not support private distributed cache
> -
>
> Key: HADOOP-8734
> URL: https://issues.apache.org/jira/browse/HADOOP-8734
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: filecache
>Reporter: Ivan Mitic
>Assignee: Ivan Mitic
> Attachments: HADOOP-8734-LocalJobRunner.patch
>
>
> It seems that LocalJobRunner does not support private distributed cache. The 
> issue is more visible on Windows as all DC files are private by default (see 
> HADOOP-8731).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8796) commands_manual.html link is broken

2012-09-12 Thread Roman Shaposhnik (JIRA)
Roman Shaposhnik created HADOOP-8796:


 Summary: commands_manual.html link is broken
 Key: HADOOP-8796
 URL: https://issues.apache.org/jira/browse/HADOOP-8796
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.0.1-alpha
Reporter: Roman Shaposhnik
Assignee: Roman Shaposhnik
Priority: Minor
 Fix For: 2.0.2-alpha


If you go to http://hadoop.apache.org/docs/r2.0.0-alpha/ and click on Hadoop 
Commands, you get a broken link: 
http://hadoop.apache.org/docs/r2.0.0-alpha/hadoop-project-dist/hadoop-common/commands_manual.html

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8795) BASH tab completion doesn't look in PATH, assumes path to executable is specified

2012-09-12 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454526#comment-13454526
 ] 

Roman Shaposhnik commented on HADOOP-8795:
--

+1

> BASH tab completion doesn't look in PATH, assumes path to executable is 
> specified
> -
>
> Key: HADOOP-8795
> URL: https://issues.apache.org/jira/browse/HADOOP-8795
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sean Mackrory
> Attachments: HADOOP-8795.patch
>
>
> bash-tab-completion/hadoop.sh checks that the first token in the command is 
> an existing, executable file - which assumes that the path to the hadoop 
> executable is specified (or that it's in the working directory). If the 
> executable is somewhere else in PATH, tab completion will not work.
> I propose that the first token be passed through 'which' so that any 
> executables in the PATH also get detected. I've tested that this technique 
> also works when relative and absolute paths are used.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8795) BASH tab completion doesn't look in PATH, assumes path to executable is specified

2012-09-12 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-8795:
--

Status: Patch Available  (was: Open)

> BASH tab completion doesn't look in PATH, assumes path to executable is 
> specified
> -
>
> Key: HADOOP-8795
> URL: https://issues.apache.org/jira/browse/HADOOP-8795
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sean Mackrory
> Attachments: HADOOP-8795.patch
>
>
> bash-tab-completion/hadoop.sh checks that the first token in the command is 
> an existing, executable file - which assumes that the path to the hadoop 
> executable is specified (or that it's in the working directory). If the 
> executable is somewhere else in PATH, tab completion will not work.
> I propose that the first token be passed through 'which' so that any 
> executables in the PATH also get detected. I've tested that this technique 
> also works when relative and absolute paths are used.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8795) BASH tab completion doesn't look in PATH, assumes path to executable is specified

2012-09-12 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-8795:
--

Attachment: HADOOP-8795.patch

> BASH tab completion doesn't look in PATH, assumes path to executable is 
> specified
> -
>
> Key: HADOOP-8795
> URL: https://issues.apache.org/jira/browse/HADOOP-8795
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sean Mackrory
> Attachments: HADOOP-8795.patch
>
>
> bash-tab-completion/hadoop.sh checks that the first token in the command is 
> an existing, executable file - which assumes that the path to the hadoop 
> executable is specified (or that it's in the working directory). If the 
> executable is somewhere else in PATH, tab completion will not work.
> I propose that the first token be passed through 'which' so that any 
> executables in the PATH also get detected. I've tested that this technique 
> also works when relative and absolute paths are used.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8795) BASH tab completion doesn't look in PATH, assumes path to executable is specified

2012-09-12 Thread Sean Mackrory (JIRA)
Sean Mackrory created HADOOP-8795:
-

 Summary: BASH tab completion doesn't look in PATH, assumes path to 
executable is specified
 Key: HADOOP-8795
 URL: https://issues.apache.org/jira/browse/HADOOP-8795
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Sean Mackrory
 Attachments: HADOOP-8795.patch

bash-tab-completion/hadoop.sh checks that the first token in the command is an 
existing, executable file - which assumes that the path to the hadoop 
executable is specified (or that it's in the working directory). If the 
executable is somewhere else in PATH, tab completion will not work.

I propose that the first token be passed through 'which' so that any 
executables in the PATH also get detected. I've tested that this technique 
also works when relative and absolute paths are used.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8755) Print thread dump when tests fail due to timeout

2012-09-12 Thread Andrey Klochkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Klochkov updated HADOOP-8755:


Attachment: HADOOP-8755.patch

Attaching an updated patch.

> Print thread dump when tests fail due to timeout 
> -
>
> Key: HADOOP-8755
> URL: https://issues.apache.org/jira/browse/HADOOP-8755
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 1.0.3, 0.23.1, 2.0.0-alpha
>Reporter: Andrey Klochkov
>Assignee: Andrey Klochkov
> Attachments: HADOOP-8755.patch, HDFS-3762-branch-0.23.patch, 
> HDFS-3762.patch, HDFS-3762.patch, HDFS-3762.patch, HDFS-3762.patch, 
> HDFS-3762.patch
>
>
> When a test fails due to a timeout it's often not clear what the root cause is. 
> See HDFS-3364 as an example.
> We can print a dump of all threads in this case; this may help in finding the cause.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HADOOP-8791) rm "Only deletes non empty directory and files."

2012-09-12 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers reassigned HADOOP-8791:
--

Assignee: Jing Zhao

> rm "Only deletes non empty directory and files."
> 
>
> Key: HADOOP-8791
> URL: https://issues.apache.org/jira/browse/HADOOP-8791
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 1.0.3, 3.0.0
>Reporter: Bertrand Dechoux
>Assignee: Jing Zhao
>  Labels: documentation
> Attachments: HADOOP-8791-branch-1.patch, HADOOP-8791-trunk.patch
>
>
> The documentation (1.0.3) describes the opposite of what rm does.
> It should be "Only delete files and empty directories."
> With regard to files, the size of the file should not matter, should it?
> Or I am totally misunderstanding the semantics of this command, and I am not 
> the only one.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8794) Modifiy bin/hadoop to point to HADOOP_YARN_HOME

2012-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454455#comment-13454455
 ] 

Hadoop QA commented on HADOOP-8794:
---

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12544884/HADOOP-8794-20120912.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestZKFailoverController

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1448//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1448//console

This message is automatically generated.

> Modifiy bin/hadoop to point to HADOOP_YARN_HOME
> ---
>
> Key: HADOOP-8794
> URL: https://issues.apache.org/jira/browse/HADOOP-8794
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
>     Attachments: HADOOP-8794-20120912.txt
>
>
> YARN-9 renames YARN_HOME to HADOOP_YARN_HOME. The bin/hadoop script needs to 
> do the same.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8756) Fix SEGV when libsnappy is in java.library.path but not LD_LIBRARY_PATH

2012-09-12 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454454#comment-13454454
 ] 

Colin Patrick McCabe commented on HADOOP-8756:
--

Roman: 
bq. It would be nice if it [searched java.library.path]

Let's file a separate JIRA for that.  It's not really related to this JIRA, 
which is just about fixing a segfault.  Smaller patches are easier for people 
to review, as well.

> Fix SEGV when libsnappy is in java.library.path but not LD_LIBRARY_PATH
> ---
>
> Key: HADOOP-8756
> URL: https://issues.apache.org/jira/browse/HADOOP-8756
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.0.2-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HADOOP-8756.002.patch, HADOOP-8756.003.patch, 
> HADOOP-8756.004.patch
>
>
> We use {{System.loadLibrary("snappy")}} from the Java side.  However in 
> libhadoop, we use {{dlopen}} to open libsnappy.so dynamically.  
> System.loadLibrary uses {{java.library.path}} to resolve libraries, and 
> {{dlopen}} uses {{LD_LIBRARY_PATH}} and the system paths to resolve 
> libraries.  Because of this, the two library loading functions can be at odds.
> We should fix this so we only load the library once, preferably using the 
> standard Java {{java.library.path}}.
> We should also log the search path(s) we use for {{libsnappy.so}} when 
> loading fails, so that it's easier to diagnose configuration issues.
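As a sketch of the proposed logging (illustrative only; {{LOG}} stands in for 
the class's logger):

{code}
try {
  System.loadLibrary("snappy");  // resolved against java.library.path
} catch (UnsatisfiedLinkError e) {
  // Log the search path so a misconfiguration is easy to spot.
  LOG.warn("Failed to load libsnappy.so; java.library.path="
      + System.getProperty("java.library.path"), e);
}
{code}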

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8755) Print thread dump when tests fail due to timeout

2012-09-12 Thread Andrey Klochkov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454431#comment-13454431
 ] 

Andrey Klochkov commented on HADOOP-8755:
-

Sounds great! Thanks, Aaron. I'll update the patch shortly.

> Print thread dump when tests fail due to timeout 
> -
>
> Key: HADOOP-8755
> URL: https://issues.apache.org/jira/browse/HADOOP-8755
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 1.0.3, 0.23.1, 2.0.0-alpha
>Reporter: Andrey Klochkov
>Assignee: Andrey Klochkov
> Attachments: HDFS-3762-branch-0.23.patch, HDFS-3762.patch, 
> HDFS-3762.patch, HDFS-3762.patch, HDFS-3762.patch, HDFS-3762.patch
>
>
> When a test fails due to a timeout it's often not clear what the root cause is. 
> See HDFS-3364 as an example.
> We can print a dump of all threads in this case; this may help in finding the cause.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8794) Modifiy bin/hadoop to point to HADOOP_YARN_HOME

2012-09-12 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-8794:


Status: Patch Available  (was: Open)

> Modifiy bin/hadoop to point to HADOOP_YARN_HOME
> ---
>
> Key: HADOOP-8794
> URL: https://issues.apache.org/jira/browse/HADOOP-8794
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Vinod Kumar Vavilapalli
>     Attachments: HADOOP-8794-20120912.txt
>
>
> YARN-9 renames YARN_HOME to HADOOP_YARN_HOME. The bin/hadoop script needs to 
> do the same.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HADOOP-8794) Modifiy bin/hadoop to point to HADOOP_YARN_HOME

2012-09-12 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli reassigned HADOOP-8794:
---

Assignee: Vinod Kumar Vavilapalli

> Modifiy bin/hadoop to point to HADOOP_YARN_HOME
> ---
>
> Key: HADOOP-8794
> URL: https://issues.apache.org/jira/browse/HADOOP-8794
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
> Attachments: HADOOP-8794-20120912.txt
>
>
> YARN-9 renames YARN_HOME to HADOOP_YARN_HOME. The bin/hadoop script needs to 
> do the same.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8794) Modifiy bin/hadoop to point to HADOOP_YARN_HOME

2012-09-12 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-8794:


Attachment: HADOOP-8794-20120912.txt

This should do it. Tested by running MapReduce jobs via "bin/hadoop jar" on a 
single-node cluster with the YARN-9 patch.

> Modifiy bin/hadoop to point to HADOOP_YARN_HOME
> ---
>
> Key: HADOOP-8794
> URL: https://issues.apache.org/jira/browse/HADOOP-8794
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Vinod Kumar Vavilapalli
> Attachments: HADOOP-8794-20120912.txt
>
>
> YARN-9 renames YARN_HOME to HADOOP_YARN_HOME. The bin/hadoop script needs to 
> do the same.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8755) Print thread dump when tests fail due to timeout

2012-09-12 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454425#comment-13454425
 ] 

Aaron T. Myers commented on HADOOP-8755:


That sounds fine to me, Andrey. Once this patch goes in, I'll send out a note 
to common-dev@ saying something along the lines of "We now have support for 
printing a thread dump whenever a test case times out. However, this will only 
happen for test cases which are annotated with a JUnit timeout. If you see a 
test case fail by reaching the Surefire fork timeout, please file a JIRA to add 
a JUnit timeout for that test. If when adding a test case you think it might 
time out, please add a JUnit timeout."

Sound good?
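For reference, the annotation in question is plain JUnit 4 (the test name here 
is made up):

{code}
// Fails the test from inside the JVM after 60s, which is what lets the
// thread dump be captured before Surefire's fork-level timeout kicks in.
@Test(timeout = 60000)
public void testFailoverDoesNotHang() throws Exception {
  // ...
}
{code}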

> Print thread dump when tests fail due to timeout 
> -
>
> Key: HADOOP-8755
> URL: https://issues.apache.org/jira/browse/HADOOP-8755
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 1.0.3, 0.23.1, 2.0.0-alpha
>Reporter: Andrey Klochkov
>Assignee: Andrey Klochkov
> Attachments: HDFS-3762-branch-0.23.patch, HDFS-3762.patch, 
> HDFS-3762.patch, HDFS-3762.patch, HDFS-3762.patch, HDFS-3762.patch
>
>
> When a test fails due to a timeout it's often not clear what the root cause is. 
> See HDFS-3364 as an example.
> We can print a dump of all threads in this case; this may help in finding the cause.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8794) Modifiy bin/hadoop to point to HADOOP_YARN_HOME

2012-09-12 Thread Vinod Kumar Vavilapalli (JIRA)
Vinod Kumar Vavilapalli created HADOOP-8794:
---

 Summary: Modifiy bin/hadoop to point to HADOOP_YARN_HOME
 Key: HADOOP-8794
 URL: https://issues.apache.org/jira/browse/HADOOP-8794
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Vinod Kumar Vavilapalli


YARN-9 renames YARN_HOME to HADOOP_YARN_HOME. The bin/hadoop script needs to 
do the same.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8755) Print thread dump when tests fail due to timeout

2012-09-12 Thread Andrey Klochkov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454418#comment-13454418
 ] 

Andrey Klochkov commented on HADOOP-8755:
-

Aaron, I'll do the first improvement in this JIRA as it's really simple.

Marking all the tests with timeout annotations manually - agreed, that's much 
simpler and more transparent. The downside is polluting all the tests with it. 
In general, limiting true unit tests with timeouts isn't the right thing, and 
in our case we're just doing it to troubleshoot the flakiness of particular 
component tests. How about not having timeouts by default and instead marking 
just those tests which fail intermittently?

> Print thread dump when tests fail due to timeout 
> -
>
> Key: HADOOP-8755
> URL: https://issues.apache.org/jira/browse/HADOOP-8755
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 1.0.3, 0.23.1, 2.0.0-alpha
>Reporter: Andrey Klochkov
>Assignee: Andrey Klochkov
> Attachments: HDFS-3762-branch-0.23.patch, HDFS-3762.patch, 
> HDFS-3762.patch, HDFS-3762.patch, HDFS-3762.patch, HDFS-3762.patch
>
>
> When a test fails due to a timeout it's often not clear what the root cause is. 
> See HDFS-3364 as an example.
> We can print a dump of all threads in this case; this may help in finding the cause.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8457) Address file ownership issue for users in Administrators group on Windows.

2012-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454389#comment-13454389
 ] 

Hadoop QA commented on HADOOP-8457:
---

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12544124/HADOOP-8457-branch-1-win_Admins%283%29.patch
  against trunk revision .

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1447//console

This message is automatically generated.

> Address file ownership issue for users in Administrators group on Windows.
> --
>
> Key: HADOOP-8457
> URL: https://issues.apache.org/jira/browse/HADOOP-8457
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 1.1.0
>Reporter: Chuan Liu
>Assignee: Ivan Mitic
>Priority: Minor
> Attachments: HADOOP-8457-branch-1-win_Admins(2).patch, 
> HADOOP-8457-branch-1-win_Admins(3).patch, 
> HADOOP-8457-branch-1-win_Admins.patch
>
>
> On Linux, the initial file owners are the creators. (I think this is true in 
> general. If there are exceptions, please let me know.) On Windows, a file 
> created by a user in the Administrators group has the initial owner 
> ‘Administrators’, i.e. the Administrators group is the initial owner of the 
> file. This leads to an exception when we check file ownership in the 
> SecureIOUtils.checkStat() method; as a result, the method is disabled right 
> now. We need to address this problem and enable the method on Windows.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8694) Create true symbolic links on Windows

2012-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454385#comment-13454385
 ] 

Hadoop QA commented on HADOOP-8694:
---

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12544286/HADOOP-8694-branch-1-win-2.patch
  against trunk revision .

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1446//console

This message is automatically generated.

> Create true symbolic links on Windows
> -
>
> Key: HADOOP-8694
> URL: https://issues.apache.org/jira/browse/HADOOP-8694
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1-win
>Reporter: Chuan Liu
>Assignee: Chuan Liu
> Attachments: HADOOP-8694-branch-1-win-2.patch, 
> HADOOP-8694-branch-1-win.patch, secpol.png
>
>
> In branch-1-win, we currently copy files for symbolic links in Hadoop on 
> Windows. We have talked to [~davidlao], who made the original fix, and did 
> some investigation on Windows. Windows began to support symbolic links 
> (symlinks) with Vista/Server 2008. The original reason to copy files instead 
> of creating actual symlinks is that only Administrators have the privilege to 
> create symlinks on Windows _by default_. After talking to NTFS folks, we 
> learned that the reason is mostly security, and this default behavior may not 
> change in the near future. The behavior can, however, be changed via the 
> Local Security Policy management console, i.e. secpol.msc, under Security 
> Settings\Local Policies\User Rights Assignment\Create symbolic links.
>  
> In Hadoop, symlinks are mostly used for the DistributedCache and attempt 
> logs. We felt these usages are important enough for us to provide true 
> symlink support, and users need to have the symlink creation privilege 
> enabled on Windows to use Hadoop.
> This JIRA is created to track symlink support on Windows.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8733) TestStreamingTaskLog, TestJvmManager, TestLinuxTaskControllerLaunchArgs fail on Windows

2012-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454384#comment-13454384
 ] 

Hadoop QA commented on HADOOP-8733:
---

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12544033/HADOOP-8733-scripts.2.patch
  against trunk revision .

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1445//console

This message is automatically generated.

> TestStreamingTaskLog, TestJvmManager, TestLinuxTaskControllerLaunchArgs fail 
> on Windows
> ---
>
> Key: HADOOP-8733
> URL: https://issues.apache.org/jira/browse/HADOOP-8733
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1-win
>Reporter: Ivan Mitic
>Assignee: Ivan Mitic
> Attachments: HADOOP-8733-scripts.2.patch, 
> HADOOP-8733-scripts.2.patch, HADOOP-8733-scripts.patch
>
>
> Jira tracking test failures related to test .sh script dependencies. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8756) Fix SEGV when libsnappy is in java.library.path but not LD_LIBRARY_PATH

2012-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454377#comment-13454377
 ] 

Hadoop QA commented on HADOOP-8756:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12544869/HADOOP-8756.004.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestZKFailoverController

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1444//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1444//console

This message is automatically generated.

> Fix SEGV when libsnappy is in java.library.path but not LD_LIBRARY_PATH
> ---
>
> Key: HADOOP-8756
> URL: https://issues.apache.org/jira/browse/HADOOP-8756
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.0.2-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HADOOP-8756.002.patch, HADOOP-8756.003.patch, 
> HADOOP-8756.004.patch
>
>
> We use {{System.loadLibrary("snappy")}} from the Java side.  However in 
> libhadoop, we use {{dlopen}} to open libsnappy.so dynamically.  
> System.loadLibrary uses {{java.library.path}} to resolve libraries, and 
> {{dlopen}} uses {{LD_LIBRARY_PATH}} and the system paths to resolve 
> libraries.  Because of this, the two library loading functions can be at odds.
> We should fix this so we only load the library once, preferably using the 
> standard Java {{java.library.path}}.
> We should also log the search path(s) we use for {{libsnappy.so}} when 
> loading fails, so that it's easier to diagnose configuration issues.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8756) Fix SEGV when libsnappy is in java.library.path but not LD_LIBRARY_PATH

2012-09-12 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454365#comment-13454365
 ] 

Roman Shaposhnik commented on HADOOP-8756:
--

bq. The patch could be revised to manually search java.library.path, I guess. 
Would that be worthwhile?

Well, my concern is around all the clients that don't use the hadoop launcher 
script but need to access the snappy codec *on the client* side. Flume is a 
good example here: since it launches the JVM directly, it also has to make sure 
it sets up LD_LIBRARY_PATH if we don't provide the manual search capability in 
core hadoop itself.

I guess what I'm trying to say is that we have to have a solution for things 
like Flume. It would be nice if it worked automagically.

> Fix SEGV when libsnappy is in java.library.path but not LD_LIBRARY_PATH
> ---
>
> Key: HADOOP-8756
> URL: https://issues.apache.org/jira/browse/HADOOP-8756
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.0.2-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HADOOP-8756.002.patch, HADOOP-8756.003.patch, 
> HADOOP-8756.004.patch
>
>
> We use {{System.loadLibrary("snappy")}} from the Java side.  However in 
> libhadoop, we use {{dlopen}} to open libsnappy.so dynamically.  
> System.loadLibrary uses {{java.library.path}} to resolve libraries, and 
> {{dlopen}} uses {{LD_LIBRARY_PATH}} and the system paths to resolve 
> libraries.  Because of this, the two library loading functions can be at odds.
> We should fix this so we only load the library once, preferably using the 
> standard Java {{java.library.path}}.
> We should also log the search path(s) we use for {{libsnappy.so}} when 
> loading fails, so that it's easier to diagnose configuration issues.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8733) TestStreamingTaskLog, TestJvmManager, TestLinuxTaskControllerLaunchArgs fail on Windows

2012-09-12 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HADOOP-8733:
---

Affects Version/s: 1-win
   Status: Patch Available  (was: Open)

> TestStreamingTaskLog, TestJvmManager, TestLinuxTaskControllerLaunchArgs fail 
> on Windows
> ---
>
> Key: HADOOP-8733
> URL: https://issues.apache.org/jira/browse/HADOOP-8733
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1-win
>Reporter: Ivan Mitic
>Assignee: Ivan Mitic
> Attachments: HADOOP-8733-scripts.2.patch, 
> HADOOP-8733-scripts.2.patch, HADOOP-8733-scripts.patch
>
>
> Jira tracking test failures related to test .sh script dependencies. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8694) Create true symbolic links on Windows

2012-09-12 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HADOOP-8694:
---

Fix Version/s: (was: 1-win)
   Status: Patch Available  (was: Open)

> Create true symbolic links on Windows
> -
>
> Key: HADOOP-8694
> URL: https://issues.apache.org/jira/browse/HADOOP-8694
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1-win
>Reporter: Chuan Liu
>Assignee: Chuan Liu
> Attachments: HADOOP-8694-branch-1-win-2.patch, 
> HADOOP-8694-branch-1-win.patch, secpol.png
>
>
> In branch-1-win, we currently copy files for symbolic links in Hadoop on 
> Windows. We have talked to [~davidlao], who made the original fix, and did 
> some investigation on Windows. Windows began to support symbolic links 
> (symlinks) with Vista/Server 2008. The original reason to copy files instead 
> of creating actual symlinks is that only Administrators have the privilege to 
> create symlinks on Windows _by default_. After talking to NTFS folks, we 
> learned that the reason is mostly security, and this default behavior may not 
> change in the near future. The behavior can, however, be changed via the 
> Local Security Policy management console, i.e. secpol.msc, under Security 
> Settings\Local Policies\User Rights Assignment\Create symbolic links.
>  
> In Hadoop, symlinks are mostly used for the DistributedCache and attempt 
> logs. We felt these usages are important enough for us to provide true 
> symlink support, and users need to have the symlink creation privilege 
> enabled on Windows to use Hadoop.
> This JIRA is created to track symlink support on Windows.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Work started] (HADOOP-8457) Address file ownership issue for users in Administrators group on Windows.

2012-09-12 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-8457 started by Ivan Mitic.

> Address file ownership issue for users in Administrators group on Windows.
> --
>
> Key: HADOOP-8457
> URL: https://issues.apache.org/jira/browse/HADOOP-8457
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 1.1.0
>Reporter: Chuan Liu
>Assignee: Ivan Mitic
>Priority: Minor
> Attachments: HADOOP-8457-branch-1-win_Admins(2).patch, 
> HADOOP-8457-branch-1-win_Admins(3).patch, 
> HADOOP-8457-branch-1-win_Admins.patch
>
>
> On Linux, the initial file owners are the creators. (I think this is true in 
> general. If there are exceptions, please let me know.) On Windows, a file 
> created by a user in the Administrators group has the initial owner 
> ‘Administrators’, i.e. the Administrators group is the initial owner of the 
> file. This leads to an exception when we check file ownership in the 
> SecureIOUtils.checkStat() method; as a result, the method is disabled right 
> now. We need to address this problem and enable the method on Windows.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8457) Address file ownership issue for users in Administrators group on Windows.

2012-09-12 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HADOOP-8457:
---

Affects Version/s: (was: 0.24.0)
   Status: Patch Available  (was: In Progress)

> Address file ownership issue for users in Administrators group on Windows.
> --
>
> Key: HADOOP-8457
> URL: https://issues.apache.org/jira/browse/HADOOP-8457
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 1.1.0
>Reporter: Chuan Liu
>Assignee: Ivan Mitic
>Priority: Minor
> Attachments: HADOOP-8457-branch-1-win_Admins(2).patch, 
> HADOOP-8457-branch-1-win_Admins(3).patch, 
> HADOOP-8457-branch-1-win_Admins.patch
>
>
> On Linux, the initial file owners are the creators. (I think this is true in 
> general. If there are exceptions, please let me know.) On Windows, a file 
> created by a user in the Administrators group has the initial owner 
> ‘Administrators’, i.e. the Administrators group is the initial owner of the 
> file. This leads to an exception when we check file ownership in the 
> SecureIOUtils.checkStat() method; as a result, the method is disabled right 
> now. We need to address this problem and enable the method on Windows.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8793) hadoop-core has dependencies on two different versions of commons-httpclient

2012-09-12 Thread Christopher Tubbs (JIRA)
Christopher Tubbs created HADOOP-8793:
-

 Summary: hadoop-core has dependencies on two different versions of 
commons-httpclient
 Key: HADOOP-8793
 URL: https://issues.apache.org/jira/browse/HADOOP-8793
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 0.20.205.0
 Environment: Seen on 0.20.205.0, but may be an issue for other 
versions of hadoop-core (and probably other hadoop builds)
Reporter: Christopher Tubbs
Priority: Critical


hadoop-core fails to enforce dependency convergence, resulting in potential 
conflicts.

At the very least, there appears to be a direct dependency on
{code}commons-httpclient:commons-httpclient:3.0.1{code}
but a transitive dependency on
{code}commons-httpclient:commons-httpclient:3.1{code}
via
{code}net.java.dev.jets3t:jets3t:0.7.1{code}

See http://maven.apache.org/enforcer/enforcer-rules/dependencyConvergence.html 
for details on how to enforce dependency convergence in Maven.

Please enforce dependency convergence... it helps projects that depend on 
hadoop libraries build much more reliably and safely.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8731) Public distributed cache support for Windows

2012-09-12 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454352#comment-13454352
 ] 

Ivan Mitic commented on HADOOP-8731:


bq. can you please explain the chmod() changes in 
TrackerDistributedCacheManager.
Thanks Bikas for reviewing. The issue is that the right permissions are not set 
on files if I do not make this change. If you take a look at the previous 
{{FileUtils.chmod()}} call, it only sets permissions for archives, but not for 
files. Now that I have moved it below, it sets the permissions for both files 
and archives.
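
Schematically, the move looks like this (variable names assumed, not the 
literal patch):

{code}
// Before: FileUtil.chmod() ran only inside the archive branch, so plain
// cache files kept their default permissions.
if (isArchive) {
  unpackArchive(cacheFile, workDir);       // assumed helper
}
// After the move: chmod runs after both branches, so files and archives
// get the same permissions. FileUtil.chmod is the real Hadoop helper.
FileUtil.chmod(workDir.toString(), "ugo+rx", true);
{code}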

> Public distributed cache support for Windows
> 
>
> Key: HADOOP-8731
> URL: https://issues.apache.org/jira/browse/HADOOP-8731
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: filecache
>Reporter: Ivan Mitic
>Assignee: Ivan Mitic
> Attachments: HADOOP-8731-PublicCache.patch
>
>
> A distributed cache file is considered public (sharable between MR jobs) if 
> OTHER has read permissions on the file and +x permissions all the way up in 
> the folder hierarchy. By default, Windows permissions are mapped to "700" all 
> the way up to the drive letter, and it is unreasonable to ask users to change 
> the permission on the whole drive to make the file public. IOW, it is hardly 
> possible to have a public distributed cache on Windows. 
> To enable the scenario and make it more "Windows friendly", the criteria for 
> when a file is considered public should be relaxed. One proposal is to check 
> only whether the user has given the EVERYONE group permission on the file 
> (and to discard the +x check on parent folders).
> Security considerations for the proposal: Default permissions on Unix 
> platforms are usually "775" or "755" meaning that OTHER users can read and 
> list folders by default. What this also means is that Hadoop users have to 
> explicitly make the files private in order to make them private in the 
> cluster (please correct me if this is not the case in real life!). On 
> Windows, default permissions are "700". This means that by default all files 
> are private. In the new model, if users want to make them public, they have 
> to explicitly add EVERYONE group permissions on the file. 
> TestTrackerDistributedCacheManager fails because of this issue.
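
To make the current criteria concrete, the check amounts to something like 
this (a simplified sketch; the real logic lives in 
TrackerDistributedCacheManager, and the names here are assumed):

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsAction;

static boolean isPublic(FileSystem fs, Path file) throws IOException {
  // OTHER must be able to read the file itself...
  FsAction other = fs.getFileStatus(file).getPermission().getOtherAction();
  if (!other.implies(FsAction.READ)) {
    return false;
  }
  // ...and every ancestor directory must grant OTHER execute (+x)
  for (Path p = file.getParent(); p != null; p = p.getParent()) {
    FsAction dir = fs.getFileStatus(p).getPermission().getOtherAction();
    if (!dir.implies(FsAction.EXECUTE)) {
      return false;
    }
  }
  return true;
}
{code}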

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8756) Fix SEGV when libsnappy is in java.library.path but not LD_LIBRARY_PATH

2012-09-12 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454344#comment-13454344
 ] 

Colin Patrick McCabe commented on HADOOP-8756:
--

bq. Just to make sure -- are the patches posted to this JIRA meant to provide 
an alternative fix for the issue that won't require us to change the value of 
LD_LIBRARY_PATH

No, you will still need to set {{LD_LIBRARY_PATH}}.  The patch could be revised 
to manually search {{java.library.path}}, I guess.  Would that be worthwhile?

bq. If so, shouldn't your patches include the changes that would restore the 
old behavior?

The behavior hasn't changed.  You always needed to have {{libsnappy.so}} in 
your {{LD_LIBRARY_PATH}} or system library path in order to load it with 
{{dlopen}}.
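
For what it's worth, "manually search {{java.library.path}}" could look 
roughly like this (an assumed approach, not an actual patch):

{code}
import java.io.File;

// Resolve libsnappy against each java.library.path entry ourselves, then
// load by absolute path so dlopen's own search path never gets involved.
String libName = System.mapLibraryName("snappy");    // e.g. "libsnappy.so"
String searchPath = System.getProperty("java.library.path", "");
for (String dir : searchPath.split(File.pathSeparator)) {
  File candidate = new File(dir, libName);
  if (candidate.isFile()) {
    System.load(candidate.getAbsolutePath());
    break;
  }
}
{code}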

> Fix SEGV when libsnappy is in java.library.path but not LD_LIBRARY_PATH
> ---
>
> Key: HADOOP-8756
> URL: https://issues.apache.org/jira/browse/HADOOP-8756
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.0.2-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HADOOP-8756.002.patch, HADOOP-8756.003.patch, 
> HADOOP-8756.004.patch
>
>
> We use {{System.loadLibrary("snappy")}} from the Java side.  However in 
> libhadoop, we use {{dlopen}} to open libsnappy.so dynamically.  
> System.loadLibrary uses {{java.library.path}} to resolve libraries, and 
> {{dlopen}} uses {{LD_LIBRARY_PATH}} and the system paths to resolve 
> libraries.  Because of this, the two library loading functions can be at odds.
> We should fix this so we only load the library once, preferably using the 
> standard Java {{java.library.path}}.
> We should also log the search path(s) we use for {{libsnappy.so}} when 
> loading fails, so that it's easier to diagnose configuration issues.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8734) LocalJobRunner does not support private distributed cache

2012-09-12 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454336#comment-13454336
 ] 

Ivan Mitic commented on HADOOP-8734:


What I mean is that I made a small change to TestMRWithDistributedCache such 
that the test fails without my fix to the LocalJobRunner. 

{code}
+// Change permissions on one file to be private (others cannot read
+// the file) to make sure private distributed cache works fine with
+// the LocalJobRunner.
+FileUtil.chmod(fourth.toUri().getPath(), "700");
{code}

Let me know if this clarifies things.

> LocalJobRunner does not support private distributed cache
> -
>
> Key: HADOOP-8734
> URL: https://issues.apache.org/jira/browse/HADOOP-8734
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: filecache
>Reporter: Ivan Mitic
>Assignee: Ivan Mitic
> Attachments: HADOOP-8734-LocalJobRunner.patch
>
>
> It seems that LocalJobRunner does not support private distributed cache. The 
> issue is more visible on Windows as all DC files are private by default (see 
> HADOOP-8731).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8756) Fix SEGV when libsnappy is in java.library.path but not LD_LIBRARY_PATH

2012-09-12 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454333#comment-13454333
 ] 

Roman Shaposhnik commented on HADOOP-8756:
--

This seems to be related to HADOOP-8781. Just to make sure -- are the patches 
posted to this JIRA meant to provide an alternative fix for the issue that 
won't require us to change the value of LD_LIBRARY_PATH? If so, shouldn't your 
patches include the changes that would restore the old behavior?

> Fix SEGV when libsnappy is in java.library.path but not LD_LIBRARY_PATH
> ---
>
> Key: HADOOP-8756
> URL: https://issues.apache.org/jira/browse/HADOOP-8756
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.0.2-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HADOOP-8756.002.patch, HADOOP-8756.003.patch, 
> HADOOP-8756.004.patch
>
>
> We use {{System.loadLibrary("snappy")}} from the Java side.  However in 
> libhadoop, we use {{dlopen}} to open libsnappy.so dynamically.  
> System.loadLibrary uses {{java.library.path}} to resolve libraries, and 
> {{dlopen}} uses {{LD_LIBRARY_PATH}} and the system paths to resolve 
> libraries.  Because of this, the two library loading functions can be at odds.
> We should fix this so we only load the library once, preferably using the 
> standard Java {{java.library.path}}.
> We should also log the search path(s) we use for {{libsnappy.so}} when 
> loading fails, so that it's easier to diagnose configuration issues.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8734) LocalJobRunner does not support private distributed cache

2012-09-12 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454326#comment-13454326
 ] 

Bikas Saha commented on HADOOP-8734:


bq. Check out the fix I did to TestMRWithDistributedCache, this is an E2E use 
case.
What fix are you mentioning?

> LocalJobRunner does not support private distributed cache
> -
>
> Key: HADOOP-8734
> URL: https://issues.apache.org/jira/browse/HADOOP-8734
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: filecache
>Reporter: Ivan Mitic
>Assignee: Ivan Mitic
> Attachments: HADOOP-8734-LocalJobRunner.patch
>
>
> It seems that LocalJobRunner does not support private distributed cache. The 
> issue is more visible on Windows as all DC files are private by default (see 
> HADOOP-8731).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8756) Fix SEGV when libsnappy is in java.library.path but not LD_LIBRARY_PATH

2012-09-12 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8756:
-

Attachment: HADOOP-8756.004.patch

* fix whitespace

> Fix SEGV when libsnappy is in java.library.path but not LD_LIBRARY_PATH
> ---
>
> Key: HADOOP-8756
> URL: https://issues.apache.org/jira/browse/HADOOP-8756
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.0.2-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HADOOP-8756.002.patch, HADOOP-8756.003.patch, 
> HADOOP-8756.004.patch
>
>
> We use {{System.loadLibrary("snappy")}} from the Java side.  However in 
> libhadoop, we use {{dlopen}} to open libsnappy.so dynamically.  
> System.loadLibrary uses {{java.library.path}} to resolve libraries, and 
> {{dlopen}} uses {{LD_LIBRARY_PATH}} and the system paths to resolve 
> libraries.  Because of this, the two library loading functions can be at odds.
> We should fix this so we only load the library once, preferably using the 
> standard Java {{java.library.path}}.
> We should also log the search path(s) we use for {{libsnappy.so}} when 
> loading fails, so that it's easier to diagnose configuration issues.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8755) Print thread dump when tests fail due to timeout

2012-09-12 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454284#comment-13454284
 ] 

Aaron T. Myers commented on HADOOP-8755:


Thanks for the update, Andrey. I agree that both of those improvements sound 
good, though both could reasonably be done in separate JIRAs.

Regarding the difficulty of implementing option #2, I agree that both of those 
sound pretty hacky, probably to the degree that it's not worth it. I don't even 
think that dynamically modifying the @Test annotations for all the test methods 
would work, since I don't think you can change annotation attributes at 
run-time. I've also taken a look at the JUnit docs, and I think another way of 
setting a default timeout might be to implement a custom BlockJUnit4ClassRunner 
which overrides the withPotentialTimeout method to add a default value if none 
is set. That's still not trivial, but it seems a little less hacky than either 
of the two options so far proposed.
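
To make that concrete, a rough sketch of such a runner (JUnit 4 names; whether 
{{FailOnTimeout}}'s long-based constructor is public in our JUnit version 
would need checking):

{code}
import org.junit.Test;
import org.junit.internal.runners.statements.FailOnTimeout;
import org.junit.runners.BlockJUnit4ClassRunner;
import org.junit.runners.model.FrameworkMethod;
import org.junit.runners.model.InitializationError;
import org.junit.runners.model.Statement;

public class DefaultTimeoutRunner extends BlockJUnit4ClassRunner {
  // arbitrary example default; the actual value is up for discussion
  private static final long DEFAULT_TIMEOUT_MS = 5 * 60 * 1000;

  public DefaultTimeoutRunner(Class<?> klass) throws InitializationError {
    super(klass);
  }

  @Override
  protected Statement withPotentialTimeout(FrameworkMethod method,
      Object test, Statement next) {
    Test annotation = method.getAnnotation(Test.class);
    if (annotation != null && annotation.timeout() > 0) {
      // the test declared its own timeout; keep the stock behavior
      return super.withPotentialTimeout(method, test, next);
    }
    return new FailOnTimeout(next, DEFAULT_TIMEOUT_MS);
  }
}
{code}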

All that said, given the difficulty of setting a default JUnit test timeout, 
I'd even be OK with just modifying all existing tests to set the timeout 
attribute of the @Test annotation, and going forward being sure to always set 
one. Considering we recently converted all of Hadoop's tests to JUnit 4 style, 
this seems like it might be reasonable. I think we could get very close to 
such a patch just by doing the following:

{code}
sed -i 's/@Test$/@Test(timeout=48)/g' `egrep -r '@Test$' . -l`
sed -i 's/@Test(expected/@Test(timeout=48, expected/g' `egrep -r '@Test\(expected' . -l`
{code}

Thoughts?

> Print thread dump when tests fail due to timeout 
> -
>
> Key: HADOOP-8755
> URL: https://issues.apache.org/jira/browse/HADOOP-8755
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 1.0.3, 0.23.1, 2.0.0-alpha
>Reporter: Andrey Klochkov
>Assignee: Andrey Klochkov
> Attachments: HDFS-3762-branch-0.23.patch, HDFS-3762.patch, 
> HDFS-3762.patch, HDFS-3762.patch, HDFS-3762.patch, HDFS-3762.patch
>
>
> When a test fails due to timeout it's often not clear what is the root cause. 
> See HDFS-3364 as an example.
> We can print dump of all threads in this case, this may help finding causes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8791) rm "Only deletes non empty directory and files."

2012-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454264#comment-13454264
 ] 

Hadoop QA commented on HADOOP-8791:
---

+1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12544860/HADOOP-8791-trunk.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+0 tests included.  The patch appears to be a documentation patch that 
doesn't require tests.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1443//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1443//console

This message is automatically generated.

> rm "Only deletes non empty directory and files."
> 
>
> Key: HADOOP-8791
> URL: https://issues.apache.org/jira/browse/HADOOP-8791
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 1.0.3, 3.0.0
>Reporter: Bertrand Dechoux
>  Labels: documentation
> Attachments: HADOOP-8791-branch-1.patch, HADOOP-8791-trunk.patch
>
>
> The documentation (1.0.3) is describing the opposite of what rm does.
> It should be "Only delete files and empty directories."
> With regard to files, the size of the file should not matter, should it?
> Or I am totally misunderstanding the semantics of this command, and I am not 
> the only one.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-7688) When a servlet filter throws an exception in init(..), the Jetty server failed silently.

2012-09-12 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454246#comment-13454246
 ] 

Todd Lipcon commented on HADOOP-7688:
-

Yes, let's merge this to branch-2 and branch-1 if applicable.

> When a servlet filter throws an exception in init(..), the Jetty server 
> failed silently. 
> -
>
> Key: HADOOP-7688
> URL: https://issues.apache.org/jira/browse/HADOOP-7688
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 0.23.0, 0.24.0
>Reporter: Tsz Wo (Nicholas), SZE
>Assignee: Uma Maheswara Rao G
> Fix For: 3.0.0
>
> Attachments: filter-init-exception-test.patch, HADOOP-7688.patch, 
> org.apache.hadoop.http.TestServletFilter-output.txt
>
>
> When a servlet filter throws a ServletException in init(..), the exception is 
> logged by Jetty but not re-thrown to the caller.  As a result, the Jetty 
> server fails silently.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8731) Public distributed cache support for Windows

2012-09-12 Thread Bikas Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454240#comment-13454240
 ] 

Bikas Saha commented on HADOOP-8731:


can you please explain the chmod() changes in TrackerDistributedCacheManager.

> Public distributed cache support for Windows
> 
>
> Key: HADOOP-8731
> URL: https://issues.apache.org/jira/browse/HADOOP-8731
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: filecache
>Reporter: Ivan Mitic
>Assignee: Ivan Mitic
> Attachments: HADOOP-8731-PublicCache.patch
>
>
> A distributed cache file is considered public (sharable between MR jobs) if 
> OTHER has read permissions on the file and +x permissions all the way up in 
> the folder hierarchy. By default, Windows permissions are mapped to "700" all 
> the way up to the drive letter, and it is unreasonable to ask users to change 
> the permission on the whole drive to make the file public. IOW, it is hardly 
> possible to have a public distributed cache on Windows. 
> To enable the scenario and make it more "Windows friendly", the criteria for 
> when a file is considered public should be relaxed. One proposal is to check 
> only whether the user has given the EVERYONE group permission on the file 
> (and to discard the +x check on parent folders).
> Security considerations for the proposal: Default permissions on Unix 
> platforms are usually "775" or "755" meaning that OTHER users can read and 
> list folders by default. What this also means is that Hadoop users have to 
> explicitly make the files private in order to make them private in the 
> cluster (please correct me if this is not the case in real life!). On 
> Windows, default permissions are "700". This means that by default all files 
> are private. In the new model, if users want to make them public, they have 
> to explicitly add EVERYONE group permissions on the file. 
> TestTrackerDistributedCacheManager fails because of this issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8791) rm "Only deletes non empty directory and files."

2012-09-12 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HADOOP-8791:
--

Affects Version/s: 3.0.0
   Status: Patch Available  (was: Open)

> rm "Only deletes non empty directory and files."
> 
>
> Key: HADOOP-8791
> URL: https://issues.apache.org/jira/browse/HADOOP-8791
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 1.0.3, 3.0.0
>Reporter: Bertrand Dechoux
>  Labels: documentation
> Attachments: HADOOP-8791-branch-1.patch, HADOOP-8791-trunk.patch
>
>
> The documentation (1.0.3) is describing the opposite of what rm does.
> It should be "Only delete files and empty directories."
> With regard to files, the size of the file should not matter, should it?
> Or I am totally misunderstanding the semantics of this command, and I am not 
> the only one.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8791) rm "Only deletes non empty directory and files."

2012-09-12 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HADOOP-8791:
--

Attachment: HADOOP-8791-trunk.patch
HADOOP-8791-branch-1.patch

So I made the two patches for branch-1 and trunk on Bertrand's behalf. Thanks 
for the find, Bertrand!

> rm "Only deletes non empty directory and files."
> 
>
> Key: HADOOP-8791
> URL: https://issues.apache.org/jira/browse/HADOOP-8791
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 1.0.3
>Reporter: Bertrand Dechoux
>  Labels: documentation
> Attachments: HADOOP-8791-branch-1.patch, HADOOP-8791-trunk.patch
>
>
> The documentation (1.0.3) is describing the opposite of what rm does.
> It should be "Only delete files and empty directories."
> With regard to files, the size of the file should not matter, should it?
> Or I am totally misunderstanding the semantics of this command, and I am not 
> the only one.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8756) Fix SEGV when libsnappy is in java.library.path but not LD_LIBRARY_PATH

2012-09-12 Thread Andy Isaacson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454214#comment-13454214
 ] 

Andy Isaacson commented on HADOOP-8756:
---

{code}
@@ -295,4 +298,5 @@ public class SnappyCompressor implements Compressor {
   private native static void initIDs();
 
   private native int compressBytesDirect();
+
 }
{code}
Unneeded whitespace change.

Other than that, the patch looks right and is a nice conceptual cleanup as well 
as fixing the SEGV. +1.

> Fix SEGV when libsnappy is in java.library.path but not LD_LIBRARY_PATH
> ---
>
> Key: HADOOP-8756
> URL: https://issues.apache.org/jira/browse/HADOOP-8756
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.0.2-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HADOOP-8756.002.patch, HADOOP-8756.003.patch
>
>
> We use {{System.loadLibrary("snappy")}} from the Java side.  However in 
> libhadoop, we use {{dlopen}} to open libsnappy.so dynamically.  
> System.loadLibrary uses {{java.library.path}} to resolve libraries, and 
> {{dlopen}} uses {{LD_LIBRARY_PATH}} and the system paths to resolve 
> libraries.  Because of this, the two library loading functions can be at odds.
> We should fix this so we only load the library once, preferably using the 
> standard Java {{java.library.path}}.
> We should also log the search path(s) we use for {{libsnappy.so}} when 
> loading fails, so that it's easier to diagnose configuration issues.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8755) Print thread dump when tests fail due to timeout

2012-09-12 Thread Andrey Klochkov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454171#comment-13454171
 ] 

Andrey Klochkov commented on HADOOP-8755:
-

Hi Aaron, I'd like to make a number of improvements before submitting a patch. 
These are:
# include deadlock detection in the dump (a sketch follows below)
# introduce default timeouts at the JUnit level

The 2nd one is not easy. I'm thinking about 2 possible ways to implement it, 
and both seem pretty hacky. The first is implementing a custom Surefire 
provider. It's not straightforward (if possible at all), as there are no 
explicit extension points for that in Surefire. The second is doing 
instrumentation with a custom JVM agent, adding a "timeout" parameter to the 
@Test annotation for all test methods that don't provide one. I'm planning to 
evaluate both ways, but it may take time. I think a separate JIRA would be 
better for this part. WDYT?
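
On the first item, the JDK already exposes deadlock detection through 
{{ThreadMXBean}}, so the dump could grow something like this (a sketch of the 
idea, not code from the patch):

{code}
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

static void printDeadlocksIfAny() {
  ThreadMXBean bean = ManagementFactory.getThreadMXBean();
  long[] ids = bean.findDeadlockedThreads();   // null when no deadlock exists
  if (ids != null) {
    System.err.println("Deadlocked threads detected:");
    for (ThreadInfo info : bean.getThreadInfo(ids, true, true)) {
      System.err.println(info);   // stack trace plus held locks/monitors
    }
  }
}
{code}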

> Print thread dump when tests fail due to timeout 
> -
>
> Key: HADOOP-8755
> URL: https://issues.apache.org/jira/browse/HADOOP-8755
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 1.0.3, 0.23.1, 2.0.0-alpha
>Reporter: Andrey Klochkov
>Assignee: Andrey Klochkov
> Attachments: HDFS-3762-branch-0.23.patch, HDFS-3762.patch, 
> HDFS-3762.patch, HDFS-3762.patch, HDFS-3762.patch, HDFS-3762.patch
>
>
> When a test fails due to timeout it's often not clear what is the root cause. 
> See HDFS-3364 as an example.
> We can print dump of all threads in this case, this may help finding causes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8756) Fix SEGV when libsnappy is in java.library.path but not LD_LIBRARY_PATH

2012-09-12 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8756:
-

Summary: Fix SEGV when libsnappy is in java.library.path but not 
LD_LIBRARY_PATH  (was: libsnappy loader issues)

> Fix SEGV when libsnappy is in java.library.path but not LD_LIBRARY_PATH
> ---
>
> Key: HADOOP-8756
> URL: https://issues.apache.org/jira/browse/HADOOP-8756
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.0.2-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HADOOP-8756.002.patch, HADOOP-8756.003.patch
>
>
> We use {{System.loadLibrary("snappy")}} from the Java side.  However in 
> libhadoop, we use {{dlopen}} to open libsnappy.so dynamically.  
> System.loadLibrary uses {{java.library.path}} to resolve libraries, and 
> {{dlopen}} uses {{LD_LIBRARY_PATH}} and the system paths to resolve 
> libraries.  Because of this, the two library loading functions can be at odds.
> We should fix this so we only load the library once, preferably using the 
> standard Java {{java.library.path}}.
> We should also log the search path(s) we use for {{libsnappy.so}} when 
> loading fails, so that it's easier to diagnose configuration issues.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8780) Update DeprecatedProperties apt file

2012-09-12 Thread Ahmed Radwan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Radwan updated HADOOP-8780:
-

Attachment: HADOOP-8780_rev3.patch

Thanks Tom, here is an updated patch that gets rid of a couple of javac 
warnings that test-patch revealed. 

The test-patch results for the new patch:

{code}

-1 overall.  

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version ) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

{code}

> Update DeprecatedProperties apt file
> 
>
> Key: HADOOP-8780
> URL: https://issues.apache.org/jira/browse/HADOOP-8780
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ahmed Radwan
>Assignee: Ahmed Radwan
> Attachments: HADOOP-8780.patch, HADOOP-8780_rev2.patch, 
> HADOOP-8780_rev3.patch
>
>
> The current list of deprecated properties is not up-to-date. I will upload 
> a patch momentarily.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8755) Print thread dump when tests fail due to timeout

2012-09-12 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454158#comment-13454158
 ] 

Aaron T. Myers commented on HADOOP-8755:


Hi Andrey, do you think you'll have a chance to post another rev of this patch 
soon? I'm looking forward to getting this change checked in.

> Print thread dump when tests fail due to timeout 
> -
>
> Key: HADOOP-8755
> URL: https://issues.apache.org/jira/browse/HADOOP-8755
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 1.0.3, 0.23.1, 2.0.0-alpha
>Reporter: Andrey Klochkov
>Assignee: Andrey Klochkov
> Attachments: HDFS-3762-branch-0.23.patch, HDFS-3762.patch, 
> HDFS-3762.patch, HDFS-3762.patch, HDFS-3762.patch, HDFS-3762.patch
>
>
> When a test fails due to timeout it's often not clear what is the root cause. 
> See HDFS-3364 as an example.
> We can print dump of all threads in this case, this may help finding causes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-7827) jsp pages missing DOCTYPE

2012-09-12 Thread Ivan Mitic (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Mitic updated HADOOP-7827:
---

Attachment: HADOOP-6496.branch-1.1.backport.patch

Attaching a patch that backports HADOOP-6496 to branch-1.1.

> jsp pages missing DOCTYPE
> -
>
> Key: HADOOP-7827
> URL: https://issues.apache.org/jira/browse/HADOOP-7827
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.203.0
>Reporter: Dave Vronay
>Assignee: Dave Vronay
>Priority: Trivial
> Attachments: HADOOP-6496.branch-1.1.backport.patch, 
> HADOOP-7827-branch-0.20-security.patch, HADOOP-7827.patch, HADOOP-7827.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> The various jsp pages in the UI are all missing a DOCTYPE declaration.  This 
> causes the pages to render incorrectly on some browsers, such as IE9.  Every 
> UI page should have a valid tag, such as <!DOCTYPE html>, as its first 
> line.  There are 31 files that need to be changed, all in the 
> core\src\webapps tree.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-7827) jsp pages missing DOCTYPE

2012-09-12 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454146#comment-13454146
 ] 

Ivan Mitic commented on HADOOP-7827:


Hi folks,

I spent some time investigating the IE9 render issues on branch-1 (and the 
regression mentioned in HADOOP-7867). If I see things correctly, the cause of 
the regression is the browser failing to load the css file. Now that we 
changed the browser mode to HTML5, it fails to load the css file if the 
content type returned by the server does not match the expected value (in 
this case text/css).

The problem seems to be in HttpServer.java, where 
{{QuotingInputFilter#doFilter}} always sets the content type to {{text/html}}. 
This was fixed with HADOOP-6496, so the missing piece is to backport that 
changelist. I will attach a patch for the backport.
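
Schematically, the fix boils down to deriving the content type instead of 
hardcoding it (a simplified, assumed shape, not the literal HADOOP-6496 diff):

{code}
// Inside the filter's doFilter(): infer the MIME type of the requested
// resource instead of forcing text/html on every response. 'config' is
// the FilterConfig captured in init().
String uri = ((HttpServletRequest) request).getRequestURI();
String mime = config.getServletContext().getMimeType(uri);
response.setContentType(mime == null ? "text/html; charset=utf-8" : mime);
chain.doFilter(request, response);
{code}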

Let me know if I should also rebase the changes to the latest 
trunk/branch-1.1; I'll be happy to do that.

Hope this helps

> jsp pages missing DOCTYPE
> -
>
> Key: HADOOP-7827
> URL: https://issues.apache.org/jira/browse/HADOOP-7827
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.203.0
>Reporter: Dave Vronay
>Assignee: Dave Vronay
>Priority: Trivial
> Attachments: HADOOP-7827-branch-0.20-security.patch, 
> HADOOP-7827.patch, HADOOP-7827.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> The various jsp pages in the UI are all missing a DOCTYPE declaration.  This 
> causes the pages to render incorrectly on some browsers, such as IE9.  Every 
> UI page should have a valid tag, such as <!DOCTYPE html>, as its first 
> line.  There are 31 files that need to be changed, all in the 
> core\src\webapps tree.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8791) rm "Only deletes non empty directory and files."

2012-09-12 Thread Bertrand Dechoux (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454109#comment-13454109
 ] 

Bertrand Dechoux commented on HADOOP-8791:
--

And "non empty directories" is actually "empty directories". This one is not 
ambiguous but wrong.

I might send a patch, but that won't be soon. I might have a go at it around 
the 22nd-23rd, but no promises.

> rm "Only deletes non empty directory and files."
> 
>
> Key: HADOOP-8791
> URL: https://issues.apache.org/jira/browse/HADOOP-8791
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 1.0.3
>Reporter: Bertrand Dechoux
>  Labels: documentation
>
> The documentation (1.0.3) is describing the opposite of what rm does.
> It should be "Only delete files and empty directories."
> With regard to files, the size of the file should not matter, should it?
> Or I am totally misunderstanding the semantics of this command, and I am not 
> the only one.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8791) rm "Only deletes non empty directory and files."

2012-09-12 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454082#comment-13454082
 ] 

Harsh J commented on HADOOP-8791:
-

The original doc in question is:

{quote}
Delete files specified as args. Only deletes non empty directory and files.
{quote}

While there's actually supposed to be a separation in reading this, i.e. "non 
empty directories" AND "files", you're right that it is ambiguous.

Wanna send across a patch for both branch-1 and trunk, Bertrand? :)

> rm "Only deletes non empty directory and files."
> 
>
> Key: HADOOP-8791
> URL: https://issues.apache.org/jira/browse/HADOOP-8791
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 1.0.3
>Reporter: Bertrand Dechoux
>  Labels: documentation
>
> The documentation (1.0.3) is describing the opposite of what rm does.
> It should be "Only delete files and empty directories."
> With regard to files, the size of the file should not matter, should it?
> Or I am totally misunderstanding the semantics of this command, and I am not 
> the only one.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8789) Tests setLevel(Level.OFF) should be Level.ERROR

2012-09-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454007#comment-13454007
 ] 

Hudson commented on HADOOP-8789:


Integrated in Hadoop-Mapreduce-trunk #1194 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1194/])
HADOOP-8789. Tests setLevel(Level.OFF) should be Level.ERROR. Contributed 
by Andy Isaacson (Revision 1383494)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1383494
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-tools/hadoop-archives/src/test/java/org/apache/hadoop/tools/TestHadoopArchives.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-extras/src/test/java/org/apache/hadoop/tools/TestCopyFiles.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-extras/src/test/java/org/apache/hadoop/tools/TestDistCh.java


> Tests setLevel(Level.OFF) should be Level.ERROR
> ---
>
> Key: HADOOP-8789
> URL: https://issues.apache.org/jira/browse/HADOOP-8789
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.0.1-alpha
>Reporter: Andy Isaacson
>Assignee: Andy Isaacson
>Priority: Minor
> Fix For: 2.0.3-alpha
>
> Attachments: hdfs-3911.txt
>
>
> Multiple tests have code like
> {code}
> ((Log4JLogger)LogFactory.getLog(FSNamesystem.class)).getLogger().setLevel(Level.OFF);
> {code}
> Completely disabling logs from given classes with {{Level.OFF}} is a bad idea 
> and makes debugging other test failures, especially intermittent test 
> failures like HDFS-3664, difficult.  Instead the code should use 
> {{Level.ERROR}} to reduce verbosity.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8767) secondary namenode on slave machines

2012-09-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454009#comment-13454009
 ] 

Hudson commented on HADOOP-8767:


Integrated in Hadoop-Mapreduce-trunk #1194 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1194/])
HADOOP-8767. Secondary namenode is started on slave nodes instead of master 
nodes. Contributed by Giovanni Delussu. (Revision 1383560)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1383560
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/slaves.sh


> secondary namenode on slave machines
> 
>
> Key: HADOOP-8767
> URL: https://issues.apache.org/jira/browse/HADOOP-8767
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
>Affects Versions: 1.0.3
>Reporter: giovanni delussu
>Assignee: giovanni delussu
>Priority: Minor
> Fix For: 1.2.0, 3.0.0
>
> Attachments: patch_hadoop-config.sh_hadoop-1.0.3_fromtar.patch, 
> patch_hadoop-config.sh_slaves.sh_branch1_no_prefix.patch, 
> patch_hadoop-config.sh_slaves.sh_branch1.patch, 
> patch_hadoop-config.sh_slaves.sh_trunk_no_prefix.patch, 
> patch_hadoop-config.sh_slaves.sh_trunk.patch, 
> patch_slaves.sh_hadoop-1.0.3_fromtar.patch
>
>
> When the default value for HADOOP_SLAVES is changed in hadoop-env.sh, 
> starting HDFS (with start-dfs.sh) creates secondary namenodes on all the 
> machines listed in conf/slaves instead of conf/masters.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8597) FsShell's Text command should be able to read avro data files

2012-09-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454004#comment-13454004
 ] 

Hudson commented on HADOOP-8597:


Integrated in Hadoop-Mapreduce-trunk #1194 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1194/])
HADOOP-8597. Permit FsShell's text command to read Avro files.  Contributed 
by Ivan Vladimirov. (Revision 1383607)

 Result = SUCCESS
cutting : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1383607
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Display.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestTextCommand.java


> FsShell's Text command should be able to read avro data files
> -
>
> Key: HADOOP-8597
> URL: https://issues.apache.org/jira/browse/HADOOP-8597
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.0.0-alpha
>Reporter: Harsh J
>Assignee: Ivan Vladimirov Ivanov
>  Labels: newbie
> Fix For: 2.0.3-alpha
>
> Attachments: HADOOP-8597-2.patch, HADOOP-8597.patch, 
> HADOOP-8597.patch, HADOOP-8597.patch
>
>
> Similar to SequenceFiles are Apache Avro's DataFiles. Since these are getting 
> popular as a data format, perhaps it would be useful if {{fs -text}} were to 
> add some support for reading them, like it reads SequenceFiles. Should be 
> easy since Avro is already a dependency and provides the required classes.
> Open for discussion is the output we ought to emit. Avro DataFiles aren't as 
> simple as text, nor do they have the singular key-value pair structure of 
> SequenceFiles. They usually contain a set of fields defined as a record, and 
> the usual text output, as available from avro-tools via 
> http://avro.apache.org/docs/current/api/java/org/apache/avro/tool/DataFileReadTool.html,
>  is in proper JSON format.
> I think we should use the JSON format as the output, rather than a delimited 
> form, for there are many complex structures in Avro and JSON is the easiest 
> and least-work-to-do way to display it (Avro supports json dumping by itself).
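
For context, reading an Avro data file and printing each record as JSON is 
roughly this much code (a sketch using the standard Avro generic API; the 
method shape is made up):

{code}
import java.io.File;
import java.io.IOException;
import org.apache.avro.file.DataFileReader;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;

static void dumpAsJson(File avroFile) throws IOException {
  DataFileReader<GenericRecord> reader = new DataFileReader<GenericRecord>(
      avroFile, new GenericDatumReader<GenericRecord>());
  try {
    for (GenericRecord record : reader) {
      System.out.println(record);   // GenericRecord.toString() renders JSON
    }
  } finally {
    reader.close();
  }
}
{code}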

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8767) secondary namenode on slave machines

2012-09-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453962#comment-13453962
 ] 

Hudson commented on HADOOP-8767:


Integrated in Hadoop-Hdfs-trunk #1163 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1163/])
HADOOP-8767. Secondary namenode is started on slave nodes instead of master 
nodes. Contributed by Giovanni Delussu. (Revision 1383560)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1383560
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/slaves.sh


> secondary namenode on slave machines
> 
>
> Key: HADOOP-8767
> URL: https://issues.apache.org/jira/browse/HADOOP-8767
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
>Affects Versions: 1.0.3
>Reporter: giovanni delussu
>Assignee: giovanni delussu
>Priority: Minor
> Fix For: 1.2.0, 3.0.0
>
> Attachments: patch_hadoop-config.sh_hadoop-1.0.3_fromtar.patch, 
> patch_hadoop-config.sh_slaves.sh_branch1_no_prefix.patch, 
> patch_hadoop-config.sh_slaves.sh_branch1.patch, 
> patch_hadoop-config.sh_slaves.sh_trunk_no_prefix.patch, 
> patch_hadoop-config.sh_slaves.sh_trunk.patch, 
> patch_slaves.sh_hadoop-1.0.3_fromtar.patch
>
>
> When the default value for HADOOP_SLAVES is changed in hadoop-env.sh, 
> starting HDFS (with start-dfs.sh) creates secondary namenodes on all the 
> machines listed in conf/slaves instead of conf/masters.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8789) Tests setLevel(Level.OFF) should be Level.ERROR

2012-09-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453960#comment-13453960
 ] 

Hudson commented on HADOOP-8789:


Integrated in Hadoop-Hdfs-trunk #1163 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1163/])
HADOOP-8789. Tests setLevel(Level.OFF) should be Level.ERROR. Contributed 
by Andy Isaacson (Revision 1383494)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1383494
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-tools/hadoop-archives/src/test/java/org/apache/hadoop/tools/TestHadoopArchives.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-extras/src/test/java/org/apache/hadoop/tools/TestCopyFiles.java
* 
/hadoop/common/trunk/hadoop-tools/hadoop-extras/src/test/java/org/apache/hadoop/tools/TestDistCh.java


> Tests setLevel(Level.OFF) should be Level.ERROR
> ---
>
> Key: HADOOP-8789
> URL: https://issues.apache.org/jira/browse/HADOOP-8789
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.0.1-alpha
>Reporter: Andy Isaacson
>Assignee: Andy Isaacson
>Priority: Minor
> Fix For: 2.0.3-alpha
>
> Attachments: hdfs-3911.txt
>
>
> Multiple tests have code like
> {code}
> ((Log4JLogger)LogFactory.getLog(FSNamesystem.class)).getLogger().setLevel(Level.OFF);
> {code}
> Completely disabling logs from given classes with {{Level.OFF}} is a bad idea 
> and makes debugging other test failures, especially intermittent test 
> failures like HDFS-3664, difficult.  Instead the code should use 
> {{Level.ERROR}} to reduce verbosity.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8597) FsShell's Text command should be able to read avro data files

2012-09-12 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453957#comment-13453957
 ] 

Hudson commented on HADOOP-8597:


Integrated in Hadoop-Hdfs-trunk #1163 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1163/])
HADOOP-8597. Permit FsShell's text command to read Avro files.  Contributed 
by Ivan Vladimirov. (Revision 1383607)

 Result = FAILURE
cutting : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1383607
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Display.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/TestTextCommand.java


> FsShell's Text command should be able to read avro data files
> -
>
> Key: HADOOP-8597
> URL: https://issues.apache.org/jira/browse/HADOOP-8597
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.0.0-alpha
>Reporter: Harsh J
>Assignee: Ivan Vladimirov Ivanov
>  Labels: newbie
> Fix For: 2.0.3-alpha
>
> Attachments: HADOOP-8597-2.patch, HADOOP-8597.patch, 
> HADOOP-8597.patch, HADOOP-8597.patch
>
>
> Similar to SequenceFiles are Apache Avro's DataFiles. Since these are getting 
> popular as a data format, perhaps it would be useful if {{fs -text}} were to 
> add some support for reading them, like it reads SequenceFiles. Should be 
> easy since Avro is already a dependency and provides the required classes.
> Open for discussion is the output we ought to emit. Avro DataFiles aren't as 
> simple as text, nor do they have the singular key-value pair structure of 
> SequenceFiles. They usually contain a set of fields defined as a record, and 
> the usual text output, as available from avro-tools via 
> http://avro.apache.org/docs/current/api/java/org/apache/avro/tool/DataFileReadTool.html,
>  is in proper JSON format.
> I think we should use the JSON format as the output, rather than a delimited 
> form, for there are many complex structures in Avro and JSON is the easiest 
> and least-work-to-do way to display it (Avro supports json dumping by itself).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8780) Update DeprecatedProperties apt file

2012-09-12 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453935#comment-13453935
 ] 

Tom White commented on HADOOP-8780:
---

The Jenkins job is timing out when running the HDFS tests - 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1436/console. The HDFS 
pre-commit jobs are passing fine though - e.g. 
https://builds.apache.org/job/PreCommit-HDFS-Build/3180/console. I'm not sure 
why.

Ahmed, can you run test-patch and post the results here please?

> Update DeprecatedProperties apt file
> 
>
> Key: HADOOP-8780
> URL: https://issues.apache.org/jira/browse/HADOOP-8780
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ahmed Radwan
>Assignee: Ahmed Radwan
> Attachments: HADOOP-8780.patch, HADOOP-8780_rev2.patch
>
>
> The current list of deprecated properties is not up-to-date. I will upload 
> a patch momentarily.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8790) testTrashEmptier() fails when run TestHDFSTrash

2012-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453903#comment-13453903
 ] 

Hadoop QA commented on HADOOP-8790:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12544798/HADOOP-8790.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 1 new or modified test 
files.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestZKFailoverController

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1441//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1441//console

This message is automatically generated.

> testTrashEmptier() fails when run TestHDFSTrash
> ---
>
> Key: HADOOP-8790
> URL: https://issues.apache.org/jira/browse/HADOOP-8790
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.1-alpha
>Reporter: Han Xiao
> Fix For: 2.0.1-alpha
>
> Attachments: HADOOP-8790.patch
>
>
> In our test environment, TestHDFSTrash.testTrashEmptier fails occasionally:
> Stdout is:
> 2012-09-12 01:09:23,732 WARN  conf.Configuration 
> (Configuration.java:warnOnceIfDeprecated(737)) - fs.default.name is 
> deprecated. Instead, use fs.defaultFS
> Moved: 
> 'file:/home/hadoop/jenkins/jenkins_home/jobs/hadoop-hdfs-test/workspace/hadoop-hdfs/target/test/data/testTrash/test/mkdirs/myFile0'
>  to trash at: file:/home/hadoop/.Trash/Current
> Stacktrace is:
> junit.framework.AssertionFailedError: null
>   at junit.framework.Assert.fail(Assert.java:47)
>   at junit.framework.Assert.assertTrue(Assert.java:20)
>   at junit.framework.Assert.assertTrue(Assert.java:27)
>   at org.apache.hadoop.fs.TestTrash.testTrashEmptier(TestTrash.java:533)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at junit.framework.TestCase.runTest(TestCase.java:168)
>   at junit.framework.TestCase.runBare(TestCase.java:134)
>   at junit.framework.TestResult$1.protect(TestResult.java:110)
>   at junit.framework.TestResult.runProtected(TestResult.java:128)
>   at junit.framework.TestResult.run(TestResult.java:113)
>   at junit.framework.TestCase.run(TestCase.java:124)
>   at junit.framework.TestSuite.runTest(TestSuite.java:243)
>   at junit.framework.TestSuite.run(TestSuite.java:238)
>   at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
>   at junit.extensions.TestSetup$1.protect(TestSetup.java:23)
>   at junit.framework.TestResult.runProtected(TestResult.java:128)
>   at junit.extensions.TestSetup.run(TestSetup.java:27)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:236)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:134)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:113)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:103)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:74)
> So it seems that at the beginning, the condition 'checkpoints.size() == 4' 
> is already true, and the testcase then fails right away.
> If the Trash directory looks like below, the testcase fails every time.
> hadoop@HADOOP-CI-AGENT-A:~/.Trash> l
> total 24
> drwxr-xr-x  6 hadoop users 4096 Sep 12 17:26 ./
> drwx------ 21 hadoop users 4096 Sep 12 17:25 ../
> drwxr-xr-x  3 hadoop users 4096 Sep 12 17:26 120912170042/
> drwxr-xr-x  3 hadoop users 4096 Sep 12 17:00 120912170048/
> drwxr-xr-x  3 hadoop users 4096 Sep 12 17:00 120912170054/
> drwxr-xr-x  3 hadoop users 4096 Sep 12 17:00 Current/
> So the testcase must be modified to avoid failing under such a precondition.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8790) testTrashEmptier() fails when run TestHDFSTrash

2012-09-12 Thread Han Xiao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Han Xiao updated HADOOP-8790:
-

Attachment: (was: HDFS-3266-1.patch)

> testTrashEmptier() fails when run TestHDFSTrash
> ---
>
> Key: HADOOP-8790
> URL: https://issues.apache.org/jira/browse/HADOOP-8790
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.1-alpha
>Reporter: Han Xiao
> Fix For: 2.0.1-alpha
>
> Attachments: HADOOP-8790.patch
>
>
> In our test environment, TestHDFSTrash.testTrashEmptier fails occasionally:
> Stdout is:
> 2012-09-12 01:09:23,732 WARN  conf.Configuration 
> (Configuration.java:warnOnceIfDeprecated(737)) - fs.default.name is 
> deprecated. Instead, use fs.defaultFS
> Moved: 
> 'file:/home/hadoop/jenkins/jenkins_home/jobs/hadoop-hdfs-test/workspace/hadoop-hdfs/target/test/data/testTrash/test/mkdirs/myFile0'
>  to trash at: file:/home/hadoop/.Trash/Current
> Stacktrace is:
> junit.framework.AssertionFailedError: null
>   at junit.framework.Assert.fail(Assert.java:47)
>   at junit.framework.Assert.assertTrue(Assert.java:20)
>   at junit.framework.Assert.assertTrue(Assert.java:27)
>   at org.apache.hadoop.fs.TestTrash.testTrashEmptier(TestTrash.java:533)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at junit.framework.TestCase.runTest(TestCase.java:168)
>   at junit.framework.TestCase.runBare(TestCase.java:134)
>   at junit.framework.TestResult$1.protect(TestResult.java:110)
>   at junit.framework.TestResult.runProtected(TestResult.java:128)
>   at junit.framework.TestResult.run(TestResult.java:113)
>   at junit.framework.TestCase.run(TestCase.java:124)
>   at junit.framework.TestSuite.runTest(TestSuite.java:243)
>   at junit.framework.TestSuite.run(TestSuite.java:238)
>   at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
>   at junit.extensions.TestSetup$1.protect(TestSetup.java:23)
>   at junit.framework.TestResult.runProtected(TestResult.java:128)
>   at junit.extensions.TestSetup.run(TestSetup.java:27)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:236)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:134)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:113)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:103)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:74)
>   
> So it seems that the condition 'checkpoints.size() == 4' is already true at 
> the beginning, and the testcase fails right away.
> If the Trash directory looks like the one below, the testcase fails every time.
> hadoop@HADOOP-CI-AGENT-A:~/.Trash> l
> total 24
> drwxr-xr-x  6 hadoop users 4096 Sep 12 17:26 ./
> drwx------ 21 hadoop users 4096 Sep 12 17:25 ../
> drwxr-xr-x  3 hadoop users 4096 Sep 12 17:26 120912170042/
> drwxr-xr-x  3 hadoop users 4096 Sep 12 17:00 120912170048/
> drwxr-xr-x  3 hadoop users 4096 Sep 12 17:00 120912170054/
> drwxr-xr-x  3 hadoop users 4096 Sep 12 17:00 Current/
> So the testcase must be modified to avoid failing under such a precondition.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8790) testTrashEmptier() fails when run TestHDFSTrash

2012-09-12 Thread Han Xiao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Han Xiao updated HADOOP-8790:
-

Attachment: HADOOP-8790.patch

Sorry, I uploaded the wrong patch earlier.

> testTrashEmptier() fails when run TestHDFSTrash
> ---
>
> Key: HADOOP-8790
> URL: https://issues.apache.org/jira/browse/HADOOP-8790
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.1-alpha
>Reporter: Han Xiao
> Fix For: 2.0.1-alpha
>
> Attachments: HADOOP-8790.patch
>
>
> In our test environment, TestHDFSTrash.testTrashEmptier fails occasionally:
> Standard output is:
> 2012-09-12 01:09:23,732 WARN  conf.Configuration 
> (Configuration.java:warnOnceIfDeprecated(737)) - fs.default.name is 
> deprecated. Instead, use fs.defaultFS
> Moved: 
> 'file:/home/hadoop/jenkins/jenkins_home/jobs/hadoop-hdfs-test/workspace/hadoop-hdfs/target/test/data/testTrash/test/mkdirs/myFile0'
>  to trash at: file:/home/hadoop/.Trash/Current
> Stacktrace is:
> junit.framework.AssertionFailedError: null
>   at junit.framework.Assert.fail(Assert.java:47)
>   at junit.framework.Assert.assertTrue(Assert.java:20)
>   at junit.framework.Assert.assertTrue(Assert.java:27)
>   at org.apache.hadoop.fs.TestTrash.testTrashEmptier(TestTrash.java:533)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at junit.framework.TestCase.runTest(TestCase.java:168)
>   at junit.framework.TestCase.runBare(TestCase.java:134)
>   at junit.framework.TestResult$1.protect(TestResult.java:110)
>   at junit.framework.TestResult.runProtected(TestResult.java:128)
>   at junit.framework.TestResult.run(TestResult.java:113)
>   at junit.framework.TestCase.run(TestCase.java:124)
>   at junit.framework.TestSuite.runTest(TestSuite.java:243)
>   at junit.framework.TestSuite.run(TestSuite.java:238)
>   at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
>   at junit.extensions.TestSetup$1.protect(TestSetup.java:23)
>   at junit.framework.TestResult.runProtected(TestResult.java:128)
>   at junit.extensions.TestSetup.run(TestSetup.java:27)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:236)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:134)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:113)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:103)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:74)
>   
> So it seems that the condition 'checkpoints.size() == 4' is already true at 
> the beginning, and the testcase fails right away.
> If the Trash directory looks like the one below, the testcase fails every time.
> hadoop@HADOOP-CI-AGENT-A:~/.Trash> l
> total 24
> drwxr-xr-x  6 hadoop users 4096 Sep 12 17:26 ./
> drwx------ 21 hadoop users 4096 Sep 12 17:25 ../
> drwxr-xr-x  3 hadoop users 4096 Sep 12 17:26 120912170042/
> drwxr-xr-x  3 hadoop users 4096 Sep 12 17:00 120912170048/
> drwxr-xr-x  3 hadoop users 4096 Sep 12 17:00 120912170054/
> drwxr-xr-x  3 hadoop users 4096 Sep 12 17:00 Current/
> So the testcase must be modified to avoid failing under such a precondition.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8790) testTrashEmptier() fails when run TestHDFSTrash

2012-09-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453893#comment-13453893
 ] 

Hadoop QA commented on HADOOP-8790:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12544796/HDFS-3266-1.patch
  against trunk revision .

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1440//console

This message is automatically generated.

> testTrashEmptier() fails when run TestHDFSTrash
> ---
>
> Key: HADOOP-8790
> URL: https://issues.apache.org/jira/browse/HADOOP-8790
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.1-alpha
>Reporter: Han Xiao
> Fix For: 2.0.1-alpha
>
> Attachments: HDFS-3266-1.patch
>
>
> In our test environment, TestHDFSTrash.testTrashEmptier fails occasionally:
> Standard output is:
> 2012-09-12 01:09:23,732 WARN  conf.Configuration 
> (Configuration.java:warnOnceIfDeprecated(737)) - fs.default.name is 
> deprecated. Instead, use fs.defaultFS
> Moved: 
> 'file:/home/hadoop/jenkins/jenkins_home/jobs/hadoop-hdfs-test/workspace/hadoop-hdfs/target/test/data/testTrash/test/mkdirs/myFile0'
>  to trash at: file:/home/hadoop/.Trash/Current
> Stacktrace is:
> junit.framework.AssertionFailedError: null
>   at junit.framework.Assert.fail(Assert.java:47)
>   at junit.framework.Assert.assertTrue(Assert.java:20)
>   at junit.framework.Assert.assertTrue(Assert.java:27)
>   at org.apache.hadoop.fs.TestTrash.testTrashEmptier(TestTrash.java:533)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at junit.framework.TestCase.runTest(TestCase.java:168)
>   at junit.framework.TestCase.runBare(TestCase.java:134)
>   at junit.framework.TestResult$1.protect(TestResult.java:110)
>   at junit.framework.TestResult.runProtected(TestResult.java:128)
>   at junit.framework.TestResult.run(TestResult.java:113)
>   at junit.framework.TestCase.run(TestCase.java:124)
>   at junit.framework.TestSuite.runTest(TestSuite.java:243)
>   at junit.framework.TestSuite.run(TestSuite.java:238)
>   at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
>   at junit.extensions.TestSetup$1.protect(TestSetup.java:23)
>   at junit.framework.TestResult.runProtected(TestResult.java:128)
>   at junit.extensions.TestSetup.run(TestSetup.java:27)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:236)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:134)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:113)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:103)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:74)
>   
> So it seems that the condition 'checkpoints.size() == 4' is already true at 
> the beginning, and the testcase fails right away.
> If the Trash directory looks like the one below, the testcase fails every time.
> hadoop@HADOOP-CI-AGENT-A:~/.Trash> l
> total 24
> drwxr-xr-x  6 hadoop users 4096 Sep 12 17:26 ./
> drwx------ 21 hadoop users 4096 Sep 12 17:25 ../
> drwxr-xr-x  3 hadoop users 4096 Sep 12 17:26 120912170042/
> drwxr-xr-x  3 hadoop users 4096 Sep 12 17:00 120912170048/
> drwxr-xr-x  3 hadoop users 4096 Sep 12 17:00 120912170054/
> drwxr-xr-x  3 hadoop users 4096 Sep 12 17:00 Current/
> So the testcase must be modified to avoid failing under such a precondition.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Updated] (HADOOP-8790) testTrashEmptier() fails when run TestHDFSTrash

2012-09-12 Thread Han Xiao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Han Xiao updated HADOOP-8790:
-

Attachment: HDFS-3266-1.patch

> testTrashEmptier() fails when run TestHDFSTrash
> ---
>
> Key: HADOOP-8790
> URL: https://issues.apache.org/jira/browse/HADOOP-8790
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.1-alpha
>Reporter: Han Xiao
> Fix For: 2.0.1-alpha
>
> Attachments: HDFS-3266-1.patch
>
>
> In our test environment, TestHDFSTrash.testTrashEmptier fails occasionally:
> Standard output is:
> 2012-09-12 01:09:23,732 WARN  conf.Configuration 
> (Configuration.java:warnOnceIfDeprecated(737)) - fs.default.name is 
> deprecated. Instead, use fs.defaultFS
> Moved: 
> 'file:/home/hadoop/jenkins/jenkins_home/jobs/hadoop-hdfs-test/workspace/hadoop-hdfs/target/test/data/testTrash/test/mkdirs/myFile0'
>  to trash at: file:/home/hadoop/.Trash/Current
> Stacktrace is:
> junit.framework.AssertionFailedError: null
>   at junit.framework.Assert.fail(Assert.java:47)
>   at junit.framework.Assert.assertTrue(Assert.java:20)
>   at junit.framework.Assert.assertTrue(Assert.java:27)
>   at org.apache.hadoop.fs.TestTrash.testTrashEmptier(TestTrash.java:533)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at junit.framework.TestCase.runTest(TestCase.java:168)
>   at junit.framework.TestCase.runBare(TestCase.java:134)
>   at junit.framework.TestResult$1.protect(TestResult.java:110)
>   at junit.framework.TestResult.runProtected(TestResult.java:128)
>   at junit.framework.TestResult.run(TestResult.java:113)
>   at junit.framework.TestCase.run(TestCase.java:124)
>   at junit.framework.TestSuite.runTest(TestSuite.java:243)
>   at junit.framework.TestSuite.run(TestSuite.java:238)
>   at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
>   at junit.extensions.TestSetup$1.protect(TestSetup.java:23)
>   at junit.framework.TestResult.runProtected(TestResult.java:128)
>   at junit.extensions.TestSetup.run(TestSetup.java:27)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:236)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:134)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:113)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:103)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:74)
>   
> So it seems that the condition 'checkpoints.size() == 4' is already true at 
> the beginning, and the testcase fails right away.
> If the Trash directory looks like the one below, the testcase fails every time.
> hadoop@HADOOP-CI-AGENT-A:~/.Trash> l
> total 24
> drwxr-xr-x  6 hadoop users 4096 Sep 12 17:26 ./
> drwx------ 21 hadoop users 4096 Sep 12 17:25 ../
> drwxr-xr-x  3 hadoop users 4096 Sep 12 17:26 120912170042/
> drwxr-xr-x  3 hadoop users 4096 Sep 12 17:00 120912170048/
> drwxr-xr-x  3 hadoop users 4096 Sep 12 17:00 120912170054/
> drwxr-xr-x  3 hadoop users 4096 Sep 12 17:00 Current/
> So the testcase must be modified to avoid failing under such a precondition.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8790) testTrashEmptier() fails when run TestHDFSTrash

2012-09-12 Thread Han Xiao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Han Xiao updated HADOOP-8790:
-

Fix Version/s: 2.0.1-alpha
   Status: Patch Available  (was: Open)

Uploaded a patch; it works in our environment and protects the testcase from 
failing under such a precondition.

> testTrashEmptier() fails when run TestHDFSTrash
> ---
>
> Key: HADOOP-8790
> URL: https://issues.apache.org/jira/browse/HADOOP-8790
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.1-alpha
>Reporter: Han Xiao
> Fix For: 2.0.1-alpha
>
> Attachments: HDFS-3266-1.patch
>
>
> In our test environment, TestHDFSTrash.testTrashEmptier fails occasionally:
> Standard output is:
> 2012-09-12 01:09:23,732 WARN  conf.Configuration 
> (Configuration.java:warnOnceIfDeprecated(737)) - fs.default.name is 
> deprecated. Instead, use fs.defaultFS
> Moved: 
> 'file:/home/hadoop/jenkins/jenkins_home/jobs/hadoop-hdfs-test/workspace/hadoop-hdfs/target/test/data/testTrash/test/mkdirs/myFile0'
>  to trash at: file:/home/hadoop/.Trash/Current
> Stacktrace is:
> junit.framework.AssertionFailedError: null
>   at junit.framework.Assert.fail(Assert.java:47)
>   at junit.framework.Assert.assertTrue(Assert.java:20)
>   at junit.framework.Assert.assertTrue(Assert.java:27)
>   at org.apache.hadoop.fs.TestTrash.testTrashEmptier(TestTrash.java:533)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at junit.framework.TestCase.runTest(TestCase.java:168)
>   at junit.framework.TestCase.runBare(TestCase.java:134)
>   at junit.framework.TestResult$1.protect(TestResult.java:110)
>   at junit.framework.TestResult.runProtected(TestResult.java:128)
>   at junit.framework.TestResult.run(TestResult.java:113)
>   at junit.framework.TestCase.run(TestCase.java:124)
>   at junit.framework.TestSuite.runTest(TestSuite.java:243)
>   at junit.framework.TestSuite.run(TestSuite.java:238)
>   at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
>   at junit.extensions.TestSetup$1.protect(TestSetup.java:23)
>   at junit.framework.TestResult.runProtected(TestResult.java:128)
>   at junit.extensions.TestSetup.run(TestSetup.java:27)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:236)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:134)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:113)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:103)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:74)
>   
> So it seems that the condition 'checkpoints.size() == 4' is already true at 
> the beginning, and the testcase fails right away.
> If the Trash directory looks like the one below, the testcase fails every time.
> hadoop@HADOOP-CI-AGENT-A:~/.Trash> l
> total 24
> drwxr-xr-x  6 hadoop users 4096 Sep 12 17:26 ./
> drwx------ 21 hadoop users 4096 Sep 12 17:25 ../
> drwxr-xr-x  3 hadoop users 4096 Sep 12 17:26 120912170042/
> drwxr-xr-x  3 hadoop users 4096 Sep 12 17:00 120912170048/
> drwxr-xr-x  3 hadoop users 4096 Sep 12 17:00 120912170054/
> drwxr-xr-x  3 hadoop users 4096 Sep 12 17:00 Current/
> So the testcase must be modified to avoid failing under such a precondition.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-7682) taskTracker could not start because "Failed to set permissions" to "ttprivate to 0700"

2012-09-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HADOOP-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453878#comment-13453878
 ] 

丁冬超 commented on HADOOP-7682:
-

The same happens on Hadoop 1.0.1 with Win7 and Cygwin.
Since this is just for testing, I suggest an easy workaround:

1. Alter FileUtil:
mkdir $HADOOP_HOME/classes
then alter org.apache.hadoop.fs.FileUtil.checkReturnValue to remove the
exception; build, and copy FileUtil.class to $HADOOP_HOME/classes.

2. Alter the classpath:

Add this shell snippet to the $HADOOP_HOME/hadoop file:

if [ -d "$HADOOP_HOME/classes" ]; then
  CLASSPATH=${CLASSPATH}:$HADOOP_HOME/classes
fi

Make sure this entry comes before $HADOOP_HOME/hadoop-core-1.*.*.jar on the
classpath.

Restart, and it works.
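
For reference, a minimal sketch of the step-1 change (hypothetical: the class 
name and exact method signature here are illustrative and may differ from the 
branch-1 source; this is a test-only hack, not a fix):

{code}
import java.io.File;
import java.io.IOException;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.fs.permission.FsPermission;

// Hypothetical test-only variant of FileUtil.checkReturnValue: log the failed
// permission change instead of throwing, so the TaskTracker can start on
// Windows/Cygwin where java.io.File.setReadable(false) cannot succeed.
class RelaxedFileUtil {
  private static final Log LOG = LogFactory.getLog(RelaxedFileUtil.class);

  static void checkReturnValue(boolean rv, File p, FsPermission permission)
      throws IOException {
    if (!rv) {
      // the stock implementation throws an IOException here; never ship this
      LOG.warn("Failed to set permissions of path: " + p + " to "
          + String.format("%04o", permission.toShort()));
    }
  }
}
{code}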


> taskTracker could not start because "Failed to set permissions" to "ttprivate 
> to 0700"
> --
>
> Key: HADOOP-7682
> URL: https://issues.apache.org/jira/browse/HADOOP-7682
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 1.0.1
> Environment: OS:WindowsXP SP3 , Filesystem :NTFS, cygwin 1.7.9-1, 
> jdk1.6.0_05
>Reporter: Magic Xie
>
> ERROR org.apache.hadoop.mapred.TaskTracker:Can not start task tracker because 
> java.io.IOException:Failed to set permissions of 
> path:/tmp/hadoop-cyg_server/mapred/local/ttprivate to 0700
> at 
> org.apache.hadoop.fs.RawLocalFileSystem.checkReturnValue(RawLocalFileSystem.java:525)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:499)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:318)
> at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:183)
> at org.apache.hadoop.mapred.TaskTracker.initialize(TaskTracker.java:635)
> at org.apache.hadoop.mapred.TaskTracker.<init>(TaskTracker.java:1328)
> at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:3430)
> Since Hadoop 0.20.203, when the TaskTracker initializes it checks the 
> permissions (TaskTracker line 624) of 
> org.apache.hadoop.mapred.TaskTracker.TT_LOG_TMP_DIR and 
> org.apache.hadoop.mapred.TaskTracker.TT_PRIVATE_DIR. RawLocalFileSystem 
> (http://svn.apache.org/viewvc/hadoop/common/tags/release-0.20.203.0/src/core/org/apache/hadoop/fs/RawLocalFileSystem.java?view=markup)
> calls setPermission (line 481) to deal with it; setPermission works fine on 
> *nix, but it does not always work on Windows.
> setPermission calls setReadable of java.io.File at line 498, but according 
> to Table 1 on the Oracle page below, setReadable(false) will always return 
> false on Windows, the same as setExecutable(false).
> http://java.sun.com/developer/technicalArticles/J2SE/Desktop/javase6/enhancements/
> Is this the cause of the TaskTracker failing to "set permissions" of 
> "ttprivate to 0700"?
> Hadoop 0.20.202 works fine in the same environment. 
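
A small stand-alone probe (hypothetical, not part of the issue) that 
illustrates the claim; on Windows the (false) calls below are expected to 
print false:

{code}
import java.io.File;
import java.io.IOException;

// Probe the java.io.File permission setters that the permission check relies
// on. Per the Oracle article above, the (false) variants cannot be honoured
// on Windows and return false, which is what trips the check.
public class PermissionProbe {
  public static void main(String[] args) throws IOException {
    File f = File.createTempFile("perm-probe", ".tmp");
    System.out.println("setReadable(false)   -> " + f.setReadable(false, false));
    System.out.println("setWritable(false)   -> " + f.setWritable(false, false));
    System.out.println("setExecutable(false) -> " + f.setExecutable(false, false));
    f.delete();
  }
}
{code}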

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8792) hadoop-daemon doesn't handle chown failures

2012-09-12 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13453873#comment-13453873
 ] 

Steve Loughran commented on HADOOP-8792:


Whirr log. It's hadoop-daemon that is failing; the failure to chown propagates 
into a log access problem, and neither failure is picked up and reported back:

{code}
+ echo 'Starting hadoop-jobtracker'
+ service hadoop-jobtracker start
chown: invalid user: `hadoop'
/usr/lib/hadoop/bin/hadoop-daemon.sh: line 135: 
/var/log/hadoop/logs/hadoop-hadoop-jobtracker-nn1.out: Permission denied
head: cannot open `/var/log/hadoop/logs/hadoop-hadoop-jobtracker-nn1.out' for 
reading: No such file or directory
+ retval=0
+ (( 0 == 0 ))
+ echo 'Service hadoop-jobtracker is started'
+ CONFIGURE_HADOOP_DONE=1

{code}
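
A sketch of the kind of guard hadoop-daemon.sh could add (hypothetical, not 
an actual patch; the variable names follow the script's conventions but are 
assumptions here):

{code}
# Hypothetical guard: abort instead of silently carrying on when the log
# directory cannot be handed over to the service user.
# $HADOOP_IDENT_STRING and $HADOOP_LOG_DIR are assumed to be set as usual.
if ! chown "$HADOOP_IDENT_STRING" "$HADOOP_LOG_DIR"; then
  echo "hadoop-daemon.sh: chown of $HADOOP_LOG_DIR to $HADOOP_IDENT_STRING failed" >&2
  exit 1
fi
{code}

With such a check the wrapper exits non-zero, so "Service hadoop-jobtracker is 
started" would no longer be echoed after a failed chown.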

> hadoop-daemon doesn't handle chown failures
> ---
>
> Key: HADOOP-8792
> URL: https://issues.apache.org/jira/browse/HADOOP-8792
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
>Affects Versions: 1.0.3
> Environment: Whirr deployment onto existing VM
>Reporter: Steve Loughran
>
> A Whirr deployment of the JT failed; it looks like the hadoop user wasn't 
> there. This didn't get picked up by Whirr (WHIRR-651) because the 
> hadoop-daemon script doesn't check the return value of its chown operation; 
> this should be converted into a failure.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8792) hadoop-daemon doesn't handle chown failures

2012-09-12 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-8792:
--

 Summary: hadoop-daemon doesn't handle chown failures
 Key: HADOOP-8792
 URL: https://issues.apache.org/jira/browse/HADOOP-8792
 Project: Hadoop Common
  Issue Type: Bug
  Components: bin
Affects Versions: 1.0.3
 Environment: Whirr deployment onto existing VM
Reporter: Steve Loughran


A Whirr deployment of the JT failed; it looks like the hadoop user wasn't 
there. This didn't get picked up by Whirr (WHIRR-651) because the hadoop-daemon 
script doesn't check the return value of its chown operation; this should be 
converted into a failure.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8791) rm "Only deletes non empty directory and files."

2012-09-12 Thread Bertrand Dechoux (JIRA)
Bertrand Dechoux created HADOOP-8791:


 Summary: rm "Only deletes non empty directory and files."
 Key: HADOOP-8791
 URL: https://issues.apache.org/jira/browse/HADOOP-8791
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 1.0.3
Reporter: Bertrand Dechoux


The documentation (1.0.3) describes the opposite of what rm does.
It should read: "Only deletes files and empty directories."

With regard to files, the size of the file should not matter, should it?

Or I am totally misunderstanding the semantics of this command, and I am not 
the only one.
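
As a sanity check of that reading, a short sketch against the FileSystem API 
(the paths and the default local filesystem are illustrative; this reflects my 
understanding of the 1.x non-recursive delete, not an excerpt from the docs):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Non-recursive delete, which is what "rm" performs: it removes files and
// empty directories, and refuses a non-empty directory regardless of size.
public class RmSemantics {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path dir = new Path("/tmp/rm-demo");
    fs.mkdirs(dir);
    System.out.println(fs.delete(dir, false)); // empty directory: true
    fs.mkdirs(new Path(dir, "child"));
    try {
      fs.delete(dir, false);                   // non-empty directory
    } catch (java.io.IOException e) {
      System.out.println("refused: " + e.getMessage());
    }
  }
}
{code}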

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8790) testTrashEmptier() fails when run TestHDFSTrash

2012-09-12 Thread Han Xiao (JIRA)
Han Xiao created HADOOP-8790:


 Summary: testTrashEmptier() fails when run TestHDFSTrash
 Key: HADOOP-8790
 URL: https://issues.apache.org/jira/browse/HADOOP-8790
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.1-alpha
Reporter: Han Xiao


In our test environment, TestHDFSTrash.testTrashEmptier fails occasionally:

Standard output is:
2012-09-12 01:09:23,732 WARN  conf.Configuration 
(Configuration.java:warnOnceIfDeprecated(737)) - fs.default.name is deprecated. 
Instead, use fs.defaultFS
Moved: 
'file:/home/hadoop/jenkins/jenkins_home/jobs/hadoop-hdfs-test/workspace/hadoop-hdfs/target/test/data/testTrash/test/mkdirs/myFile0'
 to trash at: file:/home/hadoop/.Trash/Current

Stacktrace is:
junit.framework.AssertionFailedError: null
at junit.framework.Assert.fail(Assert.java:47)
at junit.framework.Assert.assertTrue(Assert.java:20)
at junit.framework.Assert.assertTrue(Assert.java:27)
at org.apache.hadoop.fs.TestTrash.testTrashEmptier(TestTrash.java:533)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at junit.framework.TestCase.runTest(TestCase.java:168)
at junit.framework.TestCase.runBare(TestCase.java:134)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)
at junit.framework.TestCase.run(TestCase.java:124)
at junit.framework.TestSuite.runTest(TestSuite.java:243)
at junit.framework.TestSuite.run(TestSuite.java:238)
at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
at junit.extensions.TestSetup$1.protect(TestSetup.java:23)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.extensions.TestSetup.run(TestSetup.java:27)
at 
org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:236)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:134)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:113)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
at 
org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
at 
org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:103)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:74)

So it seems that the condition 'checkpoints.size() == 4' is already true at the 
beginning, and the testcase fails right away.
If the Trash directory looks like the one below, the testcase fails every time.

hadoop@HADOOP-CI-AGENT-A:~/.Trash> l
total 24
drwxr-xr-x  6 hadoop users 4096 Sep 12 17:26 ./
drwx------ 21 hadoop users 4096 Sep 12 17:25 ../
drwxr-xr-x  3 hadoop users 4096 Sep 12 17:26 120912170042/
drwxr-xr-x  3 hadoop users 4096 Sep 12 17:00 120912170048/
drwxr-xr-x  3 hadoop users 4096 Sep 12 17:00 120912170054/
drwxr-xr-x  3 hadoop users 4096 Sep 12 17:00 Current/

So the testcase must be modified to avoid failing under such a precondition.
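
For illustration, a minimal sketch of one way to make the check robust to 
pre-existing checkpoints (hypothetical; the class and method names are 
illustrative, not taken from any attached patch):

{code}
import java.io.File;

// Hypothetical helper for TestTrash: count the timestamped checkpoint
// directories under a local trash root, excluding the live "Current"
// directory, so the test can assert on growth relative to whatever was
// already there instead of expecting an absolute checkpoints.size() == 4.
public class TrashCheckpoints {
  public static int countCheckpoints(File trashRoot) {
    File[] entries = trashRoot.listFiles();
    if (entries == null) {
      return 0; // trash root does not exist yet
    }
    int n = 0;
    for (File e : entries) {
      if (e.isDirectory() && !e.getName().equals("Current")) {
        n++; // e.g. "120912170042" in the listing above
      }
    }
    return n;
  }
}
{code}

Recording the count before the emptier starts and asserting on the delta 
afterwards keeps stale ~/.Trash contents like the listing above from 
satisfying, or breaking, the assertion.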

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira