[jira] [Created] (HADOOP-9212) Potential deadlock in FileSystem.Cache/IPC/UGI

2013-01-15 Thread Tom White (JIRA)
Tom White created HADOOP-9212:
-

 Summary: Potential deadlock in FileSystem.Cache/IPC/UGI
 Key: HADOOP-9212
 URL: https://issues.apache.org/jira/browse/HADOOP-9212
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.2-alpha
Reporter: Tom White
Assignee: Tom White


jcarder found a cycle which could lead to a potential deadlock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9212) Potential deadlock in FileSystem.Cache/IPC/UGI

2013-01-15 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-9212:
--

Attachment: 1_jcarder_result_0.png

The scenario that leads to the potential deadlock:
* FS.Cache.closeAll(), which is holding the FS.Cache lock, calls DFS's close 
method, which calls close on the RPC proxy, which eventually calls 
ipc.Client.stop() and takes the lock on the Hashtable of connections
* ipc.Client.getConnection(), which is holding the lock on the Hashtable of 
connections, takes the lock on the UGI class during some UGI setup that 
triggers UGI.ensureInitialized()
* UGI.getCurrentUser(), which is holding the UGI class lock, calls 
getLoginUser(), which calls Credentials.readTokenStorageFile, which uses 
FileSystem, taking the lock on FileSystem.Cache
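
A minimal, abstract sketch of the three conflicting lock orders (the three 
lock objects are stand-ins for FileSystem.Cache, ipc.Client's Hashtable of 
connections, and the UGI class object; the real call chains are the ones 
listed above):

{noformat}
class DeadlockCycle {
  static final Object FS_CACHE = new Object();     // FileSystem.Cache stand-in
  static final Object CONNECTIONS = new Object();  // ipc.Client connection table stand-in
  static final Object UGI_CLASS = new Object();    // UGI class object stand-in

  void closeAll() {                  // path 1: FS.Cache.closeAll()
    synchronized (FS_CACHE) {
      synchronized (CONNECTIONS) {}  // via DFS close -> RPC proxy -> ipc.Client.stop()
    }
  }

  void getConnection() {             // path 2: ipc.Client.getConnection()
    synchronized (CONNECTIONS) {
      synchronized (UGI_CLASS) {}    // via UGI setup -> UGI.ensureInitialized()
    }
  }

  void getCurrentUser() {            // path 3: UGI.getCurrentUser()
    synchronized (UGI_CLASS) {
      synchronized (FS_CACHE) {}     // via Credentials.readTokenStorageFile -> FileSystem
    }
  }
}
{noformat}

If three threads run these paths concurrently, each can hold its first lock 
while waiting for the second, and none can proceed.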


> Potential deadlock in FileSystem.Cache/IPC/UGI
> --
>
> Key: HADOOP-9212
> URL: https://issues.apache.org/jira/browse/HADOOP-9212
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.2-alpha
>Reporter: Tom White
>Assignee: Tom White
> Attachments: 1_jcarder_result_0.png
>
>
> jcarder found a cycle which could lead to a potential deadlock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9097) Maven RAT plugin is not checking all source files

2013-01-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553654#comment-13553654
 ] 

Hudson commented on HADOOP-9097:


Integrated in Hadoop-Yarn-trunk #97 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/97/])
HADOOP-9097. Maven RAT plugin is not checking all source files (tgraves) 
(Revision 1432934)

 Result = SUCCESS
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1432934
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/META-INF/services/org.apache.hadoop.security.SecurityInfo
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDelegationTokenRenewer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/resources/META-INF/services/org.apache.hadoop.security.token.TokenIdentifier
* /hadoop/common/trunk/hadoop-common-project/pom.xml
* /hadoop/common/trunk/hadoop-dist/pom.xml
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/resources/distcp-default.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/site/xdoc/appendix.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/site/xdoc/architecture.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/site/xdoc/cli.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/site/xdoc/index.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/site/xdoc/usage.xml
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/resources/sslConfig.xml
* 
/hadoop/common/trunk/hadoop-tools/hadoop-pipes/src/main/native/examples/conf/word-part.xml
* 
/hadoop/common/trunk/hadoop-tools/hadoop-pipes/src/main/native/examples/conf/word.xml
* 
/hadoop/common/trunk/hadoop-tools/hadoop-pipes/src/main/native/pipes/debug/pipes-default-gdb-commands.txt
* 
/hadoop/common/trunk/hadoop-tools/hadoop-pipes/src/main/native/pipes/debug/pipes-default-script
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/anonymization/WordList.java
* /hadoop/common/trunk/hadoop-tools/hadoop-tools-dist/pom.xml
* /hadoop/common/trunk/hadoop-tools/pom.xml
* /hadoop/common/trunk/pom.xml


> Maven RAT plugin is not checking all source files
> -
>
> Key: HADOOP-9097
> URL: https://issues.apache.org/jira/browse/HADOOP-9097
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.3-alpha, 0.23.5
>Reporter: Tom White
>Assignee: Thomas Graves
>Priority: Critical
> Fix For: 3.0.0, 2.0.3-alpha, 0.23.6
>
> Attachments: HADOOP-9097-branch-0.23-entire.patch, 
> HADOOP-9097-branch-0.23-entire.patch, HADOOP-9097-branch-0.23.patch, 
> HADOOP-9097-branch-0.23.patch, HADOOP-9097-entire.patch, 
> HADOOP-9097-entire.patch, HADOOP-9097.patch, HADOOP-9097.patch, 
> HADOOP-9097-remove-branch23.sh, HADOOP-9097-remove-branch2.sh, 
> HADOOP-9097-remove-entire.sh
>
>
> Running 'mvn apache-rat:check' passes, but running RAT by hand (by 
> downloading the JAR) produces some warnings for Java files, amongst others.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9203) RPCCallBenchmark should find a random available port

2013-01-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553656#comment-13553656
 ] 

Hudson commented on HADOOP-9203:


Integrated in Hadoop-Yarn-trunk #97 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/97/])
HADOOP-9203. RPCCallBenchmark should find a random available port. 
Contributed by Andrew Purtell. (Revision 1433220)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1433220
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/RPCCallBenchmark.java


> RPCCallBenchmark should find a random available port
> 
>
> Key: HADOOP-9203
> URL: https://issues.apache.org/jira/browse/HADOOP-9203
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc, test
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Andrew Purtell
>Priority: Trivial
> Fix For: 2.0.3-alpha
>
> Attachments: HADOOP-9203.patch, HADOOP-9203.patch
>
>
> RPCCallBenchmark insists on port 12345 by default. It should find a random 
> ephemeral range port instead if one isn't specified.
> {noformat}
> testBenchmarkWithProto(org.apache.hadoop.ipc.TestRPCCallBenchmark)  Time 
> elapsed: 5092 sec  <<< ERROR!
> java.net.BindException: Problem binding to [0.0.0.0:12345] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:710)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:361)
>   at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:459)
>   at org.apache.hadoop.ipc.Server.<init>(Server.java:1877)
>   at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:982)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:376)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:351)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:825)
>   at 
> org.apache.hadoop.ipc.RPCCallBenchmark.startServer(RPCCallBenchmark.java:230)
>   at org.apache.hadoop.ipc.RPCCallBenchmark.run(RPCCallBenchmark.java:264)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>   at 
> org.apache.hadoop.ipc.TestRPCCallBenchmark.testBenchmarkWithProto(TestRPCCallBenchmark.java:43)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9178) src/main/conf is missing hadoop-policy.xml

2013-01-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553660#comment-13553660
 ] 

Hudson commented on HADOOP-9178:


Integrated in Hadoop-Yarn-trunk #97 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/97/])
HADOOP-9178. src/main/conf is missing hadoop-policy.xml. Contributed by 
Sandy Ryza (Revision 1433275)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1433275
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/service_level_auth.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HDFSPolicyProvider.java


> src/main/conf is missing hadoop-policy.xml
> --
>
> Key: HADOOP-9178
> URL: https://issues.apache.org/jira/browse/HADOOP-9178
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.2-alpha
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
>Priority: Minor
> Fix For: 2.0.3-alpha
>
> Attachments: HADOOP-9178-1.patch, HADOOP-9178-1.patch, 
> HADOOP-9178-2.patch, HADOOP-9178.patch
>
>
> src/main/conf contains hadoop-env.sh and core-site.xml, but is missing 
> hadoop-policy.xml

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9202) test-patch.sh fails during mvn eclipse:eclipse if patch adds a new module to the build

2013-01-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553662#comment-13553662
 ] 

Hudson commented on HADOOP-9202:


Integrated in Hadoop-Yarn-trunk #97 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/97/])
HADOOP-9202. test-patch.sh fails during mvn eclipse:eclipse if patch adds a 
new module to the build (Chris Nauroth via bobby) (Revision 1432949)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1432949
Files : 
* /hadoop/common/trunk/dev-support/test-patch.sh
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


> test-patch.sh fails during mvn eclipse:eclipse if patch adds a new module to 
> the build
> --
>
> Key: HADOOP-9202
> URL: https://issues.apache.org/jira/browse/HADOOP-9202
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 3.0.0
>
> Attachments: HADOOP-9202.1.patch
>
>
> test-patch.sh tries running mvn eclipse:eclipse after applying the patch.  It 
> runs this before running mvn install.  The mvn eclipse:eclipse command 
> doesn't actually build the code, so if the patch in question is adding a 
> whole new module, then any other modules dependent on finding it in the 
> reactor will fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9212) Potential deadlock in FileSystem.Cache/IPC/UGI

2013-01-15 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-9212:
--

Status: Patch Available  (was: Open)

> Potential deadlock in FileSystem.Cache/IPC/UGI
> --
>
> Key: HADOOP-9212
> URL: https://issues.apache.org/jira/browse/HADOOP-9212
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.2-alpha
>Reporter: Tom White
>Assignee: Tom White
> Attachments: 1_jcarder_result_0.png, HADOOP-9212.patch
>
>
> jcarder found a cycle which could lead to a potential deadlock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9212) Potential deadlock in FileSystem.Cache/IPC/UGI

2013-01-15 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-9212:
--

Attachment: HADOOP-9212.patch

Here's a fix that breaks the cycle by using the Java File API in 
Credentials.readTokenStorageFile (overloading the method to take a File) so 
that the FileSystem.Cache lock need not be taken.
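
A hedged sketch of the shape of such an overload (the signature and details 
are assumptions, not copied from the attached patch): reading with java.io 
means no FileSystem instance, and hence no FileSystem.Cache lock, is involved.

{noformat}
import java.io.DataInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

// Illustrative overload: read the token storage file directly via java.io
// instead of going through FileSystem.get(), which locks FileSystem.Cache.
public static Credentials readTokenStorageFile(File f) throws IOException {
  DataInputStream in = null;
  try {
    in = new DataInputStream(new FileInputStream(f));
    Credentials credentials = new Credentials();
    credentials.readTokenStorageStream(in); // existing stream-based reader
    return credentials;
  } finally {
    if (in != null) {
      in.close();
    }
  }
}
{noformat}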

> Potential deadlock in FileSystem.Cache/IPC/UGI
> --
>
> Key: HADOOP-9212
> URL: https://issues.apache.org/jira/browse/HADOOP-9212
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.2-alpha
>Reporter: Tom White
>Assignee: Tom White
> Attachments: 1_jcarder_result_0.png, HADOOP-9212.patch
>
>
> jcarder found a cycle which could lead to a potential deadlock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9213) Create a Jenkins job to run jcarder

2013-01-15 Thread Tom White (JIRA)
Tom White created HADOOP-9213:
-

 Summary: Create a Jenkins job to run jcarder
 Key: HADOOP-9213
 URL: https://issues.apache.org/jira/browse/HADOOP-9213
 Project: Hadoop Common
  Issue Type: Task
  Components: build
Reporter: Tom White
Assignee: Tom White


It would be useful to have a nightly job to look for deadlocks in the Hadoop 
source code.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9212) Potential deadlock in FileSystem.Cache/IPC/UGI

2013-01-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553685#comment-13553685
 ] 

Hadoop QA commented on HADOOP-9212:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12564909/HADOOP-9212.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 2 
release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2045//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2045//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2045//console

This message is automatically generated.

> Potential deadlock in FileSystem.Cache/IPC/UGI
> --
>
> Key: HADOOP-9212
> URL: https://issues.apache.org/jira/browse/HADOOP-9212
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.2-alpha
>Reporter: Tom White
>Assignee: Tom White
> Attachments: 1_jcarder_result_0.png, HADOOP-9212.patch
>
>
> jcarder found a cycle which could lead to a potential deadlock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9205) Java7: path to native libraries should be passed to tests via -Djava.library.path rather than env.LD_LIBRARY_PATH

2013-01-15 Thread Ivan A. Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553707#comment-13553707
 ] 

Ivan A. Veselovsky commented on HADOOP-9205:


The shortest way to reproduce the issue:
Run the test org.apache.hadoop.util.TestNativeCodeLoader with 
-Drequire.test.libhadoop=true and the 
env variable 
LD_LIBRARY_PATH=.../hadoop-common/hadoop-common-project/hadoop-common/target/native/target/usr/local/lib
If you're running on Java 1.6, the test passes.
If you're running on Java 1.7, the test fails.
If we add the parameter
-Djava.library.path=.../hadoop-common/hadoop-common-project/hadoop-common/target/native/target/usr/local/lib
the test passes on both 1.6 and 1.7.

This is reproducible with Oracle's JDK jdk1.7.0_07 and jdk1.7.0_10, but is 
*not* reproducible with jdk1.7.0_05, so Kihwal's observation regarding 
1.7.0_05 is correct.
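
A small diagnostic that may help when comparing JDKs (illustrative, not part 
of any patch): System.loadLibrary resolves only via java.library.path, so 
print both settings and attempt the load.

{noformat}
// Run with the same JVM and flags as the failing test.
public class LibPathCheck {
  public static void main(String[] args) {
    System.out.println("java.library.path = "
        + System.getProperty("java.library.path"));
    System.out.println("LD_LIBRARY_PATH   = "
        + System.getenv("LD_LIBRARY_PATH"));
    // Throws UnsatisfiedLinkError if libhadoop is not on java.library.path.
    System.loadLibrary("hadoop");
    System.out.println("libhadoop loaded OK");
  }
}
{noformat}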

> Java7: path to native libraries should be passed to tests via 
> -Djava.library.path rather than env.LD_LIBRARY_PATH
> -
>
> Key: HADOOP-9205
> URL: https://issues.apache.org/jira/browse/HADOOP-9205
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
> Attachments: HADOOP-9205.patch
>
>
> Currently the path to native libraries is passed to unit tests via the 
> environment variable LD_LIBRARY_PATH. This is okay for Java6, but does not 
> work for Java7, since Java7 ignores this environment variable.
> So, to run the tests with the native implementation on Java7, one needs to 
> pass the paths to the native libs via the -Djava.library.path system property 
> rather than the LD_LIBRARY_PATH env variable.
> The suggested patch fixes the problem by setting the paths to the native libs 
> using both LD_LIBRARY_PATH and the -Djava.library.path property. This way the 
> tests work equally on both Java6 and Java7.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9181) Set daemon flag for HttpServer's QueuedThreadPool

2013-01-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553727#comment-13553727
 ] 

Hudson commented on HADOOP-9181:


Integrated in Hadoop-Hdfs-0.23-Build #495 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/495/])
HADOOP-9181. Set daemon flag for HttpServer's QueuedThreadPool (Liang Xie 
via tgraves) (Revision 1432972)

 Result = FAILURE
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1432972
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java
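
The change lands in HttpServer.java; a hedged sketch of the kind of fix the 
issue describes (Jetty 6's org.mortbay API as used by Hadoop at the time; the 
exact wiring in the patch may differ):

{noformat}
import org.mortbay.jetty.Server;
import org.mortbay.thread.QueuedThreadPool;

static void useDaemonPool(Server webServer) {
  QueuedThreadPool threadPool = new QueuedThreadPool();
  // Daemon workers no longer keep the JVM alive, so shutdown hooks
  // (e.g. HBase's) get a chance to run.
  threadPool.setDaemon(true);
  webServer.setThreadPool(threadPool);
}
{noformat}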


> Set daemon flag for HttpServer's QueuedThreadPool
> -
>
> Key: HADOOP-9181
> URL: https://issues.apache.org/jira/browse/HADOOP-9181
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.2-alpha
>Reporter: liang xie
>Assignee: liang xie
> Fix For: 2.0.3-alpha, 0.23.6
>
> Attachments: HADOOP-9181.txt
>
>
> We hit HBASE-6031 again. After looking into a thread dump, it was caused by 
> the threads from QueuedThreadPool being user threads, not daemon threads, so 
> the HBase shutdown hook was never called and the HBase instance hung.
> Furthermore, I saw the daemon flag being set in the fb-20 branch; let's set 
> it in the trunk codebase as well, it should be safe :)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9097) Maven RAT plugin is not checking all source files

2013-01-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553729#comment-13553729
 ] 

Hudson commented on HADOOP-9097:


Integrated in Hadoop-Hdfs-0.23-Build #495 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/495/])
HADOOP-9097. Maven RAT plugin is not checking all source files (tgraves) 
(Revision 1432947)

 Result = FAILURE
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1432947
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/pom.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/resources/META-INF/services/org.apache.hadoop.security.SecurityInfo
* /hadoop/common/branches/branch-0.23/hadoop-common-project/pom.xml
* /hadoop/common/branches/branch-0.23/hadoop-dist/pom.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-tools/hadoop-distcp/src/main/resources/distcp-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-tools/hadoop-distcp/src/site/xdoc/appendix.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-tools/hadoop-distcp/src/site/xdoc/architecture.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-tools/hadoop-distcp/src/site/xdoc/cli.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-tools/hadoop-distcp/src/site/xdoc/index.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-tools/hadoop-distcp/src/site/xdoc/usage.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-tools/hadoop-distcp/src/test/resources/sslConfig.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-tools/hadoop-pipes/src/main/native/examples/conf/word-part.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-tools/hadoop-pipes/src/main/native/examples/conf/word.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-tools/hadoop-pipes/src/main/native/pipes/debug/pipes-default-gdb-commands.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-tools/hadoop-pipes/src/main/native/pipes/debug/pipes-default-script
* 
/hadoop/common/branches/branch-0.23/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/anonymization/WordList.java
* /hadoop/common/branches/branch-0.23/hadoop-tools/hadoop-tools-dist/pom.xml
* /hadoop/common/branches/branch-0.23/hadoop-tools/pom.xml
* /hadoop/common/branches/branch-0.23/pom.xml


> Maven RAT plugin is not checking all source files
> -
>
> Key: HADOOP-9097
> URL: https://issues.apache.org/jira/browse/HADOOP-9097
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.3-alpha, 0.23.5
>Reporter: Tom White
>Assignee: Thomas Graves
>Priority: Critical
> Fix For: 3.0.0, 2.0.3-alpha, 0.23.6
>
> Attachments: HADOOP-9097-branch-0.23-entire.patch, 
> HADOOP-9097-branch-0.23-entire.patch, HADOOP-9097-branch-0.23.patch, 
> HADOOP-9097-branch-0.23.patch, HADOOP-9097-entire.patch, 
> HADOOP-9097-entire.patch, HADOOP-9097.patch, HADOOP-9097.patch, 
> HADOOP-9097-remove-branch23.sh, HADOOP-9097-remove-branch2.sh, 
> HADOOP-9097-remove-entire.sh
>
>
> Running 'mvn apache-rat:check' passes, but running RAT by hand (by 
> downloading the JAR) produces some warnings for Java files, amongst others.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9097) Maven RAT plugin is not checking all source files

2013-01-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553753#comment-13553753
 ] 

Hudson commented on HADOOP-9097:


Integrated in Hadoop-Hdfs-trunk #1286 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1286/])
HADOOP-9097. Maven RAT plugin is not checking all source files (tgraves) 
(Revision 1432934)

 Result = FAILURE
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1432934
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/META-INF/services/org.apache.hadoop.security.SecurityInfo
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDelegationTokenRenewer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/resources/META-INF/services/org.apache.hadoop.security.token.TokenIdentifier
* /hadoop/common/trunk/hadoop-common-project/pom.xml
* /hadoop/common/trunk/hadoop-dist/pom.xml
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/resources/distcp-default.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/site/xdoc/appendix.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/site/xdoc/architecture.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/site/xdoc/cli.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/site/xdoc/index.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/site/xdoc/usage.xml
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/resources/sslConfig.xml
* 
/hadoop/common/trunk/hadoop-tools/hadoop-pipes/src/main/native/examples/conf/word-part.xml
* 
/hadoop/common/trunk/hadoop-tools/hadoop-pipes/src/main/native/examples/conf/word.xml
* 
/hadoop/common/trunk/hadoop-tools/hadoop-pipes/src/main/native/pipes/debug/pipes-default-gdb-commands.txt
* 
/hadoop/common/trunk/hadoop-tools/hadoop-pipes/src/main/native/pipes/debug/pipes-default-script
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/anonymization/WordList.java
* /hadoop/common/trunk/hadoop-tools/hadoop-tools-dist/pom.xml
* /hadoop/common/trunk/hadoop-tools/pom.xml
* /hadoop/common/trunk/pom.xml


> Maven RAT plugin is not checking all source files
> -
>
> Key: HADOOP-9097
> URL: https://issues.apache.org/jira/browse/HADOOP-9097
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.3-alpha, 0.23.5
>Reporter: Tom White
>Assignee: Thomas Graves
>Priority: Critical
> Fix For: 3.0.0, 2.0.3-alpha, 0.23.6
>
> Attachments: HADOOP-9097-branch-0.23-entire.patch, 
> HADOOP-9097-branch-0.23-entire.patch, HADOOP-9097-branch-0.23.patch, 
> HADOOP-9097-branch-0.23.patch, HADOOP-9097-entire.patch, 
> HADOOP-9097-entire.patch, HADOOP-9097.patch, HADOOP-9097.patch, 
> HADOOP-9097-remove-branch23.sh, HADOOP-9097-remove-branch2.sh, 
> HADOOP-9097-remove-entire.sh
>
>
> Running 'mvn apache-rat:check' passes, but running RAT by hand (by 
> downloading the JAR) produces some warnings for Java files, amongst others.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9203) RPCCallBenchmark should find a random available port

2013-01-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553755#comment-13553755
 ] 

Hudson commented on HADOOP-9203:


Integrated in Hadoop-Hdfs-trunk #1286 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1286/])
HADOOP-9203. RPCCallBenchmark should find a random available port. 
Contributed by Andrew Purtell. (Revision 1433220)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1433220
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/RPCCallBenchmark.java


> RPCCallBenchmark should find a random available port
> 
>
> Key: HADOOP-9203
> URL: https://issues.apache.org/jira/browse/HADOOP-9203
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc, test
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Andrew Purtell
>Priority: Trivial
> Fix For: 2.0.3-alpha
>
> Attachments: HADOOP-9203.patch, HADOOP-9203.patch
>
>
> RPCCallBenchmark insists on port 12345 by default. It should find a random 
> ephemeral range port instead if one isn't specified.
> {noformat}
> testBenchmarkWithProto(org.apache.hadoop.ipc.TestRPCCallBenchmark)  Time 
> elapsed: 5092 sec  <<< ERROR!
> java.net.BindException: Problem binding to [0.0.0.0:12345] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:710)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:361)
>   at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:459)
>   at org.apache.hadoop.ipc.Server.<init>(Server.java:1877)
>   at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:982)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:376)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:351)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:825)
>   at 
> org.apache.hadoop.ipc.RPCCallBenchmark.startServer(RPCCallBenchmark.java:230)
>   at org.apache.hadoop.ipc.RPCCallBenchmark.run(RPCCallBenchmark.java:264)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>   at 
> org.apache.hadoop.ipc.TestRPCCallBenchmark.testBenchmarkWithProto(TestRPCCallBenchmark.java:43)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9178) src/main/conf is missing hadoop-policy.xml

2013-01-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553759#comment-13553759
 ] 

Hudson commented on HADOOP-9178:


Integrated in Hadoop-Hdfs-trunk #1286 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1286/])
HADOOP-9178. src/main/conf is missing hadoop-policy.xml. Contributed by 
Sandy Ryza (Revision 1433275)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1433275
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/service_level_auth.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HDFSPolicyProvider.java


> src/main/conf is missing hadoop-policy.xml
> --
>
> Key: HADOOP-9178
> URL: https://issues.apache.org/jira/browse/HADOOP-9178
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.2-alpha
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
>Priority: Minor
> Fix For: 2.0.3-alpha
>
> Attachments: HADOOP-9178-1.patch, HADOOP-9178-1.patch, 
> HADOOP-9178-2.patch, HADOOP-9178.patch
>
>
> src/main/conf contains hadoop-env.sh and core-site.xml, but is missing 
> hadoop-policy.xml

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9202) test-patch.sh fails during mvn eclipse:eclipse if patch adds a new module to the build

2013-01-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553761#comment-13553761
 ] 

Hudson commented on HADOOP-9202:


Integrated in Hadoop-Hdfs-trunk #1286 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1286/])
HADOOP-9202. test-patch.sh fails during mvn eclipse:eclipse if patch adds a 
new module to the build (Chris Nauroth via bobby) (Revision 1432949)

 Result = FAILURE
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1432949
Files : 
* /hadoop/common/trunk/dev-support/test-patch.sh
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


> test-patch.sh fails during mvn eclipse:eclipse if patch adds a new module to 
> the build
> --
>
> Key: HADOOP-9202
> URL: https://issues.apache.org/jira/browse/HADOOP-9202
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 3.0.0
>
> Attachments: HADOOP-9202.1.patch
>
>
> test-patch.sh tries running mvn eclipse:eclipse after applying the patch.  It 
> runs this before running mvn install.  The mvn eclipse:eclipse command 
> doesn't actually build the code, so if the patch in question is adding a 
> whole new module, then any other modules dependent on finding it in the 
> reactor will fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9211) HADOOP_CLIENT_OPTS default setting fixes max heap size at 128m, disregards HADOOP_HEAPSIZE

2013-01-15 Thread Sarah Weissman (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553778#comment-13553778
 ] 

Sarah Weissman commented on HADOOP-9211:


Maybe this can be better documented inside hadoop-env.sh? Out of the box, 
changing HADOOP_HEAPSIZE adds a -Xmx argument to the beginning of the 
bin/hadoop java command, but the hard-coded 128M heap size in 
HADOOP_CLIENT_OPTS overrides that option with a second -Xmx. As a new user I 
found this very unintuitive, especially since most advice around the internet 
for earlier versions refers to adjusting HADOOP_HEAPSIZE.

Also, at least in my case, 128M was not enough to run various examples from the 
hadoop examples jar, so the first thing many new users might encounter is 
having to figure out how to increase the heap size. Maybe 128M is a bad value 
given the default settings of the other variables?
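
On HotSpot, when -Xmx appears twice on the command line the later occurrence 
wins, which is exactly the override described above. A tiny probe 
(illustrative, not part of Hadoop) makes this easy to verify:

{noformat}
public class MaxHeapProbe {
  public static void main(String[] args) {
    // Run as: java -Xmx512m -Xmx128m MaxHeapProbe
    // Prints roughly 128 MB, showing that the later -Xmx wins.
    long mb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
    System.out.println("max heap = " + mb + " MB");
  }
}
{noformat}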

> HADOOP_CLIENT_OPTS default setting fixes max heap size at 128m, disregards 
> HADOOP_HEAPSIZE
> --
>
> Key: HADOOP-9211
> URL: https://issues.apache.org/jira/browse/HADOOP-9211
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.0.2-alpha
>Reporter: Sarah Weissman
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> hadoop-env.sh as included in the 2.0.2-alpha release tarball contains:
> export HADOOP_CLIENT_OPTS="-Xmx128m $HADOOP_CLIENT_OPTS"
> This overrides any heap settings in HADOOP_HEAPSIZE.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9203) RPCCallBenchmark should find a random available port

2013-01-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553788#comment-13553788
 ] 

Hudson commented on HADOOP-9203:


Integrated in Hadoop-Mapreduce-trunk #1314 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1314/])
HADOOP-9203. RPCCallBenchmark should find a random available port. 
Contributed by Andrew Purtell. (Revision 1433220)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1433220
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/RPCCallBenchmark.java


> RPCCallBenchmark should find a random available port
> 
>
> Key: HADOOP-9203
> URL: https://issues.apache.org/jira/browse/HADOOP-9203
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc, test
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Andrew Purtell
>Priority: Trivial
> Fix For: 2.0.3-alpha
>
> Attachments: HADOOP-9203.patch, HADOOP-9203.patch
>
>
> RPCCallBenchmark insists on port 12345 by default. It should find a random 
> ephemeral range port instead if one isn't specified.
> {noformat}
> testBenchmarkWithProto(org.apache.hadoop.ipc.TestRPCCallBenchmark)  Time 
> elapsed: 5092 sec  <<< ERROR!
> java.net.BindException: Problem binding to [0.0.0.0:12345] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:710)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:361)
>   at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:459)
>   at org.apache.hadoop.ipc.Server.<init>(Server.java:1877)
>   at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:982)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:376)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:351)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:825)
>   at 
> org.apache.hadoop.ipc.RPCCallBenchmark.startServer(RPCCallBenchmark.java:230)
>   at org.apache.hadoop.ipc.RPCCallBenchmark.run(RPCCallBenchmark.java:264)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>   at 
> org.apache.hadoop.ipc.TestRPCCallBenchmark.testBenchmarkWithProto(TestRPCCallBenchmark.java:43)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9097) Maven RAT plugin is not checking all source files

2013-01-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553786#comment-13553786
 ] 

Hudson commented on HADOOP-9097:


Integrated in Hadoop-Mapreduce-trunk #1314 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1314/])
HADOOP-9097. Maven RAT plugin is not checking all source files (tgraves) 
(Revision 1432934)

 Result = FAILURE
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1432934
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/config.h.cmake
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/META-INF/services/org.apache.hadoop.security.SecurityInfo
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestDelegationTokenRenewer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/resources/META-INF/services/org.apache.hadoop.security.token.TokenIdentifier
* /hadoop/common/trunk/hadoop-common-project/pom.xml
* /hadoop/common/trunk/hadoop-dist/pom.xml
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/resources/distcp-default.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/site/xdoc/appendix.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/site/xdoc/architecture.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/site/xdoc/cli.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/site/xdoc/index.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/site/xdoc/usage.xml
* 
/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/test/resources/sslConfig.xml
* 
/hadoop/common/trunk/hadoop-tools/hadoop-pipes/src/main/native/examples/conf/word-part.xml
* 
/hadoop/common/trunk/hadoop-tools/hadoop-pipes/src/main/native/examples/conf/word.xml
* 
/hadoop/common/trunk/hadoop-tools/hadoop-pipes/src/main/native/pipes/debug/pipes-default-gdb-commands.txt
* 
/hadoop/common/trunk/hadoop-tools/hadoop-pipes/src/main/native/pipes/debug/pipes-default-script
* 
/hadoop/common/trunk/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/anonymization/WordList.java
* /hadoop/common/trunk/hadoop-tools/hadoop-tools-dist/pom.xml
* /hadoop/common/trunk/hadoop-tools/pom.xml
* /hadoop/common/trunk/pom.xml


> Maven RAT plugin is not checking all source files
> -
>
> Key: HADOOP-9097
> URL: https://issues.apache.org/jira/browse/HADOOP-9097
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.3-alpha, 0.23.5
>Reporter: Tom White
>Assignee: Thomas Graves
>Priority: Critical
> Fix For: 3.0.0, 2.0.3-alpha, 0.23.6
>
> Attachments: HADOOP-9097-branch-0.23-entire.patch, 
> HADOOP-9097-branch-0.23-entire.patch, HADOOP-9097-branch-0.23.patch, 
> HADOOP-9097-branch-0.23.patch, HADOOP-9097-entire.patch, 
> HADOOP-9097-entire.patch, HADOOP-9097.patch, HADOOP-9097.patch, 
> HADOOP-9097-remove-branch23.sh, HADOOP-9097-remove-branch2.sh, 
> HADOOP-9097-remove-entire.sh
>
>
> Running 'mvn apache-rat:check' passes, but running RAT by hand (by 
> downloading the JAR) produces some warnings for Java files, amongst others.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9178) src/main/conf is missing hadoop-policy.xml

2013-01-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553792#comment-13553792
 ] 

Hudson commented on HADOOP-9178:


Integrated in Hadoop-Mapreduce-trunk #1314 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1314/])
HADOOP-9178. src/main/conf is missing hadoop-policy.xml. Contributed by 
Sandy Ryza (Revision 1433275)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1433275
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/service_level_auth.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/HDFSPolicyProvider.java


> src/main/conf is missing hadoop-policy.xml
> --
>
> Key: HADOOP-9178
> URL: https://issues.apache.org/jira/browse/HADOOP-9178
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.2-alpha
>Reporter: Sandy Ryza
>Assignee: Sandy Ryza
>Priority: Minor
> Fix For: 2.0.3-alpha
>
> Attachments: HADOOP-9178-1.patch, HADOOP-9178-1.patch, 
> HADOOP-9178-2.patch, HADOOP-9178.patch
>
>
> src/main/conf contains hadoop-env.sh and core-site.xml, but is missing 
> hadoop-policy.xml

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9202) test-patch.sh fails during mvn eclipse:eclipse if patch adds a new module to the build

2013-01-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553794#comment-13553794
 ] 

Hudson commented on HADOOP-9202:


Integrated in Hadoop-Mapreduce-trunk #1314 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1314/])
HADOOP-9202. test-patch.sh fails during mvn eclipse:eclipse if patch adds a 
new module to the build (Chris Nauroth via bobby) (Revision 1432949)

 Result = FAILURE
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1432949
Files : 
* /hadoop/common/trunk/dev-support/test-patch.sh
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


> test-patch.sh fails during mvn eclipse:eclipse if patch adds a new module to 
> the build
> --
>
> Key: HADOOP-9202
> URL: https://issues.apache.org/jira/browse/HADOOP-9202
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 3.0.0
>
> Attachments: HADOOP-9202.1.patch
>
>
> test-patch.sh tries running mvn eclipse:eclipse after applying the patch.  It 
> runs this before running mvn install.  The mvn eclipse:eclipse command 
> doesn't actually build the code, so if the patch in question is adding a 
> whole new module, then any other modules dependent on finding it in the 
> reactor will fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9211) HADOOP_CLIENT_OPTS default setting fixes max heap size at 128m, disregards HADOOP_HEAPSIZE

2013-01-15 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553811#comment-13553811
 ] 

Harsh J commented on HADOOP-9211:
-

In distributed mode, with no local tasks, 128M should suffice for simply 
launching the apps/shell/etc., I think. We can surely improve the docs around 
this, and if you wish, we can make the default 256m?

> HADOOP_CLIENT_OPTS default setting fixes max heap size at 128m, disregards 
> HADOOP_HEAPSIZE
> --
>
> Key: HADOOP-9211
> URL: https://issues.apache.org/jira/browse/HADOOP-9211
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.0.2-alpha
>Reporter: Sarah Weissman
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> hadoop-env.sh as included in the 2.0.2-alpha release tarball contains:
> export HADOOP_CLIENT_OPTS="-Xmx128m $HADOOP_CLIENT_OPTS"
> This overrides any heap settings in HADOOP_HEAPSIZE.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9097) Maven RAT plugin is not checking all source files

2013-01-15 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553839#comment-13553839
 ] 

Thomas Graves commented on HADOOP-9097:
---

Todd, I filed HDFS-4399 to handle it.  I would be grateful for a review if you 
have time.



> Maven RAT plugin is not checking all source files
> -
>
> Key: HADOOP-9097
> URL: https://issues.apache.org/jira/browse/HADOOP-9097
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.0.3-alpha, 0.23.5
>Reporter: Tom White
>Assignee: Thomas Graves
>Priority: Critical
> Fix For: 3.0.0, 2.0.3-alpha, 0.23.6
>
> Attachments: HADOOP-9097-branch-0.23-entire.patch, 
> HADOOP-9097-branch-0.23-entire.patch, HADOOP-9097-branch-0.23.patch, 
> HADOOP-9097-branch-0.23.patch, HADOOP-9097-entire.patch, 
> HADOOP-9097-entire.patch, HADOOP-9097.patch, HADOOP-9097.patch, 
> HADOOP-9097-remove-branch23.sh, HADOOP-9097-remove-branch2.sh, 
> HADOOP-9097-remove-entire.sh
>
>
> Running 'mvn apache-rat:check' passes, but running RAT by hand (by 
> downloading the JAR) produces some warnings for Java files, amongst others.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9208) Fix release audit warnings

2013-01-15 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553851#comment-13553851
 ] 

Thomas Graves commented on HADOOP-9208:
---

Where are you seeing these?  The precommit builds were complaining about the 
hdfs*.odg files, and HDFS-4399 is taking care of those.  If you ran it 
manually: what version, what OS, etc.?

> Fix release audit warnings
> --
>
> Key: HADOOP-9208
> URL: https://issues.apache.org/jira/browse/HADOOP-9208
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ted Yu
>
> The following files should be excluded from rat check:
> ./hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hdfsarchitecture.odg
> ./hadoop-common-project/hadoop-common/src/main/docs/src/documentation/resources/images/hdfsdatanodes.odg
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/resources/images/FI-framework.odg
> ./hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/hdfsarchitecture.odg
> ./hadoop-hdfs-project/hadoop-hdfs/src/site/resources/images/hdfsdatanodes.odg

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9214) Update touchz to allow modifying atime and mtime

2013-01-15 Thread Brian Burton (JIRA)
Brian Burton created HADOOP-9214:


 Summary: Update touchz to allow modifying atime and mtime
 Key: HADOOP-9214
 URL: https://issues.apache.org/jira/browse/HADOOP-9214
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tools
Affects Versions: 0.23.5
Reporter: Brian Burton
Priority: Minor


Currently there is no way to set the mtime or atime of a file from the "hadoop 
fs" command line. It would be useful if the 'hadoop fs -touchz' command were 
updated to include this functionality.
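
The underlying filesystem API already supports this via FileSystem.setTimes; a 
hedged sketch of what an extended touch could do (the helper name and flag 
handling are illustrative, not from a patch):

{noformat}
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Create the file if absent (what -touchz does today), then set the times.
// Per the FileSystem.setTimes API, a value of -1 leaves that time unchanged.
static void touch(FileSystem fs, Path p, long mtime, long atime)
    throws java.io.IOException {
  if (!fs.exists(p)) {
    fs.create(p).close();
  }
  fs.setTimes(p, mtime, atime);
}
{noformat}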

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9211) HADOOP_CLIENT_OPTS default setting fixes max heap size at 128m, disregards HADOOP_HEAPSIZE

2013-01-15 Thread Sarah Weissman (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553898#comment-13553898
 ] 

Sarah Weissman commented on HADOOP-9211:


I do not know enough about hadoop to feel confident in making a recommendation 
on what the default max heap size should be. Also, I was running the examples 
in non-distributed mode. 256m does appear to be enough so that the pi and 
wordcount examples from 
share/hadoop/mapreduce/hadoop-mapreduce-examples-2.0.2-alpha.jar do not run out 
of memory in non-distributed mode.

> HADOOP_CLIENT_OPTS default setting fixes max heap size at 128m, disregards 
> HADOOP_HEAPSIZE
> --
>
> Key: HADOOP-9211
> URL: https://issues.apache.org/jira/browse/HADOOP-9211
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.0.2-alpha
>Reporter: Sarah Weissman
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> hadoop-env.sh as included in the 2.0.2-alpha release tarball contains:
> export HADOOP_CLIENT_OPTS="-Xmx128m $HADOOP_CLIENT_OPTS"
> This overrides any heap settings in HADOOP_HEAPSIZE.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9205) Java7: path to native libraries should be passed to tests via -Djava.library.path rather than env.LD_LIBRARY_PATH

2013-01-15 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553909#comment-13553909
 ] 

Thomas Graves commented on HADOOP-9205:
---

I ran a quick job using jdk1.7.0_10 and it loads the native libraries fine. 
This was using jdk1.7.0_10 for execution; the jars were still built with 
jdk1.6.

Also, I tried to reproduce with the method you stated.  On trunk I wasn't able 
to reproduce.  Note I built all the source code with jdk1.7.0_10 and then ran 
the test.  I did have to create a symlink from 
hadoop-common-project/hadoop-common/target/native/target/usr/local/lib/libhadoop.so
 to 
hadoop-common-project/hadoop-common/target/native/target/usr/local/lib/libhadoop.so.1.0.0.
 

I'm not sure what happened to libhadoop.so; I'll have to investigate.  I need 
to look at the JDK release notes in more detail, but at a glance they say "Java 
applications invoking JDK 7 from a legacy JDK must be careful to clean up the 
LD_LIBRARY_PATH environment variable before executing JDK 7", which makes me 
wonder if it applies.

> Java7: path to native libraries should be passed to tests via 
> -Djava.library.path rather than env.LD_LIBRARY_PATH
> -
>
> Key: HADOOP-9205
> URL: https://issues.apache.org/jira/browse/HADOOP-9205
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
> Attachments: HADOOP-9205.patch
>
>
> Currently the path to native libraries is passed to unit tests via the 
> environment variable LD_LIBRARY_PATH. This is okay for Java6, but does not 
> work for Java7, since Java7 ignores this environment variable.
> So, to run the tests with the native implementation on Java7, one needs to 
> pass the paths to the native libs via the -Djava.library.path system 
> property rather than the LD_LIBRARY_PATH env variable.
> The suggested patch fixes the problem by setting the paths to the native 
> libs using both LD_LIBRARY_PATH and the -Djava.library.path property. This 
> way the tests work equally on both Java6 and Java7.
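
To make the mechanism above concrete, here is a minimal Java sketch (the 
class name is hypothetical) showing that JNI library resolution goes through 
the java.library.path system property:

{code}
// Minimal sketch (hypothetical class): System.loadLibrary resolves
// libhadoop.so against the java.library.path system property, which is
// why the tests must receive -Djava.library.path on Java7.
public class NativeLoadCheck {
  public static void main(String[] args) {
    System.out.println("java.library.path = "
        + System.getProperty("java.library.path"));
    try {
      System.loadLibrary("hadoop");  // looks for libhadoop.so on Linux
      System.out.println("libhadoop loaded");
    } catch (UnsatisfiedLinkError e) {
      System.out.println("libhadoop not found: " + e.getMessage());
    }
  }
}
{code}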

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9215) libhadoop.so doesn't exist (only libhadoop.so.1.0.0)

2013-01-15 Thread Thomas Graves (JIRA)
Thomas Graves created HADOOP-9215:
-

 Summary: libhadoop.so doesn't exist (only libhadoop.so.1.0.0)
 Key: HADOOP-9215
 URL: https://issues.apache.org/jira/browse/HADOOP-9215
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.0.3-alpha
Reporter: Thomas Graves
Priority: Blocker


Looks like none of the .so files are being built. They all have a .so.1.0.0 
but no plain .so file. branch-0.23 works fine, but trunk and branch-2 are 
broken.

This actually applies to both libhadoop.so and libhdfs.so

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-7682) taskTracker could not start because "Failed to set permissions" to "ttprivate to 0700"

2013-01-15 Thread Kirill Vergun (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553956#comment-13553956
 ] 

Kirill Vergun commented on HADOOP-7682:
---

visionersadak,

Are you running your installation in a “single node” (pseudo-distributed) 
configuration?
I am trying to do my own patching; it may work for that type of 
configuration.

https://github.com/o-nix/hadoop-patches

But it slows down every Hadoop shell command execution a lot.

> taskTracker could not start because "Failed to set permissions" to "ttprivate 
> to 0700"
> --
>
> Key: HADOOP-7682
> URL: https://issues.apache.org/jira/browse/HADOOP-7682
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 1.0.1
> Environment: OS:WindowsXP SP3 , Filesystem :NTFS, cygwin 1.7.9-1, 
> jdk1.6.0_05
>Reporter: Magic Xie
>
> ERROR org.apache.hadoop.mapred.TaskTracker:Can not start task tracker because 
> java.io.IOException:Failed to set permissions of 
> path:/tmp/hadoop-cyg_server/mapred/local/ttprivate to 0700
> at 
> org.apache.hadoop.fs.RawLocalFileSystem.checkReturnValue(RawLocalFileSystem.java:525)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:499)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:318)
> at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:183)
> at org.apache.hadoop.mapred.TaskTracker.initialize(TaskTracker.java:635)
> at org.apache.hadoop.mapred.TaskTracker.<init>(TaskTracker.java:1328)
> at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:3430)
> Since hadoop 0.20.203, when the TaskTracker initializes, it checks the 
> permissions (TaskTracker line 624) of 
> (org.apache.hadoop.mapred.TaskTracker.TT_LOG_TMP_DIR,org.apache.hadoop.mapred.TaskTracker.TT_PRIVATE_DIR,
>  
> org.apache.hadoop.mapred.TaskTracker.TT_PRIVATE_DIR). RawLocalFileSystem 
> (http://svn.apache.org/viewvc/hadoop/common/tags/release-0.20.203.0/src/core/org/apache/hadoop/fs/RawLocalFileSystem.java?view=markup)
>  calls setPermission (line 481) to deal with it. setPermission works fine on 
> *nix; however, it does not always work on Windows.
> setPermission calls setReadable of java.io.File at line 498, but according 
> to Table 1 provided by Oracle, setReadable(false) will always return false 
> on Windows, the same as setExecutable(false).
> http://java.sun.com/developer/technicalArticles/J2SE/Desktop/javase6/enhancements/
> Is this what causes the task tracker's "Failed to set permissions" of 
> "ttprivate to 0700"?
> Hadoop 0.20.202 works fine in the same environment. 
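
To make the failure mode above concrete, here is a minimal sketch (the class 
and directory names are hypothetical) of the java.io.File permission calls in 
question:

{code}
import java.io.File;
import java.io.IOException;

// Minimal sketch (hypothetical class/path): the java.io.File calls that
// RawLocalFileSystem#setPermission builds on. Per the Oracle table cited
// above, setReadable(false) and setExecutable(false) always return false
// on Windows, so emulating mode 0700 fails there.
public class PermCheck {
  public static void main(String[] args) throws IOException {
    File dir = new File("ttprivate-test");
    if (!dir.exists() && !dir.mkdirs()) {
      throw new IOException("could not create " + dir);
    }
    boolean ok = dir.setReadable(false, false)    // revoke from everyone...
              && dir.setReadable(true, true)      // ...then grant to owner
              && dir.setWritable(false, false)
              && dir.setWritable(true, true)
              && dir.setExecutable(false, false)  // returns false on Windows
              && dir.setExecutable(true, true);
    System.out.println(ok ? "permissions set to 0700"
                          : "Failed to set permissions");
  }
}
{code}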

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9205) Java7: path to native libraries should be passed to tests via -Djava.library.path rather than env.LD_LIBRARY_PATH

2013-01-15 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553958#comment-13553958
 ] 

Kihwal Lee commented on HADOOP-9205:


I tried with jdk 1.7.0_11 but could not reproduce the issue. Maybe your 
runtime environment has something different from ours. I don't have anything 
special: Fedora 17 x86_64, a freshly downloaded Oracle jdk 7u11. JAVA_HOME 
was set accordingly; maven is 3.0.4, and maven --version shows the right java 
info. My runtime environment does not have LD_LIBRARY_PATH or 
JAVA_LIBRARY_PATH set, and no hadoop-specific settings either.

> Java7: path to native libraries should be passed to tests via 
> -Djava.library.path rather than env.LD_LIBRARY_PATH
> -
>
> Key: HADOOP-9205
> URL: https://issues.apache.org/jira/browse/HADOOP-9205
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
> Attachments: HADOOP-9205.patch
>
>
> Currently the path to native libraries is passed to unit tests via the 
> environment variable LD_LIBRARY_PATH. This is okay for Java6, but does not 
> work for Java7, since Java7 ignores this environment variable.
> So, to run the tests with the native implementation on Java7, one needs to 
> pass the paths to the native libs via the -Djava.library.path system 
> property rather than the LD_LIBRARY_PATH env variable.
> The suggested patch fixes the problem by setting the paths to the native 
> libs using both LD_LIBRARY_PATH and the -Djava.library.path property. This 
> way the tests work equally on both Java6 and Java7.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9205) Java7: path to native libraries should be passed to tests via -Djava.library.path rather than env.LD_LIBRARY_PATH

2013-01-15 Thread Ivan A. Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553960#comment-13553960
 ] 

Ivan A. Veselovsky commented on HADOOP-9205:


Hi, Thomas, 
can you please provide more detail on your environment: what OS did you use?
I experimented on "CentOS release 6.3 (Final)" and "Ubuntu precise (12.04.1 
LTS)".

BTW, the problem with the missing symlink libhadoop.so -> libhadoop.so.1.0.0 
can be avoided if you install the "cmake" utility, version >= 2.8. On CentOS 
systems this version is installed as a separate package named "cmake28", and 
the corresponding executable is /usr/bin/cmake28. We create a symlink cmake 
-> cmake28, and after that the problem goes away.



> Java7: path to native libraries should be passed to tests via 
> -Djava.library.path rather than env.LD_LIBRARY_PATH
> -
>
> Key: HADOOP-9205
> URL: https://issues.apache.org/jira/browse/HADOOP-9205
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
> Attachments: HADOOP-9205.patch
>
>
> Currently the path to native libraries is passed to unit tests via the 
> environment variable LD_LIBRARY_PATH. This is okay for Java6, but does not 
> work for Java7, since Java7 ignores this environment variable.
> So, to run the tests with the native implementation on Java7, one needs to 
> pass the paths to the native libs via the -Djava.library.path system 
> property rather than the LD_LIBRARY_PATH env variable.
> The suggested patch fixes the problem by setting the paths to the native 
> libs using both LD_LIBRARY_PATH and the -Djava.library.path property. This 
> way the tests work equally on both Java6 and Java7.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9205) Java7: path to native libraries should be passed to tests via -Djava.library.path rather than env.LD_LIBRARY_PATH

2013-01-15 Thread Surenkumar Nihalani (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553966#comment-13553966
 ] 

Surenkumar Nihalani commented on HADOOP-9205:
-

I think we should share our output from {{env}} in a pastie to solve this 
better.

> Java7: path to native libraries should be passed to tests via 
> -Djava.library.path rather than env.LD_LIBRARY_PATH
> -
>
> Key: HADOOP-9205
> URL: https://issues.apache.org/jira/browse/HADOOP-9205
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
> Attachments: HADOOP-9205.patch
>
>
> Currently the path to native libraries is passed to unit tests via the 
> environment variable LD_LIBRARY_PATH. This is okay for Java6, but does not 
> work for Java7, since Java7 ignores this environment variable.
> So, to run the tests with the native implementation on Java7, one needs to 
> pass the paths to the native libs via the -Djava.library.path system 
> property rather than the LD_LIBRARY_PATH env variable.
> The suggested patch fixes the problem by setting the paths to the native 
> libs using both LD_LIBRARY_PATH and the -Djava.library.path property. This 
> way the tests work equally on both Java6 and Java7.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9205) Java7: path to native libraries should be passed to tests via -Djava.library.path rather than env.LD_LIBRARY_PATH

2013-01-15 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553993#comment-13553993
 ] 

Thomas Graves commented on HADOOP-9205:
---

I'm running on rhel5.6 with maven 3.0.3 and cmake version 2.6-patch 4. 

> Java7: path to native libraries should be passed to tests via 
> -Djava.library.path rather than env.LD_LIBRARY_PATH
> -
>
> Key: HADOOP-9205
> URL: https://issues.apache.org/jira/browse/HADOOP-9205
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
> Attachments: HADOOP-9205.patch
>
>
> Currently the path to native libraries is passed to unit tests via the 
> environment variable LD_LIBRARY_PATH. This is okay for Java6, but does not 
> work for Java7, since Java7 ignores this environment variable.
> So, to run the tests with the native implementation on Java7, one needs to 
> pass the paths to the native libs via the -Djava.library.path system 
> property rather than the LD_LIBRARY_PATH env variable.
> The suggested patch fixes the problem by setting the paths to the native 
> libs using both LD_LIBRARY_PATH and the -Djava.library.path property. This 
> way the tests work equally on both Java6 and Java7.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9215) libhadoop.so doesn't exist (only libhadoop.so.1.0.0)

2013-01-15 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13553996#comment-13553996
 ] 

Thomas Graves commented on HADOOP-9215:
---

Note I'm using cmake version 2.6 patch 4. Someone on a different jira 
mentioned that using 2.8 fixes this issue; I can't easily install that to 
test.

> libhadoop.so doesn't exist (only libhadoop.so.1.0.0)
> 
>
> Key: HADOOP-9215
> URL: https://issues.apache.org/jira/browse/HADOOP-9215
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Thomas Graves
>Priority: Blocker
>
> Looks like none of the .so files are being built. They all have a .so.1.0.0 
> but no plain .so file. branch-0.23 works fine, but trunk and branch-2 are 
> broken.
> This actually applies to both libhadoop.so and libhdfs.so

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9215) libhadoop.so doesn't exist (only libhadoop.so.1.0.0)

2013-01-15 Thread Charles Wimmer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554013#comment-13554013
 ] 

Charles Wimmer commented on HADOOP-9215:


If a specific version of cmake is required, then the build should fail without 
it.

http://www.cmake.org/cmake/help/v2.8.10/cmake.html#command:cmake_minimum_required

{noformat}
cmake_minimum_required: Set the minimum required version of cmake for a project.

  cmake_minimum_required(VERSION major[.minor[.patch[.tweak]]]
                         [FATAL_ERROR])

If the current version of CMake is lower than that required it will stop 
processing the project and report an error. When a version higher than 2.4 is 
specified the command implicitly invokes

  cmake_policy(VERSION major[.minor[.patch[.tweak]]])

which sets the cmake policy version level to the version specified. When 
version 2.4 or lower is given the command implicitly invokes

  cmake_policy(VERSION 2.4)

which enables compatibility features for CMake 2.4 and lower.

The FATAL_ERROR option is accepted but ignored by CMake 2.6 and higher. It 
should be specified so CMake versions 2.4 and lower fail with an error instead 
of just a warning.{noformat}

> libhadoop.so doesn't exist (only libhadoop.so.1.0.0)
> 
>
> Key: HADOOP-9215
> URL: https://issues.apache.org/jira/browse/HADOOP-9215
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Thomas Graves
>Priority: Blocker
>
> Looks like none of the .so files are being built. They all have a .so.1.0.0 
> but no plain .so file. branch-0.23 works fine, but trunk and branch-2 are 
> broken.
> This actually applies to both libhadoop.so and libhdfs.so

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-7682) taskTracker could not start because "Failed to set permissions" to "ttprivate to 0700"

2013-01-15 Thread FKorning (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554028#comment-13554028
 ] 

FKorning commented on HADOOP-7682:
--

Yes, it is extremely slow, which sort of makes the whole Windows thing a bit 
moot. Then again, if you have a farm of Windows boxes sitting idle, you may 
as well use their cycles...

Sent from my iPhone




> taskTracker could not start because "Failed to set permissions" to "ttprivate 
> to 0700"
> --
>
> Key: HADOOP-7682
> URL: https://issues.apache.org/jira/browse/HADOOP-7682
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 1.0.1
> Environment: OS:WindowsXP SP3 , Filesystem :NTFS, cygwin 1.7.9-1, 
> jdk1.6.0_05
>Reporter: Magic Xie
>
> ERROR org.apache.hadoop.mapred.TaskTracker:Can not start task tracker because 
> java.io.IOException:Failed to set permissions of 
> path:/tmp/hadoop-cyg_server/mapred/local/ttprivate to 0700
> at 
> org.apache.hadoop.fs.RawLocalFileSystem.checkReturnValue(RawLocalFileSystem.java:525)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:499)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:318)
> at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:183)
> at org.apache.hadoop.mapred.TaskTracker.initialize(TaskTracker.java:635)
> at org.apache.hadoop.mapred.TaskTracker.(TaskTracker.java:1328)
> at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:3430)
> Since hadoop0.20.203 when the TaskTracker initialize, it checks the 
> permission(TaskTracker Line 624) of 
> (org.apache.hadoop.mapred.TaskTracker.TT_LOG_TMP_DIR,org.apache.hadoop.mapred.TaskTracker.TT_PRIVATE_DIR,
>  
> org.apache.hadoop.mapred.TaskTracker.TT_PRIVATE_DIR).RawLocalFileSystem(http://svn.apache.org/viewvc/hadoop/common/tags/release-0.20.203.0/src/core/org/apache/hadoop/fs/RawLocalFileSystem.java?view=markup)
>  call setPermission(Line 481) to deal with it, setPermission works fine on 
> *nx, however,it dose not alway works on windows.
> setPermission call setReadable of Java.io.File in the line 498, but according 
> to the Table1 below provided by oracle,setReadable(false) will always return 
> false on windows, the same as setExecutable(false).
> http://java.sun.com/developer/technicalArticles/J2SE/Desktop/javase6/enhancements/
> is it cause the task tracker "Failed to set permissions" to "ttprivate to 
> 0700"?
> Hadoop 0.20.202 works fine in the same environment. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9212) Potential deadlock in FileSystem.Cache/IPC/UGI

2013-01-15 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554031#comment-13554031
 ] 

Todd Lipcon commented on HADOOP-9212:
-

- The cleanup of the 'in' stream should be in a finally clause, rather than 
in the catch clause, in case something other than an IOException is thrown 
(see the sketch after this list).
- Do you need buffering on the input stream?
- Can you add a reference to this JIRA in the comment change in 
UserGroupInformation.java? Otherwise I don't think anyone will know about 
this lock cycle in a couple of months when we've forgotten about this JIRA.
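
A minimal sketch of the cleanup pattern the first point asks for (the class 
and method names are hypothetical, not taken from the patch):

{code}
import java.io.BufferedInputStream;
import java.io.DataInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import org.apache.hadoop.security.Credentials;

// Minimal sketch (hypothetical helper): the finally block releases the
// stream even if something other than an IOException is thrown.
public class TokenFileReader {
  static Credentials readTokens(File tokenFile) throws IOException {
    DataInputStream in = null;
    try {
      in = new DataInputStream(
          new BufferedInputStream(new FileInputStream(tokenFile)));
      Credentials cred = new Credentials();
      cred.readTokenStorageStream(in);
      return cred;
    } finally {
      if (in != null) {
        in.close();
      }
    }
  }
}
{code}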


> Potential deadlock in FileSystem.Cache/IPC/UGI
> --
>
> Key: HADOOP-9212
> URL: https://issues.apache.org/jira/browse/HADOOP-9212
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.2-alpha
>Reporter: Tom White
>Assignee: Tom White
> Attachments: 1_jcarder_result_0.png, HADOOP-9212.patch
>
>
> jcarder found a cycle which could lead to a potential deadlock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8816) HTTP Error 413 full HEAD if using kerberos authentication

2013-01-15 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-8816:
---

   Resolution: Fixed
Fix Version/s: 2.0.3-alpha
   Status: Resolved  (was: Patch Available)

Thanks Moritz. Committed to trunk and branch-1.

> HTTP Error 413 full HEAD if using kerberos authentication
> -
>
> Key: HADOOP-8816
> URL: https://issues.apache.org/jira/browse/HADOOP-8816
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Affects Versions: 2.0.1-alpha
> Environment: ubuntu linux with active directory kerberos.
>Reporter: Moritz Moeller
>Assignee: Moritz Moeller
> Fix For: 2.0.3-alpha
>
> Attachments: HADOOP-8816.patch, 
> hadoop-common-kerberos-increase-http-header-buffer-size.patch
>
>
> The HTTP Authentication: header is too large if using kerberos, and the 
> request is rejected by Jetty because Jetty's default header size limit is 
> too low.
> Can be fixed by adding ret.setHeaderBufferSize(1024*128); in 
> org.apache.hadoop.http.HttpServer.createDefaultChannelConnector
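
A minimal sketch of the suggested change, assuming Jetty 6's 
SelectChannelConnector API (the connector type HttpServer wrapped at the 
time); the factory class name is hypothetical:

{code}
import org.mortbay.jetty.nio.SelectChannelConnector;

// Minimal sketch, assuming Jetty 6's SelectChannelConnector API: Kerberos
// Authorization headers can exceed Jetty's default header buffer, causing
// "HTTP 413 FULL head", so the buffer is raised to 128KB.
public class ConnectorFactory {
  public static SelectChannelConnector createDefaultChannelConnector() {
    SelectChannelConnector ret = new SelectChannelConnector();
    ret.setHeaderBufferSize(1024 * 128);
    return ret;
  }
}
{code}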

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8816) HTTP Error 413 full HEAD if using kerberos authentication

2013-01-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554102#comment-13554102
 ] 

Hudson commented on HADOOP-8816:


Integrated in Hadoop-trunk-Commit #3240 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3240/])
HADOOP-8816. HTTP Error 413 full HEAD if using kerberos authentication. 
(moritzmoeller via tucu) (Revision 1433567)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1433567
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServer.java


> HTTP Error 413 full HEAD if using kerberos authentication
> -
>
> Key: HADOOP-8816
> URL: https://issues.apache.org/jira/browse/HADOOP-8816
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Affects Versions: 2.0.1-alpha
> Environment: ubuntu linux with active directory kerberos.
>Reporter: Moritz Moeller
>Assignee: Moritz Moeller
> Fix For: 2.0.3-alpha
>
> Attachments: HADOOP-8816.patch, 
> hadoop-common-kerberos-increase-http-header-buffer-size.patch
>
>
> The HTTP Authentication: header is too large if using kerberos, and the 
> request is rejected by Jetty because Jetty's default header size limit is 
> too low.
> Can be fixed by adding ret.setHeaderBufferSize(1024*128); in 
> org.apache.hadoop.http.HttpServer.createDefaultChannelConnector

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9209) Add shell command to dump file checksums

2013-01-15 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554116#comment-13554116
 ] 

Colin Patrick McCabe commented on HADOOP-9209:
--

{code}
+  "to the datanode storing each block of the file, and thus is not\n" +
{code}

Perhaps this should be "*a* datanode storing..." to avoid the implication that 
there is only one place a block is stored.

I think it would be better to call this command {{\-dumpChecksums}}.  Just 
calling it "checksum" leaves it kind of ambiguous what it does (at least in my 
mind).  A command just called "checksum" could do many things-- like create a 
new checksum for a file that didn't have one, checksum some data which wasn't 
checksummed before, etc.  "dump checksum" makes it clear that you're dumping 
something that already exists.

> Add shell command to dump file checksums
> 
>
> Key: HADOOP-9209
> URL: https://issues.apache.org/jira/browse/HADOOP-9209
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, tools
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hadoop-9209.txt, hadoop-9209.txt
>
>
> Occasionally while working with tools like distcp, or debugging certain 
> issues, it's useful to be able to quickly see the checksum of a file. We 
> currently have the APIs to efficiently calculate a checksum, but we don't 
> expose it to users. This JIRA is to add a "fs -checksum" command which dumps 
> the checksum information for the specified file(s).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9215) libhadoop.so doesn't exist (only libhadoop.so.1.0.0)

2013-01-15 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554132#comment-13554132
 ] 

Colin Patrick McCabe commented on HADOOP-9215:
--

bq. Note I'm using cmake version 2.6 patch 4. Someone on a different jira 
mentioned that using 2.8 fixes this issue; I can't easily install that to 
test.

A newer version of cmake fixes this issue (pretty much any version newer than 
the ancient CentOS 5 version).  If you can't upgrade, a workaround is running 
"mvn compile" twice (yeah, I know, it sucks.)

I would welcome a patch to fix this (since we still want to support CentOS 5).  
The easiest way to do that is probably to manually make the symlink from 
libhadoop.so to libhadoop.so.1.0.0 (and so forth) in the CMakeLists.txt script. 
 This could be put into a library file similar to how we do with 
{{JNIFlags.cmake}}.

> libhadoop.so doesn't exist (only libhadoop.so.1.0.0)
> 
>
> Key: HADOOP-9215
> URL: https://issues.apache.org/jira/browse/HADOOP-9215
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Thomas Graves
>Priority: Blocker
>
> Looks like none of the .so files are being built. They all have a .so.1.0.0 
> but no plain .so file. branch-0.23 works fine, but trunk and branch-2 are 
> broken.
> This actually applies to both libhadoop.so and libhdfs.so

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

2013-01-15 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky updated HADOOP-8849:
---

Attachment: HADOOP-8849-trunk--5.patch

The patch HADOOP-8849-trunk--5.patch implements the suggested change: the 
methods that grant permissions before deleting are extracted into a separate 
API. Separate tests are also provided for them.

Note: the imports and some other code are specially arranged to avoid merge 
conflicts with the pending patch HADOOP-9063.
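
For readers following along, here is a minimal sketch (a hypothetical helper, 
not the attached patch) of the two improvements described in this issue 
combined:

{code}
import java.io.File;

// Minimal sketch (hypothetical helper, not the attached patch) combining
// the two suggested improvements: grant rwx before deleting, and judge
// success by File#exists() rather than by File#delete()'s return value.
public final class DeleteUtil {
  public static boolean fullyDelete(File f) {
    if (!f.exists()) {
      return true;
    }
    // 1) a killed test can leave directories with no read/execute bits;
    //    grant rwx so the contents can be listed and removed
    f.setReadable(true);
    f.setWritable(true);
    f.setExecutable(true);
    File[] children = f.listFiles();  // null for plain files
    if (children != null) {
      for (File child : children) {
        fullyDelete(child);
      }
    }
    f.delete();
    // 2) another thread or process may have removed it between our calls;
    //    exists() is the reliable success signal
    return !f.exists();
  }
}
{code}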

> FileUtil#fullyDelete should grant the target directories +rwx permissions 
> before trying to delete them
> --
>
> Key: HADOOP-8849
> URL: https://issues.apache.org/jira/browse/HADOOP-8849
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
>Priority: Minor
> Attachments: HADOOP-8849-trunk--5.patch, HADOOP-8849-vs-trunk-4.patch
>
>
> Two improvements are suggested for the implementation of the methods 
> org.apache.hadoop.fs.FileUtil.fullyDelete(File) and 
> org.apache.hadoop.fs.FileUtil.fullyDeleteContents(File):
>  
> 1) We should grant +rwx permissions to the target directories before trying 
> to delete them.
> The mentioned methods fail to delete directories that don't have read or 
> execute permissions.
> The actual problem appears if an hdfs-related test times out (with a short 
> timeout like tens of seconds) and the forked test process is killed: some 
> directories are left on disk that are not readable and/or executable. This 
> prevents the next tests from being executed properly because these 
> directories cannot be deleted with FileUtil#fullyDelete(), so many 
> subsequent tests fail. So, it is recommended to grant read, write, and 
> execute permissions to the directories whose content is to be deleted.
> 2) Generic reliability improvement: we shouldn't rely upon File#delete()'s 
> return value; use File#exists() instead. 
> FileUtil#fullyDelete() uses the return value of java.io.File#delete(), but 
> this is not reliable because File#delete() returns true only if the file 
> was deleted as a result of that #delete() invocation. E.g. in the following 
> code
> if (f.exists()) { // 1
>   return f.delete(); // 2
> }
> if the file f was deleted by another thread or process between calls "1" 
> and "2", this fragment will return "false", even though the file f does not 
> exist when the method returns.
> So it is better to write
> if (f.exists()) {
>   f.delete();
>   return !f.exists();
> }

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

2013-01-15 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky updated HADOOP-8849:
---

Affects Version/s: 0.23.6
   2.0.3-alpha
   3.0.0

> FileUtil#fullyDelete should grant the target directories +rwx permissions 
> before trying to delete them
> --
>
> Key: HADOOP-8849
> URL: https://issues.apache.org/jira/browse/HADOOP-8849
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
>Priority: Minor
> Attachments: HADOOP-8849-trunk--5.patch, HADOOP-8849-vs-trunk-4.patch
>
>
> Two improvements are suggested for the implementation of the methods 
> org.apache.hadoop.fs.FileUtil.fullyDelete(File) and 
> org.apache.hadoop.fs.FileUtil.fullyDeleteContents(File):
>  
> 1) We should grant +rwx permissions to the target directories before trying 
> to delete them.
> The mentioned methods fail to delete directories that don't have read or 
> execute permissions.
> The actual problem appears if an hdfs-related test times out (with a short 
> timeout like tens of seconds) and the forked test process is killed: some 
> directories are left on disk that are not readable and/or executable. This 
> prevents the next tests from being executed properly because these 
> directories cannot be deleted with FileUtil#fullyDelete(), so many 
> subsequent tests fail. So, it is recommended to grant read, write, and 
> execute permissions to the directories whose content is to be deleted.
> 2) Generic reliability improvement: we shouldn't rely upon File#delete()'s 
> return value; use File#exists() instead. 
> FileUtil#fullyDelete() uses the return value of java.io.File#delete(), but 
> this is not reliable because File#delete() returns true only if the file 
> was deleted as a result of that #delete() invocation. E.g. in the following 
> code
> if (f.exists()) { // 1
>   return f.delete(); // 2
> }
> if the file f was deleted by another thread or process between calls "1" 
> and "2", this fragment will return "false", even though the file f does not 
> exist when the method returns.
> So it is better to write
> if (f.exists()) {
>   f.delete();
>   return !f.exists();
> }

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9063) enhance unit-test coverage of class org.apache.hadoop.fs.FileUtil

2013-01-15 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky updated HADOOP-9063:
---

Attachment: HADOOP-9063-trunk--c.patch
HADOOP-9063-branch-0.23--c.patch

The version "c" of the patches re-arranges some code to avoid merge conflicts 
with HADOOP-8849.

> enhance unit-test coverage of class org.apache.hadoop.fs.FileUtil
> -
>
> Key: HADOOP-9063
> URL: https://issues.apache.org/jira/browse/HADOOP-9063
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
>Priority: Minor
> Attachments: HADOOP-9063--b.patch, HADOOP-9063-branch-0.23--b.patch, 
> HADOOP-9063-branch-0.23--c.patch, HADOOP-9063.patch, 
> HADOOP-9063-trunk--c.patch
>
>
> Some methods of the class org.apache.hadoop.fs.FileUtil are poorly covered 
> by unit tests or not covered at all. Enhance the coverage.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9063) enhance unit-test coverage of class org.apache.hadoop.fs.FileUtil

2013-01-15 Thread Ivan A. Veselovsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554164#comment-13554164
 ] 

Ivan A. Veselovsky commented on HADOOP-9063:


patch "HADOOP-9063-trunk--c.patch" is for trunk and branch-2.

> enhance unit-test coverage of class org.apache.hadoop.fs.FileUtil
> -
>
> Key: HADOOP-9063
> URL: https://issues.apache.org/jira/browse/HADOOP-9063
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
>Priority: Minor
> Attachments: HADOOP-9063--b.patch, HADOOP-9063-branch-0.23--b.patch, 
> HADOOP-9063-branch-0.23--c.patch, HADOOP-9063.patch, 
> HADOOP-9063-trunk--c.patch
>
>
> Some methods of the class org.apache.hadoop.fs.FileUtil are poorly covered 
> by unit tests or not covered at all. Enhance the coverage.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9209) Add shell command to dump file checksums

2013-01-15 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554170#comment-13554170
 ] 

Kihwal Lee commented on HADOOP-9209:


Regarding the name of the command: it seems we use the same name if there is 
something equivalent in the shell; otherwise the name is more descriptive. 
Commands like sum and md5sum exist, so "checksum" may be okay in that sense. 
But a more descriptive name would be fine too.

An HDFS checksum is a bit different from the regular checksums obtained 
against a file in conventional file systems. It has been of no concern until 
now, as it's mostly internal. But if it is exposed to users, we have to tell 
them what it is and what to expect. For example, users must be told that an 
hdfs file checksum can differ even when the contents of the files are 
identical, due to the use of different block sizes and checksum parameters. 
Maybe we should mention this in the help.


> Add shell command to dump file checksums
> 
>
> Key: HADOOP-9209
> URL: https://issues.apache.org/jira/browse/HADOOP-9209
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, tools
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hadoop-9209.txt, hadoop-9209.txt
>
>
> Occasionally while working with tools like distcp, or debugging certain 
> issues, it's useful to be able to quickly see the checksum of a file. We 
> currently have the APIs to efficiently calculate a checksum, but we don't 
> expose it to users. This JIRA is to add a "fs -checksum" command which dumps 
> the checksum information for the specified file(s).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9197) Some little confusion in official documentation

2013-01-15 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas resolved HADOOP-9197.
-

Resolution: Invalid

Jason, for now I am going to resolve this jira as invalid, since you have not 
posted additional details. Feel free to reopen it when you have more concrete 
details/suggestions.


> Some little confusion in official documentation
> ---
>
> Key: HADOOP-9197
> URL: https://issues.apache.org/jira/browse/HADOOP-9197
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Jason Lee
>Priority: Trivial
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> I am just a newbie to Hadoop; recently I have been self-studying it. While 
> reading the official documentation, I find it a little confusing for 
> beginners like me. For example, look at the documents about the HDFS shell 
> guide:
> In 0.17, the prefix of the HDFS shell is hadoop dfs:
> http://hadoop.apache.org/docs/r0.17.2/hdfs_shell.html
> In 0.19, the prefix of the HDFS shell is hadoop fs:
> http://hadoop.apache.org/docs/r0.19.1/hdfs_shell.html#lsr
> In 1.0.4, the prefix of the HDFS shell is hdfs dfs:
> http://hadoop.apache.org/docs/r1.0.4/file_system_shell.html#ls
> As a beginner, I find reading them painful.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Reopened] (HADOOP-9070) Kerberos SASL server cannot find kerberos key

2013-01-15 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon reopened HADOOP-9070:
-


> Kerberos SASL server cannot find kerberos key
> -
>
> Key: HADOOP-9070
> URL: https://issues.apache.org/jira/browse/HADOOP-9070
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Blocker
> Fix For: 2.0.3-alpha
>
> Attachments: HADOOP-9070.patch, HADOOP-9070.patch, HADOOP-9070.patch
>
>
> HADOOP-9015 inadvertently removed a {{doAs}} block around instantiation of 
> the sasl server which renders a server incapable of accepting kerberized 
> connections.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9215) libhadoop.so doesn't exist (only libhadoop.so.1.0.0)

2013-01-15 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554205#comment-13554205
 ] 

Thomas Graves commented on HADOOP-9215:
---

I also have rhel6 boxes which have cmake 2.6 on them, so it's not just 
CentOS 5. Taking a quick look at CentOS 6.3, I see cmake-2.6.4-5.el6.src.rpm 
(from http://vault.centos.org/6.3/os/Source/SPackages/). What version of 
CentOS has cmake 2.8?

# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.3 (Santiago)
# rpm -qa | grep cmake
cmake-2.6.4-5.el6.x86_64


What jira introduced this dependency? Personally, I don't think we should 
mandate cmake 2.8 if it's not in, or easily available for, rhel5 or 
rhel6/CentOS 6. I'll look some more to see if there is an easier way for me 
to get it, but it doesn't currently come up in yum list for me.

At the very least, I think we should have the build fail, as Charles 
mentioned.



> libhadoop.so doesn't exist (only libhadoop.so.1.0.0)
> 
>
> Key: HADOOP-9215
> URL: https://issues.apache.org/jira/browse/HADOOP-9215
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Thomas Graves
>Priority: Blocker
>
> Looks like none of the .so files are being built. They all have a .so.1.0.0 
> but no plain .so file. branch-0.23 works fine, but trunk and branch-2 are 
> broken.
> This actually applies to both libhadoop.so and libhdfs.so

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8589) ViewFs tests fail when tests and home dirs are nested

2013-01-15 Thread Eli Collins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eli Collins updated HADOOP-8589:


Target Version/s: 0.23.5  (was: 2.0.3-alpha, 0.23.5)
   Fix Version/s: 2.0.3-alpha

I merged this change to branch-2.

> ViewFs tests fail when tests and home dirs are nested
> -
>
> Key: HADOOP-8589
> URL: https://issues.apache.org/jira/browse/HADOOP-8589
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, test
>Affects Versions: 0.23.1, 2.0.0-alpha
>Reporter: Andrey Klochkov
>Assignee: Sanjay Radia
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: Hadoop-8589.patch, HADOOP-8589.patch, HADOOP-8589.patch, 
> hadoop-8589-sanjay.patch, Hadoop-8589-v2.patch, HADOOP-8859.patch
>
>
> TestFSMainOperationsLocalFileSystem fails when the test root directory is 
> under the user's home directory and the user's home dir is more than 2 
> levels deep from /. This happens with the default 1-node installation of 
> Jenkins. 
> This is the failure log:
> {code}
> org.apache.hadoop.fs.FileAlreadyExistsException: Path /var already exists as 
> dir; cannot create link here
>   at org.apache.hadoop.fs.viewfs.InodeTree.createLink(InodeTree.java:244)
>   at org.apache.hadoop.fs.viewfs.InodeTree.<init>(InodeTree.java:334)
>   at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem$1.<init>(ViewFileSystem.java:167)
>   at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.initialize(ViewFileSystem.java:167)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2094)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:79)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2128)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2110)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:290)
>   at 
> org.apache.hadoop.fs.viewfs.ViewFileSystemTestSetup.setupForViewFileSystem(ViewFileSystemTestSetup.java:76)
>   at 
> org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem.setUp(TestFSMainOperationsLocalFileSystem.java:40)
> ...
> Standard Output
> 2012-07-11 22:07:20,239 INFO  mortbay.log (Slf4jLog.java:info(67)) - Home dir 
> base /var/lib
> {code}
> The reason for the failure is that the code tries to mount links for both 
> "/var" and "/var/lib", and it fails for the second one because "/var" is 
> already mounted.
> The fix was provided in HADOOP-8036 but was later reverted in HADOOP-8129.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9212) Potential deadlock in FileSystem.Cache/IPC/UGI

2013-01-15 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-9212:
--

Attachment: HADOOP-9212.patch

Thanks for the review Todd. Here's a new patch with your feedback addressed.

> Potential deadlock in FileSystem.Cache/IPC/UGI
> --
>
> Key: HADOOP-9212
> URL: https://issues.apache.org/jira/browse/HADOOP-9212
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.2-alpha
>Reporter: Tom White
>Assignee: Tom White
> Attachments: 1_jcarder_result_0.png, HADOOP-9212.patch, 
> HADOOP-9212.patch
>
>
> jcarder found a cycle which could lead to a potential deadlock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9210) bad mirror in download list

2013-01-15 Thread Andy Isaacson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy Isaacson resolved HADOOP-9210.
---

  Resolution: Not A Problem
Release Note: From IRC, "hadoop can't do anything about it, and we have an 
automated system that detects+fixes it".

> bad mirror in download list
> ---
>
> Key: HADOOP-9210
> URL: https://issues.apache.org/jira/browse/HADOOP-9210
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Andy Isaacson
>Priority: Minor
>
> The http://hadoop.apache.org/releases.html page links to 
> http://www.apache.org/dyn/closer.cgi/hadoop/common/ which provides a list of 
> mirrors.  The first one on the list (for me) is 
> http://www.alliedquotes.com/mirrors/apache/hadoop/common/ which is 404.
> I checked the rest of the mirrors in the list and only alliedquotes is 404.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9210) bad mirror in download list

2013-01-15 Thread Andy Isaacson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy Isaacson updated HADOOP-9210:
--

Release Note:   (was: From IRC, "hadoop can't do anything about it, and we 
have an automated system that detects+fixes it".)

> bad mirror in download list
> ---
>
> Key: HADOOP-9210
> URL: https://issues.apache.org/jira/browse/HADOOP-9210
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Andy Isaacson
>Priority: Minor
>
> The http://hadoop.apache.org/releases.html page links to 
> http://www.apache.org/dyn/closer.cgi/hadoop/common/ which provides a list of 
> mirrors.  The first one on the list (for me) is 
> http://www.alliedquotes.com/mirrors/apache/hadoop/common/ which is 404.
> I checked the rest of the mirrors in the list and only alliedquotes is 404.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9210) bad mirror in download list

2013-01-15 Thread Andy Isaacson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554269#comment-13554269
 ] 

Andy Isaacson commented on HADOOP-9210:
---

From IRC, "hadoop can't do anything about it, and we have an automated system 
that detects+fixes it".

> bad mirror in download list
> ---
>
> Key: HADOOP-9210
> URL: https://issues.apache.org/jira/browse/HADOOP-9210
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Andy Isaacson
>Priority: Minor
>
> The http://hadoop.apache.org/releases.html page links to 
> http://www.apache.org/dyn/closer.cgi/hadoop/common/ which provides a list of 
> mirrors.  The first one on the list (for me) is 
> http://www.alliedquotes.com/mirrors/apache/hadoop/common/ which is 404.
> I checked the rest of the mirrors in the list and only alliedquotes is 404.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9193) hadoop script can inadvertently expand wildcard arguments when delegating to hdfs script

2013-01-15 Thread Andy Isaacson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554284#comment-13554284
 ] 

Andy Isaacson commented on HADOOP-9193:
---

The TestZKFailoverController failure is unrelated.  There does not seem to be 
any existing test code for the {{hadoop dfs}} shell scripts, so adding tests 
for this condition is challenging.

> hadoop script can inadvertently expand wildcard arguments when delegating to 
> hdfs script
> 
>
> Key: HADOOP-9193
> URL: https://issues.apache.org/jira/browse/HADOOP-9193
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.0.2-alpha, 0.23.5
>Reporter: Jason Lowe
>Assignee: Andy Isaacson
>Priority: Minor
> Attachments: hadoop9193.diff
>
>
> The hadoop front-end script will print a deprecation warning and defer to the 
> hdfs front-end script for certain commands, like fsck, dfs.  If a wildcard 
> appears as an argument then it can be inadvertently expanded by the shell to 
> match a local filesystem path before being sent to the hdfs script, which can 
> be very confusing to the end user.
> For example, the following two commands usually perform very different 
> things, even though they should be equivalent:
> {code}
> hadoop fs -ls /tmp/\*
> hadoop dfs -ls /tmp/\*
> {code}
> The former lists everything in the default filesystem under /tmp, while the 
> latter expands /tmp/\* into everything in the *local* filesystem under /tmp 
> and passes those as arguments to try to list in the default filesystem.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9209) Add shell command to dump file checksums

2013-01-15 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554300#comment-13554300
 ] 

Todd Lipcon commented on HADOOP-9209:
-

[~kihwal], that's a good point. Maybe it's better not to include this as a 
shell command, but instead just have it be an undocumented 'tool' accessible 
by something like 'hadoop org.apache.hadoop.tools.ChecksumFile'? Putting it 
in the Shell hierarchy is nice because we get argument parsing for free, 
etc., but maybe it's unnecessary.

To play devil's advocate, though, we do expose FileSystem.getFileChecksum() as 
a public API, so it seems like offering CLI access to the same API is 
equivalent.
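
For illustration, a minimal sketch of such a tool built on the public API 
(the class name is borrowed from the hypothetical example above, not from 
the attached patch):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Minimal sketch (hypothetical tool class, not the attached patch):
// dumps the checksum of each path argument via the public
// FileSystem#getFileChecksum API.
public class ChecksumFile {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    for (String arg : args) {
      Path path = new Path(arg);
      FileSystem fs = path.getFileSystem(conf);
      FileChecksum sum = fs.getFileChecksum(path);
      // getFileChecksum returns null if the scheme has no checksum support
      System.out.println(path + "\t"
          + (sum == null ? "NONE" : sum.getAlgorithmName() + "\t" + sum));
    }
  }
}
{code}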

> Add shell command to dump file checksums
> 
>
> Key: HADOOP-9209
> URL: https://issues.apache.org/jira/browse/HADOOP-9209
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, tools
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hadoop-9209.txt, hadoop-9209.txt
>
>
> Occasionally while working with tools like distcp, or debugging certain 
> issues, it's useful to be able to quickly see the checksum of a file. We 
> currently have the APIs to efficiently calculate a checksum, but we don't 
> expose it to users. This JIRA is to add a "fs -checksum" command which dumps 
> the checksum information for the specified file(s).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8712) Change default hadoop.security.group.mapping

2013-01-15 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-8712:


   Resolution: Fixed
Fix Version/s: 3.0.0
 Release Note: The default group mapping policy has been changed to 
JniBasedUnixGroupsNetgroupMappingWithFallback. This should maintain the same 
semantics as the prior default for most users.
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to branch-2 and trunk. Thanks, Robert.

> Change default hadoop.security.group.mapping
> 
>
> Key: HADOOP-8712
> URL: https://issues.apache.org/jira/browse/HADOOP-8712
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.0.2-alpha
>Reporter: Robert Parker
>Assignee: Robert Parker
>Priority: Minor
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: HADOOP-8712-v1.patch, HADOOP-8712-v2.patch
>
>
> Change the hadoop.security.group.mapping in core-site to 
> JniBasedUnixGroupsNetgroupMappingWithFallback

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9212) Potential deadlock in FileSystem.Cache/IPC/UGI

2013-01-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554309#comment-13554309
 ] 

Hadoop QA commented on HADOOP-9212:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12564991/HADOOP-9212.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2049//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2049//console

This message is automatically generated.

> Potential deadlock in FileSystem.Cache/IPC/UGI
> --
>
> Key: HADOOP-9212
> URL: https://issues.apache.org/jira/browse/HADOOP-9212
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.2-alpha
>Reporter: Tom White
>Assignee: Tom White
> Attachments: 1_jcarder_result_0.png, HADOOP-9212.patch, 
> HADOOP-9212.patch
>
>
> jcarder found a cycle which could lead to a potential deadlock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9070) Kerberos SASL server cannot find kerberos key

2013-01-15 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554311#comment-13554311
 ] 

Daryn Sharp commented on HADOOP-9070:
-

Reverting this patch alone won't undo the version incompatibility. The SASL 
exchange was amended in another jira to send a final ack during the exchange. 
This ensured a symmetry in which every client message received a response, 
instead of the client sometimes assuming that auth was successful. If the 
assumption was wrong and the server sent an exception or a switch to simple, 
it was misinterpreted as a malformed protobuf response to the first proxy 
call.

I might be able to somehow maintain compatibility, but it's likely going to 
require hardcoded hacks.

I understand the desire to avoid wire incompat, and I would 100% agree if this 
was 2.1 or 2.2.  I'd make the case that alpha 2.0 is the time to make changes 
to support future work on the 2.x branch.  I'm concerned that the larger goal 
of pluggable SASL mechanisms won't work w/o more hacks for which mechanisms do 
or don't send a final ack, which essentially means it's not going to be 
feasible in 2.x.



> Kerberos SASL server cannot find kerberos key
> -
>
> Key: HADOOP-9070
> URL: https://issues.apache.org/jira/browse/HADOOP-9070
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Blocker
> Fix For: 2.0.3-alpha
>
> Attachments: HADOOP-9070.patch, HADOOP-9070.patch, HADOOP-9070.patch
>
>
> HADOOP-9015 inadvertently removed a {{doAs}} block around instantiation of 
> the sasl server which renders a server incapable of accepting kerberized 
> connections.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
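
To illustrate the quoted description, here is a minimal sketch of creating the 
SASL server inside a doAs block, so that the JAAS subject holding the server's 
Kerberos key is on the access-control context. The protocol/server-name 
parameters and the callback handler are hypothetical stand-ins, not the actual 
ipc.Server code:

{noformat}
import java.security.PrivilegedExceptionAction;
import javax.security.auth.callback.CallbackHandler;
import javax.security.sasl.Sasl;
import javax.security.sasl.SaslServer;
import org.apache.hadoop.security.UserGroupInformation;

public class SaslServerFactory {
  // Instantiating the GSSAPI SaslServer inside doAs() puts the login
  // subject (and its kerberos key) on the access-control context;
  // created outside doAs(), the mechanism cannot find the key.
  static SaslServer create(final String protocol, final String serverName,
      final CallbackHandler cb) throws Exception {
    return UserGroupInformation.getCurrentUser().doAs(
        new PrivilegedExceptionAction<SaslServer>() {
          @Override
          public SaslServer run() throws Exception {
            return Sasl.createSaslServer("GSSAPI", protocol, serverName,
                null, cb);
          }
        });
  }
}
{noformat}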


[jira] [Commented] (HADOOP-9212) Potential deadlock in FileSystem.Cache/IPC/UGI

2013-01-15 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554314#comment-13554314
 ] 

Todd Lipcon commented on HADOOP-9212:
-

+1, looks good to me. Nice analysis of the cycle.

> Potential deadlock in FileSystem.Cache/IPC/UGI
> --
>
> Key: HADOOP-9212
> URL: https://issues.apache.org/jira/browse/HADOOP-9212
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.2-alpha
>Reporter: Tom White
>Assignee: Tom White
> Attachments: 1_jcarder_result_0.png, HADOOP-9212.patch, 
> HADOOP-9212.patch
>
>
> jcarder found a cycle which could lead to a potential deadlock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9070) Kerberos SASL server cannot find kerberos key

2013-01-15 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554320#comment-13554320
 ] 

Todd Lipcon commented on HADOOP-9070:
-

Hi Daryn. I'm happy to do the work to make it compatible... I'm assuming the 
other JIRA you're mentioning is HADOOP-8999? One thing I'm wondering: are the 
SASL negotiation improvements necessary/applicable for the DIGEST and GSS SASL 
mechanisms in use now? Or are they only important for future extensions in 
other SASL mechanisms (as mentioned in the description of 8999?)

> Kerberos SASL server cannot find kerberos key
> -
>
> Key: HADOOP-9070
> URL: https://issues.apache.org/jira/browse/HADOOP-9070
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Blocker
> Fix For: 2.0.3-alpha
>
> Attachments: HADOOP-9070.patch, HADOOP-9070.patch, HADOOP-9070.patch
>
>
> HADOOP-9015 inadvertently removed a {{doAs}} block around instantiation of 
> the sasl server which renders a server incapable of accepting kerberized 
> connections.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8999) SASL negotiation is flawed

2013-01-15 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-8999:


Hadoop Flags: Incompatible change,Reviewed  (was: Reviewed)

> SASL negotiation is flawed
> --
>
> Key: HADOOP-8999
> URL: https://issues.apache.org/jira/browse/HADOOP-8999
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: HADOOP-8999.patch
>
>
> The RPC protocol used for SASL negotiation is flawed.  The server's RPC 
> response contains the next SASL challenge token, but a SASL server can return 
> null ("I'm done") or an N-byte challenge.  The server currently will not 
> send an RPC success response to the client if the SASL server returns null, 
> which causes the client to hang until it times out.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
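
For illustration, a minimal sketch of the server-side handling the description 
calls for; sendSuccess() and sendChallenge() are hypothetical reply helpers, 
not actual ipc.Server methods:

{noformat}
import javax.security.sasl.SaslException;
import javax.security.sasl.SaslServer;

public abstract class SaslNegotiation {
  // evaluateResponse() may return null ("I'm done") or an N-byte
  // challenge.  On completion the server must still send an explicit
  // success response, or the client hangs waiting for a reply.
  void processToken(SaslServer saslServer, byte[] clientToken)
      throws SaslException {
    byte[] challenge = saslServer.evaluateResponse(clientToken);
    if (saslServer.isComplete()) {
      sendSuccess(challenge);    // ack even when challenge == null
    } else {
      sendChallenge(challenge);  // another negotiation round
    }
  }

  abstract void sendSuccess(byte[] finalChallenge);
  abstract void sendChallenge(byte[] challenge);
}
{noformat}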


[jira] [Commented] (HADOOP-8712) Change default hadoop.security.group.mapping

2013-01-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554327#comment-13554327
 ] 

Hudson commented on HADOOP-8712:


Integrated in Hadoop-trunk-Commit #3244 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3244/])
HADOOP-8712. Change default hadoop.security.group.mapping to 
JniBasedUnixGroupsNetgroupMappingWithFallback. Contributed by Robert Parker. 
(Revision 1433624)

 Result = SUCCESS
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1433624
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_permissions_guide.xml


> Change default hadoop.security.group.mapping
> 
>
> Key: HADOOP-8712
> URL: https://issues.apache.org/jira/browse/HADOOP-8712
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.0.2-alpha
>Reporter: Robert Parker
>Assignee: Robert Parker
>Priority: Minor
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: HADOOP-8712-v1.patch, HADOOP-8712-v2.patch
>
>
> Change the hadoop.security.group.mapping in core-site to 
> JniBasedUnixGroupsNetgroupMappingWithFallback

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
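
For reference, the same override expressed with the Configuration API; a 
minimal sketch, assuming only the key and class name quoted in the commit 
message above:

{noformat}
import org.apache.hadoop.conf.Configuration;

public class GroupMappingDefault {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // The new default from this commit, shown as an explicit setting.
    conf.set("hadoop.security.group.mapping",
        "org.apache.hadoop.security."
        + "JniBasedUnixGroupsNetgroupMappingWithFallback");
  }
}
{noformat}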


[jira] [Assigned] (HADOOP-9215) libhadoop.so doesn't exist (only libhadoop.so.1.0.0)

2013-01-15 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe reassigned HADOOP-9215:


Assignee: Colin Patrick McCabe

> libhadoop.so doesn't exist (only libhadoop.so.1.0.0)
> 
>
> Key: HADOOP-9215
> URL: https://issues.apache.org/jira/browse/HADOOP-9215
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Thomas Graves
>Assignee: Colin Patrick McCabe
>Priority: Blocker
>
> Looks like none of the .so files are being built. They all have a .so.1.0.0 
> file but no plain .so file.  branch-0.23 works fine, but trunk and branch-2 
> are broken.
> This actually applies to both libhadoop.so and libhdfs.so.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9215) libhadoop.so doesn't exist (only libhadoop.so.1.0.0)

2013-01-15 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554340#comment-13554340
 ] 

Colin Patrick McCabe commented on HADOOP-9215:
--

We definitely need to support CMake 2.6 since it is present on RHEL5, which we 
want to support.

I'll take a look at working around this issue in the CMakeLists.txt.

> libhadoop.so doesn't exist (only libhadoop.so.1.0.0)
> 
>
> Key: HADOOP-9215
> URL: https://issues.apache.org/jira/browse/HADOOP-9215
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Thomas Graves
>Assignee: Colin Patrick McCabe
>Priority: Blocker
>
> Looks like none of the .so files are being built. They all have a .so.1.0.0 
> file but no plain .so file.  branch-0.23 works fine, but trunk and branch-2 
> are broken.
> This actually applies to both libhadoop.so and libhdfs.so.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9216) CompressionCodecFactory#getCodecClasses should trim the result of parsing by Configuration.

2013-01-15 Thread Tsuyoshi OZAWA (JIRA)
Tsuyoshi OZAWA created HADOOP-9216:
--

 Summary: CompressionCodecFactory#getCodecClasses should trim the 
result of parsing by Configuration.
 Key: HADOOP-9216
 URL: https://issues.apache.org/jira/browse/HADOOP-9216
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Tsuyoshi OZAWA


CompressionCodecFactory#getCodecClasses doesn't trim its input.
This can confuse users of CompressionCodecFactory.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9216) CompressionCodecFactory#getCodecClasses should trim the result of parsing by Configuration.

2013-01-15 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HADOOP-9216:
---

Attachment: HADOOP-9216.patch

Attached a test and a fix.

> CompressionCodecFactory#getCodecClasses should trim the result of parsing by 
> Configuration.
> ---
>
> Key: HADOOP-9216
> URL: https://issues.apache.org/jira/browse/HADOOP-9216
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Tsuyoshi OZAWA
> Attachments: HADOOP-9216.patch
>
>
> CompressionCodecFactory#getCodecClasses doesn't trim its input.
> This can confuse users of CompressionCodecFactory.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9216) CompressionCodecFactory#getCodecClasses should trim the result of parsing by Configuration.

2013-01-15 Thread Tsuyoshi OZAWA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi OZAWA updated HADOOP-9216:
---

Description: 
CompressionCodecFactory#getCodecClasses doesn't trim its input.
This can confuse users of CompressionCodecFactory. For example, the following 
setting can cause an error because of the spaces in the values.

{quote}
 conf.set("io.compression.codecs", 
"  org.apache.hadoop.io.compress.GzipCodec , " +
" org.apache.hadoop.io.compress.DefaultCodec  , " +
"org.apache.hadoop.io.compress.BZip2Codec   ");
{quote}


This ticket deals with this problem.

  was:
CompressionCodecFactory#getCodecClasses doesn't trim its input.
This can confuse users of CompressionCodecFactory.


> CompressionCodecFactory#getCodecClasses should trim the result of parsing by 
> Configuration.
> ---
>
> Key: HADOOP-9216
> URL: https://issues.apache.org/jira/browse/HADOOP-9216
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Tsuyoshi OZAWA
> Attachments: HADOOP-9216.patch
>
>
> CompressionCodecFactory#getCodecClasses doesn't trim its input.
> This can confuse users of CompressionCodecFactory. For example, the 
> following setting can cause an error because of the spaces in the values.
> {quote}
>  conf.set("io.compression.codecs", 
> "  org.apache.hadoop.io.compress.GzipCodec , " +
> " org.apache.hadoop.io.compress.DefaultCodec  , " +
> "org.apache.hadoop.io.compress.BZip2Codec   ");
> {quote}
> This ticket deals with this problem.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
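
A minimal sketch of the kind of trimming the ticket asks for (not the attached 
patch): split the configured value on commas and trim each class name so that 
entries like the example above resolve:

{noformat}
import java.util.ArrayList;
import java.util.List;

public class CodecListParsing {
  // Split a comma-separated list of codec class names, trimming the
  // surrounding whitespace and dropping empty entries.
  static List<String> parseCodecNames(String value) {
    List<String> names = new ArrayList<String>();
    for (String raw : value.split(",")) {
      String name = raw.trim();
      if (!name.isEmpty()) {
        names.add(name);
      }
    }
    return names;
  }
}
{noformat}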


[jira] [Created] (HADOOP-9217) Print tread dumps when hadoop-common tests fail

2013-01-15 Thread Andrey Klochkov (JIRA)
Andrey Klochkov created HADOOP-9217:
---

 Summary: Print tread dumps when hadoop-common tests fail
 Key: HADOOP-9217
 URL: https://issues.apache.org/jira/browse/HADOOP-9217
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Reporter: Andrey Klochkov


Printing tread dumps when tests fail due to timeouts was introduced in 
HADOOP-8755, but was enabled in M/R, HDFS and Yarn only. 
It makes sense to enable in hadoop-common as well. In particular, 
TestZKFailoverController seems to be one of the most flaky tests in trunk 
currently and having thread dumps may help debugging this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
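
For context, a minimal sketch of the kind of dump a timed-out test produces 
under HADOOP-8755; the actual wiring is done through the surefire 
configuration in pom.xml, and this standalone helper is only illustrative:

{noformat}
import java.util.Map;

public class ThreadDumper {
  // Print a stack trace for every live thread, the information that
  // helps debug a hung test such as TestZKFailoverController.
  static void dumpAllThreads() {
    for (Map.Entry<Thread, StackTraceElement[]> e
        : Thread.getAllStackTraces().entrySet()) {
      System.err.println("\"" + e.getKey().getName() + "\"");
      for (StackTraceElement frame : e.getValue()) {
        System.err.println("        at " + frame);
      }
    }
  }
}
{noformat}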


[jira] [Updated] (HADOOP-9217) Print thread dumps when hadoop-common tests fail

2013-01-15 Thread Andrey Klochkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Klochkov updated HADOOP-9217:


Summary: Print thread dumps when hadoop-common tests fail  (was: Print 
tread dumps when hadoop-common tests fail)

> Print thread dumps when hadoop-common tests fail
> 
>
> Key: HADOOP-9217
> URL: https://issues.apache.org/jira/browse/HADOOP-9217
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Andrey Klochkov
>
> Printing tread dumps when tests fail due to timeouts was introduced in 
> HADOOP-8755, but was enabled in M/R, HDFS and Yarn only. 
> It makes sense to enable in hadoop-common as well. In particular, 
> TestZKFailoverController seems to be one of the most flaky tests in trunk 
> currently and having thread dumps may help debugging this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9217) Print thread dumps when hadoop-common tests fail

2013-01-15 Thread Andrey Klochkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Klochkov updated HADOOP-9217:


Description: 
Printing thread dumps when tests fail due to timeouts was introduced in 
HADOOP-8755, but was enabled in M/R, HDFS and Yarn only. 
It makes sense to enable in hadoop-common as well. In particular, 
TestZKFailoverController seems to be one of the most flaky tests in trunk 
currently and having thread dumps may help debugging this.

  was:
Printing tread dumps when tests fail due to timeouts was introduced in 
HADOOP-8755, but was enabled in M/R, HDFS and Yarn only. 
It makes sense to enable in hadoop-common as well. In particular, 
TestZKFailoverController seems to be one of the most flaky tests in trunk 
currently and having thread dumps may help debugging this.


> Print thread dumps when hadoop-common tests fail
> 
>
> Key: HADOOP-9217
> URL: https://issues.apache.org/jira/browse/HADOOP-9217
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Andrey Klochkov
>
> Printing thread dumps when tests fail due to timeouts was introduced in 
> HADOOP-8755, but was enabled in M/R, HDFS and Yarn only. 
> It makes sense to enable in hadoop-common as well. In particular, 
> TestZKFailoverController seems to be one of the most flaky tests in trunk 
> currently and having thread dumps may help debugging this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9217) Print thread dumps when hadoop-common tests fail

2013-01-15 Thread Andrey Klochkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Klochkov updated HADOOP-9217:


Attachment: HADOOP-9217.patch

The patch can be applied to all 3 branches (trunk, branch-2 and branch-0.23)

> Print thread dumps when hadoop-common tests fail
> 
>
> Key: HADOOP-9217
> URL: https://issues.apache.org/jira/browse/HADOOP-9217
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Andrey Klochkov
> Attachments: HADOOP-9217.patch
>
>
> Printing thread dumps when tests fail due to timeouts was introduced in 
> HADOOP-8755, but was enabled in M/R, HDFS and Yarn only. 
> It makes sense to enable in hadoop-common as well. In particular, 
> TestZKFailoverController seems to be one of the most flaky tests in trunk 
> currently and having thread dumps may help debugging this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9217) Print thread dumps when hadoop-common tests fail

2013-01-15 Thread Andrey Klochkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Klochkov updated HADOOP-9217:


 Assignee: Andrey Klochkov
Affects Version/s: 2.0.2-alpha
   0.23.5
   Status: Patch Available  (was: Open)

> Print thread dumps when hadoop-common tests fail
> 
>
> Key: HADOOP-9217
> URL: https://issues.apache.org/jira/browse/HADOOP-9217
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.23.5, 2.0.2-alpha
>Reporter: Andrey Klochkov
>Assignee: Andrey Klochkov
> Attachments: HADOOP-9217.patch
>
>
> Printing thread dumps when tests fail due to timeouts was introduced in 
> HADOOP-8755, but was enabled in M/R, HDFS and Yarn only. 
> It makes sense to enable in hadoop-common as well. In particular, 
> TestZKFailoverController seems to be one of the most flaky tests in trunk 
> currently and having thread dumps may help debugging this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9106) Allow configuration of IPC connect timeout

2013-01-15 Thread Robert Parker (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554406#comment-13554406
 ] 

Robert Parker commented on HADOOP-9106:
---

Suresh, with respect to 
| Change "final public static" to "public static final"

I would like to leave "final public static" for consistency and file a separate 
ticket to change all the uses of "final public static" to "public static final".

> Allow configuration of IPC connect timeout
> --
>
> Key: HADOOP-9106
> URL: https://issues.apache.org/jira/browse/HADOOP-9106
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Robert Parker
> Attachments: HADOOP-9106v1.patch, HADOOP-9106v2.patch, 
> HADOOP-9106v3.patch
>
>
> Currently the connection timeout in Client.setupConnection() is hard coded to 
> 20seconds. This is unreasonable in some scenarios, such as HA failover, if we 
> want a faster failover time. We should allow this to be configured per-client.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8999) SASL negotiation is flawed

2013-01-15 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554409#comment-13554409
 ] 

Suresh Srinivas commented on HADOOP-8999:
-

Daryn, can you please move the change description of this patch to Incompatible 
Changes section.

> SASL negotiation is flawed
> --
>
> Key: HADOOP-8999
> URL: https://issues.apache.org/jira/browse/HADOOP-8999
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: HADOOP-8999.patch
>
>
> The RPC protocol used for SASL negotiation is flawed.  The server's RPC 
> response contains the next SASL challenge token, but a SASL server can return 
> null ("I'm done") or an N-byte challenge.  The server currently will not 
> send an RPC success response to the client if the SASL server returns null, 
> which causes the client to hang until it times out.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9209) Add shell command to dump file checksums

2013-01-15 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554411#comment-13554411
 ] 

Kihwal Lee commented on HADOOP-9209:


bq. To play devil's advocate, though, we do expose FileSystem.getFileChecksum() 
as a public API, so it seems like offering CLI access to the same API is 
equivalent.

CLI access through FsShell sounds reasonable, as long as the distinct 
properties of the HDFS file checksum are properly documented.

> Add shell command to dump file checksums
> 
>
> Key: HADOOP-9209
> URL: https://issues.apache.org/jira/browse/HADOOP-9209
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, tools
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hadoop-9209.txt, hadoop-9209.txt
>
>
> Occasionally while working with tools like distcp, or debugging certain 
> issues, it's useful to be able to quickly see the checksum of a file. We 
> currently have the APIs to efficiently calculate a checksum, but we don't 
> expose it to users. This JIRA is to add a "fs -checksum" command which dumps 
> the checksum information for the specified file(s).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8999) SASL negotiation is flawed

2013-01-15 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554412#comment-13554412
 ] 

Suresh Srinivas commented on HADOOP-8999:
-

Also please add Release Notes to describe why this is incompatible.

> SASL negotiation is flawed
> --
>
> Key: HADOOP-8999
> URL: https://issues.apache.org/jira/browse/HADOOP-8999
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: HADOOP-8999.patch
>
>
> The RPC protocol used for SASL negotiation is flawed.  The server's RPC 
> response contains the next SASL challenge token, but a SASL server can return 
> null ("I'm done") or an N-byte challenge.  The server currently will not 
> send an RPC success response to the client if the SASL server returns null, 
> which causes the client to hang until it times out.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9209) Add shell command to dump file checksums

2013-01-15 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554432#comment-13554432
 ] 

Todd Lipcon commented on HADOOP-9209:
-

Yea... the issue is that the distinct properties are odd... here's a first 
crack at how I understand it:

- If the checksum "algorithm names" are different, then we can say nothing 
about whether the files are identical. (does the "algorithm name" fully 
encompass things like the block size?)
- If the checksum "algorithm names" are the same, and the checksums are the 
same, then the files are probably identical (except for possibilities of hash 
collision)
- If the checksum "algorithm names" are the same, but the checksums differ, 
then the files are definitely not identical.

Does that mesh with your understanding? Or does the block size not properly 
propagate into the algorithm name string? (and if that's the case, then under 
what cases can we actually make definitive judgments?)

> Add shell command to dump file checksums
> 
>
> Key: HADOOP-9209
> URL: https://issues.apache.org/jira/browse/HADOOP-9209
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, tools
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hadoop-9209.txt, hadoop-9209.txt
>
>
> Occasionally while working with tools like distcp, or debugging certain 
> issues, it's useful to be able to quickly see the checksum of a file. We 
> currently have the APIs to efficiently calculate a checksum, but we don't 
> expose it to users. This JIRA is to add a "fs -checksum" command which dumps 
> the checksum information for the specified file(s).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
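
Todd's three cases map directly onto the public API; a minimal sketch, assuming 
only FileSystem.getFileChecksum() and FileChecksum.getAlgorithmName() (the 
compare() helper is made up here):

{noformat}
import java.io.IOException;
import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ChecksumCompare {
  // Different algorithm names (or no checksum at all): no conclusion.
  // Same algorithm, same bytes: probably identical (modulo collisions).
  // Same algorithm, different bytes: definitely not identical.
  static String compare(FileSystem srcFs, Path src,
                        FileSystem dstFs, Path dst) throws IOException {
    FileChecksum a = srcFs.getFileChecksum(src);
    FileChecksum b = dstFs.getFileChecksum(dst);
    if (a == null || b == null
        || !a.getAlgorithmName().equals(b.getAlgorithmName())) {
      return "unknown";
    }
    return a.equals(b) ? "probably identical" : "definitely not identical";
  }
}
{noformat}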


[jira] [Commented] (HADOOP-9106) Allow configuration of IPC connect timeout

2013-01-15 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554436#comment-13554436
 ] 

Suresh Srinivas commented on HADOOP-9106:
-

bq. I would like to leave "final public static" for consistency and file a 
separate ticket to change all the uses of "final public static" to "public 
static final".
The code has a mix of public static final and other non-standard variants, so 
the new code could just use the right convention. But I will leave it up to 
you. +1 for making the code consistent in a separate jira.

bq. We could set the member variable in the constructor but that dilutes the 
meaning of setTimeoutConnection (even if it may not be an actual use case to 
set it more than once).
The setConnectionTimeout() is setting a parameter in the Configuration object 
and has nothing to do with the {{Client}} class, right? So I fail to understand 
the above point.

The way I see it, {{Client}} gets {{Configuration}} in the constructor. That is 
the only point in time at which the connection timeout for a client is decided. 
This is also formalized by the member variable {{conf}} being declared final. 
Given that, I do not understand why the timeout cannot be set in a final member 
variable of {{Client}}, to clearly show that it is only set once at 
creation/construction time.

> Allow configuration of IPC connect timeout
> --
>
> Key: HADOOP-9106
> URL: https://issues.apache.org/jira/browse/HADOOP-9106
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Robert Parker
> Attachments: HADOOP-9106v1.patch, HADOOP-9106v2.patch, 
> HADOOP-9106v3.patch
>
>
> Currently the connection timeout in Client.setupConnection() is hard coded to 
> 20seconds. This is unreasonable in some scenarios, such as HA failover, if we 
> want a faster failover time. We should allow this to be configured per-client.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9217) Print thread dumps when hadoop-common tests fail

2013-01-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554455#comment-13554455
 ] 

Hadoop QA commented on HADOOP-9217:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12565014/HADOOP-9217.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

  {color:red}-1 javac{color}.  The applied patch generated 2022 javac 
compiler warnings (more than the trunk's current 2014 warnings).

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2050//testReport/
Javac warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2050//artifact/trunk/patchprocess/diffJavacWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2050//console

This message is automatically generated.

> Print thread dumps when hadoop-common tests fail
> 
>
> Key: HADOOP-9217
> URL: https://issues.apache.org/jira/browse/HADOOP-9217
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.0.2-alpha, 0.23.5
>Reporter: Andrey Klochkov
>Assignee: Andrey Klochkov
> Attachments: HADOOP-9217.patch
>
>
> Printing thread dumps when tests fail due to timeouts was introduced in 
> HADOOP-8755, but was enabled in M/R, HDFS and Yarn only. 
> It makes sense to enable in hadoop-common as well. In particular, 
> TestZKFailoverController seems to be one of the most flaky tests in trunk 
> currently and having thread dumps may help debugging this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9217) Print thread dumps when hadoop-common tests fail

2013-01-15 Thread Andrey Klochkov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554459#comment-13554459
 ] 

Andrey Klochkov commented on HADOOP-9217:
-

No tests are necessary as this is a change in build scripts.

> Print thread dumps when hadoop-common tests fail
> 
>
> Key: HADOOP-9217
> URL: https://issues.apache.org/jira/browse/HADOOP-9217
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.0.2-alpha, 0.23.5
>Reporter: Andrey Klochkov
>Assignee: Andrey Klochkov
> Attachments: HADOOP-9217.patch
>
>
> Printing thread dumps when tests fail due to timeouts was introduced in 
> HADOOP-8755, but was enabled in M/R, HDFS and Yarn only. 
> It makes sense to enable in hadoop-common as well. In particular, 
> TestZKFailoverController seems to be one of the most flaky tests in trunk 
> currently and having thread dumps may help debugging this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9070) Kerberos SASL server cannot find kerberos key

2013-01-15 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554460#comment-13554460
 ] 

Daryn Sharp commented on HADOOP-9070:
-

Yes, I believe HADOOP-8999 is where the change (or bulk of it) was made.  My 
memory is fuzzy, but I think there was an existing case where a malformed 
protobuf exception was generated when the client wasn't reading a final 
response.  The change is largely intended to support PLAIN and/or other future 
SASL mechanisms, but it's definitely a bug that the client and server cannot be 
sure that SASL has completed.

> Kerberos SASL server cannot find kerberos key
> -
>
> Key: HADOOP-9070
> URL: https://issues.apache.org/jira/browse/HADOOP-9070
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Blocker
> Fix For: 2.0.3-alpha
>
> Attachments: HADOOP-9070.patch, HADOOP-9070.patch, HADOOP-9070.patch
>
>
> HADOOP-9015 inadvertently removed a {{doAs}} block around instantiation of 
> the sasl server which renders a server incapable of accepting kerberized 
> connections.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9217) Print thread dumps when hadoop-common tests fail

2013-01-15 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554463#comment-13554463
 ] 

Suresh Srinivas commented on HADOOP-9217:
-

+1 for the change.

> Print thread dumps when hadoop-common tests fail
> 
>
> Key: HADOOP-9217
> URL: https://issues.apache.org/jira/browse/HADOOP-9217
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.0.2-alpha, 0.23.5
>Reporter: Andrey Klochkov
>Assignee: Andrey Klochkov
> Attachments: HADOOP-9217.patch
>
>
> Printing thread dumps when tests fail due to timeouts was introduced in 
> HADOOP-8755, but was enabled in M/R, HDFS and Yarn only. 
> It makes sense to enable in hadoop-common as well. In particular, 
> TestZKFailoverController seems to be one of the most flaky tests in trunk 
> currently and having thread dumps may help debugging this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9217) Print thread dumps when hadoop-common tests fail

2013-01-15 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-9217:


   Resolution: Fixed
Fix Version/s: 0.23.6
   2.0.3-alpha
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed the patch to trunk, branch-2 and 0.23.

Thank you Andrey.

> Print thread dumps when hadoop-common tests fail
> 
>
> Key: HADOOP-9217
> URL: https://issues.apache.org/jira/browse/HADOOP-9217
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.0.2-alpha, 0.23.5
>Reporter: Andrey Klochkov
>Assignee: Andrey Klochkov
> Fix For: 2.0.3-alpha, 0.23.6
>
> Attachments: HADOOP-9217.patch
>
>
> Printing thread dumps when tests fail due to timeouts was introduced in 
> HADOOP-8755, but was enabled in M/R, HDFS and Yarn only. 
> It makes sense to enable in hadoop-common as well. In particular, 
> TestZKFailoverController seems to be one of the most flaky tests in trunk 
> currently and having thread dumps may help debugging this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9217) Print thread dumps when hadoop-common tests fail

2013-01-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554476#comment-13554476
 ] 

Hudson commented on HADOOP-9217:


Integrated in Hadoop-trunk-Commit #3245 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3245/])
HADOOP-9217. Print thread dumps when hadoop-common tests fail. Contributed 
by Andrey Klochkov. (Revision 1433713)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1433713
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml


> Print thread dumps when hadoop-common tests fail
> 
>
> Key: HADOOP-9217
> URL: https://issues.apache.org/jira/browse/HADOOP-9217
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.0.2-alpha, 0.23.5
>Reporter: Andrey Klochkov
>Assignee: Andrey Klochkov
> Fix For: 2.0.3-alpha, 0.23.6
>
> Attachments: HADOOP-9217.patch
>
>
> Printing thread dumps when tests fail due to timeouts was introduced in 
> HADOOP-8755, but was enabled in M/R, HDFS and Yarn only. 
> It makes sense to enable in hadoop-common as well. In particular, 
> TestZKFailoverController seems to be one of the most flaky tests in trunk 
> currently and having thread dumps may help debugging this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9106) Allow configuration of IPC connect timeout

2013-01-15 Thread Robert Parker (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Parker updated HADOOP-9106:
--

Attachment: HADOOP-9106v4.patch

> Allow configuration of IPC connect timeout
> --
>
> Key: HADOOP-9106
> URL: https://issues.apache.org/jira/browse/HADOOP-9106
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Robert Parker
> Attachments: HADOOP-9106v1.patch, HADOOP-9106v2.patch, 
> HADOOP-9106v3.patch, HADOOP-9106v4.patch
>
>
> Currently the connection timeout in Client.setupConnection() is hard coded to 
> 20seconds. This is unreasonable in some scenarios, such as HA failover, if we 
> want a faster failover time. We should allow this to be configured per-client.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9106) Allow configuration of IPC connect timeout

2013-01-15 Thread Robert Parker (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554480#comment-13554480
 ] 

Robert Parker commented on HADOOP-9106:
---

Suresh,
Thanks for your input.  This patch uses the correct convention and has the 
final member variable.

> Allow configuration of IPC connect timeout
> --
>
> Key: HADOOP-9106
> URL: https://issues.apache.org/jira/browse/HADOOP-9106
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Robert Parker
> Attachments: HADOOP-9106v1.patch, HADOOP-9106v2.patch, 
> HADOOP-9106v3.patch, HADOOP-9106v4.patch
>
>
> Currently the connection timeout in Client.setupConnection() is hard coded to 
> 20seconds. This is unreasonable in some scenarios, such as HA failover, if we 
> want a faster failover time. We should allow this to be configured per-client.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9134) Unified server side user groups mapping service

2013-01-15 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-9134:
--

Attachment: HADOOP-9134.patch

Initial patch for review.

> Unified server side user groups mapping service
> ---
>
> Key: HADOOP-9134
> URL: https://issues.apache.org/jira/browse/HADOOP-9134
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.0.3-alpha
>Reporter: Kai Zheng
> Attachments: HADOOP-9134.patch
>
>
> This proposes to provide/expose the server-side user group mapping service in 
> the NameNode to clients, so that user group mapping can be kept in a single 
> place and thus unified across all nodes and clients.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9134) Unified server side user groups mapping service

2013-01-15 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-9134:
--

Status: Patch Available  (was: Open)

> Unified server side user groups mapping service
> ---
>
> Key: HADOOP-9134
> URL: https://issues.apache.org/jira/browse/HADOOP-9134
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.0.3-alpha
>Reporter: Kai Zheng
> Attachments: HADOOP-9134.patch
>
>
> This proposes to provide/expose the server-side user group mapping service in 
> the NameNode to clients, so that user group mapping can be kept in a single 
> place and thus unified across all nodes and clients.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9106) Allow configuration of IPC connect timeout

2013-01-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554499#comment-13554499
 ] 

Hadoop QA commented on HADOOP-9106:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12565027/HADOOP-9106v4.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2051//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2051//console

This message is automatically generated.

> Allow configuration of IPC connect timeout
> --
>
> Key: HADOOP-9106
> URL: https://issues.apache.org/jira/browse/HADOOP-9106
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Robert Parker
> Attachments: HADOOP-9106v1.patch, HADOOP-9106v2.patch, 
> HADOOP-9106v3.patch, HADOOP-9106v4.patch
>
>
> Currently the connection timeout in Client.setupConnection() is hard coded to 
> 20seconds. This is unreasonable in some scenarios, such as HA failover, if we 
> want a faster failover time. We should allow this to be configured per-client.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9134) Unified server side user groups mapping service

2013-01-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554511#comment-13554511
 ] 

Hadoop QA commented on HADOOP-9134:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12565033/HADOOP-9134.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2052//console

This message is automatically generated.

> Unified server side user groups mapping service
> ---
>
> Key: HADOOP-9134
> URL: https://issues.apache.org/jira/browse/HADOOP-9134
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.0.3-alpha
>Reporter: Kai Zheng
> Attachments: HADOOP-9134.patch
>
>
> This proposes to provide/expose the server side user group mapping service in 
> NameNode to clients so that user group mapping can be kept in the single 
> place and thus unified in all nodes and clients.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HADOOP-9203) RPCCallBenchmark should find a random available port

2013-01-15 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers reassigned HADOOP-9203:
--

Assignee: Andrew Purtell

> RPCCallBenchmark should find a random available port
> 
>
> Key: HADOOP-9203
> URL: https://issues.apache.org/jira/browse/HADOOP-9203
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc, test
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Trivial
> Fix For: 2.0.3-alpha
>
> Attachments: HADOOP-9203.patch, HADOOP-9203.patch
>
>
> RPCCallBenchmark insists on port 12345 by default. It should find a random 
> ephemeral range port instead if one isn't specified.
> {noformat}
> testBenchmarkWithProto(org.apache.hadoop.ipc.TestRPCCallBenchmark)  Time 
> elapsed: 5092 sec  <<< ERROR!
> java.net.BindException: Problem binding to [0.0.0.0:12345] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:710)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:361)
>   at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:459)
>   at org.apache.hadoop.ipc.Server.<init>(Server.java:1877)
>   at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:982)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:376)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:351)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:825)
>   at 
> org.apache.hadoop.ipc.RPCCallBenchmark.startServer(RPCCallBenchmark.java:230)
>   at org.apache.hadoop.ipc.RPCCallBenchmark.run(RPCCallBenchmark.java:264)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>   at 
> org.apache.hadoop.ipc.TestRPCCallBenchmark.testBenchmarkWithProto(TestRPCCallBenchmark.java:43)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$1.run(FailOnTimeout.java:28)
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
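
The usual fix is to bind to port 0 and let the OS pick a free ephemeral port; a 
minimal sketch (whether the attached patch does exactly this is not shown here):

{noformat}
import java.io.IOException;
import java.net.ServerSocket;

public class FreePort {
  // Binding to port 0 asks the OS for any free ephemeral port,
  // instead of insisting on a fixed port like 12345.
  static int findFreePort() throws IOException {
    ServerSocket socket = new ServerSocket(0);
    try {
      return socket.getLocalPort();
    } finally {
      socket.close();
    }
  }
}
{noformat}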


[jira] [Updated] (HADOOP-9106) Allow configuration of IPC connect timeout

2013-01-15 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-9106:


   Resolution: Fixed
Fix Version/s: 2.0.3-alpha
 Release Note: This jira introduces a new configuration parameter 
"ipc.client.connect.timeout". This configuration defines the Hadoop RPC 
connection timeout in milliseconds for a client to connect to a server. For 
details see the description associated with this configuration in 
core-default.xml.
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I committed the patch to branch-2 and trunk.

Thank you, Robert!

> Allow configuration of IPC connect timeout
> --
>
> Key: HADOOP-9106
> URL: https://issues.apache.org/jira/browse/HADOOP-9106
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Robert Parker
> Fix For: 2.0.3-alpha
>
> Attachments: HADOOP-9106v1.patch, HADOOP-9106v2.patch, 
> HADOOP-9106v3.patch, HADOOP-9106v4.patch
>
>
> Currently the connection timeout in Client.setupConnection() is hard coded to 
> 20seconds. This is unreasonable in some scenarios, such as HA failover, if we 
> want a faster failover time. We should allow this to be configured per-client.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
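
Using the new key from the release note, a minimal client-side sketch (the 
5-second value is only an example):

{noformat}
import org.apache.hadoop.conf.Configuration;

public class IpcTimeoutExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Connect faster than the old hard-coded 20-second timeout,
    // e.g. for quicker HA failover.  Value is in milliseconds.
    conf.setInt("ipc.client.connect.timeout", 5000);
  }
}
{noformat}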


[jira] [Commented] (HADOOP-9106) Allow configuration of IPC connect timeout

2013-01-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554549#comment-13554549
 ] 

Hudson commented on HADOOP-9106:


Integrated in Hadoop-trunk-Commit #3246 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3246/])
HADOOP-9106. Allow configuration of IPC connect timeout. Contributed by 
Robert Parker. (Revision 1433747)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1433747
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestIPC.java


> Allow configuration of IPC connect timeout
> --
>
> Key: HADOOP-9106
> URL: https://issues.apache.org/jira/browse/HADOOP-9106
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Robert Parker
> Fix For: 2.0.3-alpha
>
> Attachments: HADOOP-9106v1.patch, HADOOP-9106v2.patch, 
> HADOOP-9106v3.patch, HADOOP-9106v4.patch
>
>
> Currently the connection timeout in Client.setupConnection() is hard coded to 
> 20seconds. This is unreasonable in some scenarios, such as HA failover, if we 
> want a faster failover time. We should allow this to be configured per-client.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9070) Kerberos SASL server cannot find kerberos key

2013-01-15 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554551#comment-13554551
 ] 

Suresh Srinivas commented on HADOOP-9070:
-

bq. I understand the desire to avoid wire incompat, and I would 100% agree if 
this was 2.1 or 2.2. I'd make the case that alpha 2.0 is the time to make 
changes to support future work on the 2.x branch.
+1. I have been making the same point in many many jiras, all of which are 
mainly blocked due to CDH4.

If a simple compatible change is made as an alternate to this change, I am 
okay. Anything that adds unnecessary complexity will be vetoed by me.

> Kerberos SASL server cannot find kerberos key
> -
>
> Key: HADOOP-9070
> URL: https://issues.apache.org/jira/browse/HADOOP-9070
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Blocker
> Fix For: 2.0.3-alpha
>
> Attachments: HADOOP-9070.patch, HADOOP-9070.patch, HADOOP-9070.patch
>
>
> HADOOP-9015 inadvertently removed a {{doAs}} block around instantiation of 
> the sasl server which renders a server incapable of accepting kerberized 
> connections.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8990) Some minor issus in protobuf based ipc

2013-01-15 Thread Sanjay Radia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554576#comment-13554576
 ] 

Sanjay Radia commented on HADOOP-8990:
--

bq. there are still some Writables: RpcRequestWritable and RpcResponseWritable

These are wrappers and not actual writables sent across the wire. They happen 
to be writable so that the client and server can call the write and read 
methods. I will create a jira to document this better.

> Some minor issus in protobuf based ipc
> --
>
> Key: HADOOP-8990
> URL: https://issues.apache.org/jira/browse/HADOOP-8990
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Binglin Chang
>Priority: Minor
>
> 1. proto file naming
> RpcPayloadHeader.proto includes not only RpcPayLoadHeaderProto but also 
> RpcResponseHeaderProto, which is unrelated to the file name.
> hadoop_rpc.proto includes only HadoopRpcRequestProto, and the filename 
> "hadoop_rpc" looks odd compared to the other .proto file names.
> How about merging those two files into HadoopRpc.proto?
> 2. proto class naming
> In the rpc request, RpcPayloadHeaderProto includes the callId, but in the rpc 
> response the callId is included in RpcResponseHeaderProto, and there is also 
> HadoopRpcRequestProto; this is just too confusing.
> 3. The rpc system is not fully protobuf based; there are still some Writables 
> (RpcRequestWritable and RpcResponseWritable), and the rpc response exception 
> name and stack trace are plain strings.
> Also, RpcRequestWritable uses a protobuf-style varint32 length prefix, but 
> RpcResponseWritable uses an int32 prefix; why this inconsistency?
> Currently the rpc request is split into a length, a PayLoadHeader, and a 
> PayLoad, and the response into an RpcResponseHeader, the response, and an 
> error message.
> I think wrapping the request and response into a single RequestProto and 
> ResponseProto is better, because this gives a formal, complete wire format 
> definition; otherwise developers need to read the source code and hard-code 
> the communication format.
> These issues make the rpc interfaces confusing and hard for developers to use.
> Some of these issues can be solved without breaking compatibility, but some 
> cannot; at the very least we need to know what will change and what will 
> stay stable.
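
As a hedged illustration of the framing inconsistency raised in point 3 (the 
method names are hypothetical; only the protobuf-java calls are real API):

{code:java}
import com.google.protobuf.Message;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class FramingSketch {
  // Protobuf-style framing (reportedly used by RpcRequestWritable):
  // a varint32 length followed by the serialized message.
  static void writeVarintFramed(Message msg, OutputStream out) throws IOException {
    msg.writeDelimitedTo(out);
  }

  // Fixed-width framing (reportedly used by RpcResponseWritable):
  // a 4-byte big-endian int32 length followed by the serialized message.
  static void writeInt32Framed(Message msg, DataOutputStream out) throws IOException {
    byte[] bytes = msg.toByteArray();
    out.writeInt(bytes.length);
    out.write(bytes);
  }
}
{code}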

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9218) Document the Rpc-wrappers used internally

2013-01-15 Thread Sanjay Radia (JIRA)
Sanjay Radia created HADOOP-9218:


 Summary: Document the Rpc-wrappers used internally
 Key: HADOOP-9218
 URL: https://issues.apache.org/jira/browse/HADOOP-9218
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sanjay Radia
Assignee: Sanjay Radia




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9150) Unnecessary DNS resolution attempts for logical URIs

2013-01-15 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554580#comment-13554580
 ] 

Aaron T. Myers commented on HADOOP-9150:


+1, patch looks good to me.

My only suggestion would be to add a test for the FileContext side of the house 
to make sure that it isn't affected by this issue as well, though you could 
certainly do that in a separate JIRA if you wanted.

> Unnecessary DNS resolution attempts for logical URIs
> 
>
> Key: HADOOP-9150
> URL: https://issues.apache.org/jira/browse/HADOOP-9150
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, ha, performance, viewfs
>Affects Versions: 3.0.0, 2.0.2-alpha
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Critical
> Attachments: hadoop-9150.txt, hadoop-9150.txt, hadoop-9150.txt, 
> log.txt, tracing-resolver.tgz
>
>
> In the FileSystem code, we accidentally try to DNS-resolve the logical name 
> before it is converted to an actual domain name. In some DNS setups, this can 
> cause a big slowdown; e.g., in one misconfigured cluster we saw a 2-3x drop in 
> terasort throughput, since every task wasted a lot of time waiting for slow 
> "not found" responses from DNS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9150) Unnecessary DNS resolution attempts for logical URIs

2013-01-15 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-9150:


Attachment: hadoop-9150.txt

Added a test for FileContext. It appears FileContext wasn't affected by the 
issue (it doesn't seem to do canonicalization), but if someone adds 
canonicalization to FileContext later, this test should catch the regression.

> Unnecessary DNS resolution attempts for logical URIs
> 
>
> Key: HADOOP-9150
> URL: https://issues.apache.org/jira/browse/HADOOP-9150
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, ha, performance, viewfs
>Affects Versions: 3.0.0, 2.0.2-alpha
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Critical
> Attachments: hadoop-9150.txt, hadoop-9150.txt, hadoop-9150.txt, 
> hadoop-9150.txt, log.txt, tracing-resolver.tgz
>
>
> In the FileSystem code, we accidentally try to DNS-resolve the logical name 
> before it is converted to an actual domain name. In some DNS setups, this can 
> cause a big slowdown; e.g., in one misconfigured cluster we saw a 2-3x drop in 
> terasort throughput, since every task wasted a lot of time waiting for slow 
> "not found" responses from DNS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

