[jira] [Updated] (HADOOP-9199) Cover package org.apache.hadoop.io with unit tests

2013-01-16 Thread Vadim Bondarev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Bondarev updated HADOOP-9199:
---

Attachment: (was: HADOOP-9199-trunk-a.patch)

> Cover package org.apache.hadoop.io with unit tests
> --
>
> Key: HADOOP-9199
> URL: https://issues.apache.org/jira/browse/HADOOP-9199
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Vadim Bondarev
> Attachments: HADOOP-9199-branch-0.23-a.patch, 
> HADOOP-9199-branch-2-a.patch, HADOOP-9199-trunk-a.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9199) Cover package org.apache.hadoop.io with unit tests

2013-01-16 Thread Vadim Bondarev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Bondarev updated HADOOP-9199:
---

Attachment: HADOOP-9199-trunk-a.patch

> Cover package org.apache.hadoop.io with unit tests
> --
>
> Key: HADOOP-9199
> URL: https://issues.apache.org/jira/browse/HADOOP-9199
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Vadim Bondarev
> Attachments: HADOOP-9199-branch-0.23-a.patch, 
> HADOOP-9199-branch-2-a.patch, HADOOP-9199-trunk-a.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9216) CompressionCodecFactory#getCodecClasses should trim the result of parsing by Configuration.

2013-01-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13555947#comment-13555947
 ] 

Hudson commented on HADOOP-9216:


Integrated in Hadoop-trunk-Commit #3254 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3254/])
HADOOP-9216. CompressionCodecFactory#getCodecClasses should trim the result 
of parsing by Configuration. Contributed by Tsuyoshi Ozawa. (Revision 1434569)

 Result = SUCCESS
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1434569
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/CompressionCodecFactory.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodecFactory.java
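The gist of the fix is to trim each codec class name when parsing the
comma-separated configuration value. A minimal sketch of that kind of parsing,
assuming the hypothetical helper name {{parseCodecClasses}};
{{Configuration.getTrimmedStringCollection}} is the stock accessor that splits
on commas and trims each token:

{code}
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;

public class CodecListParsing {
  // Hypothetical helper: resolve the configured codec classes, trimming
  // whitespace so values like "  ...GzipCodec , " load cleanly.
  static List<Class<?>> parseCodecClasses(Configuration conf)
      throws ClassNotFoundException {
    List<Class<?>> result = new ArrayList<Class<?>>();
    // getTrimmedStringCollection splits on commas and trims each token.
    for (String name : conf.getTrimmedStringCollection("io.compression.codecs")) {
      result.add(conf.getClassByName(name));
    }
    return result;
  }
}
{code}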


> CompressionCodecFactory#getCodecClasses should trim the result of parsing by 
> Configuration.
> ---
>
> Key: HADOOP-9216
> URL: https://issues.apache.org/jira/browse/HADOOP-9216
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Tsuyoshi OZAWA
>Assignee: Tsuyoshi OZAWA
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: HADOOP-9216.patch
>
>
> CompressionCodecFactory#getCodecClasses doesn't trim its input.
> This can confuse users of CompressionCodecFactory. For example, a setting 
> like the following can cause errors because of the spaces in the values.
> {code}
> conf.set("io.compression.codecs",
>     "  org.apache.hadoop.io.compress.GzipCodec , " +
>     " org.apache.hadoop.io.compress.DefaultCodec  , " +
>     "org.apache.hadoop.io.compress.BZip2Codec   ");
> {code}
> This ticket deals with this problem.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HADOOP-9216) CompressionCodecFactory#getCodecClasses should trim the result of parsing by Configuration.

2013-01-16 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon reassigned HADOOP-9216:
---

Assignee: Tsuyoshi OZAWA

> CompressionCodecFactory#getCodecClasses should trim the result of parsing by 
> Configuration.
> ---
>
> Key: HADOOP-9216
> URL: https://issues.apache.org/jira/browse/HADOOP-9216
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Tsuyoshi OZAWA
>Assignee: Tsuyoshi OZAWA
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: HADOOP-9216.patch
>
>
> CompressionCodecFactory#getCodecClasses doesn't trim its input.
> This can confuse users of CompressionCodecFactory. For example, a setting 
> like the following can cause errors because of the spaces in the values.
> {code}
> conf.set("io.compression.codecs",
>     "  org.apache.hadoop.io.compress.GzipCodec , " +
>     " org.apache.hadoop.io.compress.DefaultCodec  , " +
>     "org.apache.hadoop.io.compress.BZip2Codec   ");
> {code}
> This ticket deals with this problem.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9216) CompressionCodecFactory#getCodecClasses should trim the result of parsing by Configuration.

2013-01-16 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-9216:


   Resolution: Fixed
Fix Version/s: 2.0.3-alpha
   3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to branch-2 and trunk. Thanks, Tsuyoshi!

> CompressionCodecFactory#getCodecClasses should trim the result of parsing by 
> Configuration.
> ---
>
> Key: HADOOP-9216
> URL: https://issues.apache.org/jira/browse/HADOOP-9216
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Tsuyoshi OZAWA
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: HADOOP-9216.patch
>
>
> CompressionCodecFactory#getCodecClasses doesn't trim its input.
> This can confuse users of CompressionCodecFactory. For example, a setting 
> like the following can cause errors because of the spaces in the values.
> {code}
> conf.set("io.compression.codecs",
>     "  org.apache.hadoop.io.compress.GzipCodec , " +
>     " org.apache.hadoop.io.compress.DefaultCodec  , " +
>     "org.apache.hadoop.io.compress.BZip2Codec   ");
> {code}
> This ticket deals with this problem.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9150) Unnecessary DNS resolution attempts for logical URIs

2013-01-16 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-9150:


Attachment: hadoop-9150.txt

The previous patch forgot to add a delegating method in FilterFileSystem. This 
patch adds the trivial delegation.
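
As a sketch of the kind of trivial delegation meant here (assuming the
{{canonicalizeUri}} hook discussed elsewhere in this thread; the subclass name
is made up):

{code}
package org.apache.hadoop.fs;

import java.net.URI;

// Sketch: a FilterFileSystem must forward the canonicalization hook to
// the wrapped filesystem, otherwise filtered filesystems would fall back
// to FileSystem's default-port logic instead of the wrapped fs's own.
class CanonicalizingFilter extends FilterFileSystem {
  @Override
  protected URI canonicalizeUri(URI uri) {
    return fs.canonicalizeUri(uri); // 'fs' is the wrapped FileSystem
  }
}
{code}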

> Unnecessary DNS resolution attempts for logical URIs
> 
>
> Key: HADOOP-9150
> URL: https://issues.apache.org/jira/browse/HADOOP-9150
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, ha, performance, viewfs
>Affects Versions: 3.0.0, 2.0.2-alpha
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Critical
> Attachments: hadoop-9150.txt, hadoop-9150.txt, hadoop-9150.txt, 
> hadoop-9150.txt, hadoop-9150.txt, hadoop-9150.txt, log.txt, 
> tracing-resolver.tgz
>
>
> In the FileSystem code, we accidentally try to DNS-resolve the logical name 
> before it is converted to an actual domain name. In some DNS setups, this can 
> cause a big slowdown; e.g., in one misconfigured cluster we saw a 2-3x drop in 
> terasort throughput, since every task wasted a lot of time waiting for slow 
> "not found" responses from DNS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9150) Unnecessary DNS resolution attempts for logical URIs

2013-01-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13555771#comment-13555771
 ] 

Hadoop QA commented on HADOOP-9150:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12565211/hadoop-9150.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

  {color:red}-1 javac{color}.  The applied patch generated 2030 javac 
compiler warnings (more than the trunk's current 2022 warnings).

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 2 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.fs.TestFilterFileSystem

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2061//testReport/
Javac warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2061//artifact/trunk/patchprocess/diffJavacWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2061//console

This message is automatically generated.

> Unnecessary DNS resolution attempts for logical URIs
> 
>
> Key: HADOOP-9150
> URL: https://issues.apache.org/jira/browse/HADOOP-9150
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, ha, performance, viewfs
>Affects Versions: 3.0.0, 2.0.2-alpha
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Critical
> Attachments: hadoop-9150.txt, hadoop-9150.txt, hadoop-9150.txt, 
> hadoop-9150.txt, hadoop-9150.txt, log.txt, tracing-resolver.tgz
>
>
> In the FileSystem code, we accidentally try to DNS-resolve the logical name 
> before it is converted to an actual domain name. In some DNS setups, this can 
> cause a big slowdown; e.g., in one misconfigured cluster we saw a 2-3x drop in 
> terasort throughput, since every task wasted a lot of time waiting for slow 
> "not found" responses from DNS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9215) when using cmake-2.6, libhadoop.so doesn't get created (only libhadoop.so.1.0.0)

2013-01-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13555759#comment-13555759
 ] 

Hudson commented on HADOOP-9215:


Integrated in Hadoop-trunk-Commit #3253 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3253/])
HADOOP-9215. when using cmake-2.6, libhadoop.so doesn't get created (only 
libhadoop.so.1.0.0). Contributed by Colin Patrick McCabe. (Revision 1434530)

 Result = SUCCESS
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1434530
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/pom.xml
* /hadoop/common/trunk/hadoop-tools/hadoop-pipes/pom.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml


> when using cmake-2.6, libhadoop.so doesn't get created (only 
> libhadoop.so.1.0.0)
> 
>
> Key: HADOOP-9215
> URL: https://issues.apache.org/jira/browse/HADOOP-9215
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Thomas Graves
>Assignee: Colin Patrick McCabe
>Priority: Blocker
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: HADOOP-9215.001.patch, HADOOP-9215.002.patch, 
> HADOOP-9215.003.patch
>
>
> Looks like none of the .so files are being built. They all have a .so.1.0.0 
> file but no plain .so file. branch-0.23 works fine, but trunk and branch-2 
> are broken. This actually applies to both libhadoop.so and libhdfs.so.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9215) when using cmake-2.6, libhadoop.so doesn't get created (only libhadoop.so.1.0.0)

2013-01-16 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-9215:


   Resolution: Fixed
Fix Version/s: 2.0.3-alpha
   3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2. Thanks for looking into this, Thomas and Colin.

> when using cmake-2.6, libhadoop.so doesn't get created (only 
> libhadoop.so.1.0.0)
> 
>
> Key: HADOOP-9215
> URL: https://issues.apache.org/jira/browse/HADOOP-9215
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Thomas Graves
>Assignee: Colin Patrick McCabe
>Priority: Blocker
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: HADOOP-9215.001.patch, HADOOP-9215.002.patch, 
> HADOOP-9215.003.patch
>
>
> Looks like none of the .so files are being built. They all have a .so.1.0.0 
> file but no plain .so file. branch-0.23 works fine, but trunk and branch-2 
> are broken. This actually applies to both libhadoop.so and libhdfs.so.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9215) when using cmake-2.6, libhadoop.so doesn't get created (only libhadoop.so.1.0.0)

2013-01-16 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13555735#comment-13555735
 ] 

Todd Lipcon commented on HADOOP-9215:
-

Yep, looks good to me. Only throwing up in my mouth a tiny bit :) Will commit 
momentarily.

> when using cmake-2.6, libhadoop.so doesn't get created (only 
> libhadoop.so.1.0.0)
> 
>
> Key: HADOOP-9215
> URL: https://issues.apache.org/jira/browse/HADOOP-9215
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Thomas Graves
>Assignee: Colin Patrick McCabe
>Priority: Blocker
> Attachments: HADOOP-9215.001.patch, HADOOP-9215.002.patch, 
> HADOOP-9215.003.patch
>
>
> Looks like none of the .so files are being built. They all have a .so.1.0.0 
> file but no plain .so file. branch-0.23 works fine, but trunk and branch-2 
> are broken. This actually applies to both libhadoop.so and libhdfs.so.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9215) when using cmake-2.6, libhadoop.so doesn't get created (only libhadoop.so.1.0.0)

2013-01-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13555676#comment-13555676
 ] 

Hadoop QA commented on HADOOP-9215:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12565193/HADOOP-9215.003.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2059//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2059//console

This message is automatically generated.

> when using cmake-2.6, libhadoop.so doesn't get created (only 
> libhadoop.so.1.0.0)
> 
>
> Key: HADOOP-9215
> URL: https://issues.apache.org/jira/browse/HADOOP-9215
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Thomas Graves
>Assignee: Colin Patrick McCabe
>Priority: Blocker
> Attachments: HADOOP-9215.001.patch, HADOOP-9215.002.patch, 
> HADOOP-9215.003.patch
>
>
> Looks like none of the .so files are being built. They all have a .so.1.0.0 
> file but no plain .so file. branch-0.23 works fine, but trunk and branch-2 
> are broken. This actually applies to both libhadoop.so and libhdfs.so.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9134) Unified server side user groups mapping service

2013-01-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13555675#comment-13555675
 ] 

Hadoop QA commented on HADOOP-9134:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12565192/HADOOP-9134.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2060//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2060//console

This message is automatically generated.

> Unified server side user groups mapping service
> ---
>
> Key: HADOOP-9134
> URL: https://issues.apache.org/jira/browse/HADOOP-9134
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.0.3-alpha
>Reporter: Kai Zheng
> Attachments: HADOOP-9134.patch, HADOOP-9134.patch, HADOOP-9134.patch
>
>
> This proposes to provide/expose the server-side user group mapping service in 
> the NameNode to clients, so that user group mapping can be kept in a single 
> place and thus unified across all nodes and clients.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9215) when using cmake-2.6, libhadoop.so doesn't get created (only libhadoop.so.1.0.0)

2013-01-16 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13555661#comment-13555661
 ] 

Thomas Graves commented on HADOOP-9215:
---

I now see all the *.so files I expect, so I'm +1 if Todd's good with it. 
Thanks, Colin!

> when using cmake-2.6, libhadoop.so doesn't get created (only 
> libhadoop.so.1.0.0)
> 
>
> Key: HADOOP-9215
> URL: https://issues.apache.org/jira/browse/HADOOP-9215
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Thomas Graves
>Assignee: Colin Patrick McCabe
>Priority: Blocker
> Attachments: HADOOP-9215.001.patch, HADOOP-9215.002.patch, 
> HADOOP-9215.003.patch
>
>
> Looks like none of the .so files are being built. They all have a .so.1.0.0 
> file but no plain .so file. branch-0.23 works fine, but trunk and branch-2 
> are broken. This actually applies to both libhadoop.so and libhdfs.so.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9150) Unnecessary DNS resolution attempts for logical URIs

2013-01-16 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-9150:


Attachment: hadoop-9150.txt

The attached patch implements something like what's described above.

Note that this changes behavior for those who are extending the FileSystem 
interface. It's marked as Public and Stable, so we should probably add a 
release note for this. I think we should allow it nonetheless -- it's not clear 
that the {{Stable}} marking there refers to "stable for FileSystem developers 
to inherit from" vs "stable for developers to code against". Since the changes 
are only to protected methods, we've only changed something for the former and 
not the latter.

> Unnecessary DNS resolution attempts for logical URIs
> 
>
> Key: HADOOP-9150
> URL: https://issues.apache.org/jira/browse/HADOOP-9150
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, ha, performance, viewfs
>Affects Versions: 3.0.0, 2.0.2-alpha
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Critical
> Attachments: hadoop-9150.txt, hadoop-9150.txt, hadoop-9150.txt, 
> hadoop-9150.txt, hadoop-9150.txt, log.txt, tracing-resolver.tgz
>
>
> In the FileSystem code, we accidentally try to DNS-resolve the logical name 
> before it is converted to an actual domain name. In some DNS setups, this can 
> cause a big slowdown; e.g., in one misconfigured cluster we saw a 2-3x drop in 
> terasort throughput, since every task wasted a lot of time waiting for slow 
> "not found" responses from DNS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9196) Modify BloomFilter.write() to address memory concerns

2013-01-16 Thread Surenkumar Nihalani (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13555606#comment-13555606
 ] 

Surenkumar Nihalani commented on HADOOP-9196:
-

It does allocate a full array. I was looking at {{BitSet}}'s implementation; 
reading one bit at a time is too expensive. I am thinking of extending 
{{BitSet}} for {{BloomFilter}} with a {{getInt(index)}} that returns the 
internal integer, and then writing one int at a time, because allocating a huge 
array just for writing to {{DataOutput}} seems like overkill. It's not as if 
the array would be used and kept around; it would just be garbage collected. 
Passing one integer at a time to {{DataOutput}} wouldn't be bad because the 
output is buffered.

Thoughts?
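
A rough sketch of that idea, with a hypothetical int-addressable view of the
bit vector (the real {{BloomFilter}}/{{BitSet}} internals differ, so this is
illustrative only):

{code}
import java.io.DataOutput;
import java.io.IOException;

public class IntAtATimeWriter {
  // Hypothetical view: expose the bit vector as 32-bit words so that
  // serialization can stream one int at a time instead of materializing
  // a byte array the size of the whole vector.
  interface IntAddressableBits {
    int numWords();        // number of 32-bit words backing the bit set
    int getInt(int index); // the index'th internal word
  }

  static void write(DataOutput out, IntAddressableBits bits)
      throws IOException {
    // Per-int writes are cheap when the DataOutput is buffered; no large
    // temporary array is allocated here.
    for (int i = 0; i < bits.numWords(); i++) {
      out.writeInt(bits.getInt(i));
    }
  }
}
{code}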


> Modify BloomFilter.write() to address memory concerns
> -
>
> Key: HADOOP-9196
> URL: https://issues.apache.org/jira/browse/HADOOP-9196
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: James
>Priority: Minor
>
> It appears that org.apache.hadoop.util.bloom.BloomFilter's write() method 
> creates a byte array large enough to fit the entire bit vector in memory 
> during serialization. This is unnecessary and may cause out-of-memory issues 
> if the bit vector is sufficiently large and memory is tight.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9150) Unnecessary DNS resolution attempts for logical URIs

2013-01-16 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1397#comment-1397
 ] 

Todd Lipcon commented on HADOOP-9150:
-

Hey Daryn.

I'm looking at implementing your #1 option above. Another issue, though, is 
that checkPath() hard-codes {{NetUtils.getCanonicalUri}}. I don't really want 
to have to make all of HDFS, HFTP, HSFTP, and WebHDFS re-implement that code.

How does the following sound?
- Add {{protected FileSystem.canonicalizeUri(uri)}}. The default implementation 
would add getDefaultPort() if the given URI has no port set and 
getDefaultPort() > 0.
- Make the default implementation of {{FileSystem.getCanonicalUri()}} call 
{{canonicalizeUri(getUri())}}.
- Change {{checkPath}} to call {{canonicalizeUri}}.
- In all of the FileSystems which use real hostnames as their authority, 
override {{canonicalizeUri}} to call {{NetUtils.getCanonicalUri}}.
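
A minimal sketch of what that default {{canonicalizeUri}} could look like,
under the assumptions above (not the committed patch):

{code}
import java.net.URI;
import java.net.URISyntaxException;

public abstract class CanonicalizationSketch {
  public abstract URI getUri();
  protected abstract int getDefaultPort();

  // Default canonicalization: fill in the default port when the URI has
  // none and the filesystem defines one. Logical filesystems returning a
  // default port <= 0 get their URI back untouched, so no DNS resolution
  // is needed at this layer.
  protected URI canonicalizeUri(URI uri) {
    if (uri.getPort() == -1 && getDefaultPort() > 0) {
      try {
        return new URI(uri.getScheme(), uri.getUserInfo(), uri.getHost(),
            getDefaultPort(), uri.getPath(), uri.getQuery(),
            uri.getFragment());
      } catch (URISyntaxException e) {
        throw new IllegalArgumentException(e);
      }
    }
    return uri;
  }

  protected URI getCanonicalUri() {
    return canonicalizeUri(getUri());
  }
}
{code}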

> Unnecessary DNS resolution attempts for logical URIs
> 
>
> Key: HADOOP-9150
> URL: https://issues.apache.org/jira/browse/HADOOP-9150
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, ha, performance, viewfs
>Affects Versions: 3.0.0, 2.0.2-alpha
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Critical
> Attachments: hadoop-9150.txt, hadoop-9150.txt, hadoop-9150.txt, 
> hadoop-9150.txt, log.txt, tracing-resolver.tgz
>
>
> In the FileSystem code, we accidentally try to DNS-resolve the logical name 
> before it is converted to an actual domain name. In some DNS setups, this can 
> cause a big slowdown; e.g., in one misconfigured cluster we saw a 2-3x drop in 
> terasort throughput, since every task wasted a lot of time waiting for slow 
> "not found" responses from DNS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9215) when using cmake-2.6, libhadoop.so doesn't get created (only libhadoop.so.1.0.0)

2013-01-16 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-9215:
-

Summary: when using cmake-2.6, libhadoop.so doesn't get created (only 
libhadoop.so.1.0.0)  (was: libhadoop.so doesn't exist (only libhadoop.so.1.0.0))

> when using cmake-2.6, libhadoop.so doesn't get created (only 
> libhadoop.so.1.0.0)
> 
>
> Key: HADOOP-9215
> URL: https://issues.apache.org/jira/browse/HADOOP-9215
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Thomas Graves
>Assignee: Colin Patrick McCabe
>Priority: Blocker
> Attachments: HADOOP-9215.001.patch, HADOOP-9215.002.patch, 
> HADOOP-9215.003.patch
>
>
> Looks like none of the .so files are being built. They all have a .so.1.0.0 
> file but no plain .so file. branch-0.23 works fine, but trunk and branch-2 
> are broken. This actually applies to both libhadoop.so and libhdfs.so.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9215) libhadoop.so doesn't exist (only libhadoop.so.1.0.0)

2013-01-16 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-9215:
-

Attachment: HADOOP-9215.003.patch

Added a comment.

> libhadoop.so doesn't exist (only libhadoop.so.1.0.0)
> 
>
> Key: HADOOP-9215
> URL: https://issues.apache.org/jira/browse/HADOOP-9215
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Thomas Graves
>Assignee: Colin Patrick McCabe
>Priority: Blocker
> Attachments: HADOOP-9215.001.patch, HADOOP-9215.002.patch, 
> HADOOP-9215.003.patch
>
>
> Looks like none of the .so files are being built. They all have a .so.1.0.0 
> file but no plain .so file. branch-0.23 works fine, but trunk and branch-2 
> are broken. This actually applies to both libhadoop.so and libhdfs.so.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9134) Unified server side user groups mapping service

2013-01-16 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-9134:
--

Attachment: HADOOP-9134.patch

Fixed the warning reported by Hadoop-QA.
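
For readers following the proposal, the server-side extension point involved is
{{GroupMappingServiceProvider}}; a toy provider as a sketch (the class name and
group values are made up):

{code}
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.security.GroupMappingServiceProvider;

// Toy provider: the proposal is for clients to resolve groups through a
// single server-side service rather than each node computing its own
// mapping locally; any such service plugs in via this interface.
public class StaticGroupsMapping implements GroupMappingServiceProvider {
  @Override
  public List<String> getGroups(String user) throws IOException {
    return Arrays.asList("users", user + "-group"); // placeholder data
  }

  @Override
  public void cacheGroupsRefresh() throws IOException {
    // no cache in this sketch
  }

  @Override
  public void cacheGroupsAdd(List<String> groups) throws IOException {
    // no cache in this sketch
  }
}
{code}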

> Unified server side user groups mapping service
> ---
>
> Key: HADOOP-9134
> URL: https://issues.apache.org/jira/browse/HADOOP-9134
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.0.3-alpha
>Reporter: Kai Zheng
> Attachments: HADOOP-9134.patch, HADOOP-9134.patch, HADOOP-9134.patch
>
>
> This proposes to provide/expose the server-side user group mapping service in 
> the NameNode to clients, so that user group mapping can be kept in a single 
> place and thus unified across all nodes and clients.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9193) hadoop script can inadvertently expand wildcard arguments when delegating to hdfs script

2013-01-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1352#comment-1352
 ] 

Hudson commented on HADOOP-9193:


Integrated in Hadoop-trunk-Commit #3252 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3252/])
HADOOP-9193. hadoop script can inadvertently expand wildcard arguments when 
delegating to hdfs script. Contributed by Andy Isaacson. (Revision 1434450)

 Result = SUCCESS
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1434450
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop


> hadoop script can inadvertently expand wildcard arguments when delegating to 
> hdfs script
> 
>
> Key: HADOOP-9193
> URL: https://issues.apache.org/jira/browse/HADOOP-9193
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.0.2-alpha, 0.23.5
>Reporter: Jason Lowe
>Assignee: Andy Isaacson
>Priority: Minor
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: hadoop9193.diff
>
>
> The hadoop front-end script will print a deprecation warning and defer to the 
> hdfs front-end script for certain commands, like fsck and dfs. If a wildcard 
> appears as an argument, it can be inadvertently expanded by the shell to 
> match a local filesystem path before being sent to the hdfs script, which can 
> be very confusing to the end user.
> For example, the following two commands usually do very different things, 
> even though they should be equivalent:
> {code}
> hadoop fs -ls /tmp/\*
> hadoop dfs -ls /tmp/\*
> {code}
> The former lists everything in the default filesystem under /tmp, while the 
> latter expands /tmp/\* into everything in the *local* filesystem under /tmp 
> and passes those as arguments to try to list in the default filesystem.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9215) libhadoop.so doesn't exist (only libhadoop.so.1.0.0)

2013-01-16 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1351#comment-1351
 ] 

Colin Patrick McCabe commented on HADOOP-9215:
--

The new patch passes org.apache.hadoop.util.TestNativeCodeLoader and the other 
tests that ensure the native libraries are loaded. No additional tests are 
included, since the only way to reproduce the problem is to run with cmake 2.6, 
which is not installed on the build cluster.

However, I verified that the patch fixed the bug with cmake version 2.6-patch 4 
on CentOS release 5.8.

> libhadoop.so doesn't exist (only libhadoop.so.1.0.0)
> 
>
> Key: HADOOP-9215
> URL: https://issues.apache.org/jira/browse/HADOOP-9215
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Thomas Graves
>Assignee: Colin Patrick McCabe
>Priority: Blocker
> Attachments: HADOOP-9215.001.patch, HADOOP-9215.002.patch
>
>
> Looks like none of the .so files are being built. They all have a .so.1.0.0 
> file but no plain .so file. branch-0.23 works fine, but trunk and branch-2 
> are broken. This actually applies to both libhadoop.so and libhdfs.so.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9215) libhadoop.so doesn't exist (only libhadoop.so.1.0.0)

2013-01-16 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1345#comment-1345
 ] 

Todd Lipcon commented on HADOOP-9215:
-

Sorry to make you rev the patch again, but can you add a short comment above 
the second make invocation which explains this workaround, and points to 
HADOOP-9215? I can see a well-meaning contributor removing it down the line 
otherwise.

> libhadoop.so doesn't exist (only libhadoop.so.1.0.0)
> 
>
> Key: HADOOP-9215
> URL: https://issues.apache.org/jira/browse/HADOOP-9215
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Thomas Graves
>Assignee: Colin Patrick McCabe
>Priority: Blocker
> Attachments: HADOOP-9215.001.patch, HADOOP-9215.002.patch
>
>
> Looks like none of the .so files are being built. They all have a .so.1.0.0 
> file but no plain .so file. branch-0.23 works fine, but trunk and branch-2 
> are broken. This actually applies to both libhadoop.so and libhdfs.so.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9193) hadoop script can inadvertently expand wildcard arguments when delegating to hdfs script

2013-01-16 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-9193:


   Resolution: Fixed
Fix Version/s: 2.0.3-alpha
   3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

> hadoop script can inadvertently expand wildcard arguments when delegating to 
> hdfs script
> 
>
> Key: HADOOP-9193
> URL: https://issues.apache.org/jira/browse/HADOOP-9193
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.0.2-alpha, 0.23.5
>Reporter: Jason Lowe
>Assignee: Andy Isaacson
>Priority: Minor
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: hadoop9193.diff
>
>
> The hadoop front-end script will print a deprecation warning and defer to the 
> hdfs front-end script for certain commands, like fsck and dfs. If a wildcard 
> appears as an argument, it can be inadvertently expanded by the shell to 
> match a local filesystem path before being sent to the hdfs script, which can 
> be very confusing to the end user.
> For example, the following two commands usually do very different things, 
> even though they should be equivalent:
> {code}
> hadoop fs -ls /tmp/\*
> hadoop dfs -ls /tmp/\*
> {code}
> The former lists everything in the default filesystem under /tmp, while the 
> latter expands /tmp/\* into everything in the *local* filesystem under /tmp 
> and passes those as arguments to try to list in the default filesystem.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9193) hadoop script can inadvertently expand wildcard arguments when delegating to hdfs script

2013-01-16 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1341#comment-1341
 ] 

Aaron T. Myers commented on HADOOP-9193:


+1, the patch looks good to me.

I'm going to commit this momentarily.

> hadoop script can inadvertently expand wildcard arguments when delegating to 
> hdfs script
> 
>
> Key: HADOOP-9193
> URL: https://issues.apache.org/jira/browse/HADOOP-9193
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.0.2-alpha, 0.23.5
>Reporter: Jason Lowe
>Assignee: Andy Isaacson
>Priority: Minor
> Attachments: hadoop9193.diff
>
>
> The hadoop front-end script will print a deprecation warning and defer to the 
> hdfs front-end script for certain commands, like fsck and dfs. If a wildcard 
> appears as an argument, it can be inadvertently expanded by the shell to 
> match a local filesystem path before being sent to the hdfs script, which can 
> be very confusing to the end user.
> For example, the following two commands usually do very different things, 
> even though they should be equivalent:
> {code}
> hadoop fs -ls /tmp/\*
> hadoop dfs -ls /tmp/\*
> {code}
> The former lists everything in the default filesystem under /tmp, while the 
> latter expands /tmp/\* into everything in the *local* filesystem under /tmp 
> and passes those as arguments to try to list in the default filesystem.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9193) hadoop script can inadvertently expand wildcard arguments when delegating to hdfs script

2013-01-16 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1338#comment-1338
 ] 

Todd Lipcon commented on HADOOP-9193:
-

+1, lgtm. Will commit momentarily.

> hadoop script can inadvertently expand wildcard arguments when delegating to 
> hdfs script
> 
>
> Key: HADOOP-9193
> URL: https://issues.apache.org/jira/browse/HADOOP-9193
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.0.2-alpha, 0.23.5
>Reporter: Jason Lowe
>Assignee: Andy Isaacson
>Priority: Minor
> Attachments: hadoop9193.diff
>
>
> The hadoop front-end script will print a deprecation warning and defer to the 
> hdfs front-end script for certain commands, like fsck and dfs. If a wildcard 
> appears as an argument, it can be inadvertently expanded by the shell to 
> match a local filesystem path before being sent to the hdfs script, which can 
> be very confusing to the end user.
> For example, the following two commands usually do very different things, 
> even though they should be equivalent:
> {code}
> hadoop fs -ls /tmp/\*
> hadoop dfs -ls /tmp/\*
> {code}
> The former lists everything in the default filesystem under /tmp, while the 
> latter expands /tmp/\* into everything in the *local* filesystem under /tmp 
> and passes those as arguments to try to list in the default filesystem.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9215) libhadoop.so doesn't exist (only libhadoop.so.1.0.0)

2013-01-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13555493#comment-13555493
 ] 

Hadoop QA commented on HADOOP-9215:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12565162/HADOOP-9215.002.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2058//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2058//console

This message is automatically generated.

> libhadoop.so doesn't exist (only libhadoop.so.1.0.0)
> 
>
> Key: HADOOP-9215
> URL: https://issues.apache.org/jira/browse/HADOOP-9215
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Thomas Graves
>Assignee: Colin Patrick McCabe
>Priority: Blocker
> Attachments: HADOOP-9215.001.patch, HADOOP-9215.002.patch
>
>
> Looks like none of the .so files are being built. They all have a .so.1.0.0 
> file but no plain .so file. branch-0.23 works fine, but trunk and branch-2 
> are broken. This actually applies to both libhadoop.so and libhdfs.so.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8157) TestRPCCallBenchmark#testBenchmarkWithWritable fails with RTE

2013-01-16 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-8157:
---

Fix Version/s: (was: 0.23.2)
   2.0.0-alpha
   0.23.7

Even though the JIRA said it was merged into 0.23, this was one of those JIRAs 
that was "lost" when branch-0.23 became branch-2 and branch-0.23 was recreated.

Ran across it on 0.23 and noticed it wasn't fixed, so I pulled it into 
branch-0.23.  Again.  ;-)  Thanks, Todd!

> TestRPCCallBenchmark#testBenchmarkWithWritable fails with RTE
> -
>
> Key: HADOOP-8157
> URL: https://issues.apache.org/jira/browse/HADOOP-8157
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 0.24.0
>Reporter: Eli Collins
>Assignee: Todd Lipcon
> Fix For: 2.0.0-alpha, 0.23.7
>
> Attachments: hadoop-8157.txt
>
>
> Saw TestRPCCallBenchmark#testBenchmarkWithWritable fail with the following on 
> jenkins:
> Caused by: java.lang.RuntimeException: IPC server unable to read call 
> parameters: readObject can't find class java.lang.String

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8999) SASL negotiation is flawed

2013-01-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13555468#comment-13555468
 ] 

Hudson commented on HADOOP-8999:


Integrated in Hadoop-trunk-Commit #3251 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3251/])
HADOOP-8999. Move to incompatible section of changelog (Revision 1434370)

 Result = SUCCESS
daryn : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1434370
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


> SASL negotiation is flawed
> --
>
> Key: HADOOP-8999
> URL: https://issues.apache.org/jira/browse/HADOOP-8999
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: HADOOP-8999.patch
>
>
> The RPC protocol used for SASL negotiation is flawed.  The server's RPC 
> response contains the next SASL challenge token, but a SASL server can return 
> null ("I'm done") or an N-byte challenge.  The server currently will not 
> send an RPC success response to the client if the SASL server returns null, 
> which causes the client to hang until it times out.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8999) SASL negotiation is flawed

2013-01-16 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-8999:


Release Note: The RPC SASL negotiation now always ends with a final response. 
If the SASL mechanism does not have a final response (GSSAPI, PLAIN), then an 
empty success response is sent to the client. The client will now always 
expect a final response, to definitively know whether negotiation is 
complete/successful.
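
Schematically, the server-side flow the release note describes might look like
this (a sketch against the standard {{javax.security.sasl}} API, not the
committed code):

{code}
import javax.security.sasl.SaslException;
import javax.security.sasl.SaslServer;

public class FinalResponseSketch {
  // Evaluate the client's token and always produce something to send:
  // mechanisms like GSSAPI or PLAIN may finish with a null challenge, in
  // which case an empty success response is returned so the client can
  // definitively tell that negotiation completed.
  static byte[] nextWireResponse(SaslServer server, byte[] clientToken)
      throws SaslException {
    byte[] challenge = server.evaluateResponse(clientToken);
    if (challenge == null && server.isComplete()) {
      return new byte[0]; // empty success response instead of silence
    }
    return challenge;
  }
}
{code}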

> SASL negotiation is flawed
> --
>
> Key: HADOOP-8999
> URL: https://issues.apache.org/jira/browse/HADOOP-8999
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: HADOOP-8999.patch
>
>
> The RPC protocol used for SASL negotiation is flawed.  The server's RPC 
> response contains the next SASL challenge token, but a SASL server can return 
> null ("I'm done") or an N-byte challenge.  The server currently will not 
> send an RPC success response to the client if the SASL server returns null, 
> which causes the client to hang until it times out.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9215) libhadoop.so doesn't exist (only libhadoop.so.1.0.0)

2013-01-16 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-9215:
-

Attachment: HADOOP-9215.002.patch

Here is a different solution, which just runs 'make' twice. The second make 
exits almost immediately (make has a very quick startup/shutdown time). On 
CentOS 5, this has the side effect of creating the libhadoop.so symlink.

I thought of some other workarounds, but they were all pretty cumbersome. Even 
creating the symlink manually did not seem to work -- something in the first 
invocation of make deletes it on CentOS 5.

A few years from now we can bump up the minimum required cmake version and drop 
the hack.

> libhadoop.so doesn't exist (only libhadoop.so.1.0.0)
> 
>
> Key: HADOOP-9215
> URL: https://issues.apache.org/jira/browse/HADOOP-9215
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Thomas Graves
>Assignee: Colin Patrick McCabe
>Priority: Blocker
> Attachments: HADOOP-9215.001.patch, HADOOP-9215.002.patch
>
>
> Looks like none of the .so files are being built. They all have a .so.1.0.0 
> file but no plain .so file. branch-0.23 works fine, but trunk and branch-2 
> are broken. This actually applies to both libhadoop.so and libhdfs.so.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9215) libhadoop.so doesn't exist (only libhadoop.so.1.0.0)

2013-01-16 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13555310#comment-13555310
 ] 

Arun C Murthy commented on HADOOP-9215:
---

Colin - can you please take a look at the failing tests? I'd really like to get 
this in, it's the only one blocking 2.0.3-alpha for now. Thanks!

> libhadoop.so doesn't exist (only libhadoop.so.1.0.0)
> 
>
> Key: HADOOP-9215
> URL: https://issues.apache.org/jira/browse/HADOOP-9215
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Thomas Graves
>Assignee: Colin Patrick McCabe
>Priority: Blocker
> Attachments: HADOOP-9215.001.patch
>
>
> Looks like none of the .so files are being built. They all have a .so.1.0.0 
> file but no plain .so file. branch-0.23 works fine, but trunk and branch-2 
> are broken. This actually applies to both libhadoop.so and libhdfs.so.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9134) Unified server side user groups mapping service

2013-01-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13555307#comment-13555307
 ] 

Hadoop QA commented on HADOOP-9134:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12565137/HADOOP-9134.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2057//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2057//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2057//console

This message is automatically generated.

> Unified server side user groups mapping service
> ---
>
> Key: HADOOP-9134
> URL: https://issues.apache.org/jira/browse/HADOOP-9134
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.0.3-alpha
>Reporter: Kai Zheng
> Attachments: HADOOP-9134.patch, HADOOP-9134.patch
>
>
> This proposes to provide/expose the server-side user group mapping service in 
> the NameNode to clients, so that user group mapping can be kept in a single 
> place and thus unified across all nodes and clients.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9209) Add shell command to dump file checksums

2013-01-16 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13555214#comment-13555214
 ] 

Kihwal Lee commented on HADOOP-9209:


bq. Does that mesh with your understanding?

Yes.

The block size is a factor in determining crcPerBlock, which is part of the 
algorithm name. But when the file size is less than the block size, 
crcPerBlock will be 0 (as in the test cases in the patch). The only case where 
this might confuse users is when two otherwise identical files have different 
preferred block sizes. If the files get appended to, then as soon as a file 
grows bigger than the block size of one of them, the two checksums and 
algorithm names will look different.
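
For reference, the checksum and algorithm name being discussed are visible
through the standard {{FileSystem}} API; a minimal sketch:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ShowChecksum {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // For HDFS this returns an MD5-of-MD5-of-CRC32 checksum whose
    // algorithm name embeds bytesPerCRC and crcPerBlock -- which is why
    // two files with different preferred block sizes can report
    // different-looking algorithm names once crcPerBlock is nonzero.
    // May be null for filesystems without checksum support.
    FileChecksum sum = fs.getFileChecksum(new Path(args[0]));
    System.out.println(sum.getAlgorithmName() + ": " + sum);
  }
}
{code}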



> Add shell command to dump file checksums
> 
>
> Key: HADOOP-9209
> URL: https://issues.apache.org/jira/browse/HADOOP-9209
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, tools
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hadoop-9209.txt, hadoop-9209.txt
>
>
> Occasionally while working with tools like distcp, or debugging certain 
> issues, it's useful to be able to quickly see the checksum of a file. We 
> currently have the APIs to efficiently calculate a checksum, but we don't 
> expose it to users. This JIRA is to add a "fs -checksum" command which dumps 
> the checksum information for the specified file(s).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9150) Unnecessary DNS resolution attempts for logical URIs

2013-01-16 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13555198#comment-13555198
 ] 

Daryn Sharp commented on HADOOP-9150:
-

Skimming the patch: to avoid adding more methods and complexity, maybe we 
should consider either of the following:
# Default impl of {{getCanonicalUri()}} just returns {{getUri()}}. Filesystems 
like DFS can specifically override {{getCanonicalUri}} to call 
{{NetUtils.getCanonicalUri}}. The advantage is that it won't preclude 
other/future logical filesystems from utilizing a port.
# {{getCanonicalUri()}} continues to call {{NetUtils.getCanonicalUri}}. 
Logical filesystems should have a default port of -1 (i.e., URI considers this 
as no port), so perhaps {{NetUtils.getCanonicalUri}} can just return the given 
uri if there's no default port.

I lean towards #1.
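
A minimal sketch of option #1 under stated assumptions (the class and method 
shapes are illustrative, not the actual patch; only 
{{NetUtils.getCanonicalUri}} is taken from the discussion above, and 8020 is 
just an example default port):
{noformat}
// Hypothetical sketch of option #1, not the actual patch.
abstract class FileSystemSketch {
  abstract java.net.URI getUri();

  // Default: return the URI untouched, so logical URIs are never DNS-resolved.
  java.net.URI getCanonicalUri() {
    return getUri();
  }
}

class DistributedFileSystemSketch extends FileSystemSketch {
  private final java.net.URI uri;

  DistributedFileSystemSketch(java.net.URI uri) { this.uri = uri; }

  @Override
  java.net.URI getUri() { return uri; }

  // Host-based filesystems opt back in to canonicalization (hostname
  // resolution plus a default port).
  @Override
  java.net.URI getCanonicalUri() {
    return org.apache.hadoop.net.NetUtils.getCanonicalUri(getUri(), 8020);
  }
}
{noformat}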

> Unnecessary DNS resolution attempts for logical URIs
> 
>
> Key: HADOOP-9150
> URL: https://issues.apache.org/jira/browse/HADOOP-9150
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, ha, performance, viewfs
>Affects Versions: 3.0.0, 2.0.2-alpha
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Critical
> Attachments: hadoop-9150.txt, hadoop-9150.txt, hadoop-9150.txt, 
> hadoop-9150.txt, log.txt, tracing-resolver.tgz
>
>
> In the FileSystem code, we accidentally try to DNS-resolve the logical name 
> before it is converted to an actual domain name. In some DNS setups, this can 
> cause a big slowdown; e.g., in one misconfigured cluster we saw a 2-3x drop in 
> terasort throughput, since every task wasted a lot of time waiting for slow 
> "not found" responses from DNS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9150) Unnecessary DNS resolution attempts for logical URIs

2013-01-16 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13555188#comment-13555188
 ] 

Daryn Sharp commented on HADOOP-9150:
-

bq. Hey Daryn. What do you think about changing canonicalizeLogicalUri to 
actually just remove the port? So far we've been canonicalizing these URIs by 
adding the default port, but in fact logical URIs don't really have ports. 
Would this have issues 'downstream' in stuff like the token code?

Where are you proposing to remove the ports?  In general or just viewfs?

> Unnecessary DNS resolution attempts for logical URIs
> 
>
> Key: HADOOP-9150
> URL: https://issues.apache.org/jira/browse/HADOOP-9150
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, ha, performance, viewfs
>Affects Versions: 3.0.0, 2.0.2-alpha
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Critical
> Attachments: hadoop-9150.txt, hadoop-9150.txt, hadoop-9150.txt, 
> hadoop-9150.txt, log.txt, tracing-resolver.tgz
>
>
> In the FileSystem code, we accidentally try to DNS-resolve the logical name 
> before it is converted to an actual domain name. In some DNS setups, this can 
> cause a big slowdown; e.g., in one misconfigured cluster we saw a 2-3x drop in 
> terasort throughput, since every task wasted a lot of time waiting for slow 
> "not found" responses from DNS.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9134) Unified server side user groups mapping service

2013-01-16 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-9134:
--

Attachment: HADOOP-9134.patch

Merged with the latest code and fixed the build issue.

> Unified server side user groups mapping service
> ---
>
> Key: HADOOP-9134
> URL: https://issues.apache.org/jira/browse/HADOOP-9134
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.0.3-alpha
>Reporter: Kai Zheng
> Attachments: HADOOP-9134.patch, HADOOP-9134.patch
>
>
> This proposes to provide/expose the server-side user group mapping service in 
> the NameNode to clients, so that user group mapping can be kept in a single 
> place and thus unified across all nodes and clients.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8139) Path does not allow metachars to be escaped

2013-01-16 Thread Thomas Graves (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-8139:
--

 Target Version/s:   (was: 0.23.3, 0.24.0)
Affects Version/s: (was: 0.24.0)
   (was: 0.23.0)
   3.0.0
   0.23.3

> Path does not allow metachars to be escaped
> ---
>
> Key: HADOOP-8139
> URL: https://issues.apache.org/jira/browse/HADOOP-8139
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.23.3, 3.0.0
>Reporter: Daryn Sharp
>Priority: Blocker
> Attachments: HADOOP-8139-2.patch, HADOOP-8139-3.patch, 
> HADOOP-8139-4.patch, HADOOP-8139-5.patch, HADOOP-8139-6.patch, 
> HADOOP-8139.patch, HADOOP-8139.patch
>
>
> Path converts "\" into "/", probably for Windows support?  This means it's 
> impossible for the user to escape metachars in a path name.  Glob expansion 
> can have deadly results.
> Here are the most egregious examples. A user accidentally creates a path like 
> "/user/me/*/file".  Now they want to remove it.
> {noformat}"hadoop fs -rmr -skipTrash '/user/me/\*'" becomes...
> "hadoop fs -rmr -skipTrash /user/me/*"{noformat}
> * User/Admin: Nuked their home directory or any given directory
> {noformat}"hadoop fs -rmr -skipTrash '\*'" becomes...
> "hadoop fs -rmr -skipTrash /*"{noformat}
> * User:  Deleted _everything_ they have access to on the cluster
> * Admin: *Nukes the entire cluster*
> Note: FsShell is shown for illustrative purposes; however, the problem is in 
> the Path object, not FsShell.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9215) libhadoop.so doesn't exist (only libhadoop.so.1.0.0)

2013-01-16 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13555124#comment-13555124
 ] 

Thomas Graves commented on HADOOP-9215:
---

Thanks, Colin, for taking this on.  I now see libhadoop.so and libhdfs.so in:

./hadoop-common-project/hadoop-common/target/native/libhadoop.so
./hadoop-hdfs-project/hadoop-hdfs/target/native/libhdfs.so

However, looking further, I don't see any *.so* files in the generated tarball 
or hadoop-dist.  I'm pretty sure that without this change I saw these:
./hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/lib/native/libhadoop.so.1.0.0
./hadoop-dist/target/hadoop-3.0.0-SNAPSHOT/lib/native/libhdfs.so.0.0.0


> libhadoop.so doesn't exist (only libhadoop.so.1.0.0)
> 
>
> Key: HADOOP-9215
> URL: https://issues.apache.org/jira/browse/HADOOP-9215
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.3-alpha
>Reporter: Thomas Graves
>Assignee: Colin Patrick McCabe
>Priority: Blocker
> Attachments: HADOOP-9215.001.patch
>
>
> Looks like none of the .so files are being built. They all have a .so.1.0.0 
> file but no plain .so file.  branch-0.23 works fine, but trunk and branch-2 
> are broken.
> This actually applies to both libhadoop.so and libhdfs.so.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9078) enhance unit-test coverage of class org.apache.hadoop.fs.FileContext

2013-01-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13555115#comment-13555115
 ] 

Hadoop QA commented on HADOOP-9078:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12565119/HADOOP-9078--b.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2056//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2056//console

This message is automatically generated.

> enhance unit-test coverage of class org.apache.hadoop.fs.FileContext
> 
>
> Key: HADOOP-9078
> URL: https://issues.apache.org/jira/browse/HADOOP-9078
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
> Attachments: HADOOP-9078--b.patch, HADOOP-9078-branch-0.23.patch, 
> HADOOP-9078-branch-2--b.patch, HADOOP-9078-branch-2--c.patch, 
> HADOOP-9078-branch-2.patch, HADOOP-9078.patch, 
> HADOOP-9078-patch-from-[trunk-gd]-to-[fb-HADOOP-9078-trunk-gd]-N1.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8816) HTTP Error 413 full HEAD if using kerberos authentication

2013-01-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13555054#comment-13555054
 ] 

Hudson commented on HADOOP-8816:


Integrated in Hadoop-Mapreduce-trunk #1315 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1315/])
HADOOP-8816. HTTP Error 413 full HEAD if using kerberos authentication. 
(moritzmoeller via tucu) (Revision 1433567)

 Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1433567
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServer.java


> HTTP Error 413 full HEAD if using kerberos authentication
> -
>
> Key: HADOOP-8816
> URL: https://issues.apache.org/jira/browse/HADOOP-8816
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Affects Versions: 2.0.1-alpha
> Environment: ubuntu linux with active directory kerberos.
>Reporter: Moritz Moeller
>Assignee: Moritz Moeller
> Fix For: 2.0.3-alpha
>
> Attachments: HADOOP-8816.patch, 
> hadoop-common-kerberos-increase-http-header-buffer-size.patch
>
>
> The HTTP Authentication: header is too large when using Kerberos, and the 
> request is rejected by Jetty because Jetty's default header size limit is 
> too low.
> Can be fixed by adding {{ret.setHeaderBufferSize(1024*128);}} in 
> org.apache.hadoop.http.HttpServer.createDefaultChannelConnector.
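
A minimal sketch of the fix described above, assuming the Jetty 6 
({{org.mortbay}}) connector API in use at the time; this is illustrative, not 
the committed patch:
{noformat}
import org.mortbay.jetty.Connector;
import org.mortbay.jetty.nio.SelectChannelConnector;

// Hypothetical sketch, not the committed patch.
public class ConnectorSketch {
  static Connector createDefaultChannelConnector() {
    SelectChannelConnector ret = new SelectChannelConnector();
    // Kerberos/SPNEGO Authorization headers can exceed Jetty's default
    // header buffer, yielding HTTP 413; raise the limit to 128 KB.
    ret.setHeaderBufferSize(1024 * 128);
    return ret;
  }
}
{noformat}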

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9106) Allow configuration of IPC connect timeout

2013-01-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13555056#comment-13555056
 ] 

Hudson commented on HADOOP-9106:


Integrated in Hadoop-Mapreduce-trunk #1315 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1315/])
HADOOP-9106. Allow configuration of IPC connect timeout. Contributed by 
Rober Parker. (Revision 1433747)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1433747
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestIPC.java


> Allow configuration of IPC connect timeout
> --
>
> Key: HADOOP-9106
> URL: https://issues.apache.org/jira/browse/HADOOP-9106
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Robert Parker
> Fix For: 2.0.3-alpha
>
> Attachments: HADOOP-9106v1.patch, HADOOP-9106v2.patch, 
> HADOOP-9106v3.patch, HADOOP-9106v4.patch
>
>
> Currently the connection timeout in Client.setupConnection() is hard-coded to 
> 20 seconds. This is unreasonable in some scenarios, such as HA failover, where 
> we want a faster failover time. We should allow this to be configured per-client.
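
A hypothetical sketch of the per-client override this enables (the key name is 
inferred from the core-default.xml change in the file list above; the 
5-second value is illustrative, in milliseconds):
{noformat}
import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
// Fail connection attempts after 5 s instead of the former fixed 20 s.
conf.setInt("ipc.client.connect.timeout", 5000);
{noformat}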

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9212) Potential deadlock in FileSystem.Cache/IPC/UGI

2013-01-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13555052#comment-13555052
 ] 

Hudson commented on HADOOP-9212:


Integrated in Hadoop-Mapreduce-trunk #1315 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1315/])
HADOOP-9212. Potential deadlock in FileSystem.Cache/IPC/UGI. (Revision 
1433879)

 Result = FAILURE
tomwhite : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1433879
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/Credentials.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java


> Potential deadlock in FileSystem.Cache/IPC/UGI
> --
>
> Key: HADOOP-9212
> URL: https://issues.apache.org/jira/browse/HADOOP-9212
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.2-alpha
>Reporter: Tom White
>Assignee: Tom White
> Fix For: 2.0.3-alpha
>
> Attachments: 1_jcarder_result_0.png, HADOOP-9212.patch, 
> HADOOP-9212.patch
>
>
> jcarder found a cycle which could lead to a potential deadlock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8712) Change default hadoop.security.group.mapping

2013-01-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13555048#comment-13555048
 ] 

Hudson commented on HADOOP-8712:


Integrated in Hadoop-Mapreduce-trunk #1315 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1315/])
HADOOP-8712. Change default hadoop.security.group.mapping to 
JniBasedUnixGroupsNetgroupMappingWithFallback. Contributed by Robert Parker. 
(Revision 1433624)

 Result = FAILURE
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1433624
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_permissions_guide.xml


> Change default hadoop.security.group.mapping
> 
>
> Key: HADOOP-8712
> URL: https://issues.apache.org/jira/browse/HADOOP-8712
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.0.2-alpha
>Reporter: Robert Parker
>Assignee: Robert Parker
>Priority: Minor
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: HADOOP-8712-v1.patch, HADOOP-8712-v2.patch
>
>
> Change the hadoop.security.group.mapping in core-site to 
> JniBasedUnixGroupsNetgroupMappingWithFallback
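
A sketch of what this default amounts to if set explicitly (the class name is 
taken from the summary above; the org.apache.hadoop.security package is an 
assumption):
{noformat}
import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
conf.set("hadoop.security.group.mapping",
    "org.apache.hadoop.security.JniBasedUnixGroupsNetgroupMappingWithFallback");
{noformat}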

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9217) Print thread dumps when hadoop-common tests fail

2013-01-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13555049#comment-13555049
 ] 

Hudson commented on HADOOP-9217:


Integrated in Hadoop-Mapreduce-trunk #1315 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1315/])
HADOOP-9217. Print thread dumps when hadoop-common tests fail. Contributed 
by Andrey Klochkov. (Revision 1433713)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1433713
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml


> Print thread dumps when hadoop-common tests fail
> 
>
> Key: HADOOP-9217
> URL: https://issues.apache.org/jira/browse/HADOOP-9217
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.0.2-alpha, 0.23.5
>Reporter: Andrey Klochkov
>Assignee: Andrey Klochkov
> Fix For: 2.0.3-alpha, 0.23.6
>
> Attachments: HADOOP-9217.patch
>
>
> Printing thread dumps when tests fail due to timeouts was introduced in 
> HADOOP-8755, but was enabled only in M/R, HDFS and Yarn. 
> It makes sense to enable it in hadoop-common as well. In particular, 
> TestZKFailoverController currently seems to be one of the most flaky tests in 
> trunk, and having thread dumps may help in debugging it.
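
For context, a hypothetical sketch of the surefire wiring involved 
(HADOOP-8755 added a TimedOutTestsListener; this exact pom.xml fragment is an 
assumption, not the committed change):
{noformat}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <properties>
      <property>
        <name>listener</name>
        <value>org.apache.hadoop.test.TimedOutTestsListener</value>
      </property>
    </properties>
  </configuration>
</plugin>
{noformat}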

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9078) enhance unit-test coverage of class org.apache.hadoop.fs.FileContext

2013-01-16 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky updated HADOOP-9078:
---

Attachment: HADOOP-9078--b.patch
HADOOP-9078-branch-2--c.patch

Remade the patch for branch-2 (version "c"): merged with incoming changes. 

> enhance unit-test coverage of class org.apache.hadoop.fs.FileContext
> 
>
> Key: HADOOP-9078
> URL: https://issues.apache.org/jira/browse/HADOOP-9078
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
> Attachments: HADOOP-9078--b.patch, HADOOP-9078-branch-0.23.patch, 
> HADOOP-9078-branch-2--b.patch, HADOOP-9078-branch-2--c.patch, 
> HADOOP-9078-branch-2.patch, HADOOP-9078.patch, 
> HADOOP-9078-patch-from-[trunk-gd]-to-[fb-HADOOP-9078-trunk-gd]-N1.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9078) enhance unit-test coverage of class org.apache.hadoop.fs.FileContext

2013-01-16 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky updated HADOOP-9078:
---

Attachment: (was: HADOOP-9078--b.patch)

> enhance unit-test coverage of class org.apache.hadoop.fs.FileContext
> 
>
> Key: HADOOP-9078
> URL: https://issues.apache.org/jira/browse/HADOOP-9078
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
> Attachments: HADOOP-9078-branch-0.23.patch, 
> HADOOP-9078-branch-2--b.patch, HADOOP-9078-branch-2.patch, HADOOP-9078.patch, 
> HADOOP-9078-patch-from-[trunk-gd]-to-[fb-HADOOP-9078-trunk-gd]-N1.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9106) Allow configuration of IPC connect timeout

2013-01-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13555008#comment-13555008
 ] 

Hudson commented on HADOOP-9106:


Integrated in Hadoop-Hdfs-trunk #1287 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1287/])
HADOOP-9106. Allow configuration of IPC connect timeout. Contributed by 
Rober Parker. (Revision 1433747)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1433747
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestIPC.java


> Allow configuration of IPC connect timeout
> --
>
> Key: HADOOP-9106
> URL: https://issues.apache.org/jira/browse/HADOOP-9106
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Robert Parker
> Fix For: 2.0.3-alpha
>
> Attachments: HADOOP-9106v1.patch, HADOOP-9106v2.patch, 
> HADOOP-9106v3.patch, HADOOP-9106v4.patch
>
>
> Currently the connection timeout in Client.setupConnection() is hard-coded to 
> 20 seconds. This is unreasonable in some scenarios, such as HA failover, where 
> we want a faster failover time. We should allow this to be configured per-client.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8816) HTTP Error 413 full HEAD if using kerberos authentication

2013-01-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13555006#comment-13555006
 ] 

Hudson commented on HADOOP-8816:


Integrated in Hadoop-Hdfs-trunk #1287 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1287/])
HADOOP-8816. HTTP Error 413 full HEAD if using kerberos authentication. 
(moritzmoeller via tucu) (Revision 1433567)

 Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1433567
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServer.java


> HTTP Error 413 full HEAD if using kerberos authentication
> -
>
> Key: HADOOP-8816
> URL: https://issues.apache.org/jira/browse/HADOOP-8816
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Affects Versions: 2.0.1-alpha
> Environment: ubuntu linux with active directory kerberos.
>Reporter: Moritz Moeller
>Assignee: Moritz Moeller
> Fix For: 2.0.3-alpha
>
> Attachments: HADOOP-8816.patch, 
> hadoop-common-kerberos-increase-http-header-buffer-size.patch
>
>
> The HTTP Authentication: header is too large when using Kerberos, and the 
> request is rejected by Jetty because Jetty's default header size limit is 
> too low.
> Can be fixed by adding {{ret.setHeaderBufferSize(1024*128);}} in 
> org.apache.hadoop.http.HttpServer.createDefaultChannelConnector.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9212) Potential deadlock in FileSystem.Cache/IPC/UGI

2013-01-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13555004#comment-13555004
 ] 

Hudson commented on HADOOP-9212:


Integrated in Hadoop-Hdfs-trunk #1287 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1287/])
HADOOP-9212. Potential deadlock in FileSystem.Cache/IPC/UGI. (Revision 
1433879)

 Result = FAILURE
tomwhite : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1433879
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/Credentials.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java


> Potential deadlock in FileSystem.Cache/IPC/UGI
> --
>
> Key: HADOOP-9212
> URL: https://issues.apache.org/jira/browse/HADOOP-9212
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.2-alpha
>Reporter: Tom White
>Assignee: Tom White
> Fix For: 2.0.3-alpha
>
> Attachments: 1_jcarder_result_0.png, HADOOP-9212.patch, 
> HADOOP-9212.patch
>
>
> jcarder found a cycle which could lead to a potential deadlock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8712) Change default hadoop.security.group.mapping

2013-01-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13555000#comment-13555000
 ] 

Hudson commented on HADOOP-8712:


Integrated in Hadoop-Hdfs-trunk #1287 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1287/])
HADOOP-8712. Change default hadoop.security.group.mapping to 
JniBasedUnixGroupsNetgroupMappingWithFallback. Contributed by Robert Parker. 
(Revision 1433624)

 Result = FAILURE
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1433624
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_permissions_guide.xml


> Change default hadoop.security.group.mapping
> 
>
> Key: HADOOP-8712
> URL: https://issues.apache.org/jira/browse/HADOOP-8712
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.0.2-alpha
>Reporter: Robert Parker
>Assignee: Robert Parker
>Priority: Minor
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: HADOOP-8712-v1.patch, HADOOP-8712-v2.patch
>
>
> Change the hadoop.security.group.mapping in core-site to 
> JniBasedUnixGroupsNetgroupMappingWithFallback

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9217) Print thread dumps when hadoop-common tests fail

2013-01-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13555001#comment-13555001
 ] 

Hudson commented on HADOOP-9217:


Integrated in Hadoop-Hdfs-trunk #1287 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1287/])
HADOOP-9217. Print thread dumps when hadoop-common tests fail. Contributed 
by Andrey Klochkov. (Revision 1433713)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1433713
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml


> Print thread dumps when hadoop-common tests fail
> 
>
> Key: HADOOP-9217
> URL: https://issues.apache.org/jira/browse/HADOOP-9217
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.0.2-alpha, 0.23.5
>Reporter: Andrey Klochkov
>Assignee: Andrey Klochkov
> Fix For: 2.0.3-alpha, 0.23.6
>
> Attachments: HADOOP-9217.patch
>
>
> Printing thread dumps when tests fail due to timeouts was introduced in 
> HADOOP-8755, but was enabled only in M/R, HDFS and Yarn. 
> It makes sense to enable it in hadoop-common as well. In particular, 
> TestZKFailoverController currently seems to be one of the most flaky tests in 
> trunk, and having thread dumps may help in debugging it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9217) Print thread dumps when hadoop-common tests fail

2013-01-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554986#comment-13554986
 ] 

Hudson commented on HADOOP-9217:


Integrated in Hadoop-Hdfs-0.23-Build #496 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/496/])
HADOOP-9217. Merging change 1433713 from trunk (Revision 1433718)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1433718
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/pom.xml


> Print thread dumps when hadoop-common tests fail
> 
>
> Key: HADOOP-9217
> URL: https://issues.apache.org/jira/browse/HADOOP-9217
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.0.2-alpha, 0.23.5
>Reporter: Andrey Klochkov
>Assignee: Andrey Klochkov
> Fix For: 2.0.3-alpha, 0.23.6
>
> Attachments: HADOOP-9217.patch
>
>
> Printing thread dumps when tests fail due to timeouts was introduced in 
> HADOOP-8755, but was enabled only in M/R, HDFS and Yarn. 
> It makes sense to enable it in hadoop-common as well. In particular, 
> TestZKFailoverController currently seems to be one of the most flaky tests in 
> trunk, and having thread dumps may help in debugging it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9205) Java7: path to native libraries should be passed to tests via -Djava.library.path rather than env.LD_LIBRARY_PATH

2013-01-16 Thread Ivan A. Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan A. Veselovsky updated HADOOP-9205:
---

Resolution: Invalid
Status: Resolved  (was: Patch Available)

The described problem appears to be reproducible *only* on JDK 7 installations 
patched to let non-root users use privileged ports (<1024) via Linux 
capabilities. The exact patching procedure looks like the following:
{noformat}
patchelf --set-rpath ${J7_HOME}/jre/lib/amd64/jli ${J7_HOME}/bin/java
setcap cap_net_bind_service=+epi ${J7_HOME}/bin/java
patchelf --set-rpath ${J7_HOME}/jre/lib/amd64/jli ${J7_HOME}/jre/bin/java
setcap cap_net_bind_service=+epi ${J7_HOME}/jre/bin/java
{noformat}
(This patching is needed to run some security tests, because they use ports 
below 1024 and there is no simple way to reconfigure these ports to higher 
values.)

So the problem described in this issue appears to be a side effect of that 
patch. On a clean JDK 7 installed from scratch the problem is *not* 
reproducible, as both Thomas and Kihwal stated.

The command to verify:
{noformat}
mvn clean test -Pnative -Dtest=org.apache.hadoop.util.TestNativeCodeLoader -Drequire.test.libhadoop=true
{noformat}

So I'm closing this issue as invalid.
Sorry for the mess, and many thanks for the information provided.

> Java7: path to native libraries should be passed to tests via 
> -Djava.library.path rather than env.LD_LIBRARY_PATH
> -
>
> Key: HADOOP-9205
> URL: https://issues.apache.org/jira/browse/HADOOP-9205
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
> Attachments: HADOOP-9205.patch
>
>
> Currently the path to native libraries is passed to unit tests via the 
> environment variable LD_LIBRARY_PATH. This is okay for Java 6, but does not 
> work for Java 7, since Java 7 ignores this environment variable.
> So, to run the tests with the native implementation on Java 7, one needs to 
> pass the paths to the native libs via the -Djava.library.path system property 
> rather than the LD_LIBRARY_PATH env variable.
> The suggested patch fixes the problem by setting the paths to the native libs 
> using both LD_LIBRARY_PATH and the -Djava.library.path property. This way the 
> tests work equally on both Java 6 and Java 7.
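
A hypothetical illustration of the two mechanisms the description contrasts 
(the library path is a placeholder; {{argLine}} is the standard surefire 
property for extra JVM arguments):
{noformat}
# Java 6: the environment variable is honored by the forked test JVM
LD_LIBRARY_PATH=/path/to/native/libs mvn test

# Java 7: the env variable is ignored, so pass the system property instead
mvn test -DargLine="-Djava.library.path=/path/to/native/libs"
{noformat}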

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9212) Potential deadlock in FileSystem.Cache/IPC/UGI

2013-01-16 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-9212:
--

   Resolution: Fixed
Fix Version/s: 2.0.3-alpha
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I just committed this.

> Potential deadlock in FileSystem.Cache/IPC/UGI
> --
>
> Key: HADOOP-9212
> URL: https://issues.apache.org/jira/browse/HADOOP-9212
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.2-alpha
>Reporter: Tom White
>Assignee: Tom White
> Fix For: 2.0.3-alpha
>
> Attachments: 1_jcarder_result_0.png, HADOOP-9212.patch, 
> HADOOP-9212.patch
>
>
> jcarder found a cycle which could lead to a potential deadlock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9106) Allow configuration of IPC connect timeout

2013-01-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554928#comment-13554928
 ] 

Hudson commented on HADOOP-9106:


Integrated in Hadoop-Yarn-trunk #98 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/98/])
HADOOP-9106. Allow configuration of IPC connect timeout. Contributed by 
Rober Parker. (Revision 1433747)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1433747
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestIPC.java


> Allow configuration of IPC connect timeout
> --
>
> Key: HADOOP-9106
> URL: https://issues.apache.org/jira/browse/HADOOP-9106
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Robert Parker
> Fix For: 2.0.3-alpha
>
> Attachments: HADOOP-9106v1.patch, HADOOP-9106v2.patch, 
> HADOOP-9106v3.patch, HADOOP-9106v4.patch
>
>
> Currently the connection timeout in Client.setupConnection() is hard-coded to 
> 20 seconds. This is unreasonable in some scenarios, such as HA failover, where 
> we want a faster failover time. We should allow this to be configured per-client.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8816) HTTP Error 413 full HEAD if using kerberos authentication

2013-01-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554926#comment-13554926
 ] 

Hudson commented on HADOOP-8816:


Integrated in Hadoop-Yarn-trunk #98 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/98/])
HADOOP-8816. HTTP Error 413 full HEAD if using kerberos authentication. 
(moritzmoeller via tucu) (Revision 1433567)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1433567
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestHttpServer.java


> HTTP Error 413 full HEAD if using kerberos authentication
> -
>
> Key: HADOOP-8816
> URL: https://issues.apache.org/jira/browse/HADOOP-8816
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Affects Versions: 2.0.1-alpha
> Environment: ubuntu linux with active directory kerberos.
>Reporter: Moritz Moeller
>Assignee: Moritz Moeller
> Fix For: 2.0.3-alpha
>
> Attachments: HADOOP-8816.patch, 
> hadoop-common-kerberos-increase-http-header-buffer-size.patch
>
>
> The HTTP Authentication: header is too large when using Kerberos, and the 
> request is rejected by Jetty because Jetty's default header size limit is 
> too low.
> Can be fixed by adding {{ret.setHeaderBufferSize(1024*128);}} in 
> org.apache.hadoop.http.HttpServer.createDefaultChannelConnector.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9212) Potential deadlock in FileSystem.Cache/IPC/UGI

2013-01-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554924#comment-13554924
 ] 

Hudson commented on HADOOP-9212:


Integrated in Hadoop-Yarn-trunk #98 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/98/])
HADOOP-9212. Potential deadlock in FileSystem.Cache/IPC/UGI. (Revision 
1433879)

 Result = SUCCESS
tomwhite : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1433879
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/Credentials.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java


> Potential deadlock in FileSystem.Cache/IPC/UGI
> --
>
> Key: HADOOP-9212
> URL: https://issues.apache.org/jira/browse/HADOOP-9212
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.2-alpha
>Reporter: Tom White
>Assignee: Tom White
> Attachments: 1_jcarder_result_0.png, HADOOP-9212.patch, 
> HADOOP-9212.patch
>
>
> jcarder found a cycle which could lead to a potential deadlock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8712) Change default hadoop.security.group.mapping

2013-01-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554920#comment-13554920
 ] 

Hudson commented on HADOOP-8712:


Integrated in Hadoop-Yarn-trunk #98 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/98/])
HADOOP-8712. Change default hadoop.security.group.mapping to 
JniBasedUnixGroupsNetgroupMappingWithFallback. Contributed by Robert Parker. 
(Revision 1433624)

 Result = SUCCESS
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1433624
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_permissions_guide.xml


> Change default hadoop.security.group.mapping
> 
>
> Key: HADOOP-8712
> URL: https://issues.apache.org/jira/browse/HADOOP-8712
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.0.2-alpha
>Reporter: Robert Parker
>Assignee: Robert Parker
>Priority: Minor
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: HADOOP-8712-v1.patch, HADOOP-8712-v2.patch
>
>
> Change the hadoop.security.group.mapping in core-site to 
> JniBasedUnixGroupsNetgroupMappingWithFallback

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9217) Print thread dumps when hadoop-common tests fail

2013-01-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554921#comment-13554921
 ] 

Hudson commented on HADOOP-9217:


Integrated in Hadoop-Yarn-trunk #98 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/98/])
HADOOP-9217. Print thread dumps when hadoop-common tests fail. Contributed 
by Andrey Klochkov. (Revision 1433713)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1433713
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml


> Print thread dumps when hadoop-common tests fail
> 
>
> Key: HADOOP-9217
> URL: https://issues.apache.org/jira/browse/HADOOP-9217
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.0.2-alpha, 0.23.5
>Reporter: Andrey Klochkov
>Assignee: Andrey Klochkov
> Fix For: 2.0.3-alpha, 0.23.6
>
> Attachments: HADOOP-9217.patch
>
>
> Printing thread dumps when tests fail due to timeouts was introduced in 
> HADOOP-8755, but was enabled only in M/R, HDFS and Yarn. 
> It makes sense to enable it in hadoop-common as well. In particular, 
> TestZKFailoverController currently seems to be one of the most flaky tests in 
> trunk, and having thread dumps may help in debugging it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9212) Potential deadlock in FileSystem.Cache/IPC/UGI

2013-01-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554907#comment-13554907
 ] 

Hudson commented on HADOOP-9212:


Integrated in Hadoop-trunk-Commit #3249 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3249/])
HADOOP-9212. Potential deadlock in FileSystem.Cache/IPC/UGI. (Revision 
1433879)

 Result = SUCCESS
tomwhite : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1433879
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/Credentials.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java


> Potential deadlock in FileSystem.Cache/IPC/UGI
> --
>
> Key: HADOOP-9212
> URL: https://issues.apache.org/jira/browse/HADOOP-9212
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.2-alpha
>Reporter: Tom White
>Assignee: Tom White
> Attachments: 1_jcarder_result_0.png, HADOOP-9212.patch, 
> HADOOP-9212.patch
>
>
> jcarder found a cycle which could lead to a potential deadlock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9210) bad mirror in download list

2013-01-16 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13554838#comment-13554838
 ] 

Steve Loughran commented on HADOOP-9210:


I raised it with infrastructure:

# the mirror list is http://www.us.apache.org/mirrors/
# after 3 days a mirror stops being suggested to anyone
# after 28 days it is dropped from the mirror list entirely

Alliedquotes has been down for 12 days, so it shouldn't be suggested as a 
default link; in another two weeks it won't be listed at all.

> bad mirror in download list
> ---
>
> Key: HADOOP-9210
> URL: https://issues.apache.org/jira/browse/HADOOP-9210
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Andy Isaacson
>Priority: Minor
>
> The http://hadoop.apache.org/releases.html page links to 
> http://www.apache.org/dyn/closer.cgi/hadoop/common/ which provides a list of 
> mirrors.  The first one on the list (for me) is 
> http://www.alliedquotes.com/mirrors/apache/hadoop/common/ which is 404.
> I checked the rest of the mirrors in the list and only alliedquotes is 404.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira