[jira] [Commented] (HADOOP-1) initial import of code from Nutch

2014-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247905#comment-14247905
 ] 

Hudson commented on HADOOP-1:
-

SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #711 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/711/])
HBASE-12493 User class should provide a way to re-use existing token - addendum 
for hadoop-1 (tedyu: rev d6d22113c93d252b6b206d74e98da73273ba5a3a)
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/security/token/TokenUtil.java


> initial import of code from Nutch
> -
>
> Key: HADOOP-1
> URL: https://issues.apache.org/jira/browse/HADOOP-1
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Doug Cutting
>Assignee: Doug Cutting
> Fix For: 0.1.0
>
>
> The initial code for Hadoop will be copied from Nutch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11413) Remove unused CryptoCodec in org.apache.hadoop.fs.Hdfs

2014-12-15 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HADOOP-11413:

Status: Patch Available  (was: Open)

> Remove unused CryptoCodec in org.apache.hadoop.fs.Hdfs
> --
>
> Key: HADOOP-11413
> URL: https://issues.apache.org/jira/browse/HADOOP-11413
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Yi Liu
>Assignee: Yi Liu
>Priority: Minor
> Attachments: HADOOP-11413.001.patch
>
>
> In org.apache.hadoop.fs.Hdfs, the {{CryptoCodec}} is unused, and we can 
> remove it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11413) Remove unused CryptoCodec in org.apache.hadoop.fs.Hdfs

2014-12-15 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HADOOP-11413:

Attachment: HADOOP-11413.001.patch

> Remove unused CryptoCodec in org.apache.hadoop.fs.Hdfs
> --
>
> Key: HADOOP-11413
> URL: https://issues.apache.org/jira/browse/HADOOP-11413
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Yi Liu
>Assignee: Yi Liu
>Priority: Minor
> Attachments: HADOOP-11413.001.patch
>
>
> In org.apache.hadoop.fs.Hdfs, the {{CryptoCodec}} is unused, and we can 
> remove it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11387) Simplify NetUtils#canonicalizeHost()

2014-12-15 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247878#comment-14247878
 ] 

Jonathan Eagles commented on HADOOP-11387:
--

Please hold off on checking this in for at least 24 hours so I can check the 
implications of the change. Also, please update the title. This may have 
negative performance implications that I need to check on.

Jon

> Simplify NetUtils#canonicalizeHost()
> 
>
> Key: HADOOP-11387
> URL: https://issues.apache.org/jira/browse/HADOOP-11387
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
> Attachments: HADOOP-11387-121514-1.patch, 
> HADOOP-11387-121514-2.patch, HADOOP-11387-121514.patch
>
>
> Currently {{NetUtils#canonicalizeHost}} uses a {{ConcurrentHashMap}} to cache 
> the canonicalized hostname.
> {code}
>   private static String canonicalizeHost(String host) {
> // check if the host has already been canonicalized
> String fqHost = canonicalizedHostCache.get(host);
> if (fqHost == null) {
>   try {
> fqHost = SecurityUtil.getByName(host).getHostName();
> // slight race condition, but won't hurt
> canonicalizedHostCache.put(host, fqHost);
>   } catch (UnknownHostException e) {
> fqHost = host;
>   }
> }
> return fqHost;
>   }
> {code}
> The code triggers a findbugs warning. The cache is used for caching NN 
> resolution for the client-side {{FileSystem}} objects. In most of the use 
> cases there are at most one or two instances in the cache. This jira proposes 
> to eliminate the findbugs warnings and to simplify the code using 
> {{CacheMap}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11413) Remove unused CryptoCodec in org.apache.hadoop.fs.Hdfs

2014-12-15 Thread Yi Liu (JIRA)
Yi Liu created HADOOP-11413:
---

 Summary: Remove unused CryptoCodec in org.apache.hadoop.fs.Hdfs
 Key: HADOOP-11413
 URL: https://issues.apache.org/jira/browse/HADOOP-11413
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.6.0
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor


In org.apache.hadoop.fs.Hdfs, the {{CryptoCodec}} is unused, and we can remove 
it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-1) initial import of code from Nutch

2014-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247874#comment-14247874
 ] 

Hudson commented on HADOOP-1:
-

SUCCESS: Integrated in HBase-0.98 #745 (See 
[https://builds.apache.org/job/HBase-0.98/745/])
HBASE-12493 User class should provide a way to re-use existing token - addendum 
for hadoop-1 (tedyu: rev d6d22113c93d252b6b206d74e98da73273ba5a3a)
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/security/token/TokenUtil.java


> initial import of code from Nutch
> -
>
> Key: HADOOP-1
> URL: https://issues.apache.org/jira/browse/HADOOP-1
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Doug Cutting
>Assignee: Doug Cutting
> Fix For: 0.1.0
>
>
> The initial code for Hadoop will be copied from Nutch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11371) Shell#runCommand may miss output on stderr

2014-12-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247871#comment-14247871
 ] 

Hadoop QA commented on HADOOP-11371:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12687421/HADOOP-11371.001.patch
  against trunk revision c379e10.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5279//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5279//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5279//console

This message is automatically generated.

> Shell#runCommand may miss output on stderr
> --
>
> Key: HADOOP-11371
> URL: https://issues.apache.org/jira/browse/HADOOP-11371
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>Priority: Minor
> Attachments: HADOOP-11371.001.patch
>
>
> Shell#runCommand uses a BufferedReader to read complete lines. Hence, if the 
> output is not terminated by a newline, or the thread was interrupted before a 
> newline was produced by the child process, the last portion is not consumed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11409) FileContext.getFileContext can stack overflow if default fs misconfigured

2014-12-15 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247835#comment-14247835
 ] 

Gera Shegalov commented on HADOOP-11409:


findbugs report appears unrelated. 

> FileContext.getFileContext can stack overflow if default fs misconfigured
> -
>
> Key: HADOOP-11409
> URL: https://issues.apache.org/jira/browse/HADOOP-11409
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Jason Lowe
>Assignee: Gera Shegalov
> Attachments: HADOOP-11409.001.patch
>
>
> If the default filesystem is misconfigured such that it doesn't have a scheme 
> then FileContext.getFileContext(URI, Configuration) will call 
> FileContext.getFileContext(Configuration) which in turn calls the former and 
> we loop until the stack explodes.
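
A stripped-down illustration of the cycle (hypothetical code, not the actual 
FileContext implementation):

{code}
// Hypothetical minimal reproduction of the mutual recursion.
import java.net.URI;

public class StackOverflowDemo {
  static Object getFileContext(URI uri) {
    if (uri.getScheme() == null) {
      return getFileContext();  // no scheme: fall back to the default FS...
    }
    return new Object();
  }
  static Object getFileContext() {
    // ...but the default is the same scheme-less URI, so we call the former again
    return getFileContext(URI.create("defaultFsWithoutScheme"));
  }
  public static void main(String[] args) {
    getFileContext();  // throws StackOverflowError
  }
}
{code}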



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11371) Shell#runCommand may miss output on stderr

2014-12-15 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HADOOP-11371:
---
Target Version/s: 2.7.0
  Status: Patch Available  (was: Open)

> Shell#runCommand may miss output on stderr
> --
>
> Key: HADOOP-11371
> URL: https://issues.apache.org/jira/browse/HADOOP-11371
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>Priority: Minor
> Attachments: HADOOP-11371.001.patch
>
>
> Shell#runCommand uses a BufferedReader to read complete lines. Hence, if the 
> output is not terminated by a newline, or the thread was interrupted before a 
> newline was produced by the child process, the last portion is not consumed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-11371) Shell#runCommand may miss output on stderr

2014-12-15 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov reassigned HADOOP-11371:
--

Assignee: Gera Shegalov

> Shell#runCommand may miss output on stderr
> --
>
> Key: HADOOP-11371
> URL: https://issues.apache.org/jira/browse/HADOOP-11371
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
>Priority: Minor
> Attachments: HADOOP-11371.001.patch
>
>
> Shell#runCommand uses a BufferedReader to read complete lines. Hence, if the 
> output is not terminated by a newline, or the thread was interrupted before a 
> newline was produced by the child process, the last portion is not consumed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11371) Shell#runCommand may miss output on stderr

2014-12-15 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HADOOP-11371:
---
Attachment: HADOOP-11371.001.patch

I added a regression test that would catch this problem. However, it turns out 
that, although the javadoc does not mention it, OpenJDK's 
java.io.BufferedReader#readLine() actually [returns the incomplete line on 
EOF|http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/java/io/BufferedReader.java#317].

I am not sure whether we can rely on this with other JDKs.

# This patch eliminates array copies in BufferedReader for readLine.
# We also don't need to use a synchronized {{StringBuffer}} for errMsg because 
there is already a synchronization point via {{joinThread(errThread);}}.
# No need to feed an extra line.separator.
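
A minimal sketch of the idea, assuming we read raw characters instead of lines 
(my illustration, not the attached patch):

{code}
// Hypothetical sketch: drain a stream with a plain Reader so that trailing
// output without a newline is still captured, and without per-line copies.
// Assumes java.io.* and java.nio.charset.Charset imports.
static String drain(InputStream in, Charset cs) throws IOException {
  StringBuilder sb = new StringBuilder();
  Reader reader = new InputStreamReader(in, cs);
  char[] buf = new char[1024];
  int n;
  while ((n = reader.read(buf)) != -1) {
    sb.append(buf, 0, n);  // keeps the final fragment even without a newline
  }
  return sb.toString();
}
{code}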

 

> Shell#runCommand may miss output on stderr
> --
>
> Key: HADOOP-11371
> URL: https://issues.apache.org/jira/browse/HADOOP-11371
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Gera Shegalov
> Attachments: HADOOP-11371.001.patch
>
>
> Shell#runCommand uses a BufferedReader to read complete lines. Hence, if the 
> output is not terminated by a newline, or the thread was interrupted before a 
> newline was produced by the child process, the last portion is not consumed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11371) Shell#runCommand may miss output on stderr

2014-12-15 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HADOOP-11371:
---
Priority: Minor  (was: Major)

> Shell#runCommand may miss output on stderr
> --
>
> Key: HADOOP-11371
> URL: https://issues.apache.org/jira/browse/HADOOP-11371
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Gera Shegalov
>Priority: Minor
> Attachments: HADOOP-11371.001.patch
>
>
> Shell#runCommand uses a BufferedReader to read complete lines. Hence, if the 
> output is not terminated by a newline, or the thread was interrupted before a 
> newline was produced by the child process, the last portion is not consumed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11387) Simplify NetUtils#canonicalizeHost()

2014-12-15 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-11387:

Description: 
Currently {{NetUtils#canonicalizeHost}} uses a {{ConcurrentHashMap}} to cache 
the canonicalized hostname.

{code}
  private static String canonicalizeHost(String host) {
// check if the host has already been canonicalized
String fqHost = canonicalizedHostCache.get(host);
if (fqHost == null) {
  try {
fqHost = SecurityUtil.getByName(host).getHostName();
// slight race condition, but won't hurt
canonicalizedHostCache.put(host, fqHost);
  } catch (UnknownHostException e) {
fqHost = host;
  }
}
return fqHost;
  }
{code}

The code triggers a findbugs warning. The cache is used for caching NN 
resolution for the client-side {{FileSystem}} objects. In most of the use cases 
there are at most one or two instances in the cache. This jira proposes to 
eliminate the findbugs warnings and to simplify the code using {{CacheMap}}.

  was:
Currently {{NetUtils#canonicalizeHost}} uses a {{ConcurrentHashMap}} to cache 
the canonicalized hostname.

{code}
  private static String canonicalizeHost(String host) {
// check if the host has already been canonicalized
String fqHost = canonicalizedHostCache.get(host);
if (fqHost == null) {
  try {
fqHost = SecurityUtil.getByName(host).getHostName();
// slight race condition, but won't hurt
canonicalizedHostCache.put(host, fqHost);
  } catch (UnknownHostException e) {
fqHost = host;
  }
}
return fqHost;
  }
{code}

The code can be simplified using {{CacheMap}}.


> Simplify NetUtils#canonicalizeHost()
> 
>
> Key: HADOOP-11387
> URL: https://issues.apache.org/jira/browse/HADOOP-11387
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
> Attachments: HADOOP-11387-121514-1.patch, 
> HADOOP-11387-121514-2.patch, HADOOP-11387-121514.patch
>
>
> Currently {{NetUtils#canonicalizeHost}} uses a {{ConcurrentHashMap}} to cache 
> the canonicalized hostname.
> {code}
>   private static String canonicalizeHost(String host) {
> // check if the host has already been canonicalized
> String fqHost = canonicalizedHostCache.get(host);
> if (fqHost == null) {
>   try {
> fqHost = SecurityUtil.getByName(host).getHostName();
> // slight race condition, but won't hurt
> canonicalizedHostCache.put(host, fqHost);
>   } catch (UnknownHostException e) {
> fqHost = host;
>   }
> }
> return fqHost;
>   }
> {code}
> The code triggers a findbugs warning. The cache is used for caching NN 
> resolution for the client-side {{FileSystem}} objects. In most of the use 
> cases there are at most one or two instances in the cache. This jira proposes 
> to eliminate the findbugs warnings and to simplify the code using 
> {{CacheMap}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11387) Simplify NetUtils#canonicalizeHost()

2014-12-15 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247815#comment-14247815
 ] 

Haohui Mai commented on HADOOP-11387:
-

bq. Have a look at TEZ-1526 to see my earlier concerns about the performance 
and memory implications of using this guava library in Hadoop.

As [~gtCarrera9] pointed out, the cache is only for caching the resolution of 
the NN by the client-side {{FileSystem}} objects. In common cases there should 
be only one or two instances in the cache, so in my opinion the guava cache is 
good enough for this use case.

bq. That is not documented in this jira. If this is truly a simplification 
jira then it shouldn't block a clean Jenkins build. Can you please comment on 
this?

I'll update the description accordingly.

> Simplify NetUtils#canonicalizeHost()
> 
>
> Key: HADOOP-11387
> URL: https://issues.apache.org/jira/browse/HADOOP-11387
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
> Attachments: HADOOP-11387-121514-1.patch, 
> HADOOP-11387-121514-2.patch, HADOOP-11387-121514.patch
>
>
> Currently {{NetUtils#canonicalizeHost}} uses a {{ConcurrentHashMap}} to cache 
> the canonicalized hostname.
> {code}
>   private static String canonicalizeHost(String host) {
> // check if the host has already been canonicalized
> String fqHost = canonicalizedHostCache.get(host);
> if (fqHost == null) {
>   try {
> fqHost = SecurityUtil.getByName(host).getHostName();
> // slight race condition, but won't hurt
> canonicalizedHostCache.put(host, fqHost);
>   } catch (UnknownHostException e) {
> fqHost = host;
>   }
> }
> return fqHost;
>   }
> {code}
> The code can be simplified using {{CacheMap}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11387) Simplify NetUtils#canonicalizeHost()

2014-12-15 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247778#comment-14247778
 ] 

Li Lu commented on HADOOP-11387:


The two unit test failures are both timeout-related. They appear to be 
unrelated to the changes in this JIRA, and I could not reproduce them. From the 
report, the two findbugs warnings are completely orthogonal to this JIRA. I'm 
looking into the performance issues, though.

> Simplify NetUtils#canonicalizeHost()
> 
>
> Key: HADOOP-11387
> URL: https://issues.apache.org/jira/browse/HADOOP-11387
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
> Attachments: HADOOP-11387-121514-1.patch, 
> HADOOP-11387-121514-2.patch, HADOOP-11387-121514.patch
>
>
> Currently {{NetUtils#canonicalizeHost}} uses a {{ConcurrentHashMap}} to cache 
> the canonicalized hostname.
> {code}
>   private static String canonicalizeHost(String host) {
> // check if the host has already been canonicalized
> String fqHost = canonicalizedHostCache.get(host);
> if (fqHost == null) {
>   try {
> fqHost = SecurityUtil.getByName(host).getHostName();
> // slight race condition, but won't hurt
> canonicalizedHostCache.put(host, fqHost);
>   } catch (UnknownHostException e) {
> fqHost = host;
>   }
> }
> return fqHost;
>   }
> {code}
> The code can be simplified using {{CacheMap}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11403) Solaris does not support sys_errlist requires use of strerror instead

2014-12-15 Thread Malcolm Kavalsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Malcolm Kavalsky updated HADOOP-11403:
--
Hadoop Flags: Reviewed
  Status: Patch Available  (was: In Progress)

> Solaris does not support sys_errlist requires use of strerror instead
> -
>
> Key: HADOOP-11403
> URL: https://issues.apache.org/jira/browse/HADOOP-11403
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.5.0, 2.4.0, 2.3.0, 2.2.0
> Environment: Solaris 11.1 (Sparc, Intel), Linux x86
>Reporter: Malcolm Kavalsky
>Assignee: Malcolm Kavalsky
>  Labels: native, newbie, patch, solaris, terror
> Fix For: 2.6.0
>
> Attachments: HADOOP-11403.001.patch, HADOOP-11403.002.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> sys_errlist has been removed from Solaris. The new interface is strerror.  
> Wherever sys_errlist is accessed, we should switch to strerror instead.
> We already have an interface function, terror, which can encapsulate this 
> functionality, so we should use it instead of accessing sys_errlist directly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11403) Solaris does not support sys_errlist requires use of strerror instead

2014-12-15 Thread Malcolm Kavalsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Malcolm Kavalsky updated HADOOP-11403:
--
Attachment: HADOOP-11403.002.patch

Added comment to terror explaining that Solaris does not support sys_errlist

> Solaris does not support sys_errlist requires use of strerror instead
> -
>
> Key: HADOOP-11403
> URL: https://issues.apache.org/jira/browse/HADOOP-11403
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.2.0, 2.3.0, 2.4.0, 2.5.0
> Environment: Solaris 11.1 (Sparc, Intel), Linux x86
>Reporter: Malcolm Kavalsky
>Assignee: Malcolm Kavalsky
>  Labels: native, newbie, patch, solaris, terror
> Fix For: 2.6.0
>
> Attachments: HADOOP-11403.001.patch, HADOOP-11403.002.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> sys_errlist has been removed from Solaris. The new interface is strerror.  
> Wherever sys_errlist is accessed, we should switch to strerror instead.
> We already have an interface function, terror, which can encapsulate this 
> functionality, so we should use it instead of accessing sys_errlist directly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11387) Simplify NetUtils#canonicalizeHost()

2014-12-15 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247666#comment-14247666
 ] 

Jonathan Eagles commented on HADOOP-11387:
--

A couple of things.

1) The performance of this CacheLoader is several orders of magnitude worse 
than the concurrent library for our use case. Have a look at TEZ-1526 to see my 
earlier concerns about the performance and memory implications of using this 
guava library in Hadoop.

2) Judging by the summary and description, the purpose of this change is to 
make something simpler, but you say above that it blocks a clean Jenkins run. 
That is not documented in this jira. If this is truly a simplification jira 
then it shouldn't block a clean Jenkins build. Can you please comment on this?


> Simplify NetUtils#canonicalizeHost()
> 
>
> Key: HADOOP-11387
> URL: https://issues.apache.org/jira/browse/HADOOP-11387
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
> Attachments: HADOOP-11387-121514-1.patch, 
> HADOOP-11387-121514-2.patch, HADOOP-11387-121514.patch
>
>
> Currently {{NetUtils#canonicalizeHost}} uses a {{ConcurrentHashMap}} to cache 
> the canonicalized hostname.
> {code}
>   private static String canonicalizeHost(String host) {
> // check if the host has already been canonicalized
> String fqHost = canonicalizedHostCache.get(host);
> if (fqHost == null) {
>   try {
> fqHost = SecurityUtil.getByName(host).getHostName();
> // slight race condition, but won't hurt
> canonicalizedHostCache.put(host, fqHost);
>   } catch (UnknownHostException e) {
> fqHost = host;
>   }
> }
> return fqHost;
>   }
> {code}
> The code can be simplified using {{CacheMap}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11412) POMs mention "The Apache Software License" rather than "Apache License"

2014-12-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247660#comment-14247660
 ] 

Hadoop QA commented on HADOOP-11412:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12687381/HADOOP-11412.patch
  against trunk revision a095622.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 14 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-client hadoop-maven-plugins hadoop-minicluster hadoop-tools/hadoop-aws 
hadoop-tools/hadoop-azure hadoop-tools/hadoop-openstack hadoop-tools/hadoop-sls.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5277//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5277//artifact/patchprocess/newPatchFindbugsWarningshadoop-sls.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5277//artifact/patchprocess/newPatchFindbugsWarningshadoop-maven-plugins.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5277//console

This message is automatically generated.

> POMs mention "The Apache Software License" rather than "Apache License"
> ---
>
> Key: HADOOP-11412
> URL: https://issues.apache.org/jira/browse/HADOOP-11412
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Hervé Boutemy
>Priority: Trivial
> Attachments: HADOOP-11412.patch
>
>
> like JAMES-821 or RAT-128 or MPOM-48



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11410) make the rpath of libhadoop.so configurable

2014-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247654#comment-14247654
 ] 

Hudson commented on HADOOP-11410:
-

FAILURE: Integrated in Hadoop-trunk-Commit #6725 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6725/])
HADOOP-11410. Make the rpath of libhadoop.so configurable (cmccabe) (cmccabe: 
rev fb20797b6237054f3d16ff94a665cbad4cbe3293)
* hadoop-common-project/hadoop-common/src/CMakeLists.txt
* hadoop-common-project/hadoop-common/pom.xml
* hadoop-common-project/hadoop-common/CHANGES.txt


> make the rpath of libhadoop.so configurable 
> 
>
> Key: HADOOP-11410
> URL: https://issues.apache.org/jira/browse/HADOOP-11410
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HADOOP-11410.001.patch, HADOOP-11410.002.patch
>
>
> We should make the rpath of {{libhadoop.so}} configurable, so that we can use 
> a different rpath if needed.  The {{RPATH}} of {{libhadoop.so}} is primarily 
> used to control where {{dlopen}} looks for shared libraries by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11410) make the rpath of libhadoop.so configurable

2014-12-15 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-11410:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

committed to 2.7, thanks

> make the rpath of libhadoop.so configurable 
> 
>
> Key: HADOOP-11410
> URL: https://issues.apache.org/jira/browse/HADOOP-11410
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HADOOP-11410.001.patch, HADOOP-11410.002.patch
>
>
> We should make the rpath of {{libhadoop.so}} configurable, so that we can use 
> a different rpath if needed.  The {{RPATH}} of {{libhadoop.so}} is primarily 
> used to control where {{dlopen}} looks for shared libraries by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11387) Simplify NetUtils#canonicalizeHost()

2014-12-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247630#comment-14247630
 ] 

Hadoop QA commented on HADOOP-11387:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12687367/HADOOP-11387-121514-2.patch
  against trunk revision a095622.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestZKFailoverController
  org.apache.hadoop.crypto.random.TestOsSecureRandom

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5276//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5276//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5276//console

This message is automatically generated.

> Simplify NetUtils#canonicalizeHost()
> 
>
> Key: HADOOP-11387
> URL: https://issues.apache.org/jira/browse/HADOOP-11387
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
> Attachments: HADOOP-11387-121514-1.patch, 
> HADOOP-11387-121514-2.patch, HADOOP-11387-121514.patch
>
>
> Currently {{NetUtils#canonicalizeHost}} uses a {{ConcurrentHashMap}} to cache 
> the canonicalized hostname.
> {code}
>   private static String canonicalizeHost(String host) {
> // check if the host has already been canonicalized
> String fqHost = canonicalizedHostCache.get(host);
> if (fqHost == null) {
>   try {
> fqHost = SecurityUtil.getByName(host).getHostName();
> // slight race condition, but won't hurt
> canonicalizedHostCache.put(host, fqHost);
>   } catch (UnknownHostException e) {
> fqHost = host;
>   }
> }
> return fqHost;
>   }
> {code}
> The code can be simplified using {{CacheMap}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11412) POMs mention "The Apache Software License" rather than "Apache License"

2014-12-15 Thread Hervé Boutemy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hervé Boutemy updated HADOOP-11412:
---
Status: Patch Available  (was: Open)

> POMs mention "The Apache Software License" rather than "Apache License"
> ---
>
> Key: HADOOP-11412
> URL: https://issues.apache.org/jira/browse/HADOOP-11412
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Hervé Boutemy
>Priority: Trivial
> Attachments: HADOOP-11412.patch
>
>
> like JAMES-821 or RAT-128 or MPOM-48



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11412) POMs mention "The Apache Software License" rather than "Apache License"

2014-12-15 Thread Hervé Boutemy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hervé Boutemy updated HADOOP-11412:
---
Attachment: HADOOP-11412.patch

testing the contribution process with this simple patch before working on more 
important issues :)
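
For illustration, the kind of edit such a patch makes to a POM license block 
(a sketch of the naming fix only; the exact POM coordinates are not shown in 
this thread):

{code:xml}
<license>
  <!-- before: <name>The Apache Software License, Version 2.0</name> -->
  <name>Apache License, Version 2.0</name>
  <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
</license>
{code}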

> POMs mention "The Apache Software License" rather than "Apache License"
> ---
>
> Key: HADOOP-11412
> URL: https://issues.apache.org/jira/browse/HADOOP-11412
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Hervé Boutemy
>Priority: Trivial
> Attachments: HADOOP-11412.patch
>
>
> like JAMES-821 or RAT-128 or MPOM-48



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11412) POMs mention "The Apache Software License" rather than "Apache License"

2014-12-15 Thread Hervé Boutemy (JIRA)
Hervé Boutemy created HADOOP-11412:
--

 Summary: POMs mention "The Apache Software License" rather than 
"Apache License"
 Key: HADOOP-11412
 URL: https://issues.apache.org/jira/browse/HADOOP-11412
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Hervé Boutemy
Priority: Trivial


like JAMES-821 or RAT-128 or MPOM-48



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11410) make the rpath of libhadoop.so configurable

2014-12-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247617#comment-14247617
 ] 

Hadoop QA commented on HADOOP-11410:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12687364/HADOOP-11410.002.patch
  against trunk revision a095622.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestZKFailoverControllerStress

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5275//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5275//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5275//console

This message is automatically generated.

> make the rpath of libhadoop.so configurable 
> 
>
> Key: HADOOP-11410
> URL: https://issues.apache.org/jira/browse/HADOOP-11410
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HADOOP-11410.001.patch, HADOOP-11410.002.patch
>
>
> We should make the rpath of {{libhadoop.so}} configurable, so that we can use 
> a different rpath if needed.  The {{RPATH}} of {{libhadoop.so}} is primarily 
> used to control where {{dlopen}} looks for shared libraries by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11403) Solaris does not support sys_errlist requires use of strerror instead

2014-12-15 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247610#comment-14247610
 ] 

Colin Patrick McCabe commented on HADOOP-11403:
---

+1.  Can you hit "submit patch" so that we can get a jenkins run?  Thanks

> Solaris does not support sys_errlist requires use of strerror instead
> -
>
> Key: HADOOP-11403
> URL: https://issues.apache.org/jira/browse/HADOOP-11403
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.2.0, 2.3.0, 2.4.0, 2.5.0
> Environment: Solaris 11.1 (Sparc, Intel), Linux x86
>Reporter: Malcolm Kavalsky
>Assignee: Malcolm Kavalsky
>  Labels: native, newbie, patch, solaris, terror
> Fix For: 2.6.0
>
> Attachments: HADOOP-11403.001.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> sys_errlist has been removed from Solaris. The new interface is strerror.  
> Wherever sys_errlist is accessed, we should switch to strerror instead.
> We already have an interface function, terror, which can encapsulate this 
> functionality, so we should use it instead of accessing sys_errlist directly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11403) Solaris does not support sys_errlist requires use of strerror instead

2014-12-15 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247613#comment-14247613
 ] 

Colin Patrick McCabe commented on HADOOP-11403:
---

Can you also add a comment to {{terror}} explaining that {{sys_errlist}} 
doesn't exist on Solaris?

> Solaris does not support sys_errlist requires use of strerror instead
> -
>
> Key: HADOOP-11403
> URL: https://issues.apache.org/jira/browse/HADOOP-11403
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.2.0, 2.3.0, 2.4.0, 2.5.0
> Environment: Solaris 11.1 (Sparc, Intel), Linux x86
>Reporter: Malcolm Kavalsky
>Assignee: Malcolm Kavalsky
>  Labels: native, newbie, patch, solaris, terror
> Fix For: 2.6.0
>
> Attachments: HADOOP-11403.001.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> sys_errlist has been removed from Solaris. The new interface is strerror.  
> Wherever sys_errlist is accessed, we should switch to strerror instead.
> We already have an interface function, terror, which can encapsulate this 
> functionality, so we should use it instead of accessing sys_errlist directly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11152) Better random number generator

2014-12-15 Thread RJ Nowling (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247594#comment-14247594
 ] 

RJ Nowling commented on HADOOP-11152:
-

Hi all,

What about quasi-random numbers?

If you generate a large number of samples using an RNG, the set of numbers will 
be approximately uniformly distributed. However, if you take a small number of 
samples (say 50), you would see that they are not picked uniformly; e.g., 
samples may cluster. RNGs are not necessarily guaranteed to maintain the 
uniform nature for small groups of samples. (Some may do that, however.)

Quasi-random sequences ensure that the set of numbers generated maintains a 
uniform distribution regardless of whether you pick 100 samples or 10,000. You 
may want to read this blog entry by John D. Cook for an example: 
http://www.johndcook.com/blog/2009/03/16/quasi-random-sequences-in-art-and-integration/

Numerical Recipes discusses algorithms for generating sequences of quasi-random 
numbers.
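
For illustration only (a textbook construction, not a proposal attached to this 
issue), a base-2 Van der Corput sequence is one of the simplest quasi-random 
generators:

{code}
// n-th element of the base-2 Van der Corput low-discrepancy sequence:
// reverse the binary digits of n across the radix point.
static double vanDerCorput(int n) {
  double x = 0.0, denom = 1.0;
  while (n > 0) {
    denom *= 2.0;
    x += (n & 1) / denom;  // take the lowest bit, push it past the point
    n >>= 1;
  }
  return x;
}
// vanDerCorput(1..8) -> 0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875, 0.0625
{code}

Note how any prefix of the sequence spreads evenly over (0, 1), which is 
exactly the small-sample property described above.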

> Better random number generator
> --
>
> Key: HADOOP-11152
> URL: https://issues.apache.org/jira/browse/HADOOP-11152
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Luke Lu
>  Labels: newbie++
>
> HDFS-7122 showed that naive ThreadLocal usage of the simple LCG-based 
> j.u.Random creates an unacceptable distribution of random numbers for block 
> placement. Similarly, ThreadLocalRandom in Java 7 (the same static thread 
> local with synchronized methods overridden) has the same problem. 
> "Better" is defined as better quality and faster than j.u.Random (which is 
> already much faster (20x) than SecureRandom).
> People (e.g. Numerical Recipes) have shown that by combining LCG and XORShift 
> we can get a better fast RNG. It'd be worthwhile to investigate thread-local 
> versions of these "better" RNGs.
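
As one concrete illustration of that family (the constants below are 
Marsaglia's xorshift64* variant, my assumption rather than anything attached 
to this issue):

{code}
// Hypothetical thread-local xorshift64*-style generator.
static final ThreadLocal<long[]> STATE =
    ThreadLocal.withInitial(() -> new long[] { System.nanoTime() | 1L });

static long nextLong() {
  long[] s = STATE.get();
  long x = s[0];
  x ^= x >>> 12;  // xorshift steps scramble the state
  x ^= x << 25;
  x ^= x >>> 27;
  s[0] = x;
  return x * 0x2545F4914F6CDD1DL;  // final multiply improves equidistribution
}
{code}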



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11387) Simplify NetUtils#canonicalizeHost()

2014-12-15 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247590#comment-14247590
 ] 

Haohui Mai commented on HADOOP-11387:
-

LGTM. +1 pending Jenkins. [~jeagles], do you have any comments? I'd like to 
commit it sooner rather than later as it blocks a clean Jenkins run.

> Simplify NetUtils#canonicalizeHost()
> 
>
> Key: HADOOP-11387
> URL: https://issues.apache.org/jira/browse/HADOOP-11387
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
> Attachments: HADOOP-11387-121514-1.patch, 
> HADOOP-11387-121514-2.patch, HADOOP-11387-121514.patch
>
>
> Currently {{NetUtils#canonicalizeHost}} uses a {{ConcurrentHashMap}} to cache 
> the canonicalized hostname.
> {code}
>   private static String canonicalizeHost(String host) {
> // check if the host has already been canonicalized
> String fqHost = canonicalizedHostCache.get(host);
> if (fqHost == null) {
>   try {
> fqHost = SecurityUtil.getByName(host).getHostName();
> // slight race condition, but won't hurt
> canonicalizedHostCache.put(host, fqHost);
>   } catch (UnknownHostException e) {
> fqHost = host;
>   }
> }
> return fqHost;
>   }
> {code}
> The code can be simplified using {{CacheMap}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11400) GraphiteSink does not reconnect to Graphite after 'broken pipe'

2014-12-15 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247566#comment-14247566
 ] 

Ravi Prakash commented on HADOOP-11400:
---

Thanks for the report, Kamil! Do you plan on uploading a patch to fix the 
issue? I'd be happy to review and commit it promptly.

> GraphiteSink does not reconnect to Graphite after 'broken pipe'
> ---
>
> Key: HADOOP-11400
> URL: https://issues.apache.org/jira/browse/HADOOP-11400
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.6.0, 2.5.1
>Reporter: Kamil Gorlo
>
> I see that after a network error GraphiteSink does not reconnect to the 
> Graphite server, and in effect metrics are not sent. 
> Here is the stack trace I see (this is from the nodemanager):
> 2014-12-11 16:39:21,655 ERROR 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter: Got sink exception, retry 
> in 4806ms
> org.apache.hadoop.metrics2.MetricsException: Error flushing metrics
> at 
> org.apache.hadoop.metrics2.sink.GraphiteSinkFixed.flush(GraphiteSinkFixed.java:120)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:184)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:43)
> at 
> org.apache.hadoop.metrics2.impl.SinkQueue.consumeAll(SinkQueue.java:87)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.publishMetricsFromQueue(MetricsSinkAdapter.java:129)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter$1.run(MetricsSinkAdapter.java:88)
> Caused by: java.net.SocketException: Broken pipe
> at java.net.SocketOutputStream.socketWrite0(Native Method)
> at 
> java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:113)
> at java.net.SocketOutputStream.write(SocketOutputStream.java:159)
> at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
> at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
> at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:295)
> at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:141)
> at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
> at 
> org.apache.hadoop.metrics2.sink.GraphiteSinkFixed.flush(GraphiteSinkFixed.java:118)
> ... 5 more
> 2014-12-11 16:39:26,463 ERROR 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter: Got sink exception and 
> over retry limit, suppressing further error messages
> org.apache.hadoop.metrics2.MetricsException: Error flushing metrics
> at 
> org.apache.hadoop.metrics2.sink.GraphiteSinkFixed.flush(GraphiteSinkFixed.java:120)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:184)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:43)
> at 
> org.apache.hadoop.metrics2.impl.SinkQueue.consumeAll(SinkQueue.java:87)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.publishMetricsFromQueue(MetricsSinkAdapter.java:129)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter$1.run(MetricsSinkAdapter.java:88)
> Caused by: java.net.SocketException: Broken pipe
> at java.net.SocketOutputStream.socketWrite0(Native Method)
> at 
> java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:113)
> at java.net.SocketOutputStream.write(SocketOutputStream.java:159)
> at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
> at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
> at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:295)
> at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:141)
> at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
> at 
> org.apache.hadoop.metrics2.sink.GraphiteSinkFixed.flush(GraphiteSinkFixed.java:118)
> ... 5 more
> GraphiteSinkFixed.java is simply GraphiteSink.java from Hadoop 2.6.0 (with 
> https://issues.apache.org/jira/browse/HADOOP-11182 fixed), because I cannot 
> simply upgrade Hadoop (I am using CDH5).
> I see that GraphiteSink uses an OutputStreamWriter which is created only in 
> the init method (which is probably called only once per application runtime), 
> and there is no reconnection logic.
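
A minimal sketch of the missing reconnection logic (the class and field names 
here are illustrative assumptions, not the actual GraphiteSink API):

{code}
// Hypothetical writer that reopens the socket when a write fails, instead of
// keeping the Writer created in init() for the whole application lifetime.
import java.io.*;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

class ReconnectingGraphiteWriter {
  private final String host;
  private final int port;
  private Writer writer;

  ReconnectingGraphiteWriter(String host, int port) throws IOException {
    this.host = host;
    this.port = port;
    connect();
  }

  private void connect() throws IOException {
    Socket socket = new Socket(host, port);
    writer = new BufferedWriter(
        new OutputStreamWriter(socket.getOutputStream(), StandardCharsets.UTF_8));
  }

  void write(String metricLine) throws IOException {
    try {
      writer.write(metricLine);
      writer.flush();
    } catch (IOException e) {
      connect();  // broken pipe: rebuild the connection and retry once
      writer.write(metricLine);
      writer.flush();
    }
  }
}
{code}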



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11387) Simplify NetUtils#canonicalizeHost()

2014-12-15 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11387:
---
Attachment: HADOOP-11387-121514-2.patch

Moved the catch of {{ExecutionException}} into the loader to make the overall 
load logic cleaner.
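
For reference, a minimal sketch of that shape, assuming the {{CacheMap}} 
mentioned in this thread refers to Guava's {{CacheBuilder}}/{{LoadingCache}} 
(the attached patches may differ; Guava and SecurityUtil imports assumed):

{code}
// Hedged sketch, not the attached patch: handle UnknownHostException inside
// the loader itself so callers never see a checked exception.
private static final LoadingCache<String, String> canonicalizedHostCache =
    CacheBuilder.newBuilder().build(new CacheLoader<String, String>() {
      @Override
      public String load(String host) {
        try {
          return SecurityUtil.getByName(host).getHostName();
        } catch (UnknownHostException e) {
          return host;  // same fallback as the old code
        }
      }
    });

private static String canonicalizeHost(String host) {
  return canonicalizedHostCache.getUnchecked(host);
}
{code}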

> Simplify NetUtils#canonicalizeHost()
> 
>
> Key: HADOOP-11387
> URL: https://issues.apache.org/jira/browse/HADOOP-11387
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
> Attachments: HADOOP-11387-121514-1.patch, 
> HADOOP-11387-121514-2.patch, HADOOP-11387-121514.patch
>
>
> Currently {{NetUtils#canonicalizeHost}} uses a {{ConcurrentHashMap}} to cache 
> the canonicalized hostname.
> {code}
>   private static String canonicalizeHost(String host) {
> // check if the host has already been canonicalized
> String fqHost = canonicalizedHostCache.get(host);
> if (fqHost == null) {
>   try {
> fqHost = SecurityUtil.getByName(host).getHostName();
> // slight race condition, but won't hurt
> canonicalizedHostCache.put(host, fqHost);
>   } catch (UnknownHostException e) {
> fqHost = host;
>   }
> }
> return fqHost;
>   }
> {code}
> The code can be simplified using {{CacheMap}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11411) Hive build failure on hadoop-2.7 due to HADOOP-11356

2014-12-15 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere resolved HADOOP-11411.
-
  Resolution: Duplicate
Release Note: Opened Hive Jira at HIVE-9115

> Hive build failure on hadoop-2.7 due to HADOOP-11356
> 
>
> Key: HADOOP-11411
> URL: https://issues.apache.org/jira/browse/HADOOP-11411
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jason Dere
>
> HADOOP-11356 removes org.apache.hadoop.fs.permission.AccessControlException, 
> causing build break on Hive when compiling against hadoop-2.7:
> {noformat}
> shims/0.23/src/main/java/org/apache/hadoop/hive/shims/Hadoop23Shims.java:[808,63]
>  cannot find symbol
>   symbol:   class AccessControlException
>   location: package org.apache.hadoop.fs.permission
> [INFO] 1 error
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11410) make the rpath of libhadoop.so configurable

2014-12-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247555#comment-14247555
 ] 

Hadoop QA commented on HADOOP-11410:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12687349/HADOOP-11410.001.patch
  against trunk revision a095622.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5274//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5274//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5274//console

This message is automatically generated.

> make the rpath of libhadoop.so configurable 
> 
>
> Key: HADOOP-11410
> URL: https://issues.apache.org/jira/browse/HADOOP-11410
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HADOOP-11410.001.patch, HADOOP-11410.002.patch
>
>
> We should make the rpath of {{libhadoop.so}} configurable, so that we can use 
> a different rpath if needed.  The {{RPATH}} of {{libhadoop.so}} is primarily 
> used to control where {{dlopen}} looks for shared libraries by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11409) FileContext.getFileContext can stack overflow if default fs misconfigured

2014-12-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247554#comment-14247554
 ] 

Hadoop QA commented on HADOOP-11409:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12687342/HADOOP-11409.001.patch
  against trunk revision a095622.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5273//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5273//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5273//console

This message is automatically generated.

> FileContext.getFileContext can stack overflow if default fs misconfigured
> -
>
> Key: HADOOP-11409
> URL: https://issues.apache.org/jira/browse/HADOOP-11409
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Jason Lowe
>Assignee: Gera Shegalov
> Attachments: HADOOP-11409.001.patch
>
>
> If the default filesystem is misconfigured such that it doesn't have a scheme 
> then FileContext.getFileContext(URI, Configuration) will call 
> FileContext.getFileContext(Configuration) which in turn calls the former and 
> we loop until the stack explodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11410) make the rpath of libhadoop.so configurable

2014-12-15 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247553#comment-14247553
 ] 

Aaron T. Myers commented on HADOOP-11410:
-

Latest patch looks great to me, and thanks for the explanation of what testing 
you did.

+1 pending Jenkins.

> make the rpath of libhadoop.so configurable 
> 
>
> Key: HADOOP-11410
> URL: https://issues.apache.org/jira/browse/HADOOP-11410
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HADOOP-11410.001.patch, HADOOP-11410.002.patch
>
>
> We should make the rpath of {{libhadoop.so}} configurable, so that we can use 
> a different rpath if needed.  The {{RPATH}} of {{libhadoop.so}} is primarily 
> used to control where {{dlopen}} looks for shared libraries by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11410) make the rpath of libhadoop.so configurable

2014-12-15 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-11410:
--
Attachment: HADOOP-11410.002.patch

> make the rpath of libhadoop.so configurable 
> 
>
> Key: HADOOP-11410
> URL: https://issues.apache.org/jira/browse/HADOOP-11410
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HADOOP-11410.001.patch, HADOOP-11410.002.patch
>
>
> We should make the rpath of {{libhadoop.so}} configurable, so that we can use 
> a different rpath if needed.  The {{RPATH}} of {{libhadoop.so}} is primarily 
> used to control where {{dlopen}} looks for shared libraries by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10895) HTTP KerberosAuthenticator fallback should have a flag to disable it

2014-12-15 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247539#comment-14247539
 ] 

Yongjun Zhang commented on HADOOP-10895:


As long planned, I ran a real-cluster test after the code review settled, with 
[~rkanter]'s help. Thanks a lot, Robert, for walking me through the Oozie tests 
and for the comments here. Looking forward to [~atm] and [~tucu00]'s comments 
(thanks guys!).




> HTTP KerberosAuthenticator fallback should have a flag to disable it
> 
>
> Key: HADOOP-10895
> URL: https://issues.apache.org/jira/browse/HADOOP-10895
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
>Reporter: Alejandro Abdelnur
>Assignee: Yongjun Zhang
>Priority: Blocker
> Attachments: HADOOP-10895.001.patch, HADOOP-10895.002.patch, 
> HADOOP-10895.003.patch, HADOOP-10895.003v1.patch, HADOOP-10895.003v2.patch, 
> HADOOP-10895.003v2improved.patch, HADOOP-10895.004.patch, 
> HADOOP-10895.005.patch, HADOOP-10895.006.patch, HADOOP-10895.007.patch, 
> HADOOP-10895.008.patch, HADOOP-10895.009.patch
>
>
> Per review feedback in HADOOP-10771, {{KerberosAuthenticator}} and the 
> delegation token version coming in with HADOOP-10771 should have a flag to 
> disable fallback to pseudo, similarly to the one that was introduced in 
> Hadoop RPC client with HADOOP-9698.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11411) Hive build failure on hadoop-2.7 due to HADOOP-11356

2014-12-15 Thread Jason Dere (JIRA)
Jason Dere created HADOOP-11411:
---

 Summary: Hive build failure on hadoop-2.7 due to HADOOP-11356
 Key: HADOOP-11411
 URL: https://issues.apache.org/jira/browse/HADOOP-11411
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Jason Dere


HADOOP-11356 removes org.apache.hadoop.fs.permission.AccessControlException, 
causing a build break in Hive when compiling against hadoop-2.7:

{noformat}
shims/0.23/src/main/java/org/apache/hadoop/hive/shims/Hadoop23Shims.java:[808,63]
 cannot find symbol
  symbol:   class AccessControlException
  location: package org.apache.hadoop.fs.permission
[INFO] 1 error
{noformat}
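
One plausible fix on the Hive side (an assumption, not the actual Hive patch) is 
to switch to the surviving class; the {{org.apache.hadoop.fs.permission}} 
variant had been deprecated in its favor:

{code:java}
// Hypothetical change in Hadoop23Shims.java: import the exception that
// HADOOP-11356 kept, instead of the removed, deprecated one.
import org.apache.hadoop.security.AccessControlException;
// import org.apache.hadoop.fs.permission.AccessControlException;  // removed
{code}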




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11411) Hive build failure on hadoop-2.7 due to HADOOP-11356

2014-12-15 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247548#comment-14247548
 ] 

Jason Dere commented on HADOOP-11411:
-

Whoops, meant to open this against Hive.

> Hive build failure on hadoop-2.7 due to HADOOP-11356
> 
>
> Key: HADOOP-11411
> URL: https://issues.apache.org/jira/browse/HADOOP-11411
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jason Dere
>
> HADOOP-11356 removes org.apache.hadoop.fs.permission.AccessControlException, 
> causing a build break in Hive when compiling against hadoop-2.7:
> {noformat}
> shims/0.23/src/main/java/org/apache/hadoop/hive/shims/Hadoop23Shims.java:[808,63]
>  cannot find symbol
>   symbol:   class AccessControlException
>   location: package org.apache.hadoop.fs.permission
> [INFO] 1 error
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11410) make the rpath of libhadoop.so configurable

2014-12-15 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247546#comment-14247546
 ] 

Colin Patrick McCabe commented on HADOOP-11410:
---

bq. Patch looks pretty good to me, Colin. Can you perhaps comment on what 
testing you've done of this change? It seems quite straightforward to me, but 
it'd be good if you could let us know what verification of this you've done.

I built with and without {{\-Dextra.libhadoop.rpath=/tmp/foo}} and verified 
that the RPATH was as expected using {{chrpath -l}}.

bq. One small suggestion on the actual contents of the patch: given that we're 
now adding $ORIGIN into the RPATH before the SET_TARGET_PROPERTIES call, seems 
like it'd be better to move the associated comment explaining the purpose of 
that up as well.

Yeah.

> make the rpath of libhadoop.so configurable 
> 
>
> Key: HADOOP-11410
> URL: https://issues.apache.org/jira/browse/HADOOP-11410
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HADOOP-11410.001.patch
>
>
> We should make the rpath of {{libhadoop.so}} configurable, so that we can use 
> a different rpath if needed.  The {{RPATH}} of {{libhadoop.so}} is primarily 
> used to control where {{dlopen}} looks for shared libraries by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11410) make the rpath of libhadoop.so configurable

2014-12-15 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247535#comment-14247535
 ] 

Aaron T. Myers commented on HADOOP-11410:
-

Patch looks pretty good to me, Colin. Can you perhaps comment on what testing 
you've done of this change? It seems quite straightforward to me, but it'd be 
good if you could let us know what verification of this you've done.

One small suggestion on the actual contents of the patch: given that we're now 
adding {{$ORIGIN}} into the {{RPATH}} before the SET_TARGET_PROPERTIES call, 
seems like it'd be better to move the associated comment explaining the purpose 
of that up as well.

+1 once the above two little things are addressed.

Thanks, Colin.

> make the rpath of libhadoop.so configurable 
> 
>
> Key: HADOOP-11410
> URL: https://issues.apache.org/jira/browse/HADOOP-11410
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HADOOP-11410.001.patch
>
>
> We should make the rpath of {{libhadoop.so}} configurable, so that we can use 
> a different rpath if needed.  The {{RPATH}} of {{libhadoop.so}} is primarily 
> used to control where {{dlopen}} looks for shared libraries by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11387) Simplify NetUtils#canonicalizeHost()

2014-12-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247512#comment-14247512
 ] 

Hadoop QA commented on HADOOP-11387:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12687334/HADOOP-11387-121514-1.patch
  against trunk revision a095622.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.crypto.random.TestOsSecureRandom

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5272//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5272//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5272//console

This message is automatically generated.

> Simplify NetUtils#canonicalizeHost()
> 
>
> Key: HADOOP-11387
> URL: https://issues.apache.org/jira/browse/HADOOP-11387
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
> Attachments: HADOOP-11387-121514-1.patch, HADOOP-11387-121514.patch
>
>
> Currently {{NetUtils#canonicalizeHost}} uses a {{ConcurrentHashMap}} to cache 
> the canonicalized hostname.
> {code}
>   private static String canonicalizeHost(String host) {
> // check if the host has already been canonicalized
> String fqHost = canonicalizedHostCache.get(host);
> if (fqHost == null) {
>   try {
> fqHost = SecurityUtil.getByName(host).getHostName();
> // slight race condition, but won't hurt
> canonicalizedHostCache.put(host, fqHost);
>   } catch (UnknownHostException e) {
> fqHost = host;
>   }
> }
> return fqHost;
>   }
> {code}
> The code can be simplified using {{CacheMap}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11409) FileContext.getFileContext can stack overflow if default fs misconfigured

2014-12-15 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HADOOP-11409:
---
Target Version/s: 2.7.0
  Status: Patch Available  (was: Open)

> FileContext.getFileContext can stack overflow if default fs misconfigured
> -
>
> Key: HADOOP-11409
> URL: https://issues.apache.org/jira/browse/HADOOP-11409
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Jason Lowe
>Assignee: Gera Shegalov
> Attachments: HADOOP-11409.001.patch
>
>
> If the default filesystem is misconfigured such that it doesn't have a scheme 
> then FileContext.getFileContext(URI, Configuration) will call 
> FileContext.getFileContext(Configuration) which in turn calls the former and 
> we loop until the stack explodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11410) make the rpath of libhadoop.so configurable

2014-12-15 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-11410:
--
Status: Patch Available  (was: Open)

> make the rpath of libhadoop.so configurable 
> 
>
> Key: HADOOP-11410
> URL: https://issues.apache.org/jira/browse/HADOOP-11410
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HADOOP-11410.001.patch
>
>
> We should make the rpath of {{libhadoop.so}} configurable, so that we can use 
> a different rpath if needed.  The {{RPATH}} of {{libhadoop.so}} is primarily 
> used to control where {{dlopen}} looks for shared libraries by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11410) make the rpath of libhadoop.so configurable

2014-12-15 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-11410:
--
Attachment: HADOOP-11410.001.patch

> make the rpath of libhadoop.so configurable 
> 
>
> Key: HADOOP-11410
> URL: https://issues.apache.org/jira/browse/HADOOP-11410
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HADOOP-11410.001.patch
>
>
> We should make the rpath of {{libhadoop.so}} configurable, so that we can use 
> a different rpath if needed.  The {{RPATH}} of {{libhadoop.so}} is primarily 
> used to control where {{dlopen}} looks for shared libraries by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11410) make the rpath of libhadoop.so configurable

2014-12-15 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-11410:
-

 Summary: make the rpath of libhadoop.so configurable 
 Key: HADOOP-11410
 URL: https://issues.apache.org/jira/browse/HADOOP-11410
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


We should make the rpath of {{libhadoop.so}} configurable, so that we can use a 
different rpath if needed.  The {{RPATH}} of {{libhadoop.so}} is primarily used 
to control where {{dlopen}} looks for shared libraries by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10895) HTTP KerberosAuthenticator fallback should have a flag to disable it

2014-12-15 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247476#comment-14247476
 ] 

Robert Kanter commented on HADOOP-10895:


I was helping [~yzhangal] test this out by deploying Oozie with the hadoop-auth 
changes in the patch.  The idea would be that Oozie uses 
{{KerberosAuthenticator}} even in a non-secure cluster, relying on the fallback 
behavior.  With the patch, that should now fail because the fallback is 
disabled by default.  

However, when we tried this, we saw that it was still able to connect with the 
{{KerberosAuthenticator}} (and also the {{PseudoAuthenticator}}).  We attached a 
debugger and discovered that it wasn't even trying Kerberos and succeeding; it 
actually looks like the {{KerberosAuthenticator}} can be used to talk to a 
non-secure cluster without falling back.  See this code from 
{{KerberosAuthenticator}}:
{code:java}
  if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {// <- A
LOG.debug("JDK performed authentication on our behalf.");
// If the JDK already did the SPNEGO back-and-forth for
// us, just pull out the token.
AuthenticatedURL.extractToken(conn, token);
return;
  } else if (isNegotiate()) {   // <- B
LOG.debug("Performing our own SPNEGO sequence.");
doSpnegoSequence(token);
  } else {  // <- C
LOG.debug("Using fallback authenticator sequence.");
Authenticator auth = getFallBackAuthenticator();
// Make sure that the fall back authenticator have the same
// ConnectionConfigurator, since the method might be overridden.
// Otherwise the fall back authenticator might not have the information
// to make the connection (e.g., SSL certificates)
auth.setConnectionConfigurator(connConfigurator);
auth.authenticate(url, token);
  }
}
{code}

In the case we were expecting it to fail, we get to Line A.  Because it’s a 
non-secure cluster, we get an HTTP_OK when we talk to the server, even without 
Kerberos credentials.  Because of that, it goes ahead normally.  As the comment 
suggests, Line A can also occur sometimes in a normal Kerberos case.

Line B occurs when we’re doing a Kerberos negotiation.  And Line C occurs when 
we’re not doing Kerberos, which is what we were expecting to hit in our test 
but didn’t.

We can’t remove Line A; IIRC, we’ve tried that in the past and it’s caused 
problems.  So I’m not really sure what we should do here.  Regardless of the 
fallback setting, it looks like the KerberosAuthenticator can talk to a 
non-secure cluster, which is exactly what this JIRA is meant to prevent.  Any 
ideas [~atm] or [~tucu00]?
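
For reference, a minimal sketch of what a disable-fallback flag would guard 
(illustrative names, not necessarily the patch's): even if Line C fails fast, 
the HTTP_OK path at Line A still succeeds against a non-secure server, which is 
exactly the behavior described above.

{code:java}
  } else {  // <- C
    if (!isFallbackAllowed()) {  // hypothetical switch, default false
      throw new AuthenticationException(
          "Server did not negotiate SPNEGO and fallback to pseudo "
          + "authentication is disabled");
    }
    Authenticator auth = getFallBackAuthenticator();
    auth.setConnectionConfigurator(connConfigurator);
    auth.authenticate(url, token);
  }
{code}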

> HTTP KerberosAuthenticator fallback should have a flag to disable it
> 
>
> Key: HADOOP-10895
> URL: https://issues.apache.org/jira/browse/HADOOP-10895
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
>Reporter: Alejandro Abdelnur
>Assignee: Yongjun Zhang
>Priority: Blocker
> Attachments: HADOOP-10895.001.patch, HADOOP-10895.002.patch, 
> HADOOP-10895.003.patch, HADOOP-10895.003v1.patch, HADOOP-10895.003v2.patch, 
> HADOOP-10895.003v2improved.patch, HADOOP-10895.004.patch, 
> HADOOP-10895.005.patch, HADOOP-10895.006.patch, HADOOP-10895.007.patch, 
> HADOOP-10895.008.patch, HADOOP-10895.009.patch
>
>
> Per review feedback in HADOOP-10771, {{KerberosAuthenticator}} and the 
> delegation token version coming in with HADOOP-10771 should have a flag to 
> disable fallback to pseudo, similarly to the one that was introduced in 
> Hadoop RPC client with HADOOP-9698.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11409) FileContext.getFileContext can stack overflow if default fs misconfigured

2014-12-15 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HADOOP-11409:
---
Attachment: HADOOP-11409.001.patch

Jason, I added a regression test and a fix.

> FileContext.getFileContext can stack overflow if default fs misconfigured
> -
>
> Key: HADOOP-11409
> URL: https://issues.apache.org/jira/browse/HADOOP-11409
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Jason Lowe
>Assignee: Gera Shegalov
> Attachments: HADOOP-11409.001.patch
>
>
> If the default filesystem is misconfigured such that it doesn't have a scheme 
> then FileContext.getFileContext(URI, Configuration) will call 
> FileContext.getFileContext(Configuration) which in turn calls the former and 
> we loop until the stack explodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11387) Simplify NetUtils#canonicalizeHost()

2014-12-15 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11387:
---
Attachment: HADOOP-11387-121514-1.patch

Thanks [~wheat9] for pointing this out. I fixed the overlooked exception 
handling in this patch. 

> Simplify NetUtils#canonicalizeHost()
> 
>
> Key: HADOOP-11387
> URL: https://issues.apache.org/jira/browse/HADOOP-11387
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
> Attachments: HADOOP-11387-121514-1.patch, HADOOP-11387-121514.patch
>
>
> Currently {{NetUtils#canonicalizeHost}} uses a {{ConcurrentHashMap}} to cache 
> the canonicalized hostname.
> {code}
>   private static String canonicalizeHost(String host) {
> // check if the host has already been canonicalized
> String fqHost = canonicalizedHostCache.get(host);
> if (fqHost == null) {
>   try {
> fqHost = SecurityUtil.getByName(host).getHostName();
> // slight race condition, but won't hurt
> canonicalizedHostCache.put(host, fqHost);
>   } catch (UnknownHostException e) {
> fqHost = host;
>   }
> }
> return fqHost;
>   }
> {code}
> The code can be simplified using {{CacheMap}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10049) 2.1.0 beta won't run under cygwin?

2014-12-15 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10049:
--
Fix Version/s: (was: 2.1.0-beta)

> 2.1.0 beta won't run under cygwin?
> --
>
> Key: HADOOP-10049
> URL: https://issues.apache.org/jira/browse/HADOOP-10049
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
>Affects Versions: 2.1.1-beta
> Environment: windows xp + cygwin
>Reporter: Michael Klybor
>
> I recently tried out 2.1.0 beta in the hope of benefiting from numerous 
> improvements made re: running on cygwin. I cannot even get a "bin/hadoop 
> version" to work; it gets a java.lang.NoClassDefFoundError for common 
> classes such as org.apache.hadoop.util.VersionInfo.
> I've looked at a similar bug report and have made sure that HADOOP_PREFIX is 
> set and that HADOOP_HOME is not set. I've looked at the output of 
> "bin/hadoop classpath" and it looks correct EXCEPT that 2.1.0 shows the 
> classpath using Unix syntax and paths, while my older version of Hadoop 
> (1.2.1) shows it using Windows syntax and paths.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11387) Simplify NetUtils#canonicalizeHost()

2014-12-15 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247421#comment-14247421
 ] 

Haohui Mai commented on HADOOP-11387:
-

{code}
-String fqHost = canonicalizeHost(host);
+String fqHost;
+try {
+  fqHost = canonicalizedHostCache.get(host);
+} catch (ExecutionException e) {
+  throw new RuntimeException(e);
+}
{code}

{code}
+  private static final LoadingCache canonicalizedHostCache =
+  CacheBuilder.newBuilder().maximumSize(CANONICAL_CACHE_SIZE).build(
+  new CacheLoader() {
+@Override public String load(String host) throws Exception {
+  return SecurityUtil.getByName(host).getHostName();
+}
+  }
+  );
{code}

I think this diverges from the original behavior. It might make sense to catch 
the {{UnknownHostException}} when populating the cache.
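
A minimal sketch of that suggestion (assuming the loader simply falls back to 
the input host; note that, unlike the original, this would also cache the 
fallback value):

{code:java}
private static final LoadingCache<String, String> canonicalizedHostCache =
    CacheBuilder.newBuilder().maximumSize(CANONICAL_CACHE_SIZE).build(
        new CacheLoader<String, String>() {
          @Override
          public String load(String host) {
            try {
              return SecurityUtil.getByName(host).getHostName();
            } catch (UnknownHostException e) {
              // Preserve the original fallback: use the input host as-is.
              return host;
            }
          }
        });
{code}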

> Simplify NetUtils#canonicalizeHost()
> 
>
> Key: HADOOP-11387
> URL: https://issues.apache.org/jira/browse/HADOOP-11387
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
> Attachments: HADOOP-11387-121514.patch
>
>
> Currently {{NetUtils#canonicalizeHost}} uses a {{ConcurrentHashMap}} to cache 
> the canonicalized hostname.
> {code}
>   private static String canonicalizeHost(String host) {
> // check if the host has already been canonicalized
> String fqHost = canonicalizedHostCache.get(host);
> if (fqHost == null) {
>   try {
> fqHost = SecurityUtil.getByName(host).getHostName();
> // slight race condition, but won't hurt
> canonicalizedHostCache.put(host, fqHost);
>   } catch (UnknownHostException e) {
> fqHost = host;
>   }
> }
> return fqHost;
>   }
> {code}
> The code can be simplified using {{CacheMap}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11387) Simplify NetUtils#canonicalizeHost()

2014-12-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247401#comment-14247401
 ] 

Hadoop QA commented on HADOOP-11387:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12687307/HADOOP-11387-121514.patch
  against trunk revision e597249.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.crypto.random.TestOsSecureRandom

  The following test timeouts occurred in 
hadoop-common-project/hadoop-common:

org.apache.hadoop.http.TestHttpServerLifecycle

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5271//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5271//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5271//console

This message is automatically generated.

> Simplify NetUtils#canonicalizeHost()
> 
>
> Key: HADOOP-11387
> URL: https://issues.apache.org/jira/browse/HADOOP-11387
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
> Attachments: HADOOP-11387-121514.patch
>
>
> Currently {{NetUtils#canonicalizeHost}} uses a {{ConcurrentHashMap}} to cache 
> the canonicalized hostname.
> {code}
>   private static String canonicalizeHost(String host) {
> // check if the host has already been canonicalized
> String fqHost = canonicalizedHostCache.get(host);
> if (fqHost == null) {
>   try {
> fqHost = SecurityUtil.getByName(host).getHostName();
> // slight race condition, but won't hurt
> canonicalizedHostCache.put(host, fqHost);
>   } catch (UnknownHostException e) {
> fqHost = host;
>   }
> }
> return fqHost;
>   }
> {code}
> The code can be simplified using {{CacheMap}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-11409) FileContext.getFileContext can stack overflow if default fs misconfigured

2014-12-15 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov reassigned HADOOP-11409:
--

Assignee: Gera Shegalov

> FileContext.getFileContext can stack overflow if default fs misconfigured
> -
>
> Key: HADOOP-11409
> URL: https://issues.apache.org/jira/browse/HADOOP-11409
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Jason Lowe
>Assignee: Gera Shegalov
>
> If the default filesystem is misconfigured such that it doesn't have a scheme 
> then FileContext.getFileContext(URI, Configuration) will call 
> FileContext.getFileContext(Configuration) which in turn calls the former and 
> we loop until the stack explodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11409) FileContext.getFileContext can stack overflow if default fs misconfigured

2014-12-15 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247396#comment-14247396
 ] 

Jason Lowe commented on HADOOP-11409:
-

I believe this was a side-effect of the MAPREDUCE-5960 change.  
[~jira.shegalov] would you mind taking a look?

> FileContext.getFileContext can stack overflow if default fs misconfigured
> -
>
> Key: HADOOP-11409
> URL: https://issues.apache.org/jira/browse/HADOOP-11409
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Jason Lowe
>
> If the default filesystem is misconfigured such that it doesn't have a scheme 
> then FileContext.getFileContext(URI, Configuration) will call 
> FileContext.getFileContext(Configuration) which in turn calls the former and 
> we loop until the stack explodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11409) FileContext.getFileContext can stack overflow if default fs misconfigured

2014-12-15 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-11409:
---

 Summary: FileContext.getFileContext can stack overflow if default 
fs misconfigured
 Key: HADOOP-11409
 URL: https://issues.apache.org/jira/browse/HADOOP-11409
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Jason Lowe


If the default filesystem is misconfigured such that it doesn't have a scheme 
then FileContext.getFileContext(URI, Configuration) will call 
FileContext.getFileContext(Configuration) which in turn calls the former and we 
loop until the stack explodes.
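
A stripped-down illustration of the cycle (hypothetical names, not the actual 
classes): each overload defers to the other whenever the default URI lacks a 
scheme.

{code:java}
import java.net.URI;

public class DefaultFsLoopDemo {
  // A default FS URI without a scheme, e.g. from a bad fs.defaultFS value.
  static final URI DEFAULT_FS = URI.create("//misconfigured-host");

  static String getFileContext(URI uri) {
    if (uri.getScheme() == null) {
      return getFileContext();          // defer to the default-FS overload...
    }
    return "FileContext for " + uri;
  }

  static String getFileContext() {
    return getFileContext(DEFAULT_FS);  // ...which defers right back
  }

  public static void main(String[] args) {
    getFileContext();                   // throws StackOverflowError
  }
}
{code}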



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11387) Simplify NetUtils#canonicalizeHost()

2014-12-15 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247377#comment-14247377
 ] 

Li Lu commented on HADOOP-11387:


Hi [~jeagles], sorry, I'm a little bit confused here: I think this cache is 
only used by HDFS clients, and in that use case there won't be a significant 
number of hostnames in the cache. Am I missing anything here? Thanks! 

> Simplify NetUtils#canonicalizeHost()
> 
>
> Key: HADOOP-11387
> URL: https://issues.apache.org/jira/browse/HADOOP-11387
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
> Attachments: HADOOP-11387-121514.patch
>
>
> Currently {{NetUtils#canonicalizeHost}} uses a {{ConcurrentHashMap}} to cache 
> the canonicalized hostname.
> {code}
>   private static String canonicalizeHost(String host) {
> // check if the host has already been canonicalized
> String fqHost = canonicalizedHostCache.get(host);
> if (fqHost == null) {
>   try {
> fqHost = SecurityUtil.getByName(host).getHostName();
> // slight race condition, but won't hurt
> canonicalizedHostCache.put(host, fqHost);
>   } catch (UnknownHostException e) {
> fqHost = host;
>   }
> }
> return fqHost;
>   }
> {code}
> The code can be simplified using {{CacheMap}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11387) Simplify NetUtils#canonicalizeHost()

2014-12-15 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247359#comment-14247359
 ] 

Jonathan Eagles commented on HADOOP-11387:
--

[~gtCarrera9], can you provide performance measurements, including runtime and 
memory usage, of this function with perhaps 6,000 hostnames over 10,000,000 
lookups (enough lookups for meaningful metrics)? Typically, cache lookups can 
be 5 to 100 times slower and can double or triple memory requirements.
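
A rough sketch of such a measurement (illustrative only: a stub loader stands 
in for DNS resolution, and real numbers would need proper warmup, e.g. via JMH):

{code:java}
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import java.util.concurrent.ConcurrentHashMap;

public class CanonicalizeHostBench {
  public static void main(String[] args) throws Exception {
    final int hosts = 6000;
    final long lookups = 10000000L;
    ConcurrentHashMap<String, String> map =
        new ConcurrentHashMap<String, String>();
    LoadingCache<String, String> cache = CacheBuilder.newBuilder()
        .maximumSize(hosts)
        .build(new CacheLoader<String, String>() {
          @Override
          public String load(String host) {
            return host;  // stub: no real DNS lookup
          }
        });
    for (int i = 0; i < hosts; i++) {
      String h = "host-" + i + ".example.com";
      map.put(h, h);
      cache.put(h, h);
    }
    long t0 = System.nanoTime();
    for (long i = 0; i < lookups; i++) {
      map.get("host-" + (i % hosts) + ".example.com");
    }
    long t1 = System.nanoTime();
    for (long i = 0; i < lookups; i++) {
      cache.get("host-" + (i % hosts) + ".example.com");
    }
    long t2 = System.nanoTime();
    System.out.printf("ConcurrentHashMap: %d ms%n", (t1 - t0) / 1000000);
    System.out.printf("LoadingCache:      %d ms%n", (t2 - t1) / 1000000);
  }
}
{code}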

> Simplify NetUtils#canonicalizeHost()
> 
>
> Key: HADOOP-11387
> URL: https://issues.apache.org/jira/browse/HADOOP-11387
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
> Attachments: HADOOP-11387-121514.patch
>
>
> Currently {{NetUtils#canonicalizeHost}} uses a {{ConcurrentHashMap}} to cache 
> the canonicalized hostname.
> {code}
>   private static String canonicalizeHost(String host) {
> // check if the host has already been canonicalized
> String fqHost = canonicalizedHostCache.get(host);
> if (fqHost == null) {
>   try {
> fqHost = SecurityUtil.getByName(host).getHostName();
> // slight race condition, but won't hurt
> canonicalizedHostCache.put(host, fqHost);
>   } catch (UnknownHostException e) {
> fqHost = host;
>   }
> }
> return fqHost;
>   }
> {code}
> The code can be simplified using {{CacheMap}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10852) NetgroupCache is not thread-safe

2014-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247314#comment-14247314
 ] 

Hudson commented on HADOOP-10852:
-

FAILURE: Integrated in Hadoop-trunk-Commit #6724 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6724/])
HADOOP-10852 Fix thread safety issues in NetgroupCache. (Benoy Antony) (benoy: 
rev a095622f36c5e9fff3ec02b14b800038a81f6286)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/NetgroupCache.java


> NetgroupCache is not thread-safe
> 
>
> Key: HADOOP-10852
> URL: https://issues.apache.org/jira/browse/HADOOP-10852
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.0, 2.4.1
>Reporter: Benoy Antony
>Assignee: Benoy Antony
> Fix For: 2.7.0
>
> Attachments: HADOOP-10852.patch, HADOOP-10852.patch, 
> HADOOP-10852.patch, HADOOP-10852.patch
>
>
> _NetgroupCache_ internally uses two ConcurrentHashMaps and a boolean variable 
> to signal updates on one of the ConcurrentHashMaps.
> None of the functions are synchronized, and hence it is possible to get 
> unexpected results due to race conditions between different threads.
> As an example, consider the following sequence:
> Thread 1:
> {{add}} a group
> {{netgroupToUsersMap}} is updated.
> {{netgroupToUsersMapUpdated}} is set to true.
> Thread 2:
> calls {{getNetgroups}} for a user
> Due to re-ordering, {{netgroupToUsersMapUpdated=true}} is visible, but the 
> updates in {{netgroupToUsersMap}} are not visible.
> It then does a wrong update with the older {{netgroupToUsersMap}} values. 
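
A minimal sketch of the missing happens-before edge (illustrative only, not the 
committed patch): declaring the flag {{volatile}} and writing it after the map 
update guarantees that a reader observing the flag also observes the update.

{code:java}
// Illustrative fragment: a volatile flag restores the ordering guarantee.
private static volatile boolean netgroupToUsersMapUpdated;  // was plain boolean

// Writer: the volatile store publishes every write that precedes it.
netgroupToUsersMap.put(group, users);
netgroupToUsersMapUpdated = true;       // volatile write

// Reader: the volatile load synchronizes-with the write above.
if (netgroupToUsersMapUpdated) {        // volatile read
  List<String> cached = netgroupToUsersMap.get(group);  // sees the put
}
{code}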



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10852) NetgroupCache is not thread-safe

2014-12-15 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HADOOP-10852:
--
  Resolution: Fixed
Target Version/s: 2.7.0
  Status: Resolved  (was: Patch Available)

committed to trunk and branch-2

> NetgroupCache is not thread-safe
> 
>
> Key: HADOOP-10852
> URL: https://issues.apache.org/jira/browse/HADOOP-10852
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.0, 2.4.1
>Reporter: Benoy Antony
>Assignee: Benoy Antony
> Attachments: HADOOP-10852.patch, HADOOP-10852.patch, 
> HADOOP-10852.patch, HADOOP-10852.patch
>
>
> _NetgroupCache_ internally uses two ConcurrentHashMaps and a boolean variable 
> to signal updates on one of the ConcurrentHashMaps.
> None of the functions are synchronized, and hence it is possible to get 
> unexpected results due to race conditions between different threads.
> As an example, consider the following sequence:
> Thread 1:
> {{add}} a group
> {{netgroupToUsersMap}} is updated.
> {{netgroupToUsersMapUpdated}} is set to true.
> Thread 2:
> calls {{getNetgroups}} for a user
> Due to re-ordering, {{netgroupToUsersMapUpdated=true}} is visible, but the 
> updates in {{netgroupToUsersMap}} are not visible.
> It then does a wrong update with the older {{netgroupToUsersMap}} values. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10852) NetgroupCache is not thread-safe

2014-12-15 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HADOOP-10852:
--
Fix Version/s: 2.7.0

> NetgroupCache is not thread-safe
> 
>
> Key: HADOOP-10852
> URL: https://issues.apache.org/jira/browse/HADOOP-10852
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.0, 2.4.1
>Reporter: Benoy Antony
>Assignee: Benoy Antony
> Fix For: 2.7.0
>
> Attachments: HADOOP-10852.patch, HADOOP-10852.patch, 
> HADOOP-10852.patch, HADOOP-10852.patch
>
>
> _NetgroupCache_ internally uses two ConcurrentHashMaps and a boolean variable 
> to signal updates on one of the ConcurrentHashMaps.
> None of the functions are synchronized, and hence it is possible to get 
> unexpected results due to race conditions between different threads.
> As an example, consider the following sequence:
> Thread 1:
> {{add}} a group
> {{netgroupToUsersMap}} is updated.
> {{netgroupToUsersMapUpdated}} is set to true.
> Thread 2:
> calls {{getNetgroups}} for a user
> Due to re-ordering, {{netgroupToUsersMapUpdated=true}} is visible, but the 
> updates in {{netgroupToUsersMap}} are not visible.
> It then does a wrong update with the older {{netgroupToUsersMap}} values. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11387) Simplify NetUtils#canonicalizeHost()

2014-12-15 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11387:
---
Status: Patch Available  (was: Open)

> Simplify NetUtils#canonicalizeHost()
> 
>
> Key: HADOOP-11387
> URL: https://issues.apache.org/jira/browse/HADOOP-11387
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
> Attachments: HADOOP-11387-121514.patch
>
>
> Currently {{NetUtils#canonicalizeHost}} uses a {{ConcurrentHashMap}} to cache 
> the canonicalized hostname.
> {code}
>   private static String canonicalizeHost(String host) {
> // check if the host has already been canonicalized
> String fqHost = canonicalizedHostCache.get(host);
> if (fqHost == null) {
>   try {
> fqHost = SecurityUtil.getByName(host).getHostName();
> // slight race condition, but won't hurt
> canonicalizedHostCache.put(host, fqHost);
>   } catch (UnknownHostException e) {
> fqHost = host;
>   }
> }
> return fqHost;
>   }
> {code}
> The code can be simplified using {{CacheMap}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11387) Simplify NetUtils#canonicalizeHost()

2014-12-15 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11387:
---
Attachment: HADOOP-11387-121514.patch

In this patch I used a LoadingCache to replace the ConcurrentHashMap. After 
this change, the logic of canonicalizedHostCache is simpler. 

> Simplify NetUtils#canonicalizeHost()
> 
>
> Key: HADOOP-11387
> URL: https://issues.apache.org/jira/browse/HADOOP-11387
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
> Attachments: HADOOP-11387-121514.patch
>
>
> Currently {{NetUtils#canonicalizeHost}} uses a {{ConcurrentHashMap}} to cache 
> the canonicalized hostname.
> {code}
>   private static String canonicalizeHost(String host) {
> // check if the host has already been canonicalized
> String fqHost = canonicalizedHostCache.get(host);
> if (fqHost == null) {
>   try {
> fqHost = SecurityUtil.getByName(host).getHostName();
> // slight race condition, but won't hurt
> canonicalizedHostCache.put(host, fqHost);
>   } catch (UnknownHostException e) {
> fqHost = host;
>   }
> }
> return fqHost;
>   }
> {code}
> The code can be simplified using {{CacheMap}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-11387) Simplify NetUtils#canonicalizeHost()

2014-12-15 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu reassigned HADOOP-11387:
--

Assignee: Li Lu  (was: Haohui Mai)

> Simplify NetUtils#canonicalizeHost()
> 
>
> Key: HADOOP-11387
> URL: https://issues.apache.org/jira/browse/HADOOP-11387
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Li Lu
>
> Currently {{NetUtils#canonicalizeHost}} uses a {{ConcurrentHashMap}} to cache 
> the canonicalized hostname.
> {code}
>   private static String canonicalizeHost(String host) {
> // check if the host has already been canonicalized
> String fqHost = canonicalizedHostCache.get(host);
> if (fqHost == null) {
>   try {
> fqHost = SecurityUtil.getByName(host).getHostName();
> // slight race condition, but won't hurt
> canonicalizedHostCache.put(host, fqHost);
>   } catch (UnknownHostException e) {
> fqHost = host;
>   }
> }
> return fqHost;
>   }
> {code}
> The code can be simplified using {{CacheMap}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11346) Rewrite sls/rumen to use new shell framework

2014-12-15 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247199#comment-14247199
 ] 

Allen Wittenauer commented on HADOOP-11346:
---

Now that HADOOP-10950 has been committed, this patch needs to be updated to 
remove JAVA_HEAP_MAX.

> Rewrite sls/rumen to use new shell framework
> 
>
> Key: HADOOP-11346
> URL: https://issues.apache.org/jira/browse/HADOOP-11346
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts, tools
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-11346-01.patch, HADOOP-11346.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11398) RetryUpToMaximumTimeWithFixedSleep needs to behave more accurately

2014-12-15 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247146#comment-14247146
 ] 

Li Lu commented on HADOOP-11398:


Hi [~jingzhao], thanks for the review. I agree that we should not make the 
retry policies stateful; thanks for pointing this out. We may want to address 
the timing problem as a new feature rather than with this quick fix. 

> RetryUpToMaximumTimeWithFixedSleep needs to behave more accurately
> --
>
> Key: HADOOP-11398
> URL: https://issues.apache.org/jira/browse/HADOOP-11398
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: HADOOP-11398-121114.patch
>
>
> RetryUpToMaximumTimeWithFixedSleep now inherits 
> RetryUpToMaximumCountWithFixedSleep and just acts as a wrapper to decide 
> maxRetries. The current implementation uses (maxTime / sleepTime) as the 
> number of maxRetries. This is fine if the actual time for each retry is 
> significantly less than the sleep time, but it becomes less accurate if each 
> retry takes a comparable amount of time to the sleep time. The problem gets 
> worse when there are underlying retries. 
> We may want to use timers inside RetryUpToMaximumTimeWithFixedSleep to 
> perform accurate timing. 
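
A small worked example of the inaccuracy, with hypothetical numbers:

{code:java}
public class RetryBudgetDemo {
  public static void main(String[] args) {
    long maxTimeMs = 60000;    // RetryUpToMaximumTimeWithFixedSleep budget
    long sleepMs = 10000;      // fixed sleep between retries
    long attemptMs = 5000;     // time each attempt itself takes
    long maxRetries = maxTimeMs / sleepMs;                // 6, as implemented
    long wallClockMs = maxRetries * (attemptMs + sleepMs);
    // Prints: budget=60000 ms, worst-case wall clock=90000 ms
    System.out.println("budget=" + maxTimeMs + " ms, worst-case wall clock="
        + wallClockMs + " ms");
  }
}
{code}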



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11380) Restore Rack Awareness documentation

2014-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247118#comment-14247118
 ] 

Hudson commented on HADOOP-11380:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #6722 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6722/])
HADOOP-11380. Restore Rack Awareness documenation (aw) (aw: rev 
e8a67bed10d749864a3bb2589c6686c40bebccc5)
* hadoop-common-project/hadoop-common/src/site/apt/RackAwareness.apt.vm
* hadoop-common-project/hadoop-common/CHANGES.txt


> Restore Rack Awareness documentation
> 
>
> Key: HADOOP-11380
> URL: https://issues.apache.org/jira/browse/HADOOP-11380
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Allen Wittenauer
> Fix For: 3.0.0
>
> Attachments: HADOOP-11380.patch
>
>
> As part of HADOOP-8427, large, extremely useful sections of the Rack 
> Awareness documentation that was added in HADOOP-6616 were wiped out.  We 
> should restore them as a separate document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10852) NetgroupCache is not thread-safe

2014-12-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247107#comment-14247107
 ] 

Hadoop QA commented on HADOOP-10852:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12687279/HADOOP-10852.patch
  against trunk revision 832ebd8.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestZKFailoverControllerStress

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5270//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5270//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5270//console

This message is automatically generated.

> NetgroupCache is not thread-safe
> 
>
> Key: HADOOP-10852
> URL: https://issues.apache.org/jira/browse/HADOOP-10852
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.0, 2.4.1
>Reporter: Benoy Antony
>Assignee: Benoy Antony
> Attachments: HADOOP-10852.patch, HADOOP-10852.patch, 
> HADOOP-10852.patch, HADOOP-10852.patch
>
>
> _NetgroupCache_ internally uses two ConcurrentHashMaps and a boolean variable 
> to signal updates on one of the ConcurrentHashMaps.
> None of the functions are synchronized, and hence it is possible to get 
> unexpected results due to race conditions between different threads.
> As an example, consider the following sequence:
> Thread 1:
> {{add}} a group
> {{netgroupToUsersMap}} is updated.
> {{netgroupToUsersMapUpdated}} is set to true.
> Thread 2:
> calls {{getNetgroups}} for a user
> Due to re-ordering, {{netgroupToUsersMapUpdated=true}} is visible, but the 
> updates in {{netgroupToUsersMap}} are not visible.
> It then does a wrong update with the older {{netgroupToUsersMap}} values. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11380) Restore Rack Awareness documentation

2014-12-15 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11380:
--
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Committed to trunk.

> Restore Rack Awareness documentation
> 
>
> Key: HADOOP-11380
> URL: https://issues.apache.org/jira/browse/HADOOP-11380
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Allen Wittenauer
> Fix For: 3.0.0
>
> Attachments: HADOOP-11380.patch
>
>
> As part of HADOOP-8427, large, extremely useful sections of the Rack 
> Awareness documentation that was added in HADOOP-6616 were wiped out.  We 
> should restore them as a separate document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11380) Restore Rack Awareness documentation

2014-12-15 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247098#comment-14247098
 ] 

Allen Wittenauer commented on HADOOP-11380:
---

I think I'm going to commit as-is.  We need to do a more comprehensive doc 
uplift anyway.

Thanks for the review [~ste...@apache.org]!

> Restore Rack Awareness documentation
> 
>
> Key: HADOOP-11380
> URL: https://issues.apache.org/jira/browse/HADOOP-11380
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Allen Wittenauer
> Attachments: HADOOP-11380.patch
>
>
> As part of HADOOP-8427, large, extremely useful sections of the Rack 
> Awareness documentation that was added in HADOOP-6616 were wiped out.  We 
> should restore them as a separate document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11398) RetryUpToMaximumTimeWithFixedSleep needs to behave more accurately

2014-12-15 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247090#comment-14247090
 ] 

Jing Zhao commented on HADOOP-11398:


Thanks for working on this, Li! The current patch changes the retry policy 
from stateless to stateful, and may not guarantee correctness in a 
multi-threaded scenario. A better fix may be to add time-based retry support 
to our current RetryPolicy.

> RetryUpToMaximumTimeWithFixedSleep needs to behave more accurately
> --
>
> Key: HADOOP-11398
> URL: https://issues.apache.org/jira/browse/HADOOP-11398
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: HADOOP-11398-121114.patch
>
>
> RetryUpToMaximumTimeWithFixedSleep now inherits 
> RetryUpToMaximumCountWithFixedSleep and just acts as a wrapper to decide 
> maxRetries. The current implementation uses (maxTime / sleepTime) as the 
> number of maxRetries. This is fine if the actual time for each retry is 
> significantly less than the sleep time, but it becomes less accurate if each 
> retry takes a comparable amount of time to the sleep time. The problem gets 
> worse when there are underlying retries. 
> We may want to use timers inside RetryUpToMaximumTimeWithFixedSleep to 
> perform accurate timing. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9363) AuthenticatedURL will NPE if server closes connection

2014-12-15 Thread Anthony Hsu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14247078#comment-14247078
 ] 

Anthony Hsu commented on HADOOP-9363:
-

Hi [~daryn],

Any updates on this ticket and/or a patch?  We have seen this issue 
intermittently at LinkedIn, too.

Best,
Anthony

> AuthenticatedURL will NPE if server closes connection
> -
>
> Key: HADOOP-9363
> URL: https://issues.apache.org/jira/browse/HADOOP-9363
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
>
> A NPE occurs if the server unexpectedly closes the connection for an 
> {{AuthenticatedURL}} w/o sending a response.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10852) NetgroupCache is not thread-safe

2014-12-15 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HADOOP-10852:
--
Attachment: HADOOP-10852.patch

Thanks for the review, [~arpitagarwal]. Attaching the newer patch. Will commit 
once the Jenkins build finishes with the expected results.

> NetgroupCache is not thread-safe
> 
>
> Key: HADOOP-10852
> URL: https://issues.apache.org/jira/browse/HADOOP-10852
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.0, 2.4.1
>Reporter: Benoy Antony
>Assignee: Benoy Antony
> Attachments: HADOOP-10852.patch, HADOOP-10852.patch, 
> HADOOP-10852.patch, HADOOP-10852.patch
>
>
> _NetgroupCache_ internally uses two ConcurrentHashMaps and a boolean variable 
> to signal updates on one of the ConcurrentHashMaps.
> None of the functions are synchronized, and hence it is possible to get 
> unexpected results due to race conditions between different threads.
> As an example, consider the following sequence:
> Thread 1:
> {{add}} a group
> {{netgroupToUsersMap}} is updated.
> {{netgroupToUsersMapUpdated}} is set to true.
> Thread 2:
> calls {{getNetgroups}} for a user
> Due to re-ordering, {{netgroupToUsersMapUpdated=true}} is visible, but the 
> updates in {{netgroupToUsersMap}} are not visible.
> It then does a wrong update with the older {{netgroupToUsersMap}} values. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-7222) Inconsistent behavior when passing a path with special characters as literals to some FsShell commands

2014-12-15 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-7222:
-
Component/s: (was: scripts)

> Inconsistent behavior when passing a path with special characters as literals 
> to some FsShell commands
> --
>
> Key: HADOOP-7222
> URL: https://issues.apache.org/jira/browse/HADOOP-7222
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.20.2
> Environment: Unix, Java 1.6, hadoop 0.20.2
>Reporter: Karl Kuntz
>
> The following work:
> hadoop dfs --put test^ing /tmp 
> hadoop dfs --ls /tmp   
> The following do not:
> hadoop dfs --ls /tmp/test^ing  
> hadoop dfs --get /tmp/test^ing test^ing 
> The first fails with "ls: Cannot access /tmp/test^ing: No such file or 
> directory." 
> The second fails with "get: null".
>  
> It is possible to put a file with some special characters, such as ^, using 
> the hadoop shell.  But once it is put, one cannot ls, cat, or get the file 
> due to the way some commands deal with file globbing.  Harsh J suggested on 
> the mailing list that perhaps a flag that would turn off globbing could be 
> implemented. Perhaps something like single-quoting the file path on the 
> command line to disable globbing would work as well.   
> As an example, in the source for 0.20.2 the ^ character in particular wasn't 
> escaped in the output pattern in FileSystem.java @line 1050 in 
> setRegex(String filePattern):
> ...
> } else if (pCh == '[' && setOpen == 0) {
>   setOpen++;
>   hasPattern = true;
> } else if (pCh == '^' && setOpen > 0) {
> } else if (pCh == '-' && setOpen > 0) {
>   // Character set range
>   setRange = true;
> ...
> After looking in trunk, it seems to have been dealt with in later versions 
> (refactored into GlobPattern.java)
> ...
>  case '^': // ^ inside [...] can be unescaped
>   if (setOpen == 0) {
> regex.append(BACKSLASH);
>   }
>   break;
>  case '!': //
> ...
> but even after pushing that back into 0.20.2 and testing, it appears to 
> resolve the issue for commands like ls, but not for get.  So perhaps there is 
> more to be done for other commands?
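
A tiny illustration of why the unescaped caret breaks matching (hypothetical, 
using plain {{java.util.regex}}): outside a character set, {{^}} is an anchor, 
so it must be escaped to match the literal character.

{code:java}
import java.util.regex.Pattern;

public class CaretGlobDemo {
  public static void main(String[] args) {
    String name = "test^ing";
    // Unescaped: '^' anchors the pattern, so it can never match mid-string.
    System.out.println(Pattern.matches("test^ing", name));    // false
    // Escaped, as the later GlobPattern code does outside [...] sets:
    System.out.println(Pattern.matches("test\\^ing", name));  // true
  }
}
{code}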



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-8505) hadoop scripts to support user native lib dirs

2014-12-15 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-8505.
--
Resolution: Implemented

Trunk/3.x supports appending to JAVA_LIBRARY_PATH. Closing as implemented.

> hadoop scripts to support user native lib dirs
> --
>
> Key: HADOOP-8505
> URL: https://issues.apache.org/jira/browse/HADOOP-8505
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 1.0.3
>Reporter: Steve Loughran
>Priority: Minor
>
> you can set up a custom classpath with bin/hadoop through the 
> HADOOP_CLASSPATH env, but there is no equivalent for the native libraries 
> -the only way to get them picked up is to drop them into lib/native/${arch}/ 
> , which impacts everything.
> Having some HADOOP_NATIVE_LIB_PATH env variable would let people add new 
> native binaries to Hadoop commands.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-7852) consolidate templates

2014-12-15 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-7852.
--
Resolution: Won't Fix

The templates and the configuration tool have been removed from trunk. Closing 
as Won't Fix.

> consolidate templates
> -
>
> Key: HADOOP-7852
> URL: https://issues.apache.org/jira/browse/HADOOP-7852
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 0.23.0
>Reporter: Joe Crobak
>Priority: Minor
>
> the hadoop-common project has templates for hdfs-site.xml and mapred-site.xml 
> that are used by the config generator scripts.  The 
> hadoop-{mapreduce,hdfs} projects also have {mapred,hdfs}-site.xml templates, 
> and the templates don't match. It would be good if these could be 
> consolidated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11408) TestRetryCacheWithHA.testUpdatePipeline failed in trunk

2014-12-15 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang resolved HADOOP-11408.

Resolution: Duplicate

> TestRetryCacheWithHA.testUpdatePipeline failed in trunk
> ---
>
> Key: HADOOP-11408
> URL: https://issues.apache.org/jira/browse/HADOOP-11408
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yongjun Zhang
>
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/1974/testReport/
> Error Message
> {quote}
> After waiting the operation updatePipeline still has not taken effect on NN 
> yet
> Stacktrace
> java.lang.AssertionError: After waiting the operation updatePipeline still 
> has not taken effect on NN yet
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testClientRetryWithFailover(TestRetryCacheWithHA.java:1278)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline(TestRetryCacheWithHA.java:1176)
> {quote}
> Found by tool proposed in HADOOP-11045:
> {quote}
> [yzhang@localhost jenkinsftf]$ ./determine-flaky-tests-hadoop.py -j 
> Hadoop-Hdfs-trunk -n 5 | tee bt.log
> Recently FAILED builds in url: 
> https://builds.apache.org//job/Hadoop-Hdfs-trunk
> THERE ARE 4 builds (out of 6) that have failed tests in the past 5 days, 
> as listed below:
> ===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1974/testReport 
> (2014-12-15 03:30:01)
> Failed test: 
> org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName
> Failed test: 
> org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect
> Failed test: 
> org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline
> ===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1972/testReport 
> (2014-12-13 10:32:27)
> Failed test: 
> org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName
> ===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1971/testReport 
> (2014-12-13 03:30:01)
> Failed test: 
> org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline
> ===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1969/testReport 
> (2014-12-11 03:30:01)
> Failed test: 
> org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect
> Failed test: 
> org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline
> Failed test: 
> org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover.testFailoverRightBeforeCommitSynchronization
> Among 6 runs examined, all failed tests <#failedRuns: testName>:
> 3: 
> org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline
> 2: org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName
> 2: 
> org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect
> 1: 
> org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover.testFailoverRightBeforeCommitSynchronization
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11408) TestRetryCacheWithHA.testUpdatePipeline failed in trunk

2014-12-15 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HADOOP-11408:
---
Description: 
https://builds.apache.org/job/Hadoop-Hdfs-trunk/1974/testReport/

Error Message
{quote}
After waiting the operation updatePipeline still has not taken effect on NN yet
Stacktrace

java.lang.AssertionError: After waiting the operation updatePipeline still has 
not taken effect on NN yet
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testClientRetryWithFailover(TestRetryCacheWithHA.java:1278)
at 
org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline(TestRetryCacheWithHA.java:1176)
{quote}

Found by tool proposed in HADOOP-11045:

{quote}
[yzhang@localhost jenkinsftf]$ ./determine-flaky-tests-hadoop.py -j 
Hadoop-Hdfs-trunk -n 5 | tee bt.log
Recently FAILED builds in url: 
https://builds.apache.org//job/Hadoop-Hdfs-trunk
THERE ARE 4 builds (out of 6) that have failed tests in the past 5 days, as 
listed below:

===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1974/testReport (2014-12-15 
03:30:01)
Failed test: 
org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName
Failed test: 
org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect
Failed test: 
org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline
===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1972/testReport (2014-12-13 
10:32:27)
Failed test: 
org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName
===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1971/testReport (2014-12-13 
03:30:01)
Failed test: 
org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline
===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1969/testReport (2014-12-11 
03:30:01)
Failed test: 
org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect
Failed test: 
org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline
Failed test: 
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover.testFailoverRightBeforeCommitSynchronization

Among 6 runs examined, all failed tests <#failedRuns: testName>:
3: 
org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline
2: org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName
2: 
org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect
1: 
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover.testFailoverRightBeforeCommitSynchronization
{quote}


  was:
https://builds.apache.org/job/Hadoop-Hdfs-trunk/1974/testReport/

Error Message
{quote}
After waiting the operation updatePipeline still has not taken effect on NN yet
Stacktrace

java.lang.AssertionError: After waiting the operation updatePipeline still has 
not taken effect on NN yet
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testClientRetryWithFailover(TestRetryCacheWithHA.java:1278)
at 
org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline(TestRetryCacheWithHA.java:1176)
{quote}

Found by tool proposed in HADOOP-11045:

{quote}
[yzhang@localhost jenkinsftf]$ ./determine-flaky-tests-hadoop.py -j 
Hadoop-Hdfs-trunk -n 28 | tee bt.log
Recently FAILED builds in url: 
https://builds.apache.org//job/Hadoop-Hdfs-trunk
THERE ARE 4 builds (out of 6) that have failed tests in the past 28 days, 
as listed below:

===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1974/testReport (2014-12-15 
03:30:01)
Failed test: 
org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName
Failed test: 
org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect
Failed test: 
org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline
===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1972/testReport (2014-12-13 
10:32:27)
Failed test: 
org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName
===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1971/testReport (2014-12-13 
03:30:01)
Failed test: 
org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline
===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1969/testReport (2014-12-11 
03:30:01)
Failed test: 
org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect
Failed test: 
org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline
Failed test: 
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover.testFailoverRightBeforeCommitSynchronization

Among 6 runs examined, all failed tests <#failedRuns: testName>:
3: 
org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline
2: org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName
2: 
org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect
1: 
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover.testFailoverRightBeforeCommitSynchronization
{quote}

[jira] [Created] (HADOOP-11408) TestRetryCacheWithHA.testUpdatePipeline failed in trunk

2014-12-15 Thread Yongjun Zhang (JIRA)
Yongjun Zhang created HADOOP-11408:
--

 Summary: TestRetryCacheWithHA.testUpdatePipeline failed in trunk
 Key: HADOOP-11408
 URL: https://issues.apache.org/jira/browse/HADOOP-11408
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Yongjun Zhang


https://builds.apache.org/job/Hadoop-Hdfs-trunk/1974/testReport/

Error Message
{quote}
After waiting the operation updatePipeline still has not taken effect on NN yet
Stacktrace

java.lang.AssertionError: After waiting the operation updatePipeline still has 
not taken effect on NN yet
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testClientRetryWithFailover(TestRetryCacheWithHA.java:1278)
at 
org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline(TestRetryCacheWithHA.java:1176)
{quote}

Found by tool proposed in HADOOP-11045:

{quote}
[yzhang@localhost jenkinsftf]$ ./determine-flaky-tests-hadoop.py -j 
Hadoop-Hdfs-trunk -n 28 | tee bt.log
Recently FAILED builds in url: 
https://builds.apache.org//job/Hadoop-Hdfs-trunk
THERE ARE 4 builds (out of 6) that have failed tests in the past 28 days, 
as listed below:

===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1974/testReport (2014-12-15 
03:30:01)
Failed test: 
org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName
Failed test: 
org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect
Failed test: 
org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline
===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1972/testReport (2014-12-13 
10:32:27)
Failed test: 
org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName
===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1971/testReport (2014-12-13 
03:30:01)
Failed test: 
org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline
===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1969/testReport (2014-12-11 
03:30:01)
Failed test: 
org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect
Failed test: 
org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline
Failed test: 
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover.testFailoverRightBeforeCommitSynchronization

Among 6 runs examined, all failed tests <#failedRuns: testName>:
3: 
org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA.testUpdatePipeline
2: org.apache.hadoop.hdfs.TestDecommission.testIncludeByRegistrationName
2: 
org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect
1: 
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover.testFailoverRightBeforeCommitSynchronization
{quote}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11321) copyToLocal cannot save a file to an SMB share unless the user has Full Control permissions.

2014-12-15 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14246810#comment-14246810
 ] 

Chris Nauroth commented on HADOOP-11321:


Thanks for reviewing, Arpit.

bq. Good catch. If the directory does not end in a trailing backslash then 
should it be -13 instead?

By the time execution reaches here, we have passed the path through 
{{java.io.File#getAbsolutePath}}.  This appears to have the behavior of 
removing the trailing slash if present.  Even if that weren't the case, the JDK 
code has no special handling for a trailing slash, so I think we're all clear.

http://hg.openjdk.java.net/jdk7/modules/jdk/file/a37326fa7f95/src/windows/native/java/io/io_util_md.c#l146
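
A quick standalone illustration of that normalization (a sketch, not part of the patch):

{code}
import java.io.File;

public class TrailingSlashDemo {
  public static void main(String[] args) {
    // On Windows, java.io.File normalizes the pathname it is given, so a
    // trailing separator is dropped and both calls print the same path.
    System.out.println(new File("C:\\smb\\dest\\").getAbsolutePath());
    System.out.println(new File("C:\\smb\\dest").getAbsolutePath());
  }
}
{code}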


> copyToLocal cannot save a file to an SMB share unless the user has Full 
> Control permissions.
> 
>
> Key: HADOOP-11321
> URL: https://issues.apache.org/jira/browse/HADOOP-11321
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HADOOP-11321.003.patch, HADOOP-11321.004.patch, 
> HADOOP-11321.005.patch, HADOOP-11321.006.patch, HADOOP-11321.1.patch, 
> HADOOP-11321.2.patch, winutils.tmp.patch
>
>
> In Hadoop 2, it is impossible to use {{copyToLocal}} to copy a file from HDFS 
> to a destination on an SMB share.  This is because in Hadoop 2, the 
> {{copyToLocal}} maps to 2 underlying {{RawLocalFileSystem}} operations: 
> {{create}} and {{setPermission}}.  On an SMB share, the user may be 
> authorized for the {{create}} but denied for the {{setPermission}}.  Windows 
> denies the {{WRITE_DAC}} right required by {{setPermission}} unless the user 
> has Full Control permissions.  Granting Full Control isn't feasible for most 
> deployments, because it's insecure.  This is a regression from Hadoop 1, 
> where {{copyToLocal}} only did a {{create}} and didn't do a separate 
> {{setPermission}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10668) TestZKFailoverControllerStress#testExpireBackAndForth occasionally fails

2014-12-15 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-10668:

  Component/s: test
 Priority: Major  (was: Minor)
Affects Version/s: 3.0.0

Uprating to Major as this is blocking Hadoop Common trunk builds.

> TestZKFailoverControllerStress#testExpireBackAndForth occasionally fails
> 
>
> Key: HADOOP-10668
> URL: https://issues.apache.org/jira/browse/HADOOP-10668
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>  Labels: test
>
> From 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/4018//testReport/org.apache.hadoop.ha/TestZKFailoverControllerStress/testExpireBackAndForth/
>  :
> {code}
> org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode
>   at org.apache.zookeeper.server.DataTree.getData(DataTree.java:648)
>   at org.apache.zookeeper.server.ZKDatabase.getData(ZKDatabase.java:371)
>   at 
> org.apache.hadoop.ha.MiniZKFCCluster.expireActiveLockHolder(MiniZKFCCluster.java:199)
>   at 
> org.apache.hadoop.ha.MiniZKFCCluster.expireAndVerifyFailover(MiniZKFCCluster.java:234)
>   at 
> org.apache.hadoop.ha.TestZKFailoverControllerStress.testExpireBackAndForth(TestZKFailoverControllerStress.java:84)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11362) Test org.apache.hadoop.crypto.random.TestOsSecureRandom.testOsSecureRandomSetConf timing out

2014-12-15 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14246539#comment-14246539
 ] 

Steve Loughran commented on HADOOP-11362:
-

findbugs warning is (clearly) spurious

> Test 
> org.apache.hadoop.crypto.random.TestOsSecureRandom.testOsSecureRandomSetConf 
> timing out
> 
>
> Key: HADOOP-11362
> URL: https://issues.apache.org/jira/browse/HADOOP-11362
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
> Environment: ASF Jenkins, Java 7 & 8
>Reporter: Steve Loughran
> Attachments: 
> 0001-HADOOP-11362-Test-org.apache.hadoop.crypto.random.Te.patch
>
>
> The test 
> {{org.apache.hadoop.crypto.random.TestOsSecureRandom.testOsSecureRandomSetConf}}
>  is timing out on jenkins + Java 8.
> This is probably the exec() operation. It may be transient, or it may be a 
> Java 8 + shell problem. 
> Do we actually need this test in its present form? If a test for file handle 
> leakage is really needed, attempting to create 64K instances of the OSRandom 
> object should do it without having to resort to some printing and manual 
> debugging of logs.
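
Along those lines, a minimal sketch of the suggested leak check (assumptions: OsSecureRandom opens the OS random device per instance and is Closeable; this is not the attached patch):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.random.OsSecureRandom;

public class OsSecureRandomLeakCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // If an instance leaks its file descriptor, this loop should fail with
    // "too many open files" long before it completes 64K iterations.
    for (int i = 0; i < 64 * 1024; i++) {
      OsSecureRandom random = new OsSecureRandom();
      random.setConf(conf);
      random.nextBytes(new byte[16]);
      random.close();
    }
  }
}
{code}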



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11397) Can't override HADOOP_IDENT_STRING

2014-12-15 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14246537#comment-14246537
 ] 

Steve Loughran commented on HADOOP-11397:
-

The warnings and test failure are unrelated; HADOOP-11362 will fix the test 
failure when applied.

> Can't override HADOOP_IDENT_STRING
> --
>
> Key: HADOOP-11397
> URL: https://issues.apache.org/jira/browse/HADOOP-11397
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>Priority: Trivial
> Attachments: HADOOP-11397.001.patch
>
>
> Simple typo in hadoop_basic_init:
> {code}
>   HADOOP_IDENT_STRING=${HADOP_IDENT_STRING:-$USER}
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11390) Metrics 2 ganglia provider to include hostname in unresolved address problems

2014-12-15 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14246504#comment-14246504
 ] 

Varun Saxena commented on HADOOP-11390:
---

[~ste...@apache.org], kindly review

> Metrics 2 ganglia provider to include hostname in unresolved address problems
> -
>
> Key: HADOOP-11390
> URL: https://issues.apache.org/jira/browse/HADOOP-11390
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>Assignee: Varun Saxena
>Priority: Minor
> Fix For: 2.7.0
>
> Attachments: HADOOP-11390.001.patch, HADOOP-11390.002.patch
>
>
> When metrics2/ganglia gets an unresolved hostname, it doesn't include the 
> hostname in question, making the problem harder to track down.
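
For illustration, a minimal sketch of the kind of change proposed (standalone, not the attached patch; names are hypothetical):

{code}
import java.net.InetAddress;
import java.net.UnknownHostException;

public class ResolveWithContext {
  // Rethrow resolution failures with the offending hostname in the message,
  // so the bad ganglia server address is easy to spot in logs.
  public static InetAddress resolve(String host) throws UnknownHostException {
    try {
      return InetAddress.getByName(host);
    } catch (UnknownHostException e) {
      UnknownHostException wrapped =
          new UnknownHostException("Unresolved ganglia hostname: " + host);
      wrapped.initCause(e);
      throw wrapped;
    }
  }
}
{code}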



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11407) Adding socket receive buffer size support in Client.java

2014-12-15 Thread Liang Xie (JIRA)
Liang Xie created HADOOP-11407:
--

 Summary: Adding socket receive buffer size support in Client.java
 Key: HADOOP-11407
 URL: https://issues.apache.org/jira/browse/HADOOP-11407
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 2.6.0
Reporter: Liang Xie
Assignee: Liang Xie


It would be good if the Client class had a socketReceiveBufferSize, just like 
the Server's socketSendBufferSize.
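
A hypothetical sketch of what such a knob could look like on the client-side connection socket (class and method names invented for illustration):

{code}
import java.net.Socket;
import java.net.SocketException;

public class ClientReceiveBuffer {
  // Mirror of the server-side send buffer handling: apply SO_RCVBUF to the
  // client socket when a positive size is configured. The value is only a
  // hint; the kernel may round or cap it.
  public static void maybeSetReceiveBufferSize(Socket socket, int bufferSize)
      throws SocketException {
    if (bufferSize > 0) {
      socket.setReceiveBufferSize(bufferSize);
    }
  }
}
{code}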



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)