[jira] [Commented] (HADOOP-10285) Admin interface to swap callqueue at runtime

2014-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909215#comment-13909215
 ] 

Hadoop QA commented on HADOOP-10285:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12630428/HADOOP-10285.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.ha.TestHASafeMode

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3596//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3596//console

This message is automatically generated.

> Admin interface to swap callqueue at runtime
> 
>
> Key: HADOOP-10285
> URL: https://issues.apache.org/jira/browse/HADOOP-10285
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chris Li
> Attachments: HADOOP-10285.patch, HADOOP-10285.patch, 
> HADOOP-10285.patch
>
>
> We wish to swap the active call queue during runtime in order to do 
> performance tuning without restarting the namenode.
> This patch adds the ability to refresh the call queue on the namenode, 
> through dfsadmin -refreshCallQueue



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10359) Native bzip2 compression support is broken on non-Linux systems

2014-02-21 Thread Ilya Maykov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909182#comment-13909182
 ] 

Ilya Maykov commented on HADOOP-10359:
--

Could a committer please review this patch?

> Native bzip2 compression support is broken on non-Linux systems
> ---
>
> Key: HADOOP-10359
> URL: https://issues.apache.org/jira/browse/HADOOP-10359
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 2.2.0
> Environment: Mac OS X 10.8.5
> Oracle JDK 1.7.0_51
> Hadoop 2.2.0-CDH5.0.0-beta-2
>Reporter: Ilya Maykov
>Priority: Minor
> Attachments: HADOOP-10359-native-bzip2-for-os-x.patch
>
>
> While testing the patch for HADOOP-9648, I noticed that the bzip2 native 
> compressor/decompressor support wasn't working properly. I dug around a bit 
> and got native bzip2 support to work on my macbook. Will attach a patch in a 
> bit. (This probably needs to be tested on FreeBSD / Windows / Linux, but I 
> don't have the time to set up the necessary VMs to do it. I assume the build 
> bot will test Linux).





[jira] [Commented] (HADOOP-10359) Native bzip2 compression support is broken on non-Linux systems

2014-02-21 Thread Ilya Maykov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909167#comment-13909167
 ] 

Ilya Maykov commented on HADOOP-10359:
--

Note: I believe that native support is broken on non-linux systems because the 
header file unconditionally does

#define HADOOP_BZIP2_LIBRARY "libbz2.so.1"

which I think is only correct for Linux, according to some comments found in 
CMakeLists.txt (Windows should have a ".dll", FreeBSD a ".so" without the 
version, and Mac OS X a ".version.dylib"). However, I've only verified that it's 
broken on Mac OS X.

> Native bzip2 compression support is broken on non-Linux systems
> ---
>
> Key: HADOOP-10359
> URL: https://issues.apache.org/jira/browse/HADOOP-10359
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 2.2.0
> Environment: Mac OS X 10.8.5
> Oracle JDK 1.7.0_51
> Hadoop 2.2.0-CDH5.0.0-beta-2
>Reporter: Ilya Maykov
>Priority: Minor
> Attachments: HADOOP-10359-native-bzip2-for-os-x.patch
>
>
> While testing the patch for HADOOP-9648, I noticed that the bzip2 native 
> compressor/decompressor support wasn't working properly. I dug around a bit 
> and got native bzip2 support to work on my macbook. Will attach a patch in a 
> bit. (This probably needs to be tested on FreeBSD / Windows / Linux, but I 
> don't have the time to set up the necessary VMs to do it. I assume the build 
> bot will test Linux).





[jira] [Created] (HADOOP-10359) Native bzip2 compression support is broken on non-Linux systems

2014-02-21 Thread Ilya Maykov (JIRA)
Ilya Maykov created HADOOP-10359:


 Summary: Native bzip2 compression support is broken on non-Linux 
systems
 Key: HADOOP-10359
 URL: https://issues.apache.org/jira/browse/HADOOP-10359
 Project: Hadoop Common
  Issue Type: Improvement
  Components: native
Affects Versions: 2.2.0
 Environment: Mac OS X 10.8.5
Oracle JDK 1.7.0_51
Hadoop 2.2.0-CDH5.0.0-beta-2
Reporter: Ilya Maykov
Priority: Minor
 Attachments: HADOOP-10359-native-bzip2-for-os-x.patch

While testing the patch for HADOOP-9648, I noticed that the bzip2 native 
compressor/decompressor support wasn't working properly. I dug around a bit and 
got native bzip2 support to work on my macbook. Will attach a patch in a bit. 
(This probably needs to be tested on FreeBSD / Windows / Linux, but I don't 
have the time to set up the necessary VMs to do it. I assume the build bot will 
test Linux).





[jira] [Updated] (HADOOP-10359) Native bzip2 compression support is broken on non-Linux systems

2014-02-21 Thread Ilya Maykov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Maykov updated HADOOP-10359:
-

Attachment: HADOOP-10359-native-bzip2-for-os-x.patch

> Native bzip2 compression support is broken on non-Linux systems
> ---
>
> Key: HADOOP-10359
> URL: https://issues.apache.org/jira/browse/HADOOP-10359
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 2.2.0
> Environment: Mac OS X 10.8.5
> Oracle JDK 1.7.0_51
> Hadoop 2.2.0-CDH5.0.0-beta-2
>Reporter: Ilya Maykov
>Priority: Minor
> Attachments: HADOOP-10359-native-bzip2-for-os-x.patch
>
>
> While testing the patch for HADOOP-9648, I noticed that the bzip2 native 
> compressor/decompressor support wasn't working properly. I dug around a bit 
> and got native bzip2 support to work on my macbook. Will attach a patch in a 
> bit. (This probably needs to be tested on FreeBSD / Windows / Linux, but I 
> don't have the time to set up the necessary VMs to do it. I assume the build 
> bot will test Linux).





[jira] [Commented] (HADOOP-10358) libhadoop doesn't compile on Mac OS X

2014-02-21 Thread Ilya Maykov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909153#comment-13909153
 ] 

Ilya Maykov commented on HADOOP-10358:
--

Yes, this is a duplicate. My patch is smaller and only fixes 
hadoop-common-project without changing hadoop-yarn-project. I'll make a comment 
in the other issue.

> libhadoop doesn't compile on Mac OS X
> -
>
> Key: HADOOP-10358
> URL: https://issues.apache.org/jira/browse/HADOOP-10358
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: native
> Environment: Mac OS X 10.8.5
> Oracle JDK 1.7.0_51
>Reporter: Ilya Maykov
>Priority: Minor
> Attachments: HADOOP-10358-fix-hadoop-common-native-on-os-x.patch
>
>
> The native component of hadoop-common (libhadoop.so on linux, libhadoop.dylib 
> on mac) fails to compile on Mac OS X. The problem is in  
> hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.c
>  at lines 76-78:
> [exec] 
> /Users/ilyam/src/github/apache/hadoop-common/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.c:77:26:
>  error: invalid operands to binary expression ('void' and 'int')
> [exec]  if(setnetgrent(cgroup) == 1) {
> [exec]  ~~~ ^  ~
> There are two problems in the code:
> 1) The #ifndef guard only checks for __FreeBSD__ but should check for either 
> one of __FreeBSD__ or __APPLE__. This is because Mac OS X inherits its 
> syscalls from FreeBSD rather than Linux, and thus the setnetgrent() syscall 
> returns void.
> 2) setnetgrentCalledFlag = 1 is set outside the #ifndef guard, but the 
> syscall is only invoked inside the guard. This means that on FreeBSD, 
> endnetgrent() can be called in the cleanup code without a corresponding 
> setnetgrent() invocation.
> I have a patch that fixes both issues (will attach in a bit). With this 
> patch, I'm able to compile libhadoop.dylib on Mac OS X, which in turn lets me 
> install native snappy, lzo, etc compressor libraries on my client. That lets 
> me run commands like 'hadoop fs -text somefile.lzo' from the macbook rather 
> than having to ssh to a linux box, etc.
> Note that this patch only fixes the native build of hadoop-common-project. 
> Some other components of hadoop still fail to build their native components, 
> but libhadoop.dylib is enough for the client.





[jira] [Commented] (HADOOP-10070) RPC client doesn't use per-connection conf to determine server's expected Kerberos principal name

2014-02-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909131#comment-13909131
 ] 

Hudson commented on HADOOP-10070:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #5210 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5210/])
HADOOP-10070. RPC client doesn't use per-connection conf to determine server's 
expected Kerberos principal name. Contributed by Aaron T. Myers. (atm: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1570776)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ClientCache.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcClient.java


> RPC client doesn't use per-connection conf to determine server's expected 
> Kerberos principal name
> -
>
> Key: HADOOP-10070
> URL: https://issues.apache.org/jira/browse/HADOOP-10070
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.2.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HADOOP-10070.patch, HADOOP-10070.patch, 
> TestKerberosClient.java
>
>
> Currently, RPC client caches the {{Configuration}} object that was passed in 
> to its constructor and uses that same conf for every connection it sets up 
> thereafter. This can cause problems when security is enabled if the 
> {{Configuration}} object provided when the first RPC connection was made does 
> not contain all possible entries for all server principals that will later be 
> used by subsequent connections. When this happens, it will result in later 
> RPC connections incorrectly failing with the error "Failed to specify 
> server's Kerberos principal name" even though the principal name was 
> specified in the {{Configuration}} object provided on later RPC connection 
> attempts.
> I believe this means that we've inadvertently reintroduced HADOOP-6907.





[jira] [Commented] (HADOOP-10357) Memory Leak in UserGroupInformation.doAs for JDBC Connection to Hive

2014-02-21 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909115#comment-13909115
 ] 

Larry McCay commented on HADOOP-10357:
--

For a simple maven project that you can pull and build: 
https://github.com/lmccay/ugihivememory



> Memory Leak in UserGroupInformation.doAs for JDBC Connection to Hive
> 
>
> Key: HADOOP-10357
> URL: https://issues.apache.org/jira/browse/HADOOP-10357
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.2.0
>Reporter: Larry McCay
>
> When using UGI.doAs in order to make a connection, there appears to be a 
> memory leak involving the UGI that is used for the doAs and the UGI held by 
> TUGIAssumingTransport.
> When using this approach to establish a JDBC connection in an environment 
> that serves many users and requests, the client side eventually runs out of 
> memory.





[jira] [Commented] (HADOOP-10357) Memory Leak in UserGroupInformation.doAs for JDBC Connection to Hive

2014-02-21 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909111#comment-13909111
 ] 

Larry McCay commented on HADOOP-10357:
--

The following class can be used to watch the memory footprint grow while it 
makes 200,000 connections and just closes them:

{code}
package org.apache.hadoop.examples;

import java.security.PrivilegedExceptionAction;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

import org.apache.hadoop.security.UserGroupInformation;

public class HiveMemoryExample {

    // JDBC credentials
    static final String JDBC_DRIVER = "org.apache.hive.jdbc.HiveDriver";
    static final String KEYTABDIR =
        "/etc/security/keytabs/hive.service.keytab";
    static final String HIVE_PRINCIPAL = "hive/example@example.com";
    static final String JDBC_DB_URL =
        "jdbc:hive2://127.0.0.1:10000/default;principal=" + HIVE_PRINCIPAL;
    static final String USER = null;
    static final String PASS = null;

    static Connection getConnection() throws Exception {
        final UserGroupInformation ugi =
            UserGroupInformation.loginUserFromKeytabAndReturnUGI(
                HIVE_PRINCIPAL, KEYTABDIR);

        return ugi.doAs(new PrivilegedExceptionAction<Connection>() {
            public Connection run() {
                Connection con = null;
                try {
                    Class.forName(JDBC_DRIVER);
                    con = DriverManager.getConnection(JDBC_DB_URL, USER, PASS);
                } catch (SQLException e) {
                    e.printStackTrace();
                } catch (ClassNotFoundException e) {
                    e.printStackTrace();
                }
                return con;
            }
        });
    }

    public static void main(String[] args) {
        UserGroupInformation.setConfiguration(
            new org.apache.hadoop.conf.Configuration());

        System.out.println("-- Test started ---");
        Runtime rtime = Runtime.getRuntime();

        for (int i = 0; i < 200000; i++) {
            Connection conn = null;
            try {
                conn = getConnection();
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                try {
                    if (conn != null) {
                        conn.close();
                    }
                } catch (SQLException e) {
                    // ignore close failures
                }
            }

            // Print used memory
            System.out.println("Iteration = " + i + " Used Memory: "
                + (rtime.totalMemory() - rtime.freeMemory()) + " Bytes");
        }

        System.out.println("Test ended");
    }
}
{code}

> Memory Leak in UserGroupInformation.doAs for JDBC Connection to Hive
> 
>
> Key: HADOOP-10357
> URL: https://issues.apache.org/jira/browse/HADOOP-10357
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.2.0
>Reporter: Larry McCay
>
> When using UGI.doAs in order to make a connection, there appears to be a 
> memory leak involving the UGI that is used for the doAs and the UGI held by 
> TUGIAssumingTransport.
> When using this approach to establish a JDBC connection in an environment 
> that serves many users and requests, the client side eventually runs out of 
> memory.





[jira] [Commented] (HADOOP-10070) RPC client doesn't use per-connection conf to determine server's expected Kerberos principal name

2014-02-21 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909088#comment-13909088
 ] 

Aaron T. Myers commented on HADOOP-10070:
-

Yes, that's correct - this fixes the latter issue.

I'm going to go ahead and commit this momentarily based on Daryn's +1.

> RPC client doesn't use per-connection conf to determine server's expected 
> Kerberos principal name
> -
>
> Key: HADOOP-10070
> URL: https://issues.apache.org/jira/browse/HADOOP-10070
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.2.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: HADOOP-10070.patch, HADOOP-10070.patch, 
> TestKerberosClient.java
>
>
> Currently, RPC client caches the {{Configuration}} object that was passed in 
> to its constructor and uses that same conf for every connection it sets up 
> thereafter. This can cause problems when security is enabled if the 
> {{Configuration}} object provided when the first RPC connection was made does 
> not contain all possible entries for all server principals that will later be 
> used by subsequent connections. When this happens, it will result in later 
> RPC connections incorrectly failing with the error "Failed to specify 
> server's Kerberos principal name" even though the principal name was 
> specified in the {{Configuration}} object provided on later RPC connection 
> attempts.
> I believe this means that we've inadvertently reintroduced HADOOP-6907.





[jira] [Updated] (HADOOP-10285) Admin interface to swap callqueue at runtime

2014-02-21 Thread Chris Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Li updated HADOOP-10285:
--

Attachment: HADOOP-10285.patch

Suppress the findbugs errors like the other protobufs do. Not sure why the HA 
test fails; it passed on my machine. Let's try again.

> Admin interface to swap callqueue at runtime
> 
>
> Key: HADOOP-10285
> URL: https://issues.apache.org/jira/browse/HADOOP-10285
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chris Li
> Attachments: HADOOP-10285.patch, HADOOP-10285.patch, 
> HADOOP-10285.patch
>
>
> We wish to swap the active call queue during runtime in order to do 
> performance tuning without restarting the namenode.
> This patch adds the ability to refresh the call queue on the namenode, 
> through dfsadmin -refreshCallQueue





[jira] [Commented] (HADOOP-9648) Fix build native library on mac osx

2014-02-21 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909047#comment-13909047
 ] 

Andrew Wang commented on HADOOP-9648:
-

The common changes look good to me. I can't really evaluate the YARN changes 
though, and I don't actually have a Mac to test this. Maybe a Mac-capable 
developer can step in and do the final review and +1?

> Fix build native library on mac osx
> ---
>
> Key: HADOOP-9648
> URL: https://issues.apache.org/jira/browse/HADOOP-9648
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 1.0.4, 1.2.0, 1.1.2, 2.0.5-alpha
>Reporter: Kirill A. Korinskiy
>Assignee: Binglin Chang
> Attachments: HADOOP-9648-native-osx.1.0.4.patch, 
> HADOOP-9648-native-osx.1.1.2.patch, HADOOP-9648-native-osx.1.2.0.patch, 
> HADOOP-9648-native-osx.2.0.5-alpha-rc1.patch, HADOOP-9648.v2.patch
>
>
> Some patches for fixing build a hadoop native library on os x 10.7/10.8.





[jira] [Commented] (HADOOP-10285) Admin interface to swap callqueue at runtime

2014-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909036#comment-13909036
 ] 

Hadoop QA commented on HADOOP-10285:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12630389/HADOOP-10285.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 4 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.ha.TestHASafeMode

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3595//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3595//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3595//console

This message is automatically generated.

> Admin interface to swap callqueue at runtime
> 
>
> Key: HADOOP-10285
> URL: https://issues.apache.org/jira/browse/HADOOP-10285
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chris Li
> Attachments: HADOOP-10285.patch, HADOOP-10285.patch
>
>
> We wish to swap the active call queue during runtime in order to do 
> performance tuning without restarting the namenode.
> This patch adds the ability to refresh the call queue on the namenode, 
> through dfsadmin -refreshCallQueue





[jira] [Updated] (HADOOP-10295) Allow distcp to automatically identify the checksum type of source files and use it for the target

2014-02-21 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HADOOP-10295:
---

Release Note: Add an option for distcp to preserve the checksum type of the 
source files. Users can pass "-pc" as a distcp command-line option to preserve 
the checksum type.

> Allow distcp to automatically identify the checksum type of source files and 
> use it for the target
> --
>
> Key: HADOOP-10295
> URL: https://issues.apache.org/jira/browse/HADOOP-10295
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 2.2.0
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: 3.0.0, 2.4.0
>
> Attachments: HADOOP-10295.000.patch, HADOOP-10295.002.patch, 
> hadoop-10295.patch
>
>
> Currently, while doing distcp, users can use "-Ddfs.checksum.type" to specify 
> the checksum type in the target FS. This works fine if all the source files 
> are using the same checksum type. If files in the source cluster have mixed 
> checksum types, users have to either use "-skipcrccheck" or hit a checksum 
> mismatch exception. Thus we may need to consider adding a new option to 
> distcp so that it can automatically identify the original checksum type of 
> each source file and use the same checksum type in the target FS. 





[jira] [Commented] (HADOOP-9968) ProxyUsers does not work with NetGroups

2014-02-21 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909009#comment-13909009
 ] 

Devaraj Das commented on HADOOP-9968:
-

Looks good to me. [~tucu00], please have a look if you can. I will commit it 
later today.

> ProxyUsers does not work with NetGroups
> ---
>
> Key: HADOOP-9968
> URL: https://issues.apache.org/jira/browse/HADOOP-9968
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Benoy Antony
>Assignee: Benoy Antony
> Attachments: HADOOP-9968.patch, HADOOP-9968.patch, HADOOP-9968.patch, 
> hadoop-9968-1.2.patch
>
>
> It is possible to use NetGroups for ACLs. This requires specifying  the 
> config property hadoop.security.group.mapping as  
> org.apache.hadoop.security.JniBasedUnixGroupsNetgroupMapping or 
> org.apache.hadoop.security.ShellBasedUnixGroupsNetgroupMapping.
> The authorization to proxy a user by another user is specified as a list of 
> groups via hadoop.proxyuser.<username>.groups. The group resolution does not 
> work if we are using NetGroups.





[jira] [Commented] (HADOOP-10358) libhadoop doesn't compile on Mac OS X

2014-02-21 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908945#comment-13908945
 ] 

Akira AJISAKA commented on HADOOP-10358:


This issue seems to have already been reported in HADOOP-9648, and a patch is 
available there now.

> libhadoop doesn't compile on Mac OS X
> -
>
> Key: HADOOP-10358
> URL: https://issues.apache.org/jira/browse/HADOOP-10358
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: native
> Environment: Mac OS X 10.8.5
> Oracle JDK 1.7.0_51
>Reporter: Ilya Maykov
>Priority: Minor
> Attachments: HADOOP-10358-fix-hadoop-common-native-on-os-x.patch
>
>
> The native component of hadoop-common (libhadoop.so on linux, libhadoop.dylib 
> on mac) fails to compile on Mac OS X. The problem is in  
> hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.c
>  at lines 76-78:
> [exec] 
> /Users/ilyam/src/github/apache/hadoop-common/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.c:77:26:
>  error: invalid operands to binary expression ('void' and 'int')
> [exec]  if(setnetgrent(cgroup) == 1) {
> [exec]  ~~~ ^  ~
> There are two problems in the code:
> 1) The #ifndef guard only checks for __FreeBSD__ but should check for either 
> one of __FreeBSD__ or __APPLE__. This is because Mac OS X inherits its 
> syscalls from FreeBSD rather than Linux, and thus the setnetgrent() syscall 
> returns void.
> 2) setnetgrentCalledFlag = 1 is set outside the #ifndef guard, but the 
> syscall is only invoked inside the guard. This means that on FreeBSD, 
> endnetgrent() can be called in the cleanup code without a corresponding 
> setnetgrent() invocation.
> I have a patch that fixes both issues (will attach in a bit). With this 
> patch, I'm able to compile libhadoop.dylib on Mac OS X, which in turn lets me 
> install native snappy, lzo, etc compressor libraries on my client. That lets 
> me run commands like 'hadoop fs -text somefile.lzo' from the macbook rather 
> than having to ssh to a linux box, etc.
> Note that this patch only fixes the native build of hadoop-common-project. 
> Some other components of hadoop still fail to build their native components, 
> but libhadoop.dylib is enough for the client.





[jira] [Updated] (HADOOP-10358) libhadoop doesn't compile on Mac OS X

2014-02-21 Thread Ilya Maykov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Maykov updated HADOOP-10358:
-

Description: 
The native component of hadoop-common (libhadoop.so on linux, libhadoop.dylib 
on mac) fails to compile on Mac OS X. The problem is in  
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.c
 at lines 76-78:

[exec] 
/Users/ilyam/src/github/apache/hadoop-common/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.c:77:26:
 error: invalid operands to binary expression ('void' and 'int')
[exec]  if(setnetgrent(cgroup) == 1) {
[exec]  ~~~ ^  ~

There are two problems in the code:
1) The #ifndef guard only checks for __FreeBSD__ but should check for either 
one of __FreeBSD__ or __APPLE__. This is because Mac OS X inherits its syscalls 
from FreeBSD rather than Linux, and thus the setnetgrent() syscall returns void.
2) setnetgrentCalledFlag = 1 is set outside the #ifndef guard, but the syscall 
is only invoked inside the guard. This means that on FreeBSD, endnetgrent() can 
be called in the cleanup code without a corresponding setnetgrent() invocation.

I have a patch that fixes both issues (will attach in a bit). With this patch, 
I'm able to compile libhadoop.dylib on Mac OS X, which in turn lets me install 
native snappy, lzo, etc compressor libraries on my client. That lets me run 
commands like 'hadoop fs -text somefile.lzo' from the macbook rather than 
having to ssh to a linux box, etc.

Note that this patch only fixes the native build of hadoop-common-project. Some 
other components of hadoop still fail to build their native components, but 
libhadoop.dylib is enough for the client.

  was:
The native component of hadoop-common (libhadoop.so on linux, libhadoop.dylib 
on mac) fails to compile on Mac OS X. The problem is in  
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.c
 at lines 76-78:

[exec] 
/Users/ilyam/src/github/apache/hadoop-common/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.c:77:26:
 error: invalid operands to binary expression ('void' and 'int')
[exec]  if(setnetgrent(cgroup) == 1) {
[exec]  ~~~ ^  ~

There are two problems in the code:
1) The #ifndef guard only checks for __FreeBSD__ but should check for either 
one of __FreeBSD__ or __APPLE__. This is because Mac OS X inherits its syscalls 
from FreeBSD rather than Linux, and thus the setnetgrent() syscall returns void.
2) setnetgrentCalledFlag = 1 is called outside the #ifndef guard, but the 
syscall is only called inside the guard. This means that on FreeBSD, 
endnetgrent() can be called in the cleanup code without a corresponding 
setnetgrent() invocation.

I have a patch that fixes both issues (will attach in a bit). With this patch, 
I'm able to compile libhadoop.dylib on Mac OS X, which in turn lets me install 
native snappy, lzo, etc compressor libraries on my client. That lets me run 
commands like 'hadoop fs -text somefile.lzo' from the macbook rather than 
having to ssh to a linux box, etc.

Note that this patch only fixes the native build of hadoop-common-project. Some 
other components of hadoop still fail to build their native components, but 
libhadoop.dylib is enough for the client.


> libhadoop doesn't compile on Mac OS X
> -
>
> Key: HADOOP-10358
> URL: https://issues.apache.org/jira/browse/HADOOP-10358
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: native
> Environment: Mac OS X 10.8.5
> Oracle JDK 1.7.0_51
>Reporter: Ilya Maykov
>Priority: Minor
> Attachments: HADOOP-10358-fix-hadoop-common-native-on-os-x.patch
>
>
> The native component of hadoop-common (libhadoop.so on linux, libhadoop.dylib 
> on mac) fails to compile on Mac OS X. The problem is in  
> hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.c
>  at lines 76-78:
> [exec] 
> /Users/ilyam/src/github/apache/hadoop-common/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.c:77:26:
>  error: invalid operands to binary expression ('void' and 'int')
> [exec]  if(setnetgrent(cgroup) == 1) {
> [exec]  ~~~ ^  ~
> There are two problems in the code:
> 1) The #ifndef guard only checks for __FreeBSD__ but should check for either 
> one of __FreeBSD__ or __APPLE__. This is because Mac OS X inherits its 
> syscalls from FreeBSD rather than Linux, and thus the setnetgrent() syscall 
> returns void.
> 2) setnetgrentCalledFlag = 1 is set outside the #ifndef guard, but the 
> syscall is only called inside the guard. This means that on FreeBSD, 
> endnetgrent() can be called in the cleanup code without a corresponding 
> setnetgrent() invocation.

[jira] [Updated] (HADOOP-10358) libhadoop doesn't compile on Mac OS X

2014-02-21 Thread Ilya Maykov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Maykov updated HADOOP-10358:
-

Attachment: HADOOP-10358-fix-hadoop-common-native-on-os-x.patch

Patch attached.

> libhadoop doesn't compile on Mac OS X
> -
>
> Key: HADOOP-10358
> URL: https://issues.apache.org/jira/browse/HADOOP-10358
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: native
> Environment: Mac OS X 10.8.5
> Oracle JDK 1.7.0_51
>Reporter: Ilya Maykov
>Priority: Minor
> Attachments: HADOOP-10358-fix-hadoop-common-native-on-os-x.patch
>
>
> The native component of hadoop-common (libhadoop.so on linux, libhadoop.dylib 
> on mac) fails to compile on Mac OS X. The problem is in  
> hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.c
>  at lines 76-78:
> [exec] 
> /Users/ilyam/src/github/apache/hadoop-common/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.c:77:26:
>  error: invalid operands to binary expression ('void' and 'int')
> [exec]  if(setnetgrent(cgroup) == 1) {
> [exec]  ~~~ ^  ~
> There are two problems in the code:
> 1) The #ifndef guard only checks for __FreeBSD__ but should check for either 
> one of __FreeBSD__ or __APPLE__. This is because Mac OS X inherits its 
> syscalls from FreeBSD rather than Linux, and thus the setnetgrent() syscall 
> returns void.
> 2) setnetgrentCalledFlag = 1 is called outside the #ifndef guard, but the 
> syscall is only called inside the guard. This means that on FreeBSD, 
> endnetgrent() can be called in the cleanup code without a corresponding 
> setnetgrent() invocation.
> I have a patch that fixes both issues (will attach in a bit). With this 
> patch, I'm able to compile libhadoop.dylib on Mac OS X, which in turn lets me 
> install native snappy, lzo, etc compressor libraries on my client. That lets 
> me run commands like 'hadoop fs -text somefile.lzo' from the macbook rather 
> than having to ssh to a linux box, etc.
> Note that this patch only fixes the native build of hadoop-common-project. 
> Some other components of hadoop still fail to build their native components, 
> but libhadoop.dylib is enough for the client.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HADOOP-10358) libhadoop doesn't compile on Mac OS X

2014-02-21 Thread Ilya Maykov (JIRA)
Ilya Maykov created HADOOP-10358:


 Summary: libhadoop doesn't compile on Mac OS X
 Key: HADOOP-10358
 URL: https://issues.apache.org/jira/browse/HADOOP-10358
 Project: Hadoop Common
  Issue Type: Improvement
  Components: native
 Environment: Mac OS X 10.8.5
Oracle JDK 1.7.0_51
Reporter: Ilya Maykov
Priority: Minor


The native component of hadoop-common (libhadoop.so on linux, libhadoop.dylib 
on mac) fails to compile on Mac OS X. The problem is in  
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.c
 at lines 76-78:

[exec] 
/Users/ilyam/src/github/apache/hadoop-common/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/security/JniBasedUnixGroupsNetgroupMapping.c:77:26:
 error: invalid operands to binary expression ('void' and 'int')
[exec]  if(setnetgrent(cgroup) == 1) {
[exec]  ~~~ ^  ~

There are two problems in the code:
1) The #ifndef guard only checks for __FreeBSD__ but should check for either 
one of __FreeBSD__ or __APPLE__. This is because Mac OS X inherits its syscalls 
from FreeBSD rather than Linux, and thus the setnetgrent() syscall returns void.
2) setnetgrentCalledFlag = 1 is called outside the #ifndef guard, but the 
syscall is only called inside the guard. This means that on FreeBSD, 
endnetgrent() can be called in the cleanup code without a corresponding 
setnetgrent() invocation.

I have a patch that fixes both issues (will attach in a bit). With this patch, 
I'm able to compile libhadoop.dylib on Mac OS X, which in turn lets me install 
native snappy, lzo, etc compressor libraries on my client. That lets me run 
commands like 'hadoop fs -text somefile.lzo' from the macbook rather than 
having to ssh to a linux box, etc.

Note that this patch only fixes the native build of hadoop-common-project. Some 
other components of hadoop still fail to build their native components, but 
libhadoop.dylib is enough for the client.





[jira] [Updated] (HADOOP-10285) Admin interface to swap callqueue at runtime

2014-02-21 Thread Chris Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Li updated HADOOP-10285:
--

Attachment: HADOOP-10285.patch

Re-uploading the patch to get QA to run again

> Admin interface to swap callqueue at runtime
> 
>
> Key: HADOOP-10285
> URL: https://issues.apache.org/jira/browse/HADOOP-10285
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chris Li
> Attachments: HADOOP-10285.patch, HADOOP-10285.patch
>
>
> We wish to swap the active call queue during runtime in order to do 
> performance tuning without restarting the namenode.
> This patch adds the ability to refresh the call queue on the namenode, 
> through dfsadmin -refreshCallQueue
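The runtime swap comes down to publishing a replacement queue through an atomic reference, so handler threads never observe a half-built queue. Hadoop's patch does this in Java (CallQueueManager); the sketch below restates the idea in C11, and every name in it is hypothetical.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdlib.h>

/* Hypothetical stand-in for a call queue implementation. */
typedef struct {
    const char *impl_name;  /* e.g. "fifo" */
    int capacity;
} call_queue;

/* The active queue is published through an atomic pointer, so reader
 * threads always load a fully constructed queue. */
static _Atomic(call_queue *) active_queue;

static call_queue *make_queue(const char *impl_name, int capacity) {
    call_queue *q = malloc(sizeof *q);
    q->impl_name = impl_name;
    q->capacity = capacity;
    return q;
}

/* Runtime swap: build the replacement first, then publish it in one
 * atomic exchange; the old queue is returned for draining/freeing. */
static call_queue *swap_queue(call_queue *replacement) {
    return atomic_exchange(&active_queue, replacement);
}
```

A -refreshCallQueue request would then amount to building the new queue from configuration and calling swap_queue(), draining the returned old queue before freeing it.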





[jira] [Updated] (HADOOP-10285) Admin interface to swap callqueue at runtime

2014-02-21 Thread Chris Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Li updated HADOOP-10285:
--

Summary: Admin interface to swap callqueue at runtime  (was: Admin 
interface for runtime queue swapping (Depends on subtask1))

> Admin interface to swap callqueue at runtime
> 
>
> Key: HADOOP-10285
> URL: https://issues.apache.org/jira/browse/HADOOP-10285
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chris Li
> Attachments: HADOOP-10285.patch
>
>
> We wish to swap the active call queue during runtime in order to do 
> performance tuning without restarting the namenode.
> This patch adds the ability to refresh the call queue on the namenode, 
> through dfsadmin -refreshCallQueue





[jira] [Commented] (HADOOP-10278) Refactor to make CallQueue pluggable

2014-02-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908820#comment-13908820
 ] 

Hudson commented on HADOOP-10278:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #5208 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5208/])
HADOOP-10278. Refactor to make CallQueue pluggable. (Contributed by Chris Li) 
(arp: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1570703)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallQueueManager.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestCallQueueManager.java


> Refactor to make CallQueue pluggable
> 
>
> Key: HADOOP-10278
> URL: https://issues.apache.org/jira/browse/HADOOP-10278
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Reporter: Chris Li
>Assignee: Chris Li
> Fix For: 3.0.0, 2.5.0
>
> Attachments: HADOOP-10278-atomicref-adapter.patch, 
> HADOOP-10278-atomicref-adapter.patch, HADOOP-10278-atomicref-adapter.patch, 
> HADOOP-10278-atomicref-adapter.patch, HADOOP-10278-atomicref-rwlock.patch, 
> HADOOP-10278-atomicref.patch, HADOOP-10278-atomicref.patch, 
> HADOOP-10278-atomicref.patch, HADOOP-10278-atomicref.patch, 
> HADOOP-10278.patch, HADOOP-10278.patch
>
>
> * Refactor CallQueue into an interface, base, and default implementation that 
> matches today's behavior
> * Make the call queue impl configurable, keyed on port so that we minimize 
> coupling





[jira] [Commented] (HADOOP-10278) Refactor to make CallQueue pluggable

2014-02-21 Thread Chris Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908809#comment-13908809
 ] 

Chris Li commented on HADOOP-10278:
---

Awesome, thanks all!

The next patch in the series is 
https://issues.apache.org/jira/browse/HADOOP-10285

It adds the admin interface to trigger a queue swap at runtime. Further 
discussion can take place there. Thanks

> Refactor to make CallQueue pluggable
> 
>
> Key: HADOOP-10278
> URL: https://issues.apache.org/jira/browse/HADOOP-10278
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Reporter: Chris Li
>Assignee: Chris Li
> Fix For: 3.0.0, 2.5.0
>
> Attachments: HADOOP-10278-atomicref-adapter.patch, 
> HADOOP-10278-atomicref-adapter.patch, HADOOP-10278-atomicref-adapter.patch, 
> HADOOP-10278-atomicref-adapter.patch, HADOOP-10278-atomicref-rwlock.patch, 
> HADOOP-10278-atomicref.patch, HADOOP-10278-atomicref.patch, 
> HADOOP-10278-atomicref.patch, HADOOP-10278-atomicref.patch, 
> HADOOP-10278.patch, HADOOP-10278.patch
>
>
> * Refactor CallQueue into an interface, base, and default implementation that 
> matches today's behavior
> * Make the call queue impl configurable, keyed on port so that we minimize 
> coupling





[jira] [Updated] (HADOOP-10278) Refactor to make CallQueue pluggable

2014-02-21 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-10278:
---

  Resolution: Fixed
   Fix Version/s: 2.5.0
  3.0.0
Target Version/s: 2.5.0
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I committed to trunk and branch-2.

Thanks for the contribution [~chrilisf] and thanks Hiroshi, Benoy, Jing and 
Daryn for reviewing.

> Refactor to make CallQueue pluggable
> 
>
> Key: HADOOP-10278
> URL: https://issues.apache.org/jira/browse/HADOOP-10278
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Reporter: Chris Li
>Assignee: Chris Li
> Fix For: 3.0.0, 2.5.0
>
> Attachments: HADOOP-10278-atomicref-adapter.patch, 
> HADOOP-10278-atomicref-adapter.patch, HADOOP-10278-atomicref-adapter.patch, 
> HADOOP-10278-atomicref-adapter.patch, HADOOP-10278-atomicref-rwlock.patch, 
> HADOOP-10278-atomicref.patch, HADOOP-10278-atomicref.patch, 
> HADOOP-10278-atomicref.patch, HADOOP-10278-atomicref.patch, 
> HADOOP-10278.patch, HADOOP-10278.patch
>
>
> * Refactor CallQueue into an interface, base, and default implementation that 
> matches today's behavior
> * Make the call queue impl configurable, keyed on port so that we minimize 
> coupling





[jira] [Commented] (HADOOP-10278) Refactor to make CallQueue pluggable

2014-02-21 Thread Chris Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908770#comment-13908770
 ] 

Chris Li commented on HADOOP-10278:
---

[~ikeda] I think that could be explored at a later time; for right now I think 
it's good enough that we don't degrade performance rather than seeking to 
improve it.

[~arpitagarwal] let me know if there's anything else you need before committing.


> Refactor to make CallQueue pluggable
> 
>
> Key: HADOOP-10278
> URL: https://issues.apache.org/jira/browse/HADOOP-10278
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Reporter: Chris Li
>Assignee: Chris Li
> Attachments: HADOOP-10278-atomicref-adapter.patch, 
> HADOOP-10278-atomicref-adapter.patch, HADOOP-10278-atomicref-adapter.patch, 
> HADOOP-10278-atomicref-adapter.patch, HADOOP-10278-atomicref-rwlock.patch, 
> HADOOP-10278-atomicref.patch, HADOOP-10278-atomicref.patch, 
> HADOOP-10278-atomicref.patch, HADOOP-10278-atomicref.patch, 
> HADOOP-10278.patch, HADOOP-10278.patch
>
>
> * Refactor CallQueue into an interface, base, and default implementation that 
> matches today's behavior
> * Make the call queue impl configurable, keyed on port so that we minimize 
> coupling





[jira] [Commented] (HADOOP-10354) TestWebHDFS fails after merge of HDFS-4685 to trunk

2014-02-21 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908704#comment-13908704
 ] 

Yongjun Zhang commented on HADOOP-10354:


Thanks for fixing the issue, Chris!


> TestWebHDFS fails after merge of HDFS-4685 to trunk
> ---
>
> Key: HADOOP-10354
> URL: https://issues.apache.org/jira/browse/HADOOP-10354
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0
> Environment: CentOS release 6.5 (Final)
> cpe:/o:centos:linux:6:GA
>Reporter: Yongjun Zhang
>Assignee: Chris Nauroth
> Fix For: 3.0.0
>
> Attachments: HADOOP-10354.1.patch, HADOOP-10354.2.patch
>
>
> After merging HDFS-4685 to trunk, some dev environments are experiencing a 
> failure to parse a permission string in TestWebHDFS.  The problem appears to 
> occur only in environments with security extensions enabled on the local file 
> system, such as Smack or ACLs.
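The failing parse is a strict length check on an `ls`-style symbolic permission string: with security extensions enabled, the string carries an extra trailing marker (e.g. "drwxrwxr-x."), making it 11 characters instead of 10. A hedged sketch of the parsing idea, in C rather than the Java of the actual fix (FsPermission/RawLocalFileSystem), with the helper name hypothetical:

```c
#include <assert.h>
#include <string.h>

/* Accept a 10-character symbolic permission string, optionally followed
 * by a single security-extension marker ('.' or '+') appended by ls on
 * file systems with ACLs or other extended attributes. */
static int valid_symbolic_permission(const char *s) {
    size_t len = strlen(s);
    if (len == 11 && (s[10] == '.' || s[10] == '+')) {
        len = 10;  /* ignore the trailing extension marker */
    }
    return len == 10;
}
```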





[jira] [Commented] (HADOOP-10354) TestWebHDFS fails after merge of HDFS-4685 to trunk

2014-02-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908672#comment-13908672
 ] 

Hudson commented on HADOOP-10354:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #5206 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5206/])
HADOOP-10354. TestWebHDFS fails after merge of HDFS-4685 to trunk. Contributed 
by Chris Nauroth. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1570655)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/RawLocalFileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/FsPermission.java


> TestWebHDFS fails after merge of HDFS-4685 to trunk
> ---
>
> Key: HADOOP-10354
> URL: https://issues.apache.org/jira/browse/HADOOP-10354
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0
> Environment: CentOS release 6.5 (Final)
> cpe:/o:centos:linux:6:GA
>Reporter: Yongjun Zhang
>Assignee: Chris Nauroth
> Fix For: 3.0.0
>
> Attachments: HADOOP-10354.1.patch, HADOOP-10354.2.patch
>
>
> After merging HDFS-4685 to trunk, some dev environments are experiencing a 
> failure to parse a permission string in TestWebHDFS.  The problem appears to 
> occur only in environments with security extensions enabled on the local file 
> system, such as Smack or ACLs.





[jira] [Updated] (HADOOP-10354) TestWebHDFS fails after merge of HDFS-4685 to trunk

2014-02-21 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-10354:
---

   Resolution: Fixed
Fix Version/s: 3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I committed this to trunk.  Thank you to Yongjun for reporting the issue.  
Thank you to both Yongjun and Jing for code reviews.

> TestWebHDFS fails after merge of HDFS-4685 to trunk
> ---
>
> Key: HADOOP-10354
> URL: https://issues.apache.org/jira/browse/HADOOP-10354
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0
> Environment: CentOS release 6.5 (Final)
> cpe:/o:centos:linux:6:GA
>Reporter: Yongjun Zhang
>Assignee: Chris Nauroth
> Fix For: 3.0.0
>
> Attachments: HADOOP-10354.1.patch, HADOOP-10354.2.patch
>
>
> After merging HDFS-4685 to trunk, some dev environments are experiencing a 
> failure to parse a permission string in TestWebHDFS.  The problem appears to 
> occur only in environments with security extensions enabled on the local file 
> system, such as Smack or ACLs.





[jira] [Commented] (HADOOP-10354) TestWebHDFS fails after merge of HDFS-4685 to trunk

2014-02-21 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908624#comment-13908624
 ] 

Jing Zhao commented on HADOOP-10354:


Yeah, the patch looks great to me. +1. Thanks Chris!

> TestWebHDFS fails after merge of HDFS-4685 to trunk
> ---
>
> Key: HADOOP-10354
> URL: https://issues.apache.org/jira/browse/HADOOP-10354
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0
> Environment: CentOS release 6.5 (Final)
> cpe:/o:centos:linux:6:GA
>Reporter: Yongjun Zhang
>Assignee: Chris Nauroth
> Attachments: HADOOP-10354.1.patch, HADOOP-10354.2.patch
>
>
> After merging HDFS-4685 to trunk, some dev environments are experiencing a 
> failure to parse a permission string in TestWebHDFS.  The problem appears to 
> occur only in environments with security extensions enabled on the local file 
> system, such as Smack or ACLs.





[jira] [Commented] (HADOOP-10354) TestWebHDFS fails after merge of HDFS-4685 to trunk

2014-02-21 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908604#comment-13908604
 ] 

Chris Nauroth commented on HADOOP-10354:


Thanks again, Yongjun.

[~jingzhao], does the new patch look good to you?


> TestWebHDFS fails after merge of HDFS-4685 to trunk
> ---
>
> Key: HADOOP-10354
> URL: https://issues.apache.org/jira/browse/HADOOP-10354
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0
> Environment: CentOS release 6.5 (Final)
> cpe:/o:centos:linux:6:GA
>Reporter: Yongjun Zhang
>Assignee: Chris Nauroth
> Attachments: HADOOP-10354.1.patch, HADOOP-10354.2.patch
>
>
> After merging HDFS-4685 to trunk, some dev environments are experiencing a 
> failure to parse a permission string in TestWebHDFS.  The problem appears to 
> occur only in environments with security extensions enabled on the local file 
> system, such as Smack or ACLs.





[jira] [Commented] (HADOOP-10357) Memory Leak in UserGroupInformation.doAs for JDBC Connection to Hive

2014-02-21 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908577#comment-13908577
 ] 

Larry McCay commented on HADOOP-10357:
--

Also, note that this is against the 1.2 codebase. 
The UgiInstrumentation class is no longer in use in recent releases.

I have not tried to reproduce the issue on the 2.x line yet, though.

> Memory Leak in UserGroupInformation.doAs for JDBC Connection to Hive
> 
>
> Key: HADOOP-10357
> URL: https://issues.apache.org/jira/browse/HADOOP-10357
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.2.0
>Reporter: Larry McCay
>
> When using UGI.doAs in order to make a connection there appears to be a 
> memory leak involving the UGI that is used for the doAs and the UGI held by 
> TUGIAssumingTransport.
> When using this approach to establish a JDBC connection in an environment 
> that serves many users and requests, the client side eventually runs out of 
> memory.





[jira] [Commented] (HADOOP-10357) Memory Leak in UserGroupInformation.doAs for JDBC Connection to Hive

2014-02-21 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908571#comment-13908571
 ] 

Larry McCay commented on HADOOP-10357:
--

Hi Daryn - Yes, the UGI is created and disposed of per connection. I believe 
the trouble comes from the static UgiInstrumentation field within the UGI, 
which is why we end up with at least one UgiInstrumentation instance per 
connection that was made and dropped. I guess it should be characterized as a 
UgiInstrumentation leak from within UGI? Anyway, I will follow up with a 
VisualVM screen capture showing the accumulation of UgiInstrumentation 
instances, as well as a test program that can be used to reproduce it.

> Memory Leak in UserGroupInformation.doAs for JDBC Connection to Hive
> 
>
> Key: HADOOP-10357
> URL: https://issues.apache.org/jira/browse/HADOOP-10357
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.2.0
>Reporter: Larry McCay
>
> When using UGI.doAs in order to make a connection there appears to be a 
> memory leak involving the UGI that is used for the doAs and the UGI held by 
> TUGIAssumingTransport.
> When using this approach to establish a JDBC connection in an environment 
> that serves many users and requests, the client side eventually runs out of 
> memory.





[jira] [Commented] (HADOOP-10354) TestWebHDFS fails after merge of HDFS-4685 to trunk

2014-02-21 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908560#comment-13908560
 ] 

Yongjun Zhang commented on HADOOP-10354:


Hi Chris, 

Thanks for addressing my comments. It looks good to me. I also ran the same 
test with your new patch for sanity, and it passes.

+1.


> TestWebHDFS fails after merge of HDFS-4685 to trunk
> ---
>
> Key: HADOOP-10354
> URL: https://issues.apache.org/jira/browse/HADOOP-10354
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0
> Environment: CentOS release 6.5 (Final)
> cpe:/o:centos:linux:6:GA
>Reporter: Yongjun Zhang
>Assignee: Chris Nauroth
> Attachments: HADOOP-10354.1.patch, HADOOP-10354.2.patch
>
>
> After merging HDFS-4685 to trunk, some dev environments are experiencing a 
> failure to parse a permission string in TestWebHDFS.  The problem appears to 
> occur only in environments with security extensions enabled on the local file 
> system, such as Smack or ACLs.





[jira] [Commented] (HADOOP-10345) Sanitize the the inputs (groups and hosts) for the proxyuser configuration

2014-02-21 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908556#comment-13908556
 ] 

Benoy Antony commented on HADOOP-10345:
---

Could someone please review this patch ?

> Sanitize the the inputs (groups and hosts) for the proxyuser configuration
> --
>
> Key: HADOOP-10345
> URL: https://issues.apache.org/jira/browse/HADOOP-10345
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Benoy Antony
>Assignee: Benoy Antony
>Priority: Minor
> Attachments: HADOOP-10345.patch, HADOOP-10345.patch
>
>
> Currently there is no input cleansing done on 
> hadoop.proxyuser..groups  and hadoop.proxyuser..hosts .
> It would be an improvement to trim each value and remove duplicate and empty 
> values during init/refresh.
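The proposed cleansing is straightforward to sketch. This is an illustrative C version only (the real change would live in Hadoop's Java configuration handling, and sanitize_values is a hypothetical name): split a comma-separated value, trim whitespace, and drop empty or duplicate entries.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Split `raw` on commas into `out`, trimming whitespace and dropping
 * empty and duplicate entries. Returns the number of entries kept. */
static int sanitize_values(const char *raw, char out[][64], int max_out) {
    int count = 0;
    char buf[256];
    snprintf(buf, sizeof buf, "%s", raw);
    for (char *tok = strtok(buf, ","); tok != NULL; tok = strtok(NULL, ",")) {
        /* Trim leading and trailing whitespace. */
        while (*tok == ' ' || *tok == '\t') tok++;
        char *end = tok + strlen(tok);
        while (end > tok && (end[-1] == ' ' || end[-1] == '\t')) *--end = '\0';
        if (*tok == '\0') continue;              /* drop empty values */
        int dup = 0;
        for (int i = 0; i < count; i++)          /* drop duplicates */
            if (strcmp(out[i], tok) == 0) { dup = 1; break; }
        if (!dup && count < max_out)
            snprintf(out[count++], 64, "%s", tok);
    }
    return count;
}
```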





[jira] [Commented] (HADOOP-10354) TestWebHDFS fails after merge of HDFS-4685 to trunk

2014-02-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908534#comment-13908534
 ] 

Hadoop QA commented on HADOOP-10354:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12630325/HADOOP-10354.2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3594//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3594//console

This message is automatically generated.

> TestWebHDFS fails after merge of HDFS-4685 to trunk
> ---
>
> Key: HADOOP-10354
> URL: https://issues.apache.org/jira/browse/HADOOP-10354
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0
> Environment: CentOS release 6.5 (Final)
> cpe:/o:centos:linux:6:GA
>Reporter: Yongjun Zhang
>Assignee: Chris Nauroth
> Attachments: HADOOP-10354.1.patch, HADOOP-10354.2.patch
>
>
> After merging HDFS-4685 to trunk, some dev environments are experiencing a 
> failure to parse a permission string in TestWebHDFS.  The problem appears to 
> occur only in environments with security extensions enabled on the local file 
> system, such as Smack or ACLs.





[jira] [Commented] (HADOOP-10357) Memory Leak in UserGroupInformation.doAs for JDBC Connection to Hive

2014-02-21 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908522#comment-13908522
 ] 

Daryn Sharp commented on HADOOP-10357:
--

Please provide more detail.  A new UGI is expected to be created for each 
connection, and the UGI should be GC'ed after the connection/request is 
complete.  What is still holding a reference to the "leaked" UGIs?

> Memory Leak in UserGroupInformation.doAs for JDBC Connection to Hive
> 
>
> Key: HADOOP-10357
> URL: https://issues.apache.org/jira/browse/HADOOP-10357
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.2.0
>Reporter: Larry McCay
>
> When using UGI.doAs in order to make a connection there appears to be a 
> memory leak involving the UGI that is used for the doAs and the UGI held by 
> TUGIAssumingTransport.
> When using this approach to establish a JDBC connection in an environment 
> that serves many users and requests, the client side eventually runs out of 
> memory.





[jira] [Updated] (HADOOP-10354) TestWebHDFS fails after merge of HDFS-4685 to trunk

2014-02-21 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-10354:
---

Attachment: HADOOP-10354.2.patch

Thanks for the review, Yongjun.  I'm attaching v2, which refactors to use a 
shared constant.  How does this look?

> TestWebHDFS fails after merge of HDFS-4685 to trunk
> ---
>
> Key: HADOOP-10354
> URL: https://issues.apache.org/jira/browse/HADOOP-10354
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0
> Environment: CentOS release 6.5 (Final)
> cpe:/o:centos:linux:6:GA
>Reporter: Yongjun Zhang
>Assignee: Chris Nauroth
> Attachments: HADOOP-10354.1.patch, HADOOP-10354.2.patch
>
>
> After merging HDFS-4685 to trunk, some dev environments are experiencing a 
> failure to parse a permission string in TestWebHDFS.  The problem appears to 
> occur only in environments with security extensions enabled on the local file 
> system, such as Smack or ACLs.





[jira] [Updated] (HADOOP-10354) TestWebHDFS fails after merge of HDFS-4685 to trunk

2014-02-21 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-10354:
---

Description: After merging HDFS-4685 to trunk, some dev environments are 
experiencing a failure to parse a permission string in TestWebHDFS.  The 
problem appears to occur only in environments with security extensions enabled 
on the local file system, such as Smack or ACLs.  (was: Hi,

I'm seeing a trunk branch test failure locally (CentOS 6) today, and I 
identified that it's this commit that caused the failure.

Author: Chris Nauroth   2014-02-19 10:34:52
Committer: Chris Nauroth   2014-02-19 10:34:52
Parent: 7215d12fdce727e1f4bce21a156b0505bd9ba72a (YARN-1666. Modified RM HA 
handling of include/exclude node-lists to be available across RM failover by 
making using of a remote configuration-provider. Contributed by Xuan Gong.)
Parent: 603ebb82b31e9300cfbf81ed5dd6110f1cb31b27 (HDFS-4685. Correct minor 
whitespace difference in FSImageSerialization.java in preparation for trunk 
merge.)
Child:  ef8a5bceb7f3ce34d08a5968777effd40e0b1d0f (YARN-1171. Add default queue 
properties to Fair Scheduler documentation (Naren Koneru via Sandy Ryza))
Branches: remotes/apache/HDFS-5535, remotes/apache/trunk, testv10, testv3, 
testv4, testv7
Follows: testv5
Precedes: 

Merge HDFS-4685 to trunk.

git-svn-id: https://svn.apache.org/repos/asf/hadoop/common/trunk@1569870 
13f79535-47bb-0310-9956-ffa450edef68


I'm not sure whether other folks are seeing the same, or whether it's related 
to my environment. But prior to this change, I didn't see this problem.

The failures are in TestWebHDFS:

Running org.apache.hadoop.hdfs.web.TestWebHDFS
Tests run: 5, Failures: 0, Errors: 4, Skipped: 0, Time elapsed: 3.687 sec <<< 
FAILURE! - in org.apache.hadoop.hdfs.web.TestWebHDFS
testLargeDirectory(org.apache.hadoop.hdfs.web.TestWebHDFS)  Time elapsed: 2.478 
sec  <<< ERROR!
java.lang.IllegalArgumentException: length != 
10(unixSymbolicPermission=drwxrwxr-x.)
at 
org.apache.hadoop.fs.permission.FsPermission.valueOf(FsPermission.java:323)
at 
org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:572)
at 
org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:540)
at 
org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:129)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:146)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:1835)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:1877)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1859)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1764)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1243)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:699)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:359)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:340)
at 
org.apache.hadoop.hdfs.web.TestWebHDFS.testLargeDirectory(TestWebHDFS.java:229)

testNamenodeRestart(org.apache.hadoop.hdfs.web.TestWebHDFS)  Time elapsed: 
0.342 sec  <<< ERROR!
java.lang.IllegalArgumentException: length != 
10(unixSymbolicPermission=drwxrwxr-x.)
at 
org.apache.hadoop.fs.permission.FsPermission.valueOf(FsPermission.java:323)
at 
org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:572)
at 
org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:540)
at 
org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:129)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:146)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:1835)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:1877)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1859)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1764)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1243)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:699)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:359)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:340)
at 
org.apache.hadoop.hdfs.TestDFSClientRetries.namenodeRestartTest(TestDFSClientRetries.java:88
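The stack traces above all fail in FsPermission.valueOf on an 11-character mode 
string ({{drwxrwxr-x.}}): on a local file system with security extensions, 
ls-style permission strings carry a trailing '.' (security context) or '+' 
(ACLs), which a strict 10-character parser rejects. A minimal, self-contained 
sketch of a tolerant parse (hypothetical helper, not the actual Hadoop fix):

```java
public class PermissionParse {
    // Convert a symbolic mode like "drwxrwxr-x." to its octal permission bits.
    // Hypothetical sketch: strips a trailing '+' (ACL) or '.' (security
    // context) marker before parsing, instead of rejecting 11-char strings.
    // (Setuid/setgid/sticky letters are ignored here for brevity.)
    static int parseSymbolic(String s) {
        if (s.length() == 11 && (s.endsWith("+") || s.endsWith("."))) {
            s = s.substring(0, 10);   // drop the security-extension marker
        }
        if (s.length() != 10) {
            throw new IllegalArgumentException("length != 10: " + s);
        }
        int mode = 0;
        for (int i = 1; i < 10; i++) {         // skip the file-type char
            mode = (mode << 1) | (s.charAt(i) == '-' ? 0 : 1);
        }
        return mode;
    }

    public static void main(String[] args) {
        // "drwxrwxr-x." parses as mode 0775 once the trailing dot is stripped
        System.out.println(Integer.toOctalString(parseSymbolic("drwxrwxr-x.")));
    }
}
```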

[jira] [Commented] (HADOOP-10357) Memory Leak in UserGroupInformation.doAs for JDBC Connection to Hive

2014-02-21 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908453#comment-13908453
 ] 

Larry McCay commented on HADOOP-10357:
--

This has been verified with loginUserFromKeytabAndReturnUGI as well as with a 
patch for getUGIFromSubject from HADOOP-10342.
Using VisualVM, we can identify UgiInstrumentation as a source of some of this 
leak; we end up with an instance for every connection established and 
subsequently closed.

> Memory Leak in UserGroupInformation.doAs for JDBC Connection to Hive
> 
>
> Key: HADOOP-10357
> URL: https://issues.apache.org/jira/browse/HADOOP-10357
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.2.0
>Reporter: Larry McCay
>
> When using UGI.doAs in order to make a connection, there appears to be a 
> memory leak involving the UGI that is used for the doAs and the UGI held by 
> TUGIAssumingTransport.
> When using this approach to establish a JDBC connection in an environment 
> that will serve many users and requests, the client side eventually runs out 
> of memory.
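VisualVM pointing at UgiInstrumentation suggests the classic 
register-but-never-unregister leak: each UGI built for a doAs registers a 
metrics object with a process-wide registry, and closing the connection drops 
the UGI but not the registry entry. A self-contained sketch of the pattern 
(simplified stand-ins, not the actual Hadoop classes):

```java
import java.util.ArrayList;
import java.util.List;

public class UgiLeakSketch {
    // Stand-in for a process-wide metrics registry: entries added here are
    // never removed, so they stay reachable for the life of the JVM.
    static final List<Object> METRICS_REGISTRY = new ArrayList<>();

    // Stand-in for a per-UGI instrumentation object that registers itself.
    static class Instrumentation {
        Instrumentation() {
            METRICS_REGISTRY.add(this);   // registered on construction ...
        }
        // ... but nothing ever unregisters it on connection close.
    }

    // Stand-in for creating a fresh UGI per JDBC connection.
    static void openAndCloseConnection() {
        Instrumentation i = new Instrumentation();
        // connection work happens here; the local reference then goes away,
        // but the registry still holds the instrumentation object
    }

    public static void main(String[] args) {
        for (int n = 0; n < 1000; n++) {
            openAndCloseConnection();
        }
        // One retained instrumentation object per connection ever opened.
        System.out.println(METRICS_REGISTRY.size());
    }
}
```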





[jira] [Created] (HADOOP-10357) Memory Leak in UserGroupInformation.doAs for JDBC Connection to Hive

2014-02-21 Thread Larry McCay (JIRA)
Larry McCay created HADOOP-10357:


 Summary: Memory Leak in UserGroupInformation.doAs for JDBC 
Connection to Hive
 Key: HADOOP-10357
 URL: https://issues.apache.org/jira/browse/HADOOP-10357
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.2.0
Reporter: Larry McCay


When using UGI.doAs in order to make a connection, there appears to be a memory 
leak involving the UGI that is used for the doAs and the UGI held by 
TUGIAssumingTransport.

When using this approach to establish a JDBC connection in an environment 
that will serve many users and requests, the client side eventually runs out of 
memory.





[jira] [Commented] (HADOOP-10328) loadGenerator exit code is not reliable

2014-02-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908391#comment-13908391
 ] 

Hudson commented on HADOOP-10328:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #1705 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1705/])
HADOOP-10328. loadGenerator exit code is not reliable. Contributed by Haohui 
Mai. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1570304)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/loadGenerator/LoadGenerator.java


> loadGenerator exit code is not reliable
> ---
>
> Key: HADOOP-10328
> URL: https://issues.apache.org/jira/browse/HADOOP-10328
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.2.0
>Reporter: Arpit Gupta
>Assignee: Haohui Mai
> Fix For: 3.0.0, 2.4.0
>
> Attachments: HADOOP-10328.000.patch, HADOOP-10328.001.patch, 
> HADOOP-10328.002.patch
>
>
> LoadGenerator exit code is determined using the following logic
> {code}
> int exitCode = init(args);
> if (exitCode != 0) {
>   return exitCode;
> }
> {code}
> At the end of the run we just return the exitCode. So essentially, if your 
> arguments are correct, you will always get 0 back.
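The quoted snippet only checks init()'s exit code; failures during the load run 
itself are never reflected in the return value. A self-contained sketch of the 
propagation pattern the fix needs (hypothetical stand-ins for the generator's 
phases, not the actual patch):

```java
public class ExitCodeSketch {
    // Stand-in for argument parsing: non-zero on bad arguments.
    static int init(String[] args) {
        return args.length == 0 ? -1 : 0;
    }

    // Stand-in for the load run: the run phase reports its own status
    // instead of the caller unconditionally reusing init()'s 0.
    static int runLoad(boolean somethingFailed) {
        return somethingFailed ? -1 : 0;
    }

    static int run(String[] args, boolean somethingFailed) {
        int exitCode = init(args);
        if (exitCode != 0) {
            return exitCode;                 // bad arguments
        }
        return runLoad(somethingFailed);     // propagate run-time failures too
    }

    public static void main(String[] args) {
        // Valid arguments but a failed run must still exit non-zero.
        System.out.println(run(new String[] {"-ok"}, true));
    }
}
```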





[jira] [Commented] (HADOOP-10348) Deprecate hadoop.ssl.configuration in branch-2, and remove it in trunk

2014-02-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908395#comment-13908395
 ] 

Hudson commented on HADOOP-10348:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #1705 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1705/])
move HADOOP-10348 to branch 2.4.0 section in CHANGES.txt (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1570296)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
HADOOP-10348. Deprecate hadoop.ssl.configuration in branch-2, and remove it in 
trunk. Contributed by Haohui Mai. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1570295)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHttpPolicy.java


> Deprecate hadoop.ssl.configuration in branch-2, and remove it in trunk
> --
>
> Key: HADOOP-10348
> URL: https://issues.apache.org/jira/browse/HADOOP-10348
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 3.0.0, 2.4.0
>
> Attachments: HADOOP-10348-branch2.000.patch, HADOOP-10348.000.patch
>
>
> As discussed in
> https://issues.apache.org/jira/browse/HADOOP-8581?focusedCommentId=13786567&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13786567
> The configuration hadoop.ssl.enabled should be deprecated. We need to mark 
> it as deprecated in CommonConfigurationKeysPublic.





[jira] [Commented] (HADOOP-10352) Recursive setfacl erroneously attempts to apply default ACL to files.

2014-02-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908388#comment-13908388
 ] 

Hudson commented on HADOOP-10352:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #1705 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1705/])
HADOOP-10352. Recursive setfacl erroneously attempts to apply default ACL to 
files. Contributed by Chris Nauroth. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1570466)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/AclCommands.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testAclCLI.xml


> Recursive setfacl erroneously attempts to apply default ACL to files.
> -
>
> Key: HADOOP-10352
> URL: https://issues.apache.org/jira/browse/HADOOP-10352
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 3.0.0
>
> Attachments: HADOOP-10352.1.patch
>
>
> When calling setfacl -R with an ACL spec containing default ACL entries, the 
> command can fail if there is a mix of directories and files underneath the 
> specified path.  It attempts to set the default ACL entries on the files, but 
> only directories can have a default ACL.
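The needed behavior when applying an ACL spec recursively: default entries must 
be dropped for files and kept for directories. A self-contained sketch of that 
filtering (hypothetical helper, not the actual AclCommands change):

```java
import java.util.ArrayList;
import java.util.List;

public class SetfaclSketch {
    // Given the full ACL spec, keep only the entries that are legal for the
    // current path: files cannot carry "default:" entries, directories can.
    // Entries are represented as plain spec strings for this sketch.
    static List<String> entriesFor(List<String> spec, boolean isDirectory) {
        List<String> out = new ArrayList<>();
        for (String entry : spec) {
            if (isDirectory || !entry.startsWith("default:")) {
                out.add(entry);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> spec = List.of("user:alice:rwx", "default:user:alice:rwx");
        // A file gets only the access entry; a directory gets both.
        System.out.println(entriesFor(spec, false));
        System.out.println(entriesFor(spec, true));
    }
}
```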





[jira] [Commented] (HADOOP-10355) TestLoadGenerator#testLoadGenerator fails

2014-02-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908386#comment-13908386
 ] 

Hudson commented on HADOOP-10355:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #1705 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1705/])
HADOOP-10355. Fix TestLoadGenerator#testLoadGenerator. Contributed by Haohui 
Mai. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1570460)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/loadGenerator/LoadGenerator.java


> TestLoadGenerator#testLoadGenerator fails
> -
>
> Key: HADOOP-10355
> URL: https://issues.apache.org/jira/browse/HADOOP-10355
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira AJISAKA
>Assignee: Haohui Mai
> Fix For: 2.4.0
>
> Attachments: HDFS-5991.000.patch, 
> org.apache.hadoop.fs.loadGenerator.TestLoadGenerator-output.txt, 
> org.apache.hadoop.fs.loadGenerator.TestLoadGenerator.txt
>
>
> From https://builds.apache.org/job/PreCommit-HDFS-Build/6194//testReport/
> {code}
> java.io.IOException: Stream closed
>   at java.io.BufferedReader.ensureOpen(BufferedReader.java:97)
>   at java.io.BufferedReader.readLine(BufferedReader.java:292)
>   at java.io.BufferedReader.readLine(BufferedReader.java:362)
>   at 
> org.apache.hadoop.fs.loadGenerator.LoadGenerator.loadScriptFile(LoadGenerator.java:511)
>   at 
> org.apache.hadoop.fs.loadGenerator.LoadGenerator.init(LoadGenerator.java:418)
>   at 
> org.apache.hadoop.fs.loadGenerator.LoadGenerator.run(LoadGenerator.java:324)
>   at 
> org.apache.hadoop.fs.loadGenerator.TestLoadGenerator.testLoadGenerator(TestLoadGenerator.java:231)
> {code}
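The "Stream closed" failure is the classic symptom of calling readLine() on a 
reader whose underlying stream was already closed, e.g. a reader reused across 
runs. A self-contained sketch of the symptom and the usual fix pattern (a 
fresh, try-with-resources-scoped reader per parse; simplified, not the actual 
LoadGenerator change):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class StreamClosedSketch {
    public static void main(String[] args) throws IOException {
        // Reusing one reader after it was closed fails exactly like the
        // test above: BufferedReader.ensureOpen throws "Stream closed".
        BufferedReader shared = new BufferedReader(new StringReader("a\nb"));
        shared.close();
        try {
            shared.readLine();
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }

        // Fix pattern: open a fresh reader for each parse, scoped with
        // try-with-resources so it is closed exactly once.
        try (BufferedReader fresh =
                 new BufferedReader(new StringReader("a\nb"))) {
            System.out.println(fresh.readLine()); // first line of the script
        }
    }
}
```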





[jira] [Updated] (HADOOP-5787) Allow HADOOP_ROOT_LOGGER to be configured via conf/hadoop-env.sh

2014-02-21 Thread Mark Paluch (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Paluch updated HADOOP-5787:


Affects Version/s: 0.23.10
   Status: Patch Available  (was: Open)

see https://gist.github.com/mp911de/9130280#file-hadoop-daemon-diff

> Allow HADOOP_ROOT_LOGGER to be configured via conf/hadoop-env.sh
> 
>
> Key: HADOOP-5787
> URL: https://issues.apache.org/jira/browse/HADOOP-5787
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 0.23.10, 0.20.0
>Reporter: Arun C Murthy
>Assignee: Arun C Murthy
>
> Currently it's set in bin/hadoop-daemon.sh... we should allow it to be 
> specified in conf/hadoop-env.sh
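The linked gist isn't reproduced here, but the usual way to make 
HADOOP_ROOT_LOGGER overridable from conf/hadoop-env.sh is a default-if-unset 
expansion in the daemon script, so an exported value wins and the historical 
default applies only when nothing is set. A sketch of that pattern (assumed, 
not the exact patch contents):

```shell
# Fall back to the historical default only when the variable is unset,
# so a value exported from conf/hadoop-env.sh (or the caller) wins.
unset HADOOP_ROOT_LOGGER
export HADOOP_ROOT_LOGGER="${HADOOP_ROOT_LOGGER:-INFO,console}"
echo "$HADOOP_ROOT_LOGGER"          # INFO,console

# A value set earlier (e.g. in hadoop-env.sh) is preserved:
HADOOP_ROOT_LOGGER="DEBUG,DRFA"
export HADOOP_ROOT_LOGGER="${HADOOP_ROOT_LOGGER:-INFO,console}"
echo "$HADOOP_ROOT_LOGGER"          # DEBUG,DRFA
```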





[jira] [Commented] (HADOOP-10348) Deprecate hadoop.ssl.configuration in branch-2, and remove it in trunk

2014-02-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908317#comment-13908317
 ] 

Hudson commented on HADOOP-10348:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1680 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1680/])
move HADOOP-10348 to branch 2.4.0 section in CHANGES.txt (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1570296)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
HADOOP-10348. Deprecate hadoop.ssl.configuration in branch-2, and remove it in 
trunk. Contributed by Haohui Mai. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1570295)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHttpPolicy.java


> Deprecate hadoop.ssl.configuration in branch-2, and remove it in trunk
> --
>
> Key: HADOOP-10348
> URL: https://issues.apache.org/jira/browse/HADOOP-10348
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 3.0.0, 2.4.0
>
> Attachments: HADOOP-10348-branch2.000.patch, HADOOP-10348.000.patch
>
>
> As discussed in
> https://issues.apache.org/jira/browse/HADOOP-8581?focusedCommentId=13786567&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13786567
> The configuration hadoop.ssl.enabled should be deprecated. We need to mark 
> it as deprecated in CommonConfigurationKeysPublic.





[jira] [Commented] (HADOOP-10328) loadGenerator exit code is not reliable

2014-02-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908313#comment-13908313
 ] 

Hudson commented on HADOOP-10328:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1680 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1680/])
HADOOP-10328. loadGenerator exit code is not reliable. Contributed by Haohui 
Mai. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1570304)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/loadGenerator/LoadGenerator.java


> loadGenerator exit code is not reliable
> ---
>
> Key: HADOOP-10328
> URL: https://issues.apache.org/jira/browse/HADOOP-10328
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.2.0
>Reporter: Arpit Gupta
>Assignee: Haohui Mai
> Fix For: 3.0.0, 2.4.0
>
> Attachments: HADOOP-10328.000.patch, HADOOP-10328.001.patch, 
> HADOOP-10328.002.patch
>
>
> LoadGenerator exit code is determined using the following logic
> {code}
> int exitCode = init(args);
> if (exitCode != 0) {
>   return exitCode;
> }
> {code}
> At the end of the run we just return the exitCode. So essentially, if your 
> arguments are correct, you will always get 0 back.





[jira] [Commented] (HADOOP-10355) TestLoadGenerator#testLoadGenerator fails

2014-02-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908308#comment-13908308
 ] 

Hudson commented on HADOOP-10355:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1680 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1680/])
HADOOP-10355. Fix TestLoadGenerator#testLoadGenerator. Contributed by Haohui 
Mai. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1570460)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/loadGenerator/LoadGenerator.java


> TestLoadGenerator#testLoadGenerator fails
> -
>
> Key: HADOOP-10355
> URL: https://issues.apache.org/jira/browse/HADOOP-10355
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira AJISAKA
>Assignee: Haohui Mai
> Fix For: 2.4.0
>
> Attachments: HDFS-5991.000.patch, 
> org.apache.hadoop.fs.loadGenerator.TestLoadGenerator-output.txt, 
> org.apache.hadoop.fs.loadGenerator.TestLoadGenerator.txt
>
>
> From https://builds.apache.org/job/PreCommit-HDFS-Build/6194//testReport/
> {code}
> java.io.IOException: Stream closed
>   at java.io.BufferedReader.ensureOpen(BufferedReader.java:97)
>   at java.io.BufferedReader.readLine(BufferedReader.java:292)
>   at java.io.BufferedReader.readLine(BufferedReader.java:362)
>   at 
> org.apache.hadoop.fs.loadGenerator.LoadGenerator.loadScriptFile(LoadGenerator.java:511)
>   at 
> org.apache.hadoop.fs.loadGenerator.LoadGenerator.init(LoadGenerator.java:418)
>   at 
> org.apache.hadoop.fs.loadGenerator.LoadGenerator.run(LoadGenerator.java:324)
>   at 
> org.apache.hadoop.fs.loadGenerator.TestLoadGenerator.testLoadGenerator(TestLoadGenerator.java:231)
> {code}





[jira] [Commented] (HADOOP-10352) Recursive setfacl erroneously attempts to apply default ACL to files.

2014-02-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908310#comment-13908310
 ] 

Hudson commented on HADOOP-10352:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1680 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1680/])
HADOOP-10352. Recursive setfacl erroneously attempts to apply default ACL to 
files. Contributed by Chris Nauroth. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1570466)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/AclCommands.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testAclCLI.xml


> Recursive setfacl erroneously attempts to apply default ACL to files.
> -
>
> Key: HADOOP-10352
> URL: https://issues.apache.org/jira/browse/HADOOP-10352
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 3.0.0
>
> Attachments: HADOOP-10352.1.patch
>
>
> When calling setfacl -R with an ACL spec containing default ACL entries, the 
> command can fail if there is a mix of directories and files underneath the 
> specified path.  It attempts to set the default ACL entries on the files, but 
> only directories can have a default ACL.





[jira] [Commented] (HADOOP-9902) Shell script rewrite

2014-02-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908228#comment-13908228
 ] 

Steve Loughran commented on HADOOP-9902:


While we are at it, HADOOP-9044 is an (unapplied, and possibly needing a 
rebuild) entry point to locate a class/resource on the classpath and print it 
out, which is also useful for diagnostics. After this patch goes in, we could 
apply that and make it a named operation.

> Shell script rewrite
> 
>
> Key: HADOOP-9902
> URL: https://issues.apache.org/jira/browse/HADOOP-9902
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0, 2.1.1-beta
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-9902.txt, hadoop-9902-1.patch, more-info.txt
>
>
> Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.





[jira] [Commented] (HADOOP-9902) Shell script rewrite

2014-02-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908224#comment-13908224
 ] 

Steve Loughran commented on HADOOP-9902:


It's probably too late to change {{hadoop classpath}}. We could perhaps include 
some {{--strict}} param that restricts things.



> Shell script rewrite
> 
>
> Key: HADOOP-9902
> URL: https://issues.apache.org/jira/browse/HADOOP-9902
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0, 2.1.1-beta
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-9902.txt, hadoop-9902-1.patch, more-info.txt
>
>
> Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.





[jira] [Commented] (HADOOP-10352) Recursive setfacl erroneously attempts to apply default ACL to files.

2014-02-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908171#comment-13908171
 ] 

Hudson commented on HADOOP-10352:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #488 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/488/])
HADOOP-10352. Recursive setfacl erroneously attempts to apply default ACL to 
files. Contributed by Chris Nauroth. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1570466)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/AclCommands.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testAclCLI.xml


> Recursive setfacl erroneously attempts to apply default ACL to files.
> -
>
> Key: HADOOP-10352
> URL: https://issues.apache.org/jira/browse/HADOOP-10352
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Fix For: 3.0.0
>
> Attachments: HADOOP-10352.1.patch
>
>
> When calling setfacl -R with an ACL spec containing default ACL entries, the 
> command can fail if there is a mix of directories and files underneath the 
> specified path.  It attempts to set the default ACL entries on the files, but 
> only directories can have a default ACL.





[jira] [Commented] (HADOOP-10328) loadGenerator exit code is not reliable

2014-02-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908174#comment-13908174
 ] 

Hudson commented on HADOOP-10328:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #488 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/488/])
HADOOP-10328. loadGenerator exit code is not reliable. Contributed by Haohui 
Mai. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1570304)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/loadGenerator/LoadGenerator.java


> loadGenerator exit code is not reliable
> ---
>
> Key: HADOOP-10328
> URL: https://issues.apache.org/jira/browse/HADOOP-10328
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.2.0
>Reporter: Arpit Gupta
>Assignee: Haohui Mai
> Fix For: 3.0.0, 2.4.0
>
> Attachments: HADOOP-10328.000.patch, HADOOP-10328.001.patch, 
> HADOOP-10328.002.patch
>
>
> LoadGenerator exit code is determined using the following logic
> {code}
> int exitCode = init(args);
> if (exitCode != 0) {
>   return exitCode;
> }
> {code}
> At the end of the run we just return the exitCode. So essentially, if your 
> arguments are correct, you will always get 0 back.





[jira] [Commented] (HADOOP-10348) Deprecate hadoop.ssl.configuration in branch-2, and remove it in trunk

2014-02-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908178#comment-13908178
 ] 

Hudson commented on HADOOP-10348:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #488 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/488/])
move HADOOP-10348 to branch 2.4.0 section in CHANGES.txt (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1570296)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
HADOOP-10348. Deprecate hadoop.ssl.configuration in branch-2, and remove it in 
trunk. Contributed by Haohui Mai. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1570295)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestHttpPolicy.java


> Deprecate hadoop.ssl.configuration in branch-2, and remove it in trunk
> --
>
> Key: HADOOP-10348
> URL: https://issues.apache.org/jira/browse/HADOOP-10348
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 3.0.0, 2.4.0
>
> Attachments: HADOOP-10348-branch2.000.patch, HADOOP-10348.000.patch
>
>
> As discussed in
> https://issues.apache.org/jira/browse/HADOOP-8581?focusedCommentId=13786567&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13786567
> The configuration hadoop.ssl.enabled should be deprecated. We need to mark 
> it as deprecated in CommonConfigurationKeysPublic.





[jira] [Commented] (HADOOP-10355) TestLoadGenerator#testLoadGenerator fails

2014-02-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908169#comment-13908169
 ] 

Hudson commented on HADOOP-10355:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #488 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/488/])
HADOOP-10355. Fix TestLoadGenerator#testLoadGenerator. Contributed by Haohui 
Mai. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1570460)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/loadGenerator/LoadGenerator.java


> TestLoadGenerator#testLoadGenerator fails
> -
>
> Key: HADOOP-10355
> URL: https://issues.apache.org/jira/browse/HADOOP-10355
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira AJISAKA
>Assignee: Haohui Mai
> Fix For: 2.4.0
>
> Attachments: HDFS-5991.000.patch, 
> org.apache.hadoop.fs.loadGenerator.TestLoadGenerator-output.txt, 
> org.apache.hadoop.fs.loadGenerator.TestLoadGenerator.txt
>
>
> From https://builds.apache.org/job/PreCommit-HDFS-Build/6194//testReport/
> {code}
> java.io.IOException: Stream closed
>   at java.io.BufferedReader.ensureOpen(BufferedReader.java:97)
>   at java.io.BufferedReader.readLine(BufferedReader.java:292)
>   at java.io.BufferedReader.readLine(BufferedReader.java:362)
>   at 
> org.apache.hadoop.fs.loadGenerator.LoadGenerator.loadScriptFile(LoadGenerator.java:511)
>   at 
> org.apache.hadoop.fs.loadGenerator.LoadGenerator.init(LoadGenerator.java:418)
>   at 
> org.apache.hadoop.fs.loadGenerator.LoadGenerator.run(LoadGenerator.java:324)
>   at 
> org.apache.hadoop.fs.loadGenerator.TestLoadGenerator.testLoadGenerator(TestLoadGenerator.java:231)
> {code}


