[jira] [Updated] (HADOOP-10248) Property name should be included in the exception where property value is null

2014-01-23 Thread Uma Maheswara Rao G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G updated HADOOP-10248:
-

   Resolution: Fixed
Fix Version/s: 2.3.0
   3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks, Ted, for filing the issue.
Thanks a lot, Akira, for the patch. I have just committed this to trunk and 
branch-2.

> Property name should be included in the exception where property value is null
> --
>
> Key: HADOOP-10248
> URL: https://issues.apache.org/jira/browse/HADOOP-10248
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Ted Yu
>Assignee: Akira AJISAKA
>  Labels: newbie
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HADOOP-10248.2.patch, HADOOP-10248.patch
>
>
> I saw the following when trying to determine startup failure:
> {code}
> 2014-01-21 06:07:17,871 FATAL 
> [master:h2-centos6-uns-1390276854-hbase-10:6] master.HMaster: Unhandled 
> exception. Starting shutdown.
> java.lang.IllegalArgumentException: Property value must not be null
> at com.google.common.base.Preconditions.checkArgument(Preconditions.java:92)
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:958)
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:940)
> at org.apache.hadoop.http.HttpServer.initializeWebServer(HttpServer.java:510)
> at org.apache.hadoop.http.HttpServer.<init>(HttpServer.java:470)
> at org.apache.hadoop.http.HttpServer.<init>(HttpServer.java:458)
> at org.apache.hadoop.http.HttpServer.<init>(HttpServer.java:412)
> at org.apache.hadoop.hbase.util.InfoServer.<init>(InfoServer.java:59)
> {code}
> Property name should be included in the following exception:
> {code}
> Preconditions.checkArgument(
> value != null,
> "Property value must not be null");
> {code}
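
For illustration, a minimal sketch of the suggested change (hedged; the 
committed patch may differ). It assumes the surrounding 
{{Configuration.set(String name, String value)}} context, where {{name}} is the 
key being set:

{code}
// Sketch only: include the offending key in the message so the failing
// property is identifiable from the stack trace alone.
Preconditions.checkArgument(
    name != null,
    "Property name must not be null");
Preconditions.checkArgument(
    value != null,
    "The value of property " + name + " must not be null");
{code}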



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10255) Copy the HttpServer in 2.2 back to branch-2

2014-01-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13880728#comment-13880728
 ] 

stack commented on HADOOP-10255:


+1 pending what hadoopqa says.

I can test a branch-2 patch when you put it up.

> Copy the HttpServer in 2.2 back to branch-2
> ---
>
> Key: HADOOP-10255
> URL: https://issues.apache.org/jira/browse/HADOOP-10255
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>Priority: Blocker
> Fix For: 2.4.0
>
> Attachments: HADOOP-10255.000.patch, HADOOP-10255.001.patch, 
> HADOOP-10255.002.patch, HADOOP-10255.003.patch
>
>
> As suggested in HADOOP-10253, HBase needs a temporary copy of {{HttpServer}} 
> from branch-2.2 to make sure it works across multiple 2.x releases.
> This patch renames the current {{HttpServer}} to {{HttpServer2}} and brings 
> the {{HttpServer}} from branch-2.2 back into the repository.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10274) Lower the logging level from ERROR to WARN for UGI.doAs method

2014-01-23 Thread takeshi.miao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

takeshi.miao updated HADOOP-10274:
--

Attachment: HADOOP-10274-trunk-v01.patch

add a patch for review

> Lower the logging level from ERROR to WARN for UGI.doAs method
> --
>
> Key: HADOOP-10274
> URL: https://issues.apache.org/jira/browse/HADOOP-10274
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 1.0.4
> Environment: hadoop-1.0.4, hbase-0.94.16, 
> krb5-server-1.6.1-31.el5_3.3, CentOS release 5.3 (Final)
>Reporter: takeshi.miao
>Priority: Minor
> Attachments: HADOOP-10274-trunk-v01.patch
>
>
> Recently we got the error msg "Request is a replay (34) - PROCESS_TGS" while 
> using the HBase client API to put data into HBase-0.94.16 with 
> krb5-1.6.1 enabled. The related messages follow:
> {code}
> [2014-01-15 
> 09:40:38,452][hbase-tablepool-1-thread-3][ERROR][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1124)):
>  PriviledgedActionException as:takeshi_miao@LAB 
> cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Request is a 
> replay (34) - PROCESS_TGS)]
> [2014-01-15 
> 09:40:38,453][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.logPriviledgedAction(UserGroupInformation.java:1143)):
>  PriviledgedAction as:takeshi_miao@LAB 
> from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)  
> 
> [2014-01-15 
> 09:40:38,453][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.ipc.SecureClient](org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$1.run(SecureClient.java:213)):
>  Exception encountered while connecting to the server : 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Request is a 
> replay (34) - PROCESS_TGS)]
> [2014-01-15 09:40:38,454][hbase-tablepool-1-thread-3][INFO 
> ][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.reloginFromTicketCache(UserGroupInformation.java:657)):
>  Initiating logout for takeshi_miao@LAB
> [2014-01-15 
> 09:40:38,454][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.logout(UserGroupInformation.java:154)):
>  hadoop logout
> [2014-01-15 09:40:38,454][hbase-tablepool-1-thread-3][INFO 
> ][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.reloginFromTicketCache(UserGroupInformation.java:667)):
>  Initiating re-login for takeshi_miao@LAB
> [2014-01-15 
> 09:40:38,455][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.login(UserGroupInformation.java:146)):
>  hadoop login
> [2014-01-15 
> 09:40:38,456][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.commit(UserGroupInformation.java:95)):
>  hadoop login commit
> [2014-01-15 
> 09:40:38,456][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.commit(UserGroupInformation.java:100)):
>  using existing subject:[takeshi_miao@LAB, UnixPrincipal: takeshi_miao, 
> UnixNumericUserPrincipal: 501, UnixNumericGroupPrincipal [Primary Group]: 
> 501, UnixNumericGroupPrincipal [Supplementary Group]: 502, takeshi_miao@LAB]
> {code}
> Finally, we found that HBase would retry (5 * 10 times) and recover from this 
> _'request is a replay (34)'_ issue, but from the HBase user's viewpoint, the 
> error msg on the first line may be alarming; at first sight we were afraid 
> data loss was occurring...
> {code}
> [2014-01-15 
> 09:40:38,452][hbase-tablepool-1-thread-3][ERROR][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1124)):
>  PriviledgedActionException as:takeshi_miao@LAB 
> cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Request is a 
> replay (34) - PROCESS_TGS)]
> {code}
> So I'd like to suggest changing the logging level from '_ERROR_' to '_WARN_' 
> for the 
> _o.a.hadoop.security.UserGroupInformation#doAs(PrivilegedExceptionAction)_ 
> method.

[jira] [Updated] (HADOOP-10274) Lower the logging level from ERROR to WARN for UGI.doAs method

2014-01-23 Thread takeshi.miao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

takeshi.miao updated HADOOP-10274:
--

Summary: Lower the logging level from ERROR to WARN for UGI.doAs method  
(was: Lower the logging level from ERROR to WARN for UGI.doAs)

> Lower the logging level from ERROR to WARN for UGI.doAs method
> --
>
> Key: HADOOP-10274
> URL: https://issues.apache.org/jira/browse/HADOOP-10274
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 1.0.4
> Environment: hadoop-1.0.4, hbase-0.94.16, 
> krb5-server-1.6.1-31.el5_3.3, CentOS release 5.3 (Final)
>Reporter: takeshi.miao
>Priority: Minor
>
> Recently we got the error msg "Request is a replay (34) - PROCESS_TGS" while 
> using the HBase client API to put data into HBase-0.94.16 with 
> krb5-1.6.1 enabled. The related messages follow:
> {code}
> [2014-01-15 
> 09:40:38,452][hbase-tablepool-1-thread-3][ERROR][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1124)):
>  PriviledgedActionException as:takeshi_miao@LAB 
> cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Request is a 
> replay (34) - PROCESS_TGS)]
> [2014-01-15 
> 09:40:38,453][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.logPriviledgedAction(UserGroupInformation.java:1143)):
>  PriviledgedAction as:takeshi_miao@LAB 
> from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)  
> 
> [2014-01-15 
> 09:40:38,453][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.ipc.SecureClient](org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$1.run(SecureClient.java:213)):
>  Exception encountered while connecting to the server : 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Request is a 
> replay (34) - PROCESS_TGS)]
> [2014-01-15 09:40:38,454][hbase-tablepool-1-thread-3][INFO 
> ][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.reloginFromTicketCache(UserGroupInformation.java:657)):
>  Initiating logout for takeshi_miao@LAB
> [2014-01-15 
> 09:40:38,454][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.logout(UserGroupInformation.java:154)):
>  hadoop logout
> [2014-01-15 09:40:38,454][hbase-tablepool-1-thread-3][INFO 
> ][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.reloginFromTicketCache(UserGroupInformation.java:667)):
>  Initiating re-login for takeshi_miao@LAB
> [2014-01-15 
> 09:40:38,455][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.login(UserGroupInformation.java:146)):
>  hadoop login
> [2014-01-15 
> 09:40:38,456][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.commit(UserGroupInformation.java:95)):
>  hadoop login commit
> [2014-01-15 
> 09:40:38,456][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.commit(UserGroupInformation.java:100)):
>  using existing subject:[takeshi_miao@LAB, UnixPrincipal: takeshi_miao, 
> UnixNumericUserPrincipal: 501, UnixNumericGroupPrincipal [Primary Group]: 
> 501, UnixNumericGroupPrincipal [Supplementary Group]: 502, takeshi_miao@LAB]
> {code}
> Finally, we found that HBase would retry (5 * 10 times) and recover from this 
> _'request is a replay (34)'_ issue, but from the HBase user's viewpoint, the 
> error msg on the first line may be alarming; at first sight we were afraid 
> data loss was occurring...
> {code}
> [2014-01-15 
> 09:40:38,452][hbase-tablepool-1-thread-3][ERROR][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1124)):
>  PriviledgedActionException as:takeshi_miao@LAB 
> cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Request is a 
> replay (34) - PROCESS_TGS)]
> {code}
> So I'd like to suggest changing the logging level from '_ERROR_' to '_WARN_' 
> for the 
> _o.a.hadoop.security.UserGroupInformation#doAs(PrivilegedExceptionAction)_ 
> method.

[jira] [Created] (HADOOP-10274) Lower the logging level from ERROR to WARN for UGI.doAs

2014-01-23 Thread takeshi.miao (JIRA)
takeshi.miao created HADOOP-10274:
-

 Summary: Lower the logging level from ERROR to WARN for UGI.doAs
 Key: HADOOP-10274
 URL: https://issues.apache.org/jira/browse/HADOOP-10274
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 1.0.4
 Environment: hadoop-1.0.4, hbase-0.94.16, 
krb5-server-1.6.1-31.el5_3.3, CentOS release 5.3 (Final)
Reporter: takeshi.miao
Priority: Minor


Recently we got the error msg "Request is a replay (34) - PROCESS_TGS" while 
using the HBase client API to put data into HBase-0.94.16 with krb5-1.6.1 
enabled. The related messages follow:
{code}
[2014-01-15 
09:40:38,452][hbase-tablepool-1-thread-3][ERROR][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1124)):
 PriviledgedActionException as:takeshi_miao@LAB 
cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
GSSException: No valid credentials provided (Mechanism level: Request is a 
replay (34) - PROCESS_TGS)]
[2014-01-15 
09:40:38,453][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.logPriviledgedAction(UserGroupInformation.java:1143)):
 PriviledgedAction as:takeshi_miao@LAB 
from:sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  
[2014-01-15 
09:40:38,453][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.ipc.SecureClient](org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$1.run(SecureClient.java:213)):
 Exception encountered while connecting to the server : 
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: 
No valid credentials provided (Mechanism level: Request is a replay (34) - 
PROCESS_TGS)]
[2014-01-15 09:40:38,454][hbase-tablepool-1-thread-3][INFO 
][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.reloginFromTicketCache(UserGroupInformation.java:657)):
 Initiating logout for takeshi_miao@LAB
[2014-01-15 
09:40:38,454][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.logout(UserGroupInformation.java:154)):
 hadoop logout
[2014-01-15 09:40:38,454][hbase-tablepool-1-thread-3][INFO 
][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.reloginFromTicketCache(UserGroupInformation.java:667)):
 Initiating re-login for takeshi_miao@LAB
[2014-01-15 
09:40:38,455][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.login(UserGroupInformation.java:146)):
 hadoop login
[2014-01-15 
09:40:38,456][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.commit(UserGroupInformation.java:95)):
 hadoop login commit
[2014-01-15 
09:40:38,456][hbase-tablepool-1-thread-3][DEBUG][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation$HadoopLoginModule.commit(UserGroupInformation.java:100)):
 using existing subject:[takeshi_miao@LAB, UnixPrincipal: takeshi_miao, 
UnixNumericUserPrincipal: 501, UnixNumericGroupPrincipal [Primary Group]: 501, 
UnixNumericGroupPrincipal [Supplementary Group]: 502, takeshi_miao@LAB]
{code}

Finally, we found that HBase would retry (5 * 10 times) and recover from this 
_'request is a replay (34)'_ issue, but from the HBase user's viewpoint, the 
error msg on the first line may be alarming; at first sight we were afraid 
data loss was occurring...
{code}
[2014-01-15 
09:40:38,452][hbase-tablepool-1-thread-3][ERROR][org.apache.hadoop.security.UserGroupInformation](org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1124)):
 PriviledgedActionException as:takeshi_miao@LAB 
cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
GSSException: No valid credentials provided (Mechanism level: Request is a 
replay (34) - PROCESS_TGS)]
{code}

So I'd like to suggest changing the logging level from '_ERROR_' to '_WARN_' 
for the 
_o.a.hadoop.security.UserGroupInformation#doAs(PrivilegedExceptionAction)_ 
method:
{code}
public <T> T doAs(PrivilegedExceptionAction<T> action
    ) throws IOException, InterruptedException {
  try {
    // ...
  } catch (PrivilegedActionException pae) {
    Throwable cause = pae.getCause();
    LOG.error("PriviledgedActionException as:" + this + " cause:" + cause); // I mean here
    // ...
  }
}
{code}
Since this method already throws _checked exceptions_ which can be handled by 
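
For concreteness, a self-contained sketch of the proposed change (hedged; the 
class and method names here are hypothetical, not UGI itself). It mirrors the 
shape of the catch block above but logs at WARN, since the checked exception is 
rethrown and the caller can judge its severity:

{code}
import java.io.IOException;
import java.lang.reflect.UndeclaredThrowableException;
import java.security.PrivilegedActionException;
import java.security.PrivilegedExceptionAction;
import javax.security.auth.Subject;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class DoAsLoggingSketch {
  private static final Log LOG = LogFactory.getLog(DoAsLoggingSketch.class);

  public static <T> T doAs(Subject subject, PrivilegedExceptionAction<T> action)
      throws IOException, InterruptedException {
    try {
      return Subject.doAs(subject, action);
    } catch (PrivilegedActionException pae) {
      Throwable cause = pae.getCause();
      // Proposed: WARN instead of ERROR; the exception is rethrown below.
      LOG.warn("PriviledgedActionException as:" + subject + " cause:" + cause);
      if (cause instanceof IOException) {
        throw (IOException) cause;
      } else if (cause instanceof InterruptedException) {
        throw (InterruptedException) cause;
      } else if (cause instanceof RuntimeException) {
        throw (RuntimeException) cause;
      } else {
        throw new UndeclaredThrowableException(cause);
      }
    }
  }
}
{code}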

[jira] [Updated] (HADOOP-10273) Fix 'mvn site'

2014-01-23 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-10273:
---

Description: 
'mvn site' fails with

{code}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-site-plugin:3.0:site (default-site) on project 
hadoop-main: Execution default-site of goal 
org.apache.maven.plugins:maven-site-plugin:3.0:site failed: A required class 
was missing while executing 
org.apache.maven.plugins:maven-site-plugin:3.0:site: 
org/sonatype/aether/graph/DependencyFilter

[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/AetherClassNotFound
{code}

Looks related to 
https://cwiki.apache.org/confluence/display/MAVEN/AetherClassNotFound

Bumping the maven-site-plugin version should fix it.

  was:
'mvn site' is broken - it gives the following error.

{code}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-site-plugin:3.0:site (default-site) on project 
hadoop-main: Execution default-site of goal 
org.apache.maven.plugins:maven-site-plugin:3.0:site failed: A required class 
was missing while executing 
org.apache.maven.plugins:maven-site-plugin:3.0:site: 
org/sonatype/aether/graph/DependencyFilter

[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/AetherClassNotFound
{code}

Looks related to 
https://cwiki.apache.org/confluence/display/MAVEN/AetherClassNotFound

Bumping the maven-site-plugin version should fix it.


> Fix 'mvn site'
> --
>
> Key: HADOOP-10273
> URL: https://issues.apache.org/jira/browse/HADOOP-10273
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Arpit Agarwal
> Attachments: HADOOP-10273.patch
>
>
> 'mvn site' fails with
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-site-plugin:3.0:site (default-site) on project 
> hadoop-main: Execution default-site of goal 
> org.apache.maven.plugins:maven-site-plugin:3.0:site failed: A required class 
> was missing while executing 
> org.apache.maven.plugins:maven-site-plugin:3.0:site: 
> org/sonatype/aether/graph/DependencyFilter
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/AetherClassNotFound
> {code}
> Looks related to 
> https://cwiki.apache.org/confluence/display/MAVEN/AetherClassNotFound
> Bumping the maven-site-plugin version should fix it.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10273) Fix 'mvn site'

2014-01-23 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-10273:
---

Attachment: HADOOP-10273.patch

Verified that the change fixes the above build break.

> Fix 'mvn site'
> --
>
> Key: HADOOP-10273
> URL: https://issues.apache.org/jira/browse/HADOOP-10273
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Arpit Agarwal
> Attachments: HADOOP-10273.patch
>
>
> 'mvn site' is broken - it gives the following error.
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-site-plugin:3.0:site (default-site) on project 
> hadoop-main: Execution default-site of goal 
> org.apache.maven.plugins:maven-site-plugin:3.0:site failed: A required class 
> was missing while executing 
> org.apache.maven.plugins:maven-site-plugin:3.0:site: 
> org/sonatype/aether/graph/DependencyFilter
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/AetherClassNotFound
> {code}
> Looks related to 
> https://cwiki.apache.org/confluence/display/MAVEN/AetherClassNotFound
> Bumping the maven-site-plugin version should fix it.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HADOOP-10273) Fix 'maven site'

2014-01-23 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-10273:
--

 Summary: Fix 'maven site'
 Key: HADOOP-10273
 URL: https://issues.apache.org/jira/browse/HADOOP-10273
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Arpit Agarwal


'mvn site' is broken - it gives the following error.

{code}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-site-plugin:3.0:site (default-site) on project 
hadoop-main: Execution default-site of goal 
org.apache.maven.plugins:maven-site-plugin:3.0:site failed: A required class 
was missing while executing 
org.apache.maven.plugins:maven-site-plugin:3.0:site: 
org/sonatype/aether/graph/DependencyFilter

[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/AetherClassNotFound
{code}

Looks related to 
https://cwiki.apache.org/confluence/display/MAVEN/AetherClassNotFound

Bumping the maven-site-plugin version should fix it.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10273) Fix 'mvn site'

2014-01-23 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-10273:
---

Summary: Fix 'mvn site'  (was: Fix 'maven site')

> Fix 'mvn site'
> --
>
> Key: HADOOP-10273
> URL: https://issues.apache.org/jira/browse/HADOOP-10273
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Arpit Agarwal
>
> 'mvn site' is broken - it gives the following error.
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-site-plugin:3.0:site (default-site) on project 
> hadoop-main: Execution default-site of goal 
> org.apache.maven.plugins:maven-site-plugin:3.0:site failed: A required class 
> was missing while executing 
> org.apache.maven.plugins:maven-site-plugin:3.0:site: 
> org/sonatype/aether/graph/DependencyFilter
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/AetherClassNotFound
> {code}
> Looks related to 
> https://cwiki.apache.org/confluence/display/MAVEN/AetherClassNotFound
> Bumping the maven-site-plugin version should fix it.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HADOOP-10272) Hadoop 2 "-copyFromLocal" fail when source is a folder and there are spaces in the path

2014-01-23 Thread Shuaishuai Nie (JIRA)
Shuaishuai Nie created HADOOP-10272:
---

 Summary: Hadoop 2 "-copyFromLocal" fail when source is a folder 
and there are spaces in the path
 Key: HADOOP-10272
 URL: https://issues.apache.org/jira/browse/HADOOP-10272
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.2.0
Reporter: Shuaishuai Nie


Repro steps:
with a folder structure like: /ab/c d/ef.txt
the hadoop command (hadoop fs -copyFromLocal /ab/ /) or (hadoop fs -copyFromLocal 
"/ab/c d/" /) fails with the error:
copyFromLocal: File file:/ab/c%20d/ef.txt does not exist

However, the command (hadoop fs -copyFromLocal "/ab/c d/ef.txt" /) succeeds.

Seems like hadoop treats files and directories differently in "copyFromLocal".
This only happens in Hadoop 2 and causes 2 Hive unit test failures 
(external_table_with_space_in_location_path.q and 
load_hdfs_file_with_space_in_the_name.q).
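
For illustration, a minimal hypothetical sketch (not from the report) of the 
likely mechanism: {{org.apache.hadoop.fs.Path}} stores its path as a 
{{java.net.URI}}, which percent-encodes the space and matches the {{c%20d}} in 
the error above. Code that treats the encoded URI string as a raw local file 
name will fail to find the file, while code using the decoded form succeeds.

{code}
import java.net.URI;
import org.apache.hadoop.fs.Path;

public class SpaceInPathDemo {
  public static void main(String[] args) {
    Path p = new Path("/ab/c d/ef.txt");
    URI uri = p.toUri();
    // The URI form percent-encodes the space ...
    System.out.println(uri);           // /ab/c%20d/ef.txt
    // ... while getPath() decodes it back.
    System.out.println(uri.getPath()); // /ab/c d/ef.txt
  }
}
{code}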



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10086) User document for authentication in secure cluster

2014-01-23 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13880582#comment-13880582
 ] 

Arpit Agarwal commented on HADOOP-10086:


These omissions were in the old doc too, so it is not an error introduced by 
you, but perhaps this is a good time to fix it.
# IIRC secure mode requires that {{dfs.datanode.address}} use a privileged port 
(< 1024). We should change the port number in the examples and also add a note. 
# I think {{dfs.web.authentication.kerberos.principal}} and 
{{dfs.web.authentication.kerberos.keytab}} are also required for WebHDFS to 
work with security. Should we document them?
# Should we document {{yarn.nodemanager.linux-container-executor.path}} for 
yarn-site.xml?

Minor typos.
# “Kerberos principle” —> “Kerberos principal” 
# "  authentication are required.” —> “is required.”
# "which works in the same way to the” —> "which works in the same way as the”
# "if the realms is matched to the” —> "if the realm matches the”
# definitio —> definition
# "OS functionality” —> “OS”

builds.apache.org seems to be down; I’ll try to kick off a Jenkins build once it 
is back up.

Thanks again for helping improve the documentation!

> User document for authentication in secure cluster
> --
>
> Key: HADOOP-10086
> URL: https://issues.apache.org/jira/browse/HADOOP-10086
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Masatake Iwasaki
>Priority: Minor
>  Labels: documentaion, security
> Attachments: HADOOP-10086-0.patch, HADOOP-10086-1.patch
>
>
> There are no independent section for basic security features such as 
> authentication and group mapping in the user documentation, though there are 
> sections for "Service Level Authorization" and "HTTP Authentication".
> Creating independent section for authentication and moving contents about 
> secure cluster currently residing in "Cluster Setup" section could be good 
> starting point.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10249) LdapGroupsMapping should trim ldap password read from file

2014-01-23 Thread Dilli Arumugam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dilli Arumugam updated HADOOP-10249:


Attachment: HADOOP-10249.patch

Patch to resolve the problem.

> LdapGroupsMapping should trim ldap password read from file
> --
>
> Key: HADOOP-10249
> URL: https://issues.apache.org/jira/browse/HADOOP-10249
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Dilli Arumugam
>Assignee: Dilli Arumugam
> Attachments: HADOOP-10249.patch
>
>
>  org.apache.hadoop.security.LdapGroupsMapping allows specifying ldap 
> connection password in a file using property key
> hadoop.security.group.mapping.ldap.bind.password.file
> The code in LdapGroupsMapping that reads the content of the password file 
> does not trim the password value. This causes ldap connection failures, as the 
> password in the password file ends up having a trailing newline.
> Most text editors, and echo, add a newline at the end of a file.
> So, LdapGroupsMapping should trim the password read from the file.
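
For illustration, a minimal sketch of the proposed behavior (hedged; not 
necessarily what the attached patch does, and it assumes Java 7 NIO for 
brevity):

{code}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class PasswordFileReader {
  // Read the ldap bind password and strip the trailing newline that most
  // editors and `echo` append to the file.
  static String readPassword(String pwFile) throws IOException {
    byte[] raw = Files.readAllBytes(Paths.get(pwFile));
    return new String(raw, StandardCharsets.UTF_8).trim();
  }
}
{code}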



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-9652) RawLocalFs#getFileLinkStatus does not fill in the link owner and mode

2014-01-23 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13880505#comment-13880505
 ] 

Andrew Wang commented on HADOOP-9652:
-

Sounds good to me, thanks Jason. I'm honestly okay with reverting symlink
support for the local filesystem entirely, since symlinks seem to be more
trouble than they're worth, but that's a discussion we can have at a later time.



> RawLocalFs#getFileLinkStatus does not fill in the link owner and mode
> -
>
> Key: HADOOP-9652
> URL: https://issues.apache.org/jira/browse/HADOOP-9652
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: 2.4.0
>
> Attachments: 0001-temporarily-disable-HADOOP-9652.patch, 
> hadoop-9452-1.patch, hadoop-9652-2.patch, hadoop-9652-3.patch, 
> hadoop-9652-4.patch, hadoop-9652-5.patch, hadoop-9652-6.patch, 
> hadoop-9652-workaround.patch
>
>
> {{RawLocalFs#getFileLinkStatus}} does not actually get the owner and mode of 
> the symlink, but instead uses the owner and mode of the symlink target.  If 
> the target can't be found, it fills in bogus values (the empty string and 
> FsPermission.getDefault) for these.
> Symlinks have an owner distinct from the owner of the target they point to, 
> and getFileLinkStatus ought to expose this.
> In some operating systems, symlinks can have a permission other than 0777.  
> We ought to expose this in RawLocalFilesystem and other places, although we 
> don't necessarily have to support this behavior in HDFS.
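
For illustration, a minimal hypothetical sketch (not the patch under 
discussion): Java 7 NIO can read a symlink's own owner and mode without 
following it, which is the lstat-like behavior the description asks for.

{code}
import java.nio.file.Files;
import java.nio.file.LinkOption;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFileAttributes;

public class LinkStatDemo {
  public static void main(String[] args) throws Exception {
    Path link = Paths.get(args[0]);
    // NOFOLLOW_LINKS reads the symlink itself (like lstat), not its target.
    PosixFileAttributes attrs = Files.readAttributes(
        link, PosixFileAttributes.class, LinkOption.NOFOLLOW_LINKS);
    System.out.println("owner: " + attrs.owner().getName());
    System.out.println("mode:  " + attrs.permissions());
  }
}
{code}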



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-9652) RawLocalFs#getFileLinkStatus does not fill in the link owner and mode

2014-01-23 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13880487#comment-13880487
 ] 

Jason Lowe commented on HADOOP-9652:


+1, thanks Andrew!  I plan on committing this but need to clear up something 
about the state of this JIRA.  It looks like the reported problem will still 
exist, since getFileLinkStatus will not fill in the link owner and mode by 
default after the workaround patch.  In that sense the cleanest thing would be 
to simply revert and commit a real fix later, but it looks like the last 
committed patch did more than just fix the described problem (e.g.: also fixing 
HADOOP-9693).  So I'm thinking we need to rename this JIRA to reflect it's 
laying the groundwork for the eventual fix and file a followup JIRA to track 
completing the fix.  Does that make sense?

> RawLocalFs#getFileLinkStatus does not fill in the link owner and mode
> -
>
> Key: HADOOP-9652
> URL: https://issues.apache.org/jira/browse/HADOOP-9652
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: 2.4.0
>
> Attachments: 0001-temporarily-disable-HADOOP-9652.patch, 
> hadoop-9452-1.patch, hadoop-9652-2.patch, hadoop-9652-3.patch, 
> hadoop-9652-4.patch, hadoop-9652-5.patch, hadoop-9652-6.patch, 
> hadoop-9652-workaround.patch
>
>
> {{RawLocalFs#getFileLinkStatus}} does not actually get the owner and mode of 
> the symlink, but instead uses the owner and mode of the symlink target.  If 
> the target can't be found, it fills in bogus values (the empty string and 
> FsPermission.getDefault) for these.
> Symlinks have an owner distinct from the owner of the target they point to, 
> and getFileLinkStatus ought to expose this.
> In some operating systems, symlinks can have a permission other than 0777.  
> We ought to expose this in RawLocalFilesystem and other places, although we 
> don't necessarily have to support this behavior in HDFS.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10167) Mark hadoop-common source as UTF-8 in Maven pom files / refactoring

2014-01-23 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-10167:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to trunk, branch-2, and branch-2.3.
Thank you, Mikhail.

> Mark hadoop-common source as UTF-8 in Maven pom files / refactoring
> ---
>
> Key: HADOOP-10167
> URL: https://issues.apache.org/jira/browse/HADOOP-10167
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.0.6-alpha
> Environment: Fedora 19 x86-64
>Reporter: Mikhail Antonov
>  Labels: build
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HADOOP-10167-1.patch
>
>
> While looking at BIGTOP-831, it turned out that the way Bigtop invokes the 
> maven build / site:site generation causes errors like this:
> [ERROR] Exit code: 1 - 
> /home/user/jenkins/workspace/BigTop-RPM/label/centos-6-x86_64-HAD-1-buildbot/bigtop-repo/build/hadoop/rpm/BUILD/hadoop-2.0.2-alpha-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetricsInfo.java:31:
>  error: unmappable character for encoding ANSI_X3.4-1968
> [ERROR] JvmMetrics("JVM related metrics etc."), // record info??
> Making the whole of hadoop-common use UTF-8 fixes that, and it seems like a 
> generally good thing to me.
> Attaching first version of patch for review.
> Original issue was observed on openjdk 7 (x86-64).



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10167) Mark hadoop-common source as UTF-8 in Maven pom files / refactoring

2014-01-23 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13880445#comment-13880445
 ] 

Konstantin Boudnik commented on HADOOP-10167:
-

Patch looks good - there was a line with trailing whitespace that I've fixed 
for the sake of time. Committing now.

> Mark hadoop-common source as UTF-8 in Maven pom files / refactoring
> ---
>
> Key: HADOOP-10167
> URL: https://issues.apache.org/jira/browse/HADOOP-10167
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.0.6-alpha
> Environment: Fedora 19 x86-64
>Reporter: Mikhail Antonov
>  Labels: build
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HADOOP-10167-1.patch
>
>
> While looking at BIGTOP-831, it turned out that the way Bigtop invokes the 
> maven build / site:site generation causes errors like this:
> [ERROR] Exit code: 1 - 
> /home/user/jenkins/workspace/BigTop-RPM/label/centos-6-x86_64-HAD-1-buildbot/bigtop-repo/build/hadoop/rpm/BUILD/hadoop-2.0.2-alpha-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetricsInfo.java:31:
>  error: unmappable character for encoding ANSI_X3.4-1968
> [ERROR] JvmMetrics("JVM related metrics etc."), // record info??
> Making the whole of hadoop-common use UTF-8 fixes that, and it seems like a 
> generally good thing to me.
> Attaching first version of patch for review.
> Original issue was observed on openjdk 7 (x86-64).



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HADOOP-10271) Use FileUtils.copyFile() to implement DFSTestUtils.copyFile()

2014-01-23 Thread Haohui Mai (JIRA)
Haohui Mai created HADOOP-10271:
---

 Summary: Use FileUtils.copyFile() to implement 
DFSTestUtils.copyFile()
 Key: HADOOP-10271
 URL: https://issues.apache.org/jira/browse/HADOOP-10271
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai
Priority: Minor
 Attachments: HDFS-5825.000.patch

{{DFSTestUtils.copyFile()}} is implemented by copying data through 
FileInputStream / FileOutputStream. Apache Commons IO provides 
{{FileUtils.copyFile()}}, which uses FileChannel and is more efficient.

This jira proposes to implement {{DFSTestUtils.copyFile()}} using 
{{FileUtils.copyFile()}}.
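
A minimal sketch of the proposal (hedged; it assumes Apache Commons IO on the 
classpath and is not the attached patch):

{code}
import java.io.File;
import java.io.IOException;
import org.apache.commons.io.FileUtils;

public class CopyFileSketch {
  // Delegate to Commons IO, which copies via FileChannel instead of a
  // manual byte-buffer loop over FileInputStream / FileOutputStream.
  public static void copyFile(File src, File dest) throws IOException {
    FileUtils.copyFile(src, dest);
  }
}
{code}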



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10167) Mark hadoop-common source as UTF-8 in Maven pom files / refactoring

2014-01-23 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-10167:


Fix Version/s: 2.3.0
   3.0.0

> Mark hadoop-common source as UTF-8 in Maven pom files / refactoring
> ---
>
> Key: HADOOP-10167
> URL: https://issues.apache.org/jira/browse/HADOOP-10167
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.0.6-alpha
> Environment: Fedora 19 x86-64
>Reporter: Mikhail Antonov
>  Labels: build
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HADOOP-10167-1.patch
>
>
> While looking at BIGTOP-831, it turned out that the way Bigtop invokes the 
> maven build / site:site generation causes errors like this:
> [ERROR] Exit code: 1 - 
> /home/user/jenkins/workspace/BigTop-RPM/label/centos-6-x86_64-HAD-1-buildbot/bigtop-repo/build/hadoop/rpm/BUILD/hadoop-2.0.2-alpha-src/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetricsInfo.java:31:
>  error: unmappable character for encoding ANSI_X3.4-1968
> [ERROR] JvmMetrics("JVM related metrics etc."), // record info??
> Making the whole of hadoop-common use UTF-8 fixes that, and it seems like a 
> generally good thing to me.
> Attaching first version of patch for review.
> Original issue was observed on openjdk 7 (x86-64).



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-9640) RPC Congestion Control with FairCallQueue

2014-01-23 Thread Chris Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Li updated HADOOP-9640:
-

Attachment: faircallqueue7_with_runtime_swapping.patch

Attached a preview of a patch that enables swapping the namenode call queue at 
runtime.

> RPC Congestion Control with FairCallQueue
> -
>
> Key: HADOOP-9640
> URL: https://issues.apache.org/jira/browse/HADOOP-9640
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Xiaobo Peng
>  Labels: hdfs, qos, rpc
> Attachments: MinorityMajorityPerformance.pdf, 
> NN-denial-of-service-updated-plan.pdf, faircallqueue.patch, 
> faircallqueue2.patch, faircallqueue3.patch, faircallqueue4.patch, 
> faircallqueue5.patch, faircallqueue6.patch, 
> faircallqueue7_with_runtime_swapping.patch, 
> rpc-congestion-control-draft-plan.pdf
>
>
> Several production Hadoop cluster incidents occurred where the Namenode was 
> overloaded and failed to respond. 
> We can improve quality of service for users during namenode peak loads by 
> replacing the FIFO call queue with a [Fair Call 
> Queue|https://issues.apache.org/jira/secure/attachment/12616864/NN-denial-of-service-updated-plan.pdf].
>  (this plan supersedes rpc-congestion-control-draft-plan).
> Excerpted from the communication of one incident, “The map task of a user was 
> creating huge number of small files in the user directory. Due to the heavy 
> load on NN, the JT also was unable to communicate with NN...The cluster 
> became responsive only once the job was killed.”
> Excerpted from the communication of another incident, “Namenode was 
> overloaded by GetBlockLocation requests (Correction: should be getFileInfo 
> requests. the job had a bug that called getFileInfo for a nonexistent file in 
> an endless loop). All other requests to namenode were also affected by this 
> and hence all jobs slowed down. Cluster almost came to a grinding 
> halt…Eventually killed jobtracker to kill all jobs that are running.”
> Excerpted from HDFS-945, “We've seen defective applications cause havoc on 
> the NameNode, for e.g. by doing 100k+ 'listStatus' on very large directories 
> (60k files) etc.”



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10255) Copy the HttpServer in 2.2 back to branch-2

2014-01-23 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-10255:


Attachment: HADOOP-10255.003.patch

The v3 patch is for trunk only. It does not include the HttpServer in 
branch-2.2.

The patch for branch-2 should include the class from branch-2.2.

> Copy the HttpServer in 2.2 back to branch-2
> ---
>
> Key: HADOOP-10255
> URL: https://issues.apache.org/jira/browse/HADOOP-10255
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>Priority: Blocker
> Fix For: 2.4.0
>
> Attachments: HADOOP-10255.000.patch, HADOOP-10255.001.patch, 
> HADOOP-10255.002.patch, HADOOP-10255.003.patch
>
>
> As suggested in HADOOP-10253, HBase needs a temporary copy of {{HttpServer}} 
> from branch-2.2 to make sure it works across multiple 2.x releases.
> This patch renames the current {{HttpServer}} to {{HttpServer2}} and brings 
> the {{HttpServer}} from branch-2.2 back into the repository.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10255) Copy the HttpServer in 2.2 back to branch-2

2014-01-23 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13880339#comment-13880339
 ] 

Haohui Mai commented on HADOOP-10255:
-

bq. nit: Should you leave the - @Deprecated in place?

{{HttpServer2}} is used by HDFS / YARN only. It is safe to remove all the 
deprecated methods.

I would prefer to land this patch on trunk before porting it to branch-2.

> Copy the HttpServer in 2.2 back to branch-2
> ---
>
> Key: HADOOP-10255
> URL: https://issues.apache.org/jira/browse/HADOOP-10255
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>Priority: Blocker
> Fix For: 2.4.0
>
> Attachments: HADOOP-10255.000.patch, HADOOP-10255.001.patch, 
> HADOOP-10255.002.patch
>
>
> As suggested in HADOOP-10253, HBase needs a temporary copy of {{HttpServer}} 
> from branch-2.2 to make sure it works across multiple 2.x releases.
> This patch renames the current {{HttpServer}} to {{HttpServer2}} and brings 
> the {{HttpServer}} from branch-2.2 back into the repository.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10255) Copy the HttpServer in 2.2 back to branch-2

2014-01-23 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-10255:


Attachment: HADOOP-10255.002.patch

Rebased the v2 patch on the current trunk.

> Copy the HttpServer in 2.2 back to branch-2
> ---
>
> Key: HADOOP-10255
> URL: https://issues.apache.org/jira/browse/HADOOP-10255
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>Priority: Blocker
> Fix For: 2.4.0
>
> Attachments: HADOOP-10255.000.patch, HADOOP-10255.001.patch, 
> HADOOP-10255.002.patch
>
>
> As suggested in HADOOP-10253, HBase needs a temporary copy of {{HttpServer}} 
> from branch-2.2 to make sure it works across multiple 2.x releases.
> This patch renames the current {{HttpServer}} to {{HttpServer2}} and brings 
> the {{HttpServer}} from branch-2.2 back into the repository.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-9652) RawLocalFs#getFileLinkStatus does not fill in the link owner and mode

2014-01-23 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13880283#comment-13880283
 ] 

Colin Patrick McCabe commented on HADOOP-9652:
--

+1 for the workaround.  thanks, Andrew.

> RawLocalFs#getFileLinkStatus does not fill in the link owner and mode
> -
>
> Key: HADOOP-9652
> URL: https://issues.apache.org/jira/browse/HADOOP-9652
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: 2.4.0
>
> Attachments: 0001-temporarily-disable-HADOOP-9652.patch, 
> hadoop-9452-1.patch, hadoop-9652-2.patch, hadoop-9652-3.patch, 
> hadoop-9652-4.patch, hadoop-9652-5.patch, hadoop-9652-6.patch, 
> hadoop-9652-workaround.patch
>
>
> {{RawLocalFs#getFileLinkStatus}} does not actually get the owner and mode of 
> the symlink, but instead uses the owner and mode of the symlink target.  If 
> the target can't be found, it fills in bogus values (the empty string and 
> FsPermission.getDefault) for these.
> Symlinks have an owner distinct from the owner of the target they point to, 
> and getFileLinkStatus ought to expose this.
> In some operating systems, symlinks can have a permission other than 0777.  
> We ought to expose this in RawLocalFilesystem and other places, although we 
> don't necessarily have to support this behavior in HDFS.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10239) Add Spark as a related project on the Hadoop page

2014-01-23 Thread Reynold Xin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13880282#comment-13880282
 ] 

Reynold Xin commented on HADOOP-10239:
--

Thanks!

> Add Spark as a related project on the Hadoop page
> -
>
> Key: HADOOP-10239
> URL: https://issues.apache.org/jira/browse/HADOOP-10239
> Project: Hadoop Common
>  Issue Type: Task
>  Components: documentation
>Reporter: Reynold Xin
> Attachments: HADOOP-10239.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10239) Add Spark as a related project on the Hadoop page

2014-01-23 Thread Matei Zaharia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13880272#comment-13880272
 ] 

Matei Zaharia commented on HADOOP-10239:


Thanks Sandy!

> Add Spark as a related project on the Hadoop page
> -
>
> Key: HADOOP-10239
> URL: https://issues.apache.org/jira/browse/HADOOP-10239
> Project: Hadoop Common
>  Issue Type: Task
>  Components: documentation
>Reporter: Reynold Xin
> Attachments: HADOOP-10239.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Resolved] (HADOOP-10239) Add Spark as a related project on the Hadoop page

2014-01-23 Thread Sandy Ryza (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandy Ryza resolved HADOOP-10239.
-

Resolution: Fixed

Pushed this to the website

> Add Spark as a related project on the Hadoop page
> -
>
> Key: HADOOP-10239
> URL: https://issues.apache.org/jira/browse/HADOOP-10239
> Project: Hadoop Common
>  Issue Type: Task
>  Components: documentation
>Reporter: Reynold Xin
> Attachments: HADOOP-10239.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10270) getfacl does not display effective permissions of masked entries.

2014-01-23 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13880247#comment-13880247
 ] 

Chris Nauroth commented on HADOOP-10270:


See below for example output from getfacl on Linux.  The logic for this would 
be:

{code}
Find the mask entry within the scope, either access or default.
Go back and iterate through all entries.
If entry is a named user, named group, or unnamed group
  Calculate effective permissions by applying the mask from the same scope
  using {{FsAction#and}}.
  If effective permissions are different from actual permissions
    Also display effective permissions.
{code}

The effective permissions are not displayed if the mask doesn't turn any 
permissions off.

{code}
> getfacl dir1
# file: dir1
# owner: cnauroth
# group: cnauroth
user::rw-
user:bruce:rwx  #effective:r--
user:diana:r--
group::rw-  #effective:r--
mask::r--
other::r--
default:user::rw-
default:user:bruce:rwx  #effective:r--
default:user:diana:r--
default:group::rw-  #effective:r--
default:mask::r--
default:other::r--
{code}
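
A hedged Java sketch of the logic above, assuming the ACL entry API shape on 
the HDFS-4685 branch ({{AclEntry}} with scope/type/name/permission); 
illustrative only, not the committed implementation:

{code}
import java.util.List;
import org.apache.hadoop.fs.permission.AclEntry;
import org.apache.hadoop.fs.permission.AclEntryScope;
import org.apache.hadoop.fs.permission.AclEntryType;
import org.apache.hadoop.fs.permission.FsAction;

class EffectivePermissionsPrinter {
  // Print the entries of one scope, appending "#effective:" only when the
  // mask actually turns permissions off.
  static void print(List<AclEntry> entries, AclEntryScope scope, FsAction mask) {
    for (AclEntry e : entries) {
      if (e.getScope() != scope) {
        continue;
      }
      boolean maskable =
          (e.getType() == AclEntryType.USER && e.getName() != null) // named user
          || e.getType() == AclEntryType.GROUP;                     // named or unnamed group
      FsAction effective = maskable ? e.getPermission().and(mask) : e.getPermission();
      if (effective != e.getPermission()) {
        System.out.println(e + "  #effective:" + effective.SYMBOL);
      } else {
        System.out.println(e);
      }
    }
  }
}
{code}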


> getfacl does not display effective permissions of masked entries.
> -
>
> Key: HADOOP-10270
> URL: https://issues.apache.org/jira/browse/HADOOP-10270
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: HDFS ACLs (HDFS-4685)
>Reporter: Chris Nauroth
>Priority: Minor
>
> The mask entry of an ACL can be changed to restrict permissions that would be 
> otherwise granted via named user and group entries.  In these cases, the 
> typical implementation of getfacl also displays the effective permissions 
> after applying the mask.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-9652) RawLocalFs#getFileLinkStatus does not fill in the link owner and mode

2014-01-23 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-9652:


Attachment: hadoop-9652-workaround.patch

Thanks for reminding us of this issue, Jason; we definitely need to get it fixed 
for 2.4. I took Colin's branch-2 patch and fixed the LocalFS tests, and also 
verified that we no longer see a bunch of execve's of stat with strace and the 
shell. It'd be great if you could verify that this fixes your problem (and +1 
and commit if you're comfortable).

> RawLocalFs#getFileLinkStatus does not fill in the link owner and mode
> -
>
> Key: HADOOP-9652
> URL: https://issues.apache.org/jira/browse/HADOOP-9652
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: 2.4.0
>
> Attachments: 0001-temporarily-disable-HADOOP-9652.patch, 
> hadoop-9452-1.patch, hadoop-9652-2.patch, hadoop-9652-3.patch, 
> hadoop-9652-4.patch, hadoop-9652-5.patch, hadoop-9652-6.patch, 
> hadoop-9652-workaround.patch
>
>
> {{RawLocalFs#getFileLinkStatus}} does not actually get the owner and mode of 
> the symlink, but instead uses the owner and mode of the symlink target.  If 
> the target can't be found, it fills in bogus values (the empty string and 
> FsPermission.getDefault) for these.
> Symlinks have an owner distinct from the owner of the target they point to, 
> and getFileLinkStatus ought to expose this.
> In some operating systems, symlinks can have a permission other than 0777.  
> We ought to expose this in RawLocalFilesystem and other places, although we 
> don't necessarily have to support this behavior in HDFS.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HADOOP-10270) getfacl does not display effective permissions of masked entries.

2014-01-23 Thread Chris Nauroth (JIRA)
Chris Nauroth created HADOOP-10270:
--

 Summary: getfacl does not display effective permissions of masked 
entries.
 Key: HADOOP-10270
 URL: https://issues.apache.org/jira/browse/HADOOP-10270
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: HDFS ACLs (HDFS-4685)
Reporter: Chris Nauroth
Priority: Minor


The mask entry of an ACL can be changed to restrict permissions that would be 
otherwise granted via named user and group entries.  In these cases, the 
typical implementation of getfacl also displays the effective permissions after 
applying the mask.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-9640) RPC Congestion Control with FairCallQueue

2014-01-23 Thread Mayank Bansal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13880231#comment-13880231
 ] 

Mayank Bansal commented on HADOOP-9640:
---

Hi [~sureshms] 

Can you please take a look at this jira?

Thanks,
Mayank

> RPC Congestion Control with FairCallQueue
> -
>
> Key: HADOOP-9640
> URL: https://issues.apache.org/jira/browse/HADOOP-9640
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Xiaobo Peng
>  Labels: hdfs, qos, rpc
> Attachments: MinorityMajorityPerformance.pdf, 
> NN-denial-of-service-updated-plan.pdf, faircallqueue.patch, 
> faircallqueue2.patch, faircallqueue3.patch, faircallqueue4.patch, 
> faircallqueue5.patch, faircallqueue6.patch, 
> rpc-congestion-control-draft-plan.pdf
>
>
> Several production Hadoop cluster incidents occurred where the Namenode was 
> overloaded and failed to respond. 
> We can improve quality of service for users during namenode peak loads by 
> replacing the FIFO call queue with a [Fair Call 
> Queue|https://issues.apache.org/jira/secure/attachment/12616864/NN-denial-of-service-updated-plan.pdf].
>  (this plan supersedes rpc-congestion-control-draft-plan).
> Excerpted from the communication of one incident, “The map task of a user was 
> creating huge number of small files in the user directory. Due to the heavy 
> load on NN, the JT also was unable to communicate with NN...The cluster 
> became responsive only once the job was killed.”
> Excerpted from the communication of another incident, “Namenode was 
> overloaded by GetBlockLocation requests (Correction: should be getFileInfo 
> requests. the job had a bug that called getFileInfo for a nonexistent file in 
> an endless loop). All other requests to namenode were also affected by this 
> and hence all jobs slowed down. Cluster almost came to a grinding 
> halt…Eventually killed jobtracker to kill all jobs that are running.”
> Excerpted from HDFS-945, “We've seen defective applications cause havoc on 
> the NameNode, for e.g. by doing 100k+ 'listStatus' on very large directories 
> (60k files) etc.”



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10255) Copy the HttpServer in 2.2 back to branch-2

2014-01-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13880201#comment-13880201
 ] 

stack commented on HADOOP-10255:


Patch does not apply to branch-2.  Any chance of fixing it [~wheat9]?

> Copy the HttpServer in 2.2 back to branch-2
> ---
>
> Key: HADOOP-10255
> URL: https://issues.apache.org/jira/browse/HADOOP-10255
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>Priority: Blocker
> Fix For: 2.4.0
>
> Attachments: HADOOP-10255.000.patch, HADOOP-10255.001.patch
>
>
> As suggested in HADOOP-10253, HBase needs a temporary copy of {{HttpServer}} 
> from branch-2.2 to make sure it works across multiple 2.x releases.
> This patch renames the current {{HttpServer}} to {{HttpServer2}} and brings 
> the {{HttpServer}} from branch-2.2 back into the repository.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-9652) RawLocalFs#getFileLinkStatus does not fill in the link owner and mode

2014-01-23 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13880138#comment-13880138
 ] 

Jason Lowe commented on HADOOP-9652:


Any update on this?  I see branch-2 is still incurring a ton of fork-and-exec 
overhead for fs.exists() on local files.

> RawLocalFs#getFileLinkStatus does not fill in the link owner and mode
> -
>
> Key: HADOOP-9652
> URL: https://issues.apache.org/jira/browse/HADOOP-9652
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: 2.4.0
>
> Attachments: 0001-temporarily-disable-HADOOP-9652.patch, 
> hadoop-9452-1.patch, hadoop-9652-2.patch, hadoop-9652-3.patch, 
> hadoop-9652-4.patch, hadoop-9652-5.patch, hadoop-9652-6.patch
>
>
> {{RawLocalFs#getFileLinkStatus}} does not actually get the owner and mode of 
> the symlink, but instead uses the owner and mode of the symlink target.  If 
> the target can't be found, it fills in bogus values (the empty string and 
> FsPermission.getDefault) for these.
> Symlinks have an owner distinct from the owner of the target they point to, 
> and getFileLinkStatus ought to expose this.
> In some operating systems, symlinks can have a permission other than 0777.  
> We ought to expose this in RawLocalFilesystem and other places, although we 
> don't necessarily have to support this behavior in HDFS.
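
As context for the fork-and-exec concern above: on Java 7+ the link's own owner and mode can be read without shelling out, via NIO with NOFOLLOW_LINKS. A minimal sketch (not the attached patch; the path is hypothetical):

{code}
import java.nio.file.Files;
import java.nio.file.LinkOption;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFileAttributes;
import java.nio.file.attribute.PosixFilePermissions;

public class LinkStat {
  public static void main(String[] args) throws Exception {
    Path link = Paths.get("/tmp/some-symlink");  // hypothetical path
    // NOFOLLOW_LINKS stats the symlink itself, not its target (like lstat).
    PosixFileAttributes attrs = Files.readAttributes(
        link, PosixFileAttributes.class, LinkOption.NOFOLLOW_LINKS);
    System.out.println("owner: " + attrs.owner().getName());
    System.out.println("mode:  " + PosixFilePermissions.toString(attrs.permissions()));
  }
}
{code}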



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10269) SaslException is completely ignored

2014-01-23 Thread Ding Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13880083#comment-13880083
 ] 

Ding Yuan commented on HADOOP-10269:


Thanks for the response. It makes sense. I don't want to sound like a pest, but 
in this case "ignored" is a different exception from se, and since the code 
swallows it completely, no one will ever know that another exception was thrown 
by the dispose. Although 'dispose' shouldn't fail in most cases, the purpose of 
an error handler is exactly to prepare for those extremely rare cases where 
some failure mode was not anticipated. So maybe it's worthwhile to at least log 
this "ignored" exception?
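
For illustration, a minimal sketch of the suggested change, assuming SaslOutputStream has (or is given) a commons-logging LOG field; this is the same fragment as below with the one-line logging change:

{code}
// Assumed field, following the usual Hadoop pattern:
//   private static final Log LOG = LogFactory.getLog(SaslOutputStream.class);
try {
  if (saslServer != null) { // using saslServer
    saslToken = saslServer.wrap(inBuf, off, len);
  } else { // using saslClient
    saslToken = saslClient.wrap(inBuf, off, len);
  }
} catch (SaslException se) {
  try {
    disposeSasl();
  } catch (SaslException ignored) {
    // Sketch of the suggestion: record the secondary failure instead of
    // silently dropping it.
    LOG.warn("Failed to dispose SASL state after wrap failure", ignored);
  }
  throw se;
}
{code}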

> SaslException is completely ignored
> ---
>
> Key: HADOOP-10269
> URL: https://issues.apache.org/jira/browse/HADOOP-10269
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.2.0
>Reporter: Ding Yuan
>
> In "org/apache/hadoop/security/SaslOutputStream.java", there is the following 
> code pattern:
> {noformat}
> 172    try {
> 173      if (saslServer != null) { // using saslServer
> 174        saslToken = saslServer.wrap(inBuf, off, len);
> 175      } else { // using saslClient
> 176        saslToken = saslClient.wrap(inBuf, off, len);
> 177      }
> 178    } catch (SaslException se) {
> 179      try {
> 180        disposeSasl();
> 181      } catch (SaslException ignored) {
> 182      }
> 183      throw se;
> 184    }
> {noformat}
> On line 181, the exception thrown by disposeSasl(), which can be from 
> SaslServer.dispose() or SaslClient.dispose(), is ignored completely without 
> even logging it. Maybe at least log it?
> Ding



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10248) Property name should be included in the exception where property value is null

2014-01-23 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13880076#comment-13880076
 ] 

Uma Maheswara Rao G commented on HADOOP-10248:
--

+1

> Property name should be included in the exception where property value is null
> --
>
> Key: HADOOP-10248
> URL: https://issues.apache.org/jira/browse/HADOOP-10248
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Ted Yu
>Assignee: Akira AJISAKA
>  Labels: newbie
> Attachments: HADOOP-10248.2.patch, HADOOP-10248.patch
>
>
> I saw the following when trying to determine startup failure:
> {code}
> 2014-01-21 06:07:17,871 FATAL 
> [master:h2-centos6-uns-1390276854-hbase-10:6] master.HMaster: Unhandled 
> exception. Starting shutdown.
> java.lang.IllegalArgumentException: Property value must not be null
> at com.google.common.base.Preconditions.checkArgument(Preconditions.java:92)
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:958)
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:940)
> at org.apache.hadoop.http.HttpServer.initializeWebServer(HttpServer.java:510)
> at org.apache.hadoop.http.HttpServer.(HttpServer.java:470)
> at org.apache.hadoop.http.HttpServer.(HttpServer.java:458)
> at org.apache.hadoop.http.HttpServer.(HttpServer.java:412)
> at org.apache.hadoop.hbase.util.InfoServer.(InfoServer.java:59)
> {code}
> Property name should be included in the following exception:
> {code}
> Preconditions.checkArgument(
> value != null,
> "Property value must not be null");
> {code}
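
For illustration, one way to include the name (a sketch; the committed HADOOP-10248 patch may word it differently), using Guava's %s message template:

{code}
// Sketch only: surface the offending property name in the message.
// checkArgument(boolean, String, Object...) substitutes %s placeholders.
Preconditions.checkArgument(value != null,
    "The value of property %s must not be null", name);
{code}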



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10269) SaslException is completely ignored

2014-01-23 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13880042#comment-13880042
 ] 

Daryn Sharp commented on HADOOP-10269:
--

If the SASL wrapping fails, then it really doesn't matter whether disposing of 
the SASL object also fails.  Disposing shouldn't fail, because it only clears 
internal state, but even if it does, the failure is likely related to the wrap 
failure.  The original/rethrown exception from the wrap failure is what really 
matters.

If that makes sense, I think this jira should be marked invalid.
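
As a side note, not a proposal from either commenter: on Java 7+ the dispose failure could be attached to the wrap failure via Throwable#addSuppressed, which keeps the wrap exception primary without losing the secondary one. A sketch:

{code}
try {
  saslToken = saslServer != null ? saslServer.wrap(inBuf, off, len)
                                 : saslClient.wrap(inBuf, off, len);
} catch (SaslException se) {
  try {
    disposeSasl();
  } catch (SaslException disposeEx) {
    // Java 7+: the dispose failure rides along on the rethrown exception
    // and shows up in its stack trace as "Suppressed: ...".
    se.addSuppressed(disposeEx);
  }
  throw se;
}
{code}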

> SaslException is completely ignored
> ---
>
> Key: HADOOP-10269
> URL: https://issues.apache.org/jira/browse/HADOOP-10269
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.2.0
>Reporter: Ding Yuan
>
> In "org/apache/hadoop/security/SaslOutputStream.java", there is the following 
> code pattern:
> {noformat}
> 172    try {
> 173      if (saslServer != null) { // using saslServer
> 174        saslToken = saslServer.wrap(inBuf, off, len);
> 175      } else { // using saslClient
> 176        saslToken = saslClient.wrap(inBuf, off, len);
> 177      }
> 178    } catch (SaslException se) {
> 179      try {
> 180        disposeSasl();
> 181      } catch (SaslException ignored) {
> 182      }
> 183      throw se;
> 184    }
> {noformat}
> On line 181, the exception thrown by disposeSasl(), which can be from 
> SaslServer.dispose() or SaslClient.dispose(), is ignored completely without 
> even logging it. Maybe at least log it?
> Ding



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10255) Copy the HttpServer in 2.2 back to branch-2

2014-01-23 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HADOOP-10255:
---

 Priority: Blocker  (was: Major)
Fix Version/s: 2.4.0

I marked this a blocker on 2.4 since without it downstreamers will be 
incompatible w/ 2.4.  Please recalibrate it if I have it wrong.

> Copy the HttpServer in 2.2 back to branch-2
> ---
>
> Key: HADOOP-10255
> URL: https://issues.apache.org/jira/browse/HADOOP-10255
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>Priority: Blocker
> Fix For: 2.4.0
>
> Attachments: HADOOP-10255.000.patch, HADOOP-10255.001.patch
>
>
> As suggested in HADOOP-10253, HBase needs a temporary copy of {{HttpServer}} 
> from branch-2.2 to make sure it works across multiple 2.x releases.
> This patch renames the current {{HttpServer}} to {{HttpServer2}} and brings 
> the {{HttpServer}} in branch-2.2 back into the repository.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10255) Copy the HttpServer in 2.2 back to branch-2

2014-01-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13880023#comment-13880023
 ] 

stack commented on HADOOP-10255:


A few minor comments.  (Is the failure because the patch build is against trunk 
and not branch-2, the target for this patch?)

nit: Should you leave the removed @Deprecated in place?

nit: Do you want to explain in the class comment why there is a class named 
HttpServer2, i.e. 'this class exists because hbasers were whining when their 
httpserver was taken away'?  Folks may wonder, especially in h3 when HttpServer 
is gone.  Do you want to add 'yarn' to the list of LimitedPrivate, or is 
mapreduce a sufficient proxy for yarn?

Else looks good on quick review.  +1  The above could be addressed on commit.  
Thanks for doing this [~wheat9]
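
For readers of the thread, a sketch of the kind of audience annotation being discussed; the class name here is hypothetical, and the exact audience list in the committed patch may differ:

{code}
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

// Sketch: LimitedPrivate takes an explicit list of consuming projects,
// so adding "YARN" is a one-token change to the annotation.
@InterfaceAudience.LimitedPrivate({"HDFS", "MapReduce", "YARN", "HBase"})
@InterfaceStability.Evolving
public class HttpServer2Example {
}
{code}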

> Copy the HttpServer in 2.2 back to branch-2
> ---
>
> Key: HADOOP-10255
> URL: https://issues.apache.org/jira/browse/HADOOP-10255
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.4.0
>
> Attachments: HADOOP-10255.000.patch, HADOOP-10255.001.patch
>
>
> As suggested in HADOOP-10253, HBase needs a temporary copy of {{HttpServer}} 
> from branch-2.2 to make sure it works across multiple 2.x releases.
> This patch renames the current {{HttpServer}} to {{HttpServer2}} and brings 
> the {{HttpServer}} in branch-2.2 back into the repository.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10269) SaslException is completely ignored

2014-01-23 Thread Ding Yuan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ding Yuan updated HADOOP-10269:
---

Description: 
In "org/apache/hadoop/security/SaslOutputStream.java", there is the following 
code pattern:

{noformat}
172    try {
173      if (saslServer != null) { // using saslServer
174        saslToken = saslServer.wrap(inBuf, off, len);
175      } else { // using saslClient
176        saslToken = saslClient.wrap(inBuf, off, len);
177      }
178    } catch (SaslException se) {
179      try {
180        disposeSasl();
181      } catch (SaslException ignored) {
182      }
183      throw se;
184    }
{noformat}

On line 181, the exception thrown by disposeSasl(), which can be from 
SaslServer.dispose() or SaslClient.dispose(), is ignored completely without 
even logging it. Maybe at least log it?

Ding

  was:
In "org/apache/hadoop/security/SaslOutputStream.java", there is the following 
code pattern:

172    try {
173      if (saslServer != null) { // using saslServer
174        saslToken = saslServer.wrap(inBuf, off, len);
175      } else { // using saslClient
176        saslToken = saslClient.wrap(inBuf, off, len);
177      }
178    } catch (SaslException se) {
179      try {
180        disposeSasl();
181      } catch (SaslException ignored) {
182      }
183      throw se;
184    }

On line 181, the exception thrown by disposeSasl(), which can be from 
SaslServer.dispose() or SaslClient.dispose(), is ignored completely without 
even logging it. Maybe at least log it?

Ding


> SaslException is completely ignored
> ---
>
> Key: HADOOP-10269
> URL: https://issues.apache.org/jira/browse/HADOOP-10269
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.2.0
>Reporter: Ding Yuan
>
> In "org/apache/hadoop/security/SaslOutputStream.java", there is the following 
> code pattern:
> {noformat}
> 172    try {
> 173      if (saslServer != null) { // using saslServer
> 174        saslToken = saslServer.wrap(inBuf, off, len);
> 175      } else { // using saslClient
> 176        saslToken = saslClient.wrap(inBuf, off, len);
> 177      }
> 178    } catch (SaslException se) {
> 179      try {
> 180        disposeSasl();
> 181      } catch (SaslException ignored) {
> 182      }
> 183      throw se;
> 184    }
> {noformat}
> On line 181, the exception thrown by disposeSasl(), which can be from 
> SaslServer.dispose() or SaslClient.dispose(), is ignored completely without 
> even logging it. Maybe at least log it?
> Ding



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HADOOP-10269) SaslException is completely ignored

2014-01-23 Thread Ding Yuan (JIRA)
Ding Yuan created HADOOP-10269:
--

 Summary: SaslException is completely ignored
 Key: HADOOP-10269
 URL: https://issues.apache.org/jira/browse/HADOOP-10269
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.2.0
Reporter: Ding Yuan


In "org/apache/hadoop/security/SaslOutputStream.java", there is the following 
code pattern:

172    try {
173      if (saslServer != null) { // using saslServer
174        saslToken = saslServer.wrap(inBuf, off, len);
175      } else { // using saslClient
176        saslToken = saslClient.wrap(inBuf, off, len);
177      }
178    } catch (SaslException se) {
179      try {
180        disposeSasl();
181      } catch (SaslException ignored) {
182      }
183      throw se;
184    }

On line 181, the exception thrown by disposeSasl(), which can be from 
SaslServer.dispose() or SaslClient.dispose(), is ignored completely without 
even logging it. Maybe at least log it?

Ding



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)