[jira] [Commented] (HADOOP-15407) Support Windows Azure Storage - Blob file system in Hadoop

2018-04-23 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16449125#comment-16449125
 ] 

Devaraj Das commented on HADOOP-15407:
--

[~esmanii], the patch seems to have been generated incorrectly. I'd expect this 
jira to add a lot of new code, but the patch does otherwise :)

> Support Windows Azure Storage - Blob file system in Hadoop
> --
>
> Key: HADOOP-15407
> URL: https://issues.apache.org/jira/browse/HADOOP-15407
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Esfandiar Manii
>Assignee: Esfandiar Manii
>Priority: Major
> Attachments: HADOOP-15407-001.patch
>
>
> *Description*
> This JIRA adds a new file system implementation, ABFS, for running Big Data 
> and Analytics workloads against Azure Storage. This is a complete rewrite of 
> the previous WASB driver, with a heavy focus on optimizing both performance 
> and cost.
>
> *High level design*
> At a high level, the code here extends the FileSystem class to provide an 
> implementation for accessing blobs in Azure Storage. The scheme abfs is used 
> for accessing it over HTTP, and abfss for accessing it over HTTPS. The 
> following URI scheme is used to address individual paths:
>
> abfs[s]://<container>@<account>.dfs.core.windows.net/<path>
>
> ABFS is intended as a replacement for WASB. WASB is not deprecated, but it is 
> in pure maintenance mode, and customers should upgrade to ABFS once it hits 
> General Availability later in CY18.
> Benefits of ABFS include:
> * Higher scale (capacity, throughput, and IOPS) for Big Data and Analytics 
> workloads, by allowing higher limits on storage accounts
> * Removing any ramp-up time with Storage backend partitioning; blocks are now 
> automatically sharded across partitions in the Storage backend. This avoids 
> the need for temporary/intermediate files and the cost (and framework 
> complexity around committing jobs/tasks) they bring
> * Enabling much higher read and write throughput on single files (tens of 
> Gbps by default)
> * Still retaining all of the Azure Blob features customers are familiar with 
> and expect, and gaining the benefits of future Blob features as well
>
> ABFS incorporates Hadoop Filesystem metrics to monitor the file system 
> throughput and operations. Ambari metrics are not currently implemented for 
> ABFS, but will be available soon.
>
> *Credits and history*
> Credit for this work goes to (hope I don't forget anyone): Shane Mainali, 
> Thomas Marquardt, Zichen Sun, Georgi Chalakov, Esfandiar Manii, Amit Singh, 
> Dana Kaban, Da Zhou, Junhua Gu, Saher Ahwal, Saurabh Pant, and James Baker.
>
> *Test*
> ABFS has gone through many test procedures, including Hadoop file system 
> contract tests, unit testing, functional testing, and manual testing. All the 
> JUnit tests provided with the driver can run in either sequential or parallel 
> fashion in order to reduce the testing time.
> Besides unit tests, we have used ABFS as the default file system in Azure 
> HDInsight. Azure HDInsight will very soon offer ABFS as a storage option. 
> (HDFS is also used, but not as the default file system.) Various customer and 
> test workloads have been run against clusters with such configurations for 
> quite some time. Benchmarks such as Tera*, TPC-DS, Spark Streaming, Spark 
> SQL, and others have been run to do scenario, performance, and functional 
> testing. Third parties and customers have also done various testing of ABFS.
>
> The current version reflects the version of the code tested and used in our 
> production environment.
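
For illustration, here is a minimal sketch of addressing an ABFS path through 
the standard Hadoop FileSystem API, assuming a hypothetical account 
"myaccount" and container "mycontainer" (the driver registers the abfs/abfss 
schemes itself):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AbfsExample {
  public static void main(String[] args) throws Exception {
    // Hypothetical container and account; abfss:// would go over HTTPS instead.
    Path path = new Path(
        "abfs://mycontainer@myaccount.dfs.core.windows.net/data/file.txt");
    Configuration conf = new Configuration();
    try (FileSystem fs = FileSystem.get(path.toUri(), conf)) {
      System.out.println("exists: " + fs.exists(path));
    }
  }
}
{code}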



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15407) Support Windows Azure Storage - Blob file system in Hadoop

2018-04-23 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16449125#comment-16449125
 ] 

Devaraj Das edited comment on HADOOP-15407 at 4/24/18 12:58 AM:


[~esmanii], the patch seems to have been generated incorrectly. I'd expect this 
jira to add a lot of new code, but the patch does otherwise :)


was (Author: devaraj):
[~esmanii], the patch seems to have been generated incorrectly. I'd expect this 
jira is adding lot of new code, but the patch does otherwise :)

> Support Windows Azure Storage - Blob file system in Hadoop
> --
>
> Key: HADOOP-15407
> URL: https://issues.apache.org/jira/browse/HADOOP-15407
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Esfandiar Manii
>Assignee: Esfandiar Manii
>Priority: Major
> Attachments: HADOOP-15407-001.patch
>
>
> *Description*
> This JIRA adds a new file system implementation, ABFS, for running Big Data 
> and Analytics workloads against Azure Storage. This is a complete rewrite of 
> the previous WASB driver, with a heavy focus on optimizing both performance 
> and cost.
>
> *High level design*
> At a high level, the code here extends the FileSystem class to provide an 
> implementation for accessing blobs in Azure Storage. The scheme abfs is used 
> for accessing it over HTTP, and abfss for accessing it over HTTPS. The 
> following URI scheme is used to address individual paths:
>
> abfs[s]://<container>@<account>.dfs.core.windows.net/<path>
>
> ABFS is intended as a replacement for WASB. WASB is not deprecated, but it is 
> in pure maintenance mode, and customers should upgrade to ABFS once it hits 
> General Availability later in CY18.
> Benefits of ABFS include:
> * Higher scale (capacity, throughput, and IOPS) for Big Data and Analytics 
> workloads, by allowing higher limits on storage accounts
> * Removing any ramp-up time with Storage backend partitioning; blocks are now 
> automatically sharded across partitions in the Storage backend. This avoids 
> the need for temporary/intermediate files and the cost (and framework 
> complexity around committing jobs/tasks) they bring
> * Enabling much higher read and write throughput on single files (tens of 
> Gbps by default)
> * Still retaining all of the Azure Blob features customers are familiar with 
> and expect, and gaining the benefits of future Blob features as well
>
> ABFS incorporates Hadoop Filesystem metrics to monitor the file system 
> throughput and operations. Ambari metrics are not currently implemented for 
> ABFS, but will be available soon.
>
> *Credits and history*
> Credit for this work goes to (hope I don't forget anyone): Shane Mainali, 
> Thomas Marquardt, Zichen Sun, Georgi Chalakov, Esfandiar Manii, Amit Singh, 
> Dana Kaban, Da Zhou, Junhua Gu, Saher Ahwal, Saurabh Pant, and James Baker.
>
> *Test*
> ABFS has gone through many test procedures, including Hadoop file system 
> contract tests, unit testing, functional testing, and manual testing. All the 
> JUnit tests provided with the driver can run in either sequential or parallel 
> fashion in order to reduce the testing time.
> Besides unit tests, we have used ABFS as the default file system in Azure 
> HDInsight. Azure HDInsight will very soon offer ABFS as a storage option. 
> (HDFS is also used, but not as the default file system.) Various customer and 
> test workloads have been run against clusters with such configurations for 
> quite some time. Benchmarks such as Tera*, TPC-DS, Spark Streaming, Spark 
> SQL, and others have been run to do scenario, performance, and functional 
> testing. Third parties and customers have also done various testing of ABFS.
>
> The current version reflects the version of the code tested and used in our 
> production environment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15297) Make S3A etag => checksum feature optional

2018-03-09 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16393534#comment-16393534
 ] 

Devaraj Das commented on HADOOP-15297:
--

+1 (but please fix the genericqa-reported warnings)

> Make S3A etag => checksum feature optional
> --
>
> Key: HADOOP-15297
> URL: https://issues.apache.org/jira/browse/HADOOP-15297
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-15297-001.patchh, HADOOP-15297-002.patch, 
> HADOOP-15297-002.patch
>
>
> HADOOP-15273 shows how distcp doesn't handle non-HDFS filesystems with 
> checksums.
> Exposing etags as checksums (HADOOP-13282) breaks workflows which back up to 
> s3a.
> Rather than revert, I want to make it an option, off by default. Once we are 
> happy with distcp in the future, we can turn it on.
> Why an option? Because it lines up for a successor to distcp which saves src 
> and dest checksums to a file and can then verify whether or not files have 
> really changed. Currently distcp relies on the dest checksum algorithm being 
> the same as the src's for incremental updates, but if either of the stores 
> doesn't serve checksums, it silently downgrades to not checking.
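
To make the downgrade concrete, here is a sketch (not the distcp code itself) 
of the comparison a copy tool can do with FileSystem.getFileChecksum(); a null 
return from either store is the case where checking silently stops:

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ChecksumCompare {
  // True only when both stores serve comparable checksums and they match.
  static boolean sameChecksum(FileSystem srcFs, Path src,
                              FileSystem dstFs, Path dst) throws IOException {
    FileChecksum a = srcFs.getFileChecksum(src);
    FileChecksum b = dstFs.getFileChecksum(dst);
    if (a == null || b == null) {
      return false; // no checksum served; caller decides whether to re-copy
    }
    return a.equals(b);
  }
}
{code}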



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15297) Make S3A etag => checksum feature optional

2018-03-08 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16392043#comment-16392043
 ] 

Devaraj Das commented on HADOOP-15297:
--

Seems fine except for a minor issue: there is an empty test() method that you 
should remove.

> Make S3A etag => checksum feature optional
> --
>
> Key: HADOOP-15297
> URL: https://issues.apache.org/jira/browse/HADOOP-15297
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-15297-001.patchh
>
>
> HADOOP-15273 shows how distcp doesn't handle non-HDFS filesystems with 
> checksums.
> Exposing etags as checksums (HADOOP-13282) breaks workflows which back up to 
> s3a.
> Rather than revert, I want to make it an option, off by default. Once we are 
> happy with distcp in the future, we can turn it on.
> Why an option? Because it lines up for a successor to distcp which saves src 
> and dest checksums to a file and can then verify whether or not files have 
> really changed. Currently distcp relies on the dest checksum algorithm being 
> the same as the src's for incremental updates, but if either of the stores 
> doesn't serve checksums, it silently downgrades to not checking.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15277) remove .FluentPropertyBeanIntrospector from CLI operation log output

2018-03-08 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16391870#comment-16391870
 ] 

Devaraj Das commented on HADOOP-15277:
--

+1

> remove .FluentPropertyBeanIntrospector from CLI operation log output
> 
>
> Key: HADOOP-15277
> URL: https://issues.apache.org/jira/browse/HADOOP-15277
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: conf
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-15277-001.patch
>
>
> When hadoop metrics is started, a message about bean introspection appears.
> {code}
> 18/03/01 18:43:54 INFO beanutils.FluentPropertyBeanIntrospector: Error when 
> creating PropertyDescriptor for public final void 
> org.apache.commons.configuration2.AbstractConfiguration.setProperty(java.lang.String,java.lang.Object)!
>  Ignoring this property.
> {code}
> When using wasb or s3a, this message appears in the client logs, because 
> they both start metrics.
> I propose to raise the log level to ERROR for that class in log4j.properties.
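
A sketch of what that log4j.properties change could look like, assuming the 
logger is addressed by the class's fully qualified name:

{code}
# Silence the spurious INFO message from commons-beanutils bean introspection
log4j.logger.org.apache.commons.beanutils.FluentPropertyBeanIntrospector=ERROR
{code}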



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-9968) ProxyUsers does not work with NetGroups

2014-02-22 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-9968:


  Resolution: Fixed
   Fix Version/s: 2.5.0
Target Version/s: 2.5.0
  Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Benoy.

> ProxyUsers does not work with NetGroups
> ---
>
> Key: HADOOP-9968
> URL: https://issues.apache.org/jira/browse/HADOOP-9968
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Benoy Antony
>Assignee: Benoy Antony
> Fix For: 2.5.0
>
> Attachments: HADOOP-9968.patch, HADOOP-9968.patch, HADOOP-9968.patch, 
> hadoop-9968-1.2.patch
>
>
> It is possible to use NetGroups for ACLs. This requires specifying the 
> config property hadoop.security.group.mapping as 
> org.apache.hadoop.security.JniBasedUnixGroupsNetgroupMapping or 
> org.apache.hadoop.security.ShellBasedUnixGroupsNetgroupMapping.
> The authorization to proxy a user by another user is specified as a list of 
> groups via hadoop.proxyuser.<user>.groups. The group resolution does not 
> work if we are using NetGroups.
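
For reference, a sketch of the configuration involved, with a hypothetical 
superuser "oozie" and netgroup "mynetgroup" (the "@" prefix marking a netgroup 
is an assumption here, following the convention used for netgroups in ACLs):

{code}
<!-- core-site.xml sketch -->
<property>
  <name>hadoop.security.group.mapping</name>
  <value>org.apache.hadoop.security.JniBasedUnixGroupsNetgroupMapping</value>
</property>
<property>
  <name>hadoop.proxyuser.oozie.groups</name>
  <value>@mynetgroup</value>
</property>
{code}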



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-9968) ProxyUsers does not work with NetGroups

2014-02-21 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909009#comment-13909009
 ] 

Devaraj Das commented on HADOOP-9968:
-

Looks good to me. [~tucu00], please have a look if you can. I will commit it 
later today.

> ProxyUsers does not work with NetGroups
> ---
>
> Key: HADOOP-9968
> URL: https://issues.apache.org/jira/browse/HADOOP-9968
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Benoy Antony
>Assignee: Benoy Antony
> Attachments: HADOOP-9968.patch, HADOOP-9968.patch, HADOOP-9968.patch, 
> hadoop-9968-1.2.patch
>
>
> It is possible to use NetGroups for ACLs. This requires specifying the 
> config property hadoop.security.group.mapping as 
> org.apache.hadoop.security.JniBasedUnixGroupsNetgroupMapping or 
> org.apache.hadoop.security.ShellBasedUnixGroupsNetgroupMapping.
> The authorization to proxy a user by another user is specified as a list of 
> groups via hadoop.proxyuser.<user>.groups. The group resolution does not 
> work if we are using NetGroups.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10183) Allow use of UPN style principals in keytab files

2014-01-02 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13860979#comment-13860979
 ] 

Devaraj Das commented on HADOOP-10183:
--

I think we should validate this well before we commit. In particular, when you 
talk about the limit, it worries me.

> Allow use of UPN style principals in keytab files
> -
>
> Key: HADOOP-10183
> URL: https://issues.apache.org/jira/browse/HADOOP-10183
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.2.0
>Reporter: Mubashir Kazia
>Assignee: Mubashir Kazia
> Attachments: AppConnection.java, HADOOP-10183.patch, 
> HADOOP-10183.patch.1, Jaas.java, SaslTestClient.java, SaslTestServer.java, 
> hdfs.keytab, jaas-krb5.conf, krb5.conf
>
>
> Hadoop currently only allows SPN-style (e.g. hdfs/node.fqdn@REALM) principals 
> in keytab files in a cluster configured with Kerberos security. This causes 
> the burden of creating multiple principals and keytabs for each node of the 
> cluster. Active Directory allows the use of a single principal across 
> multiple hosts if the SPNs for the different hosts have been set up correctly 
> on the principal. With this scheme, the server side uses a keytab file with a 
> UPN-style (e.g. hdfs@REALM) principal for a given service for all the nodes 
> of the cluster. The client side requests service tickets with the SPN and 
> its own TGT, and Active Directory grants service tickets with the correct 
> secret.
> This will simplify the use of principals and keytab files for Active 
> Directory users, with one principal for each service across all the nodes of 
> the cluster.
> I have a patch to allow the use of UPN-style principals in Hadoop. The patch 
> will not affect the use of SPN-style principals. I couldn't figure out a way 
> to write test cases against MiniKDC, so I have included the Oracle/Sun sample 
> SASL server and client code, along with the configuration I used, to confirm 
> that this scheme works.
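
For illustration, a sketch of a service logging in with a UPN-style principal 
through the standard UGI API (realm and keytab path are hypothetical):

{code}
import org.apache.hadoop.security.UserGroupInformation;

public class UpnLogin {
  public static void main(String[] args) throws Exception {
    // UPN-style principal: no host component, one keytab for all nodes.
    UserGroupInformation.loginUserFromKeytab(
        "hdfs@EXAMPLE.COM", "/etc/security/keytabs/hdfs.keytab");
    System.out.println("Logged in as: "
        + UserGroupInformation.getLoginUser().getUserName());
  }
}
{code}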



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10183) Allow use of UPN style principals in keytab files

2014-01-02 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13860842#comment-13860842
 ] 

Devaraj Das commented on HADOOP-10183:
--

When we did this initially in Hadoop (sharing the same principal/keytab between 
DataNodes, like dn@REALM), we ran into issues where the NameNode would reject 
simultaneous authentication requests from DataNodes, assuming that someone was 
trying to do a replay attack. This was noticeable at cluster startup, when all 
the datanodes would try to authenticate themselves.

(Thinking aloud) What you plan to do can be supported by having the same 
keytab/principal on all the hosts without any code change, no?

> Allow use of UPN style principals in keytab files
> -
>
> Key: HADOOP-10183
> URL: https://issues.apache.org/jira/browse/HADOOP-10183
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.2.0
>Reporter: Mubashir Kazia
>Assignee: Mubashir Kazia
> Attachments: AppConnection.java, HADOOP-10183.patch, 
> HADOOP-10183.patch.1, Jaas.java, SaslTestClient.java, SaslTestServer.java, 
> hdfs.keytab, jaas-krb5.conf, krb5.conf
>
>
> Hadoop currently only allows SPN-style (e.g. hdfs/node.fqdn@REALM) principals 
> in keytab files in a cluster configured with Kerberos security. This causes 
> the burden of creating multiple principals and keytabs for each node of the 
> cluster. Active Directory allows the use of a single principal across 
> multiple hosts if the SPNs for the different hosts have been set up correctly 
> on the principal. With this scheme, the server side uses a keytab file with a 
> UPN-style (e.g. hdfs@REALM) principal for a given service for all the nodes 
> of the cluster. The client side requests service tickets with the SPN and 
> its own TGT, and Active Directory grants service tickets with the correct 
> secret.
> This will simplify the use of principals and keytab files for Active 
> Directory users, with one principal for each service across all the nodes of 
> the cluster.
> I have a patch to allow the use of UPN-style principals in Hadoop. The patch 
> will not affect the use of SPN-style principals. I couldn't figure out a way 
> to write test cases against MiniKDC, so I have included the Oracle/Sun sample 
> SASL server and client code, along with the configuration I used, to confirm 
> that this scheme works.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-9968) ProxyUsers does not work with NetGroups

2013-09-26 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13779039#comment-13779039
 ] 

Devaraj Das commented on HADOOP-9968:
-

Looks good to me. +1. 
Please post a patch for trunk as well.

> ProxyUsers does not work with NetGroups
> ---
>
> Key: HADOOP-9968
> URL: https://issues.apache.org/jira/browse/HADOOP-9968
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Benoy Antony
>Assignee: Benoy Antony
> Attachments: hadoop-9968-1.2.patch, HADOOP-9968.patch
>
>
> It is possible to use NetGroups for ACLs. This requires specifying the 
> config property hadoop.security.group.mapping as 
> org.apache.hadoop.security.JniBasedUnixGroupsNetgroupMapping or 
> org.apache.hadoop.security.ShellBasedUnixGroupsNetgroupMapping.
> The authorization to proxy a user by another user is specified as a list of 
> groups via hadoop.proxyuser.<user>.groups. The group resolution does not 
> work if we are using NetGroups.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9421) Convert SASL to use ProtoBuf and provide negotiation capabilities

2013-07-01 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697366#comment-13697366
 ] 

Devaraj Das commented on HADOOP-9421:
-

Daryn / Luke, could one of you please write up a summary one last time? (The 
HBase dev community is considering this stuff as well.)

> Convert SASL to use ProtoBuf and provide negotiation capabilities
> -
>
> Key: HADOOP-9421
> URL: https://issues.apache.org/jira/browse/HADOOP-9421
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.0.3-alpha
>Reporter: Sanjay Radia
>Assignee: Daryn Sharp
>Priority: Blocker
> Fix For: 3.0.0, 2.1.0-beta, 2.2.0
>
> Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
> HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, 
> HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421-v2-demo.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9421) Add full length to SASL response to allow non-blocking readers

2013-03-27 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13616078#comment-13616078
 ] 

Devaraj Das commented on HADOOP-9421:
-

This makes sense to me.

> Add full length to SASL response to allow non-blocking readers
> --
>
> Key: HADOOP-9421
> URL: https://issues.apache.org/jira/browse/HADOOP-9421
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sanjay Radia
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HADOOP-9299) kerberos name resolution is kicking in even when kerberos is not configured

2013-03-18 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das reassigned HADOOP-9299:
---

Assignee: Daryn Sharp  (was: Devaraj Das)

Sorry, didn't mean to assign this to myself.

> kerberos name resolution is kicking in even when kerberos is not configured
> ---
>
> Key: HADOOP-9299
> URL: https://issues.apache.org/jira/browse/HADOOP-9299
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.3-alpha
>Reporter: Roman Shaposhnik
>Assignee: Daryn Sharp
>Priority: Blocker
> Fix For: 3.0.0, 2.0.5-beta, 2.0.4-alpha
>
> Attachments: HADOOP-9299-branch2.0.4.patch, HADOOP-9299.patch, 
> HADOOP-9299.patch
>
>
> Here's what I'm observing on a fully distributed cluster deployed via Bigtop 
> from the RC0 2.0.3-alpha tarball:
> {noformat}
> 528077-oozie-tucu-W@mr-node] Error starting action [mr-node]. ErrorType 
> [TRANSIENT], ErrorCode [JA009], Message [JA009: 
> org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
> No rules applied to yarn/localhost@LOCALREALM
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.<init>(AbstractDelegationTokenIdentifier.java:68)
> at 
> org.apache.hadoop.mapreduce.v2.api.MRDelegationTokenIdentifier.<init>(MRDelegationTokenIdentifier.java:51)
> at 
> org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.getDelegationToken(HistoryClientService.java:336)
> at 
> org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.getDelegationToken(MRClientProtocolPBServiceImpl.java:210)
> at 
> org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:240)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
> Caused by: 
> org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
> No rules applied to yarn/localhost@LOCALREALM
> at 
> org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:378)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.<init>(AbstractDelegationTokenIdentifier.java:66)
> ... 12 more
> ]
> {noformat}
> This is submitting a mapreduce job via Oozie 3.3.1. The reason I think this 
> is a Hadoop issue rather than the oozie one is because when I hack 
> /etc/krb5.conf to be:
> {noformat}
> [libdefaults]
>ticket_lifetime = 600
>default_realm = LOCALHOST
>default_tkt_enctypes = des3-hmac-sha1 des-cbc-crc
>default_tgs_enctypes = des3-hmac-sha1 des-cbc-crc
> [realms]
>LOCALHOST = {
>kdc = localhost:88
>default_domain = .local
>}
> [domain_realm]
>.local = LOCALHOST
> [logging]
>kdc = FILE:/var/log/krb5kdc.log
>admin_server = FILE:/var/log/kadmin.log
>default = FILE:/var/log/krb5lib.log
> {noformat}
> The issue goes away. 
> Now, once again -- the kerberos auth is NOT configured for Hadoop, hence it 
> should NOT pay attention to /etc/krb5.conf to begin with.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HADOOP-9299) kerberos name resolution is kicking in even when kerberos is not configured

2013-03-18 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das reassigned HADOOP-9299:
---

Assignee: Devaraj Das  (was: Daryn Sharp)

> kerberos name resolution is kicking in even when kerberos is not configured
> ---
>
> Key: HADOOP-9299
> URL: https://issues.apache.org/jira/browse/HADOOP-9299
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.3-alpha
>Reporter: Roman Shaposhnik
>Assignee: Devaraj Das
>Priority: Blocker
> Fix For: 3.0.0, 2.0.5-beta, 2.0.4-alpha
>
> Attachments: HADOOP-9299-branch2.0.4.patch, HADOOP-9299.patch, 
> HADOOP-9299.patch
>
>
> Here's what I'm observing on a fully distributed cluster deployed via Bigtop 
> from the RC0 2.0.3-alpha tarball:
> {noformat}
> 528077-oozie-tucu-W@mr-node] Error starting action [mr-node]. ErrorType 
> [TRANSIENT], ErrorCode [JA009], Message [JA009: 
> org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
> No rules applied to yarn/localhost@LOCALREALM
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.<init>(AbstractDelegationTokenIdentifier.java:68)
> at 
> org.apache.hadoop.mapreduce.v2.api.MRDelegationTokenIdentifier.<init>(MRDelegationTokenIdentifier.java:51)
> at 
> org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.getDelegationToken(HistoryClientService.java:336)
> at 
> org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.getDelegationToken(MRClientProtocolPBServiceImpl.java:210)
> at 
> org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:240)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
> Caused by: 
> org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
> No rules applied to yarn/localhost@LOCALREALM
> at 
> org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:378)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.<init>(AbstractDelegationTokenIdentifier.java:66)
> ... 12 more
> ]
> {noformat}
> This is submitting a mapreduce job via Oozie 3.3.1. The reason I think this 
> is a Hadoop issue rather than the oozie one is because when I hack 
> /etc/krb5.conf to be:
> {noformat}
> [libdefaults]
>ticket_lifetime = 600
>default_realm = LOCALHOST
>default_tkt_enctypes = des3-hmac-sha1 des-cbc-crc
>default_tgs_enctypes = des3-hmac-sha1 des-cbc-crc
> [realms]
>LOCALHOST = {
>kdc = localhost:88
>default_domain = .local
>}
> [domain_realm]
>.local = LOCALHOST
> [logging]
>kdc = FILE:/var/log/krb5kdc.log
>admin_server = FILE:/var/log/kadmin.log
>default = FILE:/var/log/krb5lib.log
> {noformat}
> The issue goes away. 
> Now, once again -- the kerberos auth is NOT configured for Hadoop, hence it 
> should NOT pay attention to /etc/krb5.conf to begin with.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-7932) HA : Make client connection retries on socket time outs configurable.

2013-03-07 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-7932:


Attachment: 7932-3.patch

Patch updated. This is after going through a couple of reviews on RB.

> HA : Make client connection retries on socket time outs configurable.
> -
>
> Key: HADOOP-7932
> URL: https://issues.apache.org/jira/browse/HADOOP-7932
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ha, ipc
>Affects Versions: HA Branch (HDFS-1623)
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Fix For: HA Branch (HDFS-1623)
>
> Attachments: HADOOP-7932.patch, HADOOP-7932.patch
>
>
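
For context, a sketch of the kind of knob this introduces; the property name 
below is an assumption, based on the name this setting appears under in later 
releases:

{code}
<!-- core-site.xml sketch -->
<property>
  <name>ipc.client.connect.max.retries.on.timeouts</name>
  <value>45</value>
  <description>Number of times the IPC client retries a connection
  attempt that fails with a socket timeout.</description>
</property>
{code}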


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-7932) HA : Make client connection retries on socket time outs configurable.

2013-03-07 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-7932:


Attachment: (was: 7932-3.patch)

> HA : Make client connection retries on socket time outs configurable.
> -
>
> Key: HADOOP-7932
> URL: https://issues.apache.org/jira/browse/HADOOP-7932
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ha, ipc
>Affects Versions: HA Branch (HDFS-1623)
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
> Fix For: HA Branch (HDFS-1623)
>
> Attachments: HADOOP-7932.patch, HADOOP-7932.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9163) The rpc msg in ProtobufRpcEngine.proto should be moved out to avoid an extra copy

2012-12-27 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13540124#comment-13540124
 ] 

Devaraj Das commented on HADOOP-9163:
-

Just posted a patch on the HBase jira. I am currently writing the RPC argument 
directly (and don't have any wrapper). The patch is not tested yet.

> The rpc msg in  ProtobufRpcEngine.proto should be moved out to avoid an extra 
> copy
> --
>
> Key: HADOOP-9163
> URL: https://issues.apache.org/jira/browse/HADOOP-9163
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sanjay Radia
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9163) The rpc msg in ProtobufRpcEngine.proto should be moved out to avoid an extra copy

2012-12-27 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13540067#comment-13540067
 ] 

Devaraj Das commented on HADOOP-9163:
-

[~sanjay.radia], yes, most of the ProtobufRpcEngine work in HBase has been 
derived from Hadoop RPC. On constructRpcRequest creating a ByteString out of 
the RPC arg: yes, this will be removed as part of HBASE-5945 (I am working on 
a patch for this, extending Todd's original patch). [In HBase, the 
compatibility issue doesn't exist, since no release has been made with the 
proto stuff yet.]
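
A minimal sketch of the idea under discussion: write the RPC header and the 
argument as two length-delimited protobuf messages, rather than serializing 
the argument into a bytes field of a wrapper message (which is what forces the 
extra copy). The message types here are placeholders:

{code}
import java.io.IOException;
import java.io.OutputStream;
import com.google.protobuf.Message;

public class DelimitedRpcWrite {
  // Header and request go on the wire back to back, each length-prefixed,
  // so the request bytes are never copied into an intermediate ByteString.
  static void writeRequest(OutputStream out, Message header, Message request)
      throws IOException {
    header.writeDelimitedTo(out);
    request.writeDelimitedTo(out);
  }
}
{code}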

> The rpc msg in  ProtobufRpcEngine.proto should be moved out to avoid an extra 
> copy
> --
>
> Key: HADOOP-9163
> URL: https://issues.apache.org/jira/browse/HADOOP-9163
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sanjay Radia
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8999) SASL negotiation is flawed

2012-11-13 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13496386#comment-13496386
 ] 

Devaraj Das commented on HADOOP-8999:
-

I haven't gone through the patch and what it solves, but is this problem 
relevant to branch-1?

> SASL negotiation is flawed
> --
>
> Key: HADOOP-8999
> URL: https://issues.apache.org/jira/browse/HADOOP-8999
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 3.0.0, 2.0.3-alpha
>
> Attachments: HADOOP-8999.patch
>
>
> The RPC protocol used for SASL negotiation is flawed. The server's RPC 
> response contains the next SASL challenge token, but a SASL server can return 
> null ("I'm done") or an N-byte challenge. The server currently will not 
> send an RPC success response to the client if the SASL server returns null, 
> which causes the client to hang until it times out.
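
For illustration, a sketch of the server-side step using the standard 
javax.security.sasl API; the flaw described above is the missing success reply 
in the null-challenge branch:

{code}
import javax.security.sasl.SaslException;
import javax.security.sasl.SaslServer;

public class SaslStep {
  // One negotiation round; transport is omitted in this sketch.
  static void step(SaslServer server, byte[] clientToken) throws SaslException {
    byte[] challenge = server.evaluateResponse(clientToken);
    if (challenge != null) {
      // send the challenge back to the client and wait for its next token
    } else if (server.isComplete()) {
      // the server has nothing more to send: an RPC success response must
      // still go back, or the client blocks until it times out
    }
  }
}
{code}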

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8855) SSL-based image transfer does not work when Kerberos is disabled

2012-09-26 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13464180#comment-13464180
 ] 

Devaraj Das commented on HADOOP-8855:
-

Good find! [~tlipcon], quick question - this patch will work even on JDKs that 
have no inherent support for SPNEGO, right?

> SSL-based image transfer does not work when Kerberos is disabled
> 
>
> Key: HADOOP-8855
> URL: https://issues.apache.org/jira/browse/HADOOP-8855
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.0.0, 2.0.2-alpha
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Minor
> Attachments: hadoop-8855.txt, hadoop-8855.txt
>
>
> In SecurityUtil.openSecureHttpConnection, we first check 
> {{UserGroupInformation.isSecurityEnabled()}}. However, this only checks the 
> kerberos config, which is independent of {{hadoop.ssl.enabled}}. Instead, we 
> should check {{HttpConfig.isSecure()}}.
> Credit to Wing Yew Poon for discovering this bug.
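
A sketch of the proposed check, using the two methods named in the description:

{code}
import org.apache.hadoop.http.HttpConfig;

public class ImageTransferScheme {
  // UserGroupInformation.isSecurityEnabled() only reflects the Kerberos
  // config; HttpConfig.isSecure() reflects hadoop.ssl.enabled.
  static String scheme() {
    return HttpConfig.isSecure() ? "https" : "http";
  }
}
{code}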

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8225) DistCp fails when invoked by Oozie

2012-08-23 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13440493#comment-13440493
 ] 

Devaraj Das commented on HADOOP-8225:
-

The patch looks good on the tokens side. +1.

> DistCp fails when invoked by Oozie
> --
>
> Key: HADOOP-8225
> URL: https://issues.apache.org/jira/browse/HADOOP-8225
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.23.1, 2.0.0-alpha, 3.0.0
>Reporter: Mithun Radhakrishnan
>Assignee: Daryn Sharp
>Priority: Blocker
> Attachments: HADOOP-8225.patch, HADOOP-8225.patch, HADOOP-8225.patch, 
> HADOOP-8225.patch, HADOOP-8225.patch
>
>
> When DistCp is invoked through a proxy-user (e.g. through Oozie), the 
> delegation-token-store isn't picked up by DistCp correctly. One sees failures 
> such as:
> ERROR [main] org.apache.hadoop.tools.DistCp: Couldn't complete DistCp
> operation: 
> java.lang.SecurityException: Intercepted System.exit(-999)
> at
> org.apache.oozie.action.hadoop.LauncherSecurityManager.checkExit(LauncherMapper.java:651)
> at java.lang.Runtime.exit(Runtime.java:88)
> at java.lang.System.exit(System.java:904)
> at org.apache.hadoop.tools.DistCp.main(DistCp.java:357)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at
> org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:394)
> at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:399)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:334)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:147)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1177)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:142)
> Looking over the DistCp code, one sees that HADOOP_TOKEN_FILE_LOCATION isn't 
> being copied to mapreduce.job.credentials.binary, in the job-conf. I'll post 
> a patch for this shortly.
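
The fix described above, sketched in code form (both names are taken from the 
description):

{code}
import org.apache.hadoop.conf.Configuration;

public class TokenFileSetup {
  // Propagate the delegation-token cache written by the launcher (e.g.
  // Oozie) into the job conf so the DistCp job picks it up.
  static void propagateTokenFile(Configuration conf) {
    String tokenFile = System.getenv("HADOOP_TOKEN_FILE_LOCATION");
    if (tokenFile != null) {
      conf.set("mapreduce.job.credentials.binary", tokenFile);
    }
  }
}
{code}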

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8225) DistCp fails when invoked by Oozie

2012-08-23 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13440459#comment-13440459
 ] 

Devaraj Das commented on HADOOP-8225:
-

Will take a look. 

> DistCp fails when invoked by Oozie
> --
>
> Key: HADOOP-8225
> URL: https://issues.apache.org/jira/browse/HADOOP-8225
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.23.1, 2.0.0-alpha, 3.0.0
>Reporter: Mithun Radhakrishnan
>Assignee: Daryn Sharp
>Priority: Blocker
> Attachments: HADOOP-8225.patch, HADOOP-8225.patch, HADOOP-8225.patch, 
> HADOOP-8225.patch, HADOOP-8225.patch
>
>
> When DistCp is invoked through a proxy-user (e.g. through Oozie), the 
> delegation-token-store isn't picked up by DistCp correctly. One sees failures 
> such as:
> ERROR [main] org.apache.hadoop.tools.DistCp: Couldn't complete DistCp
> operation: 
> java.lang.SecurityException: Intercepted System.exit(-999)
> at
> org.apache.oozie.action.hadoop.LauncherSecurityManager.checkExit(LauncherMapper.java:651)
> at java.lang.Runtime.exit(Runtime.java:88)
> at java.lang.System.exit(System.java:904)
> at org.apache.hadoop.tools.DistCp.main(DistCp.java:357)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at
> org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:394)
> at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:399)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:334)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:147)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1177)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:142)
> Looking over the DistCp code, one sees that HADOOP_TOKEN_FILE_LOCATION isn't 
> being copied to mapreduce.job.credentials.binary, in the job-conf. I'll post 
> a patch for this shortly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8225) DistCp fails when invoked by Oozie

2012-08-22 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13439827#comment-13439827
 ] 

Devaraj Das commented on HADOOP-8225:
-

This approach feels right.

> DistCp fails when invoked by Oozie
> --
>
> Key: HADOOP-8225
> URL: https://issues.apache.org/jira/browse/HADOOP-8225
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.23.1
>Reporter: Mithun Radhakrishnan
>Assignee: Daryn Sharp
> Attachments: HADOOP-8225.patch, HADOOP-8225.patch, HADOOP-8225.patch, 
> HADOOP-8225.patch
>
>
> When DistCp is invoked through a proxy-user (e.g. through Oozie), the 
> delegation-token-store isn't picked up by DistCp correctly. One sees failures 
> such as:
> ERROR [main] org.apache.hadoop.tools.DistCp: Couldn't complete DistCp
> operation: 
> java.lang.SecurityException: Intercepted System.exit(-999)
> at
> org.apache.oozie.action.hadoop.LauncherSecurityManager.checkExit(LauncherMapper.java:651)
> at java.lang.Runtime.exit(Runtime.java:88)
> at java.lang.System.exit(System.java:904)
> at org.apache.hadoop.tools.DistCp.main(DistCp.java:357)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at
> org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:394)
> at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:399)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:334)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:147)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1177)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:142)
> Looking over the DistCp code, one sees that HADOOP_TOKEN_FILE_LOCATION isn't 
> being copied to mapreduce.job.credentials.binary, in the job-conf. I'll post 
> a patch for this shortly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8652) Change website to reflect new u...@hadoop.apache.org mailing list

2012-08-06 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13429621#comment-13429621
 ] 

Devaraj Das commented on HADOOP-8652:
-

Looks good.

> Change website to reflect new u...@hadoop.apache.org mailing list
> -
>
> Key: HADOOP-8652
> URL: https://issues.apache.org/jira/browse/HADOOP-8652
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Arun C Murthy
>Assignee: Arun C Murthy
> Attachments: HADOOP-8652.patch
>
>
> Change website to reflect new u...@hadoop.apache.org mailing list since we've 
> merged the user lists per discussion on general@: http://s.apache.org/hv

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8552) Conflict: Same security.log.file for multiple users.

2012-07-16 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13415412#comment-13415412
 ] 

Devaraj Das commented on HADOOP-8552:
-

Yes.

> Conflict: Same security.log.file for multiple users. 
> -
>
> Key: HADOOP-8552
> URL: https://issues.apache.org/jira/browse/HADOOP-8552
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf, security
>Affects Versions: 1.0.3, 2.0.0-alpha
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: HADOOP-8552_branch1.patch, HADOOP-8552_branch2.patch
>
>
> In log4j.properties, hadoop.security.log.file is set to SecurityAuth.audit. 
> In the presence of multiple users, this can lead to a potential conflict.
> Adding the username to the log file name would avoid this scenario.
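
A sketch of the proposed log4j.properties change, assuming the user.name 
system property is used to disambiguate:

{code}
# One audit file per user instead of a shared SecurityAuth.audit
hadoop.security.log.file=SecurityAuth-${user.name}.audit
{code}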

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8552) Conflict: Same security.log.file for multiple users.

2012-07-12 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13413213#comment-13413213
 ] 

Devaraj Das commented on HADOOP-8552:
-

Hi Karthik, is this on the client or on the server side? (Guessing it's on the 
client... please confirm.) In general, the audit log stuff doesn't make sense 
on the client side. It's meant to be used on the server side only (and in 
deployments I know about, security audit logging is turned off on the client 
side). 
Your patch will work, though. But I'll note that it might introduce 
compatibility issues due to the change of the log file's name (if someone is 
collecting logs based on file names, etc.).

> Conflict: Same security.log.file for multiple users. 
> -
>
> Key: HADOOP-8552
> URL: https://issues.apache.org/jira/browse/HADOOP-8552
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf, security
>Affects Versions: 1.0.3, 2.0.0-alpha
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: HADOOP-8552_branch1.patch, HADOOP-8552_branch2.patch
>
>
> In log4j.properties, hadoop.security.log.file is set to SecurityAuth.audit. 
> In the presence of multiple users, this can lead to a potential conflict.
> Adding the username to the log file name would avoid this scenario.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-6947) Kerberos relogin should set refreshKrb5Config to true

2012-06-06 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-6947:


Affects Version/s: 1.0.0
Fix Version/s: 1.1.0

I am going to commit this to branch-1.

> Kerberos relogin should set refreshKrb5Config to true
> -
>
> Key: HADOOP-6947
> URL: https://issues.apache.org/jira/browse/HADOOP-6947
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.0.0, 0.22.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Fix For: 1.1.0, 0.22.0
>
> Attachments: hadoop-6947-branch20.txt, hadoop-6947.txt
>
>
> In working on securing a daemon that uses two different principals from 
> different threads, I found that I wasn't able to log in from a second keytab 
> after I'd logged in from the first. This is because we don't set 
> refreshKrb5Config in the Configuration for the Krb5LoginModule; hence it 
> won't switch over to the correct keytab file if it's different from the first.
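
For illustration, a sketch of the JAAS options in question for the 
Krb5LoginModule (keytab path and principal would come from the caller):

{code}
import java.util.HashMap;
import java.util.Map;

public class Krb5Options {
  static Map<String, String> options(String keytab, String principal) {
    Map<String, String> opts = new HashMap<>();
    opts.put("useKeyTab", "true");
    opts.put("keyTab", keytab);
    opts.put("principal", principal);
    opts.put("storeKey", "true");
    // Without this, a second login from a different keytab can silently
    // reuse the Kerberos configuration cached by the first login.
    opts.put("refreshKrb5Config", "true");
    return opts;
  }
}
{code}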

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8346) Changes to support Kerberos with non Sun JVM (HADOOP-6941) broke SPNEGO

2012-05-03 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-8346:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed everywhere. Thanks, Alejandro, for the test help.

> Changes to support Kerberos with non Sun JVM (HADOOP-6941) broke SPNEGO
> ---
>
> Key: HADOOP-8346
> URL: https://issues.apache.org/jira/browse/HADOOP-8346
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.0.3, 2.0.0, 3.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Devaraj Das
>Priority: Blocker
> Fix For: 1.0.3
>
> Attachments: 8346-trunk.patch, 8346-trunk.patch, debugger.png
>
>
> Before HADOOP-6941, the hadoop-auth testcases with Kerberos ON pass (*mvn 
> test -PtestKerberos*); after HADOOP-6941, the tests fail with the error below.
> Doing some IDE debugging, I've found out that the changes in HADOOP-6941 are 
> making the JVM Kerberos libraries append an extra element to the Kerberos 
> principal of the server (on the client side, when creating the token), so 
> *HTTP/localhost* ends up being *HTTP/localhost/localhost*. Then, when 
> contacting the KDC to get the granting ticket, the server principal is 
> unknown.
> {code}
> testAuthenticationPost(org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator)
>   Time elapsed: 0.053 sec  <<< ERROR!
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Server not 
> found in Kerberos database (7) - UNKNOWN_SERVER)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:236)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:142)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:217)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatorTestCase._testAuthentication(AuthenticatorTestCase.java:124)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator$2.call(TestKerberosAuthenticator.java:77)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator$2.call(TestKerberosAuthenticator.java:74)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils$1.run(KerberosTestUtils.java:111)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils.doAs(KerberosTestUtils.java:108)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils.doAsClient(KerberosTestUtils.java:124)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator.testAuthenticationPost(TestKerberosAuthenticator.java:74)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at junit.framework.TestCase.runTest(TestCase.java:168)
>   at junit.framework.TestCase.runBare(TestCase.java:134)
>   at junit.framework.TestResult$1.protect(TestResult.java:110)
>   at junit.framework.TestResult.runProtected(TestResult.java:128)
>   at junit.framework.TestResult.run(TestResult.java:113)
>   at junit.framework.TestCase.run(TestCase.java:124)
>   at junit.framework.TestSuite.runTest(TestSuite.java:243)
>   at junit.framework.TestSuite.run(TestSuite.java:238)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:236)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:134)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:113)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.

[jira] [Updated] (HADOOP-8346) Changes to support Kerberos with non Sun JVM (HADOOP-6941) broke SPNEGO

2012-05-03 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-8346:


Fix Version/s: (was: 2.0.0)
   1.0.3
Affects Version/s: 1.0.3
   Status: Patch Available  (was: Open)

> Changes to support Kerberos with non Sun JVM (HADOOP-6941) broke SPNEGO
> ---
>
> Key: HADOOP-8346
> URL: https://issues.apache.org/jira/browse/HADOOP-8346
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 1.0.3, 2.0.0, 3.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Devaraj Das
>Priority: Blocker
> Fix For: 1.0.3
>
> Attachments: 8346-trunk.patch, 8346-trunk.patch, debugger.png
>
>
> Before HADOOP-6941, the hadoop-auth test cases with Kerberos ON pass (*mvn 
> test -PtestKerberos*); after HADOOP-6941 the tests fail with the error below.
> Doing some IDE debugging, I found that the changes in HADOOP-6941 make the 
> JVM Kerberos libraries append an extra element to the Kerberos principal of 
> the server (on the client side, when creating the token), so 
> *HTTP/localhost* ends up as *HTTP/localhost/localhost*. Then, when the KDC 
> is contacted to get a ticket for that server principal, the principal is 
> unknown.
> {code}
> testAuthenticationPost(org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator)
>   Time elapsed: 0.053 sec  <<< ERROR!
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Server not 
> found in Kerberos database (7) - UNKNOWN_SERVER)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:236)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:142)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:217)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatorTestCase._testAuthentication(AuthenticatorTestCase.java:124)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator$2.call(TestKerberosAuthenticator.java:77)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator$2.call(TestKerberosAuthenticator.java:74)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils$1.run(KerberosTestUtils.java:111)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils.doAs(KerberosTestUtils.java:108)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils.doAsClient(KerberosTestUtils.java:124)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator.testAuthenticationPost(TestKerberosAuthenticator.java:74)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at junit.framework.TestCase.runTest(TestCase.java:168)
>   at junit.framework.TestCase.runBare(TestCase.java:134)
>   at junit.framework.TestResult$1.protect(TestResult.java:110)
>   at junit.framework.TestResult.runProtected(TestResult.java:128)
>   at junit.framework.TestResult.run(TestResult.java:113)
>   at junit.framework.TestCase.run(TestCase.java:124)
>   at junit.framework.TestSuite.runTest(TestSuite.java:243)
>   at junit.framework.TestSuite.run(TestSuite.java:238)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:236)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:134)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:113)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderF

[jira] [Updated] (HADOOP-8346) Changes to support Kerberos with non Sun JVM (HADOOP-6941) broke SPNEGO

2012-05-03 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-8346:


Attachment: 8346-trunk.patch

Patch with some of the nits from Alejandro addressed. Currently, everywhere 
we just check for IBM's JDK, and in the 'else' branch we assume Oracle's. I 
agree that it's better to do an explicit check for Oracle's, but I think that 
can be a follow-up (change everywhere in one sweep).

> Changes to support Kerberos with non Sun JVM (HADOOP-6941) broke SPNEGO
> ---
>
> Key: HADOOP-8346
> URL: https://issues.apache.org/jira/browse/HADOOP-8346
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.0, 3.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Devaraj Das
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: 8346-trunk.patch, 8346-trunk.patch, debugger.png
>
>
> Before HADOOP-6941, the hadoop-auth test cases with Kerberos ON pass (*mvn 
> test -PtestKerberos*); after HADOOP-6941 the tests fail with the error below.
> Doing some IDE debugging, I found that the changes in HADOOP-6941 make the 
> JVM Kerberos libraries append an extra element to the Kerberos principal of 
> the server (on the client side, when creating the token), so 
> *HTTP/localhost* ends up as *HTTP/localhost/localhost*. Then, when the KDC 
> is contacted to get a ticket for that server principal, the principal is 
> unknown.
> {code}
> testAuthenticationPost(org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator)
>   Time elapsed: 0.053 sec  <<< ERROR!
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Server not 
> found in Kerberos database (7) - UNKNOWN_SERVER)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:236)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:142)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:217)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatorTestCase._testAuthentication(AuthenticatorTestCase.java:124)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator$2.call(TestKerberosAuthenticator.java:77)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator$2.call(TestKerberosAuthenticator.java:74)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils$1.run(KerberosTestUtils.java:111)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils.doAs(KerberosTestUtils.java:108)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils.doAsClient(KerberosTestUtils.java:124)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator.testAuthenticationPost(TestKerberosAuthenticator.java:74)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at junit.framework.TestCase.runTest(TestCase.java:168)
>   at junit.framework.TestCase.runBare(TestCase.java:134)
>   at junit.framework.TestResult$1.protect(TestResult.java:110)
>   at junit.framework.TestResult.runProtected(TestResult.java:128)
>   at junit.framework.TestResult.run(TestResult.java:113)
>   at junit.framework.TestCase.run(TestCase.java:124)
>   at junit.framework.TestSuite.runTest(TestSuite.java:243)
>   at junit.framework.TestSuite.run(TestSuite.java:238)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:236)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:134)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:113)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.maven.s

[jira] [Commented] (HADOOP-8346) Changes to support Kerberos with non Sun JVM (HADOOP-6941) broke SPNEGO

2012-05-02 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13267195#comment-13267195
 ] 

Devaraj Das commented on HADOOP-8346:
-

Alejandro, can you please check whether the tests pass with this patch? Thanks!

> Changes to support Kerberos with non Sun JVM (HADOOP-6941) broke SPNEGO
> ---
>
> Key: HADOOP-8346
> URL: https://issues.apache.org/jira/browse/HADOOP-8346
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.0, 3.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Devaraj Das
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: 8346-trunk.patch, debugger.png
>
>
> Before HADOOP-6941, the hadoop-auth test cases with Kerberos ON pass (*mvn 
> test -PtestKerberos*); after HADOOP-6941 the tests fail with the error below.
> Doing some IDE debugging, I found that the changes in HADOOP-6941 make the 
> JVM Kerberos libraries append an extra element to the Kerberos principal of 
> the server (on the client side, when creating the token), so 
> *HTTP/localhost* ends up as *HTTP/localhost/localhost*. Then, when the KDC 
> is contacted to get a ticket for that server principal, the principal is 
> unknown.
> {code}
> testAuthenticationPost(org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator)
>   Time elapsed: 0.053 sec  <<< ERROR!
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Server not 
> found in Kerberos database (7) - UNKNOWN_SERVER)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:236)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:142)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:217)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatorTestCase._testAuthentication(AuthenticatorTestCase.java:124)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator$2.call(TestKerberosAuthenticator.java:77)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator$2.call(TestKerberosAuthenticator.java:74)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils$1.run(KerberosTestUtils.java:111)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils.doAs(KerberosTestUtils.java:108)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils.doAsClient(KerberosTestUtils.java:124)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator.testAuthenticationPost(TestKerberosAuthenticator.java:74)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at junit.framework.TestCase.runTest(TestCase.java:168)
>   at junit.framework.TestCase.runBare(TestCase.java:134)
>   at junit.framework.TestResult$1.protect(TestResult.java:110)
>   at junit.framework.TestResult.runProtected(TestResult.java:128)
>   at junit.framework.TestResult.run(TestResult.java:113)
>   at junit.framework.TestCase.run(TestCase.java:124)
>   at junit.framework.TestSuite.runTest(TestSuite.java:243)
>   at junit.framework.TestSuite.run(TestSuite.java:238)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:236)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:134)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:113)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
>   a

[jira] [Updated] (HADOOP-8346) Changes to support Kerberos with non Sun JVM (HADOOP-6941) broke SPNEGO

2012-05-02 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-8346:


Attachment: 8346-trunk.patch

Reverted to the original OID names.

> Changes to support Kerberos with non Sun JVM (HADOOP-6941) broke SPNEGO
> ---
>
> Key: HADOOP-8346
> URL: https://issues.apache.org/jira/browse/HADOOP-8346
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.0, 3.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Devaraj Das
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: 8346-trunk.patch, debugger.png
>
>
> Before HADOOP-6941, the hadoop-auth test cases with Kerberos ON pass (*mvn 
> test -PtestKerberos*); after HADOOP-6941 the tests fail with the error below.
> Doing some IDE debugging, I found that the changes in HADOOP-6941 make the 
> JVM Kerberos libraries append an extra element to the Kerberos principal of 
> the server (on the client side, when creating the token), so 
> *HTTP/localhost* ends up as *HTTP/localhost/localhost*. Then, when the KDC 
> is contacted to get a ticket for that server principal, the principal is 
> unknown.
> {code}
> testAuthenticationPost(org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator)
>   Time elapsed: 0.053 sec  <<< ERROR!
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Server not 
> found in Kerberos database (7) - UNKNOWN_SERVER)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:236)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:142)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:217)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatorTestCase._testAuthentication(AuthenticatorTestCase.java:124)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator$2.call(TestKerberosAuthenticator.java:77)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator$2.call(TestKerberosAuthenticator.java:74)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils$1.run(KerberosTestUtils.java:111)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils.doAs(KerberosTestUtils.java:108)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils.doAsClient(KerberosTestUtils.java:124)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator.testAuthenticationPost(TestKerberosAuthenticator.java:74)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at junit.framework.TestCase.runTest(TestCase.java:168)
>   at junit.framework.TestCase.runBare(TestCase.java:134)
>   at junit.framework.TestResult$1.protect(TestResult.java:110)
>   at junit.framework.TestResult.runProtected(TestResult.java:128)
>   at junit.framework.TestResult.run(TestResult.java:113)
>   at junit.framework.TestCase.run(TestCase.java:124)
>   at junit.framework.TestSuite.runTest(TestSuite.java:243)
>   at junit.framework.TestSuite.run(TestSuite.java:238)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:236)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:134)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:113)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory.invokeProvid

[jira] [Assigned] (HADOOP-8346) Changes to support Kerberos with non Sun JVM (HADOOP-6941) broke SPNEGO

2012-05-02 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das reassigned HADOOP-8346:
---

Assignee: Devaraj Das

> Changes to support Kerberos with non Sun JVM (HADOOP-6941) broke SPNEGO
> ---
>
> Key: HADOOP-8346
> URL: https://issues.apache.org/jira/browse/HADOOP-8346
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.0, 3.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Devaraj Das
>Priority: Blocker
> Fix For: 2.0.0
>
>
> Before HADOOP-6941, the hadoop-auth test cases with Kerberos ON pass (*mvn 
> test -PtestKerberos*); after HADOOP-6941 the tests fail with the error below.
> Doing some IDE debugging, I found that the changes in HADOOP-6941 make the 
> JVM Kerberos libraries append an extra element to the Kerberos principal of 
> the server (on the client side, when creating the token), so 
> *HTTP/localhost* ends up as *HTTP/localhost/localhost*. Then, when the KDC 
> is contacted to get a ticket for that server principal, the principal is 
> unknown.
> {code}
> testAuthenticationPost(org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator)
>   Time elapsed: 0.053 sec  <<< ERROR!
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Server not 
> found in Kerberos database (7) - UNKNOWN_SERVER)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:236)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:142)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:217)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatorTestCase._testAuthentication(AuthenticatorTestCase.java:124)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator$2.call(TestKerberosAuthenticator.java:77)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator$2.call(TestKerberosAuthenticator.java:74)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils$1.run(KerberosTestUtils.java:111)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils.doAs(KerberosTestUtils.java:108)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils.doAsClient(KerberosTestUtils.java:124)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator.testAuthenticationPost(TestKerberosAuthenticator.java:74)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at junit.framework.TestCase.runTest(TestCase.java:168)
>   at junit.framework.TestCase.runBare(TestCase.java:134)
>   at junit.framework.TestResult$1.protect(TestResult.java:110)
>   at junit.framework.TestResult.runProtected(TestResult.java:128)
>   at junit.framework.TestResult.run(TestResult.java:113)
>   at junit.framework.TestCase.run(TestCase.java:124)
>   at junit.framework.TestSuite.runTest(TestSuite.java:243)
>   at junit.framework.TestSuite.run(TestSuite.java:238)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:236)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:134)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:113)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(

[jira] [Commented] (HADOOP-8346) Changes to support Kerberos with non Sun JVM (HADOOP-6941) broke SPNEGO

2012-05-02 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13267087#comment-13267087
 ] 

Devaraj Das commented on HADOOP-8346:
-

I'll take a look at this.
@Alejandro, can you please provide some more detail, if you have it, on where 
the extra element is getting added to the principal? Thanks!
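
To make the question concrete: in SPNEGO the client builds a GSSName for the 
server, and the name type used to create it determines how the string is 
interpreted. Below is a minimal, self-contained sketch of that difference 
(illustrative only, not the actual KerberosAuthenticator code; the 
canonicalization behavior noted in the comments is an assumption about where 
an extra element could come from).

{code}
import org.ietf.jgss.GSSException;
import org.ietf.jgss.GSSManager;
import org.ietf.jgss.GSSName;
import org.ietf.jgss.Oid;

// Hypothetical demo: the same string yields different server names
// depending on the GSS name type passed to createName().
public class SpnegoNameCheck {
  public static void main(String[] args) throws GSSException {
    GSSManager gssManager = GSSManager.getInstance();

    // GSS_KRB5_NT_PRINCIPAL_NAME: the string is treated as a complete
    // Kerberos principal, exactly as written.
    Oid krb5PrincipalName = new Oid("1.2.840.113554.1.2.2.1");
    GSSName viaKrb5 =
        gssManager.createName("HTTP/localhost", krb5PrincipalName);

    // NT_HOSTBASED_SERVICE expects "service@host"; a slash-separated
    // string may be canonicalized with the local host appended, which is
    // one way HTTP/localhost could become HTTP/localhost/localhost.
    GSSName viaHostBased =
        gssManager.createName("HTTP/localhost", GSSName.NT_HOSTBASED_SERVICE);

    System.out.println("krb5 name type:       " + viaKrb5);
    System.out.println("host-based name type: " + viaHostBased);
  }
}
{code}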

> Changes to support Kerberos with non Sun JVM (HADOOP-6941) broke SPNEGO
> ---
>
> Key: HADOOP-8346
> URL: https://issues.apache.org/jira/browse/HADOOP-8346
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.0, 3.0.0
>Reporter: Alejandro Abdelnur
>Priority: Blocker
> Fix For: 2.0.0
>
>
> Before HADOOP-6941, the hadoop-auth test cases with Kerberos ON pass (*mvn 
> test -PtestKerberos*); after HADOOP-6941 the tests fail with the error below.
> Doing some IDE debugging, I found that the changes in HADOOP-6941 make the 
> JVM Kerberos libraries append an extra element to the Kerberos principal of 
> the server (on the client side, when creating the token), so 
> *HTTP/localhost* ends up as *HTTP/localhost/localhost*. Then, when the KDC 
> is contacted to get a ticket for that server principal, the principal is 
> unknown.
> {code}
> testAuthenticationPost(org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator)
>   Time elapsed: 0.053 sec  <<< ERROR!
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Server not 
> found in Kerberos database (7) - UNKNOWN_SERVER)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:236)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:142)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:217)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatorTestCase._testAuthentication(AuthenticatorTestCase.java:124)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator$2.call(TestKerberosAuthenticator.java:77)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator$2.call(TestKerberosAuthenticator.java:74)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils$1.run(KerberosTestUtils.java:111)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils.doAs(KerberosTestUtils.java:108)
>   at 
> org.apache.hadoop.security.authentication.KerberosTestUtils.doAsClient(KerberosTestUtils.java:124)
>   at 
> org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator.testAuthenticationPost(TestKerberosAuthenticator.java:74)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at junit.framework.TestCase.runTest(TestCase.java:168)
>   at junit.framework.TestCase.runBare(TestCase.java:134)
>   at junit.framework.TestResult$1.protect(TestResult.java:110)
>   at junit.framework.TestResult.runProtected(TestResult.java:128)
>   at junit.framework.TestResult.run(TestResult.java:113)
>   at junit.framework.TestCase.run(TestCase.java:124)
>   at junit.framework.TestSuite.runTest(TestSuite.java:243)
>   at junit.framework.TestSuite.run(TestSuite.java:238)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:236)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:134)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:113)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
>   at 
> org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
>   at 
> org

[jira] [Commented] (HADOOP-6941) Support non-SUN JREs in UserGroupInformation

2012-04-27 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13263982#comment-13263982
 ] 

Devaraj Das commented on HADOOP-6941:
-

Please open a separate jira for the Java 7 issues...

> Support non-SUN JREs in UserGroupInformation
> 
>
> Key: HADOOP-6941
> URL: https://issues.apache.org/jira/browse/HADOOP-6941
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: SLES 11, Apache Harmony 6 and SLES 11, IBM Java 6
>Reporter: Stephen Watt
>Assignee: Devaraj Das
> Fix For: 1.0.3, 2.0.0
>
> Attachments: 6941-1.patch, 6941-branch1.patch, HADOOP-6941.patch, 
> hadoop-6941.patch
>
>
> Attempting to format the namenode or attempting to start Hadoop using Apache 
> Harmony or the IBM Java JREs results in the following exception:
> 10/09/07 16:35:05 ERROR namenode.NameNode: java.lang.NoClassDefFoundError: 
> com.sun.security.auth.UnixPrincipal
>   at 
> org.apache.hadoop.security.UserGroupInformation.&lt;clinit&gt;(UserGroupInformation.java:223)
>   at java.lang.J9VMInternals.initializeImpl(Native Method)
>   at java.lang.J9VMInternals.initialize(J9VMInternals.java:200)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setConfigurationParameters(FSNamesystem.java:420)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.&lt;init&gt;(FSNamesystem.java:391)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1240)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1348)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1368)
> Caused by: java.lang.ClassNotFoundException: 
> com.sun.security.auth.UnixPrincipal
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:421)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:652)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:346)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:618)
>   ... 8 more
> This is a regression, as previous versions of Hadoop worked with these 
> JREs.
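
The failure above boils down to a hard compile-time reference to a com.sun.* 
class during UserGroupInformation's class initialization. A minimal sketch of 
that failure mode (hypothetical demo class, not the Hadoop code):

{code}
import javax.security.auth.Subject;

// Hypothetical demo: a direct reference to a Sun-only class compiles fine
// against a Sun/Oracle JDK, but on a JRE that does not ship com.sun.*
// classes (IBM, Harmony) loading this class throws NoClassDefFoundError.
public class HardDependencyDemo {
  public static void main(String[] args) {
    Subject subject = new Subject();
    System.out.println(
        subject.getPrincipals(com.sun.security.auth.UnixPrincipal.class));
  }
}
{code}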

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-6941) Support non-SUN JREs in UserGroupInformation

2012-04-26 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13262795#comment-13262795
 ] 

Devaraj Das commented on HADOOP-6941:
-

Hey John, the patch wasn't tested with the version of the JRE you are 
referring to... but can you check what System.getProperty("java.vendor") 
returns? The patch uses that to distinguish IBM's JVM from others.
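
For anyone following along, here is a minimal sketch of that kind of vendor 
check (the property values and login-module class names below are 
illustrative assumptions, not a quote of the patch):

{code}
// Hypothetical demo: branch on java.vendor to pick the Kerberos login
// module that actually exists on the running JVM.
public class JvmVendorCheck {
  public static void main(String[] args) {
    String vendor = System.getProperty("java.vendor", "");
    boolean ibmJava = vendor.contains("IBM");

    // IBM's JDK ships its Krb5LoginModule under com.ibm.*, while Sun/Oracle
    // JDKs ship theirs under com.sun.*.
    String krb5Module = ibmJava
        ? "com.ibm.security.auth.module.Krb5LoginModule"
        : "com.sun.security.auth.module.Krb5LoginModule";

    System.out.println(vendor + " -> " + krb5Module);
  }
}
{code}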

> Support non-SUN JREs in UserGroupInformation
> 
>
> Key: HADOOP-6941
> URL: https://issues.apache.org/jira/browse/HADOOP-6941
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: SLES 11, Apache Harmony 6 and SLES 11, IBM Java 6
>Reporter: Stephen Watt
>Assignee: Devaraj Das
> Fix For: 1.0.3, 2.0.0
>
> Attachments: 6941-1.patch, 6941-branch1.patch, HADOOP-6941.patch, 
> hadoop-6941.patch
>
>
> Attempting to format the namenode or attempting to start Hadoop using Apache 
> Harmony or the IBM Java JREs results in the following exception:
> 10/09/07 16:35:05 ERROR namenode.NameNode: java.lang.NoClassDefFoundError: 
> com.sun.security.auth.UnixPrincipal
>   at 
> org.apache.hadoop.security.UserGroupInformation.&lt;clinit&gt;(UserGroupInformation.java:223)
>   at java.lang.J9VMInternals.initializeImpl(Native Method)
>   at java.lang.J9VMInternals.initialize(J9VMInternals.java:200)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setConfigurationParameters(FSNamesystem.java:420)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.&lt;init&gt;(FSNamesystem.java:391)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1240)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1348)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1368)
> Caused by: java.lang.ClassNotFoundException: 
> com.sun.security.auth.UnixPrincipal
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:421)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:652)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:346)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:618)
>   ... 8 more
> This is a regression, as previous versions of Hadoop worked with these 
> JREs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7615) Binary layout does not put share/hadoop/contrib/*.jar into the class path

2011-09-17 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13107338#comment-13107338
 ] 

Devaraj Das commented on HADOOP-7615:
-

Committed this to the branch-0.20-security and branch-0.20-security-205 
branches. Keeping the issue open to track the commit on trunk.

> Binary layout does not put share/hadoop/contrib/*.jar into the class path
> -
>
> Key: HADOOP-7615
> URL: https://issues.apache.org/jira/browse/HADOOP-7615
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.20.204.0, 0.23.0
> Environment: Java, Linux
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: 0.20.205.0
>
> Attachments: HADOOP-7615.patch
>
>
> For contrib projects, contrib jar files are not included in HADOOP_CLASSPATH 
> in the binary layout.  Several projects' jar files should be copied to 
> $HADOOP_PREFIX/share/hadoop/lib for binary deployment.  The interesting jar 
> files to include in $HADOOP_PREFIX/share/hadoop/lib are: capacity-scheduler, 
> thriftfs, fairscheduler.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7630) hadoop-metrics2.properties should have a property *.period set to a default value for metrics

2011-09-17 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-7630:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed this. Thanks, Eric!

> hadoop-metrics2.properties should have a property *.period set to a default 
> value for metrics
> -
>
> Key: HADOOP-7630
> URL: https://issues.apache.org/jira/browse/HADOOP-7630
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Arpit Gupta
>Assignee: Eric Yang
> Fix For: 0.20.205.0, 0.23.0
>
> Attachments: HADOOP-7630-trunk.patch, HADOOP-7630.patch
>
>
> Currently the hadoop-metrics2.properties file does not have a value set for 
> *.period. This property determines how often metrics are refreshed. We 
> should set it to a default of 60.
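
For illustration, the entry being asked for would look like the following in 
hadoop-metrics2.properties (60 is the value suggested above; the exact 
default that was committed may differ):

{noformat}
# Sampling period, in seconds, applied to all metrics sources/sinks.
*.period=60
{noformat}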

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7631) In mapred-site.xml, stream.tmpdir is mapped to ${mapred.temp.dir} which is undeclared.

2011-09-17 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-7631:


   Resolution: Fixed
Fix Version/s: 0.23.0
   0.20.205.0
   Status: Resolved  (was: Patch Available)

Committed this. Thanks, Eric!

> In mapred-site.xml, stream.tmpdir is mapped to ${mapred.temp.dir} which is 
> undeclared.
> --
>
> Key: HADOOP-7631
> URL: https://issues.apache.org/jira/browse/HADOOP-7631
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 0.20.205.0
>Reporter: Ramya Sunil
>Assignee: Eric Yang
> Fix For: 0.20.205.0, 0.23.0
>
> Attachments: HADOOP-7631-trunk.patch, HADOOP-7631.patch
>
>
> Streaming jobs seem to fail with the following exception:
> {noformat}
> Exception in thread "main" java.io.IOException: No such file or directory
> at java.io.UnixFileSystem.createFileExclusively(Native Method)
> at java.io.File.checkAndCreate(File.java:1704)
> at java.io.File.createTempFile(File.java:1792)
> at 
> org.apache.hadoop.streaming.StreamJob.packageJobJar(StreamJob.java:603)
> at 
> org.apache.hadoop.streaming.StreamJob.setJobConf(StreamJob.java:798)
> at org.apache.hadoop.streaming.StreamJob.run(StreamJob.java:117)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
> at 
> org.apache.hadoop.streaming.HadoopStreaming.main(HadoopStreaming.java:32)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> {noformat}
> Eric pointed out that in RPM-based installs, in /etc/hadoop/mapred-site.xml, 
> stream.tmpdir is mapped to ${mapred.temp.dir}, but ${mapred.temp.dir} is not 
> declared.
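
As a sketch of why the undeclared variable matters: Hadoop's Configuration 
expands ${...} references against other properties it knows about, so an 
undeclared mapred.temp.dir leaves stream.tmpdir unresolved. Illustrative code 
only, with a made-up path:

{code}
import org.apache.hadoop.conf.Configuration;

public class SubstitutionDemo {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false); // no default resources

    // What the RPM-generated mapred-site.xml effectively contains.
    conf.set("stream.tmpdir", "${mapred.temp.dir}");

    // The missing declaration; the path here is a hypothetical example.
    conf.set("mapred.temp.dir", "/tmp/hadoop/mapred/temp");

    // Prints /tmp/hadoop/mapred/temp once the variable can be resolved;
    // without the declaration, the literal ${mapred.temp.dir} comes back.
    System.out.println(conf.get("stream.tmpdir"));
  }
}
{code}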

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7633) log4j.properties should be added to the hadoop conf on deploy

2011-09-16 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-7633:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed this. Thanks, Eric!

> log4j.properties should be added to the hadoop conf on deploy
> -
>
> Key: HADOOP-7633
> URL: https://issues.apache.org/jira/browse/HADOOP-7633
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Arpit Gupta
>Assignee: Eric Yang
> Fix For: 0.20.205.0, 0.23.0
>
> Attachments: HADOOP-7633-trunk.patch, HADOOP-7633.patch
>
>
> Currently the log4j properties are not present in the Hadoop conf dir. We 
> should add them so that log rotation happens appropriately, and also define 
> the other logs that Hadoop can generate, for example the audit and auth logs 
> as well as the mapred summary logs.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7637) Fair scheduler configuration file is not bundled in RPM

2011-09-16 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-7637:


   Resolution: Fixed
Fix Version/s: 0.23.0
   Status: Resolved  (was: Patch Available)

I just committed this. Thanks, Eric!

> Fair scheduler configuration file is not bundled in RPM
> ---
>
> Key: HADOOP-7637
> URL: https://issues.apache.org/jira/browse/HADOOP-7637
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.20.205.0
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: 0.20.205.0, 0.23.0
>
> Attachments: HADOOP-7637-trunk.patch, HADOOP-7637.patch
>
>
> The 205 tar build is fine, but the rpm build failed with:
> {noformat}
>   [rpm] Processing files: hadoop-0.20.205.0-1
>   [rpm] warning: File listed twice: /usr/libexec
>   [rpm] warning: File listed twice: /usr/libexec/hadoop-config.sh
>   [rpm] warning: File listed twice: /usr/libexec/jsvc.i386
>   [rpm] Checking for unpackaged file(s): /usr/lib/rpm/check-files 
> /tmp/hadoop_package_build_hortonfo/BUILD
>   [rpm] error: Installed (but unpackaged) file(s) found:
>   [rpm]/etc/hadoop/fair-scheduler.xml
>   [rpm] File listed twice: /usr/libexec
>   [rpm] File listed twice: /usr/libexec/hadoop-config.sh
>   [rpm] File listed twice: /usr/libexec/jsvc.i386
>   [rpm] Installed (but unpackaged) file(s) found:
>   [rpm]/etc/hadoop/fair-scheduler.xml
>   [rpm] 
>   [rpm] 
>   [rpm] RPM build errors:
> BUILD FAILED
> /grid/0/dev/mfoley/hadoop-0.20-security-205/build.xml:1747: 
> '/usr/bin/rpmbuild' failed with exit code 1
> {noformat}

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7626) Allow overwrite of HADOOP_CLASSPATH and HADOOP_OPTS

2011-09-12 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-7626:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

I committed this. Thanks, Eric!

> Allow overwrite of HADOOP_CLASSPATH and HADOOP_OPTS
> ---
>
> Key: HADOOP-7626
> URL: https://issues.apache.org/jira/browse/HADOOP-7626
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.20.205.0
> Environment: Java, Linux
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: 0.20.205.0, 0.23.0
>
> Attachments: HADOOP-7626-trunk.patch, HADOOP-7626.patch
>
>
> Quoting an email from Ashutosh Chauhan:
> bq. There is a bug in hadoop-env.sh which prevents the hcatalog server from 
> starting in secure settings. Instead of adding to the classpath, it 
> overrides it. I was not able to verify where the bug belongs, in HMS or in 
> the hadoop scripts. It looks like hadoop-env.sh is generated from 
> hadoop-env.sh.template in the installation process by HMS. A hand-crafted 
> patch follows:
> bq. - export HADOOP_CLASSPATH=$f
> bq. +export HADOOP_CLASSPATH=${HADOOP_CLASSPATH}:$f
> bq. -export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true "
> bq. +export HADOOP_OPTS="${HADOOP_OPTS} -Djava.net.preferIPv4Stack=true "

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7599) Improve hadoop setup conf script to setup secure Hadoop cluster

2011-09-12 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-7599:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed the patch on 0.23 and trunk. Thanks, Eric!

> Improve hadoop setup conf script to setup secure Hadoop cluster
> ---
>
> Key: HADOOP-7599
> URL: https://issues.apache.org/jira/browse/HADOOP-7599
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.20.203.0
> Environment: Java 6, RHEL 5.6
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: 0.20.205.0
>
> Attachments: HADOOP-7599-1.patch, HADOOP-7599-2.patch, 
> HADOOP-7599-3.patch, HADOOP-7599-4.patch, HADOOP-7599-5.patch, 
> HADOOP-7599-trunk-2.patch, HADOOP-7599-trunk-3.patch, 
> HADOOP-7599-trunk-4.patch, HADOOP-7599-trunk-5.patch, 
> HADOOP-7599-trunk.patch, HADOOP-7599.patch
>
>
> Setting up a secure Hadoop cluster requires a lot of manual setup. The 
> motivation of this jira is to provide setup scripts that automate setting up 
> a secure Hadoop cluster.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7599) Improve hadoop setup conf script to setup secure Hadoop cluster

2011-09-12 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-7599:


Fix Version/s: 0.24.0
   0.23.0

> Improve hadoop setup conf script to setup secure Hadoop cluster
> ---
>
> Key: HADOOP-7599
> URL: https://issues.apache.org/jira/browse/HADOOP-7599
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.20.203.0
> Environment: Java 6, RHEL 5.6
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: 0.20.205.0, 0.23.0, 0.24.0
>
> Attachments: HADOOP-7599-1.patch, HADOOP-7599-2.patch, 
> HADOOP-7599-3.patch, HADOOP-7599-4.patch, HADOOP-7599-5.patch, 
> HADOOP-7599-trunk-2.patch, HADOOP-7599-trunk-3.patch, 
> HADOOP-7599-trunk-4.patch, HADOOP-7599-trunk-5.patch, 
> HADOOP-7599-trunk.patch, HADOOP-7599.patch
>
>
> Setting up a secure Hadoop cluster requires a lot of manual setup. The 
> motivation of this jira is to provide setup scripts that automate setting up 
> a secure Hadoop cluster.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7599) Improve hadoop setup conf script to setup secure Hadoop cluster

2011-09-12 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13103202#comment-13103202
 ] 

Devaraj Das commented on HADOOP-7599:
-

I am going to commit the patch in 0.23. When Yarn stabilizes, we can raise 
another jira and do the appropriate fixes.

> Improve hadoop setup conf script to setup secure Hadoop cluster
> ---
>
> Key: HADOOP-7599
> URL: https://issues.apache.org/jira/browse/HADOOP-7599
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.20.203.0
> Environment: Java 6, RHEL 5.6
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: 0.20.205.0
>
> Attachments: HADOOP-7599-1.patch, HADOOP-7599-2.patch, 
> HADOOP-7599-3.patch, HADOOP-7599-4.patch, HADOOP-7599-5.patch, 
> HADOOP-7599-trunk-2.patch, HADOOP-7599-trunk-3.patch, 
> HADOOP-7599-trunk-4.patch, HADOOP-7599-trunk-5.patch, 
> HADOOP-7599-trunk.patch, HADOOP-7599.patch
>
>
> Setting up a secure Hadoop cluster requires a lot of manual setup. The 
> motivation of this jira is to provide setup scripts that automate setting up 
> a secure Hadoop cluster.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7599) Improve hadoop setup conf script to setup secure Hadoop cluster

2011-09-11 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13102373#comment-13102373
 ] 

Devaraj Das commented on HADOOP-7599:
-

I committed the patch on branch-0.20-security. I am hesitant to commit the 
patch on trunk/0.23 yet, since it needs some work to make the Yarn setup 
work. Let's discuss.

> Improve hadoop setup conf script to setup secure Hadoop cluster
> ---
>
> Key: HADOOP-7599
> URL: https://issues.apache.org/jira/browse/HADOOP-7599
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.20.203.0
> Environment: Java 6, RHEL 5.6
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: 0.20.205.0
>
> Attachments: HADOOP-7599-1.patch, HADOOP-7599-2.patch, 
> HADOOP-7599-3.patch, HADOOP-7599-4.patch, HADOOP-7599-5.patch, 
> HADOOP-7599-trunk-2.patch, HADOOP-7599-trunk-3.patch, 
> HADOOP-7599-trunk-4.patch, HADOOP-7599-trunk-5.patch, 
> HADOOP-7599-trunk.patch, HADOOP-7599.patch
>
>
> Setting up a secure Hadoop cluster requires a lot of manual setup. The 
> motivation of this jira is to provide setup scripts that automate setting up 
> a secure Hadoop cluster.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7599) Improve hadoop setup conf script to setup secure Hadoop cluster

2011-09-09 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101685#comment-13101685
 ] 

Devaraj Das commented on HADOOP-7599:
-

Please remove the config properties that aren't generated by the scripts in 
this patch (bullet 12 in my first comment). I also noticed that there are 
still references to the 'hadoop' group present. Please replace them with the 
variable you defined for the special group.

> Improve hadoop setup conf script to setup secure Hadoop cluster
> ---
>
> Key: HADOOP-7599
> URL: https://issues.apache.org/jira/browse/HADOOP-7599
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.20.203.0
> Environment: Java 6, RHEL 5.6
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: 0.20.205.0
>
> Attachments: HADOOP-7599-1.patch, HADOOP-7599-2.patch, 
> HADOOP-7599-3.patch, HADOOP-7599-trunk-2.patch, HADOOP-7599-trunk-3.patch, 
> HADOOP-7599-trunk.patch, HADOOP-7599.patch
>
>
> Setting up a secure Hadoop cluster requires a lot of manual setup. The 
> motivation of this jira is to provide setup scripts that automate setting up 
> a secure Hadoop cluster.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7599) Improve hadoop setup conf script to setup secure Hadoop cluster

2011-09-09 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13101599#comment-13101599
 ] 

Devaraj Das commented on HADOOP-7599:
-

Went over the patch. Some comments:
1. Don't chmod the keytab dir contents to 755. The keytab files should be 
owned by the user running the respective daemon, and 700ed.
2. On bullet #9 in my last comment, you can do a check for empty config files 
(for example, if the strings '&lt;property&gt;' and/or '&lt;name&gt;' occur, the config 
file is not empty). Not pretty, but safer. Long term, Hadoop could stop 
shipping the empty config files.
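
A sturdier alternative to string matching, sketched under the assumption that 
the shipped files are standard Hadoop XML config files: load the file and 
count the properties it actually defines.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

// Hypothetical sketch, not part of the patch: a conf file is "empty" if it
// defines no properties, regardless of what skeleton markup it contains.
public class EmptyConfCheck {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false); // skip default resources
    conf.addResource(new Path("/etc/hadoop/core-site.xml")); // example path
    System.out.println("empty? " + (conf.size() == 0));
  }
}
{code}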

> Improve hadoop setup conf script to setup secure Hadoop cluster
> ---
>
> Key: HADOOP-7599
> URL: https://issues.apache.org/jira/browse/HADOOP-7599
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.20.203.0
> Environment: Java 6, RHEL 5.6
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: 0.20.205.0
>
> Attachments: HADOOP-7599-1.patch, HADOOP-7599-trunk.patch, 
> HADOOP-7599.patch
>
>
> Setting up a secure Hadoop cluster requires a lot of manual setup. The 
> motivation of this jira is to provide setup scripts that automate setting up 
> a secure Hadoop cluster.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7599) Improve hadoop setup conf script to setup secure Hadoop cluster

2011-09-06 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13098537#comment-13098537
 ] 

Devaraj Das commented on HADOOP-7599:
-

Some comments:
1. The mapred system dir should be 700ed.
2. -format is always called by default in hadoop-setup-hdfs. Can we make 
formatting happen only when the user running the script passes a command-line 
option (-format)? I really don't want implicit namenode formatting.
3. We could let the directory /mapred be group-owned by the system group.
4. The --datanodes and --tasktrackers options: could they be made optional? 
They might already be optional; please confirm.
5. Could we live with namenode-url instead of replacing it with namenode-host 
(ditto for jobtracker-url)? I also see you changed a couple of other places 
in a backward-incompatible way (like HADOOP_JT_HOST and taskcontroller 
ownership). I want to avoid incompatible changes.
6. Why did we remove the call to hadoop-setup-config.sh from 
src/packages/hadoop-setup-conf.sh?
7. The group is hardcoded to 'hadoop' in a couple of places. Can we avoid 
that?
8. I see commented-out lines in hadoop-setup-conf.sh. Please remove those.
9. I don't think we should blindly overwrite the conf files in the config 
directory. We should probably warn and exit if the conf directory already has 
files in it. The user can use --force to avoid the warning.
10. All the components in the path leading up to taskcontroller.cfg have to 
be owned by root. Please check this.
11. Where is HADOOP_SECURE_DN_LOG_DIR used?
12. A whole lot of the configuration options you added in the *-site.xml 
files are already there in *-default.xml. We don't need those; we only need 
the security-related ones. Also, we don't want to play with non-security 
configs like mapred.tasktracker.map.tasks.maximum.

> Improve hadoop setup conf script to setup secure Hadoop cluster
> ---
>
> Key: HADOOP-7599
> URL: https://issues.apache.org/jira/browse/HADOOP-7599
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.20.203.0
> Environment: Java 6, RHEL 5.6
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: 0.20.205.0
>
> Attachments: HADOOP-7599.patch
>
>
> Setting up a secure Hadoop cluster requires a lot of manual setup. The 
> motivation of this jira is to provide setup scripts that automate setting up 
> a secure Hadoop cluster.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7596) Enable jsvc to work with Hadoop RPM package

2011-09-06 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-7596:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

I just committed this (to the 20-security branch). Thanks, Eric!

> Enable jsvc to work with Hadoop RPM package
> ---
>
> Key: HADOOP-7596
> URL: https://issues.apache.org/jira/browse/HADOOP-7596
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.20.204.0
> Environment: Java 6, RedHat EL 5.6
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: 0.20.205.0
>
> Attachments: HADOOP-7596-2.patch, HADOOP-7596-3.patch, 
> HADOOP-7596.patch
>
>
> For a secure Hadoop 0.20.2xx cluster, the datanode can only run with a 
> 32-bit JVM because Hadoop only packages a 32-bit jsvc.  The build process 
> should download the proper jsvc version based on the build architecture.  In 
> addition, the shell script should be enhanced to locate the hadoop jar files 
> in the proper location.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7596) Enable jsvc to work with Hadoop RPM package

2011-09-02 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13096544#comment-13096544
 ] 

Devaraj Das commented on HADOOP-7596:
-

+1

> Enable jsvc to work with Hadoop RPM package
> ---
>
> Key: HADOOP-7596
> URL: https://issues.apache.org/jira/browse/HADOOP-7596
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.20.204.0
> Environment: Java 6, RedHat EL 5.6
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: 0.20.205.0
>
> Attachments: HADOOP-7596-2.patch, HADOOP-7596-3.patch, 
> HADOOP-7596.patch
>
>
> For a secure Hadoop 0.20.2xx cluster, the datanode can only run with a 
> 32-bit JVM because Hadoop only packages a 32-bit jsvc.  The build process 
> should download the proper jsvc version based on the build architecture.  In 
> addition, the shell script should be enhanced to locate the hadoop jar files 
> in the proper location.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-6373) adding delegation token implementation

2011-08-11 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13083483#comment-13083483
 ] 

Devaraj Das commented on HADOOP-6373:
-

The implementation is not very different.

The SecretManager in particular is in trunk at 
common/src/java/org/apache/hadoop/security/token/SecretManager.java



> adding delegation token implementation
> --
>
> Key: HADOOP-6373
> URL: https://issues.apache.org/jira/browse/HADOOP-6373
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kan Zhang
>Assignee: Kan Zhang
>Priority: Blocker
> Attachments: partial1.patch, token.patch, token2.patch
>
>
> The overall design of delegation tokens is given in HADOOP-4343. This 
> subtask is for the detailed design and implementation.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HADOOP-6373) adding delegation token implementation

2011-08-11 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das resolved HADOOP-6373.
-

   Resolution: Duplicate
Fix Version/s: (was: 0.23.0)

Eli, the delegation token feature is in 20-security and in trunk as part of 
other jiras. I don't have that list handy, but I am closing this one as a 
duplicate.

> adding delegation token implementation
> --
>
> Key: HADOOP-6373
> URL: https://issues.apache.org/jira/browse/HADOOP-6373
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kan Zhang
>Assignee: Kan Zhang
>Priority: Blocker
> Attachments: partial1.patch, token.patch, token2.patch
>
>
> The overall design of delegation tokens is given in HADOOP-4343. This 
> subtask is for the detailed design and implementation.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HADOOP-4343) Adding user and service-to-service authentication to Hadoop

2011-08-11 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-4343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das resolved HADOOP-4343.
-

   Resolution: Fixed
Fix Version/s: 0.20.203.0

Eli, the authentication feature is in 20-security and in trunk. The jira was 
just never closed.

> Adding user and service-to-service authentication to Hadoop
> ---
>
> Key: HADOOP-4343
> URL: https://issues.apache.org/jira/browse/HADOOP-4343
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Kan Zhang
>Assignee: Kan Zhang
>Priority: Blocker
> Fix For: 0.23.0, 0.20.203.0
>
>
> Currently, Hadoop services do not authenticate users or other services. As a 
> result, Hadoop is subject to the following security risks.
> 1. A user can access an HDFS or M/R cluster as any other user. This makes it 
> impossible to enforce access control in an uncooperative environment. For 
> example, file permission checking on HDFS can be easily circumvented.
> 2. An attacker can masquerade as Hadoop services. For example, user code 
> running on a M/R cluster can register itself as a new TaskTracker.
> This JIRA is intended to be a tracking JIRA, where we discuss requirements, 
> agree on a general approach and identify subtasks. Detailed design and 
> implementation are the subject of those subtasks.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7371) Improve tarball distributions

2011-07-29 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-7371:


Fix Version/s: (was: 0.20.205.0)
   0.23.0

I don't think this is a must-fix for 20.2xx. 

> Improve tarball distributions
> -
>
> Key: HADOOP-7371
> URL: https://issues.apache.org/jira/browse/HADOOP-7371
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
> Environment: Java 6, Redhat 5.5
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: 0.23.0
>
> Attachments: HADOOP-7371.patch
>
>
> The Hadoop release tarball contains both raw source and binaries.  This leads 
> users to use the release tarball as a base for applying patches to build a 
> custom Hadoop.  This is not the recommended way to develop hadoop because it 
> leads to a mixed development tree where processed files and raw source are 
> hard to separate.  
> To correct the problematic usage of the release tarball, the release build 
> targets should be defined as:
> "ant source" generates the source release tarball.
> "ant binary" is the binary release without source/javadoc jar files.
> "ant tar" is a mirror of the binary release with source/javadoc jar files.
> Does this sound reasonable?

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7356) RPM packages broke bin/hadoop script for hadoop 0.20.205

2011-07-29 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-7356:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

Giri committed the fix.

> RPM packages broke bin/hadoop script for hadoop 0.20.205
> 
>
> Key: HADOOP-7356
> URL: https://issues.apache.org/jira/browse/HADOOP-7356
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.204.0
> Environment: Java 6, Redhat EL 5.5
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Blocker
> Fix For: 0.20.204.0, 0.23.0
>
> Attachments: HADOOP-7356-1.patch, HADOOP-7356-1.patch, 
> HADOOP-7356-2.patch, HADOOP-7356-trunk.patch, HADOOP-7356.patch
>
>
> hadoop-config.sh has been moved to libexec for the binary package, but 
> developers prefer to have hadoop-config.sh in bin.  Hadoop shell scripts 
> should be modified to support both scenarios.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-7268) FileContext.getLocalFSFileContext() behavior needs to be fixed w.r.t tokens

2011-05-08 Thread Devaraj Das (JIRA)
FileContext.getLocalFSFileContext() behavior needs to be fixed w.r.t tokens
---

 Key: HADOOP-7268
 URL: https://issues.apache.org/jira/browse/HADOOP-7268
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, security
Affects Versions: 0.23.0
Reporter: Devaraj Das
 Fix For: 0.23.0


FileContext.getLocalFSFileContext() instantiates a FileContext object upon the 
first call to it, and for all subsequent calls returns back that instance (a 
static localFsSingleton object). With security turned on, this causes some 
hard-to-debug situations when that fileContext is used for doing HDFS 
operations. This is because the UserGroupInformation is stored when a 
FileContext is instantiated. If the process in question wishes to use different 
UserGroupInformation objects for different file system operations (where the 
corresponding FileContext objects are obtained via calls to 
FileContext.getLocalFSFileContext()), it doesn't work.
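
The pitfall generalizes beyond Hadoop. The following self-contained Java 
sketch (all names are hypothetical stand-ins for FileContext and 
UserGroupInformation) shows how a lazily created singleton freezes the first 
caller's identity:

    public class SingletonIdentityPitfall {
        // Stands in for the identity a FileContext captures (UGI in Hadoop).
        static final class Identity {
            final String user;
            Identity(String user) { this.user = user; }
        }

        // Stands in for FileContext: the owner is fixed at construction time.
        static final class Context {
            final Identity owner;
            Context(Identity owner) { this.owner = owner; }
        }

        // Like the static localFsSingleton: created once, returned forever.
        private static Context singleton;

        static synchronized Context getContext(Identity caller) {
            if (singleton == null) {
                singleton = new Context(caller); // first caller's identity wins
            }
            return singleton; // later callers silently reuse that identity
        }

        public static void main(String[] args) {
            Context a = getContext(new Identity("alice"));
            Context b = getContext(new Identity("bob"));
            System.out.println(a.owner.user); // alice
            System.out.println(b.owner.user); // also alice -- bob's operations
                                              // would run under alice's identity
        }
    }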

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] Updated: (HADOOP-7115) Add a cache for getpwuid_r and getpwgid_r calls

2011-03-03 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-7115:


Attachment: h-7115.1.patch

This patch is a straight port to trunk of the corresponding commit in the 
20.100 security branch. I will also raise an MR jira to address the MR side of 
the changes.





> Add a cache for getpwuid_r and getpwgid_r calls
> ---
>
> Key: HADOOP-7115
> URL: https://issues.apache.org/jira/browse/HADOOP-7115
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.22.0
>Reporter: Arun C Murthy
>Assignee: Devaraj Das
> Attachments: h-7115.1.patch
>
>
> As discussed in HADOOP-6978, a cache helps a lot.

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Commented: (HADOOP-7115) Add a cache for getpwuid_r and getpwgid_r calls

2011-01-21 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12985042#action_12985042
 ] 

Devaraj Das commented on HADOOP-7115:
-

It was the getpwuid calls that led to ldap outages in our major clusters 
(almost twice every week). After implementing this cache, we haven't had a 
single incident (and hopefully we will have none in the future either).

The version of nscd on our clusters proved to be really unreliable.

> Add a cache for getpwuid_r and getpwgid_r calls
> ---
>
> Key: HADOOP-7115
> URL: https://issues.apache.org/jira/browse/HADOOP-7115
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.22.0
>Reporter: Arun C Murthy
>Assignee: Devaraj Das
>
> As discussed in HADOOP-6978, a cache helps a lot.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6978) Add JNI support for secure IO operations

2010-12-01 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-6978:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

I just committed this to 0.22 and trunk. Thanks Todd and Owen!

> Add JNI support for secure IO operations
> 
>
> Key: HADOOP-6978
> URL: https://issues.apache.org/jira/browse/HADOOP-6978
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: io, native, security
>Affects Versions: 0.22.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Blocker
> Fix For: 0.22.0
>
> Attachments: fstat.patch, hadoop-6978.txt, hadoop-6978.txt, 
> hadoop-6978.txt
>
>
> In support of MAPREDUCE-2096, we need to add some JNI functionality. In 
> particular, we need the ability to use fstat() on an open file stream, and to 
> use open() with O_EXCL, O_NOFOLLOW, and without O_CREAT.
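
For readers unfamiliar with the pattern, here is a hypothetical Java-side 
sketch of such a JNI binding. The class, method, and library names are 
illustrative only and do not match Hadoop's actual NativeIO API; the flag 
values are the Linux ones, stated as an assumption:

    import java.io.FileDescriptor;
    import java.io.IOException;

    public final class SecureIoSketch {
        static { System.loadLibrary("secureio"); } // hypothetical native library

        // Linux <fcntl.h> values; real code would export these from C instead
        // of hard-coding them, since they vary by platform.
        public static final int O_RDONLY   = 0x0;
        public static final int O_EXCL     = 0x80;
        public static final int O_NOFOLLOW = 0x20000;

        // open(2) without O_CREAT: never creates the file, and O_NOFOLLOW
        // refuses to traverse a symlink -- closing check-then-open races.
        public static native FileDescriptor open(String path, int flags)
            throws IOException;

        // fstat(2) on an already-open descriptor: ownership and mode are
        // checked on the same inode that will be read, not on a path that
        // may have been swapped underneath us.
        public static native Stat fstat(FileDescriptor fd) throws IOException;

        // Minimal result holder for the fields a secure reader cares about.
        public static final class Stat {
            public final int ownerUid, ownerGid, mode;
            Stat(int uid, int gid, int mode) {
                this.ownerUid = uid; this.ownerGid = gid; this.mode = mode;
            }
        }
    }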

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6832) Provide a web server plugin that uses a static user for the web UI

2010-11-25 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12935932#action_12935932
 ] 

Devaraj Das commented on HADOOP-6832:
-

Owen, I was wondering whether you want to accommodate Todd's comment here?

> Provide a web server plugin that uses a static user for the web UI
> --
>
> Key: HADOOP-6832
> URL: https://issues.apache.org/jira/browse/HADOOP-6832
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Fix For: 0.22.0
>
> Attachments: h-6382.patch, static-web-user.patch
>
>
> We need a simple plugin that uses a static user for clusters with security 
> that don't want to authenticate users on the web UI.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6978) Add JNI support for secure IO operations

2010-11-24 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-6978:


Priority: Blocker  (was: Critical)

> Add JNI support for secure IO operations
> 
>
> Key: HADOOP-6978
> URL: https://issues.apache.org/jira/browse/HADOOP-6978
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: io, native, security
>Affects Versions: 0.22.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Blocker
> Fix For: 0.22.0
>
> Attachments: fstat.patch, hadoop-6978.txt, hadoop-6978.txt
>
>
> In support of MAPREDUCE-2096, we need to add some JNI functionality. In 
> particular, we need the ability to use fstat() on an open file stream, and to 
> use open() with O_EXCL, O_NOFOLLOW, and without O_CREAT.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6978) Add JNI support for secure IO operations

2010-11-24 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-6978:


Attachment: fstat.patch

We could move the discussion/fix of the caching to a separate jira.
While testing the patch, we found that NativeIO fails when the map outputs are 
large. Owen fixed this issue (patch attached); we should include that fix in 
this patch.

+1 on the patch otherwise.

> Add JNI support for secure IO operations
> 
>
> Key: HADOOP-6978
> URL: https://issues.apache.org/jira/browse/HADOOP-6978
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: io, native, security
>Affects Versions: 0.22.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Critical
> Fix For: 0.22.0
>
> Attachments: fstat.patch, hadoop-6978.txt, hadoop-6978.txt
>
>
> In support of MAPREDUCE-2096, we need to add some JNI functionality. In 
> particular, we need the ability to use fstat() on an open file stream, and to 
> use open() with O_EXCL, O_NOFOLLOW, and without O_CREAT.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6978) Add JNI support for secure IO operations

2010-11-13 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12931751#action_12931751
 ] 

Devaraj Das commented on HADOOP-6978:
-

Yeah, our belief is that the shuffle process ends up making a lot of the getpw* 
calls, and we have already seen a couple of ldap server outages. We can do a 
follow-up patch though. If the cluster has a configuration similar to what I 
mentioned earlier, then yeah, it'd be really good to have this cache before 
deployment...

> Add JNI support for secure IO operations
> 
>
> Key: HADOOP-6978
> URL: https://issues.apache.org/jira/browse/HADOOP-6978
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: io, native, security
>Affects Versions: 0.22.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Critical
> Fix For: 0.22.0
>
> Attachments: hadoop-6978.txt, hadoop-6978.txt
>
>
> In support of MAPREDUCE-2096, we need to add some JNI functionality. In 
> particular, we need the ability to use fstat() on an open file stream, and to 
> use open() with O_EXCL, O_NOFOLLOW, and without O_CREAT.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6978) Add JNI support for secure IO operations

2010-11-13 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12931690#action_12931690
 ] 

Devaraj Das commented on HADOOP-6978:
-

We have noticed that sometimes C calls like getpwuid_r end up making direct 
calls to the ldap server. It is probably configuration/environment specific, 
but at Yahoo! the password entries are maintained by the ldap server. In order 
to prevent the ldap servers from getting overloaded with password look-ups, we 
run a daemon called nscd on all the compute nodes that caches the results of 
such look-ups. Calls such as getpwuid_r should terminate at the local nscd 
daemon, but if, for whatever reason, the nscd daemon is down on a node, the 
calls end up talking to the ldap server directly. Apparently, nscd is not that 
stable...

We have seen the above happen at Yahoo!, and on a couple of occasions it 
brought down the ldap servers. So I was wondering whether we should reduce the 
number of calls to getpwuid_r and the like by caching the 
{uid,gid}->{username,groupname} resolutions in Hadoop. Thoughts?
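
A minimal sketch of the kind of cache being proposed, assuming a hypothetical 
native helper that ultimately calls getpwuid_r; the TTL is an arbitrary 
assumption, not what the eventual patch uses:

    import java.util.concurrent.ConcurrentHashMap;

    public class UidNameCache {
        private static final long TTL_MS = 4 * 60 * 60 * 1000L; // assumed 4h TTL

        private static final class Entry {
            final String name;
            final long loadedAt;
            Entry(String name, long loadedAt) {
                this.name = name;
                this.loadedAt = loadedAt;
            }
        }

        private final ConcurrentHashMap<Integer, Entry> cache =
            new ConcurrentHashMap<Integer, Entry>();

        public String getUserName(int uid) {
            Entry e = cache.get(uid);
            long now = System.currentTimeMillis();
            if (e == null || now - e.loadedAt > TTL_MS) {
                // Only a miss or an expired entry reaches getpwuid_r (and
                // therefore, potentially, the ldap server behind nscd).
                e = new Entry(resolveUidNatively(uid), now);
                cache.put(uid, e);
            }
            return e.name;
        }

        // Hypothetical stand-in for the JNI call wrapping getpwuid_r.
        private String resolveUidNatively(int uid) {
            return "user-" + uid; // placeholder so the sketch runs standalone
        }
    }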

> Add JNI support for secure IO operations
> 
>
> Key: HADOOP-6978
> URL: https://issues.apache.org/jira/browse/HADOOP-6978
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: io, native, security
>Affects Versions: 0.22.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Critical
> Fix For: 0.22.0
>
> Attachments: hadoop-6978.txt, hadoop-6978.txt
>
>
> In support of MAPREDUCE-2096, we need to add some JNI functionality. In 
> particular, we need the ability to use fstat() on an open file stream, and to 
> use open() with O_EXCL, O_NOFOLLOW, and without O_CREAT.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6818) Provide a JNI-based implementation of GroupMappingServiceProvider

2010-11-03 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-6818:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

I committed this. The ReleaseAudit warning was due to an empty file, 
FTPFileSystemConfigKeys.java, in the codebase. The commit for HADOOP-6223 
should have taken care of it; I have deleted the file in this commit.

> Provide a JNI-based implementation of GroupMappingServiceProvider
> -
>
> Key: HADOOP-6818
> URL: https://issues.apache.org/jira/browse/HADOOP-6818
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Devaraj Das
>Assignee: Devaraj Das
> Fix For: 0.22.0
>
> Attachments: 6818-trunk-1.patch, 6818-trunk-2.patch, 
> 6818-trunk.patch, hadoop-6818-1.patch, hadoop-6818-2.patch, 
> JNIGroupMapping.patch
>
>
> The default implementation of GroupMappingServiceProvider does a fork of a 
> unix command to get the groups of a user. Since the group resolution happens 
> in the servers, this might be costly. This jira aims at providing a JNI-based 
> implementation for GroupMappingServiceProvider.
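
Roughly, the idea is to implement the provider's getGroups(String) with a 
native lookup instead of forking. A sketch, assuming the single-method 
GroupMappingServiceProvider interface of that era; the native method and 
library names are hypothetical, and the real patch's JNI details differ:

    import java.io.IOException;
    import java.util.Arrays;
    import java.util.List;

    import org.apache.hadoop.security.GroupMappingServiceProvider;

    public class JniGroupsMappingSketch implements GroupMappingServiceProvider {
        static { System.loadLibrary("hadoopgroups"); } // hypothetical library

        // The C side would use getpwnam_r + getgrouplist + getgrgid_r rather
        // than forking a unix command for every lookup on the server.
        private static native String[] getGroupsNative(String user)
            throws IOException;

        @Override
        public List<String> getGroups(String user) throws IOException {
            return Arrays.asList(getGroupsNative(user));
        }
    }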

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6818) Provide a JNI-based implementation of GroupMappingServiceProvider

2010-11-03 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-6818:


Status: Patch Available  (was: Open)

> Provide a JNI-based implementation of GroupMappingServiceProvider
> -
>
> Key: HADOOP-6818
> URL: https://issues.apache.org/jira/browse/HADOOP-6818
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Devaraj Das
>Assignee: Devaraj Das
> Fix For: 0.22.0
>
> Attachments: 6818-trunk-1.patch, 6818-trunk-2.patch, 
> 6818-trunk.patch, hadoop-6818-1.patch, hadoop-6818-2.patch, 
> JNIGroupMapping.patch
>
>
> The default implementation of GroupMappingServiceProvider does a fork of a 
> unix command to get the groups of a user. Since the group resolution happens 
> in the servers, this might be costly. This jira aims at providing a JNI-based 
> implementation for GroupMappingServiceProvider.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6818) Provide a JNI-based implementation of GroupMappingServiceProvider

2010-11-03 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-6818:


Status: Open  (was: Patch Available)

> Provide a JNI-based implementation of GroupMappingServiceProvider
> -
>
> Key: HADOOP-6818
> URL: https://issues.apache.org/jira/browse/HADOOP-6818
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Devaraj Das
>Assignee: Devaraj Das
> Fix For: 0.22.0
>
> Attachments: 6818-trunk-1.patch, 6818-trunk-2.patch, 
> 6818-trunk.patch, hadoop-6818-1.patch, hadoop-6818-2.patch, 
> JNIGroupMapping.patch
>
>
> The default implementation of GroupMappingServiceProvider does a fork of a 
> unix command to get the groups of a user. Since the group resolution happens 
> in the servers, this might be costly. This jira aims at providing a JNI-based 
> implementation for GroupMappingServiceProvider.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6818) Provide a JNI-based implementation of GroupMappingServiceProvider

2010-11-02 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-6818:


Attachment: 6818-trunk-2.patch

This should address most of the points raised by Erik. Erik, do you mind taking 
a quick look please? 

> Provide a JNI-based implementation of GroupMappingServiceProvider
> -
>
> Key: HADOOP-6818
> URL: https://issues.apache.org/jira/browse/HADOOP-6818
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Devaraj Das
>Assignee: Devaraj Das
> Fix For: 0.22.0
>
> Attachments: 6818-trunk-1.patch, 6818-trunk-2.patch, 
> 6818-trunk.patch, hadoop-6818-1.patch, hadoop-6818-2.patch, 
> JNIGroupMapping.patch
>
>
> The default implementation of GroupMappingServiceProvider does a fork of a 
> unix command to get the groups of a user. Since the group resolution happens 
> in the servers, this might be costly. This jira aims at providing a JNI-based 
> implementation for GroupMappingServiceProvider.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6818) Provide a JNI-based implementation of GroupMappingServiceProvider

2010-10-12 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-6818:


Attachment: 6818-trunk-1.patch

Addresses the comments from Todd.

> Provide a JNI-based implementation of GroupMappingServiceProvider
> -
>
> Key: HADOOP-6818
> URL: https://issues.apache.org/jira/browse/HADOOP-6818
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Devaraj Das
>Assignee: Devaraj Das
> Fix For: 0.22.0
>
> Attachments: 6818-trunk-1.patch, 6818-trunk.patch, 
> hadoop-6818-1.patch, hadoop-6818-2.patch, JNIGroupMapping.patch
>
>
> The default implementation of GroupMappingServiceProvider does a fork of a 
> unix command to get the groups of a user. Since the group resolution happens 
> in the servers, this might be costly. This jira aims at providing a JNI-based 
> implementation for GroupMappingServiceProvider.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6988) Add support for reading multiple hadoop delegation token files

2010-10-08 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12919181#action_12919181
 ] 

Devaraj Das commented on HADOOP-6988:
-

Although it is true that HADOOP_TOKEN_FILE_LOCATION can be used to make normal 
hdfs commands work, the intent of having it was to support security for 
Map/Reduce tasks and for hadoop streaming apps that internally invoke 
command-line hdfs operations (as Owen pointed out earlier). If you want to 
pass multiple tokens during job submission, the preferred approach is to write 
the tokens into a file (using the Credentials class's utilities, as sketched 
below) and then point mapreduce.job.credentials.binary to that file.
Thinking about it, wouldn't defining mapreduce.job.hdfs-servers in the job 
configuration work for you? The JobClient will automatically get delegation 
tokens from those namenodes, and all tasks of the job can use those tokens.
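
A sketch of that flow, assuming the tokens have already been fetched (e.g. one 
per namenode); the alias names and the file path below are illustrative, not 
prescribed values:

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.security.Credentials;
    import org.apache.hadoop.security.token.Token;
    import org.apache.hadoop.security.token.TokenIdentifier;

    public class SubmitWithTokens {
        public static void writeJobTokens(
                Token<? extends TokenIdentifier> nn1Token,
                Token<? extends TokenIdentifier> nn2Token,
                Configuration conf) throws IOException {
            Credentials creds = new Credentials();
            creds.addToken(new Text("nn1"), nn1Token); // aliases are arbitrary
            creds.addToken(new Text("nn2"), nn2Token);

            Path tokenFile = new Path("file:///tmp/job.tokens"); // illustrative
            creds.writeTokenStorageFile(tokenFile, conf);

            // Tasks pick the file up through this property at submission time.
            conf.set("mapreduce.job.credentials.binary", tokenFile.toString());
        }
    }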

> Add support for reading multiple hadoop delegation token files
> --
>
> Key: HADOOP-6988
> URL: https://issues.apache.org/jira/browse/HADOOP-6988
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.22.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Attachments: hadoop-6988.0.txt, hadoop-6988.1.txt
>
>
> It would be nice if there were a way to specify multiple delegation token 
> files via the HADOOP_TOKEN_FILE_LOCATION environment variable and the 
> "mapreduce.job.credentials.binary" configuration value. I suggest a 
> colon-separated list of paths, each of which is read as a separate delegation 
> token file.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6965) Method in UGI to get Kerberos ticket.

2010-09-28 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-6965:


   Status: Resolved  (was: Patch Available)
Fix Version/s: 0.22.0
   Resolution: Fixed

I just committed this. Thanks, Jitendra!
(I'll raise a jira for the follow-up work on cleaning up 
reloginFromTicketCache, removing User.lastLogin, etc.)

> Method in UGI to get Kerberos ticket. 
> --
>
> Key: HADOOP-6965
> URL: https://issues.apache.org/jira/browse/HADOOP-6965
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Fix For: 0.22.0
>
> Attachments: HADOOP-6965.1.patch, HADOOP-6965.3.patch, 
> HADOOP-6965.4.patch
>
>
> The getTGT method in the AutoRenewal thread is moved to the outer UGI class. 
> It is still a private method but can be used by reloginFromKeytab to check 
> for TGT expiry. This jira covers the Common changes for HDFS-1364.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6965) Method in UGI to get Kerberos ticket.

2010-09-27 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12915604#action_12915604
 ] 

Devaraj Das commented on HADOOP-6965:
-

Some comments:
1) Seems like you copy-pasted the 0.20 version of getTGT (from your patch on 
HDFS-1364). The trunk version in the ticket renewal thread is slightly 
different.
2) Let's bite the bullet and remove the check for hasSufficientTimeElapsed in 
the reloginFromKeytab method. We may do the same for reloginFromTicketCache in 
a follow-up jira.
3) The testcase can be removed; I don't think it is adding value. If it can be 
improved, fine (I understand it's hard to write a unit test for this without a 
Kerberos test infrastructure)... otherwise a note on manual testing should be 
sufficient.

> Method in UGI to get Kerberos ticket. 
> --
>
> Key: HADOOP-6965
> URL: https://issues.apache.org/jira/browse/HADOOP-6965
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: HADOOP-6965.1.patch, HADOOP-6965.3.patch
>
>
> The getTGT method in the AutoRenewal thread is moved to the outer UGI class. 
> It is still a private method but can be used by reloginFromKeytab to check 
> for TGT expiry. This jira covers the Common changes for HDFS-1364.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6656) Security framework needs to renew Kerberos tickets while the process is running

2010-09-27 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12915595#action_12915595
 ] 

Devaraj Das commented on HADOOP-6656:
-

Well, we started out by trying to make the Refreshable interface work (and 
Owen had posted a patch for that on this jira), but we couldn't get it working 
reliably, and hence switched to kinit-based renewal. That has been working 
well so far in our production clusters.
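
For illustration, a minimal sketch of the kinit-based approach: a daemon 
thread that periodically shells out to `kinit -R` to renew the ticket cache. 
The interval and error handling here are assumptions, not what the committed 
patch does:

    public class TicketRenewalSketch {
        public static Thread startRenewer(final long intervalMs) {
            Thread t = new Thread(new Runnable() {
                public void run() {
                    while (true) {
                        try {
                            // kinit -R renews the TGT already in the cache.
                            Process p = new ProcessBuilder("kinit", "-R").start();
                            if (p.waitFor() != 0) {
                                System.err.println("kinit -R failed; will retry");
                            }
                        } catch (InterruptedException ie) {
                            return; // shut down with the client process
                        } catch (java.io.IOException ioe) {
                            ioe.printStackTrace(); // e.g. kinit not on PATH
                        }
                        try {
                            Thread.sleep(intervalMs); // wait out the interval
                        } catch (InterruptedException ie) {
                            return;
                        }
                    }
                }
            });
            t.setDaemon(true); // must not keep the client process alive
            t.start();
            return t;
        }
    }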

> Security framework needs to renew Kerberos tickets while the process is 
> running
> ---
>
> Key: HADOOP-6656
> URL: https://issues.apache.org/jira/browse/HADOOP-6656
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Owen O'Malley
>Assignee: Devaraj Das
> Fix For: 0.22.0
>
> Attachments: 6656-trunk-1.patch, 6656-trunk-2.patch, 
> 6656-trunk-3.patch, 6656-trunk-4.patch, 6656-trunk-4.patch, 
> 6656-trunk-4.patch, 6656-trunk-4.patch, c-6656-y20-internal.patch, 
> refresh.patch
>
>
> While a client process is running, there should be a thread that periodically 
> renews the Kerberos credentials to ensure they don't expire.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6818) Provide a JNI-based implementation of GroupMappingServiceProvider

2010-09-23 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-6818:


Status: Patch Available  (was: Open)

> Provide a JNI-based implementation of GroupMappingServiceProvider
> -
>
> Key: HADOOP-6818
> URL: https://issues.apache.org/jira/browse/HADOOP-6818
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Devaraj Das
>Assignee: Devaraj Das
> Fix For: 0.22.0
>
> Attachments: 6818-trunk.patch, hadoop-6818-1.patch, 
> hadoop-6818-2.patch, JNIGroupMapping.patch
>
>
> The default implementation of GroupMappingServiceProvider does a fork of a 
> unix command to get the groups of a user. Since the group resolution happens 
> in the servers, this might be costly. This jira aims at providing a JNI-based 
> implementation for GroupMappingServiceProvider.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6818) Provide a JNI-based implementation of GroupMappingServiceProvider

2010-09-23 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-6818:


Attachment: 6818-trunk.patch

Patch for trunk (incorporates Todd's comments too).

> Provide a JNI-based implementation of GroupMappingServiceProvider
> -
>
> Key: HADOOP-6818
> URL: https://issues.apache.org/jira/browse/HADOOP-6818
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Devaraj Das
>Assignee: Devaraj Das
> Fix For: 0.22.0
>
> Attachments: 6818-trunk.patch, hadoop-6818-1.patch, 
> hadoop-6818-2.patch, JNIGroupMapping.patch
>
>
> The default implementation of GroupMappingServiceProvider does a fork of a 
> unix command to get the groups of a user. Since the group resolution happens 
> in the servers, this might be costly. This jira aims at providing a JNI-based 
> implementation for GroupMappingServiceProvider.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6912) Guard against NPE when calling UGI.isLoginKeytabBased()

2010-08-13 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12898482#action_12898482
 ] 

Devaraj Das commented on HADOOP-6912:
-

Could you please check whether the HDFS or MR builds break, since you changed 
the signature of a public method that's probably used by those projects? (I am 
guessing they shouldn't, but do check.)

> Guard against NPE when calling UGI.isLoginKeytabBased()
> ---
>
> Key: HADOOP-6912
> URL: https://issues.apache.org/jira/browse/HADOOP-6912
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Kan Zhang
>Assignee: Kan Zhang
> Attachments: c6912-01.patch
>
>
> NPE can happen when isLoginKeytabBased() is called before a login is 
> performed. See MAPREDUCE-1992 for an example.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6905) Better logging messages when a delegation token is invalid

2010-08-13 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12898480#action_12898480
 ] 

Devaraj Das commented on HADOOP-6905:
-

+1

> Better logging messages when a delegation token is invalid
> --
>
> Key: HADOOP-6905
> URL: https://issues.apache.org/jira/browse/HADOOP-6905
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Kan Zhang
>Assignee: Kan Zhang
> Attachments: c6905-01.patch
>
>
> From our production logs, we see some logging messages of "token is expired 
> or doesn't exist". It would be helpful to know whose token it was.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6913) Circular initialization between UserGroupInformation and KerberosName

2010-08-11 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12897506#action_12897506
 ] 

Devaraj Das commented on HADOOP-6913:
-

+1

> Circular initialization between UserGroupInformation and KerberosName
> -
>
> Key: HADOOP-6913
> URL: https://issues.apache.org/jira/browse/HADOOP-6913
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Kan Zhang
>Assignee: Kan Zhang
> Attachments: c6913-01.patch
>
>
> If the first call to UGI is UGI.setConfiguration(conf), it will try to 
> initialize the UGI class. During this initialization, the code calls 
> KerberosName.setConfiguration(). KerberosName's static initializer will in 
> turn call UGI.isSecurityEnabled(). Since UGI hasn't been completely 
> initialized yet, isSecurityEnabled() will re-initialize UGI with a DEFAULT 
> conf. As a result, the original conf used in UGI.setConfiguration(conf) will 
> be overwritten by the DEFAULT conf.
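
A self-contained sketch of the cycle; the class names stand in for 
UserGroupInformation and KerberosName, and the configuration is reduced to a 
string for illustration:

    public class CircularInitSketch {
        static class Ugi { // stands in for UserGroupInformation
            static String conf;
            static boolean initialized = false;

            static void setConfiguration(String c) {
                conf = c;
                Names.setConfiguration(c); // first touch runs Names' <clinit>
                initialized = true;        // too late: conf was overwritten
            }

            static boolean isSecurityEnabled() {
                if (!initialized) {
                    conf = "DEFAULT";      // re-initializes with a default conf
                    initialized = true;
                }
                return "kerberos".equals(conf);
            }
        }

        static class Names { // stands in for KerberosName
            static {
                // Calls back into Ugi before Ugi has finished initializing.
                Ugi.isSecurityEnabled();
            }
            static void setConfiguration(String c) { /* not needed here */ }
        }

        public static void main(String[] args) {
            Ugi.setConfiguration("kerberos");
            System.out.println(Ugi.conf); // prints DEFAULT, not kerberos
        }
    }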

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6706) Relogin behavior for RPC clients could be improved

2010-08-02 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-6706:


Status: Resolved  (was: Patch Available)
Resolution: Fixed

I just committed this. Thanks, Jitendra, for the trunk patches.

> Relogin behavior for RPC clients could be improved
> --
>
> Key: HADOOP-6706
> URL: https://issues.apache.org/jira/browse/HADOOP-6706
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.22.0
>Reporter: Devaraj Das
>Assignee: Devaraj Das
> Fix For: 0.22.0
>
> Attachments: 6706-bp20-2.patch, 6706.bp20.1.patch, 6706.bp20.patch, 
> HADOOP-6706-BP20-fix1.patch, HADOOP-6706-BP20-fix2.patch, 
> HADOOP-6706-BP20-fix3.patch, HADOOP-6706.2.patch, HADOOP-6706.4.patch, 
> HADOOP-6706.5.patch, HADOOP-6706.6.patch, HADOOP-6706.7.patch, 
> HADOOP-6706.8.patch
>
>
> Currently, the relogin in the RPC client happens only on a SaslException, but 
> we have seen cases where other exceptions are thrown (like 
> IllegalStateException when the client's ticket is invalid). This jira is to 
> fix that behavior.
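
In outline, the intended behavior is something like the following. The 
SaslSetup hook is hypothetical; only the UGI relogin calls are real API, and 
the single-retry policy is an assumption of this sketch:

    import java.io.IOException;

    import org.apache.hadoop.security.UserGroupInformation;

    public class ReloginSketch {
        // Hypothetical stand-in for the RPC client's connection setup.
        interface SaslSetup {
            void connect() throws IOException;
        }

        static void connectWithRelogin(SaslSetup setup) throws IOException {
            try {
                setup.connect();
            } catch (Exception e) { // was effectively: catch (SaslException e)
                // An IllegalStateException from an invalid ticket lands here too.
                System.err.println("Connection setup failed: " + e
                    + "; re-logging in");
                UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
                if (ugi.isFromKeytab()) {
                    ugi.reloginFromKeytab();
                } else {
                    ugi.reloginFromTicketCache();
                }
                setup.connect(); // one retry with refreshed credentials
            }
        }
    }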

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6892) Common component of HDFS-1150 (Verify datanodes' identities to clients in secure clusters)

2010-08-02 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12894708#action_12894708
 ] 

Devaraj Das commented on HADOOP-6892:
-

+1


> Common component of HDFS-1150 (Verify datanodes' identities to clients in 
> secure clusters)
> --
>
> Key: HADOOP-6892
> URL: https://issues.apache.org/jira/browse/HADOOP-6892
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 0.22.0
>Reporter: Jakob Homan
>Assignee: Jakob Homan
> Fix For: 0.22.0
>
> Attachments: HADOOP-6892.patch
>
>
> HDFS-1150 will have changes to the start-up scripts and HttpServer.  These 
> are handled here.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6706) Relogin behavior for RPC clients could be improved

2010-08-02 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12894690#action_12894690
 ] 

Devaraj Das commented on HADOOP-6706:
-

It looks like, due to the order in which the patches from HADOOP-6706 and 
HADOOP-6718 got applied, there is a small problem in the part where the 
exception thrown by setupSaslConnection is handled: handleSaslConnection is no 
longer called. Please remove that call from your patch.

> Relogin behavior for RPC clients could be improved
> --
>
> Key: HADOOP-6706
> URL: https://issues.apache.org/jira/browse/HADOOP-6706
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.22.0
>Reporter: Devaraj Das
>Assignee: Devaraj Das
> Fix For: 0.22.0
>
> Attachments: 6706-bp20-2.patch, 6706.bp20.1.patch, 6706.bp20.patch, 
> HADOOP-6706-BP20-fix1.patch, HADOOP-6706-BP20-fix2.patch, 
> HADOOP-6706-BP20-fix3.patch, HADOOP-6706.2.patch, HADOOP-6706.4.patch, 
> HADOOP-6706.5.patch, HADOOP-6706.6.patch
>
>
> Currently, the relogin in the RPC client happens only on a SaslException, but 
> we have seen cases where other exceptions are thrown (like 
> IllegalStateException when the client's ticket is invalid). This jira is to 
> fix that behavior.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6632) Support for using different Kerberos keys for different instances of Hadoop services

2010-07-30 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12894170#action_12894170
 ] 

Devaraj Das commented on HADOOP-6632:
-

Yes, this was intentional. The MR patch seemed like a hack, which is why we 
didn't commit it to trunk and instead raised MAPREDUCE-1824 to discuss it... 
BTW, the problem the MR patch attempted to address will be significantly 
reduced once HADOOP-6706 is committed, since it retries on failures caused by 
false replay-attack detection by the rpc servers. MAPREDUCE-1824 takes a low 
priority.

> Support for using different Kerberos keys for different instances of Hadoop 
> services
> 
>
> Key: HADOOP-6632
> URL: https://issues.apache.org/jira/browse/HADOOP-6632
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kan Zhang
>Assignee: Kan Zhang
> Fix For: 0.22.0
>
> Attachments: 6632.mr.patch, c6632-05.patch, c6632-07.patch, 
> HADOOP-6632-Y20S-18.patch, HADOOP-6632-Y20S-22.patch
>
>
> We tested using the same Kerberos key for all datanodes in an HDFS cluster, or 
> the same Kerberos key for all TaskTrackers in a MapRed cluster. But it 
> doesn't work. The reason is that when datanodes try to authenticate to the 
> namenode all at once, the Kerberos authenticators they send to the namenode 
> may have the same timestamp and will be rejected as replay requests. This 
> JIRA makes it possible to use a unique key for each service instance.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6633) normalize property names for JT/NN kerberos principal names in configuration

2010-07-29 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-6633:


Status: Resolved  (was: Patch Available)
Resolution: Fixed

Resolving.

> normalize property names for JT/NN kerberos principal names in configuration
> 
>
> Key: HADOOP-6633
> URL: https://issues.apache.org/jira/browse/HADOOP-6633
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Boris Shkolnik
>Assignee: Boris Shkolnik
> Attachments: HADOOP-6633-BP20-1.patch, HADOOP-6633-BP20-2.patch, 
> HADOOP-6633-BP20.patch, HADOOP-6633.patch
>
>
> change:
> DFS_NAMENODE_USER_NAME_KEY = "dfs.namenode.user.name" to 
> "dfs.namenode.kerberos.user.name";
> DFS_NAMENODE_KRB_HTTPS_USER_NAME_KEY = "dfs.namenode.https.user.name" to  
> "dfs.namenode.kerberos.https.user.name"
> DFS_SECONDARY_NAMENODE_USER_NAME_KEY = "dfs.secondary.namenode.user.name" to 
> "dfs.secondary.namenode.kerberos.user.name";
> DFS_SECONDARY_NAMENODE_KRB_HTTPS_USER_NAME_KEY = 
> "dfs.secondary.namenode.https.user.name" to 
> "dfs.secondary.namenode.kerberos.https.user.name";
> DFS_DATANODE_USER_NAME_KEY = "dfs.datanode.user.name" to 
> "dfs.datanode.kerberos.user.name" 
> JT_USER_NAME = "mapreduce.jobtracker.user.name" to 
> "mapreduce.jobtracker.kerberos.user.name";
> TT_USER_NAME = "mapreduce.tasktracker.user.name" to 
> "mapreduce.tasktracker.kerberos.user.name"

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HADOOP-6653) NullPointerException in setupSaslConnection when browsing directories

2010-07-29 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das resolved HADOOP-6653.
-

Resolution: Invalid

This is not applicable in trunk anymore.

> NullPointerException in setupSaslConnection when browsing directories
> -
>
> Key: HADOOP-6653
> URL: https://issues.apache.org/jira/browse/HADOOP-6653
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Attachments: c-6653-y20.patch
>
>
> We currently get a NullPointerException when setting up a SASL RPC connection 
> as part of browsing the filesystem after being redirected to a datanode.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6656) Security framework needs to renew Kerberos tickets while the process is running

2010-07-29 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-6656:


Status: Resolved  (was: Patch Available)
Resolution: Fixed

I just committed this. Thanks, Owen, for the early patches on this.

> Security framework needs to renew Kerberos tickets while the process is 
> running
> ---
>
> Key: HADOOP-6656
> URL: https://issues.apache.org/jira/browse/HADOOP-6656
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Owen O'Malley
>Assignee: Devaraj Das
> Fix For: 0.22.0
>
> Attachments: 6656-trunk-1.patch, 6656-trunk-2.patch, 
> 6656-trunk-3.patch, 6656-trunk-4.patch, 6656-trunk-4.patch, 
> 6656-trunk-4.patch, 6656-trunk-4.patch, c-6656-y20-internal.patch, 
> refresh.patch
>
>
> While a client process is running, there should be a thread that periodically 
> renews the Kerberos credentials to ensure they don't expire.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6656) Security framework needs to renew Kerberos tickets while the process is running

2010-07-29 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-6656:


Status: Open  (was: Patch Available)

> Security framework needs to renew Kerberos tickets while the process is 
> running
> ---
>
> Key: HADOOP-6656
> URL: https://issues.apache.org/jira/browse/HADOOP-6656
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Owen O'Malley
>Assignee: Devaraj Das
> Fix For: 0.22.0
>
> Attachments: 6656-trunk-1.patch, 6656-trunk-2.patch, 
> 6656-trunk-3.patch, 6656-trunk-4.patch, 6656-trunk-4.patch, 
> 6656-trunk-4.patch, 6656-trunk-4.patch, c-6656-y20-internal.patch, 
> refresh.patch
>
>
> While a client process is running, there should be a thread that periodically 
> renews the Kerberos credentials to ensure they don't expire.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6656) Security framework needs to renew Kerberos tickets while the process is running

2010-07-29 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-6656:


Status: Patch Available  (was: Open)

> Security framework needs to renew Kerberos tickets while the process is 
> running
> ---
>
> Key: HADOOP-6656
> URL: https://issues.apache.org/jira/browse/HADOOP-6656
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Owen O'Malley
>Assignee: Devaraj Das
> Fix For: 0.22.0
>
> Attachments: 6656-trunk-1.patch, 6656-trunk-2.patch, 
> 6656-trunk-3.patch, 6656-trunk-4.patch, 6656-trunk-4.patch, 
> 6656-trunk-4.patch, 6656-trunk-4.patch, c-6656-y20-internal.patch, 
> refresh.patch
>
>
> While a client process is running, there should be a thread that periodically 
> renews the Kerberos credentials to ensure they don't expire.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6656) Security framework needs to renew Kerberos tickets while the process is running

2010-07-29 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-6656:


Attachment: 6656-trunk-4.patch

This should take care of findbugs.

> Security framework needs to renew Kerberos tickets while the process is 
> running
> ---
>
> Key: HADOOP-6656
> URL: https://issues.apache.org/jira/browse/HADOOP-6656
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Owen O'Malley
>Assignee: Devaraj Das
> Fix For: 0.22.0
>
> Attachments: 6656-trunk-1.patch, 6656-trunk-2.patch, 
> 6656-trunk-3.patch, 6656-trunk-4.patch, 6656-trunk-4.patch, 
> 6656-trunk-4.patch, 6656-trunk-4.patch, c-6656-y20-internal.patch, 
> refresh.patch
>
>
> While a client process is running, there should be a thread that periodically 
> renews the Kerberos credentials to ensure they don't expire.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6656) Security framework needs to renew Kerberos tickets while the process is running

2010-07-28 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-6656:


Status: Open  (was: Patch Available)

> Security framework needs to renew Kerberos tickets while the process is 
> running
> ---
>
> Key: HADOOP-6656
> URL: https://issues.apache.org/jira/browse/HADOOP-6656
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Owen O'Malley
>Assignee: Devaraj Das
> Fix For: 0.22.0
>
> Attachments: 6656-trunk-1.patch, 6656-trunk-2.patch, 
> 6656-trunk-3.patch, 6656-trunk-4.patch, 6656-trunk-4.patch, 
> 6656-trunk-4.patch, c-6656-y20-internal.patch, refresh.patch
>
>
> While a client process is running, there should be a thread that periodically 
> renews the Kerberos credentials to ensure they don't expire.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6656) Security framework needs to renew Kerberos tickets while the process is running

2010-07-28 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-6656:


Status: Patch Available  (was: Open)

> Security framework needs to renew Kerberos tickets while the process is 
> running
> ---
>
> Key: HADOOP-6656
> URL: https://issues.apache.org/jira/browse/HADOOP-6656
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Owen O'Malley
>Assignee: Devaraj Das
> Fix For: 0.22.0
>
> Attachments: 6656-trunk-1.patch, 6656-trunk-2.patch, 
> 6656-trunk-3.patch, 6656-trunk-4.patch, 6656-trunk-4.patch, 
> 6656-trunk-4.patch, c-6656-y20-internal.patch, refresh.patch
>
>
> While a client process is running, there should be a thread that periodically 
> renews the Kerberos credentials to ensure they don't expire.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6656) Security framework needs to renew Kerberos tickets while the process is running

2010-07-28 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-6656:


Attachment: 6656-trunk-4.patch

Attaching a patch fixing the findbugs warning. The javadoc warning is unrelated.

> Security framework needs to renew Kerberos tickets while the process is 
> running
> ---
>
> Key: HADOOP-6656
> URL: https://issues.apache.org/jira/browse/HADOOP-6656
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Owen O'Malley
>Assignee: Devaraj Das
> Fix For: 0.22.0
>
> Attachments: 6656-trunk-1.patch, 6656-trunk-2.patch, 
> 6656-trunk-3.patch, 6656-trunk-4.patch, 6656-trunk-4.patch, 
> 6656-trunk-4.patch, c-6656-y20-internal.patch, refresh.patch
>
>
> While a client process is running, there should be a thread that periodically 
> renews the Kerberos credentials to ensure they don't expire.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6656) Security framework needs to renew Kerberos tickets while the process is running

2010-07-28 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-6656:


Status: Patch Available  (was: Open)

> Security framework needs to renew Kerberos tickets while the process is 
> running
> ---
>
> Key: HADOOP-6656
> URL: https://issues.apache.org/jira/browse/HADOOP-6656
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Owen O'Malley
>Assignee: Devaraj Das
> Fix For: 0.22.0
>
> Attachments: 6656-trunk-1.patch, 6656-trunk-2.patch, 
> 6656-trunk-3.patch, 6656-trunk-4.patch, 6656-trunk-4.patch, 
> c-6656-y20-internal.patch, refresh.patch
>
>
> While a client process is running, there should be a thread that periodically 
> renews the Kerberos credentials to ensure they don't expire.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6656) Security framework needs to renew Kerberos tickets while the process is running

2010-07-28 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-6656:


Attachment: 6656-trunk-4.patch

This patch has some improved javadocs.

> Security framework needs to renew Kerberos tickets while the process is 
> running
> ---
>
> Key: HADOOP-6656
> URL: https://issues.apache.org/jira/browse/HADOOP-6656
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Owen O'Malley
>Assignee: Devaraj Das
> Fix For: 0.22.0
>
> Attachments: 6656-trunk-1.patch, 6656-trunk-2.patch, 
> 6656-trunk-3.patch, 6656-trunk-4.patch, 6656-trunk-4.patch, 
> c-6656-y20-internal.patch, refresh.patch
>
>
> While a client process is running, there should be a thread that periodically 
> renews the Kerberos credentials to ensure they don't expire.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6656) Security framework needs to renew Kerberos tickets while the process is running

2010-07-28 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-6656:


Status: Open  (was: Patch Available)

> Security framework needs to renew Kerberos tickets while the process is 
> running
> ---
>
> Key: HADOOP-6656
> URL: https://issues.apache.org/jira/browse/HADOOP-6656
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Owen O'Malley
>Assignee: Devaraj Das
> Fix For: 0.22.0
>
> Attachments: 6656-trunk-1.patch, 6656-trunk-2.patch, 
> 6656-trunk-3.patch, 6656-trunk-4.patch, 6656-trunk-4.patch, 
> c-6656-y20-internal.patch, refresh.patch
>
>
> While a client process is running, there should be a thread that periodically 
> renews the Kerberos credentials to ensure they don't expire.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6656) Security framework needs to renew Kerberos tickets while the process is running

2010-07-28 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-6656:


Status: Patch Available  (was: Open)

> Security framework needs to renew Kerberos tickets while the process is 
> running
> ---
>
> Key: HADOOP-6656
> URL: https://issues.apache.org/jira/browse/HADOOP-6656
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Owen O'Malley
>Assignee: Devaraj Das
> Fix For: 0.22.0
>
> Attachments: 6656-trunk-1.patch, 6656-trunk-2.patch, 
> 6656-trunk-3.patch, 6656-trunk-4.patch, c-6656-y20-internal.patch, 
> refresh.patch
>
>
> While a client process is running, there should be a thread that periodically 
> renews the Kerberos credentials to ensure they don't expire.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6656) Security framework needs to renew Kerberos tickets while the process is running

2010-07-28 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-6656:


Attachment: 6656-trunk-4.patch

Addressed Kan's comments.

> Security framework needs to renew Kerberos tickets while the process is 
> running
> ---
>
> Key: HADOOP-6656
> URL: https://issues.apache.org/jira/browse/HADOOP-6656
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Owen O'Malley
>Assignee: Devaraj Das
> Fix For: 0.22.0
>
> Attachments: 6656-trunk-1.patch, 6656-trunk-2.patch, 
> 6656-trunk-3.patch, 6656-trunk-4.patch, c-6656-y20-internal.patch, 
> refresh.patch
>
>
> While a client process is running, there should be a thread that periodically 
> renews the Kerberos credentials to ensure they don't expire.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6656) Security framework needs to renew Kerberos tickets while the process is running

2010-07-28 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-6656:


Status: Open  (was: Patch Available)

> Security framework needs to renew Kerberos tickets while the process is 
> running
> ---
>
> Key: HADOOP-6656
> URL: https://issues.apache.org/jira/browse/HADOOP-6656
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Owen O'Malley
>Assignee: Devaraj Das
> Fix For: 0.22.0
>
> Attachments: 6656-trunk-1.patch, 6656-trunk-2.patch, 
> 6656-trunk-3.patch, c-6656-y20-internal.patch, refresh.patch
>
>
> While a client process is running, there should be a thread that periodically 
> renews the Kerberos credentials to ensure they don't expire.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6656) Security framework needs to renew Kerberos tickets while the process is running

2010-07-27 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-6656:


Status: Open  (was: Patch Available)

> Security framework needs to renew Kerberos tickets while the process is 
> running
> ---
>
> Key: HADOOP-6656
> URL: https://issues.apache.org/jira/browse/HADOOP-6656
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Owen O'Malley
>Assignee: Devaraj Das
> Fix For: 0.22.0
>
> Attachments: 6656-trunk-1.patch, 6656-trunk-2.patch, 
> 6656-trunk-3.patch, c-6656-y20-internal.patch, refresh.patch
>
>
> While a client process is running, there should be a thread that periodically 
> renews the Kerberos credentials to ensure they don't expire.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6656) Security framework needs to renew Kerberos tickets while the process is running

2010-07-27 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-6656:


Status: Patch Available  (was: Open)

> Security framework needs to renew Kerberos tickets while the process is 
> running
> ---
>
> Key: HADOOP-6656
> URL: https://issues.apache.org/jira/browse/HADOOP-6656
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Owen O'Malley
>Assignee: Devaraj Das
> Fix For: 0.22.0
>
> Attachments: 6656-trunk-1.patch, 6656-trunk-2.patch, 
> 6656-trunk-3.patch, c-6656-y20-internal.patch, refresh.patch
>
>
> While a client process is running, there should be a thread that periodically 
> renews the Kerberos credentials to ensure they don't expire.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


