[jira] [Created] (HADOOP-11479) hdfs crypto -createZone fails to impersonate the real user in a kerberised environment
Ranadip created HADOOP-11479:
---------------------------------

Summary: hdfs crypto -createZone fails to impersonate the real user in a kerberised environment
Key: HADOOP-11479
URL: https://issues.apache.org/jira/browse/HADOOP-11479
Project: Hadoop Common
Issue Type: Bug
Affects Versions: 2.6.0
Environment: CentOS
Reporter: Ranadip
Priority: Blocker

The problem occurs when a KMS key-level ACL is created for the key before the encryption zone is created. The command then tries to create the encryption zone under the "hdfs" user's identity instead of the real user's identity.

Steps to reproduce, in a kerberised environment:
1. Create a key-level ACL in KMS for a new key.
2. Create the encryption key. (Goes through fine.)
3. Create the encryption zone. (Fails.)

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Created] (HADOOP-11478) HttpFSServer does not properly impersonate a real user when executing "open" operation in a kerberised environment
Ranadip created HADOOP-11478:
---------------------------------

Summary: HttpFSServer does not properly impersonate a real user when executing "open" operation in a kerberised environment
Key: HADOOP-11478
URL: https://issues.apache.org/jira/browse/HADOOP-11478
Project: Hadoop Common
Issue Type: Bug
Affects Versions: 2.6.0
Environment: CentOS
Reporter: Ranadip
Priority: Blocker

Setup:
- Kerberos enabled in the cluster, including Hue SSO.
- Encryption enabled using KMS. Encryption key and encryption zone created. A KMS key-level ACL created so that only the real user has access to the key, and no one else.

Manifestation:
Using Hue, the real user logs in with Kerberos credentials. For direct access, the user does kinit and then uses curl calls. Creating a new file inside the encryption zone goes ahead fine, as expected. But attempts to view the contents of the file fail with the exception:

"User [httpfs] is not authorized to perform [DECRYPT_EEK] on key with ACL name [mykeyname]!!"

This may be linked to HDFS-6849. In HttpFSServer.java, the OPEN handler calls command.execute(fs) directly (and this fails), whereas in CREATE that call is wrapped in fsExecute(user, command). This discrepancy appears to cause the problem.
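The discrepancy described above can be illustrated with a minimal sketch. This is not the actual HttpFSServer code (which uses Hadoop's UserGroupInformation/doAs machinery and FSOperations command objects); the class, the stub decryptEek check, and the user name "alice" are all hypothetical stand-ins used only to model why the wrapped CREATE path succeeds while the direct OPEN call runs as the "httpfs" service user and is rejected by the key ACL:

```java
import java.util.concurrent.Callable;

public class ImpersonationSketch {
    // Stand-in for the effective identity; the service principal by default.
    static String currentUser = "httpfs";

    // Models fsExecute(user, command): temporarily switches the effective
    // identity to the real user before running the filesystem command,
    // analogous to UserGroupInformation.doAs in the real code.
    static <T> T fsExecute(String realUser, Callable<T> command) throws Exception {
        String previous = currentUser;
        currentUser = realUser;
        try {
            return command.call();
        } finally {
            currentUser = previous;
        }
    }

    // Hypothetical stand-in for a KMS key-ACL check: only the real user
    // ("alice" here) may perform DECRYPT_EEK on the key.
    static String decryptEek() {
        if (!currentUser.equals("alice")) {
            return "User [" + currentUser + "] is not authorized to perform [DECRYPT_EEK]";
        }
        return "ok";
    }

    public static void main(String[] args) throws Exception {
        // CREATE path: wrapped in fsExecute, runs as the real user -> succeeds.
        System.out.println("CREATE (wrapped): " + fsExecute("alice", ImpersonationSketch::decryptEek));
        // OPEN path as reported: called directly, runs as "httpfs" -> rejected.
        System.out.println("OPEN (direct):    " + decryptEek());
    }
}
```

Under this model, wrapping the OPEN handler's call in fsExecute(user, command), as CREATE already does, would make the operation run under the real user's identity.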
[jira] [Created] (HADOOP-9188) FileUtil.CopyMerge can support optional headers and footers when merging files
Ranadip created HADOOP-9188:
---------------------------------

Summary: FileUtil.CopyMerge can support optional headers and footers when merging files
Key: HADOOP-9188
URL: https://issues.apache.org/jira/browse/HADOOP-9188
Project: Hadoop Common
Issue Type: Improvement
Reporter: Ranadip

Similar to addString - which is appended at the end of each merged file - there should be an option to add rows of header strings and footer strings globally, at the beginning and end of the single merged file.

Background: My experience has been that this method is mostly used to aggregate part-files into a single file that can be sent, for example, as a data feed or a report to business customers. In such cases there is often a requirement to provide header and footer rows, with the headers describing the schema of the data and the footers (or sometimes even the headers) containing stats like total counts, bad record counts, etc. Along with HADOOP-9187, I have found myself re-implementing this method to add these features in at least 3 different use cases.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
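A sketch of the proposed behaviour, written against plain local files rather than Hadoop's FileSystem API so it is self-contained; the method name copyMergeWithHeaders and its header/footer parameters are hypothetical, while addString mirrors the existing copyMerge parameter:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class CopyMergeSketch {
    // Concatenates the part files into dst, writing `header` once at the top,
    // `addString` after each part (as the existing copyMerge does), and
    // `footer` once at the end. A null parameter means "skip".
    public static void copyMergeWithHeaders(List<Path> parts, Path dst,
            String header, String addString, String footer) throws IOException {
        try (OutputStream out = Files.newOutputStream(dst)) {
            if (header != null) {
                out.write(header.getBytes(StandardCharsets.UTF_8));
            }
            for (Path part : parts) {
                Files.copy(part, out);
                if (addString != null) {
                    out.write(addString.getBytes(StandardCharsets.UTF_8));
                }
            }
            if (footer != null) {
                out.write(footer.getBytes(StandardCharsets.UTF_8));
            }
        }
    }
}
```

For the feed-file use case described above, the header would carry the column schema (e.g. "name,count") and the footer a total-count line, written exactly once around the concatenated part-files.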
[jira] [Created] (HADOOP-9187) FileUtil.CopyMerge should handle compressed input and output
Ranadip created HADOOP-9187:
---------------------------------

Summary: FileUtil.CopyMerge should handle compressed input and output
Key: HADOOP-9187
URL: https://issues.apache.org/jira/browse/HADOOP-9187
Project: Hadoop Common
Issue Type: Improvement
Components: fs
Reporter: Ranadip

If run on compressed input, this method produces corrupt output, since it does a byte-by-byte concatenation that disregards compression codecs. It should automatically detect the compression codec of each input file and handle it intelligently. Additionally, there should be an option to create a compressed output file, so that the output can be stored efficiently and sent to customers (over the network, outside the cluster).
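The idea can be sketched with java.util.zip for gzip only; a real Hadoop implementation would instead use CompressionCodecFactory to detect each input's codec (gzip, bzip2, snappy, ...) by file extension. Each compressed input is decompressed before concatenation, and the merged output is re-compressed as a single valid gzip stream, avoiding the corruption that raw byte concatenation of gzip members with trailing metadata can cause for downstream consumers:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class CompressedMergeSketch {
    // Merges gzipped part files into a single gzipped output file.
    public static void mergeGzip(List<Path> parts, Path dst) throws IOException {
        try (OutputStream raw = Files.newOutputStream(dst);
             GZIPOutputStream out = new GZIPOutputStream(raw)) {
            for (Path part : parts) {
                // Decompress each part, then re-compress into the single output stream.
                try (InputStream in = new GZIPInputStream(Files.newInputStream(part))) {
                    in.transferTo(out);
                }
            }
        }
    }
}
```

The same decompress-then-recompress structure generalises to codec detection per input file and an optional output codec, as the improvement proposes.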