[jira] [Resolved] (HADOOP-11478) HttpFSServer does not properly impersonate a real user when executing open operation in a kerberised environment
[ https://issues.apache.org/jira/browse/HADOOP-11478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Charles Lamb resolved HADOOP-11478.
Resolution: Not a Problem

HttpFSServer does not properly impersonate a real user when executing open operation in a kerberised environment

Key: HADOOP-11478
URL: https://issues.apache.org/jira/browse/HADOOP-11478
Project: Hadoop Common
Issue Type: Bug
Affects Versions: 2.6.0
Environment: CentOS
Reporter: Ranadip
Priority: Blocker

Setup:
- Kerberos enabled in the cluster, including Hue SSO.
- Encryption enabled using KMS. Encryption key and encryption zone created. KMS key-level ACL created to allow only the real user, and no one else, access to the key.

Manifestation: Using Hue, the real user logged in using Kerberos credentials. For direct access, the user does kinit and then uses curl calls. New file creation inside the encryption zone goes ahead fine as expected, but attempts to view the contents of the file fail with the exception:

User [httpfs] is not authorized to perform [DECRYPT_EEK] on key with ACL name [mykeyname]!!

Perhaps this is linked to HDFS-6849. In the file HttpFSServer.java, the OPEN handler calls command.execute(fs) directly (and this fails). In CREATE, that call is wrapped within fsExecute(user, command). Apparently this is what causes the problem.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
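The suspected discrepancy between the OPEN and CREATE handlers can be sketched with a minimal, self-contained mock. `FSOperation`, `executeDirect`, `fsExecute`, and the user names below are illustrative stand-ins for the HttpFSServer types, not the real Hadoop API:

```java
public class ImpersonationSketch {
    // Hypothetical stand-in for an HttpFS filesystem command.
    interface FSOperation {
        String execute(String effectiveUser);
    }

    // The identity the HttpFS service itself runs as.
    static final String SERVICE_USER = "httpfs";

    // Direct execution, as in the OPEN handler: the operation runs as the
    // service user, so KMS key ACLs see "httpfs", not the end user.
    static String executeDirect(FSOperation op) {
        return op.execute(SERVICE_USER);
    }

    // Wrapped execution, as in the CREATE handler: the operation runs as the
    // authenticated end user, so key ACLs apply to the real identity.
    static String fsExecute(String realUser, FSOperation op) {
        return op.execute(realUser);
    }

    public static void main(String[] args) {
        FSOperation decrypt = user -> "DECRYPT_EEK requested by " + user;
        System.out.println(executeDirect(decrypt));      // runs as the service user
        System.out.println(fsExecute("alice", decrypt)); // runs as the real user
    }
}
```

With a key-level ACL that grants DECRYPT_EEK only to the real user, the direct form would be rejected while the wrapped form would pass, which matches the reported symptom.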
[jira] [Resolved] (HADOOP-11479) hdfs crypto -createZone fails to impersonate the real user in a kerberised environment
[ https://issues.apache.org/jira/browse/HADOOP-11479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Charles Lamb resolved HADOOP-11479.
Resolution: Not a Problem

hdfs crypto -createZone fails to impersonate the real user in a kerberised environment

Key: HADOOP-11479
URL: https://issues.apache.org/jira/browse/HADOOP-11479
Project: Hadoop Common
Issue Type: Bug
Affects Versions: 2.6.0
Environment: CentOS
Reporter: Ranadip
Attachments: KMS-test-acl.txt

The problem occurs when the KMS key-level ACL is created for the key before the encryption zone is created. The command tries to create the encryption zone using the hdfs user's identity rather than the real user's identity.

Steps, in a kerberised environment:
1. Create the key-level ACL in KMS for a new key.
2. Create the encryption key. (Goes through fine.)
3. Create the encryption zone. (Fails.)

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HADOOP-11493) Typo in kms-acls.xml description for hadoop.kms.acl.ROLLOVER
Charles Lamb created HADOOP-11493:

Summary: Typo in kms-acls.xml description for hadoop.kms.acl.ROLLOVER
Key: HADOOP-11493
URL: https://issues.apache.org/jira/browse/HADOOP-11493
Project: Hadoop Common
Issue Type: Bug
Components: kms
Affects Versions: 2.7.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Trivial

The description currently reads (note the "does is" typo):

<property>
  <name>hadoop.kms.acl.ROLLOVER</name>
  <value>*</value>
  <description>
    ACL for rollover-key operations.
    If the user does is not in the GET ACL, the key material is not
    returned as part of the response.
  </description>
</property>

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HADOOP-11455) KMS and Credential CLI should request confirmation for deletion by default
Charles Lamb created HADOOP-11455:

Summary: KMS and Credential CLI should request confirmation for deletion by default
Key: HADOOP-11455
URL: https://issues.apache.org/jira/browse/HADOOP-11455
Project: Hadoop Common
Issue Type: Bug
Components: security
Affects Versions: 2.7.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Minor

The hadoop key delete and hadoop credential delete commands currently ask for confirmation of the delete only if -i is specified. Asking for confirmation should be the default action for both.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
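A minimal sketch of the proposed confirm-by-default behavior. `confirmDelete` and the hypothetical force flag are illustrative only, not the actual KeyShell/CredentialShell code:

```java
import java.util.Scanner;

public class DeleteConfirmSketch {
    // Hypothetical sketch: prompt before deleting unless explicitly forced.
    // Returns true only when the user answers "y" or "yes".
    static boolean confirmDelete(String name, boolean force, Scanner in) {
        if (force) { // a hypothetical -f flag could skip the prompt
            return true;
        }
        System.out.print("Delete " + name + "? (y/N) ");
        String answer = in.hasNextLine() ? in.nextLine().trim().toLowerCase() : "";
        return answer.equals("y") || answer.equals("yes");
    }

    public static void main(String[] args) {
        // Simulated interactive session answering "yes".
        Scanner in = new Scanner("yes\n");
        System.out.println(confirmDelete("mykey", false, in));
    }
}
```

The point of the JIRA is simply that this prompt should be the default path rather than one gated behind -i.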
Re: Solaris Port SOLVED!
On 12/16/2014 11:01 AM, malcolm wrote:
> This is weird, Jenkins complains about:
> 1. Findbugs, 3 warnings in Java code (which of course I did not touch)

The FB warnings seem to be a recent phenomenon. I have seen them on a recent test run of my own, and they come and go depending on the run. I think they can be safely ignored. However, if you want to be sure, you could do the findbugs run on your local machine both with and without your patch applied and compare the results. If you find that there's no difference, then just put a comment in the Jira stating that.

> 2. Test failures, also with no connection to the error: a java socket timeout

Yes, probably unrelated. To be sure, run those same tests on your local machine, and if they pass, put a comment in the Jira saying that they pass on your local machine. If they fail, then run them with and without the patch to make sure they fail both ways.

Charles
Re: [VOTE] Release Apache Hadoop 2.6.0
I built from the src package, started a cluster, created an encryption zone and read/wrote data from/to it. +1 (non-binding) Charles
[jira] [Created] (HADOOP-11289) Fix typo in RpcInfo log message
Charles Lamb created HADOOP-11289:

Summary: Fix typo in RpcInfo log message
Key: HADOOP-11289
URL: https://issues.apache.org/jira/browse/HADOOP-11289
Project: Hadoop Common
Issue Type: Bug
Components: net
Affects Versions: 2.7.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Trivial

From RpcUtil.java:

LOG.info("Malfromed RPC request from " + e.getRemoteAddress());

s/Malfromed/malformed/

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HADOOP-11026) add FileSystem contract specification for FSDataInputStream and FSDataOutputStream#isEncrypted
[ https://issues.apache.org/jira/browse/HADOOP-11026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Charles Lamb resolved HADOOP-11026.
Resolution: Duplicate
Fix Version/s: 2.7.0

The doc changes were included in HDFS-6843.

add FileSystem contract specification for FSDataInputStream and FSDataOutputStream#isEncrypted

Key: HADOOP-11026
URL: https://issues.apache.org/jira/browse/HADOOP-11026
Project: Hadoop Common
Issue Type: Bug
Components: documentation, test
Affects Versions: 3.0.0, 2.6.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Minor
Fix For: 2.7.0
Attachments: HADOOP-11026-prelim.001.patch, HADOOP-11026.001.patch

Following on to HDFS-6843, the contract specification for FSDataInputStream and FSDataOutputStream needs to be updated to reflect the addition of isEncrypted.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HADOOP-11097) kms docs say proxyusers, not proxyuser for config params
Charles Lamb created HADOOP-11097:

Summary: kms docs say proxyusers, not proxyuser for config params
Key: HADOOP-11097
URL: https://issues.apache.org/jira/browse/HADOOP-11097
Project: Hadoop Common
Issue Type: Bug
Components: documentation
Affects Versions: 3.0.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Trivial

The KMS docs name the proxy parameters proxyusers instead of proxyuser.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HADOOP-11026) add FileSystem contract specification for FSDataInputStream and FSDataOutputStream#isEncrypted
Charles Lamb created HADOOP-11026:

Summary: add FileSystem contract specification for FSDataInputStream and FSDataOutputStream#isEncrypted
Key: HADOOP-11026
URL: https://issues.apache.org/jira/browse/HADOOP-11026
Project: Hadoop Common
Issue Type: Bug
Components: documentation, test
Affects Versions: 3.0.0, 2.6.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Minor

Following on to HDFS-6843, the contract specification for FSDataInputStream and FSDataOutputStream needs to be updated to reflect the addition of isEncrypted.

--
This message was sent by Atlassian JIRA (v6.2#6252)
Re: Git repo ready to use
On 8/28/2014 12:07 PM, Giridharan Kesavan wrote:
> Fixed all 3 pre-commit builds. test-patch's git reset --hard is removing the patchprocess dir, so moved it off the workspace.

Thanks Giri. Should I resubmit HDFS-6954's patch? I've gotten 3 or 4 Jenkins messages that indicated the problem, so something is resubmitting it, but now that you've fixed the builds, should I resubmit it again?

Charles
[jira] [Created] (HADOOP-11006) cp should automatically use /.reserved/raw when run by the superuser
Charles Lamb created HADOOP-11006:

Summary: cp should automatically use /.reserved/raw when run by the superuser
Key: HADOOP-11006
URL: https://issues.apache.org/jira/browse/HADOOP-11006
Project: Hadoop Common
Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Charles Lamb
Assignee: Charles Lamb

On HDFS-6134, Sanjay Radia asked for cp to automatically prepend /.reserved/raw if the cp is being performed by the superuser and /.reserved/raw is supported by both the source and destination filesystems. This behavior occurs only if none of the src and target pathnames are already /.reserved/raw paths. The -disablereservedraw flag can be used to disable this option.

--
This message was sent by Atlassian JIRA (v6.2#6252)
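The proposed path adjustment can be sketched as a small helper. `adjustPath` and its flag handling are hypothetical illustrations of the behavior described in the issue, not the real FsShell implementation:

```java
public class ReservedRawSketch {
    static final String RESERVED_RAW = "/.reserved/raw";

    // Hypothetical helper illustrating the proposed cp behavior: prepend
    // /.reserved/raw for the superuser unless the path is already a raw
    // path or the (proposed) -disablereservedraw flag was given.
    static String adjustPath(String path, boolean isSuperUser,
                             boolean disableReservedRaw) {
        if (!isSuperUser || disableReservedRaw || path.startsWith(RESERVED_RAW)) {
            return path;
        }
        return RESERVED_RAW + path;
    }

    public static void main(String[] args) {
        System.out.println(adjustPath("/user/alice/file", true, false));
        System.out.println(adjustPath("/user/alice/file", false, false));
    }
}
```

The guard against paths that already start with /.reserved/raw captures the "only if none of the pathnames are /.reserved/raw" condition from the description.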
Re: [VOTE] Merge fs-encryption branch to trunk
+1 (non-binding) I've actively worked on developing and reviewing this feature and am happy to see it in its current state. I believe it is ready to be merged. Charles
[jira] [Resolved] (HADOOP-10919) Copy command should preserve raw.* namespace extended attributes
[ https://issues.apache.org/jira/browse/HADOOP-10919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Charles Lamb resolved HADOOP-10919.
Resolution: Fixed
Fix Version/s: fs-encryption (HADOOP-10150 and HDFS-6134)

Thanks for the review [~andrew.wang]. I've committed this to the fs-encryption branch.

Copy command should preserve raw.* namespace extended attributes

Key: HADOOP-10919
URL: https://issues.apache.org/jira/browse/HADOOP-10919
Project: Hadoop Common
Issue Type: Bug
Components: fs
Affects Versions: 3.0.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Fix For: fs-encryption (HADOOP-10150 and HDFS-6134)
Attachments: HADOOP-10919.001.patch, HADOOP-10919.002.patch

Refer to the doc attached to HDFS-6509 for background. Like distcp -p (see MAPREDUCE-6007), the copy command also needs to preserve extended attributes in the raw.* namespace by default whenever the src and target are in /.reserved/raw. To not preserve raw xattrs, don't specify /.reserved/raw in either the src or target.

--
This message was sent by Atlassian JIRA (v6.2#6252)
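The preservation rule described above, that both the src and the target must be under /.reserved/raw, can be sketched as a simple predicate. `preserveRawXattrs` is a hypothetical illustration, not the actual copy-command code:

```java
public class RawXattrSketch {
    static final String RESERVED_RAW = "/.reserved/raw";

    // Hypothetical predicate for the rule in HADOOP-10919: raw.* xattrs are
    // preserved only when both the source and the target are raw paths.
    static boolean preserveRawXattrs(String src, String target) {
        return src.startsWith(RESERVED_RAW) && target.startsWith(RESERVED_RAW);
    }

    public static void main(String[] args) {
        System.out.println(preserveRawXattrs("/.reserved/raw/src", "/.reserved/raw/dst"));
        System.out.println(preserveRawXattrs("/.reserved/raw/src", "/plain/dst"));
    }
}
```

Leaving /.reserved/raw out of either side therefore opts out of raw xattr preservation, as the resolution comment notes.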
[jira] [Created] (HADOOP-10919) Copy command should preserve raw.* namespace extended attributes
Charles Lamb created HADOOP-10919:

Summary: Copy command should preserve raw.* namespace extended attributes
Key: HADOOP-10919
URL: https://issues.apache.org/jira/browse/HADOOP-10919
Project: Hadoop Common
Issue Type: Bug
Components: fs
Affects Versions: 3.0.0
Reporter: Charles Lamb
Assignee: Charles Lamb

Refer to the doc attached to HDFS-6509 for background. Like distcp -p (see MAPREDUCE-6007), the copy command also needs to preserve extended attributes in the raw.* namespace by default whenever the src and target are in /.reserved/raw. A new option to -p (preserve) that explicitly disables this behavior will be added.

--
This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (HADOOP-10628) Javadoc and few code style improvement for Crypto input and output streams
[ https://issues.apache.org/jira/browse/HADOOP-10628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Charles Lamb resolved HADOOP-10628.
Resolution: Fixed

Thanks Yi, I committed this to fs-encryption.

Javadoc and few code style improvement for Crypto input and output streams

Key: HADOOP-10628
URL: https://issues.apache.org/jira/browse/HADOOP-10628
Project: Hadoop Common
Issue Type: Sub-task
Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Yi Liu
Assignee: Yi Liu
Fix For: fs-encryption (HADOOP-10150 and HDFS-6134)
Attachments: HADOOP-10628.patch

There are some additional comments from [~clamb] related to javadoc and a few code style points on HADOOP-10603; let's fix them in this follow-on JIRA.

--
This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HADOOP-10586) KeyShell doesn't allow setting Options via CLI
Charles Lamb created HADOOP-10586:

Summary: KeyShell doesn't allow setting Options via CLI
Key: HADOOP-10586
URL: https://issues.apache.org/jira/browse/HADOOP-10586
Project: Hadoop Common
Issue Type: Bug
Components: bin
Affects Versions: 3.0.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Minor

You should be able to set any of the Options passed to the KeyProvider via the CLI.

--
This message was sent by Atlassian JIRA (v6.2#6252)
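One possible shape for the requested feature, sketched as a toy argument scanner. The flag names and the `parseOptions` helper are assumptions for illustration, not the real KeyShell CLI:

```java
import java.util.HashMap;
import java.util.Map;

public class KeyShellOptsSketch {
    // Hypothetical sketch: pull provider Options (e.g. a size or cipher
    // setting) out of a KeyShell-style argument list. Each "-flag value"
    // pair becomes a map entry that could populate KeyProvider Options.
    static Map<String, String> parseOptions(String[] args) {
        Map<String, String> opts = new HashMap<>();
        for (int i = 0; i + 1 < args.length; i++) {
            if (args[i].startsWith("-")) {
                opts.put(args[i].substring(1), args[++i]);
            }
        }
        return opts;
    }

    public static void main(String[] args) {
        Map<String, String> opts = parseOptions(
            new String[] {"create", "mykey", "-size", "256", "-cipher", "AES/CTR/NoPadding"});
        System.out.println(opts);
    }
}
```

The essence of the JIRA is just that such per-key settings should be expressible on the command line instead of being fixed by the provider defaults.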