[jira] [Commented] (HDFS-15778) distcp supports basic/kerberos authentication over knox gateway

2021-01-19 Thread Larry McCay (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17268125#comment-17268125
 ] 

Larry McCay commented on HDFS-15778:


Hi [~pushpendrasingh]!
I assume this is a broader issue with the Hadoop/dfs Java client not expecting 
to be challenged for authentication.
I believe this is based on the fact that it expects the redirect to the DN to 
only use the block access token rather than any other set of credentials.
Since Knox doesn't differentiate the DN endpoint from the NN endpoint, the 
authentication configured for the hosting topology is used universally.
A change to the Java client would not only address distcp but also any other 
dfs CLI command that currently needs to interact with DNs.

Did you plan to provide such a patch here?
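
To make the failure mode concrete, here is a minimal sketch of the two-hop 
WebHDFS exchange the client performs today (plain HttpURLConnection; the 
gateway URL, topology, and host names are hypothetical, and authentication on 
the first hop is omitted for brevity):

{code}
import java.net.HttpURLConnection;
import java.net.URL;

public class WebHdfsRedirectSketch {
  public static void main(String[] args) throws Exception {
    // Hop 1: the NN endpoint (here fronted by a Knox topology) answers the
    // OPEN with a 307 redirect; in a secured cluster the redirect URL
    // carries the block access token as a query parameter.
    URL nn = new URL(
        "https://knox.example.com:8443/gateway/sandbox/webhdfs/v1/tmp/f?op=OPEN");
    HttpURLConnection c1 = (HttpURLConnection) nn.openConnection();
    c1.setInstanceFollowRedirects(false);
    String dnLocation = c1.getHeaderField("Location");

    // Hop 2: the client follows the redirect and expects the block access
    // token alone to authorize the read. If the gateway challenges here
    // (401 + WWW-Authenticate) instead, the current client has no code
    // path to answer it - which is the breakage described above.
    HttpURLConnection c2 =
        (HttpURLConnection) new URL(dnLocation).openConnection();
    System.out.println("DN hop status: " + c2.getResponseCode());
  }
}
{code}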

>  distcp supports basic/kerberos authentication over knox gateway
> 
>
> Key: HDFS-15778
> URL: https://issues.apache.org/jira/browse/HDFS-15778
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Affects Versions: 3.2.2
>Reporter: Pushpendra Singh
>Priority: Minor
>
> Distcp doesn't support copying data using basic/Kerberos authentication over 
> a Knox gateway.
> If the source/target cluster has a secured (basic auth or Kerberos auth) Knox 
> gateway configured, distcp cannot be used to copy the data.
>  






[jira] [Commented] (HDFS-13495) RBF: Support Router Admin REST API

2018-04-25 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16453306#comment-16453306
 ] 

Larry McCay commented on HDFS-13495:


[~chris.douglas] - I don't know of any particular reason that admin REST APIs 
should be avoided.

It is possible that many admin-like operations require a restart, which would 
be awkward or impossible to fully drive via REST APIs?

> RBF: Support Router Admin REST API
> --
>
> Key: HDFS-13495
> URL: https://issues.apache.org/jira/browse/HDFS-13495
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Mohammad Arshad
>Priority: Major
>  Labels: RBF
>
> This JIRA intends to add REST API support for all admin commands. Router 
> Admin REST APIs can be useful in managing the Routers from a central 
> management layer tool. 






[jira] [Commented] (HDFS-10579) HDFS web interfaces lack configs for X-FRAME-OPTIONS protection

2016-07-12 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15373453#comment-15373453
 ] 

Larry McCay commented on HDFS-10579:


I think that should be it.
There isn't some reason to keep it out of 2.8 that I am missing - is there?

> HDFS web interfaces lack configs for X-FRAME-OPTIONS protection
> ---
>
> Key: HDFS-10579
> URL: https://issues.apache.org/jira/browse/HDFS-10579
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: 2.9.0
>
> Attachments: HDFS-10579.001.patch, HDFS-10579.002.patch, 
> HDFS-10579.003.patch
>
>
> This JIRA proposes to extend the work done in HADOOP-12964 and enable a 
> configuration value that enables or disables that option.
> This allows HDFS to remain backward compatible as required by the branch-2.






[jira] [Commented] (HDFS-10579) HDFS web interfaces lack configs for X-FRAME-OPTIONS protection

2016-07-11 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372063#comment-15372063
 ] 

Larry McCay commented on HDFS-10579:


[~jnp] - do we need this in branch-2.8 as well?

> HDFS web interfaces lack configs for X-FRAME-OPTIONS protection
> ---
>
> Key: HDFS-10579
> URL: https://issues.apache.org/jira/browse/HDFS-10579
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: 2.9.0
>
> Attachments: HDFS-10579.001.patch, HDFS-10579.002.patch, 
> HDFS-10579.003.patch
>
>
> This JIRA proposes to extend the work done in HADOOP-12964 and enable a 
> configuration value that enables or disables that option.
> This allows HDFS to remain backward compatible as required by the branch-2.






[jira] [Commented] (HDFS-10579) HDFS web interfaces lack configs for X-FRAME-OPTIONS protection

2016-07-07 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15366916#comment-15366916
 ] 

Larry McCay commented on HDFS-10579:


[~anu] - This looks good.
I will review the new patches when they arrive as well.

Thanks for adding this!

> HDFS web interfaces lack configs for X-FRAME-OPTIONS protection
> ---
>
> Key: HDFS-10579
> URL: https://issues.apache.org/jira/browse/HDFS-10579
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, namenode
>Affects Versions: 3.0.0-alpha1
>Reporter: Anu Engineer
>Assignee: Anu Engineer
> Fix For: 2.9.0
>
> Attachments: HDFS-10579.001.patch, HDFS-10579.002.patch
>
>
> This JIRA proposes to extend the work done in HADOOP-12964 and enable a 
> configuration value that enables or disables that option. This JIRA will also 
> add an ability to pick the right x-frame-option, since right now it looks 
> like we have hardcoded that to SAMEORIGIN.
> This allows HDFS to remain backward compatible as required by the branch-2.






[jira] [Commented] (HDFS-9711) Integrate CSRF prevention filter in WebHDFS.

2016-02-12 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15144513#comment-15144513
 ] 

Larry McCay commented on HDFS-9711:
---

I am much more inclined to try and make v004 work than go back to v003.

What do you think about going with option #2 and also pulling the 
handleHttpInteraction out into a CsrfUtils class?
This makes it less odd that it is all encapsulated in the same impl and a 
little clearer that the handler is used by multiple classes.

Perhaps CsrfUtils.handleRestHttpInteraction(HttpInteraction interaction), with 
the anticipation of a CsrfUtils.handleWebAppHttpInteraction(HttpInteraction 
interaction) later?

The webapp one would have to be able to compare a session value of the header 
to the actual value sent by the client - which would be a new constructor 
argument on ServletFilterHttpInteraction/NettyHttpInteraction.

We could also just overload the method with the additional parameter of the 
value to check against and leave it as handleHttpInteraction(HttpInteraction 
interaction, String nonce).

Anyway, I think that some simple separation with a Utils class would help make 
it more readable as well.
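
Just to make the shape of that concrete, a rough sketch of the utility I have 
in mind (names hypothetical, assuming HttpInteraction is the interface already 
defined in the filter, and glossing over the methods-to-ignore check the 
filter already does):

{code}
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletResponse;

/** Hypothetical home for the shared CSRF handling logic. */
public final class CsrfUtils {

  private CsrfUtils() {
  }

  /** REST flavor: only checks that the expected header is present. */
  public static void handleRestHttpInteraction(HttpInteraction interaction,
      String headerName) throws IOException, ServletException {
    if (interaction.getHeader(headerName) != null) {
      interaction.proceed();
    } else {
      interaction.sendError(HttpServletResponse.SC_BAD_REQUEST,
          "Missing Required Header for CSRF Vulnerability Protection");
    }
  }

  /** Webapp flavor: also compares the header to a session-held nonce. */
  public static void handleWebAppHttpInteraction(HttpInteraction interaction,
      String headerName, String nonce)
      throws IOException, ServletException {
    String value = interaction.getHeader(headerName);
    if (value != null && value.equals(nonce)) {
      interaction.proceed();
    } else {
      interaction.sendError(HttpServletResponse.SC_BAD_REQUEST,
          "Missing Required Header for CSRF Vulnerability Protection");
    }
  }
}
{code}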

> Integrate CSRF prevention filter in WebHDFS.
> 
>
> Key: HDFS-9711
> URL: https://issues.apache.org/jira/browse/HDFS-9711
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode, webhdfs
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-9711.001.patch, HDFS-9711.002.patch, 
> HDFS-9711.003.patch, HDFS-9711.004.patch
>
>
> HADOOP-12691 introduced a filter in Hadoop Common to help REST APIs guard 
> against cross-site request forgery attacks.  This issue tracks integration of 
> that filter in WebHDFS.





[jira] [Commented] (HDFS-9711) Integrate CSRF prevention filter in WebHDFS.

2016-02-12 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145008#comment-15145008
 ] 

Larry McCay commented on HDFS-9711:
---

No, I don't think it is necessary to go that far.

+1 for removing the anonymous inner classes.




> Integrate CSRF prevention filter in WebHDFS.
> 
>
> Key: HDFS-9711
> URL: https://issues.apache.org/jira/browse/HDFS-9711
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode, webhdfs
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-9711.001.patch, HDFS-9711.002.patch, 
> HDFS-9711.003.patch, HDFS-9711.004.patch
>
>
> HADOOP-12691 introduced a filter in Hadoop Common to help REST APIs guard 
> against cross-site request forgery attacks.  This issue tracks integration of 
> that filter in WebHDFS.





[jira] [Commented] (HDFS-9711) Integrate CSRF prevention filter in WebHDFS.

2016-02-12 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15145663#comment-15145663
 ] 

Larry McCay commented on HDFS-9711:
---

I don't know exactly why, but I think it is much better now.
It may just be that I am no longer drawn into trying to understand what is in 
the anonymous class.

+1

> Integrate CSRF prevention filter in WebHDFS.
> 
>
> Key: HDFS-9711
> URL: https://issues.apache.org/jira/browse/HDFS-9711
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode, webhdfs
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-9711.001.patch, HDFS-9711.002.patch, 
> HDFS-9711.003.patch, HDFS-9711.004.patch, HDFS-9711.005.patch
>
>
> HADOOP-12691 introduced a filter in Hadoop Common to help REST APIs guard 
> against cross-site request forgery attacks.  This issue tracks integration of 
> that filter in WebHDFS.





[jira] [Commented] (HDFS-9711) Integrate CSRF prevention filter in WebHDFS.

2016-02-11 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15144011#comment-15144011
 ] 

Larry McCay commented on HDFS-9711:
---

Hey [~cnauroth] - I love the objectives of this and can see its elegance. I do 
also agree that it makes it hard to read. There is something especially odd 
about the indirection being completely built into the CSRF filter itself as 
well as the handler of the HttpInteraction that makes you really wonder why it 
is done this way. I would find myself wanting to simplify it and not realize 
that some other component was using the handler method with another 
implementation of HttpInteraction.

You do, however, document the HttpInteraction pretty well in the same file. So 
maybe it is fine.

I am curious about the following line in the javadoc for the interface though:

{quote}
+   * Typical usage of the filter inside a servlet container will not need to 
use
+   * this interface.
{quote}

The HttpInteraction is certainly being used from doFilter. Are you saying that 
there is no code other than the doFilter implementation that will need to use 
the HttpInteraction instance directly? That seems to make sense.

Removing the anonymous extension may help make it more readable.

Instead of:

{quote}
+handleHttpInteraction(new HttpInteraction() {
+@Override
+public String getHeader(String header) {
+  return httpRequest.getHeader(header);
+}
+
+@Override
+public String getMethod() {
+  return httpRequest.getMethod();
+}
+
+@Override
+public void proceed() throws IOException, ServletException {
+  chain.doFilter(httpRequest, httpResponse);
+}
+
+@Override
+public void sendError(int code, String message) throws IOException {
+  httpResponse.sendError(code, message);
+}
+});
{quote}

We could create a concrete instance of ServletFilterHttpInteraction, like:

{quote}
handleHttpInteraction(new ServletFilterHttpInteraction(request, response, 
chain));
{quote}

and:

{quote}
handleHttpInteraction(new NettyHttpInteraction(ctx, req));
{quote}

Do you think it would help?
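
To illustrate, the servlet-side concrete class could look roughly like this (a 
sketch only - field and constructor shapes are my guess, not what is in the 
patch):

{code}
import java.io.IOException;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/** Servlet-backed implementation of the filter's HttpInteraction. */
final class ServletFilterHttpInteraction implements HttpInteraction {

  private final HttpServletRequest request;
  private final HttpServletResponse response;
  private final FilterChain chain;

  ServletFilterHttpInteraction(HttpServletRequest request,
      HttpServletResponse response, FilterChain chain) {
    this.request = request;
    this.response = response;
    this.chain = chain;
  }

  @Override
  public String getHeader(String header) {
    return request.getHeader(header);
  }

  @Override
  public String getMethod() {
    return request.getMethod();
  }

  @Override
  public void proceed() throws IOException, ServletException {
    chain.doFilter(request, response);
  }

  @Override
  public void sendError(int code, String message) throws IOException {
    response.sendError(code, message);
  }
}
{code}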


> Integrate CSRF prevention filter in WebHDFS.
> 
>
> Key: HDFS-9711
> URL: https://issues.apache.org/jira/browse/HDFS-9711
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode, webhdfs
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-9711.001.patch, HDFS-9711.002.patch, 
> HDFS-9711.003.patch, HDFS-9711.004.patch
>
>
> HADOOP-12691 introduced a filter in Hadoop Common to help REST APIs guard 
> against cross-site request forgery attacks.  This issue tracks integration of 
> that filter in WebHDFS.





[jira] [Commented] (HDFS-9711) Integrate CSRF prevention filter in WebHDFS.

2016-02-09 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15140195#comment-15140195
 ] 

Larry McCay commented on HDFS-9711:
---

Hi [~cnauroth] - Looks great! 
The effort to add a filter for webhdfs is greater than I anticipated.

A couple of quick things:

* I like the refactoring for an isRequestAllowed method on the filter - I 
actually meant to go back and do that earlier
* I notice that you have to return your own error message in the channelRead0 
method of RestCsrfPreventionFilterHandler. Perhaps we should provide a 
constant for that in the filter too. As it stands now, the message you return 
is slightly different and a bit more ambiguous than what is returned by the 
filter itself (which is why I changed it).
* I'd also like to understand why the typical filter processing isn't being 
used in this code path. Not because I think it should be, but because I'd like 
to understand the usecase here.

> Integrate CSRF prevention filter in WebHDFS.
> 
>
> Key: HDFS-9711
> URL: https://issues.apache.org/jira/browse/HDFS-9711
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode, webhdfs
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-9711.001.patch, HDFS-9711.002.patch, 
> HDFS-9711.003.patch
>
>
> HADOOP-12691 introduced a filter in Hadoop Common to help REST APIs guard 
> against cross-site request forgery attacks.  This issue tracks integration of 
> that filter in WebHDFS.





[jira] [Commented] (HDFS-9711) Integrate CSRF prevention filter in WebHDFS.

2016-02-05 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15134615#comment-15134615
 ] 

Larry McCay commented on HDFS-9711:
---

Heads up, [~cnauroth] - my latest patch changed the error messages slightly in 
order to make it clearer which vulnerability check is in violation: I changed 
"Missing Required Header for Vulnerability Protection" to "Missing Required 
Header for CSRF Vulnerability Protection".

Sorry for the inconvenience. If this is troublesome or you don't feel it is 
needed, I can revert that change.

The configurability of the header name is, I think, just a general convenience. 
Some shops have very strict guidelines on what they do for certain things. If 
they wanted to always use the same header for CSRF protection for the 
convenience of the app developers then they could configure the CSRF filter 
across the platform to expect the same header. If a shop has some notion that 
headers per component make sense then they could do that as well. Otherwise, I 
would have expected the default to be used.

From a platform perspective, I would rather the same header be used across the 
board so as not to put too much of a burden on an app that must communicate 
with many - maybe like Ambari might have to?  Finding out and keeping track of 
the header name for each component in every deployment may be a lot.

The fact of the matter is that it really doesn't matter from a security 
perspective what the name is, as long as it is what is configured for the 
filter enforcing the CSRF protection. We are really just ensuring that the 
request is coming from a client that has the ability to set a header. We just 
have to know what name to look for.
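
As a concrete example of that last point, a header-capable client only needs 
to send whatever header name the cluster configured; the value itself is not 
checked. A sketch (the host is hypothetical; "X-XSRF-HEADER" is, as I recall, 
the filter's default name):

{code}
import java.net.HttpURLConnection;
import java.net.URL;

public class CsrfHeaderClientSketch {
  public static void main(String[] args) throws Exception {
    URL url = new URL(
        "http://nn.example.com:50070/webhdfs/v1/tmp/d?op=MKDIRS");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("PUT");
    // Presence of the configured header is the whole check - it proves the
    // request came from a client that can set headers, not a browser form.
    conn.setRequestProperty("X-XSRF-HEADER", "true");
    System.out.println("status: " + conn.getResponseCode());
  }
}
{code}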

> Integrate CSRF prevention filter in WebHDFS.
> 
>
> Key: HDFS-9711
> URL: https://issues.apache.org/jira/browse/HDFS-9711
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode, webhdfs
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-9711.001.patch, HDFS-9711.002.patch
>
>
> HADOOP-12691 introduced a filter in Hadoop Common to help REST APIs guard 
> against cross-site request forgery attacks.  This issue tracks integration of 
> that filter in WebHDFS.





[jira] [Commented] (HDFS-9711) Integrate CSRF prevention filter in WebHDFS.

2016-01-29 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124034#comment-15124034
 ] 

Larry McCay commented on HDFS-9711:
---

Excellent summary, [~cnauroth]!

> Integrate CSRF prevention filter in WebHDFS.
> 
>
> Key: HDFS-9711
> URL: https://issues.apache.org/jira/browse/HDFS-9711
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode, namenode, webhdfs
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
> Attachments: HDFS-9711.001.patch, HDFS-9711.002.patch
>
>
> HADOOP-12691 introduced a filter in Hadoop Common to help REST APIs guard 
> against cross-site request forgery attacks.  This issue tracks integration of 
> that filter in WebHDFS.





[jira] [Commented] (HDFS-6134) Transparent data at rest encryption

2014-08-13 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14095470#comment-14095470
 ] 

Larry McCay commented on HDFS-6134:
---

I guess if webhdfs allows the 'hdfs' user to doAs an end user, then that can 
be a problem.
But again, I don't see what keeps an admin from doing that with httpfs as well.

It seems as though KMS needs the ability to not allow the 'hdfs' user to gain 
keys through any trusted proxy, while still allowing a trusted proxy that is 
running as a superuser to doAs other users.

 Transparent data at rest encryption
 ---

 Key: HDFS-6134
 URL: https://issues.apache.org/jira/browse/HDFS-6134
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: security
Affects Versions: 3.0.0, 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Charles Lamb
 Attachments: HDFS-6134.001.patch, HDFS-6134.002.patch, 
 HDFS-6134_test_plan.pdf, HDFSDataatRestEncryption.pdf, 
 HDFSDataatRestEncryptionProposal_obsolete.pdf, 
 HDFSEncryptionConceptualDesignProposal-2014-06-20.pdf


 Because of privacy and security regulations, for many industries, sensitive 
 data at rest must be in encrypted form. For example: the health-care industry 
 (HIPAA regulations), the card payment industry (PCI DSS regulations) or the 
 US government (FISMA regulations).
 This JIRA aims to provide a mechanism to encrypt HDFS data at rest that can 
 be used transparently by any application accessing HDFS via Hadoop Filesystem 
 Java API, Hadoop libhdfs C library, or WebHDFS REST API.
 The resulting implementation should be able to be used in compliance with 
 different regulation requirements.





[jira] [Commented] (HDFS-6134) Transparent data at rest encryption

2014-08-13 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14095719#comment-14095719
 ] 

Larry McCay commented on HDFS-6134:
---

Thanks, [~tucu00] - that is pretty clear.

The question that remains for me is why this same scenario isn't achievable by 
the admin kinit'ing as httpfs/HOST or Oozie or some other trusted proxy and 
then issuing a request with a doAs user X.

We have to somehow fix this for webhdfs - it is an expected and valuable API 
and should remain so with encrypted files without introducing a vulnerability.

Even if we have to do something like use another proxy (like Knox) and a shared 
secret to ensure that there is additional verification of the origin of a KMS 
request from webhdfs. This would enable proxies to access webhdfs resources 
with a signed/encrypted token - if KMS gets a signed request from webhdfs that 
it can verify then it can proceed. The shared secret can be made available 
through the credential provider API and webhdfs itself would just see it as an 
opaque token that needs to be passed in the KMS request. Requiring an extra hop 
for this access would be unfortunate too but if it is for additional security 
of the data it may be acceptable.

Anyway, that's just a thought for keeping webhdfs as a first class citizen. We 
have to do something.
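
To sketch the shared-secret idea (purely illustrative - the class and method 
names are made up): webhdfs would attach an HMAC over something 
request-specific, computed with a secret obtained via the credential provider 
API, and KMS would recompute it with the same secret before proceeding.

{code}
import java.nio.charset.StandardCharsets;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import javax.xml.bind.DatatypeConverter;

public class SignedKmsRequestSketch {
  /**
   * Produce the opaque token webhdfs would pass along in the KMS request.
   * The shared secret itself would come from the credential provider API,
   * so webhdfs never sees more than an opaque signing key.
   */
  public static String sign(byte[] sharedSecret, String requestId)
      throws Exception {
    Mac mac = Mac.getInstance("HmacSHA256");
    mac.init(new SecretKeySpec(sharedSecret, "HmacSHA256"));
    byte[] sig = mac.doFinal(requestId.getBytes(StandardCharsets.UTF_8));
    return DatatypeConverter.printHexBinary(sig);
  }
}
{code}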

 Transparent data at rest encryption
 ---

 Key: HDFS-6134
 URL: https://issues.apache.org/jira/browse/HDFS-6134
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: security
Affects Versions: 3.0.0, 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Charles Lamb
 Attachments: HDFS-6134.001.patch, HDFS-6134.002.patch, 
 HDFS-6134_test_plan.pdf, HDFSDataatRestEncryption.pdf, 
 HDFSDataatRestEncryptionProposal_obsolete.pdf, 
 HDFSEncryptionConceptualDesignProposal-2014-06-20.pdf


 Because of privacy and security regulations, for many industries, sensitive 
 data at rest must be in encrypted form. For example: the health-care industry 
 (HIPAA regulations), the card payment industry (PCI DSS regulations) or the 
 US government (FISMA regulations).
 This JIRA aims to provide a mechanism to encrypt HDFS data at rest that can 
 be used transparently by any application accessing HDFS via Hadoop Filesystem 
 Java API, Hadoop libhdfs C library, or WebHDFS REST API.
 The resulting implementation should be able to be used in compliance with 
 different regulation requirements.





[jira] [Commented] (HDFS-6134) Transparent data at rest encryption

2014-08-13 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14096177#comment-14096177
 ] 

Larry McCay commented on HDFS-6134:
---

And that is ensured by file permissions on the keytab?





 Transparent data at rest encryption
 ---

 Key: HDFS-6134
 URL: https://issues.apache.org/jira/browse/HDFS-6134
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: security
Affects Versions: 3.0.0, 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Charles Lamb
 Attachments: HDFS-6134.001.patch, HDFS-6134.002.patch, 
 HDFS-6134_test_plan.pdf, HDFSDataatRestEncryption.pdf, 
 HDFSDataatRestEncryptionProposal_obsolete.pdf, 
 HDFSEncryptionConceptualDesignProposal-2014-06-20.pdf


 Because of privacy and security regulations, for many industries, sensitive 
 data at rest must be in encrypted form. For example: the health-care industry 
 (HIPAA regulations), the card payment industry (PCI DSS regulations) or the 
 US government (FISMA regulations).
 This JIRA aims to provide a mechanism to encrypt HDFS data at rest that can 
 be used transparently by any application accessing HDFS via Hadoop Filesystem 
 Java API, Hadoop libhdfs C library, or WebHDFS REST API.
 The resulting implementation should be able to be used in compliance with 
 different regulation requirements.





[jira] [Commented] (HDFS-6134) Transparent data at rest encryption

2014-08-12 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094621#comment-14094621
 ] 

Larry McCay commented on HDFS-6134:
---

[~sanjay.radia] that sounds right to me. 
In fact, that would be the only way for Knox to be able to access files in HDFS 
through webhdfs.
IMO, relegating webhdfs to being an audit violation should be a showstopper.

The hdfs user should not be able to access the keys, but an end user with 
appropriate permissions should be given access through webhdfs.

 Transparent data at rest encryption
 ---

 Key: HDFS-6134
 URL: https://issues.apache.org/jira/browse/HDFS-6134
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: security
Affects Versions: 3.0.0, 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Charles Lamb
 Attachments: HDFS-6134.001.patch, HDFS-6134.002.patch, 
 HDFS-6134_test_plan.pdf, HDFSDataatRestEncryption.pdf, 
 HDFSDataatRestEncryptionProposal_obsolete.pdf, 
 HDFSEncryptionConceptualDesignProposal-2014-06-20.pdf


 Because of privacy and security regulations, for many industries, sensitive 
 data at rest must be in encrypted form. For example: the health-care industry 
 (HIPAA regulations), the card payment industry (PCI DSS regulations) or the 
 US government (FISMA regulations).
 This JIRA aims to provide a mechanism to encrypt HDFS data at rest that can 
 be used transparently by any application accessing HDFS via Hadoop Filesystem 
 Java API, Hadoop libhdfs C library, or WebHDFS REST API.
 The resulting implementation should be able to be used in compliance with 
 different regulation requirements.





[jira] [Commented] (HDFS-6134) Transparent data at rest encryption

2014-08-12 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14095129#comment-14095129
 ] 

Larry McCay commented on HDFS-6134:
---

Hey [~tucu00] - I need a little more clarification here. When you describe 
webhdfs authenticating as 'hdfs' while it is accessing a file on behalf of an 
end user - are you referring to the fact that the services authenticate to one 
another even though the effective user (via doAs) will be the end user and 
therefore the authorization will be checking the end user's permissions? If so, 
isn't this the same for httpfs?

What keeps an admin from using httpfs to gain access to decrypt encrypted 
files? If an admin can authenticate as an end user to either proxy then it 
seems they will be able to gain access.

I must be missing some nuance about webhdfs and the hdfs user.
That doesn't lessen my concern about webhdfs not being considered a trusted API 
to encrypted files though.

 Transparent data at rest encryption
 ---

 Key: HDFS-6134
 URL: https://issues.apache.org/jira/browse/HDFS-6134
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: security
Affects Versions: 3.0.0, 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Charles Lamb
 Attachments: HDFS-6134.001.patch, HDFS-6134.002.patch, 
 HDFS-6134_test_plan.pdf, HDFSDataatRestEncryption.pdf, 
 HDFSDataatRestEncryptionProposal_obsolete.pdf, 
 HDFSEncryptionConceptualDesignProposal-2014-06-20.pdf


 Because of privacy and security regulations, for many industries, sensitive 
 data at rest must be in encrypted form. For example: the health-care industry 
 (HIPAA regulations), the card payment industry (PCI DSS regulations) or the 
 US government (FISMA regulations).
 This JIRA aims to provide a mechanism to encrypt HDFS data at rest that can 
 be used transparently by any application accessing HDFS via Hadoop Filesystem 
 Java API, Hadoop libhdfs C library, or WebHDFS REST API.
 The resulting implementation should be able to be used in compliance with 
 different regulation requirements.





[jira] [Commented] (HDFS-6790) DFSUtil Should Use configuration.getPassword for SSL passwords

2014-08-05 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14086819#comment-14086819
 ] 

Larry McCay commented on HDFS-6790:
---

Hi [~devaraj] or [~sanjay.radia] can I bother one of you for a quick review and 
possible commit for this patch? It will need to go to branch-2 as well.

 DFSUtil Should Use configuration.getPassword for SSL passwords
 --

 Key: HDFS-6790
 URL: https://issues.apache.org/jira/browse/HDFS-6790
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Larry McCay
 Attachments: HDFS-6790.patch, HDFS-6790.patch, HDFS-6790.patch


 As part of HADOOP-10904, DFSUtil should be changed to leverage the new method 
 on Configuration for acquiring known passwords for SSL. The getPassword 
 method will leverage the credential provider API and/or fallback to the clear 
 text value stored in ssl-server.xml.
 This will provide an alternative to clear text passwords on disk while 
 maintaining backward compatibility for this behavior.





[jira] [Commented] (HDFS-6790) DFSUtil Should Use configuration.getPassword for SSL passwords

2014-08-05 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14087108#comment-14087108
 ] 

Larry McCay commented on HDFS-6790:
---

Thank you, [~brandonli]!

 DFSUtil Should Use configuration.getPassword for SSL passwords
 --

 Key: HDFS-6790
 URL: https://issues.apache.org/jira/browse/HDFS-6790
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Larry McCay
Assignee: Larry McCay
 Fix For: 2.6.0

 Attachments: HDFS-6790.patch, HDFS-6790.patch, HDFS-6790.patch


 As part of HADOOP-10904, DFSUtil should be changed to leverage the new method 
 on Configuration for acquiring known passwords for SSL. The getPassword 
 method will leverage the credential provider API and/or fallback to the clear 
 text value stored in ssl-server.xml.
 This will provide an alternative to clear text passwords on disk while 
 maintaining backward compatibility for this behavior.





[jira] [Updated] (HDFS-6790) DFSUtil Should Use configuration.getPassword for SSL passwords

2014-08-04 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HDFS-6790:
--

Status: Open  (was: Patch Available)

 DFSUtil Should Use configuration.getPassword for SSL passwords
 --

 Key: HDFS-6790
 URL: https://issues.apache.org/jira/browse/HDFS-6790
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Larry McCay
 Attachments: HDFS-6790.patch, HDFS-6790.patch


 As part of HADOOP-10904, DFSUtil should be changed to leverage the new method 
 on Configuration for acquiring known passwords for SSL. The getPassword 
 method will leverage the credential provider API and/or fallback to the clear 
 text value stored in ssl-server.xml.
 This will provide an alternative to clear text passwords on disk while 
 maintaining backward compatibility for this behavior.





[jira] [Updated] (HDFS-6790) DFSUtil Should Use configuration.getPassword for SSL passwords

2014-08-04 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HDFS-6790:
--

Status: Patch Available  (was: Open)

 DFSUtil Should Use configuration.getPassword for SSL passwords
 --

 Key: HDFS-6790
 URL: https://issues.apache.org/jira/browse/HDFS-6790
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Larry McCay
 Attachments: HDFS-6790.patch, HDFS-6790.patch


 As part of HADOOP-10904, DFSUtil should be changed to leverage the new method 
 on Configuration for acquiring known passwords for SSL. The getPassword 
 method will leverage the credential provider API and/or fallback to the clear 
 text value stored in ssl-server.xml.
 This will provide an alternative to clear text passwords on disk while 
 maintaining backward compatibility for this behavior.





[jira] [Updated] (HDFS-6790) DFSUtil Should Use configuration.getPassword for SSL passwords

2014-08-04 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HDFS-6790:
--

Attachment: HDFS-6790.patch

Resubmitting for another Jenkins run.

 DFSUtil Should Use configuration.getPassword for SSL passwords
 --

 Key: HDFS-6790
 URL: https://issues.apache.org/jira/browse/HDFS-6790
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Larry McCay
 Attachments: HDFS-6790.patch, HDFS-6790.patch, HDFS-6790.patch


 As part of HADOOP-10904, DFSUtil should be changed to leverage the new method 
 on Configuration for acquiring known passwords for SSL. The getPassword 
 method will leverage the credential provider API and/or fallback to the clear 
 text value stored in ssl-server.xml.
 This will provide an alternative to clear text passwords on disk while 
 maintaining backward compatibility for this behavior.





[jira] [Updated] (HDFS-6790) DFSUtil Should Use configuration.getPassword for SSL passwords

2014-08-04 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HDFS-6790:
--

Status: Patch Available  (was: Open)

 DFSUtil Should Use configuration.getPassword for SSL passwords
 --

 Key: HDFS-6790
 URL: https://issues.apache.org/jira/browse/HDFS-6790
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Larry McCay
 Attachments: HDFS-6790.patch, HDFS-6790.patch, HDFS-6790.patch


 As part of HADOOP-10904, DFSUtil should be changed to leverage the new method 
 on Configuration for acquiring known passwords for SSL. The getPassword 
 method will leverage the credential provider API and/or fallback to the clear 
 text value stored in ssl-server.xml.
 This will provide an alternative to clear text passwords on disk while 
 maintaining backward compatibility for this behavior.





[jira] [Updated] (HDFS-6790) DFSUtil Should Use configuration.getPassword for SSL passwords

2014-08-04 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HDFS-6790:
--

Status: Open  (was: Patch Available)

 DFSUtil Should Use configuration.getPassword for SSL passwords
 --

 Key: HDFS-6790
 URL: https://issues.apache.org/jira/browse/HDFS-6790
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Larry McCay
 Attachments: HDFS-6790.patch, HDFS-6790.patch, HDFS-6790.patch


 As part of HADOOP-10904, DFSUtil should be changed to leverage the new method 
 on Configuration for acquiring known passwords for SSL. The getPassword 
 method will leverage the credential provider API and/or fallback to the clear 
 text value stored in ssl-server.xml.
 This will provide an alternative to clear text passwords on disk while 
 maintaining backward compatibility for this behavior.





[jira] [Commented] (HDFS-6790) DFSUtil Should Use configuration.getPassword for SSL passwords

2014-08-04 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14085408#comment-14085408
 ] 

Larry McCay commented on HDFS-6790:
---

Hmmm - failure must be related - I need to investigate further.

 DFSUtil Should Use configuration.getPassword for SSL passwords
 --

 Key: HDFS-6790
 URL: https://issues.apache.org/jira/browse/HDFS-6790
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Larry McCay
 Attachments: HDFS-6790.patch, HDFS-6790.patch, HDFS-6790.patch


 As part of HADOOP-10904, DFSUtil should be changed to leverage the new method 
 on Configuration for acquiring known passwords for SSL. The getPassword 
 method will leverage the credential provider API and/or fallback to the clear 
 text value stored in ssl-server.xml.
 This will provide an alternative to clear text passwords on disk while 
 maintaining backward compatibility for this behavior.





[jira] [Commented] (HDFS-6790) DFSUtil Should Use configuration.getPassword for SSL passwords

2014-08-04 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14085625#comment-14085625
 ] 

Larry McCay commented on HDFS-6790:
---

It runs cleanly locally and I don't see any way in which this patch would have 
affected this test.
Going back to my original assertion that it is unrelated.

Has this been a flaky test lately?

 DFSUtil Should Use configuration.getPassword for SSL passwords
 --

 Key: HDFS-6790
 URL: https://issues.apache.org/jira/browse/HDFS-6790
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Larry McCay
 Attachments: HDFS-6790.patch, HDFS-6790.patch, HDFS-6790.patch


 As part of HADOOP-10904, DFSUtil should be changed to leverage the new method 
 on Configuration for acquiring known passwords for SSL. The getPassword 
 method will leverage the credential provider API and/or fallback to the clear 
 text value stored in ssl-server.xml.
 This will provide an alternative to clear text passwords on disk while 
 maintaining backward compatibility for this behavior.





[jira] [Commented] (HDFS-6790) DFSUtil Should Use configuration.getPassword for SSL passwords

2014-08-04 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14085627#comment-14085627
 ] 

Larry McCay commented on HDFS-6790:
---

Okay - this seems to be a known issue that is being addressed in HDFS-6694.

This patch is fine.

 DFSUtil Should Use configuration.getPassword for SSL passwords
 --

 Key: HDFS-6790
 URL: https://issues.apache.org/jira/browse/HDFS-6790
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Larry McCay
 Attachments: HDFS-6790.patch, HDFS-6790.patch, HDFS-6790.patch


 As part of HADOOP-10904, DFSUtil should be changed to leverage the new method 
 on Configuration for acquiring known passwords for SSL. The getPassword 
 method will leverage the credential provider API and/or fallback to the clear 
 text value stored in ssl-server.xml.
 This will provide an alternative to clear text passwords on disk while 
 maintaining backward compatibility for this behavior.





[jira] [Commented] (HDFS-6694) TestPipelinesFailover.testPipelineRecoveryStress tests fail intermittently with various symptoms

2014-08-04 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14085643#comment-14085643
 ] 

Larry McCay commented on HDFS-6694:
---

I am seeing this with my patch for HDFS-6790:

{code}
Running org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
Tests run: 8, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 82.647 sec  
FAILURE! - in org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
testPipelineRecoveryStress(org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover)
  Time elapsed: 33.705 sec   ERROR!
java.lang.RuntimeException: Deferred
at 
org.apache.hadoop.test.MultithreadedTestUtil$TestContext.checkException(MultithreadedTestUtil.java:130)
at 
org.apache.hadoop.test.MultithreadedTestUtil$TestContext.waitFor(MultithreadedTestUtil.java:121)
at 
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover.testPipelineRecoveryStress(TestPipelinesFailover.java:456)
Caused by: org.apache.hadoop.ipc.RemoteException: File /test-7 could only be 
replicated to 0 nodes instead of minReplication (=1).  There are 3 datanode(s) 
running and 3 node(s) are excluded in this operation.
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1486)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2801)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:613)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:462)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:607)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:932)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2099)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2095)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1626)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2093)

at org.apache.hadoop.ipc.Client.call(Client.java:1411)
at org.apache.hadoop.ipc.Client.call(Client.java:1364)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy17.addBlock(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:372)
at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
at com.sun.proxy.$Proxy18.addBlock(Unknown Source)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1442)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1265)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:521)
{code}

 TestPipelinesFailover.testPipelineRecoveryStress tests fail intermittently 
 with various symptoms
 

 Key: HDFS-6694
 URL: https://issues.apache.org/jira/browse/HDFS-6694
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
 Attachments: HDFS-6694.001.dbg.patch, 
 org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover-output.txt, 
 org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover.txt


 TestPipelinesFailover.testPipelineRecoveryStress tests fail intermittently 
 with various symptoms. Typical failures are described in first comment.





[jira] [Updated] (HDFS-6790) DFSUtil Should Use configuration.getPassword for SSL passwords

2014-08-03 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HDFS-6790:
--

Attachment: HDFS-6790.patch

Patch to leverage Configuration.getPassword in order to provide an alternative 
to SSL passwords stored in clear text within ssl-server.xml or a side file - 
while maintaining backward compatibility.
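
The core of the change is small. Roughly (a sketch only - 
Configuration.getPassword is the real API this patch builds on, and 
ssl.server.keystore.password is the standard ssl-server.xml property name):

{code}
import org.apache.hadoop.conf.Configuration;

public class GetPasswordSketch {
  public static void main(String[] args) throws Exception {
    Configuration sslConf = new Configuration(false);
    sslConf.addResource("ssl-server.xml");
    // getPassword consults any configured credential providers first and
    // falls back to the clear-text property in ssl-server.xml, which is
    // what preserves backward compatibility.
    char[] password = sslConf.getPassword("ssl.server.keystore.password");
    System.out.println(password != null ? "password resolved" : "not set");
  }
}
{code}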

 DFSUtil Should Use configuration.getPassword for SSL passwords
 --

 Key: HDFS-6790
 URL: https://issues.apache.org/jira/browse/HDFS-6790
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Larry McCay
 Attachments: HDFS-6790.patch


 As part of HADOOP-10904, DFSUtil should be changed to leverage the new method 
 on Configuration for acquiring known passwords for SSL. The getPassword 
 method will leverage the credential provider API and/or fallback to the clear 
 text value stored in ssl-server.xml.
 This will provide an alternative to clear text passwords on disk while 
 maintaining backward compatibility for this behavior.





[jira] [Updated] (HDFS-6790) DFSUtil Should Use configuration.getPassword for SSL passwords

2014-08-03 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HDFS-6790:
--

Status: Patch Available  (was: Open)

 DFSUtil Should Use configuration.getPassword for SSL passwords
 --

 Key: HDFS-6790
 URL: https://issues.apache.org/jira/browse/HDFS-6790
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Larry McCay
 Attachments: HDFS-6790.patch


 As part of HADOOP-10904, DFSUtil should be changed to leverage the new method 
 on Configuration for acquiring known passwords for SSL. The getPassword 
 method will leverage the credential provider API and/or fallback to the clear 
 text value stored in ssl-server.xml.
 This will provide an alternative to clear text passwords on disk while 
 maintaining backward compatibility for this behavior.





[jira] [Updated] (HDFS-6790) DFSUtil Should Use configuration.getPassword for SSL passwords

2014-08-03 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HDFS-6790:
--

Status: Open  (was: Patch Available)

Invalid line in test.

 DFSUtil Should Use configuration.getPassword for SSL passwords
 --

 Key: HDFS-6790
 URL: https://issues.apache.org/jira/browse/HDFS-6790
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Larry McCay
 Attachments: HDFS-6790.patch


 As part of HADOOP-10904, DFSUtil should be changed to leverage the new method 
 on Configuration for acquiring known passwords for SSL. The getPassword 
 method will leverage the credential provider API and/or fallback to the clear 
 text value stored in ssl-server.xml.
 This will provide an alternative to clear text passwords on disk while 
 maintaining backward compatibility for this behavior.





[jira] [Updated] (HDFS-6790) DFSUtil Should Use configuration.getPassword for SSL passwords

2014-08-03 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HDFS-6790:
--

Attachment: HDFS-6790.patch

Removed invalid line in TestDFSUtil.testGetPassword() and attaching new patch.

 DFSUtil Should Use configuration.getPassword for SSL passwords
 --

 Key: HDFS-6790
 URL: https://issues.apache.org/jira/browse/HDFS-6790
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Larry McCay
 Attachments: HDFS-6790.patch, HDFS-6790.patch


 As part of HADOOP-10904, DFSUtil should be changed to leverage the new method 
 on Configuration for acquiring known passwords for SSL. The getPassword 
 method will leverage the credential provider API and/or fallback to the clear 
 text value stored in ssl-server.xml.
 This will provide an alternative to clear text passwords on disk while 
 maintaining backward compatibility for this behavior.





[jira] [Updated] (HDFS-6790) DFSUtil Should Use configuration.getPassword for SSL passwords

2014-08-03 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HDFS-6790:
--

Status: Patch Available  (was: Open)

 DFSUtil Should Use configuration.getPassword for SSL passwords
 --

 Key: HDFS-6790
 URL: https://issues.apache.org/jira/browse/HDFS-6790
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Larry McCay
 Attachments: HDFS-6790.patch, HDFS-6790.patch


 As part of HADOOP-10904, DFSUtil should be changed to leverage the new method 
 on Configuration for acquiring known passwords for SSL. The getPassword 
 method will leverage the credential provider API and/or fallback to the clear 
 text value stored in ssl-server.xml.
 This will provide an alternative to clear text passwords on disk while 
 maintaining backward compatibility for this behavior.





[jira] [Commented] (HDFS-6790) DFSUtil Should Use configuration.getPassword for SSL passwords

2014-08-03 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14084196#comment-14084196
 ] 

Larry McCay commented on HDFS-6790:
---

Test failure is unrelated to the patch.

 DFSUtil Should Use configuration.getPassword for SSL passwords
 --

 Key: HDFS-6790
 URL: https://issues.apache.org/jira/browse/HDFS-6790
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Larry McCay
 Attachments: HDFS-6790.patch, HDFS-6790.patch


 As part of HADOOP-10904, DFSUtil should be changed to leverage the new method 
 on Configuration for acquiring known passwords for SSL. The getPassword 
 method will leverage the credential provider API and/or fallback to the clear 
 text value stored in ssl-server.xml.
 This will provide an alternative to clear text passwords on disk while 
 maintaining backward compatibility for this behavior.





[jira] [Updated] (HDFS-6790) DFSUtil Should Use configuration.getPassword for SSL passwords

2014-08-03 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HDFS-6790:
--

Status: Open  (was: Patch Available)

 DFSUtil Should Use configuration.getPassword for SSL passwords
 --

 Key: HDFS-6790
 URL: https://issues.apache.org/jira/browse/HDFS-6790
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Larry McCay
 Attachments: HDFS-6790.patch, HDFS-6790.patch


 As part of HADOOP-10904, DFSUtil should be changed to leverage the new method 
 on Configuration for acquiring known passwords for SSL. The getPassword 
 method will leverage the credential provider API and/or fallback to the clear 
 text value stored in ssl-server.xml.
 This will provide an alternative to clear text passwords on disk while 
 maintaining backward compatibility for this behavior.





[jira] [Updated] (HDFS-6790) DFSUtil Should Use configuration.getPassword for SSL passwords

2014-08-03 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HDFS-6790:
--

Status: Patch Available  (was: Open)

 DFSUtil Should Use configuration.getPassword for SSL passwords
 --

 Key: HDFS-6790
 URL: https://issues.apache.org/jira/browse/HDFS-6790
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Larry McCay
 Attachments: HDFS-6790.patch, HDFS-6790.patch


 As part of HADOOP-10904, DFSUtil should be changed to leverage the new method 
 on Configuration for acquiring known passwords for SSL. The getPassword 
 method will leverage the credential provider API and/or fallback to the clear 
 text value stored in ssl-server.xml.
 This will provide an alternative to clear text passwords on disk while 
 maintaining backward compatibility for this behavior.





[jira] [Created] (HDFS-6790) DFSUtil Should Use configuration.getPassword for SSL passwords

2014-07-30 Thread Larry McCay (JIRA)
Larry McCay created HDFS-6790:
-

 Summary: DFSUtil Should Use configuration.getPassword for SSL 
passwords
 Key: HDFS-6790
 URL: https://issues.apache.org/jira/browse/HDFS-6790
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Larry McCay


As part of HADOOP-10904, DFSUtil should be changed to leverage the new method 
on Configuration for acquiring known passwords for SSL. The getPassword method 
will leverage the credential provider API and/or fallback to the clear text 
value stored in ssl-server.xml.

This will provide an alternative to clear text passwords on disk while 
maintaining backward compatibility for this behavior.





[jira] [Commented] (HDFS-6134) Transparent data at rest encryption

2014-06-18 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14036234#comment-14036234
 ] 

Larry McCay commented on HDFS-6134:
---

I can buy the overall approach and agree that it is more secure.
However, I'm not so sure that we need to add these methods to the KeyProvider 
API.

Follow-up questions:
* do we need these methods for any other usecases?
* does/can the HDFS client have access to the EZ key at the same time that it 
has to decrypt the DEK?
* if this particular key provider always returns an encrypted DEK then can't 
the client know to always decrypt it with the EZ key?

thoughts?
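
For reference while we discuss, here is the wrapped-key flow as I understand 
the proposal, written out as a sketch (the method names are hypothetical - not 
claiming this matches the patch):

{code}
/** Sketch of the wrapped-DEK operations under discussion. */
public interface WrappedKeyOperationsSketch {

  /**
   * Generate a new data encryption key (DEK) and return it "wrapped"
   * (encrypted) with the encryption zone (EZ) key, so the raw DEK is
   * never exposed to a caller without access to the EZ key.
   */
  byte[] generateWrappedDEK(String ezKeyVersionName) throws Exception;

  /**
   * Unwrap a DEK for a client authorized for the EZ key; the raw DEK is
   * what actually encrypts/decrypts the file stream (e.g. AES-CTR).
   */
  byte[] decryptWrappedDEK(String ezKeyVersionName, byte[] wrappedDEK)
      throws Exception;
}
{code}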

 Transparent data at rest encryption
 ---

 Key: HDFS-6134
 URL: https://issues.apache.org/jira/browse/HDFS-6134
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: security
Affects Versions: 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HDFSDataAtRestEncryption.pdf


 Because of privacy and security regulations, for many industries, sensitive 
 data at rest must be in encrypted form. For example: the health-care industry 
 (HIPAA regulations), the card payment industry (PCI DSS regulations) or the 
 US government (FISMA regulations).
 This JIRA aims to provide a mechanism to encrypt HDFS data at rest that can 
 be used transparently by any application accessing HDFS via Hadoop Filesystem 
 Java API, Hadoop libhdfs C library, or WebHDFS REST API.
 The resulting implementation should be able to be used in compliance with 
 different regulation requirements.





[jira] [Commented] (HDFS-6134) Transparent data at rest encryption

2014-06-18 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14036560#comment-14036560
 ] 

Larry McCay commented on HDFS-6134:
---

[~tucu00] - I realize that it is the first usecase - that doesn't make it the 
only one that we have in mind or in the works. The fact that you have run into 
an issue with the EZ key granularity while using the CTR mode is a problem with 
the usecase design, not necessarily with the abstraction of key providers. The 
question is whether wrapped keys will be required by other usecases where 
either the key usage pattern or the encryption modes in use may not require 
them.

Currently, the KeyProvider API doesn't do any encryption itself - I just want 
to make sure that adding the additional complexity and responsibility to this 
interface is really necessary.

Additional questions:

* how does the keyprovider know what EZ key to use - is it the key that is 
referenced by the keyVersionName?
* how do we keep HDFS clients from asking for the EZ key - if it is stored by 
the passed-in keyVersionName?
** will this require special access control protection for EZ keys?
* would the unique DEK be stored in the provider as well or only in the 
extended attributes of the file?
** if stored in the provider what is the keyVersionName for it?



 Transparent data at rest encryption
 ---

 Key: HDFS-6134
 URL: https://issues.apache.org/jira/browse/HDFS-6134
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: security
Affects Versions: 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HDFSDataAtRestEncryption.pdf


 Because of privacy and security regulations, for many industries, sensitive 
 data at rest must be in encrypted form. For example: the healthcare industry 
 (HIPAA regulations), the card payment industry (PCI DSS regulations) or the 
 US government (FISMA regulations).
 This JIRA aims to provide a mechanism to encrypt HDFS data at rest that can 
 be used transparently by any application accessing HDFS via Hadoop Filesystem 
 Java API, Hadoop libhdfs C library, or WebHDFS REST API.
 The resulting implementation should be able to be used in compliance with 
 different regulation requirements.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6134) Transparent data at rest encryption

2014-06-17 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14034028#comment-14034028
 ] 

Larry McCay commented on HDFS-6134:
---

Hmmm, I agree with Owen. For usecases where these are not inherently known, 
metadata or some other packaging mechanism will need to identify the keys or 
file for which keys are required. Additionally, adding getDelegationToken to 
KeyProvider API is leaking specific provider implementations through the 
KeyProvider abstraction and should be avoided.

 Transparent data at rest encryption
 ---

 Key: HDFS-6134
 URL: https://issues.apache.org/jira/browse/HDFS-6134
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: security
Affects Versions: 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HDFSDataAtRestEncryption.pdf


 Because of privacy and security regulations, for many industries, sensitive 
 data at rest must be in encrypted form. For example: the healthcare industry 
 (HIPAA regulations), the card payment industry (PCI DSS regulations) or the 
 US government (FISMA regulations).
 This JIRA aims to provide a mechanism to encrypt HDFS data at rest that can 
 be used transparently by any application accessing HDFS via Hadoop Filesystem 
 Java API, Hadoop libhdfs C library, or WebHDFS REST API.
 The resulting implementation should be able to be used in compliance with 
 different regulation requirements.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6134) Transparent data at rest encryption

2014-06-17 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14034040#comment-14034040
 ] 

Larry McCay commented on HDFS-6134:
---

[~tucu00] - that is a good example of where additional metadata would have to 
indicate that a deployed application requires a resource that needs a key. The 
idea is to avoid the KMS having to deal with Hadoop-runtime-level scale when it 
can be accommodated at submit time. It is also much better to fail at submit 
time than at runtime if the key is not available.
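
A minimal sketch of that fail-fast idea, assuming a submit-time hook exists; 
KeyProvider.getMetadata() is the real Hadoop API, while the key name and 
surrounding code are illustrative:

    // Verify a required key exists before job submission and abort early
    // if it does not, rather than failing mid-job at runtime.
    import java.io.IOException;
    import org.apache.hadoop.crypto.key.KeyProvider;

    public class SubmitTimeKeyCheck {
      static void checkKeyAvailable(KeyProvider provider, String keyName)
          throws IOException {
        KeyProvider.Metadata meta = provider.getMetadata(keyName);
        if (meta == null) {
          throw new IOException(
              "Required key '" + keyName + "' is not available");
        }
      }
    }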

 Transparent data at rest encryption
 ---

 Key: HDFS-6134
 URL: https://issues.apache.org/jira/browse/HDFS-6134
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: security
Affects Versions: 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HDFSDataAtRestEncryption.pdf


 Because of privacy and security regulations, for many industries, sensitive 
 data at rest must be in encrypted form. For example: the healthcare industry 
 (HIPAA regulations), the card payment industry (PCI DSS regulations) or the 
 US government (FISMA regulations).
 This JIRA aims to provide a mechanism to encrypt HDFS data at rest that can 
 be used transparently by any application accessing HDFS via Hadoop Filesystem 
 Java API, Hadoop libhdfs C library, or WebHDFS REST API.
 The resulting implementation should be able to be used in compliance with 
 different regulation requirements.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6241) Unable to reset password

2014-04-14 Thread Larry McCay (JIRA)
Larry McCay created HDFS-6241:
-

 Summary: Unable to reset password
 Key: HDFS-6241
 URL: https://issues.apache.org/jira/browse/HDFS-6241
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Larry McCay
Priority: Blocker


I tried emailing r...@apache.org - as indicated in INFRA-5241 - about these 
difficulties but it seems to be bouncing. Here is the email that I sent:

Greetings -

I am trying to reset my password and have encountered the following problems:

1. it seems that the public key associated with my account is erroneously that 
of the lead of our project (Knox). He must have set up my account in the 
beginning and provided his key maybe? Anyway, this means that he has to decrypt 
my email.

2. Once he does decrypt it and I follow the link to reset it - I get a No Such 
Token error message and am unable to reset my password.

The email below indicated that I should email root with problems.

Please let me know if I should file an Infra jira. I did find a similar one 
there that told them to email root. So, that is where I am starting.

We are in the process of trying to get a release out - so I would greatly 
appreciate the help here.

thanks!

--larry


Hi Larry McCay,

96.235.186.40 has asked Apache ID https://id.apache.org

to initiate a password reset for your apache.org account 'lmccay'.

If you requested this password reset, please use the following link to

reset your Apache LDAP password:

deleted-url

If you did not request this password reset, please email r...@apache.org --- but

delete the above URL from the text of the reply email before sending it.

This link will expire at 2014-04-14 14:31:25 +, and can only be used from 
96.235.186.40.

--

Best Regards,

Apache Infrastructure



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6241) Unable to reset password

2014-04-14 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13968469#comment-13968469
 ] 

Larry McCay commented on HDFS-6241:
---

Ugh - yes, apologies!

 Unable to reset password
 

 Key: HDFS-6241
 URL: https://issues.apache.org/jira/browse/HDFS-6241
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Larry McCay
Priority: Blocker

 I tried emailing r...@apache.org - as indicated in INFRA-5241 - about these 
 difficulties but it seems to be bouncing. Here is the email that I sent:
 Greetings -
 I am trying to reset my password and have encountered the following problems:
 1. it seems that the public key associated with my account is erroneously 
 that of the lead of our project (Knox). He must have set up my account in 
 the beginning and provided his key maybe? Anyway, this means that he has to 
 decrypt my email.
 2. Once he does decrypt it and I follow the link to reset it - I get a No 
 Such Token error message and am unable to reset my password.
 The email below indicated that I should email root with problems.
 Please let me know if I should file an Infra jira. I did find a similar one 
 there that told them to email root. So, that is where I am starting.
 We are in the process of trying to get a release out - so I would greatly 
 appreciate the help here.
 thanks!
 --larry
 Hi Larry McCay,
 96.235.186.40 has asked Apache ID https://id.apache.org
 to initiate a password reset for your apache.org account 'lmccay'.
 If you requested this password reset, please use the following link to
 reset your Apache LDAP password:
 deleted-url
 If you did not request this password reset, please email r...@apache.org --- 
 but
 delete the above URL from the text of the reply email before sending it.
 This link will expire at 2014-04-14 14:31:25 +, and can only be used from 
 96.235.186.40.
 --
 Best Regards,
 Apache Infrastructure



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HDFS-6242) CLONE - Unable to reset password

2014-04-14 Thread Larry McCay (JIRA)
Larry McCay created HDFS-6242:
-

 Summary: CLONE - Unable to reset password
 Key: HDFS-6242
 URL: https://issues.apache.org/jira/browse/HDFS-6242
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Larry McCay
Priority: Blocker


I tried emailing r...@apache.org - as indicated in INFRA-5241 - about these 
difficulties but it seems to be bouncing. Here is the email that I sent:

Greetings -

I am trying to reset my password and have encountered the following problems:

1. it seems that the public key associated with my account is erroneously that 
of the lead of our project (Knox). He must have set up my account in the 
beginning and provided his key maybe? Anyway, this means that he has to decrypt 
my email.

2. Once he does decrypt it and I follow the link to reset it - I get a No Such 
Token error message and am unable to reset my password.

The email below indicated that I should email root with problems.

Please let me know if I should file an Infra jira. I did find a similar one 
there that told them to email root. So, that is where I am starting.

We are in the process of trying to get a release out - so I would greatly 
appreciate the help here.

thanks!

--larry


Hi Larry McCay,

96.235.186.40 has asked Apache ID https://id.apache.org

to initiate a password reset for your apache.org account 'lmccay'.

If you requested this password reset, please use the following link to

reset your Apache LDAP password:

deleted-url

If you did not request this password reset, please email r...@apache.org --- but

delete the above URL from the text of the reply email before sending it.

This link will expire at 2014-04-14 14:31:25 +, and can only be used from 
96.235.186.40.

--

Best Regards,

Apache Infrastructure



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HDFS-6242) CLONE - Unable to reset password

2014-04-14 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay resolved HDFS-6242.
---

Resolution: Invalid

 CLONE - Unable to reset password
 

 Key: HDFS-6242
 URL: https://issues.apache.org/jira/browse/HDFS-6242
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Larry McCay
Priority: Blocker

 I tried emailing r...@apache.org - as indicated in INFRA-5241 - about these 
 difficulties but it seems to be bouncing. Here is the email that I sent:
 Greetings -
 I am trying to reset my password and have encountered the following problems:
 1. it seems that the public key associated with my account is erroneously 
 that of the lead of our project (Knox). He must have set up my account in 
 the beginning and provided his key maybe? Anyway, this means that he has to 
 decrypt my email.
 2. Once he does decrypt it and I follow the link to reset it - I get a No 
 Such Token error message and am unable to reset my password.
 The email below indicated that I should email root with problems.
 Please let me know if I should file an Infra jira. I did find a similar one 
 there that told them to email root. So, that is where I am starting.
 We are in the process of trying to get a release out - so I would greatly 
 appreciate the help here.
 thanks!
 --larry
 Hi Larry McCay,
 96.235.186.40 has asked Apache ID https://id.apache.org
 to initiate a password reset for your apache.org account 'lmccay'.
 If you requested this password reset, please use the following link to
 reset your Apache LDAP password:
 deleted-url
 If you did not request this password reset, please email r...@apache.org --- 
 but
 delete the above URL from the text of the reply email before sending it.
 This link will expire at 2014-04-14 14:31:25 +, and can only be used from 
 96.235.186.40.
 --
 Best Regards,
 Apache Infrastructure



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HDFS-6241) Unable to reset password

2014-04-14 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay resolved HDFS-6241.
---

Resolution: Invalid

 Unable to reset password
 

 Key: HDFS-6241
 URL: https://issues.apache.org/jira/browse/HDFS-6241
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Larry McCay
Priority: Blocker

 I tried emailing r...@apache.org - as indicated in INFRA-5241 - about these 
 difficulties but it seems to be bouncing. Here is the email that I sent:
 Greetings -
 I am trying to reset my password and have encountered the following problems:
 1. it seems that the public key associated with my account is erroneously 
 that of the lead of our project (Knox). He must have set up my account in 
 the beginning and provided his key maybe? Anyway, this means that he has to 
 decrypt my email.
 2. Once he does decrypt it and I follow the link to reset it - I get a No 
 Such Token error message and am unable to reset my password.
 The email below indicated that I should email root with problems.
 Please let me know if I should file an Infra jira. I did find a similar one 
 there that told them to email root. So, that is where I am starting.
 We are in the process of trying to get a release out - so I would greatly 
 appreciate the help here.
 thanks!
 --larry
 Hi Larry McCay,
 96.235.186.40 has asked Apache ID https://id.apache.org
 to initiate a password reset for your apache.org account 'lmccay'.
 If you requested this password reset, please use the following link to
 reset your Apache LDAP password:
 deleted-url
 If you did not request this password reset, please email r...@apache.org --- 
 but
 delete the above URL from the text of the reply email before sending it.
 This link will expire at 2014-04-14 14:31:25 +, and can only be used from 
 96.235.186.40.
 --
 Best Regards,
 Apache Infrastructure



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6134) Transparent data at rest encryption

2014-03-24 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13945790#comment-13945790
 ] 

Larry McCay commented on HDFS-6134:
---

Hi [~tucu00] - I like what I see here. We should file JIRAs for the 
KeyProvider API work that you mention in your document and discuss some of 
those aspects there. We have a number of common interests in that area.

 Transparent data at rest encryption
 ---

 Key: HDFS-6134
 URL: https://issues.apache.org/jira/browse/HDFS-6134
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: security
Affects Versions: 2.3.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HDFSDataAtRestEncryption.pdf


 Because of privacy and security regulations, for many industries, sensitive 
 data at rest must be in encrypted form. For example: the healthcare industry 
 (HIPAA regulations), the card payment industry (PCI DSS regulations) or the 
 US government (FISMA regulations).
 This JIRA aims to provide a mechanism to encrypt HDFS data at rest that can 
 be used transparently by any application accessing HDFS via Hadoop Filesystem 
 Java API, Hadoop libhdfs C library, or WebHDFS REST API.
 The resulting implementation should be able to be used in compliance with 
 different regulation requirements.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-5333) Improvement of current HDFS Web UI

2013-10-28 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13807481#comment-13807481
 ] 

Larry McCay commented on HDFS-5333:
---

Interesting work!

It seems to me that we may need to consider deployments where a gateway such as 
Knox is between the UI client and the Hadoop cluster.
How are the relevant URLs configured for the deployment - can they easily be 
adapted to a particular deployment scenario such as this?

 Improvement of current HDFS Web UI
 --

 Key: HDFS-5333
 URL: https://issues.apache.org/jira/browse/HDFS-5333
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Jing Zhao
Assignee: Haohui Mai

 This is an umbrella jira for improving the current JSP-based HDFS Web UI. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5333) Improvement of current HDFS Web UI

2013-10-28 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13807548#comment-13807548
 ] 

Larry McCay commented on HDFS-5333:
---

Well, I think it is important to consider that server-side code is executing 
within the cluster (on the other side of the firewall) and that it would have 
direct access to service endpoints. So, in that respect, the old web UI will 
work - assuming that the port is open to reach it from the outside.

In the new UI, the connections will be made from the client, where they will 
need to go through the gateway to reach the services.
Unless I am missing something.

 Improvement of current HDFS Web UI
 --

 Key: HDFS-5333
 URL: https://issues.apache.org/jira/browse/HDFS-5333
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Jing Zhao
Assignee: Haohui Mai

 This is an umbrella jira for improving the current JSP-based HDFS Web UI. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5333) Improvement of current HDFS Web UI

2013-10-28 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13807555#comment-13807555
 ] 

Larry McCay commented on HDFS-5333:
---

Okay, I may be off base then.
Are REST APIs being invoked from the browser or not?
If they are, then they won't be able to get to the services.

 Improvement of current HDFS Web UI
 --

 Key: HDFS-5333
 URL: https://issues.apache.org/jira/browse/HDFS-5333
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Jing Zhao
Assignee: Haohui Mai

 This is an umbrella jira for improving the current JSP-based HDFS Web UI. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-4794) Browsing filesystem via webui throws kerberos exception when NN service RPC is enabled in a secure cluster

2013-07-12 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13707100#comment-13707100
 ] 

Larry McCay commented on HDFS-4794:
---

I agree with Jitendra.
In my opinion, we should either backport the change from Hadoop 2 or introduce 
a new variable, in order to avoid unforeseen effects from use of this public 
static.


 Browsing filesystem via webui throws kerberos exception when NN service RPC 
 is enabled in a secure cluster
 --

 Key: HDFS-4794
 URL: https://issues.apache.org/jira/browse/HDFS-4794
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 1.1.2
Reporter: Benoy Antony
Assignee: Benoy Antony
 Attachments: HDFS-4794.patch


 Browsing filesystem via webui throws kerberos exception when NN service RPC 
 is enabled in a secure cluster
 To reproduce this error:
 1. Enable security.
 2. Enable service RPC by setting dfs.namenode.servicerpc-address, using a 
 different port than the RPC port.
 3. Click on Browse the filesystem on the NameNode web UI.
 The following error will be shown :
 Call to NN001/12.123.123.01:8030 failed on local exception: 
 java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed 
 [Caused by GSSException: No valid credentials provided (Mechanism level: 
 Failed to find any Kerberos tgt)]
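
For reference, the service RPC setting from the repro steps above would look 
roughly like this in hdfs-site.xml; the host and port are examples only:

    <!-- Illustrative hdfs-site.xml entry enabling a separate service RPC
         endpoint; values are examples. -->
    <property>
      <name>dfs.namenode.servicerpc-address</name>
      <value>nn001.example.com:8030</value>
    </property>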

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4794) Browsing filesystem via webui throws kerberos exception when NN service RPC is enabled in a secure cluster

2013-07-10 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13704955#comment-13704955
 ] 

Larry McCay commented on HDFS-4794:
---

Hi Benoy - is the browsing functionality actually broken by this error or are 
you just seeing the error in the log?

 Browsing filesystem via webui throws kerberos exception when NN service RPC 
 is enabled in a secure cluster
 --

 Key: HDFS-4794
 URL: https://issues.apache.org/jira/browse/HDFS-4794
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 1.1.2
Reporter: Benoy Antony
Assignee: Benoy Antony
 Attachments: HDFS-4794.patch


 Browsing filesystem via webui throws kerberos exception when NN service RPC 
 is enabled in a secure cluster
 To reproduce this error:
 1. Enable security.
 2. Enable service RPC by setting dfs.namenode.servicerpc-address, using a 
 different port than the RPC port.
 3. Click on Browse the filesystem on the NameNode web UI.
 The following error will be shown :
 Call to NN001/12.123.123.01:8030 failed on local exception: 
 java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed 
 [Caused by GSSException: No valid credentials provided (Mechanism level: 
 Failed to find any Kerberos tgt)]

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira