[jira] [Updated] (HADOOP-16231) Reduce KMS error logging severity from WARN to INFO

2019-04-26 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HADOOP-16231:
--
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

> Reduce KMS error logging severity from WARN to INFO
> ---
>
> Key: HADOOP-16231
> URL: https://issues.apache.org/jira/browse/HADOOP-16231
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 3.2.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Trivial
> Attachments: HDFS-14404.001.patch
>
>
> When the KMS is deployed as an HA service and a failure occurs, the current 
> error severity in the client code appears to be WARN. This can result in 
> excessive warnings even though another instance may succeed.
> Maybe this log level can be adjusted in the load balancing provider only.
> {code}
> 19/02/27 05:10:10 WARN kms.LoadBalancingKMSClientProvider: KMS provider at 
> [https://example.com:16000/kms/v1/] threw an IOException 
> [java.net.ConnectException: Connection refused (Connection refused)]!!
> 19/02/12 20:50:09 WARN kms.LoadBalancingKMSClientProvider: KMS provider at 
> [https://example.com:16000/kms/v1/] threw an IOException:
> java.io.IOException: java.lang.reflect.UndeclaredThrowableException
> {code}
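A hedged sketch of the proposed demotion (names only approximate the upstream LoadBalancingKMSClientProvider, they are not copied from it): the per-provider failure is logged at INFO because the loop falls through to the next provider, which may still serve the request.
{code:java}
import java.io.IOException;
import java.net.ConnectException;
import java.util.List;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoadBalancingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(LoadBalancingSketch.class);

  interface Op<T> { T call(String providerUrl) throws IOException; }

  <T> T doOp(Op<T> op, List<String> providerUrls) throws IOException {
    IOException last = null;
    for (String url : providerUrls) {
      try {
        return op.call(url);
      } catch (IOException ioe) {
        // was LOG.warn(...) in the report above; demoted to INFO here
        LOG.info("KMS provider at [{}] threw an IOException", url, ioe);
        last = ioe;
      }
    }
    throw last != null ? last : new ConnectException("no KMS provider reachable");
  }
}
{code}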



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16231) Reduce KMS error logging severity from WARN to INFO

2019-04-26 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826774#comment-16826774
 ] 

Kitti Nanasi commented on HADOOP-16231:
---

Thanks for the comments [~smeng] and [~xyao], I will close this issue as Won't 
Fix then.

> Reduce KMS error logging severity from WARN to INFO
> ---
>
> Key: HADOOP-16231
> URL: https://issues.apache.org/jira/browse/HADOOP-16231
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 3.2.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Trivial
> Attachments: HDFS-14404.001.patch
>
>
> When the KMS is deployed as an HA service and a failure occurs, the current 
> error severity in the client code appears to be WARN. This can result in 
> excessive warnings even though another instance may succeed.
> Maybe this log level can be adjusted in the load balancing provider only.
> {code}
> 19/02/27 05:10:10 WARN kms.LoadBalancingKMSClientProvider: KMS provider at 
> [https://example.com:16000/kms/v1/] threw an IOException 
> [java.net.ConnectException: Connection refused (Connection refused)]!!
> 19/02/12 20:50:09 WARN kms.LoadBalancingKMSClientProvider: KMS provider at 
> [https://example.com:16000/kms/v1/] threw an IOException:
> java.io.IOException: java.lang.reflect.UndeclaredThrowableException
> {code}






[jira] [Commented] (HADOOP-16231) Reduce KMS error logging severity from WARN to INFO

2019-04-25 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826125#comment-16826125
 ] 

Kitti Nanasi commented on HADOOP-16231:
---

[~smeng], that's a good example. In that case the warning could be reasonable, 
even if the other KMS can satisfy the request. 

> Reduce KMS error logging severity from WARN to INFO
> ---
>
> Key: HADOOP-16231
> URL: https://issues.apache.org/jira/browse/HADOOP-16231
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 3.2.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Trivial
> Attachments: HDFS-14404.001.patch
>
>
> When the KMS is deployed as an HA service and a failure occurs, the current 
> error severity in the client code appears to be WARN. This can result in 
> excessive warnings even though another instance may succeed.
> Maybe this log level can be adjusted in the load balancing provider only.
> {code}
> 19/02/27 05:10:10 WARN kms.LoadBalancingKMSClientProvider: KMS provider at 
> [https://example.com:16000/kms/v1/] threw an IOException 
> [java.net.ConnectException: Connection refused (Connection refused)]!!
> 19/02/12 20:50:09 WARN kms.LoadBalancingKMSClientProvider: KMS provider at 
> [https://example.com:16000/kms/v1/] threw an IOException:
> java.io.IOException: java.lang.reflect.UndeclaredThrowableException
> {code}






[jira] [Commented] (HADOOP-16231) Reduce KMS error logging severity from WARN to INFO

2019-04-03 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16808782#comment-16808782
 ] 

Kitti Nanasi commented on HADOOP-16231:
---

Thanks [~xyao] for the comment and for moving the issue to hadoop common!
{quote}Usually, the default log4j.properties set the default log level to INFO. 
If that is the case, changing the log level in LBKMSCP from WARN to INFO may 
not reduce the amount of log produced as expected.
{quote}
I agree, but at least it will not show up as a warning, which would indicate a 
bigger problem. However, I think INFO level is fine here, because even if one 
of the KMSs could satisfy the request, I would still like to see whether there 
was a problem with the other one. What do you think?
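For reference, a hypothetical log4j.properties fragment illustrating the point about defaults: with the root logger already at INFO, the demoted message is still emitted, so the volume does not drop; the gain is only that it no longer reads as a warning, and an operator can then raise the provider's level explicitly.
{code}
# hypothetical fragment, not shipped defaults
log4j.rootLogger=INFO, console
# optionally silence per-provider failover noise:
log4j.logger.org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider=WARN
{code}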

> Reduce KMS error logging severity from WARN to INFO
> ---
>
> Key: HADOOP-16231
> URL: https://issues.apache.org/jira/browse/HADOOP-16231
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 3.2.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Trivial
> Attachments: HDFS-14404.001.patch
>
>
> When the KMS is deployed as an HA service and a failure occurs, the current 
> error severity in the client code appears to be WARN. This can result in 
> excessive warnings even though another instance may succeed.
> Maybe this log level can be adjusted in the load balancing provider only.
> {code}
> 19/02/27 05:10:10 WARN kms.LoadBalancingKMSClientProvider: KMS provider at 
> [https://example.com:16000/kms/v1/] threw an IOException 
> [java.net.ConnectException: Connection refused (Connection refused)]!!
> 19/02/12 20:50:09 WARN kms.LoadBalancingKMSClientProvider: KMS provider at 
> [https://example.com:16000/kms/v1/] threw an IOException:
> java.io.IOException: java.lang.reflect.UndeclaredThrowableException
> {code}






[jira] [Commented] (HADOOP-13907) Fix KerberosUtil#getDefaultRealm() on Windows

2019-02-01 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-13907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16758699#comment-16758699
 ] 

Kitti Nanasi commented on HADOOP-13907:
---

I had the same problem in my environment, and what [~ste...@apache.org] 
suggested here solves it. I attached a patch for it.

> Fix KerberosUtil#getDefaultRealm() on Windows
> -
>
> Key: HADOOP-13907
> URL: https://issues.apache.org/jira/browse/HADOOP-13907
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Xiaoyu Yao
>Priority: Major
>  Labels: kerberos
> Attachments: HADOOP-13907.001.patch
>
>
> Running unit test 
> TestWebDelegationToken#testKerberosDelegationTokenAuthenticator on windows 
> will fail with {{java.lang.IllegalArgumentException: Can't get Kerberos 
> realm}}






[jira] [Updated] (HADOOP-13907) Fix KerberosUtil#getDefaultRealm() on Windows

2019-02-01 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HADOOP-13907:
--
Assignee: Kitti Nanasi
  Status: Patch Available  (was: Open)

> Fix KerberosUtil#getDefaultRealm() on Windows
> -
>
> Key: HADOOP-13907
> URL: https://issues.apache.org/jira/browse/HADOOP-13907
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Xiaoyu Yao
>Assignee: Kitti Nanasi
>Priority: Major
>  Labels: kerberos
> Attachments: HADOOP-13907.001.patch
>
>
> Running unit test 
> TestWebDelegationToken#testKerberosDelegationTokenAuthenticator on windows 
> will fail with {{java.lang.IllegalArgumentException: Can't get Kerberos 
> realm}}






[jira] [Updated] (HADOOP-13907) Fix KerberosUtil#getDefaultRealm() on Windows

2019-02-01 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HADOOP-13907:
--
Attachment: HADOOP-13907.001.patch

> Fix KerberosUtil#getDefaultRealm() on Windows
> -
>
> Key: HADOOP-13907
> URL: https://issues.apache.org/jira/browse/HADOOP-13907
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Xiaoyu Yao
>Priority: Major
>  Labels: kerberos
> Attachments: HADOOP-13907.001.patch
>
>
> Running unit test 
> TestWebDelegationToken#testKerberosDelegationTokenAuthenticator on windows 
> will fail with {{java.lang.IllegalArgumentException: Can't get Kerberos 
> realm}}






[jira] [Commented] (HADOOP-16014) Fix test, checkstyle and javadoc issues in TestKerberosAuthenticationHandler

2018-12-19 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725126#comment-16725126
 ] 

Kitti Nanasi commented on HADOOP-16014:
---

Thanks for the new patch [~dineshchitlangia]!

+1 (non-binding) Looks good to me

> Fix test, checkstyle and javadoc issues in TestKerberosAuthenticationHandler
> 
>
> Key: HADOOP-16014
> URL: https://issues.apache.org/jira/browse/HADOOP-16014
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HADOOP-16014.001.patch, HADOOP-16014.002.patch
>
>
> TestKerberosAuthenticationHandler has multiple checkstyle violations, missing 
> javadoc, and some tests that are not annotated with @Test and thus never run.
> This jira aims to fix the above issues.
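A hedged illustration of the @Test point (class and method names below are placeholders, not taken from the patch): under JUnit 4, a public test method without the annotation is silently skipped.
{code:java}
import org.junit.Test;

public class AnnotationExample {
  // Never discovered by the JUnit 4 runner:
  public void testSkipped() {
  }

  // Discovered and run:
  @Test
  public void testRun() {
  }
}
{code}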






[jira] [Commented] (HADOOP-16014) Fix test, checkstyle and javadoc issues in TestKerberosAuthenticationHandler

2018-12-19 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725098#comment-16725098
 ] 

Kitti Nanasi commented on HADOOP-16014:
---

Thanks [~dineshchitlangia] for the patch! Overall the patch looks good to me.

I only have one minor comment: you could define a global timeout for the whole 
test class like this:
{code:java}
@Rule
public Timeout globalTimeout = new Timeout(6);
{code}
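(Assuming JUnit 4 here: the {{Timeout(int)}} constructor takes milliseconds, so if seconds are intended, {{Timeout.seconds(...)}} may read more clearly.)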

> Fix test, checkstyle and javadoc issues in TestKerberosAuthenticationHandler
> 
>
> Key: HADOOP-16014
> URL: https://issues.apache.org/jira/browse/HADOOP-16014
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HADOOP-16014.001.patch
>
>
> TestKerberosAuthenticationHandler has multiple checkstyle violations, missing 
> javadoc, and some tests that are not annotated with @Test and thus never run.
> This jira aims to fix the above issues.






[jira] [Commented] (HADOOP-15845) s3guard init and destroy command will create/destroy tables if ddb.table & region are set

2018-12-07 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16712883#comment-16712883
 ] 

Kitti Nanasi commented on HADOOP-15845:
---

Thanks [~ste...@apache.org] for reporting and [~gabor.bota] for the patch!

I am not that familiar with this part of the code, but the patch looks good to 
me.

> s3guard init and destroy command will create/destroy tables if ddb.table & 
> region are set
> -
>
> Key: HADOOP-15845
> URL: https://issues.apache.org/jira/browse/HADOOP-15845
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15845.001.patch
>
>
> If you have s3guard set up with a table name and a region, then s3guard init 
> will automatically create the table, without you specifying a bucket or URI.
> I had expected the command just to print out its arguments, but it actually 
> did the init with the default bucket values.
> Even worse, `hadoop s3guard destroy` will destroy the table. 
> This is too dangerous to allow. The command must require either the name of a 
> bucket or an explicit ddb table URI.
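As a hedged illustration of the safer shape being argued for (the bucket name is a placeholder), the command would only act on an explicit target:
{code}
hadoop s3guard destroy s3a://example-bucket/
{code}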






[jira] [Commented] (HADOOP-11100) Support to configure ftpClient.setControlKeepAliveTimeout

2018-10-12 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-11100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16647794#comment-16647794
 ] 

Kitti Nanasi commented on HADOOP-11100:
---

Thanks for the answer [~adam.antal]!

You are right, I double-checked the javadoc and the parameter is in seconds. I 
have no objection to setting this timeout to 5 minutes if it works fine with 
the other FTP configs.

> Support to configure  ftpClient.setControlKeepAliveTimeout 
> ---
>
> Key: HADOOP-11100
> URL: https://issues.apache.org/jira/browse/HADOOP-11100
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.3.0
>Reporter: Krishnamoorthy Dharmalingam
>Assignee: Adam Antal
>Priority: Minor
> Attachments: HADOOP-11100.002.patch, HADOOP-11100.003.patch, 
> HDFS-11000.001.patch
>
>
> In FTPFileSystem, this timeout cannot be configured through the Configuration.
> It would be very straightforward to configure in the FTPFileSystem.connect() 
> method via ftpClient.setControlKeepAliveTimeout, like:
> private FTPClient connect() throws IOException {
> ...
> String timeout = conf.get("fs.ftp.timeout." + host);
> ...
>  ftpClient.setControlKeepAliveTimeout(new Integer(300));
> 
> }
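A hedged, self-contained sketch of the configurable version; the property name and the 300-second default come from this description, while the surrounding class is a placeholder rather than the actual FTPFileSystem code:
{code:java}
import java.io.IOException;
import org.apache.commons.net.ftp.FTPClient;
import org.apache.hadoop.conf.Configuration;

public class FtpConnectSketch {
  private final Configuration conf = new Configuration();

  FTPClient connect(String host) throws IOException {
    FTPClient client = new FTPClient();
    // commons-net interprets this value in seconds, not milliseconds
    long keepAliveSec = conf.getLong("fs.ftp.timeout." + host, 300L);
    client.setControlKeepAliveTimeout(keepAliveSec);
    client.connect(host);
    return client;
  }
}
{code}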






[jira] [Commented] (HADOOP-11100) Support to configure ftpClient.setControlKeepAliveTimeout

2018-10-12 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-11100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16647588#comment-16647588
 ] 

Kitti Nanasi commented on HADOOP-11100:
---

Thanks for the new patch [~adam.antal]! It looks good to me.

+1

> Support to configure  ftpClient.setControlKeepAliveTimeout 
> ---
>
> Key: HADOOP-11100
> URL: https://issues.apache.org/jira/browse/HADOOP-11100
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.3.0
>Reporter: Krishnamoorthy Dharmalingam
>Assignee: Adam Antal
>Priority: Minor
> Attachments: HADOOP-11100.002.patch, HADOOP-11100.003.patch, 
> HDFS-11000.001.patch
>
>
> In FTPFileSystem, this timeout cannot be configured through the Configuration.
> It would be very straightforward to configure in the FTPFileSystem.connect() 
> method via ftpClient.setControlKeepAliveTimeout, like:
> private FTPClient connect() throws IOException {
> ...
> String timeout = conf.get("fs.ftp.timeout." + host);
> ...
>  ftpClient.setControlKeepAliveTimeout(new Integer(300));
> 
> }






[jira] [Commented] (HADOOP-11100) Support to configure ftpClient.setControlKeepAliveTimeout

2018-10-09 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-11100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16643708#comment-16643708
 ] 

Kitti Nanasi commented on HADOOP-11100:
---

Thanks for working on this [~adam.antal]!

You can test the added configuration in TestFTPFileSystem, like in the test 
TestFTPFileSystem#testFTPTransferMode.

Do I understand correctly that the default timeout is 300 milliseconds?

> Support to configure  ftpClient.setControlKeepAliveTimeout 
> ---
>
> Key: HADOOP-11100
> URL: https://issues.apache.org/jira/browse/HADOOP-11100
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.3.0
>Reporter: Krishnamoorthy Dharmalingam
>Assignee: Adam Antal
>Priority: Minor
> Attachments: HADOOP-11100.002.patch, HDFS-11000.001.patch
>
>
> In FTPFileSystem, this timeout cannot be configured through the Configuration.
> It would be very straightforward to configure in the FTPFileSystem.connect() 
> method via ftpClient.setControlKeepAliveTimeout, like:
> private FTPClient connect() throws IOException {
> ...
> String timeout = conf.get("fs.ftp.timeout." + host);
> ...
>  ftpClient.setControlKeepAliveTimeout(new Integer(300));
> 
> }






[jira] [Commented] (HADOOP-15698) KMS startup logs don't show

2018-08-28 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16594751#comment-16594751
 ] 

Kitti Nanasi commented on HADOOP-15698:
---

Thanks [~ajayydv] for reviewing!

To start KMSWebServer with logging, the configuration needs to be set up in 
advance, so it needs a config setup similar to the one done in MiniKMS#start. 
It is possible to do that, but it may not make much sense to create a 
MiniKMS-like test just for testing the logging. I think a better solution would 
be to modify MiniKMS so that it uses the same log4j configuration and 
initialisation as the production code, but that would be a much bigger 
refactor, because it would also require starting KMSWebServer differently to 
include the startup logging as well. What do you think?

> KMS startup logs don't show
> ---
>
> Key: HADOOP-15698
> URL: https://issues.apache.org/jira/browse/HADOOP-15698
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HADOOP-15698.001.patch
>
>
> During KMS startup, log4j logs don't show up, resulting in important logs 
> getting omitted. This happens because log4j initialisation only happens in 
> KMSWebApp#contextInitialized, and logs written before that don't show up.
> For example the following log never shows up:
> [https://github.com/apache/hadoop/blob/a55d6bba71c81c1c4e9d8cd11f55c78f10a548b0/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/ZKSignerSecretProvider.java#L197-L199]
> Another example is that the KMS startup message never shows up in the kms 
> logs.
> Note that this works in the unit tests, because MiniKMS sets the log4j system 
> property.






[jira] [Updated] (HADOOP-15698) KMS startup logs don't show

2018-08-28 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HADOOP-15698:
--
Status: Patch Available  (was: Open)

> KMS startup logs don't show
> ---
>
> Key: HADOOP-15698
> URL: https://issues.apache.org/jira/browse/HADOOP-15698
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HADOOP-15698.001.patch
>
>
> During KMS startup, log4j logs don't show up, resulting in important logs 
> getting omitted. This happens because log4j initialisation only happens in 
> KMSWebApp#contextInitialized, and logs written before that don't show up.
> For example the following log never shows up:
> [https://github.com/apache/hadoop/blob/a55d6bba71c81c1c4e9d8cd11f55c78f10a548b0/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/ZKSignerSecretProvider.java#L197-L199]
> Another example is that the KMS startup message never shows up in the kms 
> logs.
> Note that this works in the unit tests, because MiniKMS sets the log4j system 
> property.






[jira] [Commented] (HADOOP-15698) KMS startup logs don't show

2018-08-27 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16593456#comment-16593456
 ] 

Kitti Nanasi commented on HADOOP-15698:
---

I tried my patch on a cluster and the logs mentioned in the description do show 
up in the kms.log file.

> KMS startup logs don't show
> ---
>
> Key: HADOOP-15698
> URL: https://issues.apache.org/jira/browse/HADOOP-15698
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HADOOP-15698.001.patch
>
>
> During KMS startup, log4j logs don't show up, resulting in important logs 
> getting omitted. This happens because log4j initialisation only happens in 
> KMSWebApp#contextInitialized, and logs written before that don't show up.
> For example the following log never shows up:
> [https://github.com/apache/hadoop/blob/a55d6bba71c81c1c4e9d8cd11f55c78f10a548b0/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/ZKSignerSecretProvider.java#L197-L199]
> Another example is that the KMS startup message never shows up in the kms 
> logs.
> Note that this works in the unit tests, because MiniKMS sets the log4j system 
> property.






[jira] [Updated] (HADOOP-15698) KMS startup logs don't show

2018-08-27 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HADOOP-15698:
--
Attachment: HADOOP-15698.001.patch

> KMS startup logs don't show
> ---
>
> Key: HADOOP-15698
> URL: https://issues.apache.org/jira/browse/HADOOP-15698
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HADOOP-15698.001.patch
>
>
> During KMS startup, log4j logs don't show up, resulting in important logs 
> getting omitted. This happens because log4j initialisation only happens in 
> KMSWebApp#contextInitialized, and logs written before that don't show up.
> For example the following log never shows up:
> [https://github.com/apache/hadoop/blob/a55d6bba71c81c1c4e9d8cd11f55c78f10a548b0/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/ZKSignerSecretProvider.java#L197-L199]
> Another example is that the KMS startup message never shows up in the kms 
> logs.
> Note that this works in the unit tests, because MiniKMS sets the log4j system 
> property.






[jira] [Created] (HADOOP-15698) KMS startup logs don't show

2018-08-27 Thread Kitti Nanasi (JIRA)
Kitti Nanasi created HADOOP-15698:
-

 Summary: KMS startup logs don't show
 Key: HADOOP-15698
 URL: https://issues.apache.org/jira/browse/HADOOP-15698
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms
Affects Versions: 3.1.0
Reporter: Kitti Nanasi
Assignee: Kitti Nanasi


During KMS startup, log4j logs don't show up, resulting in important logs 
getting omitted. This happens because log4j initialisation only happens in 
KMSWebApp#contextInitialized, and logs written before that don't show up.

For example the following log never shows up:

[https://github.com/apache/hadoop/blob/a55d6bba71c81c1c4e9d8cd11f55c78f10a548b0/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/ZKSignerSecretProvider.java#L197-L199]

Another example is that the KMS startup message never shows up in the kms logs.

Note that this works in the unit tests, because MiniKMS sets the log4j system 
property.
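A hedged illustration of the MiniKMS behaviour mentioned above (the property name is the standard log4j 1.x mechanism; the file path is a placeholder): pointing log4j at a configuration before any logger is created is what makes the startup messages visible in tests.
{code:java}
// must run before the first logger is created; path is a placeholder
System.setProperty("log4j.configuration", "file:/path/to/kms-log4j.properties");
{code}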






[jira] [Commented] (HADOOP-15675) checkstyle suppressions files are cached

2018-08-22 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16588623#comment-16588623
 ] 

Kitti Nanasi commented on HADOOP-15675:
---

Thanks [~aw] for creating YETUS-660! I think it would be good to have it 
working for the precommit at least.

> checkstyle suppressions files are cached
> 
>
> Key: HADOOP-15675
> URL: https://issues.apache.org/jira/browse/HADOOP-15675
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Kitti Nanasi
>Priority: Minor
>
> If a patch is created with checkstyle errors, for example when a modified 
> line is longer than 80 characters, then running checkstyle with the 
> test-patch script runs to success (though it should fail and show an error 
> about the long line).
> {code:java}
> dev-support/bin/test-patch  --plugins="-checkstyle" test.patch{code}
> However, it does show the error (so it works correctly) when run with the 
> IDEA checkstyle plugin.
>  
> I only tried it out for patches with overly long lines and wrong indentation, 
> but I assume it can be a more general problem.
> We realised this when reviewing HDFS-13217, where patch 004 has a "too long 
> line" checkstyle error. In the first build for that patch, the checkstyle 
> report was showing the error, but when it was run again with the same patch, 
> the error disappeared. So the checkstyle checking probably stopped working on 
> trunk somewhere between April and July 2018.






[jira] [Commented] (HADOOP-15675) checkstyle fails to execute

2018-08-16 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16582970#comment-16582970
 ] 

Kitti Nanasi commented on HADOOP-15675:
---

Thanks [~aw] and [~xiaochen] for the comments and for fixing the issue! It 
seems that reverting HDDS-119 fixed it when run by the precommit.

It still shows a false-positive report for me locally (without the 
"-checkstyle"), but you are right [~aw], that can be because the in-tree 
version of yetus is very old.

> checkstyle fails to execute
> ---
>
> Key: HADOOP-15675
> URL: https://issues.apache.org/jira/browse/HADOOP-15675
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Kitti Nanasi
>Priority: Minor
>
> If a patch is created with checkstyle errors, for example when a modified 
> line is longer than 80 characters, then running checkstyle with the 
> test-patch script runs to success (though it should fail and show an error 
> about the long line).
> {code:java}
> dev-support/bin/test-patch  --plugins="-checkstyle" test.patch{code}
> However, it does show the error (so it works correctly) when run with the 
> IDEA checkstyle plugin.
>  
> I only tried it out for patches with overly long lines and wrong indentation, 
> but I assume it can be a more general problem.
> We realised this when reviewing HDFS-13217, where patch 004 has a "too long 
> line" checkstyle error. In the first build for that patch, the checkstyle 
> report was showing the error, but when it was run again with the same patch, 
> the error disappeared. So the checkstyle checking probably stopped working on 
> trunk somewhere between April and July 2018.






[jira] [Commented] (HADOOP-15638) KMS Accept Queue Size default changed from 500 to 128 in Hadoop 3.x

2018-08-13 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16578050#comment-16578050
 ] 

Kitti Nanasi commented on HADOOP-15638:
---

Thanks for reporting this and for the patch [~jojochuang]!
+1 (non-binding) Looks good to me.

> KMS Accept Queue Size default changed from 500 to 128 in Hadoop 3.x
> ---
>
> Key: HADOOP-15638
> URL: https://issues.apache.org/jira/browse/HADOOP-15638
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.0.0-alpha2
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HADOOP-15638.001.patch
>
>
> HADOOP-13597 migrated KMS from Tomcat to Jetty.
> In Hadoop 2.x, the Accept Queue Size default value was set by environment 
> variable KMS_ACCEPT_QUEUE with default value 500.
> But after the migration, Hadoop 3.x now uses the default HttpServer2 
> configuration to set accept queue size (hadoop.http.socket.backlog.size), 
> which is 128.
> Should we restore the default accept queue size in kms-default.xml? Should we 
> document it as a known issue? After all, Hadoop 2 --> 3 allows breaking 
> changes.
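For reference, a hypothetical kms-site.xml fragment that would restore the Hadoop 2.x behaviour under the new key (the key name appears above; 500 was the old KMS_ACCEPT_QUEUE default):
{code:xml}
<property>
  <name>hadoop.http.socket.backlog.size</name>
  <value>500</value>
</property>
{code}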






[jira] [Commented] (HADOOP-15655) KMS should retry upon SocketTimeoutException

2018-08-10 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16576162#comment-16576162
 ] 

Kitti Nanasi commented on HADOOP-15655:
---

Thanks [~gabor.bota] for the comment! I agree that other possible invocations 
should be tested as well, but I modified the code in patch v003 to only affect 
LoadBalancingKMSClientProvider, so with the newest patch it makes sense to test 
it only in TestLoadBalancingKMSClientProvider. Retrying upon 
SocketTimeoutException in cases other than LoadBalancingKMSClientProvider might 
cause unexpected behaviour.

I also added an "isIdempotent" flag to the KMS operations, so it can be passed 
down to the FailoverOnNetworkExceptionRetry policy, which retries on 
IOExceptions only if the operation is idempotent. So the new implementation 
will retry on SocketTimeoutException as well, provided the operation is 
idempotent.
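A hedged sketch of how the flag gates retries; RetryPolicies.failoverOnNetworkException and the four-argument shouldRetry exist in Hadoop's retry API, but the wiring below is illustrative rather than the patch itself:
{code:java}
import java.net.SocketTimeoutException;
import org.apache.hadoop.io.retry.RetryPolicies;
import org.apache.hadoop.io.retry.RetryPolicy;

public class KmsRetrySketch {
  RetryPolicy.RetryAction check(boolean isIdempotent) throws Exception {
    RetryPolicy policy = RetryPolicies.failoverOnNetworkException(
        RetryPolicies.TRY_ONCE_THEN_FAIL, /* maxFailovers */ 3);
    // SocketTimeoutException is an IOException; with isIdempotent=true the
    // policy may answer FAILOVER_AND_RETRY, with false it fails fast.
    return policy.shouldRetry(
        new SocketTimeoutException("Read timed out"),
        /* retries */ 0, /* failovers */ 0, isIdempotent);
  }
}
{code}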

> KMS should retry upon SocketTimeoutException
> 
>
> Key: HADOOP-15655
> URL: https://issues.apache.org/jira/browse/HADOOP-15655
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Critical
> Attachments: HADOOP-15655.001.patch, HADOOP-15655.002.patch, 
> HADOOP-15655.003.patch
>
>
> KMS doesn't retry upon SocketTimeoutException (the ssl connection was 
> established, but the handshake timed out).
> {noformat}
> 6:08:55.315 PM  WARN  KMSClientProvider   
> Failed to connect to example.com:16000
> 6:08:55.317 PM  WARN  LoadBalancingKMSClientProvider  
> KMS provider at [https://example.com:16000/kms/v1/] threw an IOException: 
> java.net.SocketTimeoutException: Read timed out
>   at java.net.SocketInputStream.socketRead0(Native Method)
>   at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
>   at java.net.SocketInputStream.read(SocketInputStream.java:171)
>   at java.net.SocketInputStream.read(SocketInputStream.java:141)
>   at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
>   at sun.security.ssl.InputRecord.read(InputRecord.java:503)
>   at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
>   at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
>   at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
>   at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
>   at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:153)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:140)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:348)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:333)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:478)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:473)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:472)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:788)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:288)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:284)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:124)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:284)
>   at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
>   at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
>   at 
> org.apache.hadoop.hdfs.D

[jira] [Updated] (HADOOP-15655) KMS should retry upon SocketTimeoutException

2018-08-10 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HADOOP-15655:
--
Attachment: HADOOP-15655.003.patch

> KMS should retry upon SocketTimeoutException
> 
>
> Key: HADOOP-15655
> URL: https://issues.apache.org/jira/browse/HADOOP-15655
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Critical
> Attachments: HADOOP-15655.001.patch, HADOOP-15655.002.patch, 
> HADOOP-15655.003.patch
>
>
> KMS doesn't retry upon SocketTimeoutException (the ssl connection was 
> established, but the handshake timed out).
> {noformat}
> 6:08:55.315 PM  WARN  KMSClientProvider   
> Failed to connect to example.com:16000
> 6:08:55.317 PM  WARN  LoadBalancingKMSClientProvider  
> KMS provider at [https://example.com:16000/kms/v1/] threw an IOException: 
> java.net.SocketTimeoutException: Read timed out
>   at java.net.SocketInputStream.socketRead0(Native Method)
>   at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
>   at java.net.SocketInputStream.read(SocketInputStream.java:171)
>   at java.net.SocketInputStream.read(SocketInputStream.java:141)
>   at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
>   at sun.security.ssl.InputRecord.read(InputRecord.java:503)
>   at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
>   at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
>   at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
>   at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
>   at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:153)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:140)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:348)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:333)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:478)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:473)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:472)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:788)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:288)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:284)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:124)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:284)
>   at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
>   at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
>   at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:946)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:311)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:323)
>   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:949)
>   at org.apache.hadoop.hbase.util.FSUtils.getVersion(FSUtils.java:338)
>   at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:423)
>   at 
> org.apache.hadoop.hbase.master.MasterFileSystem.checkR

[jira] [Created] (HADOOP-15665) Checkstyle shows false positive report

2018-08-09 Thread Kitti Nanasi (JIRA)
Kitti Nanasi created HADOOP-15665:
-

 Summary: Checkstyle shows false positive report
 Key: HADOOP-15665
 URL: https://issues.apache.org/jira/browse/HADOOP-15665
 Project: Hadoop Common
  Issue Type: Bug
  Components: precommit, yetus
Affects Versions: 3.1.0
Reporter: Kitti Nanasi


If a patch is created with checkstyle errors, for example when a modified line 
is longer than 80 characters, then running checkstyle with the test-patch 
script runs to success (though it should fail and show an error about the long 
line).
{code:java}
dev-support/bin/test-patch  --plugins="-checkstyle" test.patch{code}
However, it does show the error (so it works correctly) when run with the 
IDEA checkstyle plugin.

I only tried it out for patches with overly long lines and wrong indentation, 
but I assume it can be a more general problem.

We realised this when reviewing HDFS-13217, where patch 004 has a "too long 
line" checkstyle error. In the first build for that patch, the checkstyle 
report was showing the error, but when it was run again with the same patch, 
the error disappeared. So the checkstyle checking probably stopped working on 
trunk somewhere between April and July 2018.






[jira] [Commented] (HADOOP-15655) KMS should retry upon SocketTimeoutException

2018-08-08 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16572951#comment-16572951
 ] 

Kitti Nanasi commented on HADOOP-15655:
---

Thanks for reviewing it [~ajayydv] and [~gabor.bota]!

I added unit tests for the change in patch v002.

> KMS should retry upon SocketTimeoutException
> 
>
> Key: HADOOP-15655
> URL: https://issues.apache.org/jira/browse/HADOOP-15655
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Critical
> Attachments: HADOOP-15655.001.patch, HADOOP-15655.002.patch
>
>
> KMS doesn't retry upon SocketTimeoutException (the ssl connection was 
> established, but the handshake timed out).
> {noformat}
> 6:08:55.315 PM  WARN  KMSClientProvider   
> Failed to connect to example.com:16000
> 6:08:55.317 PM  WARN  LoadBalancingKMSClientProvider  
> KMS provider at [https://example.com:16000/kms/v1/] threw an IOException: 
> java.net.SocketTimeoutException: Read timed out
>   at java.net.SocketInputStream.socketRead0(Native Method)
>   at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
>   at java.net.SocketInputStream.read(SocketInputStream.java:171)
>   at java.net.SocketInputStream.read(SocketInputStream.java:141)
>   at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
>   at sun.security.ssl.InputRecord.read(InputRecord.java:503)
>   at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
>   at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
>   at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
>   at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
>   at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:153)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:140)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:348)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:333)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:478)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:473)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:472)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:788)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:288)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:284)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:124)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:284)
>   at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
>   at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
>   at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:946)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:311)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:323)
>   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:949)
>   at org.apache.hadoop.hbase.util.FSUtils.getVersion(FSUtils.java:338)
>   at org.apache.hadoop.hbase.util.FSUtils.che

[jira] [Updated] (HADOOP-15655) KMS should retry upon SocketTimeoutException

2018-08-08 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HADOOP-15655:
--
Attachment: HADOOP-15655.002.patch

> KMS should retry upon SocketTimeoutException
> 
>
> Key: HADOOP-15655
> URL: https://issues.apache.org/jira/browse/HADOOP-15655
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Critical
> Attachments: HADOOP-15655.001.patch, HADOOP-15655.002.patch
>
>
> KMS doesn't retry upon SocketTimeoutException (the ssl connection was 
> established, but the handshake timed out).
> {noformat}
> 6:08:55.315 PM  WARN  KMSClientProvider   
> Failed to connect to example.com:16000
> 6:08:55.317 PM  WARN  LoadBalancingKMSClientProvider  
> KMS provider at [https://example.com:16000/kms/v1/] threw an IOException: 
> java.net.SocketTimeoutException: Read timed out
>   at java.net.SocketInputStream.socketRead0(Native Method)
>   at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
>   at java.net.SocketInputStream.read(SocketInputStream.java:171)
>   at java.net.SocketInputStream.read(SocketInputStream.java:141)
>   at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
>   at sun.security.ssl.InputRecord.read(InputRecord.java:503)
>   at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
>   at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
>   at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
>   at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
>   at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:153)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:140)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:348)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:333)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:478)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:473)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:472)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:788)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:288)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:284)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:124)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:284)
>   at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
>   at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
>   at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:946)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:311)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:323)
>   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:949)
>   at org.apache.hadoop.hbase.util.FSUtils.getVersion(FSUtils.java:338)
>   at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:423)
>   at 
> org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.jav

[jira] [Updated] (HADOOP-15655) KMS should retry upon SocketTimeoutException

2018-08-07 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HADOOP-15655:
--
Status: Patch Available  (was: Open)

> KMS should retry upon SocketTimeoutException
> 
>
> Key: HADOOP-15655
> URL: https://issues.apache.org/jira/browse/HADOOP-15655
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Critical
> Attachments: HADOOP-15655.001.patch
>
>
> KMS doesn't retry upon SocketTimeoutException (the ssl connection was 
> established, but the handshake timed out).
> {noformat}
> 6:08:55.315 PM  WARN  KMSClientProvider   
> Failed to connect to example.com:16000
> 6:08:55.317 PM  WARN  LoadBalancingKMSClientProvider  
> KMS provider at [https://example.com:16000/kms/v1/] threw an IOException: 
> java.net.SocketTimeoutException: Read timed out
>   at java.net.SocketInputStream.socketRead0(Native Method)
>   at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
>   at java.net.SocketInputStream.read(SocketInputStream.java:171)
>   at java.net.SocketInputStream.read(SocketInputStream.java:141)
>   at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
>   at sun.security.ssl.InputRecord.read(InputRecord.java:503)
>   at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
>   at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
>   at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
>   at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
>   at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:153)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:140)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:348)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:333)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:478)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:473)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:472)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:788)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:288)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:284)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:124)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:284)
>   at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
>   at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
>   at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:946)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:311)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:323)
>   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:949)
>   at org.apache.hadoop.hbase.util.FSUtils.getVersion(FSUtils.java:338)
>   at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:423)
>   at 
> org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:260)
>   at 
> o
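
The truncated report above argues that a read timeout during the TLS handshake, which surfaces as java.net.SocketTimeoutException, should be treated as a recoverable network failure. Below is a minimal standalone sketch of that classification idea only; it is not Hadoop's actual RetryPolicies or LoadBalancingKMSClientProvider code, and the class name and the exact exception list are illustrative assumptions.

{code:java}
import java.io.IOException;
import java.net.ConnectException;
import java.net.NoRouteToHostException;
import java.net.SocketTimeoutException;
import java.net.UnknownHostException;

// Hypothetical classifier mirroring the idea in HADOOP-15655: which
// IOExceptions should a KMS client treat as retriable network failures?
public class RetriableCheck {

    static boolean isRetriableNetworkException(IOException e) {
        return e instanceof ConnectException
            || e instanceof NoRouteToHostException
            || e instanceof UnknownHostException
            // Proposed addition: a read timeout during the SSL handshake
            // surfaces as SocketTimeoutException and should also fail over.
            || e instanceof SocketTimeoutException;
    }

    public static void main(String[] args) {
        System.out.println(isRetriableNetworkException(
            new SocketTimeoutException("Read timed out")));   // true
        System.out.println(isRetriableNetworkException(
            new IOException("not a network failure")));       // false
    }
}
{code}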

[jira] [Updated] (HADOOP-15655) KMS should retry upon SocketTimeoutException

2018-08-07 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HADOOP-15655:
--
Attachment: HADOOP-15655.001.patch

> KMS should retry upon SocketTimeoutException
> 
>
> Key: HADOOP-15655
> URL: https://issues.apache.org/jira/browse/HADOOP-15655
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Critical
> Attachments: HADOOP-15655.001.patch
>
>
> KMS doesn't retry upon SocketTimeoutException (the TCP connection was 
> established, but the SSL handshake timed out).
> {noformat}
> 6:08:55.315 PM  WARN  KMSClientProvider
> Failed to connect to example.com:16000
> 6:08:55.317 PM  WARN  LoadBalancingKMSClientProvider
> KMS provider at [https://example.com:16000/kms/v1/] threw an IOException: 
> java.net.SocketTimeoutException: Read timed out
>   at java.net.SocketInputStream.socketRead0(Native Method)
>   at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
>   at java.net.SocketInputStream.read(SocketInputStream.java:171)
>   at java.net.SocketInputStream.read(SocketInputStream.java:141)
>   at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
>   at sun.security.ssl.InputRecord.read(InputRecord.java:503)
>   at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
>   at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
>   at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
>   at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
>   at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:153)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:140)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:348)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:333)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:478)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:473)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:472)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:788)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:288)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:284)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:124)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:284)
>   at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
>   at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
>   at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:946)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:311)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:323)
>   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:949)
>   at org.apache.hadoop.hbase.util.FSUtils.getVersion(FSUtils.java:338)
>   at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:423)
>   at 
> org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:260)
>   at 
> org

[jira] [Updated] (HADOOP-15655) KMS should retry upon SocketTimeoutException

2018-08-07 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HADOOP-15655:
--
Description: 
KMS doesn't retry upon SocketTimeoutException (the TCP connection was 
established, but the SSL handshake timed out).
{noformat}
6:08:55.315 PM  WARN  KMSClientProvider
Failed to connect to example.com:16000
6:08:55.317 PM  WARN  LoadBalancingKMSClientProvider
KMS provider at [https://example.com:16000/kms/v1/] threw an IOException: 
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
at sun.security.ssl.InputRecord.read(InputRecord.java:503)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
at 
sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
at 
sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
at 
sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
at 
sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
at 
sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
at 
sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:153)
at 
org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:140)
at 
org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:348)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:333)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:478)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:473)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:472)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:788)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:288)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:284)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:124)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:284)
at 
org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
at 
org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
at 
org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:946)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:311)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:323)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:949)
at org.apache.hadoop.hbase.util.FSUtils.getVersion(FSUtils.java:338)
at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:423)
at 
org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:260)
at 
org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:151)
at 
org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:122)
at 
org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:795)
at 
org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2036)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:553)
at java.lang.Thread.run(Thread.java:748)
6:08:55.346 PM  WARN  LoadBalancingKMSClientProvider
Aborting since the Re

[jira] [Updated] (HADOOP-15655) KMS should retry upon SocketTimeoutException

2018-08-07 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HADOOP-15655:
--
Summary: KMS should retry upon SocketTimeoutException  (was: KMS should 
retry upon IOException regardless)

> KMS should retry upon SocketTimeoutException
> 
>
> Key: HADOOP-15655
> URL: https://issues.apache.org/jira/browse/HADOOP-15655
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Critical
>
> KMS doesn't retry upon SocketTimeoutException (the TCP connection was 
> established, but the SSL handshake timed out). It would be better if KMS retried 
> upon any kind of IOException.
> {noformat}
> 6:08:55.315 PM  WARN  KMSClientProvider
> Failed to connect to example.com:16000
> 6:08:55.317 PM  WARN  LoadBalancingKMSClientProvider
> KMS provider at [https://example.com:16000/kms/v1/] threw an IOException: 
> java.net.SocketTimeoutException: Read timed out
>   at java.net.SocketInputStream.socketRead0(Native Method)
>   at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
>   at java.net.SocketInputStream.read(SocketInputStream.java:171)
>   at java.net.SocketInputStream.read(SocketInputStream.java:141)
>   at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
>   at sun.security.ssl.InputRecord.read(InputRecord.java:503)
>   at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
>   at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
>   at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
>   at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
>   at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:153)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:140)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:348)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:333)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:478)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:473)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:472)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:788)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:288)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:284)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:124)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:284)
>   at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
>   at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
>   at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:946)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:311)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:323)
>   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:949)
>   at org.apache.hadoop.hbase.util.FSUtils.getVersion(FSUtils.java:338)
>   at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:423)
>   at 
> org.apache.ha

[jira] [Updated] (HADOOP-15655) KMS should retry upon IOException regardless

2018-08-07 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HADOOP-15655:
--
Target Version/s: 3.2.0

> KMS should retry upon IOException regardless
> 
>
> Key: HADOOP-15655
> URL: https://issues.apache.org/jira/browse/HADOOP-15655
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Critical
>
> KMS doesn't retry upon SocketTimeoutException (the TCP connection was 
> established, but the SSL handshake timed out). It would be better if KMS retried 
> upon any kind of IOException.
> {noformat}
> 6:08:55.315 PM  WARN  KMSClientProvider
> Failed to connect to example.com:16000
> 6:08:55.317 PM  WARN  LoadBalancingKMSClientProvider
> KMS provider at [https://example.com:16000/kms/v1/] threw an IOException: 
> java.net.SocketTimeoutException: Read timed out
>   at java.net.SocketInputStream.socketRead0(Native Method)
>   at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
>   at java.net.SocketInputStream.read(SocketInputStream.java:171)
>   at java.net.SocketInputStream.read(SocketInputStream.java:141)
>   at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
>   at sun.security.ssl.InputRecord.read(InputRecord.java:503)
>   at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
>   at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
>   at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
>   at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
>   at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:153)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:140)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:348)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:333)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:478)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:473)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:472)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:788)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:288)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:284)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:124)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:284)
>   at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
>   at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
>   at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:946)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:311)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:323)
>   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:949)
>   at org.apache.hadoop.hbase.util.FSUtils.getVersion(FSUtils.java:338)
>   at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:423)
>   at 
> org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:260)
>   a

[jira] [Created] (HADOOP-15655) KMS should retry upon IOException regardless

2018-08-07 Thread Kitti Nanasi (JIRA)
Kitti Nanasi created HADOOP-15655:
-

 Summary: KMS should retry upon IOException regardless
 Key: HADOOP-15655
 URL: https://issues.apache.org/jira/browse/HADOOP-15655
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms
Affects Versions: 3.1.0
Reporter: Kitti Nanasi
Assignee: Kitti Nanasi
 Fix For: 3.2.0


KMS doesn't retry upon SocketTimeoutException (the TCP connection was 
established, but the SSL handshake timed out). It would be better if KMS retried 
upon any kind of IOException.
{noformat}
6:08:55.315 PM  WARN  KMSClientProvider
Failed to connect to example.com:16000
6:08:55.317 PM  WARN  LoadBalancingKMSClientProvider
KMS provider at [https://example.com:16000/kms/v1/] threw an IOException: 
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
at sun.security.ssl.InputRecord.read(InputRecord.java:503)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
at 
sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
at 
sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
at 
sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
at 
sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
at 
sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
at 
sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:153)
at 
org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:140)
at 
org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:348)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:333)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:478)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:473)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:472)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:788)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:288)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:284)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:124)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:284)
at 
org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
at 
org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
at 
org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:946)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:311)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:323)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:949)
at org.apache.hadoop.hbase.util.FSUtils.getVersion(FSUtils.java:338)
at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:423)
at 
org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:260)
at 
org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:151)
at 
org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:122)
at 
org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(H

[jira] [Updated] (HADOOP-15655) KMS should retry upon IOException regardless

2018-08-07 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HADOOP-15655:
--
Fix Version/s: (was: 3.2.0)

> KMS should retry upon IOException regardless
> 
>
> Key: HADOOP-15655
> URL: https://issues.apache.org/jira/browse/HADOOP-15655
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Critical
>
> KMS doesn't retry upon SocketTimeoutException (the TCP connection was 
> established, but the SSL handshake timed out). It would be better if KMS retried 
> upon any kind of IOException.
> {noformat}
> 6:08:55.315 PM  WARN  KMSClientProvider
> Failed to connect to example.com:16000
> 6:08:55.317 PM  WARN  LoadBalancingKMSClientProvider
> KMS provider at [https://example.com:16000/kms/v1/] threw an IOException: 
> java.net.SocketTimeoutException: Read timed out
>   at java.net.SocketInputStream.socketRead0(Native Method)
>   at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
>   at java.net.SocketInputStream.read(SocketInputStream.java:171)
>   at java.net.SocketInputStream.read(SocketInputStream.java:141)
>   at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
>   at sun.security.ssl.InputRecord.read(InputRecord.java:503)
>   at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
>   at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
>   at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
>   at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
>   at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:153)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:140)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:348)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:333)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:478)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:473)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:472)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:788)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:288)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:284)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:124)
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:284)
>   at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
>   at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
>   at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:946)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:311)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:323)
>   at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:949)
>   at org.apache.hadoop.hbase.util.FSUtils.getVersion(FSUtils.java:338)
>   at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:423)
>   at 
> org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:260)
>

[jira] [Commented] (HADOOP-15609) Retry KMS calls when SSLHandshakeException occurs

2018-07-20 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16550711#comment-16550711
 ] 

Kitti Nanasi commented on HADOOP-15609:
---

Thanks [~xiaochen] for the comments! I fixed them in patch v004.

> Retry KMS calls when SSLHandshakeException occurs
> -
>
> Key: HADOOP-15609
> URL: https://issues.apache.org/jira/browse/HADOOP-15609
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HADOOP-15609.001.patch, HADOOP-15609.002.patch, 
> HADOOP-15609.003.patch, HADOOP-15609.004.patch
>
>
> KMS calls should be retried when javax.net.ssl.SSLHandshakeException occurs and 
> the FailoverOnNetworkExceptionRetry policy is used.
> For example, in the following stack trace, we can see that the KMS provider's 
> connection is lost, an SSLHandshakeException is thrown, and the operation is 
> not retried:
> {code}
> W0711 18:19:50.213472  1508 LoadBalancingKMSClientProvider.java:132] KMS 
> provider at [https://example.com:16000/kms/v1/] threw an IOException:
> Java exception follows:
> javax.net.ssl.SSLHandshakeException: Remote host closed connection during 
> handshake
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1002)
> at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
> at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
> at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1316)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1291)
> at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:250)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:512)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:502)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:791)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:288)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:124)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
> at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
> at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:946)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:311)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:323)
> Caused by: java.io.EOFException: SSL peer shut down incorrectly
> at sun.security.ssl.InputRecord.read(InputRecord.java:505)
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
> ... 22 more
> W0711 18:19:50.239328  1508 LoadBalancingKMSClientProvider.java:149] Aborting 
> since the Request has failed with all KMS providers(depending on 
> hadoop.security.kms.client.failover.max.retries=1 setting and numProviders=1) 
> in the group OR the exception is not recoverable
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
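
The warning quoted in the trace above keys off hadoop.security.kms.client.failover.max.retries. For context, that retry budget is a client-side core-site.xml setting; the value below is only an example, since by default the retry count is derived from the number of configured KMS providers.

{code:xml}
<!-- client-side core-site.xml; the value 3 is illustrative -->
<property>
  <name>hadoop.security.kms.client.failover.max.retries</name>
  <value>3</value>
</property>
{code}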



[jira] [Updated] (HADOOP-15609) Retry KMS calls when SSLHandshakeException occurs

2018-07-20 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HADOOP-15609:
--
Attachment: HADOOP-15609.004.patch

> Retry KMS calls when SSLHandshakeException occurs
> -
>
> Key: HADOOP-15609
> URL: https://issues.apache.org/jira/browse/HADOOP-15609
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HADOOP-15609.001.patch, HADOOP-15609.002.patch, 
> HADOOP-15609.003.patch, HADOOP-15609.004.patch
>
>
> KMS calls should be retried when javax.net.ssl.SSLHandshakeException occurs and 
> the FailoverOnNetworkExceptionRetry policy is used.
> For example, in the following stack trace, we can see that the KMS provider's 
> connection is lost, an SSLHandshakeException is thrown, and the operation is 
> not retried:
> {code}
> W0711 18:19:50.213472  1508 LoadBalancingKMSClientProvider.java:132] KMS 
> provider at [https://example.com:16000/kms/v1/] threw an IOException:
> Java exception follows:
> javax.net.ssl.SSLHandshakeException: Remote host closed connection during 
> handshake
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1002)
> at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
> at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
> at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1316)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1291)
> at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:250)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:512)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:502)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:791)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:288)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:124)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
> at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
> at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:946)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:311)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:323)
> Caused by: java.io.EOFException: SSL peer shut down incorrectly
> at sun.security.ssl.InputRecord.read(InputRecord.java:505)
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
> ... 22 more
> W0711 18:19:50.239328  1508 LoadBalancingKMSClientProvider.java:149] Aborting 
> since the Request has failed with all KMS providers(depending on 
> hadoop.security.kms.client.failover.max.retries=1 setting and numProviders=1) 
> in the group OR the exception is not recoverable
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
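
For readers following the quoted LoadBalancingKMSClientProvider messages, here is a self-contained sketch of the failover loop shape they describe: try each provider in turn, warn on each failure, and abort once the retry budget is spent. The names FailoverSketch and KmsOp and the provider URLs are illustrative assumptions; the real client additionally consults a RetryPolicy and can sleep between attempts.

{code:java}
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of a load-balanced KMS failover loop.
public class FailoverSketch {

    interface KmsOp<T> {
        T run(String providerUrl) throws IOException;
    }

    static <T> T doOp(List<String> providers, int maxRetries, KmsOp<T> op)
            throws IOException {
        IOException last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            String url = providers.get(attempt % providers.size());
            try {
                return op.run(url);
            } catch (IOException e) {
                last = e;
                // WARN-level event per provider in the real client.
                System.err.println("WARN KMS provider at [" + url
                    + "] threw an IOException: " + e.getMessage());
            }
        }
        // All providers/retries exhausted: "Aborting since the Request has
        // failed with all KMS providers..." in the real log output.
        throw last;
    }

    public static void main(String[] args) throws IOException {
        List<String> providers =
            Arrays.asList("https://kms1.example.com:16000/kms/v1/",
                          "https://kms2.example.com:16000/kms/v1/");
        // The second provider "succeeds", so the call fails over and returns.
        String result = doOp(providers, 1, url -> {
            if (url.contains("kms1")) {
                throw new IOException("Connection refused");
            }
            return "ok from " + url;
        });
        System.out.println(result);
    }
}
{code}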



[jira] [Commented] (HADOOP-15596) Stack trace should not be printed out when running hadoop key commands

2018-07-19 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16549317#comment-16549317
 ] 

Kitti Nanasi commented on HADOOP-15596:
---

[~xiaochen], I agree with you on changing the log level to error when the 
operation fails on all KMS servers; I modified that in my latest patch.

> Stack trace should not be printed out when running hadoop key commands
> --
>
> Key: HADOOP-15596
> URL: https://issues.apache.org/jira/browse/HADOOP-15596
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Minor
> Attachments: HADOOP-15596.001.patch, HADOOP-15596.002.patch, 
> HADOOP-15596.003.patch
>
>
> The full stack trace is printed out if any exception occurs while executing 
> hadoop key commands; the whole stack trace should not be printed out.
> For example, when the KMS is down, we get this error message for the hadoop 
> key list command:
> {code:java}
>  -bash-4.1$ hadoop key list
>  Cannot list keys for KeyProvider: 
> KMSClientProvider[http://example.com:16000/kms/v1/]: Connection 
> refusedjava.net.ConnectException: Connection refused
>  at java.net.PlainSocketImpl.socketConnect(Native Method)
>  at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
>  at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
>  at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
>  at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
>  at java.net.Socket.connect(Socket.java:579)
>  at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
>  at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
>  at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
>  at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
>  at sun.net.www.http.HttpClient.New(HttpClient.java:308)
>  at sun.net.www.http.HttpClient.New(HttpClient.java:326)
>  at 
> sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996)
>  at 
> sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932)
>  at 
> sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850)
>  at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186)
>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:125)
>  at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:312)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:397)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:392)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:415)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:392)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.getKeys(KMSClientProvider.java:479)
>  at 
> org.apache.hadoop.crypto.key.KeyShell$ListCommand.execute(KeyShell.java:286)
>  at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:79)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>  at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:513)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
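
A small sketch of the severity split agreed on in the comment above: WARN while individual providers fail, error level only once the operation has failed on every KMS server. It uses java.util.logging so it runs standalone; the real code logs through Hadoop's logging facade, and the simulated failures below are assumptions for illustration.

{code:java}
import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;

// Hypothetical severity split: per-provider failures are WARNING, and only
// the final "failed everywhere" outcome is SEVERE (the ERROR equivalent).
public class SeveritySketch {
    private static final Logger LOG = Logger.getLogger("kms-client");

    static void listKeys(List<String> providers) {
        for (String url : providers) {
            try {
                // Stand-in for a real KMS call; here every provider fails.
                throw new IOException("Connection refused");
            } catch (IOException e) {
                LOG.log(Level.WARNING, "KMS provider at [" + url
                    + "] threw an IOException: " + e.getMessage());
            }
        }
        LOG.log(Level.SEVERE, "Operation failed on all KMS providers");
    }

    public static void main(String[] args) {
        listKeys(Arrays.asList("https://kms1.example.com:16000/kms/v1/",
                               "https://kms2.example.com:16000/kms/v1/"));
    }
}
{code}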



[jira] [Updated] (HADOOP-15596) Stack trace should not be printed out when running hadoop key commands

2018-07-19 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HADOOP-15596:
--
Attachment: HADOOP-15596.003.patch

> Stack trace should not be printed out when running hadoop key commands
> --
>
> Key: HADOOP-15596
> URL: https://issues.apache.org/jira/browse/HADOOP-15596
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Minor
> Attachments: HADOOP-15596.001.patch, HADOOP-15596.002.patch, 
> HADOOP-15596.003.patch
>
>
> The full stack trace is printed out if any exception occurs while executing 
> hadoop key commands; the whole stack trace should not be printed out.
> For example, when the KMS is down, we get this error message for the hadoop 
> key list command:
> {code:java}
>  -bash-4.1$ hadoop key list
>  Cannot list keys for KeyProvider: 
> KMSClientProvider[http://example.com:16000/kms/v1/]: Connection 
> refusedjava.net.ConnectException: Connection refused
>  at java.net.PlainSocketImpl.socketConnect(Native Method)
>  at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
>  at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
>  at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
>  at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
>  at java.net.Socket.connect(Socket.java:579)
>  at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
>  at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
>  at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
>  at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
>  at sun.net.www.http.HttpClient.New(HttpClient.java:308)
>  at sun.net.www.http.HttpClient.New(HttpClient.java:326)
>  at 
> sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996)
>  at 
> sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932)
>  at 
> sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850)
>  at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186)
>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:125)
>  at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:312)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:397)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:392)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:415)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:392)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.getKeys(KMSClientProvider.java:479)
>  at 
> org.apache.hadoop.crypto.key.KeyShell$ListCommand.execute(KeyShell.java:286)
>  at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:79)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>  at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:513)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
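
A minimal sketch of the behaviour the description above asks for: report a one-line error instead of dumping the stack trace when a command such as hadoop key list cannot reach the KMS. The class name and the hard-coded provider string are illustrative assumptions, not the actual KeyShell change in the attached patch.

{code:java}
import java.io.IOException;

// Hypothetical CLI error handling: message only, no stack trace.
public class KeyShellSketch {

    static void listKeys() throws IOException {
        // Stand-in for KMSClientProvider.getKeys(); always fails here.
        throw new IOException("Connection refused");
    }

    public static void main(String[] args) {
        try {
            listKeys();
        } catch (IOException e) {
            // Before: the full ~30-frame stack trace went to the console.
            // After: only the provider and the reason are reported.
            System.err.println("Cannot list keys for KeyProvider: "
                + "KMSClientProvider[http://example.com:16000/kms/v1/]: "
                + e.getMessage());
            System.exit(1);
        }
    }
}
{code}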



[jira] [Commented] (HADOOP-15609) Retry KMS calls when SSLHandshakeException occurs

2018-07-19 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16549191#comment-16549191
 ] 

Kitti Nanasi commented on HADOOP-15609:
---

Thanks [~ste...@apache.org] and [~xiaochen] for the comments! I fixed them in 
patch v003.

> Retry KMS calls when SSLHandshakeException occurs
> -
>
> Key: HADOOP-15609
> URL: https://issues.apache.org/jira/browse/HADOOP-15609
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HADOOP-15609.001.patch, HADOOP-15609.002.patch, 
> HADOOP-15609.003.patch
>
>
> KMS calls should be retried when javax.net.ssl.SSLHandshakeException occurs and 
> the FailoverOnNetworkExceptionRetry policy is used.
> For example, in the following stack trace, we can see that the KMS provider's 
> connection is lost, an SSLHandshakeException is thrown, and the operation is 
> not retried:
> {code}
> W0711 18:19:50.213472  1508 LoadBalancingKMSClientProvider.java:132] KMS 
> provider at [https://example.com:16000/kms/v1/] threw an IOException:
> Java exception follows:
> javax.net.ssl.SSLHandshakeException: Remote host closed connection during 
> handshake
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1002)
> at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
> at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
> at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1316)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1291)
> at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:250)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:512)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:502)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:791)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:288)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:124)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
> at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
> at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:946)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:311)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:323)
> Caused by: java.io.EOFException: SSL peer shut down incorrectly
> at sun.security.ssl.InputRecord.read(InputRecord.java:505)
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
> ... 22 more
> W0711 18:19:50.239328  1508 LoadBalancingKMSClientProvider.java:149] Aborting 
> since the Request has failed with all KMS providers(depending on 
> hadoop.security.kms.client.failover.max.retries=1 setting and numProviders=1) 
> in the group OR the exception is not recoverable
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
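
One type-level detail behind the retry discussion above: javax.net.ssl.SSLHandshakeException extends SSLException, which itself extends IOException, and the EOFException in the quoted trace travels as its cause. The snippet below simply verifies those two relationships; the message strings are copied from the trace for illustration.

{code:java}
import java.io.EOFException;
import java.io.IOException;
import javax.net.ssl.SSLHandshakeException;

// Verifies that SSLHandshakeException is an IOException and that a wrapped
// EOFException cause can be inspected, as a retry policy would need to do.
public class HandshakeTypeCheck {
    public static void main(String[] args) {
        SSLHandshakeException e = new SSLHandshakeException(
            "Remote host closed connection during handshake");
        e.initCause(new EOFException("SSL peer shut down incorrectly"));

        System.out.println(e instanceof IOException);              // true
        System.out.println(e.getCause() instanceof EOFException);  // true
    }
}
{code}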



[jira] [Updated] (HADOOP-15609) Retry KMS calls when SSLHandshakeException occurs

2018-07-19 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HADOOP-15609:
--
Attachment: HADOOP-15609.003.patch

> Retry KMS calls when SSLHandshakeException occurs
> -
>
> Key: HADOOP-15609
> URL: https://issues.apache.org/jira/browse/HADOOP-15609
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HADOOP-15609.001.patch, HADOOP-15609.002.patch, 
> HADOOP-15609.003.patch
>
>
> KMS calls should be retried when javax.net.ssl.SSLHandshakeException occurs and 
> the FailoverOnNetworkExceptionRetry policy is used.
> For example, in the following stack trace, we can see that the KMS provider's 
> connection is lost, an SSLHandshakeException is thrown, and the operation is 
> not retried:
> {code}
> W0711 18:19:50.213472  1508 LoadBalancingKMSClientProvider.java:132] KMS 
> provider at [https://example.com:16000/kms/v1/] threw an IOException:
> Java exception follows:
> javax.net.ssl.SSLHandshakeException: Remote host closed connection during 
> handshake
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1002)
> at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
> at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
> at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1316)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1291)
> at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:250)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:512)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:502)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:791)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:288)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:124)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
> at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
> at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:946)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:311)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:323)
> Caused by: java.io.EOFException: SSL peer shut down incorrectly
> at sun.security.ssl.InputRecord.read(InputRecord.java:505)
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
> ... 22 more
> W0711 18:19:50.239328  1508 LoadBalancingKMSClientProvider.java:149] Aborting 
> since the Request has failed with all KMS providers(depending on 
> hadoop.security.kms.client.failover.max.retries=1 setting and numProviders=1) 
> in the group OR the exception is not recoverable
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15609) Retry KMS calls when SSLHandshakeException occurs

2018-07-19 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HADOOP-15609:
--
Attachment: (was: HADOOP-15609.003.patch)

> Retry KMS calls when SSLHandshakeException occurs
> -
>
> Key: HADOOP-15609
> URL: https://issues.apache.org/jira/browse/HADOOP-15609
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HADOOP-15609.001.patch, HADOOP-15609.002.patch
>
>
> KMS calls should be retried when javax.net.ssl.SSLHandshakeException occurs and 
> the FailoverOnNetworkExceptionRetry policy is used.
> For example, in the following stack trace, we can see that the KMS provider's 
> connection is lost, an SSLHandshakeException is thrown, and the operation is 
> not retried:
> {code}
> W0711 18:19:50.213472  1508 LoadBalancingKMSClientProvider.java:132] KMS 
> provider at [https://example.com:16000/kms/v1/] threw an IOException:
> Java exception follows:
> javax.net.ssl.SSLHandshakeException: Remote host closed connection during 
> handshake
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1002)
> at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
> at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
> at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1316)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1291)
> at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:250)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:512)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:502)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:791)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:288)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:124)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
> at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
> at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:946)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:311)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:323)
> Caused by: java.io.EOFException: SSL peer shut down incorrectly
> at sun.security.ssl.InputRecord.read(InputRecord.java:505)
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
> ... 22 more
> W0711 18:19:50.239328  1508 LoadBalancingKMSClientProvider.java:149] Aborting 
> since the Request has failed with all KMS providers(depending on 
> hadoop.security.kms.client.failover.max.retries=1 setting and numProviders=1) 
> in the group OR the exception is not recoverable
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15609) Retry KMS calls when SSLHandshakeException occurs

2018-07-19 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HADOOP-15609:
--
Attachment: HADOOP-15609.003.patch

> Retry KMS calls when SSLHandshakeException occurs
> -
>
> Key: HADOOP-15609
> URL: https://issues.apache.org/jira/browse/HADOOP-15609
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HADOOP-15609.001.patch, HADOOP-15609.002.patch, 
> HADOOP-15609.003.patch
>
>
> KMS calls should be retried when a javax.net.ssl.SSLHandshakeException occurs 
> and the FailoverOnNetworkExceptionRetry policy is used.
> For example, in the following stack trace we can see that the KMS provider's 
> connection is lost, an SSLHandshakeException is thrown, and the operation is 
> not retried:
> {code}
> W0711 18:19:50.213472  1508 LoadBalancingKMSClientProvider.java:132] KMS 
> provider at [https://example.com:16000/kms/v1/] threw an IOException:
> Java exception follows:
> javax.net.ssl.SSLHandshakeException: Remote host closed connection during 
> handshake
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1002)
> at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
> at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
> at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1316)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1291)
> at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:250)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:512)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:502)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:791)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:288)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:124)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
> at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
> at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:946)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:311)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:323)
> Caused by: java.io.EOFException: SSL peer shut down incorrectly
> at sun.security.ssl.InputRecord.read(InputRecord.java:505)
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
> ... 22 more
> W0711 18:19:50.239328  1508 LoadBalancingKMSClientProvider.java:149] Aborting 
> since the Request has failed with all KMS providers(depending on 
> hadoop.security.kms.client.failover.max.retries=1 setting and numProviders=1) 
> in the group OR the exception is not recoverable
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15609) Retry KMS calls when SSLHandshakeException occurs

2018-07-18 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16547814#comment-16547814
 ] 

Kitti Nanasi commented on HADOOP-15609:
---

Thanks for the comments [~xiaochen]! I modified the code so that the retry only 
applies to KMSClientProvider, and I also added some tests in my latest patch.
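
For illustration, a self-contained sketch of such a test (this is not the 
actual patch's test; KmsCall and runWithOneRetry are hypothetical stand-ins 
for the real provider call and retry loop):

{code:java}
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import javax.net.ssl.SSLHandshakeException;
import org.junit.Test;

public class TestHandshakeRetrySketch {
  /** Stand-in for a single KMS provider call; not the real API. */
  interface KmsCall { String run() throws Exception; }

  @Test
  public void callIsRetriedAfterHandshakeFailure() throws Exception {
    KmsCall call = mock(KmsCall.class);
    // Fail once with a handshake error, then succeed on the retry.
    when(call.run())
        .thenThrow(new SSLHandshakeException("Remote host closed connection"))
        .thenReturn("ok");

    assertEquals("ok", runWithOneRetry(call));
    verify(call, times(2)).run();        // failed once, retried once
  }

  /** Hypothetical wrapper applying a single failover retry. */
  private static String runWithOneRetry(KmsCall call) throws Exception {
    try {
      return call.run();
    } catch (SSLHandshakeException e) {
      return call.run();
    }
  }
}
{code}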

> Retry KMS calls when SSLHandshakeException occurs
> -
>
> Key: HADOOP-15609
> URL: https://issues.apache.org/jira/browse/HADOOP-15609
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HADOOP-15609.001.patch, HADOOP-15609.002.patch
>
>
> KMS calls should be retried when a javax.net.ssl.SSLHandshakeException occurs 
> and the FailoverOnNetworkExceptionRetry policy is used.
> For example, in the following stack trace we can see that the KMS provider's 
> connection is lost, an SSLHandshakeException is thrown, and the operation is 
> not retried:
> {code}
> W0711 18:19:50.213472  1508 LoadBalancingKMSClientProvider.java:132] KMS 
> provider at [https://example.com:16000/kms/v1/] threw an IOException:
> Java exception follows:
> javax.net.ssl.SSLHandshakeException: Remote host closed connection during 
> handshake
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1002)
> at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
> at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
> at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1316)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1291)
> at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:250)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:512)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:502)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:791)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:288)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:124)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
> at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
> at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:946)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:311)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:323)
> Caused by: java.io.EOFException: SSL peer shut down incorrectly
> at sun.security.ssl.InputRecord.read(InputRecord.java:505)
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
> ... 22 more
> W0711 18:19:50.239328  1508 LoadBalancingKMSClientProvider.java:149] Aborting 
> since the Request has failed with all KMS providers(depending on 
> hadoop.security.kms.client.failover.max.retries=1 setting and numProviders=1) 
> in the group OR the exception is not recoverable
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15609) Retry KMS calls when SSLHandshakeException occurs

2018-07-18 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HADOOP-15609:
--
Attachment: HADOOP-15609.002.patch

> Retry KMS calls when SSLHandshakeException occurs
> -
>
> Key: HADOOP-15609
> URL: https://issues.apache.org/jira/browse/HADOOP-15609
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HADOOP-15609.001.patch, HADOOP-15609.002.patch
>
>
> KMS calls should be retried when a javax.net.ssl.SSLHandshakeException occurs 
> and the FailoverOnNetworkExceptionRetry policy is used.
> For example, in the following stack trace we can see that the KMS provider's 
> connection is lost, an SSLHandshakeException is thrown, and the operation is 
> not retried:
> {code}
> W0711 18:19:50.213472  1508 LoadBalancingKMSClientProvider.java:132] KMS 
> provider at [https://example.com:16000/kms/v1/] threw an IOException:
> Java exception follows:
> javax.net.ssl.SSLHandshakeException: Remote host closed connection during 
> handshake
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1002)
> at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
> at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
> at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1316)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1291)
> at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:250)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:512)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:502)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:791)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:288)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:124)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
> at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
> at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:946)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:311)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:323)
> Caused by: java.io.EOFException: SSL peer shut down incorrectly
> at sun.security.ssl.InputRecord.read(InputRecord.java:505)
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
> ... 22 more
> W0711 18:19:50.239328  1508 LoadBalancingKMSClientProvider.java:149] Aborting 
> since the Request has failed with all KMS providers(depending on 
> hadoop.security.kms.client.failover.max.retries=1 setting and numProviders=1) 
> in the group OR the exception is not recoverable
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15609) Retry KMS calls when SSLHandshakeException occurs

2018-07-17 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HADOOP-15609:
--
Status: Patch Available  (was: Open)

> Retry KMS calls when SSLHandshakeException occurs
> -
>
> Key: HADOOP-15609
> URL: https://issues.apache.org/jira/browse/HADOOP-15609
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HADOOP-15609.001.patch
>
>
> KMS calls should be retried when a javax.net.ssl.SSLHandshakeException occurs 
> and the FailoverOnNetworkExceptionRetry policy is used.
> For example, in the following stack trace we can see that the KMS provider's 
> connection is lost, an SSLHandshakeException is thrown, and the operation is 
> not retried:
> {code}
> W0711 18:19:50.213472  1508 LoadBalancingKMSClientProvider.java:132] KMS 
> provider at [https://example.com:16000/kms/v1/] threw an IOException:
> Java exception follows:
> javax.net.ssl.SSLHandshakeException: Remote host closed connection during 
> handshake
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1002)
> at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
> at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
> at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1316)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1291)
> at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:250)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:512)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:502)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:791)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:288)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:124)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
> at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
> at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:946)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:311)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:323)
> Caused by: java.io.EOFException: SSL peer shut down incorrectly
> at sun.security.ssl.InputRecord.read(InputRecord.java:505)
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
> ... 22 more
> W0711 18:19:50.239328  1508 LoadBalancingKMSClientProvider.java:149] Aborting 
> since the Request has failed with all KMS providers(depending on 
> hadoop.security.kms.client.failover.max.retries=1 setting and numProviders=1) 
> in the group OR the exception is not recoverable
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15609) Retry KMS calls when SSLHandshakeException occurs

2018-07-17 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16546581#comment-16546581
 ] 

Kitti Nanasi commented on HADOOP-15609:
---

I uploaded a patch in which I modified FailoverOnNetworkExceptionRetry to retry 
on SSLHandshakeException, because I think this could be a more general solution.
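
To illustrate the idea, a minimal sketch under assumed names (not the actual 
patch; the real FailoverOnNetworkExceptionRetry in org.apache.hadoop.io.retry 
returns RetryAction objects and handles more cases than shown here):

{code:java}
import java.net.ConnectException;
import java.net.NoRouteToHostException;
import java.net.UnknownHostException;
import javax.net.ssl.SSLHandshakeException;

// Illustrative stand-in for a shouldRetry() decision in a failover policy.
final class FailoverDecisionSketch {
  enum Action { FAIL, FAILOVER_AND_RETRY }

  static Action shouldRetry(Exception e, int failovers, int maxFailovers) {
    if (failovers >= maxFailovers) {
      return Action.FAIL;                        // retry budget exhausted
    }
    if (e instanceof ConnectException
        || e instanceof NoRouteToHostException
        || e instanceof UnknownHostException
        || e instanceof SSLHandshakeException) { // the proposed addition
      return Action.FAILOVER_AND_RETRY;
    }
    return Action.FAIL;
  }
}
{code}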

> Retry KMS calls when SSLHandshakeException occurs
> -
>
> Key: HADOOP-15609
> URL: https://issues.apache.org/jira/browse/HADOOP-15609
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HADOOP-15609.001.patch
>
>
> KMS calls should be retried when a javax.net.ssl.SSLHandshakeException occurs 
> and the FailoverOnNetworkExceptionRetry policy is used.
> For example, in the following stack trace we can see that the KMS provider's 
> connection is lost, an SSLHandshakeException is thrown, and the operation is 
> not retried:
> {code}
> W0711 18:19:50.213472  1508 LoadBalancingKMSClientProvider.java:132] KMS 
> provider at [https://example.com:16000/kms/v1/] threw an IOException:
> Java exception follows:
> javax.net.ssl.SSLHandshakeException: Remote host closed connection during 
> handshake
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1002)
> at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
> at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
> at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1316)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1291)
> at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:250)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:512)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:502)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:791)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:288)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:124)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
> at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
> at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:946)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:311)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:323)
> Caused by: java.io.EOFException: SSL peer shut down incorrectly
> at sun.security.ssl.InputRecord.read(InputRecord.java:505)
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
> ... 22 more
> W0711 18:19:50.239328  1508 LoadBalancingKMSClientProvider.java:149] Aborting 
> since the Request has failed with all KMS providers(depending on 
> hadoop.security.kms.client.failover.max.retries=1 setting and numProviders=1) 
> in the group OR the exception is not recoverable
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15609) Retry KMS calls when SSLHandshakeException occurs

2018-07-17 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HADOOP-15609:
--
Attachment: HADOOP-15609.001.patch

> Retry KMS calls when SSLHandshakeException occurs
> -
>
> Key: HADOOP-15609
> URL: https://issues.apache.org/jira/browse/HADOOP-15609
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
> Attachments: HADOOP-15609.001.patch
>
>
> KMS calls should be retried when a javax.net.ssl.SSLHandshakeException occurs 
> and the FailoverOnNetworkExceptionRetry policy is used.
> For example, in the following stack trace we can see that the KMS provider's 
> connection is lost, an SSLHandshakeException is thrown, and the operation is 
> not retried:
> {code}
> W0711 18:19:50.213472  1508 LoadBalancingKMSClientProvider.java:132] KMS 
> provider at [https://example.com:16000/kms/v1/] threw an IOException:
> Java exception follows:
> javax.net.ssl.SSLHandshakeException: Remote host closed connection during 
> handshake
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1002)
> at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
> at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
> at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1316)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1291)
> at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:250)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:512)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:502)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:791)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:288)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:124)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
> at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
> at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:946)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:311)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:323)
> Caused by: java.io.EOFException: SSL peer shut down incorrectly
> at sun.security.ssl.InputRecord.read(InputRecord.java:505)
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
> ... 22 more
> W0711 18:19:50.239328  1508 LoadBalancingKMSClientProvider.java:149] Aborting 
> since the Request has failed with all KMS providers(depending on 
> hadoop.security.kms.client.failover.max.retries=1 setting and numProviders=1) 
> in the group OR the exception is not recoverable
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15596) Stack trace should not be printed out when running hadoop key commands

2018-07-17 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16546335#comment-16546335
 ] 

Kitti Nanasi commented on HADOOP-15596:
---

Thanks for the comments [~xiaochen]! I fixed them in patch v002.

One more thing: the exception trace is still printed in a WARN log in the 
LoadBalancingKMSClientProvider class. Would it make sense to print the full 
stack trace only in a DEBUG log there, and log just the exception message at 
WARN?
{code:java}
LOG.warn("KMS provider at [{}] threw an IOException: ",
provider.getKMSUrl(), ioe);
{code}
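
A minimal sketch of that split, assuming the SLF4J-style logger already used 
in that class (the message wording is illustrative):

{code:java}
// Operator-facing WARN stays terse: exception message only, no stack trace.
LOG.warn("KMS provider at [{}] threw an IOException: {}",
    provider.getKMSUrl(), ioe.getMessage());
// The full stack trace goes to DEBUG, where it will not flood the logs;
// passing the throwable as the last argument makes SLF4J print the trace.
LOG.debug("Full exception from KMS provider at [{}]:",
    provider.getKMSUrl(), ioe);
{code}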

> Stack trace should not be printed out when running hadoop key commands
> --
>
> Key: HADOOP-15596
> URL: https://issues.apache.org/jira/browse/HADOOP-15596
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Minor
> Attachments: HADOOP-15596.001.patch, HADOOP-15596.002.patch
>
>
> Stack trace is printed out if any exception occurs while executing hadoop key 
> commands. The whole stack trace should not be printed out.
> For example, when the KMS is down, we get this error message for the hadoop 
> key list command:
> {code:java}
>  -bash-4.1$ hadoop key list
>  Cannot list keys for KeyProvider: 
> KMSClientProvider[http://example.com:16000/kms/v1/]: Connection 
> refusedjava.net.ConnectException: Connection refused
>  at java.net.PlainSocketImpl.socketConnect(Native Method)
>  at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
>  at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
>  at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
>  at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
>  at java.net.Socket.connect(Socket.java:579)
>  at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
>  at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
>  at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
>  at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
>  at sun.net.www.http.HttpClient.New(HttpClient.java:308)
>  at sun.net.www.http.HttpClient.New(HttpClient.java:326)
>  at 
> sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996)
>  at 
> sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932)
>  at 
> sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850)
>  at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186)
>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:125)
>  at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:312)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:397)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:392)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:415)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:392)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.getKeys(KMSClientProvider.java:479)
>  at 
> org.apache.hadoop.crypto.key.KeyShell$ListCommand.execute(KeyShell.java:286)
>  at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:79)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>  at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:513)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15596) Stack trace should not be printed out when running hadoop key commands

2018-07-17 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HADOOP-15596:
--
Attachment: HADOOP-15596.002.patch

> Stack trace should not be printed out when running hadoop key commands
> --
>
> Key: HADOOP-15596
> URL: https://issues.apache.org/jira/browse/HADOOP-15596
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Minor
> Attachments: HADOOP-15596.001.patch, HADOOP-15596.002.patch
>
>
> Stack trace is printed out if any exception occurs while executing hadoop key 
> commands. The whole stack trace should not be printed out.
> For example, when the KMS is down, we get this error message for the hadoop 
> key list command:
> {code:java}
>  -bash-4.1$ hadoop key list
>  Cannot list keys for KeyProvider: 
> KMSClientProvider[http://example.com:16000/kms/v1/]: Connection 
> refusedjava.net.ConnectException: Connection refused
>  at java.net.PlainSocketImpl.socketConnect(Native Method)
>  at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
>  at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
>  at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
>  at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
>  at java.net.Socket.connect(Socket.java:579)
>  at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
>  at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
>  at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
>  at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
>  at sun.net.www.http.HttpClient.New(HttpClient.java:308)
>  at sun.net.www.http.HttpClient.New(HttpClient.java:326)
>  at 
> sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996)
>  at 
> sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932)
>  at 
> sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850)
>  at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186)
>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:125)
>  at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:312)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:397)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:392)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:415)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:392)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.getKeys(KMSClientProvider.java:479)
>  at 
> org.apache.hadoop.crypto.key.KeyShell$ListCommand.execute(KeyShell.java:286)
>  at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:79)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>  at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:513)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15609) Retry KMS calls when SSLHandshakeException occurs

2018-07-16 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16545448#comment-16545448
 ] 

Kitti Nanasi commented on HADOOP-15609:
---

It was not introduced by HADOOP-14521, because that change uses 
FailoverOnNetworkExceptionRetry, which does not retry on 
SSLHandshakeException.

I think the retry is needed here: although an SSLHandshakeException can be 
caused by any kind of SSL-related problem, it can also be thrown when the 
connection to the KMS provider is lost, which is the same kind of network 
error as a ConnectException, for which the retrying was originally introduced. 
What do you think, [~jojochuang]?
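
To make that distinction concrete, a hypothetical classifier sketch (the names 
are illustrative, not from the codebase):

{code:java}
import java.io.EOFException;
import java.io.IOException;
import java.net.ConnectException;
import javax.net.ssl.SSLHandshakeException;

// Hypothetical helper: a handshake aborted by connection loss is treated
// like a ConnectException, while a genuine SSL configuration error is left
// non-retriable.
final class HandshakeFailureSketch {
  static boolean looksLikeNetworkError(IOException e) {
    if (e instanceof ConnectException) {
      return true;                               // classic network failure
    }
    // "SSL peer shut down incorrectly" surfaces as an SSLHandshakeException
    // caused by an EOFException, as in the stack trace quoted below.
    return e instanceof SSLHandshakeException
        && e.getCause() instanceof EOFException;
  }
}
{code}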

> Retry KMS calls when SSLHandshakeException occurs
> -
>
> Key: HADOOP-15609
> URL: https://issues.apache.org/jira/browse/HADOOP-15609
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
>
> KMS calls should be retried when a javax.net.ssl.SSLHandshakeException occurs 
> and the FailoverOnNetworkExceptionRetry policy is used.
> For example, in the following stack trace we can see that the KMS provider's 
> connection is lost, an SSLHandshakeException is thrown, and the operation is 
> not retried:
> {code}
> W0711 18:19:50.213472  1508 LoadBalancingKMSClientProvider.java:132] KMS 
> provider at [https://example.com:16000/kms/v1/] threw an IOException:
> Java exception follows:
> javax.net.ssl.SSLHandshakeException: Remote host closed connection during 
> handshake
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1002)
> at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
> at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
> at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1316)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1291)
> at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:250)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:512)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:502)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:791)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:288)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:124)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
> at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
> at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:946)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:311)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:323)
> Caused by: java.io.EOFException: SSL peer shut down incorrectly
> at sun.security.ssl.InputRecord.read(InputRecord.java:505)
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
> ... 22 more
> W0711 18:19:50.239328  1508 LoadBalancingKMSClientProvider.java:149] Aborting 
> since the Request has failed with all KMS providers(depending on 
> hadoop.security.kms.client.failover.max.retries=1 setting and numProviders=1) 
> in the group OR the exception is not recoverable
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[jira] [Updated] (HADOOP-15596) Stack trace should not be printed out when running hadoop key commands

2018-07-16 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HADOOP-15596:
--
Status: Patch Available  (was: Open)

> Stack trace should not be printed out when running hadoop key commands
> --
>
> Key: HADOOP-15596
> URL: https://issues.apache.org/jira/browse/HADOOP-15596
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Minor
> Attachments: HADOOP-15596.001.patch
>
>
> Stack trace is printed out if any exception occurs while executing hadoop key 
> commands. The whole stack trace should not be printed out.
> For example, when the KMS is down, we get this error message for the hadoop 
> key list command:
> {code:java}
>  -bash-4.1$ hadoop key list
>  Cannot list keys for KeyProvider: 
> KMSClientProvider[http://example.com:16000/kms/v1/]: Connection 
> refusedjava.net.ConnectException: Connection refused
>  at java.net.PlainSocketImpl.socketConnect(Native Method)
>  at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
>  at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
>  at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
>  at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
>  at java.net.Socket.connect(Socket.java:579)
>  at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
>  at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
>  at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
>  at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
>  at sun.net.www.http.HttpClient.New(HttpClient.java:308)
>  at sun.net.www.http.HttpClient.New(HttpClient.java:326)
>  at 
> sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996)
>  at 
> sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932)
>  at 
> sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850)
>  at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186)
>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:125)
>  at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:312)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:397)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:392)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:415)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:392)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.getKeys(KMSClientProvider.java:479)
>  at 
> org.apache.hadoop.crypto.key.KeyShell$ListCommand.execute(KeyShell.java:286)
>  at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:79)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>  at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:513)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15596) Stack trace should not be printed out when running hadoop key commands

2018-07-16 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HADOOP-15596:
--
Attachment: HADOOP-15596.001.patch

> Stack trace should not be printed out when running hadoop key commands
> --
>
> Key: HADOOP-15596
> URL: https://issues.apache.org/jira/browse/HADOOP-15596
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Minor
> Attachments: HADOOP-15596.001.patch
>
>
> Stack trace is printed out if any exception occurs while executing hadoop key 
> commands. The whole stack trace should not be printed out.
> For example, when the KMS is down, we get this error message for the hadoop 
> key list command:
> {code:java}
>  -bash-4.1$ hadoop key list
>  Cannot list keys for KeyProvider: 
> KMSClientProvider[http://example.com:16000/kms/v1/]: Connection 
> refusedjava.net.ConnectException: Connection refused
>  at java.net.PlainSocketImpl.socketConnect(Native Method)
>  at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
>  at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
>  at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
>  at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
>  at java.net.Socket.connect(Socket.java:579)
>  at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
>  at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
>  at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
>  at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
>  at sun.net.www.http.HttpClient.New(HttpClient.java:308)
>  at sun.net.www.http.HttpClient.New(HttpClient.java:326)
>  at 
> sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996)
>  at 
> sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932)
>  at 
> sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850)
>  at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186)
>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:125)
>  at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
>  at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:312)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:397)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:392)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:415)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:392)
>  at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.getKeys(KMSClientProvider.java:479)
>  at 
> org.apache.hadoop.crypto.key.KeyShell$ListCommand.execute(KeyShell.java:286)
>  at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:79)
>  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>  at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:513)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15609) Retry KMS calls when SSLHandshakeException occurs

2018-07-16 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HADOOP-15609:
--
Affects Version/s: 3.1.0
 Target Version/s: 3.2.0

> Retry KMS calls when SSLHandshakeException occurs
> -
>
> Key: HADOOP-15609
> URL: https://issues.apache.org/jira/browse/HADOOP-15609
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, kms
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
>
> KMS calls should be retried when a javax.net.ssl.SSLHandshakeException occurs 
> and the FailoverOnNetworkExceptionRetry policy is used.
> For example, in the following stack trace we can see that the KMS provider's 
> connection is lost, an SSLHandshakeException is thrown, and the operation is 
> not retried:
> {code}
> W0711 18:19:50.213472  1508 LoadBalancingKMSClientProvider.java:132] KMS 
> provider at [https://example.com:16000/kms/v1/] threw an IOException:
> Java exception follows:
> javax.net.ssl.SSLHandshakeException: Remote host closed connection during 
> handshake
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1002)
> at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
> at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
> at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
> at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1316)
> at 
> sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1291)
> at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:250)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:512)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:502)
> at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:791)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:288)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:124)
> at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:284)
> at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
> at 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
> at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:946)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:311)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:323)
> Caused by: java.io.EOFException: SSL peer shut down incorrectly
> at sun.security.ssl.InputRecord.read(InputRecord.java:505)
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
> ... 22 more
> W0711 18:19:50.239328  1508 LoadBalancingKMSClientProvider.java:149] Aborting 
> since the Request has failed with all KMS providers(depending on 
> hadoop.security.kms.client.failover.max.retries=1 setting and numProviders=1) 
> in the group OR the exception is not recoverable
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15609) Retry KMS calls when SSLHandshakeException occurs

2018-07-16 Thread Kitti Nanasi (JIRA)
Kitti Nanasi created HADOOP-15609:
-

 Summary: Retry KMS calls when SSLHandshakeException occurs
 Key: HADOOP-15609
 URL: https://issues.apache.org/jira/browse/HADOOP-15609
 Project: Hadoop Common
  Issue Type: Improvement
  Components: common, kms
Reporter: Kitti Nanasi
Assignee: Kitti Nanasi


KMS calls should be retried when a javax.net.ssl.SSLHandshakeException occurs 
and the FailoverOnNetworkExceptionRetry policy is used.

For example, in the following stack trace we can see that the KMS provider's 
connection is lost, an SSLHandshakeException is thrown, and the operation is not 
retried:
{code}
W0711 18:19:50.213472  1508 LoadBalancingKMSClientProvider.java:132] KMS 
provider at [https://example.com:16000/kms/v1/] threw an IOException:
Java exception follows:
javax.net.ssl.SSLHandshakeException: Remote host closed connection during 
handshake
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1002)
at 
sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
at 
sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
at 
sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
at 
sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
at 
sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
at 
sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1316)
at 
sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1291)
at 
sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:250)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:512)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:502)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:791)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:288)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:284)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:124)
at 
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:284)
at 
org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
at 
org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
at 
org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:946)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:311)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:323)
Caused by: java.io.EOFException: SSL peer shut down incorrectly
at sun.security.ssl.InputRecord.read(InputRecord.java:505)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
... 22 more
W0711 18:19:50.239328  1508 LoadBalancingKMSClientProvider.java:149] Aborting 
since the Request has failed with all KMS providers(depending on 
hadoop.security.kms.client.failover.max.retries=1 setting and numProviders=1) 
in the group OR the exception is not recoverable
{code}
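
The last log line shows the abort decision depends on the 
hadoop.security.kms.client.failover.max.retries setting. A minimal sketch of 
raising that budget programmatically (the value is illustrative; the same key 
can also be set in core-site.xml):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class KmsRetryConfigSketch {
  public static Configuration withMoreKmsRetries() {
    Configuration conf = new Configuration();
    // Allow up to 3 failover retries instead of 1, so a transient
    // handshake failure can be retried against another KMS instance.
    conf.setInt("hadoop.security.kms.client.failover.max.retries", 3);
    return conf;
  }
}
{code}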



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15596) Stack trace should not be printed out when running hadoop key commands

2018-07-10 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HADOOP-15596:
--
Description: 
A stack trace is printed out if any exception occurs while executing hadoop key 
commands; the whole stack trace should not be shown to the user.

For example, when the KMS is down, we get this error message for the hadoop key 
list command:
{code:java}
 -bash-4.1$ hadoop key list
 Cannot list keys for KeyProvider: KMSClientProvider[http://example.com:16000/kms/v1/]: Connection refusedjava.net.ConnectException: Connection refused
 at java.net.PlainSocketImpl.socketConnect(Native Method)
 at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
 at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
 at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
 at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
 at java.net.Socket.connect(Socket.java:579)
 at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
 at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
 at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
 at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
 at sun.net.www.http.HttpClient.New(HttpClient.java:308)
 at sun.net.www.http.HttpClient.New(HttpClient.java:326)
 at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996)
 at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932)
 at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850)
 at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186)
 at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:125)
 at org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
 at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:312)
 at org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:397)
 at org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:392)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
 at org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:392)
 at org.apache.hadoop.crypto.key.kms.KMSClientProvider.getKeys(KMSClientProvider.java:479)
 at org.apache.hadoop.crypto.key.KeyShell$ListCommand.execute(KeyShell.java:286)
 at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:79)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:513)
{code}
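
To make the proposal concrete, here is a minimal, hedged sketch of the kind of change being suggested: print only a one-line summary for the user and keep the full trace at DEBUG. QuietKeyShell, its logger, and the exit code are illustrative assumptions, not the actual patch:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.key.KeyShell;
import org.apache.hadoop.util.ToolRunner;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class QuietKeyShell {
  private static final Logger LOG = LoggerFactory.getLogger(QuietKeyShell.class);

  public static void main(String[] args) throws Exception {
    int res;
    try {
      // Delegate to the real KeyShell; only the error reporting changes.
      res = ToolRunner.run(new Configuration(), new KeyShell(), args);
    } catch (Exception e) {
      // One line for the user instead of the whole stack trace...
      System.err.println(e.toString());
      // ...with the full trace still available at DEBUG for troubleshooting.
      LOG.debug("hadoop key command failed", e);
      res = 1;
    }
    System.exit(res);
  }
}
{code}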


[jira] [Created] (HADOOP-15596) Stack trace should not be printed out when running hadoop key commands

2018-07-10 Thread Kitti Nanasi (JIRA)
Kitti Nanasi created HADOOP-15596:
-

 Summary: Stack trace should not be printed out when running hadoop 
key commands
 Key: HADOOP-15596
 URL: https://issues.apache.org/jira/browse/HADOOP-15596
 Project: Hadoop Common
  Issue Type: Improvement
  Components: common
Affects Versions: 3.1.0
Reporter: Kitti Nanasi
Assignee: Kitti Nanasi


A stack trace is printed out if any exception occurs while executing hadoop key 
commands; the whole stack trace should not be shown to the user.

For example, when the KMS is down, we get this error message for the hadoop key 
list command:

{code}
 -bash-4.1$ hadoop key list
 Cannot list keys for KeyProvider: 
KMSClientProvider[http://example.com:16000/kms/v1/]: Connection refusedjava.net.ConnectException: Connection refused
 at java.net.PlainSocketImpl.socketConnect(Native Method)
 at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
 at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
 at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
 at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
 at java.net.Socket.connect(Socket.java:579)
 at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
 at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
 at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
 at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
 at sun.net.www.http.HttpClient.New(HttpClient.java:308)
 at sun.net.www.http.HttpClient.New(HttpClient.java:326)
 at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996)
 at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932)
 at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850)
 at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186)
 at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:125)
 at org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
 at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:312)
 at org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:397)
 at org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:392)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
 at org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:392)
 at org.apache.hadoop.crypto.key.kms.KMSClientProvider.getKeys(KMSClientProvider.java:479)
 at org.apache.hadoop.crypto.key.KeyShell$ListCommand.execute(KeyShell.java:286)
 at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:79)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:513)
{code}






[jira] [Updated] (HADOOP-15581) Set default jetty log level to INFO in KMS

2018-07-04 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HADOOP-15581:
--
Status: Patch Available  (was: Open)

> Set default jetty log level to INFO in KMS
> --
>
> Key: HADOOP-15581
> URL: https://issues.apache.org/jira/browse/HADOOP-15581
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
>  Labels: kms
> Attachments: HADOOP-15581.001.patch
>
>
> While debugging KMS, jetty prints lots of things at DEBUG/TRACE level. These 
> usually aren't helpful unless someone is debugging the web server part, so we 
> should consider setting an explicit INFO log level for it, similar to 
> https://issues.apache.org/jira/browse/HADOOP-14515.






[jira] [Updated] (HADOOP-15581) Set default jetty log level to INFO in KMS

2018-07-04 Thread Kitti Nanasi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kitti Nanasi updated HADOOP-15581:
--
Attachment: HADOOP-15581.001.patch

> Set default jetty log level to INFO in KMS
> --
>
> Key: HADOOP-15581
> URL: https://issues.apache.org/jira/browse/HADOOP-15581
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Kitti Nanasi
>Assignee: Kitti Nanasi
>Priority: Major
>  Labels: kms
> Attachments: HADOOP-15581.001.patch
>
>
> While debugging KMS, jetty prints lots of things at DEBUG/TRACE level. These 
> usually aren't helpful unless someone is debugging the web server part, so we 
> should consider setting an explicit INFO log level for it, similar to 
> https://issues.apache.org/jira/browse/HADOOP-14515.






[jira] [Created] (HADOOP-15581) Set default jetty log level to INFO in KMS

2018-07-04 Thread Kitti Nanasi (JIRA)
Kitti Nanasi created HADOOP-15581:
-

 Summary: Set default jetty log level to INFO in KMS
 Key: HADOOP-15581
 URL: https://issues.apache.org/jira/browse/HADOOP-15581
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.1.0
Reporter: Kitti Nanasi
Assignee: Kitti Nanasi


While debugging KMS, jetty prints lots of things at DEBUG/TRACE level. These 
usually aren't helpful unless someone is debugging the web server part, so we 
should consider setting an explicit INFO log level for it, similar to 
https://issues.apache.org/jira/browse/HADOOP-14515.
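
A minimal, hedged sketch of doing this programmatically with the log4j 1.x API that Hadoop bundles; whether an eventual patch would set it in code or in the KMS log4j.properties is left open, and the org.eclipse.jetty logger prefix is the usual one for jetty 9:

{code:java}
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class JettyLogLevel {
  public static void main(String[] args) {
    // Cap jetty's own loggers at INFO so DEBUG/TRACE web-server noise does
    // not flood the KMS log when overall logging is turned up.
    Logger.getLogger("org.eclipse.jetty").setLevel(Level.INFO);
    System.out.println(Logger.getLogger("org.eclipse.jetty").getLevel());
  }
}
{code}

The equivalent log4j.properties entry would be log4j.logger.org.eclipse.jetty=INFO.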






[jira] [Commented] (HADOOP-15307) Improve NFS error handling: Unsupported verifier flavorAUTH_SYS

2018-06-06 Thread Kitti Nanasi (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16503055#comment-16503055
 ] 

Kitti Nanasi commented on HADOOP-15307:
---

Patch v003 looks good to me.

+1

> Improve NFS error handling: Unsupported verifier flavorAUTH_SYS
> ---
>
> Key: HADOOP-15307
> URL: https://issues.apache.org/jira/browse/HADOOP-15307
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
> Environment: CentOS 7.4, CDH5.13.1, Kerberized Hadoop cluster
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15307.001.patch, HADOOP-15307.002.patch, 
> HADOOP-15307.003.patch
>
>
> When the NFS gateway starts, if the portmapper request is denied by rpcbind 
> for any reason (in our case, /etc/hosts.allow did not have the localhost), 
> the NFS gateway fails with the following obscure exception:
> {noformat}
> 2018-03-05 12:49:31,976 INFO org.apache.hadoop.oncrpc.SimpleUdpServer: Started listening to UDP requests at port 4242 for Rpc program: mountd at localhost:4242 with workerCount 1
> 2018-03-05 12:49:31,988 INFO org.apache.hadoop.oncrpc.SimpleTcpServer: Started listening to TCP requests at port 4242 for Rpc program: mountd at localhost:4242 with workerCount 1
> 2018-03-05 12:49:31,993 TRACE org.apache.hadoop.oncrpc.RpcCall: Xid:692394656, messageType:RPC_CALL, rpcVersion:2, program:10, version:2, procedure:1, credential:(AuthFlavor:AUTH_NONE), verifier:(AuthFlavor:AUTH_NONE)
> 2018-03-05 12:49:31,998 FATAL org.apache.hadoop.mount.MountdBase: Failed to start the server. Cause:
> java.lang.UnsupportedOperationException: Unsupported verifier flavorAUTH_SYS
> at org.apache.hadoop.oncrpc.security.Verifier.readFlavorAndVerifier(Verifier.java:45)
> at org.apache.hadoop.oncrpc.RpcDeniedReply.read(RpcDeniedReply.java:50)
> at org.apache.hadoop.oncrpc.RpcReply.read(RpcReply.java:67)
> at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:71)
> at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:130)
> at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:101)
> at org.apache.hadoop.mount.MountdBase.start(MountdBase.java:83)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startServiceInternal(Nfs3.java:56)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:69)
> at org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter.start(PrivilegedNfsGatewayStarter.java:60)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
> 2018-03-05 12:49:32,007 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1{noformat}
> Reading the code comment for the Verifier class, I think this bug has existed 
> since its inception:
> {code:java}
> /**
>  * Base class for verifier. Currently our authentication only supports 3 types
>  * of auth flavors: {@link RpcAuthInfo.AuthFlavor#AUTH_NONE}, {@link RpcAuthInfo.AuthFlavor#AUTH_SYS},
>  * and {@link RpcAuthInfo.AuthFlavor#RPCSEC_GSS}. Thus for verifier we only need to handle
>  * AUTH_NONE and RPCSEC_GSS
>  */
> public abstract class Verifier extends RpcAuthInfo {{code}
> The verifier should handle AUTH_SYS too.
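
For reference, readFlavorAndVerifier (the top frame of that trace) has no AUTH_SYS branch today. A hedged sketch of the kind of branch being asked for, meant to live in org.apache.hadoop.oncrpc.security.Verifier; the choice to read an AUTH_SYS verifier like the empty AUTH_NONE one is an assumption, not the actual patch:

{code:java}
// Sketch only: shape mirrors the existing method, AUTH_SYS handling is assumed.
public static Verifier readFlavorAndVerifier(XDR xdr) {
  AuthFlavor flavor = AuthFlavor.fromValue(xdr.readInt());
  final Verifier verifier;
  if (flavor == AuthFlavor.AUTH_NONE || flavor == AuthFlavor.AUTH_SYS) {
    // Assumption: accept AUTH_SYS in a reply and read it like the empty
    // AUTH_NONE verifier instead of throwing UnsupportedOperationException.
    verifier = new VerifierNone();
  } else if (flavor == AuthFlavor.RPCSEC_GSS) {
    verifier = new VerifierGSS();
  } else {
    throw new UnsupportedOperationException("Unsupported verifier flavor" + flavor);
  }
  verifier.read(xdr);
  return verifier;
}
{code}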






[jira] [Commented] (HADOOP-15473) Configure serialFilter to avoid UnrecoverableKeyException caused by JDK-8189997

2018-05-25 Thread Kitti Nanasi (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16490589#comment-16490589
 ] 

Kitti Nanasi commented on HADOOP-15473:
---

+1 for patch v006, thanks [~gabor.bota] for working on this!

> Configure serialFilter to avoid UnrecoverableKeyException caused by 
> JDK-8189997
> ---
>
> Key: HADOOP-15473
> URL: https://issues.apache.org/jira/browse/HADOOP-15473
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.7.6, 3.0.2
> Environment: JDK 8u171
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: HADOOP-15473.004.patch, HADOOP-15473.005.patch, 
> HADOOP-15473.006.patch, HDFS-13494.001.patch, HDFS-13494.002.patch, 
> HDFS-13494.003.patch, org.apache.hadoop.crypto.key.TestKeyProviderFactory.txt
>
>
> There is a new feature in JDK 8u171 called Enhanced KeyStore Mechanisms 
> (http://www.oracle.com/technetwork/java/javase/8u171-relnotes-430.html#JDK-8189997).
> This is the cause of the following errors in TestKeyProviderFactory:
> {noformat}
> Caused by: java.security.UnrecoverableKeyException: Rejected by the jceks.key.serialFilter or jdk.serialFilter property
>   at com.sun.crypto.provider.KeyProtector.unseal(KeyProtector.java:352)
>   at com.sun.crypto.provider.JceKeyStore.engineGetKey(JceKeyStore.java:136)
>   at java.security.KeyStore.getKey(KeyStore.java:1023)
>   at org.apache.hadoop.crypto.key.JavaKeyStoreProvider.getMetadata(JavaKeyStoreProvider.java:410)
>   ... 28 more
> {noformat}
> This issue currently causes errors and failures in HBase tests (which use 
> HDFS) and could affect other products running on this new Java version.
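
A minimal, hedged sketch of the kind of workaround under discussion: relax the JCEKS key deserialization filter through the java.security Security API before the keystore is read. The filter value below is an illustrative whitelist, not necessarily what the patch ships:

{code:java}
import java.security.Security;

public class JceksSerialFilterWorkaround {
  // Illustrative whitelist of the classes a JCEKS keystore entry is expected
  // to contain; "!*" rejects everything else. The patch's exact list may differ.
  private static final String KEY_SERIAL_FILTER =
      "java.lang.Enum;"
      + "java.security.KeyRep;"
      + "java.security.KeyRep$Type;"
      + "javax.crypto.spec.SecretKeySpec;"
      + "!*";

  public static void main(String[] args) {
    // Respect a filter the user may already have configured.
    if (Security.getProperty("jceks.key.serialFilter") == null) {
      Security.setProperty("jceks.key.serialFilter", KEY_SERIAL_FILTER);
    }
    System.out.println(Security.getProperty("jceks.key.serialFilter"));
  }
}
{code}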






[jira] [Commented] (HADOOP-15307) Improve NFS error handling: Unsupported verifier flavorAUTH_SYS

2018-05-24 Thread Kitti Nanasi (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16489666#comment-16489666
 ] 

Kitti Nanasi commented on HADOOP-15307:
---

Thanks for working on this, [~gabor.bota]! It looks good to me.
+1 for patch v002

> Improve NFS error handling: Unsupported verifier flavorAUTH_SYS
> ---
>
> Key: HADOOP-15307
> URL: https://issues.apache.org/jira/browse/HADOOP-15307
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
> Environment: CentOS 7.4, CDH5.13.1, Kerberized Hadoop cluster
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Major
>  Labels: newbie
> Attachments: HADOOP-15307.001.patch, HADOOP-15307.002.patch
>
>
> When the NFS gateway starts, if the portmapper request is denied by rpcbind 
> for any reason (in our case, /etc/hosts.allow did not have the localhost), 
> the NFS gateway fails with the following obscure exception:
> {noformat}
> 2018-03-05 12:49:31,976 INFO org.apache.hadoop.oncrpc.SimpleUdpServer: Started listening to UDP requests at port 4242 for Rpc program: mountd at localhost:4242 with workerCount 1
> 2018-03-05 12:49:31,988 INFO org.apache.hadoop.oncrpc.SimpleTcpServer: Started listening to TCP requests at port 4242 for Rpc program: mountd at localhost:4242 with workerCount 1
> 2018-03-05 12:49:31,993 TRACE org.apache.hadoop.oncrpc.RpcCall: Xid:692394656, messageType:RPC_CALL, rpcVersion:2, program:10, version:2, procedure:1, credential:(AuthFlavor:AUTH_NONE), verifier:(AuthFlavor:AUTH_NONE)
> 2018-03-05 12:49:31,998 FATAL org.apache.hadoop.mount.MountdBase: Failed to start the server. Cause:
> java.lang.UnsupportedOperationException: Unsupported verifier flavorAUTH_SYS
> at org.apache.hadoop.oncrpc.security.Verifier.readFlavorAndVerifier(Verifier.java:45)
> at org.apache.hadoop.oncrpc.RpcDeniedReply.read(RpcDeniedReply.java:50)
> at org.apache.hadoop.oncrpc.RpcReply.read(RpcReply.java:67)
> at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:71)
> at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:130)
> at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:101)
> at org.apache.hadoop.mount.MountdBase.start(MountdBase.java:83)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startServiceInternal(Nfs3.java:56)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:69)
> at org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter.start(PrivilegedNfsGatewayStarter.java:60)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
> 2018-03-05 12:49:32,007 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1{noformat}
> Reading the code comment for the Verifier class, I think this bug has existed 
> since its inception:
> {code:java}
> /**
>  * Base class for verifier. Currently our authentication only supports 3 types
>  * of auth flavors: {@link RpcAuthInfo.AuthFlavor#AUTH_NONE}, {@link RpcAuthInfo.AuthFlavor#AUTH_SYS},
>  * and {@link RpcAuthInfo.AuthFlavor#RPCSEC_GSS}. Thus for verifier we only need to handle
>  * AUTH_NONE and RPCSEC_GSS
>  */
> public abstract class Verifier extends RpcAuthInfo {{code}
> The verifier should handle AUTH_SYS too.






[jira] [Commented] (HADOOP-15307) Improve NFS error handling: Unsupported verifier flavorAUTH_SYS

2018-05-23 Thread Kitti Nanasi (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16487123#comment-16487123
 ] 

Kitti Nanasi commented on HADOOP-15307:
---

I have some comments:
 * The writeFlavorAndVerifier function should handle the new verifier as well, 
so that it doesn't throw an UnsupportedOperationException; see the sketch after 
this list.
 * Update the javadoc in the Verifier class.
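
For illustration, a minimal, hedged sketch of that write-side branch. VerifierAuthSys is a hypothetical name for the AUTH_SYS verifier class; the method shape follows the existing Verifier.writeFlavorAndVerifier, but treat the details as assumptions rather than the actual patch:

{code:java}
// Sketch only: adds a branch so AUTH_SYS no longer falls through to the throw.
public static void writeFlavorAndVerifier(Verifier verifier, XDR xdr) {
  if (verifier instanceof VerifierNone) {
    xdr.writeInt(AuthFlavor.AUTH_NONE.getValue());
  } else if (verifier instanceof VerifierAuthSys) {  // hypothetical class
    xdr.writeInt(AuthFlavor.AUTH_SYS.getValue());
  } else if (verifier instanceof VerifierGSS) {
    xdr.writeInt(AuthFlavor.RPCSEC_GSS.getValue());
  } else {
    throw new UnsupportedOperationException("Cannot recognize the verifier");
  }
  verifier.write(xdr);
}
{code}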

> Improve NFS error handling: Unsupported verifier flavorAUTH_SYS
> ---
>
> Key: HADOOP-15307
> URL: https://issues.apache.org/jira/browse/HADOOP-15307
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
> Environment: CentOS 7.4, CDH5.13.1, Kerberized Hadoop cluster
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Major
>  Labels: newbie
> Attachments: HADOOP-15307.001.patch
>
>
> When the NFS gateway starts, if the portmapper request is denied by rpcbind 
> for any reason (in our case, /etc/hosts.allow did not have the localhost), 
> the NFS gateway fails with the following obscure exception:
> {noformat}
> 2018-03-05 12:49:31,976 INFO org.apache.hadoop.oncrpc.SimpleUdpServer: Started listening to UDP requests at port 4242 for Rpc program: mountd at localhost:4242 with workerCount 1
> 2018-03-05 12:49:31,988 INFO org.apache.hadoop.oncrpc.SimpleTcpServer: Started listening to TCP requests at port 4242 for Rpc program: mountd at localhost:4242 with workerCount 1
> 2018-03-05 12:49:31,993 TRACE org.apache.hadoop.oncrpc.RpcCall: Xid:692394656, messageType:RPC_CALL, rpcVersion:2, program:10, version:2, procedure:1, credential:(AuthFlavor:AUTH_NONE), verifier:(AuthFlavor:AUTH_NONE)
> 2018-03-05 12:49:31,998 FATAL org.apache.hadoop.mount.MountdBase: Failed to start the server. Cause:
> java.lang.UnsupportedOperationException: Unsupported verifier flavorAUTH_SYS
> at org.apache.hadoop.oncrpc.security.Verifier.readFlavorAndVerifier(Verifier.java:45)
> at org.apache.hadoop.oncrpc.RpcDeniedReply.read(RpcDeniedReply.java:50)
> at org.apache.hadoop.oncrpc.RpcReply.read(RpcReply.java:67)
> at org.apache.hadoop.oncrpc.SimpleUdpClient.run(SimpleUdpClient.java:71)
> at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:130)
> at org.apache.hadoop.oncrpc.RpcProgram.register(RpcProgram.java:101)
> at org.apache.hadoop.mount.MountdBase.start(MountdBase.java:83)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startServiceInternal(Nfs3.java:56)
> at org.apache.hadoop.hdfs.nfs.nfs3.Nfs3.startService(Nfs3.java:69)
> at org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter.start(PrivilegedNfsGatewayStarter.java:60)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
> 2018-03-05 12:49:32,007 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1{noformat}
> Reading the code comment for the Verifier class, I think this bug has existed 
> since its inception:
> {code:java}
> /**
>  * Base class for verifier. Currently our authentication only supports 3 types
>  * of auth flavors: {@link RpcAuthInfo.AuthFlavor#AUTH_NONE}, {@link RpcAuthInfo.AuthFlavor#AUTH_SYS},
>  * and {@link RpcAuthInfo.AuthFlavor#RPCSEC_GSS}. Thus for verifier we only need to handle
>  * AUTH_NONE and RPCSEC_GSS
>  */
> public abstract class Verifier extends RpcAuthInfo {{code}
> The verifier should handle AUTH_SYS too.


