[jira] [Created] (HADOOP-15698) KMS startup logs don't show

2018-08-27 Thread Kitti Nanasi (JIRA)
Kitti Nanasi created HADOOP-15698:
-

 Summary: KMS startup logs don't show
 Key: HADOOP-15698
 URL: https://issues.apache.org/jira/browse/HADOOP-15698
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms
Affects Versions: 3.1.0
Reporter: Kitti Nanasi
Assignee: Kitti Nanasi


During KMS startup, log4j logs don't show up, so important log messages are 
lost. This happens because log4j initialization only happens in 
KMSWebApp#contextInitialized, and anything logged before that point is dropped.

For example, the following log statement never shows up:

[https://github.com/apache/hadoop/blob/a55d6bba71c81c1c4e9d8cd11f55c78f10a548b0/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/ZKSignerSecretProvider.java#L197-L199]

Another example is that the KMS startup message never appears in the KMS logs.

Note that this works in the unit tests, because MiniKMS sets the log4j system 
property.
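As a rough illustration of what the MiniKMS note refers to, here is a minimal 
sketch (hypothetical class and file names, not the actual MiniKMS or KMSWebApp 
code) that points log4j at its configuration via the standard 
log4j.configuration system property before anything is logged, so messages 
emitted before KMSWebApp#contextInitialized would not be lost:

{code:java}
// Hypothetical sketch: configure log4j before the first logger is used, the
// way MiniKMS does via a system property. The property name is log4j's
// standard one; the file name below is only an example.
public class EarlyKmsLog4jInit {
  public static void main(String[] args) {
    // Must run before the first org.apache.log4j.Logger is created in the JVM.
    System.setProperty("log4j.configuration", "kms-log4j.properties");

    org.apache.log4j.Logger log =
        org.apache.log4j.Logger.getLogger(EarlyKmsLog4jInit.class);
    log.info("KMS starting up"); // visible because log4j is already configured
  }
}
{code}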






[jira] [Created] (HADOOP-15665) Checkstyle shows false positive report

2018-08-09 Thread Kitti Nanasi (JIRA)
Kitti Nanasi created HADOOP-15665:
-

 Summary: Checkstyle shows false positive report
 Key: HADOOP-15665
 URL: https://issues.apache.org/jira/browse/HADOOP-15665
 Project: Hadoop Common
  Issue Type: Bug
  Components: precommit, yetus
Affects Versions: 3.1.0
Reporter: Kitti Nanasi


If a patch contains checkstyle violations, for example a modified line that is 
longer than 80 characters, then running checkstyle through the test-patch 
script still reports success (it should fail and flag the long line).
{code}
dev-support/bin/test-patch --plugins="-checkstyle" test.patch
{code}
However, the error does show up (so the check itself works correctly) when 
running checkstyle with the IDEA checkstyle plugin.
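For reference, this is the kind of violation involved (a hypothetical snippet, 
not taken from the HDFS-13217 patch): a modified line longer than 80 
characters, which both the precommit checkstyle run and the IDEA plugin should 
report:

{code:java}
// Hypothetical example of the violation in question: the field declaration
// below is well over 80 characters, so the line-length checkstyle rule
// should flag it.
public class CheckstyleLineLengthExample {
  private static final String A_VERY_LONG_CONSTANT_NAME_THAT_PUSHES_THIS_LINE_WELL_PAST_EIGHTY_CHARACTERS = "value";
}
{code}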

 

I only tried it out for patches with overly long lines and wrong indentation, 
but I assume it can be a more general problem.

We realised this when reviewing HDFS-13217, where patch 004 has a "too long 
line" checkstyle error. In the first build for that patch, the checkstyle 
report showed the error, but when the build was run again with the same patch, 
the error disappeared. So the checkstyle check probably stopped working on 
trunk somewhere between April and July 2018.






[jira] [Created] (HADOOP-15655) KMS should retry upon IOException regardless

2018-08-07 Thread Kitti Nanasi (JIRA)
Kitti Nanasi created HADOOP-15655:
-

 Summary: KMS should retry upon IOException regardless
 Key: HADOOP-15655
 URL: https://issues.apache.org/jira/browse/HADOOP-15655
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms
Affects Versions: 3.1.0
Reporter: Kitti Nanasi
Assignee: Kitti Nanasi
 Fix For: 3.2.0


KMS doesn't retry upon SocketTimeoutException (the connection was established, 
but the SSL handshake timed out). It would be better if KMS retried upon any 
kind of IOException.
{noformat}
6:08:55.315 PM  WARN  KMSClientProvider  Failed to connect to example.com:16000
6:08:55.317 PM  WARN  LoadBalancingKMSClientProvider  KMS provider at [https://example.com:16000/kms/v1/] threw an IOException:
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
at sun.security.ssl.InputRecord.read(InputRecord.java:503)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
at sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:153)
at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:140)
at org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:348)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:333)
at org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:478)
at org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:473)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
at org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:472)
at org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:788)
at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:288)
at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:284)
at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:124)
at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:284)
at org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
at org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
at org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:946)
at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:311)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:323)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:949)
at org.apache.hadoop.hbase.util.FSUtils.getVersion(FSUtils.java:338)
at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:423)
at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:260)
at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:151)
at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:122)
at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(H
{noformat}
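A minimal sketch of the proposed behaviour follows (not the actual 
FailoverOnNetworkExceptionRetry or LoadBalancingKMSClientProvider code; the 
class and method names are made up for illustration): the retry decision 
treats any IOException from the KMS call, including the SocketTimeoutException 
above, as retriable until the retry budget is exhausted.

{code:java}
import java.io.IOException;
import java.net.SocketTimeoutException;

// Illustrative sketch only: the names here are hypothetical and do not match
// the real Hadoop retry-policy classes.
public class KmsRetryDecisionSketch {

  /** Decide whether a failed KMS call should be retried / failed over. */
  static boolean shouldRetry(Exception e, int attempted, int maxRetries) {
    if (attempted >= maxRetries) {
      return false;                      // retry budget exhausted
    }
    // Proposed behaviour: any IOException (not just ConnectException etc.)
    // triggers a failover to the next provider and a retry.
    return e instanceof IOException;
  }

  public static void main(String[] args) {
    Exception handshakeTimeout = new SocketTimeoutException("Read timed out");
    // Prints "true": a handshake read timeout would be retried under the proposal.
    System.out.println(shouldRetry(handshakeTimeout, 0, 1));
  }
}
{code}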

[jira] [Created] (HADOOP-15609) Retry KMS calls when SSLHandshakeException occurs

2018-07-16 Thread Kitti Nanasi (JIRA)
Kitti Nanasi created HADOOP-15609:
-

 Summary: Retry KMS calls when SSLHandshakeException occurs
 Key: HADOOP-15609
 URL: https://issues.apache.org/jira/browse/HADOOP-15609
 Project: Hadoop Common
  Issue Type: Improvement
  Components: common, kms
Reporter: Kitti Nanasi
Assignee: Kitti Nanasi


KMS calls should be retried when javax.net.ssl.SSLHandshakeException occurs 
and the FailoverOnNetworkExceptionRetry policy is used.

For example, in the following stack trace we can see that the KMS provider's 
connection is lost, an SSLHandshakeException is thrown, and the operation is 
not retried:
{code}
W0711 18:19:50.213472  1508 LoadBalancingKMSClientProvider.java:132] KMS provider at [https://example.com:16000/kms/v1/] threw an IOException:
Java exception follows:
javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1002)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
at sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
at sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1316)
at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1291)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:250)
at org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:512)
at org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:502)
at org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:791)
at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:288)
at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:284)
at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:124)
at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:284)
at org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
at org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
at org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:946)
at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:311)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:323)
Caused by: java.io.EOFException: SSL peer shut down incorrectly
at sun.security.ssl.InputRecord.read(InputRecord.java:505)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:983)
... 22 more
W0711 18:19:50.239328  1508 LoadBalancingKMSClientProvider.java:149] Aborting since the Request has failed with all KMS providers(depending on hadoop.security.kms.client.failover.max.retries=1 setting and numProviders=1) in the group OR the exception is not recoverable
{code}
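For context, javax.net.ssl.SSLHandshakeException is itself an IOException 
subclass (SSLHandshakeException extends SSLException, which extends 
IOException), as the small standalone check below (not Hadoop code) 
demonstrates, so an IOException-based failover policy could cover handshake 
failures as well:

{code:java}
import java.io.IOException;
import javax.net.ssl.SSLException;
import javax.net.ssl.SSLHandshakeException;

// Standalone check (not Hadoop code): SSLHandshakeException is an SSLException,
// and SSLException is an IOException.
public class SslHandshakeExceptionHierarchy {
  public static void main(String[] args) {
    Exception e =
        new SSLHandshakeException("Remote host closed connection during handshake");
    System.out.println(e instanceof SSLException); // true
    System.out.println(e instanceof IOException);  // true
  }
}
{code}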






[jira] [Created] (HADOOP-15596) Stack trace should not be printed out when running hadoop key commands

2018-07-10 Thread Kitti Nanasi (JIRA)
Kitti Nanasi created HADOOP-15596:
-

 Summary: Stack trace should not be printed out when running hadoop 
key commands
 Key: HADOOP-15596
 URL: https://issues.apache.org/jira/browse/HADOOP-15596
 Project: Hadoop Common
  Issue Type: Improvement
  Components: common
Affects Versions: 3.1.0
Reporter: Kitti Nanasi
Assignee: Kitti Nanasi


A stack trace is printed out if any exception occurs while executing hadoop 
key commands. The whole stack trace should not be printed out to the user.

For example, when the KMS is down, we get this error message for the hadoop 
key list command:

{code}
-bash-4.1$ hadoop key list
Cannot list keys for KeyProvider: KMSClientProvider[http://example.com:16000/kms/v1/]: Connection refused
java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
at sun.net.www.http.HttpClient.New(HttpClient.java:308)
at sun.net.www.http.HttpClient.New(HttpClient.java:326)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850)
at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:186)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:125)
at org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:312)
at org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:397)
at org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:392)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:392)
at org.apache.hadoop.crypto.key.kms.KMSClientProvider.getKeys(KMSClientProvider.java:479)
at org.apache.hadoop.crypto.key.KeyShell$ListCommand.execute(KeyShell.java:286)
at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:79)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:513)
{code}
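A minimal sketch of the requested behaviour (hypothetical code, not the actual 
KeyShell implementation): on failure the command prints a single-line error 
and only dumps the full stack trace when an explicit debug switch is given.

{code:java}
import java.io.IOException;

// Hypothetical sketch, not the real KeyShell: print a concise error by default
// and show the stack trace only when explicitly asked for.
public class KeyCommandErrorSketch {

  static int runListCommand(boolean debug) {
    try {
      // Stands in for the failing KMS call in the real command.
      throw new IOException("Connection refused");
    } catch (IOException e) {
      System.err.println("Cannot list keys for KeyProvider: " + e.getMessage());
      if (debug) {
        e.printStackTrace();   // full trace only in debug mode
      }
      return 1;
    }
  }

  public static void main(String[] args) {
    boolean debug = args.length > 0 && "-debug".equals(args[0]);
    System.exit(runListCommand(debug));
  }
}
{code}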






[jira] [Created] (HADOOP-15581) Set default jetty log level to INFO in KMS

2018-07-04 Thread Kitti Nanasi (JIRA)
Kitti Nanasi created HADOOP-15581:
-

 Summary: Set default jetty log level to INFO in KMS
 Key: HADOOP-15581
 URL: https://issues.apache.org/jira/browse/HADOOP-15581
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.1.0
Reporter: Kitti Nanasi
Assignee: Kitti Nanasi


When debugging KMS, jetty prints a lot of messages at DEBUG/TRACE level. These 
usually aren't helpful unless someone is debugging the web server itself, so 
we should consider setting an explicit INFO level for the jetty loggers, 
similar to https://issues.apache.org/jira/browse/HADOOP-14515.
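A minimal example of the kind of override being proposed, assuming the log4j 
1.x properties file used by KMS is the place to set it (the logger name covers 
Jetty's org.eclipse.jetty packages; the exact file and any additional loggers 
would need to be confirmed):

{code}
# Keep jetty at INFO even when the KMS root/debug level is lowered
log4j.logger.org.eclipse.jetty=INFO
{code}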


