[jira] [Updated] (HADOOP-17116) Skip Retry INFO logging on first failover from a proxy
[ https://issues.apache.org/jira/browse/HADOOP-17116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hanisha Koneru updated HADOOP-17116:
    Fix Version/s: 3.4.0
       Resolution: Fixed
           Status: Resolved  (was: Patch Available)

> Skip Retry INFO logging on first failover from a proxy
> ------------------------------------------------------
>
>                 Key: HADOOP-17116
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17116
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: ha
>            Reporter: Hanisha Koneru
>            Assignee: Hanisha Koneru
>            Priority: Major
>             Fix For: 3.4.0
>
>         Attachments: HADOOP-17116.001.patch, HADOOP-17116.002.patch, HADOOP-17116.003.patch
>
> RetryInvocationHandler logs an INFO-level message on every failover except the first. This was reasonable when the FailoverProxyProvider held only two proxies. But if there are more than two proxies (as is possible with three or more NameNodes in HA), more than one failover may be needed to find the currently active proxy.
> To avoid creating noise in client logs and console output, RetryInvocationHandler should skip logging once for each proxy.

--
This message was sent by Atlassian Jira (v8.3.4#803005)
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
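A minimal sketch of the change this issue describes: suppress the INFO message the first time each proxy fails, and log only from the second failover of a given proxy onward. The class and method names here are illustrative, not the actual RetryInvocationHandler API.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: track which proxies have already failed once,
// keyed by their ProxyInfo string, and only log repeat failovers.
public class FailoverLogPolicy {
    private final Set<String> failedAtLeastOnce = new HashSet<>();

    /** Returns true if this failover should be logged at INFO level. */
    public boolean shouldLogAtInfo(String proxyInfo) {
        // add() returns true on the first failure of this proxy,
        // which is exactly the case we want to keep quiet.
        return !failedAtLeastOnce.add(proxyInfo);
    }
}
```

With three NameNodes, the first failover away from each of nn1, nn2, and nn3 stays silent; only subsequent failovers from an already-seen proxy produce INFO messages.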
[jira] [Commented] (HADOOP-17116) Skip Retry INFO logging on first failover from a proxy
[ https://issues.apache.org/jira/browse/HADOOP-17116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17156934#comment-17156934 ]

Hanisha Koneru commented on HADOOP-17116:

Thanks [~arp] and [~ayushtkn] for the reviews. I have opened HDFS-15467 to fix this for {{ObserverReadProxyProvider}}. Will commit patch v03 shortly.
[jira] [Comment Edited] (HADOOP-17116) Skip Retry INFO logging on first failover from a proxy
[ https://issues.apache.org/jira/browse/HADOOP-17116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17154904#comment-17154904 ]

Hanisha Koneru edited comment on HADOOP-17116 at 7/9/20, 8:19 PM:

{{ObserverReadProxyProvider}} would need to be handled separately. Let's open a new Jira for that.

was (Author: hanishakoneru):
{{ObserverReadProxyProvider}} would need to be handled separately. Let's open a new Jira for that?
[jira] [Commented] (HADOOP-17116) Skip Retry INFO logging on first failover from a proxy
[ https://issues.apache.org/jira/browse/HADOOP-17116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17154904#comment-17154904 ]

Hanisha Koneru commented on HADOOP-17116:

{{ObserverReadProxyProvider}} would need to be handled separately. Let's open a new Jira for that?
[jira] [Commented] (HADOOP-17116) Skip Retry INFO logging on first failover from a proxy
[ https://issues.apache.org/jira/browse/HADOOP-17116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17154757#comment-17154757 ]

Hanisha Koneru commented on HADOOP-17116:

[~ayushtkn], thanks for the UT. It helped me with the debugging.

{{ObserverReadProxyProvider}} uses a {{combinedProxy}} object which assigns {{combinedInfo}} as the ProxyInfo:

{noformat}
// ObserverReadProxyProvider, lines 197-207:
for (int i = 0; i < nameNodeProxies.size(); i++) {
  if (i > 0) {
    combinedInfo.append(",");
  }
  combinedInfo.append(nameNodeProxies.get(i).proxyInfo);
}
combinedInfo.append(']');
T wrappedProxy = (T) Proxy.newProxyInstance(
    ObserverReadInvocationHandler.class.getClassLoader(),
    new Class[] {xface},
    new ObserverReadInvocationHandler());
combinedProxy = new ProxyInfo<>(wrappedProxy, combinedInfo.toString());
{noformat}

RetryInvocationHandler relies on the ProxyInfo to differentiate between proxies when checking whether a failover from a given proxy has happened before:

{code:java}
failedAtLeastOnce.add(proxyDescriptor.getProxyInfo().toString());
{code}

Since the combined proxy exposes only one proxy (that is, it assigns the same ProxyInfo to all underlying proxies), we see the multiple failover messages.
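A small self-contained demonstration (not Hadoop code) of the effect described in the comment above: because RetryInvocationHandler keys its {{failedAtLeastOnce}} set on the ProxyInfo string, three distinct proxy strings get three suppressed first failovers, while a single shared combined string suppresses only the very first one.

```java
import java.util.HashSet;
import java.util.Set;

// Count how many failovers would have their first-failover log suppressed,
// given the ProxyInfo string observed at each failover.
public class CombinedProxyDemo {
    public static int suppressed(String[] proxyInfoPerFailover) {
        Set<String> failedAtLeastOnce = new HashSet<>();
        int n = 0;
        for (String info : proxyInfoPerFailover) {
            if (failedAtLeastOnce.add(info)) {
                n++; // first failover seen for this ProxyInfo string
            }
        }
        return n;
    }
}
```

Distinct ProxyInfo strings suppress one message per proxy; a combined ProxyInfo shared by all proxies suppresses only one message in total, so every later failover is logged.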
[jira] [Updated] (HADOOP-17116) Skip Retry INFO logging on first failover from a proxy
[ https://issues.apache.org/jira/browse/HADOOP-17116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hanisha Koneru updated HADOOP-17116:
    Attachment: HADOOP-17116.003.patch
[jira] [Updated] (HADOOP-17116) Skip Retry INFO logging on first failover from a proxy
[ https://issues.apache.org/jira/browse/HADOOP-17116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hanisha Koneru updated HADOOP-17116:
    Attachment: HADOOP-17116.002.patch
[jira] [Updated] (HADOOP-17116) Skip Retry INFO logging on first failover from a proxy
[ https://issues.apache.org/jira/browse/HADOOP-17116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hanisha Koneru updated HADOOP-17116:
    Attachment: HADOOP-17116.001.patch
[jira] [Created] (HADOOP-17116) Skip Retry INFO logging on first failover from a proxy
Hanisha Koneru created HADOOP-17116:
---------------------------------------

         Summary: Skip Retry INFO logging on first failover from a proxy
             Key: HADOOP-17116
             URL: https://issues.apache.org/jira/browse/HADOOP-17116
         Project: Hadoop Common
      Issue Type: Task
        Reporter: Hanisha Koneru
        Assignee: Hanisha Koneru
[jira] [Created] (HADOOP-16991) Remove RetryInvocation INFO logging from ozone CLI output
Hanisha Koneru created HADOOP-16991:
---------------------------------------

         Summary: Remove RetryInvocation INFO logging from ozone CLI output
             Key: HADOOP-16991
             URL: https://issues.apache.org/jira/browse/HADOOP-16991
         Project: Hadoop Common
      Issue Type: Improvement
        Reporter: Nilotpal Nandi
        Assignee: Hanisha Koneru

In the OM HA failover proxy provider, RetryInvocationHandler logs an error message when the client tries contacting a non-leader OM. This error message can be suppressed, as the failover will proceed to the leader OM.

{code:java}
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ozone.om.exceptions.OMNotLeaderException): OM:om2 is not the leader. Suggested leader is OM:om3.
	at org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.createNotLeaderException(OzoneManagerProtocolServerSideTranslatorPB.java:186)
	at org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitReadRequestToOM(OzoneManagerProtocolServerSideTranslatorPB.java:174)
	at org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.processRequest(OzoneManagerProtocolServerSideTranslatorPB.java:110)
	at org.apache.hadoop.hdds.server.OzoneProtocolMessageDispatcher.processRequest(OzoneProtocolMessageDispatcher.java:72)
	at org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:98)
	at org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
, while invoking $Proxy16.submitRequest over nodeId=om2,nodeAddress=om2:9862 after 1 failover attempts. Trying to failover immediately.
{code}
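One way the suppression described above could be implemented (a sketch only; the class and method names are hypothetical, not the actual Ozone client API) is to classify a "not the leader" rejection as an expected failover trigger rather than an error worth surfacing in CLI output:

```java
// Hypothetical filter: OMNotLeaderException merely redirects the client to
// the leader OM, so it need not appear as an error in CLI output.
public class RetryLogFilter {
    public static boolean shouldSuppress(String exceptionClassName) {
        return exceptionClassName.endsWith("OMNotLeaderException");
    }
}
```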
[jira] [Updated] (HADOOP-16727) KMS Jetty server does not startup if trust store password is null
[ https://issues.apache.org/jira/browse/HADOOP-16727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hanisha Koneru updated HADOOP-16727:
    Resolution: Fixed
        Status: Resolved  (was: Patch Available)

> KMS Jetty server does not startup if trust store password is null
> -----------------------------------------------------------------
>
>                 Key: HADOOP-16727
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16727
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: common
>            Reporter: Hanisha Koneru
>            Assignee: Hanisha Koneru
>            Priority: Major
>         Attachments: HADOOP-16727.003.patch, HDFS-14951.001.patch, HDFS-14951.002.patch
>
> In HttpServer2, if the trustStore is set but the trust store password is not, we set the TrustStorePassword of SSLContextFactory to null. This results in the Jetty server not starting up.
> {code:java}
> // In HttpServer2#createHttpsChannelConnector():
> if (trustStore != null) {
>   sslContextFactory.setTrustStorePath(trustStore);
>   sslContextFactory.setTrustStoreType(trustStoreType);
>   sslContextFactory.setTrustStorePassword(trustStorePassword);
> }
> {code}
> Before setting the trust store password, we should check that it is not null.
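The null check proposed in the description can be sketched as follows. {{SslSettings}} is a hypothetical stand-in for Jetty's SslContextFactory, used only to keep the example self-contained and testable; the real fix lives in HttpServer2#createHttpsChannelConnector().

```java
import java.util.HashMap;
import java.util.Map;

public class TrustStoreGuard {
    /** Minimal stand-in recording which settings were applied. */
    public static class SslSettings {
        public final Map<String, String> applied = new HashMap<>();
        public void setTrustStorePath(String v) { applied.put("path", v); }
        public void setTrustStoreType(String v) { applied.put("type", v); }
        public void setTrustStorePassword(String v) { applied.put("password", v); }
    }

    public static void configure(SslSettings ssl, String trustStore,
                                 String trustStoreType, String trustStorePassword) {
        if (trustStore != null) {
            ssl.setTrustStorePath(trustStore);
            ssl.setTrustStoreType(trustStoreType);
            // The fix: skip the password entirely when it is not configured,
            // instead of handing null to the factory.
            if (trustStorePassword != null) {
                ssl.setTrustStorePassword(trustStorePassword);
            }
        }
    }
}
```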
[jira] [Commented] (HADOOP-16727) KMS Jetty server does not startup if trust store password is null
[ https://issues.apache.org/jira/browse/HADOOP-16727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17010022#comment-17010022 ]

Hanisha Koneru commented on HADOOP-16727:

The Jenkins run is the same as last time; the javac issues are not introduced by this patch. Thank you [~smeng] and [~weichiu] for the reviews. I will commit patch v03 shortly.
[jira] [Commented] (HADOOP-16727) KMS Jetty server does not startup if trust store password is null
[ https://issues.apache.org/jira/browse/HADOOP-16727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17009208#comment-17009208 ]

Hanisha Koneru commented on HADOOP-16727:

Thank you [~weichiu]. I have retriggered a Jenkins pre-commit run since the last run was a while ago. If it comes back clean, I will commit the patch.
[jira] [Commented] (HADOOP-16727) KMS Jetty server does not startup if trust store password is null
[ https://issues.apache.org/jira/browse/HADOOP-16727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16989279#comment-16989279 ]

Hanisha Koneru commented on HADOOP-16727:

Thank you [~smeng]. The javac issues are not introduced by this patch.
[jira] [Commented] (HADOOP-16727) KMS Jetty server does not startup if trust store password is null
[ https://issues.apache.org/jira/browse/HADOOP-16727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16988120#comment-16988120 ]

Hanisha Koneru commented on HADOOP-16727:

Thanks [~smeng] for the review. I have addressed the checkstyle issues in the new patch. I also removed the TestSSLFactory.testNoTrustStorePassword test, as this scenario is covered by TestSSLHttpServerConfigs.
[jira] [Updated] (HADOOP-16727) KMS Jetty server does not startup if trust store password is null
[ https://issues.apache.org/jira/browse/HADOOP-16727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hanisha Koneru updated HADOOP-16727:
    Attachment: HADOOP-16727.003.patch
[jira] [Created] (HADOOP-15690) Hadoop docs' current should point to the latest release
Hanisha Koneru created HADOOP-15690:
---------------------------------------

         Summary: Hadoop docs' current should point to the latest release
             Key: HADOOP-15690
             URL: https://issues.apache.org/jira/browse/HADOOP-15690
         Project: Hadoop Common
      Issue Type: Bug
        Reporter: Hanisha Koneru
        Assignee: Hanisha Koneru

In [http://hadoop.apache.org/docs/], the "current" folder points to Hadoop 2.9.1. It should point to the latest release, Hadoop 3.1.1.

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14210) Directories are not listed recursively when fs.defaultFs is viewFs
[ https://issues.apache.org/jira/browse/HADOOP-14210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396260#comment-16396260 ]

Hanisha Koneru commented on HADOOP-14210:

It would be good to have the {{-ls -R}} operation on ViewFs work as it would on Unix. But I do have a concern/question about recursively listing files/directories in ViewFs: how do we handle the scenario where one mount target is a parent of another mount target? For example, with the config below, if we recursively list the files/directories under the ViewFs root, the files/directories under {{/user}} will be listed twice (once for {{/nn1}} and once for {{/user}}). I think this would be a bad experience for users.

{code:java}
fs.defaultFS = viewfs:///
fs.viewfs.mounttable.default.link./nn1 = hdfs://ns1/
fs.viewfs.mounttable.default.link./user = hdfs://ns1/user/
{code}

One option is to duplicate the behavior of symlinks in Unix. In Unix, {{ls -R}} does not list the contents of a symlink's target; the "{{-L | --dereference}}" option must be added to recursively list the contents of symlinks along with directories. We could copy this behaviour in ViewFs: recursively list the contents of a mount's target filesystem only when {{-ls -R}} is called with the {{-L}} option. This would still list the contents of {{/user}} twice for the scenario mentioned above, but I think that should be fine. Would love to hear thoughts on this.

> Directories are not listed recursively when fs.defaultFs is viewFs
> ------------------------------------------------------------------
>
>                 Key: HADOOP-14210
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14210
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: viewfs
>    Affects Versions: 2.7.0
>            Reporter: Ajith S
>            Priority: Major
>              Labels: viewfs
>         Attachments: HDFS-8413.patch
>
> Mount a cluster on the client through a ViewFs mount table. Example:
> {quote}
> fs.defaultFS = viewfs:///
> fs.viewfs.mounttable.default.link./nn1 = hdfs://ns1/
> fs.viewfs.mounttable.default.link./user = hdfs://host-72:8020/
> {quote}
> Try to list the files recursively (*hdfs dfs -ls -R /* or *hadoop fs -ls -R /*): only the parent folders are listed.
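The overlap concern in the comment above (one mount target being a parent of another) can be illustrated with a simple prefix test over target URIs. This is not ViewFs API, just a sketch of when a recursive listing would visit the same subtree twice.

```java
public class MountOverlap {
    /** True when one target URI is a parent of (or equal to) the other. */
    public static boolean targetsOverlap(String a, String b) {
        // Normalize to a trailing slash so "hdfs://ns1" cannot falsely
        // match "hdfs://ns10/...".
        String x = a.endsWith("/") ? a : a + "/";
        String y = b.endsWith("/") ? b : b + "/";
        return x.startsWith(y) || y.startsWith(x);
    }
}
```

For the config in the comment, {{hdfs://ns1/}} and {{hdfs://ns1/user/}} overlap, so {{/user}} would be listed twice; distinct namespaces do not overlap.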
[jira] [Commented] (HADOOP-15168) Add kdiag tool to hadoop command
[ https://issues.apache.org/jira/browse/HADOOP-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16350939#comment-16350939 ]

Hanisha Koneru commented on HADOOP-15168:

Committed patch v05 to trunk. Thanks for the contribution [~bharatviswa].

> Add kdiag tool to hadoop command
> --------------------------------
>
>                 Key: HADOOP-15168
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15168
>             Project: Hadoop Common
>          Issue Type: Improvement
>            Reporter: Bharat Viswanadham
>            Assignee: Bharat Viswanadham
>            Priority: Minor
>             Fix For: 3.1.0
>
>         Attachments: HADOOP-15168.00.patch, HADOOP-15168.01.patch, HADOOP-15168.02.patch, HADOOP-15168.03.patch, HADOOP-15168.04.patch, HADOOP-15168.05.patch
>
[jira] [Updated] (HADOOP-15168) Add kdiag tool to hadoop command
[ https://issues.apache.org/jira/browse/HADOOP-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hanisha Koneru updated HADOOP-15168:
    Fix Version/s: 3.1.0
[jira] [Updated] (HADOOP-15168) Add kdiag tool to hadoop command
[ https://issues.apache.org/jira/browse/HADOOP-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hanisha Koneru updated HADOOP-15168:
    Resolution: Fixed
        Status: Resolved  (was: Patch Available)
[jira] [Commented] (HADOOP-15168) Add kdiag tool to hadoop command
[ https://issues.apache.org/jira/browse/HADOOP-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349118#comment-16349118 ]

Hanisha Koneru commented on HADOOP-15168:

Thanks Bharat for updating the patch. +1 for patch v04 pending Jenkins.
[jira] [Issue Comment Deleted] (HADOOP-15168) Add kdiag tool to hadoop command
[ https://issues.apache.org/jira/browse/HADOOP-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hanisha Koneru updated HADOOP-15168:
    Comment: was deleted (was: Uploaded patch v00 as v04. Will commit it shortly.)
[jira] [Updated] (HADOOP-15168) Add kdiag tool to hadoop command
[ https://issues.apache.org/jira/browse/HADOOP-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-15168: Attachment: (was: HADOOP-15168.04.patch)
[jira] [Commented] (HADOOP-15168) Add kdiag tool to hadoop command
[ https://issues.apache.org/jira/browse/HADOOP-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347498#comment-16347498 ] Hanisha Koneru commented on HADOOP-15168: - Uploaded patch v00 as v04. Will commit it shortly.
[jira] [Updated] (HADOOP-15168) Add kdiag tool to hadoop command
[ https://issues.apache.org/jira/browse/HADOOP-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-15168: Attachment: HADOOP-15168.04.patch
[jira] [Commented] (HADOOP-15168) Add kdiag tool to hadoop command
[ https://issues.apache.org/jira/browse/HADOOP-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16347494#comment-16347494 ] Hanisha Koneru commented on HADOOP-15168: - Thanks [~bharatviswa]. I had an offline discussion with [~arpitagarwal]. We do not need to add kdiag to hdfs and yarn. It is sufficient to add it to the hadoop CLI. Patch v00 is good. I am sorry about the extra revisions, Bharat.
[jira] [Commented] (HADOOP-10571) Use Log.*(Object, Throwable) overload to log exceptions
[ https://issues.apache.org/jira/browse/HADOOP-10571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16345939#comment-16345939 ] Hanisha Koneru commented on HADOOP-10571: - Thanks for working on this [~boky01]. In the files modified in the patch, there are a lot of other log messages which also need to be fixed to avoid string concatenation. For example, lines {{142, 175, 178, 230, 235}} in {{FailoverController}}. It would be good to fix all the log messages in a file at once, but covering all the files in a single patch would become huge. We can break this down into sub-tasks of HDFS-12829.
> Use Log.*(Object, Throwable) overload to log exceptions
>
> Key: HADOOP-10571
> URL: https://issues.apache.org/jira/browse/HADOOP-10571
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 2.4.0
> Reporter: Arpit Agarwal
> Assignee: Andras Bokor
> Priority: Major
>
> Attachments: HADOOP-10571.01.patch, HADOOP-10571.01.patch
>
> When logging an exception, we often convert the exception to string or call {{.getMessage}}. Instead we can use the log method overloads which take {{Throwable}} as a parameter.
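The pattern HADOOP-10571 asks for can be sketched as below. This is an illustrative example, not code from the patch; it uses {{java.util.logging}} only to stay dependency-free (Hadoop itself uses commons-logging/SLF4J), but the point is the same: pass the {{Throwable}} as its own argument instead of concatenating it into the message.

```java
import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;

public class LogThrowableExample {
    private static final Logger LOG =
        Logger.getLogger(LogThrowableExample.class.getName());

    public static void main(String[] args) {
        try {
            throw new IOException("connection reset");
        } catch (IOException e) {
            // Anti-pattern: concatenating the exception drops the stack trace
            // and builds the string eagerly even if the level is disabled.
            LOG.warning("Failover failed: " + e.getMessage());

            // Preferred: use the overload that takes a Throwable; the logging
            // framework renders the message and the full stack trace itself.
            LOG.log(Level.WARNING, "Failover failed", e);
        }
    }
}
```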
[jira] [Updated] (HADOOP-10571) Use Log.*(Object, Throwable) overload to log exceptions
[ https://issues.apache.org/jira/browse/HADOOP-10571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-10571: Status: Patch Available (was: Reopened)
[jira] [Commented] (HADOOP-15168) Add kdiag tool to hadoop command
[ https://issues.apache.org/jira/browse/HADOOP-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16343880#comment-16343880 ] Hanisha Koneru commented on HADOOP-15168: -
{quote}I think, the commands which are related to hdfs, we add in hdfs script, similar for yarn. Or do we add to all scripts in general?{quote}
No, if you are adding it, you should add it to the {{hdfs}} and {{yarn}} scripts as well. I am not sure if it is required, as they do not have other Kerberos-related commands (such as {{key}} and {{kerbname}}). I meant to say we should change the following lines in {{SecureMode.md}} to reflect the changes introduced by this Jira.
{code}
The `KDiag` command has its own entry point; it is currently not hooked up
to the end-user CLI.

It is invoked simply by passing its full classname to one of the
`bin/hadoop`, `bin/hdfs` or `bin/yarn` commands. Accordingly, it will
display the kerberos client state of the command used to invoke it.

```
hadoop org.apache.hadoop.security.KDiag
hdfs org.apache.hadoop.security.KDiag
yarn org.apache.hadoop.security.KDiag
```
{code}
[jira] [Comment Edited] (HADOOP-15168) Add kdiag tool to hadoop command
[ https://issues.apache.org/jira/browse/HADOOP-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338079#comment-16338079 ] Hanisha Koneru edited comment on HADOOP-15168 at 1/24/18 7:30 PM: -- Thanks [~bharatviswa] for the patch. LGTM. Sorry, I missed it before. In {{SecureMode.md}}, the {{Troubleshooting with `KDiag`}} section needs to be updated to reflect the new changes. Also, do you want to add the tool to the hdfs and yarn CLIs as well, to keep the usage consistent?
was (Author: hanishakoneru): Thanks [~bharatviswa] for the patch. LGTM. Sorry, I missed it before. In {{SecureMode.md}}, the {{Troubleshooting with `KDiag`}} section needs to be updated to reflect the new changes.
[jira] [Updated] (HADOOP-15168) Add kdiag tool to hadoop command
[ https://issues.apache.org/jira/browse/HADOOP-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-15168: Priority: Minor (was: Major)
[jira] [Comment Edited] (HADOOP-15168) Add kdiag tool to hadoop command
[ https://issues.apache.org/jira/browse/HADOOP-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338079#comment-16338079 ] Hanisha Koneru edited comment on HADOOP-15168 at 1/24/18 7:24 PM: -- Thanks [~bharatviswa] for the patch. LGTM. Sorry, I missed it before. In {{SecureMode.md}}, the {{Troubleshooting with `KDiag`}} section needs to be updated to reflect the new changes.
was (Author: hanishakoneru): Thanks [~bharatviswa] for the patch. LGTM. +1.
[jira] [Updated] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler
[ https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-15121: Resolution: Fixed Status: Resolved (was: Patch Available)
> Encounter NullPointerException when using DecayRpcScheduler
>
> Key: HADOOP-15121
> URL: https://issues.apache.org/jira/browse/HADOOP-15121
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 2.8.2
> Reporter: Tao Jie
> Assignee: Tao Jie
> Priority: Major
>
> Attachments: HADOOP-15121.001.patch, HADOOP-15121.002.patch, HADOOP-15121.003.patch, HADOOP-15121.004.patch, HADOOP-15121.005.patch, HADOOP-15121.006.patch, HADOOP-15121.007.patch, HADOOP-15121.008.patch
>
> I set ipc.8020.scheduler.impl to org.apache.hadoop.ipc.DecayRpcScheduler, but got an exception in the namenode:
> {code}
> 2017-12-15 15:26:34,662 ERROR impl.MetricsSourceAdapter (MetricsSourceAdapter.java:getMetrics(202)) - Error getting metrics from source DecayRpcSchedulerMetrics2.ipc.8020
> java.lang.NullPointerException
>         at org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getMetrics(DecayRpcScheduler.java:781)
>         at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199)
>         at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182)
>         at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155)
>         at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333)
>         at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319)
>         at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
>         at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:66)
>         at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222)
>         at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100)
>         at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:268)
>         at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:233)
>         at org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.registerMetrics2Source(DecayRpcScheduler.java:709)
>         at org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.<init>(DecayRpcScheduler.java:685)
>         at org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getInstance(DecayRpcScheduler.java:693)
>         at org.apache.hadoop.ipc.DecayRpcScheduler.<init>(DecayRpcScheduler.java:236)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>         at org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:102)
>         at org.apache.hadoop.ipc.CallQueueManager.<init>(CallQueueManager.java:76)
>         at org.apache.hadoop.ipc.Server.<init>(Server.java:2612)
>         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:958)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:374)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:349)
>         at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:415)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:755)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:697)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:905)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:884)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1610)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1678)
> {code}
> It seems that {{metricsProxy}} in DecayRpcScheduler should initialize its {{delegate}} field in its initialization method.
[jira] [Updated] (HADOOP-15168) Add kdiag tool to hadoop command
[ https://issues.apache.org/jira/browse/HADOOP-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-15168: Issue Type: Improvement (was: Bug)
[jira] [Commented] (HADOOP-15168) Add kdiag tool to hadoop command
[ https://issues.apache.org/jira/browse/HADOOP-15168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338079#comment-16338079 ] Hanisha Koneru commented on HADOOP-15168: - Thanks [~bharatviswa] for the patch. LGTM. +1.
[jira] [Commented] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler
[ https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16335219#comment-16335219 ] Hanisha Koneru commented on HADOOP-15121: - Committed to {{trunk}} and {{branch-3.0}}. Thanks [~Tao Jie] for working on this and [~ajayydv] for the reviews.
[jira] [Commented] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler
[ https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16334750#comment-16334750 ] Hanisha Koneru commented on HADOOP-15121: - +1 for patch v08. The failed test cases are unrelated and pass locally. Will commit this shortly.
[jira] [Commented] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler
[ https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16333014#comment-16333014 ] Hanisha Koneru commented on HADOOP-15121: - We can check that the {{metricsProxy#delegate}} object is not the same as the current {{DecayRpcScheduler}} object before setting it again. We can do this in {{MetricsProxy#getInstance()}}.
{code:java}
if (mp == null) {
  // We must create one
  mp = new MetricsProxy(namespace, numLevels, drs);
  INSTANCES.put(namespace, mp);
} else {
  if (mp.delegate.get() != drs) {
    mp.setDelegate(drs);
  }
}
{code}
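The delegate-refresh check suggested in the comment above can be sketched as a standalone pattern. The names ({{MetricsProxy}}, {{setDelegate}}, the {{WeakReference}} delegate) mirror the snippet, but this is an illustrative reconstruction, not the actual DecayRpcScheduler internals:

```java
import java.lang.ref.WeakReference;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: a per-namespace singleton proxy whose delegate is
// re-pointed when a new scheduler instance replaces an old one.
// Hypothetical names; not the real DecayRpcScheduler code.
public class MetricsProxySketch {

    static class Scheduler {}

    static class MetricsProxy {
        private static final Map<String, MetricsProxy> INSTANCES = new HashMap<>();

        // Weak reference so a cached proxy does not pin a dead scheduler in memory.
        private WeakReference<Scheduler> delegate;

        private MetricsProxy(Scheduler s) {
            // Initializing the delegate here is what avoids the NPE the
            // issue describes: the proxy is never registered without one.
            setDelegate(s);
        }

        void setDelegate(Scheduler s) {
            this.delegate = new WeakReference<>(s);
        }

        Scheduler getDelegate() {
            return delegate.get();
        }

        static synchronized MetricsProxy getInstance(String namespace, Scheduler drs) {
            MetricsProxy mp = INSTANCES.get(namespace);
            if (mp == null) {
                // First scheduler for this namespace: create the proxy.
                mp = new MetricsProxy(drs);
                INSTANCES.put(namespace, mp);
            } else if (mp.getDelegate() != drs) {
                // A new scheduler was constructed for an existing namespace:
                // point the cached proxy at it instead of the stale delegate.
                mp.setDelegate(drs);
            }
            return mp;
        }
    }
}
```

The identity comparison ({{!=}}, not {{equals}}) matters here: the goal is to detect that a brand-new scheduler object was constructed for the same namespace.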
[jira] [Commented] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler
[ https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16331377#comment-16331377 ] Hanisha Koneru commented on HADOOP-15121: - [~arpitagarwal], can you please add [~Tao Jie] to the contributors list. Thanks. > Encounter NullPointerException when using DecayRpcScheduler > --- > > Key: HADOOP-15121 > URL: https://issues.apache.org/jira/browse/HADOOP-15121 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.8.2 >Reporter: Tao Jie >Priority: Major > Attachments: HADOOP-15121.001.patch, HADOOP-15121.002.patch, > HADOOP-15121.003.patch, HADOOP-15121.004.patch, HADOOP-15121.005.patch > > > I set ipc.8020.scheduler.impl to org.apache.hadoop.ipc.DecayRpcScheduler, but > got excetion in namenode: > {code} > 2017-12-15 15:26:34,662 ERROR impl.MetricsSourceAdapter > (MetricsSourceAdapter.java:getMetrics(202)) - Error getting metrics from > source DecayRpcSchedulerMetrics2.ipc.8020 > java.lang.NullPointerException > at > org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getMetrics(DecayRpcScheduler.java:781) > at > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199) > at > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182) > at > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155) > at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333) > at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319) > at > com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) > at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:66) > at > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222) > at > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100) > 
at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:268) > at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:233) > at > org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.registerMetrics2Source(DecayRpcScheduler.java:709) > at > org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.(DecayRpcScheduler.java:685) > at > org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getInstance(DecayRpcScheduler.java:693) > at > org.apache.hadoop.ipc.DecayRpcScheduler.(DecayRpcScheduler.java:236) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native > Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at > org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:102) > at > org.apache.hadoop.ipc.CallQueueManager.(CallQueueManager.java:76) > at org.apache.hadoop.ipc.Server.(Server.java:2612) > at org.apache.hadoop.ipc.RPC$Server.(RPC.java:958) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:374) > at > org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:349) > at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:415) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:755) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:697) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:905) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:884) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1610) > at > 
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1678) > {code} > It seems that {{metricsProxy}} in DecayRpcScheduler should initialize its > {{delegate}} field in its initialization method -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
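The NPE traced above comes down to ordering: the {{MetricsProxy}} singleton becomes visible to the metrics system before its {{delegate}} is assigned, so a concurrent {{getMetrics()}} dereferences {{null}}. The sketch below illustrates the fix direction under discussion, with simplified stand-in classes (the names here are illustrative, not the actual code in org.apache.hadoop.ipc): wire the delegate inside the factory method, before the proxy is exposed. A side effect, as noted in the review below, is that a follow-up {{setDelegate(this)}} at the call site becomes redundant.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for the scheduler whose metrics the proxy forwards.
class SchedulerSketch {
    String getMetricsSnapshot() { return "callVolume=0"; }
}

// Simplified stand-in for DecayRpcScheduler$MetricsProxy.
class MetricsProxySketch {
    private static final Map<String, MetricsProxySketch> INSTANCES = new HashMap<>();
    private volatile SchedulerSketch delegate;

    // The delegate is assigned before the proxy is published in INSTANCES,
    // so no caller can observe a proxy with a null delegate.
    static synchronized MetricsProxySketch getInstance(String ns, SchedulerSketch scheduler) {
        MetricsProxySketch proxy = INSTANCES.get(ns);
        if (proxy == null) {
            proxy = new MetricsProxySketch();
            proxy.setDelegate(scheduler);  // set before publication
            INSTANCES.put(ns, proxy);
        } else {
            proxy.setDelegate(scheduler);  // refresh on scheduler re-creation
        }
        return proxy;
    }

    void setDelegate(SchedulerSketch d) { this.delegate = d; }

    // Null-guarded, mirroring the defensive read that avoids the reported NPE.
    String getMetrics() {
        SchedulerSketch d = delegate;
        return d == null ? null : d.getMetricsSnapshot();
    }
}
```

With this shape, {{getInstance(ns, scheduler)}} alone is enough; an extra {{metricsProxy.setDelegate(this)}} afterwards only repeats work the factory already did.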
[jira] [Commented] (HADOOP-15121) Encounter NullPointerException when using DecayRpcScheduler
[ https://issues.apache.org/jira/browse/HADOOP-15121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16331362#comment-16331362 ] Hanisha Koneru commented on HADOOP-15121: - Thanks for the patch, [~Tao Jie]. I have a couple of minor comments. The patch LGTM otherwise. * The {{setDelegate()}} call here is redundant as you have already set it during MetricsProxy initialization. {code:java} metricsProxy = MetricsProxy.getInstance(ns, numLevels, this); metricsProxy.setDelegate(this);{code} * If the 2s test case is timing out occasionally on local machine, then a 5s timeout might also fail on an under-powered VM. It is better to set a higher test case timeout than we would ever expect it to take (say 60s). > Encounter NullPointerException when using DecayRpcScheduler > --- > > Key: HADOOP-15121 > URL: https://issues.apache.org/jira/browse/HADOOP-15121 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.8.2 >Reporter: Tao Jie >Priority: Major > Attachments: HADOOP-15121.001.patch, HADOOP-15121.002.patch, > HADOOP-15121.003.patch, HADOOP-15121.004.patch, HADOOP-15121.005.patch > > > I set ipc.8020.scheduler.impl to org.apache.hadoop.ipc.DecayRpcScheduler, but > got excetion in namenode: > {code} > 2017-12-15 15:26:34,662 ERROR impl.MetricsSourceAdapter > (MetricsSourceAdapter.java:getMetrics(202)) - Error getting metrics from > source DecayRpcSchedulerMetrics2.ipc.8020 > java.lang.NullPointerException > at > org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getMetrics(DecayRpcScheduler.java:781) > at > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:199) > at > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:182) > at > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:155) > at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:333) > at > 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:319) > at > com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522) > at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:66) > at > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:222) > at > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:100) > at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:268) > at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:233) > at > org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.registerMetrics2Source(DecayRpcScheduler.java:709) > at > org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.(DecayRpcScheduler.java:685) > at > org.apache.hadoop.ipc.DecayRpcScheduler$MetricsProxy.getInstance(DecayRpcScheduler.java:693) > at > org.apache.hadoop.ipc.DecayRpcScheduler.(DecayRpcScheduler.java:236) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native > Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at > org.apache.hadoop.ipc.CallQueueManager.createScheduler(CallQueueManager.java:102) > at > org.apache.hadoop.ipc.CallQueueManager.(CallQueueManager.java:76) > at org.apache.hadoop.ipc.Server.(Server.java:2612) > at org.apache.hadoop.ipc.RPC$Server.(RPC.java:958) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:374) > at > org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:349) > at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:800) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:415) > at > 
org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:755) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:697) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:905) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:884) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1610) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1678) > {code} > It seems that
[jira] [Comment Edited] (HADOOP-14788) Credentials readTokenStorageFile to stop wrapping IOEs in IOEs
[ https://issues.apache.org/jira/browse/HADOOP-14788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16317282#comment-16317282 ] Hanisha Koneru edited comment on HADOOP-14788 at 1/8/18 11:44 PM: -- [~ajayydv], I am still not sure why the exception should be changed to PathIOException. I might be missing something here. Steve, can you please clarify. Thanks. PathIOException is for "Exceptions based on standard posix/linux style exceptions for path related errors". Instead, can we add the path to the IOException message? was (Author: hanishakoneru): [~ajayydv], I am still not sure why the exception should be changed to PathIOException. I might be missing something here. Steve, can you please clarify. Thanks. > Credentials readTokenStorageFile to stop wrapping IOEs in IOEs > -- > > Key: HADOOP-14788 > URL: https://issues.apache.org/jira/browse/HADOOP-14788 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.8.1 >Reporter: Steve Loughran >Assignee: Ajay Kumar >Priority: Minor > Attachments: HADOOP-14788.001.patch, HADOOP-14788.002.patch, > HADOOP-14788.003.patch, HADOOP-14788.004.patch > > > When {{Credentials readTokenStorageFile}} gets an IOE. it catches & wraps > with the filename, so losing the exception class information. > Is this needed. or can it pass everything up? > If it is needed, well, it's a common pattern: wrapping the exception with the > path & operation. Maybe it's time to add an IOE version of > {{NetworkUtils.wrapException()}} which handles the broader set of IOEs -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14788) Credentials readTokenStorageFile to stop wrapping IOEs in IOEs
[ https://issues.apache.org/jira/browse/HADOOP-14788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16317282#comment-16317282 ] Hanisha Koneru commented on HADOOP-14788: - [~ajayydv], I am still not sure why the exception should be changed to PathIOException. I might be missing something here. Steve, can you please clarify. Thanks. > Credentials readTokenStorageFile to stop wrapping IOEs in IOEs > -- > > Key: HADOOP-14788 > URL: https://issues.apache.org/jira/browse/HADOOP-14788 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.8.1 >Reporter: Steve Loughran >Assignee: Ajay Kumar >Priority: Minor > Attachments: HADOOP-14788.001.patch, HADOOP-14788.002.patch, > HADOOP-14788.003.patch, HADOOP-14788.004.patch > > > When {{Credentials readTokenStorageFile}} gets an IOE. it catches & wraps > with the filename, so losing the exception class information. > Is this needed. or can it pass everything up? > If it is needed, well, it's a common pattern: wrapping the exception with the > path & operation. Maybe it's time to add an IOE version of > {{NetworkUtils.wrapException()}} which handles the broader set of IOEs -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15164) DataNode Replica Trash
[ https://issues.apache.org/jira/browse/HADOOP-15164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16317246#comment-16317246 ] Hanisha Koneru commented on HADOOP-15164: - Should have been in HDFS. Sorry for the duplication. Moved it to HDFS-12996. > DataNode Replica Trash > -- > > Key: HADOOP-15164 > URL: https://issues.apache.org/jira/browse/HADOOP-15164 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Attachments: DataNode_Replica_Trash_Design_Doc.pdf > > > DataNode Replica Trash will allow administrators to recover from a recent > delete request that resulted in catastrophic loss of user data. This is > achieved by placing all invalidated blocks in a replica trash on the datanode > before completely purging them from the system. The design doc is attached > here.
[jira] [Resolved] (HADOOP-15164) DataNode Replica Trash
[ https://issues.apache.org/jira/browse/HADOOP-15164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru resolved HADOOP-15164. - Resolution: Duplicate > DataNode Replica Trash > -- > > Key: HADOOP-15164 > URL: https://issues.apache.org/jira/browse/HADOOP-15164 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Attachments: DataNode_Replica_Trash_Design_Doc.pdf > > > DataNode Replica Trash will allow administrators to recover from a recent > delete request that resulted in catastrophic loss of user data. This is > achieved by placing all invalidated blocks in a replica trash on the datanode > before completely purging them from the system. The design doc is attached > here.
[jira] [Created] (HADOOP-15164) DataNode Replica Trash
Hanisha Koneru created HADOOP-15164: --- Summary: DataNode Replica Trash Key: HADOOP-15164 URL: https://issues.apache.org/jira/browse/HADOOP-15164 Project: Hadoop Common Issue Type: New Feature Reporter: Hanisha Koneru Assignee: Hanisha Koneru Attachments: DataNode_Replica_Trash_Design_Doc.pdf DataNode Replica Trash will allow administrators to recover from a recent delete request that resulted in catastrophic loss of user data. This is achieved by placing all invalidated blocks in a replica trash on the datanode before completely purging them from the system. The design doc is attached here. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15128) TestViewFileSystem tests are broken in trunk
[ https://issues.apache.org/jira/browse/HADOOP-15128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16298921#comment-16298921 ] Hanisha Koneru commented on HADOOP-15128: - Thanks [~eyang] and [~ste...@apache.org] for the insights. We can override the _toString()_ function in {{ViewFsFileStatus}} and {{ViewFsLocatedFileStatus}}. The rest of the classes extending FileStatus either override the toString() function already or do not override any of the get methods (_getPath()_, _isDirectory()_ etc.) we want to introduce in toString() (with the exception of DeprecatedRawLocalFileSystem). {{DeprecatedRawLocalFileSystem#toString()}} would remain the same as is (calling _FileStatus#toString()_). The toString() output might not give the actual owner, group and permission correctly. But it would avoid the calls to _loadPermissionInfo()_ every time toString() is called. And there won't be any modifications done to {{FileStatus}}. > TestViewFileSystem tests are broken in trunk > > > Key: HADOOP-15128 > URL: https://issues.apache.org/jira/browse/HADOOP-15128 > Project: Hadoop Common > Issue Type: Bug > Components: viewfs >Affects Versions: 3.1.0 >Reporter: Anu Engineer >Assignee: Hanisha Koneru > > The fix in Hadoop-10054 seems to have caused a test failure. Please take a > look. Thanks [~eyang] for reporting this.
[jira] [Commented] (HADOOP-15128) TestViewFileSystem tests are broken in trunk
[ https://issues.apache.org/jira/browse/HADOOP-15128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295960#comment-16295960 ] Hanisha Koneru commented on HADOOP-15128: - Thanks [~anu] and [~eyang]. The test failures in _TestViewFileSystemLocalFileSystem_ and _TestViewFileSystemWithAuthorityLocalFileSystem_ are because {{RawLocalFileSystem}} overrides the {{getOwner()}}, {{getGroup()}} and {{getPermission()}} functions from {{FileStatus}}. And in the overridden functions, there is a call to {{loadPermissionInfo()}} which can throw an Exception. Possible fixes: # Revert {{getOwner()}}, {{getGroup}} and {{getPermission()}} in _FileStatus#toString()_ to {{owner}}, {{group}} and {{permission}}. # Catch the exception thrown by _RawLocalFileSystem#loadPermissionInfo()_ in _FileStatus#toString()_ and log a debug message. I am inclined towards the second option. Thoughts? > TestViewFileSystem tests are broken in trunk > > > Key: HADOOP-15128 > URL: https://issues.apache.org/jira/browse/HADOOP-15128 > Project: Hadoop Common > Issue Type: Bug > Components: viewfs >Affects Versions: 3.1.0 >Reporter: Anu Engineer >Assignee: Hanisha Koneru > > The fix in Hadoop-10054 seems to have caused a test failure. Please take a > look. Thanks [~eyang] for reporting this. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
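The second option above — tolerating a getter that throws inside {{toString()}} — can be sketched with toy stand-ins for {{FileStatus}} and a {{RawLocalFileSystem}}-style subclass (the class names below are invented for illustration; the real classes live in org.apache.hadoop.fs):

```java
// Sketch of option 2: toString() guards against overridden getters that
// may throw, the way RawLocalFileSystem's getOwner()/getPermission() can
// when loadPermissionInfo() fails.
class FileStatusSketch {
    protected String owner = "";

    public String getOwner() { return owner; }  // subclasses may override

    @Override
    public String toString() {
        String o;
        try {
            o = getOwner();
        } catch (RuntimeException e) {
            // In real code this is where a LOG.debug(...) would go.
            o = owner;  // fall back to the stored field
        }
        return "FileStatusSketch{owner=" + o + "}";
    }
}

class FailingStatusSketch extends FileStatusSketch {
    @Override
    public String getOwner() {
        // Mimics loadPermissionInfo() failing at lookup time.
        throw new RuntimeException("cannot load permission info");
    }
}
```

The trade-off matches the comment above: the printed owner may be stale or empty, but {{toString()}} can never blow up or trigger an expensive permission lookup per call.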
[jira] [Commented] (HADOOP-10054) ViewFsFileStatus.toString() is broken
[ https://issues.apache.org/jira/browse/HADOOP-10054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16295919#comment-16295919 ] Hanisha Koneru commented on HADOOP-10054: - Thanks [~eyang] for reporting this and [~anu] for filing the follow-up Jira. > ViewFsFileStatus.toString() is broken > - > > Key: HADOOP-10054 > URL: https://issues.apache.org/jira/browse/HADOOP-10054 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.0.5-alpha >Reporter: Paul Han >Assignee: Hanisha Koneru >Priority: Minor > Fix For: 3.0.1 > > Attachments: HADOOP-10054.001.patch, HADOOP-10054.002.patch > > > ViewFsFileStatus.toString is broken. Following code snippet : > {code} > FileStatus stat= somefunc(); // somefunc() returns an instance of > ViewFsFileStatus > System.out.println("path:" + stat.getPath()); > System.out.println(stat.toString()); > {code} > produces the output: > {code} > path:viewfs://x.com/user/X/tmp-48 > ViewFsFileStatus{path=null; isDirectory=false; length=0; replication=0; > blocksize=0; modification_time=0; access_time=0; owner=; group=; > permission=rw-rw-rw-; isSymlink=false} > {code} > Note that "path=null" is not correct.
[jira] [Commented] (HADOOP-10054) ViewFsFileStatus.toString() is broken
[ https://issues.apache.org/jira/browse/HADOOP-10054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16292024#comment-16292024 ] Hanisha Koneru commented on HADOOP-10054: - Thank you [~xyao] for committing the patch. > ViewFsFileStatus.toString() is broken > - > > Key: HADOOP-10054 > URL: https://issues.apache.org/jira/browse/HADOOP-10054 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.0.5-alpha >Reporter: Paul Han >Assignee: Hanisha Koneru >Priority: Minor > Fix For: 3.0.1 > > Attachments: HADOOP-10054.001.patch, HADOOP-10054.002.patch > > > ViewFsFileStatus.toString is broken. Following code snippet : > {code} > FileStatus stat= somefunc(); // somefunc() returns an instance of > ViewFsFileStatus > System.out.println("path:" + stat.getPath()); > System.out.println(stat.toString()); > {code} > produces the output: > {code} > path:viewfs://x.com/user/X/tmp-48 > ViewFsFileStatus{path=null; isDirectory=false; length=0; replication=0; > blocksize=0; modification_time=0; access_time=0; owner=; group=; > permission=rw-rw-rw-; isSymlink=false} > {code} > Note that "path=null" is not correct. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14788) Credentials readTokenStorageFile to stop wrapping IOEs in IOEs
[ https://issues.apache.org/jira/browse/HADOOP-14788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16289522#comment-16289522 ] Hanisha Koneru commented on HADOOP-14788: - Thanks for working on this [~ajayydv]. In the catch clause in _IOUtils#wrapException_, why are we returning a {{PathIOException}}? Shouldn't it be an {{IOException}}? {code} catch (Exception ex) { // For subclasses which have no (String) constructor throw IOException // with wrapped message return new PathIOException(path, exception); } {code} A tiny nit: In the method description of _wrapException_, "if exception" string is repeated. > Credentials readTokenStorageFile to stop wrapping IOEs in IOEs > -- > > Key: HADOOP-14788 > URL: https://issues.apache.org/jira/browse/HADOOP-14788 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.8.1 >Reporter: Steve Loughran >Assignee: Ajay Kumar >Priority: Minor > Attachments: HADOOP-14788.001.patch, HADOOP-14788.002.patch, > HADOOP-14788.003.patch > > > When {{Credentials readTokenStorageFile}} gets an IOE. it catches & wraps > with the filename, so losing the exception class information. > Is this needed. or can it pass everything up? > If it is needed, well, it's a common pattern: wrapping the exception with the > path & operation. Maybe it's time to add an IOE version of > {{NetworkUtils.wrapException()}} which handles the broader set of IOEs -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
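The review point above — that the generic fallback should return a plain {{IOException}} rather than a {{PathIOException}} — can be sketched as follows. This is a hedged stand-in, not the actual {{IOUtils}}/{{NetUtils.wrapException()}} code: it keeps the original exception class via its {{(String)}} constructor so callers can still catch the specific type, and only falls back to {{IOException}} when no such constructor exists.

```java
import java.io.IOException;
import java.lang.reflect.Constructor;

// Sketch of path-preserving exception wrapping; method name and signature
// are illustrative only.
class WrapSketch {
    static IOException wrapWithPath(String path, IOException cause) {
        try {
            // Preserve the concrete class (e.g. FileNotFoundException).
            Constructor<? extends IOException> ctor =
                cause.getClass().getConstructor(String.class);
            IOException wrapped = ctor.newInstance(path + ": " + cause.getMessage());
            wrapped.initCause(cause);
            return wrapped;
        } catch (Exception e) {
            // No usable (String) constructor: fall back to a plain
            // IOException, per the review comment above.
            IOException wrapped = new IOException(path + ": " + cause.getMessage());
            wrapped.initCause(cause);
            return wrapped;
        }
    }
}
```

A caller that previously caught {{FileNotFoundException}} keeps working, while the message now carries the offending path.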
[jira] [Updated] (HADOOP-10054) ViewFsFileStatus.toString() is broken
[ https://issues.apache.org/jira/browse/HADOOP-10054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-10054: Attachment: HADOOP-10054.002.patch Thanks for the suggestion [~xyao]. Fixing FileStatus would be the better and simpler option. I have attached patch v02 fixing it in FileStatus. > ViewFsFileStatus.toString() is broken > - > > Key: HADOOP-10054 > URL: https://issues.apache.org/jira/browse/HADOOP-10054 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.0.5-alpha >Reporter: Paul Han >Assignee: Hanisha Koneru >Priority: Minor > Attachments: HADOOP-10054.001.patch, HADOOP-10054.002.patch > > > ViewFsFileStatus.toString is broken. Following code snippet : > {code} > FileStatus stat= somefunc(); // somefunc() returns an instance of > ViewFsFileStatus > System.out.println("path:" + stat.getPath()); > System.out.println(stat.toString()); > {code} > produces the output: > {code} > path:viewfs://x.com/user/X/tmp-48 > ViewFsFileStatus{path=null; isDirectory=false; length=0; replication=0; > blocksize=0; modification_time=0; access_time=0; owner=; group=; > permission=rw-rw-rw-; isSymlink=false} > {code} > Note that "path=null" is not correct. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-10054) ViewFsFileStatus.toString() is broken
[ https://issues.apache.org/jira/browse/HADOOP-10054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287984#comment-16287984 ] Hanisha Koneru commented on HADOOP-10054: - Thanks for the review, [~ajayydv]. Do you mean owner should not be included in toString? > ViewFsFileStatus.toString() is broken > - > > Key: HADOOP-10054 > URL: https://issues.apache.org/jira/browse/HADOOP-10054 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.0.5-alpha >Reporter: Paul Han >Assignee: Hanisha Koneru >Priority: Minor > Attachments: HADOOP-10054.001.patch > > > ViewFsFileStatus.toString is broken. Following code snippet : > {code} > FileStatus stat= somefunc(); // somefunc() returns an instance of > ViewFsFileStatus > System.out.println("path:" + stat.getPath()); > System.out.println(stat.toString()); > {code} > produces the output: > {code} > path:viewfs://x.com/user/X/tmp-48 > ViewFsFileStatus{path=null; isDirectory=false; length=0; replication=0; > blocksize=0; modification_time=0; access_time=0; owner=; group=; > permission=rw-rw-rw-; isSymlink=false} > {code} > Note that "path=null" is not correct. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-9129) ViewFs does not validate internal names in the mount table
[ https://issues.apache.org/jira/browse/HADOOP-9129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-9129: --- Target Version/s: (was: ) Status: Patch Available (was: Open) > ViewFs does not validate internal names in the mount table > -- > > Key: HADOOP-9129 > URL: https://issues.apache.org/jira/browse/HADOOP-9129 > Project: Hadoop Common > Issue Type: Bug > Components: viewfs >Affects Versions: 3.0.0-alpha1 >Reporter: Chris Nauroth >Assignee: Hanisha Koneru > Attachments: HADOOP-9129.001.patch > > > Currently, there is no explicit validation of {{ViewFs}} internal names in > the mount table during initialization. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-9129) ViewFs does not validate internal names in the mount table
[ https://issues.apache.org/jira/browse/HADOOP-9129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-9129: --- Attachment: HADOOP-9129.001.patch For a start, attached patch v01 which validates that the mount table source entry path is a valid HDFS path. > ViewFs does not validate internal names in the mount table > -- > > Key: HADOOP-9129 > URL: https://issues.apache.org/jira/browse/HADOOP-9129 > Project: Hadoop Common > Issue Type: Bug > Components: viewfs >Affects Versions: 3.0.0-alpha1 >Reporter: Chris Nauroth >Assignee: Hanisha Koneru > Attachments: HADOOP-9129.001.patch > > > Currently, there is no explicit validation of {{ViewFs}} internal names in > the mount table during initialization. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
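The kind of check patch v01 introduces can be sketched like this (a hypothetical helper, not the actual ViewFs mount-table code): a mount source must be an absolute, scheme-less path before it is accepted into the table.

```java
import java.net.URI;
import java.net.URISyntaxException;

// Hypothetical sketch of mount-table source validation in the spirit of
// the patch described above.
class MountTableSketch {
    static boolean isValidMountSource(String src) {
        if (src == null || src.isEmpty()) {
            return false;
        }
        try {
            URI uri = new URI(src);
            // Reject entries carrying their own scheme or a relative path.
            return uri.getScheme() == null
                && uri.getPath() != null
                && uri.getPath().startsWith("/");
        } catch (URISyntaxException e) {
            return false;  // unparseable entry
        }
    }
}
```

Rejecting bad entries at initialization surfaces configuration mistakes immediately instead of at first resolution.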
[jira] [Assigned] (HADOOP-9129) ViewFs does not validate internal names in the mount table
[ https://issues.apache.org/jira/browse/HADOOP-9129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru reassigned HADOOP-9129: -- Assignee: Hanisha Koneru > ViewFs does not validate internal names in the mount table > -- > > Key: HADOOP-9129 > URL: https://issues.apache.org/jira/browse/HADOOP-9129 > Project: Hadoop Common > Issue Type: Bug > Components: viewfs >Affects Versions: 3.0.0-alpha1 >Reporter: Chris Nauroth >Assignee: Hanisha Koneru > > Currently, there is no explicit validation of {{ViewFs}} internal names in > the mount table during initialization. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-10054) ViewFsFileStatus.toString() is broken
[ https://issues.apache.org/jira/browse/HADOOP-10054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-10054: Status: Patch Available (was: Open) > ViewFsFileStatus.toString() is broken > - > > Key: HADOOP-10054 > URL: https://issues.apache.org/jira/browse/HADOOP-10054 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.0.5-alpha >Reporter: Paul Han >Assignee: Hanisha Koneru >Priority: Minor > Attachments: HADOOP-10054.001.patch > > > ViewFsFileStatus.toString is broken. Following code snippet : > {code} > FileStatus stat= somefunc(); // somefunc() returns an instance of > ViewFsFileStatus > System.out.println("path:" + stat.getPath()); > System.out.println(stat.toString()); > {code} > produces the output: > {code} > path:viewfs://x.com/user/X/tmp-48 > ViewFsFileStatus{path=null; isDirectory=false; length=0; replication=0; > blocksize=0; modification_time=0; access_time=0; owner=; group=; > permission=rw-rw-rw-; isSymlink=false} > {code} > Note that "path=null" is not correct. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-10054) ViewFsFileStatus.toString() is broken
[ https://issues.apache.org/jira/browse/HADOOP-10054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-10054: Attachment: HADOOP-10054.001.patch > ViewFsFileStatus.toString() is broken > - > > Key: HADOOP-10054 > URL: https://issues.apache.org/jira/browse/HADOOP-10054 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.0.5-alpha >Reporter: Paul Han >Assignee: Hanisha Koneru >Priority: Minor > Attachments: HADOOP-10054.001.patch > > > ViewFsFileStatus.toString is broken. Following code snippet : > {code} > FileStatus stat= somefunc(); // somefunc() returns an instance of > ViewFsFileStatus > System.out.println("path:" + stat.getPath()); > System.out.println(stat.toString()); > {code} > produces the output: > {code} > path:viewfs://x.com/user/X/tmp-48 > ViewFsFileStatus{path=null; isDirectory=false; length=0; replication=0; > blocksize=0; modification_time=0; access_time=0; owner=; group=; > permission=rw-rw-rw-; isSymlink=false} > {code} > Note that "path=null" is not correct. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15046) Document Apache Hadoop does not support Java 9 in BUILDING.txt
[ https://issues.apache.org/jira/browse/HADOOP-15046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-15046: Attachment: HADOOP-15046-branch-2.001.patch HADOOP-15046.001.patch > Document Apache Hadoop does not support Java 9 in BUILDING.txt > -- > > Key: HADOOP-15046 > URL: https://issues.apache.org/jira/browse/HADOOP-15046 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Reporter: Akira Ajisaka >Assignee: Hanisha Koneru > Labels: newbie > Attachments: HADOOP-15046-branch-2.001.patch, HADOOP-15046.001.patch > > > Now the java version is documented as "JDK 1.8+" or "JDK 1.7+", we should > update this to "JDK 1.8" or "JDK 1.7 or 1.8" to exclude Java 9. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-15046) Document Apache Hadoop does not support Java 9 in BUILDING.txt
[ https://issues.apache.org/jira/browse/HADOOP-15046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru reassigned HADOOP-15046: --- Assignee: Hanisha Koneru > Document Apache Hadoop does not support Java 9 in BUILDING.txt > -- > > Key: HADOOP-15046 > URL: https://issues.apache.org/jira/browse/HADOOP-15046 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Reporter: Akira Ajisaka >Assignee: Hanisha Koneru > Labels: newbie > > Now the java version is documented as "JDK 1.8+" or "JDK 1.7+", we should > update this to "JDK 1.8" or "JDK 1.7 or 1.8" to exclude Java 9. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14954) MetricsSystemImpl#init should increment refCount when already initialized
[ https://issues.apache.org/jira/browse/HADOOP-14954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206603#comment-16206603 ] Hanisha Koneru commented on HADOOP-14954: - Thanks for the fix, [~jzhuge]. LGTM. +1 (non-binding). > MetricsSystemImpl#init should increment refCount when already initialized > - > > Key: HADOOP-14954 > URL: https://issues.apache.org/jira/browse/HADOOP-14954 > Project: Hadoop Common > Issue Type: Bug > Components: metrics >Affects Versions: 2.7.0 >Reporter: John Zhuge >Priority: Minor > Attachments: HADOOP-14954.001.patch > > > {{++refCount}} here in {{init}} should be symmetric to {{--refCount}} in > {{shutdown}}. > {code:java} > public synchronized MetricsSystem init(String prefix) { > if (monitoring && !DefaultMetricsSystem.inMiniClusterMode()) { > LOG.warn(this.prefix +" metrics system already initialized!"); > return this; > } > this.prefix = checkNotNull(prefix, "prefix"); > ++refCount; > {code} > Move {{++refCount}} to the beginning of this method. 
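The symmetry being reviewed above can be sketched with a toy counter class (not the real {{MetricsSystemImpl}}, just the reference-counting behavior): once {{++refCount}} sits at the top of {{init}}, every {{init}} — including the already-initialized early return — is balanced by a {{--refCount}} in {{shutdown}}.

```java
// Toy sketch of the init/shutdown refCount symmetry discussed above.
class MetricsSystemSketch {
    private int refCount = 0;
    private boolean monitoring = false;

    synchronized MetricsSystemSketch init(String prefix) {
        ++refCount;            // moved to the top: counts every init() call
        if (monitoring) {
            return this;       // already initialized, but still counted
        }
        monitoring = true;
        return this;
    }

    synchronized boolean shutdown() {
        if (--refCount > 0) {
            return false;      // other users still hold a reference
        }
        monitoring = false;
        return true;           // last reference released: really shut down
    }

    synchronized int refCount() { return refCount; }
}
```

Two {{init}} calls followed by two {{shutdown}} calls now land back at zero, which is exactly the invariant the patch restores.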
[jira] [Commented] (HADOOP-14902) LoadGenerator#genFile write close timing is incorrectly calculated
[ https://issues.apache.org/jira/browse/HADOOP-14902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16184997#comment-16184997 ] Hanisha Koneru commented on HADOOP-14902: - Thanks [~jlowe] for reviewing and committing the patch. > LoadGenerator#genFile write close timing is incorrectly calculated > -- > > Key: HADOOP-14902 > URL: https://issues.apache.org/jira/browse/HADOOP-14902 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 2.4.0 >Reporter: Jason Lowe >Assignee: Hanisha Koneru > Fix For: 2.9.0, 2.8.3, 2.7.5, 3.0.0 > > Attachments: HADOOP-14902.001.patch, HADOOP-14902.002.patch, > HADOOP-14902.003.patch > > > LoadGenerator#genFile's write close timing code looks like the following: > {code} > startTime = Time.now(); > executionTime[WRITE_CLOSE] += (Time.now() - startTime); > {code} > That code will generate a zero (or near zero) write close timing since it > isn't actually closing the file in-between timestamp lookups. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14902) LoadGenerator#genFile write close timing is incorrectly calculated
[ https://issues.apache.org/jira/browse/HADOOP-14902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-14902: Attachment: HADOOP-14902.003.patch Thanks for the review [~jlowe]. Updated patch v03 to check for null {{out}} before closing in finally block. > LoadGenerator#genFile write close timing is incorrectly calculated > -- > > Key: HADOOP-14902 > URL: https://issues.apache.org/jira/browse/HADOOP-14902 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 2.4.0 >Reporter: Jason Lowe >Assignee: Hanisha Koneru > Attachments: HADOOP-14902.001.patch, HADOOP-14902.002.patch, > HADOOP-14902.003.patch > > > LoadGenerator#genFile's write close timing code looks like the following: > {code} > startTime = Time.now(); > executionTime[WRITE_CLOSE] += (Time.now() - startTime); > {code} > That code will generate a zero (or near zero) write close timing since it > isn't actually closing the file in-between timestamp lookups. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14902) LoadGenerator#genFile write close timing is incorrectly calculated
[ https://issues.apache.org/jira/browse/HADOOP-14902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-14902: Attachment: HADOOP-14902.002.patch Thanks for the review, [~jlowe]. I have updated the patch. > LoadGenerator#genFile write close timing is incorrectly calculated > -- > > Key: HADOOP-14902 > URL: https://issues.apache.org/jira/browse/HADOOP-14902 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 2.4.0 >Reporter: Jason Lowe >Assignee: Hanisha Koneru > Attachments: HADOOP-14902.001.patch, HADOOP-14902.002.patch > > > LoadGenerator#genFile's write close timing code looks like the following: > {code} > startTime = Time.now(); > executionTime[WRITE_CLOSE] += (Time.now() - startTime); > {code} > That code will generate a zero (or near zero) write close timing since it > isn't actually closing the file in-between timestamp lookups. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14902) LoadGenerator#genFile write close timing is incorrectly calculated
[ https://issues.apache.org/jira/browse/HADOOP-14902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-14902: Status: Patch Available (was: Open) > LoadGenerator#genFile write close timing is incorrectly calculated > -- > > Key: HADOOP-14902 > URL: https://issues.apache.org/jira/browse/HADOOP-14902 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 2.4.0 >Reporter: Jason Lowe >Assignee: Hanisha Koneru > Attachments: HADOOP-14902.001.patch > > > LoadGenerator#genFile's write close timing code looks like the following: > {code} > startTime = Time.now(); > executionTime[WRITE_CLOSE] += (Time.now() - startTime); > {code} > That code will generate a zero (or near zero) write close timing since it > isn't actually closing the file in-between timestamp lookups. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14902) LoadGenerator#genFile write close timing is incorrectly calculated
[ https://issues.apache.org/jira/browse/HADOOP-14902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-14902: Attachment: HADOOP-14902.001.patch Attached a patch which would attempt to _close_ the OutputStream and add the close time to metrics only if the _close_ is successful. > LoadGenerator#genFile write close timing is incorrectly calculated > -- > > Key: HADOOP-14902 > URL: https://issues.apache.org/jira/browse/HADOOP-14902 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 2.4.0 >Reporter: Jason Lowe >Assignee: Hanisha Koneru > Attachments: HADOOP-14902.001.patch > > > LoadGenerator#genFile's write close timing code looks like the following: > {code} > startTime = Time.now(); > executionTime[WRITE_CLOSE] += (Time.now() - startTime); > {code} > That code will generate a zero (or near zero) write close timing since it > isn't actually closing the file in-between timestamp lookups. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
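The approach the patch describes — put the actual close() call between the two timestamps, and add to the metric only when close() returns normally — can be sketched as follows (illustrative names; the real LoadGenerator uses Hadoop's Time utility rather than System.nanoTime):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical sketch of the corrected write-close timing. The original bug
// took both timestamps back-to-back with no close() in between, so the
// recorded duration was always (near) zero.
public class CloseTimingSketch {
    static final int WRITE_CLOSE = 0;
    static long[] executionTime = new long[1];

    static void timedClose(OutputStream out) throws IOException {
        long startTime = System.nanoTime();
        out.close();                                       // the work being measured
        executionTime[WRITE_CLOSE] += System.nanoTime() - startTime;
        // not reached if close() threw, so a failed close never pollutes the metric
    }

    static long demo() {
        try {
            timedClose(new ByteArrayOutputStream());
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return executionTime[WRITE_CLOSE];
    }

    public static void main(String[] args) {
        System.out.println(demo() >= 0); // true
    }
}
```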
[jira] [Updated] (HADOOP-14901) ReuseObjectMapper in Hadoop Common
[ https://issues.apache.org/jira/browse/HADOOP-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-14901: Attachment: HADOOP-14901-branch-2.002.patch > ReuseObjectMapper in Hadoop Common > -- > > Key: HADOOP-14901 > URL: https://issues.apache.org/jira/browse/HADOOP-14901 > Project: Hadoop Common > Issue Type: Bug >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Minor > Attachments: HADOOP-14901.001.patch, HADOOP-14901-branch-2.001.patch, > HADOOP-14901-branch-2.002.patch > > > It is recommended to reuse ObjectMapper, if possible, for better performance. > We can also use ObjectReader or ObjectWriter to replace the ObjectMapper in > some places: they are straightforward and thread safe. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14901) ReuseObjectMapper in Hadoop Common
[ https://issues.apache.org/jira/browse/HADOOP-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-14901: Attachment: HADOOP-14901-branch-2.001.patch Thanks [~anu]. Fixed the typo. > ReuseObjectMapper in Hadoop Common > -- > > Key: HADOOP-14901 > URL: https://issues.apache.org/jira/browse/HADOOP-14901 > Project: Hadoop Common > Issue Type: Bug >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Minor > Attachments: HADOOP-14901.001.patch, HADOOP-14901-branch-2.001.patch > > > It is recommended to reuse ObjectMapper, if possible, for better performance. > We can also use ObjectReader or ObjectWriter to replace the ObjectMapper in > some places: they are straightforward and thread safe. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14901) ReuseObjectMapper in Hadoop Common
[ https://issues.apache.org/jira/browse/HADOOP-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-14901: Attachment: (was: HADOOP-14901-brnach-2.001.patch) > ReuseObjectMapper in Hadoop Common > -- > > Key: HADOOP-14901 > URL: https://issues.apache.org/jira/browse/HADOOP-14901 > Project: Hadoop Common > Issue Type: Bug >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Minor > Attachments: HADOOP-14901.001.patch > > > It is recommended to reuse ObjectMapper, if possible, for better performance. > We can also use ObjectReader or ObjectWriter to replace the ObjectMapper in > some places: they are straightforward and thread safe. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14901) ReuseObjectMapper in Hadoop Common
[ https://issues.apache.org/jira/browse/HADOOP-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-14901: Status: Patch Available (was: Reopened) > ReuseObjectMapper in Hadoop Common > -- > > Key: HADOOP-14901 > URL: https://issues.apache.org/jira/browse/HADOOP-14901 > Project: Hadoop Common > Issue Type: Bug >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Minor > Attachments: HADOOP-14901.001.patch, HADOOP-14901-brnach-2.001.patch > > > It is recommended to reuse ObjectMapper, if possible, for better performance. > We can also use ObjectReader or ObjectWriter to replace the ObjectMapper in > some places: they are straightforward and thread safe. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14901) ReuseObjectMapper in Hadoop Common
[ https://issues.apache.org/jira/browse/HADOOP-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-14901: Attachment: HADOOP-14901-brnach-2.001.patch Thanks [~anu] for reviewing and committing the patch. I have uploaded the patch for branch-2. > ReuseObjectMapper in Hadoop Common > -- > > Key: HADOOP-14901 > URL: https://issues.apache.org/jira/browse/HADOOP-14901 > Project: Hadoop Common > Issue Type: Bug >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Minor > Attachments: HADOOP-14901.001.patch, HADOOP-14901-brnach-2.001.patch > > > It is recommended to reuse ObjectMapper, if possible, for better performance. > We can also use ObjectReader or ObjectWriter to replace the ObjectMapper in > some places: they are straightforward and thread safe. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Reopened] (HADOOP-14901) ReuseObjectMapper in Hadoop Common
[ https://issues.apache.org/jira/browse/HADOOP-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru reopened HADOOP-14901: - Patch for branch-2 > ReuseObjectMapper in Hadoop Common > -- > > Key: HADOOP-14901 > URL: https://issues.apache.org/jira/browse/HADOOP-14901 > Project: Hadoop Common > Issue Type: Bug >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Minor > Attachments: HADOOP-14901.001.patch, HADOOP-14901-brnach-2.001.patch > > > It is recommended to reuse ObjectMapper, if possible, for better performance. > We can also use ObjectReader or ObjectWriter to replace the ObjectMapper in > some places: they are straightforward and thread safe. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14902) LoadGenerator#genFile write close timing is incorrectly calculated
[ https://issues.apache.org/jira/browse/HADOOP-14902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177115#comment-16177115 ] Hanisha Koneru commented on HADOOP-14902: - Thanks for reporting this bug, [~jlowe]. As per your [comment|https://issues.apache.org/jira/browse/HADOOP-14881?focusedCommentId=16177074=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16177074] in HADOOP-14881, I agree that we should not do double close as a norm. And I think we should not update the metric if close fails. The metric tracks the execution time for close operation. If we add the time when exception is thrown, it would pollute the metric. What do you think? > LoadGenerator#genFile write close timing is incorrectly calculated > -- > > Key: HADOOP-14902 > URL: https://issues.apache.org/jira/browse/HADOOP-14902 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 2.4.0 >Reporter: Jason Lowe >Assignee: Hanisha Koneru > > LoadGenerator#genFile's write close timing code looks like the following: > {code} > startTime = Time.now(); > executionTime[WRITE_CLOSE] += (Time.now() - startTime); > {code} > That code will generate a zero (or near zero) write close timing since it > isn't actually closing the file in-between timestamp lookups. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-14902) LoadGenerator#genFile write close timing is incorrectly calculated
[ https://issues.apache.org/jira/browse/HADOOP-14902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru reassigned HADOOP-14902: --- Assignee: Hanisha Koneru > LoadGenerator#genFile write close timing is incorrectly calculated > -- > > Key: HADOOP-14902 > URL: https://issues.apache.org/jira/browse/HADOOP-14902 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 2.4.0 >Reporter: Jason Lowe >Assignee: Hanisha Koneru > > LoadGenerator#genFile's write close timing code looks like the following: > {code} > startTime = Time.now(); > executionTime[WRITE_CLOSE] += (Time.now() - startTime); > {code} > That code will generate a zero (or near zero) write close timing since it > isn't actually closing the file in-between timestamp lookups. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14901) ReuseObjectMapper in Hadoop Common
[ https://issues.apache.org/jira/browse/HADOOP-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-14901: Attachment: HADOOP-14901.001.patch > ReuseObjectMapper in Hadoop Common > -- > > Key: HADOOP-14901 > URL: https://issues.apache.org/jira/browse/HADOOP-14901 > Project: Hadoop Common > Issue Type: Bug >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Minor > Attachments: HADOOP-14901.001.patch > > > It is recommended to reuse ObjectMapper, if possible, for better performance. > We can also use ObjectReader or ObjectWriter to replace the ObjectMapper in > some places: they are straightforward and thread safe. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14901) ReuseObjectMapper in Hadoop Common
[ https://issues.apache.org/jira/browse/HADOOP-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-14901: Status: Patch Available (was: Open) > ReuseObjectMapper in Hadoop Common > -- > > Key: HADOOP-14901 > URL: https://issues.apache.org/jira/browse/HADOOP-14901 > Project: Hadoop Common > Issue Type: Bug >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Minor > Attachments: HADOOP-14901.001.patch > > > It is recommended to reuse ObjectMapper, if possible, for better performance. > We can also use ObjectReader or ObjectWriter to replace the ObjectMapper in > some places: they are straightforward and thread safe. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14901) ReuseObjectMapper in Hadoop Common
Hanisha Koneru created HADOOP-14901: --- Summary: ReuseObjectMapper in Hadoop Common Key: HADOOP-14901 URL: https://issues.apache.org/jira/browse/HADOOP-14901 Project: Hadoop Common Issue Type: Bug Reporter: Hanisha Koneru Assignee: Hanisha Koneru Priority: Minor It is recommended to reuse ObjectMapper, if possible, for better performance. We can also use ObjectReader or ObjectWriter to replace the ObjectMapper in some places: they are straightforward and thread safe. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
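Jackson itself is not needed to illustrate the recommended pattern; a stand-in codec class shows the create-once-and-share shape (all names below are hypothetical):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-in for Jackson's ObjectMapper: expensive to construct,
// thread-safe to share once configured. The point of the JIRA is the pattern,
// not this toy class: build the mapper once, in a static final field, and
// reuse it instead of constructing a new mapper per call.
public class SharedCodecSketch {
    static final AtomicInteger constructions = new AtomicInteger();

    static final class Codec {                        // stands in for ObjectMapper
        Codec() { constructions.incrementAndGet(); }
        String write(String value) { return "\"" + value + "\""; }
    }

    private static final Codec SHARED = new Codec();  // created exactly once

    static String toJson(String value) { return SHARED.write(value); }

    public static void main(String[] args) {
        toJson("a");
        toJson("b");
        System.out.println(constructions.get());      // 1: both calls reused SHARED
    }
}
```

With Jackson the analogous shape is a single shared ObjectMapper held in a static final field; where only reading or writing is needed, an ObjectReader or ObjectWriter obtained from that mapper is a straightforward, thread-safe replacement, as the issue description notes.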
[jira] [Commented] (HADOOP-14827) Allow StopWatch to accept a Timer parameter for tests
[ https://issues.apache.org/jira/browse/HADOOP-14827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155769#comment-16155769 ] Hanisha Koneru commented on HADOOP-14827: - Thanks for the improvement, [~xkrogen]. The patch LGTM. TestZKFailoverController and TestShellBasedUnixGroupsMapping pass locally for me too. +1 (non-binding). > Allow StopWatch to accept a Timer parameter for tests > - > > Key: HADOOP-14827 > URL: https://issues.apache.org/jira/browse/HADOOP-14827 > Project: Hadoop Common > Issue Type: Improvement > Components: common, test >Reporter: Erik Krogen >Assignee: Erik Krogen >Priority: Minor > Attachments: HADOOP-14827.000.patch > > > {{StopWatch}} should optionally accept a {{Timer}} parameter rather than > directly using {{Time}} so that its behavior can be controlled during tests. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
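The testability idea behind HADOOP-14827 — injecting a controllable time source instead of calling the static clock directly — can be sketched without Hadoop classes (a LongSupplier stands in for Hadoop's Timer; all names are illustrative):

```java
import java.util.function.LongSupplier;

// Hypothetical sketch of a StopWatch that accepts a clock. In tests the clock
// can be a fake that the test advances by hand, so elapsed times are exact
// and the test never has to sleep or tolerate scheduling jitter.
public class FakeClockStopWatch {
    private final LongSupplier nanoClock;
    private long startNanos;

    FakeClockStopWatch(LongSupplier nanoClock) { this.nanoClock = nanoClock; }

    FakeClockStopWatch start() { startNanos = nanoClock.getAsLong(); return this; }
    long elapsedNanos()       { return nanoClock.getAsLong() - startNanos; }

    public static void main(String[] args) {
        long[] fakeNow = {0L};                    // test-controlled time source
        FakeClockStopWatch sw = new FakeClockStopWatch(() -> fakeNow[0]).start();
        fakeNow[0] = 5_000_000L;                  // "advance" time by 5 ms
        System.out.println(sw.elapsedNanos());    // 5000000, deterministically
    }
}
```

Production code would pass a clock backed by the real monotonic timer; only tests substitute the fake.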
[jira] [Assigned] (HADOOP-14806) "BlockVerificationFailures" and "BlocksVerified" show up as 0 in Datanode JMX
[ https://issues.apache.org/jira/browse/HADOOP-14806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru reassigned HADOOP-14806: --- Assignee: Hanisha Koneru > "BlockVerificationFailures" and "BlocksVerified" show up as 0 in Datanode JMX > - > > Key: HADOOP-14806 > URL: https://issues.apache.org/jira/browse/HADOOP-14806 > Project: Hadoop Common > Issue Type: Bug > Components: metrics >Affects Versions: 2.7.1 >Reporter: Sai Nukavarapu >Assignee: Hanisha Koneru > > "BlockVerificationFailures" and "BlocksVerified" show up as 0 in Datanode JMX > Looking at the code, i see below description. > {noformat} > `BlockVerificationFailures` | Total number of verifications failures | > `BlocksVerified` | Total number of blocks verified | > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14732) ProtobufRpcEngine should use Time.monotonicNow to measure durations
[ https://issues.apache.org/jira/browse/HADOOP-14732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16126414#comment-16126414 ] Hanisha Koneru commented on HADOOP-14732: - Thanks for the review [~arpitagarwal]. The failing unit test TestRPC passes locally. > ProtobufRpcEngine should use Time.monotonicNow to measure durations > --- > > Key: HADOOP-14732 > URL: https://issues.apache.org/jira/browse/HADOOP-14732 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Attachments: HADOOP-14732.001.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14732) ProtobufRpcEngine should use Time.monotonicNow to measure durations
[ https://issues.apache.org/jira/browse/HADOOP-14732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-14732: Attachment: HADOOP-14732.001.patch > ProtobufRpcEngine should use Time.monotonicNow to measure durations > --- > > Key: HADOOP-14732 > URL: https://issues.apache.org/jira/browse/HADOOP-14732 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Attachments: HADOOP-14732.001.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14732) ProtobufRpcEngine should use Time.monotonicNow to measure durations
Hanisha Koneru created HADOOP-14732: --- Summary: ProtobufRpcEngine should use Time.monotonicNow to measure durations Key: HADOOP-14732 URL: https://issues.apache.org/jira/browse/HADOOP-14732 Project: Hadoop Common Issue Type: Sub-task Reporter: Hanisha Koneru Assignee: Hanisha Koneru -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
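The motivation for Time.monotonicNow can be shown with plain JDK calls: System.nanoTime (which monotonicNow wraps) never runs backwards, while wall-clock time can jump on NTP adjustments and produce negative "durations" (hypothetical sketch, not the ProtobufRpcEngine code):

```java
// Hypothetical illustration of measuring a duration with a monotonic clock.
// System.currentTimeMillis tracks wall time and can move backwards when the
// system clock is corrected; System.nanoTime is monotonic, so an elapsed
// time computed from it is always non-negative for completed work.
public class MonotonicDurationSketch {
    static long measureNanos(Runnable work) {
        long start = System.nanoTime();   // monotonic: immune to clock resets
        work.run();
        return System.nanoTime() - start; // always >= 0 once work completes
    }

    public static void main(String[] args) {
        long d = measureNanos(() -> { for (int i = 0; i < 1000; i++) { } });
        System.out.println(d >= 0);       // true
    }
}
```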
[jira] [Commented] (HADOOP-14543) Should use getAversion() while setting the zkacl
[ https://issues.apache.org/jira/browse/HADOOP-14543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16056174#comment-16056174 ] Hanisha Koneru commented on HADOOP-14543: - [~brahmareddy], did you see any error/ exception for this? The _Zookeeper#setACL()_ method sets the version number of the SetACLRequest. So ideally, we should be setting the version only and not the aversion. {code} request.setAcl(acl); request.setVersion(version); {code} If the client doesn't get the correct version number of the data from the ACL, the update/ delete operation would fail. > Should use getAversion() while setting the zkacl > > > Key: HADOOP-14543 > URL: https://issues.apache.org/jira/browse/HADOOP-14543 > Project: Hadoop Common > Issue Type: Bug >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula > Attachments: HADOOP-14543.patch > > > while setting the zkacl we used {color:red}{{getVersion()}}{color} which is > dataVersion,Ideally we should use {{{color:#14892c}getAversion{color}()}}. If > there is any acl changes( i.e relam change/..) ,we set the ACL with > dataversion which will cause {color:#d04437}BADVersion {color}and > {color:#d04437}*process will not start*{color}. See > [here|https://issues.apache.org/jira/browse/HDFS-11403?focusedCommentId=16051804=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16051804] > {{zkClient.setACL(path, zkAcl, stat.getVersion());}} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
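The version/aversion distinction discussed above can be modeled in a few lines (a hypothetical in-memory ZNode, not the ZooKeeper client API):

```java
// Hypothetical model of a ZNode's two version counters. In ZooKeeper,
// Stat#getVersion() counts data changes and Stat#getAversion() counts ACL
// changes; setACL's expected-version check is against the ACL counter, so
// passing getVersion() fails with BadVersion once the two counters diverge.
public class ZNodeVersionModel {
    int dataVersion = 0;   // what Stat#getVersion() would report
    int aclVersion = 0;    // what Stat#getAversion() would report
    String acl = "world:anyone";

    void setData() { dataVersion++; }   // data writes bump only dataVersion

    void setAcl(String newAcl, int expectedAclVersion) {
        if (expectedAclVersion != aclVersion) {
            throw new IllegalStateException("BadVersion"); // the failure seen in HADOOP-14543
        }
        acl = newAcl;
        aclVersion++;
    }

    public static void main(String[] args) {
        ZNodeVersionModel zn = new ZNodeVersionModel();
        zn.setData();
        zn.setData();                                // dataVersion=2, aclVersion=0
        zn.setAcl("sasl:nn:cdrwa", zn.aclVersion);   // correct: pass the aversion
        try {
            zn.setAcl("world:anyone", zn.dataVersion); // 2 != aclVersion (now 1)
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());      // BadVersion
        }
    }
}
```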
[jira] [Updated] (HADOOP-14503) Make RollingAverages a mutable metric
[ https://issues.apache.org/jira/browse/HADOOP-14503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-14503: Attachment: HADOOP-14503-branch-2.001.patch > Make RollingAverages a mutable metric > - > > Key: HADOOP-14503 > URL: https://issues.apache.org/jira/browse/HADOOP-14503 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Fix For: 3.0.0-alpha4 > > Attachments: HADOOP-14503.001.patch, HADOOP-14503.002.patch, > HADOOP-14503.003.patch, HADOOP-14503.004.patch, HADOOP-14503.005.patch, > HADOOP-14503.006.patch, HADOOP-14503.007.patch, > HADOOP-14503-branch-2.001.patch > > > RollingAverages metric extends on MutableRatesWithAggregation metric and > maintains a group of rolling average metrics. This class should be allowed to > register as a metric with the MetricSystem. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14503) Make RollingAverages a mutable metric
[ https://issues.apache.org/jira/browse/HADOOP-14503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16048269#comment-16048269 ] Hanisha Koneru commented on HADOOP-14503: - Thanks for committing the patch, [~arpitagarwal]. I will post a branch-2 patch soon. > Make RollingAverages a mutable metric > - > > Key: HADOOP-14503 > URL: https://issues.apache.org/jira/browse/HADOOP-14503 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Fix For: 3.0.0-alpha4 > > Attachments: HADOOP-14503.001.patch, HADOOP-14503.002.patch, > HADOOP-14503.003.patch, HADOOP-14503.004.patch, HADOOP-14503.005.patch, > HADOOP-14503.006.patch, HADOOP-14503.007.patch > > > RollingAverages metric extends on MutableRatesWithAggregation metric and > maintains a group of rolling average metrics. This class should be allowed to > register as a metric with the MetricSystem. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14503) Make RollingAverages a mutable metric
[ https://issues.apache.org/jira/browse/HADOOP-14503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-14503: Attachment: HADOOP-14503.007.patch Thanks [~arpitagarwal] for the review. I have addressed your comments in patch v07. The test failures are unrelated and pass locally. > Make RollingAverages a mutable metric > - > > Key: HADOOP-14503 > URL: https://issues.apache.org/jira/browse/HADOOP-14503 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Attachments: HADOOP-14503.001.patch, HADOOP-14503.002.patch, > HADOOP-14503.003.patch, HADOOP-14503.004.patch, HADOOP-14503.005.patch, > HADOOP-14503.006.patch, HADOOP-14503.007.patch > > > RollingAverages metric extends on MutableRatesWithAggregation metric and > maintains a group of rolling average metrics. This class should be allowed to > register as a metric with the MetricSystem. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14503) Make RollingAverages a mutable metric
[ https://issues.apache.org/jira/browse/HADOOP-14503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-14503: Attachment: HADOOP-14503.006.patch > Make RollingAverages a mutable metric > - > > Key: HADOOP-14503 > URL: https://issues.apache.org/jira/browse/HADOOP-14503 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Attachments: HADOOP-14503.001.patch, HADOOP-14503.002.patch, > HADOOP-14503.003.patch, HADOOP-14503.004.patch, HADOOP-14503.005.patch, > HADOOP-14503.006.patch > > > RollingAverages metric extends on MutableRatesWithAggregation metric and > maintains a group of rolling average metrics. This class should be allowed to > register as a metric with the MetricSystem. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14503) Make RollingAverages a mutable metric
[ https://issues.apache.org/jira/browse/HADOOP-14503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-14503: Attachment: HADOOP-14503.005.patch In patch v05, removed the init method. The default window size and num windows will be used when using the MutableRollingAverages as a metric registered with the metric system. Thanks [~arpitagarwal] for the offline discussion. > Make RollingAverages a mutable metric > - > > Key: HADOOP-14503 > URL: https://issues.apache.org/jira/browse/HADOOP-14503 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Attachments: HADOOP-14503.001.patch, HADOOP-14503.002.patch, > HADOOP-14503.003.patch, HADOOP-14503.004.patch, HADOOP-14503.005.patch > > > RollingAverages metric extends on MutableRatesWithAggregation metric and > maintains a group of rolling average metrics. This class should be allowed to > register as a metric with the MetricSystem. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14503) Make RollingAverages a mutable metric
[ https://issues.apache.org/jira/browse/HADOOP-14503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-14503: Attachment: HADOOP-14503.004.patch Thanks for the review [~arpitagarwal]. Updated patch v04 to address the comments. And fixed failing unit tests. > Make RollingAverages a mutable metric > - > > Key: HADOOP-14503 > URL: https://issues.apache.org/jira/browse/HADOOP-14503 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Attachments: HADOOP-14503.001.patch, HADOOP-14503.002.patch, > HADOOP-14503.003.patch, HADOOP-14503.004.patch > > > RollingAverages metric extends on MutableRatesWithAggregation metric and > maintains a group of rolling average metrics. This class should be allowed to > register as a metric with the MetricSystem. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14503) Make RollingAverages a mutable metric
[ https://issues.apache.org/jira/browse/HADOOP-14503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-14503: Attachment: HADOOP-14503.003.patch Updated patch v03 with the following change: Discarding any existing samples if RollingAverages parameters (Window size or Num Windows) is changed. Thanks [~arpitagarwal] for pointing it out. > Make RollingAverages a mutable metric > - > > Key: HADOOP-14503 > URL: https://issues.apache.org/jira/browse/HADOOP-14503 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Attachments: HADOOP-14503.001.patch, HADOOP-14503.002.patch, > HADOOP-14503.003.patch > > > RollingAverages metric extends on MutableRatesWithAggregation metric and > maintains a group of rolling average metrics. This class should be allowed to > register as a metric with the MetricSystem. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
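The sample-discarding behavior described for patch v03 can be sketched with a toy rolling window (hypothetical names, not the real RollingAverages):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical toy of the design choice above: if the rolling-window
// parameters change after samples were collected, the old samples are
// discarded rather than reinterpreted under the new window geometry.
public class RollingWindowSketch {
    private int numWindows;
    private final Deque<Double> samples = new ArrayDeque<>();

    RollingWindowSketch(int numWindows) { this.numWindows = numWindows; }

    synchronized void add(double v) {
        samples.addLast(v);
        while (samples.size() > numWindows) samples.removeFirst();
    }

    synchronized void setNumWindows(int newNumWindows) {
        if (newNumWindows != numWindows) {
            samples.clear();              // old samples are no longer comparable
            numWindows = newNumWindows;
        }
    }

    synchronized int sampleCount() { return samples.size(); }

    public static void main(String[] args) {
        RollingWindowSketch rw = new RollingWindowSketch(3);
        rw.add(1); rw.add(2); rw.add(3);
        rw.setNumWindows(5);              // parameter change discards samples
        System.out.println(rw.sampleCount()); // 0
    }
}
```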
[jira] [Updated] (HADOOP-14503) Make RollingAverages a mutable metric
[ https://issues.apache.org/jira/browse/HADOOP-14503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-14503: Attachment: HADOOP-14503.002.patch Thanks for the review, [~arpitagarwal]. I have addressed your comments in patch v02. The unit test failures are unrelated to the patch. > Make RollingAverages a mutable metric > - > > Key: HADOOP-14503 > URL: https://issues.apache.org/jira/browse/HADOOP-14503 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Attachments: HADOOP-14503.001.patch, HADOOP-14503.002.patch > > > The RollingAverages metric extends the MutableRatesWithAggregation metric and > maintains a group of rolling average metrics. This class should be allowed to > register as a metric with the MetricsSystem.
[jira] [Updated] (HADOOP-14503) Make RollingAverages a mutable metric
[ https://issues.apache.org/jira/browse/HADOOP-14503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-14503: Issue Type: Improvement (was: Bug) > Make RollingAverages a mutable metric > - > > Key: HADOOP-14503 > URL: https://issues.apache.org/jira/browse/HADOOP-14503 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Attachments: HADOOP-14503.001.patch > > > The RollingAverages metric extends the MutableRatesWithAggregation metric and > maintains a group of rolling average metrics. This class should be allowed to > register as a metric with the MetricsSystem.
[jira] [Updated] (HADOOP-14503) Make RollingAverages a mutable metric
[ https://issues.apache.org/jira/browse/HADOOP-14503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-14503: Status: Patch Available (was: Open) > Make RollingAverages a mutable metric > - > > Key: HADOOP-14503 > URL: https://issues.apache.org/jira/browse/HADOOP-14503 > Project: Hadoop Common > Issue Type: Bug > Components: common >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Attachments: HADOOP-14503.001.patch > > > The RollingAverages metric extends the MutableRatesWithAggregation metric and > maintains a group of rolling average metrics. This class should be allowed to > register as a metric with the MetricsSystem.
[jira] [Updated] (HADOOP-14503) Make RollingAverages a mutable metric
[ https://issues.apache.org/jira/browse/HADOOP-14503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-14503: Attachment: HADOOP-14503.001.patch > Make RollingAverages a mutable metric > - > > Key: HADOOP-14503 > URL: https://issues.apache.org/jira/browse/HADOOP-14503 > Project: Hadoop Common > Issue Type: Bug > Components: common >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > Attachments: HADOOP-14503.001.patch > > > The RollingAverages metric extends the MutableRatesWithAggregation metric and > maintains a group of rolling average metrics. This class should be allowed to > register as a metric with the MetricsSystem.
[jira] [Updated] (HADOOP-14503) Make RollingAverages a mutable metric
[ https://issues.apache.org/jira/browse/HADOOP-14503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-14503: Summary: Make RollingAverages a mutable metric (was: MutableMetricsFactory should allow RollingAverages field to be added as a metric) > Make RollingAverages a mutable metric > - > > Key: HADOOP-14503 > URL: https://issues.apache.org/jira/browse/HADOOP-14503 > Project: Hadoop Common > Issue Type: Bug > Components: common >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru > > The RollingAverages metric extends the MutableRatesWithAggregation metric and > maintains a group of rolling average metrics. This class should be allowed to > register as a metric with the MetricsSystem.
[jira] [Created] (HADOOP-14503) MutableMetricsFactory should allow RollingAverages field to be added as a metric
Hanisha Koneru created HADOOP-14503: --- Summary: MutableMetricsFactory should allow RollingAverages field to be added as a metric Key: HADOOP-14503 URL: https://issues.apache.org/jira/browse/HADOOP-14503 Project: Hadoop Common Issue Type: Bug Components: common Reporter: Hanisha Koneru Assignee: Hanisha Koneru The RollingAverages metric extends the MutableRatesWithAggregation metric and maintains a group of rolling average metrics. This class should be allowed to register as a metric with the MetricsSystem.
[jira] [Commented] (HADOOP-14456) Modifier 'static' is redundant for inner enums less
[ https://issues.apache.org/jira/browse/HADOOP-14456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16025451#comment-16025451 ] Hanisha Koneru commented on HADOOP-14456: - [~linzhangbing], could you submit the patch again? The findbugs warnings look unrelated to the patch. The patch otherwise LGTM. > Modifier 'static' is redundant for inner enums less > --- > > Key: HADOOP-14456 > URL: https://issues.apache.org/jira/browse/HADOOP-14456 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha3 >Reporter: ZhangBing Lin >Assignee: ZhangBing Lin >Priority: Minor > Attachments: HADOOP-14456.001.patch >
[jira] [Commented] (HADOOP-14229) hadoop.security.auth_to_local example is incorrect in the documentation
[ https://issues.apache.org/jira/browse/HADOOP-14229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15951544#comment-15951544 ] Hanisha Koneru commented on HADOOP-14229: - [~boky01], I verified that the currently suggested settings for hadoop.security.auth_to_local in https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SecureMode.html do not perform the intended action. As you said, the command _hadoop kerbname jhs/host.domain@REALM.TLD_ gives the following result: bq. Name: jhs/host.dom...@realm.tld to jhs/host.dom...@realm.tld whereas the intended result is: bq. Name: jhs/host.dom...@realm.tld to mapred The patch LGTM. > hadoop.security.auth_to_local example is incorrect in the documentation > --- > > Key: HADOOP-14229 > URL: https://issues.apache.org/jira/browse/HADOOP-14229 > Project: Hadoop Common > Issue Type: Bug >Reporter: Andras Bokor >Assignee: Andras Bokor >Priority: Trivial > Attachments: HADOOP-14229.01.patch, HADOOP-14229.02.patch > > > Let's take jhs as an example: > {code}RULE:[2:$1@$0](jhs/.*@.*REALM.TLD)s/.*/mapred/{code} > That means the principal has 2 components (jhs/myhost@REALM). > The second column converts this to jhs@REALM. So the regex will not match on > this, since the regex expects / in the principal. > My suggestion is > {code}RULE:[2:$1](jhs)s/.*/mapred/{code} > https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SecureMode.html
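Why the documented rule never fires can be sketched in a few lines. This is a simplified, hypothetical model of auth_to_local rule evaluation (the `apply` helper and its parameters are illustrative, not Hadoop's actual KerberosName implementation): the rule first formats the principal's components into a short name, then applies the substitution only if the filter regex matches that formatted name.

```java
import java.util.regex.Pattern;

// Hypothetical sketch of how a RULE:[2:format](filter)s/.*/replacement/
// auth_to_local rule behaves; not Hadoop's KerberosName code.
class AuthToLocalDemo {
    // Format the two principal components plus realm with the rule's
    // format string, then substitute only if the filter regex matches.
    static String apply(String comp1, String comp2, String realm,
                        String format, String filter, String replacement) {
        String shortName = format.replace("$1", comp1)
                                 .replace("$2", comp2)
                                 .replace("$0", realm);
        if (Pattern.matches(filter, shortName)) {
            return replacement;
        }
        return null; // rule does not apply
    }

    public static void main(String[] args) {
        // Documented rule: [2:$1@$0](jhs/.*@.*REALM.TLD) formats the
        // principal to "jhs@REALM.TLD", which has no '/', so the filter
        // regex (which expects one) can never match.
        System.out.println(apply("jhs", "host.domain", "REALM.TLD",
                "$1@$0", "jhs/.*@.*REALM\\.TLD", "mapred"));

        // Suggested fix: [2:$1](jhs) formats the principal to "jhs",
        // which the filter matches, so the substitution to "mapred" fires.
        System.out.println(apply("jhs", "host.domain", "REALM.TLD",
                "$1", "jhs", "mapred"));
    }
}
```

This illustrates why dropping `@$0` from the format (or rewriting the filter) is required: the filter is tested against the formatted short name, not against the original principal.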
[jira] [Commented] (HADOOP-14233) Delay construction of PreCondition.check failure message in Configuration#set
[ https://issues.apache.org/jira/browse/HADOOP-14233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15940953#comment-15940953 ] Hanisha Koneru commented on HADOOP-14233: - Thank you [~jeagles]. The patch LGTM. It would be good to follow this practice for logging as well. Passing concatenated strings into a logging method can also incur a needless performance hit because the concatenation will be performed every time the method is called, whether or not the log level is set low enough to show the message. > Delay construction of PreCondition.check failure message in Configuration#set > - > > Key: HADOOP-14233 > URL: https://issues.apache.org/jira/browse/HADOOP-14233 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Jonathan Eagles >Assignee: Jonathan Eagles > Attachments: HADOOP-14233.1.patch > > > The String in the precondition check is constructed prior to failure > detection. Since the normal case is no error, we can gain performance by > delaying the construction of the string until the failure is detected. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
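The eager-vs-lazy distinction behind this optimization can be shown with a minimal, hypothetical helper (modeled loosely on Guava's `Preconditions.checkArgument` varargs overload, not the actual HADOOP-14233 patch): pass the template and arguments separately, and only format them when the check actually fails.

```java
// Hypothetical sketch of delayed failure-message construction;
// not the actual Hadoop/Guava implementation.
class LazyCheck {
    // Eager version: the caller builds the message string on every call,
    // even in the common case where the check passes.
    static void checkEager(boolean ok, String message) {
        if (!ok) throw new IllegalArgumentException(message);
    }

    // Lazy version: the template is only formatted on the failure path,
    // so the success path does no string work at all.
    static void checkLazy(boolean ok, String template, Object... args) {
        if (!ok) throw new IllegalArgumentException(String.format(template, args));
    }

    public static void main(String[] args) {
        // Common case: check passes, no message is ever constructed.
        checkLazy(true, "bad value for %s: %s", "key", "value");

        // Failure case: the message is built only now.
        try {
            checkLazy(false, "bad value for %s: %s", "key", "value");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // prints "bad value for key: value"
        }
    }
}
```

The same idea applies to the logging remark above: SLF4J-style parameterized logging (`log.debug("value: {}", v)`) defers formatting until the level check passes, whereas string concatenation in the call site pays the cost unconditionally.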
[jira] [Updated] (HADOOP-14002) Document -DskipShade property in BUILDING.txt
[ https://issues.apache.org/jira/browse/HADOOP-14002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HADOOP-14002: Attachment: HADOOP-14002.001.patch Thanks [~arpitagarwal] and [~asuresh] for reviewing the patch. I have addressed your comments in patch v01. > Document -DskipShade property in BUILDING.txt > - > > Key: HADOOP-14002 > URL: https://issues.apache.org/jira/browse/HADOOP-14002 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Reporter: Hanisha Koneru >Assignee: Hanisha Koneru >Priority: Minor > Fix For: 3.0.0-alpha2 > > Attachments: HADOOP-14002.000.patch, HADOOP-14002.001.patch > > > HADOOP-13999 added a maven profile to disable client jar shading. This > property should be documented in BUILDING.txt. -- This message was sent by Atlassian JIRA (v6.3.4#6332)