[jira] [Commented] (HDFS-15250) Setting `dfs.client.use.datanode.hostname` to true can crash the system because of unhandled UnresolvedAddressException

2021-04-21 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17326996#comment-17326996
 ] 

Ctest commented on HDFS-15250:
--

Hello [~sodonnell]

Sorry, we didn't keep the stack trace for this issue.

All I remember is that we set `dfs.client.use.datanode.hostname` to true and 
misconfigured the datanode's hostname, which triggered the exception.

I think the system throws the correct exception here, but it probably needs 
to be handled better.
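
For context, here is a minimal sketch of the client-side setup that can hit 
this path (the file path and the idea of an unresolvable registered hostname 
are illustrative, not taken from our original setup):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class HostnameRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new HdfsConfiguration();
    // Dial datanodes by their registered hostname instead of by IP address.
    conf.setBoolean("dfs.client.use.datanode.hostname", true);
    // Assumes fs.defaultFS already points at the cluster. If a datanode
    // registered a hostname this client cannot resolve, the read below
    // reaches SocketChannel.connect(), which throws the unchecked
    // UnresolvedAddressException instead of an IOException.
    try (FileSystem fs = FileSystem.get(conf)) {
      fs.open(new Path("/tmp/some-file")).read();
    }
  }
}
{code}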

 

> Setting `dfs.client.use.datanode.hostname` to true can crash the system 
> because of unhandled UnresolvedAddressException
> ---
>
> Key: HDFS-15250
> URL: https://issues.apache.org/jira/browse/HDFS-15250
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ctest
>Assignee: Ctest
>Priority: Major
> Fix For: 3.2.2, 3.3.1, 3.4.0, 3.1.5
>
> Attachments: HDFS-15250-001.patch, HDFS-15250-002.patch
>
>
> *Problem:*
> `dfs.client.use.datanode.hostname` defaults to false, which means the 
> client connects to the datanode by its IP address rather than by its 
> hostname.
> In `org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.nextTcpPeer`:
>  
> {code:java}
>  try {
>    Peer peer = remotePeerFactory.newConnectedPeer(inetSocketAddress, token,
>    datanode);
>    LOG.trace("nextTcpPeer: created newConnectedPeer {}", peer);
>    return new BlockReaderPeer(peer, false);
>  } catch (IOException e) {
>    LOG.trace("nextTcpPeer: failed to create newConnectedPeer connected to"
>    + "{}", datanode);
>    throw e;
>  }
> {code}
>  
> If `dfs.client.use.datanode.hostname` is false, the client tries to connect 
> via the IP address. If the IP address is invalid and the connection fails, 
> `newConnectedPeer` throws an IOException, which is handled.
> If `dfs.client.use.datanode.hostname` is true, the client tries to connect 
> via the hostname. If the hostname cannot be resolved, `newConnectedPeer` 
> throws UnresolvedAddressException. However, UnresolvedAddressException 
> extends IllegalArgumentException, not IOException, so `nextTcpPeer` does 
> not handle it at all, and the unhandled exception can crash the system.
>  
> *Solution:*
> Since the method already handles an invalid IP address, an unresolvable 
> hostname should be handled as well. One solution is to add the handling 
> logic in `nextTcpPeer`:
> {code:java}
>  } catch (IOException e) {
>    LOG.trace("nextTcpPeer: failed to create newConnectedPeer connected to"
>    + "{}", datanode);
>    throw e;
>  } catch (UnresolvedAddressException e) {
>    ... // handling logic 
>  }{code}
> I am very happy to provide a patch to do this.
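
One plausible shape for the elided handling logic, extending the snippet 
above: log the failure and rethrow it as an IOException so existing callers 
keep treating it as an ordinary connection error. This is a sketch only; the 
message wording is mine and the committed patch may differ:
{code:java}
 } catch (UnresolvedAddressException e) {
   // Sketch: convert the unchecked resolution failure into the checked
   // IOException that callers of nextTcpPeer already handle.
   LOG.trace("nextTcpPeer: failed to resolve address for {}", datanode);
   throw new IOException("Cannot resolve datanode address: " + datanode, e);
 }
{code}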






[jira] [Updated] (HDFS-15250) Setting `dfs.client.use.datanode.hostname` to true can crash the system because of unhandled UnresolvedAddressException

2020-05-09 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HDFS-15250:
-
Attachment: HDFS-15250-002.patch

> Setting `dfs.client.use.datanode.hostname` to true can crash the system 
> because of unhandled UnresolvedAddressException
> ---
>
> Key: HDFS-15250
> URL: https://issues.apache.org/jira/browse/HDFS-15250
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ctest
>Assignee: Ctest
>Priority: Major
> Attachments: HDFS-15250-001.patch, HDFS-15250-002.patch
>
>
> *Problem:*
> `dfs.client.use.datanode.hostname` defaults to false, which means the 
> client connects to the datanode by its IP address rather than by its 
> hostname.
> In `org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.nextTcpPeer`:
>  
> {code:java}
>  try {
>    Peer peer = remotePeerFactory.newConnectedPeer(inetSocketAddress, token,
>    datanode);
>    LOG.trace("nextTcpPeer: created newConnectedPeer {}", peer);
>    return new BlockReaderPeer(peer, false);
>  } catch (IOException e) {
>    LOG.trace("nextTcpPeer: failed to create newConnectedPeer connected to"
>    + "{}", datanode);
>    throw e;
>  }
> {code}
>  
> If `dfs.client.use.datanode.hostname` is false, the client tries to connect 
> via the IP address. If the IP address is invalid and the connection fails, 
> `newConnectedPeer` throws an IOException, which is handled.
> If `dfs.client.use.datanode.hostname` is true, the client tries to connect 
> via the hostname. If the hostname cannot be resolved, `newConnectedPeer` 
> throws UnresolvedAddressException. However, UnresolvedAddressException 
> extends IllegalArgumentException, not IOException, so `nextTcpPeer` does 
> not handle it at all, and the unhandled exception can crash the system.
>  
> *Solution:*
> Since the method already handles an invalid IP address, an unresolvable 
> hostname should be handled as well. One solution is to add the handling 
> logic in `nextTcpPeer`:
> {code:java}
>  } catch (IOException e) {
>    LOG.trace("nextTcpPeer: failed to create newConnectedPeer connected to"
>    + "{}", datanode);
>    throw e;
>  } catch (UnresolvedAddressException e) {
>    ... // handling logic 
>  }{code}
> I am very happy to provide a patch to do this.






[jira] [Updated] (HDFS-15250) Setting `dfs.client.use.datanode.hostname` to true can crash the system because of unhandled UnresolvedAddressException

2020-05-06 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HDFS-15250:
-
Attachment: HDFS-15250-001.patch
Status: Patch Available  (was: Open)

> Setting `dfs.client.use.datanode.hostname` to true can crash the system 
> because of unhandled UnresolvedAddressException
> ---
>
> Key: HDFS-15250
> URL: https://issues.apache.org/jira/browse/HDFS-15250
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ctest
>Assignee: Ctest
>Priority: Major
> Attachments: HDFS-15250-001.patch
>
>
> *Problem:*
> `dfs.client.use.datanode.hostname` defaults to false, which means the 
> client connects to the datanode by its IP address rather than by its 
> hostname.
> In `org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.nextTcpPeer`:
>  
> {code:java}
>  try {
>    Peer peer = remotePeerFactory.newConnectedPeer(inetSocketAddress, token,
>    datanode);
>    LOG.trace("nextTcpPeer: created newConnectedPeer {}", peer);
>    return new BlockReaderPeer(peer, false);
>  } catch (IOException e) {
>    LOG.trace("nextTcpPeer: failed to create newConnectedPeer connected to"
>    + "{}", datanode);
>    throw e;
>  }
> {code}
>  
> If `dfs.client.use.datanode.hostname` is false, the client tries to connect 
> via the IP address. If the IP address is invalid and the connection fails, 
> `newConnectedPeer` throws an IOException, which is handled.
> If `dfs.client.use.datanode.hostname` is true, the client tries to connect 
> via the hostname. If the hostname cannot be resolved, `newConnectedPeer` 
> throws UnresolvedAddressException. However, UnresolvedAddressException 
> extends IllegalArgumentException, not IOException, so `nextTcpPeer` does 
> not handle it at all, and the unhandled exception can crash the system.
>  
> *Solution:*
> Since the method already handles an invalid IP address, an unresolvable 
> hostname should be handled as well. One solution is to add the handling 
> logic in `nextTcpPeer`:
> {code:java}
>  } catch (IOException e) {
>    LOG.trace("nextTcpPeer: failed to create newConnectedPeer connected to"
>    + "{}", datanode);
>    throw e;
>  } catch (UnresolvedAddressException e) {
>    ... // handling logic 
>  }{code}
> I am very happy to provide a patch to do this.






[jira] [Created] (HDFS-15250) Setting `dfs.client.use.datanode.hostname` to true can crash the system because of unhandled UnresolvedAddressException

2020-03-30 Thread Ctest (Jira)
Ctest created HDFS-15250:


 Summary: Setting `dfs.client.use.datanode.hostname` to true can 
crash the system because of unhandled UnresolvedAddressException
 Key: HDFS-15250
 URL: https://issues.apache.org/jira/browse/HDFS-15250
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Ctest


*Problem:*

`dfs.client.use.datanode.hostname` defaults to false, which means the client 
connects to the datanode by its IP address rather than by its hostname.

In `org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.nextTcpPeer`:

 
{code:java}
 try {
   Peer peer = remotePeerFactory.newConnectedPeer(inetSocketAddress, token,
   datanode);
   LOG.trace("nextTcpPeer: created newConnectedPeer {}", peer);
   return new BlockReaderPeer(peer, false);
 } catch (IOException e) {
   LOG.trace("nextTcpPeer: failed to create newConnectedPeer connected to"
   + "{}", datanode);
   throw e;
 }
{code}
 

If `dfs.client.use.datanode.hostname` is false, the client tries to connect 
via the IP address. If the IP address is invalid and the connection fails, 
`newConnectedPeer` throws an IOException, which is handled.

If `dfs.client.use.datanode.hostname` is true, the client tries to connect 
via the hostname. If the hostname cannot be resolved, `newConnectedPeer` 
throws UnresolvedAddressException. However, UnresolvedAddressException 
extends IllegalArgumentException, not IOException, so `nextTcpPeer` does not 
handle it at all, and the unhandled exception can crash the system.

 

*Solution:*

Since the method already handles an invalid IP address, an unresolvable 
hostname should be handled as well. One solution is to add the handling logic 
in `nextTcpPeer`:
{code:java}
 } catch (IOException e) {
   LOG.trace("nextTcpPeer: failed to create newConnectedPeer connected to"
   + "{}", datanode);
   throw e;
 } catch (UnresolvedAddressException e) {
   ... // handling logic 
 }{code}
I am very happy to provide a patch to do this.






[jira] [Commented] (HDFS-15193) Improving the error message for missing `dfs.namenode.rpc-address.$NAMESERVICE`

2020-03-15 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17059883#comment-17059883
 ] 

Ctest commented on HDFS-15193:
--

Hi [~ayushtkn], thank you for your reply.

I have verified that the test failures are not related.

Please let me know how else I can help to get this patch merged.

> Improving the error message for missing 
> `dfs.namenode.rpc-address.$NAMESERVICE`
> ---
>
> Key: HDFS-15193
> URL: https://issues.apache.org/jira/browse/HDFS-15193
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Ctest
>Assignee: Ctest
>Priority: Minor
> Attachments: HDFS-15193-001.patch, HDFS-15193-002.patch, 
> HDFS-15193-003.patch, HDFS-15193-004.patch
>
>
> I set `dfs.nameservices` to a single nameservice (let’s say `ns1` for 
> simplicity) and forgot to set `dfs.namenode.rpc-address.ns1`. 
> Then I got an error message:
> {code:java}
> [ERROR] testNonFederation(org.apache.hadoop.hdfs.tools.TestGetConf)  Time 
> elapsed: 0.195 s <<< ERROR!
> java.io.IOException: Incorrect configuration: namenode address 
> dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
> at 
> org.apache.hadoop.hdfs.DFSUtil.getNNServiceRpcAddressesForCluster(DFSUtil.java:629)
> at 
> org.apache.hadoop.hdfs.tools.TestGetConf.getAddressListFromConf(TestGetConf.java:132)
> at 
> org.apache.hadoop.hdfs.tools.TestGetConf.verifyAddresses(TestGetConf.java:234)
> at 
> org.apache.hadoop.hdfs.tools.TestGetConf.testNonFederation(TestGetConf.java:308)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.lang.Thread.run(Thread.java:748)
> {code}
>  
> The error message above told me that `dfs.namenode.rpc-address` or 
> `dfs.namenode.servicerpc-address` should be set.
> However, the actual reason for the error is that 
> `dfs.namenode.rpc-address.ns1` or `dfs.namenode.servicerpc-address.ns1` is 
> not set.
>  
> *How to improve* 
> I wrote a patch to improve the error message. Here is the improved error 
> message:
> {code:java}
> [ERROR] testNonFederation(org.apache.hadoop.hdfs.tools.TestGetConf)  Time 
> elapsed: 0.195 s <<< ERROR!
> java.io.IOException: Incorrect configuration: namenode address 
> dfs.namenode.servicerpc-address[.$NAMESERVICE] or 
> dfs.namenode.rpc-address[.$NAMESERVICE] is not configured.
> {code}
> From this message, users can immediately tell that they should set 
> `dfs.namenode.rpc-address.ns1` or `dfs.namenode.servicerpc-address.ns1`.
>  
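
As a quick illustration of the keys the improved message points at, here is a 
sketch of a programmatic configuration (the nameservice name and the namenode 
address are placeholders):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class NameserviceConf {
  public static void main(String[] args) {
    Configuration conf = new HdfsConfiguration();
    conf.set("dfs.nameservices", "ns1");
    // The suffixed, per-nameservice key that was missing in the report;
    // without it the address lookup fails with the IOException above.
    conf.set("dfs.namenode.rpc-address.ns1", "nn1.example.com:8020");
  }
}
{code}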






[jira] [Commented] (HDFS-15193) Improving the error message for missing `dfs.namenode.rpc-address.$NAMESERVICE`

2020-03-10 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17056085#comment-17056085
 ] 

Ctest commented on HDFS-15193:
--

[~ayushtkn] 

Thank you for your help. I just uploaded a new patch (004).

> Improving the error message for missing 
> `dfs.namenode.rpc-address.$NAMESERVICE`
> ---
>
> Key: HDFS-15193
> URL: https://issues.apache.org/jira/browse/HDFS-15193
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Ctest
>Assignee: Ctest
>Priority: Minor
> Attachments: HDFS-15193-001.patch, HDFS-15193-002.patch, 
> HDFS-15193-003.patch, HDFS-15193-004.patch
>
>
> I set `dfs.nameservices` to a single nameservice (let’s say `ns1` for 
> simplicity) and forgot to set `dfs.namenode.rpc-address.ns1`. 
> Then I got an error message:
> {code:java}
> [ERROR] testNonFederation(org.apache.hadoop.hdfs.tools.TestGetConf)  Time 
> elapsed: 0.195 s <<< ERROR!
> java.io.IOException: Incorrect configuration: namenode address 
> dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
> at 
> org.apache.hadoop.hdfs.DFSUtil.getNNServiceRpcAddressesForCluster(DFSUtil.java:629)
> at 
> org.apache.hadoop.hdfs.tools.TestGetConf.getAddressListFromConf(TestGetConf.java:132)
> at 
> org.apache.hadoop.hdfs.tools.TestGetConf.verifyAddresses(TestGetConf.java:234)
> at 
> org.apache.hadoop.hdfs.tools.TestGetConf.testNonFederation(TestGetConf.java:308)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.lang.Thread.run(Thread.java:748)
> {code}
>  
> The error message above told me that `dfs.namenode.rpc-address` or 
> `dfs.namenode.servicerpc-address` should be set.
> However, the actual reason for the error is that 
> `dfs.namenode.rpc-address.ns1` or `dfs.namenode.servicerpc-address.ns1` is 
> not set.
>  
> *How to improve* 
> I wrote a patch to improve the error message. Here is the improved error 
> message:
> {code:java}
> [ERROR] testNonFederation(org.apache.hadoop.hdfs.tools.TestGetConf)  Time 
> elapsed: 0.195 s <<< ERROR!
> java.io.IOException: Incorrect configuration: namenode address 
> dfs.namenode.servicerpc-address[.$NAMESERVICE] or 
> dfs.namenode.rpc-address[.$NAMESERVICE] is not configured.
> {code}
> From this message, users can immediately tell that they should set 
> `dfs.namenode.rpc-address.ns1` or `dfs.namenode.servicerpc-address.ns1`.
>  






[jira] [Updated] (HDFS-15193) Improving the error message for missing `dfs.namenode.rpc-address.$NAMESERVICE`

2020-03-09 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HDFS-15193:
-
Attachment: HDFS-15193-004.patch

> Improving the error message for missing 
> `dfs.namenode.rpc-address.$NAMESERVICE`
> ---
>
> Key: HDFS-15193
> URL: https://issues.apache.org/jira/browse/HDFS-15193
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Ctest
>Assignee: Ctest
>Priority: Minor
> Attachments: HDFS-15193-001.patch, HDFS-15193-002.patch, 
> HDFS-15193-003.patch, HDFS-15193-004.patch
>
>
> I set `dfs.nameservices` to a single nameservice (let’s say `ns1` for 
> simplicity) and forgot to set `dfs.namenode.rpc-address.ns1`. 
> Then I got an error message:
> {code:java}
> [ERROR] testNonFederation(org.apache.hadoop.hdfs.tools.TestGetConf)  Time 
> elapsed: 0.195 s <<< ERROR!
> java.io.IOException: Incorrect configuration: namenode address 
> dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
> at 
> org.apache.hadoop.hdfs.DFSUtil.getNNServiceRpcAddressesForCluster(DFSUtil.java:629)
> at 
> org.apache.hadoop.hdfs.tools.TestGetConf.getAddressListFromConf(TestGetConf.java:132)
> at 
> org.apache.hadoop.hdfs.tools.TestGetConf.verifyAddresses(TestGetConf.java:234)
> at 
> org.apache.hadoop.hdfs.tools.TestGetConf.testNonFederation(TestGetConf.java:308)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.lang.Thread.run(Thread.java:748)
> {code}
>  
> The error message above told me that `dfs.namenode.rpc-address` or 
> `dfs.namenode.servicerpc-address` should be set.
> However, the actual reason for the error is that 
> `dfs.namenode.rpc-address.ns1` or `dfs.namenode.servicerpc-address.ns1` is 
> not set.
>  
> *How to improve* 
> I wrote a patch to improve the error message. Here is the improved error 
> message:
> {code:java}
> [ERROR] testNonFederation(org.apache.hadoop.hdfs.tools.TestGetConf)  Time 
> elapsed: 0.195 s <<< ERROR!
> java.io.IOException: Incorrect configuration: namenode address 
> dfs.namenode.servicerpc-address[.$NAMESERVICE] or 
> dfs.namenode.rpc-address[.$NAMESERVICE] is not configured.
> {code}
> From this message, users can immediately tell that they should set 
> `dfs.namenode.rpc-address.ns1` or `dfs.namenode.servicerpc-address.ns1`.
>  






[jira] [Commented] (HDFS-15193) Improving the error message for missing `dfs.namenode.rpc-address.$NAMESERVICE`

2020-03-06 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17053798#comment-17053798
 ] 

Ctest commented on HDFS-15193:
--

[~ayushtkn] Thank you for the suggestions. I will push a new patch that 
covers them.

> Improving the error message for missing 
> `dfs.namenode.rpc-address.$NAMESERVICE`
> ---
>
> Key: HDFS-15193
> URL: https://issues.apache.org/jira/browse/HDFS-15193
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Ctest
>Assignee: Ctest
>Priority: Minor
> Attachments: HDFS-15193-001.patch, HDFS-15193-002.patch, 
> HDFS-15193-003.patch
>
>
> I set `dfs.nameservices` to a single nameservice (let’s say `ns1` for 
> simplicity) and forgot to set `dfs.namenode.rpc-address.ns1`. 
> Then I got an error message:
> {code:java}
> [ERROR] testNonFederation(org.apache.hadoop.hdfs.tools.TestGetConf)  Time 
> elapsed: 0.195 s <<< ERROR!
> java.io.IOException: Incorrect configuration: namenode address 
> dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
> at 
> org.apache.hadoop.hdfs.DFSUtil.getNNServiceRpcAddressesForCluster(DFSUtil.java:629)
> at 
> org.apache.hadoop.hdfs.tools.TestGetConf.getAddressListFromConf(TestGetConf.java:132)
> at 
> org.apache.hadoop.hdfs.tools.TestGetConf.verifyAddresses(TestGetConf.java:234)
> at 
> org.apache.hadoop.hdfs.tools.TestGetConf.testNonFederation(TestGetConf.java:308)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.lang.Thread.run(Thread.java:748)
> {code}
>  
> The error message above told me that `dfs.namenode.rpc-address` or 
> `dfs.namenode.servicerpc-address` should be set.
> However, the actual reason for the error is that 
> `dfs.namenode.rpc-address.ns1` or `dfs.namenode.servicerpc-address.ns1` is 
> not set.
>  
> *How to improve* 
> I wrote a patch to improve the error message. Here is the improved error 
> message:
> {code:java}
> [ERROR] testNonFederation(org.apache.hadoop.hdfs.tools.TestGetConf)  Time 
> elapsed: 0.195 s <<< ERROR!
> java.io.IOException: Incorrect configuration: namenode address 
> dfs.namenode.servicerpc-address[.$NAMESERVICE] or 
> dfs.namenode.rpc-address[.$NAMESERVICE] is not configured.
> {code}
> From this message, users can immediately tell that they should set 
> `dfs.namenode.rpc-address.ns1` or `dfs.namenode.servicerpc-address.ns1`.
>  






[jira] [Commented] (HDFS-15193) Improving the error message for missing `dfs.namenode.rpc-address.$NAMESERVICE`

2020-03-04 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17051661#comment-17051661
 ] 

Ctest commented on HDFS-15193:
--

[~ayushtkn]

Thank you for your help!

I uploaded a new patch (003) containing a unit test for the changed error 
message.

> Improving the error message for missing 
> `dfs.namenode.rpc-address.$NAMESERVICE`
> ---
>
> Key: HDFS-15193
> URL: https://issues.apache.org/jira/browse/HDFS-15193
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Ctest
>Assignee: Ctest
>Priority: Minor
> Attachments: HDFS-15193-001.patch, HDFS-15193-002.patch, 
> HDFS-15193-003.patch
>
>
> I set `dfs.nameservices` to a single nameservice (let’s say `ns1` for 
> simplicity) and forgot to set `dfs.namenode.rpc-address.ns1`. 
> Then I got an error message:
> {code:java}
> [ERROR] testNonFederation(org.apache.hadoop.hdfs.tools.TestGetConf)  Time 
> elapsed: 0.195 s <<< ERROR!
> java.io.IOException: Incorrect configuration: namenode address 
> dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
> at 
> org.apache.hadoop.hdfs.DFSUtil.getNNServiceRpcAddressesForCluster(DFSUtil.java:629)
> at 
> org.apache.hadoop.hdfs.tools.TestGetConf.getAddressListFromConf(TestGetConf.java:132)
> at 
> org.apache.hadoop.hdfs.tools.TestGetConf.verifyAddresses(TestGetConf.java:234)
> at 
> org.apache.hadoop.hdfs.tools.TestGetConf.testNonFederation(TestGetConf.java:308)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.lang.Thread.run(Thread.java:748)
> {code}
>  
> The error message above told me that `dfs.namenode.rpc-address` or 
> `dfs.namenode.servicerpc-address` should be set.
> However, the actual reason for the error is that 
> `dfs.namenode.rpc-address.ns1` or `dfs.namenode.servicerpc-address.ns1` is 
> not set.
>  
> *How to improve* 
> I wrote a patch to improve the error message. Here is the improved error 
> message:
> {code:java}
> [ERROR] testNonFederation(org.apache.hadoop.hdfs.tools.TestGetConf)  Time 
> elapsed: 0.195 s <<< ERROR!
> java.io.IOException: Incorrect configuration: namenode address 
> dfs.namenode.servicerpc-address[.$NAMESERVICE] or 
> dfs.namenode.rpc-address[.$NAMESERVICE] is not configured.
> {code}
> From this message, users can immediately tell that they should set 
> `dfs.namenode.rpc-address.ns1` or `dfs.namenode.servicerpc-address.ns1`.
>  






[jira] [Updated] (HDFS-15193) Improving the error message for missing `dfs.namenode.rpc-address.$NAMESERVICE`

2020-03-04 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HDFS-15193:
-
Attachment: HDFS-15193-003.patch

> Improving the error message for missing 
> `dfs.namenode.rpc-address.$NAMESERVICE`
> ---
>
> Key: HDFS-15193
> URL: https://issues.apache.org/jira/browse/HDFS-15193
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Ctest
>Assignee: Ctest
>Priority: Minor
> Attachments: HDFS-15193-001.patch, HDFS-15193-002.patch, 
> HDFS-15193-003.patch
>
>
> I set `dfs.nameservices` to a single nameservice (let’s say `ns1` for 
> simplicity) and forgot to set `dfs.namenode.rpc-address.ns1`. 
> Then I got an error message:
> {code:java}
> [ERROR] testNonFederation(org.apache.hadoop.hdfs.tools.TestGetConf)  Time 
> elapsed: 0.195 s <<< ERROR!
> java.io.IOException: Incorrect configuration: namenode address 
> dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
> at 
> org.apache.hadoop.hdfs.DFSUtil.getNNServiceRpcAddressesForCluster(DFSUtil.java:629)
> at 
> org.apache.hadoop.hdfs.tools.TestGetConf.getAddressListFromConf(TestGetConf.java:132)
> at 
> org.apache.hadoop.hdfs.tools.TestGetConf.verifyAddresses(TestGetConf.java:234)
> at 
> org.apache.hadoop.hdfs.tools.TestGetConf.testNonFederation(TestGetConf.java:308)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.lang.Thread.run(Thread.java:748)
> {code}
>  
> The error message above told me that `dfs.namenode.rpc-address` or 
> `dfs.namenode.servicerpc-address` should be set.
> However, the actual reason for the error is that 
> `dfs.namenode.rpc-address.ns1` or `dfs.namenode.servicerpc-address.ns1` is 
> not set.
>  
> *How to improve* 
> I wrote a patch to improve the error message. Here is the improved error 
> message:
> {code:java}
> [ERROR] testNonFederation(org.apache.hadoop.hdfs.tools.TestGetConf)  Time 
> elapsed: 0.195 s <<< ERROR!
> java.io.IOException: Incorrect configuration: namenode address 
> dfs.namenode.servicerpc-address[.$NAMESERVICE] or 
> dfs.namenode.rpc-address[.$NAMESERVICE] is not configured.
> {code}
> From this message, users can immediately tell that they should set 
> `dfs.namenode.rpc-address.ns1` or `dfs.namenode.servicerpc-address.ns1`.
>  






[jira] [Commented] (HDFS-15193) Improving the error message for missing `dfs.namenode.rpc-address.$NAMESERVICE`

2020-02-26 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17045732#comment-17045732
 ] 

Ctest commented on HDFS-15193:
--

[~ayushtkn]

I just submitted a patch to fix the checkstyle problem.

This patch only changes the error message, and I am not sure how to write a 
test for such a change.

Maybe trigger the error and assert on the message, like:
{code:java}
assertEquals(expectedMsg, errorMsg);{code}
I am not sure whether this is a good way to write the test.
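
For what it's worth, a self-contained sketch of that idea (JUnit 4; the 
expected substring assumes the patched wording, and the class and test names 
are made up):
{code:java}
import static org.junit.Assert.assertTrue;
import static org.junit.Assert.fail;

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSUtil;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.junit.Test;

public class TestMissingRpcAddressMessage {
  @Test
  public void testErrorMentionsNameserviceSuffix() throws Exception {
    Configuration conf = new HdfsConfiguration();
    conf.set("dfs.nameservices", "ns1");
    // Deliberately omit dfs.namenode.rpc-address.ns1.
    try {
      DFSUtil.getNNServiceRpcAddressesForCluster(conf);
      fail("expected IOException for the missing per-nameservice address");
    } catch (IOException e) {
      // The patched message should name the suffixed keys.
      assertTrue(e.getMessage().contains("[.$NAMESERVICE]"));
    }
  }
}
{code}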

> Improving the error message for missing 
> `dfs.namenode.rpc-address.$NAMESERVICE`
> ---
>
> Key: HDFS-15193
> URL: https://issues.apache.org/jira/browse/HDFS-15193
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Ctest
>Priority: Minor
> Attachments: HDFS-15193-001.patch, HDFS-15193-002.patch
>
>
> I set `dfs.nameservices` to a single nameservice (let’s say `ns1` for 
> simplicity) and forgot to set `dfs.namenode.rpc-address.ns1`. 
> Then I got an error message:
> {code:java}
> [ERROR] testNonFederation(org.apache.hadoop.hdfs.tools.TestGetConf)  Time 
> elapsed: 0.195 s <<< ERROR!
> java.io.IOException: Incorrect configuration: namenode address 
> dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
> at 
> org.apache.hadoop.hdfs.DFSUtil.getNNServiceRpcAddressesForCluster(DFSUtil.java:629)
> at 
> org.apache.hadoop.hdfs.tools.TestGetConf.getAddressListFromConf(TestGetConf.java:132)
> at 
> org.apache.hadoop.hdfs.tools.TestGetConf.verifyAddresses(TestGetConf.java:234)
> at 
> org.apache.hadoop.hdfs.tools.TestGetConf.testNonFederation(TestGetConf.java:308)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.lang.Thread.run(Thread.java:748)
> {code}
>  
> The error message above told me that `dfs.namenode.rpc-address` or 
> `dfs.namenode.servicerpc-address` should be set.
> However, the actual reason for the error is that 
> `dfs.namenode.rpc-address.ns1` or `dfs.namenode.servicerpc-address.ns1` is 
> not set.
>  
> *How to improve* 
> I wrote a patch to improve the error message. Here is the improved error 
> message:
> {code:java}
> [ERROR] testNonFederation(org.apache.hadoop.hdfs.tools.TestGetConf)  Time 
> elapsed: 0.195 s <<< ERROR!
> java.io.IOException: Incorrect configuration: namenode address 
> dfs.namenode.servicerpc-address[.$NAMESERVICE] or 
> dfs.namenode.rpc-address[.$NAMESERVICE] is not configured.
> {code}
> From this message, users can immediately tell that they should set 
> `dfs.namenode.rpc-address.ns1` or `dfs.namenode.servicerpc-address.ns1`.
>  






[jira] [Updated] (HDFS-15193) Improving the error message for missing `dfs.namenode.rpc-address.$NAMESERVICE`

2020-02-26 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HDFS-15193:
-
Attachment: HDFS-15193-002.patch

> Improving the error message for missing 
> `dfs.namenode.rpc-address.$NAMESERVICE`
> ---
>
> Key: HDFS-15193
> URL: https://issues.apache.org/jira/browse/HDFS-15193
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Ctest
>Priority: Minor
> Attachments: HDFS-15193-001.patch, HDFS-15193-002.patch
>
>
> I set `dfs.nameservices` to a single nameservice (let’s say `ns1` for 
> simplicity) and forgot to set `dfs.namenode.rpc-address.ns1`. 
> Then I got an error message:
> {code:java}
> [ERROR] testNonFederation(org.apache.hadoop.hdfs.tools.TestGetConf)  Time 
> elapsed: 0.195 s <<< ERROR!
> java.io.IOException: Incorrect configuration: namenode address 
> dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
> at 
> org.apache.hadoop.hdfs.DFSUtil.getNNServiceRpcAddressesForCluster(DFSUtil.java:629)
> at 
> org.apache.hadoop.hdfs.tools.TestGetConf.getAddressListFromConf(TestGetConf.java:132)
> at 
> org.apache.hadoop.hdfs.tools.TestGetConf.verifyAddresses(TestGetConf.java:234)
> at 
> org.apache.hadoop.hdfs.tools.TestGetConf.testNonFederation(TestGetConf.java:308)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.lang.Thread.run(Thread.java:748)
> {code}
>  
> The error message above told me that `dfs.namenode.rpc-address` or 
> `dfs.namenode.servicerpc-address` should be set.
> However, the actual reason for the error is that 
> `dfs.namenode.rpc-address.ns1` or `dfs.namenode.servicerpc-address.ns1` is 
> not set.
>  
> *How to improve* 
> I wrote a patch to improve the error message. Here is the improved error 
> message:
> {code:java}
> [ERROR] testNonFederation(org.apache.hadoop.hdfs.tools.TestGetConf)  Time 
> elapsed: 0.195 s <<< ERROR!
> java.io.IOException: Incorrect configuration: namenode address 
> dfs.namenode.servicerpc-address[.$NAMESERVICE] or 
> dfs.namenode.rpc-address[.$NAMESERVICE] is not configured.
> {code}
> From this message, users can immediately tell that they should set 
> `dfs.namenode.rpc-address.ns1` or `dfs.namenode.servicerpc-address.ns1`.
>  






[jira] [Updated] (HDFS-15193) Improving the error message for missing `dfs.namenode.rpc-address.$NAMESERVICE`

2020-02-25 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HDFS-15193:
-
Attachment: (was: HDFS-15193-001.patch)

> Improving the error message for missing 
> `dfs.namenode.rpc-address.$NAMESERVICE`
> ---
>
> Key: HDFS-15193
> URL: https://issues.apache.org/jira/browse/HDFS-15193
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Ctest
>Priority: Major
> Attachments: HDFS-15193-001.patch
>
>
> I set `dfs.nameservices` to a single nameservice (let’s say `ns1` for 
> simplicity) and forgot to set `dfs.namenode.rpc-address.ns1`. 
> Then I got an error message:
> {code:java}
> [ERROR] testNonFederation(org.apache.hadoop.hdfs.tools.TestGetConf)  Time 
> elapsed: 0.195 s <<< ERROR!
> java.io.IOException: Incorrect configuration: namenode address 
> dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
> at 
> org.apache.hadoop.hdfs.DFSUtil.getNNServiceRpcAddressesForCluster(DFSUtil.java:629)
> at 
> org.apache.hadoop.hdfs.tools.TestGetConf.getAddressListFromConf(TestGetConf.java:132)
> at 
> org.apache.hadoop.hdfs.tools.TestGetConf.verifyAddresses(TestGetConf.java:234)
> at 
> org.apache.hadoop.hdfs.tools.TestGetConf.testNonFederation(TestGetConf.java:308)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.lang.Thread.run(Thread.java:748)
> {code}
>  
> The error message above told me that `dfs.namenode.rpc-address` or 
> `dfs.namenode.servicerpc-address` should be set.
> However, the actual reason for the error is that 
> `dfs.namenode.rpc-address.ns1` or `dfs.namenode.servicerpc-address.ns1` is 
> not set.
>  
> *How to improve* 
> I wrote a patch to improve the error message. Here is the improved error 
> message:
> {code:java}
> [ERROR] testNonFederation(org.apache.hadoop.hdfs.tools.TestGetConf)  Time 
> elapsed: 0.195 s <<< ERROR!
> java.io.IOException: Incorrect configuration: namenode address 
> dfs.namenode.servicerpc-address[.$NAMESERVICE] or 
> dfs.namenode.rpc-address[.$NAMESERVICE] is not configured.
> {code}
> From this message, users can immediately tell that they should set 
> `dfs.namenode.rpc-address.ns1` or `dfs.namenode.servicerpc-address.ns1`.
>  






[jira] [Updated] (HDFS-15193) Improving the error message for missing `dfs.namenode.rpc-address.$NAMESERVICE`

2020-02-25 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HDFS-15193:
-
Attachment: HDFS-15193-001.patch
Status: Patch Available  (was: Open)

> Improving the error message for missing 
> `dfs.namenode.rpc-address.$NAMESERVICE`
> ---
>
> Key: HDFS-15193
> URL: https://issues.apache.org/jira/browse/HDFS-15193
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Ctest
>Priority: Major
> Attachments: HDFS-15193-001.patch
>
>
> I set `dfs.nameservices` to a single nameservice (let’s say `ns1` for 
> simplicity) and forgot to set `dfs.namenode.rpc-address.ns1`. 
> Then I got an error message:
> {code:java}
> [ERROR] testNonFederation(org.apache.hadoop.hdfs.tools.TestGetConf)  Time 
> elapsed: 0.195 s <<< ERROR!
> java.io.IOException: Incorrect configuration: namenode address 
> dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
> at 
> org.apache.hadoop.hdfs.DFSUtil.getNNServiceRpcAddressesForCluster(DFSUtil.java:629)
> at 
> org.apache.hadoop.hdfs.tools.TestGetConf.getAddressListFromConf(TestGetConf.java:132)
> at 
> org.apache.hadoop.hdfs.tools.TestGetConf.verifyAddresses(TestGetConf.java:234)
> at 
> org.apache.hadoop.hdfs.tools.TestGetConf.testNonFederation(TestGetConf.java:308)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.lang.Thread.run(Thread.java:748)
> {code}
>  
> The error message above told me that `dfs.namenode.rpc-address` or 
> `dfs.namenode.servicerpc-address` should be set.
> However, the actual reason for the error is that 
> `dfs.namenode.rpc-address.ns1` or `dfs.namenode.servicerpc-address.ns1` is 
> not set.
>  
> *How to improve* 
> I wrote a patch to improve the error message. Here is the improved error 
> message:
> {code:java}
> [ERROR] testNonFederation(org.apache.hadoop.hdfs.tools.TestGetConf)  Time 
> elapsed: 0.195 s <<< ERROR!
> java.io.IOException: Incorrect configuration: namenode address 
> dfs.namenode.servicerpc-address[.$NAMESERVICE] or 
> dfs.namenode.rpc-address[.$NAMESERVICE] is not configured.
> {code}
> From this message, users can immediately tell that they should set 
> `dfs.namenode.rpc-address.ns1` or `dfs.namenode.servicerpc-address.ns1`.
>  






[jira] [Updated] (HDFS-15193) Improving the error message for missing `dfs.namenode.rpc-address.$NAMESERVICE`

2020-02-25 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HDFS-15193:
-
Attachment: HDFS-15193-001.patch

> Improving the error message for missing 
> `dfs.namenode.rpc-address.$NAMESERVICE`
> ---
>
> Key: HDFS-15193
> URL: https://issues.apache.org/jira/browse/HDFS-15193
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Ctest
>Priority: Major
> Attachments: HDFS-15193-001.patch
>
>
> I set `dfs.nameservices` to a single nameservice (let’s say `ns1` for 
> simplicity) and forgot to set `dfs.namenode.rpc-address.ns1`. 
> Then I got an error message:
> {code:java}
> [ERROR] testNonFederation(org.apache.hadoop.hdfs.tools.TestGetConf)  Time 
> elapsed: 0.195 s <<< ERROR!
> java.io.IOException: Incorrect configuration: namenode address 
> dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
> at 
> org.apache.hadoop.hdfs.DFSUtil.getNNServiceRpcAddressesForCluster(DFSUtil.java:629)
> at 
> org.apache.hadoop.hdfs.tools.TestGetConf.getAddressListFromConf(TestGetConf.java:132)
> at 
> org.apache.hadoop.hdfs.tools.TestGetConf.verifyAddresses(TestGetConf.java:234)
> at 
> org.apache.hadoop.hdfs.tools.TestGetConf.testNonFederation(TestGetConf.java:308)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.lang.Thread.run(Thread.java:748)
> {code}
>  
> The error message above told me that `dfs.namenode.rpc-address` or 
> `dfs.namenode.servicerpc-address` should be set.
> However, the actual reason for the error is that 
> `dfs.namenode.rpc-address.ns1` or `dfs.namenode.servicerpc-address.ns1` is 
> not set.
>  
> *How to improve* 
> I wrote a patch to improve the error message. Here is the improved error 
> message:
> {code:java}
> [ERROR] testNonFederation(org.apache.hadoop.hdfs.tools.TestGetConf)  Time 
> elapsed: 0.195 s <<< ERROR!
> java.io.IOException: Incorrect configuration: namenode address 
> dfs.namenode.servicerpc-address[.$NAMESERVICE] or 
> dfs.namenode.rpc-address[.$NAMESERVICE] is not configured.
> {code}
> From this message, users can immediately tell that they should set 
> `dfs.namenode.rpc-address.ns1` or `dfs.namenode.servicerpc-address.ns1`.
>  






[jira] [Updated] (HDFS-15193) Improving the error message for missing `dfs.namenode.rpc-address.$NAMESERVICE`

2020-02-25 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HDFS-15193:
-
Description: 
I set `dfs.nameservices` to a single nameservice (let’s say `ns1` for 
simplicity) and forgot to set `dfs.namenode.rpc-address.ns1`. 

Then I got an error message:
{code:java}
[ERROR] testNonFederation(org.apache.hadoop.hdfs.tools.TestGetConf)  Time 
elapsed: 0.195 s <<< ERROR!
java.io.IOException: Incorrect configuration: namenode address 
dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
at 
org.apache.hadoop.hdfs.DFSUtil.getNNServiceRpcAddressesForCluster(DFSUtil.java:629)
at 
org.apache.hadoop.hdfs.tools.TestGetConf.getAddressListFromConf(TestGetConf.java:132)
at 
org.apache.hadoop.hdfs.tools.TestGetConf.verifyAddresses(TestGetConf.java:234)
at 
org.apache.hadoop.hdfs.tools.TestGetConf.testNonFederation(TestGetConf.java:308)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
{code}
 

The error message above told me that `dfs.namenode.rpc-address` or 
`dfs.namenode.servicerpc-address` should be set.

However, the actual reason for the error is that `dfs.namenode.rpc-address.ns1` 
or `dfs.namenode.servicerpc-address.ns1` is not set.

 

*How to improve* 

I wrote a patch to improve the error message. Here is the improved error 
message:
{code:java}
[ERROR] testNonFederation(org.apache.hadoop.hdfs.tools.TestGetConf)  Time 
elapsed: 0.195 s <<< ERROR!
java.io.IOException: Incorrect configuration: namenode address 
dfs.namenode.servicerpc-address[.$NAMESERVICE] or 
dfs.namenode.rpc-address[.$NAMESERVICE] is not configured.
{code}
From this message, users can immediately tell that they should set 
`dfs.namenode.rpc-address.ns1` or `dfs.namenode.servicerpc-address.ns1`.

 

  was:
I set `dfs.nameservices` with the value of one name service (let’s say `ns1` 
for simplicity) and forgot to set `dfs.namenode.rpc-address.ns1`. 

Then I got an error message:
{code:java}
[ERROR] testNonFederation(org.apache.hadoop.hdfs.tools.TestGetConf)  Time 
elapsed: 0.195 s <<< ERROR!
java.io.IOException: Incorrect configuration: namenode address 
dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
at 
org.apache.hadoop.hdfs.DFSUtil.getNNServiceRpcAddressesForCluster(DFSUtil.java:629)
at 
org.apache.hadoop.hdfs.tools.TestGetConf.getAddressListFromConf(TestGetConf.java:132)
at 
org.apache.hadoop.hdfs.tools.TestGetConf.verifyAddresses(TestGetConf.java:234)
at 
org.apache.hadoop.hdfs.tools.TestGetConf.testNonFederation(TestGetConf.java:308)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
{code}
 

The error message above told me that `dfs.namenode.rpc-address` or 
`dfs.namenode.servicerpc-address` should be set.

However, the actual reason for the error is that `dfs.namenode.rpc-address.ns1` 
or `dfs.namenode.servicerpc-address.ns1` is not set.

 

*How to improve* 

I wrote a patch to improve the error message. Here is the current error message:

 
{code:java}
[ERROR] testNonFederation(org.apache.hadoop.hdfs.tools.TestGetConf)  Time 
elapsed: 0.195 s <<< ERROR!
java.io.IOException: Incorrect configuration: namenode address 
dfs.namenode.servicerpc-address[.$NAMESERVICE] or 
dfs.namenode.rpc-address[.$NAMESERVICE] is not configured.
{code}
Then the users can immediately know that they should set 
`dfs.namenode.rpc-address.ns1` or `dfs.namenode.servicerpc-address.ns1` 
according to the error message.

[jira] [Updated] (HDFS-15193) Improving the error message for missing `dfs.namenode.rpc-address.$NAMESERVICE`

2020-02-25 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HDFS-15193:
-
Description: 
I set `dfs.nameservices` to a single nameservice (let’s say `ns1` for 
simplicity) and forgot to set `dfs.namenode.rpc-address.ns1`. 

Then I got an error message:
{code:java}
[ERROR] testNonFederation(org.apache.hadoop.hdfs.tools.TestGetConf)  Time 
elapsed: 0.195 s <<< ERROR!
java.io.IOException: Incorrect configuration: namenode address 
dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
at 
org.apache.hadoop.hdfs.DFSUtil.getNNServiceRpcAddressesForCluster(DFSUtil.java:629)
at 
org.apache.hadoop.hdfs.tools.TestGetConf.getAddressListFromConf(TestGetConf.java:132)
at 
org.apache.hadoop.hdfs.tools.TestGetConf.verifyAddresses(TestGetConf.java:234)
at 
org.apache.hadoop.hdfs.tools.TestGetConf.testNonFederation(TestGetConf.java:308)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
{code}
 

The error message above told me that `dfs.namenode.rpc-address` or 
`dfs.namenode.servicerpc-address` should be set.

However, the actual reason for the error is that `dfs.namenode.rpc-address.ns1` 
or `dfs.namenode.servicerpc-address.ns1` is not set.
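
For concreteness, a minimal repro sketch of the mismatch, assuming only 
hadoop-common and hadoop-hdfs on the classpath (the host:port value is a 
placeholder):
{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSUtil;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class MissingRpcAddressRepro {
  public static void main(String[] args) {
    Configuration conf = new HdfsConfiguration();
    conf.set("dfs.nameservices", "ns1");
    // dfs.namenode.rpc-address.ns1 is deliberately left unset.
    try {
      DFSUtil.getNNServiceRpcAddressesForCluster(conf);
    } catch (IOException e) {
      System.out.println(e.getMessage()); // names only the un-suffixed keys
    }
    // Setting the per-nameservice key is what actually fixes the setup:
    conf.set("dfs.namenode.rpc-address.ns1", "nn1.example.com:8020");
  }
}
{code}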

 

*How to improve* 

I wrote a patch to improve the error message. With the patch, the error 
message becomes:

 
{code:java}
[ERROR] testNonFederation(org.apache.hadoop.hdfs.tools.TestGetConf)  Time 
elapsed: 0.195 s <<< ERROR!
java.io.IOException: Incorrect configuration: namenode address 
dfs.namenode.servicerpc-address[.$NAMESERVICE] or 
dfs.namenode.rpc-address[.$NAMESERVICE] is not configured.
{code}
Then the users can immediately know that they should set 
`dfs.namenode.rpc-address.ns1` or `dfs.namenode.servicerpc-address.ns1` 
according to the error message.
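
As a sketch of the message construction the patch aims for (the key names are 
the real ones, but the helper below is hypothetical and may differ from the 
committed patch):
{code:java}
import java.io.IOException;

public class ClearerMessageSketch {
  static final String SERVICE_RPC_KEY = "dfs.namenode.servicerpc-address";
  static final String RPC_KEY = "dfs.namenode.rpc-address";

  // Hypothetical helper: mention the optional per-nameservice suffix so the
  // message points at dfs.namenode.rpc-address.ns1 rather than the bare key.
  static IOException missingNnAddress() {
    return new IOException("Incorrect configuration: namenode address "
        + SERVICE_RPC_KEY + "[.$NAMESERVICE] or "
        + RPC_KEY + "[.$NAMESERVICE] is not configured.");
  }

  public static void main(String[] args) {
    System.out.println(missingNnAddress().getMessage());
  }
}
{code}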

 

  was:
I set `dfs.nameservices` to a single name service (let’s say `ns1` for 
simplicity) and forgot to set `dfs.namenode.rpc-address.ns1`. 

Then I got an error message:

 

 
{code:java}
[ERROR] testNonFederation(org.apache.hadoop.hdfs.tools.TestGetConf)  Time 
elapsed: 0.195 s <<< ERROR!
java.io.IOException: Incorrect configuration: namenode address 
dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
at 
org.apache.hadoop.hdfs.DFSUtil.getNNServiceRpcAddressesForCluster(DFSUtil.java:629)
at 
org.apache.hadoop.hdfs.tools.TestGetConf.getAddressListFromConf(TestGetConf.java:132)
at 
org.apache.hadoop.hdfs.tools.TestGetConf.verifyAddresses(TestGetConf.java:234)
at 
org.apache.hadoop.hdfs.tools.TestGetConf.testNonFederation(TestGetConf.java:308)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
{code}
 

 

But the actual reason for the error is that `dfs.namenode.rpc-address.ns1` or 
`dfs.namenode.servicerpc-address.ns1` is not set.

 

How to improve

 

I wrote a patch to improve the error message. With the patch, the error 
message becomes:

[ERROR] testNonFederation(org.apache.hadoop.hdfs.tools.TestGetConf)  Time 
elapsed: 0.195 s <<< ERROR!

java.io.IOException: Incorrect configuration: namenode address 
dfs.namenode.servicerpc-address[.$NAMESERVICE] or 
dfs.namenode.rpc-address[.$NAMESERVICE] is not configured.

[jira] [Created] (HDFS-15193) Improving the error message for missing `dfs.namenode.rpc-address.$NAMESERVICE`

2020-02-25 Thread Ctest (Jira)
Ctest created HDFS-15193:


 Summary: Improving the error message for missing 
`dfs.namenode.rpc-address.$NAMESERVICE`
 Key: HDFS-15193
 URL: https://issues.apache.org/jira/browse/HDFS-15193
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Reporter: Ctest


I set `dfs.nameservices` to a single name service (let’s say `ns1` for 
simplicity) and forgot to set `dfs.namenode.rpc-address.ns1`. 

Then I got an error message:

 

 
{code:java}
[ERROR] testNonFederation(org.apache.hadoop.hdfs.tools.TestGetConf)  Time 
elapsed: 0.195 s <<< ERROR!
java.io.IOException: Incorrect configuration: namenode address 
dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
at 
org.apache.hadoop.hdfs.DFSUtil.getNNServiceRpcAddressesForCluster(DFSUtil.java:629)
at 
org.apache.hadoop.hdfs.tools.TestGetConf.getAddressListFromConf(TestGetConf.java:132)
at 
org.apache.hadoop.hdfs.tools.TestGetConf.verifyAddresses(TestGetConf.java:234)
at 
org.apache.hadoop.hdfs.tools.TestGetConf.testNonFederation(TestGetConf.java:308)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
{code}
 

 

But the actual reason for the error is that `dfs.namenode.rpc-address.ns1` or 
`dfs.namenode.servicerpc-address.ns1` is not set.

 

How to improve

 

I wrote a patch to improve the error message. With the patch, the error 
message becomes:

[ERROR] testNonFederation(org.apache.hadoop.hdfs.tools.TestGetConf)  Time 
elapsed: 0.195 s <<< ERROR!

java.io.IOException: Incorrect configuration: namenode address 
dfs.namenode.servicerpc-address[.$NAMESERVICE] or 
dfs.namenode.rpc-address[.$NAMESERVICE] is not configured.






[jira] [Commented] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-02-25 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17044894#comment-17044894
 ] 

Ctest commented on HDFS-15124:
--

[~ayushtkn]

Thanks a lot for your help! I have submitted a new patch (006) with the unit 
test.
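
For readers following the thread, a sketch of what such a regression test 
could look like (hypothetical class name; assumes JUnit 4 and MiniDFSCluster; 
the actual 006 patch may differ):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.Test;

public class TestTopAuditLoggerStartup {
  @Test
  public void testNameNodeStartsWithTopAuditLogger() throws Exception {
    Configuration conf = new HdfsConfiguration();
    conf.set(DFSConfigKeys.DFS_NAMENODE_AUDIT_LOGGERS_KEY,
        "org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger");
    // Before the fix, startup dies in FSNamesystem.initAuditLoggers with a
    // RuntimeException wrapping InstantiationException; after it, this passes.
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build();
    try {
      cluster.waitActive();
    } finally {
      cluster.shutdown();
    }
  }
}
{code}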

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Assignee: Ctest
>Priority: Critical
> Attachments: HDFS-15124.000.patch, HDFS-15124.001.patch, 
> HDFS-15124.002.patch, HDFS-15124.003.patch, HDFS-15124.004.patch, 
> HDFS-15124.005.patch, HDFS-15124.006.patch
>
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> namenode will not be started successfully because of an 
> `InstantiationException` thrown from 
> `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing namenode, `initAuditLoggers` will 
> be called and it will try to call the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` which doesn't 
> have a default constructor. Thus the `InstantiationException` exception is 
> thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.java.lang.RuntimeException: 
> java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
> Caused by: java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at java.lang.Class.newInstance(Class.java:427)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)...
> 8 more
> Caused by: java.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
> at java.lang.Class.getConstructor0(Class.java:3082)
> at java.lang.Class.newInstance(Class.java:412)
> ... 9 more{code}
>  
>  
> *Detailed Root Cause*
> There is no default constructor in 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`: 
> {code:java}
> /** 
>  * An {@link AuditLogger} that sends logged data directly to the metrics 
>  * systems. It is used when the top service is used directly by the name node 
>  */ 
> @InterfaceAudience.Private 
> public class TopAuditLogger implements AuditLogger { 
>   public static final Logger LOG = 
> LoggerFactory.getLogger(TopAuditLogger.class); 
>   private final TopMetrics topMetrics; 
>   public TopAuditLogger(TopMetrics topMetrics) {
> Preconditions.checkNotNull(topMetrics, "Cannot init with a null " + 
> "TopMetrics");
> this.topMetrics = topMetrics; 
>   }
>   @Override
>   public void initialize(Configuration conf) { 
>   }
> {code}
> As long as the configuration parameter `dfs.namenode.audit.loggers` is set to 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> `initAuditLoggers` will try to call its default constructor to make a new 
> instance: 
> {code:java}
> private List<AuditLogger> initAuditLoggers(Configuration conf) {
>   // Initialize the custom access loggers if configured.
>   Collection<String> alClasses =
>       conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
>   List<AuditLogger> auditLoggers = Lists.newArrayList();
>   if (alClasses != null && !alClasses.isEmpty()) {
>     for (String className : alClasses) {
>       try {
>         AuditLogger logger;
>         if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
>           logger = new DefaultAuditLogger();
>         } else {
>           logger = (AuditLogger) 
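
To make the quoted root cause concrete, a self-contained demo of why 
`Class.newInstance()` fails here (the nested class below merely stands in for 
`TopAuditLogger`):
{code:java}
import java.lang.reflect.Constructor;

public class NewInstanceDemo {
  // Stand-in for TopAuditLogger: declares only a one-argument constructor.
  public static class NoDefaultCtor {
    public NoDefaultCtor(String required) { }
  }

  public static void main(String[] args) throws Exception {
    try {
      NoDefaultCtor.class.newInstance(); // requires a nullary constructor
    } catch (InstantiationException e) {
      // Same failure mode as initAuditLoggers hitting TopAuditLogger.
      System.out.println("newInstance() failed: " + e);
    }
    // Explicitly selecting the declared constructor works:
    Constructor<NoDefaultCtor> ctor =
        NoDefaultCtor.class.getDeclaredConstructor(String.class);
    System.out.println("constructed: " + ctor.newInstance("metrics"));
  }
}
{code}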

[jira] [Updated] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-02-25 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HDFS-15124:
-
Attachment: HDFS-15124.006.patch

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Assignee: Ctest
>Priority: Critical
> Attachments: HDFS-15124.000.patch, HDFS-15124.001.patch, 
> HDFS-15124.002.patch, HDFS-15124.003.patch, HDFS-15124.004.patch, 
> HDFS-15124.005.patch, HDFS-15124.006.patch
>
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> namenode will not be started successfully because of an 
> `InstantiationException` thrown from 
> `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing namenode, `initAuditLoggers` will 
> be called and it will try to call the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` which doesn't 
> have a default constructor. Thus the `InstantiationException` exception is 
> thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.java.lang.RuntimeException: 
> java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
> Caused by: java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at java.lang.Class.newInstance(Class.java:427)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)...
> 8 more
> Caused by: java.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
> at java.lang.Class.getConstructor0(Class.java:3082)
> at java.lang.Class.newInstance(Class.java:412)
> ... 9 more{code}
>  
>  
> *Detailed Root Cause*
> There is no default constructor in 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`: 
> {code:java}
> /** 
>  * An {@link AuditLogger} that sends logged data directly to the metrics 
>  * systems. It is used when the top service is used directly by the name node 
>  */ 
> @InterfaceAudience.Private 
> public class TopAuditLogger implements AuditLogger { 
>   public static final Logger LOG = 
> LoggerFactory.getLogger(TopAuditLogger.class); 
>   private final TopMetrics topMetrics; 
>   public TopAuditLogger(TopMetrics topMetrics) {
> Preconditions.checkNotNull(topMetrics, "Cannot init with a null " + 
> "TopMetrics");
> this.topMetrics = topMetrics; 
>   }
>   @Override
>   public void initialize(Configuration conf) { 
>   }
> {code}
> As long as the configuration parameter `dfs.namenode.audit.loggers` is set to 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> `initAuditLoggers` will try to call its default constructor to make a new 
> instance: 
> {code:java}
> private List<AuditLogger> initAuditLoggers(Configuration conf) {
>   // Initialize the custom access loggers if configured.
>   Collection<String> alClasses =
>       conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
>   List<AuditLogger> auditLoggers = Lists.newArrayList();
>   if (alClasses != null && !alClasses.isEmpty()) {
>     for (String className : alClasses) {
>       try {
>         AuditLogger logger;
>         if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
>           logger = new DefaultAuditLogger();
>         } else {
>           logger = (AuditLogger) Class.forName(className).newInstance();
>         }
>         logger.initialize(conf);
>         
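
One possible shape of the fix, sketched with stub types (an illustration of 
the constructor-fallback idea, not necessarily what the committed patch does):
{code:java}
import java.lang.reflect.Constructor;

public class AuditLoggerFallbackSketch {
  interface AuditLogger { }        // stub for the real AuditLogger interface
  static class TopMetrics { }      // stub for the real TopMetrics class
  public static class TopAuditLogger implements AuditLogger {
    public TopAuditLogger(TopMetrics metrics) { }
  }

  // Try the no-arg constructor first; if the class does not declare one
  // (as TopAuditLogger does not), fall back to a (TopMetrics) constructor.
  static AuditLogger newLogger(String className, TopMetrics metrics)
      throws Exception {
    Class<?> cls = Class.forName(className);
    try {
      return (AuditLogger) cls.getDeclaredConstructor().newInstance();
    } catch (NoSuchMethodException e) {
      Constructor<?> ctor = cls.getDeclaredConstructor(TopMetrics.class);
      return (AuditLogger) ctor.newInstance(metrics);
    }
  }

  public static void main(String[] args) throws Exception {
    AuditLogger logger =
        newLogger(TopAuditLogger.class.getName(), new TopMetrics());
    System.out.println("instantiated " + logger.getClass().getSimpleName());
  }
}
{code}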

[jira] [Commented] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-02-23 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17043171#comment-17043171
 ] 

Ctest commented on HDFS-15124:
--

[~ayushtkn]

Thanks a lot! I will try to write the test and submit the patch later!

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Assignee: Ctest
>Priority: Critical
> Attachments: HDFS-15124.000.patch, HDFS-15124.001.patch, 
> HDFS-15124.002.patch, HDFS-15124.003.patch, HDFS-15124.004.patch, 
> HDFS-15124.005.patch
>
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> namenode will not be started successfully because of an 
> `InstantiationException` thrown from 
> `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing namenode, `initAuditLoggers` will 
> be called and it will try to call the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` which doesn't 
> have a default constructor. Thus the `InstantiationException` exception is 
> thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.java.lang.RuntimeException: 
> java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
> Caused by: java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at java.lang.Class.newInstance(Class.java:427)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)...
> 8 more
> Caused by: java.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
> at java.lang.Class.getConstructor0(Class.java:3082)
> at java.lang.Class.newInstance(Class.java:412)
> ... 9 more{code}
>  
>  
> *Detailed Root Cause*
> There is no default constructor in 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`: 
> {code:java}
> /** 
>  * An {@link AuditLogger} that sends logged data directly to the metrics 
>  * systems. It is used when the top service is used directly by the name node 
>  */ 
> @InterfaceAudience.Private 
> public class TopAuditLogger implements AuditLogger { 
>   public static final Logger LOG = 
> LoggerFactory.getLogger(TopAuditLogger.class); 
>   private final TopMetrics topMetrics; 
>   public TopAuditLogger(TopMetrics topMetrics) {
> Preconditions.checkNotNull(topMetrics, "Cannot init with a null " + 
> "TopMetrics");
> this.topMetrics = topMetrics; 
>   }
>   @Override
>   public void initialize(Configuration conf) { 
>   }
> {code}
> As long as the configuration parameter `dfs.namenode.audit.loggers` is set to 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> `initAuditLoggers` will try to call its default constructor to make a new 
> instance: 
> {code:java}
> private List<AuditLogger> initAuditLoggers(Configuration conf) {
>   // Initialize the custom access loggers if configured.
>   Collection<String> alClasses =
>       conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
>   List<AuditLogger> auditLoggers = Lists.newArrayList();
>   if (alClasses != null && !alClasses.isEmpty()) {
>     for (String className : alClasses) {
>       try {
>         AuditLogger logger;
>         if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
>           logger = new DefaultAuditLogger();
>         } else {
>           logger = (AuditLogger) Class.forName(className).newInstance();

[jira] [Comment Edited] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-31 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17027881#comment-17027881
 ] 

Ctest edited comment on HDFS-15124 at 1/31/20 11:29 PM:


I can try to write a test to do this, but it may take some time. (I am not 
sure whether it will be a quick test or not.)

Should I incorporate the test into this patch? Or start a new issue to write 
the test?


was (Author: ctest.team):
I can try to write a test to do this, but it may take some time. (I am not 
sure whether it is easy or not.) Should I incorporate the test into this 
patch?

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Assignee: Ctest
>Priority: Critical
> Attachments: HDFS-15124.000.patch, HDFS-15124.001.patch, 
> HDFS-15124.002.patch, HDFS-15124.003.patch, HDFS-15124.004.patch, 
> HDFS-15124.005.patch
>
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> namenode will not be started successfully because of an 
> `InstantiationException` thrown from 
> `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing namenode, `initAuditLoggers` will 
> be called and it will try to call the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` which doesn't 
> have a default constructor. Thus the `InstantiationException` exception is 
> thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.java.lang.RuntimeException: 
> java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
> Caused by: java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at java.lang.Class.newInstance(Class.java:427)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)...
> 8 more
> Caused by: java.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
> at java.lang.Class.getConstructor0(Class.java:3082)
> at java.lang.Class.newInstance(Class.java:412)
> ... 9 more{code}
>  
>  
> *Detailed Root Cause*
> There is no default constructor in 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`: 
> {code:java}
> /** 
>  * An {@link AuditLogger} that sends logged data directly to the metrics 
>  * systems. It is used when the top service is used directly by the name node 
>  */ 
> @InterfaceAudience.Private 
> public class TopAuditLogger implements AuditLogger { 
>   public static final Logger LOG = 
> LoggerFactory.getLogger(TopAuditLogger.class); 
>   private final TopMetrics topMetrics; 
>   public TopAuditLogger(TopMetrics topMetrics) {
> Preconditions.checkNotNull(topMetrics, "Cannot init with a null " + 
> "TopMetrics");
> this.topMetrics = topMetrics; 
>   }
>   @Override
>   public void initialize(Configuration conf) { 
>   }
> {code}
> As long as the configuration parameter `dfs.namenode.audit.loggers` is set to 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> `initAuditLoggers` will try to call its default constructor to make a new 
> instance: 
> {code:java}
> private List<AuditLogger> initAuditLoggers(Configuration conf) {
>   // Initialize the custom access loggers if configured.
>   Collection<String> alClasses =
>       conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
>   List<AuditLogger> auditLoggers = 

[jira] [Comment Edited] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-31 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17027881#comment-17027881
 ] 

Ctest edited comment on HDFS-15124 at 1/31/20 11:28 PM:


I can try to write a test to do this, but it may take some time. (I am not 
sure whether it is easy or not.) Should I incorporate the test into this 
patch?


was (Author: ctest.team):
I can try to write a test to do this; it may take some time. Should I 
incorporate the test into this patch?

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Assignee: Ctest
>Priority: Critical
> Attachments: HDFS-15124.000.patch, HDFS-15124.001.patch, 
> HDFS-15124.002.patch, HDFS-15124.003.patch, HDFS-15124.004.patch, 
> HDFS-15124.005.patch
>
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> namenode will not be started successfully because of an 
> `InstantiationException` thrown from 
> `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing namenode, `initAuditLoggers` will 
> be called and it will try to call the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` which doesn't 
> have a default constructor. Thus the `InstantiationException` exception is 
> thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.java.lang.RuntimeException: 
> java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
> Caused by: java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at java.lang.Class.newInstance(Class.java:427)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)...
> 8 more
> Caused by: java.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
> at java.lang.Class.getConstructor0(Class.java:3082)
> at java.lang.Class.newInstance(Class.java:412)
> ... 9 more{code}
>  
>  
> *Detailed Root Cause*
> There is no default constructor in 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`: 
> {code:java}
> /** 
>  * An {@link AuditLogger} that sends logged data directly to the metrics 
>  * systems. It is used when the top service is used directly by the name node 
>  */ 
> @InterfaceAudience.Private 
> public class TopAuditLogger implements AuditLogger { 
>   public static final Logger LOG = 
> LoggerFactory.getLogger(TopAuditLogger.class); 
>   private final TopMetrics topMetrics; 
>   public TopAuditLogger(TopMetrics topMetrics) {
> Preconditions.checkNotNull(topMetrics, "Cannot init with a null " + 
> "TopMetrics");
> this.topMetrics = topMetrics; 
>   }
>   @Override
>   public void initialize(Configuration conf) { 
>   }
> {code}
> As long as the configuration parameter `dfs.namenode.audit.loggers` is set to 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> `initAuditLoggers` will try to call its default constructor to make a new 
> instance: 
> {code:java}
> private List<AuditLogger> initAuditLoggers(Configuration conf) {
>   // Initialize the custom access loggers if configured.
>   Collection<String> alClasses =
>       conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
>   List<AuditLogger> auditLoggers = Lists.newArrayList();
>   if (alClasses != null && !alClasses.isEmpty()) {
>     for (String className : 

[jira] [Commented] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-31 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17027881#comment-17027881
 ] 

Ctest commented on HDFS-15124:
--

I can try to write a test to do this; it may take some time. Should I 
incorporate the test into this patch?

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Assignee: Ctest
>Priority: Critical
> Attachments: HDFS-15124.000.patch, HDFS-15124.001.patch, 
> HDFS-15124.002.patch, HDFS-15124.003.patch, HDFS-15124.004.patch, 
> HDFS-15124.005.patch
>
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> namenode will not be started successfully because of an 
> `InstantiationException` thrown from 
> `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing namenode, `initAuditLoggers` will 
> be called and it will try to call the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` which doesn't 
> have a default constructor. Thus the `InstantiationException` exception is 
> thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.java.lang.RuntimeException: 
> java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
> Caused by: java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at java.lang.Class.newInstance(Class.java:427)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)...
> 8 more
> Caused by: java.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
> at java.lang.Class.getConstructor0(Class.java:3082)
> at java.lang.Class.newInstance(Class.java:412)
> ... 9 more{code}
>  
>  
> *Detailed Root Cause*
> There is no default constructor in 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`: 
> {code:java}
> /** 
>  * An {@link AuditLogger} that sends logged data directly to the metrics 
>  * systems. It is used when the top service is used directly by the name node 
>  */ 
> @InterfaceAudience.Private 
> public class TopAuditLogger implements AuditLogger { 
>   public static final Logger LOG = 
> LoggerFactory.getLogger(TopAuditLogger.class); 
>   private final TopMetrics topMetrics; 
>   public TopAuditLogger(TopMetrics topMetrics) {
> Preconditions.checkNotNull(topMetrics, "Cannot init with a null " + 
> "TopMetrics");
> this.topMetrics = topMetrics; 
>   }
>   @Override
>   public void initialize(Configuration conf) { 
>   }
> {code}
> As long as the configuration parameter `dfs.namenode.audit.loggers` is set to 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> `initAuditLoggers` will try to call its default constructor to make a new 
> instance: 
> {code:java}
> private List<AuditLogger> initAuditLoggers(Configuration conf) {
>   // Initialize the custom access loggers if configured.
>   Collection<String> alClasses =
>       conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
>   List<AuditLogger> auditLoggers = Lists.newArrayList();
>   if (alClasses != null && !alClasses.isEmpty()) {
>     for (String className : alClasses) {
>       try {
>         AuditLogger logger;
>         if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
>           logger = new DefaultAuditLogger();
>         } else {
>           logger = (AuditLogger) 

[jira] [Commented] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-31 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17027790#comment-17027790
 ] 

Ctest commented on HDFS-15124:
--

[~elgoiri] Thank you for pointing this out! The 004 patch has already fixed 
the checkstyle issue.

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Priority: Critical
> Attachments: HDFS-15124.000.patch, HDFS-15124.001.patch, 
> HDFS-15124.002.patch, HDFS-15124.003.patch, HDFS-15124.004.patch, 
> HDFS-15124.005.patch
>
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> namenode will not be started successfully because of an 
> `InstantiationException` thrown from 
> `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing namenode, `initAuditLoggers` will 
> be called and it will try to call the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` which doesn't 
> have a default constructor. Thus the `InstantiationException` exception is 
> thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.java.lang.RuntimeException: 
> java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
> Caused by: java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at java.lang.Class.newInstance(Class.java:427)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)...
> 8 more
> Caused by: java.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
> at java.lang.Class.getConstructor0(Class.java:3082)
> at java.lang.Class.newInstance(Class.java:412)
> ... 9 more{code}
>  
>  
> *Detailed Root Cause*
> There is no default constructor in 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`: 
> {code:java}
> /** 
>  * An {@link AuditLogger} that sends logged data directly to the metrics 
>  * systems. It is used when the top service is used directly by the name node 
>  */ 
> @InterfaceAudience.Private 
> public class TopAuditLogger implements AuditLogger { 
>   public static final Logger LOG = 
> LoggerFactory.getLogger(TopAuditLogger.class); 
>   private final TopMetrics topMetrics; 
>   public TopAuditLogger(TopMetrics topMetrics) {
> Preconditions.checkNotNull(topMetrics, "Cannot init with a null " + 
> "TopMetrics");
> this.topMetrics = topMetrics; 
>   }
>   @Override
>   public void initialize(Configuration conf) { 
>   }
> {code}
> As long as the configuration parameter `dfs.namenode.audit.loggers` is set to 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> `initAuditLoggers` will try to call its default constructor to make a new 
> instance: 
> {code:java}
> private List<AuditLogger> initAuditLoggers(Configuration conf) {
>   // Initialize the custom access loggers if configured.
>   Collection<String> alClasses =
>       conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
>   List<AuditLogger> auditLoggers = Lists.newArrayList();
>   if (alClasses != null && !alClasses.isEmpty()) {
>     for (String className : alClasses) {
>       try {
>         AuditLogger logger;
>         if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
>           logger = new DefaultAuditLogger();
>         } else {
>           logger = (AuditLogger) Class.forName(className).newInstance();
>         

[jira] [Updated] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-31 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HDFS-15124:
-
Attachment: HDFS-15124.005.patch

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Priority: Critical
> Attachments: HDFS-15124.000.patch, HDFS-15124.001.patch, 
> HDFS-15124.002.patch, HDFS-15124.003.patch, HDFS-15124.004.patch, 
> HDFS-15124.005.patch
>
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> namenode will not be started successfully because of an 
> `InstantiationException` thrown from 
> `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing namenode, `initAuditLoggers` will 
> be called and it will try to call the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` which doesn't 
> have a default constructor. Thus the `InstantiationException` exception is 
> thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.java.lang.RuntimeException: 
> java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
> Caused by: java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at java.lang.Class.newInstance(Class.java:427)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)...
> 8 more
> Caused by: java.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
> at java.lang.Class.getConstructor0(Class.java:3082)
> at java.lang.Class.newInstance(Class.java:412)
> ... 9 more{code}
>  
>  
> *Detailed Root Cause*
> There is no default constructor in 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`: 
> {code:java}
> /** 
>  * An {@link AuditLogger} that sends logged data directly to the metrics 
>  * systems. It is used when the top service is used directly by the name node 
>  */ 
> @InterfaceAudience.Private 
> public class TopAuditLogger implements AuditLogger { 
>   public static final Logger LOG = 
> LoggerFactory.getLogger(TopAuditLogger.class); 
>   private final TopMetrics topMetrics; 
>   public TopAuditLogger(TopMetrics topMetrics) {
> Preconditions.checkNotNull(topMetrics, "Cannot init with a null " + 
> "TopMetrics");
> this.topMetrics = topMetrics; 
>   }
>   @Override
>   public void initialize(Configuration conf) { 
>   }
> {code}
> As long as the configuration parameter `dfs.namenode.audit.loggers` is set to 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> `initAuditLoggers` will try to call its default constructor to make a new 
> instance: 
> {code:java}
> private List<AuditLogger> initAuditLoggers(Configuration conf) {
>   // Initialize the custom access loggers if configured.
>   Collection<String> alClasses =
>       conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
>   List<AuditLogger> auditLoggers = Lists.newArrayList();
>   if (alClasses != null && !alClasses.isEmpty()) {
>     for (String className : alClasses) {
>       try {
>         AuditLogger logger;
>         if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
>           logger = new DefaultAuditLogger();
>         } else {
>           logger = (AuditLogger) Class.forName(className).newInstance();
>         }
>         logger.initialize(conf);
>         auditLoggers.add(logger);
>       } catch (RuntimeException 

[jira] [Updated] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-31 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HDFS-15124:
-
Attachment: HDFS-15124.004.patch

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Priority: Critical
> Attachments: HDFS-15124.000.patch, HDFS-15124.001.patch, 
> HDFS-15124.002.patch, HDFS-15124.003.patch, HDFS-15124.004.patch
>
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> namenode will not be started successfully because of an 
> `InstantiationException` thrown from 
> `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing namenode, `initAuditLoggers` will 
> be called and it will try to call the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` which doesn't 
> have a default constructor. Thus the `InstantiationException` exception is 
> thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.java.lang.RuntimeException: 
> java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
> Caused by: java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at java.lang.Class.newInstance(Class.java:427)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)...
> 8 more
> Caused by: java.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
> at java.lang.Class.getConstructor0(Class.java:3082)
> at java.lang.Class.newInstance(Class.java:412)
> ... 9 more{code}
>  
>  
> *Detailed Root Cause*
> There is no default constructor in 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`: 
> {code:java}
> /** 
>  * An {@link AuditLogger} that sends logged data directly to the metrics 
>  * systems. It is used when the top service is used directly by the name node 
>  */ 
> @InterfaceAudience.Private 
> public class TopAuditLogger implements AuditLogger { 
>   public static final Logger LOG = 
> LoggerFactory.getLogger(TopAuditLogger.class); 
>   private final TopMetrics topMetrics; 
>   public TopAuditLogger(TopMetrics topMetrics) {
> Preconditions.checkNotNull(topMetrics, "Cannot init with a null " + 
> "TopMetrics");
> this.topMetrics = topMetrics; 
>   }
>   @Override
>   public void initialize(Configuration conf) { 
>   }
> {code}
> As long as the configuration parameter `dfs.namenode.audit.loggers` is set to 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> `initAuditLoggers` will try to call its default constructor to make a new 
> instance: 
> {code:java}
> private List<AuditLogger> initAuditLoggers(Configuration conf) {
>   // Initialize the custom access loggers if configured.
>   Collection<String> alClasses =
>       conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
>   List<AuditLogger> auditLoggers = Lists.newArrayList();
>   if (alClasses != null && !alClasses.isEmpty()) {
>     for (String className : alClasses) {
>       try {
>         AuditLogger logger;
>         if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
>           logger = new DefaultAuditLogger();
>         } else {
>           logger = (AuditLogger) Class.forName(className).newInstance();
>         }
>         logger.initialize(conf);
>         auditLoggers.add(logger);
>       } catch (RuntimeException re) {
>         throw re;
> 

[jira] [Updated] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-30 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HDFS-15124:
-
Attachment: (was: HDFS-15124.004.patch)

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Priority: Critical
> Attachments: HDFS-15124.000.patch, HDFS-15124.001.patch, 
> HDFS-15124.002.patch, HDFS-15124.003.patch
>
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> namenode will not be started successfully because of an 
> `InstantiationException` thrown from 
> `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing namenode, `initAuditLoggers` will 
> be called and it will try to call the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` which doesn't 
> have a default constructor. Thus the `InstantiationException` exception is 
> thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.java.lang.RuntimeException: 
> java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
> Caused by: java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at java.lang.Class.newInstance(Class.java:427)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)...
> 8 more
> Caused by: java.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
> at java.lang.Class.getConstructor0(Class.java:3082)
> at java.lang.Class.newInstance(Class.java:412)
> ... 9 more{code}
>  
>  
> *Detailed Root Cause*
> There is no default constructor in 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`: 
> {code:java}
> /** 
>  * An {@link AuditLogger} that sends logged data directly to the metrics 
>  * systems. It is used when the top service is used directly by the name node 
>  */ 
> @InterfaceAudience.Private 
> public class TopAuditLogger implements AuditLogger { 
>   public static final Logger LOG = 
> LoggerFactory.getLogger(TopAuditLogger.class); 
>   private final TopMetrics topMetrics; 
>   public TopAuditLogger(TopMetrics topMetrics) {
> Preconditions.checkNotNull(topMetrics, "Cannot init with a null " + 
> "TopMetrics");
> this.topMetrics = topMetrics; 
>   }
>   @Override
>   public void initialize(Configuration conf) { 
>   }
> {code}
> As long as the configuration parameter `dfs.namenode.audit.loggers` is set to 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> `initAuditLoggers` will try to call its default constructor to make a new 
> instance: 
> {code:java}
> private List<AuditLogger> initAuditLoggers(Configuration conf) {
>   // Initialize the custom access loggers if configured.
>   Collection<String> alClasses =
>       conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
>   List<AuditLogger> auditLoggers = Lists.newArrayList();
>   if (alClasses != null && !alClasses.isEmpty()) {
>     for (String className : alClasses) {
>       try {
>         AuditLogger logger;
>         if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
>           logger = new DefaultAuditLogger();
>         } else {
>           logger = (AuditLogger) Class.forName(className).newInstance();
>         }
>         logger.initialize(conf);
>         auditLoggers.add(logger);
>       } catch (RuntimeException re) {
>         throw re;
>       } 

[jira] [Updated] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-30 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HDFS-15124:
-
Attachment: HDFS-15124.004.patch

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Priority: Critical
> Attachments: HDFS-15124.000.patch, HDFS-15124.001.patch, 
> HDFS-15124.002.patch, HDFS-15124.003.patch, HDFS-15124.004.patch
>
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> namenode will not be started successfully because of an 
> `InstantiationException` thrown from 
> `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing namenode, `initAuditLoggers` will 
> be called and it will try to call the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` which doesn't 
> have a default constructor. Thus an `InstantiationException` is thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.java.lang.RuntimeException: 
> java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
> Caused by: java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at java.lang.Class.newInstance(Class.java:427)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)...
> 8 more
> Caused by: java.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
> at java.lang.Class.getConstructor0(Class.java:3082)
> at java.lang.Class.newInstance(Class.java:412)
> ... 9 more{code}
>  
>  
> *Detailed Root Cause*
> There is no default constructor in 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`: 
> {code:java}
> /** 
>  * An {@link AuditLogger} that sends logged data directly to the metrics 
>  * systems. It is used when the top service is used directly by the name node 
>  */ 
> @InterfaceAudience.Private 
> public class TopAuditLogger implements AuditLogger { 
>   public static final Logger LOG = 
>       LoggerFactory.getLogger(TopAuditLogger.class); 
> 
>   private final TopMetrics topMetrics; 
> 
>   public TopAuditLogger(TopMetrics topMetrics) {
>     Preconditions.checkNotNull(topMetrics, "Cannot init with a null " + 
>         "TopMetrics");
>     this.topMetrics = topMetrics; 
>   }
> 
>   @Override
>   public void initialize(Configuration conf) { 
>   }
> {code}
> As long as the configuration parameter `dfs.namenode.audit.loggers` is set to 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> `initAuditLoggers` will try to call its default constructor to make a new 
> instance: 
> {code:java}
> private List<AuditLogger> initAuditLoggers(Configuration conf) {
>   // Initialize the custom access loggers if configured.
>   Collection<String> alClasses =
>       conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
>   List<AuditLogger> auditLoggers = Lists.newArrayList();
>   if (alClasses != null && !alClasses.isEmpty()) {
>     for (String className : alClasses) {
>       try {
>         AuditLogger logger;
>         if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
>           logger = new DefaultAuditLogger();
>         } else {
>           logger = (AuditLogger) Class.forName(className).newInstance();
>         }
>         logger.initialize(conf);
>         auditLoggers.add(logger);
>       } catch (RuntimeException re) {
>         throw re;
> 
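>  
> Since `Class.newInstance()` requires a visible no-arg constructor, one 
> possible shape for a fix is to special-case logger classes that only expose a 
> richer constructor. The following is a hedged, self-contained sketch with 
> stand-in types (`TopLogger` and `Metrics` are made up here), not necessarily 
> the committed patch:
> {code:java}
> import java.util.ArrayList;
> import java.util.List;
> 
> public class AuditLoggerFactorySketch {
>   interface AuditLogger { void initialize(); }
> 
>   static class Metrics {}
> 
>   // Stand-in for TopAuditLogger: only a one-arg constructor.
>   static class TopLogger implements AuditLogger {
>     private final Metrics metrics;
>     TopLogger(Metrics metrics) { this.metrics = metrics; }
>     public void initialize() {}
>   }
> 
>   static AuditLogger create(String className, Metrics metrics) throws Exception {
>     Class<?> clazz = Class.forName(className);
>     // Construct the class that lacks a default constructor explicitly,
>     // instead of reflectively calling newInstance(), which would throw
>     // InstantiationException for it.
>     if (TopLogger.class.isAssignableFrom(clazz)) {
>       return new TopLogger(metrics);
>     }
>     return (AuditLogger) clazz.getDeclaredConstructor().newInstance();
>   }
> 
>   public static void main(String[] args) throws Exception {
>     List<AuditLogger> loggers = new ArrayList<>();
>     loggers.add(create(AuditLoggerFactorySketch.class.getName() + "$TopLogger",
>         new Metrics()));
>     loggers.get(0).initialize();
>     System.out.println("created " + loggers.size() + " logger(s)");
>   }
> }
> {code}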

[jira] [Commented] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-30 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17027070#comment-17027070
 ] 

Ctest commented on HDFS-15124:
--

[~elgoiri] Thank you for pointing this out. I have already uploaded a new patch 
for this.

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Priority: Critical
> Attachments: HDFS-15124.000.patch, HDFS-15124.001.patch, 
> HDFS-15124.002.patch, HDFS-15124.003.patch
>
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> namenode will not be started successfully because of an 
> `InstantiationException` thrown from 
> `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing namenode, `initAuditLoggers` will 
> be called and it will try to call the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` which doesn't 
> have a default constructor. Thus an `InstantiationException` is thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.java.lang.RuntimeException: 
> java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
> Caused by: java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at java.lang.Class.newInstance(Class.java:427)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)...
> 8 more
> Caused by: java.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
> at java.lang.Class.getConstructor0(Class.java:3082)
> at java.lang.Class.newInstance(Class.java:412)
> ... 9 more{code}
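>  
> The crash is reachable purely through configuration. A hedged sketch of the 
> triggering setting, using the in-process `Configuration` API rather than 
> `hdfs-site.xml` (the key and class names are taken from this report; running 
> it requires hadoop-common on the classpath):
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> 
> public class TriggerSketch {
>   public static void main(String[] args) {
>     Configuration conf = new Configuration();
>     conf.set("dfs.namenode.audit.loggers",
>         "org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger");
>     // Starting a NameNode with this conf reaches initAuditLoggers(), which
>     // then fails as in the stack trace above.
>     System.out.println(conf.get("dfs.namenode.audit.loggers"));
>   }
> }
> {code}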
>  
>  
> *Detailed Root Cause*
> There is no default constructor in 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`: 
> {code:java}
> /** 
>  * An {@link AuditLogger} that sends logged data directly to the metrics 
>  * systems. It is used when the top service is used directly by the name node 
>  */ 
> @InterfaceAudience.Private 
> public class TopAuditLogger implements AuditLogger { 
>   public static final Logger LOG = 
>       LoggerFactory.getLogger(TopAuditLogger.class); 
> 
>   private final TopMetrics topMetrics; 
> 
>   public TopAuditLogger(TopMetrics topMetrics) {
>     Preconditions.checkNotNull(topMetrics, "Cannot init with a null " + 
>         "TopMetrics");
>     this.topMetrics = topMetrics; 
>   }
> 
>   @Override
>   public void initialize(Configuration conf) { 
>   }
> {code}
> As long as the configuration parameter `dfs.namenode.audit.loggers` is set to 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> `initAuditLoggers` will try to call its default constructor to make a new 
> instance: 
> {code:java}
> private List<AuditLogger> initAuditLoggers(Configuration conf) {
>   // Initialize the custom access loggers if configured.
>   Collection<String> alClasses =
>       conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
>   List<AuditLogger> auditLoggers = Lists.newArrayList();
>   if (alClasses != null && !alClasses.isEmpty()) {
>     for (String className : alClasses) {
>       try {
>         AuditLogger logger;
>         if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
>           logger = new DefaultAuditLogger();
>         } else {
>           logger = (AuditLogger) Class.forName(className).newInstance();
>         }
>         logger.initialize(conf);
>         

[jira] [Updated] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-30 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HDFS-15124:
-
Attachment: HDFS-15124.003.patch

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Priority: Critical
> Attachments: HDFS-15124.000.patch, HDFS-15124.001.patch, 
> HDFS-15124.002.patch, HDFS-15124.003.patch
>
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> namenode will not be started successfully because of an 
> `InstantiationException` thrown from 
> `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing namenode, `initAuditLoggers` will 
> be called and it will try to call the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` which doesn't 
> have a default constructor. Thus an `InstantiationException` is thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.java.lang.RuntimeException: 
> java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
> Caused by: java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at java.lang.Class.newInstance(Class.java:427)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)...
> 8 more
> Caused by: java.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
> at java.lang.Class.getConstructor0(Class.java:3082)
> at java.lang.Class.newInstance(Class.java:412)
> ... 9 more{code}
>  
>  
> *Detailed Root Cause*
> There is no default constructor in 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`: 
> {code:java}
> /** 
>  * An {@link AuditLogger} that sends logged data directly to the metrics 
>  * systems. It is used when the top service is used directly by the name node 
>  */ 
> @InterfaceAudience.Private 
> public class TopAuditLogger implements AuditLogger { 
>   public static final Logger LOG = 
>       LoggerFactory.getLogger(TopAuditLogger.class); 
> 
>   private final TopMetrics topMetrics; 
> 
>   public TopAuditLogger(TopMetrics topMetrics) {
>     Preconditions.checkNotNull(topMetrics, "Cannot init with a null " + 
>         "TopMetrics");
>     this.topMetrics = topMetrics; 
>   }
> 
>   @Override
>   public void initialize(Configuration conf) { 
>   }
> {code}
> As long as the configuration parameter `dfs.namenode.audit.loggers` is set to 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> `initAuditLoggers` will try to call its default constructor to make a new 
> instance: 
> {code:java}
> private List<AuditLogger> initAuditLoggers(Configuration conf) {
>   // Initialize the custom access loggers if configured.
>   Collection<String> alClasses =
>       conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
>   List<AuditLogger> auditLoggers = Lists.newArrayList();
>   if (alClasses != null && !alClasses.isEmpty()) {
>     for (String className : alClasses) {
>       try {
>         AuditLogger logger;
>         if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
>           logger = new DefaultAuditLogger();
>         } else {
>           logger = (AuditLogger) Class.forName(className).newInstance();
>         }
>         logger.initialize(conf);
>         auditLoggers.add(logger);
>       } catch (RuntimeException re) {
>         throw re;
>       } catch 

[jira] [Comment Edited] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-30 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17027046#comment-17027046
 ] 

Ctest edited comment on HDFS-15124 at 1/30/20 10:22 PM:


Hi, [~elgoiri]  I have already run the 4 failed test classes with my patch in 
the official docker image and all of them passed successfully.

I feel like the failures are not related to the content in the patch. Actually, the 
content in the patch won't be executed unless 
`dfs.namenode.audit.loggers` is set to 
`org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.

Could you please help to check whether these failures are due to some flakiness 
in tests?

Thank you a lot!


was (Author: ctest.team):
[~elgoiri] 

I have already run the 4 failed test classes with my patch in the official 
docker image and all of them passed successfully.

I feel like the failures are not related to the content in the patch. Actually, the 
content in the patch won't be executed unless 
`dfs.namenode.audit.loggers` is set to 
`org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.

Could you please help to check whether these failures are due to some flakiness 
in tests?

Thank you a lot!

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Priority: Critical
> Attachments: HDFS-15124.000.patch, HDFS-15124.001.patch, 
> HDFS-15124.002.patch
>
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> namenode will not be started successfully because of an 
> `InstantiationException` thrown from 
> `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing namenode, `initAuditLoggers` will 
> be called and it will try to call the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` which doesn't 
> have a default constructor. Thus an `InstantiationException` is thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.java.lang.RuntimeException: 
> java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
> Caused by: java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at java.lang.Class.newInstance(Class.java:427)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)...
> 8 more
> Caused by: java.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
> at java.lang.Class.getConstructor0(Class.java:3082)
> at java.lang.Class.newInstance(Class.java:412)
> ... 9 more{code}
>  
>  
> *Detailed Root Cause*
> There is no default constructor in 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`: 
> {code:java}
> /** 
>  * An {@link AuditLogger} that sends logged data directly to the metrics 
>  * systems. It is used when the top service is used directly by the name node 
>  */ 
> @InterfaceAudience.Private 
> public class TopAuditLogger implements AuditLogger { 
>   public static final Logger LOG = 
>       LoggerFactory.getLogger(TopAuditLogger.class); 
> 
>   private final TopMetrics topMetrics; 
> 
>   public TopAuditLogger(TopMetrics topMetrics) {
>     Preconditions.checkNotNull(topMetrics, "Cannot init with a null " + 
>         "TopMetrics");
>     this.topMetrics = topMetrics; 
>   }
> 
>   @Override
>   public void initialize(Configuration conf) { 
>   }
> {code}
> As long 

[jira] [Comment Edited] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-30 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17027046#comment-17027046
 ] 

Ctest edited comment on HDFS-15124 at 1/30/20 10:22 PM:


Hi, [~elgoiri]  I have already run the 4 failed test classes with my patch in 
the official hadoop docker image and all of them passed successfully.

I feel like the failures are not related to the content in the patch. Actually, the 
content in the patch won't be executed unless 
`dfs.namenode.audit.loggers` is set to 
`org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.

Could you please help to check whether these failures are due to some flakiness 
in tests?

Thank you a lot!


was (Author: ctest.team):
Hi, [~elgoiri]  I have already run the 4 failed test classes with my patch in 
the official docker image and all of them passed successfully.

I feel like the failures are not related to the content in the patch. Actually, the 
content in the patch won't be executed unless 
`dfs.namenode.audit.loggers` is set to 
`org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.

Could you please help to check whether these failures are due to some flakiness 
in tests?

Thank you a lot!

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Priority: Critical
> Attachments: HDFS-15124.000.patch, HDFS-15124.001.patch, 
> HDFS-15124.002.patch
>
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> namenode will not be started successfully because of an 
> `InstantiationException` thrown from 
> `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing namenode, `initAuditLoggers` will 
> be called and it will try to call the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` which doesn't 
> have a default constructor. Thus an `InstantiationException` is thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.java.lang.RuntimeException: 
> java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
> Caused by: java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at java.lang.Class.newInstance(Class.java:427)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)...
> 8 more
> Caused by: java.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
> at java.lang.Class.getConstructor0(Class.java:3082)
> at java.lang.Class.newInstance(Class.java:412)
> ... 9 more{code}
>  
>  
> *Detailed Root Cause*
> There is no default constructor in 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`: 
> {code:java}
> /** 
>  * An {@link AuditLogger} that sends logged data directly to the metrics 
>  * systems. It is used when the top service is used directly by the name node 
>  */ 
> @InterfaceAudience.Private 
> public class TopAuditLogger implements AuditLogger { 
>   public static final Logger LOG = 
>       LoggerFactory.getLogger(TopAuditLogger.class); 
> 
>   private final TopMetrics topMetrics; 
> 
>   public TopAuditLogger(TopMetrics topMetrics) {
>     Preconditions.checkNotNull(topMetrics, "Cannot init with a null " + 
>         "TopMetrics");
>     this.topMetrics = topMetrics; 
>   }
> 
>   @Override
>   public void initialize(Configuration conf) { 
>   }
> {code}

[jira] [Commented] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-30 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17027046#comment-17027046
 ] 

Ctest commented on HDFS-15124:
--

[~elgoiri] 

I have already run the 4 failed test classes with my patch in the official 
docker image and all of them passed successfully.

I feel like the failures are not related to the content in the patch. Actually, the 
content in the patch won't be executed unless 
`dfs.namenode.audit.loggers` is set to 
`org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.

Could you please help to check whether these failures are due to some flakiness 
in tests?

Thank you a lot!

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Priority: Critical
> Attachments: HDFS-15124.000.patch, HDFS-15124.001.patch, 
> HDFS-15124.002.patch
>
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> namenode will not be started successfully because of an 
> `InstantiationException` thrown from 
> `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing namenode, `initAuditLoggers` will 
> be called and it will try to call the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` which doesn't 
> have a default constructor. Thus an `InstantiationException` is thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.java.lang.RuntimeException: 
> java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
> Caused by: java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at java.lang.Class.newInstance(Class.java:427)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)...
> 8 more
> Caused by: java.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
> at java.lang.Class.getConstructor0(Class.java:3082)
> at java.lang.Class.newInstance(Class.java:412)
> ... 9 more{code}
>  
>  
> *Detailed Root Cause*
> There is no default constructor in 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`: 
> {code:java}
> /** 
>  * An {@link AuditLogger} that sends logged data directly to the metrics 
>  * systems. It is used when the top service is used directly by the name node 
>  */ 
> @InterfaceAudience.Private 
> public class TopAuditLogger implements AuditLogger { 
>   public static final Logger LOG = 
>       LoggerFactory.getLogger(TopAuditLogger.class); 
> 
>   private final TopMetrics topMetrics; 
> 
>   public TopAuditLogger(TopMetrics topMetrics) {
>     Preconditions.checkNotNull(topMetrics, "Cannot init with a null " + 
>         "TopMetrics");
>     this.topMetrics = topMetrics; 
>   }
> 
>   @Override
>   public void initialize(Configuration conf) { 
>   }
> {code}
> As long as the configuration parameter `dfs.namenode.audit.loggers` is set to 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> `initAuditLoggers` will try to call its default constructor to make a new 
> instance: 
> {code:java}
> private List<AuditLogger> initAuditLoggers(Configuration conf) {
>   // Initialize the custom access loggers if configured.
>   Collection<String> alClasses =
>       conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
>   List<AuditLogger> auditLoggers = Lists.newArrayList();
>   if (alClasses != null && !alClasses.isEmpty()) {
>     

[jira] [Commented] (HDFS-15128) Unit test failing to clean testing data and crashed future Maven test run due to failure in TestDataNodeVolumeFailureToleration

2020-01-26 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17024070#comment-17024070
 ] 

Ctest commented on HDFS-15128:
--

[~ayushtkn], [~hudson], thank you!

> Unit test failing to clean testing data and crashed future Maven test run due 
> to failure in TestDataNodeVolumeFailureToleration
> ---
>
> Key: HDFS-15128
> URL: https://issues.apache.org/jira/browse/HDFS-15128
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, test
>Affects Versions: 3.2.1
>Reporter: Ctest
>Assignee: Ctest
>Priority: Critical
>  Labels: easyfix, patch, test
> Fix For: 3.3.0
>
> Attachments: HDFS-15128-000.patch, HDFS-15128-001.patch
>
>
> The actively-used test helper function `testVolumeConfig` in 
> `org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration` 
> chmods a directory to the invalid perm 000 for testing purposes, but fails 
> to chmod this directory back to a valid perm if the assertion inside this 
> function fails. Any subsequent `mvn test` command would fail to run if this 
> test had failed before. This is because Maven failed to build itself, as it did 
> not have permission to clean the temporarily-generated directory that has 
> perm 000. See below for the buggy code snippet.
> {code:java}
> try {
>   for (int i = 0; i < volumesFailed; i++) {
>     prepareDirToFail(dirs[i]); // this will chmod dirs[i] to perm 000
>   }
>   restartDatanodes(volumesTolerated, manageDfsDirs);
> } catch (DiskErrorException e) {
>   ...
> } finally {
>   ...
> }
>  
>   assertEquals(expectedBPServiceState, bpServiceState);
>  
>   for (File dir : dirs) {
>     FileUtil.chmod(dir.toString(), "755");
>   }
> }
> {code}
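>  
> A self-contained sketch of the safe ordering (stand-in code, not the actual 
> JUnit test): restore permissions in a `finally` block so cleanup is 
> unconditional, and run the assertion only afterwards, so a failing assertion 
> can no longer strand 000-perm directories:
> {code:java}
> import java.io.File;
> 
> public class PermCleanupSketch {
>   static void chmod755(File dir) {
>     // rough stand-in for FileUtil.chmod(dir.toString(), "755")
>     dir.setReadable(true, false);
>     dir.setWritable(true, false);
>     dir.setExecutable(true, false);
>   }
> 
>   public static void main(String[] args) {
>     File dir = new File("target/demo-data");
>     dir.mkdirs();
>     boolean expectedState = true;
>     boolean actualState = true; // stand-in for bpServiceState
>     try {
>       // stand-in for prepareDirToFail(dirs[i]): make the dir inaccessible
>       dir.setReadable(false, false);
>       dir.setExecutable(false, false);
>       // ... exercise the code under test here ...
>     } finally {
>       chmod755(dir); // runs even if the test body throws
>     }
>     // the assertion comes last, mirroring the described fix
>     if (expectedState != actualState) {
>       throw new AssertionError("BPService state mismatch");
>     }
>   }
> }
> {code}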
> The failure of the statement `assertEquals(expectedBPServiceState, 
> bpServiceState)` caused the function to terminate without executing 
> `FileUtil.chmod(dir.toString(), "755")` for each temporary directory with 
> invalid perm 000 that the test created. 
>  
> *Consequence*
> Any subsequent `mvn test` command would fail to run if this test had failed 
> before. This is because Maven failed to build itself, since it does not have 
> permission to clean this temporarily-generated directory. For details of the 
> failure, see below:
> {noformat}
> [INFO] --- maven-antrun-plugin:1.7:run (create-log-dir) @ hadoop-hdfs ---
> [INFO] Executing tasks
>  
> main:
> [delete] Deleting directory 
> /home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time:  8.349 s
> [INFO] Finished at: 2019-12-27T03:53:04-06:00
> [INFO] 
> 
> [ERROR] Failed to execute 
> goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (create-log-dir) on 
> project hadoop-hdfs: An Ant BuildException has occured: Unable to delete 
> directory 
> /home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current
> [ERROR] around Ant part ...<delete dir="/home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data"/>...
>  @ 4:105 in 
> /home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException{noformat}
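>  
> The build-time symptom itself is easy to reproduce: on a typical POSIX system 
> (running as a non-root user), a 000-perm directory cannot be listed, so a 
> recursive delete such as Ant's `<delete>` cannot descend into it. A minimal 
> sketch with a hypothetical path:
> {code:java}
> import java.io.File;
> 
> public class UndeletableDirDemo {
>   public static void main(String[] args) {
>     File parent = new File("target/demo/noperm");
>     new File(parent, "child").mkdirs();
>     parent.setReadable(false, false);
>     parent.setExecutable(false, false);
>     // listFiles() returns null on an unreadable directory, and delete()
>     // fails because the directory is not empty.
>     System.out.println("listFiles() = " + parent.listFiles());
>     System.out.println("delete()    = " + parent.delete());
>     // restore perms so this demo directory can itself be cleaned up
>     parent.setReadable(true, false);
>     parent.setExecutable(true, false);
>   }
> }
> {code}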
>  
> *Root Cause*
> The test helper function 
> `org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration#testVolumeConfig`
>  purposely set the directory 
> `/home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current`
>  to have perm 000, and at the end of this function it changed the perm of 
> this directory back to 755. However, there is an assertion in this function before 
> the perm can be changed back to 755. Once this assertion fails, the function 
> terminates before the directory’s perm is restored to 755. Hence, this 
> directory could not later be removed by Maven when executing `mvn 
> test`. 
>  
> *Fix*
> In 
> `org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration#testVolumeConfig`,
>  move the assertion `assertEquals(expectedBPServiceState, 

[jira] [Commented] (HDFS-15128) Unit test failing to clean testing data and crashed future Maven test run

2020-01-24 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17023330#comment-17023330
 ] 

Ctest commented on HDFS-15128:
--

[~ayushtkn], thank you for the comment. I have uploaded a new patch 
*HDFS-15128-001.patch* to reflect the change as suggested.

> Unit test failing to clean testing data and crashed future Maven test run
> -
>
> Key: HDFS-15128
> URL: https://issues.apache.org/jira/browse/HDFS-15128
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, test
>Affects Versions: 3.2.1
>Reporter: Ctest
>Priority: Critical
>  Labels: easyfix, patch, test
> Attachments: HDFS-15128-000.patch, HDFS-15128-001.patch
>
>
> The actively-used test helper function `testVolumeConfig` in 
> `org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration` 
> chmods a directory to the invalid perm 000 for testing purposes, but fails 
> to chmod this directory back to a valid perm if the assertion inside this 
> function fails. Any subsequent `mvn test` command would fail to run if this 
> test had failed before. This is because Maven failed to build itself, as it did 
> not have permission to clean the temporarily-generated directory that has 
> perm 000. See below for the buggy code snippet.
> {code:java}
> try {
>   for (int i = 0; i < volumesFailed; i++) {
>     prepareDirToFail(dirs[i]); // this will chmod dirs[i] to perm 000
>   }
>   restartDatanodes(volumesTolerated, manageDfsDirs);
> } catch (DiskErrorException e) {
>   ...
> } finally {
>   ...
> }
>  
>   assertEquals(expectedBPServiceState, bpServiceState);
>  
>   for (File dir : dirs) {
>     FileUtil.chmod(dir.toString(), "755");
>   }
> }
> {code}
> The failure of the statement `assertEquals(expectedBPServiceState, 
> bpServiceState)` caused the function to terminate without executing 
> `FileUtil.chmod(dir.toString(), "755")` for each temporary directory with 
> invalid perm 000 that the test created. 
>  
> *Consequence*
> Any subsequent `mvn test` command would fail to run if this test had failed 
> before. This is because Maven failed to build itself, since it does not have 
> permission to clean this temporarily-generated directory. For details of the 
> failure, see below:
> {noformat}
> [INFO] --- maven-antrun-plugin:1.7:run (create-log-dir) @ hadoop-hdfs ---
> [INFO] Executing tasks
>  
> main:
> [delete] Deleting directory 
> /home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time:  8.349 s
> [INFO] Finished at: 2019-12-27T03:53:04-06:00
> [INFO] 
> 
> [ERROR] Failed to execute 
> goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (create-log-dir) on 
> project hadoop-hdfs: An Ant BuildException has occured: Unable to delete 
> directory 
> /home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current
> [ERROR] around Ant part ...<delete dir="/home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data"/>...
>  @ 4:105 in 
> /home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException{noformat}
>  
> *Root Cause*
> The test helper function 
> `org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration#testVolumeConfig`
>  purposely set the directory 
> `/home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current`
>  to have perm 000, and at the end of this function it changed the perm of 
> this directory back to 755. However, there is an assertion in this function before 
> the perm can be changed back to 755. Once this assertion fails, the function 
> terminates before the directory’s perm is restored to 755. Hence, this 
> directory could not later be removed by Maven when executing `mvn 
> test`. 
>  
> *Fix*
> In 
> `org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration#testVolumeConfig`,
>  move the assertion `assertEquals(expectedBPServiceState, bpServiceState)`  
> to the last line of this function. This change fixes the bug 

[jira] [Updated] (HDFS-15128) Unit test failing to clean testing data and crashed future Maven test run

2020-01-24 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HDFS-15128:
-
Attachment: (was: HDFS-15128-000.patch)

> Unit test failing to clean testing data and crashed future Maven test run
> -
>
> Key: HDFS-15128
> URL: https://issues.apache.org/jira/browse/HDFS-15128
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, test
>Affects Versions: 3.2.1
>Reporter: Ctest
>Priority: Critical
>  Labels: easyfix, patch, test
> Attachments: HDFS-15128-000.patch, HDFS-15128-001.patch
>
>
> The actively-used test helper function `testVolumeConfig` in 
> `org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration` 
> chmods a directory to the invalid perm 000 for testing purposes, but fails 
> to chmod this directory back to a valid perm if the assertion inside this 
> function fails. Any subsequent `mvn test` command would fail to run if this 
> test had failed before. This is because Maven failed to build itself, as it did 
> not have permission to clean the temporarily-generated directory that has 
> perm 000. See below for the buggy code snippet.
> {code:java}
> try {
>   for (int i = 0; i < volumesFailed; i++) {
>     prepareDirToFail(dirs[i]); // this will chmod dirs[i] to perm 000
>   }
>   restartDatanodes(volumesTolerated, manageDfsDirs);
> } catch (DiskErrorException e) {
>   ...
> } finally {
>   ...
> }
>  
>   assertEquals(expectedBPServiceState, bpServiceState);
>  
>   for (File dir : dirs) {
>     FileUtil.chmod(dir.toString(), "755");
>   }
> }
> {code}
> The failure of the statement `assertEquals(expectedBPServiceState, 
> bpServiceState)` caused the function to terminate without executing 
> `FileUtil.chmod(dir.toString(), "755")` for each temporary directory with 
> invalid perm 000 that the test created. 
>  
> *Consequence*
> Any subsequent `mvn test` command would fail to run if this test had failed 
> before. This is because Maven failed to build itself, since it does not have 
> permission to clean this temporarily-generated directory. For details of the 
> failure, see below:
> {noformat}
> [INFO] --- maven-antrun-plugin:1.7:run (create-log-dir) @ hadoop-hdfs ---
> [INFO] Executing tasks
>  
> main:
> [delete] Deleting directory 
> /home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time:  8.349 s
> [INFO] Finished at: 2019-12-27T03:53:04-06:00
> [INFO] 
> 
> [ERROR] Failed to execute 
> goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (create-log-dir) on 
> project hadoop-hdfs: An Ant BuildException has occured: Unable to delete 
> directory 
> /home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current
> [ERROR] around Ant part ...<delete dir="/home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data"/>...
>  @ 4:105 in 
> /home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException{noformat}
>  
> *Root Cause*
> The test helper function 
> `org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration#testVolumeConfig`
>  purposely set the directory 
> `/home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current`
>  to have perm 000, and at the end of this function it changed the perm of 
> this directory back to 755. However, there is an assertion in this function before 
> the perm can be changed back to 755. Once this assertion fails, the function 
> terminates before the directory’s perm is restored to 755. Hence, this 
> directory could not later be removed by Maven when executing `mvn 
> test`. 
>  
> *Fix*
> In 
> `org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration#testVolumeConfig`,
>  move the assertion `assertEquals(expectedBPServiceState, bpServiceState)`  
> to the last line of this function. This change fixes the bug and does not 
> change the test outcome. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HDFS-15128) Unit test failing to clean testing data and crashed future Maven test run

2020-01-24 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HDFS-15128:
-
Attachment: HDFS-15128-001.patch

> Unit test failing to clean testing data and crashed future Maven test run
> -
>
> Key: HDFS-15128
> URL: https://issues.apache.org/jira/browse/HDFS-15128
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, test
>Affects Versions: 3.2.1
>Reporter: Ctest
>Priority: Critical
>  Labels: easyfix, patch, test
> Attachments: HDFS-15128-000.patch, HDFS-15128-001.patch
>
>
> The actively-used test helper function `testVolumeConfig` in 
> `org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration` 
> chmods a directory to the invalid perm 000 for testing purposes, but fails 
> to chmod this directory back to a valid perm if the assertion inside this 
> function fails. Any subsequent `mvn test` command would fail to run if this 
> test had failed before. This is because Maven failed to build itself, as it did 
> not have permission to clean the temporarily-generated directory that has 
> perm 000. See below for the buggy code snippet.
> {code:java}
> try {
>   for (int i = 0; i < volumesFailed; i++) {
>     prepareDirToFail(dirs[i]); // this will chmod dirs[i] to perm 000
>   }
>   restartDatanodes(volumesTolerated, manageDfsDirs);
> } catch (DiskErrorException e) {
>   ...
> } finally {
>   ...
> }
>  
>   assertEquals(expectedBPServiceState, bpServiceState);
>  
>   for (File dir : dirs) {
>     FileUtil.chmod(dir.toString(), "755");
>   }
> }
> {code}
> The failure of the statement `assertEquals(expectedBPServiceState, 
> bpServiceState)` caused the function to terminate without executing 
> `FileUtil.chmod(dir.toString(), "755")` for each temporary directory with 
> invalid perm 000 that the test created. 
>  
> *Consequence*
> Any subsequent `mvn test` command would fail to run if this test had failed 
> before. This is because Maven failed to build itself, since it does not have 
> permission to clean this temporarily-generated directory. For details of the 
> failure, see below:
> {noformat}
> [INFO] --- maven-antrun-plugin:1.7:run (create-log-dir) @ hadoop-hdfs ---
> [INFO] Executing tasks
>  
> main:
> [delete] Deleting directory 
> /home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time:  8.349 s
> [INFO] Finished at: 2019-12-27T03:53:04-06:00
> [INFO] 
> 
> [ERROR] Failed to execute 
> goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (create-log-dir) on 
> project hadoop-hdfs: An Ant BuildException has occured: Unable to delete 
> directory 
> /home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current
> [ERROR] around Ant part ...<delete dir="/home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data"/>...
>  @ 4:105 in 
> /home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException{noformat}
>  
> *Root Cause*
> The test helper function 
> `org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration#testVolumeConfig`
>  purposely set the directory 
> `/home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current`
>  to have perm 000, and at the end of this function it changed the perm of 
> this directory back to 755. However, there is an assertion in this function before 
> the perm can be changed back to 755. Once this assertion fails, the function 
> terminates before the directory’s perm is restored to 755. Hence, this 
> directory could not later be removed by Maven when executing `mvn 
> test`. 
>  
> *Fix*
> In 
> `org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration#testVolumeConfig`,
>  move the assertion `assertEquals(expectedBPServiceState, bpServiceState)`  
> to the last line of this function. This change fixes the bug and does not 
> change the test outcome. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HDFS-15128) Unit test failing to clean testing data and crashed future Maven test run

2020-01-22 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HDFS-15128:
-
Attachment: HDFS-15128-000.patch
Status: Patch Available  (was: Open)

> Unit test failing to clean testing data and crashed future Maven test run
> -
>
> Key: HDFS-15128
> URL: https://issues.apache.org/jira/browse/HDFS-15128
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, test
>Affects Versions: 3.2.1
>Reporter: Ctest
>Priority: Critical
>  Labels: easyfix, patch, test
> Attachments: HDFS-15128-000.patch, HDFS-15128-000.patch
>
>
> The actively-used test helper function `testVolumeConfig` in 
> `org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration` 
> chmods a directory to the invalid perm 000 for testing purposes, but fails 
> to chmod this directory back to a valid perm if the assertion inside this 
> function fails. Any subsequent `mvn test` command would fail to run if this 
> test had failed before. This is because Maven failed to build itself, as it did 
> not have permission to clean the temporarily-generated directory that has 
> perm 000. See below for the buggy code snippet.
> {code:java}
> try {
>   for (int i = 0; i < volumesFailed; i++) {
>     prepareDirToFail(dirs[i]); // this will chmod dirs[i] to perm 000
>   }
>   restartDatanodes(volumesTolerated, manageDfsDirs);
> } catch (DiskErrorException e) {
>   ...
> } finally {
>   ...
> }
>  
>   assertEquals(expectedBPServiceState, bpServiceState);
>  
>   for (File dir : dirs) {
>     FileUtil.chmod(dir.toString(), "755");
>   }
> }
> {code}
> The failure of the statement `assertEquals(expectedBPServiceState, 
> bpServiceState)` caused the function to terminate without executing 
> `FileUtil.chmod(dir.toString(), "755")` for each temporary directory with 
> invalid perm 000 that the test created. 
>  
> *Consequence*
> Any subsequent `mvn test` command would fail to run if this test had failed 
> before. This is because Maven failed to build itself, since it does not have 
> permission to clean this temporarily-generated directory. For details of the 
> failure, see below:
> {noformat}
> [INFO] --- maven-antrun-plugin:1.7:run (create-log-dir) @ hadoop-hdfs ---
> [INFO] Executing tasks
>  
> main:
> [delete] Deleting directory 
> /home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time:  8.349 s
> [INFO] Finished at: 2019-12-27T03:53:04-06:00
> [INFO] 
> 
> [ERROR] Failed to execute 
> goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (create-log-dir) on 
> project hadoop-hdfs: An Ant BuildException has occured: Unable to delete 
> directory 
> /home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current
> [ERROR] around Ant part ...<delete dir="/home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data"/>...
>  @ 4:105 in 
> /home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException{noformat}
>  
> *Root Cause*
> The test helper function 
> `org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration#testVolumeConfig`
>  purposely set the directory 
> `/home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current`
>  to have perm 000, and at the end of this function it changed the perm of 
> this directory back to 755. However, there is an assertion in this function before 
> the perm can be changed back to 755. Once this assertion fails, the function 
> terminates before the directory’s perm is restored to 755. Hence, this 
> directory could not later be removed by Maven when executing `mvn 
> test`. 
>  
> *Fix*
> In 
> `org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration#testVolumeConfig`,
>  move the assertion `assertEquals(expectedBPServiceState, bpServiceState)`  
> to the last line of this function. This change fixes the bug and does not 
> change the test outcome. 



--
This message was sent by Atlassian Jira

[jira] [Updated] (HDFS-15128) Unit test failing to clean testing data and crashed future Maven test run

2020-01-16 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HDFS-15128:
-
Summary: Unit test failing to clean testing data and crashed future Maven 
test run  (was: Unit test failing to clean testing data and crashed subsequent 
Maven test run)

> Unit test failing to clean testing data and crashed future Maven test run
> -
>
> Key: HDFS-15128
> URL: https://issues.apache.org/jira/browse/HDFS-15128
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, test
>Affects Versions: 3.2.1
>Reporter: Ctest
>Priority: Critical
>  Labels: easyfix, patch, test
> Attachments: HDFS-15128-000.patch
>
>
> The actively-used test helper function `testVolumeConfig` in 
> `org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration` 
> chmods a directory to the invalid perm 000 for testing purposes, but fails 
> to chmod this directory back to a valid perm if the assertion inside this 
> function fails. Any subsequent `mvn test` command would fail to run if this 
> test had failed before. This is because Maven failed to build itself, as it did 
> not have permission to clean the temporarily-generated directory that has 
> perm 000. See below for the buggy code snippet.
> {code:java}
> try {
>   for (int i = 0; i < volumesFailed; i++) {
>     prepareDirToFail(dirs[i]); // this will chmod dirs[i] to perm 000
>   }
>   restartDatanodes(volumesTolerated, manageDfsDirs);
> } catch (DiskErrorException e) {
>   ...
> } finally {
>   ...
> }
>  
>   assertEquals(expectedBPServiceState, bpServiceState);
>  
>   for (File dir : dirs) {
>     FileUtil.chmod(dir.toString(), "755");
>   }
> }
> {code}
> The failure of the statement `assertEquals(expectedBPServiceState, 
> bpServiceState)` caused the function to terminate without executing 
> `FileUtil.chmod(dir.toString(), "755")` for each temporary directory with 
> invalid perm 000 that the test created. 
>  
> *Consequence*
> Any subsequent `mvn test` command would fail to run if this test had failed 
> before. This is because Maven failed to build itself, since it does not have 
> permission to clean this temporarily-generated directory. For details of the 
> failure, see below:
> {noformat}
> [INFO] --- maven-antrun-plugin:1.7:run (create-log-dir) @ hadoop-hdfs ---
> [INFO] Executing tasks
>  
> main:
> [delete] Deleting directory 
> /home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time:  8.349 s
> [INFO] Finished at: 2019-12-27T03:53:04-06:00
> [INFO] 
> 
> [ERROR] Failed to execute 
> goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (create-log-dir) on 
> project hadoop-hdfs: An Ant BuildException has occured: Unable to delete 
> directory 
> /home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current
> [ERROR] around Ant part ...<delete dir="/home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data"/>...
>  @ 4:105 in 
> /home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException{noformat}
>  
> *Root Cause*
> The test helper function 
> `org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration#testVolumeConfig`
>  purposely set the directory 
> `/home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current`
>  to have perm 000, and at the end of this function it changed the perm of 
> this directory back to 755. However, there is an assertion in this function before 
> the perm can be changed back to 755. Once this assertion fails, the function 
> terminates before the directory’s perm is restored to 755. Hence, this 
> directory could not later be removed by Maven when executing `mvn 
> test`. 
>  
> *Fix*
> In 
> `org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration#testVolumeConfig`,
>  move the assertion `assertEquals(expectedBPServiceState, bpServiceState)`  
> to the last line of this function. This change fixes the bug and will not 
> 

[jira] [Updated] (HDFS-15128) Unit test failing to clean testing data and crashed subsequent Maven test run

2020-01-16 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HDFS-15128:
-
Summary: Unit test failing to clean testing data and crashed subsequent 
Maven test run  (was: Unit test failing to clean testing data and caused Maven 
to crash)

> Unit test failing to clean testing data and crashed subsequent Maven test run
> -
>
> Key: HDFS-15128
> URL: https://issues.apache.org/jira/browse/HDFS-15128
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, test
>Affects Versions: 3.2.1
>Reporter: Ctest
>Priority: Critical
>  Labels: easyfix, patch, test
> Attachments: HDFS-15128-000.patch
>
>
> Actively-used test helper function `testVolumeConfig` in 
> `org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration` 
> chmods a directory to the invalid perm 000 for testing purposes but later fails 
> to chmod this directory back to a valid perm if the assertion inside this 
> function fails. Any subsequent `mvn test` command would fail to run if this 
> test had failed before. This is because Maven fails to build itself, as it does 
> not have permission to clean the temporarily-generated directory that has 
> perm 000. See below for the buggy code snippet.
> {code:java}
> try {
>   for (int i = 0; i < volumesFailed; i++) {
> prepareDirToFail(dirs[i]); // this will chmod dirs[i] to perm 000
>   }
>   restartDatanodes(volumesTolerated, manageDfsDirs);
> } catch (DiskErrorException e) {
>  ...
> } finally {
> ...
> }
>  
>   assertEquals(expectedBPServiceState, bpServiceState);
>  
>   for (File dir : dirs) {
> FileUtil.chmod(dir.toString(), "755");
>   }
> }
> {code}
> The failure of the statement `assertEquals(expectedBPServiceState, 
> bpServiceState)` causes the function to terminate without executing 
> `FileUtil.chmod(dir.toString(), "755")` for each temporary directory with 
> invalid perm 000 that the test has created. 
>  
> *Consequence*
> Any subsequent `mvn test` command would fail to run if this test had failed 
> before. This is because Maven fails to build itself, since it does not have 
> permission to clean this temporarily-generated directory. For details of the 
> failure, see below:
> {noformat}
> [INFO] --- maven-antrun-plugin:1.7:run (create-log-dir) @ hadoop-hdfs ---
> [INFO] Executing tasks
>  
> main:
> [delete] Deleting directory 
> /home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time:  8.349 s
> [INFO] Finished at: 2019-12-27T03:53:04-06:00
> [INFO] 
> 
> [ERROR] Failed to execute 
> goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (create-log-dir) on 
> project hadoop-hdfs: An Ant BuildException has occured: Unable to delete 
> directory 
> /home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current
> [ERROR] around Ant part ...<delete dir="/home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data"/>...
>  @ 4:105 in 
> /home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException{noformat}
>  
> *Root Cause*
> The test helper function 
> `org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration#testVolumeConfig`
>  purposely sets the directory 
> `/home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current`
>  to have perm 000. At the end of this function, it changes the perm of 
> this directory back to 755. However, there is an assertion in this function 
> that runs before the perm is changed back to 755. Once this assertion fails, 
> the function terminates before the directory’s perm can be changed to 755. 
> Hence, this directory cannot later be removed by Maven when executing `mvn 
> test`. 
>  
> *Fix*
> In 
> `org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration#testVolumeConfig`,
>  move the assertion `assertEquals(expectedBPServiceState, bpServiceState)` 
> to the last line of this function. This fix resolves the bug and does not 
> change the test outcome.

[jira] [Updated] (HDFS-15128) Unit test failing to clean testing data and caused Maven to crash

2020-01-16 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HDFS-15128:
-
Description: 
Actively-used test helper function `testVolumeConfig` in 
`org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration` 
chmods a directory to the invalid perm 000 for testing purposes but later fails 
to chmod this directory back to a valid perm if the assertion inside this 
function fails. Any subsequent `mvn test` command would fail to run if this 
test had failed before. This is because Maven fails to build itself, as it does 
not have permission to clean the temporarily-generated directory that has perm 
000. See below for the buggy code snippet.
{code:java}
try {
  for (int i = 0; i < volumesFailed; i++) {
prepareDirToFail(dirs[i]); // this will chmod dirs[i] to perm 000
  }
  restartDatanodes(volumesTolerated, manageDfsDirs);
} catch (DiskErrorException e) {
 ...
} finally {
...
}
 
  assertEquals(expectedBPServiceState, bpServiceState);
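  // (If the assertion above fails, execution never reaches the chmod
  // loop below, and the 000-perm directories are left behind.)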
 
  for (File dir : dirs) {
FileUtil.chmod(dir.toString(), "755");
  }
}
{code}
The failure of the statement `assertEquals(expectedBPServiceState, 
bpServiceState)` causes the function to terminate without executing 
`FileUtil.chmod(dir.toString(), "755")` for each temporary directory with 
invalid perm 000 that the test has created. 

 

*Consequence*

Any subsequent `mvn test` command would fail to run if this test had failed 
before. This is because Maven fails to build itself, since it does not have 
permission to clean this temporarily-generated directory. For details of the 
failure, see below:
{noformat}
[INFO] --- maven-antrun-plugin:1.7:run (create-log-dir) @ hadoop-hdfs ---
[INFO] Executing tasks
 
main:
[delete] Deleting directory 
/home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time:  8.349 s
[INFO] Finished at: 2019-12-27T03:53:04-06:00
[INFO] 
[ERROR] Failed to execute 
goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (create-log-dir) on 
project hadoop-hdfs: An Ant BuildException has occured: Unable to delete 
directory 
/home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current
[ERROR] around Ant part ...<delete dir="/home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data"/>...
 @ 4:105 in 
/home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException{noformat}
 

*Root Cause*

The test helper function 
`org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration#testVolumeConfig`
 purposely sets the directory 
`/home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current`
 to have perm 000. At the end of this function, it changes the perm of this 
directory back to 755. However, there is an assertion in this function that 
runs before the perm is changed back to 755. Once this assertion fails, the 
function terminates before the directory’s perm can be changed to 755. Hence, 
this directory cannot later be removed by Maven when executing `mvn test`. 

 

*Fix*

In 
`org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration#testVolumeConfig`,
 move the assertion `assertEquals(expectedBPServiceState, bpServiceState)` to 
the last line of this function. This fix resolves the bug and does not change 
the test outcome. 
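
To make the ordering concrete, here is a minimal sketch of the fixed control 
flow. This is a hypothetical, self-contained helper rather than the actual 
patch; the names `dirs`, `expectedBPServiceState`, and `bpServiceState` mirror 
the snippet above, and the parameter types are assumptions:
{code:java}
import java.io.File;
import org.apache.hadoop.fs.FileUtil;
import static org.junit.Assert.assertEquals;

// Hypothetical helper illustrating the reordering described in the fix.
private void restorePermsThenAssert(File[] dirs,
    boolean expectedBPServiceState, boolean bpServiceState) throws Exception {
  // Restore a valid perm first, so Maven can always delete
  // target/test/data on the next build, even if the assertion fails.
  for (File dir : dirs) {
    FileUtil.chmod(dir.toString(), "755");
  }
  // The assertion is now the last statement: when it fails, the chmod
  // loop above has already run.
  assertEquals(expectedBPServiceState, bpServiceState);
}
{code}
An alternative with the same effect would be to run the chmod loop inside the 
`finally` block; the fix above keeps the simpler reordering.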

  was:
Actively-used test helper function `testVolumeConfig` in 
`org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration` 
chmods a directory to the invalid perm 000 for testing purposes but later fails 
to chmod this directory back to a valid perm if the assertion inside this 
function fails. Any subsequent `mvn test` command would fail to run if this 
test had failed before. This is because Maven fails to build itself, as it does 
not have permission to clean the temporarily-generated directory that has perm 
000. See below for the buggy code snippet.
{code:java}
try {
  for (int i = 0; i < volumesFailed; i++) {
prepareDirToFail(dirs[i]); // this will chmod dirs[i] to perm 000
  }
  restartDatanodes(volumesTolerated, manageDfsDirs);
} catch (DiskErrorException e) {
 ...
} finally {
...
}
 
  assertEquals(expectedBPServiceState, bpServiceState);
 
  

[jira] [Updated] (HDFS-15128) Unit test failing to clean testing data and caused Maven to crash

2020-01-16 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HDFS-15128:
-
Description: 
Actively-used test helper function `testVolumeConfig` in 
`org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration` 
chmods a directory to the invalid perm 000 for testing purposes but later fails 
to chmod this directory back to a valid perm if the assertion inside this 
function fails. Any subsequent `mvn test` command would fail to run if this 
test had failed before. This is because Maven fails to build itself, as it does 
not have permission to clean the temporarily-generated directory that has perm 
000. See below for the buggy code snippet.
{code:java}
try {
  for (int i = 0; i < volumesFailed; i++) {
prepareDirToFail(dirs[i]); // this will chmod dirs[i] to perm 000
  }
  restartDatanodes(volumesTolerated, manageDfsDirs);
} catch (DiskErrorException e) {
 ...
} finally {
...
}
 
  assertEquals(expectedBPServiceState, bpServiceState);
 
  for (File dir : dirs) {
FileUtil.chmod(dir.toString(), "755");
  }
}
{code}
The failure of the statement `assertEquals(expectedBPServiceState, 
bpServiceState)` causes the function to terminate without executing 
`FileUtil.chmod(dir.toString(), "755")` for each temporary directory with 
invalid perm 000 that the test has created. 

 

*Consequence:*

Any subsequent `mvn test` command would fail to run if this test had failed 
before. This is because Maven fails to build itself, since it does not have 
permission to clean this temporarily-generated directory. For details of the 
failure, see below:
{noformat}
[INFO] --- maven-antrun-plugin:1.7:run (create-log-dir) @ hadoop-hdfs ---
[INFO] Executing tasks
 
main:
[delete] Deleting directory 
/home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time:  8.349 s
[INFO] Finished at: 2019-12-27T03:53:04-06:00
[INFO] 
[ERROR] Failed to execute 
goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (create-log-dir) on 
project hadoop-hdfs: An Ant BuildException has occured: Unable to delete 
directory 
/home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current
[ERROR] around Ant part ...<delete dir="/home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data"/>...
 @ 4:105 in 
/home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException{noformat}
 

*Root Cause:*

The test helper function 
`org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration#testVolumeConfig`
 purposely sets the directory 
`/home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current`
 to have perm 000. At the end of this function, it changes the perm of this 
directory back to 755. However, there is an assertion in this function that 
runs before the perm is changed back to 755. Once this assertion fails, the 
function terminates before the directory’s perm can be changed to 755. Hence, 
this directory cannot later be removed by Maven when executing `mvn test`. 

 

*Fix:*

In 
`org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration#testVolumeConfig`,
 move the assertion `assertEquals(expectedBPServiceState, bpServiceState)` to 
the last line of this function. This fix resolves the bug and does not change 
the test outcome. 

  was:
*Description:*

Actively-used test helper function `testVolumeConfig` in 
`org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration` 
chmods a directory to the invalid perm 000 for testing purposes but later fails 
to chmod this directory back to a valid perm if the assertion inside this 
function fails. Any subsequent `mvn test` command would fail to run if this 
test had failed before. This is because Maven fails to build itself, as it does 
not have permission to clean the temporarily-generated directory that has perm 
000. See below for the buggy code snippet.
{code:java}
try {
  for (int i = 0; i < volumesFailed; i++) {
prepareDirToFail(dirs[i]); // this will chmod dirs[i] to perm 000
  }
  restartDatanodes(volumesTolerated, manageDfsDirs);
} catch (DiskErrorException e) {
 ...
} finally {
...
}
 
  assertEquals(expectedBPServiceState, 

[jira] [Updated] (HDFS-15128) Unit test failing to clean testing data and caused Maven to crash

2020-01-16 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HDFS-15128:
-
Description: 
*Description:*

Actively-used test helper function `testVolumeConfig` in 
`org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration` 
chmods a directory to the invalid perm 000 for testing purposes but later fails 
to chmod this directory back to a valid perm if the assertion inside this 
function fails. Any subsequent `mvn test` command would fail to run if this 
test had failed before. This is because Maven fails to build itself, as it does 
not have permission to clean the temporarily-generated directory that has perm 
000. See below for the buggy code snippet.
{code:java}
try {
  for (int i = 0; i < volumesFailed; i++) {
prepareDirToFail(dirs[i]); // this will chmod dirs[i] to perm 000
  }
  restartDatanodes(volumesTolerated, manageDfsDirs);
} catch (DiskErrorException e) {
 ...
} finally {
...
}
 
  assertEquals(expectedBPServiceState, bpServiceState);
 
  for (File dir : dirs) {
FileUtil.chmod(dir.toString(), "755");
  }
}
{code}
The failure of the statement `assertEquals(expectedBPServiceState, 
bpServiceState)` causes the function to terminate without executing 
`FileUtil.chmod(dir.toString(), "755")` for each temporary directory with 
invalid perm 000 that the test has created. 

 

*Consequence:*

Any subsequent `mvn test` command would fail to run if this test had failed 
before. This is because Maven fails to build itself, since it does not have 
permission to clean this temporarily-generated directory. For details of the 
failure, see below:
{noformat}
[INFO] --- maven-antrun-plugin:1.7:run (create-log-dir) @ hadoop-hdfs ---
[INFO] Executing tasks
 
main:
[delete] Deleting directory 
/home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time:  8.349 s
[INFO] Finished at: 2019-12-27T03:53:04-06:00
[INFO] 
[ERROR] Failed to execute 
goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (create-log-dir) on 
project hadoop-hdfs: An Ant BuildException has occured: Unable to delete 
directory 
/home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current
[ERROR] around Ant part ...<delete dir="/home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data"/>...
 @ 4:105 in 
/home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException{noformat}
 

*Root Cause:*

The test helper function 
`org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration#testVolumeConfig`
 purposely sets the directory 
`/home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current`
 to have perm 000. At the end of this function, it changes the perm of this 
directory back to 755. However, there is an assertion in this function that 
runs before the perm is changed back to 755. Once this assertion fails, the 
function terminates before the directory’s perm can be changed to 755. Hence, 
this directory cannot later be removed by Maven when executing `mvn test`. 

 

*Fix:*

In 
`org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration#testVolumeConfig`,
 move the assertion `assertEquals(expectedBPServiceState, bpServiceState)` to 
the last line of this function. This fix resolves the bug and does not change 
the test outcome. 

  was:
*Description:*

Actively-used test helper function `testVolumeConfig` in 
`org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration` 
chmods a directory to the invalid perm 000 for testing purposes but later fails 
to chmod this directory back to a valid perm if the assertion inside this 
function fails. Any subsequent `mvn test` command would fail to run if this 
test had failed before. This is because Maven fails to build itself, as it does 
not have permission to clean the temporarily-generated directory that has perm 
000. See below for the buggy code snippet.

 

 
{code:java}
try {
  for (int i = 0; i < volumesFailed; i++) {
prepareDirToFail(dirs[i]); // this will chmod dirs[i] to perm 000
  }
  restartDatanodes(volumesTolerated, manageDfsDirs);
} catch (DiskErrorException e) {
 ...
} finally {
...
}
 
  

[jira] [Updated] (HDFS-15128) Unit test failing to clean testing data and caused Maven to crash

2020-01-16 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HDFS-15128:
-
Attachment: HDFS-15128-000.patch
  Tags:   (was: test)

> Unit test failing to clean testing data and caused Maven to crash
> -
>
> Key: HDFS-15128
> URL: https://issues.apache.org/jira/browse/HDFS-15128
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs, test
>Affects Versions: 3.2.1
>Reporter: Ctest
>Priority: Critical
>  Labels: easyfix, patch, test
> Attachments: HDFS-15128-000.patch
>
>
> *Description:*
> Actively-used test helper function `testVolumeConfig` in 
> `org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration` 
> chmods a directory to the invalid perm 000 for testing purposes but later fails 
> to chmod this directory back to a valid perm if the assertion inside this 
> function fails. Any subsequent `mvn test` command would fail to run if this 
> test had failed before. This is because Maven fails to build itself, as it does 
> not have permission to clean the temporarily-generated directory that has 
> perm 000. See below for the buggy code snippet.
>  
>  
> {code:java}
> try {
>   for (int i = 0; i < volumesFailed; i++) {
> prepareDirToFail(dirs[i]); // this will chmod dirs[i] to perm 000
>   }
>   restartDatanodes(volumesTolerated, manageDfsDirs);
> } catch (DiskErrorException e) {
>  ...
> } finally {
> ...
> }
>  
>   assertEquals(expectedBPServiceState, bpServiceState);
>  
>   for (File dir : dirs) {
> FileUtil.chmod(dir.toString(), "755");
>   }
> }
> {code}
>  
>  
> The failure of the statement `assertEquals(expectedBPServiceState, 
> bpServiceState)` causes the function to terminate without executing 
> `FileUtil.chmod(dir.toString(), "755")` for each temporary directory with 
> invalid perm 000 that the test has created. 
>  
> *Consequence:*
> Any subsequent `mvn test` command would fail to run if this test had failed 
> before. This is because Maven fails to build itself, since it does not have 
> permission to clean this temporarily-generated directory. For details of the 
> failure, see below:
>  
> {noformat}
> [INFO] --- maven-antrun-plugin:1.7:run (create-log-dir) @ hadoop-hdfs ---
> [INFO] Executing tasks
>  
> main:
> [delete] Deleting directory 
> /home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time:  8.349 s
> [INFO] Finished at: 2019-12-27T03:53:04-06:00
> [INFO] 
> 
> [ERROR] Failed to execute 
> goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (create-log-dir) on 
> project hadoop-hdfs: An Ant BuildException has occured: Unable to delete 
> directory 
> /home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current
> [ERROR] around Ant part ...<delete dir="/home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data"/>...
>  @ 4:105 in 
> /home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException{noformat}
>  
>  
>  
> *Root Cause:*
> The test helper function 
> `org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration#testVolumeConfig`
>  purposely sets the directory 
> `/home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current`
>  to have perm 000. At the end of this function, it changes the perm of 
> this directory back to 755. However, there is an assertion in this function 
> that runs before the perm is changed back to 755. Once this assertion fails, 
> the function terminates before the directory’s perm can be changed to 755. 
> Hence, this directory cannot later be removed by Maven when executing `mvn 
> test`. 
>  
> *Fix:*
> In 
> `org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration#testVolumeConfig`,
>  move the assertion `assertEquals(expectedBPServiceState, bpServiceState)` 
> to the last line of this function. This fix resolves the bug and does not 
> change the test outcome. 
>  
> *Content for the patch:*
> {code:java}
> diff 

[jira] [Created] (HDFS-15128) Unit test failing to clean testing data and caused Maven to crash

2020-01-16 Thread Ctest (Jira)
Ctest created HDFS-15128:


 Summary: Unit test failing to clean testing data and caused Maven 
to crash
 Key: HDFS-15128
 URL: https://issues.apache.org/jira/browse/HDFS-15128
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs, test
Affects Versions: 3.2.1
Reporter: Ctest


*Description:*

Actively-used test helper function `testVolumeConfig` in 
`org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration` 
chmods a directory to the invalid perm 000 for testing purposes but later fails 
to chmod this directory back to a valid perm if the assertion inside this 
function fails. Any subsequent `mvn test` command would fail to run if this 
test had failed before. This is because Maven fails to build itself, as it does 
not have permission to clean the temporarily-generated directory that has perm 
000. See below for the buggy code snippet.

 

 
{code:java}
try {
  for (int i = 0; i < volumesFailed; i++) {
prepareDirToFail(dirs[i]); // this will chmod dirs[i] to perm 000
  }
  restartDatanodes(volumesTolerated, manageDfsDirs);
} catch (DiskErrorException e) {
 ...
} finally {
...
}
 
  assertEquals(expectedBPServiceState, bpServiceState);
 
  for (File dir : dirs) {
FileUtil.chmod(dir.toString(), "755");
  }
}
{code}
 

 

The failure of the statement `assertEquals(expectedBPServiceState, 
bpServiceState)` causes the function to terminate without executing 
`FileUtil.chmod(dir.toString(), "755")` for each temporary directory with 
invalid perm 000 that the test has created. 

 

*Consequence:*

Any subsequent `mvn test` command would fail to run if this test had failed 
before. This is because Maven fails to build itself, since it does not have 
permission to clean this temporarily-generated directory. For details of the 
failure, see below:

 
{noformat}
[INFO] --- maven-antrun-plugin:1.7:run (create-log-dir) @ hadoop-hdfs ---
[INFO] Executing tasks
 
main:
[delete] Deleting directory 
/home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time:  8.349 s
[INFO] Finished at: 2019-12-27T03:53:04-06:00
[INFO] 
[ERROR] Failed to execute 
goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (create-log-dir) on 
project hadoop-hdfs: An Ant BuildException has occured: Unable to delete 
directory 
/home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current
[ERROR] around Ant part ...<delete dir="/home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data"/>...
 @ 4:105 in 
/home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException{noformat}
 

 

 

*Root Cause:*

The test helper function 
`org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration#testVolumeConfig`
 purposely sets the directory 
`/home/ctest/app/Ctest-Hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data1/current`
 to have perm 000. At the end of this function, it changes the perm of this 
directory back to 755. However, there is an assertion in this function that 
runs before the perm is changed back to 755. Once this assertion fails, the 
function terminates before the directory’s perm can be changed to 755. Hence, 
this directory cannot later be removed by Maven when executing `mvn test`. 

 

*Fix:*

In 
`org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration#testVolumeConfig`,
 move the assertion `assertEquals(expectedBPServiceState, bpServiceState)` to 
the last line of this function. This fix resolves the bug and does not change 
the test outcome. 

 

*Content for the patch:*
{code:java}
diff --git 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureToleration.java
 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureToleration.java
index a9e4096df4b..a492fa5fd44 100644
--- 
a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureToleration.java
+++ 
b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDataNodeVolumeFailureToleration.java
@@ -256,11 +256,11 @@ private void testVolumeConfig(int volumesTolerated, int 

[jira] [Commented] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-16 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17017619#comment-17017619
 ] 

Ctest commented on HDFS-15124:
--

Uploaded one new patch for trunk.

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Priority: Critical
> Attachments: HDFS-15124.000.patch, HDFS-15124.001.patch, 
> HDFS-15124.002.patch
>
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, the 
> namenode will not start successfully because of an `InstantiationException` 
> thrown from `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing the namenode, `initAuditLoggers` is 
> called and tries to invoke the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, which doesn't 
> have one, so the `InstantiationException` is thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.java.lang.RuntimeException: 
> java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
> Caused by: java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at java.lang.Class.newInstance(Class.java:427)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)...
> 8 more
> Caused by: java.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
> at java.lang.Class.getConstructor0(Class.java:3082)
> at java.lang.Class.newInstance(Class.java:412)
> ... 9 more{code}
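> 
> For reference, a minimal standalone sketch (a hypothetical demo class, not 
> Hadoop code) that reproduces the same exception chain via reflection:
> {code:java}
> public class InstantiationDemo {
>   static class NoDefaultCtor {
>     NoDefaultCtor(String arg) { } // only a one-arg constructor exists
>   }
>   public static void main(String[] args) throws Exception {
>     // Throws java.lang.InstantiationException caused by
>     // NoSuchMethodException: NoDefaultCtor.<init>(), just like the trace
>     // above, because newInstance() requires a no-arg constructor.
>     Object unused = NoDefaultCtor.class.newInstance();
>   }
> }
> {code}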
>  
>  
> *Detailed Root Cause*
> There is no default constructor in 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`: 
> {code:java}
> /** 
>  * An {@link AuditLogger} that sends logged data directly to the metrics 
>  * systems. It is used when the top service is used directly by the name node 
>  */ 
> @InterfaceAudience.Private 
> public class TopAuditLogger implements AuditLogger { 
>   public static final Logger LOG = 
> LoggerFactory.getLogger(TopAuditLogger.class); 
>   private final TopMetrics topMetrics; 
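>   // Note: the only constructor is the one-arg constructor below; there is
>   // no no-arg constructor for reflective instantiation to call.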
>   public TopAuditLogger(TopMetrics topMetrics) {
> Preconditions.checkNotNull(topMetrics, "Cannot init with a null " + 
> "TopMetrics");
> this.topMetrics = topMetrics; 
>   }
>   @Override
>   public void initialize(Configuration conf) { 
>   }
> {code}
> As long as the configuration parameter `dfs.namenode.audit.loggers` is set to 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> `initAuditLoggers` will try to call its default constructor to make a new 
> instance: 
> {code:java}
> private List<AuditLogger> initAuditLoggers(Configuration conf) {
>   // Initialize the custom access loggers if configured.
>   Collection<String> alClasses =
>       conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
>   List<AuditLogger> auditLoggers = Lists.newArrayList();
>   if (alClasses != null && !alClasses.isEmpty()) {
>     for (String className : alClasses) {
>       try {
>         AuditLogger logger;
>         if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
>           logger = new DefaultAuditLogger();
>         } else {
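>           // newInstance() requires a no-arg constructor, which
>           // TopAuditLogger does not declare, so this line throws.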
>           logger = (AuditLogger) Class.forName(className).newInstance();
>         }
>         logger.initialize(conf);
>         auditLoggers.add(logger);
>       } catch (RuntimeException re) {
>         throw re;
>       

[jira] [Updated] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-16 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HDFS-15124:
-
Attachment: HDFS-15124.002.patch

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Priority: Critical
> Attachments: HDFS-15124.000.patch, HDFS-15124.001.patch, 
> HDFS-15124.002.patch
>
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, the 
> namenode will not start successfully because of an `InstantiationException` 
> thrown from `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing the namenode, `initAuditLoggers` is 
> called and tries to invoke the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, which doesn't 
> have one, so the `InstantiationException` is thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.java.lang.RuntimeException: 
> java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
> Caused by: java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at java.lang.Class.newInstance(Class.java:427)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)...
> 8 more
> Caused by: java.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
> at java.lang.Class.getConstructor0(Class.java:3082)
> at java.lang.Class.newInstance(Class.java:412)
> ... 9 more{code}
>  
>  
> *Detailed Root Cause*
> There is no default constructor in 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`: 
> {code:java}
> /** 
>  * An {@link AuditLogger} that sends logged data directly to the metrics 
>  * systems. It is used when the top service is used directly by the name node 
>  */ 
> @InterfaceAudience.Private 
> public class TopAuditLogger implements AuditLogger { 
>   public static final Logger LOG = 
> LoggerFactory.getLogger(TopAuditLogger.class); 
>   private final TopMetrics topMetrics; 
>   public TopAuditLogger(TopMetrics topMetrics) {
> Preconditions.checkNotNull(topMetrics, "Cannot init with a null " + 
> "TopMetrics");
> this.topMetrics = topMetrics; 
>   }
>   @Override
>   public void initialize(Configuration conf) { 
>   }
> {code}
> As long as the configuration parameter `dfs.namenode.audit.loggers` is set to 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> `initAuditLoggers` will try to call its default constructor to make a new 
> instance: 
> {code:java}
> private List<AuditLogger> initAuditLoggers(Configuration conf) {
>   // Initialize the custom access loggers if configured.
>   Collection<String> alClasses =
>       conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
>   List<AuditLogger> auditLoggers = Lists.newArrayList();
>   if (alClasses != null && !alClasses.isEmpty()) {
>     for (String className : alClasses) {
>       try {
>         AuditLogger logger;
>         if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
>           logger = new DefaultAuditLogger();
>         } else {
>           logger = (AuditLogger) Class.forName(className).newInstance();
>         }
>         logger.initialize(conf);
>         auditLoggers.add(logger);
>       } catch (RuntimeException re) {
>         throw re;
>       } catch (Exception e) {
>         

[jira] [Comment Edited] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-16 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17017418#comment-17017418
 ] 

Ctest edited comment on HDFS-15124 at 1/16/20 7:55 PM:
---

[~elgoiri] Sorry, I was using hadoop-2.10.0, where FSNamesystem was using 
{code:java}
public static final Log LOG = LogFactory.getLog(FSNamesystem.class);{code}
[https://github.com/apache/hadoop/blob/e2f1f118e465e787d8567dfa6e2f3b72a0eb9194/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L328]

It seems that 3.x.x is using org.slf4j.Logger but 2.x.x is using 
org.apache.commons.logging.Log for FSNamesystem.java.

I will write another patch for trunk and upload it again. Thank you for 
pointing this out!


was (Author: ctest.team):
I was using hadoop-2.10.0, where FSNamesystem was using 
{code:java}
public static final Log LOG = LogFactory.getLog(FSNamesystem.class);{code}
[https://github.com/apache/hadoop/blob/e2f1f118e465e787d8567dfa6e2f3b72a0eb9194/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L328]

Should I use the 3.x.x version to do the fix? It seems that 3.x.x is using 
org.slf4j.Logger but 2.x.x is using org.apache.commons.logging.Log for 
FSNamesystem.java

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Priority: Critical
> Attachments: HDFS-15124.000.patch, HDFS-15124.001.patch
>
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, the 
> namenode will not start successfully because of an `InstantiationException` 
> thrown from `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing the namenode, `initAuditLoggers` is 
> called and tries to invoke the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, which doesn't 
> have one, so the `InstantiationException` is thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.java.lang.RuntimeException: 
> java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
> Caused by: java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at java.lang.Class.newInstance(Class.java:427)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)...
> 8 more
> Caused by: java.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
> at java.lang.Class.getConstructor0(Class.java:3082)
> at java.lang.Class.newInstance(Class.java:412)
> ... 9 more{code}
>  
>  
> *Detailed Root Cause*
> There is no default constructor in 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`: 
> {code:java}
> /** 
>  * An {@link AuditLogger} that sends logged data directly to the metrics 
>  * systems. It is used when the top service is used directly by the name node 
>  */ 
> @InterfaceAudience.Private 
> public class TopAuditLogger implements AuditLogger { 
>   public static final Logger LOG = 
> LoggerFactory.getLogger(TopAuditLogger.class); 
>   private final TopMetrics topMetrics; 
>   public TopAuditLogger(TopMetrics topMetrics) {
> Preconditions.checkNotNull(topMetrics, "Cannot init with a null " + 
> "TopMetrics");
> this.topMetrics = topMetrics; 
>   }
>   

[jira] [Commented] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-16 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17017418#comment-17017418
 ] 

Ctest commented on HDFS-15124:
--

I was using hadoop-2.10.0, where FSNamesystem was using 
{code:java}
public static final Log LOG = LogFactory.getLog(FSNamesystem.class);{code}
[https://github.com/apache/hadoop/blob/e2f1f118e465e787d8567dfa6e2f3b72a0eb9194/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L328]

Should I use the 3.x.x version to do the fix? It seems that 3.x.x is using 
org.slf4j.Logger but 2.x.x is using org.apache.commons.logging.Log for 
FSNamesystem.java

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Priority: Critical
> Attachments: HDFS-15124.000.patch, HDFS-15124.001.patch
>
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, the 
> namenode will not start successfully because of an `InstantiationException` 
> thrown from `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing the namenode, `initAuditLoggers` is 
> called and tries to invoke the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, which doesn't 
> have one, so the `InstantiationException` is thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.java.lang.RuntimeException: 
> java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
> Caused by: java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at java.lang.Class.newInstance(Class.java:427)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)...
> 8 more
> Caused by: java.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
> at java.lang.Class.getConstructor0(Class.java:3082)
> at java.lang.Class.newInstance(Class.java:412)
> ... 9 more{code}
>  
>  
> *Detailed Root Cause*
> There is no default constructor in 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`: 
> {code:java}
> /** 
>  * An {@link AuditLogger} that sends logged data directly to the metrics 
>  * systems. It is used when the top service is used directly by the name node 
>  */ 
> @InterfaceAudience.Private 
> public class TopAuditLogger implements AuditLogger { 
>   public static final Logger LOG = 
> LoggerFactory.getLogger(TopAuditLogger.class); 
>   private final TopMetrics topMetrics; 
>   public TopAuditLogger(TopMetrics topMetrics) {
> Preconditions.checkNotNull(topMetrics, "Cannot init with a null " + 
> "TopMetrics");
> this.topMetrics = topMetrics; 
>   }
>   @Override
>   public void initialize(Configuration conf) { 
>   }
> {code}
> As long as the configuration parameter `dfs.namenode.audit.loggers` is set to 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> `initAuditLoggers` will try to call its default constructor to make a new 
> instance: 
> {code:java}
> private List<AuditLogger> initAuditLoggers(Configuration conf) {
>   // Initialize the custom access loggers if configured.
>   Collection<String> alClasses =
>       conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
>   List<AuditLogger> auditLoggers = Lists.newArrayList();
>   if (alClasses != null && !alClasses.isEmpty()) {
>     for 

[jira] [Commented] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-16 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17017329#comment-17017329
 ] 

Ctest commented on HDFS-15124:
--

I uploaded a new patch to do all these changes.

[~elgoiri] I am sorry I didn't use the {} style for the LOG in 
FSNamesystem.java: there, the LOG is an `org.apache.commons.logging.Log` 
instead of an `org.slf4j.Logger`, and `org.apache.commons.logging.Log` doesn't 
support
{code:java}
LOG.error("xxx {}", "yyy"){code}
Please let me know if anything else is needed. Thank you!
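
For reference, a minimal sketch of the API difference (the class name 
`LoggingStyles` is only for this example):
{code:java}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingStyles {
  private static final Log JCL_LOG = LogFactory.getLog(LoggingStyles.class);
  private static final Logger SLF4J_LOG =
      LoggerFactory.getLogger(LoggingStyles.class);

  public static void main(String[] args) {
    // org.slf4j.Logger (used on 3.x.x) substitutes {} placeholders:
    SLF4J_LOG.error("failed to load {}", "someClass");
    // org.apache.commons.logging.Log (used on 2.x.x) has no placeholder
    // support, so the message must be concatenated:
    JCL_LOG.error("failed to load " + "someClass");
  }
}
{code}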

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Priority: Critical
> Attachments: HDFS-15124.000.patch, HDFS-15124.001.patch
>
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, the 
> namenode will not start successfully because of an `InstantiationException` 
> thrown from `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing the namenode, `initAuditLoggers` is 
> called and tries to invoke the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, which doesn't 
> have one, so the `InstantiationException` is thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.java.lang.RuntimeException: 
> java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
> Caused by: java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at java.lang.Class.newInstance(Class.java:427)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)...
> 8 more
> Caused by: java.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
> at java.lang.Class.getConstructor0(Class.java:3082)
> at java.lang.Class.newInstance(Class.java:412)
> ... 9 more{code}
>  
>  
> *Detailed Root Cause*
> There is no default constructor in 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`: 
> {code:java}
> /** 
>  * An {@link AuditLogger} that sends logged data directly to the metrics 
>  * systems. It is used when the top service is used directly by the name node 
>  */ 
> @InterfaceAudience.Private 
> public class TopAuditLogger implements AuditLogger { 
>   public static final Logger LOG = 
> LoggerFactory.getLogger(TopAuditLogger.class); 
>   private final TopMetrics topMetrics; 
>   public TopAuditLogger(TopMetrics topMetrics) {
> Preconditions.checkNotNull(topMetrics, "Cannot init with a null " + 
> "TopMetrics");
> this.topMetrics = topMetrics; 
>   }
>   @Override
>   public void initialize(Configuration conf) { 
>   }
> {code}
> As long as the configuration parameter `dfs.namenode.audit.loggers` is set to 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> `initAuditLoggers` will try to call its default constructor to make a new 
> instance: 
> {code:java}
> private List<AuditLogger> initAuditLoggers(Configuration conf) {
>   // Initialize the custom access loggers if configured.
>   Collection<String> alClasses =
>       conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
>   List<AuditLogger> auditLoggers = Lists.newArrayList();
>   if (alClasses != null && !alClasses.isEmpty()) {
>     for (String className : alClasses) {
>       try {
>         AuditLogger logger;
>         if 

[jira] [Updated] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-16 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HDFS-15124:
-
Attachment: HDFS-15124.001.patch

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Priority: Critical
> Attachments: HDFS-15124.000.patch, HDFS-15124.001.patch
>
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, the 
> namenode will not start successfully because of an `InstantiationException` 
> thrown from `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing the namenode, `initAuditLoggers` is 
> called and tries to invoke the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, which doesn't 
> have one, so the `InstantiationException` is thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.java.lang.RuntimeException: 
> java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
> Caused by: java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at java.lang.Class.newInstance(Class.java:427)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)...
> 8 more
> Caused by: java.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
> at java.lang.Class.getConstructor0(Class.java:3082)
> at java.lang.Class.newInstance(Class.java:412)
> ... 9 more{code}
>  
>  
> *Detailed Root Cause*
> There is no default constructor in 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`: 
> {code:java}
> /** 
>  * An {@link AuditLogger} that sends logged data directly to the metrics 
>  * systems. It is used when the top service is used directly by the name node 
>  */ 
> @InterfaceAudience.Private 
> public class TopAuditLogger implements AuditLogger { 
>   public static final Logger LOG = 
> LoggerFactory.getLogger(TopAuditLogger.class); 
>   private final TopMetrics topMetrics; 
>   public TopAuditLogger(TopMetrics topMetrics) {
> Preconditions.checkNotNull(topMetrics, "Cannot init with a null " + 
> "TopMetrics");
> this.topMetrics = topMetrics; 
>   }
>   @Override
>   public void initialize(Configuration conf) { 
>   }
> {code}
> As long as the configuration parameter `dfs.namenode.audit.loggers` is set to 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> `initAuditLoggers` will try to call its default constructor to make a new 
> instance: 
> {code:java}
> private List<AuditLogger> initAuditLoggers(Configuration conf) {
>   // Initialize the custom access loggers if configured.
>   Collection<String> alClasses =
>       conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
>   List<AuditLogger> auditLoggers = Lists.newArrayList();
>   if (alClasses != null && !alClasses.isEmpty()) {
>     for (String className : alClasses) {
>       try {
>         AuditLogger logger;
>         if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
>           logger = new DefaultAuditLogger();
>         } else {
>           logger = (AuditLogger) Class.forName(className).newInstance();
>         }
>         logger.initialize(conf);
>         auditLoggers.add(logger);
>       } catch (RuntimeException re) {
>         throw re;
>       } catch (Exception e) {
>         throw new 

[jira] [Comment Edited] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-15 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17016458#comment-17016458
 ] 

Ctest edited comment on HDFS-15124 at 1/16/20 2:06 AM:
---

[~elgoiri] [~weichiu] Sure. I will upload a new patch to do that. Thank you for 
your suggestions.


was (Author: ctest.team):
[~elgoiri] [~weichiu] Sure. I will upload a new patch to do that.

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Priority: Critical
> Attachments: HDFS-15124.000.patch
>
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> namenode will not be started successfully because of an 
> `InstantiationException` thrown from 
> `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing namenode, `initAuditLoggers` will 
> be called and it will try to call the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` which doesn't 
> have a default constructor. Thus the `InstantiationException` exception is 
> thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.java.lang.RuntimeException: 
> java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
> Caused by: java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at java.lang.Class.newInstance(Class.java:427)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)...
> 8 more
> Caused by: java.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
> at java.lang.Class.getConstructor0(Class.java:3082)
> at java.lang.Class.newInstance(Class.java:412)
> ... 9 more{code}
>  
>  
> *Detailed Root Cause*
> There is no default constructor in 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`: 
> {code:java}
> /** 
>  * An {@link AuditLogger} that sends logged data directly to the metrics 
>  * systems. It is used when the top service is used directly by the name node 
>  */ 
> @InterfaceAudience.Private 
> public class TopAuditLogger implements AuditLogger { 
>   public static final Logger LOG = 
> LoggerFactory.getLogger(TopAuditLogger.class); 
>   private final TopMetrics topMetrics; 
>   public TopAuditLogger(TopMetrics topMetrics) {
> Preconditions.checkNotNull(topMetrics, "Cannot init with a null " + 
> "TopMetrics");
> this.topMetrics = topMetrics; 
>   }
>   @Override
>   public void initialize(Configuration conf) { 
>   }
> {code}
> As long as the configuration parameter `dfs.namenode.audit.loggers` is set to 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> `initAuditLoggers` will try to call its default constructor to make a new 
> instance: 
> {code:java}
> private List<AuditLogger> initAuditLoggers(Configuration conf) {
>   // Initialize the custom access loggers if configured.
>   Collection<String> alClasses =
>       conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
>   List<AuditLogger> auditLoggers = Lists.newArrayList();
>   if (alClasses != null && !alClasses.isEmpty()) {
>     for (String className : alClasses) {
>       try {
>         AuditLogger logger;
>         if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
>           logger = new DefaultAuditLogger();
>         } else {
>           logger = (AuditLogger) 

[jira] [Commented] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-15 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17016458#comment-17016458
 ] 

Ctest commented on HDFS-15124:
--

[~elgoiri] [~weichiu] Sure. I will upload a new patch to do that.

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Priority: Critical
> Attachments: HDFS-15124.000.patch
>
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> namenode will not be started successfully because of an 
> `InstantiationException` thrown from 
> `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing namenode, `initAuditLoggers` will 
> be called and it will try to call the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` which doesn't 
> have a default constructor. Thus the `InstantiationException` exception is 
> thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.java.lang.RuntimeException: 
> java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
> Caused by: java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at java.lang.Class.newInstance(Class.java:427)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)...
> 8 more
> Caused by: java.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
> at java.lang.Class.getConstructor0(Class.java:3082)
> at java.lang.Class.newInstance(Class.java:412)
> ... 9 more{code}
>  
>  
> *Detailed Root Cause*
> There is no default constructor in 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`: 
> {code:java}
> /** 
>  * An {@link AuditLogger} that sends logged data directly to the metrics 
>  * systems. It is used when the top service is used directly by the name node 
>  */ 
> @InterfaceAudience.Private 
> public class TopAuditLogger implements AuditLogger { 
>   public static final Logger LOG = 
> LoggerFactory.getLogger(TopAuditLogger.class); 
>   private final TopMetrics topMetrics; 
>   public TopAuditLogger(TopMetrics topMetrics) {
> Preconditions.checkNotNull(topMetrics, "Cannot init with a null " + 
> "TopMetrics");
> this.topMetrics = topMetrics; 
>   }
>   @Override
>   public void initialize(Configuration conf) { 
>   }
> {code}
> As long as the configuration parameter `dfs.namenode.audit.loggers` is set to 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> `initAuditLoggers` will try to call its default constructor to make a new 
> instance: 
> {code:java}
> private List<AuditLogger> initAuditLoggers(Configuration conf) {
>   // Initialize the custom access loggers if configured.
>   Collection<String> alClasses =
>       conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
>   List<AuditLogger> auditLoggers = Lists.newArrayList();
>   if (alClasses != null && !alClasses.isEmpty()) {
>     for (String className : alClasses) {
>       try {
>         AuditLogger logger;
>         if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
>           logger = new DefaultAuditLogger();
>         } else {
>           logger = (AuditLogger) Class.forName(className).newInstance();
>         }
>         logger.initialize(conf);
>         auditLoggers.add(logger);
>       } catch (RuntimeException re) {
>         throw re;
>       } catch 

[jira] [Comment Edited] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-14 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17015581#comment-17015581
 ] 

Ctest edited comment on HDFS-15124 at 1/15/20 3:30 AM:
---

[~elgoiri] Thank you for your reply, and this is a good point! How about 
catching InstantiationException in `initAuditLoggers(Configuration conf)`?

It will look like:
{code:java}
private List<AuditLogger> initAuditLoggers(Configuration conf) {
  // Initialize the custom access loggers if configured.
  Collection<String> alClasses =
      conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
  List<AuditLogger> auditLoggers = Lists.newArrayList();
  if (alClasses != null && !alClasses.isEmpty()) {
for (String className : alClasses) {
  try {
AuditLogger logger;
if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
  logger = new DefaultAuditLogger();
} else {
  logger = (AuditLogger) Class.forName(className).newInstance();
}
logger.initialize(conf);
auditLoggers.add(logger);
  } catch (InstantiationException e) {
LOG.error("Instantiation error for " + className);
throw new RuntimeException(e);
  } catch (RuntimeException re) {
throw re;
  } catch (Exception e) {
throw new RuntimeException(e);
  }
}
  }
{code}
The log error message can be refined. I can upload another patch for this if 
this is the right way to do it.
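
For context, a minimal standalone sketch (the demo classes are invented for 
illustration, not part of Hadoop) of why the reflective path throws here: on 
Java 8, `Class.newInstance()` requires a nullary constructor, and when none 
exists it throws InstantiationException with the underlying 
NoSuchMethodException as its cause, matching the stack trace in this issue.
{code:java}
// NewInstanceDemo.java -- hypothetical demo mirroring TopAuditLogger's shape:
// the nested class only has a one-argument constructor.
public class NewInstanceDemo {
  static class NoNullaryCtor {
    private final String name;
    NoNullaryCtor(String name) { this.name = name; }
  }

  public static void main(String[] args) {
    try {
      // Same pattern as initAuditLoggers: reflective no-arg instantiation.
      Object logger = NoNullaryCtor.class.newInstance();
      System.out.println("created " + logger);
    } catch (InstantiationException e) {
      // Reached: NoNullaryCtor has no nullary constructor.
      System.out.println("caught: " + e + ", cause: " + e.getCause());
    } catch (IllegalAccessException e) {
      System.out.println("caught: " + e);
    }
  }
}
{code}
Catching InstantiationException explicitly, as proposed above, mainly improves 
the error message; rethrowing it wrapped in a RuntimeException keeps the 
existing fail-fast behavior of `initAuditLoggers`.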


was (Author: ctest.team):
[~elgoiri] This is a good point! How about catching InstantiationException in 
`initAuditLoggers(Configuration conf)`

It will look like:
{code:java}
private List<AuditLogger> initAuditLoggers(Configuration conf) {
  // Initialize the custom access loggers if configured.
  Collection<String> alClasses =
      conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
  List<AuditLogger> auditLoggers = Lists.newArrayList();
  if (alClasses != null && !alClasses.isEmpty()) {
for (String className : alClasses) {
  try {
AuditLogger logger;
if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
  logger = new DefaultAuditLogger();
} else {
  logger = (AuditLogger) Class.forName(className).newInstance();
}
logger.initialize(conf);
auditLoggers.add(logger);
  } catch (InstantiationException e) {
LOG.error("Instantiation error for " + className);
throw new RuntimeException(e);
  } catch (RuntimeException re) {
throw re;
  } catch (Exception e) {
throw new RuntimeException(e);
  }
}
  }
{code}
The log error message can be refined. I can upload another patch for this if 
this is the right way to do it.

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Priority: Critical
> Attachments: HDFS-15124.000.patch
>
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> namenode will not be started successfully because of an 
> `InstantiationException` thrown from 
> `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing namenode, `initAuditLoggers` will 
> be called and it will try to call the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` which doesn't 
> have a default constructor. Thus the `InstantiationException` exception is 
> thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.java.lang.RuntimeException: 
> java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
> at 
> 

[jira] [Comment Edited] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-14 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17015581#comment-17015581
 ] 

Ctest edited comment on HDFS-15124 at 1/15/20 3:28 AM:
---

[~elgoiri] This is a good point! How about catching InstantiationException in 
`initAuditLoggers(Configuration conf)`

It will look like:
{code:java}
private List<AuditLogger> initAuditLoggers(Configuration conf) {
  // Initialize the custom access loggers if configured.
  Collection<String> alClasses =
      conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
  List<AuditLogger> auditLoggers = Lists.newArrayList();
  if (alClasses != null && !alClasses.isEmpty()) {
for (String className : alClasses) {
  try {
AuditLogger logger;
if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
  logger = new DefaultAuditLogger();
} else {
  logger = (AuditLogger) Class.forName(className).newInstance();
}
logger.initialize(conf);
auditLoggers.add(logger);
  } catch (InstantiationException e) {
LOG.error("Instantiation error for " + className);
throw new RuntimeException(e);
  } catch (RuntimeException re) {
throw re;
  } catch (Exception e) {
throw new RuntimeException(e);
  }
}
  }
{code}
The log error message can be refined. I can upload another patch for this if 
this is the right way to do it.


was (Author: ctest.team):
[~elgoiri] This is a good point!

How about catching InstantiationException in `initAuditLoggers(Configuration 
conf)`

It will look like:
{code:java}
private List<AuditLogger> initAuditLoggers(Configuration conf) {
  // Initialize the custom access loggers if configured.
  Collection<String> alClasses =
      conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
  List<AuditLogger> auditLoggers = Lists.newArrayList();
  if (alClasses != null && !alClasses.isEmpty()) {
for (String className : alClasses) {
  try {
AuditLogger logger;
if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
  logger = new DefaultAuditLogger();
} else {
  logger = (AuditLogger) Class.forName(className).newInstance();
}
logger.initialize(conf);
auditLoggers.add(logger);
  } catch (InstantiationException e) {
LOG.error("Instantiation error for " + className);
throw new RuntimeException(e);
  } catch (RuntimeException re) {
throw re;
  } catch (Exception e) {
throw new RuntimeException(e);
  }
}
  }
{code}
The log error message can be refined. I can upload another patch for this if 
this is the right way to do it.

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Priority: Critical
> Attachments: HDFS-15124.000.patch
>
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> namenode will not be started successfully because of an 
> `InstantiationException` thrown from 
> `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing namenode, `initAuditLoggers` will 
> be called and it will try to call the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` which doesn't 
> have a default constructor. Thus the `InstantiationException` exception is 
> thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.java.lang.RuntimeException: 
> java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
> at 
> 

[jira] [Comment Edited] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-14 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17015581#comment-17015581
 ] 

Ctest edited comment on HDFS-15124 at 1/15/20 3:26 AM:
---

[~elgoiri] This is a good point!

How about catching InstantiationException in `initAuditLoggers(Configuration 
conf)`

It will look like:
{code:java}
private List<AuditLogger> initAuditLoggers(Configuration conf) {
  // Initialize the custom access loggers if configured.
  Collection<String> alClasses =
      conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
  List<AuditLogger> auditLoggers = Lists.newArrayList();
  if (alClasses != null && !alClasses.isEmpty()) {
for (String className : alClasses) {
  try {
AuditLogger logger;
if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
  logger = new DefaultAuditLogger();
} else {
  logger = (AuditLogger) Class.forName(className).newInstance();
}
logger.initialize(conf);
auditLoggers.add(logger);
  } catch (InstantiationException e) {
LOG.error("Instantiation error for " + className);
throw new RuntimeException(e);
  } catch (RuntimeException re) {
throw re;
  } catch (Exception e) {
throw new RuntimeException(e);
  }
}
  }
{code}
The log error message can be refined. I can upload another patch for this if 
this is the right way to do it.


was (Author: ctest.team):
[~elgoiri] How about catching InstantiationException in 
`initAuditLoggers(Configuration conf)`

It will look like:
{code:java}
private List<AuditLogger> initAuditLoggers(Configuration conf) {
  // Initialize the custom access loggers if configured.
  Collection<String> alClasses =
      conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
  List<AuditLogger> auditLoggers = Lists.newArrayList();
  if (alClasses != null && !alClasses.isEmpty()) {
for (String className : alClasses) {
  try {
AuditLogger logger;
if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
  logger = new DefaultAuditLogger();
} else {
  logger = (AuditLogger) Class.forName(className).newInstance();
}
logger.initialize(conf);
auditLoggers.add(logger);
  } catch (InstantiationException e) {
LOG.error("Instantiation error for " + className);
throw new RuntimeException(e);
  } catch (RuntimeException re) {
throw re;
  } catch (Exception e) {
throw new RuntimeException(e);
  }
}
  }
{code}
The log error message can be refined. I can upload another patch for this if 
this is the right way to do it.

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Priority: Critical
> Attachments: HDFS-15124.000.patch
>
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> namenode will not be started successfully because of an 
> `InstantiationException` thrown from 
> `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing namenode, `initAuditLoggers` will 
> be called and it will try to call the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` which doesn't 
> have a default constructor. Thus the `InstantiationException` exception is 
> thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.java.lang.RuntimeException: 
> java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
> at 
> 

[jira] [Comment Edited] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-14 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17015581#comment-17015581
 ] 

Ctest edited comment on HDFS-15124 at 1/15/20 3:07 AM:
---

[~elgoiri] How about catching InstantiationException in 
`initAuditLoggers(Configuration conf)`

It will look like:
{code:java}
private List<AuditLogger> initAuditLoggers(Configuration conf) {
  // Initialize the custom access loggers if configured.
  Collection<String> alClasses =
      conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
  List<AuditLogger> auditLoggers = Lists.newArrayList();
  if (alClasses != null && !alClasses.isEmpty()) {
for (String className : alClasses) {
  try {
AuditLogger logger;
if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
  logger = new DefaultAuditLogger();
} else {
  logger = (AuditLogger) Class.forName(className).newInstance();
}
logger.initialize(conf);
auditLoggers.add(logger);
  } catch (InstantiationException e) {
LOG.error("Instantiation error for " + className);
throw new RuntimeException(e);
  } catch (RuntimeException re) {
throw re;
  } catch (Exception e) {
throw new RuntimeException(e);
  }
}
  }
{code}
The log error message can be refined. I can upload another patch for this if 
this is the right way to do it.


was (Author: ctest.team):
[~elgoiri] How about catching InstantiationException in 
`initAuditLoggers(Configuration conf)`

It will look like:
{code:java}
private List<AuditLogger> initAuditLoggers(Configuration conf) {
  // Initialize the custom access loggers if configured.
  Collection<String> alClasses =
      conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
  List<AuditLogger> auditLoggers = Lists.newArrayList();
  if (alClasses != null && !alClasses.isEmpty()) {
for (String className : alClasses) {
  try {
AuditLogger logger;
if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
  logger = new DefaultAuditLogger();
} else {
  logger = (AuditLogger) Class.forName(className).newInstance();
}
logger.initialize(conf);
auditLoggers.add(logger);
  } catch (InstantiationException e) {
LOG.error("Instantiation error for " + className);
throw new RuntimeException(e);
  } catch (RuntimeException re) {
throw re;
  } catch (Exception e) {
throw new RuntimeException(e);
  }
}
  }
{code}
The log error message can be refined. I can upload another patch for this.

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Priority: Critical
> Attachments: HDFS-15124.000.patch
>
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> namenode will not be started successfully because of an 
> `InstantiationException` thrown from 
> `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing namenode, `initAuditLoggers` will 
> be called and it will try to call the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` which doesn't 
> have a default constructor. Thus the `InstantiationException` exception is 
> thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.java.lang.RuntimeException: 
> java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
> at 

[jira] [Comment Edited] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-14 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17015581#comment-17015581
 ] 

Ctest edited comment on HDFS-15124 at 1/15/20 3:06 AM:
---

[~elgoiri] How about catching InstantiationException in 
`initAuditLoggers(Configuration conf)`

It will look like:
{code:java}
private List<AuditLogger> initAuditLoggers(Configuration conf) {
  // Initialize the custom access loggers if configured.
  Collection<String> alClasses =
      conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
  List<AuditLogger> auditLoggers = Lists.newArrayList();
  if (alClasses != null && !alClasses.isEmpty()) {
for (String className : alClasses) {
  try {
AuditLogger logger;
if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
  logger = new DefaultAuditLogger();
} else {
  logger = (AuditLogger) Class.forName(className).newInstance();
}
logger.initialize(conf);
auditLoggers.add(logger);
  } catch (InstantiationException e) {
LOG.error("Instantiation error for " + className);
throw new RuntimeException(e);
  } catch (RuntimeException re) {
throw re;
  } catch (Exception e) {
throw new RuntimeException(e);
  }
}
  }
{code}
The log error message can be refined. I can upload another patch for this.


was (Author: ctest.team):
[~elgoiri] How about catching InstantiationException in 
`initAuditLoggers(Configuration conf)`

It will look like:
{code:java}
private List<AuditLogger> initAuditLoggers(Configuration conf) {
  // Initialize the custom access loggers if configured.
  Collection<String> alClasses =
      conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
  List<AuditLogger> auditLoggers = Lists.newArrayList();
  if (alClasses != null && !alClasses.isEmpty()) {
for (String className : alClasses) {
  try {
AuditLogger logger;
if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
  logger = new DefaultAuditLogger();
} else {
  logger = (AuditLogger) Class.forName(className).newInstance();
}
logger.initialize(conf);
auditLoggers.add(logger);
  } catch (InstantiationException e) {
LOG.error("Instantiation Error For " + className);
throw new RuntimeException(e);
  } catch (RuntimeException re) {
throw re;
  } catch (Exception e) {
throw new RuntimeException(e);
  }
}
  }
{code}
The log error message can be refined. I can upload another patch for this.

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Priority: Critical
> Attachments: HDFS-15124.000.patch
>
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> namenode will not be started successfully because of an 
> `InstantiationException` thrown from 
> `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing namenode, `initAuditLoggers` will 
> be called and it will try to call the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` which doesn't 
> have a default constructor. Thus the `InstantiationException` exception is 
> thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.java.lang.RuntimeException: 
> java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
> Caused by: 

[jira] [Comment Edited] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-14 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17015581#comment-17015581
 ] 

Ctest edited comment on HDFS-15124 at 1/15/20 3:02 AM:
---

[~elgoiri] How about catching InstantiationException in 
initAuditLoggers(Configuration conf)

It will look like:
{code:java}
private List<AuditLogger> initAuditLoggers(Configuration conf) {
  // Initialize the custom access loggers if configured.
  Collection<String> alClasses =
      conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
  List<AuditLogger> auditLoggers = Lists.newArrayList();
  if (alClasses != null && !alClasses.isEmpty()) {
for (String className : alClasses) {
  try {
AuditLogger logger;
if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
  logger = new DefaultAuditLogger();
} else {
  logger = (AuditLogger) Class.forName(className).newInstance();
}
logger.initialize(conf);
auditLoggers.add(logger);
  } catch (InstantiationException e) {
LOG.error("Instantiation Error For " + className);
throw new RuntimeException(e);
  } catch (RuntimeException re) {
throw re;
  } catch (Exception e) {
throw new RuntimeException(e);
  }
}
  }
{code}
The log error message can be refined. I can upload another patch for this.


was (Author: ctest.team):
[~elgoiri] How about catch InstantiationException in 
initAuditLoggers(Configuration conf)

It will look like:
{code:java}
private List<AuditLogger> initAuditLoggers(Configuration conf) {
  // Initialize the custom access loggers if configured.
  Collection<String> alClasses =
      conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
  List<AuditLogger> auditLoggers = Lists.newArrayList();
  if (alClasses != null && !alClasses.isEmpty()) {
for (String className : alClasses) {
  try {
AuditLogger logger;
if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
  logger = new DefaultAuditLogger();
} else {
  logger = (AuditLogger) Class.forName(className).newInstance();
}
logger.initialize(conf);
auditLoggers.add(logger);
  } catch (InstantiationException e) {
LOG.error("Instantiation Error For " + className);
throw new RuntimeException(e);
  } catch (RuntimeException re) {
throw re;
  } catch (Exception e) {
throw new RuntimeException(e);
  }
}
  }
{code}
The log error message can be refined. I can upload another patch for this.

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Priority: Critical
> Attachments: HDFS-15124.000.patch
>
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> namenode will not be started successfully because of an 
> `InstantiationException` thrown from 
> `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing namenode, `initAuditLoggers` will 
> be called and it will try to call the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` which doesn't 
> have a default constructor. Thus the `InstantiationException` exception is 
> thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.java.lang.RuntimeException: 
> java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
> Caused by: 

[jira] [Comment Edited] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-14 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17015581#comment-17015581
 ] 

Ctest edited comment on HDFS-15124 at 1/15/20 3:03 AM:
---

[~elgoiri] How about catching InstantiationException in 
`initAuditLoggers(Configuration conf)`

It will look like:
{code:java}
private List<AuditLogger> initAuditLoggers(Configuration conf) {
  // Initialize the custom access loggers if configured.
  Collection<String> alClasses =
      conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
  List<AuditLogger> auditLoggers = Lists.newArrayList();
  if (alClasses != null && !alClasses.isEmpty()) {
for (String className : alClasses) {
  try {
AuditLogger logger;
if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
  logger = new DefaultAuditLogger();
} else {
  logger = (AuditLogger) Class.forName(className).newInstance();
}
logger.initialize(conf);
auditLoggers.add(logger);
  } catch (InstantiationException e) {
LOG.error("Instantiation Error For " + className);
throw new RuntimeException(e);
  } catch (RuntimeException re) {
throw re;
  } catch (Exception e) {
throw new RuntimeException(e);
  }
}
  }
{code}
The log error message can be refined. I can upload another patch for this.


was (Author: ctest.team):
[~elgoiri] How about catching InstantiationException in 
initAuditLoggers(Configuration conf)

It will look like:
{code:java}
private List<AuditLogger> initAuditLoggers(Configuration conf) {
  // Initialize the custom access loggers if configured.
  Collection<String> alClasses =
      conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
  List<AuditLogger> auditLoggers = Lists.newArrayList();
  if (alClasses != null && !alClasses.isEmpty()) {
for (String className : alClasses) {
  try {
AuditLogger logger;
if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
  logger = new DefaultAuditLogger();
} else {
  logger = (AuditLogger) Class.forName(className).newInstance();
}
logger.initialize(conf);
auditLoggers.add(logger);
  } catch (InstantiationException e) {
LOG.error("Instantiation Error For " + className);
throw new RuntimeException(e);
  } catch (RuntimeException re) {
throw re;
  } catch (Exception e) {
throw new RuntimeException(e);
  }
}
  }
{code}
The log error message can be refined. I can upload another patch for this.

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Priority: Critical
> Attachments: HDFS-15124.000.patch
>
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> namenode will not be started successfully because of an 
> `InstantiationException` thrown from 
> `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing namenode, `initAuditLoggers` will 
> be called and it will try to call the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` which doesn't 
> have a default constructor. Thus the `InstantiationException` exception is 
> thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.java.lang.RuntimeException: 
> java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
> Caused by: 

[jira] [Commented] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-14 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17015581#comment-17015581
 ] 

Ctest commented on HDFS-15124:
--

[~elgoiri] How about catch InstantiationException in 
initAuditLoggers(Configuration conf)

It will look like:
{code:java}
private List<AuditLogger> initAuditLoggers(Configuration conf) {
  // Initialize the custom access loggers if configured.
  Collection<String> alClasses =
      conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
  List<AuditLogger> auditLoggers = Lists.newArrayList();
  if (alClasses != null && !alClasses.isEmpty()) {
for (String className : alClasses) {
  try {
AuditLogger logger;
if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
  logger = new DefaultAuditLogger();
} else {
  logger = (AuditLogger) Class.forName(className).newInstance();
}
logger.initialize(conf);
auditLoggers.add(logger);
  } catch (InstantiationException e) {
LOG.error("Instantiation Error For " + className);
throw new RuntimeException(e);
  } catch (RuntimeException re) {
throw re;
  } catch (Exception e) {
throw new RuntimeException(e);
  }
}
  }
{code}
The log error message can be refined. I can upload another patch for this.

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Priority: Critical
> Attachments: HDFS-15124.000.patch
>
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> namenode will not be started successfully because of an 
> `InstantiationException` thrown from 
> `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing namenode, `initAuditLoggers` will 
> be called and it will try to call the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` which doesn't 
> have a default constructor. Thus the `InstantiationException` exception is 
> thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.java.lang.RuntimeException: 
> java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
> Caused by: java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at java.lang.Class.newInstance(Class.java:427)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)...
> 8 more
> Caused by: java.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
> at java.lang.Class.getConstructor0(Class.java:3082)
> at java.lang.Class.newInstance(Class.java:412)
> ... 9 more{code}
>  
>  
> *Detailed Root Cause*
> There is no default constructor in 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`: 
> {code:java}
> /** 
>  * An {@link AuditLogger} that sends logged data directly to the metrics 
>  * systems. It is used when the top service is used directly by the name node 
>  */ 
> @InterfaceAudience.Private 
> public class TopAuditLogger implements AuditLogger { 
>   public static final Logger LOG = 
> LoggerFactory.getLogger(TopAuditLogger.class); 
>   private final TopMetrics topMetrics; 
>   public TopAuditLogger(TopMetrics topMetrics) {
> Preconditions.checkNotNull(topMetrics, "Cannot init with a null " + 
> "TopMetrics");
> this.topMetrics = topMetrics; 
>   }
>   @Override
>   

[jira] [Updated] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-14 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HDFS-15124:
-
Attachment: (was: HDFS-15124.000.patch)

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Priority: Critical
> Attachments: HDFS-15124.000.patch
>
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> namenode will not be started successfully because of an 
> `InstantiationException` thrown from 
> `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing namenode, `initAuditLoggers` will 
> be called and it will try to call the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` which doesn't 
> have a default constructor. Thus the `InstantiationException` exception is 
> thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.java.lang.RuntimeException: 
> java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
> Caused by: java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at java.lang.Class.newInstance(Class.java:427)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)...
> 8 more
> Caused by: java.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
> at java.lang.Class.getConstructor0(Class.java:3082)
> at java.lang.Class.newInstance(Class.java:412)
> ... 9 more{code}
>  
>  
> *Detailed Root Cause*
> There is no default constructor in 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`: 
> {code:java}
> /** 
>  * An {@link AuditLogger} that sends logged data directly to the metrics 
>  * systems. It is used when the top service is used directly by the name node 
>  */ 
> @InterfaceAudience.Private 
> public class TopAuditLogger implements AuditLogger { 
>   public static final Logger LOG = 
> LoggerFactory.getLogger(TopAuditLogger.class); 
>   private final TopMetrics topMetrics; 
>   public TopAuditLogger(TopMetrics topMetrics) {
> Preconditions.checkNotNull(topMetrics, "Cannot init with a null " + 
> "TopMetrics");
> this.topMetrics = topMetrics; 
>   }
>   @Override
>   public void initialize(Configuration conf) { 
>   }
> {code}
> As long as the configuration parameter `dfs.namenode.audit.loggers` is set to 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> `initAuditLoggers` will try to call its default constructor to make a new 
> instance: 
> {code:java}
> private List<AuditLogger> initAuditLoggers(Configuration conf) {
>   // Initialize the custom access loggers if configured.
>   Collection<String> alClasses =
>       conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
>   List<AuditLogger> auditLoggers = Lists.newArrayList();
>   if (alClasses != null && !alClasses.isEmpty()) {
>     for (String className : alClasses) {
>       try {
>         AuditLogger logger;
>         if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
>           logger = new DefaultAuditLogger();
>         } else {
>           logger = (AuditLogger) Class.forName(className).newInstance();
>         }
>         logger.initialize(conf);
>         auditLoggers.add(logger);
>       } catch (RuntimeException re) {
>         throw re;
>       } catch (Exception e) {
>         throw new RuntimeException(e);
>       

[jira] [Updated] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-14 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HDFS-15124:
-
Attachment: HDFS-15124.000.patch
Status: Patch Available  (was: Open)

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Priority: Critical
> Attachments: HDFS-15124.000.patch, HDFS-15124.000.patch
>
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> namenode will not be started successfully because of an 
> `InstantiationException` thrown from 
> `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing namenode, `initAuditLoggers` will 
> be called and it will try to call the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` which doesn't 
> have a default constructor. Thus the `InstantiationException` exception is 
> thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.java.lang.RuntimeException: 
> java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
> Caused by: java.lang.InstantiationException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
> at java.lang.Class.newInstance(Class.java:427)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)...
> 8 more
> Caused by: java.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
> at java.lang.Class.getConstructor0(Class.java:3082)
> at java.lang.Class.newInstance(Class.java:412)
> ... 9 more{code}
>  
>  
> *Detailed Root Cause*
> There is no default constructor in 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`: 
> {code:java}
> /** 
>  * An {@link AuditLogger} that sends logged data directly to the metrics 
>  * systems. It is used when the top service is used directly by the name node 
>  */ 
> @InterfaceAudience.Private 
> public class TopAuditLogger implements AuditLogger { 
>   public static final Logger LOG = 
> LoggerFactory.getLogger(TopAuditLogger.class); 
>   private final TopMetrics topMetrics; 
>   public TopAuditLogger(TopMetrics topMetrics) {
> Preconditions.checkNotNull(topMetrics, "Cannot init with a null " + 
> "TopMetrics");
> this.topMetrics = topMetrics; 
>   }
>   @Override
>   public void initialize(Configuration conf) { 
>   }
> {code}
> As long as the configuration parameter `dfs.namenode.audit.loggers` is set to 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> `initAuditLoggers` will try to call its default constructor to make a new 
> instance: 
> {code:java}
> private List<AuditLogger> initAuditLoggers(Configuration conf) {
>   // Initialize the custom access loggers if configured.
>   Collection<String> alClasses =
>       conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
>   List<AuditLogger> auditLoggers = Lists.newArrayList();
>   if (alClasses != null && !alClasses.isEmpty()) {
>     for (String className : alClasses) {
>       try {
>         AuditLogger logger;
>         if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
>           logger = new DefaultAuditLogger();
>         } else {
>           logger = (AuditLogger) Class.forName(className).newInstance();
>         }
>         logger.initialize(conf);
>         auditLoggers.add(logger);
>       } catch (RuntimeException re) {
>         throw re;
>       } catch 

[jira] [Comment Edited] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-14 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17015478#comment-17015478
 ] 

Ctest edited comment on HDFS-15124 at 1/14/20 11:26 PM:


[~weichiu] Thank you for your reply!

Yes, the `default` value can also add TopAuditLogger, but most users don't read 
the source code and don't know that.
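
For illustration, a hedged sketch of that default wiring from the 
configuration side (assumption: `dfs.namenode.top.enabled`, which defaults to 
true, is the switch that lets the default path register TopAuditLogger):
{code:java}
<!-- hdfs-site.xml sketch (assumptions noted above, not from this issue's
     patch): keep the default audit logger list and let the nntop service
     add TopAuditLogger by itself. -->
<property>
  <name>dfs.namenode.audit.loggers</name>
  <value>default</value>
</property>
<property>
  <name>dfs.namenode.top.enabled</name>
  <value>true</value>
</property>
{code}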

If users want TopAuditLogger and set `dfs.namenode.audit.loggers` directly to 
TopAuditLogger (without reading the source code), the namenode will crash.

I wrote a patch that adds a default constructor to TopAuditLogger, which I 
think makes this part more robust.
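
For reference, a minimal sketch of that idea (hypothetical, not the attached 
patch; it assumes `topMetrics` can be populated later in 
`initialize(Configuration)`, which would mean dropping the field's `final` 
modifier):
{code:java}
// Hypothetical sketch only, not the attached HDFS-15124 patch.
// A no-arg constructor lets Class.forName(className).newInstance()
// in initAuditLoggers() succeed instead of throwing
// InstantiationException.
public TopAuditLogger() {
  // topMetrics stays unset here; a real fix would build it from the
  // Configuration handed to initialize(Configuration conf), which
  // requires making the field non-final.
}
{code}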


was (Author: ctest.team):
[~weichiu] Thank you for your reply!

Yes. The default value can also add TopAuditLogger, but most users didn't read 
the src code and don't know it.

If the users want to use TopAuditLogger and they directly set it to 
TopAuditLogger (without understanding the src code), then the namenode will 
crash.

I wrote a patch to add the default constructor for the TopAuditLogger which I 
think can make this part more robust.

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Priority: Critical
> Attachments: HDFS-15124.000.patch
>
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> namenode will not be started successfully because of an 
> `InstantiationException` thrown from 
> `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing namenode, `initAuditLoggers` will 
> be called and it will try to call the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` which doesn't 
> have a default constructor. Thus the `InstantiationException` exception is 
> thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
> initialization failed.
> java.lang.RuntimeException: java.lang.InstantiationException:
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
> Caused by: java.lang.InstantiationException:
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
>   at java.lang.Class.newInstance(Class.java:427)
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)
> ... 8 more
> Caused by: java.lang.NoSuchMethodException:
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
>   at java.lang.Class.getConstructor0(Class.java:3082)
>   at java.lang.Class.newInstance(Class.java:412)
> ... 9 more{code}
>  
>  
> *Detailed Root Cause*
> There is no default constructor in 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`: 
> {code:java}
> /** 
>  * An {@link AuditLogger} that sends logged data directly to the metrics 
>  * systems. It is used when the top service is used directly by the name node 
>  */ 
> @InterfaceAudience.Private 
> public class TopAuditLogger implements AuditLogger { 
>   public static final Logger LOG = 
> LoggerFactory.getLogger(TopAuditLogger.class); 
>   private final TopMetrics topMetrics; 
>   public TopAuditLogger(TopMetrics topMetrics) {
> Preconditions.checkNotNull(topMetrics, "Cannot init with a null " + 
> "TopMetrics");
> this.topMetrics = topMetrics; 
>   }
>   @Override
>   public void initialize(Configuration conf) { 
>   }
> {code}
> As long as the configuration parameter `dfs.namenode.audit.loggers` is set to 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> `initAuditLoggers` will try to 

[jira] [Commented] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-14 Thread Ctest (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17015478#comment-17015478
 ] 

Ctest commented on HDFS-15124:
--

[~weichiu] Thank you for your reply!

Yes, the `default` value can also add TopAuditLogger, but most users haven't read 
the source code and don't know that.

If users want TopAuditLogger and set `dfs.namenode.audit.loggers` directly to 
TopAuditLogger (without reading the source code), the namenode will crash.

I wrote a patch that adds a default constructor to TopAuditLogger, which I 
think makes this part more robust.

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Priority: Critical
> Attachments: HDFS-15124.000.patch
>
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> namenode will not be started successfully because of an 
> `InstantiationException` thrown from 
> `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing namenode, `initAuditLoggers` will 
> be called and it will try to call the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` which doesn't 
> have a default constructor. Thus the `InstantiationException` exception is 
> thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
> initialization failed.
> java.lang.RuntimeException: java.lang.InstantiationException:
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
> Caused by: java.lang.InstantiationException:
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
>   at java.lang.Class.newInstance(Class.java:427)
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)
> ... 8 more
> Caused by: java.lang.NoSuchMethodException:
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
>   at java.lang.Class.getConstructor0(Class.java:3082)
>   at java.lang.Class.newInstance(Class.java:412)
> ... 9 more{code}
>  
>  
> *Detailed Root Cause*
> There is no default constructor in 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`: 
> {code:java}
> /** 
>  * An {@link AuditLogger} that sends logged data directly to the metrics 
>  * systems. It is used when the top service is used directly by the name node 
>  */ 
> @InterfaceAudience.Private 
> public class TopAuditLogger implements AuditLogger { 
>   public static final Logger LOG = 
> LoggerFactory.getLogger(TopAuditLogger.class); 
>   private final TopMetrics topMetrics; 
>   public TopAuditLogger(TopMetrics topMetrics) {
> Preconditions.checkNotNull(topMetrics, "Cannot init with a null " + 
> "TopMetrics");
> this.topMetrics = topMetrics; 
>   }
>   @Override
>   public void initialize(Configuration conf) { 
>   }
> {code}
> As long as the configuration parameter `dfs.namenode.audit.loggers` is set to 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> `initAuditLoggers` will try to call its default constructor to make a new 
> instance: 
> {code:java}
> private List<AuditLogger> initAuditLoggers(Configuration conf) {
>   // Initialize the custom access loggers if configured.
>   Collection<String> alClasses =
>       conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
>   List<AuditLogger> auditLoggers = Lists.newArrayList();
>   if (alClasses != null && !alClasses.isEmpty()) {
>     for (String className : alClasses) {
>       try {
>         AuditLogger logger;
>         if 

[jira] [Updated] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-14 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HDFS-15124:
-
Description: 
I am using Hadoop-2.10.0.

The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
(which is the default value) and 
`org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.

When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
namenode will not be started successfully because of an 
`InstantiationException` thrown from 
`org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 

The root cause is that while initializing namenode, `initAuditLoggers` will be 
called and it will try to call the default constructor of 
`org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` which doesn't have 
a default constructor. Thus the `InstantiationException` exception is thrown.

 

*Symptom*

*$ ./start-dfs.sh*
{code:java}
2019-12-18 14:05:20,670 ERROR
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
initialization failed.
java.lang.RuntimeException: java.lang.InstantiationException:
org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
Caused by: java.lang.InstantiationException:
org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
  at java.lang.Class.newInstance(Class.java:427)
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)
... 8 more
Caused by: java.lang.NoSuchMethodException:
org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
  at java.lang.Class.getConstructor0(Class.java:3082)
  at java.lang.Class.newInstance(Class.java:412)
... 9 more{code}
 

 

*Detailed Root Cause*

There is no default constructor in 
`org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`: 
{code:java}
/** 
 * An {@link AuditLogger} that sends logged data directly to the metrics 
 * systems. It is used when the top service is used directly by the name node 
 */ 
@InterfaceAudience.Private 
public class TopAuditLogger implements AuditLogger { 
  public static final Logger LOG = 
LoggerFactory.getLogger(TopAuditLogger.class); 

  private final TopMetrics topMetrics; 

  public TopAuditLogger(TopMetrics topMetrics) {
Preconditions.checkNotNull(topMetrics, "Cannot init with a null " + 
"TopMetrics");
this.topMetrics = topMetrics; 
  }

  @Override
  public void initialize(Configuration conf) { 
  }
{code}
As long as the configuration parameter `dfs.namenode.audit.loggers` is set to 
`org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, `initAuditLoggers` 
will try to call its default constructor to make a new instance: 
{code:java}
private List<AuditLogger> initAuditLoggers(Configuration conf) {
  // Initialize the custom access loggers if configured.
  Collection<String> alClasses =
      conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
  List<AuditLogger> auditLoggers = Lists.newArrayList();
  if (alClasses != null && !alClasses.isEmpty()) {
    for (String className : alClasses) {
      try {
        AuditLogger logger;
        if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
          logger = new DefaultAuditLogger();
        } else {
          logger = (AuditLogger) Class.forName(className).newInstance();
        }
        logger.initialize(conf);
        auditLoggers.add(logger);
      } catch (RuntimeException re) {
        throw re;
      } catch (Exception e) {
        throw new RuntimeException(e);
      }
    }
  }
{code}
`initAuditLoggers` tries to call the default constructor to make a new instance 
in:
{code:java}
logger = (AuditLogger) Class.forName(className).newInstance();
{code}
This is different from the `default` setting, which is instantiated directly 
via `new DefaultAuditLogger()` rather than through reflection, so the default 
is fine.
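
To make the failure mode concrete, here is a small self-contained demo 
(hypothetical class names, not Hadoop code) of how `Class.newInstance()` 
behaves when a class lacks a nullary constructor:
{code:java}
// Standalone demo: Class.newInstance() requires a no-arg constructor;
// without one it throws java.lang.InstantiationException, mirroring
// the NameNode log above.
public class NewInstanceDemo {
  static class OnlyArgCtor {
    OnlyArgCtor(int unused) {
    }
  }

  public static void main(String[] args) throws Exception {
    // Throws InstantiationException: OnlyArgCtor has no nullary constructor.
    Object o = OnlyArgCtor.class.newInstance();
    System.out.println(o);
  }
}
{code}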

 

*How To Reproduce* 

The version of Hadoop: 2.10.0
 # Set the value of configuration parameter `dfs.namenode.audit.loggers` to 
`org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` in 
"hdfs-site.xml"(the default value is `default`)
 # Start the namenode by running "start-dfs.sh"
 # The namenode will not be started successfully.

{code:java}
<property>
  <name>dfs.namenode.audit.loggers</name>
  <value>org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger</value>
</property>
{code}

[jira] [Updated] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-14 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HDFS-15124:
-
Attachment: HDFS-15124.000.patch

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Priority: Critical
> Attachments: HDFS-15124.000.patch
>
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> namenode will not be started successfully because of an 
> `InstantiationException` thrown from 
> `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing namenode, `initAuditLoggers` will 
> be called and it will try to call the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` which doesn't 
> have a default constructor. Thus the `InstantiationException` exception is 
> thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
> initialization failed.
> java.lang.RuntimeException: java.lang.InstantiationException:
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
> Caused by: java.lang.InstantiationException:
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
>   at java.lang.Class.newInstance(Class.java:427)
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)
> ... 8 more
> Caused by: java.lang.NoSuchMethodException:
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
>   at java.lang.Class.getConstructor0(Class.java:3082)
>   at java.lang.Class.newInstance(Class.java:412)
> ... 9 more{code}
>  
>  
> *Detailed Root Cause*
> There is no default constructor in 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`: 
> {code:java}
> /** 
>  * An {@link AuditLogger} that sends logged data directly to the metrics 
>  * systems. It is used when the top service is used directly by the name node 
>  */ 
> @InterfaceAudience.Private 
> public class TopAuditLogger implements AuditLogger { 
>   public static final Logger LOG = 
> LoggerFactory.getLogger(TopAuditLogger.class); 
>   private final TopMetrics topMetrics; 
>   public TopAuditLogger(TopMetrics topMetrics) {
> Preconditions.checkNotNull(topMetrics, "Cannot init with a null " + 
> "TopMetrics");
> this.topMetrics = topMetrics; 
>   }
>   @Override
>   public void initialize(Configuration conf) { 
>   }
> {code}
> As long as the configuration parameter `dfs.namenode.audit.loggers` is set to 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> `initAuditLoggers` will try to call its default constructor to make a new 
> instance: 
> {code:java}
> private List<AuditLogger> initAuditLoggers(Configuration conf) {
>   // Initialize the custom access loggers if configured.
>   Collection<String> alClasses =
>       conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
>   List<AuditLogger> auditLoggers = Lists.newArrayList();
>   if (alClasses != null && !alClasses.isEmpty()) {
>     for (String className : alClasses) {
>       try {
>         AuditLogger logger;
>         if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
>           logger = new DefaultAuditLogger();
>         } else {
>           logger = (AuditLogger) Class.forName(className).newInstance();
>         }
>         logger.initialize(conf);
>         auditLoggers.add(logger);
>       } catch (RuntimeException re) {
>         throw re;
>       } catch (Exception e) {
>         throw new RuntimeException(e);
>       }
>     }
>   }
> 

[jira] [Updated] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-14 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HDFS-15124:
-
Attachment: patch.txt

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Priority: Critical
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> namenode will not be started successfully because of an 
> `InstantiationException` thrown from 
> `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing namenode, `initAuditLoggers` will 
> be called and it will try to call the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` which doesn't 
> have a default constructor. Thus the `InstantiationException` exception is 
> thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
> initialization failed.
> java.lang.RuntimeException: java.lang.InstantiationException:
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
> Caused by: java.lang.InstantiationException:
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
>   at java.lang.Class.newInstance(Class.java:427)
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)
> ... 8 more
> Caused by: java.lang.NoSuchMethodException:
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
>   at java.lang.Class.getConstructor0(Class.java:3082)
>   at java.lang.Class.newInstance(Class.java:412)
> ... 9 more{code}
>  
>  
> *Detailed Root Cause*
> There is no default constructor in 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`: 
> {code:java}
> /** 
>  * An {@link AuditLogger} that sends logged data directly to the metrics 
>  * systems. It is used when the top service is used directly by the name node 
>  */ 
> @InterfaceAudience.Private 
> public class TopAuditLogger implements AuditLogger { 
>   public static final Logger LOG = 
> LoggerFactory.getLogger(TopAuditLogger.class); 
>   private final TopMetrics topMetrics; 
>   public TopAuditLogger(TopMetrics topMetrics) {
> Preconditions.checkNotNull(topMetrics, "Cannot init with a null " + 
> "TopMetrics");
> this.topMetrics = topMetrics; 
>   }
>   @Override
>   public void initialize(Configuration conf) { 
>   }
> {code}
> As long as the configuration parameter `dfs.namenode.audit.loggers` is set to 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> `initAuditLoggers` will try to call its default constructor to make a new 
> instance: 
> {code:java}
> private List<AuditLogger> initAuditLoggers(Configuration conf) {
>   // Initialize the custom access loggers if configured.
>   Collection<String> alClasses =
>       conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
>   List<AuditLogger> auditLoggers = Lists.newArrayList();
>   if (alClasses != null && !alClasses.isEmpty()) {
>     for (String className : alClasses) {
>       try {
>         AuditLogger logger;
>         if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
>           logger = new DefaultAuditLogger();
>         } else {
>           logger = (AuditLogger) Class.forName(className).newInstance();
>         }
>         logger.initialize(conf);
>         auditLoggers.add(logger);
>       } catch (RuntimeException re) {
>         throw re;
>       } catch (Exception e) {
>         throw new RuntimeException(e);
>       }
>     }
>   }
> {code}
> `initAuditLoggers` tries to call the default 

[jira] [Updated] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-14 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HDFS-15124:
-
Attachment: (was: patch.txt)

> Crashing bugs in NameNode when using a valid configuration for 
> `dfs.namenode.audit.loggers`
> ---
>
> Key: HDFS-15124
> URL: https://issues.apache.org/jira/browse/HDFS-15124
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Ctest
>Priority: Critical
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> namenode will not be started successfully because of an 
> `InstantiationException` thrown from 
> `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing namenode, `initAuditLoggers` will 
> be called and it will try to call the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` which doesn't 
> have a default constructor. Thus the `InstantiationException` exception is 
> thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
> {code:java}
> 2019-12-18 14:05:20,670 ERROR
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
> initialization failed.
> java.lang.RuntimeException: java.lang.InstantiationException:
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
>   at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
> Caused by: java.lang.InstantiationException:
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
>   at java.lang.Class.newInstance(Class.java:427)
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)
> ... 8 more
> Caused by: java.lang.NoSuchMethodException:
> org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
>   at java.lang.Class.getConstructor0(Class.java:3082)
>   at java.lang.Class.newInstance(Class.java:412)
> ... 9 more{code}
>  
>  
> *Detailed Root Cause*
> There is no default constructor in 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`: 
> {code:java}
> /** 
>  * An {@link AuditLogger} that sends logged data directly to the metrics 
>  * systems. It is used when the top service is used directly by the name node 
>  */ 
> @InterfaceAudience.Private 
> public class TopAuditLogger implements AuditLogger { 
>   public static final Logger LOG = 
> LoggerFactory.getLogger(TopAuditLogger.class); 
>   private final TopMetrics topMetrics; 
>   public TopAuditLogger(TopMetrics topMetrics) {
> Preconditions.checkNotNull(topMetrics, "Cannot init with a null " + 
> "TopMetrics");
> this.topMetrics = topMetrics; 
>   }
>   @Override
>   public void initialize(Configuration conf) { 
>   }
> {code}
> As long as the configuration parameter `dfs.namenode.audit.loggers` is set to 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> `initAuditLoggers` will try to call its default constructor to make a new 
> instance: 
> {code:java}
> private List initAuditLoggers(Configuration conf) {
>   // Initialize the custom access loggers if configured.
>   Collection alClasses =
>       conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
>   List auditLoggers = Lists.newArrayList();
>   if (alClasses != null && !alClasses.isEmpty()) {
>     for (String className : alClasses) {
>       try {
>         AuditLogger logger;
>         if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
>           logger = new DefaultAuditLogger();
>         } else {
>           logger = (AuditLogger) Class.forName(className).newInstance();
>         }
>         logger.initialize(conf);
>         auditLoggers.add(logger);
>       } catch (RuntimeException re) {
>         throw re;
>       } catch (Exception e) {
>         throw new RuntimeException(e);
>       }
>     }
>   }
> {code}
> `initAuditLoggers` tries to call the 

[jira] [Updated] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-14 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HDFS-15124:
-
Description: 
I am using Hadoop-2.10.0.

The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
(which is the default value) and 
`org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.

When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
namenode will not be started successfully because of an 
`InstantiationException` thrown from 
`org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 

The root cause is that while initializing namenode, `initAuditLoggers` will be 
called and it will try to call the default constructor of 
`org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` which doesn't have 
a default constructor. Thus the `InstantiationException` exception is thrown.

 

*Symptom*

*$ ./start-dfs.sh*

 
{code:java}
2019-12-18 14:05:20,670 ERROR
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
initialization failed.
java.lang.RuntimeException: java.lang.InstantiationException:
org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
Caused by: java.lang.InstantiationException:
org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
  at java.lang.Class.newInstance(Class.java:427)
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)
... 8 more
Caused by: java.lang.NoSuchMethodException:
org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
  at java.lang.Class.getConstructor0(Class.java:3082)
  at java.lang.Class.newInstance(Class.java:412)
... 9 more{code}
 

 

*Detailed Root Cause*

There is no default constructor in 
`org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`: 
{code:java}
/** 
 * An {@link AuditLogger} that sends logged data directly to the metrics 
 * systems. It is used when the top service is used directly by the name node 
 */ 
@InterfaceAudience.Private 
public class TopAuditLogger implements AuditLogger { 
  public static final Logger LOG = 
LoggerFactory.getLogger(TopAuditLogger.class); 

  private final TopMetrics topMetrics; 

  public TopAuditLogger(TopMetrics topMetrics) {
Preconditions.checkNotNull(topMetrics, "Cannot init with a null " + 
"TopMetrics");
this.topMetrics = topMetrics; 
  }

  @Override
  public void initialize(Configuration conf) { 
  }
{code}
 

 

As long as the configuration parameter `dfs.namenode.audit.loggers` is set to 
`org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, `initAuditLoggers` 
will try to call its default constructor to make a new instance: 
{code:java}
private List<AuditLogger> initAuditLoggers(Configuration conf) {
  // Initialize the custom access loggers if configured.
  Collection<String> alClasses =
      conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
  List<AuditLogger> auditLoggers = Lists.newArrayList();
  if (alClasses != null && !alClasses.isEmpty()) {
    for (String className : alClasses) {
      try {
        AuditLogger logger;
        if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
          logger = new DefaultAuditLogger();
        } else {
          logger = (AuditLogger) Class.forName(className).newInstance();
        }
        logger.initialize(conf);
        auditLoggers.add(logger);
      } catch (RuntimeException re) {
        throw re;
      } catch (Exception e) {
        throw new RuntimeException(e);
      }
    }
  }
{code}
 

`initAuditLoggers` tries to call the default constructor to make a new instance 
in:
{code:java}
logger = (AuditLogger) Class.forName(className).newInstance();
{code}
This is different from the `default` setting, which is instantiated directly 
via `new DefaultAuditLogger()` rather than through reflection, so the default 
is fine.

 

*How To Reproduce* 

The version of Hadoop: 2.10.0
 # Set the value of configuration parameter `dfs.namenode.audit.loggers` to 
`org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` in 
"hdfs-site.xml"(the default value is `default`)
 # Start the namenode by running "start-dfs.sh"
 # The namenode will not be started successfully.

{code:java}
<property>
  <name>dfs.namenode.audit.loggers</name>
  <value>org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger</value>
</property>
{code}

[jira] [Updated] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-14 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HDFS-15124:
-
Description: 
I am using Hadoop-2.10.0.

The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
(which is the default value) and 
`org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.

When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
namenode will not be started successfully because of an 
`InstantiationException` thrown from 
`org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 

The root cause is that while initializing namenode, `initAuditLoggers` will be 
called and it will try to call the default constructor of 
`org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` which doesn't have 
a default constructor. Thus the `InstantiationException` exception is thrown.

 

*Symptom*

*$ ./start-dfs.sh*
{code:java}
2019-12-18 14:05:20,670 ERROR
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
initialization failed.
java.lang.RuntimeException: java.lang.InstantiationException:
org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
Caused by: java.lang.InstantiationException:
org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
  at java.lang.Class.newInstance(Class.java:427)
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)
... 8 more
Caused by: java.lang.NoSuchMethodException:
org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
  at java.lang.Class.getConstructor0(Class.java:3082)
  at java.lang.Class.newInstance(Class.java:412)
... 9 more{code}
 

 

*Detailed Root Cause*

There is no default constructor in 
`org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`: 
{code:java}
/** 
 * An {@link AuditLogger} that sends logged data directly to the metrics 
 * systems. It is used when the top service is used directly by the name node 
 */ 
@InterfaceAudience.Private 
public class TopAuditLogger implements AuditLogger { 
  public static final Logger LOG = 
LoggerFactory.getLogger(TopAuditLogger.class); 

  private final TopMetrics topMetrics; 

  public TopAuditLogger(TopMetrics topMetrics) {
Preconditions.checkNotNull(topMetrics, "Cannot init with a null " + 
"TopMetrics");
this.topMetrics = topMetrics; 
  }

  @Override
  public void initialize(Configuration conf) { 
  }
{code}
As long as the configuration parameter `dfs.namenode.audit.loggers` is set to 
`org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, `initAuditLoggers` 
will try to call its default constructor to make a new instance: 
{code:java}
private List<AuditLogger> initAuditLoggers(Configuration conf) {
  // Initialize the custom access loggers if configured.
  Collection<String> alClasses =
      conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
  List<AuditLogger> auditLoggers = Lists.newArrayList();
  if (alClasses != null && !alClasses.isEmpty()) {
    for (String className : alClasses) {
      try {
        AuditLogger logger;
        if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
          logger = new DefaultAuditLogger();
        } else {
          logger = (AuditLogger) Class.forName(className).newInstance();
        }
        logger.initialize(conf);
        auditLoggers.add(logger);
      } catch (RuntimeException re) {
        throw re;
      } catch (Exception e) {
        throw new RuntimeException(e);
      }
    }
  }
{code}
`initAuditLoggers` tries to call the default constructor to make a new instance 
in:
{code:java}
logger = (AuditLogger) Class.forName(className).newInstance();
{code}
This is different from the `default` setting, which is instantiated directly 
via `new DefaultAuditLogger()` rather than through reflection, so the default 
is fine.

 

*How To Reproduce* 

The version of Hadoop: 2.10.0
 # Set the value of configuration parameter `dfs.namenode.audit.loggers` to 
`org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` in 
"hdfs-site.xml"(the default value is `default`)
 # Start the namenode by running "start-dfs.sh"
 # The namenode will not be started successfully.

{code:java}
<property>
  <name>dfs.namenode.audit.loggers</name>
  <value>org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger</value>
</property>
{code}

[jira] [Updated] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-14 Thread Ctest (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ctest updated HDFS-15124:
-
Description: 
{code:java}
// code placeholder
{code}
I am using Hadoop-2.10.0.

The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
(which is the default value) and 
`org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.

When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
namenode will not be started successfully because of an 
`InstantiationException` thrown from 
`org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 

The root cause is that while initializing namenode, `initAuditLoggers` will be 
called and it will try to call the default constructor of 
`org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` which doesn't have 
a default constructor. Thus the `InstantiationException` exception is thrown.

 

*Symptom*

*$ ./start-dfs.sh*

 
{code:java}
2019-12-18 14:05:20,670 ERROR
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
initialization failed.
java.lang.RuntimeException: java.lang.InstantiationException:
org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
Caused by: java.lang.InstantiationException:
org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
  at java.lang.Class.newInstance(Class.java:427)
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)
... 8 more
Caused by: java.lang.NoSuchMethodException:
org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
  at java.lang.Class.getConstructor0(Class.java:3082)
  at java.lang.Class.newInstance(Class.java:412)
... 9 more{code}
 

 

*Detailed Root Cause*

There is no default constructor in 
`org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`: 
{code:java}
/** 
 * An {@link AuditLogger} that sends logged data directly to the metrics 
 * systems. It is used when the top service is used directly by the name node 
 */ 
@InterfaceAudience.Private 
public class TopAuditLogger implements AuditLogger { 
  public static final Logger LOG = 
LoggerFactory.getLogger(TopAuditLogger.class); 

  private final TopMetrics topMetrics; 

  public TopAuditLogger(TopMetrics topMetrics) {
Preconditions.checkNotNull(topMetrics, "Cannot init with a null " + 
"TopMetrics");
this.topMetrics = topMetrics; 
  }

  @Override
  public void initialize(Configuration conf) { 
  }
{code}
 

 

As long as the configuration parameter `dfs.namenode.audit.loggers` is set to 
`org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, `initAuditLoggers` 
will try to call its default constructor to make a new instance: 
{code:java}
private List<AuditLogger> initAuditLoggers(Configuration conf) {
  // Initialize the custom access loggers if configured.
  Collection<String> alClasses =
      conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
  List<AuditLogger> auditLoggers = Lists.newArrayList();
  if (alClasses != null && !alClasses.isEmpty()) {
    for (String className : alClasses) {
      try {
        AuditLogger logger;
        if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
          logger = new DefaultAuditLogger();
        } else {
          logger = (AuditLogger) Class.forName(className).newInstance();
        }
        logger.initialize(conf);
        auditLoggers.add(logger);
      } catch (RuntimeException re) {
        throw re;
      } catch (Exception e) {
        throw new RuntimeException(e);
      }
    }
  }
{code}
 

`initAuditLoggers` tries to call the default constructor to make a new instance 
in:
{code:java}
logger = (AuditLogger) Class.forName(className).newInstance();
{code}
This is different from the `default` setting, which is instantiated directly 
via `new DefaultAuditLogger()` rather than through reflection, so the default 
is fine.

 

*How To Reproduce* 

The version of Hadoop: 2.10.0
 # Set the value of configuration parameter `dfs.namenode.audit.loggers` to 
`org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` in 
"hdfs-site.xml"(the default value is `default`)
 # Start the namenode by running "start-dfs.sh"
 # The namenode will not be started successfully.

{code:java}
<property>
  <name>dfs.namenode.audit.loggers</name>
  <value>org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger</value>
</property>
{code}

[jira] [Created] (HDFS-15124) Crashing bugs in NameNode when using a valid configuration for `dfs.namenode.audit.loggers`

2020-01-14 Thread Ctest (Jira)
Ctest created HDFS-15124:


 Summary: Crashing bugs in NameNode when using a valid 
configuration for `dfs.namenode.audit.loggers`
 Key: HDFS-15124
 URL: https://issues.apache.org/jira/browse/HDFS-15124
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.10.0
Reporter: Ctest


I am using Hadoop-2.10.0.

The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
(which is the default value) and 
`org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.

When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
namenode will not be started successfully because of an 
`InstantiationException` thrown from 
`org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 

The root cause is that while initializing namenode, `initAuditLoggers` will be 
called and it will try to call the default constructor of 
`org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` which doesn't have 
a default constructor. Thus the `InstantiationException` exception is thrown.

Symptom

$ ./start-dfs.sh

2019-12-18 14:05:20,670 ERROR
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem
initialization failed.
java.lang.RuntimeException: java.lang.InstantiationException:
org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
Caused by: java.lang.InstantiationException:
org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
  at java.lang.Class.newInstance(Class.java:427)
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)
... 8 more
Caused by: java.lang.NoSuchMethodException:
org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
  at java.lang.Class.getConstructor0(Class.java:3082)
  at java.lang.Class.newInstance(Class.java:412)
... 9 more

Detailed Root Cause

There is no default constructor in 
`org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`:

/** 
 * An {@link AuditLogger} that sends logged data directly to the metrics 
 * systems. It is used when the top service is used directly by the name node 
 */ 
@InterfaceAudience.Private 
public class TopAuditLogger implements AuditLogger { 
  public static final Logger LOG = 
      LoggerFactory.getLogger(TopAuditLogger.class); 

  private final TopMetrics topMetrics; 

  public TopAuditLogger(TopMetrics topMetrics) {
    Preconditions.checkNotNull(topMetrics, "Cannot init with a null " +
        "TopMetrics");
    this.topMetrics = topMetrics; 
  }

  @Override
  public void initialize(Configuration conf) { 
  }

As long as the configuration parameter `dfs.namenode.audit.loggers` is set to 
`org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, `initAuditLoggers` 
will try to call its default constructor to make a new instance:

private List<AuditLogger> initAuditLoggers(Configuration conf) {
  // Initialize the custom access loggers if configured.
  Collection<String> alClasses =
      conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
  List<AuditLogger> auditLoggers = Lists.newArrayList();
  if (alClasses != null && !alClasses.isEmpty()) {
    for (String className : alClasses) {
      try {
        AuditLogger logger;
        if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
          logger = new DefaultAuditLogger();
        } else {
          logger = (AuditLogger) Class.forName(className).newInstance();
        }
        logger.initialize(conf);
        auditLoggers.add(logger);
      } catch (RuntimeException re) {
        throw re;
      } catch (Exception e) {
        throw new RuntimeException(e);
      }
    }
  }

`initAuditLoggers` tries to call the default constructor to make a new instance 
in:

logger = (AuditLogger) Class.forName(className).newInstance();

This is different from the `default` setting, which is instantiated directly 
via `new DefaultAuditLogger()` rather than through reflection, so the default 
is fine.


How To Reproduce 

The version of Hadoop: 2.10.0

Set the value of configuration parameter `dfs.namenode.audit.loggers` to