[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2015-01-21 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286911#comment-14286911
 ] 

Vinayakumar B commented on HDFS-3443:
-

Thanks [~szetszwo] for the reviews and the commit.

> Unable to catch up edits during standby to active switch due to NPE
> ---
>
> Key: HDFS-3443
> URL: https://issues.apache.org/jira/browse/HDFS-3443
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: auto-failover, ha
>Reporter: suja s
>Assignee: Vinayakumar B
> Fix For: 2.6.1
>
> Attachments: HDFS-3443-003.patch, HDFS-3443-004.patch, 
> HDFS-3443-005.patch, HDFS-3443-006.patch, HDFS-3443-007.patch, 
> HDFS-3443_1.patch, HDFS-3443_1.patch
>
>
> Start NN
> Let NN standby services be started.
> Before the editLogTailer is initialised start ZKFC and allow the 
> activeservices start to proceed further.
> Here editLogTailer.catchupDuringFailover() will throw NPE.
> void startActiveServices() throws IOException {
> LOG.info("Starting services required for active state");
> writeLock();
> try {
>   FSEditLog editLog = dir.fsImage.getEditLog();
>   
>   if (!editLog.isOpenForWrite()) {
> // During startup, we're already open for write during initialization.
> editLog.initJournalsForWrite();
> // May need to recover
> editLog.recoverUnclosedStreams();
> 
> LOG.info("Catching up to latest edits from old active before " +
> "taking over writer role in edits logs.");
> editLogTailer.catchupDuringFailover();
> {noformat}
> 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
> Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
> XX.XX.XX.55:58003: output error
> 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
> from XX.XX.XX.55:58004: error: java.lang.NullPointerException
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
>   at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
>   at 
> org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
> 2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 9 on 8020 caught an exception
> java.nio.channels.ClosedChannelException
>   at 
> sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>   at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
>   at org.apache.hadoop.ipc.Server.access$2000(Server.java:107)
>   at 
> org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:930)
>   at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:994)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1738)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2015-01-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286148#comment-14286148
 ] 

Hudson commented on HDFS-3443:
--

FAILURE: Integrated in Hadoop-trunk-Commit #6904 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6904/])
HDFS-3443. Fix NPE when namenode transition to active during startup by adding 
checkNNStartup() in NameNodeRpcServer.  Contributed by Vinayakumar B (szetszwo: 
rev db334bb8625da97c7e518cbcf477530c7ba7001e)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
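
The commit above guards the NameNodeRpcServer entry points so that RPC calls arriving before the NameNode has finished initializing are rejected instead of hitting an NPE on a not-yet-constructed field such as editLogTailer. Below is a minimal, self-contained sketch of that guard pattern, assuming illustrative names (StartupGuard, checkStartup, markStarted) rather than the actual HDFS classes; the real patch may throw a retriable exception instead of a plain IOException.

{code:java}
import java.io.IOException;
import java.util.concurrent.atomic.AtomicBoolean;

public class StartupGuard {
    // Flipped to true only after all services have been initialized.
    private final AtomicBoolean started = new AtomicBoolean(false);

    void markStarted() {
        started.set(true);
    }

    boolean isStarted() {
        return started.get();
    }

    // Analogue of checkNNStartup(): called at the top of each guarded RPC method.
    // Assumption: a plain IOException keeps this sketch dependency-free; the
    // actual patch may use a retriable exception so clients retry.
    void checkStartup() throws IOException {
        if (!isStarted()) {
            throw new IOException("NameNode still not started");
        }
    }

    // Example of a guarded RPC method: fails fast instead of dereferencing a
    // not-yet-initialized field.
    String transitionToActive() throws IOException {
        checkStartup();
        return "transitioned to active";
    }

    public static void main(String[] args) throws IOException {
        StartupGuard guard = new StartupGuard();
        try {
            guard.transitionToActive();          // rejected: startup not finished
        } catch (IOException expected) {
            System.out.println("rejected early call: " + expected.getMessage());
        }
        guard.markStarted();                     // initialization complete
        System.out.println(guard.transitionToActive()); // now succeeds
    }
}
{code}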


> Unable to catch up edits during standby to active switch due to NPE
> ---
>
> Key: HDFS-3443
> URL: https://issues.apache.org/jira/browse/HDFS-3443
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: auto-failover, ha
>Reporter: suja s
>Assignee: Vinayakumar B
> Attachments: HDFS-3443-003.patch, HDFS-3443-004.patch, 
> HDFS-3443-005.patch, HDFS-3443-006.patch, HDFS-3443-007.patch, 
> HDFS-3443_1.patch, HDFS-3443_1.patch
>
>
> Start NN
> Let NN standby services be started.
> Before the editLogTailer is initialised start ZKFC and allow the 
> activeservices start to proceed further.
> Here editLogTailer.catchupDuringFailover() will throw NPE.
> void startActiveServices() throws IOException {
> LOG.info("Starting services required for active state");
> writeLock();
> try {
>   FSEditLog editLog = dir.fsImage.getEditLog();
>   
>   if (!editLog.isOpenForWrite()) {
> // During startup, we're already open for write during initialization.
> editLog.initJournalsForWrite();
> // May need to recover
> editLog.recoverUnclosedStreams();
> 
> LOG.info("Catching up to latest edits from old active before " +
> "taking over writer role in edits logs.");
> editLogTailer.catchupDuringFailover();
> {noformat}
> 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
> Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
> XX.XX.XX.55:58003: output error
> 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
> from XX.XX.XX.55:58004: error: java.lang.NullPointerException
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
>   at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
>   at 
> org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
> 2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 9 on 8020 caught an exception
> java.nio.channels.ClosedChannelException
>   at 
> sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>   at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
>   at org.apache.hadoop.ipc.Server.access$2000(Server.java:107)
>   at 
> org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:930)
>   at org.apache.hadoop.ipc.Server$Responder.doRespond(Se

[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2015-01-21 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286058#comment-14286058
 ] 

Tsz Wo Nicholas Sze commented on HDFS-3443:
---

> I intentionally didn't add the check to these. ... Is that fine with you?

Sure, let's leave them unchecked for the moment.  We may add checkNNStartup() 
later on if necessary.

> Unable to catch up edits during standby to active switch due to NPE
> ---
>
> Key: HDFS-3443
> URL: https://issues.apache.org/jira/browse/HDFS-3443
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: auto-failover, ha
>Reporter: suja s
>Assignee: Vinayakumar B
> Attachments: HDFS-3443-003.patch, HDFS-3443-004.patch, 
> HDFS-3443-005.patch, HDFS-3443-006.patch, HDFS-3443-007.patch, 
> HDFS-3443_1.patch, HDFS-3443_1.patch
>
>
> Start NN
> Let NN standby services be started.
> Before the editLogTailer is initialised start ZKFC and allow the 
> activeservices start to proceed further.
> Here editLogTailer.catchupDuringFailover() will throw NPE.
> void startActiveServices() throws IOException {
> LOG.info("Starting services required for active state");
> writeLock();
> try {
>   FSEditLog editLog = dir.fsImage.getEditLog();
>   
>   if (!editLog.isOpenForWrite()) {
> // During startup, we're already open for write during initialization.
> editLog.initJournalsForWrite();
> // May need to recover
> editLog.recoverUnclosedStreams();
> 
> LOG.info("Catching up to latest edits from old active before " +
> "taking over writer role in edits logs.");
> editLogTailer.catchupDuringFailover();
> {noformat}
> 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
> Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
> XX.XX.XX.55:58003: output error
> 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
> from XX.XX.XX.55:58004: error: java.lang.NullPointerException
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
>   at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
>   at 
> org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
> 2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 9 on 8020 caught an exception
> java.nio.channels.ClosedChannelException
>   at 
> sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>   at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
>   at org.apache.hadoop.ipc.Server.access$2000(Server.java:107)
>   at 
> org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:930)
>   at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:994)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1738)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2015-01-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14285409#comment-14285409
 ] 

Hadoop QA commented on HDFS-3443:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12693518/HDFS-3443-007.patch
  against trunk revision 6b17eb9.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
  org.apache.hadoop.hdfs.server.balancer.TestBalancer

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9289//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9289//console

This message is automatically generated.

> Unable to catch up edits during standby to active switch due to NPE
> ---
>
> Key: HDFS-3443
> URL: https://issues.apache.org/jira/browse/HDFS-3443
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: auto-failover, ha
>Reporter: suja s
>Assignee: Vinayakumar B
> Attachments: HDFS-3443-003.patch, HDFS-3443-004.patch, 
> HDFS-3443-005.patch, HDFS-3443-006.patch, HDFS-3443-007.patch, 
> HDFS-3443_1.patch, HDFS-3443_1.patch
>
>
> Start NN
> Let NN standby services be started.
> Before the editLogTailer is initialised start ZKFC and allow the 
> activeservices start to proceed further.
> Here editLogTailer.catchupDuringFailover() will throw NPE.
> void startActiveServices() throws IOException {
> LOG.info("Starting services required for active state");
> writeLock();
> try {
>   FSEditLog editLog = dir.fsImage.getEditLog();
>   
>   if (!editLog.isOpenForWrite()) {
> // During startup, we're already open for write during initialization.
> editLog.initJournalsForWrite();
> // May need to recover
> editLog.recoverUnclosedStreams();
> 
> LOG.info("Catching up to latest edits from old active before " +
> "taking over writer role in edits logs.");
> editLogTailer.catchupDuringFailover();
> {noformat}
> 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
> Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
> XX.XX.XX.55:58003: output error
> 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
> from XX.XX.XX.55:58004: error: java.lang.NullPointerException
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
>   at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
>   at 
> org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.

[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2015-01-20 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14285234#comment-14285234
 ] 

Vinayakumar B commented on HDFS-3443:
-

{quote}Need to add checkNNStartup() for the rpc methods below.
getGroupsForUser(String)
refresh(String, String[])
refreshCallQueue()
refreshSuperUserGroupsConfiguration()
refreshUserToGroupsMappings(){quote}
I intentionally didn't add the check to these, because except for 
refreshCallQueue(), they are all RPCs that update static data structures and are 
not really tied to the NameNode instance, so NameNode startup doesn't really 
matter for them. {{refreshCallQueue()}} also doesn't do anything specific to the 
NameNode; it only refreshes the RPC server's call queue. 
And some of them don't have a {{throws}} clause in their signatures, and these 
protocols come from hadoop-common, so I don't want to mess those up.

Is that fine with you?
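
A small, purely hypothetical illustration of the {{throws}}-clause point above (the interface below is an invented stand-in, not an actual hadoop-common protocol): a startup guard that throws a checked IOException cannot be called from an implementation whose interface method declares no checked exceptions, without changing the shared interface.

{code:java}
import java.io.IOException;

// Hypothetical stand-in for a refresh protocol from hadoop-common; the real
// interfaces may differ.
interface RefreshProtocol {
    void refreshUserToGroupsMappings();   // no "throws IOException" in the contract
}

class GuardedRefresh implements RefreshProtocol {
    private void checkStartup() throws IOException {
        throw new IOException("not started");
    }

    @Override
    public void refreshUserToGroupsMappings() {
        // checkStartup();  // would not compile here: the interface method
        // declares no checked exceptions, so adding the guard would require
        // changing the shared interface in hadoop-common.
    }
}
{code}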

> Unable to catch up edits during standby to active switch due to NPE
> ---
>
> Key: HDFS-3443
> URL: https://issues.apache.org/jira/browse/HDFS-3443
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: auto-failover, ha
>Reporter: suja s
>Assignee: Vinayakumar B
> Attachments: HDFS-3443-003.patch, HDFS-3443-004.patch, 
> HDFS-3443-005.patch, HDFS-3443-006.patch, HDFS-3443-007.patch, 
> HDFS-3443_1.patch, HDFS-3443_1.patch
>
>
> Start NN
> Let NN standby services be started.
> Before the editLogTailer is initialised start ZKFC and allow the 
> activeservices start to proceed further.
> Here editLogTailer.catchupDuringFailover() will throw NPE.
> void startActiveServices() throws IOException {
> LOG.info("Starting services required for active state");
> writeLock();
> try {
>   FSEditLog editLog = dir.fsImage.getEditLog();
>   
>   if (!editLog.isOpenForWrite()) {
> // During startup, we're already open for write during initialization.
> editLog.initJournalsForWrite();
> // May need to recover
> editLog.recoverUnclosedStreams();
> 
> LOG.info("Catching up to latest edits from old active before " +
> "taking over writer role in edits logs.");
> editLogTailer.catchupDuringFailover();
> {noformat}
> 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
> Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
> XX.XX.XX.55:58003: output error
> 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
> from XX.XX.XX.55:58004: error: java.lang.NullPointerException
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
>   at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
>   at 
> org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
> 2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 9 on 8020 caught an exception
> java.nio.channels.ClosedChannelException
>   at 
> sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>   at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
>   at org.apache.hadoop.ipc.Server.access$

[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2015-01-20 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14284276#comment-14284276
 ] 

Tsz Wo Nicholas Sze commented on HDFS-3443:
---

Thanks Vinay.  Some comments on the patch:
- NameNode.started should be volatile or use an AtomicBoolean (a minimal sketch of 
the visibility issue follows below).
- Need to add checkNNStartup() to the RPC methods below.
-* getGroupsForUser(String)
-* refresh(String, String[])
-* refreshCallQueue()
-* refreshSuperUserGroupsConfiguration()
-* refreshUserToGroupsMappings()
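
As referenced in the first point above, the startup flag is written by the thread that finishes NameNode initialization and read by RPC handler threads, so it must be volatile (or an AtomicBoolean) for the handlers to be guaranteed to observe the update. The sketch below uses illustrative names, not the actual NameNode code.

{code:java}
public class VolatileStartupFlag {
    // Without volatile, an RPC handler thread could keep reading a stale 'false'
    // even after the startup thread has set the flag.
    private volatile boolean started = false;

    public static void main(String[] args) throws InterruptedException {
        VolatileStartupFlag nn = new VolatileStartupFlag();

        // Simulated RPC handler thread: spins until it observes the startup flag.
        Thread handler = new Thread(() -> {
            while (!nn.started) {
                Thread.yield();
            }
            System.out.println("handler observed startup, serving calls");
        });
        handler.start();

        Thread.sleep(100);   // simulated initialization work
        nn.started = true;   // single write at the end of startup
        handler.join();
    }
}
{code}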


> Unable to catch up edits during standby to active switch due to NPE
> ---
>
> Key: HDFS-3443
> URL: https://issues.apache.org/jira/browse/HDFS-3443
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: auto-failover, ha
>Reporter: suja s
>Assignee: Vinayakumar B
> Attachments: HDFS-3443-003.patch, HDFS-3443-004.patch, 
> HDFS-3443-005.patch, HDFS-3443-006.patch, HDFS-3443_1.patch, HDFS-3443_1.patch
>
>
> Start NN
> Let NN standby services be started.
> Before the editLogTailer is initialised start ZKFC and allow the 
> activeservices start to proceed further.
> Here editLogTailer.catchupDuringFailover() will throw NPE.
> void startActiveServices() throws IOException {
> LOG.info("Starting services required for active state");
> writeLock();
> try {
>   FSEditLog editLog = dir.fsImage.getEditLog();
>   
>   if (!editLog.isOpenForWrite()) {
> // During startup, we're already open for write during initialization.
> editLog.initJournalsForWrite();
> // May need to recover
> editLog.recoverUnclosedStreams();
> 
> LOG.info("Catching up to latest edits from old active before " +
> "taking over writer role in edits logs.");
> editLogTailer.catchupDuringFailover();
> {noformat}
> 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
> Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
> XX.XX.XX.55:58003: output error
> 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
> from XX.XX.XX.55:58004: error: java.lang.NullPointerException
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
>   at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
>   at 
> org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
> 2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 9 on 8020 caught an exception
> java.nio.channels.ClosedChannelException
>   at 
> sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>   at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
>   at org.apache.hadoop.ipc.Server.access$2000(Server.java:107)
>   at 
> org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:930)
>   at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:994)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1738)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2015-01-20 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283613#comment-14283613
 ] 

Vinayakumar B commented on HDFS-3443:
-

The above test failures are unrelated to this patch.

> Unable to catch up edits during standby to active switch due to NPE
> ---
>
> Key: HDFS-3443
> URL: https://issues.apache.org/jira/browse/HDFS-3443
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: auto-failover, ha
>Reporter: suja s
>Assignee: Vinayakumar B
> Attachments: HDFS-3443-003.patch, HDFS-3443-004.patch, 
> HDFS-3443-005.patch, HDFS-3443-006.patch, HDFS-3443_1.patch, HDFS-3443_1.patch
>
>
> Start NN
> Let NN standby services be started.
> Before the editLogTailer is initialised start ZKFC and allow the 
> activeservices start to proceed further.
> Here editLogTailer.catchupDuringFailover() will throw NPE.
> void startActiveServices() throws IOException {
> LOG.info("Starting services required for active state");
> writeLock();
> try {
>   FSEditLog editLog = dir.fsImage.getEditLog();
>   
>   if (!editLog.isOpenForWrite()) {
> // During startup, we're already open for write during initialization.
> editLog.initJournalsForWrite();
> // May need to recover
> editLog.recoverUnclosedStreams();
> 
> LOG.info("Catching up to latest edits from old active before " +
> "taking over writer role in edits logs.");
> editLogTailer.catchupDuringFailover();
> {noformat}
> 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
> Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
> XX.XX.XX.55:58003: output error
> 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
> from XX.XX.XX.55:58004: error: java.lang.NullPointerException
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
>   at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
>   at 
> org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
> 2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 9 on 8020 caught an exception
> java.nio.channels.ClosedChannelException
>   at 
> sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>   at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
>   at org.apache.hadoop.ipc.Server.access$2000(Server.java:107)
>   at 
> org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:930)
>   at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:994)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1738)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2015-01-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283600#comment-14283600
 ] 

Hadoop QA commented on HDFS-3443:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12693217/HDFS-3443-006.patch
  against trunk revision 5a6c084.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
  org.apache.hadoop.hdfs.server.namenode.TestFileTruncate

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9274//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9274//console

This message is automatically generated.

> Unable to catch up edits during standby to active switch due to NPE
> ---
>
> Key: HDFS-3443
> URL: https://issues.apache.org/jira/browse/HDFS-3443
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: auto-failover, ha
>Reporter: suja s
>Assignee: Vinayakumar B
> Attachments: HDFS-3443-003.patch, HDFS-3443-004.patch, 
> HDFS-3443-005.patch, HDFS-3443-006.patch, HDFS-3443_1.patch, HDFS-3443_1.patch
>
>
> Start NN
> Let NN standby services be started.
> Before the editLogTailer is initialised start ZKFC and allow the 
> activeservices start to proceed further.
> Here editLogTailer.catchupDuringFailover() will throw NPE.
> void startActiveServices() throws IOException {
> LOG.info("Starting services required for active state");
> writeLock();
> try {
>   FSEditLog editLog = dir.fsImage.getEditLog();
>   
>   if (!editLog.isOpenForWrite()) {
> // During startup, we're already open for write during initialization.
> editLog.initJournalsForWrite();
> // May need to recover
> editLog.recoverUnclosedStreams();
> 
> LOG.info("Catching up to latest edits from old active before " +
> "taking over writer role in edits logs.");
> editLogTailer.catchupDuringFailover();
> {noformat}
> 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
> Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
> XX.XX.XX.55:58003: output error
> 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
> from XX.XX.XX.55:58004: error: java.lang.NullPointerException
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
>   at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
>   at 
> org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
>  

[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2015-01-19 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14283386#comment-14283386
 ] 

Vinayakumar B commented on HDFS-3443:
-

bq. Hi Vinay, I do not oppose the idea of using a lock. But it seems not easy to 
get it right, as some unit tests are still failing. Also, it will be harder to 
change the code later on. Why not add a boolean indicating that the namenode is 
starting up? It looks like a straightforward solution to me.
Thanks for the clarification, [~szetszwo]. I am fine with using the boolean option.
I will try to post a patch with the boolean changes soon.

> Unable to catch up edits during standby to active switch due to NPE
> ---
>
> Key: HDFS-3443
> URL: https://issues.apache.org/jira/browse/HDFS-3443
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: auto-failover, ha
>Reporter: suja s
>Assignee: Vinayakumar B
> Attachments: HDFS-3443-003.patch, HDFS-3443-004.patch, 
> HDFS-3443-005.patch, HDFS-3443_1.patch, HDFS-3443_1.patch
>
>
> Start NN
> Let NN standby services be started.
> Before the editLogTailer is initialised start ZKFC and allow the 
> activeservices start to proceed further.
> Here editLogTailer.catchupDuringFailover() will throw NPE.
> void startActiveServices() throws IOException {
> LOG.info("Starting services required for active state");
> writeLock();
> try {
>   FSEditLog editLog = dir.fsImage.getEditLog();
>   
>   if (!editLog.isOpenForWrite()) {
> // During startup, we're already open for write during initialization.
> editLog.initJournalsForWrite();
> // May need to recover
> editLog.recoverUnclosedStreams();
> 
> LOG.info("Catching up to latest edits from old active before " +
> "taking over writer role in edits logs.");
> editLogTailer.catchupDuringFailover();
> {noformat}
> 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
> Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
> XX.XX.XX.55:58003: output error
> 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
> from XX.XX.XX.55:58004: error: java.lang.NullPointerException
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
>   at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
>   at 
> org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
> 2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 9 on 8020 caught an exception
> java.nio.channels.ClosedChannelException
>   at 
> sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>   at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
>   at org.apache.hadoop.ipc.Server.access$2000(Server.java:107)
>   at 
> org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:930)
>   at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:994)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1738)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2015-01-19 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14282794#comment-14282794
 ] 

Tsz Wo Nicholas Sze commented on HDFS-3443:
---

> All these will be processed once all the services (common and state-specific) 
> are started, because after this patch everything starts under the same lock.

Hi Vinay, I do not oppose the idea of using a lock.  But it seems not easy to get 
it right, as some unit tests are still failing.  Also, it will be harder to 
change the code later on.  Why not add a boolean indicating that the namenode is 
starting up?  It looks like a straightforward solution to me.

> Unable to catch up edits during standby to active switch due to NPE
> ---
>
> Key: HDFS-3443
> URL: https://issues.apache.org/jira/browse/HDFS-3443
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: auto-failover, ha
>Reporter: suja s
>Assignee: Vinayakumar B
> Attachments: HDFS-3443-003.patch, HDFS-3443-004.patch, 
> HDFS-3443-005.patch, HDFS-3443_1.patch, HDFS-3443_1.patch
>
>
> Start NN
> Let NN standby services be started.
> Before the editLogTailer is initialised start ZKFC and allow the 
> activeservices start to proceed further.
> Here editLogTailer.catchupDuringFailover() will throw NPE.
> void startActiveServices() throws IOException {
> LOG.info("Starting services required for active state");
> writeLock();
> try {
>   FSEditLog editLog = dir.fsImage.getEditLog();
>   
>   if (!editLog.isOpenForWrite()) {
> // During startup, we're already open for write during initialization.
> editLog.initJournalsForWrite();
> // May need to recover
> editLog.recoverUnclosedStreams();
> 
> LOG.info("Catching up to latest edits from old active before " +
> "taking over writer role in edits logs.");
> editLogTailer.catchupDuringFailover();
> {noformat}
> 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
> Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
> XX.XX.XX.55:58003: output error
> 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
> from XX.XX.XX.55:58004: error: java.lang.NullPointerException
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
>   at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
>   at 
> org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
> 2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 9 on 8020 caught an exception
> java.nio.channels.ClosedChannelException
>   at 
> sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>   at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
>   at org.apache.hadoop.ipc.Server.access$2000(Server.java:107)
>   at 
> org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:930)
>   at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:994)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1738)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.

[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2015-01-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14282550#comment-14282550
 ] 

Hadoop QA commented on HDFS-3443:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12693066/HDFS-3443-005.patch
  against trunk revision 19cbce3.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager
  org.apache.hadoop.hdfs.server.namenode.TestBackupNode
  org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9266//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9266//console

This message is automatically generated.

> Unable to catch up edits during standby to active switch due to NPE
> ---
>
> Key: HDFS-3443
> URL: https://issues.apache.org/jira/browse/HDFS-3443
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: auto-failover, ha
>Reporter: suja s
>Assignee: Vinayakumar B
> Attachments: HDFS-3443-003.patch, HDFS-3443-004.patch, 
> HDFS-3443-005.patch, HDFS-3443_1.patch, HDFS-3443_1.patch
>
>
> Start NN
> Let NN standby services be started.
> Before the editLogTailer is initialised start ZKFC and allow the 
> activeservices start to proceed further.
> Here editLogTailer.catchupDuringFailover() will throw NPE.
> void startActiveServices() throws IOException {
> LOG.info("Starting services required for active state");
> writeLock();
> try {
>   FSEditLog editLog = dir.fsImage.getEditLog();
>   
>   if (!editLog.isOpenForWrite()) {
> // During startup, we're already open for write during initialization.
> editLog.initJournalsForWrite();
> // May need to recover
> editLog.recoverUnclosedStreams();
> 
> LOG.info("Catching up to latest edits from old active before " +
> "taking over writer role in edits logs.");
> editLogTailer.catchupDuringFailover();
> {noformat}
> 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
> Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
> XX.XX.XX.55:58003: output error
> 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
> from XX.XX.XX.55:58004: error: java.lang.NullPointerException
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
>   at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
>   at 
> org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
>   at org.apac

[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2015-01-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14282279#comment-14282279
 ] 

Hadoop QA commented on HDFS-3443:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12693012/HDFS-3443-004.patch
  against trunk revision 24315e7.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.namenode.TestStartup
  org.apache.hadoop.hdfs.server.namenode.TestBackupNode
  org.apache.hadoop.hdfs.server.namenode.TestFsLimits

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9262//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9262//console

This message is automatically generated.

> Unable to catch up edits during standby to active switch due to NPE
> ---
>
> Key: HDFS-3443
> URL: https://issues.apache.org/jira/browse/HDFS-3443
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: auto-failover, ha
>Reporter: suja s
>Assignee: Vinayakumar B
> Attachments: HDFS-3443-003.patch, HDFS-3443-004.patch, 
> HDFS-3443_1.patch, HDFS-3443_1.patch
>
>
> Start NN
> Let NN standby services be started.
> Before the editLogTailer is initialised start ZKFC and allow the 
> activeservices start to proceed further.
> Here editLogTailer.catchupDuringFailover() will throw NPE.
> void startActiveServices() throws IOException {
> LOG.info("Starting services required for active state");
> writeLock();
> try {
>   FSEditLog editLog = dir.fsImage.getEditLog();
>   
>   if (!editLog.isOpenForWrite()) {
> // During startup, we're already open for write during initialization.
> editLog.initJournalsForWrite();
> // May need to recover
> editLog.recoverUnclosedStreams();
> 
> LOG.info("Catching up to latest edits from old active before " +
> "taking over writer role in edits logs.");
> editLogTailer.catchupDuringFailover();
> {noformat}
> 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
> Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
> XX.XX.XX.55:58003: output error
> 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
> from XX.XX.XX.55:58004: error: java.lang.NullPointerException
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
>   at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
>   at 
> org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1

[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2015-01-18 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14282148#comment-14282148
 ] 

Vinayakumar B commented on HDFS-3443:
-

{quote}Some methods such as saveNamespace() and refreshNodes are 
OperationCategory.UNCHECKED operations so that standby nn should serve them.
Some other methods such as blockReceivedAndDeleted(), 
refreshUserToGroupsMappings() and addSpanReceiver() do not check 
OperationCategory. Some of them probably are bugs.{quote}
All these will be processed once all the services (common and state-specific) 
are started, because after this patch everything starts under the same lock.
So I feel it is not a problem.

> Unable to catch up edits during standby to active switch due to NPE
> ---
>
> Key: HDFS-3443
> URL: https://issues.apache.org/jira/browse/HDFS-3443
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: auto-failover
>Reporter: suja s
>Assignee: amith
> Attachments: HDFS-3443-003.patch, HDFS-3443_1.patch, HDFS-3443_1.patch
>
>
> Start NN
> Let NN standby services be started.
> Before the editLogTailer is initialised start ZKFC and allow the 
> activeservices start to proceed further.
> Here editLogTailer.catchupDuringFailover() will throw NPE.
> void startActiveServices() throws IOException {
> LOG.info("Starting services required for active state");
> writeLock();
> try {
>   FSEditLog editLog = dir.fsImage.getEditLog();
>   
>   if (!editLog.isOpenForWrite()) {
> // During startup, we're already open for write during initialization.
> editLog.initJournalsForWrite();
> // May need to recover
> editLog.recoverUnclosedStreams();
> 
> LOG.info("Catching up to latest edits from old active before " +
> "taking over writer role in edits logs.");
> editLogTailer.catchupDuringFailover();
> {noformat}
> 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
> Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
> XX.XX.XX.55:58003: output error
> 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
> from XX.XX.XX.55:58004: error: java.lang.NullPointerException
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
>   at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
>   at 
> org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
> 2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 9 on 8020 caught an exception
> java.nio.channels.ClosedChannelException
>   at 
> sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>   at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
>   at org.apache.hadoop.ipc.Server.access$2000(Server.java:107)
>   at 
> org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:930)
>   at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:994)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1738)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2015-01-14 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277967#comment-14277967
 ] 

Tsz Wo Nicholas Sze commented on HDFS-3443:
---

> ... All remaining requests will anyway be rejected since the initial state 
> will be STANDBY. ...

Some methods such as saveNamespace() and refreshNodes are 
OperationCategory.UNCHECKED operations so that standby nn should serve them.

Some other methods such as blockReceivedAndDeleted(), 
refreshUserToGroupsMappings() and addSpanReceiver() do not check 
OperationCategory.  Some of them probably are bugs.
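
For readers less familiar with the HA checks, the following simplified sketch 
(illustrative only; the real check lives in FSNamesystem/HAState and throws 
StandbyException) shows why UNCHECKED operations are served by a standby NN while 
READ/WRITE operations are rejected:
{code}
// Simplified illustration, not the actual HDFS implementation.
enum OperationCategory { READ, WRITE, CHECKPOINT, JOURNAL, UNCHECKED }

class HaStateSketch {
  private final boolean standby;

  HaStateSketch(boolean standby) { this.standby = standby; }

  // Rejects categorized operations while in standby; UNCHECKED passes through,
  // which is why admin operations marked UNCHECKED still reach a standby NN.
  void checkOperation(OperationCategory op) {
    if (standby && (op == OperationCategory.READ || op == OperationCategory.WRITE)) {
      throw new IllegalStateException(
          "Operation category " + op + " is not supported in state standby");
    }
  }
}
{code}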

> Unable to catch up edits during standby to active switch due to NPE
> ---
>
> Key: HDFS-3443
> URL: https://issues.apache.org/jira/browse/HDFS-3443
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: auto-failover
>Reporter: suja s
>Assignee: amith
> Attachments: HDFS-3443-003.patch, HDFS-3443_1.patch, HDFS-3443_1.patch
>
>
> Start NN
> Let NN standby services be started.
> Before the editLogTailer is initialised start ZKFC and allow the 
> activeservices start to proceed further.
> Here editLogTailer.catchupDuringFailover() will throw NPE.
> void startActiveServices() throws IOException {
> LOG.info("Starting services required for active state");
> writeLock();
> try {
>   FSEditLog editLog = dir.fsImage.getEditLog();
>   
>   if (!editLog.isOpenForWrite()) {
> // During startup, we're already open for write during initialization.
> editLog.initJournalsForWrite();
> // May need to recover
> editLog.recoverUnclosedStreams();
> 
> LOG.info("Catching up to latest edits from old active before " +
> "taking over writer role in edits logs.");
> editLogTailer.catchupDuringFailover();
> {noformat}
> 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
> Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
> XX.XX.XX.55:58003: output error
> 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
> from XX.XX.XX.55:58004: error: java.lang.NullPointerException
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
>   at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
>   at 
> org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
> 2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 9 on 8020 caught an exception
> java.nio.channels.ClosedChannelException
>   at 
> sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>   at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
>   at org.apache.hadoop.ipc.Server.access$2000(Server.java:107)
>   at 
> org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:930)
>   at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:994)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1738)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2015-01-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277867#comment-14277867
 ] 

Hadoop QA commented on HDFS-3443:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12692158/HDFS-3443-003.patch
  against trunk revision 6464a89.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.TestParallelShortCircuitRead
  org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
  org.apache.hadoop.hdfs.server.namenode.TestAllowFormat
  
org.apache.hadoop.hdfs.server.namenode.TestCheckPointForSecurityTokens
  org.apache.hadoop.hdfs.TestBlockStoragePolicy
  org.apache.hadoop.hdfs.server.datanode.TestRefreshNamenodes
  
org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandbyWithQJM
  org.apache.hadoop.hdfs.TestEncryptedTransfer
  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotMetrics
  org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
  org.apache.hadoop.hdfs.TestSnapshotCommands
  org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshottableDirListing
  org.apache.hadoop.hdfs.TestRead
  
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestUpdatePipelineWithSnapshots
  org.apache.hadoop.hdfs.TestBlocksScheduledCounter
  
org.apache.hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate
  org.apache.hadoop.cli.TestHDFSCLI
  org.apache.hadoop.hdfs.TestDFSPermission
  org.apache.hadoop.hdfs.server.namenode.TestCheckpoint
  org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
  org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure
  org.apache.hadoop.hdfs.tools.TestGetGroups
  org.apache.hadoop.hdfs.server.namenode.TestStartup
  
org.apache.hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics
  org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
  org.apache.hadoop.hdfs.server.namenode.TestFSImageWithXAttr
  org.apache.hadoop.hdfs.TestMultiThreadedHflush
  org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
  org.apache.hadoop.hdfs.TestDFSClientFailover
  org.apache.hadoop.hdfs.TestBlockReaderLocal
  org.apache.hadoop.cli.TestCacheAdminCLI
  org.apache.hadoop.hdfs.server.mover.TestMover
  
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestRbwSpaceReservation
  
org.apache.hadoop.hdfs.server.blockmanagement.TestOverReplicatedBlocks
  
org.apache.hadoop.hdfs.server.namenode.ha.TestInitializeSharedEdits
  
org.apache.hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality
  org.apache.hadoop.hdfs.server.namenode.TestNameNodeRecovery
  
org.apache.hadoop.hdfs.server.namenode.ha.TestFailureOfSharedDir
  org.apache.hadoop.hdfs.TestLeaseRecovery2
  org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
  org.apache.hadoop.hdfs.server.namenode.TestFSImageWithAcl
  
org.apache.hadoop.hdfs.server.namenode.TestLargeDirectoryDelete
  org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
  org.apache.hadoop.fs.TestFcHdfsSetUMask
  org.apache.hadoop.hdfs.TestPread
  org.apache.hadoop.hdfs.server.namenode.TestFSEditLogLoader
  
org.apache

[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2015-01-13 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276510#comment-14276510
 ] 

Vinayakumar B commented on HDFS-3443:
-

bq. How about adding a boolean for indicating namenode starting up so that 
NameNodeRpcServer could refuse all operations?
The option is good. Currently I think only the transition RPCs will have a problem, 
which should be handled after this patch.
All remaining requests will anyway be rejected since the initial state 
will be STANDBY.

Am I right?

> Unable to catch up edits during standby to active switch due to NPE
> ---
>
> Key: HDFS-3443
> URL: https://issues.apache.org/jira/browse/HDFS-3443
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: auto-failover
>Reporter: suja s
>Assignee: amith
> Attachments: HDFS-3443-003.patch, HDFS-3443_1.patch, HDFS-3443_1.patch
>
>
> Start NN
> Let NN standby services be started.
> Before the editLogTailer is initialised start ZKFC and allow the 
> activeservices start to proceed further.
> Here editLogTailer.catchupDuringFailover() will throw NPE.
> void startActiveServices() throws IOException {
> LOG.info("Starting services required for active state");
> writeLock();
> try {
>   FSEditLog editLog = dir.fsImage.getEditLog();
>   
>   if (!editLog.isOpenForWrite()) {
> // During startup, we're already open for write during initialization.
> editLog.initJournalsForWrite();
> // May need to recover
> editLog.recoverUnclosedStreams();
> 
> LOG.info("Catching up to latest edits from old active before " +
> "taking over writer role in edits logs.");
> editLogTailer.catchupDuringFailover();
> {noformat}
> 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
> Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
> XX.XX.XX.55:58003: output error
> 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
> from XX.XX.XX.55:58004: error: java.lang.NullPointerException
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
>   at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
>   at 
> org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
> 2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 9 on 8020 caught an exception
> java.nio.channels.ClosedChannelException
>   at 
> sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>   at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
>   at org.apache.hadoop.ipc.Server.access$2000(Server.java:107)
>   at 
> org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:930)
>   at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:994)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1738)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2015-01-13 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14275346#comment-14275346
 ] 

Tsz Wo Nicholas Sze commented on HDFS-3443:
---

Hi [~amithdk], are you still working on this issue?  If not, I am happy to pick 
this up.

> Unable to catch up edits during standby to active switch due to NPE
> ---
>
> Key: HDFS-3443
> URL: https://issues.apache.org/jira/browse/HDFS-3443
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: auto-failover
>Reporter: suja s
>Assignee: amith
> Attachments: HDFS-3443_1.patch, HDFS-3443_1.patch
>
>
> Start NN
> Let NN standby services be started.
> Before the editLogTailer is initialised start ZKFC and allow the 
> activeservices start to proceed further.
> Here editLogTailer.catchupDuringFailover() will throw NPE.
> void startActiveServices() throws IOException {
> LOG.info("Starting services required for active state");
> writeLock();
> try {
>   FSEditLog editLog = dir.fsImage.getEditLog();
>   
>   if (!editLog.isOpenForWrite()) {
> // During startup, we're already open for write during initialization.
> editLog.initJournalsForWrite();
> // May need to recover
> editLog.recoverUnclosedStreams();
> 
> LOG.info("Catching up to latest edits from old active before " +
> "taking over writer role in edits logs.");
> editLogTailer.catchupDuringFailover();
> {noformat}
> 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
> Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
> XX.XX.XX.55:58003: output error
> 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
> from XX.XX.XX.55:58004: error: java.lang.NullPointerException
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
>   at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
>   at 
> org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
> 2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 9 on 8020 caught an exception
> java.nio.channels.ClosedChannelException
>   at 
> sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>   at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
>   at org.apache.hadoop.ipc.Server.access$2000(Server.java:107)
>   at 
> org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:930)
>   at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:994)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1738)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2015-01-10 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14272551#comment-14272551
 ] 

Tsz Wo Nicholas Sze commented on HDFS-3443:
---

It seems that trunk still has this bug.  How about adding a boolean for 
indicating namenode starting up so that NameNodeRpcServer could refuse all 
operations?
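
A minimal sketch of that idea (hypothetical names, not the actual NameNodeRpcServer 
API): a volatile flag set once initialization finishes and checked at the top of each 
RPC handler, so callers are refused, and can retry, until the namenode has fully 
started:
{code}
import java.io.IOException;

// Hypothetical sketch of the suggested guard; the real fix may differ.
class RpcServerSketch {
  private volatile boolean startupComplete = false;

  // Called once by the namenode after all services are initialized.
  void markStartupComplete() { startupComplete = true; }

  private void checkNNStartup() throws IOException {
    if (!startupComplete) {
      // Clients are expected to retry this call later.
      throw new IOException("NameNode still not started");
    }
  }

  void transitionToActive() throws IOException {
    checkNNStartup();   // refuse the transition until initialization is done
    // ... perform the actual state transition ...
  }
}
{code}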

> Unable to catch up edits during standby to active switch due to NPE
> ---
>
> Key: HDFS-3443
> URL: https://issues.apache.org/jira/browse/HDFS-3443
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: auto-failover
>Reporter: suja s
>Assignee: amith
> Attachments: HDFS-3443_1.patch, HDFS-3443_1.patch
>
>
> Start NN
> Let NN standby services be started.
> Before the editLogTailer is initialised start ZKFC and allow the 
> activeservices start to proceed further.
> Here editLogTailer.catchupDuringFailover() will throw NPE.
> void startActiveServices() throws IOException {
> LOG.info("Starting services required for active state");
> writeLock();
> try {
>   FSEditLog editLog = dir.fsImage.getEditLog();
>   
>   if (!editLog.isOpenForWrite()) {
> // During startup, we're already open for write during initialization.
> editLog.initJournalsForWrite();
> // May need to recover
> editLog.recoverUnclosedStreams();
> 
> LOG.info("Catching up to latest edits from old active before " +
> "taking over writer role in edits logs.");
> editLogTailer.catchupDuringFailover();
> {noformat}
> 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
> Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
> XX.XX.XX.55:58003: output error
> 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
> from XX.XX.XX.55:58004: error: java.lang.NullPointerException
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
>   at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
>   at 
> org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
> 2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 9 on 8020 caught an exception
> java.nio.channels.ClosedChannelException
>   at 
> sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>   at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
>   at org.apache.hadoop.ipc.Server.access$2000(Server.java:107)
>   at 
> org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:930)
>   at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:994)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1738)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2012-10-24 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483910#comment-13483910
 ] 

Vinay commented on HDFS-3443:
-

I have one more option to solve this without breaking the inheritance for 
BackupNode.

How about creating the EditLogTailer instance inside the constructor of FSNamesystem, 
since it is used in both Standby and Active states? We would still start/stop the 
tailer thread only in Standby state, as usual.
{code}this.tailerThread = new EditLogTailerThread(){code}
The above initialization would then be done in EditLogTailer#start().

Since *editLogTailer* is the only object which is initialized in standby state 
but also used in active state, the initialization order currently always has to be 
maintained. After the suggested fix, maintaining that order would no longer be required.
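
As a rough standalone sketch of that ordering argument (simplified classes, not the 
actual FSNamesystem/EditLogTailer code): if the tailer object itself is created 
eagerly in the constructor and only its background thread is tied to standby state, 
the active-state path can never hit a null reference:
{code}
// Standalone illustration of the suggestion above; names are simplified.
class EditLogTailerSketch {
  private Thread tailerThread;                       // started only in standby

  void startTailerThread() {
    tailerThread = new Thread(this::tailEdits, "EditLogTailer");
    tailerThread.start();
  }

  void stopTailerThread() throws InterruptedException {
    if (tailerThread != null) {
      tailerThread.interrupt();
      tailerThread.join();
    }
  }

  void catchupDuringFailover() { /* read the remaining edits once, synchronously */ }

  private void tailEdits()     { /* periodic tailing loop while in standby */ }
}

class NamesystemSketch {
  // Created eagerly in the constructor, so it is never null in any state.
  private final EditLogTailerSketch editLogTailer = new EditLogTailerSketch();

  void startStandbyServices() { editLogTailer.startTailerThread(); }

  void startActiveServices() {
    // Safe even if standby services were never started.
    editLogTailer.catchupDuringFailover();
  }
}
{code}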


> Unable to catch up edits during standby to active switch due to NPE
> ---
>
> Key: HDFS-3443
> URL: https://issues.apache.org/jira/browse/HDFS-3443
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: auto-failover
>Reporter: suja s
>Assignee: amith
> Attachments: HDFS-3443_1.patch, HDFS-3443_1.patch
>
>
> Start NN
> Let NN standby services be started.
> Before the editLogTailer is initialised start ZKFC and allow the 
> activeservices start to proceed further.
> Here editLogTailer.catchupDuringFailover() will throw NPE.
> void startActiveServices() throws IOException {
> LOG.info("Starting services required for active state");
> writeLock();
> try {
>   FSEditLog editLog = dir.fsImage.getEditLog();
>   
>   if (!editLog.isOpenForWrite()) {
> // During startup, we're already open for write during initialization.
> editLog.initJournalsForWrite();
> // May need to recover
> editLog.recoverUnclosedStreams();
> 
> LOG.info("Catching up to latest edits from old active before " +
> "taking over writer role in edits logs.");
> editLogTailer.catchupDuringFailover();
> {noformat}
> 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
> Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
> XX.XX.XX.55:58003: output error
> 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
> from XX.XX.XX.55:58004: error: java.lang.NullPointerException
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
>   at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
>   at 
> org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
> 2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 9 on 8020 caught an exception
> java.nio.channels.ClosedChannelException
>   at 
> sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>   at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
>   at org.apache.hadoop.ipc.Server.access$2000(Server.java:107)
>   at 
> org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:930)
>   at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:994)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1738)
> {noformat}

[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2012-08-31 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13446285#comment-13446285
 ] 

Uma Maheswara Rao G commented on HDFS-3443:
---

Cancelling the patch, as it handles the inheritance incorrectly. It will 
break the BackupNode functionality.

> Unable to catch up edits during standby to active switch due to NPE
> ---
>
> Key: HDFS-3443
> URL: https://issues.apache.org/jira/browse/HDFS-3443
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: auto-failover
>Reporter: suja s
>Assignee: amith
> Attachments: HDFS-3443_1.patch, HDFS-3443_1.patch
>
>
> Start NN
> Let NN standby services be started.
> Before the editLogTailer is initialised start ZKFC and allow the 
> activeservices start to proceed further.
> Here editLogTailer.catchupDuringFailover() will throw NPE.
> void startActiveServices() throws IOException {
> LOG.info("Starting services required for active state");
> writeLock();
> try {
>   FSEditLog editLog = dir.fsImage.getEditLog();
>   
>   if (!editLog.isOpenForWrite()) {
> // During startup, we're already open for write during initialization.
> editLog.initJournalsForWrite();
> // May need to recover
> editLog.recoverUnclosedStreams();
> 
> LOG.info("Catching up to latest edits from old active before " +
> "taking over writer role in edits logs.");
> editLogTailer.catchupDuringFailover();
> {noformat}
> 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
> Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
> XX.XX.XX.55:58003: output error
> 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
> from XX.XX.XX.55:58004: error: java.lang.NullPointerException
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
>   at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
>   at 
> org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
> 2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 9 on 8020 caught an exception
> java.nio.channels.ClosedChannelException
>   at 
> sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>   at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
>   at org.apache.hadoop.ipc.Server.access$2000(Server.java:107)
>   at 
> org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:930)
>   at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:994)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1738)
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2012-06-18 Thread amith (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13396504#comment-13396504
 ] 

amith commented on HDFS-3443:
-

Hi Todd, can you review the patch? :)

> Unable to catch up edits during standby to active switch due to NPE
> ---
>
> Key: HDFS-3443
> URL: https://issues.apache.org/jira/browse/HDFS-3443
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: auto-failover
>Reporter: suja s
>Assignee: amith
> Attachments: HDFS-3443_1.patch, HDFS-3443_1.patch
>
>
> Start NN
> Let NN standby services be started.
> Before the editLogTailer is initialised start ZKFC and allow the 
> activeservices start to proceed further.
> Here editLogTailer.catchupDuringFailover() will throw NPE.
> void startActiveServices() throws IOException {
> LOG.info("Starting services required for active state");
> writeLock();
> try {
>   FSEditLog editLog = dir.fsImage.getEditLog();
>   
>   if (!editLog.isOpenForWrite()) {
> // During startup, we're already open for write during initialization.
> editLog.initJournalsForWrite();
> // May need to recover
> editLog.recoverUnclosedStreams();
> 
> LOG.info("Catching up to latest edits from old active before " +
> "taking over writer role in edits logs.");
> editLogTailer.catchupDuringFailover();
> {noformat}
> 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
> Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
> XX.XX.XX.55:58003: output error
> 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
> from XX.XX.XX.55:58004: error: java.lang.NullPointerException
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
>   at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
>   at 
> org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
> 2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 9 on 8020 caught an exception
> java.nio.channels.ClosedChannelException
>   at 
> sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>   at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
>   at org.apache.hadoop.ipc.Server.access$2000(Server.java:107)
>   at 
> org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:930)
>   at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:994)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1738)
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2012-06-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13289202#comment-13289202
 ] 

Hadoop QA commented on HDFS-3443:
-

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12530906/HDFS-3443_1.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/2589//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2589//console

This message is automatically generated.

> Unable to catch up edits during standby to active switch due to NPE
> ---
>
> Key: HDFS-3443
> URL: https://issues.apache.org/jira/browse/HDFS-3443
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: auto-failover
>Reporter: suja s
>Assignee: amith
> Attachments: HDFS-3443_1.patch, HDFS-3443_1.patch
>
>
> Start NN
> Let NN standby services be started.
> Before the editLogTailer is initialised start ZKFC and allow the 
> activeservices start to proceed further.
> Here editLogTailer.catchupDuringFailover() will throw NPE.
> void startActiveServices() throws IOException {
> LOG.info("Starting services required for active state");
> writeLock();
> try {
>   FSEditLog editLog = dir.fsImage.getEditLog();
>   
>   if (!editLog.isOpenForWrite()) {
> // During startup, we're already open for write during initialization.
> editLog.initJournalsForWrite();
> // May need to recover
> editLog.recoverUnclosedStreams();
> 
> LOG.info("Catching up to latest edits from old active before " +
> "taking over writer role in edits logs.");
> editLogTailer.catchupDuringFailover();
> {noformat}
> 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
> Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
> XX.XX.XX.55:58003: output error
> 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
> from XX.XX.XX.55:58004: error: java.lang.NullPointerException
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
>   at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
>   at 
> org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)

[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2012-06-04 Thread amith (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13288398#comment-13288398
 ] 

amith commented on HDFS-3443:
-

Yes Todd, I am working on it and will provide a patch soon.

> Unable to catch up edits during standby to active switch due to NPE
> ---
>
> Key: HDFS-3443
> URL: https://issues.apache.org/jira/browse/HDFS-3443
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: auto-failover
>Reporter: suja s
>Assignee: amith
>
> Start NN
> Let NN standby services be started.
> Before the editLogTailer is initialised start ZKFC and allow the 
> activeservices start to proceed further.
> Here editLogTailer.catchupDuringFailover() will throw NPE.
> void startActiveServices() throws IOException {
> LOG.info("Starting services required for active state");
> writeLock();
> try {
>   FSEditLog editLog = dir.fsImage.getEditLog();
>   
>   if (!editLog.isOpenForWrite()) {
> // During startup, we're already open for write during initialization.
> editLog.initJournalsForWrite();
> // May need to recover
> editLog.recoverUnclosedStreams();
> 
> LOG.info("Catching up to latest edits from old active before " +
> "taking over writer role in edits logs.");
> editLogTailer.catchupDuringFailover();
> {noformat}
> 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
> Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
> XX.XX.XX.55:58003: output error
> 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
> from XX.XX.XX.55:58004: error: java.lang.NullPointerException
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
>   at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
>   at 
> org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
> 2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 9 on 8020 caught an exception
> java.nio.channels.ClosedChannelException
>   at 
> sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>   at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
>   at org.apache.hadoop.ipc.Server.access$2000(Server.java:107)
>   at 
> org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:930)
>   at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:994)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1738)
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2012-06-01 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13287736#comment-13287736
 ] 

Todd Lipcon commented on HDFS-3443:
---

Hey Amith. Are you planning on working on this? Happy to review.

> Unable to catch up edits during standby to active switch due to NPE
> ---
>
> Key: HDFS-3443
> URL: https://issues.apache.org/jira/browse/HDFS-3443
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: auto-failover
>Reporter: suja s
>Assignee: amith
>
> Start NN
> Let NN standby services be started.
> Before the editLogTailer is initialised start ZKFC and allow the 
> activeservices start to proceed further.
> Here editLogTailer.catchupDuringFailover() will throw NPE.
> void startActiveServices() throws IOException {
> LOG.info("Starting services required for active state");
> writeLock();
> try {
>   FSEditLog editLog = dir.fsImage.getEditLog();
>   
>   if (!editLog.isOpenForWrite()) {
> // During startup, we're already open for write during initialization.
> editLog.initJournalsForWrite();
> // May need to recover
> editLog.recoverUnclosedStreams();
> 
> LOG.info("Catching up to latest edits from old active before " +
> "taking over writer role in edits logs.");
> editLogTailer.catchupDuringFailover();
> {noformat}
> 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
> Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
> XX.XX.XX.55:58003: output error
> 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
> from XX.XX.XX.55:58004: error: java.lang.NullPointerException
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
>   at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
>   at 
> org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
> 2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 9 on 8020 caught an exception
> java.nio.channels.ClosedChannelException
>   at 
> sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>   at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
>   at org.apache.hadoop.ipc.Server.access$2000(Server.java:107)
>   at 
> org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:930)
>   at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:994)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1738)
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2012-05-23 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13281795#comment-13281795
 ] 

Uma Maheswara Rao G commented on HDFS-3443:
---

Now I could move this to a top-level issue.

> Unable to catch up edits during standby to active switch due to NPE
> ---
>
> Key: HDFS-3443
> URL: https://issues.apache.org/jira/browse/HDFS-3443
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: auto-failover
>Reporter: suja s
>Assignee: Uma Maheswara Rao G
>
> Start NN
> Let NN standby services be started.
> Before the editLogTailer is initialised start ZKFC and allow the 
> activeservices start to proceed further.
> Here editLogTailer.catchupDuringFailover() will throw NPE.
> void startActiveServices() throws IOException {
> LOG.info("Starting services required for active state");
> writeLock();
> try {
>   FSEditLog editLog = dir.fsImage.getEditLog();
>   
>   if (!editLog.isOpenForWrite()) {
> // During startup, we're already open for write during initialization.
> editLog.initJournalsForWrite();
> // May need to recover
> editLog.recoverUnclosedStreams();
> 
> LOG.info("Catching up to latest edits from old active before " +
> "taking over writer role in edits logs.");
> editLogTailer.catchupDuringFailover();
> {noformat}
> 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
> Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
> XX.XX.XX.55:58003: output error
> 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
> from XX.XX.XX.55:58004: error: java.lang.NullPointerException
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
>   at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
>   at 
> org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
> 2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 9 on 8020 caught an exception
> java.nio.channels.ClosedChannelException
>   at 
> sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>   at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
>   at org.apache.hadoop.ipc.Server.access$2000(Server.java:107)
>   at 
> org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:930)
>   at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:994)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1738)
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2012-05-20 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13279837#comment-13279837
 ] 

Todd Lipcon commented on HDFS-3443:
---

Thanks, Uma. Mind filing a JIRA in the "INFRA" project for the failure to 
convert to top-level? The JIRA upgrade last week probably broke it.

> Unable to catch up edits during standby to active switch due to NPE
> ---
>
> Key: HDFS-3443
> URL: https://issues.apache.org/jira/browse/HDFS-3443
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: auto-failover
>Reporter: suja s
>Assignee: Uma Maheswara Rao G
>
> Start NN
> Let NN standby services be started.
> Before the editLogTailer is initialised start ZKFC and allow the 
> activeservices start to proceed further.
> Here editLogTailer.catchupDuringFailover() will throw NPE.
> void startActiveServices() throws IOException {
> LOG.info("Starting services required for active state");
> writeLock();
> try {
>   FSEditLog editLog = dir.fsImage.getEditLog();
>   
>   if (!editLog.isOpenForWrite()) {
> // During startup, we're already open for write during initialization.
> editLog.initJournalsForWrite();
> // May need to recover
> editLog.recoverUnclosedStreams();
> 
> LOG.info("Catching up to latest edits from old active before " +
> "taking over writer role in edits logs.");
> editLogTailer.catchupDuringFailover();
> {noformat}
> 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
> Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
> XX.XX.XX.55:58003: output error
> 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
> from XX.XX.XX.55:58004: error: java.lang.NullPointerException
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
>   at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
>   at 
> org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
> 2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 9 on 8020 caught an exception
> java.nio.channels.ClosedChannelException
>   at 
> sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>   at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
>   at org.apache.hadoop.ipc.Server.access$2000(Server.java:107)
>   at 
> org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:930)
>   at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:994)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1738)
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2012-05-20 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13279775#comment-13279775
 ] 

Uma Maheswara Rao G commented on HDFS-3443:
---

I could not convert it to a parent (top-level) issue.

Cause:
java.lang.ClassNotFoundException: org.apache.jsp.secure.views.issue.convertissuetosubtask_002dselectparentandtype_jsp

> Unable to catch up edits during standby to active switch due to NPE
> ---
>
> Key: HDFS-3443
> URL: https://issues.apache.org/jira/browse/HDFS-3443
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: auto-failover
>Reporter: suja s
>Assignee: Uma Maheswara Rao G
>
> Start NN
> Let NN standby services be started.
> Before the editLogTailer is initialised start ZKFC and allow the 
> activeservices start to proceed further.
> Here editLogTailer.catchupDuringFailover() will throw NPE.
> void startActiveServices() throws IOException {
> LOG.info("Starting services required for active state");
> writeLock();
> try {
>   FSEditLog editLog = dir.fsImage.getEditLog();
>   
>   if (!editLog.isOpenForWrite()) {
> // During startup, we're already open for write during initialization.
> editLog.initJournalsForWrite();
> // May need to recover
> editLog.recoverUnclosedStreams();
> 
> LOG.info("Catching up to latest edits from old active before " +
> "taking over writer role in edits logs.");
> editLogTailer.catchupDuringFailover();
> {noformat}
> 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
> Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
> XX.XX.XX.55:58003: output error
> 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
> from XX.XX.XX.55:58004: error: java.lang.NullPointerException
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
>   at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
>   at 
> org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
> 2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 9 on 8020 caught an exception
> java.nio.channels.ClosedChannelException
>   at 
> sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>   at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
>   at org.apache.hadoop.ipc.Server.access$2000(Server.java:107)
>   at 
> org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:930)
>   at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:994)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1738)
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2012-05-20 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13279773#comment-13279773
 ] 

Uma Maheswara Rao G commented on HDFS-3443:
---

Yes, you are right. We can trigger this manually as well, but it will occur mostly with automatic failover.
Since you are planning the merge, I will prepare the patch directly on trunk. I think I can move this to a top-level issue.


> Unable to catch up edits during standby to active switch due to NPE
> ---
>
> Key: HDFS-3443
> URL: https://issues.apache.org/jira/browse/HDFS-3443
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: auto-failover
>Reporter: suja s
>Assignee: Uma Maheswara Rao G
>
> Start NN
> Let NN standby services be started.
> Before the editLogTailer is initialised start ZKFC and allow the 
> activeservices start to proceed further.
> Here editLogTailer.catchupDuringFailover() will throw NPE.
> void startActiveServices() throws IOException {
> LOG.info("Starting services required for active state");
> writeLock();
> try {
>   FSEditLog editLog = dir.fsImage.getEditLog();
>   
>   if (!editLog.isOpenForWrite()) {
> // During startup, we're already open for write during initialization.
> editLog.initJournalsForWrite();
> // May need to recover
> editLog.recoverUnclosedStreams();
> 
> LOG.info("Catching up to latest edits from old active before " +
> "taking over writer role in edits logs.");
> editLogTailer.catchupDuringFailover();
> {noformat}
> 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
> Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
> XX.XX.XX.55:58003: output error
> 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
> from XX.XX.XX.55:58004: error: java.lang.NullPointerException
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
>   at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
>   at 
> org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
> 2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 9 on 8020 caught an exception
> java.nio.channels.ClosedChannelException
>   at 
> sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>   at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
>   at org.apache.hadoop.ipc.Server.access$2000(Server.java:107)
>   at 
> org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:930)
>   at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:994)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1738)
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2012-05-19 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13279674#comment-13279674
 ] 

Todd Lipcon commented on HDFS-3443:
---

Just to clarify, this is a bug with HA in general, not specifically automatic failover, right? I.e., if a user manually triggered a failover while the NN was starting up, the same problem would occur, I think.

> Unable to catch up edits during standby to active switch due to NPE
> ---
>
> Key: HDFS-3443
> URL: https://issues.apache.org/jira/browse/HDFS-3443
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: auto-failover
>Reporter: suja s
>Assignee: Uma Maheswara Rao G
>
> Start NN
> Let NN standby services be started.
> Before the editLogTailer is initialised start ZKFC and allow the 
> activeservices start to proceed further.
> Here editLogTailer.catchupDuringFailover() will throw NPE.
> void startActiveServices() throws IOException {
> LOG.info("Starting services required for active state");
> writeLock();
> try {
>   FSEditLog editLog = dir.fsImage.getEditLog();
>   
>   if (!editLog.isOpenForWrite()) {
> // During startup, we're already open for write during initialization.
> editLog.initJournalsForWrite();
> // May need to recover
> editLog.recoverUnclosedStreams();
> 
> LOG.info("Catching up to latest edits from old active before " +
> "taking over writer role in edits logs.");
> editLogTailer.catchupDuringFailover();
> {noformat}
> 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
> Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
> XX.XX.XX.55:58003: output error
> 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
> from XX.XX.XX.55:58004: error: java.lang.NullPointerException
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
>   at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
>   at 
> org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
> 2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 9 on 8020 caught an exception
> java.nio.channels.ClosedChannelException
>   at 
> sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>   at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
>   at org.apache.hadoop.ipc.Server.access$2000(Server.java:107)
>   at 
> org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:930)
>   at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:994)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1738)
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2012-05-18 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13279278#comment-13279278
 ] 

Todd Lipcon commented on HDFS-3443:
---

I think it makes sense that we should hold the FSN lock while we're starting active services at startup -- i.e., we should write-lock everything, start the RPC server, wait until initialization is all done, and then unlock. (A rough sketch of this ordering is below.)
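
A minimal, standalone sketch of that ordering, assuming a simplified namesystem with a single read/write lock; the class, field, and method names below are illustrative placeholders, not the actual NameNode code:

{noformat}
import java.util.concurrent.locks.ReentrantReadWriteLock;

/**
 * Illustrative sketch only: hold the namesystem write lock across startup so
 * a transitionToActive RPC cannot run before standby initialization finishes.
 * Class, field, and method names here are placeholders, not real HDFS APIs.
 */
public class StartupLockSketch {
  private final ReentrantReadWriteLock fsnLock = new ReentrantReadWriteLock();
  private volatile Runnable editLogTailer; // stand-in for the real EditLogTailer

  public void startNameNode() {
    fsnLock.writeLock().lock();          // write-lock everything up front
    try {
      startRpcServer();                  // RPC server accepts calls, but any
                                         // handler needing the lock blocks
      editLogTailer = () -> { };         // standby init creates the tailer
    } finally {
      fsnLock.writeLock().unlock();      // only now can transitionToActive run
    }
  }

  public void transitionToActive() {
    fsnLock.writeLock().lock();          // blocks until startup above is done
    try {
      editLogTailer.run();               // tailer is guaranteed non-null here
    } finally {
      fsnLock.writeLock().unlock();
    }
  }

  private void startRpcServer() { /* placeholder: bind and start IPC server */ }
}
{noformat}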

> Unable to catch up edits during standby to active switch due to NPE
> ---
>
> Key: HDFS-3443
> URL: https://issues.apache.org/jira/browse/HDFS-3443
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: auto-failover
>Reporter: suja s
>Assignee: Uma Maheswara Rao G
>
> Start NN
> Let NN standby services be started.
> Before the editLogTailer is initialised start ZKFC and allow the 
> activeservices start to proceed further.
> Here editLogTailer.catchupDuringFailover() will throw NPE.
> void startActiveServices() throws IOException {
> LOG.info("Starting services required for active state");
> writeLock();
> try {
>   FSEditLog editLog = dir.fsImage.getEditLog();
>   
>   if (!editLog.isOpenForWrite()) {
> // During startup, we're already open for write during initialization.
> editLog.initJournalsForWrite();
> // May need to recover
> editLog.recoverUnclosedStreams();
> 
> LOG.info("Catching up to latest edits from old active before " +
> "taking over writer role in edits logs.");
> editLogTailer.catchupDuringFailover();
> {noformat}
> 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
> Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
> XX.XX.XX.55:58003: output error
> 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
> from XX.XX.XX.55:58004: error: java.lang.NullPointerException
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
>   at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
>   at 
> org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
> 2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 9 on 8020 caught an exception
> java.nio.channels.ClosedChannelException
>   at 
> sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>   at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
>   at org.apache.hadoop.ipc.Server.access$2000(Server.java:107)
>   at 
> org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:930)
>   at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:994)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1738)
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HDFS-3443) Unable to catch up edits during standby to active switch due to NPE

2012-05-18 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13278734#comment-13278734
 ] 

Uma Maheswara Rao G commented on HDFS-3443:
---

I think the standby node had not initialized completely when ZKFC gave the call for transitioning to active. At this point we take the FSNamesystem write lock only for starting the active services, so starting the active services can proceed in parallel with the standby initialization. By that time, editLogTailer might not have been initialized completely, hence the NPE.

I think we should block the active initialization until the standby initialization completes. Should we hold a lock here?
Or, since there won't be any FSNamesystem updates on the standby, locking may not be required here, so just having a null check should be fine? (A rough sketch of that option is below.)
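
For reference, a minimal standalone sketch of the null-check option mentioned above; the names mirror the snippet in the description, but this is only an illustration of the idea, not the committed fix:

{noformat}
/**
 * Illustrative sketch only: if the standby has not yet created its edit log
 * tailer when the transition to active arrives, skip the catch-up step
 * instead of dereferencing null. Names are placeholders, not real HDFS code.
 */
public class NullCheckSketch {
  private volatile EditTailer editLogTailer; // set later by standby init

  void startActiveServices() {
    EditTailer tailer = editLogTailer;
    if (tailer != null) {
      // Normal failover: pull remaining edits written by the old active.
      tailer.catchupDuringFailover();
    }
    // else: startup race -- the tailer was never created, so there are no
    // tailed edits to catch up on; continue starting active services.
  }

  void startStandbyServices() {
    editLogTailer = new EditTailer(); // the step the race can skip past
  }

  static class EditTailer {
    void catchupDuringFailover() { /* placeholder */ }
  }
}
{noformat}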

> Unable to catch up edits during standby to active switch due to NPE
> ---
>
> Key: HDFS-3443
> URL: https://issues.apache.org/jira/browse/HDFS-3443
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: auto-failover
>Reporter: suja s
>Assignee: Uma Maheswara Rao G
>
> Start NN
> Let NN standby services be started.
> Before the editLogTailer is initialised start ZKFC and allow the 
> activeservices start to proceed further.
> Here editLogTailer.catchupDuringFailover() will throw NPE.
> void startActiveServices() throws IOException {
> LOG.info("Starting services required for active state");
> writeLock();
> try {
>   FSEditLog editLog = dir.fsImage.getEditLog();
>   
>   if (!editLog.isOpenForWrite()) {
> // During startup, we're already open for write during initialization.
> editLog.initJournalsForWrite();
> // May need to recover
> editLog.recoverUnclosedStreams();
> 
> LOG.info("Catching up to latest edits from old active before " +
> "taking over writer role in edits logs.");
> editLogTailer.catchupDuringFailover();
> {noformat}
> 2012-05-18 16:51:27,585 WARN org.apache.hadoop.ipc.Server: IPC Server 
> Responder, call org.apache.hadoop.ha.HAServiceProtocol.getServiceStatus from 
> XX.XX.XX.55:58003: output error
> 2012-05-18 16:51:27,586 WARN org.apache.hadoop.ipc.Server: IPC Server handler 
> 8 on 8020, call org.apache.hadoop.ha.HAServiceProtocol.transitionToActive 
> from XX.XX.XX.55:58004: error: java.lang.NullPointerException
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:602)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1287)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:63)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1219)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:978)
>   at 
> org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
>   at 
> org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3633)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)
> 2012-05-18 16:51:27,586 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
> 9 on 8020 caught an exception
> java.nio.channels.ClosedChannelException
>   at 
> sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:133)
>   at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:324)
>   at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2092)
>   at org.apache.hadoop.ipc.Server.access$2000(Server.java:107)
>   at 
> org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:930)
>   at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:994)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1738)
> {noformat}

--
This mes