[jira] [Resolved] (HDFS-6590) NullPointerException was generated in getBlockLocalPathInfo when datanode restarts

2014-10-26 Thread Shengjun Xin (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-6590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shengjun Xin resolved HDFS-6590.

Resolution: Fixed

> NullPointerException was generated in getBlockLocalPathInfo when datanode restarts
> --
>
> Key: HDFS-6590
> URL: https://issues.apache.org/jira/browse/HDFS-6590
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.2.0
>Reporter: Guo Ruijing
>
> 2014-06-11 20:34:40.240119, p43949, th140725562181728, ERROR cannot setup block reader for Block: [block pool ID: BP-1901161041-172.28.1.251-1402542341112 block ID 1073741926_1102] on Datanode: sdw3(172.28.1.3).
> RpcHelper.h: 74: HdfsIOException: Unexpected exception: when unwrap the rpc remote exception "java.lang.NullPointerException", java.lang.NullPointerException
> at org.apache.hadoop.hdfs.server.datanode.DataNode.getBlockLocalPathInfo(DataNode.java:1014)
> at org.apache.hadoop.hdfs.protocolPB.ClientDatanodeProtocolServerSideTranslatorPB.getBlockLocalPathInfo(ClientDatanodeProtocolServerSideTranslatorPB.java:112)
> at org.apache.hadoop.hdfs.protocol.proto.ClientDatanodeProtocolProtos$ClientDatanodeProtocolService$2.callBlockingMethod(ClientDatanodeProtocolProtos.java:6373)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)
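
For context, a minimal, self-contained Java sketch of the failure pattern suggested by the trace above. It assumes the NullPointerException comes from dereferencing a replica lookup that returns null while the restarted datanode has not yet rebuilt its block map; the class and field names here are hypothetical and are not Hadoop source.

{code}
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch (not DataNode code): models why a lookup-based RPC
// handler throws NullPointerException right after a restart, and the guard
// that turns it into a meaningful remote exception instead.
public class LocalPathLookup {
  // Stands in for the datanode's replica map; empty until the block pool
  // has been re-scanned after the restart.
  private final Map<Long, String> blockIdToPath = new ConcurrentHashMap<>();

  public String getBlockLocalPath(long blockId) throws IOException {
    String path = blockIdToPath.get(blockId);
    if (path == null) {
      // Without this check, a caller that immediately dereferences the
      // result fails with a NullPointerException, which the RPC layer
      // wraps and returns to the client as seen in the stack trace above.
      throw new IOException("Block " + blockId + " not available yet; "
          + "datanode may still be restarting");
    }
    return path;
  }

  public static void main(String[] args) {
    LocalPathLookup lookup = new LocalPathLookup();
    try {
      lookup.getBlockLocalPath(1073741926L);
    } catch (IOException e) {
      System.out.println("Expected failure after restart: " + e.getMessage());
    }
  }
}
{code}

With a guard like this, the client would receive a descriptive remote IOException rather than an unwrapped java.lang.NullPointerException.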



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-6520) Failed to run fsck -move

2014-06-11 Thread Shengjun Xin (JIRA)
Shengjun Xin created HDFS-6520:
--

 Summary: Failed to run fsck -move
 Key: HDFS-6520
 URL: https://issues.apache.org/jira/browse/HDFS-6520
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Shengjun Xin


I hit an error when running fsck -move.
My steps are as follows:
1. Set up a pseudo cluster
2. Copy a file to hdfs
3. Corrupt a block of the file
4. Run fsck to check:
{code}
Connecting to namenode via http://localhost:50070
FSCK started by hadoop (auth:SIMPLE) from /127.0.0.1 for path /user/hadoop at Wed Jun 11 15:58:38 CST 2014
.
/user/hadoop/fsck-test: CORRUPT blockpool BP-654596295-10.37.7.84-1402466764642 block blk_1073741825

/user/hadoop/fsck-test: MISSING 1 blocks of total size 1048576 B.Status: CORRUPT
 Total size:4104304 B
 Total dirs:1
 Total files:   1
 Total symlinks:0
 Total blocks (validated):  4 (avg. block size 1026076 B)
  
  CORRUPT FILES:1
  MISSING BLOCKS:   1
  MISSING SIZE: 1048576 B
  CORRUPT BLOCKS:   1
  
 Minimally replicated blocks:   3 (75.0 %)
 Over-replicated blocks:0 (0.0 %)
 Under-replicated blocks:   0 (0.0 %)
 Mis-replicated blocks: 0 (0.0 %)
 Default replication factor:1
 Average block replication: 0.75
 Corrupt blocks:1
 Missing replicas:  0 (0.0 %)
 Number of data-nodes:  1
 Number of racks:   1
FSCK ended at Wed Jun 11 15:58:38 CST 2014 in 1 milliseconds


The filesystem under path '/user/hadoop' is CORRUPT
{code}
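
Step 3 above corrupts a replica by hand; the following is a hypothetical, self-contained Java sketch of one way to do that. The block-file path is an assumption: on a real pseudo-cluster the replica sits under the configured dfs.datanode.data.dir (for example .../current/BP-*/current/finalized/.../blk_1073741825).

{code}
import java.io.IOException;
import java.io.RandomAccessFile;

// Hypothetical sketch of step 3: flip one byte in the middle of a block
// replica so its data no longer matches the stored checksums.  The default
// path below is an example only; pass the real replica path as args[0].
public class CorruptBlockReplica {
  public static void main(String[] args) throws IOException {
    String blockFile = args.length > 0 ? args[0]
        : "/tmp/hadoop-data/current/finalized/blk_1073741825";

    try (RandomAccessFile raf = new RandomAccessFile(blockFile, "rw")) {
      long mid = raf.length() / 2;
      raf.seek(mid);
      int original = raf.read();       // read one byte from the middle
      raf.seek(mid);
      raf.write(~original & 0xFF);     // write back its bitwise inverse
    }
    // A subsequent read, or fsck as in step 4, should now report the block
    // as corrupt because the data no longer matches the .meta checksums.
  }
}
{code}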
5. Run fsck -move to move the corrupted file to /lost+found; the following error appears in the namenode log:
{code}
2014-06-11 15:48:16,686 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: FSCK started by hadoop (auth:SIMPLE) from /127.0.0.1 for path /user/hadoop at Wed Jun 11 15:48:16 CST 2014
2014-06-11 15:48:16,894 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 35 Total time for transactions(ms): 9 Number of transactions batched in Syncs: 0 Number of syncs: 25 SyncTimes(ms): 73
2014-06-11 15:48:16,991 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Error reading block
java.io.IOException: Expected empty end-of-read packet! Header: PacketHeader with packetLen=66048 header data: offsetInBlock: 65536
seqno: 1
lastPacketInBlock: false
dataLen: 65536

at org.apache.hadoop.hdfs.RemoteBlockReader2.readTrailingEmptyPacket(RemoteBlockReader2.java:259)
at org.apache.hadoop.hdfs.RemoteBlockReader2.readNextPacket(RemoteBlockReader2.java:220)
at org.apache.hadoop.hdfs.RemoteBlockReader2.read(RemoteBlockReader2.java:138)
at org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.copyBlock(NamenodeFsck.java:649)
at org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.copyBlocksToLostFound(NamenodeFsck.java:543)
at org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.check(NamenodeFsck.java:460)
at org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.check(NamenodeFsck.java:324)
at org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.fsck(NamenodeFsck.java:233)
at org.apache.hadoop.hdfs.server.namenode.FsckServlet$1.run(FsckServlet.java:67)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.hdfs.server.namenode.FsckServlet.doGet(FsckServlet.java:58)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1192)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppCont