[ 
https://issues.apache.org/jira/browse/HDFS-7842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336302#comment-14336302
 ] 

J.Andreina commented on HDFS-7842:
----------------------------------

Observation:
===========
Logs after Step 5
{noformat}
Namenode Log:
=============
15/02/25 13:10:59 INFO hdfs.StateChange: BLOCK* allocate 
blk_1073741830_1006{UCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-da5955d6-d021-4576-aa43-6caf70fcfd17:NORMAL:XXXXXXXXXXX:50010|RBW]]}
 for /File_1._COPYING_
15/02/25 13:10:59 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap 
updated: XXXXXXXXXXX:50010 is added to blk_1073741830_1006{UCState=COMMITTED, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-da5955d6-d021-4576-aa43-6caf70fcfd17:NORMAL:XXXXXXXXXXX:50010|RBW]]}
 size 11526
15/02/25 13:10:59 INFO hdfs.StateChange: DIR* completeFile: /File_1._COPYING_ 
is closed by DFSClient_NONMAPREDUCE_-1004187273_1

Datanode Log:
=============
2015-02-25 13:10:59,222 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Receiving BP-1954121396-XXXXXXXXXXX-1424840820188:blk_1073741830_1006 src: 
/XXXXXXXXXXX:34363 dest: /XXXXXXXXXXX:50010
2015-02-25 13:10:59,295 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
PacketResponder: BP-1954121396-XXXXXXXXXXX-1424840820188:blk_1073741830_1006, 
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
{noformat}

Logs after Step 12

{noformat}
Namenode Log:
============
15/02/25 13:15:51 INFO BlockStateChange: BLOCK* InvalidateBlocks: add 
blk_1073741830_1006 to XXXXXXXXXXX:50010

15/02/25 13:16:04 INFO hdfs.StateChange: BLOCK* allocate 
blk_1073741830_1006{UCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-f560cc10-74e8-4ea8-a8d9-6959fe5c1104:NORMAL:XXXXXXXXXXX:50010|RBW]]}
 for /File_2._COPYING_
15/02/25 13:16:05 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap 
updated: XXXXXXXXXXX:50010 is added to 
blk_1073741830_1006{UCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-da5955d6-d021-4576-aa43-6caf70fcfd17:NORMAL:XXXXXXXXXXX:50010|FINALIZED]]}
 size 0
15/02/25 13:16:05 INFO hdfs.StateChange: DIR* completeFile: /File_2._COPYING_ 
is closed by DFSClient_NONMAPREDUCE_-1317707332_1

Datanode Log:
=============
2015-02-25 13:15:51,831 INFO org.apache.hadoop.hdfs.server.common.Storage: 
Enabled trash for bpid BP-1954121396-XXXXXXXXXXX-1424840820188
2015-02-25 13:15:54,801 INFO 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
 Scheduling blk_1073741830_1006 file 
/mnt/tmp1/current/BP-1954121396-XXXXXXXXXXX-1424840820188/current/finalized/subdir0/subdir0/blk_1073741830
 for deletion
2015-02-25 13:15:54,805 INFO 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
 Deleted BP-1954121396-XXXXXXXXXXX-1424840820188 blk_1073741830_1006 file 
/mnt/tmp1/current/BP-1954121396-XXXXXXXXXXX-1424840820188/current/finalized/subdir0/subdir0/blk_1073741830

2015-02-25 13:16:05,074 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
Receiving BP-1954121396-XXXXXXXXXXX-1424840820188:blk_1073741830_1006 src: 
/XXXXXXXXXXX:34528 dest: /XXXXXXXXXXX:50010
2015-02-25 13:16:05,138 INFO 
org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: 
/XXXXXXXXXXX:34528, dest: /XXXXXXXXXXX:50010, bytes: 6324, op: HDFS_WRITE, 
cliID: DFSClient_NONMAPREDUCE_-1317707332_1, offset: 0, srvID: 
e33b81ce-8820-4343-955f-8726965d1917, blockid: 
BP-1954121396-XXXXXXXXXXX-1424840820188:blk_1073741830_1006, duration: 50371413
2015-02-25 13:16:05,141 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
PacketResponder: BP-1954121396-XXXXXXXXXXX-1424840820188:blk_1073741830_1006, 
type=LAST_IN_PIPELINE, downstreams=0:[] terminating
{noformat}

Log after Step 14
{noformat}
Datanode Log:
=============
2015-02-25 13:18:06,796 INFO org.apache.hadoop.hdfs.server.common.Storage: 
Restoring 
/mnt/tmp1/current/BP-1954121396-XXXXXXXXXXX-1424840820188/trash/finalized/subdir0/subdir0/blk_1073741832_1008.meta
 to 
/mnt/tmp1/current/BP-1954121396-XXXXXXXXXXX-1424840820188/current/finalized/subdir0/subdir0
2015-02-25 13:18:06,797 INFO org.apache.hadoop.hdfs.server.common.Storage: 
Restored 4 block files from trash.

Namenode Log:
============
15/02/25 13:18:07 INFO BlockStateChange: BLOCK 
NameSystem.addToCorruptReplicasMap: blk_1073741830 added as corrupt on 
XXXXXXXXXXX:50010 by host-10-177-112-123/XXXXXXXXXXX  because block is COMPLETE 
and reported length 11526 does not match length in block map 6324
15/02/25 13:18:07 INFO BlockStateChange: BLOCK* processReport: from storage 
DS-da5955d6-d021-4576-aa43-6caf70fcfd17 node DatanodeRegistration(XXXXXXXXXXX, 
datanodeUuid=e33b81ce-8820-4343-955f-8726965d1917, infoPort=50075, 
infoSecurePort=0, ipcPort=50020, 
storageInfo=lv=-56;cid=CID-dd48fb1f-1d88-4d65-90c3-a7535053f4e1;nsid=2021392782;c=0),
 blocks: 5, hasStaleStorage: false, processing time: 0 msecs
{noformat}


Suggestion:
===========
Restoring blocks from trash after a downgrade can be avoided.

Please review and give your opinion on this issue. If this sounds good, I'll 
provide a patch for it.
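As a sketch of the idea, the Datanode could restore trashed block files only when it is explicitly started for a rollback, never for a downgrade. The snippet below is a minimal illustration only; the names (StartupKind, shouldRestoreTrash) are hypothetical and do not correspond to actual Hadoop classes or the real storage-layer code:

```java
// Hedged sketch of the proposed guard. All names here are illustrative,
// not real Hadoop APIs.
class TrashRestorePolicy {

    enum StartupKind { NORMAL, ROLLBACK, DOWNGRADE }

    /**
     * Blocks moved to trash during a rolling upgrade should be restored
     * only on an explicit rollback. Restoring them on a downgrade can
     * resurrect stale replicas whose block ids were reused after the
     * rollback, producing the length mismatch seen in the logs above.
     */
    static boolean shouldRestoreTrash(StartupKind kind) {
        return kind == StartupKind.ROLLBACK;
    }

    public static void main(String[] args) {
        System.out.println(shouldRestoreTrash(StartupKind.ROLLBACK));
        System.out.println(shouldRestoreTrash(StartupKind.DOWNGRADE));
    }
}
```

With such a guard, the Step 14 restore ("Restored 4 block files from trash.") would not occur on the downgrade path, and the stale blk_1073741830 replica would not be reported back to the Namenode.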

> Blocks missed while performing downgrade immediately after rolling back the 
> cluster.
> ------------------------------------------------------------------------------------
>
>                 Key: HDFS-7842
>                 URL: https://issues.apache.org/jira/browse/HDFS-7842
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: J.Andreina
>            Assignee: J.Andreina
>            Priority: Critical
>
> Performing a downgrade immediately after rolling back the cluster will 
> restore the blocks from trash.
> Since the block id of a file created before the rollback is the same as that 
> of a file created before the downgrade, the Namenode enters safemode: the 
> block size reported by the Datanode differs from the one in the block map 
> (corrupt blocks).
> Steps to Reproduce
> {noformat}
> Step 1: Prepare rolling upgrade using "hdfs dfsadmin -rollingUpgrade prepare"
> Step 2: Shutdown SNN and NN
> Step 3: Start NN with the "hdfs namenode -rollingUpgrade started" option.
> Step 4: Execute "hdfs dfsadmin -shutdownDatanode <DATANODE_HOST:IPC_PORT> 
> upgrade" and restart the Datanode
> Step 5: Create File_1 of size 11526
> Step 6: Shutdown both NN and DN
> Step 7: Start NNs with the "hdfs namenode -rollingUpgrade rollback" option.
>       Start DNs with the "-rollback" option.
> Step 8: Prepare rolling upgrade using "hdfs dfsadmin -rollingUpgrade prepare"
> Step 9: Shutdown SNN and NN
> Step 10: Start NN with the "hdfs namenode -rollingUpgrade started" option.
> Step 11: Execute "hdfs dfsadmin -shutdownDatanode <DATANODE_HOST:IPC_PORT> 
> upgrade" and restart the Datanode
> Step 12: Add File_2 of size 6324 (which has the same block id as the 
> previously created File_1 of size 11526)
> Step 13: Shutdown both NN and DN
> Step 14: Start NNs with the "hdfs namenode -rollingUpgrade downgrade" 
> option. Start DNs normally.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
