[ https://issues.apache.org/jira/browse/HDFS-200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12758946#action_12758946 ]

ryan rawson commented on HDFS-200:
----------------------------------

I'm not sure exactly what I'm seeing; here is the flow of events:

When the regionserver closes a log, the namenode says:

2009-09-23 17:21:05,128 DEBUG org.apache.hadoop.hdfs.StateChange: *BLOCK* NameNode.blockReceived: from 10.10.21.38:50010 1 blocks.
2009-09-23 17:21:05,128 DEBUG org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.blockReceived: blk_8594965619504827451_4351 is received from 10.10.21.38:50010
2009-09-23 17:21:05,128 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 10.10.21.38:50010 is added to blk_8594965619504827451_4351 size 573866
2009-09-23 17:21:05,130 DEBUG org.apache.hadoop.hdfs.StateChange: *BLOCK* NameNode.blockReceived: from 10.10.21.45:50010 1 blocks.
2009-09-23 17:21:05,130 DEBUG org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.blockReceived: blk_8594965619504827451_4351 is received from 10.10.21.45:50010
2009-09-23 17:21:05,130 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 10.10.21.45:50010 is added to blk_8594965619504827451_4351 size 573866
2009-09-23 17:21:05,131 DEBUG org.apache.hadoop.hdfs.StateChange: *BLOCK* NameNode.blockReceived: from 10.10.21.32:50010 1 blocks.
2009-09-23 17:21:05,131 DEBUG org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.blockReceived: blk_8594965619504827451_4351 is received from 10.10.21.32:50010
2009-09-23 17:21:05,131 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 10.10.21.32:50010 is added to blk_8594965619504827451_4351 size 573866
2009-09-23 17:21:05,131 DEBUG org.apache.hadoop.hdfs.StateChange: *DIR* NameNode.complete: /hbase/.logs/sv4borg32,60020,1253751520085/hlog.dat.1253751663228 for DFSClient_-2129099062
2009-09-23 17:21:05,131 DEBUG org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.completeFile: /hbase/.logs/sv4borg32,60020,1253751520085/hlog.dat.1253751663228 for DFSClient_-2129099062
2009-09-23 17:21:05,132 DEBUG org.apache.hadoop.hdfs.StateChange: DIR* FSDirectory.closeFile: /hbase/.logs/sv4borg32,60020,1253751520085/hlog.dat.1253751663228 with 1 blocks is persisted to the file system
2009-09-23 17:21:05,132 DEBUG org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.completeFile: /hbase/.logs/sv4borg32,60020,1253751520085/hlog.dat.1253751663228 blocklist persisted

So at this point we have 3 replicas of the block, and they have all checked in, right?
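
As a quick sanity check, something like the following (my own sketch, not code from the cluster; the path is taken from the log above) could ask the namenode which datanodes it thinks hold the block right after the close:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ShowReplicas {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path log = new Path(
        "/hbase/.logs/sv4borg32,60020,1253751520085/hlog.dat.1253751663228");
    FileStatus st = fs.getFileStatus(log);
    // One BlockLocation per block; getHosts() lists the datanodes that the
    // namenode currently knows hold a replica.
    for (BlockLocation loc : fs.getFileBlockLocations(st, 0, st.getLen())) {
      System.out.println(java.util.Arrays.toString(loc.getHosts()));
    }
  }
}

If the close really finished as logged, that should list all three datanodes (10.10.21.38, 10.10.21.45, 10.10.21.32).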

Then, during logfile recovery, we see:

2009-09-23 17:21:45,997 DEBUG org.apache.hadoop.hdfs.StateChange: *DIR* NameNode.append: file /hbase/.logs/sv4borg32,60020,1253751520085/hlog.dat.1253751663228 for DFSClient_-828773542 at 10.10.21.29
2009-09-23 17:21:45,997 DEBUG org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.startFile: src=/hbase/.logs/sv4borg32,60020,1253751520085/hlog.dat.1253751663228, holder=DFSClient_-828773542, clientMachine=10.10.21.29, replication=512, overwrite=false, append=true
2009-09-23 17:21:45,997 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 567 Total time for transactions(ms): 9 Number of transactions batched in Syncs: 54 Number of syncs: 374 SyncTimes(ms): 12023 4148 3690 7663
2009-09-23 17:21:45,997 DEBUG org.apache.hadoop.hdfs.StateChange: UnderReplicationBlocks.update blk_8594965619504827451_4351 curReplicas 0 curExpectedReplicas 3 oldReplicas 0 oldExpectedReplicas 3 curPri 2 oldPri 2
2009-09-23 17:21:45,997 DEBUG org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.UnderReplicationBlock.update: blk_8594965619504827451_4351 has only 0 replicas and need 3 replicas so is added to neededReplications at priority level 2
2009-09-23 17:21:45,997 DEBUG org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.appendFile: file /hbase/.logs/sv4borg32,60020,1253751520085/hlog.dat.1253751663228 for DFSClient_-828773542 at 10.10.21.29 block blk_8594965619504827451_4351 block size 573866
2009-09-23 17:21:45,997 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop,hadoop ip=/10.10.21.29 cmd=append src=/hbase/.logs/sv4borg32,60020,1253751520085/hlog.dat.1253751663228 dst=null perm=null
2009-09-23 17:21:47,265 DEBUG org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.UnderReplicationBlock.remove: Removing block blk_8594965619504827451_4351 from priority queue 2
2009-09-23 17:21:56,016 DEBUG org.apache.hadoop.hdfs.StateChange: *BLOCK* NameNode.addBlock: file /hbase/.logs/sv4borg32,60020,1253751520085/hlog.dat.1253751663228 for DFSClient_-828773542
2009-09-23 17:21:56,016 DEBUG org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.getAdditionalBlock: file /hbase/.logs/sv4borg32,60020,1253751520085/hlog.dat.1253751663228 for DFSClient_-828773542
2009-09-23 17:21:56,016 DEBUG org.apache.hadoop.hdfs.StateChange: DIR* FSDirectory.addFile: /hbase/.logs/sv4borg32,60020,1253751520085/hlog.dat.1253751663228 with blk_-7773763727920913762_4364 block is added to the in-memory file system
2009-09-23 17:21:56,016 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.allocateBlock: /hbase/.logs/sv4borg32,60020,1253751520085/hlog.dat.1253751663228. blk_-7773763727920913762_4364

At this point the client is stuck waiting on a block recovery that won't ever actually happen.

I set this config at your suggestion:
<property>
  <name>dfs.replication.min</name>
  <value>2</value>
</property>

in hbase-site.xml, so it should be picked up by the HBase DFS client configuration.
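
To confirm the setting is actually visible to the client, a minimal sketch (assuming hbase-site.xml is on the client's classpath):

import org.apache.hadoop.conf.Configuration;

public class CheckMinReplication {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.addResource("hbase-site.xml");
    // Prints 2 if the resource was found and applied; falls back to 1
    // (the HDFS default) otherwise.
    System.out.println("dfs.replication.min = "
        + conf.getInt("dfs.replication.min", 1));
  }
}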

From my reading of the logfile, the namenode seems to "forget" the other 2 locations of the block in question. Am I reading it wrong?

> In HDFS, sync() not yet guarantees data available to the new readers
> --------------------------------------------------------------------
>
>                 Key: HDFS-200
>                 URL: https://issues.apache.org/jira/browse/HDFS-200
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>            Reporter: Tsz Wo (Nicholas), SZE
>            Assignee: dhruba borthakur
>            Priority: Blocker
>         Attachments: 4379_20081010TC3.java, fsyncConcurrentReaders.txt, 
> fsyncConcurrentReaders11_20.txt, fsyncConcurrentReaders12_20.txt, 
> fsyncConcurrentReaders13_20.txt, fsyncConcurrentReaders14_20.txt, 
> fsyncConcurrentReaders3.patch, fsyncConcurrentReaders4.patch, 
> fsyncConcurrentReaders5.txt, fsyncConcurrentReaders6.patch, 
> fsyncConcurrentReaders9.patch, 
> hadoop-stack-namenode-aa0-000-12.u.powerset.com.log.gz, 
> hdfs-200-ryan-existing-file-fail.txt, hypertable-namenode.log.gz, 
> namenode.log, namenode.log, Reader.java, Reader.java, reopen_test.sh, 
> ReopenProblem.java, Writer.java, Writer.java
>
>
> In the append design doc 
> (https://issues.apache.org/jira/secure/attachment/12370562/Appends.doc), it 
> says
> * A reader is guaranteed to be able to read data that was 'flushed' before 
> the reader opened the file
> However, this feature is not yet implemented.  Note that the operation 
> 'flushed' is now called "sync".
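
For the record, the guarantee under discussion is the one exercised by this writer/reader sequence (an illustrative, era-appropriate sketch; the test path is hypothetical):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SyncVisibility {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path p = new Path("/tmp/sync-visibility-test");  // hypothetical path

    FSDataOutputStream out = fs.create(p);
    out.writeBytes("flushed before the reader opened the file\n");
    out.sync();  // the "flush" of the design doc (later renamed hflush())

    // Per the design doc, a reader opening the file *now* should see the
    // sync()'d bytes; HDFS-200 tracks the fact that it doesn't yet.
    FSDataInputStream in = fs.open(p);
    byte[] buf = new byte[64];
    int n = in.read(buf);
    System.out.println(n + " bytes visible to the new reader");
    in.close();
    out.close();
  }
}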

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
