[
https://issues.apache.org/jira/browse/HADOOP-4379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12678469#action_12678469
]
dhruba borthakur commented on HADOOP-4379:
------------------------------------------
Hi Jim,
In the namenode log, I see the following statement:
NameSystem.startFile: failed to create file
/hbase/log_208.76.44.139_1235681989771_8020/hlog.dat.1235682444379 for
DFSClient_1908447348 on client 208.76.44.139 because current leaseholder is
trying to recreate file.
This means that the same client that originally created the file is trying to
re-open it. Is this possible? It started at 21:14 and continued all the way to
22:07. An attempt was made to recreate this file every 10 seconds (which
matches the periodicity that you set).
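To make the failure mode concrete, here is a toy model of the lease check behind that log line. This is NOT Hadoop's actual NameNode code; the class, the path, and the client name are illustrative only. It just shows why a create() retried by the same client is rejected while the original lease on the path is still held:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the NameNode's per-path lease check -- not HDFS source.
public class LeaseCheckSketch {
    // path -> client currently holding the write lease (hypothetical structure)
    private final Map<String, String> leases = new HashMap<>();

    /** Returns an error string on failure, or null if the create succeeds. */
    public String startFile(String path, String client) {
        String holder = leases.get(path);
        if (holder != null && holder.equals(client)) {
            // Mirrors "because current leaseholder is trying to recreate file"
            return "failed to create file " + path + " for " + client
                 + " because current leaseholder is trying to recreate file";
        }
        leases.put(path, client);
        return null;
    }

    public static void main(String[] args) {
        LeaseCheckSketch ns = new LeaseCheckSketch();
        String path = "/hbase/log/hlog.dat.1";     // hypothetical path
        String client = "DFSClient_1908447348";
        System.out.println(ns.startFile(path, client) == null); // first create succeeds
        System.out.println(ns.startFile(path, client) != null); // retry by same client rejected
    }
}
```

In this toy model, every retry of the create keeps hitting the same rejection until the lease is released, which would match the repeating log entries every 10 seconds.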
> In HDFS, sync() not yet guarantees data available to the new readers
> --------------------------------------------------------------------
>
> Key: HADOOP-4379
> URL: https://issues.apache.org/jira/browse/HADOOP-4379
> Project: Hadoop Core
> Issue Type: New Feature
> Components: dfs
> Reporter: Tsz Wo (Nicholas), SZE
> Assignee: dhruba borthakur
> Priority: Blocker
> Fix For: 0.19.2, 0.20.0
>
> Attachments: 4379_20081010TC3.java, fsyncConcurrentReaders.txt,
> fsyncConcurrentReaders3.patch, fsyncConcurrentReaders4.patch,
> hypertable-namenode.log.gz, namenode.log, Reader.java, Reader.java,
> reopen_test.sh, ReopenProblem.java, Writer.java, Writer.java
>
>
> In the append design doc
> (https://issues.apache.org/jira/secure/attachment/12370562/Appends.doc), it
> says
> * A reader is guaranteed to be able to read data that was 'flushed' before
> the reader opened the file
> However, this feature is not yet implemented. Note that the operation
> 'flushed' is now called "sync".
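The guarantee the design doc asks for can be sketched with a toy model. This is not HDFS code; the class and method names are invented for illustration. The point is only that a reader opening the file sees at least everything that had been synced at open time, even while the writer keeps appending:

```java
// Toy model of the sync() visibility guarantee -- not HDFS source.
public class SyncVisibilitySketch {
    private final StringBuilder data = new StringBuilder(); // all bytes written so far
    private int syncedLen = 0;                              // bytes made visible by sync()

    public void write(String s) { data.append(s); }

    public void sync() { syncedLen = data.length(); }       // expose everything written so far

    /** A new reader sees the synced prefix, never the unsynced tail. */
    public String openForRead() { return data.substring(0, syncedLen); }

    public static void main(String[] args) {
        SyncVisibilitySketch f = new SyncVisibilitySketch();
        f.write("edit-1;");
        f.sync();
        f.write("edit-2;");                  // written but not yet synced
        System.out.println(f.openForRead()); // prints "edit-1;" only
    }
}
```

The bug reported here is that HDFS did not yet provide this behavior: data written and synced before a reader opened the file was not guaranteed to be visible to that reader.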
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.