What do you mean by block?  An HDFS chunk?  Or a flushed write?

The answer depends a bit on which version of HDFS / Hadoop you are using.
 With the append branches (0.20-append and later), things behave much more
 like you would expect: once a flush call has returned, the flushed data is
 visible to new readers and should survive a client crash.  Without append
 support, it is difficult to say what a reader will see after a crash.
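
As a minimal sketch of what I mean by a flush: on 0.21+ the call is
FSDataOutputStream.hflush(); on the 0.20-append branch the equivalent is
sync().  This assumes a running cluster, and the path and buffer size are
made up for illustration:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HflushSketch {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // /tmp/fileA is a made-up path for illustration.
        FSDataOutputStream out = fs.create(new Path("/tmp/fileA"));
        byte[] buf = new byte[64 * 1024];
        for (int i = 0; i < 16; i++) {
          out.write(buf);
          // hflush() pushes buffered bytes through the datanode pipeline
          // so a new reader can see them; it does not guarantee they have
          // reached disk.  On 0.20-append, use out.sync() instead.
          out.hflush();
        }
        out.close();  // close() completes the file and finalizes its length
      }
    }

Data that has been hflush()'ed should survive the client dying; anything
still sitting only in the client's buffer at that point is lost.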

Also, there are very few guarantees about what happens if the namenode
crashes.  There are some provisions for recovery (the edit log, plus
checkpoints from the secondary namenode), but none of them offer
transactional guarantees.  This means there may be an unspecified window
before the writes that you have done are actually persisted in a
recoverable way.

On Sun, Mar 13, 2011 at 9:52 AM, Sean Bigdatafun
<sean.bigdata...@gmail.com>wrote:

> Let's say an HDFS client starts writing a file A (which is 10 blocks
> long) and 5 blocks have been written to datanodes.
>
> At this time, if the HDFS client crashes (apparently without a close
> op), will we see 5 valid blocks for file A?
>
> Similarly, at this time if the HDFS cluster crashes, will we see 5
> valid blocks for file A?
>
> (I guess both answers are yes, but I'd like some confirmation :-)
> --
> --Sean
>
