I meant an HDFS chunk (64 MB in size), and I am using version 0.20.2
without the append patch.

I think that even without the append patch, the previously completed 64 MB
blocks (in my example, the first 5 blocks) should be safe. Isn't that right?
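
For reference, here is a minimal sketch of the write pattern I have in
mind, using the plain FileSystem API; the path /tmp/fileA, the 1 MB write
chunk, the replication factor, and the class name are just illustrative:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataOutputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class PartialWriteExample {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      FileSystem fs = FileSystem.get(conf);

      long blockSize = 64L * 1024 * 1024;     // 64 MB HDFS block size
      byte[] buffer = new byte[1024 * 1024];  // write 1 MB at a time

      FSDataOutputStream out = fs.create(new Path("/tmp/fileA"),
          true,         // overwrite
          64 * 1024,    // io buffer size
          (short) 3,    // replication
          blockSize);

      // Intend to write 10 full blocks; imagine the client dies inside
      // this loop after about 5 blocks' worth, before close() is called.
      for (long written = 0; written < 10 * blockSize; written += buffer.length) {
        out.write(buffer);
      }

      out.close();  // the file length is only finalized on close()
    }
  }

My question is whether, if the process dies inside that loop after 5
blocks, those completed blocks are readable by other clients (or survive
a cluster restart).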


On 3/13/11, Ted Dunning <tdunn...@maprtech.com> wrote:
> What do you mean by block?  An HDFS chunk?  Or a flushed write?
>
> The answer depends a bit on which version of HDFS / Hadoop you are using.
>  With the append branches, things happen a lot more like what you expect.
>  Without that version, it is difficult to say what will happen.
>
> Also, there are very few guarantees about what happens if the namenode
> crashes.  There are some provisions for recovery, but none of them really
> have any sort of transactional guarantees.  This means that there may be
> some unspecified time before the writes that you have done are actually
> persisted in a recoverable way.
>
> On Sun, Mar 13, 2011 at 9:52 AM, Sean Bigdatafun
> <sean.bigdata...@gmail.com>wrote:
>
>> Let's say an HDFS client starts writing a file A (which is 10 blocks
>> long) and 5 blocks have been written to datanodes.
>>
>> At this time, if the HDFS client crashes (apparently without a close
>> op), will we see 5 valid blocks for file A?
>>
>> Similarly, at this time if the HDFS cluster crashes, will we see 5
>> valid blocks for file A?
>>
>> (I guess both answers are yes, but I'd like some confirmation :-)
>> --
>> --Sean
>>
>


-- 
--Sean
