Hi All,

I have just started learning the fundamentals of HDFS and its internal
mechanisms. The concepts are very impressive and look simple, but they
still confuse me. My question is: *who is responsible for handling a DFS
write failure in the pipeline (assume the replication factor is 3 and the
2nd DataNode in the pipeline fails)?* If a DataNode fails during a pipeline
write, does the entire pipeline stop, or is a new DataNode added to the
existing pipeline? How does this mechanism work? I would really appreciate
it if someone with good knowledge of HDFS could explain it to me.

Note: I have read a bunch of documents, but none of them seems to explain
what I am looking for.

Thanks,
Srinivas
