Hi Colin,
Thank you very much for your reply.
I am researching the Hadoop 2.0 code, and I found the following code in
the FSEditLogLoader.loadEditRecords method:
    if (op.hasTransactionId()) {
      if (op.getTransactionId() > expectedTxId) {
        MetaRecoveryContext.editLogLoaderPrompt("T
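As I understand it, that check is what lets replay skip transactions that are already in the namespace and detect gaps. Here is a minimal sketch of the idea; EditOp, replay, and the exception are hypothetical stand-ins, not the real Hadoop 2.0 classes:

```java
import java.util.List;

// Sketch of txid-based replay filtering. EditOp and replay() are
// hypothetical; the real logic lives in FSEditLogLoader.loadEditRecords.
public class ReplaySketch {
    static class EditOp {
        final long txid;
        EditOp(long txid) { this.txid = txid; }
    }

    // Returns the number of ops actually applied, skipping any op whose
    // txid is at or below lastAppliedTxId (already in the namespace).
    static long replay(List<EditOp> ops, long lastAppliedTxId) {
        long applied = 0;
        long expectedTxId = lastAppliedTxId + 1;
        for (EditOp op : ops) {
            if (op.txid < expectedTxId) {
                continue; // already applied; safe to skip on re-load
            }
            if (op.txid > expectedTxId) {
                // A gap in the edit log: this is roughly where
                // MetaRecoveryContext.editLogLoaderPrompt would ask the
                // operator what to do.
                throw new IllegalStateException("gap at txid " + op.txid);
            }
            applied++; // apply the op to the namespace here
            expectedTxId++;
        }
        return applied;
    }
}
```

With lastAppliedTxId = 1, replaying txids {1, 2} applies only txid 2, which is why re-reading an already-loaded segment is tolerated.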
Hi,
If you want to learn more about HA in HDFS, here are some slides from
a talk that Aaron T. Meyers and Suresh Srinivas gave:
http://www.slideshare.net/hortonworks/nn-ha-hadoop-worldfinal-10173419
branch-2 and later contain HDFS HA.
cheers,
Colin
On Sun, Nov 4, 2012 at 1:06 AM, lei liu wrote:
> I researched the FSEditLog.loadFSEdits method and found that the OP_ADD
> and OP_CLOSE operations first delete the inode (calling
> FSDirectory.unprotectedDelete), then re-add it (calling
> FSDirectory.unprotectedAddFile). So if the NN reads one OP_ADD
> transaction log and adds one inode to the namespace for the operation, w
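That delete-then-re-add pattern is what makes replaying the same OP_ADD a second time harmless. A toy sketch of the effect (the Namespace class here is a hypothetical stand-in for FSDirectory, not the real code):

```java
import java.util.HashMap;
import java.util.Map;

// Toy namespace: path -> inode data. Hypothetical stand-in for
// FSDirectory; the two calls mimic unprotectedDelete/unprotectedAddFile.
public class AddReplaySketch {
    final Map<String, String> inodes = new HashMap<>();

    // Delete-then-re-add: replaying the same OP_ADD twice still leaves
    // exactly one inode for the path, with the same data.
    void applyOpAdd(String path, String inodeData) {
        inodes.remove(path);         // like FSDirectory.unprotectedDelete
        inodes.put(path, inodeData); // like FSDirectory.unprotectedAddFile
    }
}
```

Applying the same OP_ADD twice ends with one inode, so the operation is idempotent even without a txid check.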
I am using Hadoop 0.20.2, and I want to use the HDFS HA function, so I am
researching AvatarNode. I found that if the standby NN's checkpoint fails,
the same edits file is loaded again the next time the standby NN
checkpoints. Can the same edits file be loaded more than once in Hadoop
0.20.2? If not, what is the harm?