ZanderXu commented on PR #4628:
URL: https://github.com/apache/hadoop/pull/4628#issuecomment-1213594732

   Thanks @xkrogen for your detailed explanation. 
   I had thought through this corner case while coding this patch. Please correct me 
if I'm wrong: I believe the scenario above already existed before HDFS-12943, which 
is why I fixed this bug the way I did. And we should find a good way to fix the 
data-loss issue you mentioned above.
   
   > 1. Currently NN0 is active and JN0-2 all have txn 2 committed.
   > 2. NN0 attempts to write txn 3. It only succeeds to JN0, and crashes before writing to JN1/JN2.
   > 3. We fail over to NN1, which currently has txns up to 1.
   > 4. NN1 attempts to load most recent state from JNs.
   >    4a. Before HDFS-12943, NN1 uses `getEditLogManifest()`, it will load and apply txn 2 AND 3.
   
   Because NN0 closes the current segment while stopping its active services, the 
last finalized segment on JN0 contains txn 3. So when NN1 starts its active 
services, it can load and apply txn 3 through `getEditLogManifest()`.
   
   And the current logic in `startActiveServices` is confusing:
   1. It uses `onlyDurableTxns=true` to catch up on all edits from the JNs.
   2. It uses `onlyDurableTxns=false` in `openForWrite` to check whether any newer 
txids are readable.
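
   As a hedged sketch of the distinction above (a toy model, not Hadoop's actual 
`QuorumJournalManager` code; the class and method names here are illustrative 
only), the two `onlyDurableTxns` modes differ in which transactions they consider 
readable when the JNs disagree:

   ```java
   import java.util.Arrays;

   // Toy model of the failover scenario in this thread: three JournalNodes,
   // txn 3 written only to JN0 before NN0 crashes. Class/method names are
   // hypothetical, chosen only to illustrate the two onlyDurableTxns modes.
   public class DurableTxnDemo {

       // onlyDurableTxns=true: only txns acked by a quorum (2 of 3) count.
       // With 3 JNs, the median of the per-JN highest txns is the highest
       // txn that a majority of JNs actually store.
       static long highestDurableTxn(long[] jnLastTxns) {
           long[] sorted = jnLastTxns.clone();
           Arrays.sort(sorted);
           return sorted[sorted.length / 2];
       }

       // onlyDurableTxns=false (manifest-based): any txn present on any one
       // JN is considered readable, even if no quorum has it.
       static long highestReadableTxn(long[] jnLastTxns) {
           return Arrays.stream(jnLastTxns).max().getAsLong();
       }

       public static void main(String[] args) {
           // Highest txn stored on JN0, JN1, JN2 after NN0 crashes mid-write.
           long[] jns = {3, 2, 2};
           System.out.println("durable:  " + highestDurableTxn(jns));   // 2
           System.out.println("readable: " + highestReadableTxn(jns));  // 3
       }
   }
   ```

   The gap between the two answers (durable=2 vs readable=3) is exactly the txn 
that a failed-over NameNode may or may not see, depending on which flag is used.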
   
   There is indeed a possibility of data loss if JN0's disk becomes corrupted 
before JN1 and JN2 have had time to synchronize the segment. But maybe we need to 
add new logic to detect this case and have the JNs synchronously sync the missing 
txids, for example in the `startActive()` method.
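
   The check proposed above could be sketched like this (again purely 
illustrative; `needsJournalSync` and its callers are hypothetical names, not 
part of Hadoop's API): on becoming active, if some JN holds txns beyond the 
durable watermark, force the JNs to replicate the gap to a quorum before 
accepting new edits.

   ```java
   // Hypothetical sketch of a pre-activation safety check. If one JN holds
   // txns that no quorum has yet, the gap must be synced before going active,
   // otherwise losing that JN's disk loses those txns permanently.
   public class StartActiveSyncSketch {

       static boolean needsJournalSync(long durableTxn, long maxReadableTxn) {
           // A gap means at least one JN has txns not replicated to a quorum.
           return maxReadableTxn > durableTxn;
       }

       public static void main(String[] args) {
           // Scenario from the thread: quorum has up to txn 2, JN0 alone has txn 3.
           long durable = 2, readable = 3;
           if (needsJournalSync(durable, readable)) {
               System.out.println("sync txns " + (durable + 1) + ".." + readable
                       + " to a quorum before going active");
           }
       }
   }
   ```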


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
