I was wondering if someone could give me some answers, or maybe some pointers
to where to look in the code.  All of these questions relate to hard drive
failure.

Question 1: If a master's disks (system and data) are lost for good, can the
data on all the slave nodes be recovered? That is, are the data blocks
serialized in a form that can be rebuilt without the master's metadata?
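
To make concrete what I mean by "serialized and rebuildable": from poking
around one of my own slaves, the blocks seem to be stored as plain blk_<id>
files under the data dir, so in principle the raw bytes survive a dead
master. A minimal sketch of what I've been doing to look at them (the
/hadoop/dfs/data/current path is an assumption about my own layout, not
anything Hadoop guarantees):

    import java.io.File;

    // List the raw block files on one slave's data dir to see what
    // would survive a dead master. The default path below is an
    // assumption about my own node layout.
    public class ListBlocks {
        public static void main(String[] args) {
            File dataDir = new File(args.length > 0 ? args[0]
                                    : "/hadoop/dfs/data/current");
            File[] files = dataDir.listFiles();
            if (files == null) {
                System.err.println("not a directory: " + dataDir);
                return;
            }
            for (File f : files) {
                // Block data files are named blk_<id>; the matching
                // .meta files appear to hold checksums.
                if (f.getName().startsWith("blk_")
                        && !f.getName().endsWith(".meta")) {
                    System.out.println(f.getName() + "  "
                            + f.length() + " bytes");
                }
            }
        }
    }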

Question 2: If two replicas of a data block have different hashes, how does
Hadoop decide which one is correct during replication?
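
For context on what I'm picturing here: my mental model is a per-chunk
checksum comparison, something like a CRC32 over fixed-size chunks of each
block file. This is only a sketch of what I imagine, not necessarily what
Hadoop actually does; the 512-byte chunk size is my assumption:

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.zip.CRC32;

    // Sketch of the kind of per-chunk checksumming I imagine HDFS
    // using to decide a replica is bad. The 512-byte chunk size is
    // an assumption on my part.
    public class ChunkChecksums {
        public static void main(String[] args) throws IOException {
            try (FileInputStream in = new FileInputStream(args[0])) {
                byte[] chunk = new byte[512];
                int n, i = 0;
                while ((n = in.read(chunk)) > 0) {
                    CRC32 crc = new CRC32();
                    crc.update(chunk, 0, n);
                    System.out.printf("chunk %d: crc32=%08x%n",
                            i++, crc.getValue());
                }
            }
        }
    }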

Question 3: How does Hadoop handle bad sectors on a disk? For comparison, a
RAID controller will reject the whole disk when it sees them.

Question 4: If I were to unplug a hot-swap drive and then reconnect it a few
days later, how does Hadoop handle this?  I am assuming that Hadoop would see
the missing/out-of-sync data blocks and re-replicate or re-balance them?
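
To clarify what kind of re-balancing I mean, here is a toy model of the
bookkeeping I picture the master doing when a drive's blocks disappear. All
the names and numbers are mine, not Hadoop's:

    import java.util.HashMap;
    import java.util.Map;

    // Toy model of the re-replication decision I imagine the master
    // making when a drive (and its blocks) vanishes. All names and
    // numbers here are mine, not Hadoop's.
    public class ReReplicationSketch {
        public static void main(String[] args) {
            int targetReplication = 3;
            // blockId -> live replica count after the drive vanished
            Map<Long, Integer> liveReplicas = new HashMap<>();
            liveReplicas.put(1001L, 3);
            liveReplicas.put(1002L, 2); // one replica on the pulled drive
            liveReplicas.put(1003L, 1);

            for (Map.Entry<Long, Integer> e : liveReplicas.entrySet()) {
                int missing = targetReplication - e.getValue();
                if (missing > 0) {
                    System.out.println("block " + e.getKey()
                            + ": schedule " + missing + " new replica(s)");
                }
            }
        }
    }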

Question 5: Can Hadoop tell me when a hard drive (a data dir path) is going
bad? If not, any papers or docs on how to deal with drive failure would be
great.
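
In case it helps explain what I'm after: absent built-in reporting, I could
imagine probing each data dir path with a small write-and-sync check and
treating an IOException or a big slowdown as a warning sign. A rough sketch;
the threshold and file name are made up:

    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.IOException;

    // Crude probe of a data dir path: write and sync a small file
    // and time it. An IOException or a very slow sync might hint at
    // a failing drive. The 500 ms threshold is made up.
    public class DiskProbe {
        public static void main(String[] args) {
            for (String path : args) {
                File probe = new File(path, ".disk_probe");
                long start = System.nanoTime();
                try (FileOutputStream out = new FileOutputStream(probe)) {
                    out.write(new byte[4096]);
                    out.getFD().sync(); // force it to the platters
                } catch (IOException e) {
                    System.out.println(path + ": FAILED ("
                            + e.getMessage() + ")");
                    continue;
                } finally {
                    probe.delete();
                }
                long ms = (System.nanoTime() - start) / 1_000_000;
                System.out.println(path + ": ok, sync took " + ms + " ms"
                        + (ms > 500 ? "  <-- suspiciously slow" : ""));
            }
        }
    }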

Thank you in advance.


-Ryan
