Hi guys,


Does anyone know how to 'capture' the exception that actually failed a job's Mapper or Reducer at runtime? Hadoop seems designed for fault tolerance: failed tasks are automatically retried a certain number of times, and the real problem is never exposed unless you dig into the error logs. In my use case, I would like to capture the exception and respond differently based on its type.
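For example, one workaround I'm considering is to wrap the map()/reduce() body in a try/catch, tally the exception type somewhere the driver can see it (e.g. Hadoop counters grouped by exception class name), and branch on those counts after job.waitForCompletion(). Below is a minimal sketch of just the dispatch idea in plain Java; the record-processing logic is made up for illustration, and a plain HashMap stands in for Hadoop's Counter mechanism:

```java
import java.util.HashMap;
import java.util.Map;

public class ExceptionTally {
    // Stand-in for Hadoop counters: exception class name -> occurrence count.
    static final Map<String, Long> counters = new HashMap<>();

    // Simulates the body of map(): catch the failure, tally it by type,
    // and swallow it (or rethrow, if the record should still fail the task).
    static void processRecord(String record) {
        try {
            if (record.isEmpty()) {
                throw new IllegalArgumentException("empty record");
            }
            Integer.parseInt(record); // may throw NumberFormatException
        } catch (RuntimeException e) {
            counters.merge(e.getClass().getSimpleName(), 1L, Long::sum);
        }
    }

    public static void main(String[] args) {
        processRecord("42");   // parses fine, no exception
        processRecord("");     // tallied as IllegalArgumentException
        processRecord("oops"); // tallied as NumberFormatException
        // "Driver side": inspect which exception types occurred and react.
        for (Map.Entry<String, Long> e : counters.entrySet()) {
            System.out.println(e.getKey() + "=" + e.getValue());
        }
    }
}
```

In a real job the catch block would call something like context.getCounter("Exceptions", e.getClass().getSimpleName()).increment(1), and the driver would read the counters from the completed Job object instead of a static map.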

Thanks in advance.



Regards,

Ken
