Hi, I found that SkippingRecordReader is no longer supported in the new API,
and I am curious about the reason; can anyone tell me?
Besides, when I look into the old API and try to figure out what skip mode
was doing, I am a little confused by the logic there.
In my understanding, if the Java API
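The question above is truncated, but the core of the old API's skip-mode behaviour can be illustrated: once the framework has diagnosed a range of bad record indices, the reader skips any record falling inside such a range. The following is a self-contained, illustrative sketch, not Hadoop's actual SkippingRecordReader; the class and method names are invented for the example.

```java
import java.util.*;

/**
 * Illustrative sketch (not Hadoop's real class): a reader wrapper that
 * skips records whose index falls inside known bad ranges, similar in
 * spirit to the old API's SkippingRecordReader.
 */
class SkippingReader {
    // Each long[]{start, length} marks a range of record indices to skip.
    private final List<long[]> badRanges;
    private final Iterator<String> underlying;
    private long index = 0;

    SkippingReader(Iterator<String> underlying, List<long[]> badRanges) {
        this.underlying = underlying;
        this.badRanges = badRanges;
    }

    private boolean isBad(long i) {
        for (long[] r : badRanges)
            if (i >= r[0] && i < r[0] + r[1]) return true;
        return false;
    }

    /** Returns the next non-skipped record, or null when exhausted. */
    String next() {
        while (underlying.hasNext()) {
            String rec = underlying.next();
            long i = index++;
            if (!isBad(i)) return rec;
        }
        return null;
    }

    public static void main(String[] args) {
        List<String> records = Arrays.asList("a", "b", "c", "d", "e");
        // Pretend record indices 1..2 ("b", "c") were diagnosed as bad.
        SkippingReader r = new SkippingReader(records.iterator(),
                Collections.singletonList(new long[]{1, 2}));
        StringBuilder out = new StringBuilder();
        for (String s; (s = r.next()) != null; ) out.append(s);
        System.out.println(out); // prints "ade"
    }
}
```

In the real framework the bad ranges are narrowed down across task re-attempts rather than known up front; this sketch only shows the skipping step itself.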
Albert,
Thanks for the link. This is indeed what I am talking about.
The authors have taken the idea even further, avoiding disk writes on either
the mapper or the reducer side. It's not clear to me that this scales well to
1000s of nodes, however, as the downside of not landing data on disk on the
Sandeep/Mayank,
If you take a look at the volume-selection parts of the code, you will
notice it is simply round-robin. There is no way we would continuously select
the same disk, unless a disk is deselected for errors (tolerated) or for
space (due to lack of reservation). It's better to monitor for a
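The round-robin selection described above can be sketched as follows. This is an illustrative, self-contained simulation, not the actual HDFS volume-selection code; all class and field names are invented for the example.

```java
import java.util.*;

/**
 * Illustrative sketch (not HDFS's real code): round-robin volume
 * selection that passes over volumes marked failed or lacking free
 * space, as described in the reply above.
 */
class RoundRobinVolumeChooser {
    static class Volume {
        final String name;
        long freeBytes;
        boolean failed;
        Volume(String name, long freeBytes) {
            this.name = name;
            this.freeBytes = freeBytes;
        }
    }

    private final List<Volume> volumes;
    private int cursor = 0;

    RoundRobinVolumeChooser(List<Volume> volumes) { this.volumes = volumes; }

    /** Picks the next eligible volume in round-robin order, or null if none fits. */
    Volume choose(long blockSize) {
        for (int i = 0; i < volumes.size(); i++) {
            Volume v = volumes.get(cursor);
            cursor = (cursor + 1) % volumes.size();
            if (!v.failed && v.freeBytes >= blockSize) return v;
        }
        return null; // every volume is failed or full
    }

    public static void main(String[] args) {
        List<Volume> vols = Arrays.asList(
                new Volume("disk0", 100),
                new Volume("disk1", 10),   // too little free space
                new Volume("disk2", 100));
        RoundRobinVolumeChooser c = new RoundRobinVolumeChooser(vols);
        // disk1 lacks space for a 50-byte block, so it is skipped each round.
        System.out.println(c.choose(50).name); // prints "disk0"
        System.out.println(c.choose(50).name); // prints "disk2"
        System.out.println(c.choose(50).name); // prints "disk0"
    }
}
```

Note how the cursor still advances past ineligible volumes, so selection stays evenly spread over the remaining healthy disks.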
From the log, there is no room on HDFS.
--Sent from my Sony mobile.
On Jun 16, 2013 5:12 AM, sumit piparsania sumitpiparsa...@yahoo.com
wrote:
Hi,
I am getting the below error while executing the command. Kindly assist me
in resolving this issue.
$ bin/hadoop fs -put conf input