Thank you Naga & Sunil.
Naga, I would like to know more about the counters. Are they a cluster-wide
resource managed at a central location, so that they can be tracked/verified
later?
Please advise.
Thanks,
Rajila
On Tue, Jul 25, 2017 at 7:01 PM, Naganarasimha Garla <
Hi Rajila,
One option you can consider is using custom "counters" and
adding logic to increment them whenever you insert a record or run any custom
logic. These counters can be retrieved from the MR interfaces, and they remain
visible in the web UI even after the job has finished.
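To make that concrete, here is a minimal sketch of a custom counter in a standard MapReduce job. The group/counter names ("INSERTS", "RECORDS_INSERTED"), the class names, and the mapper's "insert" logic are all hypothetical; only the Counter API calls follow the Hadoop client library:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Counter;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;

public class InsertCountExample {

    // Hypothetical mapper that bumps a custom counter for every record it "inserts".
    public static class InsertMapper
            extends Mapper<LongWritable, Text, Text, LongWritable> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // ... your insert logic here ...
            // Counters are identified by a (group, name) pair.
            context.getCounter("INSERTS", "RECORDS_INSERTED").increment(1);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "insert-count");
        job.setJarByClass(InsertCountExample.class);
        job.setMapperClass(InsertMapper.class);
        // ... input/output paths, reducer, etc. ...

        if (job.waitForCompletion(true)) {
            // The MR framework aggregates counters across all tasks, so the
            // client can read the cluster-wide total after the job finishes.
            Counter inserted =
                job.getCounters().findCounter("INSERTS", "RECORDS_INSERTED");
            System.out.println("records inserted = " + inserted.getValue());
        }
    }
}
```

The same aggregated values are retained by the JobHistory server, which is why they stay visible in the web UI (and via `mapred job -counter <job-id> INSERTS RECORDS_INSERTED`) after the job completes.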
Regards,
+ Naga
On Tue, Jul
Hi Cinyoung,
Concat has some restrictions, such as the source file's last block having to be
the same size as the configured dfs.block.size. If all the conditions are met,
the example command below should work (here we are concatenating /user/root/file-2
into /user/root/file-1; replace <HOST>:<PORT> with your NameNode's HTTP address):
curl -i -X POST "http://<HOST>:<PORT>/webhdfs/v1/user/root/file-1?op=CONCAT&sources=/user/root/file-2"
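If WebHDFS is not a hard requirement, the same operation is also exposed through the Java client as FileSystem#concat. Here is a minimal sketch using the same paths as above; the NameNode address is a placeholder, and the block-size restrictions mentioned earlier apply here as well:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ConcatExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder NameNode address; adjust for your cluster.
        conf.set("fs.defaultFS", "hdfs://<namenode-host>:8020");

        FileSystem fs = FileSystem.get(conf);
        // Appends the blocks of file-2 onto file-1; the source file
        // disappears from the namespace once the concat succeeds.
        fs.concat(new Path("/user/root/file-1"),
                  new Path[] { new Path("/user/root/file-2") });
        fs.close();
    }
}
```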
Hi Rajila,
From the YARN side, you will be able to get detailed information about the
application, and that application could be MapReduce or anything else. But
inside that MapReduce app, what kind of operation is done is specific to
that application (here it is MapReduce).
YARN could only be able to
https://hadoop.apache.org/docs/r2.8.0/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Concat_Files
I tried to concat multiple parts into a single target file through WebHDFS,
but I couldn't get it to work.
Could you give me an example of concatenating parts?