Counters are per-job in Hadoop MapReduce. You need an external aggregator for
such cross-job counters, e.g. a node in ZooKeeper.
Also, is this just for display, or does your job logic depend on it? If it is
the former, and if you don't mind waiting until the jobs finish, you can do
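The wait-then-aggregate approach mentioned above can be sketched as follows. On a real cluster each value would come from `job.getCounters().findCounter(group, name).getValue()` on a completed `Job`; here the per-job values are hard-coded stand-ins so the sketch is self-contained and runnable without Hadoop.

```java
import java.util.Arrays;
import java.util.List;

public class CrossJobCounterSum {
    // Stand-in for job.getCounters().findCounter(group, name).getValue(),
    // which you would call on each completed Job object after waitForCompletion().
    static long counterValueOf(long fakeJobCounter) {
        return fakeJobCounter;
    }

    public static void main(String[] args) {
        // Pretend three MapReduce jobs finished with these counter values.
        List<Long> perJobCounters = Arrays.asList(120L, 45L, 35L);

        long total = 0;
        for (long c : perJobCounters) {
            total += counterValueOf(c); // aggregate across jobs, outside Hadoop
        }
        System.out.println("cross-job total = " + total);
    }
}
```

The aggregation happens entirely in the driver program, which is why it only works if you can afford to wait for every job to finish first.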
Hi Subbu,
You're probably looking for something called "distributed counters". Take a
look at this question on Stack Overflow:
http://stackoverflow.com/questions/2671858/distributed-sequence-number-generation
Best regards,
Robin Verlangen
Software engineer
W http://www.robinverlangen.nl
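The linked question largely comes down to one pattern: read the current value, attempt a conditional write, and retry on conflict. A minimal sketch of that pattern, with `AtomicLong.compareAndSet` standing in for ZooKeeper's versioned `setData` (which likewise fails when another client has written first):

```java
import java.util.concurrent.atomic.AtomicLong;

public class OptimisticCounter {
    // AtomicLong stands in for a ZooKeeper znode here; compareAndSet mimics
    // setData(path, data, expectedVersion), which fails on a stale version.
    private final AtomicLong znode = new AtomicLong(0);

    long incrementAndGet() {
        while (true) {
            long current = znode.get();               // read value (+ "version")
            long next = current + 1;
            if (znode.compareAndSet(current, next)) { // conditional write
                return next;                          // no concurrent writer won
            }
            // another client updated first; loop and retry with a fresh read
        }
    }

    public static void main(String[] args) throws InterruptedException {
        OptimisticCounter counter = new OptimisticCounter();
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) counter.incrementAndGet();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println("final = " + counter.znode.get());
    }
}
```

With ZooKeeper the retry loop is the same; only the store changes, which is what makes the counter visible across jobs and machines rather than within one JVM.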
Thank you Bejoy.
Cheers!
Manoj.
On Mon, Sep 10, 2012 at 1:36 PM, Bejoy Ks wrote:
> Hi Manoj
>
> From my limited knowledge of file appends in HDFS, I have seen more
> recommendations to use sync() in the latest releases than append().
> Let us wait for some committer to authoritatively comment on 'the production
> readiness of append()'. :)
Hi Manoj
From my limited knowledge of file appends in HDFS, I have seen more
recommendations to use sync() in the latest releases than append().
Let us wait for some committer to authoritatively comment on 'the production
readiness of append()'. :)
Regards
Bejoy KS
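For context, the sync() recommended above is `FSDataOutputStream.sync()` (renamed `hflush()` in later Hadoop APIs): it pushes buffered bytes down the datanode pipeline so new readers can see them without the writer closing the file. HDFS itself isn't available in a self-contained snippet, so the sketch below illustrates the same write, flush, then reader-visibility idea against a local file; the HDFS calls named in the comments are the real ones.

```java
import java.io.File;
import java.io.FileOutputStream;
import java.nio.file.Files;

public class FlushVisibilitySketch {
    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("hflush-sketch", ".log");
        f.deleteOnExit();

        // On HDFS this would be: FSDataOutputStream out = fs.create(path);
        FileOutputStream out = new FileOutputStream(f);
        out.write("record-1\n".getBytes("UTF-8"));

        // On HDFS: out.sync() (old API) / out.hflush() (newer API) makes the
        // bytes visible to readers while the file stays open for writing.
        out.flush();
        out.getFD().sync();

        // A concurrent reader can now see the flushed record.
        String visible = new String(Files.readAllBytes(f.toPath()), "UTF-8");
        System.out.println("reader sees: " + visible.trim());

        out.write("record-2\n".getBytes("UTF-8")); // writer keeps appending
        out.close();
    }
}
```

This is only an analogy for the durability/visibility contract; whether append() itself is production-ready in a given release is exactly the question left to a committer above.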
On Mon, Sep 10, 2012 a