Re: Counters across all jobs

2012-09-10 Thread Vinod Kumar Vavilapalli
Counters are per-job in Hadoop MapReduce. You need an external aggregator for such cross-job counters, e.g. a node in ZooKeeper. Also, is it just for display, or does your job logic depend on it? If it is the former, and if you don't mind waiting until the jobs finish, you can do
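The reply names a ZooKeeper node as one possible external aggregator. As a minimal sketch of that idea, a shared counter could be kept with Apache Curator's DistributedAtomicLong recipe; Curator itself, the connect string, and the counter path below are assumptions for illustration, not something specified in the thread.

    import org.apache.curator.framework.CuratorFramework;
    import org.apache.curator.framework.CuratorFrameworkFactory;
    import org.apache.curator.framework.recipes.atomic.AtomicValue;
    import org.apache.curator.framework.recipes.atomic.DistributedAtomicLong;
    import org.apache.curator.retry.ExponentialBackoffRetry;

    public class CrossJobCounter {
        public static void main(String[] args) throws Exception {
            // Connect to a hypothetical ZooKeeper ensemble.
            CuratorFramework client = CuratorFrameworkFactory.newClient(
                    "zk1:2181,zk2:2181,zk3:2181", new ExponentialBackoffRetry(1000, 3));
            client.start();

            // A single znode path acts as the shared counter visible to all jobs.
            DistributedAtomicLong counter = new DistributedAtomicLong(
                    client, "/counters/recordsProcessed", new ExponentialBackoffRetry(1000, 3));

            // Each job adds its own delta once it knows it (e.g. from its per-job counter).
            AtomicValue<Long> result = counter.add(42L);
            if (result.succeeded()) {
                System.out.println("Cross-job total so far: " + result.postValue());
            }

            client.close();
        }
    }

Updating the shared counter once per job (rather than per task) keeps the ZooKeeper write traffic negligible.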

Re: Counters across all jobs

2012-09-10 Thread Robin Verlangen
Hi Subbu, You're probably looking for something called "distributed counters". Take a look at this question on Stack Overflow: http://stackoverflow.com/questions/2671858/distributed-sequence-number-generation Best regards, Robin Verlangen *Software engineer* W http://www.robinverlangen.nl E
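The linked question is about distributed sequence number generation; one of the approaches commonly discussed there uses ZooKeeper's sequential znodes. The sketch below is only an illustration of that technique, with a hypothetical ensemble address and parent path.

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class SequenceSketch {
        public static void main(String[] args) throws Exception {
            // Connect with a no-op watcher; assumes the /seq parent znode already exists.
            ZooKeeper zk = new ZooKeeper("zk1:2181", 30000, new Watcher() {
                public void process(WatchedEvent event) { }
            });

            // Each PERSISTENT_SEQUENTIAL create returns a path with a monotonically
            // increasing 10-digit suffix, e.g. /seq/id-0000000042.
            String path = zk.create("/seq/id-", new byte[0],
                    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT_SEQUENTIAL);
            long id = Long.parseLong(path.substring(path.lastIndexOf('-') + 1));
            System.out.println("Assigned sequence number: " + id);

            zk.close();
        }
    }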

Re: Reg: parsing all files & file append

2012-09-10 Thread Manoj Babu
Thank you Bejoy. Cheers! Manoj. On Mon, Sep 10, 2012 at 1:36 PM, Bejoy Ks wrote: > Hi Manoj > > From my limited knowledge of file appends in HDFS, I have seen more > recommendations to use sync() in the latest releases than append(). > Let us wait for some committer to authoritatively c

Re: Reg: parsing all files & file append

2012-09-10 Thread Bejoy Ks
Hi Manoj, From my limited knowledge of file appends in HDFS, I have seen more recommendations to use sync() in the latest releases than append(). Let us wait for some committer to authoritatively comment on 'the production readiness of append()'. :) Regards Bejoy KS On Mon, Sep 10, 2012 a
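As a rough illustration of the sync()-over-append() suggestion, the sketch below keeps a single writer open and flushes durably after each record instead of reopening the file with append(). The path, the write loop, and the Hadoop 1.x-era sync() call are assumptions for illustration; later releases expose hflush()/hsync() in its place.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SyncSketch {
        public static void main(String[] args) throws IOException {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // Hypothetical output file; overwrite if it already exists.
            Path path = new Path("/tmp/events.log");
            FSDataOutputStream out = fs.create(path, true);

            for (int i = 0; i < 10; i++) {
                out.writeBytes("event-" + i + "\n");
                out.sync();   // make the bytes visible/durable without closing the stream
            }

            out.close();
            fs.close();
        }
    }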