Uhm...

You want to save the counters as counts per job run, or something like that? 
(Remember: HDFS == WORM, write once, read many.) 

Then you could write a SequenceFile and use something like HBase to manage 
the index. 
(Every time you add a set of counters, you get a new file and a new index.) 
Heck, you could use HBase for the whole thing, but that would be overkill if 
this is all you're doing. 


On Jul 23, 2013, at 4:57 PM, Elazar Leibovich <elaz...@gmail.com> wrote:

> Hi,
> 
> A common use case for which one wants an ordered structure is saving counters.
> 
> Naturally, I wanted to save the counters in a MapFile:
> 
>     for (long ix = 0; ix < MAXVALUE; ix++) {
>         mapfile.append(new Text("counter key of val " + ix),
>                        new LongWritable(ix));
>     }
> 
> This, however, looks a bit inefficient. We'll store two files: a data file 
> and an index file. The index file will contain an offset (a long) into the 
> data (sequence) file, which would itself contain only a single long per entry.
> 
> I'd rather have only the index file, which would store the counter values 
> instead of offsets.
> 
> Is there a way to do that with MapFile? Or perhaps there's a better way to 
> save searchable counters in an HDFS file?
> 
> Thanks,
