Any help on this?
Also: if I save a field as an array, how can I read it from a MapReduce
job? Is there a separator character to use for splitting, or something else?
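For the array question, a rough sketch of what I mean: as far as I understand, Phoenix exposes ARRAY columns through the standard JDBC java.sql.Array interface, so there should be no separator character to split on. A record class for Phoenix's MapReduce input would read it roughly like this (the class name, column names, and array element type are hypothetical):

```java
import java.sql.Array;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.apache.hadoop.mapreduce.lib.db.DBWritable;

// Hypothetical record class; Phoenix's input format populates it
// from the ResultSet of the configured query.
public class MyRecord implements DBWritable {
    private long id;
    private String[] tags;  // assumed VARCHAR ARRAY column "TAGS"

    @Override
    public void readFields(ResultSet rs) throws SQLException {
        id = rs.getLong("ID");
        // No string splitting: getArray() returns a java.sql.Array
        // whose getArray() yields the typed Java array.
        Array a = rs.getArray("TAGS");
        if (a != null) {
            tags = (String[]) a.getArray();
        }
    }

    @Override
    public void write(PreparedStatement stmt) throws SQLException {
        stmt.setLong(1, id);
        // Writing an array back would go through stmt.setArray(...)
        // with a connection-created java.sql.Array; omitted here.
    }
}
```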

On Tue, Sep 9, 2014 at 10:36 AM, Flavio Pompermaier <[email protected]>
wrote:

> Hi to all,
>
> I'd like to know which is the correct way to run a mapreduce job on a
> table managed by phoenix to put data in another table (always managed by
> Phoenix).
> Is it sufficient to read data contained in column 0 (like 0:id, 0:value)
> and create insert statements in the reducer to put things correctly in the
> output table?
> Should I filter rows containing some special value for column 0:_0..?
>
> Best,
> FP
>
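To make the question concrete, here is a minimal job-setup sketch of the kind of thing I have in mind, assuming Phoenix's MapReduce integration (PhoenixMapReduceUtil and PhoenixOutputFormat). Table and column names are hypothetical, and MyRecord stands for a DBWritable record class matching the query; error handling is omitted:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.phoenix.mapreduce.PhoenixOutputFormat;
import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil;

// Sketch: copy rows from one Phoenix table to another via MapReduce.
public class PhoenixCopyJob {

    // Identity mapper: Phoenix's input format delivers NullWritable keys
    // and the populated record as the value.
    public static class CopyMapper
            extends Mapper<NullWritable, MyRecord, NullWritable, MyRecord> {
        @Override
        protected void map(NullWritable key, MyRecord value, Context ctx)
                throws IOException, InterruptedException {
            ctx.write(key, value);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "phoenix-copy");
        job.setJarByClass(PhoenixCopyJob.class);

        // Reading through a Phoenix query, rather than scanning raw HBase
        // cells, means the empty key-value Phoenix adds (0:_0) never
        // appears and needs no filtering.
        PhoenixMapReduceUtil.setInput(job, MyRecord.class, "INPUT_TABLE",
                "SELECT ID, VALUE FROM INPUT_TABLE");

        // Writes to the target table become Phoenix UPSERTs.
        PhoenixMapReduceUtil.setOutput(job, "OUTPUT_TABLE", "ID,VALUE");
        job.setOutputFormatClass(PhoenixOutputFormat.class);

        job.setMapperClass(CopyMapper.class);
        job.setNumReduceTasks(0);  // map-only copy

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Is this roughly the intended pattern, or is plain JDBC in the reducer the recommended way?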
