What are you using to load the data? It sounds like your loader is reporting a
desired schema, but does not actually convert the data into the schema. So it
tells pig to expect ints, but gives it byte arrays.
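The mismatch described above is that the loader's tuple fields must already hold the declared types, so a loader claiming an int column has to convert the raw bytes itself. A minimal sketch of that conversion step (the class and method names here are hypothetical, not from the thread's actual loader):

```java
import java.nio.charset.StandardCharsets;

public class FieldConverter {
    // A loader that declares an int column must hand Pig an actual
    // Integer, not the raw bytes it read off disk.
    public static Integer bytesToInt(byte[] raw) {
        return Integer.parseInt(new String(raw, StandardCharsets.UTF_8).trim());
    }
}
```

Skipping this step leaves DataByteArrays in the tuple where Pig expects ints, which produces exactly the symptom described above.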
On Feb 6, 2012, at 10:48 AM, praveenesh kumar wrote:
> Hi everyone,
Thanks. Was hoping/assuming there was a built-in, but I guess udf it is.
Eli
On 2/9/12 2:14 PM, Yulia Tolskaya wrote:
I actually can't think of an easy way to do this without it becoming a
cross product. You could just write a really simple udf that takes a bag
and spits out just the members.
Yulia
On 2/9/12 1:26 PM, "Eli Finkelshteyn" wrote:
This is probably easy, but my PigLatin is rusty, and I don't seem to be
able to find an answer on Google. If I have a record of the form:
98812 3 {(48567859),(15996334),(15897772)}
How can I flatten that bag to leave all members on a single row, ie:
98812 3 48567859 15996334 15897772
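For what it's worth, the udf Yulia suggests really is just a loop over the bag that appends each member to the output row. Here is the core logic sketched with plain Java collections standing in for Pig's DataBag and Tuple (in a real EvalFunc&lt;Tuple&gt; you would iterate the DataBag and append t.get(0) to a tuple from TupleFactory); BagFlattener and flatten are made-up names:

```java
import java.util.ArrayList;
import java.util.List;

public class BagFlattener {
    // Collapse a bag of single-field tuples into one flat row,
    // keeping any leading scalar fields (98812, 3, ...) in front.
    public static List<Object> flatten(List<?> leading, List<? extends List<?>> bag) {
        List<Object> row = new ArrayList<>(leading);
        for (List<?> tuple : bag) {
            row.addAll(tuple);   // append each bag member to the row
        }
        return row;
    }
}
```

Applied to the record above, leading fields [98812, 3] and bag [(48567859), (15996334), (15897772)] come out as the single row [98812, 3, 48567859, 15996334, 15897772].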
The code I pasted is wrong, sorry for my mistake.
Here's the updated code:
@SuppressWarnings("rawtypes")
public class HEDataConverter extends EvalFunc<Map> {
    @Override
    public Map exec(Tuple input) throws IOException {
        byte[] mapValue = ((DataByteArray) input.get(0)).get();