Hi,

I passed 3,344,109,862 records to ORDER and got 3,339,587,570 in the
output with no noticeable errors.

There were three jobs.
First got 3,344,109,862 records (map input) and produced the same
number (map output).
Second got 248,820 (map input) and produced 1 (reduce output).
Third got 3,339,587,570 (map input) and produced the same number
(reduce output).
That's 4,522,292 records missing between the first job's output and the
third job's input, so I guess something went wrong in the second job.
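For reference, here's a quick sketch of the arithmetic on the reported
counters (the variable names are mine, just for illustration):

```python
# Record counts taken from the job counters reported above.
job1_map_input = 3_344_109_862      # records passed to ORDER (first job)
job3_reduce_output = 3_339_587_570  # records in the final output (third job)

missing = job1_map_input - job3_reduce_output
print(missing)  # 4522292 records unaccounted for
```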

I used pig from trunk at revision 743989 and hadoop from branch-0.19
at revision 745383.

I'd be happy to get Pig working without data loss, and I'm ready to
provide additional details or run tests if that helps.
Thanks.