Hello, do you plan to add support for retention at the field/record level?
I can write this process using Pig, for example, right? But each retention check would require a full scan of the input file, record by record; I can't see any more sophisticated way to design/solve this. Thank you.

On Sun, Jan 24, 2016 at 5:14 PM, Venkat Ramachandran <[email protected]> wrote:
> Falcon data management is agnostic to the data and schema. Retiring a
> specific range of rows inside a file is not supported. However, you can
> write a custom job that reads the data, removes those older records, and
> writes the rest out - this process can be managed by Falcon. Yes, this
> will be a resource-intensive Hadoop job.
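For what it's worth, the record-by-record check such a custom job would run can be sketched as below. This is a minimal Python sketch of the filtering logic only, assuming tab-separated records with an ISO-8601 timestamp in the first column; the field layout and the 90-day cutoff are illustrative, not from the thread. In Pig the same check would be a single FILTER statement, but either way the job still reads every record once, as Venkat notes.

```python
from datetime import datetime, timedelta

# Hypothetical retention cutoff: drop records older than 90 days
# (dated relative to this thread, Jan 24, 2016).
CUTOFF = datetime(2016, 1, 24) - timedelta(days=90)

def retain(record: str) -> bool:
    """Keep a record only if its timestamp is newer than the cutoff.

    Assumes tab-separated records with an ISO-8601 timestamp in the
    first column, e.g. "2016-01-20T12:00:00\tuser42\tclick".
    """
    ts = datetime.strptime(record.split("\t", 1)[0], "%Y-%m-%dT%H:%M:%S")
    return ts > CUTOFF

def filter_records(lines):
    """Full scan: every record is read and checked exactly once."""
    return [r for r in lines if retain(r)]

records = [
    "2016-01-20T12:00:00\tuser42\tclick",  # inside retention window
    "2015-06-01T08:30:00\tuser17\tview",   # older than cutoff -> dropped
]
print(filter_records(records))
```

A MapReduce or Pig version of this is embarrassingly parallel (each record is checked independently), so the cost is one full pass over the data, not anything worse.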
