Hi Nishanth.

It's not clear exactly what you are building. Could you share a more
detailed description of what you are building and how the parquet files are
supposed to be ingested?
A couple of questions come to mind:
1. Is this an online import or a bulk load?
2. Why do the rules need to be deployed to the cluster? Are you planning to
do the reading inside the HBase region server?

As for deploying filters, you can try to use coprocessors instead. They can
be made configurable and are loadable (but not unloadable, so you need to
think about some class-loading magic like ClassWorlds).
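Something along these lines could be a starting point -- just a rough
sketch against the 0.98-era coprocessor API; the class name and
matchesRules() are hypothetical placeholders for whatever rule evaluation
you plug in:

import java.io.IOException;

import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.wal.WALEdit;

public class RuleFilteringObserver extends BaseRegionObserver {

  @Override
  public void prePut(ObserverContext<RegionCoprocessorEnvironment> ctx,
                     Put put, WALEdit edit, Durability durability)
      throws IOException {
    // Silently drop writes that do not satisfy the current rules.
    if (!matchesRules(put)) {
      ctx.bypass();
    }
  }

  // Placeholder for your rule engine; the rules could be read from the
  // coprocessor configuration so they can change without a redeploy.
  private boolean matchesRules(Put put) {
    return !put.isEmpty();
  }
}

You can attach it to a single table from the shell (alter 'mytable',
METHOD => 'table_att', 'coprocessor' => 'hdfs:///path/to.jar|RuleFilteringObserver|1001|')
or list it in hbase.coprocessor.region.classes in hbase-site.xml to load it
on every region.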
For bulk imports you can create HFiles directly and add them incrementally:
http://hbase.apache.org/book/arch.bulk.load.html
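If you go the bulk-load route, the driver is roughly this shape -- again a
sketch assuming 0.98-era APIs, where ParquetToPutMapper is a hypothetical
mapper that reads the parquet records, applies your rules and emits
(rowkey, Put) pairs:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ParquetBulkLoad {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "parquet-to-hfiles");
    job.setJarByClass(ParquetBulkLoad.class);

    // ParquetToPutMapper is a hypothetical mapper: it reads parquet records
    // (input format/schema setup omitted, it depends on which parquet-mr
    // reader you use), applies the rules and emits (rowkey, Put) pairs.
    job.setMapperClass(ParquetToPutMapper.class);
    job.setMapOutputKeyClass(ImmutableBytesWritable.class);
    job.setMapOutputValueClass(Put.class);

    // Configures partitioning/sorting so the reducers write HFiles that
    // line up with the table's current region boundaries.
    HTable table = new HTable(conf, args[0]);
    HFileOutputFormat2.configureIncrementalLoad(job, table);
    Path out = new Path(args[1]);
    FileOutputFormat.setOutputPath(job, out);

    if (job.waitForCompletion(true)) {
      // Moves the generated HFiles into the live regions.
      new LoadIncrementalHFiles(conf).doBulkLoad(out, table);
    }
  }
}

The same last step can also be done from the command line with the
completebulkload tool described on that page.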

On Wed, Oct 8, 2014 at 8:13 PM, Nishanth S <nishanth.2...@gmail.com> wrote:

> I was thinking of using the org.apache.hadoop.hbase.mapreduce.Driver
> import utility. I could see that we can pass filters in to it, but that
> looks less flexible since you need to deploy a new filter every time the
> rules for processing records change. Is there some way we could define a
> rules engine?
>
>
> Thanks,
> -Nishan
>
> On Wed, Oct 8, 2014 at 9:50 AM, Nishanth S <nishanth.2...@gmail.com>
> wrote:
>
> > Hey folks,
> >
> > I am evaluating loading an HBase table from parquet files based on some
> > rules that would be applied to the parquet file records. Could someone
> > help me with what would be the best way to do this?
> >
> >
> > Thanks,
> > Nishan
> >
>



-- 
Andrey.
