Hi Gabriel,

I am using the Phoenix psql.py utility to load this data.

Thanks Yeshwant, I have already done that.
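For context, the two load paths discussed in this thread look roughly like the following. The table name, ZooKeeper quorum, file paths, and Phoenix version below are placeholders, not taken from the thread:

```shell
# Current approach: single-client load via psql.py
# (table name, ZK quorum, and CSV path are placeholders)
bin/psql.py -t EXAMPLE zookeeper-host:2181 /data/example.csv

# Alternative suggested below: the MapReduce-based CSV bulk loader,
# which writes HFiles directly and generally scales better for
# multi-GB loads (see http://phoenix.apache.org/bulk_dataload.html)
hadoop jar phoenix-<version>-client.jar \
    org.apache.phoenix.mapreduce.CsvBulkLoadTool \
    --table EXAMPLE \
    --input /data/example.csv
```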

Regards,
Poonam.



On Wed, Sep 17, 2014 at 8:11 PM, Gabriel Reid <[email protected]>
wrote:

> Hi Poonam,
>
> Could you say a bit about how you're doing this bulk load? Are you
> loading from a CSV file, or making JDBC calls yourself (or something
> else)?
>
> Are you aware of the CSV bulk loader
> (http://phoenix.apache.org/bulk_dataload.html)? That will probably
> give you the best performance for a large load of data.
>
> - Gabriel
>
>
> On Wed, Sep 17, 2014 at 3:44 PM, Poonam Ligade
> <[email protected]> wrote:
> > Hi,
> >
> > I am trying to load around 15GB of data into HBase.
> > The files are split into chunks of around 1.3GB each. I have 2
> > regionservers and 1 master running on separate machines with 4GB
> > physical RAM each. The heap size for the HMaster and each
> > regionserver is 3GB.
> >
> > How can I configure HBase to handle these writes faster?
> >
> > Currently I have the settings below:
> >
> > hbase.regionserver.handler.count                 60
> > hbase.client.write.buffer                        8388608
> > hbase.hregion.memstore.flush.size                134217728
> > hbase.regionserver.global.memstore.lowerLimit    0.95
> > hbase.regionserver.global.memstore.upperLimit    0.4
> > hbase.hregion.max.filesize                       3294967296
> >
> >
> >
> > Regards,
> > Poonam
>