Re: Custom Retention of data based on Rowkey

2017-03-09 Thread Gaurav Agarwal
Hi, looking deeper, I found that the RegionObserver interface provides general hooks to intercept the pre-compaction scanner. That should suffice for our purpose! In any case, any suggestions/guidelines would be much appreciated. From: Gaurav Agarwal <gauravagarw...@gmail.
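A minimal sketch of such a pre-compaction hook, assuming an HBase 1.1+/1.2-era coprocessor API; the fixed-width 8-byte tenant prefix and the lookupRetentionMs() policy lookup are hypothetical placeholders, not part of the original thread:

import java.io.IOException;
import java.util.Iterator;
import java.util.List;

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.InternalScanner;
import org.apache.hadoop.hbase.regionserver.ScanType;
import org.apache.hadoop.hbase.regionserver.ScannerContext;
import org.apache.hadoop.hbase.regionserver.Store;

/** Drops expired cells during compaction, based on a tenant id encoded in the row key. */
public class TenantRetentionObserver extends BaseRegionObserver {

  private static final int TENANT_ID_LENGTH = 8; // hypothetical fixed-width row-key prefix

  @Override
  public InternalScanner preCompact(ObserverContext<RegionCoprocessorEnvironment> ctx,
      Store store, final InternalScanner scanner, ScanType scanType) throws IOException {
    // Wrap the compaction scanner and silently drop cells whose retention has lapsed.
    return new InternalScanner() {
      @Override
      public boolean next(List<Cell> results) throws IOException {
        boolean more = scanner.next(results);
        filterExpired(results);
        return more;
      }

      @Override
      public boolean next(List<Cell> results, ScannerContext scannerContext) throws IOException {
        boolean more = scanner.next(results, scannerContext);
        filterExpired(results);
        return more;
      }

      @Override
      public void close() throws IOException {
        scanner.close();
      }
    };
  }

  private void filterExpired(List<Cell> cells) {
    long now = System.currentTimeMillis();
    for (Iterator<Cell> it = cells.iterator(); it.hasNext();) {
      Cell cell = it.next();
      byte[] row = CellUtil.cloneRow(cell);
      long retentionMs = lookupRetentionMs(row, TENANT_ID_LENGTH);
      if (now - cell.getTimestamp() > retentionMs) {
        it.remove();
      }
    }
  }

  /** Hypothetical hook: resolve the tenant's retention period from the row-key prefix. */
  private long lookupRetentionMs(byte[] row, int prefixLength) {
    return Long.MAX_VALUE; // placeholder for a dynamically updatable policy store
  }
}

Because compaction rewrites the store files, anything filtered here disappears from disk without a client-visible delete, which is what makes this hook a natural fit for retention policies.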

Custom Retention of data based on Rowkey

2017-03-09 Thread Gaurav Agarwal
Hi All, We have an application that stores information on multiple users/customers/tenants in a common table. Each tenant has a unique id which we encode in the row key of the records that are stored in the table. We want to apply custom (and dynamically updatable) data retention
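For illustration only, a tiny sketch of the kind of tenant-prefixed row key described here, assuming a fixed 8-byte tenant id; the layout and names are assumptions, not taken from the thread:

import org.apache.hadoop.hbase.util.Bytes;

public class TenantRowKeys {
  // Hypothetical layout: [8-byte tenant id][application record key]
  public static byte[] rowKey(long tenantId, byte[] recordKey) {
    return Bytes.add(Bytes.toBytes(tenantId), recordKey);
  }

  public static long tenantIdOf(byte[] rowKey) {
    return Bytes.toLong(rowKey, 0); // read the leading 8 bytes back out
  }
}

A fixed-width prefix like this is what lets a compaction-time hook (see the reply above) recover the tenant id from any Cell's row.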

Re: Hbase in memory rest running in jenkins

2016-03-19 Thread Gaurav Agarwal
No, Jenkins is running on a Unix box; then what needs to be done? On Mar 16, 2016 8:12 PM, "Ted Yu" <yuzhih...@gmail.com> wrote: > bq. Do I need to configure winutils.exe > > Assuming the Jenkins runs on Windows, winutils.exe would be needed. > > Cheers > > On

Re: Hbase in memory rest running in jenkins

2016-03-19 Thread Gaurav Agarwal
Thanks, Will confirm On Wed, Mar 16, 2016 at 9:24 PM, Ted Yu <yuzhih...@gmail.com> wrote: > maven, git (which I think you may have setup already) > > winutils.exe is out of the picture. > > On Wed, Mar 16, 2016 at 7:56 AM, Gaurav Agarwal <gaurav130...@gmail.com>

Hbase in memory rest running in jenkins

2016-03-19 Thread Gaurav Agarwal
Hello, I used HBaseTestingUtility to run test cases on Windows and configured winutils.exe there on the class path. But now I have to run the same test cases on Jenkins, where no Hadoop is installed. Do I need to configure winutils.exe there, or what else needs to be done? Please share the info. Thanks

hbase inMemory Test And DFS Cluster

2016-03-18 Thread Gaurav Agarwal
Hello, I have one more query with HBaseTestingUtility. HBaseTestingUtility testing = new HBaseTestingUtility(); testing.startMiniDFSCluster(1); testing.startMiniHBaseCluster(1, 1); This will start the HDFS cluster and the HBase cluster. While creating the table in the in-memory HBase, the master coprocessor will create the
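For completeness, a hedged sketch of the equivalent calls against the real HBaseTestingUtility API, assuming an HBase 1.x test jar; the table and family names are placeholders:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class MiniClusterExample {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility testing = new HBaseTestingUtility();
    testing.startMiniDFSCluster(1);        // in-process HDFS with one datanode
    testing.startMiniHBaseCluster(1, 1);   // one master, one region server
    try {
      // Creating a table here is what would trigger any master/region coprocessor hooks.
      Table table = testing.createTable(TableName.valueOf("demo"), Bytes.toBytes("cf"));
      System.out.println("Created " + table.getName());
    } finally {
      testing.shutdownMiniCluster();       // stops both HBase and DFS
    }
  }
}

startMiniCluster() would bring up both pieces in one call if the DFS and HBase clusters do not need to be started separately.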

Re: Example of spinning up a Hbase mock style test for integration testing in scala

2016-03-14 Thread Gaurav Agarwal
I am not aware of Scala testing; I am only aware of Java testing with HBase. You can use HBaseTestingUtility.java, which is present in the HBase testing jar. Call startMiniCluster or the DFS cluster there. Thanks On Mar 14, 2016 12:22 PM, "Nkechi Achara" wrote: > Hi, > > I am trying

Hbase procedure

2016-03-02 Thread Gaurav Agarwal
Hello, I was exploring namespaces in HBase and came across something called HBaseAdmin.execProcedure(...). The comment says it is a globally barriered, distributed procedure. Can someone help me find out: is this similar to an RDBMS stored proc? Or any inputs on where it can be
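For context, execProcedure is not an RDBMS-style stored procedure: it triggers a coordinated, globally barriered operation (snapshot, table flush, and so on) identified by a signature string that a procedure manager registers on the master. A hedged sketch against the 1.x Admin API follows; the "flush-table-proc" signature and the "demo" table name are assumptions used only for illustration:

import java.util.HashMap;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ExecProcedureExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Admin admin = connection.getAdmin()) {
      // Runs a globally barriered flush across all regions of table "demo".
      // "flush-table-proc" is assumed to be the signature registered by the flush procedure manager.
      admin.execProcedure("flush-table-proc", "demo", new HashMap<String, String>());
    }
  }
}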

Re: Compaction settings per table

2016-02-26 Thread Gaurav Agarwal
statement and use the API on HTD/ HCD > > -Anoop- > > On Fri, Feb 26, 2016 at 11:21 AM, Gaurav Agarwal <gau...@arkin.net> wrote: > > Thanks Enis! this is really helpful. > > I could not understand your second suggestion; could you please explain > it > &g
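A minimal sketch of the HTD approach referenced in this reply, assuming an HBase 1.x client; the table name and the chosen values are placeholders:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class PerTableCompactionConfig {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Admin admin = connection.getAdmin()) {
      TableName name = TableName.valueOf("demo");
      HTableDescriptor htd = admin.getTableDescriptor(name);

      // Override a compaction setting for this table only; other tables keep the cluster default.
      htd.setConfiguration("hbase.hstore.compaction.min.size", String.valueOf(16 * 1024 * 1024L));

      // Per-table memstore flush size, as asked further down this thread.
      htd.setMemStoreFlushSize(256 * 1024 * 1024L);

      admin.modifyTable(name, htd); // may need the table disabled unless online schema update is enabled
    }
  }
}

The same keys can also be set per column family through HColumnDescriptor#setConfiguration.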

Re: Compaction settings per table

2016-02-25 Thread Gaurav Agarwal
nfiguration() values. > > You can also use hbase shell to set the configuration as well. > > Enis > > On Thu, Feb 25, 2016 at 4:51 AM, Gaurav Agarwal <gau...@arkin.net> wrote: > > > Go the answer to memstore size per table > > via TableDescriptor#setMemStoreFl

Re: Compaction settings per table

2016-02-25 Thread Gaurav Agarwal
Got the answer to memstore size per table via TableDescriptor#setMemStoreFlushSize(long) On Thu, Feb 25, 2016 at 5:38 PM, Gaurav Agarwal <gau...@arkin.net> wrote: > In addition, is there a way to set memstore flush size per table/cf as > well? > > On Thu, Feb 25, 2016 at 5:20

Re: Compaction settings per table

2016-02-25 Thread Gaurav Agarwal
In addition, is there a way to set memstore flush size per table/cf as well? On Thu, Feb 25, 2016 at 5:20 PM, Gaurav Agarwal <gau...@arkin.net> wrote: > Hi, > > Is there a way to set Compaction configurations differently for each of my > table? Specificall

Compaction settings per table

2016-02-25 Thread Gaurav Agarwal
Hi, Is there a way to set compaction configurations differently for each of my tables? Specifically, I want to tweak the `hbase.hstore.compaction.min.size` parameter for one of my tables while keeping it at its default value for the others. --cheers, gaurav

RE: Hbase testing utility

2016-02-22 Thread Gaurav Agarwal
It worked, thanks. On Feb 22, 2016 9:00 PM, "Gaurav Agarwal" <gaurav130...@gmail.com> wrote: > I got the files now and I am getting an NPE; when I debug, I see WINUTILS as > null in DiskChecker.checkDir() On Feb 22, 2016 6:55 PM, "ashish singhi" <ashish.sin...@

Re: Hbase testing utility

2016-02-22 Thread Gaurav Agarwal
Hbase 0.98.4 and hadoop 2.2 On Feb 22, 2016 6:41 PM, "Ted Yu" <yuzhih...@gmail.com> wrote: > Which hbase / hadoop release are you using ? > > Can you give the complete stack trace ? > > Cheers > > On Mon, Feb 22, 2016 at 3:03 AM, Gaurav Agarwal <gaura

RE: Hbase testing utility

2016-02-22 Thread Gaurav Agarwal
p processes on Windows. > If you don't have hadoop dll files then you can generate your own. Google > "Steps to build Hadoop bin distribution for Windows". > > Regards, > Ashish > > -Original Message- > From: Gaurav Agarwal [mailto:gaurav130...@gmail.com] &g

Hi

2016-02-22 Thread Gaurav Agarwal
I am trying to use the HBase testing utility to start a minicluster on my local machine but am getting an exception: java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access Please let me know what needs to be done

Hbase testing utility

2016-02-22 Thread Gaurav Agarwal
> I am trying to use the HBase testing utility to start a minicluster on my local machine but am getting an exception > java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access > > Please let me know what needs to be done
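On Windows the mini cluster needs the native Hadoop helpers (winutils.exe and hadoop.dll); a common workaround is to point hadoop.home.dir at a directory whose bin folder contains them before anything touches Hadoop's Shell class. A hedged sketch with a placeholder path follows; on a Unix box (for example a Linux Jenkins agent) none of this is needed:

import org.apache.hadoop.hbase.HBaseTestingUtility;

public class WindowsMiniClusterBootstrap {
  public static void main(String[] args) throws Exception {
    // Only required on Windows: <hadoop.home.dir>\bin must contain winutils.exe and hadoop.dll.
    if (System.getProperty("os.name").toLowerCase().contains("win")) {
      System.setProperty("hadoop.home.dir", "C:\\hadoop"); // placeholder path
    }
    HBaseTestingUtility testing = new HBaseTestingUtility();
    testing.startMiniCluster();      // starts DFS + HBase in-process
    testing.shutdownMiniCluster();
  }
}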

Re: Hi

2016-02-22 Thread Gaurav Agarwal
Hi > > I am trying to use the HBase testing utility to start a minicluster on my local machine but am getting an exception > java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access > > Please let me know what needs to be done

Re: wal.FSHLog: Error syncing, request close of wal (regionserver crashes)

2015-10-29 Thread Gaurav Agarwal
Hi, just a bump on this post to check if anyone knows more about this... On Mon, Oct 26, 2015 at 11:06 PM, Gaurav Agarwal <gau...@arkin.net> wrote: > Hi All, > > We are running hbase - *Version 1.0.0-cdh5.4.2, rUnknown, Tue May 19 > 17:04:41 PDT 2015,* and are facing the

wal.FSHLog: Error syncing, request close of wal (regionserver crashes)

2015-10-26 Thread Gaurav Agarwal
Hi All, We are running HBase - *Version 1.0.0-cdh5.4.2, rUnknown, Tue May 19 17:04:41 PDT 2015,* and are facing the problem described in the bug (https://issues.apache.org/jira/browse/HBASE-12074), where the regionserver crashes due to a concurrent roll of the WAL file. Below are failure logs from one of the

Re: Large number of column qualifiers

2015-09-24 Thread Gaurav Agarwal
-value size, but its > > > unlimited in a master (I was wrong about 1MB, this is probably for > older > > > versions of HBase) > > > > > > > > > -Vlad > > > > > > On Wed, Sep 23, 2015 at 11:45 AM, Gaurav Agarw

Re: Large number of column qualifiers

2015-09-24 Thread Gaurav Agarwal
please point me these settings/code? On Thu, Sep 24, 2015 at 12:05 PM, Gaurav Agarwal <gau...@arkin.net> wrote: > After spending more time I realised that my understanding and my question > (was invalid). > I am still trying to get more information regarding the problem an

Re: Large number of column qualifiers

2015-09-23 Thread Gaurav Agarwal
at HBASE-11544 which is in hbase 1.1 > > Cheers > > On Wed, Sep 23, 2015 at 10:18 AM, Gaurav Agarwal <gau...@arkin.net> wrote: > > > Hi All, > > > > I have Column Family with very large number of column qualifiers (> > > 50,000). Each column qualifier
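To make the options concrete: a hedged client-side sketch using Scan#setBatch, which caps how many columns of a row come back in each Result; the table and family names are placeholders. HBASE-11544 (HBase 1.1+), referenced above, additionally adds Scan#setAllowPartialResults for streaming very wide rows in chunks:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class WideRowScan {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("wide"))) {
      Scan scan = new Scan();
      scan.addFamily(Bytes.toBytes("cf"));
      scan.setBatch(5000); // at most 5000 columns of a row per Result
      try (ResultScanner scanner = table.getScanner(scan)) {
        for (Result result : scanner) {
          // A row with >50,000 qualifiers arrives as several Results of up to 5000 cells each.
          System.out.println(Bytes.toString(result.getRow()) + " -> " + result.rawCells().length + " cells");
        }
      }
    }
  }
}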

Large number of column qualifiers

2015-09-23 Thread Gaurav Agarwal
Hi All, I have a column family with a very large number of column qualifiers (> 50,000). Each column qualifier is 8 bytes long. The problem is that when I do a scan operation to fetch some rows, the client-side Cell object does not have enough space allocated in it to hold all the column qualifiers for

Re: Large number of column qualifiers

2015-09-23 Thread Gaurav Agarwal
is * Short.MAX_VALUE which is 32,767 bytes. * @return The array containing the qualifier bytes. */ byte[] getQualifierArray(); On Thu, Sep 24, 2015 at 12:10 AM, Gaurav Agarwal <gau...@arkin.net> wrote: > Thanks Vlad. Could you please point me the KV size setting (default 1MB)? > Just

Re: How to delete regions from a table

2015-05-11 Thread Gaurav Agarwal
Thanks! Will try these steps and update how it went.. -- View this message in context: http://apache-hbase.679495.n3.nabble.com/How-to-delete-regions-from-a-table-tp4071226p4071239.html Sent from the HBase User mailing list archive at Nabble.com.

Re: How to delete regions from a table

2015-05-10 Thread Gaurav Agarwal
Hi Talat, Thanks for the reply! A few specific follow-up questions: 1. Do I need to disable and enable the table for these operations? What would be the exact sequence? 2. I guess somewhere in between these steps, I need to manually delete the real data files corresponding to these regions from HDFS? 3.

How to delete regions from a table

2015-05-10 Thread Gaurav Agarwal
Hi All, We are using HBase version 0.96.1.1-cdh5.0.0 and need to selectively delete some regions from a table. I can afford to disable the table for some time in order to perform this activity but absolutely cannot risk losing the data stored in active regions (other than the ones that need to be

Hbase overhead for completely inactive regions

2014-12-28 Thread Gaurav Agarwal
Hi All, I have time-series data with most of the regions completely inactive. With my current set of resources and estimates, I would end up with close to 15TB of data per RegionServer, and with a region size of about 15G this would mean 1000 regions per region server. On the whole I expect