scan startrow and stoprow

2015-04-22 Thread Sachin
My rowkey format is uniqueid|timestamp|randomnumber. I want to retrieve data from HBase using a scanner with the Java API, where my startRow is aabb|timeStamp1| and my stopRow is aabb|timeStamp2|any_number; timeStamp1 and timeStamp2 are time range boundaries. So I want to fetch all values between the above timestamps.
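
A minimal sketch of what such a scan could look like with the 1.x Java client, assuming a table called "my_table" and timestamps written as fixed-width values so lexicographic order matches numeric order (both assumptions, not details from the thread); the stop row is exclusive, so the second timestamp is bumped by one to cover every random-number suffix:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class TimeRangeRowScan {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("my_table"))) {  // hypothetical table
          String uniqueId = "aabb";
          long timeStamp1 = 1429660800000L;  // placeholder range start
          long timeStamp2 = 1429747200000L;  // placeholder range end
          Scan scan = new Scan();
          scan.setStartRow(Bytes.toBytes(uniqueId + "|" + timeStamp1 + "|"));
          // the stop row is exclusive, so use timeStamp2 + 1 to include every
          // aabb|timeStamp2|<randomnumber> row
          scan.setStopRow(Bytes.toBytes(uniqueId + "|" + (timeStamp2 + 1) + "|"));
          try (ResultScanner scanner = table.getScanner(scan)) {
            for (Result r : scanner) {
              System.out.println(Bytes.toString(r.getRow()));
            }
          }
        }
      }
    }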

How to retrieve records from hbase which I inserted in last 7 days

2014-07-22 Thread Sachin
Hello, I want to know how I can retrieve rows which were inserted in the last 7 days or in a particular time period. I am putting the current timestamp in the row key while inserting data to hbase.
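
If the cell timestamps (rather than the timestamp embedded in the row key) are acceptable for this, one hedged option is Scan.setTimeRange, which asks the servers to return only cells written inside the window; a small sketch, with the table name made up for illustration:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;

    public class LastSevenDaysScan {
      public static void main(String[] args) throws IOException {
        long now = System.currentTimeMillis();
        long sevenDaysAgo = now - 7L * 24 * 60 * 60 * 1000;
        Scan scan = new Scan();
        // filters on the cell timestamp HBase stores with every value,
        // independent of whatever timestamp is embedded in the row key
        scan.setTimeRange(sevenDaysAgo, now);
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = conn.getTable(TableName.valueOf("my_table"));  // hypothetical table
             ResultScanner scanner = table.getScanner(scan)) {
          for (Result r : scanner) {
            System.out.println(r);
          }
        }
      }
    }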

Log levels for thrift logs

2013-10-23 Thread Sachin Sudarashana
ver, only DEBUG level logs are being generated. How do I change the logger level to include the other levels as well? Any help is greatly appreciated! Thank you, Sachin
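
The usual place to adjust this is the log4j configuration shipped with HBase; a sketch of what the relevant lines in conf/log4j.properties might look like (the logger names below are the standard HBase/Thrift package names and are an assumption, adjust them to whatever packages are actually emitting the DEBUG output):

    # conf/log4j.properties -- set the default level and pin the thrift packages
    log4j.rootLogger=INFO,console
    log4j.logger.org.apache.hadoop.hbase.thrift=INFO
    log4j.logger.org.apache.thrift=WARN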

Connecting to hbase 1.0.3 via java client stuck at zookeeper.ClientCnxn: Session establishment complete on server

2016-04-03 Thread Sachin Mittal
In my hosts I have this entry: 127.0.0.1 localhost.localdomain localhost Sachin-PC Also in hbase regionservers has one entry localhost I have tried many options for hbase.zookeeper.quorum like localhost, Sachin-PC, 127.0.0.1 but none have worked. Also note the jars I am using are of same

Re: Connecting to hbase 1.0.3 via java client stuck at zookeeper.ClientCnxn: Session establishment complete on server

2016-04-04 Thread Sachin Mittal
=sachin-pc,55964,1459772310378[main] client.MetaCache: Cached location: [region=hbase:meta,,1.1588230740, hostname=sachin-pc,55964,1459772310378, seqNum=0] [hconnection-0x1e67b872-shared--pool1-t1] ipc.AbstractRpcClient: Connecting to Sachin-PC/127.0.0.1:55964 java.net.SocketException: Socket is

Re: Connecting to hbase 1.0.3 via java client stuck at zookeeper.ClientCnxn: Session establishment complete on server

2016-04-05 Thread Sachin Mittal
Hi, I figured out the issue. The region server was listening to 192.168.1.102:55964 and not 127.0.0.1:55964. 192.168.1.102 is the IP of my machine and Sachin-PC is my machine name. In hosts my entry was 127.0.0.1 localhost.localdomain localhost Sachin-PC I removed Sachin-PC from there and
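
For anyone hitting the same symptom, a hedged sketch of the client side: the quorum host passed to the client has to resolve to the address the region server actually advertises (192.168.1.102 in the thread above), not to 127.0.0.1. The table name and ZooKeeper port below are placeholders.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Table;

    public class RemoteClient {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        // this host must resolve (via DNS or /etc/hosts) to the address the
        // region server binds to, otherwise the client hangs right after the
        // ZooKeeper session is established
        conf.set("hbase.zookeeper.quorum", "Sachin-PC");
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("my_table"))) {  // hypothetical table
          System.out.println("connected, table: " + table.getName());
        }
      }
    }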

Re: Can not connect local java client to a remote Hbase

2016-04-21 Thread Sachin Mittal
resolved differently as pointed in those links. Hope it helps. Sachin On Thu, Apr 21, 2016 at 11:11 PM, SOUFIANI Mustapha | السفياني مصطفى < s.mustaph...@gmail.com> wrote: > Hi all, > I'm trying to connect my local java client (pentaho) to a remote Hbase but > every time I

Re: Can not connect local java client to a remote Hbase

2016-04-22 Thread Sachin Mittal
your ports are open. Your settings are fine. The issue seems to be elsewhere but I am not sure where. Check with Pentaho maybe. On Fri, Apr 22, 2016 at 8:44 PM, SOUFIANI Mustapha | السفياني مصطفى < s.mustaph...@gmail.com> wrote: > Maybe those ports are not open: > hduser@big-services:~$ telnet localhos

How to get size of Hbase Table

2016-07-20 Thread Sachin Jain
know if there is an API or some approach to calculate the size of an HBase table. [0]: https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/SparkStrategies.scala#L118 Thanks -Sachin
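
One approach sometimes used for this (a sketch, not necessarily what the thread settled on): since a table's store files live under hbase.rootdir on HDFS, summing that directory gives the on-disk size before HDFS replication. The path below assumes the default layout and the "default" namespace; the table name is made up.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.ContentSummary;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class TableSizeOnDisk {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        FileSystem fs = FileSystem.get(conf);
        // assumes hbase.rootdir=/hbase and a table in the "default" namespace
        Path tableDir = new Path("/hbase/data/default/my_table");
        ContentSummary summary = fs.getContentSummary(tableDir);
        System.out.println("bytes on disk (pre-replication): " + summary.getLength());
      }
    }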

Re: How to get size of Hbase Table

2016-07-21 Thread Sachin Jain
leRegions(final TableName tableName) > > From HRegion: > > public static HDFSBlocksDistribution computeHDFSBlocksDistribution(final > Configuration conf, > > final HTableDescriptor tableDescriptor, final HRegionInfo regionInfo) > throws IOException { > > FYI >

Re: Issues with Spark On Hbase Connector

2016-08-28 Thread Sachin Jain
Hi Sudhir, There is a connection leak problem with the Hortonworks hbase connector if you use hbase 1.2.0. I tried to use Hortonworks' connector and ran into the same problem. Have a look at this Hbase issue HBASE-16017 [0]. The fix for this was backported to 1.3.0, 1.4.0 and 2.0.0 I have raised a tic

Re: Issues with Spark On Hbase Connector

2016-08-29 Thread Sachin Jain
If you take my code then it should work. I have tested it on Hbase 1.2.1. On Aug 29, 2016 12:21 PM, "spats" wrote: > Thanks Sachin. > > So it won't work with hbase 1.2.0 even if we use your code from shc branch? > > > > > -- > View this message

Default value of caching in Scanner

2016-10-31 Thread Sachin Jain
d for the new cache value. Thanks -Sachin
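
For readers who just want to pin the value, caching can be overridden per scan; a minimal sketch (the figure 500 is an arbitrary example, and in 1.x the effective default is governed by hbase.client.scanner.caching together with hbase.client.scanner.max.result.size):

    import org.apache.hadoop.hbase.client.Scan;

    public class CachingExample {
      public static Scan buildScan() {
        Scan scan = new Scan();
        // number of rows fetched per RPC; larger values mean fewer round trips
        // but more memory held on both the client and the region server
        scan.setCaching(500);
        return scan;
      }
    }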

Re: Default value of caching in Scanner

2016-11-01 Thread Sachin Jain
/issues.apache.org/jira/browse/HBASE-16973 recently, you can get > more details there. > > Small world, isn't it? (Smile) > > Best Regards, > Yu > > On 1 November 2016 at 13:10, Sachin Jain wrote: > > > Hi, > > > > I am using HBase v1.1.2. I have f

Creating HBase table with presplits

2016-11-28 Thread Sachin Jain
oing to be inserted into HBase. Essentially I don't know the key range so if I specify wrong splits, then either first or last split can be a hot region in my system. [0]: https://hbase.apache.org/book.html#rowkey.regionsplits Thanks -Sachin
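
When the key range really is unknown, one common workaround is to salt or hash the row key and presplit uniformly over the hashed prefix; a sketch under that assumption, with made-up table and family names:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.util.Bytes;

    public class PresplitTable {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("my_table"));
          desc.addFamily(new HColumnDescriptor("cf"));
          // 16 uniform splits between "00" and "ff"; only sensible if the row
          // key is salted/hashed so writes spread evenly across the splits
          admin.createTable(desc, Bytes.toBytes("00"), Bytes.toBytes("ff"), 16);
        }
      }
    }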

Re: Creating HBase table with presplits

2016-11-29 Thread Sachin Jain
orrect that there is no way to > presplit your regions in an effective way. Either you need to make some > starting guess, such as a small number of uniform splits, or wait until you > have some information about what the data will look like. > > Dave > > On Mon, Nov 28, 20

Downsides of having large number of versions in hbase

2016-11-29 Thread Sachin Jain
will not scan HFiles further. Because we are interested in latest version only and we have got in the file recently created. Want to confirm what is true among 1 and 2. Similarly, large number of versions can also degrade the performance of full scan for joins etc. Thanks -Sachin
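
For reference, the number of retained versions is a per-column-family setting, and reads only pull older versions when explicitly asked; a small sketch with hypothetical family and row names:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.util.Bytes;

    public class VersionsExample {
      public static void main(String[] args) throws IOException {
        // the column family keeps up to 100 versions of every cell
        HColumnDescriptor cf = new HColumnDescriptor("cf");
        cf.setMaxVersions(100);

        // a Get returns only the newest version unless more are requested
        Get get = new Get(Bytes.toBytes("row1"));
        get.setMaxVersions(100);
      }
    }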

Re: Downsides of having large number of versions in hbase

2016-11-30 Thread Sachin Jain
. [0]: http://hbase.apache.org/book.html#schema.versions On Tue, Nov 29, 2016 at 4:07 PM, Sachin Jain wrote: > Hi, > > I am curious to understand the impact of having large number of versions > in HBase. Suppose I want to maintain previous 100 versions for a row/cell. > &g

Re: Creating HBase table with presplits

2016-12-13 Thread Sachin Jain
calculate your keyspace size by a lot, you are stuck with > the > > hash function and range you selected even if you later get more regions > > unless you're willing to do complete migration to a new table > > > > Hope above helps. > > > > > > Saad >

Any Repercussions of using Multiwal

2017-06-05 Thread Sachin Jain
(whatever) scenarios. PS: *Hbase Configuration* Single Node (Local Setup) v1.3.1 Ubuntu 16 Core machine. Thanks -Sachin
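
For context, multiwal is switched on per region server in hbase-site.xml; a sketch of the two relevant properties (the values are illustrative, and the group count defaults to 2):

    <!-- hbase-site.xml on each region server -->
    <property>
      <name>hbase.wal.provider</name>
      <value>multiwal</value>
    </property>
    <property>
      <name>hbase.wal.regiongrouping.numgroups</name>
      <value>2</value>
    </property>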

Re: Any Repercussions of using Multiwal

2017-06-06 Thread Sachin Jain
g 64 wals allowed for a single RS. I thought one of the side effects of having multiwal enabled is that there will be *large amount of data waiting in unarchived wals.* So if a region server fails, it would take more time to playback the wal files and hence it could *compromise Availability.* W

Re: getting start and stop key

2017-06-06 Thread Sachin Jain
Just to add to @Ted Yu's answer, you can confirm this by looking at your HMaster UI and seeing the regions and their boundaries. On Tue, Jun 6, 2017 at 3:50 PM, Ted Yu wrote: > Looks like your table has only one region. > > > On Jun 6, 2017, at 3:14 AM, Rajeshkumar J > wrote: > > > > I am getting sta
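
The same information is available programmatically via RegionLocator, in case the UI is not handy; a sketch with a made-up table name:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionLocator;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.hbase.util.Pair;

    public class RegionBoundaries {
      public static void main(String[] args) throws IOException {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             RegionLocator locator = conn.getRegionLocator(TableName.valueOf("my_table"))) {
          Pair<byte[][], byte[][]> keys = locator.getStartEndKeys();
          for (int i = 0; i < keys.getFirst().length; i++) {
            // an empty start key / end key marks the first / last region
            System.out.println(Bytes.toStringBinary(keys.getFirst()[i]) + " -> "
                + Bytes.toStringBinary(keys.getSecond()[i]));
          }
        }
      }
    }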

Regarding Connection Pooling

2017-06-12 Thread Sachin Jain
ize, does that mean I can serve only N parallel requests if all those requests have to deal with same hbase region server. Is this true ? [0]: https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/ConnectionFactory.html Thanks -Sachin

Re: Regarding Connection Pooling

2017-06-12 Thread Sachin Jain
. On 12-Jun-2017 7:31 PM, "Allan Yang" wrote: Connection is thread safe. You can use it across different threads. And requests made by different thread are handled in parallel no matter the keys are in the same region or not. 2017-06-12 20:44 GMT+08:00 Sachin Jain : > Hi, > >

Re: Regarding Connection Pooling

2017-06-12 Thread Sachin Jain
cket to each RS, and the calls written to this > socket are synchronized(or queued using another thread called CallSender ). > But usually, this won't become a bottleneck. If this is a problem for you, > you can tune "hbase.client.ipc.pool.size". > > 2017-06-12 23:47 GMT+08:
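
A hedged sketch of the tuning mentioned above: the pool size is set on the client configuration before the shared Connection is created (the value 10 is just an example, not a recommendation).

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class PooledConnection {
      public static Connection create() throws IOException {
        Configuration conf = HBaseConfiguration.create();
        // number of sockets the client may open per region server; only worth
        // raising if the single shared socket is a measured bottleneck
        conf.setInt("hbase.client.ipc.pool.size", 10);
        return ConnectionFactory.createConnection(conf);
      }
    }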

Re: Regarding Connection Pooling

2017-06-16 Thread Sachin Jain
n Mon, Jun 12, 2017 at 9:35 PM, Sachin Jain wrote: > Thanks Allan, > > This is what I understood initially that further calls will be serial if a > request is already pending on some RS. I am running hbase 1.3.1 > Is "hbase.client.ipc.pool.size" still valid ? I thought it

Implementation of full table scan using Spark

2017-06-28 Thread Sachin Jain
the full table scan spark job. Q2. When I issue a get command, Is there a way to know if the record is served from blockCache, memstore or Hfile? Thanks -Sachin
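
A sketch of the usual TableInputFormat route that the replies discuss, where each region becomes one Spark partition scanned as a normal region scan; the table name and app name are placeholders:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    public class FullTableScan {
      public static void main(String[] args) {
        SparkConf sparkConf = new SparkConf().setAppName("hbase-full-scan");
        try (JavaSparkContext sc = new JavaSparkContext(sparkConf)) {
          Configuration conf = HBaseConfiguration.create();
          conf.set(TableInputFormat.INPUT_TABLE, "my_table");  // hypothetical table name
          JavaPairRDD<ImmutableBytesWritable, Result> rdd =
              sc.newAPIHadoopRDD(conf, TableInputFormat.class,
                  ImmutableBytesWritable.class, Result.class);
          System.out.println("rows: " + rdd.count());
        }
      }
    }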

Re: Implementation of full table scan using Spark

2017-06-28 Thread Sachin Jain
de of TableInputFormat and see if I get something new. On Thu, Jun 29, 2017 at 9:31 AM, Jingcheng Du wrote: > Hi Sachin, > The TableInputFormat should read the memstore. > The TableInputFormat is converted to scan to each region, the operations in > each region should be a normal scan

Re: Slow HBase write across data center

2017-06-29 Thread Sachin Jain
Try to figure out which region server is handling those writes; it could be that a particular region server is skewing your cluster's write performance. Another thing to check is whether your data is already skewed across regions/region servers. Once I faced this issue, I enabled multiwal and u

Change delimiter in column qualifier

2017-09-19 Thread Sachin Jain
Hi, I am using hbase in a system which does not allow using a colon between the column name and column family. Is there any configuration where we can make hbase use underscore (_) as the delimiter instead of colon (:) between name and family? Thanks -Sachin
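
For what it is worth, the colon only exists in the shell/string notation; through the Java API the family and qualifier are passed as separate byte arrays, so no delimiter character is stored at all. A small sketch with made-up table, family and column names:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class PutWithoutDelimiter {
      public static void main(String[] args) throws IOException {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = conn.getTable(TableName.valueOf("my_table"))) {
          Put put = new Put(Bytes.toBytes("row1"));
          // family and qualifier are separate arguments -- no ':' is involved
          put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("col_1"), Bytes.toBytes("value"));
          table.put(put);
        }
      }
    }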

Re: Change delimiter in column qualifier

2017-09-19 Thread Sachin Jain
informatica connector. I do not want to go into much detail about the informatica connector and all. Just wanted to know if we can somehow override the colon delimiter with underscore using some configuration. Thanks -Sachin On Tue, Sep 19, 2017 at 9:01 PM, Ted Yu wrote: > Can you give the error