Look at
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html#setCaching(int)
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html#setBatch(int)
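For example, a minimal sketch of using the two together (the table name here is a placeholder, and the 0.90-era client API is assumed):

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;

    public class ScanTuning {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "mytable"); // placeholder table name
        Scan scan = new Scan();
        scan.setCaching(500); // rows fetched per RPC round-trip to the region server
        scan.setBatch(10);    // cap on columns per Result, useful for very wide rows
        ResultScanner scanner = table.getScanner(scan);
        try {
          for (Result r : scanner) {
            // process each row here
          }
        } finally {
          scanner.close();
          table.close();
        }
      }
    }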
Regards,
Srikanth
-Original Message-
From: vishnupriyaa [mailto:vpatoff...@gmail.com]
Sent: Thursday, May
maybe you had
something more specific in mind?
On May 13, 2012, at 9:21 PM, "Srikanth P. Shreenivas"
wrote:
> There is a possibility that you may lose data, so I would not use it
> for first-class data that cannot be re-created.
> If you can derive the data from a seconda
There is a possibility that you may lose data, so I would not use it
for first-class data that cannot be re-created.
If you can derive the data from a secondary source and store it in HBase for
performance gains, then it is a viable use case.
Regards,
Srikanth
-Original Message-
mainly happens if
> the zookeeper node for META is still available.
>
> Regards
> Ram
>
> > -Original Message-
> > From: Srikanth P. Shreenivas
> > [mailto:srikanth_shreeni...@mindtree.com]
> > Sent: Wednesday, May 02, 2012 6:22 PM
> > To: user@hba
We had shifted our machines from one location to another.
After bringing the system up, when we started our Hadoop and HBase clusters,
we realized that the clocks were out of sync and the region servers would not start.
After syncing the clocks and restarting the cluster, we are observing
Did you consider using HBase Backup/Restore? We had in the past moved data
from one cluster to another using this technique.
http://hbase.apache.org/book/ops.backup.html
regards,
Srikanth
-Original Message-
From: Manuel de Ferran [mailto:manuel.defer...@gmail.com]
Sent: Tuesday, April
So, is it safe to assume that Scan queries with a TimeRange will perform
well and read only the necessary portions of the table instead of doing a full
table scan?
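By that I mean something like the following (a minimal sketch; the table name and range values are placeholders):

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;

    public class TimeRangeScan {
      public static void main(String[] args) throws IOException {
        HTable table = new HTable(HBaseConfiguration.create(), "mytable"); // placeholder
        long start = 1310000000000L, end = 1310600000000L; // example millisecond range
        Scan scan = new Scan();
        scan.setTimeRange(start, end); // returns cells with timestamps in [start, end)
        ResultScanner scanner = table.getScanner(scan);
        for (Result r : scanner) {
          // rows with a cell created/updated in the range show up here
        }
        scanner.close();
        table.close();
      }
    }

My understanding (please correct me if wrong) is that the time range acts as a filter plus a store-file-level hint rather than an index.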
I have run into a situation wherein I would like to find all rows that got
created/updated during a time range.
I was hop
Compactions will not alter region boundaries, except in the
case of splits where a compaction is necessary to filter out any Rows from
the parent region that are no longer applicable to the daughter region.
On 11/22/11 9:04 AM, "Srikanth P. Shreenivas"
wrote:
>Will major compact
Will major compactions take care of merging "older" regions or adding more
key/values to them as the number of regions grows?
Regards,
Srikanth
-Original Message-
From: Amandeep Khurana [mailto:ama...@gmail.com]
Sent: Monday, November 21, 2011 7:25 AM
To: user@hbase.apache.org
Subject: Re: Reg
How does that design help you do something?
Dave
-Original Message-
From: Srikanth P. Shreenivas [mailto:srikanth_shreeni...@mindtree.com]
Sent: Thursday, September 01, 2011 11:53 AM
To: user@hbase.apache.org
Subject: Tall-Narrow vs. Flat-Wide Tables
Hi,
HBase: The Definiti
Hi,
HBase: The Definitive Guide discusses Tall-Narrow vs. Flat-Wide tables in
chapter 9. (http://ofps.oreilly.com/titles/9781449396107/advanced.html)
It seems to propose that tall-narrow tables (more rows, fewer columns) are the
better design. One of the issues it discusses with "flat-wide" ta
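As a concrete illustration (my own sketch, not taken from the book; the table, family, and key layout are assumptions):

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class KeyDesign {
      public static void main(String[] args) throws IOException {
        HTable table = new HTable(HBaseConfiguration.create(), "useractivity"); // placeholder
        long ts = System.currentTimeMillis();

        // Tall-narrow: the timestamp is folded into the row key, one row per event.
        // (Long.MAX_VALUE - ts) makes newer events sort first.
        byte[] tallKey = Bytes.add(Bytes.toBytes("user123-"),
            Bytes.toBytes(Long.MAX_VALUE - ts));
        Put tall = new Put(tallKey);
        tall.add(Bytes.toBytes("pv"), Bytes.toBytes("uri"), Bytes.toBytes("/home"));

        // Flat-wide: one row per user, one column per event.
        Put wide = new Put(Bytes.toBytes("user123"));
        wide.add(Bytes.toBytes("pv"), Bytes.toBytes(Long.toString(ts)), Bytes.toBytes("/home"));

        table.put(tall);
        table.put(wide); // in practice these would go to differently-designed tables
        table.close();
      }
    }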
Hi Jimson,
In my experience, I have observed that as you increase the number of threads,
gets/puts start taking more time.
The reason is that the same TCP connection is used for all the gets/puts from a
single JVM; all requests are multiplexed on the same connection.
Hence, your example of gets t
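One workaround that is sometimes suggested (a sketch only, and hedged: whether it helps depends on how your client version keys connections inside HConnectionManager) is to give each thread its own Configuration instance:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;

    public class PerThreadTable implements Runnable {
      public void run() {
        try {
          // A fresh Configuration per thread; in some client versions
          // HConnectionManager keys connections off the Configuration,
          // so distinct instances can end up on distinct TCP connections.
          Configuration conf = HBaseConfiguration.create();
          HTable table = new HTable(conf, "mytable"); // placeholder name
          // ... issue this thread's gets/puts on its own HTable ...
          table.close();
        } catch (IOException e) {
          e.printStackTrace();
        }
      }
    }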
-Original Message-
From: Srikanth P. Shreenivas [mailto:srikanth_shreeni...@mindtree.com]
Sent: Saturday, August 20, 2011 6:27 PM
To: user@hbase.apache.org
Subject: RE: Query regarding HTable.get and timeouts
Further in this investigation, we enabled the debug logs on client side.
We are
I appreciate all the
inputs and suggestions.
-Original Message-
From: Srikanth P. Shreenivas
Sent: Monday, August 22, 2011 3:43 PM
To: user@hbase.apache.org
Subject: RE: Query regarding HTable.get and timeouts
Yes, DC1AuthDFSC1D3 hosts the root region. It is also region server 3.
there's a firewall in
between the client and the cluster; also, I would make sure that the
client is running the same version as the server.
J-D
On Sat, Aug 20, 2011 at 5:56 AM, Srikanth P. Shreenivas
wrote:
> Further in this investigation, we enabled the debug logs on client side.
>
>
will be greatly appreciated.
Thanks a lot,
Srikanth
-Original Message-
From: Srikanth P. Shreenivas
Sent: Saturday, August 20, 2011 1:57 AM
To: user@hbase.apache.org
Subject: RE: Query regarding HTable.get and timeouts
I did some tests today. In our QA setup, we don't see any issues
how should we go about investigating this
issue, that would be of real help.
Regards,
Srikanth.
From: Srikanth P. Shreenivas
Sent: Friday, August 19, 2011 12:39 AM
To: user@hbase.apache.org
Subject: RE: Query regarding HTable.get and timeouts
My apologies, I
be 10 (the default), and if we go with the
default value of HConstants.RETRY_BACKOFF, then the sleep time added across all
retries will be only 61 seconds, not close to 600 seconds as is the case in
our code.
Regards,
Srikanth
From: Srikanth P. Shreenivas
Sent
If it helps, I tried the LZO setup on CDH3 on my Ubuntu machine.
I have documented the steps here; they should work for others too.
http://www.srikanthps.com/2011/08/configuring-lzo-compression-for-cdh3.html
Regards,
Srikanth
From: Sandy Pratt [prat...@adobe.com]
Se
Please note that the line numbers I am referencing are from this file:
https://github.com/apache/hbase/blob/trunk/src/main/java/org/apache/hadoop/hbase/client/HConnectionManager.java
From: Srikanth P. Shreenivas
Sent: Friday, August 19, 2011 12:19 AM
To
could the
container be doing the interrupting? I've not come across
client-side thread interrupts before.
St.Ack
On Thu, Aug 18, 2011 at 7:37 AM, Srikanth P. Shreenivas
wrote:
> Hi,
>
> We are experiencing an issue in our HBase cluster wherein some of the gets
> are timing out
Hi,
We are experiencing an issue in our HBase cluster wherein some of the gets are
timing out at:
java.io.IOException: Giving up trying to get region server: thread is
interrupted.
at
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionServerWi
Refer : http://wiki.apache.org/hadoop/Hbase/FAQ#A6
-Original Message-
From: shanmuganathan.r [mailto:shanmuganatha...@zohocorp.com]
Sent: Wednesday, July 27, 2011 4:43 PM
To: user@hbase.apache.org
Subject: how to solve this problem?
Hi All,
I am running the hbase in fully d
There are a few ways to look at it:
1) Let's say your row key is a LONG value that keeps incrementing with every
write; then you can use the HTable.getRowOrBefore method multiple times to get
hold of the latest N entries. For the first call, you can pass Long.MAX_VALUE
as the row key, and HBase will retur
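A minimal sketch of that loop (table and family names are placeholders; this uses the 0.20/0.90-era getRowOrBefore API, since removed, and assumes non-negative keys so big-endian byte order matches numeric order):

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class LatestN {
      public static void main(String[] args) throws IOException {
        HTable table = new HTable(HBaseConfiguration.create(), "events"); // placeholder
        byte[] family = Bytes.toBytes("d");                               // placeholder
        int n = 10; // how many of the latest entries to fetch
        byte[] key = Bytes.toBytes(Long.MAX_VALUE);
        for (int i = 0; i < n; i++) {
          Result r = table.getRowOrBefore(key, family); // closest row at or before key
          if (r == null || r.isEmpty()) break;
          // ... process r ...
          key = Bytes.toBytes(Bytes.toLong(r.getRow()) - 1); // step just below the found row
        }
        table.close();
      }
    }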
Hi,
We are trying to design an HBase table, and we seem to be tending towards
packing 3 fields into our row key, as we need to be able to do random access
using these 3 fields.
The row key is turning out to be around 93 characters in size.
Is it okay to use 100-character-long row keys? It wil
above mentioned FAQ.
>This seems to have resolved the issue. I ran the test app for 20 minutes
>with no read timeouts.
>
>
>Thanks for all the help.
>
>Regards,
>Srikanth
>
>
>
>-Original Message-
>From: Srikanth P. Shreenivas
>Sent: Sunday, July
issue. I ran the test app for 20 minutes with
no read timeouts.
Thanks for all the help.
Regards,
Srikanth
-Original Message-
From: Srikanth P. Shreenivas
Sent: Sunday, July 10, 2011 5:20 PM
To: user@hbase.apache.org
Subject: RE: HBase Read and Write Issues in Mutlithreaded
> 3c.com3 56
>
> 2011-07-02 -> so on.
>
> I hope this would make my doubt a little more clearer.
>
> Thanks,
> Narayanan
>
>
> On Mon, Jul 11, 2011 at 2:59 PM, Srikanth P. Shreenivas <
> srikanth_shree
of records)?
Is the setTimeRange method of Get and Scan meant to do this? If so, why am I
not getting all the column values for a particular row key?
Regards,
Narayanan
On Mon, Jul 11, 2011 at 12:28 PM, Srikanth P. Shreenivas <
srikanth_shreeni...@mindtree.com> wrote:
> Hi Narayanan,
&
Hi Narayanan,
I think you need to create the table with versions enabled.
For example, if you need to store 5 versions, you can use create like this:
hbase> create 'useractivity', {NAME => 'pageviews', VERSIONS => 5}
hbase> put 'useractivity', 'userid1', 'pageviews:uri',
'http://www.allaboutda
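To read the versions back from Java, something like this sketch should work (assuming the table above and the 0.90-era client API):

    import java.io.IOException;
    import java.util.List;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.KeyValue;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ReadVersions {
      public static void main(String[] args) throws IOException {
        HTable table = new HTable(HBaseConfiguration.create(), "useractivity");
        Get get = new Get(Bytes.toBytes("userid1"));
        get.setMaxVersions(5); // ask for up to 5 versions, matching the table setting
        Result result = table.get(get);
        List<KeyValue> versions =
            result.getColumn(Bytes.toBytes("pageviews"), Bytes.toBytes("uri"));
        for (KeyValue kv : versions) { // newest first
          System.out.println(kv.getTimestamp() + " -> " + Bytes.toString(kv.getValue()));
        }
        table.close();
      }
    }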
Once you've done that, can you check its logs? See if
you can figure out anything about why it hangs.
Thanks,
St.Ack
On Sat, Jul 9, 2011 at 6:14 AM, Srikanth P. Shreenivas
wrote:
> Hi St.Ack,
>
> We upgraded to CDH 3 (hadoop-0.20-0.20.2+923.21-1.noarch.rpm,
> h
Go to CDH3 if you can. CDH2 is also old.
St.Ack
On Wed, Jun 29, 2011 at 7:15 AM, Srikanth P. Shreenivas
wrote:
> Thanks St. Ack for the inputs.
>
> Will upgrading to CDH3 help or is there a version within CDH2 that you
> recommend we should upgrade to?
>
> Regards,
&g
This will be very helpful. Also, wouldn't it be a good idea to add close() to
HTablePoolEnhanced, so that the underlying HTables can be closed by client apps
as part of application shutdown?
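Something like this rough sketch is what I have in mind (I am guessing at the pool's internals, so the field below is hypothetical):

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;
    import org.apache.hadoop.hbase.client.HTableInterface;

    // Rough sketch of the idea; 'pooled' is a hypothetical stand-in for
    // however HTablePoolEnhanced actually tracks its HTable instances.
    public class HTablePoolEnhancedSketch {
      private final List<HTableInterface> pooled = new ArrayList<HTableInterface>();

      public void close() throws IOException {
        for (HTableInterface t : pooled) {
          t.close(); // release the underlying resources at app shutdown
        }
        pooled.clear();
      }
    }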
Regards,
Srikanth
-Original Message-
From: Daniel Iancu [mailto:daniel.ia...@1and1.ro]
Sent: Friday, J
It's probably struggling, and that's
why requests are not going through -- or the client missed the fact
that the region moved (all stuff that should work better in the latest
HBase).
St.Ack
On Tue, Jun 28, 2011 at 9:51 PM, Srikanth P. Shreenivas
wrote:
> Hi,
>
> We are using HBase 0.20.3 (hbase
at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:609)
at org.apache.hadoop.hbase.client.HTable.put(HTable.java:474)
Any inputs on why this is happening, or how to rectify it, will be of immense
help.
Thanks,
Srikanth
Srikanth P Shreenivas|Principal Consultant | MindTree Ltd.|Global Village, RVCE
Post, Mysore Road, Bang