we do this for almost all our tables
On May 5, 2015 11:05 AM, jeremy p athomewithagroove...@gmail.com wrote:
Thank you for your response!
So I guess 'salt' is a bit of a misnomer. What I used to do is this:
1) Say that my key value is something like '1234foobar'
2) I obtain the hash of
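
A minimal java sketch of this kind of hash-prefix salting, assuming step 2 prepends a short hash of the natural key; the MD5 digest and the 16-bucket count here are my own placeholders, not necessarily what was actually used:

    import java.security.MessageDigest;
    import org.apache.hadoop.hbase.util.Bytes;

    public class SaltedKey {
        // prepend one byte derived from a hash of the natural key, so writes
        // spread across regions while readers can recompute the same prefix
        static byte[] salted(String naturalKey) throws Exception {
            byte[] key = Bytes.toBytes(naturalKey);
            byte[] digest = MessageDigest.getInstance("MD5").digest(key);
            byte bucket = (byte) ((digest[0] & 0xff) % 16); // 16 buckets, illustrative
            return Bytes.add(new byte[] { bucket }, key);
        }

        public static void main(String[] args) throws Exception {
            System.out.println(Bytes.toStringBinary(salted("1234foobar")));
        }
    }

The usual trade-off with this scheme is that a scan over the natural key then has to fan out over all 16 prefixes.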
given that CDH4 is hbase 0.94 i don't believe nobody is using it. for our
clients the majority is on 0.94 (versus 0.96 and up).
so i am going with 1), it's very stable!
On Mon, Dec 15, 2014 at 1:53 PM, lars hofhansl la...@apache.org wrote:
Over the past few months the rate of change into
we do these jobs in cascading/scalding
On Apr 9, 2014 5:56 AM, Henning Blohm henning.bl...@zfabrik.de wrote:
We operate a solution that stores large amounts of data in HBase that needs
to be available for online access.
For efficient scanning, there are three pieces of data encoded in row
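
A sketch of what such a composite row key can look like; the three fields (entityId, recordType, timestamp) and their widths are placeholders, not the actual encoding described in the original message:

    import org.apache.hadoop.hbase.util.Bytes;

    public class CompositeRowKey {
        // fixed-width fields, most significant first, so a prefix scan on the
        // leading field(s) stays contiguous; the timestamp is inverted so the
        // newest entries sort first
        static byte[] rowKey(long entityId, byte recordType, long timestamp) {
            return Bytes.add(
                Bytes.toBytes(entityId),                     // 8 bytes
                new byte[] { recordType },                   // 1 byte
                Bytes.toBytes(Long.MAX_VALUE - timestamp));  // 8 bytes
        }
    }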
do i understand it correctly that it is safe to have 2 hbase clusters
replicate to each other (so in both directions)?
and as long as an update (put/delete) arrives at only one cluster this
setup will function correctly?
not entirely sure about the situation where updates get routed to both
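
For reference, a sketch of wiring up one direction of such a peering with the 0.94/0.96-era client API; the peer id, ZooKeeper quorum, and table/family names are made up, the mirror-image addPeer would be run against the other cluster, and hbase.replication has to be enabled in the site config on both sides:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.client.replication.ReplicationAdmin;

    public class PeerClusterSetup {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create(); // points at cluster A

            // register cluster B as a replication peer of cluster A
            ReplicationAdmin repAdmin = new ReplicationAdmin(conf);
            repAdmin.addPeer("1", "zk-b.example.com:2181:/hbase");

            // mark the column family for replication (scope 1 = replicate)
            HBaseAdmin admin = new HBaseAdmin(conf);
            HColumnDescriptor cf = new HColumnDescriptor("d");
            cf.setScope(1);
            admin.disableTable("mytable");
            admin.modifyColumn("mytable", cf);
            admin.enableTable("mytable");
            admin.close();
        }
    }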
we had a master go down on an hbase 0.96 cluster with HA. the second master
took over and the hbase cluster continued to function. great! however
hbase-rest got stuck in a loop spitting out error messages. see below.
is something like hbase-rest, which uses the hbase client api, supposed to
survive
stargate can be run distributed behind a load balancer to scale out. however
the scanners are stateful i think, so i would suggest staying
away from those if you load balance.
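
As far as I understand, single-row reads through the gateway are stateless, so any instance behind the balancer can serve them, while a scanner is a resource created on (and held by) one particular instance. A sketch of the stateless case, with a made-up balancer hostname and table:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class RestRowGet {
        public static void main(String[] args) throws Exception {
            // a plain row GET carries no server-side state, so it can hit any instance
            URL url = new URL("http://rest-lb.example.com:8080/mytable/row1");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestProperty("Accept", "application/json");
            BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
            in.close();
            conn.disconnect();
        }
    }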
On Wed, Feb 5, 2014 at 10:33 AM, jeevi tesh jeevitesh...@gmail.com wrote:
Hi,
Planning to use hbase
if compression is already enabled on a column family, do i understand it
correctly that the main benefit of DATA_BLOCK_ENCODING is in memory?
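
My understanding is that, unlike compression (which is undone before blocks go into the block cache by default), the data block encoding is kept while blocks sit in the cache, which is where the extra in-memory benefit comes from. For reference, enabling an encoding on an existing family looks roughly like this with the old HBaseAdmin API; the table and family names are placeholders and FAST_DIFF is just one of the available encodings:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;

    public class EnableBlockEncoding {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            HBaseAdmin admin = new HBaseAdmin(conf);

            // set FAST_DIFF encoding on family 'd' of table 'mytable'
            HColumnDescriptor cf = new HColumnDescriptor("d");
            cf.setDataBlockEncoding(DataBlockEncoding.FAST_DIFF);

            admin.disableTable("mytable");
            admin.modifyColumn("mytable", cf);
            admin.enableTable("mytable");
            admin.close();
        }
    }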
On Mon, Jan 27, 2014 at 6:02 PM, Nick Xie nick.xie.had...@gmail.com wrote:
Thanks all for the information. Appreciated!! I'll take a look and try.
https://issues.apache.org/jira/browse/HBASE-10112
On Sun, Dec 8, 2013 at 1:07 AM, Ted Yu yuzhih...@gmail.com wrote:
Koert:
Thanks for reporting this issue.
Mind filing a JIRA?
Cheers
On Sun, Dec 8, 2013 at 2:01 AM, Koert Kuipers ko...@tresata.com wrote:
i am trying to use maxValues
are there downsides if we were to add all these operations to the
writeAsyncBuffer? are there any usages of HTable.put that rely on it being
sent off instead of being put in a buffer?
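
For context, a sketch of the client-side buffering being discussed, against the 0.94/0.96-era HTable API; the table name, family, and buffer size are made up:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class BufferedPuts {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            HTable table = new HTable(conf, "mytable");

            // with autoFlush off, put() only adds to the client-side write buffer;
            // the buffer is sent when it reaches the configured size or on flushCommits()
            table.setAutoFlush(false);
            table.setWriteBufferSize(2 * 1024 * 1024); // 2 MB

            for (int i = 0; i < 10000; i++) {
                Put p = new Put(Bytes.toBytes("row-" + i));
                p.add(Bytes.toBytes("d"), Bytes.toBytes("q"), Bytes.toBytes("v" + i));
                table.put(p); // buffered, not necessarily sent yet
            }

            table.flushCommits(); // push whatever is still buffered
            table.close();
        }
    }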
On Mon, Dec 9, 2013 at 5:38 PM, Stack st...@duboce.net wrote:
On Sat, Dec 7, 2013 at 8:52 AM, Koert Kuipers ko
puts special.
On Sat, Dec 7, 2013 at 8:40 AM, Stack st...@duboce.net wrote:
On Fri, Dec 6, 2013 at 3:06 PM, Koert Kuipers ko...@tresata.com wrote:
i noticed that puts are put into a buffer (writeAsyncBuffer) that gets
flushed if it gets to a certain size.
writeAsyncBuffer can take objects
i am trying to use maxValues with a globbed row resource in stargate.
from looking at the source code one has to do something like
table/row/column(s)/timestamp(s)/?n=1
(except the ?n=1 piece must be urlencoded)
however i cannot get the n=1 piece to work. i get this stacktrace:
Problem
hello all,
i was just taking a look at HTable source code to get a bit more
understanding about hbase from a client perspective.
i noticed that puts are put into a buffer (writeAsyncBuffer) that gets
flushed if it gets to a certain size.
writeAsyncBuffer can take objects of type Row, which
From: Koert Kuipers ko...@tresata.com
To: user@hbase.apache.org; vrodio...@carrieriq.com
Sent: Thursday, August 22, 2013 12:30 PM
Subject: Re: one column family but lots of tables
if that is the case, how come people keep warning about limiting the
number