Hi Usman
I am going through the same thing.
> > Unless you have a reason to use wide rows (e.g. you need atomic
> > updates on multiple points within one row) I recommend using a tall
> > table, since large rows will become unmanageable, especially if they
> > keep growing forever (and HBase cann
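The tall-table layout from that advice looks roughly like this (a sketch;
"sensor42", the "d" family and "v" qualifier are made-up names, and "table"
is assumed to be an open HTable):
  // One row per data point (key = entity id + timestamp)
  // instead of one ever-growing wide row per entity.
  byte[] rowKey = Bytes.add(Bytes.toBytes("sensor42-"),
                            Bytes.toBytes(timestamp));
  Put p = new Put(rowKey);
  p.add(Bytes.toBytes("d"), Bytes.toBytes("v"), Bytes.toBytes(value));
  table.put(p);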
HTD doesn't have the concept of autoflush; it's set on a per-HTable
instance only.
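In code that means something like this (a sketch against the 0.92-era
client; the cast assumes the pool hands back a plain HTable):
  // Autoflush lives on the table handle, not on HTableDescriptor.
  HTableInterface t = pool.getTable("mytable");
  if (t instanceof HTable) {
    ((HTable) t).setAutoFlush(false); // buffer puts client-side
  }
  // ... do puts ...
  t.flushCommits(); // push whatever is still buffered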
J-D
On Tue, Oct 9, 2012 at 3:51 PM, Mohit Anchlia wrote:
> I am using the 0.92.1 HTableDescriptor and I don't see a setAutoFlush method. I am
> using HTablePool
Hi,
In my prePut coprocessor I would like to get the old value of the row being
put. Right now I am creating an HTable instance and using the Get interface;
function-wise it works fine. Given that the row is physically in the same
region as the coprocessor, is there a lightweight approach to doing that?
Thanks!
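One lightweight option, sketched below, is to read through the region
handle the coprocessor already has instead of opening an HTable (0.92-era
API; the two-argument HRegion.get() signature is from memory, so treat it
as an assumption):
  @Override
  public void prePut(ObserverContext<RegionCoprocessorEnvironment> e,
                     Put put, WALEdit edit, boolean writeToWAL)
      throws IOException {
    // Read the current row directly from the local region:
    // no RPC and no extra HTable instance.
    Get get = new Get(put.getRow());
    Result old = e.getEnvironment().getRegion().get(get, null);
    // ... compare 'old' against 'put' here ...
  }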
HBase n00b here, sorry for the level of the questions I'm about to
spit out. I have gone hunting for answers for the past few days.
I have an HBase cluster that was recently upgraded from 0.20.6 to 0.92.1. This
was done just as I was starting, so I'm not clear on the state of the
HBase prior to the upgrade
It looks as if the RS is able to take the load, but at some point the memory
buffer on the server fills up and it slows everything down.
Some interesting numbers I am seeing: memstore size of 50MB,
fssynclatency_num_ops = 300k, fswritelatency = 180k
On Tue, Oct 9, 2012 at 11:03 AM, Mohit Anchlia wrote:
> Th
Ah, it's not only replication that does that; if you use CopyTable
you'll have the same issue. See also
https://issues.apache.org/jira/browse/HBASE-4614
J-D
On Tue, Oct 9, 2012 at 1:05 AM, Placido Revilla
wrote:
> The problem turned out to be that you cannot specify the master zookeeper
> ensembl
There are 2 CFs on 2 separate region servers. And yes, I have not pre-split
the regions, as I was told that we should let HBase handle that
automatically.
Is there a way to set autoflush when using HTableDescriptor?
On Tue, Oct 9, 2012 at 10:50 AM, Doug Meil wrote:
>
> So you're running on a single
So you're running on a single regionserver?
On 10/9/12 1:44 PM, "Mohit Anchlia" wrote:
>I am using HTableInterface from a pool but I don't see any setAutoFlush
>method. I am using the 0.92.1 jar.
>
>Also, how can I see if the RS is getting overloaded? I looked at the UI and I
>don't see anything obvio
I am using HTableInterface from a pool but I don't see any setAutoFlush
method. I am using the 0.92.1 jar.
Also, how can I see if the RS is getting overloaded? I looked at the UI and I
don't see anything obvious:
requestsPerSecond=0, numberOfOnlineRegions=1, numberOfStores=1,
numberOfStorefiles=1, storefile
It's one of those "it depends" answers.
See this first…
http://hbase.apache.org/book.html#perf.writing
… Additionally, one thing to understand is where you are writing data.
Either keep track of the requests per RS over the period (e.g., via the web
interface), or you can also track it on the clien
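Client-side, that per-RS tracking can look like this (a sketch using the
0.92 admin API):
  HBaseAdmin admin = new HBaseAdmin(conf);
  ClusterStatus status = admin.getClusterStatus();
  for (ServerName sn : status.getServers()) {
    HServerLoad load = status.getLoad(sn);
    // Poll this periodically and diff to spot a region server
    // taking a disproportionate share of the writes.
    System.out.println(sn.getHostname() + " requests="
        + load.getNumberOfRequests());
  }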
Hi Erman,
It's normal.
At t=1 you insert val1.
At t=2 you insert val2.
At t=3 you put a marker saying the row1:farm1:q1 values are deleted.
When you try to read the values, HBase will hide everything that is before
t=3 because of the marker, which means you will see neither val2 nor
val1.
I think
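In client terms, the timeline above is roughly (a sketch; row, family and
qualifier names taken from the question, "table" assumed open):
  Put p = new Put(Bytes.toBytes("row1"));
  p.add(Bytes.toBytes("farm1"), Bytes.toBytes("q1"), 1L, Bytes.toBytes("val1"));
  table.put(p); // t=1
  p = new Put(Bytes.toBytes("row1"));
  p.add(Bytes.toBytes("farm1"), Bytes.toBytes("q1"), 2L, Bytes.toBytes("val2"));
  table.put(p); // t=2
  Delete d = new Delete(Bytes.toBytes("row1"));
  d.deleteColumns(Bytes.toBytes("farm1"), Bytes.toBytes("q1"), 3L);
  table.delete(d); // t=3: marker covering everything at ts <= 3
  // A Get now returns nothing: the marker masks val1 and val2
  // until it is purged by a major compaction.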
Hi,
I have started using the HBase REST Java client as part of my project. I
see that it may have a problem with the Delete operation.
For a given Delete object, if you apply deleteColumn(family, qualifier)
on it, all versions of the matching qualifier are deleted instead of only
the latest value.
In order to recr
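The call in question looks like this (a sketch; "cf"/"q" are placeholder
names and remoteTable is assumed to be a rest.client.RemoteHTable):
  // Expected: only the newest version of cf:q is deleted.
  // Observed through the REST client: all versions disappear.
  Delete d = new Delete(Bytes.toBytes("row1"));
  d.deleteColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"));
  remoteTable.delete(d);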
The problem turned out to be that you cannot specify the master ZooKeeper
ensemble in a zoo.cfg file (very useful if you colocate HBase and
ZooKeeper) because the replication setup code is borked (the zoo.cfg config
overrides the hack done in the code to create a config object equivalent to
the mas
Chris,
In this case nothing scary actually happens.
* If partitions are the same, then HBase simply copies all your HFiles
during the bulk-loading procedure.
* If partitions have changed, then it still copies them, but in addition
some of these files (according to the number of split regions) would be al
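For reference, the bulk-load step being discussed is the one driven by
LoadIncrementalHFiles (a sketch; the path and table name are made up):
  Configuration conf = HBaseConfiguration.create();
  LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
  // Moves the HFiles into the matching regions; any file that straddles
  // a region boundary is split first.
  loader.doBulkLoad(new Path("/out/hfiles"), new HTable(conf, "mytable"));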