Hi biodata:
You can try scan.setBatch() or another filter to limit the number of
columns returned.
This is because there is a very large row in your table; when you try to
retrieve it, an OOM will happen.
As far as I can see, there is no other method to solve this problem.
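The batching advice above can be sketched as follows. This is a minimal example, assuming an already-open HBase Connection and a hypothetical table name "mytable"; it is a sketch of the client API, not a drop-in fix:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

// Sketch: cap the number of columns returned per Result so a single
// very wide row does not have to fit in client memory all at once.
public class BatchScanExample {
    static void scanWideRows(Connection conn) throws Exception {
        try (Table table = conn.getTable(TableName.valueOf("mytable"))) {
            Scan scan = new Scan();
            scan.setBatch(100);   // at most 100 cells of a row per Result
            scan.setCaching(10);  // Results fetched per RPC round trip
            try (ResultScanner scanner = table.getScanner(scan)) {
                for (Result r : scanner) {
                    // a wide row arrives as several partial Results here
                    process(r);
                }
            }
        }
    }

    static void process(Result r) { /* application logic */ }
}
```

With setBatch in effect a wide row is split across several Results, so code that assumes one Result per row needs adjusting.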
Hi all:
Today I met a problem. There is a very large table in the HBase
cluster; this table had never run a major compaction, and I triggered a
major compaction.
The major compaction will take a very, very long time, and meanwhile the
table is still being written to.
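For reference, a major compaction of a single table can be triggered from the HBase shell like this (the table name "mytable" is a placeholder):

```
# trigger a major compaction of one table via the HBase shell
echo "major_compact 'mytable'" | hbase shell
```

This only queues the compaction; progress still has to be watched in the region server UI or logs.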
Hi all:
There are two conditions that caused the region server crashes we met:
1. NIO: out of direct memory
2. ZooKeeper session timeout
You can find the reason in the region server log (or .out file) or in the GC log.
If it is "out of direct memory", you will see "kill -
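Both causes above have standard knobs. A sketch of the usual mitigations, where the values are only illustrations and must be sized for the actual machine:

```
# hbase-env.sh: cap NIO direct memory for the region server JVM so
# exhaustion surfaces as an error instead of an OS-level kill
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:MaxDirectMemorySize=2g"
```

For the ZooKeeper case, zookeeper.session.timeout in hbase-site.xml can be raised (e.g. to 90000 ms) so that long GC pauses are less likely to expire the session, though taming the GC pauses themselves is the real fix.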
At 12:08 AM, Stack <st...@duboce.net> wrote:
On Wed, Jun 1, 2016 at 3:03 AM, 吴国泉wgq
<wgq...@qunar.com> wrote:
Hi all:
1. Is the region always on the same machine, or do you see this phenomenon on
more than one machine?
Not always on the same machine, but always on the machine which holds the
first region of the table (the only table whose first region cannot flush; when we
restart the regionserver, the first
Hi all:
I wonder, if I increase the size of the region, will it have a bad
effect on the latency of read operations?
The default region size is 10 GB, as I saw in the official guide.
I configured 100 GB per region on my cluster,
because my RegionServer hardware