One thing to keep in mind: the 128 MB memstore size is the memstore's heap size, not the sum of all cells' key + value sizes. When a Cell is added to the memstore there is also a large overhead (roughly 100 bytes of Java overhead per cell). Your cells are so small that more than half of the memstore's heap usage may be this per-cell overhead rather than actual data.
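As a rough back-of-the-envelope sketch of why this happens (the ~100-byte per-cell overhead is the figure mentioned above; the 30-byte key+value payload is just an assumed example, not a measured value):

```java
public class MemstoreOverheadEstimate {
    public static void main(String[] args) {
        long memstoreBytes = 128L * 1024 * 1024; // flush threshold, measured as heap size
        long perCellOverhead = 100;              // approximate Java/memstore overhead per cell
        long keyValueBytes = 30;                 // hypothetical small cell payload

        // Heap accounting charges overhead + payload per cell, so the memstore
        // fills up after far fewer bytes of actual data than 128 MB.
        long cells = memstoreBytes / (perCellOverhead + keyValueBytes);
        long rawDataBytes = cells * keyValueBytes;

        System.out.printf("cells ~%d, raw key+value data ~%.1f MB%n",
                cells, rawDataBytes / 1048576.0);
        // Under these assumptions, only ~30 MB of real data is flushed;
        // compression in the HFile can shrink that further to a few MB.
    }
}
```

With these numbers the flushed data is well under a quarter of the 128 MB heap figure before compression is even applied.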
We have only one column family.
Table properties:
{NAME => 't', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW',
REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'SNAPPY',
MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '6216900',
IN_MEMORY => 'false', BLOCKCACHE =>
Do you have compression enabled, and is your data highly compressible?
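One quick way to gauge how compressible your cell values are is to compress a representative sample in a small standalone program. The sketch below uses java.util.zip.Deflater from the JDK as a stand-in for SNAPPY (which needs an external library); the sample value is hypothetical, so substitute your own data:

```java
import java.util.zip.Deflater;

public class CompressibilityCheck {
    public static void main(String[] args) {
        // Hypothetical repetitive sample; replace with real cell values.
        byte[] sample = "metric=cpu.user,host=web01,value=0.42\n"
                .repeat(1000).getBytes();

        Deflater deflater = new Deflater();
        deflater.setInput(sample);
        deflater.finish();
        byte[] out = new byte[sample.length];
        int compressed = deflater.deflate(out);
        deflater.end();

        // A ratio well below 1.0 means the flushed, compressed HFiles
        // will be much smaller than the raw data in the memstore.
        System.out.printf("raw=%d compressed=%d ratio=%.2f%n",
                sample.length, compressed, (double) compressed / sample.length);
    }
}
```

Highly repetitive data like this compresses to a small fraction of its raw size, which compounds with the per-cell heap overhead to explain very small flush files.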
On Mon, Mar 27, 2017 at 6:26 AM, Hef wrote:
> Hi,
> Does anyone have an idea why most of the files flushed from my 128 MB
> memstore are only a few MB?
>
> There are a lot of log lines like the one below:
>
> 2017-03-27
How many column families does your table have?
Which HBase release are you using?
Can you pastebin more of the server log around the time of the flush?
Thanks
On Mon, Mar 27, 2017 at 6:26 AM, Hef wrote:
> Hi,
> Does anyone have an idea why most of my 128MB memstore
Hi,
Does anyone have an idea why most of the files flushed from my 128 MB
memstore are only a few MB?
There are a lot of log lines like the one below:
2017-03-27 13:10:25,064 INFO
org.apache.hadoop.hbase.regionserver.DefaultStoreFlusher: Flushed,
sequenceid=13042496, memsize=128.0 M, hasBloomFilter=true, into