Created https://issues.apache.org/jira/browse/HBASE-3649
--- On Tue, 3/15/11, Stack wrote:
> From: Stack
> Subject: Re: Long client pauses with compression
> To: user@hbase.apache.org, apurt...@apache.org
> Date: Tuesday, March 15, 2011, 9:06 AM
> Sounds like a nice feature to have.
>
> --- On Mon, 3/14/11, Jean-Daniel Cryans wrote:
> >
> >> From: Jean-Daniel Cryans
> >> Subject: Re: Long client pauses with compression
> >> To: user@hbase.apache.org
> >> Date: Monday, March 14, 2011, 7:48 PM
> >> For the reasons I gave above...
>
> A separate compression setting for flushing? I.e. none?
Hi,
Whenever I am with clients and we design for HBase, the first thing I
do is spend a few hours explaining exactly that scenario and the
architecture behind it. As for the importing, and HBase simply lacking
a graceful degradation that works in all cases, I nowadays quickly
point to the bulk import ...
For the reasons I gave above... the puts are sometimes blocked on the
memstores which are blocked by the flusher thread which is blocked
because there's too many files to compact because the compactor is
given too many small files to compact and has to compact the same data
a bunch of times.
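The blocking chain J-D describes can be made concrete with a toy model. This is a sketch only: the class and its constants are illustrative (the numbers mirror 0.90-era defaults), not real HBase internals. Puts fill the memstore; each flush emits one store file; once `hbase.hstore.blockingStoreFiles` store files exist, flushes stall, the memstore grows to flush-size times the block multiplier, and further puts block until a compaction merges the files.

```java
// Toy model of the write-path backpressure chain described above.
// All names and numbers are illustrative, not real HBase internals.
public class BackpressureSketch {
    static final long FLUSH_SIZE_MB = 64;      // memstore flush threshold
    static final int BLOCK_MULTIPLIER = 2;     // memstore may grow to 2x before puts block
    static final int BLOCKING_STORE_FILES = 7; // flushes stall past this many files

    long memstoreMb = 0;
    int storeFiles = 0;
    long blockedPuts = 0;
    long compactions = 0;

    void put(long mb) {
        if (memstoreMb >= FLUSH_SIZE_MB * BLOCK_MULTIPLIER) {
            // Memstore is full and the flusher is stalled on too many
            // store files: the put blocks until a compaction runs.
            blockedPuts++;
            compact();
        }
        memstoreMb += mb;
        if (memstoreMb >= FLUSH_SIZE_MB && storeFiles < BLOCKING_STORE_FILES) {
            memstoreMb = 0;   // flush: the memstore becomes one new store file
            storeFiles++;
        }
    }

    void compact() {
        storeFiles = 1;       // merge every store file into one (rewrites all data)
        compactions++;
    }

    public static void main(String[] args) {
        BackpressureSketch rs = new BackpressureSketch();
        for (int i = 0; i < 5000; i++) rs.put(1); // 5000 puts of 1 MB each
        System.out.println("blocked puts: " + rs.blockedPuts
                + ", compactions: " + rs.compactions);
    }
}
```

In this model, raising the blocking-file limit and the block multiplier (as in the settings Bryan lists below) widens the safety valve, so puts stall less often while compactions catch up.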
Also, I changed the settings as described below:
hbase.hstore.blockingStoreFiles=20
hbase.hregion.memstore.block.multiplier=4
MAX_FILESIZE=512mb
MEMSTORE_FLUSHSIZE=128mb
I also created the table with 6 regions initially. Before, I wasn't creating any
regions initially. I needed to make all of these changes ...
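For reference, a sketch of how those settings would look cluster-wide in hbase-site.xml. Property names are from the 0.90 line; MAX_FILESIZE and MEMSTORE_FLUSHSIZE above were set as per-table attributes, and their site-wide equivalents are shown here instead. Values are assumptions matching the numbers above, in bytes.

```xml
<!-- hbase-site.xml: cluster-wide equivalents of the settings above -->
<property>
  <name>hbase.hstore.blockingStoreFiles</name>
  <value>20</value>
</property>
<property>
  <name>hbase.hregion.memstore.block.multiplier</name>
  <value>4</value>
</property>
<property>
  <name>hbase.hregion.max.filesize</name>
  <value>536870912</value>  <!-- 512 MB -->
</property>
<property>
  <name>hbase.hregion.memstore.flush.size</name>
  <value>134217728</value>  <!-- 128 MB -->
</property>
```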
This is very informative and helpful, I will try changing the settings and
will report back.
On Mon, Mar 14, 2011 at 11:54 AM, Jean-Daniel Cryans wrote:
> Alright so here's a preliminary report:
>
> - No compression is stable for me too, short pauses.
> - LZO gave me no problems either, generally faster than no compression.
Alright so here's a preliminary report:
- No compression is stable for me too, short pauses.
- LZO gave me no problems either, generally faster than no compression.
- GZ initially gave me weird results, but I quickly saw that I forgot
to copy over the native libs from the hadoop folder so my logs ...
Thanks for the report Bryan, I'll try your little program against one
of our 0.90.1 cluster that has similar hardware.
J-D
On Sun, Mar 13, 2011 at 1:48 PM, Bryan Keller wrote:
> If interested, I wrote a small program that demonstrates the problem
> (http://vancameron.net/HBaseInsert.zip). It uses Gradle, so you'll need that.
... on this one.
Regards
Stuart
-----Original Message-----
From: Bryan Keller [mailto:brya...@gmail.com]
Sent: 13 March 2011 20:49
To: user@hbase.apache.org
Subject: Re: Long client pauses with compression
If interested, I wrote a small program that demonstrates the problem
(http://vancameron.net/HBaseInsert.zip). It uses Gradle, so you'll need that.
If interested, I wrote a small program that demonstrates the problem
(http://vancameron.net/HBaseInsert.zip). It uses Gradle, so you'll need that.
To run, enter "gradle run".
On Mar 13, 2011, at 12:14 AM, Bryan Keller wrote:
> I am using the Java client API to write 10,000 rows with about 6000 columns each ...
I am using the Java client API to write 10,000 rows with about 6000 columns
each, via 8 threads making multiple calls to the HTable.put(List) method.
I start with an empty table with one column family and no regions pre-created.
With compression turned off, I am seeing very stable performance. A ...
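For context, the workload described above has roughly this shape. This is a self-contained sketch, not Bryan's actual program: the real code calls HTable.put(List&lt;Put&gt;) from the HBase client API, which is stubbed out here (the put() method and rowsWritten counter are stand-ins) so the threading and batching pattern runs without a cluster; the batch size and row-key format are made up.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class InsertSketch {
    static final int ROWS = 10_000;  // rows to write
    static final int COLS = 6_000;   // columns per row (carried by the real Put)
    static final int THREADS = 8;    // concurrent writer threads
    static final int BATCH = 100;    // rows per put(List) call (made-up batch size)

    static final AtomicLong rowsWritten = new AtomicLong();

    // Stand-in for HTable.put(List<Put>): the real client ships the whole
    // batch of Puts (each holding ~COLS column values) to the region servers.
    static void put(List<String> batch) {
        rowsWritten.addAndGet(batch.size());
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(THREADS);
        for (int t = 0; t < THREADS; t++) {
            final int shard = t;  // each thread writes rows r where r % THREADS == shard
            pool.execute(() -> {
                List<String> batch = new ArrayList<>();
                for (int r = shard; r < ROWS; r += THREADS) {
                    batch.add("row-" + r);  // the real Put adds COLS qualifier/value pairs
                    if (batch.size() == BATCH) {
                        put(batch);
                        batch = new ArrayList<>();
                    }
                }
                if (!batch.isEmpty()) put(batch);  // flush the final partial batch
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("rows written: " + rowsWritten.get());
        // prints "rows written: 10000"
    }
}
```

When compression is enabled, those put() calls are exactly where the pauses discussed in this thread surface: the client blocks while the memstore waits on flushes and compactions server-side.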