files. This is still higher than the expected 600 files.
It appears that after a major compaction each region eventually gets down to
the 2-file state, but other regions start adding 2 more files because of
ongoing writes, and hence the file count is almost always higher than 1200.
Thanks Andy. We might move to 0.90.3 soon.
Abhijit Pol | Senior Rocket Scientist | a...@rocketfuel.com | 408.892.3377 p
On Fri, Jun 10, 2011 at 3:18 AM, Andrew Purtell wrote:
> Stack,
>
> Aside from the other ideas you mention, this could also be HBASE-3855 if a
> lot o
Thanks for your reply, Stack. I have answers inline. By timeout % we refer
to the % of requests that exceed our acceptable latency threshold.
On Wed, Jun 8, 2011 at 11:31 PM, Stack wrote:
> What happens if you flush that table/region when it's slow (you can do
> it from the shell)? Does the latency
.doRespond(HBaseServer.java:789)
        at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1080)
Since a server restart makes things look good, might this be related to
minor compaction & the block cache?
Is there any work done or thought given to exposing bloom filters on the
client side?
We have a use case where 40-50% of lookup keys don't exist in HBase (new keys
or keys we don't care to store). We don't have strong
consistency requirements, and it seems like avoiding these ~40% of requests
going to HBase
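One way to sketch this on the application side, given the relaxed consistency requirement, is to keep an in-process Bloom filter of the keys that are actually stored and only issue the Get when the filter says the key might exist. A rough illustration using Guava's BloomFilter and the 0.90-vintage HTable API (the table name, filter sizing, and refresh strategy are made up for illustration, not anything HBase ships):

import java.nio.charset.Charset;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;

import com.google.common.hash.BloomFilter;
import com.google.common.hash.Funnels;

public class BloomGuardedReader {
  // Sized for ~100M keys at a 1% false-positive rate; rebuild it periodically
  // from the keys that are actually stored.
  private final BloomFilter<CharSequence> storedKeys =
      BloomFilter.create(Funnels.stringFunnel(Charset.forName("UTF-8")), 100000000, 0.01);
  private final HTable table;

  public BloomGuardedReader(Configuration conf) throws Exception {
    table = new HTable(conf, "mytable");              // hypothetical table name
  }

  public void onWrite(String key) {
    storedKeys.put(key);                              // keep the filter in sync with puts
  }

  public Result lookup(String key) throws Exception {
    if (!storedKeys.mightContain(key)) {
      return null;                                    // definitely absent: skip the HBase RPC
    }
    return table.get(new Get(key.getBytes("UTF-8"))); // possible hit: go to HBase as usual
  }
}

A filter miss is safe for any key that has been added to the filter (Bloom filters have no false negatives), but keys written by other clients since the last filter rebuild could be skipped, which is where the weak consistency requirement comes in.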
down from almost 3000.
>
> So this would only work if you can take down HBase for a decent amount of
> time.
>
> I wonder if you could, alternatively, run an Export job and an Import job of
> your table. Do those preserve the regions, or could you use it to bring down
> the n
We have an HBase cluster which had been serving peacefully (acceptable
throughput and latencies) for about a month (we are using the 0.89.20100926
version). This morning we wanted to set TTL to a value smaller than the
default, and Mr. Murphy struck.
(A) We disabled and altered the table with the desired TTL value (using sh
What do the GC settings in hbase-env.sh look like for you? Did you add/remove
anything from the out-of-the-box hbase-env.sh?
Try running this on the RS and watch the last column; each increment should be small:
sudo -u <hbase_user> jstat -gcutil <regionserver_pid> 1000
On Tue, Nov 9, 2010 at 10:53 AM, Stuart Smith wrote:
> Hello,
>
> I just wanted t
>
>
> If this is your setup, your HDFS namenode is bound to OOM soon.
> (The namenode's memory consumption is proportional to the number of blocks
> on HDFS)
>
>
The NN runs on the master and we have 4GB for the NN, which is good for a long
time given the number of blocks we have. The DN has 1GB, TT 512MB, and JT 1GB.
t it can allocate more RAM
> for the file cache, and a lower value makes it swap less often. So you want
> a lower value.
>
> It defaults to 60 on many Linux distributions. Try making it 0.
>
> Thanks,
> Tatsuya
>
> --
> Tatsuya Kawano
> Tokyo, Japan
>
>
> > we did swapoff -a and then updated fstab to permanently turn it off.
>
> You might not want to turn it off completely. One of the lads was
> recently talking about the horrors that can happen when there is no swap.
>
> But it sounds like you were doing over-eager swapping up to this?
>
>
http://wiki.apa
12:28 PM, Abhijit Pol wrote:
>
> > Thanks Stack.
> >
> > I think we have GC under control. We have CMS tuned to start early and
> > don't see the 'slept x, longer than y' messages in the logs anymore. We
> > also have a higher zk timeout (150 seconds); guess we can bump that up a bit
helps suicides. We observed RSs on machines with swap disabled
doing very well so far.
Also, as you suggested, we will take the odd man out. We don't have to have it
in. Our master is already a low-key machine.
--Abhi
On Sat, Oct 9, 2010 at 11:12 PM, Stack wrote:
> On Sat, Oct 9, 2010 at 1:1
We are testing with a 4-node HBase cluster, of which 3 machines are identical
with 64GB RAM and 6x1TB disks, and the 4th machine has only 16GB RAM and
2x1TB disks.
We observe (from server-side metrics) frequent latency spikes and an RS
suicide ~every 8hrs from our 4th machine.
We do have overall heap
e block cache has been populated?
>
> Any other suggestions?
>
> File an issue. We need to come up w/ a fix for this case.
>
> Thanks for writing the list,
> St.Ack
>
>
>
> On Fri, Oct 1, 2010 at 1:26 AM, Abhijit Pol wrote:
> > we are trying to read efficientl
We are trying to efficiently read a hot column family (in_memory=true,
blockcaching=true) that gets writes at say 500 qps and reads at 10,000 qps.
- as long as writes are in the memstore we get them from the memstore and it's fast
- if we have read it once it will be at least in the block cache (gets priority
d
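For completeness, a minimal sketch of declaring such a column family with the old (roughly 0.90-vintage) client API; the table and family names are hypothetical. Note that IN_MEMORY only gives the family a higher-priority slice of the LRU block cache, it does not pin its blocks in memory:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class CreateHotFamily {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);

    HColumnDescriptor cf = new HColumnDescriptor("hot");   // hypothetical family name
    cf.setInMemory(true);             // in_memory=true: high-priority block cache bucket
    cf.setBlockCacheEnabled(true);    // blockcaching=true: cache data blocks on read

    HTableDescriptor desc = new HTableDescriptor("mytable"); // hypothetical table name
    desc.addFamily(cf);
    admin.createTable(desc);
  }
}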
vidence that your new settings is
> better.
>
> -ryan
>
> On Wed, Sep 8, 2010 at 4:36 PM, Abhijit Pol wrote:
> > On HDFS side we have "dfs.block.size" set to 128MB (like our analytic
> hadoop
> > cluster) and HBASE side "hfile.min.blocksize.size" is d
ess is caching... ram ram ram.
>
> -ryan
>
> On Thu, Aug 19, 2010 at 10:15 AM, Abhijit Pol
> wrote:
> > We are using Hbase 0.20.5 drop with latest cloudera Hadoop distribution.
> >
> > - We are hitting 3 nodes Hbase cluster from a client which has 10
> >
Noticed the discussion on this thread. We filed HBASE-2939 with a patch.
On Thu, Sep 9, 2010 at 11:10 PM, tsuna wrote:
> On Thu, Sep 9, 2010 at 1:48 PM, MauMau wrote:
> > From: "tsuna"
> >> In my recent loadtests on my HBase-heavy application (be it with
> >> HBase's traditional client or with async
On the HDFS side we have "dfs.block.size" set to 128MB (like our analytic
Hadoop cluster) and on the HBase side "hfile.min.blocksize.size" is defaulted
to 64KB.
How do these parameters play a role in HBase reads and writes? Should one try
to match them?
Thanks,
Abhi
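For what it's worth, the HFile block size is a per-column-family setting and is independent of dfs.block.size (which only governs how HDFS chunks the underlying store files), so the two do not need to match. A minimal sketch of setting it explicitly, with hypothetical table/family names:

import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;

public class BlockSizeSketch {
  public static void main(String[] args) {
    // The HFile block size is the unit HBase indexes and caches; smaller blocks
    // favor random reads, larger blocks favor scans. It is unrelated to the HDFS
    // block size of the store files that hold the HFiles.
    HColumnDescriptor cf = new HColumnDescriptor("d");   // hypothetical family name
    cf.setBlocksize(64 * 1024);                          // 64KB, same as the default
    HTableDescriptor desc = new HTableDescriptor("mytable");
    desc.addFamily(cf);
  }
}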
We are using an HBase 0.20.5 drop with the latest Cloudera Hadoop distribution.
- We are hitting a 3-node HBase cluster from a client which has 10
threads, each with a thread-local copy of the HTable client object and an
established connection to the server.
- Each of the 10 threads issues 10,000 read requests of keys ran
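A rough sketch of that client pattern with the roughly 0.90-vintage API, since HTable instances of this era are not safe to share across threads; the table name and key scheme below are made up for illustration:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;

public class ReadLoadTest {
  private static final Configuration CONF = HBaseConfiguration.create();

  // HTable is not thread-safe, so each thread keeps its own instance.
  private static final ThreadLocal<HTable> TABLE = new ThreadLocal<HTable>() {
    @Override protected HTable initialValue() {
      try {
        return new HTable(CONF, "mytable");        // hypothetical table name
      } catch (Exception e) {
        throw new RuntimeException(e);
      }
    }
  };

  public static void main(String[] args) throws Exception {
    Thread[] threads = new Thread[10];
    for (int t = 0; t < threads.length; t++) {
      threads[t] = new Thread(new Runnable() {
        public void run() {
          try {
            HTable table = TABLE.get();
            for (int i = 0; i < 10000; i++) {
              // hypothetical key scheme; the real test drew its keys elsewhere
              table.get(new Get(("row-" + i).getBytes("UTF-8")));
            }
          } catch (Exception e) {
            e.printStackTrace();
          }
        }
      });
      threads[t].start();
    }
    for (Thread t : threads) {
      t.join();
    }
  }
}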