Hello Stan,
Is your bulk load trying to load data into multiple column
families?
-Anoop-
On Wed, Jul 10, 2013 at 11:13 AM, Stack wrote:
File a bug, Stan, please. Paste your log snippet and the surrounding
context of what is going on at the time. It looks broken that a bulk load
would be kept out of a lock for ten minutes or more.
Hope all is well,
St.Ack
On Mon, Jul 8, 2013 at 9:53 AM, Stanislav Barton wrote:
> Hello Michael,
>
> looking in t
Yes, as Varun said, I also decreased the block cache ratio. Because we
always do random reads, if the block cache is too large, lots of blocks get
promoted to the old generation, and then full GCs become very frequent.
On Wed, Jul 10, 2013 at 11:20 AM, Varun Sharma wrote:
Hi Suraj,
One thing I have observed is that if you have very high block cache churn,
which happens in a read-heavy workload, a full GC eventually happens because
more block cache blocks bleed into the old generation (LRU-based caching).
I have seen this happen particularly when the read load is extreme
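For anyone wanting to try the same mitigation: shrinking the block cache is
a single hbase-site.xml change. A minimal sketch, assuming the 0.94-era
default of 0.25; the 0.2 value here is only an illustration, not a number
recommended in this thread:

  <property>
    <name>hfile.block.cache.size</name>
    <!-- fraction of region server heap used by the LRU block cache; default 0.25 -->
    <value>0.2</value>
  </property>

The region servers need a restart for the new ratio to take effect.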
See http://hbase.apache.org/book.html#trouble.tools.builtin.zkcli
The command is:
delete path [version]
On Tue, Jul 9, 2013 at 6:50 PM, ch huang wrote:
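For reference, a hedged example of such a session; the znode path below
assumes the default /hbase parent and is illustrative only, not taken from
this cluster:

  $ hbase zkcli
  [zk: localhost:2181(CONNECTED) 0] delete /hbase/root-region-server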
att, old data in zookeeper prevents the master node from starting
I reinstalled HBase, and now the error info is different
13/07/10 09:17:52 WARN conf.Configuration: fs.default.name is deprecated.
Instead, use fs.defaultFS
13/07/10 09:17:54 INFO master.ServerManager: Finished waiting for region
servers count to settle; checked in 1, slept for 12831 ms, expect
Suraj,
We have heavy read and write loads. With my GC options we cannot avoid
full GCs, but we can decrease GC time greatly.
On Wed, Jul 10, 2013 at 8:05 AM, Suraj Varma wrote:
> Hi Azuryy:
> Thanks so much for sharing. This gives me a good list of tuning options to
> read more on while constru
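Azuryy's actual GC options were cut off above. Purely as an illustration of
the kind of CMS flags this thread is about - these are assumptions, not
Azuryy's settings - an hbase-env.sh entry might look like:

  export HBASE_REGIONSERVER_OPTS="-Xms8g -Xmx8g -Xmn512m \
    -XX:+UseParNewGC -XX:+UseConcMarkSweepGC \
    -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly"

Starting CMS cycles early (the occupancy-fraction flags) helps avoid
concurrent-mode failures, which are a common trigger for the full GCs being
discussed here.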
The time ranges don't match.
Can you find the log around 2013-07-09 15:47:09,067?
Cheers
On Tue, Jul 9, 2013 at 5:52 PM, ch huang wrote:
Here is the CH35 region server out log:
13/07/10 08:23:34 INFO regionserver.HRegionServer: Serving as
CH35,60020,1373364396982, RPC listening on CH35/192.168.10.35:60020,
sessionid=0x3fc29ea7490009
13/07/10 08:23:34 INFO regionserver.SplitLogWorker: SplitLogWorker
CH35,60020,1373364396982 starting
13/0
org.apache.hadoop.hbase.master.AssignmentManager: Failed assignment of
-ROOT-,,0.70236052 to serverName=CH35,60020,1372991820903,
Have you checked the region server log for CH35?
On Tue, Jul 9, 2013 at 5:35 PM, ch huang wrote:
I upgraded cdh3u4 to cdh4.3, and starting the master node has a problem.
2013-07-09 15:47:09,061 INFO
org.apache.hadoop.hbase.catalog.RootLocationEditor: Unsetting ROOT region
location in ZooKeeper
2013-07-09 15:47:09,063 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign:
master:6-0x3fa281450100bd Creating (
Thanks Anil - your setting matches our current JVM setting quite closely
(though ours is for an 8GB heap).
--Suraj
On Tue, Jul 9, 2013 at 1:53 PM, anil gupta wrote:
> Hi,
>
> I am using 34 GB heap and my use case is also Read oriented. I also do
> writes with MR. :)
> Yesterday, under decent lo
Hi Azuryy:
Thanks so much for sharing. This gives me a good list of tuning options to
read more on while constructing our GC_OPTS.
Follow-up question: Was your cluster tuned to handle read-heavy loads or
was it mixed / read-write loads? Just trying to understand what your
constraints were.
--Suraj
Hi Otis:
Thanks much for sharing this - this is really good info. I had actually
read this - the only constraint I had was that we are still on JDK6 ...
but, this would definitely be something I'll be referring to when we move to
JDK7.
--Suraj
On Mon, Jul 8, 2013 at 10:12 PM, Otis Gospodnetic wrote:
Hi Stack:
Yes, we do hit full GC in the 8G heap. I see ... so, you are saying that it
would just be linearly proportional ... so, I should expect a 3x pause
increase with 24G _if_ the full GC hits. I agree with you ... with much
more head room (and MSLAB etc. enabled by default) we shouldn't normally
Hi Bryan,
Java 1.7 from Oracle. We're running away from Java 1.6 wherever we
can. 7 has been stable for us for a long time.
Otis
--
HBase Performance Monitoring -- http://sematext.com/spm
On Tue, Jul 9, 2013 at 11:03 AM, Bryan Beaudreault
wrote:
> @Otis, are you guys running G1GC with jav
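For context on what "running G1GC" involves: G1 is switched on with a
couple of JVM flags. A minimal sketch (the values are illustrative
assumptions, not Otis's production settings):

  export HBASE_REGIONSERVER_OPTS="-XX:+UseG1GC -XX:MaxGCPauseMillis=100 \
    -XX:InitiatingHeapOccupancyPercent=65"

MaxGCPauseMillis is a pause-time goal, not a guarantee, so it is worth
validating against real GC logs.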
Hi,
I am using a 34 GB heap and my use case is also read-oriented. I also do
writes with MR. :)
Yesterday, under decent load I got pauses of 30-40 secs. Still, the RS were
not using the full 34 GB. I am thinking of doing some more tuning as I
expect the read load to increase.
Here is my GC setting
Is the HMaster process running correctly on the cluster? Between the
missing cluster ID and meta region not being available, it looks like
HMaster may not have fully initialized.
Alternately, if HMaster is running correctly, did you override the default
value for zookeeper.znode.parent in your cl
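If that value was overridden, the client configuration has to match the
master's; a minimal hbase-site.xml sketch (the /hbase-custom value is
hypothetical):

  <property>
    <name>zookeeper.znode.parent</name>
    <!-- must be identical on the HMaster and on every client; default is /hbase -->
    <value>/hbase-custom</value>
  </property>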
I'm new to HBase, and need a little guidance. I've set up a 6-node cluster,
with 3 nodes running the ZooKeeper server. The database seems to be working
from the hbase shell; I can create tables, insert, scan, etc.
But when I try to perform operations in a Java app, I hang at:
13/07/09 12:40:34
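A common cause of this kind of hang is a client that cannot see
hbase-site.xml on its classpath and silently falls back to localhost for
ZooKeeper. A minimal 0.94-era client sketch that sets the quorum
explicitly (host and table names are hypothetical):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.Get;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.client.Result;
  import org.apache.hadoop.hbase.util.Bytes;

  public class ConnectivityCheck {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      // Without hbase-site.xml on the classpath the client tries
      // localhost and appears to hang; point it at the real quorum.
      conf.set("hbase.zookeeper.quorum", "node1,node2,node3"); // hypothetical hosts
      conf.set("hbase.zookeeper.property.clientPort", "2181");
      HTable table = new HTable(conf, "mytable");              // hypothetical table
      Result result = table.get(new Get(Bytes.toBytes("row1")));
      System.out.println("get returned: " + result);
      table.close();
    }
  }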
Silly question...
Why are you trying to disable automated compaction?
And then the equally silly question... are you attempting to run full
compactions manually?
On Jul 9, 2013, at 11:41 AM, David Koch wrote:
Hello,
Thank you for your replies.
So, as suggested, I tweaked the following settings in Cloudera Manager:
hbase.hstore.compactionThreshold=1
hbase.hstore.compaction.max - I did not touch this; I tried setting it to
"0" but the minimum is 2
I can't see any compactions being launched but the job s
In our use case memory/cache is small, and we want to improve read/load
(from-disk) performance by storing HFile blocks consecutively on disk...
The idea is that if we store blocks more closely on disk, then reading a
data block from an HFile would require fewer random disk accesses.
In particular, to loo
I see from the blog post that it is java7. The question still stands
regarding using that with hbase, considering the open jira
https://issues.apache.org/jira/browse/HBASE-5261
On Tue, Jul 9, 2013 at 11:03 AM, Bryan Beaudreault wrote:
@Otis, are you guys running G1GC with java6 or java7? From what I'm reading
it seems to be more stable with better performance in java7, but I also
believe java7 is not officially supported by apache hadoop or hbase yet.
I'm wondering if many people are using java7 for hbase without issue
despite
Do you specify startTime and endTime parameters for the CopyTable job?
Cheers
On Tue, Jul 9, 2013 at 4:38 AM, David Koch wrote:
> Hello,
>
> We disabled automated major compactions by setting
> hbase.hregion.majorcompaction=0.
> This was to avoid issues during bulk import of data since compact
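For what it's worth, those parameters are plain command-line flags on the
CopyTable MapReduce job; a sketch with hypothetical epoch-millisecond
timestamps and table names:

  $ hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
      --starttime=1373328000000 --endtime=1373414400000 \
      --new.name=mytable_copy mytable

Only cells whose timestamps fall inside [starttime, endtime) are copied.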
You should be able to limit what JM describes by tuning the following two
configs:
hbase.hstore.compactionThreshold
hbase.hstore.compaction.max
Beware of this property as well when tuning the above so you don't
accidentally cause blocking of flushes, though I imagine you would be
tuning down not
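To make that concrete, a hedged hbase-site.xml sketch; the values are
illustrative, and the defaults noted in the comments are from the 0.94
line:

  <property>
    <name>hbase.hstore.compactionThreshold</name>
    <!-- StoreFiles needed before a minor compaction is considered; default 3 -->
    <value>5</value>
  </property>
  <property>
    <name>hbase.hstore.compaction.max</name>
    <!-- upper bound on StoreFiles merged in one minor compaction; default 10 -->
    <value>7</value>
  </property>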
Hi David,
Minor compactions can be promoted to major compactions when all the
files are selected for compaction, and the property below will not
prevent that from occurring.
Section 9.7.6.5 there: http://hbase.apache.org/book/regions.arch.html
JM
2013/7/9 David Koch :
Hello,
We disabled automated major compactions by setting
hbase.hregion.majorcompaction=0.
This was to avoid issues during bulk import of data since compactions
seemed to cause the running imports to crash. However, even after
disabling, region server logs still show compactions going on, as well
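For completeness, the setting David describes is the following
hbase-site.xml entry; note JM's caveat above that promoted minor
compactions can still effectively run as major ones:

  <property>
    <name>hbase.hregion.majorcompaction</name>
    <!-- 0 disables the periodic, time-based major compaction schedule -->
    <value>0</value>
  </property>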