try. We allow such a retry, but it would mean not getting back all the
results. That is a serious problem, and this exception helps guard
against it. You can try increasing the scan timeout period on the client
side. This should help.
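The client-side scan timeout is an ordinary configuration property; a minimal sketch, assuming the 0.94+-era property name `hbase.client.scanner.timeout.period` (in milliseconds; older releases used `hbase.regionserver.lease.period`), with an illustrative value:

```xml
<!-- hbase-site.xml on the client side; 120000 ms is an example value -->
<property>
  <name>hbase.client.scanner.timeout.period</name>
  <value>120000</value>
</property>
```

Lowering scanner caching can also help, since each `next()` call then has less work to do before the timeout.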
-Anoop-
On Mon, Sep 29, 2014 at 12:35 PM, Henry Hung wrote:
> Hi All,
Hi All,
Is there a way to let the scanner finish scanning all regions without throwing
this kind of error? I'm using a scan with a MUST_PASS_ALL filter, and I observe
that whenever the result data is smaller (say 10%) compared to another filter
with a larger result (say 80%), it always fails.
comparator problem: Why does pattern "u" have the same result as ".*u.*"?
"u" is a substring of "hung", producing a match.
Do you want to find a string whose value is exactly "u" (not a substring)?
In that case you can specify "^u$"
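The distinction is between an unanchored find and a fully anchored pattern; a small JDK-only sketch (HBase's RegexStringComparator is believed to use find()-style matching, which is why a bare "u" behaves like ".*u.*"):

```java
import java.util.regex.Pattern;

public class RegexAnchors {
    public static void main(String[] args) {
        String value = "hung";
        // Unanchored search: "u" matches because it occurs somewhere in the value.
        System.out.println(Pattern.compile("u").matcher(value).find());   // true
        // Anchored pattern: "^u$" requires the whole value to be exactly "u".
        System.out.println(Pattern.compile("^u$").matcher(value).find()); // false
        System.out.println(Pattern.compile("^u$").matcher("u").find());   // true
    }
}
```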
Cheers
When I execute it, the result is "hung".
The question is why the SingleColumnValueFilter does not abide by the regex comparator.
Or: why is the regex comparator pattern "u" the same as ".*u.*"?
Best regards,
Henry Hung
Unable to check the size after compression, no?
Best regards,
Henry Hung
The privileged confidential information contained in this email is intended for
use only by the addressees as indicated by the original sender of this email.
If you are not the addressee indi
decrease?
Best regards,
Henry Hung
-Original Message-
From: Dhaval Shah [mailto:prince_mithi...@yahoo.co.in]
Sent: Tuesday, May 27, 2014 8:03 AM
To: user@hbase.apache.org
Subject: Re: HBase cluster design
A few things pop out to me on cursory glance:
- You are using CMSIncrementalMode, which has a nasty fallback to single-threaded full GC.
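A hedged sketch of what replacing incremental CMS might look like in hbase-env.sh (the flag names are standard HotSpot CMS options; the occupancy threshold is an assumed example value, not a recommendation for this cluster):

```shell
# hbase-env.sh (illustrative): drop -XX:+CMSIncrementalMode in favor of
# plain CMS with an explicit initiating-occupancy threshold.
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS \
  -XX:+UseConcMarkSweepGC \
  -XX:+UseCMSInitiatingOccupancyOnly \
  -XX:CMSInitiatingOccupancyFraction=70"
```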
-- Lars
- Original Message -
From: Henry Hung
To: "user@hbase.apache.org"
Cc:
Sent: Sunday, April 27, 2014 6:44 PM
Subject: RE: suggestion for how eliminate memory problem in heavy-write hbase
region server
@Bryan,
Do
well
distributed writes.
Are your regionservers starved for CPU? Either way, I'd try the java7 G1 GC on
one regionserver and report back. We run with 25GB heaps and never have long
pauses, so 16GB should be fine with enough CPU.
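Trying G1 on a single regionserver, as suggested above, might look roughly like this in hbase-env.sh (the pause target and heap sizes are assumed example values):

```shell
# hbase-env.sh on the one test regionserver (illustrative values)
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS \
  -XX:+UseG1GC \
  -XX:MaxGCPauseMillis=100 \
  -Xms16g -Xmx16g"
```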
On Sun, Apr 27, 2014 at 8:27 PM, Henry Hung wrote:
014 at 11:17 PM, Mikhail Antonov wrote:
> Henry,
>
> http://blog.ragozin.info/2011/10/java-cg-hotspots-cms-and-heap.html -
> that may give some insights.
>
> -Mikhail
>
>
> 2014-04-24 23:07 GMT-07:00 Henry Hung :
>
> > Dear All,
> >
> > My curre
Dear All,
My current hbase environment is heavy write cluster with constant 2000+ insert
rows / second spread to 10 region servers.
Each day I also need to do data deletion, and that will add a lot of IO to the
cluster.
The problem is that sometimes, after a week, one of the region servers will crash.
Hi HBase Users,
I'm using hbase 0.96 and currently testing hbase master high-availability.
From what I know, you can start 2 masters on different machines, and the
2nd to start becomes the backup master.
Then when I kill -9 the 1st master (active), it always(?) took 2 minutes for
the 2nd mast
master receive ERROR ipc.RPC: RPC.stopProxy called
on non proxy.
Henry:
Thanks for the additional information.
Looks like HA namenode with QJM is not covered by the current code.
Mind filing a JIRA with a summary of this thread?
Cheers
On Tue, Nov 26, 2013 at 9:12 AM, Henry Hung wrote:
> @Ted
?
Meaning, did you start HBase using the new hadoop jars ?
Cheers
On Mon, Nov 25, 2013 at 1:04 PM, Henry Hung wrote:
> I looked into the source code of
> org/apache/hadoop/hbase/fs/HFileSystem.java
> and whenever I execute hbase-daemon.sh stop master (or regionserver),
> the
Class[]{ClientProtocol.class, Closeable.class},
We ask for Closeable interface.
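The Closeable mention refers to how Hadoop builds its RPC client proxies: java.lang.reflect.Proxy instances declaring both the protocol interface and Closeable, so stopProxy can close them. A JDK-only sketch of that pattern (Runnable stands in for the real ClientProtocol, purely for illustration):

```java
import java.io.Closeable;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxyDemo {
    public static void main(String[] args) {
        // All calls on the proxy are dispatched through this handler,
        // analogous to Hadoop's RPC invoker.
        InvocationHandler handler = (proxy, method, margs) -> {
            System.out.println("invoked: " + method.getName());
            return null;
        };
        // Declaring Closeable alongside the protocol interface is what lets
        // stopProxy close it; a proxy lacking it triggers
        // "RPC.stopProxy called on non proxy"-style errors.
        Runnable r = (Runnable) Proxy.newProxyInstance(
                ProxyDemo.class.getClassLoader(),
                new Class[]{Runnable.class, Closeable.class},
                handler);
        r.run();                                    // goes through the handler
        System.out.println(r instanceof Closeable); // true
    }
}
```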
Did the error persist after you replaced with the hadoop-hdfs-2.2.0.jar ?
Meaning, did you start HBase using the new hadoop jars ?
Cheers
On Mon, Nov 25, 2013 at 1:04 PM, Henry Hung wrote:
> I looked into t
Subject: Re: hbase 0.96 stop master receive ERROR ipc.RPC: RPC.stopProxy called
on non proxy.
Which version of Hadoop do you use?
On Wed, Nov 20, 2013 at 5:43 PM, Henry Hung wrote:
> Hi All,
>
> When stopping master or regionserver, I found some ERROR and WARN in
> the log fil
o you use?
On Wed, Nov 20, 2013 at 5:43 PM, Henry Hung wrote:
> Hi All,
>
> When stopping master or regionserver, I found some ERROR and WARN in
> the log files, are these errors can cause problem in hbase:
>
> 13/11/21 09:31:16 INFO zookeeper.ClientCnxn: EventThread shut down
n Wed, Nov 20, 2013 at 5:43 PM, Henry Hung wrote:
> Hi All,
>
> When stopping master or regionserver, I found some ERROR and WARN in
> the log files, are these errors can cause problem in hbase:
>
> 13/11/21 09:31:16 INFO zookeeper.ClientCnxn: EventThread shut down
> 13/11
Hi All,
When stopping the master or a regionserver, I found some ERROR and WARN entries
in the log files; can these errors cause problems in hbase:
13/11/21 09:31:16 INFO zookeeper.ClientCnxn: EventThread shut down
13/11/21 09:35:36 ERROR ipc.RPC: RPC.stopProxy called on non proxy.
java.lang.IllegalArgumen
@xieliang: I will try PrintGCApplicationStoppedTime, thank you.
About loading: total requestsPerSecond is around 15000~3 for 9 servers,
with numberOfOnlineRegions = 136.
I also just uploaded the log files of gc and regionserver into dropbox:
https://dl.dropboxusercontent.com/u/60149953
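For reference, wiring up PrintGCApplicationStoppedTime alongside the usual GC log flags might look like this in hbase-env.sh (the log path and companion flags are illustrative HotSpot JDK 7/8 options):

```shell
# hbase-env.sh (illustrative): log total stopped time, not just GC pauses
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS \
  -verbose:gc -Xloggc:/path/to/gc-regionserver.log \
  -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
  -XX:+PrintGCApplicationStoppedTime"
```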
____
From: Henry Hung [ythu...@winbond.com]
Sent: Tuesday, October 22, 2013 8:41 PM
To: user@hbase.apache.org
Subject: What cause region server to timeout other than long gc?
Hi All,
Today I have 1 of 9 region servers down because of
Hi All,
Today I have 1 of 9 region servers down because of zookeeper timeout, this is
the log:
2013-10-23 07:41:34,139 INFO org.apache.hadoop.hbase.regionserver.Store:
Starting compaction of 4 file(s) in cf of
MES_PERF_LOG_TIME,\x00\x04\x00\x00\x01A\x9D\xDD\xD9\x8D\x00\x00\x08\xD0fcap2\x00\x00=
Hi to all hbase user,
Could someone tell me which ganglia version should be used for hbase 0.94.6?
In HBase Administration Cookbook by Yifeng Jiang, he uses ganglia 3.0.x for
hbase 0.92.
Best regards,
Henry
Hi All,
My understanding is that the hbase master node and hadoop name node are the
single points of failure in the cluster.
So for the Production environment I will have 2 servers configured as an
Active/Passive cluster with shared storage; if the Active server crashes, then
the Passive will take over and b
Hi All,
I'm an HBase newbie. Today I started a stress test using 4 Java processes to
load a large number of rows (1 minute = 18,000 rows). After a while, one of my
region server logs got this kind of WARN message:
WARN org.apache.hadoop.ipc.HBaseServer: IPC Server listener on 60020:
readAndProcess