Sorry, I don't know about zk. Please help me.
Thanks.
Do you mean I need to change some ZK parameter?
Here are all the logs for ZK, HMaster and the client.
It seems the problem is that the ZK leader crashed.
client logs:
11/04/26 12:25:04 INFO zookeeper.ClientCnxn: Unable to read additional data
from serv
Thank you for your reply.
I didn't change it; it is the default.
But I don't think the problem is that the meta table wasn't assigned.
It had been assigned, and data was being put into the meta table,
org.apache.hadoop.ipc.RemoteException: java.io.IOException: Server not running
at
org.apache.hadoop.hbase.region
Hi,
Here is the HMaster log :
Mon Apr 25 14:51:39 IST 2011 Starting master on impetus-822
ulimit -n 1024
2011-04-25 14:51:40,250 INFO org.apache.hadoop.hbase.master.HMaster:
vmName=Java
HotSpot(TM) Client VM, vmVendor=Sun Microsystems Inc., vmVersion=17.1-b03
2011-04-25 14:51:40,251 INFO org.apac
I use two machines (each with 30 threads) to act as clients. The
servers and clients are connected via a gigabit network.
Thanks
Weihua
2011/4/26 Chris Tarnas :
> For your query tests, are they all from a single thread? Have you tried
> reading from multiple threads/processes in parallel - that sounds mo
So, you mean I should disable the block cache and send all queries directly to DFS?
Then the query latency may be high.
And what block cache hit ratio is considered acceptable? I
mean, above such a ratio the block cache is beneficial.
2011/4/26 Ted Dunning :
> Because of your key organization you a
user_month might still be helpful on average if a user looks for one month
and then another a short time later. This is because your cache could be
primed by the first query.
But you know your application best, of course.
On Mon, Apr 25, 2011 at 10:27 PM, Weihua JIANG wrote:
> Changing key to u
For your query tests, are they all from a single thread? Have you tried reading
from multiple threads/processes in parallel - that sounds more like your use
case.
-chris
On Apr 25, 2011, at 10:04 PM, Weihua JIANG wrote:
> The query is all random read. The scenario is that a user want to
>
Changing key to user_month may not be useful to me since, for each
query, we only need to get one month report for a user instead of all
the data stored for a user.
Putting multiple month data into a single row may be useful, but not
sure. I will perform some experimentation when I have time.
201
The queries are all random reads. The scenario is that a user wants to
query his own monthly bill report, e.g. to query what happened on his
bill in March, or February, etc. Since every user may want to do so, we
can't predict who will be the next to ask for such a monthly bill
report.
2011/4/26 Stack :
>> Cu
With CDH3B4, the hadoop processes run as separate users (like hdfs,
mapred, etc). Did you set the CDH3B4 directory permissions correctly
as described in the install document?
See: https://ccp.cloudera.com/display/CDHDOC/Upgrading+to+CDH3 and
search for "permissions".
Also see this:
https://ccp.clo
waitForMetaServerConnectionDefault() calls waitForMeta() which should have
waited for "hbase.master.catalog.timeout"
What's the value for "hbase.master.catalog.timeout" on your cluster ?
Thanks
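For reference, this is how the property being asked about would appear in hbase-site.xml; a hedged sketch only, and the 600000 ms value here is illustrative rather than a verified default:

```xml
<!-- Illustrative setting of the timeout discussed above; the value
     shown is an assumption, check your own cluster's configuration. -->
<property>
  <name>hbase.master.catalog.timeout</name>
  <value>600000</value>
</property>
```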
2011/4/25 Gaojinchao
> In client code Put data into Region server has one more times:
> eg:
>publ
My map reduce jobs, which were running fine with HBase 0.20.4 with Hadoop
0.20.2 are now failing as I try to upgrade to HBase 0.90.1 with Hadoop
0.20.2-CDH3B4.
Under 0.90.1 I see the following error,
Error initializing attempt_201104252111_0001_m_02_0:
java.io.FileNotFoundException: File
/tm
In the client code, putting data into the region server is retried several
times, e.g.:
public void processBatch(List list,
for (int tries = 0; tries < numRetries && retry; ++tries) { // retry if
the put failed.
...
In the function addRegionToMeta, does it need to do the same?
catalogTrack
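For what it's worth, the bounded-retry shape in the snippet above can be sketched generically like this. This is only an illustration of the pattern (the name numRetries mirrors the snippet; the helper itself is hypothetical, not HBase's actual processBatch):

```java
import java.util.concurrent.Callable;

// Sketch of the bounded-retry pattern from the snippet: try a put-like
// operation up to numRetries times, stopping early once it succeeds.
public class RetrySketch {
    static int attemptsUsed; // recorded for illustration only

    static boolean runWithRetries(Callable<Boolean> op, int numRetries) {
        attemptsUsed = 0;
        boolean retry = true;
        for (int tries = 0; tries < numRetries && retry; ++tries) {
            attemptsUsed++;
            try {
                if (op.call()) {
                    retry = false; // succeeded, stop retrying
                }
            } catch (Exception e) {
                // swallow and retry; a real client would back off here
            }
        }
        return !retry;
    }

    public static void main(String[] args) {
        // An operation that fails twice, then succeeds on the third try.
        final int[] calls = {0};
        boolean ok = runWithRetries(() -> ++calls[0] >= 3, 5);
        System.out.println(ok + " after " + attemptsUsed + " attempts");
    }
}
```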
> Currently, to store bill records, we can achieve about 30K record/second.
>
Can you use bulk load? See http://hbase.apache.org/bulk-loads.html
> However, the query performance is quite poor. We can only achieve
> about 600~700 month_report/second. That is, each region server can
> only serve q
Because of your key organization you are blowing away your cache anyway so
it isn't doing you any good.
On Mon, Apr 25, 2011 at 7:59 PM, Weihua JIANG wrote:
> And we also tried to disable block cache, it seems the performance is
> even a little bit better. And it we use the configuration 6 DN ser
Change your key to user_month.
That will put all of the records for a user together so you will only need a
single disk operation to read all of your data. Also, test the option of
putting multiple months in a single row.
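The key design above can be sketched as follows; a hedged illustration only (the key format and names are made up for the example, not taken from the poster's schema):

```java
import java.util.Arrays;
import java.util.List;

// Sketch of the suggested user_month key design: prefixing the row key
// with the user id keeps all of one user's months lexicographically
// adjacent, so a sequential read covers them.
public class UserMonthKey {
    static String rowKey(String userId, String yyyymm) {
        return userId + "_" + yyyymm;
    }

    public static void main(String[] args) {
        List<String> keys = Arrays.asList(
            rowKey("u0001", "201102"),
            rowKey("u0002", "201101"),
            rowKey("u0001", "201103"));
        // HBase stores rows in key order; after sorting, u0001's months
        // sit next to each other.
        keys.sort(null);
        System.out.println(keys);
    }
}
```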
On Mon, Apr 25, 2011 at 7:59 PM, Weihua JIANG wrote:
> Hi all,
>
> We wan
Hi all,
We want to implement a bill query system. We have 20M users, and the bill
for each user per month contains about ten 0.6 KB records. We want
to store user bills for 6 months. Of course, user queries focus on the
latest month's reports, but there is no hot spot among the users being
queried.
We use
The region server hosting the meta table was shut down.
-Original Message-
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: 25 April 2011 21:25
To: user@hbase.apache.org
Subject: Re: A question about create table with regions in hbase version 0.90.3
Can you give more detail as to how many region servers were shutt
Quick update:
It turns out that we needed to run bin/set_meta_memstore_size.rb (
http://hbase.apache.org/upgrading.html) . I'm curious though: I understand
that our legacy dev machine would suffer because of the old
MEMSTORE_FLUSHSIZE setting. But we setup a brand new dev box with a pristine
0.90
Make sure this log4j change is on the child process's CLASSPATH when
it runs out on the cluster.
St.Ack
On Sun, Apr 24, 2011 at 6:42 PM, Himanish Kushary wrote:
> Hi,
>
> I am trying to debug a MapReduce program and would prefer to view the debug
> informations using log4j through the web gui.I
Hi,
I am trying to debug a MapReduce program and would prefer to view the debug
information via log4j through the web GUI. I tried using the Log4J logger with
Commons Logging and also passing the parameter
-Dhadoop.root.logger=INFO,TLA
None of these seems to show the debug information on the web GUI.
There's a good chance that if the region server started getting slow,
the requests from the REST servers would start piling up in the queues
and finally blow out the memory. You could confirm that by looking at
the GC logs before the OOME.
Also, when it died, it should have dumped an hprof file. If you
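To capture the GC logs and heap dump mentioned above, something like the following is commonly added to hbase-env.sh; a sketch under the assumption that the standard HotSpot flags apply (paths are illustrative):

```shell
# Illustrative HotSpot options for diagnosing an OOME: log GC activity
# and dump the heap when the JVM runs out of memory. The log paths here
# are placeholders; point them somewhere writable on your nodes.
export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails \
  -Xloggc:/var/log/hbase/gc.log \
  -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/hbase"
echo "$HBASE_OPTS"
```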
That's a separate cluster; it's barely getting any traffic, so I don't
think the queue would be an issue. We do, however, have very large files
stored (one file per row). So the question is: if this is a GET that breaks
things, how can we avoid it?
-Jack
On Mon, Apr 25, 2011 at 10:37 AM, Jean-Daniel Cryans
wr
Hi Eric,
Unfortunately, the LocalJobRunner is missing a feature that is causing the
bulk load option to fail.
Are you running a MapReduce cluster? Make sure that you've configured the
jobtracker address in your mapred-site.xml if so.
-Todd
On Fri, Apr 22, 2011 at 11:09 AM, Eric Ross wrote:
> H
Can't tell what it was because it OOME'd while reading whatever was coming in.
Did you bump the number of handlers in that cluster too? Because you
might hit what we talked about in this jira:
https://issues.apache.org/jira/browse/HBASE-3813
"Chatting w/ J-D this morning, he asked if the queues h
So I'm guessing that the log you pasted was from the master, and I can
see the zookeeper doing retries and strangely enough it was kicked out
by the other ZK peers:
2011-04-21 14:48:26,043 INFO org.apache.zookeeper.ClientCnxn: Socket
connection established to 162-2-77-0/162.2.77.0:2181, initiating
Stack:
Exception in thread "pool-1-thread-9" java.lang.OutOfMemoryError: Java
heap space
at
org.apache.hadoop.hbase.ipc.HBaseRPC$Invocation.readFields(HBaseRPC.java:120)
at
org.apache.hadoop.hbase.ipc.HBaseServer$Connection.processData(HBaseServer.java:959)
at
org.apache
On Sun, Apr 24, 2011 at 8:22 PM, Gan, Xiyun wrote:
> It works for java code, but I'm writing php scripts using thrift gateway.
> What is the solution?
>
Hack what you need into thrift idl and regen your php bindings.
Thanks,
St.Ack
What exception do you see?
Please upgrade to 0.90.2 HBase.
Thanks,
St.Ack
On Mon, Apr 25, 2011 at 2:44 AM, Rakesh Kumar Rakshit
wrote:
> Hello friends,
>
> I was using a hadoop cluster of apache hadoop (version 0.20.2) with apache
> hbase-0.20.6(2 regionservers) and apache zookeeper 3.3.1 (clus
Julio,
We are running CDH3U0 ( hbase 0.90.1 ) so there may be some difference if you
are running 0.90.2. Running your workaround on CDH3U0 I get the following:
hadoop jar hbase-0.90.1-cdh3u0.jar completebulkload -c hbase-site.xml input
table
usage: completebulkload /path/to/hfileoutputfor
Thank you Julio. I just added the below (w/ minor qualification that
-c only needed if config. not already on CLASSPATH). Thanks for the
contrib.
St.Ack
On Sun, Apr 24, 2011 at 11:17 PM, Julio Lopez wrote:
> Stack,
>
> For the bulk loads doc at http://hbase.apache.org/bulk-loads.html (in the
>
That sounds like a good idea. If you don't mind, please file an
issue and make a patch.
Thank you,
St.Ack
On Sun, Apr 24, 2011 at 5:53 PM, bijieshan wrote:
> Under the current hdfs Version, there's no related method to judge whether
> the namenode is in safemode.
> Maybe we can handle the Safe
Stack,
For the bulk loads doc at http://hbase.apache.org/bulk-loads.html (in the
"Importing the prepared data using the completebulkload tool" Section), what
about something along what's outlined below? This could also be included or
referenced from the documentation for
org.apache.hadoop.hba
Andy,
What are the symptoms?
You also need to include in your classpath the directory where the zookeeper
config file (zoo.cfg) is located.
Yes, HBASE-3714 addresses the issue discussed here. Although, it does not
fully address the NPE in
org.apache.hadoop.hbase.zookeeper.ZKConfig.parseZooC
Hi there-
Review the HBase book too.
http://hbase.apache.org/book.html#datamodel
http://hbase.apache.org/book.html#client
http://hbase.apache.org/book.html#performance
-Original Message-
From: JohnJohnGa [mailto:johnjoh...@gmail.com]
Sent: Sunday, April 24, 2011 2:46 AM
To: user@hba
Can you give more detail as to how many region servers were shutting down ?
Thanks
2011/4/25 Gaojinchao
> I merge issue HBASE-3744 to 0.90.2 and test it.
> Find that Creating table fails when region server shutdown
>
> Does it need try to one more times for putting Meta data?
>
> public static
Hello friends,
I was using a hadoop cluster of Apache Hadoop (version 0.20.2) with Apache
HBase 0.20.6 (2 region servers) and Apache ZooKeeper 3.3.1 (cluster of 2).
I ran into problems when I replaced Apache Hadoop 0.20.2 with
CDH3 (Cloudera
Hadoop). When I started HBase everything started fine
Hello guys,
I am running the Cloudera distribution CDH3u0 on my cluster with Pig and HBase.
I can read data from HBase using the following Pig query:
my_data = LOAD 'hbase://table1' using
org.apache.pig.backend.hadoop.hbase.HBaseStorage('cf:1'); dump my_data
but when I try to store data into HBas
I merged issue HBASE-3744 into 0.90.2 and tested it.
I found that creating a table fails when a region server shuts down.
Does it need to retry putting the meta data one more time?
public static void addRegionToMeta(CatalogTracker catalogTracker,
HRegionInfo regionInfo)
throws IOException {
Put put =
Thank you guys and Bill Graham
I have solved the problem.
I just added the following lines of shell to conf/hadoop-env.sh:
# if using HBase, likely want to include HBase config
HBASE_CONF_DIR=${HBASE_CONF_DIR:-/etc/hbase/conf}
if [ -n "$HBASE_CONF_DIR" ] && [ -d "$HBASE_CONF_DIR" ]; then
export H
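The pattern above can be sketched in full like this. A hedged illustration only: the variable name HADOOP_CLASSPATH is the standard Hadoop one, but the exact line the poster used is truncated above, and the demo directory here is created just so the snippet runs anywhere (on a real cluster you would keep /etc/hbase/conf):

```shell
# Sketch of the hadoop-env.sh pattern: append the HBase conf dir to
# Hadoop's classpath if it exists. A temp dir stands in for
# /etc/hbase/conf so this runs on any machine.
demo_conf=$(mktemp -d)
HBASE_CONF_DIR=${HBASE_CONF_DIR:-$demo_conf}
if [ -n "$HBASE_CONF_DIR" ] && [ -d "$HBASE_CONF_DIR" ]; then
  export HADOOP_CLASSPATH="${HADOOP_CLASSPATH:+$HADOOP_CLASSPATH:}$HBASE_CONF_DIR"
fi
echo "$HADOOP_CLASSPATH"
```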