Christian: I'm slightly shocked about the processing time of more than 2
mins to return 225 rows. I would actually need a response in 5-10 sec.
Anil: I started getting the response within 1-2 sec of firing the query, but
I got all 225 results in 2 mins. My table had 34 million rows
and
Hello,
We have a Hive external table mapped to HBase, and we are now moving
from a pseudo-distributed to a fully distributed Hadoop cluster.
We found that Hive queries are still pointing to the old namenode address,
i.e. hdfs://localhost:9000/user/hive/warehouse/table-name, as Hive stores
the full URI in its Derby metastore.
Try restarting the entire HBase cluster. You often get strange errors
like this if you have network problems (a bad /etc/hosts setup), an
'undead' regionserver (dead but still reporting alive), or a bad config.
-Håvard
On Thu, Aug 23, 2012 at 9:32 AM, Kiran Gangadharan
kiran.darede...@gmail.com wrote:
Hi,
I am
Hello,
I use a table for counting and want to do updates by pushing
increments, rather than doing a get, adding in the application, and putting
the result back.
To ensure idempotence (i.e. to avoid over-counting) I thought about (mis)using
a cell's timestamp as a kind of transaction id. This transaction id would
be some strictly
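The idea above can be sketched outside HBase. This is a minimal, hypothetical simulation of idempotent counting, where every increment carries a transaction id and replays are ignored; all class and variable names here are illustrative, not part of any HBase API.

```python
# Sketch (not HBase code): idempotent counter updates where each increment
# carries a transaction id, so applying the same increment twice is a no-op.
class IdempotentCounter:
    def __init__(self):
        self.value = 0
        self.applied_txn_ids = set()  # ids of increments already applied

    def increment(self, txn_id, amount):
        # Replayed increments (e.g. after a client retry) are ignored.
        if txn_id in self.applied_txn_ids:
            return self.value
        self.applied_txn_ids.add(txn_id)
        self.value += amount
        return self.value

counter = IdempotentCounter()
counter.increment("txn-1", 5)
counter.increment("txn-1", 5)  # retry of the same transaction: no double count
counter.increment("txn-2", 3)
print(counter.value)  # 8
```

In the proposal above, the cell timestamp would play the role of the transaction id; the open question is how to store and check the set of already-applied ids server-side.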
Rohit,
This is an interesting question, but it sounds like overkill. I would
not worry about having tables up that aren't active. If you keep your
active region count down and your memory footprint reasonable (e.g. a 16GB
heap), you should be fine.
On Fri, Aug 24, 2012 at 1:01 AM, Rohit Kelkar
Yes you need to update the metastore db directly for this to be in effect.
Regards
Bejoy KS
Sent from handheld, please excuse typos.
-Original Message-
From: Alok Kumar alok...@gmail.com
Date: Fri, 24 Aug 2012 13:30:36
To: user@hbase.apache.org; u...@hive.apache.org
Reply-To:
Good evening everyone,
Here are the logs from the server side : http://pastebin.com/yC5dGChh
And from the client side : http://pastebin.com/tR7wdkxG
I followed your advice and noticed a few things:
First, I thought the bad regionservers were the ones with the highest number
of sockets. But rebooting each of them did
Hi Adrien,
What do you think about that hypothesis?
Yes, there is something fishy to look at here. It is difficult to say
without more logs, though.
Are your gets totally random, or are you doing gets on rows that do
exist? That could explain the number of requests versus the empty/full
regions.
It does
Sent offline.
- Original Message -
From: Gurjeet Singh gurj...@gmail.com
To: user@hbase.apache.org
Cc:
Sent: Wednesday, August 22, 2012 7:01 PM
Subject: Re: Slow full-table scans
Lars,
Can you send me the modified ingestion code? I am trying to track
down the problem as well, and
Bejoy,
Thank you for your help.
I updated the metastore and it's working fine.
Regards
-Alok
On Fri, Aug 24, 2012 at 5:40 PM, Bejoy KS bejoy...@yahoo.com wrote:
Yes you need to update the metastore db directly for this to be in effect.
Regards
Bejoy KS
Sent from handheld, please excuse typos.
Hey there,
I am wondering whether this is good practice:
I have a 10-node cluster, running datanodes and tasktrackers, and
continuously running MR jobs.
My replication factor is 3.
I need to put the results of a couple of jobs into HBase tables to be able
to do random seek searches. The HBase tables
Directly accessing the metastore schema is generally not a good idea.
Instead I recommend using the ALTER TABLE SET LOCATION command:
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-AlterTable%2FPartitionLocation
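For instance, a statement along these lines would repoint the table (the table name and namenode host below are hypothetical placeholders, not values from this thread):

```sql
-- Point the Hive table at the new namenode instead of the old
-- hdfs://localhost:9000 location stored in the metastore.
ALTER TABLE my_hbase_table
SET LOCATION 'hdfs://new-namenode-host:9000/user/hive/warehouse/my_hbase_table';
```

This keeps the change inside Hive's DDL layer instead of editing the Derby metastore tables by hand.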
Thanks.
Carl
On Fri, Aug 24, 2012 at 10:56
Hey,
I have a question regarding the command-line arguments one can pass
to the hbase thrift start command. I set up HBase on a Mac in pseudo-
distributed mode. When I execute hbase thrift start, I get the following
exception:
Exception in thread "main" java.lang.AssertionError: Exactly one
option out of
I have a custom co-processor endpoint that handles aggregation of
various statistics for each region (the stats from all regions are
then merged together for the final result). Sometimes the amount of
data to aggregate is very large, and it takes longer than the exec
timeout to completely
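If the timeout being hit is the client RPC timeout (an assumption; the message is truncated before the exact setting is named), one common workaround is to raise hbase.rpc.timeout on the client so long-running coprocessor exec calls are not cut off:

```xml
<!-- hbase-site.xml (client side). The default is 60000 ms (1 minute);
     600000 ms (10 minutes) below is only an illustrative value. -->
<property>
  <name>hbase.rpc.timeout</name>
  <value>600000</value>
</property>
```

Raising the timeout only buys time; for very large aggregations it may be better to split the work into smaller exec calls per region.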