I am using HBase 0.90.4. When one of the region servers is brought down, the
HMaster does not reallocate the regions and the regions are lost.
This issue was mentioned in the following posts as well, but that issue was
prevalent in HBase 0.90.3 and earlier versions and was mentioned to be
resolved.
Hi Sriram
What is the problem that you are getting? Any exceptions in the logs?
Ideally the master will reallocate all the regions belonging to the region
server that went down.
Regards
Ram
-Original Message-
From: V_sriram [mailto:vsrira...@gmail.com]
Sent: Friday, January 13, 2012 2:39
Call for Submissions: Berlin Buzzwords 2012 - Search, Store, Scale --
June 4/5, 2012
The event will comprise presentations on scalable data processing. We
invite you to submit talks on the topics:
* IR / Search - Lucene, Solr, katta, ElasticSearch or comparable solutions
* NoSQL - like CouchDB
Stack writes:
>
> On Thu, Jan 12, 2012 at 5:18 AM, Stanislav Barton
> wrote:
> > but after disabling/enabling the table in order to make the
> > regions come up, there were still two regions with problems; the RS throws
> > NegativeArraySizeException while trying to open the region, the whole
Hi Ram,
I too expected that to happen. But instead of reallocating the regions, the
HMaster loses the regions present in the killed region server.
HMaster log trace:
2012-01-12 22:19:18,586 DEBUG org.apache.hadoop.hbase.master.AssignmentManager:
Handling transition=M_ZK_REGION_OFFLINE, server=Namen
Sorry what I meant by "pastebin all the debug" was to use a service
like pastebin.com to keep the emails short.
So in there I see:
> 12/01/13 02:21:22 INFO zookeeper.ClientCnxn: Opening socket connection to
> server localhost/127.0.0.1:2181
Which means that it's connecting to the default value o
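The fallback to localhost:2181 is the telltale sign that no hbase-site.xml was ever read: the client overlays whatever the site file provides on top of compiled-in defaults, and with no file found the defaults win. A minimal plain-Python model of that overlay behavior (the property names mirror HBase's, but this is a sketch, not the HBase Configuration API):

```python
# Sketch: why a missing hbase-site.xml silently means "connect to localhost".
# DEFAULTS stands in for HBase's compiled-in defaults; effective_conf() for
# the overlay a Configuration object performs when loading the site file.

DEFAULTS = {
    "hbase.zookeeper.quorum": "localhost",
    "hbase.zookeeper.property.clientPort": "2181",
}

def effective_conf(site_xml_values):
    """Overlay values from hbase-site.xml (if any) on top of the defaults."""
    conf = dict(DEFAULTS)
    conf.update(site_xml_values)
    return conf

# hbase-site.xml not on the classpath -> nothing to overlay:
conf = effective_conf({})
print(conf["hbase.zookeeper.quorum"])  # localhost -- the log line above

# With the file present, the real quorum wins:
conf = effective_conf({"hbase.zookeeper.quorum": "zk1.example.com"})
print(conf["hbase.zookeeper.quorum"])
```

So seeing `localhost/127.0.0.1:2181` in a client log on a distributed cluster almost always means a classpath problem, not a ZooKeeper problem.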
Dhruba:
Thank you for the stats.
Fuesane
Dhruba Borthakur-2 wrote:
>
> Here are some of our stats of FB messages on HBase:
>
> 6B+ messages/day
>
> Traffic to HBase
> 75+ Billion R+W ops/day
> At peak: 1.5M ops/sec
> ~ 55% Read vs. 45% Write ops
>
> Avg write op inserts ~16 records across
On Thu, Jan 12, 2012 at 9:47 PM, T Vinod Gupta wrote:
> I wrote an app to delete a bunch of old data which we don't need
> any more, so that app is doing scans and deletes (specific columns of rows
> based on some custom logic).
>
You understand that you are writing a new entry per item you are deleting?
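The point above is that in an LSM-style store like HBase, a Delete does not erase anything in place: it appends a tombstone marker, and the masked cells are only physically removed later at (major) compaction. So a mass-delete job is itself a heavy write workload. A plain-Python model of that semantics (not the HBase API):

```python
# Sketch of tombstone semantics: the store is an append-only list of cells,
# a delete appends a TOMBSTONE entry, and reads resolve the latest timestamp.

store = []  # append-only log of (row, column, timestamp, value_or_TOMBSTONE)
TOMBSTONE = object()

def put(row, col, ts, value):
    store.append((row, col, ts, value))

def delete(row, col, ts):
    store.append((row, col, ts, TOMBSTONE))  # a new entry, not an erase

def get(row, col):
    """Return the newest visible value, or None if deleted/absent."""
    cells = [c for c in store if c[0] == row and c[1] == col]
    if not cells:
        return None
    latest = max(cells, key=lambda c: c[2])
    return None if latest[3] is TOMBSTONE else latest[3]

put("r1", "f:old", 1, "stale")
delete("r1", "f:old", 2)
print(get("r1", "f:old"))  # None: the read masks the cell...
print(len(store))          # 2: ...but the store grew by one entry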
Hi everyone,
I have a simple MapReduce job that is reading in text files, writing out
to and using the PutSortReducer to configure a
large bulk load for HBase. When I run the job, it's throwing a
ClassCastException error, I believe having to do with partitioning the
keys. Has anyone else experienced this?
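A common cause of this kind of ClassCastException (an assumption here, pending the actual code): the bulk-load path partitions and sorts by row key and expects every map output key to be an ImmutableBytesWritable, but TextInputFormat hands the mapper LongWritable file offsets, and if those leak through as the output key the partitioner's cast fails. A plain-Python analogy of that failure mode:

```python
# Analogy for the bulk-load ClassCastException: the partitioner assumes a
# byte-comparable row key; a numeric file-offset key sneaking through blows
# up at the type check instead of partitioning cleanly.

def partition(key, split_points):
    """Route a bytes row key to the reducer owning its key range."""
    if not isinstance(key, bytes):
        # stand-in for Hadoop's ClassCastException
        raise TypeError(f"expected bytes row key, got {type(key).__name__}")
    return sum(1 for p in split_points if p <= key)

splits = [b"m"]                      # two ranges: [..., "m") and ["m", ...)
print(partition(b"apple", splits))   # 0
print(partition(b"zebra", splits))   # 1
try:
    partition(42, splits)            # the LongWritable offset, by analogy
except TypeError as e:
    print("boom:", e)
```

If that is the cause, the fix is for the mapper to emit the row key as an ImmutableBytesWritable rather than passing its input key through.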
Can you reveal part of your code w.r.t. the use of LongWritable ?
Cheers
On Fri, Jan 13, 2012 at 12:36 PM, Jon Bender wrote:
> Hi everyone,
>
> I have a simple MapReduce job that is reading in text files , writing out
> to and using the PutSortReducer to configure a
> large bulk load for HBase.
Hello,
When using the hbase shell on ubuntu 10.04, I get lots of incorrect
characters in my output. For instance
Current count: 174000, row: ???7???Q
RecordingSession,R5�,1326462416602.954ee04ab2363757ed3f6413e028e457.
It looks like my bash is set to UTF-8. When I run locale, I get
LAN
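One thing worth checking before blaming the locale: HBase row keys are raw bytes, and if these keys contain non-UTF-8 or non-printable bytes, any text terminal has to substitute placeholder characters no matter how the locale is set. A small sketch (the row key bytes here are hypothetical):

```python
# Sketch: '???' in shell output can simply mean the row key is binary.
# Decoding arbitrary bytes as UTF-8 yields replacement characters; a
# printable-escape rendering keeps the key unambiguous instead.

raw_key = b"\x00\x9f7\xe2\x28Q"  # hypothetical binary row key
text = raw_key.decode("utf-8", errors="replace")
print(text)                      # replacement chars, like the shell output

escaped = "".join(
    chr(b) if 32 <= b < 127 else f"\\x{b:02x}" for b in raw_key
)
print(escaped)                   # e.g. \x00\x9f7\xe2(Q
```

If the keys really are binary, the output is "correct" in the sense that there is no faithful text rendering; an escaped form is the readable alternative.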
Sure, see below for the job setup and Map class:
http://pastebin.com/DKMZhGff
--Jon
On Fri, Jan 13, 2012 at 1:13 PM, Ted Yu wrote:
> Can you reveal part of your code w.r.t. the use of LongWritable ?
>
> Cheers
>
> On Fri, Jan 13, 2012 at 12:36 PM, Jon Bender wrote:
>
> > Hi everyone,
> >
> >
I have a standalone instance of HBase (single instance, on localhost).
After reading a few thousand records using a scanner my thread is stuck
waiting:
"main" prio=10 tid=0x016d4800 nid=0xf3a in Object.wait()
[0x7fbe96dc3000]
java.lang.Thread.State: WAITING (on object monitor)
Did you get any scan results at all?
Check your region server and master HBase logs for any warnings.
Also, just FYI: the standalone version of HBase is not super stable; I
have had many similar problems in the past. The distributed mode is much
more robust.
thanks
On Fri, Jan 13, 2012 at 2:36 P
Successfully got a few thousand results... nothing exceptional in the
hbase log:
2012-01-13 22:42:13,830 INFO org.apache.hadoop.io.compress.CodecPool: Got
brand-new decompressor
2012-01-13 22:42:13,832 INFO org.apache.hadoop.io.compress.CodecPool: Got
brand-new decompressor
2012-01-
It always hangs waiting on the same record
On 13/01/12 22:48, Joel Halbert wrote:
Successfully got a few thousand results... nothing exceptional in the
hbase log:
2012-01-13 22:42:13,830 INFO
org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
2012-01-13 22:42:13,8
I'm having an odd problem with incrementing counters simultaneously during a
scan (both in separate processes).
For low rate counters, there is no problem (< 1 increment per second), but for
the higher rate counters (>10 increments per second), there is an inconsistency
in the counter values.
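This symptom matches a classic lost-update race: if a client increments by reading the counter and writing back value+1 as two separate operations, concurrent increments interleave and updates vanish, and the higher the rate the more often it happens. HBase's server-side atomic increment (incrementColumnValue) exists precisely to avoid this. A deterministic plain-Python model of the interleaving:

```python
# Sketch of the lost-update race behind inconsistent counters: two clients
# both read the old value before either writes, so one increment is lost.

counter = {"hits": 0}

def read(name):               # step 1 of a non-atomic increment
    return counter[name]

def write(name, value):       # step 2 of a non-atomic increment
    counter[name] = value

# Interleaved schedule: both reads happen before either write.
a = read("hits")
b = read("hits")
write("hits", a + 1)
write("hits", b + 1)
print(counter["hits"])        # 1 -- two increments, one lost

def atomic_increment(name, amount=1):
    """Single indivisible step, like an increment on the region server."""
    counter[name] += amount
    return counter[name]

counter["hits"] = 0
atomic_increment("hits")
atomic_increment("hits")
print(counter["hits"])        # 2 -- no lost update
```

If the incrementing process and the scanning process are doing get-then-put on the same counters, that alone would explain the inconsistency at high rates.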
My apologies,
Here is what "hbase classpath" returns
http://pastebin.com/2MP9c6Yq
As you can see, /etc/hbase/conf is not on there, so that explains my
problem. The documentation indicates that hbase-site.xml is normally
located in /etc/hbase/conf. Is this a problem with 'hbase', or did I
m
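The diagnosis above follows from how classpath-based config loading works: the client uses the first hbase-site.xml it finds on the classpath, so if /etc/hbase/conf never appears in the `hbase classpath` output, the file is simply never read, with no error. A plain-Python sketch of that lookup (directory names are hypothetical):

```python
# Sketch of classpath-based config lookup: scan classpath entries in order
# and take the first one containing hbase-site.xml; absent entry -> silent None.

import os
import tempfile

def find_site_file(classpath_entries, filename="hbase-site.xml"):
    """Return the path of the first hbase-site.xml on the classpath, else None."""
    for entry in classpath_entries:
        candidate = os.path.join(entry, filename)
        if os.path.isfile(candidate):
            return candidate
    return None

# Model it with a throwaway "conf dir" instead of the real filesystem layout:
with tempfile.TemporaryDirectory() as conf_dir:
    open(os.path.join(conf_dir, "hbase-site.xml"), "w").close()
    # conf dir absent from the classpath -> config silently not found
    print(find_site_file(["/usr/lib/hbase"]))
    # conf dir present -> picked up
    print(find_site_file(["/usr/lib/hbase", conf_dir]))
```

Which is why `hbase classpath | tr ':' '\n'` is a quick sanity check: the conf directory must show up in that list for the site file to take effect.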
Apache HBase doesn't require that, as you can tell by the
documentation: http://hbase.apache.org/book.html
You are running CDH3u2, which has its own quirks/features, so why
it's not picking it up when it should is a question for the
Cloudera mailing lists.
J-D
On Fri, Jan 13, 2012 at 5:24