Jack: You might want to try applying HBASE-3038 (there are two patches
up there; you'll need both). The thought is that it might be the cause of
the EOFException you were running into (even though your files seemed smaller
than the 2G that HBASE-3038 is about).
St.Ack
On Tue, Sep 28, 2010 at 12:12 PM, Stack wrote:
Hey guys!
Just wanted to let you know that Wednesday's meetup is going to have
some *fantastic* speakers from around the world :) You need to come!
http://www.meetup.com/Seattle-Hadoop-HBase-NoSQL-Meetup/calendar/13704368/
Wednesday, 7pm, Amazon SLU.
First we have Tim Anglade, who flew here all
I was thinking along the same lines. Adding additional synchronization
didn't seem like the right approach. So if we make sure we are taking off what
we are expecting to, then there won't be a problem.
~Jeff
On Sep 28, 2010, at 2:41 PM, Ted Yu wrote:
> Except for remove(Object r), all call
Fantastic news, I look forward to it
Dave
-----Original Message-----
From: Todd Lipcon [mailto:t...@cloudera.com]
Sent: Tuesday, September 28, 2010 11:25 AM
To: user@hbase.apache.org
Subject: Re: Upgrading 0.20.6 -> 0.89
On Tue, Sep 28, 2010 at 9:35 AM, Buttler, David wrote:
>
> I currently su
http://pastebin.com/mU8tpuiD there it is.
On Tue, Sep 28, 2010 at 12:03 PM, Stack wrote:
> Can you get history of this region from master log and pastebin it
> (paste result of a grep of 84162fcdf083fe4736c39571223cb029 to
> pastebin)?
> St.Ack
>
> On Tue, Sep 28, 2010 at 10:53 AM, Jack Levin wr
I'll try to reproduce it and capture some comprehensive log files, but we're
testing on EC2 and had terminated some of the servers before noticing what
was happening.
I think it's been doing successful compactions all along because there are
only 3 files in that directory. Here's the hdfs files f
I made https://issues.apache.org/jira/browse/HBASE-3046 for looking
into this. We thought we'd repro'd it here, but it seems like we were
running into HBASE-3038... which was not your case, at least not for
the two files you made available to me.
St.Ack
On Fri, Sep 24, 2010 at 4:52 PM, Jack Levi
Can you get history of this region from master log and pastebin it
(paste result of a grep of 84162fcdf083fe4736c39571223cb029 to
pastebin)?
St.Ack
On Tue, Sep 28, 2010 at 10:53 AM, Jack Levin wrote:
> I am using rest and getting following errors on some insertions:
>
>
> 2010-09-28 10:51:07,215
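The grep St.Ack asks for can be sketched as below; on a real cluster the master log lives under $HBASE_HOME/logs/, but since that depends on the install, this sketch writes a small invented stand-in log (its contents are illustrative, not real log output):

```shell
# Hedged sketch: collect every master-log line mentioning one region so the
# result can be pasted to pastebin. The region name comes from the thread;
# the demo log below is an invented stand-in for the real master log.
REGION=84162fcdf083fe4736c39571223cb029

cat > master-demo.log <<EOF
2010-09-28 10:50:01 INFO master: assigning region img833,samp754.png,1285522187411.${REGION}.
2010-09-28 10:51:07 INFO master: opened region img833,samp754.png,1285522187411.${REGION}.
2010-09-28 10:52:30 INFO master: balancing some-unrelated-region
EOF

# On a real cluster you would grep the actual master log instead, e.g.
# grep "$REGION" $HBASE_HOME/logs/hbase-*-master-*.log
grep "$REGION" master-demo.log > region-history.txt
wc -l < region-history.txt    # prints 2: the number of events found
```

The resulting region-history.txt is what you would paste to pastebin.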
I was able to fix it by running:
./hbase org.jruby.Main add_table.rb /hbase/img833
then in shell, disable, enable 'img833'
I am checking to see if I lost any files.
-Jack
On Tue, Sep 28, 2010 at 10:53 AM, Jack Levin wrote:
> I am using rest and getting following errors on some insertions:
>
>
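Jack's fix can be collected into one hedged sketch. add_table.rb shipped with 0.20-era HBase, but the exact paths below are assumptions, and the commands need a live cluster, so this function only prints them for review rather than running them:

```shell
# Dry-run sketch of the recovery above: rebuild the table's .META. entries
# from its files on HDFS with add_table.rb, then disable/enable the table
# from the HBase shell. Paths are assumptions; the function echoes the
# commands instead of executing them, since they require a live cluster.
recover_table() {
  table=$1
  echo "./bin/hbase org.jruby.Main bin/add_table.rb /hbase/${table}"
  echo "echo \"disable '${table}'; enable '${table}'\" | ./bin/hbase shell"
}

recover_table img833   # prints the two commands to run for table img833
```

After running the real commands, checking for lost files (as Jack does) is still a sensible final step.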
That is an astute observation. Stepping through the code with the threads
stopping execution at the points you suggest would indeed make it so that
take() would return the lower-priority CompactionRequest, remove the
higher-priority compaction request from regionsInQueue, and finally the add
On Tue, Sep 28, 2010 at 9:35 AM, Buttler, David wrote:
>
> I currently suggest that you use the CDH3 hadoop package. Apparently
> StumbleUpon has a production version of 0.89 that they are using. It would
> be helpful if Cloudera put that in their distribution.
>
>
Working on it ;-) CDH3b3 shou
On Tue, Sep 28, 2010 at 10:09 AM, Andrew Purtell wrote:
> A working equivalent of sync() in HDFS, and support for it.
>
> See http://www.cloudera.com/blog/2010/07/whats-new-in-cdh3-b2-hbase/ ,
> especially: "HDFS improvements for HBase – along with the HDFS team at
> Facebook, we have contributed
St.Ack,
just to make sure: by cdh3b2 aka 'hadoop-append' you mean hadoop 0.20.2+320
from the cdh3 distro, right?
Thank you.
-Dmitriy
On Tue, Sep 28, 2010 at 9:15 AM, Stack wrote:
> On Mon, Sep 27, 2010 at 7:52 PM, Dmitriy Lyubimov
> wrote:
> > Hi,
> >
> > i would be very grateful if somebody co
I am using rest and getting following errors on some insertions:
2010-09-28 10:51:07,215 ERROR
org.apache.hadoop.hbase.regionserver.HRegionServer:
org.apache.hadoop.hbase.regionserver.WrongRegionException: Requested
row out of range for HRegion
img833,samp754.png,1285522187411.84162fcdf083fe4736c
Thank you, St. Ack.
by 'reconstituting' I meant the table had no records, so we had to re-fill it
with the data we kept elsewhere. I couldn't find or figure out any technique
that might help me scavenge it from the hbase files.
so it sounds like the migrate is in order.
-Dmitriy
On Tue, Sep 28, 2010
Thanks!
Renato M.
2010/9/28 Andrew Purtell
> A working equivalent of sync() in HDFS, and support for it.
>
> See http://www.cloudera.com/blog/2010/07/whats-new-in-cdh3-b2-hbase/ ,
> especially: "HDFS improvements for HBase – along with the HDFS team at
> Facebook, we have contributed a number
A working equivalent of sync() in HDFS, and support for it.
See http://www.cloudera.com/blog/2010/07/whats-new-in-cdh3-b2-hbase/ ,
especially: "HDFS improvements for HBase – along with the HDFS team at
Facebook, we have contributed a number of important bug fixes and improvements
for HDFS spec
Some patches that improve throughput for HBase, although you also
need an HBase-side patch (HBASE-2467). They also backported stuff from
0.21 that's never going to be in 0.20-append. Those are our main reasons
to use CDH3b2.
J-D
On Tue, Sep 28, 2010 at 10:02 AM, Renato Marroquín Mogrovejo
wrote:
>
Just a quick question that often intrigues me: why do you guys prefer
CDH3b2 and not a regular hadoop-0.20.X?
Thanks in advance.
Renato M.
2010/9/28 Jean-Daniel Cryans
> > Will upgrading to 0.89 be a PITA?
>
> Unless you still use the deprecated APIs, it's actually just a matter
> of r
> Will upgrading to 0.89 be a PITA?
Unless you still use the deprecated APIs, it's actually just a matter
of replacing the distribution and restarting.
>
> Should we expect to be able to upgrade the servers without losing data?
Definitely, since no upgrade of the filesystem format is required. B
I have tried upgrading on one of my test clusters. I can say that the code
changes are relatively small and minor. Things that I had to change:
1) how I was creating my Configuration objects -- using
HBaseConfiguration.create() instead of new HBaseConfiguration()
2) how I was defining my column
>
> Does the 0.20.6 provide the redundancy of region servers as well as the
> 0.89.20100924?
It generally does, but since HDFS 0.20 didn't support fsync,
testing region server recovery is pretty hard (since you never know
how much data you lost). But for 0.90, which the 0.89 releases are
snap
On Mon, Sep 27, 2010 at 7:52 PM, Dmitriy Lyubimov wrote:
> Hi,
>
> i would be very grateful if somebody could clarify the following for me
> please. (0.20.5)
>
> yesterday we lost a short table (~100 rows) in production without a trace.
> no matter how deep i looked in the logs of regionservers a
On Mon, Sep 27, 2010 at 4:26 PM, Matt Corgan wrote:
> I'm sequentially importing ~1 billion small rows (32 byte keys) into a table
> called StatAreaModelLink. I realize that sequential insertion isn't
> efficient by design, but I'm not in a hurry so I let it run all weekend.
> It's been proceedi
We're using 0.20.6; we have a non-trivial application using many aspects
of hbase; we have a couple of customers in production; we understand this
is still pre-release; however, we don't want to lose any data.
Will upgrading to 0.89 be a PITA?
Should we expect to be able to upgrade the server
Jean-Daniel Cryans writes:
> We can kill -9 region servers as much as we
> want, and the cluster does recover.
>
> J-D
Just FYI: I've just tested the redundancy of region servers on Hadoop
0.20.2/HBase 0.20.6, with 2 region servers. The result: it works fine only
if the region to be ki
Resolved. A stupid error I made. Sorry for this.
2010/9/28 Tao Xie
> Maybe a stupid question. I have set export HBASE_MANAGES_ZK=true and
> provide one ZK in hbase-site.xml. In my example, I only set the server sr114
> as zk. But I still find zookeeper will check other quorum servers. I wonder
>
Maybe a stupid question. I have set export HBASE_MANAGES_ZK=true and provided
one ZK in hbase-site.xml. In my example, I only set the server sr114 as zk.
But I still find that ZooKeeper will check other quorum servers. I wonder where
it reads the server list from. Confused about this. Anybody can give me a h
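A hedged note, based on how 0.20-era HBase manages ZooKeeper: with HBASE_MANAGES_ZK=true, HBase starts a ZooKeeper peer on every host named in hbase.zookeeper.quorum, so extra servers being "checked" usually means they appear in that property in some hbase-site.xml (or hbase-default.xml) on the classpath. A minimal fragment pinning the quorum to the single host from this thread might look like the sketch below (the dataDir path is an assumption, not a verified config):

```xml
<!-- Sketch only: sr114 comes from the thread above; the dataDir is an
     assumed example path, not a recommendation. -->
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>sr114</value>
</property>
<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>/var/zookeeper</value>
</property>
```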