Our configuration/schema has fairly low hlog rollover sizes to keep the
possibility of data loss to a minimum. When we upgrade to 0.89 with
append support, I imagine we'll be able to safely set this to a much
larger size. Are there any rough guidelines for what good values should
be now?

-Daniel
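(Context for the sizing question: a sketch, not from the thread, of the hbase-site.xml knobs that govern the hlog roll size. The property names are assumed from 0.89/0.90-era defaults, so verify them against your hbase-default.xml before relying on them.)

```xml
<!-- hbase-site.xml: WAL (hlog) roll sizing. A new log is rolled once the
     current one reaches blocksize * multiplier. Property names assumed
     from 0.89/0.90-era defaults. -->
<property>
  <!-- Fraction of the WAL block size at which the hlog is rolled -->
  <name>hbase.regionserver.logroll.multiplier</name>
  <value>0.95</value>
</property>
<property>
  <!-- WAL block size in bytes; with working append support, a larger
       value (e.g. the HDFS block size) becomes safer -->
  <name>hbase.regionserver.hlog.blocksize</name>
  <value>67108864</value>
</property>
```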
On 9/28/10 6:13 PM, Buttler, David wrote:

Fantastic news, I look forward to it.

Dave

-Original Message-
From: Todd Lipcon [mailto:t...@cloudera.com]
Sent: Tuesday, September 28, 2010 11:25 AM
To: user@hbase.apache.org
Subject: Re: Upgrading 0.20.6 -> 0.89

On Tue, Sep 28, 2010 at 9:35 AM, Buttler, David wrote:

> I currently suggest that you use the CDH3 hadoop package. Apparently
> StumbleUpon has a production version of 0.89 that they are using. It would
> be helpful if Cloudera put that in their distribution.

Working on it ;-) CDH3b3 should ...
The branch-0.20-append branch doesn't get the testing that CDH does,
but CDH3 offers other benefits; check out their blog.

Best regards,

  - Andy

--- On Tue, 9/28/10, Renato Marroquín Mogrovejo <renatoj.marroq...@gmail.com> wrote:

> From: Renato Marroquín Mogrovejo
> Subject: Re: Upgrading 0.20.6 -> 0.89
> To: user@hbase.apache.org
> Date: Tuesday, September 28, 2010, 10:02 AM
>
> Just a quick question that often intrigues me: why do you guys
> prefer CDH3b2 and not a regular hadoop-0.20.X?
> Thanks in advance.
>
> Renato M.
Some patches that improve throughput for HBase, although you also
need an HBase-side patch (HBASE-2467). They also backported stuff from
0.21 that's never going to be in 0.20-append. Those are our main reasons
to use CDH3b2.

J-D

On Tue, Sep 28, 2010 at 10:02 AM, Renato Marroquín Mogrovejo wrote:

> Just a quick question that often intrigues me: why do you guys prefer
> CDH3b2 and not a regular hadoop-0.20.X?
> Thanks in advance.
>
> Renato M.
2010/9/28 Jean-Daniel Cryans wrote:

> Will upgrading to 0.89 be a PITA?

Unless you still use the deprecated APIs, it's actually just a matter
of replacing the distribution and restarting.

> Should we expect to be able to upgrade the servers without losing data?

Definitely, since no upgrade of the filesystem format is required.
I have tried upgrading on one of my test clusters. I can say that the code
changes are relatively small and minor. Things that I had to change:
1) how I was creating my Configuration objects -- using
HBaseConfiguration.create() instead of new HBaseConfiguration()
2) how I was defining my column ...
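Change (1) above, as a minimal compile-level sketch. This assumes the 0.89 client jar is on the classpath; the class name ConfigUpgrade is illustrative, not from the thread.

```java
// Illustrative sketch of change (1): HBaseConfiguration.create()
// replaces the deprecated `new HBaseConfiguration()` constructor.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ConfigUpgrade {
    public static void main(String[] args) {
        // 0.20.x style, deprecated in 0.89:
        //   HBaseConfiguration conf = new HBaseConfiguration();

        // 0.89 style: returns a plain Hadoop Configuration seeded with
        // hbase-default.xml and hbase-site.xml from the classpath.
        Configuration conf = HBaseConfiguration.create();
        System.out.println(conf.get("hbase.cluster.distributed"));
    }
}
```

The new factory method returns a plain Hadoop `Configuration`, so downstream code that accepted `HBaseConfiguration` specifically may also need its signatures loosened.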