After HBASE-3531 is out of the way, we should be close. When testing is finished, please shut down the cluster and see if any region server refuses to go down. I should have reported the above earlier (I encountered the first occurrence Friday evening); that should help people pinpoint potential issues.
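In case it helps anyone reproduce the check above, here is a minimal sketch (not from the thread, just an illustration) of how you might spot a region server that refuses to go down after shutdown. It assumes Python 3, that the JDK's `jps` tool is on the PATH of each region server host, and that region server JVMs show up under the main class `HRegionServer`:

    #!/usr/bin/env python
    # Minimal sketch: after stopping the cluster, look for lingering region
    # server JVMs on this host. Assumes the JDK's `jps` tool is on the PATH;
    # HBase region servers normally appear under the main class "HRegionServer".
    import subprocess
    import sys

    def lingering_region_servers():
        out = subprocess.run(["jps"], capture_output=True, text=True, check=True).stdout
        # jps prints "<pid> <main class>" per line; keep only region server entries.
        return [line for line in out.splitlines() if "HRegionServer" in line]

    if __name__ == "__main__":
        stuck = lingering_region_servers()
        if stuck:
            print("Region server(s) still running after shutdown:")
            for entry in stuck:
                print("  " + entry)
            sys.exit(1)  # non-zero exit so a test harness can flag the host
        print("No region servers left running on this host.")

Running something like this on each host after stop-hbase.sh (and grabbing a jstack thread dump from any process it flags) would give us something concrete to attach to a JIRA.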
Regards

On Fri, Feb 11, 2011 at 9:59 PM, Ryan Rawson <[email protected]> wrote:
> I am generally +1, but we'll need another RC to address HBASE-3524.
>
> Here is some of my other report of running this:
>
> Been running a variant of this found here:
>
> https://github.com/stumbleupon/hbase/tree/su_prod_90
>
> Running in "dev" here at SU now.
>
> Also been testing that against our Hadoop CDH3b2 patched with
> HDFS-347. In uncontended YCSB runs this did not improve 'get'
> numbers much, but in a 15-thread contended test the average get time
> goes from 12.1 ms -> 6.9 ms. We plan to test this more and roll it into
> our production environment. With 0.90.1 + a number of our patches, and
> Hadoop w/ 347, I loaded 30 GB in using YCSB.
>
> Still working on getting VerifyingWorkload to run and verify this
> data. But no exceptions so far.
>
> -ryan
>
> On Fri, Feb 11, 2011 at 7:10 PM, Andrew Purtell <[email protected]> wrote:
> > Seems reasonable to stay -1 given HBASE-3524.
> >
> > This weekend I'm rolling RPMs of 0.90.1rc0 + ... a few patches (including
> > 3524) ... for deployment to preproduction staging. Depending on how that
> > goes, we may have jiras and patches for you next week.
> >
> > Best regards,
> >
> >   - Andy
> >
> >
> >> From: Stack <[email protected]>
> >> Subject: Re: [VOTE] HBase 0.90.1 rc0 is available for download
> >> To: [email protected]
> >> Cc: [email protected]
> >> Date: Friday, February 11, 2011, 9:35 AM
> >>
> >> Yes. We need to fix the assembly. It's going to trip folks up. I
> >> don't think it's a sinker on the RC though, especially as we
> >> shipped 0.90.0 w/ this same issue. What do you think, boss?
> >>
> >> St.Ack
> >>
> >>
> >> On Fri, Feb 11, 2011 at 9:30 AM, Andrew Purtell <[email protected]>
> >> wrote:
> >> > No, an earlier version from before that I failed to
> >> > delete while moving jars around. So this is a user problem,
> >> > but I foresee it coming up again and again.
