Rick,

So you are saying that upgrading from 0.1.3 to 0.2.0 didn't work for you?

Regarding your question, I strongly suggest that you develop for the 0.2
branch. If you think there are some features missing, feel free to
file a Jira and this will solve your problem. Or you can even contribute a
patch.

J-D

On Wed, Jul 23, 2008 at 1:40 PM, Rick Hangartner <[EMAIL PROTECTED]>
wrote:

> Hi,
>
> All in all, we like what we've seen of HBase-0.2.0 so far and the API
> changes.
>
> Our upgrade test with Release Candidate 1 was a little bumpy but generally
> positive.
>
> We set up a single virtualized test machine (in this case VMWare Fusion
> because an OS X machine was close at hand, although we also run Xen
> virtualized installs), with a virtual disk, running hadoop-0.16.4 and
> hbase-0.1.3 with a small test table in a Debian install so we could rapidly
> test alternative upgrades paths and data migration.
>
> We tried our own upgrade procedure once based on some previous experience
> and we tried the "formal" upgrade procedure found at:
>
> http://wiki.apache.org/hadoop/Hbase/HowToMigrate
> http://wiki.apache.org/hadoop/Hadoop%20Upgrade
>
> In each case, we of course started with a fresh copy of our virtualized
> machine.
>
> Here's what we found:
>
> 1) In both approaches, the upgrade from hadoop-0.16.4 to hadoop-0.17.1 seemed
> to go OK.
>
> 2) In both approaches, when we tried to do the data migration from
> hbase-0.1.3 to hbase-0.2.0, we first got migration failures due to
> "unrecovered region server logs". Following the 'Redo Logs' comments in the
> "http://wiki.apache.org/hadoop/Hbase/HowToMigrate" doc, and starting afresh
> with a new copy of our virtualized system each time, we tried these methods
> of getting rid of those logs and the fatal error:
>
>   a) deleting just the log files in the "/hbase" directory
>   b) deleting the entire contents of the "/hbase" directory (which means we
> lost our data, but we are just investigating the upgrade path, after all)
>   c) deleting the "/hbase" directory entirely and creating a new "/hbase"
> directory.
>
> I should also note that we would need to repeat approach a) to be 100%
> certain of our results for that case.  (We've already repeated approaches b)
> and c) and have just run out of time for these upgrade tests because we need
> to get to other things).
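
For anyone retracing these steps, the three cleanup approaches above might look roughly like the following with the era's `hadoop dfs` shell (a sketch only: the `/hbase` root and the `log_*` naming of region server log directories are assumptions; list the directory first and adjust to your actual layout):

```shell
# a) delete only the region server log directories under /hbase
#    (the log_* name pattern is an assumption; check with: hadoop dfs -ls /hbase)
hadoop dfs -rmr '/hbase/log_*'

# b) delete the entire contents of /hbase, keeping the directory itself
#    (this destroys the table data, as noted above)
hadoop dfs -rmr '/hbase/*'

# c) remove /hbase entirely and recreate it empty
hadoop dfs -rmr /hbase
hadoop dfs -mkdir /hbase
```

These commands require a running HDFS cluster, so they are shown here only as a sketch of the three approaches.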
>
> In all cases, the migrate then failed as:
>
> [EMAIL PROTECTED]:~/hbase$ bin/hbase migrate upgrade
> 08/07/22 18:03:16 INFO util.Migrate: Verifying that file system is
> available...
> 08/07/22 18:03:16 INFO util.Migrate: Verifying that HBase is not running...
> 08/07/22 18:03:17 INFO ipc.Client: Retrying connect to server:
> savory1/10.0.0.45:60000. Already tried 1 time(s).
>  ...
> 08/07/22 18:03:26 INFO ipc.Client: Retrying connect to server:
> savory1/10.0.0.45:60000. Already tried 10 time(s).
> 08/07/22 18:03:27 INFO util.Migrate: Starting upgrade
> 08/07/22 18:03:27 FATAL util.Migrate: Upgrade failed
> java.io.IOException: Install 0.1.x of hbase and run its migration first
>       at org.apache.hadoop.hbase.util.Migrate.run(Migrate.java:181)
>       at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>       at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
>       at org.apache.hadoop.hbase.util.Migrate.main(Migrate.java:446)
>
> As you can see, we were then in a bit of a Catch-22.  To re-install
> HBase-0.1.x also required re-installing Hadoop-0.16.4 (we tried reinstalling
> HBase-0.1.x without doing that!) so there was no way to proceed.  Attempting
> to start up HBase-0.2.0 just resulted in an error message that we needed to
> do the migrate.
>
> 3) Since this was just a test, we then blew away the disk used by Hadoop
> and re-built the namenode per a standard new Hadoop install.  Hadoop-0.17.1
> and Hbase-0.2.0 then started up just fine.  We only ran a few tests with the
> new Hbase command-line shell, using it much the way we used the old HQL shell
> for sanity checks, and everything seems copacetic.
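
For completeness, a scorched-earth rebuild of that sort on a 0.17-era install would be along these lines (the local storage paths are assumptions based on the default `hadoop-site.xml` layout; this wipes all data, so it only makes sense on a throwaway test VM):

```shell
# stop everything first (helper scripts shipped in the hbase/hadoop bin/ dirs)
stop-hbase.sh
stop-dfs.sh

# wipe the namenode and datanode storage
# (default dfs.name.dir/dfs.data.dir locations are an assumption)
rm -rf /tmp/hadoop-${USER}/dfs/name /tmp/hadoop-${USER}/dfs/data

# re-format the namenode and bring the stack back up
hadoop namenode -format
start-dfs.sh
start-hbase.sh
```

Again, this depends on a configured cluster, so it is a sketch of the procedure rather than something to run verbatim.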
>
> A few other comments:
>
> - The new shell takes a bit of getting used to, but seems quite functional
> (we're not the biggest Ruby fans, but hey, someone took this on and upgraded
> the shell so we just say: Thanks!)
>
> - We really like how timestamps have become first-class objects in the
> HBase-0.2.0 API.  However, we were in the middle of developing some code
> under HBase-0.1.3 with workarounds for timestamps not being first-class
> objects, and we will have to decide whether we should back up and re-develop
> for HBase-0.2.0 (we know we should), or plunge ahead with what we were doing
> under HBase-0.1.3 only to discard it in the near future because of the other
> advantages of HBase-0.2.0.  Is there anything we should consider in making
> this decision, perhaps about the timing of any bug fixes and an official
> release of HBase-0.2.0 (HBase-0.2.1?)?
>
> Thanks for what continues to be a very useful and interesting product.
>
> Rick
>
>
> On Jul 22, 2008, at 3:24 PM, stack wrote:
>
>> The first 0.2.0 release candidate is available for download:
>>
>> http://people.apache.org/~stack/hbase-0.2.0-candidate-1/
>>
>> Please take this release candidate for a spin. Check the documentation,
>> that unit tests all complete on your platform, etc.
>>
>> Should we release this candidate as hbase 0.2.0?  Vote yes or no before
>> Friday, July 25th.
>>
>> Release 0.2.0 has over 240 issues resolved [1] since the branch for 0.1
>> hbase was made.  Be warned that hbase 0.2.0 is not backward compatible with
>> the hbase 0.1 API.  See [2] Izaak Rubins' notes on the high-level API
>> differences between 0.1 and 0.2.  For notes on how to migrate your 0.1 era
>> hbase data to 0.2, see Izaak's migration guide [3].
>>
>> Yours,
>> The HBase Team
>>
>> 1.
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12312955&styleName=Html&projectId=12310753&Create=Create
>> 2. http://wiki.apache.org/hadoop/Hbase/Plan-0.2/APIChanges
>> 3. http://wiki.apache.org/hadoop/Hbase/HowToMigrate
>>
>
>
