Thanks -

So we are up and running, but on the web UI we see "You are currently running 
the HMaster without HDFS append support enabled. This may result in data loss. 
Please see the HBase wiki for details.". 

" To enable sync, first ensure that you have either compiled the 0.20-append 
branch from Apache, or installed Cloudera's CDH3 "

We are running 0.20.3-dev... we didn't see this message with hbase
0.20.6, but now we see it after upgrading to 0.90.1. Is this message
reliable? Should I worry?

-geoff

-----Original Message-----
From: jdcry...@gmail.com [mailto:jdcry...@gmail.com] On Behalf Of Jean-Daniel 
Cryans
Sent: Friday, March 04, 2011 3:15 PM
To: user@hbase.apache.org
Subject: Re: 0.90.1 hbase-default.xml

Did you replace the hadoop jar in the hbase lib? It's compatible but
it still requires the same jar (yeah... it's a mess at the moment
since the append branch doesn't have a release).
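A minimal sketch of that jar swap, using throwaway temp directories to stand in for the real installs (the jar names and paths below are illustrative, not the exact files shipped in any given release):

```shell
# Mock of swapping hbase's bundled hadoop jar for the cluster's own jar.
# Temp dirs stand in for real HBASE_HOME/HADOOP_HOME; jar names are examples.
HBASE_HOME=$(mktemp -d)/hbase-0.90.1
HADOOP_HOME=$(mktemp -d)/hadoop-0.20.3-dev
mkdir -p "$HBASE_HOME/lib" "$HADOOP_HOME"
touch "$HBASE_HOME/lib/hadoop-core-0.20-append.jar"   # jar bundled with hbase
touch "$HADOOP_HOME/hadoop-0.20.3-dev-core.jar"       # jar the cluster runs

# The actual swap: remove the bundled jar, copy in the cluster's jar.
rm "$HBASE_HOME"/lib/hadoop-core-*.jar
cp "$HADOOP_HOME/hadoop-0.20.3-dev-core.jar" "$HBASE_HOME/lib/"
ls "$HBASE_HOME/lib"
```

After a real swap, restart HBase so the master and regionservers pick up the replaced jar.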

You might also want to consider using CDH3b4, which has a compatible
hadoop and hbase.

J-D

On Fri, Mar 4, 2011 at 3:10 PM, Geoff Hendrey <ghend...@decarta.com> wrote:
> Any advice on this one? It occurs when I start HBase, and then the master
> shuts down. I am running hadoop-0.20.3-dev for hdfs: ClientProtocol
> version mismatch (client = 42, server = 41). I suppose it means that
> some hdfs client/server protocol is incompatible, but I thought that
> HBase 0.90.1 would work with 0.20.x hadoop.
>
> 2011-03-04 14:57:13,357 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
> org.apache.hadoop.ipc.RPC$VersionMismatch: Protocol org.apache.hadoop.hdfs.protocol.ClientProtocol version mismatch. (client = 42, server = 41)
>        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:364)
>        at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:113)
>        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:215)
>        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:177)
>        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
>        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
>        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
>        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
>        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
>        at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:87)
>        at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:342)
>        at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:278)
> 2011-03-04 14:57:13,359 INFO org.apache.hadoop.hbase.master.HMaster: Aborting
> 2011-03-04 14:57:13,359 DEBUG org.apache.hadoop.hbase.master.HMaster: Stopping service threads
> 2011-03-04 14:57:13,359 INFO org.apache.hadoop.ipc.HBaseServer: Stopping server on 60000
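For what it's worth, the client and server numbers in that error are the protocol version constants compiled into each side's hadoop jar, and the RPC handshake simply compares them. A toy sketch of that check, using the numbers from the log above (variable names are illustrative, not hadoop's actual code):

```shell
# Toy version handshake: each side declares a compiled-in protocol version
# and the connection is refused when they differ.
client_version=42   # version in the hadoop jar hbase loads from its lib dir
server_version=41   # version in the jar the namenode is running
if [ "$client_version" -ne "$server_version" ]; then
  echo "version mismatch (client = $client_version, server = $server_version)"
fi
```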
>
> -----Original Message-----
> From: jdcry...@gmail.com [mailto:jdcry...@gmail.com] On Behalf Of
> Jean-Daniel Cryans
> Sent: Friday, March 04, 2011 2:53 PM
> To: user@hbase.apache.org
> Subject: Re: 0.90.1 hbase-default.xml
>
> It's now included in the hbase jar so that people don't reuse it
> between versions (reusing old copies led to many problems).
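In practice that means only site-specific overrides go in conf/hbase-site.xml, while the defaults stay inside the jar. A minimal sketch of such an overrides file, with placeholder values (the hostname and port below are examples, not this cluster's actual settings):

```xml
<?xml version="1.0"?>
<configuration>
  <!-- Only overrides live here; full defaults ship inside the hbase jar. -->
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://namenode.example.com:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>
```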
>
> J-D
>
> On Fri, Mar 4, 2011 at 2:44 PM, Geoff Hendrey <ghend...@decarta.com>
> wrote:
>> Hi,
>>
>>
>>
>> I tried to use my hbase-default.xml from 0.89 with my new 0.90.1
>> installation. I get a message stating "hbase-default.xml seems to be
>> from an old version of hbase (null); this version is 0.90.1".
>>
>>
>>
>> But 0.90.1 doesn't seem to ship with an hbase-default.xml file (at
>> least not in conf after I extracted the 0.90.1 .tar.gz file). So what
>> is the process to get my configs up and running on 0.90.1?
>>
>>
>>
>> -g
>>
>>
>
