Re: Help: ROOT and META!!

2012-04-16 Thread Yabo Xu
Yes, it is. Thanks.

Best,
Arber





Re: Help: ROOT and META!!

2012-04-16 Thread Jonathan Hsieh
Arber,

Good to hear! Just to confirm: is the bug/patch the same as HBASE-5488?

Jon.


-- 
// Jonathan Hsieh (shay)
// Software Engineer, Cloudera
// j...@cloudera.com


Re: Help: ROOT and META!!

2012-04-15 Thread Yabo Xu
Hi Jon:

Please ignore my last email. We found it was a bug, fixed it with a patch,
rebuilt, and it works now. The data are back! Thanks.

Best,
Arber





Re: Help: ROOT and META!!

2012-04-14 Thread Yabo Xu
Dear Jon:

We just ran OfflineMetaRepair but got the following exception. From checking
online, it seems to be a known bug. Any suggestions on how to get the latest
version of OfflineMetaRepair working with our version of HBase? Thanks in
advance.

12/04/15 12:28:35 INFO util.HBaseFsck: Loading HBase regioninfo from HDFS...
12/04/15 12:28:39 ERROR util.HBaseFsck: Bailed out due to:
java.lang.IllegalArgumentException: Wrong FS: hdfs://n4.example.com:12345/hbase/summba.yeezhao.content/03cde9116662fade27545d86ea71a372/.regioninfo, expected: file:///
	at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:310)
	at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:47)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:357)
	at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:245)
	at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:125)
	at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:283)
	at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:356)
	at org.apache.hadoop.hbase.util.HBaseFsck.loadMetaEntry(HBaseFsck.java:256)
	at org.apache.hadoop.hbase.util.HBaseFsck.loadTableInfo(HBaseFsck.java:284)
	at org.apache.hadoop.hbase.util.HBaseFsck.rebuildMeta(HBaseFsck.java:402)
	at org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair.main(OfflineMetaRepair.java:90)

We checked on HDFS, and the files shown in the exception are there. Any
pointers?

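For context, a "Wrong FS ... expected: file:///" error means the tool resolved the hdfs:// path against the default local filesystem, which usually happens when the cluster configuration is not on the classpath; Hadoop's FileSystem.checkPath rejects any path whose URI scheme differs from the filesystem's own. A rough Python sketch of that check (an illustration, not Hadoop's actual code):

```python
from urllib.parse import urlparse

def check_path(fs_scheme: str, path: str) -> None:
    """Mimic the scheme check behind 'Wrong FS': a filesystem rejects
    paths whose URI scheme does not match its own scheme."""
    # A scheme-less path (e.g. "/hbase/...") inherits the filesystem's scheme.
    scheme = urlparse(path).scheme or fs_scheme
    if scheme != fs_scheme:
        raise ValueError(f"Wrong FS: {path}, expected: {fs_scheme}:///")

# No scheme: accepted by the local ("file") filesystem.
check_path("file", "/hbase/some-table/.regioninfo")

# An hdfs:// path handed to the local filesystem is rejected, which is
# what happens when OfflineMetaRepair falls back to file:/// because it
# did not pick up the cluster configuration.
try:
    check_path("file", "hdfs://n4.example.com:12345/hbase/.regioninfo")
except ValueError as e:
    print(e)
```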
Best,
Arber



Re: Help: ROOT and META!!

2012-04-14 Thread Yabo Xu
Thanks, St.Ack & Jon. To answer St.Ack's question: we are using HBase 0.90.6,
and the data corruption happened when some data nodes were lost due to a power
issue. We've tried hbck, and it reports that ROOT is not found; fsck reports
that two blocks of ROOT and META are in CORRUPT status.

Jon: we just checked OfflineMetaRepair; it seems to be the right tool, and we
are trying it now. Just to confirm: is it compatible with 0.90.6?

Best,
Arber




Re: Help: ROOT and META!!

2012-04-14 Thread Jonathan Hsieh
There are two tools that can try to help you (unfortunately, I haven't written
the user documentation for either yet).

One is called OfflineMetaRepair. It assumes that HBase is offline, and it
reads the data in HDFS to create a new ROOT and a new META. If your data is in
good shape, this should work for you. Depending on which version of Hadoop you
are using, you may need to apply HBASE-5488.

On the latest branches of HBase (0.90/0.92/0.94/trunk), the hbck tool has been
greatly enhanced and may be able to help out as well, once an initial META
table is built and your HBase is able to get online. This currently requires
the patch from HBASE-5781 to be useful.
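
Concretely, the sequence described above might look like the following. This is a sketch only: the exact flags depend on the HBase version, and both commands must run on a node with the cluster's HBase/Hadoop configuration on the classpath (the class name is the one from the OfflineMetaRepair stack trace elsewhere in this thread).

```shell
# 1. With HBase fully stopped, rebuild ROOT/META from the region
#    directories found in HDFS.
hbase org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair

# 2. Start HBase, then let hbck report remaining inconsistencies
#    once META is back and the cluster is online.
hbase hbck

# 3. Optionally attempt repairs (on a patched/recent hbck).
hbase hbck -fix
```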

Jon.




-- 
// Jonathan Hsieh (shay)
// Software Engineer, Cloudera
// j...@cloudera.com


Re: Help: ROOT and META!!

2012-04-14 Thread Stack
On Sat, Apr 14, 2012 at 1:35 PM, Yabo Xu  wrote:
> Hi all:
>
> Just had a desperate  nightWe had a small production hbase cluster( 8
> nodes), and due to the accident crash of a few nodes, ROOT and META are
> corrupted, while the rest of tables are mostly there. Are there any way to
> restore ROOT and META?
>

ROOT has nothing in it but the location of .META. How is it corrupted?

Regarding META: have you tried hbck on it? What version of HBase are you running?
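
If the shell can still reach the cluster, one quick way to see what (if anything) remains in the catalog tables is to scan them directly. This assumes a 0.90-era HBase where the catalog tables are named `-ROOT-` and `.META.`; treat it as a sketch:

```shell
# -ROOT- should contain a single row pointing at the .META. region.
echo "scan '-ROOT-'" | hbase shell

# .META. holds one row per region of every user table.
echo "scan '.META.'" | hbase shell
```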

St.Ack


Help: ROOT and META!!

2012-04-14 Thread Yabo Xu
Hi all:

Just had a desperate night... We have a small production HBase cluster (8
nodes), and due to the accidental crash of a few nodes, ROOT and META are
corrupted, while the rest of the tables are mostly there. Is there any way to
restore ROOT and META?

Any hints would be appreciated very much! Waiting online...

Best,
Arber