Re: 0.92 and Read/writes not scaling

2012-04-15 Thread Todd Lipcon
On Fri, Apr 13, 2012 at 9:06 PM, Stack st...@duboce.net wrote:
 On Fri, Apr 13, 2012 at 8:02 PM, Todd Lipcon t...@cloudera.com wrote:
 If you want to patch on the HBase side, you can edit HLog.java to
 remove the checks for the sync method, and have it only call
 hflush. It's only the compatibility path that caused the problem.


 You mean change the order here boss?

Yep - invoking hflush instead of syncfs should fix the issue on older
0.23.x/CDH4 builds, I think (though I didn't test it). Going forward
it won't matter though.

FYI I verified that the fix made it into our nightly CDH4 build last
night (0.23.1+360)

-Todd



  @Override
  public void sync() throws IOException {
    if (this.syncFs != null) {
      try {
        this.syncFs.invoke(this.writer, HLog.NO_ARGS);
      } catch (Exception e) {
        throw new IOException("Reflection", e);
      }
    } else if (this.hflush != null) {
      try {
        this.hflush.invoke(getWriterFSDataOutputStream(), HLog.NO_ARGS);
      } catch (Exception e) {
        throw new IOException("Reflection", e);
      }
    }
  }
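
For illustration, here is a self-contained sketch of a sync() that prefers hflush over the legacy syncFs — the reordering suggested above. The class and names (FakeStream, SyncOrderSketch) are illustrative stand-ins, not HBase's actual classes:

```java
import java.io.IOException;
import java.lang.reflect.Method;

// Simplified stand-in for the WAL writer's output stream.
class FakeStream {
    String lastCall = null;
    public void hflush() { lastCall = "hflush"; } // durable flush on 0.23+/CDH4
    public void sync()   { lastCall = "sync"; }   // legacy API
}

public class SyncOrderSketch {
    private static final Object[] NO_ARGS = new Object[] {};

    // Prefer hflush over the legacy syncFs, mirroring the reordering
    // discussed in the thread (field names are illustrative).
    static void sync(FakeStream stream, Method hflush, Method syncFs)
            throws IOException {
        try {
            if (hflush != null) {
                hflush.invoke(stream, NO_ARGS);
            } else if (syncFs != null) {
                syncFs.invoke(stream, NO_ARGS);
            }
        } catch (Exception e) {
            throw new IOException("Reflection", e);
        }
    }

    public static void main(String[] args) throws Exception {
        FakeStream s = new FakeStream();
        Method hflush = FakeStream.class.getMethod("hflush");
        Method syncFs = FakeStream.class.getMethod("sync");
        sync(s, hflush, syncFs);
        System.out.println(s.lastCall); // "hflush" — preferred when both exist
    }
}
```

With this ordering, the legacy path is only taken when hflush is absent, which matches Todd's point that only the compatibility path caused the problem.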


 Call hflush if it's available, ahead of syncFs?

 Seems like we should get this in all around.  I can do it.

 Good stuff,
 St.Ack



-- 
Todd Lipcon
Software Engineer, Cloudera


Re: Help: ROOT and META!!

2012-04-15 Thread Yabo Xu
Hi Jon:

Please ignore my last email. We found it was a bug, fixed it with a patch and
a rebuild, and it works now. The data are back! Thanks.

Best,
Arber



On Sun, Apr 15, 2012 at 12:47 PM, Yabo Xu arber.resea...@gmail.com wrote:

 Dear Jon:

 We just ran OfflineMetaRepair and got the following exceptions.
 Checked online... it seems that it is a bug. Any suggestions on how to check
 out the most up-to-date version of OfflineMetaRepair to work with our version
 of HBase? Thanks in advance.

 12/04/15 12:28:35 INFO util.HBaseFsck: Loading HBase regioninfo from
 HDFS...
 12/04/15 12:28:39 ERROR util.HBaseFsck: Bailed out due to:
 java.lang.IllegalArgumentException: Wrong FS: hdfs://
 n4.example.com:12345/hbase/summba.yeezhao.content/03cde9116662fade27545d86ea71a372/.regioninfo,
 expected: file:///
  at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:310)
 at
 org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:47)
  at
 org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:357)
 at
 org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:245)
  at
 org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.init(ChecksumFileSystem.java:125)
 at
 org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:283)
  at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:356)
 at org.apache.hadoop.hbase.util.HBaseFsck.loadMetaEntry(HBaseFsck.java:256)
  at
 org.apache.hadoop.hbase.util.HBaseFsck.loadTableInfo(HBaseFsck.java:284)
 at org.apache.hadoop.hbase.util.HBaseFsck.rebuildMeta(HBaseFsck.java:402)
  at
 org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair.main(OfflineMetaRepair.java:90)

 We checked on HDFS, and the files shown in the exception are available. Any
 pointers?

 Best,
 Arber


 On Sun, Apr 15, 2012 at 11:48 AM, Yabo Xu arber.resea...@gmail.com wrote:

 Thanks, St.Ack and Jon. To answer St.Ack's question, we are using HBase
 0.90.6, and the data corruption happened when some data nodes were lost due
 to a power issue. We've tried hbck and it reports that ROOT is not found,
 and fsck reports that two blocks of ROOT and META are in CORRUPT status.

 Jon: We just checked OfflineMetaRepair; it seems to be the right tool, and
 we are trying it now. Just want to confirm: is it compatible with 0.90.6?

 Best,
 Arber


 On Sun, Apr 15, 2012 at 8:55 AM, Jonathan Hsieh j...@cloudera.com wrote:

 There are two tools that can try to help you (unfortunately, I haven't
 written the user documentation for either yet).

 One is called OfflineMetaRepair.  It assumes that hbase is offline and reads
 the data in HDFS to create a new ROOT and a new META.  If your data is in
 good shape, this should work for you.  Depending on which version of hadoop
 you are using, you may need to apply HBASE-5488.

 On the latest branches of hbase (0.90/0.92/0.94/trunk) the hbck tool has
 been greatly enhanced and may be able to help out as well once an initial
 META table is built and your hbase is able to get online.  This currently
 requires patch HBASE-5781 to be applied to be useful.

 Jon.


 On Sat, Apr 14, 2012 at 1:35 PM, Yabo Xu arber.resea...@gmail.com
 wrote:

  Hi all:
 
  Just had a desperate night... We had a small production hbase cluster (8
  nodes), and due to the accidental crash of a few nodes, ROOT and META are
  corrupted, while the rest of the tables are mostly there. Is there any way
  to restore ROOT and META?
 
  Any hints would be appreciated very much! Waiting on line...
 
  Best,
  Arber
 



 --
 // Jonathan Hsieh (shay)
 // Software Engineer, Cloudera
 // j...@cloudera.com






dump HLog content!

2012-04-15 Thread yonghu
Hello,

My hbase version is 0.92.0, installed in pseudo-distributed mode. I found a
strange situation with the HLog. After I inserted new data into a table,
the size of the HLog was 0 when I checked in HDFS:

drwxr-xr-x   - yonghu supergroup  0 2012-04-15 17:34 /hbase/.logs
drwxr-xr-x   - yonghu supergroup  0 2012-04-15 17:34
/hbase/.logs/yonghu-laptop,60020,1334504008467
-rw-r--r--   3 yonghu supergroup  0 2012-04-15 17:34
/hbase/.logs/yonghu-laptop,60020,1334504008467/yonghu-laptop%2C60020%2C1334504008467.1334504048854

But I can use 'hbase org.apache.hadoop.hbase.regionserver.wal.HLog
--dump' to see the content of the log. However, if I write a Java
program to extract the log information, the output is null! Does anybody
know why?

Thanks!

Yong


Re: dump HLog content!

2012-04-15 Thread Ted Yu
Did 'HLog --dump' show real contents for a 0-sized file?

Cheers



Re: dump HLog content!

2012-04-15 Thread yonghu
Thanks for your reply. After nearly 60 minutes, I can see the HLog size:

-rw-r--r--   3 yonghu supergroup   2125 2012-04-15 17:34
/hbase/.logs/yonghu-laptop,60020,1334504008467/yonghu-laptop%2C60020%2C1334504008467.1334504048854

I have no idea why it takes so long.

Yong

On Sun, Apr 15, 2012 at 6:34 PM, yonghu yongyong...@gmail.com wrote:
 yes

 On Sun, Apr 15, 2012 at 6:30 PM, Ted Yu yuzhih...@gmail.com wrote:
 Did 'HLog --dump' show real contents for a 0-sized file ?




Re: dump HLog content!

2012-04-15 Thread Manish Bhoge
Yong,

It is the HLog roll behavior that keeps the log size at 0 until a complete
block is written, OR until the log roll duration set in the configuration
(default 60 min) elapses. However, the edits are still persisted in .edit
files in the meantime, and once the log roll interval is reached the size is
written back to the log. That is why you can then see a log size of more than
zero bytes; eventually the log is also moved into .oldlogs.

Thanks
Manish
Sent from my BlackBerry, pls excuse typo




Re: dump HLog content!

2012-04-15 Thread yonghu
Thanks for the important information. I found the corresponding setting in
the hbase-default.xml file.
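
For reference, the relevant property in hbase-default.xml looks like this (the 60-minute default Manish mentioned; override it in hbase-site.xml if needed — property name as of the 0.92 line):

```xml
<property>
  <name>hbase.regionserver.logroll.period</name>
  <!-- Roll the WAL after this period, in milliseconds (3600000 = 1 hour) -->
  <value>3600000</value>
</property>
```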

Regards!

Yong
