On Fri, Apr 13, 2012 at 9:06 PM, Stack wrote:
> On Fri, Apr 13, 2012 at 8:02 PM, Todd Lipcon wrote:
>> If you want to patch on the HBase side, you can edit HLog.java to
>> remove the checks for the "sync" method, and have it only call
>> "hflush". It's only the compatibility path that caused the
Dear Jon:
We just ran OfflineMetaRepair and got the following exceptions. Checked
online... it seems that this is a bug. Any suggestions on how to check out
the most up-to-date version of OfflineMetaRepair to work with our version of
HBase? Thanks in advance.
12/04/15 12:28:35 INFO util.HBaseFsck: Lo
Thanks, St. Ack & Jon. To answer St. Ack's question, we are using HBase
0.90.6, and the data corruption happens when some data nodes are lost due
to the power issue. We've tried hbck and it reports that ROOT is not found,
and hfsk reports two blocks of ROOT and META are CORUPT status.
Jon: We just
There are two tools that can try to help you (unfortunately, I haven't
written the user documentation for either yet).
One is called OfflineMetaRepair. This assumes that HBase is offline and reads
the data in HDFS to create a new ROOT and a new META. If your data is in
good shape, this should work for
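For reference, a minimal sketch of invoking it with HBase fully shut down (the
package name is taken from recent builds and the accepted flags vary by
version, so treat the details as assumptions):

  import org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair;

  public class RunOfflineMetaRepair {
    // Equivalent to: hbase org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair
    // Cluster configuration is picked up from the classpath (hbase-site.xml).
    public static void main(String[] args) throws Exception {
      OfflineMetaRepair.main(new String[] {});
    }
  }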
On Sat, Apr 14, 2012 at 1:35 PM, Yabo Xu wrote:
> Hi all:
>
> Just had a desperate night. We had a small production HBase cluster (8
> nodes), and due to the accidental crash of a few nodes, ROOT and META are
> corrupted, while the rest of the tables are mostly there. Is there any way to
> restore ROOT and META?
And what was the timeout issue?
St.Ack
On Sat, Apr 14, 2012 at 3:23 PM, Stack wrote:
> On Sat, Apr 14, 2012 at 9:49 AM, Chris Tarnas wrote:
>> I looked into org.apache.hadoop.hbase.regionserver.wal.HLog --split and
>> didn't see any notes about not running on a live cluster so I ran it and it
On Sat, Apr 14, 2012 at 9:49 AM, Chris Tarnas wrote:
> I looked into org.apache.hadoop.hbase.regionserver.wal.HLog --split and
> didn't see any notes about not running on a live cluster so I ran it and it
> ran fine. Was it safe to run with hbase up? Were the newly created files
> correctly a
Hi all:
Just had a desperate night. We had a small production HBase cluster (8
nodes), and due to the accidental crash of a few nodes, ROOT and META are
corrupted, while the rest of the tables are mostly there. Is there any way to
restore ROOT and META?
Any hints would be appreciated very much.
As far as I understand, sequential keys with a timerange scan have the best
read performance possible, because of the HFile metadata, just as N
indicates. Maybe adding Bloom filters can further improve the performance.
Still, in my case with random keys I get a quick (sub-second) response from my
scan examp
Thanks N! That's a good point. I'll update the RefGuide with that.
So if the data is evenly distributed (and evenly old per HFile) you still
have the same problem, but it's conceivable that this might not be the case.
This is a case where monotonically increasing keys would actually help you.
O
Hello all,
We had a node die on us, and the master could not recover the HLogs due to
timeout issues, but HBase stayed up and all of the regions were re-assigned.
The node crashed hard (IT is investigating why) so we were not able to just
restart it.
I looked into org.apache.hadoop.hbase.regionserver.wal.HLog --split and
didn't see any notes about not running on a live cluster, so I ran it and it
ran fine.
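For anyone following along, a sketch of driving that split programmatically;
it mirrors the command-line form above. The .logs path is a placeholder and
argument handling differs between versions, so this is an assumption, not a
recipe.

  import org.apache.hadoop.hbase.regionserver.wal.HLog;

  public class SplitDeadServerLogs {
    public static void main(String[] args) throws Exception {
      // Point --split at the dead server's directory under /hbase/.logs
      // (the path below is hypothetical).
      HLog.main(new String[] {
          "--split",
          "hdfs://namenode:8020/hbase/.logs/deadserver.example.com,60020,1334390000000"
      });
    }
  }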
Hi,
For the filtering part, every HFile is associated with a set of metadata.
This metadata includes the time range. So if there is no overlap between
the time range you want and the time range of the store file, the HFile is
skipped entirely.
This work is done in StoreScanner#selectScannersFrom.
Cheers
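To make the client side of that concrete, a small sketch (table and family
names are placeholders): set a time range on the Scan, and store files whose
recorded time range falls entirely outside it never get a scanner.

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.client.Result;
  import org.apache.hadoop.hbase.client.ResultScanner;
  import org.apache.hadoop.hbase.client.Scan;
  import org.apache.hadoop.hbase.util.Bytes;

  public class TimeRangeScanExample {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      HTable table = new HTable(conf, "mytable");
      Scan scan = new Scan();
      scan.addFamily(Bytes.toBytes("cf"));
      // Only cells written in the last hour; store files whose metadata
      // time range does not overlap this window are pruned before any
      // data is read (StoreScanner#selectScannersFrom).
      long now = System.currentTimeMillis();
      scan.setTimeRange(now - 3600 * 1000L, now);
      ResultScanner scanner = table.getScanner(scan);
      for (Result r : scanner) {
        System.out.println(Bytes.toStringBinary(r.getRow()));
      }
      scanner.close();
      table.close();
    }
  }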
Hi there-
With respect to:
"* Does it need to hit every memstore and HFile to determine if there
isdata available? And if so does it need to do a full scan of that file to
determine the records qualifying to the timerange, since keys are stored
lexicographically?"
And...
"Using "scan 'table', {
I'm trying to find a definitive answer to the question of whether scans on a
timerange alone will scale when you use uniformly distributed keys like
UUIDs.
Since the keys are randomly generated, that means the keys will be
spread out over all RegionServers, Regions, and HFiles. In theory, assuming
enough
Stack
That approach should fix the problem as well.
Some updates from my previous comments --
* The problems I encountered initially against cdh3u3 were due to a
misconfiguration on my part (I had a stress config on for that test; when it
was removed, cdh3u3 performed well).
* Nagle modifications didn't make a s