You have no choice other than to remove those files... you will lose the
related data, but it should be fine if they are only HFiles. Do you have the
list of corrupted files? What kind of files are they?
Also, have you lost a node or a disk? How did you lose about 150 blocks?
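If you don't have the list yet, the NameNode can hand it to you directly. A rough sketch against the HDFS client API (I'm assuming the default /hbase root here; adjust it to your hbase.rootdir):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class ListCorrupt {
  public static void main(String[] args) throws Exception {
    // Ask the NameNode which files under /hbase have missing or corrupt blocks.
    DistributedFileSystem dfs =
        (DistributedFileSystem) FileSystem.get(new Configuration());
    RemoteIterator<Path> it = dfs.listCorruptFileBlocks(new Path("/hbase"));
    while (it.hasNext()) {
      System.out.println(it.next()); // one path per affected file
    }
  }
}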
JM
2015-02-23 2:47
On Feb 23, 2015, at 1:47 AM, Arinto Murdopo ari...@gmail.com wrote:
We're running HBase (0.94.15-cdh4.6.0) on top of HDFS (Hadoop
2.0.0-cdh4.6.0).
For all of our tables, we set the replication factor to 1 (dfs.replication
= 1 in hbase-site.xml). We set it to 1 because we want to minimize the HDFS usage.
HBase/HDFS are maintaining block checksums, so presumably a corrupted block
would fail checksum validation. Increasing the number of replicas increases
the odds that you'll still have a valid block. I'm not an HDFS expert, but
I would be very surprised if HDFS is validating a questionable block
I’m sorry, but I implied checking the checksums of the blocks.
Didn’t think I needed to spell it out. Next time I’ll be a bit more precise.
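For example, just re-reading a file end to end makes the HDFS client verify the per-block CRCs as it goes. A rough sketch (point it at whichever HFile you want to test):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReadVerify {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path p = new Path(args[0]); // e.g. an HFile somewhere under /hbase
    byte[] buf = new byte[64 * 1024];
    FSDataInputStream in = fs.open(p);
    try {
      // Reading is enough: the client validates each block's checksums on the way.
      while (in.read(buf) != -1) {
      }
      System.out.println("read OK: " + p);
    } catch (IOException e) {
      // With dfs.replication=1 there is no healthy replica to fall back to,
      // so a corrupt block surfaces here (often as a ChecksumException).
      System.err.println("failed on " + p + ": " + e);
    } finally {
      in.close();
    }
  }
}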
On Feb 23, 2015, at 2:34 PM, Nick Dimiduk ndimi...@gmail.com wrote:
HBase/HDFS are maintaining block checksums, so presumably a corrupted block
would fail checksum validation.
Arinto:
Probably you should take a look at HBASE-12949.
Cheers
On Mon, Feb 23, 2015 at 5:25 PM, Arinto Murdopo ari...@gmail.com wrote:
@JM:
You mentioned deleting the files; are you referring to HDFS files or files
in HBase?
Our cluster has 15 nodes. We use 14 of them as DNs.
2015-02-23 20:25 GMT-05:00 Arinto Murdopo ari...@gmail.com:
@JM:
You mentioned deleting the files; are you referring to HDFS files or files
in HBase?
Your HBase files are stored in HDFS, so I think we are referring to the same
thing. Look into /hbase in your HDFS to find the HBase files.
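With 0.94 the layout under the root is /hbase/<table>/<region>/<family>/<hfile>. A quick sketch to walk it (again assuming the default /hbase root):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WalkHBaseRoot {
  // Recursively prints every path under the HBase root directory.
  static void walk(FileSystem fs, Path dir) throws Exception {
    for (FileStatus st : fs.listStatus(dir)) {
      System.out.println(st.getPath());
      if (st.isDirectory()) {
        walk(fs, st.getPath());
      }
    }
  }

  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    walk(fs, new Path("/hbase")); // default root; check hbase.rootdir if yours differs
  }
}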
On Tue, Feb 24, 2015 at 9:46 AM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
I don't have the list of the corrupted files yet. I notice that when I try
to Get some of the affected data, my HBase client code throws exceptions like this:
org.apache.hadoop.hbase.client.RetriesExhaustedException:
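For context, that exception is what the client throws once all of its retries fail. A minimal sketch of the call pattern against the 0.94 API (the table and row names are made up):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.RetriesExhaustedException;
import org.apache.hadoop.hbase.util.Bytes;

public class GetProbe {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    conf.setInt("hbase.client.retries.number", 3); // fail fast instead of the default 10
    HTable table = new HTable(conf, "mytable");    // hypothetical table name
    try {
      Result r = table.get(new Get(Bytes.toBytes("some-row"))); // hypothetical row
      System.out.println("got " + r.size() + " KeyValues");
    } catch (RetriesExhaustedException e) {
      // Thrown once every retry has failed, e.g. when a backing HFile is unreadable.
      System.err.println("get failed after retries: " + e.getMessage());
    } finally {
      table.close();
    }
  }
}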
@JM:
You mentioned deleting the files; are you referring to HDFS files or files
in HBase?
Our cluster has 15 nodes. We use 14 of them as DNs. Actually, we tried to
enable the remaining one as a DN (so that we would have 15 DNs), but then we
disabled it (so now we have 14 again). Probably our crawlers
Hi all,
We're running HBase (0.94.15-cdh4.6.0) on top of HDFS (Hadoop
2.0.0-cdh4.6.0).
For all of our tables, we set the replication factor to 1 (dfs.replication
= 1 in hbase-site.xml). We set it to 1 because we want to minimize the HDFS
usage (now we realize we should set this value to at least 2,
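One thing to keep in mind: dfs.replication is only a default for newly written files, so files that already exist keep their old factor until it is changed explicitly. A hedged sketch that raises it on existing files (the /hbase path is again an assumption):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RaiseReplication {
  // Bumps the replication factor of every file under a directory to 2.
  static void raise(FileSystem fs, Path dir) throws Exception {
    for (FileStatus st : fs.listStatus(dir)) {
      if (st.isDirectory()) {
        raise(fs, st.getPath());
      } else if (st.getReplication() < 2) {
        fs.setReplication(st.getPath(), (short) 2); // NameNode schedules the extra copies
      }
    }
  }

  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    raise(fs, new Path("/hbase"));
  }
}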