Thanks. Your configuration looks fine. (The defaults point to somewhere in /tmp, which is bad for data longevity, so they must always be changed.)
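For the archives: overriding the defaults is just a matter of pointing those properties at a durable location in hadoop-site.xml. A minimal sketch, same shape as your settings below (the /data paths are only placeholders; use whatever persistent volume you have):

<property>
  <name>dfs.name.dir</name>
  <value>/data/hadoop/dfs/name</value> <!-- placeholder path -->
</property>

<property>
  <name>dfs.data.dir</name>
  <value>/data/hadoop/dfs/data</value> <!-- placeholder path -->
</property>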
Did I understand you correctly that the corruption appeared to happen during the period when your cluster and DFS were unstable due to excessive load? There is, alas, no "hbasefsck" yet, so if the mapfiles become corrupted in DFS there is little that can be done except to drop the table, recreate it, and start over. (For this reason my application currently treats HBase as an enormous but temporary workspace: any critical data is replicated to different storage media, and any data loss is only a setback in terms of needing to recompute what was lost.) "hbasefsck" is on the roadmap, as are additional data integrity measures that become possible once HADOOP-1700 (appends in HDFS) is ready.

   - Andy

--- On Wed, 7/23/08, Renaud Delbru <[EMAIL PROTECTED]> wrote:

> From: Renaud Delbru <[EMAIL PROTECTED]>
> Subject: Re: HRegionServer: error opening region
> To: [email protected]
> Date: Wednesday, July 23, 2008, 9:02 AM
>
> Hi Andrew,
>
> here are the settings:
>
> <property>
>   <name>hadoop.tmp.dir</name>
>   <value>/sindice-data/hadoop/tmp</value>
> </property>
>
> <property>
>   <name>dfs.data.dir</name>
>   <value>/sindice-data/hadoop/tmp/dfs/data/</value>
> </property>
>
> <property>
>   <name>dfs.name.dir</name>
>   <value>/sindice-data/hadoop/tmp/dfs/name</value>
> </property>
>
> --
> Renaud Delbru
>
> Andrew Purtell wrote:
> > Hi Renaud,
> >
> > What are the settings of "hadoop.tmp.dir", "dfs.name.dir", and
> > "dfs.data.dir" in your hadoop-site.xml?
> >
> >    - Andy
> >
> > --- On Wed, 7/23/08, Renaud Delbru <[EMAIL PROTECTED]> wrote:
> >
> >> From: Renaud Delbru <[EMAIL PROTECTED]>
> >> Subject: HRegionServer: error opening region
> >> To: [email protected]
> >> Date: Wednesday, July 23, 2008, 7:54 AM
> >>
> >> Hi,
> >>
> >> after our issues ("Replay of HLog required", in a previous thread)
> >> with HBase, it seems that HBase has corrupted regions.
> >> We have, on the three region servers, errors stating that HBase
> >> cannot open certain regions because some map files on HDFS are
> >> missing (see the log attached).
> >>
> >> Do you have any ideas how to fix this?
> >>
> >> Thanks.
> >> --
> >> Renaud Delbru
> >>
> >> java.io.FileNotFoundException: File does not exist:
> >> hdfs://hadoop1.sindice.net:54310/hbase/page-repository/1105668475/field/mapfiles/5122893264992435570/data
> >>     at org.apache.hadoop.dfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:369)
> >>     at org.apache.hadoop.hbase.regionserver.HStoreFile.length(HStoreFile.java:464)
> >>     at org.apache.hadoop.hbase.regionserver.HStore.loadHStoreFiles(HStore.java:409)
> >>     at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:236)
> >>     at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:1575)
> >>     at org.apache.hadoop.hbase.regionserver.HRegion.<init>(HRegion.java:451)
> >>     at org.apache.hadoop.hbase.regionserver.HRegionServer.instantiateRegion(HRegionServer.java:901)
> >>     at org.apache.hadoop.hbase.regionserver.HRegionServer.openRegion(HRegionServer.java:876)
> >>     at org.apache.hadoop.hbase.regionserver.HRegionServer$Worker.run(HRegionServer.java:816)
> >>     at java.lang.Thread.run(Thread.java:619)
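P.S. If you want to confirm exactly which store files went missing before dropping the table, you can probe DFS directly. A minimal sketch against the stock Hadoop FileSystem API (the path is the one from your log excerpt; the class name is just mine for illustration):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ProbeMapfile {
  public static void main(String[] args) throws Exception {
    // Picks up hadoop-site.xml from the classpath, same as the daemons do.
    Configuration conf = new Configuration();
    // One of the files the region server could not open:
    Path data = new Path("hdfs://hadoop1.sindice.net:54310/hbase/"
        + "page-repository/1105668475/field/mapfiles/5122893264992435570/data");
    // Resolve the filesystem from the path so the hdfs:// scheme is honored.
    FileSystem fs = data.getFileSystem(conf);
    System.out.println(data + (fs.exists(data) ? " exists" : " is MISSING"));
  }
}

The same check from the command line is just: bin/hadoop fs -ls <path>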
