Sounds like you were running in standalone mode. In standalone mode, both the master and regionserver run in the same JVM. I'm not sure how log replay works in this case, if at all. Maybe post more of the log to pastebin?
Are you on TRUNK? If you are up to a recent TRUNK, the logs should be replayed even on restart of the replay conductor, the hbase master (HBASE-698). Previously, if the master was restarted when a log replay was needed, logs would not be recovered.

St.Ack

On Mon, Jul 13, 2009 at 8:16 AM, Joel Nothman <jnoth...@student.usyd.edu.au> wrote:

> I am trying out hbase on my local machine. I ran out of file handles while
> loading Wikipedia pages into a table as a test:
>
> 2009-07-13 18:59:20,223 FATAL
> org.apache.hadoop.hbase.regionserver.MemcacheFlusher: Replay of hlog
> required. Forcing server shutdown
> org.apache.hadoop.hbase.DroppedSnapshotException: region: enwiki0903,Port
> Vila,1247474149864
>         at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:903)
>         at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:796)
>         at org.apache.hadoop.hbase.regionserver.MemcacheFlusher.flushRegion(MemcacheFlusher.java:265)
>         at org.apache.hadoop.hbase.regionserver.MemcacheFlusher.run(MemcacheFlusher.java:148)
> Caused by: java.io.FileNotFoundException:
> /home/joel/hbase-root/hbase-joel/hbase/enwiki0903/843294683/expanded/mapfiles/5105714541107922778/data
> (Too many open files)
>         at java.io.FileOutputStream.open(Native Method)
>
> I could not find any documentation on how to manually replay the hlog.
>
> It doesn't seem to have been done automatically when I shut down and
> restarted the server.
>
> Getting a row count of the table I had been loading data into gave me no
> results. But I have 31GB of data in my hbase data directory, much of which
> is in oldlogfile.log files.
>
> How do I recover the data stored there? Why is recovery not automatic?
>
> Thanks,
>
> - Joel
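As a side note for anyone who hits the same wall: the root cause in the stack trace above is the OS per-process file-descriptor limit ("Too many open files"). A first check, sketched below with generic shell usage rather than anything prescribed in this thread, is to inspect the soft and hard limits for the user that launches HBase (the 32768 value is only illustrative):

```shell
# Print the current soft (effective) and hard (ceiling) limits on
# open file descriptors for this shell and its children.
ulimit -Sn
ulimit -Hn

# The soft limit can be raised, up to the hard limit, in the shell
# that starts HBase, e.g. (value is illustrative, not prescriptive):
#   ulimit -n 32768
# Persistent changes are typically made in /etc/security/limits.conf
# for the user running the daemons.
```

Raising the limit does not recover already-dropped edits, but it prevents the flush failure that forced the regionserver shutdown in the first place.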