Please do not send the same question to three separate email addresses. Also, logs pasted into email are hard to read. Instead, please put log snippets on a pastebin (see http://en.wikipedia.org/wiki/Pastebin, and the end of that page for some implementations) or put the logs under a webserver where we can pull them, keeping only short snippets in the email itself.
Regarding your question below: first verify that your ulimit setting is actually taking effect. The second or third line of the HBase log prints the ulimit the process sees at startup; make sure it says 32k there. In your log below, the 'Too many open files' errors seem to be coming up out of HDFS. Perhaps HDFS is running as a different user whose ulimit has not been raised? A quick way to check is sketched below.
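For example, something like the following should confirm it (the 'hbase' and 'hdfs' user names and the log path here are only examples; substitute whichever accounts and paths your daemons actually use):

    # The startup lines of the regionserver log show the ulimit the
    # process actually sees (adjust the path to your install):
    head -3 /path/to/hbase/logs/hbase-*-regionserver-*.log

    # Check the open-files limit for each user that runs a daemon
    # (example user names 'hbase' and 'hdfs'):
    su - hbase -c 'ulimit -n'
    su - hdfs -c 'ulimit -n'

    # To raise the limit persistently on Linux, add lines like these to
    # /etc/security/limits.conf, then restart the daemons from a fresh login:
    #   hbase  -  nofile  32768
    #   hdfs   -  nofile  32768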

Thanks,
St.Ack

2010/6/13 chen peng <[email protected]>:
>
> hi, all:
> I ran into a problem after my program had been running for 28+ hours on a
> cluster of three machines, each with ulimit set to 32K:
>
> 2010-06-13 01:42:57,318 INFO org.apache.hadoop.hbase.regionserver.MemStoreFlusher: Forced flushing of nutchtabletest,com.vitaminshoppe.www:http/search/en/category.jsp\x3Ftrail=5003\x253Acat10361\x26catId=cat10361\x26type=category\x26addFacet=5001\x253Ab211,1276366391680 because global memstore limit of 395.6m exceeded; currently 372.2m and flushing till 346.1m
> 2010-06-13 01:42:57,630 INFO org.apache.hadoop.hbase.regionserver.MemStoreFlusher: Forced flushing of nutchtabletest,com.mwave.www:http/mwave/subcategory.asp\x3FCatID=169\x26parent=3\x5E165,1276303221573 because global memstore limit of 395.6m exceeded; currently 349.0m and flushing till 346.1m
> 2010-06-13 01:42:57,949 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on region nutchtabletest,com.teamstore.www:http/product/index.jsp\x3FproductId=3425980\x26cp=1209615.714811\x26parentPage=family,1276342160747
> 2010-06-13 01:42:57,961 INFO org.apache.hadoop.hbase.regionserver.HRegion: compaction completed on region nutchtabletest,com.teamstore.www:http/product/index.jsp\x3FproductId=3425980\x26cp=1209615.714811\x26parentPage=family,1276342160747 in 0sec
> 2010-06-13 01:42:57,962 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on region nutchtabletest,com.vitaminshoppe.www:http/search/en/category.jsp\x3Ftrail=5003\x253Acat10361\x26catId=cat10361\x26type=category\x26addFacet=5001\x253Ab211,1276366391680
> 2010-06-13 01:42:57,973 INFO org.apache.hadoop.hbase.regionserver.HRegion: compaction completed on region nutchtabletest,com.vitaminshoppe.www:http/search/en/category.jsp\x3Ftrail=5003\x253Acat10361\x26catId=cat10361\x26type=category\x26addFacet=5001\x253Ab211,1276366391680 in 0sec
> 2010-06-13 01:42:57,973 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on region nutchtabletest,com.mwave.www:http/mwave/subcategory.asp\x3FCatID=169\x26parent=3\x5E165,1276303221573
> 2010-06-13 01:43:03,360 WARN org.apache.hadoop.hbase.regionserver.Store: Not in set org.apache.hadoop.hbase.regionserver.storescan...@1fe8d89
> 2010-06-13 01:43:05,522 WARN org.apache.hadoop.hbase.regionserver.Store: Not in set org.apache.hadoop.hbase.regionserver.storescan...@109fc1
> 2010-06-13 01:43:05,583 WARN org.apache.hadoop.hbase.regionserver.Store: Not in set org.apache.hadoop.hbase.regionserver.storescan...@57ebd6
> 2010-06-13 01:43:07,716 WARN org.apache.hadoop.hbase.regionserver.Store: Not in set org.apache.hadoop.hbase.regionserver.storescan...@8e8ab5
> 2010-06-13 01:43:07,751 WARN org.apache.hadoop.hbase.regionserver.Store: Not in set org.apache.hadoop.hbase.regionserver.storescan...@146d4d9
> 2010-06-13 01:43:07,966 WARN org.apache.hadoop.hbase.regionserver.Store: Not in set org.apache.hadoop.hbase.regionserver.storescan...@b1258d
> 2010-06-13 01:43:08,061 WARN org.apache.hadoop.hbase.regionserver.Store: Not in set org.apache.hadoop.hbase.regionserver.storescan...@8fa7d4
> 2010-06-13 01:43:08,130 WARN org.apache.hadoop.hbase.regionserver.Store: Not in set org.apache.hadoop.hbase.regionserver.storescan...@c23296
> 2010-06-13 01:43:08,171 WARN org.apache.hadoop.hbase.regionserver.Store: Not in set org.apache.hadoop.hbase.regionserver.storescan...@84add6
> 2010-06-13 01:43:08,269 WARN org.apache.hadoop.hbase.regionserver.Store: Not in set org.apache.hadoop.hbase.regionserver.storescan...@8a7a3a
> 2010-06-13 01:43:08,337 WARN org.apache.hadoop.hbase.regionserver.Store: Not in set org.apache.hadoop.hbase.regionserver.storescan...@cd86a5
> 2010-06-13 01:43:08,345 INFO org.apache.hadoop.hbase.regionserver.HRegion: compaction completed on region nutchtabletest,com.mwave.www:http/mwave/subcategory.asp\x3FCatID=169\x26parent=3\x5E165,1276303221573 in 10sec
> 2010-06-13 02:06:12,436 INFO org.apache.hadoop.hbase.regionserver.MemStoreFlusher: Forced flushing of nutchtabletest,com.cableorganizer:http/briggs-stratton-generators/storm-ready-kit.htm\x3F=recommended,1276264102072 because global memstore limit of 395.6m exceeded; currently 395.6m and flushing till 346.1m
> 2010-06-13 02:06:12,811 INFO org.apache.hadoop.hbase.regionserver.MemStoreFlusher: Forced flushing of nutchtabletest,com.pacsun.shop:http/js_external/sj_flyout.js,1276322851971 because global memstore limit of 395.6m exceeded; currently 372.5m and flushing till 346.1m
> 2010-06-13 02:06:13,136 INFO org.apache.hadoop.hbase.regionserver.MemStoreFlusher: Forced flushing of nutchtabletest,com.vitaminshoppe.www:http/search/en/category.jsp\x3Ftype=category\x26catId=cat10134,1276366391680 because global memstore limit of 395.6m exceeded; currently 350.3m and flushing till 346.1m
> 2010-06-13 02:06:13,562 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on region nutchtabletest,com.cableorganizer:http/briggs-stratton-generators/storm-ready-kit.htm\x3F=recommended,1276264102072
> 2010-06-13 02:06:14,175 INFO org.apache.hadoop.hbase.regionserver.HRegion: compaction completed on region nutchtabletest,com.cableorganizer:http/briggs-stratton-generators/storm-ready-kit.htm\x3F=recommended,1276264102072 in 0sec
> 2010-06-13 02:06:14,175 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting split of region nutchtabletest,com.cableorganizer:http/briggs-stratton-generators/storm-ready-kit.htm\x3F=recommended,1276264102072
> 2010-06-13 02:06:14,178 INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed nutchtabletest,com.cableorganizer:http/briggs-stratton-generators/storm-ready-kit.htm\x3F=recommended,1276264102072
> 2010-06-13 02:06:14,710 INFO org.apache.hadoop.hbase.regionserver.CompactSplitThread: region split, META updated, and report to master all successful. Old region=REGION => {NAME => 'nutchtabletest,com.cableorganizer:http/briggs-stratton-generators/storm-ready-kit.htm\x3F=recommended,1276264102072', STARTKEY => 'com.cableorganizer:http/briggs-stratton-generators/storm-ready-kit.htm\x3F=recommended', ENDKEY => 'com.cableorganizer:http/leviton/thread-lock-fiber-optic-connectors.html', ENCODED => 1777480297, OFFLINE => true, SPLIT => true, TABLE => {{NAME => 'nutchtabletest', FAMILIES => [
>   {NAME => 'bas', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'cnt', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'cnttyp', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'},
>   {NAME => 'fchi', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'fcht', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'hdrs', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'},
>   {NAME => 'ilnk', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'modt', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'mtdt', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'},
>   {NAME => 'olnk', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'prsstt', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'prtstt', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'},
>   {NAME => 'prvfch', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'prvsig', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'repr', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'},
>   {NAME => 'rtrs', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'scr', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'sig', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'},
>   {NAME => 'stt', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'ttl', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'txt', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
>   ]}}, new regions: nutchtabletest,com.cableorganizer:http/briggs-stratton-generators/storm-ready-kit.htm\x3F=recommended,1276391174177, nutchtabletest,com.cableorganizer:http/fire-protection/composite-sheet-pillows.html,1276391174177. Split took 0sec
> 2010-06-13 02:06:14,710 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on region nutchtabletest,com.pacsun.shop:http/js_external/sj_flyout.js,1276322851971
> 2010-06-13 02:06:14,723 INFO org.apache.hadoop.hbase.regionserver.HRegion: compaction completed on region nutchtabletest,com.pacsun.shop:http/js_external/sj_flyout.js,1276322851971 in 0sec
> 2010-06-13 02:06:14,723 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting split of region nutchtabletest,com.pacsun.shop:http/js_external/sj_flyout.js,1276322851971
> 2010-06-13 02:06:14,728 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_OPEN: nutchtabletest,com.cableorganizer:http/briggs-stratton-generators/storm-ready-kit.htm\x3F=recommended,1276391174177
> 2010-06-13 02:06:14,728 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_OPEN: nutchtabletest,com.cableorganizer:http/fire-protection/composite-sheet-pillows.html,1276391174177
> 2010-06-13 02:06:14,728 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Worker: MSG_REGION_OPEN: nutchtabletest,com.cableorganizer:http/briggs-stratton-generators/storm-ready-kit.htm\x3F=recommended,1276391174177
> 2010-06-13 02:06:14,812 INFO org.apache.hadoop.hbase.regionserver.HRegion: Closed nutchtabletest,com.pacsun.shop:http/js_external/sj_flyout.js,1276322851971
> 2010-06-13 02:06:15,373 INFO org.apache.hadoop.hbase.regionserver.HRegion: region nutchtabletest,com.cableorganizer:http/briggs-stratton-generators/storm-ready-kit.htm\x3F=recommended,1276391174177/739848001 available; sequence id is 15639538
> 2010-06-13 02:06:15,373 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Worker: MSG_REGION_OPEN: nutchtabletest,com.cableorganizer:http/fire-protection/composite-sheet-pillows.html,1276391174177
> 2010-06-13 02:06:15,589 INFO org.apache.hadoop.hbase.regionserver.HRegion: region nutchtabletest,com.cableorganizer:http/fire-protection/composite-sheet-pillows.html,1276391174177/1831848882 available; sequence id is 15639539
> 2010-06-13 02:06:15,645 INFO org.apache.hadoop.hbase.regionserver.CompactSplitThread: region split, META updated, and report to master all successful. Old region=REGION => {NAME => 'nutchtabletest,com.pacsun.shop:http/js_external/sj_flyout.js,1276322851971', STARTKEY => 'com.pacsun.shop:http/js_external/sj_flyout.js', ENDKEY => 'com.samash.www:http/webapp/wcs/stores/servlet/search_-1_10052_10002_UTF-8___t\x253A3\x252F\x252F\x253Assl\x252F\x252Fsa\x2Bbundle\x2Btaxonomy\x252F\x252F\x253AAccessories\x253ARecording\x2BAccessories\x253AAcoustic\x2BTreatment\x253ABass\x2BTraps__UnitsSold\x252F\x252F1_-1_20__________0_-1__DrillDown___182428_', ENCODED => 908568317, OFFLINE => true, SPLIT => true, TABLE => {{NAME => 'nutchtabletest', FAMILIES => [
>   {NAME => 'bas', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'cnt', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'cnttyp', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'},
>   {NAME => 'fchi', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'fcht', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'hdrs', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'},
>   {NAME => 'ilnk', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'modt', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'mtdt', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'},
>   {NAME => 'olnk', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'prsstt', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'prtstt', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'},
>   {NAME => 'prvfch', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'prvsig', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'repr', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'},
>   {NAME => 'rtrs', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'scr', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'sig', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'},
>   {NAME => 'stt', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'ttl', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'txt', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
>   ]}}, new regions: nutchtabletest,com.pacsun.shop:http/js_external/sj_flyout.js,1276391174724, nutchtabletest,com.samash.www:http/p/BR15M 15 2 Way Passive Floor Monitor_-49972869,1276391174724. Split took 0sec
> 2010-06-13 02:06:15,645 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on region nutchtabletest,com.vitaminshoppe.www:http/search/en/category.jsp\x3Ftype=category\x26catId=cat10134,1276366391680
> 2010-06-13 02:06:15,663 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_OPEN: nutchtabletest,com.pacsun.shop:http/js_external/sj_flyout.js,1276391174724
> 2010-06-13 02:06:15,663 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: MSG_REGION_OPEN: nutchtabletest,com.samash.www:http/p/BR15M 15 2 Way Passive Floor Monitor_-49972869,1276391174724
> 2010-06-13 02:06:15,664 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Worker: MSG_REGION_OPEN: nutchtabletest,com.pacsun.shop:http/js_external/sj_flyout.js,1276391174724
> 2010-06-13 02:06:16,123 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-5104950836598570436_20226 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:19,582 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-5104950836598570436_20226 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:22,814 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-6330529819693039456_20275 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:23,474 INFO org.apache.hadoop.hbase.regionserver.HRegion: compaction completed on region nutchtabletest,com.vitaminshoppe.www:http/search/en/category.jsp\x3Ftype=category\x26catId=cat10134,1276366391680 in 7sec
> 2010-06-13 02:06:23,474 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on region nutchtabletest,com.cableorganizer:http/briggs-stratton-generators/storm-ready-kit.htm\x3F=recommended,1276391174177
> 2010-06-13 02:06:26,376 INFO org.apache.hadoop.hbase.regionserver.HRegion: region nutchtabletest,com.pacsun.shop:http/js_external/sj_flyout.js,1276391174724/232099566 available; sequence id is 15639825
> 2010-06-13 02:06:26,376 INFO org.apache.hadoop.hbase.regionserver.HRegionServer: Worker: MSG_REGION_OPEN: nutchtabletest,com.samash.www:http/p/BR15M 15 2 Way Passive Floor Monitor_-49972869,1276391174724
> 2010-06-13 02:06:26,598 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-5772421768525630859_20164 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:29,612 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-1227684848175029882_20172 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:32,618 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-8420981703314551273_20168 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:32,672 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-2559191036262569688_20333 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:35,619 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-8420981703314551273_20168 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:35,674 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-2559191036262569688_20333 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:38,637 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-5563912881422417996_20180 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:38,675 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-2559191036262569688_20333 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:41,650 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_2343005765236386064_20192 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:41,677 WARN org.apache.hadoop.hdfs.DFSClient: DFS Read: java.io.IOException: Could not obtain block: blk_-2559191036262569688_20333 file=/hbase/nutchtabletest/739848001/ilnk/9220624711093099779
>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
>   at java.io.DataInputStream.readFully(DataInputStream.java:178)
>   at java.io.DataInputStream.readFully(DataInputStream.java:152)
>   at org.apache.hadoop.hbase.io.hfile.HFile$FixedFileTrailer.deserialize(HFile.java:1368)
>   at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readTrailer(HFile.java:848)
>   at org.apache.hadoop.hbase.io.hfile.HFile$Reader.loadFileInfo(HFile.java:793)
>   at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:273)
>   at org.apache.hadoop.hbase.regionserver.StoreFile.<init>(StoreFile.java:129)
>   at org.apache.hadoop.hbase.regionserver.Store.completeCompaction(Store.java:974)
>   at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:766)
>   at org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:832)
>   at org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:785)
>   at org.apache.hadoop.hbase.regionserver.CompactSplitThread.run(CompactSplitThread.java:93)
> 2010-06-13 02:06:41,677 ERROR org.apache.hadoop.hbase.regionserver.CompactSplitThread: Compaction/Split failed for region nutchtabletest,com.cableorganizer:http/briggs-stratton-generators/storm-ready-kit.htm\x3F=recommended,1276391174177
> java.io.IOException: Could not obtain block: blk_-2559191036262569688_20333 file=/hbase/nutchtabletest/739848001/ilnk/9220624711093099779
>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
>   at java.io.DataInputStream.readFully(DataInputStream.java:178)
>   at java.io.DataInputStream.readFully(DataInputStream.java:152)
>   at org.apache.hadoop.hbase.io.hfile.HFile$FixedFileTrailer.deserialize(HFile.java:1368)
>   at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readTrailer(HFile.java:848)
>   at org.apache.hadoop.hbase.io.hfile.HFile$Reader.loadFileInfo(HFile.java:793)
>   at org.apache.hadoop.hbase.regionserver.StoreFile.open(StoreFile.java:273)
>   at org.apache.hadoop.hbase.regionserver.StoreFile.<init>(StoreFile.java:129)
>   at org.apache.hadoop.hbase.regionserver.Store.completeCompaction(Store.java:974)
>   at org.apache.hadoop.hbase.regionserver.Store.compact(Store.java:766)
>   at org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:832)
>   at org.apache.hadoop.hbase.regionserver.HRegion.compactStores(HRegion.java:785)
>   at org.apache.hadoop.hbase.regionserver.CompactSplitThread.run(CompactSplitThread.java:93)
> 2010-06-13 02:06:41,677 INFO org.apache.hadoop.hbase.regionserver.HRegion: Starting compaction on region nutchtabletest,com.cableorganizer:http/fire-protection/composite-sheet-pillows.html,1276391174177
> 2010-06-13 02:06:41,693 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Too many open files
> 2010-06-13 02:06:41,694 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_-4724151989818868275_20334
> 2010-06-13 02:06:44,652 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_2343005765236386064_20192 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:47,653 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_2343005765236386064_20192 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:47,695 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Too many open files
> 2010-06-13 02:06:47,695 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_2197619404089718071_20334
> 2010-06-13 02:06:50,655 WARN org.apache.hadoop.hdfs.DFSClient: DFS Read: java.io.IOException: Could not obtain block: blk_2343005765236386064_20192 file=/hbase/nutchtabletest/686991543/fchi/1061870177496593816.908568317
>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1695)
>   at java.io.DataInputStream.readBoolean(DataInputStream.java:225)
>   at org.apache.hadoop.hbase.io.Reference.readFields(Reference.java:117)
>   at org.apache.hadoop.hbase.io.Reference.read(Reference.java:151)
>   at org.apache.hadoop.hbase.regionserver.StoreFile.<init>(StoreFile.java:126)
>   at org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:410)
>   at org.apache.hadoop.hbase.regionserver.Store.<init>(Store.java:221)
>   at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:1641)
>   at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:320)
>   at org.apache.hadoop.hbase.regionserver.HRegionServer.instantiateRegion(HRegionServer.java:1575)
>   at org.apache.hadoop.hbase.regionserver.HRegionServer.openRegion(HRegionServer.java:1542)
>   at org.apache.hadoop.hbase.regionserver.HRegionServer$Worker.run(HRegionServer.java:1462)
>   at java.lang.Thread.run(Thread.java:619)
> 2010-06-13 02:06:50,655 WARN org.apache.hadoop.hbase.regionserver.Store: Failed open of hdfs://ubuntu1:9000/hbase/nutchtabletest/686991543/fchi/1061870177496593816.908568317; presumption is that file was corrupted at flush and lost edits picked up by commit log replay. Verify!
> java.io.IOException: Could not obtain block: blk_2343005765236386064_20192 file=/hbase/nutchtabletest/686991543/fchi/1061870177496593816.908568317
>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1695)
>   at java.io.DataInputStream.readBoolean(DataInputStream.java:225)
>   at org.apache.hadoop.hbase.io.Reference.readFields(Reference.java:117)
>   at org.apache.hadoop.hbase.io.Reference.read(Reference.java:151)
>   at org.apache.hadoop.hbase.regionserver.StoreFile.<init>(StoreFile.java:126)
>   at org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:410)
>   at org.apache.hadoop.hbase.regionserver.Store.<init>(Store.java:221)
>   at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:1641)
>   at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:320)
>   at org.apache.hadoop.hbase.regionserver.HRegionServer.instantiateRegion(HRegionServer.java:1575)
>   at org.apache.hadoop.hbase.regionserver.HRegionServer.openRegion(HRegionServer.java:1542)
>   at org.apache.hadoop.hbase.regionserver.HRegionServer$Worker.run(HRegionServer.java:1462)
>   at java.lang.Thread.run(Thread.java:619)
> 2010-06-13 02:06:50,659 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-153353228097894218_20196 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:51,804 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-3334740230832671768_20314 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:53,668 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_4832263854844000864_20200 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:53,697 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Too many open files
> 2010-06-13 02:06:53,697 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_-8490642742553142526_20334
> 2010-06-13 02:06:54,806 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-3334740230832671768_20314 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:56,669 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_4832263854844000864_20200 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:57,808 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-3334740230832671768_20314 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:59,670 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_4832263854844000864_20200 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:06:59,698 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Too many open files
> 2010-06-13 02:06:59,698 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_8167205924627743813_20334
> 2010-06-13 02:07:00,809 WARN org.apache.hadoop.hdfs.DFSClient: DFS Read: java.io.IOException: Could not obtain block: blk_-3334740230832671768_20314 file=/hbase/.META./1028785192/info/515957856915851220
>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
>   at java.io.DataInputStream.read(DataInputStream.java:132)
>   at org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:105)
>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:100)
>   at org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:1018)
>   at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:966)
>   at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.seekTo(HFile.java:1291)
>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekAtOrAfter(StoreFileScanner.java:98)
>   at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seek(StoreFileScanner.java:68)
>   at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:72)
>   at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1304)
>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScanner.initHeap(HRegion.java:1850)
>   at org.apache.hadoop.hbase.regionserver.HRegion$RegionScanner.next(HRegion.java:1883)
>   at org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:1906)
>   at org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:1877)
>   at sun.reflect.GeneratedMethodAccessor49.invoke(Unknown Source)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:657)
>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:915)
> 2010-06-13 02:07:00,809 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-3334740230832671768_20314 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:07:02,671 WARN org.apache.hadoop.hdfs.DFSClient: DFS Read: java.io.IOException: Could not obtain block: blk_4832263854844000864_20200 file=/hbase/nutchtabletest/686991543/fcht/3329079451353795349.908568317
>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1695)
>   at java.io.DataInputStream.readBoolean(DataInputStream.java:225)
>   at org.apache.hadoop.hbase.io.Reference.readFields(Reference.java:117)
>   at org.apache.hadoop.hbase.io.Reference.read(Reference.java:151)
>   at org.apache.hadoop.hbase.regionserver.StoreFile.<init>(StoreFile.java:126)
>   at org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:410)
>   at org.apache.hadoop.hbase.regionserver.Store.<init>(Store.java:221)
>   at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:1641)
>   at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:320)
>   at org.apache.hadoop.hbase.regionserver.HRegionServer.instantiateRegion(HRegionServer.java:1575)
>   at org.apache.hadoop.hbase.regionserver.HRegionServer.openRegion(HRegionServer.java:1542)
>   at org.apache.hadoop.hbase.regionserver.HRegionServer$Worker.run(HRegionServer.java:1462)
>   at java.lang.Thread.run(Thread.java:619)
> 2010-06-13 02:07:02,674 WARN org.apache.hadoop.hbase.regionserver.Store: Failed open of hdfs://ubuntu1:9000/hbase/nutchtabletest/686991543/fcht/3329079451353795349.908568317; presumption is that file was corrupted at flush and lost edits picked up by commit log replay. Verify!
> java.io.IOException: Could not obtain block: blk_4832263854844000864_20200 file=/hbase/nutchtabletest/686991543/fcht/3329079451353795349.908568317
>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1695)
>   at java.io.DataInputStream.readBoolean(DataInputStream.java:225)
>   at org.apache.hadoop.hbase.io.Reference.readFields(Reference.java:117)
>   at org.apache.hadoop.hbase.io.Reference.read(Reference.java:151)
>   at org.apache.hadoop.hbase.regionserver.StoreFile.<init>(StoreFile.java:126)
>   at org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:410)
>   at org.apache.hadoop.hbase.regionserver.Store.<init>(Store.java:221)
>   at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:1641)
>   at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:320)
>   at org.apache.hadoop.hbase.regionserver.HRegionServer.instantiateRegion(HRegionServer.java:1575)
>   at org.apache.hadoop.hbase.regionserver.HRegionServer.openRegion(HRegionServer.java:1542)
>   at org.apache.hadoop.hbase.regionserver.HRegionServer$Worker.run(HRegionServer.java:1462)
>   at java.lang.Thread.run(Thread.java:619)
> 2010-06-13 02:07:02,676 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_8179737564656994784_20204 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:07:03,817 WARN org.apache.hadoop.hdfs.DFSClient: Failed to connect to /172.0.8.251:50010 for file /hbase/.META./1028785192/info/515957856915851220 for block -3334740230832671768:java.net.SocketException: Too many open files
>   at sun.nio.ch.Net.socket0(Native Method)
>   at sun.nio.ch.Net.socket(Net.java:94)
>   at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:84)
>   at sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:37)
>   at java.nio.channels.SocketChannel.open(SocketChannel.java:105)
>   at org.apache.hadoop.net.StandardSocketFactory.createSocket(StandardSocketFactory.java:58)
>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:1847)
>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1922)
>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>   at org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:101)
>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:100)
>   at org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:1018)
>   at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:966)
>   at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.loadBlock(HFile.java:1300)
>   at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.seekTo(HFile.java:1182)
>   at org.apache.hadoop.hbase.regionserver.Store.seekToScanner(Store.java:1164)
>   at org.apache.hadoop.hbase.regionserver.Store.rowAtOrBeforeFromStoreFile(Store.java:1131)
>   at org.apache.hadoop.hbase.regionserver.Store.getRowKeyAtOrBefore(Store.java:1092)
>   at org.apache.hadoop.hbase.regionserver.HRegion.getClosestRowBefore(HRegion.java:1147)
>   at org.apache.hadoop.hbase.regionserver.HRegionServer.getClosestRowBefore(HRegionServer.java:1729)
>   at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:657)
>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:915)
> 2010-06-13 02:07:03,820 WARN org.apache.hadoop.hdfs.DFSClient: Failed to connect to /172.0.8.248:50010 for file /hbase/.META./1028785192/info/515957856915851220 for block -3334740230832671768:java.net.SocketException: Too many open files
>   at sun.nio.ch.Net.socket0(Native Method)
>   at sun.nio.ch.Net.socket(Net.java:94)
>   at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:84)
>   at sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:37)
>   at java.nio.channels.SocketChannel.open(SocketChannel.java:105)
>   at org.apache.hadoop.net.StandardSocketFactory.createSocket(StandardSocketFactory.java:58)
>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:1847)
>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1922)
>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>   at org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:101)
>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:100)
>   at org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:1018)
>   at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:966)
>   at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.loadBlock(HFile.java:1300)
>   at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.seekTo(HFile.java:1182)
>   at org.apache.hadoop.hbase.regionserver.Store.seekToScanner(Store.java:1164)
>   at org.apache.hadoop.hbase.regionserver.Store.rowAtOrBeforeFromStoreFile(Store.java:1131)
>   at org.apache.hadoop.hbase.regionserver.Store.getRowKeyAtOrBefore(Store.java:1092)
>   at org.apache.hadoop.hbase.regionserver.HRegion.getClosestRowBefore(HRegion.java:1147)
>   at org.apache.hadoop.hbase.regionserver.HRegionServer.getClosestRowBefore(HRegionServer.java:1729)
>   at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:657)
>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:915)
> 2010-06-13 02:07:03,820 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: java.net.SocketException: Too many open files
>   at sun.nio.ch.Net.socket0(Native Method)
>   at sun.nio.ch.Net.socket(Net.java:94)
>   at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:84)
>   at sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:37)
>   at java.nio.channels.SocketChannel.open(SocketChannel.java:105)
>   at org.apache.hadoop.net.StandardSocketFactory.createSocket(StandardSocketFactory.java:58)
>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:1847)
>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1922)
>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>   at org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:101)
>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:100)
>   at org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:1018)
>   at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:966)
>   at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.loadBlock(HFile.java:1300)
>   at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.seekTo(HFile.java:1182)
>   at org.apache.hadoop.hbase.regionserver.Store.seekToScanner(Store.java:1164)
>   at org.apache.hadoop.hbase.regionserver.Store.rowAtOrBeforeFromStoreFile(Store.java:1131)
>   at org.apache.hadoop.hbase.regionserver.Store.getRowKeyAtOrBefore(Store.java:1092)
>   at org.apache.hadoop.hbase.regionserver.HRegion.getClosestRowBefore(HRegion.java:1147)
>   at org.apache.hadoop.hbase.regionserver.HRegionServer.getClosestRowBefore(HRegionServer.java:1729)
>   at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:657)
>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:915)
> 2010-06-13 02:07:03,820 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_-3334740230832671768_20314 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:07:03,827 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60020, call getClosestRowBefore([...@d0a973, [...@124c6ab, [...@16f25f6) from 172.0.8.251:36613: error: java.net.SocketException: Too many open files
> java.net.SocketException: Too many open files
>   at sun.nio.ch.Net.socket0(Native Method)
>   at sun.nio.ch.Net.socket(Net.java:94)
>   at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:84)
>   at sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:37)
>   at java.nio.channels.SocketChannel.open(SocketChannel.java:105)
>   at org.apache.hadoop.net.StandardSocketFactory.createSocket(StandardSocketFactory.java:58)
>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.fetchBlockByteRange(DFSClient.java:1847)
>   at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1922)
>   at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:46)
>   at org.apache.hadoop.hbase.io.hfile.BoundedRangeFileInputStream.read(BoundedRangeFileInputStream.java:101)
>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:100)
>   at org.apache.hadoop.hbase.io.hfile.HFile$Reader.decompress(HFile.java:1018)
>   at org.apache.hadoop.hbase.io.hfile.HFile$Reader.readBlock(HFile.java:966)
>   at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.loadBlock(HFile.java:1300)
>   at org.apache.hadoop.hbase.io.hfile.HFile$Reader$Scanner.seekTo(HFile.java:1182)
>   at org.apache.hadoop.hbase.regionserver.Store.seekToScanner(Store.java:1164)
>   at org.apache.hadoop.hbase.regionserver.Store.rowAtOrBeforeFromStoreFile(Store.java:1131)
>   at org.apache.hadoop.hbase.regionserver.Store.getRowKeyAtOrBefore(Store.java:1092)
>   at org.apache.hadoop.hbase.regionserver.HRegion.getClosestRowBefore(HRegion.java:1147)
>   at org.apache.hadoop.hbase.regionserver.HRegionServer.getClosestRowBefore(HRegionServer.java:1729)
>   at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:657)
>   at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:915)
> 2010-06-13 02:07:05,677 INFO org.apache.hadoop.hdfs.DFSClient: Could not obtain block blk_8179737564656994784_20204 from any node: java.io.IOException: No live nodes contain current block
> 2010-06-13 02:07:05,699 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to create new block.
>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2845)
>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
>   at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
