But if I got 130 regions out of 64 MB, how many will I get after splitting 1 GB? Can you tell me what triggers further splits? The first one is triggered by exceeding the "split at" size, which is expected, but after that I should get 2 x 32 MB regions. Then, once one of them grows above the max filesize limit again, it should be split. Am I right?
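The split model described above (a region splits in half once it exceeds the configured size, and each half splits again when it grows past the limit) can be sketched as a toy simulation. This is an idealized model assuming perfectly even halves; real HBase splits on the midkey of the largest store file, so daughter regions are rarely equal:

```python
def split_regions(sizes, max_filesize):
    """Toy model: any region above max_filesize splits into two halves,
    recursively, until every region fits under the limit."""
    result = []
    for size in sizes:
        if size > max_filesize:
            half = size / 2
            result.extend(split_regions([half, half], max_filesize))
        else:
            result.append(size)
    return result

# A 1 GB (1024 MB) region with a 64 MB split threshold ends up as
# 16 x 64 MB regions in this idealized model (4 rounds of halving).
print(len(split_regions([1024], 64)))  # 16
```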
2010/2/1 Jean-Daniel Cryans <[email protected]>

> From what you pasted:
>
> 2010-02-01 14:05:49,445 INFO
> org.apache.hadoop.hbase.regionserver.CompactSplitThread: region split,
> META updated, and report to master all successful. Old region=REGION
> => {NAME => 'oldWebSingleRowCacheStore,,1265029544146', STARTKEY => '',
> ENDKEY => 'filmMenuEditions-not_selected\xC2\xAC1405', ENCODED =>
> 1899385768, OFFLINE => true, SPLIT => true, TABLE => {{NAME =>
> 'oldWebSingleRowCacheStore', MAX_FILESIZE => '64', FAMILIES => [{NAME
> => 'content', COMPRESSION => 'NONE', VERSIONS => '3', TTL =>
> '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE
> => 'true'}, {NAME => 'description', COMPRESSION => 'NONE', VERSIONS =>
> '3', TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false',
> BLOCKCACHE => 'true'}]}}, new regions:
> oldWebSingleRowCacheStore,,1265029549167,
> oldWebSingleRowCacheStore,filmLastTopics\xC2\xAC1155,1265029549167.
> Split took 0sec
>
> I see MAX_FILESIZE => '64', which means you have set that table to
> split after 64 _bytes_, so either use the default value of 256 MB
> (256*1024*1024) or even higher if you wish (I usually set them to 1 GB).
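Since MAX_FILESIZE is interpreted in bytes, the values mentioned in the reply work out as follows. A quick sketch of the arithmetic (not HBase-specific code); the shell commands in the comments are a hedged example and may need adjusting for your HBase version:

```python
# MAX_FILESIZE is given in bytes; '64' therefore means 64 bytes, which
# makes practically every store file large enough to trigger a split.
MB = 1024 * 1024

default_split = 256 * MB   # the 256MB default mentioned in the reply
one_gb = 1024 * MB         # the 1GB value the reply suggests

print(default_split)       # 268435456
print(one_gb)              # 1073741824

# In the HBase shell this could be applied with something like:
#   disable 'oldWebSingleRowCacheStore'
#   alter 'oldWebSingleRowCacheStore', MAX_FILESIZE => '268435456'
#   enable 'oldWebSingleRowCacheStore'
```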
