Column length is not checked before being saved to memstore
-----------------------------------------------------------
Key: HBASE-1777
URL: https://issues.apache.org/jira/browse/HBASE-1777
Project: Hadoop HBase
Issue Type: Bug
Components: client
Affects Versions: 0.20.0
Reporter: Billy Pearson
Fix For: 0.20.1
I added some debugging to line 511 in HFile.java and found that the column is
what is causing my problem: it was > the max size.
We should check this before saving the record to the memstore.
As of 0.20.0-RC2 the server dies and causes the hlogs to be replayed by the
next region server that gets the region. In the end this brings the whole
cluster down, since the bad data is already in the hlog at that point.
{code}
2009-08-18 12:54:16,572 FATAL org.apache.hadoop.hbase.regionserver.MemStoreFlusher: Replay of hlog required. Forcing server shutdown
org.apache.hadoop.hbase.DroppedSnapshotException: region: webdata,http:\x2F\x2Fanaal-genomen.isporno.nl\x2F,1250569930062
    at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:950)
    at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:843)
    at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:241)
    at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.run(MemStoreFlusher.java:149)
Caused by: java.io.IOException: Key length 183108 > 65536
    at org.apache.hadoop.hbase.io.hfile.HFile$Writer.checkKey(HFile.java:511)
    at org.apache.hadoop.hbase.io.hfile.HFile$Writer.append(HFile.java:479)
    at org.apache.hadoop.hbase.io.hfile.HFile$Writer.append(HFile.java:447)
    at org.apache.hadoop.hbase.regionserver.Store.internalFlushCache(Store.java:525)
    at org.apache.hadoop.hbase.regionserver.Store.flushCache(Store.java:489)
    at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:935)
    ... 3 more
{code}
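The check could look roughly like the sketch below, rejecting the put before the edit ever reaches the memstore or hlog. This is only an illustration of the idea: the class name, the validateKeyLength helper, and the way the key size is approximated are assumptions, not the actual patch; the 64 KB limit mirrors the bound that HFile$Writer.checkKey enforces in the trace above.
{code}
// Minimal sketch of the proposed guard, assuming a check on the write path
// (e.g. from HRegion.put or the client) before the edit reaches the memstore.
import java.io.IOException;

public class KeyLengthCheck {

  // HFile rejects keys larger than 64 KB, per the exception above.
  static final int MAX_KEY_LENGTH = 64 * 1024;

  static void validateKeyLength(byte[] row, byte[] family, byte[] qualifier)
      throws IOException {
    // Rough serialized-key size: row + family + qualifier plus the fixed-size
    // timestamp and type fields (length prefixes ignored for brevity).
    int keyLength = row.length + family.length + qualifier.length
        + Long.SIZE / 8    // timestamp
        + Byte.SIZE / 8;   // key type
    if (keyLength > MAX_KEY_LENGTH) {
      // Fail the put up front instead of letting the oversized edit reach the
      // memstore and hlog, where it later kills every flush attempt.
      throw new IOException("Key length " + keyLength + " > " + MAX_KEY_LENGTH);
    }
  }
}
{code}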