[ https://issues.apache.org/jira/browse/CASSANDRA-1130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12871345#action_12871345 ]
Jignesh Dhruv commented on CASSANDRA-1130:
------------------------------------------

OK, I think I've narrowed it down further. The bug may be in org/apache/cassandra/db/filter/SSTableSliceIterator.java, in the getNextBlock() method; see the while loop on line 233. Here it reads one column at a time. As I said before, the problem occurs when a Column of type ExpiringColumn becomes a DeletedColumn once its time has expired. In that case, after a SuperColumn whose subcolumns are of type DELETED is read in this while loop, there are some extra bytes that need to be skipped; instead, the loop goes into a second iteration and tries to read the next column, and that is where the problems start.

Shouldn't the while loop just read one Column at a time and then exit? That is what it does when it has read all the bytes. If I put a break statement at the end of the while loop, after reading a column, everything works fine, because the extra bytes are skipped before the next Column read. I am not sure what the purpose of this while loop is, but if we break after reading one column at a time, everything works and Cassandra starts up smoothly.

This looks similar to issue https://issues.apache.org/jira/browse/CASSANDRA-1073

Jignesh

> Cassandra throws Exceptions at startup when using TTL in SuperColumns
> ---------------------------------------------------------------------
>
>                 Key: CASSANDRA-1130
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-1130
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 0.7
>            Reporter: Jignesh Dhruv
>            Assignee: Sylvain Lebresne
>             Fix For: 0.7
>
>
> Hello,
> I am trying to use the TTL (timeToLive) feature in SuperColumns.
> My use case is:
> - I have a SuperColumn and 3 subcolumns.
> - I try to expire data after 60 seconds.
> While Cassandra is up and running, I am able to push and read data without any problems. Data compaction and so on occurs fine.
> After inserting, say, about 100000 records, I stop Cassandra while data is still coming in.
> On startup, Cassandra throws an exception and won't start. (This happens about 1 in every 3 times.) The exception varies:
> - EOFException while reading data
> - negative value encountered exception
> - heap space exception
> Cassandra simply won't start up.
> Again, I get this problem only when I use TTL with SuperColumns. There are no issues using TTL with regular Columns.
> I tried to diagnose the problem, and it seems to happen on startup when Cassandra sees a Column that is marked deleted and tries to read its data. It is off by some bytes, hence all these exceptions:
> Caused by: java.io.IOException: Corrupt (negative) value length encountered
>         at org.apache.cassandra.utils.FBUtilities.readByteArray(FBUtilities.java:317)
>         at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:84)
>         at org.apache.cassandra.db.SuperColumnSerializer.deserialize(SuperColumn.java:336)
>         at org.apache.cassandra.db.SuperColumnSerializer.deserialize(SuperColumn.java:285)
>         at org.apache.cassandra.db.filter.SSTableSliceIterator$ColumnGroupReader.getNextBlock(SSTableSliceIterator.java:235)
>         at org.apache.cassandra.db.filter.SSTableSliceIterator$ColumnGroupReader.pollColumn(SSTableSliceIterator.java:195)
>         ... 18 more
> Let me know if you need more information.
> Thanks,
> Jignesh

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
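For illustration only: the failure mode the comment describes can be sketched with a toy block reader. This is a hypothetical stand-alone example, not the actual SSTableSliceIterator code; it only models the idea that a block may contain one serialized column followed by leftover bytes (here a fabricated `-42` marker standing in for deletion metadata), which a greedy while loop misreads as a negative column length, while a read-one-and-break reader leaves them for the caller to skip.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class SliceReadSketch {
    // Serialize one column as [int length][payload], then append trailing
    // bytes (a stand-in for expired-column metadata) that the block reader
    // is supposed to skip separately, not parse as another column.
    static byte[] buildBlock() throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        byte[] payload = "col-value".getBytes("UTF-8");
        out.writeInt(payload.length);
        out.write(payload);
        out.writeInt(-42); // hypothetical leftover bytes after a deleted column
        return bos.toByteArray();
    }

    // Greedy loop: keeps reading "columns" until the stream is exhausted,
    // so the second iteration misreads the leftover bytes as a column
    // length and hits the "Corrupt (negative) value length" condition.
    static String readGreedy(byte[] block) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(block));
            while (in.available() > 0) {
                int len = in.readInt();
                if (len < 0)
                    throw new IOException("Corrupt (negative) value length: " + len);
                byte[] value = new byte[len];
                in.readFully(value);
            }
            return "ok";
        } catch (IOException e) {
            return e.getMessage();
        }
    }

    // Read-one-and-break: read a single column and return, leaving the
    // trailing bytes for the caller to reposition past before the next read.
    static String readOne(byte[] block) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(block));
            int len = in.readInt();
            byte[] value = new byte[len];
            in.readFully(value);
            return "ok";
        } catch (IOException e) {
            return e.getMessage();
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] block = buildBlock();
        System.out.println(readGreedy(block)); // fails on the leftover bytes
        System.out.println(readOne(block));    // prints "ok"
    }
}
```

Under these assumptions, the sketch matches the reported symptom: the greedy loop turns leftover bytes into a bogus negative length, while breaking after one column avoids ever parsing them.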