[ https://issues.apache.org/jira/browse/CASSANDRA-1130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12871293#action_12871293 ]

Jignesh Dhruv commented on CASSANDRA-1130:
------------------------------------------

Yes I am using the latest source code from trunk.

I have a small java application that deals with creating schema and populating 
data.

Here is what I have been able to determine so far:
The error occurs during deserialization in ColumnSerializer.

There is an extra int that needs to be read before ColumnSerializer.java:84. 
The value of this extra int is 4. I am not sure what it stands for, or where 
it is written.

Once I read past that int, I get the expected localDeletionTime value.

This only happens when the DELETION_MASK is set on a record; records with the 
EXPIRATION_MASK deserialize fine. My guess is that converting a record from 
EXPIRING to DELETED is what causes this error at startup, or something along 
those lines.
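To illustrate the layout mismatch I suspect, here is a minimal, self-contained sketch of the two on-disk column formats as I understand them. The field order, constants, and method names here are my assumptions for illustration, not the actual Cassandra code. Note that if a deleted column's value is the 4-byte localDeletionTime written as a length-prefixed byte array, then a length int of 4 sits right before it, which could account for the mysterious extra "4":

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Hypothetical sketch of the two column layouts (illustrative only).
public class ColumnLayoutSketch
{
    static final int DELETION_MASK = 0x01;
    static final int EXPIRATION_MASK = 0x02;

    // Deleted column: the tombstone's "value" is the 4-byte localDeletionTime,
    // serialized as a length-prefixed byte array -- so an int with value 4
    // precedes the localDeletionTime itself.
    static byte[] serializeDeleted(byte[] name, long timestamp, int localDeletionTime)
        throws IOException
    {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeShort(name.length);
        out.write(name);
        out.writeByte(DELETION_MASK);
        out.writeLong(timestamp);
        out.writeInt(4);                  // value length: the extra "4"?
        out.writeInt(localDeletionTime);  // value payload
        return bos.toByteArray();
    }

    // Expiring column: ttl and localExpirationTime are written as raw ints
    // before the timestamp and value, with no length prefix of their own.
    static byte[] serializeExpiring(byte[] name, long timestamp, int ttl,
                                    int localExpirationTime, byte[] value)
        throws IOException
    {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeShort(name.length);
        out.write(name);
        out.writeByte(EXPIRATION_MASK);
        out.writeInt(ttl);
        out.writeInt(localExpirationTime);
        out.writeLong(timestamp);
        out.writeInt(value.length);
        out.write(value);
        return bos.toByteArray();
    }
}
```

If a reader written for one of these layouts is handed bytes written in the other, it would land off by exactly the kind of stray int I am seeing.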

Let me know if you need more information.

You should be able to reproduce this with a SuperColumn whose subcolumns have 
a TTL: load some data, then stop and restart Cassandra. Startup consistently 
fails about 1 time in every 3.

Jignesh


> Cassandra throws Exceptions at startup when using TTL in SuperColumns
> ---------------------------------------------------------------------
>
>                 Key: CASSANDRA-1130
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-1130
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 0.7
>            Reporter: Jignesh Dhruv
>            Assignee: Sylvain Lebresne
>             Fix For: 0.7
>
>
> Hello,
> I am trying to use TTL (timeToLive) feature in SuperColumns.
> My usecase is:
> - I have a SuperColumn and 3 subcolumns.
> - I try to expire data after 60 seconds.
> While Cassandra is up and running, I am able to push and read data without 
> any problems, and data compaction works fine. After inserting roughly 100000 
> records, I stop Cassandra while data is still coming in.
> On startup, Cassandra throws an exception and won't start. (This happens 1 
> in every 3 times.) The exception varies:
> - EOFException while reading data
> - negative value encountered exception
> - Heap Space Exception
> Cassandra simply won't start up.
> Again I get this problem only when I use TTL with SuperColumns. There are no 
> issues with using TTL with regular Columns.
> I tried to diagnose the problem, and it seems to happen on startup when 
> Cassandra sees a Column that is marked Deleted and tries to read its data. 
> The read is off by some bytes, hence all of these exceptions.
> Caused by: java.io.IOException: Corrupt (negative) value length encountered
>         at 
> org.apache.cassandra.utils.FBUtilities.readByteArray(FBUtilities.java:317)
>         at 
> org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:84)
>         at 
> org.apache.cassandra.db.SuperColumnSerializer.deserialize(SuperColumn.java:336)
>         at 
> org.apache.cassandra.db.SuperColumnSerializer.deserialize(SuperColumn.java:285)
>         at 
> org.apache.cassandra.db.filter.SSTableSliceIterator$ColumnGroupReader.getNextBlock(SSTableSliceIterator.java:235)
>         at 
> org.apache.cassandra.db.filter.SSTableSliceIterator$ColumnGroupReader.pollColumn(SSTableSliceIterator.java:195)
>         ... 18 more
> Let me know if you need more information.
> Thanks,
> Jignesh

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
