[ https://issues.apache.org/jira/browse/CASSANDRA-7775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14128081#comment-14128081 ]

Pavel Pletnev commented on CASSANDRA-7775:
------------------------------------------

No, but we have a system that issues CREATE TABLE IF NOT EXISTS from several 
processes, almost at the same time. We create a new table for each day, and 
5 to 10 daemons insert data into it. When the day changes, they can all try to 
create the table simultaneously, so some tables end up created with the wrong 
PRIMARY KEY. That is a separate problem, but I think Cassandra needs to be able 
to not get completely stuck on such problems.
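For what it's worth, one way to avoid the concurrent-DDL race described above is to let exactly one process issue the CREATE TABLE for each day and have the other daemons merely wait until the table exists. The sketch below simulates that pattern with threads and a dict standing in for the cluster's schema; the names (`events_YYYYMMDD`, `ensure_table`) and the toy "schema" are illustrative, not Cassandra driver APIs.

```python
import threading
import time

# Toy stand-in for the cluster's schema: table name -> the key that "won".
# In a real deployment this would be Cassandra itself; everything here is
# an illustration of the coordination pattern, not a driver API.
schema = {}
schema_lock = threading.Lock()

def table_for_day(day):
    """Daily table name, e.g. events_20140815 (naming is hypothetical)."""
    return "events_%s" % day

def ensure_table(daemon_id, day, is_creator):
    """Only the designated creator issues DDL; the rest poll until it exists."""
    name = table_for_day(day)
    if is_creator:
        with schema_lock:
            # The moral equivalent of CREATE TABLE IF NOT EXISTS, but issued
            # from exactly one process, so there is no schema race.
            schema.setdefault(name, "PRIMARY KEY (id, ts)")
    else:
        # Non-creators wait for the table instead of racing the DDL.
        while name not in schema:
            time.sleep(0.01)
    return schema[name]

# Five daemons hit the day boundary at once; daemon 0 is the sole creator.
results = []
threads = [
    threading.Thread(
        target=lambda i=i: results.append(ensure_table(i, "20140815", i == 0)))
    for i in range(5)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

In production the same effect can be had by creating tomorrow's table from a single cron job ahead of midnight, so the inserting daemons never issue DDL at all.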

> Cassandra attempts to flush an empty memtable into disk and fails
> -----------------------------------------------------------------
>
>                 Key: CASSANDRA-7775
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-7775
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>         Environment: $ nodetool version
> ReleaseVersion: 2.0.6
> $ java -version
> java version "1.7.0_51"
> Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
> Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)
>            Reporter: Omri Bahumi
>
> I'm not sure what triggers this flush, but when it happens the following 
> appears in our logs:
> {code}
>  INFO [OptionalTasks:1] 2014-08-15 02:24:20,115 ColumnFamilyStore.java (line 785) Enqueuing flush of Memtable-app_recs_best_in_expr_prefix2@1219170646(0/0 serialized/live bytes, 0 ops)
>  INFO [FlushWriter:34] 2014-08-15 02:24:20,116 Memtable.java (line 331) Writing Memtable-app_recs_best_in_expr_prefix2@1219170646(0/0 serialized/live bytes, 0 ops)
> ERROR [FlushWriter:34] 2014-08-15 02:24:20,127 CassandraDaemon.java (line 196) Exception in thread Thread[FlushWriter:34,5,main]
> java.lang.RuntimeException: Cannot get comparator 1 in org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type). This might due to a mismatch between the schema and the data read
>         at org.apache.cassandra.db.marshal.CompositeType.getComparator(CompositeType.java:133)
>         at org.apache.cassandra.db.marshal.CompositeType.getComparator(CompositeType.java:140)
>         at org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:96)
>         at org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:35)
>         at org.apache.cassandra.db.RangeTombstone$Tracker$1.compare(RangeTombstone.java:125)
>         at org.apache.cassandra.db.RangeTombstone$Tracker$1.compare(RangeTombstone.java:122)
>         at java.util.TreeMap.compare(TreeMap.java:1188)
>         at java.util.TreeMap$NavigableSubMap.<init>(TreeMap.java:1264)
>         at java.util.TreeMap$AscendingSubMap.<init>(TreeMap.java:1699)
>         at java.util.TreeMap.tailMap(TreeMap.java:905)
>         at java.util.TreeSet.tailSet(TreeSet.java:350)
>         at java.util.TreeSet.tailSet(TreeSet.java:383)
>         at org.apache.cassandra.db.RangeTombstone$Tracker.update(RangeTombstone.java:203)
>         at org.apache.cassandra.db.ColumnIndex$Builder.add(ColumnIndex.java:192)
>         at org.apache.cassandra.db.ColumnIndex$Builder.build(ColumnIndex.java:138)
>         at org.apache.cassandra.io.sstable.SSTableWriter.rawAppend(SSTableWriter.java:202)
>         at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:187)
>         at org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:365)
>         at org.apache.cassandra.db.Memtable$FlushRunnable.runWith(Memtable.java:318)
>         at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
>         at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:744)
> Caused by: java.lang.IndexOutOfBoundsException: index (1) must be less than size (1)
>         at com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:306)
>         at com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:285)
>         at com.google.common.collect.SingletonImmutableList.get(SingletonImmutableList.java:45)
>         at org.apache.cassandra.db.marshal.CompositeType.getComparator(CompositeType.java:124)
>         ... 23 more
> {code}
> After this happens, the MemtablePostFlusher thread pool starts piling up.
> When trying to restart the cluster, a similar exception occurs when trying to 
> replay the commit log.
> Our way of recovering from this is to delete all commit logs on the faulty 
> node, start it, and issue a repair.
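The recovery procedure above can be sketched roughly as follows. The commitlog path assumes the default package layout; adjust for your install. Since the commands are destructive, this sketch only collects and prints them rather than executing anything.

```shell
# Hedged sketch of the recovery steps described above (not from this ticket).
# Collected into CMDS and echoed rather than run, because they wipe state.
CMDS=$(cat <<'EOF'
sudo service cassandra stop
sudo rm -f /var/lib/cassandra/commitlog/*
sudo service cassandra start
nodetool repair
EOF
)
echo "$CMDS"
```

Note that discarding commit logs throws away any writes not yet flushed to SSTables on that node, which is why the repair at the end is essential.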



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
