[jira] [Created] (CASSANDRA-6918) Compaction Assert: Incorrect Row Data Size

2014-03-24 Thread Alexander Goodrich (JIRA)
Alexander Goodrich created CASSANDRA-6918:
-

 Summary: Compaction Assert: Incorrect Row Data Size
 Key: CASSANDRA-6918
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6918
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 11 node Linux Cassandra 1.2.15 cluster, each node 
configured as follows:
2P Intel Xeon CPU X5660 @ 2.8 GHz (12 cores, 24 threads total)
148 GB RAM
CentOS release 6.4 (Final)
2.6.32-358.11.1.el6.x86_64 #1 SMP Wed May 15 10:48:38 EDT 2013 x86_64 x86_64 
x86_64 GNU/Linux
Java(TM) SE Runtime Environment (build 1.7.0_40-b43)
Java HotSpot(TM) 64-Bit Server VM (build 24.0-b56, mixed mode)

Node configuration:
Default cassandra.yaml settings for the most part with the following exceptions:
rpc_server_type: hsha


Reporter: Alexander Goodrich
 Fix For: 1.2.16


I have four tables in a schema with replication factor 6 (previously we set 
this to 3, but when we added more nodes we figured more replication would 
improve read time; this may have aggravated the issue).

create table table_value_one (
id timeuuid PRIMARY KEY,
value_1 counter
);

create table table_value_two (
id timeuuid PRIMARY KEY,
value_2 counter
);

create table table_position_lookup (
value_1 bigint,
value_2 bigint,
id timeuuid,
PRIMARY KEY (id)
) WITH compaction={'class': 'LeveledCompactionStrategy'};

create table sorted_table (
row_key_index text,
range bigint,
sorted_value bigint,
id timeuuid,
extra_data list<bigint>,
PRIMARY KEY ((row_key_index, range), sorted_value, id)
) WITH CLUSTERING ORDER BY (sorted_value DESC) AND
  compaction={'class': 'LeveledCompactionStrategy'};

The application creates an object and stores it in sorted_table based on its 
value position - for example, an object has a value_1 of 5500 and a value_2 of 
4300.

There are rows which represent indices by which I can sort items based on these 
values in descending order. If I wish to see items with the highest # of 
value_1, I can create an index that stores them like so:

row_key_index = 'highest_value_1s'

Additionally, we shard each row into bucket ranges - the bucket is simply 
value_1 or value_2 rounded down to the nearest 1000. For example, our object 
above would be found under row_key_index = 'highest_value_1s' with range 5000, 
and also under row_key_index = 'highest_value_2s' with range 4000.
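
For concreteness, a minimal sketch of the two index writes for this object 
and the single-partition slice that reads the top of an index (the timeuuid 
literal and extra_data values are made up for illustration, not taken from 
the report):

-- object with value_1 = 5500 lands in bucket 5000 of the value_1 index
insert into sorted_table (row_key_index, range, sorted_value, id, extra_data)
values ('highest_value_1s', 5000, 5500, 62c36092-82a1-11e3-8080-808080808080, [1, 2, 3]);

-- the same object indexed by value_2; 4300 falls in bucket 4000
insert into sorted_table (row_key_index, range, sorted_value, id, extra_data)
values ('highest_value_2s', 4000, 4300, 62c36092-82a1-11e3-8080-808080808080, [1, 2, 3]);

-- clustering order (sorted_value desc) returns the highest values first
select id, sorted_value from sorted_table
where row_key_index = 'highest_value_1s' and range = 5000
limit 100;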

The true values of this object are stored in two counter tables, 
table_value_one and table_value_two. The current indexed position is stored in 
table_position_lookup.
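
As a hedged sketch of that write path (the increment amounts and the timeuuid 
are illustrative), the counters are bumped in place and the lookup row records 
the position the object is currently indexed under:

update table_value_one set value_1 = value_1 + 600
where id = 62c36092-82a1-11e3-8080-808080808080;

update table_value_two set value_2 = value_2 + 150
where id = 62c36092-82a1-11e3-8080-808080808080;

-- remember which position the object is currently indexed under
insert into table_position_lookup (id, value_1, value_2)
values (62c36092-82a1-11e3-8080-808080808080, 5500, 4300);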

We allow the application to modify the counters in table_value_one and 
table_value_two indiscriminately. If we know the current values are dirty, we 
wait a tuned amount of time before we update the position in the sorted_table 
index. This creates two delete operations and two write operations on the same 
table.
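
A sketch of that re-index step, assuming the stale position (5500/4300) is 
first read back from table_position_lookup and the new position is 6100/4450; 
the report does not say whether the four operations are grouped, so the batch 
here is only for compactness:

begin batch
  delete from sorted_table
  where row_key_index = 'highest_value_1s' and range = 5000
    and sorted_value = 5500 and id = 62c36092-82a1-11e3-8080-808080808080;
  delete from sorted_table
  where row_key_index = 'highest_value_2s' and range = 4000
    and sorted_value = 4300 and id = 62c36092-82a1-11e3-8080-808080808080;
  insert into sorted_table (row_key_index, range, sorted_value, id)
  values ('highest_value_1s', 6000, 6100, 62c36092-82a1-11e3-8080-808080808080);
  insert into sorted_table (row_key_index, range, sorted_value, id)
  values ('highest_value_2s', 4000, 4450, 62c36092-82a1-11e3-8080-808080808080);
apply batch;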

The issue is that when we expand the number of write/delete operations on 
sorted_table, we see the following assert in the system log:

ERROR [CompactionExecutor:169] 2014-03-24 08:07:12,871 CassandraDaemon.java (line 191) Exception in thread Thread[CompactionExecutor:169,1,main]
java.lang.AssertionError: incorrect row data size 77705872 written to /var/lib/cassandra/data/loadtest_1/sorted_table/loadtest_1-sorted_table-tmp-ic-165-Data.db; correct is 77800512
        at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:162)
        at org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:162)
        at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
        at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
        at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:58)
        at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
        at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:208)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:724)

Each object creates roughly 500 unique row keys in sorted_table, and each 
carries an extra_data field containing roughly 15 different bigint values.

Previously, our application ran Cassandra 1.2.10 and we did not see the 
assert; at that time sorted_table had no extra_data list<bigint> column and we 
were writing only around 200 unique row keys containing just the id column.

We tried both leveled compaction 

[jira] [Commented] (CASSANDRA-6918) Compaction Assert: Incorrect Row Data Size

2014-03-24 Thread Alexander Goodrich (JIRA)

[ https://issues.apache.org/jira/browse/CASSANDRA-6918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13945640#comment-13945640 ]

Alexander Goodrich commented on CASSANDRA-6918:
---

[~jbellis] Yes, the exceptions occur on a counter-less table. Whether there is 
a preceding "Compacting large row" message depends on the node - here's an 
exception on node #2 in my cluster. I've (seemingly) seen it happen without a 
corresponding large-row compaction message, but here's an example where there 
is one directly above it:

INFO [CompactionExecutor:144] 2014-03-24 07:50:33,240 CompactionController.java (line 156) Compacting large row loadtest_1/sorted_table:category1_globallist_item_4:0 (67157460 bytes) incrementally
ERROR [CompactionExecutor:144] 2014-03-24 07:50:42,471 CassandraDaemon.java (line 191) Exception in thread Thread[CompactionExecutor:144,1,main]
java.lang.AssertionError: incorrect row data size 67156948 written to /var/lib/cassandra/data/loadtest_1/sorted_table/loadtest_1-sorted_table-tmp-ic-77-Data.db; correct is 67239030
        at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:162)
        at org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:162)
        at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
        at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
        at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:58)
        at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
        at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:208)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:724)


[jira] [Commented] (CASSANDRA-6918) Compaction Assert: Incorrect Row Data Size

2014-03-24 Thread Alexander Goodrich (JIRA)

[ https://issues.apache.org/jira/browse/CASSANDRA-6918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13945898#comment-13945898 ]

Alexander Goodrich commented on CASSANDRA-6918:
---

Thanks for the tip - just searched through all of our logs, and all of them 
have the "Compacting large row" message on the same table (sorted_table) prior 
to the exception.

[jira] [Comment Edited] (CASSANDRA-6918) Compaction Assert: Incorrect Row Data Size

2014-03-24 Thread Alexander Goodrich (JIRA)

[ https://issues.apache.org/jira/browse/CASSANDRA-6918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13945898#comment-13945898 ]

Alexander Goodrich edited comment on CASSANDRA-6918 at 3/25/14 12:06 AM:
-

[~jbellis] Thanks for the tip - just searched through all of our logs, and all 
of them have the "Compacting large row" message on the same table 
(sorted_table) prior to the exception.


was (Author: agoodrich):
Thanks for the tip - just searched through all of our logs, and all of them 
have the "Compacting large row" message on the same table (sorted_table) prior 
to the exception.

[jira] [Commented] (CASSANDRA-2867) Starting 0.8.1 after upgrade from 0.7.6-2 fails

2011-07-28 Thread Alexander Goodrich (JIRA)

[ https://issues.apache.org/jira/browse/CASSANDRA-2867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13072242#comment-13072242 ]

Alexander Goodrich commented on CASSANDRA-2867:
---

Thanks Taras for commenting on the issue - I just ran into the same problem 
with a super column family with a TimeUUIDType comparator and UTF8Type 
subcomparator. I can insert and get data all day long from the cluster while 
it is running fresh, but the moment I restart the cassandra node I get the 
exception saying that TimeUUIDTypes must be exactly 16 bytes.

 Starting 0.8.1 after upgrade from 0.7.6-2 fails
 ---

 Key: CASSANDRA-2867
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2867
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.8.1
 Environment: CentOS 5.6
Reporter: Yaniv Kunda
  Labels: exception, index, starting
 Attachments: trunk-2867.txt


 After upgrading the binaries to 0.8.1 I get an exception when starting cassandra:
 {noformat}
 [root@bserv2 local]#  INFO 12:51:04,512 Logging initialized
  INFO 12:51:04,523 Heap size: 8329887744/8329887744
  INFO 12:51:04,524 JNA not found. Native methods will be disabled.
  INFO 12:51:04,531 Loading settings from file:/usr/local/apache-cassandra-0.8.1/conf/cassandra.yaml
  INFO 12:51:04,621 DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
  INFO 12:51:04,707 Global memtable threshold is enabled at 2648MB
  INFO 12:51:04,708 Removing compacted SSTable files (see http://wiki.apache.org/cassandra/MemtableSSTable)
  INFO 12:51:04,713 Removing compacted SSTable files (see http://wiki.apache.org/cassandra/MemtableSSTable)
  INFO 12:51:04,714 Removing compacted SSTable files (see http://wiki.apache.org/cassandra/MemtableSSTable)
  INFO 12:51:04,716 Removing compacted SSTable files (see http://wiki.apache.org/cassandra/MemtableSSTable)
  INFO 12:51:04,717 Removing compacted SSTable files (see http://wiki.apache.org/cassandra/MemtableSSTable)
  INFO 12:51:04,719 Removing compacted SSTable files (see http://wiki.apache.org/cassandra/MemtableSSTable)
  INFO 12:51:04,770 reading saved cache /vm1/cassandraDB/saved_caches/system-IndexInfo-KeyCache
  INFO 12:51:04,776 Opening /vm1/cassandraDB/data/system/IndexInfo-f-9
  INFO 12:51:04,792 reading saved cache /vm1/cassandraDB/saved_caches/system-Schema-KeyCache
  INFO 12:51:04,794 Opening /vm1/cassandraDB/data/system/Schema-f-194
  INFO 12:51:04,797 Opening /vm1/cassandraDB/data/system/Schema-f-195
  INFO 12:51:04,802 Opening /vm1/cassandraDB/data/system/Schema-f-193
  INFO 12:51:04,811 Opening /vm1/cassandraDB/data/system/Migrations-f-193
  INFO 12:51:04,814 reading saved cache /vm1/cassandraDB/saved_caches/system-LocationInfo-KeyCache
  INFO 12:51:04,815 Opening /vm1/cassandraDB/data/system/LocationInfo-f-292
  INFO 12:51:04,843 Loading schema version 586e70fd-a332-11e0-828e-34b74a661156
 ERROR 12:51:04,996 Exception encountered during startup.
 org.apache.cassandra.db.marshal.MarshalException: A long is exactly 8 bytes: 15
         at org.apache.cassandra.db.marshal.LongType.getString(LongType.java:72)
         at org.apache.cassandra.config.CFMetaData.getDefaultIndexName(CFMetaData.java:971)
         at org.apache.cassandra.config.CFMetaData.inflate(CFMetaData.java:381)
         at org.apache.cassandra.config.KSMetaData.inflate(KSMetaData.java:172)
         at org.apache.cassandra.db.DefsTable.loadFromStorage(DefsTable.java:99)
         at org.apache.cassandra.config.DatabaseDescriptor.loadSchemas(DatabaseDescriptor.java:479)
         at org.apache.cassandra.service.AbstractCassandraDaemon.setup(AbstractCassandraDaemon.java:139)
         at org.apache.cassandra.service.AbstractCassandraDaemon.activate(AbstractCassandraDaemon.java:315)
         at org.apache.cassandra.thrift.CassandraDaemon.main(CassandraDaemon.java:80)
 Exception encountered during startup.
 org.apache.cassandra.db.marshal.MarshalException: A long is exactly 8 bytes: 15
         at org.apache.cassandra.db.marshal.LongType.getString(LongType.java:72)
         at org.apache.cassandra.config.CFMetaData.getDefaultIndexName(CFMetaData.java:971)
         at org.apache.cassandra.config.CFMetaData.inflate(CFMetaData.java:381)
         at org.apache.cassandra.config.KSMetaData.inflate(KSMetaData.java:172)
         at org.apache.cassandra.db.DefsTable.loadFromStorage(DefsTable.java:99)
         at org.apache.cassandra.config.DatabaseDescriptor.loadSchemas(DatabaseDescriptor.java:479)
         at org.apache.cassandra.service.AbstractCassandraDaemon.setup(AbstractCassandraDaemon.java:139)
         at org.apache.cassandra.service.AbstractCassandraDaemon.activate(AbstractCassandraDaemon.java:315)

[jira] [Issue Comment Edited] (CASSANDRA-2867) Starting 0.8.1 after upgrade from 0.7.6-2 fails

2011-07-28 Thread Alexander Goodrich (JIRA)

[ https://issues.apache.org/jira/browse/CASSANDRA-2867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13072242#comment-13072242 ]

Alexander Goodrich edited comment on CASSANDRA-2867 at 7/28/11 8:42 AM:


Thanks Taras for commenting on the issue - I just ran into the same problem 
with a super column family with a TimeUUIDType comparator and UTF8Type 
subcomparator. I can insert and get data all day long from the cluster while 
it is running fresh, but the moment I restart the cassandra node I get the 
exception saying that TimeUUIDTypes must be exactly 16 bytes.

Edit: Just verified your fix and it fixes my case as well.

  was (Author: redpriest):
Thanks Taras for commenting on the issue - I just ran into the same problem 
with a super column family with a TimeUUIDType comparator and UTF8Type 
subcomparator. I can insert and get data all day long from the cluster while 
it is running fresh, but the moment I restart the cassandra node I get the 
exception saying that TimeUUIDTypes must be exactly 16 bytes.