Hi,

1. I tried to apply the patches for this bug. They applied cleanly except for the
unit-test modifications, which git refused to apply.

2. After applying the patches I ran the stress.py script (with 500,000
keys). The script output looks fine, but the Cassandra console shows the
exception below.

Cassandra is started with these JVM options:

-Dcassandra.config=file:///home/dragos/Workspace/oss/cassandra/conf/cassandra.yaml
-Dcassandra-foreground
-ea -Xmx1280M

and with this cassandra.yaml setting:

binary_memtable_throughput_in_mb: 64 (also tried 128 and 256)

Is this a JVM memory configuration problem? The exception starts to appear
around key 100,000.

10/10/29 17:54:57 INFO service.StorageService: Starting up server gossip
10/10/29 17:54:57 INFO db.ColumnFamilyStore: switching in a fresh Memtable for LocationInfo at CommitLogContext(file='/home/dragos/cassandra/commitlog/CommitLog-1288364097273.log', position=700)
10/10/29 17:54:57 INFO db.ColumnFamilyStore: Enqueuing flush of memtable-locationi...@15696851(227 bytes, 4 operations)
10/10/29 17:54:57 INFO db.Memtable: Writing memtable-locationi...@15696851(227 bytes, 4 operations)
10/10/29 17:54:57 INFO db.Memtable: Completed flushing /home/dragos/cassandra/data/system/LocationInfo-e-1-Data.db
10/10/29 17:54:57 WARN service.StorageService: Generated random token 94710572475423860127984872289063475144. Random tokens will result in an unbalanced ring; see http://wiki.apache.org/cassandra/Operations
10/10/29 17:54:57 INFO db.ColumnFamilyStore: switching in a fresh Memtable for LocationInfo at CommitLogContext(file='/home/dragos/cassandra/commitlog/CommitLog-1288364097273.log', position=848)
10/10/29 17:54:57 INFO db.ColumnFamilyStore: Enqueuing flush of memtable-locationi...@19141351(36 bytes, 1 operations)
10/10/29 17:54:57 INFO db.Memtable: Writing memtable-locationi...@19141351(36 bytes, 1 operations)
10/10/29 17:54:57 INFO db.Memtable: Completed flushing /home/dragos/cassandra/data/system/LocationInfo-e-2-Data.db
10/10/29 17:54:57 INFO utils.Mx4jTool: Will not load MX4J, mx4j-tools.jar is not in the classpath
10/10/29 17:54:57 INFO thrift.CassandraDaemon: Binding thrift service to localhost/127.0.0.1:9160
10/10/29 17:54:57 INFO thrift.CassandraDaemon: Using TFramedTransport with a max frame size of 15728640 bytes.
10/10/29 17:54:57 INFO thrift.CassandraDaemon: Listening for thrift clients...
10/10/29 17:55:04 INFO db.ColumnFamilyStore: switching in a fresh Memtable for Migrations at CommitLogContext(file='/home/dragos/cassandra/commitlog/CommitLog-1288364097273.log', position=12544)
10/10/29 17:55:04 INFO db.ColumnFamilyStore: Enqueuing flush of memtable-migrati...@22958990(6993 bytes, 1 operations)
10/10/29 17:55:04 INFO db.Memtable: Writing memtable-migrati...@22958990(6993 bytes, 1 operations)
10/10/29 17:55:04 INFO db.ColumnFamilyStore: switching in a fresh Memtable for Schema at CommitLogContext(file='/home/dragos/cassandra/commitlog/CommitLog-1288364097273.log', position=12544)
10/10/29 17:55:04 INFO db.ColumnFamilyStore: Enqueuing flush of memtable-sch...@29336531(2649 bytes, 3 operations)
10/10/29 17:55:05 INFO db.Memtable: Completed flushing /home/dragos/cassandra/data/system/Migrations-e-1-Data.db
10/10/29 17:55:05 INFO db.Memtable: Writing memtable-sch...@29336531(2649 bytes, 3 operations)
10/10/29 17:55:05 INFO db.Memtable: Completed flushing /home/dragos/cassandra/data/system/Schema-e-1-Data.db
10/10/29 17:55:05 INFO db.ColumnFamilyStore: read 0 from saved key cache
10/10/29 17:55:05 INFO db.ColumnFamilyStore: read 0 from saved key cache
10/10/29 17:55:05 INFO db.ColumnFamilyStore: loading row cache for Super1 of Keyspace1
10/10/29 17:55:05 INFO db.ColumnFamilyStore: completed loading (0 ms; 0 keys)  row cache for Super1 of Keyspace1
10/10/29 17:55:05 INFO db.ColumnFamilyStore: loading row cache for Standard1 of Keyspace1
10/10/29 17:55:05 INFO db.ColumnFamilyStore: completed loading (0 ms; 0 keys)  row cache for Standard1 of Keyspace1
10/10/29 17:55:19 INFO service.GCInspector: GC for PS MarkSweep: 255 ms, 99152 reclaimed leaving 98881416 used; max is 1442054144
10/10/29 17:55:22 INFO db.ColumnFamilyStore: switching in a fresh Memtable for Standard1 at CommitLogContext(file='/home/dragos/cassandra/commitlog/CommitLog-1288364097273.log', position=37144548)
10/10/29 17:55:22 INFO db.ColumnFamilyStore: Enqueuing flush of memtable-standa...@23934262(17798235 bytes, 348985 operations)
10/10/29 17:55:22 INFO db.Memtable: Writing memtable-standa...@23934262(17798235 bytes, 348985 operations)
10/10/29 17:55:23 INFO service.GCInspector: GC for PS MarkSweep: 357 ms, 81144 reclaimed leaving 209802568 used; max is 1437138944
10/10/29 17:55:27 ERROR service.AbstractCassandraDaemon: Fatal exception in thread Thread[FlushWriter:1,5,main]
java.lang.RuntimeException: java.lang.ArrayIndexOutOfBoundsException: 1
    at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:34)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
    at org.apache.avro.io.ResolvingDecoder.readEnum(ResolvingDecoder.java:177)
    at org.apache.avro.generic.GenericDatumReader.readEnum(GenericDatumReader.java:172)
    at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:115)
    at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:118)
    at org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:142)
    at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:114)
    at org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:142)
    at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:114)
    at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:105)
    at org.apache.cassandra.io.SerDeUtils.deserializeWithSchema(SerDeUtils.java:112)
    at org.apache.cassandra.io.sstable.bitidx.BitmapIndexReader.open(BitmapIndexReader.java:87)
    at org.apache.cassandra.io.sstable.SSTableWriter.closeAndOpenReader(SSTableWriter.java:196)
    at org.apache.cassandra.io.sstable.SSTableWriter.closeAndOpenReader(SSTableWriter.java:178)
    at org.apache.cassandra.db.Memtable.writeSortedContents(Memtable.java:160)
    at org.apache.cassandra.db.Memtable.access$1(Memtable.java:152)
    at org.apache.cassandra.db.Memtable$1.runMayThrow(Memtable.java:172)
    at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
    ... 6 more

and more exceptions like this follow.
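
For reference, an ArrayIndexOutOfBoundsException in ResolvingDecoder.readEnum is the kind of failure Avro produces when the enum ordinal in the serialized bytes is out of range for the schema handed to the reader as the writer schema, which suggests a writer/reader schema mismatch rather than a memory problem. The standalone sketch below reproduces that failure shape; the EnumMismatchDemo class and the made-up "Kind" enum and its symbols are purely illustrative (they are not Cassandra's actual bitmap index schemas), and it assumes the Avro 1.4-era API bundled with this Cassandra build:

import java.io.ByteArrayOutputStream;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DecoderFactory;

public class EnumMismatchDemo
{
    public static void main(String[] args) throws Exception
    {
        // Schema the bytes are actually written with: an enum with three symbols.
        Schema writtenWith = Schema.parse(
            "{\"type\": \"enum\", \"name\": \"Kind\", \"symbols\": [\"A\", \"B\", \"C\"]}");
        // Stale schema later claimed to be the writer schema: only one symbol,
        // so ordinal 1 has no entry in the resolver's adjustment table.
        Schema claimed = Schema.parse(
            "{\"type\": \"enum\", \"name\": \"Kind\", \"symbols\": [\"A\"]}");

        // Serialize symbol "B" (ordinal 1). GenericDatumWriter only needs the
        // datum's toString() for enums, so a plain String is enough here.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        BinaryEncoder encoder = new BinaryEncoder(bytes);
        new GenericDatumWriter<Object>(writtenWith).write("B", encoder);
        encoder.flush();

        // Deserialize while pretending the one-symbol schema wrote the bytes.
        // This should fail with an ArrayIndexOutOfBoundsException thrown by
        // ResolvingDecoder.readEnum, the same frame as in the trace above.
        GenericDatumReader<Object> reader = new GenericDatumReader<Object>(claimed, claimed);
        reader.read(null, DecoderFactory.defaultFactory()
                                        .createBinaryDecoder(bytes.toByteArray(), null));
    }
}

If that reading is right, it would point at the schema recorded for the bitmap index component rather than at the heap settings, but I may well be misreading the trace.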

Dragos
