[jira] [Commented] (CASSANDRA-1991) CFS.maybeSwitchMemtable() calls CommitLog.instance.getContext(), which may block, under flusher lock write lock

2012-05-09 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13271164#comment-13271164
 ] 

Sylvain Lebresne commented on CASSANDRA-1991:
-

Agreed on #3, this seems to me like the right solution and I can't see why this 
wouldn't work (using a future wouldn't change the semantics of the code at all; 
we don't use the context within the lock anyway).
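The idea behind #3 can be sketched as follows. This is a hypothetical, simplified illustration of deferring the commit log context behind a Future so the flusher write lock is released before anyone waits on the executor; `DeferredContext`, `switchMemtable`, and the `CommitLogContext` record are stand-ins, not the actual Cassandra classes.

```java
import java.util.concurrent.*;
import java.util.concurrent.locks.*;

// Sketch of option #3: instead of blocking on CommitLog.instance.getContext()
// while holding the flusher write lock, hold only a Future for it.
public class DeferredContext {
    static final ExecutorService commitLogExecutor = Executors.newSingleThreadExecutor();

    // Stand-in for CommitLogSegment.CommitLogContext.
    record CommitLogContext(long position) {}

    public static Future<CommitLogContext> switchMemtable(ReadWriteLock flusherLock) {
        flusherLock.writeLock().lock();
        try {
            // submit() returns immediately; the executor computes the context
            // once it drains its queue, but we no longer wait for that under
            // the lock, so writers acquiring the read lock are not stalled.
            return commitLogExecutor.submit(() -> new CommitLogContext(42L));
        } finally {
            flusherLock.writeLock().unlock();
        }
    }

    public static void main(String[] args) throws Exception {
        Future<CommitLogContext> ctx = switchMemtable(new ReentrantReadWriteLock());
        // The Future is resolved here, outside the lock.
        System.out.println(ctx.get().position());
        commitLogExecutor.shutdown();
    }
}
```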

 CFS.maybeSwitchMemtable() calls CommitLog.instance.getContext(), which may 
 block, under flusher lock write lock
 ---

 Key: CASSANDRA-1991
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1991
 Project: Cassandra
  Issue Type: Improvement
Reporter: Peter Schuller
Assignee: Peter Schuller
 Attachments: 1991-checkpointing-flush.txt, 1991-logchanges.txt, 
 1991-trunk-v2.txt, 1991-trunk.txt, 1991-v3.txt, 1991-v4.txt, 1991-v5.txt, 
 1991-v6.txt, 1991-v7.txt, 1991-v8.txt, 1991-v9.txt, trigger.py


 While investigating CASSANDRA-1955 I realized I was seeing very poor latencies 
 for reasons that had nothing to do with flush_writers, even when using 
 periodic commit log mode (and flush writers set ridiculously high, 500).
 It turns out writes were slow because Table.apply() was spending lots 
 of time (I can easily trigger multi-second stalls on a moderate workload) 
 trying to acquire a flusher lock read lock (see the flush lock millis 
 printout in the logging patch I'll attach).
 That in turn is caused by CFS.maybeSwitchMemtable(), which acquires the 
 flusher lock write lock.
 Bisecting further revealed that the offending line of code that blocked was:
 final CommitLogSegment.CommitLogContext ctx = 
 writeCommitLog ? CommitLog.instance.getContext() : null;
 Indeed, CommitLog.getContext() simply returns currentSegment().getContext(), 
 but does so by submitting a callable on the service executor. So, 
 independently of flush writers, this can very easily block all writes 
 (globally, for all CFs), and does.
 I'll attach a standalone Python script that triggers it on 
 my macOS laptop (with an Intel SSD, which is why I was particularly 
 surprised). It assumes CPython and an out-of-the-box-or-almost Cassandra on 
 localhost that isn't in a cluster, and it will drop/recreate a keyspace 
 called '1955'.
 I'm also attaching, just FYI, the patch with log entries that I used while 
 tracking it down.
 Finally, I'll attach a patch with a suggested solution: keeping track of 
 the latest commit log segment with an AtomicReference (as an alternative to 
 synchronizing all access to segments). With that patch applied, latencies are 
 not affected by my trigger case like they were before. There are some 
 sub-optimal (> 100 ms) cases on my test machine, but for other reasons; I'm no 
 longer able to trigger the extremes.
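The AtomicReference approach described above can be sketched like this. All names here (`SegmentTracker`, `Segment`, `roll`, `currentContext`) are illustrative stand-ins under the assumption that the patch publishes the active segment through a single atomic reference; they are not the actual classes in 1991-trunk.txt.

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch: publish the active commit log segment through an AtomicReference
// so getContext() can read it lock-free, instead of synchronizing on the
// segment list or round-tripping through the commit log executor.
public class SegmentTracker {
    record Segment(String name) {
        // Stand-in for the segment's current write position.
        long getContext() { return name.hashCode(); }
    }

    private final AtomicReference<Segment> active =
        new AtomicReference<>(new Segment("log-1"));

    // The commit log thread flips to a new segment atomically...
    void roll(Segment next) { active.set(next); }

    // ...and any writer thread can fetch the latest context without
    // blocking behind the commit log executor's queue.
    long currentContext() { return active.get().getContext(); }

    public static void main(String[] args) {
        SegmentTracker tracker = new SegmentTracker();
        long before = tracker.currentContext();
        tracker.roll(new Segment("log-2"));
        System.out.println(before != tracker.currentContext());
    }
}
```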

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (CASSANDRA-4228) Exception while reading from cassandra via ColumnFamilyInputFormat and OrderPreservingPartitioner

2012-05-09 Thread bert Passek (JIRA)
bert Passek created CASSANDRA-4228:
--

 Summary: Exception while reading from cassandra via 
ColumnFamilyInputFormat and OrderPreservingPartitioner
 Key: CASSANDRA-4228
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4228
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Affects Versions: 1.1.0
 Environment: Debian Squeeze
Reporter: bert Passek


We recently updated Cassandra from version 1.0.8 to 1.1.0 on a Debian Squeeze 
system. After that we cannot use ColumnFamilyInputFormat anymore due to 
exceptions in Cassandra. A simple unit test is provided via attachment.

Here are some details about our simple setup:

Ring: 

Address DC  RackStatus State   LoadOwns 
   Token   
127.0.0.1   datacenter1 rack1   Up Normal  859.36 KB   100,00%  
   55894951196891831822413178196787984716  

Schema Definition:

create column family TestSuper
  with column_type = 'Super'
  and comparator = 'BytesType'
  and subcomparator = 'BytesType'
  and default_validation_class = 'BytesType'
  and key_validation_class = 'BytesType'
  and read_repair_chance = 0.1
  and dclocal_read_repair_chance = 0.0
  and gc_grace = 864000
  and min_compaction_threshold = 4
  and max_compaction_threshold = 32
  and replicate_on_write = true
  and compaction_strategy = 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'
  and caching = 'KEYS_ONLY'
  and compression_options = {'sstable_compression' : 
'org.apache.cassandra.io.compress.SnappyCompressor'};


While running the test we face the following exception on the client side:

12/05/09 10:18:22 INFO junit.TestRunner: 
testColumnFamilyInputFormat(de.unister.cpc.tests.CassandraTest): 
org.apache.thrift.transport.TTransportException
12/05/09 10:18:22 INFO junit.TestRunner: java.lang.RuntimeException: 
org.apache.thrift.transport.TTransportException
at 
org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.maybeInit(ColumnFamilyRecordReader.java:391)
at 
org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.computeNext(ColumnFamilyRecordReader.java:397)
at 
org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.computeNext(ColumnFamilyRecordReader.java:323)
at 
com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
at 
com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
at 
org.apache.cassandra.hadoop.ColumnFamilyRecordReader.nextKeyValue(ColumnFamilyRecordReader.java:188)
at 
de.unister.cpc.tests.CassandraTest.testColumnFamilyInputFormat(CassandraTest.java:98)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:73)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:46)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41)
at org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
at org.junit.runners.Suite.runChild(Suite.java:115)
at org.junit.runners.Suite.runChild(Suite.java:23)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41)
at org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
at org.junit.runners.ParentRunner.run(ParentRunner.java:220)

[jira] [Updated] (CASSANDRA-4228) Exception while reading from cassandra via ColumnFamilyInputFormat and OrderPreservingPartitioner

2012-05-09 Thread bert Passek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bert Passek updated CASSANDRA-4228:
---

Attachment: CassandraTest.java

Unit-Test ti reproduce described exception.


[jira] [Issue Comment Edited] (CASSANDRA-4228) Exception while reading from cassandra via ColumnFamilyInputFormat and OrderPreservingPartitioner

2012-05-09 Thread bert Passek (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13271209#comment-13271209
 ] 

bert Passek edited comment on CASSANDRA-4228 at 5/9/12 8:29 AM:


Unit-Test to reproduce described exception.

  was (Author: bertpassek):
Unit-Test ti reproduce described exception.
  

[jira] [Commented] (CASSANDRA-4196) While loading data using BulkOutPutFormat getting an exception java.lang.ClassCastException: org.apache.cassandra.utils.Murmur3BloomFilter cannot be cast to org.a

2012-05-09 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13271210#comment-13271210
 ] 

Sylvain Lebresne commented on CASSANDRA-4196:
-

I'm also confused by reading this ticket. I think it could use some 
on-the-record clarification.

From reading this, it seems Samarth reported this on 1.1 rc1 initially (*not* 
on trunk, unless there has been some off-the-record conversation), and then 
somehow this turns out to be a 1.2 issue? Did we remove code from 1.1 between 
rc1 and final that makes this a non-issue for 1.1? Is it something that does 
affect 1.1 (i.e. is it still possible to get the stack trace in the 
description) but can simply be fixed by running upgradesstables after a 1.1 
upgrade? Also, why is this marked as Won't Fix if something was committed to 
1.2?

I'm not saying something wrong has been done here; I just want to get the 
record straight. 
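For context, the failure mode in the trace below can be reconstructed in miniature. This is an illustrative sketch only: a serializer that hard-casts to one bloom filter subtype throws ClassCastException when the other implementation arrives, and dispatching on the runtime type avoids the cast. All class names here are stand-ins, not Cassandra's real FilterFactory internals, and I am assuming (not asserting) that 4196_create_correct_bf_type.diff takes a comparable approach.

```java
// Minimal reconstruction of the ClassCastException in the report.
interface BloomFilter { String serializeBits(); }
class Murmur2Filter implements BloomFilter { public String serializeBits() { return "m2"; } }
class Murmur3Filter implements BloomFilter { public String serializeBits() { return "m3"; } }

public class FilterSerializer {
    // Buggy shape: assumes every filter is the Murmur2 variant, so a
    // Murmur3 filter blows up with a ClassCastException at close time.
    static String serializeBuggy(BloomFilter f) {
        return ((Murmur2Filter) f).serializeBits();
    }

    // Fixed shape: let the runtime type choose its own encoding;
    // no cast, so either filter implementation serializes cleanly.
    static String serialize(BloomFilter f) {
        return f.serializeBits();
    }

    public static void main(String[] args) {
        System.out.println(serialize(new Murmur3Filter()));
    }
}
```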

 While loading data using BulkOutPutFormat getting an exception 
 java.lang.ClassCastException: org.apache.cassandra.utils.Murmur3BloomFilter 
 cannot be cast to org.apache.cassandra.utils.Murmur2BloomFilter
 -

 Key: CASSANDRA-4196
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4196
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop, Tools
Affects Versions: 1.2
Reporter: Samarth Gahire
Assignee: Dave Brosius
Priority: Minor
  Labels: bulkloader, cassandra, hadoop, hash
 Fix For: 1.2

 Attachments: 4196_create_correct_bf_type.diff

   Original Estimate: 48h
  Remaining Estimate: 48h

 We are using cassandra-1.1 rc1 for our production setup and get the following 
 error while bulk-loading data using BulkOutPutFormat.
 {code}
 WARN 09:04:52,384 Failed closing 
 IndexWriter(/cassandra/production/Data_daily/production-Data_daily-tmp-hc-2692)
 java.lang.ClassCastException: org.apache.cassandra.utils.Murmur3BloomFilter 
 cannot be cast to org.apache.cassandra.utils.Murmur2BloomFilter
 at 
 org.apache.cassandra.utils.FilterFactory.serialize(FilterFactory.java:50)
 at 
 org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.close(SSTableWriter.java:410)
 at 
 org.apache.cassandra.io.util.FileUtils.closeQuietly(FileUtils.java:94)
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.abort(SSTableWriter.java:255)
 at 
 org.apache.cassandra.streaming.IncomingStreamReader.streamIn(IncomingStreamReader.java:154)
 at 
 org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:92)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:178)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:74)
  WARN 09:04:52,393 Failed closing 
 IndexWriter(/cassandra/production/Data_daily/production-Data_daily-tmp-hc-2693)
 java.lang.ClassCastException: org.apache.cassandra.utils.Murmur3BloomFilter 
 cannot be cast to org.apache.cassandra.utils.Murmur2BloomFilter
 at 
 org.apache.cassandra.utils.FilterFactory.serialize(FilterFactory.java:50)
 at 
 org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.close(SSTableWriter.java:410)
 at 
 org.apache.cassandra.io.util.FileUtils.closeQuietly(FileUtils.java:94)
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.abort(SSTableWriter.java:255)
 at 
 org.apache.cassandra.streaming.IncomingStreamReader.streamIn(IncomingStreamReader.java:154)
 at 
 org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:92)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:178)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:74)
  WARN 09:04:52,544 Failed closing 
 IndexWriter(/cassandra/production/Data_daily/production-Data_daily-tmp-hc-2698)
 java.lang.ClassCastException: org.apache.cassandra.utils.Murmur3BloomFilter 
 cannot be cast to org.apache.cassandra.utils.Murmur2BloomFilter
 at 
 org.apache.cassandra.utils.FilterFactory.serialize(FilterFactory.java:50)
 at 
 org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.close(SSTableWriter.java:410)
 at 
 org.apache.cassandra.io.util.FileUtils.closeQuietly(FileUtils.java:94)
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.abort(SSTableWriter.java:255)
 at 
 org.apache.cassandra.streaming.IncomingStreamReader.streamIn(IncomingStreamReader.java:154)
 at 
 org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:92)
 

[jira] [Updated] (CASSANDRA-4228) Exception while reading from cassandra via ColumnFamilyInputFormat and OrderPreservingPartitioner

2012-05-09 Thread bert Passek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bert Passek updated CASSANDRA-4228:
---

Attachment: (was: CassandraTest.java)


[jira] [Updated] (CASSANDRA-4228) Exception while reading from cassandra via ColumnFamilyInputFormat and OrderPreservingPartitioner

2012-05-09 Thread bert Passek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bert Passek updated CASSANDRA-4228:
---

Attachment: CassandraTest.java

Unit-Test for reproduction.


[jira] [Updated] (CASSANDRA-4228) Exception while reading from cassandra via ColumnFamilyInputFormat and OrderPreservingPartitioner

2012-05-09 Thread bert Passek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bert Passek updated CASSANDRA-4228:
---

Comment: was deleted

(was: Unit-Test to reproduce described exception.)

 Exception while reading from cassandra via ColumnFamilyInputFormat and 
 OrderPreservingPartitioner
 -

 Key: CASSANDRA-4228
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4228
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Affects Versions: 1.1.0
 Environment: Debian Squeeze
Reporter: bert Passek
 Attachments: CassandraTest.java


 We recently updated Cassandra from version 1.0.8 to 1.1.0 on a Debian Squeeze 
 system. After that we cannot use ColumnFamilyInputFormat anymore due to 
 exceptions in Cassandra. A simple unit test is provided via attachment.
 Here are some details about our simple setup:
 Ring:
 Address     DC          Rack   Status  State   Load       Owns     Token
 127.0.0.1   datacenter1 rack1  Up      Normal  859.36 KB  100,00%  55894951196891831822413178196787984716
 Schema Definition:
 create column family TestSuper
   with column_type = 'Super'
   and comparator = 'BytesType'
   and subcomparator = 'BytesType'
   and default_validation_class = 'BytesType'
   and key_validation_class = 'BytesType'
   and read_repair_chance = 0.1
   and dclocal_read_repair_chance = 0.0
   and gc_grace = 864000
   and min_compaction_threshold = 4
   and max_compaction_threshold = 32
   and replicate_on_write = true
   and compaction_strategy = 
 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'
   and caching = 'KEYS_ONLY'
   and compression_options = {'sstable_compression' : 
 'org.apache.cassandra.io.compress.SnappyCompressor'};
 While running the test we face the following exception on the client side:
 12/05/09 10:18:22 INFO junit.TestRunner: 
 testColumnFamilyInputFormat(de.unister.cpc.tests.CassandraTest): 
 org.apache.thrift.transport.TTransportException
 12/05/09 10:18:22 INFO junit.TestRunner: java.lang.RuntimeException: 
 org.apache.thrift.transport.TTransportException
  at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.maybeInit(ColumnFamilyRecordReader.java:391)
  at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.computeNext(ColumnFamilyRecordReader.java:397)
  at org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.computeNext(ColumnFamilyRecordReader.java:323)
  at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
  at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
  at org.apache.cassandra.hadoop.ColumnFamilyRecordReader.nextKeyValue(ColumnFamilyRecordReader.java:188)
  at de.unister.cpc.tests.CassandraTest.testColumnFamilyInputFormat(CassandraTest.java:98)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
  at java.lang.reflect.Method.invoke(Method.java:597)
  at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
  at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
  at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
  at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
  at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
  at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
  at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:73)
  at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:46)
  at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
  at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41)
  at org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173)
  at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
  at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
  at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
  at org.junit.runners.Suite.runChild(Suite.java:115)
  at org.junit.runners.Suite.runChild(Suite.java:23)
  at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
  at 

[jira] [Created] (CASSANDRA-4229) Infinite MapReduce Task while reading via ColumnFamilyInputFormat

2012-05-09 Thread bert Passek (JIRA)
bert Passek created CASSANDRA-4229:
--

 Summary: Infinite MapReduce Task while reading via 
ColumnFamilyInputFormat
 Key: CASSANDRA-4229
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4229
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Affects Versions: 1.1.0
 Environment: Debian Squeeze
Reporter: bert Passek
 Attachments: screenshot.jpg

Hi,

we recently upgraded Cassandra from version 1.0.9 to 1.1.0. After that we 
cannot execute any Hadoop jobs which read data from Cassandra via 
ColumnFamilyInputFormat.

A map task is created which runs infinitely. We are trying to read from a 
super column family with roughly 1000 row keys.

This is the output from the job interface, where we already have 17 million 
map input records:

Counter                              Map            Reduce       Total
Map input records                    17.273.127     0            17.273.127
Reduce shuffle bytes                 0              391          391
Spilled Records                      3.288          0            3.288
Map output bytes                     639.849.351    0            639.849.351
CPU time spent (ms)                  792.750        7.600        800.350
Total committed heap usage (bytes)   354.680.832    48.955.392   403.636.224
Combine input records                17.039.783     0            17.039.783
SPLIT_RAW_BYTES                      212            0            212
Reduce input records                 0              0            0
Reduce input groups                  0              0            0
Combine output records               3.288          0            3.288
Physical memory (bytes) snapshot     510.275.584    96.370.688   606.646.272
Reduce output records                0              0            0
Virtual memory (bytes) snapshot      1.826.496.512  934.473.728  2.760.970.240
Map output records                   17.273.126     0            17.273.126

We must kill the job and go back to version 1.0.9, because 1.1.0 is not 
usable for reading from Cassandra.

Best regards 

Bert Passek

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-4229) Infinite MapReduce Task while reading via ColumnFamilyInputFormat

2012-05-09 Thread bert Passek (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

bert Passek updated CASSANDRA-4229:
---

Attachment: screenshot.jpg

Screenshot from map task with almost 30.000% progress.

 Infinite MapReduce Task while reading via ColumnFamilyInputFormat
 -

 Key: CASSANDRA-4229
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4229
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Affects Versions: 1.1.0
 Environment: Debian Squeeze
Reporter: bert Passek
 Attachments: screenshot.jpg


 Hi,
 we recently upgraded Cassandra from version 1.0.9 to 1.1.0. After that we 
 cannot execute any Hadoop jobs which read data from Cassandra via 
 ColumnFamilyInputFormat.
 A map task is created which runs infinitely. We are trying to read from 
 a super column family with roughly 1000 row keys.
 This is the output from the job interface, where we already have 17 million 
 map input records:
 Counter                              Map            Reduce       Total
 Map input records                    17.273.127     0            17.273.127
 Reduce shuffle bytes                 0              391          391
 Spilled Records                      3.288          0            3.288
 Map output bytes                     639.849.351    0            639.849.351
 CPU time spent (ms)                  792.750        7.600        800.350
 Total committed heap usage (bytes)   354.680.832    48.955.392   403.636.224
 Combine input records                17.039.783     0            17.039.783
 SPLIT_RAW_BYTES                      212            0            212
 Reduce input records                 0              0            0
 Reduce input groups                  0              0            0
 Combine output records               3.288          0            3.288
 Physical memory (bytes) snapshot     510.275.584    96.370.688   606.646.272
 Reduce output records                0              0            0
 Virtual memory (bytes) snapshot      1.826.496.512  934.473.728  2.760.970.240
 Map output records                   17.273.126     0            17.273.126
 We must kill the job and go back to version 1.0.9, because 1.1.0 is 
 not usable for reading from Cassandra.
 Best regards 
 Bert Passek





[jira] [Created] (CASSANDRA-4230) Deleting a CF always produces an error and that CF remains in an unknown state

2012-05-09 Thread JIRA
André Cruz created CASSANDRA-4230:
-

 Summary: Deleting a CF always produces an error and that CF 
remains in an unknown state
 Key: CASSANDRA-4230
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4230
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.0
 Environment: Debian Linux Squeeze with the cassandra debian package 
from Apache.
Reporter: André Cruz


From the CLI perspective:

[default@Disco] drop column family client; 
null
org.apache.thrift.transport.TTransportException
at 
org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at 
org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:129)
at 
org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at 
org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
at 
org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
at 
org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
at 
org.apache.cassandra.thrift.Cassandra$Client.recv_system_drop_column_family(Cassandra.java:1222)
at 
org.apache.cassandra.thrift.Cassandra$Client.system_drop_column_family(Cassandra.java:1209)
at 
org.apache.cassandra.cli.CliClient.executeDelColumnFamily(CliClient.java:1301)
at 
org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:234)
at 
org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:219)
at org.apache.cassandra.cli.CliMain.main(CliMain.java:346)

Log:

 INFO [MigrationStage:1] 2012-05-09 11:25:35,686 ColumnFamilyStore.java (line 
634) Enqueuing flush of Memtable-schema_columnfamilies@225225949(978/1222 
serialized/live bytes, 21 ops)
 INFO [FlushWriter:3] 2012-05-09 11:25:35,687 Memtable.java (line 266) Writing 
Memtable-schema_columnfamilies@225225949(978/1222 serialized/live bytes, 21 ops)
 INFO [FlushWriter:3] 2012-05-09 11:25:35,748 Memtable.java (line 307) 
Completed flushing 
/var/lib/cassandra/data/system/schema_columnfamilies/system-schema_columnfamilies-hc-34-Data.db
 (1041 bytes)
 INFO [MigrationStage:1] 2012-05-09 11:25:35,749 ColumnFamilyStore.java (line 
634) Enqueuing flush of Memtable-schema_columns@213209572(586/732 
serialized/live bytes, 12 ops)
 INFO [FlushWriter:3] 2012-05-09 11:25:35,750 Memtable.java (line 266) Writing 
Memtable-schema_columns@213209572(586/732 serialized/live bytes, 12 ops)
 INFO [FlushWriter:3] 2012-05-09 11:25:35,812 Memtable.java (line 307) 
Completed flushing 
/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-28-Data.db
 (649 bytes)
 INFO [CompactionExecutor:20] 2012-05-09 11:25:35,814 CompactionTask.java (line 114) Compacting 
 [SSTableReader(path='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-27-Data.db'),
 SSTableReader(path='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-25-Data.db'),
 SSTableReader(path='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-26-Data.db'),
 SSTableReader(path='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-28-Data.db')]
 INFO [MigrationStage:1] 2012-05-09 11:25:35,918 ColumnFamilyStore.java (line 
634) Enqueuing flush of Memtable-Client@864320066(372/465 serialized/live 
bytes, 6 ops)
 INFO [FlushWriter:3] 2012-05-09 11:25:35,919 Memtable.java (line 266) Writing 
Memtable-Client@864320066(372/465 serialized/live bytes, 6 ops)
 INFO [CompactionExecutor:20] 2012-05-09 11:25:35,945 CompactionTask.java (line 225) Compacted to 
 [/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-29-Data.db,].
 22,486 to 20,621 (~91% of original) bytes for 2 keys at 0.150120MB/s.  Time: 131ms.
 INFO [FlushWriter:3] 2012-05-09 11:25:36,013 Memtable.java (line 307) 
Completed flushing 
/var/lib/cassandra/data/Disco/Client/Disco-Client-hc-5-Data.db (407 bytes)
ERROR [MigrationStage:1] 2012-05-09 11:25:36,043 CLibrary.java (line 158) 
Unable to create hard link
com.sun.jna.LastErrorException: errno was 17
at org.apache.cassandra.utils.CLibrary.link(Native Method)
at org.apache.cassandra.utils.CLibrary.createHardLink(CLibrary.java:150)
at 
org.apache.cassandra.db.Directories.snapshotLeveledManifest(Directories.java:343)
at 
org.apache.cassandra.db.ColumnFamilyStore.snapshotWithoutFlush(ColumnFamilyStore.java:1450)
at 
org.apache.cassandra.db.ColumnFamilyStore.snapshot(ColumnFamilyStore.java:1483)
at 

[jira] [Updated] (CASSANDRA-4223) Non Unique Streaming session ID's

2012-05-09 Thread Aaron Morton (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Morton updated CASSANDRA-4223:


Attachment: 4223_counter_session_id.diff

Use an AtomicLong in StreamInSession and one in StreamOutSession for the 
session id. 

Sessions are always accessed using (inet_address, session_id), and the in and 
out sessions are kept in their own collections. 

 Non Unique Streaming session ID's
 -

 Key: CASSANDRA-4223
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4223
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Ubuntu 10.04.2 LTS
 java version 1.6.0_24
 Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
 Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)
 Bare metal servers from 
 https://www.stormondemand.com/servers/baremetal.html 
 The servers run on a custom hypervisor.
  
Reporter: Aaron Morton
Assignee: Aaron Morton
  Labels: datastax_qa
 Fix For: 1.0.11, 1.1.1

 Attachments: 4223_counter_session_id.diff, NanoTest.java, fmm 
 streaming bug.txt


 I have observed repair processes failing due to duplicate Streaming session 
 ID's. In this installation it is preventing rebalance from completing. I 
 believe it has also prevented repair from completing in the past. 
 The attached streaming-logs.txt file contains log messages and an explanation 
 of what was happening during a repair operation. It has the evidence for 
 duplicate session ID's.
 The duplicate session id's were generated on the repairing node and sent to 
 the streaming node. The streaming source replaced the first session with the 
 second which resulted in both sessions failing when the first FILE_COMPLETE 
 message was received. 
 The errors were:
 {code:java}
 DEBUG [MiscStage:1] 2012-05-03 21:40:33,997 StreamReplyVerbHandler.java (line 
 47) Received StreamReply StreamReply(sessionId=26132848816442266, 
 file='/var/lib/cassandra/data/FMM_Studio/PartsData-hc-1-Data.db', 
 action=FILE_FINISHED)
 ERROR [MiscStage:1] 2012-05-03 21:40:34,027 AbstractCassandraDaemon.java 
 (line 139) Fatal exception in thread Thread[MiscStage:1,5,main]
 java.lang.IllegalStateException: target reports current file is 
 /var/lib/cassandra/data/FMM_Studio/PartsData-hc-1-Data.db but is null
 at 
 org.apache.cassandra.streaming.StreamOutSession.validateCurrentFile(StreamOutSession.java:195)
 at 
 org.apache.cassandra.streaming.StreamReplyVerbHandler.doVerb(StreamReplyVerbHandler.java:58)
 at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
 at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown 
 Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)
 {code}
 and
 {code:java}
 DEBUG [MiscStage:2] 2012-05-03 21:40:36,497 StreamReplyVerbHandler.java (line 
 47) Received StreamReply StreamReply(sessionId=26132848816442266, 
 file='/var/lib/cassandra/data/OpsCenter/rollups7200-hc-3-Data.db', 
 action=FILE_FINISHED)
 ERROR [MiscStage:2] 2012-05-03 21:40:36,497 AbstractCassandraDaemon.java 
 (line 139) Fatal exception in thread Thread[MiscStage:2,5,main]
 java.lang.IllegalStateException: target reports current file is 
 /var/lib/cassandra/data/OpsCenter/rollups7200-hc-3-Data.db but is null
 at 
 org.apache.cassandra.streaming.StreamOutSession.validateCurrentFile(StreamOutSession.java:195)
 at 
 org.apache.cassandra.streaming.StreamReplyVerbHandler.doVerb(StreamReplyVerbHandler.java:58)
 at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
 at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown 
 Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)
 {code}
 I think this is because System.nanoTime() is used for the session ID when 
 creating the StreamInSession objects (driven from 
 StorageService.requestRanges()).
 From the documentation 
 (http://docs.oracle.com/javase/6/docs/api/java/lang/System.html#nanoTime()) 
 {quote}
 This method provides nanosecond precision, but not necessarily nanosecond 
 accuracy. No guarantees are made about how frequently values change. 
 {quote}
 Also some info here on clocks and timers 
 https://blogs.oracle.com/dholmes/entry/inside_the_hotspot_vm_clocks
 The hypervisor may be at fault here. But it seems like we cannot rely on 
 successive calls to nanoTime() to return different values. 
 To avoid message/interface changes on the StreamHeader it would be good to 
 keep the session ID a long. The simplest approach may be to make successive 
 calls to nanoTime until the result changes. We could fail if a certain number 
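The counter-based approach from the attached 4223_counter_session_id.diff can be sketched as follows. This is a minimal illustration under stated assumptions, not Cassandra's actual code; the class name is hypothetical:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch: derive stream-session ids from an atomically incremented counter
// instead of raw System.nanoTime(), which is not guaranteed to change
// between successive calls (especially under a hypervisor).
public class SessionIdGenerator {
    // Seed once from nanoTime so ids differ across process restarts, then
    // increment, so two sessions created by one process can never collide.
    private static final AtomicLong counter = new AtomicLong(System.nanoTime());

    public static long nextSessionId() {
        return counter.incrementAndGet();
    }
}
```

Since sessions are looked up by (inet_address, session_id), the ids only need to be unique per peer and per process, which a process-local counter provides while keeping the session ID a long on the wire.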

[jira] [Commented] (CASSANDRA-4227) StorageProxy throws NPEs for when there's no hostids for a target

2012-05-09 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271448#comment-13271448
 ] 

Brandon Williams commented on CASSANDRA-4227:
-

We should just drop the hint in this case, see CASSANDRA-4120

 StorageProxy throws NPEs for when there's no hostids for a target
 -

 Key: CASSANDRA-4227
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4227
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Dave Brosius
Priority: Trivial

 On trunk...
 if there is no host id due to an old node, an info log is generated, but the 
 code continues to use the null host id, causing NPEs in decompose(). Should 
 this code be bypassed, or can the plain IP address be used in this case? I 
 don't know.
 as follows...
 UUID hostId = StorageService.instance.getTokenMetadata().getHostId(target);
 if ((hostId == null) && (Gossiper.instance.getVersion(target) < MessagingService.VERSION_12))
     logger.info("Unable to store hint for host with missing ID, {} (old node?)", target.toString());
 RowMutation hintedMutation = RowMutation.hintFor(mutation, ByteBuffer.wrap(UUIDGen.decompose(hostId)));
 hintedMutation.apply();
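Following the suggestion to simply drop the hint in this case, a guard could look roughly like the sketch below. The class and helper names are hypothetical stand-ins, not Cassandra's real API:

```java
import java.util.UUID;

// Hypothetical sketch: drop the hint when the target has no host id,
// instead of passing the null id into UUIDGen.decompose() and hitting an NPE.
public class HintGuard {

    // Stand-in for StorageService.instance.getTokenMetadata().getHostId(target).
    public static UUID lookupHostId(String target) {
        return null; // simulate an old (pre-1.2) node that has no host id
    }

    // Returns true if a hint would be stored, false if it is dropped.
    public static boolean maybeStoreHint(String target) {
        UUID hostId = lookupHostId(target);
        if (hostId == null) {
            // Old node: nothing to address the hint to, so drop it.
            System.out.println("Unable to store hint for host with missing ID, "
                               + target + " (old node?); dropping hint");
            return false;
        }
        // RowMutation.hintFor(mutation, ByteBuffer.wrap(UUIDGen.decompose(hostId))).apply();
        return true;
    }
}
```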





[jira] [Commented] (CASSANDRA-3706) Back up configuration files on startup

2012-05-09 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271485#comment-13271485
 ] 

Brandon Williams commented on CASSANDRA-3706:
-

bq. Vs just doing a seq scan on a normal, non-node-local config CF to grab them 
all at once.

But you still need a mechanism (local file access to each node, etc) to see if 
they are in sync.

 Back up configuration files on startup
 --

 Key: CASSANDRA-3706
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3706
 Project: Cassandra
  Issue Type: New Feature
  Components: Tools
Reporter: Jonathan Ellis
Assignee: Dave Brosius
Priority: Minor
  Labels: lhf
 Fix For: 1.1.1

 Attachments: save_configuration.diff, save_configuration_2.diff, 
 save_configuration_3.diff, save_configuration_4.diff, 
 save_configuration_6.diff, save_configuration_7.diff


 Snapshot can backup user data, but it's also nice to be able to have 
 known-good configurations saved as well in case of accidental snafus or even 
 catastrophic loss of a cluster.  If we check for changes to cassandra.yaml, 
 cassandra-env.sh, and maybe log4j-server.properties on startup, we can back 
 them up to a columnfamily that can then be handled by normal snapshot/backup 
 procedures.





[jira] [Updated] (CASSANDRA-4061) Decommission should take a token

2012-05-09 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-4061:


Fix Version/s: (was: 1.2)
   1.0.11

 Decommission should take a token
 

 Key: CASSANDRA-4061
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4061
 Project: Cassandra
  Issue Type: Bug
Reporter: Brandon Williams
Assignee: Brandon Williams
 Fix For: 1.0.11

 Attachments: 4061.txt


 Like removetoken, decom should take a token parameter.  This is a bit easier 
 said than done because it changes gossip, but I've seen enough people burned 
 by this (as I have myself.)  In the short term though *decommission still 
 accepts a token parameter* which I thought we had fixed.





[jira] [Updated] (CASSANDRA-4061) Decommission should take a token

2012-05-09 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-4061:


Attachment: 4061.txt

To the dismay of my OCD, after looking at the options I think calling 
decommission on the host you want to decommission is the cleanest option we 
have.

However, as stated, nodetool should definitely not allow miscellaneous args to 
be passed, allowing users to think it's doing something other than what it is.  
Patch to fix this and clarify the help message, against 1.0 in the unlikely 
case we ever have reason to roll a 1.1.11.

 Decommission should take a token
 

 Key: CASSANDRA-4061
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4061
 Project: Cassandra
  Issue Type: Bug
Reporter: Brandon Williams
Assignee: Brandon Williams
 Fix For: 1.0.11

 Attachments: 4061.txt


 Like removetoken, decom should take a token parameter.  This is a bit easier 
 said than done because it changes gossip, but I've seen enough people burned 
 by this (as I have myself.)  In the short term though *decommission still 
 accepts a token parameter* which I thought we had fixed.





[jira] [Commented] (CASSANDRA-3706) Back up configuration files on startup

2012-05-09 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271502#comment-13271502
 ] 

Jonathan Ellis commented on CASSANDRA-3706:
---

I don't follow; all I need is "for yaml in (select yaml from backups): if yaml 
!= last_yaml: raise".
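The comparison being described can be sketched as follows, assuming configs are backed up by content digest; all names here are illustrative, not Cassandra's:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch: hash the current cassandra.yaml and compare it with the digest of
// the last backed-up copy; a mismatch means the config drifted since backup.
public class ConfigDigest {

    public static String sha1Hex(String content) {
        try {
            byte[] d = MessageDigest.getInstance("SHA-1")
                                    .digest(content.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : d) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError(e); // SHA-1 is always present in the JRE
        }
    }

    public static boolean inSync(String currentYaml, String lastBackedUpDigest) {
        return sha1Hex(currentYaml).equals(lastBackedUpDigest);
    }
}
```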

 Back up configuration files on startup
 --

 Key: CASSANDRA-3706
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3706
 Project: Cassandra
  Issue Type: New Feature
  Components: Tools
Reporter: Jonathan Ellis
Assignee: Dave Brosius
Priority: Minor
  Labels: lhf
 Fix For: 1.1.1

 Attachments: save_configuration.diff, save_configuration_2.diff, 
 save_configuration_3.diff, save_configuration_4.diff, 
 save_configuration_6.diff, save_configuration_7.diff


 Snapshot can backup user data, but it's also nice to be able to have 
 known-good configurations saved as well in case of accidental snafus or even 
 catastrophic loss of a cluster.  If we check for changes to cassandra.yaml, 
 cassandra-env.sh, and maybe log4j-server.properties on startup, we can back 
 them up to a columnfamily that can then be handled by normal snapshot/backup 
 procedures.





[jira] [Assigned] (CASSANDRA-1991) CFS.maybeSwitchMemtable() calls CommitLog.instance.getContext(), which may block, under flusher lock write lock

2012-05-09 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reassigned CASSANDRA-1991:
-

Assignee: Jonathan Ellis  (was: Peter Schuller)

 CFS.maybeSwitchMemtable() calls CommitLog.instance.getContext(), which may 
 block, under flusher lock write lock
 ---

 Key: CASSANDRA-1991
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1991
 Project: Cassandra
  Issue Type: Improvement
Reporter: Peter Schuller
Assignee: Jonathan Ellis
 Attachments: 1991-checkpointing-flush.txt, 1991-logchanges.txt, 
 1991-trunk-v2.txt, 1991-trunk.txt, 1991-v3.txt, 1991-v4.txt, 1991-v5.txt, 
 1991-v6.txt, 1991-v7.txt, 1991-v8.txt, 1991-v9.txt, trigger.py


 While investigating CASSANDRA-1955 I realized I was seeing very poor latencies 
 for reasons that had nothing to do with flush_writers, even when using 
 periodic commit log mode (and flush writers set ridiculously high, 500).
 It turns out blocked writes were slow because Table.apply() was spending lots 
 of time (I can easily trigger seconds on moderate work-load) trying to 
 acquire a flusher lock read lock (flush lock millis log printout in the 
 logging patch I'll attach).
 That in turn is caused by CFS.maybeSwitchMemtable(), which acquires the 
 flusher lock write lock.
 Bisecting further revealed that the offending line of code that blocked was:
 final CommitLogSegment.CommitLogContext ctx = 
 writeCommitLog ? CommitLog.instance.getContext() : null;
 Indeed, CommitLog.getContext() simply returns currentSegment().getContext(), 
 but does so by submitting a callable on the service executor. So 
 independently of flush writers, this can block all (global, for all cf:s) 
 writes very easily, and does.
 I'll attach a file that is an independent Python script that triggers it on 
 my macos laptop (with an intel SSD, which is why I was particularly 
 surprised) (it assumes CPython, out-of-the-box-or-almost Cassandra on 
 localhost that isn't in a cluster, and it will drop/recreate a keyspace 
 called '1955').
 I'm also attaching, just FYI, the patch with log entries that I used while 
 tracking it down.
 Finally, I'll attach a patch with a suggested solution of keeping track of 
 the latest commit log with an AtomicReference (as an alternative to 
 synchronizing all access to segments). With that patch applied, latencies are 
 not affected by my trigger case like they were before. There are some 
 sub-optimal > 100 ms cases on my test machine, but for other reasons. I'm no 
 longer able to trigger the extremes.
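The AtomicReference idea can be sketched as below. This is a simplified stand-in under stated assumptions, not the attached patch, and the class names are hypothetical:

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch: publish the active commit log segment through an AtomicReference so
// getContext() is a lock-free read, rather than a task submitted to the
// commit log executor that can block writers holding the flusher lock.
public class CommitLogSketch {

    public static class Segment {
        final long position;
        public Segment(long position) { this.position = position; }
        public long getContext() { return position; } // plain final-field read
    }

    private final AtomicReference<Segment> active =
            new AtomicReference<>(new Segment(0));

    // Called only by the commit log thread when rolling to a new segment.
    public void switchSegment(Segment next) { active.set(next); }

    // Safe to call under the flusher write lock: no executor hop, never blocks.
    public long getContext() { return active.get().getContext(); }
}
```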





[jira] [Issue Comment Edited] (CASSANDRA-4061) Decommission should take a token

2012-05-09 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271503#comment-13271503
 ] 

Brandon Williams edited comment on CASSANDRA-4061 at 5/9/12 3:57 PM:
-

To the dismay of my OCD, after looking at the options I think calling 
decommission on the host you want to decommission is the cleanest option we 
have.

However, as stated, nodetool should definitely not allow miscellaneous args to 
be passed, allowing users to think it's doing something other than what it is.  
Patch to fix this and clarify the help message, against 1.0 in the unlikely 
case we ever have reason to roll a 1.0.11.

  was (Author: brandon.williams):
To the dismay of my OCD, after looking at the options I think calling 
decommission on the host you want to decommission is the cleanest option we 
have.

However, as stated, nodetool should definitely not allow miscellaneous args to 
get passed, allow users to think it's doing something other than what it is.  
Patch to fix this and clarify the help message, against 1.0 in the unlikely 
case we ever have reason to roll a 1.1.11.
  
 Decommission should take a token
 

 Key: CASSANDRA-4061
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4061
 Project: Cassandra
  Issue Type: Bug
Reporter: Brandon Williams
Assignee: Brandon Williams
 Fix For: 1.0.11

 Attachments: 4061.txt


 Like removetoken, decom should take a token parameter.  This is a bit easier 
 said than done because it changes gossip, but I've seen enough people burned 
 by this (as I have myself.)  In the short term though *decommission still 
 accepts a token parameter* which I thought we had fixed.





[jira] [Updated] (CASSANDRA-1991) CFS.maybeSwitchMemtable() calls CommitLog.instance.getContext(), which may block, under flusher lock write lock

2012-05-09 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-1991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-1991:
--

Attachment: 1991.txt

patch attached w/ that approach

 CFS.maybeSwitchMemtable() calls CommitLog.instance.getContext(), which may 
 block, under flusher lock write lock
 ---

 Key: CASSANDRA-1991
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1991
 Project: Cassandra
  Issue Type: Improvement
Reporter: Peter Schuller
Assignee: Jonathan Ellis
  Labels: commitlog
 Fix For: 1.1.1

 Attachments: 1991-checkpointing-flush.txt, 1991-logchanges.txt, 
 1991-trunk-v2.txt, 1991-trunk.txt, 1991-v3.txt, 1991-v4.txt, 1991-v5.txt, 
 1991-v6.txt, 1991-v7.txt, 1991-v8.txt, 1991-v9.txt, 1991.txt, trigger.py


 While investigating CASSANDRA-1955 I realized I was seeing very poor latencies 
 for reasons that had nothing to do with flush_writers, even when using 
 periodic commit log mode (and flush writers set ridiculously high, 500).
 It turns out blocked writes were slow because Table.apply() was spending lots 
 of time (I can easily trigger seconds on moderate work-load) trying to 
 acquire a flusher lock read lock (flush lock millis log printout in the 
 logging patch I'll attach).
 That in turn is caused by CFS.maybeSwitchMemtable(), which acquires the 
 flusher lock write lock.
 Bisecting further revealed that the offending line of code that blocked was:
 final CommitLogSegment.CommitLogContext ctx = 
 writeCommitLog ? CommitLog.instance.getContext() : null;
 Indeed, CommitLog.getContext() simply returns currentSegment().getContext(), 
 but does so by submitting a callable on the service executor. So 
 independently of flush writers, this can block all (global, for all CFs) 
 writes very easily, and does.
 I'll attach a file that is an independent Python script that triggers it on 
 my macos laptop (with an intel SSD, which is why I was particularly 
 surprised) (it assumes CPython, out-of-the-box-or-almost Cassandra on 
 localhost that isn't in a cluster, and it will drop/recreate a keyspace 
 called '1955').
 I'm also attaching, just FYI, the patch with log entries that I used while 
 tracking it down.
 Finally, I'll attach a patch with a suggested solution of keeping track of 
 the latest commit log with an AtomicReference (as an alternative to 
 synchronizing all access to segments). With that patch applied, latencies are 
 not affected by my trigger case like they were before. There are some 
 sub-optimal > 100 ms cases on my test machine, but for other reasons. I'm no 
 longer able to trigger the extremes.
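The AtomicReference approach described above can be sketched roughly as follows (a hypothetical, heavily simplified model — the real patch tracks CommitLogSegment instances; class and method names here are illustrative). The point is that getContext() becomes a plain volatile read of the active segment instead of a callable submitted to the commit log executor, so it cannot block under the flusher lock:

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch: the log keeps an AtomicReference to its active segment, so a
// flush can read the current context without an executor round-trip.
class CommitLogSketch
{
    static class Segment
    {
        private long position = 0;

        synchronized long getContext() { return position; }      // replay position
        synchronized void append(int size) { position += size; } // a write lands here
    }

    private final AtomicReference<Segment> activeSegment =
            new AtomicReference<>(new Segment());

    // Called by maybeSwitchMemtable() under the flusher write lock:
    // no task submission, so it cannot stall Table.apply() readers.
    long getContext() { return activeSegment.get().getContext(); }

    void append(int size) { activeSegment.get().append(size); }

    // Called when the log rolls over to a fresh segment.
    void switchSegment() { activeSegment.set(new Segment()); }
}
```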





[jira] [Commented] (CASSANDRA-3706) Back up configuration files on startup

2012-05-09 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271509#comment-13271509
 ] 

Brandon Williams commented on CASSANDRA-3706:
-

Maybe I misunderstand what 'in sync' meant, but I'm not sure what last_yaml is 
if we're storing by digest (or why they'd ever be the same, in that case.)

(also we're backing up 3 different files, not just yaml, but that's not a big 
deal)

 Back up configuration files on startup
 --

 Key: CASSANDRA-3706
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3706
 Project: Cassandra
  Issue Type: New Feature
  Components: Tools
Reporter: Jonathan Ellis
Assignee: Dave Brosius
Priority: Minor
  Labels: lhf
 Fix For: 1.1.1

 Attachments: save_configuration.diff, save_configuration_2.diff, 
 save_configuration_3.diff, save_configuration_4.diff, 
 save_configuration_6.diff, save_configuration_7.diff


 Snapshot can backup user data, but it's also nice to be able to have 
 known-good configurations saved as well in case of accidental snafus or even 
 catastrophic loss of a cluster.  If we check for changes to cassandra.yaml, 
 cassandra-env.sh, and maybe log4j-server.properties on startup, we can back 
 them up to a columnfamily that can then be handled by normal snapshot/backup 
 procedures.





[jira] [Updated] (CASSANDRA-4196) While loading data using BulkOutPutFormat getting an exception java.lang.ClassCastException: org.apache.cassandra.utils.Murmur3BloomFilter cannot be cast to org.apa

2012-05-09 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4196:
--

Fix Version/s: (was: 1.2)

 While loading data using BulkOutPutFormat getting an exception 
 java.lang.ClassCastException: org.apache.cassandra.utils.Murmur3BloomFilter 
 cannot be cast to org.apache.cassandra.utils.Murmur2BloomFilter
 -

 Key: CASSANDRA-4196
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4196
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop, Tools
Affects Versions: 1.2
Reporter: Samarth Gahire
Assignee: Dave Brosius
Priority: Minor
  Labels: bulkloader, cassandra, hadoop, hash
 Attachments: 4196_create_correct_bf_type.diff

   Original Estimate: 48h
  Remaining Estimate: 48h

 We are using cassandra-1.1 rc1 for our production setup and getting the 
 following error while bulk-loading data using BulkOutPutFormat.
 {code}
 WARN 09:04:52,384 Failed closing 
 IndexWriter(/cassandra/production/Data_daily/production-Data_daily-tmp-hc-2692)
 java.lang.ClassCastException: org.apache.cassandra.utils.Murmur3BloomFilter 
 cannot be cast to org.apache.cassandra.utils.Murmur2BloomFilter
 at 
 org.apache.cassandra.utils.FilterFactory.serialize(FilterFactory.java:50)
 at 
 org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.close(SSTableWriter.java:410)
 at 
 org.apache.cassandra.io.util.FileUtils.closeQuietly(FileUtils.java:94)
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.abort(SSTableWriter.java:255)
 at 
 org.apache.cassandra.streaming.IncomingStreamReader.streamIn(IncomingStreamReader.java:154)
 at 
 org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:92)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:178)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:74)
  WARN 09:04:52,393 Failed closing 
 IndexWriter(/cassandra/production/Data_daily/production-Data_daily-tmp-hc-2693)
 java.lang.ClassCastException: org.apache.cassandra.utils.Murmur3BloomFilter 
 cannot be cast to org.apache.cassandra.utils.Murmur2BloomFilter
 at 
 org.apache.cassandra.utils.FilterFactory.serialize(FilterFactory.java:50)
 at 
 org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.close(SSTableWriter.java:410)
 at 
 org.apache.cassandra.io.util.FileUtils.closeQuietly(FileUtils.java:94)
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.abort(SSTableWriter.java:255)
 at 
 org.apache.cassandra.streaming.IncomingStreamReader.streamIn(IncomingStreamReader.java:154)
 at 
 org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:92)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:178)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:74)
  WARN 09:04:52,544 Failed closing 
 IndexWriter(/cassandra/production/Data_daily/production-Data_daily-tmp-hc-2698)
 java.lang.ClassCastException: org.apache.cassandra.utils.Murmur3BloomFilter 
 cannot be cast to org.apache.cassandra.utils.Murmur2BloomFilter
 at 
 org.apache.cassandra.utils.FilterFactory.serialize(FilterFactory.java:50)
 at 
 org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.close(SSTableWriter.java:410)
 at 
 org.apache.cassandra.io.util.FileUtils.closeQuietly(FileUtils.java:94)
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.abort(SSTableWriter.java:255)
 at 
 org.apache.cassandra.streaming.IncomingStreamReader.streamIn(IncomingStreamReader.java:154)
 at 
 org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:92)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:178)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:74)
 ERROR 09:04:52,544 Exception in thread Thread[Thread-39,5,main]
 java.lang.IndexOutOfBoundsException
 at java.nio.Buffer.checkIndex(Buffer.java:520)
 at java.nio.HeapByteBuffer.getShort(HeapByteBuffer.java:289)
 at org.apache.cassandra.db.CounterColumn.create(CounterColumn.java:79)
 at 
 org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:102)
 at 
 org.apache.cassandra.io.util.ColumnIterator.deserializeNext(ColumnSortedMap.java:251)
 at 
 

[jira] [Commented] (CASSANDRA-4196) While loading data using BulkOutPutFormat getting an exception java.lang.ClassCastException: org.apache.cassandra.utils.Murmur3BloomFilter cannot be cast to org.a

2012-05-09 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271513#comment-13271513
 ] 

Jonathan Ellis commented on CASSANDRA-4196:
---

Murmur3 was only introduced into trunk, so we assume he must have been mixing 
cluster versions.

CASSANDRA-4203 is what was committed to 1.2 to fix.

 While loading data using BulkOutPutFormat getting an exception 
 java.lang.ClassCastException: org.apache.cassandra.utils.Murmur3BloomFilter 
 cannot be cast to org.apache.cassandra.utils.Murmur2BloomFilter
 -

 Key: CASSANDRA-4196
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4196
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop, Tools
Affects Versions: 1.2
Reporter: Samarth Gahire
Assignee: Dave Brosius
Priority: Minor
  Labels: bulkloader, cassandra, hadoop, hash
 Attachments: 4196_create_correct_bf_type.diff

   Original Estimate: 48h
  Remaining Estimate: 48h

 We are using cassandra-1.1 rc1 for our production setup and getting the 
 following error while bulk-loading data using BulkOutPutFormat.
 {code}
 WARN 09:04:52,384 Failed closing 
 IndexWriter(/cassandra/production/Data_daily/production-Data_daily-tmp-hc-2692)
 java.lang.ClassCastException: org.apache.cassandra.utils.Murmur3BloomFilter 
 cannot be cast to org.apache.cassandra.utils.Murmur2BloomFilter
 at 
 org.apache.cassandra.utils.FilterFactory.serialize(FilterFactory.java:50)
 at 
 org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.close(SSTableWriter.java:410)
 at 
 org.apache.cassandra.io.util.FileUtils.closeQuietly(FileUtils.java:94)
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.abort(SSTableWriter.java:255)
 at 
 org.apache.cassandra.streaming.IncomingStreamReader.streamIn(IncomingStreamReader.java:154)
 at 
 org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:92)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:178)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:74)
  WARN 09:04:52,393 Failed closing 
 IndexWriter(/cassandra/production/Data_daily/production-Data_daily-tmp-hc-2693)
 java.lang.ClassCastException: org.apache.cassandra.utils.Murmur3BloomFilter 
 cannot be cast to org.apache.cassandra.utils.Murmur2BloomFilter
 at 
 org.apache.cassandra.utils.FilterFactory.serialize(FilterFactory.java:50)
 at 
 org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.close(SSTableWriter.java:410)
 at 
 org.apache.cassandra.io.util.FileUtils.closeQuietly(FileUtils.java:94)
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.abort(SSTableWriter.java:255)
 at 
 org.apache.cassandra.streaming.IncomingStreamReader.streamIn(IncomingStreamReader.java:154)
 at 
 org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:92)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:178)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:74)
  WARN 09:04:52,544 Failed closing 
 IndexWriter(/cassandra/production/Data_daily/production-Data_daily-tmp-hc-2698)
 java.lang.ClassCastException: org.apache.cassandra.utils.Murmur3BloomFilter 
 cannot be cast to org.apache.cassandra.utils.Murmur2BloomFilter
 at 
 org.apache.cassandra.utils.FilterFactory.serialize(FilterFactory.java:50)
 at 
 org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.close(SSTableWriter.java:410)
 at 
 org.apache.cassandra.io.util.FileUtils.closeQuietly(FileUtils.java:94)
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.abort(SSTableWriter.java:255)
 at 
 org.apache.cassandra.streaming.IncomingStreamReader.streamIn(IncomingStreamReader.java:154)
 at 
 org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:92)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:178)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:74)
 ERROR 09:04:52,544 Exception in thread Thread[Thread-39,5,main]
 java.lang.IndexOutOfBoundsException
 at java.nio.Buffer.checkIndex(Buffer.java:520)
 at java.nio.HeapByteBuffer.getShort(HeapByteBuffer.java:289)
 at org.apache.cassandra.db.CounterColumn.create(CounterColumn.java:79)
 at 
 

[jira] [Commented] (CASSANDRA-4228) Exception while reading from cassandra via ColumnFamilyInputFormat and OrderPreservingPartitioner

2012-05-09 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271516#comment-13271516
 ] 

Jonathan Ellis commented on CASSANDRA-4228:
---

The server stack trace shows RandomPartitioner.  Sounds like either your 
cluster or your hadoop job is misconfigured.

 Exception while reading from cassandra via ColumnFamilyInputFormat and 
 OrderPreservingPartitioner
 -

 Key: CASSANDRA-4228
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4228
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Affects Versions: 1.1.0
 Environment: Debian Squeeze
Reporter: bert Passek
 Attachments: CassandraTest.java


 We recently updated cassandra from version 1.0.8 to 1.1.0 on a debian squeeze 
 system. After that we cannot use ColumnFamilyInputFormat anymore due to 
 exceptions in cassandra. A simple unit test is provided via attachment.
 Here are some details about our simple setup:
 Ring: 
 Address DC  RackStatus State   LoadOwns   
  Token   
 127.0.0.1   datacenter1 rack1   Up Normal  859.36 KB   
 100,00% 55894951196891831822413178196787984716  
 Schema Definition:
 create column family TestSuper
   with column_type = 'Super'
   and comparator = 'BytesType'
   and subcomparator = 'BytesType'
   and default_validation_class = 'BytesType'
   and key_validation_class = 'BytesType'
   and read_repair_chance = 0.1
   and dclocal_read_repair_chance = 0.0
   and gc_grace = 864000
   and min_compaction_threshold = 4
   and max_compaction_threshold = 32
   and replicate_on_write = true
   and compaction_strategy = 
 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'
   and caching = 'KEYS_ONLY'
   and compression_options = {'sstable_compression' : 
 'org.apache.cassandra.io.compress.SnappyCompressor'};
 While running the test we face following exception on client side:
 12/05/09 10:18:22 INFO junit.TestRunner: 
 testColumnFamilyInputFormat(de.unister.cpc.tests.CassandraTest): 
 org.apache.thrift.transport.TTransportException
 12/05/09 10:18:22 INFO junit.TestRunner: java.lang.RuntimeException: 
 org.apache.thrift.transport.TTransportException
   at 
 org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.maybeInit(ColumnFamilyRecordReader.java:391)
   at 
 org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.computeNext(ColumnFamilyRecordReader.java:397)
   at 
 org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.computeNext(ColumnFamilyRecordReader.java:323)
   at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
   at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
   at 
 org.apache.cassandra.hadoop.ColumnFamilyRecordReader.nextKeyValue(ColumnFamilyRecordReader.java:188)
   at 
 de.unister.cpc.tests.CassandraTest.testColumnFamilyInputFormat(CassandraTest.java:98)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
   at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:73)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:46)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41)
   at org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
   at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
   at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
   at org.junit.runners.Suite.runChild(Suite.java:115)
   at org.junit.runners.Suite.runChild(Suite.java:23)
   at 

[jira] [Assigned] (CASSANDRA-4230) Deleting a CF always produces an error and that CF remains in an unknown state

2012-05-09 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reassigned CASSANDRA-4230:
-

Assignee: Pavel Yaskevich

probably related to CASSANDRA-4219

 Deleting a CF always produces an error and that CF remains in an unknown state
 --

 Key: CASSANDRA-4230
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4230
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.0
 Environment: Debian Linux Squeeze with the cassandra debian package 
 from Apache.
Reporter: André Cruz
Assignee: Pavel Yaskevich

 From the CLI perspective:
 [default@Disco] drop column family client; 
 null
 org.apache.thrift.transport.TTransportException
   at 
 org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
   at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
   at 
 org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:129)
   at 
 org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
   at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
   at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
   at 
 org.apache.cassandra.thrift.Cassandra$Client.recv_system_drop_column_family(Cassandra.java:1222)
   at 
 org.apache.cassandra.thrift.Cassandra$Client.system_drop_column_family(Cassandra.java:1209)
   at 
 org.apache.cassandra.cli.CliClient.executeDelColumnFamily(CliClient.java:1301)
   at 
 org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:234)
   at 
 org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:219)
   at org.apache.cassandra.cli.CliMain.main(CliMain.java:346)
 Log:
  INFO [MigrationStage:1] 2012-05-09 11:25:35,686 ColumnFamilyStore.java (line 
 634) Enqueuing flush of Memtable-schema_columnfamilies@225225949(978/1222 
 serialized/live bytes, 21 ops)
  INFO [FlushWriter:3] 2012-05-09 11:25:35,687 Memtable.java (line 266) 
 Writing Memtable-schema_columnfamilies@225225949(978/1222 serialized/live 
 bytes, 21 ops)
  INFO [FlushWriter:3] 2012-05-09 11:25:35,748 Memtable.java (line 307) 
 Completed flushing 
 /var/lib/cassandra/data/system/schema_columnfamilies/system-schema_columnfamilies-hc-34-Data.db
  (1041 bytes)
  INFO [MigrationStage:1] 2012-05-09 11:25:35,749 ColumnFamilyStore.java (line 
 634) Enqueuing flush of Memtable-schema_columns@213209572(586/732 
 serialized/live bytes, 12 ops)
  INFO [FlushWriter:3] 2012-05-09 11:25:35,750 Memtable.java (line 266) 
 Writing Memtable-schema_columns@213209572(586/732 serialized/live bytes, 12 
 ops)
  INFO [FlushWriter:3] 2012-05-09 11:25:35,812 Memtable.java (line 307) 
 Completed flushing 
 /var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-28-Data.db
  (649 bytes)
  INFO [CompactionExecutor:20] 2012-05-09 11:25:35,814 CompactionTask.java 
 (line 114) Compacting 
 [SSTableReader(path='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-27-Data.db'),
  SSTableReader
 (path='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-25-Data.db'),
  
 SSTableReader(path='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-26-Data.db'),
  SSTableReader(path
 ='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-28-Data.db')]
  INFO [MigrationStage:1] 2012-05-09 11:25:35,918 ColumnFamilyStore.java (line 
 634) Enqueuing flush of Memtable-Client@864320066(372/465 serialized/live 
 bytes, 6 ops)
  INFO [FlushWriter:3] 2012-05-09 11:25:35,919 Memtable.java (line 266) 
 Writing Memtable-Client@864320066(372/465 serialized/live bytes, 6 ops)
  INFO [CompactionExecutor:20] 2012-05-09 11:25:35,945 CompactionTask.java 
 (line 225) Compacted to 
 [/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-29-Data.db,].
   22,486 to 20,621 (~91% of original) bytes for 2 keys at 0.150120MB/s.  Time: 131ms.
  INFO [FlushWriter:3] 2012-05-09 11:25:36,013 Memtable.java (line 307) 
 Completed flushing 
 /var/lib/cassandra/data/Disco/Client/Disco-Client-hc-5-Data.db (407 bytes)
 ERROR [MigrationStage:1] 2012-05-09 11:25:36,043 CLibrary.java (line 158) 
 Unable to create hard link
 com.sun.jna.LastErrorException: errno was 17
 at org.apache.cassandra.utils.CLibrary.link(Native Method)
 at 
 

[jira] [Commented] (CASSANDRA-4229) Infinite MapReduce Task while reading via ColumnFamilyInputFormat

2012-05-09 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271524#comment-13271524
 ] 

Jonathan Ellis commented on CASSANDRA-4229:
---

Can you reproduce on a single node?

 Infinite MapReduce Task while reading via ColumnFamilyInputFormat
 -

 Key: CASSANDRA-4229
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4229
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Affects Versions: 1.1.0
 Environment: Debian Squeeze
Reporter: bert Passek
 Attachments: screenshot.jpg


 Hi,
 we recently upgraded cassandra from version 1.0.9 to 1.1.0. After that we 
 cannot execute any hadoop jobs which read data from cassandra via 
 ColumnFamilyInputFormat.
 A map task is created which is running infinitely. We are trying to read from 
 a super column family with more or less 1000 row keys.
 This is the output from the job interface, where we already have 17 million 
 map input records!
 Map input records 17.273.127  0   17.273.127
 Reduce shuffle bytes  0   391 391
 Spilled Records   3.288   0   3.288
 Map output bytes  639.849.351 0   639.849.351
 CPU time spent (ms)   792.750 7.600   800.350
 Total committed heap usage (bytes)354.680.832 48.955.392  
 403.636.224
 Combine input records 17.039.783  0   17.039.783
 SPLIT_RAW_BYTES   212 0   212
 Reduce input records  0   0   0
 Reduce input groups   0   0   0
 Combine output records3.288   0   3.288
 Physical memory (bytes) snapshot  510.275.584 96.370.688  
 606.646.272
 Reduce output records 0   0   0
 Virtual memory (bytes) snapshot   1.826.496.512   934.473.728 
 2.760.970.240
 Map output records17.273.126  0   17.273.126
 We must kill the job and we have to go back to version 1.0.9 because 1.1.0 is 
 not usable for reading from cassandra.
 Best regards 
 Bert Passek





[jira] [Updated] (CASSANDRA-4061) Decommission should take a token

2012-05-09 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-4061:


Reviewer: vijay2...@yahoo.com

 Decommission should take a token
 

 Key: CASSANDRA-4061
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4061
 Project: Cassandra
  Issue Type: Bug
Reporter: Brandon Williams
Assignee: Brandon Williams
 Fix For: 1.0.11

 Attachments: 4061.txt


 Like removetoken, decom should take a token parameter.  This is a bit easier 
 said than done because it changes gossip, but I've seen enough people burned 
 by this (as I have myself.)  In the short term though *decommission still 
 accepts a token parameter* which I thought we had fixed.





[jira] [Commented] (CASSANDRA-3706) Back up configuration files on startup

2012-05-09 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271522#comment-13271522
 ] 

Jonathan Ellis commented on CASSANDRA-3706:
---

bq. I'm not sure what last_yaml is if we're storing by digest

Why would we do that?  Just store the file raw.  Then you don't need FS access, 
is my point.

bq. also we're backing up 3 different files, not just yaml, but that's not a 
big deal

Yeah, I was kind of hoping the extrapolation would be obvious. :P
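The "just store the file raw" idea can be sketched as follows (hypothetical names throughout; a plain Map stands in for the system columnfamily that would hold the saved copies). Comparing raw bytes means no digest bookkeeping and no extra filesystem state:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Sketch: keep the last saved raw copy of each config file and re-save
// only when its bytes have changed since the previous startup.
class ConfigBackupSketch
{
    // stands in for the columnfamily holding the last saved copies
    private final Map<String, byte[]> saved = new HashMap<>();

    /** Returns true if the file's contents changed and were (re-)saved. */
    boolean backupIfChanged(String name, byte[] current)
    {
        byte[] previous = saved.get(name);
        if (previous != null && Arrays.equals(previous, current))
            return false;                 // unchanged, nothing to do
        saved.put(name, current.clone()); // store the file raw, no digest
        return true;
    }
}
```

The same call would be made for each of the three files (cassandra.yaml, cassandra-env.sh, log4j-server.properties) at startup.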

 Back up configuration files on startup
 --

 Key: CASSANDRA-3706
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3706
 Project: Cassandra
  Issue Type: New Feature
  Components: Tools
Reporter: Jonathan Ellis
Assignee: Dave Brosius
Priority: Minor
  Labels: lhf
 Fix For: 1.1.1

 Attachments: save_configuration.diff, save_configuration_2.diff, 
 save_configuration_3.diff, save_configuration_4.diff, 
 save_configuration_6.diff, save_configuration_7.diff


 Snapshot can backup user data, but it's also nice to be able to have 
 known-good configurations saved as well in case of accidental snafus or even 
 catastrophic loss of a cluster.  If we check for changes to cassandra.yaml, 
 cassandra-env.sh, and maybe log4j-server.properties on startup, we can back 
 them up to a columnfamily that can then be handled by normal snapshot/backup 
 procedures.





git commit: DataInput.skipBytes isn't guaranteed to actually skip the number of bytes requested, use FileUtils.skipBytesFully instead patch by dbrosius; reviewed by jbellis for CASSANDRA-4226

2012-05-09 Thread jbellis
Updated Branches:
  refs/heads/trunk 4357676f3 -> 033486158


DataInput.skipBytes isn't guaranteed to actually skip the number of bytes 
requested, use FileUtils.skipBytesFully instead
patch by dbrosius; reviewed by jbellis for CASSANDRA-4226


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/03348615
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/03348615
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/03348615

Branch: refs/heads/trunk
Commit: 033486158a8ee2ed3af870bd17e2734632a796ff
Parents: 4357676
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed May 9 11:15:07 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed May 9 11:15:07 2012 -0500

--
 src/java/org/apache/cassandra/net/MessageIn.java |3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/03348615/src/java/org/apache/cassandra/net/MessageIn.java
--
diff --git a/src/java/org/apache/cassandra/net/MessageIn.java 
b/src/java/org/apache/cassandra/net/MessageIn.java
index 531e0dd..f3b712e 100644
--- a/src/java/org/apache/cassandra/net/MessageIn.java
+++ b/src/java/org/apache/cassandra/net/MessageIn.java
@@ -27,6 +27,7 @@ import com.google.common.collect.ImmutableMap;
 
 import org.apache.cassandra.concurrent.Stage;
 import org.apache.cassandra.io.IVersionedSerializer;
+import org.apache.cassandra.io.util.FileUtils;
 
 public class MessageIn<T>
 {
@@ -82,7 +83,7 @@ public class MessageIn<T>
 if (callback == null)
 {
 // reply for expired callback.  we'll have to skip it.
-in.skipBytes(payloadSize);
+FileUtils.skipBytesFully(in, payloadSize);
 return null;
 }
 serializer = (IVersionedSerializer<T2>) callback.serializer;
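The contract difference driving this fix: DataInput.skipBytes(n) may skip fewer than n bytes and merely reports how many it actually skipped, so a single call can silently leave the stream positioned mid-payload. A skipBytesFully-style helper must loop until the full count is consumed. This sketch mirrors the behavior of the FileUtils method named in the commit, not its actual source:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

class SkipSketch
{
    // Keep calling skipBytes until the requested count is fully skipped,
    // failing loudly if the stream ends first.
    static void skipBytesFully(DataInput in, int bytes) throws IOException
    {
        int n = 0;
        while (n < bytes)
        {
            int skipped = in.skipBytes(bytes - n);
            if (skipped == 0)
                throw new EOFException("EOF after " + n + " of " + bytes + " bytes");
            n += skipped;
        }
    }
}
```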



[jira] [Commented] (CASSANDRA-4230) Deleting a CF always produces an error and that CF remains in an unknown state

2012-05-09 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271534#comment-13271534
 ] 

Pavel Yaskevich commented on CASSANDRA-4230:


From the exception message I see that it's related to the JNA hard links created 
while it takes a snapshot:

{noformat}
ERROR [MigrationStage:1] 2012-05-09 11:25:36,043 CLibrary.java (line 158) 
Unable to create hard link
com.sun.jna.LastErrorException: errno was 17
...
Caused by: java.util.concurrent.ExecutionException: java.io.IOError: 
java.io.IOException: Unable to create hard link from 
/var/lib/cassandra/data/Disco/Client/Client.json to 
/var/lib/cassandra/data/Disco/Client/snapshots/1336559135918-Client/Client.json 
(errno 17)
{noformat}

and errno 17 means "File exists", so this is something with the snapshot or the 
system rather than migrations.
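For reference, errno 17 is EEXIST: link(2) fails when the destination name already exists. A hypothetical sketch (not Cassandra's CLibrary code) of a snapshot-style hard-link call that tolerates a leftover link from a previous attempt, using the stdlib equivalent java.nio.file.Files.createLink:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;

class HardLinkSketch
{
    // Returns true if the link was created, false if it already existed
    // (e.g. left over from an earlier snapshot attempt).
    static boolean createHardLinkIdempotent(Path link, Path existing)
    {
        try
        {
            Files.createLink(link, existing);
            return true;
        }
        catch (FileAlreadyExistsException e)
        {
            return false;                  // EEXIST: treat as already done
        }
        catch (IOException e)
        {
            throw new UncheckedIOException(e); // any other failure is real
        }
    }
}
```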

 Deleting a CF always produces an error and that CF remains in an unknown state
 --

 Key: CASSANDRA-4230
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4230
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.0
 Environment: Debian Linux Squeeze with the cassandra debian package 
 from Apache.
Reporter: André Cruz
Assignee: Pavel Yaskevich

 From the CLI perspective:
 [default@Disco] drop column family client; 
 null
 org.apache.thrift.transport.TTransportException
   at 
 org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
   at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
   at 
 org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:129)
   at 
 org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
   at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
   at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
   at 
 org.apache.cassandra.thrift.Cassandra$Client.recv_system_drop_column_family(Cassandra.java:1222)
   at 
 org.apache.cassandra.thrift.Cassandra$Client.system_drop_column_family(Cassandra.java:1209)
   at 
 org.apache.cassandra.cli.CliClient.executeDelColumnFamily(CliClient.java:1301)
   at 
 org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:234)
   at 
 org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:219)
   at org.apache.cassandra.cli.CliMain.main(CliMain.java:346)
 Log:
  INFO [MigrationStage:1] 2012-05-09 11:25:35,686 ColumnFamilyStore.java (line 
 634) Enqueuing flush of Memtable-schema_columnfamilies@225225949(978/1222 
 serialized/live bytes, 21 ops)
  INFO [FlushWriter:3] 2012-05-09 11:25:35,687 Memtable.java (line 266) 
 Writing Memtable-schema_columnfamilies@225225949(978/1222 serialized/live 
 bytes, 21 ops)
  INFO [FlushWriter:3] 2012-05-09 11:25:35,748 Memtable.java (line 307) 
 Completed flushing 
 /var/lib/cassandra/data/system/schema_columnfamilies/system-schema_columnfamilies-hc-34-Data.db
  (1041 bytes)
  INFO [MigrationStage:1] 2012-05-09 11:25:35,749 ColumnFamilyStore.java (line 
 634) Enqueuing flush of Memtable-schema_columns@213209572(586/732 
 serialized/live bytes, 12 ops)
  INFO [FlushWriter:3] 2012-05-09 11:25:35,750 Memtable.java (line 266) 
 Writing Memtable-schema_columns@213209572(586/732 serialized/live bytes, 12 
 ops)
  INFO [FlushWriter:3] 2012-05-09 11:25:35,812 Memtable.java (line 307) 
 Completed flushing 
 /var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-28-Data.db
  (649 bytes)
  INFO [CompactionExecutor:20] 2012-05-09 11:25:35,814 CompactionTask.java 
 (line 114) Compacting 
 [SSTableReader(path='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-27-Data.db'),
  SSTableReader
 (path='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-25-Data.db'),
  
 SSTableReader(path='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-26-Data.db'),
  SSTableReader(path
 ='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-28-Data.db')]
  INFO [MigrationStage:1] 2012-05-09 11:25:35,918 ColumnFamilyStore.java (line 
 634) Enqueuing flush of Memtable-Client@864320066(372/465 serialized/live 
 bytes, 6 ops)
  INFO [FlushWriter:3] 2012-05-09 11:25:35,919 Memtable.java (line 266) 
 Writing Memtable-Client@864320066(372/465 serialized/live bytes, 6 ops)
  INFO [CompactionExecutor:20] 2012-05-09 11:25:35,945 CompactionTask.java 
 (line 225) Compacted to 
 

[jira] [Commented] (CASSANDRA-4196) While loading data using BulkOutPutFormat getting an exception java.lang.ClassCastException: org.apache.cassandra.utils.Murmur3BloomFilter cannot be cast to org.a

2012-05-09 Thread Samarth Gahire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271539#comment-13271539
 ] 

Samarth Gahire commented on CASSANDRA-4196:
---

I agree with Jonathan. Today, after reinstalling Cassandra properly, bulk-loading 
is working fine. We might have mixed the versions while upgrading.

 While loading data using BulkOutPutFormat getting an exception 
 java.lang.ClassCastException: org.apache.cassandra.utils.Murmur3BloomFilter 
 cannot be cast to org.apache.cassandra.utils.Murmur2BloomFilter
 -

 Key: CASSANDRA-4196
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4196
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop, Tools
Affects Versions: 1.2
Reporter: Samarth Gahire
Assignee: Dave Brosius
Priority: Minor
  Labels: bulkloader, cassandra, hadoop, hash
 Attachments: 4196_create_correct_bf_type.diff

   Original Estimate: 48h
  Remaining Estimate: 48h

 We are using Cassandra 1.1 rc1 for our production setup and get the following 
 error while bulk-loading data using BulkOutPutFormat.
 {code}
 WARN 09:04:52,384 Failed closing 
 IndexWriter(/cassandra/production/Data_daily/production-Data_daily-tmp-hc-2692)
 java.lang.ClassCastException: org.apache.cassandra.utils.Murmur3BloomFilter 
 cannot be cast to org.apache.cassandra.utils.Murmur2BloomFilter
 at 
 org.apache.cassandra.utils.FilterFactory.serialize(FilterFactory.java:50)
 at 
 org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.close(SSTableWriter.java:410)
 at 
 org.apache.cassandra.io.util.FileUtils.closeQuietly(FileUtils.java:94)
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.abort(SSTableWriter.java:255)
 at 
 org.apache.cassandra.streaming.IncomingStreamReader.streamIn(IncomingStreamReader.java:154)
 at 
 org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:92)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:178)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:74)
  WARN 09:04:52,393 Failed closing 
 IndexWriter(/cassandra/production/Data_daily/production-Data_daily-tmp-hc-2693)
 java.lang.ClassCastException: org.apache.cassandra.utils.Murmur3BloomFilter 
 cannot be cast to org.apache.cassandra.utils.Murmur2BloomFilter
 at 
 org.apache.cassandra.utils.FilterFactory.serialize(FilterFactory.java:50)
 at 
 org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.close(SSTableWriter.java:410)
 at 
 org.apache.cassandra.io.util.FileUtils.closeQuietly(FileUtils.java:94)
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.abort(SSTableWriter.java:255)
 at 
 org.apache.cassandra.streaming.IncomingStreamReader.streamIn(IncomingStreamReader.java:154)
 at 
 org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:92)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:178)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:74)
  WARN 09:04:52,544 Failed closing 
 IndexWriter(/cassandra/production/Data_daily/production-Data_daily-tmp-hc-2698)
 java.lang.ClassCastException: org.apache.cassandra.utils.Murmur3BloomFilter 
 cannot be cast to org.apache.cassandra.utils.Murmur2BloomFilter
 at 
 org.apache.cassandra.utils.FilterFactory.serialize(FilterFactory.java:50)
 at 
 org.apache.cassandra.io.sstable.SSTableWriter$IndexWriter.close(SSTableWriter.java:410)
 at 
 org.apache.cassandra.io.util.FileUtils.closeQuietly(FileUtils.java:94)
 at 
 org.apache.cassandra.io.sstable.SSTableWriter.abort(SSTableWriter.java:255)
 at 
 org.apache.cassandra.streaming.IncomingStreamReader.streamIn(IncomingStreamReader.java:154)
 at 
 org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:92)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:178)
 at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:74)
 ERROR 09:04:52,544 Exception in thread Thread[Thread-39,5,main]
  java.lang.IndexOutOfBoundsException
 at java.nio.Buffer.checkIndex(Buffer.java:520)
 at java.nio.HeapByteBuffer.getShort(HeapByteBuffer.java:289)
 at org.apache.cassandra.db.CounterColumn.create(CounterColumn.java:79)
 at 
 org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:102)

[1/5] git commit: Merge branch 'cassandra-1.1' into trunk

2012-05-09 Thread jbellis
Updated Branches:
  refs/heads/cassandra-1.1 641346b0d -> fd66ccf21
  refs/heads/trunk 033486158 -> 1bb2f32d8


Merge branch 'cassandra-1.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1bb2f32d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1bb2f32d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1bb2f32d

Branch: refs/heads/trunk
Commit: 1bb2f32d8f25da023a167fb0cd3f31059f4266da
Parents: 0334861 fd66ccf
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed May 9 11:28:49 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed May 9 11:28:49 2012 -0500

--
 CHANGES.txt|2 +
 .../org/apache/cassandra/config/CFMetaData.java|8 +-
 .../apache/cassandra/db/index/keys/KeysIndex.java  |   19 +++
 .../locator/AbstractReplicationStrategy.java   |9 +++
 .../apache/cassandra/locator/LocalStrategy.java|1 +
 .../apache/cassandra/locator/SimpleStrategy.java   |3 ++
 6 files changed, 41 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1bb2f32d/CHANGES.txt
--
diff --cc CHANGES.txt
index e693a40,f17ffd1..d6e62c8
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,14 -1,6 +1,16 @@@
 +1.2-dev
 + * Track tombstone expiration and compact when tombstone content is
 +   higher than a configurable threshold, default 20% (CASSANDRA-3442)
 + * update MurmurHash to version 3 (CASSANDRA-2975)
 + * (CLI) track elapsed time for `delete' operation (CASSANDRA-4060)
 + * (CLI) jline version is bumped to 1.0 to properly  support
 +   'delete' key function (CASSANDRA-4132)
 + * Save IndexSummary into new SSTable 'Summary' component (CASSANDRA-2392)
 +
 +
  1.1.1-dev
+  * enable caching on index CFs based on data CF cache setting (CASSANDRA-4197)
+  * warn on invalid replication strategy creation options (CASSANDRA-4046)
   * remove [Freeable]Memory finalizers (CASSANDRA-4222)
   * include tombstone size in ColumnFamily.size, which can prevent OOM
 during sudden mass delete operations (CASSANDRA-3741)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1bb2f32d/src/java/org/apache/cassandra/config/CFMetaData.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1bb2f32d/src/java/org/apache/cassandra/db/index/keys/KeysIndex.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1bb2f32d/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1bb2f32d/src/java/org/apache/cassandra/locator/LocalStrategy.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1bb2f32d/src/java/org/apache/cassandra/locator/SimpleStrategy.java
--



[3/5] git commit: warn on invalid replication strategy creation options patch by dbrosius; reviewed by jbellis for CASSANDRA-4046

2012-05-09 Thread jbellis
warn on invalid replication strategy creation options
patch by dbrosius; reviewed by jbellis for CASSANDRA-4046


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fd66ccf2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fd66ccf2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fd66ccf2

Branch: refs/heads/trunk
Commit: fd66ccf21f1331fd2bd07611ad8dd5bf4be5f83b
Parents: 16d4c6c
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed May 9 11:28:40 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed May 9 11:28:40 2012 -0500

--
 CHANGES.txt|2 ++
 .../locator/AbstractReplicationStrategy.java   |9 +
 .../apache/cassandra/locator/LocalStrategy.java|1 +
 .../apache/cassandra/locator/SimpleStrategy.java   |3 +++
 4 files changed, 15 insertions(+), 0 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/fd66ccf2/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index baf899f..f17ffd1 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 1.1.1-dev
+ * enable caching on index CFs based on data CF cache setting (CASSANDRA-4197)
+ * warn on invalid replication strategy creation options (CASSANDRA-4046)
  * remove [Freeable]Memory finalizers (CASSANDRA-4222)
  * include tombstone size in ColumnFamily.size, which can prevent OOM
during sudden mass delete operations (CASSANDRA-3741)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fd66ccf2/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
--
diff --git 
a/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java 
b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
index dccc850..288818c 100644
--- a/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
+++ b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
@@ -251,4 +251,13 @@ public abstract class AbstractReplicationStrategy
             throw new ConfigurationException("Replication factor must be numeric; found " + rf);
         }
     }
+
+    protected void warnOnUnexpectedOptions(Collection<String> expectedOptions)
+    {
+        for (String key : configOptions.keySet())
+        {
+            if (!expectedOptions.contains(key))
+                logger.warn("Unrecognized strategy option {" + key + "} passed to " + getClass().getSimpleName() + " for keyspace " + table);
+        }
+    }
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fd66ccf2/src/java/org/apache/cassandra/locator/LocalStrategy.java
--
diff --git a/src/java/org/apache/cassandra/locator/LocalStrategy.java 
b/src/java/org/apache/cassandra/locator/LocalStrategy.java
index f2804da..381d64f 100644
--- a/src/java/org/apache/cassandra/locator/LocalStrategy.java
+++ b/src/java/org/apache/cassandra/locator/LocalStrategy.java
@@ -48,5 +48,6 @@ public class LocalStrategy extends AbstractReplicationStrategy
     public void validateOptions() throws ConfigurationException
     {
         // LocalStrategy doesn't expect any options.
+        warnOnUnexpectedOptions(Arrays.<String>asList());
     }
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fd66ccf2/src/java/org/apache/cassandra/locator/SimpleStrategy.java
--
diff --git a/src/java/org/apache/cassandra/locator/SimpleStrategy.java 
b/src/java/org/apache/cassandra/locator/SimpleStrategy.java
index 024e9d4..1cffd74 100644
--- a/src/java/org/apache/cassandra/locator/SimpleStrategy.java
+++ b/src/java/org/apache/cassandra/locator/SimpleStrategy.java
@@ -21,6 +21,7 @@ package org.apache.cassandra.locator;
 
 import java.net.InetAddress;
 import java.util.ArrayList;
+import java.util.Arrays;
 import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
@@ -28,6 +29,7 @@ import java.util.Map;
 import org.apache.cassandra.config.ConfigurationException;
 import org.apache.cassandra.dht.Token;
 
+
 /**
  * This class returns the nodes responsible for a given
  * key but does not respect rack awareness. Basically
@@ -70,6 +72,7 @@ public class SimpleStrategy extends AbstractReplicationStrategy
         {
             throw new ConfigurationException("SimpleStrategy requires a replication_factor strategy option.");
         }
+        warnOnUnexpectedOptions(Arrays.<String>asList("replication_factor"));
         validateReplicationFactor(configOptions.get("replication_factor"));
     }
 }
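The check added by this patch is easy to exercise on its own. Below is a minimal, hypothetical sketch (class, method, and map names are mine, not Cassandra's) of the same warn-on-unrecognized-option pattern:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class StrategyOptionsCheck {
    // Same idea as warnOnUnexpectedOptions in the patch: any configured key
    // not in the expected set draws a warning rather than a hard error.
    static List<String> unexpected(Map<String, String> configOptions,
                                   Collection<String> expectedOptions) {
        List<String> warnings = new ArrayList<>();
        for (String key : configOptions.keySet())
            if (!expectedOptions.contains(key))
                warnings.add("Unrecognized strategy option {" + key + "}");
        return warnings;
    }

    public static void main(String[] args) {
        Map<String, String> opts = new LinkedHashMap<>();
        opts.put("replication_factor", "3");
        opts.put("replication_factory", "3"); // a plausible user typo
        System.out.println(unexpected(opts, Arrays.asList("replication_factor")));
        // prints [Unrecognized strategy option {replication_factory}]
    }
}
```

Warning instead of throwing keeps existing schemas loadable while still surfacing typos like the one above.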



[2/5] git commit: warn on invalid replication strategy creation options patch by dbrosius; reviewed by jbellis for CASSANDRA-4046

2012-05-09 Thread jbellis
warn on invalid replication strategy creation options
patch by dbrosius; reviewed by jbellis for CASSANDRA-4046


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fd66ccf2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fd66ccf2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fd66ccf2

Branch: refs/heads/cassandra-1.1
Commit: fd66ccf21f1331fd2bd07611ad8dd5bf4be5f83b
Parents: 16d4c6c
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed May 9 11:28:40 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed May 9 11:28:40 2012 -0500

--
 CHANGES.txt|2 ++
 .../locator/AbstractReplicationStrategy.java   |9 +
 .../apache/cassandra/locator/LocalStrategy.java|1 +
 .../apache/cassandra/locator/SimpleStrategy.java   |3 +++
 4 files changed, 15 insertions(+), 0 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/fd66ccf2/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index baf899f..f17ffd1 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 1.1.1-dev
+ * enable caching on index CFs based on data CF cache setting (CASSANDRA-4197)
+ * warn on invalid replication strategy creation options (CASSANDRA-4046)
  * remove [Freeable]Memory finalizers (CASSANDRA-4222)
  * include tombstone size in ColumnFamily.size, which can prevent OOM
during sudden mass delete operations (CASSANDRA-3741)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fd66ccf2/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
--
diff --git 
a/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java 
b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
index dccc850..288818c 100644
--- a/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
+++ b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
@@ -251,4 +251,13 @@ public abstract class AbstractReplicationStrategy
             throw new ConfigurationException("Replication factor must be numeric; found " + rf);
         }
     }
+
+    protected void warnOnUnexpectedOptions(Collection<String> expectedOptions)
+    {
+        for (String key : configOptions.keySet())
+        {
+            if (!expectedOptions.contains(key))
+                logger.warn("Unrecognized strategy option {" + key + "} passed to " + getClass().getSimpleName() + " for keyspace " + table);
+        }
+    }
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fd66ccf2/src/java/org/apache/cassandra/locator/LocalStrategy.java
--
diff --git a/src/java/org/apache/cassandra/locator/LocalStrategy.java 
b/src/java/org/apache/cassandra/locator/LocalStrategy.java
index f2804da..381d64f 100644
--- a/src/java/org/apache/cassandra/locator/LocalStrategy.java
+++ b/src/java/org/apache/cassandra/locator/LocalStrategy.java
@@ -48,5 +48,6 @@ public class LocalStrategy extends AbstractReplicationStrategy
     public void validateOptions() throws ConfigurationException
     {
         // LocalStrategy doesn't expect any options.
+        warnOnUnexpectedOptions(Arrays.<String>asList());
     }
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/fd66ccf2/src/java/org/apache/cassandra/locator/SimpleStrategy.java
--
diff --git a/src/java/org/apache/cassandra/locator/SimpleStrategy.java 
b/src/java/org/apache/cassandra/locator/SimpleStrategy.java
index 024e9d4..1cffd74 100644
--- a/src/java/org/apache/cassandra/locator/SimpleStrategy.java
+++ b/src/java/org/apache/cassandra/locator/SimpleStrategy.java
@@ -21,6 +21,7 @@ package org.apache.cassandra.locator;
 
 import java.net.InetAddress;
 import java.util.ArrayList;
+import java.util.Arrays;
 import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
@@ -28,6 +29,7 @@ import java.util.Map;
 import org.apache.cassandra.config.ConfigurationException;
 import org.apache.cassandra.dht.Token;
 
+
 /**
  * This class returns the nodes responsible for a given
  * key but does not respect rack awareness. Basically
@@ -70,6 +72,7 @@ public class SimpleStrategy extends AbstractReplicationStrategy
         {
             throw new ConfigurationException("SimpleStrategy requires a replication_factor strategy option.");
        }
+        warnOnUnexpectedOptions(Arrays.<String>asList("replication_factor"));
         validateReplicationFactor(configOptions.get("replication_factor"));
     }
 }



[4/5] git commit: enable keys cache and rows cache on index CFs based on setting in data CF patch by yukim; reviewed by jbellis for CASSANDRA-4197

2012-05-09 Thread jbellis
enable keys cache and rows cache on index CFs based on setting in data CF
patch by yukim; reviewed by jbellis for CASSANDRA-4197


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/16d4c6c3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/16d4c6c3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/16d4c6c3

Branch: refs/heads/trunk
Commit: 16d4c6c320cd98e3e7628a68bd6b4f20f9a365f4
Parents: 641346b
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed May 9 11:24:19 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed May 9 11:24:19 2012 -0500

--
 .../org/apache/cassandra/config/CFMetaData.java|8 +-
 .../apache/cassandra/db/index/keys/KeysIndex.java  |   19 +++
 2 files changed, 26 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/16d4c6c3/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java 
b/src/java/org/apache/cassandra/config/CFMetaData.java
index e903bd7..e36ea2d 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -326,11 +326,17 @@ public final class CFMetaData
 
     public static CFMetaData newIndexMetadata(CFMetaData parent, ColumnDefinition info, AbstractType<?> columnComparator)
     {
+        // Depending on the parent's cache setting, turn on the index CF's cache.
+        // Only the key cache is enabled here; later (in KeysIndex) the row cache is turned on depending on cardinality.
+        Caching indexCaching = parent.getCaching() == Caching.ALL || parent.getCaching() == Caching.KEYS_ONLY
+                             ? Caching.KEYS_ONLY
+                             : Caching.NONE;
+
         return new CFMetaData(parent.ksName, parent.indexColumnFamilyName(info), ColumnFamilyType.Standard, columnComparator, null)
                              .keyValidator(info.getValidator())
                              .readRepairChance(0.0)
                              .dcLocalReadRepairChance(0.0)
-                             .caching(Caching.NONE)
+                             .caching(indexCaching)
                              .reloadSecondaryIndexMetadata(parent);
     }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/16d4c6c3/src/java/org/apache/cassandra/db/index/keys/KeysIndex.java
--
diff --git a/src/java/org/apache/cassandra/db/index/keys/KeysIndex.java 
b/src/java/org/apache/cassandra/db/index/keys/KeysIndex.java
index fa663ca..3ee782b 100644
--- a/src/java/org/apache/cassandra/db/index/keys/KeysIndex.java
+++ b/src/java/org/apache/cassandra/db/index/keys/KeysIndex.java
@@ -61,6 +61,25 @@ public class KeysIndex extends PerColumnSecondaryIndex
                                              indexedCfMetadata.cfName,
                                              new LocalPartitioner(columnDef.getValidator()),
                                              indexedCfMetadata);
+
+        // enable and initialize row cache based on the parent's setting and the indexed column's cardinality
+        CFMetaData.Caching baseCaching = baseCfs.metadata.getCaching();
+        if (baseCaching == CFMetaData.Caching.ALL || baseCaching == CFMetaData.Caching.ROWS_ONLY)
+        {
+            /*
+             * The number of keys in the index CF equals the cardinality of the indexed column.
+             * If the index CF stores more keys than its average column count (i.e. a tall table),
+             * treat the column as high-cardinality.
+             */
+            double estimatedKeys = indexCfs.estimateKeys();
+            double averageColumnCount = indexCfs.getMeanColumns();
+            if (averageColumnCount > 0 && estimatedKeys / averageColumnCount > 1)
+            {
+                logger.debug("turning row cache on for " + indexCfs.getColumnFamilyName());
+                indexCfs.metadata.caching(baseCaching);
+                indexCfs.initRowCache();
+            }
+        }
     }
 
     public static AbstractType<?> indexComparator()
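The cardinality heuristic in this hunk can be read in isolation. A minimal sketch under my own (non-Cassandra) names, using the same threshold:

```java
public class IndexRowCacheHeuristic {
    // An index CF has one row per distinct indexed value, so a key count that
    // exceeds the mean row width suggests a high-cardinality (tall) index,
    // where row caching pays off. Mirrors the estimatedKeys/averageColumnCount
    // test in the patch above.
    static boolean shouldEnableRowCache(double estimatedKeys, double averageColumnCount) {
        return averageColumnCount > 0 && estimatedKeys / averageColumnCount > 1;
    }

    public static void main(String[] args) {
        System.out.println(shouldEnableRowCache(10000, 5)); // tall index: true
        System.out.println(shouldEnableRowCache(3, 50000)); // wide, low-cardinality index: false
    }
}
```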



[5/5] git commit: enable keys cache and rows cache on index CFs based on setting in data CF patch by yukim; reviewed by jbellis for CASSANDRA-4197

2012-05-09 Thread jbellis
enable keys cache and rows cache on index CFs based on setting in data CF
patch by yukim; reviewed by jbellis for CASSANDRA-4197


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/16d4c6c3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/16d4c6c3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/16d4c6c3

Branch: refs/heads/cassandra-1.1
Commit: 16d4c6c320cd98e3e7628a68bd6b4f20f9a365f4
Parents: 641346b
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed May 9 11:24:19 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed May 9 11:24:19 2012 -0500

--
 .../org/apache/cassandra/config/CFMetaData.java|8 +-
 .../apache/cassandra/db/index/keys/KeysIndex.java  |   19 +++
 2 files changed, 26 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/16d4c6c3/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java 
b/src/java/org/apache/cassandra/config/CFMetaData.java
index e903bd7..e36ea2d 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -326,11 +326,17 @@ public final class CFMetaData
 
     public static CFMetaData newIndexMetadata(CFMetaData parent, ColumnDefinition info, AbstractType<?> columnComparator)
     {
+        // Depending on the parent's cache setting, turn on the index CF's cache.
+        // Only the key cache is enabled here; later (in KeysIndex) the row cache is turned on depending on cardinality.
+        Caching indexCaching = parent.getCaching() == Caching.ALL || parent.getCaching() == Caching.KEYS_ONLY
+                             ? Caching.KEYS_ONLY
+                             : Caching.NONE;
+
         return new CFMetaData(parent.ksName, parent.indexColumnFamilyName(info), ColumnFamilyType.Standard, columnComparator, null)
                              .keyValidator(info.getValidator())
                              .readRepairChance(0.0)
                              .dcLocalReadRepairChance(0.0)
-                             .caching(Caching.NONE)
+                             .caching(indexCaching)
                              .reloadSecondaryIndexMetadata(parent);
     }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/16d4c6c3/src/java/org/apache/cassandra/db/index/keys/KeysIndex.java
--
diff --git a/src/java/org/apache/cassandra/db/index/keys/KeysIndex.java 
b/src/java/org/apache/cassandra/db/index/keys/KeysIndex.java
index fa663ca..3ee782b 100644
--- a/src/java/org/apache/cassandra/db/index/keys/KeysIndex.java
+++ b/src/java/org/apache/cassandra/db/index/keys/KeysIndex.java
@@ -61,6 +61,25 @@ public class KeysIndex extends PerColumnSecondaryIndex
                                              indexedCfMetadata.cfName,
                                              new LocalPartitioner(columnDef.getValidator()),
                                              indexedCfMetadata);
+
+        // enable and initialize row cache based on the parent's setting and the indexed column's cardinality
+        CFMetaData.Caching baseCaching = baseCfs.metadata.getCaching();
+        if (baseCaching == CFMetaData.Caching.ALL || baseCaching == CFMetaData.Caching.ROWS_ONLY)
+        {
+            /*
+             * The number of keys in the index CF equals the cardinality of the indexed column.
+             * If the index CF stores more keys than its average column count (i.e. a tall table),
+             * treat the column as high-cardinality.
+             */
+            double estimatedKeys = indexCfs.estimateKeys();
+            double averageColumnCount = indexCfs.getMeanColumns();
+            if (averageColumnCount > 0 && estimatedKeys / averageColumnCount > 1)
+            {
+                logger.debug("turning row cache on for " + indexCfs.getColumnFamilyName());
+                indexCfs.metadata.caching(baseCaching);
+                indexCfs.initRowCache();
+            }
+        }
     }
 
     public static AbstractType<?> indexComparator()



git commit: Version of trunk to 1.2 to help new folks

2012-05-09 Thread vijay
Updated Branches:
  refs/heads/trunk 1bb2f32d8 -> e6cebc89c


Version of trunk to 1.2 to help new folks


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e6cebc89
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e6cebc89
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e6cebc89

Branch: refs/heads/trunk
Commit: e6cebc89c0093c1b625dfa3e39f07f841bad0fe4
Parents: 1bb2f32
Author: Vijay Parthasarathy vijay2...@gmail.com
Authored: Wed May 9 09:29:23 2012 -0700
Committer: Vijay Parthasarathy vijay2...@gmail.com
Committed: Wed May 9 09:30:03 2012 -0700

--
 build.xml |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e6cebc89/build.xml
--
diff --git a/build.xml b/build.xml
index 7330862..f9603a2 100644
--- a/build.xml
+++ b/build.xml
@@ -25,7 +25,7 @@
     <property name="debuglevel" value="source,lines,vars"/>
 
     <!-- default version and SCM information -->
-    <property name="base.version" value="1.1.0"/>
+    <property name="base.version" value="1.2.0"/>
     <property name="scm.connection" value="scm:git://git.apache.org/cassandra.git"/>
     <property name="scm.developerConnection" value="scm:git://git.apache.org/cassandra.git"/>
     <property name="scm.url" value="http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=tree"/>



[jira] [Created] (CASSANDRA-4231) ConcurrentModificationException while writing mutations with ColumnFamilyRecordWriter

2012-05-09 Thread bert Passek (JIRA)
bert Passek created CASSANDRA-4231:
--

 Summary: ConcurrentModificationException while writing mutations 
with ColumnFamilyRecordWriter
 Key: CASSANDRA-4231
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4231
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Affects Versions: 1.0.10
 Environment: Debian Squeeze, one local cassandra node
Reporter: bert Passek


Hello,

we are using MapReduce jobs for writing data into Cassandra. Sometimes a job 
fails because of a ConcurrentModificationException.

java.io.IOException: java.util.ConcurrentModificationException
at 
org.apache.cassandra.hadoop.ColumnFamilyRecordWriter$RangeClient.run(ColumnFamilyRecordWriter.java:307)
Caused by: java.util.ConcurrentModificationException
at 
java.util.AbstractList$Itr.checkForComodification(AbstractList.java:372)
at java.util.AbstractList$Itr.next(AbstractList.java:343)
at org.apache.cassandra.thrift.SuperColumn.write(SuperColumn.java:440)
at 
org.apache.cassandra.thrift.ColumnOrSuperColumn.write(ColumnOrSuperColumn.java:561)
at org.apache.cassandra.thrift.Mutation.write(Mutation.java:384)
at 
org.apache.cassandra.thrift.Cassandra$batch_mutate_args.write(Cassandra.java:19021)
at 
org.apache.cassandra.thrift.Cassandra$Client.send_batch_mutate(Cassandra.java:1018)
at 
org.apache.cassandra.thrift.Cassandra$Client.batch_mutate(Cassandra.java:1008)
at 
org.apache.cassandra.hadoop.ColumnFamilyRecordWriter$RangeClient.run(ColumnFamilyRecordWriter.java:299)

We were using Cassandra 1.0.8 for quite a long time without such problems.

Regards Bert Passek
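The trace shows the standard fail-fast iterator behavior: if the mutation list backing a batch is appended to while Thrift serialization iterates it, the iterator's next() throws. A self-contained sketch of the same bug shape, with nothing Cassandra-specific:

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;

public class CmeDemo {
    public static void main(String[] args) {
        List<String> mutations = new ArrayList<>();
        mutations.add("col1");
        mutations.add("col2");
        boolean caught = false;
        try {
            // Structurally modifying the list while a for-each iterates it is
            // the same shape of bug as adding mutations during serialization.
            for (String m : mutations)
                mutations.add(m + "-dup");
        } catch (ConcurrentModificationException e) {
            caught = true;
        }
        System.out.println(caught); // prints true
    }
}
```

The usual fixes are to hand the writer a defensive copy, or to synchronize appends and serialization on the same lock.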

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-4230) Deleting a CF always produces an error and that CF remains in an unknown state

2012-05-09 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271548#comment-13271548
 ] 

Brandon Williams commented on CASSANDRA-4230:
-

Since snapshot dirs include a timestamp, perhaps we should just move along if 
it already exists.
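For reference, errno 17 is EEXIST: the hard link target already exists. A hedged sketch of the "move along" behavior using java.nio rather than the JNA call in the trace (class and method names are mine, not Cassandra's):

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TolerantHardLink {
    // EEXIST surfaces as FileAlreadyExistsException in java.nio; swallowing it
    // implements "move along if the snapshot link already exists".
    static void linkIfAbsent(Path link, Path existing) throws IOException {
        try {
            Files.createLink(link, existing);
        } catch (FileAlreadyExistsException e) {
            // snapshot already taken; nothing to do
        }
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("data", ".db");
        Path link = src.resolveSibling(src.getFileName() + ".snapshot");
        linkIfAbsent(link, src);
        linkIfAbsent(link, src); // second call is a no-op instead of failing
        System.out.println(Files.exists(link)); // prints true
    }
}
```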

 Deleting a CF always produces an error and that CF remains in an unknown state
 --

 Key: CASSANDRA-4230
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4230
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.0
 Environment: Debian Linux Squeeze with the cassandra debian package 
 from Apache.
Reporter: André Cruz
Assignee: Pavel Yaskevich

 From the CLI perspective:
 [default@Disco] drop column family client; 
 null
 org.apache.thrift.transport.TTransportException
   at 
 org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
   at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
   at 
 org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:129)
   at 
 org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
   at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
   at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
   at 
 org.apache.cassandra.thrift.Cassandra$Client.recv_system_drop_column_family(Cassandra.java:1222)
   at 
 org.apache.cassandra.thrift.Cassandra$Client.system_drop_column_family(Cassandra.java:1209)
   at 
 org.apache.cassandra.cli.CliClient.executeDelColumnFamily(CliClient.java:1301)
   at 
 org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:234)
   at 
 org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:219)
   at org.apache.cassandra.cli.CliMain.main(CliMain.java:346)
 Log:
  INFO [MigrationStage:1] 2012-05-09 11:25:35,686 ColumnFamilyStore.java (line 
 634) Enqueuing flush of Memtable-schema_columnfamilies@225225949(978/1222 
 serialized/live bytes, 21 ops)
  INFO [FlushWriter:3] 2012-05-09 11:25:35,687 Memtable.java (line 266) 
 Writing Memtable-schema_columnfamilies@225225949(978/1222 serialized/live 
 bytes, 21 ops)
  INFO [FlushWriter:3] 2012-05-09 11:25:35,748 Memtable.java (line 307) 
 Completed flushing 
 /var/lib/cassandra/data/system/schema_columnfamilies/system-schema_columnfamilies-hc-34-Data.db
  (1041 bytes)
  INFO [MigrationStage:1] 2012-05-09 11:25:35,749 ColumnFamilyStore.java (line 
 634) Enqueuing flush of Memtable-schema_columns@213209572(586/732 
 serialized/live bytes, 12 ops)
  INFO [FlushWriter:3] 2012-05-09 11:25:35,750 Memtable.java (line 266) 
 Writing Memtable-schema_columns@213209572(586/732 serialized/live bytes, 12 
 ops)
  INFO [FlushWriter:3] 2012-05-09 11:25:35,812 Memtable.java (line 307) 
 Completed flushing 
 /var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-28-Data.db
  (649 bytes)
  INFO [CompactionExecutor:20] 2012-05-09 11:25:35,814 CompactionTask.java 
 (line 114) Compacting 
 [SSTableReader(path='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-27-Data.db'),
  SSTableReader
 (path='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-25-Data.db'),
  
 SSTableReader(path='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-26-Data.db'),
  SSTableReader(path
 ='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-28-Data.db')]
  INFO [MigrationStage:1] 2012-05-09 11:25:35,918 ColumnFamilyStore.java (line 
 634) Enqueuing flush of Memtable-Client@864320066(372/465 serialized/live 
 bytes, 6 ops)
  INFO [FlushWriter:3] 2012-05-09 11:25:35,919 Memtable.java (line 266) 
 Writing Memtable-Client@864320066(372/465 serialized/live bytes, 6 ops)
  INFO [CompactionExecutor:20] 2012-05-09 11:25:35,945 CompactionTask.java 
 (line 225) Compacted to 
 [/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-29-Data.db,].
  22,486 to 20,621 (~91% of original) bytes for 2 keys at 0.150120MB/s.  Time: 131ms.
  INFO [FlushWriter:3] 2012-05-09 11:25:36,013 Memtable.java (line 307) 
 Completed flushing 
 /var/lib/cassandra/data/Disco/Client/Disco-Client-hc-5-Data.db (407 bytes)
 ERROR [MigrationStage:1] 2012-05-09 11:25:36,043 CLibrary.java (line 158) 
 Unable to create hard link
 com.sun.jna.LastErrorException: errno was 17
 at org.apache.cassandra.utils.CLibrary.link(Native Method)

[jira] [Updated] (CASSANDRA-4217) Easy access to column timestamps (and maybe ttl) during queries

2012-05-09 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4217:
--

Reviewer: xedin

How about updatetime?

 Easy access to column timestamps (and maybe ttl) during queries
 ---

 Key: CASSANDRA-4217
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4217
 Project: Cassandra
  Issue Type: Sub-task
  Components: API
Affects Versions: 1.1.0
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
  Labels: cql3
 Fix For: 1.1.1

 Attachments: 4217.txt


 It would be interesting to allow accessing the timestamp/ttl of a column 
 through some syntax like
 {noformat}
 SELECT key, value, timestamp(value) FROM foo;
 {noformat}
 and the same for ttl.
 I'll note that currently timestamp and ttl are returned in the resultset 
 because it includes the thrift Column object, but adding such syntax would make 
 our future protocol potentially simpler, as we wouldn't then have to care 
 about timestamps explicitly (and more compact in general, as we would only 
 return timestamps when asked).
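
As a rough illustration of the point above (a hypothetical mini-model, not Cassandra's actual resultset code; all names here are illustrative): a column already carries its write timestamp, so a timestamp(value) selector would only surface a field the server already has, and the protocol could omit it when not asked for.

```java
public class TimestampSelect {
    // Hypothetical stand-in for a thrift-style column: the timestamp
    // travels with the value whether or not the client wants it.
    static final class Column {
        final byte[] value;
        final long timestamp; // microseconds since epoch, by convention

        Column(byte[] value, long timestamp) {
            this.value = value;
            this.timestamp = timestamp;
        }
    }

    // Return only what the query asked for: the value itself,
    // or the write timestamp the column already carries.
    static Object select(Column c, boolean wantTimestamp) {
        return wantTimestamp ? (Object) c.timestamp : (Object) c.value;
    }

    public static void main(String[] args) {
        Column c = new Column("bar".getBytes(), 1336575935686000L);
        System.out.println(select(c, true)); // surfaces the stored timestamp
    }
}
```

Only returning the timestamp on request is what would make the future protocol more compact.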

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (CASSANDRA-4061) Decommission should take a token

2012-05-09 Thread Vijay (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271554#comment-13271554
 ] 

Vijay commented on CASSANDRA-4061:
--

+1

 Decommission should take a token
 

 Key: CASSANDRA-4061
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4061
 Project: Cassandra
  Issue Type: Bug
Reporter: Brandon Williams
Assignee: Brandon Williams
 Fix For: 1.0.11

 Attachments: 4061.txt


 Like removetoken, decom should take a token parameter.  This is a bit easier 
 said than done because it changes gossip, but I've seen enough people burned 
 by this (as I have been myself).  In the short term, though, *decommission still 
 accepts a token parameter*, which I thought we had fixed.





[jira] [Commented] (CASSANDRA-3706) Back up configuration files on startup

2012-05-09 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271555#comment-13271555
 ] 

Brandon Williams commented on CASSANDRA-3706:
-

Ok, well, the patch as-is has no way of finding the last_yaml since columns are 
keyed by digest.

 Back up configuration files on startup
 --

 Key: CASSANDRA-3706
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3706
 Project: Cassandra
  Issue Type: New Feature
  Components: Tools
Reporter: Jonathan Ellis
Assignee: Dave Brosius
Priority: Minor
  Labels: lhf
 Fix For: 1.1.1

 Attachments: save_configuration.diff, save_configuration_2.diff, 
 save_configuration_3.diff, save_configuration_4.diff, 
 save_configuration_6.diff, save_configuration_7.diff


 Snapshot can backup user data, but it's also nice to be able to have 
 known-good configurations saved as well in case of accidental snafus or even 
 catastrophic loss of a cluster.  If we check for changes to cassandra.yaml, 
 cassandra-env.sh, and maybe log4j-server.properties on startup, we can back 
 them up to a columnfamily that can then be handled by normal snapshot/backup 
 procedures.
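
A minimal sketch of the digest-keyed column scheme discussed in this ticket (hypothetical helper name; the actual patch may differ): each config file's contents are hashed and the hex digest becomes the column key, so an unchanged file maps to the same key and is not re-stored. This also makes Brandon's objection concrete: digests carry no ordering, so the latest yaml cannot be read off the keys alone.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ConfigBackupKey {
    // Column key for a backed-up config file: hex MD5 of its contents.
    // Identical contents -> identical key, so unchanged files dedupe;
    // but a digest has no temporal ordering, so "most recent" is not
    // recoverable from the keys themselves.
    static String keyFor(String contents) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            byte[] digest = md.digest(contents.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest)
                hex.append(String.format("%02x", b));
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError(e); // MD5 is always available on the JVM
        }
    }

    public static void main(String[] args) {
        System.out.println(keyFor("cluster_name: Test Cluster"));
    }
}
```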





[jira] [Issue Comment Edited] (CASSANDRA-3706) Back up configuration files on startup

2012-05-09 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271555#comment-13271555
 ] 

Brandon Williams edited comment on CASSANDRA-3706 at 5/9/12 4:32 PM:
-

Ok, well, the patch as-is has no way of finding the last_yaml since columns are 
keyed by digest.

  was (Author: brandon.williams):
Ok, well, the patch as-is has know of finding the last_yaml since columns 
are keyed by digest.
  
 Back up configuration files on startup
 --

 Key: CASSANDRA-3706
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3706
 Project: Cassandra
  Issue Type: New Feature
  Components: Tools
Reporter: Jonathan Ellis
Assignee: Dave Brosius
Priority: Minor
  Labels: lhf
 Fix For: 1.1.1

 Attachments: save_configuration.diff, save_configuration_2.diff, 
 save_configuration_3.diff, save_configuration_4.diff, 
 save_configuration_6.diff, save_configuration_7.diff


 Snapshot can backup user data, but it's also nice to be able to have 
 known-good configurations saved as well in case of accidental snafus or even 
 catastrophic loss of a cluster.  If we check for changes to cassandra.yaml, 
 cassandra-env.sh, and maybe log4j-server.properties on startup, we can back 
 them up to a columnfamily that can then be handled by normal snapshot/backup 
 procedures.





[jira] [Commented] (CASSANDRA-4230) Deleting a CF always produces an error and that CF remains in an unknown state

2012-05-09 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271558#comment-13271558
 ] 

Pavel Yaskevich commented on CASSANDRA-4230:


Yeah, we probably should do just that.

 Deleting a CF always produces an error and that CF remains in an unknown state
 --

 Key: CASSANDRA-4230
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4230
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.0
 Environment: Debian Linux Squeeze with the cassandra debian package 
 from Apache.
Reporter: André Cruz
Assignee: Pavel Yaskevich

 From the CLI perspective:
 [default@Disco] drop column family client; 
 null
 org.apache.thrift.transport.TTransportException
   at 
 org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
   at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
   at 
 org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:129)
   at 
 org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
   at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
   at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
   at 
 org.apache.cassandra.thrift.Cassandra$Client.recv_system_drop_column_family(Cassandra.java:1222)
   at 
 org.apache.cassandra.thrift.Cassandra$Client.system_drop_column_family(Cassandra.java:1209)
   at 
 org.apache.cassandra.cli.CliClient.executeDelColumnFamily(CliClient.java:1301)
   at 
 org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:234)
   at 
 org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:219)
   at org.apache.cassandra.cli.CliMain.main(CliMain.java:346)
 Log:
  INFO [MigrationStage:1] 2012-05-09 11:25:35,686 ColumnFamilyStore.java (line 
 634) Enqueuing flush of Memtable-schema_columnfamilies@225225949(978/1222 
 serialized/live bytes, 21 ops)
  INFO [FlushWriter:3] 2012-05-09 11:25:35,687 Memtable.java (line 266) 
 Writing Memtable-schema_columnfamilies@225225949(978/1222 serialized/live 
 bytes, 21 ops)
  INFO [FlushWriter:3] 2012-05-09 11:25:35,748 Memtable.java (line 307) 
 Completed flushing 
 /var/lib/cassandra/data/system/schema_columnfamilies/system-schema_columnfamilies-hc-34-Data.db
  (1041 bytes)
  INFO [MigrationStage:1] 2012-05-09 11:25:35,749 ColumnFamilyStore.java (line 
 634) Enqueuing flush of Memtable-schema_columns@213209572(586/732 
 serialized/live bytes, 12 ops)
  INFO [FlushWriter:3] 2012-05-09 11:25:35,750 Memtable.java (line 266) 
 Writing Memtable-schema_columns@213209572(586/732 serialized/live bytes, 12 
 ops)
  INFO [FlushWriter:3] 2012-05-09 11:25:35,812 Memtable.java (line 307) 
 Completed flushing 
 /var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-28-Data.db
  (649 bytes)
  INFO [CompactionExecutor:20] 2012-05-09 11:25:35,814 CompactionTask.java 
 (line 114) Compacting 
 [SSTableReader(path='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-27-Data.db'),
  SSTableReader
 (path='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-25-Data.db'),
  
 SSTableReader(path='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-26-Data.db'),
  SSTableReader(path
 ='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-28-Data.db')]
  INFO [MigrationStage:1] 2012-05-09 11:25:35,918 ColumnFamilyStore.java (line 
 634) Enqueuing flush of Memtable-Client@864320066(372/465 serialized/live 
 bytes, 6 ops)
  INFO [FlushWriter:3] 2012-05-09 11:25:35,919 Memtable.java (line 266) 
 Writing Memtable-Client@864320066(372/465 serialized/live bytes, 6 ops)
  INFO [CompactionExecutor:20] 2012-05-09 11:25:35,945 CompactionTask.java 
 (line 225) Compacted to 
 [/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-29-Data.db,].
  22,486 to 20,621 (~91% of original) bytes for 2 keys at 0.150120MB/s.  Time: 131ms.
  INFO [FlushWriter:3] 2012-05-09 11:25:36,013 Memtable.java (line 307) 
 Completed flushing 
 /var/lib/cassandra/data/Disco/Client/Disco-Client-hc-5-Data.db (407 bytes)
 ERROR [MigrationStage:1] 2012-05-09 11:25:36,043 CLibrary.java (line 158) 
 Unable to create hard link
 com.sun.jna.LastErrorException: errno was 17
 at org.apache.cassandra.utils.CLibrary.link(Native Method)
 at 
 

[1/6] git commit: Merge branch 'cassandra-1.1' into trunk

2012-05-09 Thread brandonwilliams
Updated Branches:
  refs/heads/cassandra-1.0 b2ca7f821 -> 452619cd0
  refs/heads/cassandra-1.1 fd66ccf21 -> 861f1f3a9
  refs/heads/trunk e6cebc89c -> ca104bac3


Merge branch 'cassandra-1.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ca104bac
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ca104bac
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ca104bac

Branch: refs/heads/trunk
Commit: ca104bac3b21a13c6d30b814288f952f8c8d511c
Parents: e6cebc8 861f1f3
Author: Brandon Williams brandonwilli...@apache.org
Authored: Wed May 9 11:37:32 2012 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Wed May 9 11:37:32 2012 -0500

--
 src/java/org/apache/cassandra/tools/NodeCmd.java |   12 ++--
 1 files changed, 10 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ca104bac/src/java/org/apache/cassandra/tools/NodeCmd.java
--
diff --cc src/java/org/apache/cassandra/tools/NodeCmd.java
index 6a4632b,451bfea..c7befea
--- a/src/java/org/apache/cassandra/tools/NodeCmd.java
+++ b/src/java/org/apache/cassandra/tools/NodeCmd.java
@@@ -140,12 -142,10 +140,12 @@@ public class NodeCm
  addCmdHelp(header, "join", "Join the ring");
  addCmdHelp(header, "info", "Print node informations (uptime, load, ...)");
  addCmdHelp(header, "cfstats", "Print statistics on column families");
 +addCmdHelp(header, "ids", "Print list of unique host IDs");
  addCmdHelp(header, "version", "Print cassandra version");
  addCmdHelp(header, "tpstats", "Print usage statistics of thread pools");
 +addCmdHelp(header, "proxyhistograms", "Print statistic histograms for network operations");
  addCmdHelp(header, "drain", "Drain the node (stop accepting writes and flush all column families)");
- addCmdHelp(header, "decommission", "Decommission the node");
+ addCmdHelp(header, "decommission", "Decommission the *node I am connecting to*");
  addCmdHelp(header, "compactionstats", "Print statistics on compactions");
  addCmdHelp(header, "disablegossip", "Disable gossip (effectively marking the node dead)");
  addCmdHelp(header, "enablegossip", "Reenable gossip");
@@@ -753,8 -716,16 +752,17 @@@
  case ENABLETHRIFT: probe.startThriftServer(); break;
  case STATUSTHRIFT: nodeCmd.printIsThriftServerRunning(System.out); break;
  case RESETLOCALSCHEMA: probe.resetLocalSchema(); break;
 +case IDS : nodeCmd.printHostIds(System.out); break;
 
+ case DECOMMISSION :
+ if (arguments.length > 0)
+ {
+ System.err.println("Decommission will decommission the node you are connected to and does not take arguments!");
+ System.exit(1);
+ }
+ probe.decommission();
+ break;
+ 
  case DRAIN :
  try { probe.drain(); }
  catch (ExecutionException ee) { err(ee, "Error occured during flushing"); }



[2/6] git commit: Merge from 1.0

2012-05-09 Thread brandonwilliams
Merge from 1.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/861f1f3a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/861f1f3a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/861f1f3a

Branch: refs/heads/cassandra-1.1
Commit: 861f1f3a99854af42b2f7c91f2e16bcd1b0c094b
Parents: fd66ccf 452619c
Author: Brandon Williams brandonwilli...@apache.org
Authored: Wed May 9 11:36:52 2012 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Wed May 9 11:36:52 2012 -0500

--
 src/java/org/apache/cassandra/tools/NodeCmd.java |   12 ++--
 1 files changed, 10 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/861f1f3a/src/java/org/apache/cassandra/tools/NodeCmd.java
--
diff --cc src/java/org/apache/cassandra/tools/NodeCmd.java
index e33f698,c369771..451bfea
--- a/src/java/org/apache/cassandra/tools/NodeCmd.java
+++ b/src/java/org/apache/cassandra/tools/NodeCmd.java
@@@ -700,14 -647,9 +700,13 @@@ public class NodeCm
  
  switch (command)
  {
 -case RING: nodeCmd.printRing(System.out); break;
 +case RING :
 +if (arguments.length > 0) { nodeCmd.printRing(System.out, arguments[0]); }
 +else { nodeCmd.printRing(System.out, null); };
 +break;
 +
  case INFO: nodeCmd.printInfo(System.out); break;
  case CFSTATS : nodeCmd.printColumnFamilyStats(System.out); break;
- case DECOMMISSION: probe.decommission(); break;
  case TPSTATS : nodeCmd.printThreadPoolStats(System.out); break;
  case VERSION : nodeCmd.printReleaseVersion(System.out); break;
  case COMPACTIONSTATS : nodeCmd.printCompactionStats(System.out); break;
@@@ -716,8 -658,16 +715,17 @@@
  case DISABLETHRIFT   : probe.stopThriftServer(); break;
  case ENABLETHRIFT: probe.startThriftServer(); break;
  case STATUSTHRIFT: nodeCmd.printIsThriftServerRunning(System.out); break;
 +case RESETLOCALSCHEMA: probe.resetLocalSchema(); break;
 
+ case DECOMMISSION :
+ if (arguments.length > 0)
+ {
+ System.err.println("Decommission will decommission the node you are connected to and does not take arguments!");
+ System.exit(1);
+ }
+ probe.decommission();
+ break;
+ 
  case DRAIN :
  try { probe.drain(); }
  catch (ExecutionException ee) { err(ee, "Error occured during flushing"); }



[3/6] git commit: Merge from 1.0

2012-05-09 Thread brandonwilliams
Merge from 1.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/861f1f3a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/861f1f3a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/861f1f3a

Branch: refs/heads/trunk
Commit: 861f1f3a99854af42b2f7c91f2e16bcd1b0c094b
Parents: fd66ccf 452619c
Author: Brandon Williams brandonwilli...@apache.org
Authored: Wed May 9 11:36:52 2012 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Wed May 9 11:36:52 2012 -0500

--
 src/java/org/apache/cassandra/tools/NodeCmd.java |   12 ++--
 1 files changed, 10 insertions(+), 2 deletions(-)
--





[4/6] git commit: Do not allow 'nodetool decommission' to take parameters. Patch by brandonwilliams, reviewed by vijay for CASSANDRA-4160

2012-05-09 Thread brandonwilliams
Do not allow 'nodetool decommission' to take parameters.
Patch by brandonwilliams, reviewed by vijay for CASSANDRA-4160


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/452619cd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/452619cd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/452619cd

Branch: refs/heads/trunk
Commit: 452619cd07ca7985bff6f6fa8fe6f9764ea7fa4a
Parents: b2ca7f8
Author: Brandon Williams brandonwilli...@apache.org
Authored: Wed May 9 11:32:50 2012 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Wed May 9 11:33:42 2012 -0500

--
 src/java/org/apache/cassandra/tools/NodeCmd.java |   14 +++---
 1 files changed, 11 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/452619cd/src/java/org/apache/cassandra/tools/NodeCmd.java
--
diff --git a/src/java/org/apache/cassandra/tools/NodeCmd.java 
b/src/java/org/apache/cassandra/tools/NodeCmd.java
index 4452ca7..c369771 100644
--- a/src/java/org/apache/cassandra/tools/NodeCmd.java
+++ b/src/java/org/apache/cassandra/tools/NodeCmd.java
@@ -134,7 +134,7 @@ public class NodeCmd
 addCmdHelp(header, "version", "Print cassandra version");
 addCmdHelp(header, "tpstats", "Print usage statistics of thread pools");
 addCmdHelp(header, "drain", "Drain the node (stop accepting writes and flush all column families)");
-addCmdHelp(header, "decommission", "Decommission the node");
+addCmdHelp(header, "decommission", "Decommission the *node I am connecting to*");
 addCmdHelp(header, "compactionstats", "Print statistics on compactions");
 addCmdHelp(header, "disablegossip", "Disable gossip (effectively marking the node dead)");
 addCmdHelp(header, "enablegossip", "Reenable gossip");
@@ -650,7 +650,6 @@ public class NodeCmd
 case RING: nodeCmd.printRing(System.out); break;
 case INFO: nodeCmd.printInfo(System.out); break;
 case CFSTATS : nodeCmd.printColumnFamilyStats(System.out); break;
-case DECOMMISSION: probe.decommission(); break;
 case TPSTATS : nodeCmd.printThreadPoolStats(System.out); break;
 case VERSION : nodeCmd.printReleaseVersion(System.out); break;
 case COMPACTIONSTATS : nodeCmd.printCompactionStats(System.out); break;
@@ -659,7 +658,16 @@
 case DISABLETHRIFT   : probe.stopThriftServer(); break;
 case ENABLETHRIFT: probe.startThriftServer(); break;
 case STATUSTHRIFT: nodeCmd.printIsThriftServerRunning(System.out); break;
-
+
+case DECOMMISSION :
+if (arguments.length > 0)
+{
+System.err.println("Decommission will decommission the node you are connected to and does not take arguments!");
+System.exit(1);
+}
+probe.decommission();
+break;
+
 case DRAIN :
 try { probe.drain(); }
 catch (ExecutionException ee) { err(ee, "Error occured during flushing"); }



[6/6] git commit: Do not allow 'nodetool decommission' to take parameters. Patch by brandonwilliams, reviewed by vijay for CASSANDRA-4160

2012-05-09 Thread brandonwilliams
Do not allow 'nodetool decommission' to take parameters.
Patch by brandonwilliams, reviewed by vijay for CASSANDRA-4160


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/452619cd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/452619cd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/452619cd

Branch: refs/heads/cassandra-1.0
Commit: 452619cd07ca7985bff6f6fa8fe6f9764ea7fa4a
Parents: b2ca7f8
Author: Brandon Williams brandonwilli...@apache.org
Authored: Wed May 9 11:32:50 2012 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Wed May 9 11:33:42 2012 -0500

--
 src/java/org/apache/cassandra/tools/NodeCmd.java |   14 +++---
 1 files changed, 11 insertions(+), 3 deletions(-)
--





[5/6] git commit: Do not allow 'nodetool decommission' to take parameters. Patch by brandonwilliams, reviewed by vijay for CASSANDRA-4160

2012-05-09 Thread brandonwilliams
Do not allow 'nodetool decommission' to take parameters.
Patch by brandonwilliams, reviewed by vijay for CASSANDRA-4160


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/452619cd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/452619cd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/452619cd

Branch: refs/heads/cassandra-1.1
Commit: 452619cd07ca7985bff6f6fa8fe6f9764ea7fa4a
Parents: b2ca7f8
Author: Brandon Williams brandonwilli...@apache.org
Authored: Wed May 9 11:32:50 2012 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Wed May 9 11:33:42 2012 -0500

--
 src/java/org/apache/cassandra/tools/NodeCmd.java |   14 +++---
 1 files changed, 11 insertions(+), 3 deletions(-)
--





[jira] [Commented] (CASSANDRA-4217) Easy access to column timestamps (and maybe ttl) during queries

2012-05-09 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271562#comment-13271562
 ] 

Sylvain Lebresne commented on CASSANDRA-4217:
-

One slight advantage of writetime over either updatetime or inserttime is that 
it doesn't suggest that it's the time of update or insert in the sense of 
SQL. But that's a minor detail I guess, and again I'm fine going with whatever 
others prefer, just wanted to mention it.

 Easy access to column timestamps (and maybe ttl) during queries
 ---

 Key: CASSANDRA-4217
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4217
 Project: Cassandra
  Issue Type: Sub-task
  Components: API
Affects Versions: 1.1.0
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
  Labels: cql3
 Fix For: 1.1.1

 Attachments: 4217.txt


 It would be interesting to allow accessing the timestamp/ttl of a column 
 through some syntax like
 {noformat}
 SELECT key, value, timestamp(value) FROM foo;
 {noformat}
 and the same for ttl.
 I'll note that currently timestamp and ttl are returned in the resultset 
 because it includes the thrift Column object, but adding such syntax would make 
 our future protocol potentially simpler, as we wouldn't then have to care 
 about timestamps explicitly (and more compact in general, as we would only 
 return timestamps when asked).





[jira] [Commented] (CASSANDRA-4223) Non Unique Streaming session ID's

2012-05-09 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271563#comment-13271563
 ] 

Yuki Morishita commented on CASSANDRA-4223:
---

After looking closer at the code, I can say that AtomicLong is enough here, as 
Aaron's patch suggests.
There is no need to generate a cluster-wide unique id; it just needs to be unique 
between two nodes (source, dest).
I thought the pair of host and long value was shared among nodes, but that's not 
true. My apologies.

So +1 to the patch attached.
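
The AtomicLong approach can be sketched as follows (a hypothetical standalone class, not the attached patch itself): since a session id only needs to be unique between a (source, dest) pair for the life of the process, a per-process monotonic counter suffices, and unlike a coarse clock reading it can never hand the same value to two concurrent callers.

```java
import java.util.concurrent.atomic.AtomicLong;

public class StreamSessionIds {
    // Per-process monotonic counter: incrementAndGet is atomic, so two
    // concurrent calls always observe distinct values, unlike two reads
    // of a clock that can return the same instant.
    private static final AtomicLong counter = new AtomicLong(0);

    static long next() {
        return counter.incrementAndGet();
    }

    public static void main(String[] args) {
        System.out.println(next()); // 1
        System.out.println(next()); // 2
    }
}
```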

 Non Unique Streaming session ID's
 -

 Key: CASSANDRA-4223
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4223
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Ubuntu 10.04.2 LTS
 java version 1.6.0_24
 Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
 Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)
 Bare metal servers from 
 https://www.stormondemand.com/servers/baremetal.html 
 The servers run on a custom hypervisor.
  
Reporter: Aaron Morton
Assignee: Aaron Morton
  Labels: datastax_qa
 Fix For: 1.0.11, 1.1.1

 Attachments: 4223_counter_session_id.diff, NanoTest.java, fmm 
 streaming bug.txt


 I have observed repair processes failing due to duplicate Streaming session 
 ID's. In this installation it is preventing rebalance from completing. I 
 believe it has also prevented repair from completing in the past. 
 The attached streaming-logs.txt file contains log messages and an explanation 
 of what was happening during a repair operation. It has the evidence for 
 duplicate session IDs.
 The duplicate session id's were generated on the repairing node and sent to 
 the streaming node. The streaming source replaced the first session with the 
 second which resulted in both sessions failing when the first FILE_COMPLETE 
 message was received. 
 The errors were:
 {code:java}
 DEBUG [MiscStage:1] 2012-05-03 21:40:33,997 StreamReplyVerbHandler.java (line 
 47) Received StreamReply StreamReply(sessionId=26132848816442266, 
 file='/var/lib/cassandra/data/FMM_Studio/PartsData-hc-1-Data.db', 
 action=FILE_FINISHED)
 ERROR [MiscStage:1] 2012-05-03 21:40:34,027 AbstractCassandraDaemon.java 
 (line 139) Fatal exception in thread Thread[MiscStage:1,5,main]
 java.lang.IllegalStateException: target reports current file is 
 /var/lib/cassandra/data/FMM_Studio/PartsData-hc-1-Data.db but is null
 at 
 org.apache.cassandra.streaming.StreamOutSession.validateCurrentFile(StreamOutSession.java:195)
 at 
 org.apache.cassandra.streaming.StreamReplyVerbHandler.doVerb(StreamReplyVerbHandler.java:58)
 at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
 at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown 
 Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)
 {code}
 and
 {code:java}
 DEBUG [MiscStage:2] 2012-05-03 21:40:36,497 StreamReplyVerbHandler.java (line 
 47) Received StreamReply StreamReply(sessionId=26132848816442266, 
 file='/var/lib/cassandra/data/OpsCenter/rollups7200-hc-3-Data.db', 
 action=FILE_FINISHED)
 ERROR [MiscStage:2] 2012-05-03 21:40:36,497 AbstractCassandraDaemon.java 
 (line 139) Fatal exception in thread Thread[MiscStage:2,5,main]
 java.lang.IllegalStateException: target reports current file is 
 /var/lib/cassandra/data/OpsCenter/rollups7200-hc-3-Data.db but is null
 at 
 org.apache.cassandra.streaming.StreamOutSession.validateCurrentFile(StreamOutSession.java:195)
 at 
 org.apache.cassandra.streaming.StreamReplyVerbHandler.doVerb(StreamReplyVerbHandler.java:58)
 at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
 at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown 
 Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)
 {code}
 I think this is because System.nanoTime() is used for the session ID when 
 creating the StreamInSession objects (driven from 
 StorageService.requestRanges()).
 From the documentation 
 (http://docs.oracle.com/javase/6/docs/api/java/lang/System.html#nanoTime()) 
 {quote}
 This method provides nanosecond precision, but not necessarily nanosecond 
 accuracy. No guarantees are made about how frequently values change. 
 {quote}
 Also some info here on clocks and timers 
 https://blogs.oracle.com/dholmes/entry/inside_the_hotspot_vm_clocks
 The hypervisor may be at fault here. But it seems like we cannot rely on 
 successive calls to nanoTime() to return different values. 
 To avoid message/interface changes on the StreamHeader it would be good to 
 

[jira] [Resolved] (CASSANDRA-4061) Decommission should take a token

2012-05-09 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams resolved CASSANDRA-4061.
-

   Resolution: Fixed
Fix Version/s: 1.1.1

Committed.

 Decommission should take a token
 

 Key: CASSANDRA-4061
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4061
 Project: Cassandra
  Issue Type: Bug
Reporter: Brandon Williams
Assignee: Brandon Williams
 Fix For: 1.0.11, 1.1.1

 Attachments: 4061.txt


 Like removetoken, decom should take a token parameter.  This is a bit easier 
 said than done because it changes gossip, but I've seen enough people burned 
 by this (as I have been myself).  In the short term, though, *decommission 
 still accepts a token parameter*, which I thought we had fixed.





[jira] [Commented] (CASSANDRA-3706) Back up configuration files on startup

2012-05-09 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271567#comment-13271567
 ] 

Jonathan Ellis commented on CASSANDRA-3706:
---

Okay, let me spell it out a bit more:

{code}
last_yaml = None
for yaml in query("SELECT yaml FROM backups"):
    if last_yaml is not None and yaml != last_yaml:
        raise Exception('cluster yaml out of sync')
    last_yaml = yaml
{code}

... that said, it would make sense to me to use a data model like this:

{code}
CREATE TABLE config_backups (
  peer inet,
  backed_up_at timestamp,
  cassandra_yaml text,
  cassandra_env text,
  PRIMARY KEY (peer, backed_up_at)
)
{code}

which would allow you to query history for a given node (at the cost of making 
my example loop for comparing current versions more complex, since CQL isn't 
powerful enough to say "give me the most recent version for each node" -- 
although you could do that with Hive).
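
For example (a sketch against the proposed table above; the peer address is made 
up), per-node history becomes a simple slice query, and "latest for one known 
node" stays easy even though "latest for every node" does not:

{code}
-- most recent backup for a single, known peer
SELECT backed_up_at, cassandra_yaml
  FROM config_backups
 WHERE peer = '10.0.0.1'
 ORDER BY backed_up_at DESC
 LIMIT 1;
{code}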

(could this use our new-fangled gossip node id instead of the ip?)

 Back up configuration files on startup
 --

 Key: CASSANDRA-3706
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3706
 Project: Cassandra
  Issue Type: New Feature
  Components: Tools
Reporter: Jonathan Ellis
Assignee: Dave Brosius
Priority: Minor
  Labels: lhf
 Fix For: 1.1.1

 Attachments: save_configuration.diff, save_configuration_2.diff, 
 save_configuration_3.diff, save_configuration_4.diff, 
 save_configuration_6.diff, save_configuration_7.diff


 Snapshot can backup user data, but it's also nice to be able to have 
 known-good configurations saved as well in case of accidental snafus or even 
 catastrophic loss of a cluster.  If we check for changes to cassandra.yaml, 
 cassandra-env.sh, and maybe log4j-server.properties on startup, we can back 
 them up to a columnfamily that can then be handled by normal snapshot/backup 
 procedures.





[jira] [Commented] (CASSANDRA-4230) Deleting a CF always produces an error and that CF remains in an unknown state

2012-05-09 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271570#comment-13271570
 ] 

Jonathan Ellis commented on CASSANDRA-4230:
---

But why would it attempt the same snapshot twice?  Sounds like there's a real 
bug here.

 Deleting a CF always produces an error and that CF remains in an unknown state
 --

 Key: CASSANDRA-4230
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4230
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.0
 Environment: Debian Linux Squeeze with the cassandra debian package 
 from Apache.
Reporter: André Cruz
Assignee: Pavel Yaskevich

 From the CLI perspective:
 [default@Disco] drop column family client; 
 null
 org.apache.thrift.transport.TTransportException
   at 
 org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
   at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
   at 
 org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:129)
   at 
 org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
   at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
   at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
   at 
 org.apache.cassandra.thrift.Cassandra$Client.recv_system_drop_column_family(Cassandra.java:1222)
   at 
 org.apache.cassandra.thrift.Cassandra$Client.system_drop_column_family(Cassandra.java:1209)
   at 
 org.apache.cassandra.cli.CliClient.executeDelColumnFamily(CliClient.java:1301)
   at 
 org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:234)
   at 
 org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:219)
   at org.apache.cassandra.cli.CliMain.main(CliMain.java:346)
 Log:
  INFO [MigrationStage:1] 2012-05-09 11:25:35,686 ColumnFamilyStore.java (line 
 634) Enqueuing flush of Memtable-schema_columnfamilies@225225949(978/1222 
 serialized/live bytes, 21 ops)
  INFO [FlushWriter:3] 2012-05-09 11:25:35,687 Memtable.java (line 266) 
 Writing Memtable-schema_columnfamilies@225225949(978/1222 serialized/live 
 bytes, 21 ops)
  INFO [FlushWriter:3] 2012-05-09 11:25:35,748 Memtable.java (line 307) 
 Completed flushing 
 /var/lib/cassandra/data/system/schema_columnfamilies/system-schema_columnfamilies-hc-34-Data.db
  (1041 bytes)
  INFO [MigrationStage:1] 2012-05-09 11:25:35,749 ColumnFamilyStore.java (line 
 634) Enqueuing flush of Memtable-schema_columns@213209572(586/732 
 serialized/live bytes, 12 ops)
  INFO [FlushWriter:3] 2012-05-09 11:25:35,750 Memtable.java (line 266) 
 Writing Memtable-schema_columns@213209572(586/732 serialized/live bytes, 12 
 ops)
  INFO [FlushWriter:3] 2012-05-09 11:25:35,812 Memtable.java (line 307) 
 Completed flushing 
 /var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-28-Data.db
  (649 bytes)
  INFO [CompactionExecutor:20] 2012-05-09 11:25:35,814 CompactionTask.java 
 (line 114) Compacting 
 [SSTableReader(path='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-27-Data.db'),
  SSTableReader
 (path='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-25-Data.db'),
  
 SSTableReader(path='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-26-Data.db'),
  SSTableReader(path
 ='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-28-Data.db')]
  INFO [MigrationStage:1] 2012-05-09 11:25:35,918 ColumnFamilyStore.java (line 
 634) Enqueuing flush of Memtable-Client@864320066(372/465 serialized/live 
 bytes, 6 ops)
  INFO [FlushWriter:3] 2012-05-09 11:25:35,919 Memtable.java (line 266) 
 Writing Memtable-Client@864320066(372/465 serialized/live bytes, 6 ops)
  INFO [CompactionExecutor:20] 2012-05-09 11:25:35,945 CompactionTask.java 
 (line 225) Compacted to 
 [/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-29-Data.db,].
   22,486 to 20,621 (~91% of original) bytes for 2 keys at 0.150120MB/s.  Time: 131ms.
  INFO [FlushWriter:3] 2012-05-09 11:25:36,013 Memtable.java (line 307) 
 Completed flushing 
 /var/lib/cassandra/data/Disco/Client/Disco-Client-hc-5-Data.db (407 bytes)
 ERROR [MigrationStage:1] 2012-05-09 11:25:36,043 CLibrary.java (line 158) 
 Unable to create hard link
 com.sun.jna.LastErrorException: errno was 17
 at org.apache.cassandra.utils.CLibrary.link(Native Method)
  

[jira] [Commented] (CASSANDRA-4223) Non Unique Streaming session ID's

2012-05-09 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271577#comment-13271577
 ] 

Jonathan Ellis commented on CASSANDRA-4223:
---

StreamIn.requestRanges will create a session ID using the *target* node's id 
generator (whatever it is) but in a Pair with the *source* node's IP.

Conversely, when a Stream is originated on the source node, it creates a 
session id using the *source* node's id generator, and sends that over in the 
stream header, where IncomingStreamReader picks it up and sticks it in the 
Session map.

So it looks to me like Yuki was right the first time ... ?


[jira] [Commented] (CASSANDRA-3706) Back up configuration files on startup

2012-05-09 Thread Dave Brosius (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271583#comment-13271583
 ] 

Dave Brosius commented on CASSANDRA-3706:
-

Currently the rows are keyed by IP address and have columns

YAMLDIGEST, YAMLCONTENTS, LOG4JDIGEST, LOG4JCONTENTS, ENVDIGEST, ENVCONTENTS

The digest columns are so you don't constantly write duplicate 'blobs' into the 
database if nothing has changed. Perhaps that's not much of a concern, and I may 
be biased by the fact that I constantly start and stop instances for debugging.

In any case, I can change it to whatever you like.
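
The duplicate-suppression described above boils down to hashing the file 
contents before writing. A minimal sketch (hypothetical names, not the actual 
patch): the backup row is rewritten only when this value differs from the one 
already stored.

{code:java}
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch only: hypothetical names, not the actual patch code.
public class ConfigDigest
{
    // Hex digest of a config file's contents; compare against the stored
    // *DIGEST column to decide whether the *CONTENTS blob needs rewriting.
    public static String digest(String contents)
    {
        try
        {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] hash = md.digest(contents.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : hash)
                sb.append(String.format("%02x", b));
            return sb.toString();
        }
        catch (NoSuchAlgorithmException e)
        {
            throw new AssertionError(e); // SHA-256 is always available
        }
    }
}
{code}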








[jira] [Commented] (CASSANDRA-3706) Back up configuration files on startup

2012-05-09 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271586#comment-13271586
 ] 

Brandon Williams commented on CASSANDRA-3706:
-

bq. could this use our new-fangled gossip node id instead of the ip?

For 1.2, that would be best, but it complicates backward compatibility if we put 
this in 1.1.






[jira] [Commented] (CASSANDRA-3706) Back up configuration files on startup

2012-05-09 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271587#comment-13271587
 ] 

Brandon Williams commented on CASSANDRA-3706:
-

bq. I may be biased by the fact that I constantly start and stop instances for 
debugging

An option to disable this would be nice, as personally I would never use it.






[jira] [Commented] (CASSANDRA-4230) Deleting a CF always produces an error and that CF remains in an unknown state

2012-05-09 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271589#comment-13271589
 ] 

Brandon Williams commented on CASSANDRA-4230:
-

snapshot on compaction is one way it can happen.


[jira] [Commented] (CASSANDRA-3706) Back up configuration files on startup

2012-05-09 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271591#comment-13271591
 ] 

Jonathan Ellis commented on CASSANDRA-3706:
---

You can compare the raw text just as easily as the digest... it's a bit more 
CPU, but still negligible, and the same amount of code.

I don't think it's worth the trouble to create an option to disable it; if you 
don't need it, just ignore it.






[jira] [Commented] (CASSANDRA-4230) Deleting a CF always produces an error and that CF remains in an unknown state

2012-05-09 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271596#comment-13271596
 ] 

Pavel Yaskevich commented on CASSANDRA-4230:


Yeah, it's only called in snapshotWithoutFlush, so trying to drop a CF while a 
compaction is snapshotting files would be the only way to lead to such behavior. 
I think what we need to do here is stop the compaction and run the drop after 
that...


[jira] [Commented] (CASSANDRA-4223) Non Unique Streaming session ID's

2012-05-09 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271602#comment-13271602
 ] 

Yuki Morishita commented on CASSANDRA-4223:
---

The name *source* is probably misleading here. I found that it's actually the 
*source* of the data you're requesting.
When node A initiates streaming with node B via StreamIn.requestRanges, it 
creates a StreamInSession with the id pair, say, (B, 1) and sends a request 
with session id 1.
Node B receives the request and creates a StreamOutSession from the sender's 
IP and the received session id 1, ending up with a StreamOutSession of id (A, 1).

 A                           B
 StreamInSession (B, 1)  --(session id 1 from A)-->   StreamOutSession (A, 1)
                         <--(session id 1 from B)--

 Non Unique Streaming session ID's
 -

 Key: CASSANDRA-4223
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4223
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Ubuntu 10.04.2 LTS
 java version 1.6.0_24
 Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
 Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)
 Bare metal servers from 
 https://www.stormondemand.com/servers/baremetal.html 
 The servers run on a custom hypervisor.
  
Reporter: Aaron Morton
Assignee: Aaron Morton
  Labels: datastax_qa
 Fix For: 1.0.11, 1.1.1

 Attachments: 4223_counter_session_id.diff, NanoTest.java, fmm 
 streaming bug.txt


 I have observed repair processes failing due to duplicate streaming session 
 IDs. In this installation it is preventing rebalance from completing, and I 
 believe it has also prevented repair from completing in the past. 
 The attached streaming-logs.txt file contains log messages and an explanation 
 of what was happening during a repair operation; it has the evidence for the 
 duplicate session IDs.
 The duplicate session IDs were generated on the repairing node and sent to 
 the streaming node. The streaming source replaced the first session with the 
 second, which resulted in both sessions failing when the first FILE_COMPLETE 
 message was received. 
 The errors were:
 {code:java}
 DEBUG [MiscStage:1] 2012-05-03 21:40:33,997 StreamReplyVerbHandler.java (line 
 47) Received StreamReply StreamReply(sessionId=26132848816442266, 
 file='/var/lib/cassandra/data/FMM_Studio/PartsData-hc-1-Data.db', 
 action=FILE_FINISHED)
 ERROR [MiscStage:1] 2012-05-03 21:40:34,027 AbstractCassandraDaemon.java 
 (line 139) Fatal exception in thread Thread[MiscStage:1,5,main]
 java.lang.IllegalStateException: target reports current file is 
 /var/lib/cassandra/data/FMM_Studio/PartsData-hc-1-Data.db but is null
 at 
 org.apache.cassandra.streaming.StreamOutSession.validateCurrentFile(StreamOutSession.java:195)
 at 
 org.apache.cassandra.streaming.StreamReplyVerbHandler.doVerb(StreamReplyVerbHandler.java:58)
 at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
 at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown 
 Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)
 {code}
 and
 {code:java}
 DEBUG [MiscStage:2] 2012-05-03 21:40:36,497 StreamReplyVerbHandler.java (line 
 47) Received StreamReply StreamReply(sessionId=26132848816442266, 
 file='/var/lib/cassandra/data/OpsCenter/rollups7200-hc-3-Data.db', 
 action=FILE_FINISHED)
 ERROR [MiscStage:2] 2012-05-03 21:40:36,497 AbstractCassandraDaemon.java 
 (line 139) Fatal exception in thread Thread[MiscStage:2,5,main]
 java.lang.IllegalStateException: target reports current file is 
 /var/lib/cassandra/data/OpsCenter/rollups7200-hc-3-Data.db but is null
 at 
 org.apache.cassandra.streaming.StreamOutSession.validateCurrentFile(StreamOutSession.java:195)
 at 
 org.apache.cassandra.streaming.StreamReplyVerbHandler.doVerb(StreamReplyVerbHandler.java:58)
 at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
 at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown 
 Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)
 {code}
 I think this is because System.nanoTime() is used for the session ID when 
 creating the StreamInSession objects (driven from 
 StorageService.requestRanges()).
 From the documentation 
 (http://docs.oracle.com/javase/6/docs/api/java/lang/System.html#nanoTime()) 
 {quote}
 This method provides nanosecond precision, but not necessarily nanosecond 
 accuracy. No guarantees are made about how frequently values change. 
 {quote}
 Also some 

[jira] [Commented] (CASSANDRA-4223) Non Unique Streaming session ID's

2012-05-09 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271618#comment-13271618
 ] 

Jonathan Ellis commented on CASSANDRA-4223:
---

Right, so the problem is when A creates a {{B, 1}} Session in StreamIn, while 
B simultaneously creates a {{B, 1}} for an unrelated 
StreamOut.transferRanges (move or unbootstrap).  
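
A minimal, self-contained sketch of the clobbering the report describes ("the streaming source replaced the first session with the second"): if both sessions register under the same (host, id) key, the second registration silently replaces the first. The registry and names here are illustrative only, not Cassandra's actual classes.

```java
import java.util.HashMap;
import java.util.Map;

public class SessionClobberDemo {
    public static void main(String[] args) {
        // Hypothetical session registry keyed by (peer host, session id),
        // as the comments above describe; names are illustrative only.
        Map<String, String> sessions = new HashMap<>();

        // One side registers a stream session under (B, 1)...
        sessions.put("B:1", "session-1 (repair)");

        // ...and an unrelated session reuses the same id pair.
        // Map.put returns the value it displaced: the first session.
        String displaced = sessions.put("B:1", "session-2 (move)");

        // The first session is gone, so its FILE_COMPLETE reply later finds
        // a session whose current file is null, as in the stack traces above.
        System.out.println("displaced: " + displaced);
        System.out.println("registered sessions: " + sessions.size());
    }
}
```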

 Non Unique Streaming session ID's
 -

 Key: CASSANDRA-4223
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4223
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Ubuntu 10.04.2 LTS
 java version 1.6.0_24
 Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
 Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)
 Bare metal servers from 
 https://www.stormondemand.com/servers/baremetal.html 
 The servers run on a custom hypervisor.
  
Reporter: Aaron Morton
Assignee: Aaron Morton
  Labels: datastax_qa
 Fix For: 1.0.11, 1.1.1

 Attachments: 4223_counter_session_id.diff, NanoTest.java, fmm 
 streaming bug.txt


 I have observed repair processes failing due to duplicate streaming session 
 IDs. In this installation it is preventing rebalance from completing, and I 
 believe it has also prevented repair from completing in the past. 
 The attached streaming-logs.txt file contains log messages and an explanation 
 of what was happening during a repair operation; it has the evidence for the 
 duplicate session IDs.
 The duplicate session IDs were generated on the repairing node and sent to 
 the streaming node. The streaming source replaced the first session with the 
 second, which resulted in both sessions failing when the first FILE_COMPLETE 
 message was received. 
 The errors were:
 {code:java}
 DEBUG [MiscStage:1] 2012-05-03 21:40:33,997 StreamReplyVerbHandler.java (line 
 47) Received StreamReply StreamReply(sessionId=26132848816442266, 
 file='/var/lib/cassandra/data/FMM_Studio/PartsData-hc-1-Data.db', 
 action=FILE_FINISHED)
 ERROR [MiscStage:1] 2012-05-03 21:40:34,027 AbstractCassandraDaemon.java 
 (line 139) Fatal exception in thread Thread[MiscStage:1,5,main]
 java.lang.IllegalStateException: target reports current file is 
 /var/lib/cassandra/data/FMM_Studio/PartsData-hc-1-Data.db but is null
 at 
 org.apache.cassandra.streaming.StreamOutSession.validateCurrentFile(StreamOutSession.java:195)
 at 
 org.apache.cassandra.streaming.StreamReplyVerbHandler.doVerb(StreamReplyVerbHandler.java:58)
 at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
 at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown 
 Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)
 {code}
 and
 {code:java}
 DEBUG [MiscStage:2] 2012-05-03 21:40:36,497 StreamReplyVerbHandler.java (line 
 47) Received StreamReply StreamReply(sessionId=26132848816442266, 
 file='/var/lib/cassandra/data/OpsCenter/rollups7200-hc-3-Data.db', 
 action=FILE_FINISHED)
 ERROR [MiscStage:2] 2012-05-03 21:40:36,497 AbstractCassandraDaemon.java 
 (line 139) Fatal exception in thread Thread[MiscStage:2,5,main]
 java.lang.IllegalStateException: target reports current file is 
 /var/lib/cassandra/data/OpsCenter/rollups7200-hc-3-Data.db but is null
 at 
 org.apache.cassandra.streaming.StreamOutSession.validateCurrentFile(StreamOutSession.java:195)
 at 
 org.apache.cassandra.streaming.StreamReplyVerbHandler.doVerb(StreamReplyVerbHandler.java:58)
 at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
 at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown 
 Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)
 {code}
 I think this is because System.nanoTime() is used for the session ID when 
 creating the StreamInSession objects (driven from 
 StorageService.requestRanges()).
 From the documentation 
 (http://docs.oracle.com/javase/6/docs/api/java/lang/System.html#nanoTime()) 
 {quote}
 This method provides nanosecond precision, but not necessarily nanosecond 
 accuracy. No guarantees are made about how frequently values change. 
 {quote}
 There is also some info on clocks and timers here: 
 https://blogs.oracle.com/dholmes/entry/inside_the_hotspot_vm_clocks
 The hypervisor may be at fault here. But it seems like we cannot rely on 
 successive calls to nanoTime() to return different values. 
 To avoid message/interface changes on the StreamHeader it would be good to 
 keep the session ID a long. The simplest approach may be to make successive 
 calls to nanoTime until the result changes. We could fail if a certain number 
 of 
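
The retry idea sketched above ("make successive calls to nanoTime until the result changes") can be illustrated self-containedly. This is a hypothetical sketch of that approach, not the attached 4223_counter_session_id.diff patch; the class and method names are made up for illustration.

```java
public class UniqueSessionId {
    // Last id handed out; used to force strictly increasing results.
    private static long lastId = Long.MIN_VALUE;

    // Hypothetical helper: retry System.nanoTime() until it moves past the
    // previously issued id, so two session creations can never share an id
    // even on a clock with coarse granularity (e.g. under a hypervisor).
    public static synchronized long nextSessionId() {
        long id = System.nanoTime();
        while (id <= lastId) {
            id = System.nanoTime();
        }
        lastId = id;
        return id;
    }

    public static void main(String[] args) {
        // Two raw nanoTime() calls may return the same value on some
        // platforms; the guarded helper never does.
        long a = nextSessionId();
        long b = nextSessionId();
        System.out.println("distinct ids: " + (a != b));
    }
}
```

This keeps the session ID a long, so the StreamHeader message format stays unchanged, as the description asks.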

[jira] [Commented] (CASSANDRA-4228) Exception while reading from cassandra via ColumnFamilyInputFormat and OrderPreservingPartitioner

2012-05-09 Thread bert Passek (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271636#comment-13271636
 ] 

bert Passek commented on CASSANDRA-4228:


I already noticed the RandomPartitioner in the stack trace. Data was written to 
Cassandra by a Hadoop job configured with OrderPreservingPartitioner. A 
different job reads from Cassandra, and the partitioner in that job's 
configuration was also set to OrderPreservingPartitioner.

We haven't actually changed any Hadoop jobs; we just updated Cassandra from 
1.0.8 to 1.1.0, and then we ran into this exception. The test case was written 
to track down the problem. It's strange because the exception is thrown even 
when we are trying to read from empty column families.

I'm going to check the cluster and job configuration again; I might have set up 
something wrong.

 Exception while reading from cassandra via ColumnFamilyInputFormat and 
 OrderPreservingPartitioner
 -

 Key: CASSANDRA-4228
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4228
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Affects Versions: 1.1.0
 Environment: Debian Squeeze
Reporter: bert Passek
 Attachments: CassandraTest.java


 We recently updated Cassandra from version 1.0.8 to 1.1.0 on a Debian Squeeze 
 system. Since then we cannot use ColumnFamilyInputFormat anymore due to 
 exceptions in Cassandra. A simple unit test is provided as an attachment.
 Here are some details about our simple setup:
 Ring: 
 Address    DC           Rack   Status  State   Load       Owns     Token
 127.0.0.1  datacenter1  rack1  Up      Normal  859.36 KB  100,00%  55894951196891831822413178196787984716
 Schema Definition:
 create column family TestSuper
   with column_type = 'Super'
   and comparator = 'BytesType'
   and subcomparator = 'BytesType'
   and default_validation_class = 'BytesType'
   and key_validation_class = 'BytesType'
   and read_repair_chance = 0.1
   and dclocal_read_repair_chance = 0.0
   and gc_grace = 864000
   and min_compaction_threshold = 4
   and max_compaction_threshold = 32
   and replicate_on_write = true
   and compaction_strategy = 
 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'
   and caching = 'KEYS_ONLY'
   and compression_options = {'sstable_compression' : 
 'org.apache.cassandra.io.compress.SnappyCompressor'};
 While running the test we face following exception on client side:
 12/05/09 10:18:22 INFO junit.TestRunner: 
 testColumnFamilyInputFormat(de.unister.cpc.tests.CassandraTest): 
 org.apache.thrift.transport.TTransportException
 12/05/09 10:18:22 INFO junit.TestRunner: java.lang.RuntimeException: 
 org.apache.thrift.transport.TTransportException
   at 
 org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.maybeInit(ColumnFamilyRecordReader.java:391)
   at 
 org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.computeNext(ColumnFamilyRecordReader.java:397)
   at 
 org.apache.cassandra.hadoop.ColumnFamilyRecordReader$StaticRowIterator.computeNext(ColumnFamilyRecordReader.java:323)
   at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
   at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
   at 
 org.apache.cassandra.hadoop.ColumnFamilyRecordReader.nextKeyValue(ColumnFamilyRecordReader.java:188)
   at 
 de.unister.cpc.tests.CassandraTest.testColumnFamilyInputFormat(CassandraTest.java:98)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
   at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:73)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:46)
   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)

[jira] [Commented] (CASSANDRA-4229) Infinite MapReduce Task while reading via ColumnFamilyInputFormat

2012-05-09 Thread bert Passek (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271643#comment-13271643
 ] 

bert Passek commented on CASSANDRA-4229:


Yes, I can reproduce it. Actually, our development environment simply consists 
of a single node. The Hadoop job is very simple: it just reads data from 
Cassandra and writes back to Cassandra.

 Infinite MapReduce Task while reading via ColumnFamilyInputFormat
 -

 Key: CASSANDRA-4229
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4229
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Affects Versions: 1.1.0
 Environment: Debian Squeeze
Reporter: bert Passek
 Attachments: screenshot.jpg


 Hi,
 we recently upgraded Cassandra from version 1.0.9 to 1.1.0. Since then we 
 cannot execute any Hadoop jobs that read data from Cassandra via 
 ColumnFamilyInputFormat.
 A map task is created which runs infinitely. We are trying to read from 
 a super column family with roughly 1000 row keys.
 This is the output from the job interface, where we already have 17 million 
 map input records!
 Counter                              Map            Reduce       Total
 Map input records                    17.273.127     0            17.273.127
 Reduce shuffle bytes                 0              391          391
 Spilled Records                      3.288          0            3.288
 Map output bytes                     639.849.351    0            639.849.351
 CPU time spent (ms)                  792.750        7.600        800.350
 Total committed heap usage (bytes)   354.680.832    48.955.392   403.636.224
 Combine input records                17.039.783     0            17.039.783
 SPLIT_RAW_BYTES                      212            0            212
 Reduce input records                 0              0            0
 Reduce input groups                  0              0            0
 Combine output records               3.288          0            3.288
 Physical memory (bytes) snapshot     510.275.584    96.370.688   606.646.272
 Reduce output records                0              0            0
 Virtual memory (bytes) snapshot      1.826.496.512  934.473.728  2.760.970.240
 Map output records                   17.273.126     0            17.273.126
 We had to kill the job and go back to version 1.0.9, because 1.1.0 is 
 not usable for reading from Cassandra.
 Best regards 
 Bert Passek

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (CASSANDRA-4221) Error while deleting a columnfamily that is being compacted.

2012-05-09 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reassigned CASSANDRA-4221:
-

Assignee: Pavel Yaskevich

 Error while deleting a columnfamily that is being compacted.
 

 Key: CASSANDRA-4221
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4221
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: ccm, dtest, cassandra-1.1. The error does not happen in 
 cassandra-1.0.
Reporter: Tyler Patterson
Assignee: Pavel Yaskevich

 The following dtest command produces an error:
 {code}export CASSANDRA_VERSION=git:cassandra-1.1; nosetests --nocapture 
 --nologcapture 
 concurrent_schema_changes_test.py:TestConcurrentSchemaChanges.load_test{code}
 Here is the error:
 {code}
 Error occured during compaction
 java.util.concurrent.ExecutionException: java.io.IOError: 
 java.io.FileNotFoundException: 
 /tmp/dtest-6ECMgy/test/node1/data/Keyspace1/Standard1/Keyspace1-Standard1-hc-47-Data.db
  (No such file or directory)
   at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
   at java.util.concurrent.FutureTask.get(FutureTask.java:111)
   at 
 org.apache.cassandra.db.compaction.CompactionManager.performMaximal(CompactionManager.java:239)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.forceMajorCompaction(ColumnFamilyStore.java:1580)
   at 
 org.apache.cassandra.service.StorageService.forceTableCompaction(StorageService.java:1770)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:616)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:111)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:45)
   at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:226)
   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:251)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:857)
   at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:795)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1450)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.access$200(RMIConnectionImpl.java:90)
   at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1285)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1383)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:807)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:616)
   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
   at sun.rmi.transport.Transport$1.run(Transport.java:177)
   at java.security.AccessController.doPrivileged(Native Method)
   at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
   at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
   at java.lang.Thread.run(Thread.java:679)
 Caused by: java.io.IOError: java.io.FileNotFoundException: 
 /tmp/dtest-6ECMgy/test/node1/data/Keyspace1/Standard1/Keyspace1-Standard1-hc-47-Data.db
  (No such file or directory)
   at 
 org.apache.cassandra.io.sstable.SSTableScanner.init(SSTableScanner.java:61)
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getDirectScanner(SSTableReader.java:839)
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getDirectScanner(SSTableReader.java:851)
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:142)
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:148)
   at 
 

[jira] [Commented] (CASSANDRA-4221) Error while deleting a columnfamily that is being compacted.

2012-05-09 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271649#comment-13271649
 ] 

Pavel Yaskevich commented on CASSANDRA-4221:


This one seems to be caused by the same problem as CASSANDRA-4230.

 Error while deleting a columnfamily that is being compacted.
 

 Key: CASSANDRA-4221
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4221
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: ccm, dtest, cassandra-1.1. The error does not happen in 
 cassandra-1.0.
Reporter: Tyler Patterson
Assignee: Pavel Yaskevich

 The following dtest command produces an error:
 {code}export CASSANDRA_VERSION=git:cassandra-1.1; nosetests --nocapture 
 --nologcapture 
 concurrent_schema_changes_test.py:TestConcurrentSchemaChanges.load_test{code}
 Here is the error:
 {code}
 Error occured during compaction
 java.util.concurrent.ExecutionException: java.io.IOError: 
 java.io.FileNotFoundException: 
 /tmp/dtest-6ECMgy/test/node1/data/Keyspace1/Standard1/Keyspace1-Standard1-hc-47-Data.db
  (No such file or directory)
   at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
   at java.util.concurrent.FutureTask.get(FutureTask.java:111)
   at 
 org.apache.cassandra.db.compaction.CompactionManager.performMaximal(CompactionManager.java:239)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.forceMajorCompaction(ColumnFamilyStore.java:1580)
   at 
 org.apache.cassandra.service.StorageService.forceTableCompaction(StorageService.java:1770)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:616)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:111)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:45)
   at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:226)
   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:251)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:857)
   at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:795)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1450)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.access$200(RMIConnectionImpl.java:90)
   at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1285)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1383)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:807)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:616)
   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
   at sun.rmi.transport.Transport$1.run(Transport.java:177)
   at java.security.AccessController.doPrivileged(Native Method)
   at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
   at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
   at java.lang.Thread.run(Thread.java:679)
 Caused by: java.io.IOError: java.io.FileNotFoundException: 
 /tmp/dtest-6ECMgy/test/node1/data/Keyspace1/Standard1/Keyspace1-Standard1-hc-47-Data.db
  (No such file or directory)
   at 
 org.apache.cassandra.io.sstable.SSTableScanner.init(SSTableScanner.java:61)
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getDirectScanner(SSTableReader.java:839)
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getDirectScanner(SSTableReader.java:851)
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:142)
   at 
 

[jira] [Commented] (CASSANDRA-4230) Deleting a CF always produces an error and that CF remains in an unknown state

2012-05-09 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271648#comment-13271648
 ] 

Jonathan Ellis commented on CASSANDRA-4230:
---

bq. snapshot on compaction is one way it can happen.

I suppose there's a really small chance of that happening if you time it just 
right, but that wouldn't explain it being easily reproducible (since the 
compaction and drop snapshots generate their snapshot names independently).

Also, snapshot-on-compaction is off by default, so I doubt André has it 
enabled. André, can you confirm?

 Deleting a CF always produces an error and that CF remains in an unknown state
 --

 Key: CASSANDRA-4230
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4230
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.0
 Environment: Debian Linux Squeeze with the cassandra debian package 
 from Apache.
Reporter: André Cruz
Assignee: Pavel Yaskevich

 From the CLI perspective:
 [default@Disco] drop column family client; 
 null
 org.apache.thrift.transport.TTransportException
   at 
 org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
   at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
   at 
 org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:129)
   at 
 org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
   at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
   at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
   at 
 org.apache.cassandra.thrift.Cassandra$Client.recv_system_drop_column_family(Cassandra.java:1222)
   at 
 org.apache.cassandra.thrift.Cassandra$Client.system_drop_column_family(Cassandra.java:1209)
   at 
 org.apache.cassandra.cli.CliClient.executeDelColumnFamily(CliClient.java:1301)
   at 
 org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:234)
   at 
 org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:219)
   at org.apache.cassandra.cli.CliMain.main(CliMain.java:346)
 Log:
  INFO [MigrationStage:1] 2012-05-09 11:25:35,686 ColumnFamilyStore.java (line 
 634) Enqueuing flush of Memtable-schema_columnfamilies@225225949(978/1222 
 serialized/live bytes, 21 ops)
  INFO [FlushWriter:3] 2012-05-09 11:25:35,687 Memtable.java (line 266) 
 Writing Memtable-schema_columnfamilies@225225949(978/1222 serialized/live 
 bytes, 21 ops)
  INFO [FlushWriter:3] 2012-05-09 11:25:35,748 Memtable.java (line 307) 
 Completed flushing 
 /var/lib/cassandra/data/system/schema_columnfamilies/system-schema_columnfamilies-hc-34-Data.db
  (1041 bytes)
  INFO [MigrationStage:1] 2012-05-09 11:25:35,749 ColumnFamilyStore.java (line 
 634) Enqueuing flush of Memtable-schema_columns@213209572(586/732 
 serialized/live bytes, 12 ops)
  INFO [FlushWriter:3] 2012-05-09 11:25:35,750 Memtable.java (line 266) 
 Writing Memtable-schema_columns@213209572(586/732 serialized/live bytes, 12 
 ops)
  INFO [FlushWriter:3] 2012-05-09 11:25:35,812 Memtable.java (line 307) 
 Completed flushing 
 /var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-28-Data.db
  (649 bytes)
  INFO [CompactionExecutor:20] 2012-05-09 11:25:35,814 CompactionTask.java 
 (line 114) Compacting 
 [SSTableReader(path='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-27-Data.db'),
  SSTableReader
 (path='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-25-Data.db'),
  
 SSTableReader(path='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-26-Data.db'),
  SSTableReader(path
 ='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-28-Data.db')]
  INFO [MigrationStage:1] 2012-05-09 11:25:35,918 ColumnFamilyStore.java (line 
 634) Enqueuing flush of Memtable-Client@864320066(372/465 serialized/live 
 bytes, 6 ops)
  INFO [FlushWriter:3] 2012-05-09 11:25:35,919 Memtable.java (line 266) 
 Writing Memtable-Client@864320066(372/465 serialized/live bytes, 6 ops)
  INFO [CompactionExecutor:20] 2012-05-09 11:25:35,945 CompactionTask.java 
 (line 225) Compacted to 
 [/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-29-Data.db,].
   22,486 to 20,621 (~91% of original) bytes for 2 keys at 0.150120MB/s.  Time: 131ms.
  INFO [FlushWriter:3] 2012-05-09 11:25:36,013 Memtable.java (line 307) 
 Completed 

[jira] [Assigned] (CASSANDRA-4232) Bootstrapping nodes should not throttle compaction

2012-05-09 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reassigned CASSANDRA-4232:
-

Assignee: Yuki Morishita

 Bootstrapping nodes should not throttle compaction
 --

 Key: CASSANDRA-4232
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4232
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Brandon Williams
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 1.1.1


 When bootstrapping a new node, people often have to disable compaction 
 throttling so that secondary index building completes in a reasonable time.  Since a 
 bootstrapping node isn't a cluster member yet, there's no reason for it to respect 
 compaction_throughput_in_mb.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (CASSANDRA-4232) Bootstrapping nodes should not throttle compaction

2012-05-09 Thread Brandon Williams (JIRA)
Brandon Williams created CASSANDRA-4232:
---

 Summary: Bootstrapping nodes should not throttle compaction
 Key: CASSANDRA-4232
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4232
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Brandon Williams
Priority: Minor
 Fix For: 1.1.1


When bootstrapping a new node, people often have to disable compaction 
throttling so that secondary index building completes in a reasonable time.  Since a 
bootstrapping node isn't a cluster member yet, there's no reason for it to respect 
compaction_throughput_in_mb.





[jira] [Commented] (CASSANDRA-4221) Error while deleting a columnfamily that is being compacted.

2012-05-09 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271653#comment-13271653
 ] 

Jonathan Ellis commented on CASSANDRA-4221:
---

Maybe, but I'm skeptical -- 4230 is complaining about a file existing when it 
shouldn't, while this one says a file doesn't exist that should :)

 Error while deleting a columnfamily that is being compacted.
 

 Key: CASSANDRA-4221
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4221
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: ccm, dtest, cassandra-1.1. The error does not happen in 
 cassandra-1.0.
Reporter: Tyler Patterson
Assignee: Pavel Yaskevich

 The following dtest command produces an error:
 {code}export CASSANDRA_VERSION=git:cassandra-1.1; nosetests --nocapture 
 --nologcapture 
 concurrent_schema_changes_test.py:TestConcurrentSchemaChanges.load_test{code}
 Here is the error:
 {code}
 Error occured during compaction
 java.util.concurrent.ExecutionException: java.io.IOError: 
 java.io.FileNotFoundException: 
 /tmp/dtest-6ECMgy/test/node1/data/Keyspace1/Standard1/Keyspace1-Standard1-hc-47-Data.db
  (No such file or directory)
   at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
   at java.util.concurrent.FutureTask.get(FutureTask.java:111)
   at 
 org.apache.cassandra.db.compaction.CompactionManager.performMaximal(CompactionManager.java:239)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.forceMajorCompaction(ColumnFamilyStore.java:1580)
   at 
 org.apache.cassandra.service.StorageService.forceTableCompaction(StorageService.java:1770)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:616)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:111)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:45)
   at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:226)
   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:251)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:857)
   at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:795)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1450)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.access$200(RMIConnectionImpl.java:90)
   at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1285)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1383)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:807)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:616)
   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
   at sun.rmi.transport.Transport$1.run(Transport.java:177)
   at java.security.AccessController.doPrivileged(Native Method)
   at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
   at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
   at java.lang.Thread.run(Thread.java:679)
 Caused by: java.io.IOError: java.io.FileNotFoundException: 
 /tmp/dtest-6ECMgy/test/node1/data/Keyspace1/Standard1/Keyspace1-Standard1-hc-47-Data.db
  (No such file or directory)
   at 
 org.apache.cassandra.io.sstable.SSTableScanner.init(SSTableScanner.java:61)
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getDirectScanner(SSTableReader.java:839)
   at 
 org.apache.cassandra.io.sstable.SSTableReader.getDirectScanner(SSTableReader.java:851)
   at 
 

[jira] [Commented] (CASSANDRA-4191) Add `nodetool cfstats ks cf` abilities

2012-05-09 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271669#comment-13271669
 ] 

Brandon Williams commented on CASSANDRA-4191:
-

I don't see a way to specify at the CF level, which seems like a common use 
case.

 Add `nodetool cfstats ks cf` abilities
 --

 Key: CASSANDRA-4191
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4191
 Project: Cassandra
  Issue Type: New Feature
Affects Versions: 1.2
Reporter: Joaquin Casares
Priority: Minor
  Labels: datastax_qa
 Attachments: 4191_specific_cfstats.diff


 This way cfstats will only print information per keyspace/column family 
 combinations.
 Another related proposal as an alternative to this ticket:
 Allow for `nodetool cfstats` to use --excludes or --includes to accept 
 keyspace and column family arguments.





[jira] [Updated] (CASSANDRA-4191) Add `nodetool cfstats ks cf` abilities

2012-05-09 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-4191:


Reviewer: brandon.williams
Assignee: Dave Brosius

 Add `nodetool cfstats ks cf` abilities
 --

 Key: CASSANDRA-4191
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4191
 Project: Cassandra
  Issue Type: New Feature
Affects Versions: 1.2
Reporter: Joaquin Casares
Assignee: Dave Brosius
Priority: Minor
  Labels: datastax_qa
 Attachments: 4191_specific_cfstats.diff


 This way cfstats will only print information per keyspace/column family 
 combinations.
 Another related proposal as an alternative to this ticket:
 Allow for `nodetool cfstats` to use --excludes or --includes to accept 
 keyspace and column family arguments.





[jira] [Commented] (CASSANDRA-4230) Deleting a CF always produces an error and that CF remains in an unknown state

2012-05-09 Thread André Cruz (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271682#comment-13271682
 ] 

André Cruz commented on CASSANDRA-4230:
---

I suppose you mean this:
{noformat}
$ cat /etc/cassandra/cassandra.yaml |grep snapshot |grep compac
# Whether or not to take a snapshot before each compaction.  Be
snapshot_before_compaction: false
{noformat}

I did not enable it.

 Deleting a CF always produces an error and that CF remains in an unknown state
 --

 Key: CASSANDRA-4230
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4230
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.0
 Environment: Debian Linux Squeeze with the cassandra debian package 
 from Apache.
Reporter: André Cruz
Assignee: Pavel Yaskevich

 From the CLI perspective:
 [default@Disco] drop column family client; 
 null
 org.apache.thrift.transport.TTransportException
   at 
 org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
   at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
   at 
 org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:129)
   at 
 org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
   at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
   at 
 org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
   at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
   at 
 org.apache.cassandra.thrift.Cassandra$Client.recv_system_drop_column_family(Cassandra.java:1222)
   at 
 org.apache.cassandra.thrift.Cassandra$Client.system_drop_column_family(Cassandra.java:1209)
   at 
 org.apache.cassandra.cli.CliClient.executeDelColumnFamily(CliClient.java:1301)
   at 
 org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:234)
   at 
 org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:219)
   at org.apache.cassandra.cli.CliMain.main(CliMain.java:346)
 Log:
  INFO [MigrationStage:1] 2012-05-09 11:25:35,686 ColumnFamilyStore.java (line 
 634) Enqueuing flush of Memtable-schema_columnfamilies@225225949(978/1222 
 serialized/live bytes, 21 ops)
  INFO [FlushWriter:3] 2012-05-09 11:25:35,687 Memtable.java (line 266) 
 Writing Memtable-schema_columnfamilies@225225949(978/1222 serialized/live 
 bytes, 21 ops)
  INFO [FlushWriter:3] 2012-05-09 11:25:35,748 Memtable.java (line 307) 
 Completed flushing 
 /var/lib/cassandra/data/system/schema_columnfamilies/system-schema_columnfamilies-hc-34-Data.db
  (1041 bytes)
  INFO [MigrationStage:1] 2012-05-09 11:25:35,749 ColumnFamilyStore.java (line 
 634) Enqueuing flush of Memtable-schema_columns@213209572(586/732 
 serialized/live bytes, 12 ops)
  INFO [FlushWriter:3] 2012-05-09 11:25:35,750 Memtable.java (line 266) 
 Writing Memtable-schema_columns@213209572(586/732 serialized/live bytes, 12 
 ops)
  INFO [FlushWriter:3] 2012-05-09 11:25:35,812 Memtable.java (line 307) 
 Completed flushing 
 /var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-28-Data.db
  (649 bytes)
  INFO [CompactionExecutor:20] 2012-05-09 11:25:35,814 CompactionTask.java 
 (line 114) Compacting 
 [SSTableReader(path='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-27-Data.db'),
  SSTableReader
 (path='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-25-Data.db'),
  
 SSTableReader(path='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-26-Data.db'),
  SSTableReader(path
 ='/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-28-Data.db')]
  INFO [MigrationStage:1] 2012-05-09 11:25:35,918 ColumnFamilyStore.java (line 
 634) Enqueuing flush of Memtable-Client@864320066(372/465 serialized/live 
 bytes, 6 ops)
  INFO [FlushWriter:3] 2012-05-09 11:25:35,919 Memtable.java (line 266) 
 Writing Memtable-Client@864320066(372/465 serialized/live bytes, 6 ops)
  INFO [CompactionExecutor:20] 2012-05-09 11:25:35,945 CompactionTask.java 
 (line 225) Compacted to 
 [/var/lib/cassandra/data/system/schema_columns/system-schema_columns-hc-29-Data.db,].
   22,486 to 20,621 (~91% of original) bytes for 2 keys at 0.150120MB/s.  Time: 131ms.
  INFO [FlushWriter:3] 2012-05-09 11:25:36,013 Memtable.java (line 307) 
 Completed flushing 
 /var/lib/cassandra/data/Disco/Client/Disco-Client-hc-5-Data.db (407 bytes)
 ERROR [MigrationStage:1] 2012-05-09 11:25:36,043 CLibrary.java (line 158) 
 Unable to 

[jira] [Commented] (CASSANDRA-1991) CFS.maybeSwitchMemtable() calls CommitLog.instance.getContext(), which may block, under flusher lock write lock

2012-05-09 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271685#comment-13271685
 ] 

Sylvain Lebresne commented on CASSANDRA-1991:
-

+1

 CFS.maybeSwitchMemtable() calls CommitLog.instance.getContext(), which may 
 block, under flusher lock write lock
 ---

 Key: CASSANDRA-1991
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1991
 Project: Cassandra
  Issue Type: Improvement
Reporter: Peter Schuller
Assignee: Jonathan Ellis
  Labels: commitlog
 Fix For: 1.1.1

 Attachments: 1991-checkpointing-flush.txt, 1991-logchanges.txt, 
 1991-trunk-v2.txt, 1991-trunk.txt, 1991-v3.txt, 1991-v4.txt, 1991-v5.txt, 
 1991-v6.txt, 1991-v7.txt, 1991-v8.txt, 1991-v9.txt, 1991.txt, trigger.py


 While investigating CASSANDRA-1955 I realized I was seeing very poor latencies 
 for reasons that had nothing to do with flush_writers, even when using 
 periodic commit log mode (and flush writers set ridiculously high, 500).
 It turns out blocked writes were slow because Table.apply() was spending lots 
 of time (I can easily trigger multi-second stalls on a moderate workload) trying to 
 acquire the flusher lock's read lock (the flush lock millis log printout in the 
 logging patch I'll attach).
 That in turn is caused by CFS.maybeSwitchMemtable(), which acquires the 
 flusher lock's write lock.
 Bisecting further revealed that the offending line of code that blocked was:
 final CommitLogSegment.CommitLogContext ctx = 
 writeCommitLog ? CommitLog.instance.getContext() : null;
 Indeed, CommitLog.getContext() simply returns currentSegment().getContext(), 
 but does so by submitting a callable on the service executor. So, 
 independently of flush writers, this can block all (global, across all cf:s) 
 writes very easily, and does.
 I'll attach a file that is an independent Python script that triggers it on 
 my macOS laptop (with an Intel SSD, which is why I was particularly 
 surprised). It assumes CPython and an out-of-the-box-or-almost Cassandra on 
 localhost that isn't in a cluster, and it will drop/recreate a keyspace 
 called '1955'.
 I'm also attaching, just FYI, the patch with the log entries I used while 
 tracking it down.
 Finally, I'll attach a patch with a suggested solution of keeping track of 
 the latest commit log segment with an AtomicReference (as an alternative to 
 synchronizing all access to segments). With that patch applied, latencies are 
 not affected by my trigger case like they were before. There are some 
 sub-optimal > 100 ms cases on my test machine, but for other reasons. I'm no 
 longer able to trigger the extremes.
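The AtomicReference idea described above can be sketched roughly as follows. This is an illustrative toy, not Cassandra's actual CommitLog code: the class and method names (CommitLogSketch, Segment, switchSegment) are hypothetical, and a position counter stands in for a real CommitLogContext. The point is that getContext() on the write path becomes a lock-free read of the current segment reference instead of a blocking submit to an executor:

```java
import java.util.concurrent.atomic.AtomicReference;

class CommitLogSketch {
    // Stand-in for a commit log segment; the real segment serializes its own state.
    static class Segment {
        private long position;
        synchronized long getContext() { return position; }
        synchronized void append(int size) { position += size; }
    }

    // The latest segment is published via an AtomicReference, so readers
    // never need to go through the commit log's service executor.
    private final AtomicReference<Segment> activeSegment =
            new AtomicReference<>(new Segment());

    // Write path: non-blocking read of the current segment's context.
    long getContext() {
        return activeSegment.get().getContext();
    }

    // Append to the current segment and return its new context.
    long append(int size) {
        Segment s = activeSegment.get();
        s.append(size);
        return s.getContext();
    }

    // Segment rollover: atomically publish a fresh segment.
    void switchSegment() {
        activeSegment.set(new Segment());
    }
}
```

Under this scheme only rollover mutates the reference; callers under the flusher lock merely dereference it, so they cannot be blocked by a busy executor queue.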





[jira] [Updated] (CASSANDRA-4221) Error while deleting a columnfamily that is being compacted.

2012-05-09 Thread Pavel Yaskevich (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Yaskevich updated CASSANDRA-4221:
---

Attachment: CASSANDRA-4230.patch

The patch adds an attempt to stop all running compactions on the given Keyspace or 
ColumnFamily before executing a drop command. I tried the test from 
the description and it ran without failures.
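The stop-then-drop pattern described above can be sketched generically. This is not Cassandra's CompactionManager API; it is a hedged illustration using a plain ExecutorService, where shutdownNow() stands in for cancelling running compactions and awaitTermination() for waiting until they release their sstable file handles:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

class DropCoordinator {
    // Stand-in for the compaction executor owned by a compaction manager.
    private final ExecutorService compactionExecutor = Executors.newFixedThreadPool(2);

    // Drop a column family only after in-flight compactions have stopped,
    // so no compaction still holds the data files we are about to delete.
    void dropColumnFamily(Runnable deleteFiles) {
        compactionExecutor.shutdownNow();  // cancel queued and interrupt running tasks
        try {
            if (!compactionExecutor.awaitTermination(60, TimeUnit.SECONDS))
                throw new IllegalStateException("compactions did not stop in time");
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("interrupted while waiting for compactions", e);
        }
        deleteFiles.run();  // safe: no compaction has the sstables open
    }
}
```

Without the wait, the deletion races the compaction and produces exactly the FileNotFoundException shown in the issue description.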

 Error while deleting a columnfamily that is being compacted.
 

 Key: CASSANDRA-4221
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4221
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: ccm, dtest, cassandra-1.1. The error does not happen in 
 cassandra-1.0.
Reporter: Tyler Patterson
Assignee: Pavel Yaskevich
 Attachments: CASSANDRA-4230.patch


 The following dtest command produces an error:
 {code}export CASSANDRA_VERSION=git:cassandra-1.1; nosetests --nocapture 
 --nologcapture 
 concurrent_schema_changes_test.py:TestConcurrentSchemaChanges.load_test{code}

[jira] [Commented] (CASSANDRA-4221) Error while deleting a columnfamily that is being compacted.

2012-05-09 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271710#comment-13271710
 ] 

Jonathan Ellis commented on CASSANDRA-4221:
---

That takes us back to the Bad Old Days pre-CASSANDRA-3116, though.  We should 
be able to fix w/o resorting to A Big Lock.

 Error while deleting a columnfamily that is being compacted.
 

 Key: CASSANDRA-4221
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4221
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.0
 Environment: ccm, dtest, cassandra-1.1. The error does not happen in 
 cassandra-1.0.
Reporter: Tyler Patterson
Assignee: Pavel Yaskevich
 Fix For: 1.1.1

 Attachments: CASSANDRA-4230.patch


 The following dtest command produces an error:
 {code}export CASSANDRA_VERSION=git:cassandra-1.1; nosetests --nocapture 
 --nologcapture 
 concurrent_schema_changes_test.py:TestConcurrentSchemaChanges.load_test{code}

[jira] [Updated] (CASSANDRA-4221) Error while deleting a columnfamily that is being compacted.

2012-05-09 Thread Pavel Yaskevich (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Yaskevich updated CASSANDRA-4221:
---

Attachment: (was: CASSANDRA-4230.patch)

 Error while deleting a columnfamily that is being compacted.
 

 Key: CASSANDRA-4221
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4221
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.0
 Environment: ccm, dtest, cassandra-1.1. The error does not happen in 
 cassandra-1.0.
Reporter: Tyler Patterson
Assignee: Pavel Yaskevich
 Fix For: 1.1.1

 Attachments: CASSANDRA-4221.patch


 The following dtest command produces an error:
 {code}export CASSANDRA_VERSION=git:cassandra-1.1; nosetests --nocapture 
 --nologcapture 
 concurrent_schema_changes_test.py:TestConcurrentSchemaChanges.load_test{code}

[jira] [Commented] (CASSANDRA-4221) Error while deleting a columnfamily that is being compacted.

2012-05-09 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271724#comment-13271724
 ] 

Pavel Yaskevich commented on CASSANDRA-4221:


For a KS or CF drop it seems necessary to wait until all running
compactions finish, otherwise we end up with errors like the one in the
description. Other operations (create, update) are not affected by this.
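The ordering Pavel describes can be shown with a small standalone sketch (a hypothetical `DropGuard` class, not Cassandra's actual code): run compactions on a dedicated executor and drain it before deleting the CF's files, so no scanner can be holding an sstable that is about to disappear.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class DropGuard {
    // Hypothetical sketch: compactions run on a per-CF executor, and a drop
    // first drains that executor so no compaction races with file deletion.
    private final ExecutorService compactions = Executors.newFixedThreadPool(2);

    public Future<?> submitCompaction(Runnable task) {
        return compactions.submit(task);
    }

    public void dropColumnFamily(Runnable deleteFiles) {
        compactions.shutdown(); // stop accepting new compactions
        try {
            // wait for in-flight compactions to finish before touching files
            compactions.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        deleteFiles.run(); // safe now: no scanner holds the sstables open
    }
}
```

Without the drain step, `deleteFiles` can run mid-compaction and the compaction thread then fails with the `FileNotFoundException` quoted in the description.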

 Error while deleting a columnfamily that is being compacted.
 

 Key: CASSANDRA-4221
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4221
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.0
 Environment: ccm, dtest, cassandra-1.1. The error does not happen in 
 cassandra-1.0.
Reporter: Tyler Patterson
Assignee: Pavel Yaskevich
 Fix For: 1.1.1

 Attachments: CASSANDRA-4221.patch


 The following dtest command produces an error:
 {code}export CASSANDRA_VERSION=git:cassandra-1.1; nosetests --nocapture 
 --nologcapture 
 concurrent_schema_changes_test.py:TestConcurrentSchemaChanges.load_test{code}
 Here is the error:
 {code}
 Error occured during compaction
 java.util.concurrent.ExecutionException: java.io.IOError: 
 java.io.FileNotFoundException: 
 /tmp/dtest-6ECMgy/test/node1/data/Keyspace1/Standard1/Keyspace1-Standard1-hc-47-Data.db
  (No such file or directory)
   at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
   at java.util.concurrent.FutureTask.get(FutureTask.java:111)
   at 
 org.apache.cassandra.db.compaction.CompactionManager.performMaximal(CompactionManager.java:239)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.forceMajorCompaction(ColumnFamilyStore.java:1580)
   at 
 org.apache.cassandra.service.StorageService.forceTableCompaction(StorageService.java:1770)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:616)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:111)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:45)
   at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:226)
   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:251)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:857)
   at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:795)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1450)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.access$200(RMIConnectionImpl.java:90)
   at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1285)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1383)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:807)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:616)
   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
   at sun.rmi.transport.Transport$1.run(Transport.java:177)
   at java.security.AccessController.doPrivileged(Native Method)
   at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
   at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
   at java.lang.Thread.run(Thread.java:679)
 Caused by: java.io.IOError: java.io.FileNotFoundException: 
 /tmp/dtest-6ECMgy/test/node1/data/Keyspace1/Standard1/Keyspace1-Standard1-hc-47-Data.db
  (No such file or directory)
   at 
org.apache.cassandra.io.sstable.SSTableScanner.<init>(SSTableScanner.java:61)
   at 
 

[jira] [Updated] (CASSANDRA-4221) Error while deleting a columnfamily that is being compacted.

2012-05-09 Thread Pavel Yaskevich (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Yaskevich updated CASSANDRA-4221:
---

Attachment: CASSANDRA-4221.patch

 Error while deleting a columnfamily that is being compacted.
 

 Key: CASSANDRA-4221
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4221
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.0
 Environment: ccm, dtest, cassandra-1.1. The error does not happen in 
 cassandra-1.0.
Reporter: Tyler Patterson
Assignee: Pavel Yaskevich
 Fix For: 1.1.1

 Attachments: CASSANDRA-4221.patch


 The following dtest command produces an error:
 {code}export CASSANDRA_VERSION=git:cassandra-1.1; nosetests --nocapture 
 --nologcapture 
 concurrent_schema_changes_test.py:TestConcurrentSchemaChanges.load_test{code}

[3/6] git commit: warn when expectedOptions does NOT contain key

2012-05-09 Thread jbellis
warn when expectedOptions does NOT contain key


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d4ec6d2b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d4ec6d2b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d4ec6d2b

Branch: refs/heads/cassandra-1.1
Commit: d4ec6d2bb758bab37773d36a0e9db8edc029e500
Parents: 861f1f3
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed May 9 13:26:36 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed May 9 14:52:16 2012 -0500

--
 .../locator/AbstractReplicationStrategy.java   |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d4ec6d2b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
--
diff --git 
a/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java 
b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
index 288818c..f925124 100644
--- a/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
+++ b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
@@ -256,7 +256,7 @@ public abstract class AbstractReplicationStrategy
 {
 for (String key : configOptions.keySet())
 {
-if (expectedOptions.contains(key))
+if (!expectedOptions.contains(key))
 logger.warn("Unrecognized strategy option {" + key + "} passed 
to " + getClass().getSimpleName() + " for keyspace " + table);
 }
 }
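The one-character fix above inverts a backwards membership test: the old code warned about exactly the options the strategy *did* expect. A standalone sketch of the corrected check (hypothetical class and method names, not the Cassandra code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class StrategyOptions {
    // Sketch of the corrected validation: collect a warning for each
    // configured option that the strategy does NOT recognize.
    static List<String> unrecognized(Map<String, String> configOptions, Set<String> expectedOptions) {
        List<String> warnings = new ArrayList<>();
        for (String key : configOptions.keySet())
            if (!expectedOptions.contains(key))   // the negation is the whole fix
                warnings.add("Unrecognized strategy option {" + key + "}");
        return warnings;
    }
}
```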



[6/6] git commit: warn when expectedOptions does NOT contain key

2012-05-09 Thread jbellis
warn when expectedOptions does NOT contain key


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/146f4bd9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/146f4bd9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/146f4bd9

Branch: refs/heads/trunk
Commit: 146f4bd9e730425db1065a86bb4303d38a7e84ac
Parents: ca104ba
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed May 9 13:26:36 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed May 9 14:52:03 2012 -0500

--
 .../locator/AbstractReplicationStrategy.java   |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/146f4bd9/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
--
diff --git 
a/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java 
b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
index bed74ae..b654fe2 100644
--- a/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
+++ b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
@@ -254,7 +254,7 @@ public abstract class AbstractReplicationStrategy
 {
 for (String key : configOptions.keySet())
 {
-if (expectedOptions.contains(key))
+if (!expectedOptions.contains(key))
 logger.warn("Unrecognized strategy option {" + key + "} passed 
to " + getClass().getSimpleName() + " for keyspace " + table);
 }
 }



[4/6] git commit: avoid blocking additional writes during flush patch by jbellis; reviewed by slebresnse and tested by brandonwilliams for CASSANDRA-1991

2012-05-09 Thread jbellis
avoid blocking additional writes during flush
patch by jbellis; reviewed by slebresnse and tested by brandonwilliams for 
CASSANDRA-1991


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4d7e7035
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4d7e7035
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4d7e7035

Branch: refs/heads/trunk
Commit: 4d7e703561bc68a79d856e28b3f710455b1c70bf
Parents: aead8da
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed May 9 14:51:24 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed May 9 14:52:06 2012 -0500

--
 CHANGES.txt|2 +
 .../org/apache/cassandra/db/ColumnFamilyStore.java |   12 +++---
 src/java/org/apache/cassandra/db/Memtable.java |   12 +++---
 .../apache/cassandra/db/commitlog/CommitLog.java   |   19 +++
 .../db/compaction/LeveledCompactionStrategy.java   |   25 +--
 .../cassandra/db/compaction/LeveledManifest.java   |9 +
 .../org/apache/cassandra/db/CommitLogTest.java |6 ++--
 7 files changed, 53 insertions(+), 32 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4d7e7035/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index d6e62c8..0888d29 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -9,6 +9,8 @@
 
 
 1.1.1-dev
+ * avoid blocking additional writes during flush when the commitlog
+   gets behind temporarily (CASSANDRA-1991)
  * enable caching on index CFs based on data CF cache setting (CASSANDRA-4197)
  * warn on invalid replication strategy creation options (CASSANDRA-4046)
  * remove [Freeable]Memory finalizers (CASSANDRA-4222)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4d7e7035/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index ea9bf21..05eaa83 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -30,6 +30,7 @@ import java.util.regex.Pattern;
 import javax.management.*;
 
 import com.google.common.collect.*;
+import com.google.common.util.concurrent.Futures;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -608,8 +609,7 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 }
 
 assert getMemtableThreadSafe() == oldMemtable;
-final ReplayPosition ctx = writeCommitLog ? 
CommitLog.instance.getContext() : ReplayPosition.NONE;
-logger.debug("flush position is {}", ctx);
+final Future<ReplayPosition> ctx = writeCommitLog ? 
CommitLog.instance.getContext() : Futures.immediateFuture(ReplayPosition.NONE);
 
 // submit the memtable for any indexed sub-cfses, and our own.
 final List<ColumnFamilyStore> icc = new 
ArrayList<ColumnFamilyStore>();
@@ -641,7 +641,7 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 // while keeping the wait-for-flush (future.get) out of anything 
latency-sensitive.
 return postFlushExecutor.submit(new WrappedRunnable()
 {
-public void runMayThrow() throws InterruptedException, 
IOException
+public void runMayThrow() throws InterruptedException, 
IOException, ExecutionException
 {
 latch.await();
 
@@ -661,7 +661,7 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 {
 // if we're not writing to the commit log, we are 
replaying the log, so marking
 // the log header with "you can discard anything 
written before the context" is not valid
-
CommitLog.instance.discardCompletedSegments(metadata.cfId, ctx);
+
CommitLog.instance.discardCompletedSegments(metadata.cfId, ctx.get());
 }
 }
 });
@@ -1709,13 +1709,13 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 if (ksm.durableWrites)
 {
 CommitLog.instance.forceNewSegment();
-ReplayPosition position = CommitLog.instance.getContext();
+Future<ReplayPosition> position = CommitLog.instance.getContext();
 // now flush everyone else.  re-flushing ourselves is not 
necessary, but harmless
 for (ColumnFamilyStore cfs : ColumnFamilyStore.all())
 cfs.forceFlush();
 waitForActiveFlushes();
 // if everything was clean, flush won't have 
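
The diff above is the core of the #1991 fix: getContext() now hands back a Future instead of blocking on the commit-log executor while the flusher write lock is held, and the .get() moves into the post-flush task where blocking is harmless. A minimal standalone sketch of that pattern (illustrative names; a long position stands in for ReplayPosition):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CommitLogSketch {
    // Illustrative sketch, not Cassandra's CommitLog: state lives on a
    // single-threaded executor, mirroring the service-executor pattern
    // described in CASSANDRA-1991.
    private final ExecutorService commitLogExecutor = Executors.newSingleThreadExecutor();
    private long position = 0; // only touched on the commit-log thread

    public void append() {
        commitLogExecutor.submit(() -> { position++; });
    }

    public Future<Long> getContext() {
        // Returns immediately: the caller can release its lock and defer
        // the blocking get() to a later, latency-insensitive task.
        return commitLogExecutor.submit(() -> position);
    }

    public void shutdown() {
        commitLogExecutor.shutdown();
    }
}
```

As Sylvain notes in the discussion, this does not change the semantics: nothing inside the lock ever reads the context, so deferring the wait is free.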

[1/6] git commit: avoid blocking additional writes during flush patch by jbellis; reviewed by slebresnse and tested by brandonwilliams for CASSANDRA-1991

2012-05-09 Thread jbellis
Updated Branches:
  refs/heads/cassandra-1.1 861f1f3a9 -> 08848e795
  refs/heads/trunk ca104bac3 -> 4d7e70356


avoid blocking additional writes during flush
patch by jbellis; reviewed by slebresnse and tested by brandonwilliams for 
CASSANDRA-1991


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/08848e79
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/08848e79
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/08848e79

Branch: refs/heads/cassandra-1.1
Commit: 08848e7956f5fd08525a08498205637b2652f2a7
Parents: 67ed39f
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed May 9 14:51:24 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed May 9 14:52:18 2012 -0500

--
 CHANGES.txt|2 +
 .../org/apache/cassandra/db/ColumnFamilyStore.java |   12 +++---
 src/java/org/apache/cassandra/db/Memtable.java |   12 +++---
 .../apache/cassandra/db/commitlog/CommitLog.java   |   19 +++
 .../db/compaction/LeveledCompactionStrategy.java   |   25 +--
 .../cassandra/db/compaction/LeveledManifest.java   |9 +
 .../org/apache/cassandra/db/CommitLogTest.java |6 ++--
 7 files changed, 53 insertions(+), 32 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/08848e79/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index f17ffd1..9246433 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 1.1.1-dev
+ * avoid blocking additional writes during flush when the commitlog
+   gets behind temporarily (CASSANDRA-1991)
  * enable caching on index CFs based on data CF cache setting (CASSANDRA-4197)
  * warn on invalid replication strategy creation options (CASSANDRA-4046)
  * remove [Freeable]Memory finalizers (CASSANDRA-4222)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/08848e79/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 659be73..9dcf1ef 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -31,6 +31,7 @@ import java.util.regex.Pattern;
 import javax.management.*;
 
 import com.google.common.collect.*;
+import com.google.common.util.concurrent.Futures;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -609,8 +610,7 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 }
 
 assert getMemtableThreadSafe() == oldMemtable;
-final ReplayPosition ctx = writeCommitLog ? 
CommitLog.instance.getContext() : ReplayPosition.NONE;
-logger.debug("flush position is {}", ctx);
+final Future<ReplayPosition> ctx = writeCommitLog ? 
CommitLog.instance.getContext() : Futures.immediateFuture(ReplayPosition.NONE);
 
 // submit the memtable for any indexed sub-cfses, and our own.
 final List<ColumnFamilyStore> icc = new 
ArrayList<ColumnFamilyStore>();
@@ -642,7 +642,7 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 // while keeping the wait-for-flush (future.get) out of anything 
latency-sensitive.
 return postFlushExecutor.submit(new WrappedRunnable()
 {
-public void runMayThrow() throws InterruptedException, 
IOException
+public void runMayThrow() throws InterruptedException, 
IOException, ExecutionException
 {
 latch.await();
 
@@ -662,7 +662,7 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 {
 // if we're not writing to the commit log, we are 
replaying the log, so marking
 // the log header with "you can discard anything 
written before the context" is not valid
-
CommitLog.instance.discardCompletedSegments(metadata.cfId, ctx);
+
CommitLog.instance.discardCompletedSegments(metadata.cfId, ctx.get());
 }
 }
 });
@@ -1710,13 +1710,13 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 if (ksm.durableWrites)
 {
 CommitLog.instance.forceNewSegment();
-ReplayPosition position = CommitLog.instance.getContext();
+Future<ReplayPosition> position = CommitLog.instance.getContext();
 // now flush everyone else.  re-flushing ourselves is not 
necessary, but harmless
 for (ColumnFamilyStore cfs : ColumnFamilyStore.all())
 

[2/6] git commit: L0 contents are overlapping (fix for #4142)

2012-05-09 Thread jbellis
L0 contents are overlapping (fix for #4142)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/67ed39fa
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/67ed39fa
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/67ed39fa

Branch: refs/heads/cassandra-1.1
Commit: 67ed39fa9bf71be4cfc13fccbdd7b76dcb46c062
Parents: d4ec6d2
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed May 9 14:01:18 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed May 9 14:52:17 2012 -0500

--
 .../db/compaction/LeveledCompactionStrategy.java   |   30 ++-
 1 files changed, 20 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/67ed39fa/src/java/org/apache/cassandra/db/compaction/LeveledCompactionStrategy.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/LeveledCompactionStrategy.java 
b/src/java/org/apache/cassandra/db/compaction/LeveledCompactionStrategy.java
index 1bc40fd..858a2bc 100644
--- a/src/java/org/apache/cassandra/db/compaction/LeveledCompactionStrategy.java
+++ b/src/java/org/apache/cassandra/db/compaction/LeveledCompactionStrategy.java
@@ -31,6 +31,7 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import org.apache.cassandra.db.ColumnFamilyStore;
+import org.apache.cassandra.db.DecoratedKey;
 import org.apache.cassandra.db.columniterator.IColumnIterator;
 import org.apache.cassandra.dht.Range;
 import org.apache.cassandra.dht.Token;
@@ -174,8 +175,20 @@ public class LeveledCompactionStrategy extends 
AbstractCompactionStrategy implem
 byLevel.get(manifest.levelOf(sstable)).add(sstable);
 
 List<ICompactionScanner> scanners = new 
ArrayList<ICompactionScanner>(sstables.size());
-for (Integer level : ImmutableSortedSet.copyOf(byLevel.keySet()))
-scanners.add(new LeveledScanner(new 
ArrayList<SSTableReader>(byLevel.get(level)), range));
+for (Integer level : byLevel.keySet())
+{
+if (level == 0)
+{
+// L0 makes no guarantees about overlapping-ness.  Just create 
a direct scanner for each
+for (SSTableReader sstable : byLevel.get(level))
+scanners.add(sstable.getDirectScanner(range));
+}
+else
+{
+// Create a LeveledScanner that only opens one sstable at a 
time, in sorted order
+scanners.add(new LeveledScanner(byLevel.get(level), range));
+}
+}
 
 return scanners;
 }
@@ -192,14 +205,12 @@ public class LeveledCompactionStrategy extends 
AbstractCompactionStrategy implem
 private SSTableScanner currentScanner;
 private long positionOffset;
 
-public LeveledScanner(List<SSTableReader> sstables, Range<Token> range)
+public LeveledScanner(Collection<SSTableReader> sstables, Range<Token> 
range)
 {
 this.range = range;
-this.sstables = sstables;
-
-// Sorting a list we got in argument is bad but it's all private 
to this class so let's not bother
-Collections.sort(sstables, SSTable.sstableComparator);
-this.sstableIterator = sstables.iterator();
+this.sstables = new ArrayList<SSTableReader>(sstables);
+Collections.sort(this.sstables, SSTable.sstableComparator);
+this.sstableIterator = this.sstables.iterator();
 
 long length = 0;
 for (SSTableReader sstable : sstables)
@@ -229,8 +240,7 @@ public class LeveledCompactionStrategy extends 
AbstractCompactionStrategy implem
 if (!sstableIterator.hasNext())
 return endOfData();
 
-SSTableReader reader = sstableIterator.next();
-currentScanner = reader.getDirectScanner(range);
+currentScanner = 
sstableIterator.next().getDirectScanner(range);
 return computeNext();
 }
 catch (IOException e)
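A standalone sketch of the LeveledScanner idea in the diff above (lists of strings stand in for sstables and their rows; this is not the Cassandra class): because sstables in a level above L0 are sorted and non-overlapping, one scanner can walk them in order, "opening" only one at a time instead of holding every file open at once.

```java
import java.util.Collections;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

public class LeveledScannerSketch implements Iterator<String> {
    // Each inner list plays the role of one sstable's rows, already in
    // sorted order across sstables (the level invariant above L0).
    private final Iterator<List<String>> sstables;
    private Iterator<String> current = Collections.emptyIterator();

    LeveledScannerSketch(List<List<String>> sortedSstables) {
        this.sstables = sortedSstables.iterator();
    }

    public boolean hasNext() {
        while (!current.hasNext()) {
            if (!sstables.hasNext())
                return false;
            current = sstables.next().iterator(); // open the next sstable lazily
        }
        return true;
    }

    public String next() {
        if (!hasNext())
            throw new NoSuchElementException();
        return current.next();
    }
}
```

L0 gets a direct scanner per sstable instead, precisely because L0 makes no non-overlap guarantee and its files cannot be chained this way.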



[5/6] git commit: L0 contents are overlapping (fix for #4142)

2012-05-09 Thread jbellis
L0 contents are overlapping (fix for #4142)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/aead8da9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/aead8da9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/aead8da9

Branch: refs/heads/trunk
Commit: aead8da91a81e4f3b4ad21d3d53157846d7fbb36
Parents: 146f4bd
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed May 9 14:01:18 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed May 9 14:52:04 2012 -0500

--
 .../db/compaction/LeveledCompactionStrategy.java   |   30 ++-
 1 files changed, 20 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/aead8da9/src/java/org/apache/cassandra/db/compaction/LeveledCompactionStrategy.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/LeveledCompactionStrategy.java 
b/src/java/org/apache/cassandra/db/compaction/LeveledCompactionStrategy.java
index 361c333..939fdc6 100644
--- a/src/java/org/apache/cassandra/db/compaction/LeveledCompactionStrategy.java
+++ b/src/java/org/apache/cassandra/db/compaction/LeveledCompactionStrategy.java
@@ -27,6 +27,7 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import org.apache.cassandra.db.ColumnFamilyStore;
+import org.apache.cassandra.db.DecoratedKey;
 import org.apache.cassandra.db.columniterator.IColumnIterator;
 import org.apache.cassandra.dht.Range;
 import org.apache.cassandra.dht.Token;
@@ -170,8 +171,20 @@ public class LeveledCompactionStrategy extends 
AbstractCompactionStrategy implem
 byLevel.get(manifest.levelOf(sstable)).add(sstable);
 
 List<ICompactionScanner> scanners = new 
ArrayList<ICompactionScanner>(sstables.size());
-for (Integer level : ImmutableSortedSet.copyOf(byLevel.keySet()))
-scanners.add(new LeveledScanner(new 
ArrayList<SSTableReader>(byLevel.get(level)), range));
+for (Integer level : byLevel.keySet())
+{
+if (level == 0)
+{
+// L0 makes no guarantees about overlapping-ness.  Just create 
a direct scanner for each
+for (SSTableReader sstable : byLevel.get(level))
+scanners.add(sstable.getDirectScanner(range));
+}
+else
+{
+// Create a LeveledScanner that only opens one sstable at a 
time, in sorted order
+scanners.add(new LeveledScanner(byLevel.get(level), range));
+}
+}
 
 return scanners;
 }
@@ -188,14 +201,12 @@ public class LeveledCompactionStrategy extends 
AbstractCompactionStrategy implem
 private SSTableScanner currentScanner;
 private long positionOffset;
 
-public LeveledScanner(List<SSTableReader> sstables, Range<Token> range)
+public LeveledScanner(Collection<SSTableReader> sstables, Range<Token> 
range)
 {
 this.range = range;
-this.sstables = sstables;
-
-// Sorting a list we got in argument is bad but it's all private 
to this class so let's not bother
-Collections.sort(sstables, SSTable.sstableComparator);
-this.sstableIterator = sstables.iterator();
+this.sstables = new ArrayList<SSTableReader>(sstables);
+Collections.sort(this.sstables, SSTable.sstableComparator);
+this.sstableIterator = this.sstables.iterator();
 
 long length = 0;
 for (SSTableReader sstable : sstables)
@@ -225,8 +236,7 @@ public class LeveledCompactionStrategy extends 
AbstractCompactionStrategy implem
 if (!sstableIterator.hasNext())
 return endOfData();
 
-SSTableReader reader = sstableIterator.next();
-currentScanner = reader.getDirectScanner(range);
+currentScanner = 
sstableIterator.next().getDirectScanner(range);
 return computeNext();
 }
 catch (IOException e)



[jira] [Created] (CASSANDRA-4233) overlapping sstables in leveled compaction strategy

2012-05-09 Thread Jonathan Ellis (JIRA)
Jonathan Ellis created CASSANDRA-4233:
-

 Summary: overlapping sstables in leveled compaction strategy
 Key: CASSANDRA-4233
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4233
 Project: Cassandra
  Issue Type: Bug
Reporter: Jonathan Ellis




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-4233) overlapping sstables in leveled compaction strategy

2012-05-09 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4233:
--

Description: CASSANDRA-4142 introduces test failures that are caused by 
overlapping tables within a level, which Shouldn't Happen.

 overlapping sstables in leveled compaction strategy
 ---

 Key: CASSANDRA-4233
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4233
 Project: Cassandra
  Issue Type: Bug
Reporter: Jonathan Ellis

 CASSANDRA-4142 introduces test failures that are caused by overlapping 
 tables within a level, which Shouldn't Happen.





[3/6] git commit: revert assertions from #4233

2012-05-09 Thread jbellis
revert assertions from #4233


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a2a28a40
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a2a28a40
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a2a28a40

Branch: refs/heads/trunk
Commit: a2a28a4081d244b8e6c2da90b4f63beefdbccfaf
Parents: 08848e7
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed May 9 14:55:59 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed May 9 14:55:59 2012 -0500

--
 .../db/compaction/LeveledCompactionStrategy.java   |   25 ++-
 .../cassandra/db/compaction/LeveledManifest.java   |9 -
 2 files changed, 3 insertions(+), 31 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a2a28a40/src/java/org/apache/cassandra/db/compaction/LeveledCompactionStrategy.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/LeveledCompactionStrategy.java 
b/src/java/org/apache/cassandra/db/compaction/LeveledCompactionStrategy.java
index 5403aa2..858a2bc 100644
--- a/src/java/org/apache/cassandra/db/compaction/LeveledCompactionStrategy.java
+++ b/src/java/org/apache/cassandra/db/compaction/LeveledCompactionStrategy.java
@@ -172,40 +172,21 @@ public class LeveledCompactionStrategy extends 
AbstractCompactionStrategy implem
 {
 Multimap<Integer, SSTableReader> byLevel = ArrayListMultimap.create();
 for (SSTableReader sstable : sstables)
-{
-int level = manifest.levelOf(sstable);
-assert level >= 0;
-byLevel.get(level).add(sstable);
-}
+byLevel.get(manifest.levelOf(sstable)).add(sstable);
 
 List<ICompactionScanner> scanners = new 
ArrayList<ICompactionScanner>(sstables.size());
 for (Integer level : byLevel.keySet())
 {
 if (level == 0)
 {
-// L0 makes no guarantees about overlapping-ness.  Just create 
a direct scanner for each.
+// L0 makes no guarantees about overlapping-ness.  Just create 
a direct scanner for each
 for (SSTableReader sstable : byLevel.get(level))
 scanners.add(sstable.getDirectScanner(range));
 }
 else
 {
 // Create a LeveledScanner that only opens one sstable at a 
time, in sorted order
-ArrayListSSTableReader sstables1 = new 
ArrayListSSTableReader(byLevel.get(level));
-scanners.add(new LeveledScanner(sstables1, range));
-
-Collections.sort(sstables1, SSTable.sstableComparator);
-SSTableReader previous = null;
-for (SSTableReader sstable : sstables1)
-{
-assert previous == null || 
sstable.first.compareTo(previous.last)  0 : String.format(%s = %s in %s and 
%s for %s in %s,
-   
   previous.last,
-   
   sstable.first,
-   
   previous,
-   
   sstable,
-   
   sstable.getColumnFamilyName(),
-   
   manifest.getLevel(level));
-previous = sstable;
-}
+scanners.add(new LeveledScanner(byLevel.get(level), range));
 }
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a2a28a40/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java 
b/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
index c3517e1..69ab492 100644
--- a/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
+++ b/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
@@ -32,7 +32,6 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import org.apache.cassandra.db.ColumnFamilyStore;
-import org.apache.cassandra.db.DecoratedKey;
 import org.apache.cassandra.db.RowPosition;
 import org.apache.cassandra.dht.Range;
 import org.apache.cassandra.dht.Token;
@@ -189,14 +188,6 @@ public class LeveledManifest
 for (SSTableReader ssTableReader : added)
 

[1/6] git commit: merge from 1.1

2012-05-09 Thread jbellis
Updated Branches:
  refs/heads/cassandra-1.1 08848e795 -> a2a28a408
  refs/heads/trunk 4d7e70356 -> 6f65c8ca8


merge from 1.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6f65c8ca
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6f65c8ca
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6f65c8ca

Branch: refs/heads/trunk
Commit: 6f65c8ca80e46c96fca5f7a44852587a4018050c
Parents: 4d7e703 a2a28a4
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed May 9 14:56:54 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed May 9 14:56:54 2012 -0500

--
 .../db/compaction/LeveledCompactionStrategy.java   |   19 +-
 1 files changed, 2 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6f65c8ca/src/java/org/apache/cassandra/db/compaction/LeveledCompactionStrategy.java
--



[5/6] git commit: L0 contents are overlapping (fix for #4142)

2012-05-09 Thread jbellis
L0 contents are overlapping (fix for #4142)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/67ed39fa
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/67ed39fa
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/67ed39fa

Branch: refs/heads/trunk
Commit: 67ed39fa9bf71be4cfc13fccbdd7b76dcb46c062
Parents: d4ec6d2
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed May 9 14:01:18 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed May 9 14:52:17 2012 -0500

--
 .../db/compaction/LeveledCompactionStrategy.java   |   30 ++-
 1 files changed, 20 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/67ed39fa/src/java/org/apache/cassandra/db/compaction/LeveledCompactionStrategy.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/LeveledCompactionStrategy.java 
b/src/java/org/apache/cassandra/db/compaction/LeveledCompactionStrategy.java
index 1bc40fd..858a2bc 100644
--- a/src/java/org/apache/cassandra/db/compaction/LeveledCompactionStrategy.java
+++ b/src/java/org/apache/cassandra/db/compaction/LeveledCompactionStrategy.java
@@ -31,6 +31,7 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import org.apache.cassandra.db.ColumnFamilyStore;
+import org.apache.cassandra.db.DecoratedKey;
 import org.apache.cassandra.db.columniterator.IColumnIterator;
 import org.apache.cassandra.dht.Range;
 import org.apache.cassandra.dht.Token;
@@ -174,8 +175,20 @@ public class LeveledCompactionStrategy extends AbstractCompactionStrategy implem
         byLevel.get(manifest.levelOf(sstable)).add(sstable);
 
         List<ICompactionScanner> scanners = new ArrayList<ICompactionScanner>(sstables.size());
-        for (Integer level : ImmutableSortedSet.copyOf(byLevel.keySet()))
-            scanners.add(new LeveledScanner(new ArrayList<SSTableReader>(byLevel.get(level)), range));
+        for (Integer level : byLevel.keySet())
+        {
+            if (level == 0)
+            {
+                // L0 makes no guarantees about overlapping-ness.  Just create a direct scanner for each
+                for (SSTableReader sstable : byLevel.get(level))
+                    scanners.add(sstable.getDirectScanner(range));
+            }
+            else
+            {
+                // Create a LeveledScanner that only opens one sstable at a time, in sorted order
+                scanners.add(new LeveledScanner(byLevel.get(level), range));
+            }
+        }
 
         return scanners;
     }
@@ -192,14 +205,12 @@ public class LeveledCompactionStrategy extends AbstractCompactionStrategy implem
         private SSTableScanner currentScanner;
         private long positionOffset;
 
-        public LeveledScanner(List<SSTableReader> sstables, Range<Token> range)
+        public LeveledScanner(Collection<SSTableReader> sstables, Range<Token> range)
         {
             this.range = range;
-            this.sstables = sstables;
-
-            // Sorting a list we got in argument is bad but it's all private to this class so let's not bother
-            Collections.sort(sstables, SSTable.sstableComparator);
-            this.sstableIterator = sstables.iterator();
+            this.sstables = new ArrayList<SSTableReader>(sstables);
+            Collections.sort(this.sstables, SSTable.sstableComparator);
+            this.sstableIterator = this.sstables.iterator();
 
             long length = 0;
             for (SSTableReader sstable : sstables)
@@ -229,8 +240,7 @@ public class LeveledCompactionStrategy extends AbstractCompactionStrategy implem
                 if (!sstableIterator.hasNext())
                     return endOfData();
 
-                SSTableReader reader = sstableIterator.next();
-                currentScanner = reader.getDirectScanner(range);
+                currentScanner = sstableIterator.next().getDirectScanner(range);
                 return computeNext();
             }
             catch (IOException e)
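
The per-level policy in the commit above (one direct scanner per L0 sstable, since L0 makes no overlap guarantee; a single serialized LeveledScanner for each higher, sorted level) can be sketched in isolation. The class and method names below (`LevelScannerSketch`, `scannerCountFor`) are illustrative only, not Cassandra's API:

```java
// Illustrative sketch of the per-level scanner policy from the commit above:
// L0 sstables may overlap, so each gets its own scanner; L1+ sstables are
// non-overlapping within a level, so one scanner walks them in sorted order.
public class LevelScannerSketch
{
    // Returns how many scanners the policy would open for `count` sstables at `level`.
    public static int scannerCountFor(int level, int count)
    {
        if (count == 0)
            return 0;
        // L0: no overlap guarantee -> one direct scanner per sstable.
        // L1+: sorted, disjoint -> a single scanner opens one sstable at a time.
        return level == 0 ? count : 1;
    }

    public static void main(String[] args)
    {
        if (scannerCountFor(0, 3) != 3)
            throw new AssertionError("L0 should open one scanner per sstable");
        if (scannerCountFor(2, 3) != 1)
            throw new AssertionError("higher levels should share one scanner");
        System.out.println("ok");
    }
}
```

This keeps the memory cost of a compaction or repair proportional to the number of levels (plus L0), rather than the total sstable count, which is the point of the CASSANDRA-4142 fix.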



[2/6] git commit: revert assertions from #4233

2012-05-09 Thread jbellis
revert assertions from #4233


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a2a28a40
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a2a28a40
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a2a28a40

Branch: refs/heads/cassandra-1.1
Commit: a2a28a4081d244b8e6c2da90b4f63beefdbccfaf
Parents: 08848e7
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed May 9 14:55:59 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed May 9 14:55:59 2012 -0500

--
 .../db/compaction/LeveledCompactionStrategy.java   |   25 ++-
 .../cassandra/db/compaction/LeveledManifest.java   |9 -
 2 files changed, 3 insertions(+), 31 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a2a28a40/src/java/org/apache/cassandra/db/compaction/LeveledCompactionStrategy.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/LeveledCompactionStrategy.java 
b/src/java/org/apache/cassandra/db/compaction/LeveledCompactionStrategy.java
index 5403aa2..858a2bc 100644
--- a/src/java/org/apache/cassandra/db/compaction/LeveledCompactionStrategy.java
+++ b/src/java/org/apache/cassandra/db/compaction/LeveledCompactionStrategy.java
@@ -172,40 +172,21 @@ public class LeveledCompactionStrategy extends AbstractCompactionStrategy implem
     {
         Multimap<Integer, SSTableReader> byLevel = ArrayListMultimap.create();
         for (SSTableReader sstable : sstables)
-        {
-            int level = manifest.levelOf(sstable);
-            assert level >= 0;
-            byLevel.get(level).add(sstable);
-        }
+            byLevel.get(manifest.levelOf(sstable)).add(sstable);
 
         List<ICompactionScanner> scanners = new ArrayList<ICompactionScanner>(sstables.size());
         for (Integer level : byLevel.keySet())
         {
             if (level == 0)
             {
-                // L0 makes no guarantees about overlapping-ness.  Just create a direct scanner for each.
+                // L0 makes no guarantees about overlapping-ness.  Just create a direct scanner for each
                 for (SSTableReader sstable : byLevel.get(level))
                     scanners.add(sstable.getDirectScanner(range));
             }
             else
             {
                 // Create a LeveledScanner that only opens one sstable at a time, in sorted order
-                ArrayList<SSTableReader> sstables1 = new ArrayList<SSTableReader>(byLevel.get(level));
-                scanners.add(new LeveledScanner(sstables1, range));
-
-                Collections.sort(sstables1, SSTable.sstableComparator);
-                SSTableReader previous = null;
-                for (SSTableReader sstable : sstables1)
-                {
-                    assert previous == null || sstable.first.compareTo(previous.last) > 0 : String.format("%s >= %s in %s and %s for %s in %s",
-                                                                                                          previous.last,
-                                                                                                          sstable.first,
-                                                                                                          previous,
-                                                                                                          sstable,
-                                                                                                          sstable.getColumnFamilyName(),
-                                                                                                          manifest.getLevel(level));
-                    previous = sstable;
-                }
+                scanners.add(new LeveledScanner(byLevel.get(level), range));
             }
         }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a2a28a40/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java 
b/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
index c3517e1..69ab492 100644
--- a/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
+++ b/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
@@ -32,7 +32,6 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import org.apache.cassandra.db.ColumnFamilyStore;
-import org.apache.cassandra.db.DecoratedKey;
 import org.apache.cassandra.db.RowPosition;
 import org.apache.cassandra.dht.Range;
 import org.apache.cassandra.dht.Token;
@@ -189,14 +188,6 @@ public class LeveledManifest
 for (SSTableReader ssTableReader : added)
 

[6/6] git commit: warn when expectedOptions does NOT contain key

2012-05-09 Thread jbellis
warn when expectedOptions does NOT contain key


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d4ec6d2b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d4ec6d2b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d4ec6d2b

Branch: refs/heads/trunk
Commit: d4ec6d2bb758bab37773d36a0e9db8edc029e500
Parents: 861f1f3
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed May 9 13:26:36 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed May 9 14:52:16 2012 -0500

--
 .../locator/AbstractReplicationStrategy.java   |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d4ec6d2b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
--
diff --git 
a/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java 
b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
index 288818c..f925124 100644
--- a/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
+++ b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java
@@ -256,7 +256,7 @@ public abstract class AbstractReplicationStrategy
 {
     for (String key : configOptions.keySet())
     {
-        if (expectedOptions.contains(key))
+        if (!expectedOptions.contains(key))
             logger.warn("Unrecognized strategy option {" + key + "} passed to " + getClass().getSimpleName() + " for keyspace " + table);
     }
 }
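
The one-character fix above inverts the check so the warning fires for keys that are NOT in the expected set. A standalone sketch of the corrected validation, with illustrative names (`StrategyOptionCheck`, `unknownKeys`) rather than Cassandra's actual API:

```java
import java.util.*;

// Sketch of the corrected option validation: collect config keys absent from
// the expected set, which are the ones that should trigger the
// "Unrecognized strategy option" warning.
public class StrategyOptionCheck
{
    public static List<String> unknownKeys(Set<String> expectedOptions, Map<String, String> configOptions)
    {
        List<String> unknown = new ArrayList<String>();
        for (String key : configOptions.keySet())
            if (!expectedOptions.contains(key))   // the fix: warn when NOT expected
                unknown.add(key);
        return unknown;
    }

    public static void main(String[] args)
    {
        Map<String, String> config = new LinkedHashMap<String, String>();
        config.put("replication_factor", "3");
        config.put("replicaton_factor", "3"); // typo: should be flagged
        List<String> unknown = unknownKeys(Collections.singleton("replication_factor"), config);
        if (!unknown.equals(Arrays.asList("replicaton_factor")))
            throw new AssertionError(unknown);
        System.out.println("ok");
    }
}
```

Before the fix, the warning fired for every *valid* option and stayed silent on typos, the opposite of the intent.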



[4/6] git commit: avoid blocking additional writes during flush patch by jbellis; reviewed by slebresnse and tested by brandonwilliams for CASSANDRA-1991

2012-05-09 Thread jbellis
avoid blocking additional writes during flush
patch by jbellis; reviewed by slebresnse and tested by brandonwilliams for 
CASSANDRA-1991


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/08848e79
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/08848e79
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/08848e79

Branch: refs/heads/trunk
Commit: 08848e7956f5fd08525a08498205637b2652f2a7
Parents: 67ed39f
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed May 9 14:51:24 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed May 9 14:52:18 2012 -0500

--
 CHANGES.txt|2 +
 .../org/apache/cassandra/db/ColumnFamilyStore.java |   12 +++---
 src/java/org/apache/cassandra/db/Memtable.java |   12 +++---
 .../apache/cassandra/db/commitlog/CommitLog.java   |   19 +++
 .../db/compaction/LeveledCompactionStrategy.java   |   25 +--
 .../cassandra/db/compaction/LeveledManifest.java   |9 +
 .../org/apache/cassandra/db/CommitLogTest.java |6 ++--
 7 files changed, 53 insertions(+), 32 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/08848e79/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index f17ffd1..9246433 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 1.1.1-dev
+ * avoid blocking additional writes during flush when the commitlog
+   gets behind temporarily (CASSANDRA-1991)
  * enable caching on index CFs based on data CF cache setting (CASSANDRA-4197)
  * warn on invalid replication strategy creation options (CASSANDRA-4046)
  * remove [Freeable]Memory finalizers (CASSANDRA-4222)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/08848e79/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 659be73..9dcf1ef 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -31,6 +31,7 @@ import java.util.regex.Pattern;
 import javax.management.*;
 
 import com.google.common.collect.*;
+import com.google.common.util.concurrent.Futures;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -609,8 +610,7 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
     }
 
     assert getMemtableThreadSafe() == oldMemtable;
-    final ReplayPosition ctx = writeCommitLog ? CommitLog.instance.getContext() : ReplayPosition.NONE;
-    logger.debug("flush position is {}", ctx);
+    final Future<ReplayPosition> ctx = writeCommitLog ? CommitLog.instance.getContext() : Futures.immediateFuture(ReplayPosition.NONE);
 
     // submit the memtable for any indexed sub-cfses, and our own.
     final List<ColumnFamilyStore> icc = new ArrayList<ColumnFamilyStore>();
@@ -642,7 +642,7 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
     // while keeping the wait-for-flush (future.get) out of anything latency-sensitive.
     return postFlushExecutor.submit(new WrappedRunnable()
     {
-        public void runMayThrow() throws InterruptedException, IOException
+        public void runMayThrow() throws InterruptedException, IOException, ExecutionException
         {
             latch.await();
 
@@ -662,7 +662,7 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
             {
                 // if we're not writing to the commit log, we are replaying the log, so marking
                 // the log header with "you can discard anything written before the context" is not valid
-                CommitLog.instance.discardCompletedSegments(metadata.cfId, ctx);
+                CommitLog.instance.discardCompletedSegments(metadata.cfId, ctx.get());
             }
         }
     });
@@ -1710,13 +1710,13 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
     if (ksm.durableWrites)
     {
         CommitLog.instance.forceNewSegment();
-        ReplayPosition position = CommitLog.instance.getContext();
+        Future<ReplayPosition> position = CommitLog.instance.getContext();
         // now flush everyone else.  re-flushing ourselves is not necessary, but harmless
         for (ColumnFamilyStore cfs : ColumnFamilyStore.all())
            cfs.forceFlush();
        waitForActiveFlushes();
        // if everything was clean, flush won't have 
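
The core idea of the CASSANDRA-1991 change in this commit is that `getContext()` no longer blocks the caller (who may hold the flusher lock) waiting on the commitlog executor; it hands back a `Future` whose `get()` is deferred to the post-flush task, off the latency-sensitive path. A minimal standalone sketch of that pattern, with illustrative names (`FutureContextSketch`, `append`), not Cassandra's actual classes:

```java
import java.util.concurrent.*;

// Sketch: instead of synchronously reading the commitlog position (which can
// stall behind queued work), submit the read to the commitlog's own executor
// and return a Future.  The caller stays non-blocking; only the deferred
// consumer pays the wait.
public class FutureContextSketch
{
    private final ExecutorService commitlogExecutor = Executors.newSingleThreadExecutor();
    private volatile long currentPosition = 0;

    public void append() { currentPosition++; } // stand-in for a commitlog write

    // Non-blocking: returns immediately even if the executor queue is backed up.
    public Future<Long> getContext()
    {
        return commitlogExecutor.submit(new Callable<Long>()
        {
            public Long call() { return currentPosition; }
        });
    }

    public void shutdown() { commitlogExecutor.shutdown(); }

    public static void main(String[] args) throws Exception
    {
        FutureContextSketch log = new FutureContextSketch();
        log.append();
        log.append();
        Future<Long> ctx = log.getContext(); // cheap; no waiting under any lock
        // ... flusher lock would be released here ...
        long position = ctx.get();           // deferred wait, off the hot path
        if (position != 2)
            throw new AssertionError(position);
        log.shutdown();
        System.out.println("flush position is " + position);
    }
}
```

This mirrors why the diff changes `ReplayPosition ctx` to `Future<ReplayPosition> ctx` and moves the `ctx.get()` into the post-flush runnable: the context value is not needed inside the lock, only when completed segments are discarded.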

[jira] [Commented] (CASSANDRA-4233) overlapping sstables in leveled compaction strategy

2012-05-09 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271739#comment-13271739
 ] 

Jonathan Ellis commented on CASSANDRA-4233:
---

(Assertions in question are attached.)

 overlapping sstables in leveled compaction strategy
 ---

 Key: CASSANDRA-4233
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4233
 Project: Cassandra
  Issue Type: Bug
Reporter: Jonathan Ellis
 Attachments: 4233-assert.txt


 CASSANDRA-4142 introduces test failures that are caused by overlapping 
 tables within a level, which Shouldn't Happen.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (CASSANDRA-4142) OOM Exception during repair session with LeveledCompactionStrategy

2012-05-09 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4142.
---

Resolution: Fixed

Added a special case for L0 in 67ed39fa9bf71be4cfc13fccbdd7b76dcb46c062.  Still 
getting errors.  I think these are due to a pre-existing bug in LCS.  Opened 
CASSANDRA-4233 to follow up.

 OOM Exception during repair session with LeveledCompactionStrategy
 --

 Key: CASSANDRA-4142
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4142
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.0.0
 Environment: OS: Linux CentOs 6 
 JDK: Java HotSpot(TM) 64-Bit Server VM (build 14.0-b16, mixed mode)
 Node configuration:
 Quad-core
 10 GB RAM
 Xmx set to 2,5 GB (as computed by default).
Reporter: Romain Hardouin
Assignee: Sylvain Lebresne
 Fix For: 1.1.1

 Attachments: 4142-v2.txt, 4142.txt


 We encountered an OOM Exception on 2 nodes during a repair session.
 Our CFs are set up to use LeveledCompactionStrategy and SnappyCompressor.
 These two options used together may be the key to the problem.
 Despite setting XX:+HeapDumpOnOutOfMemoryError, no dump has been 
 generated.
 Nonetheless, a memory analysis on a live node doing a repair reveals a 
 hotspot: an ArrayList of SSTableBoundedScanner which appears to contain as 
 many objects as there are SSTables on disk. 
 This ArrayList consumes 786 MB of the heap space for 5757 objects. Therefore 
 each object is about 140 KB.
 Eclipse Memory Analyzer's dominator tree shows that 99% of a 
 SSTableBoundedScanner object's memory is consumed by a 
 CompressedRandomAccessReader which contains two big byte arrays.
 Cluster information:
 9 nodes
 Each node handles 35 GB (RandomPartitioner)
 This JIRA was created following this discussion:
 http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Why-so-many-SSTables-td7453033.html





[jira] [Updated] (CASSANDRA-4233) overlapping sstables in leveled compaction strategy

2012-05-09 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4233:
--

Attachment: 4233-assert.txt

I see CompactionsTest.testStandardColumnCompactions fail 100% of the time -- 
but only when run as part of the entire CompactionsTest suite; 
testStandardColumnCompactions run alone passes.

About 80% of the time the assertion in LCS fails, but sometimes the one in LM 
fails.  (How the former can fail, without the latter, is a mystery to me.)

Is there an off-by-one bug in IntervalTree?

 overlapping sstables in leveled compaction strategy
 ---

 Key: CASSANDRA-4233
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4233
 Project: Cassandra
  Issue Type: Bug
Reporter: Jonathan Ellis
 Attachments: 4233-assert.txt


 CASSANDRA-4142 introduces test failures that are caused by overlapping 
 tables within a level, which Shouldn't Happen.





[jira] [Commented] (CASSANDRA-4233) overlapping sstables in leveled compaction strategy

2012-05-09 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271742#comment-13271742
 ] 

Brandon Williams commented on CASSANDRA-4233:
-

An easy way to recreate this in a live situation that I just accidentally 
discovered is to set memtable_total_space_in_mb to something small like 3, and 
then run stress with LCS.

 overlapping sstables in leveled compaction strategy
 ---

 Key: CASSANDRA-4233
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4233
 Project: Cassandra
  Issue Type: Bug
Reporter: Jonathan Ellis
 Attachments: 4233-assert.txt


 CASSANDRA-4142 introduces test failures that are caused by overlapping 
 tables within a level, which Shouldn't Happen.





[jira] [Commented] (CASSANDRA-4208) ColumnFamilyOutputFormat should support writing to multiple column families

2012-05-09 Thread Robbie Strickland (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13271747#comment-13271747
 ] 

Robbie Strickland commented on CASSANDRA-4208:
--

Any word on whether this solution is getting the thumbs up?  I personally need 
this functionality and would like to proceed in a manner that will ultimately 
be accepted by the community.

 ColumnFamilyOutputFormat should support writing to multiple column families
 ---

 Key: CASSANDRA-4208
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4208
 Project: Cassandra
  Issue Type: Improvement
  Components: Hadoop
Affects Versions: 1.1.0
Reporter: Robbie Strickland
 Attachments: cassandra-1.1-4208.txt, trunk-4208-v2.txt, trunk-4208.txt


 It is not currently possible to output records to more than one column family 
 in a single reducer.  Considering that writing values to Cassandra often 
 involves multiple column families (i.e. updating your index when you insert a 
 new value), this seems overly restrictive.  I am submitting a patch that 
 moves the specification of column family from the job configuration to the 
 write() call in ColumnFamilyRecordWriter.




