[jira] [Comment Edited] (CASSANDRA-8477) CMS GC can not recycle objects
[ https://issues.apache.org/jira/browse/CASSANDRA-8477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14245237#comment-14245237 ] Philo Yang edited comment on CASSANDRA-8477 at 12/13/14 7:41 AM:
-

The old gen on the node with GC trouble rises very fast; if I disable autocompaction, the rise becomes much slower. I'm uploading the heap dump file.

was (Author: yangzhe1991): I'm uploading heap dump file

> CMS GC can not recycle objects
> --
>
> Key: CASSANDRA-8477
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8477
> Project: Cassandra
> Issue Type: Bug
> Components: Core
> Environment: 2.1.1 or 2.1.2-SNAPSHOT (after CASSANDRA-8459 was resolved)
> Reporter: Philo Yang
> Attachments: cassandra.yaml, histo.txt, jstack.txt, system.log
>
> I have a problem in my cluster: CMS full GC cannot reduce the size of the old gen. Days ago I posted this problem to the mailing list; people thought it would be solved by tuning the GC settings, but that didn't work for me.
> Then I saw a similar bug in CASSANDRA-8447, but [~benedict] thinks it is not related. With the jstack output at https://gist.github.com/yangzhe1991/755ea2a10520be1fe59a, [~benedict] found a bug and resolved it in CASSANDRA-8459. So I built the latest version of the 2.1 branch and ran the SNAPSHOT version on the nodes with GC trouble.
> However, the GC issue is still there, so I think opening a new ticket and posting more information is a good idea. Thanks for helping me.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8477) CMS GC can not recycle objects
[ https://issues.apache.org/jira/browse/CASSANDRA-8477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14245237#comment-14245237 ] Philo Yang commented on CASSANDRA-8477:
---

I'm uploading the heap dump file.
[jira] [Created] (CASSANDRA-8477) CMS GC can not recycle objects
Philo Yang created CASSANDRA-8477:
-

Summary: CMS GC can not recycle objects
Key: CASSANDRA-8477
URL: https://issues.apache.org/jira/browse/CASSANDRA-8477
Project: Cassandra
Issue Type: Bug
Components: Core
Environment: 2.1.1 or 2.1.2-SNAPSHOT (after CASSANDRA-8459 was resolved)
Reporter: Philo Yang
Attachments: cassandra.yaml, histo.txt, jstack.txt, system.log
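The histo.txt and jstack.txt attachments were gathered with external JDK tools; old-gen occupancy can also be polled from inside the JVM through the standard `java.lang.management` API. A generic monitoring sketch (not Cassandra code; pool names vary by collector, e.g. "CMS Old Gen", "G1 Old Gen", "Tenured Gen"):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;
import java.lang.management.MemoryUsage;

public class OldGenWatcher {
    // Returns the used/max ratio of the old-generation heap pool,
    // or -1 if no matching pool (or no defined max) is found.
    static double oldGenOccupancy() {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            String name = pool.getName();
            if (pool.getType() == MemoryType.HEAP
                    && (name.contains("Old Gen") || name.contains("Tenured"))) {
                MemoryUsage u = pool.getUsage();
                return u.getMax() > 0 ? (double) u.getUsed() / u.getMax() : -1;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        // An occupancy that keeps climbing across CMS cycles is the
        // symptom described in this ticket.
        System.out.printf("old gen occupancy: %.1f%%%n", oldGenOccupancy() * 100);
    }
}
```

Polling this in a loop (or via JMX remotely) shows whether full GCs actually reclaim old-gen space over time.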
[jira] [Comment Edited] (CASSANDRA-8476) RE in writeSortedContents or replaceFlushed blocks compaction threads indefinitely.
[ https://issues.apache.org/jira/browse/CASSANDRA-8476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14245210#comment-14245210 ] Pavel Yaskevich edited comment on CASSANDRA-8476 at 12/13/14 5:18 AM:
--

I will give it a shot this weekend; there should be a way to propagate the state of the failed compactions back to the thread listening on the flush future.

was (Author: xedin): I will give it a shoot this weekend there should be a wait to propagate state of the failed compactions back to the thread listening for flush future.

> RE in writeSortedContents or replaceFlushed blocks compaction threads indefinitely.
> ---
>
> Key: CASSANDRA-8476
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8476
> Project: Cassandra
> Issue Type: Bug
> Components: Core
> Reporter: Pavel Yaskevich
> Assignee: Pavel Yaskevich
> Fix For: 2.0.12
>
> Attachments: CASSANDRA-8476.patch
>
> Encountered this problem while generating some test data: incremental backup failed to create hard links to some of the system files (which is done at the end of each compaction).
> Example of the RE stacktrace:
> {noformat}
> 14/12/12 15:47:47 ERROR cassandra.SchemaLoader: Fatal exception in thread Thread[FlushWriter:5,5,main]
> java.lang.RuntimeException: Tried to create duplicate hard link to /cassandra/data/system/IndexInfo/backups/system-IndexInfo-jb-1-Index.db
>         at org.apache.cassandra.io.util.FileUtils.createHardLink(FileUtils.java:75)
>         at org.apache.cassandra.io.sstable.SSTableReader.createLinks(SSTableReader.java:1222)
>         at org.apache.cassandra.db.DataTracker.maybeIncrementallyBackup(DataTracker.java:189)
>         at org.apache.cassandra.db.DataTracker.replaceFlushed(DataTracker.java:166)
>         at org.apache.cassandra.db.compaction.AbstractCompactionStrategy.replaceFlushed(AbstractCompactionStrategy.java:231)
>         at org.apache.cassandra.db.ColumnFamilyStore.replaceFlushed(ColumnFamilyStore.java:1141)
>         at org.apache.cassandra.db.Memtable$FlushRunnable.runWith(Memtable.java:343)
>         at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
>         at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:744)
> {noformat}
> jstack shows that CompactionExecutor threads are now blocked waiting on the flush future, which will actually never decrement the latch.
> {noformat}
> "CompactionExecutor:125" daemon prio=5 tid=0x7fb3a10da800 nid=0x13c43 waiting on condition [0x00012a90]
>    java.lang.Thread.State: WAITING (parking)
>         at sun.misc.Unsafe.park(Native Method)
>         - parking to wait for <0x00071b669088> (a java.util.concurrent.FutureTask)
>         at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
>         at java.util.concurrent.FutureTask.awaitDone(FutureTask.java:425)
>         at java.util.concurrent.FutureTask.get(FutureTask.java:187)
>         at org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:409)
>         at org.apache.cassandra.db.SystemKeyspace.forceBlockingFlush(SystemKeyspace.java:457)
>         at org.apache.cassandra.db.SystemKeyspace.finishCompaction(SystemKeyspace.java:203)
>         at org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:225)
>         at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
>         at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>         at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
>         at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>         at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198)
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:744)
> "CompactionExecutor:124" daemon prio=5 tid=0x7fb35cc09800 nid=0x13a2b waiting on condition [0x00012934f000]
>    java.lang.Thread.State: WAITING (parking)
>         at sun.misc.Unsafe.park(Native Method)
>         - p
[jira] [Commented] (CASSANDRA-8476) RE in writeSortedContents or replaceFlushed blocks compaction threads indefinitely.
[ https://issues.apache.org/jira/browse/CASSANDRA-8476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14245174#comment-14245174 ] Pavel Yaskevich commented on CASSANDRA-8476:

If we don't, and there is a problem in replaceFlushed, it results in (at the least) a freeze of all compaction, so the only way out is to restart the process. We have to figure out a way to avoid truncating the commitlog and still count down that latch when a problem occurs, e.g. by propagating the compaction status or similar; we can't just ignore this.
[jira] [Commented] (CASSANDRA-8476) RE in writeSortedContents or replaceFlushed blocks compaction threads indefinitely.
[ https://issues.apache.org/jira/browse/CASSANDRA-8476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14245166#comment-14245166 ] Yuki Morishita commented on CASSANDRA-8476:
---

See CASSANDRA-7275. We don't want to count down the latch when an error occurs. If we do, then we may truncate the commit log without having written its contents to an SSTable.
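The hang discussed above can be reduced to a few lines: if the flush task throws before reaching `countDown()`, a thread blocked on the latch parks forever. A minimal self-contained demo (hypothetical method names echoing the stack trace, not the actual Memtable code):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class FlushLatchDemo {
    // Stand-in for the flush work: fails before the latch is counted down.
    static void writeSortedContents() {
        throw new RuntimeException("Tried to create duplicate hard link");
    }

    // Runs the failing "flush" on its own thread, then waits on the latch
    // the way a compaction thread would; returns whether the waiter was
    // ever released.
    static boolean simulateFlush(long timeoutMs) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(1);
        Thread flush = new Thread(() -> {
            try {
                writeSortedContents();
                latch.countDown(); // never reached -- this is the bug
            } catch (RuntimeException e) {
                // the executor merely logs the error; the latch stays at 1
            }
        });
        flush.start();
        flush.join();
        // In the real system this await has no timeout, so the
        // CompactionExecutor thread parks indefinitely.
        return latch.await(timeoutMs, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("flush observed: " + simulateFlush(200)); // prints "flush observed: false"
    }
}
```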
[jira] [Updated] (CASSANDRA-8473) Secondary index support for key-value pairs in CQL3 maps
[ https://issues.apache.org/jira/browse/CASSANDRA-8473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Samuel Klock updated CASSANDRA-8473:

Attachment: cassandra-2.1-8473-actual-v1.txt

bq. Also, I noticed that your patch applies to trunk (despite the name). I do feel like it would be better to target 3.0 than 2.1 for this, so I'm going to change the fixVersion to 3.0 unless there are strong objections.

Sorry about that! Bookkeeping error on my part. I'm attaching a version of the patch that should be based against 2.1 (as intended). I haven't yet addressed your (very good) feedback, but I will do so in the next day or two. The logic in the two patch versions is very similar, so most of your feedback should apply to both; I'll update both patches to reflect your observations.

Regarding the fixVersion: the folks in my organization would definitely vote for a 2.1.x target if that's feasible. We have a use case for this functionality that we're planning to deploy to production in the next few months, and we plan to do so using a 2.1.x release. We certainly have the option of applying the 2.1 version of the patch to our internal Cassandra build, but of course we would prefer not to have to.

> Secondary index support for key-value pairs in CQL3 maps
>
> Key: CASSANDRA-8473
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8473
> Project: Cassandra
> Issue Type: Improvement
> Reporter: Samuel Klock
> Assignee: Samuel Klock
> Fix For: 3.0
>
> Attachments: cassandra-2.1-8473-actual-v1.txt, cassandra-2.1-8473.txt
>
> CASSANDRA-4511 and CASSANDRA-6383 made substantial progress on secondary indexes on CQL3 maps, but support for a natural use case is still missing: queries to find rows with map columns containing some key-value pair. For example (from a comment on CASSANDRA-4511):
> {code:sql}
> SELECT * FROM main.users WHERE notify['email'] = true;
> {code}
> Cassandra should add support for this kind of index. One option is to expose a CQL interface like the following:
> * Creating an index:
> {code:sql}
> cqlsh:mykeyspace> CREATE TABLE mytable (key TEXT PRIMARY KEY, value MAP<TEXT, TEXT>);
> cqlsh:mykeyspace> CREATE INDEX ON mytable(ENTRIES(value));
> {code}
> * Querying the index:
> {code:sql}
> cqlsh:mykeyspace> INSERT INTO mytable (key, value) VALUES ('foo', {'a': '1', 'b': '2', 'c': '3'});
> cqlsh:mykeyspace> INSERT INTO mytable (key, value) VALUES ('bar', {'a': '1', 'b': '4'});
> cqlsh:mykeyspace> INSERT INTO mytable (key, value) VALUES ('baz', {'b': '4', 'c': '3'});
> cqlsh:mykeyspace> SELECT * FROM mytable WHERE value['a'] = '1';
>
>  key | value
> -----+---------------------------------
>  bar |            {'a': '1', 'b': '4'}
>  foo | {'a': '1', 'b': '2', 'c': '3'}
>
> (2 rows)
> cqlsh:mykeyspace> SELECT * FROM mytable WHERE value['a'] = '1' AND value['b'] = '2' ALLOW FILTERING;
>
>  key | value
> -----+---------------------------------
>  foo | {'a': '1', 'b': '2', 'c': '3'}
>
> (1 rows)
> cqlsh:mykeyspace> SELECT * FROM mytable WHERE value['b'] = '2' ALLOW FILTERING;
>
>  key | value
> -----+---------------------------------
>  foo | {'a': '1', 'b': '2', 'c': '3'}
>
> (1 rows)
> cqlsh:mykeyspace> SELECT * FROM mytable WHERE value['b'] = '4';
>
>  key | value
> -----+-----------------------
>  bar | {'a': '1', 'b': '4'}
>  baz | {'b': '4', 'c': '3'}
>
> (2 rows)
> {code}
> A patch against the Cassandra-2.1 branch that implements this interface will be attached to this issue shortly.
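Conceptually, an ENTRIES index like the one proposed above is an inverted map from (map-key, map-value) pairs to the rows whose map column contains that exact pair. A minimal in-memory sketch of that idea (hypothetical names; this is not Cassandra's secondary-index implementation):

```java
import java.util.AbstractMap.SimpleImmutableEntry;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class EntriesIndexSketch {
    // (map key, map value) pair -> partition keys containing that pair.
    private final Map<Map.Entry<String, String>, Set<String>> index = new HashMap<>();

    // Index every entry of the row's map column.
    void insert(String rowKey, Map<String, String> value) {
        for (Map.Entry<String, String> e : value.entrySet()) {
            index.computeIfAbsent(new SimpleImmutableEntry<>(e.getKey(), e.getValue()),
                                  k -> new HashSet<>())
                 .add(rowKey);
        }
    }

    // Rows matching: WHERE value[mapKey] = mapValue
    Set<String> query(String mapKey, String mapValue) {
        return index.getOrDefault(new SimpleImmutableEntry<>(mapKey, mapValue), Set.of());
    }

    public static void main(String[] args) {
        EntriesIndexSketch idx = new EntriesIndexSketch();
        idx.insert("foo", Map.of("a", "1", "b", "2", "c", "3"));
        idx.insert("bar", Map.of("a", "1", "b", "4"));
        idx.insert("baz", Map.of("b", "4", "c", "3"));
        System.out.println(idx.query("a", "1")); // foo and bar, in some order
        System.out.println(idx.query("b", "4")); // bar and baz, in some order
    }
}
```

The multi-predicate queries in the examples (with ALLOW FILTERING) would correspond to intersecting the result sets for each pair.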
[jira] [Commented] (CASSANDRA-8476) RE in writeSortedContents or replaceFlushed blocks compaction threads indefinitely.
[ https://issues.apache.org/jira/browse/CASSANDRA-8476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14245060#comment-14245060 ] Jason Brown commented on CASSANDRA-8476:

+1
[jira] [Updated] (CASSANDRA-8476) RE in writeSortedContents or replaceFlushed blocks compaction threads indefinitely.
[ https://issues.apache.org/jira/browse/CASSANDRA-8476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Yaskevich updated CASSANDRA-8476: --- Attachment: CASSANDRA-8476.patch latch.countDown() should be incapsulated into finally block in FlushRunnable.runMayThrow() > RE in writeSortedContents or replaceFlushed blocks compaction threads > indefinitely. > --- > > Key: CASSANDRA-8476 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8476 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Pavel Yaskevich >Assignee: Pavel Yaskevich > Fix For: 2.0.12 > > Attachments: CASSANDRA-8476.patch > > > Encountered this problem while generating some test data, incremental backup > failed to create hard-links to some of the of the system files (which is done > at the end of each compaction): > Example of the RE stacktrace: > {noformat} > 14/12/12 15:47:47 ERROR cassandra.SchemaLoader: Fatal exception in thread > Thread[FlushWriter:5,5,main] > java.lang.RuntimeException: Tried to create duplicate hard link to > /cassandra/data/system/IndexInfo/backups/system-IndexInfo-jb-1-Index.db > at > org.apache.cassandra.io.util.FileUtils.createHardLink(FileUtils.java:75) > at > org.apache.cassandra.io.sstable.SSTableReader.createLinks(SSTableReader.java:1222) > at > org.apache.cassandra.db.DataTracker.maybeIncrementallyBackup(DataTracker.java:189) > at > org.apache.cassandra.db.DataTracker.replaceFlushed(DataTracker.java:166) > at > org.apache.cassandra.db.compaction.AbstractCompactionStrategy.replaceFlushed(AbstractCompactionStrategy.java:231) > at > org.apache.cassandra.db.ColumnFamilyStore.replaceFlushed(ColumnFamilyStore.java:1141) > at > org.apache.cassandra.db.Memtable$FlushRunnable.runWith(Memtable.java:343) > at > org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48) > at > org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) > at > 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:744) > 14 > {noformat} > jstack shows that CompactionExecutor threads are now blocked waiting on the > flush future which will actually never decrement a latch. > {noformat} > "CompactionExecutor:125" daemon prio=5 tid=0x7fb3a10da800 nid=0x13c43 > waiting on condition [0x00012a90] >java.lang.Thread.State: WAITING (parking) > at sun.misc.Unsafe.park(Native Method) > - parking to wait for <0x00071b669088> (a > java.util.concurrent.FutureTask) > at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) > at java.util.concurrent.FutureTask.awaitDone(FutureTask.java:425) > at java.util.concurrent.FutureTask.get(FutureTask.java:187) > at > org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:409) > at > org.apache.cassandra.db.SystemKeyspace.forceBlockingFlush(SystemKeyspace.java:457) > at > org.apache.cassandra.db.SystemKeyspace.finishCompaction(SystemKeyspace.java:203) > at > org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:225) > at > org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48) > at > org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) > at > org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60) > at > org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59) > at > org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:744) > "CompactionExecutor:124" daemon prio=5 tid=0x7fb35cc09800 nid=0x13a2b > waiting on condition [0x00012934f000] >java.lang.Thread.State: WAITING (parking) > at sun.misc.Unsafe.park(Native Method) > - parking to wait for <0x0007ce4bf918> (a > java.util.concurrent.FutureTask) > at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) > at java.util.concurrent.FutureTask.awaitDone(FutureTask.java:425) > at java.util.concurrent.FutureTask.get(FutureTask.java:187) >
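The attached patch's idea (move latch.countDown() into a finally block so an exception in the flush cannot strand the waiting compaction threads) can be sketched like this. This is an illustrative reconstruction, not the actual Cassandra source; the class and field names here are hypothetical:

```java
import java.util.concurrent.CountDownLatch;

// Sketch of the fix: countDown() in finally, so a RuntimeException thrown
// during the flush (e.g. the duplicate-hard-link RE above) can no longer
// leave threads blocked forever on the flush future.
class FlushRunnableSketch implements Runnable {
    final CountDownLatch latch = new CountDownLatch(1);

    void runMayThrow() {
        // Stand-in for writeSortedContents/replaceFlushed failing.
        throw new RuntimeException("simulated flush failure");
    }

    @Override
    public void run() {
        try {
            runMayThrow();
        } catch (RuntimeException e) {
            // In the real code this would be logged/propagated; the key point
            // is that it no longer skips the countDown below.
        } finally {
            latch.countDown(); // always released, even on failure
        }
    }
}
```

With the countDown outside the finally, the simulated failure would leave latch.getCount() at 1 and any waiter parked indefinitely, which matches the jstack output above.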
[jira] [Created] (CASSANDRA-8476) RE in writeSortedContents or replaceFlushed blocks compaction threads indefinitely.
Pavel Yaskevich created CASSANDRA-8476: -- Summary: RE in writeSortedContents or replaceFlushed blocks compaction threads indefinitely. Key: CASSANDRA-8476 URL: https://issues.apache.org/jira/browse/CASSANDRA-8476 Project: Cassandra Issue Type: Bug Components: Core Reporter: Pavel Yaskevich Assignee: Pavel Yaskevich Fix For: 2.0.12 Encountered this problem while generating some test data, incremental backup failed to create hard-links to some of the of the system files (which is done at the end of each compaction): Example of the RE stacktrace: {noformat} 14/12/12 15:47:47 ERROR cassandra.SchemaLoader: Fatal exception in thread Thread[FlushWriter:5,5,main] java.lang.RuntimeException: Tried to create duplicate hard link to /cassandra/data/system/IndexInfo/backups/system-IndexInfo-jb-1-Index.db at org.apache.cassandra.io.util.FileUtils.createHardLink(FileUtils.java:75) at org.apache.cassandra.io.sstable.SSTableReader.createLinks(SSTableReader.java:1222) at org.apache.cassandra.db.DataTracker.maybeIncrementallyBackup(DataTracker.java:189) at org.apache.cassandra.db.DataTracker.replaceFlushed(DataTracker.java:166) at org.apache.cassandra.db.compaction.AbstractCompactionStrategy.replaceFlushed(AbstractCompactionStrategy.java:231) at org.apache.cassandra.db.ColumnFamilyStore.replaceFlushed(ColumnFamilyStore.java:1141) at org.apache.cassandra.db.Memtable$FlushRunnable.runWith(Memtable.java:343) at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48) at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:744) 14 {noformat} jstack shows that CompactionExecutor threads are now blocked waiting on the flush future which will actually never decrement a latch. 
{noformat} "CompactionExecutor:125" daemon prio=5 tid=0x7fb3a10da800 nid=0x13c43 waiting on condition [0x00012a90] java.lang.Thread.State: WAITING (parking) at sun.misc.Unsafe.park(Native Method) - parking to wait for <0x00071b669088> (a java.util.concurrent.FutureTask) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) at java.util.concurrent.FutureTask.awaitDone(FutureTask.java:425) at java.util.concurrent.FutureTask.get(FutureTask.java:187) at org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:409) at org.apache.cassandra.db.SystemKeyspace.forceBlockingFlush(SystemKeyspace.java:457) at org.apache.cassandra.db.SystemKeyspace.finishCompaction(SystemKeyspace.java:203) at org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:225) at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48) at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60) at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59) at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:744) "CompactionExecutor:124" daemon prio=5 tid=0x7fb35cc09800 nid=0x13a2b waiting on condition [0x00012934f000] java.lang.Thread.State: WAITING (parking) at sun.misc.Unsafe.park(Native Method) - parking to wait for <0x0007ce4bf918> (a java.util.concurrent.FutureTask) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) at 
java.util.concurrent.FutureTask.awaitDone(FutureTask.java:425) at java.util.concurrent.FutureTask.get(FutureTask.java:187) at org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:409) at org.apache.cassandra.db.SystemKeyspace.forceBlockingFlush(SystemKeyspace.java:457) at org.apache.cassandra.db.SystemKeyspace.finishCompaction(SystemKeyspace.java:203) at org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:225) at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48) at
[jira] [Commented] (CASSANDRA-8014) NPE in Message.java line 324
[ https://issues.apache.org/jira/browse/CASSANDRA-8014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14245041#comment-14245041 ] Peter Haggerty commented on CASSANDRA-8014: --- We've seen this in a 2.0.9 instance when running "nodetool disablethrift". It throws a half dozen of the "Unexpected throwable", then proceeds to: {code} ERROR [pool-6-thread-2] 2014-12-12 23:43:13,643 CassandraDaemon.java (line 199) Exception in thread Thread[pool-6-thread-2,5,main] java.lang.RuntimeException: java.lang.NullPointerException at com.lmax.disruptor.FatalExceptionHandler.handleEventException(FatalExceptionHandler.java:45) at com.lmax.disruptor.WorkProcessor.run(WorkProcessor.java:126) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.NullPointerException at com.thinkaurelius.thrift.Message.getInputTransport(Message.java:338) at com.thinkaurelius.thrift.Message.invoke(Message.java:308) at com.thinkaurelius.thrift.Message$Invocation.execute(Message.java:90) at com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:638) at com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:632) at com.lmax.disruptor.WorkProcessor.run(WorkProcessor.java:112) ... 3 more {code} The "nodetool disablethrift" appears to hang until killed. > NPE in Message.java line 324 > > > Key: CASSANDRA-8014 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8014 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: Cassandra 2.0.9 >Reporter: Peter Haggerty >Assignee: Pavel Yaskevich > Attachments: NPE_Message.java_line-324.txt > > > We received this when a server was rebooting and attempted to shut Cassandra > down while it was still quite busy. 
While it's normal for us to have a > handful of the RejectedExecution exceptions on a sudden shutdown like this > these NPEs in Message.java are new. > The attached file include the logs from "StorageServiceShutdownHook" to the > "Logging initialized" after the server restarts and Cassandra comes back up. > {code}ERROR [pool-10-thread-2] 2014-09-29 08:33:44,055 Message.java (line > 324) Unexpected throwable while invoking! > java.lang.NullPointerException > at com.thinkaurelius.thrift.util.mem.Buffer.size(Buffer.java:83) > at > com.thinkaurelius.thrift.util.mem.FastMemoryOutputTransport.expand(FastMemoryOutputTransport.java:84) > at > com.thinkaurelius.thrift.util.mem.FastMemoryOutputTransport.write(FastMemoryOutputTransport.java:167) > at > org.apache.thrift.transport.TFramedTransport.flush(TFramedTransport.java:156) > at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:55) > at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) > at com.thinkaurelius.thrift.Message.invoke(Message.java:314) > at > com.thinkaurelius.thrift.Message$Invocation.execute(Message.java:90) > at > com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:638) > at > com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:632) > at com.lmax.disruptor.WorkProcessor.run(WorkProcessor.java:112) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:745) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8438) Nested collections not serialized with respect to native protocol version
[ https://issues.apache.org/jira/browse/CASSANDRA-8438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14245000#comment-14245000 ] Aleksey Yeschenko commented on CASSANDRA-8438: -- We could add an extra section to NEWS.txt. > Nested collections not serialized with respect to native protocol version > - > > Key: CASSANDRA-8438 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8438 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Adam Holmberg >Priority: Minor > > It appears that inner collections are not encoding collection element count > correctly for protocol version <=2 > {code} > from cassandra.cluster import Cluster > s = Cluster(protocol_version=2).connect() > s.execute("CREATE KEYSPACE test WITH replication = {'class': > 'SimpleStrategy', 'replication_factor': '1'}") > s.execute("CREATE TABLE test.test (k int PRIMARY KEY, v map<int, frozen<list<int>>>)") > s.execute("INSERT INTO test.test (k, v ) VALUES ( 1, {1: [2,3,4]})") > print s.execute("SELECT * FROM test.test") > {code} > The map returned is encoded as follows: > 00:01:00:04:00:00:00:01:00:1c:*00:00:00:03*:*00:00:00:04*:00:00:00:02:*00:00:00:04*:00:00:00:03:*00:00:00:04*:00:00:00:04 > It appears that the outer collection encoding is as expected, while the inner > list count and list element sizes are _int_ size instead of _short_. This > does not manifest as a problem in cqlsh because it uses protocol version 3 by > default, which accepts this encoding. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
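The framing difference the report describes can be illustrated with a small sketch: in native protocol v1/v2, collection sizes and element lengths are 2-byte unsigned shorts, while from v3 on they are 4-byte ints. packSize here is a hypothetical helper, not a Cassandra API:

```java
import java.nio.ByteBuffer;

// Hedged sketch of the version-dependent size encoding: protocol <= 2 uses
// [short], protocol >= 3 uses [int]. The bug above is the inner list being
// written with the v3 (int) widths even on a v2 connection.
class CollectionSizeSketch {
    static ByteBuffer packSize(int size, int protocolVersion) {
        if (protocolVersion >= 3)
            return (ByteBuffer) ByteBuffer.allocate(4).putInt(size).flip();
        return (ByteBuffer) ByteBuffer.allocate(2).putShort((short) size).flip();
    }
}
```

In the hex dump above, the outer map correctly uses 2-byte counts (00:01, 00:04), while the starred inner fields (00:00:00:03, 00:00:00:04) are 4-byte, i.e. the v3 form leaking into a v2 response.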
[jira] [Commented] (CASSANDRA-7708) UDF schema change events/results
[ https://issues.apache.org/jira/browse/CASSANDRA-7708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244955#comment-14244955 ] Tyler Hobbs commented on CASSANDRA-7708: A few thoughts on this: The use of {{signature}} as part of the PK in {{schema_functions}} and {{schema_aggregates}} conflicts with the events here, because the drivers can't really compute the signature from the data in the events. (Technically they could, but it's not straightforward, and kind of strange.) Now that we have frozen types, I think the {{signature}} column should be replaced with {{argument_types frozen<list<text>>}} in both tables. This would allow drivers to easily fetch the right row from the schema tables when these events occur. Regardless of that, the return type doesn't help to uniquely identify a function or aggregate, so I think we can omit it from events. Last, it would be nice to make an {{AbstractMigrationListener}} class with no-op default methods so that we can stop repeating those everywhere. > UDF schema change events/results > > > Key: CASSANDRA-7708 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7708 > Project: Cassandra > Issue Type: Sub-task >Reporter: Robert Stupp >Assignee: Robert Stupp > Labels: protocolv4 > Fix For: 3.0 > > Attachments: 7708-1.txt > > > Schema change notifications for UDF might be interesting for clients. > This covers both - the result of {{CREATE}} + {{DROP}} statements and events. > Just adding {{FUNCTION}} as a new target for these events breaks the previous > native protocol contract. > Proposal is to introduce a new target {{FUNCTION}} in native protocol v4. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
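The AbstractMigrationListener suggestion amounts to the standard adapter pattern: an abstract base with no-op defaults so listeners override only the callbacks they care about. A minimal sketch (method names here are illustrative, not the real IMigrationListener interface):

```java
// Hypothetical listener interface with many callbacks; real Cassandra's
// migration listener has more methods than shown here.
interface MigrationListenerSketch {
    void onCreateKeyspace(String ksName);
    void onCreateFunction(String ksName, String functionName);
    void onDropFunction(String ksName, String functionName);
}

// The suggested adapter: no-op defaults everywhere.
abstract class AbstractMigrationListenerSketch implements MigrationListenerSketch {
    public void onCreateKeyspace(String ksName) {}
    public void onCreateFunction(String ksName, String functionName) {}
    public void onDropFunction(String ksName, String functionName) {}
}

// Consumers override only what they need instead of repeating empty bodies.
class FunctionWatcher extends AbstractMigrationListenerSketch {
    int created = 0;
    @Override
    public void onCreateFunction(String ksName, String functionName) { created++; }
}
```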
[jira] [Commented] (CASSANDRA-8418) Queries that require allow filtering are working without it
[ https://issues.apache.org/jira/browse/CASSANDRA-8418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244914#comment-14244914 ] Tyler Hobbs commented on CASSANDRA-8418: bq. Now, regarding queries that should require ALLOW FILTERING and don't, we should probably be careful that adding it is a breaking change (even if it's a bug fix). So my suggestion would be to let things as they are in 2.0, but log a warning message if it's used. Changing it in 2.1 is probably fair game at this point if we properly indicate it in the NEWS file. Regarding logging, that should probably be restricted to only happen once. A warning log on (potentially) every query could get pretty spammy, and not everybody can quickly change their application code in response to a warning log. I also think that it's too late in 2.1 to make a breaking change for relatively little benefit. Requiring the user to add {{ALLOW FILTERING}} won't change the performance of the query or have any positive impact. It's just for educating the user. That's nice, but it isn't worth a breaking change in a bugfix release, IMO. > Queries that require allow filtering are working without it > --- > > Key: CASSANDRA-8418 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8418 > Project: Cassandra > Issue Type: Bug >Reporter: Philip Thompson >Assignee: Benjamin Lerer >Priority: Minor > Fix For: 3.0 > > Attachments: CASSANDRA-8418.txt > > > The trunk dtest {{cql_tests.py:TestCQL.composite_index_with_pk_test}} has > begun failing after the changes to CASSANDRA-7981. > With the schema {code}CREATE TABLE blogs ( > blog_id int, > time1 int, > time2 int, > author text, > content text, > PRIMARY KEY (blog_id, time1, time2){code} > and {code}CREATE INDEX ON blogs(author){code}, then the query > {code}SELECT blog_id, content FROM blogs WHERE time1 > 0 AND > author='foo'{code} now requires ALLOW FILTERING, but did not before the > refactor. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
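The "warn only once" behavior Tyler suggests is a one-line guard in practice. A minimal sketch, assuming a hypothetical WarnOnce helper rather than Cassandra's actual logging code:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch: an AtomicBoolean gate so the ALLOW FILTERING warning fires at
// most once per process instead of (potentially) on every query.
class WarnOnce {
    private final AtomicBoolean warned = new AtomicBoolean(false);
    int emitted = 0; // stand-in for a real logger, so the sketch is checkable

    void maybeWarn(String message) {
        if (warned.compareAndSet(false, true))
            emitted++; // e.g. logger.warn(message)
    }
}
```

compareAndSet makes the guard thread-safe, which matters here since queries arrive on many threads concurrently.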
[jira] [Comment Edited] (CASSANDRA-8336) Quarantine nodes after receiving the gossip shutdown message
[ https://issues.apache.org/jira/browse/CASSANDRA-8336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244880#comment-14244880 ] Brandon Williams edited comment on CASSANDRA-8336 at 12/12/14 10:18 PM: Perhaps one thing we could do is put the node into hibernation before the shutdown message. This way, it will never get marked alive regardless of the heartbeat, even if it propagates later. We might want a new dead state for that though, since I don't want to overload the hibernation state with too many functions since that will complicate knowing what state a node is really in. was (Author: brandon.williams): Perhaps one thing we could do is put the node into hibernation before the shutdown message. This way, it will never get marked alive regardless of the heartbeat, even if it propagates later. We might want a new dead state for that though, since I don't want to overload the hibernation state with too many functions since that will complicate known what state a node is really in. > Quarantine nodes after receiving the gossip shutdown message > > > Key: CASSANDRA-8336 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8336 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Brandon Williams >Assignee: Brandon Williams > Fix For: 2.0.12 > > > In CASSANDRA-3936 we added a gossip shutdown announcement. The problem here > is that this isn't sufficient; you can still get TOEs and have to wait on the > FD to figure things out. This happens due to gossip propagation time and > variance; if node X shuts down and sends the message to Y, but Z has a > greater gossip version than Y for X and has not yet received the message, it > can initiate gossip with Y and thus mark X alive again. 
I propose > quarantining to solve this, however I feel it should be a -D parameter you > have to specify, so as not to destroy current dev and test practices, since > this will mean a node that shuts down will not be able to restart until the > quarantine expires. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8475) Altering Table's tombstone_threshold stalls compaction until restart
[ https://issues.apache.org/jira/browse/CASSANDRA-8475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244887#comment-14244887 ] Rick Branson commented on CASSANDRA-8475: - Workaround for this is to disable & stop all compactions before making the schema change, then re-enable compactions after the schema change is made. > Altering Table's tombstone_threshold stalls compaction until restart > > > Key: CASSANDRA-8475 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8475 > Project: Cassandra > Issue Type: Bug >Reporter: Rick Branson > > Compaction won't move forward on the table until a restart takes place and > the temp table is ignored. My hunch is that running CompactionTasks are > killed and there are still pre-opened temp files ref'd but they get deleted > with the CompactionTask dies? > Exception: > 2014-12-12_22:03:19.84572 ERROR 22:03:19 Exception in thread > Thread[CompactionExecutor:671,1,main] > 2014-12-12_22:03:19.84576 java.lang.RuntimeException: > java.io.FileNotFoundException: > /data/cassandra/data/ks1/DataByUserID_007/ks1-DataByUserID_007-tmplink-ka-21801-Data.db > (No such file or directory) > 2014-12-12_22:03:19.84576 at > org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:52) > ~[apache-cassandra-2.1.2.jar:2.1.2] > 2014-12-12_22:03:19.84577 at > org.apache.cassandra.io.sstable.SSTableReader.openDataReader(SSTableReader.java:1895) > ~[apache-cassandra-2.1.2.jar:2.1.2] > 2014-12-12_22:03:19.84578 at > org.apache.cassandra.io.sstable.SSTableScanner.(SSTableScanner.java:67) > ~[apache-cassandra-2.1.2.jar:2.1.2] > 2014-12-12_22:03:19.84579 at > org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1681) > ~[apache-cassandra-2.1.2.jar:2.1.2] > 2014-12-12_22:03:19.84579 at > org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1693) > ~[apache-cassandra-2.1.2.jar:2.1.2] > 2014-12-12_22:03:19.84580 at > 
org.apache.cassandra.db.compaction.LeveledCompactionStrategy.getScanners(LeveledCompactionStrategy.java:181) > ~[apache-cassandra-2.1.2.jar:2.1.2] > 2014-12-12_22:03:19.84581 at > org.apache.cassandra.db.compaction.WrappingCompactionStrategy.getScanners(WrappingCompactionStrategy.java:320) > ~[apache-cassandra-2.1.2.jar:2.1.2] > 2014-12-12_22:03:19.84581 at > org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:340) > ~[apache-cassandra-2.1.2.jar:2.1.2] > 2014-12-12_22:03:19.84582 at > org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:151) > ~[apache-cassandra-2.1.2.jar:2.1.2] > 2014-12-12_22:03:19.84583 at > org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48) > ~[apache-cassandra-2.1.2.jar:2.1.2] > 2014-12-12_22:03:19.84583 at > org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) > ~[apache-cassandra-2.1.2.jar:2.1.2] > 2014-12-12_22:03:19.84583 at > org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:75) > ~[apache-cassandra-2.1.2.jar:2.1.2] > 2014-12-12_22:03:19.84584 at > org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59) > ~[apache-cassandra-2.1.2.jar:2.1.2] > 2014-12-12_22:03:19.84584 at > org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:232) > ~[apache-cassandra-2.1.2.jar:2.1.2] > 2014-12-12_22:03:19.84585 at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) > ~[na:1.7.0_65] > 2014-12-12_22:03:19.84586 at > java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_65] > 2014-12-12_22:03:19.84586 at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > ~[na:1.7.0_65] > 2014-12-12_22:03:19.84587 at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > [na:1.7.0_65] > 2014-12-12_22:03:19.84587 at 
java.lang.Thread.run(Thread.java:745) > [na:1.7.0_65] > 2014-12-12_22:03:19.84587 Caused by: java.io.FileNotFoundException: > /data/cassandra/data/ks1/DataByUserID_007/ks1-DataByUserID_007-tmplink-ka-21801-Data.db > (No such file or directory) > 2014-12-12_22:03:19.84588 at java.io.RandomAccessFile.open(Native > Method) ~[na:1.7.0_65] > 2014-12-12_22:03:19.84588 at > java.io.RandomAccessFile.(RandomAccessFile.java:241) ~[na:1.7.0_65] > 2014-12-12_22:03:19.84589 at > org.apache.cassandra.io.util.RandomAccessReader.(RandomAccessReader.java:58) > ~[apache-cassandra-2.1.2.jar:2.1.2] > 2014-12-12_22:03:19.84590 at > org.apache.cassandra.io.compress
[jira] [Commented] (CASSANDRA-8336) Quarantine nodes after receiving the gossip shutdown message
[ https://issues.apache.org/jira/browse/CASSANDRA-8336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244880#comment-14244880 ] Brandon Williams commented on CASSANDRA-8336: - Perhaps one thing we could do is put the node into hibernation before the shutdown message. This way, it will never get marked alive regardless of the heartbeat, even if it propagates later. We might want a new dead state for that though, since I don't want to overload the hibernation state with too many functions since that will complicate knowing what state a node is really in. > Quarantine nodes after receiving the gossip shutdown message > > > Key: CASSANDRA-8336 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8336 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Brandon Williams >Assignee: Brandon Williams > Fix For: 2.0.12 > > > In CASSANDRA-3936 we added a gossip shutdown announcement. The problem here > is that this isn't sufficient; you can still get TOEs and have to wait on the > FD to figure things out. This happens due to gossip propagation time and > variance; if node X shuts down and sends the message to Y, but Z has a > greater gossip version than Y for X and has not yet received the message, it > can initiate gossip with Y and thus mark X alive again. I propose > quarantining to solve this, however I feel it should be a -D parameter you > have to specify, so as not to destroy current dev and test practices, since > this will mean a node that shuts down will not be able to restart until the > quarantine expires. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-8475) Altering Table's tombstone_threshold stalls compaction until restart
Rick Branson created CASSANDRA-8475: --- Summary: Altering Table's tombstone_threshold stalls compaction until restart Key: CASSANDRA-8475 URL: https://issues.apache.org/jira/browse/CASSANDRA-8475 Project: Cassandra Issue Type: Bug Reporter: Rick Branson Compaction won't move forward on the table until a restart takes place and the temp table is ignored. My hunch is that running CompactionTasks are killed and there are still pre-opened temp files ref'd but they get deleted when the CompactionTask dies? Exception: 2014-12-12_22:03:19.84572 ERROR 22:03:19 Exception in thread Thread[CompactionExecutor:671,1,main] 2014-12-12_22:03:19.84576 java.lang.RuntimeException: java.io.FileNotFoundException: /data/cassandra/data/ks1/DataByUserID_007/ks1-DataByUserID_007-tmplink-ka-21801-Data.db (No such file or directory) 2014-12-12_22:03:19.84576 at org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:52) ~[apache-cassandra-2.1.2.jar:2.1.2] 2014-12-12_22:03:19.84577 at org.apache.cassandra.io.sstable.SSTableReader.openDataReader(SSTableReader.java:1895) ~[apache-cassandra-2.1.2.jar:2.1.2] 2014-12-12_22:03:19.84578 at org.apache.cassandra.io.sstable.SSTableScanner.(SSTableScanner.java:67) ~[apache-cassandra-2.1.2.jar:2.1.2] 2014-12-12_22:03:19.84579 at org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1681) ~[apache-cassandra-2.1.2.jar:2.1.2] 2014-12-12_22:03:19.84579 at org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1693) ~[apache-cassandra-2.1.2.jar:2.1.2] 2014-12-12_22:03:19.84580 at org.apache.cassandra.db.compaction.LeveledCompactionStrategy.getScanners(LeveledCompactionStrategy.java:181) ~[apache-cassandra-2.1.2.jar:2.1.2] 2014-12-12_22:03:19.84581 at org.apache.cassandra.db.compaction.WrappingCompactionStrategy.getScanners(WrappingCompactionStrategy.java:320) ~[apache-cassandra-2.1.2.jar:2.1.2] 2014-12-12_22:03:19.84581 at 
org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:340) ~[apache-cassandra-2.1.2.jar:2.1.2] 2014-12-12_22:03:19.84582 at org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:151) ~[apache-cassandra-2.1.2.jar:2.1.2] 2014-12-12_22:03:19.84583 at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48) ~[apache-cassandra-2.1.2.jar:2.1.2] 2014-12-12_22:03:19.84583 at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[apache-cassandra-2.1.2.jar:2.1.2] 2014-12-12_22:03:19.84583 at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:75) ~[apache-cassandra-2.1.2.jar:2.1.2] 2014-12-12_22:03:19.84584 at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59) ~[apache-cassandra-2.1.2.jar:2.1.2] 2014-12-12_22:03:19.84584 at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:232) ~[apache-cassandra-2.1.2.jar:2.1.2] 2014-12-12_22:03:19.84585 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) ~[na:1.7.0_65] 2014-12-12_22:03:19.84586 at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_65] 2014-12-12_22:03:19.84586 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) ~[na:1.7.0_65] 2014-12-12_22:03:19.84587 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_65] 2014-12-12_22:03:19.84587 at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65] 2014-12-12_22:03:19.84587 Caused by: java.io.FileNotFoundException: /data/cassandra/data/ks1/DataByUserID_007/ks1-DataByUserID_007-tmplink-ka-21801-Data.db (No such file or directory) 2014-12-12_22:03:19.84588 at java.io.RandomAccessFile.open(Native Method) ~[na:1.7.0_65] 2014-12-12_22:03:19.84588 at java.io.RandomAccessFile.(RandomAccessFile.java:241) ~[na:1.7.0_65] 
2014-12-12_22:03:19.84589 at org.apache.cassandra.io.util.RandomAccessReader.(RandomAccessReader.java:58) ~[apache-cassandra-2.1.2.jar:2.1.2] 2014-12-12_22:03:19.84590 at org.apache.cassandra.io.compress.CompressedRandomAccessReader.(CompressedRandomAccessReader.java:77) ~[apache-cassandra-2.1.2.jar:2.1.2] 2014-12-12_22:03:19.84590 at org.apache.cassandra.io.compress.CompressedThrottledReader.(CompressedThrottledReader.java:34) ~[apache-cassandra-2.1.2.jar:2.1.2] 2014-12-12_22:03:19.84591 at org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:48) ~[apache-cassandra-2.1.2.jar:2.1.2] 2014-12-12_22:03:19.84591 ... 18 common frames omitted -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-7124) Use JMX Notifications to Indicate Success/Failure of Long-Running Operations
[ https://issues.apache.org/jira/browse/CASSANDRA-7124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244869#comment-14244869 ] Rajanarayanan Thottuvaikkatumana commented on CASSANDRA-7124: - [~yukim], Please find the commit details for the changes in the decommission job notification https://github.com/rnamboodiri/cassandra/commit/cdfe80e78195a382fa185b7fc4ad89846b182292 > Use JMX Notifications to Indicate Success/Failure of Long-Running Operations > > > Key: CASSANDRA-7124 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7124 > Project: Cassandra > Issue Type: Improvement > Components: Tools >Reporter: Tyler Hobbs >Assignee: Rajanarayanan Thottuvaikkatumana >Priority: Minor > Labels: lhf > Fix For: 3.0 > > Attachments: 7124-wip.txt, cassandra-trunk-compact-7124.txt, > cassandra-trunk-decommission-7124.txt > > > If {{nodetool cleanup}} or some other long-running operation takes too long > to complete, you'll see an error like the one in CASSANDRA-2126, so you can't > tell if the operation completed successfully or not. CASSANDRA-4767 fixed > this for repairs with JMX notifications. We should do something similar for > nodetool cleanup, compact, decommission, move, relocate, etc. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
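The JMX-notification pattern the ticket asks for (a long-running operation emitting success/failure notifications that a client such as nodetool listens for) can be sketched with the standard javax.management APIs. Names and notification types here are illustrative, not the ones used in Cassandra:

```java
import javax.management.Notification;
import javax.management.NotificationBroadcasterSupport;

// Sketch: an MBean-side broadcaster that emits a notification when a
// long-running operation (cleanup, compact, decommission, ...) finishes,
// so clients no longer have to infer success from a silent timeout.
class OperationNotifier extends NotificationBroadcasterSupport {
    private long seq = 0;

    void operationFinished(String op, boolean success) {
        Notification n = new Notification(
            success ? "operation.success" : "operation.failure", // hypothetical types
            this, seq++, op + (success ? " completed" : " failed"));
        sendNotification(n);
    }
}
```

A client registers a NotificationListener on the broadcaster (or, over a remote connection, on the MBean's ObjectName) and branches on the notification type instead of waiting blindly on the JMX call.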
[jira] [Updated] (CASSANDRA-8452) Add missing systems to FBUtilities.isUnix, add FBUtilities.isWindows
[ https://issues.apache.org/jira/browse/CASSANDRA-8452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Blake Eggleston updated CASSANDRA-8452: --- Attachment: CASSANDRA-8452-v4.patch Ah, I didn't realize that about maybeReopenEarly. In that case, no reason to keep isPosix around. Anyway, I think it's a reasonable convention to assume we're on linux, and check for specific deviations where appropriate, since that's the primary environment we target. Attached v4 > Add missing systems to FBUtilities.isUnix, add FBUtilities.isWindows > > > Key: CASSANDRA-8452 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8452 > Project: Cassandra > Issue Type: Bug >Reporter: Blake Eggleston >Assignee: Blake Eggleston >Priority: Minor > Fix For: 2.1.3 > > Attachments: CASSANDRA-8452-v2.patch, CASSANDRA-8452-v3.patch, > CASSANDRA-8452-v4.patch, CASSANDRA-8452.patch > > > The isUnix method leaves out a few unix systems, which, after the changes in > CASSANDRA-8136, causes some unexpected behavior during shutdown. It would > also be clearer if FBUtilities had an isWindows method for branching into > Windows specific logic. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
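The convention Blake describes (assume a Unix-like platform, branch on an explicit isWindows check for deviations) usually comes down to inspecting the os.name system property. A sketch under that assumption; FBUtilities' actual implementation may differ:

```java
// Illustrative os.name checks; the string is passed in so the logic is
// testable without depending on the host JVM.
class OsCheckSketch {
    static boolean isWindows(String osName) {
        return osName.toLowerCase().startsWith("windows");
    }

    // The broader matching the ticket asks for: cover the unix-like systems
    // (Linux, AIX, Mac OS X, SunOS, the BSDs) that a bare "linux" check misses.
    static boolean isUnix(String osName) {
        String os = osName.toLowerCase();
        return os.contains("nix") || os.contains("nux") || os.contains("aix")
            || os.contains("mac") || os.contains("sunos") || os.contains("bsd");
    }
}
```

In production code these would be evaluated once against System.getProperty("os.name") and cached, since the property never changes at runtime.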
[jira] [Commented] (CASSANDRA-8336) Quarantine nodes after receiving the gossip shutdown message
[ https://issues.apache.org/jira/browse/CASSANDRA-8336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244857#comment-14244857 ] Brandon Williams commented on CASSANDRA-8336: - So the real wrinkle here is that when we send the shutdown message, our latest heartbeat hasn't propagated fully, so even if the nodes quarantine, a newer heartbeat will eventually be seen, marking the killed node as back up. I'm not sure what we can do about that, short of changing the format for the shutdown message to include the heartbeat and then filtering based on that. Unfortunately that puts us in 3.1 territory, where we'll have to make 3.0 a prerequisite. > Quarantine nodes after receiving the gossip shutdown message > > > Key: CASSANDRA-8336 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8336 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Brandon Williams >Assignee: Brandon Williams > Fix For: 2.0.12 > > > In CASSANDRA-3936 we added a gossip shutdown announcement. The problem here > is that this isn't sufficient; you can still get TOEs and have to wait on the > FD to figure things out. This happens due to gossip propagation time and > variance; if node X shuts down and sends the message to Y, but Z has a > greater gossip version than Y for X and has not yet received the message, it > can initiate gossip with Y and thus mark X alive again. I propose > quarantining to solve this, however I feel it should be a -D parameter you > have to specify, so as not to destroy current dev and test practices, since > this will mean a node that shuts down will not be able to restart until the > quarantine expires. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
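The heartbeat-filtering idea Brandon describes could look roughly like this. The sketch below is illustrative only (simplified integer heartbeats and string endpoints; Cassandra's real gossip state uses generation/version pairs and EndpointState):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of filtering stale heartbeats after a shutdown
// message that carries the sender's final heartbeat.
public class ShutdownFilter {
    // Final heartbeat announced in each endpoint's shutdown message.
    private final Map<String, Integer> shutdownHeartbeats = new HashMap<>();
    // Latest accepted heartbeat per endpoint.
    private final Map<String, Integer> heartbeats = new HashMap<>();

    void onShutdownMessage(String endpoint, int heartbeatAtShutdown) {
        shutdownHeartbeats.put(endpoint, heartbeatAtShutdown);
    }

    // Returns true only if this gossiped heartbeat should mark the
    // endpoint alive; anything at or below the shutdown heartbeat is
    // stale propagation, not a restart.
    boolean acceptHeartbeat(String endpoint, int heartbeat) {
        Integer shutdownHb = shutdownHeartbeats.get(endpoint);
        if (shutdownHb != null && heartbeat <= shutdownHb)
            return false;
        heartbeats.merge(endpoint, heartbeat, Math::max);
        return true;
    }

    public static void main(String[] args) {
        ShutdownFilter f = new ShutdownFilter();
        f.onShutdownMessage("nodeX", 100);
        // Heartbeats that were in flight before shutdown are ignored...
        System.out.println(f.acceptHeartbeat("nodeX", 99));   // false
        System.out.println(f.acceptHeartbeat("nodeX", 100));  // false
        // ...but a genuinely newer one (node restarted) is accepted.
        System.out.println(f.acceptHeartbeat("nodeX", 101));  // true
    }
}
```

This is exactly the message-format change the comment notes would be version-gated: the shutdown message has to carry the heartbeat for the filter to have anything to compare against.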
[jira] [Commented] (CASSANDRA-8473) Secondary index support for key-value pairs in CQL3 maps
[ https://issues.apache.org/jira/browse/CASSANDRA-8473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244849#comment-14244849 ] Tyler Hobbs commented on CASSANDRA-8473: Also, I noticed that your patch applies to trunk (despite the name). I do feel like it would be better to target 3.0 than 2.1 for this, so I'm going to change the fixVersion to 3.0 unless there are strong objections. > Secondary index support for key-value pairs in CQL3 maps > > > Key: CASSANDRA-8473 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8473 > Project: Cassandra > Issue Type: Improvement >Reporter: Samuel Klock >Assignee: Samuel Klock > Fix For: 3.0 > > Attachments: cassandra-2.1-8473.txt > > > CASSANDRA-4511 and CASSANDRA-6383 made substantial progress on secondary > indexes on CQL3 maps, but support for a natural use case is still missing: > queries to find rows with map columns containing some key-value pair. For > example (from a comment on CASSANDRA-4511): > {code:sql} > SELECT * FROM main.users WHERE notify['email'] = true; > {code} > Cassandra should add support for this kind of index. 
One option is to expose > a CQL interface like the following: > * Creating an index: > {code:sql} > cqlsh:mykeyspace> CREATE TABLE mytable (key TEXT PRIMARY KEY, value MAP<TEXT, TEXT>); > cqlsh:mykeyspace> CREATE INDEX ON mytable(ENTRIES(value)); > {code} > * Querying the index: > {code:sql} > cqlsh:mykeyspace> INSERT INTO mytable (key, value) VALUES ('foo', {'a': '1', > 'b': '2', 'c': '3'}); > cqlsh:mykeyspace> INSERT INTO mytable (key, value) VALUES ('bar', {'a': '1', > 'b': '4'}); > cqlsh:mykeyspace> INSERT INTO mytable (key, value) VALUES ('baz', {'b': '4', > 'c': '3'}); > cqlsh:mykeyspace> SELECT * FROM mytable WHERE value['a'] = '1'; > key | value > -+ > bar | {'a': '1', 'b': '4'} > foo | {'a': '1', 'b': '2', 'c': '3'} > (2 rows) > cqlsh:mykeyspace> SELECT * FROM mytable WHERE value['a'] = '1' AND value['b'] > = '2' ALLOW FILTERING; > key | value > -+ > foo | {'a': '1', 'b': '2', 'c': '3'} > (1 rows) > cqlsh:mykeyspace> SELECT * FROM mytable WHERE value['b'] = '2' ALLOW > FILTERING; > key | value > -+ > foo | {'a': '1', 'b': '2', 'c': '3'} > (1 rows) > cqlsh:mykeyspace> SELECT * FROM mytable WHERE value['b'] = '4'; > key | value > -+-- > bar | {'a': '1', 'b': '4'} > baz | {'b': '4', 'c': '3'} > (2 rows) > {code} > A patch against the Cassandra-2.1 branch that implements this interface will > be attached to this issue shortly. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-8473) Secondary index support for key-value pairs in CQL3 maps
[ https://issues.apache.org/jira/browse/CASSANDRA-8473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tyler Hobbs updated CASSANDRA-8473: --- Fix Version/s: 3.0 > Secondary index support for key-value pairs in CQL3 maps > > > Key: CASSANDRA-8473 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8473 > Project: Cassandra > Issue Type: Improvement >Reporter: Samuel Klock >Assignee: Samuel Klock > Fix For: 3.0 > > Attachments: cassandra-2.1-8473.txt > > > CASSANDRA-4511 and CASSANDRA-6383 made substantial progress on secondary > indexes on CQL3 maps, but support for a natural use case is still missing: > queries to find rows with map columns containing some key-value pair. For > example (from a comment on CASSANDRA-4511): > {code:sql} > SELECT * FROM main.users WHERE notify['email'] = true; > {code} > Cassandra should add support for this kind of index. One option is to expose > a CQL interface like the following: > * Creating an index: > {code:sql} > cqlsh:mykeyspace> CREATE TABLE mytable (key TEXT PRIMARY KEY, value MAP<TEXT, TEXT>); > cqlsh:mykeyspace> CREATE INDEX ON mytable(ENTRIES(value)); > {code} > * Querying the index: > {code:sql} > cqlsh:mykeyspace> INSERT INTO mytable (key, value) VALUES ('foo', {'a': '1', > 'b': '2', 'c': '3'}); > cqlsh:mykeyspace> INSERT INTO mytable (key, value) VALUES ('bar', {'a': '1', > 'b': '4'}); > cqlsh:mykeyspace> INSERT INTO mytable (key, value) VALUES ('baz', {'b': '4', > 'c': '3'}); > cqlsh:mykeyspace> SELECT * FROM mytable WHERE value['a'] = '1'; > key | value > -+ > bar | {'a': '1', 'b': '4'} > foo | {'a': '1', 'b': '2', 'c': '3'} > (2 rows) > cqlsh:mykeyspace> SELECT * FROM mytable WHERE value['a'] = '1' AND value['b'] > = '2' ALLOW FILTERING; > key | value > -+ > foo | {'a': '1', 'b': '2', 'c': '3'} > (1 rows) > cqlsh:mykeyspace> SELECT * FROM mytable WHERE value['b'] = '2' ALLOW > FILTERING; > key | value > -+ > foo | {'a': '1', 'b': '2', 'c': '3'} > (1 rows) > cqlsh:mykeyspace> SELECT * FROM mytable
WHERE value['b'] = '4'; > key | value > -+-- > bar | {'a': '1', 'b': '4'} > baz | {'b': '4', 'c': '3'} > (2 rows) > {code} > A patch against the Cassandra-2.1 branch that implements this interface will > be attached to this issue shortly. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8473) Secondary index support for key-value pairs in CQL3 maps
[ https://issues.apache.org/jira/browse/CASSANDRA-8473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244846#comment-14244846 ] Tyler Hobbs commented on CASSANDRA-8473: Awesome! Thanks for the patch. This is quite good so far. Here are my review comments: * SingleColumnRelation.toReceivers(): ** For lists, this error message may be confusing: {{checkTrue(receiver.type instanceof MapType, "Column \"%s\" cannot be used as a map", receiver.name);}}. For example, if you do {{WHERE mylist\[0\] = 'foo'}}, you're not really trying to use it as a map. You may want to handle lists specially. ** The {{checkFalse()}} statement below this is starting to get confusing, I would break it up ** Use curly braces on the "if" clause when they're used on the "else" ** I'm not sure that frozen maps are handled correctly here (e.g. {{WHERE myfrozenmap\['foo'\] = 'bar'}}). May want to double-check that. * SingleColumnRestriction.Contains: ** Update class-level comment (should be a javadoc) to include map entry restrictions ** entries(): no need for curly braces with a single-line for-loop * CreateIndexStatement: ** switch on target.type could be clearer if re-organized; also, the error message about 'keys' is slightly misleading for 'entries' indexes * IndexTarget.TargetType.fromIndexOptions(): ** Should this return FULL if index_values isn't present? Also, no curlies needed for single-line clauses. * ExtendedFilter: ** {{else if (expr.isContains())}} will always be false (due to the {{isContains()}} check above). 
* CompositesIndex: ** No nested ternaries, please * CompositesIndexOnCollectionKeyAndValue: ** makeIndexColumnPrefix(): need to use {{min(count - 1, cellName.size())}} for loop end (see CASSANDRA-8053 for why) ** by extending CompositesIndexOnCollectionKey, you could eliminate about half of the methods ** isStale(): instead of building a new composite to compare with the index entry key, why not compare the cell value with the second item in the index entry composite? This method could also use a comment or two * SecondaryIndexOnMapEntriesTest: ** Unused and commented-out imports ** Add a test for {{map\[element\] = null}} being invalid. (While we could support this when filtering, we couldn't support it with a 2ary index lookup.) Thanks again! > Secondary index support for key-value pairs in CQL3 maps > > > Key: CASSANDRA-8473 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8473 > Project: Cassandra > Issue Type: Improvement >Reporter: Samuel Klock >Assignee: Samuel Klock > Attachments: cassandra-2.1-8473.txt > > > CASSANDRA-4511 and CASSANDRA-6383 made substantial progress on secondary > indexes on CQL3 maps, but support for a natural use case is still missing: > queries to find rows with map columns containing some key-value pair. For > example (from a comment on CASSANDRA-4511): > {code:sql} > SELECT * FROM main.users WHERE notify['email'] = true; > {code} > Cassandra should add support for this kind of index. 
One option is to expose > a CQL interface like the following: > * Creating an index: > {code:sql} > cqlsh:mykeyspace> CREATE TABLE mytable (key TEXT PRIMARY KEY, value MAP<TEXT, TEXT>); > cqlsh:mykeyspace> CREATE INDEX ON mytable(ENTRIES(value)); > {code} > * Querying the index: > {code:sql} > cqlsh:mykeyspace> INSERT INTO mytable (key, value) VALUES ('foo', {'a': '1', > 'b': '2', 'c': '3'}); > cqlsh:mykeyspace> INSERT INTO mytable (key, value) VALUES ('bar', {'a': '1', > 'b': '4'}); > cqlsh:mykeyspace> INSERT INTO mytable (key, value) VALUES ('baz', {'b': '4', > 'c': '3'}); > cqlsh:mykeyspace> SELECT * FROM mytable WHERE value['a'] = '1'; > key | value > -+ > bar | {'a': '1', 'b': '4'} > foo | {'a': '1', 'b': '2', 'c': '3'} > (2 rows) > cqlsh:mykeyspace> SELECT * FROM mytable WHERE value['a'] = '1' AND value['b'] > = '2' ALLOW FILTERING; > key | value > -+ > foo | {'a': '1', 'b': '2', 'c': '3'} > (1 rows) > cqlsh:mykeyspace> SELECT * FROM mytable WHERE value['b'] = '2' ALLOW > FILTERING; > key | value > -+ > foo | {'a': '1', 'b': '2', 'c': '3'} > (1 rows) > cqlsh:mykeyspace> SELECT * FROM mytable WHERE value['b'] = '4'; > key | value > -+-- > bar | {'a': '1', 'b': '4'} > baz | {'b': '4', 'c': '3'} > (2 rows) > {code} > A patch against the Cassandra-2.1 branch that implements this interface will > be attached to this issue shortly. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
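To make the ENTRIES semantics concrete, here is a toy in-memory sketch of the idea. It is illustrative only — the real implementation indexes a composite of the collection key and value (per CompositesIndexOnCollectionKeyAndValue), not a string term:

```java
import java.util.*;

// Toy model of an ENTRIES(value) index: the index term combines the
// map key and the map value, so a predicate like value['b'] = '4'
// becomes a single index lookup instead of a scan.
public class EntriesIndexSketch {
    // index term "k=v" -> sorted set of partition keys (simplified).
    private final Map<String, Set<String>> index = new HashMap<>();

    void insert(String partitionKey, Map<String, String> value) {
        for (Map.Entry<String, String> e : value.entrySet())
            index.computeIfAbsent(e.getKey() + "=" + e.getValue(),
                                  k -> new TreeSet<>()).add(partitionKey);
    }

    Set<String> query(String mapKey, String mapValue) {
        return index.getOrDefault(mapKey + "=" + mapValue, Collections.emptySet());
    }

    public static void main(String[] args) {
        EntriesIndexSketch idx = new EntriesIndexSketch();
        idx.insert("foo", Map.of("a", "1", "b", "2", "c", "3"));
        idx.insert("bar", Map.of("a", "1", "b", "4"));
        idx.insert("baz", Map.of("b", "4", "c", "3"));
        System.out.println(idx.query("b", "4")); // [bar, baz]
    }
}
```

A conjunction like {{value\['a'\] = '1' AND value\['b'\] = '2'}} would use one such lookup for the most selective predicate and filter the rest, which is why the CQL example above requires {{ALLOW FILTERING}}.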
[jira] [Commented] (CASSANDRA-7124) Use JMX Notifications to Indicate Success/Failure of Long-Running Operations
[ https://issues.apache.org/jira/browse/CASSANDRA-7124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244815#comment-14244815 ] Rajanarayanan Thottuvaikkatumana commented on CASSANDRA-7124: - [~yukim] Please find commit details for the changes in compact job notification - https://github.com/rnamboodiri/cassandra/commit/7ad6225052c93df6d53c8ee68ab28c096c813568 > Use JMX Notifications to Indicate Success/Failure of Long-Running Operations > > > Key: CASSANDRA-7124 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7124 > Project: Cassandra > Issue Type: Improvement > Components: Tools >Reporter: Tyler Hobbs >Assignee: Rajanarayanan Thottuvaikkatumana >Priority: Minor > Labels: lhf > Fix For: 3.0 > > Attachments: 7124-wip.txt, cassandra-trunk-compact-7124.txt, > cassandra-trunk-decommission-7124.txt > > > If {{nodetool cleanup}} or some other long-running operation takes too long > to complete, you'll see an error like the one in CASSANDRA-2126, so you can't > tell if the operation completed successfully or not. CASSANDRA-4767 fixed > this for repairs with JMX notifications. We should do something similar for > nodetool cleanup, compact, decommission, move, relocate, etc. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8474) Error in commit log allocator thread
[ https://issues.apache.org/jira/browse/CASSANDRA-8474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244812#comment-14244812 ] Jonathan Ellis commented on CASSANDRA-8474: --- This looks like it's just executors complaining when shut down unceremoniously. CASSANDRA-1483 > Error in commit log allocator thread > > > Key: CASSANDRA-8474 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8474 > Project: Cassandra > Issue Type: Bug >Reporter: Philip Thompson >Assignee: Benedict >Priority: Minor > Fix For: 2.1.3 > > Attachments: counter_tests.py, node1.log, node2.log > > > The dtest counters_test.py:TestCounters.upgrade_test is intermittently > failing on 2.1-HEAD because of the following error in the system.log > {code} > ERROR [COMMIT-LOG-ALLOCATOR] 2014-12-12 14:18:07,342 StorageService.java:366 > - Stopping gossiper > WARN [COMMIT-LOG-ALLOCATOR] 2014-12-12 14:18:07,342 StorageService.java:274 > - Stopping gossip by operator request > INFO [COMMIT-LOG-ALLOCATOR] 2014-12-12 14:18:07,342 Gossiper.java:1341 - > Announcing shutdown > ERROR [COMMIT-LOG-ALLOCATOR] 2014-12-12 14:18:09,349 CassandraDaemon.java:170 > - Exception in thread Thread[COMMIT-LOG-ALLOCATOR,5,main] > java.lang.AssertionError: java.lang.InterruptedException > at > org.apache.cassandra.net.OutboundTcpConnection.enqueue(OutboundTcpConnection.java:107) > ~[main/:na] > at > org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:682) > ~[main/:na] > at > org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:648) > ~[main/:na] > at org.apache.cassandra.gms.Gossiper.stop(Gossiper.java:1345) > ~[main/:na] > at > org.apache.cassandra.service.StorageService.stopGossiping(StorageService.java:275) > ~[main/:na] > at > org.apache.cassandra.service.StorageService.stopTransports(StorageService.java:367) > ~[main/:na] > at > org.apache.cassandra.db.commitlog.CommitLog.handleCommitError(CommitLog.java:365) > ~[main/:na] > 
at > org.apache.cassandra.db.commitlog.CommitLogSegmentManager$1.runMayThrow(CommitLogSegmentManager.java:164) ~[main/:na] > at > org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) > ~[main/:na] > at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_67] > Caused by: java.lang.InterruptedException: null > at > java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1219) ~[na:1.7.0_67] > at > java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:340) > ~[na:1.7.0_67] > at > java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:338) > ~[na:1.7.0_67] > at > org.apache.cassandra.net.OutboundTcpConnection.enqueue(OutboundTcpConnection.java:103) > ~[main/:na] > ... 9 common frames omitted > {code} > I have attached the system.log files of both nodes used, as well as the test > being run. > I don't see this problem at all while running against 2.0-HEAD. I can > reproduce this very far back into 2.1's history. 
With 2.1.0-rc2, I see > {code}ERROR [COMMIT-LOG-ALLOCATOR] 2014-12-12 15:21:04,069 > CassandraDaemon.java:166 - Exception in thread > Thread[COMMIT-LOG-ALLOCATOR,5,main] > org.apache.cassandra.io.FSWriteError: > java.nio.channels.ClosedByInterruptException > at > org.apache.cassandra.db.commitlog.CommitLogSegment.<init>(CommitLogSegment.java:178) > ~[main/:na] > at > org.apache.cassandra.db.commitlog.CommitLogSegment.recycle(CommitLogSegment.java:373) > ~[main/:na] > at > org.apache.cassandra.db.commitlog.CommitLogSegmentManager$3.call(CommitLogSegmentManager.java:334) > ~[main/:na] > at > org.apache.cassandra.db.commitlog.CommitLogSegmentManager$3.call(CommitLogSegmentManager.java:331) > ~[main/:na] > at > org.apache.cassandra.db.commitlog.CommitLogSegmentManager$1.runMayThrow(CommitLogSegmentManager.java:148) > ~[main/:na] > at > org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) > ~[main/:na] > at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_67] > Caused by: java.nio.channels.ClosedByInterruptException: null > at > java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) > ~[na:1.7.0_67] > at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:919) > ~[na:1.7.0_67] > at > org.apache.cassandra.db.commitlog.CommitLogSegment.<init>(CommitLogSegment.java:167) > ~[main/:na] > ... 6 common frames omitted{code} > Is there something we're doing wrong in the test? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8399) Reference Counter exception when dropping user type
[ https://issues.apache.org/jira/browse/CASSANDRA-8399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244765#comment-14244765 ] Joshua McKenzie commented on CASSANDRA-8399: Capturing the stack on the SSTR ref counting shows 4 getRangeSlice creations and 5 MergeIterator releases in one example; however, I can't find any mismatches from reading through the MergeIterator code. Taking that a step further, I traced the sstable acquire/release calls in the SSTableScanner with an atomic int count to throw an assertion on a non-zero count on close(). The call that's throwing the assertion looks to be correctly paired with an acquire, implying that the unbalanced release on the SSTR is coming from another source and the scanner close() assertion is simply exposing it: I haven't been able to reproduce the assertion w/out the reference counting on the SSTableScanner. > Reference Counter exception when dropping user type > --- > > Key: CASSANDRA-8399 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8399 > Project: Cassandra > Issue Type: Bug >Reporter: Philip Thompson >Assignee: Joshua McKenzie > Fix For: 2.1.3 > > Attachments: node2.log, ubuntu-8399.log > > > When running the dtest > {{user_types_test.py:TestUserTypes.test_type_keyspace_permission_isolation}} > with the current 2.1-HEAD code, very frequently, but not always, when > dropping a type, the following exception is seen:{code} > ERROR [MigrationStage:1] 2014-12-01 13:54:54,824 CassandraDaemon.java:170 - > Exception in thread Thread[MigrationStage:1,5,main] > java.lang.AssertionError: Reference counter -1 for > /var/folders/v3/z4wf_34n1q506_xjdy49gb78gn/T/dtest-eW2RXj/test/node2/data/system/schema_keyspaces-b0f2235744583cdb9631c43e59ce3676/system-schema_keyspaces-ka-14-Data.db > at > org.apache.cassandra.io.sstable.SSTableReader.releaseReference(SSTableReader.java:1662) > ~[main/:na] > at > org.apache.cassandra.io.sstable.SSTableScanner.close(SSTableScanner.java:164) 
~[main/:na] > at > org.apache.cassandra.utils.MergeIterator.close(MergeIterator.java:62) > ~[main/:na] > at > org.apache.cassandra.db.ColumnFamilyStore$8.close(ColumnFamilyStore.java:1943) > ~[main/:na] > at > org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:2116) > ~[main/:na] > at > org.apache.cassandra.db.ColumnFamilyStore.getRangeSlice(ColumnFamilyStore.java:2029) > ~[main/:na] > at > org.apache.cassandra.db.ColumnFamilyStore.getRangeSlice(ColumnFamilyStore.java:1963) > ~[main/:na] > at > org.apache.cassandra.db.SystemKeyspace.serializedSchema(SystemKeyspace.java:744) > ~[main/:na] > at > org.apache.cassandra.db.SystemKeyspace.serializedSchema(SystemKeyspace.java:731) > ~[main/:na] > at org.apache.cassandra.config.Schema.updateVersion(Schema.java:374) > ~[main/:na] > at > org.apache.cassandra.config.Schema.updateVersionAndAnnounce(Schema.java:399) > ~[main/:na] > at > org.apache.cassandra.db.DefsTables.mergeSchema(DefsTables.java:167) > ~[main/:na] > at > org.apache.cassandra.db.DefinitionsUpdateVerbHandler$1.runMayThrow(DefinitionsUpdateVerbHandler.java:49) > ~[main/:na] > at > org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) > ~[main/:na] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) > ~[na:1.7.0_67] > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > ~[na:1.7.0_67] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > ~[na:1.7.0_67] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > [na:1.7.0_67] > at java.lang.Thread.run(Thread.java:745) [na:1.7.0_67]{code} > Log of the node with the error is attached. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
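The instrumentation described in the comment above — pairing each acquire with a release via an atomic counter and asserting balance on close() — can be sketched generically. This is an illustrative class, not the SSTableScanner code:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Debugging aid: count acquires/releases and fail fast on imbalance,
// so an unbalanced release surfaces at close() instead of silently
// driving a shared ref count negative later.
public class RefCountTracker implements AutoCloseable {
    private final AtomicInteger refs = new AtomicInteger();

    void acquire() { refs.incrementAndGet(); }

    void release() {
        if (refs.decrementAndGet() < 0)
            throw new AssertionError("unbalanced release");
    }

    @Override
    public void close() {
        if (refs.get() != 0)
            throw new AssertionError("non-zero ref count on close: " + refs.get());
    }

    public static void main(String[] args) {
        RefCountTracker t = new RefCountTracker();
        t.acquire();
        t.release();
        t.close();               // balanced: fine
        RefCountTracker u = new RefCountTracker();
        u.acquire();
        try {
            u.close();           // unbalanced: the assertion fires
        } catch (AssertionError e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

As the comment notes, a firing assertion here only localizes the symptom: if acquire/release are balanced inside the instrumented class, the extra release must come from a different code path sharing the same underlying counter.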
[jira] [Commented] (CASSANDRA-3527) Expose JMX values via CQL interface
[ https://issues.apache.org/jira/browse/CASSANDRA-3527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244758#comment-14244758 ] Jon Haddad commented on CASSANDRA-3527: --- It might not hurt adoption, but it would make it a lot easier to manage. Everyone already uses CQL to query the DB, but not everyone writes Java code. I would have loved this when managing my last cluster. > Expose JMX values via CQL interface > --- > > Key: CASSANDRA-3527 > URL: https://issues.apache.org/jira/browse/CASSANDRA-3527 > Project: Cassandra > Issue Type: New Feature > Components: Core >Reporter: Kelley Reynolds >Priority: Minor > Labels: cql > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-8474) Error in commit log allocator thread
Philip Thompson created CASSANDRA-8474: -- Summary: Error in commit log allocator thread Key: CASSANDRA-8474 URL: https://issues.apache.org/jira/browse/CASSANDRA-8474 Project: Cassandra Issue Type: Bug Reporter: Philip Thompson Assignee: Benedict Priority: Minor Fix For: 2.1.3 Attachments: counter_tests.py, node1.log, node2.log The dtest counters_test.py:TestCounters.upgrade_test is intermittently failing on 2.1-HEAD because of the following error in the system.log {code} ERROR [COMMIT-LOG-ALLOCATOR] 2014-12-12 14:18:07,342 StorageService.java:366 - Stopping gossiper WARN [COMMIT-LOG-ALLOCATOR] 2014-12-12 14:18:07,342 StorageService.java:274 - Stopping gossip by operator request INFO [COMMIT-LOG-ALLOCATOR] 2014-12-12 14:18:07,342 Gossiper.java:1341 - Announcing shutdown ERROR [COMMIT-LOG-ALLOCATOR] 2014-12-12 14:18:09,349 CassandraDaemon.java:170 - Exception in thread Thread[COMMIT-LOG-ALLOCATOR,5,main] java.lang.AssertionError: java.lang.InterruptedException at org.apache.cassandra.net.OutboundTcpConnection.enqueue(OutboundTcpConnection.java:107) ~[main/:na] at org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:682) ~[main/:na] at org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:648) ~[main/:na] at org.apache.cassandra.gms.Gossiper.stop(Gossiper.java:1345) ~[main/:na] at org.apache.cassandra.service.StorageService.stopGossiping(StorageService.java:275) ~[main/:na] at org.apache.cassandra.service.StorageService.stopTransports(StorageService.java:367) ~[main/:na] at org.apache.cassandra.db.commitlog.CommitLog.handleCommitError(CommitLog.java:365) ~[main/:na] at org.apache.cassandra.db.commitlog.CommitLogSegmentManager$1.runMayThrow(CommitLogSegmentManager.java:164) ~[main/:na] at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[main/:na] at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_67] Caused by: java.lang.InterruptedException: null at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1219) ~[na:1.7.0_67] at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:340) ~[na:1.7.0_67] at java.util.concurrent.LinkedBlockingQueue.put(LinkedBlockingQueue.java:338) ~[na:1.7.0_67] at org.apache.cassandra.net.OutboundTcpConnection.enqueue(OutboundTcpConnection.java:103) ~[main/:na] ... 9 common frames omitted {code} I have attached the system.log files of both nodes used, as well as the test being run. I don't see this problem at all while running against 2.0-HEAD. I can reproduce this very far back into 2.1's history. With 2.1.0-rc2, I see {code}ERROR [COMMIT-LOG-ALLOCATOR] 2014-12-12 15:21:04,069 CassandraDaemon.java:166 - Exception in thread Thread[COMMIT-LOG-ALLOCATOR,5,main] org.apache.cassandra.io.FSWriteError: java.nio.channels.ClosedByInterruptException at org.apache.cassandra.db.commitlog.CommitLogSegment.<init>(CommitLogSegment.java:178) ~[main/:na] at org.apache.cassandra.db.commitlog.CommitLogSegment.recycle(CommitLogSegment.java:373) ~[main/:na] at org.apache.cassandra.db.commitlog.CommitLogSegmentManager$3.call(CommitLogSegmentManager.java:334) ~[main/:na] at org.apache.cassandra.db.commitlog.CommitLogSegmentManager$3.call(CommitLogSegmentManager.java:331) ~[main/:na] at org.apache.cassandra.db.commitlog.CommitLogSegmentManager$1.runMayThrow(CommitLogSegmentManager.java:148) ~[main/:na] at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[main/:na] at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_67] Caused by: java.nio.channels.ClosedByInterruptException: null at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202) ~[na:1.7.0_67] at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:919) ~[na:1.7.0_67] at org.apache.cassandra.db.commitlog.CommitLogSegment.<init>(CommitLogSegment.java:167) ~[main/:na] ... 
6 common frames omitted{code} Is there something we're doing wrong in the test? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-8374) Better support of null for UDF
[ https://issues.apache.org/jira/browse/CASSANDRA-8374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Stupp updated CASSANDRA-8374: Attachment: 8473-1.txt > Better support of null for UDF > -- > > Key: CASSANDRA-8374 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8374 > Project: Cassandra > Issue Type: Bug >Reporter: Sylvain Lebresne >Assignee: Robert Stupp > Fix For: 3.0 > > Attachments: 8473-1.txt > > > Currently, every function needs to deal with its argument potentially being > {{null}}. There are many cases where that's just annoying; users should be > able to define a function like: > {noformat} > CREATE FUNCTION addTwo(val int) RETURNS int LANGUAGE JAVA AS 'return val + 2;' > {noformat} > without having this crash as soon as a column it's applied to doesn't have a > value for some rows (I'll note that this definition apparently cannot be > compiled currently, which should be looked into). > In fact, I think that by default methods shouldn't have to care about > {{null}} values: if the value is {{null}}, we should not call the method at > all and return {{null}}. There are still methods that may explicitly want to > handle {{null}} (to return a default value, for instance), so maybe we can add > an {{ALLOW NULLS}} option to the creation syntax. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8374) Better support of null for UDF
[ https://issues.apache.org/jira/browse/CASSANDRA-8374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244735#comment-14244735 ] Robert Stupp commented on CASSANDRA-8374: - Attached 8473-1.txt (raises error). > Better support of null for UDF > -- > > Key: CASSANDRA-8374 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8374 > Project: Cassandra > Issue Type: Bug >Reporter: Sylvain Lebresne >Assignee: Robert Stupp > Fix For: 3.0 > > Attachments: 8473-1.txt > > > Currently, every function needs to deal with its argument potentially being > {{null}}. There are many cases where that's just annoying; users should be > able to define a function like: > {noformat} > CREATE FUNCTION addTwo(val int) RETURNS int LANGUAGE JAVA AS 'return val + 2;' > {noformat} > without having this crash as soon as a column it's applied to doesn't have a > value for some rows (I'll note that this definition apparently cannot be > compiled currently, which should be looked into). > In fact, I think that by default methods shouldn't have to care about > {{null}} values: if the value is {{null}}, we should not call the method at > all and return {{null}}. There are still methods that may explicitly want to > handle {{null}} (to return a default value, for instance), so maybe we can add > an {{ALLOW NULLS}} option to the creation syntax. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8374) Better support of null for UDF
[ https://issues.apache.org/jira/browse/CASSANDRA-8374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244611#comment-14244611 ] Robert Stupp commented on CASSANDRA-8374: - bq. Shouldn't we raise an error instead of silently short circuiting to null? The code is getting a bit complicated handling the {{null}} value and the combination of a UDA + non-{{ALLOW NULLS}} state function (thinking about what happens when an {{ALLOW NULLS}} UDF is replaced by a non-{{ALLOW NULLS}} variant). Raising an error would make it immediately clear to users. So I'm generally +1 on raising an error (instead of returning a possibly silently wrong result due to bad data). > Better support of null for UDF > -- > > Key: CASSANDRA-8374 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8374 > Project: Cassandra > Issue Type: Bug >Reporter: Sylvain Lebresne >Assignee: Robert Stupp > Fix For: 3.0 > > > Currently, every function needs to deal with its argument potentially being > {{null}}. There are many cases where that's just annoying; users should be > able to define a function like: > {noformat} > CREATE FUNCTION addTwo(val int) RETURNS int LANGUAGE JAVA AS 'return val + 2;' > {noformat} > without having this crash as soon as a column it's applied to doesn't have a > value for some rows (I'll note that this definition apparently cannot be > compiled currently, which should be looked into). > In fact, I think that by default methods shouldn't have to care about > {{null}} values: if the value is {{null}}, we should not call the method at > all and return {{null}}. There are still methods that may explicitly want to > handle {{null}} (to return a default value, for instance), so maybe we can add > an {{ALLOW NULLS}} option to the creation syntax. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
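The default behavior proposed in the ticket — skip the function body entirely on {{null}} unless the function opts in — can be sketched in a few lines. The names here are illustrative, not Cassandra's actual UDF machinery:

```java
import java.util.function.Function;

// Sketch of the "skip on null" default: unless the function opts in
// (an ALLOW NULLS-style flag), a null argument short-circuits to null
// without ever invoking the body, so 'return val + 2;' can't NPE.
public class NullSafeUdf {
    static <A, R> R invoke(Function<A, R> body, A arg, boolean allowNulls) {
        if (arg == null && !allowNulls)
            return null;          // don't call the body at all
        return body.apply(arg);
    }

    public static void main(String[] args) {
        Function<Integer, Integer> addTwo = v -> v + 2;
        System.out.println(invoke(addTwo, 3, false));    // 5
        System.out.println(invoke(addTwo, null, false)); // null (no NPE)
    }
}
```

The complication discussed in the comment is what to do when the short-circuit meets an aggregate: raising an error there is more transparent than silently propagating {{null}} through the state function.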
[jira] [Updated] (CASSANDRA-8103) Secondary Indices for Static Columns
[ https://issues.apache.org/jira/browse/CASSANDRA-8103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Shuler updated CASSANDRA-8103: -- Assignee: Sylvain Lebresne > Secondary Indices for Static Columns > > > Key: CASSANDRA-8103 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8103 > Project: Cassandra > Issue Type: New Feature > Components: Core >Reporter: Ron Cohen >Assignee: Sylvain Lebresne > Fix For: 3.0 > > > We should add secondary index support for static columns. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-8462) Upgrading a 2.0 to 2.1 breaks CFMetaData on 2.0 nodes
[ https://issues.apache.org/jira/browse/CASSANDRA-8462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson updated CASSANDRA-8462: --- Tester: Philip Thompson [~thobbs], you are correct, we currently have no coverage there. I will add some. > Upgrading a 2.0 to 2.1 breaks CFMetaData on 2.0 nodes > - > > Key: CASSANDRA-8462 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8462 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Rick Branson >Assignee: Aleksey Yeschenko > > Added a 2.1.2 node to a cluster running 2.0.11. Didn't make any schema > changes. When I tried to reboot one of the 2.0 nodes, it failed to boot with > this exception. Besides an obvious fix, any workarounds for this? > {noformat} > java.lang.IllegalArgumentException: No enum constant > org.apache.cassandra.config.CFMetaData.Caching.{"keys":"ALL", > "rows_per_partition":"NONE"} > at java.lang.Enum.valueOf(Enum.java:236) > at > org.apache.cassandra.config.CFMetaData$Caching.valueOf(CFMetaData.java:286) > at > org.apache.cassandra.config.CFMetaData.fromSchemaNoColumnsNoTriggers(CFMetaData.java:1713) > at > org.apache.cassandra.config.CFMetaData.fromSchema(CFMetaData.java:1793) > at > org.apache.cassandra.config.KSMetaData.deserializeColumnFamilies(KSMetaData.java:307) > at > org.apache.cassandra.config.KSMetaData.fromSchema(KSMetaData.java:288) > at > org.apache.cassandra.db.DefsTables.loadFromKeyspace(DefsTables.java:131) > at > org.apache.cassandra.config.DatabaseDescriptor.loadSchemas(DatabaseDescriptor.java:529) > at > org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:270) > at > org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496) > at > org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-5977) Structure for cfstats output (JSON, YAML, or XML)
[ https://issues.apache.org/jira/browse/CASSANDRA-5977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244562#comment-14244562 ] Jon Haddad commented on CASSANDRA-5977: --- Just to add here, other people want this as well. This was just posted to the mailing list: https://gist.github.com/JensRantil/3da67e39f50aaf4f5bce > Structure for cfstats output (JSON, YAML, or XML) > - > > Key: CASSANDRA-5977 > URL: https://issues.apache.org/jira/browse/CASSANDRA-5977 > Project: Cassandra > Issue Type: Improvement > Components: Tools >Reporter: Alyssa Kwan >Priority: Minor > > nodetool cfstats should take a --format arg that structures the output in > JSON, YAML, or XML. This would be useful for piping into another script that > can easily parse this and act on it. It would also help those of us who use > things like MCollective gather aggregate stats across clusters/nodes. > Thoughts? I can submit a patch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8462) Upgrading a 2.0 to 2.1 breaks CFMetaData on 2.0 nodes
[ https://issues.apache.org/jira/browse/CASSANDRA-8462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244553#comment-14244553 ] Tyler Hobbs commented on CASSANDRA-8462: [~philipthompson] I'm guessing the dtests don't cover restarting nodes in a mixed-version cluster. Would it be possible to add some test coverage for that? > Upgrading a 2.0 to 2.1 breaks CFMetaData on 2.0 nodes > - > > Key: CASSANDRA-8462 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8462 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Rick Branson >Assignee: Aleksey Yeschenko > > Added a 2.1.2 node to a cluster running 2.0.11. Didn't make any schema > changes. When I tried to reboot one of the 2.0 nodes, it failed to boot with > this exception. Besides an obvious fix, any workarounds for this? > {noformat} > java.lang.IllegalArgumentException: No enum constant > org.apache.cassandra.config.CFMetaData.Caching.{"keys":"ALL", > "rows_per_partition":"NONE"} > at java.lang.Enum.valueOf(Enum.java:236) > at > org.apache.cassandra.config.CFMetaData$Caching.valueOf(CFMetaData.java:286) > at > org.apache.cassandra.config.CFMetaData.fromSchemaNoColumnsNoTriggers(CFMetaData.java:1713) > at > org.apache.cassandra.config.CFMetaData.fromSchema(CFMetaData.java:1793) > at > org.apache.cassandra.config.KSMetaData.deserializeColumnFamilies(KSMetaData.java:307) > at > org.apache.cassandra.config.KSMetaData.fromSchema(KSMetaData.java:288) > at > org.apache.cassandra.db.DefsTables.loadFromKeyspace(DefsTables.java:131) > at > org.apache.cassandra.config.DatabaseDescriptor.loadSchemas(DatabaseDescriptor.java:529) > at > org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:270) > at > org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496) > at > org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-8462) Upgrading a 2.0 to 2.1 breaks CFMetaData on 2.0 nodes
[ https://issues.apache.org/jira/browse/CASSANDRA-8462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tyler Hobbs updated CASSANDRA-8462: --- Description: Added a 2.1.2 node to a cluster running 2.0.11. Didn't make any schema changes. When I tried to reboot one of the 2.0 nodes, it failed to boot with this exception. Besides an obvious fix, any workarounds for this? {noformat} java.lang.IllegalArgumentException: No enum constant org.apache.cassandra.config.CFMetaData.Caching.{"keys":"ALL", "rows_per_partition":"NONE"} at java.lang.Enum.valueOf(Enum.java:236) at org.apache.cassandra.config.CFMetaData$Caching.valueOf(CFMetaData.java:286) at org.apache.cassandra.config.CFMetaData.fromSchemaNoColumnsNoTriggers(CFMetaData.java:1713) at org.apache.cassandra.config.CFMetaData.fromSchema(CFMetaData.java:1793) at org.apache.cassandra.config.KSMetaData.deserializeColumnFamilies(KSMetaData.java:307) at org.apache.cassandra.config.KSMetaData.fromSchema(KSMetaData.java:288) at org.apache.cassandra.db.DefsTables.loadFromKeyspace(DefsTables.java:131) at org.apache.cassandra.config.DatabaseDescriptor.loadSchemas(DatabaseDescriptor.java:529) at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:270) at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496) at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585) {noformat} was: Added a 2.1.2 node to a cluster running 2.0.11. Didn't make any schema changes. When I tried to reboot one of the 2.0 nodes, it failed to boot with this exception. Besides an obvious fix, any workarounds for this? 
{code} java.lang.IllegalArgumentException: No enum constant org.apache.cassandra.config.CFMetaData.Caching.{"keys":"ALL", "rows_per_partition":"NONE"} at java.lang.Enum.valueOf(Enum.java:236) at org.apache.cassandra.config.CFMetaData$Caching.valueOf(CFMetaData.java:286) at org.apache.cassandra.config.CFMetaData.fromSchemaNoColumnsNoTriggers(CFMetaData.java:1713) at org.apache.cassandra.config.CFMetaData.fromSchema(CFMetaData.java:1793) at org.apache.cassandra.config.KSMetaData.deserializeColumnFamilies(KSMetaData.java:307) at org.apache.cassandra.config.KSMetaData.fromSchema(KSMetaData.java:288) at org.apache.cassandra.db.DefsTables.loadFromKeyspace(DefsTables.java:131) at org.apache.cassandra.config.DatabaseDescriptor.loadSchemas(DatabaseDescriptor.java:529) at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:270) at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496) at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585) {/code} > Upgrading a 2.0 to 2.1 breaks CFMetaData on 2.0 nodes > - > > Key: CASSANDRA-8462 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8462 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Rick Branson >Assignee: Aleksey Yeschenko > > Added a 2.1.2 node to a cluster running 2.0.11. Didn't make any schema > changes. When I tried to reboot one of the 2.0 nodes, it failed to boot with > this exception. Besides an obvious fix, any workarounds for this? 
> {noformat} > java.lang.IllegalArgumentException: No enum constant > org.apache.cassandra.config.CFMetaData.Caching.{"keys":"ALL", > "rows_per_partition":"NONE"} > at java.lang.Enum.valueOf(Enum.java:236) > at > org.apache.cassandra.config.CFMetaData$Caching.valueOf(CFMetaData.java:286) > at > org.apache.cassandra.config.CFMetaData.fromSchemaNoColumnsNoTriggers(CFMetaData.java:1713) > at > org.apache.cassandra.config.CFMetaData.fromSchema(CFMetaData.java:1793) > at > org.apache.cassandra.config.KSMetaData.deserializeColumnFamilies(KSMetaData.java:307) > at > org.apache.cassandra.config.KSMetaData.fromSchema(KSMetaData.java:288) > at > org.apache.cassandra.db.DefsTables.loadFromKeyspace(DefsTables.java:131) > at > org.apache.cassandra.config.DatabaseDescriptor.loadSchemas(DatabaseDescriptor.java:529) > at > org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:270) > at > org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496) > at > org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
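The failure mode in the trace above can be reproduced in isolation: 2.0 stores the caching option as a plain enum name, while 2.1 writes a JSON map into the schema tables, and `Enum.valueOf` throws `IllegalArgumentException` ("No enum constant ...") for any string that is not a declared constant. A minimal sketch with a hypothetical stand-in enum (not the actual `CFMetaData` code):

```java
// Sketch of why the 2.0 node fails to boot after a 2.1 node joins: the 2.0
// parser feeds the schema value straight to Enum.valueOf, which rejects the
// 2.1-style JSON map with IllegalArgumentException ("No enum constant ...").
public class CachingParseSketch {
    // Hypothetical stand-in for org.apache.cassandra.config.CFMetaData.Caching
    enum Caching { ALL, KEYS_ONLY, ROWS_ONLY, NONE }

    public static void main(String[] args) {
        // A 2.0-era schema value parses fine
        System.out.println(Caching.valueOf("KEYS_ONLY"));

        // A 2.1-era value is a JSON map, which the 2.0 enum parser rejects
        String fromSchema = "{\"keys\":\"ALL\", \"rows_per_partition\":\"NONE\"}";
        try {
            Caching.valueOf(fromSchema);
        } catch (IllegalArgumentException e) {
            System.out.println("boot fails: " + e.getMessage());
        }
    }
}
```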
[jira] [Commented] (CASSANDRA-8452) Add missing systems to FBUtilities.isUnix, add FBUtilities.isWindows
[ https://issues.apache.org/jira/browse/CASSANDRA-8452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244552#comment-14244552 ] Joshua McKenzie commented on CASSANDRA-8452: The failure / check on maybeReopenEarly isn't about whether it's posix or not but about whether it's ntfs (and pre-nio technically, though we're all-or-nothing windows on 2.1). Is there a reason we shouldn't reduce this to isWindows and hasProcFS checks and move forward with the assumption of posix-compliance if not Windows, given the platforms we run on? Seems unnecessary to have the isPosix structure simply to add that to the check on the fs for proc. > Add missing systems to FBUtilities.isUnix, add FBUtilities.isWindows > > > Key: CASSANDRA-8452 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8452 > Project: Cassandra > Issue Type: Bug >Reporter: Blake Eggleston >Assignee: Blake Eggleston >Priority: Minor > Fix For: 2.1.3 > > Attachments: CASSANDRA-8452-v2.patch, CASSANDRA-8452-v3.patch, > CASSANDRA-8452.patch > > > The isUnix method leaves out a few unix systems, which, after the changes in > CASSANDRA-8136, causes some unexpected behavior during shutdown. It would > also be clearer if FBUtilities had an isWindows method for branching into > Windows specific logic. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8452) Add missing systems to FBUtilities.isUnix, add FBUtilities.isWindows
[ https://issues.apache.org/jira/browse/CASSANDRA-8452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244543#comment-14244543 ] Blake Eggleston commented on CASSANDRA-8452: +1, sounds good to me. The v3 patch should cover all the bases then. > Add missing systems to FBUtilities.isUnix, add FBUtilities.isWindows > > > Key: CASSANDRA-8452 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8452 > Project: Cassandra > Issue Type: Bug >Reporter: Blake Eggleston >Assignee: Blake Eggleston >Priority: Minor > Fix For: 2.1.3 > > Attachments: CASSANDRA-8452-v2.patch, CASSANDRA-8452-v3.patch, > CASSANDRA-8452.patch > > > The isUnix method leaves out a few unix systems, which, after the changes in > CASSANDRA-8136, causes some unexpected behavior during shutdown. It would > also be clearer if FBUtilities had an isWindows method for branching into > Windows specific logic. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-6993) Windows: remove mmap'ed I/O for index files and force standard file access
[ https://issues.apache.org/jira/browse/CASSANDRA-6993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244541#comment-14244541 ] Joshua McKenzie commented on CASSANDRA-6993: Waiting for CASSANDRA-8452 to shake out, then I'll rebase this to that spec. > Windows: remove mmap'ed I/O for index files and force standard file access > -- > > Key: CASSANDRA-6993 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6993 > Project: Cassandra > Issue Type: Improvement >Reporter: Joshua McKenzie >Assignee: Joshua McKenzie >Priority: Minor > Labels: Windows > Fix For: 3.0, 2.1.3 > > Attachments: 6993_2.1_v1.txt, 6993_v1.txt, 6993_v2.txt > > > Memory-mapped I/O on Windows causes issues with hard-links; we're unable to > delete hard-links to open files with memory-mapped segments even using nio. > We'll need to push for close to performance parity between mmap'ed I/O and > buffered going forward as the buffered / compressed path offers other > benefits. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8452) Add missing systems to FBUtilities.isUnix, add FBUtilities.isWindows
[ https://issues.apache.org/jira/browse/CASSANDRA-8452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244539#comment-14244539 ] Joshua McKenzie commented on CASSANDRA-8452: I'm comfortable with the position that if you run cassandra on linux w/ntfs you're in uncharted territory and we make no promises. Naming the checks 'isWindows' and 'hasProcFS' seems like the most granular / correct combination of things for our current needs and we could expand from there later as needs dictate. > Add missing systems to FBUtilities.isUnix, add FBUtilities.isWindows > > > Key: CASSANDRA-8452 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8452 > Project: Cassandra > Issue Type: Bug >Reporter: Blake Eggleston >Assignee: Blake Eggleston >Priority: Minor > Fix For: 2.1.3 > > Attachments: CASSANDRA-8452-v2.patch, CASSANDRA-8452-v3.patch, > CASSANDRA-8452.patch > > > The isUnix method leaves out a few unix systems, which, after the changes in > CASSANDRA-8136, causes some unexpected behavior during shutdown. It would > also be clearer if FBUtilities had an isWindows method for branching into > Windows specific logic. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
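The isWindows / hasProcFS split discussed above can be sketched as two independent checks — branch on Windows via `os.name`, and detect procfs by probing the filesystem rather than maintaining a list of "unix" systems. The names are taken from the comment; the actual patch attached to this ticket may differ:

```java
import java.nio.file.Files;
import java.nio.file.Paths;

// Sketch of the two checks proposed above. procfs is a property of the
// running kernel, not of "unix-ness", so probing /proc avoids the problem
// the ticket describes: an isUnix allow-list that misses some systems.
public final class PlatformChecks {
    public static boolean isWindows() {
        String os = System.getProperty("os.name").toLowerCase();
        return os.contains("windows");
    }

    public static boolean hasProcFS() {
        return !isWindows() && Files.exists(Paths.get("/proc"));
    }
}
```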
cassandra git commit: Fix AIOOBE when building syntax error message
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1 c3ac6baac -> 254d6f7e9 Fix AIOOBE when building syntax error message Patch by Benjamin Lerer; reviewed by Tyler Hobbs for CASSANDRA-8455 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/254d6f7e Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/254d6f7e Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/254d6f7e Branch: refs/heads/cassandra-2.1 Commit: 254d6f7e9c689c639f85f3f5119f4a812d295f05 Parents: c3ac6ba Author: blerer Authored: Fri Dec 12 12:02:40 2014 -0600 Committer: Tyler Hobbs Committed: Fri Dec 12 12:03:34 2014 -0600 -- CHANGES.txt | 2 ++ .../apache/cassandra/cql3/ErrorCollector.java | 30 .../apache/cassandra/cql3/QueryProcessor.java | 1 + .../cassandra/cql3/ErrorCollectorTest.java | 17 +++ 4 files changed, 50 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/254d6f7e/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 5402ad5..07d526c 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,6 @@ 2.1.3 + * Fix ArrayIndexOutOfBoundsException when generating error message + for some CQL syntax errors (CASSANDRA-8455) * Scale memtable slab allocation logarithmically (CASSANDRA-7882) * cassandra-stress simultaneous inserts over same seed (CASSANDRA-7964) * Reduce cassandra-stress sampling memory requirements (CASSANDRA-7926) http://git-wip-us.apache.org/repos/asf/cassandra/blob/254d6f7e/src/java/org/apache/cassandra/cql3/ErrorCollector.java -- diff --git a/src/java/org/apache/cassandra/cql3/ErrorCollector.java b/src/java/org/apache/cassandra/cql3/ErrorCollector.java index cd628b8..2137da2 100644 --- a/src/java/org/apache/cassandra/cql3/ErrorCollector.java +++ b/src/java/org/apache/cassandra/cql3/ErrorCollector.java @@ -132,6 +132,9 @@ public final class ErrorCollector implements ErrorListener Token to, Token offending) { +if (!areTokensValid(from, to, offending)) +return; + 
String[] lines = query.split("\n"); boolean includeQueryStart = (from.getLine() == 1) && (from.getCharPositionInLine() == 0); @@ -157,6 +160,33 @@ public final class ErrorCollector implements ErrorListener } /** + * Checks if the specified tokens are valid. + * + * @param tokens the tokens to check + * @return true if all the specified tokens are valid ones, false otherwise. + */ +private static boolean areTokensValid(Token... tokens) +{ +for (Token token : tokens) +{ +if (!isTokenValid(token)) +return false; +} +return true; +} + +/** + * Checks that the specified token is valid. + * + * @param token the token to check + * @return true if it is considered as valid, false otherwise. + */ +private static boolean isTokenValid(Token token) +{ +return token.getLine() > 0 && token.getCharPositionInLine() >= 0; +} + +/** * Returns the index of the offending token. In the case where the offending token is an extra * character at the end, the index returned by the TokenStream might be after the last token. 
* To avoid that problem we need to make sure that the index of the offending token is a valid index http://git-wip-us.apache.org/repos/asf/cassandra/blob/254d6f7e/src/java/org/apache/cassandra/cql3/QueryProcessor.java -- diff --git a/src/java/org/apache/cassandra/cql3/QueryProcessor.java b/src/java/org/apache/cassandra/cql3/QueryProcessor.java index 8e829e8..197225b 100644 --- a/src/java/org/apache/cassandra/cql3/QueryProcessor.java +++ b/src/java/org/apache/cassandra/cql3/QueryProcessor.java @@ -542,6 +542,7 @@ public class QueryProcessor implements QueryHandler } catch (RuntimeException re) { +logger.error(String.format("The statement: [%s] could not be parsed.", queryStr), re); throw new SyntaxException(String.format("Failed parsing statement: [%s] reason: %s %s", queryStr, re.getClass().getSimpleName(), http://git-wip-us.apache.org/repos/asf/cassandra/blob/254d6f7e/test/unit/org/apache/cassandra/cql3/ErrorCollectorTest.java -- diff --git a/test/unit/org/apache/cassandra/cql3/ErrorCollectorTest.java b/test/
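The guard added in the patch above can be exercised standalone. A sketch with a minimal token stand-in (the real class is `org.antlr.runtime.Token`): ANTLR 3.5.2 can hand back tokens with `line == 0` and `charPositionInLine == -1`, which previously drove an `ArrayIndexOutOfBoundsException` in the snippet builder.

```java
// Standalone sketch of the validity guard from the CASSANDRA-8455 patch:
// bail out of snippet building when any token carries the out-of-range
// positions that ANTLR 3.5.2 sometimes produces for malformed queries.
public class TokenGuardSketch {
    // Minimal stand-in for org.antlr.runtime.Token
    static final class Tok {
        final int line, charPositionInLine;
        Tok(int line, int pos) { this.line = line; this.charPositionInLine = pos; }
    }

    static boolean isTokenValid(Tok token) {
        return token.line > 0 && token.charPositionInLine >= 0;
    }

    static boolean areTokensValid(Tok... tokens) {
        for (Tok token : tokens)
            if (!isTokenValid(token))
                return false;
        return true;
    }

    public static void main(String[] args) {
        Tok good = new Tok(1, 0);
        Tok bad = new Tok(0, -1);  // the shape ANTLR 3.5.2 can produce
        System.out.println(areTokensValid(good, good)); // true
        System.out.println(areTokensValid(good, bad));  // false -> skip snippet
    }
}
```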
[1/2] cassandra git commit: Fix AIOOBE when building syntax error message
Repository: cassandra Updated Branches: refs/heads/trunk 66789fe67 -> 3a609c20c Fix AIOOBE when building syntax error message Patch by Benjamin Lerer; reviewed by Tyler Hobbs for CASSANDRA-8455 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/254d6f7e Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/254d6f7e Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/254d6f7e Branch: refs/heads/trunk Commit: 254d6f7e9c689c639f85f3f5119f4a812d295f05 Parents: c3ac6ba Author: blerer Authored: Fri Dec 12 12:02:40 2014 -0600 Committer: Tyler Hobbs Committed: Fri Dec 12 12:03:34 2014 -0600 -- CHANGES.txt | 2 ++ .../apache/cassandra/cql3/ErrorCollector.java | 30 .../apache/cassandra/cql3/QueryProcessor.java | 1 + .../cassandra/cql3/ErrorCollectorTest.java | 17 +++ 4 files changed, 50 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/254d6f7e/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 5402ad5..07d526c 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,6 @@ 2.1.3 + * Fix ArrayIndexOutOfBoundsException when generating error message + for some CQL syntax errors (CASSANDRA-8455) * Scale memtable slab allocation logarithmically (CASSANDRA-7882) * cassandra-stress simultaneous inserts over same seed (CASSANDRA-7964) * Reduce cassandra-stress sampling memory requirements (CASSANDRA-7926) http://git-wip-us.apache.org/repos/asf/cassandra/blob/254d6f7e/src/java/org/apache/cassandra/cql3/ErrorCollector.java -- diff --git a/src/java/org/apache/cassandra/cql3/ErrorCollector.java b/src/java/org/apache/cassandra/cql3/ErrorCollector.java index cd628b8..2137da2 100644 --- a/src/java/org/apache/cassandra/cql3/ErrorCollector.java +++ b/src/java/org/apache/cassandra/cql3/ErrorCollector.java @@ -132,6 +132,9 @@ public final class ErrorCollector implements ErrorListener Token to, Token offending) { +if (!areTokensValid(from, to, offending)) +return; + String[] lines 
= query.split("\n"); boolean includeQueryStart = (from.getLine() == 1) && (from.getCharPositionInLine() == 0); @@ -157,6 +160,33 @@ public final class ErrorCollector implements ErrorListener } /** + * Checks if the specified tokens are valid. + * + * @param tokens the tokens to check + * @return true if all the specified tokens are valid ones, false otherwise. + */ +private static boolean areTokensValid(Token... tokens) +{ +for (Token token : tokens) +{ +if (!isTokenValid(token)) +return false; +} +return true; +} + +/** + * Checks that the specified token is valid. + * + * @param token the token to check + * @return true if it is considered as valid, false otherwise. + */ +private static boolean isTokenValid(Token token) +{ +return token.getLine() > 0 && token.getCharPositionInLine() >= 0; +} + +/** * Returns the index of the offending token. In the case where the offending token is an extra * character at the end, the index returned by the TokenStream might be after the last token. * To avoid that problem we need to make sure that the index of the offending token is a valid index http://git-wip-us.apache.org/repos/asf/cassandra/blob/254d6f7e/src/java/org/apache/cassandra/cql3/QueryProcessor.java -- diff --git a/src/java/org/apache/cassandra/cql3/QueryProcessor.java b/src/java/org/apache/cassandra/cql3/QueryProcessor.java index 8e829e8..197225b 100644 --- a/src/java/org/apache/cassandra/cql3/QueryProcessor.java +++ b/src/java/org/apache/cassandra/cql3/QueryProcessor.java @@ -542,6 +542,7 @@ public class QueryProcessor implements QueryHandler } catch (RuntimeException re) { +logger.error(String.format("The statement: [%s] could not be parsed.", queryStr), re); throw new SyntaxException(String.format("Failed parsing statement: [%s] reason: %s %s", queryStr, re.getClass().getSimpleName(), http://git-wip-us.apache.org/repos/asf/cassandra/blob/254d6f7e/test/unit/org/apache/cassandra/cql3/ErrorCollectorTest.java -- diff --git 
a/test/unit/org/apache/cassandra/cql3/ErrorCollectorTest.java b/test/unit/org/apache/
[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into trunk
Merge branch 'cassandra-2.1' into trunk Conflicts: src/java/org/apache/cassandra/cql3/ErrorCollector.java test/unit/org/apache/cassandra/cql3/ErrorCollectorTest.java Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3a609c20 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3a609c20 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3a609c20 Branch: refs/heads/trunk Commit: 3a609c20c947910116ec1447e2dd1227b616b2e8 Parents: 66789fe 254d6f7 Author: Tyler Hobbs Authored: Fri Dec 12 12:06:56 2014 -0600 Committer: Tyler Hobbs Committed: Fri Dec 12 12:06:56 2014 -0600 -- CHANGES.txt| 2 ++ .../org/apache/cassandra/cql3/QueryProcessor.java | 1 + .../apache/cassandra/cql3/ErrorCollectorTest.java | 17 + 3 files changed, 20 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/3a609c20/CHANGES.txt -- diff --cc CHANGES.txt index 10c9fc8,07d526c..985a3c9 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,45 -1,6 +1,47 @@@ +3.0 + * Support for user-defined aggregation functions (CASSANDRA-8053) + * Fix NPE in SelectStatement with empty IN values (CASSANDRA-8419) + * Refactor SelectStatement, return IN results in natural order instead + of IN value list order (CASSANDRA-7981) + * Support UDTs, tuples, and collections in user-defined + functions (CASSANDRA-7563) + * Fix aggregate fn results on empty selection, result column name, + and cqlsh parsing (CASSANDRA-8229) + * Mark sstables as repaired after full repair (CASSANDRA-7586) + * Extend Descriptor to include a format value and refactor reader/writer apis (CASSANDRA-7443) + * Integrate JMH for microbenchmarks (CASSANDRA-8151) + * Keep sstable levels when bootstrapping (CASSANDRA-7460) + * Add Sigar library and perform basic OS settings check on startup (CASSANDRA-7838) + * Support for aggregation functions (CASSANDRA-4914) + * Remove cassandra-cli (CASSANDRA-7920) + * Accept dollar quoted strings in CQL (CASSANDRA-7769) + * 
Make assassinate a first class command (CASSANDRA-7935) + * Support IN clause on any clustering column (CASSANDRA-4762) + * Improve compaction logging (CASSANDRA-7818) + * Remove YamlFileNetworkTopologySnitch (CASSANDRA-7917) + * Do anticompaction in groups (CASSANDRA-6851) + * Support pure user-defined functions (CASSANDRA-7395, 7526, 7562, 7740, 7781, 7929, + 7924, 7812, 8063, 7813) + * Permit configurable timestamps with cassandra-stress (CASSANDRA-7416) + * Move sstable RandomAccessReader to nio2, which allows using the + FILE_SHARE_DELETE flag on Windows (CASSANDRA-4050) + * Remove CQL2 (CASSANDRA-5918) + * Add Thrift get_multi_slice call (CASSANDRA-6757) + * Optimize fetching multiple cells by name (CASSANDRA-6933) + * Allow compilation in java 8 (CASSANDRA-7028) + * Make incremental repair default (CASSANDRA-7250) + * Enable code coverage thru JaCoCo (CASSANDRA-7226) + * Switch external naming of 'column families' to 'tables' (CASSANDRA-4369) + * Shorten SSTable path (CASSANDRA-6962) + * Use unsafe mutations for most unit tests (CASSANDRA-6969) + * Fix race condition during calculation of pending ranges (CASSANDRA-7390) + * Fail on very large batch sizes (CASSANDRA-8011) + * Improve concurrency of repair (CASSANDRA-6455, 8208) + + 2.1.3 + * Fix ArrayIndexOutOfBoundsException when generating error message +for some CQL syntax errors (CASSANDRA-8455) * Scale memtable slab allocation logarithmically (CASSANDRA-7882) * cassandra-stress simultaneous inserts over same seed (CASSANDRA-7964) * Reduce cassandra-stress sampling memory requirements (CASSANDRA-7926) http://git-wip-us.apache.org/repos/asf/cassandra/blob/3a609c20/src/java/org/apache/cassandra/cql3/QueryProcessor.java -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/3a609c20/test/unit/org/apache/cassandra/cql3/ErrorCollectorTest.java -- diff --cc test/unit/org/apache/cassandra/cql3/ErrorCollectorTest.java index 4ecf460,fca93df..899aeec --- 
a/test/unit/org/apache/cassandra/cql3/ErrorCollectorTest.java +++ b/test/unit/org/apache/cassandra/cql3/ErrorCollectorTest.java @@@ -107,30 -107,23 +107,47 @@@ public class ErrorCollectorTes assertEquals(expected, builder.toString()); } +/** + * With ANTLR 3.5.2 it appears that some tokens can contains unexpected values: a line = 0 + * and a charPositionInLine = -1. + */ +@Test +public void testAppendSnippetWithInvalidToken() +{ +String query = "select * fom users"; + +ErrorColl
[jira] [Commented] (CASSANDRA-8390) The process cannot access the file because it is being used by another process
[ https://issues.apache.org/jira/browse/CASSANDRA-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244532#comment-14244532 ] Joshua McKenzie commented on CASSANDRA-8390: Upped to 40 iterations, ran 3 times w/out AV and then put windows defender on the box, activated, mmap on index files and ran up to 120 iterations and still can't reproduce. I'm on spinning disk here w/some other factors so that may play into it if this is a timing issue. The failure to delete within SSTableDeletingTask after successfully deleting the data file is fairly unique to this ticket as far as I know, and while it could indicate logic errors in SSTR.tidy / SSTR.scheduleTidy w/regards to unmapping mmap'ed segments, I'd expect to see this show up with more frequency if it was a logical error such as that. Something else on that system has a handle to that file open and we can't control the ecosystem outside Cassandra which has shown to be a headache on Windows. If either of you could reproduce and run [Handle|http://technet.microsoft.com/en-us/sysinternals/bb896655.aspx] on your system, preferably with admin credentials on the command-prompt, and search for one of these files you're getting an error on it may help us narrow this down. I haven't tried running from within Upsource - [~alexander_radzin]: are you running C* from within upsource as well or running it natively from cassandra.bat? 
> The process cannot access the file because it is being used by another process > -- > > Key: CASSANDRA-8390 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8390 > Project: Cassandra > Issue Type: Bug >Reporter: Ilya Komolkin >Assignee: Joshua McKenzie > Fix For: 2.1.3 > > > 21:46:27.810 [NonPeriodicTasks:1] ERROR o.a.c.service.CassandraDaemon - > Exception in thread Thread[NonPeriodicTasks:1,5,main] > org.apache.cassandra.io.FSWriteError: java.nio.file.FileSystemException: > E:\Upsource_12391\data\cassandra\data\kernel\filechangehistory_t-a277b560764611e48c8e4915424c75fe\kernel-filechangehistory_t-ka-33-Index.db: > The process cannot access the file because it is being used by another > process. > > at > org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:135) > ~[cassandra-all-2.1.1.jar:2.1.1] > at > org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:121) > ~[cassandra-all-2.1.1.jar:2.1.1] > at > org.apache.cassandra.io.sstable.SSTable.delete(SSTable.java:113) > ~[cassandra-all-2.1.1.jar:2.1.1] > at > org.apache.cassandra.io.sstable.SSTableDeletingTask.run(SSTableDeletingTask.java:94) > ~[cassandra-all-2.1.1.jar:2.1.1] > at > org.apache.cassandra.io.sstable.SSTableReader$6.run(SSTableReader.java:664) > ~[cassandra-all-2.1.1.jar:2.1.1] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) > ~[na:1.7.0_71] > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > ~[na:1.7.0_71] > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178) > ~[na:1.7.0_71] > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292) > ~[na:1.7.0_71] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > ~[na:1.7.0_71] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > [na:1.7.0_71] > at 
java.lang.Thread.run(Thread.java:745) [na:1.7.0_71] > Caused by: java.nio.file.FileSystemException: > E:\Upsource_12391\data\cassandra\data\kernel\filechangehistory_t-a277b560764611e48c8e4915424c75fe\kernel-filechangehistory_t-ka-33-Index.db: > The process cannot access the file because it is being used by another > process. > > at > sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86) > ~[na:1.7.0_71] > at > sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97) > ~[na:1.7.0_71] > at > sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102) > ~[na:1.7.0_71] > at > sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269) > ~[na:1.7.0_71] > at > sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103) > ~[na:1.7.0_71] > at java.nio.file.Files.delete(Files.java:1079) ~[na:1.7.0_71] > at > org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:131) > ~[cassandra-all-
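The comment above points at an outside process holding a handle, and that matches why this failure is effectively Windows-only: POSIX filesystems let you unlink a file that is still open, while NTFS refuses the delete whenever any handle to the file was opened without FILE_SHARE_DELETE (an antivirus scanner, for example). Java's own NIO handles do request FILE_SHARE_DELETE, which is what the move to nio2 in CASSANDRA-4050 relies on. A sketch demonstrating the POSIX side:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// On POSIX, deleting a file while a channel still has it open succeeds
// (the inode lives until the last descriptor closes). On Windows the same
// delete throws FileSystemException if any handle lacks FILE_SHARE_DELETE.
public class DeleteWhileOpen {
    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("sketch", "-Index.db");
        try (FileChannel ch = FileChannel.open(p, StandardOpenOption.READ)) {
            Files.delete(p); // fine on POSIX even though ch is still open
            System.out.println("deleted while open: " + !Files.exists(p));
        }
    }
}
```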
[jira] [Updated] (CASSANDRA-8463) Upgrading 2.0 to 2.1 causes LCS to recompact all files
[ https://issues.apache.org/jira/browse/CASSANDRA-8463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rick Branson updated CASSANDRA-8463: Attachment: log-for-8463.txt > Upgrading 2.0 to 2.1 causes LCS to recompact all files > -- > > Key: CASSANDRA-8463 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8463 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: Hardware is recent 2-socket, 16-core (x2 Hyperthreaded), > 144G RAM, solid-state storage. > Platform is Linux 3.2.51, Oracle JDK 64-bit 1.7.0_65. > Heap is 32G total, 4G newsize. > 8G/8G on-heap/off-heap memtables, offheap_buffer allocator, 0.5 > memtable_cleanup_threshold > concurrent_compactors: 20 >Reporter: Rick Branson >Assignee: Marcus Eriksson > Fix For: 2.1.3 > > Attachments: log-for-8463.txt > > > It appears that tables configured with LCS will completely re-compact > themselves over some period of time after upgrading from 2.0 to 2.1 (2.0.11 > -> 2.1.2, specifically). It starts out with <10 pending tasks for an hour or > so, then starts building up, now with 50-100 tasks pending across the cluster > after 12 hours. These nodes are under heavy write load, but were easily able > to keep up in 2.0 (they rarely had >5 pending compaction tasks), so I don't > think it's LCS in 2.1 actually being worse, just perhaps some different LCS > behavior that causes the layout of tables from 2.0 to prompt the compactor to > reorganize them? > The nodes flushed ~11MB SSTables under 2.0. They're currently flushing ~36MB > SSTables due to the improved memtable setup in 2.1. Before I upgraded the > entire cluster to 2.1, I noticed the problem and tried several variations on > the flush size, thinking perhaps the larger tables in L0 were causing some > kind of cascading compactions. Even if they're sized roughly like the 2.0 > flushes were, same behavior occurs. 
I also tried both enabling & disabling > STCS in L0 with no real change other than L0 began to back up faster, so I > left the STCS in L0 enabled. > Tables are configured with 32MB sstable_size_in_mb, which was found to be an > improvement on the 160MB table size for compaction performance. Maybe this is > wrong now? Otherwise, the tables are configured with defaults. Compaction has > been unthrottled to help them catch-up. The compaction threads stay very > busy, with the cluster-wide CPU at 45% "nice" time. No nodes have completely > caught up yet. I'll update JIRA with status about their progress if anything > interesting happens. > From a node around 12 hours ago, around an hour after the upgrade, with 19 > pending compaction tasks: > SSTables in each level: [6/4, 10, 105/100, 268, 0, 0, 0, 0, 0] > SSTables in each level: [6/4, 10, 106/100, 271, 0, 0, 0, 0, 0] > SSTables in each level: [1, 16/10, 105/100, 269, 0, 0, 0, 0, 0] > SSTables in each level: [5/4, 10, 103/100, 272, 0, 0, 0, 0, 0] > SSTables in each level: [4, 11/10, 105/100, 270, 0, 0, 0, 0, 0] > SSTables in each level: [1, 12/10, 105/100, 271, 0, 0, 0, 0, 0] > SSTables in each level: [1, 14/10, 104/100, 267, 0, 0, 0, 0, 0] > SSTables in each level: [9/4, 10, 103/100, 265, 0, 0, 0, 0, 0] > Recently, with 41 pending compaction tasks: > SSTables in each level: [4, 13/10, 106/100, 269, 0, 0, 0, 0, 0] > SSTables in each level: [4, 12/10, 106/100, 273, 0, 0, 0, 0, 0] > SSTables in each level: [5/4, 11/10, 106/100, 271, 0, 0, 0, 0, 0] > SSTables in each level: [4, 12/10, 103/100, 275, 0, 0, 0, 0, 0] > SSTables in each level: [2, 13/10, 106/100, 273, 0, 0, 0, 0, 0] > SSTables in each level: [3, 10, 104/100, 275, 0, 0, 0, 0, 0] > SSTables in each level: [6/4, 11/10, 103/100, 269, 0, 0, 0, 0, 0] > SSTables in each level: [4, 16/10, 105/100, 264, 0, 0, 0, 0, 0] > More information about the use case: writes are roughly uniform across these > tables. 
The data is "sharded" across these 8 tables by key to improve > compaction parallelism. Each node receives up to 75,000 writes/sec sustained > at peak, and a small number of reads. This is a pre-production cluster that's > being warmed up with new data, so the low volume of reads (~100/sec per node) > is just from automatic sampled data checks, otherwise we'd just use STCS :) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
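The bracketed readouts quoted above (e.g. `[6/4, 10, 105/100, 268, ...]`) compare each level's actual sstable count against the LCS target for that level; any level over its target is eligible for compaction. A minimal sketch of that arithmetic, assuming the standard LCS fanout of 10 and reading the L0 target of 4 off the readouts themselves (an illustration, not Cassandra's compaction code):

```python
# Sketch of how LCS sizes its levels (assumption: standard fanout of 10,
# L0 target of 4 as shown in the "6/4" readouts above). A readout like
# "105/100" means level 2 holds 105 sstables against a target of 100,
# making it eligible for compaction.

FANOUT = 10  # assumed LCS level fanout

def level_target_sstables(level: int) -> int:
    """Target sstable count for a level; L0 is flush-driven, capped at 4 here."""
    return 4 if level == 0 else FANOUT ** level

def overfull_levels(counts):
    """Return the levels whose actual sstable count exceeds the target."""
    return [lvl for lvl, n in enumerate(counts)
            if n > level_target_sstables(lvl)]

# One of the readouts quoted above: [6/4, 10, 105/100, 268, 0, ...]
counts = [6, 10, 105, 268, 0, 0, 0, 0, 0]
print(overfull_levels(counts))  # [0, 2] -- L0 and L2 are over target
```

This is consistent with the reporter's observation: with 36MB flushes landing in L0 and L2 hovering just over 100 sstables, the compactor keeps finding work even though higher levels are far under target.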
[jira] [Updated] (CASSANDRA-8473) Secondary index support for key-value pairs in CQL3 maps
[ https://issues.apache.org/jira/browse/CASSANDRA-8473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tyler Hobbs updated CASSANDRA-8473: --- Assignee: Samuel Klock > Secondary index support for key-value pairs in CQL3 maps > > > Key: CASSANDRA-8473 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8473 > Project: Cassandra > Issue Type: Improvement >Reporter: Samuel Klock >Assignee: Samuel Klock > Attachments: cassandra-2.1-8473.txt > > > CASSANDRA-4511 and CASSANDRA-6383 made substantial progress on secondary > indexes on CQL3 maps, but support for a natural use case is still missing: > queries to find rows with map columns containing some key-value pair. For > example (from a comment on CASSANDRA-4511): > {code:sql} > SELECT * FROM main.users WHERE notify['email'] = true; > {code} > Cassandra should add support for this kind of index. One option is to expose > a CQL interface like the following: > * Creating an index: > {code:sql} > cqlsh:mykeyspace> CREATE TABLE mytable (key TEXT PRIMARY KEY, value MAP<TEXT, TEXT>); > cqlsh:mykeyspace> CREATE INDEX ON mytable(ENTRIES(value)); > {code} > * Querying the index: > {code:sql} > cqlsh:mykeyspace> INSERT INTO mytable (key, value) VALUES ('foo', {'a': '1', > 'b': '2', 'c': '3'}); > cqlsh:mykeyspace> INSERT INTO mytable (key, value) VALUES ('bar', {'a': '1', > 'b': '4'}); > cqlsh:mykeyspace> INSERT INTO mytable (key, value) VALUES ('baz', {'b': '4', > 'c': '3'}); > cqlsh:mykeyspace> SELECT * FROM mytable WHERE value['a'] = '1'; > key | value > -+ > bar | {'a': '1', 'b': '4'} > foo | {'a': '1', 'b': '2', 'c': '3'} > (2 rows) > cqlsh:mykeyspace> SELECT * FROM mytable WHERE value['a'] = '1' AND value['b'] > = '2' ALLOW FILTERING; > key | value > -+ > foo | {'a': '1', 'b': '2', 'c': '3'} > (1 rows) > cqlsh:mykeyspace> SELECT * FROM mytable WHERE value['b'] = '2' ALLOW > FILTERING; > key | value > -+ > foo | {'a': '1', 'b': '2', 'c': '3'} > (1 rows) > cqlsh:mykeyspace> SELECT * FROM mytable WHERE 
value['b'] = '4'; > key | value > -+-- > bar | {'a': '1', 'b': '4'} > baz | {'b': '4', 'c': '3'} > (2 rows) > {code} > A patch against the Cassandra-2.1 branch that implements this interface will > be attached to this issue shortly. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-8192) Better error logging on corrupt compressed SSTables: currently AssertionError in Memory.java
[ https://issues.apache.org/jira/browse/CASSANDRA-8192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joshua McKenzie updated CASSANDRA-8192: --- Priority: Minor (was: Major) Issue Type: Improvement (was: Bug) Summary: Better error logging on corrupt compressed SSTables: currently AssertionError in Memory.java (was: AssertionError in Memory.java) > Better error logging on corrupt compressed SSTables: currently AssertionError > in Memory.java > > > Key: CASSANDRA-8192 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8192 > Project: Cassandra > Issue Type: Improvement > Components: Core > Environment: Windows-7-32 bit, 3GB RAM, Java 1.7.0_67 >Reporter: Andreas Schnitzerling >Assignee: Joshua McKenzie >Priority: Minor > Fix For: 2.1.3 > > Attachments: 8192_v1.txt, cassandra.bat, cassandra.yaml, > logdata-onlinedata-ka-196504-CompressionInfo.zip, printChunkOffsetErrors.txt, > system-compactions_in_progress-ka-47594-CompressionInfo.zip, > system-sstable_activity-jb-25-Filter.zip, system.log, system_AssertionTest.log > > > Since updating 1 of 12 nodes from 2.1.0-rel to 2.1.1-rel, an exception occurs during > start up. 
> {panel:title=system.log} > ERROR [SSTableBatchOpen:1] 2014-10-27 09:44:00,079 CassandraDaemon.java:153 - > Exception in thread Thread[SSTableBatchOpen:1,5,main] > java.lang.AssertionError: null > at org.apache.cassandra.io.util.Memory.size(Memory.java:307) > ~[apache-cassandra-2.1.1.jar:2.1.1] > at > org.apache.cassandra.io.compress.CompressionMetadata.<init>(CompressionMetadata.java:135) > ~[apache-cassandra-2.1.1.jar:2.1.1] > at > org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:83) > ~[apache-cassandra-2.1.1.jar:2.1.1] > at > org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:50) > ~[apache-cassandra-2.1.1.jar:2.1.1] > at > org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:48) > ~[apache-cassandra-2.1.1.jar:2.1.1] > at > org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:766) > ~[apache-cassandra-2.1.1.jar:2.1.1] > at > org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:725) > ~[apache-cassandra-2.1.1.jar:2.1.1] > at > org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:402) > ~[apache-cassandra-2.1.1.jar:2.1.1] > at > org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:302) > ~[apache-cassandra-2.1.1.jar:2.1.1] > at > org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:438) > ~[apache-cassandra-2.1.1.jar:2.1.1] > at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) > ~[na:1.7.0_55] > at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0_55] > at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) > [na:1.7.0_55] > at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) > [na:1.7.0_55] > at java.lang.Thread.run(Unknown Source) [na:1.7.0_55] > {panel} > In the attached log you can also still see CASSANDRA-8069 and > CASSANDRA-6283. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
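The retitled summary asks for a descriptive error in place of the bare AssertionError thrown from Memory.size() when the CompressionInfo component is corrupt. A hypothetical sketch of that kind of up-front validation (class, function, and parameter names are invented for illustration; this is not the attached 8192_v1.txt patch):

```python
# Hypothetical sketch of the improvement the retitled ticket asks for:
# instead of a bare assertion failing deep in memory-allocation code,
# validate the compression metadata up front and raise an error that
# names the corrupt file. All names here are invented for illustration.

class CorruptSSTableException(Exception):
    """Stand-in for a descriptive, catchable corruption error."""

def validate_chunk_offsets(path: str, chunk_offsets_size: int) -> None:
    """Fail loudly, naming the file, rather than via AssertionError: null."""
    if chunk_offsets_size <= 0:
        raise CorruptSSTableException(
            f"Corrupt CompressionInfo component for {path}: "
            f"non-positive chunk offsets size {chunk_offsets_size}")

# A healthy component passes silently; a corrupt one produces an
# actionable message instead of an opaque stack trace at startup.
validate_chunk_offsets("ok-ka-2-Data.db", 128)
```

The operational benefit is the same one the summary change implies: an operator seeing the log can tell which sstable to remove or re-stream, instead of diagnosing an `AssertionError: null` in Memory.java.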
[jira] [Updated] (CASSANDRA-8473) Secondary index support for key-value pairs in CQL3 maps
[ https://issues.apache.org/jira/browse/CASSANDRA-8473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tyler Hobbs updated CASSANDRA-8473: --- Reviewer: Tyler Hobbs > Secondary index support for key-value pairs in CQL3 maps > > > Key: CASSANDRA-8473 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8473 > Project: Cassandra > Issue Type: Improvement >Reporter: Samuel Klock > Attachments: cassandra-2.1-8473.txt > > > CASSANDRA-4511 and CASSANDRA-6383 made substantial progress on secondary > indexes on CQL3 maps, but support for a natural use case is still missing: > queries to find rows with map columns containing some key-value pair. For > example (from a comment on CASSANDRA-4511): > {code:sql} > SELECT * FROM main.users WHERE notify['email'] = true; > {code} > Cassandra should add support for this kind of index. One option is to expose > a CQL interface like the following: > * Creating an index: > {code:sql} > cqlsh:mykeyspace> CREATE TABLE mytable (key TEXT PRIMARY KEY, value MAP<TEXT, TEXT>); > cqlsh:mykeyspace> CREATE INDEX ON mytable(ENTRIES(value)); > {code} > * Querying the index: > {code:sql} > cqlsh:mykeyspace> INSERT INTO mytable (key, value) VALUES ('foo', {'a': '1', > 'b': '2', 'c': '3'}); > cqlsh:mykeyspace> INSERT INTO mytable (key, value) VALUES ('bar', {'a': '1', > 'b': '4'}); > cqlsh:mykeyspace> INSERT INTO mytable (key, value) VALUES ('baz', {'b': '4', > 'c': '3'}); > cqlsh:mykeyspace> SELECT * FROM mytable WHERE value['a'] = '1'; > key | value > -+ > bar | {'a': '1', 'b': '4'} > foo | {'a': '1', 'b': '2', 'c': '3'} > (2 rows) > cqlsh:mykeyspace> SELECT * FROM mytable WHERE value['a'] = '1' AND value['b'] > = '2' ALLOW FILTERING; > key | value > -+ > foo | {'a': '1', 'b': '2', 'c': '3'} > (1 rows) > cqlsh:mykeyspace> SELECT * FROM mytable WHERE value['b'] = '2' ALLOW > FILTERING; > key | value > -+ > foo | {'a': '1', 'b': '2', 'c': '3'} > (1 rows) > cqlsh:mykeyspace> SELECT * FROM mytable WHERE value['b'] = '4'; > key | value > 
-+-- > bar | {'a': '1', 'b': '4'} > baz | {'b': '4', 'c': '3'} > (2 rows) > {code} > A patch against the Cassandra-2.1 branch that implements this interface will > be attached to this issue shortly. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8374) Better support of null for UDF
[ https://issues.apache.org/jira/browse/CASSANDRA-8374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244506#comment-14244506 ] Jonathan Ellis commented on CASSANDRA-8374: --- bq. let's stick to boxed types for return types and say that no ALLOW NULLS simply means "If any of the arguments is null, the function is not called but null is returned instead". Shouldn't we raise an error instead of silently short circuiting to null? > Better support of null for UDF > -- > > Key: CASSANDRA-8374 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8374 > Project: Cassandra > Issue Type: Bug >Reporter: Sylvain Lebresne >Assignee: Robert Stupp > Fix For: 3.0 > > > Currently, every function needs to deal with its arguments potentially being > {{null}}. There are many cases where that's just annoying; users should be > able to define a function like: > {noformat} > CREATE FUNCTION addTwo(val int) RETURNS int LANGUAGE JAVA AS 'return val + 2;' > {noformat} > without it crashing as soon as a column it's applied to doesn't have a > value for some rows (I'll note that this definition apparently cannot be > compiled currently, which should be looked into). > In fact, I think that by default methods shouldn't have to care about > {{null}} values: if the value is {{null}}, we should not call the method at > all and return {{null}}. There are still methods that may explicitly want to > handle {{null}} (to return a default value for instance), so maybe we can add > an {{ALLOW NULLS}} option to the creation syntax. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
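The default semantics being proposed (without ALLOW NULLS, skip the call and return null when any argument is null) can be sketched as a wrapper around the user's function; this illustrates only the proposed behaviour, not Cassandra's UDF machinery:

```python
# Sketch of the proposed default UDF null semantics: if any argument is
# null (None), short-circuit to null without invoking the function body.
# This is an illustration of the semantics under discussion, not the
# Cassandra implementation.

def null_short_circuit(fn):
    """Return None for any None argument; otherwise call fn normally."""
    def wrapped(*args):
        if any(a is None for a in args):
            return None
        return fn(*args)
    return wrapped

@null_short_circuit
def add_two(val):
    # The ticket's example: 'return val + 2;' -- would crash on null
    # without the wrapper.
    return val + 2

print(add_two(3))     # 5
print(add_two(None))  # None -- the body is never called
```

Jonathan Ellis's question amounts to: should the `return None` branch instead raise, so a missing value is an error rather than silently propagated null?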
[1/2] cassandra git commit: Release references to sstables on failed SSTableWriter.openEarly
Repository: cassandra Updated Branches: refs/heads/trunk 4125ca0aa -> 66789fe67 Release references to sstables on failed SSTableWriter.openEarly Patch by jmckenzie; reviewed by benedict for CASSANDRA-8248 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c3ac6baa Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c3ac6baa Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c3ac6baa Branch: refs/heads/trunk Commit: c3ac6baac7bfea790a74fe7bc0a62a65202cb67e Parents: 24e895c Author: Joshua McKenzie Authored: Fri Dec 12 11:48:37 2014 -0600 Committer: Joshua McKenzie Committed: Fri Dec 12 11:48:37 2014 -0600 -- src/java/org/apache/cassandra/io/sstable/SSTableWriter.java | 7 +++ 1 file changed, 7 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/c3ac6baa/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java -- diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java b/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java index 53176e3..ec64561 100644 --- a/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java +++ b/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java @@ -413,7 +413,11 @@ public class SSTableWriter extends SSTable sstable.last = getMinimalKey(exclusiveUpperBoundOfReadableIndex); DecoratedKey inclusiveUpperBoundOfReadableData = iwriter.getMaxReadableKey(1); if (inclusiveUpperBoundOfReadableData == null) +{ +// Prevent leaving tmplink files on disk +sstable.releaseReference(); return null; +} int offset = 2; while (true) { @@ -422,7 +426,10 @@ public class SSTableWriter extends SSTable break; inclusiveUpperBoundOfReadableData = iwriter.getMaxReadableKey(offset++); if (inclusiveUpperBoundOfReadableData == null) +{ +sstable.releaseReference(); return null; +} } sstable.last = getMinimalKey(inclusiveUpperBoundOfReadableData); return sstable;
[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into trunk
Merge branch 'cassandra-2.1' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/66789fe6 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/66789fe6 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/66789fe6 Branch: refs/heads/trunk Commit: 66789fe674479a868b31d9edf002fe0d3dd0fd46 Parents: 4125ca0 c3ac6ba Author: Joshua McKenzie Authored: Fri Dec 12 11:49:29 2014 -0600 Committer: Joshua McKenzie Committed: Fri Dec 12 11:49:29 2014 -0600 -- .../cassandra/io/sstable/format/big/BigTableWriter.java | 7 +++ 1 file changed, 7 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/66789fe6/src/java/org/apache/cassandra/io/sstable/format/big/BigTableWriter.java -- diff --cc src/java/org/apache/cassandra/io/sstable/format/big/BigTableWriter.java index 5221509,000..7c68c8a mode 100644,00..100644 --- a/src/java/org/apache/cassandra/io/sstable/format/big/BigTableWriter.java +++ b/src/java/org/apache/cassandra/io/sstable/format/big/BigTableWriter.java @@@ -1,550 -1,0 +1,557 @@@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.cassandra.io.sstable.format.big; + +import java.io.Closeable; +import java.io.DataInput; +import java.io.File; +import java.io.FileOutputStream; +import java.io.IOException; +import java.nio.ByteBuffer; +import java.util.Collections; +import java.util.Iterator; +import java.util.List; +import java.util.Map; +import java.util.Set; + +import org.apache.cassandra.db.*; +import org.apache.cassandra.io.sstable.*; +import org.apache.cassandra.io.sstable.format.SSTableReader; +import org.apache.cassandra.io.sstable.format.SSTableWriter; +import org.apache.cassandra.io.sstable.format.Version; +import org.apache.cassandra.io.util.*; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import org.apache.cassandra.config.CFMetaData; +import org.apache.cassandra.config.DatabaseDescriptor; +import org.apache.cassandra.db.compaction.AbstractCompactedRow; +import org.apache.cassandra.dht.IPartitioner; +import org.apache.cassandra.io.FSWriteError; +import org.apache.cassandra.io.compress.CompressedSequentialWriter; +import org.apache.cassandra.io.sstable.metadata.MetadataCollector; +import org.apache.cassandra.io.sstable.metadata.MetadataComponent; +import org.apache.cassandra.io.sstable.metadata.MetadataType; +import org.apache.cassandra.io.sstable.metadata.StatsMetadata; +import org.apache.cassandra.io.util.DataOutputPlus; +import org.apache.cassandra.io.util.DataOutputStreamAndChannel; +import org.apache.cassandra.io.util.FileMark; +import org.apache.cassandra.io.util.FileUtils; +import org.apache.cassandra.io.util.SegmentedFile; +import org.apache.cassandra.io.util.SequentialWriter; +import org.apache.cassandra.service.ActiveRepairService; +import org.apache.cassandra.service.StorageService; +import org.apache.cassandra.utils.ByteBufferUtil; +import org.apache.cassandra.utils.FBUtilities; +import org.apache.cassandra.utils.FilterFactory; +import org.apache.cassandra.utils.IFilter; +import org.apache.cassandra.utils.Pair; +import 
org.apache.cassandra.utils.StreamingHistogram; + +public class BigTableWriter extends SSTableWriter +{ +private static final Logger logger = LoggerFactory.getLogger(BigTableWriter.class); + +// not very random, but the only value that can't be mistaken for a legal column-name length +public static final int END_OF_ROW = 0x0000; + +private IndexWriter iwriter; +private SegmentedFile.Builder dbuilder; +private final SequentialWriter dataFile; +private DecoratedKey lastWrittenKey; +private FileMark dataMark; + +BigTableWriter(Descriptor descriptor, Long keyCount, Long repairedAt, CFMetaData metadata, IPartitioner partitioner, MetadataCollector metadataCollector) +{ +super(descriptor, keyCount, repairedAt, metadata, partit
cassandra git commit: Release references to sstables on failed SSTableWriter.openEarly
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1 24e895c4c -> c3ac6baac Release references to sstables on failed SSTableWriter.openEarly Patch by jmckenzie; reviewed by benedict for CASSANDRA-8248 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c3ac6baa Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c3ac6baa Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c3ac6baa Branch: refs/heads/cassandra-2.1 Commit: c3ac6baac7bfea790a74fe7bc0a62a65202cb67e Parents: 24e895c Author: Joshua McKenzie Authored: Fri Dec 12 11:48:37 2014 -0600 Committer: Joshua McKenzie Committed: Fri Dec 12 11:48:37 2014 -0600 -- src/java/org/apache/cassandra/io/sstable/SSTableWriter.java | 7 +++ 1 file changed, 7 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/c3ac6baa/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java -- diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java b/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java index 53176e3..ec64561 100644 --- a/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java +++ b/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java @@ -413,7 +413,11 @@ public class SSTableWriter extends SSTable sstable.last = getMinimalKey(exclusiveUpperBoundOfReadableIndex); DecoratedKey inclusiveUpperBoundOfReadableData = iwriter.getMaxReadableKey(1); if (inclusiveUpperBoundOfReadableData == null) +{ +// Prevent leaving tmplink files on disk +sstable.releaseReference(); return null; +} int offset = 2; while (true) { @@ -422,7 +426,10 @@ public class SSTableWriter extends SSTable break; inclusiveUpperBoundOfReadableData = iwriter.getMaxReadableKey(offset++); if (inclusiveUpperBoundOfReadableData == null) +{ +sstable.releaseReference(); return null; +} } sstable.last = getMinimalKey(inclusiveUpperBoundOfReadableData); return sstable;
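Both early-return paths in the patch above call releaseReference() before returning null, so the tmplink files backing the half-opened sstable get cleaned up. The general shape of the fix can be sketched in Python (names invented; this mirrors the control flow, not the Cassandra code):

```python
# Sketch of the fix's shape: a reference-counted resource must be
# released on every failure exit, or the temporary (tmplink) files
# backing it leak. Names are invented for illustration.

class RefCounted:
    def __init__(self):
        self.refs = 1
        self.released = False
    def release(self):
        self.refs -= 1
        if self.refs == 0:
            self.released = True  # stand-in for deleting tmplink files

def open_early(sstable: RefCounted, get_max_readable_key):
    """Mirrors the patched control flow: release before each null return."""
    if get_max_readable_key(1) is None:
        sstable.release()  # the fix: don't leak the reference on failure
        return None
    return sstable
```

Before the patch, the two `return null` statements skipped the release, which is exactly the leak CASSANDRA-8248 reports.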
[jira] [Updated] (CASSANDRA-8473) Secondary index support for key-value pairs in CQL3 maps
[ https://issues.apache.org/jira/browse/CASSANDRA-8473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Samuel Klock updated CASSANDRA-8473: Attachment: cassandra-2.1-8473.txt Attaching proposed patch. > Secondary index support for key-value pairs in CQL3 maps > > > Key: CASSANDRA-8473 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8473 > Project: Cassandra > Issue Type: Improvement >Reporter: Samuel Klock > Attachments: cassandra-2.1-8473.txt > > > CASSANDRA-4511 and CASSANDRA-6383 made substantial progress on secondary > indexes on CQL3 maps, but support for a natural use case is still missing: > queries to find rows with map columns containing some key-value pair. For > example (from a comment on CASSANDRA-4511): > {code:sql} > SELECT * FROM main.users WHERE notify['email'] = true; > {code} > Cassandra should add support for this kind of index. One option is to expose > a CQL interface like the following: > * Creating an index: > {code:sql} > cqlsh:mykeyspace> CREATE TABLE mytable (key TEXT PRIMARY KEY, value MAP<TEXT, TEXT>); > cqlsh:mykeyspace> CREATE INDEX ON mytable(ENTRIES(value)); > {code} > * Querying the index: > {code:sql} > cqlsh:mykeyspace> INSERT INTO mytable (key, value) VALUES ('foo', {'a': '1', > 'b': '2', 'c': '3'}); > cqlsh:mykeyspace> INSERT INTO mytable (key, value) VALUES ('bar', {'a': '1', > 'b': '4'}); > cqlsh:mykeyspace> INSERT INTO mytable (key, value) VALUES ('baz', {'b': '4', > 'c': '3'}); > cqlsh:mykeyspace> SELECT * FROM mytable WHERE value['a'] = '1'; > key | value > -+ > bar | {'a': '1', 'b': '4'} > foo | {'a': '1', 'b': '2', 'c': '3'} > (2 rows) > cqlsh:mykeyspace> SELECT * FROM mytable WHERE value['a'] = '1' AND value['b'] > = '2' ALLOW FILTERING; > key | value > -+ > foo | {'a': '1', 'b': '2', 'c': '3'} > (1 rows) > cqlsh:mykeyspace> SELECT * FROM mytable WHERE value['b'] = '2' ALLOW > FILTERING; > key | value > -+ > foo | {'a': '1', 'b': '2', 'c': '3'} > (1 rows) > cqlsh:mykeyspace> SELECT * FROM mytable 
WHERE value['b'] = '4'; > key | value > -+-- > bar | {'a': '1', 'b': '4'} > baz | {'b': '4', 'c': '3'} > (2 rows) > {code} > A patch against the Cassandra-2.1 branch that implements this interface will > be attached to this issue shortly. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-8410) Select with many IN values on clustering columns can result in a StackOverflowError
[ https://issues.apache.org/jira/browse/CASSANDRA-8410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tyler Hobbs updated CASSANDRA-8410: --- Attachment: 8410-2.1-v2.txt > Select with many IN values on clustering columns can result in a > StackOverflowError > --- > > Key: CASSANDRA-8410 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8410 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Tyler Hobbs >Assignee: Tyler Hobbs > Fix For: 2.0.12, 2.1.3 > > Attachments: 8410-2.0.txt, 8410-2.1-v2.txt, 8410-2.1.txt > > > When executing a SELECT statement with an IN clause on the clustering > columns, a StackOverflowError can occur if the memtable doesn't contain any > of the requested slices. In 2.0, this happens with the following stack trace: > {noformat} > ERROR [ReadStage:23] 2014-12-02 14:53:11,077 CassandraDaemon.java (line 199) > Exception in thread Thread[ReadStage:23,5,main] > java.lang.StackOverflowError > at org.apache.cassandra.db.marshal.Int32Type.compare(Int32Type.java:52) > at org.apache.cassandra.db.marshal.Int32Type.compare(Int32Type.java:28) > at > org.apache.cassandra.db.marshal.AbstractType.compareCollectionMembers(AbstractType.java:279) > at > org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:64) > at > org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:36) > at edu.stanford.ppl.concurrent.SnapTreeMap$1.compareTo(SnapTreeMap.java:538) > at edu.stanford.ppl.concurrent.SnapTreeMap.boundedMax(SnapTreeMap.java:905) > at > edu.stanford.ppl.concurrent.SnapTreeMap.boundedExtreme(SnapTreeMap.java:833) > at edu.stanford.ppl.concurrent.SnapTreeMap.access$1000(SnapTreeMap.java:90) > at > edu.stanford.ppl.concurrent.SnapTreeMap$AbstractIter.<init>(SnapTreeMap.java:2028) > at > edu.stanford.ppl.concurrent.SnapTreeMap$EntryIter.<init>(SnapTreeMap.java:1951) > at > edu.stanford.ppl.concurrent.SnapTreeMap$EntryIter.<init>(SnapTreeMap.java:1940) > at > 
edu.stanford.ppl.concurrent.SnapTreeMap$SubMap$EntrySubSet.iterator(SnapTreeMap.java:2462) > at java.util.AbstractMap$2$1.<init>(AbstractMap.java:378) > at java.util.AbstractMap$2.iterator(AbstractMap.java:377) > at > org.apache.cassandra.db.filter.ColumnSlice$NavigableMapIterator.computeNext(ColumnSlice.java:154) > at > org.apache.cassandra.db.filter.ColumnSlice$NavigableMapIterator.computeNext(ColumnSlice.java:162) > at > org.apache.cassandra.db.filter.ColumnSlice$NavigableMapIterator.computeNext(ColumnSlice.java:162) > {noformat} > In 2.1, there's a similar error, but it occurs in AtomicBTreeColumns. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
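The repeated `computeNext(ColumnSlice.java:162)` frames show the iterator calling itself once per slice that misses the memtable, so an IN list with thousands of misses grows the stack linearly until it overflows. The usual fix for this pattern is to replace the self-recursion with a loop; a sketch of that transformation (not the actual ColumnSlice code):

```python
# Sketch of the StackOverflowError pattern and its usual fix. Each IN
# value produces a "slice"; when a slice is empty, the recursive
# computeNext calls itself for the next slice, adding a stack frame per
# miss. The iterative form does the same work in constant stack depth.

def compute_next_recursive(slices, i=0):
    """One stack frame per empty slice -- overflows on long runs of misses."""
    if i >= len(slices):
        return None
    if slices[i]:
        return slices[i][0]
    return compute_next_recursive(slices, i + 1)

def compute_next_iterative(slices):
    """Same result, constant stack depth."""
    for s in slices:
        if s:
            return s[0]
    return None

# 100,000 empty slices before the first hit: fine iteratively, but the
# recursive form would exhaust the stack long before reaching the hit.
many_misses = [[] for _ in range(100000)] + [[42]]
print(compute_next_iterative(many_misses))  # 42
```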
[jira] [Created] (CASSANDRA-8473) Secondary index support for key-value pairs in CQL3 maps
Samuel Klock created CASSANDRA-8473: --- Summary: Secondary index support for key-value pairs in CQL3 maps Key: CASSANDRA-8473 URL: https://issues.apache.org/jira/browse/CASSANDRA-8473 Project: Cassandra Issue Type: Improvement Reporter: Samuel Klock CASSANDRA-4511 and CASSANDRA-6383 made substantial progress on secondary indexes on CQL3 maps, but support for a natural use case is still missing: queries to find rows with map columns containing some key-value pair. For example (from a comment on CASSANDRA-4511): {code:sql} SELECT * FROM main.users WHERE notify['email'] = true; {code} Cassandra should add support for this kind of index. One option is to expose a CQL interface like the following: * Creating an index: {code:sql} cqlsh:mykeyspace> CREATE TABLE mytable (key TEXT PRIMARY KEY, value MAP<TEXT, TEXT>); cqlsh:mykeyspace> CREATE INDEX ON mytable(ENTRIES(value)); {code} * Querying the index: {code:sql} cqlsh:mykeyspace> INSERT INTO mytable (key, value) VALUES ('foo', {'a': '1', 'b': '2', 'c': '3'}); cqlsh:mykeyspace> INSERT INTO mytable (key, value) VALUES ('bar', {'a': '1', 'b': '4'}); cqlsh:mykeyspace> INSERT INTO mytable (key, value) VALUES ('baz', {'b': '4', 'c': '3'}); cqlsh:mykeyspace> SELECT * FROM mytable WHERE value['a'] = '1'; key | value -+ bar | {'a': '1', 'b': '4'} foo | {'a': '1', 'b': '2', 'c': '3'} (2 rows) cqlsh:mykeyspace> SELECT * FROM mytable WHERE value['a'] = '1' AND value['b'] = '2' ALLOW FILTERING; key | value -+ foo | {'a': '1', 'b': '2', 'c': '3'} (1 rows) cqlsh:mykeyspace> SELECT * FROM mytable WHERE value['b'] = '2' ALLOW FILTERING; key | value -+ foo | {'a': '1', 'b': '2', 'c': '3'} (1 rows) cqlsh:mykeyspace> SELECT * FROM mytable WHERE value['b'] = '4'; key | value -+-- bar | {'a': '1', 'b': '4'} baz | {'b': '4', 'c': '3'} (2 rows) {code} A patch against the Cassandra-2.1 branch that implements this interface will be attached to this issue shortly. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
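Conceptually, an ENTRIES(value) index inverts the map column: each (map key, map value) pair points at the rows containing it, which is what makes the equality queries in the example cheap. A toy sketch of that inverted index over the example rows (an illustration of the concept only, not Cassandra's secondary-index implementation):

```python
# Toy sketch of an "entries" index on a map column: every
# (map-key, map-value) pair maps to the set of row keys containing it.
from collections import defaultdict

rows = {
    'foo': {'a': '1', 'b': '2', 'c': '3'},
    'bar': {'a': '1', 'b': '4'},
    'baz': {'b': '4', 'c': '3'},
}

index = defaultdict(set)
for row_key, m in rows.items():
    for k, v in m.items():
        index[(k, v)].add(row_key)

# Equivalent of: SELECT * FROM mytable WHERE value['b'] = '4';
print(sorted(index[('b', '4')]))  # ['bar', 'baz']
```

The query results match the cqlsh transcript above: looking up the pair ('b', '4') yields rows bar and baz directly, with no scan over all rows.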
[3/3] cassandra git commit: Merge branch 'cassandra-2.1' into trunk
Merge branch 'cassandra-2.1' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4125ca0a Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4125ca0a Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4125ca0a Branch: refs/heads/trunk Commit: 4125ca0aaa73cd7e1d52718c7ec1a145696cc957 Parents: e530f42 24e895c Author: Tyler Hobbs Authored: Fri Dec 12 11:43:02 2014 -0600 Committer: Tyler Hobbs Committed: Fri Dec 12 11:43:02 2014 -0600 -- CHANGES.txt | 2 ++ .../apache/cassandra/db/AtomicBTreeColumns.java | 27 ++-- .../cql3/SingleColumnRelationTest.java | 16 3 files changed, 32 insertions(+), 13 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/4125ca0a/CHANGES.txt -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/4125ca0a/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/4125ca0a/test/unit/org/apache/cassandra/cql3/SingleColumnRelationTest.java -- diff --cc test/unit/org/apache/cassandra/cql3/SingleColumnRelationTest.java index e6412a3,2ad4bda..0fd300b --- a/test/unit/org/apache/cassandra/cql3/SingleColumnRelationTest.java +++ b/test/unit/org/apache/cassandra/cql3/SingleColumnRelationTest.java @@@ -17,10 -17,11 +17,13 @@@ */ package org.apache.cassandra.cql3; +import java.util.Arrays; + import org.junit.Test; + import java.util.ArrayList; + import java.util.List; + public class SingleColumnRelationTest extends CQLTester { @Test @@@ -47,336 -45,23 +50,349 @@@ execute("INSERT INTO %s (a, b, c) VALUES (0, {0}, 0)"); // non-EQ operators -assertInvalid("SELECT * FROM %s WHERE c = 0 AND b > ?", set(0)); -assertInvalid("SELECT * FROM %s WHERE c = 0 AND b >= ?", set(0)); -assertInvalid("SELECT * FROM %s WHERE c = 0 AND b < ?", set(0)); -assertInvalid("SELECT * FROM %s WHERE c = 0 AND b <= ?", set(0)); -assertInvalid("SELECT * FROM %s WHERE c = 0 AND b IN (?)", set(0)); 
+assertInvalidMessage("Collection column 'b' (set) cannot be restricted by a '>' relation", + "SELECT * FROM %s WHERE c = 0 AND b > ?", set(0)); +assertInvalidMessage("Collection column 'b' (set) cannot be restricted by a '>=' relation", + "SELECT * FROM %s WHERE c = 0 AND b >= ?", set(0)); +assertInvalidMessage("Collection column 'b' (set) cannot be restricted by a '<' relation", + "SELECT * FROM %s WHERE c = 0 AND b < ?", set(0)); +assertInvalidMessage("Collection column 'b' (set) cannot be restricted by a '<=' relation", + "SELECT * FROM %s WHERE c = 0 AND b <= ?", set(0)); +assertInvalidMessage("Collection column 'b' (set) cannot be restricted by a 'IN' relation", + "SELECT * FROM %s WHERE c = 0 AND b IN (?)", set(0)); +} + +@Test +public void testClusteringColumnRelations() throws Throwable +{ +createTable("CREATE TABLE %s (a text, b int, c int, d int, primary key(a, b, c))"); +execute("insert into %s (a, b, c, d) values (?, ?, ?, ?)", "first", 1, 5, 1); +execute("insert into %s (a, b, c, d) values (?, ?, ?, ?)", "first", 2, 6, 2); +execute("insert into %s (a, b, c, d) values (?, ?, ?, ?)", "first", 3, 7, 3); +execute("insert into %s (a, b, c, d) values (?, ?, ?, ?)", "second", 4, 8, 4); + +testSelectQueriesWithClusteringColumnRelations(); +} + +@Test +public void testClusteringColumnRelationsWithCompactStorage() throws Throwable +{ +createTable("CREATE TABLE %s (a text, b int, c int, d int, primary key(a, b, c)) WITH COMPACT STORAGE;"); +execute("insert into %s (a, b, c, d) values (?, ?, ?, ?)", "first", 1, 5, 1); +execute("insert into %s (a, b, c, d) values (?, ?, ?, ?)", "first", 2, 6, 2); +execute("insert into %s (a, b, c, d) values (?, ?, ?, ?)", "first", 3, 7, 3); +execute("insert into %s (a, b, c, d) values (?, ?, ?, ?)", "second", 4, 8, 4); + +testSelectQueriesWithClusteringColumnRelations(); +} + +private void testSelectQueriesWithClusteringColumnRelations() throws Throwable +{ +assertRows(execute("select * from %s where a in (?, ?)", "first", 
"second"), + row("first", 1, 5, 1), + row("first", 2, 6, 2), +
[2/2] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1
Merge branch 'cassandra-2.0' into cassandra-2.1

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/24e895c4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/24e895c4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/24e895c4
Branch: refs/heads/cassandra-2.1
Commit: 24e895c4c73dcc1f849232c6ae54c73bc16ab831
Parents: 025a635 9dc9185
Author: Tyler Hobbs
Authored: Fri Dec 12 11:42:42 2014 -0600
Committer: Tyler Hobbs
Committed: Fri Dec 12 11:42:42 2014 -0600

 CHANGES.txt                                     |  2 ++
 .../apache/cassandra/db/AtomicBTreeColumns.java | 27 ++--
 .../cql3/SingleColumnRelationTest.java          | 16
 3 files changed, 32 insertions(+), 13 deletions(-)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/24e895c4/CHANGES.txt

diff --cc CHANGES.txt
index 579fd62,6cecf99..5402ad5
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,29 -1,6 +1,31 @@@
-2.0.12:
+2.1.3
+ * Scale memtable slab allocation logarithmically (CASSANDRA-7882)
+ * cassandra-stress simultaneous inserts over same seed (CASSANDRA-7964)
+ * Reduce cassandra-stress sampling memory requirements (CASSANDRA-7926)
+ * Ensure memtable flush cannot expire commit log entries from its future (CASSANDRA-8383)
+ * Make read "defrag" async to reclaim memtables (CASSANDRA-8459)
+ * Remove tmplink files for offline compactions (CASSANDRA-8321)
+ * Reduce maxHintsInProgress (CASSANDRA-8415)
+ * BTree updates may call provided update function twice (CASSANDRA-8018)
+ * Release sstable references after anticompaction (CASSANDRA-8386)
+ * Handle abort() in SSTableRewriter properly (CASSANDRA-8320)
+ * Fix high size calculations for prepared statements (CASSANDRA-8231)
+ * Centralize shared executors (CASSANDRA-8055)
+ * Fix filtering for CONTAINS (KEY) relations on frozen collection
+   clustering columns when the query is restricted to a single
+   partition (CASSANDRA-8203)
+ * Do more aggressive entire-sstable TTL expiry checks (CASSANDRA-8243)
+ * Add more log info if readMeter is null (CASSANDRA-8238)
+ * add check of the system wall clock time at startup (CASSANDRA-8305)
+ * Support for frozen collections (CASSANDRA-7859)
+ * Fix overflow on histogram computation (CASSANDRA-8028)
+ * Have paxos reuse the timestamp generation of normal queries (CASSANDRA-7801)
+ * Fix incremental repair not remove parent session on remote (CASSANDRA-8291)
+ * Improve JBOD disk utilization (CASSANDRA-7386)
+ * Log failed host when preparing incremental repair (CASSANDRA-8228)
+Merged from 2.0:
+ * Avoid StackOverflowError when a large list of IN values
+   is used for a clustering column (CASSANDRA-8410)
  * Fix NPE when writetime() or ttl() calls are wrapped by
    another function call (CASSANDRA-8451)
  * Fix NPE after dropping a keyspace (CASSANDRA-8332)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/24e895c4/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java

diff --cc src/java/org/apache/cassandra/db/AtomicBTreeColumns.java
index 372ce5c,000..dc2b5ee
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java
+++ b/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java
@@@ -1,558 -1,0 +1,559 @@@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.db;
+
+import java.util.AbstractCollection;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Comparator;
+import java.util.Iterator;
+import java.util.List;
+import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;
+import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;
+
+import com.google.common.base.Function;
+import com.google.common.base.Functions;
+import com.google.common.collect.AbstractIterator;
+import com.google.common.collect.Iterators;
+
+import org.
[1/3] cassandra git commit: Avoid stack overflow on large clustering IN values
Repository: cassandra
Updated Branches:
  refs/heads/trunk e530f4230 -> 4125ca0aa

Avoid stack overflow on large clustering IN values

Patch by Tyler Hobbs; reviewed by Benjamin Lerer for CASSANDRA-8410

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9dc9185f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9dc9185f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9dc9185f
Branch: refs/heads/trunk
Commit: 9dc9185f5c7172915485f713dbbb6b78b22d0f66
Parents: 3f3d0ed
Author: Tyler Hobbs
Authored: Fri Dec 12 11:41:06 2014 -0600
Committer: Tyler Hobbs
Committed: Fri Dec 12 11:41:06 2014 -0600

 CHANGES.txt                                     |  2 +
 .../apache/cassandra/db/filter/ColumnSlice.java | 48 ++--
 2 files changed, 26 insertions(+), 24 deletions(-)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9dc9185f/CHANGES.txt

diff --git a/CHANGES.txt b/CHANGES.txt
index cc426bb..6cecf99 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.0.12:
+ * Avoid StackOverflowError when a large list of IN values
+   is used for a clustering column (CASSANDRA-8410)
  * Fix NPE when writetime() or ttl() calls are wrapped by
    another function call (CASSANDRA-8451)
  * Fix NPE after dropping a keyspace (CASSANDRA-8332)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9dc9185f/src/java/org/apache/cassandra/db/filter/ColumnSlice.java

diff --git a/src/java/org/apache/cassandra/db/filter/ColumnSlice.java b/src/java/org/apache/cassandra/db/filter/ColumnSlice.java
index 9eff12a..6a9efbb 100644
--- a/src/java/org/apache/cassandra/db/filter/ColumnSlice.java
+++ b/src/java/org/apache/cassandra/db/filter/ColumnSlice.java
@@ -130,36 +130,36 @@ public class ColumnSlice
         protected Column computeNext()
         {
-            if (currentSlice == null)
+            while (currentSlice != null || idx < slices.length)
             {
-                if (idx >= slices.length)
-                    return endOfData();
-
-                ColumnSlice slice = slices[idx++];
-                // Note: we specialize the case of start == "" and finish = "" because it is slightly more efficient, but also they have a specific
-                // meaning (namely, they always extend to the beginning/end of the range).
-                if (slice.start.remaining() == 0)
+                if (currentSlice == null)
                 {
-                    if (slice.finish.remaining() == 0)
-                        currentSlice = map.values().iterator();
+                    ColumnSlice slice = slices[idx++];
+                    // Note: we specialize the case of start == "" and finish = "" because it is slightly more efficient, but also they have a specific
+                    // meaning (namely, they always extend to the beginning/end of the range).
+                    if (slice.start.remaining() == 0)
+                    {
+                        if (slice.finish.remaining() == 0)
+                            currentSlice = map.values().iterator();
+                        else
+                            currentSlice = map.headMap(slice.finish, true).values().iterator();
+                    }
+                    else if (slice.finish.remaining() == 0)
+                    {
+                        currentSlice = map.tailMap(slice.start, true).values().iterator();
+                    }
                     else
-                        currentSlice = map.headMap(slice.finish, true).values().iterator();
-                }
-                else if (slice.finish.remaining() == 0)
-                {
-                    currentSlice = map.tailMap(slice.start, true).values().iterator();
+                    {
+                        currentSlice = map.subMap(slice.start, true, slice.finish, true).values().iterator();
+                    }
                 }
-                else
-                {
-                    currentSlice = map.subMap(slice.start, true, slice.finish, true).values().iterator();
-                }
-            }
-            if (currentSlice.hasNext())
-                return currentSlice.next();
+                if (currentSlice.hasNext())
+                    return currentSlice.next();
 
-            currentSlice = null;
-            return computeNext();
+                currentSlice = null;
+            }
+            return endOfData();
         }
     }
 }
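The essence of the diff above is replacing tail recursion (one `computeNext()` call per exhausted slice, which overflows the stack when a large IN list produces many consecutive empty slices) with a loop. The same pattern can be shown in isolation; the sketch below is a simplified, hypothetical illustration using plain integer lists, not a Cassandra class:

```java
import java.util.Iterator;
import java.util.List;

// Simplified sketch of the CASSANDRA-8410 fix pattern: flatten an array of
// sub-iterators ("slices") with a loop rather than tail recursion, so any
// number of consecutive empty slices is skipped without growing the stack.
public class FlattenIterator implements Iterator<Integer> {
    private final List<Integer>[] slices;
    private Iterator<Integer> currentSlice; // null => need to open the next slice
    private int idx;

    public FlattenIterator(List<Integer>... slices) {
        this.slices = slices;
    }

    public boolean hasNext() {
        // Loop, not recursion: each pass either opens a slice, yields from it,
        // or discards it and moves on.
        while (currentSlice != null || idx < slices.length) {
            if (currentSlice == null)
                currentSlice = slices[idx++].iterator();
            if (currentSlice.hasNext())
                return true;
            currentSlice = null; // slice exhausted; try the next one
        }
        return false; // analogous to endOfData() in the real code
    }

    public Integer next() {
        return currentSlice.next(); // callers are assumed to check hasNext() first
    }
}
```

With the recursive version, a million empty slices meant a million nested calls; here they are one million loop iterations on a single stack frame.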
[1/2] cassandra git commit: Avoid stack overflow on large clustering IN values
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 025a63599 -> 24e895c4c

Avoid stack overflow on large clustering IN values

Patch by Tyler Hobbs; reviewed by Benjamin Lerer for CASSANDRA-8410

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9dc9185f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9dc9185f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9dc9185f
Branch: refs/heads/cassandra-2.1
Commit: 9dc9185f5c7172915485f713dbbb6b78b22d0f66
Parents: 3f3d0ed
Author: Tyler Hobbs
Authored: Fri Dec 12 11:41:06 2014 -0600
Committer: Tyler Hobbs
Committed: Fri Dec 12 11:41:06 2014 -0600

 CHANGES.txt                                     |  2 +
 .../apache/cassandra/db/filter/ColumnSlice.java | 48 ++--
 2 files changed, 26 insertions(+), 24 deletions(-)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9dc9185f/CHANGES.txt

diff --git a/CHANGES.txt b/CHANGES.txt
index cc426bb..6cecf99 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.0.12:
+ * Avoid StackOverflowError when a large list of IN values
+   is used for a clustering column (CASSANDRA-8410)
  * Fix NPE when writetime() or ttl() calls are wrapped by
    another function call (CASSANDRA-8451)
  * Fix NPE after dropping a keyspace (CASSANDRA-8332)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9dc9185f/src/java/org/apache/cassandra/db/filter/ColumnSlice.java

diff --git a/src/java/org/apache/cassandra/db/filter/ColumnSlice.java b/src/java/org/apache/cassandra/db/filter/ColumnSlice.java
index 9eff12a..6a9efbb 100644
--- a/src/java/org/apache/cassandra/db/filter/ColumnSlice.java
+++ b/src/java/org/apache/cassandra/db/filter/ColumnSlice.java
@@ -130,36 +130,36 @@ public class ColumnSlice
         protected Column computeNext()
         {
-            if (currentSlice == null)
+            while (currentSlice != null || idx < slices.length)
             {
-                if (idx >= slices.length)
-                    return endOfData();
-
-                ColumnSlice slice = slices[idx++];
-                // Note: we specialize the case of start == "" and finish = "" because it is slightly more efficient, but also they have a specific
-                // meaning (namely, they always extend to the beginning/end of the range).
-                if (slice.start.remaining() == 0)
+                if (currentSlice == null)
                 {
-                    if (slice.finish.remaining() == 0)
-                        currentSlice = map.values().iterator();
+                    ColumnSlice slice = slices[idx++];
+                    // Note: we specialize the case of start == "" and finish = "" because it is slightly more efficient, but also they have a specific
+                    // meaning (namely, they always extend to the beginning/end of the range).
+                    if (slice.start.remaining() == 0)
+                    {
+                        if (slice.finish.remaining() == 0)
+                            currentSlice = map.values().iterator();
+                        else
+                            currentSlice = map.headMap(slice.finish, true).values().iterator();
+                    }
+                    else if (slice.finish.remaining() == 0)
+                    {
+                        currentSlice = map.tailMap(slice.start, true).values().iterator();
+                    }
                     else
-                        currentSlice = map.headMap(slice.finish, true).values().iterator();
-                }
-                else if (slice.finish.remaining() == 0)
-                {
-                    currentSlice = map.tailMap(slice.start, true).values().iterator();
+                    {
+                        currentSlice = map.subMap(slice.start, true, slice.finish, true).values().iterator();
+                    }
                 }
-                else
-                {
-                    currentSlice = map.subMap(slice.start, true, slice.finish, true).values().iterator();
-                }
-            }
-            if (currentSlice.hasNext())
-                return currentSlice.next();
+                if (currentSlice.hasNext())
+                    return currentSlice.next();
 
-            currentSlice = null;
-            return computeNext();
+                currentSlice = null;
+            }
+            return endOfData();
         }
     }
 }
[2/3] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1
Merge branch 'cassandra-2.0' into cassandra-2.1

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/24e895c4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/24e895c4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/24e895c4
Branch: refs/heads/trunk
Commit: 24e895c4c73dcc1f849232c6ae54c73bc16ab831
Parents: 025a635 9dc9185
Author: Tyler Hobbs
Authored: Fri Dec 12 11:42:42 2014 -0600
Committer: Tyler Hobbs
Committed: Fri Dec 12 11:42:42 2014 -0600

 CHANGES.txt                                     |  2 ++
 .../apache/cassandra/db/AtomicBTreeColumns.java | 27 ++--
 .../cql3/SingleColumnRelationTest.java          | 16
 3 files changed, 32 insertions(+), 13 deletions(-)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/24e895c4/CHANGES.txt

diff --cc CHANGES.txt
index 579fd62,6cecf99..5402ad5
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,29 -1,6 +1,31 @@@
-2.0.12:
+2.1.3
+ * Scale memtable slab allocation logarithmically (CASSANDRA-7882)
+ * cassandra-stress simultaneous inserts over same seed (CASSANDRA-7964)
+ * Reduce cassandra-stress sampling memory requirements (CASSANDRA-7926)
+ * Ensure memtable flush cannot expire commit log entries from its future (CASSANDRA-8383)
+ * Make read "defrag" async to reclaim memtables (CASSANDRA-8459)
+ * Remove tmplink files for offline compactions (CASSANDRA-8321)
+ * Reduce maxHintsInProgress (CASSANDRA-8415)
+ * BTree updates may call provided update function twice (CASSANDRA-8018)
+ * Release sstable references after anticompaction (CASSANDRA-8386)
+ * Handle abort() in SSTableRewriter properly (CASSANDRA-8320)
+ * Fix high size calculations for prepared statements (CASSANDRA-8231)
+ * Centralize shared executors (CASSANDRA-8055)
+ * Fix filtering for CONTAINS (KEY) relations on frozen collection
+   clustering columns when the query is restricted to a single
+   partition (CASSANDRA-8203)
+ * Do more aggressive entire-sstable TTL expiry checks (CASSANDRA-8243)
+ * Add more log info if readMeter is null (CASSANDRA-8238)
+ * add check of the system wall clock time at startup (CASSANDRA-8305)
+ * Support for frozen collections (CASSANDRA-7859)
+ * Fix overflow on histogram computation (CASSANDRA-8028)
+ * Have paxos reuse the timestamp generation of normal queries (CASSANDRA-7801)
+ * Fix incremental repair not remove parent session on remote (CASSANDRA-8291)
+ * Improve JBOD disk utilization (CASSANDRA-7386)
+ * Log failed host when preparing incremental repair (CASSANDRA-8228)
+Merged from 2.0:
+ * Avoid StackOverflowError when a large list of IN values
+   is used for a clustering column (CASSANDRA-8410)
  * Fix NPE when writetime() or ttl() calls are wrapped by
    another function call (CASSANDRA-8451)
  * Fix NPE after dropping a keyspace (CASSANDRA-8332)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/24e895c4/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java

diff --cc src/java/org/apache/cassandra/db/AtomicBTreeColumns.java
index 372ce5c,000..dc2b5ee
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java
+++ b/src/java/org/apache/cassandra/db/AtomicBTreeColumns.java
@@@ -1,558 -1,0 +1,559 @@@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.db;
+
+import java.util.AbstractCollection;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Comparator;
+import java.util.Iterator;
+import java.util.List;
+import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;
+import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;
+
+import com.google.common.base.Function;
+import com.google.common.base.Functions;
+import com.google.common.collect.AbstractIterator;
+import com.google.common.collect.Iterators;
+
+import org.apache.c
cassandra git commit: Avoid stack overflow on large clustering IN values
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 3f3d0edba -> 9dc9185f5

Avoid stack overflow on large clustering IN values

Patch by Tyler Hobbs; reviewed by Benjamin Lerer for CASSANDRA-8410

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9dc9185f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9dc9185f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9dc9185f
Branch: refs/heads/cassandra-2.0
Commit: 9dc9185f5c7172915485f713dbbb6b78b22d0f66
Parents: 3f3d0ed
Author: Tyler Hobbs
Authored: Fri Dec 12 11:41:06 2014 -0600
Committer: Tyler Hobbs
Committed: Fri Dec 12 11:41:06 2014 -0600

 CHANGES.txt                                     |  2 +
 .../apache/cassandra/db/filter/ColumnSlice.java | 48 ++--
 2 files changed, 26 insertions(+), 24 deletions(-)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9dc9185f/CHANGES.txt

diff --git a/CHANGES.txt b/CHANGES.txt
index cc426bb..6cecf99 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.0.12:
+ * Avoid StackOverflowError when a large list of IN values
+   is used for a clustering column (CASSANDRA-8410)
  * Fix NPE when writetime() or ttl() calls are wrapped by
    another function call (CASSANDRA-8451)
  * Fix NPE after dropping a keyspace (CASSANDRA-8332)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9dc9185f/src/java/org/apache/cassandra/db/filter/ColumnSlice.java

diff --git a/src/java/org/apache/cassandra/db/filter/ColumnSlice.java b/src/java/org/apache/cassandra/db/filter/ColumnSlice.java
index 9eff12a..6a9efbb 100644
--- a/src/java/org/apache/cassandra/db/filter/ColumnSlice.java
+++ b/src/java/org/apache/cassandra/db/filter/ColumnSlice.java
@@ -130,36 +130,36 @@ public class ColumnSlice
         protected Column computeNext()
         {
-            if (currentSlice == null)
+            while (currentSlice != null || idx < slices.length)
             {
-                if (idx >= slices.length)
-                    return endOfData();
-
-                ColumnSlice slice = slices[idx++];
-                // Note: we specialize the case of start == "" and finish = "" because it is slightly more efficient, but also they have a specific
-                // meaning (namely, they always extend to the beginning/end of the range).
-                if (slice.start.remaining() == 0)
+                if (currentSlice == null)
                 {
-                    if (slice.finish.remaining() == 0)
-                        currentSlice = map.values().iterator();
+                    ColumnSlice slice = slices[idx++];
+                    // Note: we specialize the case of start == "" and finish = "" because it is slightly more efficient, but also they have a specific
+                    // meaning (namely, they always extend to the beginning/end of the range).
+                    if (slice.start.remaining() == 0)
+                    {
+                        if (slice.finish.remaining() == 0)
+                            currentSlice = map.values().iterator();
+                        else
+                            currentSlice = map.headMap(slice.finish, true).values().iterator();
+                    }
+                    else if (slice.finish.remaining() == 0)
+                    {
+                        currentSlice = map.tailMap(slice.start, true).values().iterator();
+                    }
                     else
-                        currentSlice = map.headMap(slice.finish, true).values().iterator();
-                }
-                else if (slice.finish.remaining() == 0)
-                {
-                    currentSlice = map.tailMap(slice.start, true).values().iterator();
+                    {
+                        currentSlice = map.subMap(slice.start, true, slice.finish, true).values().iterator();
+                    }
                 }
-                else
-                {
-                    currentSlice = map.subMap(slice.start, true, slice.finish, true).values().iterator();
-                }
-            }
-            if (currentSlice.hasNext())
-                return currentSlice.next();
+                if (currentSlice.hasNext())
+                    return currentSlice.next();
 
-            currentSlice = null;
-            return computeNext();
+                currentSlice = null;
+            }
+            return endOfData();
         }
     }
 }
[jira] [Commented] (CASSANDRA-8043) Native Protocol V4
[ https://issues.apache.org/jira/browse/CASSANDRA-8043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244490#comment-14244490 ]

Sergio Bossa commented on CASSANDRA-8043:
-----------------------------------------

Another useful improvement would be to include a generic key-value payload, so that people using custom {{QueryHandler}}s could leverage that to move custom data back and forth: does it sound feasible?

> Native Protocol V4
> ------------------
>
>                 Key: CASSANDRA-8043
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-8043
>             Project: Cassandra
>          Issue Type: Task
>            Reporter: Sylvain Lebresne
>              Labels: protocolv4
>             Fix For: 3.0
>
> We have a bunch of issues that will require a protocol v4; this ticket is
> just a meta ticket to group them all.

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (CASSANDRA-8428) Nodetool Drain kills C* Process
[ https://issues.apache.org/jira/browse/CASSANDRA-8428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Philip Thompson resolved CASSANDRA-8428.
----------------------------------------
       Resolution: Not a Problem
    Reproduced In: 2.0.11, 1.2.19  (was: 1.2.19, 2.0.11)

I'll open a new ticket for the other problem.

> Nodetool Drain kills C* Process
> -------------------------------
>
>                 Key: CASSANDRA-8428
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-8428
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Philip Thompson
>            Assignee: Brandon Williams
>              Labels: nodetool
>             Fix For: 2.0.12
>
>         Attachments: system.log
>
> Nodetool Drain is documented at http://wiki.apache.org/cassandra/NodeTool and
> in the nodetool help to flush a node and stop accepting writes. This is the
> behavior I see with 2.1.2.
> In 2.0.11 and 1.2.19, the Cassandra process is killed instead. In the 1.2.19
> logs, I see:
> {code}
> INFO [RMI TCP Connection(2)-192.168.1.5] 2014-12-05 10:32:44,234 StorageService.java (line 964) DRAINING: starting drain process
> INFO [RMI TCP Connection(2)-192.168.1.5] 2014-12-05 10:32:44,235 ThriftServer.java (line 116) Stop listening to thrift clients
> INFO [RMI TCP Connection(2)-192.168.1.5] 2014-12-05 10:32:44,239 Server.java (line 159) Stop listening for CQL clients
> INFO [RMI TCP Connection(2)-192.168.1.5] 2014-12-05 10:32:44,239 Gossiper.java (line 1203) Announcing shutdown
> INFO [RMI TCP Connection(2)-192.168.1.5] 2014-12-05 10:32:46,240 MessagingService.java (line 696) Waiting for messaging service to quiesce
> INFO [ACCEPT-/127.0.0.1] 2014-12-05 10:32:46,241 MessagingService.java (line 919) MessagingService shutting down server thread.{code}
> So it appears this is an intentional shutdown, in which case the docs and
> help are wrong. I could not find a JIRA that described the change in behavior
> moving to 2.1.
> Other users on IRC report that drain works as expected for them on 1.2.19.
> Attached are 2.0 logs.
[jira] [Updated] (CASSANDRA-8429) Some keys unreadable during compaction
[ https://issues.apache.org/jira/browse/CASSANDRA-8429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Marcus Eriksson updated CASSANDRA-8429:
---------------------------------------
    Summary: Some keys unreadable during compaction  (was: Stress on trunk fails mixed workload on missing keys)

> Some keys unreadable during compaction
> --------------------------------------
>
>                 Key: CASSANDRA-8429
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-8429
>             Project: Cassandra
>          Issue Type: Bug
>         Environment: Ubuntu 14.04
>            Reporter: Ariel Weisberg
>            Assignee: Marcus Eriksson
>             Fix For: 2.1.3
>
>         Attachments: cluster.conf, run_stress.sh
>
> Starts as part of merge commit 25be46497a8df46f05ffa102bc645bfd684ea48a
> Stress will say that a key wasn't validated because it isn't returned even
> though it's loaded. The key will eventually appear and can be queried using
> cqlsh.
> Reproduce with
> #!/bin/sh
> ROWCOUNT=1000
> SCHEMA='-col n=fixed(1) -schema compaction(strategy=LeveledCompactionStrategy) compression=LZ4Compressor'
> ./cassandra-stress write n=$ROWCOUNT -node xh61 -pop seq=1..$ROWCOUNT no-wrap -rate threads=25 $SCHEMA
> ./cassandra-stress mixed "ratio(read=2)" n=1 -node xh61 -pop "dist=extreme(1..$ROWCOUNT,0.6)" -rate threads=25 $SCHEMA
[jira] [Commented] (CASSANDRA-8428) Nodetool Drain kills C* Process
[ https://issues.apache.org/jira/browse/CASSANDRA-8428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244474#comment-14244474 ]

Brandon Williams commented on CASSANDRA-8428:
---------------------------------------------

I mean drain has never exited, and still should not. It's the write error and stop policy that's killing it, not drain.

> Nodetool Drain kills C* Process
> -------------------------------
>
>                 Key: CASSANDRA-8428
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-8428
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Philip Thompson
>            Assignee: Brandon Williams
>              Labels: nodetool
>             Fix For: 2.0.12
>
>         Attachments: system.log
>
> Nodetool Drain is documented at http://wiki.apache.org/cassandra/NodeTool and
> in the nodetool help to flush a node and stop accepting writes. This is the
> behavior I see with 2.1.2.
> In 2.0.11 and 1.2.19, the Cassandra process is killed instead. In the 1.2.19
> logs, I see:
> {code}
> INFO [RMI TCP Connection(2)-192.168.1.5] 2014-12-05 10:32:44,234 StorageService.java (line 964) DRAINING: starting drain process
> INFO [RMI TCP Connection(2)-192.168.1.5] 2014-12-05 10:32:44,235 ThriftServer.java (line 116) Stop listening to thrift clients
> INFO [RMI TCP Connection(2)-192.168.1.5] 2014-12-05 10:32:44,239 Server.java (line 159) Stop listening for CQL clients
> INFO [RMI TCP Connection(2)-192.168.1.5] 2014-12-05 10:32:44,239 Gossiper.java (line 1203) Announcing shutdown
> INFO [RMI TCP Connection(2)-192.168.1.5] 2014-12-05 10:32:46,240 MessagingService.java (line 696) Waiting for messaging service to quiesce
> INFO [ACCEPT-/127.0.0.1] 2014-12-05 10:32:46,241 MessagingService.java (line 919) MessagingService shutting down server thread.{code}
> So it appears this is an intentional shutdown, in which case the docs and
> help are wrong. I could not find a JIRA that described the change in behavior
> moving to 2.1.
> Other users on IRC report that drain works as expected for them on 1.2.19.
> Attached are 2.0 logs.
[jira] [Comment Edited] (CASSANDRA-8471) mapred/hive queries fail when there is just 1 node down RF is > 1
[ https://issues.apache.org/jira/browse/CASSANDRA-8471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244458#comment-14244458 ]

Artem Aliev edited comment on CASSANDRA-8471 at 12/12/14 5:11 PM:
------------------------------------------------------------------

The CqlRecordReader is used to read data from C* into map tasks. To connect to C*, it receives a list of C* node locations where a given split (row) can be found. It is supposed to check all of those connections to find an available node for the "control connection", but because the connect method was outside the check loop, the first node in the list was always selected. If that node is unavailable, the map task fails with the exception below. I just moved the cluster.connect() call into the check loop.

was (Author: artem.aliev):
The CqlRecordReader is used to read data from C* to map tasks. To connect to C* it receive a list of C* node locations. It suppose to check all that connections to find available nodes for "control connect". But because the connect methods was out of the check loop, the fist node in the list is always selected. If it is unavailable the map task failed with above Exception. I just moved cluster.connect() call into the check loop.

> mapred/hive queries fail when there is just 1 node down RF is > 1
> -----------------------------------------------------------------
>
>                 Key: CASSANDRA-8471
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-8471
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Hadoop
>            Reporter: Artem Aliev
>              Labels: easyfix, hadoop, patch
>             Fix For: 2.0.12, 2.1.3
>
>         Attachments: cassandra-2.0-8471.txt
>
> The hive and map reduce queries fail when just 1 node is down, even with RF=3
> (in a 6 node cluster) and default consistency levels for read and write.
> The simplest way to reproduce it is to use the DataStax integrated Hadoop
> environment with Hive.
> {quote} > alter keyspace "HiveMetaStore" WITH replication = > {'class':'NetworkTopologyStrategy', 'DC1':3} ; > alter keyspace cfs WITH replication = {'class':'NetworkTopologyStrategy', > 'DC1':3} ; > alter keyspace cfs_archive WITH replication = > {'class':'NetworkTopologyStrategy', 'DC1':3} ; > CREATE KEYSPACE datamart WITH replication = { > 'class': 'NetworkTopologyStrategy', > 'DC1': '3' > }; > CREATE TABLE users1 ( > id int, > name text, > PRIMARY KEY ((id)) > ) > {quote} > Insert data. > Shutdown one cassandra node. > Run map reduce task. Hive in this case > {quote} > $ dse hive > hive> use datamart; > hive> select count(*) from users1; > {quote} > {quote} > ... > ... > 2014-12-10 18:33:53,090 Stage-1 map = 75%, reduce = 25%, Cumulative CPU 6.39 > sec > 2014-12-10 18:33:54,093 Stage-1 map = 75%, reduce = 25%, Cumulative CPU 6.39 > sec > 2014-12-10 18:33:55,096 Stage-1 map = 75%, reduce = 25%, Cumulative CPU 6.39 > sec > 2014-12-10 18:33:56,099 Stage-1 map = 75%, reduce = 25%, Cumulative CPU 6.39 > sec > 2014-12-10 18:33:57,102 Stage-1 map = 100%, reduce = 100%, Cumulative CPU > 6.39 sec > MapReduce Total cumulative CPU time: 6 seconds 390 msec > Ended Job = job_201412100017_0006 with errors > Error during job, obtaining debugging information... 
> Job Tracking URL: > http://i-9d0306706.c.eng-gce-support.internal:50030/jobdetails.jsp?jobid=job_201412100017_0006 > Examining task ID: task_201412100017_0006_m_05 (and more) from job > job_201412100017_0006 > Task with the most failures(4): > - > Task ID: > task_201412100017_0006_m_01 > URL: > > http://i-9d0306706.c.eng-gce-support.internal:50030/taskdetails.jsp?jobid=job_201412100017_0006&tipid=task_201412100017_0006_m_01 > - > Diagnostic Messages for this Task: > java.io.IOException: java.io.IOException: > com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) > tried for query failed (tried: > i-6ac985f7d.c.eng-gce-support.internal/10.240.124.16:9042 > (com.datastax.driver.core.TransportException: > [i-6ac985f7d.c.eng-gce-support.internal/10.240.124.16:9042] Cannot connect)) > at > org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97) > at > org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57) > at > org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:244) > at > org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:538) > at > org.apache.hadoop.mapred.MapTask$TrackedRecordReader.(MapTask.java:197) > at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:418) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372) > at org.apache.hadoop.mapred.Child$4.run(Child.java:266) > at java.security.AccessController.d
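The fix described in the comment above, moving the connect call inside the loop over candidate hosts, is a general "try each replica until one answers" pattern. The following is a hypothetical, self-contained sketch of that pattern; ControlConnector and Connector are made-up names for illustration, not the actual CqlRecordReader or driver API:

```java
import java.util.List;

// Sketch of the CASSANDRA-8471 fix pattern: attempt a "control connection"
// against each candidate host in turn instead of only the first one, so a
// single downed replica does not fail the whole map task.
public class ControlConnector {
    // Stand-in for a driver connect call; assumed to throw when the host is down.
    interface Connector {
        String connect(String host) throws Exception;
    }

    static String connectToAny(List<String> hosts, Connector connector) {
        Exception last = null;
        for (String host : hosts) {
            try {
                // The connect call lives *inside* the loop, so every replica
                // location for the split gets a chance, not just hosts.get(0).
                return connector.connect(host);
            } catch (Exception e) {
                last = e; // remember the failure and try the next replica
            }
        }
        // Only if every candidate failed do we give up, as the driver's
        // NoHostAvailableException does.
        throw new RuntimeException("All hosts tried for query failed", last);
    }
}
```

With the connect attempt outside the loop (the pre-patch behavior), the first unavailable host would raise immediately, which is exactly the NoHostAvailableException seen in the task logs above.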
[jira] [Commented] (CASSANDRA-8471) mapred/hive queries fail when there is just 1 node down RF is > 1
[ https://issues.apache.org/jira/browse/CASSANDRA-8471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244458#comment-14244458 ]

Artem Aliev commented on CASSANDRA-8471:
----------------------------------------

The CqlRecordReader is used to read data from C* into map tasks. To connect to C*, it receives a list of C* node locations. It is supposed to check all of those connections to find an available node for the "control connection", but because the connect method was outside the check loop, the first node in the list was always selected. If that node is unavailable, the map task fails with the exception below. I just moved the cluster.connect() call into the check loop.

> mapred/hive queries fail when there is just 1 node down RF is > 1
> -----------------------------------------------------------------
>
>                 Key: CASSANDRA-8471
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-8471
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Hadoop
>            Reporter: Artem Aliev
>              Labels: easyfix, hadoop, patch
>             Fix For: 2.0.12, 2.1.3
>
>         Attachments: cassandra-2.0-8471.txt
>
> The hive and map reduce queries fail when just 1 node is down, even with RF=3
> (in a 6 node cluster) and default consistency levels for read and write.
> The simplest way to reproduce it is to use the DataStax integrated Hadoop
> environment with Hive.
> {quote}
> alter keyspace "HiveMetaStore" WITH replication = {'class':'NetworkTopologyStrategy', 'DC1':3} ;
> alter keyspace cfs WITH replication = {'class':'NetworkTopologyStrategy', 'DC1':3} ;
> alter keyspace cfs_archive WITH replication = {'class':'NetworkTopologyStrategy', 'DC1':3} ;
> CREATE KEYSPACE datamart WITH replication = {
>     'class': 'NetworkTopologyStrategy',
>     'DC1': '3'
> };
> CREATE TABLE users1 (
>     id int,
>     name text,
>     PRIMARY KEY ((id))
> )
> {quote}
> Insert data.
> Shut down one Cassandra node.
> Run a map reduce task, Hive in this case:
> {quote}
> $ dse hive
> hive> use datamart;
> hive> select count(*) from users1;
> {quote}
> {quote}
> ...
> ...
> 2014-12-10 18:33:53,090 Stage-1 map = 75%, reduce = 25%, Cumulative CPU 6.39 > sec > 2014-12-10 18:33:54,093 Stage-1 map = 75%, reduce = 25%, Cumulative CPU 6.39 > sec > 2014-12-10 18:33:55,096 Stage-1 map = 75%, reduce = 25%, Cumulative CPU 6.39 > sec > 2014-12-10 18:33:56,099 Stage-1 map = 75%, reduce = 25%, Cumulative CPU 6.39 > sec > 2014-12-10 18:33:57,102 Stage-1 map = 100%, reduce = 100%, Cumulative CPU > 6.39 sec > MapReduce Total cumulative CPU time: 6 seconds 390 msec > Ended Job = job_201412100017_0006 with errors > Error during job, obtaining debugging information... > Job Tracking URL: > http://i-9d0306706.c.eng-gce-support.internal:50030/jobdetails.jsp?jobid=job_201412100017_0006 > Examining task ID: task_201412100017_0006_m_05 (and more) from job > job_201412100017_0006 > Task with the most failures(4): > - > Task ID: > task_201412100017_0006_m_01 > URL: > > http://i-9d0306706.c.eng-gce-support.internal:50030/taskdetails.jsp?jobid=job_201412100017_0006&tipid=task_201412100017_0006_m_01 > - > Diagnostic Messages for this Task: > java.io.IOException: java.io.IOException: > com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) > tried for query failed (tried: > i-6ac985f7d.c.eng-gce-support.internal/10.240.124.16:9042 > (com.datastax.driver.core.TransportException: > [i-6ac985f7d.c.eng-gce-support.internal/10.240.124.16:9042] Cannot connect)) > at > org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97) > at > org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57) > at > org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:244) > at > org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:538) > at > org.apache.hadoop.mapred.MapTask$TrackedRecordReader.(MapTask.java:197) > at 
org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:418) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372) > at org.apache.hadoop.mapred.Child$4.run(Child.java:266) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121) > at org.apache.hadoop.mapred.Child.main(Child.java:260) > Caused by: java.io.IOException: > com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) > tried for query failed (tried: > i-6ac985f7d.c.eng-gce-support.internal/10.240.124.16:9042 > (com.datastax.driver.core.TransportException: > [i-6ac985f7d.c.eng-gce-support.internal/10.240.124
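The fix described above — trying each contact point in turn rather than connecting only to the first — can be sketched independently of the driver roughly as follows. This is an illustrative model, not the actual CqlRecordReader patch; the `connectToAny` helper and the connect callback are assumptions for the sake of the example:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

public class ControlConnectionSketch {
    // Try each contact point until one accepts the connection, instead of
    // failing permanently when the first listed node happens to be down.
    static String connectToAny(List<String> hosts, Function<String, String> connect) {
        RuntimeException last = new RuntimeException("no contact points given");
        for (String host : hosts) {
            try {
                // In the real patch, this is where cluster.connect() was moved:
                // inside the loop, so a dead node just means "try the next one".
                return connect.apply(host);
            } catch (RuntimeException e) {
                last = e; // remember the failure and fall through to the next host
            }
        }
        throw last; // all hosts down: surface the last NoHostAvailable-style error
    }

    public static void main(String[] args) {
        List<String> hosts = Arrays.asList("10.240.124.16", "10.240.124.17");
        // Simulate the reported scenario: the first node is unreachable.
        String session = connectToAny(hosts, h -> {
            if (h.equals("10.240.124.16"))
                throw new RuntimeException("Cannot connect to " + h);
            return "control-connection@" + h;
        });
        System.out.println(session); // prints control-connection@10.240.124.17
    }
}
```

With the connect attempt inside the loop, a single down node (the reported RF=3, one-node-down case) degrades to a warning instead of failing the whole map task.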
[1/3] cassandra git commit: Fix NPE when writetime() or ttl() are nested inside a fn call
Repository: cassandra Updated Branches: refs/heads/trunk 2fdd1d59c -> e530f4230 Fix NPE when writetime() or ttl() are nested inside a fn call Patch by Tyler Hobbs; reviewed by Benjamin Lerer for CASSANDRA-8451 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3f3d0edb Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3f3d0edb Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3f3d0edb Branch: refs/heads/trunk Commit: 3f3d0edbad6b42f5fc8715ecfa52e2e41bbdcea9 Parents: ac9cfbd Author: Tyler Hobbs Authored: Fri Dec 12 10:49:32 2014 -0600 Committer: Tyler Hobbs Committed: Fri Dec 12 10:49:32 2014 -0600 -- CHANGES.txt | 2 + .../cassandra/cql3/statements/Selection.java| 50 ++-- 2 files changed, 47 insertions(+), 5 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/3f3d0edb/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index c25caf9..cc426bb 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,6 @@ 2.0.12: + * Fix NPE when writetime() or ttl() calls are wrapped by + another function call (CASSANDRA-8451) * Fix NPE after dropping a keyspace (CASSANDRA-8332) * Fix error message on read repair timeouts (CASSANDRA-7947) * Default DTCS base_time_seconds changed to 60 (CASSANDRA-8417) http://git-wip-us.apache.org/repos/asf/cassandra/blob/3f3d0edb/src/java/org/apache/cassandra/cql3/statements/Selection.java -- diff --git a/src/java/org/apache/cassandra/cql3/statements/Selection.java b/src/java/org/apache/cassandra/cql3/statements/Selection.java index 407f7d9..223f698 100644 --- a/src/java/org/apache/cassandra/cql3/statements/Selection.java +++ b/src/java/org/apache/cassandra/cql3/statements/Selection.java @@ -186,11 +186,8 @@ public abstract class Selection { Selector selector = makeSelector(cfDef, rawSelector, names, metadata); selectors.add(selector); -if (selector instanceof WritetimeOrTTLSelector) -{ -collectTimestamps |= 
((WritetimeOrTTLSelector)selector).isWritetime; -collectTTLs |= !((WritetimeOrTTLSelector)selector).isWritetime; -} +collectTimestamps |= selector.usesTimestamps(); +collectTTLs |= selector.usesTTLs(); } return new SelectionWithProcessing(names, metadata, selectors, collectTimestamps, collectTTLs); } @@ -374,6 +371,12 @@ public abstract class Selection private interface Selector extends AssignementTestable { public ByteBuffer compute(ResultSetBuilder rs) throws InvalidRequestException; + +/** Returns true if the selector acts on a column's timestamp, false otherwise. */ +public boolean usesTimestamps(); + +/** Returns true if the selector acts on a column's TTL, false otherwise. */ +public boolean usesTTLs(); } private static class SimpleSelector implements Selector @@ -399,6 +402,16 @@ public abstract class Selection return receiver.type.isValueCompatibleWith(type); } +public boolean usesTimestamps() +{ +return false; +} + +public boolean usesTTLs() +{ +return false; +} + @Override public String toString() { @@ -431,6 +444,22 @@ public abstract class Selection return receiver.type.isValueCompatibleWith(fun.returnType()); } +public boolean usesTimestamps() +{ +for (Selector s : argSelectors) +if (s.usesTimestamps()) +return true; +return false; +} + +public boolean usesTTLs() +{ +for (Selector s : argSelectors) +if (s.usesTTLs()) +return true; +return false; +} + @Override public String toString() { @@ -476,6 +505,17 @@ public abstract class Selection return receiver.type.isValueCompatibleWith(isWritetime ? LongType.instance : Int32Type.instance); } + +public boolean usesTimestamps() +{ +return isWritetime; +} + +public boolean usesTTLs() +{ +return !isWritetime; +} + @Override public String toString() {
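The diff above replaces the instanceof check on WritetimeOrTTLSelector with two polymorphic methods, so a function-call selector can delegate the question to its arguments — which is what fixes the NPE for writetime()/ttl() nested inside another function. A minimal self-contained model of that pattern (class names simplified; not the actual Selection.java code):

```java
import java.util.Arrays;
import java.util.List;

public class SelectorSketch {
    interface Selector {
        default boolean usesTimestamps() { return false; }
        default boolean usesTTLs() { return false; }
    }

    // Plain column reference: touches neither timestamps nor TTLs.
    static class SimpleSelector implements Selector {}

    // writetime(col) / ttl(col): exactly one of the two.
    static class WritetimeOrTTLSelector implements Selector {
        final boolean isWritetime;
        WritetimeOrTTLSelector(boolean isWritetime) { this.isWritetime = isWritetime; }
        public boolean usesTimestamps() { return isWritetime; }
        public boolean usesTTLs() { return !isWritetime; }
    }

    // someFn(...): the case the patch fixes — delegate to the arguments, so
    // writetime()/ttl() nested inside a function call is still detected.
    static class FunctionSelector implements Selector {
        final List<Selector> args;
        FunctionSelector(List<Selector> args) { this.args = args; }
        public boolean usesTimestamps() { return args.stream().anyMatch(Selector::usesTimestamps); }
        public boolean usesTTLs() { return args.stream().anyMatch(Selector::usesTTLs); }
    }

    public static void main(String[] argv) {
        // Model of SELECT someFn(writetime(col)) FROM ...
        Selector nested = new FunctionSelector(
                Arrays.asList(new WritetimeOrTTLSelector(true)));
        System.out.println(nested.usesTimestamps()); // true: timestamps get collected
        System.out.println(nested.usesTTLs());       // false
    }
}
```

With the old instanceof test, the outer FunctionSelector hid the nested writetime() selector, so timestamps were never collected and the later lookup hit null.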
[2/3] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1
Merge branch 'cassandra-2.0' into cassandra-2.1 Conflicts: CHANGES.txt src/java/org/apache/cassandra/cql3/statements/Selection.java Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/025a6359 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/025a6359 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/025a6359 Branch: refs/heads/trunk Commit: 025a635999b038f58b7541ab877ce1db823fbd5f Parents: 597a1d5 3f3d0ed Author: Tyler Hobbs Authored: Fri Dec 12 10:55:18 2014 -0600 Committer: Tyler Hobbs Committed: Fri Dec 12 10:55:18 2014 -0600 -- CHANGES.txt | 2 + .../cassandra/cql3/statements/Selection.java| 46 +--- 2 files changed, 43 insertions(+), 5 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/025a6359/CHANGES.txt -- diff --cc CHANGES.txt index 2571a09,cc426bb..579fd62 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,29 -1,6 +1,31 @@@ -2.0.12: +2.1.3 + * Scale memtable slab allocation logarithmically (CASSANDRA-7882) + * cassandra-stress simultaneous inserts over same seed (CASSANDRA-7964) + * Reduce cassandra-stress sampling memory requirements (CASSANDRA-7926) + * Ensure memtable flush cannot expire commit log entries from its future (CASSANDRA-8383) + * Make read "defrag" async to reclaim memtables (CASSANDRA-8459) + * Remove tmplink files for offline compactions (CASSANDRA-8321) + * Reduce maxHintsInProgress (CASSANDRA-8415) + * BTree updates may call provided update function twice (CASSANDRA-8018) + * Release sstable references after anticompaction (CASSANDRA-8386) + * Handle abort() in SSTableRewriter properly (CASSANDRA-8320) + * Fix high size calculations for prepared statements (CASSANDRA-8231) + * Centralize shared executors (CASSANDRA-8055) + * Fix filtering for CONTAINS (KEY) relations on frozen collection + clustering columns when the query is restricted to a single + partition (CASSANDRA-8203) + * Do more aggressive entire-sstable TTL expiry checks 
(CASSANDRA-8243) + * Add more log info if readMeter is null (CASSANDRA-8238) + * add check of the system wall clock time at startup (CASSANDRA-8305) + * Support for frozen collections (CASSANDRA-7859) + * Fix overflow on histogram computation (CASSANDRA-8028) + * Have paxos reuse the timestamp generation of normal queries (CASSANDRA-7801) + * Fix incremental repair not remove parent session on remote (CASSANDRA-8291) + * Improve JBOD disk utilization (CASSANDRA-7386) + * Log failed host when preparing incremental repair (CASSANDRA-8228) +Merged from 2.0: + * Fix NPE when writetime() or ttl() calls are wrapped by +another function call (CASSANDRA-8451) * Fix NPE after dropping a keyspace (CASSANDRA-8332) * Fix error message on read repair timeouts (CASSANDRA-7947) * Default DTCS base_time_seconds changed to 60 (CASSANDRA-8417) http://git-wip-us.apache.org/repos/asf/cassandra/blob/025a6359/src/java/org/apache/cassandra/cql3/statements/Selection.java -- diff --cc src/java/org/apache/cassandra/cql3/statements/Selection.java index 5deda5f,223f698..ff808bb --- a/src/java/org/apache/cassandra/cql3/statements/Selection.java +++ b/src/java/org/apache/cassandra/cql3/statements/Selection.java @@@ -224,15 -184,12 +224,12 @@@ public abstract class Selectio boolean collectTTLs = false; for (RawSelector rawSelector : rawSelectors) { -Selector selector = makeSelector(cfDef, rawSelector, names, metadata); +Selector selector = makeSelector(cfm, rawSelector, defs, metadata); selectors.add(selector); - if (selector instanceof WritetimeOrTTLSelector) - { - collectTimestamps |= ((WritetimeOrTTLSelector)selector).isWritetime; - collectTTLs |= !((WritetimeOrTTLSelector)selector).isWritetime; - } + collectTimestamps |= selector.usesTimestamps(); + collectTTLs |= selector.usesTTLs(); } -return new SelectionWithProcessing(names, metadata, selectors, collectTimestamps, collectTTLs); +return new SelectionWithProcessing(defs, metadata, selectors, collectTimestamps, collectTTLs); } else { @@@ 
-376,18 -347,39 +373,30 @@@ } } -private static class SelectionWithProcessing extends Selection +private static abstract class Selector implements AssignementTestable { -private final List selectors; +public abstract ByteBuffer compute(ResultSetBuilder rs
[3/3] cassandra git commit: Merge branch 'cassandra-2.1' into trunk
Merge branch 'cassandra-2.1' into trunk Conflicts: src/java/org/apache/cassandra/cql3/statements/Selection.java Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e530f423 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e530f423 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e530f423 Branch: refs/heads/trunk Commit: e530f4230ede7227f274ca6441ce0f98007300b3 Parents: 2fdd1d5 025a635 Author: Tyler Hobbs Authored: Fri Dec 12 11:01:08 2014 -0600 Committer: Tyler Hobbs Committed: Fri Dec 12 11:01:08 2014 -0600 -- CHANGES.txt | 2 ++ 1 file changed, 2 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/e530f423/CHANGES.txt --
[2/2] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1
Merge branch 'cassandra-2.0' into cassandra-2.1 Conflicts: CHANGES.txt src/java/org/apache/cassandra/cql3/statements/Selection.java Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/025a6359 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/025a6359 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/025a6359 Branch: refs/heads/cassandra-2.1 Commit: 025a635999b038f58b7541ab877ce1db823fbd5f Parents: 597a1d5 3f3d0ed Author: Tyler Hobbs Authored: Fri Dec 12 10:55:18 2014 -0600 Committer: Tyler Hobbs Committed: Fri Dec 12 10:55:18 2014 -0600 -- CHANGES.txt | 2 + .../cassandra/cql3/statements/Selection.java| 46 +--- 2 files changed, 43 insertions(+), 5 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/025a6359/CHANGES.txt -- diff --cc CHANGES.txt index 2571a09,cc426bb..579fd62 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,29 -1,6 +1,31 @@@ -2.0.12: +2.1.3 + * Scale memtable slab allocation logarithmically (CASSANDRA-7882) + * cassandra-stress simultaneous inserts over same seed (CASSANDRA-7964) + * Reduce cassandra-stress sampling memory requirements (CASSANDRA-7926) + * Ensure memtable flush cannot expire commit log entries from its future (CASSANDRA-8383) + * Make read "defrag" async to reclaim memtables (CASSANDRA-8459) + * Remove tmplink files for offline compactions (CASSANDRA-8321) + * Reduce maxHintsInProgress (CASSANDRA-8415) + * BTree updates may call provided update function twice (CASSANDRA-8018) + * Release sstable references after anticompaction (CASSANDRA-8386) + * Handle abort() in SSTableRewriter properly (CASSANDRA-8320) + * Fix high size calculations for prepared statements (CASSANDRA-8231) + * Centralize shared executors (CASSANDRA-8055) + * Fix filtering for CONTAINS (KEY) relations on frozen collection + clustering columns when the query is restricted to a single + partition (CASSANDRA-8203) + * Do more aggressive entire-sstable TTL expiry 
checks (CASSANDRA-8243) + * Add more log info if readMeter is null (CASSANDRA-8238) + * add check of the system wall clock time at startup (CASSANDRA-8305) + * Support for frozen collections (CASSANDRA-7859) + * Fix overflow on histogram computation (CASSANDRA-8028) + * Have paxos reuse the timestamp generation of normal queries (CASSANDRA-7801) + * Fix incremental repair not remove parent session on remote (CASSANDRA-8291) + * Improve JBOD disk utilization (CASSANDRA-7386) + * Log failed host when preparing incremental repair (CASSANDRA-8228) +Merged from 2.0: + * Fix NPE when writetime() or ttl() calls are wrapped by +another function call (CASSANDRA-8451) * Fix NPE after dropping a keyspace (CASSANDRA-8332) * Fix error message on read repair timeouts (CASSANDRA-7947) * Default DTCS base_time_seconds changed to 60 (CASSANDRA-8417) http://git-wip-us.apache.org/repos/asf/cassandra/blob/025a6359/src/java/org/apache/cassandra/cql3/statements/Selection.java -- diff --cc src/java/org/apache/cassandra/cql3/statements/Selection.java index 5deda5f,223f698..ff808bb --- a/src/java/org/apache/cassandra/cql3/statements/Selection.java +++ b/src/java/org/apache/cassandra/cql3/statements/Selection.java @@@ -224,15 -184,12 +224,12 @@@ public abstract class Selectio boolean collectTTLs = false; for (RawSelector rawSelector : rawSelectors) { -Selector selector = makeSelector(cfDef, rawSelector, names, metadata); +Selector selector = makeSelector(cfm, rawSelector, defs, metadata); selectors.add(selector); - if (selector instanceof WritetimeOrTTLSelector) - { - collectTimestamps |= ((WritetimeOrTTLSelector)selector).isWritetime; - collectTTLs |= !((WritetimeOrTTLSelector)selector).isWritetime; - } + collectTimestamps |= selector.usesTimestamps(); + collectTTLs |= selector.usesTTLs(); } -return new SelectionWithProcessing(names, metadata, selectors, collectTimestamps, collectTTLs); +return new SelectionWithProcessing(defs, metadata, selectors, collectTimestamps, collectTTLs); } else { 
@@@ -376,18 -347,39 +373,30 @@@ } } -private static class SelectionWithProcessing extends Selection +private static abstract class Selector implements AssignementTestable { -private final List selectors; +public abstract ByteBuffer compute(ResultSetBu
[1/2] cassandra git commit: Fix NPE when writetime() or ttl() are nested inside a fn call
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1 597a1d5db -> 025a63599 Fix NPE when writetime() or ttl() are nested inside a fn call Patch by Tyler Hobbs; reviewed by Benjamin Lerer for CASSANDRA-8451 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3f3d0edb Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3f3d0edb Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3f3d0edb Branch: refs/heads/cassandra-2.1 Commit: 3f3d0edbad6b42f5fc8715ecfa52e2e41bbdcea9 Parents: ac9cfbd Author: Tyler Hobbs Authored: Fri Dec 12 10:49:32 2014 -0600 Committer: Tyler Hobbs Committed: Fri Dec 12 10:49:32 2014 -0600 -- CHANGES.txt | 2 + .../cassandra/cql3/statements/Selection.java| 50 ++-- 2 files changed, 47 insertions(+), 5 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/3f3d0edb/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index c25caf9..cc426bb 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,6 @@ 2.0.12: + * Fix NPE when writetime() or ttl() calls are wrapped by + another function call (CASSANDRA-8451) * Fix NPE after dropping a keyspace (CASSANDRA-8332) * Fix error message on read repair timeouts (CASSANDRA-7947) * Default DTCS base_time_seconds changed to 60 (CASSANDRA-8417) http://git-wip-us.apache.org/repos/asf/cassandra/blob/3f3d0edb/src/java/org/apache/cassandra/cql3/statements/Selection.java -- diff --git a/src/java/org/apache/cassandra/cql3/statements/Selection.java b/src/java/org/apache/cassandra/cql3/statements/Selection.java index 407f7d9..223f698 100644 --- a/src/java/org/apache/cassandra/cql3/statements/Selection.java +++ b/src/java/org/apache/cassandra/cql3/statements/Selection.java @@ -186,11 +186,8 @@ public abstract class Selection { Selector selector = makeSelector(cfDef, rawSelector, names, metadata); selectors.add(selector); -if (selector instanceof WritetimeOrTTLSelector) -{ -collectTimestamps |= 
((WritetimeOrTTLSelector)selector).isWritetime; -collectTTLs |= !((WritetimeOrTTLSelector)selector).isWritetime; -} +collectTimestamps |= selector.usesTimestamps(); +collectTTLs |= selector.usesTTLs(); } return new SelectionWithProcessing(names, metadata, selectors, collectTimestamps, collectTTLs); } @@ -374,6 +371,12 @@ public abstract class Selection private interface Selector extends AssignementTestable { public ByteBuffer compute(ResultSetBuilder rs) throws InvalidRequestException; + +/** Returns true if the selector acts on a column's timestamp, false otherwise. */ +public boolean usesTimestamps(); + +/** Returns true if the selector acts on a column's TTL, false otherwise. */ +public boolean usesTTLs(); } private static class SimpleSelector implements Selector @@ -399,6 +402,16 @@ public abstract class Selection return receiver.type.isValueCompatibleWith(type); } +public boolean usesTimestamps() +{ +return false; +} + +public boolean usesTTLs() +{ +return false; +} + @Override public String toString() { @@ -431,6 +444,22 @@ public abstract class Selection return receiver.type.isValueCompatibleWith(fun.returnType()); } +public boolean usesTimestamps() +{ +for (Selector s : argSelectors) +if (s.usesTimestamps()) +return true; +return false; +} + +public boolean usesTTLs() +{ +for (Selector s : argSelectors) +if (s.usesTTLs()) +return true; +return false; +} + @Override public String toString() { @@ -476,6 +505,17 @@ public abstract class Selection return receiver.type.isValueCompatibleWith(isWritetime ? LongType.instance : Int32Type.instance); } + +public boolean usesTimestamps() +{ +return isWritetime; +} + +public boolean usesTTLs() +{ +return !isWritetime; +} + @Override public String toString() {
cassandra git commit: Fix NPE when writetime() or ttl() are nested inside a fn call
Repository: cassandra Updated Branches: refs/heads/cassandra-2.0 ac9cfbd9a -> 3f3d0edba Fix NPE when writetime() or ttl() are nested inside a fn call Patch by Tyler Hobbs; reviewed by Benjamin Lerer for CASSANDRA-8451 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3f3d0edb Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3f3d0edb Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3f3d0edb Branch: refs/heads/cassandra-2.0 Commit: 3f3d0edbad6b42f5fc8715ecfa52e2e41bbdcea9 Parents: ac9cfbd Author: Tyler Hobbs Authored: Fri Dec 12 10:49:32 2014 -0600 Committer: Tyler Hobbs Committed: Fri Dec 12 10:49:32 2014 -0600 -- CHANGES.txt | 2 + .../cassandra/cql3/statements/Selection.java| 50 ++-- 2 files changed, 47 insertions(+), 5 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/3f3d0edb/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index c25caf9..cc426bb 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,6 @@ 2.0.12: + * Fix NPE when writetime() or ttl() calls are wrapped by + another function call (CASSANDRA-8451) * Fix NPE after dropping a keyspace (CASSANDRA-8332) * Fix error message on read repair timeouts (CASSANDRA-7947) * Default DTCS base_time_seconds changed to 60 (CASSANDRA-8417) http://git-wip-us.apache.org/repos/asf/cassandra/blob/3f3d0edb/src/java/org/apache/cassandra/cql3/statements/Selection.java -- diff --git a/src/java/org/apache/cassandra/cql3/statements/Selection.java b/src/java/org/apache/cassandra/cql3/statements/Selection.java index 407f7d9..223f698 100644 --- a/src/java/org/apache/cassandra/cql3/statements/Selection.java +++ b/src/java/org/apache/cassandra/cql3/statements/Selection.java @@ -186,11 +186,8 @@ public abstract class Selection { Selector selector = makeSelector(cfDef, rawSelector, names, metadata); selectors.add(selector); -if (selector instanceof WritetimeOrTTLSelector) -{ -collectTimestamps |= 
((WritetimeOrTTLSelector)selector).isWritetime; -collectTTLs |= !((WritetimeOrTTLSelector)selector).isWritetime; -} +collectTimestamps |= selector.usesTimestamps(); +collectTTLs |= selector.usesTTLs(); } return new SelectionWithProcessing(names, metadata, selectors, collectTimestamps, collectTTLs); } @@ -374,6 +371,12 @@ public abstract class Selection private interface Selector extends AssignementTestable { public ByteBuffer compute(ResultSetBuilder rs) throws InvalidRequestException; + +/** Returns true if the selector acts on a column's timestamp, false otherwise. */ +public boolean usesTimestamps(); + +/** Returns true if the selector acts on a column's TTL, false otherwise. */ +public boolean usesTTLs(); } private static class SimpleSelector implements Selector @@ -399,6 +402,16 @@ public abstract class Selection return receiver.type.isValueCompatibleWith(type); } +public boolean usesTimestamps() +{ +return false; +} + +public boolean usesTTLs() +{ +return false; +} + @Override public String toString() { @@ -431,6 +444,22 @@ public abstract class Selection return receiver.type.isValueCompatibleWith(fun.returnType()); } +public boolean usesTimestamps() +{ +for (Selector s : argSelectors) +if (s.usesTimestamps()) +return true; +return false; +} + +public boolean usesTTLs() +{ +for (Selector s : argSelectors) +if (s.usesTTLs()) +return true; +return false; +} + @Override public String toString() { @@ -476,6 +505,17 @@ public abstract class Selection return receiver.type.isValueCompatibleWith(isWritetime ? LongType.instance : Int32Type.instance); } + +public boolean usesTimestamps() +{ +return isWritetime; +} + +public boolean usesTTLs() +{ +return !isWritetime; +} + @Override public String toString() {
[jira] [Commented] (CASSANDRA-6198) Distinguish streaming traffic at network level
[ https://issues.apache.org/jira/browse/CASSANDRA-6198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244431#comment-14244431 ] Brandon Williams commented on CASSANDRA-6198: - Hmm, this doesn't compile for me (and I used wiggle to get it to patch): {noformat} [javac] /home/drift/cassandra/src/java/org/apache/cassandra/streaming/ConnectionHandler.java:175: error: cannot find symbol [javac] if (peer instanceof Inet6Address) [javac] ^ [javac] symbol: variable peer [javac] location: class MessageHandler {noformat} Not sure what went wrong here, but can you rebase? > Distinguish streaming traffic at network level > -- > > Key: CASSANDRA-6198 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6198 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: sankalp kohli >Assignee: Norman Maurer >Priority: Minor > Fix For: 2.1.3 > > Attachments: > 0001-CASSANDRA-6198-Set-IPTOS_THROUGHPUT-on-streaming-con-v2.txt, > 0001-CASSANDRA-6198-Set-IPTOS_THROUGHPUT-on-streaming-con.txt > > > It would be nice to have some information in the TCP packet which network > teams can inspect to distinguish between streaming traffic and other organic > cassandra traffic. This is very useful for monitoring WAN traffic. > Here are some solutions: > 1) Use a different port for streaming. > 2) Add some IP header. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
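For context, the core of the IPTOS_THROUGHPUT patches discussed above comes down to setting the IP TOS bits on streaming sockets via java.net.Socket#setTrafficClass, so network teams can classify the traffic. A minimal illustration — whether the OS actually honors the hint is platform-dependent, and this is not the ticket's patch itself:

```java
import java.net.Socket;
import java.net.SocketException;

public class TosExample {
    // IPTOS_THROUGHPUT: the "maximize throughput" TOS bit (RFC 1349), value 0x08.
    static final int IPTOS_THROUGHPUT = 0x08;

    public static void main(String[] args) throws SocketException {
        Socket socket = new Socket(); // unconnected is enough to set the option
        // Hint to the IP layer (and any TOS/DSCP-aware middleboxes) that this
        // connection carries bulk-transfer traffic such as SSTable streaming.
        socket.setTrafficClass(IPTOS_THROUGHPUT);
        System.out.println("requested TOS = 0x" + Integer.toHexString(IPTOS_THROUGHPUT));
    }
}
```

For IPv6 sockets the same value lands in the traffic-class header field, which is why the patch in this ticket special-cases Inet6Address — and why the `peer` variable in the failing hunk matters.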
[jira] [Commented] (CASSANDRA-8374) Better support of null for UDF
[ https://issues.apache.org/jira/browse/CASSANDRA-8374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244426#comment-14244426 ] Robert Stupp commented on CASSANDRA-8374: - Ok - the relevant changes would be: * Add optional {{ALLOW NULLS}} to {{CREATE FUNCTION}} (just before the keyword {{RETURNS}}) * Reject a {{CREATE AGGREGATE}} without {{INITCOND}} if the state function is a UDF without {{ALLOW NULLS}} (otherwise the state function would never be called) > Better support of null for UDF > -- > > Key: CASSANDRA-8374 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8374 > Project: Cassandra > Issue Type: Bug >Reporter: Sylvain Lebresne >Assignee: Robert Stupp > Fix For: 3.0 > > > Currently, every function needs to deal with its arguments potentially being > {{null}}. There are many cases where that's just annoying; users should be > able to define a function like: > {noformat} > CREATE FUNCTION addTwo(val int) RETURNS int LANGUAGE JAVA AS 'return val + 2;' > {noformat} > without having this crash as soon as a column it's applied to doesn't have a > value for some rows (I'll note that this definition apparently cannot be > compiled currently, which should be looked into). > In fact, I think that by default methods shouldn't have to care about > {{null}} values: if the value is {{null}}, we should not call the method at > all and return {{null}}. There are still methods that may explicitly want to > handle {{null}} (to return a default value for instance), so maybe we can add > an {{ALLOW NULLS}} to the creation syntax. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
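The proposed default — never invoke the function body when an argument is null unless the UDF was declared ALLOW NULLS — can be modeled in a few lines. The names here (`invoke`, the `allowNulls` flag) are illustrative, not Cassandra's actual UDF plumbing:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

public class UdfNullSketch {
    // Sketch: a UDF invocation is its body plus a flag corresponding to the
    // proposed ALLOW NULLS declaration.
    static Object invoke(boolean allowNulls, Function<List<Object>, Object> body, List<Object> args) {
        // Default behavior: any null argument short-circuits to null, so
        // function bodies never see nulls unless they explicitly opted in.
        if (!allowNulls && args.stream().anyMatch(a -> a == null))
            return null;
        return body.apply(args);
    }

    public static void main(String[] argv) {
        // Model of: CREATE FUNCTION addTwo(val int) ... AS 'return val + 2;'
        Function<List<Object>, Object> addTwo = a -> (Integer) a.get(0) + 2;
        System.out.println(invoke(false, addTwo, Arrays.<Object>asList(5)));             // 7
        System.out.println(invoke(false, addTwo, Arrays.<Object>asList((Object) null))); // null; body never called
    }
}
```

This also shows why a non-ALLOW-NULLS state function breaks aggregates without an INITCOND: the initial null state would short-circuit every call, so the state function would never run.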
[jira] [Commented] (CASSANDRA-8193) Multi-DC parallel snapshot repair
[ https://issues.apache.org/jira/browse/CASSANDRA-8193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244423#comment-14244423 ] Jimmy Mårdell commented on CASSANDRA-8193: -- Oh, makes sense. My bad, thanks! FYI, running (the original) patch - and the other one using active sstables instead of snapshot ones - in production now and works great. > Multi-DC parallel snapshot repair > - > > Key: CASSANDRA-8193 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8193 > Project: Cassandra > Issue Type: Improvement >Reporter: Jimmy Mårdell >Assignee: Jimmy Mårdell >Priority: Minor > Fix For: 2.0.12, 2.1.3 > > Attachments: 8193-followup.txt, cassandra-2.0-8193-1.txt, > cassandra-2.0-8193-2.txt > > > The current behaviour of snapshot repair is to let one node at a time > calculate a merkle tree. This is to ensure only one node at a time is doing > the expensive calculation. The drawback is that it takes even longer time to > do the merkle tree calculation. > In a multi-DC setup, I think it would make more sense to have one node in > each DC calculate the merkle tree at the same time. This would yield a > significant improvement when you have many data centers. > I'm not sure how relevant this is in 2.1, but I don't see us upgrading to 2.1 > any time soon. Unless there is an obvious drawback that I'm missing, I'd like > to implement this in the 2.0 branch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-8193) Multi-DC parallel snapshot repair
[ https://issues.apache.org/jira/browse/CASSANDRA-8193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuki Morishita updated CASSANDRA-8193: -- Attachment: 8193-followup.txt Patch to change the API not to use RepairParallelism. Instead, methods take an int value that matches the RepairParallelism ordinal. > Multi-DC parallel snapshot repair > - > > Key: CASSANDRA-8193 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8193 > Project: Cassandra > Issue Type: Improvement >Reporter: Jimmy Mårdell >Assignee: Jimmy Mårdell >Priority: Minor > Fix For: 2.0.12, 2.1.3 > > Attachments: 8193-followup.txt, cassandra-2.0-8193-1.txt, > cassandra-2.0-8193-2.txt > > > The current behaviour of snapshot repair is to let one node at a time > calculate a merkle tree. This is to ensure only one node at a time is doing > the expensive calculation. The drawback is that it takes even longer time to > do the merkle tree calculation. > In a multi-DC setup, I think it would make more sense to have one node in > each DC calculate the merkle tree at the same time. This would yield a > significant improvement when you have many data centers. > I'm not sure how relevant this is in 2.1, but I don't see us upgrading to 2.1 > any time soon. Unless there is an obvious drawback that I'm missing, I'd like > to implement this in the 2.0 branch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
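Passing the ordinal keeps plain ints on the JMX boundary, with the mapping back to the enum done server-side. A sketch of that mapping with a bounds check — the enum constants match those discussed in this ticket, but the `fromOrdinal` helper is illustrative, not the attached patch:

```java
public class RepairParallelismSketch {
    // Values as discussed in this ticket; treated here as an illustrative copy.
    enum RepairParallelism { SEQUENTIAL, PARALLEL, DATACENTER_AWARE }

    // JMX clients pass the ordinal as a plain int so they don't need Cassandra
    // classes on their classpath; the server validates and converts it back.
    static RepairParallelism fromOrdinal(int ordinal) {
        RepairParallelism[] values = RepairParallelism.values();
        if (ordinal < 0 || ordinal >= values.length)
            throw new IllegalArgumentException("invalid RepairParallelism ordinal: " + ordinal);
        return values[ordinal];
    }

    public static void main(String[] args) {
        System.out.println(fromOrdinal(2));                       // DATACENTER_AWARE
        System.out.println(RepairParallelism.PARALLEL.ordinal()); // 1: what a client would send
    }
}
```

The bounds check matters because an arbitrary JMX client can send any int; relying on raw ordinals also means the enum's order becomes part of the public API.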
[jira] [Reopened] (CASSANDRA-8193) Multi-DC parallel snapshot repair
[ https://issues.apache.org/jira/browse/CASSANDRA-8193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuki Morishita reopened CASSANDRA-8193: --- The JMX API should not use a Cassandra-specific class, as it can be used from outside of Cassandra. I will attach a patch to change the API. > Multi-DC parallel snapshot repair > - > > Key: CASSANDRA-8193 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8193 > Project: Cassandra > Issue Type: Improvement >Reporter: Jimmy Mårdell >Assignee: Jimmy Mårdell >Priority: Minor > Fix For: 2.0.12, 2.1.3 > > Attachments: cassandra-2.0-8193-1.txt, cassandra-2.0-8193-2.txt > > > The current behaviour of snapshot repair is to let one node at a time > calculate a merkle tree. This is to ensure only one node at a time is doing > the expensive calculation. The drawback is that it takes even longer time to > do the merkle tree calculation. > In a multi-DC setup, I think it would make more sense to have one node in > each DC calculate the merkle tree at the same time. This would yield a > significant improvement when you have many data centers. > I'm not sure how relevant this is in 2.1, but I don't see us upgrading to 2.1 > any time soon. Unless there is an obvious drawback that I'm missing, I'd like > to implement this in the 2.0 branch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8471) mapred/hive queries fail when there is just 1 node down RF is > 1
[ https://issues.apache.org/jira/browse/CASSANDRA-8471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244403#comment-14244403 ] Jonathan Ellis commented on CASSANDRA-8471: --- What is the problem and how does the patch fix it? > mapred/hive queries fail when there is just 1 node down RF is > 1 > - > > Key: CASSANDRA-8471 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8471 > Project: Cassandra > Issue Type: Bug > Components: Hadoop >Reporter: Artem Aliev > Labels: easyfix, hadoop, patch > Fix For: 2.0.12, 2.1.3 > > Attachments: cassandra-2.0-8471.txt > > > The hive and map reduce queries fail when just 1 node is down, even with RF=3 > (in a 6 node cluster) and default consistency levels for Read and Write. > The simplest way to reproduce it is to use the DataStax integrated hadoop > environment with hive. > {quote} > alter keyspace "HiveMetaStore" WITH replication = > {'class':'NetworkTopologyStrategy', 'DC1':3} ; > alter keyspace cfs WITH replication = {'class':'NetworkTopologyStrategy', > 'DC1':3} ; > alter keyspace cfs_archive WITH replication = > {'class':'NetworkTopologyStrategy', 'DC1':3} ; > CREATE KEYSPACE datamart WITH replication = { > 'class': 'NetworkTopologyStrategy', > 'DC1': '3' > }; > CREATE TABLE users1 ( > id int, > name text, > PRIMARY KEY ((id)) > ) > {quote} > Insert data. > Shut down one cassandra node. > Run a map reduce task. Hive in this case > {quote} > $ dse hive > hive> use datamart; > hive> select count(*) from users1; > {quote} > {quote} > ... > ... 
> 2014-12-10 18:33:53,090 Stage-1 map = 75%, reduce = 25%, Cumulative CPU 6.39 > sec > 2014-12-10 18:33:54,093 Stage-1 map = 75%, reduce = 25%, Cumulative CPU 6.39 > sec > 2014-12-10 18:33:55,096 Stage-1 map = 75%, reduce = 25%, Cumulative CPU 6.39 > sec > 2014-12-10 18:33:56,099 Stage-1 map = 75%, reduce = 25%, Cumulative CPU 6.39 > sec > 2014-12-10 18:33:57,102 Stage-1 map = 100%, reduce = 100%, Cumulative CPU > 6.39 sec > MapReduce Total cumulative CPU time: 6 seconds 390 msec > Ended Job = job_201412100017_0006 with errors > Error during job, obtaining debugging information... > Job Tracking URL: > http://i-9d0306706.c.eng-gce-support.internal:50030/jobdetails.jsp?jobid=job_201412100017_0006 > Examining task ID: task_201412100017_0006_m_05 (and more) from job > job_201412100017_0006 > Task with the most failures(4): > - > Task ID: > task_201412100017_0006_m_01 > URL: > > http://i-9d0306706.c.eng-gce-support.internal:50030/taskdetails.jsp?jobid=job_201412100017_0006&tipid=task_201412100017_0006_m_01 > - > Diagnostic Messages for this Task: > java.io.IOException: java.io.IOException: > com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) > tried for query failed (tried: > i-6ac985f7d.c.eng-gce-support.internal/10.240.124.16:9042 > (com.datastax.driver.core.TransportException: > [i-6ac985f7d.c.eng-gce-support.internal/10.240.124.16:9042] Cannot connect)) > at > org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97) > at > org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57) > at > org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:244) > at > org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:538) > at > org.apache.hadoop.mapred.MapTask$TrackedRecordReader.(MapTask.java:197) > at 
org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:418) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372) > at org.apache.hadoop.mapred.Child$4.run(Child.java:266) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121) > at org.apache.hadoop.mapred.Child.main(Child.java:260) > Caused by: java.io.IOException: > com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) > tried for query failed (tried: > i-6ac985f7d.c.eng-gce-support.internal/10.240.124.16:9042 > (com.datastax.driver.core.TransportException: > [i-6ac985f7d.c.eng-gce-support.internal/10.240.124.16:9042] Cannot connect)) > at > org.apache.hadoop.hive.cassandra.cql3.input.HiveCqlInputFormat.getRecordReader(HiveCqlInputFormat.java:206) > at > org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:241) > ... 9 more > Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All > host(s) tried for query failed (tr
[jira] [Updated] (CASSANDRA-8076) Expose an mbean method to poll for repair job status
[ https://issues.apache.org/jira/browse/CASSANDRA-8076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuki Morishita updated CASSANDRA-8076: -- Attachment: 8076-2.0.txt Here's my proposal. Attached patch for 2.0.
{code}
/**
 * Get repair status of finished repair.
 * Return value can be one of the following:
 *
 *   -1 : Repair is still running, or status is not available for given repair number
 *    0 : Repair has finished successfully
 *    1 : Repair has finished with error
 *
 * @param id Repair number to check. The number is given as return value of async repair operations.
 * @return repair status in number.
 */
public int getRepairStatus(int id);
{code}
WDYT? > Expose an mbean method to poll for repair job status > > > Key: CASSANDRA-8076 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8076 > Project: Cassandra > Issue Type: Improvement >Reporter: Philip S Doctor >Assignee: Yuki Morishita > Fix For: 2.0.12 > > Attachments: 8076-2.0.txt > > > Given the int reply-id from forceRepairAsync, allow a client to request the > status of this ID via jmx. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
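For context, a client-side polling loop against the proposed method might look like the sketch below. The mbean is stubbed out as a tiny interface; in practice a client would obtain a StorageService proxy over JMX (an assumption here), and the status constants simply mirror the javadoc above.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a client polling the proposed getRepairStatus(int) until the
// repair finishes. RepairStatusSource stands in for the real mbean proxy.
public class RepairPoller {
    // Status codes as documented in the proposed javadoc.
    static final int RUNNING = -1;
    static final int SUCCESS = 0;
    static final int FAILED = 1;

    interface RepairStatusSource {
        int getRepairStatus(int id);
    }

    // Polls until the repair leaves the RUNNING state; true means success.
    static boolean waitForRepair(RepairStatusSource mbean, int id) throws InterruptedException {
        int status;
        while ((status = mbean.getRepairStatus(id)) == RUNNING)
            Thread.sleep(100); // back off between polls
        return status == SUCCESS;
    }

    public static void main(String[] args) throws InterruptedException {
        // Fake mbean: reports RUNNING twice, then SUCCESS.
        AtomicInteger calls = new AtomicInteger();
        RepairStatusSource fake = id -> calls.getAndIncrement() < 2 ? RUNNING : SUCCESS;
        System.out.println(waitForRepair(fake, 3)); // prints "true"
    }
}
```

The 100 ms poll interval and the repair number are placeholders; the id would come from the async repair call, as the javadoc says.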
[jira] [Commented] (CASSANDRA-7985) stress tool doesn't support auth
[ https://issues.apache.org/jira/browse/CASSANDRA-7985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244307#comment-14244307 ] Jeremiah Jordan commented on CASSANDRA-7985: doesn't apply to cassandra-2.1 anymore. can you rebase? > stress tool doesn't support auth > > > Key: CASSANDRA-7985 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7985 > Project: Cassandra > Issue Type: Improvement > Components: Tools >Reporter: Ashic Mahtab >Assignee: T Jake Luciani >Priority: Minor > Fix For: 2.1.3 > > Attachments: 7985.txt, 7985v2.txt > > > stress tool in 2.1 doesn't seem to support username / password authentication > (like cqlsh). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8457) nio MessagingService
[ https://issues.apache.org/jira/browse/CASSANDRA-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244297#comment-14244297 ] Benedict commented on CASSANDRA-8457: - [~enigmacurry] if we could fire up EC2 instances from cstar, that would seem to me to be plenty sufficient for adhoc testing. We can figure something out internally for a more regular CI approach. > nio MessagingService > > > Key: CASSANDRA-8457 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8457 > Project: Cassandra > Issue Type: New Feature > Components: Core >Reporter: Jonathan Ellis >Assignee: Ariel Weisberg > Labels: performance > Fix For: 3.0 > > > Thread-per-peer (actually two each incoming and outbound) is a big > contributor to context switching, especially for larger clusters. Let's look > at switching to nio, possibly via Netty. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
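To make the quoted ticket's direction concrete, here is a minimal single-threaded java.nio selector loop: one thread multiplexing accept and read events, instead of two blocking threads per peer. This is only an illustration of the mechanism under discussion, not Cassandra code; the loopback setup and the 4-byte payload are invented for the sketch.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;

public class NioSketch {
    // Accepts one peer and reads a 4-byte message with a single selector thread.
    static String receiveOne() throws IOException {
        try (Selector selector = Selector.open();
             ServerSocketChannel server = ServerSocketChannel.open()) {
            server.bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            // The "peer" is an ordinary blocking socket, for brevity.
            int port = ((InetSocketAddress) server.getLocalAddress()).getPort();
            try (Socket peer = new Socket("127.0.0.1", port)) {
                peer.getOutputStream().write("ping".getBytes(StandardCharsets.UTF_8));
                peer.getOutputStream().flush();

                ByteBuffer buf = ByteBuffer.allocate(64);
                while (buf.position() < 4) { // until the whole message arrived
                    selector.select();
                    for (SelectionKey key : selector.selectedKeys()) {
                        if (key.isAcceptable()) {
                            SocketChannel ch = server.accept();
                            ch.configureBlocking(false);
                            ch.register(selector, SelectionKey.OP_READ);
                        } else if (key.isReadable()) {
                            ((SocketChannel) key.channel()).read(buf);
                        }
                    }
                    selector.selectedKeys().clear();
                }
                buf.flip();
                return new String(buf.array(), 0, buf.limit(), StandardCharsets.UTF_8);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(receiveOne()); // prints "ping"
    }
}
```

Netty wraps exactly this kind of event loop, which is why it comes up as the likely implementation route.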
[jira] [Commented] (CASSANDRA-8316) "Did not get positive replies from all endpoints" error on incremental repair
[ https://issues.apache.org/jira/browse/CASSANDRA-8316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244294#comment-14244294 ] Alan Boudreault commented on CASSANDRA-8316: Thanks [~krummas], I will run my tests today if possible, otherwise during the weekend and get back to you. > "Did not get positive replies from all endpoints" error on incremental repair > -- > > Key: CASSANDRA-8316 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8316 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: cassandra 2.1.2 >Reporter: Loic Lambiel >Assignee: Marcus Eriksson > Fix For: 2.1.3 > > Attachments: 0001-patch.patch, 8316-v2.patch, > CassandraDaemon-2014-11-25-2.snapshot.tar.gz, test.sh > > > Hi, > I've got an issue with incremental repairs on our production 15 nodes 2.1.2 > (new cluster, not yet loaded, RF=3) > After having successfully performed an incremental repair (-par -inc) on 3 > nodes, I started receiving "Repair failed with error Did not get positive > replies from all endpoints." from nodetool on all remaining nodes : > [2014-11-14 09:12:36,488] Starting repair command #3, repairing 108 ranges > for keyspace (seq=false, full=false) > [2014-11-14 09:12:47,919] Repair failed with error Did not get positive > replies from all endpoints. > All the nodes are up and running and the local system log shows that the > repair commands got started and that's it. > I've also noticed that soon after the repair, several nodes started having > more cpu load indefinitely without any particular reason (no tasks / queries, > nothing in the logs). I then restarted C* on these nodes and retried the > repair on several nodes, which were successful until facing the issue again. > I tried to repro on our 3 nodes preproduction cluster without success > It looks like I'm not the only one having this issue: > http://www.mail-archive.com/user%40cassandra.apache.org/msg39145.html > Any idea? 
> Thanks > Loic -- This message was sent by Atlassian JIRA (v6.3.4#6332)
cassandra git commit: ninja fixup
Repository: cassandra Updated Branches: refs/heads/trunk 5876d9342 -> 2fdd1d59c ninja fixup Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2fdd1d59 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2fdd1d59 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2fdd1d59 Branch: refs/heads/trunk Commit: 2fdd1d59c6e8934803f3c3d5629c41a8ca6a8a14 Parents: 5876d93 Author: Benedict Elliott Smith Authored: Fri Dec 12 15:22:15 2014 + Committer: Benedict Elliott Smith Committed: Fri Dec 12 15:22:15 2014 + -- src/java/org/apache/cassandra/utils/memory/NativeAllocator.java | 2 ++ 1 file changed, 2 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/2fdd1d59/src/java/org/apache/cassandra/utils/memory/NativeAllocator.java -- diff --git a/src/java/org/apache/cassandra/utils/memory/NativeAllocator.java b/src/java/org/apache/cassandra/utils/memory/NativeAllocator.java index 3c43a27..df9ab1b 100644 --- a/src/java/org/apache/cassandra/utils/memory/NativeAllocator.java +++ b/src/java/org/apache/cassandra/utils/memory/NativeAllocator.java @@ -17,6 +17,8 @@ */ package org.apache.cassandra.utils.memory; +import java.util.HashMap; +import java.util.Map; import java.util.concurrent.ConcurrentLinkedQueue; import java.util.concurrent.Semaphore; import java.util.concurrent.atomic.AtomicInteger;
[jira] [Comment Edited] (CASSANDRA-8457) nio MessagingService
[ https://issues.apache.org/jira/browse/CASSANDRA-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244258#comment-14244258 ] Ryan McGuire edited comment on CASSANDRA-8457 at 12/12/14 3:19 PM: --- [~benedict] modifying cstar_perf to run multiple instances per node is a larger task, and I'm wondering how useful that will be (seems like a lot of resource contention / non-real-world variables.) Assuming we had the alternative of 100% automated EC2 cluster bootstrapping/teardown, how often would we want to run these larger tests for it to be worth it? was (Author: enigmacurry): @Benedict modifying cstar_perf to run multiple instances per node is a larger task, and I'm wondering how useful that will be (seems like a lot of resource contention / non-real-world variables.) Assuming we had the alternative of 100% automated EC2 cluster bootstrapping/teardown, how often would we want to run these larger tests for it to be worth it? > nio MessagingService > > > Key: CASSANDRA-8457 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8457 > Project: Cassandra > Issue Type: New Feature > Components: Core >Reporter: Jonathan Ellis >Assignee: Ariel Weisberg > Labels: performance > Fix For: 3.0 > > > Thread-per-peer (actually two each incoming and outbound) is a big > contributor to context switching, especially for larger clusters. Let's look > at switching to nio, possibly via Netty. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8457) nio MessagingService
[ https://issues.apache.org/jira/browse/CASSANDRA-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244258#comment-14244258 ] Ryan McGuire commented on CASSANDRA-8457: - @Benedict modifying cstar_perf to run multiple instances per node is a larger task, and I'm wondering how useful that will be (seems like a lot of resource contention / non-real-world variables.) Assuming we had the alternative of 100% automated EC2 cluster bootstrapping/teardown, how often would we want to run these larger tests for it to be worth it? > nio MessagingService > > > Key: CASSANDRA-8457 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8457 > Project: Cassandra > Issue Type: New Feature > Components: Core >Reporter: Jonathan Ellis >Assignee: Ariel Weisberg > Labels: performance > Fix For: 3.0 > > > Thread-per-peer (actually two each incoming and outbound) is a big > contributor to context switching, especially for larger clusters. Let's look > at switching to nio, possibly via Netty. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8470) Commit Log / Memtable Flush Correctness Stress Test
[ https://issues.apache.org/jira/browse/CASSANDRA-8470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244251#comment-14244251 ] Marcus Eriksson commented on CASSANDRA-8470: +1, CASSANDRA-8429 would have been found with something like this as well > Commit Log / Memtable Flush Correctness Stress Test > --- > > Key: CASSANDRA-8470 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8470 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Benedict > > CASSANDRA-8383 should have been detected with automated testing. We should > introduce a stress test designed to expose any bugs in the core data paths. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8457) nio MessagingService
[ https://issues.apache.org/jira/browse/CASSANDRA-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244239#comment-14244239 ] Ryan McGuire commented on CASSANDRA-8457: - I've just tried to bring up a 25 node EC2 cluster on cstar_perf, and it did so without a hiccup. We have an automatic tool for installing cstar_perf.tool on EC2; it's still lacking integration with the frontend, though (meaning you can run tests against it with the frontend if we attach the cluster; you just can't create the cluster from the frontend yet.) > nio MessagingService > > > Key: CASSANDRA-8457 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8457 > Project: Cassandra > Issue Type: New Feature > Components: Core >Reporter: Jonathan Ellis >Assignee: Ariel Weisberg > Labels: performance > Fix For: 3.0 > > > Thread-per-peer (actually two each incoming and outbound) is a big > contributor to context switching, especially for larger clusters. Let's look > at switching to nio, possibly via Netty. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[2/3] cassandra git commit: cassandra-stress simultaneous inserts over same seed (take two)
cassandra-stress simultaneous inserts over same seed (take two) patch by benedict; reviewed by rstupp CASSANDRA-7964 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/597a1d5d Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/597a1d5d Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/597a1d5d Branch: refs/heads/trunk Commit: 597a1d5db27ef9e37f7066868e76cc9450fc3c9c Parents: 51f7cad Author: Benedict Elliott Smith Authored: Fri Dec 12 15:07:15 2014 + Committer: Benedict Elliott Smith Committed: Fri Dec 12 15:07:15 2014 + -- .../apache/cassandra/stress/WorkManager.java| 65 ++ .../stress/generate/PartitionIterator.java | 632 +++ .../SampledOpDistributionFactory.java | 1 + 3 files changed, 698 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/597a1d5d/tools/stress/src/org/apache/cassandra/stress/WorkManager.java -- diff --git a/tools/stress/src/org/apache/cassandra/stress/WorkManager.java b/tools/stress/src/org/apache/cassandra/stress/WorkManager.java new file mode 100644 index 000..c6a3eee --- /dev/null +++ b/tools/stress/src/org/apache/cassandra/stress/WorkManager.java @@ -0,0 +1,65 @@ +package org.apache.cassandra.stress; + +import java.util.concurrent.atomic.AtomicLong; + +interface WorkManager +{ +// -1 indicates consumer should terminate +int takePermits(int count); + +// signal all consumers to terminate +void stop(); + +static final class FixedWorkManager implements WorkManager +{ + +final AtomicLong permits; + +public FixedWorkManager(long permits) +{ +this.permits = new AtomicLong(permits); +} + +@Override +public int takePermits(int count) +{ +while (true) +{ +long cur = permits.get(); +if (cur == 0) +return -1; +count = (int) Math.min(count, cur); +long next = cur - count; +if (permits.compareAndSet(cur, next)) +return count; +} +} + +@Override +public void stop() +{ +permits.getAndSet(0); +} +} + +static final class ContinuousWorkManager 
implements WorkManager +{ + +volatile boolean stop = false; + +@Override +public int takePermits(int count) +{ +if (stop) +return -1; +return count; +} + +@Override +public void stop() +{ +stop = true; +} + +} +} http://git-wip-us.apache.org/repos/asf/cassandra/blob/597a1d5d/tools/stress/src/org/apache/cassandra/stress/generate/PartitionIterator.java -- diff --git a/tools/stress/src/org/apache/cassandra/stress/generate/PartitionIterator.java b/tools/stress/src/org/apache/cassandra/stress/generate/PartitionIterator.java new file mode 100644 index 000..baab867 --- /dev/null +++ b/tools/stress/src/org/apache/cassandra/stress/generate/PartitionIterator.java @@ -0,0 +1,632 @@ +package org.apache.cassandra.stress.generate; +/* + * + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ * + */ + + +import java.nio.ByteBuffer; +import java.util.ArrayDeque; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collections; +import java.util.Deque; +import java.util.HashSet; +import java.util.Iterator; +import java.util.List; +import java.util.NoSuchElementException; +import java.util.Queue; +import java.util.Set; +import java.util.UUID; +import java.util.concurrent.ThreadLocalRandom; + +import org.apache.cassandra.db.marshal.AbstractType; +import org.apache.cassandra.db.marshal.BytesType; +import org.apache.cassandra.stress.Operation; +import org.apache.cassandra.stress.generate.values.Generator; + +// a partition is r
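Taken on its own, the permit logic in the patch's FixedWorkManager can be exercised with a small standalone restatement. The CAS loop hands out at most `count` permits per call, shrinks the last batch to whatever budget remains, and returns -1 once the budget is exhausted (the consumers' signal to terminate):

```java
import java.util.concurrent.atomic.AtomicLong;

// Standalone restatement of FixedWorkManager's CAS permit loop, for illustration.
public class PermitDemo {
    static final class FixedWorkManager {
        final AtomicLong permits;

        FixedWorkManager(long permits) { this.permits = new AtomicLong(permits); }

        // Takes up to 'count' permits; returns -1 when none remain.
        int takePermits(int count) {
            while (true) {
                long cur = permits.get();
                if (cur == 0)
                    return -1;
                count = (int) Math.min(count, cur);
                if (permits.compareAndSet(cur, cur - count))
                    return count;
            }
        }
    }

    public static void main(String[] args) {
        FixedWorkManager wm = new FixedWorkManager(10);
        System.out.println(wm.takePermits(4));  // 4 -- full batch granted
        System.out.println(wm.takePermits(4));  // 4
        System.out.println(wm.takePermits(4));  // 2 -- only the remainder is left
        System.out.println(wm.takePermits(4));  // -1 -- budget exhausted
    }
}
```

The compareAndSet retry loop is what makes concurrent consumers safe: a losing racer simply re-reads the count and tries again.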
[3/3] cassandra git commit: Merge branch 'cassandra-2.1' into trunk
Merge branch 'cassandra-2.1' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5876d934 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5876d934 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5876d934 Branch: refs/heads/trunk Commit: 5876d93424ff49bba9db2de19442992c4dfe641d Parents: b5795ef 597a1d5 Author: Benedict Elliott Smith Authored: Fri Dec 12 15:07:34 2014 + Committer: Benedict Elliott Smith Committed: Fri Dec 12 15:07:34 2014 + -- .../apache/cassandra/stress/WorkManager.java| 65 ++ .../stress/generate/PartitionIterator.java | 632 +++ .../SampledOpDistributionFactory.java | 1 + 3 files changed, 698 insertions(+) --
[1/3] cassandra git commit: cassandra-stress simultaneous inserts over same seed (take two)
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1 51f7cad48 -> 597a1d5db refs/heads/trunk b5795ef96 -> 5876d9342 cassandra-stress simultaneous inserts over same seed (take two) patch by benedict; reviewed by rstupp CASSANDRA-7964 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/597a1d5d Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/597a1d5d Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/597a1d5d Branch: refs/heads/cassandra-2.1 Commit: 597a1d5db27ef9e37f7066868e76cc9450fc3c9c Parents: 51f7cad Author: Benedict Elliott Smith Authored: Fri Dec 12 15:07:15 2014 + Committer: Benedict Elliott Smith Committed: Fri Dec 12 15:07:15 2014 + -- .../apache/cassandra/stress/WorkManager.java| 65 ++ .../stress/generate/PartitionIterator.java | 632 +++ .../SampledOpDistributionFactory.java | 1 + 3 files changed, 698 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/597a1d5d/tools/stress/src/org/apache/cassandra/stress/WorkManager.java -- diff --git a/tools/stress/src/org/apache/cassandra/stress/WorkManager.java b/tools/stress/src/org/apache/cassandra/stress/WorkManager.java new file mode 100644 index 000..c6a3eee --- /dev/null +++ b/tools/stress/src/org/apache/cassandra/stress/WorkManager.java @@ -0,0 +1,65 @@ +package org.apache.cassandra.stress; + +import java.util.concurrent.atomic.AtomicLong; + +interface WorkManager +{ +// -1 indicates consumer should terminate +int takePermits(int count); + +// signal all consumers to terminate +void stop(); + +static final class FixedWorkManager implements WorkManager +{ + +final AtomicLong permits; + +public FixedWorkManager(long permits) +{ +this.permits = new AtomicLong(permits); +} + +@Override +public int takePermits(int count) +{ +while (true) +{ +long cur = permits.get(); +if (cur == 0) +return -1; +count = (int) Math.min(count, cur); +long next = cur - count; +if (permits.compareAndSet(cur, 
next)) +return count; +} +} + +@Override +public void stop() +{ +permits.getAndSet(0); +} +} + +static final class ContinuousWorkManager implements WorkManager +{ + +volatile boolean stop = false; + +@Override +public int takePermits(int count) +{ +if (stop) +return -1; +return count; +} + +@Override +public void stop() +{ +stop = true; +} + +} +} http://git-wip-us.apache.org/repos/asf/cassandra/blob/597a1d5d/tools/stress/src/org/apache/cassandra/stress/generate/PartitionIterator.java -- diff --git a/tools/stress/src/org/apache/cassandra/stress/generate/PartitionIterator.java b/tools/stress/src/org/apache/cassandra/stress/generate/PartitionIterator.java new file mode 100644 index 000..baab867 --- /dev/null +++ b/tools/stress/src/org/apache/cassandra/stress/generate/PartitionIterator.java @@ -0,0 +1,632 @@ +package org.apache.cassandra.stress.generate; +/* + * + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ * + */ + + +import java.nio.ByteBuffer; +import java.util.ArrayDeque; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collections; +import java.util.Deque; +import java.util.HashSet; +import java.util.Iterator; +import java.util.List; +import java.util.NoSuchElementException; +import java.util.Queue; +import java.util.Set; +import java.util.UUID; +import java.util.concurrent.ThreadLocalRandom; + +import org.apache.cassandra.db.marshal.AbstractType; +import org.apache.cassandra.db.marshal.B
[jira] [Updated] (CASSANDRA-8316) "Did not get positive replies from all endpoints" error on incremental repair
[ https://issues.apache.org/jira/browse/CASSANDRA-8316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marcus Eriksson updated CASSANDRA-8316: --- Attachment: 8316-v2.patch Attaching new patch; found the problem with compacting-marking: we fail to mark sstables as compacting if one of them is already marked compacted, and this patch makes sure we remove those before retrying (otherwise we loop forever). Also adds a try/catch around the switch(..) in RepairMessageVerbHandler; it might be a bit much, but it makes sure we always remove the parent repair session on failures. With this plus the patch in CASSANDRA-8458 I'm able to continuously run incremental repairs on a heavily loaded 6 node cluster. > "Did not get positive replies from all endpoints" error on incremental repair > -- > > Key: CASSANDRA-8316 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8316 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: cassandra 2.1.2 >Reporter: Loic Lambiel >Assignee: Marcus Eriksson > Fix For: 2.1.3 > > Attachments: 0001-patch.patch, 8316-v2.patch, > CassandraDaemon-2014-11-25-2.snapshot.tar.gz, test.sh > > > Hi, > I've got an issue with incremental repairs on our production 15 nodes 2.1.2 > (new cluster, not yet loaded, RF=3) > After having successfully performed an incremental repair (-par -inc) on 3 > nodes, I started receiving "Repair failed with error Did not get positive > replies from all endpoints." from nodetool on all remaining nodes : > [2014-11-14 09:12:36,488] Starting repair command #3, repairing 108 ranges > for keyspace (seq=false, full=false) > [2014-11-14 09:12:47,919] Repair failed with error Did not get positive > replies from all endpoints. > All the nodes are up and running and the local system log shows that the > repair commands got started and that's it. > I've also noticed that soon after the repair, several nodes started having > more cpu load indefinitely without any particular reason (no tasks / queries, > nothing in the logs). 
I then restarted C* on these nodes and retried the > repair on several nodes, which were successful until facing the issue again. > I tried to repro on our 3 nodes preproduction cluster without success > It looks like I'm not the only one having this issue: > http://www.mail-archive.com/user%40cassandra.apache.org/msg39145.html > Any idea? > Thanks > Loic -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8472) Streams hang in repair
[ https://issues.apache.org/jira/browse/CASSANDRA-8472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244226#comment-14244226 ] Jimmy Mårdell commented on CASSANDRA-8472: -- Ohh, nope! Will try that, thanks! "nodetool netstats" also shows some progress of each stream. Would it make sense to have a timeout on not making progress for some time? > Streams hang in repair > -- > > Key: CASSANDRA-8472 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8472 > Project: Cassandra > Issue Type: Bug >Reporter: Jimmy Mårdell > Attachments: errlogs > > > In general streaming is working much better in 2.0.x than before, but we > still get occasional hanging stream sessions. > One of the nodes, the "follower", throws IOException: Broken pipe, causing > all streams to fail with the "initiator" node. But the initiator node still > thinks its sending and receiving files from the follower, causing the > streaming to hang forever. > Relevant lines from the logs of the "follower" attached. There's nothing > relevant in the logs on the initiator node. There are no indications of retry > attempts. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-8458) Don't give out positions in an sstable beyond its first/last tokens
[ https://issues.apache.org/jira/browse/CASSANDRA-8458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marcus Eriksson updated CASSANDRA-8458: --- Attachment: 0001-Make-sure-we-don-t-give-out-positions-from-an-sstabl.patch attached patch checks the first/last keys in an sstable to make sure we don't give out positions beyond them > Don't give out positions in an sstable beyond its first/last tokens > --- > > Key: CASSANDRA-8458 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8458 > Project: Cassandra > Issue Type: Bug >Reporter: Marcus Eriksson >Assignee: Marcus Eriksson > Fix For: 2.1.3 > > Attachments: > 0001-Make-sure-we-don-t-give-out-positions-from-an-sstabl.patch > > > Looks like we include tmplink sstables in streams in 2.1+, and when we do, > sometimes we get this error message on the receiving side: > {{java.io.IOException: Corrupt input data, block did not start with 2 byte > signature ('ZV') followed by type byte, 2-byte length)}}. I've only seen this > happen when a tmplink sstable is included in the stream. > We can not just exclude the tmplink files when starting the stream - we need > to include the original file, which we might miss since we check if the > requested stream range intersects the sstable range. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
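The shape of the fix can be sketched in isolation: before handing out stream positions, intersect the requested range with the sstable's own first/last bounds and drop it entirely when they are disjoint. All names here are invented for illustration; the real patch operates on Cassandra's sstable reader internals, and tokens are modeled as plain longs.

```java
// Simplified illustration (invented types/names) of clamping a requested
// range to an sstable's own first/last bounds before computing positions.
public class RangeClampSketch {
    // A closed interval [left, right]; tokens are plain longs in this sketch.
    static final class Range {
        final long left, right;
        Range(long left, long right) { this.left = left; this.right = right; }
    }

    // Returns the part of 'requested' the sstable covers, or null if disjoint.
    static Range clampToSSTable(Range requested, long firstToken, long lastToken) {
        long left = Math.max(requested.left, firstToken);
        long right = Math.min(requested.right, lastToken);
        return left <= right ? new Range(left, right) : null;
    }

    public static void main(String[] args) {
        // sstable covers tokens [100, 200]; the request spills past both ends
        Range r = clampToSSTable(new Range(50, 250), 100, 200);
        System.out.println(r.left + ".." + r.right);              // prints "100..200"
        System.out.println(clampToSSTable(new Range(300, 400), 100, 200)); // prints "null"
    }
}
```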
[jira] [Comment Edited] (CASSANDRA-8458) Don't give out positions in an sstable beyond its first/last tokens
[ https://issues.apache.org/jira/browse/CASSANDRA-8458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244219#comment-14244219 ] Marcus Eriksson edited comment on CASSANDRA-8458 at 12/12/14 2:54 PM: -- attached patch checks first/last keys in an sstable to make sure we don't give out positions beyond them was (Author: krummas): attached patch makes checks first/last keys in an sstable to make sure we don't give out positions beyond them > Don't give out positions in an sstable beyond its first/last tokens > --- > > Key: CASSANDRA-8458 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8458 > Project: Cassandra > Issue Type: Bug >Reporter: Marcus Eriksson >Assignee: Marcus Eriksson > Fix For: 2.1.3 > > Attachments: > 0001-Make-sure-we-don-t-give-out-positions-from-an-sstabl.patch > > > Looks like we include tmplink sstables in streams in 2.1+, and when we do, > sometimes we get this error message on the receiving side: > {{java.io.IOException: Corrupt input data, block did not start with 2 byte > signature ('ZV') followed by type byte, 2-byte length)}}. I've only seen this > happen when a tmplink sstable is included in the stream. > We can not just exclude the tmplink files when starting the stream - we need > to include the original file, which we might miss since we check if the > requested stream range intersects the sstable range. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CASSANDRA-8458) Don't give out positions in an sstable beyond its first/last tokens
[ https://issues.apache.org/jira/browse/CASSANDRA-8458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244219#comment-14244219 ] Marcus Eriksson edited comment on CASSANDRA-8458 at 12/12/14 2:53 PM: -- attached patch makes checks first/last keys in an sstable to make sure we don't give out positions beyond them was (Author: krummas): attached file makes checks first/last keys in an sstable to make sure we don't give out positions beyond them > Don't give out positions in an sstable beyond its first/last tokens > --- > > Key: CASSANDRA-8458 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8458 > Project: Cassandra > Issue Type: Bug >Reporter: Marcus Eriksson >Assignee: Marcus Eriksson > Fix For: 2.1.3 > > Attachments: > 0001-Make-sure-we-don-t-give-out-positions-from-an-sstabl.patch > > > Looks like we include tmplink sstables in streams in 2.1+, and when we do, > sometimes we get this error message on the receiving side: > {{java.io.IOException: Corrupt input data, block did not start with 2 byte > signature ('ZV') followed by type byte, 2-byte length)}}. I've only seen this > happen when a tmplink sstable is included in the stream. > We can not just exclude the tmplink files when starting the stream - we need > to include the original file, which we might miss since we check if the > requested stream range intersects the sstable range. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-7124) Use JMX Notifications to Indicate Success/Failure of Long-Running Operations
[ https://issues.apache.org/jira/browse/CASSANDRA-7124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14244215#comment-14244215 ] Rajanarayanan Thottuvaikkatumana commented on CASSANDRA-7124: - [~yukim], Ok. Thanks for pointing this out. I will move that message to the end of the OnSuccess method so that the correct message is sent out. Is that it? Should I regenerate the patch? Thanks > Use JMX Notifications to Indicate Success/Failure of Long-Running Operations > > > Key: CASSANDRA-7124 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7124 > Project: Cassandra > Issue Type: Improvement > Components: Tools >Reporter: Tyler Hobbs >Assignee: Rajanarayanan Thottuvaikkatumana >Priority: Minor > Labels: lhf > Fix For: 3.0 > > Attachments: 7124-wip.txt, cassandra-trunk-compact-7124.txt, > cassandra-trunk-decommission-7124.txt > > > If {{nodetool cleanup}} or some other long-running operation takes too long > to complete, you'll see an error like the one in CASSANDRA-2126, so you can't > tell if the operation completed successfully or not. CASSANDRA-4767 fixed > this for repairs with JMX notifications. We should do something similar for > nodetool cleanup, compact, decommission, move, relocate, etc. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-8458) Don't give out positions in an sstable beyond its first/last tokens
[ https://issues.apache.org/jira/browse/CASSANDRA-8458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marcus Eriksson updated CASSANDRA-8458: --- Summary: Don't give out positions in an sstable beyond its first/last tokens (was: Avoid streaming from tmplink files) > Don't give out positions in an sstable beyond its first/last tokens > --- > > Key: CASSANDRA-8458 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8458 > Project: Cassandra > Issue Type: Bug >Reporter: Marcus Eriksson >Assignee: Marcus Eriksson > Fix For: 2.1.3 > > > Looks like we include tmplink sstables in streams in 2.1+, and when we do, > sometimes we get this error message on the receiving side: > {{java.io.IOException: Corrupt input data, block did not start with 2 byte > signature ('ZV') followed by type byte, 2-byte length)}}. I've only seen this > happen when a tmplink sstable is included in the stream. > We can not just exclude the tmplink files when starting the stream - we need > to include the original file, which we might miss since we check if the > requested stream range intersects the sstable range. -- This message was sent by Atlassian JIRA (v6.3.4#6332)