[jira] [Commented] (CASSANDRA-12203) AssertionError on compaction after upgrade (2.1.9 -> 3.7)

2016-07-18 Thread Roman S. Borschel (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383633#comment-15383633
 ] 

Roman S. Borschel commented on CASSANDRA-12203:
---

Of course, issue description updated.

> AssertionError on compaction after upgrade (2.1.9 -> 3.7)
> -
>
> Key: CASSANDRA-12203
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12203
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Cassandra 3.7 (upgrade from 2.1.9)
> Java version "1.8.0_91"
> Ubuntu 14.04.4 LTS (GNU/Linux 3.13.0-83-generic x86_64)
>Reporter: Roman S. Borschel
> Fix For: 3.x
>
>
> After upgrading a Cassandra cluster from 2.1.9 to 3.7, one column family 
> (using SizeTieredCompaction) repeatedly and continuously failed compaction 
> (and thus also repair) across the cluster, with all nodes producing the 
> following errors in the logs:
> {noformat}
> 016-07-14T09:29:47.96855 |srv=cassandra|ERROR: Exception in thread 
> Thread[CompactionExecutor:3,1,main]
> 2016-07-14T09:29:47.96858 |srv=cassandra|java.lang.AssertionError: null
> 2016-07-14T09:29:47.96859 |srv=cassandra|   at 
> org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer$TombstoneTracker.openNew(UnfilteredDeserializer.java:650)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96860 |srv=cassandra|   at 
> org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer$UnfilteredIterator.hasNext(UnfilteredDeserializer.java:423)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96860 |srv=cassandra|   at 
> org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer.hasNext(UnfilteredDeserializer.java:298)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96860 |srv=cassandra|   at 
> org.apache.cassandra.io.sstable.SSTableSimpleIterator$OldFormatIterator.readStaticRow(SSTableSimpleIterator.java:133)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96861 |srv=cassandra|   at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.(SSTableIdentityIterator.java:57)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96861 |srv=cassandra|   at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator$1.initializeIterator(BigTableScanner.java:334)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96862 |srv=cassandra|   at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.maybeInit(LazilyInitializedUnfilteredRowIterator.java:48)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96862 |srv=cassandra|   at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.isReverseOrder(LazilyInitializedUnfilteredRowIterator.java:70)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96863 |srv=cassandra|   at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$1.reduce(UnfilteredPartitionIterators.java:109)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96863 |srv=cassandra|   at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$1.reduce(UnfilteredPartitionIterators.java:100)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96864 |srv=cassandra|   at 
> org.apache.cassandra.utils.MergeIterator$Candidate.consume(MergeIterator.java:408)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96864 |srv=cassandra|   at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:203)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96865 |srv=cassandra|   at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:156)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96865 |srv=cassandra|   at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96866 |srv=cassandra|   at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$2.hasNext(UnfilteredPartitionIterators.java:150)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96866 |srv=cassandra|   at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:72)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96867 |srv=cassandra|   at 
> org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:226)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96867 |srv=cassandra|   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:182)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96867 |srv=cassandra|   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96868 |srv=cassandra|   at 
> org.apache.cassandra

[jira] [Updated] (CASSANDRA-12203) AssertionError on compaction after upgrade (2.1.9 -> 3.7)

2016-07-18 Thread Roman S. Borschel (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman S. Borschel updated CASSANDRA-12203:
--
Description: 
After upgrading a Cassandra cluster from 2.1.9 to 3.7, one column family (using 
SizeTieredCompaction) repeatedly and continuously failed compaction (and thus 
also repair) across the cluster, with all nodes producing the following errors 
in the logs:

{noformat}
016-07-14T09:29:47.96855 |srv=cassandra|ERROR: Exception in thread 
Thread[CompactionExecutor:3,1,main]
2016-07-14T09:29:47.96858 |srv=cassandra|java.lang.AssertionError: null
2016-07-14T09:29:47.96859 |srv=cassandra|   at 
org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer$TombstoneTracker.openNew(UnfilteredDeserializer.java:650)
 ~[apache-cassandra-3.7.jar:3.7]
2016-07-14T09:29:47.96860 |srv=cassandra|   at 
org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer$UnfilteredIterator.hasNext(UnfilteredDeserializer.java:423)
 ~[apache-cassandra-3.7.jar:3.7]
2016-07-14T09:29:47.96860 |srv=cassandra|   at 
org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer.hasNext(UnfilteredDeserializer.java:298)
 ~[apache-cassandra-3.7.jar:3.7]
2016-07-14T09:29:47.96860 |srv=cassandra|   at 
org.apache.cassandra.io.sstable.SSTableSimpleIterator$OldFormatIterator.readStaticRow(SSTableSimpleIterator.java:133)
 ~[apache-cassandra-3.7.jar:3.7]
2016-07-14T09:29:47.96861 |srv=cassandra|   at 
org.apache.cassandra.io.sstable.SSTableIdentityIterator.(SSTableIdentityIterator.java:57)
 ~[apache-cassandra-3.7.jar:3.7]
2016-07-14T09:29:47.96861 |srv=cassandra|   at 
org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator$1.initializeIterator(BigTableScanner.java:334)
 ~[apache-cassandra-3.7.jar:3.7]
2016-07-14T09:29:47.96862 |srv=cassandra|   at 
org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.maybeInit(LazilyInitializedUnfilteredRowIterator.java:48)
 ~[apache-cassandra-3.7.jar:3.7]
2016-07-14T09:29:47.96862 |srv=cassandra|   at 
org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.isReverseOrder(LazilyInitializedUnfilteredRowIterator.java:70)
 ~[apache-cassandra-3.7.jar:3.7]
2016-07-14T09:29:47.96863 |srv=cassandra|   at 
org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$1.reduce(UnfilteredPartitionIterators.java:109)
 ~[apache-cassandra-3.7.jar:3.7]
2016-07-14T09:29:47.96863 |srv=cassandra|   at 
org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$1.reduce(UnfilteredPartitionIterators.java:100)
 ~[apache-cassandra-3.7.jar:3.7]
2016-07-14T09:29:47.96864 |srv=cassandra|   at 
org.apache.cassandra.utils.MergeIterator$Candidate.consume(MergeIterator.java:408)
 ~[apache-cassandra-3.7.jar:3.7]
2016-07-14T09:29:47.96864 |srv=cassandra|   at 
org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:203)
 ~[apache-cassandra-3.7.jar:3.7]
2016-07-14T09:29:47.96865 |srv=cassandra|   at 
org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:156)
 ~[apache-cassandra-3.7.jar:3.7]
2016-07-14T09:29:47.96865 |srv=cassandra|   at 
org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
~[apache-cassandra-3.7.jar:3.7]
2016-07-14T09:29:47.96866 |srv=cassandra|   at 
org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$2.hasNext(UnfilteredPartitionIterators.java:150)
 ~[apache-cassandra-3.7.jar:3.7]
2016-07-14T09:29:47.96866 |srv=cassandra|   at 
org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:72)
 ~[apache-cassandra-3.7.jar:3.7]
2016-07-14T09:29:47.96867 |srv=cassandra|   at 
org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:226)
 ~[apache-cassandra-3.7.jar:3.7]
2016-07-14T09:29:47.96867 |srv=cassandra|   at 
org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:182)
 ~[apache-cassandra-3.7.jar:3.7]
2016-07-14T09:29:47.96867 |srv=cassandra|   at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
~[apache-cassandra-3.7.jar:3.7]
2016-07-14T09:29:47.96868 |srv=cassandra|   at 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:82)
 ~[apache-cassandra-3.7.jar:3.7]
2016-07-14T09:29:47.96868 |srv=cassandra|   at 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
 ~[apache-cassandra-3.7.jar:3.7]
2016-07-14T09:29:47.96869 |srv=cassandra|   at 
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:264)
 ~[apache-cassandra-3.7.jar:3.7]
2016-07-14T09:29:47.96870 |srv=cassandra|   at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[na:1.8.0_91]
2016-07-14T09:29:47.96870 |srv=cassandra|   at 
java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_91]
2016-07-14T09:29:47.9687

[jira] [Updated] (CASSANDRA-12180) Should be able to override compaction space check

2016-07-18 Thread sankalp kohli (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sankalp kohli updated CASSANDRA-12180:
--
Status: Patch Available  (was: Open)

> Should be able to override compaction space check
> -
>
> Key: CASSANDRA-12180
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12180
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: sankalp kohli
>Priority: Trivial
> Attachments: CASSANDRA-12180_3.0.txt
>
>
> If there's not enough space for a compaction, Cassandra won't run it and prints the 
> exception below. Sometimes we know a compaction will free up a lot of space, since 
> an ETL job could have inserted a lot of deletes. Being able to override the check 
> helps in this case (see the sketch below). 
> ERROR [CompactionExecutor:17] CassandraDaemon.java (line 258) Exception in 
> thread Thread
> [CompactionExecutor:17,1,main]
> java.lang.RuntimeException: Not enough space for compaction, estimated 
> sstables = 1552, expected
> write size = 260540558535
> at org.apache.cassandra.db.compaction.CompactionTask.checkAvailableDiskSpace
> (CompactionTask.java:306)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.
> java:106)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.
> java:60)
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.
> java:59)
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run
> (CompactionManager.java:198)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
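
A minimal, self-contained sketch of what such an override could look like, assuming a hypothetical {{cassandra.compaction.skip_space_check}} system property (this is an illustration of the idea only, not the attached patch):

{code}
// Illustration only: a compaction space check that an operator can bypass
// via a hypothetical -Dcassandra.compaction.skip_space_check=true property.
public class CompactionSpaceCheckSketch
{
    // stand-ins for the real estimates and free-space lookup
    static final long EXPECTED_WRITE_SIZE = 260_540_558_535L;
    static final long AVAILABLE_BYTES = 100_000_000_000L;

    static void checkAvailableDiskSpace(long estimatedSSTables)
    {
        if (Boolean.getBoolean("cassandra.compaction.skip_space_check"))
        {
            System.out.println("WARN: compaction space check overridden, proceeding anyway");
            return;
        }
        if (AVAILABLE_BYTES < EXPECTED_WRITE_SIZE)
            throw new RuntimeException(String.format(
                "Not enough space for compaction, estimated sstables = %d, expected write size = %d",
                estimatedSSTables, EXPECTED_WRITE_SIZE));
    }

    public static void main(String[] args)
    {
        checkAvailableDiskSpace(1552);  // throws unless the override property is set
    }
}
{code}

Running with {{-Dcassandra.compaction.skip_space_check=true}} exercises the override path; without it the check throws exactly as today.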



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-12220) utest RowIndexEntryTest.testC11206AgainstPreviousArray/Shallow failure

2016-07-18 Thread Dave Brosius (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383526#comment-15383526
 ] 

Dave Brosius edited comment on CASSANDRA-12220 at 7/19/16 3:31 AM:
---

When the test works (new HashMap), CFMetaData.columnMetaData is laid out in 
memory as

{code}
{java.nio.HeapByteBuffer[pos=0 lim=3 cap=3]=val, 
java.nio.HeapByteBuffer[pos=0 lim=2 cap=2]=pk, 
java.nio.HeapByteBuffer[pos=0 lim=2 cap=2]=ck}
{code}

When it doesn't work (Maps.newHashMapWithExpectedSize) the order is

{code}
java.nio.HeapByteBuffer[pos=0 lim=2 cap=2]=ck,
java.nio.HeapByteBuffer[pos=0 lim=2 cap=2]=pk,
{java.nio.HeapByteBuffer[pos=0 lim=3 cap=3]=val}
{code}

Given that this is a HashMap, the ordering difference is naturally explained by the 
different allocation size.

The problem, then, is that TreeCursor.seekTo expects the columns to be 
visited in alphabetical order, where you see

{code}
if (key == test) cmp = 0; // check object identity first, since we utilise that 
in some places and it's very cheap
else cmp = comparator.compare(test, key); // order of provision 
matters for asymmetric comparators
if (forwards ? cmp >= 0 : cmp <= 0)
{
// we've either matched, or excluded the value from being 
present
this.cur = cur;
return cmp == 0;
}
{code}
In this case key (ck) is not test (val), so we jump to the else branch; with 
forwards == true and cmp == 1, seekTo therefore returns false.

This causes the test to fail.

I can only assume I'm missing something else, because one would think this 
would be failing all over the place, and presumably it's not.
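
To make the allocation-size point concrete, here is a small, self-contained Java illustration (not Cassandra code): HashMap iteration order is bucket order, which depends on the table's capacity, so sizing the map differently can change the order in which the same keys come back (the exact output depends on the keys' hash codes):

{code}
import java.util.HashMap;
import java.util.Map;

public class HashMapOrderDemo
{
    public static void main(String[] args)
    {
        Map<String, String> defaultCapacity = new HashMap<>();   // default table size (16)
        Map<String, String> sizedCapacity = new HashMap<>(4);    // roughly what an "expected size" of 3 yields

        for (String name : new String[] { "val", "pk", "ck" })
        {
            defaultCapacity.put(name, name);
            sizedCapacity.put(name, name);
        }

        // Iteration order is bucket order, which depends on the table size,
        // so the two maps may enumerate the same keys in different orders.
        System.out.println(defaultCapacity.keySet());
        System.out.println(sizedCapacity.keySet());
    }
}
{code}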


was (Author: dbrosius):
When the test works (new HashMap), CFMetaData.columnMetaData is laid out in 
memory as

{code}
{java.nio.HeapByteBuffer[pos=0 lim=3 cap=3]=val, 
java.nio.HeapByteBuffer[pos=0 lim=2 cap=2]=pk, 
java.nio.HeapByteBuffer[pos=0 lim=2 cap=2]=ck}
{code}

When it doesn't work (Maps.newHashMapWithExpectedSize) the order is

{code}
java.nio.HeapByteBuffer[pos=0 lim=2 cap=2]=ck,
java.nio.HeapByteBuffer[pos=0 lim=2 cap=2]=pk,
{java.nio.HeapByteBuffer[pos=0 lim=3 cap=3]=val}
{code}

given that this is a HashMap the difference is explained naturally by the 
different allocation size.

The problem is then, that in TreeCursor.seekTo, it expects the columns to be 
visited in alphabetic order, where you see

if (key == test) cmp = 0; // check object identity first, since we utilise that 
in some places and it's very cheap
else cmp = comparator.compare(test, key); // order of provision 
matters for asymmetric comparators
if (forwards ? cmp >= 0 : cmp <= 0)
{
// we've either matched, or excluded the value from being 
present
this.cur = cur;
return cmp == 0;
}

in this case key (ck) is not test (val), and so jumps to the else, which 
forwards == true, and cmp == 1, and thus returns false for seekTo.

This causes stuff to fail.

I can only assume i'm missing something else, because one would think this 
would be failing all over the place, and one assumes it's not.

> utest RowIndexEntryTest.testC11206AgainstPreviousArray/Shallow failure
> --
>
> Key: CASSANDRA-12220
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12220
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>
> The unit tests {{RowIndexEntryTest.testC11206AgainstPreviousArray}} and 
> {{RowIndexEntryTest.testC11206AgainstPreviousShallow}} fail after [this 
> single line 
> change|https://github.com/apache/cassandra/commit/70fd80ae43f3902e651c956b6d4d07cbc203d30a#diff-75146ba408a51071a0b19ffdfbb2bb3cL307]
>  as shown in [this 
> build|http://cassci.datastax.com/view/trunk/job/trunk_testall/1044/].
> Reverting that line to {{new HashMap<>()}} fixes the unit test issues - but 
> _does not_ explain why it fails, since initializing a collection with the 
> expected size should not change the overall behaviour. There seems to be 
> something else being wrong.
> /cc [~dbrosius]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-12220) utest RowIndexEntryTest.testC11206AgainstPreviousArray/Shallow failure

2016-07-18 Thread Dave Brosius (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383526#comment-15383526
 ] 

Dave Brosius edited comment on CASSANDRA-12220 at 7/19/16 3:31 AM:
---

When the test works (new HashMap), CFMetaData.columnMetaData is laid out in 
memory as

{code}
{java.nio.HeapByteBuffer[pos=0 lim=3 cap=3]=val, 
java.nio.HeapByteBuffer[pos=0 lim=2 cap=2]=pk, 
java.nio.HeapByteBuffer[pos=0 lim=2 cap=2]=ck}
{code}

When it doesn't work (Maps.newHashMapWithExpectedSize) the order is

{code}
java.nio.HeapByteBuffer[pos=0 lim=2 cap=2]=ck,
java.nio.HeapByteBuffer[pos=0 lim=2 cap=2]=pk,
{java.nio.HeapByteBuffer[pos=0 lim=3 cap=3]=val}
{code}

given that this is a HashMap the difference is explained naturally by the 
different allocation size.

The problem is then, that in TreeCursor.seekTo, it expects the columns to be 
visited in alphabetic order, where you see

if (key == test) cmp = 0; // check object identity first, since we utilise that 
in some places and it's very cheap
else cmp = comparator.compare(test, key); // order of provision 
matters for asymmetric comparators
if (forwards ? cmp >= 0 : cmp <= 0)
{
// we've either matched, or excluded the value from being 
present
this.cur = cur;
return cmp == 0;
}

in this case key (ck) is not test (val), and so jumps to the else, which 
forwards == true, and cmp == 1, and thus returns false for seekTo.

This causes stuff to fail.

I can only assume i'm missing something else, because one would think this 
would be failing all over the place, and one assumes it's not.


was (Author: dbrosius):
When the test works (new HashMap), CFMetaData.columnMetaData is laid out in 
memory as

{java.nio.HeapByteBuffer[pos=0 lim=3 cap=3]=val, java.nio.HeapByteBuffer[pos=0 
lim=2 cap=2]=pk, java.nio.HeapByteBuffer[pos=0 lim=2 cap=2]=ck}

When it doesn't work (Maps.newHashMapWithExpectedSize) the order is

java.nio.HeapByteBuffer[pos=0 lim=2 cap=2]=ck,
java.nio.HeapByteBuffer[pos=0 lim=2 cap=2]=pk,
{java.nio.HeapByteBuffer[pos=0 lim=3 cap=3]=val}

given that this is a HashMap the difference is explained naturally by the 
different allocation size.

The problem is then, that in TreeCursor.seekTo expects the columns to be 
visited in alphabetic order, where you see

if (key == test) cmp = 0; // check object identity first, since we utilise that 
in some places and it's very cheap
else cmp = comparator.compare(test, key); // order of provision 
matters for asymmetric comparators
if (forwards ? cmp >= 0 : cmp <= 0)
{
// we've either matched, or excluded the value from being 
present
this.cur = cur;
return cmp == 0;
}

in this case key (ck) is not test (val), and so jumps to the else, which 
forwards == true, and cmp == 1, and thus returns false for seekTo.

This causes stuff to fail.

I can only assume i'm missing something else, because one would think this 
would be failing all over the place, and one assumes it's not.

> utest RowIndexEntryTest.testC11206AgainstPreviousArray/Shallow failure
> --
>
> Key: CASSANDRA-12220
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12220
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>
> The unit tests {{RowIndexEntryTest.testC11206AgainstPreviousArray}} and 
> {{RowIndexEntryTest.testC11206AgainstPreviousShallow}} fail after [this 
> single line 
> change|https://github.com/apache/cassandra/commit/70fd80ae43f3902e651c956b6d4d07cbc203d30a#diff-75146ba408a51071a0b19ffdfbb2bb3cL307]
>  as shown in [this 
> build|http://cassci.datastax.com/view/trunk/job/trunk_testall/1044/].
> Reverting that line to {{new HashMap<>()}} fixes the unit test issues - but 
> _does not_ explain why it fails, since initializing a collection with the 
> expected size should not change the overall behaviour. There seems to be 
> something else being wrong.
> /cc [~dbrosius]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12220) utest RowIndexEntryTest.testC11206AgainstPreviousArray/Shallow failure

2016-07-18 Thread Dave Brosius (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383526#comment-15383526
 ] 

Dave Brosius commented on CASSANDRA-12220:
--

When the test works (new HashMap), CFMetaData.columnMetaData is laid out in 
memory as

{java.nio.HeapByteBuffer[pos=0 lim=3 cap=3]=val, java.nio.HeapByteBuffer[pos=0 
lim=2 cap=2]=pk, java.nio.HeapByteBuffer[pos=0 lim=2 cap=2]=ck}

When it doesn't work (Maps.newHashMapWithExpectedSize) the order is

java.nio.HeapByteBuffer[pos=0 lim=2 cap=2]=ck,
java.nio.HeapByteBuffer[pos=0 lim=2 cap=2]=pk,
{java.nio.HeapByteBuffer[pos=0 lim=3 cap=3]=val}

given that this is a HashMap the difference is explained naturally by the 
different allocation size.

The problem is then, that in TreeCursor.seekTo expects the columns to be 
visited in alphabetic order, where you see

if (key == test) cmp = 0; // check object identity first, since we utilise that 
in some places and it's very cheap
else cmp = comparator.compare(test, key); // order of provision 
matters for asymmetric comparators
if (forwards ? cmp >= 0 : cmp <= 0)
{
// we've either matched, or excluded the value from being 
present
this.cur = cur;
return cmp == 0;
}

in this case key (ck) is not test (val), and so jumps to the else, which 
forwards == true, and cmp == 1, and thus returns false for seekTo.

This causes stuff to fail.

I can only assume i'm missing something else, because one would think this 
would be failing all over the place, and one assumes it's not.

> utest RowIndexEntryTest.testC11206AgainstPreviousArray/Shallow failure
> --
>
> Key: CASSANDRA-12220
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12220
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>
> The unit tests {{RowIndexEntryTest.testC11206AgainstPreviousArray}} and 
> {{RowIndexEntryTest.testC11206AgainstPreviousShallow}} fail after [this 
> single line 
> change|https://github.com/apache/cassandra/commit/70fd80ae43f3902e651c956b6d4d07cbc203d30a#diff-75146ba408a51071a0b19ffdfbb2bb3cL307]
>  as shown in [this 
> build|http://cassci.datastax.com/view/trunk/job/trunk_testall/1044/].
> Reverting that line to {{new HashMap<>()}} fixes the unit test issues - but 
> _does not_ explain why it fails, since initializing a collection with the 
> expected size should not change the overall behaviour. There seems to be 
> something else being wrong.
> /cc [~dbrosius]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12214) cqlshlib test failure: cqlshlib.test.remove_test_db

2016-07-18 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383525#comment-15383525
 ] 

Stefania commented on CASSANDRA-12214:
--

Good idea, I've removed the word "test" from a bunch of methods in 
_cassconnect.py_:

||2.2||3.0||3.9||trunk||
|[patch|https://github.com/stef1927/cassandra/commits/12214-cqlsh-2.2]|[patch|https://github.com/stef1927/cassandra/commits/12214-cqlsh-3.0]|[patch|https://github.com/stef1927/cassandra/commits/12214-cqlsh-3.9]|[patch|https://github.com/stef1927/cassandra/commits/12214-cqlsh]|
|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-12214-cqlsh-2.2-cqlsh-tests/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-12214-cqlsh-3.0-cqlsh-tests/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-12214-cqlsh-3.9-cqlsh-tests/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-12214-cqlsh-cqlsh-tests/]|



>  cqlshlib test failure: cqlshlib.test.remove_test_db
> 
>
> Key: CASSANDRA-12214
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12214
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Craig Kodman
>  Labels: cqlsh
>
> [~Stefania]  
> http://cassci.datastax.com/job/cassandra-3.9_cqlsh_tests/lastCompletedBuild/testReport/
> Hello, these three tests are failing:
> cqlshlib.test.remove_test_db
> cqlshlib.test.test_cqlsh_completion.TestCqlshCompletion.test_complete_in_create_columnfamily
> cqlshlib.test.test_cqlsh_completion.TestCqlshCompletion.test_complete_in_create_table
> Can you look at them, please?  Thank you!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-12214) cqlshlib test failure: cqlshlib.test.remove_test_db

2016-07-18 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania reassigned CASSANDRA-12214:


Assignee: Stefania

>  cqlshlib test failure: cqlshlib.test.remove_test_db
> 
>
> Key: CASSANDRA-12214
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12214
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Craig Kodman
>Assignee: Stefania
>  Labels: cqlsh
>
> [~Stefania]  
> http://cassci.datastax.com/job/cassandra-3.9_cqlsh_tests/lastCompletedBuild/testReport/
> Hello, these three tests are failing:
> cqlshlib.test.remove_test_db
> cqlshlib.test.test_cqlsh_completion.TestCqlshCompletion.test_complete_in_create_columnfamily
> cqlshlib.test.test_cqlsh_completion.TestCqlshCompletion.test_complete_in_create_table
> Can you look at them, please?  Thank you!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11465) dtest failure in cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test

2016-07-18 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383505#comment-15383505
 ] 

Stefania commented on CASSANDRA-11465:
--

Yesterday's run reproduced the [same 
failure|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11465-cqlsh-3.9-dtest/lastCompletedBuild/testReport/cql_tracing_test/TestCqlTracing/tracing_default_impl_test/]
 unfortunately. After investigating, I discovered that the tracing executor was 
rejecting calls to submit with UnsupportedOperation exceptions. I've fixed this 
and I've also increased the maximum time we wait for a queued event to execute 
on the tracing executor when releasing the tracing state. By default it is 
still 1 second, and could be zero if required, but the tests can override it 
via a system property and are currently using up to 15 seconds. I've also added 
some trace messages to debug further if the problem persists, and relaunched 
the tests.
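
As a rough illustration of the override mechanism described above (the property name and default below are invented for the example, not necessarily what the patch uses):

{code}
import java.util.concurrent.TimeUnit;

public class TracingWaitTimeout
{
    // Default wait of 1 second for queued tracing events; tests can raise it via a
    // system property. The property name here is invented for the illustration.
    static final long WAIT_MILLIS =
        Long.getLong("cassandra.test.wait_for_tracing_events_timeout_ms", TimeUnit.SECONDS.toMillis(1));

    public static void main(String[] args)
    {
        // e.g. run with -Dcassandra.test.wait_for_tracing_events_timeout_ms=15000
        System.out.println("Waiting up to " + WAIT_MILLIS + " ms for queued tracing events");
    }
}
{code}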

> dtest failure in cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test
> --
>
> Key: CASSANDRA-11465
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11465
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Philip Thompson
>Assignee: Stefania
>  Labels: dtest
>
> Failing on the following assert, on trunk only: 
> {{self.assertEqual(len(errs[0]), 1)}}
> Is not failing consistently.
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1087/testReport/cql_tracing_test/TestCqlTracing/tracing_unknown_impl_test
> Failed on CassCI build trunk_dtest #1087



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12231) Make large mutations easier to find

2016-07-18 Thread Ryan Svihla (JIRA)
Ryan Svihla created CASSANDRA-12231:
---

 Summary: Make large mutations easier to find
 Key: CASSANDRA-12231
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12231
 Project: Cassandra
  Issue Type: Improvement
Reporter: Ryan Svihla
Priority: Minor


Apologies if this has already been submitted; my lacking Jira search-fu will 
be to blame.

There are two problems related to large mutations:
1. Mutations that are really large and fail the write with "Mutation of %s is too 
large for the maximum size of %s" (in CommitLog.java). While this should be 
something the clients can handle, they often don't, and the DBA is the first 
person to notice. If we could log the primary key that was attempted, it would 
help track down the culprit.
2. Mutations that are still very large but under the threshold, which silently 
crush the server. We could add a warning the same way we do with batches, ideally 
with a configurable threshold. It would also be handy if we could include the 
primary key used. Today these are super nasty to track down and involve a scan of 
the dataset using something like Spark.
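
A self-contained sketch of the kind of warning proposed in point 2; the threshold name and log shape are assumptions for illustration, not an existing Cassandra option:

{code}
public class LargeMutationWarnSketch
{
    // hypothetical warn threshold, analogous to the batch size warn threshold
    static final long MUTATION_SIZE_WARN_BYTES = 5 * 1024 * 1024;

    static void maybeWarn(String keyspace, String table, String partitionKey, long serializedSize)
    {
        if (serializedSize > MUTATION_SIZE_WARN_BYTES)
            System.out.printf("WARN: large mutation of %d bytes for %s.%s, partition key %s%n",
                              serializedSize, keyspace, table, partitionKey);
    }

    public static void main(String[] args)
    {
        maybeWarn("ks", "events", "user_42", 8L * 1024 * 1024);  // triggers the warning
        maybeWarn("ks", "events", "user_7", 1024);               // below threshold, silent
    }
}
{code}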




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12230) Provide basic support for Map type in cassandra stress

2016-07-18 Thread Nate McCall (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nate McCall updated CASSANDRA-12230:

Description: 
We have CASSANDRA-8986 in which CASSANDRA-9091 is referenced and (prematurely, 
IMO) closed as a duplicate. The attached patch adds basic map support so 
"remove map from your schema" is not what we tell users when they try to stress 
test their data models.

This is a simple patch to accept the parameter. It has to use the same type for 
the key and value. 

  was:
We have CASSANDRA-8986 in which CASSANDRA-9091 is referenced and (prematurely, 
IMO) closed as a duplicate. The attached patch adds basic map support so 
"remove map from your schema is not what we tell users when they try to stress 
test their data models.

This is a simple patch to accept the parameter. It has to use the same type for 
the key and value. 


> Provide basic support for Map type in cassandra stress
> --
>
> Key: CASSANDRA-12230
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12230
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Nate McCall
> Fix For: 2.2.x
>
> Attachments: stress_map_support_2.2.txt
>
>
> We have CASSANDRA-8986 in which CASSANDRA-9091 is referenced and 
> (prematurely, IMO) closed as a duplicate. The attached patch adds basic map 
> support so "remove map from your schema" is not what we tell users when they 
> try to stress test their data models.
> This is a simple patch to accept the parameter. It has to use the same type 
> for the key and value. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12230) Provide basic support for Map type in cassandra stress

2016-07-18 Thread Nate McCall (JIRA)
Nate McCall created CASSANDRA-12230:
---

 Summary: Provide basic support for Map type in cassandra stress
 Key: CASSANDRA-12230
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12230
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Nate McCall
 Fix For: 2.2.x
 Attachments: stress_map_support_2.2.txt

We have CASSANDRA-8986 in which CASSANDRA-8986 is referenced and (prematurely, 
IMO) closed as a duplicate. The attached patch adds basic map support so 
"remove map from your schema is not what we tell users when they try to stress 
test their data models.

This is a simple patch to accept the parameter. It has to use the same type for 
the key and value. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12230) Provide basic support for Map type in cassandra stress

2016-07-18 Thread Nate McCall (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nate McCall updated CASSANDRA-12230:

Description: 
We have CASSANDRA-8986 in which CASSANDRA-9091 is referenced and (prematurely, 
IMO) closed as a duplicate. The attached patch adds basic map support so 
"remove map from your schema is not what we tell users when they try to stress 
test their data models.

This is a simple patch to accept the parameter. It has to use the same type for 
the key and value. 

  was:
We have CASSANDRA-8986 in which CASSANDRA-8986 is referenced and (prematurely, 
IMO) closed as a duplicate. The attached patch adds basic map support so 
"remove map from your schema is not what we tell users when they try to stress 
test their data models.

This is a simple patch to accept the parameter. It has to use the same type for 
the key and value. 


> Provide basic support for Map type in cassandra stress
> --
>
> Key: CASSANDRA-12230
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12230
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Nate McCall
> Fix For: 2.2.x
>
> Attachments: stress_map_support_2.2.txt
>
>
> We have CASSANDRA-8986 in which CASSANDRA-9091 is referenced and 
> (prematurely, IMO) closed as a duplicate. The attached patch adds basic map 
> support so "remove map from your schema is not what we tell users when they 
> try to stress test their data models.
> This is a simple patch to accept the parameter. It has to use the same type 
> for the key and value. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11850) cannot use cql since upgrading python to 2.7.11+

2016-07-18 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383466#comment-15383466
 ] 

Stefania commented on CASSANDRA-11850:
--

I've rebased the 3.9 and trunk branches again. I noticed a (most likely) 
unrelated 
[failure|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11850-cqlsh-3.9-cqlsh-tests/lastCompletedBuild/cython=yes/testReport/cqlshlib.test.test_cqlsh_completion/TestCqlshCompletion/test_complete_in_delete/]
 yesterday on 3.9. It basically failed to read the prompt even though we can see it 
in the debug messages. I could not reproduce it locally (I ran it 30 times in a 
loop), so I've added an additional debug message to the 3.9+ branches in case 
this failure happens again. 

> cannot use cql since upgrading python to 2.7.11+
> 
>
> Key: CASSANDRA-11850
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11850
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Development
>Reporter: Andrew Madison
>Assignee: Stefania
>  Labels: cqlsh
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> OS: Debian GNU/Linux stretch/sid 
> Kernel: 4.5.0-2-amd64 #1 SMP Debian 4.5.4-1 (2016-05-16) x86_64 GNU/Linux
> Python version: 2.7.11+ (default, May  9 2016, 15:54:33)
> [GCC 5.3.1 20160429]
> cqlsh --version: cqlsh 5.0.1
> cassandra -v: 3.5 (also occurs with 3.0.6)
> Issue:
> when running cqlsh, it returns the following error:
> cqlsh -u dbarpt_usr01
> Password: *
> Connection error: ('Unable to connect to any servers', {'odbasandbox1': 
> TypeError('ref() does not take keyword arguments',)})
> I cleared PYTHONPATH:
> python -c "import json; print dir(json); print json.__version__"
> ['JSONDecoder', 'JSONEncoder', '__all__', '__author__', '__builtins__', 
> '__doc__', '__file__', '__name__', '__package__', '__path__', '__version__', 
> '_default_decoder', '_default_encoder', 'decoder', 'dump', 'dumps', 
> 'encoder', 'load', 'loads', 'scanner']
> 2.0.9
> Java based clients can connect to Cassandra with no issue. Just CQLSH and 
> Python clients cannot.
> nodetool status also works.
> Thank you for your help.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9054) Break DatabaseDescriptor up into multiple classes.

2016-07-18 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383365#comment-15383365
 ] 

Robert Stupp commented on CASSANDRA-9054:
-

After I made DD a "non-active initializer", it felt too heavy to also break 
DD into multiple classes. On the one hand there's DD, on the other there's 
{{Config}} - and the two are closely related. I think a better approach would be to 
move stuff from DD into the appropriate services - but that's an even more 
intrusive patch, and I don't know whether that's worth doing. What we want 
to tackle is that accessing DD shouldn't "magically" initialize everything in a 
more or less "unexpected" order (CASSANDRA-8616, CASSANDRA-9555).

Besides that, I guess the patch needs a proper rebase.
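
To make the "active initializer" concern concrete, here is a generic (non-Cassandra) illustration: merely touching a getter on the first holder drags in heavyweight work through its static initializer, while the second one just holds settings:

{code}
public class InitializerDemo
{
    // "active" config holder: merely referencing it triggers heavyweight work
    static class ActiveConfig
    {
        static final int PORT;
        static
        {
            System.out.println("replaying commit log, opening system tables ...");  // stand-in for heavy init
            PORT = 9042;
        }
    }

    // passive holder: safe for tools to read, no side effects
    static class PassiveConfig
    {
        static final int PORT = 9042;
    }

    public static void main(String[] args)
    {
        System.out.println(PassiveConfig.PORT);  // nothing else happens
        System.out.println(ActiveConfig.PORT);   // heavyweight static block runs first
    }
}
{code}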

> Break DatabaseDescriptor up into multiple classes.
> --
>
> Key: CASSANDRA-9054
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9054
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jeremiah Jordan
>Assignee: Robert Stupp
> Fix For: 3.x
>
>
> Right now to get at Config stuff you go through DatabaseDescriptor.  But when 
> you instantiate DatabaseDescriptor it actually opens system tables and such, 
> which triggers commit log replays, and other things if the right flags aren't 
> set ahead of time.  This makes getting at config stuff from tools annoying, 
> as you have to be very careful about instantiation orders.
> It would be nice if we could break DatabaseDescriptor up into multiple 
> classes, so that getting at config stuff from tools wasn't such a pain.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12181) Include table name in "Cannot get comparator" exception

2016-07-18 Thread sankalp kohli (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sankalp kohli updated CASSANDRA-12181:
--
Attachment: CASSANDRA-12181_3.0-v3.txt

> Include table name in "Cannot get comparator" exception
> ---
>
> Key: CASSANDRA-12181
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12181
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: sankalp kohli
>Priority: Trivial
> Attachments: CASSANDRA-12181-3.0_v2.txt, CASSANDRA-12181_3.0-v3.txt, 
> CASSANDRA-12181_3.0.txt
>
>
> Having table name will help in debugging the following exception. 
> ERROR [MutationStage:xx]  CassandraDaemon.java (line 199) Exception in thread 
> Thread[MutationStage:3788,5,main]
> clusterName=itms8shared20
> java.lang.RuntimeException: Cannot get comparator 2 in 
> org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type).
>  
> This might be due to a mismatch between the schema and the data read



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12181) Include table name in "Cannot get comparator" exception

2016-07-18 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383361#comment-15383361
 ] 

sankalp kohli commented on CASSANDRA-12181:
---

I have attached v3 with the changes.

Here is the full stack trace:

ERROR [MutationStage:]  CassandraDaemon.java (line 199) Exception in 
thread Thread[MutationStage:,main]
java.lang.RuntimeException: Cannot get comparator 1 in 
org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type).
 This might due to a mismatch between the schema and the data read
at 
org.apache.cassandra.db.marshal.CompositeType.getComparator(CompositeType.java:133)
at 
org.apache.cassandra.db.marshal.CompositeType.getComparator(CompositeType.java:140)
at 
org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:59)
at 
org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:36)
at 
edu.stanford.ppl.concurrent.SnapTreeMap$1.compareTo(SnapTreeMap.java:538)
at 
edu.stanford.ppl.concurrent.SnapTreeMap.attemptUpdate(SnapTreeMap.java:1108)
at 
edu.stanford.ppl.concurrent.SnapTreeMap.updateUnderRoot(SnapTreeMap.java:1059)
at edu.stanford.ppl.concurrent.SnapTreeMap.update(SnapTreeMap.java:1023)
at 
edu.stanford.ppl.concurrent.SnapTreeMap.putIfAbsent(SnapTreeMap.java:985)
at 
org.apache.cassandra.db.AtomicSortedColumns$Holder.addColumn(AtomicSortedColumns.java:319)
at 
org.apache.cassandra.db.AtomicSortedColumns.addAllWithSizeDelta(AtomicSortedColumns.java:191)
at org.apache.cassandra.db.Memtable.resolve(Memtable.java:226)
at org.apache.cassandra.db.Memtable.put(Memtable.java:173)
at 
org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:900)
at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:374)
at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:339)
at org.apache.cassandra.db.RowMutation.apply(RowMutation.java:211)
at 
org.apache.cassandra.db.RowMutationVerbHandler.doVerb(RowMutationVerbHandler.java:56)
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:60)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.IndexOutOfBoundsException: index (1) must be less than 
size (1)
at 
com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:306)
at 
com.google.common.base.Preconditions.checkElementIndex(Preconditions.java:285)
at 
com.google.common.collect.SingletonImmutableList.get(SingletonImmutableList.java:45)
at 
org.apache.cassandra.db.marshal.CompositeType.getComparator(CompositeType.java:124)
... 21 more
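
For illustration only (this is not the attached patch), the kind of wrapping that would surface the table name at a level where it is known:

{code}
public class ComparatorErrorContextSketch
{
    static void apply(String keyspace, String table, Runnable compare)
    {
        try
        {
            compare.run();  // stand-in for the comparator call that can throw
        }
        catch (RuntimeException e)
        {
            // re-throw with the table identity attached so the log pinpoints the offender
            throw new RuntimeException(String.format("Error applying mutation to %s.%s", keyspace, table), e);
        }
    }

    public static void main(String[] args)
    {
        apply("ks", "tbl", () -> {
            throw new RuntimeException("Cannot get comparator 1 in CompositeType(UTF8Type)");
        });
    }
}
{code}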


> Include table name in "Cannot get comparator" exception
> ---
>
> Key: CASSANDRA-12181
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12181
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: sankalp kohli
>Priority: Trivial
> Attachments: CASSANDRA-12181-3.0_v2.txt, CASSANDRA-12181_3.0.txt
>
>
> Having table name will help in debugging the following exception. 
> ERROR [MutationStage:xx]  CassandraDaemon.java (line 199) Exception in thread 
> Thread[MutationStage:3788,5,main]
> clusterName=itms8shared20
> java.lang.RuntimeException: Cannot get comparator 2 in 
> org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type).
>  
> This might be due to a mismatch between the schema and the data read



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-4650) RangeStreamer should be smarter when picking endpoints for streaming in case of N >=3 in each DC.

2016-07-18 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383318#comment-15383318
 ] 

sankalp kohli commented on CASSANDRA-4650:
--

Please kick off the tests.

I am not sure which capacity you are talking about. The capacity of edges in the 
graph? If yes, which ones?

> RangeStreamer should be smarter when picking endpoints for streaming in case 
> of N >=3 in each DC.  
> ---
>
> Key: CASSANDRA-4650
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4650
> Project: Cassandra
>  Issue Type: Improvement
>Affects Versions: 1.1.5
>Reporter: sankalp kohli
>Assignee: sankalp kohli
>Priority: Minor
>  Labels: streaming
> Attachments: CASSANDRA-4650_trunk.txt, photo-1.JPG
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> The getRangeFetchMap method in RangeStreamer should pick unique nodes to stream 
> data from when the number of replicas in each DC is three or more. 
> When N>=3 in a DC, there are two options for streaming a range. Consider an 
> example of 4 nodes in one datacenter and a replication factor of 3. 
> If a node goes down, it needs to recover 3 ranges of data. With the current code, 
> only two nodes could get selected, as it orders the nodes by proximity. 
> Ideally we want to select 3 nodes for streaming the data. We can do this 
> by selecting unique nodes for each range.  
> Advantages:
> This will increase the performance of bootstrapping a node and will also put 
> less pressure on the nodes serving the data. 
> Note: This does not matter when N < 3 in each DC, as then data is streamed from 
> only 2 nodes. 
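
As a rough, purely illustrative sketch of "selecting unique nodes for each range" (a naive greedy pass for illustration; the actual patch may take a different approach, e.g. a flow-based one as the edge-capacity discussion above suggests):

{code}
import java.util.*;

public class UniqueSourceSketch
{
    // pick a distinct source per range where possible, instead of always the closest replica
    static Map<String, String> pickSources(Map<String, List<String>> replicasByRange)
    {
        Map<String, String> chosen = new LinkedHashMap<>();
        Set<String> used = new HashSet<>();
        for (Map.Entry<String, List<String>> e : replicasByRange.entrySet())
        {
            List<String> replicas = e.getValue();   // assumed ordered by proximity
            String pick = replicas.get(0);          // fall back to the closest replica
            for (String candidate : replicas)
                if (used.add(candidate)) { pick = candidate; break; }
            chosen.put(e.getKey(), pick);
        }
        return chosen;
    }

    public static void main(String[] args)
    {
        Map<String, List<String>> replicas = new LinkedHashMap<>();
        replicas.put("range1", Arrays.asList("nodeA", "nodeB", "nodeC"));
        replicas.put("range2", Arrays.asList("nodeA", "nodeC", "nodeD"));
        replicas.put("range3", Arrays.asList("nodeA", "nodeB", "nodeD"));
        System.out.println(pickSources(replicas));  // {range1=nodeA, range2=nodeC, range3=nodeB}
    }
}
{code}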



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12179) Make DynamicEndpointSnitch dynamic_snitch_update_interval_in_ms a JMX Prop

2016-07-18 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383310#comment-15383310
 ] 

sankalp kohli commented on CASSANDRA-12179:
---

Attaching v3 using updateSnitch. I also fixed issues in the updateSnitch 
method.
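
A bare-bones sketch of the shape of such a JMX-controllable interval (class and method names are illustrative, not taken from the attached patch): the setter cancels the periodic score-update task and reschedules it with the new period, so the change takes effect without a bounce.

{code}
import java.util.concurrent.*;

public class DynamicSnitchIntervalSketch
{
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private volatile ScheduledFuture<?> updateTask;

    public DynamicSnitchIntervalSketch(long intervalMs)
    {
        reschedule(intervalMs);
    }

    // what a JMX-exposed setter could do: cancel the periodic update and reschedule it
    public synchronized void setUpdateInterval(long intervalMs)
    {
        updateTask.cancel(false);
        reschedule(intervalMs);
    }

    private void reschedule(long intervalMs)
    {
        updateTask = scheduler.scheduleAtFixedRate(
            () -> System.out.println("updating snitch scores"), intervalMs, intervalMs, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException
    {
        DynamicSnitchIntervalSketch snitch = new DynamicSnitchIntervalSketch(100);
        Thread.sleep(350);
        snitch.setUpdateInterval(1000);  // takes effect without a restart ("bounce")
        Thread.sleep(1100);
        snitch.scheduler.shutdown();
    }
}
{code}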

> Make DynamicEndpointSnitch dynamic_snitch_update_interval_in_ms a JMX Prop 
> ---
>
> Key: CASSANDRA-12179
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12179
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: sankalp kohli
>Priority: Trivial
> Fix For: 3.0.x
>
> Attachments: CASSANDRA-12179-3.0_v2.txt, CASSANDRA-12179_3.0.txt, 
> CASSANDRA-12179_3.0_v3.txt
>
>
> Need to expose dynamic_snitch_update_interval_in_ms so that it does not 
> require a bounce. This is useful for large clusters where we can change this 
> value and see the impact. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12179) Make DynamicEndpointSnitch dynamic_snitch_update_interval_in_ms a JMX Prop

2016-07-18 Thread sankalp kohli (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sankalp kohli updated CASSANDRA-12179:
--
Attachment: CASSANDRA-12179_3.0_v3.txt

> Make DynamicEndpointSnitch dynamic_snitch_update_interval_in_ms a JMX Prop 
> ---
>
> Key: CASSANDRA-12179
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12179
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: sankalp kohli
>Priority: Trivial
> Fix For: 3.0.x
>
> Attachments: CASSANDRA-12179-3.0_v2.txt, CASSANDRA-12179_3.0.txt, 
> CASSANDRA-12179_3.0_v3.txt
>
>
> Need to expose dynamic_snitch_update_interval_in_ms so that it does not 
> require a bounce. This is useful for large clusters where we can change this 
> value and see the impact. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12215) NullPointerException during Compaction

2016-07-18 Thread Hau Phan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383195#comment-15383195
 ] 

Hau Phan commented on CASSANDRA-12215:
--

{code}
hau@cqlsh:adsfadfafd> select * from friendships_by_owner ;

 owner_id                             | friend_id                            | username | created_at           | friend_status | friend_username | status   | temp_friend
--------------------------------------+--------------------------------------+----------+----------------------+---------------+-----------------+----------+-------------
 0533a340-4d36-11e6-8fb9-53ec422f025f | 10c58a5c-4d36-11e6-bb31-615a3fd451c6 |   nothau | 2016-07-18 22:23:22+ |      accepted |          haunot | accepted |        null
 0533a340-4d36-11e6-8fb9-53ec422f025f | ec08f3de-4d35-11e6-ada9-39e33de03af2 |   nothau | 2016-07-18 22:23:09+ |      accepted |             hau | accepted |        null
 ec08f3de-4d35-11e6-ada9-39e33de03af2 | 10c58a5c-4d36-11e6-bb31-615a3fd451c6 |      hau | 2016-07-18 22:23:12+ |      accepted |          haunot | accepted |        null
 ec08f3de-4d35-11e6-ada9-39e33de03af2 | 0533a340-4d36-11e6-8fb9-53ec422f025f |      hau | 2016-07-18 22:23:09+ |      accepted |          nothau | accepted |        null
 10c58a5c-4d36-11e6-bb31-615a3fd451c6 | 0533a340-4d36-11e6-8fb9-53ec422f025f |   haunot | 2016-07-18 22:23:22+ |      accepted |          nothau | accepted |        null
 10c58a5c-4d36-11e6-bb31-615a3fd451c6 | ec08f3de-4d35-11e6-ada9-39e33de03af2 |   haunot | 2016-07-18 22:23:12+ |      accepted |             hau | accepted |        null

(6 rows)

hau@cqlsh:adsfadfafd> DELETE FROM friendships_by_owner WHERE owner_id = 0533a340-4d36-11e6-8fb9-53ec422f025f AND friend_id = ec08f3de-4d35-11e6-ada9-39e33de03af2;
hau@cqlsh:adsfadfafd> DELETE FROM friendships_by_owner WHERE owner_id = ec08f3de-4d35-11e6-ada9-39e33de03af2 AND friend_id = 0533a340-4d36-11e6-8fb9-53ec422f025f;

hau@cqlsh:adsfadfafd> select * from friendships_by_owner ;

 owner_id                             | friend_id                            | username | created_at           | friend_status | friend_username | status   | temp_friend
--------------------------------------+--------------------------------------+----------+----------------------+---------------+-----------------+----------+-------------
 0533a340-4d36-11e6-8fb9-53ec422f025f | 10c58a5c-4d36-11e6-bb31-615a3fd451c6 |   nothau | 2016-07-18 22:23:22+ |      accepted |          haunot | accepted |        null
 ec08f3de-4d35-11e6-ada9-39e33de03af2 |                                 null |      hau |                 null |          null |            null |     null |        null
 10c58a5c-4d36-11e6-bb31-615a3fd451c6 | 0533a340-4d36-11e6-8fb9-53ec422f025f |   haunot | 2016-07-18 22:23:22+ |      accepted |          nothau | accepted |        null

(3 rows)
{code}

> NullPointerException during Compaction
> --
>
> Key: CASSANDRA-12215
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12215
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Cassandra 3.0.8, cqlsh 5.0.1
>Reporter: Hau Phan
> Fix For: 3.0.x
>
>
> Running 3.0.8 on a single standalone node with cqlsh 5.0.1, the keyspace RF = 
> 1 and class SimpleStrategy.  
> Attempting to run a 'select * from ' and receiving this error:
> ReadFailure: code=1300 [Replica(s) failed to execute read] message="Operation 
> failed - received 0 responses and 1 failures" info={'failures': 1, 
> 'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
> Cassandra system.log prints this:
> {code}
> ERROR [CompactionExecutor:5] 2016-07-15 13:42:13,219 CassandraDaemon.java:201 
> - Exception in thread Thread[CompactionExecutor:5,1,main]
> java.lang.NullPointerException: null
>   at 
> org.apache.cassandra.db.transform.UnfilteredRows.isEmpty(UnfilteredRows.java:58)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:64)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:24)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:96)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:226)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:177)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[a

[jira] [Commented] (CASSANDRA-12222) Per-node overrides for table settings

2016-07-18 Thread Nate McCall (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383153#comment-15383153
 ] 

Nate McCall commented on CASSANDRA-12222:
-

bq. I think this can be potentially useful for almost all table settings, but 
we don't expose JMX methods for all settings, and it would be annoying to have 
to.

Agreed. I really like the idea of having 'node_overrides' persisted in the 
schema. Given the syntax and statement above, to alter 2 nodes to LCS, are we 
talking about something like:
{noformat}
ALTER TABLE foo 
  WITH node_overrides = { 
  '192.168.0.1' : { 'compaction' : { 'class' : 'LeveledCompactionStrategy' } } 
, 
  '192.168.0.2' : { 'compaction' : { 'class' : 'LeveledCompactionStrategy' } } 
}
{noformat}

If so, even though it's on the verbose side, I like the idea of being quite 
explicit when "snowflaking." IME, we never test things like 
{{setCompactionParameters()}} with more than a very small number of nodes 
anyway. 
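
For what it's worth, a toy sketch of how a node could resolve its effective options from such a {{node_overrides}} map (purely illustrative; names and shapes are not from any patch):

{code}
import java.util.*;

public class NodeOverrideSketch
{
    // effective options = table defaults, with this node's overrides (if any) layered on top
    static Map<String, String> effectiveOptions(Map<String, String> defaults,
                                                Map<String, Map<String, String>> nodeOverrides,
                                                String broadcastAddress)
    {
        Map<String, String> merged = new LinkedHashMap<>(defaults);
        Map<String, String> mine = nodeOverrides.get(broadcastAddress);
        if (mine != null)
            merged.putAll(mine);
        return merged;
    }

    public static void main(String[] args)
    {
        Map<String, String> defaults = new LinkedHashMap<>();
        defaults.put("class", "SizeTieredCompactionStrategy");

        Map<String, Map<String, String>> overrides = new LinkedHashMap<>();
        overrides.put("192.168.0.1", Collections.singletonMap("class", "LeveledCompactionStrategy"));
        overrides.put("192.168.0.2", Collections.singletonMap("class", "LeveledCompactionStrategy"));

        System.out.println(effectiveOptions(defaults, overrides, "192.168.0.1"));  // LCS on this node
        System.out.println(effectiveOptions(defaults, overrides, "192.168.0.9"));  // defaults elsewhere
    }
}
{code}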

> Per-node overrides for table settings
> -
>
> Key: CASSANDRA-12222
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12222
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Sylvain Lebresne
>Priority: Minor
>
> There are a few cases where it's convenient to set some table parameters on 
> only one or a few nodes. For instance, it's useful for experimenting with 
> settings like caching options, compaction, compression, read repair chance, 
> gcGrace ... Another case is when you want to completely migrate to a new 
> setting, but want to do that node-per-node (mainly useful when switching 
> compaction strategy, see CASSANDRA-10898).
> I'll note that we can already do some of this through JMX for some of the 
> settings as we have methods like 
> {{ColumnFamilyStoreMBean.setCompactionParameters()}}, but:
> # parameters settings are initially set in CQL. Having to go to JMX for this 
> sounds less consistent to me. The fact we have both a 
> {{ColumnFamilyStoreMBean.setCompactionParameters()}} and a 
> {{ColumnFamilyStoreMBean.setCompactionParametersJson()}} (as I assume the 
> former one is inconvenient to use) is also proof to me that JMX ain't 
> terribly appropriate.
> # I think this can be potentially useful for almost all table settings, but 
> we don't expose JMX methods for all settings, and it would be annoying to 
> have to. The method suggested below wouldn't have to be updated every time we 
> add a new setting (if done right).
> # Changing options through JMX is not persistent across restarts. This may 
> arguably be fine in some cases, but if you're trying to migrate your 
> compaction strategy node per node, or want to experiment with a setting over 
> a mediumish time period, it's mostly a pain.
> So what I suggest would be add node overrides in the normal table setting 
> (which would be part of the schema as any other setting). In other words, if 
> you want to set LCS for only one specific node, you'd do:
> {noformat}
> ALTER TABLE foo WITH node_overrides = { '192.168.0.1' : { 'compaction' : { 
> 'class' : 'LeveledCompactionStrategy' } } }
> {noformat}
> I'll note that I already suggested that idea on CASSANDRA-10898, but as it's 
> more generic than what that latter ticket is about, I'm creating its own 
> ticket.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12229) Move streaming to non-blocking IO and netty (streaming 2.1)

2016-07-18 Thread Jason Brown (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-12229:

Description: 
As followup work to CASSANDRA-8457, we need to move streaming to use netty.

Streaming 2.0 (CASSANDRA-5286) brought many good improvements to how files are 
transferred between nodes in a cluster. However, the low-level details of the 
current streaming implementation do not line up nicely with a non-blocking 
model, so I think this is a good time to review some of those details and add 
in additional goodness. The current implementation assumes a sequential or 
"single threaded" approach to the sending of stream messages as well as the 
transfer of files. In short, after several iterative prototypes, I propose the 
following:

1) use a single bi-directional connection (instead of requiring two sockets 
& two threads)
2) send the "non-file" {{StreamMessage}}s (basically anything not 
{{OutboundFileMessage}}) via the normal internode messaging. This will require 
a slight bit more management of the session (the ability to look up a 
{{StreamSession}} from a static function on {{StreamManager}}), but we have 
most of the pieces we need for this already.
3) switch to a non-blocking IO model (facilitated via netty)
4) Allow files to be streamed in parallel (CASSANDRA-4663) - this should just 
be a thing already
5) If the entire sstable is to be streamed, in addition to the DATA component, 
transfer all the components of the sstable (primary index, bloom filter, stats, 
and so on). This way we can avoid the CPU and GC pressure from deserializing 
the stream into objects. File streaming then amounts to a block-level transfer.

Note: The progress/results of CASSANDRA-11303 will need to be reflected here, 
as well.
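
A toy sketch of the session lookup mentioned in point 2: a static registry on a StreamManager-like class so that messages arriving over the regular internode connections can be routed to the right session (all identifiers here are invented for the example):

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class StreamSessionLookupSketch
{
    static class StreamSession
    {
        final String peer;
        final int index;
        StreamSession(String peer, int index) { this.peer = peer; this.index = index; }
        void handleMessage(String msg) { System.out.println("session " + peer + "#" + index + " got: " + msg); }
    }

    // static registry so a message handler can find the session a message belongs to
    static final Map<String, StreamSession> SESSIONS = new ConcurrentHashMap<>();

    static StreamSession register(String peer, int index)
    {
        return SESSIONS.computeIfAbsent(peer + "#" + index, k -> new StreamSession(peer, index));
    }

    static StreamSession lookup(String peer, int index)
    {
        return SESSIONS.get(peer + "#" + index);
    }

    public static void main(String[] args)
    {
        register("10.0.0.2", 0);
        // a "non-file" stream message arriving over normal internode messaging:
        lookup("10.0.0.2", 0).handleMessage("PREPARE");
    }
}
{code}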

  was:
As followup work to CASSANDRA-8457, we need to move streaming to use netty.

Streaming 2.0 (CASSANDRA-5286) brought many good improvements to how files are 
transferred between nodes in a cluster. However, the low-level details of the 
current streaming implementation does not line up nicely with a non-blocking 
model, so I think this is a good time to review some of those details and add 
in additional goodness. The current implementation assumes a sequential or 
"single threaded" approach to the sending of stream messages as well as the 
transfer of files. In short, after several iterative prototypes, I propose the 
following:

1) use a single bi-directional connection (instead of requiring two sockets 
& two threads)
2) send the "non-file" {{StreamMessage}}s (basically anything not 
{{OutboundFileMessage}}) via the normal internode messaging. This will require 
a bit more management of the session (the ability to look up a 
{{StreamSession}} from a static function on {{StreamManager}}), but we have 
most of the pieces we need for this already.
3) Allow files to be streamed in parallel (CASSANDRA-4663) - this should just 
be a thing already
4) If the entire sstable is to be streamed, in addition to the DATA component, 
transfer all the components of the sstable (primary index, bloom filter, stats, 
and so on). This way we can avoid the CPU and GC pressure from deserializing 
the stream into objects. File streaming then amounts to a block-level transfer.

Note: The progress/results of CASSANDRA-11303 will need to be reflected here, 
as well.


> Move streaming to non-blocking IO and netty (streaming 2.1)
> ---
>
> Key: CASSANDRA-12229
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12229
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Jason Brown
>Assignee: Jason Brown
> Fix For: 4.0
>
>
> As followup work to CASSANDRA-8457, we need to move streaming to use netty.
> Streaming 2.0 (CASSANDRA-5286) brought many good improvements to how files 
> are transferred between nodes in a cluster. However, the low-level details of 
> the current streaming implementation do not line up nicely with a 
> non-blocking model, so I think this is a good time to review some of those 
> details and add in additional goodness. The current implementation assumes a 
> sequential or "single threaded" approach to the sending of stream messages as 
> well as the transfer of files. In short, after several iterative prototypes, 
> I propose the following:
> 1) use a single bi-directional connection (instead of requiring two 
> sockets & two threads)
> 2) send the "non-file" {{StreamMessage}}s (basically anything not 
> {{OutboundFileMessage}}) via the normal internode messaging. This will 
> require a bit more management of the session (the ability to look up a 
> {{StreamSession}} from a static function on {{StreamManager}}), but we have 
> most of the pieces we n

[jira] [Commented] (CASSANDRA-12127) Queries with empty ByteBuffer values in clustering column restrictions fail for non-composite compact tables

2016-07-18 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383103#comment-15383103
 ] 

Jason Brown commented on CASSANDRA-12127:
-

[~blerer] or [~thobbs] any updates?

> Queries with empty ByteBuffer values in clustering column restrictions fail 
> for non-composite compact tables
> 
>
> Key: CASSANDRA-12127
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12127
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: 12127.txt
>
>
> For the following table:
> {code}
> CREATE TABLE myTable (pk int,
>   c blob,
>   value int,
>   PRIMARY KEY (pk, c)) WITH COMPACT STORAGE;
> INSERT INTO myTable (pk, c, value) VALUES (1, textAsBlob('1'), 1);
> INSERT INTO myTable (pk, c, value) VALUES (1, textAsBlob('2'), 2);
> {code}
> The query: {{SELECT * FROM myTable WHERE pk = 1 AND c > textAsBlob('');}}
> Will result in the following Exception:
> {code}
> java.lang.ClassCastException: 
> org.apache.cassandra.db.composites.Composites$EmptyComposite cannot be cast 
> to org.apache.cassandra.db.composites.CellName
>   at 
> org.apache.cassandra.db.composites.AbstractCellNameType.cellFromByteBuffer(AbstractCellNameType.java:188)
>   at 
> org.apache.cassandra.db.composites.AbstractSimpleCellNameType.makeCellName(AbstractSimpleCellNameType.java:125)
>   at 
> org.apache.cassandra.db.composites.AbstractCellNameType.makeCellName(AbstractCellNameType.java:254)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.makeExclusiveSliceBound(SelectStatement.java:1206)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.applySliceRestriction(SelectStatement.java:1214)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.processColumnFamily(SelectStatement.java:1292)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:1259)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:299)
>   [...]
> {code}
> The query: {{SELECT * FROM myTable WHERE pk = 1 AND c < textAsBlob('');}}
> Will return 2 rows instead of 0.
> The query: {{SELECT * FROM myTable WHERE pk = 1 AND c = textAsBlob('');}}
> {code}
> java.lang.AssertionError
>   at 
> org.apache.cassandra.db.composites.SimpleDenseCellNameType.create(SimpleDenseCellNameType.java:60)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.addSelectedColumns(SelectStatement.java:853)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.getRequestedColumns(SelectStatement.java:846)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.makeFilter(SelectStatement.java:583)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.getSliceCommands(SelectStatement.java:383)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.getPageableCommand(SelectStatement.java:253)
>   [...]
> {code}
> I checked 2.0 and {{SELECT * FROM myTable WHERE pk = 1 AND c > 
> textAsBlob('');}} works properly, but {{SELECT * FROM myTable WHERE pk = 1 AND 
> c < textAsBlob('');}} returns the same wrong results as in 2.1.
> The {{SELECT * FROM myTable WHERE pk = 1 AND c = textAsBlob('');}} query is 
> rejected with a clear error message: {{Invalid empty value for clustering 
> column of COMPACT TABLE}}.
> As it is not possible to insert an empty ByteBuffer value within the 
> clustering column of a non-composite compact table, those queries do not 
> have much meaning. {{SELECT * FROM myTable WHERE pk = 1 AND c < 
> textAsBlob('');}} and {{SELECT * FROM myTable WHERE pk = 1 AND c = 
> textAsBlob('');}} will return nothing, 
> and {{SELECT * FROM myTable WHERE pk = 1 AND c > textAsBlob('');}} will 
> return the entire partition (pk = 1).
> In my opinion those queries should probably all be rejected, as it seems that 
> {{SELECT * FROM myTable WHERE pk = 1 AND c > textAsBlob('');}} being accepted 
> in {{2.0}} was due to a bug.
> I am of course open to discussion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12229) Move streaming to non-blocking IO and netty (streaming 2.1)

2016-07-18 Thread Jason Brown (JIRA)
Jason Brown created CASSANDRA-12229:
---

 Summary: Move streaming to non-blocking IO and netty (streaming 
2.1)
 Key: CASSANDRA-12229
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12229
 Project: Cassandra
  Issue Type: Improvement
  Components: Streaming and Messaging
Reporter: Jason Brown
Assignee: Jason Brown
 Fix For: 4.0


As followup work to CASSANDRA-8457, we need to move streaming to use netty.

Streaming 2.0 (CASSANDRA-5286) brought many good improvements to how files are 
transferred between nodes in a cluster. However, the low-level details of the 
current streaming implementation do not line up nicely with a non-blocking 
model, so I think this is a good time to review some of those details and add 
in additional goodness. The current implementation assumes a sequential or 
"single threaded" approach to the sending of stream messages as well as the 
transfer of files. In short, after several iterative prototypes, I propose the 
following:

1) use a single bi-directional connection (instead of requiring two sockets 
& two threads)
2) send the "non-file" {{StreamMessage}}s (basically anything not 
{{OutboundFileMessage}}) via the normal internode messaging. This will require 
a bit more management of the session (the ability to look up a 
{{StreamSession}} from a static function on {{StreamManager}}), but we have 
most of the pieces we need for this already.
3) Allow files to be streamed in parallel (CASSANDRA-4663) - this should just 
be a thing already
4) If the entire sstable is to be streamed, in addition to the DATA component, 
transfer all the components of the sstable (primary index, bloom filter, stats, 
and so on). This way we can avoid the CPU and GC pressure from deserializing 
the stream into objects. File streaming then amounts to a block-level transfer.

Note: The progress/results of CASSANDRA-11303 will need to be reflected here, 
as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-2848) Make the Client API support passing down timeouts

2016-07-18 Thread Geoffrey Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Yu updated CASSANDRA-2848:
---
Status: Patch Available  (was: Open)

I’ve attached a patch implementing this, and would love some feedback!

At a high level, the approach I took was to use the last flag available in the 
protocol to indicate whether the client supplied a timeout (as a {{long}}, in 
milliseconds). Cassandra will then use the minimum of the client-specified 
timeout and the configured RPC timeout. The rest of the changes were 
essentially for passing the client-supplied timeout down to where it’s actually 
needed. I also bumped the messaging service version to allow for passing the 
timeout to the replica nodes as part of serialization/deserialization for 
{{ReadCommand}} and {{Mutation}}.
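
For illustration, a minimal sketch of the timeout selection described above (hypothetical names, not the attached patch): the client-supplied value is honoured only up to the configured RPC timeout.

{code}
public final class TimeoutSelection
{
    private TimeoutSelection() {}

    /**
     * Returns the timeout to enforce for a request: the configured RPC timeout
     * when the client supplied none, otherwise the smaller of the two values.
     */
    public static long effectiveTimeoutMillis(Long clientTimeoutMillis, long configuredRpcTimeoutMillis)
    {
        if (clientTimeoutMillis == null || clientTimeoutMillis <= 0)
            return configuredRpcTimeoutMillis;
        return Math.min(clientTimeoutMillis, configuredRpcTimeoutMillis);
    }

    public static void main(String[] args)
    {
        // e.g. a front-end with a 10ms budget against a 2000ms server default
        System.out.println(effectiveTimeoutMillis(10L, 2000L));   // 10
        System.out.println(effectiveTimeoutMillis(null, 2000L));  // 2000
    }
}
{code}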

> Make the Client API support passing down timeouts
> -
>
> Key: CASSANDRA-2848
> URL: https://issues.apache.org/jira/browse/CASSANDRA-2848
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Chris Goffinet
>Assignee: Geoffrey Yu
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 2848-trunk.txt
>
>
> Having a max server RPC timeout is good for worst case, but many applications 
> that have middleware in front of Cassandra, might have higher timeout 
> requirements. In a fail fast environment, if my application starting at say 
> the front-end, only has 20ms to process a request, and it must connect to X 
> services down the stack, by the time it hits Cassandra, we might only have 
> 10ms. I propose we provide the ability to specify the timeout on each call we 
> do optionally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-2848) Make the Client API support passing down timeouts

2016-07-18 Thread Geoffrey Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Yu updated CASSANDRA-2848:
---
Attachment: 2848-trunk.txt

> Make the Client API support passing down timeouts
> -
>
> Key: CASSANDRA-2848
> URL: https://issues.apache.org/jira/browse/CASSANDRA-2848
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Chris Goffinet
>Assignee: Geoffrey Yu
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 2848-trunk.txt
>
>
> Having a max server RPC timeout is good for worst case, but many applications 
> that have middleware in front of Cassandra, might have higher timeout 
> requirements. In a fail fast environment, if my application starting at say 
> the front-end, only has 20ms to process a request, and it must connect to X 
> services down the stack, by the time it hits Cassandra, we might only have 
> 10ms. I propose we provide the ability to specify the timeout on each call we 
> do optionally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12228) Write performance regression in 3.x vs 3.0

2016-07-18 Thread T Jake Luciani (JIRA)
T Jake Luciani created CASSANDRA-12228:
--

 Summary: Write performance regression in 3.x vs 3.0
 Key: CASSANDRA-12228
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12228
 Project: Cassandra
  Issue Type: Bug
Reporter: T Jake Luciani
Assignee: Marcus Eriksson
Priority: Minor
 Fix For: 3.9


I've been tracking down a performance issue in trunk vs the cassandra-3.0 branch.

I think I've found it. CASSANDRA-6696 changed the default number of memtable 
flushers to 1, vs the minimum of 2 in cassandra-3.0.

I don't see any technical reason for this, and we should add back the minimum 
of 2 sstable flushers per disk.
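
For illustration, a trivial sketch of the floor being argued for (the helper name is hypothetical; the real default is computed in the server's config code): never fewer than 2 flushers per disk.

{code}
public final class FlushWriterDefaults
{
    private FlushWriterDefaults() {}

    /** Hypothetical helper: apply a floor of 2 to the per-disk flusher count. */
    public static int flushersPerDisk(int configured)
    {
        return Math.max(2, configured);
    }
}
{code}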



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12215) NullPointerException during Compaction

2016-07-18 Thread Hau Phan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382994#comment-15382994
 ] 

Hau Phan commented on CASSANDRA-12215:
--

One thing to note: this occurs when attempting to delete a row; the owner_id 
and username values still exist.

{code}
delete from friendships_by_owner where owner_id =  and friend_id = 
; 
{code} 



> NullPointerException during Compaction
> --
>
> Key: CASSANDRA-12215
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12215
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Cassandra 3.0.8, cqlsh 5.0.1
>Reporter: Hau Phan
> Fix For: 3.0.x
>
>
> Running 3.0.8 on a single standalone node with cqlsh 5.0.1, the keyspace RF = 
> 1 and class SimpleStrategy.  
> Attempting to run a 'select * from ' and receiving this error:
> ReadFailure: code=1300 [Replica(s) failed to execute read] message="Operation 
> failed - received 0 responses and 1 failures" info={'failures': 1, 
> 'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
> Cassandra system.log prints this:
> {code}
> ERROR [CompactionExecutor:5] 2016-07-15 13:42:13,219 CassandraDaemon.java:201 
> - Exception in thread Thread[CompactionExecutor:5,1,main]
> java.lang.NullPointerException: null
>   at 
> org.apache.cassandra.db.transform.UnfilteredRows.isEmpty(UnfilteredRows.java:58)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:64)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:24)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:96)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:226)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:177)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:263)
>  ~[apache-cassandra-3.0.8.jar:3.0.8]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_65]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_65]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_65]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_65]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_65]
> {code}
> Running sstabledump -d shows a few rows with the column value of 
> "", telling me compaction doesn't seem to be working correctly.  
> # nodetool compactionstats 
> pending tasks: 1
> attempting to run a compaction gets:
> # nodetool compact  
> error: null
> -- StackTrace --
> java.lang.NullPointerException
>   at 
> org.apache.cassandra.db.transform.UnfilteredRows.isEmpty(UnfilteredRows.java:58)
>   at 
> org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:64)
>   at 
> org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:24)
>   at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:96)
>   at 
> org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:226)
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:177)
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78)
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$8.runMayThrow(CompactionManager.java:606)
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.

[jira] [Commented] (CASSANDRA-11363) High Blocked NTR When Connecting

2016-07-18 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382976#comment-15382976
 ] 

T Jake Luciani commented on CASSANDRA-11363:


[~rha] would you be able to try the attached patch {{thread-queue-2.1.txt}} and 
see if that helps?

> High Blocked NTR When Connecting
> 
>
> Key: CASSANDRA-11363
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11363
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Russell Bradberry
>Assignee: Paulo Motta
> Attachments: cassandra-102-cms.stack, cassandra-102-g1gc.stack, 
> thread-queue-2.1.txt
>
>
> When upgrading from 2.1.9 to 2.1.13, we are witnessing an issue where the 
> machine load increases to very high levels (> 120 on an 8 core machine) and 
> native transport requests get blocked in tpstats.
> I was able to reproduce this in both CMS and G1GC as well as on JVM 7 and 8.
> The issue does not seem to affect the nodes running 2.1.9.
> The issue seems to coincide with the number of connections OR the number of 
> total requests being processed at a given time (as the latter increases with 
> the former in our system).
> Currently there are between 600 and 800 client connections on each machine and 
> each machine is handling roughly 2000-3000 client requests per second.
> Disabling the binary protocol fixes the issue for this node but isn't a 
> viable option cluster-wide.
> Here is the output from tpstats:
> {code}
> Pool NameActive   Pending  Completed   Blocked  All 
> time blocked
> MutationStage 0 88387821 0
>  0
> ReadStage 0 0 355860 0
>  0
> RequestResponseStage  0 72532457 0
>  0
> ReadRepairStage   0 0150 0
>  0
> CounterMutationStage 32   104 897560 0
>  0
> MiscStage 0 0  0 0
>  0
> HintedHandoff 0 0 65 0
>  0
> GossipStage   0 0   2338 0
>  0
> CacheCleanupExecutor  0 0  0 0
>  0
> InternalResponseStage 0 0  0 0
>  0
> CommitLogArchiver 0 0  0 0
>  0
> CompactionExecutor2   190474 0
>  0
> ValidationExecutor0 0  0 0
>  0
> MigrationStage0 0 10 0
>  0
> AntiEntropyStage  0 0  0 0
>  0
> PendingRangeCalculator0 0310 0
>  0
> Sampler   0 0  0 0
>  0
> MemtableFlushWriter   110 94 0
>  0
> MemtablePostFlush 134257 0
>  0
> MemtableReclaimMemory 0 0 94 0
>  0
> Native-Transport-Requests   128   156 38795716
> 278451
> Message type   Dropped
> READ 0
> RANGE_SLICE  0
> _TRACE   0
> MUTATION 0
> COUNTER_MUTATION 0
> BINARY   0
> REQUEST_RESPONSE 0
> PAGED_RANGE  0
> READ_REPAIR  0
> {code}
> Attached is the jstack output for both CMS and G1GC.
> Flight recordings are here:
> https://s3.amazonaws.com/simple-logs/cassandra-102-cms.jfr
> https://s3.amazonaws.com/simple-logs/cassandra-102-g1gc.jfr
> It is interesting to note that while the flight recording was taking place, 
> the load on the machine went back to healthy, and when the flight recording 
> finished the load went back to > 100.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11363) High Blocked NTR When Connecting

2016-07-18 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-11363:
---
Attachment: thread-queue-2.1.txt

> High Blocked NTR When Connecting
> 
>
> Key: CASSANDRA-11363
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11363
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Russell Bradberry
>Assignee: Paulo Motta
> Attachments: cassandra-102-cms.stack, cassandra-102-g1gc.stack, 
> thread-queue-2.1.txt
>
>
> When upgrading from 2.1.9 to 2.1.13, we are witnessing an issue where the 
> machine load increases to very high levels (> 120 on an 8 core machine) and 
> native transport requests get blocked in tpstats.
> I was able to reproduce this in both CMS and G1GC as well as on JVM 7 and 8.
> The issue does not seem to affect the nodes running 2.1.9.
> The issue seems to coincide with the number of connections OR the number of 
> total requests being processed at a given time (as the latter increases with 
> the former in our system).
> Currently there are between 600 and 800 client connections on each machine and 
> each machine is handling roughly 2000-3000 client requests per second.
> Disabling the binary protocol fixes the issue for this node but isn't a 
> viable option cluster-wide.
> Here is the output from tpstats:
> {code}
> Pool NameActive   Pending  Completed   Blocked  All 
> time blocked
> MutationStage 0 88387821 0
>  0
> ReadStage 0 0 355860 0
>  0
> RequestResponseStage  0 72532457 0
>  0
> ReadRepairStage   0 0150 0
>  0
> CounterMutationStage 32   104 897560 0
>  0
> MiscStage 0 0  0 0
>  0
> HintedHandoff 0 0 65 0
>  0
> GossipStage   0 0   2338 0
>  0
> CacheCleanupExecutor  0 0  0 0
>  0
> InternalResponseStage 0 0  0 0
>  0
> CommitLogArchiver 0 0  0 0
>  0
> CompactionExecutor2   190474 0
>  0
> ValidationExecutor0 0  0 0
>  0
> MigrationStage0 0 10 0
>  0
> AntiEntropyStage  0 0  0 0
>  0
> PendingRangeCalculator0 0310 0
>  0
> Sampler   0 0  0 0
>  0
> MemtableFlushWriter   110 94 0
>  0
> MemtablePostFlush 134257 0
>  0
> MemtableReclaimMemory 0 0 94 0
>  0
> Native-Transport-Requests   128   156 38795716
> 278451
> Message type   Dropped
> READ 0
> RANGE_SLICE  0
> _TRACE   0
> MUTATION 0
> COUNTER_MUTATION 0
> BINARY   0
> REQUEST_RESPONSE 0
> PAGED_RANGE  0
> READ_REPAIR  0
> {code}
> Attached is the jstack output for both CMS and G1GC.
> Flight recordings are here:
> https://s3.amazonaws.com/simple-logs/cassandra-102-cms.jfr
> https://s3.amazonaws.com/simple-logs/cassandra-102-g1gc.jfr
> It is interesting to note that while the flight recording was taking place, 
> the load on the machine went back to healthy, and when the flight recording 
> finished the load went back to > 100.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9054) Break DatabaseDescriptor up into multiple classes.

2016-07-18 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382961#comment-15382961
 ] 

Blake Eggleston commented on CASSANDRA-9054:


Ok, I don't have any compelling (non-aesthetic) reasons to break DD up as part 
of this ticket; it just seems to come up from time to time, so I thought I'd 
throw it out there :). I should have some feedback on this today or tomorrow.

> Break DatabaseDescriptor up into multiple classes.
> --
>
> Key: CASSANDRA-9054
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9054
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jeremiah Jordan
>Assignee: Robert Stupp
> Fix For: 3.x
>
>
> Right now to get at Config stuff you go through DatabaseDescriptor.  But when 
> you instantiate DatabaseDescriptor it actually opens system tables and such, 
> which triggers commit log replays, and other things if the right flags aren't 
> set ahead of time.  This makes getting at config stuff from tools annoying, 
> as you have to be very careful about instantiation orders.
> It would be nice if we could break DatabaseDescriptor up into multiple 
> classes, so that getting at config stuff from tools wasn't such a pain.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7304) Ability to distinguish between NULL and UNSET values in Prepared Statements

2016-07-18 Thread Russell Spitzer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382934#comment-15382934
 ] 

Russell Spitzer commented on CASSANDRA-7304:


Fixed in related ticket CASSANDRA-11207

> Ability to distinguish between NULL and UNSET values in Prepared Statements
> ---
>
> Key: CASSANDRA-7304
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7304
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Drew Kutcharian
>Assignee: Oded Peer
>  Labels: client-impacting, cql, protocolv4
> Fix For: 2.2.0 beta 1
>
> Attachments: 7304-03.patch, 7304-04.patch, 7304-05.patch, 
> 7304-06.patch, 7304-07.patch, 7304-2.patch, 7304-V8.txt, 7304.patch
>
>
> Currently Cassandra inserts tombstones when a value of a column is bound to 
> NULL in a prepared statement. At higher insert rates managing all these 
> tombstones becomes an unnecessary overhead. This limits the usefulness of the 
> prepared statements since developers have to either create multiple prepared 
> statements (each with a different combination of column names, which at times 
> is just unfeasible because of the sheer number of possible combinations) or 
> fall back to using regular (non-prepared) statements.
> This JIRA is here to explore the possibility of either:
> A. Have a flag on prepared statements that once set, tells Cassandra to 
> ignore null columns
> or
> B. Have an "UNSET" value which makes Cassandra skip the null columns and not 
> tombstone them
> Basically, in the context of a prepared statement, a null value means delete, 
> but we don’t have anything that means "ignore" (besides creating a new 
> prepared statement without the ignored column).
> Please refer to the original conversation on DataStax Java Driver mailing 
> list for more background:
> https://groups.google.com/a/lists.datastax.com/d/topic/java-driver-user/cHE3OOSIXBU/discussion
> *EDIT 18/12/14 - [~odpeer] Implementation Notes:*
> The motivation hasn't changed.
> Protocol version 4 specifies that bind variables do not require having a 
> value when executing a statement. Bind variables without a value are called 
> 'unset'. The 'unset' bind variable is serialized as the int value '-2' 
> without following bytes.
> \\
> \\
> * An unset bind variable in an EXECUTE or BATCH request
> ** On a {{value}} does not modify the value and does not create a tombstone
> ** On the {{ttl}} clause is treated as 'unlimited'
> ** On the {{timestamp}} clause is treated as 'now'
> ** On a map key or a list index throws {{InvalidRequestException}}
> ** On a {{counter}} increment or decrement operation does not change the 
> counter value, e.g. {{UPDATE my_tab SET c = c - ? WHERE k = 1}} does not 
> change the value of counter {{c}}
> ** On a tuple field or UDT field throws {{InvalidRequestException}}
> * An unset bind variable in a QUERY request
> ** On a partition column, clustering column or index column in the {{WHERE}} 
> clause throws {{InvalidRequestException}}
> ** On the {{limit}} clause is treated as 'unlimited'
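
For illustration, a small hand-rolled sketch of the wire-level difference described above (not driver or server code): a null value is encoded as length -1 and an unset value as length -2, with no bytes following either.

{code}
import java.nio.ByteBuffer;

public final class BindValueEncoding
{
    private BindValueEncoding() {}

    /** Encodes one bind value: length-prefixed bytes, -1 for null, -2 for unset. */
    public static void writeValue(ByteBuffer out, ByteBuffer value, boolean unset)
    {
        if (unset)
            out.putInt(-2);                  // 'unset': no bytes follow, column untouched
        else if (value == null)
            out.putInt(-1);                  // null: no bytes follow, still writes a tombstone
        else
        {
            out.putInt(value.remaining());   // regular value: length then bytes
            out.put(value.duplicate());
        }
    }
}
{code}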



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12180) Should be able to override compaction space check

2016-07-18 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382933#comment-15382933
 ] 

sankalp kohli commented on CASSANDRA-12180:
---

Min free space is helpful as we don't want drives to go all the way to 100% 
full. Also, with SSDs, performance degrades as the disk approaches full 
capacity.

> Should be able to override compaction space check
> -
>
> Key: CASSANDRA-12180
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12180
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: sankalp kohli
>Priority: Trivial
> Attachments: CASSANDRA-12180_3.0.txt
>
>
> If there's not enough space for a compaction, Cassandra won't run it and 
> prints the exception below. Sometimes we know compaction will free up a lot of 
> space, since an ETL job could have inserted a lot of deletes. This override 
> helps in that case.
> ERROR [CompactionExecutor:17] CassandraDaemon.java (line 258) Exception in 
> thread Thread
> [CompactionExecutor:17,1,main]
> java.lang.RuntimeException: Not enough space for compaction, estimated 
> sstables = 1552, expected
> write size = 260540558535
> at org.apache.cassandra.db.compaction.CompactionTask.checkAvailableDiskSpace
> (CompactionTask.java:306)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.
> java:106)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.
> java:60)
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.
> java:59)
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run
> (CompactionManager.java:198)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
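
For illustration, a hypothetical sketch of the shape of such an override (names are made up; this is not the attached patch): the free-space guard that raises the exception above is skipped when the operator explicitly opts out.

{code}
public final class SpaceCheckSketch
{
    private SpaceCheckSketch() {}

    /** Throws unless there is room for the estimated write, or the operator overrode the check. */
    public static void checkAvailableDiskSpace(long expectedWriteSize, long availableBytes, boolean overrideSpaceCheck)
    {
        if (overrideSpaceCheck)
            return; // operator knows the compaction will free space (e.g. lots of deletes)
        if (expectedWriteSize > availableBytes)
            throw new RuntimeException("Not enough space for compaction, expected write size = " + expectedWriteSize);
    }
}
{code}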



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-12227) Unset TTL should use the Table Default TTL

2016-07-18 Thread Russell Spitzer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382930#comment-15382930
 ] 

Russell Spitzer edited comment on CASSANDRA-12227 at 7/18/16 7:46 PM:
--

This is already done in CASSANDRA-11207


was (Author: rspitzer):
This is already done in the code

> Unset TTL should use the Table Default TTL
> --
>
> Key: CASSANDRA-12227
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12227
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Russell Spitzer
>Priority: Trivial
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-12227) Unset TTL should use the Table Default TTL

2016-07-18 Thread Russell Spitzer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russell Spitzer resolved CASSANDRA-12227.
-
Resolution: Invalid

This is already done in the code

> Unset TTL should use the Table Default TTL
> --
>
> Key: CASSANDRA-12227
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12227
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Russell Spitzer
>Priority: Trivial
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12227) Unset TTL should use the Table Default TTL

2016-07-18 Thread Russell Spitzer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russell Spitzer updated CASSANDRA-12227:

Component/s: CQL

> Unset TTL should use the Table Default TTL
> --
>
> Key: CASSANDRA-12227
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12227
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Russell Spitzer
>Priority: Trivial
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12227) Unset TTL should use the Table Default TTL

2016-07-18 Thread Russell Spitzer (JIRA)
Russell Spitzer created CASSANDRA-12227:
---

 Summary: Unset TTL should use the Table Default TTL
 Key: CASSANDRA-12227
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12227
 Project: Cassandra
  Issue Type: Bug
Reporter: Russell Spitzer
Priority: Trivial






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12216) TTL Reading And Writing is Asymmetric

2016-07-18 Thread Russell Spitzer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382926#comment-15382926
 ] 

Russell Spitzer commented on CASSANDRA-12216:
-

Should I mark this in Changes as 3.4.3? Sorry it's been a while since I touched 
C* source directly. 

> TTL Reading And Writing is Asymmetric 
> --
>
> Key: CASSANDRA-12216
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12216
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Russell Spitzer
>Assignee: Russell Spitzer
>Priority: Minor
> Attachments: 12216-3.7-2.txt, 12216-3.7.txt
>
>
> There is an inherent asymmetry in the way TTLs are read and written. 
> A `TTL` of 0 when written becomes a `null` in C*.
> When read, this `TTL` becomes a `null`.
> The `null` cannot be written back to C* as a `TTL`.
> This means that end users attempting to copy tables with TTL have to do 
> manual mapping of the null TTL values to 0 to avoid NPEs. This is a bit 
> onerous when C* seems to have an internal logic that 0 == NULL. I don't think 
> C* should return values which are not directly insertable back into C*. 
> Even with the advent of CASSANDRA-7304 this still remains a problem that the 
> user needs to be aware of and take care of.
> The following prepared statement
> {code}
> INSERT INTO test.table2 (k, v) VALUES (?, ?) USING TTL ?
> {code}
> will throw NPEs unless we specifically check that the value to be bound to 
> TTL is not null.
> I think we should discuss whether `null` should be treated as 0 in TTL for 
> prepared statements. 
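
For illustration, a minimal sketch of the client-side mapping described above (plain Java, not driver API; names are assumptions): translate a null TTL read from C* back to 0 before binding it to a {{USING TTL ?}} marker.

{code}
public final class TtlMapping
{
    private TtlMapping() {}

    /** Returns a value that is safe to bind to a TTL marker: 0 ("no TTL") instead of null. */
    public static int bindableTtl(Integer ttlFromRead)
    {
        return ttlFromRead == null ? 0 : ttlFromRead;
    }
}
{code}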



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12224) We shouldn't have got there is the base row had no associated entry

2016-07-18 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382914#comment-15382914
 ] 

Gábor Auth commented on CASSANDRA-12224:


I've tested some upgrade scenarios; it seems only one upgrade path is affected:

3.3.0 -> 3.4.0: OK
3.3.0 -> 3.5.0: OK
3.3.0 -> 3.6.0: OK
3.3.0 -> 3.7.0: Error

3.3.0 -> 3.4.0 -> 3.7.0: OK
3.3.0 -> 3.5.0 -> 3.7.0: OK
3.3.0 -> 3.6.0 -> 3.7.0: OK


> We shouldn't have got there is the base row had no associated entry
> ---
>
> Key: CASSANDRA-12224
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12224
> Project: Cassandra
>  Issue Type: Bug
> Environment: Upgrade from datastax-ddc-3.3.0 to datastax-ddc-3.7.0 on 
> CentOS 7 x86.
>Reporter: Gábor Auth
> Fix For: 3.x
>
>
> Upgrade from datastax-ddc-3.3.0 to datastax-ddc-3.7.0. Maybe related to the 
> https://issues.apache.org/jira/browse/CASSANDRA-11198 issue?
> {code}
> ERROR [SharedPool-Worker-12] 2016-07-18 10:24:55,447 Keyspace.java:519 - 
> Unknown exception caught while attempting to update MaterializedView! 
> keyspace.table
> java.lang.AssertionError: We shouldn't have got there is the base row had no 
> associated entry
> at 
> org.apache.cassandra.db.view.ViewUpdateGenerator.computeLivenessInfoForEntry(ViewUpdateGenerator.java:455)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.ViewUpdateGenerator.updateEntry(ViewUpdateGenerator.java:273)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.ViewUpdateGenerator.addBaseTableUpdate(ViewUpdateGenerator.java:127)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.TableViews.addToViewUpdateGenerators(TableViews.java:403)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.TableViews.generateViewUpdates(TableViews.java:236)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.TableViews.pushViewReplicaUpdates(TableViews.java:140)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:514) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.Keyspace.applyFromCommitLog(Keyspace.java:409) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer$MutationInitiator$1.runMayThrow(CommitLogReplayer.java:152)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_91]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
> WARN  [SharedPool-Worker-12] 2016-07-18 10:24:55,450 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-12,5,main]: {}
> java.lang.AssertionError: We shouldn't have got there is the base row had no 
> associated entry
> at 
> org.apache.cassandra.db.view.ViewUpdateGenerator.computeLivenessInfoForEntry(ViewUpdateGenerator.java:455)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.ViewUpdateGenerator.updateEntry(ViewUpdateGenerator.java:273)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.ViewUpdateGenerator.addBaseTableUpdate(ViewUpdateGenerator.java:127)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.TableViews.addToViewUpdateGenerators(TableViews.java:403)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.TableViews.generateViewUpdates(TableViews.java:236)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.TableViews.pushViewReplicaUpdates(TableViews.java:140)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:514) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.Keyspace.applyFromCommitLog(Keyspace.java:409) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer$MutationInitiator$1.runMayThrow(CommitLogReplayer.java:152)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> java.util.concurrent.Execu

[jira] [Commented] (CASSANDRA-11687) dtest failure in rebuild_test.TestRebuild.simple_rebuild_test

2016-07-18 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382883#comment-15382883
 ] 

Jim Witschey commented on CASSANDRA-11687:
--

I'm not sure it matters:

https://github.com/riptano/cassandra-dtest/blob/d2c93023ebe26a3aff98a85bd62deb96c9403c49/rebuild_test.py#L81-L97

It's going to be a failing {{nodetool}} call one way or the other. (Side note: 
I'm making a PR to make the two error-handling cases use the same logic.)

> dtest failure in rebuild_test.TestRebuild.simple_rebuild_test
> -
>
> Key: CASSANDRA-11687
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11687
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: DS Test Eng
>  Labels: dtest
>
> single failure on most recent run (3.0 no-vnode)
> {noformat}
> concurrent rebuild should not be allowed, but one rebuild command should have 
> succeeded.
> {noformat}
> http://cassci.datastax.com/job/cassandra-3.0_novnode_dtest/217/testReport/rebuild_test/TestRebuild/simple_rebuild_test
> Failed on CassCI build cassandra-3.0_novnode_dtest #217



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12203) AssertionError on compaction after upgrade (2.1.9 -> 3.7)

2016-07-18 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382869#comment-15382869
 ] 

Aleksey Yeschenko commented on CASSANDRA-12203:
---

Can you share the schema of the affected table with us, please?

> AssertionError on compaction after upgrade (2.1.9 -> 3.7)
> -
>
> Key: CASSANDRA-12203
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12203
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Cassandra 3.7 (upgrade from 2.1.9)
> Java version "1.8.0_91"
> Ubuntu 14.04.4 LTS (GNU/Linux 3.13.0-83-generic x86_64)
>Reporter: Roman S. Borschel
> Fix For: 3.x
>
>
> After upgrading a Cassandra cluster from 2.1.9 to 3.7, one column family 
> (using SizeTieredCompaction) repeatedly and continuously failed compaction 
> (and thus also repair) across the cluster, with all nodes producing the 
> following errors in the logs:
> {noformat}
> 016-07-14T09:29:47.96855 |srv=cassandra|ERROR: Exception in thread 
> Thread[CompactionExecutor:3,1,main]
> 2016-07-14T09:29:47.96858 |srv=cassandra|java.lang.AssertionError: null
> 2016-07-14T09:29:47.96859 |srv=cassandra|   at 
> org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer$TombstoneTracker.openNew(UnfilteredDeserializer.java:650)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96860 |srv=cassandra|   at 
> org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer$UnfilteredIterator.hasNext(UnfilteredDeserializer.java:423)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96860 |srv=cassandra|   at 
> org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer.hasNext(UnfilteredDeserializer.java:298)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96860 |srv=cassandra|   at 
> org.apache.cassandra.io.sstable.SSTableSimpleIterator$OldFormatIterator.readStaticRow(SSTableSimpleIterator.java:133)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96861 |srv=cassandra|   at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.(SSTableIdentityIterator.java:57)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96861 |srv=cassandra|   at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator$1.initializeIterator(BigTableScanner.java:334)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96862 |srv=cassandra|   at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.maybeInit(LazilyInitializedUnfilteredRowIterator.java:48)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96862 |srv=cassandra|   at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.isReverseOrder(LazilyInitializedUnfilteredRowIterator.java:70)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96863 |srv=cassandra|   at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$1.reduce(UnfilteredPartitionIterators.java:109)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96863 |srv=cassandra|   at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$1.reduce(UnfilteredPartitionIterators.java:100)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96864 |srv=cassandra|   at 
> org.apache.cassandra.utils.MergeIterator$Candidate.consume(MergeIterator.java:408)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96864 |srv=cassandra|   at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:203)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96865 |srv=cassandra|   at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:156)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96865 |srv=cassandra|   at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96866 |srv=cassandra|   at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$2.hasNext(UnfilteredPartitionIterators.java:150)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96866 |srv=cassandra|   at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:72)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96867 |srv=cassandra|   at 
> org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:226)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96867 |srv=cassandra|   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:182)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96867 |srv=cassandra|   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96868 |srv=cassandra|   

[jira] [Updated] (CASSANDRA-12216) TTL Reading And Writing is Asymmetric

2016-07-18 Thread Russell Spitzer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russell Spitzer updated CASSANDRA-12216:

Status: Patch Available  (was: Open)

> TTL Reading And Writing is Asymmetric 
> --
>
> Key: CASSANDRA-12216
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12216
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Russell Spitzer
>Assignee: Russell Spitzer
>Priority: Minor
> Attachments: 12216-3.7-2.txt, 12216-3.7.txt
>
>
> There is an inherent asymmetry in the way TTLs are read and written. 
> A `TTL` of 0 when written becomes a `null` in C*.
> When read, this `TTL` becomes a `null`.
> The `null` cannot be written back to C* as a `TTL`.
> This means that end users attempting to copy tables with TTL have to do 
> manual mapping of the null TTL values to 0 to avoid NPEs. This is a bit 
> onerous when C* seems to have an internal logic that 0 == NULL. I don't think 
> C* should return values which are not directly insertable back into C*. 
> Even with the advent of CASSANDRA-7304 this still remains a problem that the 
> user needs to be aware of and take care of.
> The following prepared statement
> {code}
> INSERT INTO test.table2 (k, v) VALUES (?, ?) USING TTL ?
> {code}
> will throw NPEs unless we specifically check that the value to be bound to 
> TTL is not null.
> I think we should discuss whether `null` should be treated as 0 in TTL for 
> prepared statements. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12216) TTL Reading And Writing is Asymmetric

2016-07-18 Thread Russell Spitzer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russell Spitzer updated CASSANDRA-12216:

Attachment: 12216-3.7-2.txt

> TTL Reading And Writing is Asymmetric 
> --
>
> Key: CASSANDRA-12216
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12216
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Russell Spitzer
>Assignee: Russell Spitzer
>Priority: Minor
> Attachments: 12216-3.7-2.txt, 12216-3.7.txt
>
>
> There is an inherent asymmetry in the way TTLs are read and written. 
> A `TTL` of 0 when written becomes a `null` in C*.
> When read, this `TTL` becomes a `null`.
> The `null` cannot be written back to C* as a `TTL`.
> This means that end users attempting to copy tables with TTL have to do 
> manual mapping of the null TTL values to 0 to avoid NPEs. This is a bit 
> onerous when C* seems to have an internal logic that 0 == NULL. I don't think 
> C* should return values which are not directly insertable back into C*. 
> Even with the advent of CASSANDRA-7304 this still remains a problem that the 
> user needs to be aware of and take care of.
> The following prepared statement
> {code}
> INSERT INTO test.table2 (k, v) VALUES (?, ?) USING TTL ?
> {code}
> will throw NPEs unless we specifically check that the value to be bound to 
> TTL is not null.
> I think we should discuss whether `null` should be treated as 0 in TTL for 
> prepared statements. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12216) TTL Reading And Writing is Asymmetric

2016-07-18 Thread Russell Spitzer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382867#comment-15382867
 ] 

Russell Spitzer commented on CASSANDRA-12216:
-

Added tests; going to attempt to change the CQL documentation now.

> TTL Reading And Writing is Asymmetric 
> --
>
> Key: CASSANDRA-12216
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12216
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Russell Spitzer
>Assignee: Russell Spitzer
>Priority: Minor
> Attachments: 12216-3.7-2.txt, 12216-3.7.txt
>
>
> There is an inherent asymmetry in the way TTLs are read and written. 
> A `TTL` of 0 when written becomes a `null` in C*.
> When read, this `TTL` becomes a `null`.
> The `null` cannot be written back to C* as a `TTL`.
> This means that end users attempting to copy tables with TTL have to do 
> manual mapping of the null TTL values to 0 to avoid NPEs. This is a bit 
> onerous when C* seems to have an internal logic that 0 == NULL. I don't think 
> C* should return values which are not directly insertable back into C*. 
> Even with the advent of CASSANDRA-7304 this still remains a problem that the 
> user needs to be aware of and take care of.
> The following prepared statement
> {code}
> INSERT INTO test.table2 (k, v) VALUES (?, ?) USING TTL ?
> {code}
> will throw NPEs unless we specifically check that the value to be bound to 
> TTL is not null.
> I think we should discuss whether `null` should be treated as 0 in TTL for 
> prepared statements. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12203) AssertionError on compaction after upgrade (2.1.9 -> 3.7)

2016-07-18 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-12203:

Fix Version/s: 3.x

> AssertionError on compaction after upgrade (2.1.9 -> 3.7)
> -
>
> Key: CASSANDRA-12203
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12203
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Cassandra 3.7 (upgrade from 2.1.9)
> Java version "1.8.0_91"
> Ubuntu 14.04.4 LTS (GNU/Linux 3.13.0-83-generic x86_64)
>Reporter: Roman S. Borschel
> Fix For: 3.x
>
>
> After upgrading a Cassandra cluster from 2.1.9 to 3.7, one column family 
> (using SizeTieredCompaction) repeatedly and continuously failed compaction 
> (and thus also repair) across the cluster, with all nodes producing the 
> following errors in the logs:
> {noformat}
> 016-07-14T09:29:47.96855 |srv=cassandra|ERROR: Exception in thread 
> Thread[CompactionExecutor:3,1,main]
> 2016-07-14T09:29:47.96858 |srv=cassandra|java.lang.AssertionError: null
> 2016-07-14T09:29:47.96859 |srv=cassandra|   at 
> org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer$TombstoneTracker.openNew(UnfilteredDeserializer.java:650)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96860 |srv=cassandra|   at 
> org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer$UnfilteredIterator.hasNext(UnfilteredDeserializer.java:423)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96860 |srv=cassandra|   at 
> org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer.hasNext(UnfilteredDeserializer.java:298)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96860 |srv=cassandra|   at 
> org.apache.cassandra.io.sstable.SSTableSimpleIterator$OldFormatIterator.readStaticRow(SSTableSimpleIterator.java:133)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96861 |srv=cassandra|   at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.(SSTableIdentityIterator.java:57)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96861 |srv=cassandra|   at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator$1.initializeIterator(BigTableScanner.java:334)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96862 |srv=cassandra|   at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.maybeInit(LazilyInitializedUnfilteredRowIterator.java:48)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96862 |srv=cassandra|   at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.isReverseOrder(LazilyInitializedUnfilteredRowIterator.java:70)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96863 |srv=cassandra|   at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$1.reduce(UnfilteredPartitionIterators.java:109)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96863 |srv=cassandra|   at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$1.reduce(UnfilteredPartitionIterators.java:100)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96864 |srv=cassandra|   at 
> org.apache.cassandra.utils.MergeIterator$Candidate.consume(MergeIterator.java:408)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96864 |srv=cassandra|   at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:203)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96865 |srv=cassandra|   at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:156)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96865 |srv=cassandra|   at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) 
> ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96866 |srv=cassandra|   at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$2.hasNext(UnfilteredPartitionIterators.java:150)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96866 |srv=cassandra|   at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:72)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96867 |srv=cassandra|   at 
> org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:226)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96867 |srv=cassandra|   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:182)
>  ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96867 |srv=cassandra|   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96868 |srv=cassandra|   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:82)
>  ~[ap

[jira] [Updated] (CASSANDRA-12224) We shouldn't have got there is the base row had no associated entry

2016-07-18 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-12224:

Reproduced In: 3.7

> We shouldn't have got there is the base row had no associated entry
> ---
>
> Key: CASSANDRA-12224
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12224
> Project: Cassandra
>  Issue Type: Bug
> Environment: Upgrade from datastax-ddc-3.3.0 to datastax-ddc-3.7.0 on 
> CentOS 7 x86.
>Reporter: Gábor Auth
> Fix For: 3.x
>
>
> Upgrade from datastax-ddc-3.3.0 to datastax-ddc-3.7.0. Maybe related to the 
> https://issues.apache.org/jira/browse/CASSANDRA-11198 issue?
> {code}
> ERROR [SharedPool-Worker-12] 2016-07-18 10:24:55,447 Keyspace.java:519 - 
> Unknown exception caught while attempting to update MaterializedView! 
> keyspace.table
> java.lang.AssertionError: We shouldn't have got there is the base row had no 
> associated entry
> at 
> org.apache.cassandra.db.view.ViewUpdateGenerator.computeLivenessInfoForEntry(ViewUpdateGenerator.java:455)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.ViewUpdateGenerator.updateEntry(ViewUpdateGenerator.java:273)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.ViewUpdateGenerator.addBaseTableUpdate(ViewUpdateGenerator.java:127)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.TableViews.addToViewUpdateGenerators(TableViews.java:403)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.TableViews.generateViewUpdates(TableViews.java:236)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.TableViews.pushViewReplicaUpdates(TableViews.java:140)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:514) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.Keyspace.applyFromCommitLog(Keyspace.java:409) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer$MutationInitiator$1.runMayThrow(CommitLogReplayer.java:152)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_91]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
> WARN  [SharedPool-Worker-12] 2016-07-18 10:24:55,450 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-12,5,main]: {}
> java.lang.AssertionError: We shouldn't have got there is the base row had no 
> associated entry
> at 
> org.apache.cassandra.db.view.ViewUpdateGenerator.computeLivenessInfoForEntry(ViewUpdateGenerator.java:455)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.ViewUpdateGenerator.updateEntry(ViewUpdateGenerator.java:273)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.ViewUpdateGenerator.addBaseTableUpdate(ViewUpdateGenerator.java:127)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.TableViews.addToViewUpdateGenerators(TableViews.java:403)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.TableViews.generateViewUpdates(TableViews.java:236)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.TableViews.pushViewReplicaUpdates(TableViews.java:140)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:514) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.Keyspace.applyFromCommitLog(Keyspace.java:409) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer$MutationInitiator$1.runMayThrow(CommitLogReplayer.java:152)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_91]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at org.apache.

[jira] [Updated] (CASSANDRA-12224) We shouldn't have got there is the base row had no associated entry

2016-07-18 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-12224:

Fix Version/s: 3.x

> We shouldn't have got there is the base row had no associated entry
> ---
>
> Key: CASSANDRA-12224
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12224
> Project: Cassandra
>  Issue Type: Bug
> Environment: Upgrade from datastax-ddc-3.3.0 to datastax-ddc-3.7.0 on 
> CentOS 7 x86.
>Reporter: Gábor Auth
> Fix For: 3.x
>
>
> Upgraded from datastax-ddc-3.3.0 to datastax-ddc-3.7.0. Possibly related to the 
> https://issues.apache.org/jira/browse/CASSANDRA-11198 issue?
> {code}
> ERROR [SharedPool-Worker-12] 2016-07-18 10:24:55,447 Keyspace.java:519 - 
> Unknown exception caught while attempting to update MaterializedView! 
> keyspace.table
> java.lang.AssertionError: We shouldn't have got there is the base row had no 
> associated entry
> at 
> org.apache.cassandra.db.view.ViewUpdateGenerator.computeLivenessInfoForEntry(ViewUpdateGenerator.java:455)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.ViewUpdateGenerator.updateEntry(ViewUpdateGenerator.java:273)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.ViewUpdateGenerator.addBaseTableUpdate(ViewUpdateGenerator.java:127)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.TableViews.addToViewUpdateGenerators(TableViews.java:403)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.TableViews.generateViewUpdates(TableViews.java:236)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.TableViews.pushViewReplicaUpdates(TableViews.java:140)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:514) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.Keyspace.applyFromCommitLog(Keyspace.java:409) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer$MutationInitiator$1.runMayThrow(CommitLogReplayer.java:152)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_91]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  [apache-cassandra-3.7.0.jar:3.7.0]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-3.7.0.jar:3.7.0]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
> WARN  [SharedPool-Worker-12] 2016-07-18 10:24:55,450 
> AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-12,5,main]: {}
> java.lang.AssertionError: We shouldn't have got there is the base row had no 
> associated entry
> at 
> org.apache.cassandra.db.view.ViewUpdateGenerator.computeLivenessInfoForEntry(ViewUpdateGenerator.java:455)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.ViewUpdateGenerator.updateEntry(ViewUpdateGenerator.java:273)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.ViewUpdateGenerator.addBaseTableUpdate(ViewUpdateGenerator.java:127)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.TableViews.addToViewUpdateGenerators(TableViews.java:403)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.TableViews.generateViewUpdates(TableViews.java:236)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.view.TableViews.pushViewReplicaUpdates(TableViews.java:140)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:514) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.Keyspace.applyFromCommitLog(Keyspace.java:409) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer$MutationInitiator$1.runMayThrow(CommitLogReplayer.java:152)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-3.7.0.jar:3.7.0]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_91]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164)
>  ~[apache-cassandra-3.7.0.jar:3.7.0]
> at org.apache.

[jira] [Comment Edited] (CASSANDRA-11424) Option to leave omitted columns in INSERT JSON unset

2016-07-18 Thread Oded Peer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382792#comment-15382792
 ] 

Oded Peer edited comment on CASSANDRA-11424 at 7/18/16 6:34 PM:


Thank you for the review, Sylvain.
It's a great learning experience for me and it will help me do better on my 
next patch.
I wasn't aware of the .rst documentation in the source tree.
I was glad to learn the syntax in your patch for {code}( { defaultUnset = true; 
} K_UNSET) ){code}. Not knowing this led to changes in QueryOptions which are 
far less elegant than what you propose.

Of course I am happy with it.



was (Author: odpeer):
Thank you for the review, Sylvain.
It's a great learning experience for me and it will help me do better on my 
next patch.
I wasn't aware of the .rst documentation in the source tree.
I was glad to learn the syntax in your patch for {code}( { defaultUnset = true; 
} K_UNSET) ){code}. Not knowing this led to changes in QueryOptions which are 
far less elegant than what you propose.

Of course I am happy with it.


> Option to leave omitted columns in INSERT JSON unset
> 
>
> Key: CASSANDRA-11424
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11424
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Ralf Steppacher
>Assignee: Oded Peer
>  Labels: client-impacting, cql
> Fix For: 3.8
>
> Attachments: 11424-trunk-V1.txt, 11424-trunk-V2.txt, 
> 11424-trunk-V3.txt
>
>
> CASSANDRA-7304 introduced the ability to distinguish between {{NULL}} and 
> {{UNSET}} prepared statement parameters.
> When inserting JSON objects it is not possible to profit from this as a 
> prepared statement only has one parameter that is bound to the JSON object as 
> a whole. There is no way to control {{NULL}} vs {{UNSET}} behavior for 
> columns omitted from the JSON object.
> Please extend on CASSANDRA-7304 to include JSON support.
> {color:grey}
> (My personal requirement is to be able to insert JSON objects with optional 
> fields without incurring the overhead of creating a tombstone of every column 
> not covered by the JSON object upon initial(!) insert.)
> {color}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11424) Option to leave omitted columns in INSERT JSON unset

2016-07-18 Thread Oded Peer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382792#comment-15382792
 ] 

Oded Peer edited comment on CASSANDRA-11424 at 7/18/16 6:34 PM:


Thank you for the review, Sylvain.
It's a great learning experience for me and it will help me do better on my 
next patch.
I wasn't aware of the .rst documentation in the source tree.
I was glad to learn the syntax in your patch for {code}( { defaultUnset = true; 
} K_UNSET) ){code}. Not knowing this led to changes in QueryOptions which are 
far less elegant than what you propose.

Of course I am happy with it.



was (Author: odpeer):
Thank you for the review, Sylvain.
It's a great learning experience for me and it will help me do better on my 
next patch.
I wasn't aware of the .rst documentation in the source tree.
I was glad to learn the syntax in your patch for {{ ( { defaultUnset = true; } 
K_UNSET) }}. Not knowing this led to changes in QueryOptions which are far 
less elegant than what you propose.

Of course I am happy with it.


> Option to leave omitted columns in INSERT JSON unset
> 
>
> Key: CASSANDRA-11424
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11424
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Ralf Steppacher
>Assignee: Oded Peer
>  Labels: client-impacting, cql
> Fix For: 3.8
>
> Attachments: 11424-trunk-V1.txt, 11424-trunk-V2.txt, 
> 11424-trunk-V3.txt
>
>
> CASSANDRA-7304 introduced the ability to distinguish between {{NULL}} and 
> {{UNSET}} prepared statement parameters.
> When inserting JSON objects it is not possible to profit from this as a 
> prepared statement only has one parameter that is bound to the JSON object as 
> a whole. There is no way to control {{NULL}} vs {{UNSET}} behavior for 
> columns omitted from the JSON object.
> Please extend on CASSANDRA-7304 to include JSON support.
> {color:grey}
> (My personal requirement is to be able to insert JSON objects with optional 
> fields without incurring the overhead of creating a tombstone of every column 
> not covered by the JSON object upon initial(!) insert.)
> {color}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11424) Option to leave omitted columns in INSERT JSON unset

2016-07-18 Thread Oded Peer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382792#comment-15382792
 ] 

Oded Peer commented on CASSANDRA-11424:
---

Thank you for the review, Sylvain.
It's a great learning experience for me and it will help me do better on my 
next patch.
I wasn't aware of the .rst documentation in the source tree.
I was glad to learn the syntax in your patch for {{ ( { defaultUnset = true; } 
K_UNSET) }}. Not knowing this led to changes in QueryOptions which are far 
less elegant than what you propose.

Of course I am happy with it.


> Option to leave omitted columns in INSERT JSON unset
> 
>
> Key: CASSANDRA-11424
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11424
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Ralf Steppacher
>Assignee: Oded Peer
>  Labels: client-impacting, cql
> Fix For: 3.8
>
> Attachments: 11424-trunk-V1.txt, 11424-trunk-V2.txt, 
> 11424-trunk-V3.txt
>
>
> CASSANDRA-7304 introduced the ability to distinguish between {{NULL}} and 
> {{UNSET}} prepared statement parameters.
> When inserting JSON objects it is not possible to profit from this as a 
> prepared statement only has one parameter that is bound to the JSON object as 
> a whole. There is no way to control {{NULL}} vs {{UNSET}} behavior for 
> columns omitted from the JSON object.
> Please extend on CASSANDRA-7304 to include JSON support.
> {color:grey}
> (My personal requirement is to be able to insert JSON objects with optional 
> fields without incurring the overhead of creating a tombstone of every column 
> not covered by the JSON object upon initial(!) insert.)
> {color}
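For illustration, a minimal sketch of how the requested behaviour could be exercised from the DataStax Java driver (a 3.x driver API, local contact point, and the {{ks.users}} schema are all assumptions here, and the {{DEFAULT UNSET}} clause follows the syntax proposed in the attached patches, so it requires a server build that includes them):
{code:java}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class JsonDefaultUnsetSketch
{
    public static void main(String[] args) throws Exception
    {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect())
        {
            session.execute("CREATE KEYSPACE IF NOT EXISTS ks WITH replication = "
                          + "{'class': 'SimpleStrategy', 'replication_factor': 1}");
            session.execute("CREATE TABLE IF NOT EXISTS ks.users "
                          + "(id int PRIMARY KEY, name text, email text)");

            // Without the new option, the omitted 'email' column is written as NULL and
            // creates a tombstone on the initial insert; with DEFAULT UNSET the omitted
            // column is simply left unset, matching the CASSANDRA-7304 semantics for
            // unset bound parameters.
            session.execute("INSERT INTO ks.users JSON '{\"id\": 1, \"name\": \"Frodo\"}' "
                          + "DEFAULT UNSET");
        }
    }
}
{code}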



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11414) dtest failure in bootstrap_test.TestBootstrap.resumable_bootstrap_test

2016-07-18 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-11414:
--
Fix Version/s: (was: 3.9)
   3.8

> dtest failure in bootstrap_test.TestBootstrap.resumable_bootstrap_test
> --
>
> Key: CASSANDRA-11414
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11414
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Philip Thompson
>Assignee: Paulo Motta
>  Labels: dtest
> Fix For: 2.2.8, 3.0.9, 3.8
>
>
> Stress is failing to read back all data. We can see this output from the 
> stress read
> {code}
> java.io.IOException: Operation x0 on key(s) [314c384f304f4c325030]: Data 
> returned was not validated
>   at org.apache.cassandra.stress.Operation.error(Operation.java:138)
>   at 
> org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:116)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:101)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:109)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:261)
>   at 
> org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:327)
> java.io.IOException: Operation x0 on key(s) [33383438363931353131]: Data 
> returned was not validated
>   at org.apache.cassandra.stress.Operation.error(Operation.java:138)
>   at 
> org.apache.cassandra.stress.Operation.timeWithRetry(Operation.java:116)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:101)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:109)
>   at 
> org.apache.cassandra.stress.operations.predefined.CqlOperation.run(CqlOperation.java:261)
>   at 
> org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:327)
> FAILURE
> {code}
> Started happening with build 1075. Does not appear flaky on CI.
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1076/testReport/bootstrap_test/TestBootstrap/resumable_bootstrap_test
> Failed on CassCI build trunk_dtest #1076



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12155) proposeCallback.java is too spammy for debug.log

2016-07-18 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-12155:
--
Fix Version/s: (was: 3.9)
   3.8

> proposeCallback.java is too spammy for debug.log
> 
>
> Key: CASSANDRA-12155
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12155
> Project: Cassandra
>  Issue Type: Bug
>  Components: Observability
>Reporter: Wei Deng
>Assignee: Wei Deng
>Priority: Minor
> Fix For: 3.8
>
>
> As stated in [this wiki 
> page|https://wiki.apache.org/cassandra/LoggingGuidelines] derived from the 
> work on CASSANDRA-10241, the DEBUG level logging in debug.log is intended for 
> "+low frequency state changes or message passing. Non-critical path logs on 
> operation details, performance measurements or general troubleshooting 
> information.+"
> However, it appears that in a production deployment of C* 3.x, the LWT 
> message passing from ProposeCallback.java gets printed every 1-2 seconds, 
> which floods debug.log and drowns out the other important DEBUG-level 
> logging messages, like the following:
> {noformat}
> DEBUG [SharedPool-Worker-2] 2016-07-09 05:23:57,800  ProposeCallback.java:62 
> - Propose response true from /10.240.0.2
> DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:00,803  ProposeCallback.java:62 
> - Propose response true from /10.240.0.2
> DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:00,804  ProposeCallback.java:62 
> - Propose response true from /10.240.0.3
> DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:03,807  ProposeCallback.java:62 
> - Propose response true from /10.240.0.2
> DEBUG [SharedPool-Worker-2] 2016-07-09 05:24:03,807  ProposeCallback.java:62 
> - Propose response true from /10.240.0.3
> DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:06,811  ProposeCallback.java:62 
> - Propose response true from /10.240.0.2
> DEBUG [SharedPool-Worker-2] 2016-07-09 05:24:06,811  ProposeCallback.java:62 
> - Propose response true from /10.240.0.3
> DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:09,815  ProposeCallback.java:62 
> - Propose response true from /10.240.0.2
> DEBUG [SharedPool-Worker-2] 2016-07-09 05:24:09,815  ProposeCallback.java:62 
> - Propose response true from /10.240.0.3
> DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:12,819  ProposeCallback.java:62 
> - Propose response true from /10.240.0.2
> DEBUG [SharedPool-Worker-2] 2016-07-09 05:24:12,819  ProposeCallback.java:62 
> - Propose response true from /10.240.0.3
> DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:15,823  ProposeCallback.java:62 
> - Propose response true from /10.240.0.2
> DEBUG [SharedPool-Worker-2] 2016-07-09 05:24:15,823  ProposeCallback.java:62 
> - Propose response true from /10.240.0.3
> DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:18,827  ProposeCallback.java:62 
> - Propose response true from /10.240.0.2
> DEBUG [SharedPool-Worker-2] 2016-07-09 05:24:18,827  ProposeCallback.java:62 
> - Propose response true from /10.240.0.3
> DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:21,831  ProposeCallback.java:62 
> - Propose response true from /10.240.0.2
> DEBUG [SharedPool-Worker-2] 2016-07-09 05:24:21,831  ProposeCallback.java:62 
> - Propose response true from /10.240.0.3
> DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:24,835  ProposeCallback.java:62 
> - Propose response true from /10.240.0.2
> DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:24,835  ProposeCallback.java:62 
> - Propose response true from /10.240.0.3
> DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:27,839  ProposeCallback.java:62 
> - Propose response true from /10.240.0.2
> DEBUG [SharedPool-Worker-2] 2016-07-09 05:24:27,839  ProposeCallback.java:62 
> - Propose response true from /10.240.0.3
> DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:30,843  ProposeCallback.java:62 
> - Propose response true from /10.240.0.2
> DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:30,843  ProposeCallback.java:62 
> - Propose response true from /10.240.0.3
> DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:33,847  ProposeCallback.java:62 
> - Propose response true from /10.240.0.3
> DEBUG [SharedPool-Worker-2] 2016-07-09 05:24:33,847  ProposeCallback.java:62 
> - Propose response true from /10.240.0.2
> DEBUG [SharedPool-Worker-2] 2016-07-09 05:24:36,851  ProposeCallback.java:62 
> - Propose response true from /10.240.0.3
> DEBUG [SharedPool-Worker-2] 2016-07-09 05:24:36,852  ProposeCallback.java:62 
> - Propose response true from /10.240.0.2
> DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:39,855  ProposeCallback.java:62 
> - Propose response true from /10.240.0.2
> DEBUG [SharedPool-Worker-2] 2016-07-09 05:24:39,855  ProposeCallback.java:62 
> - Propose response true from /10.240.0.3
> DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:42,859  ProposeCallback.java:

[jira] [Updated] (CASSANDRA-12193) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x.noncomposite_static_cf_test

2016-07-18 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-12193:
--
Fix Version/s: (was: 3.9)
   3.8

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x.noncomposite_static_cf_test
> --
>
> Key: CASSANDRA-12193
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12193
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Alex Petrov
>  Labels: dtest
> Fix For: 3.0.9, 3.8
>
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x/noncomposite_static_cf_test
> Failed on CassCI build upgrade_tests-all #59
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 
> 146, in noncomposite_static_cf_test
> [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins']])
>   File "/home/automaton/cassandra-dtest/assertions.py", line 162, in 
> assert_all
> assert list_res == expected, "Expected {} from {}, but got 
> {}".format(expected, query, list_res)
> "Expected [[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 
> 'Gamgee'], [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 
> 'Baggins']] from SELECT * FROM users, but got 
> [[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
> [UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
> [UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], 
> [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins'], 
> [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins'], 
> [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins']]
> {code}
> Related failures:
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_2_x_To_head_trunk/noncomposite_static_cf_test/
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_2_x_To_indev_3_0_x/noncomposite_static_cf_test/
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_2_x_To_indev_3_0_x/noncomposite_static_cf_test/
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_head_trunk/noncomposite_static_cf_test/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11854) Remove finished streaming connections from MessagingService

2016-07-18 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-11854:
--
Fix Version/s: (was: 3.9)
   3.8

> Remove finished streaming connections from MessagingService
> ---
>
> Key: CASSANDRA-11854
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11854
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Paulo Motta
>Assignee: Paulo Motta
> Fix For: 2.1.15, 2.2.7, 3.0.8, 3.8
>
> Attachments: oom.png
>
>
> When a new {{IncomingStreamingConnection}} is created, [we register it in the 
> connections 
> map|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/net/MessagingService.java#L1109]
>  of {{MessagingService}}, but we [only remove it if there is an 
> exception|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/net/IncomingStreamingConnection.java#L83]
>  while attaching the socket to the stream session.
> On nodes with SSL and a large number of vnodes, after many repair sessions 
> these old connections can accumulate and cause OOM (heap dump attached).
> The connection should be removed from the connections map once it is 
> finished, so that it can be garbage collected (a sketch of this pattern 
> follows below).
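A minimal sketch of the cleanup pattern being described, with hypothetical names (this is not the actual MessagingService/IncomingStreamingConnection patch): deregister the connection on every exit path, not just the exception path, so finished connections become collectable.
{code:java}
import java.net.Socket;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class StreamingConnectionRegistry
{
    private final Map<Socket, Runnable> connections = new ConcurrentHashMap<>();

    void handle(Socket socket, Runnable session)
    {
        connections.put(socket, session);   // registered when the connection is accepted
        try
        {
            session.run();                  // attach the socket to the stream session and serve it
        }
        finally
        {
            // deregister on both the success and failure paths, not only on exception,
            // so the finished connection can be garbage collected
            connections.remove(socket);
        }
    }
}
{code}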



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11393) dtest failure in upgrade_tests.upgrade_through_versions_test.ProtoV3Upgrade_2_1_UpTo_3_0_HEAD.rolling_upgrade_test

2016-07-18 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-11393:
--
Fix Version/s: (was: 3.9)
   3.8

> dtest failure in 
> upgrade_tests.upgrade_through_versions_test.ProtoV3Upgrade_2_1_UpTo_3_0_HEAD.rolling_upgrade_test
> --
>
> Key: CASSANDRA-11393
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11393
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination, Streaming and Messaging
>Reporter: Philip Thompson
>Assignee: Benjamin Lerer
>  Labels: dtest
> Fix For: 3.0.9, 3.8
>
> Attachments: 11393-3.0.txt
>
>
> We are seeing a failure in the upgrade tests that go from 2.1 to 3.0
> {code}
> node2: ERROR [SharedPool-Worker-2] 2016-03-10 20:05:17,865 Message.java:611 - 
> Unexpected exception during request; channel = [id: 0xeb79b477, 
> /127.0.0.1:39613 => /127.0.0.2:9042]
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.db.ReadCommand$LegacyReadCommandSerializer.serializedSize(ReadCommand.java:1208)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.ReadCommand$LegacyReadCommandSerializer.serializedSize(ReadCommand.java:1155)
>  ~[main/:na]
>   at org.apache.cassandra.net.MessageOut.payloadSize(MessageOut.java:166) 
> ~[main/:na]
>   at 
> org.apache.cassandra.net.OutboundTcpConnectionPool.getConnection(OutboundTcpConnectionPool.java:72)
>  ~[main/:na]
>   at 
> org.apache.cassandra.net.MessagingService.getConnection(MessagingService.java:609)
>  ~[main/:na]
>   at 
> org.apache.cassandra.net.MessagingService.sendOneWay(MessagingService.java:758)
>  ~[main/:na]
>   at 
> org.apache.cassandra.net.MessagingService.sendRR(MessagingService.java:701) 
> ~[main/:na]
>   at 
> org.apache.cassandra.net.MessagingService.sendRRWithFailure(MessagingService.java:684)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor.makeRequests(AbstractReadExecutor.java:110)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor.makeDataRequests(AbstractReadExecutor.java:85)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.AbstractReadExecutor$AlwaysSpeculatingReadExecutor.executeAsync(AbstractReadExecutor.java:330)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy$SinglePartitionReadLifecycle.doInitialQueries(StorageProxy.java:1699)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1654) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1601) 
> ~[main/:na]
>   at 
> org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1520) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.SinglePartitionReadCommand.execute(SinglePartitionReadCommand.java:302)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:67)
>  ~[main/:na]
>   at 
> org.apache.cassandra.service.pager.SinglePartitionPager.fetchPage(SinglePartitionPager.java:34)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement$Pager$NormalPager.fetchPage(SelectStatement.java:297)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:333)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:209)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:76)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:206)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:472)
>  ~[main/:na]
>   at 
> org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:449)
>  ~[main/:na]
>   at 
> org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:130)
>  ~[main/:na]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507)
>  [main/:na]
>   at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401)
>  [main/:na]
>   at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>  [netty-all-4.0.23.Final.jar:4.0.23.Final]
>   at 
> io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32)
>  [netty-all-4.0.23.Final.j

[jira] [Updated] (CASSANDRA-11315) Fix upgrading sparse tables that are incorrectly marked as dense

2016-07-18 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-11315:
--
Fix Version/s: (was: 3.9)
   3.8

> Fix upgrading sparse tables that are incorrectly marked as dense
> 
>
> Key: CASSANDRA-11315
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11315
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
> Environment: Ubuntu 14.04, Oracle Java 8, Apache Cassandra 2.2.5 -> 
> 3.0.3, Apache Cassandra 2.2.6 -> 3.0.5
>Reporter: Dominik Keil
>Assignee: Aleksey Yeschenko
> Fix For: 3.0.9, 3.8
>
>
> Hi,
> when trying to upgrade our development cluster from C* 2.2.5 to 3.0.3 
> Cassandra fails during startup.
> Here's the relevant log snippet:
> {noformat}
> [...]
> INFO  [main] 2016-03-08 11:42:01,291 ColumnFamilyStore.java:381 - 
> Initializing system.schema_triggers
> INFO  [main] 2016-03-08 11:42:01,302 ColumnFamilyStore.java:381 - 
> Initializing system.schema_usertypes
> INFO  [main] 2016-03-08 11:42:01,313 ColumnFamilyStore.java:381 - 
> Initializing system.schema_functions
> INFO  [main] 2016-03-08 11:42:01,324 ColumnFamilyStore.java:381 - 
> Initializing system.schema_aggregates
> INFO  [main] 2016-03-08 11:42:01,576 SystemKeyspace.java:1284 - Detected 
> version upgrade from 2.2.5 to 3.0.3, snapshotting system keyspace
> WARN  [main] 2016-03-08 11:42:01,911 CompressionParams.java:382 - The 
> sstable_compression option has been deprecated. You should use class instead
> WARN  [main] 2016-03-08 11:42:01,959 CompressionParams.java:333 - The 
> chunk_length_kb option has been deprecated. You should use chunk_length_in_kb 
> instead
> ERROR [main] 2016-03-08 11:42:02,638 CassandraDaemon.java:692 - Exception 
> encountered during startup
> java.lang.AssertionError: null
> at 
> org.apache.cassandra.db.CompactTables.getCompactValueColumn(CompactTables.java:90)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.config.CFMetaData.rebuild(CFMetaData.java:315) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at org.apache.cassandra.config.CFMetaData.(CFMetaData.java:291) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at org.apache.cassandra.config.CFMetaData.create(CFMetaData.java:367) 
> ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.schema.LegacySchemaMigrator.decodeTableMetadata(LegacySchemaMigrator.java:337)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.schema.LegacySchemaMigrator.readTableMetadata(LegacySchemaMigrator.java:273)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.schema.LegacySchemaMigrator.readTable(LegacySchemaMigrator.java:244)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.schema.LegacySchemaMigrator.lambda$readTables$227(LegacySchemaMigrator.java:237)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at java.util.ArrayList.forEach(ArrayList.java:1249) ~[na:1.8.0_74]
> at 
> org.apache.cassandra.schema.LegacySchemaMigrator.readTables(LegacySchemaMigrator.java:237)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.schema.LegacySchemaMigrator.readKeyspace(LegacySchemaMigrator.java:186)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.schema.LegacySchemaMigrator.lambda$readSchema$224(LegacySchemaMigrator.java:177)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at java.util.ArrayList.forEach(ArrayList.java:1249) ~[na:1.8.0_74]
> at 
> org.apache.cassandra.schema.LegacySchemaMigrator.readSchema(LegacySchemaMigrator.java:177)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.schema.LegacySchemaMigrator.migrate(LegacySchemaMigrator.java:77)
>  ~[apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:223) 
> [apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:551)
>  [apache-cassandra-3.0.3.jar:3.0.3]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:679) 
> [apache-cassandra-3.0.3.jar:3.0.3]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11733) SSTableReversedIterator ignores range tombstones

2016-07-18 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-11733:
--
Fix Version/s: (was: 3.9)
   3.8

> SSTableReversedIterator ignores range tombstones
> 
>
> Key: CASSANDRA-11733
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11733
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Dave Brosius
>Assignee: Sylvain Lebresne
> Fix For: 3.0.9, 3.8
>
> Attachments: remove_delete.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11944) sstablesInBounds might not actually give all sstables within the bounds due to having start positions moved in sstables

2016-07-18 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-11944:
--
Fix Version/s: (was: 3.9)
   3.8

> sstablesInBounds might not actually give all sstables within the bounds due 
> to having start positions moved in sstables
> ---
>
> Key: CASSANDRA-11944
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11944
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 3.0.9, 3.8
>
>
> Same problem as with CASSANDRA-11886 - if we try to fetch sstablesInBounds 
> for CANONICAL_SSTABLES, we can miss some actually overlapping sstables. In 
> 3.0+ we state which SSTableSet we want when calling the method.
> Looks like the only issue this could cause is that we include a few too many 
> sstables in compactions that we think contain only droppable tombstones.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11820) Altering a column's type causes EOF

2016-07-18 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-11820:
--
Fix Version/s: (was: 3.9)
   3.8

> Altering a column's type causes EOF
> ---
>
> Key: CASSANDRA-11820
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11820
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Carl Yeksigian
>Assignee: Sylvain Lebresne
> Fix For: 3.0.9, 3.8
>
>
> While working on CASSANDRA-10309, I was testing altering columns' types. This 
> series of operations fails:
> {code}
> CREATE TABLE test (a int PRIMARY KEY, b int)
> INSERT INTO test (a, b) VALUES (1, 1)
> ALTER TABLE test ALTER b TYPE BLOB
> SELECT * FROM test WHERE a = 1
> {code}
> Tried this on 3.0 and trunk, both fail.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11715) Make GCInspector's MIN_LOG_DURATION configurable

2016-07-18 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-11715:
--
Fix Version/s: (was: 3.9)
   3.8

> Make GCInspector's MIN_LOG_DURATION configurable
> 
>
> Key: CASSANDRA-11715
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11715
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Brandon Williams
>Assignee: Jeff Jirsa
>Priority: Minor
>  Labels: lhf
> Fix For: 2.2.8, 3.0.9, 3.8
>
>
> It's common for people to run C* with the G1 collector on appropriately-sized 
> heaps.  Quite often, the target pause time is set to 500ms, but GCI fires on 
> anything over 200ms.  We can already control the warn threshold, but these 
> are acceptable GCs for the configuration and create noise at the INFO log 
> level.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11996) SSTableSet.CANONICAL can miss sstables

2016-07-18 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-11996:
--
Fix Version/s: (was: 3.9)
   3.8

> SSTableSet.CANONICAL can miss sstables
> --
>
> Key: CASSANDRA-11996
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11996
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Critical
> Fix For: 3.0.9, 3.8
>
>
> There is a race where we might miss sstables in SSTableSet.CANONICAL when we 
> finish up a compaction.
> Reproducing unit test pushed 
> [here|https://github.com/krummas/cassandra/commit/1292aaa61b89730cff0c022ed1262f45afd493e5]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12071) Regression in flushing throughput under load after CASSANDRA-6696

2016-07-18 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-12071:
--
Fix Version/s: (was: 3.9)
   3.8

> Regression in flushing throughput under load after CASSANDRA-6696
> -
>
> Key: CASSANDRA-12071
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12071
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Ariel Weisberg
>Assignee: Marcus Eriksson
> Fix For: 3.8
>
>
> The way flushing used to work is that a ColumnFamilyStore could have multiple 
> Memtables flushing at once and multiple ColumnFamilyStores could flush at the 
> same time. The way it works now there can be only a single flush of any 
> ColumnFamilyStore & Memtable running in the C* process, and the number of 
> threads applied to that flush is bounded by the number of disks in JBOD.
> This works OK most of the time, but occasionally flushing is a little 
> slower, ingest outstrips it, and writes then block on available memory. At 
> that point you see multi-second stalls that cause timeouts.
> This is a problem for reasonable configurations that don't use JBOD but have 
> access to a fast disk that can handle some IO queuing (RAID, SSD).
> You can reproduce on beefy hardware (12 cores 24 threads, 64 gigs of RAM, 
> SSD) if you unthrottle compaction or set it to something like 64 
> megabytes/second and run with 8 compaction threads and stress with the 
> default write workload and a reasonable number of threads. I tested with 96.
> It started happening after about 60 gigabytes of data was loaded.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12073) [SASI] PREFIX search on CONTAINS/NonTokenizer mode returns only partial results

2016-07-18 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-12073:
--
Fix Version/s: (was: 3.9)
   3.8

> [SASI] PREFIX search on CONTAINS/NonTokenizer mode returns only partial 
> results
> ---
>
> Key: CASSANDRA-12073
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12073
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Cassandra 3.7
>Reporter: DOAN DuyHai
>Assignee: DOAN DuyHai
> Fix For: 3.8
>
> Attachments: patch_PREFIX_search_with_CONTAINS_mode.txt, 
> patch_PREFIX_search_with_CONTAINS_mode_V2.txt
>
>
> {noformat}
> cqlsh:music> CREATE TABLE music.albums (
> id uuid PRIMARY KEY,
> artist text,
> country text,
> quality text,
> status text,
> title text,
> year int
> );
> cqlsh:music> CREATE CUSTOM INDEX albums_artist_idx ON music.albums (artist) 
> USING 'org.apache.cassandra.index.sasi.SASIIndex' WITH OPTIONS = {'mode': 
> 'CONTAINS', 'analyzer_class': 
> 'org.apache.cassandra.index.sasi.analyzer.NonTokenizingAnalyzer', 
> 'case_sensitive': 'false'};
> cqlsh:music> SELECT * FROM albums WHERE artist like 'lady%'  LIMIT 100;
>  id   | artist| country| quality 
> | status| title | year
> --+---++-+---+---+--
>  372bb0ab-3263-41bc-baad-bb520ddfa787 | Lady Gaga |USA |  normal 
> |  Official |   Red and Blue EP | 2006
>  1a4abbcd-b5de-4c69-a578-31231e01ff09 | Lady Gaga |Unknown |  normal 
> | Promotion |Poker Face | 2008
>  31f4a0dc-9efc-48bf-9f5e-bfc09af42b82 | Lady Gaga |USA |  normal 
> |  Official |   The Cherrytree Sessions | 2009
>  8ebfaebd-28d0-477d-b735-469661ce6873 | Lady Gaga |Unknown |  normal 
> |  Official |Poker Face | 2009
>  98107d82-e0dd-46bc-a273-1577578984c7 | Lady Gaga |USA |  normal 
> |  Official |   Just Dance: The Remixes | 2008
>  a76af0f2-f5c5-4306-974a-e3c17158e6c6 | Lady Gaga |  Italy |  normal 
> |  Official |  The Fame | 2008
>  849ee019-8b15-4767-8660-537ab9710459 | Lady Gaga |USA |  normal 
> |  Official |Christmas Tree | 2008
>  4bad59ac-913f-43da-9d48-89adc65453d2 | Lady Gaga |  Australia |  normal 
> |  Official | Eh Eh | 2009
>  80327731-c450-457f-bc12-0a8c21fd9c5d | Lady Gaga |USA |  normal 
> |  Official | Just Dance Remixes Part 2 | 2008
>  3ad33659-e932-4d31-a040-acab0e23c3d4 | Lady Gaga |Unknown |  normal 
> |  null |Just Dance | 2008
>  9adce7f6-6a1d-49fd-b8bd-8f6fac73558b | Lady Gaga | United Kingdom |  normal 
> |  Official |Just Dance | 2009
> (11 rows)
> {noformat}
> *SASI* says that there are only 11 artists whose name starts with {{lady}}.
> However, in the data set, there are:
> * Lady Pank
> * Lady Saw
> * Lady Saw
> * Ladyhawke
> * Ladytron
> * Ladysmith Black Mambazo
> * Lady Gaga
> * Lady Sovereign
> etc ...
> By debugging the source code, the issue is in 
> {{OnDiskIndex.TermIterator::computeNext()}}
> {code:java}
> for (;;)
> {
> if (currentBlock == null)
> return endOfData();
> if (offset >= 0 && offset < currentBlock.termCount())
> {
> DataTerm currentTerm = currentBlock.getTerm(nextOffset());
> if (checkLower && !e.isLowerSatisfiedBy(currentTerm))
> continue;
> // flip the flag right on the first bounds match
> // to avoid expensive comparisons
> checkLower = false;
> if (checkUpper && !e.isUpperSatisfiedBy(currentTerm))
> return endOfData();
> return currentTerm;
> }
> nextBlock();
> }
> {code}
>  So the {{endOfData()}} conditions are:
> * currentBlock == null
> * checkUpper && !e.isUpperSatisfiedBy(currentTerm)
> The problem is that {{e::isUpperSatisfiedBy}} not only checks whether 
> the term matches but also returns *false* when it's a *partial term*!
> {code:java}
> public boolean isUpperSatisfiedBy(OnDiskIndex.DataTerm term)
> {
> if (!hasUpper())
> return true;
> if (nonMatchingPartial(term))
> return false;
> int cmp = term.compareTo(validator, upper.value, false);
> return cmp < 0 || cmp == 0 && upper.inclusive;
> }
> {code}
> By debugging the OnDiskIndex data, I've
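A self-contained toy illustration of the control flow described above (stand-in types and data only, not SASI code): if the upper-bound check treats a non-matching partial term as "past the upper bound", iteration ends early and later matching terms are silently dropped. The second loop, which skips such terms, is shown purely for contrast and is not necessarily what the attached patch does.
{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PartialTermDemo
{
    // Stand-ins for nonMatchingPartial() and the upper-bound comparison.
    static boolean partial(String t) { return t.endsWith("*"); }
    static boolean pastUpper(String t, String upper) { return t.compareTo(upper) > 0; }

    public static void main(String[] args)
    {
        List<String> terms = Arrays.asList("lady gaga", "lady p*", "lady pank", "lady saw");
        String upper = "lady~";   // everything starting with "lady" sorts below this bound

        List<String> stopsEarly = new ArrayList<>();
        for (String t : terms)
        {
            if (partial(t) || pastUpper(t, upper))
                break;            // the partial term is treated like end-of-data
            stopsEarly.add(t);
        }

        List<String> skipsPartials = new ArrayList<>();
        for (String t : terms)
        {
            if (partial(t))
                continue;         // contrast only: ignore the partial term and keep scanning
            if (pastUpper(t, upper))
                break;
            skipsPartials.add(t);
        }

        // Prints "[lady gaga] vs [lady gaga, lady pank, lady saw]"
        System.out.println(stopsEarly + " vs " + skipsPartials);
    }
}
{code}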

[jira] [Updated] (CASSANDRA-12043) Syncing most recent commit in CAS across replicas can cause all CAS queries in the CQL partition to fail

2016-07-18 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-12043:
--
Fix Version/s: (was: 3.9)
   3.8

> Syncing most recent commit in CAS across replicas can cause all CAS queries 
> in the CQL partition to fail
> 
>
> Key: CASSANDRA-12043
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12043
> Project: Cassandra
>  Issue Type: Bug
>Reporter: sankalp kohli
>Assignee: Sylvain Lebresne
> Fix For: 2.1.15, 2.2.7, 3.0.9, 3.8
>
>
> We update the most recent commit on requiredParticipant replicas if out of 
> sync during the prepare round in beginAndRepairPaxos method. We keep doing 
> this in a loop till the requiredParticipant replicas have the same most 
> recent commit or we hit timeout. 
> Say we have 3 machines A,B and C and gc grace on the table is 10 days. We do 
> a CAS write at time 0 and it went to A and B but not to C.  C will get the 
> hint later but will not update the most recent commit in paxos table. This is 
> how CAS hints work. 
> In the paxos table whose gc_grace=0, most_recent_commit in A and B will be 
> inserted with timestamp 0 and with a TTL of 10 days. After 10 days, this 
> insert will become a tombstone at time 0 till it is compacted away since 
> gc_grace=0.
> Do a CAS read after say 1 day on the same CQL partition and this time prepare 
> phase involved A and C. most_recent_commit on C for this CQL partition is 
> empty. A sends the most_recent_commit to C with a timestamp of 0 and with a 
> TTL of 10 days. This most_recent_commit on C will expire on 11th day since it 
> is inserted after 1 day. 
> most_recent_commit are now in sync on A,B and C, however A and B 
> most_recent_commit will expire on 10th day whereas for C it will expire on 
> 11th day since it was inserted one day later. 
> Do another CAS read after 10days when most_recent_commit on A and B have 
> expired and is treated as tombstones till compacted. In this CAS read, say A 
> and C are involved in prepare phase. most_recent_commit will not match 
> between them since it is expired in A and is still there on C. This will 
> cause most_recent_commit to be applied to A with a timestamp of 0 and TTL of 
> 10 days. If A has not compacted away the original most_recent_commit which 
> has expired, this new write to most_recent_commit won't be visible on reads 
> since there is a tombstone with the same timestamp (delete wins over data with 
> same timestamp). 
> Another round of prepare will follow and again A would say it does not know 
> about most_recent_write(covered by original write which is not a tombstone) 
> and C will again try to send the write to A. This can keep going on till the 
> request timeouts or only A and B are involved in the prepare phase. 
> When A’s original most_recent_commit which is now a tombstone is compacted, 
> all the inserts which it was covering will come live. This will in turn again 
> get played to another replica. This ping pong can keep going on for a long 
> time. 
> The issue is that most_recent_commit is expiring at different times across 
> replicas. When they get replayed to a replica to make it in sync, we again 
> set the TTL from that point.  
> During the CAS read which timed out, most_recent_commit was being sent to 
> another replica in a loop. Even in successful requests, it will try to loop 
> for a couple of times if involving A and C and then when the replicas which 
> respond are A and B, it will succeed. So this will have impact on latencies 
> as well. 
> These timeouts gets worse when a machine is down as no progress can be made 
> as the machine with unexpired commit is always involved in the CAS prepare 
> round. Also with range movements, the new machine gaining range has empty 
> most recent commit and gets the commit at a later time causing same issue. 
> Repro steps:
> 1. Paxos TTL is max(3 hours, gc_grace) as defined in 
> SystemKeyspace.paxosTtl(). Change this method to not put a minimum TTL of 3 
> hours. 
> Method SystemKeyspace.paxosTtl() will then be {{return metadata.getGcGraceSeconds();}} 
> instead of {{return Math.max(3 * 3600, metadata.getGcGraceSeconds());}} (see the 
> sketch after these repro steps).
> We are doing this so that we don't need to wait for 3 hours. 
> Create a 3 node cluster with the code change suggested above with machines 
> A,B and C
> CREATE KEYSPACE  test WITH REPLICATION = { 'class' : 'SimpleStrategy', 
> 'replication_factor' : 3 };
> use test;
> CREATE TABLE users (a int PRIMARY KEY,b int);
> alter table users WITH gc_grace_seconds=120;
> consistency QUORUM;
> bring down machine C
> INSERT INTO users (a, b) VALUES (1, 1) IF NOT EXISTS;
> Nodetool flush on machine A and B
> Bring up the down 
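For reference, a sketch of the test-only change described in step 1 of the repro above. The method lives in org.apache.cassandra.db.SystemKeyspace and needs the Cassandra source tree to compile; the CFMetaData parameter type is assumed from the surrounding 3.x code, and the change is purely a repro aid, not something to ship.
{code:java}
// In org.apache.cassandra.db.SystemKeyspace (repro-only modification):
public static int paxosTtl(CFMetaData metadata)
{
    // Original behaviour: the paxos TTL is never less than 3 hours.
    // return Math.max(3 * 3600, metadata.getGcGraceSeconds());

    // Repro-only behaviour: honour gc_grace_seconds directly so the test
    // does not have to wait for the 3-hour minimum to elapse.
    return metadata.getGcGraceSeconds();
}
{code}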

[jira] [Updated] (CASSANDRA-11973) Is MemoryUtil.getShort() supposed to return a sign-extended or non-sign-extended value?

2016-07-18 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-11973:
--
Fix Version/s: (was: 3.9)
   3.8

> Is MemoryUtil.getShort() supposed to return a sign-extended or 
> non-sign-extended value?
> ---
>
> Key: CASSANDRA-11973
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11973
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Rei Odaira
>Assignee: Rei Odaira
>Priority: Minor
> Fix For: 2.2.8, 3.0.9, 3.8
>
> Attachments: 11973-2.2.txt
>
>
> In org.apache.cassandra.utils.memory.MemoryUtil.getShort(), the returned 
> value of unsafe.getShort(address) is bit-wise-AND'ed with 0xFFFF, while that 
> of getShortByByte(address) is not. This inconsistency results in different 
> returned values when the short integer is negative. Which is preferred 
> behavior? Looking at NativeClustering and NativeCellTest, it seems like 
> non-sign-extension is assumed.
> By the way, is there any reason MemoryUtil.getShort() and 
> MemoryUtil.getShortByByte() return "int", not "short"?
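A self-contained illustration of the two widening behaviours in question (plain Java, not the MemoryUtil code): widening a negative short to int with and without the 0xFFFF mask produces different values, which is exactly the inconsistency described above.
{code:java}
public class ShortWideningDemo
{
    public static void main(String[] args)
    {
        short raw = (short) 0xFFFE;      // the bit pattern for -2 as a signed short

        int signExtended = raw;          // -2: plain widening, as in the unmasked path
        int zeroExtended = raw & 0xFFFF; // 65534: widening plus the 0xFFFF mask

        // Prints "-2 vs 65534" - callers see different values depending on the path taken.
        System.out.println(signExtended + " vs " + zeroExtended);
    }
}
{code}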



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12044) Materialized view definition regression in clustering key

2016-07-18 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-12044:
--
Fix Version/s: (was: 3.9)
   3.8

> Materialized view definition regression in clustering key
> -
>
> Key: CASSANDRA-12044
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12044
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Michael Mior
>Assignee: Carl Yeksigian
> Fix For: 3.0.9, 3.8
>
>
> This bug was reported on the 
> [users|https://mail-archives.apache.org/mod_mbox/cassandra-user/201606.mbox/%3CCAG0vsSJRtRjLJqKsd3M8X-8nXpPwRj7Q80mNkuy8sy%2B%2B%3D%2BocHA%40mail.gmail.com%3E]
>  mailing list. The following definitions work in 3.0.3 but fail in 3.0.7.
> {code}
> CREATE TABLE ks.pa (
> id bigint,
> sub_id text,
> name text,
> class text,
> r_id bigint,
> k_id bigint,
> created timestamp,
> priority int,
> updated timestamp,
> value text,
> PRIMARY KEY (id, sub_id, name)
> );
> CREATE MATERIALIZED VIEW ks.mv_pa AS
> SELECT k_id, name, value, sub_id, id, class, r_id
> FROM ks.pa
> WHERE k_id IS NOT NULL AND name IS NOT NULL AND value IS NOT NULL AND 
> sub_id IS NOT NULL AND id IS NOT NULL
> PRIMARY KEY ((k_id, name), value, sub_id, id);
> {code}
> After running bisect, I've narrowed it down to commit 
> [86ba227|https://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=commit;h=86ba227477b9f8595eb610ecaf950cfbc29dd36b]
>  from [CASSANDRA-11475|https://issues.apache.org/jira/browse/CASSANDRA-11475].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12072) dtest failure in auth_test.TestAuthRoles.udf_permissions_in_selection_test

2016-07-18 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-12072:
--
Fix Version/s: (was: 3.9)
   3.8

> dtest failure in auth_test.TestAuthRoles.udf_permissions_in_selection_test
> --
>
> Key: CASSANDRA-12072
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12072
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Joel Knighton
>  Labels: dtest
> Fix For: 3.8
>
> Attachments: node1.log, node1_debug.log, node1_gc.log
>
>
> Multiple failures:
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/auth_test/TestAuthRoles/udf_permissions_in_selection_test
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/auth_test/TestAuthRoles/create_and_grant_roles_with_superuser_status_test/
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/auth_test/TestAuthRoles/drop_keyspace_cleans_up_function_level_permissions_test/
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_reading_max_parse_errors/
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_read_wrong_column_names/
> http://cassci.datastax.com/job/trunk_offheap_dtest/264/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_boolstyle_round_trip/
> http://cassci.datastax.com/job/trunk_offheap_dtest/264/testReport/compaction_test/TestCompaction_with_SizeTieredCompactionStrategy/disable_autocompaction_alter_test/
> http://cassci.datastax.com/job/trunk_offheap_dtest/264/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_describe/
> http://cassci.datastax.com/job/trunk_offheap_dtest/264/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_describe_mv/
> Logs are from 
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/auth_test/TestAuthRoles/udf_permissions_in_selection_test/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12130) SASI related tests failing since CASSANDRA-11820

2016-07-18 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-12130:
--
Fix Version/s: (was: 3.9)
   3.8

> SASI related tests failing since CASSANDRA-11820
> 
>
> Key: CASSANDRA-12130
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12130
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
> Fix For: 3.8
>
>
> Since CASSANDRA-11820 was committed, a number of tests covering SASI have 
> been failing. In both {{SASIIndexTest}} and {{SSTableFlushObserverTest}}, 
> rows are built using an unsorted builder, which assumes that the columns are 
> added in clustering order. However, in both cases, this is not true and the 
> additional checks added to {{UnfilteredSerializer::serializeRowBody}} by 
> CASSANDRA-11820 now trigger assertion errors and, ultimately, failing tests. 
> In addition, {{SASIIndexTest}} reuses a single table in multiple tests and 
> performs its cleanup in the tear down method. When the assertion error is 
> triggered, the tear down is not run, leaving data in the table and causing 
> other failures in subsequent tests. 
> Patch to follow shortly...
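
As a self-contained illustration of the failure mode described above (plain Java,
not the Cassandra builder or serializer APIs; the class and method names are
invented for this sketch): the builder trusts the caller to add values in
clustering order, and a later check on the serialization path asserts that order
and fails when it was violated.

{code:java}
// Illustration only: an order-assuming builder plus a downstream ordering check.
import java.util.ArrayList;
import java.util.List;

public class OrderedBuilderExample
{
    private final List<Integer> clusterings = new ArrayList<>();

    // The builder itself does not sort; it trusts the caller to add in order.
    public void add(int clustering)
    {
        clusterings.add(clustering);
    }

    // Stand-in for the extra check on the serialization path: verify the assumed order.
    public List<Integer> build()
    {
        for (int i = 1; i < clusterings.size(); i++)
            assert clusterings.get(i - 1) <= clusterings.get(i) : "values added out of clustering order";
        return clusterings;
    }

    public static void main(String[] args)
    {
        OrderedBuilderExample builder = new OrderedBuilderExample();
        builder.add(2);
        builder.add(1);   // out of order, as in the failing tests
        builder.build();  // throws AssertionError when run with -ea
    }
}
{code}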



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12105) ThriftServer.stop is not thread safe

2016-07-18 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-12105:
--
Fix Version/s: (was: 3.9)
   3.8

> ThriftServer.stop is not thread safe
> 
>
> Key: CASSANDRA-12105
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12105
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Brian Wawok
>Assignee: Brian Wawok
>Priority: Minor
> Fix For: 2.2.8, 3.0.9, 3.8
>
> Attachments: patch1.txt, patch2.txt
>
>
> There is a small thread safety issue in ThriftServer.stop(). If we have 
> multiple calls to stop, one thread may NPE or otherwise do bad stuff.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12090) Digest mismatch if static column is NULL

2016-07-18 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-12090:
--
Fix Version/s: (was: 3.9)
   3.8

> Digest mismatch if static column is NULL
> 
>
> Key: CASSANDRA-12090
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12090
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
>Reporter: Tommy Stendahl
>Assignee: Tommy Stendahl
> Fix For: 3.0.9, 3.8
>
> Attachments: 12090.txt, trace.txt
>
>
> If a table has a static column and this column has a null value for a 
> partition, a SELECT on this partition will always trigger a digest mismatch, 
> but the following full data read will not trigger a read repair since there 
> is no mismatch in the data.
> This can be recreated using a 3 node ccm cluster with the following commands:
> {code:sql}
> CREATE KEYSPACE foo WITH replication = {'class': 'NetworkTopologyStrategy', 
> 'dc1': '3' };
> CREATE TABLE foo.foo ( key int, foo int, col int static, PRIMARY KEY (key, 
> foo) );
> CONSISTENCY QUORUM;
> INSERT INTO foo.foo (key, foo) VALUES ( 1,1);
> TRACING ON;
> SELECT * FROM foo.foo WHERE key = 1 and foo =1;
> {code}
> I have added the trace in an attachment. In the trace you can see that digest 
> read is performed and that there is a digest mismatch, but the full data read 
> does not result in a mismatch. Repeating the SELECT statement will give the 
> same trace over and over.
> The problem seems to be that the name of the static column is included when 
> the digest response is calculated even if the column has no value. When the 
> digest for the data response is calculated, the column name is not included.
> I think this can be solved by updating {{UnfilteredRowIterators.digest()}} so 
> that it excludes the static column if it has no value. I have a patch that 
> does this; it merges to both 3.0 and trunk.
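
To make the mismatch concrete, here is a self-contained sketch (plain Java, not
the Cassandra read path; the column and key names are taken from the repro
above): hashing the name of a valueless static column on one side but not the
other yields two different digests.

{code:java}
// Illustration only: one digest includes the static column's name despite it
// having no value, the other does not, so the two digests disagree.
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;

public class StaticColumnDigestExample
{
    static byte[] digest(boolean includeEmptyStaticColumnName) throws Exception
    {
        MessageDigest md = MessageDigest.getInstance("MD5");
        md.update("key=1".getBytes(StandardCharsets.UTF_8));     // partition key
        md.update("foo=1".getBytes(StandardCharsets.UTF_8));     // clustering + regular cell
        if (includeEmptyStaticColumnName)
            md.update("col".getBytes(StandardCharsets.UTF_8));   // static column name, no value
        return md.digest();
    }

    public static void main(String[] args) throws Exception
    {
        // Mimics the digest response vs. the full data response described above.
        System.out.println(Arrays.equals(digest(true), digest(false)));   // prints false -> "mismatch"
    }
}
{code}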



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12146) Use dedicated executor for sending JMX notifications

2016-07-18 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-12146:
--
Fix Version/s: (was: 3.9)
   3.8

> Use dedicated executor for sending JMX notifications
> 
>
> Key: CASSANDRA-12146
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12146
> Project: Cassandra
>  Issue Type: Bug
>  Components: Observability
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
> Fix For: 2.2.8, 3.0.9, 3.8
>
> Attachments: 12146-2.2.patch
>
>
> I'm currently looking into an issue with our repair process where we notice 
> a significant delay at the end of the repair task before nodetool actually 
> terminates. At the same time, JMX NOTIF_LOST errors are reported in nodetool 
> during most repair runs.
> Currently {{StorageService.repairAsync(keyspace, options)}} is called through 
> JMX, which will start a new thread executing RepairRunnable using the 
> provided options. StorageService itself extends 
> NotificationBroadcasterSupport and will send JMX progress notifications 
> emitted from RepairRunnable (or during bootstrap). If you take a closer look 
> at {{RepairRunnable}}, {{JMXProgressSupport}} and 
> {{StorageService/NotificationBroadcasterSupport.sendNotification}}, you'll 
> notice that this all happens within the calling thread, i.e. RepairRunnable. 
> Given the lost notifications and all kinds of potential networking-related 
> issues, I'm not really comfortable having the repair coordinator thread 
> running in the JMX stack. Fortunately, NotificationBroadcasterSupport accepts 
> a custom executor as a constructor argument. See the attached patch.
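
For reference, a minimal sketch of that constructor-based approach (the class
name and the single-thread executor sizing are assumptions for illustration,
not the attached patch):

{code:java}
// Hand NotificationBroadcasterSupport a dedicated executor so that
// sendNotification() dispatches listener callbacks on that executor instead of
// the calling thread (e.g. the repair coordinator running RepairRunnable).
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;
import javax.management.NotificationBroadcasterSupport;

public class DedicatedNotificationBroadcaster extends NotificationBroadcasterSupport
{
    // Assumed sizing: a single background thread for progress notifications.
    private static final Executor NOTIFICATION_EXECUTOR =
            Executors.newSingleThreadExecutor();

    public DedicatedNotificationBroadcaster()
    {
        // JDK constructor: notifications are delivered to listeners via the given executor.
        super(NOTIFICATION_EXECUTOR);
    }
}
{code}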



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12145) Cassandra Stress histogram log is empty if there's only a single operation

2016-07-18 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-12145:
--
Fix Version/s: (was: 3.9)
   3.8

> Cassandra Stress histogram log is empty if there's only a single operation
> --
>
> Key: CASSANDRA-12145
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12145
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Nitsan Wakart
>Assignee: Nitsan Wakart
>Priority: Minor
> Fix For: 3.8
>
>
> Bug fix is available here:
> https://github.com/nitsanw/cassandra/tree/hdr-logging-bugfix



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12107) Fix range scans for table with live static rows

2016-07-18 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-12107:
--
Fix Version/s: (was: 3.9)
   3.8

> Fix range scans for table with live static rows
> ---
>
> Key: CASSANDRA-12107
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12107
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Sharvanath Pathak
> Fix For: 3.0.9, 3.8
>
> Attachments: 12107-3.0.txt, repro
>
>
> We were seeing some weird behaviour with limit based scan queries. In 
> particular, we see the following:
> {noformat}
> $ cqlsh -k sd -e "consistency local_quorum; SELECT uuid, token(uuid) FROM 
> files WHERE token(uuid) >= token('6b470c3e43ee06d1') limit 2"
> Consistency level set to LOCAL_QUORUM.
>  uuid | system.token(uuid)
> --+--
>  6b470c3e43ee06d1 | -9218823070349964862
>  484b091ca97803cd | -8954822859271125729
> (2 rows)
> $ cqlsh -k sd -e "consistency local_quorum; SELECT uuid, token(uuid) FROM 
> files WHERE token(uuid) > token('6b470c3e43ee06d1') limit 1"
> Consistency level set to LOCAL_QUORUM.
>  uuid | system.token(uuid)
> --+--
>  c348aaec2f1e4b85 | -9218781105444826588
> {noformat}
> In the table uuid is partition key, and it has a clustering key as well.
> So the uuid "c348aaec2f1e4b85" should be the second one in the limit query. 
> After some investigation, it seems to me like the issue is in the way 
> DataLimits handles static rows. Here is a patch for trunk 
> (https://github.com/sharvanath/cassandra/commit/9a460d40e55bd7e3604d987ed4df5c8c2e03ffdc)
>  which seems to fix it for me. Please take a look; this seems like a pretty 
> critical issue to me.
> I have forked the dtests for it as well. However, since trunk has some 
> failures already, I'm not fully sure how to infer the results.
> http://cassci.datastax.com/view/Dev/view/sharvanath/job/sharvanath-fixScan-dtest/
> http://cassci.datastax.com/view/Dev/view/sharvanath/job/sharvanath-fixScan-testall/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12123) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_next_2_1_x_To_current_3_x.cql3_non_compound_range_tombstones_test

2016-07-18 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-12123:
--
Fix Version/s: (was: 3.9)
   3.8

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_next_2_1_x_To_current_3_x.cql3_non_compound_range_tombstones_test
> ---
>
> Key: CASSANDRA-12123
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12123
> Project: Cassandra
>  Issue Type: Test
>Reporter: Philip Thompson
>Assignee: Tyler Hobbs
>  Labels: dtest
> Fix For: 3.0.9, 3.8
>
>
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all-custom_branch_runs/37/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_next_2_1_x_To_current_3_x/cql3_non_compound_range_tombstones_test
> Failed on CassCI build upgrade_tests-all-custom_branch_runs #37
> Failing here:
> {code}
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 
> 1667, in cql3_non_compound_range_tombstones_test
> self.assertEqual(6, len(row), row)
> {code}
> As we can see, the row returns more data than expected. This implies that 
> the data isn't being properly shadowed by the tombstone. As such, I'm filing 
> this directly as a bug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12098) dtest failure in secondary_indexes_test.TestSecondaryIndexes.test_only_coordinator_chooses_index_for_query

2016-07-18 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-12098:
--
Fix Version/s: (was: 3.9)
   3.8

> dtest failure in 
> secondary_indexes_test.TestSecondaryIndexes.test_only_coordinator_chooses_index_for_query
> --
>
> Key: CASSANDRA-12098
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12098
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Sam Tunnicliffe
>  Labels: dtest
> Fix For: 3.0.9, 3.8
>
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/273/testReport/secondary_indexes_test/TestSecondaryIndexes/test_only_coordinator_chooses_index_for_query
> Failed on CassCI build trunk_offheap_dtest #273
> {code}
> Standard Output
> Unexpected error in node1 log, error: 
> ERROR [MessagingService-Incoming-/127.0.0.3] 2016-06-26 08:11:32,185 
> CassandraDaemon.java:219 - Exception in thread 
> Thread[MessagingService-Incoming-/127.0.0.3,5,main]
> java.lang.RuntimeException: Unknown column b during deserialization
>   at 
> org.apache.cassandra.db.Columns$Serializer.deserialize(Columns.java:433) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.SerializationHeader$Serializer.deserializeForMessaging(SerializationHeader.java:407)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.deserializeHeader(UnfilteredRowIteratorSerializer.java:192)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize30(PartitionUpdate.java:668)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.deserialize(PartitionUpdate.java:642)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:349)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:368)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:305)
>  ~[main/:na]
>   at org.apache.cassandra.net.MessageIn.read(MessageIn.java:114) 
> ~[main/:na]
>   at 
> org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:190)
>  ~[main/:na]
>   at 
> org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:178)
>  ~[main/:na]
>   at 
> org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:92)
>  ~[main/:na]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12147) Static thrift tables with non UTF8Type comparators can have column names converted incorrectly

2016-07-18 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-12147:
--
Fix Version/s: (was: 3.9)
   3.8

> Static thrift tables with non UTF8Type comparators can have column names 
> converted incorrectly
> --
>
> Key: CASSANDRA-12147
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12147
> Project: Cassandra
>  Issue Type: Bug
>  Components: Distributed Metadata
>Reporter: Aleksey Yeschenko
>Assignee: Aleksey Yeschenko
> Fix For: 3.0.9, 3.8
>
>
> {{CompactTables::columnDefinitionComparator()}} has been broken since 
> CASSANDRA-8099 for non-super columnfamilies, if the comparator is not 
> {{UTF8Type}}. This results in being unable to read some pre-existing 2.x data 
> post upgrade (it's not lost, but becomes inaccessible).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12191) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_x.cql3_non_compound_range_tombstones_test

2016-07-18 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-12191:
--
Fix Version/s: (was: 3.9)
   3.8

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_x.cql3_non_compound_range_tombstones_test
> 
>
> Key: CASSANDRA-12191
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12191
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>  Labels: dtest
> Fix For: 3.8
>
>
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_x/cql3_non_compound_range_tombstones_test
> Failed on CassCi build upgrade_tests-all #59
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 
> 1571, in cql3_non_compound_range_tombstones_test
> self.assertEqual(6, len(row), row)
> {code}
> Seems related to Cassandra-12123



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11988) NullPointerExpception when reading/compacting table

2016-07-18 Thread Carl Yeksigian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Yeksigian updated CASSANDRA-11988:
---
Reviewer: Sylvain Lebresne
  Status: Patch Available  (was: In Progress)

Not sure what happened, but in the time since I last ran this, the 
{{SSTableRewriterTest}} is happy on this branch.

[~slebresne]: can you take the review on this? This is a really simple fix, 
which I'm afraid is actually naive.

> NullPointerExpception when reading/compacting table
> ---
>
> Key: CASSANDRA-11988
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11988
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Nimi Wariboko Jr.
>Assignee: Carl Yeksigian
> Fix For: 3.x
>
>
> I have a table that suddenly refuses to be read or compacted. Issuing a read 
> on the table causes a NPE.
> On compaction, it returns the error
> {code}
> ERROR [CompactionExecutor:6] 2016-06-09 17:10:15,724 CassandraDaemon.java:213 
> - Exception in thread Thread[CompactionExecutor:6,1,main]
> java.lang.NullPointerException: null
>   at 
> org.apache.cassandra.db.transform.UnfilteredRows.isEmpty(UnfilteredRows.java:38)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:64)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:24)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:76)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:226)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:182)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:82)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:264)
>  ~[apache-cassandra-3.6.jar:3.6]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_45]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_45]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_45]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_45]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
> {code}
> Schema:
> {code}
> CREATE TABLE cmpayments.report_payments (
> reportid timeuuid,
> userid timeuuid,
> adjustedearnings decimal,
> deleted set static,
> earnings map,
> gross map,
> organizationid text,
> payall timestamp static,
> status text,
> PRIMARY KEY (reportid, userid)
> ) WITH CLUSTERING ORDER BY (userid ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12072) dtest failure in auth_test.TestAuthRoles.udf_permissions_in_selection_test

2016-07-18 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-12072:
--
Fix Version/s: (was: 3.10)

> dtest failure in auth_test.TestAuthRoles.udf_permissions_in_selection_test
> --
>
> Key: CASSANDRA-12072
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12072
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Joel Knighton
>  Labels: dtest
> Fix For: 3.9
>
> Attachments: node1.log, node1_debug.log, node1_gc.log
>
>
> Multiple failures:
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/auth_test/TestAuthRoles/udf_permissions_in_selection_test
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/auth_test/TestAuthRoles/create_and_grant_roles_with_superuser_status_test/
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/auth_test/TestAuthRoles/drop_keyspace_cleans_up_function_level_permissions_test/
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_reading_max_parse_errors/
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_read_wrong_column_names/
> http://cassci.datastax.com/job/trunk_offheap_dtest/264/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_boolstyle_round_trip/
> http://cassci.datastax.com/job/trunk_offheap_dtest/264/testReport/compaction_test/TestCompaction_with_SizeTieredCompactionStrategy/disable_autocompaction_alter_test/
> http://cassci.datastax.com/job/trunk_offheap_dtest/264/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_describe/
> http://cassci.datastax.com/job/trunk_offheap_dtest/264/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_describe_mv/
> Logs are from 
> http://cassci.datastax.com/job/trunk_offheap_dtest/265/testReport/auth_test/TestAuthRoles/udf_permissions_in_selection_test/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-4650) RangeStreamer should be smarter when picking endpoints for streaming in case of N >=3 in each DC.

2016-07-18 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382767#comment-15382767
 ] 

T Jake Luciani commented on CASSANDRA-4650:
---

I looked over the patch and I like the idea.  

My questions are:

* It wasn't clear what you are using to define capacity. Ideally it would be 
size per range, but we probably don't disseminate that.
* Can you kick off tests and dtests, or should I do it for you?


> RangeStreamer should be smarter when picking endpoints for streaming in case 
> of N >=3 in each DC.  
> ---
>
> Key: CASSANDRA-4650
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4650
> Project: Cassandra
>  Issue Type: Improvement
>Affects Versions: 1.1.5
>Reporter: sankalp kohli
>Assignee: sankalp kohli
>Priority: Minor
>  Labels: streaming
> Attachments: CASSANDRA-4650_trunk.txt, photo-1.JPG
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> getRangeFetchMap method in RangeStreamer should pick unique nodes to stream 
> data from when number of replicas in each DC is three or more. 
> When N>=3 in a DC, there are two options for streaming a range. Consider an 
> example of 4 nodes in one datacenter and replication factor of 3. 
> If a node goes down, it needs to recover 3 ranges of data. With the current 
> code, as few as two nodes could get selected, since it orders the nodes by 
> proximity. Ideally we want to select 3 nodes for streaming the data. We can 
> do this by selecting unique nodes for each range.
> Advantages:
> This will increase the performance of bootstrapping a node and will also put 
> less pressure on nodes serving the data. 
> Note: This does not apply if N < 3 in each DC, as then it streams data from 
> only 2 nodes.
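
A minimal sketch of the selection idea (simplified types and illustrative
names, not the RangeStreamer code): prefer an endpoint that has not yet been
picked for another range, and fall back to the closest candidate only when
every endpoint is already in use.

{code:java}
// Illustrative only: greedy selection of streaming sources, one per range.
// Candidate lists are assumed non-empty and sorted by proximity (closest first).
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class UniqueEndpointExample
{
    public static Map<String, String> pickSources(Map<String, List<String>> candidatesByRange)
    {
        Map<String, String> chosen = new LinkedHashMap<>();
        Set<String> used = new HashSet<>();
        for (Map.Entry<String, List<String>> entry : candidatesByRange.entrySet())
        {
            List<String> candidates = entry.getValue();            // closest first
            String source = candidates.stream()
                                      .filter(c -> !used.contains(c))
                                      .findFirst()
                                      .orElse(candidates.get(0));  // all in use: take the closest
            used.add(source);
            chosen.put(entry.getKey(), source);
        }
        return chosen;
    }
}
{code}

With 4 nodes and RF=3 as in the example above, the three ranges a recovering
node needs would then come from three distinct peers instead of two.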



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12225) dtest failure in materialized_views_test.TestMaterializedViews.clustering_column_test

2016-07-18 Thread Jim Witschey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Witschey updated CASSANDRA-12225:
-
Fix Version/s: (was: 3.9)

> dtest failure in 
> materialized_views_test.TestMaterializedViews.clustering_column_test
> -
>
> Key: CASSANDRA-12225
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12225
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/336/testReport/materialized_views_test/TestMaterializedViews/clustering_column_test
> Failed on CassCI build trunk_offheap_dtest #336
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/materialized_views_test.py", line 
> 321, in clustering_column_test
> self.assertEqual(len(result), 2, "Expecting {} users, got {}".format(2, 
> len(result)))
>   File "/usr/lib/python2.7/unittest/case.py", line 513, in assertEqual
> assertion_func(first, second, msg=msg)
>   File "/usr/lib/python2.7/unittest/case.py", line 506, in _baseAssertEqual
> raise self.failureException(msg)
> "Expecting 2 users, got 1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12180) Should be able to override compaction space check

2016-07-18 Thread Carl Yeksigian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Yeksigian updated CASSANDRA-12180:
---
Status: Open  (was: Patch Available)

> Should be able to override compaction space check
> -
>
> Key: CASSANDRA-12180
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12180
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: sankalp kohli
>Priority: Trivial
> Attachments: CASSANDRA-12180_3.0.txt
>
>
> If there's not enough space for a compaction, it won't run and will print the 
> exception below. Sometimes we know a compaction will free up a lot of space, 
> since an ETL job could have inserted a lot of deletes. This override helps in 
> that case.
> ERROR [CompactionExecutor:17] CassandraDaemon.java (line 258) Exception in 
> thread Thread
> [CompactionExecutor:17,1,main]
> java.lang.RuntimeException: Not enough space for compaction, estimated 
> sstables = 1552, expected
> write size = 260540558535
> at org.apache.cassandra.db.compaction.CompactionTask.checkAvailableDiskSpace
> (CompactionTask.java:306)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.
> java:106)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.
> java:60)
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.
> java:59)
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run
> (CompactionManager.java:198)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
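
A minimal sketch of the override idea (the flag name mirrors the
{{compactionDiskSpaceCheck}} option discussed on this ticket, but the class and
method here are illustrative, not the attached patch):

{code:java}
// When the operator knows a compaction will reclaim space (e.g. after an ETL
// job wrote many deletes), allow the free-space check to be bypassed instead of
// aborting the compaction with "Not enough space for compaction".
public class CompactionSpaceCheckExample
{
    private static volatile boolean compactionDiskSpaceCheck = true;   // assumed operator-settable flag

    public static void setCompactionDiskSpaceCheck(boolean enabled)
    {
        compactionDiskSpaceCheck = enabled;
    }

    static void checkAvailableDiskSpace(long expectedWriteSize, long availableBytes)
    {
        if (!compactionDiskSpaceCheck)
            return;                              // override: proceed regardless of the estimate
        if (expectedWriteSize > availableBytes)
            throw new RuntimeException("Not enough space for compaction, expected write size = " + expectedWriteSize);
    }
}
{code}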



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12180) Should be able to override compaction space check

2016-07-18 Thread Carl Yeksigian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382692#comment-15382692
 ] 

Carl Yeksigian commented on CASSANDRA-12180:


[~kohlisankalp] What is the purpose of the changes to the min free space? It 
seems like we only need the changes that add support for 
{{compactionDiskSpaceCheck}}.

> Should be able to override compaction space check
> -
>
> Key: CASSANDRA-12180
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12180
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: sankalp kohli
>Priority: Trivial
> Attachments: CASSANDRA-12180_3.0.txt
>
>
> If there's not enough space for a compaction, it won't run and will print the 
> exception below. Sometimes we know a compaction will free up a lot of space, 
> since an ETL job could have inserted a lot of deletes. This override helps in 
> that case.
> ERROR [CompactionExecutor:17] CassandraDaemon.java (line 258) Exception in 
> thread Thread
> [CompactionExecutor:17,1,main]
> java.lang.RuntimeException: Not enough space for compaction, estimated 
> sstables = 1552, expected
> write size = 260540558535
> at org.apache.cassandra.db.compaction.CompactionTask.checkAvailableDiskSpace
> (CompactionTask.java:306)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.
> java:106)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.
> java:60)
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.
> java:59)
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run
> (CompactionManager.java:198)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12124) dtest failure in upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_next_2_1_x_To_head_trunk.select_with_alias_test

2016-07-18 Thread Jim Witschey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Witschey updated CASSANDRA-12124:
-
Fix Version/s: (was: 3.9)

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes2RF1_Upgrade_next_2_1_x_To_head_trunk.select_with_alias_test
> -
>
> Key: CASSANDRA-12124
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12124
> Project: Cassandra
>  Issue Type: Test
>Reporter: Philip Thompson
>Assignee: DS Test Eng
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all-custom_branch_runs/37/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_next_2_1_x_To_head_trunk/select_with_alias_test
> Failed on CassCI build upgrade_tests-all-custom_branch_runs #37
> This is just a problem with different error messages across C* versions. 
> Someone needs to do the legwork of figuring out what is required where, and 
> filtering. The query is failing correctly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12164) dtest failure in materialized_views_test.TestMaterializedViews.add_dc_after_mv_network_replication_test

2016-07-18 Thread Jim Witschey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Witschey updated CASSANDRA-12164:
-
Fix Version/s: (was: 3.9)

> dtest failure in 
> materialized_views_test.TestMaterializedViews.add_dc_after_mv_network_replication_test
> ---
>
> Key: CASSANDRA-12164
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12164
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log, 
> node4.log, node4_debug.log, node4_gc.log, node5.log, node5_debug.log, 
> node5_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/309/testReport/materialized_views_test/TestMaterializedViews/add_dc_after_mv_network_replication_test
> Failed on CassCI build trunk_offheap_dtest #309
> {code}
> Standard Output
> Unexpected error in node4 log, error: 
> ERROR [main] 2016-07-06 19:21:26,631 MigrationManager.java:164 - Migration 
> task failed to complete
> {code}
> Related failure:
> http://cassci.datastax.com/job/trunk_novnode_dtest/423/testReport/materialized_views_test/TestMaterializedViews/add_node_after_mv_test/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12192) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_head_trunk.map_keys_indexing_test

2016-07-18 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-12192:

Issue Type: Bug  (was: Test)

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_head_trunk.map_keys_indexing_test
> 
>
> Key: CASSANDRA-12192
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12192
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_3_0_x_To_head_trunk/map_keys_indexing_test
> Failed on CassCI build upgrade_tests-all #59
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/tools.py", line 290, in wrapped
> f(obj)
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 
> 3668, in map_keys_indexing_test
> cursor.execute("TRUNCATE test")
>   File "cassandra/cluster.py", line 1941, in 
> cassandra.cluster.Session.execute (cassandra/cluster.c:33642)
> return self.execute_async(query, parameters, trace, custom_payload, 
> timeout, execution_profile).result()
>   File "cassandra/cluster.py", line 3629, in 
> cassandra.cluster.ResponseFuture.result (cassandra/cluster.c:69369)
> raise self._final_exception
> '
> {code}
> Related failure: 
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_3_0_x_To_head_trunk/map_keys_indexing_test/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12192) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_head_trunk.map_keys_indexing_test

2016-07-18 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382687#comment-15382687
 ] 

Philip Thompson commented on CASSANDRA-12192:
-

I see node1 and node2 log "blocking truncate operation", but not node3. I do 
see node3 log "discard sstable data for truncate", though, so I'm not really 
sure what is supposed to be happening on trunk here, and I think I need a dev 
to take a look.

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_head_trunk.map_keys_indexing_test
> 
>
> Key: CASSANDRA-12192
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12192
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_3_0_x_To_head_trunk/map_keys_indexing_test
> Failed on CassCI build upgrade_tests-all #59
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/tools.py", line 290, in wrapped
> f(obj)
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 
> 3668, in map_keys_indexing_test
> cursor.execute("TRUNCATE test")
>   File "cassandra/cluster.py", line 1941, in 
> cassandra.cluster.Session.execute (cassandra/cluster.c:33642)
> return self.execute_async(query, parameters, trace, custom_payload, 
> timeout, execution_profile).result()
>   File "cassandra/cluster.py", line 3629, in 
> cassandra.cluster.ResponseFuture.result (cassandra/cluster.c:69369)
> raise self._final_exception
> '
> {code}
> Related failure: 
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_3_0_x_To_head_trunk/map_keys_indexing_test/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12192) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_head_trunk.map_keys_indexing_test

2016-07-18 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-12192:

Assignee: (was: DS Test Eng)

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_head_trunk.map_keys_indexing_test
> 
>
> Key: CASSANDRA-12192
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12192
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_3_0_x_To_head_trunk/map_keys_indexing_test
> Failed on CassCI build upgrade_tests-all #59
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/tools.py", line 290, in wrapped
> f(obj)
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 
> 3668, in map_keys_indexing_test
> cursor.execute("TRUNCATE test")
>   File "cassandra/cluster.py", line 1941, in 
> cassandra.cluster.Session.execute (cassandra/cluster.c:33642)
> return self.execute_async(query, parameters, trace, custom_payload, 
> timeout, execution_profile).result()
>   File "cassandra/cluster.py", line 3629, in 
> cassandra.cluster.ResponseFuture.result (cassandra/cluster.c:69369)
> raise self._final_exception
> '
> {code}
> Related failure: 
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_3_0_x_To_head_trunk/map_keys_indexing_test/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12225) dtest failure in materialized_views_test.TestMaterializedViews.clustering_column_test

2016-07-18 Thread Jim Witschey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Witschey updated CASSANDRA-12225:
-
Fix Version/s: 3.9

> dtest failure in 
> materialized_views_test.TestMaterializedViews.clustering_column_test
> -
>
> Key: CASSANDRA-12225
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12225
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
> Fix For: 3.9
>
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/336/testReport/materialized_views_test/TestMaterializedViews/clustering_column_test
> Failed on CassCI build trunk_offheap_dtest #336
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/materialized_views_test.py", line 
> 321, in clustering_column_test
> self.assertEqual(len(result), 2, "Expecting {} users, got {}".format(2, 
> len(result)))
>   File "/usr/lib/python2.7/unittest/case.py", line 513, in assertEqual
> assertion_func(first, second, msg=msg)
>   File "/usr/lib/python2.7/unittest/case.py", line 506, in _baseAssertEqual
> raise self.failureException(msg)
> "Expecting 2 users, got 1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11902) dtest failure in hintedhandoff_test.TestHintedHandoffConfig.hintedhandoff_dc_reenabled_test

2016-07-18 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-11902:

Assignee: (was: DS Test Eng)

> dtest failure in 
> hintedhandoff_test.TestHintedHandoffConfig.hintedhandoff_dc_reenabled_test
> ---
>
> Key: CASSANDRA-11902
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11902
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Philip Thompson
>  Labels: dtest
> Fix For: 3.9
>
> Attachments: node1.log, node1_debug.log, node2.log, node2_debug.log
>
>
> Failure occurred on trunk here:
> http://cassci.datastax.com/job/trunk_dtest/1239/testReport/hintedhandoff_test/TestHintedHandoffConfig/hintedhandoff_dc_reenabled_test/
> Logs are attached
> We re-enable HH on a DC, but we aren't seeing hints move in the logs, so this 
> does worry me a bit. I'm not sure quite how flaky it is. It's only failed 
> once.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11902) dtest failure in hintedhandoff_test.TestHintedHandoffConfig.hintedhandoff_dc_reenabled_test

2016-07-18 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-11902:

Issue Type: Bug  (was: Test)

> dtest failure in 
> hintedhandoff_test.TestHintedHandoffConfig.hintedhandoff_dc_reenabled_test
> ---
>
> Key: CASSANDRA-11902
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11902
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Philip Thompson
>Assignee: DS Test Eng
>  Labels: dtest
> Fix For: 3.9
>
> Attachments: node1.log, node1_debug.log, node2.log, node2_debug.log
>
>
> Failure occurred on trunk here:
> http://cassci.datastax.com/job/trunk_dtest/1239/testReport/hintedhandoff_test/TestHintedHandoffConfig/hintedhandoff_dc_reenabled_test/
> Logs are attached
> We re-enable HH on a DC, but we aren't seeing hints move in the logs, so this 
> does worry me a bit. I'm not sure quite how flaky it is. It's only failed 
> once.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12192) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_head_trunk.map_keys_indexing_test

2016-07-18 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382675#comment-15382675
 ] 

Philip Thompson commented on CASSANDRA-12192:
-

Okay, so then disregard my copy from the logs. This error is already ignored, 
meaning if it isn't causing the truncate to fail, there is a different problem.

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_head_trunk.map_keys_indexing_test
> 
>
> Key: CASSANDRA-12192
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12192
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_3_0_x_To_head_trunk/map_keys_indexing_test
> Failed on CassCI build upgrade_tests-all #59
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/tools.py", line 290, in wrapped
> f(obj)
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 
> 3668, in map_keys_indexing_test
> cursor.execute("TRUNCATE test")
>   File "cassandra/cluster.py", line 1941, in 
> cassandra.cluster.Session.execute (cassandra/cluster.c:33642)
> return self.execute_async(query, parameters, trace, custom_payload, 
> timeout, execution_profile).result()
>   File "cassandra/cluster.py", line 3629, in 
> cassandra.cluster.ResponseFuture.result (cassandra/cluster.c:69369)
> raise self._final_exception
> '
> {code}
> Related failure: 
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_3_0_x_To_head_trunk/map_keys_indexing_test/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12164) dtest failure in materialized_views_test.TestMaterializedViews.add_dc_after_mv_network_replication_test

2016-07-18 Thread Jim Witschey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Witschey updated CASSANDRA-12164:
-
Fix Version/s: 3.9

> dtest failure in 
> materialized_views_test.TestMaterializedViews.add_dc_after_mv_network_replication_test
> ---
>
> Key: CASSANDRA-12164
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12164
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
> Fix For: 3.9
>
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log, 
> node4.log, node4_debug.log, node4_gc.log, node5.log, node5_debug.log, 
> node5_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/309/testReport/materialized_views_test/TestMaterializedViews/add_dc_after_mv_network_replication_test
> Failed on CassCI build trunk_offheap_dtest #309
> {code}
> Standard Output
> Unexpected error in node4 log, error: 
> ERROR [main] 2016-07-06 19:21:26,631 MigrationManager.java:164 - Migration 
> task failed to complete
> {code}
> Related failure:
> http://cassci.datastax.com/job/trunk_novnode_dtest/423/testReport/materialized_views_test/TestMaterializedViews/add_node_after_mv_test/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11902) dtest failure in hintedhandoff_test.TestHintedHandoffConfig.hintedhandoff_dc_reenabled_test

2016-07-18 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382672#comment-15382672
 ] 

Jim Witschey commented on CASSANDRA-11902:
--

I think I'd rather have a dev have a look.

> dtest failure in 
> hintedhandoff_test.TestHintedHandoffConfig.hintedhandoff_dc_reenabled_test
> ---
>
> Key: CASSANDRA-11902
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11902
> Project: Cassandra
>  Issue Type: Test
>Reporter: Philip Thompson
>Assignee: DS Test Eng
>  Labels: dtest
> Fix For: 3.9
>
> Attachments: node1.log, node1_debug.log, node2.log, node2_debug.log
>
>
> Failure occurred on trunk here:
> http://cassci.datastax.com/job/trunk_dtest/1239/testReport/hintedhandoff_test/TestHintedHandoffConfig/hintedhandoff_dc_reenabled_test/
> Logs are attached
> We re-enable HH on a DC, but we aren't seeing hints move in the logs, so this 
> does worry me a bit. I'm not sure quite how flaky it is. It's only failed 
> once.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9054) Break DatabaseDescriptor up into multiple classes.

2016-07-18 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382669#comment-15382669
 ] 

Yuki Morishita commented on CASSANDRA-9054:
---

[~bdeggleston] I'm not really looking into breaking up/tearing down DD for now. 
If the broken-up pieces remain static global classes, I consider them still not 
good enough. I'm trying to minimize access to DD by actually passing config 
values rather than pulling them from global static ones.

> Break DatabaseDescriptor up into multiple classes.
> --
>
> Key: CASSANDRA-9054
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9054
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jeremiah Jordan
>Assignee: Robert Stupp
> Fix For: 3.x
>
>
> Right now to get at Config stuff you go through DatabaseDescriptor.  But when 
> you instantiate DatabaseDescriptor it actually opens system tables and such, 
> which triggers commit log replays, and other things if the right flags aren't 
> set ahead of time.  This makes getting at config stuff from tools annoying, 
> as you have to be very careful about instantiation orders.
> It would be nice if we could break DatabaseDescriptor up into multiple 
> classes, so that getting at config stuff from tools wasn't such a pain.
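
A small sketch of the direction described in the comment above (illustrative
names and values, not Cassandra code): hand a component the config values it
needs instead of letting it pull them from a global static holder.

{code:java}
// Illustration only: explicit config passing vs. a hidden global dependency.
public class ConfigPassingExample
{
    // Before: the component pulls from a global static holder, so constructing
    // it from a tool drags in whatever initialization that holder requires.
    static class GlobalConfig
    {
        static long commitLogSegmentSizeInMB = 32;   // hypothetical setting
    }

    static class CommitLogBefore
    {
        long segmentSize() { return GlobalConfig.commitLogSegmentSizeInMB; } // hidden global dependency
    }

    // After: the value is handed in explicitly, so tools can construct the
    // component without triggering the rest of the global initialization.
    static class CommitLogAfter
    {
        private final long segmentSizeInMB;
        CommitLogAfter(long segmentSizeInMB) { this.segmentSizeInMB = segmentSizeInMB; }
        long segmentSize() { return segmentSizeInMB; }
    }

    public static void main(String[] args)
    {
        System.out.println(new CommitLogBefore().segmentSize());
        System.out.println(new CommitLogAfter(64).segmentSize());
    }
}
{code}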



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12192) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_head_trunk.map_keys_indexing_test

2016-07-18 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-12192:

Assignee: DS Test Eng

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_head_trunk.map_keys_indexing_test
> 
>
> Key: CASSANDRA-12192
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12192
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_3_0_x_To_head_trunk/map_keys_indexing_test
> Failed on CassCI build upgrade_tests-all #59
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/tools.py", line 290, in wrapped
> f(obj)
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 
> 3668, in map_keys_indexing_test
> cursor.execute("TRUNCATE test")
>   File "cassandra/cluster.py", line 1941, in 
> cassandra.cluster.Session.execute (cassandra/cluster.c:33642)
> return self.execute_async(query, parameters, trace, custom_payload, 
> timeout, execution_profile).result()
>   File "cassandra/cluster.py", line 3629, in 
> cassandra.cluster.ResponseFuture.result (cassandra/cluster.c:69369)
> raise self._final_exception
> '
> {code}
> Related failure: 
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_3_0_x_To_head_trunk/map_keys_indexing_test/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12192) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_head_trunk.map_keys_indexing_test

2016-07-18 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-12192:

Issue Type: Test  (was: Bug)

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_head_trunk.map_keys_indexing_test
> 
>
> Key: CASSANDRA-12192
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12192
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_3_0_x_To_head_trunk/map_keys_indexing_test
> Failed on CassCI build upgrade_tests-all #59
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/tools.py", line 290, in wrapped
> f(obj)
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 
> 3668, in map_keys_indexing_test
> cursor.execute("TRUNCATE test")
>   File "cassandra/cluster.py", line 1941, in 
> cassandra.cluster.Session.execute (cassandra/cluster.c:33642)
> return self.execute_async(query, parameters, trace, custom_payload, 
> timeout, execution_profile).result()
>   File "cassandra/cluster.py", line 3629, in 
> cassandra.cluster.ResponseFuture.result (cassandra/cluster.c:69369)
> raise self._final_exception
> '
> {code}
> Related failure: 
> http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_3_0_x_To_head_trunk/map_keys_indexing_test/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12225) dtest failure in materialized_views_test.TestMaterializedViews.clustering_column_test

2016-07-18 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382665#comment-15382665
 ] 

Philip Thompson commented on CASSANDRA-12225:
-

We're inserting and then reading at QUORUM. When you say the test is flakey, 
are you blaming the test or C*? I'm not sure what changes the test could need.

> dtest failure in 
> materialized_views_test.TestMaterializedViews.clustering_column_test
> -
>
> Key: CASSANDRA-12225
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12225
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: DS Test Eng
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/336/testReport/materialized_views_test/TestMaterializedViews/clustering_column_test
> Failed on CassCI build trunk_offheap_dtest #336
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/materialized_views_test.py", line 
> 321, in clustering_column_test
> self.assertEqual(len(result), 2, "Expecting {} users, got {}".format(2, 
> len(result)))
>   File "/usr/lib/python2.7/unittest/case.py", line 513, in assertEqual
> assertion_func(first, second, msg=msg)
>   File "/usr/lib/python2.7/unittest/case.py", line 506, in _baseAssertEqual
> raise self.failureException(msg)
> "Expecting 2 users, got 1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11998) dtest failure in offline_tools_test.TestOfflineTools.sstableofflinerelevel_test

2016-07-18 Thread Carl Yeksigian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382664#comment-15382664
 ] 

Carl Yeksigian commented on CASSANDRA-11998:


Opened [dtest pr|https://github.com/riptano/cassandra-dtest/pull/1105], CI 
looks clean.

> dtest failure in 
> offline_tools_test.TestOfflineTools.sstableofflinerelevel_test
> ---
>
> Key: CASSANDRA-11998
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11998
> Project: Cassandra
>  Issue Type: Test
>Reporter: Craig Kodman
>Assignee: Carl Yeksigian
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.2_dtest/635/testReport/offline_tools_test/TestOfflineTools/sstableofflinerelevel_test
> Failed on CassCI build cassandra-2.2_dtest #635



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

