[jira] [Commented] (CASSANDRA-9661) Endless compaction to a tiny, tombstoned SStable

2015-07-10 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14621818#comment-14621818
 ] 

Jeff Jirsa commented on CASSANDRA-9661:
---

[~noel2004] - can you confirm this was on 2.1.5? 


 Endless compaction to a tiny, tombstoned SStable
 

 Key: CASSANDRA-9661
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9661
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: WeiFan
Assignee: Yuki Morishita
  Labels: compaction, dtcs

 We deployed a 3-node cluster (with 2.1.5) which worked under stable write 
 requests (about 2k wps) to a CF with DTCS, a default TTL of 43200s and 
 gc_grace of 21600s. The CF contained insert-only, complete time series 
 data. We found Cassandra would occasionally keep writing logs like this:
 INFO  [CompactionExecutor:30551] 2015-06-26 18:10:06,195 
 CompactionTask.java:270 - Compacted 1 sstables to 
 [/home/cassandra/workdata/data/sen_vaas_test/nodestatus-f96c7c50155811e589f69752ac9b06c7/sen_vaas_test-nodestatus-ka-2516270,].
   449 bytes to 449 (~100% of original) in 12ms = 0.035683MB/s.  4 total 
 partitions merged to 4.  Partition merge counts were {1:4, }
 INFO  [CompactionExecutor:30551] 2015-06-26 18:10:06,241 
 CompactionTask.java:140 - Compacting 
 [SSTableReader(path='/home/cassandra/workdata/data/sen_vaas_test/nodestatus-f96c7c50155811e589f69752ac9b06c7/sen_vaas_test-nodestatus-ka-2516270-Data.db')]
 INFO  [CompactionExecutor:30551] 2015-06-26 18:10:06,253 
 CompactionTask.java:270 - Compacted 1 sstables to 
 [/home/cassandra/workdata/data/sen_vaas_test/nodestatus-f96c7c50155811e589f69752ac9b06c7/sen_vaas_test-nodestatus-ka-2516271,].
   449 bytes to 449 (~100% of original) in 12ms = 0.035683MB/s.  4 total 
 partitions merged to 4.  Partition merge counts were {1:4, }
 It seems that cassandra kept compacting a single SSTable, several 
 times per second, and this lasted for many hours. Tons of logs were thrown and one 
 CPU core was exhausted during this time. The endless compacting finally ended when 
 another compaction started with a group of SSTables (including the previous one). 
 All of our 3 nodes have been hit by this problem, but at different 
 times.
 We could not figure out how the problematic SSTable came up because the log 
 had wrapped around. 
 We have dumped the records in the SSTable and found it has the oldest data in 
 our CF (again, our data was time series), and all of the records in this 
 SSTable have been expired for more than 18 hours (12 hrs TTL + 6 hrs gc) so 
 they should be dropped. However, C* does nothing to this SSTable but compact 
 it again and again, until more SSTables were outdated enough to be considered 
 for compaction together with this one by DTCS.
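
 The arithmetic behind the report can be sketched as follows. This is an illustrative check only (not Cassandra's actual DTCS code; the class and method names are hypothetical), using the ticket's numbers: cells written with the table's 43200s default TTL become droppable tombstones only after a further 21600s of gc_grace, i.e. 18 hours after write time.

 {code:title=ExpiredSSTableCheck.java}
 public class ExpiredSSTableCheck
 {
     static final long TTL_SECONDS = 43200;      // table default TTL (12h)
     static final long GC_GRACE_SECONDS = 21600; // table gc_grace (6h)

     /** True when every cell in the sstable is past TTL + gc_grace,
      *  i.e. the whole sstable should be droppable outright. */
     static boolean fullyDroppable(long maxCellTimestampSeconds, long nowSeconds)
     {
         return nowSeconds > maxCellTimestampSeconds + TTL_SECONDS + GC_GRACE_SECONDS;
     }

     public static void main(String[] args)
     {
         long now = 100_000_000L;
         // Newest cell written 19 hours ago: past the 18h window, droppable.
         System.out.println(fullyDroppable(now - 19 * 3600, now)); // true
         // Newest cell written 17 hours ago: still inside gc_grace.
         System.out.println(fullyDroppable(now - 17 * 3600, now)); // false
     }
 }
 {code}

 By this reckoning the dumped sstable was well past the point where it should have been dropped rather than recompacted alone.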



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9765) checkForEndpointCollision fails for legitimate collisions

2015-07-10 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14621826#comment-14621826
 ] 

Stefania commented on CASSANDRA-9765:
-

The reason {{!isFatClient()}} is false and doesn't trigger the exception is 
that the state is SHUTDOWN rather than one of the DEAD states (REMOVING, 
REMOVED, LEFT and HIBERNATE). It seems we cannot determine whether a SHUTDOWN 
node was a fat client, and therefore isFatClient() should not check whether shutdown 
nodes are members, since they aren't. For fear of breaking stuff I added a new 
method that excludes SHUTDOWN nodes and called it isLiveFatClient().

I've attached a patch for 2.0 and a dtest that clearly reproduces the problem 
with the existing 2.0 code.

[~jbellis], please confirm you are happy with the fix in 2.0; we can always 
roll back CASSANDRA-7939 and only fix this in later revisions. We also need a 
reviewer. 
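
The shape of the fix can be sketched like this. Illustrative only: the real Gossiper/EndpointState types are collapsed into a toy enum and a boolean, and the method bodies are assumptions based on the description above, not the actual patch.

{code:title=FatClientCheck.java}
public class FatClientCheck
{
    enum Status { NORMAL, SHUTDOWN, REMOVING, REMOVED, LEFT, HIBERNATE }

    /** Old check: any non-member endpoint counts as a fat client,
     *  so a wiped, SHUTDOWN node masks a legitimate collision. */
    static boolean isFatClient(Status status, boolean member)
    {
        return !member;
    }

    /** New check: SHUTDOWN nodes are excluded, since we cannot tell
     *  whether they were fat clients. */
    static boolean isLiveFatClient(Status status, boolean member)
    {
        return status != Status.SHUTDOWN && !member;
    }

    public static void main(String[] args)
    {
        // With the old check, !isFatClient() is false for a SHUTDOWN
        // non-member and the collision exception never fires.
        System.out.println(isFatClient(Status.SHUTDOWN, false));     // true
        // With the new check, !isLiveFatClient() is true and
        // checkForEndpointCollision can throw as intended.
        System.out.println(isLiveFatClient(Status.SHUTDOWN, false)); // false
    }
}
{code}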

 checkForEndpointCollision fails for legitimate collisions
 -

 Key: CASSANDRA-9765
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9765
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Richard Low
Assignee: Stefania
 Fix For: 2.0.17


 Since CASSANDRA-7939, checkForEndpointCollision no longer catches a 
 legitimate collision. Without CASSANDRA-7939, wiping a node and starting it 
 again fails with 'A node with address %s already exists', but with it the 
 node happily enters joining state, potentially streaming from the wrong place 
 and violating consistency.





[jira] [Updated] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading

2015-07-10 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-9686:
---
Fix Version/s: 2.1.x

 FSReadError and LEAK DETECTED after upgrading
 -

 Key: CASSANDRA-9686
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9686
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows-7-32 bit, 3.2GB RAM, Java 1.7.0_55
Reporter: Andreas Schnitzerling
Assignee: Stefania
 Fix For: 2.1.x, 2.2.x

 Attachments: cassandra.bat, cassandra.yaml, 
 compactions_in_progress.zip, sstable_activity.zip, system.log


 After upgrading one of 15 nodes from 2.1.7 to 2.2.0-rc1 I get FSReadError and 
 LEAK DETECTED on start. Deleting the listed files, the failure goes away.
 {code:title=system.log}
 ERROR [SSTableBatchOpen:1] 2015-06-29 14:38:34,554 
 DebuggableThreadPoolExecutor.java:242 - Error in ThreadPoolExecutor
 org.apache.cassandra.io.FSReadError: java.io.IOException: Compressed file 
 with 0 chunks encountered: java.io.DataInputStream@1c42271
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:178)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.init(CompressionMetadata.java:117)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:86)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:142)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:101)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:178)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:681)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:644)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:443)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:350)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.sstable.format.SSTableReader$4.run(SSTableReader.java:480)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) 
 ~[na:1.7.0_55]
   at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0_55]
   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
 [na:1.7.0_55]
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
 [na:1.7.0_55]
   at java.lang.Thread.run(Unknown Source) [na:1.7.0_55]
 Caused by: java.io.IOException: Compressed file with 0 chunks encountered: 
 java.io.DataInputStream@1c42271
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:174)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   ... 15 common frames omitted
 ERROR [Reference-Reaper:1] 2015-06-29 14:38:34,734 Ref.java:189 - LEAK 
 DETECTED: a reference 
 (org.apache.cassandra.utils.concurrent.Ref$State@3e547f) to class 
 org.apache.cassandra.io.sstable.format.SSTableReader$InstanceTidier@1926439:D:\Programme\Cassandra\data\data\system\compactions_in_progress\system-compactions_in_progress-ka-6866
  was not released before the reference was garbage collected
 {code}





[jira] [Commented] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading

2015-07-10 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14621869#comment-14621869
 ] 

Marcus Eriksson commented on CASSANDRA-9686:


this looks good to me, +1

we should push this to 2.1 as well I think, could you backport (and squash etc)?

 FSReadError and LEAK DETECTED after upgrading
 -

 Key: CASSANDRA-9686
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9686
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows-7-32 bit, 3.2GB RAM, Java 1.7.0_55
Reporter: Andreas Schnitzerling
Assignee: Stefania
 Fix For: 2.2.x

 Attachments: cassandra.bat, cassandra.yaml, 
 compactions_in_progress.zip, sstable_activity.zip, system.log


 After upgrading one of 15 nodes from 2.1.7 to 2.2.0-rc1 I get FSReadError and 
 LEAK DETECTED on start. Deleting the listed files, the failure goes away.
 {code:title=system.log}
 (stack trace identical to the one quoted in the issue description above)
 {code}





[jira] [Commented] (CASSANDRA-9765) checkForEndpointCollision fails for legitimate collisions

2015-07-10 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14621927#comment-14621927
 ] 

Stefania commented on CASSANDRA-9765:
-

CI results:

http://cassci.datastax.com/job/stef1927-9765-2.0-testall/lastSuccessfulBuild/
http://cassci.datastax.com/job/stef1927-9765-2.0-dtest/lastSuccessfulBuild/

 checkForEndpointCollision fails for legitimate collisions
 -

 Key: CASSANDRA-9765
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9765
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Richard Low
Assignee: Stefania
 Fix For: 2.0.17


 Since CASSANDRA-7939, checkForEndpointCollision no longer catches a 
 legitimate collision. Without CASSANDRA-7939, wiping a node and starting it 
 again fails with 'A node with address %s already exists', but with it the 
 node happily enters joining state, potentially streaming from the wrong place 
 and violating consistency.





[jira] [Commented] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading

2015-07-10 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14621937#comment-14621937
 ] 

Stefania commented on CASSANDRA-9686:
-

It's [done|https://github.com/stef1927/cassandra/commits/9686-2.1].

I will post the CI results once they are available.

 FSReadError and LEAK DETECTED after upgrading
 -

 Key: CASSANDRA-9686
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9686
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows-7-32 bit, 3.2GB RAM, Java 1.7.0_55
Reporter: Andreas Schnitzerling
Assignee: Stefania
 Fix For: 2.1.x, 2.2.x

 Attachments: cassandra.bat, cassandra.yaml, 
 compactions_in_progress.zip, sstable_activity.zip, system.log


 After upgrading one of 15 nodes from 2.1.7 to 2.2.0-rc1 I get FSReadError and 
 LEAK DETECTED on start. Deleting the listed files, the failure goes away.
 {code:title=system.log}
 (stack trace identical to the one quoted in the issue description above)
 {code}





[2/4] cassandra git commit: Move schema tables to the new system_schema keyspace

2015-07-10 Thread aleksey
http://git-wip-us.apache.org/repos/asf/cassandra/blob/7d6c876e/src/java/org/apache/cassandra/schema/SchemaKeyspace.java
--
diff --git a/src/java/org/apache/cassandra/schema/SchemaKeyspace.java 
b/src/java/org/apache/cassandra/schema/SchemaKeyspace.java
new file mode 100644
index 000..0e40ed2
--- /dev/null
+++ b/src/java/org/apache/cassandra/schema/SchemaKeyspace.java
@@ -0,0 +1,1501 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * License); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.schema;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.security.MessageDigest;
+import java.security.NoSuchAlgorithmException;
+import java.util.*;
+import java.util.concurrent.TimeUnit;
+import java.util.function.Function;
+
+import com.google.common.collect.ImmutableList;
+import com.google.common.collect.MapDifference;
+import com.google.common.collect.Maps;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.cassandra.cache.CachingOptions;
+import org.apache.cassandra.config.*;
+import org.apache.cassandra.cql3.ColumnIdentifier;
+import org.apache.cassandra.cql3.QueryProcessor;
+import org.apache.cassandra.cql3.UntypedResultSet;
+import org.apache.cassandra.cql3.functions.*;
+import org.apache.cassandra.db.ClusteringComparator;
+import org.apache.cassandra.db.*;
+import org.apache.cassandra.db.marshal.*;
+import org.apache.cassandra.db.partitions.*;
+import org.apache.cassandra.db.rows.*;
+import org.apache.cassandra.exceptions.ConfigurationException;
+import org.apache.cassandra.exceptions.InvalidRequestException;
+import org.apache.cassandra.io.compress.CompressionParameters;
+import org.apache.cassandra.service.StorageService;
+import org.apache.cassandra.utils.ByteBufferUtil;
+import org.apache.cassandra.utils.FBUtilities;
+import org.apache.cassandra.utils.concurrent.OpOrder;
+
+import static org.apache.cassandra.cql3.QueryProcessor.executeOnceInternal;
+import static org.apache.cassandra.utils.FBUtilities.fromJsonMap;
+import static org.apache.cassandra.utils.FBUtilities.json;
+
+/**
+ * system_schema.* tables and methods for manipulating them.
+ */
+public final class SchemaKeyspace
+{
+private SchemaKeyspace()
+{
+}
+
+private static final Logger logger = 
LoggerFactory.getLogger(SchemaKeyspace.class);
+
+public static final String NAME = "system_schema";
+
+public static final String KEYSPACES = "keyspaces";
+public static final String TABLES = "tables";
+public static final String COLUMNS = "columns";
+public static final String TRIGGERS = "triggers";
+public static final String TYPES = "types";
+public static final String FUNCTIONS = "functions";
+public static final String AGGREGATES = "aggregates";
+
+public static final List<String> ALL =
+ImmutableList.of(KEYSPACES, TABLES, COLUMNS, TRIGGERS, TYPES, 
FUNCTIONS, AGGREGATES);
+
+private static final CFMetaData Keyspaces =
+compile(KEYSPACES,
+"keyspace definitions",
+"CREATE TABLE %s ("
++ "keyspace_name text,"
++ "durable_writes boolean,"
++ "replication map<text, text>,"
++ "PRIMARY KEY ((keyspace_name)))");
+
+private static final CFMetaData Tables =
+compile(TABLES,
+"table definitions",
+"CREATE TABLE %s ("
++ "keyspace_name text,"
++ "table_name text,"
++ "bloom_filter_fp_chance double,"
++ "caching text,"
++ "cf_id uuid," // post-2.1 UUID cfid
++ "comment text,"
++ "compaction_strategy_class text,"
++ "compaction_strategy_options text,"
++ "comparator text,"
++ "compression_parameters text,"
++ "default_time_to_live int,"
++ "default_validator text,"
++ "dropped_columns map<text, bigint>,"
++ "dropped_columns_types map<text, text>,"
++ "gc_grace_seconds int,"
++ "is_dense boolean,"
++ "key_validator text,"
++ "local_read_repair_chance double,"
++ 

[1/4] cassandra git commit: Move schema tables to the new system_schema keyspace

2015-07-10 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk 81ba56163 -> 7d6c876ec


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7d6c876e/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index 93f69a9..596e463 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -2471,7 +2471,7 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 
 public int forceKeyspaceCleanup(String keyspaceName, String... 
columnFamilies) throws IOException, ExecutionException, InterruptedException
 {
-if (keyspaceName.equals(SystemKeyspace.NAME))
+if (Schema.isSystemKeyspace(keyspaceName))
 throw new RuntimeException("Cleanup of the system keyspace is 
neither necessary nor wise");
 
 CompactionManager.AllSSTableOpStatus status = 
CompactionManager.AllSSTableOpStatus.SUCCESSFUL;
@@ -2705,7 +2705,7 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
  Map<String, TabularData> snapshotMap = new HashMap<>();
 for (Keyspace keyspace : Keyspace.all())
 {
-if (SystemKeyspace.NAME.equals(keyspace.getName()))
+if (Schema.isSystemKeyspace(keyspace.getName()))
 continue;
 
 for (ColumnFamilyStore cfStore : keyspace.getColumnFamilyStores())
@@ -2731,7 +2731,7 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 long total = 0;
 for (Keyspace keyspace : Keyspace.all())
 {
-if (SystemKeyspace.NAME.equals(keyspace.getName()))
+if (Schema.isSystemKeyspace(keyspace.getName()))
 continue;
 
 for (ColumnFamilyStore cfStore : keyspace.getColumnFamilyStores())

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7d6c876e/src/java/org/apache/cassandra/thrift/ThriftConversion.java
--
diff --git a/src/java/org/apache/cassandra/thrift/ThriftConversion.java 
b/src/java/org/apache/cassandra/thrift/ThriftConversion.java
index f60ea48..3e0c8f4 100644
--- a/src/java/org/apache/cassandra/thrift/ThriftConversion.java
+++ b/src/java/org/apache/cassandra/thrift/ThriftConversion.java
@@ -40,9 +40,9 @@ import 
org.apache.cassandra.locator.AbstractReplicationStrategy;
 import org.apache.cassandra.locator.LocalStrategy;
 import org.apache.cassandra.schema.KeyspaceMetadata;
 import org.apache.cassandra.schema.KeyspaceParams;
+import org.apache.cassandra.schema.SchemaKeyspace;
 import org.apache.cassandra.schema.Tables;
 import org.apache.cassandra.serializers.MarshalException;
-import org.apache.cassandra.schema.LegacySchemaTables;
 import org.apache.cassandra.utils.ByteBufferUtil;
 import org.apache.cassandra.utils.UUIDGen;
 
@@ -414,7 +414,7 @@ public class ThriftConversion
 cols.add(convertThriftCqlRow(row));
 UntypedResultSet colsRows = UntypedResultSet.create(cols);
 
-return LegacySchemaTables.createTableFromTableRowAndColumnRows(cfRow, 
colsRows);
+return SchemaKeyspace.createTableFromTableRowAndColumnRows(cfRow, 
colsRows);
 }
 
 private static MapString, ByteBuffer convertThriftCqlRow(CqlRow row)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7d6c876e/src/java/org/apache/cassandra/thrift/ThriftValidation.java
--
diff --git a/src/java/org/apache/cassandra/thrift/ThriftValidation.java 
b/src/java/org/apache/cassandra/thrift/ThriftValidation.java
index dd5bf98..13c55aa 100644
--- a/src/java/org/apache/cassandra/thrift/ThriftValidation.java
+++ b/src/java/org/apache/cassandra/thrift/ThriftValidation.java
@@ -632,8 +632,8 @@ public class ThriftValidation
 
 public static void validateKeyspaceNotSystem(String modifiedKeyspace) 
throws org.apache.cassandra.exceptions.InvalidRequestException
 {
-if (modifiedKeyspace.equalsIgnoreCase(SystemKeyspace.NAME))
-throw new 
org.apache.cassandra.exceptions.InvalidRequestException("system keyspace is not 
user-modifiable");
+if (Schema.isSystemKeyspace(modifiedKeyspace))
+throw new 
org.apache.cassandra.exceptions.InvalidRequestException(String.format("%s 
keyspace is not user-modifiable", modifiedKeyspace));
 }
 
 //public static IDiskAtomFilter asIFilter(SlicePredicate sp, CFMetaData 
metadata, ByteBuffer superColumn)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7d6c876e/src/java/org/apache/cassandra/tools/nodetool/Cleanup.java
--
diff --git a/src/java/org/apache/cassandra/tools/nodetool/Cleanup.java 

[jira] [Commented] (CASSANDRA-9717) TestCommitLog segment size dtests fail on trunk

2015-07-10 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14622145#comment-14622145
 ] 

Branimir Lambov commented on CASSANDRA-9717:


I am sorry, the seed discussion only applies to compressed versions of the 
test, which {{default_segment_size_test}} isn't.

I looked at the test and am wondering why it expects an overall size of 60MB 
for two segments of 32MB each? It appears to error out as it sees 64, which is 
correct and as expected.

60 (or maybe 62.5 as the test appears to allow for 12 or 13 segments) is 
correct for {{small_segment_size_test}}.

The compressed version of this test, however, fails as the last segment of the 
log is smaller. This is also expected: compressed segments grow as data is 
added; we probably need to special-case this.
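
The arithmetic above, spelled out (the ~5MB small-segment size is a hypothetical inference from the 60/62.5 figures quoted, not taken from the dtest source):

{code:title=SegmentSizeMath.java}
public class SegmentSizeMath
{
    public static void main(String[] args)
    {
        // default_segment_size_test: two full 32MB segments
        System.out.println(2 * 32);  // 64, exactly what the test observes
        // small_segment_size_test: 12 to 13 segments of roughly 5MB each
        System.out.println(12 * 5);  // 60
        System.out.println(13 * 5);  // 65, so ~62.5 for 12.5 segments
    }
}
{code}

So the 60MB expectation only matches the small-segment variant; the default test seeing 64MB is not an error.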

 TestCommitLog segment size dtests fail on trunk
 ---

 Key: CASSANDRA-9717
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9717
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Jim Witschey
Assignee: Branimir Lambov
Priority: Blocker
 Fix For: 3.0 beta 1


 The test checks the commit log segment size when the specified size is 32MB. It 
 fails for me locally and on cassci. ([cassci 
 link|http://cassci.datastax.com/view/trunk/job/trunk_dtest/305/testReport/commitlog_test/TestCommitLog/default_segment_size_test/])
 The command to run the test by itself is {{CASSANDRA_VERSION=git:trunk 
 nosetests commitlog_test.py:TestCommitLog.default_segment_size_test}}.
 EDIT: a similar test, 
 {{commitlog_test.py:TestCommitLog.small_segment_size_test}}, also fails with 
 a similar error.
 The solution here may just be to change the expected size or the acceptable 
 error -- the result isn't far off. I'm happy to make the dtest change if 
 that's the solution.





[jira] [Commented] (CASSANDRA-6717) Modernize schema tables

2015-07-10 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=1469#comment-1469
 ] 

Aleksey Yeschenko commented on CASSANDRA-6717:
--

Committed to trunk as {{7d6c876ec9f8dd143046ff49b5d61066ad5206c1}}. Fixed the 
dtests broken by the patch in cassandra-dtest commit 
{{d31b56075623c19e5151400f83c8ea43f986d5ea}}, with the exception of 
{{auth_test.py}}.

Dtests should really switch to using python-driver metadata API directly, and 
not query the internal schema tables. Will ask someone to make those changes.

 Modernize schema tables
 ---

 Key: CASSANDRA-6717
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6717
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Sylvain Lebresne
Assignee: Aleksey Yeschenko
  Labels: client-impacting
 Fix For: 3.0 beta 1


 There are a few problems/improvements that can be done with the way we store 
 schema:
 # CASSANDRA-4988: as explained on the ticket, storing the comparator is now 
 redundant (or almost: we'd also need to store whether the table is COMPACT, 
 which we don't currently, but that is easy and probably a good idea anyway); it 
 can be entirely reconstructed from the info in schema_columns (the same is 
 true of key_validator and subcomparator, and replacing default_validator by a 
 COMPACT_VALUE column in all cases is relatively simple). And storing the 
 comparator as an opaque string broke concurrent updates of sub-parts of said 
 comparator (concurrent collection addition or altering 2 separate clustering 
 columns, typically), so it's really worth removing it.
 # CASSANDRA-4603: it's time to get rid of those ugly json maps. I'll note 
 that schema_keyspaces is a problem due to its use of COMPACT STORAGE, but I 
 think we should fix it once and for all nonetheless (see below).
 # For CASSANDRA-6382 and to allow indexing both map keys and values at the 
 same time, we'd need to be able to have more than one index definition for a 
 given column.
 # There are a few mismatches in table options between the ones stored in the 
 schema and the ones used when declaring/altering a table which would be nice 
 to fix. The compaction, compression and replication maps are ones already 
 mentioned in CASSANDRA-4603, but also for some reason 
 'dclocal_read_repair_chance' in CQL is called just 'local_read_repair_chance' 
 in the schema table, and 'min/max_compaction_threshold' are column family 
 options in the schema but just compaction options in CQL (which makes more 
 sense).
 None of those issues are major, and we could probably deal with them 
 independently, but it might be simpler to just fix them all in one shot, so I 
 wanted to sum them all up here. In particular, the fact that 
 'schema_keyspaces' uses COMPACT STORAGE is annoying (for the replication map, 
 but it may limit future stuff too), which suggests we should migrate it to a 
 new, non-COMPACT table. And while that's arguably a detail, it wouldn't hurt 
 to rename schema_columnfamilies to schema_tables for the years to come, since 
 that's the preferred vernacular for CQL.
 Overall, what I would suggest is to move all schema tables to a new keyspace, 
 named 'schema' for instance (or 'system_schema', but I prefer the shorter 
 version), and fix all the issues above at once. Since we currently don't 
 exchange schema between nodes of different versions, all we'd need for that 
 is a one-shot startup migration, and overall, I think it could be simpler for 
 clients to deal with one clear migration than to have to handle minor 
 individual changes all over the place. I also think it's somewhat cleaner 
 conceptually to have schema tables in their own keyspace, since they are 
 replicated through a different mechanism than other system tables.
 If we do that, we could, for instance, migrate to the following schema tables 
 (details up for discussion of course):
 {noformat}
 CREATE TYPE user_type (
   name text,
   column_names list<text>,
   column_types list<text>
 )
 CREATE TABLE keyspaces (
   name text PRIMARY KEY,
   durable_writes boolean,
   replication map<string, string>,
   user_types map<string, user_type>
 )
 CREATE TYPE trigger_definition (
   name text,
   options map<text, text>
 )
 CREATE TABLE tables (
   keyspace text,
   name text,
   id uuid,
   table_type text, // COMPACT, CQL or SUPER
   dropped_columns map<text, bigint>,
   triggers map<text, trigger_definition>,
   // options
   comment text,
   compaction map<text, text>,
   compression map<text, text>,
   read_repair_chance double,
   dclocal_read_repair_chance double,
   gc_grace_seconds int,
   caching text,
   rows_per_partition_to_cache text,
   default_time_to_live int,
   min_index_interval int,
   max_index_interval int,
   speculative_retry text,
   populate_io_cache_on_flush boolean,
   

[4/4] cassandra git commit: Move schema tables to the new system_schema keyspace

2015-07-10 Thread aleksey
Move schema tables to the new system_schema keyspace

patch by Aleksey Yeschenko; reviewed by Tyler Hobbs for CASSANDRA-6717


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7d6c876e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7d6c876e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7d6c876e

Branch: refs/heads/trunk
Commit: 7d6c876ec9f8dd143046ff49b5d61066ad5206c1
Parents: 81ba561
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu Jul 9 21:42:52 2015 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Jul 10 15:28:37 2015 +0300

--
 NEWS.txt|4 +
 ...-core-2.2.0-rc2-SNAPSHOT-20150617-shaded.jar |  Bin 2154972 -> 0 bytes
 ...ra-driver-core-2.2.0-rc2-SNAPSHOT-shaded.jar |  Bin 0 -> 2162223 bytes
 ...sandra-driver-internal-only-2.6.0c2.post.zip |  Bin 198346 -> 0 bytes
 lib/cassandra-driver-internal-only-2.6.0c2.zip  |  Bin 0 -> 203206 bytes
 pylib/cqlshlib/cql3handling.py  |4 +-
 .../org/apache/cassandra/config/CFMetaData.java |6 +-
 .../org/apache/cassandra/config/Schema.java |   29 +-
 .../cql3/statements/AlterKeyspaceStatement.java |3 +-
 .../db/DefinitionsUpdateVerbHandler.java|4 +-
 src/java/org/apache/cassandra/db/Keyspace.java  |6 +-
 .../db/MigrationRequestVerbHandler.java |4 +-
 .../org/apache/cassandra/db/ReadCommand.java|3 +-
 .../org/apache/cassandra/db/SystemKeyspace.java |  172 +-
 .../io/sstable/format/SSTableReader.java|2 +-
 .../org/apache/cassandra/schema/Functions.java  |7 +
 .../cassandra/schema/LegacySchemaMigrator.java  |  804 ++
 .../cassandra/schema/LegacySchemaTables.java| 1502 --
 .../apache/cassandra/schema/SchemaKeyspace.java | 1501 +
 src/java/org/apache/cassandra/schema/Types.java |   12 +
 .../cassandra/service/CassandraDaemon.java  |9 +
 .../apache/cassandra/service/ClientState.java   |9 +-
 .../cassandra/service/MigrationManager.java |   71 +-
 .../apache/cassandra/service/MigrationTask.java |4 +-
 .../apache/cassandra/service/StorageProxy.java  |2 +-
 .../cassandra/service/StorageService.java   |6 +-
 .../cassandra/thrift/ThriftConversion.java  |4 +-
 .../cassandra/thrift/ThriftValidation.java  |4 +-
 .../cassandra/tools/nodetool/Cleanup.java   |6 +-
 .../utils/NativeSSTableLoaderClient.java|   19 +-
 .../unit/org/apache/cassandra/SchemaLoader.java |4 +-
 .../apache/cassandra/config/CFMetaDataTest.java |   10 +-
 .../config/LegacySchemaTablesTest.java  |  153 --
 .../cql3/validation/entities/UFTest.java|   16 +-
 .../cql3/validation/operations/AlterTest.java   |   24 +-
 .../operations/InsertUpdateIfConditionTest.java |6 +-
 .../org/apache/cassandra/schema/DefsTest.java   |2 +-
 .../schema/LegacySchemaMigratorTest.java|  549 +++
 .../cassandra/schema/SchemaKeyspaceTest.java|  153 ++
 .../service/StorageServiceServerTest.java   |5 +-
 40 files changed, 3328 insertions(+), 1791 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7d6c876e/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 54ed7c6..ce05b92 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -21,6 +21,10 @@ Upgrading
    - New SSTable version 'la' with improved bloom-filter false-positive handling
      compared to previous version 'ka' used in 2.2 and 2.1. Running sstableupgrade
      is not necessary but recommended.
+   - Before upgrading to 3.0, make sure that your cluster is in complete agreement
+     (schema versions outputted by `nodetool describecluster` are all the same).
+   - Schema metadata is now stored in the new `system_schema` keyspace, and
+     legacy `system.schema_*` tables are now gone; see CASSANDRA-6717 for details.
    - Pig's CassandraStorage has been removed. Use CqlNativeStorage instead.
    - Hadoop BulkOutputFormat and BulkRecordWriter have been removed; use
      CqlBulkOutputFormat and CqlBulkRecordWriter instead.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7d6c876e/lib/cassandra-driver-core-2.2.0-rc2-SNAPSHOT-20150617-shaded.jar
--
diff --git a/lib/cassandra-driver-core-2.2.0-rc2-SNAPSHOT-20150617-shaded.jar 
b/lib/cassandra-driver-core-2.2.0-rc2-SNAPSHOT-20150617-shaded.jar
deleted file mode 100644
index 7d971df..000
Binary files a/lib/cassandra-driver-core-2.2.0-rc2-SNAPSHOT-20150617-shaded.jar 
and /dev/null differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7d6c876e/lib/cassandra-driver-core-2.2.0-rc2-SNAPSHOT-shaded.jar

[3/4] cassandra git commit: Move schema tables to the new system_schema keyspace

2015-07-10 Thread aleksey
http://git-wip-us.apache.org/repos/asf/cassandra/blob/7d6c876e/src/java/org/apache/cassandra/schema/LegacySchemaTables.java
--
diff --git a/src/java/org/apache/cassandra/schema/LegacySchemaTables.java 
b/src/java/org/apache/cassandra/schema/LegacySchemaTables.java
deleted file mode 100644
index c8e163c..000
--- a/src/java/org/apache/cassandra/schema/LegacySchemaTables.java
+++ /dev/null
@@ -1,1502 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.cassandra.schema;
-
-import java.io.IOException;
-import java.nio.ByteBuffer;
-import java.security.MessageDigest;
-import java.security.NoSuchAlgorithmException;
-import java.util.*;
-import java.util.concurrent.TimeUnit;
-import java.util.function.Function;
-
-import com.google.common.collect.MapDifference;
-import com.google.common.collect.Maps;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import org.apache.cassandra.cache.CachingOptions;
-import org.apache.cassandra.config.*;
-import org.apache.cassandra.cql3.*;
-import org.apache.cassandra.cql3.functions.AbstractFunction;
-import org.apache.cassandra.cql3.functions.FunctionName;
-import org.apache.cassandra.cql3.functions.UDFunction;
-import org.apache.cassandra.cql3.functions.UDAggregate;
-import org.apache.cassandra.db.*;
-import org.apache.cassandra.db.rows.*;
-import org.apache.cassandra.db.marshal.*;
-import org.apache.cassandra.db.partitions.*;
-import org.apache.cassandra.exceptions.ConfigurationException;
-import org.apache.cassandra.exceptions.InvalidRequestException;
-import org.apache.cassandra.io.compress.CompressionParameters;
-import org.apache.cassandra.service.StorageService;
-import org.apache.cassandra.utils.ByteBufferUtil;
-import org.apache.cassandra.utils.FBUtilities;
-import org.apache.cassandra.utils.concurrent.OpOrder;
-
-import static org.apache.cassandra.cql3.QueryProcessor.executeOnceInternal;
-import static org.apache.cassandra.utils.FBUtilities.fromJsonMap;
-import static org.apache.cassandra.utils.FBUtilities.json;
-
-/** system.schema_* tables used to store keyspace/table/type attributes prior 
to C* 3.0 */
-public final class LegacySchemaTables
-{
-private LegacySchemaTables()
-{
-}
-
-private static final Logger logger = 
LoggerFactory.getLogger(LegacySchemaTables.class);
-
-    public static final String KEYSPACES = "schema_keyspaces";
-    public static final String COLUMNFAMILIES = "schema_columnfamilies";
-    public static final String COLUMNS = "schema_columns";
-    public static final String TRIGGERS = "schema_triggers";
-    public static final String USERTYPES = "schema_usertypes";
-    public static final String FUNCTIONS = "schema_functions";
-    public static final String AGGREGATES = "schema_aggregates";
-
-    public static final List<String> ALL = Arrays.asList(KEYSPACES, COLUMNFAMILIES, COLUMNS, TRIGGERS, USERTYPES, FUNCTIONS, AGGREGATES);
-
-    private static final CFMetaData Keyspaces =
-        compile(KEYSPACES,
-                "keyspace definitions",
-                "CREATE TABLE %s ("
-                + "keyspace_name text,"
-                + "durable_writes boolean,"
-                + "strategy_class text,"
-                + "strategy_options text,"
-                + "PRIMARY KEY ((keyspace_name))) "
-                + "WITH COMPACT STORAGE");
-
-    private static final CFMetaData Columnfamilies =
-        compile(COLUMNFAMILIES,
-                "table definitions",
-                "CREATE TABLE %s ("
-                + "keyspace_name text,"
-                + "columnfamily_name text,"
-                + "bloom_filter_fp_chance double,"
-                + "caching text,"
-                + "cf_id uuid," // post-2.1 UUID cfid
-                + "comment text,"
-                + "compaction_strategy_class text,"
-                + "compaction_strategy_options text,"
-                + "comparator text,"
-                + "compression_parameters text,"
-                + "default_time_to_live int,"
-                + "default_validator text,"
-                + "dropped_columns map<text, bigint>,"
-                + "dropped_columns_types map<text, text>,"
-                + "gc_grace_seconds int,"
-                + "is_dense 

[jira] [Updated] (CASSANDRA-9765) checkForEndpointCollision fails for legitimate collisions

2015-07-10 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-9765:

Reviewer: Richard Low

 checkForEndpointCollision fails for legitimate collisions
 -

 Key: CASSANDRA-9765
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9765
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Richard Low
Assignee: Stefania
 Fix For: 2.0.17


 Since CASSANDRA-7939, checkForEndpointCollision no longer catches a 
 legitimate collision. Without CASSANDRA-7939, wiping a node and starting it 
 again fails with 'A node with address %s already exists', but with it the 
 node happily enters joining state, potentially streaming from the wrong place 
 and violating consistency.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9686) FSReadError and LEAK DETECTED after upgrading

2015-07-10 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14623160#comment-14623160
 ] 

Stefania commented on CASSANDRA-9686:
-

CI results for 2.1:

http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-9686-2.1-testall/lastCompletedBuild/testReport/
http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-9686-2.1-dtest/lastCompletedBuild/testReport/

There seem to be a few flaky dtests, so I launched a second dtest build.

 FSReadError and LEAK DETECTED after upgrading
 -

 Key: CASSANDRA-9686
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9686
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Windows-7-32 bit, 3.2GB RAM, Java 1.7.0_55
Reporter: Andreas Schnitzerling
Assignee: Stefania
 Fix For: 2.1.x, 2.2.x

 Attachments: cassandra.bat, cassandra.yaml, 
 compactions_in_progress.zip, sstable_activity.zip, system.log


 After upgrading one of 15 nodes from 2.1.7 to 2.2.0-rc1 I get FSReadError and 
 LEAK DETECTED on start. Deleting the listed files, the failure goes away.
 {code:title=system.log}
 ERROR [SSTableBatchOpen:1] 2015-06-29 14:38:34,554 
 DebuggableThreadPoolExecutor.java:242 - Error in ThreadPoolExecutor
 org.apache.cassandra.io.FSReadError: java.io.IOException: Compressed file 
 with 0 chunks encountered: java.io.DataInputStream@1c42271
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:178)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.init(CompressionMetadata.java:117)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:86)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:142)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:101)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:178)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:681)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:644)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:443)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:350)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at 
 org.apache.cassandra.io.sstable.format.SSTableReader$4.run(SSTableReader.java:480)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) 
 ~[na:1.7.0_55]
   at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0_55]
   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
 [na:1.7.0_55]
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
 [na:1.7.0_55]
   at java.lang.Thread.run(Unknown Source) [na:1.7.0_55]
 Caused by: java.io.IOException: Compressed file with 0 chunks encountered: 
 java.io.DataInputStream@1c42271
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.readChunkOffsets(CompressionMetadata.java:174)
  ~[apache-cassandra-2.2.0-rc1.jar:2.2.0-rc1]
   ... 15 common frames omitted
 ERROR [Reference-Reaper:1] 2015-06-29 14:38:34,734 Ref.java:189 - LEAK 
 DETECTED: a reference 
 (org.apache.cassandra.utils.concurrent.Ref$State@3e547f) to class 
 org.apache.cassandra.io.sstable.format.SSTableReader$InstanceTidier@1926439:D:\Programme\Cassandra\data\data\system\compactions_in_progress\system-compactions_in_progress-ka-6866
  was not released before the reference was garbage collected
 {code}



--


[jira] [Commented] (CASSANDRA-6477) Materialized Views (was: Global Indexes)

2015-07-10 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14623168#comment-14623168
 ] 

Joshua McKenzie commented on CASSANDRA-6477:


Thus far, the implementation looks pretty solid. Fairly self-documenting 
(pending some of the naming issues we discussed above). I have more code to go 
through but have some more feedback.

Questions:
# Currently we're submitting the MaterializedViewBuilder to the standard 
executor in CompactionManager. Is it logical to have these 2 operations share a 
thread pool and resources?
# Why the removal of updatePKIndexes from Cells.java.reconcile?

Feedback:

h6. {{General}}
* Why are concurrent_batchlog_writes and concurrent_materialized_view_writes 
both hard-coded to 32 and not-in the yaml?

h6. {{AlterTableStatement}}
* mv.included.isEmpty() as a check to see if all columns are included is 
counter-intuitive. Add a helper function mv.included.containsAll() (mentioned 
prior, may be addressed in subsequent commits)
* announceMigration: materializedViewDrops is never initialized or used

h6. {{CreateMaterializedViewStatement}}
* in CreateMaterializedViewStatement.announceMigration, while turning 
ColumnIdentifier.Raw into ColumnIdentifier, we allow <= 1 non-pk column in a MV 
partition key, however the error message we log on multiple attempts reads 
"Cannot include non-primary key column '%s' in materialized view partition 
key". We should log that <= 1 is allowed instead. We should also document in 
the code why this restriction is in place.
* refactor out duplication w/building targetPartitionKeys and 
targetPartitionColumns w/nonPKTarget

h6. {{DropMaterializedViewStatement}}
* In findMaterializedView, you don't need to iterate across all the members of 
cfm.getMaterializedViews().values() as it's a Map, you can just check for 
whether or not it contains a member at columnFamily() index.
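As a minimal illustration of the suggested change (hypothetical names standing in for cfm.getMaterializedViews(); this is not the actual CFMetaData API), use the map's key index directly instead of scanning its values:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for cfm.getMaterializedViews(): a map keyed by view name.
public class ViewLookupSketch {
    public static void main(String[] args) {
        Map<String, String> views = new HashMap<>();
        views.put("users_by_email", "CREATE MATERIALIZED VIEW ...");

        // Instead of iterating over views.values() looking for a matching
        // name, use the map's own key index; get() returns null when absent.
        String definition = views.get("users_by_email");
        System.out.println(definition != null); // true
    }
}
```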

h6. {{SSTableIterator}}
* l252. Was this addressing a bug you uncovered during development or an 
accidental change? Your update is changing what we're comparing to 
indexes.size() by pre-incrementing currentIndexIdx before said comparison.

h6. {{MaterializedViewBuilder}}
* MaterializedViewBuilder.getCompactionInfo isn't giving us particularly good 
information about the # Tokens built vs. total. We discussed this offline - 
needs some comments in the code as to why it's currently limited in this 
fashion.

h6. {{MaterializedViewUtils}}
* Should probably add a comment explaining why baseNaturalEndpoints and 
viewNaturalEndpoints always have the same # of entries, so .get on baseIdx is 
safe. Otherwise it takes a lot of knowledge about the MV RF implementation to 
understand it (thinking about future developers here)

h6. {{AbstractReadCommandBuilder}}
* In {{.makeColumnFilter()}} - Why the change to the temporary stack ptr for 
CFMetaData?
* It's not immediately clear to me why you changed from the allColumnsBuilder 
to the selectionBuilder - could use some clarification on that (for me here, 
not necessarily comments on code)

h6. {{ColumnFamilyStore}}
* Pull contents of initRowCache into init() - rather than breaking into 2 
separate methods, just have the 1 renamed w/MVM init in it

h6. Nits:
* SingleColumnRestriction: unnecessary whitespace changes
* AlterTableStatement:
** extraneous whitespace in announceMigration
** tab in announceMigration modified to break 120 char
* CreateMaterializedViewStatement:
** unused imports
** Double whitespace between methods
* DropMaterializedViewStatement: "Cannot drop non existing" should be "Cannot 
drop non-existent"
* CompactionManager: Need space after submitMaterializedViewBuilder method close

 Materialized Views (was: Global Indexes)
 

 Key: CASSANDRA-6477
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6477
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Reporter: Jonathan Ellis
Assignee: Carl Yeksigian
  Labels: cql
 Fix For: 3.0 beta 1

 Attachments: test-view-data.sh, users.yaml


 Local indexes are suitable for low-cardinality data, where spreading the 
 index across the cluster is a Good Thing.  However, for high-cardinality 
 data, local indexes require querying most nodes in the cluster even if only a 
 handful of rows is returned.



--


[jira] [Updated] (CASSANDRA-9774) fix sstableverify dtest

2015-07-10 Thread Jim Witschey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Witschey updated CASSANDRA-9774:

Summary: fix sstableverify dtest  (was: sstableverify doesn't detect 
missing sstables on trunk)

 fix sstableverify dtest
 ---

 Key: CASSANDRA-9774
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9774
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Jim Witschey
Assignee: Jim Witschey
Priority: Blocker
 Fix For: 3.0.x


 One of our dtests for {{sstableverify}} 
 ({{offline_tools_test.py:TestOfflineTools.sstableverify_test}}) is failing 
 hard on trunk ([cassci 
 history|http://cassci.datastax.com/view/trunk/job/trunk_dtest/lastCompletedBuild/testReport/offline_tools_test/TestOfflineTools/sstableverify_test/history/])
 The way the test works is by deleting an SSTable, then running 
 {{sstableverify}} on its table. In earlier versions, it successfully detects 
 this problem and outputs that it was not released before the reference was 
 garbage collected. The test no longer finds this string in the output; 
 looking through the output of the test, it doesn't look like it reports any 
 problems at all.
 EDIT: After digging into the C* source a bit, I may have misattributed the 
 problem to {{sstableverify}}; this could be a more general memory management 
 problem, as the error text expected in the dtest is emitted by part of the 
 {{Ref}} implementation:
 https://github.com/apache/cassandra/blob/075ff5000ced24b42f3b540815cae471bee4049d/src/java/org/apache/cassandra/utils/concurrent/Ref.java#L187



--


[jira] [Commented] (CASSANDRA-9765) checkForEndpointCollision fails for legitimate collisions

2015-07-10 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14622498#comment-14622498
 ] 

Jonathan Ellis commented on CASSANDRA-9765:
---

Do you have time to review, [~rlow]?

 checkForEndpointCollision fails for legitimate collisions
 -

 Key: CASSANDRA-9765
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9765
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Richard Low
Assignee: Stefania
 Fix For: 2.0.17


 Since CASSANDRA-7939, checkForEndpointCollision no longer catches a 
 legitimate collision. Without CASSANDRA-7939, wiping a node and starting it 
 again fails with 'A node with address %s already exists', but with it the 
 node happily enters joining state, potentially streaming from the wrong place 
 and violating consistency.



--


[jira] [Updated] (CASSANDRA-9774) sstableverify doesn't detect missing sstables on trunk

2015-07-10 Thread Jim Witschey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Witschey updated CASSANDRA-9774:

Assignee: Jim Witschey

 sstableverify doesn't detect missing sstables on trunk
 --

 Key: CASSANDRA-9774
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9774
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Jim Witschey
Assignee: Jim Witschey
Priority: Blocker
 Fix For: 3.0.x


 One of our dtests for {{sstableverify}} 
 ({{offline_tools_test.py:TestOfflineTools.sstableverify_test}}) is failing 
 hard on trunk ([cassci 
 history|http://cassci.datastax.com/view/trunk/job/trunk_dtest/lastCompletedBuild/testReport/offline_tools_test/TestOfflineTools/sstableverify_test/history/])
 The way the test works is by deleting an SSTable, then running 
 {{sstableverify}} on its table. In earlier versions, it successfully detects 
 this problem and outputs that it was not released before the reference was 
 garbage collected. The test no longer finds this string in the output; 
 looking through the output of the test, it doesn't look like it reports any 
 problems at all.
 EDIT: After digging into the C* source a bit, I may have misattributed the 
 problem to {{sstableverify}}; this could be a more general memory management 
 problem, as the error text expected in the dtest is emitted by part of the 
 {{Ref}} implementation:
 https://github.com/apache/cassandra/blob/075ff5000ced24b42f3b540815cae471bee4049d/src/java/org/apache/cassandra/utils/concurrent/Ref.java#L187



--


[jira] [Commented] (CASSANDRA-9765) checkForEndpointCollision fails for legitimate collisions

2015-07-10 Thread Richard Low (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14622507#comment-14622507
 ] 

Richard Low commented on CASSANDRA-9765:


Yes I can.

 checkForEndpointCollision fails for legitimate collisions
 -

 Key: CASSANDRA-9765
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9765
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Richard Low
Assignee: Stefania
 Fix For: 2.0.17


 Since CASSANDRA-7939, checkForEndpointCollision no longer catches a 
 legitimate collision. Without CASSANDRA-7939, wiping a node and starting it 
 again fails with 'A node with address %s already exists', but with it the 
 node happily enters joining state, potentially streaming from the wrong place 
 and violating consistency.



--


[jira] [Commented] (CASSANDRA-9661) Endless compaction to a tiny, tombstoned SStable

2015-07-10 Thread WeiFan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14622269#comment-14622269
 ] 

WeiFan commented on CASSANDRA-9661:
---

Yes, we first deployed the 2.1.5 release from the official page and encountered 
this problem. After that we patched the code from tag 
cassandra-2.1.5-tentative (b4fae85578b1bd31d162be9cb58b03c0be9f853f).

 Endless compaction to a tiny, tombstoned SStable
 

 Key: CASSANDRA-9661
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9661
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: WeiFan
Assignee: Yuki Morishita
  Labels: compaction, dtcs

 We deployed a 3-node cluster (with 2.1.5) which worked under stable write 
 requests (about 2k wps) to a CF with DTCS, a default TTL of 43200s and 
 gc_grace of 21600s. The CF contained insert-only, complete time-series 
 data. We found Cassandra would occasionally keep writing logs like this:
 INFO  [CompactionExecutor:30551] 2015-06-26 18:10:06,195 
 CompactionTask.java:270 - Compacted 1 sstables to 
 [/home/cassandra/workdata/data/sen_vaas_test/nodestatus-f96c7c50155811e589f69752ac9b06c7/sen_vaas_test-nodestatus-ka-2516270,].
   449 bytes to 449 (~100% of original) in 12ms = 0.035683MB/s.  4 total 
 partitions merged to 4.  Partition merge counts were {1:4, }
 INFO  [CompactionExecutor:30551] 2015-06-26 18:10:06,241 
 CompactionTask.java:140 - Compacting 
 [SSTableReader(path='/home/cassandra/workdata/data/sen_vaas_test/nodestatus-f96c7c50155811e589f69752ac9b06c7/sen_vaas_test-nodestatus-ka-2516270-Data.db')]
 INFO  [CompactionExecutor:30551] 2015-06-26 18:10:06,253 
 CompactionTask.java:270 - Compacted 1 sstables to 
 [/home/cassandra/workdata/data/sen_vaas_test/nodestatus-f96c7c50155811e589f69752ac9b06c7/sen_vaas_test-nodestatus-ka-2516271,].
   449 bytes to 449 (~100% of original) in 12ms = 0.035683MB/s.  4 total 
 partitions merged to 4.  Partition merge counts were {1:4, }
 It seems that Cassandra kept compacting a single SSTable, several times per 
 second, and this lasted for many hours. Tons of logs were thrown and one CPU 
 core was exhausted during this time. The endless compacting finally ended when 
 another compaction started with a group of SSTables (including the previous 
 one). All 3 of our nodes have been hit by this problem, but at different 
 times.
 We could not figure out how the problematic SSTable came about because the log 
 had wrapped around. 
 We have dumped the records in the SSTable and found it has the oldest data in 
 our CF (again, our data was time series), and all of the records in this 
 SSTable have been expired for more than 18 hours (12 hrs TTL + 6 hrs gc), so 
 they should be dropped. However, C* does nothing with this SSTable but compact 
 it again and again, until more SSTables are out-dated enough to be considered 
 for compaction together with this one by DTCS.
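 For illustration only, here is a toy model (NOT Cassandra's actual compaction 
 code; thresholds and names are made up) of how a single-sstable tombstone 
 compaction can loop: if the strategy nominates any sstable whose droppable 
 tombstone ratio exceeds a threshold, but the rewrite never manages to purge 
 the expired data, the output is immediately eligible again:

```java
// Toy model only -- NOT Cassandra's implementation. It shows how a
// single-sstable "tombstone compaction" heuristic can loop forever when
// the rewrite cannot actually purge the expired data.
public class TombstoneLoopSketch {
    static final double TOMBSTONE_THRESHOLD = 0.2; // hypothetical eligibility threshold

    static class SSTable {
        final double droppableTombstoneRatio;
        SSTable(double ratio) { this.droppableTombstoneRatio = ratio; }
    }

    // The strategy nominates the sstable whenever its estimated droppable
    // tombstone ratio exceeds the threshold.
    static boolean shouldCompact(SSTable t) {
        return t.droppableTombstoneRatio > TOMBSTONE_THRESHOLD;
    }

    // If purging is blocked (e.g. the data might be shadowed elsewhere),
    // the compaction rewrites identical content and the ratio never drops.
    static SSTable compact(SSTable t, boolean purgeBlocked) {
        return purgeBlocked ? new SSTable(t.droppableTombstoneRatio) : new SSTable(0.0);
    }

    public static void main(String[] args) {
        SSTable t = new SSTable(1.0); // fully expired sstable
        int rounds = 0;
        while (shouldCompact(t) && rounds < 5) { // cap what would otherwise never end
            t = compact(t, true);
            rounds++;
        }
        System.out.println(rounds + " pointless compactions, still eligible: " + shouldCompact(t));
    }
}
```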



--


[jira] [Commented] (CASSANDRA-9519) CASSANDRA-8448 Doesn't seem to be fixed

2015-07-10 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14622321#comment-14622321
 ] 

Sylvain Lebresne commented on CASSANDRA-9519:
-

bq. My take from the stacktrace was that DES.sortByProximityWithScore was 
calling super to AES.sortByProximity which then had a problem with the array 
changing while being sorted. 

Well, the whole point of {{sortByProximity}} is to sort the input list in 
place, so if the caller changes the input behind our back, we have a very 
serious problem (and a quick scan of the call sites indicates we do no such 
thing). In fact, your patch breaks {{sortByProximity}} plain and simple, since 
it makes it sort a local copy of the list that nobody ever gets (and the list 
that should be sorted isn't).

Besides, the error message strongly suggests the problem is with the comparison 
method.
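A minimal sketch of the contract issue and of the snapshot-style fix (hypothetical names; this is not the DynamicEndpointSnitch code): if the comparator reads a score map that another thread mutates mid-sort, two comparisons of the same pair can disagree and TimSort may throw; sorting against a one-time snapshot keeps every comparison mutually consistent.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SnapshotSortDemo {
    // Live scores, updated concurrently by a latency-tracking thread (hypothetical setup).
    static final Map<String, Double> liveScores = new ConcurrentHashMap<>();

    // Unsafe: the comparator reads the live map, so the relative order of two
    // endpoints can change between comparisons while TimSort is running --
    // exactly what "Comparison method violates its general contract!" flags.
    static void sortAgainstLiveScores(List<String> endpoints) {
        endpoints.sort((a, b) -> Double.compare(liveScores.get(a), liveScores.get(b)));
    }

    // Safe: snapshot the scores once, then sort against the immutable copy,
    // so every comparison during the sort is mutually consistent.
    static void sortWithSnapshot(List<String> endpoints) {
        final Map<String, Double> snapshot = new HashMap<>(liveScores);
        endpoints.sort((a, b) -> Double.compare(snapshot.get(a), snapshot.get(b)));
    }

    public static void main(String[] args) {
        liveScores.put("n1", 3.0);
        liveScores.put("n2", 1.0);
        liveScores.put("n3", 2.0);
        List<String> endpoints = new ArrayList<>(Arrays.asList("n1", "n2", "n3"));
        sortWithSnapshot(endpoints);
        System.out.println(endpoints); // [n2, n3, n1] under the snapshotted scores
    }
}
```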

bq. It would be nice if we had a test that could reproduce this so we don't 
have to play guessing games

It's actually not all that hard. I've pushed on [my 
branch|https://github.com/pcmanus/cassandra/commits/9519] a test that on my box 
fails pretty reliably without the patch but hasn't failed with it.


 CASSANDRA-8448 Doesn't seem to be fixed
 ---

 Key: CASSANDRA-9519
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9519
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jeremiah Jordan
 Fix For: 2.1.x, 2.2.x

 Attachments: 9519.txt


 Still seeing the Comparison method violates its general contract! in 2.1.5
 {code}
 java.lang.IllegalArgumentException: Comparison method violates its general 
 contract!
   at java.util.TimSort.mergeHi(TimSort.java:895) ~[na:1.8.0_45]
   at java.util.TimSort.mergeAt(TimSort.java:512) ~[na:1.8.0_45]
   at java.util.TimSort.mergeCollapse(TimSort.java:437) ~[na:1.8.0_45]
   at java.util.TimSort.sort(TimSort.java:241) ~[na:1.8.0_45]
   at java.util.Arrays.sort(Arrays.java:1512) ~[na:1.8.0_45]
   at java.util.ArrayList.sort(ArrayList.java:1454) ~[na:1.8.0_45]
   at java.util.Collections.sort(Collections.java:175) ~[na:1.8.0_45]
   at 
 org.apache.cassandra.locator.AbstractEndpointSnitch.sortByProximity(AbstractEndpointSnitch.java:49)
  ~[cassandra-all-2.1.5.469.jar:2.1.5.469]
   at 
 org.apache.cassandra.locator.DynamicEndpointSnitch.sortByProximityWithScore(DynamicEndpointSnitch.java:158)
  ~[cassandra-all-2.1.5.469.jar:2.1.5.469]
   at 
 org.apache.cassandra.locator.DynamicEndpointSnitch.sortByProximityWithBadness(DynamicEndpointSnitch.java:187)
  ~[cassandra-all-2.1.5.469.jar:2.1.5.469]
   at 
 org.apache.cassandra.locator.DynamicEndpointSnitch.sortByProximity(DynamicEndpointSnitch.java:152)
  ~[cassandra-all-2.1.5.469.jar:2.1.5.469]
   at 
 org.apache.cassandra.service.StorageProxy.getLiveSortedEndpoints(StorageProxy.java:1530)
  ~[cassandra-all-2.1.5.469.jar:2.1.5.469]
   at 
 org.apache.cassandra.service.StorageProxy.getRangeSlice(StorageProxy.java:1688)
  ~[cassandra-all-2.1.5.469.jar:2.1.5.469]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:256)
  ~[cassandra-all-2.1.5.469.jar:2.1.5.469]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:209)
  ~[cassandra-all-2.1.5.469.jar:2.1.5.469]
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:63)
  ~[cassandra-all-2.1.5.469.jar:2.1.5.469]
   at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:238)
  ~[cassandra-all-2.1.5.469.jar:2.1.5.469]
   at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:260) 
 ~[cassandra-all-2.1.5.469.jar:2.1.5.469]
   at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:272) 
 ~[cassandra-all-2.1.5.469.jar:2.1.5.469]
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6717) Modernize schema tables

2015-07-10 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14622277#comment-14622277
 ] 

Sam Tunnicliffe commented on CASSANDRA-6717:


PR updating {{auth_test.py}} [here | 
https://github.com/riptano/cassandra-dtest/pull/380]

 Modernize schema tables
 ---

 Key: CASSANDRA-6717
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6717
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Sylvain Lebresne
Assignee: Aleksey Yeschenko
  Labels: client-impacting
 Fix For: 3.0 beta 1


 There are a few problems/improvements that can be done with the way we store 
 schema:
 # CASSANDRA-4988: as explained on the ticket, storing the comparator is now 
 redundant (or almost: we'd need to store whether the table is COMPACT or not 
 too, which we don't currently, but that is easy and probably a good idea 
 anyway); it can be entirely reconstructed from the info in schema_columns 
 (the same is true of key_validator and subcomparator, and replacing 
 default_validator by a COMPACT_VALUE column in all cases is relatively 
 simple). And storing the comparator as an opaque string breaks concurrent 
 updates of sub-parts of said comparator (typically concurrent collection 
 additions, or altering 2 separate clustering columns), so it's really worth 
 removing it.
 # CASSANDRA-4603: it's time to get rid of those ugly json maps. I'll note 
 that schema_keyspaces is a problem due to its use of COMPACT STORAGE, but I 
 think we should fix it once and for all nonetheless (see below).
 # For CASSANDRA-6382 and to allow indexing both map keys and values at the 
 same time, we'd need to be able to have more than one index definition for a 
 given column.
 # There are a few mismatches in table options between the ones stored in the 
 schema and the ones used when declaring/altering a table, which would be nice 
 to fix. The compaction, compression and replication maps were already 
 mentioned in CASSANDRA-4603, but also, for some reason, 
 'dclocal_read_repair_chance' in CQL is called just 'local_read_repair_chance' 
 in the schema table, and 'min/max_compaction_threshold' are column family 
 options in the schema but just compaction options in CQL (which makes more 
 sense).
 None of those issues are major, and we could probably deal with them 
 independently, but it might be simpler to just fix them all in one shot, so I 
 wanted to sum them all up here. In particular, the fact that 
 'schema_keyspaces' uses COMPACT STORAGE is annoying (for the replication map, 
 but it may limit future stuff too), which suggests we should migrate it to a 
 new, non-COMPACT table. And while that's arguably a detail, it wouldn't hurt 
 to rename schema_columnfamilies to schema_tables for the years to come, since 
 that's the preferred vernacular for CQL.
 Overall, what I would suggest is to move all schema tables to a new keyspace, 
 named 'schema' for instance (or 'system_schema' but I prefer the shorter 
 version), and fix all the issues above at once. Since we currently don't 
 exchange schema between nodes of different versions, all we'd need to do 
 is a one-shot startup migration, and overall, I think it could be simpler for 
 clients to deal with one clear migration than to have to handle minor 
 individual changes all over the place. I also think it's somewhat cleaner 
 conceptually to have schema tables in their own keyspace since they are 
 replicated through a different mechanism than other system tables.
 If we do that, we could, for instance, migrate to the following schema tables 
 (details up for discussion of course):
 {noformat}
 CREATE TYPE user_type (
   name text,
   column_names list<text>,
   column_types list<text>
 )
 CREATE TABLE keyspaces (
   name text PRIMARY KEY,
   durable_writes boolean,
   replication map<string, string>,
   user_types map<string, user_type>
 )
 CREATE TYPE trigger_definition (
   name text,
   options map<text, text>
 )
 CREATE TABLE tables (
   keyspace text,
   name text,
   id uuid,
   table_type text, // COMPACT, CQL or SUPER
   dropped_columns map<text, bigint>,
   triggers map<text, trigger_definition>,
   // options
   comment text,
   compaction map<text, text>,
   compression map<text, text>,
   read_repair_chance double,
   dclocal_read_repair_chance double,
   gc_grace_seconds int,
   caching text,
   rows_per_partition_to_cache text,
   default_time_to_live int,
   min_index_interval int,
   max_index_interval int,
   speculative_retry text,
   populate_io_cache_on_flush boolean,
   bloom_filter_fp_chance double,
   memtable_flush_period_in_ms int,
   PRIMARY KEY (keyspace, name)
 )
 CREATE TYPE index_definition (
   name text,
   index_type text,
   options map<text, text>
 )
 CREATE TABLE columns (
   keyspace text,
   table text,
   name text,
   kind text, // 

[jira] [Comment Edited] (CASSANDRA-9729) CQLSH exception - OverflowError: normalized days too large to fit in a C int

2015-07-10 Thread Chandran Anjur Narasimhan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14622651#comment-14622651
 ] 

Chandran Anjur Narasimhan edited comment on CASSANDRA-9729 at 7/10/15 5:49 PM:
---

Here is the schema for this CF:

cqlsh:ccc> desc COLUMNFAMILY task_result;

CREATE TABLE ccc.task_result (
submissionid text,
ezid text,
name text,
time timestamp,
analyzed_index_root text,
analyzed_log_path text,
clientid text,
end_time timestamp,
jenkins_path text,
log_file_path text,
path_available boolean,
path_to_task text,
required_for_overall_status boolean,
start_time timestamp,
state text,
status text,
translated_criteria_status text,
type text,
PRIMARY KEY (submissionid, ezid, name, time)
) WITH CLUSTERING ORDER BY (ezid ASC, name ASC, time ASC)
AND bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = 'Stores results of each task'
AND compaction = {'min_threshold': '4', 'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.0
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.1
AND speculative_retry = '99.0PERCENTILE';
CREATE INDEX ez_task_result_task_type ON ccc.ez_task_result (type);

let me know if you need more info



was (Author: reach.nchan):
Here is the schema for this CF:

cqlsh:ccc> desc COLUMNFAMILY ez_task_result;

CREATE TABLE ccc.task_result (
submissionid text,
ezid text,
name text,
time timestamp,
analyzed_index_root text,
analyzed_log_path text,
clientid text,
end_time timestamp,
jenkins_path text,
log_file_path text,
path_available boolean,
path_to_task text,
required_for_overall_status boolean,
start_time timestamp,
state text,
status text,
translated_criteria_status text,
type text,
PRIMARY KEY (submissionid, ezid, name, time)
) WITH CLUSTERING ORDER BY (ezid ASC, name ASC, time ASC)
AND bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = 'Stores results of each task'
AND compaction = {'min_threshold': '4', 'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.0
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.1
AND speculative_retry = '99.0PERCENTILE';
CREATE INDEX ez_task_result_task_type ON ccc.ez_task_result (type);

let me know if you need more info


 CQLSH exception - OverflowError: normalized days too large to fit in a C int
 

 Key: CASSANDRA-9729
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9729
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: OSX 10.10.2
Reporter: Chandran Anjur Narasimhan
  Labels: cqlsh

 Running a select command using CQLSH 2.1.5 or 2.1.7 throws an exception. This 
 works fine in version 2.0.14.
 Environment:
 
 JAVA - 1.8
 Python - 2.7.6
 Cassandra Server - 2.1.7
 CQLSH - 5.0.1
 Logs:
 ==
 CQLSH - cassandra 2.0.14 - working with no issues
 -
 NCHAN-M-D0LZ:apache nchan$ cd apache-cassandra-2.0.14/
 NCHAN-M-D0LZ:apache-cassandra-2.0.14 nchan$ bin/cqlsh
 Connected to CCC Multi-Region Cassandra Cluster at myip:9160.
 [cqlsh 4.1.1 | Cassandra 2.1.7 | CQL spec 3.1.1 | Thrift protocol 19.39.0]
 Use HELP for help.
 cqlsh> use ccc;
 cqlsh:ccc> select count(*) from task_result where 
 submissionid='40f89a3d1f4711e5ac2b005056bb0e8b';
  count
 ---
 25
 (1 rows)
 cqlsh:ccc> select * from task_result where 
 submissionid='40f89a3d1f4711e5ac2b005056bb0e8b';
  I get all the 25 values
 CQLSH - cassandra 2.1.5  - python exception
 -
 NCHAN-M-D0LZ:apache-cassandra-2.1.5 nchan$ bin/cqlsh
 Connected to CCC Multi-Region Cassandra Cluster at ip-address:9042.
 [cqlsh 5.0.1 | Cassandra 2.1.7 | CQL spec 3.2.0 | Native protocol v3]
 Use HELP for help.
 cqlsh> use ccc;
 cqlsh:ccc> select count(*) from task_result where 
 submissionid='40f89a3d1f4711e5ac2b005056bb0e8b';
  count
 ---
 25
 (1 rows)
 cqlsh:ccc> select * 

[jira] [Commented] (CASSANDRA-9448) Metrics should use up to date nomenclature

2015-07-10 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14622693#comment-14622693
 ] 

Yuki Morishita commented on CASSANDRA-9448:
---

[~stefania_alborghetti] Thanks! So it is possible to have both old and new names 
for metrics.
Even though we cannot mark the old ones as {{@Deprecated}} for technical 
reasons, this is enough for the transition.

I created [pull request|https://github.com/stef1927/cassandra/pull/1] for your 
branch to use new names for {{ColumnFamilyStoreMBean}}.
If that looks good, I will commit the change.

 Metrics should use up to date nomenclature
 --

 Key: CASSANDRA-9448
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9448
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Sam Tunnicliffe
Assignee: Stefania
  Labels: docs-impacting, jmx
 Fix For: 3.0 beta 1


 There are a number of exposed metrics that currently are named using the old 
 nomenclature of columnfamily and rows (meaning partitions).
 It would be good to audit all metrics and update any names to match what they 
 actually represent; we should probably do that in a single sweep to avoid a 
 confusing mixture of old and new terminology. 
 As we'd need to do this in a major release, I've initially set the fixver for 
 3.0 beta1.





[jira] [Commented] (CASSANDRA-9771) Revert CASSANDRA-9542 (allow native functions in UDA)

2015-07-10 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14622707#comment-14622707
 ] 

Robert Stupp commented on CASSANDRA-9771:
-

Reverted CASSANDRA-9542.
[Branch for 2.2|https://github.com/snazy/cassandra/tree/9771-revert-9542-2.2]
[Branch for 
trunk|https://github.com/snazy/cassandra/tree/9771-revert-9542-trunk]
[2.2 
testall|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-9772-revert-9542-2.2-testall/]
[2.2 
dtests|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-9772-revert-9542-2.2-dtest/]
[trunk 
testall|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-9772-revert-9542-trunk-testall/]
[trunk 
dtests|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-9772-revert-9542-trunk-dtest/]
(cassci should start soon)

 Revert CASSANDRA-9542 (allow native functions in UDA)
 -

 Key: CASSANDRA-9771
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9771
 Project: Cassandra
  Issue Type: Task
Reporter: Robert Stupp
Assignee: Robert Stupp
Priority: Blocker
 Fix For: 2.2.0


 As [noted in this 
 comment|https://issues.apache.org/jira/browse/CASSANDRA-9542?focusedCommentId=14620414&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14620414]
  of CASSANDRA-9542, we should revert it.
 Setting priority to blocker, since once 9542 gets into 2.2.0, we cannot 
 revert it.
 Will provide a patch soon.





[jira] [Created] (CASSANDRA-9777) If you have a ~/.cqlshrc and a ~/.cassandra/cqlshrc, cqlsh will overwrite the latter with the former

2015-07-10 Thread Jon Moses (JIRA)
Jon Moses created CASSANDRA-9777:


 Summary: If you have a ~/.cqlshrc and a ~/.cassandra/cqlshrc, 
cqlsh will overwrite the latter with the former
 Key: CASSANDRA-9777
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9777
 Project: Cassandra
  Issue Type: Bug
Reporter: Jon Moses


If you have a ~/.cqlshrc file and a ~/.cassandra/cqlshrc file, when you run 
`cqlsh`, it will overwrite the latter with the former.  
https://github.com/apache/cassandra/blob/trunk/bin/cqlsh#L202

If the 'new' path exists (~/.cassandra/cqlshrc), cqlsh should either WARN or just 
leave the files alone.

{noformat}
~$ cat .cqlshrc
[authentication]
~$ cat .cassandra/cqlshrc
[connection]
~$ cqlsh
~$ cat .cqlshrc
cat: .cqlshrc: No such file or directory
~$ cat .cassandra/cqlshrc
[authentication]
~$

{noformat}
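
A non-destructive migration along the lines suggested above could look like the following sketch. The function name, return values, and warning text are hypothetical, not cqlsh's actual code:

```python
import os
import shutil


def migrate_cqlshrc(old_path="~/.cqlshrc", new_path="~/.cassandra/cqlshrc"):
    """Move the legacy config to the new location, but never clobber it."""
    old = os.path.expanduser(old_path)
    new = os.path.expanduser(new_path)
    if not os.path.exists(old):
        return "no-op"
    if os.path.exists(new):
        # Both files exist: warn and leave both files alone.
        print("Warning: both %s and %s exist; using %s" % (old, new, new))
        return "kept-both"
    os.makedirs(os.path.dirname(new), exist_ok=True)
    shutil.move(old, new)
    return "migrated"
```

The key difference from the reported behavior is the early return when the new path already exists: the user's `~/.cassandra/cqlshrc` is never overwritten, and the stale `~/.cqlshrc` is only moved when the destination is free.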





[jira] [Commented] (CASSANDRA-9771) Revert CASSANDRA-9542 (allow native functions in UDA)

2015-07-10 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14622713#comment-14622713
 ] 

Aleksey Yeschenko commented on CASSANDRA-9771:
--

Aside from the typo in {{CQL.textile}}, the revert LGTM. So long as cassci is 
happy.

 Revert CASSANDRA-9542 (allow native functions in UDA)
 -

 Key: CASSANDRA-9771
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9771
 Project: Cassandra
  Issue Type: Task
Reporter: Robert Stupp
Assignee: Robert Stupp
Priority: Blocker
 Fix For: 2.2.0, 3.0.0 rc1


 As [noted in this 
 comment|https://issues.apache.org/jira/browse/CASSANDRA-9542?focusedCommentId=14620414&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14620414]
  of CASSANDRA-9542, we should revert it.
 Setting priority to blocker, since once 9542 gets into 2.2.0, we cannot 
 revert it.
 Will provide a patch soon.





[jira] [Commented] (CASSANDRA-9729) CQLSH exception - OverflowError: normalized days too large to fit in a C int

2015-07-10 Thread Chandran Anjur Narasimhan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14622651#comment-14622651
 ] 

Chandran Anjur Narasimhan commented on CASSANDRA-9729:
--

Here is the schema for this CF:

cqlsh:ccc> desc COLUMNFAMILY ez_task_result;

CREATE TABLE ccc.task_result (
submissionid text,
ezid text,
name text,
time timestamp,
analyzed_index_root text,
analyzed_log_path text,
clientid text,
end_time timestamp,
jenkins_path text,
log_file_path text,
path_available boolean,
path_to_task text,
required_for_overall_status boolean,
start_time timestamp,
state text,
status text,
translated_criteria_status text,
type text,
PRIMARY KEY (submissionid, ezid, name, time)
) WITH CLUSTERING ORDER BY (ezid ASC, name ASC, time ASC)
AND bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = 'Stores results of each task'
AND compaction = {'min_threshold': '4', 'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.0
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.1
AND speculative_retry = '99.0PERCENTILE';
CREATE INDEX ez_task_result_task_type ON ccc.ez_task_result (type);

let me know if you need more info
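
The OverflowError in the quoted ticket comes from cqlsh's Python driver converting a timestamp column into a datetime: a value far outside datetime's range overflows the C int day count that CPython uses internally (the exact message varies with the code path). A minimal reproduction of that class of overflow; the ms-since-epoch conversion here is a simplified sketch, not the driver's actual code:

```python
import datetime

EPOCH = datetime.datetime(1970, 1, 1)


def timestamp_to_datetime(epoch_ms):
    """Convert a Cassandra timestamp (ms since the epoch) to a datetime.

    Extreme or corrupt millisecond values raise OverflowError, because
    the normalized day count no longer fits in a C int inside CPython's
    datetime implementation.
    """
    return EPOCH + datetime.timedelta(milliseconds=epoch_ms)
```

So a single row with a garbage timestamp value is enough to make the whole `select *` fail in the newer cqlsh, while `count(*)` (which never converts the column) still works, matching the behavior in the logs.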


 CQLSH exception - OverflowError: normalized days too large to fit in a C int
 

 Key: CASSANDRA-9729
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9729
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: OSX 10.10.2
Reporter: Chandran Anjur Narasimhan
  Labels: cqlsh

 Running a select command using CQLSH 2.1.5 or 2.1.7 throws an exception. This 
 works fine in version 2.0.14.
 Environment:
 
 JAVA - 1.8
 Python - 2.7.6
 Cassandra Server - 2.1.7
 CQLSH - 5.0.1
 Logs:
 ==
 CQLSH - cassandra 2.0.14 - working with no issues
 -
 NCHAN-M-D0LZ:apache nchan$ cd apache-cassandra-2.0.14/
 NCHAN-M-D0LZ:apache-cassandra-2.0.14 nchan$ bin/cqlsh
 Connected to CCC Multi-Region Cassandra Cluster at myip:9160.
 [cqlsh 4.1.1 | Cassandra 2.1.7 | CQL spec 3.1.1 | Thrift protocol 19.39.0]
 Use HELP for help.
 cqlsh> use ccc;
 cqlsh:ccc> select count(*) from task_result where 
 submissionid='40f89a3d1f4711e5ac2b005056bb0e8b';
  count
 ---
 25
 (1 rows)
 cqlsh:ccc> select * from task_result where 
 submissionid='40f89a3d1f4711e5ac2b005056bb0e8b';
  I get all the 25 values
 CQLSH - cassandra 2.1.5  - python exception
 -
 NCHAN-M-D0LZ:apache-cassandra-2.1.5 nchan$ bin/cqlsh
 Connected to CCC Multi-Region Cassandra Cluster at ip-address:9042.
 [cqlsh 5.0.1 | Cassandra 2.1.7 | CQL spec 3.2.0 | Native protocol v3]
 Use HELP for help.
 cqlsh> use ccc;
 cqlsh:ccc> select count(*) from task_result where 
 submissionid='40f89a3d1f4711e5ac2b005056bb0e8b';
  count
 ---
 25
 (1 rows)
 cqlsh:ccc> select * from task_result where 
 submissionid='40f89a3d1f4711e5ac2b005056bb0e8b';
 Traceback (most recent call last):
   File "bin/cqlsh", line 1001, in perform_simple_statement
 rows = self.session.execute(statement, trace=self.tracing_enabled)
   File 
 "/Users/nchan/Programs/apache/apache-cassandra-2.1.5/bin/../lib/cassandra-driver-internal-only-2.5.0.zip/cassandra-driver-2.5.0/cassandra/cluster.py",
  line 1404, in execute
 result = future.result(timeout)
   File 
 "/Users/nchan/Programs/apache/apache-cassandra-2.1.5/bin/../lib/cassandra-driver-internal-only-2.5.0.zip/cassandra-driver-2.5.0/cassandra/cluster.py",
  line 2974, in result
 raise self._final_exception
 OverflowError: normalized days too large to fit in a C int
 cqlsh:ccc> 
 CQLSH - cassandra 2.1.7 - python exception
 -
 NCHAN-M-D0LZ:apache-cassandra-2.1.7 nchan$ bin/cqlsh
 Connected to CCC Multi-Region Cassandra Cluster at 171.71.189.11:9042.
 [cqlsh 5.0.1 | Cassandra 2.1.7 | CQL spec 3.2.0 | Native protocol v3]
 Use HELP for help.
 cqlsh> use ccc;
 cqlsh:ccc> select count(*) from task_result where 
 submissionid='40f89a3d1f4711e5ac2b005056bb0e8b';
  count
 ---
 25
 (1 rows)
 cqlsh:ccc> select * from task_result where 
 submissionid='40f89a3d1f4711e5ac2b005056bb0e8b';
 Traceback (most recent call last):
   File "bin/cqlsh", line 1041, in perform_simple_statement
 rows = self.session.execute(statement, trace=self.tracing_enabled)
   File 
 

cassandra git commit: Make CFMetaData.triggers immutable

2015-07-10 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk a827a3717 -> 16044a6f4


Make CFMetaData.triggers immutable

patch by Aleksey Yeschenko; reviewed by Robert Stupp for CASSANDRA-9712


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/16044a6f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/16044a6f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/16044a6f

Branch: refs/heads/trunk
Commit: 16044a6f4c19a899172efc8b2d0ac3e4723d4c88
Parents: a827a37
Author: Aleksey Yeschenko <alek...@apache.org>
Authored: Thu Jul 2 16:46:14 2015 +0300
Committer: Aleksey Yeschenko <alek...@apache.org>
Committed: Fri Jul 10 20:08:59 2015 +0300

--
 .../org/apache/cassandra/config/CFMetaData.java |  31 +
 .../cassandra/config/TriggerDefinition.java |  69 --
 .../cql3/statements/CreateTriggerStatement.java |  22 +--
 .../cql3/statements/DropTriggerStatement.java   |  20 ++-
 .../cassandra/schema/LegacySchemaMigrator.java  |  13 +-
 .../apache/cassandra/schema/SchemaKeyspace.java |  51 ---
 .../cassandra/schema/TriggerMetadata.java   |  72 ++
 .../org/apache/cassandra/schema/Triggers.java   | 137 +++
 .../cassandra/thrift/ThriftConversion.java  |  21 +--
 .../cassandra/triggers/TriggerExecutor.java |   9 +-
 .../cql3/validation/operations/CreateTest.java  |  46 +++
 .../schema/LegacySchemaMigratorTest.java|   8 +-
 .../cassandra/triggers/TriggerExecutorTest.java |  24 ++--
 .../cassandra/triggers/TriggersSchemaTest.java  |  26 ++--
 14 files changed, 348 insertions(+), 201 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/16044a6f/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java 
b/src/java/org/apache/cassandra/config/CFMetaData.java
index f3c8bc1..53d2171 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -49,6 +49,7 @@ import org.apache.cassandra.io.compress.CompressionParameters;
 import org.apache.cassandra.io.compress.LZ4Compressor;
 import org.apache.cassandra.io.util.DataOutputPlus;
 import org.apache.cassandra.schema.SchemaKeyspace;
+import org.apache.cassandra.schema.Triggers;
 import org.apache.cassandra.utils.ByteBufferUtil;
 import org.apache.cassandra.utils.FBUtilities;
 import org.apache.cassandra.utils.UUIDGen;
@@ -190,8 +191,8 @@ public final class CFMetaData
 private volatile int memtableFlushPeriod = 0;
 private volatile int defaultTimeToLive = DEFAULT_DEFAULT_TIME_TO_LIVE;
 private volatile SpeculativeRetry speculativeRetry = 
DEFAULT_SPECULATIVE_RETRY;
-private volatile Map<ColumnIdentifier, DroppedColumn> droppedColumns = new 
HashMap<>();
-private volatile Map<String, TriggerDefinition> triggers = new HashMap<>();
+private volatile Map<ColumnIdentifier, DroppedColumn> droppedColumns = new 
HashMap<>();
+private volatile Triggers triggers = Triggers.none();
 private volatile boolean isPurged = false;
 /*
  * All CQL3 columns definition are stored in the columnMetadata map.
@@ -237,7 +238,7 @@ public final class CFMetaData
 public CFMetaData defaultTimeToLive(int prop) {defaultTimeToLive = prop; 
return this;}
 public CFMetaData speculativeRetry(SpeculativeRetry prop) 
{speculativeRetry = prop; return this;}
 public CFMetaData droppedColumns(Map<ColumnIdentifier, DroppedColumn> 
cols) {droppedColumns = cols; return this;}
-public CFMetaData triggers(Map<String, TriggerDefinition> prop) {triggers 
= prop; return this;}
+public CFMetaData triggers(Triggers prop) {triggers = prop; return this;}
 
 private CFMetaData(String keyspace,
String name,
@@ -352,7 +353,7 @@ public final class CFMetaData
 return CFMetaData.Builder.create(keyspace, 
name).addPartitionKey(key, BytesType.instance).build();
 }
 
-public Map<String, TriggerDefinition> getTriggers()
+public Triggers getTriggers()
 {
 return triggers;
 }
@@ -467,7 +468,7 @@ public final class CFMetaData
   .speculativeRetry(oldCFMD.speculativeRetry)
   .memtableFlushPeriod(oldCFMD.memtableFlushPeriod)
  .droppedColumns(new HashMap<>(oldCFMD.droppedColumns))
-  .triggers(new HashMap<>(oldCFMD.triggers));
+  .triggers(oldCFMD.triggers);
 }
 
 /**
@@ -1198,24 +1199,6 @@ public final class CFMetaData
 return removed;
 }
 
-public void addTriggerDefinition(TriggerDefinition def) throws 
InvalidRequestException
-{
-if (containsTriggerDefinition(def))
-throw new InvalidRequestException(

[jira] [Commented] (CASSANDRA-9712) Refactor CFMetaData

2015-07-10 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14622555#comment-14622555
 ] 

Robert Stupp commented on CASSANDRA-9712:
-

ship it (unless cassci complains)

 Refactor CFMetaData
 ---

 Key: CASSANDRA-9712
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9712
 Project: Cassandra
  Issue Type: Improvement
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 3.x


 As part of CASSANDRA-9425 and a follow-up to CASSANDRA-9665, and a 
 pre-requisite for new schema change protocol, this ticket will do the 
 following
 1. Make the triggers {{HashMap}} immutable (new {{Triggers}} class)
 2. Allow multiple 2i definitions per column in CFMetaData
 3. to be filled in
 4. Rename and move {{config.CFMetaData}} to {{schema.TableMetadata}}
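
The idea behind point 1, replacing a mutable {{HashMap}} with a dedicated immutable collection that returns a new instance on every change, can be sketched as follows. This is a hypothetical Python analogue for illustration, not the actual Java {{Triggers}} class:

```python
class Triggers:
    """Immutable collection of trigger definitions (illustrative sketch)."""

    def __init__(self, triggers=()):
        # Index by name; the dict is private and never exposed for mutation.
        self._by_name = {t["name"]: t for t in triggers}

    @classmethod
    def none(cls):
        """The empty collection, analogous to Triggers.none()."""
        return cls()

    def with_trigger(self, trigger):
        # Never mutate in place: return a new collection with the addition.
        if trigger["name"] in self._by_name:
            raise ValueError("trigger %s already exists" % trigger["name"])
        return Triggers(list(self._by_name.values()) + [trigger])

    def __contains__(self, name):
        return name in self._by_name
```

Because every "modification" produces a fresh instance, a table's metadata can share the collection freely across threads without defensive copying, which is what the commit above achieves in {{CFMetaData}}.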





[jira] [Commented] (CASSANDRA-9705) Simplify some of 8099's concrete implementations

2015-07-10 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14622556#comment-14622556
 ] 

Sylvain Lebresne commented on CASSANDRA-9705:
-

Pushed a branch for this 
[here|https://github.com/pcmanus/cassandra/commits/9705]. It removes all use of 
flyweights, and basically rewrites all Row, Cell and Partition implementations. 
The result is admittedly a lot simpler and less error prone. I suspect it's 
also faster, but I haven't looked at that much yet.

The patch is actually not very small, because the change of implementation 
allowed a lot of related simplifications. Hopefully this won't be too 
disruptive to other patches, but I'd still be happy if we can get it in as 
fast as possible, if only because I'd rather not spend too much time fixing 
unit tests and dtests that this fixes or makes easier to fix.

Concretely there are 3 commits:
* the first one updates the implementations of {{ClusteringPrefix}} and its 
subclasses. That one is pretty simple and self-contained.
* the second one is the main meat. It rewrites most of the rest and was 
unfortunately much harder to split into smaller pieces.
* the last one is kind of a follow-up: we're currently using {{LivenessInfo}} 
for both {{Row}} and {{Cell}}, but after the previous patches it's barely used 
by {{Cell}}. So that 3rd patch makes it rows-only, which makes things 
cleaner/simpler anyway.

I plan on doing another review of the whole patch on Monday and to add comments 
where they are missing so there may be a few minor updates then, but it's 
basically ready for review otherwise.



 Simplify some of 8099's concrete implementations
 

 Key: CASSANDRA-9705
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9705
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 3.0 beta 1


 As mentioned in the ticket comments, some of the concrete implementations 
 (for Cell, Row, Clustering, PartitionUpdate, ...) of the initial patch for 
 CASSANDRA-8099 are more complex than they should be (the use of flyweights is 
 probably ill-fitted), which likely has performance consequences. 
 This ticket is to track the refactoring/simplification of those 
 implementations (mainly by removing the use of flyweights and simplifying 
 accordingly).





[jira] [Commented] (CASSANDRA-9774) fix sstableverify dtest

2015-07-10 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14622523#comment-14622523
 ] 

Benedict commented on CASSANDRA-9774:
-

I'm afraid I'm not the best person to ask about that, as I'm not familiar with 
these tools. All I can tell you is that whenever you see "was not released 
before the reference was garbage collected", it is a bug, not a feature :)

 fix sstableverify dtest
 ---

 Key: CASSANDRA-9774
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9774
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Jim Witschey
Assignee: Jim Witschey
Priority: Blocker
 Fix For: 3.0.x


 One of our dtests for {{sstableverify}} 
 ({{offline_tools_test.py:TestOfflineTools.sstableverify_test}}) is failing 
 hard on trunk ([cassci 
 history|http://cassci.datastax.com/view/trunk/job/trunk_dtest/lastCompletedBuild/testReport/offline_tools_test/TestOfflineTools/sstableverify_test/history/])
 The way the test works is by deleting an SSTable, then running 
 {{sstableverify}} on its table. In earlier versions, it successfully detects 
 this problem and outputs that it "was not released before the reference was 
 garbage collected". The test no longer finds this string in the output; 
 looking through the output of the test, it doesn't look like it reports any 
 problems at all.
 EDIT: After digging into the C* source a bit, I may have misattributed the 
 problem to {{sstableverify}}; this could be a more general memory management 
 problem, as the error text expected in the dtest is emitted by part of the 
 {{Ref}} implementation:
 https://github.com/apache/cassandra/blob/075ff5000ced24b42f3b540815cae471bee4049d/src/java/org/apache/cassandra/utils/concurrent/Ref.java#L187





[jira] [Updated] (CASSANDRA-9705) Simplify some of 8099's concrete implementations

2015-07-10 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-9705:

Reviewer: Benedict

 Simplify some of 8099's concrete implementations
 

 Key: CASSANDRA-9705
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9705
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 3.0 beta 1


 As mentioned in the ticket comments, some of the concrete implementations 
 (for Cell, Row, Clustering, PartitionUpdate, ...) of the initial patch for 
 CASSANDRA-8099 are more complex than they should be (the use of flyweights is 
 probably ill-fitted), which likely has performance consequences. 
 This ticket is to track the refactoring/simplification of those 
 implementations (mainly by removing the use of flyweights and simplifying 
 accordingly).





[jira] [Commented] (CASSANDRA-9712) Refactor CFMetaData

2015-07-10 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14622601#comment-14622601
 ] 

Aleksey Yeschenko commented on CASSANDRA-9712:
--

cassci complains, but not about anything new 
(http://cassci.datastax.com/view/Dev/view/iamaleksey/job/iamaleksey-9712-dtest/,
 
http://cassci.datastax.com/view/Dev/view/iamaleksey/job/iamaleksey-9712-testall/).

Anyway, committed to trunk as {{16044a6f4c19a899172efc8b2d0ac3e4723d4c88}}, 
thanks.

 Refactor CFMetaData
 ---

 Key: CASSANDRA-9712
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9712
 Project: Cassandra
  Issue Type: Improvement
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 3.x


 As part of CASSANDRA-9425 and a follow-up to CASSANDRA-9665, and a 
 pre-requisite for new schema change protocol, this ticket will do the 
 following
 1. Make the triggers {{HashMap}} immutable (new {{Triggers}} class)
 2. Allow multiple 2i definitions per column in CFMetaData
 3. to be filled in
 4. Rename and move {{config.CFMetaData}} to {{schema.TableMetadata}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6477) Materialized Views (was: Global Indexes)

2015-07-10 Thread Alan Boudreault (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14622833#comment-14622833
 ] 

Alan Boudreault commented on CASSANDRA-6477:


Using Carl's branch 6477-rebase


 Materialized Views (was: Global Indexes)
 

 Key: CASSANDRA-6477
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6477
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Reporter: Jonathan Ellis
Assignee: Carl Yeksigian
  Labels: cql
 Fix For: 3.0 beta 1

 Attachments: test-view-data.sh, users.yaml


 Local indexes are suitable for low-cardinality data, where spreading the 
 index across the cluster is a Good Thing.  However, for high-cardinality 
 data, local indexes require querying most nodes in the cluster even if only a 
 handful of rows is returned.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Remove dead iSchemaKeyspace/LegacySchemaMigrator code

2015-07-10 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk 16044a6f4 -> c734cb8b6


Remove dead iSchemaKeyspace/LegacySchemaMigrator code


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c734cb8b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c734cb8b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c734cb8b

Branch: refs/heads/trunk
Commit: c734cb8b60c9bc96303d0cf5b77a7eabec5a49e4
Parents: 16044a6
Author: Aleksey Yeschenko alek...@apache.org
Authored: Fri Jul 10 21:50:16 2015 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Jul 10 22:16:46 2015 +0300

--
 .../org/apache/cassandra/db/SystemKeyspace.java |  1 -
 .../cassandra/schema/LegacySchemaMigrator.java  |  7 +-
 .../apache/cassandra/schema/SchemaKeyspace.java | 89 ++--
 .../schema/LegacySchemaMigratorTest.java|  2 -
 4 files changed, 6 insertions(+), 93 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c734cb8b/src/java/org/apache/cassandra/db/SystemKeyspace.java
--
diff --git a/src/java/org/apache/cassandra/db/SystemKeyspace.java 
b/src/java/org/apache/cassandra/db/SystemKeyspace.java
index f0c91d6..e8247a3 100644
--- a/src/java/org/apache/cassandra/db/SystemKeyspace.java
+++ b/src/java/org/apache/cassandra/db/SystemKeyspace.java
@@ -301,7 +301,6 @@ public final class SystemKeyspace
 + "default_time_to_live int,"
 + "default_validator text,"
 + "dropped_columns map<text, bigint>,"
-+ "dropped_columns_types map<text, text>,"
 + "gc_grace_seconds int,"
 + "is_dense boolean,"
 + "key_validator text,"

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c734cb8b/src/java/org/apache/cassandra/schema/LegacySchemaMigrator.java
--
diff --git a/src/java/org/apache/cassandra/schema/LegacySchemaMigrator.java 
b/src/java/org/apache/cassandra/schema/LegacySchemaMigrator.java
index 4748820..996b5ff 100644
--- a/src/java/org/apache/cassandra/schema/LegacySchemaMigrator.java
+++ b/src/java/org/apache/cassandra/schema/LegacySchemaMigrator.java
@@ -330,12 +330,7 @@ public final class LegacySchemaMigrator
 cfm.bloomFilterFpChance(cfm.getBloomFilterFpChance());
 
 if (tableRow.has("dropped_columns"))
-{
-Map<String, String> types = tableRow.has("dropped_columns_types")
-  ? tableRow.getMap("dropped_columns_types", UTF8Type.instance, UTF8Type.instance)
-  : Collections.<String, String>emptyMap();
-addDroppedColumns(cfm, tableRow.getMap("dropped_columns", UTF8Type.instance, LongType.instance), types);
-}
+addDroppedColumns(cfm, tableRow.getMap("dropped_columns", UTF8Type.instance, LongType.instance), Collections.emptyMap());
 
 cfm.triggers(createTriggersFromTriggerRows(triggerRows));
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c734cb8b/src/java/org/apache/cassandra/schema/SchemaKeyspace.java
--
diff --git a/src/java/org/apache/cassandra/schema/SchemaKeyspace.java 
b/src/java/org/apache/cassandra/schema/SchemaKeyspace.java
index 5aad59f..8411104 100644
--- a/src/java/org/apache/cassandra/schema/SchemaKeyspace.java
+++ b/src/java/org/apache/cassandra/schema/SchemaKeyspace.java
@@ -1010,18 +1010,11 @@ public final class SchemaKeyspace
 
 // We don't really use the default validator but as we have it for backward compatibility, we use it to know if it's a counter table
 AbstractType<?> defaultValidator = TypeParser.parse(result.getString("default_validator"));
-boolean isCounter =  defaultValidator instanceof CounterColumnType;
+boolean isCounter = defaultValidator instanceof CounterColumnType;

 UUID cfId = result.getUUID("cf_id");

 boolean isCQLTable = !isSuper && !isDense && isCompound;
-boolean isStaticCompactTable = !isDense && !isCompound;
-
-// Internally, compact tables have a specific layout, see CompactTables. But when upgrading from
-// previous versions, they may not have the expected schema, so detect if we need to upgrade and do
-// it in createColumnsFromColumnRows.
-// We can remove this once we don't support upgrade from versions < 3.0.
-boolean needsUpgrade = !isCQLTable && checkNeedsUpgrade(serializedColumnDefinitions, isSuper, isStaticCompactTable);

 List<ColumnDefinition> columnDefs = createColumnsFromColumnRows(serializedColumnDefinitions,
 

[jira] [Commented] (CASSANDRA-6477) Materialized Views (was: Global Indexes)

2015-07-10 Thread Alan Boudreault (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14622809#comment-14622809
 ] 

Alan Boudreault commented on CASSANDRA-6477:


Okay, I've been working on these comparisons but haven't been able to produce 
useful results due to an issue I hit. I am doing my benchmarks on ec2 with a 
cluster of 3 nodes. Basically, I can get realistic and useful results with C* 
stock (no MV) and C* with a Secondary Index (between 7 and 85000 op/s). 
When it comes to testing C* with 1 MV, I get many WriteTimeoutExceptions, 
which results in a performance of 100 operations per second. I have been able 
to reproduce that 100 op/s locally using a 3-node cluster. The issue doesn't 
seem to be present when using a single-node cluster.

I've profiled one of the nodes and it looks like most of the time is spent in 
io.netty.channel.epoll.EpollEventLoop.epollWait() (about 75% of the time).

Here's a yourkit snapshot of the first node of the cluster.
http://dl.alanb.ca/CassandraDaemon-cluster-3-nodes-2015-07-10.snapshot.zip

I've attached my users.yaml profile that I am using for testing: [^users.yaml]

Here's the materialized view creation statement:
{code}
CREATE MATERIALIZED VIEW perftesting.users_by_first_name AS SELECT * FROM 
perftesting.users PRIMARY KEY (first_name);
{code}

Here's the stress command I've been using:
{code}
cassandra-stress user profile=/path/to/users.yaml ops\(insert=1\) n=500 
no-warmup -pop seq=1..200M  no-wrap -rate threads=200 -node 
127.0.0.1,127.0.0.2,127.0.0.3
{code}


Let me know if I am doing anything wrong or if I can provide anything else to 
help. I'll provide the benchmarks as soon as I have a workaround for this issue.

 Materialized Views (was: Global Indexes)
 

 Key: CASSANDRA-6477
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6477
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Reporter: Jonathan Ellis
Assignee: Carl Yeksigian
  Labels: cql
 Fix For: 3.0 beta 1

 Attachments: test-view-data.sh


 Local indexes are suitable for low-cardinality data, where spreading the 
 index across the cluster is a Good Thing.  However, for high-cardinality 
 data, local indexes require querying most nodes in the cluster even if only a 
 handful of rows is returned.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-6477) Materialized Views (was: Global Indexes)

2015-07-10 Thread Alan Boudreault (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Boudreault updated CASSANDRA-6477:
---
Attachment: users.yaml

 Materialized Views (was: Global Indexes)
 

 Key: CASSANDRA-6477
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6477
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Reporter: Jonathan Ellis
Assignee: Carl Yeksigian
  Labels: cql
 Fix For: 3.0 beta 1

 Attachments: test-view-data.sh, users.yaml


 Local indexes are suitable for low-cardinality data, where spreading the 
 index across the cluster is a Good Thing.  However, for high-cardinality 
 data, local indexes require querying most nodes in the cluster even if only a 
 handful of rows is returned.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9729) CQLSH exception - OverflowError: normalized days too large to fit in a C int

2015-07-10 Thread Adam Holmberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14622777#comment-14622777
 ] 

Adam Holmberg commented on CASSANDRA-9729:
--

[~reach.nchan] that confirms that we are working with timestamp types, which is 
what I suspected. Perhaps we could determine if you have incorrectly-encoded 
timestamps as follows:
{code}
select time, timestampAsBlob(time), start_time, timestampAsBlob(start_time)
from task_result where submissionid='40f89a3d1f4711e5ac2b005056bb0e8b';
{code}
Running this in Cassandra 2.0 cqlsh should show us if the raw values are scaled 
incorrectly.
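The scaling hypothesis can also be checked offline by decoding the blobs by hand. A minimal sketch, assuming the standard Cassandra timestamp encoding (big-endian signed 64-bit milliseconds since the Unix epoch); {{decode_cassandra_timestamp}} is a hypothetical helper for illustration, not driver code:

```python
import struct
from datetime import datetime, timedelta

def decode_cassandra_timestamp(blob_hex):
    """Decode a timestampAsBlob() value: a big-endian signed 64-bit
    count of milliseconds since the Unix epoch (UTC)."""
    if blob_hex.startswith("0x"):
        blob_hex = blob_hex[2:]
    raw = bytes.fromhex(blob_hex)
    # Left-pad short blobs to 8 bytes (fine here: the values are positive).
    (millis,) = struct.unpack(">q", raw.rjust(8, b"\x00"))
    return datetime(1970, 1, 1) + timedelta(milliseconds=millis)

# A well-formed blob from the ticket decodes to a plausible 2015 date:
print(decode_cassandra_timestamp("0x014e455aed9a"))  # 2015-06-30 16:44:23.834000 (UTC)

# A mis-encoded blob overflows Python's datetime range, which is the
# OverflowError the cqlsh traceback shows:
try:
    decode_cassandra_timestamp("0x74be3c152aa4ed80")
except OverflowError as e:
    print("OverflowError:", e)
```

Decoding the suspicious start_time blobs this way should tell us whether they are merely scaled wrong or outright garbage.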

If scaling is the issue, we'll need to determine if this workaround is safe and 
worth putting in the new driver/cqlsh. How you handle this in the application 
tier depends on what clients you have in use.

Please let me know what you find out.

 CQLSH exception - OverflowError: normalized days too large to fit in a C int
 

 Key: CASSANDRA-9729
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9729
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: OSX 10.10.2
Reporter: Chandran Anjur Narasimhan
  Labels: cqlsh

 Running a select command using CQLSH 2.1.5 or 2.1.7 throws an exception. This 
 works fine in version 2.0.14.
 Environment:
 
 JAVA - 1.8
 Python - 2.7.6
 Cassandra Server - 2.1.7
 CQLSH - 5.0.1
 Logs:
 ==
 CQLSH - cassandra 2.0.14 - working with no issues
 -
 NCHAN-M-D0LZ:apache nchan$ cd apache-cassandra-2.0.14/
 NCHAN-M-D0LZ:apache-cassandra-2.0.14 nchan$ bin/cqlsh
 Connected to CCC Multi-Region Cassandra Cluster at myip:9160.
 [cqlsh 4.1.1 | Cassandra 2.1.7 | CQL spec 3.1.1 | Thrift protocol 19.39.0]
 Use HELP for help.
 cqlsh use ccc;
 cqlsh:ccc select count(*) from task_result where 
 submissionid='40f89a3d1f4711e5ac2b005056bb0e8b';
  count
 ---
 25
 (1 rows)
 cqlsh:ccc select * from task_result where 
 submissionid='40f89a3d1f4711e5ac2b005056bb0e8b';
  i get all the 25 values
 CQLSH - cassandra 2.1.5  - python exception
 -
 NCHAN-M-D0LZ:apache-cassandra-2.1.5 nchan$ bin/cqlsh
 Connected to CCC Multi-Region Cassandra Cluster at ip-address:9042.
 [cqlsh 5.0.1 | Cassandra 2.1.7 | CQL spec 3.2.0 | Native protocol v3]
 Use HELP for help.
 cqlsh use ccc;
 cqlsh:ccc select count(*) from task_result where 
 submissionid='40f89a3d1f4711e5ac2b005056bb0e8b';
  count
 ---
 25
 (1 rows)
 cqlsh:ccc select * from task_result where 
 submissionid='40f89a3d1f4711e5ac2b005056bb0e8b';
 Traceback (most recent call last):
   File "bin/cqlsh", line 1001, in perform_simple_statement
 rows = self.session.execute(statement, trace=self.tracing_enabled)
   File "/Users/nchan/Programs/apache/apache-cassandra-2.1.5/bin/../lib/cassandra-driver-internal-only-2.5.0.zip/cassandra-driver-2.5.0/cassandra/cluster.py", line 1404, in execute
 result = future.result(timeout)
   File "/Users/nchan/Programs/apache/apache-cassandra-2.1.5/bin/../lib/cassandra-driver-internal-only-2.5.0.zip/cassandra-driver-2.5.0/cassandra/cluster.py", line 2974, in result
 raise self._final_exception
 OverflowError: normalized days too large to fit in a C int
 cqlsh:ccc 
 CQLSH - cassandra 2.1.7 - python exception
 -
 NCHAN-M-D0LZ:apache-cassandra-2.1.7 nchan$ bin/cqlsh
 Connected to CCC Multi-Region Cassandra Cluster at 171.71.189.11:9042.
 [cqlsh 5.0.1 | Cassandra 2.1.7 | CQL spec 3.2.0 | Native protocol v3]
 Use HELP for help.
 cqlsh use ccc;
 cqlsh:ccc select count(*) from task_result where 
 submissionid='40f89a3d1f4711e5ac2b005056bb0e8b';
  count
 ---
 25
 (1 rows)
 cqlsh:ccc select * from task_result where 
 submissionid='40f89a3d1f4711e5ac2b005056bb0e8b';
 Traceback (most recent call last):
   File "bin/cqlsh", line 1041, in perform_simple_statement
 rows = self.session.execute(statement, trace=self.tracing_enabled)
   File "/Users/nchan/Programs/apache/apache-cassandra-2.1.7/bin/../lib/cassandra-driver-internal-only-2.5.1.zip/cassandra-driver-2.5.1/cassandra/cluster.py", line 1405, in execute
 result = future.result(timeout)
   File "/Users/nchan/Programs/apache/apache-cassandra-2.1.7/bin/../lib/cassandra-driver-internal-only-2.5.1.zip/cassandra-driver-2.5.1/cassandra/cluster.py", line 2976, in result
 raise self._final_exception
 OverflowError: normalized days too large to fit in a C int
 cqlsh:ccc 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9666) Provide an alternative to DTCS

2015-07-10 Thread Robbie Strickland (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14622822#comment-14622822
 ] 

Robbie Strickland commented on CASSANDRA-9666:
--

I'd like to second the changes [~krummas] suggested at the very least, as I 
agree that it's a saner scheme than tiers.  DTCS basically never stops 
compacting under normal conditions, and for most use cases there's little 
real benefit in compacting older data into larger sstables.  

 Provide an alternative to DTCS
 --

 Key: CASSANDRA-9666
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9666
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jeff Jirsa
Assignee: Jeff Jirsa
 Fix For: 2.1.x, 2.2.x


 DTCS is great for time series data, but it comes with caveats that make it 
 difficult to use in production (typical operator behaviors such as bootstrap, 
 removenode, and repair have MAJOR caveats as they relate to 
 max_sstable_age_days, and hints/read repair break the selection algorithm).
 I'm proposing an alternative, TimeWindowCompactionStrategy, that sacrifices 
 the tiered nature of DTCS in order to address some of DTCS' operational 
 shortcomings. I believe it is necessary to propose an alternative rather than 
 simply adjusting DTCS, because it fundamentally removes the tiered nature in 
 order to remove the parameter max_sstable_age_days - the result is very very 
 different, even if it is heavily inspired by DTCS. 
 Specifically, rather than creating a number of windows of ever increasing 
 sizes, this strategy allows an operator to choose the window size, compact 
 with STCS within the first window of that size, and aggressive compact down 
 to a single sstable once that window is no longer current. The window size is 
 a combination of unit (minutes, hours, days) and size (1, etc), such that an 
 operator can expect all data using a block of that size to be compacted 
 together (that is, if your unit is hours, and size is 6, you will create 
 roughly 4 sstables per day, each one containing roughly 6 hours of data). 
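 The windowing arithmetic described above can be sketched as follows (a 
 hypothetical helper for illustration, not code from the patch):
{code}
def window_start(timestamp_s, unit_seconds, size):
    """Start of the fixed-size time window containing timestamp_s.
    SSTables whose max timestamp falls in the same window are
    candidates to be compacted together."""
    window = unit_seconds * size
    return (timestamp_s // window) * window

HOUR = 3600
ts = 1435682663                 # 2015-06-30 16:44:23 UTC
same = ts + 1 * HOUR            # one hour later: same 6-hour window
later = ts + 2 * HOUR           # crosses into the next 6-hour window
assert window_start(ts, HOUR, 6) == window_start(same, HOUR, 6)
assert window_start(later, HOUR, 6) == window_start(ts, HOUR, 6) + 6 * HOUR
{code}
 With unit=hours and size=6 this yields four windows per day, matching the 
 example above.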
 The result addresses a number of the problems with 
 DateTieredCompactionStrategy:
 - At the present time, DTCS’s first window is compacted using an unusual 
 selection criteria, which prefers files with earlier timestamps, but ignores 
 sizes. In TimeWindowCompactionStrategy, the first window data will be 
 compacted with the well tested, fast, reliable STCS. All STCS options can be 
 passed to TimeWindowCompactionStrategy to configure the first window’s 
 compaction behavior.
 - HintedHandoff may put old data in new sstables, but it will have little 
 impact other than slightly reduced efficiency (sstables will cover a wider 
 range, but the old timestamps will not impact sstable selection criteria 
 during compaction)
 - ReadRepair may put old data in new sstables, but it will have little impact 
 other than slightly reduced efficiency (sstables will cover a wider range, 
 but the old timestamps will not impact sstable selection criteria during 
 compaction)
 - Small, old sstables resulting from streams of any kind will be swiftly and 
 aggressively compacted with the other sstables matching their similar 
 maxTimestamp, without causing sstables in neighboring windows to grow in size.
 - The configuration options are explicit and straightforward - the tuning 
 parameters leave little room for error. The window is set in common, easily 
 understandable terms such as “12 hours”, “1 Day”, “30 days”. The 
 minute/hour/day options are granular enough for users keeping data for hours, 
 and users keeping data for years. 
 - There is no explicitly configurable max sstable age, though sstables will 
 naturally stop compacting once new data is written in that window. 
 - Streaming operations can create sstables with old timestamps, and they'll 
 naturally be joined together with sstables in the same time bucket. This is 
 true for bootstrap/repair/sstableloader/removenode. 
 - It remains true that if old data and new data are written into the memtable 
 at the same time, the resulting sstables will be treated as if they were new 
 sstables; however, that no longer negatively impacts the compaction 
 strategy’s selection criteria for older windows. 
 Patch provided for both 2.1 ( 
 https://github.com/jeffjirsa/cassandra/commits/twcs-2.1 ) and 2.2 ( 
 https://github.com/jeffjirsa/cassandra/commits/twcs )



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9670) Cannot run CQL scripts on Windows AND having error Ubuntu Linux

2015-07-10 Thread Sanjay Patel (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14622846#comment-14622846
 ] 

Sanjay Patel commented on CASSANDRA-9670:
-

Philip, any workaround for this?
Thanks

 Cannot run CQL scripts on Windows AND having error Ubuntu Linux
 ---

 Key: CASSANDRA-9670
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9670
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: DataStax Community Edition 
 on Windows 7, 64 Bit and Ubuntu 
Reporter: Sanjay Patel
Assignee: Philip Thompson
  Labels: cqlsh
 Fix For: 2.1.x

 Attachments: cities.cql, germany_cities.cql, germany_cities.cql, 
 india_cities.csv, india_states.csv, sp_setup.cql


 After installing 2.1.6 and 2.1.7 it is not possible to execute CQL scripts 
 that were earlier executed successfully on Windows + Linux environments.
 I have tried installing the latest Python 2 version and executing again, but 
 get the same error.
 Attaching cities.cql for reference.
 ---
 {code}
 cqlsh source 'shoppoint_setup.cql' ;
 shoppoint_setup.cql:16:InvalidRequest: code=2200 [Invalid query] 
 message=Keyspace 'shopping' does not exist
 shoppoint_setup.cql:647:'ascii' codec can't decode byte 0xc3 in position 57: 
 ordinal not in range(128)
 cities.cql:9:'ascii' codec can't decode byte 0xc3 in position 51: ordinal not 
 in range(128)
 cities.cql:14:
 Error starting import process:
 cities.cql:14:Can't pickle <type 'thread.lock'>: it's not found as thread.lock
 cities.cql:14:can only join a started process
 cities.cql:16:
 Error starting import process:
 cities.cql:16:Can't pickle <type 'thread.lock'>: it's not found as thread.lock
 cities.cql:16:can only join a started process
 Traceback (most recent call last):
   File "<string>", line 1, in <module>
   File "I:\programm\python2710\lib\multiprocessing\forking.py", line 380, in main
 prepare(preparation_data)
   File "I:\programm\python2710\lib\multiprocessing\forking.py", line 489, in prepare
 Traceback (most recent call last):
   File "<string>", line 1, in <module>
 file, path_name, etc = imp.find_module(main_name, dirs)
 ImportError: No module named cqlsh
   File "I:\programm\python2710\lib\multiprocessing\forking.py", line 380, in main
 prepare(preparation_data)
   File "I:\programm\python2710\lib\multiprocessing\forking.py", line 489, in prepare
 file, path_name, etc = imp.find_module(main_name, dirs)
 ImportError: No module named cqlsh
 shoppoint_setup.cql:663:'ascii' codec can't decode byte 0xc3 in position 18: 
 ordinal not in range(128)
 ipcache.cql:28:ServerError: ErrorMessage code= [Server error] 
 message=java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
 java.lang.RuntimeException: java.io.FileNotFoundException: 
 I:\var\lib\cassandra\data\syste
 m\schema_columns-296e9c049bec3085827dc17d3df2122a\system-schema_columns-ka-300-Data.db
  (The process cannot access the file because it is being used by another 
 process)
 ccavn_bulkupdate.cql:75:ServerError: ErrorMessage code= [Server error] 
 message=java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
 java.lang.RuntimeException: java.io.FileNotFoundException: 
 I:\var\lib\cassandra\d
 ata\system\schema_columns-296e9c049bec3085827dc17d3df2122a\system-schema_columns-tmplink-ka-339-Data.db
  (The process cannot access the file because it is being used by another 
 process)
 shoppoint_setup.cql:680:'ascii' codec can't decode byte 0xe2 in position 14: 
 ordinal not in range(128){code}
 -
 In one of Ubuntu development environment we have similar errors.
 -
 {code}
 shoppoint_setup.cql:647:'ascii' codec can't decode byte 0xc3 in position 57: 
 ordinal not in range(128)
 cities.cql:9:'ascii' codec can't decode byte 0xc3 in position 51: ordinal not 
 in range(128)
 (corresponding line) COPY cities (city,country_code,state,isactive) FROM 
 'testdata/india_cities.csv' ;
 [19:53:18] j.basu: shoppoint_setup.cql:663:'ascii' codec can't decode byte 
 0xc3 in position 18: ordinal not in range(128)
 {code}
 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Fix InsertUpdateIfConditionTest

2015-07-10 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk 7d6c876ec -> a827a3717


Fix InsertUpdateIfConditionTest


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a827a371
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a827a371
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a827a371

Branch: refs/heads/trunk
Commit: a827a37171b14ec1446196733eb941e7da42a96a
Parents: 7d6c876
Author: Aleksey Yeschenko alek...@apache.org
Authored: Fri Jul 10 18:19:35 2015 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Jul 10 18:19:35 2015 +0300

--
 .../operations/InsertUpdateIfConditionTest.java | 30 
 1 file changed, 24 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a827a371/test/unit/org/apache/cassandra/cql3/validation/operations/InsertUpdateIfConditionTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/operations/InsertUpdateIfConditionTest.java
 
b/test/unit/org/apache/cassandra/cql3/validation/operations/InsertUpdateIfConditionTest.java
index 19f85bf..a289df9 100644
--- 
a/test/unit/org/apache/cassandra/cql3/validation/operations/InsertUpdateIfConditionTest.java
+++ 
b/test/unit/org/apache/cassandra/cql3/validation/operations/InsertUpdateIfConditionTest.java
@@ -23,10 +23,11 @@ import org.junit.Test;
 import org.apache.cassandra.cql3.CQLTester;
 import org.apache.cassandra.exceptions.InvalidRequestException;
 import org.apache.cassandra.exceptions.SyntaxException;
+import org.apache.cassandra.schema.SchemaKeyspace;
 
+import static java.lang.String.format;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
-import static org.junit.Assert.fail;
 
 public class InsertUpdateIfConditionTest extends CQLTester
 {
@@ -769,17 +770,26 @@ public class InsertUpdateIfConditionTest extends CQLTester
 
 // create and confirm
 schemaChange("CREATE KEYSPACE IF NOT EXISTS " + keyspace + " WITH replication = { 'class':'SimpleStrategy', 'replication_factor':1} and durable_writes = true ");
-assertRows(execute("select durable_writes from system.schema_keyspaces where keyspace_name = ?", keyspace), row(true));
+assertRows(execute(format("select durable_writes from %s.%s where keyspace_name = ?",
+  SchemaKeyspace.NAME,
+  SchemaKeyspace.KEYSPACES),
+   keyspace),
+   row(true));
 
 // unsuccessful create since it's already there, confirm settings don't change
 schemaChange("CREATE KEYSPACE IF NOT EXISTS " + keyspace + " WITH replication = {'class':'SimpleStrategy', 'replication_factor':1} and durable_writes = false ");

-assertRows(execute("select durable_writes from system.schema_keyspaces where keyspace_name = ?", keyspace), row(true));
+assertRows(execute(format("select durable_writes from %s.%s where keyspace_name = ?",
+  SchemaKeyspace.NAME,
+  SchemaKeyspace.KEYSPACES),
+   keyspace),
+   row(true));
 
 // drop and confirm
 schemaChange("DROP KEYSPACE IF EXISTS " + keyspace);

-assertEmpty(execute("select * from system.schema_keyspaces where keyspace_name = ?", keyspace));
+assertEmpty(execute(format("select * from %s.%s where keyspace_name = ?", SchemaKeyspace.NAME, SchemaKeyspace.KEYSPACES),
+keyspace));
 }
 
 
@@ -854,7 +864,11 @@ public class InsertUpdateIfConditionTest extends CQLTester
 
 // create and confirm
 execute("CREATE TYPE IF NOT EXISTS mytype (somefield int)");
-assertRows(execute("SELECT type_name from system.schema_usertypes where keyspace_name = ? and type_name = ?", KEYSPACE, "mytype"),
+assertRows(execute(format("SELECT type_name from %s.%s where keyspace_name = ? and type_name = ?",
+  SchemaKeyspace.NAME,
+  SchemaKeyspace.TYPES),
+   KEYSPACE,
+   "mytype"),
row("mytype"));
 
 // unsuccessful create since it's already there
@@ -863,6 +877,10 @@ public class InsertUpdateIfConditionTest extends CQLTester
 
 // drop and confirm
 execute("DROP TYPE IF EXISTS mytype");
-assertEmpty(execute("SELECT type_name from system.schema_usertypes where keyspace_name = ? and type_name = ?", KEYSPACE, "mytype"));
+assertEmpty(execute(format("SELECT type_name from %s.%s where keyspace_name = ? and type_name = ?",
+   SchemaKeyspace.NAME,
+ 

[jira] [Assigned] (CASSANDRA-9384) Update jBCrypt dependency to version 0.4

2015-07-10 Thread Marko Denda (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marko Denda reassigned CASSANDRA-9384:
--

Assignee: Marko Denda

 Update jBCrypt dependency to version 0.4
 

 Key: CASSANDRA-9384
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9384
 Project: Cassandra
  Issue Type: Bug
Reporter: Sam Tunnicliffe
Assignee: Marko Denda
 Fix For: 2.1.x, 2.0.x, 2.2.x


 https://bugzilla.mindrot.org/show_bug.cgi?id=2097
 Although the bug tracker lists it as NEW/OPEN, the release notes for 0.4 
 indicate that this is now fixed, so we should update.
 Thanks to [~Bereng] for identifying the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9729) CQLSH exception - OverflowError: normalized days too large to fit in a C int

2015-07-10 Thread Chandran Anjur Narasimhan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14622881#comment-14622881
 ] 

Chandran Anjur Narasimhan commented on CASSANDRA-9729:
--

Here is the output of the command:

Please note that there are some junk values in the start_time column on rows 
16 and 20. Is this what is causing the issue?

cqlsh:ccc select time, timestampAsBlob(time), start_time, 
timestampAsBlob(start_time) from task_result where 
submissionid='40f89a3d1f4711e5ac2b005056bb0e8b';

 time                     | timestampAsBlob(time) | start_time                    | timestampAsBlob(start_time)
--------------------------+-----------------------+-------------------------------+-----------------------------
 2015-06-30 09:44:23-0700 |        0x014e455aed9a |      2015-06-30 09:44:23-0700 |      0x014e455aea58
 2015-06-30 09:44:58-0700 |        0x014e455b737a |      2015-06-30 09:44:58-0700 |      0x014e455b7310
 2015-06-30 09:45:15-0700 |        0x014e455bb867 |      2015-06-30 09:45:15-0700 |      0x014e455bb578
 2015-06-30 09:46:02-0700 |        0x014e455c6de0 |      2015-06-30 09:45:15-0700 |      0x014e455bb578
 2015-06-30 09:46:03-0700 |        0x014e455c7342 |      2015-06-30 09:46:03-0700 |      0x014e455c70f8
 2015-06-30 09:46:18-0700 |        0x014e455caf72 |      2015-06-30 09:46:03-0700 |      0x014e455c70f8
 2015-06-30 09:46:23-0700 |        0x014e455cc180 |      2015-06-30 09:46:19-0700 |      0x014e455caf78
 2015-06-30 09:48:24-0700 |        0x014e455e9847 |      2015-06-30 09:48:23-0700 |      0x014e455e965c
 2015-06-30 09:48:30-0700 |        0x014e455eb0d4 |      2015-06-30 09:48:24-0700 |      0x014e455e9940
 2015-06-30 09:48:34-0700 |        0x014e455ec04c |      2015-06-30 09:48:24-0700 |      0x014e455e9940
 2015-06-30 09:48:35-0700 |        0x014e455ec3fb |      2015-06-30 09:48:24-0700 |      0x014e455e9940
 2015-06-30 10:06:55-0700 |        0x014e456f8bee |      2015-06-30 09:48:24-0700 |      0x014e455e9940
 2015-06-30 10:06:56-0700 |        0x014e456f8f96 |      2015-06-30 09:48:24-0700 |      0x014e455e9940
 2015-06-30 10:11:00-0700 |        0x014e45734b99 |      2015-06-30 09:48:24-0700 |      0x014e455e9940
 2015-06-30 10:11:02-0700 |        0x014e45735155 |      2015-06-30 09:48:24-0700 |      0x014e455e9940
 2015-06-30 09:49:59-0700 |        0x014e45600cf0 | 266574838-06-29 18:08:16-0800 |  0x74be3c152aa4ed80
 2015-06-30 09:50:01-0700 |        0x014e4560146a |      2015-06-30 09:48:57-0700 |      0x014e455f18a8
 2015-06-30 09:50:01-0700 |        0x014e4560162a |      2015-06-30 09:48:57-0700 |      0x014e455f18a8
 2015-06-30 10:09:07-0700 |        0x014e4571928b |      2015-06-30 09:48:57-0700 |      0x014e455f18a8
 2015-06-30 09:48:54-0700 |        0x014e455f0d23 | 266574820-06-28 18:08:16-0800 |  0x74be3b90e6700980
 2015-06-30 09:48:55-0700 |        0x014e455f1113 |      2015-06-30 09:48:39-0700 |      0x014e455ed258
 2015-06-30 09:48:55-0700 |        0x014e455f12aa |      2015-06-30 09:48:39-0700 |      0x014e455ed258
 2015-06-30 10:09:02-0700 |        0x014e45717d12 |      2015-06-30 09:48:39-0700 |      0x014e455ed258
 2015-06-30 10:09:03-0700 |        0x014e45718124 |      2015-06-30 09:48:39-0700 |      0x014e455ed258
 2015-06-30 10:09:07-0700 |        0x014e45718f62 |      2015-06-30 09:48:39-0700 |      0x014e455ed258

(25 rows)

cqlsh:ccc 



 CQLSH exception - OverflowError: normalized days too large to fit in a C int
 

 Key: CASSANDRA-9729
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9729
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: OSX 10.10.2
Reporter: Chandran Anjur Narasimhan
  Labels: cqlsh

 Running a select command using CQLSH 2.1.5 or 2.1.7 throws an exception. This 
 works fine in version 2.0.14.
 Environment:
 
 JAVA - 1.8
 Python - 2.7.6
 Cassandra Server - 2.1.7
 CQLSH - 5.0.1
 Logs:
 ==
 CQLSH - cassandra 2.0.14 - working with no issues
 -
 NCHAN-M-D0LZ:apache nchan$ cd apache-cassandra-2.0.14/
 NCHAN-M-D0LZ:apache-cassandra-2.0.14 nchan$ bin/cqlsh
 Connected to CCC Multi-Region Cassandra Cluster at myip:9160.
 [cqlsh 4.1.1 | Cassandra 2.1.7 | CQL spec 3.1.1 | Thrift protocol 19.39.0]
 Use HELP for help.
 cqlsh use ccc;
 cqlsh:ccc select count(*) from task_result where 
 submissionid='40f89a3d1f4711e5ac2b005056bb0e8b';
  count
 ---
 25
 (1 rows)
 cqlsh:ccc select * from task_result where 
 submissionid='40f89a3d1f4711e5ac2b005056bb0e8b';
  i get all the 25 values
 

[jira] [Updated] (CASSANDRA-9736) Add alter statement for MV

2015-07-10 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-9736:
---
Reviewer: Joshua McKenzie

 Add alter statement for MV
 --

 Key: CASSANDRA-9736
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9736
 Project: Cassandra
  Issue Type: Improvement
Reporter: Carl Yeksigian

 {{ALTER MV}} would allow us to drop columns in the base table without first 
 dropping the materialized views, since we'd be able to later drop columns in 
 the MV.
 Also, we should be able to add new columns to the MV; a new builder would 
 have to run to copy the values for these additional columns.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)