[jira] [Created] (CASSANDRA-9644) DTCS configuration proposals for handling consequences of repairs

2015-06-24 Thread Antti Nissinen (JIRA)
Antti Nissinen created CASSANDRA-9644:
-

 Summary: DTCS configuration proposals for handling consequences of 
repairs
 Key: CASSANDRA-9644
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9644
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Antti Nissinen
 Fix For: 3.x, 2.1.x
 Attachments: node0_20150621_1646_time_graph.txt, 
node0_20150621_2320_time_graph.txt, node0_20150623_1526_time_graph.txt, 
node1_20150621_1646_time_graph.txt, node1_20150621_2320_time_graph.txt, 
node1_20150623_1526_time_graph.txt, node2_20150621_1646_time_graph.txt, 
node2_20150621_2320_time_graph.txt, node2_20150623_1526_time_graph.txt, 
nodetool status infos.txt, sstable_compaction_trace.txt, 
sstable_compaction_trace_snipped.txt, sstable_counts.jpg

This is a document bringing up some issues that arise when DTCS is used to 
compact time series data in a three-node cluster. DTCS is currently configured 
with a few parameters that keep the configuration fairly simple, but this can 
cause problems in certain special cases, such as recovering from the flood of 
small SSTables produced by a repair operation. We suggest some ideas that might 
be a starting point for further discussion. The following sections contain:

- Description of the cassandra setup
- Feeding process of the data
- Failure testing
- Issues caused by the repair operations for the DTCS
- Proposal for the DTCS configuration parameters

Attachments are included to support the discussion, and a separate section 
explains them.

Cassandra setup and data model

- The cluster is composed of three nodes running Cassandra 2.1.2. The 
replication factor is two and the read and write consistency levels are ONE.
- The data is time series data, saved so that one row contains a certain time 
span of data for a given metric (20 days in this case). The row key contains 
the start time of the time span and the metric name. The column name gives the 
offset from the beginning of the time span. The column timestamp is set to the 
row key timestamp plus the offset, i.e. the actual timestamp of the data point 
(see the sketch after this list). The data model is analogous to the KairosDB 
implementation.
- The average sampling rate is 10 seconds, varying significantly from metric to 
metric.
- 100 000 metrics are fed into Cassandra.
- max_sstable_age_days is set to 5 days (the objective is to keep SSTable files 
at a manageable size, around 50 GB).
- TTL is not in use in the test.
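
To make the row addressing concrete, below is a minimal, self-contained sketch 
of the KairosDB-style layout described above. It is illustrative only; the 
metric name, the row-key format and the helper code are made up for this 
example and are not taken from the actual feeder.

{code:java}
import java.util.concurrent.TimeUnit;

public class TimeSeriesKeySketch
{
    // one row holds 20 days of data for a single metric (as in the test setup)
    static final long ROW_SPAN_MS = TimeUnit.DAYS.toMillis(20);

    public static void main(String[] args)
    {
        String metric = "pump.pressure";                      // hypothetical metric name
        long sampleTimestampMs = 1_434_000_000_000L;           // actual timestamp of the data point

        long rowStartMs = sampleTimestampMs - (sampleTimestampMs % ROW_SPAN_MS);
        String rowKey = metric + ":" + rowStartMs;             // row key: metric name + span start time
        long columnOffsetMs = sampleTimestampMs - rowStartMs;  // column name: offset from span start
        long columnTimestamp = rowStartMs + columnOffsetMs;    // column timestamp = span start + offset

        System.out.printf("row key = %s, column offset = %d ms, column timestamp = %d%n",
                          rowKey, columnOffsetMs, columnTimestamp);
    }
}
{code}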

Procedure for the failure test

- Data covering 11 days is first dumped into Cassandra and the dumping is then 
stopped so that DTCS has a chance to finish all compactions. The data is dumped 
with fake timestamps, i.e. the column timestamp is set explicitly when the data 
is written to Cassandra.
- One of the nodes is taken down and new data covering a couple of hours (again 
with faked timestamps) is dumped on top of the earlier data.
- Dumping is stopped and the node is kept down for a few hours.
- The node is brought back up and nodetool repair is run on the node that was 
down.

Consequences

- The repair operation leads to a massive amount of new SSTables far back in 
the history. The new SSTables cover time spans similar to those of the files 
that were created by DTCS before the shutdown of one of the nodes.
- To be able to compact the small files, max_sstable_age_days has to be 
increased so that compaction is allowed to handle them. In practice, however, 
the time window then grows so large that the generated files become huge, which 
is not desirable. Compaction also combines one very large file with a bunch of 
small files in several phases, which is not efficient. Generating really large 
files may also lead to out-of-disk-space problems.
- See the list of time graphs later in the document.

Improvement proposals for the DTCS configuration

Below is a list of desired properties for the configuration. Current parameters 
are mentioned if available.

- Initial window size (currently: base_time_seconds)
- The number of similar-size windows used for bucketing (currently: 
min_threshold)
- The multiplier for the window size when it is increased (currently: 
min_threshold). We would like this to be independent of the min_threshold 
parameter so that the rate at which the window size grows can actually be 
controlled.
- Maximum length of the time window inside which files are assigned to a 
certain bucket (not currently defined). This restricts the expansion of the 
time window length: once the limit is reached, the window size stays the same 
all the way back in the history (e.g. one week).
- The maximum horizon within which SSTables are candidates for buckets 
(currently: max_sstable_age_days)
- Maximum SSTable file size allowed in a set of files to be compacted (not 
currently possible), to prevent out-of-disk-space problems. A sketch of how 
these parameters could interact is shown below.
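
As an illustration of how these parameters could interact, here is a minimal 
sketch (not Cassandra code; every value is made up, and the knobs beyond the 
existing base_time_seconds / min_threshold / max_sstable_age_days are proposed 
parameters that do not exist today). It prints the window sizes a DTCS-style 
bucketing would use when the growth multiplier is decoupled from min_threshold 
and the window length is capped:

{code:java}
public class DtcsWindowSketch
{
    public static void main(String[] args)
    {
        long initialWindowSeconds = 3600;          // proposed: initial window size (cf. base_time_seconds)
        int windowsPerTier = 4;                    // proposed: equal-size windows per tier (cf. min_threshold)
        int growthMultiplier = 4;                  // proposed: decoupled from min_threshold
        long maxWindowSeconds = 7 * 24 * 3600;     // proposed: cap on window length (e.g. one week)
        long maxHorizonSeconds = 30L * 24 * 3600;  // proposed: horizon (cf. max_sstable_age_days)

        long windowSeconds = initialWindowSeconds;
        long coveredSeconds = 0;
        while (coveredSeconds < maxHorizonSeconds)
        {
            for (int i = 0; i < windowsPerTier && coveredSeconds < maxHorizonSeconds; i++)
            {
                System.out.printf("window of %d s covering [%d s .. %d s] back in time%n",
                                  windowSeconds, coveredSeconds, coveredSeconds + windowSeconds);
                coveredSeconds += windowSeconds;
            }
            // grow the window, but never beyond the proposed cap
            windowSeconds = Math.min(windowSeconds * growthMultiplier, maxWindowSeconds);
        }
    }
}
{code}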

[jira] [Commented] (CASSANDRA-9543) Integrate release 2.2 java driver

2015-06-24 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598929#comment-14598929
 ] 

Robert Stupp commented on CASSANDRA-9543:
-

/cc [~omichallat] (just added to the loop)

 Integrate release 2.2 java driver
 -

 Key: CASSANDRA-9543
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9543
 Project: Cassandra
  Issue Type: Task
Reporter: Robert Stupp
 Fix For: 2.2.x


 Follow-up of CASSANDRA-9493.
 Hint: cleanup {{build.xml}} for commented out {{cassandra-driver-core}} maven 
 dependencies.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9064) [LeveledCompactionStrategy] cqlsh can't run cql produced by its own describe table statement

2015-06-24 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-9064:
--
Assignee: Adam Holmberg  (was: Benjamin Lerer)

 [LeveledCompactionStrategy] cqlsh can't run cql produced by its own describe 
 table statement
 

 Key: CASSANDRA-9064
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9064
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: cassandra 2.1.3 on mac os x
Reporter: Sujeet Gholap
Assignee: Adam Holmberg
  Labels: cqlsh
 Fix For: 2.1.x


 Here's how to reproduce:
 1) Create a table with LeveledCompactionStrategy
 CREATE keyspace foo WITH REPLICATION = {'class': 'SimpleStrategy', 
 'replication_factor' : 3};
 CREATE TABLE foo.bar (
 spam text PRIMARY KEY
 ) WITH compaction = {'class': 'LeveledCompactionStrategy'};
 2) Describe the table and save the output
 cqlsh -e "describe table foo.bar"
 Output should be something like
 CREATE TABLE foo.bar (
 spam text PRIMARY KEY
 ) WITH bloom_filter_fp_chance = 0.1
 AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
 AND comment = ''
 AND compaction = {'min_threshold': '4', 'class': 
 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy', 
 'max_threshold': '32'}
 AND compression = {'sstable_compression': 
 'org.apache.cassandra.io.compress.LZ4Compressor'}
 AND dclocal_read_repair_chance = 0.1
 AND default_time_to_live = 0
 AND gc_grace_seconds = 864000
 AND max_index_interval = 2048
 AND memtable_flush_period_in_ms = 0
 AND min_index_interval = 128
 AND read_repair_chance = 0.0
 AND speculative_retry = '99.0PERCENTILE';
 3) Save the output to repro.cql
 4) Drop the table foo.bar
 cqlsh -e "drop table foo.bar"
 5) Run the create table statement we saved
 cqlsh -f repro.cql
 6) Expected: normal execution without an error
 7) Reality:
 ConfigurationException: <ErrorMessage code=2300 [Query invalid because of 
 configuration issue] message="Properties specified [min_threshold, 
 max_threshold] are not understood by LeveledCompactionStrategy">



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9645) Make DateTieredCompactionStrategy easier to use

2015-06-24 Thread Marcus Eriksson (JIRA)
Marcus Eriksson created CASSANDRA-9645:
--

 Summary: Make DateTieredCompactionStrategy easier to use
 Key: CASSANDRA-9645
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9645
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson


It has proven to be quite difficult to use DTCS properly; we need to make it 
easier and safer. Things to do could include (but there are surely more):

* put warnings in the logs if read repair is used
* better debug logging which explains why certain sstables are selected etc
* Auto-tune settings; this could be quite difficult, but we should at least do 
CASSANDRA-9130 to default max_sstable_age to gcgs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Improve log output from unit tests

2015-06-24 Thread snazy
Repository: cassandra
Updated Branches:
  refs/heads/trunk b7e72e1ee -> c8d3cc149


Improve log output from unit tests

patch by Ariel Weisberg; reviewed by Robert Stupp for CASSANDRA-9528


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c8d3cc14
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c8d3cc14
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c8d3cc14

Branch: refs/heads/trunk
Commit: c8d3cc1493a0ca47fa34e88d9a113440611dce3b
Parents: b7e72e1
Author: Ariel Weisberg ar...@weisberg.ws
Authored: Wed Jun 24 08:09:52 2015 +0200
Committer: Robert Stupp sn...@snazy.de
Committed: Wed Jun 24 08:09:52 2015 +0200

--
 CHANGES.txt |   1 +
 build.xml   |  25 +-
 test/conf/logback-test.xml  |  45 +-
 .../CassandraBriefJUnitResultFormatter.java |  13 +-
 .../CassandraXMLJUnitResultFormatter.java   |  11 +
 .../org/apache/cassandra/ConsoleAppender.java   |  81 
 .../apache/cassandra/LogbackStatusListener.java | 454 +++
 .../org/apache/cassandra/TeeingAppender.java|  79 
 8 files changed, 680 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c8d3cc14/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 33869cb..d2d1d5f 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0:
+ * Improve log output from unit tests (CASSANDRA-9528)
  * Add algorithmic token allocation (CASSANDRA-7032)
  * Add nodetool command to replay batchlog (CASSANDRA-9547)
  * Make file buffer cache independent of paths being read (CASSANDRA-8897)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c8d3cc14/build.xml
--
diff --git a/build.xml b/build.xml
index 1fbc2fa..3d83ee6 100644
--- a/build.xml
+++ b/build.xml
@@ -132,6 +132,20 @@
   format property=YEAR pattern=/
 /tstamp
 
+    <!-- Check if all tests are being run or just one. If it's all tests don't spam the console with test output.
+         If it's an individual test print the output from the test under the assumption someone is debugging the test
+         and wants to know what is going on without having to context switch to the log file that is generated.
+         Debug level output still needs to be retrieved from the log file.  -->
+    <script language="javascript">
+        if (project.getProperty("cassandra.keepBriefBrief") == null)
+        {
+            if (project.getProperty("test.name").equals("*Test"))
+                project.setProperty("cassandra.keepBriefBrief", "true");
+            else
+                project.setProperty("cassandra.keepBriefBrief", "false");
+        }
+    </script>
+
 !--
  Add all the dependencies.
 --
@@ -149,7 +163,7 @@
   exclude name=**/*-sources.jar/
 /fileset
 /path
-   
+
path id=cobertura.classpath
pathelement location=${cobertura.classes.dir}/
/path
@@ -709,7 +723,7 @@
 description=Run in test mode.  Not for production use!
   java classname=org.apache.cassandra.service.CassandraDaemon 
fork=true
 classpath
-  path refid=cassandra.classpath/  
+  path refid=cassandra.classpath/
   pathelement location=${test.conf}/
 /classpath
 jvmarg value=-Dstorage-config=${test.conf}/
@@ -1131,8 +1145,8 @@
 attribute name=filelist default= /
 attribute name=poffset default=0/
 attribute name=testtag default=/
-
 attribute name=usejacoco default=no/
+
 sequential
   condition property=additionalagent
  
value=-javaagent:${build.dir.lib}/jars/jacocoagent.jar=destfile=${jacoco.execfile}
@@ -1157,7 +1171,8 @@
         <jvmarg value="-Dcassandra.test.use_prepared=${cassandra.test.use_prepared}"/>
         <jvmarg value="-Dcassandra.test.offsetseed=@{poffset}"/>
         <jvmarg value="-Dcassandra.test.sstableformatdevelopment=true"/>
-        <jvmarg value="-Dcassandra.testtag=@{testtag}"/>
+        <jvmarg value="-Dcassandra.testtag=@{testtag}"/>
+        <jvmarg value="-Dcassandra.keepBriefBrief=${cassandra.keepBriefBrief}" />
optjvmargs/
 classpath
   path refid=cassandra.classpath /
@@ -1989,7 +2004,7 @@
         <arg value="-properties" />
         <arg value="${ecj.properties}" />
         <arg value="-cp" />
-        <arg value="${toString:cassandra.classpath}" />
+        <arg value="${toString:cassandra.build.classpath}" />
         <arg value="${build.src.java}" />
       </java>
   </target>

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c8d3cc14/test/conf/logback-test.xml

[jira] [Commented] (CASSANDRA-7342) CAS writes does not have hint functionality.

2015-06-24 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14599033#comment-14599033
 ] 

Sylvain Lebresne commented on CASSANDRA-7342:
-

Yes, I think we can treat those as normal hints. The only downside of not going 
through {{savePaxosCommit}} might be that we'll end up replaying that same 
commit. Which is both harmless and unlikely since if you get hints, you're 
probably not up to date in the first place, in which case going through 
{{savePaxosCommit}} is wasted anyway. So happy to keep it simple.

 CAS writes does not have hint functionality. 
 -

 Key: CASSANDRA-7342
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7342
 Project: Cassandra
  Issue Type: Sub-task
Reporter: sankalp kohli
Assignee: sankalp kohli

 When a dead node comes up, it gets the last commit but not anything which it 
 has missed. 
 This reduces the durability of those writes compared to other writes. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9644) DTCS configuration proposals for handling consequences of repairs

2015-06-24 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14599160#comment-14599160
 ] 

Marcus Eriksson commented on CASSANDRA-9644:


Thanks for the report, and yes, there are a bunch of things we should do. 
Digesting your post, I think these are the most important ones:

bq. Maximum length of the time window inside which files are assigned to a 
certain bucket (not currently defined). This restricts the expansion of the 
time window length: once the limit is reached, the window size stays the same 
all the way back in the history (e.g. one week).
yes, we should make it possible to put a cap on how big the windows can get

bq. Inside the bucket, when the number of files is limited by max_threshold, 
compaction would first select the small files instead of one huge file and a 
bunch of small files.
yes, we should do STCS within the windows when we have more than max_threshold 
files

bq. Optional strategies to select the most interesting bucket
we should probably prioritize compaction in the windows that actually serve 
reads, something similar to STCS.mostInterestingBucket

I should also note that using incremental repair makes sure that we don't 
stream a bunch of data into the old windows.
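
To illustrate the second point (doing STCS-like selection inside a window), 
here is a minimal sketch of picking the smallest files first; it is 
illustrative only and is not the actual DateTieredCompactionStrategy code:

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class WindowCandidateSketch
{
    // Pick at most maxThreshold sstables from one time window, smallest first,
    // so that a single huge file is not repeatedly recompacted with tiny ones.
    static List<Long> pickSmallest(List<Long> sstableSizesInWindow, int maxThreshold)
    {
        List<Long> sorted = new ArrayList<>(sstableSizesInWindow);
        Collections.sort(sorted);
        return sorted.subList(0, Math.min(maxThreshold, sorted.size()));
    }

    public static void main(String[] args)
    {
        // one 50 GB file plus a flood of small repair-generated sstables (sizes in bytes)
        List<Long> sizes = Arrays.asList(50_000_000_000L, 10_000_000L, 12_000_000L, 9_000_000L, 11_000_000L);
        System.out.println(pickSmallest(sizes, 4)); // the 50 GB file is left out of this round
    }
}
{code}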

 DTCS configuration proposals for handling consequences of repairs
 -

 Key: CASSANDRA-9644
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9644
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Antti Nissinen
 Fix For: 3.x, 2.1.x

 Attachments: node0_20150621_1646_time_graph.txt, 
 node0_20150621_2320_time_graph.txt, node0_20150623_1526_time_graph.txt, 
 node1_20150621_1646_time_graph.txt, node1_20150621_2320_time_graph.txt, 
 node1_20150623_1526_time_graph.txt, node2_20150621_1646_time_graph.txt, 
 node2_20150621_2320_time_graph.txt, node2_20150623_1526_time_graph.txt, 
 nodetool status infos.txt, sstable_compaction_trace.txt, 
 sstable_compaction_trace_snipped.txt, sstable_counts.jpg


 This is a document bringing up some issues when DTCS is used to compact time 
 series data in a three node cluster. The DTCS is currently configured with a 
 few parameters that are making the configuration fairly simple, but might 
 cause problems in certain special cases like recovering from the flood of 
 small SSTables due to repair operation. We are suggesting some ideas that 
 might be a starting point for further discussions. Following sections are 
 containing:
 - Description of the cassandra setup
 - Feeding process of the data
 - Failure testing
 - Issues caused by the repair operations for the DTCS
 - Proposal for the DTCS configuration parameters
 Attachments are included to support the discussion and there is a separate 
 section giving explanation for those.
 Cassandra setup and data model
 - Cluster is composed from three nodes running Cassandra 2.1.2. Replication 
 factor is two and read and write consistency levels are ONE.
 - Data is time series data. Data is saved so that one row contains a certain 
 time span of data for a given metric ( 20 days in this case). The row key 
 contains information about the start time of the time span and metric name. 
 Column name gives the offset from the beginning of time span. Column time 
 stamp is set to correspond time stamp when adding together the timestamp from 
 the row key and the offset (the actual time stamp of data point). Data model 
 is analog to KairosDB implementation.
 - Average sampling rate is 10 seconds varying significantly from metric to 
 metric.
 - 100 000 metrics are fed to the Cassandra.
 - max_sstable_age_days is set to 5 days (objective is to keep SStable files 
 in manageable size, around 50 GB)
 - TTL is not in use in the test.
 Procedure for the failure test.
 - Data is first dumped to Cassandra for 11 days and the data dumping is 
 stopped so that DTCS will have a chance to finish all compactions. Data is 
 dumped with fake timestamps so that column time stamp is set when data is 
 written to Cassandra.
 - One of the nodes is taken down and new data is dumped on top of the earlier 
 data covering couple of hours worth of data (faked time stamps).
 - Dumping is stopped and the node is kept down for few hours.
 - Node is taken up and the nodetool repair is applied on the node that was 
 down.
 Consequences
 - Repair operation will lead to massive amount of new SStables far back in 
 the history. New SStables are covering similar time spans than the files that 
 were created by DTCS before the shutdown of one of the nodes.
 - To be able to compact the small files the max_sstable_age_days should be 
 increased to allow compaction to handle the files. However, in a practical 
 case the time window will increase so large that 

[jira] [Comment Edited] (CASSANDRA-9528) Improve log output from unit tests

2015-06-24 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598920#comment-14598920
 ] 

Robert Stupp edited comment on CASSANDRA-9528 at 6/24/15 6:12 AM:
--

LGTM - committed as c8d3cc1493a0ca47fa34e88d9a113440611dce3b

The [build artifacts from the jenkins 
page|http://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-C-9528-testall/15/]
 only contain the gzipped files but not the other, non-compressed log files - 
that needs to be adjusted in the Jenkins job configuration. /cc 
[~philipthompson]


was (Author: snazy):
LGTM - committed as c8d3cc1493a0ca47fa34e88d9a113440611dce3b

The [build artifacts from the jenkins 
page|http://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-C-9528-testall/15/]
 only contain the gzipped files but not the other, non-compressed log files - 
that needs to be adjusted in the Jenkins job configuration.

 Improve log output from unit tests
 --

 Key: CASSANDRA-9528
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9528
 Project: Cassandra
  Issue Type: Test
Reporter: Ariel Weisberg
Assignee: Ariel Weisberg
 Fix For: 3.0 beta 1


 * Single log output file per suite
 * stdout/stderr to the same log file with proper interleaving
 * Don't interleave interactive output from unit tests run concurrently to the 
 console. Print everything about the test once the test has completed.
 * Fetch and compress log files as part of artifacts collected by cassci



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9064) [LeveledCompactionStrategy] cqlsh can't run cql produced by its own describe table statement

2015-06-24 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14598987#comment-14598987
 ] 

Benjamin Lerer commented on CASSANDRA-9064:
---

[~aholmber] I assigned the ticket to you as you will know exactly when the 
python driver fix is ready to be used by Cassandra.

 [LeveledCompactionStrategy] cqlsh can't run cql produced by its own describe 
 table statement
 

 Key: CASSANDRA-9064
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9064
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: cassandra 2.1.3 on mac os x
Reporter: Sujeet Gholap
Assignee: Adam Holmberg
  Labels: cqlsh
 Fix For: 2.1.x


 Here's how to reproduce:
 1) Create a table with LeveledCompactionStrategy
 CREATE keyspace foo WITH REPLICATION = {'class': 'SimpleStrategy', 
 'replication_factor' : 3};
 CREATE TABLE foo.bar (
 spam text PRIMARY KEY
 ) WITH compaction = {'class': 'LeveledCompactionStrategy'};
 2) Describe the table and save the output
 cqlsh -e "describe table foo.bar"
 Output should be something like
 CREATE TABLE foo.bar (
 spam text PRIMARY KEY
 ) WITH bloom_filter_fp_chance = 0.1
 AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
 AND comment = ''
 AND compaction = {'min_threshold': '4', 'class': 
 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy', 
 'max_threshold': '32'}
 AND compression = {'sstable_compression': 
 'org.apache.cassandra.io.compress.LZ4Compressor'}
 AND dclocal_read_repair_chance = 0.1
 AND default_time_to_live = 0
 AND gc_grace_seconds = 864000
 AND max_index_interval = 2048
 AND memtable_flush_period_in_ms = 0
 AND min_index_interval = 128
 AND read_repair_chance = 0.0
 AND speculative_retry = '99.0PERCENTILE';
 3) Save the output to repro.cql
 4) Drop the table foo.bar
 cqlsh -e "drop table foo.bar"
 5) Run the create table statement we saved
 cqlsh -f repro.cql
 6) Expected: normal execution without an error
 7) Reality:
 ConfigurationException: <ErrorMessage code=2300 [Query invalid because of 
 configuration issue] message="Properties specified [min_threshold, 
 max_threshold] are not understood by LeveledCompactionStrategy">



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8894) Our default buffer size for (uncompressed) buffered reads should be smaller, and based on the expected record size

2015-06-24 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14599050#comment-14599050
 ] 

Stefania commented on CASSANDRA-8894:
-

[~benedict], the code is 
[here|https://github.com/stef1927/cassandra/commits/8894] if you want to take a 
look. It's pending CI and the cperf test.

bq. For the index file, we scan a number of index records and ideally want them 
all to be read in one go. So we need to ask the IndexSummary to tell us how 
many records are in the scan range we've found (by calling 
getEffectiveIndexIntervalAfterIndex, and to divide the file length by this (and 
round up). 

getEffectiveIndexIntervalAfterIndex(int index) returns the number of index 
partitions in the scan range. So to me it doesn't make sense to divide the file 
length by this number as there are multiple scan ranges. By scan range I mean 
all the index partitions in an index interval between two index summaries. 
Shouldn't we rather divide the file length by the total number of scan ranges 
(i.e. summaries given by IndexSummary.size() I believe) so as to have an 
average length of each scan range - without knowing in advance the exact size 
of each index partition?

 Our default buffer size for (uncompressed) buffered reads should be smaller, 
 and based on the expected record size
 --

 Key: CASSANDRA-8894
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8894
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Stefania
 Fix For: 3.x


 A large contributor to slower buffered reads than mmapped is likely that we 
 read a full 64Kb at once, when average record sizes may be as low as 140 
 bytes on our stress tests. The TLB has only 128 entries on a modern core, and 
 each read will touch 32 of these, meaning we are unlikely to almost ever be 
 hitting the TLB, and will be incurring at least 30 unnecessary misses each 
 time (as well as the other costs of larger than necessary accesses). When 
 working with an SSD there is little to no benefit reading more than 4Kb at 
 once, and in either case reading more data than we need is wasteful. So, I 
 propose selecting a buffer size that is the next larger power of 2 than our 
 average record size (with a minimum of 4Kb), so that we expect to read in one 
 operation. I also propose that we create a pool of these buffers up-front, 
 and that we ensure they are all exactly aligned to a virtual page, so that 
 the source and target operations each touch exactly one virtual page per 4Kb 
 of expected record size.
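
For illustration, here is a small self-contained sketch of the buffer-size 
selection described above: round the average record size up to the next power 
of two, with a 4Kb floor. The 64Kb ceiling and the method names are assumptions 
made for this example, not part of the proposal.

{code:java}
public class BufferSizeSketch
{
    static final int MIN_BUFFER = 4 * 1024;   // 4Kb floor from the description
    static final int MAX_BUFFER = 64 * 1024;  // current default read size, used here as a ceiling (assumption)

    static int bufferSizeFor(int averageRecordSize)
    {
        int size = Integer.highestOneBit(Math.max(averageRecordSize, 1));
        if (size < averageRecordSize)
            size <<= 1;                       // round up to the next power of two
        return Math.min(Math.max(size, MIN_BUFFER), MAX_BUFFER);
    }

    public static void main(String[] args)
    {
        System.out.println(bufferSizeFor(140));     // 4096: small records still read at least 4Kb
        System.out.println(bufferSizeFor(6_000));   // 8192
        System.out.println(bufferSizeFor(100_000)); // 65536, capped at the ceiling
    }
}
{code}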



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9499) Introduce writeVInt method to DataOutputStreamPlus

2015-06-24 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14599150#comment-14599150
 ] 

Robert Stupp commented on CASSANDRA-9499:
-

[~aweisberg] OHC now supports putIfAbsentDirect and addOrReplaceDirect. It's 
not released yet - can you take a look at the [relevant 
commit|https://github.com/snazy/ohc/commit/e18330130a859735d105d112d57ab718aca18697]
 in the {{develop}} branch and check whether this works for you?

 Introduce writeVInt method to DataOutputStreamPlus
 --

 Key: CASSANDRA-9499
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9499
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Ariel Weisberg
Priority: Minor
 Fix For: 3.0 beta 1


 CASSANDRA-8099 really could do with a writeVInt method, both for fixing 
 CASSANDRA-9498 and for efficiently encoding timestamp/deletion deltas. It 
 should be possible to make an especially efficient implementation against 
 BufferedDataOutputStreamPlus.
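
For illustration only, here is a self-contained sketch of one common 
variable-length encoding (zig-zag plus 7-bit groups); the actual encoding 
chosen for CASSANDRA-8099 / writeVInt may differ.

{code:java}
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class VIntSketch
{
    // zig-zag maps small negative values to small unsigned values
    static long zigZag(long v)
    {
        return (v << 1) ^ (v >> 63);
    }

    // write 7 bits per byte, high bit set while more bytes follow
    static void writeVInt(OutputStream out, long value) throws IOException
    {
        long v = zigZag(value);
        while ((v & ~0x7FL) != 0)
        {
            out.write((int) ((v & 0x7F) | 0x80));
            v >>>= 7;
        }
        out.write((int) v);
    }

    public static void main(String[] args) throws IOException
    {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writeVInt(out, 300);   // a small delta encodes in 2 bytes instead of 8
        writeVInt(out, -1);    // -1 zig-zags to 1 and encodes in a single byte
        System.out.println(out.size() + " bytes written");
    }
}
{code}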



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7814) enable describe on indices

2015-06-24 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14599152#comment-14599152
 ] 

Stefania commented on CASSANDRA-7814:
-

[~thobbs], I've rebased and bundled the 2.6c1 driver.

The dtests pull request is 
[here|https://github.com/riptano/cassandra-dtest/pull/343].

CI is still running and will be available 
[here|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-7814-dtest/]
 and 
[here|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-7814-testall/].

 enable describe on indices
 --

 Key: CASSANDRA-7814
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7814
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: radha
Assignee: Stefania
Priority: Minor
 Fix For: 2.1.x


 Describe index should be supported; right now, the only way is to export the 
 schema and find what it really is before updating/dropping the index.
 verified in 
 [cqlsh 3.1.8 | Cassandra 1.2.18.1 | CQL spec 3.0.0 | Thrift protocol 19.36.2]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9594) metrics reporter doesn't start until after a bootstrap

2015-06-24 Thread Tommy Stendahl (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommy Stendahl updated CASSANDRA-9594:
--
Attachment: 9594.txt

 metrics reporter doesn't start until after a bootstrap
 --

 Key: CASSANDRA-9594
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9594
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Eric Evans
Priority: Minor
  Labels: lhf
 Attachments: 9594.txt


 In {{o.a.c.service.CassandraDaemon#setup}}, the metrics reporter is started 
 immediately after the invocation of 
 {{o.a.c.service.StorageService#initServer}}, which for a bootstrapping node 
 may block for a considerable period of time.  If the metrics reporter is your 
 only source of visibility, then you are blind until the bootstrap completes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9165) improve error pathway test coverage, including fault injection testing

2015-06-24 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14599327#comment-14599327
 ] 

Benedict commented on CASSANDRA-9165:
-

For reference, 
[here|https://github.com/belliottsmith/cassandra/tree/fault-injection-8568] is 
the branch in which I constructed an initial process for testing 
LifecycleTransaction.

 improve error pathway test coverage, including fault injection testing
 --

 Key: CASSANDRA-9165
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9165
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
  Labels: retrospective_generated





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9090) Allow JMX over SSL directly from nodetool

2015-06-24 Thread Marcus Olsson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14599328#comment-14599328
 ] 

Marcus Olsson edited comment on CASSANDRA-9090 at 6/24/15 12:31 PM:


This patch makes it possible to use SSL with nodetool by running it with:
{noformat}
nodetool --ssl
{noformat}

Then either using a configuration file in ~/.cassandra/ called 
nodetool.properties
{code:title=nodetool.properties}
# Path to keystore
keyStore=/path/to/keystore
# Keystore password
keyStorePassword=keystore-password
# Path to truststore
trustStore=/path/to/truststore
# Truststore password
trustStorePassword=truststore-password
# Enabled cipher suites
cipherSuites=enabled-cipher-suites
# Enabled protocols
enabledProtocols=enabled-protocols
{code}
or by running it with the flags:
{noformat}
nodetool --ssl -Djavax.net.ssl.keyStore=/path/to/keystore 
-Djavax.net.ssl.keyStorePassword=keystore-password 
-Djavax.net.ssl.trustStore=/path/to/truststore 
-Djavax.net.ssl.trustStorePassword=truststore-password 
-Djavax.rmi.ssl.client.enabledCipherSuites=enabled-cipher-suites 
-Djavax.rmi.ssl.client.enabledProtocols=enabled-protocols
{noformat}

Edit: This patch is only tested on 2.1.


was (Author: molsson):
This patch makes it possible to use SSL with nodetool by running it with:
{noformat}
nodetool --ssl
{noformat}

Then either using a configuration file in ~/.cassandra/ called 
nodetool.properties
{code:title=nodetool.properties}
# Path to keystore
keyStore=/path/to/keystore
# Keystore password
keyStorePassword=keystore-password
# Path to truststore
trustStore=/path/to/truststore
# Truststore password
trustStorePassword=truststore-password
# Enabled cipher suites
cipherSuites=enabled-cipher-suites
# Enabled protocols
enabledProtocols=enabled-protocols
{code}
or by running it with the flags:
{noformat}
nodetool --ssl -Djavax.net.ssl.keyStore=/path/to/keystore 
-Djavax.net.ssl.keyStorePassword=keystore-password 
-Djavax.net.ssl.trustStore=/path/to/truststore 
-Djavax.net.ssl.trustStorePassword=truststore-password 
-Djavax.rmi.ssl.client.enabledCipherSuites=enabled-cipher-suites 
-Djavax.rmi.ssl.client.enabledProtocols=enabled-protocols
{noformat}

 Allow JMX over SSL directly from nodetool
 -

 Key: CASSANDRA-9090
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9090
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Philip Thompson
 Fix For: 3.x, 2.1.x, 2.0.x

 Attachments: cassandra-2.1-9090.patch


 Currently cqlsh allows users to connect via SSL to their cassandra cluster 
 via command line. 
 Nodetool only offers username/password authentication [1], and if users want 
 to use SSL, they need to use jconsole [2]. We should support nodetool 
 connecting via SSL in the same way cqlsh does.
 [1] http://wiki.apache.org/cassandra/JmxSecurity
 [2] https://www.lullabot.com/blog/article/monitor-java-jmx
 [3] 
 http://docs.oracle.com/javase/8/docs/technotes/guides/management/agent.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9594) metrics reporter doesn't start until after a bootstrap

2015-06-24 Thread Tommy Stendahl (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14599345#comment-14599345
 ] 

Tommy Stendahl commented on CASSANDRA-9594:
---

I created a patch that starts the metrics reporter before 
{{o.a.c.service.StorageService#registerDaemon}}. It's based on the 2.0 branch 
and should merge to the other branches without problems.

 metrics reporter doesn't start until after a bootstrap
 --

 Key: CASSANDRA-9594
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9594
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Eric Evans
Priority: Minor
  Labels: lhf
 Attachments: 9594.txt


 In {{o.a.c.service.CassandraDaemon#setup}}, the metrics reporter is started 
 immediately after the invocation of 
 {{o.a.c.service.StorageService#initServer}}, which for a bootstrapping node 
 may block for a considerable period of time.  If the metrics reporter is your 
 only source of visibility, then you are blind until the bootstrap completes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Utilise NoSpamLogger for rate limited logging

2015-06-24 Thread benedict
Repository: cassandra
Updated Branches:
  refs/heads/trunk c8d3cc149 -> cb062839f


Utilise NoSpamLogger for rate limited logging

patch by ariel; reviewed by benedict for CASSANDRA-8584


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cb062839
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cb062839
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cb062839

Branch: refs/heads/trunk
Commit: cb062839ff2480dd868a82ec37e54d502e82fc0d
Parents: c8d3cc1
Author: ariel ariel.weisb...@datastax.com
Authored: Wed Jun 24 12:07:56 2015 +0100
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Wed Jun 24 12:07:56 2015 +0100

--
 .../db/commitlog/AbstractCommitLogService.java  | 18 --
 .../db/commitlog/MemoryMappedSegment.java   |  2 +-
 .../cassandra/io/sstable/SSTableRewriter.java   |  3 +-
 .../apache/cassandra/io/util/SegmentedFile.java |  2 +-
 .../org/apache/cassandra/utils/CLibrary.java| 24 +---
 .../apache/cassandra/utils/NoSpamLogger.java| 65 ++--
 .../apache/cassandra/utils/CLibraryTest.java|  2 +-
 .../cassandra/utils/NoSpamLoggerTest.java   | 44 ++---
 8 files changed, 89 insertions(+), 71 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cb062839/src/java/org/apache/cassandra/db/commitlog/AbstractCommitLogService.java
--
diff --git 
a/src/java/org/apache/cassandra/db/commitlog/AbstractCommitLogService.java 
b/src/java/org/apache/cassandra/db/commitlog/AbstractCommitLogService.java
index 2a55600..3479440 100644
--- a/src/java/org/apache/cassandra/db/commitlog/AbstractCommitLogService.java
+++ b/src/java/org/apache/cassandra/db/commitlog/AbstractCommitLogService.java
@@ -17,6 +17,7 @@
  */
 package org.apache.cassandra.db.commitlog;
 
+import org.apache.cassandra.utils.NoSpamLogger;
 import org.apache.cassandra.utils.concurrent.WaitQueue;
 import org.slf4j.*;
 
@@ -28,8 +29,6 @@ import static 
org.apache.cassandra.db.commitlog.CommitLogSegment.Allocation;
 
 public abstract class AbstractCommitLogService
 {
-// how often should we log syngs that lag behind our desired period
-private static final long LAG_REPORT_INTERVAL = 
TimeUnit.MINUTES.toMillis(5);
 
 private Thread thread;
 private volatile boolean shutdown = false;
@@ -112,11 +111,18 @@ public abstract class AbstractCommitLogService
 syncCount++;
 totalSyncDuration += now - syncStarted;
 
-                        if (firstLagAt > 0 && now - firstLagAt >= LAG_REPORT_INTERVAL)
+                        if (firstLagAt > 0)
                         {
-                            logger.warn(String.format("Out of %d commit log syncs over the past %ds with average duration of %.2fms, %d have exceeded the configured commit interval by an average of %.2fms",
-                                                      syncCount, (now - firstLagAt) / 1000, (double) totalSyncDuration / syncCount, lagCount, (double) syncExceededIntervalBy / lagCount));
-                            firstLagAt = 0;
+                            //Only reset the lag tracking if it actually logged this time
+                            boolean logged = NoSpamLogger.log(
+                                    logger,
+                                    NoSpamLogger.Level.WARN,
+                                    5,
+                                    TimeUnit.MINUTES,
+                                    "Out of {} commit log syncs over the past {}s with average duration of {}ms, {} have exceeded the configured commit interval by an average of {}ms",
+                                    syncCount, (now - firstLagAt) / 1000, String.format("%.2f", (double) totalSyncDuration / syncCount), lagCount, String.format("%.2f", (double) syncExceededIntervalBy / lagCount));
+                            if (logged)
+                                firstLagAt = 0;
                         }
 
 // if we have lagged this round, we probably have work 
to do already so we don't sleep

http://git-wip-us.apache.org/repos/asf/cassandra/blob/cb062839/src/java/org/apache/cassandra/db/commitlog/MemoryMappedSegment.java
--
diff --git 
a/src/java/org/apache/cassandra/db/commitlog/MemoryMappedSegment.java 
b/src/java/org/apache/cassandra/db/commitlog/MemoryMappedSegment.java
index e240a91..3a52e11 100644
--- a/src/java/org/apache/cassandra/db/commitlog/MemoryMappedSegment.java
+++ b/src/java/org/apache/cassandra/db/commitlog/MemoryMappedSegment.java
@@ -99,7 +99,7 @@ public class MemoryMappedSegment extends CommitLogSegment
 {
 throw new 

[jira] [Updated] (CASSANDRA-9090) Allow JMX over SSL directly from nodetool

2015-06-24 Thread Marcus Olsson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Olsson updated CASSANDRA-9090:
-
Attachment: cassandra-2.1-9090.patch

This patch makes it possible to use SSL with nodetool by running it with:
{noformat}
nodetool --ssl
{noformat}

Then either using a configuration file in ~/.cassandra/ called 
nodetool.properties
{code:title=nodetool.properties}
# Path to keystore
keyStore=/path/to/keystore
# Keystore password
keyStorePassword=keystore-password
# Path to truststore
trustStore=/path/to/truststore
# Truststore password
trustStorePassword=truststore-password
# Enabled cipher suites
cipherSuites=enabled-cipher-suites
# Enabled protocols
enabledProtocols=enabled-protocols
{code}
or by running it with the flags:
{noformat}
nodetool --ssl -Djavax.net.ssl.keyStore=/path/to/keystore 
-Djavax.net.ssl.keyStorePassword=keystore-password 
-Djavax.net.ssl.trustStore=/path/to/truststore 
-Djavax.net.ssl.trustStorePassword=truststore-password 
-Djavax.rmi.ssl.client.enabledCipherSuites=enabled-cipher-suites 
-Djavax.rmi.ssl.client.enabledProtocols=enabled-protocols
{noformat}

 Allow JMX over SSL directly from nodetool
 -

 Key: CASSANDRA-9090
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9090
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Philip Thompson
 Fix For: 3.x, 2.1.x, 2.0.x

 Attachments: cassandra-2.1-9090.patch


 Currently cqlsh allows users to connect via SSL to their cassandra cluster 
 via command line. 
 Nodetool only offers username/password authentication [1], and if users want 
 to use SSL, they need to use jconsole [2]. We should support nodetool 
 connecting via SSL in the same way cqlsh does.
 [1] http://wiki.apache.org/cassandra/JmxSecurity
 [2] https://www.lullabot.com/blog/article/monitor-java-jmx
 [3] 
 http://docs.oracle.com/javase/8/docs/technotes/guides/management/agent.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9646) Duplicated schema change event for table creation

2015-06-24 Thread Jorge Bay (JIRA)
Jorge Bay created CASSANDRA-9646:


 Summary: Duplicated schema change event for table creation
 Key: CASSANDRA-9646
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9646
 Project: Cassandra
  Issue Type: Bug
 Environment: OSX 10.9
Reporter: Jorge Bay


When I create a table (or a function), I'm getting notifications for 2 changes:
- Target: KEYSPACE and type: UPDATED
- Target: TABLE and type: CREATED.

I think the first one should not be there. This only occurs with C* 2.2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7066) Simplify (and unify) cleanup of compaction leftovers

2015-06-24 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14599340#comment-14599340
 ] 

Benedict commented on CASSANDRA-7066:
-

Thanks. I've pushed 
[here|https://github.com/belliottsmith/cassandra/tree/7066-suggestions] some 
further follow up suggestions. The most important are the comments around 
{{LifecycleTransaction.doCommit()}}, which needs just a little more tweaking to 
be safe.

bq. However, I don't see why we need to do this every time

It also isn't at all onerous. Let's just annotate the code with the fact we can 
remove it in 4.0, and keep things nice and simple here. It's great to expunge 
all of that old code with just a couple of simple replacing lines.

 Simplify (and unify) cleanup of compaction leftovers
 

 Key: CASSANDRA-7066
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7066
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Stefania
Priority: Minor
  Labels: compaction
 Fix For: 3.x

 Attachments: 7066.txt


 Currently we manage a list of in-progress compactions in a system table, 
 which we use to clean up incomplete compactions when we're done. The problem 
 with this is that 1) it's a bit clunky (and leaves us in positions where we 
 can unnecessarily clean up completed files, or conversely not clean up files 
 that have been superseded); and 2) it's only used for a regular compaction - 
 no other compaction types are guarded in the same way, so they can result in 
 duplication if we fail before deleting the replacements.
 I'd like to see each sstable store in its metadata its direct ancestors, and 
 on startup we simply delete any sstables that occur in the union of all 
 ancestor sets. This way as soon as we finish writing we're capable of 
 cleaning up any leftovers, so we never get duplication. It's also much easier 
 to reason about.
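
As an illustration of the startup cleanup described above, here is a minimal 
sketch that deletes (here: just prints) any sstable appearing in the union of 
all ancestor sets; the data structures and generation numbers are hypothetical, 
not Cassandra's real metadata API.

{code:java}
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class AncestorCleanupSketch
{
    public static void main(String[] args)
    {
        // hypothetical representation: sstable generation -> generations of its direct ancestors
        Map<Integer, Set<Integer>> ancestorsByGeneration = new HashMap<>();
        ancestorsByGeneration.put(10, new HashSet<>());                       // original flush
        ancestorsByGeneration.put(11, new HashSet<>());                       // original flush
        ancestorsByGeneration.put(12, new HashSet<>(Arrays.asList(10, 11)));  // compaction output

        // union of all ancestor sets = sstables superseded by a completed compaction
        Set<Integer> superseded = new HashSet<>();
        for (Set<Integer> ancestors : ancestorsByGeneration.values())
            superseded.addAll(ancestors);

        // anything in that union is a leftover and can be removed at startup
        for (Integer generation : ancestorsByGeneration.keySet())
            if (superseded.contains(generation))
                System.out.println("would delete leftover sstable generation " + generation);
    }
}
{code}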



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9631) Unnecessary required filtering for query on indexed clustering key

2015-06-24 Thread Kevin Deldycke (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14599359#comment-14599359
 ] 

Kevin Deldycke commented on CASSANDRA-9631:
---

Thanks [~blerer] for this quick update ! :)

 Unnecessary required filtering for query on indexed clustering key
 --

 Key: CASSANDRA-9631
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9631
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cassandra 2.1.6 vanilla; 3-node local cluster; OSX 
 Yosemite 10.10.3; Installed with CCM.
Reporter: Kevin Deldycke
Assignee: Benjamin Lerer
  Labels: CQL, query, secondaryIndex

 Let's create and populate a simple table composed of one partition key {{a}}, 
 two clustering keys {{b}} & {{c}}, and one secondary index on a standard 
 column {{e}}:
 {code:sql}
 $ cqlsh 127.0.0.1
 Connected to test21 at 127.0.0.1:9160.
 [cqlsh 4.1.1 | Cassandra 2.1.6-SNAPSHOT | CQL spec 3.1.1 | Thrift protocol 
 19.39.0]
 Use HELP for help.
 cqlsh CREATE KEYSPACE test WITH REPLICATION={'class': 'SimpleStrategy', 
 'replication_factor': 3};
 cqlsh CREATE TABLE test.table1 (
... a int,
... b int,
... c int,
... d int,
... e int,
... PRIMARY KEY (a, b, c)
... );
 cqlsh CREATE INDEX table1_e ON test.table1 (e);
 cqlsh INSERT INTO test.table1 (a, b, c, d, e) VALUES (1, 1, 1, 1, 1);
 (...)
 cqlsh SELECT * FROM test.table1;
  a | b | c | d | e
 ---+---+---+---+---
  1 | 1 | 1 | 1 | 1
  1 | 1 | 2 | 2 | 2
  1 | 1 | 3 | 3 | 3
  1 | 2 | 1 | 1 | 3
  1 | 3 | 1 | 1 | 1
  2 | 4 | 1 | 1 | 1
 (6 rows)
 {code}
 With such a schema, I am allowed to query on the indexed column without 
 filtering by providing the first two elements of the primary key:
 {code:sql}
 cqlsh SELECT * FROM test.table1 WHERE a=1 AND b=1 AND e=3;
  a | b | c | d | e
 ---+---+---+---+---
  1 | 1 | 3 | 3 | 3
 (1 rows)
 {code}
 Let's now introduce an index on the first clustering key:
 {code:sql}
 cqlsh CREATE INDEX table1_b ON test.table1 (b);
 {code}
 Now, I expect the same query as above to work without filtering, but it does not:
 {code:sql}
 cqlsh SELECT * FROM test.table1 WHERE a=1 AND b=1 AND e=3;
 Bad Request: Cannot execute this query as it might involve data filtering and 
 thus may have unpredictable performance. If you want to execute this query 
 despite the performance unpredictability, use ALLOW FILTERING
 {code}
 I think this is a bug in the way secondary indexes are accounted for when 
 checking for unfiltered queries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9646) Duplicated schema change event for table creation

2015-06-24 Thread Jorge Bay (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jorge Bay updated CASSANDRA-9646:
-
Labels: client-impacting  (was: )

 Duplicated schema change event for table creation
 -

 Key: CASSANDRA-9646
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9646
 Project: Cassandra
  Issue Type: Bug
 Environment: OSX 10.9
Reporter: Jorge Bay
  Labels: client-impacting

 When I create a table (or a function), I'm getting notifications for 2 
 changes:
 - Target: KEYSPACE and type: UPDATED
 - Target: TABLE and type: CREATED.
 I think the first one should not be there. This only occurs with C* 2.2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9346) Expand upgrade testing for commitlog changes

2015-06-24 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14599180#comment-14599180
 ] 

Branimir Lambov commented on CASSANDRA-9346:


bq. Bracing style inconsistencies are throughout both CommitLogUpgradeTest and 
CommitLogUpgradeTestMaker,

Fixed and uploaded to branch.

bq. along with unused exceptions, vars, etc.

Saw one unused variable, but nothing else. I'm not seeing any warnings either. 
Could you point me to specific problems?

 Expand upgrade testing for commitlog changes
 

 Key: CASSANDRA-9346
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9346
 Project: Cassandra
  Issue Type: Test
  Components: Tests
Reporter: Philip Thompson
Assignee: Branimir Lambov

 It seems that the current upgrade dtests always flush/drain a node before 
 upgrading it, meaning we have no coverage of reading the commitlog files from 
 a previous version.
 We should add (unless they exist somewhere I am not aware of ) a suite of 
 tests that specifically target upgrading with a significant amount of data 
 left in the commitlog files, that needs to be read by the upgraded node.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9631) Unnecessary required filtering for query on indexed clustering key

2015-06-24 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14599347#comment-14599347
 ] 

Benjamin Lerer commented on CASSANDRA-9631:
---

{quote}Now, using the index on b for that query is not smart and C* should 
always stick to the index on e for that query as it will be faster and won't 
require filtering. It's just not what happens in 2.1.{quote}

I had a look at the code of 2.1 and it seems that it used to ignore the index 
on {{b}} for this specific case. I broke that behaviour when I implemented 
CASSANDRA-8275. The 2.2/trunk behaviour was not affected by my patch.

I will provide some patches to fix the problem in 2.0 and 2.1 and add some 
extra unit tests to all the versions.

 Unnecessary required filtering for query on indexed clustering key
 --

 Key: CASSANDRA-9631
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9631
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cassandra 2.1.6 vanilla; 3-node local cluster; OSX 
 Yosemite 10.10.3; Installed with CCM.
Reporter: Kevin Deldycke
Assignee: Benjamin Lerer
  Labels: CQL, query, secondaryIndex

 Let's create and populate a simple table composed of one partition key {{a}}, 
 two clustering keys {{b}} & {{c}}, and one secondary index on a standard 
 column {{e}}:
 {code:sql}
 $ cqlsh 127.0.0.1
 Connected to test21 at 127.0.0.1:9160.
 [cqlsh 4.1.1 | Cassandra 2.1.6-SNAPSHOT | CQL spec 3.1.1 | Thrift protocol 
 19.39.0]
 Use HELP for help.
 cqlsh CREATE KEYSPACE test WITH REPLICATION={'class': 'SimpleStrategy', 
 'replication_factor': 3};
 cqlsh CREATE TABLE test.table1 (
... a int,
... b int,
... c int,
... d int,
... e int,
... PRIMARY KEY (a, b, c)
... );
 cqlsh CREATE INDEX table1_e ON test.table1 (e);
 cqlsh INSERT INTO test.table1 (a, b, c, d, e) VALUES (1, 1, 1, 1, 1);
 (...)
 cqlsh SELECT * FROM test.table1;
  a | b | c | d | e
 ---+---+---+---+---
  1 | 1 | 1 | 1 | 1
  1 | 1 | 2 | 2 | 2
  1 | 1 | 3 | 3 | 3
  1 | 2 | 1 | 1 | 3
  1 | 3 | 1 | 1 | 1
  2 | 4 | 1 | 1 | 1
 (6 rows)
 {code}
 With such a schema, I am allowed to query on the indexed column without 
 filtering by providing the first two elements of the primary key:
 {code:sql}
 cqlsh SELECT * FROM test.table1 WHERE a=1 AND b=1 AND e=3;
  a | b | c | d | e
 ---+---+---+---+---
  1 | 1 | 3 | 3 | 3
 (1 rows)
 {code}
 Let's now introduce an index on the first clustering key:
 {code:sql}
 cqlsh CREATE INDEX table1_b ON test.table1 (b);
 {code}
 Now, I expect the same query as above to work without filtering, but it does not:
 {code:sql}
 cqlsh SELECT * FROM test.table1 WHERE a=1 AND b=1 AND e=3;
 Bad Request: Cannot execute this query as it might involve data filtering and 
 thus may have unpredictable performance. If you want to execute this query 
 despite the performance unpredictability, use ALLOW FILTERING
 {code}
 I think this is a bug in the way secondary indexes are accounted for when 
 checking for unfiltered queries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9528) Improve log output from unit tests

2015-06-24 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14600197#comment-14600197
 ] 

Ariel Weisberg commented on CASSANDRA-9528:
---

Pushed a new branch since this was already merged.
https://github.com/apache/cassandra/compare/trunk...aweisberg:C-9528-2

 Improve log output from unit tests
 --

 Key: CASSANDRA-9528
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9528
 Project: Cassandra
  Issue Type: Test
Reporter: Ariel Weisberg
Assignee: Ariel Weisberg
 Fix For: 3.0 beta 1


 * Single log output file per suite
 * stdout/stderr to the same log file with proper interleaving
 * Don't interleave interactive output from unit tests run concurrently to the 
 console. Print everything about the test once the test has completed.
 * Fetch and compress log files as part of artifacts collected by cassci



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9126) java.lang.RuntimeException: Last written key DecoratedKey >= current key DecoratedKey

2015-06-24 Thread Darla Baker (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14600119#comment-14600119
 ] 

Darla Baker commented on CASSANDRA-9126:


Another user reported this same error in 2.0.9; they are using hsha and LCS. 
Scrub fixes the issue but it returns on occasion. As I get more details, I'll 
update.

 java.lang.RuntimeException: Last written key DecoratedKey >= current key 
 DecoratedKey
 -

 Key: CASSANDRA-9126
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9126
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: srinivasu gottipati
Priority: Critical
 Fix For: 2.0.x


 Cassandra V: 2.0.14
 Getting the following exceptions while trying to compact (I see this issue 
 was raised in earlier versions and marked as closed; however, it still appears 
 in 2.0.14). In our case, compaction is not succeeding and keeps failing 
 with this error:
 {code}java.lang.RuntimeException: Last written key 
 DecoratedKey(3462767860784856708, 
 354038323137333038305f3330325f31355f474d4543454f) >= current key 
 DecoratedKey(3462334604624154281, 
 354036333036353334315f3336315f31355f474d4543454f) writing into {code}
 ...
 Stacktrace:{code}
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:143)
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:166)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:167)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
   at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
   at java.lang.Thread.run(Thread.java:745){code}
 Any help is greatly appreciated



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9640) Nodetool repair of very wide, large rows causes GC pressure and destabilization

2015-06-24 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14600148#comment-14600148
 ] 

Jonathan Ellis commented on CASSANDRA-9640:
---

How large are the cells in these partitions?  Can you attach cfstats?

 Nodetool repair of very wide, large rows causes GC pressure and 
 destabilization
 ---

 Key: CASSANDRA-9640
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9640
 Project: Cassandra
  Issue Type: Bug
 Environment: AWS, ~8GB heap
Reporter: Constance Eustace
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 2.1.x

 Attachments: syslog.zip


 We've noticed our nodes becoming unstable with large, unrecoverable Old Gen 
 GCs until OOM.
 This appears to be around the time of repair, and the specific cause seems to 
 be one of our report computation tables that involves possibly very wide rows 
 with 10GB of data in them. This is an RF 3 table in a four-node cluster.
 We truncate this occasionally, and we also had disabled this computation 
 report for a bit and noticed better node stability.
 I wish I had more specifics. We are switching to an RF 1 table and will do 
 more proactive truncation of the table. 
 When things calm down, we will attempt to replicate the issue and watch GC 
 and other logs.
 Any suggestion for things to look for/enable tracing on would be welcome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9619) Read performance regression in tables with many columns on trunk and 2.2 vs. 2.1

2015-06-24 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14600167#comment-14600167
 ] 

Jim Witschey commented on CASSANDRA-9619:
-

The bisect is down to this range of commits:

https://github.com/apache/cassandra/compare/f34f712ad340ddb9b03619c84c950a1b854244d6...a4d075800cd7a56f8b5091e502aae979c318972b

To me it seems plausible -- a change in the memory management system could 
cause unpredictable performance regressions. Unfortunately, these commits are 
merged in from 2.1, so they don't explain a performance difference between 2.1 
and 2.2, at least not on their own. That, plus the fact that the regression 
doesn't show itself every time, makes me wonder if I have incorrectly marked 
some bad commits as good based on one good run. I'll revisit some of my 
bisecting decisions.

In the meantime, [~tjake], do you think trying to revert that memory management 
code is doable/worth a try?

 Read performance regression in tables with many columns on trunk and 2.2 vs. 
 2.1
 

 Key: CASSANDRA-9619
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9619
 Project: Cassandra
  Issue Type: Bug
Reporter: Jim Witschey
Assignee: T Jake Luciani
  Labels: perfomance
 Fix For: 2.2.0 rc2


 There seems to be a regression in read in 2.2 and trunk, as compared to 2.1 
 and 2.0. I found it running cstar_perf jobs with 50-column tables. 2.2 may be 
 worse than trunk, though my results on that aren't consistent. The relevant 
 cstar_perf jobs are here:
 http://cstar.datastax.com/tests/id/273e2ea8-0fc8-11e5-816c-42010af0688f
 http://cstar.datastax.com/tests/id/3a8002d6-1480-11e5-97ff-42010af0688f
 http://cstar.datastax.com/tests/id/40ff2766-1248-11e5-bac8-42010af0688f
 The sequence of commands for these jobs is
 {code}
 stress write n=6500 -rate threads=300 -col n=FIXED\(50\)
 stress read n=6500 -rate threads=300
 stress read n=6500 -rate threads=300
 {code}
 Have a look at the operations per second going from [the first read 
 operation|http://cstar.datastax.com/graph?stats=273e2ea8-0fc8-11e5-816c-42010af0688f&metric=op_rate&operation=2_read&smoothing=1&show_aggregates=true&xmin=0&xmax=729.08&ymin=0&ymax=174379.7]
  to [the second read 
 operation|http://cstar.datastax.com/graph?stats=273e2ea8-0fc8-11e5-816c-42010af0688f&metric=op_rate&operation=2_read&smoothing=1&show_aggregates=true&xmin=0&xmax=729.08&ymin=0&ymax=174379.7].
  They've fallen from ~135K to ~100K comparing trunk to 2.1 and 2.0. It's 
 slightly worse for 2.2, and 2.2 operations per second fall continuously from 
 the first to the second read operation.
 There's a corresponding increase in read latency -- it's noticeable on trunk 
 and pretty bad on 2.2. Again, the latency gets higher and higher on 2.2 as 
 the read operations progress (see the graphs 
 [here|http://cstar.datastax.com/graph?stats=273e2ea8-0fc8-11e5-816c-42010af0688f&metric=95th_latency&operation=2_read&smoothing=1&show_aggregates=true&xmin=0&xmax=729.08&ymin=0&ymax=17.27]
  and 
 [here|http://cstar.datastax.com/graph?stats=273e2ea8-0fc8-11e5-816c-42010af0688f&metric=95th_latency&operation=3_read&smoothing=1&show_aggregates=true&xmin=0&xmax=928.62&ymin=0&ymax=14.52]).
 I see a similar regression in a [more recent 
 test|http://cstar.datastax.com/graph?stats=40ff2766-1248-11e5-bac8-42010af0688f&metric=op_rate&operation=2_read&smoothing=1&show_aggregates=true&xmin=0&xmax=752.62&ymin=0&ymax=171799.1],
  though in this one trunk performed worse than 2.2. This run also didn't 
 display the increasing latency in 2.2.
 This regression may show for smaller numbers of columns, but not as 
 prominently, as shown [in the results to this test with the stress default of 
 5 
 columns|http://cstar.datastax.com/graph?stats=227cb89e-0fc8-11e5-9f14-42010af0688f&metric=99.9th_latency&operation=3_read&smoothing=1&show_aggregates=true&xmin=0&xmax=498.19&ymin=0&ymax=334.29].
  There's an increase in latency variability on trunk and 2.2, but I don't see 
 a regression in summary statistics.
 My measurements aren't confounded by [the recent regression in 
 cassandra-stress|https://issues.apache.org/jira/browse/CASSANDRA-9558]; 
 cstar_perf uses the same stress program (from trunk) on all versions on the 
 cluster.
 I'm currently working to
 - reproduce with a smaller workload so this is easier to bisect and debug.
 - get results with larger numbers of columns, since we've seen the regression 
 on 50 columns but not the stress default of 5.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9640) Nodetool repair of very wide, large rows causes GC pressure and destabilization

2015-06-24 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-9640:
---
Assignee: Yuki Morishita

 Nodetool repair of very wide, large rows causes GC pressure and 
 destabilization
 ---

 Key: CASSANDRA-9640
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9640
 Project: Cassandra
  Issue Type: Bug
 Environment: AWS, ~8GB heap
Reporter: Constance Eustace
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 2.1.x

 Attachments: syslog.zip


 We've noticed our nodes becoming unstable with large, unrecoverable Old Gen 
 GCs until OOM.
 This appears to be around the time of repair, and the specific cause seems to 
 be one of our report computation tables that involves possibly very wide rows 
 with 10GB of data in them. This is an RF 3 table in a four-node cluster.
 We truncate this occasionally, and we also had disabled this computation 
 report for a bit and noticed better node stability.
 I wish I had more specifics. We are switching to an RF 1 table and do more 
 proactive truncation of the table. 
 When things calm down, we will attempt to replicate the issue and watch GC 
 and other logs.
 Any suggestion for things to look for/enable tracing on would be welcome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9640) Nodetool repair of very wide, large rows causes GC pressure and destabilization

2015-06-24 Thread Constance Eustace (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14600067#comment-14600067
 ] 

Constance Eustace edited comment on CASSANDRA-9640 at 6/24/15 8:43 PM:
---

java version 1.7.0_75
Java(TM) SE Runtime Environment (build 1.7.0_75-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.75-b04, mixed mode)

cass version: 2.1.5

nodetool command:

org.apache.cassandra.tools.NodeTool -p 17199 repair -par -inc




was (Author: cowardlydragon):
java version 1.7.0_75
Java(TM) SE Runtime Environment (build 1.7.0_75-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.75-b04, mixed mode)

cass version: 2.1.5





 Nodetool repair of very wide, large rows causes GC pressure and 
 destabilization
 ---

 Key: CASSANDRA-9640
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9640
 Project: Cassandra
  Issue Type: Bug
 Environment: AWS, ~8GB heap
Reporter: Constance Eustace
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 2.1.x

 Attachments: syslog.zip


 We've noticed our nodes becoming unstable with large, unrecoverable Old Gen 
 GCs until OOM.
 This appears to be around the time of repair, and the specific cause seems to 
 be one of our report computation tables that involves possibly very wide rows 
 with 10GB of data in them. This is an RF 3 table in a four-node cluster.
 We truncate this occasionally, and we also had disabled this computation 
 report for a bit and noticed better node stability.
 I wish I had more specifics. We are switching to an RF 1 table and do more 
 proactive truncation of the table. 
 When things calm down, we will attempt to replicate the issue and watch GC 
 and other logs.
 Any suggestion for things to look for/enable tracing on would be welcome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9647) Tables created by cassandra-stress are omitted in DESCRIBE KEYSPACE

2015-06-24 Thread Ryan McGuire (JIRA)
Ryan McGuire created CASSANDRA-9647:
---

 Summary: Tables created by cassandra-stress are omitted in 
DESCRIBE KEYSPACE
 Key: CASSANDRA-9647
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9647
 Project: Cassandra
  Issue Type: Bug
Reporter: Ryan McGuire
Assignee: T Jake Luciani
Priority: Minor


CASSANDRA-9374 modified cassandra-stress to only use CQL for creating its 
schema. This seems to work, as I'm testing on a cluster with start_rpc:false.

However, when I try to run a DESCRIBE on the schema it omits the tables, 
complaining that they were created with a legacy API:

{code}
cqlsh> DESCRIBE KEYSPACE keyspace1 ;

CREATE KEYSPACE keyspace1 WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': '1'}  AND durable_writes = true;

/*
Warning: Table keyspace1.counter1 omitted because it has constructs not 
compatible with CQL (was created via legacy API).

Approximate structure, for reference:
(this should not be used to reproduce this schema)

CREATE TABLE keyspace1.counter1 (
key blob PRIMARY KEY,
C0 counter,
C1 counter,
C2 counter,
C3 counter,
C4 counter
) WITH COMPACT STORAGE
AND bloom_filter_fp_chance = 0.01
AND caching = '{keys:ALL, rows_per_partition:NONE}'
AND comment = ''
AND compaction = {'min_threshold': '4', 'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32'}
AND compression = {}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';
*/

/*
Warning: Table keyspace1.standard1 omitted because it has constructs not 
compatible with CQL (was created via legacy API).

Approximate structure, for reference:
(this should not be used to reproduce this schema)

CREATE TABLE keyspace1.standard1 (
key blob PRIMARY KEY,
C0 blob,
C1 blob,
C2 blob,
C3 blob,
C4 blob
) WITH COMPACT STORAGE
AND bloom_filter_fp_chance = 0.01
AND caching = '{keys:ALL, rows_per_partition:NONE}'
AND comment = ''
AND compaction = {'min_threshold': '4', 'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32'}
AND compression = {}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';
*/

cqlsh>
{code}

Note that it attempts to describe them anyway, but they are commented out and 
shouldn't be used to restore from.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9647) Tables created by cassandra-stress are omitted in DESCRIBE KEYSPACE

2015-06-24 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-9647:

Description: 
CASSANDRA-9374 modified cassandra-stress to only use CQL for creating its 
schema. This seems to work, as I'm testing on a cluster with start_rpc:false.

However, when I try to run a DESCRIBE on the schema it omits the tables, 
complaining that they were created with a legacy API:

{code}
cqlsh> DESCRIBE KEYSPACE keyspace1 ;

CREATE KEYSPACE keyspace1 WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': '1'}  AND durable_writes = true;

/*
Warning: Table keyspace1.counter1 omitted because it has constructs not 
compatible with CQL (was created via legacy API).

Approximate structure, for reference:
(this should not be used to reproduce this schema)

CREATE TABLE keyspace1.counter1 (
key blob PRIMARY KEY,
C0 counter,
C1 counter,
C2 counter,
C3 counter,
C4 counter
) WITH COMPACT STORAGE
AND bloom_filter_fp_chance = 0.01
AND caching = '{keys:ALL, rows_per_partition:NONE}'
AND comment = ''
AND compaction = {'min_threshold': '4', 'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32'}
AND compression = {}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';
*/

/*
Warning: Table keyspace1.standard1 omitted because it has constructs not 
compatible with CQL (was created via legacy API).

Approximate structure, for reference:
(this should not be used to reproduce this schema)

CREATE TABLE keyspace1.standard1 (
key blob PRIMARY KEY,
C0 blob,
C1 blob,
C2 blob,
C3 blob,
C4 blob
) WITH COMPACT STORAGE
AND bloom_filter_fp_chance = 0.01
AND caching = '{keys:ALL, rows_per_partition:NONE}'
AND comment = ''
AND compaction = {'min_threshold': '4', 'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32'}
AND compression = {}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';
*/

cqlsh>
{code}

Note that it attempts to describe them anyway, but they are commented out and 
shouldn't be used to restore from.

[This is the ccm workflow I used to test 
this|https://gist.githubusercontent.com/EnigmaCurry/e779055c8debf6de8ef9/raw/a894e99725b6df599f3ce1db5012dd6d069b1339/gistfile1.txt]

  was:
CASSANDRA-9374 modified cassandra-stress to only use CQL for creating its 
schema. This seems to work, as I'm testing on a cluster with start_rpc:false.

However, when I try to run a DESCRIBE on the schema it omits the tables, 
complaining that they were created with a legacy API:

{code}
cqlsh> DESCRIBE KEYSPACE keyspace1 ;

CREATE KEYSPACE keyspace1 WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': '1'}  AND durable_writes = true;

/*
Warning: Table keyspace1.counter1 omitted because it has constructs not 
compatible with CQL (was created via legacy API).

Approximate structure, for reference:
(this should not be used to reproduce this schema)

CREATE TABLE keyspace1.counter1 (
key blob PRIMARY KEY,
C0 counter,
C1 counter,
C2 counter,
C3 counter,
C4 counter
) WITH COMPACT STORAGE
AND bloom_filter_fp_chance = 0.01
AND caching = '{keys:ALL, rows_per_partition:NONE}'
AND comment = ''
AND compaction = {'min_threshold': '4', 'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32'}
AND compression = {}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';
*/

/*
Warning: Table keyspace1.standard1 omitted because it has constructs not 
compatible with CQL (was created via legacy API).

Approximate structure, for reference:
(this should not be used to reproduce this schema)

CREATE TABLE keyspace1.standard1 (
key blob PRIMARY KEY,
C0 blob,
C1 blob,
C2 blob,
C3 blob,
C4 blob
) WITH COMPACT STORAGE
AND bloom_filter_fp_chance = 0.01
AND caching = '{keys:ALL, rows_per_partition:NONE}'
AND comment = ''
AND compaction = {'min_threshold': '4', 'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32'}
AND compression = {}
AND 

[jira] [Created] (CASSANDRA-9648) Warn if power profile is not High Performance on Windows

2015-06-24 Thread Joshua McKenzie (JIRA)
Joshua McKenzie created CASSANDRA-9648:
--

 Summary: Warn if power profile is not High Performance on Windows
 Key: CASSANDRA-9648
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9648
 Project: Cassandra
  Issue Type: Improvement
Reporter: Joshua McKenzie
Assignee: Joshua McKenzie
Priority: Minor
 Fix For: 2.2.x


Windows' power profiles have a pretty marked impact on application performance, 
and the CPU frequency throttling is fairly aggressive even in Balanced mode. As 
we have a large number of threads with varying work rather than a single busy 
thread per core, the Windows scheduler sees enough downtime to constantly 
struggle with our user-space operations, and the frequency on the system will 
jump up and down even when fully saturated by a stress run.

I've done some benchmarking of the Balanced vs. High Performance power 
profiles - [link to performance 
numbers|https://docs.google.com/spreadsheets/d/1YS8VtdZAgyec-mcnSgtNhQH9LiHstOaiMtlppvEIIM8/edit#gid=0].
 Note: reads are not saturating the box (or even impacting resources at all, 
really), as the CPUs on both the stress client and the node are sitting around 
4% usage. Still have something to figure out there on 2.2.

We have a few ways we can approach this. For the first option (warn), here's a 
branch that logs a warning during startup if a non-High Performance power 
profile is detected: 
[here|https://github.com/apache/cassandra/compare/trunk...josh-mckenzie:check_power_plan].

Alternatively, we could get more aggressive and actually attempt a powercfg /s 
to the GUID of the High Performance power profile, or refuse to start Cassandra 
if we're not in that profile. I also briefly pursued using Sigar to query this 
information, but the documentation for the library is no longer available (or 
at least I couldn't find it).
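
As a rough illustration of the "warn" option, a startup check might shell out to 
powercfg and compare the active scheme against the well-known High Performance GUID. 
The sketch below is a guess at the shape of such a check, not the code in the linked 
branch; the class name and logging are assumptions.
{code}
// Hedged sketch: warn if the active Windows power scheme is not High Performance.
// Assumes powercfg is on the PATH (Windows only); names are illustrative.
import java.io.BufferedReader;
import java.io.InputStreamReader;

final class PowerProfileCheck
{
    // Well-known GUID of the built-in High Performance power scheme.
    private static final String HIGH_PERFORMANCE_GUID = "8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c";

    public static void main(String[] args) throws Exception
    {
        Process p = new ProcessBuilder("powercfg", "/getactivescheme")
                        .redirectErrorStream(true).start();
        StringBuilder out = new StringBuilder();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream())))
        {
            String line;
            while ((line = r.readLine()) != null)
                out.append(line).append('\n');
        }
        p.waitFor();
        if (!out.toString().toLowerCase().contains(HIGH_PERFORMANCE_GUID))
            System.err.println("WARN: active power profile is not High Performance; "
                               + "CPU frequency scaling may throttle Cassandra.");
    }
}
{code}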



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-9619) Read performance regression in tables with many columns on trunk and 2.2 vs. 2.1

2015-06-24 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani reassigned CASSANDRA-9619:
-

Assignee: T Jake Luciani

 Read performance regression in tables with many columns on trunk and 2.2 vs. 
 2.1
 

 Key: CASSANDRA-9619
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9619
 Project: Cassandra
  Issue Type: Bug
Reporter: Jim Witschey
Assignee: T Jake Luciani
  Labels: perfomance
 Fix For: 2.2.0 rc2


 There seems to be a regression in read in 2.2 and trunk, as compared to 2.1 
 and 2.0. I found it running cstar_perf jobs with 50-column tables. 2.2 may be 
 worse than trunk, though my results on that aren't consistent. The relevant 
 cstar_perf jobs are here:
 http://cstar.datastax.com/tests/id/273e2ea8-0fc8-11e5-816c-42010af0688f
 http://cstar.datastax.com/tests/id/3a8002d6-1480-11e5-97ff-42010af0688f
 http://cstar.datastax.com/tests/id/40ff2766-1248-11e5-bac8-42010af0688f
 The sequence of commands for these jobs is
 {code}
 stress write n=6500 -rate threads=300 -col n=FIXED\(50\)
 stress read n=6500 -rate threads=300
 stress read n=6500 -rate threads=300
 {code}
 Have a look at the operations per second going from [the first read 
 operation|http://cstar.datastax.com/graph?stats=273e2ea8-0fc8-11e5-816c-42010af0688f&metric=op_rate&operation=2_read&smoothing=1&show_aggregates=true&xmin=0&xmax=729.08&ymin=0&ymax=174379.7]
  to [the second read 
 operation|http://cstar.datastax.com/graph?stats=273e2ea8-0fc8-11e5-816c-42010af0688f&metric=op_rate&operation=2_read&smoothing=1&show_aggregates=true&xmin=0&xmax=729.08&ymin=0&ymax=174379.7].
  They've fallen from ~135K to ~100K comparing trunk to 2.1 and 2.0. It's 
 slightly worse for 2.2, and 2.2 operations per second fall continuously from 
 the first to the second read operation.
 There's a corresponding increase in read latency -- it's noticeable on trunk 
 and pretty bad on 2.2. Again, the latency gets higher and higher on 2.2 as 
 the read operations progress (see the graphs 
 [here|http://cstar.datastax.com/graph?stats=273e2ea8-0fc8-11e5-816c-42010af0688f&metric=95th_latency&operation=2_read&smoothing=1&show_aggregates=true&xmin=0&xmax=729.08&ymin=0&ymax=17.27]
  and 
 [here|http://cstar.datastax.com/graph?stats=273e2ea8-0fc8-11e5-816c-42010af0688f&metric=95th_latency&operation=3_read&smoothing=1&show_aggregates=true&xmin=0&xmax=928.62&ymin=0&ymax=14.52]).
 I see a similar regression in a [more recent 
 test|http://cstar.datastax.com/graph?stats=40ff2766-1248-11e5-bac8-42010af0688f&metric=op_rate&operation=2_read&smoothing=1&show_aggregates=true&xmin=0&xmax=752.62&ymin=0&ymax=171799.1],
  though in this one trunk performed worse than 2.2. This run also didn't 
 display the increasing latency in 2.2.
 This regression may show for smaller numbers of columns, but not as 
 prominently, as shown [in the results to this test with the stress default of 
 5 
 columns|http://cstar.datastax.com/graph?stats=227cb89e-0fc8-11e5-9f14-42010af0688f&metric=99.9th_latency&operation=3_read&smoothing=1&show_aggregates=true&xmin=0&xmax=498.19&ymin=0&ymax=334.29].
  There's an increase in latency variability on trunk and 2.2, but I don't see 
 a regression in summary statistics.
 My measurements aren't confounded by [the recent regression in 
 cassandra-stress|https://issues.apache.org/jira/browse/CASSANDRA-9558]; 
 cstar_perf uses the same stress program (from trunk) on all versions on the 
 cluster.
 I'm currently working to
 - reproduce with a smaller workload so this is easier to bisect and debug.
 - get results with larger numbers of columns, since we've seen the regression 
 on 50 columns but not the stress default of 5.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9640) Nodetool repair of very wide, large rows causes GC pressure and destabilization

2015-06-24 Thread Constance Eustace (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14600078#comment-14600078
 ] 

Constance Eustace edited comment on CASSANDRA-9640 at 6/24/15 8:51 PM:
---

attached syslog.zip

the destabilization occurs around the end of _0040 and continues into the next 
logfile

I suspect that multiple huge/wide partition keys are being resolved in 
parallel, and that may be filling the heap, since a smaller set of large wide 
rows (1-10 rows) doesn't seem to bother it.

I'd guess we had 20-30 days of rows...

entity_etljob is the processing table that has the ultra-huge rows



was (Author: cowardlydragon):
attached syslog.zip

the destabilization occurs around the end of _0040 and continues into the next 
logfile

I suspect that multiple huge/wide partition keys are being resolved in 
parallel, and that may be filling the heap.

entity_etljob is the processing table that has the ultra-huge rows


 Nodetool repair of very wide, large rows causes GC pressure and 
 destabilization
 ---

 Key: CASSANDRA-9640
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9640
 Project: Cassandra
  Issue Type: Bug
 Environment: AWS, ~8GB heap
Reporter: Constance Eustace
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 2.1.x

 Attachments: syslog.zip


 We've noticed our nodes becoming unstable with large, unrecoverable Old Gen 
 GCs until OOM.
 This appears to be around the time of repair, and the specific cause seems to 
 be one of our report computation tables that involves possibly very wide rows 
 with 10GB of data in them. This is an RF 3 table in a four-node cluster.
 We truncate this occasionally, and we also had disabled this computation 
 report for a bit and noticed better node stability.
 I wish I had more specifics. We are switching to an RF 1 table and do more 
 proactive truncation of the table. 
 When things calm down, we will attempt to replicate the issue and watch GC 
 and other logs.
 Any suggestion for things to look for/enable tracing on would be welcome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9640) Nodetool repair of very wide, large rows causes GC pressure and destabilization

2015-06-24 Thread Constance Eustace (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14600078#comment-14600078
 ] 

Constance Eustace edited comment on CASSANDRA-9640 at 6/24/15 8:52 PM:
---

attached syslog.zip

the destabilization occurs around the end of _0040 and continues into the next 
logfile

I suspect that multiple huge/wide partition keys are being resolved in 
parallel, and that may be filling the heap, since a smaller set of large wide 
rows (1-10 rows) doesn't seem to bother it.

I'd guess we had 20-40 rows of 5-10 GB each when this went teetering down.

entity_etljob is the processing table that has the ultra-huge rows



was (Author: cowardlydragon):
attached syslog.zip

the destabilization occurs around the end of _0040 and continues into the next 
logfile

I suspect that multiple huge/wide partition keys are being resolved in 
parallel, and that may be filling the heap, since a smaller set of large wide 
rows (1-10 rows) doesn't seem to bother it.

I'd guess we had 20-30 days of rows...

entity_etljob is the processing table that has the ultra-huge rows


 Nodetool repair of very wide, large rows causes GC pressure and 
 destabilization
 ---

 Key: CASSANDRA-9640
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9640
 Project: Cassandra
  Issue Type: Bug
 Environment: AWS, ~8GB heap
Reporter: Constance Eustace
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 2.1.x

 Attachments: syslog.zip


 We've noticed our nodes becoming unstable with large, unrecoverable Old Gen 
 GCs until OOM.
 This appears to be around the time of repair, and the specific cause seems to 
 be one of our report computation tables that involves possibly very wide rows 
 with 10GB of data in them. This is an RF 3 table in a four-node cluster.
 We truncate this occasionally, and we also had disabled this computation 
 report for a bit and noticed better node stability.
 I wish I had more specifics. We are switching to an RF 1 table and do more 
 proactive truncation of the table. 
 When things calm down, we will attempt to replicate the issue and watch GC 
 and other logs.
 Any suggestion for things to look for/enable tracing on would be welcome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9640) Nodetool repair of very wide, large rows causes GC pressure and destabilization

2015-06-24 Thread Constance Eustace (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14600078#comment-14600078
 ] 

Constance Eustace edited comment on CASSANDRA-9640 at 6/24/15 8:30 PM:
---

attached syslog.zip

the destabilization occurs around the end of _0040 and continues into the next 
logfile

I suspect that multiple huge/wide partition keys are being resolved in 
parallel, and that may be filling the heap.



was (Author: cowardlydragon):
the destabilization occurs around the end of _0040.


 Nodetool repair of very wide, large rows causes GC pressure and 
 destabilization
 ---

 Key: CASSANDRA-9640
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9640
 Project: Cassandra
  Issue Type: Bug
 Environment: AWS, ~8GB heap
Reporter: Constance Eustace
Priority: Minor
 Fix For: 2.1.x

 Attachments: syslog.zip


 We've noticed our nodes becoming unstable with large, unrecoverable Old Gen 
 GCs until OOM.
 This appears to be around the time of repair, and the specific cause seems to 
 be one of our report computation tables that involves possibly very wide rows 
 with 10GB of data in them. This is an RF 3 table in a four-node cluster.
 We truncate this occasionally, and we also had disabled this computation 
 report for a bit and noticed better node stability.
 I wish I had more specifics. We are switching to an RF 1 table and do more 
 proactive truncation of the table. 
 When things calm down, we will attempt to replicate the issue and watch GC 
 and other logs.
 Any suggestion for things to look for/enable tracing on would be welcome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9558) Cassandra-stress regression in 2.2

2015-06-24 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14600116#comment-14600116
 ] 

T Jake Luciani commented on CASSANDRA-9558:
---

So can we go with a patch that changes the defaults for stress to use v2 and 8 
connections in the pool?
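
For reference, on the client side the change being discussed would look roughly like 
the snippet below, assuming the DataStax Java driver 2.1.x API (ProtocolVersion, 
PoolingOptions). This is a sketch of the proposed defaults, not the actual stress patch.
{code}
// Sketch of the proposed stress defaults: force native protocol v2 and use
// 8 connections per host in the pool (DataStax Java driver 2.1.x API assumed).
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.HostDistance;
import com.datastax.driver.core.PoolingOptions;
import com.datastax.driver.core.ProtocolVersion;

public final class StressConnectionDefaults
{
    public static Cluster build(String contactPoint)
    {
        PoolingOptions pooling = new PoolingOptions()
                .setCoreConnectionsPerHost(HostDistance.LOCAL, 8)
                .setMaxConnectionsPerHost(HostDistance.LOCAL, 8);
        return Cluster.builder()
                      .addContactPoint(contactPoint)
                      .withProtocolVersion(ProtocolVersion.V2)
                      .withPoolingOptions(pooling)
                      .build();
    }
}
{code}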

 Cassandra-stress regression in 2.2
 --

 Key: CASSANDRA-9558
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9558
 Project: Cassandra
  Issue Type: Bug
Reporter: Alan Boudreault
Assignee: T Jake Luciani
 Fix For: 2.2.x

 Attachments: 2.1.log, 2.2.log, CASSANDRA-9558-2.patch, 
 CASSANDRA-9558-ProtocolV2.patch, atolber-CASSANDRA-9558-stress.tgz, 
 atolber-trunk-driver-coalescing-disabled.txt, 
 stress-2.1-java-driver-2.0.9.2.log, stress-2.1-java-driver-2.2+PATCH.log, 
 stress-2.1-java-driver-2.2.log, stress-2.2-java-driver-2.2+PATCH.log, 
 stress-2.2-java-driver-2.2.log


 We are seeing some regression in performance when using cassandra-stress 2.2. 
 You can see the difference at this url:
 http://riptano.github.io/cassandra_performance/graph_v5/graph.html?stats=stress_regression.json&metric=op_rate&operation=1_write&smoothing=1&show_aggregates=true&xmin=0&xmax=108.57&ymin=0&ymax=168147.1
 The cassandra version of the cluster doesn't seem to have any impact. 
 //cc [~tjake] [~benedict]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9558) Cassandra-stress regression in 2.2

2015-06-24 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14600135#comment-14600135
 ] 

Benedict commented on CASSANDRA-9558:
-

The reason I ask about the size of the cluster this is being tested on is that 
this worsens performance as the cluster grows, since we coalesce fewer 
messages. So this could be improving our benchmark performance at the expense 
of real-world performance.

 Cassandra-stress regression in 2.2
 --

 Key: CASSANDRA-9558
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9558
 Project: Cassandra
  Issue Type: Bug
Reporter: Alan Boudreault
Assignee: T Jake Luciani
 Fix For: 2.2.x

 Attachments: 2.1.log, 2.2.log, CASSANDRA-9558-2.patch, 
 CASSANDRA-9558-ProtocolV2.patch, atolber-CASSANDRA-9558-stress.tgz, 
 atolber-trunk-driver-coalescing-disabled.txt, 
 stress-2.1-java-driver-2.0.9.2.log, stress-2.1-java-driver-2.2+PATCH.log, 
 stress-2.1-java-driver-2.2.log, stress-2.2-java-driver-2.2+PATCH.log, 
 stress-2.2-java-driver-2.2.log


 We are seeing some regression in performance when using cassandra-stress 2.2. 
 You can see the difference at this url:
 http://riptano.github.io/cassandra_performance/graph_v5/graph.html?stats=stress_regression.json&metric=op_rate&operation=1_write&smoothing=1&show_aggregates=true&xmin=0&xmax=108.57&ymin=0&ymax=168147.1
 The cassandra version of the cluster doesn't seem to have any impact. 
 //cc [~tjake] [~benedict]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9640) Nodetool repair of very wide, large rows causes GC pressure and destabilization

2015-06-24 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14600323#comment-14600323
 ] 

Jeff Jirsa commented on CASSANDRA-9640:
---

Any reason to believe this is new vs. a dupe of CASSANDRA-9549? Repair causes 
streaming, streaming causes compaction, compaction contributes to 
CASSANDRA-9549.





 Nodetool repair of very wide, large rows causes GC pressure and 
 destabilization
 ---

 Key: CASSANDRA-9640
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9640
 Project: Cassandra
  Issue Type: Bug
 Environment: AWS, ~8GB heap
Reporter: Constance Eustace
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 2.1.x

 Attachments: syslog.zip


 We've noticed our nodes becoming unstable with large, unrecoverable Old Gen 
 GCs until OOM.
 This appears to be around the time of repair, and the specific cause seems to 
 be one of our report computation tables that involves possibly very wide rows 
 with 10GB of data in them. This is an RF 3 table in a four-node cluster.
 We truncate this occasionally, and we also had disabled this computation 
 report for a bit and noticed better node stability.
 I wish I had more specifics. We are switching to an RF 1 table and do more 
 proactive truncation of the table. 
 When things calm down, we will attempt to replicate the issue and watch GC 
 and other logs.
 Any suggestion for things to look for/enable tracing on would be welcome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9640) Nodetool repair of very wide, large rows causes GC pressure and destabilization

2015-06-24 Thread Constance Eustace (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14600078#comment-14600078
 ] 

Constance Eustace edited comment on CASSANDRA-9640 at 6/24/15 8:37 PM:
---

attached syslog.zip

the destabilization occurs around the end of _0040 and continues into the next 
logfile

I suspect that multiple huge/wide partition keys are being resolved in 
parallel, and that may be filling the heap.

entity_etljob is the processing table that has the ultra-huge rows



was (Author: cowardlydragon):
attached syslog.zip

the destabilization occurs around the end of _0040 and continues into the next 
logfile

I suspect that multiple huge/wide partition keys are being resolved in 
parallel, and that may be filling the heap.


 Nodetool repair of very wide, large rows causes GC pressure and 
 destabilization
 ---

 Key: CASSANDRA-9640
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9640
 Project: Cassandra
  Issue Type: Bug
 Environment: AWS, ~8GB heap
Reporter: Constance Eustace
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 2.1.x

 Attachments: syslog.zip


 We've noticed our nodes becoming unstable with large, unrecoverable Old Gen 
 GCs until OOM.
 This appears to be around the time of repair, and the specific cause seems to 
 be one of our report computation tables that involves possibly very wide rows 
 with 10GB of data in them. This is an RF 3 table in a four-node cluster.
 We truncate this occasionally, and we also had disabled this computation 
 report for a bit and noticed better node stability.
 I wish I had more specifics. We are switching to an RF 1 table and do more 
 proactive truncation of the table. 
 When things calm down, we will attempt to replicate the issue and watch GC 
 and other logs.
 Any suggestion for things to look for/enable tracing on would be welcome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9647) Tables created by cassandra-stress are omitted in DESCRIBE KEYSPACE

2015-06-24 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-9647:

Reproduced In: 2.2.0 rc1

 Tables created by cassandra-stress are omitted in DESCRIBE KEYSPACE
 ---

 Key: CASSANDRA-9647
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9647
 Project: Cassandra
  Issue Type: Bug
Reporter: Ryan McGuire
Assignee: T Jake Luciani
Priority: Minor
  Labels: cqlsh, stress

 CASSANDRA-9374 modified cassandra-stress to only use CQL for creating its 
 schema. This seems to work, as I'm testing on a cluster with start_rpc:false.
 However, when I try to run a DESCRIBE on the schema it omits the tables, 
 complaining that they were created with a legacy API:
 {code}
 cqlsh> DESCRIBE KEYSPACE keyspace1 ;
 CREATE KEYSPACE keyspace1 WITH replication = {'class': 'SimpleStrategy', 
 'replication_factor': '1'}  AND durable_writes = true;
 /*
 Warning: Table keyspace1.counter1 omitted because it has constructs not 
 compatible with CQL (was created via legacy API).
 Approximate structure, for reference:
 (this should not be used to reproduce this schema)
 CREATE TABLE keyspace1.counter1 (
 key blob PRIMARY KEY,
 C0 counter,
 C1 counter,
 C2 counter,
 C3 counter,
 C4 counter
 ) WITH COMPACT STORAGE
 AND bloom_filter_fp_chance = 0.01
 AND caching = '{keys:ALL, rows_per_partition:NONE}'
 AND comment = ''
 AND compaction = {'min_threshold': '4', 'class': 
 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
 'max_threshold': '32'}
 AND compression = {}
 AND dclocal_read_repair_chance = 0.1
 AND default_time_to_live = 0
 AND gc_grace_seconds = 864000
 AND max_index_interval = 2048
 AND memtable_flush_period_in_ms = 0
 AND min_index_interval = 128
 AND read_repair_chance = 0.0
 AND speculative_retry = '99.0PERCENTILE';
 */
 /*
 Warning: Table keyspace1.standard1 omitted because it has constructs not 
 compatible with CQL (was created via legacy API).
 Approximate structure, for reference:
 (this should not be used to reproduce this schema)
 CREATE TABLE keyspace1.standard1 (
 key blob PRIMARY KEY,
 C0 blob,
 C1 blob,
 C2 blob,
 C3 blob,
 C4 blob
 ) WITH COMPACT STORAGE
 AND bloom_filter_fp_chance = 0.01
 AND caching = '{keys:ALL, rows_per_partition:NONE}'
 AND comment = ''
 AND compaction = {'min_threshold': '4', 'class': 
 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
 'max_threshold': '32'}
 AND compression = {}
 AND dclocal_read_repair_chance = 0.1
 AND default_time_to_live = 0
 AND gc_grace_seconds = 864000
 AND max_index_interval = 2048
 AND memtable_flush_period_in_ms = 0
 AND min_index_interval = 128
 AND read_repair_chance = 0.0
 AND speculative_retry = '99.0PERCENTILE';
 */
 cqlsh>
 {code}
 Note that it attempts to describe them anyway, but they are commented out and 
 shouldn't be used to restore from.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9528) Improve log output from unit tests

2015-06-24 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14600180#comment-14600180
 ] 

Ariel Weisberg commented on CASSANDRA-9528:
---

I fixed it in the logback config. I updated the rolling to the logs directory, 
but not the uncompressed pre-rolling output.

 Improve log output from unit tests
 --

 Key: CASSANDRA-9528
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9528
 Project: Cassandra
  Issue Type: Test
Reporter: Ariel Weisberg
Assignee: Ariel Weisberg
 Fix For: 3.0 beta 1


 * Single log output file per suite
 * stdout/stderr to the same log file with proper interleaving
 * Don't interleave interactive output from unit tests run concurrently to the 
 console. Print everything about the test once the test has completed.
 * Fetch and compress log files as part of artifacts collected by cassci



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9646) Duplicated schema change event for table creation

2015-06-24 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-9646:
---
Fix Version/s: 2.2.x

 Duplicated schema change event for table creation
 -

 Key: CASSANDRA-9646
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9646
 Project: Cassandra
  Issue Type: Bug
 Environment: OSX 10.9
Reporter: Jorge Bay
  Labels: client-impacting
 Fix For: 2.2.x


 When I create a table (or a function), I'm getting notifications for 2 
 changes:
  - Target: KEYSPACE and type: UPDATED
  - Target: TABLE and type: CREATED.
 I think the first one should not be there. This only occurs with C* 2.2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9090) Allow JMX over SSL directly from nodetool

2015-06-24 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-9090:
---
Reviewer: Jason Brown

 Allow JMX over SSL directly from nodetool
 -

 Key: CASSANDRA-9090
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9090
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Philip Thompson
 Fix For: 3.x, 2.1.x, 2.0.x

 Attachments: cassandra-2.0-9090.patch, cassandra-2.1-9090-1.patch, 
 cassandra-2.1-9090.patch, cassandra-2.2-9090.patch


 Currently cqlsh allows users to connect via SSL to their cassandra cluster 
 via command line. 
 Nodetool only offers username/password authentication [1], and if users want 
 to use SSL, they need to use jconsole [2]. We should support nodetool 
 connecting via SSL in the same way cqlsh does.
 [1] http://wiki.apache.org/cassandra/JmxSecurity
 [2] https://www.lullabot.com/blog/article/monitor-java-jmx
 [3] 
 http://docs.oracle.com/javase/8/docs/technotes/guides/management/agent.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9090) Allow JMX over SSL directly from nodetool

2015-06-24 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-9090:
---
Reviewer:   (was: Jason Brown)

 Allow JMX over SSL directly from nodetool
 -

 Key: CASSANDRA-9090
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9090
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Philip Thompson
 Fix For: 3.x, 2.1.x, 2.0.x

 Attachments: cassandra-2.0-9090.patch, cassandra-2.1-9090-1.patch, 
 cassandra-2.1-9090.patch, cassandra-2.2-9090.patch


 Currently cqlsh allows users to connect via SSL to their cassandra cluster 
 via command line. 
 Nodetool only offers username/password authentication [1], and if users want 
 to use SSL, they need to use jconsole [2]. We should support nodetool 
 connecting via SSL in the same way cqlsh does.
 [1] http://wiki.apache.org/cassandra/JmxSecurity
 [2] https://www.lullabot.com/blog/article/monitor-java-jmx
 [3] 
 http://docs.oracle.com/javase/8/docs/technotes/guides/management/agent.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9090) Allow JMX over SSL directly from nodetool

2015-06-24 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14599478#comment-14599478
 ] 

Jason Brown commented on CASSANDRA-9090:


I suspect it's too late to add new features to 2.0, so I'll only focus on 2.1 
and higher for review.

 Allow JMX over SSL directly from nodetool
 -

 Key: CASSANDRA-9090
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9090
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Philip Thompson
 Fix For: 3.x, 2.1.x, 2.0.x

 Attachments: cassandra-2.0-9090.patch, cassandra-2.1-9090-1.patch, 
 cassandra-2.1-9090.patch, cassandra-2.2-9090.patch


 Currently cqlsh allows users to connect via SSL to their cassandra cluster 
 via command line. 
 Nodetool only offers username/password authentication [1], and if users want 
 to use SSL, they need to use jconsole [2]. We should support nodetool 
 connecting via SSL in the same way cqlsh does.
 [1] http://wiki.apache.org/cassandra/JmxSecurity
 [2] https://www.lullabot.com/blog/article/monitor-java-jmx
 [3] 
 http://docs.oracle.com/javase/8/docs/technotes/guides/management/agent.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8413) Bloom filter false positive ratio is not honoured

2015-06-24 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14599498#comment-14599498
 ] 

Benedict commented on CASSANDRA-8413:
-

[~snazy]: sorry for the embarrassingly slow review. I think I conflated this 
with your other bf ticket, which requires a bit more thought (on my part).

I think I would prefer if we settle on the _old_ way being inverted - perhaps 
even just called hasOldBfHashOrder, and we just swap the {{indexes[0]}} and 
{{indexes[1]}} positions in {{getHashBuckets}} - and then flip them iff we have 
the old layout. It's a small thing, but I think it is clearer if the expired 
way of doing things is considered the exceptional and extra work case.

Otherwise, can we rebase and get cassci vetting? The versioning conditions may 
need revisiting also.
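
A minimal, self-contained illustration of that suggestion: keep the new bucket order 
as the straight-through default and swap the two hash words only when the old layout 
is in effect. The names (hasOldBfHashOrder, getHashBuckets) come from the comment 
above; the structure here is an assumption, not the committed patch.
{code}
// Sketch only: the old hash order becomes the exceptional case that pays the
// extra swap; the new order is the default path.
final class BloomHashOrderSketch
{
    static long[] getHashBuckets(long hash0, long hash1, boolean hasOldBfHashOrder)
    {
        long[] indexes = { hash0, hash1 };
        if (hasOldBfHashOrder)
        {
            long tmp = indexes[0];      // flip only for filters written with the old layout
            indexes[0] = indexes[1];
            indexes[1] = tmp;
        }
        return indexes;
    }

    public static void main(String[] args)
    {
        System.out.println(java.util.Arrays.toString(getHashBuckets(1L, 2L, true))); // [2, 1]
    }
}
{code}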

 Bloom filter false positive ratio is not honoured
 -

 Key: CASSANDRA-8413
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8413
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Benedict
Assignee: Robert Stupp
 Fix For: 3.x

 Attachments: 8413-patch.txt, 8413.hack-3.0.txt, 8413.hack.txt


 Whilst thinking about CASSANDRA-7438 and hash bits, I realised we have a 
 problem with sabotaging our bloom filters when using the murmur3 partitioner. 
 I have performed a very quick test to confirm this risk is real.
 Since a typical cluster uses the same murmur3 hash for partitioning as we do 
 for bloom filter lookups, and we own a contiguous range, we can guarantee 
 that the top X bits collide for all keys on the node. This translates into 
 poor bloom filter distribution. I quickly hacked LongBloomFilterTest to 
 simulate the problem, and the result in these tests is _up to_ a doubling of 
 the actual false positive ratio. The actual change will depend on the key 
 distribution, the number of keys, the false positive ratio, the number of 
 nodes, the token distribution, etc. But seems to be a real problem for 
 non-vnode clusters of at least ~128 nodes in size.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6490) Please delete old releases from mirroring system

2015-06-24 Thread Sebb (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14599517#comment-14599517
 ] 

Sebb commented on CASSANDRA-6490:
-

The recommended way to do this is as follows:

Following a successful release vote:
- publish the release artifacts (add to dist/release/cassandra)
- publish Maven artifacts via Nexus (if relevant)
- wait a day for mirrors to catch up (most will do so in less than a day)
- update the website download page to point to the new releases; older release 
links should point to the archive server (this is published immediately)
- send the announce message
- a few days later, delete the older releases from dist/release/cassandra

That should not result in any 404s.

 Please delete old releases from mirroring system
 

 Key: CASSANDRA-6490
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6490
 Project: Cassandra
  Issue Type: Bug
 Environment: http://www.apache.org/dist/cassandra/
Reporter: Sebb
Assignee: T Jake Luciani

 To reduce the load on the ASF mirrors, projects are required to delete old 
 releases [1]
 Please can you remove all non-current releases?
 Thanks!
 [Note that older releases are always available from the ASF archive server]
 Any links to older releases on download pages should first be adjusted to 
 point to the archive server.
 [1] http://www.apache.org/dev/release.html#when-to-archive



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9528) Improve log output from unit tests

2015-06-24 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14599519#comment-14599519
 ] 

Ariel Weisberg commented on CASSANDRA-9528:
---

There is no pattern, so I don't get why only the tar.gz files are picked up.
{noformat}
${JENKINS_HOME}/automaton/bin/ctool run ${BUILD_TAG} 0 cd 
cassandra/build/test/ ; tar czf ${BUILD_TAG}_logs.tar.gz logs/ || true
{noformat}

 Improve log output from unit tests
 --

 Key: CASSANDRA-9528
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9528
 Project: Cassandra
  Issue Type: Test
Reporter: Ariel Weisberg
Assignee: Ariel Weisberg
 Fix For: 3.0 beta 1


 * Single log output file per suite
 * stdout/stderr to the same log file with proper interleaving
 * Don't interleave interactive output from unit tests run concurrently to the 
 console. Print everything about the test once the test has completed.
 * Fetch and compress log files as part of artifacts collected by cassci



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9064) [LeveledCompactionStrategy] cqlsh can't run cql produced by its own describe table statement

2015-06-24 Thread Adam Holmberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14599452#comment-14599452
 ] 

Adam Holmberg commented on CASSANDRA-9064:
--

[~blerer] it's already fixed, tested in main. Not sure if C* wants to package 
now, or wait for an official release. (?)

 [LeveledCompactionStrategy] cqlsh can't run cql produced by its own describe 
 table statement
 

 Key: CASSANDRA-9064
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9064
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: cassandra 2.1.3 on mac os x
Reporter: Sujeet Gholap
Assignee: Adam Holmberg
  Labels: cqlsh
 Fix For: 2.1.x


 Here's how to reproduce:
 1) Create a table with LeveledCompactionStrategy
 CREATE keyspace foo WITH REPLICATION = {'class': 'SimpleStrategy', 
 'replication_factor' : 3};
 CREATE TABLE foo.bar (
 spam text PRIMARY KEY
 ) WITH compaction = {'class': 'LeveledCompactionStrategy'};
 2) Describe the table and save the output
 cqlsh -e "describe table foo.bar"
 Output should be something like
 CREATE TABLE foo.bar (
 spam text PRIMARY KEY
 ) WITH bloom_filter_fp_chance = 0.1
 AND caching = '{keys:ALL, rows_per_partition:NONE}'
 AND comment = ''
 AND compaction = {'min_threshold': '4', 'class': 
 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy', 
 'max_threshold': '32'}
 AND compression = {'sstable_compression': 
 'org.apache.cassandra.io.compress.LZ4Compressor'}
 AND dclocal_read_repair_chance = 0.1
 AND default_time_to_live = 0
 AND gc_grace_seconds = 864000
 AND max_index_interval = 2048
 AND memtable_flush_period_in_ms = 0
 AND min_index_interval = 128
 AND read_repair_chance = 0.0
 AND speculative_retry = '99.0PERCENTILE';
 3) Save the output to repro.cql
 4) Drop the table foo.bar
 cqlsh -e "drop table foo.bar"
 5) Run the create table statement we saved
 cqlsh -f repro.cql
 6) Expected: normal execution without an error
 7) Reality:
 ConfigurationException: ErrorMessage code=2300 [Query invalid because of 
 configuration issue] message=Properties specified [min_threshold, 
 max_threshold] are not understood by LeveledCompactionStrategy



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9560) Changing durable_writes on a keyspace is only applied after restart of node

2015-06-24 Thread Fred A (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14599369#comment-14599369
 ] 

Fred A commented on CASSANDRA-9560:
---

Is this patch safe to apply?

 Changing durable_writes on a keyspace is only applied after restart of node
 ---

 Key: CASSANDRA-9560
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9560
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Single node
Reporter: Fred A
Assignee: Carl Yeksigian
 Fix For: 2.1.x


 When mutations for a column family are about to be applied, the cached 
 instance of the keyspace metadata is read. But the schema mutation for 
 durable_writes hasn't been applied to this cached instance.
 I'm not too familiar with the codebase but after some debugging (2.1.3), it's 
 somehow related to:
 {code:title=org.apache.cassandra.db.Mutation.java|borderStyle=solid}
 public void apply()
 {
     Keyspace ks = Keyspace.open(keyspaceName);
     // ks.metadata here is the cached copy, which never sees the durable_writes change
     ks.apply(this, ks.metadata.durableWrites);
 }
 {code}
 Where a cached instance of the keyspace is opened, but its metadata hasn't 
 been updated with the earlier applied durable_writes mutation, since it seems 
 that the cached keyspace instance is lazily built at startup and, after that, 
 never updated. I'm also a little bit concerned that other values in the cached 
 keyspace instance suffer from the same issue, e.g. replication_factor... 
 I've seen the same issue in 2.1.5 and the only way to resolve this issue is 
 to restart the node to let the keyspace instance cache reload from disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9064) [LeveledCompactionStrategy] cqlsh can't run cql produced by its own describe table statement

2015-06-24 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14599455#comment-14599455
 ] 

Benjamin Lerer commented on CASSANDRA-9064:
---

[~thobbs] What is your opinion?

 [LeveledCompactionStrategy] cqlsh can't run cql produced by its own describe 
 table statement
 

 Key: CASSANDRA-9064
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9064
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: cassandra 2.1.3 on mac os x
Reporter: Sujeet Gholap
Assignee: Adam Holmberg
  Labels: cqlsh
 Fix For: 2.1.x


 Here's how to reproduce:
 1) Create a table with LeveledCompactionStrategy
 CREATE keyspace foo WITH REPLICATION = {'class': 'SimpleStrategy', 
 'replication_factor' : 3};
 CREATE TABLE foo.bar (
 spam text PRIMARY KEY
 ) WITH compaction = {'class': 'LeveledCompactionStrategy'};
 2) Describe the table and save the output
 cqlsh -e "describe table foo.bar"
 Output should be something like
 CREATE TABLE foo.bar (
 spam text PRIMARY KEY
 ) WITH bloom_filter_fp_chance = 0.1
 AND caching = '{keys:ALL, rows_per_partition:NONE}'
 AND comment = ''
 AND compaction = {'min_threshold': '4', 'class': 
 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy', 
 'max_threshold': '32'}
 AND compression = {'sstable_compression': 
 'org.apache.cassandra.io.compress.LZ4Compressor'}
 AND dclocal_read_repair_chance = 0.1
 AND default_time_to_live = 0
 AND gc_grace_seconds = 864000
 AND max_index_interval = 2048
 AND memtable_flush_period_in_ms = 0
 AND min_index_interval = 128
 AND read_repair_chance = 0.0
 AND speculative_retry = '99.0PERCENTILE';
 3) Save the output to repro.cql
 4) Drop the table foo.bar
 cqlsh -e "drop table foo.bar"
 5) Run the create table statement we saved
 cqlsh -f repro.cql
 6) Expected: normal execution without an error
 7) Reality:
 ConfigurationException: ErrorMessage code=2300 [Query invalid because of 
 configuration issue] message=Properties specified [min_threshold, 
 max_threshold] are not understood by LeveledCompactionStrategy



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9090) Allow JMX over SSL directly from nodetool

2015-06-24 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-9090:
---
Reviewer: Jason Brown

 Allow JMX over SSL directly from nodetool
 -

 Key: CASSANDRA-9090
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9090
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Philip Thompson
 Fix For: 3.x, 2.1.x, 2.0.x

 Attachments: cassandra-2.0-9090.patch, cassandra-2.1-9090-1.patch, 
 cassandra-2.1-9090.patch, cassandra-2.2-9090.patch


 Currently cqlsh allows users to connect via SSL to their cassandra cluster 
 via command line. 
 Nodetool only offers username/password authentication [1], and if users want 
 to use SSL, they need to use jconsole [2]. We should support nodetool 
 connecting via SSL in the same way cqlsh does.
 [1] http://wiki.apache.org/cassandra/JmxSecurity
 [2] https://www.lullabot.com/blog/article/monitor-java-jmx
 [3] 
 http://docs.oracle.com/javase/8/docs/technotes/guides/management/agent.html
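For reference, a hedged sketch of what an SSL-enabled JMX connection looks like from plain Java, which is roughly what nodetool would need to do internally. Host, port, and truststore values are placeholders; this is not the attached patch.

{code}
import java.util.HashMap;
import java.util.Map;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import javax.rmi.ssl.SslRMIClientSocketFactory;

public class SslJmxConnect
{
    public static void main(String[] args) throws Exception
    {
        // Trust the server's certificate; path and password are placeholders.
        System.setProperty("javax.net.ssl.trustStore", "/path/to/truststore.jks");
        System.setProperty("javax.net.ssl.trustStorePassword", "changeit");

        Map<String, Object> env = new HashMap<>();
        // Use SSL for the RMI transport that JMX rides on.
        env.put("com.sun.jndi.rmi.factory.socket", new SslRMIClientSocketFactory());

        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url, env))
        {
            System.out.println(connector.getMBeanServerConnection().getMBeanCount());
        }
    }
}
{code}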



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9560) Changing durable_writes on a keyspace is only applied after restart of node

2015-06-24 Thread Carl Yeksigian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14599510#comment-14599510
 ] 

Carl Yeksigian commented on CASSANDRA-9560:
---

Yes, it should be safe to apply; I've tested it locally and it does what's 
expected.

 Changing durable_writes on a keyspace is only applied after restart of node
 ---

 Key: CASSANDRA-9560
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9560
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Single node
Reporter: Fred A
Assignee: Carl Yeksigian
 Fix For: 2.1.x


 When mutations for a column family are about to be applied, the cached 
 instance of the keyspace metadata is read. But the schema mutation for 
 durable_writes hasn't been applied to this cached instance.
 I'm not too familiar with the codebase, but after some debugging (2.1.3) it is 
 somehow related to:
 {code:title=org.apache.cassandra.db.Mutation.java|borderStyle=solid}
 public void apply()
 {
     Keyspace ks = Keyspace.open(keyspaceName);
     ks.apply(this, ks.metadata.durableWrites);
 }
 {code}
 Here a cached instance of the keyspace is opened, but its metadata hasn't 
 been updated with the earlier applied durable_writes mutation: it seems that 
 the cached keyspace instance is lazily built at startup and never updated 
 after that. I'm also a little bit concerned that other values in the cached 
 keyspace instance suffer from the same issue, e.g. replication_factor... 
 I've seen the same issue in 2.1.5, and the only way to resolve it is 
 to restart the node and let the keyspace instance cache reload from disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9499) Introduce writeVInt method to DataOutputStreamPlus

2015-06-24 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14599512#comment-14599512
 ] 

Ariel Weisberg commented on CASSANDRA-9499:
---

That works for me.

Thinking aloud: a lot of the pain we face comes from using streams, which are 
pretty much inevitably wrappers around ByteBuffers, instead of having an 
ecosystem around ByteBuffers. I think throughout C* we end up with the worst of 
both worlds. OHC doesn't need to do that, so it could define its key and value 
serializers to accept ByteBuffers?

You can go from a ByteBuffer to a stream pretty easily, but not the other way 
around.
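A rough illustration of the two ideas above, assuming nothing about the actual OHC or C* interfaces: a serializer shape that targets a ByteBuffer directly, plus vint encoding (LEB128-style here, not necessarily the format chosen for this ticket) done as a static helper rather than a stream method.

{code}
import java.nio.ByteBuffer;

public final class VIntExample
{
    // Hypothetical serializer shape: works on ByteBuffers, no stream wrapper.
    interface ByteBufferSerializer<T>
    {
        void serialize(T value, ByteBuffer out);
    }

    // Unsigned LEB128-style vint as a static call against a ByteBuffer.
    static void writeUnsignedVInt(long value, ByteBuffer out)
    {
        while ((value & ~0x7FL) != 0)
        {
            out.put((byte) ((value & 0x7F) | 0x80)); // more bytes follow
            value >>>= 7;
        }
        out.put((byte) value);
    }

    public static void main(String[] args)
    {
        ByteBufferSerializer<Long> longSerializer = VIntExample::writeUnsignedVInt;
        ByteBuffer buf = ByteBuffer.allocate(16);
        longSerializer.serialize(300L, buf);
        System.out.println(buf.position() + " bytes written"); // 2 bytes for 300
    }
}
{code}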

 Introduce writeVInt method to DataOutputStreamPlus
 --

 Key: CASSANDRA-9499
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9499
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Ariel Weisberg
Priority: Minor
 Fix For: 3.0 beta 1


 CASSANDRA-8099 really could do with a writeVInt method, for both fixing 
 CASSANDRA-9498 but also efficiently encoding timestamp/deletion deltas. It 
 should be possible to make an especially efficient implementation against 
 BufferedDataOutputStreamPlus.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9160) Migrate CQL dtests to unit tests

2015-06-24 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14600486#comment-14600486
 ] 

Stefania commented on CASSANDRA-9160:
-

I've opened CASSANDRA-9649 for the timestamp potential problem.

Thanks for your help, reviewing this must have been a ton of work too!

 Migrate CQL dtests to unit tests
 

 Key: CASSANDRA-9160
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9160
 Project: Cassandra
  Issue Type: Test
Reporter: Sylvain Lebresne
Assignee: Stefania

 We have CQL tests in 2 places: dtests and unit tests. The unit tests are 
 actually somewhat better in the sense that they have the ability to test both 
 prepared and unprepared statements at the flip of a switch. It's also better 
 to have all those tests in the same place so we can improve the test 
 framework in only one place (CASSANDRA-7959, CASSANDRA-9159, etc...). So we 
 should move the CQL dtests to the unit tests (which will be a good occasion 
 to organize them better).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8894) Our default buffer size for (uncompressed) buffered reads should be smaller, and based on the expected record size

2015-06-24 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14600380#comment-14600380
 ] 

Stefania commented on CASSANDRA-8894:
-

Thanks, {{estimatedRowSize}} has been renamed to {{estimatedPartitionSize}} in 
CASSANDRA-9448, just not committed yet.

 Our default buffer size for (uncompressed) buffered reads should be smaller, 
 and based on the expected record size
 --

 Key: CASSANDRA-8894
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8894
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Stefania
 Fix For: 3.x


 A large contributor to slower buffered reads than mmapped is likely that we 
 read a full 64Kb at once, when average record sizes may be as low as 140 
 bytes on our stress tests. The TLB has only 128 entries on a modern core, and 
 each read will touch 32 of these, meaning we are unlikely to almost ever be 
 hitting the TLB, and will be incurring at least 30 unnecessary misses each 
 time (as well as the other costs of larger than necessary accesses). When 
 working with an SSD there is little to no benefit reading more than 4Kb at 
 once, and in either case reading more data than we need is wasteful. So, I 
 propose selecting a buffer size that is the next larger power of 2 than our 
 average record size (with a minimum of 4Kb), so that we expect to read in one 
 operation. I also propose that we create a pool of these buffers up-front, 
 and that we ensure they are all exactly aligned to a virtual page, so that 
 the source and target operations each touch exactly one virtual page per 4Kb 
 of expected record size.
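A minimal sketch of the sizing rule being proposed here (round the average record size up to the next power of two, with a 4KB floor); the method name and placement are illustrative, not the actual patch.

{code}
public final class ReadBufferSize
{
    // Next power of two >= averageRecordSize, never below 4KB.
    static int chooseBufferSize(long averageRecordSize)
    {
        int min = 4096;
        if (averageRecordSize <= min)
            return min;
        return Integer.highestOneBit((int) (averageRecordSize - 1)) << 1;
    }

    public static void main(String[] args)
    {
        System.out.println(chooseBufferSize(140));    // 4096
        System.out.println(chooseBufferSize(5000));   // 8192
        System.out.println(chooseBufferSize(65536));  // 65536
    }
}
{code}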



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9649) Paxos ballot in StorageProxy could clash

2015-06-24 Thread Stefania (JIRA)
Stefania created CASSANDRA-9649:
---

 Summary: Paxos ballot in StorageProxy could clash
 Key: CASSANDRA-9649
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9649
 Project: Cassandra
  Issue Type: Bug
Reporter: Stefania
Assignee: Stefania
Priority: Minor


This code in {{StorageProxy.beginAndRepairPaxos()}} takes a timestamp in 
microseconds but divides it by 1000 before adding one. So if the summary is 
null, ballotMillis would be the same for up to 1000 possible state timestamp 
values:

{code}
long currentTime = (state.getTimestamp() / 1000) + 1;
long ballotMillis = summary == null
 ? currentTime
 : Math.max(currentTime, 1 +
UUIDGen.unixTimestamp(summary.mostRecentInProgressCommit.ballot));
UUID ballot = UUIDGen.getTimeUUID(ballotMillis);
{code}

{{state.getTimestamp()}} returns the time in microseconds, and it ensures that 
one microsecond is added to any previously used timestamp if the client sends 
the same or an older timestamp. 

Initially I used this code in {{ModificationStatement.casInternal()}}, 
introduced by CASSANDRA-9160 to support cas unit tests, but occasionally these 
tests were failing. It was only when I ensured uniqueness of the ballot that 
the tests started to pass reliably.

I wonder if we could ever have the same issue in StorageProxy?
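A small sketch of the collision (timestamps are arbitrary): two distinct microsecond timestamps land on the same millisecond, so a ballot built from ballotMillis alone is not guaranteed unique unless uniqueness is enforced separately.

{code}
public final class BallotClashDemo
{
    public static void main(String[] args)
    {
        long microsA = 1_435_000_000_123_000L;      // arbitrary microsecond timestamp
        long microsB = microsA + 999;               // 999 micros later, same millisecond

        long ballotMillisA = (microsA / 1000) + 1;  // same arithmetic as beginAndRepairPaxos()
        long ballotMillisB = (microsB / 1000) + 1;

        // Both map to the same millisecond, so a time UUID seeded only from
        // ballotMillis would not distinguish these two states.
        System.out.println(ballotMillisA == ballotMillisB); // true
    }
}
{code}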

cc [~jbellis] and [~slebresne] for CASSANDRA-7801



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8894) Our default buffer size for (uncompressed) buffered reads should be smaller, and based on the expected record size

2015-06-24 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14600585#comment-14600585
 ] 

Stefania commented on CASSANDRA-8894:
-

I pushed a commit to fix the rounding, thanks.

I've run a couple of cperf tests:

- [one|http://cstar.datastax.com/tests/id/27ca75a2-1ae0-11e5-bede-42010af0688f] 
comparing trunk and 8894 both with disk_access_mode standard, here we can see 
8894 is better when reading (~6000 ops per second).
- [one|http://cstar.datastax.com/tests/id/e1d62922-1ae1-11e5-9038-42010af0688f] 
comparing trunk mmap and 8894 standard, here we can see trunk is still better 
than 8894 when reading (~4000 ops per second).

CI is still running.

 Our default buffer size for (uncompressed) buffered reads should be smaller, 
 and based on the expected record size
 --

 Key: CASSANDRA-8894
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8894
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Stefania
 Fix For: 3.x


 A large contributor to slower buffered reads than mmapped is likely that we 
 read a full 64Kb at once, when average record sizes may be as low as 140 
 bytes on our stress tests. The TLB has only 128 entries on a modern core, and 
 each read will touch 32 of these, meaning we are unlikely to almost ever be 
 hitting the TLB, and will be incurring at least 30 unnecessary misses each 
 time (as well as the other costs of larger than necessary accesses). When 
 working with an SSD there is little to no benefit reading more than 4Kb at 
 once, and in either case reading more data than we need is wasteful. So, I 
 propose selecting a buffer size that is the next larger power of 2 than our 
 average record size (with a minimum of 4Kb), so that we expect to read in one 
 operation. I also propose that we create a pool of these buffers up-front, 
 and that we ensure they are all exactly aligned to a virtual page, so that 
 the source and target operations each touch exactly one virtual page per 4Kb 
 of expected record size.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9643) Warn when an extra-large partition is compacted

2015-06-24 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-9643:
--
Description: 
We used to log a warning when compacting an extra-large partition 
(CompactionController.getCompactedRow) but took that out as part of 
CASSANDRA-6142.  Let's add back a warning.  (Perhaps in SSTW.append, since 
MetadataCollector.update doesn't know the partition key and we will want to 
include that in the log.)

Size threshold should be configurable in cassandra.yaml and default to 100MB.

  was:
We used to log a warning when compacting an extra-large row 
(CompactionController.getCompactedRow) but took that out as part of 
CASSANDRA-6142.  Let's add back a warning.  (Perhaps in SSTW.append, since 
MetadataCollector.update doesn't know the partition key and we will want to 
include that in the log.)

Row size should be configurable in cassandra.yaml and default to 100MB.


 Warn when an extra-large partition is compacted
 ---

 Key: CASSANDRA-9643
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9643
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
Assignee: Stefania
 Fix For: 2.1.x


 We used to log a warning when compacting an extra-large partition 
 (CompactionController.getCompactedRow) but took that out as part of 
 CASSANDRA-6142.  Let's add back a warning.  (Perhaps in SSTW.append, since 
 MetadataCollector.update doesn't know the partition key and we will want to 
 include that in the log.)
 Size threshold should be configurable in cassandra.yaml and default to 100MB.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9643) Warn when an extra-large partition is compacted

2015-06-24 Thread Erick Ramirez (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14600543#comment-14600543
 ] 

Erick Ramirez commented on CASSANDRA-9643:
--

The previous message format included the offending partition key (yep, that's a 
43GB partition in production):

{noformat}
INFO CompactionController Compacting large row acmeKS/veryWideTable:key5678 
(45262476248 bytes) incrementally
{noformat}

It would be great if we could maintain this format since it makes it easy to 
identify the table and partition without having to trawl through the cfstats 
output.
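To make that concrete, a hedged sketch of what a restored warning could look like from the write path (the description above suggests SSTW.append), keeping the keyspace/table:key shape of the old message. The threshold field and method names are assumptions, not the committed patch.

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class LargePartitionWarning
{
    private static final Logger logger = LoggerFactory.getLogger(LargePartitionWarning.class);

    // Assumed to come from cassandra.yaml; 100MB default as suggested in the description.
    static long warnThresholdBytes = 100L * 1024 * 1024;

    // keyspace/table/key passed in by the caller (the append path would know all three).
    static void maybeWarn(String keyspace, String table, String key, long partitionSizeInBytes)
    {
        if (partitionSizeInBytes > warnThresholdBytes)
            logger.warn("Compacting large partition {}/{}:{} ({} bytes)",
                        keyspace, table, key, partitionSizeInBytes);
    }

    public static void main(String[] args)
    {
        maybeWarn("acmeKS", "veryWideTable", "key5678", 45_262_476_248L);
    }
}
{code}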

 Warn when an extra-large partition is compacted
 ---

 Key: CASSANDRA-9643
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9643
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
Assignee: Stefania
 Fix For: 2.1.x


 We used to log a warning when compacting an extra-large row 
 (CompactionController.getCompactedRow) but took that out as part of 
 CASSANDRA-6142.  Let's add back a warning.  (Perhaps in SSTW.append, since 
 MetadataCollector.update doesn't know the partition key and we will want to 
 include that in the log.)
 Row size should be configurable in cassandra.yaml and default to 100MB.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9499) Introduce writeVInt method to DataOutputStreamPlus

2015-06-24 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14599534#comment-14599534
 ] 

Benedict edited comment on CASSANDRA-9499 at 6/24/15 3:05 PM:
--

Obviously this is out-of-scope, but the only real problem a stream solves is 
ensuring there is always enough room. That and being easily passed to JDK 
tools, and supporting functionality like vint encoding. However we could 
certainly explore a lightweight stream API that exposes the BB and just has 
ensureRemaining() exposed (plus WritableByteChannel methods for calls that may 
be too large for a buffer). vint coding can be done via static method calls. 
*if* it's workable, I would be strongly in favour of this, as right now the 
method invocation costs for writing/reading streams are really significant. It 
isn't a small undertaking, though. But nor is it that huge. 

It is definitely something we should explore, to see how viable it is. In a 
follow-up ticket, of course :)


was (Author: benedict):
Obviously this is out-of-scope, but the only real problem a stream solves is 
ensuring there is always enough room. That and being easily passed to JDK 
tools, and supporting functionality like vint encoding. However we could 
certainly explore a lightweight stream API that exposes the BB and just has 
ensureRemaining() exposed. vint coding can be done via static method calls. 
*if* it's workable, I would be strongly in favour of this, as right now the 
method invocation costs for writing/reading streams are really significant. It 
isn't a small undertaking, though. But nor is it that huge. 

It is definitely something we should explore, to see how viable it is. In a 
follow-up ticket, of course :)

 Introduce writeVInt method to DataOutputStreamPlus
 --

 Key: CASSANDRA-9499
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9499
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Ariel Weisberg
Priority: Minor
 Fix For: 3.0 beta 1


 CASSANDRA-8099 really could do with a writeVInt method, for both fixing 
 CASSANDRA-9498 but also efficiently encoding timestamp/deletion deltas. It 
 should be possible to make an especially efficient implementation against 
 BufferedDataOutputStreamPlus.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9560) Changing durable_writes on a keyspace is only applied after restart of node

2015-06-24 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14599567#comment-14599567
 ] 

Aleksey Yeschenko commented on CASSANDRA-9560:
--

References to {{Keyspace}} are hoisted all over the place (most obviously in 
{{ColumnFamilyStore}}), so unfortunately it's not possible to just swap the 
{{Keyspace}} instance with an updated one. We must update the reference to 
{{KSMetaData}} in {{Keyspace}} in place.

Also, a patch for 2.0 would be handy - we'll likely have one more release, and 
this is a relatively serious issue.
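To make the suggestion concrete, a hedged sketch of an in-place update; the method name, call site, and field handling are illustrative only, not the actual patch.

{code}
// In Keyspace: let the schema-change path swap the metadata reference that
// Mutation.apply() reads, instead of relying on the lazily built instance.
// Assumes the 'metadata' field is made volatile (today it is effectively read-only).
public void setMetadata(KSMetaData updated)
{
    this.metadata = updated;
}

// In the schema-change code, after the new KSMetaData has been built
// (getKeyspaceInstance returning null means the keyspace was never opened):
Keyspace cached = Schema.instance.getKeyspaceInstance(ksm.name);
if (cached != null)
    cached.setMetadata(ksm);
{code}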

 Changing durable_writes on a keyspace is only applied after restart of node
 ---

 Key: CASSANDRA-9560
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9560
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Single node
Reporter: Fred A
Assignee: Carl Yeksigian
 Fix For: 2.1.x


 When mutations for a column family are about to be applied, the cached 
 instance of the keyspace metadata is read. But the schema mutation for 
 durable_writes hasn't been applied to this cached instance.
 I'm not too familiar with the codebase, but after some debugging (2.1.3) it is 
 somehow related to:
 {code:title=org.apache.cassandra.db.Mutation.java|borderStyle=solid}
 public void apply()
 {
     Keyspace ks = Keyspace.open(keyspaceName);
     ks.apply(this, ks.metadata.durableWrites);
 }
 {code}
 Here a cached instance of the keyspace is opened, but its metadata hasn't 
 been updated with the earlier applied durable_writes mutation: it seems that 
 the cached keyspace instance is lazily built at startup and never updated 
 after that. I'm also a little bit concerned that other values in the cached 
 keyspace instance suffer from the same issue, e.g. replication_factor... 
 I've seen the same issue in 2.1.5, and the only way to resolve it is 
 to restart the node and let the keyspace instance cache reload from disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9560) Changing durable_writes on a keyspace is only applied after restart of node

2015-06-24 Thread Fred A (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14599587#comment-14599587
 ] 

Fred A commented on CASSANDRA-9560:
---

Could this also affect updating the replication factor? It seems like changing 
replication factor somehow works anyway.

 Changing durable_writes on a keyspace is only applied after restart of node
 ---

 Key: CASSANDRA-9560
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9560
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Single node
Reporter: Fred A
Assignee: Carl Yeksigian
 Fix For: 2.1.x


 When mutations for a column family are about to be applied, the cached 
 instance of the keyspace metadata is read. But the schema mutation for 
 durable_writes hasn't been applied to this cached instance.
 I'm not too familiar with the codebase, but after some debugging (2.1.3) it is 
 somehow related to:
 {code:title=org.apache.cassandra.db.Mutation.java|borderStyle=solid}
 public void apply()
 {
     Keyspace ks = Keyspace.open(keyspaceName);
     ks.apply(this, ks.metadata.durableWrites);
 }
 {code}
 Here a cached instance of the keyspace is opened, but its metadata hasn't 
 been updated with the earlier applied durable_writes mutation: it seems that 
 the cached keyspace instance is lazily built at startup and never updated 
 after that. I'm also a little bit concerned that other values in the cached 
 keyspace instance suffer from the same issue, e.g. replication_factor... 
 I've seen the same issue in 2.1.5, and the only way to resolve it is 
 to restart the node and let the keyspace instance cache reload from disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: rollback short-circuit logical expression change

2015-06-24 Thread benedict
Repository: cassandra
Updated Branches:
  refs/heads/trunk cb062839f -> 7392fb96d


rollback short-circuit logical expression change


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7392fb96
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7392fb96
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7392fb96

Branch: refs/heads/trunk
Commit: 7392fb96dbe4d1d9d70a691b721c1bb7b359dd78
Parents: cb06283
Author: Benedict Elliott Smith bened...@apache.org
Authored: Wed Jun 24 16:07:28 2015 +0100
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Wed Jun 24 16:07:28 2015 +0100

--
 .../apache/cassandra/io/util/UnbufferedDataOutputStreamPlus.java   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7392fb96/src/java/org/apache/cassandra/io/util/UnbufferedDataOutputStreamPlus.java
--
diff --git 
a/src/java/org/apache/cassandra/io/util/UnbufferedDataOutputStreamPlus.java 
b/src/java/org/apache/cassandra/io/util/UnbufferedDataOutputStreamPlus.java
index 8f2cee0..9137ba2 100644
--- a/src/java/org/apache/cassandra/io/util/UnbufferedDataOutputStreamPlus.java
+++ b/src/java/org/apache/cassandra/io/util/UnbufferedDataOutputStreamPlus.java
@@ -258,7 +258,7 @@ public abstract class UnbufferedDataOutputStreamPlus extends DataOutputStreamPlus
         for (int i = 0 ; i < length ; i++)
         {
             int ch = str.charAt(i);
-            if ((ch > 0) && (ch <= 127))
+            if ((ch > 0) & (ch <= 127))
                 utfCount += 1;
             else if (ch <= 2047)
                 utfCount += 2;



[jira] [Commented] (CASSANDRA-9499) Introduce writeVInt method to DataOutputStreamPlus

2015-06-24 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14599549#comment-14599549
 ] 

Benedict commented on CASSANDRA-9499:
-

bq. I won't object to getting to fix them 

If they're existing failures, don't worry. It just looked like they were 
specific to the branches including -madness, as none of the other branches of 
yours had so many failures, but both branches with -madness did.

 Introduce writeVInt method to DataOutputStreamPlus
 --

 Key: CASSANDRA-9499
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9499
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Ariel Weisberg
Priority: Minor
 Fix For: 3.0 beta 1


 CASSANDRA-8099 really could do with a writeVInt method, for both fixing 
 CASSANDRA-9498 but also efficiently encoding timestamp/deletion deltas. It 
 should be possible to make an especially efficient implementation against 
 BufferedDataOutputStreamPlus.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9090) Allow JMX over SSL directly from nodetool

2015-06-24 Thread Marcus Olsson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Olsson updated CASSANDRA-9090:
-
Attachment: cassandra-2.2-9090.patch
cassandra-2.0-9090.patch
cassandra-2.1-9090-1.patch

Removed an unused import from the original patch and created separate patches 
for 2.0 and 2.2 as well. The 2.2 patch should apply cleanly to trunk.

 Allow JMX over SSL directly from nodetool
 -

 Key: CASSANDRA-9090
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9090
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Philip Thompson
 Fix For: 3.x, 2.1.x, 2.0.x

 Attachments: cassandra-2.0-9090.patch, cassandra-2.1-9090-1.patch, 
 cassandra-2.1-9090.patch, cassandra-2.2-9090.patch


 Currently cqlsh allows users to connect via SSL to their cassandra cluster 
 via command line. 
 Nodetool only offers username/password authentication [1], and if users want 
 to use SSL, they need to use jconsole [2]. We should support nodetool 
 connecting via SSL in the same way cqlsh does.
 [1] http://wiki.apache.org/cassandra/JmxSecurity
 [2] https://www.lullabot.com/blog/article/monitor-java-jmx
 [3] 
 http://docs.oracle.com/javase/8/docs/technotes/guides/management/agent.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9499) Introduce writeVInt method to DataOutputStreamPlus

2015-06-24 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14599534#comment-14599534
 ] 

Benedict commented on CASSANDRA-9499:
-

Obviously this is out-of-scope, but the only real problem a stream solves is 
ensuring there is always enough room. That and being easily passed to JDK 
tools, and supporting functionality like vint encoding. However we could 
certainly explore a lightweight stream API that exposes the BB and just has 
ensureRemaining() exposed. vint coding can be done via static method calls. 
*if* it's workable, I would be strongly in favour of this, as right now the 
method invocation costs for writing/reading streams are really significant. It 
isn't a small undertaking, though. But nor is it that huge. 

It is definitely something we should explore, to see how viable it is. In a 
follow-up ticket, of course :)
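A tiny sketch of the shape being described, under the assumption that callers work against the buffer directly and only call back to flush or grow it; this is not an existing C* API.

{code}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.WritableByteChannel;

// Minimal "lightweight stream": expose the ByteBuffer, plus ensureRemaining()
// so callers can write to it directly with static helpers (e.g. vint encoders).
public final class BufferWriter
{
    private final ByteBuffer buffer;
    private final WritableByteChannel channel;

    public BufferWriter(int size, WritableByteChannel channel)
    {
        this.buffer = ByteBuffer.allocate(size);
        this.channel = channel;
    }

    public ByteBuffer buffer()
    {
        return buffer;
    }

    // Flush so at least 'bytes' can be written without per-write checks.
    // (A real version would also handle writes larger than the whole buffer.)
    public void ensureRemaining(int bytes) throws IOException
    {
        if (buffer.remaining() < bytes)
        {
            buffer.flip();
            while (buffer.hasRemaining())
                channel.write(buffer);
            buffer.clear();
        }
    }
}
{code}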

 Introduce writeVInt method to DataOutputStreamPlus
 --

 Key: CASSANDRA-9499
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9499
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Ariel Weisberg
Priority: Minor
 Fix For: 3.0 beta 1


 CASSANDRA-8099 really could do with a writeVInt method, for both fixing 
 CASSANDRA-9498 but also efficiently encoding timestamp/deletion deltas. It 
 should be possible to make an especially efficient implementation against 
 BufferedDataOutputStreamPlus.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9607) Get high load after upgrading from 2.1.3 to cassandra 2.1.6

2015-06-24 Thread Study Hsueh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14600666#comment-14600666
 ] 

Study Hsueh edited comment on CASSANDRA-9607 at 6/25/15 4:37 AM:
-

My colleague has repeated the query in 2.1.3 again, and the cluster went down 
again. So the root cause should be the query.


was (Author: study):
My colleague have repeated the query in 2.1.3 again, and the cluster went down 
again. So the root cause should be the query.

 Get high load after upgrading from 2.1.3 to cassandra 2.1.6
 ---

 Key: CASSANDRA-9607
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9607
 Project: Cassandra
  Issue Type: Bug
 Environment: OS: 
 CentOS 6 * 4
 Ubuntu 14.04 * 2
 JDK: Oracle JDK 7, Oracle JDK 8
 VM: Azure VM Standard A3 * 6
 RAM: 7 GB
 Cores: 4
Reporter: Study Hsueh
Assignee: Tyler Hobbs
Priority: Critical
 Fix For: 2.1.x, 2.2.x

 Attachments: GC_state.png, cassandra.yaml, client_blocked_thread.png, 
 cpu_profile.png, dump.tdump, load.png, log.zip, schema.zip, vm_monitor.png


 After upgrading cassandra version from 2.1.3 to 2.1.6, the average load of my 
 cassandra cluster grows from 0.x~1.x to 3.x~6.x. 
 What kind of additional information should I provide for this problem?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6434) Repair-aware gc grace period

2015-06-24 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14600708#comment-14600708
 ] 

Marcus Eriksson commented on CASSANDRA-6434:


working on it - on top of CASSANDRA-8099, so mostly trying to digest that code 
so far

 Repair-aware gc grace period 
 -

 Key: CASSANDRA-6434
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6434
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: sankalp kohli
Assignee: Marcus Eriksson
 Fix For: 3.0 beta 1


 Since the reason for gcgs is to ensure that we don't purge tombstones until 
 every replica has been notified, it's redundant in a world where we're 
 tracking repair times per sstable (and repairing frequently), i.e., a world 
 where we default to incremental repair a la CASSANDRA-5351.
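As a sketch of the idea only (units and field names abstracted away, not the eventual implementation): a tombstone needs to survive until the data it shadows is known to have been repaired, rather than for a fixed gc_grace_seconds.

{code}
public final class RepairAwarePurge
{
    // repairedAt == 0 means the sstable has never been incrementally repaired.
    static boolean tombstonePurgeable(long tombstoneDeletionTime, long sstableRepairedAt)
    {
        return sstableRepairedAt != 0 && tombstoneDeletionTime < sstableRepairedAt;
    }

    public static void main(String[] args)
    {
        System.out.println(tombstonePurgeable(1_000, 0));      // false: never repaired
        System.out.println(tombstonePurgeable(1_000, 2_000));  // true: repaired after the delete
        System.out.println(tombstonePurgeable(3_000, 2_000));  // false: deleted after last repair
    }
}
{code}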



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6434) Repair-aware gc grace period

2015-06-24 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14600705#comment-14600705
 ] 

sankalp kohli commented on CASSANDRA-6434:
--

[~krummas]  Any updates on this? 

 Repair-aware gc grace period 
 -

 Key: CASSANDRA-6434
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6434
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: sankalp kohli
Assignee: Marcus Eriksson
 Fix For: 3.0 beta 1


 Since the reason for gcgs is to ensure that we don't purge tombstones until 
 every replica has been notified, it's redundant in a world where we're 
 tracking repair times per sstable (and repairing frequently), i.e., a world 
 where we default to incremental repair a la CASSANDRA-5351.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9607) Get high load after upgrading from 2.1.3 to cassandra 2.1.6

2015-06-24 Thread Study Hsueh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14600657#comment-14600657
 ] 

Study Hsueh commented on CASSANDRA-9607:


My colleague said he performed the following queries before the whole cluster 
went down.
1. SELECT * FROM ginger.supply_ad_log -> timeout
2. SELECT rmaxSpaceId FROM Supply_Ad_Log WHERE rmaxSpaceId = ? AND salesChannel 
= ? AND hourstamp = ? -> the query hung and the cluster did not respond

The client tool was gocql and he did not specify `pagesize` (default: 0).
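For context, a hedged sketch with the Java driver (the report is about gocql with no pagesize set): setting an explicit fetch size lets the coordinator return a full-table scan in pages rather than materialising it in one response. Contact point and fetch size here are illustrative.

{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

public class PagedScan
{
    public static void main(String[] args)
    {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("ginger"))
        {
            Statement scan = new SimpleStatement("SELECT * FROM supply_ad_log");
            scan.setFetchSize(5000); // page size; an unpaged full scan is far more likely to time out
            for (Row row : session.execute(scan))
            {
                System.out.println(row); // paging happens transparently as the iterator advances
            }
        }
    }
}
{code}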

 Get high load after upgrading from 2.1.3 to cassandra 2.1.6
 ---

 Key: CASSANDRA-9607
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9607
 Project: Cassandra
  Issue Type: Bug
 Environment: OS: 
 CentOS 6 * 4
 Ubuntu 14.04 * 2
 JDK: Oracle JDK 7, Oracle JDK 8
 VM: Azure VM Standard A3 * 6
 RAM: 7 GB
 Cores: 4
Reporter: Study Hsueh
Assignee: Tyler Hobbs
Priority: Critical
 Fix For: 2.1.x, 2.2.x

 Attachments: GC_state.png, cassandra.yaml, client_blocked_thread.png, 
 cpu_profile.png, dump.tdump, load.png, log.zip, schema.zip, vm_monitor.png


 After upgrading cassandra version from 2.1.3 to 2.1.6, the average load of my 
 cassandra cluster grows from 0.x~1.x to 3.x~6.x. 
 What kind of additional information should I provide for this problem?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[15/31] cassandra git commit: 2.2 commit for CASSANDRA-9160

2015-06-24 Thread jmckenzie
http://git-wip-us.apache.org/repos/asf/cassandra/blob/01115f72/test/unit/org/apache/cassandra/cql3/validation/operations/AggregationTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/operations/AggregationTest.java
 
b/test/unit/org/apache/cassandra/cql3/validation/operations/AggregationTest.java
new file mode 100644
index 000..3f6fdda
--- /dev/null
+++ 
b/test/unit/org/apache/cassandra/cql3/validation/operations/AggregationTest.java
@@ -0,0 +1,1481 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * License); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.cql3.validation.operations;
+
+import java.math.BigDecimal;
+import java.text.SimpleDateFormat;
+import java.util.Calendar;
+import java.util.Date;
+import java.util.TimeZone;
+
+import org.apache.commons.lang3.time.DateUtils;
+import org.junit.Assert;
+import org.junit.Test;
+
+import org.apache.cassandra.cql3.QueryProcessor;
+import org.apache.cassandra.cql3.functions.Functions;
+import org.apache.cassandra.cql3.functions.UDAggregate;
+import org.apache.cassandra.cql3.CQLTester;
+import org.apache.cassandra.exceptions.FunctionExecutionException;
+import org.apache.cassandra.exceptions.InvalidRequestException;
+import org.apache.cassandra.serializers.Int32Serializer;
+import org.apache.cassandra.service.ClientState;
+import org.apache.cassandra.transport.Event;
+import org.apache.cassandra.transport.messages.ResultMessage;
+
+public class AggregationTest extends CQLTester
+{
+@Test
+public void testFunctions() throws Throwable
+{
+createTable(CREATE TABLE %s (a int, b int, c double, d decimal, 
primary key (a, b)));
+
+// Test with empty table
+assertColumnNames(execute(SELECT COUNT(*) FROM %s), count);
+assertRows(execute(SELECT COUNT(*) FROM %s), row(0L));
+assertColumnNames(execute(SELECT max(b), min(b), sum(b), avg(b) , 
max(c), sum(c), avg(c), sum(d), avg(d) FROM %s),
+  system.max(b), system.min(b), system.sum(b), 
system.avg(b), system.max(c), system.sum(c), system.avg(c), 
system.sum(d), system.avg(d));
+assertRows(execute(SELECT max(b), min(b), sum(b), avg(b) , max(c), 
sum(c), avg(c), sum(d), avg(d) FROM %s),
+   row(null, null, 0, 0, null, 0.0, 0.0, new BigDecimal(0), 
new BigDecimal(0)));
+
+execute(INSERT INTO %s (a, b, c, d) VALUES (1, 1, 11.5, 11.5));
+execute(INSERT INTO %s (a, b, c, d) VALUES (1, 2, 9.5, 1.5));
+execute(INSERT INTO %s (a, b, c, d) VALUES (1, 3, 9.0, 2.0));
+
+assertRows(execute(SELECT max(b), min(b), sum(b), avg(b) , max(c), 
sum(c), avg(c), sum(d), avg(d) FROM %s),
+   row(3, 1, 6, 2, 11.5, 30.0, 10.0, new BigDecimal(15.0), 
new BigDecimal(5.0)));
+
+execute(INSERT INTO %s (a, b, d) VALUES (1, 5, 1.0));
+assertRows(execute(SELECT COUNT(*) FROM %s), row(4L));
+assertRows(execute(SELECT COUNT(1) FROM %s), row(4L));
+assertRows(execute(SELECT COUNT(b), count(c) FROM %s), row(4L, 3L));
+}
+
+@Test
+public void testFunctionsWithCompactStorage() throws Throwable
+{
+createTable(CREATE TABLE %s (a int , b int, c double, primary key(a, 
b) ) WITH COMPACT STORAGE);
+
+execute(INSERT INTO %s (a, b, c) VALUES (1, 1, 11.5));
+execute(INSERT INTO %s (a, b, c) VALUES (1, 2, 9.5));
+execute(INSERT INTO %s (a, b, c) VALUES (1, 3, 9.0));
+
+assertRows(execute(SELECT max(b), min(b), sum(b), avg(b) , max(c), 
sum(c), avg(c) FROM %s),
+   row(3, 1, 6, 2, 11.5, 30.0, 10.0));
+
+assertRows(execute(SELECT COUNT(*) FROM %s), row(3L));
+assertRows(execute(SELECT COUNT(1) FROM %s), row(3L));
+assertRows(execute(SELECT COUNT(*) FROM %s WHERE a = 1 AND b  1), 
row(2L));
+assertRows(execute(SELECT COUNT(1) FROM %s WHERE a = 1 AND b  1), 
row(2L));
+assertRows(execute(SELECT max(b), min(b), sum(b), avg(b) , max(c), 
sum(c), avg(c) FROM %s WHERE a = 1 AND b  1),
+   row(3, 2, 5, 2, 9.5, 18.5, 9.25));
+}
+
+@Test
+public void testInvalidCalls() throws Throwable
+{
+createTable(CREATE TABLE %s (a int, b 

[19/31] cassandra git commit: 2.2 commit for CASSANDRA-9160

2015-06-24 Thread jmckenzie
http://git-wip-us.apache.org/repos/asf/cassandra/blob/01115f72/test/unit/org/apache/cassandra/cql3/validation/entities/SecondaryIndexTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/entities/SecondaryIndexTest.java
 
b/test/unit/org/apache/cassandra/cql3/validation/entities/SecondaryIndexTest.java
new file mode 100644
index 000..0b812c6
--- /dev/null
+++ 
b/test/unit/org/apache/cassandra/cql3/validation/entities/SecondaryIndexTest.java
@@ -0,0 +1,645 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * License); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.cql3.validation.entities;
+
+import java.nio.ByteBuffer;
+import java.util.HashMap;
+import java.util.Locale;
+import java.util.Map;
+import java.util.UUID;
+
+import org.apache.commons.lang3.StringUtils;
+
+import org.junit.Test;
+
+import org.apache.cassandra.cql3.CQLTester;
+import org.apache.cassandra.exceptions.ConfigurationException;
+import org.apache.cassandra.exceptions.SyntaxException;
+import org.apache.cassandra.utils.FBUtilities;
+
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+public class SecondaryIndexTest extends CQLTester
+{
+private static final int TOO_BIG = 1024 * 65;
+
+@Test
+public void testCreateAndDropIndex() throws Throwable
+{
+testCreateAndDropIndex(test, false);
+testCreateAndDropIndex(test2, true);
+}
+
+@Test
+public void testCreateAndDropIndexWithQuotedIdentifier() throws Throwable
+{
+testCreateAndDropIndex(\quoted_ident\, false);
+testCreateAndDropIndex(\quoted_ident2\, true);
+}
+
+@Test
+public void testCreateAndDropIndexWithCamelCaseIdentifier() throws 
Throwable
+{
+testCreateAndDropIndex(CamelCase, false);
+testCreateAndDropIndex(CamelCase2, true);
+}
+
+/**
+ * Test creating and dropping an index with the specified name.
+ *
+ * @param indexName the index name
+ * @param addKeyspaceOnDrop add the keyspace name in the drop statement
+ * @throws Throwable if an error occurs
+ */
+private void testCreateAndDropIndex(String indexName, boolean 
addKeyspaceOnDrop) throws Throwable
+{
+execute(USE system);
+assertInvalidMessage(Index ' + 
removeQuotes(indexName.toLowerCase(Locale.US)) + ' could not be found, DROP 
INDEX  + indexName + ;);
+
+createTable(CREATE TABLE %s (a int primary key, b int););
+createIndex(CREATE INDEX  + indexName +  ON %s(b););
+createIndex(CREATE INDEX IF NOT EXISTS  + indexName +  ON %s(b););
+
+assertInvalidMessage(Index already exists, CREATE INDEX  + 
indexName +  ON %s(b));
+
+execute(INSERT INTO %s (a, b) values (?, ?);, 0, 0);
+execute(INSERT INTO %s (a, b) values (?, ?);, 1, 1);
+execute(INSERT INTO %s (a, b) values (?, ?);, 2, 2);
+execute(INSERT INTO %s (a, b) values (?, ?);, 3, 1);
+
+assertRows(execute(SELECT * FROM %s where b = ?, 1), row(1, 1), 
row(3, 1));
+assertInvalidMessage(Index ' + 
removeQuotes(indexName.toLowerCase(Locale.US)) + ' could not be found in any 
of the tables of keyspace 'system',
+ DROP INDEX  + indexName);
+
+if (addKeyspaceOnDrop)
+{
+dropIndex(DROP INDEX  + KEYSPACE + . + indexName);
+}
+else
+{
+execute(USE  + KEYSPACE);
+execute(DROP INDEX  + indexName);
+}
+
+assertInvalidMessage(No secondary indexes on the restricted columns 
support the provided operators,
+ SELECT * FROM %s where b = ?, 1);
+dropIndex(DROP INDEX IF EXISTS  + indexName);
+assertInvalidMessage(Index ' + 
removeQuotes(indexName.toLowerCase(Locale.US)) + ' could not be found, DROP 
INDEX  + indexName);
+}
+
+/**
+ * Removes the quotes from the specified index name.
+ *
+ * @param indexName the index name from which the quotes must be removed.
+ * @return the unquoted index name.
+ */
+private static String removeQuotes(String indexName)
+{
+return StringUtils.remove(indexName, '\');
+

[1/9] cassandra git commit: Migrate CQL tests from dtest to unit tests

2015-06-24 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 3caf0e029 -> f797bfa4d


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f797bfa4/test/unit/org/apache/cassandra/cql3/validation/operations/SelectOrderedPartitionerTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/operations/SelectOrderedPartitionerTest.java
 
b/test/unit/org/apache/cassandra/cql3/validation/operations/SelectOrderedPartitionerTest.java
new file mode 100644
index 000..a2ba71e
--- /dev/null
+++ 
b/test/unit/org/apache/cassandra/cql3/validation/operations/SelectOrderedPartitionerTest.java
@@ -0,0 +1,325 @@
+package org.apache.cassandra.cql3.validation.operations;
+
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+import static junit.framework.Assert.assertNull;
+import static org.junit.Assert.assertEquals;
+import org.apache.cassandra.config.DatabaseDescriptor;
+import org.apache.cassandra.cql3.CQLTester;
+import org.apache.cassandra.dht.ByteOrderedPartitioner;
+
+/**
+ * SELECT statement tests that require a ByteOrderedPartitioner
+ */
+public class SelectOrderedPartitionerTest extends CQLTester
+{
+@BeforeClass
+public static void setUp()
+{
+DatabaseDescriptor.setPartitioner(new ByteOrderedPartitioner());
+}
+
+@Test
+public void testTokenFunctionWithSingleColumnPartitionKey() throws 
Throwable
+{
+createTable(CREATE TABLE IF NOT EXISTS %s (a int PRIMARY KEY, b 
text));
+execute(INSERT INTO %s (a, b) VALUES (0, 'a'));
+
+assertRows(execute(SELECT * FROM %s WHERE token(a) = token(?), 0), 
row(0, a));
+assertRows(execute(SELECT * FROM %s WHERE token(a) = token(?) and 
token(a)  token(?), 0, 1), row(0, a));
+assertInvalid(SELECT * FROM %s WHERE token(a)  token(?), a);
+assertInvalid(SELECT * FROM %s WHERE token(a, b) = token(?, ?), 
b, 0);
+assertInvalid(SELECT * FROM %s WHERE token(a) = token(?) and 
token(a) = token(?), 0, 1);
+assertInvalid(SELECT * FROM %s WHERE token(a) = token(?) and 
token(a) = token(?), 0, 1);
+assertInvalidSyntax(SELECT * FROM %s WHERE token(a) = token(?) and 
token(a) IN (token(?)), 0, 1);
+}
+
+@Test
+public void testTokenFunctionWithPartitionKeyAndClusteringKeyArguments() 
throws Throwable
+{
+createTable(CREATE TABLE IF NOT EXISTS %s (a int, b text, PRIMARY KEY 
(a, b)));
+assertInvalid(SELECT * FROM %s WHERE token(a, b)  token(0, 'c'));
+}
+
+@Test
+public void testTokenFunctionWithMultiColumnPartitionKey() throws Throwable
+{
+createTable(CREATE TABLE IF NOT EXISTS %s (a int, b text, PRIMARY KEY 
((a, b;
+execute(INSERT INTO %s (a, b) VALUES (0, 'a'));
+execute(INSERT INTO %s (a, b) VALUES (0, 'b'));
+execute(INSERT INTO %s (a, b) VALUES (0, 'c'));
+
+assertRows(execute(SELECT * FROM %s WHERE token(a, b)  token(?, ?), 
0, a),
+   row(0, b),
+   row(0, c));
+assertRows(execute(SELECT * FROM %s WHERE token(a, b)  token(?, ?) 
and token(a, b)  token(?, ?),
+   0, a,
+   0, d),
+   row(0, b),
+   row(0, c));
+assertInvalid(SELECT * FROM %s WHERE token(a)  token(?) and token(b) 
 token(?), 0, a);
+assertInvalid(SELECT * FROM %s WHERE token(a)  token(?, ?) and 
token(a)  token(?, ?) and token(b)  token(?, ?) , 0, a, 0, d, 0, a);
+assertInvalid(SELECT * FROM %s WHERE token(b, a)  token(0, 'c'));
+}
+
+@Test
+public void testTokenFunctionWithCompoundPartitionAndClusteringCols() 
throws Throwable
+{
+createTable(CREATE TABLE IF NOT EXISTS %s (a int, b int, c int, d 
int, PRIMARY KEY ((a, b), c, d)));
+// just test that the queries don't error
+execute(SELECT * FROM %s WHERE token(a, b)  token(0, 0) AND c  10 
ALLOW FILTERING;);
+execute(SELECT * FROM %s WHERE c  10 AND token(a, b)  token(0, 0) 
ALLOW FILTERING;);
+execute(SELECT * FROM %s WHERE token(a, b)  token(0, 0) AND (c, d)  
(0, 0) ALLOW FILTERING;);
+execute(SELECT * FROM %s WHERE (c, d)  (0, 0) AND token(a, b)  
token(0, 0) ALLOW FILTERING;);
+}
+
+/**
+ * Test undefined columns
+ * migrated from cql_tests.py:TestCQL.undefined_column_handling_test()
+ */
+@Test
+public void testUndefinedColumns() throws Throwable
+{
+createTable(CREATE TABLE %s (k int PRIMARY KEY, v1 int, v2 int,));
+
+execute(INSERT INTO %s (k, v1, v2) VALUES (0, 0, 0));
+execute(INSERT INTO %s (k, v1) VALUES (1, 1));
+execute(INSERT INTO %s (k, v1, v2) VALUES (2, 2, 2));
+
+Object[][] rows = getRows(execute(SELECT v2 FROM %s));
+assertEquals(0, rows[0][0]);
+assertEquals(null, rows[1][0]);
+assertEquals(2, rows[2][0]);
+
+rows = getRows(execute(SELECT v2 FROM %s 

[24/31] cassandra git commit: 2.2 commit for CASSANDRA-9160

2015-06-24 Thread jmckenzie
http://git-wip-us.apache.org/repos/asf/cassandra/blob/01115f72/test/unit/org/apache/cassandra/cql3/UFAuthTest.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/UFAuthTest.java 
b/test/unit/org/apache/cassandra/cql3/UFAuthTest.java
deleted file mode 100644
index 2c36bd1..000
--- a/test/unit/org/apache/cassandra/cql3/UFAuthTest.java
+++ /dev/null
@@ -1,724 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * License); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an AS IS BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.cassandra.cql3;
-
-import java.lang.reflect.Field;
-import java.util.*;
-
-import com.google.common.base.Joiner;
-import com.google.common.collect.ImmutableSet;
-import org.junit.Before;
-import org.junit.BeforeClass;
-import org.junit.Test;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import org.apache.cassandra.auth.*;
-import org.apache.cassandra.config.DatabaseDescriptor;
-import org.apache.cassandra.cql3.functions.Function;
-import org.apache.cassandra.cql3.functions.FunctionName;
-import org.apache.cassandra.cql3.functions.Functions;
-import org.apache.cassandra.cql3.statements.BatchStatement;
-import org.apache.cassandra.cql3.statements.ModificationStatement;
-import org.apache.cassandra.exceptions.*;
-import org.apache.cassandra.service.ClientState;
-import org.apache.cassandra.utils.Pair;
-
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertTrue;
-import static org.junit.Assert.fail;
-
-public class UFAuthTest extends CQLTester
-{
-private static final Logger logger = 
LoggerFactory.getLogger(UFAuthTest.class);
-
-String roleName = test_role;
-AuthenticatedUser user;
-RoleResource role;
-ClientState clientState;
-
-@BeforeClass
-public static void setupAuthorizer()
-{
-try
-{
-IAuthorizer authorizer = new StubAuthorizer();
-Field authorizerField = 
DatabaseDescriptor.class.getDeclaredField(authorizer);
-authorizerField.setAccessible(true);
-authorizerField.set(null, authorizer);
-DatabaseDescriptor.setPermissionsValidity(0);
-}
-catch (IllegalAccessException | NoSuchFieldException e)
-{
-throw new RuntimeException(e);
-}
-}
-
-@Before
-public void setup() throws Throwable
-{
-((StubAuthorizer) DatabaseDescriptor.getAuthorizer()).clear();
-setupClientState();
-setupTable(CREATE TABLE %s (k int, v1 int, v2 int, PRIMARY KEY (k, 
v1)));
-}
-
-@Test
-public void functionInSelection() throws Throwable
-{
-String functionName = createSimpleFunction();
-String cql = String.format(SELECT k, %s FROM %s WHERE k = 1;,
-   functionCall(functionName),
-   KEYSPACE + . + currentTable());
-assertPermissionsOnFunction(cql, functionName);
-}
-
-@Test
-public void functionInSelectPKRestriction() throws Throwable
-{
-String functionName = createSimpleFunction();
-String cql = String.format(SELECT * FROM %s WHERE k = %s,
-   KEYSPACE + . + currentTable(),
-   functionCall(functionName));
-assertPermissionsOnFunction(cql, functionName);
-}
-
-@Test
-public void functionInSelectClusteringRestriction() throws Throwable
-{
-String functionName = createSimpleFunction();
-String cql = String.format(SELECT * FROM %s WHERE k = 0 AND v1 = %s,
-   KEYSPACE + . + currentTable(),
-   functionCall(functionName));
-assertPermissionsOnFunction(cql, functionName);
-}
-
-@Test
-public void functionInSelectInRestriction() throws Throwable
-{
-String functionName = createSimpleFunction();
-String cql = String.format(SELECT * FROM %s WHERE k IN (%s, %s),
-   KEYSPACE + . + currentTable(),
-   functionCall(functionName),
-   functionCall(functionName));
-assertPermissionsOnFunction(cql, functionName);
- 

[29/31] cassandra git commit: 2.2 commit for CASSANDRA-9160

2015-06-24 Thread jmckenzie
http://git-wip-us.apache.org/repos/asf/cassandra/blob/01115f72/test/unit/org/apache/cassandra/cql3/CQLTester.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/CQLTester.java 
b/test/unit/org/apache/cassandra/cql3/CQLTester.java
index d47b9d2..4b4631e 100644
--- a/test/unit/org/apache/cassandra/cql3/CQLTester.java
+++ b/test/unit/org/apache/cassandra/cql3/CQLTester.java
@@ -35,11 +35,18 @@ import org.junit.*;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import com.datastax.driver.core.*;
+import static junit.framework.Assert.assertNotNull;
+import com.datastax.driver.core.Cluster;
+import com.datastax.driver.core.ColumnDefinitions;
+import com.datastax.driver.core.DataType;
+import com.datastax.driver.core.ProtocolVersion;
 import com.datastax.driver.core.ResultSet;
+import com.datastax.driver.core.Row;
+import com.datastax.driver.core.Session;
 import org.apache.cassandra.SchemaLoader;
 import org.apache.cassandra.concurrent.ScheduledExecutors;
 import org.apache.cassandra.config.CFMetaData;
+import org.apache.cassandra.config.DatabaseDescriptor;
 import org.apache.cassandra.config.Schema;
 import org.apache.cassandra.cql3.functions.FunctionName;
 import org.apache.cassandra.cql3.statements.ParsedStatement;
@@ -51,6 +58,7 @@ import org.apache.cassandra.db.marshal.TupleType;
 import org.apache.cassandra.exceptions.CassandraException;
 import org.apache.cassandra.exceptions.ConfigurationException;
 import org.apache.cassandra.exceptions.SyntaxException;
+import org.apache.cassandra.dht.Murmur3Partitioner;
 import org.apache.cassandra.io.util.FileUtils;
 import org.apache.cassandra.serializers.TypeSerializer;
 import org.apache.cassandra.service.ClientState;
@@ -71,6 +79,7 @@ public abstract class CQLTester
     public static final String KEYSPACE = "cql_test_keyspace";
     public static final String KEYSPACE_PER_TEST = "cql_test_keyspace_alt";
     protected static final boolean USE_PREPARED_VALUES = Boolean.valueOf(System.getProperty("cassandra.test.use_prepared", "true"));
+    protected static final long ROW_CACHE_SIZE_IN_MB = Integer.valueOf(System.getProperty("cassandra.test.row_cache_size_in_mb", "0"));
     private static final AtomicInteger seqNumber = new AtomicInteger();
 
 private static org.apache.cassandra.transport.Server server;
@@ -79,7 +88,7 @@ public abstract class CQLTester
 private static final Cluster[] cluster;
 private static final Session[] session;
 
-    static int maxProtocolVersion;
+    public static int maxProtocolVersion;
     static {
         int version;
         for (version = 1; version <= Server.CURRENT_VERSION; )
@@ -129,9 +138,12 @@ public abstract class CQLTester
 private boolean usePrepared = USE_PREPARED_VALUES;
 
     @BeforeClass
-    public static void setUpClass() throws Throwable
+    public static void setUpClass()
     {
-        schemaChange(String.format("CREATE KEYSPACE IF NOT EXISTS %s WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'}", KEYSPACE));
+        if (ROW_CACHE_SIZE_IN_MB > 0)
+            DatabaseDescriptor.setRowCacheSizeInMB(ROW_CACHE_SIZE_IN_MB);
+
+        DatabaseDescriptor.setPartitioner(Murmur3Partitioner.instance);
     }
 
 @AfterClass
@@ -151,6 +163,7 @@ public abstract class CQLTester
     @Before
     public void beforeTest() throws Throwable
     {
+        schemaChange(String.format("CREATE KEYSPACE IF NOT EXISTS %s WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'}", KEYSPACE));
         schemaChange(String.format("CREATE KEYSPACE IF NOT EXISTS %s WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'}", KEYSPACE_PER_TEST));
     }
 
@@ -178,16 +191,16 @@ public abstract class CQLTester
 {
 try
 {
-                    for (int i = tablesToDrop.size() - 1; i >=0; i--)
+                    for (int i = tablesToDrop.size() - 1; i >= 0; i--)
                         schemaChange(String.format("DROP TABLE IF EXISTS %s.%s", KEYSPACE, tablesToDrop.get(i)));
 
-                    for (int i = aggregatesToDrop.size() - 1; i >=0; i--)
+                    for (int i = aggregatesToDrop.size() - 1; i >= 0; i--)
                         schemaChange(String.format("DROP AGGREGATE IF EXISTS %s", aggregatesToDrop.get(i)));
 
-                    for (int i = functionsToDrop.size() - 1; i >=0; i--)
+                    for (int i = functionsToDrop.size() - 1; i >= 0; i--)
                         schemaChange(String.format("DROP FUNCTION IF EXISTS %s", functionsToDrop.get(i)));
 
-                    for (int i = typesToDrop.size() - 1; i >=0; i--)
+                    for (int i = typesToDrop.size() - 1; i >= 0; i--)
                         schemaChange(String.format("DROP TYPE IF EXISTS %s.%s", KEYSPACE, typesToDrop.get(i)));
 
 // Dropping doesn't delete the sstables. It's not a huge 
deal but it's cleaner to cleanup after us

[20/31] cassandra git commit: 2.2 commit for CASSANDRA-9160

2015-06-24 Thread jmckenzie
http://git-wip-us.apache.org/repos/asf/cassandra/blob/01115f72/test/unit/org/apache/cassandra/cql3/validation/entities/JsonTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/entities/JsonTest.java 
b/test/unit/org/apache/cassandra/cql3/validation/entities/JsonTest.java
new file mode 100644
index 000..7f8fa0b
--- /dev/null
+++ b/test/unit/org/apache/cassandra/cql3/validation/entities/JsonTest.java
@@ -0,0 +1,958 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * License); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.cql3.validation.entities;
+
+import org.apache.cassandra.config.DatabaseDescriptor;
+import org.apache.cassandra.cql3.Json;
+import org.apache.cassandra.cql3.CQLTester;
+import org.apache.cassandra.dht.ByteOrderedPartitioner;
+import org.apache.cassandra.serializers.SimpleDateSerializer;
+import org.apache.cassandra.serializers.TimeSerializer;
+import org.apache.cassandra.utils.ByteBufferUtil;
+
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+import java.math.BigDecimal;
+import java.math.BigInteger;
+import java.net.InetAddress;
+import java.text.SimpleDateFormat;
+import java.util.Date;
+import java.util.UUID;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+public class JsonTest extends CQLTester
+{
+@BeforeClass
+public static void setUp()
+{
+DatabaseDescriptor.setPartitioner(ByteOrderedPartitioner.instance);
+}
+
+@Test
+public void testFromJsonFct() throws Throwable
+{
+String typeName = createType("CREATE TYPE %s (a int, b uuid, c set<text>)");
+createTable("CREATE TABLE %s (" +
+        "k int PRIMARY KEY, " +
+        "asciival ascii, " +
+        "bigintval bigint, " +
+        "blobval blob, " +
+        "booleanval boolean, " +
+        "dateval date, " +
+        "decimalval decimal, " +
+        "doubleval double, " +
+        "floatval float, " +
+        "inetval inet, " +
+        "intval int, " +
+        "textval text, " +
+        "timeval time, " +
+        "timestampval timestamp, " +
+        "timeuuidval timeuuid, " +
+        "uuidval uuid," +
+        "varcharval varchar, " +
+        "varintval varint, " +
+        "listval list<int>, " +
+        "frozenlistval frozen<list<int>>, " +
+        "setval set<uuid>, " +
+        "frozensetval frozen<set<uuid>>, " +
+        "mapval map<ascii, int>," +
+        "frozenmapval frozen<map<ascii, int>>," +
+        "tupleval frozen<tuple<int, ascii, uuid>>," +
+        "udtval frozen<" + typeName + ">)");
+
+
+// fromJson() can only be used when the receiver type is known
+assertInvalidMessage("fromJson() cannot be used in the selection clause", "SELECT fromJson(asciival) FROM %s", 0, 0);
+
+String func1 = createFunction(KEYSPACE, "int", "CREATE FUNCTION %s (a int) CALLED ON NULL INPUT RETURNS text LANGUAGE java AS $$ return a.toString(); $$");
+createFunctionOverload(func1, "int", "CREATE FUNCTION %s (a text) CALLED ON NULL INPUT RETURNS text LANGUAGE java AS $$ return new String(a); $$");
+
+assertInvalidMessage("Ambiguous call to function",
+        "INSERT INTO %s (k, textval) VALUES (?, " + func1 + "(fromJson(?)))", 0, "123");
+
+// fails JSON parsing
+assertInvalidMessage("Could not decode JSON string '\u038E\u0394\u03B4\u03E0'",
+        "INSERT INTO %s (k, asciival) VALUES (?, fromJson(?))", 0, "\u038E\u0394\u03B4\u03E0");
+
+// handle nulls
+execute("INSERT INTO %s (k, asciival) VALUES (?, fromJson(?))", 0, null);
+
+//  ascii 
+execute("INSERT INTO %s (k, asciival) VALUES (?, fromJson(?))", 0, "\"ascii text\"");
+assertRows(execute("SELECT k, asciival FROM %s WHERE k = ?", 0), row(0, "ascii text"));
+
+execute("INSERT INTO %s (k, asciival) VALUES (?, fromJson(?))", 0, "\"ascii \\\" text\"");
+assertRows(execute("SELECT k, asciival FROM %s WHERE k = ?", 0), row(0, "ascii \" text"));
+
+
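The comment in the hunk above, that fromJson() can only be used when the receiver type is known, is the whole constraint being tested: a JSON literal such as "123" could decode to an int, a varint or text, so the server only accepts fromJson() where a column (or function argument) fixes the target type. A rough, self-contained analogy in plain Java, a toy decoder rather than anything from Cassandra:

    public class ReceiverTypeSketch
    {
        enum CqlType { INT, TEXT }

        // Decoding is only well-defined once a target type is supplied.
        static Object decode(String json, CqlType receiver)
        {
            switch (receiver)
            {
                case INT:  return Integer.valueOf(json.trim());
                case TEXT: return json.trim().replaceAll("^\"|\"$", "");
                default:   throw new IllegalArgumentException("unsupported type");
            }
        }

        public static void main(String[] args)
        {
            // Fine: the receiver type is known, as in INSERT ... VALUES (?, fromJson(?)).
            System.out.println(decode("123", CqlType.INT));
            System.out.println(decode("\"ascii text\"", CqlType.TEXT));
            // There is deliberately no decode(String) overload: without a receiver
            // type the result type is ambiguous, which is why SELECT fromJson(...)
            // is rejected in the test above.
        }
    }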

[3/9] cassandra git commit: Migrate CQL tests from dtest to unit tests

2015-06-24 Thread jmckenzie
http://git-wip-us.apache.org/repos/asf/cassandra/blob/f797bfa4/test/unit/org/apache/cassandra/cql3/validation/operations/AlterTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/operations/AlterTest.java 
b/test/unit/org/apache/cassandra/cql3/validation/operations/AlterTest.java
new file mode 100644
index 000..cc7d2a4
--- /dev/null
+++ b/test/unit/org/apache/cassandra/cql3/validation/operations/AlterTest.java
@@ -0,0 +1,201 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * License); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.cql3.validation.operations;
+
+import org.apache.cassandra.cql3.CQLTester;
+import org.apache.cassandra.db.ColumnFamilyStore;
+import org.apache.cassandra.db.Keyspace;
+import org.apache.cassandra.exceptions.ConfigurationException;
+import org.apache.cassandra.exceptions.SyntaxException;
+
+import org.junit.Test;
+
+import static org.junit.Assert.assertEquals;
+
+public class AlterTest extends CQLTester
+{
+@Test
+public void testAddList() throws Throwable
+{
+createTable("CREATE TABLE %s (id text PRIMARY KEY, content text);");
+execute("ALTER TABLE %s ADD myCollection list<text>;");
+execute("INSERT INTO %s (id, content , myCollection) VALUES ('test', 'first test', ['first element']);");
+
+assertRows(execute("SELECT * FROM %s;"), row("test", "first test", list("first element")));
+}
+
+@Test
+public void testDropList() throws Throwable
+{
+createTable("CREATE TABLE %s (id text PRIMARY KEY, content text, myCollection list<text>);");
+execute("INSERT INTO %s (id, content , myCollection) VALUES ('test', 'first test', ['first element']);");
+execute("ALTER TABLE %s DROP myCollection;");
+
+assertRows(execute("SELECT * FROM %s;"), row("test", "first test"));
+}
+@Test
+public void testAddMap() throws Throwable
+{
+createTable("CREATE TABLE %s (id text PRIMARY KEY, content text);");
+execute("ALTER TABLE %s ADD myCollection map<text, text>;");
+execute("INSERT INTO %s (id, content , myCollection) VALUES ('test', 'first test', { '1' : 'first element'});");
+
+assertRows(execute("SELECT * FROM %s;"), row("test", "first test", map("1", "first element")));
+}
+
+@Test
+public void testDropMap() throws Throwable
+{
+createTable("CREATE TABLE %s (id text PRIMARY KEY, content text, myCollection map<text, text>);");
+execute("INSERT INTO %s (id, content , myCollection) VALUES ('test', 'first test', { '1' : 'first element'});");
+execute("ALTER TABLE %s DROP myCollection;");
+
+assertRows(execute("SELECT * FROM %s;"), row("test", "first test"));
+}
+
+@Test
+public void testDropListAndAddListWithSameName() throws Throwable
+{
+createTable("CREATE TABLE %s (id text PRIMARY KEY, content text, myCollection list<text>);");
+execute("INSERT INTO %s (id, content , myCollection) VALUES ('test', 'first test', ['first element']);");
+execute("ALTER TABLE %s DROP myCollection;");
+execute("ALTER TABLE %s ADD myCollection list<text>;");
+
+assertRows(execute("SELECT * FROM %s;"), row("test", "first test", null));
+execute("UPDATE %s set myCollection = ['second element'] WHERE id = 'test';");
+assertRows(execute("SELECT * FROM %s;"), row("test", "first test", list("second element")));
+}
+@Test
+public void testDropListAndAddMapWithSameName() throws Throwable
+{
+createTable("CREATE TABLE %s (id text PRIMARY KEY, content text, myCollection list<text>);");
+execute("INSERT INTO %s (id, content , myCollection) VALUES ('test', 'first test', ['first element']);");
+execute("ALTER TABLE %s DROP myCollection;");
+
+assertInvalid("ALTER TABLE %s ADD myCollection map<int, int>;");
+}
+
+@Test
+public void testChangeStrategyWithUnquotedAgrument() throws Throwable
+{
+createTable("CREATE TABLE %s (id text PRIMARY KEY);");
+
+assertInvalidSyntaxMessage("no viable alternative at input '}'",
+        "ALTER TABLE %s WITH caching = {'keys' : 'all', 'rows_per_partition' : ALL};");
+}
+
+@Test
+// tests 
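The two drop-and-re-add tests above rest on one rule: after DROP, a column of the same name can be re-added only if data written under the old type would still be readable, so list<text> back to list<text> passes while list<text> to map<int, int> is rejected. A toy model of such a check, using simplified type strings rather than Cassandra's real type-compatibility logic:

    import java.util.HashMap;
    import java.util.Map;

    public class ReAddColumnSketch
    {
        // Remembers the type a dropped column had when it was dropped.
        static final Map<String, String> droppedColumns = new HashMap<>();

        static void drop(String column, String type)
        {
            droppedColumns.put(column, type);
        }

        // Re-adding is allowed only when the new type matches what was dropped;
        // otherwise old on-disk data for that column could not be interpreted.
        static void reAdd(String column, String newType)
        {
            String oldType = droppedColumns.get(column);
            if (oldType != null && !oldType.equals(newType))
                throw new IllegalStateException(
                    "Cannot re-add " + column + " as " + newType + "; it was dropped as " + oldType);
            System.out.println("ADD " + column + " " + newType);
        }

        public static void main(String[] args)
        {
            drop("myCollection", "list<text>");
            reAdd("myCollection", "list<text>");                // accepted, like testDropListAndAddListWithSameName
            try { reAdd("myCollection", "map<int, int>"); }     // rejected, like testDropListAndAddMapWithSameName
            catch (IllegalStateException e) { System.out.println(e.getMessage()); }
        }
    }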

[10/32] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-06-24 Thread jmckenzie
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/20364f48
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/20364f48
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/20364f48

Branch: refs/heads/trunk
Commit: 20364f486f1442fccf134ee75ccab1151c0a102c
Parents: 0d4065e f797bfa
Author: Josh McKenzie josh.mcken...@datastax.com
Authored: Wed Jun 24 12:11:10 2015 -0400
Committer: Josh McKenzie josh.mcken...@datastax.com
Committed: Wed Jun 24 12:11:10 2015 -0400

--

--




[5/9] cassandra git commit: Migrate CQL tests from dtest to unit tests

2015-06-24 Thread jmckenzie
http://git-wip-us.apache.org/repos/asf/cassandra/blob/f797bfa4/test/unit/org/apache/cassandra/cql3/validation/entities/FrozenCollectionsTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/entities/FrozenCollectionsTest.java
 
b/test/unit/org/apache/cassandra/cql3/validation/entities/FrozenCollectionsTest.java
new file mode 100644
index 000..beed560
--- /dev/null
+++ 
b/test/unit/org/apache/cassandra/cql3/validation/entities/FrozenCollectionsTest.java
@@ -0,0 +1, @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * License); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.cql3.validation.entities;
+
+import org.apache.cassandra.config.DatabaseDescriptor;
+import org.apache.cassandra.cql3.CQLTester;
+import org.apache.cassandra.db.marshal.*;
+import org.apache.cassandra.dht.ByteOrderedPartitioner;
+import org.apache.cassandra.exceptions.ConfigurationException;
+import org.apache.cassandra.exceptions.InvalidRequestException;
+import org.apache.cassandra.exceptions.SyntaxException;
+import org.apache.commons.lang3.StringUtils;
+import org.junit.Assert;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+
+import static org.junit.Assert.assertEquals;
+
+public class FrozenCollectionsTest extends CQLTester
+{
+@BeforeClass
+public static void setUpClass()
+{
+DatabaseDescriptor.setPartitioner(new ByteOrderedPartitioner());
+}
+
+@Test
+public void testPartitionKeyUsage() throws Throwable
+{
+createTable("CREATE TABLE %s (k frozen<set<int>> PRIMARY KEY, v int)");
+
+execute("INSERT INTO %s (k, v) VALUES (?, ?)", set(), 1);
+execute("INSERT INTO %s (k, v) VALUES (?, ?)", set(1, 2, 3), 1);
+execute("INSERT INTO %s (k, v) VALUES (?, ?)", set(4, 5, 6), 0);
+execute("INSERT INTO %s (k, v) VALUES (?, ?)", set(7, 8, 9), 0);
+
+// overwrite with an update
+execute("UPDATE %s SET v=? WHERE k=?", 0, set());
+execute("UPDATE %s SET v=? WHERE k=?", 0, set(1, 2, 3));
+
+assertRows(execute("SELECT * FROM %s"),
+    row(set(), 0),
+    row(set(1, 2, 3), 0),
+    row(set(4, 5, 6), 0),
+    row(set(7, 8, 9), 0)
+);
+
+assertRows(execute("SELECT k FROM %s"),
+    row(set()),
+    row(set(1, 2, 3)),
+    row(set(4, 5, 6)),
+    row(set(7, 8, 9))
+);
+
+assertRows(execute("SELECT * FROM %s LIMIT 2"),
+    row(set(), 0),
+    row(set(1, 2, 3), 0)
+);
+
+assertRows(execute("SELECT * FROM %s WHERE k=?", set(4, 5, 6)),
+    row(set(4, 5, 6), 0)
+);
+
+assertRows(execute("SELECT * FROM %s WHERE k=?", set()),
+    row(set(), 0)
+);
+
+assertRows(execute("SELECT * FROM %s WHERE k IN ?", list(set(4, 5, 6), set())),
+    row(set(4, 5, 6), 0),
+    row(set(), 0)
+);
+
+assertRows(execute("SELECT * FROM %s WHERE token(k) >= token(?)", set(4, 5, 6)),
+    row(set(4, 5, 6), 0),
+    row(set(7, 8, 9), 0)
+);
+
+assertInvalid("INSERT INTO %s (k, v) VALUES (null, 0)");
+
+execute("DELETE FROM %s WHERE k=?", set());
+execute("DELETE FROM %s WHERE k=?", set(4, 5, 6));
+assertRows(execute("SELECT * FROM %s"),
+    row(set(1, 2, 3), 0),
+    row(set(7, 8, 9), 0)
+);
+}
+
+@Test
+public void testNestedPartitionKeyUsage() throws Throwable
+{
+createTable("CREATE TABLE %s (k frozen<map<set<int>, list<int>>> PRIMARY KEY, v int)");
+
+execute("INSERT INTO %s (k, v) VALUES (?, ?)", map(), 1);
+execute("INSERT INTO %s (k, v) VALUES (?, ?)", map(set(), list(1, 2, 3)), 0);
+execute("INSERT INTO %s (k, v) VALUES (?, ?)", map(set(1, 2, 3), list(1, 2, 3)), 1);
+execute("INSERT INTO %s (k, v) VALUES (?, ?)", map(set(4, 5, 6), list(1, 2, 3)), 0);
+execute("INSERT INTO %s (k, v) VALUES (?, ?)", map(set(7, 8, 9), list(1, 2, 3)), 0);
+
+// overwrite with an update
+execute("UPDATE %s SET v=? WHERE k=?", 0, map());
+
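These partition-key tests work because a frozen collection is serialized as a single immutable blob: the whole value is hashed and compared at once and can never be updated element by element. A loose analogy in plain Java showing why a lookup key has to be a stable, fully-materialized value; this is not Cassandra's actual encoding:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Set;
    import java.util.TreeSet;

    public class FrozenKeySketch
    {
        // "Freeze" a set into a canonical, immutable string form so it can be used
        // as a lookup key, much like a frozen<set<int>> partition key is one blob.
        static String freeze(Set<Integer> s)
        {
            return new TreeSet<>(s).toString();
        }

        public static void main(String[] args)
        {
            Map<String, Integer> table = new HashMap<>();
            table.put(freeze(Set.of()), 0);
            table.put(freeze(Set.of(1, 2, 3)), 0);
            table.put(freeze(Set.of(4, 5, 6)), 0);

            // Whole-value lookup works; per-element updates of the key do not exist.
            System.out.println(table.get(freeze(Set.of(4, 5, 6)))); // prints 0
        }
    }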

[06/32] cassandra git commit: Migrate CQL tests from dtest to unit tests

2015-06-24 Thread jmckenzie
http://git-wip-us.apache.org/repos/asf/cassandra/blob/f797bfa4/test/unit/org/apache/cassandra/cql3/ThriftCompatibilityTest.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/ThriftCompatibilityTest.java 
b/test/unit/org/apache/cassandra/cql3/ThriftCompatibilityTest.java
index 662800b..6f0e439 100644
--- a/test/unit/org/apache/cassandra/cql3/ThriftCompatibilityTest.java
+++ b/test/unit/org/apache/cassandra/cql3/ThriftCompatibilityTest.java
@@ -23,6 +23,7 @@ import org.junit.Test;
 import org.apache.cassandra.SchemaLoader;
 
 import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.fail;
 
 public class ThriftCompatibilityTest extends SchemaLoader
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f797bfa4/test/unit/org/apache/cassandra/cql3/TimestampTest.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/TimestampTest.java 
b/test/unit/org/apache/cassandra/cql3/TimestampTest.java
deleted file mode 100644
index 6673904..000
--- a/test/unit/org/apache/cassandra/cql3/TimestampTest.java
+++ /dev/null
@@ -1,36 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * License); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an AS IS BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.cassandra.cql3;
-
-import org.junit.Test;
-
-public class TimestampTest extends CQLTester
-{
-@Test
-public void testNegativeTimestamps() throws Throwable
-{
-createTable("CREATE TABLE %s (k int PRIMARY KEY, v int)");
-
-execute("INSERT INTO %s (k, v) VALUES (?, ?) USING TIMESTAMP ?", 1, 1, -42L);
-assertRows(execute("SELECT writetime(v) FROM %s WHERE k = ?", 1),
-    row(-42L)
-);
-
-assertInvalid("INSERT INTO %s (k, v) VALUES (?, ?) USING TIMESTAMP ?", 2, 2, Long.MIN_VALUE);
-}
-}
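The deleted test above (the commit summary later in this thread lists a replacement under cql3/validation/entities/) pins down two facts: a client-supplied write timestamp may be negative, and Long.MIN_VALUE is rejected as a reserved value. Reconciliation is then just a signed comparison of longs. A toy last-write-wins sketch under those assumptions, not Cassandra's Cell code:

    public class LastWriteWinsSketch
    {
        static final long RESERVED = Long.MIN_VALUE; // treated as unusable here

        static long reconcile(long existingTs, long incomingTs)
        {
            if (incomingTs == RESERVED)
                throw new IllegalArgumentException("Long.MIN_VALUE is not a valid write timestamp");
            // Plain signed comparison: a negative timestamp such as -42 is still a
            // perfectly good ordering point, it is simply older than 0.
            return Math.max(existingTs, incomingTs);
        }

        public static void main(String[] args)
        {
            System.out.println(reconcile(-42L, 10L));   // 10 wins
            System.out.println(reconcile(-42L, -100L)); // -42 stays
            try { reconcile(0L, Long.MIN_VALUE); }
            catch (IllegalArgumentException e) { System.out.println(e.getMessage()); }
        }
    }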

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f797bfa4/test/unit/org/apache/cassandra/cql3/TupleTypeTest.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/TupleTypeTest.java 
b/test/unit/org/apache/cassandra/cql3/TupleTypeTest.java
deleted file mode 100644
index ce935e3..000
--- a/test/unit/org/apache/cassandra/cql3/TupleTypeTest.java
+++ /dev/null
@@ -1,101 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * License); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an AS IS BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.cassandra.cql3;
-
-import org.junit.Test;
-
-public class TupleTypeTest extends CQLTester
-{
-@Test
-public void testTuplePutAndGet() throws Throwable
-{
-String[] valueTypes = {"frozen<tuple<int, text, double>>", "tuple<int, text, double>"};
-for (String valueType : valueTypes)
-{
-createTable("CREATE TABLE %s (k int PRIMARY KEY, t " + valueType + ")");
-
-execute("INSERT INTO %s (k, t) VALUES (?, ?)", 0, tuple(3, "foo", 3.4));
-execute("INSERT INTO %s (k, t) VALUES (?, ?)", 1, tuple(8, "bar", 0.2));
-assertAllRows(
-    row(0, tuple(3, "foo", 3.4)),
-    row(1, tuple(8, "bar", 0.2))
-);
-
-// nulls
-execute("INSERT INTO %s (k, t) VALUES (?, ?)", 2, tuple(5, null, 3.4));
-assertRows(execute("SELECT * FROM %s WHERE k=?", 2),
-    row(2, tuple(5, null, 3.4))
-);
-
-// incomplete tuple
-execute("INSERT INTO %s (k, t) VALUES (?, ?)", 3, tuple(5, "bar"));
-assertRows(execute("SELECT * FROM %s WHERE k=?", 3),
-   

[8/9] cassandra git commit: Migrate CQL tests from dtest to unit tests

2015-06-24 Thread jmckenzie
http://git-wip-us.apache.org/repos/asf/cassandra/blob/f797bfa4/test/unit/org/apache/cassandra/cql3/CreateIndexStatementTest.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/CreateIndexStatementTest.java 
b/test/unit/org/apache/cassandra/cql3/CreateIndexStatementTest.java
deleted file mode 100644
index 18e1be5..000
--- a/test/unit/org/apache/cassandra/cql3/CreateIndexStatementTest.java
+++ /dev/null
@@ -1,101 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * License); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an AS IS BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.cassandra.cql3;
-
-import java.util.Locale;
-
-import org.apache.commons.lang.StringUtils;
-
-import org.junit.Test;
-
-public class CreateIndexStatementTest extends CQLTester
-{
-@Test
-public void testCreateAndDropIndex() throws Throwable
-{
-testCreateAndDropIndex("test", false);
-testCreateAndDropIndex("test2", true);
-}
-
-@Test
-public void testCreateAndDropIndexWithQuotedIdentifier() throws Throwable
-{
-testCreateAndDropIndex("\"quoted_ident\"", false);
-testCreateAndDropIndex("\"quoted_ident2\"", true);
-}
-
-@Test
-public void testCreateAndDropIndexWithCamelCaseIdentifier() throws Throwable
-{
-testCreateAndDropIndex("CamelCase", false);
-testCreateAndDropIndex("CamelCase2", true);
-}
-
-/**
- * Test creating and dropping an index with the specified name.
- *
- * @param indexName the index name
- * @param addKeyspaceOnDrop add the keyspace name in the drop statement
- * @throws Throwable if an error occurs
- */
-private void testCreateAndDropIndex(String indexName, boolean addKeyspaceOnDrop) throws Throwable
-{
-execute("USE system");
-assertInvalidMessage("Index '" + removeQuotes(indexName.toLowerCase(Locale.US)) + "' could not be found", "DROP INDEX " + indexName + ";");
-
-createTable("CREATE TABLE %s (a int primary key, b int);");
-createIndex("CREATE INDEX " + indexName + " ON %s(b);");
-createIndex("CREATE INDEX IF NOT EXISTS " + indexName + " ON %s(b);");
-
-assertInvalidMessage("Index already exists", "CREATE INDEX " + indexName + " ON %s(b)");
-
-execute("INSERT INTO %s (a, b) values (?, ?);", 0, 0);
-execute("INSERT INTO %s (a, b) values (?, ?);", 1, 1);
-execute("INSERT INTO %s (a, b) values (?, ?);", 2, 2);
-execute("INSERT INTO %s (a, b) values (?, ?);", 3, 1);
-
-assertRows(execute("SELECT * FROM %s where b = ?", 1), row(1, 1), row(3, 1));
-assertInvalidMessage("Index '" + removeQuotes(indexName.toLowerCase(Locale.US)) + "' could not be found in any of the tables of keyspace 'system'", "DROP INDEX " + indexName);
-
-if (addKeyspaceOnDrop)
-{
-dropIndex("DROP INDEX " + KEYSPACE + "." + indexName);
-}
-else
-{
-execute("USE " + KEYSPACE);
-dropIndex("DROP INDEX " + indexName);
-}
-
-assertInvalidMessage("No secondary indexes on the restricted columns support the provided operators",
-                     "SELECT * FROM %s where b = ?", 1);
-dropIndex("DROP INDEX IF EXISTS " + indexName);
-assertInvalidMessage("Index '" + removeQuotes(indexName.toLowerCase(Locale.US)) + "' could not be found", "DROP INDEX " + indexName);
-}
-
-/**
- * Removes the quotes from the specified index name.
- *
- * @param indexName the index name from which the quotes must be removed.
- * @return the unquoted index name.
- */
-private static String removeQuotes(String indexName)
-{
-return StringUtils.remove(indexName, '\"');
-}
-}

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f797bfa4/test/unit/org/apache/cassandra/cql3/CreateTableTest.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/CreateTableTest.java 
b/test/unit/org/apache/cassandra/cql3/CreateTableTest.java
deleted file mode 100644
index 14d2c2b..000
--- a/test/unit/org/apache/cassandra/cql3/CreateTableTest.java
+++ /dev/null
@@ -1,32 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor 

[22/31] cassandra git commit: 2.2 commit for CASSANDRA-9160

2015-06-24 Thread jmckenzie
http://git-wip-us.apache.org/repos/asf/cassandra/blob/01115f72/test/unit/org/apache/cassandra/cql3/UseStatementTest.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/UseStatementTest.java 
b/test/unit/org/apache/cassandra/cql3/UseStatementTest.java
deleted file mode 100644
index 66b4b42..000
--- a/test/unit/org/apache/cassandra/cql3/UseStatementTest.java
+++ /dev/null
@@ -1,29 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * License); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an AS IS BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.cassandra.cql3;
-
-import org.junit.Test;
-
-public class UseStatementTest extends CQLTester
-{
-@Test
-public void testUseStatementWithBindVariable() throws Throwable
-{
-assertInvalidSyntaxMessage("Bind variables cannot be used for keyspace names", "USE ?");
-}
-}

http://git-wip-us.apache.org/repos/asf/cassandra/blob/01115f72/test/unit/org/apache/cassandra/cql3/UserTypesTest.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/UserTypesTest.java 
b/test/unit/org/apache/cassandra/cql3/UserTypesTest.java
deleted file mode 100644
index bfd3e9f..000
--- a/test/unit/org/apache/cassandra/cql3/UserTypesTest.java
+++ /dev/null
@@ -1,334 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * License); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an AS IS BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.cassandra.cql3;
-
-import org.junit.Test;
-
-public class UserTypesTest extends CQLTester
-{
-@Test
-public void testInvalidField() throws Throwable
-{
-String myType = createType("CREATE TYPE %s (f int)");
-createTable("CREATE TABLE %s (k int PRIMARY KEY, v frozen<" + myType + ">)");
-
-// 's' is not a field of myType
-assertInvalid("INSERT INTO %s (k, v) VALUES (?, {s : ?})", 0, 1);
-}
-
-@Test
-public void testCassandra8105() throws Throwable
-{
-String ut1 = createType("CREATE TYPE %s (a int, b int)");
-String ut2 = createType("CREATE TYPE %s (j frozen<" + KEYSPACE + "." + ut1 + ">, k int)");
-createTable("CREATE TABLE %s (x int PRIMARY KEY, y set<frozen<" + KEYSPACE + "." + ut2 + ">>)");
-execute("INSERT INTO %s (x, y) VALUES (1, { { k: 1 } })");
-
-String ut3 = createType("CREATE TYPE %s (a int, b int)");
-String ut4 = createType("CREATE TYPE %s (j frozen<" + KEYSPACE + "." + ut3 + ">, k int)");
-createTable("CREATE TABLE %s (x int PRIMARY KEY, y list<frozen<" + KEYSPACE + "." + ut4 + ">>)");
-execute("INSERT INTO %s (x, y) VALUES (1, [ { k: 1 } ])");
-
-String ut5 = createType("CREATE TYPE %s (a int, b int)");
-String ut6 = createType("CREATE TYPE %s (i int, j frozen<" + KEYSPACE + "." + ut5 + ">)");
-createTable("CREATE TABLE %s (x int PRIMARY KEY, y set<frozen<" + KEYSPACE + "." + ut6 + ">>)");
-execute("INSERT INTO %s (x, y) VALUES (1, { { i: 1 } })");
-}
-
-@Test
-public void testFor7684() throws Throwable
-{
-String myType = createType("CREATE TYPE %s (x double)");
-createTable("CREATE TABLE %s (k int, v frozen<" + myType + ">, b boolean static, PRIMARY KEY (k, v))");
-
-execute("INSERT INTO %s(k, v) VALUES (?, {x:?})", 1, -104.99251);
-execute("UPDATE %s SET b = ? WHERE k = ?", true, 1);
-
-assertRows(execute("SELECT v.x FROM %s WHERE k = ? AND v = {x:?}", 1, -104.99251),
-    row(-104.99251)
-);
-
-flush();
-
-assertRows(execute("SELECT v.x FROM %s WHERE k = ? AND v = {x:?}", 1, -104.99251),
-   

[11/31] cassandra git commit: 2.2 commit for CASSANDRA-9160

2015-06-24 Thread jmckenzie
http://git-wip-us.apache.org/repos/asf/cassandra/blob/01115f72/test/unit/org/apache/cassandra/cql3/validation/operations/SelectTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/operations/SelectTest.java 
b/test/unit/org/apache/cassandra/cql3/validation/operations/SelectTest.java
new file mode 100644
index 000..506bdaf
--- /dev/null
+++ b/test/unit/org/apache/cassandra/cql3/validation/operations/SelectTest.java
@@ -0,0 +1,1336 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * License); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.cql3.validation.operations;
+
+import java.nio.ByteBuffer;
+import java.util.UUID;
+
+import org.junit.Test;
+
+import junit.framework.Assert;
+import org.apache.cassandra.cql3.UntypedResultSet;
+import org.apache.cassandra.cql3.CQLTester;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+
+/**
+ * Test column ranges and ordering with static column in table
+ */
+public class SelectTest extends CQLTester
+{
+@Test
+public void testSingleClustering() throws Throwable
+{
+createTable("CREATE TABLE %s (p text, c text, v text, s text static, PRIMARY KEY (p, c))");
+
+execute("INSERT INTO %s(p, c, v, s) values (?, ?, ?, ?)", "p1", "k1", "v1", "sv1");
+execute("INSERT INTO %s(p, c, v) values (?, ?, ?)", "p1", "k2", "v2");
+execute("INSERT INTO %s(p, s) values (?, ?)", "p2", "sv2");
+
+assertRows(execute("SELECT * FROM %s WHERE p=?", "p1"),
+    row("p1", "k1", "sv1", "v1"),
+    row("p1", "k2", "sv1", "v2")
+);
+
+assertRows(execute("SELECT * FROM %s WHERE p=?", "p2"),
+    row("p2", null, "sv2", null)
+);
+
+// Ascending order
+
+assertRows(execute("SELECT * FROM %s WHERE p=? ORDER BY c ASC", "p1"),
+    row("p1", "k1", "sv1", "v1"),
+    row("p1", "k2", "sv1", "v2")
+);
+
+assertRows(execute("SELECT * FROM %s WHERE p=? ORDER BY c ASC", "p2"),
+    row("p2", null, "sv2", null)
+);
+
+// Descending order
+
+assertRows(execute("SELECT * FROM %s WHERE p=? ORDER BY c DESC", "p1"),
+    row("p1", "k2", "sv1", "v2"),
+    row("p1", "k1", "sv1", "v1")
+);
+
+assertRows(execute("SELECT * FROM %s WHERE p=? ORDER BY c DESC", "p2"),
+    row("p2", null, "sv2", null)
+);
+
+// No order with one relation
+
+assertRows(execute(SELECT * FROM %s WHERE p=? AND c=?, p1, k1),
+row(p1, k1, sv1, v1),
+row(p1, k2, sv1, v2)
+);
+
+assertRows(execute(SELECT * FROM %s WHERE p=? AND c=?, p1, k2),
+row(p1, k2, sv1, v2)
+);
+
+assertEmpty(execute(SELECT * FROM %s WHERE p=? AND c=?, p1, 
k3));
+
+assertRows(execute(SELECT * FROM %s WHERE p=? AND c =?, p1, k1),
+row(p1, k1, sv1, v1)
+);
+
+assertRows(execute(SELECT * FROM %s WHERE p=? AND c=?, p1, k1),
+row(p1, k1, sv1, v1)
+);
+
+assertEmpty(execute(SELECT * FROM %s WHERE p=? AND c=?, p1, 
k0));
+
+// Ascending with one relation
+
+assertRows(execute(SELECT * FROM %s WHERE p=? AND c=? ORDER BY c 
ASC, p1, k1),
+row(p1, k1, sv1, v1),
+row(p1, k2, sv1, v2)
+);
+
+assertRows(execute(SELECT * FROM %s WHERE p=? AND c=? ORDER BY c 
ASC, p1, k2),
+row(p1, k2, sv1, v2)
+);
+
+assertEmpty(execute(SELECT * FROM %s WHERE p=? AND c=? ORDER BY c 
ASC, p1, k3));
+
+assertRows(execute(SELECT * FROM %s WHERE p=? AND c =? ORDER BY c 
ASC, p1, k1),
+row(p1, k1, sv1, v1)
+);
+
+assertRows(execute(SELECT * FROM %s WHERE p=? AND c=? ORDER BY c 
ASC, p1, k1),
+row(p1, k1, sv1, v1)
+);
+
+assertEmpty(execute(SELECT * FROM %s WHERE p=? AND c=? ORDER BY c 
ASC, p1, k0));
+
+// Descending with one relation
+
+assertRows(execute(SELECT * FROM %s WHERE p=? AND c=? ORDER BY c 
DESC, p1, k1),
+row(p1, k2, sv1, v2),
+row(p1, k1, sv1, v1)
+);
+
+assertRows(execute(SELECT * FROM %s WHERE p=? AND c=? ORDER BY c 
DESC, p1, k2),
+row(p1, k2, sv1, v2)
+
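What the assertions above exercise: a static column holds one value per partition, so every row of 'p1' reports the same s, and a partition such as 'p2' that only ever received a static write still comes back as one row with null clustering and regular columns. A small in-memory model of that read path, plain Java rather than the storage engine:

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    public class StaticColumnSketch
    {
        static class Partition
        {
            String staticValue;                                  // shared by the whole partition
            Map<String, String> rows = new LinkedHashMap<>();    // clustering -> regular column v
        }

        static List<String[]> select(Partition p, String pk)
        {
            List<String[]> out = new ArrayList<>();
            if (p.rows.isEmpty())
            {
                // Static-only partition: one result row with null clustering and null v.
                out.add(new String[]{ pk, null, p.staticValue, null });
                return out;
            }
            for (Map.Entry<String, String> e : p.rows.entrySet())
                out.add(new String[]{ pk, e.getKey(), p.staticValue, e.getValue() });
            return out;
        }

        public static void main(String[] args)
        {
            Partition p1 = new Partition();
            p1.staticValue = "sv1";
            p1.rows.put("k1", "v1");
            p1.rows.put("k2", "v2");

            Partition p2 = new Partition();
            p2.staticValue = "sv2";

            for (String[] r : select(p1, "p1")) System.out.println(Arrays.toString(r));
            for (String[] r : select(p2, "p2")) System.out.println(Arrays.toString(r));
        }
    }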

[18/31] cassandra git commit: 2.2 commit for CASSANDRA-9160

2015-06-24 Thread jmckenzie
http://git-wip-us.apache.org/repos/asf/cassandra/blob/01115f72/test/unit/org/apache/cassandra/cql3/validation/entities/UFAuthTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/entities/UFAuthTest.java 
b/test/unit/org/apache/cassandra/cql3/validation/entities/UFAuthTest.java
new file mode 100644
index 000..498f0dd
--- /dev/null
+++ b/test/unit/org/apache/cassandra/cql3/validation/entities/UFAuthTest.java
@@ -0,0 +1,728 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * License); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.cql3.validation.entities;
+
+import java.lang.reflect.Field;
+import java.util.*;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.ImmutableSet;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.cassandra.auth.*;
+import org.apache.cassandra.config.DatabaseDescriptor;
+import org.apache.cassandra.cql3.Attributes;
+import org.apache.cassandra.cql3.CQLStatement;
+import org.apache.cassandra.cql3.QueryProcessor;
+import org.apache.cassandra.cql3.functions.Function;
+import org.apache.cassandra.cql3.functions.FunctionName;
+import org.apache.cassandra.cql3.functions.Functions;
+import org.apache.cassandra.cql3.statements.BatchStatement;
+import org.apache.cassandra.cql3.statements.ModificationStatement;
+import org.apache.cassandra.cql3.CQLTester;
+import org.apache.cassandra.exceptions.*;
+import org.apache.cassandra.service.ClientState;
+import org.apache.cassandra.utils.Pair;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+public class UFAuthTest extends CQLTester
+{
+private static final Logger logger = 
LoggerFactory.getLogger(UFAuthTest.class);
+
+String roleName = "test_role";
+AuthenticatedUser user;
+RoleResource role;
+ClientState clientState;
+
+@BeforeClass
+public static void setupAuthorizer()
+{
+try
+{
+IAuthorizer authorizer = new StubAuthorizer();
+Field authorizerField = DatabaseDescriptor.class.getDeclaredField("authorizer");
+authorizerField.setAccessible(true);
+authorizerField.set(null, authorizer);
+DatabaseDescriptor.setPermissionsValidity(0);
+}
+catch (IllegalAccessException | NoSuchFieldException e)
+{
+throw new RuntimeException(e);
+}
+}
+
+@Before
+public void setup() throws Throwable
+{
+((StubAuthorizer) DatabaseDescriptor.getAuthorizer()).clear();
+setupClientState();
+setupTable("CREATE TABLE %s (k int, v1 int, v2 int, PRIMARY KEY (k, v1))");
+}
+
+@Test
+public void functionInSelection() throws Throwable
+{
+String functionName = createSimpleFunction();
+String cql = String.format("SELECT k, %s FROM %s WHERE k = 1;",
+    functionCall(functionName),
+    KEYSPACE + "." + currentTable());
+assertPermissionsOnFunction(cql, functionName);
+}
+
+@Test
+public void functionInSelectPKRestriction() throws Throwable
+{
+String functionName = createSimpleFunction();
+String cql = String.format("SELECT * FROM %s WHERE k = %s",
+    KEYSPACE + "." + currentTable(),
+    functionCall(functionName));
+assertPermissionsOnFunction(cql, functionName);
+}
+
+@Test
+public void functionInSelectClusteringRestriction() throws Throwable
+{
+String functionName = createSimpleFunction();
+String cql = String.format("SELECT * FROM %s WHERE k = 0 AND v1 = %s",
+    KEYSPACE + "." + currentTable(),
+    functionCall(functionName));
+assertPermissionsOnFunction(cql, functionName);
+}
+
+@Test
+public void functionInSelectInRestriction() throws Throwable
+{
+String functionName = createSimpleFunction();
+String cql = String.format(SELECT 

[14/32] cassandra git commit: 2.2 commit for CASSANDRA-9160

2015-06-24 Thread jmckenzie
http://git-wip-us.apache.org/repos/asf/cassandra/blob/01115f72/test/unit/org/apache/cassandra/cql3/validation/operations/BatchTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/operations/BatchTest.java 
b/test/unit/org/apache/cassandra/cql3/validation/operations/BatchTest.java
new file mode 100644
index 000..1447845
--- /dev/null
+++ b/test/unit/org/apache/cassandra/cql3/validation/operations/BatchTest.java
@@ -0,0 +1,106 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * License); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.cassandra.cql3.validation.operations;
+
+import org.junit.Test;
+
+import org.apache.cassandra.cql3.CQLTester;
+
+public class BatchTest extends CQLTester
+{
+/**
+ * Test batch statements
+ * migrated from cql_tests.py:TestCQL.batch_test()
+ */
+@Test
+public void testBatch() throws Throwable
+{
+createTable("CREATE TABLE %s (userid text PRIMARY KEY, name text, password text)");
+
+String query = "BEGIN BATCH\n"
+        + "INSERT INTO %1$s (userid, password, name) VALUES ('user2', 'ch@ngem3b', 'second user');\n"
+        + "UPDATE %1$s SET password = 'ps22dhds' WHERE userid = 'user3';\n"
+        + "INSERT INTO %1$s (userid, password) VALUES ('user4', 'ch@ngem3c');\n"
+        + "DELETE name FROM %1$s WHERE userid = 'user1';\n"
+        + "APPLY BATCH;";
+
+execute(query);
+}
+
+/**
+ * Migrated from cql_tests.py:TestCQL.batch_and_list_test()
+ */
+@Test
+public void testBatchAndList() throws Throwable
+{
+createTable("CREATE TABLE %s (k int PRIMARY KEY, l list<int>)");
+
+execute("BEGIN BATCH " +
+        "UPDATE %1$s SET l = l +[ 1 ] WHERE k = 0; " +
+        "UPDATE %1$s SET l = l + [ 2 ] WHERE k = 0; " +
+        "UPDATE %1$s SET l = l + [ 3 ] WHERE k = 0; " +
+        "APPLY BATCH");
+
+assertRows(execute("SELECT l FROM %s WHERE k = 0"),
+    row(list(1, 2, 3)));
+
+execute("BEGIN BATCH " +
+        "UPDATE %1$s SET l =[ 1 ] + l WHERE k = 1; " +
+        "UPDATE %1$s SET l = [ 2 ] + l WHERE k = 1; " +
+        "UPDATE %1$s SET l = [ 3 ] + l WHERE k = 1; " +
+        "APPLY BATCH ");
+
+assertRows(execute("SELECT l FROM %s WHERE k = 1"),
+    row(list(3, 2, 1)));
+}
+
+/**
+ * Migrated from cql_tests.py:TestCQL.bug_6115_test()
+ */
+@Test
+public void testBatchDeleteInsert() throws Throwable
+{
+createTable("CREATE TABLE %s (k int, v int, PRIMARY KEY (k, v))");
+
+execute("INSERT INTO %s (k, v) VALUES (0, 1)");
+execute("BEGIN BATCH DELETE FROM %1$s WHERE k=0 AND v=1; INSERT INTO %1$s (k, v) VALUES (0, 2); APPLY BATCH");
+
+assertRows(execute("SELECT * FROM %s"),
+    row(0, 2));
+}
+
+@Test
+public void testBatchWithUnset() throws Throwable
+{
+createTable("CREATE TABLE %s (k int PRIMARY KEY, s text, i int)");
+
+// test batch and update
+String qualifiedTable = keyspace() + "." + currentTable();
+execute("BEGIN BATCH " +
+        "INSERT INTO %s (k, s, i) VALUES (100, 'batchtext', 7); " +
+        "INSERT INTO " + qualifiedTable + " (k, s, i) VALUES (111, 'batchtext', 7); " +
+        "UPDATE " + qualifiedTable + " SET s=?, i=? WHERE k = 100; " +
+        "UPDATE " + qualifiedTable + " SET s=?, i=? WHERE k=111; " +
+        "APPLY BATCH;", null, unset(), unset(), null);
+assertRows(execute("SELECT k, s, i FROM %s where k in (100,111)"),
+    row(100, null, 7),
+    row(111, "batchtext", null)
+);
+}
+}
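testBatchWithUnset above distinguishes three bind states: a concrete value overwrites, null overwrites with null, and unset() leaves whatever is already stored untouched. A compact model of applying such an update in plain Java; the UNSET sentinel is this sketch's own, not the driver API:

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class UnsetBindSketch
    {
        // Sentinel standing in for an "unset" bind value.
        static final Object UNSET = new Object();

        static void apply(Map<String, Object> row, String column, Object bound)
        {
            if (bound == UNSET)
                return;              // leave the existing cell alone
            row.put(column, bound);  // a concrete value or null both overwrite
        }

        public static void main(String[] args)
        {
            Map<String, Object> row = new LinkedHashMap<>();
            row.put("s", "batchtext");
            row.put("i", 7);

            apply(row, "s", null);   // s becomes null, like the k = 100 row in the test
            apply(row, "i", UNSET);  // i keeps 7

            System.out.println(row); // prints {s=null, i=7}
        }
    }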

http://git-wip-us.apache.org/repos/asf/cassandra/blob/01115f72/test/unit/org/apache/cassandra/cql3/validation/operations/CreateTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/operations/CreateTest.java 
b/test/unit/org/apache/cassandra/cql3/validation/operations/CreateTest.java
new file mode 100644
index 000..f3d98ff
--- /dev/null
+++ 

[31/32] cassandra git commit: 2.2 commit for CASSANDRA-9160

2015-06-24 Thread jmckenzie
2.2 commit for CASSANDRA-9160


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/01115f72
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/01115f72
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/01115f72

Branch: refs/heads/trunk
Commit: 01115f72fc50b603ece0a00431308abec24706b7
Parents: 20364f4
Author: Stefania Alborghetti stefania.alborghe...@datastax.com
Authored: Wed Jun 24 12:11:46 2015 -0400
Committer: Josh McKenzie josh.mcken...@datastax.com
Committed: Wed Jun 24 12:11:46 2015 -0400

--
 .../cassandra/config/DatabaseDescriptor.java|6 +
 .../org/apache/cassandra/cql3/ResultSet.java|9 +
 .../apache/cassandra/cql3/UntypedResultSet.java |2 +-
 .../cql3/statements/BatchStatement.java |   69 +-
 .../cql3/statements/CQL3CasRequest.java |8 +-
 .../cql3/statements/ModificationStatement.java  |   70 +-
 .../cql3/statements/SelectStatement.java|   84 +-
 .../cql3/statements/TruncateStatement.java  |   13 +-
 .../apache/cassandra/service/StorageProxy.java  |4 +-
 .../org/apache/cassandra/utils/UUIDGen.java |   16 +-
 .../org/apache/cassandra/cql3/ManyRowsTest.java |   92 +
 .../apache/cassandra/cql3/AggregationTest.java  | 1479 --
 .../org/apache/cassandra/cql3/AliasTest.java|   40 -
 .../apache/cassandra/cql3/AlterTableTest.java   |  113 -
 .../org/apache/cassandra/cql3/CQLTester.java|  172 +-
 .../apache/cassandra/cql3/CollectionsTest.java  |  340 ---
 .../cassandra/cql3/ContainsRelationTest.java|  283 --
 .../cassandra/cql3/CrcCheckChanceTest.java  |  159 --
 .../cql3/CreateAndAlterKeyspaceTest.java|   37 -
 .../cql3/CreateIndexStatementTest.java  |  101 -
 .../apache/cassandra/cql3/CreateTableTest.java  |   69 -
 .../cql3/CreateTriggerStatementTest.java|  121 -
 .../cassandra/cql3/FrozenCollectionsTest.java   | 1101 
 .../cql3/IndexedValuesValidationTest.java   |  149 -
 .../org/apache/cassandra/cql3/JsonTest.java |  947 ---
 .../apache/cassandra/cql3/ModificationTest.java |  112 -
 .../cassandra/cql3/MultiColumnRelationTest.java |  936 ---
 .../org/apache/cassandra/cql3/PgStringTest.java |   76 -
 .../cassandra/cql3/RangeDeletionTest.java   |   35 -
 .../apache/cassandra/cql3/RoleSyntaxTest.java   |   51 -
 .../cql3/SSTableMetadataTrackingTest.java   |  160 --
 .../cql3/SecondaryIndexOnMapEntriesTest.java|  337 ---
 .../cql3/SelectWithTokenFunctionTest.java   |  233 --
 .../cassandra/cql3/SelectionOrderingTest.java   |  233 --
 .../cql3/SingleColumnRelationTest.java  |  553 
 .../SliceQueryFilterWithTombstonesTest.java |  170 --
 .../cassandra/cql3/StaticColumnsQueryTest.java  |  280 --
 .../cassandra/cql3/ThriftCompatibilityTest.java |1 +
 .../apache/cassandra/cql3/TimestampTest.java|   36 -
 .../apache/cassandra/cql3/TupleTypeTest.java|  114 -
 .../org/apache/cassandra/cql3/TypeCastTest.java |   54 -
 .../org/apache/cassandra/cql3/TypeTest.java |   89 -
 .../org/apache/cassandra/cql3/UFAuthTest.java   |  724 -
 .../cassandra/cql3/UFIdentificationTest.java|  376 ---
 test/unit/org/apache/cassandra/cql3/UFTest.java | 2585 -
 .../apache/cassandra/cql3/UseStatementTest.java |   29 -
 .../apache/cassandra/cql3/UserTypesTest.java|  334 ---
 .../selection/SelectionColumnMappingTest.java   |9 +
 .../validation/entities/CollectionsTest.java|  588 
 .../cql3/validation/entities/CountersTest.java  |  115 +
 .../cql3/validation/entities/DateTypeTest.java  |   39 +
 .../entities/FrozenCollectionsTest.java |  
 .../cql3/validation/entities/JsonTest.java  |  958 +++
 .../SecondaryIndexOnMapEntriesTest.java |  348 +++
 .../validation/entities/SecondaryIndexTest.java |  645 +
 .../validation/entities/StaticColumnsTest.java  |  271 ++
 .../cql3/validation/entities/TimestampTest.java |  155 ++
 .../cql3/validation/entities/TimeuuidTest.java  |   81 +
 .../cql3/validation/entities/TupleTypeTest.java |  171 ++
 .../cql3/validation/entities/TypeTest.java  |   92 +
 .../cql3/validation/entities/UFAuthTest.java|  728 +
 .../entities/UFIdentificationTest.java  |  380 +++
 .../cql3/validation/entities/UFTest.java| 2596 ++
 .../cql3/validation/entities/UserTypesTest.java |  404 +++
 .../miscellaneous/CrcCheckChanceTest.java   |  160 ++
 .../validation/miscellaneous/OverflowTest.java  |  331 +++
 .../validation/miscellaneous/PgStringTest.java  |   77 +
 .../miscellaneous/RoleSyntaxTest.java   |   53 +
 .../SSTableMetadataTrackingTest.java|  161 ++
 .../miscellaneous/TombstonesTest.java   |  171 ++
 .../validation/operations/AggregationTest.java  | 1481 ++
 .../cql3/validation/operations/AlterTest.java   |  203 ++
 

[28/32] cassandra git commit: 2.2 commit for CASSANDRA-9160

2015-06-24 Thread jmckenzie
http://git-wip-us.apache.org/repos/asf/cassandra/blob/01115f72/test/unit/org/apache/cassandra/cql3/FrozenCollectionsTest.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/FrozenCollectionsTest.java 
b/test/unit/org/apache/cassandra/cql3/FrozenCollectionsTest.java
deleted file mode 100644
index cdb7489..000
--- a/test/unit/org/apache/cassandra/cql3/FrozenCollectionsTest.java
+++ /dev/null
@@ -1,1101 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.cassandra.cql3;
-
-import org.apache.cassandra.db.marshal.*;
-import org.apache.cassandra.exceptions.ConfigurationException;
-import org.apache.cassandra.exceptions.InvalidRequestException;
-import org.apache.cassandra.exceptions.SyntaxException;
-import org.apache.commons.lang3.StringUtils;
-import org.junit.Assert;
-import org.junit.Test;
-
-import java.util.ArrayList;
-import java.util.Arrays;
-import java.util.List;
-
-import static org.junit.Assert.assertEquals;
-
-public class FrozenCollectionsTest extends CQLTester
-{
-    @Test
-    public void testPartitionKeyUsage() throws Throwable
-    {
-        createTable("CREATE TABLE %s (k frozen<set<int>> PRIMARY KEY, v int)");
-
-        execute("INSERT INTO %s (k, v) VALUES (?, ?)", set(), 1);
-        execute("INSERT INTO %s (k, v) VALUES (?, ?)", set(1, 2, 3), 1);
-        execute("INSERT INTO %s (k, v) VALUES (?, ?)", set(4, 5, 6), 0);
-        execute("INSERT INTO %s (k, v) VALUES (?, ?)", set(7, 8, 9), 0);
-
-        // overwrite with an update
-        execute("UPDATE %s SET v=? WHERE k=?", 0, set());
-        execute("UPDATE %s SET v=? WHERE k=?", 0, set(1, 2, 3));
-
-        assertRows(execute("SELECT * FROM %s"),
-            row(set(), 0),
-            row(set(1, 2, 3), 0),
-            row(set(4, 5, 6), 0),
-            row(set(7, 8, 9), 0)
-        );
-
-        assertRows(execute("SELECT k FROM %s"),
-            row(set()),
-            row(set(1, 2, 3)),
-            row(set(4, 5, 6)),
-            row(set(7, 8, 9))
-        );
-
-        assertRows(execute("SELECT * FROM %s LIMIT 2"),
-            row(set(), 0),
-            row(set(1, 2, 3), 0)
-        );
-
-        assertRows(execute("SELECT * FROM %s WHERE k=?", set(4, 5, 6)),
-            row(set(4, 5, 6), 0)
-        );
-
-        assertRows(execute("SELECT * FROM %s WHERE k=?", set()),
-            row(set(), 0)
-        );
-
-        assertRows(execute("SELECT * FROM %s WHERE k IN ?", list(set(4, 5, 6), set())),
-            row(set(), 0),
-            row(set(4, 5, 6), 0)
-        );
-
-        assertRows(execute("SELECT * FROM %s WHERE token(k) >= token(?)", set(4, 5, 6)),
-            row(set(4, 5, 6), 0),
-            row(set(7, 8, 9), 0)
-        );
-
-        assertInvalid("INSERT INTO %s (k, v) VALUES (null, 0)");
-
-        execute("DELETE FROM %s WHERE k=?", set());
-        execute("DELETE FROM %s WHERE k=?", set(4, 5, 6));
-        assertRows(execute("SELECT * FROM %s"),
-            row(set(1, 2, 3), 0),
-            row(set(7, 8, 9), 0)
-        );
-    }
-
-    @Test
-    public void testNestedPartitionKeyUsage() throws Throwable
-    {
-        createTable("CREATE TABLE %s (k frozen<map<set<int>, list<int>>> PRIMARY KEY, v int)");
-
-        execute("INSERT INTO %s (k, v) VALUES (?, ?)", map(), 1);
-        execute("INSERT INTO %s (k, v) VALUES (?, ?)", map(set(), list(1, 2, 3)), 0);
-        execute("INSERT INTO %s (k, v) VALUES (?, ?)", map(set(1, 2, 3), list(1, 2, 3)), 1);
-        execute("INSERT INTO %s (k, v) VALUES (?, ?)", map(set(4, 5, 6), list(1, 2, 3)), 0);
-        execute("INSERT INTO %s (k, v) VALUES (?, ?)", map(set(7, 8, 9), list(1, 2, 3)), 0);
-
-        // overwrite with an update
-        execute("UPDATE %s SET v=? WHERE k=?", 0, map());
-        execute("UPDATE %s SET v=? WHERE k=?", 0, map(set(1, 2, 3), list(1, 2, 3)));
-
-        assertRows(execute("SELECT * FROM %s"),
-            row(map(), 0),
-            row(map(set(), list(1, 2, 3)), 0),
-            row(map(set(1, 2, 3), list(1, 2, 3)), 0),
-            row(map(set(4, 5, 6), list(1, 2, 3)), 0),
-            row(map(set(7, 8, 9), list(1, 2, 3)), 0)
-        );
-
-        assertRows(execute("SELECT k FROM %s"),

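For readers skimming the removed test above: the pattern it exercises is a frozen collection used directly as a partition key. A rough cqlsh-level sketch of the same statements (the keyspace and table names below are illustrative only; the test substitutes %s with a generated table name at runtime):

    CREATE TABLE ks.frozen_pk (k frozen<set<int>> PRIMARY KEY, v int);
    INSERT INTO ks.frozen_pk (k, v) VALUES ({1, 2, 3}, 1);
    INSERT INTO ks.frozen_pk (k, v) VALUES ({4, 5, 6}, 0);
    UPDATE ks.frozen_pk SET v = 0 WHERE k = {1, 2, 3};
    SELECT * FROM ks.frozen_pk WHERE k = {4, 5, 6};
    SELECT * FROM ks.frozen_pk WHERE token(k) >= token({4, 5, 6});
    DELETE FROM ks.frozen_pk WHERE k = {4, 5, 6};
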
[08/32] cassandra git commit: Migrate CQL tests from dtest to unit tests

2015-06-24 Thread jmckenzie
http://git-wip-us.apache.org/repos/asf/cassandra/blob/f797bfa4/test/unit/org/apache/cassandra/cql3/CreateIndexStatementTest.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/CreateIndexStatementTest.java 
b/test/unit/org/apache/cassandra/cql3/CreateIndexStatementTest.java
deleted file mode 100644
index 18e1be5..000
--- a/test/unit/org/apache/cassandra/cql3/CreateIndexStatementTest.java
+++ /dev/null
@@ -1,101 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.cassandra.cql3;
-
-import java.util.Locale;
-
-import org.apache.commons.lang.StringUtils;
-
-import org.junit.Test;
-
-public class CreateIndexStatementTest extends CQLTester
-{
-    @Test
-    public void testCreateAndDropIndex() throws Throwable
-    {
-        testCreateAndDropIndex("test", false);
-        testCreateAndDropIndex("test2", true);
-    }
-
-    @Test
-    public void testCreateAndDropIndexWithQuotedIdentifier() throws Throwable
-    {
-        testCreateAndDropIndex("\"quoted_ident\"", false);
-        testCreateAndDropIndex("\"quoted_ident2\"", true);
-    }
-
-    @Test
-    public void testCreateAndDropIndexWithCamelCaseIdentifier() throws Throwable
-    {
-        testCreateAndDropIndex("CamelCase", false);
-        testCreateAndDropIndex("CamelCase2", true);
-    }
-
-    /**
-     * Test creating and dropping an index with the specified name.
-     *
-     * @param indexName the index name
-     * @param addKeyspaceOnDrop add the keyspace name in the drop statement
-     * @throws Throwable if an error occurs
-     */
-    private void testCreateAndDropIndex(String indexName, boolean addKeyspaceOnDrop) throws Throwable
-    {
-        execute("USE system");
-        assertInvalidMessage("Index '" + removeQuotes(indexName.toLowerCase(Locale.US)) + "' could not be found", "DROP INDEX " + indexName + ";");
-
-        createTable("CREATE TABLE %s (a int primary key, b int);");
-        createIndex("CREATE INDEX " + indexName + " ON %s(b);");
-        createIndex("CREATE INDEX IF NOT EXISTS " + indexName + " ON %s(b);");
-
-        assertInvalidMessage("Index already exists", "CREATE INDEX " + indexName + " ON %s(b)");
-
-        execute("INSERT INTO %s (a, b) values (?, ?);", 0, 0);
-        execute("INSERT INTO %s (a, b) values (?, ?);", 1, 1);
-        execute("INSERT INTO %s (a, b) values (?, ?);", 2, 2);
-        execute("INSERT INTO %s (a, b) values (?, ?);", 3, 1);
-
-        assertRows(execute("SELECT * FROM %s where b = ?", 1), row(1, 1), row(3, 1));
-        assertInvalidMessage("Index '" + removeQuotes(indexName.toLowerCase(Locale.US)) + "' could not be found in any of the tables of keyspace 'system'", "DROP INDEX " + indexName);
-
-        if (addKeyspaceOnDrop)
-        {
-            dropIndex("DROP INDEX " + KEYSPACE + "." + indexName);
-        }
-        else
-        {
-            execute("USE " + KEYSPACE);
-            dropIndex("DROP INDEX " + indexName);
-        }
-
-        assertInvalidMessage("No secondary indexes on the restricted columns support the provided operators",
-                             "SELECT * FROM %s where b = ?", 1);
-        dropIndex("DROP INDEX IF EXISTS " + indexName);
-        assertInvalidMessage("Index '" + removeQuotes(indexName.toLowerCase(Locale.US)) + "' could not be found", "DROP INDEX " + indexName);
-    }
-
-    /**
-     * Removes the quotes from the specified index name.
-     *
-     * @param indexName the index name from which the quotes must be removed.
-     * @return the unquoted index name.
-     */
-    private static String removeQuotes(String indexName)
-    {
-        return StringUtils.remove(indexName, '\"');
-    }
-}

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f797bfa4/test/unit/org/apache/cassandra/cql3/CreateTableTest.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/CreateTableTest.java 
b/test/unit/org/apache/cassandra/cql3/CreateTableTest.java
deleted file mode 100644
index 14d2c2b..000
--- a/test/unit/org/apache/cassandra/cql3/CreateTableTest.java
+++ /dev/null
@@ -1,32 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor 

[24/32] cassandra git commit: 2.2 commit for CASSANDRA-9160

2015-06-24 Thread jmckenzie
http://git-wip-us.apache.org/repos/asf/cassandra/blob/01115f72/test/unit/org/apache/cassandra/cql3/UFAuthTest.java
--
diff --git a/test/unit/org/apache/cassandra/cql3/UFAuthTest.java 
b/test/unit/org/apache/cassandra/cql3/UFAuthTest.java
deleted file mode 100644
index 2c36bd1..000
--- a/test/unit/org/apache/cassandra/cql3/UFAuthTest.java
+++ /dev/null
@@ -1,724 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.cassandra.cql3;
-
-import java.lang.reflect.Field;
-import java.util.*;
-
-import com.google.common.base.Joiner;
-import com.google.common.collect.ImmutableSet;
-import org.junit.Before;
-import org.junit.BeforeClass;
-import org.junit.Test;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-import org.apache.cassandra.auth.*;
-import org.apache.cassandra.config.DatabaseDescriptor;
-import org.apache.cassandra.cql3.functions.Function;
-import org.apache.cassandra.cql3.functions.FunctionName;
-import org.apache.cassandra.cql3.functions.Functions;
-import org.apache.cassandra.cql3.statements.BatchStatement;
-import org.apache.cassandra.cql3.statements.ModificationStatement;
-import org.apache.cassandra.exceptions.*;
-import org.apache.cassandra.service.ClientState;
-import org.apache.cassandra.utils.Pair;
-
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertTrue;
-import static org.junit.Assert.fail;
-
-public class UFAuthTest extends CQLTester
-{
-    private static final Logger logger = LoggerFactory.getLogger(UFAuthTest.class);
-
-    String roleName = "test_role";
-    AuthenticatedUser user;
-    RoleResource role;
-    ClientState clientState;
-
-    @BeforeClass
-    public static void setupAuthorizer()
-    {
-        try
-        {
-            IAuthorizer authorizer = new StubAuthorizer();
-            Field authorizerField = DatabaseDescriptor.class.getDeclaredField("authorizer");
-            authorizerField.setAccessible(true);
-            authorizerField.set(null, authorizer);
-            DatabaseDescriptor.setPermissionsValidity(0);
-        }
-        catch (IllegalAccessException | NoSuchFieldException e)
-        {
-            throw new RuntimeException(e);
-        }
-    }
-
-    @Before
-    public void setup() throws Throwable
-    {
-        ((StubAuthorizer) DatabaseDescriptor.getAuthorizer()).clear();
-        setupClientState();
-        setupTable("CREATE TABLE %s (k int, v1 int, v2 int, PRIMARY KEY (k, v1))");
-    }
-
-    @Test
-    public void functionInSelection() throws Throwable
-    {
-        String functionName = createSimpleFunction();
-        String cql = String.format("SELECT k, %s FROM %s WHERE k = 1;",
-                                   functionCall(functionName),
-                                   KEYSPACE + "." + currentTable());
-        assertPermissionsOnFunction(cql, functionName);
-    }
-
-    @Test
-    public void functionInSelectPKRestriction() throws Throwable
-    {
-        String functionName = createSimpleFunction();
-        String cql = String.format("SELECT * FROM %s WHERE k = %s",
-                                   KEYSPACE + "." + currentTable(),
-                                   functionCall(functionName));
-        assertPermissionsOnFunction(cql, functionName);
-    }
-
-    @Test
-    public void functionInSelectClusteringRestriction() throws Throwable
-    {
-        String functionName = createSimpleFunction();
-        String cql = String.format("SELECT * FROM %s WHERE k = 0 AND v1 = %s",
-                                   KEYSPACE + "." + currentTable(),
-                                   functionCall(functionName));
-        assertPermissionsOnFunction(cql, functionName);
-    }
-
-    @Test
-    public void functionInSelectInRestriction() throws Throwable
-    {
-        String functionName = createSimpleFunction();
-        String cql = String.format("SELECT * FROM %s WHERE k IN (%s, %s)",
-                                   KEYSPACE + "." + currentTable(),
-                                   functionCall(functionName),
-                                   functionCall(functionName));
-        assertPermissionsOnFunction(cql, functionName);

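The removed UFAuthTest above builds statements that reference a user-defined function and then asserts which permissions are needed to run them (broadly, EXECUTE on the function in addition to the usual permissions on the table). A hedged cqlsh-level sketch of the kind of statement it generates; the function, keyspace and table names here are placeholders rather than names taken from the test:

    CREATE FUNCTION ks.simple_fn (i int)
        RETURNS NULL ON NULL INPUT
        RETURNS int LANGUAGE java AS 'return i;';
    GRANT EXECUTE ON FUNCTION ks.simple_fn(int) TO test_role;
    SELECT k, ks.simple_fn(v1) FROM ks.tbl WHERE k = 1;
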
[19/32] cassandra git commit: 2.2 commit for CASSANDRA-9160

2015-06-24 Thread jmckenzie
http://git-wip-us.apache.org/repos/asf/cassandra/blob/01115f72/test/unit/org/apache/cassandra/cql3/validation/entities/SecondaryIndexTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/entities/SecondaryIndexTest.java
 
b/test/unit/org/apache/cassandra/cql3/validation/entities/SecondaryIndexTest.java
new file mode 100644
index 000..0b812c6
--- /dev/null
+++ 
b/test/unit/org/apache/cassandra/cql3/validation/entities/SecondaryIndexTest.java
@@ -0,0 +1,645 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.cql3.validation.entities;
+
+import java.nio.ByteBuffer;
+import java.util.HashMap;
+import java.util.Locale;
+import java.util.Map;
+import java.util.UUID;
+
+import org.apache.commons.lang3.StringUtils;
+
+import org.junit.Test;
+
+import org.apache.cassandra.cql3.CQLTester;
+import org.apache.cassandra.exceptions.ConfigurationException;
+import org.apache.cassandra.exceptions.SyntaxException;
+import org.apache.cassandra.utils.FBUtilities;
+
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+public class SecondaryIndexTest extends CQLTester
+{
+    private static final int TOO_BIG = 1024 * 65;
+
+    @Test
+    public void testCreateAndDropIndex() throws Throwable
+    {
+        testCreateAndDropIndex("test", false);
+        testCreateAndDropIndex("test2", true);
+    }
+
+    @Test
+    public void testCreateAndDropIndexWithQuotedIdentifier() throws Throwable
+    {
+        testCreateAndDropIndex("\"quoted_ident\"", false);
+        testCreateAndDropIndex("\"quoted_ident2\"", true);
+    }
+
+    @Test
+    public void testCreateAndDropIndexWithCamelCaseIdentifier() throws Throwable
+    {
+        testCreateAndDropIndex("CamelCase", false);
+        testCreateAndDropIndex("CamelCase2", true);
+    }
+
+    /**
+     * Test creating and dropping an index with the specified name.
+     *
+     * @param indexName the index name
+     * @param addKeyspaceOnDrop add the keyspace name in the drop statement
+     * @throws Throwable if an error occurs
+     */
+    private void testCreateAndDropIndex(String indexName, boolean addKeyspaceOnDrop) throws Throwable
+    {
+        execute("USE system");
+        assertInvalidMessage("Index '" + removeQuotes(indexName.toLowerCase(Locale.US)) + "' could not be found", "DROP INDEX " + indexName + ";");
+
+        createTable("CREATE TABLE %s (a int primary key, b int);");
+        createIndex("CREATE INDEX " + indexName + " ON %s(b);");
+        createIndex("CREATE INDEX IF NOT EXISTS " + indexName + " ON %s(b);");
+
+        assertInvalidMessage("Index already exists", "CREATE INDEX " + indexName + " ON %s(b)");
+
+        execute("INSERT INTO %s (a, b) values (?, ?);", 0, 0);
+        execute("INSERT INTO %s (a, b) values (?, ?);", 1, 1);
+        execute("INSERT INTO %s (a, b) values (?, ?);", 2, 2);
+        execute("INSERT INTO %s (a, b) values (?, ?);", 3, 1);
+
+        assertRows(execute("SELECT * FROM %s where b = ?", 1), row(1, 1), row(3, 1));
+        assertInvalidMessage("Index '" + removeQuotes(indexName.toLowerCase(Locale.US)) + "' could not be found in any of the tables of keyspace 'system'",
+                             "DROP INDEX " + indexName);
+
+        if (addKeyspaceOnDrop)
+        {
+            dropIndex("DROP INDEX " + KEYSPACE + "." + indexName);
+        }
+        else
+        {
+            execute("USE " + KEYSPACE);
+            execute("DROP INDEX " + indexName);
+        }
+
+        assertInvalidMessage("No secondary indexes on the restricted columns support the provided operators",
+                             "SELECT * FROM %s where b = ?", 1);
+        dropIndex("DROP INDEX IF EXISTS " + indexName);
+        assertInvalidMessage("Index '" + removeQuotes(indexName.toLowerCase(Locale.US)) + "' could not be found", "DROP INDEX " + indexName);
+    }
+
+    /**
+     * Removes the quotes from the specified index name.
+     *
+     * @param indexName the index name from which the quotes must be removed.
+     * @return the unquoted index name.
+     */
+    private static String removeQuotes(String indexName)
+    {
+        return StringUtils.remove(indexName, '\"');
+

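The index lifecycle exercised by the new SecondaryIndexTest above, translated to plain cqlsh statements (keyspace, table and index names are illustrative; the test generates its own and drives them through CQLTester helpers):

    CREATE TABLE ks.t (a int PRIMARY KEY, b int);
    CREATE INDEX test_idx ON ks.t (b);
    CREATE INDEX IF NOT EXISTS test_idx ON ks.t (b);
    SELECT * FROM ks.t WHERE b = 1;
    DROP INDEX ks.test_idx;             -- keyspace-qualified drop
    DROP INDEX IF EXISTS ks.test_idx;   -- no-op once the index is gone
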
cassandra git commit: Expand upgrade testing for commitlog changes

2015-06-24 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 01115f72f -> a384faaa8


Expand upgrade testing for commitlog changes

Patch by blambov; reviewed by jmckenzie for CASSANDRA-9346


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a384faaa
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a384faaa
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a384faaa

Branch: refs/heads/cassandra-2.2
Commit: a384faaa8aa2c5f0f313011a30ef64e7e795ab1e
Parents: 01115f7
Author: Branimir Lambov branimir.lam...@datastax.com
Authored: Wed Jun 24 12:47:59 2015 -0400
Committer: Josh McKenzie josh.mcken...@datastax.com
Committed: Wed Jun 24 12:47:59 2015 -0400

--
 .../db/commitlog/CommitLogReplayer.java |   2 +-
 .../2.0/CommitLog-3-1431528750790.log   | Bin 0 -> 2097152 bytes
 .../2.0/CommitLog-3-1431528750791.log   | Bin 0 -> 2097152 bytes
 .../2.0/CommitLog-3-1431528750792.log   | Bin 0 -> 2097152 bytes
 .../2.0/CommitLog-3-1431528750793.log   | Bin 0 -> 2097152 bytes
 test/data/legacy-commitlog/2.0/hash.txt |   3 +
 .../2.1/CommitLog-4-1431529069529.log   | Bin 0 -> 2097152 bytes
 .../2.1/CommitLog-4-1431529069530.log   | Bin 0 -> 2097152 bytes
 test/data/legacy-commitlog/2.1/hash.txt |   3 +
 .../db/commitlog/CommitLogStressTest.java   | 217 +---
 .../db/commitlog/CommitLogUpgradeTest.java  | 143 +++
 .../db/commitlog/CommitLogUpgradeTestMaker.java | 250 +++
 12 files changed, 527 insertions(+), 91 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a384faaa/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java
--
diff --git a/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java 
b/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java
index a59e70e..176f64b 100644
--- a/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java
+++ b/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java
@@ -281,7 +281,7 @@ public class CommitLogReplayer
                     return;
                 if (globalPosition.segment == desc.id)
                     reader.seek(globalPosition.position);
-                replaySyncSection(reader, -1, desc);
+                replaySyncSection(reader, (int) reader.getPositionLimit(), desc);
                 return;
             }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a384faaa/test/data/legacy-commitlog/2.0/CommitLog-3-1431528750790.log
--
diff --git a/test/data/legacy-commitlog/2.0/CommitLog-3-1431528750790.log 
b/test/data/legacy-commitlog/2.0/CommitLog-3-1431528750790.log
new file mode 100644
index 000..3301331
Binary files /dev/null and 
b/test/data/legacy-commitlog/2.0/CommitLog-3-1431528750790.log differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a384faaa/test/data/legacy-commitlog/2.0/CommitLog-3-1431528750791.log
--
diff --git a/test/data/legacy-commitlog/2.0/CommitLog-3-1431528750791.log 
b/test/data/legacy-commitlog/2.0/CommitLog-3-1431528750791.log
new file mode 100644
index 000..04314d6
Binary files /dev/null and 
b/test/data/legacy-commitlog/2.0/CommitLog-3-1431528750791.log differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a384faaa/test/data/legacy-commitlog/2.0/CommitLog-3-1431528750792.log
--
diff --git a/test/data/legacy-commitlog/2.0/CommitLog-3-1431528750792.log 
b/test/data/legacy-commitlog/2.0/CommitLog-3-1431528750792.log
new file mode 100644
index 000..a9af9e4
Binary files /dev/null and 
b/test/data/legacy-commitlog/2.0/CommitLog-3-1431528750792.log differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a384faaa/test/data/legacy-commitlog/2.0/CommitLog-3-1431528750793.log
--
diff --git a/test/data/legacy-commitlog/2.0/CommitLog-3-1431528750793.log 
b/test/data/legacy-commitlog/2.0/CommitLog-3-1431528750793.log
new file mode 100644
index 000..3301331
Binary files /dev/null and 
b/test/data/legacy-commitlog/2.0/CommitLog-3-1431528750793.log differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a384faaa/test/data/legacy-commitlog/2.0/hash.txt
--
diff --git a/test/data/legacy-commitlog/2.0/hash.txt 
b/test/data/legacy-commitlog/2.0/hash.txt
new file mode 100644
index 000..4bbec02
--- /dev/null
+++ b/test/data/legacy-commitlog/2.0/hash.txt
@@ -0,0 +1,3 @@
+cfid = 

[2/2] cassandra git commit: Merge branch 'cassandra-2.2' into trunk

2015-06-24 Thread jmckenzie
Merge branch 'cassandra-2.2' into trunk

Conflicts:
test/long/org/apache/cassandra/db/commitlog/CommitLogStressTest.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3f0dd5df
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3f0dd5df
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3f0dd5df

Branch: refs/heads/trunk
Commit: 3f0dd5dff8118d1512dd3dec71dee5d3ac188599
Parents: bc9b0be a384faa
Author: Josh McKenzie josh.mcken...@datastax.com
Authored: Wed Jun 24 12:50:26 2015 -0400
Committer: Josh McKenzie josh.mcken...@datastax.com
Committed: Wed Jun 24 12:50:26 2015 -0400

--
 .../db/commitlog/CommitLogReplayer.java |   2 +-
 .../2.0/CommitLog-3-1431528750790.log   | Bin 0 -> 2097152 bytes
 .../2.0/CommitLog-3-1431528750791.log   | Bin 0 -> 2097152 bytes
 .../2.0/CommitLog-3-1431528750792.log   | Bin 0 -> 2097152 bytes
 .../2.0/CommitLog-3-1431528750793.log   | Bin 0 -> 2097152 bytes
 test/data/legacy-commitlog/2.0/hash.txt |   3 +
 .../2.1/CommitLog-4-1431529069529.log   | Bin 0 -> 2097152 bytes
 .../2.1/CommitLog-4-1431529069530.log   | Bin 0 -> 2097152 bytes
 test/data/legacy-commitlog/2.1/hash.txt |   3 +
 .../db/commitlog/CommitLogStressTest.java   | 217 +---
 .../db/commitlog/CommitLogUpgradeTest.java  | 143 +++
 .../db/commitlog/CommitLogUpgradeTestMaker.java | 250 +++
 12 files changed, 527 insertions(+), 91 deletions(-)
--




[1/2] cassandra git commit: Expand upgrade testing for commitlog changes

2015-06-24 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/trunk bc9b0be32 -> 3f0dd5dff


Expand upgrade testing for commitlog changes

Patch by blambov; reviewed by jmckenzie for CASSANDRA-9346


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a384faaa
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a384faaa
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a384faaa

Branch: refs/heads/trunk
Commit: a384faaa8aa2c5f0f313011a30ef64e7e795ab1e
Parents: 01115f7
Author: Branimir Lambov branimir.lam...@datastax.com
Authored: Wed Jun 24 12:47:59 2015 -0400
Committer: Josh McKenzie josh.mcken...@datastax.com
Committed: Wed Jun 24 12:47:59 2015 -0400


[13/31] cassandra git commit: 2.2 commit for CASSANDRA-9160

2015-06-24 Thread jmckenzie
http://git-wip-us.apache.org/repos/asf/cassandra/blob/01115f72/test/unit/org/apache/cassandra/cql3/validation/operations/SelectMultiColumnRelationTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/operations/SelectMultiColumnRelationTest.java
 
b/test/unit/org/apache/cassandra/cql3/validation/operations/SelectMultiColumnRelationTest.java
new file mode 100644
index 000..552e39e
--- /dev/null
+++ 
b/test/unit/org/apache/cassandra/cql3/validation/operations/SelectMultiColumnRelationTest.java
@@ -0,0 +1,962 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.cql3.validation.operations;
+
+import org.junit.Test;
+
+import org.apache.cassandra.cql3.CQLTester;
+
+import static org.junit.Assert.assertEquals;
+
+public class SelectMultiColumnRelationTest extends CQLTester
+{
+    @Test
+    public void testSingleClusteringInvalidQueries() throws Throwable
+    {
+        for (String compactOption : new String[] { "", " WITH COMPACT STORAGE" })
+        {
+            createTable("CREATE TABLE %s (a int, b int, c int, PRIMARY KEY (a, b))" + compactOption);
+
+            assertInvalidSyntax("SELECT * FROM %s WHERE () = (?, ?)", 1, 2);
+            assertInvalidMessage("b cannot be restricted by more than one relation if it includes an Equal",
+                                 "SELECT * FROM %s WHERE a = 0 AND (b) = (?) AND (b) > (?)", 0, 0);
+            assertInvalidMessage("More than one restriction was found for the start bound on b",
+                                 "SELECT * FROM %s WHERE a = 0 AND (b) > (?) AND (b) > (?)", 0, 1);
+            assertInvalidMessage("Multi-column relations can only be applied to clustering columns but was applied to: a",
+                                 "SELECT * FROM %s WHERE (a, b) = (?, ?)", 0, 0);
+        }
+    }
+
+    @Test
+    public void testMultiClusteringInvalidQueries() throws Throwable
+    {
+        for (String compactOption : new String[] { "", " WITH COMPACT STORAGE" })
+        {
+            createTable("CREATE TABLE %s (a int, b int, c int, d int, PRIMARY KEY (a, b, c, d))" + compactOption);
+
+            assertInvalidSyntax("SELECT * FROM %s WHERE a = 0 AND (b, c) > ()");
+            assertInvalidMessage("Expected 2 elements in value tuple, but got 3: (?, ?, ?)",
+                                 "SELECT * FROM %s WHERE a = 0 AND (b, c) > (?, ?, ?)", 1, 2, 3);
+            assertInvalidMessage("Invalid null value in condition for column c",
+                                 "SELECT * FROM %s WHERE a = 0 AND (b, c) > (?, ?)", 1, null);
+
+            // Wrong order of columns
+            assertInvalidMessage("Clustering columns must appear in the PRIMARY KEY order in multi-column relations: (d, c, b) = (?, ?, ?)",
+                                 "SELECT * FROM %s WHERE a = 0 AND (d, c, b) = (?, ?, ?)", 0, 0, 0);
+            assertInvalidMessage("Clustering columns must appear in the PRIMARY KEY order in multi-column relations: (d, c, b) > (?, ?, ?)",
+                                 "SELECT * FROM %s WHERE a = 0 AND (d, c, b) > (?, ?, ?)", 0, 0, 0);
+
+            // Wrong number of values
+            assertInvalidMessage("Expected 3 elements in value tuple, but got 2: (?, ?)",
+                                 "SELECT * FROM %s WHERE a=0 AND (b, c, d) IN ((?, ?))", 0, 1);
+            assertInvalidMessage("Expected 3 elements in value tuple, but got 5: (?, ?, ?, ?, ?)",
+                                 "SELECT * FROM %s WHERE a=0 AND (b, c, d) IN ((?, ?, ?, ?, ?))", 0, 1, 2, 3, 4);
+
+            // Missing first clustering column
+            assertInvalidMessage("PRIMARY KEY column \"c\" cannot be restricted as preceding column \"b\" is not restricted",
+                                 "SELECT * FROM %s WHERE a = 0 AND (c, d) = (?, ?)", 0, 0);
+            assertInvalidMessage("PRIMARY KEY column \"c\" cannot be restricted as preceding column \"b\" is not restricted",
+                                 "SELECT * FROM %s WHERE a = 0 AND (c, d) > (?, ?)", 0, 0);
+
+            // Nulls
+            assertInvalidMessage("Invalid null value in condition for columns: [b, c, d]",
+                                 "SELECT * 
