[jira] [Commented] (CASSANDRA-9064) [LeveledCompactionStrategy] cqlsh can't run cql produced by its own describe table statement

2015-06-19 Thread Adam Holmberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14593987#comment-14593987
 ] 

Adam Holmberg commented on CASSANDRA-9064:
--

https://datastax-oss.atlassian.net/browse/PYTHON-352

 [LeveledCompactionStrategy] cqlsh can't run cql produced by its own describe 
 table statement
 

 Key: CASSANDRA-9064
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9064
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: cassandra 2.1.3 on mac os x
Reporter: Sujeet Gholap
Assignee: Benjamin Lerer
  Labels: cqlsh
 Fix For: 2.1.x


 Here's how to reproduce:
 1) Create a table with LeveledCompactionStrategy
 CREATE keyspace foo WITH REPLICATION = {'class': 'SimpleStrategy', 
 'replication_factor' : 3};
 CREATE TABLE foo.bar (
 spam text PRIMARY KEY
 ) WITH compaction = {'class': 'LeveledCompactionStrategy'};
 2) Describe the table and save the output
 cqlsh -e "describe table foo.bar"
 Output should be something like
 CREATE TABLE foo.bar (
 spam text PRIMARY KEY
 ) WITH bloom_filter_fp_chance = 0.1
 AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
 AND comment = ''
 AND compaction = {'min_threshold': '4', 'class': 
 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy', 
 'max_threshold': '32'}
 AND compression = {'sstable_compression': 
 'org.apache.cassandra.io.compress.LZ4Compressor'}
 AND dclocal_read_repair_chance = 0.1
 AND default_time_to_live = 0
 AND gc_grace_seconds = 864000
 AND max_index_interval = 2048
 AND memtable_flush_period_in_ms = 0
 AND min_index_interval = 128
 AND read_repair_chance = 0.0
 AND speculative_retry = '99.0PERCENTILE';
 3) Save the output to repro.cql
 4) Drop the table foo.bar
 cqlsh -e "drop table foo.bar"
 5) Run the create table statement we saved
 cqlsh -f repro.cql
 6) Expected: normal execution without an error
 7) Reality:
 ConfigurationException: <ErrorMessage code=2300 [Query invalid because of 
 configuration issue] message="Properties specified [min_threshold, 
 max_threshold] are not understood by LeveledCompactionStrategy">



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9607) Get high load after upgrading from 2.1.3 to cassandra 2.1.6

2015-06-19 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-9607:
-
Fix Version/s: 2.2.0 rc2

 Get high load after upgrading from 2.1.3 to cassandra 2.1.6
 ---

 Key: CASSANDRA-9607
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9607
 Project: Cassandra
  Issue Type: Bug
 Environment: OS: 
 CentOS 6 * 4
 Ubuntu 14.04 * 2
 JDK: Oracle JDK 7, Oracle JDK 8
 VM: Azure VM Standard A3 * 6
 RAM: 7 GB
 Cores: 4
Reporter: Study Hsueh
Assignee: Tyler Hobbs
Priority: Critical
 Fix For: 2.1.x, 2.2.0 rc2

 Attachments: cassandra.yaml, load.png, log.zip, schema.zip


 After upgrading Cassandra from 2.1.3 to 2.1.6, the average load of my 
 Cassandra cluster grew from 0.x~1.x to 3.x~6.x. 
 What additional information should I provide for this problem?





[jira] [Comment Edited] (CASSANDRA-9626) Make C* work in all locales

2015-06-19 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14593887#comment-14593887
 ] 

Robert Stupp edited comment on CASSANDRA-9626 at 6/19/15 8:06 PM:
--

The attached patch against 2.2 sets the file encoding to UTF-8 and the locale 
to en_US.
I don't think that we ever need to take the machine's real locale or file 
encoding into account, since the C* daemon is not user-facing. This may change 
if the tools get localizations.


was (Author: snazy):
The attached patch sets the file encoding to UTF-8 and the locale to en_US.
I don't think that we ever need to take the machine's real locale or file 
encoding into account, since the C* daemon is not user-facing. This may change 
if the tools get localizations.

 Make C* work in all locales
 ---

 Key: CASSANDRA-9626
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9626
 Project: Cassandra
  Issue Type: Improvement
Reporter: Robert Stupp
Assignee: Robert Stupp
Priority: Minor
 Attachments: 9626.txt


 The default locale and default charset have an immediate effect on how 
 strings are encoded and handled - e.g. via {{String.toLowerCase()}} or {{new 
 String(byte[])}}.
 Problems with differing default locales + charsets don't become obvious for 
 US and most European regional settings. But some regional OS settings will 
 cause severe errors. Example: {{"BILLY".toLowerCase()}} returns {{bılly}} 
 with Locale tr_TR (take a look at the second letter - it's an i without the 
 dot).
 (ref: 
 http://blog.thetaphi.de/2012/07/default-locales-default-charsets-and.html)
 It's not a problem I'm currently facing, but it could become a problem for 
 some users. A quick fix could be to set default locale and charset in the 
 start scripts - maybe that's all we need.
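The tr_TR pitfall described above can be reproduced in a few lines (a 
standalone sketch, not part of the attached patch):

```java
import java.util.Locale;

public class TurkishLocaleDemo {
    public static void main(String[] args) {
        // In the Turkish locale, uppercase 'I' lowercases to the dotless
        // 'ı' (U+0131) instead of the ASCII 'i' most code expects.
        String turkish = "BILLY".toLowerCase(new Locale("tr", "TR"));
        String english = "BILLY".toLowerCase(Locale.US);
        System.out.println(turkish); // bılly
        System.out.println(english); // billy
        // Pinning the default locale (as the patch does with en_US)
        // makes String.toLowerCase() deterministic across host OSes.
    }
}
```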





[jira] [Updated] (CASSANDRA-9607) Get high load after upgrading from 2.1.3 to cassandra 2.1.6

2015-06-19 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-9607:

Assignee: Tyler Hobbs  (was: Benedict)

 Get high load after upgrading from 2.1.3 to cassandra 2.1.6
 ---

 Key: CASSANDRA-9607
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9607
 Project: Cassandra
  Issue Type: Bug
 Environment: OS: 
 CentOS 6 * 4
 Ubuntu 14.04 * 2
 JDK: Oracle JDK 7, Oracle JDK 8
 VM: Azure VM Standard A3 * 6
 RAM: 7 GB
 Cores: 4
Reporter: Study Hsueh
Assignee: Tyler Hobbs
Priority: Critical
 Attachments: cassandra.yaml, load.png, log.zip, schema.zip


 After upgrading Cassandra from 2.1.3 to 2.1.6, the average load of my 
 Cassandra cluster grew from 0.x~1.x to 3.x~6.x. 
 What additional information should I provide for this problem?





[jira] [Updated] (CASSANDRA-9626) Make C* work in all locales

2015-06-19 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-9626:

Attachment: 9626.txt

The attached patch sets the file encoding to UTF-8 and the locale to en_US.
I don't think that we ever need to take the machine's real locale or file 
encoding into account, since the C* daemon is not user-facing. This may change 
if the tools get localizations.

 Make C* work in all locales
 ---

 Key: CASSANDRA-9626
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9626
 Project: Cassandra
  Issue Type: Improvement
Reporter: Robert Stupp
Priority: Minor
 Attachments: 9626.txt


 The default locale and default charset have an immediate effect on how 
 strings are encoded and handled - e.g. via {{String.toLowerCase()}} or {{new 
 String(byte[])}}.
 Problems with differing default locales + charsets don't become obvious for 
 US and most European regional settings. But some regional OS settings will 
 cause severe errors. Example: {{"BILLY".toLowerCase()}} returns {{bılly}} 
 with Locale tr_TR (take a look at the second letter - it's an i without the 
 dot).
 (ref: 
 http://blog.thetaphi.de/2012/07/default-locales-default-charsets-and.html)
 It's not a problem I'm currently facing, but it could become a problem for 
 some users. A quick fix could be to set default locale and charset in the 
 start scripts - maybe that's all we need.





[jira] [Updated] (CASSANDRA-9607) Get high load after upgrading from 2.1.3 to cassandra 2.1.6

2015-06-19 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-9607:
-
Fix Version/s: 2.1.x

 Get high load after upgrading from 2.1.3 to cassandra 2.1.6
 ---

 Key: CASSANDRA-9607
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9607
 Project: Cassandra
  Issue Type: Bug
 Environment: OS: 
 CentOS 6 * 4
 Ubuntu 14.04 * 2
 JDK: Oracle JDK 7, Oracle JDK 8
 VM: Azure VM Standard A3 * 6
 RAM: 7 GB
 Cores: 4
Reporter: Study Hsueh
Assignee: Tyler Hobbs
Priority: Critical
 Fix For: 2.1.x, 2.2.0 rc2

 Attachments: cassandra.yaml, load.png, log.zip, schema.zip


 After upgrading Cassandra from 2.1.3 to 2.1.6, the average load of my 
 Cassandra cluster grew from 0.x~1.x to 3.x~6.x. 
 What additional information should I provide for this problem?





[jira] [Commented] (CASSANDRA-7085) Specialized query filters for CQL3

2015-06-19 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14594286#comment-14594286
 ] 

sankalp kohli commented on CASSANDRA-7085:
--

We can query all columns, but if we find live cells for all of the columns 
being queried, we can return early and not read from the older sstables. 

 Specialized query filters for CQL3
 --

 Key: CASSANDRA-7085
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7085
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
  Labels: cql, perfomance
 Fix For: 3.x


 The semantics of CQL make it so that the current {{NamesQueryFilter}} and 
 {{SliceQueryFilter}} are not always as efficient as we could be. Namely, when 
 a {{SELECT}} only selects a handful of columns, we still have to query 
 all the columns of the selected rows to distinguish between 'live row but 
 with no data for the queried columns' and 'no row' (see CASSANDRA-6588 for 
 more details).
 We can solve that however by adding new filters (name and slice) specialized 
 for CQL. The new name filter would be a list of row prefixes + a list of CQL 
 column names (instead of one list of cell names). The slice filter would 
 still take a ColumnSlice[] but would add the list of column names we care 
 about for each row.
 The new sstable readers that go with those filters would use the list of 
 column names to filter out all the cells we don't care about, so we don't 
 have to ship those back to the coordinator to skip them there, yet would know 
 to still return the row marker when necessary.





[jira] [Commented] (CASSANDRA-6377) ALLOW FILTERING should allow seq scan filtering

2015-06-19 Thread Srini (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14594225#comment-14594225
 ] 

Srini commented on CASSANDRA-6377:
--

Just to be clear what I'm referring to, let me give an example.

Primary Key  (Key1, key2, key3, Key4, Key5)

where Key1 is the partitioning key.

Assume, this is what we want to do:

Select * from merchant_data where Key1 = 'abc'  and Key2 = ''  and Key4 = 
''  ALLOW FILTERING;

This is what the current version of Cassandra allows:
Select * from merchant_data where Key1 = 'abc'  and Key2 = ''

The difference between the two is that with the second query the application 
has to filter on Key4 in its own logic, whereas Cassandra (had it been 
allowed) could have done that filtering as part of the first query.

It would make a huge performance difference, as it avoids network load and 
latency between the Cassandra nodes and the client. Reducing the use of 
secondary indexes and using the core strengths of Cassandra would be extremely 
beneficial for Cassandra's adaptability across many use cases.

I do see where this can be abused if the partition contains thousands of rows, 
but by forcing the ALLOW FILTERING clause, the burden is on the client, who 
has to make a conscious decision.
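The client-side fallback described above looks roughly like this (a 
hypothetical sketch with rows modeled as plain maps; real code would use a 
driver's Row type):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class ClientSideFilter {
    // Without server-side ALLOW FILTERING on Key4, every row matching
    // Key1/Key2 crosses the network and the client drops the rest here.
    static List<Map<String, String>> filterOnKey4(List<Map<String, String>> rows,
                                                  String wanted) {
        return rows.stream()
                   .filter(row -> wanted.equals(row.get("key4")))
                   .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Map<String, String>> fetched = List.of(
                Map.of("key1", "abc", "key4", "x"),
                Map.of("key1", "abc", "key4", "y"));
        // Both rows were shipped from the cluster; only one survives.
        System.out.println(filterOnKey4(fetched, "x").size()); // 1
    }
}
```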


 ALLOW FILTERING should allow seq scan filtering
 ---

 Key: CASSANDRA-6377
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6377
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
  Labels: cql
 Fix For: 3.x


 CREATE TABLE emp_table2 (
 empID int PRIMARY KEY,
 firstname text,
 lastname text,
 b_mon text,
 b_day text,
 b_yr text
 );
 INSERT INTO emp_table2 (empID,firstname,lastname,b_mon,b_day,b_yr) 
VALUES (100,'jane','doe','oct','31','1980');
 INSERT INTO emp_table2 (empID,firstname,lastname,b_mon,b_day,b_yr) 
VALUES (101,'john','smith','jan','01','1981');
 INSERT INTO emp_table2 (empID,firstname,lastname,b_mon,b_day,b_yr) 
VALUES (102,'mary','jones','apr','15','1982');
 INSERT INTO emp_table2 (empID,firstname,lastname,b_mon,b_day,b_yr) 
VALUES (103,'tim','best','oct','25','1982');

 SELECT b_mon,b_day,b_yr,firstname,lastname FROM emp_table2 
 WHERE b_mon='oct' ALLOW FILTERING;
 Bad Request: No indexed columns present in by-columns clause with Equal 
 operator





[jira] [Commented] (CASSANDRA-9448) Metrics should use up to date nomenclature

2015-06-19 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14594344#comment-14594344
 ] 

Stefania commented on CASSANDRA-9448:
-

Yes, actually there is already a test being added: 
https://github.com/riptano/cassandra-dtest/pull/335.

We just need to modify it with the new names.

 Metrics should use up to date nomenclature
 --

 Key: CASSANDRA-9448
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9448
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Sam Tunnicliffe
Assignee: Stefania
  Labels: docs-impacting, jmx
 Fix For: 3.0 beta 1


 There are a number of exposed metrics that are currently named using the old 
 nomenclature of column families and rows (meaning partitions).
 It would be good to audit all metrics and update any names to match what they 
 actually represent; we should probably do that in a single sweep to avoid a 
 confusing mixture of old and new terminology. 
 As we'd need to do this in a major release, I've initially set the fixver to 
 3.0 beta1.





[jira] [Commented] (CASSANDRA-9448) Metrics should use up to date nomenclature

2015-06-19 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14593121#comment-14593121
 ] 

Stefania commented on CASSANDRA-9448:
-

Regarding the documentation, is [this|http://wiki.apache.org/cassandra/Metrics] 
the only page listing all metrics?

 Metrics should use up to date nomenclature
 --

 Key: CASSANDRA-9448
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9448
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Sam Tunnicliffe
Assignee: Stefania
  Labels: docs-impacting
 Fix For: 3.0 beta 1


 There are a number of exposed metrics that are currently named using the old 
 nomenclature of column families and rows (meaning partitions).
 It would be good to audit all metrics and update any names to match what they 
 actually represent; we should probably do that in a single sweep to avoid a 
 confusing mixture of old and new terminology. 
 As we'd need to do this in a major release, I've initially set the fixver to 
 3.0 beta1.





[jira] [Commented] (CASSANDRA-7066) Simplify (and unify) cleanup of compaction leftovers

2015-06-19 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14593135#comment-14593135
 ] 

Stefania commented on CASSANDRA-7066:
-

[~benedict], this is ready for another round of review.

A few things to note:

- During an upgrade we need to clean up legacy tmp and tmplink files, and 
optionally system.compactions_in_progress sstable files. I added some code that 
picks up the legacy temporary files together with the new temporary files (used 
only for metadata generation) and deletes them. I also enhanced a preflight 
check to cope with them. However, I don't see why we need to do this every 
time. It would be ideal to check only during an upgrade, but I don't want to 
add one more step to the upgrade procedure either. As far as I understand, 
users are supposed to run nodetool drain before stopping and nodetool 
upgradesstables after restarting, cc [~iamaleksey]. Perhaps a nodetool drain is 
enough to ensure there are no leftover temporary files, or we should enhance it 
to this effect? As for compactions_in_progress, the table is not loaded but its 
folder is not deleted either. It would be rather easy to enhance the new 
standalone tool, sstablelister, to report legacy tmp files and even 
compactions_in_progress sstable files so that they can be deleted; however, 
this would be an additional manual step to perform during the upgrade.

- The existing management of {{cfstore.metric.totalDiskSpaceUsed}} in the 
Tracker was a bit weak, in that we would decrement it without checking whether 
we had added anything to it; we just relied on the tracker being registered 
with the deleting task, which is no longer applicable. I solved it by 
registering another runOnGlobalRelease to decrement the counter, but perhaps 
you can think of something a bit cleaner.

- SSTableDeletingTask is gone; there is an SSTableTidier in the TransactionLogs 
class that does the same job. When all the sstable tidiers have finished, we 
run the transaction tidier. We retry failed deletions both for the sstable 
tidiers and the transaction tidier. We do need to be careful to delete the txn 
log tracking the files to keep (the new ones for commit and the old ones for 
abort) as soon as the transaction is completed, and in the same thread, or else 
we may get inconsistent behavior in getTemporaryFiles(). 

- I changed CQLSSTableWriter so that it has a single transaction per inner 
writer of SSTableSimpleUnsortedWriter, so that its behavior is a little 
closer to the existing behavior.
 

 Simplify (and unify) cleanup of compaction leftovers
 

 Key: CASSANDRA-7066
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7066
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Stefania
Priority: Minor
  Labels: compaction
 Fix For: 3.x

 Attachments: 7066.txt


 Currently we manage a list of in-progress compactions in a system table, 
 which we use to clean up incomplete compactions when we're done. The problem 
 with this is that 1) it's a bit clunky (and leaves us in positions where we 
 can unnecessarily clean up completed files, or conversely not clean up files 
 that have been superseded); and 2) it's only used for regular compaction - 
 no other compaction types are guarded in the same way, so they can result in 
 duplication if we fail before deleting the replacements.
 I'd like to see each sstable store its direct ancestors in its metadata, and 
 on startup we simply delete any sstables that occur in the union of all 
 ancestor sets. This way, as soon as we finish writing we're capable of 
 cleaning up any leftovers, so we never get duplication. It's also much easier 
 to reason about.
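The startup cleanup proposed above can be sketched abstractly (a hypothetical 
model; names and types are illustrative, not the actual C* classes):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class LeftoverCleanup {
    // Each live sstable records the direct ancestors it was compacted
    // from. Any file appearing in the union of the ancestor sets is a
    // leftover of a completed compaction and can be deleted at startup.
    static Set<String> leftovers(Map<String, Set<String>> ancestorsByTable) {
        Set<String> doomed = new HashSet<>();
        for (Set<String> ancestors : ancestorsByTable.values())
            doomed.addAll(ancestors);                // union of all ancestor sets
        doomed.retainAll(ancestorsByTable.keySet()); // only files still on disk
        return doomed;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> tables = new HashMap<>();
        tables.put("sstable-1", Set.of());
        tables.put("sstable-2", Set.of());
        tables.put("sstable-3", Set.of("sstable-1", "sstable-2"));
        // sstable-3 replaced 1 and 2, but a crash left them on disk.
        System.out.println(leftovers(tables)); // sstable-1 and sstable-2
    }
}
```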





[jira] [Commented] (CASSANDRA-9622) count/max/min aggregates not created for the blob type

2015-06-19 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14593409#comment-14593409
 ] 

Aleksey Yeschenko commented on CASSANDRA-9622:
--

cassci:
- 
http://cassci.datastax.com/view/Dev/view/iamaleksey/job/iamaleksey-9622-2.2-dtest/
- 
http://cassci.datastax.com/view/Dev/view/iamaleksey/job/iamaleksey-9622-2.2-testall/
- 
http://cassci.datastax.com/view/Dev/view/iamaleksey/job/iamaleksey-9622-trunk-dtest/
- 
http://cassci.datastax.com/view/Dev/view/iamaleksey/job/iamaleksey-9622-trunk-testall/

 count/max/min aggregates not created for the blob type
 --

 Key: CASSANDRA-9622
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9622
 Project: Cassandra
  Issue Type: Bug
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 2.2.0 rc2


 A static block in {{Functions}} piggy-backs on the type-conversions 
 declaration loop, and the latter skips the blob type. As a result, the count, 
 max, and min functions are not being defined for {{blob}}.





[jira] [Commented] (CASSANDRA-7085) Specialized query filters for CQL3

2015-06-19 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14593427#comment-14593427
 ] 

Sylvain Lebresne commented on CASSANDRA-7085:
-

I was unfortunately wrong, what I suggest here doesn't work.

Let me first recall the problem one more time. The semantics of CQL are that, 
whatever columns a query selects, a row is included in the result set if it 
is live, even if it has no data for the queried columns. This means that even 
if a query selects only a few columns, we still might need to know whether the 
row has live data for any of the other columns, in case the selected columns 
don't have live data. And we cannot rely on the row marker, since {{UPDATE}} 
doesn't even set the row marker (since CASSANDRA-6782). Hence the fact that we 
currently query every column every time.

Now, my initial idea for this ticket (which is actually implemented in the 
current patch for CASSANDRA-8099 but doesn't work) was to say: let's only query 
the columns we want, but record in the result the maximum timestamp of any live 
data that is not included in the query (which in practice means we still read 
all columns from disk but only send up the stack what we care about). We could 
then use that max timestamp to decide whether a row exists, if we needed to. 
But we don't know what would happen during reconciliation for the data we 
haven't queried, so this live-timestamp idea is bogus.

So back to square one: I'm not sure we can preserve the CQL semantics without 
querying all columns. And I'm not sure breaking everyone by changing the 
semantics now is a good idea.

The one thing we can easily do (and that wouldn't be too much work) would be to 
query all columns, but only include the values for the columns the query truly 
cares about (we're only interested in knowing whether those columns are live or 
not). This would be slightly better than what we do now, but not a whole lot.

And so I think we should seriously consider re-opening CASSANDRA-6588: it's not 
perfect but it's better than not having the option imo.
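The "query all columns, ship values only for the selected ones" option can be 
sketched abstractly (hypothetical types; the real filter classes are far 
richer):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Set;

public class SparseRowFilter {
    // Minimal hypothetical model of a cell: column name, value, liveness.
    record Cell(String column, String value, boolean live) {}

    // Ship values only for the selected columns, but judge row liveness
    // across *all* cells - which is why the replica must still read every
    // column from disk even for a narrow SELECT.
    static Optional<Map<String, String>> project(List<Cell> allCells,
                                                 Set<String> selected) {
        boolean rowLive = allCells.stream().anyMatch(Cell::live);
        if (!rowLive)
            return Optional.empty(); // no row at all
        Map<String, String> out = new HashMap<>();
        for (Cell c : allCells)
            if (c.live() && selected.contains(c.column()))
                out.put(c.column(), c.value());
        return Optional.of(out); // possibly empty: live row, no selected data
    }
}
```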


 Specialized query filters for CQL3
 --

 Key: CASSANDRA-7085
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7085
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
  Labels: cql, perfomance
 Fix For: 3.x


 The semantics of CQL make it so that the current {{NamesQueryFilter}} and 
 {{SliceQueryFilter}} are not always as efficient as we could be. Namely, when 
 a {{SELECT}} only selects a handful of columns, we still have to query 
 all the columns of the selected rows to distinguish between 'live row but 
 with no data for the queried columns' and 'no row' (see CASSANDRA-6588 for 
 more details).
 We can solve that however by adding new filters (name and slice) specialized 
 for CQL. The new name filter would be a list of row prefixes + a list of CQL 
 column names (instead of one list of cell names). The slice filter would 
 still take a ColumnSlice[] but would add the list of column names we care 
 about for each row.
 The new sstable readers that go with those filters would use the list of 
 column names to filter out all the cells we don't care about, so we don't 
 have to ship those back to the coordinator to skip them there, yet would know 
 to still return the row marker when necessary.





[jira] [Commented] (CASSANDRA-8535) java.lang.RuntimeException: Failed to rename XXX to YYY

2015-06-19 Thread Andreas Schnitzerling (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14593219#comment-14593219
 ] 

Andreas Schnitzerling commented on CASSANDRA-8535:
--

It does not always help. In some circumstances it crashes again.

 java.lang.RuntimeException: Failed to rename XXX to YYY
 ---

 Key: CASSANDRA-8535
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8535
 Project: Cassandra
  Issue Type: Bug
 Environment: Windows 2008 X64
Reporter: Leonid Shalupov
Assignee: Joshua McKenzie
  Labels: Windows
 Fix For: 2.1.5

 Attachments: 8535_v1.txt, 8535_v2.txt, 8535_v3.txt


 {code}
 java.lang.RuntimeException: Failed to rename 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-tmp-ka-5-Index.db
  to 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-ka-5-Index.db
   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:170) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:154) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.rename(SSTableWriter.java:569) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.rename(SSTableWriter.java:561) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.close(SSTableWriter.java:535) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.finish(SSTableWriter.java:470) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finishAndMaybeThrow(SSTableRewriter.java:349)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finish(SSTableRewriter.java:324)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finish(SSTableRewriter.java:304)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:200)
  ~[main/:na]
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:75)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:226)
  ~[main/:na]
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_45]
   at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 ~[na:1.7.0_45]
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_45]
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_45]
   at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
 Caused by: java.nio.file.FileSystemException: 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-tmp-ka-5-Index.db
  - 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-ka-5-Index.db:
  The process cannot access the file because it is being used by another 
 process.
   at 
 sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86) 
 ~[na:1.7.0_45]
   at 
 sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97) 
 ~[na:1.7.0_45]
   at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301) 
 ~[na:1.7.0_45]
   at 
 sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287) 
 ~[na:1.7.0_45]
   at java.nio.file.Files.move(Files.java:1345) ~[na:1.7.0_45]
   at 
 org.apache.cassandra.io.util.FileUtils.atomicMoveWithFallback(FileUtils.java:184)
  ~[main/:na]
   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:166) 
 ~[main/:na]
   ... 18 common frames omitted
 {code}





[jira] [Commented] (CASSANDRA-9064) [LeveledCompactionStrategy] cqlsh can't run cql produced by its own describe table statement

2015-06-19 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14593404#comment-14593404
 ] 

Benjamin Lerer commented on CASSANDRA-9064:
---

Before {{LeveledCompactionStrategy}} was implemented, the {{min_threshold}} 
and {{max_threshold}} options were considered mandatory. Because of that, they 
are always set in the {{schema_columnfamilies}} table, which is used by the 
python driver to build the describe table statement. 

This will be changed by CASSANDRA-6717 in 3.0. For versions prior to 3.0, the 
only way to fix it is to add some filtering logic to the python driver.
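The filtering the driver needs is simple; here is a hypothetical sketch of the 
idea (in Java for illustration, although the actual fix belongs in the python 
driver, per PYTHON-352; the helper and option names are assumptions):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class CompactionOptionFilter {
    // Hypothetical helper: options that only the size-tiered strategy
    // understands and that LCS rejects at CREATE TABLE time.
    private static final Set<String> STCS_ONLY =
            Set.of("min_threshold", "max_threshold");

    static Map<String, String> filter(Map<String, String> options) {
        Map<String, String> result = new HashMap<>(options);
        String clazz = options.getOrDefault("class", "");
        if (clazz.endsWith("LeveledCompactionStrategy"))
            result.keySet().removeAll(STCS_ONLY); // drop rejected options
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> opts = new HashMap<>();
        opts.put("class", "org.apache.cassandra.db.compaction.LeveledCompactionStrategy");
        opts.put("min_threshold", "4");
        opts.put("max_threshold", "32");
        System.out.println(filter(opts)); // only the class entry survives
    }
}
```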

 [LeveledCompactionStrategy] cqlsh can't run cql produced by its own describe 
 table statement
 

 Key: CASSANDRA-9064
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9064
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: cassandra 2.1.3 on mac os x
Reporter: Sujeet Gholap
Assignee: Benjamin Lerer
  Labels: cqlsh
 Fix For: 2.1.x


 Here's how to reproduce:
 1) Create a table with LeveledCompactionStrategy
 CREATE keyspace foo WITH REPLICATION = {'class': 'SimpleStrategy', 
 'replication_factor' : 3};
 CREATE TABLE foo.bar (
 spam text PRIMARY KEY
 ) WITH compaction = {'class': 'LeveledCompactionStrategy'};
 2) Describe the table and save the output
 cqlsh -e "describe table foo.bar"
 Output should be something like
 CREATE TABLE foo.bar (
 spam text PRIMARY KEY
 ) WITH bloom_filter_fp_chance = 0.1
 AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
 AND comment = ''
 AND compaction = {'min_threshold': '4', 'class': 
 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy', 
 'max_threshold': '32'}
 AND compression = {'sstable_compression': 
 'org.apache.cassandra.io.compress.LZ4Compressor'}
 AND dclocal_read_repair_chance = 0.1
 AND default_time_to_live = 0
 AND gc_grace_seconds = 864000
 AND max_index_interval = 2048
 AND memtable_flush_period_in_ms = 0
 AND min_index_interval = 128
 AND read_repair_chance = 0.0
 AND speculative_retry = '99.0PERCENTILE';
 3) Save the output to repro.cql
 4) Drop the table foo.bar
 cqlsh -e "drop table foo.bar"
 5) Run the create table statement we saved
 cqlsh -f repro.cql
 6) Expected: normal execution without an error
 7) Reality:
 ConfigurationException: <ErrorMessage code=2300 [Query invalid because of 
 configuration issue] message="Properties specified [min_threshold, 
 max_threshold] are not understood by LeveledCompactionStrategy">





[jira] [Commented] (CASSANDRA-8535) java.lang.RuntimeException: Failed to rename XXX to YYY

2015-06-19 Thread Andreas Schnitzerling (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14593421#comment-14593421
 ] 

Andreas Schnitzerling commented on CASSANDRA-8535:
--

I am trying to fix it with two finalizers: one in SequentialWriter and one in 
RandomAccessReader. The latter helped in C* 2.0.x. On one node I run both 
finalizers with file tracing + referenceCount, and on another node I use only 
the SequentialWriter finalizer.

 java.lang.RuntimeException: Failed to rename XXX to YYY
 ---

 Key: CASSANDRA-8535
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8535
 Project: Cassandra
  Issue Type: Bug
 Environment: Windows 2008 X64
Reporter: Leonid Shalupov
Assignee: Joshua McKenzie
  Labels: Windows
 Fix For: 2.1.5

 Attachments: 8535_v1.txt, 8535_v2.txt, 8535_v3.txt


 {code}
 java.lang.RuntimeException: Failed to rename 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-tmp-ka-5-Index.db
  to 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-ka-5-Index.db
   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:170) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:154) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.rename(SSTableWriter.java:569) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.rename(SSTableWriter.java:561) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.close(SSTableWriter.java:535) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.finish(SSTableWriter.java:470) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finishAndMaybeThrow(SSTableRewriter.java:349)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finish(SSTableRewriter.java:324)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finish(SSTableRewriter.java:304)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:200)
  ~[main/:na]
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:75)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:226)
  ~[main/:na]
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_45]
   at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 ~[na:1.7.0_45]
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_45]
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_45]
   at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
 Caused by: java.nio.file.FileSystemException: 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-tmp-ka-5-Index.db
  -> 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-ka-5-Index.db:
  The process cannot access the file because it is being used by another 
 process.
   at 
 sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86) 
 ~[na:1.7.0_45]
   at 
 sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97) 
 ~[na:1.7.0_45]
   at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301) 
 ~[na:1.7.0_45]
   at 
 sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287) 
 ~[na:1.7.0_45]
   at java.nio.file.Files.move(Files.java:1345) ~[na:1.7.0_45]
   at 
 org.apache.cassandra.io.util.FileUtils.atomicMoveWithFallback(FileUtils.java:184)
  ~[main/:na]
   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:166) 
 ~[main/:na]
   ... 18 common frames omitted
 {code}
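The failure is Windows-specific: a file cannot be renamed while any handle to it is still open. The close-before-rename pattern under discussion can be sketched in Python (illustrative only; not the Cassandra code itself):

```python
# Illustrative sketch: close every handle before the rename. os.replace
# is Python's atomic-move primitive, roughly analogous in spirit to the
# FileUtils.atomicMoveWithFallback call in the trace above.
import os
import tempfile

def finish_and_rename(tmp_path, final_path, data):
    f = open(tmp_path, "wb")
    f.write(data)
    f.close()  # must close BEFORE renaming, or Windows raises an error
    os.replace(tmp_path, final_path)  # atomic where the platform allows it

workdir = tempfile.mkdtemp()
tmp = os.path.join(workdir, "system-schema_keyspaces-tmp-ka-5-Index.db")
final = os.path.join(workdir, "system-schema_keyspaces-ka-5-Index.db")
finish_and_rename(tmp, final, b"index bytes")
print(os.path.exists(final), os.path.exists(tmp))
```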





[jira] [Created] (CASSANDRA-9623) Added column does not sort as the last column

2015-06-19 Thread Marcin Pietraszek (JIRA)
Marcin Pietraszek created CASSANDRA-9623:


 Summary: Added column does not sort as the last column
 Key: CASSANDRA-9623
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9623
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcin Pietraszek


After adding new machines to the existing cluster, running cleanup on one of 
the tables ends with:

{noformat}
ERROR [CompactionExecutor:1015] 2015-06-19 11:24:05,038 CassandraDaemon.java 
(line 199) Exception in thread Thread[CompactionExecutor:1015,1,main]
java.lang.AssertionError: Added column does not sort as the last column
at 
org.apache.cassandra.db.ArrayBackedSortedColumns.addColumn(ArrayBackedSortedColumns.java:116)
at org.apache.cassandra.db.ColumnFamily.addColumn(ColumnFamily.java:121)
at org.apache.cassandra.db.ColumnFamily.addAtom(ColumnFamily.java:155)
at 
org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:186)
at 
org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:98)
at 
org.apache.cassandra.db.compaction.PrecompactedRow.<init>(PrecompactedRow.java:85)
at 
org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:196)
at 
org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:74)
at 
org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:55)
at 
org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:115)
at 
org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:98)
at 
com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at 
com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at 
org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:161)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
at 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
at 
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{noformat}

We're using patched 2.0.13-190ef4f





[jira] [Commented] (CASSANDRA-9618) Consider deprecating sstable2json/json2sstable in 2.2

2015-06-19 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14593349#comment-14593349
 ] 

Sylvain Lebresne commented on CASSANDRA-9618:
-

 As said in the description, I think we should at least deprecate json2sstable. 
 Regarding sstable2json, my preference would be to replace it with a tool that 
 makes it clear that the purpose is to debug sstables, and that is tailored to 
 that need. In particular, even though we could preserve a JSON output for the 
 sake of tools that would want to consume the output, I suspect that we can do 
 something more readable for human beings (and having multiple outputs wouldn't 
 be that hard if we design it correctly). I also think it could be useful to 
 have the output optionally include the offset in the file of the different 
 elements (always for the sake of debugging), etc. We could do all of this 
 and still call the tool sstable2json, but if it's gonna break anyone using the 
 sstable2json tool, I'd rather just have it be a new tool with a more 
 appropriate name (something like 'sstableInspector' or whatnot). Doing so (and 
 thus deprecating sstable2json) also gives us a little bit more flexibility as to 
when we introduce the new tool. I'd love to see that new tool in 3.0 and 
hopefully it will be (it's not all that much work in the first place), but if 
push comes to shove only having it in 3.1/3.2 doesn't sound like the end of the 
world to me.

 Consider deprecating sstable2json/json2sstable in 2.2
 -

 Key: CASSANDRA-9618
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9618
 Project: Cassandra
  Issue Type: Task
Reporter: Sylvain Lebresne
 Fix For: 2.2.0 rc2


 The rationale is explained in CASSANDRA-7464, but to rephrase a bit:
 * json2sstable is pretty much useless; {{CQLSSTableWriter}} is way more 
 flexible if you need to write sstables directly.
 * sstable2json is really only potentially useful for debugging, but it's 
 pretty bad at that (its output is not really all that helpful in modern 
 Cassandra in particular).
 Now, it happens that updating those tools for CASSANDRA-8099, while possible, 
 is a bit involved. So I don't think it makes sense to invest effort in 
 maintaining these tools. So I propose to deprecate them in 2.2 with removal 
 in 3.0.
 I'll note that having a tool to help debugging sstable can be useful, but I 
 propose to add a tool for that purpose with CASSANDRA-7464.





[jira] [Created] (CASSANDRA-9622) count/max/min aggregates not created for the blob type

2015-06-19 Thread Aleksey Yeschenko (JIRA)
Aleksey Yeschenko created CASSANDRA-9622:


 Summary: count/max/min aggregates not created for the blob type
 Key: CASSANDRA-9622
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9622
 Project: Cassandra
  Issue Type: Bug
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 2.2.0 rc2


The static block in {{Functions}} piggy-backs on the type conversions 
declaration loop, and the latter skips the blob type. As a result, the count, 
max, and min functions are not being defined for {{blob}}.





[jira] [Commented] (CASSANDRA-9448) Metrics should use up to date nomenclature

2015-06-19 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14593213#comment-14593213
 ] 

Stefania commented on CASSANDRA-9448:
-

So far here is what I found:

- Groups:
-- ColumnFamily -> Table

- Types:
-- ColumnFamily -> Table
-- IndexColumnFamily -> IndexTable

- Metrics:
-- EstimatedRowSizeHistogram -> EstimatedPartitionSizeHistogram
-- EstimatedRowCount -> EstimatedPartitionCount
-- MinRowSize -> MinPartitionSize
-- MaxRowSize -> MaxPartitionSize
-- MeanRowSize -> MeanPartitionSize
-- RowCacheHit -> PartitionCacheHit
-- RowCacheHitOutOfRange -> PartitionCacheHitOutOfRange
-- RowCacheMiss -> PartitionCacheMiss

I also renamed the related classes and methods.
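For dashboards or scripts that consume these metrics by name, the renames above can be captured in a simple translation table. A sketch (hypothetical helper; the final names are whatever the committed patch ships):

```python
# Hypothetical old-name -> new-name table for the renames listed above.
RENAMED_METRICS = {
    "EstimatedRowSizeHistogram": "EstimatedPartitionSizeHistogram",
    "EstimatedRowCount": "EstimatedPartitionCount",
    "MinRowSize": "MinPartitionSize",
    "MaxRowSize": "MaxPartitionSize",
    "MeanRowSize": "MeanPartitionSize",
    "RowCacheHit": "PartitionCacheHit",
    "RowCacheHitOutOfRange": "PartitionCacheHitOutOfRange",
    "RowCacheMiss": "PartitionCacheMiss",
}

def translate(metric_name):
    """Map a pre-3.0 metric name to its renamed form, or return it unchanged."""
    return RENAMED_METRICS.get(metric_name, metric_name)

print(translate("MinRowSize"))       # renamed metric maps to its new name
print(translate("SSTablesPerRead"))  # unrenamed metrics pass through
```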

 Metrics should use up to date nomenclature
 --

 Key: CASSANDRA-9448
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9448
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Sam Tunnicliffe
Assignee: Stefania
  Labels: docs-impacting
 Fix For: 3.0 beta 1


 There are a number of exposed metrics that currently are named using the old 
 nomenclature of columnfamily and rows (meaning partitions).
 It would be good to audit all metrics and update any names to match what they 
 actually represent; we should probably do that in a single sweep to avoid a 
 confusing mixture of old and new terminology. 
 As we'd need to do this in a major release, I've initially set the fixver for 
 3.0 beta1.





[jira] [Commented] (CASSANDRA-9622) count/max/min aggregates not created for the blob type

2015-06-19 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14593344#comment-14593344
 ] 

Aleksey Yeschenko commented on CASSANDRA-9622:
--

Fixed in https://github.com/iamaleksey/cassandra/commits/9622-2.2.

cassci results to be added later.


 count/max/min aggregates not created for the blob type
 --

 Key: CASSANDRA-9622
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9622
 Project: Cassandra
  Issue Type: Bug
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 2.2.0 rc2


 The static block in {{Functions}} piggy-backs on the type conversions 
 declaration loop, and the latter skips the blob type. As a result, the count, 
 max, and min functions are not being defined for {{blob}}.





[jira] [Commented] (CASSANDRA-8099) Refactor and modernize the storage engine

2015-06-19 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14593359#comment-14593359
 ] 

Sylvain Lebresne commented on CASSANDRA-8099:
-

You're right, it's not easy to fix at all. However, allowing such disorder 
officially crosses the boundaries of my comfort zone, even if it could work 
as of the current code.

So I'm starting to warm up to the idea of introducing new boundary types of 
marker (which we'll need 2 of: excl_close-incl_open and incl_close-excl_open). 
Provided we make sure iterators *must* use those boundaries when appropriate, I 
think we dodge that problem. It will complicate things somewhat, but given the 
other benefits discussed above, maybe that's the better solution.

 Refactor and modernize the storage engine
 -

 Key: CASSANDRA-8099
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8099
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 3.0 beta 1

 Attachments: 8099-nit


 The current storage engine (which for this ticket I'll loosely define as the 
 code implementing the read/write path) is suffering from old age. One of the 
 main problems is that the only structure it deals with is the cell, which 
 completely ignores the higher-level CQL structure that groups cells into 
 (CQL) rows.
 This leads to many inefficiencies, like the fact that during a read we have 
 to group cells multiple times (to count on the replica, then to count on the 
 coordinator, then to produce the CQL result set) because we forget about the 
 grouping right away each time (so lots of useless cell name comparisons in 
 particular). But beyond inefficiencies, having to manually recreate the CQL 
 structure every time we need it for something is hindering new features and 
 makes the code more complex than it should be.
 Said storage engine also has tons of technical debt. To pick an example, the 
 fact that during range queries we update {{SliceQueryFilter.count}} is pretty 
 hacky and error prone. Or the overly complex ways {{AbstractQueryPager}} has 
 to go through simply to remove the last query result.
 So I want to bite the bullet and modernize this storage engine. I propose to 
 do 2 main things:
 # Make the storage engine more aware of the CQL structure. In practice, 
 instead of having partitions be a simple iterable map of cells, they should be 
 an iterable list of rows (each being itself composed of per-column cells, 
 though obviously not exactly the same kind of cell we have today).
 # Make the engine more iterative. What I mean here is that in the read path, 
 we end up reading all cells into memory (we put them in a ColumnFamily 
 object), but there is really no reason to. If instead we were working with 
 iterators all the way through, we could get to a point where we're basically 
 transferring data from disk to the network, and we should be able to reduce 
 GC substantially.
 Please note that such a refactor should provide some performance improvements 
 right off the bat, but that's not its primary goal either. Its primary goal 
 is to simplify the storage engine and add abstractions that are better suited 
 to further optimizations.





[jira] [Comment Edited] (CASSANDRA-8535) java.lang.RuntimeException: Failed to rename XXX to YYY

2015-06-19 Thread Andreas Schnitzerling (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14589635#comment-14589635
 ] 

Andreas Schnitzerling edited comment on CASSANDRA-8535 at 6/19/15 8:59 AM:
---

On Windows 7, files and directories cannot be renamed while they're still open, 
so we need to close them first:
{code:title=src/java/org/apache/cassandra/io/sstable/SSTableWriter.java}
private Pair<Descriptor, StatsMetadata> close(FinishType type, long repairedAt)

Descriptor descriptor = this.descriptor;
if (type.isFinal)
{
dataFile.writeFullChecksum(descriptor);
writeMetadata(descriptor, metadataComponents);
// save the table of components
SSTable.appendTOC(descriptor, components);
+++ dataFile.close();
descriptor = rename(descriptor, components);
}
{code}
I'm using Win7-32bit and got the same stack trace. So 32bit or 64bit is not the 
issue.


was (Author: andie78):
On Windows 7, files and directories cannot be renamed while they're still open, 
so we need to close them first:
{code:title=src/java/org/apache/cassandra/io/sstable/SSTableWriter.java}
private Pair<Descriptor, StatsMetadata> close(FinishType type, long repairedAt)

Descriptor descriptor = this.descriptor;
if (type.isFinal)
{
dataFile.writeFullChecksum(descriptor);
writeMetadata(descriptor, metadataComponents);
// save the table of components
SSTable.appendTOC(descriptor, components);
+++ dataFile.close();
descriptor = rename(descriptor, components);
}
{code}
Can somebody check/test and maybe commit it? Thx.

 java.lang.RuntimeException: Failed to rename XXX to YYY
 ---

 Key: CASSANDRA-8535
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8535
 Project: Cassandra
  Issue Type: Bug
 Environment: Windows 2008 X64
Reporter: Leonid Shalupov
Assignee: Joshua McKenzie
  Labels: Windows
 Fix For: 2.1.5

 Attachments: 8535_v1.txt, 8535_v2.txt, 8535_v3.txt


 {code}
 java.lang.RuntimeException: Failed to rename 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-tmp-ka-5-Index.db
  to 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-ka-5-Index.db
   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:170) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:154) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.rename(SSTableWriter.java:569) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.rename(SSTableWriter.java:561) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.close(SSTableWriter.java:535) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.finish(SSTableWriter.java:470) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finishAndMaybeThrow(SSTableRewriter.java:349)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finish(SSTableRewriter.java:324)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finish(SSTableRewriter.java:304)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:200)
  ~[main/:na]
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:75)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:226)
  ~[main/:na]
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_45]
   at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 ~[na:1.7.0_45]
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_45]
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_45]
   at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
 Caused by: java.nio.file.FileSystemException: 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-tmp-ka-5-Index.db
  -> 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-ka-5-Index.db:
  The 

[jira] [Commented] (CASSANDRA-9620) Write timeout error for batch request gives wrong consistency

2015-06-19 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14593272#comment-14593272
 ] 

Stefan Podkowinski commented on CASSANDRA-9620:
---

I think this happens when the coordinator gets a timeout while trying to 
write the batch log. Writing the batch log is always done using CL ONE. 
This can be verified by checking the {{writeType}} flag in the exception, which 
should be {{BATCH_LOG}} in your error case. 
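A driver-side retry policy can use that flag to distinguish a batch-log timeout from a timeout applying the batch itself. A rough sketch of the decision logic (names are illustrative, not any particular driver's API):

```python
# Illustrative retry decision keyed on writeType. BATCH_LOG means the
# coordinator timed out writing the batch log (always at CL ONE), so the
# batch was never applied and a retry is safe.
def on_write_timeout(write_type, retries_remaining):
    if write_type == "BATCH_LOG" and retries_remaining > 0:
        return "RETRY"  # batch not applied yet; retrying cannot double-apply
    return "RETHROW"    # surface the timeout to the application

print(on_write_timeout("BATCH_LOG", 1))
print(on_write_timeout("BATCH", 1))
```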


 Write timeout error for batch request gives wrong consistency
 -

 Key: CASSANDRA-9620
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9620
 Project: Cassandra
  Issue Type: Bug
Reporter: Jorge Bay

 In case there is a write timeout error when executing a batch with a 
 consistency higher than ONE, the error information returned is incorrect:
 {{Cassandra timeout during write query at consistency ONE (0 replica(s) 
 acknowledged the write over 1 required)}}
 *Consistency is always ONE and the required acks count is always 1*.
 To reproduce (pseudo-code):
 {code:java}
 //create a 2 node cluster
 createTwoNodeCluster();
 var session = cluster.connect();
 session.execute("create keyspace ks1 WITH replication = {'class': 
 'SimpleStrategy', 'replication_factor' : 2}");
 session.execute("create table ks1.tbl1 (k text primary key, i int)");
 session.execute(new SimpleStatement("INSERT INTO ks1.tbl1 (k, i) VALUES 
 ('one', 1)").setConsistencyLevel(ConsistencyLevel.ALL));
 //Stop the second node
 stopNode2();
 var batch = new BatchStatement();
 batch.add(new SimpleStatement("INSERT INTO ks1.tbl1 (k, i) VALUES ('two', 
 2)"));
 batch.add(new SimpleStatement("INSERT INTO ks1.tbl1 (k, i) VALUES ('three', 
 3)"));
 //This line will throw a WriteTimeoutException with a wrong consistency
 //Caused by an error response from Cassandra
 session.execute(batch.setConsistencyLevel(ConsistencyLevel.ALL));
 {code}
 Wrong error information could affect driver retry policies.





[jira] [Commented] (CASSANDRA-9606) this query is not supported in new version

2015-06-19 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14593172#comment-14593172
 ] 

Benjamin Lerer commented on CASSANDRA-9606:
---

We discussed it offline with Sylvain and it is clear that it is confusing. I 
will make them equivalent.

 this query is not supported in new version
 --

 Key: CASSANDRA-9606
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9606
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: cassandra 2.1.6
 jdk 1.7.0_55
Reporter: zhaoyan
Assignee: Benjamin Lerer

 Background:
 1. Create a table:
 {code}
 CREATE TABLE test (
 a int,
 b int,
 c int,
   d int,
 PRIMARY KEY (a, b, c)
 );
 {code}
 2. Query by a=1 and b < 6:
 {code}
 select * from test where a=1 and b < 6;
  a | b | c | d
 ---+---+---+---
  1 | 3 | 1 | 2
  1 | 3 | 2 | 2
  1 | 3 | 4 | 2
  1 | 3 | 5 | 2
  1 | 4 | 4 | 2
  1 | 5 | 5 | 2
 (6 rows)
 {code}
 3. Query by page.
 first page:
 {code}
 select * from test where a=1 and b < 6 limit 2;
  a | b | c | d
 ---+---+---+---
  1 | 3 | 1 | 2
  1 | 3 | 2 | 2
 (2 rows)
 {code}
 second page:
 {code}
 select * from test where a=1 and b < 6 and (b, c) > (3, 2) limit 2;
  a | b | c | d
 ---+---+---+---
  1 | 3 | 4 | 2
  1 | 3 | 5 | 2
 (2 rows)
 {code}
 last page:
 {code}
 select * from test where a=1 and b < 6 and (b, c) > (3, 5) limit 2;
  a | b | c | d
 ---+---+---+---
  1 | 4 | 4 | 2
  1 | 5 | 5 | 2
 (2 rows)
 {code}
 question:
 This query-by-page works in Cassandra 2.0.8, 
 but it is not supported in the latest version, 2.1.6. 
 When executing:
 {code}
 select * from test where a=1 and b < 6 and (b, c) > (3, 2) limit 2;
 {code}
 the following error message is returned:
 InvalidRequest: code=2200 [Invalid query] message="Column b cannot have 
 both tuple-notation inequalities and single-column inequalities: (b, c) > (3, 
 2)"
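The paging scheme in the report compares the clustering columns (b, c) lexicographically, which Python tuples also do, so the rejected page boundary can be checked client-side. A small simulation over the example rows:

```python
# Client-side emulation of the rejected query
#   a=1 AND b < 6 AND (b, c) > (3, 2) LIMIT 2
# Python tuples compare lexicographically, just as (b, c) does in CQL.
rows = [  # (a, b, c, d) rows from the example table
    (1, 3, 1, 2), (1, 3, 2, 2), (1, 3, 4, 2),
    (1, 3, 5, 2), (1, 4, 4, 2), (1, 5, 5, 2),
]

def page(rows, last_bc, limit=2):
    """Return the next page of rows strictly after the (b, c) boundary."""
    hits = [r for r in rows
            if r[0] == 1 and r[1] < 6 and (r[1], r[2]) > last_bc]
    return hits[:limit]

print(page(rows, (3, 2)))  # second page: (b, c) = (3, 4) and (3, 5)
print(page(rows, (3, 5)))  # last page:   (b, c) = (4, 4) and (5, 5)
```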





[jira] [Commented] (CASSANDRA-8099) Refactor and modernize the storage engine

2015-06-19 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14593271#comment-14593271
 ] 

Branimir Lambov commented on CASSANDRA-8099:


Thinking about this a bit more, I see that this is very difficult to fix. When 
the reducer issues a pair, one of the markers is out of its place in the stream, 
so we would need to delay the stream to be able to place it correctly. This 
would have a non-trivial performance impact.

Instead, I think we should officially permit this kind of disorder (e.g. 
{{39\[71\] 39=\[7\] 39\[7\] 39=\[8\]}} from above where {{39=\[7\] 
39\[7\]}} is invalid and covered by the outer pair of markers) in the 
unfiltered stream and remove the invalid ranges in the compaction writer. The 
merge algorithm is able to deal with such ranges correctly and filtering just 
removes them. We have to document it well and make sure the relevant code is 
tested with examples of this.

Even without removal in the compaction writer, the only kind of trouble I can 
see such a range introducing is seeking to the wrong marker in a binary / 
indexed search, but this should be ok as the correct marker is certain to 
follow before any live data.

 Refactor and modernize the storage engine
 -

 Key: CASSANDRA-8099
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8099
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 3.0 beta 1

 Attachments: 8099-nit


 The current storage engine (which for this ticket I'll loosely define as the 
 code implementing the read/write path) is suffering from old age. One of the 
 main problems is that the only structure it deals with is the cell, which 
 completely ignores the higher-level CQL structure that groups cells into 
 (CQL) rows.
 This leads to many inefficiencies, like the fact that during a read we have 
 to group cells multiple times (to count on the replica, then to count on the 
 coordinator, then to produce the CQL result set) because we forget about the 
 grouping right away each time (so lots of useless cell name comparisons in 
 particular). But beyond inefficiencies, having to manually recreate the CQL 
 structure every time we need it for something is hindering new features and 
 makes the code more complex than it should be.
 Said storage engine also has tons of technical debt. To pick an example, the 
 fact that during range queries we update {{SliceQueryFilter.count}} is pretty 
 hacky and error prone. Or the overly complex ways {{AbstractQueryPager}} has 
 to go through simply to remove the last query result.
 So I want to bite the bullet and modernize this storage engine. I propose to 
 do 2 main things:
 # Make the storage engine more aware of the CQL structure. In practice, 
 instead of having partitions be a simple iterable map of cells, they should be 
 an iterable list of rows (each being itself composed of per-column cells, 
 though obviously not exactly the same kind of cell we have today).
 # Make the engine more iterative. What I mean here is that in the read path, 
 we end up reading all cells into memory (we put them in a ColumnFamily 
 object), but there is really no reason to. If instead we were working with 
 iterators all the way through, we could get to a point where we're basically 
 transferring data from disk to the network, and we should be able to reduce 
 GC substantially.
 Please note that such a refactor should provide some performance improvements 
 right off the bat, but that's not its primary goal either. Its primary goal 
 is to simplify the storage engine and add abstractions that are better suited 
 to further optimizations.





[jira] [Comment Edited] (CASSANDRA-9448) Metrics should use up to date nomenclature

2015-06-19 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14593121#comment-14593121
 ] 

Stefania edited comment on CASSANDRA-9448 at 6/19/15 8:36 AM:
--

Regarding the documentation, is [this|http://wiki.apache.org/cassandra/Metrics] 
the only page listing all metrics?

Other than nodetool tablestats and tablehistograms, are there other ways to test 
the metrics?


was (Author: stefania):
Regarding the documentation, is [this|http://wiki.apache.org/cassandra/Metrics] 
the only page listing all metrics?

 Metrics should use up to date nomenclature
 --

 Key: CASSANDRA-9448
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9448
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Sam Tunnicliffe
Assignee: Stefania
  Labels: docs-impacting
 Fix For: 3.0 beta 1


 There are a number of exposed metrics that currently are named using the old 
 nomenclature of columnfamily and rows (meaning partitions).
 It would be good to audit all metrics and update any names to match what they 
 actually represent; we should probably do that in a single sweep to avoid a 
 confusing mixture of old and new terminology. 
 As we'd need to do this in a major release, I've initially set the fixver for 
 3.0 beta1.





[jira] [Commented] (CASSANDRA-9490) testcase failure : testWithDeletes(org.apache.cassandra.io.sstable.SSTableMetadataTest):

2015-06-19 Thread Pallavi Bhardwaj (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14593227#comment-14593227
 ] 

Pallavi Bhardwaj commented on CASSANDRA-9490:
-

Hi,

Any update/suggestion regarding this issue ?

Thanks.

 testcase failure : 
 testWithDeletes(org.apache.cassandra.io.sstable.SSTableMetadataTest):
 

 Key: CASSANDRA-9490
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9490
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
 Environment: Red Hat enterprise Linux ; Arch : PPC64le
Reporter: Pallavi Bhardwaj
 Fix For: 2.1.x


 While executing the unit test cases, I observed the following failure,
 [junit] Testcase: 
 testWithDeletes(org.apache.cassandra.io.sstable.SSTableMetadataTest): 
 FAILED
 [junit] expected:<-2.038078123E9> but was:<1.432716678E9>
 [junit] junit.framework.AssertionFailedError: expected:<-2.038078123E9> 
 but was:<1.432716678E9>
 [junit] at 
 org.apache.cassandra.io.sstable.SSTableMetadataTest.testWithDeletes(SSTableMetadataTest.java:156)
 [junit]





[jira] [Created] (CASSANDRA-9621) Repair of the SystemDistributed keyspace creates a non-trivial amount of memory pressure

2015-06-19 Thread Sylvain Lebresne (JIRA)
Sylvain Lebresne created CASSANDRA-9621:
---

 Summary: Repair of the SystemDistributed keyspace creates a 
non-trivial amount of memory pressure
 Key: CASSANDRA-9621
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9621
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Marcus Eriksson
Priority: Minor


When a repair without any particular option is triggered, the 
{{SystemDistributed}} keyspace is repaired for all ranges, and in particular the 
{{repair_history}} table. For every range, that table is written and flushed 
(as part of normal repair), meaning that every range triggers the creation of a 
new 1MB slab region (this also triggers quite a few compactions that also write 
and flush {{compaction_progress}} at every start and end).

I don't know how much of a big deal this will be in practice, but I wonder if 
it's really useful to repair the {{repair_*}} tables by default. Maybe we 
could exclude the SystemDistributed keyspace from default repairs and only 
repair it if asked explicitly?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9621) Repair of the SystemDistributed keyspace creates a non-trivial amount of memory pressure

2015-06-19 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-9621:

Fix Version/s: 2.2.0 rc2

 Repair of the SystemDistributed keyspace creates a non-trivial amount of 
 memory pressure
 

 Key: CASSANDRA-9621
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9621
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Marcus Eriksson
Priority: Minor
 Fix For: 2.2.0 rc2


 When a repair without any particular option is triggered, the 
 {{SystemDistributed}} keyspace is repaired for all ranges, and in particular 
 the {{repair_history}} table. For every range, that table is written and 
 flushed (as part of normal repair), meaning that every range triggers the 
 creation of a new 1MB slab region (this also triggers quite a few compactions, 
 which also write and flush {{compaction_progress}} at every start and end).
 I don't know how much of a big deal this will be in practice, but I wonder if 
 it's really useful to repair the {{repair_*}} tables by default. Maybe we 
 could exclude the SystemDistributed keyspace from default repairs and only 
 repair it if asked explicitly?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9620) Write timeout error for batch request gives wrong consistency

2015-06-19 Thread Jorge Bay (JIRA)
Jorge Bay created CASSANDRA-9620:


 Summary: Write timeout error for batch request gives wrong 
consistency
 Key: CASSANDRA-9620
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9620
 Project: Cassandra
  Issue Type: Bug
Reporter: Jorge Bay


In case there is a write timeout error when executing a batch with a 
consistency higher than ONE, the error information returned is incorrect:

{{Cassandra timeout during write query at consistency ONE (0 replica(s) 
acknowledged the write over 1 required)}}

*Consistency is always ONE and required acks is always 1*.

To reproduce (pseudo-code):
{code:java}
//create a 2 node cluster
createTwoNodeCluster();
var session = cluster.connect();
session.execute("create keyspace ks1 WITH replication = {'class': 
'SimpleStrategy', 'replication_factor' : 2}");
session.execute("create table ks1.tbl1 (k text primary key, i int)");
session.execute(new SimpleStatement("INSERT INTO ks1.tbl1 (k, i) VALUES ('one', 
1)").setConsistencyLevel(ConsistencyLevel.ALL));
//Stop the second node
stopNode2();
var batch = new BatchStatement();
batch.add(new SimpleStatement("INSERT INTO ks1.tbl1 (k, i) VALUES ('two', 2)"));
batch.add(new SimpleStatement("INSERT INTO ks1.tbl1 (k, i) VALUES ('three', 
3)"));
//This line will throw a WriteTimeoutException with a wrong consistency
//Caused by an error response from Cassandra
session.execute(batch.setConsistencyLevel(ConsistencyLevel.ALL));
{code}

Wrong error information could affect driver retry policies.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9620) Write timeout error for batch request gives wrong consistency

2015-06-19 Thread Jorge Bay (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14593277#comment-14593277
 ] 

Jorge Bay commented on CASSANDRA-9620:
--

You are right, it is for the batch log.

Maybe it would be a good idea to adjust the error message (either on the 
Cassandra side or at the driver level) to let users know that what failed was 
the initial batch log write, as the current message can lead users to think 
they are not sending the correct consistency level.
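The distinction described above is visible to client drivers: the write type that accompanies a timeout indicates whether the batch log or the batch itself timed out. Below is a minimal, hypothetical sketch of triaging on that write type (the enum and helper are local illustrations, not the actual driver API):

```java
// Illustrative write types, mirroring the categories a driver reports.
enum WriteType { SIMPLE, BATCH, BATCH_LOG, UNLOGGED_BATCH, COUNTER }

class BatchTimeoutTriage {
    // A BATCH_LOG timeout means the coordinator failed to persist the batch
    // log at its *internal* consistency (hence the confusing "ONE ... 1
    // required" message), not that the user-supplied consistency was wrong.
    static boolean isBatchLogTimeout(WriteType writeType) {
        return writeType == WriteType.BATCH_LOG;
    }

    static String explain(WriteType writeType, String requestedConsistency) {
        if (isBatchLogTimeout(writeType))
            return "Timeout writing the batch log (internal consistency); "
                 + "requested consistency was " + requestedConsistency;
        return "Timeout at requested consistency " + requestedConsistency;
    }
}
```

A retry policy could use such a check to retry batch-log timeouts (the batch was never applied) while being more conservative for other write types.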

 Write timeout error for batch request gives wrong consistency
 -

 Key: CASSANDRA-9620
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9620
 Project: Cassandra
  Issue Type: Bug
Reporter: Jorge Bay

 In case there is a write timeout error when executing a batch with a 
 consistency higher than ONE, the error information returned is incorrect:
 {{Cassandra timeout during write query at consistency ONE (0 replica(s) 
 acknowledged the write over 1 required)}}
 *Consistency is always ONE and required acks is always 1*.
 To reproduce (pseudo-code):
 {code:java}
 //create a 2 node cluster
 createTwoNodeCluster();
 var session = cluster.connect();
 session.execute("create keyspace ks1 WITH replication = {'class': 
 'SimpleStrategy', 'replication_factor' : 2}");
 session.execute("create table ks1.tbl1 (k text primary key, i int)");
 session.execute(new SimpleStatement("INSERT INTO ks1.tbl1 (k, i) VALUES 
 ('one', 1)").setConsistencyLevel(ConsistencyLevel.ALL));
 //Stop the second node
 stopNode2();
 var batch = new BatchStatement();
 batch.add(new SimpleStatement("INSERT INTO ks1.tbl1 (k, i) VALUES ('two', 
 2)"));
 batch.add(new SimpleStatement("INSERT INTO ks1.tbl1 (k, i) VALUES ('three', 
 3)"));
 //This line will throw a WriteTimeoutException with a wrong consistency
 //Caused by an error response from Cassandra
 session.execute(batch.setConsistencyLevel(ConsistencyLevel.ALL));
 {code}
 Wrong error information could affect driver retry policies.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9622) count/max/min aggregates not created for the blob type

2015-06-19 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14593475#comment-14593475
 ] 

Robert Stupp edited comment on CASSANDRA-9622 at 6/19/15 2:47 PM:
--

+1 - [if VARCHAR or TEXT is skipped in the 2nd 
loop|https://github.com/iamaleksey/cassandra/commit/50bd850819ca63d77d775fc5782383f3b33fc877#diff-839e1959dccaf59098424ba9f7ad788eR77]



was (Author: snazy):
+1 - 
[https://github.com/iamaleksey/cassandra/commit/50bd850819ca63d77d775fc5782383f3b33fc877#diff-839e1959dccaf59098424ba9f7ad788eR77|if
 VARCHAR or TEXT is skipped in the 2nd loop]


 count/max/min aggregates not created for the blob type
 --

 Key: CASSANDRA-9622
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9622
 Project: Cassandra
  Issue Type: Bug
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 2.2.0 rc2


 The static block in {{Functions}} piggy-backs on the type conversions 
 declaration loop, and the latter skips the blob type. As a result, count, max, 
 and min functions are not being defined for {{blob}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/2] cassandra git commit: Merge branch 'cassandra-2.2' into trunk

2015-06-19 Thread aleksey
Merge branch 'cassandra-2.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1af3c3b9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1af3c3b9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1af3c3b9

Branch: refs/heads/trunk
Commit: 1af3c3b98961df39aff3159d6a929e3de9194542
Parents: e246ec2 716b253
Author: Aleksey Yeschenko alek...@apache.org
Authored: Fri Jun 19 17:56:33 2015 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Jun 19 17:56:33 2015 +0300

--
 CHANGES.txt |  1 +
 .../cassandra/cql3/functions/Functions.java | 38 +++-
 2 files changed, 30 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1af3c3b9/CHANGES.txt
--
diff --cc CHANGES.txt
index 9b4e474,4886850..c631c8d
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,15 -1,5 +1,16 @@@
 +3.0:
 + * Add nodetool command to replay batchlog (CASSANDRA-9547)
 + * Make file buffer cache independent of paths being read (CASSANDRA-8897)
 + * Remove deprecated legacy Hadoop code (CASSANDRA-9353)
 + * Decommissioned nodes will not rejoin the cluster (CASSANDRA-8801)
 + * Change gossip stabilization to use endpoit size (CASSANDRA-9401)
 + * Change default garbage collector to G1 (CASSANDRA-7486)
 + * Populate TokenMetadata early during startup (CASSANDRA-9317)
 + * undeprecate cache recentHitRate (CASSANDRA-6591)
 +
 +
  2.2
+  * Fix mixing min, max, and count aggregates for blob type (CASSANRA-9622)
   * Rename class for DATE type in Java driver (CASSANDRA-9563)
   * Duplicate compilation of UDFs on coordinator (CASSANDRA-9475)
   * Fix connection leak in CqlRecordWriter (CASSANDRA-9576)



[jira] [Commented] (CASSANDRA-9132) resumable_bootstrap_test can hang

2015-06-19 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14593544#comment-14593544
 ] 

Jim Witschey commented on CASSANDRA-9132:
-

This still fails periodically on trunk while waiting for {{Starting listening 
for CQL clients}} 
[cassci|http://cassci.datastax.com/view/trunk/job/trunk_dtest/lastCompletedBuild/testReport/bootstrap_test/TestBootstrap/resumable_bootstrap_test/history/].
 I can open a new ticket if you think this is a new bug, but for the moment I'm 
reopening.

 resumable_bootstrap_test can hang
 -

 Key: CASSANDRA-9132
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9132
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Tyler Hobbs
Assignee: Yuki Morishita
 Fix For: 2.1.6, 2.0.16, 2.2.0 rc1

 Attachments: 9132-2.0.txt, 9132-followup-2.0.txt


 The {{bootstrap_test.TestBootstrap.resumable_bootstrap_test}} can hang 
 sometimes.  It looks like the following line never completes:
 {noformat}
 node3.watch_log_for("Listening for thrift clients...")
 {noformat}
 I'm not familiar enough with the recent bootstrap changes to know why that's 
 not happening.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CASSANDRA-6588) Add a 'NO EMPTY RESULTS' filter to SELECT

2015-06-19 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne reopened CASSANDRA-6588:
-

Reopening since CASSANDRA-7085 doesn't work (see my comment there).

I obviously don't love that we need some new syntax for this, but I do think 
it's the more practical solution (breaking users by changing semantics sounds 
to me like a very bad idea, so having this option sounds better than not having 
it).

 Add a 'NO EMPTY RESULTS' filter to SELECT
 -

 Key: CASSANDRA-6588
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6588
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Priority: Minor

 It is the semantics of CQL that a (CQL) row exists as long as it has at 
 least one non-null column (including the PK columns, which, given that no PK 
 column can be null, means that it's enough to have the PK set for a row to 
 exist). This does mean that the result of
 {noformat}
 CREATE TABLE test (k int PRIMARY KEY, v1 int, v2 int);
 INSERT INTO test(k, v1) VALUES (0, 4);
 SELECT v2 FROM test;
 {noformat}
 must be (and is)
 {noformat}
  v2
 --
  null
 {noformat}
 That fact does mean, however, that when we only select a few columns of a 
 row, we still need to find rows that exist but have no values for the 
 selected columns. Long story short, given how the storage engine works, this 
 means we need to query full (CQL) rows even when only some of the columns are 
 selected, because that's the only way to distinguish between "the row exists 
 but has no value for the selected columns" and "the row doesn't exist". I'll 
 note in particular that, due to CASSANDRA-5762, we unfortunately can't rely 
 on the row marker to optimize that out.
 Now, when you select only a subset of the columns of a row, there are many 
 cases where you don't care about rows that exist but have no value for the 
 columns you requested and are happy to filter those out. So, for those cases, 
 we could provide a new SELECT filter. Outside of the potential convenience 
 (not having to filter empty results client side), one interesting part is 
 that when this filter is provided, we could optimize a bit by only querying 
 the selected columns, since we wouldn't need to return rows that exist but 
 have no values for them.
 For the exact syntax, there are probably a bunch of options. For instance:
 * {{SELECT NON EMPTY(v2, v3) FROM test}}: the vague rationale for putting it 
 in the SELECT part is that such a filter is somewhat in the spirit of 
 DISTINCT. Possibly a bit ugly outside of that.
 * {{SELECT v2, v3 FROM test NO EMPTY RESULTS}} or {{SELECT v2, v3 FROM test 
 NO EMPTY ROWS}} or {{SELECT v2, v3 FROM test NO EMPTY}}: the last one is 
 shorter but maybe a bit less explicit. As for {{RESULTS}} versus {{ROWS}}, 
 the only small objection to {{NO EMPTY ROWS}} could be that it might suggest 
 it is filtering non-existing rows (the fact that we never ever return 
 non-existing rows should hint that that's not what it does, but well...) 
 while we're just filtering empty resultSet rows.
 Of course, if there is a pre-existing SQL syntax for that, it's even better, 
 though a very quick search didn't turn up anything. Other suggestions welcome 
 too.
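Until such a filter exists, callers have to drop empty rows client side, as the description notes. A hedged sketch of that client-side filtering (the {{EmptyRowFilter}} helper is hypothetical; generic maps stand in for driver result rows):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

class EmptyRowFilter {
    // Client-side equivalent of the proposed NO EMPTY filter: drop result
    // rows whose selected (non-PK) columns are all null.
    static List<Map<String, Object>> dropEmpty(List<Map<String, Object>> rows,
                                               List<String> selectedColumns) {
        List<Map<String, Object>> kept = new ArrayList<>();
        for (Map<String, Object> row : rows) {
            boolean allNull = true;
            for (String col : selectedColumns) {
                if (row.get(col) != null) {
                    allNull = false;
                    break;
                }
            }
            if (!allNull)
                kept.add(row);
        }
        return kept;
    }
}
```

Note this only saves the client-side bookkeeping; the server-side optimization (querying only the selected columns) is exactly what the filter would enable and cannot be replicated from the client.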



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9499) Introduce writeVInt method to DataOutputStreamPlus

2015-06-19 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14593474#comment-14593474
 ] 

Benedict commented on CASSANDRA-9499:
-

Looking good. I've pushed some suggestions for the bit fiddling, for both 
clarity and performance, 
[here|https://github.com/belliottsmith/cassandra/tree/C-9499-suggestions]. 
They include efficient size computation, removal of some extraneous operations, 
and what seemed to me clearer method names (please feel free to dispute). I had 
to strip out the debugging statements to figure out what was going on; hope 
that isn't a problem for you.

It's still more computationally involved than I would _like_, and I feel there 
may be some more efficiencies. So I will mull it over a little more, but wanted 
to upload what I had so we didn't tread too much on each other.

Some further suggestions I did not implement: 

* I'm not sure why we need the try/finally block, and I'd prefer we removed it 
- there's nothing that should be able to throw an exception between our setting 
and unsetting the limit.
* The extra reading in NIODataInputStream I would prefer we extract to a 
separate method and force not to be inlined. Since it should be a rare 
occurrence, and when it does happen we incur costs more expensive than this 
inner method invocation, I would prefer to keep the rest of the method as 
tightly packed as possible for icache behaviour.

 Introduce writeVInt method to DataOutputStreamPlus
 --

 Key: CASSANDRA-9499
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9499
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Ariel Weisberg
Priority: Minor
 Fix For: 3.0 beta 1


 CASSANDRA-8099 really could do with a writeVInt method, for both fixing 
 CASSANDRA-9498 but also efficiently encoding timestamp/deletion deltas. It 
 should be possible to make an especially efficient implementation against 
 BufferedDataOutputStreamPlus.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9499) Introduce writeVInt method to DataOutputStreamPlus

2015-06-19 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14593500#comment-14593500
 ] 

Benedict commented on CASSANDRA-9499:
-

and, just riffing here: it looks to me like it might be better to 
{{prepareReadPrimitive(1)}} and grab our {{firstByte}} before we do anything 
else, since that permits us to short-circuit more quickly.

It would also be nice to simply generalise {{readMinimum}}, to choose when it 
throws EOF. We don't buy much from grabbing the limit inline, I don't think.

 Introduce writeVInt method to DataOutputStreamPlus
 --

 Key: CASSANDRA-9499
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9499
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Ariel Weisberg
Priority: Minor
 Fix For: 3.0 beta 1


 CASSANDRA-8099 really could do with a writeVInt method, for both fixing 
 CASSANDRA-9498 but also efficiently encoding timestamp/deletion deltas. It 
 should be possible to make an especially efficient implementation against 
 BufferedDataOutputStreamPlus.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9622) count/max/min aggregates not created for the blob type

2015-06-19 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14593475#comment-14593475
 ] 

Robert Stupp commented on CASSANDRA-9622:
-

+1 - [if VARCHAR or TEXT is skipped in the 2nd 
loop|https://github.com/iamaleksey/cassandra/commit/50bd850819ca63d77d775fc5782383f3b33fc877#diff-839e1959dccaf59098424ba9f7ad788eR77]


 count/max/min aggregates not created for the blob type
 --

 Key: CASSANDRA-9622
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9622
 Project: Cassandra
  Issue Type: Bug
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 2.2.0 rc2


 The static block in {{Functions}} piggy-backs on the type conversions 
 declaration loop, and the latter skips the blob type. As a result, count, max, 
 and min functions are not being defined for {{blob}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9622) count/max/min aggregates not created for the blob type

2015-06-19 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14593475#comment-14593475
 ] 

Robert Stupp edited comment on CASSANDRA-9622 at 6/19/15 2:46 PM:
--

+1 - 
[https://github.com/iamaleksey/cassandra/commit/50bd850819ca63d77d775fc5782383f3b33fc877#diff-839e1959dccaf59098424ba9f7ad788eR77|if
 VARCHAR or TEXT is skipped in the 2nd loop]



was (Author: snazy):
+1 - 
[https://github.com/iamaleksey/cassandra/commit/50bd850819ca63d77d775fc5782383f3b33fc877#diff-839e1959dccaf59098424ba9f7ad788eR77]if
 VARCHAR or TEXT is skipped in the 2nd loop]


 count/max/min aggregates not created for the blob type
 --

 Key: CASSANDRA-9622
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9622
 Project: Cassandra
  Issue Type: Bug
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 2.2.0 rc2


 The static block in {{Functions}} piggy-backs on the type conversions 
 declaration loop, and the latter skips the blob type. As a result, count, max, 
 and min functions are not being defined for {{blob}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9499) Introduce writeVInt method to DataOutputStreamPlus

2015-06-19 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14592307#comment-14592307
 ] 

Ariel Weisberg edited comment on CASSANDRA-9499 at 6/19/15 2:46 PM:


I have the proposed encoding implemented and have the beginnings of an 
efficient implementation for BufferedDataOutputStreamPlus and NIOInputStream. 
There are probably still branches that could be removed as well as bit fiddling 
that could be more efficient and clearer. I am going to try and clean it up, 
but I could use feedback.

I couldn't come up with an efficient implementation for computeUnsignedVIntSize 
(when you haven't encoded it yet).
 
I also created the branch 
https://github.com/aweisberg/cassandra/commits/C-9499-madness where I removed 
Encoded*Stream and changed the serializers to use DataInputPlus which extends 
DataInput to add the varint decoding methods.

I am using 1s for the extension bits. I am also emitting the bytes in little 
endian order although it seems like I would need to do that at least for the 
first byte. I could emit the rest of the bytes in big endian order for the 
getLong(). Right now I have to reverse them because the ByteBuffer is big 
endian. I am wagering it is faster than swapping the settings.


was (Author: aweisberg):
I have the proposed encoding implemented and have the beginnings of an 
efficient implementation for BufferedDataOutputStreamPlus and NIOInputStream. 
There are probably still branches that could be removed as well as bit fiddling 
that could be more efficient and clearer. I am going to try and clean it up, 
but I could use feedback.

I couldn't come up with an efficient implementation for computeUnsignedVIntSize 
(when you haven't encoded it yet).
 
I also created the branch 
https://github.com/aweisberg/cassandra/commits/C-9499-madness where I removed 
Encoded*Stream and changed the serializers to use DataInputPlus which extends 
DataInput to add the varint decoding methods. I haven't rebased that on top of 
C-9499 yet.

I am using 1s for the extension bits. I am also emitting the bytes in little 
endian order although it seems like I would need to do that at least for the 
first byte. I could emit the rest of the bytes in big endian order for the 
getLong(). Right now I have to reverse them because the ByteBuffer is big 
endian. I am wagering it is faster than swapping the settings.
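For the size computation discussed above (knowing how many bytes a value will take before encoding it), one plausible branch-free approach derives the byte count from the position of the highest set bit, capping full 64-bit values at 9 bytes (first byte entirely consumed by extension bits plus 8 payload bytes). This is a sketch under those assumptions, not necessarily the committed implementation:

```java
class VIntSketch {
    // Branch-free size computation for an unsigned vint where the byte count
    // is encoded in leading 1-bits of the first byte (so a 64-bit value fits
    // in 9 bytes, unlike protobuf's 10). The |1 guards the value==0 case so
    // numberOfLeadingZeros never returns 64.
    static int computeUnsignedVIntSize(long value) {
        int magnitude = Long.numberOfLeadingZeros(value | 1L);
        // Equivalent to ceil((64 - magnitude) / 7), clamped to 9:
        return (639 - magnitude * 9) >> 6;
    }
}
```

For example, values up to 127 take 1 byte, 128 takes 2, and a value with all 64 bits set takes 9.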

 Introduce writeVInt method to DataOutputStreamPlus
 --

 Key: CASSANDRA-9499
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9499
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Ariel Weisberg
Priority: Minor
 Fix For: 3.0 beta 1


 CASSANDRA-8099 really could do with a writeVInt method, for both fixing 
 CASSANDRA-9498 but also efficiently encoding timestamp/deletion deltas. It 
 should be possible to make an especially efficient implementation against 
 BufferedDataOutputStreamPlus.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9618) Consider deprecating sstable2json/json2sstable in 2.2

2015-06-19 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14593484#comment-14593484
 ] 

Jonathan Ellis commented on CASSANDRA-9618:
---

+1

 Consider deprecating sstable2json/json2sstable in 2.2
 -

 Key: CASSANDRA-9618
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9618
 Project: Cassandra
  Issue Type: Task
Reporter: Sylvain Lebresne
 Fix For: 2.2.0 rc2


 The rationale is explained in CASSANDRA-7464, but to rephrase a bit:
 * json2sstable is pretty much useless; {{CQLSSTableWriter}} is way more 
 flexible if you need to write sstables directly.
 * sstable2json is really only potentially useful for debugging, but it's 
 pretty bad at that (its output is not really all that helpful in modern 
 Cassandra in particular).
 Now, it happens that updating those tools for CASSANDRA-8099, while possible, 
 is a bit involved, so I don't think it makes sense to invest effort in 
 maintaining them. I propose to deprecate these in 2.2 with removal in 
 3.0.
 I'll note that having a tool to help debug sstables can be useful, but I 
 propose to add a tool for that purpose with CASSANDRA-7464.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9622) count/max/min aggregates not created for the blob type

2015-06-19 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14593483#comment-14593483
 ] 

Aleksey Yeschenko commented on CASSANDRA-9622:
--

Committed to 2.2 in {{716b253a771d50c2365608cf7cbc992e3683feed}} and merged 
into trunk, thanks.

 count/max/min aggregates not created for the blob type
 --

 Key: CASSANDRA-9622
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9622
 Project: Cassandra
  Issue Type: Bug
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 2.2.0 rc2


 The static block in {{Functions}} piggy-backs on the type conversions 
 declaration loop, and the latter skips the blob type. As a result, count, max, 
 and min functions are not being defined for {{blob}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8535) java.lang.RuntimeException: Failed to rename XXX to YYY

2015-06-19 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14593529#comment-14593529
 ] 

Joshua McKenzie commented on CASSANDRA-8535:


[~maximp]: What was the node doing at the time you saw that error? Did you run 
a major compaction or repair on it around that time or was it just during 
normal operations?

[~Andie78]: Back-porting specific finalizers into the 2.0 branch isn't an 
option at this time. If you want Windows support you're really going to need to 
run the latest from the 2.1 line or preferably 2.2 when it releases (it's in RC 
phases now).

 java.lang.RuntimeException: Failed to rename XXX to YYY
 ---

 Key: CASSANDRA-8535
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8535
 Project: Cassandra
  Issue Type: Bug
 Environment: Windows 2008 X64
Reporter: Leonid Shalupov
Assignee: Joshua McKenzie
  Labels: Windows
 Fix For: 2.1.5

 Attachments: 8535_v1.txt, 8535_v2.txt, 8535_v3.txt


 {code}
 java.lang.RuntimeException: Failed to rename 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-tmp-ka-5-Index.db
  to 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-ka-5-Index.db
   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:170) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:154) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.rename(SSTableWriter.java:569) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.rename(SSTableWriter.java:561) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.close(SSTableWriter.java:535) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.finish(SSTableWriter.java:470) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finishAndMaybeThrow(SSTableRewriter.java:349)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finish(SSTableRewriter.java:324)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finish(SSTableRewriter.java:304)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:200)
  ~[main/:na]
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:75)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:226)
  ~[main/:na]
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_45]
   at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 ~[na:1.7.0_45]
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_45]
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_45]
   at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
 Caused by: java.nio.file.FileSystemException: 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-tmp-ka-5-Index.db
  -> 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-ka-5-Index.db:
  The process cannot access the file because it is being used by another 
 process.
   at 
 sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86) 
 ~[na:1.7.0_45]
   at 
 sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97) 
 ~[na:1.7.0_45]
   at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301) 
 ~[na:1.7.0_45]
   at 
 sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287) 
 ~[na:1.7.0_45]
   at java.nio.file.Files.move(Files.java:1345) ~[na:1.7.0_45]
   at 
 org.apache.cassandra.io.util.FileUtils.atomicMoveWithFallback(FileUtils.java:184)
  ~[main/:na]
   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:166) 
 ~[main/:na]
   ... 18 common frames omitted
 {code}
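The failing frame in the trace, {{FileUtils.atomicMoveWithFallback}}, follows a common pattern: attempt an atomic rename first and fall back to a plain move when the filesystem refuses. A hedged, self-contained sketch of that pattern using the standard NIO API (not the actual Cassandra code):

```java
import java.io.IOException;
import java.nio.file.AtomicMoveNotSupportedException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

class RenameSketch {
    // Try an atomic rename; fall back to a non-atomic move if the filesystem
    // doesn't support atomic moves for this path pair.
    static void atomicMoveWithFallback(Path from, Path to) throws IOException {
        try {
            Files.move(from, to, StandardCopyOption.ATOMIC_MOVE);
        } catch (AtomicMoveNotSupportedException e) {
            Files.move(from, to, StandardCopyOption.REPLACE_EXISTING);
        }
    }
}
```

Note that neither path helps when another process holds the file open on Windows (the "being used by another process" error above): there the rename fails with a different {{FileSystemException}}, which is why the 2.1+ work moved toward releasing file handles deterministically before renaming.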



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CASSANDRA-9132) resumable_bootstrap_test can hang

2015-06-19 Thread Jim Witschey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Witschey reopened CASSANDRA-9132:
-

 resumable_bootstrap_test can hang
 -

 Key: CASSANDRA-9132
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9132
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Tyler Hobbs
Assignee: Yuki Morishita
 Fix For: 2.1.6, 2.0.16, 2.2.0 rc1

 Attachments: 9132-2.0.txt, 9132-followup-2.0.txt


 The {{bootstrap_test.TestBootstrap.resumable_bootstrap_test}} can hang 
 sometimes.  It looks like the following line never completes:
 {noformat}
 node3.watch_log_for("Listening for thrift clients...")
 {noformat}
 I'm not familiar enough with the recent bootstrap changes to know why that's 
 not happening.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Fix mixing min, max, and count aggregates for blob type

2015-06-19 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 947edf1f1 -> 716b253a7


Fix mixing min, max, and count aggregates for blob type

patch by Aleksey Yeschenko; reviewed by Robert Stupp for CASSANDRA-9622


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/716b253a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/716b253a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/716b253a

Branch: refs/heads/cassandra-2.2
Commit: 716b253a771d50c2365608cf7cbc992e3683feed
Parents: 947edf1
Author: Aleksey Yeschenko alek...@apache.org
Authored: Fri Jun 19 14:29:22 2015 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Jun 19 17:55:20 2015 +0300

--
 CHANGES.txt |  1 +
 .../cassandra/cql3/functions/Functions.java | 38 +++-
 2 files changed, 30 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/716b253a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 56f0dc0..4886850 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2
+ * Fix mixing min, max, and count aggregates for blob type (CASSANDRA-9622)
  * Rename class for DATE type in Java driver (CASSANDRA-9563)
  * Duplicate compilation of UDFs on coordinator (CASSANDRA-9475)
  * Fix connection leak in CqlRecordWriter (CASSANDRA-9576)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/716b253a/src/java/org/apache/cassandra/cql3/functions/Functions.java
--
diff --git a/src/java/org/apache/cassandra/cql3/functions/Functions.java 
b/src/java/org/apache/cassandra/cql3/functions/Functions.java
index d6a8ab0..018c35c 100644
--- a/src/java/org/apache/cassandra/cql3/functions/Functions.java
+++ b/src/java/org/apache/cassandra/cql3/functions/Functions.java
@@ -64,18 +64,25 @@ public abstract class Functions
 {
 // Note: because text and varchar ends up being synonymous, our 
automatic makeToBlobFunction doesn't work
 // for varchar, so we special case it below. We also skip blob for 
obvious reasons.
-if (type == CQL3Type.Native.VARCHAR || type == CQL3Type.Native.BLOB)
-continue;
-
-declare(BytesConversionFcts.makeToBlobFunction(type.getType()));
-declare(BytesConversionFcts.makeFromBlobFunction(type.getType()));
-
-declare(AggregateFcts.makeCountFunction(type.getType()));
-declare(AggregateFcts.makeMaxFunction(type.getType()));
-declare(AggregateFcts.makeMinFunction(type.getType()));
+if (type != CQL3Type.Native.VARCHAR && type != CQL3Type.Native.BLOB)
+{
+declare(BytesConversionFcts.makeToBlobFunction(type.getType()));
+declare(BytesConversionFcts.makeFromBlobFunction(type.getType()));
+}
 }
 declare(BytesConversionFcts.VarcharAsBlobFct);
 declare(BytesConversionFcts.BlobAsVarcharFact);
+
+for (CQL3Type type : CQL3Type.Native.values())
+{
+// special case varchar to avoid duplicating functions for UTF8Type
+if (type != CQL3Type.Native.VARCHAR)
+{
+declare(AggregateFcts.makeCountFunction(type.getType()));
+declare(AggregateFcts.makeMaxFunction(type.getType()));
+declare(AggregateFcts.makeMinFunction(type.getType()));
+}
+}
 declare(AggregateFcts.sumFunctionForInt32);
 declare(AggregateFcts.sumFunctionForLong);
 declare(AggregateFcts.sumFunctionForFloat);
@@ -340,6 +347,19 @@ public abstract class Functions
 return all;
 }
 
+/*
+ * We need to compare the CQL3 representation of the type because comparing
+ * the AbstractType will fail for example if a UDT has been changed.
+ * Reason is that UserType.equals() takes the field names and types into account.
+ * Example CQL sequence that would fail when comparing AbstractType:
+ *CREATE TYPE foo ...
+ *CREATE FUNCTION bar ( par foo ) RETURNS foo ...
+ *ALTER TYPE foo ADD ...
+ * or
+ *ALTER TYPE foo ALTER ...
+ * or
+ *ALTER TYPE foo RENAME ...
+ */
 public static boolean typeEquals(AbstractType<?> t1, AbstractType<?> t2)
 {
 return t1.asCQL3Type().toString().equals(t2.asCQL3Type().toString());
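
The hunk above splits one loop into two: blob conversion functions still skip VARCHAR and BLOB, while the count/min/max aggregates now skip only VARCHAR (text and varchar share UTF8Type, so declaring both would register duplicates) and therefore cover blob. A minimal standalone sketch of that registration logic, with illustrative names rather than Cassandra's actual API:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: NativeType and the string "function names" stand in
// for CQL3Type.Native and the declare(...) calls in Functions.java.
public class FunctionRegistrySketch {
    enum NativeType { INT, TEXT, VARCHAR, BLOB }

    static List<String> declare() {
        List<String> declared = new ArrayList<>();
        for (NativeType t : NativeType.values()) {
            // conversions: skip varchar (synonym of text) and blob itself
            if (t != NativeType.VARCHAR && t != NativeType.BLOB) {
                declared.add(t + "AsBlob");
                declared.add("blobAs" + t);
            }
        }
        for (NativeType t : NativeType.values()) {
            // aggregates: only varchar is skipped, so blob gets min/max/count
            if (t != NativeType.VARCHAR) {
                declared.add("count(" + t + ")");
                declared.add("max(" + t + ")");
                declared.add("min(" + t + ")");
            }
        }
        return declared;
    }

    public static void main(String[] args) {
        System.out.println(declare().contains("min(BLOB)"));  // true: the fix
        System.out.println(declare().contains("blobAsBLOB")); // false: no blob<->blob conversion
    }
}
```

The key point of the patch is exactly this asymmetry: conversions and aggregates need different skip sets, which the original single loop could not express.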



[1/2] cassandra git commit: Fix mixing min, max, and count aggregates for blob type

2015-06-19 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk e246ec258 -> 1af3c3b98


Fix mixing min, max, and count aggregates for blob type

patch by Aleksey Yeschenko; reviewed by Robert Stupp for CASSANDRA-9622


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/716b253a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/716b253a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/716b253a

Branch: refs/heads/trunk
Commit: 716b253a771d50c2365608cf7cbc992e3683feed
Parents: 947edf1
Author: Aleksey Yeschenko alek...@apache.org
Authored: Fri Jun 19 14:29:22 2015 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Jun 19 17:55:20 2015 +0300

--
 CHANGES.txt |  1 +
 .../cassandra/cql3/functions/Functions.java | 38 +++-
 2 files changed, 30 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/716b253a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 56f0dc0..4886850 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2
+ * Fix mixing min, max, and count aggregates for blob type (CASSANDRA-9622)
  * Rename class for DATE type in Java driver (CASSANDRA-9563)
  * Duplicate compilation of UDFs on coordinator (CASSANDRA-9475)
  * Fix connection leak in CqlRecordWriter (CASSANDRA-9576)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/716b253a/src/java/org/apache/cassandra/cql3/functions/Functions.java
--
diff --git a/src/java/org/apache/cassandra/cql3/functions/Functions.java 
b/src/java/org/apache/cassandra/cql3/functions/Functions.java
index d6a8ab0..018c35c 100644
--- a/src/java/org/apache/cassandra/cql3/functions/Functions.java
+++ b/src/java/org/apache/cassandra/cql3/functions/Functions.java
@@ -64,18 +64,25 @@ public abstract class Functions
 {
 // Note: because text and varchar ends up being synonymous, our 
automatic makeToBlobFunction doesn't work
 // for varchar, so we special case it below. We also skip blob for 
obvious reasons.
-if (type == CQL3Type.Native.VARCHAR || type == CQL3Type.Native.BLOB)
-continue;
-
-declare(BytesConversionFcts.makeToBlobFunction(type.getType()));
-declare(BytesConversionFcts.makeFromBlobFunction(type.getType()));
-
-declare(AggregateFcts.makeCountFunction(type.getType()));
-declare(AggregateFcts.makeMaxFunction(type.getType()));
-declare(AggregateFcts.makeMinFunction(type.getType()));
+if (type != CQL3Type.Native.VARCHAR && type != CQL3Type.Native.BLOB)
+{
+declare(BytesConversionFcts.makeToBlobFunction(type.getType()));
+declare(BytesConversionFcts.makeFromBlobFunction(type.getType()));
+}
 }
 declare(BytesConversionFcts.VarcharAsBlobFct);
 declare(BytesConversionFcts.BlobAsVarcharFact);
+
+for (CQL3Type type : CQL3Type.Native.values())
+{
+// special case varchar to avoid duplicating functions for UTF8Type
+if (type != CQL3Type.Native.VARCHAR)
+{
+declare(AggregateFcts.makeCountFunction(type.getType()));
+declare(AggregateFcts.makeMaxFunction(type.getType()));
+declare(AggregateFcts.makeMinFunction(type.getType()));
+}
+}
 declare(AggregateFcts.sumFunctionForInt32);
 declare(AggregateFcts.sumFunctionForLong);
 declare(AggregateFcts.sumFunctionForFloat);
@@ -340,6 +347,19 @@ public abstract class Functions
 return all;
 }
 
+/*
+ * We need to compare the CQL3 representation of the type because comparing
+ * the AbstractType will fail for example if a UDT has been changed.
+ * Reason is that UserType.equals() takes the field names and types into account.
+ * Example CQL sequence that would fail when comparing AbstractType:
+ *CREATE TYPE foo ...
+ *CREATE FUNCTION bar ( par foo ) RETURNS foo ...
+ *ALTER TYPE foo ADD ...
+ * or
+ *ALTER TYPE foo ALTER ...
+ * or
+ *ALTER TYPE foo RENAME ...
+ */
 public static boolean typeEquals(AbstractType<?> t1, AbstractType<?> t2)
 {
 return t1.asCQL3Type().toString().equals(t2.asCQL3Type().toString());
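
The new typeEquals comment explains that two AbstractType instances for the same UDT can compare unequal after an ALTER TYPE, because UserType.equals() looks at field names and types. A small sketch of that distinction, where CqlTypeish is a hypothetical stand-in class (not a Cassandra type):

```java
import java.util.List;

// Hypothetical stand-in for AbstractType/UserType, to show why comparing the
// CQL-level name succeeds where field-aware object equality fails.
public class TypeEqualsSketch {
    static final class CqlTypeish {
        final String cqlName;       // e.g. "foo", what asCQL3Type() would render
        final List<String> fields;  // participates in equals(), like UserType
        CqlTypeish(String n, List<String> f) { cqlName = n; fields = f; }
        @Override public boolean equals(Object o) {
            return o instanceof CqlTypeish
                && cqlName.equals(((CqlTypeish) o).cqlName)
                && fields.equals(((CqlTypeish) o).fields);
        }
        @Override public int hashCode() { return cqlName.hashCode(); }
    }

    // Mirrors the patch's approach: compare the CQL3 representation only
    static boolean typeEquals(CqlTypeish a, CqlTypeish b) {
        return a.cqlName.equals(b.cqlName);
    }

    public static void main(String[] args) {
        CqlTypeish before = new CqlTypeish("foo", List.of("a"));
        CqlTypeish after  = new CqlTypeish("foo", List.of("a", "b")); // ALTER TYPE foo ADD b
        System.out.println(before.equals(after));     // false: field-aware equality
        System.out.println(typeEquals(before, after)); // true: still the same CQL type
    }
}
```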



[jira] [Commented] (CASSANDRA-9499) Introduce writeVInt method to DataOutputStreamPlus

2015-06-19 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14593527#comment-14593527
 ] 

Benedict commented on CASSANDRA-9499:
-

OK, so I just pushed a change which reverses the byte order of the vint to big 
endian. This permits removing quite a few LOC, and I reckon it's now closer to 
optimal. Take a look and tell me what you think.
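
For readers following along: a big-endian vint stores the most significant group of bits first. A hedged sketch of one possible big-endian base-128 encoding (illustrative only; not necessarily the exact wire format the CASSANDRA-9499 patch adopted):

```java
// Sketch of a most-significant-group-first ("big-endian") base-128 varint.
// Each byte carries 7 payload bits; the high bit marks "more bytes follow".
public class BigEndianVintSketch {
    static byte[] encode(long v) {
        int bits = 64 - Long.numberOfLeadingZeros(v);
        int groups = Math.max(1, (bits + 6) / 7); // 7 payload bits per byte
        byte[] out = new byte[groups];
        for (int i = groups - 1; i >= 0; i--) {   // fill least significant group last
            out[i] = (byte) (v & 0x7F);
            v >>>= 7;
        }
        for (int i = 0; i < groups - 1; i++)      // continuation bit on all but the last
            out[i] |= (byte) 0x80;
        return out;
    }

    static long decode(byte[] b) {
        long v = 0;
        for (byte x : b)                          // big-endian: shift left as we read
            v = (v << 7) | (x & 0x7F);
        return v;
    }

    public static void main(String[] args) {
        boolean ok = true;
        for (long v : new long[] {0, 127, 128, 300, 1L << 40})
            ok &= decode(encode(v)) == v;
        System.out.println(ok);                   // true
        System.out.println(encode(300).length);   // 2 (bytes 0x82, 0x2C)
    }
}
```

Big-endian ordering lets the decoder accumulate with a simple shift-and-or loop, which is part of why the change could drop code compared to a little-endian variant.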

 Introduce writeVInt method to DataOutputStreamPlus
 --

 Key: CASSANDRA-9499
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9499
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Ariel Weisberg
Priority: Minor
 Fix For: 3.0 beta 1


 CASSANDRA-8099 really could do with a writeVInt method, for both fixing 
 CASSANDRA-9498 but also efficiently encoding timestamp/deletion deltas. It 
 should be possible to make an especially efficient implementation against 
 BufferedDataOutputStreamPlus.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9132) resumable_bootstrap_test can hang

2015-06-19 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14593544#comment-14593544
 ] 

Jim Witschey edited comment on CASSANDRA-9132 at 6/19/15 3:48 PM:
--

This still fails periodically on trunk while waiting for {{Starting listening 
for CQL clients}} 
([cassci|http://cassci.datastax.com/view/trunk/job/trunk_dtest/lastCompletedBuild/testReport/bootstrap_test/TestBootstrap/resumable_bootstrap_test/history/]).
 I can open a new ticket if you think this is a new bug, but for the moment I'm 
reopening.



 resumable_bootstrap_test can hang
 -

 Key: CASSANDRA-9132
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9132
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Tyler Hobbs
Assignee: Yuki Morishita
 Fix For: 2.1.6, 2.0.16, 2.2.0 rc1

 Attachments: 9132-2.0.txt, 9132-followup-2.0.txt


 The {{bootstrap_test.TestBootstrap.resumable_bootstrap_test}} can hang 
 sometimes.  It looks like the following line never completes:
 {noformat}
 node3.watch_log_for("Listening for thrift clients...")
 {noformat}
 I'm not familiar enough with the recent bootstrap changes to know why that's 
 not happening.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9623) Added column does not sort as the last column

2015-06-19 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14593602#comment-14593602
 ] 

Tyler Hobbs commented on CASSANDRA-9623:


Do you happen to know the schema for the table that this is happening on?  If 
not, can you try running cleanup against each table individually until you find 
it?  (Cleanup is a fast no-op if the table doesn't require it.)

 Added column does not sort as the last column
 -

 Key: CASSANDRA-9623
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9623
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcin Pietraszek

 After adding new machines to existing cluster running cleanup one of the 
 tables ends with:
 {noformat}
 ERROR [CompactionExecutor:1015] 2015-06-19 11:24:05,038 CassandraDaemon.java 
 (line 199) Exception in thread Thread[CompactionExecutor:1015,1,main]
 java.lang.AssertionError: Added column does not sort as the last column
 at 
 org.apache.cassandra.db.ArrayBackedSortedColumns.addColumn(ArrayBackedSortedColumns.java:116)
 at 
 org.apache.cassandra.db.ColumnFamily.addColumn(ColumnFamily.java:121)
 at org.apache.cassandra.db.ColumnFamily.addAtom(ColumnFamily.java:155)
 at 
 org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:186)
 at 
 org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:98)
 at 
 org.apache.cassandra.db.compaction.PrecompactedRow.init(PrecompactedRow.java:85)
 at 
 org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:196)
 at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:74)
 at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:55)
 at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:115)
 at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:98)
 at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
 at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
 at 
 org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:161)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
 at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 {noformat}
 We're using patched 2.0.13-190ef4f



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8535) java.lang.RuntimeException: Failed to rename XXX to YYY

2015-06-19 Thread Andreas Schnitzerling (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14593561#comment-14593561
 ] 

Andreas Schnitzerling commented on CASSANDRA-8535:
--

I used my (necessary) finalizer patch from 2.0 in 2.1.7-tentative (cloned today) 
as well. The 2.1.6 release crashes, but today's 2.1.7 seems to work now with the 
finalizer(s). On Monday I will check the file traces and look for exceptions.

Actually, it looks like 2.1.6/7 only needs a finalizer in SequentialWriter to 
work without crashing on Windows; I'll know more on Monday after some time with 
my patch. Has 2.2 improved its I/O resource management, or why else should it be 
better for Windows now? My belief is that Cassandra leaks file handles and that 
Windows is simply stricter about them than Linux. Why else would finalizers help?

 java.lang.RuntimeException: Failed to rename XXX to YYY
 ---

 Key: CASSANDRA-8535
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8535
 Project: Cassandra
  Issue Type: Bug
 Environment: Windows 2008 X64
Reporter: Leonid Shalupov
Assignee: Joshua McKenzie
  Labels: Windows
 Fix For: 2.1.5

 Attachments: 8535_v1.txt, 8535_v2.txt, 8535_v3.txt


 {code}
 java.lang.RuntimeException: Failed to rename 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-tmp-ka-5-Index.db
  to 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-ka-5-Index.db
   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:170) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:154) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.rename(SSTableWriter.java:569) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.rename(SSTableWriter.java:561) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.close(SSTableWriter.java:535) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.finish(SSTableWriter.java:470) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finishAndMaybeThrow(SSTableRewriter.java:349)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finish(SSTableRewriter.java:324)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finish(SSTableRewriter.java:304)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:200)
  ~[main/:na]
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:75)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:226)
  ~[main/:na]
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_45]
   at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 ~[na:1.7.0_45]
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_45]
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_45]
   at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
 Caused by: java.nio.file.FileSystemException: 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-tmp-ka-5-Index.db
  -> 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-ka-5-Index.db:
  The process cannot access the file because it is being used by another 
 process.
   at 
 sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86) 
 ~[na:1.7.0_45]
   at 
 sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97) 
 ~[na:1.7.0_45]
   at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301) 
 ~[na:1.7.0_45]
   at 
 sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287) 
 ~[na:1.7.0_45]
   at java.nio.file.Files.move(Files.java:1345) ~[na:1.7.0_45]
   at 
 org.apache.cassandra.io.util.FileUtils.atomicMoveWithFallback(FileUtils.java:184)
  ~[main/:na]
   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:166) 
 ~[main/:na]
   ... 18 common frames omitted
 {code}
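
The atomicMoveWithFallback frame in the trace above implements a common pattern: attempt an atomic rename first, and fall back to a plain replacing move when the filesystem refuses. A minimal runnable sketch of that pattern (temp-file paths here are for illustration only, not Cassandra's real file layout):

```java
import java.io.IOException;
import java.nio.file.AtomicMoveNotSupportedException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AtomicMoveSketch {
    // Try an atomic rename; if the filesystem can't do it, fall back to a
    // non-atomic replacing move (the same shape as FileUtils.atomicMoveWithFallback).
    static void atomicMoveWithFallback(Path from, Path to) throws IOException {
        try {
            Files.move(from, to, StandardCopyOption.REPLACE_EXISTING,
                                 StandardCopyOption.ATOMIC_MOVE);
        } catch (AtomicMoveNotSupportedException e) {
            Files.move(from, to, StandardCopyOption.REPLACE_EXISTING);
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("rename-demo");
        Path tmp = Files.write(dir.resolve("x-tmp-Index.db"), new byte[] {1});
        Path fin = dir.resolve("x-Index.db");
        atomicMoveWithFallback(tmp, fin);
        System.out.println(Files.exists(fin) && !Files.exists(tmp)); // true
    }
}
```

Note that even this pattern cannot help when, as in the Windows failure above, another process still holds an open handle to the source file: the move itself is rejected, which is why the discussion centers on closing or sharing handles rather than on the rename strategy.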



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9448) Metrics should use up to date nomenclature

2015-06-19 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14593580#comment-14593580
 ] 

Tyler Hobbs commented on CASSANDRA-9448:


bq. Other than nodetool tablestats and tablehistograms are there other way to 
test the metrics?

Do you mean with an automated test?  In the dtests you can use the jmxutils.py 
module to make JMX queries.
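
For reference, a JMX attribute read of the kind jmxutils.py performs looks like this in plain Java. This sketch queries the platform MBeanServer in-process so it runs anywhere; against a live node you would instead connect to Cassandra's JMX port and use a metric ObjectName such as the `org.apache.cassandra.metrics:...` domain (that pattern is an example, not verified here):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxQuerySketch {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Query a standard platform MBean so the sketch is self-contained;
        // Cassandra metrics are read the same way via a remote JMXConnector.
        ObjectName runtime = new ObjectName("java.lang:type=Runtime");
        Object uptime = server.getAttribute(runtime, "Uptime");
        System.out.println(uptime instanceof Long && (Long) uptime >= 0); // true
    }
}
```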

 Metrics should use up to date nomenclature
 --

 Key: CASSANDRA-9448
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9448
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Sam Tunnicliffe
Assignee: Stefania
  Labels: docs-impacting, jmx
 Fix For: 3.0 beta 1


 There are a number of exposed metrics that currently are named using the old 
 nomenclature of columnfamily and rows (meaning partitions).
 It would be good to audit all metrics and update any names to match what they 
 actually represent; we should probably do that in a single sweep to avoid a 
 confusing mixture of old and new terminology. 
 As we'd need to do this in a major release, I've initially set the fixver for 
 3.0 beta1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9621) Repair of the SystemDistributed keyspace creates a non-trivial amount of memory pressure

2015-06-19 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14593607#comment-14593607
 ] 

Sylvain Lebresne commented on CASSANDRA-9621:
-

Yes, though that's not only a problem for sequential repairs. That is, I could 
agree that the underlying problem is that we flush for every range, but 
CASSANDRA-9491 only has a (partial) solution for sequential repair currently. 
And since a proper solution to that in all cases (including parallel repairs) is 
probably not coming before the release of 2.2, I'd suggest implementing this to 
avoid surprises for people upgrading to 2.2. Not that I feel terribly strongly; 
this just feels like the safer option to me.

 Repair of the SystemDistributed keyspace creates a non-trivial amount of 
 memory pressure
 

 Key: CASSANDRA-9621
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9621
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Marcus Eriksson
Priority: Minor
 Fix For: 2.2.0 rc2


 When a repair without any particular option is triggered, the 
 {{SystemDistributed}} keyspace is repaired for all ranges, and in particular 
 the {{repair_history}} table. For every range, that table is written and 
 flushed (as part of normal repair), meaning that every range triggers the 
 creation of a new 1MB slab region (this also triggers quite a few compactions 
 that also write and flush {{compaction_progress}} at every start and end).
 I don't know how much of a big deal this will be in practice, but I wonder if 
 it's really useful to repair the {{repair_*}} tables by default so maybe we 
 could exclude the SystemDistributed keyspace from default repairs and only 
 repair it if asked explicitly?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9621) Repair of the SystemDistributed keyspace creates a non-trivial amount of memory pressure

2015-06-19 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14593594#comment-14593594
 ] 

Tyler Hobbs commented on CASSANDRA-9621:


Maybe we should skip that keyspace, but CASSANDRA-9491 is really the root cause 
that needs to be fixed, I think.  With that fixed, there is less reason to do 
this.

 Repair of the SystemDistributed keyspace creates a non-trivial amount of 
 memory pressure
 

 Key: CASSANDRA-9621
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9621
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Marcus Eriksson
Priority: Minor
 Fix For: 2.2.0 rc2


 When a repair without any particular option is triggered, the 
 {{SystemDistributed}} keyspace is repaired for all ranges, and in particular 
 the {{repair_history}} table. For every range, that table is written and 
 flushed (as part of normal repair), meaning that every range triggers the 
 creation of a new 1MB slab region (this also triggers quite a few compactions 
 that also write and flush {{compaction_progress}} at every start and end).
 I don't know how much of a big deal this will be in practice, but I wonder if 
 it's really useful to repair the {{repair_*}} tables by default so maybe we 
 could exclude the SystemDistributed keyspace from default repairs and only 
 repair it if asked explicitly?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9448) Metrics should use up to date nomenclature

2015-06-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-9448:
---
Labels: docs-impacting jmx  (was: docs-impacting)

 Metrics should use up to date nomenclature
 --

 Key: CASSANDRA-9448
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9448
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Sam Tunnicliffe
Assignee: Stefania
  Labels: docs-impacting, jmx
 Fix For: 3.0 beta 1


 There are a number of exposed metrics that currently are named using the old 
 nomenclature of columnfamily and rows (meaning partitions).
 It would be good to audit all metrics and update any names to match what they 
 actually represent; we should probably do that in a single sweep to avoid a 
 confusing mixture of old and new terminology. 
 As we'd need to do this in a major release, I've initially set the fixver for 
 3.0 beta1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9607) Get high load after upgrading from 2.1.3 to cassandra 2.1.6

2015-06-19 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14593703#comment-14593703
 ] 

Benedict commented on CASSANDRA-9607:
-

Ok, so on further investigation this is *not* caused by CASSANDRA-9549. It 
appears that the nodes are dying performing range slice queries, but it also 
*appears* that these queries are utilising paging. 

[~thobbs] [~blerer] [~iamaleksey], has anything changed around these subsystems 
since 2.1.3 that might cause them to misbehave? 

I would say we should delay 2.1.7 until we figure this out, but CASSANDRA-9549 
is really pretty urgent.

 Get high load after upgrading from 2.1.3 to cassandra 2.1.6
 ---

 Key: CASSANDRA-9607
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9607
 Project: Cassandra
  Issue Type: Bug
 Environment: OS: 
 CentOS 6 * 4
 Ubuntu 14.04 * 2
 JDK: Oracle JDK 7, Oracle JDK 8
 VM: Azure VM Standard A3 * 6
 RAM: 7 GB
 Cores: 4
Reporter: Study Hsueh
Assignee: Benedict
Priority: Critical
 Attachments: cassandra.yaml, load.png, log.zip


 After upgrading cassandra version from 2.1.3 to 2.1.6, the average load of my 
 cassandra cluster grows from 0.x~1.x to 3.x~6.x. 
 What kind of additional information should I provide for this problem?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9607) Get high load after upgrading from 2.1.3 to cassandra 2.1.6

2015-06-19 Thread Robbie Strickland (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14593716#comment-14593716
 ] 

Robbie Strickland commented on CASSANDRA-9607:
--

This is interesting, because we are currently troubleshooting these exact 
symptoms on our analytics cluster when querying large tables using Spark.  We 
had suspected sstable corruption, since some tables do work.  But size appears 
to matter.  Further, as I read your comment we had just finished loading one of 
the problematic tables into a test cluster running 2.1.4, and the same Spark 
job runs problem free.  I am quite sure there's a correlation here.

 Get high load after upgrading from 2.1.3 to cassandra 2.1.6
 ---

 Key: CASSANDRA-9607
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9607
 Project: Cassandra
  Issue Type: Bug
 Environment: OS: 
 CentOS 6 * 4
 Ubuntu 14.04 * 2
 JDK: Oracle JDK 7, Oracle JDK 8
 VM: Azure VM Standard A3 * 6
 RAM: 7 GB
 Cores: 4
Reporter: Study Hsueh
Assignee: Benedict
Priority: Critical
 Attachments: cassandra.yaml, load.png, log.zip


 After upgrading cassandra version from 2.1.3 to 2.1.6, the average load of my 
 cassandra cluster grows from 0.x~1.x to 3.x~6.x. 
 What kind of additional information should I provide for this problem?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9607) Get high load after upgrading from 2.1.3 to cassandra 2.1.6

2015-06-19 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14593723#comment-14593723
 ] 

Tyler Hobbs commented on CASSANDRA-9607:


Sure, your schema might help us figure out what's going on.

 Get high load after upgrading from 2.1.3 to cassandra 2.1.6
 ---

 Key: CASSANDRA-9607
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9607
 Project: Cassandra
  Issue Type: Bug
 Environment: OS: 
 CentOS 6 * 4
 Ubuntu 14.04 * 2
 JDK: Oracle JDK 7, Oracle JDK 8
 VM: Azure VM Standard A3 * 6
 RAM: 7 GB
 Cores: 4
Reporter: Study Hsueh
Assignee: Benedict
Priority: Critical
 Attachments: cassandra.yaml, load.png, log.zip


 After upgrading cassandra version from 2.1.3 to 2.1.6, the average load of my 
 cassandra cluster grows from 0.x~1.x to 3.x~6.x. 
 What kind of additional information should I provide for this problem?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Backport CASSANDRA-6767 to 2.0: Display min timestamp in sstablemetadata viewer

2015-06-19 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 23e66a9d1 -> f778c1f88


Backport CASSANDRA-6767 to 2.0: Display min timestamp in sstablemetadata viewer


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f778c1f8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f778c1f8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f778c1f8

Branch: refs/heads/cassandra-2.0
Commit: f778c1f88f4deb075b383f3a8b24ef279585bd32
Parents: 23e66a9
Author: Marcus Eriksson marc...@apache.org
Authored: Fri Jun 19 19:18:44 2015 +0200
Committer: Marcus Eriksson marc...@apache.org
Committed: Fri Jun 19 19:28:54 2015 +0200

--
 CHANGES.txt| 4 
 src/java/org/apache/cassandra/tools/SSTableMetadataViewer.java | 1 +
 2 files changed, 5 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f778c1f8/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index a235528..b5b2f32 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,3 +1,7 @@
+2.0.17
+ * Display min timestamp in sstablemetadata viewer (CASSANDRA-6767)
+
+
 2.0.16:
  * Expose some internals of SelectStatement for inspection (CASSANDRA-9532)
  * ArrivalWindow should use primitives (CASSANDRA-9496)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f778c1f8/src/java/org/apache/cassandra/tools/SSTableMetadataViewer.java
--
diff --git a/src/java/org/apache/cassandra/tools/SSTableMetadataViewer.java 
b/src/java/org/apache/cassandra/tools/SSTableMetadataViewer.java
index 64720b5..9664e9e 100644
--- a/src/java/org/apache/cassandra/tools/SSTableMetadataViewer.java
+++ b/src/java/org/apache/cassandra/tools/SSTableMetadataViewer.java
@@ -51,6 +51,7 @@ public class SSTableMetadataViewer
 
 out.printf("SSTable: %s%n", descriptor);
 out.printf("Partitioner: %s%n", metadata.partitioner);
+out.printf("Minimum timestamp: %s%n", metadata.minTimestamp);
 out.printf("Maximum timestamp: %s%n", metadata.maxTimestamp);
 out.printf("SSTable max local deletion time: %s%n", metadata.maxLocalDeletionTime);
 out.printf("Compression ratio: %s%n", metadata.compressionRatio);



[2/2] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-06-19 Thread marcuse
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/efebd8f1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/efebd8f1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/efebd8f1

Branch: refs/heads/cassandra-2.1
Commit: efebd8f172d19d3c325366201235882af7531244
Parents: 718c144 f778c1f
Author: Marcus Eriksson marc...@apache.org
Authored: Fri Jun 19 19:36:11 2015 +0200
Committer: Marcus Eriksson marc...@apache.org
Committed: Fri Jun 19 19:36:11 2015 +0200

--

--




[1/2] cassandra git commit: Backport CASSANDRA-6767 to 2.0: Display min timestamp in sstablemetadata viewer

2015-06-19 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 718c14432 -> efebd8f17


Backport CASSANDRA-6767 to 2.0: Display min timestamp in sstablemetadata viewer


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f778c1f8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f778c1f8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f778c1f8

Branch: refs/heads/cassandra-2.1
Commit: f778c1f88f4deb075b383f3a8b24ef279585bd32
Parents: 23e66a9
Author: Marcus Eriksson marc...@apache.org
Authored: Fri Jun 19 19:18:44 2015 +0200
Committer: Marcus Eriksson marc...@apache.org
Committed: Fri Jun 19 19:28:54 2015 +0200

--
 CHANGES.txt| 4 
 src/java/org/apache/cassandra/tools/SSTableMetadataViewer.java | 1 +
 2 files changed, 5 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f778c1f8/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index a235528..b5b2f32 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,3 +1,7 @@
+2.0.17
+ * Display min timestamp in sstablemetadata viewer (CASSANDRA-6767)
+
+
 2.0.16:
  * Expose some internals of SelectStatement for inspection (CASSANDRA-9532)
  * ArrivalWindow should use primitives (CASSANDRA-9496)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f778c1f8/src/java/org/apache/cassandra/tools/SSTableMetadataViewer.java
--
diff --git a/src/java/org/apache/cassandra/tools/SSTableMetadataViewer.java 
b/src/java/org/apache/cassandra/tools/SSTableMetadataViewer.java
index 64720b5..9664e9e 100644
--- a/src/java/org/apache/cassandra/tools/SSTableMetadataViewer.java
+++ b/src/java/org/apache/cassandra/tools/SSTableMetadataViewer.java
@@ -51,6 +51,7 @@ public class SSTableMetadataViewer
 
 out.printf("SSTable: %s%n", descriptor);
 out.printf("Partitioner: %s%n", metadata.partitioner);
+out.printf("Minimum timestamp: %s%n", metadata.minTimestamp);
 out.printf("Maximum timestamp: %s%n", metadata.maxTimestamp);
 out.printf("SSTable max local deletion time: %s%n", metadata.maxLocalDeletionTime);
 out.printf("Compression ratio: %s%n", metadata.compressionRatio);



[jira] [Commented] (CASSANDRA-8535) java.lang.RuntimeException: Failed to rename XXX to YYY

2015-06-19 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14593720#comment-14593720
 ] 

Joshua McKenzie commented on CASSANDRA-8535:


The various file access violations on Windows stem from early implementation 
details within Java: the choice of flags their older I/O APIs pass when they 
call NtCreateFile. I've rewritten our on-disk readers and writers to use nio, 
which uses the FILE_SHARE_DELETE flag (see CASSANDRA-4050, CASSANDRA-8709), so 
from 2.2 onward we can rename files and delete them without having to close 
the original file handle first. This is of particular importance due to the 
way early re-open currently works on the 2.2 branch, though early re-open 
should be skipped on Windows as I mentioned above.

We've had a couple of regressions slip in where early re-open was incorrectly 
re-applied to all platforms even though it was disabled, due to some 
side-effects of changes in SSTableRewriter; my expectation is that such 
changes may sneak back into either SSTRW or SequentialWriter if we're not 
careful on the 2.1 branch, given the complexity therein.

I haven't had a chance to look into this ticket yet and it's not very high on 
my priority list because it's a class of problems that should be completely 
eliminated by 2.2.
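Joshua's point above (nio-based moves let Windows rename a file while other 
handles are still open, because the file is opened with FILE_SHARE_DELETE) can 
be illustrated with a minimal, hypothetical sketch using only the standard 
library; this is not Cassandra's actual FileUtils code, and the file names are 
made up for the example:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class NioRenameDemo {
    public static void main(String[] args) throws IOException {
        // Create a temp file standing in for a "-tmp-" sstable component,
        // then rename it in place via the NIO.2 API. Unlike the legacy
        // File.renameTo, Files.move reports failures with an exception,
        // and on Windows it can succeed even while other NIO handles to
        // the file remain open.
        Path src = Files.createTempFile("sstable-tmp-", "-Index.db");
        Path dst = src.resolveSibling(
                src.getFileName().toString().replace("tmp-", ""));
        Files.move(src, dst, StandardCopyOption.ATOMIC_MOVE);
        System.out.println(Files.exists(dst) && !Files.exists(src)); // true
        Files.deleteIfExists(dst);
    }
}
```

ATOMIC_MOVE is safe here because source and target share a directory; a move 
across filesystems would need REPLACE_EXISTING semantics instead.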

 java.lang.RuntimeException: Failed to rename XXX to YYY
 ---

 Key: CASSANDRA-8535
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8535
 Project: Cassandra
  Issue Type: Bug
 Environment: Windows 2008 X64
Reporter: Leonid Shalupov
Assignee: Joshua McKenzie
  Labels: Windows
 Fix For: 2.1.5

 Attachments: 8535_v1.txt, 8535_v2.txt, 8535_v3.txt


 {code}
 java.lang.RuntimeException: Failed to rename 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-tmp-ka-5-Index.db
  to 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-ka-5-Index.db
   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:170) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:154) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.rename(SSTableWriter.java:569) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.rename(SSTableWriter.java:561) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.close(SSTableWriter.java:535) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.finish(SSTableWriter.java:470) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finishAndMaybeThrow(SSTableRewriter.java:349)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finish(SSTableRewriter.java:324)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finish(SSTableRewriter.java:304)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:200)
  ~[main/:na]
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:75)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:226)
  ~[main/:na]
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_45]
   at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 ~[na:1.7.0_45]
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_45]
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_45]
   at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
 Caused by: java.nio.file.FileSystemException: 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-tmp-ka-5-Index.db
 -> 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-ka-5-Index.db:
  The process cannot access the file because it is being used by another 
 process.
   at 
 sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86) 
 ~[na:1.7.0_45]
   at 
 sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97) 
 ~[na:1.7.0_45]
   at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301) 
 ~[na:1.7.0_45]
   at 
 

[jira] [Commented] (CASSANDRA-9607) Get high load after upgrading from 2.1.3 to cassandra 2.1.6

2015-06-19 Thread Study Hsueh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14593719#comment-14593719
 ] 

Study Hsueh commented on CASSANDRA-9607:


We do not use static columns. Do you need my schema for analysis?

 Get high load after upgrading from 2.1.3 to cassandra 2.1.6
 ---

 Key: CASSANDRA-9607
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9607
 Project: Cassandra
  Issue Type: Bug
 Environment: OS: 
 CentOS 6 * 4
 Ubuntu 14.04 * 2
 JDK: Oracle JDK 7, Oracle JDK 8
 VM: Azure VM Standard A3 * 6
 RAM: 7 GB
 Cores: 4
Reporter: Study Hsueh
Assignee: Benedict
Priority: Critical
 Attachments: cassandra.yaml, load.png, log.zip


 After upgrading cassandra version from 2.1.3 to 2.1.6, the average load of my 
 cassandra cluster grows from 0.x~1.x to 3.x~6.x. 
 What kind of additional information should I provide for this problem?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9607) Get high load after upgrading from 2.1.3 to cassandra 2.1.6

2015-06-19 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14593711#comment-14593711
 ] 

Tyler Hobbs commented on CASSANDRA-9607:


Perhaps CASSANDRA-8502?  [~study] do you have any tables with static columns?

 Get high load after upgrading from 2.1.3 to cassandra 2.1.6
 ---

 Key: CASSANDRA-9607
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9607
 Project: Cassandra
  Issue Type: Bug
 Environment: OS: 
 CentOS 6 * 4
 Ubuntu 14.04 * 2
 JDK: Oracle JDK 7, Oracle JDK 8
 VM: Azure VM Standard A3 * 6
 RAM: 7 GB
 Cores: 4
Reporter: Study Hsueh
Assignee: Benedict
Priority: Critical
 Attachments: cassandra.yaml, load.png, log.zip


 After upgrading cassandra version from 2.1.3 to 2.1.6, the average load of my 
 cassandra cluster grows from 0.x~1.x to 3.x~6.x. 
 What kind of additional information should I provide for this problem?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9624) unable to bootstrap; streaming fails with NullPointerException

2015-06-19 Thread Eric Evans (JIRA)
Eric Evans created CASSANDRA-9624:
-

 Summary: unable to bootstrap; streaming fails with 
NullPointerException
 Key: CASSANDRA-9624
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9624
 Project: Cassandra
  Issue Type: Bug
 Environment: Debian Jessie, 7u79-2.5.5-1~deb8u1, Cassandra 2.1.3
Reporter: Eric Evans


When attempting to bootstrap a new node into a 2.1.3 cluster, the stream source 
fails with a {{NullPointerException}}:

{noformat}
ERROR [STREAM-IN-/10.xx.x.xxx] 2015-06-13 00:02:01,264 StreamSession.java:477 - 
[Stream #60e8c120-
115f-11e5-9fee-] Streaming error occurred
java.lang.NullPointerException: null
at 
org.apache.cassandra.io.sstable.SSTableReader.getPositionsForRanges(SSTableReader.java:1277)
 ~[apache-cassandra-2.1.3.jar:2.1.3]
at 
org.apache.cassandra.streaming.StreamSession.getSSTableSectionsForRanges(StreamSession.java:313)
 ~[apache-cassandra-2.1.3.jar:2.1.3]
at 
org.apache.cassandra.streaming.StreamSession.addTransferRanges(StreamSession.java:266)
 ~[apache-cassandra-2.1.3.jar:2.1.3]
at 
org.apache.cassandra.streaming.StreamSession.prepare(StreamSession.java:493) 
~[apache-cassandra-2.1.3.jar:2.1.3]
at 
org.apache.cassandra.streaming.StreamSession.messageReceived(StreamSession.java:425)
 ~[apache-cassandra-2.1.3.jar:2.1.3]
at 
org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:251)
 ~[apache-cassandra-2.1.3.jar:2.1.3]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_79]
INFO  [STREAM-IN-/10.xx.x.xxx] 2015-06-13 00:02:01,265 
StreamResultFuture.java:180 - [Stream #60e8c120-115f-11e5-9fee-] 
Session with /10.xx.x.xx1 is complete
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9626) Make C* work in all locales

2015-06-19 Thread Robert Stupp (JIRA)
Robert Stupp created CASSANDRA-9626:
---

 Summary: Make C* work in all locales
 Key: CASSANDRA-9626
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9626
 Project: Cassandra
  Issue Type: Improvement
Reporter: Robert Stupp
Priority: Minor


Default locale and default charset have an immediate effect on how strings are 
encoded and handled - e.g. via {{String.toLowerCase()}} or {{new 
String(byte[])}}.

Problems with different default locales + charsets don't become obvious for US 
and most European regional settings. But some regional OS settings will cause 
severe errors. Example: {{"BILLY".toLowerCase()}} returns {{bılly}} with Locale 
tr_TR (take a look at the second letter - it's an i without the dot).

(ref: http://blog.thetaphi.de/2012/07/default-locales-default-charsets-and.html)

It's not a problem I'm currently facing, but it could become a problem for some 
users. A quick fix could be to set default locale and charset in the start 
scripts - maybe that's all we need.
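The Turkish-locale pitfall described above is easy to demonstrate; a minimal 
sketch (the class name is just for the example) comparing the tr_TR default 
with an explicit Locale.ROOT, which is the usual defensive fix in library code:

```java
import java.util.Locale;

public class LocaleDemo {
    public static void main(String[] args) {
        Locale turkish = new Locale("tr", "TR");
        // Under tr_TR, uppercase 'I' lowercases to dotless 'ı' (U+0131),
        // so "BILLY" becomes "bılly" rather than "billy".
        System.out.println("BILLY".toLowerCase(turkish));     // bılly
        // Passing an explicit locale makes the result independent of the
        // JVM's default regional settings.
        System.out.println("BILLY".toLowerCase(Locale.ROOT)); // billy
    }
}
```

Code that lowercases identifiers (keyspace names, keywords) should always pass 
an explicit locale for exactly this reason.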



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9503) Update CQL doc reflecting current keywords

2015-06-19 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14593851#comment-14593851
 ] 

Benjamin Lerer commented on CASSANDRA-9503:
---

I think that {{NOLOGIN}} is missing (as unrestricted) and that {{QUORUM}} is 
not used anymore.
Otherwise, it looks good to me.

 Update CQL doc reflecting current keywords
 --

 Key: CASSANDRA-9503
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9503
 Project: Cassandra
  Issue Type: Bug
  Components: Documentation & website
Reporter: Adam Holmberg
Assignee: Adam Holmberg
Priority: Trivial
 Fix For: 2.2.x

 Attachments: cql_keywords.txt


 The table in doc/cql3/CQL.textile#appendixA is outdated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9503) Update CQL doc reflecting current keywords

2015-06-19 Thread Adam Holmberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Holmberg updated CASSANDRA-9503:
-
Assignee: Benjamin Lerer  (was: Adam Holmberg)

 Update CQL doc reflecting current keywords
 --

 Key: CASSANDRA-9503
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9503
 Project: Cassandra
  Issue Type: Bug
  Components: Documentation & website
Reporter: Adam Holmberg
Assignee: Benjamin Lerer
Priority: Trivial
 Fix For: 2.2.x

 Attachments: 9503-2.txt, cql_keywords.txt


 The table in doc/cql3/CQL.textile#appendixA is outdated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9503) Update CQL doc reflecting current keywords

2015-06-19 Thread Adam Holmberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Holmberg updated CASSANDRA-9503:
-
Attachment: 9503-2.txt

Good eye. Must have messed up my diff. 
New patch 9503-2.txt attached.

 Update CQL doc reflecting current keywords
 --

 Key: CASSANDRA-9503
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9503
 Project: Cassandra
  Issue Type: Bug
  Components: Documentation & website
Reporter: Adam Holmberg
Assignee: Adam Holmberg
Priority: Trivial
 Fix For: 2.2.x

 Attachments: 9503-2.txt, cql_keywords.txt


 The table in doc/cql3/CQL.textile#appendixA is outdated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9625) GraphiteReporter not reporting

2015-06-19 Thread Eric Evans (JIRA)
Eric Evans created CASSANDRA-9625:
-

 Summary: GraphiteReporter not reporting
 Key: CASSANDRA-9625
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9625
 Project: Cassandra
  Issue Type: Bug
 Environment: Debian Jessie, 7u79-2.5.5-1~deb8u1, Cassandra 2.1.3
Reporter: Eric Evans


When upgrading from 2.1.3 to 2.1.6, the Graphite metrics reporter stops 
working.  The usual startup is logged, and one batch of samples is sent, but 
the reporting interval comes and goes, and no other samples are ever sent.  The 
logs are free from errors.

Frustratingly, metrics reporting works in our smaller (staging) environment on 
2.1.6; we are able to reproduce this on all 6 of our production nodes, but not 
on a 3 node (otherwise identical) staging cluster (maybe it takes a certain 
level of concurrency?).

Attached is a thread dump, and our metrics.yaml.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9625) GraphiteReporter not reporting

2015-06-19 Thread Eric Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Evans updated CASSANDRA-9625:
--
Attachment: thread-dump.log

 GraphiteReporter not reporting
 --

 Key: CASSANDRA-9625
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9625
 Project: Cassandra
  Issue Type: Bug
 Environment: Debian Jessie, 7u79-2.5.5-1~deb8u1, Cassandra 2.1.3
Reporter: Eric Evans
 Attachments: thread-dump.log


 When upgrading from 2.1.3 to 2.1.6, the Graphite metrics reporter stops 
 working.  The usual startup is logged, and one batch of samples is sent, but 
 the reporting interval comes and goes, and no other samples are ever sent.  
 The logs are free from errors.
 Frustratingly, metrics reporting works in our smaller (staging) environment 
 on 2.1.6; We are able to reproduce this on all 6 of production nodes, but not 
 on a 3 node (otherwise identical) staging cluster (maybe it takes a certain 
 level of concurrency?).
 Attached is a thread dump, and our metrics.yaml.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9625) GraphiteReporter not reporting

2015-06-19 Thread Eric Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Evans updated CASSANDRA-9625:
--
Attachment: metrics.yaml

 GraphiteReporter not reporting
 --

 Key: CASSANDRA-9625
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9625
 Project: Cassandra
  Issue Type: Bug
 Environment: Debian Jessie, 7u79-2.5.5-1~deb8u1, Cassandra 2.1.3
Reporter: Eric Evans
 Attachments: metrics.yaml, thread-dump.log


 When upgrading from 2.1.3 to 2.1.6, the Graphite metrics reporter stops 
 working.  The usual startup is logged, and one batch of samples is sent, but 
 the reporting interval comes and goes, and no other samples are ever sent.  
 The logs are free from errors.
 Frustratingly, metrics reporting works in our smaller (staging) environment 
 on 2.1.6; We are able to reproduce this on all 6 of production nodes, but not 
 on a 3 node (otherwise identical) staging cluster (maybe it takes a certain 
 level of concurrency?).
 Attached is a thread dump, and our metrics.yaml.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9607) Get high load after upgrading from 2.1.3 to cassandra 2.1.6

2015-06-19 Thread Study Hsueh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14593753#comment-14593753
 ] 

Study Hsueh commented on CASSANDRA-9607:


Ok, I have uploaded my schema.

 Get high load after upgrading from 2.1.3 to cassandra 2.1.6
 ---

 Key: CASSANDRA-9607
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9607
 Project: Cassandra
  Issue Type: Bug
 Environment: OS: 
 CentOS 6 * 4
 Ubuntu 14.04 * 2
 JDK: Oracle JDK 7, Oracle JDK 8
 VM: Azure VM Standard A3 * 6
 RAM: 7 GB
 Cores: 4
Reporter: Study Hsueh
Assignee: Benedict
Priority: Critical
 Attachments: cassandra.yaml, load.png, log.zip, schema.zip


 After upgrading cassandra version from 2.1.3 to 2.1.6, the average load of my 
 cassandra cluster grows from 0.x~1.x to 3.x~6.x. 
 What kind of additional information should I provide for this problem?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9607) Get high load after upgrading from 2.1.3 to cassandra 2.1.6

2015-06-19 Thread Study Hsueh (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Study Hsueh updated CASSANDRA-9607:
---
Attachment: schema.zip

 Get high load after upgrading from 2.1.3 to cassandra 2.1.6
 ---

 Key: CASSANDRA-9607
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9607
 Project: Cassandra
  Issue Type: Bug
 Environment: OS: 
 CentOS 6 * 4
 Ubuntu 14.04 * 2
 JDK: Oracle JDK 7, Oracle JDK 8
 VM: Azure VM Standard A3 * 6
 RAM: 7 GB
 Cores: 4
Reporter: Study Hsueh
Assignee: Benedict
Priority: Critical
 Attachments: cassandra.yaml, load.png, log.zip, schema.zip


 After upgrading cassandra version from 2.1.3 to 2.1.6, the average load of my 
 cassandra cluster grows from 0.x~1.x to 3.x~6.x. 
 What kind of additional information should I provide for this problem?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9558) Cassandra-stress regression in 2.2

2015-06-19 Thread Norman Maurer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14593780#comment-14593780
 ] 

Norman Maurer commented on CASSANDRA-9558:
--

Sorry for being late to the party, but this somehow got lost in my inbox :(

From a Netty standpoint you are right: flushing from outside the EventLoop is 
pretty expensive, as it needs to wake up the selector if it is not already 
awake and processing work.

So the best thing you can do is either always write / flush from within the 
EventLoop, or minimize the flushes from outside the EventLoop. That said, if 
you point me to the place in your code where you do the flush and the other 
stuff, I'm happy to have a look and see if I can give you some ideas for 
improvement.

Just let me know!
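The principle Norman describes (keep writes and flushes on the event-loop 
thread rather than forcing selector wakeups from outside) can be sketched 
without Netty. In this hypothetical stand-in, a single-threaded executor plays 
the role of the EventLoop and a `writeAndFlush` helper hands work to it instead 
of touching channel state from caller threads:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class EventLoopDemo {
    // A single-threaded executor stands in for a Netty EventLoop: all
    // "channel" work runs on this one thread, in submission order.
    static final ExecutorService eventLoop = Executors.newSingleThreadExecutor();

    // Outbound buffer confined to the loop thread; never mutated directly
    // by callers, so no locking is needed.
    static final StringBuilder outbound = new StringBuilder();

    // Instead of flushing directly from a caller thread (which, in Netty,
    // would force a selector wakeup), hand the work to the loop thread.
    static Future<?> writeAndFlush(String msg) {
        return eventLoop.submit(() -> outbound.append(msg));
    }

    public static void main(String[] args) throws Exception {
        writeAndFlush("hello ");
        writeAndFlush("world").get(); // tasks complete in FIFO order
        System.out.println(outbound); // hello world
        eventLoop.shutdown();
    }
}
```

In real Netty code the equivalent move is `channel.eventLoop().execute(...)` 
or simply issuing writes from inside a handler, so the flush happens on the 
thread that already owns the selector.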

 Cassandra-stress regression in 2.2
 --

 Key: CASSANDRA-9558
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9558
 Project: Cassandra
  Issue Type: Bug
Reporter: Alan Boudreault
 Fix For: 2.2.0 rc2

 Attachments: 2.1.log, 2.2.log, CASSANDRA-9558-2.patch, 
 CASSANDRA-9558-ProtocolV2.patch, atolber-CASSANDRA-9558-stress.tgz, 
 atolber-trunk-driver-coalescing-disabled.txt, 
 stress-2.1-java-driver-2.0.9.2.log, stress-2.1-java-driver-2.2+PATCH.log, 
 stress-2.1-java-driver-2.2.log, stress-2.2-java-driver-2.2+PATCH.log, 
 stress-2.2-java-driver-2.2.log


 We are seeing some regression in performance when using cassandra-stress 2.2. 
 You can see the difference at this url:
 http://riptano.github.io/cassandra_performance/graph_v5/graph.html?stats=stress_regression.json&metric=op_rate&operation=1_write&smoothing=1&show_aggregates=true&xmin=0&xmax=108.57&ymin=0&ymax=168147.1
 The cassandra version of the cluster doesn't seem to have any impact. 
 //cc [~tjake] [~benedict]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)