[jira] [Commented] (CASSANDRA-6471) Executing a prepared CREATE KEYSPACE multiple times doesn't work

2013-12-12 Thread Amichai Rothman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13846142#comment-13846142
 ] 

Amichai Rothman commented on CASSANDRA-6471:


Does this apply to 2.0.x as well?

 Executing a prepared CREATE KEYSPACE multiple times doesn't work
 

 Key: CASSANDRA-6471
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6471
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
Priority: Trivial
 Fix For: 1.2.13

 Attachments: 6471.txt


 See user reports on the java driver JIRA: 
 https://datastax-oss.atlassian.net/browse/JAVA-223. Preparing CREATE KEYSPACE 
 queries is not particularly useful but there is no reason for it to be broken.
 The reason is that the KSPropDef/CFPropDef.validate() methods are not 
 idempotent. Attaching a simple patch to fix this.
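The failure mode described above (re-executing a prepared statement and hitting non-idempotent validation) can be sketched as follows. This is a hypothetical illustration, not the actual KSPropDef/CFPropDef code; the class and method names are invented:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a property-definition class whose validate() mutates
// its own state, so validating twice (as re-executing a prepared statement
// does) fails. Illustrative only, not the actual KSPropDef/CFPropDef code.
class PropDefSketch {
    final Map<String, String> properties = new HashMap<>();

    void validate() {
        // Removing the property during validation makes the call one-shot.
        String strategy = properties.remove("class");
        if (strategy == null) {
            throw new IllegalStateException("missing replication strategy class");
        }
    }
}

public class PreparedCreateKeyspace {
    public static void main(String[] args) {
        PropDefSketch def = new PropDefSketch();
        def.properties.put("class", "SimpleStrategy");
        def.validate(); // first execution succeeds
        try {
            def.validate(); // second execution of the same prepared statement fails
        } catch (IllegalStateException e) {
            System.out.println("second validate() failed: " + e.getMessage());
        }
    }
}
```

A validate() that only reads the object's state can presumably be called once per execution without this failure, which is the direction a fix would take.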



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (CASSANDRA-6471) Executing a prepared CREATE KEYSPACE multiple times doesn't work

2013-12-12 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13846147#comment-13846147
 ] 

Sylvain Lebresne commented on CASSANDRA-6471:
-

It does.

 Executing a prepared CREATE KEYSPACE multiple times doesn't work
 

 Key: CASSANDRA-6471
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6471
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
Priority: Trivial
 Fix For: 1.2.13

 Attachments: 6471.txt


 See user reports on the java driver JIRA: 
 https://datastax-oss.atlassian.net/browse/JAVA-223. Preparing CREATE KEYSPACE 
 queries is not particularly useful but there is no reason for it to be broken.
 The reason is that the KSPropDef/CFPropDef.validate() methods are not 
 idempotent. Attaching a simple patch to fix this.





[jira] [Commented] (CASSANDRA-5357) Query cache

2013-12-12 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13846150#comment-13846150
 ] 

Sylvain Lebresne commented on CASSANDRA-5357:
-

bq. That's a very interesting idea, and a good fit with existing best practices

Isn't that pretty much exactly the initial idea for CASSANDRA-1956 (except 
maybe that the filter would be hard-coded to the head of the row), to which 
you argued that a query cache was more generic and handled secondary indexes 
in particular? (Note that I'm not against the idea; it had my preference 
initially, if only for simplicity's sake. I'm just trying to make sure I 
understand the thought process on this.)

 Query cache
 ---

 Key: CASSANDRA-5357
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5357
 Project: Cassandra
  Issue Type: Bug
Reporter: Jonathan Ellis
Assignee: Vijay

 I think that most people expect the row cache to act like a query cache, 
 because that's a reasonable model.  Caching the entire partition is, in 
 retrospect, not really reasonable, so it's not surprising that it catches 
 people off guard, especially given the confusion we've inflicted on ourselves 
 as to what a row constitutes.
 I propose replacing it with a true query cache.





[jira] [Commented] (CASSANDRA-6151) CqlPagingRecorderReader Used when Partition Key Is Explicitly Stated

2013-12-12 Thread Shridhar (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13846157#comment-13846157
 ] 

Shridhar commented on CASSANDRA-6151:
-

[~devP] I have modified the already existing patch (v3) so that it works for IN 
clauses on partition keys. It seems to be working for us on version 1.2.10. 
Attached is the modified patch (v4), which should be applied on top of the v3 patch.

 CqlPagingRecorderReader Used when Partition Key Is Explicitly Stated
 

 Key: CASSANDRA-6151
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6151
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: Russell Alexander Spitzer
Assignee: Alex Liu
Priority: Minor
 Attachments: 6151-1.2-branch.txt, 6151-v2-1.2-branch.txt, 
 6151-v3-1.2-branch.txt, 6151-v4-1.2.10-branch.txt


 From 
 http://stackoverflow.com/questions/19189649/composite-key-in-cassandra-with-pig/19211546#19211546
 The user was attempting to load a single partition using a where clause in a 
 pig load statement. 
 CQL Table
 {code}
 CREATE table data (
   occurday  text,
   seqnumber int,
   occurtimems bigint,
   unique bigint,
   fields map<text, text>,
   primary key ((occurday, seqnumber), occurtimems, unique)
 )
 {code}
 Pig Load statement Query
 {code}
 data = LOAD 
 'cql://ks/data?where_clause=seqnumber%3D10%20AND%20occurday%3D%272013-10-01%27'
  USING CqlStorage();
 {code}
 This results in an exception when processed by the CqlPagingRecordReader, 
 which attempts to page this query even though it selects at most one 
 partition. This leads to an invalid CQL statement. 
 CqlPagingRecordReader Query
 {code}
 SELECT * FROM data WHERE token(occurday,seqnumber) > ? AND
 token(occurday,seqnumber) <= ? AND occurday='A Great Day' 
 AND seqnumber=1 LIMIT 1000 ALLOW FILTERING
 {code}
 Exception
 {code}
  InvalidRequestException(why:occurday cannot be restricted by more than one 
 relation if it includes an Equal)
 {code}
 I'm not sure it is worth the special case, but a modification to not use the 
 paging record reader when the entire partition key is specified would solve 
 this issue. 
 h3. Solution
  If there are EQUAL clauses for all the partition keys, we use the query 
 {code}
   SELECT * FROM data 
   WHERE occurday='A Great Day' 
AND seqnumber=1 LIMIT 1000 ALLOW FILTERING
 {code}
 instead of 
 {code}
   SELECT * FROM data 
   WHERE token(occurday,seqnumber) > ? 
AND token(occurday,seqnumber) <= ? 
AND occurday='A Great Day' 
AND seqnumber=1 LIMIT 1000 ALLOW FILTERING
 {code}
 The baseline implementation retrieves all data of all rows around the 
 ring. This new feature retrieves all data of a single wide row, one level 
 lower than the baseline. It helps the use case where the user is only 
 interested in a specific wide row, so the whole job isn't spent 
 retrieving all the rows around the ring.
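The decision in the solution above can be sketched as a simple check: fall back to a direct, non-paging query only when the where clause carries an equality restriction on every partition key. The names below are illustrative, not taken from the actual patch, which works inside the Hadoop input format:

```java
import java.util.List;
import java.util.Map;

// Sketch of the check described above: skip the token-range paging reader
// only when every partition key is restricted by an equality clause.
// Method and variable names are illustrative, not from the actual patch.
public class PartitionKeyCheck {
    static boolean hasEqualityOnAllPartitionKeys(List<String> partitionKeys,
                                                 Map<String, String> whereEqualities) {
        // whereEqualities maps column name -> literal for each "col = value" clause.
        return whereEqualities.keySet().containsAll(partitionKeys);
    }

    public static void main(String[] args) {
        List<String> partitionKeys = List.of("occurday", "seqnumber");
        Map<String, String> where =
                Map.of("occurday", "'2013-10-01'", "seqnumber", "10");
        if (hasEqualityOnAllPartitionKeys(partitionKeys, where)) {
            System.out.println("single partition: skip the token-range paging reader");
        }
    }
}
```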





[jira] [Updated] (CASSANDRA-6151) CqlPagingRecorderReader Used when Partition Key Is Explicitly Stated

2013-12-12 Thread Shridhar (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shridhar updated CASSANDRA-6151:


Attachment: 6151-v4-1.2.10-branch.txt

 CqlPagingRecorderReader Used when Partition Key Is Explicitly Stated
 

 Key: CASSANDRA-6151
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6151
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: Russell Alexander Spitzer
Assignee: Alex Liu
Priority: Minor
 Attachments: 6151-1.2-branch.txt, 6151-v2-1.2-branch.txt, 
 6151-v3-1.2-branch.txt, 6151-v4-1.2.10-branch.txt


 From 
 http://stackoverflow.com/questions/19189649/composite-key-in-cassandra-with-pig/19211546#19211546
 The user was attempting to load a single partition using a where clause in a 
 pig load statement. 
 CQL Table
 {code}
 CREATE table data (
   occurday  text,
   seqnumber int,
   occurtimems bigint,
   unique bigint,
   fields map<text, text>,
   primary key ((occurday, seqnumber), occurtimems, unique)
 )
 {code}
 Pig Load statement Query
 {code}
 data = LOAD 
 'cql://ks/data?where_clause=seqnumber%3D10%20AND%20occurday%3D%272013-10-01%27'
  USING CqlStorage();
 {code}
 This results in an exception when processed by the CqlPagingRecordReader, 
 which attempts to page this query even though it selects at most one 
 partition. This leads to an invalid CQL statement. 
 CqlPagingRecordReader Query
 {code}
 SELECT * FROM data WHERE token(occurday,seqnumber) > ? AND
 token(occurday,seqnumber) <= ? AND occurday='A Great Day' 
 AND seqnumber=1 LIMIT 1000 ALLOW FILTERING
 {code}
 Exception
 {code}
  InvalidRequestException(why:occurday cannot be restricted by more than one 
 relation if it includes an Equal)
 {code}
 I'm not sure it is worth the special case, but a modification to not use the 
 paging record reader when the entire partition key is specified would solve 
 this issue. 
 h3. Solution
  If there are EQUAL clauses for all the partition keys, we use the query 
 {code}
   SELECT * FROM data 
   WHERE occurday='A Great Day' 
AND seqnumber=1 LIMIT 1000 ALLOW FILTERING
 {code}
 instead of 
 {code}
   SELECT * FROM data 
   WHERE token(occurday,seqnumber) > ? 
AND token(occurday,seqnumber) <= ? 
AND occurday='A Great Day' 
AND seqnumber=1 LIMIT 1000 ALLOW FILTERING
 {code}
 The baseline implementation retrieves all data of all rows around the 
 ring. This new feature retrieves all data of a single wide row, one level 
 lower than the baseline. It helps the use case where the user is only 
 interested in a specific wide row, so the whole job isn't spent 
 retrieving all the rows around the ring.





[jira] [Updated] (CASSANDRA-6476) Assertion error in MessagingService.addCallback

2013-12-12 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-6476:


Summary: Assertion error in MessagingService.addCallback  (was: Assertion 
error in native transport)

 Assertion error in MessagingService.addCallback
 ---

 Key: CASSANDRA-6476
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6476
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.0.2 DCE
Reporter: Theo Hultberg
Assignee: Sylvain Lebresne

 Two of the three Cassandra nodes in one of our clusters just started behaving 
 very strangely about an hour ago. Within a minute of each other they started 
 logging AssertionErrors (see stack traces here: 
 https://gist.github.com/iconara/7917438) over and over again. The client lost 
 connection with the nodes at roughly the same time. The nodes were still up, 
 and even if no clients were connected to them they continued logging the same 
 errors over and over.
 The errors are in the native transport (specifically 
 MessagingService.addCallback) which makes me suspect that it has something to 
 do with a test that we started running this afternoon. I've just implemented 
 support for frame compression in my CQL driver cql-rb. About two hours before 
 this happened I deployed a version of the application which enabled Snappy 
 compression on all frames larger than 64 bytes. It's not impossible that 
 there is a bug somewhere in the driver or compression library that caused 
 this -- but at the same time, it feels like it shouldn't be possible to make 
 C* a zombie with a bad frame.
 Restarting seems to have got them back running again, but I suspect they will 
 go down again sooner or later.





[jira] [Created] (CASSANDRA-6478) Importing sstables through sstableloader tombstoned data

2013-12-12 Thread Mathijs Vogelzang (JIRA)
Mathijs Vogelzang created CASSANDRA-6478:


 Summary: Importing sstables through sstableloader tombstoned data
 Key: CASSANDRA-6478
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6478
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: Cassandra 2.0.3
Reporter: Mathijs Vogelzang


We've tried to import sstables from a snapshot of a 1.2.10 cluster into a 
running 2.0.3 cluster. When using sstableloader, for some reason we couldn't 
retrieve some of the data. When investigating further, it turned out that 
tombstones in the far future were created for some rows. (sstable2json returned 
the correct data, but with an addition of metadata: {"deletionInfo":
{"markedForDeleteAt":1796952039620607,"localDeletionTime":0}} to the rows that 
seemed missing).
This happened again exactly the same way when we cleared the new cluster and 
ran sstableloader again.

The sstables themselves seemed fine: they were working on the old cluster, and 
upgradesstables says there's nothing to upgrade. We were finally able to 
move our data correctly by copying the SSTables with scp into the right 
directory on the hosts of the new cluster (though naturally this 
required much more disk space than when sstableloader only sends the relevant 
parts).
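For reference, the markedForDeleteAt value quoted above reads like a timestamp in microseconds since the epoch (an assumption for this sketch; Cassandra client timestamps are conventionally microseconds). A quick conversion shows why these tombstones sit in the far future:

```java
import java.time.Instant;

// Assuming markedForDeleteAt is microseconds since the epoch, convert it to an
// Instant; the result lies years past the 2013 report, i.e. "the far future".
public class TombstoneTimestamp {
    public static void main(String[] args) {
        long markedForDeleteAtMicros = 1796952039620607L;
        Instant when = Instant.ofEpochSecond(markedForDeleteAtMicros / 1_000_000L);
        System.out.println(when); // an Instant well after the 2013 report
    }
}
```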






[jira] [Updated] (CASSANDRA-6478) Importing sstables through sstableloader tombstoned data

2013-12-12 Thread Mathijs Vogelzang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mathijs Vogelzang updated CASSANDRA-6478:
-

Since Version: 2.0.3
Fix Version/s: 2.0.3

 Importing sstables through sstableloader tombstoned data
 

 Key: CASSANDRA-6478
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6478
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: Cassandra 2.0.3
Reporter: Mathijs Vogelzang
 Fix For: 2.0.3


 We've tried to import sstables from a snapshot of a 1.2.10 cluster into a 
 running 2.0.3 cluster. When using sstableloader, for some reason we couldn't 
 retrieve some of the data. When investigating further, it turned out that 
 tombstones in the far future were created for some rows. (sstable2json 
 returned the correct data, but with an addition of metadata: 
 {"deletionInfo":
 {"markedForDeleteAt":1796952039620607,"localDeletionTime":0}} to the rows 
 that seemed missing).
 This happened again exactly the same way when we cleared the new cluster and 
 ran sstableloader again.
 The sstables themselves seemed fine: they were working on the old cluster, and 
 upgradesstables says there's nothing to upgrade. We were finally able to 
 move our data correctly by copying the SSTables with scp into the right 
 directory on the hosts of the new cluster (though naturally this 
 required much more disk space than when sstableloader only sends the relevant 
 parts).





[jira] [Commented] (CASSANDRA-6318) IN predicates on non-primary-key columns (%s) is not yet supported

2013-12-12 Thread Sergey Nagaytsev (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13846244#comment-13846244
 ] 

Sergey Nagaytsev commented on CASSANDRA-6318:
-

OK, I have heard it, so:

1) How important is this deemed by the team? My usage scenario is an RDBMS-oriented 
(but happily JOIN-less) DBAL plus a business application; it makes IN() queries on 
any columns as if they were indexed integer IDs. Does the team consider this 
usage scenario mainstream or popular? What is the team's mental image of C* usage: 
a log/sensor dump with expiry, a single-task inverse index, a focused social 
site with a few purpose tables (user, post/picture/whatever, like/comment, tag) and 
a few inverse indices? How far is that from mine: take a big business app or a 
projection of it, switch from Oracle to C* in one config line, and fix bugs if any?

2) How hard is it to implement? What classes are involved? What is the magnitude 
of the change: just add one loop instead of a single execution, or will it take a 
ground-up architecture overhaul? If I comment out the exception in 
cassandra.cql3.statements.SelectStatement.RawStatement#prepare, what and where 
will break? 

 IN predicates on non-primary-key columns (%s) is not yet supported
 --

 Key: CASSANDRA-6318
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6318
 Project: Cassandra
  Issue Type: Bug
Reporter: Sergey Nagaytsev
  Labels: cql3
 Attachments: CASSANDRA_6318_test.cql


 Query:
 SELECT * FROM post WHERE blog IN (1,2) AND author=3 ALLOW FILTERING -- 
 contrived
 Error: IN predicates on non-primary-key columns (blog) is not yet supported
 Please either implement it, set a milestone, or say it will never be implemented!
 P.S. I did search and seemingly found no issue/plan related to it. Maybe 
 CASSANDRA-6048?
 P.S.2 What is the recommended workaround for this? Manual index tables, and what 
 are the design guidelines for them?





[jira] [Commented] (CASSANDRA-6476) Assertion error in MessagingService.addCallback

2013-12-12 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13846256#comment-13846256
 ] 

Sylvain Lebresne commented on CASSANDRA-6476:
-

MessagingService ain't the native transport (fyi, the native transport code 
doesn't leak outside the org.apache.cassandra.transport package); it's the 
intra-cluster messaging. In fact the stack trace shows that the write that 
triggers it doesn't even come from the native protocol but from thrift (which 
means you either use thrift for some things or something is whack).

But truth is, given the stack trace, where the write comes from doesn't 
matter.  The assertion that fails is the line
{noformat}
assert previous == null;
{noformat}
in MessagingService.addCallback. And that's where things stop making sense to 
me. This means that we tried to add a new message to the callback map but there 
was already one with the same messageId. Except that messageId is very 
straightforwardly generated by an {{incrementAndGet}} on a static 
AtomicInteger. And as far as I can tell, no other code inserts into the callback 
map without grabbing a new messageId this way (except setCallbackForTests, but 
that is only used in a unit test).

Therefore, it seems the only way such a messageId conflict could happen is that 
we've gone full cycle on the AtomicInteger and hit the same id again. But 
entries in callbacks expire after the rpc timeout, so that implies > 4 billion 
requests in about 10 seconds. Sounds pretty unlikely to me.

But I might be missing something obvious: [~jbellis], I believe you might be 
more familiar with MessagingService, any idea?
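The wraparound scenario can be demonstrated directly with a plain AtomicInteger. This is a minimal sketch of the id-generation arithmetic, not the MessagingService code itself:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Minimal demonstration of the wraparound discussed above: an AtomicInteger id
// generator silently wraps past Integer.MAX_VALUE, so after ~4 billion ids the
// same messageId could in principle be handed out again.
public class MessageIdWraparound {
    public static void main(String[] args) {
        AtomicInteger idGen = new AtomicInteger(Integer.MAX_VALUE - 1);
        System.out.println(idGen.incrementAndGet()); // 2147483647 (Integer.MAX_VALUE)
        System.out.println(idGen.incrementAndGet()); // -2147483648 (wrapped to Integer.MIN_VALUE)
    }
}
```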


 Assertion error in MessagingService.addCallback
 ---

 Key: CASSANDRA-6476
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6476
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.0.2 DCE
Reporter: Theo Hultberg
Assignee: Sylvain Lebresne

 Two of the three Cassandra nodes in one of our clusters just started behaving 
 very strangely about an hour ago. Within a minute of each other they started 
 logging AssertionErrors (see stack traces here: 
 https://gist.github.com/iconara/7917438) over and over again. The client lost 
 connection with the nodes at roughly the same time. The nodes were still up, 
 and even if no clients were connected to them they continued logging the same 
 errors over and over.
 The errors are in the native transport (specifically 
 MessagingService.addCallback) which makes me suspect that it has something to 
 do with a test that we started running this afternoon. I've just implemented 
 support for frame compression in my CQL driver cql-rb. About two hours before 
 this happened I deployed a version of the application which enabled Snappy 
 compression on all frames larger than 64 bytes. It's not impossible that 
 there is a bug somewhere in the driver or compression library that caused 
 this -- but at the same time, it feels like it shouldn't be possible to make 
 C* a zombie with a bad frame.
 Restarting seems to have got them back running again, but I suspect they will 
 go down again sooner or later.





[jira] [Commented] (CASSANDRA-6476) Assertion error in MessagingService.addCallback

2013-12-12 Thread Theo Hultberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13846262#comment-13846262
 ] 

Theo Hultberg commented on CASSANDRA-6476:
--

Sorry, there was another stack trace I meant to attach to the same gist that 
said something about the native transport. I've added it now: 
https://gist.github.com/iconara/7917438 (see the second file). Those errors 
started with ERROR [Native-Transport-Requests:7924], which made me draw the 
connection between our switch to compressed requests and the errors (since 
cql-rb only runs over the CQL protocol).

I've looked at the logs but my untrained eyes don't find any more hints as to 
what happened. I can post the full logs if that helps you.

 Assertion error in MessagingService.addCallback
 ---

 Key: CASSANDRA-6476
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6476
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.0.2 DCE
Reporter: Theo Hultberg
Assignee: Sylvain Lebresne

 Two of the three Cassandra nodes in one of our clusters just started behaving 
 very strangely about an hour ago. Within a minute of each other they started 
 logging AssertionErrors (see stack traces here: 
 https://gist.github.com/iconara/7917438) over and over again. The client lost 
 connection with the nodes at roughly the same time. The nodes were still up, 
 and even if no clients were connected to them they continued logging the same 
 errors over and over.
 The errors are in the native transport (specifically 
 MessagingService.addCallback) which makes me suspect that it has something to 
 do with a test that we started running this afternoon. I've just implemented 
 support for frame compression in my CQL driver cql-rb. About two hours before 
 this happened I deployed a version of the application which enabled Snappy 
 compression on all frames larger than 64 bytes. It's not impossible that 
 there is a bug somewhere in the driver or compression library that caused 
 this -- but at the same time, it feels like it shouldn't be possible to make 
 C* a zombie with a bad frame.
 Restarting seems to have got them back running again, but I suspect they will 
 go down again sooner or later.





[jira] [Commented] (CASSANDRA-6413) Saved KeyCache prints success to log; but no file present

2013-12-12 Thread Chris Burroughs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13846264#comment-13846264
 ] 

Chris Burroughs commented on CASSANDRA-6413:


As a followup note, this bug appears to have prevented any of the system-* 
KeyCaches from being saved.

 Saved KeyCache prints success to log; but no file present
 -

 Key: CASSANDRA-6413
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6413
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 1.2.11
Reporter: Chris Burroughs
Assignee: Mikhail Stepura
Priority: Minor
 Fix For: 1.2.13, 2.0.4

 Attachments: 6413-v2.txt, CASSANDRA-1.2-6413.patch


 Cluster has a single keyspace with 3 CFs.  All used to have ROWS_ONLY, two 
 were switched to KEYS_ONLY about 2 days ago.  Row cache continues to save 
 fine, but there is no saved key cache file present on any node in the cluster.
 {noformat}
 6925: INFO [CompactionExecutor:12] 2013-11-27 10:12:02,284 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 118 ms
 6941:DEBUG [CompactionExecutor:14] 2013-11-27 10:17:02,163 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 6942: INFO [CompactionExecutor:14] 2013-11-27 10:17:02,310 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 146 ms
 8745:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,140 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 8746: INFO [CompactionExecutor:6] 2013-11-27 10:37:25,283 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 143 ms
 8747:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,283 
 AutoSavingCache.java (line 233) Deleting old KeyCache files.
 8748: INFO [CompactionExecutor:6] 2013-11-27 10:37:25,625 
 AutoSavingCache.java (line 289) Saved KeyCache (21181 items) in 342 ms
 8749:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,625 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 8750: INFO [CompactionExecutor:6] 2013-11-27 10:37:25,759 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 134 ms
 8751:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,759 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 8752: INFO [CompactionExecutor:6] 2013-11-27 10:37:25,893 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 133 ms
 8753:DEBUG [CompactionExecutor:6] 2013-11-27 10:37:25,893 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 8754: INFO [CompactionExecutor:6] 2013-11-27 10:37:26,026 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 133 ms
 9915:DEBUG [CompactionExecutor:18] 2013-11-27 10:42:01,851 
 AutoSavingCache.java (line 233) Deleting old KeyCache files.
 9916: INFO [CompactionExecutor:18] 2013-11-27 10:42:02,185 
 AutoSavingCache.java (line 289) Saved KeyCache (22067 items) in 334 ms
 9917:DEBUG [CompactionExecutor:17] 2013-11-27 10:42:02,279 
 AutoSavingCache.java (line 233) Deleting old RowCache files.
 9918: INFO [CompactionExecutor:17] 2013-11-27 10:42:02,411 
 AutoSavingCache.java (line 289) Saved RowCache (5 items) in 131 ms
 {noformat}
 {noformat}
 $ ll ~/shared/saved_caches/
 total 3472
 -rw-rw-r-- 1 cassandra cassandra 3551608 Nov 27 10:42 Foo-Bar-RowCache-b.db
 {noformat}





[jira] [Commented] (CASSANDRA-6462) cleanup ClassCastException

2013-12-12 Thread Andreas Schnitzerling (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13846267#comment-13846267
 ] 

Andreas Schnitzerling commented on CASSANDRA-6462:
--

What can I do now? Repair doesn't help and a major compaction doesn't help. Cleanup 
still stops with the SSTableReader / ClassCastException.
Reset the node (delete everything and start fresh)? Wait for 2.0.4? Tx.

 cleanup ClassCastException
 --

 Key: CASSANDRA-6462
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6462
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: Windows 7 / Java 1.7.0.25
Reporter: Andreas Schnitzerling
Assignee: Jonathan Ellis
  Labels: cleanup, compaction
 Fix For: 2.0.4


 I enlarged the cluster from 4 to 8 nodes. While cleaning up the old nodes 
 with nodetool cleanup, it breaks with an exception. I started cleanup from a 
 different computer to manage them sequentially.
 {panel:title=cmd.exe}
 Error occurred during cleanup
 java.util.concurrent.ExecutionException: java.lang.ClassCastException: 
 org.apache.cassandra.io.sstable.SSTableReader$EmptyCompactionScanner cannot be 
 cast to org.apache.cassandra.io.sstable.SSTableScanner
 at java.util.concurrent.FutureTask.report(Unknown Source)
 at java.util.concurrent.FutureTask.get(Unknown Source)
 at org.apache.cassandra.db.compaction.CompactionManager.performAllSSTableOperation(CompactionManager.java:227)
 at org.apache.cassandra.db.compaction.CompactionManager.performCleanup(CompactionManager.java:265)
 at org.apache.cassandra.db.ColumnFamilyStore.forceCleanup(ColumnFamilyStore.java:1054)
 at org.apache.cassandra.service.StorageService.forceKeyspaceCleanup(StorageService.java:2038)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
 at java.lang.reflect.Method.invoke(Unknown Source)
 at sun.reflect.misc.Trampoline.invoke(Unknown Source)
 at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
 at java.lang.reflect.Method.invoke(Unknown Source)
 at sun.reflect.misc.MethodUtil.invoke(Unknown Source)
 at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(Unknown Source)
 at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(Unknown Source)
 at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(Unknown Source)
 at com.sun.jmx.mbeanserver.PerInterface.invoke(Unknown Source)
 at com.sun.jmx.mbeanserver.MBeanSupport.invoke(Unknown Source)
 at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(Unknown Source)
 at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(Unknown Source)
 at javax.management.remote.rmi.RMIConnectionImpl.doOperation(Unknown Source)
 at javax.management.remote.rmi.RMIConnectionImpl.access$300(Unknown Source)
 at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(Unknown Source)
 at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(Unknown Source)
 at javax.management.remote.rmi.RMIConnectionImpl.invoke(Unknown Source)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
 at java.lang.reflect.Method.invoke(Unknown Source)
 at sun.rmi.server.UnicastServerRef.dispatch(Unknown Source)
 at sun.rmi.transport.Transport$1.run(Unknown Source)
 at sun.rmi.transport.Transport$1.run(Unknown Source)
 at java.security.AccessController.doPrivileged(Native Method)
 at sun.rmi.transport.Transport.serviceCall(Unknown Source)
 at sun.rmi.transport.tcp.TCPTransport.handleMessages(Unknown Source)
 at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(Unknown Source)
 at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(Unknown Source)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)
 Caused by: java.lang.ClassCastException: 
 org.apache.cassandra.io.sstable.SSTableReader$EmptyCompactionScanner cannot be 
 cast to org.apache.cassandra.io.sstable.SSTableScanner
 at org.apache.cassandra.db.compaction.CompactionManager.doCleanupCompact
 

[jira] [Commented] (CASSANDRA-5839) Save repair data to system table

2013-12-12 Thread Jimmy Mårdell (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13846268#comment-13846268
 ] 

Jimmy Mårdell commented on CASSANDRA-5839:
--

I agree on changing the primary key, that's nice.

In an RDBMS I would agree on storing stats separately. Not as obvious in this 
case. What do you consider stats (and not status) in the current schema?

There are other reasons for running repair than just ensuring tombstones are 
replicated, so I don't think a hardcoded factor based on gc_grace is the way to 
go.


 Save repair data to system table
 

 Key: CASSANDRA-5839
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5839
 Project: Cassandra
  Issue Type: New Feature
  Components: Core, Tools
Reporter: Jonathan Ellis
Assignee: Jimmy Mårdell
Priority: Minor
 Fix For: 2.0.4

 Attachments: 2.0.4-5839-draft.patch


 As noted in CASSANDRA-2405, it would be useful to store repair results, 
 particularly with sub-range repair available (CASSANDRA-5280).





[jira] [Commented] (CASSANDRA-5357) Query cache

2013-12-12 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13846344#comment-13846344
 ] 

Jonathan Ellis commented on CASSANDRA-5357:
---

Yeah, but we've already given up on 2i since that turns out to be a mess. :)

 Query cache
 ---

 Key: CASSANDRA-5357
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5357
 Project: Cassandra
  Issue Type: Bug
Reporter: Jonathan Ellis
Assignee: Vijay

 I think that most people expect the row cache to act like a query cache, 
 because that's a reasonable model.  Caching the entire partition is, in 
 retrospect, not really reasonable, so it's not surprising that it catches 
 people off guard, especially given the confusion we've inflicted on ourselves 
 as to what a row constitutes.
 I propose replacing it with a true query cache.





[jira] [Updated] (CASSANDRA-6478) Importing sstables through sstableloader tombstoned data

2013-12-12 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6478:
--

Since Version:   (was: 2.0.3)
Fix Version/s: (was: 2.0.3)
   2.0.4
 Assignee: Tyler Hobbs

 Importing sstables through sstableloader tombstoned data
 

 Key: CASSANDRA-6478
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6478
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: Cassandra 2.0.3
Reporter: Mathijs Vogelzang
Assignee: Tyler Hobbs
 Fix For: 2.0.4


 We've tried to import sstables from a snapshot of a 1.2.10 cluster into a 
 running 2.0.3 cluster. When using sstableloader, for some reason we couldn't 
 retrieve some of the data. When investigating further, it turned out that 
 tombstones in the far future were created for some rows. (sstable2json 
 returned the correct data, but with metadata of 
 {"deletionInfo": {"markedForDeleteAt": 1796952039620607, "localDeletionTime": 0}} 
 added to the rows that seemed missing.)
 This happened again exactly the same way when we cleared the new cluster and 
 ran sstableloader again.
 The sstables themselves seemed fine: they were working on the old cluster, 
 upgradesstables said there was nothing to upgrade, and we were finally able to 
 move our data correctly by copying the SSTables with scp into the right 
 directory on the hosts of the new cluster (but naturally this required much 
 more disk space than when sstableloader only sends the relevant parts).





[jira] [Created] (CASSANDRA-6479) RejectedExecutionException during drain action

2013-12-12 Thread koray sariteke (JIRA)
koray sariteke created CASSANDRA-6479:
-

 Summary: RejectedExecutionException during drain action
 Key: CASSANDRA-6479
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6479
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: cassandra 2.0.2
oracle jdk 1.7.0_45
os: ubuntu 
Reporter: koray sariteke


During a rolling-restart upgrade from 2.0.2 to 2.0.3, I ran the nodetool drain 
command and got the following exception in system.log:

 INFO [FlushWriter:295] 2013-12-12 16:50:26,324 Memtable.java (line 328) Writing Memtable-compactions_in_progress@353013048(0/0 serialized/live bytes, 1 ops)
ERROR [CompactionExecutor:561] 2013-12-12 16:50:26,324 CassandraDaemon.java (line 187) Exception in thread Thread[CompactionExecutor:561,1,main]
java.util.concurrent.RejectedExecutionException: ThreadPoolExecutor has shut down
        at org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor$1.rejectedExecution(DebuggableThreadPoolExecutor.java:61)
        at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
        at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372)
        at org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.execute(DebuggableThreadPoolExecutor.java:145)
        at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:110)
        at org.apache.cassandra.db.ColumnFamilyStore.switchMemtable(ColumnFamilyStore.java:746)
        at org.apache.cassandra.db.ColumnFamilyStore.forceFlush(ColumnFamilyStore.java:811)
        at org.apache.cassandra.db.SystemKeyspace.forceBlockingFlush(SystemKeyspace.java:423)
        at org.apache.cassandra.db.SystemKeyspace.finishCompaction(SystemKeyspace.java:197)
        at org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:225)
        at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
        at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
        at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
        at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
        at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:197)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)
 INFO [CompactionExecutor:561] 2013-12-12 16:50:26,328 ColumnFamilyStore.java (line 734) Enqueuing flush of Memtable-compactions_in_progress@1617654553(182/6925 serialized/live bytes, 7 ops)
ERROR [CompactionExecutor:561] 2013-12-12 16:50:26,328 CassandraDaemon.java (line 187) Exception in thread Thread[CompactionExecutor:561,1,main]
java.util.concurrent.RejectedExecutionException: ThreadPoolExecutor has shut down
        at org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor$1.rejectedExecution(DebuggableThreadPoolExecutor.java:61)
        at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
        at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372)
        at org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.execute(DebuggableThreadPoolExecutor.java:145)
        at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:110)
        at org.apache.cassandra.db.ColumnFamilyStore.switchMemtable(ColumnFamilyStore.java:746)
        at org.apache.cassandra.db.ColumnFamilyStore.forceFlush(ColumnFamilyStore.java:811)
        at org.apache.cassandra.db.SystemKeyspace.forceBlockingFlush(SystemKeyspace.java:423)
        at org.apache.cassandra.db.SystemKeyspace.startCompaction(SystemKeyspace.java:187)
        at org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:107)
        at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
        at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
        at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
        at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
        at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:197)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at 

[jira] [Commented] (CASSANDRA-6462) cleanup ClassCastException

2013-12-12 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13846352#comment-13846352
 ] 

Jonathan Ellis commented on CASSANDRA-6462:
---

That, or cherry pick the fix yourself.

 cleanup ClassCastException
 --

 Key: CASSANDRA-6462
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6462
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: Windows 7 / Java 1.7.0.25
Reporter: Andreas Schnitzerling
Assignee: Jonathan Ellis
  Labels: cleanup, compaction
 Fix For: 2.0.4


 I enlarged the cluster from 4 to 8 nodes. While cleaning up the old nodes 
 with nodetool cleanup, it broke with an exception. I started cleanup from a 
 different computer to manage the nodes sequentially.
 {panel:title=cmd.exe}
 Error occurred during cleanup
 java.util.concurrent.ExecutionException: java.lang.ClassCastException: 
 org.apache.cassandra.io.sstable.SSTableReader$EmptyCompactionScanner cannot be 
 cast to org.apache.cassandra.io.sstable.SSTableScanner
 at java.util.concurrent.FutureTask.report(Unknown Source)
 at java.util.concurrent.FutureTask.get(Unknown Source)
         at org.apache.cassandra.db.compaction.CompactionManager.performAllSSTableOperation(CompactionManager.java:227)
         at org.apache.cassandra.db.compaction.CompactionManager.performCleanup(CompactionManager.java:265)
         at org.apache.cassandra.db.ColumnFamilyStore.forceCleanup(ColumnFamilyStore.java:1054)
         at org.apache.cassandra.service.StorageService.forceKeyspaceCleanup(StorageService.java:2038)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
         at java.lang.reflect.Method.invoke(Unknown Source)
         at sun.reflect.misc.Trampoline.invoke(Unknown Source)
         at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
         at java.lang.reflect.Method.invoke(Unknown Source)
         at sun.reflect.misc.MethodUtil.invoke(Unknown Source)
         at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(Unknown Source)
         at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(Unknown Source)
         at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(Unknown Source)
         at com.sun.jmx.mbeanserver.PerInterface.invoke(Unknown Source)
         at com.sun.jmx.mbeanserver.MBeanSupport.invoke(Unknown Source)
         at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(Unknown Source)
         at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(Unknown Source)
         at javax.management.remote.rmi.RMIConnectionImpl.doOperation(Unknown Source)
         at javax.management.remote.rmi.RMIConnectionImpl.access$300(Unknown Source)
         at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(Unknown Source)
         at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(Unknown Source)
         at javax.management.remote.rmi.RMIConnectionImpl.invoke(Unknown Source)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
         at java.lang.reflect.Method.invoke(Unknown Source)
         at sun.rmi.server.UnicastServerRef.dispatch(Unknown Source)
         at sun.rmi.transport.Transport$1.run(Unknown Source)
         at sun.rmi.transport.Transport$1.run(Unknown Source)
         at java.security.AccessController.doPrivileged(Native Method)
         at sun.rmi.transport.Transport.serviceCall(Unknown Source)
         at sun.rmi.transport.tcp.TCPTransport.handleMessages(Unknown Source)
         at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(Unknown Source)
         at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(Unknown Source)
         at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
         at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
         at java.lang.Thread.run(Unknown Source)
 Caused by: java.lang.ClassCastException: 
 org.apache.cassandra.io.sstable.SSTableReader$EmptyCompactionScanner cannot be 
 cast to org.apache.cassandra.io.sstable.SSTableScanner
         at org.apache.cassandra.db.compaction.CompactionManager.doCleanupCompaction(CompactionManager.java:563)
         at org.apache.cassandra.db.compaction.CompactionManager.access$400(CompactionManager.java:62)
         at 
 

[jira] [Resolved] (CASSANDRA-6479) RejectedExecutionException during drain action

2013-12-12 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-6479.
---

Resolution: Duplicate

Duplicates CASSANDRA-1483.  TLDR it's harmless.

 RejectedExecutionException during drain action
 --

 Key: CASSANDRA-6479
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6479
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: cassandra 2.0.2
 oracle jdk 1.7.0_45
 os: ubuntu 
Reporter: koray sariteke

 For rolling restart upgrade process from 2.0.2 to 2.0.3, ran nodetool drain 
 command and get exception at system.log

[jira] [Commented] (CASSANDRA-6476) Assertion error in MessagingService.addCallback

2013-12-12 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13846377#comment-13846377
 ] 

Jonathan Ellis commented on CASSANDRA-6476:
---

bq. that's where things stop making sense to me. This means that we tried to 
add a new message to the callback map but there was already one with the same 
messageId. Except that messageId is very straightforwardly generated by an 
incrementAndGet on a static AtomicInteger

Right.  I'm not sure why that assert even exists TBH; I have some idea that in 
the super distant past SP used to manually inject callbacks in some cases but 
if so that code is long dead.

Is it possible a bug in native compression code is corrupting random crap 
elsewhere in the JVM?
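The uniqueness argument above can be sketched in a few lines. This is a hypothetical, simplified stand-in (the class, method names, and callback map here are illustrative, not Cassandra's actual MessagingService): ids produced by incrementAndGet on a static AtomicInteger are unique within a process until the int wraps, so an addCallback-style duplicate check should never trip under normal operation.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the messageId pattern discussed above; the real
// MessagingService differs, but the uniqueness reasoning is the same.
public class MessageIdSketch {
    private static final AtomicInteger idGen = new AtomicInteger(0);
    private static final Set<Integer> callbacks = ConcurrentHashMap.newKeySet();

    // Mimics the addCallback duplicate check: returns false only if the id
    // was already registered, which incrementAndGet should make impossible
    // short of int wraparound or corrupted state.
    static boolean addCallback(int messageId) {
        return callbacks.add(messageId);
    }

    static int nextId() {
        return idGen.incrementAndGet();
    }

    // Registers n fresh ids and reports how many collided (expected: 0).
    static int duplicateCount(int n) {
        int dups = 0;
        for (int i = 0; i < n; i++)
            if (!addCallback(nextId()))
                dups++;
        return dups;
    }

    public static void main(String[] args) {
        System.out.println(duplicateCount(100_000));  // prints 0
    }
}
```

Under this model, seeing the assertion fire on two nodes at once does point away from the id generator itself and toward external corruption.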

 Assertion error in MessagingService.addCallback
 ---

 Key: CASSANDRA-6476
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6476
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.0.2 DCE
Reporter: Theo Hultberg
Assignee: Sylvain Lebresne

 Two of the three Cassandra nodes in one of our clusters just started behaving 
 very strangely about an hour ago. Within a minute of each other they started 
 logging AssertionErrors (see stack traces here: 
 https://gist.github.com/iconara/7917438) over and over again. The client lost 
 connection with the nodes at roughly the same time. The nodes were still up, 
 and even if no clients were connected to them they continued logging the same 
 errors over and over.
 The errors are in the native transport (specifically 
 MessagingService.addCallback) which makes me suspect that it has something to 
 do with a test that we started running this afternoon. I've just implemented 
 support for frame compression in my CQL driver cql-rb. About two hours before 
 this happened I deployed a version of the application which enabled Snappy 
 compression on all frames larger than 64 bytes. It's not impossible that 
 there is a bug somewhere in the driver or compression library that caused 
 this -- but at the same time, it feels like it shouldn't be possible to make 
 C* a zombie with a bad frame.
 Restarting seems to have got them back running again, but I suspect they will 
 go down again sooner or later.





[jira] [Commented] (CASSANDRA-6476) Assertion error in MessagingService.addCallback

2013-12-12 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13846405#comment-13846405
 ] 

Sylvain Lebresne commented on CASSANDRA-6476:
-

bq. Is it possible a bug in native compression code is corrupting random crap 
elsewhere in the JVM?

I have no clue; that would be a pretty serious JVM bug imo if that were the 
case. It would also be uncanny for random corrupted crap to trigger the same 
assertion on different nodes (but well, everything is possible). All I can say 
on the matter is that the native protocol uses the same compression libs as 
sstable compression, and in basically the same way.

 Assertion error in MessagingService.addCallback
 ---

 Key: CASSANDRA-6476
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6476
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.0.2 DCE
Reporter: Theo Hultberg
Assignee: Sylvain Lebresne






[jira] [Created] (CASSANDRA-6480) Custom secondary index options in CQL3

2013-12-12 Thread Andrés de la Peña (JIRA)
Andrés de la Peña created CASSANDRA-6480:


 Summary: Custom secondary index options in CQL3
 Key: CASSANDRA-6480
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6480
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Andrés de la Peña


The CQL3 CREATE INDEX statement syntax does not allow specifying the options 
map internally used by custom indexes. 





[jira] [Commented] (CASSANDRA-6008) Getting 'This should never happen' error at startup due to sstables missing

2013-12-12 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13846460#comment-13846460
 ] 

Tyler Hobbs commented on CASSANDRA-6008:


bq. Right, but that should only affect purging range tombstones, since we write 
the row tombstone based on emptyColumnFamily, not on the Reducer container.

Well, it's true that we'll still write out the row tombstone, but we'll fail to 
purge the cells that it shadows (except for the first one), so the delete will 
appear to have worked, but both the tombstone and cells will exist in the new 
sstable. After gcGrace has passed, the row tombstone will be purged and any 
cells that remain will be revived.
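The revival hazard described above can be illustrated with a minimal sketch (hypothetical names, not Cassandra's compaction code): a cell shadowed by a row tombstone must be purged with or before the tombstone; if the tombstone is dropped after gcGrace while the shadowed cell survived compaction, the cell becomes visible again.

```java
// Hedged illustration of the tombstone-revival hazard, assuming a trivial
// visibility rule: a cell is hidden while a row tombstone with a timestamp
// at or after the cell's write still exists.
public class TombstoneRevivalSketch {
    // tombstoneTimestamp == null models "tombstone purged after gcGrace".
    static boolean visible(long cellTimestamp, Long tombstoneTimestamp) {
        return tombstoneTimestamp == null || cellTimestamp > tombstoneTimestamp;
    }

    public static void main(String[] args) {
        long cell = 5L;        // cell written at t=5
        Long tombstone = 10L;  // row deleted at t=10
        System.out.println(visible(cell, tombstone));  // false: cell shadowed
        tombstone = null;      // tombstone purged, shadowed cell left behind
        System.out.println(visible(cell, tombstone));  // true: cell "revived"
    }
}
```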

 Getting 'This should never happen' error at startup due to sstables missing
 ---

 Key: CASSANDRA-6008
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6008
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: John Carrino
Assignee: Tyler Hobbs
 Fix For: 2.0.4

 Attachments: 6008-2.0-v1.patch, 6008-trunk-v1.patch


 Exception encountered during startup: Unfinished compactions reference 
 missing sstables. This should never happen since compactions are marked 
 finished before we start removing the old sstables
 This happens when sstables that have been compacted away are removed, but 
 they still have entries in the system.compactions_in_progress table.
 Normally this should not happen because the entries in 
 system.compactions_in_progress are deleted before the old sstables are 
 deleted.
 However at startup recovery time, old sstables are deleted (NOT BEFORE they 
 are removed from the compactions_in_progress table) and then after that is 
 done it does a truncate using SystemKeyspace.discardCompactionsInProgress
 We ran into a case where the disk filled up and the node died and was bounced 
 and then failed to truncate this table on startup, and then got stuck hitting 
 this exception in ColumnFamilyStore.removeUnfinishedCompactionLeftovers.
 Maybe on startup we can delete from this table incrementally as we clean 
 stuff up in the same way that compactions delete from this table before they 
 delete old sstables.





[jira] [Commented] (CASSANDRA-6476) Assertion error in MessagingService.addCallback

2013-12-12 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13846464#comment-13846464
 ] 

Jonathan Ellis commented on CASSANDRA-6476:
---

[~iconara] did you see the MS asserts on multiple nodes?

 Assertion error in MessagingService.addCallback
 ---

 Key: CASSANDRA-6476
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6476
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.0.2 DCE
Reporter: Theo Hultberg
Assignee: Sylvain Lebresne






[jira] [Updated] (CASSANDRA-6447) SELECT someColumns FROM table results in AssertionError in AbstractQueryPager.discardFirst

2013-12-12 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-6447:


Attachment: 6447.txt

I believe the fix is a tiny bit less trivial than that. If the first row in 
discardFirst has no live data, we need to check the following rows until we 
find one to discard; otherwise paging would end up returning the same result 
twice. Not sure why discardFirst is not handling that correctly when 
discardLast is, but anyway, attaching a patch to fix it (the patch also 
slightly modifies discardLast, because it was not handling the case where 
there were fewer live rows than we want to discard).
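The discard logic described above can be sketched as follows. This is a hedged simplification, not the attached patch: rows are modeled only by their live-cell counts, and we keep dropping leading rows until one row with live data has actually been discarded, so the page boundary never repeats a live result.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Hypothetical sketch of the discardFirst fix discussed above: skip past
// rows with no live data so that exactly one live row is discarded.
public class DiscardFirstSketch {
    // Each element is a row's live-cell count; mutates and returns the list.
    static List<Integer> discardFirst(List<Integer> liveCounts) {
        Iterator<Integer> it = liveCounts.iterator();
        boolean discardedLive = false;
        while (it.hasNext() && !discardedLive) {
            int live = it.next();
            it.remove();                  // drop the row either way
            if (live > 0)
                discardedLive = true;     // stop once a live row was discarded
        }
        return liveCounts;
    }

    public static void main(String[] args) {
        // First two rows have no live data (e.g. only deleted columns),
        // as in the DeletedColumn case from the step-by-step debug below.
        System.out.println(discardFirst(new ArrayList<>(Arrays.asList(0, 0, 3, 2))));  // [2]
    }
}
```

Discarding only the first row here (as the pre-patch code effectively did) would leave the live row behind and return it again on the next page.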

 SELECT someColumns FROM table results in AssertionError in 
 AbstractQueryPager.discardFirst
 --

 Key: CASSANDRA-6447
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6447
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cluster: single node server (ubuntu)
 Cassandra version: 2.0.3 (server/client)
 Client: Datastax cassandra-driver-core 2.0.0-rc1
Reporter: Julien Aymé
Assignee: Julien Aymé
 Fix For: 2.0.4

 Attachments: 6447.txt, cassandra-2.0-6447.patch, stacktrace.txt


 I have a query which must read all the rows from the table:
 Query: SELECT key, col1, col2, col3 FROM mytable
 Here is the corresponding code (this is using datastax driver):
 {code}
 ResultSet result = session.execute("SELECT key, col1, col2, col3 FROM mytable");
 for (Row row : result) {
     // do some work with row
 }
 {code}
 Messages sent from the client to Cassandra:
 * 1st: {{QUERY SELECT key, col1, col2, col3 FROM mytable([cl=ONE, vals=[], 
 skip=false, psize=5000, state=null, serialCl=ONE])}}
 * 2nd: {{QUERY SELECT key, col1, col2, col3 FROM mytable([cl=ONE, vals=[], 
 skip=false, psize=5000, state=java.nio.HeapByteBuffer[pos=24 lim=80 
 cap=410474], serialCl=ONE])}}
 On the first message, everything is fine, and the server returns 5000 rows.
 On the second message, paging is in progress, and the server fails in 
 AbstractQueryPager.discardFirst: AssertionError (stack trace attached).
 Here is some more info (step by step debugging on reception of 2nd message):
 {code}
 AbstractQueryPager.fetchPage(int):
 * pageSize=5000, currentPageSize=5001, rows size=5002, liveCount=5001
 * containsPreviousLast(rows.get(0)) returns true
 - AbstractQueryPager.discardFirst(List<Row>):
 * rows size=5002
 * first=TreeMapBackedSortedColumns[with TreeMap size=1]
 - AbstractQueryPager.discardHead(ColumnFamily, ...):
 * counter = ColumnCounter$GroupByPrefix
 * iter.hasNext() returns true (TreeMap$ValueIterator with TreeMap size=1)
 * Column c = DeletedColumn
 * counter.count() - c.isLive returns false (c is DeletedColumn)
 * counter.live() = 0
 * iter.hasNext() returns false
 * Math.min(0, toDiscard==1) returns 0
 - AbstractQueryPager.discardFirst(List<Row>):
 * discarded = 0;
 * count = newCf.getColumnCount() = 0;
 {code}
 -  assert discarded == 1 *throws AssertionError*





[jira] [Updated] (CASSANDRA-6481) Batchlog endpoint candidates should be picked randomly, not sorted by proximity

2013-12-12 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-6481:
-

Attachment: 6481.txt

 Batchlog endpoint candidates should be picked randomly, not sorted by 
 proximity
 ---

 Key: CASSANDRA-6481
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6481
 Project: Cassandra
  Issue Type: Bug
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 1.2.13, 2.0.4

 Attachments: 6481.txt


 Batchlog endpoint candidates should be picked randomly, not sorted by 
 proximity. I'll be lazy and just copy-paste some lines from IRC:
 [20:23:27] rbranson:   is there an issue where batch logs tend to get written 
 to a subset of the nodes?
 [20:28:04] rbranson:   I mean all the write batches are going thru 10% of the 
 nodes
 [20:28:16] rbranson:   it means writes won't scale linearly w/the cluster size
 Attaching a trivial patch.





[jira] [Created] (CASSANDRA-6481) Batchlog endpoint candidates should be picked randomly, not sorted by proximity

2013-12-12 Thread Aleksey Yeschenko (JIRA)
Aleksey Yeschenko created CASSANDRA-6481:


 Summary: Batchlog endpoint candidates should be picked randomly, 
not sorted by proximity
 Key: CASSANDRA-6481
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6481
 Project: Cassandra
  Issue Type: Bug
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 1.2.13, 2.0.4
 Attachments: 6481.txt

Batchlog endpoint candidates should be picked randomly, not sorted by 
proximity. I'll be lazy and just copy-paste some lines from IRC:

[20:23:27] rbranson: is there an issue where batch logs tend to get written 
to a subset of the nodes?
[20:28:04] rbranson: I mean all the write batches are going thru 10% of the 
nodes
[20:28:16] rbranson: it means writes won't scale linearly w/the cluster size

Attaching a trivial patch.
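The change being described can be sketched like this (a hedged, illustrative stand-in, not the attached patch; the class and method names are hypothetical): instead of snitch-sorting batchlog endpoint candidates by proximity, which funnels every batch through the nearest few nodes, shuffle the candidate list so load spreads across the cluster.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch of random batchlog endpoint selection: a random
// sample of the candidates rather than the proximity-sorted head.
public class BatchlogCandidateSketch {
    static List<String> pickCandidates(List<String> endpoints, int count) {
        List<String> shuffled = new ArrayList<>(endpoints);
        Collections.shuffle(shuffled);  // random order, no proximity bias
        return shuffled.subList(0, Math.min(count, shuffled.size()));
    }

    public static void main(String[] args) {
        System.out.println(pickCandidates(
                List.of("n1", "n2", "n3", "n4", "n5"), 2));
    }
}
```

With proximity sorting, every coordinator in a region tends to pick the same nearest endpoints; shuffling makes each node roughly equally likely to host a batchlog write, which is what restores linear write scaling.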





[jira] [Commented] (CASSANDRA-6008) Getting 'This should never happen' error at startup due to sstables missing

2013-12-12 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13846492#comment-13846492
 ] 

Jonathan Ellis commented on CASSANDRA-6008:
---

How about this to clean it up a bit more? 
https://github.com/jbellis/cassandra/tree/CASSANDRA-6008

 Getting 'This should never happen' error at startup due to sstables missing
 ---

 Key: CASSANDRA-6008
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6008
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: John Carrino
Assignee: Tyler Hobbs
 Fix For: 2.0.4

 Attachments: 6008-2.0-v1.patch, 6008-trunk-v1.patch


 Exception encountered during startup: Unfinished compactions reference 
 missing sstables. This should never happen since compactions are marked 
 finished before we start removing the old sstables
 This happens when sstables that have been compacted away are removed, but 
 they still have entries in the system.compactions_in_progress table.
 Normally this should not happen because the entries in 
 system.compactions_in_progress are deleted before the old sstables are 
 deleted.
 However at startup recovery time, old sstables are deleted (NOT BEFORE they 
 are removed from the compactions_in_progress table) and then after that is 
 done it does a truncate using SystemKeyspace.discardCompactionsInProgress
 We ran into a case where the disk filled up and the node died and was bounced 
 and then failed to truncate this table on startup, and then got stuck hitting 
 this exception in ColumnFamilyStore.removeUnfinishedCompactionLeftovers.
 Maybe on startup we can delete from this table incrementally as we clean 
 stuff up in the same way that compactions delete from this table before they 
 delete old sstables.
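The ordering invariant the report asks for can be sketched as follows; this is an illustrative toy model with hypothetical names, not Cassandra's actual API: remove the in-progress marker *before* deleting the sstables it references, during startup recovery just as during normal compaction, so a crash can never leave a marker pointing at sstables that are already gone.

```java
import java.util.*;

// Toy model of the invariant (names are hypothetical): "inProgress" stands in
// for system.compactions_in_progress, "liveSSTables" for the on-disk sstables.
public class CompactionCleanup {
    final Set<String> inProgress = new HashSet<>();
    final Set<String> liveSSTables = new HashSet<>();

    void finishCompaction(String taskId, Collection<String> obsolete) {
        inProgress.remove(taskId);        // step 1: mark finished first
        liveSSTables.removeAll(obsolete); // step 2: only then drop old sstables
    }
}
```

If the process dies between the two steps, the worst case is an orphaned sstable, which is recoverable; the reverse order can leave a marker referencing missing files, which is the "This should never happen" startup failure.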



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (CASSANDRA-6481) Batchlog endpoint candidates should be picked randomly, not sorted by proximity

2013-12-12 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13846497#comment-13846497
 ] 

Jonathan Ellis commented on CASSANDRA-6481:
---

If dsnitch actually worked then I think this would be fine, but fixing dsnitch 
is probably out of scope here so +1

 Batchlog endpoint candidates should be picked randomly, not sorted by 
 proximity
 ---

 Key: CASSANDRA-6481
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6481
 Project: Cassandra
  Issue Type: Bug
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 1.2.13, 2.0.4

 Attachments: 6481.txt


 Batchlog endpoint candidates should be picked randomly, not sorted by 
 proximity. I'll be lazy and just copy-paste some lines from IRC:
 [20:23:27] rbranson:   is there an issue where batch logs tend to get written 
 to a subset of the nodes?
 [20:28:04] rbranson:   I mean all the write batches are going thru 10% of the 
 nodes
 [20:28:16] rbranson:   it means writes won't scale linearly w/the cluster size
 Attaching a trivial patch.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


git commit: Randomize batchlog candidates selection

2013-12-12 Thread aleksey
Updated Branches:
  refs/heads/cassandra-1.2 6ab82a469 -> f7f7598a2


Randomize batchlog candidates selection

patch by Aleksey Yeschenko; reviewed by Jonathan Ellis for
CASSANDRA-6481


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f7f7598a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f7f7598a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f7f7598a

Branch: refs/heads/cassandra-1.2
Commit: f7f7598a20e20ff3c4ee8e0e5b425680a480d9e0
Parents: 6ab82a4
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu Dec 12 20:53:58 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu Dec 12 20:53:58 2013 +0300

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/service/StorageProxy.java | 3 +--
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7f7598a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6f5f23b..fa48a27 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -16,6 +16,7 @@
(CASSANDRA-6413)
  * (Hadoop) add describe_local_ring (CASSANDRA-6268)
  * Fix handling of concurrent directory creation failure (CASSANDRA-6459)
+ * Randomize batchlog candidates selection (CASSANDRA-6481)
 
 
 1.2.12

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7f7598a/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java b/src/java/org/apache/cassandra/service/StorageProxy.java
index f195285..3e9f2cb 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -447,8 +447,7 @@ public class StorageProxy implements StorageProxyMBean
 
         if (candidates.size() > 2)
         {
-            IEndpointSnitch snitch = DatabaseDescriptor.getEndpointSnitch();
-            snitch.sortByProximity(FBUtilities.getBroadcastAddress(), candidates);
+            Collections.shuffle(candidates);
             candidates = candidates.subList(0, 2);
         }
 



[jira] [Commented] (CASSANDRA-6481) Batchlog endpoint candidates should be picked randomly, not sorted by proximity

2013-12-12 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13846502#comment-13846502
 ] 

Jason Brown commented on CASSANDRA-6481:


+1 on the patch, as well.

[~jbellis] In what way do you think dsnitch doesn't work?

 Batchlog endpoint candidates should be picked randomly, not sorted by 
 proximity
 ---

 Key: CASSANDRA-6481
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6481
 Project: Cassandra
  Issue Type: Bug
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 1.2.13, 2.0.4

 Attachments: 6481.txt


 Batchlog endpoint candidates should be picked randomly, not sorted by 
 proximity. I'll be lazy and just copy-paste some lines from IRC:
 [20:23:27] rbranson:   is there an issue where batch logs tend to get written 
 to a subset of the nodes?
 [20:28:04] rbranson:   I mean all the write batches are going thru 10% of the 
 nodes
 [20:28:16] rbranson:   it means writes won't scale linearly w/the cluster size
 Attaching a trivial patch.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[2/3] git commit: Randomize batchlog candidates selection

2013-12-12 Thread aleksey
Randomize batchlog candidates selection

patch by Aleksey Yeschenko; reviewed by Jonathan Ellis for
CASSANDRA-6481


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f7f7598a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f7f7598a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f7f7598a

Branch: refs/heads/cassandra-2.0
Commit: f7f7598a20e20ff3c4ee8e0e5b425680a480d9e0
Parents: 6ab82a4
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu Dec 12 20:53:58 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu Dec 12 20:53:58 2013 +0300

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/service/StorageProxy.java | 3 +--
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7f7598a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6f5f23b..fa48a27 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -16,6 +16,7 @@
(CASSANDRA-6413)
  * (Hadoop) add describe_local_ring (CASSANDRA-6268)
  * Fix handling of concurrent directory creation failure (CASSANDRA-6459)
+ * Randomize batchlog candidates selection (CASSANDRA-6481)
 
 
 1.2.12

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7f7598a/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java b/src/java/org/apache/cassandra/service/StorageProxy.java
index f195285..3e9f2cb 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -447,8 +447,7 @@ public class StorageProxy implements StorageProxyMBean
 
         if (candidates.size() > 2)
         {
-            IEndpointSnitch snitch = DatabaseDescriptor.getEndpointSnitch();
-            snitch.sortByProximity(FBUtilities.getBroadcastAddress(), candidates);
+            Collections.shuffle(candidates);
             candidates = candidates.subList(0, 2);
         }
 



[1/3] git commit: Update versions for 1.2.13 release

2013-12-12 Thread aleksey
Updated Branches:
  refs/heads/cassandra-2.0 c99734c5f -> f5526d540


Update versions for 1.2.13 release


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6ab82a46
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6ab82a46
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6ab82a46

Branch: refs/heads/cassandra-2.0
Commit: 6ab82a46984ccbf5eed4244ceef6fa30d781eebb
Parents: faa9d51
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Dec 11 15:17:25 2013 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Dec 11 15:17:25 2013 +0100

--
 NEWS.txt | 9 +
 build.xml| 2 +-
 debian/changelog | 6 ++
 3 files changed, 16 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6ab82a46/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 915729a..6293448 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -14,6 +14,15 @@ restore snapshots created with the previous major version using the
 using the provided 'sstableupgrade' tool.
 
 
+1.2.13
+==
+
+Upgrading
+-
+- Nothing specific to this release, but please see 1.2.12 if you are upgrading
+  from a previous version.
+
+
 1.2.12
 ==
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6ab82a46/build.xml
--
diff --git a/build.xml b/build.xml
index 8efb7a3..23f8c71 100644
--- a/build.xml
+++ b/build.xml
@@ -25,7 +25,7 @@
     <property name="debuglevel" value="source,lines,vars"/>
 
     <!-- default version and SCM information -->
-    <property name="base.version" value="1.2.12"/>
+    <property name="base.version" value="1.2.13"/>
     <property name="scm.connection" value="scm:git://git.apache.org/cassandra.git"/>
     <property name="scm.developerConnection" value="scm:git://git.apache.org/cassandra.git"/>
     <property name="scm.url" value="http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=tree"/>

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6ab82a46/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index 93894b1..b419cb7 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+cassandra (1.2.13) unstable; urgency=low
+
+  * New release
+
+ -- Sylvain Lebresne slebre...@apache.org  Wed, 11 Dec 2013 15:16:39 +0100
+
 cassandra (1.2.12) unstable; urgency=low
 
   * New release



[3/3] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-12-12 Thread aleksey
Merge branch 'cassandra-1.2' into cassandra-2.0

Conflicts:
NEWS.txt
build.xml
debian/changelog


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f5526d54
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f5526d54
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f5526d54

Branch: refs/heads/cassandra-2.0
Commit: f5526d5405630a6b7af3ce9e24e102f38d1c268e
Parents: c99734c f7f7598
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu Dec 12 20:56:30 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu Dec 12 20:56:30 2013 +0300

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/service/StorageProxy.java | 3 +--
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f5526d54/CHANGES.txt
--
diff --cc CHANGES.txt
index e0338db,fa48a27..30f863e
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -22,45 -16,10 +22,46 @@@ Merged from 1.2
 (CASSANDRA-6413)
   * (Hadoop) add describe_local_ring (CASSANDRA-6268)
   * Fix handling of concurrent directory creation failure (CASSANDRA-6459)
+  * Randomize batchlog candidates selection (CASSANDRA-6481)
  
  
 -1.2.12
 +2.0.3
 + * Fix FD leak on slice read path (CASSANDRA-6275)
 + * Cancel read meter task when closing SSTR (CASSANDRA-6358)
 + * free off-heap IndexSummary during bulk (CASSANDRA-6359)
 + * Recover from IOException in accept() thread (CASSANDRA-6349)
 + * Improve Gossip tolerance of abnormally slow tasks (CASSANDRA-6338)
 + * Fix trying to hint timed out counter writes (CASSANDRA-6322)
 + * Allow restoring specific columnfamilies from archived CL (CASSANDRA-4809)
 + * Avoid flushing compaction_history after each operation (CASSANDRA-6287)
 + * Fix repair assertion error when tombstones expire (CASSANDRA-6277)
 + * Skip loading corrupt key cache (CASSANDRA-6260)
 + * Fixes for compacting larger-than-memory rows (CASSANDRA-6274)
 + * Compact hottest sstables first and optionally omit coldest from
 +   compaction entirely (CASSANDRA-6109)
 + * Fix modifying column_metadata from thrift (CASSANDRA-6182)
 + * cqlsh: fix LIST USERS output (CASSANDRA-6242)
 + * Add IRequestSink interface (CASSANDRA-6248)
 + * Update memtable size while flushing (CASSANDRA-6249)
 + * Provide hooks around CQL2/CQL3 statement execution (CASSANDRA-6252)
 + * Require Permission.SELECT for CAS updates (CASSANDRA-6247)
 + * New CQL-aware SSTableWriter (CASSANDRA-5894)
 + * Reject CAS operation when the protocol v1 is used (CASSANDRA-6270)
 + * Correctly throw error when frame too large (CASSANDRA-5981)
 + * Fix serialization bug in PagedRange with 2ndary indexes (CASSANDRA-6299)
 + * Fix CQL3 table validation in Thrift (CASSANDRA-6140)
 + * Fix bug missing results with IN clauses (CASSANDRA-6327)
 + * Fix paging with reversed slices (CASSANDRA-6343)
  + * Set minTimestamp correctly to be able to drop expired sstables (CASSANDRA-6337)
 + * Support NaN and Infinity as float literals (CASSANDRA-6003)
 + * Remove RF from nodetool ring output (CASSANDRA-6289)
 + * Fix attempting to flush empty rows (CASSANDRA-6374)
 + * Fix potential out of bounds exception when paging (CASSANDRA-6333)
 +Merged from 1.2:
 + * Optimize FD phi calculation (CASSANDRA-6386)
 + * Improve initial FD phi estimate when starting up (CASSANDRA-6385)
 + * Don't list CQL3 table in CLI describe even if named explicitely 
 +   (CASSANDRA-5750)
   * Invalidate row cache when dropping CF (CASSANDRA-6351)
   * add non-jamm path for cached statements (CASSANDRA-6293)
   * (Hadoop) Require CFRR batchSize to be at least 2 (CASSANDRA-6114)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f5526d54/src/java/org/apache/cassandra/service/StorageProxy.java
--



[4/4] git commit: Merge branch 'cassandra-2.0' into trunk

2013-12-12 Thread aleksey
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/56e48423
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/56e48423
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/56e48423

Branch: refs/heads/trunk
Commit: 56e4842321a1158be2f1f738205c56c6e5b82d73
Parents: 706058c f5526d5
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu Dec 12 20:57:09 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu Dec 12 20:57:09 2013 +0300

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/service/StorageProxy.java | 3 +--
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/56e48423/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/56e48423/src/java/org/apache/cassandra/service/StorageProxy.java
--



[1/4] git commit: Update versions for 1.2.13 release

2013-12-12 Thread aleksey
Updated Branches:
  refs/heads/trunk 706058c78 -> 56e484232


Update versions for 1.2.13 release


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6ab82a46
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6ab82a46
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6ab82a46

Branch: refs/heads/trunk
Commit: 6ab82a46984ccbf5eed4244ceef6fa30d781eebb
Parents: faa9d51
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Dec 11 15:17:25 2013 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Dec 11 15:17:25 2013 +0100

--
 NEWS.txt | 9 +
 build.xml| 2 +-
 debian/changelog | 6 ++
 3 files changed, 16 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6ab82a46/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 915729a..6293448 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -14,6 +14,15 @@ restore snapshots created with the previous major version using the
 using the provided 'sstableupgrade' tool.
 
 
+1.2.13
+==
+
+Upgrading
+-
+- Nothing specific to this release, but please see 1.2.12 if you are upgrading
+  from a previous version.
+
+
 1.2.12
 ==
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6ab82a46/build.xml
--
diff --git a/build.xml b/build.xml
index 8efb7a3..23f8c71 100644
--- a/build.xml
+++ b/build.xml
@@ -25,7 +25,7 @@
     <property name="debuglevel" value="source,lines,vars"/>
 
     <!-- default version and SCM information -->
-    <property name="base.version" value="1.2.12"/>
+    <property name="base.version" value="1.2.13"/>
     <property name="scm.connection" value="scm:git://git.apache.org/cassandra.git"/>
     <property name="scm.developerConnection" value="scm:git://git.apache.org/cassandra.git"/>
     <property name="scm.url" value="http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=tree"/>

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6ab82a46/debian/changelog
--
diff --git a/debian/changelog b/debian/changelog
index 93894b1..b419cb7 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,9 @@
+cassandra (1.2.13) unstable; urgency=low
+
+  * New release
+
+ -- Sylvain Lebresne slebre...@apache.org  Wed, 11 Dec 2013 15:16:39 +0100
+
 cassandra (1.2.12) unstable; urgency=low
 
   * New release



[2/4] git commit: Randomize batchlog candidates selection

2013-12-12 Thread aleksey
Randomize batchlog candidates selection

patch by Aleksey Yeschenko; reviewed by Jonathan Ellis for
CASSANDRA-6481


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f7f7598a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f7f7598a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f7f7598a

Branch: refs/heads/trunk
Commit: f7f7598a20e20ff3c4ee8e0e5b425680a480d9e0
Parents: 6ab82a4
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu Dec 12 20:53:58 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu Dec 12 20:53:58 2013 +0300

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/service/StorageProxy.java | 3 +--
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7f7598a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6f5f23b..fa48a27 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -16,6 +16,7 @@
(CASSANDRA-6413)
  * (Hadoop) add describe_local_ring (CASSANDRA-6268)
  * Fix handling of concurrent directory creation failure (CASSANDRA-6459)
+ * Randomize batchlog candidates selection (CASSANDRA-6481)
 
 
 1.2.12

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7f7598a/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java b/src/java/org/apache/cassandra/service/StorageProxy.java
index f195285..3e9f2cb 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -447,8 +447,7 @@ public class StorageProxy implements StorageProxyMBean
 
         if (candidates.size() > 2)
         {
-            IEndpointSnitch snitch = DatabaseDescriptor.getEndpointSnitch();
-            snitch.sortByProximity(FBUtilities.getBroadcastAddress(), candidates);
+            Collections.shuffle(candidates);
             candidates = candidates.subList(0, 2);
         }
 



[3/4] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-12-12 Thread aleksey
Merge branch 'cassandra-1.2' into cassandra-2.0

Conflicts:
NEWS.txt
build.xml
debian/changelog


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f5526d54
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f5526d54
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f5526d54

Branch: refs/heads/trunk
Commit: f5526d5405630a6b7af3ce9e24e102f38d1c268e
Parents: c99734c f7f7598
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu Dec 12 20:56:30 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu Dec 12 20:56:30 2013 +0300

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/service/StorageProxy.java | 3 +--
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f5526d54/CHANGES.txt
--
diff --cc CHANGES.txt
index e0338db,fa48a27..30f863e
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -22,45 -16,10 +22,46 @@@ Merged from 1.2
 (CASSANDRA-6413)
   * (Hadoop) add describe_local_ring (CASSANDRA-6268)
   * Fix handling of concurrent directory creation failure (CASSANDRA-6459)
+  * Randomize batchlog candidates selection (CASSANDRA-6481)
  
  
 -1.2.12
 +2.0.3
 + * Fix FD leak on slice read path (CASSANDRA-6275)
 + * Cancel read meter task when closing SSTR (CASSANDRA-6358)
 + * free off-heap IndexSummary during bulk (CASSANDRA-6359)
 + * Recover from IOException in accept() thread (CASSANDRA-6349)
 + * Improve Gossip tolerance of abnormally slow tasks (CASSANDRA-6338)
 + * Fix trying to hint timed out counter writes (CASSANDRA-6322)
 + * Allow restoring specific columnfamilies from archived CL (CASSANDRA-4809)
 + * Avoid flushing compaction_history after each operation (CASSANDRA-6287)
 + * Fix repair assertion error when tombstones expire (CASSANDRA-6277)
 + * Skip loading corrupt key cache (CASSANDRA-6260)
 + * Fixes for compacting larger-than-memory rows (CASSANDRA-6274)
 + * Compact hottest sstables first and optionally omit coldest from
 +   compaction entirely (CASSANDRA-6109)
 + * Fix modifying column_metadata from thrift (CASSANDRA-6182)
 + * cqlsh: fix LIST USERS output (CASSANDRA-6242)
 + * Add IRequestSink interface (CASSANDRA-6248)
 + * Update memtable size while flushing (CASSANDRA-6249)
 + * Provide hooks around CQL2/CQL3 statement execution (CASSANDRA-6252)
 + * Require Permission.SELECT for CAS updates (CASSANDRA-6247)
 + * New CQL-aware SSTableWriter (CASSANDRA-5894)
 + * Reject CAS operation when the protocol v1 is used (CASSANDRA-6270)
 + * Correctly throw error when frame too large (CASSANDRA-5981)
 + * Fix serialization bug in PagedRange with 2ndary indexes (CASSANDRA-6299)
 + * Fix CQL3 table validation in Thrift (CASSANDRA-6140)
 + * Fix bug missing results with IN clauses (CASSANDRA-6327)
 + * Fix paging with reversed slices (CASSANDRA-6343)
  + * Set minTimestamp correctly to be able to drop expired sstables (CASSANDRA-6337)
 + * Support NaN and Infinity as float literals (CASSANDRA-6003)
 + * Remove RF from nodetool ring output (CASSANDRA-6289)
 + * Fix attempting to flush empty rows (CASSANDRA-6374)
 + * Fix potential out of bounds exception when paging (CASSANDRA-6333)
 +Merged from 1.2:
 + * Optimize FD phi calculation (CASSANDRA-6386)
 + * Improve initial FD phi estimate when starting up (CASSANDRA-6385)
 + * Don't list CQL3 table in CLI describe even if named explicitely 
 +   (CASSANDRA-5750)
   * Invalidate row cache when dropping CF (CASSANDRA-6351)
   * add non-jamm path for cached statements (CASSANDRA-6293)
   * (Hadoop) Require CFRR batchSize to be at least 2 (CASSANDRA-6114)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f5526d54/src/java/org/apache/cassandra/service/StorageProxy.java
--



[jira] [Commented] (CASSANDRA-6481) Batchlog endpoint candidates should be picked randomly, not sorted by proximity

2013-12-12 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13846516#comment-13846516
 ] 

Jonathan Ellis commented on CASSANDRA-6481:
---

bq. In what way do you think dsnitch doesn't work

CASSANDRA-6465

 Batchlog endpoint candidates should be picked randomly, not sorted by 
 proximity
 ---

 Key: CASSANDRA-6481
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6481
 Project: Cassandra
  Issue Type: Bug
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 1.2.13, 2.0.4

 Attachments: 6481.txt


 Batchlog endpoint candidates should be picked randomly, not sorted by 
 proximity. I'll be lazy and just copy-paste some lines from IRC:
 [20:23:27] rbranson:   is there an issue where batch logs tend to get written 
 to a subset of the nodes?
 [20:28:04] rbranson:   I mean all the write batches are going thru 10% of the 
 nodes
 [20:28:16] rbranson:   it means writes won't scale linearly w/the cluster size
 Attaching a trivial patch.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[4/6] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-12-12 Thread brandonwilliams
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ca7335e6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ca7335e6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ca7335e6

Branch: refs/heads/trunk
Commit: ca7335e651a963033b890f6d6d56c893f48b5ae3
Parents: f5526d5 79f7d6b
Author: Brandon Williams brandonwilli...@apache.org
Authored: Thu Dec 12 12:04:56 2013 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Thu Dec 12 12:04:56 2013 -0600

--
 bin/cassandra.in.sh | 7 +++
 bin/json2sstable| 2 +-
 bin/nodetool| 2 +-
 bin/sstable2json| 2 +-
 bin/sstablekeys | 2 +-
 bin/sstableloader   | 2 +-
 bin/sstablescrub| 2 +-
 bin/sstablesplit| 2 +-
 bin/sstableupgrade  | 2 +-
 9 files changed, 15 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ca7335e6/bin/json2sstable
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ca7335e6/bin/nodetool
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ca7335e6/bin/sstable2json
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ca7335e6/bin/sstablekeys
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ca7335e6/bin/sstableloader
--



[1/6] git commit: Set javaagent for tools. Patch by Sam Tunnecliffe, reviewed by brandonwilliams for CASSANDRA-6404

2013-12-12 Thread brandonwilliams
Updated Branches:
  refs/heads/cassandra-1.2 f7f7598a2 -> 79f7d6baf
  refs/heads/cassandra-2.0 f5526d540 -> ca7335e65
  refs/heads/trunk 56e484232 -> 448e4d46f


Set javaagent for tools.
Patch by Sam Tunnecliffe, reviewed by brandonwilliams for CASSANDRA-6404


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/79f7d6ba
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/79f7d6ba
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/79f7d6ba

Branch: refs/heads/cassandra-1.2
Commit: 79f7d6baff8644e31d6444fed8a18e85126d4ae9
Parents: f7f7598
Author: Brandon Williams brandonwilli...@apache.org
Authored: Thu Dec 12 12:04:01 2013 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Thu Dec 12 12:04:01 2013 -0600

--
 bin/cassandra.in.sh | 7 +++
 bin/json2sstable| 2 +-
 bin/nodetool| 2 +-
 bin/sstable2json| 2 +-
 bin/sstablekeys | 2 +-
 bin/sstableloader   | 2 +-
 bin/sstablescrub| 2 +-
 bin/sstablesplit| 2 +-
 bin/sstableupgrade  | 2 +-
 9 files changed, 15 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/79f7d6ba/bin/cassandra.in.sh
--
diff --git a/bin/cassandra.in.sh b/bin/cassandra.in.sh
index 2d5a932..29e0d0e 100644
--- a/bin/cassandra.in.sh
+++ b/bin/cassandra.in.sh
@@ -39,3 +39,10 @@ CLASSPATH=$CASSANDRA_CONF:$cassandra_bin
 for jar in $CASSANDRA_HOME/lib/*.jar; do
     CLASSPATH=$CLASSPATH:$jar
 done
+
+# set JVM javaagent opts to avoid warnings/errors
+if [ "$JVM_VENDOR" != "OpenJDK" -o "$JVM_VERSION" \> "1.6.0" ] \
+      || [ "$JVM_VERSION" = "1.6.0" -a "$JVM_PATCH_VERSION" -ge 23 ]
+then
+    JAVA_AGENT="$JAVA_AGENT -javaagent:$CASSANDRA_HOME/lib/jamm-0.2.5.jar"
+fi
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/cassandra/blob/79f7d6ba/bin/json2sstable
--
diff --git a/bin/json2sstable b/bin/json2sstable
index f41afd3..4a9e7bb 100755
--- a/bin/json2sstable
+++ b/bin/json2sstable
@@ -43,7 +43,7 @@ if [ -z $CLASSPATH ]; then
 exit 1
 fi
 
-$JAVA -cp $CLASSPATH -Dstorage-config=$CASSANDRA_CONF \
+$JAVA $JAVA_AGENT -cp $CLASSPATH -Dstorage-config=$CASSANDRA_CONF \
 -Dlog4j.configuration=log4j-tools.properties \
 org.apache.cassandra.tools.SSTableImport $@
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/79f7d6ba/bin/nodetool
--
diff --git a/bin/nodetool b/bin/nodetool
index d4c0439..3e3824c 100755
--- a/bin/nodetool
+++ b/bin/nodetool
@@ -85,7 +85,7 @@ case `uname` in
 ;;
 esac
 
-$JAVA -cp $CLASSPATH \
+$JAVA $JAVA_AGENT -cp $CLASSPATH \
   -Xmx32m \
   -Dlog4j.configuration=log4j-tools.properties \
   -Dstorage-config=$CASSANDRA_CONF \

http://git-wip-us.apache.org/repos/asf/cassandra/blob/79f7d6ba/bin/sstable2json
--
diff --git a/bin/sstable2json b/bin/sstable2json
index 9b116ce..63e904d 100755
--- a/bin/sstable2json
+++ b/bin/sstable2json
@@ -44,7 +44,7 @@ if [ -z $CLASSPATH ]; then
 exit 1
 fi
 
-$JAVA -cp $CLASSPATH -Dstorage-config=$CASSANDRA_CONF \
+$JAVA $JAVA_AGENT -cp $CLASSPATH -Dstorage-config=$CASSANDRA_CONF \
 -Dlog4j.configuration=log4j-tools.properties \
 org.apache.cassandra.tools.SSTableExport $@
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/79f7d6ba/bin/sstablekeys
--
diff --git a/bin/sstablekeys b/bin/sstablekeys
index 81cffd0..32f0339 100755
--- a/bin/sstablekeys
+++ b/bin/sstablekeys
@@ -48,7 +48,7 @@ if [ $# -eq 0 ]; then
 exit 2
 fi
 
-$JAVA -cp $CLASSPATH -Dstorage-config=$CASSANDRA_CONF \
+$JAVA $JAVA_AGENT -cp $CLASSPATH -Dstorage-config=$CASSANDRA_CONF \
 -Dlog4j.configuration=log4j-tools.properties \
 org.apache.cassandra.tools.SSTableExport $@ -e
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/79f7d6ba/bin/sstableloader
--
diff --git a/bin/sstableloader b/bin/sstableloader
index 7696e10..245775f 100755
--- a/bin/sstableloader
+++ b/bin/sstableloader
@@ -43,7 +43,7 @@ if [ -z $CLASSPATH ]; then
 exit 1
 fi
 
-$JAVA -ea -cp $CLASSPATH -Xmx256M \
+$JAVA $JAVA_AGENT -ea -cp $CLASSPATH -Xmx256M \
 -Dlog4j.configuration=log4j-tools.properties \
 org.apache.cassandra.tools.BulkLoader $@
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/79f7d6ba/bin/sstablescrub
--
diff --git a/bin/sstablescrub b/bin/sstablescrub
index 1266ea7..31ecf02 100755
--- a/bin/sstablescrub
+++ 

[3/6] git commit: Set javaagent for tools. Patch by Sam Tunnecliffe, reviewed by brandonwilliams for CASSANDRA-6404

2013-12-12 Thread brandonwilliams
Set javaagent for tools.
Patch by Sam Tunnecliffe, reviewed by brandonwilliams for CASSANDRA-6404


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/79f7d6ba
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/79f7d6ba
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/79f7d6ba

Branch: refs/heads/trunk
Commit: 79f7d6baff8644e31d6444fed8a18e85126d4ae9
Parents: f7f7598
Author: Brandon Williams brandonwilli...@apache.org
Authored: Thu Dec 12 12:04:01 2013 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Thu Dec 12 12:04:01 2013 -0600

--
 bin/cassandra.in.sh | 7 +++
 bin/json2sstable| 2 +-
 bin/nodetool| 2 +-
 bin/sstable2json| 2 +-
 bin/sstablekeys | 2 +-
 bin/sstableloader   | 2 +-
 bin/sstablescrub| 2 +-
 bin/sstablesplit| 2 +-
 bin/sstableupgrade  | 2 +-
 9 files changed, 15 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/79f7d6ba/bin/cassandra.in.sh
--
diff --git a/bin/cassandra.in.sh b/bin/cassandra.in.sh
index 2d5a932..29e0d0e 100644
--- a/bin/cassandra.in.sh
+++ b/bin/cassandra.in.sh
@@ -39,3 +39,10 @@ CLASSPATH=$CASSANDRA_CONF:$cassandra_bin
 for jar in $CASSANDRA_HOME/lib/*.jar; do
 CLASSPATH=$CLASSPATH:$jar
 done
+
+# set JVM javaagent opts to avoid warnings/errors
+if [ "$JVM_VENDOR" != "OpenJDK" -o "$JVM_VERSION" \> "1.6.0" ] \
+  || [ "$JVM_VERSION" = "1.6.0" -a "$JVM_PATCH_VERSION" -ge 23 ]
+then
+    JAVA_AGENT="$JAVA_AGENT -javaagent:$CASSANDRA_HOME/lib/jamm-0.2.5.jar"
+fi
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/cassandra/blob/79f7d6ba/bin/json2sstable
--
diff --git a/bin/json2sstable b/bin/json2sstable
index f41afd3..4a9e7bb 100755
--- a/bin/json2sstable
+++ b/bin/json2sstable
@@ -43,7 +43,7 @@ if [ -z $CLASSPATH ]; then
 exit 1
 fi
 
-$JAVA -cp $CLASSPATH -Dstorage-config=$CASSANDRA_CONF \
+$JAVA $JAVA_AGENT -cp $CLASSPATH -Dstorage-config=$CASSANDRA_CONF \
 -Dlog4j.configuration=log4j-tools.properties \
 org.apache.cassandra.tools.SSTableImport $@
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/79f7d6ba/bin/nodetool
--
diff --git a/bin/nodetool b/bin/nodetool
index d4c0439..3e3824c 100755
--- a/bin/nodetool
+++ b/bin/nodetool
@@ -85,7 +85,7 @@ case `uname` in
 ;;
 esac
 
-$JAVA -cp $CLASSPATH \
+$JAVA $JAVA_AGENT -cp $CLASSPATH \
   -Xmx32m \
   -Dlog4j.configuration=log4j-tools.properties \
   -Dstorage-config=$CASSANDRA_CONF \

http://git-wip-us.apache.org/repos/asf/cassandra/blob/79f7d6ba/bin/sstable2json
--
diff --git a/bin/sstable2json b/bin/sstable2json
index 9b116ce..63e904d 100755
--- a/bin/sstable2json
+++ b/bin/sstable2json
@@ -44,7 +44,7 @@ if [ -z $CLASSPATH ]; then
 exit 1
 fi
 
-$JAVA -cp $CLASSPATH -Dstorage-config=$CASSANDRA_CONF \
+$JAVA $JAVA_AGENT -cp $CLASSPATH -Dstorage-config=$CASSANDRA_CONF \
 -Dlog4j.configuration=log4j-tools.properties \
 org.apache.cassandra.tools.SSTableExport $@
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/79f7d6ba/bin/sstablekeys
--
diff --git a/bin/sstablekeys b/bin/sstablekeys
index 81cffd0..32f0339 100755
--- a/bin/sstablekeys
+++ b/bin/sstablekeys
@@ -48,7 +48,7 @@ if [ $# -eq 0 ]; then
 exit 2
 fi
 
-$JAVA -cp $CLASSPATH -Dstorage-config=$CASSANDRA_CONF \
+$JAVA $JAVA_AGENT -cp $CLASSPATH -Dstorage-config=$CASSANDRA_CONF \
 -Dlog4j.configuration=log4j-tools.properties \
 org.apache.cassandra.tools.SSTableExport $@ -e
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/79f7d6ba/bin/sstableloader
--
diff --git a/bin/sstableloader b/bin/sstableloader
index 7696e10..245775f 100755
--- a/bin/sstableloader
+++ b/bin/sstableloader
@@ -43,7 +43,7 @@ if [ -z $CLASSPATH ]; then
 exit 1
 fi
 
-$JAVA -ea -cp $CLASSPATH -Xmx256M \
+$JAVA $JAVA_AGENT -ea -cp $CLASSPATH -Xmx256M \
 -Dlog4j.configuration=log4j-tools.properties \
 org.apache.cassandra.tools.BulkLoader $@
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/79f7d6ba/bin/sstablescrub
--
diff --git a/bin/sstablescrub b/bin/sstablescrub
index 1266ea7..31ecf02 100755
--- a/bin/sstablescrub
+++ b/bin/sstablescrub
@@ -47,7 +47,7 @@ if [ "x$MAX_HEAP_SIZE" = "x" ]; then
 MAX_HEAP_SIZE=256M
 fi
 
-$JAVA -ea -cp $CLASSPATH -Xmx$MAX_HEAP_SIZE \
+$JAVA $JAVA_AGENT -ea 
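The cassandra.in.sh hunk above gates the jamm javaagent on the detected JVM. A hedged restatement of that predicate in Python for clarity (function names and inputs are illustrative; the real check is the shell test, with JVM_VENDOR, JVM_VERSION and JVM_PATCH_VERSION detected earlier by the launcher, and `\>` doing a lexicographic string comparison):

```python
# Illustrative restatement of the shell test added to cassandra.in.sh:
# enable the jamm agent on any non-OpenJDK JVM, on any version string
# greater than "1.6.0" (lexicographic, like the shell \> operator),
# or on 1.6.0 with patch level >= 23.
def use_jamm(vendor: str, version: str, patch: int) -> bool:
    return (vendor != "OpenJDK" or version > "1.6.0") or \
           (version == "1.6.0" and patch >= 23)

def java_agent_opt(cassandra_home: str, vendor: str, version: str, patch: int) -> str:
    # Mirrors: JAVA_AGENT="$JAVA_AGENT -javaagent:$CASSANDRA_HOME/lib/jamm-0.2.5.jar"
    if use_jamm(vendor, version, patch):
        return f"-javaagent:{cassandra_home}/lib/jamm-0.2.5.jar"
    return ""
```

The gate presumably exists because jamm was unreliable on older OpenJDK 1.6.0 builds; on everything newer the agent is simply appended to the tool's JVM options.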

[5/6] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-12-12 Thread brandonwilliams
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ca7335e6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ca7335e6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ca7335e6

Branch: refs/heads/cassandra-2.0
Commit: ca7335e651a963033b890f6d6d56c893f48b5ae3
Parents: f5526d5 79f7d6b
Author: Brandon Williams brandonwilli...@apache.org
Authored: Thu Dec 12 12:04:56 2013 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Thu Dec 12 12:04:56 2013 -0600

--
 bin/cassandra.in.sh | 7 +++
 bin/json2sstable| 2 +-
 bin/nodetool| 2 +-
 bin/sstable2json| 2 +-
 bin/sstablekeys | 2 +-
 bin/sstableloader   | 2 +-
 bin/sstablescrub| 2 +-
 bin/sstablesplit| 2 +-
 bin/sstableupgrade  | 2 +-
 9 files changed, 15 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ca7335e6/bin/json2sstable
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ca7335e6/bin/nodetool
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ca7335e6/bin/sstable2json
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ca7335e6/bin/sstablekeys
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ca7335e6/bin/sstableloader
--



[2/6] git commit: Set javaagent for tools. Patch by Sam Tunnecliffe, reviewed by brandonwilliams for CASSANDRA-6404

2013-12-12 Thread brandonwilliams
Set javaagent for tools.
Patch by Sam Tunnecliffe, reviewed by brandonwilliams for CASSANDRA-6404


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/79f7d6ba
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/79f7d6ba
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/79f7d6ba

Branch: refs/heads/cassandra-2.0
Commit: 79f7d6baff8644e31d6444fed8a18e85126d4ae9
Parents: f7f7598
Author: Brandon Williams brandonwilli...@apache.org
Authored: Thu Dec 12 12:04:01 2013 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Thu Dec 12 12:04:01 2013 -0600

--
 bin/cassandra.in.sh | 7 +++
 bin/json2sstable| 2 +-
 bin/nodetool| 2 +-
 bin/sstable2json| 2 +-
 bin/sstablekeys | 2 +-
 bin/sstableloader   | 2 +-
 bin/sstablescrub| 2 +-
 bin/sstablesplit| 2 +-
 bin/sstableupgrade  | 2 +-
 9 files changed, 15 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/79f7d6ba/bin/cassandra.in.sh
--
diff --git a/bin/cassandra.in.sh b/bin/cassandra.in.sh
index 2d5a932..29e0d0e 100644
--- a/bin/cassandra.in.sh
+++ b/bin/cassandra.in.sh
@@ -39,3 +39,10 @@ CLASSPATH=$CASSANDRA_CONF:$cassandra_bin
 for jar in $CASSANDRA_HOME/lib/*.jar; do
 CLASSPATH=$CLASSPATH:$jar
 done
+
+# set JVM javaagent opts to avoid warnings/errors
+if [ "$JVM_VENDOR" != "OpenJDK" -o "$JVM_VERSION" \> "1.6.0" ] \
+  || [ "$JVM_VERSION" = "1.6.0" -a "$JVM_PATCH_VERSION" -ge 23 ]
+then
+    JAVA_AGENT="$JAVA_AGENT -javaagent:$CASSANDRA_HOME/lib/jamm-0.2.5.jar"
+fi
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/cassandra/blob/79f7d6ba/bin/json2sstable
--
diff --git a/bin/json2sstable b/bin/json2sstable
index f41afd3..4a9e7bb 100755
--- a/bin/json2sstable
+++ b/bin/json2sstable
@@ -43,7 +43,7 @@ if [ -z $CLASSPATH ]; then
 exit 1
 fi
 
-$JAVA -cp $CLASSPATH -Dstorage-config=$CASSANDRA_CONF \
+$JAVA $JAVA_AGENT -cp $CLASSPATH -Dstorage-config=$CASSANDRA_CONF \
 -Dlog4j.configuration=log4j-tools.properties \
 org.apache.cassandra.tools.SSTableImport $@
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/79f7d6ba/bin/nodetool
--
diff --git a/bin/nodetool b/bin/nodetool
index d4c0439..3e3824c 100755
--- a/bin/nodetool
+++ b/bin/nodetool
@@ -85,7 +85,7 @@ case `uname` in
 ;;
 esac
 
-$JAVA -cp $CLASSPATH \
+$JAVA $JAVA_AGENT -cp $CLASSPATH \
   -Xmx32m \
   -Dlog4j.configuration=log4j-tools.properties \
   -Dstorage-config=$CASSANDRA_CONF \

http://git-wip-us.apache.org/repos/asf/cassandra/blob/79f7d6ba/bin/sstable2json
--
diff --git a/bin/sstable2json b/bin/sstable2json
index 9b116ce..63e904d 100755
--- a/bin/sstable2json
+++ b/bin/sstable2json
@@ -44,7 +44,7 @@ if [ -z $CLASSPATH ]; then
 exit 1
 fi
 
-$JAVA -cp $CLASSPATH -Dstorage-config=$CASSANDRA_CONF \
+$JAVA $JAVA_AGENT -cp $CLASSPATH -Dstorage-config=$CASSANDRA_CONF \
 -Dlog4j.configuration=log4j-tools.properties \
 org.apache.cassandra.tools.SSTableExport $@
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/79f7d6ba/bin/sstablekeys
--
diff --git a/bin/sstablekeys b/bin/sstablekeys
index 81cffd0..32f0339 100755
--- a/bin/sstablekeys
+++ b/bin/sstablekeys
@@ -48,7 +48,7 @@ if [ $# -eq 0 ]; then
 exit 2
 fi
 
-$JAVA -cp $CLASSPATH -Dstorage-config=$CASSANDRA_CONF \
+$JAVA $JAVA_AGENT -cp $CLASSPATH -Dstorage-config=$CASSANDRA_CONF \
 -Dlog4j.configuration=log4j-tools.properties \
 org.apache.cassandra.tools.SSTableExport $@ -e
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/79f7d6ba/bin/sstableloader
--
diff --git a/bin/sstableloader b/bin/sstableloader
index 7696e10..245775f 100755
--- a/bin/sstableloader
+++ b/bin/sstableloader
@@ -43,7 +43,7 @@ if [ -z $CLASSPATH ]; then
 exit 1
 fi
 
-$JAVA -ea -cp $CLASSPATH -Xmx256M \
+$JAVA $JAVA_AGENT -ea -cp $CLASSPATH -Xmx256M \
 -Dlog4j.configuration=log4j-tools.properties \
 org.apache.cassandra.tools.BulkLoader $@
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/79f7d6ba/bin/sstablescrub
--
diff --git a/bin/sstablescrub b/bin/sstablescrub
index 1266ea7..31ecf02 100755
--- a/bin/sstablescrub
+++ b/bin/sstablescrub
@@ -47,7 +47,7 @@ if [ "x$MAX_HEAP_SIZE" = "x" ]; then
 MAX_HEAP_SIZE=256M
 fi
 
-$JAVA -ea -cp $CLASSPATH -Xmx$MAX_HEAP_SIZE \
+$JAVA 

[6/6] git commit: Merge branch 'cassandra-2.0' into trunk

2013-12-12 Thread brandonwilliams
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/448e4d46
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/448e4d46
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/448e4d46

Branch: refs/heads/trunk
Commit: 448e4d46f0e050ddbb7b43aace463ff881eff7c5
Parents: 56e4842 ca7335e
Author: Brandon Williams brandonwilli...@apache.org
Authored: Thu Dec 12 12:05:07 2013 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Thu Dec 12 12:05:07 2013 -0600

--
 bin/cassandra.in.sh | 7 +++
 bin/json2sstable| 2 +-
 bin/nodetool| 2 +-
 bin/sstable2json| 2 +-
 bin/sstablekeys | 2 +-
 bin/sstableloader   | 2 +-
 bin/sstablescrub| 2 +-
 bin/sstablesplit| 2 +-
 bin/sstableupgrade  | 2 +-
 9 files changed, 15 insertions(+), 8 deletions(-)
--




[jira] [Commented] (CASSANDRA-6008) Getting 'This should never happen' error at startup due to sstables missing

2013-12-12 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13846522#comment-13846522
 ] 

Tyler Hobbs commented on CASSANDRA-6008:


+1 on the cleanup

 Getting 'This should never happen' error at startup due to sstables missing
 ---

 Key: CASSANDRA-6008
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6008
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: John Carrino
Assignee: Tyler Hobbs
 Fix For: 2.0.4

 Attachments: 6008-2.0-v1.patch, 6008-trunk-v1.patch


 Exception encountered during startup: Unfinished compactions reference 
 missing sstables. This should never happen since compactions are marked 
 finished before we start removing the old sstables
 This happens when sstables that have been compacted away are removed, but 
 they still have entries in the system.compactions_in_progress table.
 Normally this should not happen because the entries in 
 system.compactions_in_progress are deleted before the old sstables are 
 deleted.
 However at startup recovery time, old sstables are deleted (NOT BEFORE they 
 are removed from the compactions_in_progress table) and then after that is 
 done it does a truncate using SystemKeyspace.discardCompactionsInProgress
 We ran into a case where the disk filled up and the node died and was bounced 
 and then failed to truncate this table on startup, and then got stuck hitting 
 this exception in ColumnFamilyStore.removeUnfinishedCompactionLeftovers.
 Maybe on startup we can delete from this table incrementally as we clean 
 stuff up in the same way that compactions delete from this table before they 
 delete old sstables.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)
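The ordering problem described in CASSANDRA-6008 above can be sketched minimally: during normal compaction the compactions_in_progress entry is deleted before the old sstables, but startup recovery deleted files first and truncated the table afterwards, so a crash in between left entries pointing at missing sstables. A hedged sketch of the safe ordering (names and types are hypothetical, not Cassandra's actual API):

```python
# Invariant suggested by the report above: remove the
# system.compactions_in_progress entry for a task BEFORE deleting the
# old sstables it references, so a crash between the two steps can
# never leave an entry that references missing files.
def finish_compaction(task_id, old_sstables, in_progress, files):
    """Clean up one finished compaction in crash-safe order."""
    in_progress.discard(task_id)   # step 1: forget the compaction (idempotent)
    for path in old_sstables:      # step 2: only now drop superseded sstables
        files.discard(path)
```

Using set.discard rather than remove keeps both steps idempotent, so replaying the cleanup after a crash is harmless, which is exactly the property the startup path was missing.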


[jira] [Resolved] (CASSANDRA-6404) Tools emit ERRORs and WARNINGs about missing javaagent

2013-12-12 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams resolved CASSANDRA-6404.
-

   Resolution: Fixed
Fix Version/s: 1.2.14
   2.0.4
Reproduced In: 2.0.3, 1.2.12  (was: 1.2.12, 2.0.3)

I was going to say we can just ignore the warning via log4j-tools.conf, but 
thinking it through I think it makes sense to use jamm when possible with a lot 
of these.  Committed, thanks.

 Tools emit ERRORs and WARNINGs about missing javaagent 
 ---

 Key: CASSANDRA-6404
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6404
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Sam Tunnicliffe
Assignee: Sam Tunnicliffe
Priority: Minor
 Fix For: 2.0.4, 1.2.14

 Attachments: 0001-Set-javaagent-when-running-tools-in-bin.patch


 The combination of CASSANDRA-6107 & CASSANDRA-6293 has led a number of the
 tools shipped in bin/ to display the following warnings when run:
 {code}
 ERROR 15:21:47,337 Unable to initialize MemoryMeter (jamm not specified as 
 javaagent).  This means Cassandra will be unable to measure object sizes 
 accurately and may consequently OOM.
  WARN 15:21:47,506 MemoryMeter uninitialized (jamm not specified as java 
 agent); KeyCache size in JVM Heap will not be calculated accurately. Usually 
 this means cassandra-env.sh disabled jamm because you are u
 {code}
 Although harmless, these are a bit disconcerting. The simplest fix seems to 
 be to set the javaagent switch as we do for the main C* launch.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


git commit: Fix CHANGES.txt

2013-12-12 Thread aleksey
Updated Branches:
  refs/heads/cassandra-1.2 79f7d6baf - f7c914485


Fix CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f7c91448
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f7c91448
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f7c91448

Branch: refs/heads/cassandra-1.2
Commit: f7c9144852d3d61f998691595d727c24dea65a85
Parents: 79f7d6b
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu Dec 12 21:14:29 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu Dec 12 21:14:56 2013 +0300

--
 CHANGES.txt | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7c91448/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index fa48a27..b7bbe09 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,3 +1,7 @@
+1.2.14
+ * Randomize batchlog candidates selection (CASSANDRA-6481)
+
+
 1.2.13
  * Fix thundering herd on endpoint cache invalidation (CASSANDRA-6345)
  * Optimize FD phi calculation (CASSANDRA-6386)
@@ -16,7 +20,6 @@
(CASSANDRA-6413)
  * (Hadoop) add describe_local_ring (CASSANDRA-6268)
  * Fix handling of concurrent directory creation failure (CASSANDRA-6459)
- * Randomize batchlog candidates selection (CASSANDRA-6481)
 
 
 1.2.12



[2/2] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-12-12 Thread aleksey
Merge branch 'cassandra-1.2' into cassandra-2.0

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e6eb5506
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e6eb5506
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e6eb5506

Branch: refs/heads/cassandra-2.0
Commit: e6eb5506ae82cf7f6558e3b39fccbb3a7b90cce0
Parents: ca7335e f7c9144
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu Dec 12 21:16:53 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu Dec 12 21:16:53 2013 +0300

--

--




[1/3] git commit: Fix CHANGES.txt

2013-12-12 Thread aleksey
Updated Branches:
  refs/heads/trunk 448e4d46f - d16d5c4f2


Fix CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f7c91448
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f7c91448
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f7c91448

Branch: refs/heads/trunk
Commit: f7c9144852d3d61f998691595d727c24dea65a85
Parents: 79f7d6b
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu Dec 12 21:14:29 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu Dec 12 21:14:56 2013 +0300

--
 CHANGES.txt | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7c91448/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index fa48a27..b7bbe09 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,3 +1,7 @@
+1.2.14
+ * Randomize batchlog candidates selection (CASSANDRA-6481)
+
+
 1.2.13
  * Fix thundering herd on endpoint cache invalidation (CASSANDRA-6345)
  * Optimize FD phi calculation (CASSANDRA-6386)
@@ -16,7 +20,6 @@
(CASSANDRA-6413)
  * (Hadoop) add describe_local_ring (CASSANDRA-6268)
  * Fix handling of concurrent directory creation failure (CASSANDRA-6459)
- * Randomize batchlog candidates selection (CASSANDRA-6481)
 
 
 1.2.12



[1/2] git commit: Fix CHANGES.txt

2013-12-12 Thread aleksey
Updated Branches:
  refs/heads/cassandra-2.0 ca7335e65 - e6eb5506a


Fix CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f7c91448
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f7c91448
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f7c91448

Branch: refs/heads/cassandra-2.0
Commit: f7c9144852d3d61f998691595d727c24dea65a85
Parents: 79f7d6b
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu Dec 12 21:14:29 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu Dec 12 21:14:56 2013 +0300

--
 CHANGES.txt | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7c91448/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index fa48a27..b7bbe09 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,3 +1,7 @@
+1.2.14
+ * Randomize batchlog candidates selection (CASSANDRA-6481)
+
+
 1.2.13
  * Fix thundering herd on endpoint cache invalidation (CASSANDRA-6345)
  * Optimize FD phi calculation (CASSANDRA-6386)
@@ -16,7 +20,6 @@
(CASSANDRA-6413)
  * (Hadoop) add describe_local_ring (CASSANDRA-6268)
  * Fix handling of concurrent directory creation failure (CASSANDRA-6459)
- * Randomize batchlog candidates selection (CASSANDRA-6481)
 
 
 1.2.12



[2/3] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-12-12 Thread aleksey
Merge branch 'cassandra-1.2' into cassandra-2.0

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e6eb5506
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e6eb5506
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e6eb5506

Branch: refs/heads/trunk
Commit: e6eb5506ae82cf7f6558e3b39fccbb3a7b90cce0
Parents: ca7335e f7c9144
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu Dec 12 21:16:53 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu Dec 12 21:16:53 2013 +0300

--

--




[3/3] git commit: Merge branch 'cassandra-2.0' into trunk

2013-12-12 Thread aleksey
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d16d5c4f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d16d5c4f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d16d5c4f

Branch: refs/heads/trunk
Commit: d16d5c4f2992c6b36f5afe02a856d8eabc87eee9
Parents: 448e4d4 e6eb550
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu Dec 12 21:17:24 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu Dec 12 21:17:24 2013 +0300

--

--




[1/3] git commit: Fix row tombstones in larger-than-memory compactions patch by thobbs; reviewed by jbellis for CASSANDRA-6008

2013-12-12 Thread jbellis
Updated Branches:
  refs/heads/cassandra-2.0 e6eb5506a - 0d8da2ee3


Fix row tombstones in larger-than-memory compactions
patch by thobbs; reviewed by jbellis for CASSANDRA-6008


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3edb62bf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3edb62bf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3edb62bf

Branch: refs/heads/cassandra-2.0
Commit: 3edb62bf773617aeb3a348edc5667a6b0bad0ffe
Parents: e6eb550
Author: Jonathan Ellis jbel...@apache.org
Authored: Thu Dec 12 23:28:13 2013 +0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Dec 13 00:17:33 2013 +0600

--
 CHANGES.txt |  1 +
 .../db/AbstractThreadUnsafeSortedColumns.java   |  6 +-
 .../cassandra/db/AtomicSortedColumns.java   |  4 +-
 .../org/apache/cassandra/db/ColumnFamily.java   | 11 ++-
 .../apache/cassandra/db/ColumnFamilyStore.java  | 21 -
 .../org/apache/cassandra/db/ColumnIndex.java|  2 +-
 .../org/apache/cassandra/db/DeletionInfo.java   | 76 +-
 .../org/apache/cassandra/db/DeletionTime.java   | 16 
 .../apache/cassandra/db/RangeTombstoneList.java |  2 +-
 .../db/compaction/LazilyCompactedRow.java   | 54 +++--
 test/unit/org/apache/cassandra/Util.java|  6 +-
 .../org/apache/cassandra/db/KeyCacheTest.java   |  2 +-
 .../db/compaction/CompactionsPurgeTest.java | 84 +---
 .../streaming/StreamingTransferTest.java|  4 +-
 14 files changed, 220 insertions(+), 69 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3edb62bf/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 30f863e..d573e37 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.4
+ * Fix row tombstones in larger-than-memory compactions (CASSANDRA-6008)
  * Fix cleanup ClassCastException (CASSANDRA-6462)
  * Reduce gossip memory use by interning VersionedValue strings 
(CASSANDRA-6410)
  * Allow specifying datacenters to participate in a repair (CASSANDRA-6218)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3edb62bf/src/java/org/apache/cassandra/db/AbstractThreadUnsafeSortedColumns.java
--
diff --git 
a/src/java/org/apache/cassandra/db/AbstractThreadUnsafeSortedColumns.java 
b/src/java/org/apache/cassandra/db/AbstractThreadUnsafeSortedColumns.java
index 1b245eb..36b051b 100644
--- a/src/java/org/apache/cassandra/db/AbstractThreadUnsafeSortedColumns.java
+++ b/src/java/org/apache/cassandra/db/AbstractThreadUnsafeSortedColumns.java
@@ -59,7 +59,11 @@ public abstract class AbstractThreadUnsafeSortedColumns 
extends ColumnFamily
 deletionInfo = newInfo;
 }
 
-public void maybeResetDeletionTimes(int gcBefore)
+/**
+ * Purges any tombstones with a local deletion time before gcBefore.
+ * @param gcBefore a timestamp (in seconds) before which tombstones should 
be purged
+ */
+public void purgeTombstones(int gcBefore)
 {
 deletionInfo.purge(gcBefore);
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3edb62bf/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
--
diff --git a/src/java/org/apache/cassandra/db/AtomicSortedColumns.java 
b/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
index f6a6b83..b44d8bf 100644
--- a/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
+++ b/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
@@ -120,12 +120,12 @@ public class AtomicSortedColumns extends ColumnFamily
 ref.set(ref.get().with(newInfo));
 }
 
-public void maybeResetDeletionTimes(int gcBefore)
+public void purgeTombstones(int gcBefore)
 {
 while (true)
 {
 Holder current = ref.get();
-if (!current.deletionInfo.hasIrrelevantData(gcBefore))
+if (!current.deletionInfo.hasPurgeableTombstones(gcBefore))
 break;
 
 DeletionInfo purgedInfo = current.deletionInfo.copy();

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3edb62bf/src/java/org/apache/cassandra/db/ColumnFamily.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamily.java 
b/src/java/org/apache/cassandra/db/ColumnFamily.java
index 47b14b9..2c00071 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamily.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamily.java
@@ -185,7 +185,11 @@ public abstract class ColumnFamily implements 
Iterable<Column>, IRowCacheEntry
 public abstract void delete(DeletionTime deletionTime);

[2/3] git commit: clarify that we only collect row-level tombstone in LCR constructor

2013-12-12 Thread jbellis
clarify that we only collect row-level tombstone in LCR constructor


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4e9a7b8c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4e9a7b8c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4e9a7b8c

Branch: refs/heads/cassandra-2.0
Commit: 4e9a7b8c7fa55df9cda4ac06f77ee9c69b85314d
Parents: 3edb62b
Author: Jonathan Ellis jbel...@apache.org
Authored: Thu Dec 12 23:43:59 2013 +0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Dec 13 00:26:14 2013 +0600

--
 .../db/compaction/LazilyCompactedRow.java   | 26 ++--
 1 file changed, 13 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4e9a7b8c/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java 
b/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
index 3b7a3d4..0d33b22 100644
--- a/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
+++ b/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
@@ -58,8 +58,7 @@ public class LazilyCompactedRow extends AbstractCompactedRow 
implements Iterable<OnDiskAtom>
 private boolean closed;
 private ColumnIndex.Builder indexBuilder;
 private final SecondaryIndexManager.Updater indexer;
-private long maxTombstoneTimestamp;
-private DeletionInfo deletionInfo;
+private DeletionTime maxRowTombstone;
 
 public LazilyCompactedRow(CompactionController controller, List<? extends 
OnDiskAtomIterator> rows)
 {
@@ -70,23 +69,23 @@ public class LazilyCompactedRow extends 
AbstractCompactedRow implements Iterable<OnDiskAtom>
 
 // Combine top-level tombstones, keeping the one with the highest 
markedForDeleteAt timestamp.  This may be
 // purged (depending on gcBefore), but we need to remember it to 
properly delete columns during the merge
-deletionInfo = DeletionInfo.live();
-maxTombstoneTimestamp = Long.MIN_VALUE;
+maxRowTombstone = DeletionTime.LIVE;
 for (OnDiskAtomIterator row : rows)
 {
-DeletionInfo delInfo = row.getColumnFamily().deletionInfo();
-maxTombstoneTimestamp = Math.max(maxTombstoneTimestamp, 
delInfo.maxTimestamp());
-deletionInfo = deletionInfo.add(delInfo);
+DeletionTime rowTombstone = 
row.getColumnFamily().deletionInfo().getTopLevelDeletion();
+if (maxRowTombstone.compareTo(rowTombstone)  0)
+maxRowTombstone = rowTombstone;
 }
 
+
 // Don't pass maxTombstoneTimestamp to shouldPurge since we might well 
have cells with
 // tombstones newer than the row-level tombstones we've seen -- but we 
won't know that
 // until we iterate over them.  By passing MAX_VALUE we will only 
purge if there are
 // no other versions of this row present.
 this.shouldPurge = controller.shouldPurge(key, Long.MAX_VALUE);
 
-emptyColumnFamily = 
ArrayBackedSortedColumns.factory.create(controller.cfs.metadata);
-emptyColumnFamily.setDeletionInfo(deletionInfo.copy());
+emptyColumnFamily = 
EmptyColumns.factory.create(controller.cfs.metadata);
+emptyColumnFamily.delete(maxRowTombstone);
 if (shouldPurge)
 emptyColumnFamily.purgeTombstones(controller.gcBefore);
 }
@@ -113,7 +112,7 @@ public class LazilyCompactedRow extends 
AbstractCompactedRow implements Iterable<OnDiskAtom>
 // (however, if there are zero columns, iterator() will not be called 
by ColumnIndexer and reducer will be null)
 columnStats = new ColumnStats(reducer == null ? 0 : reducer.columns,
   reducer == null ? Long.MAX_VALUE : 
reducer.minTimestampSeen,
-  reducer == null ? maxTombstoneTimestamp 
: Math.max(maxTombstoneTimestamp, reducer.maxTimestampSeen),
+  reducer == null ? 
maxRowTombstone.markedForDeleteAt : Math.max(maxRowTombstone.markedForDeleteAt, 
reducer.maxTimestampSeen),
   reducer == null ? Integer.MIN_VALUE : 
reducer.maxLocalDeletionTimeSeen,
   reducer == null ? new 
StreamingHistogram(SSTable.TOMBSTONE_HISTOGRAM_BIN_SIZE) : reducer.tombstones,
   reducer == null ? 
Collections.<ByteBuffer>emptyList() : reducer.minColumnNameSeen,
@@ -193,8 +192,9 @@ public class LazilyCompactedRow extends 
AbstractCompactedRow implements Iterable<OnDiskAtom>
 private class Reducer extends MergeIterator.Reducer<OnDiskAtom, OnDiskAtom>
 {
 // all columns reduced 
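The constructor change in the commit above replaces full DeletionInfo merging with a single pass that keeps only the greatest row-level tombstone across the input row versions. A hedged sketch of that selection (Python stand-ins for Cassandra's DeletionTime and its compareTo ordering, which compares markedForDeleteAt first):

```python
from dataclasses import dataclass

# Stand-in for org.apache.cassandra.db.DeletionTime: order=True makes
# instances compare field-by-field in declaration order, i.e. by
# marked_for_delete_at first, like DeletionTime.compareTo.
@dataclass(frozen=True, order=True)
class DeletionTime:
    marked_for_delete_at: int   # timestamp (microseconds)
    local_deletion_time: int    # seconds

# DeletionTime.LIVE: a sentinel that sorts below every real tombstone.
LIVE = DeletionTime(-2**63, 2**31 - 1)

def max_row_tombstone(row_tombstones):
    """Mirror of the maxRowTombstone loop in LazilyCompactedRow's constructor."""
    result = LIVE
    for t in row_tombstones:
        if result < t:          # maxRowTombstone.compareTo(rowTombstone) < 0
            result = t
    return result
```

Only this single winning tombstone is applied to emptyColumnFamily (and possibly purged, depending on gcBefore), instead of accumulating every input row's DeletionInfo.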

[3/3] git commit: fix stats to omit purged row tombstone

2013-12-12 Thread jbellis
fix stats to omit purged row tombstone


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0d8da2ee
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0d8da2ee
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0d8da2ee

Branch: refs/heads/cassandra-2.0
Commit: 0d8da2ee3c9de7b890b5630c3e0c74b8c80e63dc
Parents: 4e9a7b8
Author: Jonathan Ellis jbel...@apache.org
Authored: Fri Dec 13 00:35:41 2013 +0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Dec 13 00:35:41 2013 +0600

--
 .../org/apache/cassandra/db/compaction/LazilyCompactedRow.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0d8da2ee/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java 
b/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
index 0d33b22..0ad3de2 100644
--- a/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
+++ b/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
@@ -112,7 +112,7 @@ public class LazilyCompactedRow extends 
AbstractCompactedRow implements Iterable<OnDiskAtom>
 // (however, if there are zero columns, iterator() will not be called 
by ColumnIndexer and reducer will be null)
 columnStats = new ColumnStats(reducer == null ? 0 : reducer.columns,
   reducer == null ? Long.MAX_VALUE : 
reducer.minTimestampSeen,
-  reducer == null ? 
maxRowTombstone.markedForDeleteAt : Math.max(maxRowTombstone.markedForDeleteAt, 
reducer.maxTimestampSeen),
+  reducer == null ? 
emptyColumnFamily.maxTimestamp() : Math.max(emptyColumnFamily.maxTimestamp(), 
reducer.maxTimestampSeen),
   reducer == null ? Integer.MIN_VALUE : 
reducer.maxLocalDeletionTimeSeen,
   reducer == null ? new 
StreamingHistogram(SSTable.TOMBSTONE_HISTOGRAM_BIN_SIZE) : reducer.tombstones,
   reducer == null ? 
Collections.<ByteBuffer>emptyList() : reducer.minColumnNameSeen,



[6/8] git commit: Merge branch 'cassandra-2.0' into trunk

2013-12-12 Thread jbellis
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5d25d6d2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5d25d6d2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5d25d6d2

Branch: refs/heads/trunk
Commit: 5d25d6d22a17e64347e1d311921ae61b52cb3ae3
Parents: 0bfa210 0d8da2e
Author: Jonathan Ellis jbel...@apache.org
Authored: Fri Dec 13 00:35:48 2013 +0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Dec 13 00:35:48 2013 +0600

--

--




[8/8] git commit: Merge remote-tracking branch 'origin/trunk' into trunk

2013-12-12 Thread jbellis
Merge remote-tracking branch 'origin/trunk' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a9b93c25
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a9b93c25
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a9b93c25

Branch: refs/heads/trunk
Commit: a9b93c257304a4bf76f301d483e3264fba934f80
Parents: b171148 d16d5c4
Author: Jonathan Ellis jbel...@apache.org
Authored: Fri Dec 13 00:46:57 2013 +0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Dec 13 00:46:57 2013 +0600

--

--




[3/8] git commit: clarify that we only collect row-level tombstone in LCR constructor

2013-12-12 Thread jbellis
clarify that we only collect row-level tombstone in LCR constructor


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4e9a7b8c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4e9a7b8c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4e9a7b8c

Branch: refs/heads/trunk
Commit: 4e9a7b8c7fa55df9cda4ac06f77ee9c69b85314d
Parents: 3edb62b
Author: Jonathan Ellis jbel...@apache.org
Authored: Thu Dec 12 23:43:59 2013 +0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Dec 13 00:26:14 2013 +0600

--
 .../db/compaction/LazilyCompactedRow.java   | 26 ++--
 1 file changed, 13 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4e9a7b8c/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java 
b/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
index 3b7a3d4..0d33b22 100644
--- a/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
+++ b/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
@@ -58,8 +58,7 @@ public class LazilyCompactedRow extends AbstractCompactedRow 
implements Iterable<OnDiskAtom>
 private boolean closed;
 private ColumnIndex.Builder indexBuilder;
 private final SecondaryIndexManager.Updater indexer;
-private long maxTombstoneTimestamp;
-private DeletionInfo deletionInfo;
+private DeletionTime maxRowTombstone;
 
 public LazilyCompactedRow(CompactionController controller, List<? extends OnDiskAtomIterator> rows)
 {
@@ -70,23 +69,23 @@ public class LazilyCompactedRow extends 
AbstractCompactedRow implements Iterable<OnDiskAtom>
 
 // Combine top-level tombstones, keeping the one with the highest 
markedForDeleteAt timestamp.  This may be
 // purged (depending on gcBefore), but we need to remember it to 
properly delete columns during the merge
-deletionInfo = DeletionInfo.live();
-maxTombstoneTimestamp = Long.MIN_VALUE;
+maxRowTombstone = DeletionTime.LIVE;
 for (OnDiskAtomIterator row : rows)
 {
-DeletionInfo delInfo = row.getColumnFamily().deletionInfo();
-maxTombstoneTimestamp = Math.max(maxTombstoneTimestamp, 
delInfo.maxTimestamp());
-deletionInfo = deletionInfo.add(delInfo);
+DeletionTime rowTombstone = 
row.getColumnFamily().deletionInfo().getTopLevelDeletion();
+if (maxRowTombstone.compareTo(rowTombstone) < 0)
+maxRowTombstone = rowTombstone;
 }
 
+
 // Don't pass maxTombstoneTimestamp to shouldPurge since we might well 
have cells with
 // tombstones newer than the row-level tombstones we've seen -- but we 
won't know that
 // until we iterate over them.  By passing MAX_VALUE we will only 
purge if there are
 // no other versions of this row present.
 this.shouldPurge = controller.shouldPurge(key, Long.MAX_VALUE);
 
-emptyColumnFamily = 
ArrayBackedSortedColumns.factory.create(controller.cfs.metadata);
-emptyColumnFamily.setDeletionInfo(deletionInfo.copy());
+emptyColumnFamily = 
EmptyColumns.factory.create(controller.cfs.metadata);
+emptyColumnFamily.delete(maxRowTombstone);
 if (shouldPurge)
 emptyColumnFamily.purgeTombstones(controller.gcBefore);
 }
@@ -113,7 +112,7 @@ public class LazilyCompactedRow extends 
AbstractCompactedRow implements Iterable<OnDiskAtom>
 // (however, if there are zero columns, iterator() will not be called 
by ColumnIndexer and reducer will be null)
 columnStats = new ColumnStats(reducer == null ? 0 : reducer.columns,
   reducer == null ? Long.MAX_VALUE : 
reducer.minTimestampSeen,
-  reducer == null ? maxTombstoneTimestamp 
: Math.max(maxTombstoneTimestamp, reducer.maxTimestampSeen),
+  reducer == null ? 
maxRowTombstone.markedForDeleteAt : Math.max(maxRowTombstone.markedForDeleteAt, 
reducer.maxTimestampSeen),
   reducer == null ? Integer.MIN_VALUE : 
reducer.maxLocalDeletionTimeSeen,
   reducer == null ? new 
StreamingHistogram(SSTable.TOMBSTONE_HISTOGRAM_BIN_SIZE) : reducer.tombstones,
   reducer == null ? 
Collections.<ByteBuffer>emptyList() : reducer.minColumnNameSeen,
@@ -193,8 +192,9 @@ public class LazilyCompactedRow extends 
AbstractCompactedRow implements Iterable<OnDiskAtom>
 private class Reducer extends MergeIterator.Reducer<OnDiskAtom, OnDiskAtom>
 {
 // all columns reduced together will 

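The constructor change in this commit replaces DeletionInfo merging with a single max over the rows' top-level tombstones. A minimal standalone sketch of that selection — DeletionTime here is a simplified stand-in for Cassandra's class, with the ordering the loop relies on (markedForDeleteAt first, then localDeletionTime) assumed:

```java
import java.util.Arrays;

// Simplified stand-in for org.apache.cassandra.db.DeletionTime; the real class
// has more state, but the ordering assumed here (markedForDeleteAt first, then
// localDeletionTime) is all the constructor loop relies on.
class MaxTombstoneDemo
{
    static final class DeletionTime implements Comparable<DeletionTime>
    {
        static final DeletionTime LIVE = new DeletionTime(Long.MIN_VALUE, Integer.MAX_VALUE);

        final long markedForDeleteAt;
        final int localDeletionTime;

        DeletionTime(long markedForDeleteAt, int localDeletionTime)
        {
            this.markedForDeleteAt = markedForDeleteAt;
            this.localDeletionTime = localDeletionTime;
        }

        @Override
        public int compareTo(DeletionTime that)
        {
            if (markedForDeleteAt != that.markedForDeleteAt)
                return Long.compare(markedForDeleteAt, that.markedForDeleteAt);
            return Integer.compare(localDeletionTime, that.localDeletionTime);
        }
    }

    // Same shape as the new constructor loop: start at LIVE, keep the max.
    static DeletionTime maxRowTombstone(Iterable<DeletionTime> rowTombstones)
    {
        DeletionTime max = DeletionTime.LIVE;
        for (DeletionTime dt : rowTombstones)
            if (max.compareTo(dt) < 0)
                max = dt;
        return max;
    }

    public static void main(String[] args)
    {
        DeletionTime max = maxRowTombstone(Arrays.asList(
                new DeletionTime(10L, 100), new DeletionTime(42L, 50), new DeletionTime(7L, 200)));
        if (max.markedForDeleteAt != 42L)
            throw new AssertionError("expected the tombstone with the highest markedForDeleteAt");
    }
}
```

Unlike summing into a DeletionInfo, keeping only the single max tombstone avoids allocating merged range-tombstone state per row during compaction.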
[7/8] git commit: add back shouldPurge check before counter merging

2013-12-12 Thread jbellis
add back shouldPurge check before counter merging


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b1711488
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b1711488
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b1711488

Branch: refs/heads/trunk
Commit: b1711488801781106c90e9143678f94d102e11dd
Parents: 5d25d6d
Author: Jonathan Ellis jbel...@apache.org
Authored: Fri Dec 13 00:44:13 2013 +0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Dec 13 00:44:13 2013 +0600

--
 .../apache/cassandra/db/compaction/LazilyCompactedRow.java   | 8 +++-
 1 file changed, 3 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b1711488/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java 
b/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
index 23457bc..bb00d23 100644
--- a/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
+++ b/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
@@ -90,7 +90,7 @@ public class LazilyCompactedRow extends AbstractCompactedRow
 merger = Iterators.filter(MergeIterator.get(rows, 
emptyColumnFamily.getComparator().onDiskAtomComparator, reducer), 
Predicates.notNull());
 }
 
-private static ColumnFamily removeDeletedAndOldShards(DecoratedKey key, 
boolean shouldPurge, CompactionController controller, ColumnFamily cf)
+private static void removeDeletedAndOldShards(ColumnFamily cf, boolean 
shouldPurge, DecoratedKey key, CompactionController controller)
 {
 // We should only purge cell tombstones if shouldPurge is true, but 
regardless, it's still ok to remove cells that
 // are shadowed by a row or range tombstone; 
removeDeletedColumnsOnly(cf, Integer.MIN_VALUE) will accomplish this
@@ -99,10 +99,8 @@ public class LazilyCompactedRow extends AbstractCompactedRow
 ColumnFamilyStore.removeDeletedColumnsOnly(cf, overriddenGCBefore, 
controller.cfs.indexManager.updaterFor(key));
 
 // if we have counters, remove old shards
-if (cf.metadata().getDefaultValidator().isCommutative())
+if (shouldPurge && cf.metadata().getDefaultValidator().isCommutative())
 CounterColumn.mergeAndRemoveOldShards(key, cf, 
controller.gcBefore, controller.mergeShardBefore);
-
-return cf;
 }
 
 public RowIndexEntry write(long currentPosition, DataOutput out) throws 
IOException
@@ -260,7 +258,7 @@ public class LazilyCompactedRow extends AbstractCompactedRow
 boolean shouldPurge = 
container.getSortedColumns().iterator().next().timestamp() < 
maxPurgeableTimestamp;
 // when we clear() the container, it removes the deletion 
info, so this needs to be reset each time
 container.delete(maxRowTombstone);
-removeDeletedAndOldShards(key, shouldPurge, controller, 
container);
+removeDeletedAndOldShards(container, shouldPurge, key, 
controller);
Iterator<Column> iter = container.iterator();
 if (!iter.hasNext())
 {

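The overriddenGCBefore trick in removeDeletedAndOldShards above is worth spelling out: passing Integer.MIN_VALUE as gcBefore makes the pass remove only cells shadowed by a row or range tombstone, never GC-able tombstones themselves. A hedged, self-contained illustration — Cell and the two predicates are simplified stand-ins, not Cassandra's API:

```java
import java.util.ArrayList;
import java.util.List;

// Standalone illustration of the overriddenGCBefore trick; Cell and the two
// predicates are simplified stand-ins, not Cassandra's API.
class PurgeGateDemo
{
    static final class Cell
    {
        final boolean tombstone;
        final int localDeletionTime; // seconds; only meaningful for tombstones
        final long timestamp;

        Cell(boolean tombstone, int localDeletionTime, long timestamp)
        {
            this.tombstone = tombstone;
            this.localDeletionTime = localDeletionTime;
            this.timestamp = timestamp;
        }
    }

    // When shouldPurge is false, gcBefore is overridden to Integer.MIN_VALUE so
    // no tombstone can qualify for purging, but cells shadowed by the row-level
    // tombstone are still removed.
    static List<Cell> removeDeleted(List<Cell> cells, boolean shouldPurge, int gcBefore, long rowDeletedAt)
    {
        int overriddenGCBefore = shouldPurge ? gcBefore : Integer.MIN_VALUE;
        List<Cell> kept = new ArrayList<>();
        for (Cell c : cells)
        {
            boolean shadowedByRowTombstone = c.timestamp <= rowDeletedAt;
            boolean purgeableTombstone = c.tombstone && c.localDeletionTime < overriddenGCBefore;
            if (!shadowedByRowTombstone && !purgeableTombstone)
                kept.add(c);
        }
        return kept;
    }
}
```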


[4/8] git commit: merge from 2.0

2013-12-12 Thread jbellis
merge from 2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0bfa210d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0bfa210d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0bfa210d

Branch: refs/heads/trunk
Commit: 0bfa210d071b664b37d6ba5ee4eda280f47d7b0e
Parents: d53c838 4e9a7b8
Author: Jonathan Ellis jbel...@apache.org
Authored: Fri Dec 13 00:34:09 2013 +0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Dec 13 00:34:09 2013 +0600

--
 .../db/compaction/LazilyCompactedRow.java   | 30 
 1 file changed, 18 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0bfa210d/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
--
diff --cc src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
index 8237ff5,0d33b22..23457bc
--- a/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
+++ b/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
@@@ -56,9 -58,7 +56,9 @@@ public class LazilyCompactedRow extend
  private boolean closed;
  private ColumnIndex.Builder indexBuilder;
  private final SecondaryIndexManager.Updater indexer;
 +private final Reducer reducer;
 +private final Iterator<OnDiskAtom> merger;
- private DeletionInfo deletionInfo;
+ private DeletionTime maxRowTombstone;
  
  public LazilyCompactedRow(CompactionController controller, List<? extends OnDiskAtomIterator> rows)
  {
@@@ -69,36 -69,25 +69,40 @@@
  
  // Combine top-level tombstones, keeping the one with the highest 
markedForDeleteAt timestamp.  This may be
  // purged (depending on gcBefore), but we need to remember it to 
properly delete columns during the merge
- deletionInfo = DeletionInfo.live();
+ maxRowTombstone = DeletionTime.LIVE;
  for (OnDiskAtomIterator row : rows)
- deletionInfo = 
deletionInfo.add(row.getColumnFamily().deletionInfo());
+ {
+ DeletionTime rowTombstone = 
row.getColumnFamily().deletionInfo().getTopLevelDeletion();
+ if (maxRowTombstone.compareTo(rowTombstone) < 0)
+ maxRowTombstone = rowTombstone;
+ }
  
 -
 -// Don't pass maxTombstoneTimestamp to shouldPurge since we might 
well have cells with
 -// tombstones newer than the row-level tombstones we've seen -- but 
we won't know that
 -// until we iterate over them.  By passing MAX_VALUE we will only 
purge if there are
 -// no other versions of this row present.
 -this.shouldPurge = controller.shouldPurge(key, Long.MAX_VALUE);
 +// tombstones with a localDeletionTime before this can be purged.  
This is the minimum timestamp for any sstable
 +// containing `key` outside of the set of sstables involved in this 
compaction.
 +maxPurgeableTimestamp = controller.maxPurgeableTimestamp(key);
  
- emptyColumnFamily = 
ArrayBackedSortedColumns.factory.create(controller.cfs.metadata);
- emptyColumnFamily.setDeletionInfo(deletionInfo.copy());
- if (deletionInfo.maxTimestamp() < maxPurgeableTimestamp)
+ emptyColumnFamily = 
EmptyColumns.factory.create(controller.cfs.metadata);
+ emptyColumnFamily.delete(maxRowTombstone);
 -if (shouldPurge)
++if (maxRowTombstone.markedForDeleteAt < maxPurgeableTimestamp)
  emptyColumnFamily.purgeTombstones(controller.gcBefore);
 +
 +reducer = new Reducer();
 +merger = Iterators.filter(MergeIterator.get(rows, 
emptyColumnFamily.getComparator().onDiskAtomComparator, reducer), 
Predicates.notNull());
 +}
 +
 +private static ColumnFamily removeDeletedAndOldShards(DecoratedKey key, 
boolean shouldPurge, CompactionController controller, ColumnFamily cf)
 +{
 +// We should only purge cell tombstones if shouldPurge is true, but 
regardless, it's still ok to remove cells that
 +// are shadowed by a row or range tombstone; 
removeDeletedColumnsOnly(cf, Integer.MIN_VALUE) will accomplish this
 +// without purging tombstones.
 +int overriddenGCBefore = shouldPurge ? controller.gcBefore : 
Integer.MIN_VALUE;
 +ColumnFamilyStore.removeDeletedColumnsOnly(cf, overriddenGCBefore, 
controller.cfs.indexManager.updaterFor(key));
 +
 +// if we have counters, remove old shards
 +if (cf.metadata().getDefaultValidator().isCommutative())
 +CounterColumn.mergeAndRemoveOldShards(key, cf, 
controller.gcBefore, controller.mergeShardBefore);
 +
 +return cf;
  }
  
  public RowIndexEntry write(long currentPosition, DataOutput out) throws 
IOException
@@@ -252,19 

[5/8] git commit: fix stats to omit purged row tombstone

2013-12-12 Thread jbellis
fix stats to omit purged row tombstone


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0d8da2ee
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0d8da2ee
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0d8da2ee

Branch: refs/heads/trunk
Commit: 0d8da2ee3c9de7b890b5630c3e0c74b8c80e63dc
Parents: 4e9a7b8
Author: Jonathan Ellis jbel...@apache.org
Authored: Fri Dec 13 00:35:41 2013 +0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Dec 13 00:35:41 2013 +0600

--
 .../org/apache/cassandra/db/compaction/LazilyCompactedRow.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0d8da2ee/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java 
b/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
index 0d33b22..0ad3de2 100644
--- a/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
+++ b/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
@@ -112,7 +112,7 @@ public class LazilyCompactedRow extends 
AbstractCompactedRow implements Iterable<OnDiskAtom>
 // (however, if there are zero columns, iterator() will not be called 
by ColumnIndexer and reducer will be null)
 columnStats = new ColumnStats(reducer == null ? 0 : reducer.columns,
   reducer == null ? Long.MAX_VALUE : 
reducer.minTimestampSeen,
-  reducer == null ? 
maxRowTombstone.markedForDeleteAt : Math.max(maxRowTombstone.markedForDeleteAt, 
reducer.maxTimestampSeen),
+  reducer == null ? 
emptyColumnFamily.maxTimestamp() : Math.max(emptyColumnFamily.maxTimestamp(), 
reducer.maxTimestampSeen),
   reducer == null ? Integer.MIN_VALUE : 
reducer.maxLocalDeletionTimeSeen,
   reducer == null ? new 
StreamingHistogram(SSTable.TOMBSTONE_HISTOGRAM_BIN_SIZE) : reducer.tombstones,
   reducer == null ? 
Collections.<ByteBuffer>emptyList() : reducer.minColumnNameSeen,



[1/8] git commit: Fix row tombstones in larger-than-memory compactions patch by thobbs; reviewed by jbellis for CASSANDRA-6008

2013-12-12 Thread jbellis
Updated Branches:
  refs/heads/trunk d16d5c4f2 -> a9b93c257


Fix row tombstones in larger-than-memory compactions
patch by thobbs; reviewed by jbellis for CASSANDRA-6008


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3edb62bf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3edb62bf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3edb62bf

Branch: refs/heads/trunk
Commit: 3edb62bf773617aeb3a348edc5667a6b0bad0ffe
Parents: e6eb550
Author: Jonathan Ellis jbel...@apache.org
Authored: Thu Dec 12 23:28:13 2013 +0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Dec 13 00:17:33 2013 +0600

--
 CHANGES.txt |  1 +
 .../db/AbstractThreadUnsafeSortedColumns.java   |  6 +-
 .../cassandra/db/AtomicSortedColumns.java   |  4 +-
 .../org/apache/cassandra/db/ColumnFamily.java   | 11 ++-
 .../apache/cassandra/db/ColumnFamilyStore.java  | 21 -
 .../org/apache/cassandra/db/ColumnIndex.java|  2 +-
 .../org/apache/cassandra/db/DeletionInfo.java   | 76 +-
 .../org/apache/cassandra/db/DeletionTime.java   | 16 
 .../apache/cassandra/db/RangeTombstoneList.java |  2 +-
 .../db/compaction/LazilyCompactedRow.java   | 54 +++--
 test/unit/org/apache/cassandra/Util.java|  6 +-
 .../org/apache/cassandra/db/KeyCacheTest.java   |  2 +-
 .../db/compaction/CompactionsPurgeTest.java | 84 +---
 .../streaming/StreamingTransferTest.java|  4 +-
 14 files changed, 220 insertions(+), 69 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3edb62bf/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 30f863e..d573e37 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.4
+ * Fix row tombstones in larger-than-memory compactions (CASSANDRA-6008)
  * Fix cleanup ClassCastException (CASSANDRA-6462)
  * Reduce gossip memory use by interning VersionedValue strings 
(CASSANDRA-6410)
  * Allow specifying datacenters to participate in a repair (CASSANDRA-6218)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3edb62bf/src/java/org/apache/cassandra/db/AbstractThreadUnsafeSortedColumns.java
--
diff --git 
a/src/java/org/apache/cassandra/db/AbstractThreadUnsafeSortedColumns.java 
b/src/java/org/apache/cassandra/db/AbstractThreadUnsafeSortedColumns.java
index 1b245eb..36b051b 100644
--- a/src/java/org/apache/cassandra/db/AbstractThreadUnsafeSortedColumns.java
+++ b/src/java/org/apache/cassandra/db/AbstractThreadUnsafeSortedColumns.java
@@ -59,7 +59,11 @@ public abstract class AbstractThreadUnsafeSortedColumns 
extends ColumnFamily
 deletionInfo = newInfo;
 }
 
-public void maybeResetDeletionTimes(int gcBefore)
+/**
+ * Purges any tombstones with a local deletion time before gcBefore.
+ * @param gcBefore a timestamp (in seconds) before which tombstones should 
be purged
+ */
+public void purgeTombstones(int gcBefore)
 {
 deletionInfo.purge(gcBefore);
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3edb62bf/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
--
diff --git a/src/java/org/apache/cassandra/db/AtomicSortedColumns.java 
b/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
index f6a6b83..b44d8bf 100644
--- a/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
+++ b/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
@@ -120,12 +120,12 @@ public class AtomicSortedColumns extends ColumnFamily
 ref.set(ref.get().with(newInfo));
 }
 
-public void maybeResetDeletionTimes(int gcBefore)
+public void purgeTombstones(int gcBefore)
 {
 while (true)
 {
 Holder current = ref.get();
-if (!current.deletionInfo.hasIrrelevantData(gcBefore))
+if (!current.deletionInfo.hasPurgeableTombstones(gcBefore))
 break;
 
 DeletionInfo purgedInfo = current.deletionInfo.copy();

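The AtomicSortedColumns hunk above keeps the standard lock-free update pattern: read the current holder, bail out early if there is nothing purgeable, otherwise copy, mutate the copy, and compareAndSet, retrying on contention. A generic sketch of that loop with java.util.concurrent.atomic.AtomicReference — the payload here is just an immutable list of ints standing in for DeletionInfo, an assumption for illustration:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

// Generic version of the read / copy / compareAndSet retry loop; the payload
// is just an immutable list of ints standing in for DeletionInfo.
class CasPurgeDemo
{
    final AtomicReference<List<Integer>> ref = new AtomicReference<>(Collections.<Integer>emptyList());

    // Remove every element below gcBefore, retrying if another thread swapped
    // the reference between our read and our compareAndSet.
    void purge(int gcBefore)
    {
        while (true)
        {
            List<Integer> current = ref.get();

            // cheap early exit, analogous to hasPurgeableTombstones(gcBefore)
            boolean hasPurgeable = false;
            for (int v : current)
                if (v < gcBefore) { hasPurgeable = true; break; }
            if (!hasPurgeable)
                return;

            List<Integer> purged = new ArrayList<>();
            for (int v : current)
                if (v >= gcBefore)
                    purged.add(v);

            if (ref.compareAndSet(current, purged))
                return; // otherwise we lost the race: loop and retry
        }
    }
}
```

The early-exit check matters: it makes the no-op case allocation-free, which is why the rename to hasPurgeableTombstones (from hasIrrelevantData) is more than cosmetic documentation.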
http://git-wip-us.apache.org/repos/asf/cassandra/blob/3edb62bf/src/java/org/apache/cassandra/db/ColumnFamily.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamily.java 
b/src/java/org/apache/cassandra/db/ColumnFamily.java
index 47b14b9..2c00071 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamily.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamily.java
@@ -185,7 +185,11 @@ public abstract class ColumnFamily implements 
Iterable<Column>, IRowCacheEntry
 public abstract void delete(DeletionTime deletionTime);
 protected 

[2/8] git commit: Merge branch 'cassandra-2.0' into trunk

2013-12-12 Thread jbellis
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d53c838c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d53c838c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d53c838c

Branch: refs/heads/trunk
Commit: d53c838c9d2f89ac6c88c8306f2302f7fbc6b33d
Parents: 448e4d4 3edb62b
Author: Jonathan Ellis jbel...@apache.org
Authored: Fri Dec 13 00:17:45 2013 +0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Dec 13 00:18:14 2013 +0600

--
 .../db/AbstractThreadUnsafeSortedColumns.java   |  6 +-
 .../cassandra/db/AtomicSortedColumns.java   |  4 +-
 .../org/apache/cassandra/db/ColumnFamily.java   | 11 ++-
 .../apache/cassandra/db/ColumnFamilyStore.java  | 26 ++-
 .../org/apache/cassandra/db/ColumnIndex.java|  2 +-
 .../org/apache/cassandra/db/DeletionInfo.java   | 76 ++-
 .../org/apache/cassandra/db/DeletionTime.java   | 16 
 .../apache/cassandra/db/RangeTombstoneList.java |  2 +-
 .../db/compaction/CompactionController.java |  5 +-
 .../db/compaction/LazilyCompactedRow.java   | 77 +++-
 test/unit/org/apache/cassandra/Util.java|  6 +-
 .../org/apache/cassandra/db/KeyCacheTest.java   |  2 +-
 .../db/compaction/CompactionsPurgeTest.java | 77 ++--
 .../streaming/StreamingTransferTest.java|  4 +-
 14 files changed, 236 insertions(+), 78 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d53c838c/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --cc src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 396bbd3,d585407..4e54af0
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@@ -885,6 -898,17 +897,16 @@@ public class ColumnFamilyStore implemen
  return null;
  }
  
 -removeDeletedColumnsOnly(cf, gcBefore, indexer);
 -return removeDeletedCF(cf, gcBefore);
++return removeDeletedCF(removeDeletedColumnsOnly(cf, gcBefore, 
indexer), gcBefore);
+ }
+ 
+ /**
+  * Removes only per-cell tombstones, cells that are shadowed by a 
row-level or range tombstone, or
+  * columns that have been dropped from the schema (for CQL3 tables only).
+  * @return the updated ColumnFamily
+  */
 -public static long removeDeletedColumnsOnly(ColumnFamily cf, int 
gcBefore, SecondaryIndexManager.Updater indexer)
++public static ColumnFamily removeDeletedColumnsOnly(ColumnFamily cf, int 
gcBefore, SecondaryIndexManager.Updater indexer)
+ {
 Iterator<Column> iter = cf.iterator();
  DeletionInfo.InOrderTester tester = cf.inOrderDeletionTester();
  boolean hasDroppedColumns = 
!cf.metadata.getDroppedColumns().isEmpty();
@@@ -899,10 -924,15 +921,10 @@@
  {
  iter.remove();
  indexer.remove(c);
 -removedBytes += c.dataSize();
  }
  }
 -return removedBytes;
 -}
  
- return removeDeletedCF(cf, gcBefore);
 -public static long removeDeletedColumnsOnly(ColumnFamily cf, int gcBefore)
 -{
 -return removeDeletedColumnsOnly(cf, gcBefore, 
SecondaryIndexManager.nullUpdater);
++return cf;
  }
  
  // returns true if

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d53c838c/src/java/org/apache/cassandra/db/ColumnIndex.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d53c838c/src/java/org/apache/cassandra/db/DeletionTime.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d53c838c/src/java/org/apache/cassandra/db/compaction/CompactionController.java
--
diff --cc src/java/org/apache/cassandra/db/compaction/CompactionController.java
index dc7730c,7edc60e..c4ce2e8
--- a/src/java/org/apache/cassandra/db/compaction/CompactionController.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionController.java
@@@ -148,24 -153,25 +148,25 @@@ public class CompactionControlle
  }
  
  /**
 - * @return true if it's okay to drop tombstones for the given row, i.e., 
if we know all the versions of the row
 - * older than @param maxDeletionTimestamp are included in the compaction 
set
 + * @return the largest timestamp before which it's okay to drop 
tombstones for the given partition;
-  * i.e., after the maxPurgeableTimestamp there may exist newer data that 
still needs to be suppressed
-  * in other sstables.
++ * i.e., 

[jira] [Commented] (CASSANDRA-6008) Getting 'This should never happen' error at startup due to sstables missing

2013-12-12 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13846612#comment-13846612
 ] 

Jonathan Ellis commented on CASSANDRA-6008:
---

Committed.

But I think you're right that there's something else going on.  I think John 
correctly identified one scenario in his original description.

 Getting 'This should never happen' error at startup due to sstables missing
 ---

 Key: CASSANDRA-6008
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6008
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: John Carrino
Assignee: Tyler Hobbs
 Fix For: 2.0.4

 Attachments: 6008-2.0-v1.patch, 6008-trunk-v1.patch


 Exception encountered during startup: Unfinished compactions reference 
 missing sstables. This should never happen since compactions are marked 
 finished before we start removing the old sstables
 This happens when sstables that have been compacted away are removed, but 
 they still have entries in the system.compactions_in_progress table.
 Normally this should not happen because the entries in 
 system.compactions_in_progress are deleted before the old sstables are 
 deleted.
 However at startup recovery time, old sstables are deleted (NOT BEFORE they 
 are removed from the compactions_in_progress table) and then after that is 
 done it does a truncate using SystemKeyspace.discardCompactionsInProgress
 We ran into a case where the disk filled up and the node died and was bounced 
 and then failed to truncate this table on startup, and then got stuck hitting 
 this exception in ColumnFamilyStore.removeUnfinishedCompactionLeftovers.
 Maybe on startup we can delete from this table incrementally as we clean 
 stuff up in the same way that compactions delete from this table before they 
 delete old sstables.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)
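The ordering John proposes at the end of the description — delete each task's compactions_in_progress entry before removing its old sstables, instead of bulk-truncating afterwards — can be sketched as follows. This is a hypothetical illustration only: 'inProgress' models system.compactions_in_progress and 'liveSSTables' the files on disk; none of these names are Cassandra's API.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.UUID;

// Hypothetical sketch only: 'inProgress' models system.compactions_in_progress
// and 'liveSSTables' the files on disk; none of these names are Cassandra's API.
class LeftoverCleanupDemo
{
    final Map<UUID, Set<String>> inProgress = new HashMap<>();
    final Set<String> liveSSTables = new HashSet<>();

    // Per-task cleanup in the order the ticket argues for: forget the task
    // first, then delete the obsolete files, so a crash in between can never
    // leave an entry pointing at sstables that no longer exist.
    void finishTask(UUID taskId, Set<String> obsolete)
    {
        inProgress.remove(taskId);        // 1. drop the tracking entry
        liveSSTables.removeAll(obsolete); // 2. only now remove old sstables
    }
}
```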


[jira] [Commented] (CASSANDRA-6356) Proposal: Statistics.db (SSTableMetadata) format change

2013-12-12 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13846619#comment-13846619
 ] 

Tyler Hobbs commented on CASSANDRA-6356:


[~yukim] I made a few nitpick comments inline on your commits on Github.  Other 
than those, +1.

 Proposal: Statistics.db (SSTableMetadata) format change
 ---

 Key: CASSANDRA-6356
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6356
 Project: Cassandra
  Issue Type: Improvement
Reporter: Yuki Morishita
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 2.1

 Attachments: 6356-v2.txt


 We started to distinguish what's loaded to the heap, and what's not, from 
 Statistics.db. For now, ancestors are loaded as they are needed.
 The current serialization format is so ad hoc that adding new metadata that is 
 not permanently held in memory is somewhat difficult and messy. I propose 
 to change the serialization format so that a group of stats can be loaded as 
 needed.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Commented] (CASSANDRA-6269) Add ability to ignore L0 on CF level

2013-12-12 Thread Matt Kapilevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13846628#comment-13846628
 ] 

Matt Kapilevich commented on CASSANDRA-6269:


Perhaps a better and more flexible way would be to add a 
max_sstables_to_read_from_L0 option on a CF. The default could be Integer.MAX_VALUE 
(uncapped), and for our use case we'd set it to zero. In general, though, I think 
this option could benefit a lot of folks for whom availability is more important 
than consistency: it would give a way to get predictable read performance.

 Add ability to ignore L0 on CF level
 

 Key: CASSANDRA-6269
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6269
 Project: Cassandra
  Issue Type: Improvement
Reporter: Matt Kapilevich
 Attachments: L0-vs-availability.png


 One of our CF's is written to only from a batch process. We use Cassandra's 
 bulk-load utility to load the data. When the load happens, the number of 
 tables in L0 increases, and then comes back down as they are compacted. While 
 the number of tables in L0 is high, there's increased load on the node, and 
 read availability suffers, since L0 is unsorted, and therefore lookups 
 against L0 are inefficient.
 This all works-as-designed, and issues around L0 are known.
 I think it would be a great addition to disable reading from L0, settable on 
 CF-level, as one of Leveled Compaction options. In our case, because the data 
 is written by a batch process, we are fine waiting a little longer while L0 
 is compacted away. However, the decrease in availability rate while this is 
 happening is an issue for us.
 I would propose to add disable_reads_from_L0 parameter to 
 compaction_strategy_options, with default being false. In cases when 
 availability is much more important than consistency, like ours, user can set 
 it to true.
 I've attached a graph that shows the relationship between our availability 
 rate and number of tables in L0.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)
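The proposed max_sstables_to_read_from_L0 cap amounts to a filter on the candidate sstable list at read time. A rough sketch — SSTable here is a trivial stand-in, and the option itself is the reporter's suggestion, not an existing Cassandra setting:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the proposed cap; SSTable is a trivial stand-in, and the option
// itself is the reporter's suggestion, not an existing Cassandra setting.
class L0CapDemo
{
    static final class SSTable
    {
        final String name;
        final int level;

        SSTable(String name, int level)
        {
            this.name = name;
            this.level = level;
        }
    }

    // Keep every sstable above L0 and at most maxL0 from level 0.
    // maxL0 == Integer.MAX_VALUE leaves reads uncapped (the proposed default);
    // maxL0 == 0 skips L0 entirely, trading consistency for predictable latency.
    static List<SSTable> candidates(List<SSTable> all, int maxL0)
    {
        List<SSTable> result = new ArrayList<>();
        int l0Taken = 0;
        for (SSTable s : all)
        {
            if (s.level > 0)
                result.add(s);
            else if (l0Taken < maxL0)
            {
                result.add(s);
                l0Taken++;
            }
        }
        return result;
    }
}
```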


[jira] [Commented] (CASSANDRA-6269) Add ability to ignore L0 on CF level

2013-12-12 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13846631#comment-13846631
 ] 

Jonathan Ellis commented on CASSANDRA-6269:
---

We wouldn't implement that in 1.2, and if you're upgrading to 2.0 you get 5371, 
which, as Chris mentioned, is a better fix.

 Add ability to ignore L0 on CF level
 

 Key: CASSANDRA-6269
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6269
 Project: Cassandra
  Issue Type: Improvement
Reporter: Matt Kapilevich
 Attachments: L0-vs-availability.png


 One of our CF's is written to only from a batch process. We use Cassandra's 
 bulk-load utility to load the data. When the load happens, the number of 
 tables in L0 increases, and then comes back down as they are compacted. While 
 the number of tables in L0 is high, there's increased load on the node, and 
 read availability suffers, since L0 is unsorted, and therefore lookups 
 against L0 are inefficient.
 This all works-as-designed, and issues around L0 are known.
 I think it would be a great addition to disable reading from L0, settable on 
 CF-level, as one of Leveled Compaction options. In our case, because the data 
 is written by a batch process, we are fine waiting a little longer while L0 
 is compacted away. However, the decrease in availability rate while this is 
 happening is an issue for us.
 I would propose to add disable_reads_from_L0 parameter to 
 compaction_strategy_options, with default being false. In cases when 
 availability is much more important than consistency, like ours, user can set 
 it to true.
 I've attached a graph that shows the relationship between our availability 
 rate and number of tables in L0.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Assigned] (CASSANDRA-6470) ArrayIndexOutOfBoundsException on range query from client

2013-12-12 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reassigned CASSANDRA-6470:
-

Assignee: Ryan King

Can you reproduce, Ryan?

 ArrayIndexOutOfBoundsException on range query from client
 -

 Key: CASSANDRA-6470
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6470
 Project: Cassandra
  Issue Type: Bug
Reporter: Enrico Scalavino
Assignee: Ryan King

 schema: 
 CREATE TABLE inboxkeyspace.inboxes(user_id bigint, message_id bigint, 
 thread_id bigint, network_id bigint, read boolean, PRIMARY KEY(user_id, 
 message_id)) WITH CLUSTERING ORDER BY (message_id DESC);
 CREATE INDEX ON inboxkeyspace.inboxes(read);
 query: 
 SELECT thread_id, message_id, network_id FROM inboxkeyspace.inboxes WHERE 
 user_id = ? AND message_id  ? AND read = ? LIMIT ? 
 The query works if run via cqlsh. However, when run through the datastax 
 client, on the client side we get a timeout exception and on the server side, 
 the Cassandra log shows this exception: 
 ERROR [ReadStage:4190] 2013-12-10 13:18:03,579 CassandraDaemon.java (line 
 187) Exception in thread Thread[ReadStage:4190,5,main]
 java.lang.RuntimeException: java.lang.ArrayIndexOutOfBoundsException: 0
 at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1940)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:722)
 Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
 at 
 org.apache.cassandra.db.filter.SliceQueryFilter.start(SliceQueryFilter.java:261)
 at 
 org.apache.cassandra.db.index.composites.CompositesSearcher.makePrefix(CompositesSearcher.java:66)
 at 
 org.apache.cassandra.db.index.composites.CompositesSearcher.getIndexedIterator(CompositesSearcher.java:101)
 at 
 org.apache.cassandra.db.index.composites.CompositesSearcher.search(CompositesSearcher.java:53)
 at 
 org.apache.cassandra.db.index.SecondaryIndexManager.search(SecondaryIndexManager.java:537)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.search(ColumnFamilyStore.java:1669)
 at 
 org.apache.cassandra.db.PagedRangeCommand.executeLocally(PagedRangeCommand.java:109)
 at 
 org.apache.cassandra.service.StorageProxy$LocalRangeSliceRunnable.runMayThrow(StorageProxy.java:1423)
 at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1936)
 ... 3 more





[jira] [Commented] (CASSANDRA-6476) Assertion error in MessagingService.addCallback

2013-12-12 Thread Theo Hultberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13846650#comment-13846650
 ] 

Theo Hultberg commented on CASSANDRA-6476:
--

[~jbellis] Yes, two out of three nodes got the same assertion failures within a 
minute or two.

I've updated the gist (https://gist.github.com/iconara/7917438) with the full 
logs (10,000 lines) from the two nodes. The third node has nothing in its logs 
around the same time (it's all just INFO and nothing that stands out).

 Assertion error in MessagingService.addCallback
 ---

 Key: CASSANDRA-6476
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6476
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.0.2 DCE
Reporter: Theo Hultberg
Assignee: Sylvain Lebresne

 Two of the three Cassandra nodes in one of our clusters just started behaving 
 very strangely about an hour ago. Within a minute of each other they started 
 logging AssertionErrors (see stack traces here: 
 https://gist.github.com/iconara/7917438) over and over again. The client lost 
 connection with the nodes at roughly the same time. The nodes were still up, 
 and even if no clients were connected to them they continued logging the same 
 errors over and over.
 The errors are in the native transport (specifically 
 MessagingService.addCallback) which makes me suspect that it has something to 
 do with a test that we started running this afternoon. I've just implemented 
 support for frame compression in my CQL driver cql-rb. About two hours before 
 this happened I deployed a version of the application which enabled Snappy 
 compression on all frames larger than 64 bytes. It's not impossible that 
 there is a bug somewhere in the driver or compression library that caused 
 this -- but at the same time, it feels like it shouldn't be possible to make 
 C* a zombie with a bad frame.
 Restarting seems to have got them back running again, but I suspect they will 
 go down again sooner or later.





[jira] [Assigned] (CASSANDRA-6470) ArrayIndexOutOfBoundsException on range query from client

2013-12-12 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams reassigned CASSANDRA-6470:
---

Assignee: Ryan McGuire  (was: Ryan King)

 ArrayIndexOutOfBoundsException on range query from client
 -

 Key: CASSANDRA-6470
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6470
 Project: Cassandra
  Issue Type: Bug
Reporter: Enrico Scalavino
Assignee: Ryan McGuire

 schema: 
 CREATE TABLE inboxkeyspace.inboxes(user_id bigint, message_id bigint, 
 thread_id bigint, network_id bigint, read boolean, PRIMARY KEY(user_id, 
 message_id)) WITH CLUSTERING ORDER BY (message_id DESC);
 CREATE INDEX ON inboxkeyspace.inboxes(read);
 query: 
 SELECT thread_id, message_id, network_id FROM inboxkeyspace.inboxes WHERE 
 user_id = ? AND message_id  ? AND read = ? LIMIT ? 
 The query works if run via cqlsh. However, when run through the datastax 
 client, on the client side we get a timeout exception and on the server side, 
 the Cassandra log shows this exception: 
 ERROR [ReadStage:4190] 2013-12-10 13:18:03,579 CassandraDaemon.java (line 
 187) Exception in thread Thread[ReadStage:4190,5,main]
 java.lang.RuntimeException: java.lang.ArrayIndexOutOfBoundsException: 0
 at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1940)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:722)
 Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
 at 
 org.apache.cassandra.db.filter.SliceQueryFilter.start(SliceQueryFilter.java:261)
 at 
 org.apache.cassandra.db.index.composites.CompositesSearcher.makePrefix(CompositesSearcher.java:66)
 at 
 org.apache.cassandra.db.index.composites.CompositesSearcher.getIndexedIterator(CompositesSearcher.java:101)
 at 
 org.apache.cassandra.db.index.composites.CompositesSearcher.search(CompositesSearcher.java:53)
 at 
 org.apache.cassandra.db.index.SecondaryIndexManager.search(SecondaryIndexManager.java:537)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.search(ColumnFamilyStore.java:1669)
 at 
 org.apache.cassandra.db.PagedRangeCommand.executeLocally(PagedRangeCommand.java:109)
 at 
 org.apache.cassandra.service.StorageProxy$LocalRangeSliceRunnable.runMayThrow(StorageProxy.java:1423)
 at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1936)
 ... 3 more





[jira] [Commented] (CASSANDRA-6470) ArrayIndexOutOfBoundsException on range query from client

2013-12-12 Thread Ryan McGuire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13846693#comment-13846693
 ] 

Ryan McGuire commented on CASSANDRA-6470:
-

[~enrico.scalavino] What versions of Cassandra and the DataStax driver are you 
using? I'll try to recreate this, but if you have a test already written that 
is easily decoupled from your project, can you post that too?

 ArrayIndexOutOfBoundsException on range query from client
 -

 Key: CASSANDRA-6470
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6470
 Project: Cassandra
  Issue Type: Bug
Reporter: Enrico Scalavino
Assignee: Ryan McGuire

 schema: 
 CREATE TABLE inboxkeyspace.inboxes(user_id bigint, message_id bigint, 
 thread_id bigint, network_id bigint, read boolean, PRIMARY KEY(user_id, 
 message_id)) WITH CLUSTERING ORDER BY (message_id DESC);
 CREATE INDEX ON inboxkeyspace.inboxes(read);
 query: 
 SELECT thread_id, message_id, network_id FROM inboxkeyspace.inboxes WHERE 
 user_id = ? AND message_id  ? AND read = ? LIMIT ? 
 The query works if run via cqlsh. However, when run through the datastax 
 client, on the client side we get a timeout exception and on the server side, 
 the Cassandra log shows this exception: 
 ERROR [ReadStage:4190] 2013-12-10 13:18:03,579 CassandraDaemon.java (line 
 187) Exception in thread Thread[ReadStage:4190,5,main]
 java.lang.RuntimeException: java.lang.ArrayIndexOutOfBoundsException: 0
 at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1940)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:722)
 Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
 at 
 org.apache.cassandra.db.filter.SliceQueryFilter.start(SliceQueryFilter.java:261)
 at 
 org.apache.cassandra.db.index.composites.CompositesSearcher.makePrefix(CompositesSearcher.java:66)
 at 
 org.apache.cassandra.db.index.composites.CompositesSearcher.getIndexedIterator(CompositesSearcher.java:101)
 at 
 org.apache.cassandra.db.index.composites.CompositesSearcher.search(CompositesSearcher.java:53)
 at 
 org.apache.cassandra.db.index.SecondaryIndexManager.search(SecondaryIndexManager.java:537)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.search(ColumnFamilyStore.java:1669)
 at 
 org.apache.cassandra.db.PagedRangeCommand.executeLocally(PagedRangeCommand.java:109)
 at 
 org.apache.cassandra.service.StorageProxy$LocalRangeSliceRunnable.runMayThrow(StorageProxy.java:1423)
 at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1936)
 ... 3 more





[jira] [Created] (CASSANDRA-6482) Add junitreport to ant test target

2013-12-12 Thread Michael Shuler (JIRA)
Michael Shuler created CASSANDRA-6482:
-

 Summary: Add junitreport to ant test target
 Key: CASSANDRA-6482
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6482
 Project: Cassandra
  Issue Type: Improvement
  Components: Tests
Reporter: Michael Shuler
Assignee: Michael Shuler
Priority: Minor


Adding junitreport XML output for the unit tests will allow detailed reporting 
and historical tracking in Jenkins.
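
For reference, a hedged sketch of what wiring this up in an Ant build could look like: a junit task emitting XML results and a junitreport task aggregating them for Jenkins. Directory and path names here are illustrative, not Cassandra's actual build.xml.

```xml
<!-- Sketch only: property names and directories are hypothetical. -->
<junit fork="on" forkmode="perTest" printsummary="on">
  <classpath refid="test.classpath"/>
  <!-- XML formatter writes one TEST-*.xml file per test class -->
  <formatter type="xml"/>
  <batchtest todir="${build.test.dir}/output">
    <fileset dir="${test.classes.dir}" includes="**/*Test.class"/>
  </batchtest>
</junit>

<!-- Aggregate the per-class XML files into a single browsable report;
     Jenkins can also consume the raw TEST-*.xml files directly. -->
<junitreport todir="${build.test.dir}/output">
  <fileset dir="${build.test.dir}/output" includes="TEST-*.xml"/>
  <report format="frames" todir="${build.test.dir}/report"/>
</junitreport>
```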





[2/3] git commit: enhance assertion failure message

2013-12-12 Thread jbellis
enhance assertion failure message


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c4d3a313
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c4d3a313
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c4d3a313

Branch: refs/heads/trunk
Commit: c4d3a313885f14e802247b9354aafa4caaae9804
Parents: 0d8da2e
Author: Jonathan Ellis jbel...@apache.org
Authored: Fri Dec 13 02:30:38 2013 +0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Dec 13 02:30:38 2013 +0600

--
 src/java/org/apache/cassandra/net/CallbackInfo.java | 9 +
 src/java/org/apache/cassandra/net/MessagingService.java | 4 ++--
 2 files changed, 11 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c4d3a313/src/java/org/apache/cassandra/net/CallbackInfo.java
--
diff --git a/src/java/org/apache/cassandra/net/CallbackInfo.java 
b/src/java/org/apache/cassandra/net/CallbackInfo.java
index 0edfee9..3e584b4 100644
--- a/src/java/org/apache/cassandra/net/CallbackInfo.java
+++ b/src/java/org/apache/cassandra/net/CallbackInfo.java
@@ -50,4 +50,13 @@ public class CallbackInfo
 {
 return false;
 }
+
+public String toString()
+{
+    return "CallbackInfo(" +
+           "target=" + target +
+           ", callback=" + callback +
+           ", serializer=" + serializer +
+           ')';
+}
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c4d3a313/src/java/org/apache/cassandra/net/MessagingService.java
--
diff --git a/src/java/org/apache/cassandra/net/MessagingService.java 
b/src/java/org/apache/cassandra/net/MessagingService.java
index 2259dbd..20cad82 100644
--- a/src/java/org/apache/cassandra/net/MessagingService.java
+++ b/src/java/org/apache/cassandra/net/MessagingService.java
@@ -535,7 +535,7 @@ public final class MessagingService implements 
MessagingServiceMBean
 assert message.verb != Verb.MUTATION; // mutations need to call the 
overload with a ConsistencyLevel
 int messageId = nextId();
 CallbackInfo previous = callbacks.put(messageId, new CallbackInfo(to, 
cb, callbackDeserializers.get(message.verb)), timeout);
-assert previous == null;
+assert previous == null : String.format("Callback already exists for id %d! (%s)", messageId, previous);
 return messageId;
 }
 
@@ -544,7 +544,7 @@ public final class MessagingService implements 
MessagingServiceMBean
 assert message.verb == Verb.MUTATION || message.verb == 
Verb.COUNTER_MUTATION;
 int messageId = nextId();
 CallbackInfo previous = callbacks.put(messageId, new 
WriteCallbackInfo(to, cb, message, callbackDeserializers.get(message.verb), 
consistencyLevel), timeout);
-assert previous == null;
+assert previous == null : String.format("Callback already exists for id %d! (%s)", messageId, previous);
 return messageId;
 }
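
The value of the richer assertion is easiest to see in isolation. Below is a minimal, self-contained sketch of the same idiom; the CallbackRegistry class and its String-valued callbacks are hypothetical stand-ins, not Cassandra's actual types.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the pattern the patch introduces: an assert with a formatted
// message, so a message-id collision reports WHAT was already registered
// instead of a bare AssertionError with no context.
public class CallbackRegistry
{
    private final Map<Integer, String> callbacks = new HashMap<>();

    public int register(int messageId, String callbackDescription)
    {
        String previous = callbacks.put(messageId, callbackDescription);
        // With assertions enabled (java -ea), a duplicate id now fails
        // with a message naming the previous registration.
        assert previous == null : String.format("Callback already exists for id %d! (%s)", messageId, previous);
        return messageId;
    }

    public static void main(String[] args)
    {
        CallbackRegistry registry = new CallbackRegistry();
        registry.register(1, "read-callback");
        try
        {
            registry.register(1, "write-callback"); // same id: trips the assert under -ea
        }
        catch (AssertionError e)
        {
            System.out.println(e.getMessage());
        }
    }
}
```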
 



[1/3] git commit: enhance assertion failure message

2013-12-12 Thread jbellis
Updated Branches:
  refs/heads/cassandra-2.0 0d8da2ee3 - c4d3a3138
  refs/heads/trunk a9b93c257 - b25ae0f92


enhance assertion failure message


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c4d3a313
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c4d3a313
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c4d3a313

Branch: refs/heads/cassandra-2.0
Commit: c4d3a313885f14e802247b9354aafa4caaae9804
Parents: 0d8da2e
Author: Jonathan Ellis jbel...@apache.org
Authored: Fri Dec 13 02:30:38 2013 +0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Dec 13 02:30:38 2013 +0600

--
 src/java/org/apache/cassandra/net/CallbackInfo.java | 9 +
 src/java/org/apache/cassandra/net/MessagingService.java | 4 ++--
 2 files changed, 11 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c4d3a313/src/java/org/apache/cassandra/net/CallbackInfo.java
--
diff --git a/src/java/org/apache/cassandra/net/CallbackInfo.java 
b/src/java/org/apache/cassandra/net/CallbackInfo.java
index 0edfee9..3e584b4 100644
--- a/src/java/org/apache/cassandra/net/CallbackInfo.java
+++ b/src/java/org/apache/cassandra/net/CallbackInfo.java
@@ -50,4 +50,13 @@ public class CallbackInfo
 {
 return false;
 }
+
+public String toString()
+{
+    return "CallbackInfo(" +
+           "target=" + target +
+           ", callback=" + callback +
+           ", serializer=" + serializer +
+           ')';
+}
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c4d3a313/src/java/org/apache/cassandra/net/MessagingService.java
--
diff --git a/src/java/org/apache/cassandra/net/MessagingService.java 
b/src/java/org/apache/cassandra/net/MessagingService.java
index 2259dbd..20cad82 100644
--- a/src/java/org/apache/cassandra/net/MessagingService.java
+++ b/src/java/org/apache/cassandra/net/MessagingService.java
@@ -535,7 +535,7 @@ public final class MessagingService implements 
MessagingServiceMBean
 assert message.verb != Verb.MUTATION; // mutations need to call the 
overload with a ConsistencyLevel
 int messageId = nextId();
 CallbackInfo previous = callbacks.put(messageId, new CallbackInfo(to, 
cb, callbackDeserializers.get(message.verb)), timeout);
-assert previous == null;
+assert previous == null : String.format("Callback already exists for id %d! (%s)", messageId, previous);
 return messageId;
 }
 
@@ -544,7 +544,7 @@ public final class MessagingService implements 
MessagingServiceMBean
 assert message.verb == Verb.MUTATION || message.verb == 
Verb.COUNTER_MUTATION;
 int messageId = nextId();
 CallbackInfo previous = callbacks.put(messageId, new 
WriteCallbackInfo(to, cb, message, callbackDeserializers.get(message.verb), 
consistencyLevel), timeout);
-assert previous == null;
+assert previous == null : String.format("Callback already exists for id %d! (%s)", messageId, previous);
 return messageId;
 }
 



[3/3] git commit: Merge branch 'cassandra-2.0' into trunk

2013-12-12 Thread jbellis
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b25ae0f9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b25ae0f9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b25ae0f9

Branch: refs/heads/trunk
Commit: b25ae0f921a7366ce44751867f600c8bce4d287b
Parents: a9b93c2 c4d3a31
Author: Jonathan Ellis jbel...@apache.org
Authored: Fri Dec 13 02:30:58 2013 +0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Dec 13 02:30:58 2013 +0600

--
 src/java/org/apache/cassandra/net/CallbackInfo.java | 9 +
 src/java/org/apache/cassandra/net/MessagingService.java | 4 ++--
 2 files changed, 11 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b25ae0f9/src/java/org/apache/cassandra/net/MessagingService.java
--



[jira] [Commented] (CASSANDRA-6476) Assertion error in MessagingService.addCallback

2013-12-12 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13846709#comment-13846709
 ] 

Jonathan Ellis commented on CASSANDRA-6476:
---

Huh.  Well, I added some extra detail to the assert in 
c4d3a313885f14e802247b9354aafa4caaae9804.  Maybe that will show a clue.

 Assertion error in MessagingService.addCallback
 ---

 Key: CASSANDRA-6476
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6476
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.0.2 DCE
Reporter: Theo Hultberg
Assignee: Sylvain Lebresne

 Two of the three Cassandra nodes in one of our clusters just started behaving 
 very strangely about an hour ago. Within a minute of each other they started 
 logging AssertionErrors (see stack traces here: 
 https://gist.github.com/iconara/7917438) over and over again. The client lost 
 connection with the nodes at roughly the same time. The nodes were still up, 
 and even if no clients were connected to them they continued logging the same 
 errors over and over.
 The errors are in the native transport (specifically 
 MessagingService.addCallback) which makes me suspect that it has something to 
 do with a test that we started running this afternoon. I've just implemented 
 support for frame compression in my CQL driver cql-rb. About two hours before 
 this happened I deployed a version of the application which enabled Snappy 
 compression on all frames larger than 64 bytes. It's not impossible that 
 there is a bug somewhere in the driver or compression library that caused 
 this -- but at the same time, it feels like it shouldn't be possible to make 
 C* a zombie with a bad frame.
 Restarting seems to have got them back running again, but I suspect they will 
 go down again sooner or later.





[jira] [Commented] (CASSANDRA-5899) Sends all interface in native protocol notification when rpc_address=0.0.0.0

2013-12-12 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13846768#comment-13846768
 ] 

Tyler Hobbs commented on CASSANDRA-5899:


Why don't we add a {{broadcast_rpc_address}} config option?  If set, that can 
be used for the system.local/peers address as well as for pushed messages.  
It's a simple solution with no downside (that I'm aware of), and the behavior 
is familiar, since we already have something analogous with 
{{broadcast_address}}.
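
For illustration only, the suggested option would sit alongside the existing address settings in cassandra.yaml roughly like this. This is a sketch of the proposal; {{broadcast_rpc_address}} is not an existing option at the time of this comment, and the addresses are placeholders.

```yaml
# Existing settings
listen_address: 10.0.0.5     # private interface, internode traffic
rpc_address: 0.0.0.0         # bind client connections on all interfaces

# Proposed: the single address advertised to drivers in system.local /
# system.peers and in native-protocol topology notifications, analogous
# to broadcast_address for internode traffic.
broadcast_rpc_address: 203.0.113.7
```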

 Sends all interface in native protocol notification when rpc_address=0.0.0.0
 

 Key: CASSANDRA-5899
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5899
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Priority: Minor
 Fix For: 2.1


 For the native protocol notifications, when we send a new node notification, 
 we send the rpc_address of that new node. For this to be actually useful, 
 that address sent should be publicly accessible by the driver it is destined 
 to. 
 The problem is when rpc_address=0.0.0.0. Currently, we send the 
 listen_address, which is correct in the sense that we are indeed bound to it, 
 but it might not be accessible by client nodes.
 In fact, one of the good reasons to use a 0.0.0.0 rpc_address would be if you 
 have a private network for internode communication and another for 
 client-server communications, but still want to be able to issue queries from 
 the private network for debugging. In that case, the current behavior of 
 sending listen_address doesn't really help.
 So one suggestion would be to instead send all the addresses on which the 
 (native protocol) server is bound to (which would still leave to the driver 
 the task to pick the right one, but at least it has something to pick from).
 That's relatively trivial to do in practice, but it does require a minor 
 binary protocol break to return a list instead of just one IP, which is why 
 I'm tentatively marking this 2.0. Maybe we can shove that tiny change into the 
 final (in protocol v2 only)? Provided we agree it's a good idea, of course.
 Now to be complete, for the same reasons, we would also need to store all the 
 addresses we are bound to in the peers table. That's also fairly simple and 
 the backward compatibility story is maybe a tad simpler: we could add a new 
 {{rpc_addresses}} column that would be a list and deprecate {{rpc_address}} 
 (to be removed in 2.1 for instance).





[jira] [Comment Edited] (CASSANDRA-6269) Add ability to ignore L0 on CF level

2013-12-12 Thread Matt Kapilevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13846789#comment-13846789
 ] 

Matt Kapilevich edited comment on CASSANDRA-6269 at 12/12/13 9:54 PM:
--

I would love to get 5371, but we can't upgrade to 2.0 until it's more mature, 
and part of DSE. Do you have objections to us committing this patch ourselves 
on a 1.2 branch?


was (Author: matvey14):
I would love to get 5371, but we can't upgrade to 2.0 until it's more mature, 
and part of DSE. Do you have objections to us committing this fix ourselves on 
a 1.2 branch?

 Add ability to ignore L0 on CF level
 

 Key: CASSANDRA-6269
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6269
 Project: Cassandra
  Issue Type: Improvement
Reporter: Matt Kapilevich
 Attachments: L0-vs-availability.png


 One of our CF's is written to only from a batch process. We use Cassandra's 
 bulk-load utility to load the data. When the load happens, the number of 
 tables in L0 increases, and then comes back down as they are compacted. While 
 the number of tables in L0 is high, there's increased load on the node, and 
 read availability suffers, since L0 is unsorted, and therefore lookups 
 against L0 are inefficient.
 This all works-as-designed, and issues around L0 are known.
 I think it would be a great addition to disable reading from L0, settable on 
 CF-level, as one of Leveled Compaction options. In our case, because the data 
 is written by a batch process, we are fine waiting a little longer while L0 
 is compacted away. However, the decrease in availability rate while this is 
 happening is an issue for us.
 I would propose to add disable_reads_from_L0 parameter to 
 compaction_strategy_options, with default being false. In cases when 
 availability is much more important than consistency, like ours, the user can 
 set it to true.
 I've attached a graph that shows the relationship between our availability 
 rate and number of tables in L0.
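
Expressed as CQL, the proposal above would read roughly as follows. This is hypothetical: {{disable_reads_from_L0}} is the option being proposed, not an existing compaction option, and the table name is a placeholder.

```sql
ALTER TABLE mykeyspace.batch_loaded_table
  WITH compaction = { 'class' : 'LeveledCompactionStrategy',
                      'disable_reads_from_L0' : 'true' };
```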





[jira] [Commented] (CASSANDRA-6470) ArrayIndexOutOfBoundsException on range query from client

2013-12-12 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13846841#comment-13846841
 ] 

Jeremiah Jordan commented on CASSANDRA-6470:


[~enrico.scalavino] are you using version 2.0.X of C* and the driver?  The 
limit being a bind parameter isn't supported unless you are. It may be 
confusing the driver if you are using the 1.0.X version of the driver (not 
sure what error gets thrown if you do that).

 ArrayIndexOutOfBoundsException on range query from client
 -

 Key: CASSANDRA-6470
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6470
 Project: Cassandra
  Issue Type: Bug
Reporter: Enrico Scalavino
Assignee: Ryan McGuire

 schema: 
 CREATE TABLE inboxkeyspace.inboxes(user_id bigint, message_id bigint, 
 thread_id bigint, network_id bigint, read boolean, PRIMARY KEY(user_id, 
 message_id)) WITH CLUSTERING ORDER BY (message_id DESC);
 CREATE INDEX ON inboxkeyspace.inboxes(read);
 query: 
 SELECT thread_id, message_id, network_id FROM inboxkeyspace.inboxes WHERE 
 user_id = ? AND message_id  ? AND read = ? LIMIT ? 
 The query works if run via cqlsh. However, when run through the datastax 
 client, on the client side we get a timeout exception and on the server side, 
 the Cassandra log shows this exception: 
 ERROR [ReadStage:4190] 2013-12-10 13:18:03,579 CassandraDaemon.java (line 
 187) Exception in thread Thread[ReadStage:4190,5,main]
 java.lang.RuntimeException: java.lang.ArrayIndexOutOfBoundsException: 0
 at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1940)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:722)
 Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
 at 
 org.apache.cassandra.db.filter.SliceQueryFilter.start(SliceQueryFilter.java:261)
 at 
 org.apache.cassandra.db.index.composites.CompositesSearcher.makePrefix(CompositesSearcher.java:66)
 at 
 org.apache.cassandra.db.index.composites.CompositesSearcher.getIndexedIterator(CompositesSearcher.java:101)
 at 
 org.apache.cassandra.db.index.composites.CompositesSearcher.search(CompositesSearcher.java:53)
 at 
 org.apache.cassandra.db.index.SecondaryIndexManager.search(SecondaryIndexManager.java:537)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.search(ColumnFamilyStore.java:1669)
 at 
 org.apache.cassandra.db.PagedRangeCommand.executeLocally(PagedRangeCommand.java:109)
 at 
 org.apache.cassandra.service.StorageProxy$LocalRangeSliceRunnable.runMayThrow(StorageProxy.java:1423)
 at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1936)
 ... 3 more





[jira] [Updated] (CASSANDRA-5125) Support indexes on composite column components (clustered columns)

2013-12-12 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5125:
--

Summary: Support indexes on composite column components (clustered columns) 
 (was: Support indexes on composite column components)

 Support indexes on composite column components (clustered columns)
 --

 Key: CASSANDRA-5125
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5125
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
 Fix For: 2.0 beta 1

 Attachments: 0001-Refactor-aliases-into-column_metadata.txt, 
 0002-Generalize-CompositeIndex-for-all-column-type.txt, 
 0003-Handle-new-type-of-IndexExpression.txt, 
 0004-Handle-partition-key-indexing.txt


 Given
 {code}
 CREATE TABLE foo (
   a int,
   b int,
   c int,
   PRIMARY KEY (a, b)
 );
 {code}
 We should support {{CREATE INDEX ON foo(b)}}.
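
 With this ticket in place (fix version 2.0 beta 1), the index and a query 
 against the clustered column would look like:

```sql
CREATE INDEX ON foo (b);
SELECT * FROM foo WHERE b = 2;
```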





[jira] [Comment Edited] (CASSANDRA-6470) ArrayIndexOutOfBoundsException on range query from client

2013-12-12 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13846841#comment-13846841
 ] 

Jeremiah Jordan edited comment on CASSANDRA-6470 at 12/12/13 11:34 PM:
---

[~enrico.scalavino] are you using version 2.0.X of C* and the driver?  The 
limit being a bind parameter isn't supported unless you are. It may be 
confusing the driver if you are using the 1.0.X version of the driver (not sure 
what error gets thrown if you do that).


was (Author: jjordan):
[~enrico.scalavino] are you using version 2.0.X of C* and the driver?  The 
limit being a bind parameter isn't supported unless you are. It be confusing 
the driver if you are using the 1.0.X version of the driver (not sure what 
error gets thrown if you do that).

 ArrayIndexOutOfBoundsException on range query from client
 -

 Key: CASSANDRA-6470
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6470
 Project: Cassandra
  Issue Type: Bug
Reporter: Enrico Scalavino
Assignee: Ryan McGuire

 schema: 
 CREATE TABLE inboxkeyspace.inboxes(user_id bigint, message_id bigint, 
 thread_id bigint, network_id bigint, read boolean, PRIMARY KEY(user_id, 
 message_id)) WITH CLUSTERING ORDER BY (message_id DESC);
 CREATE INDEX ON inboxkeyspace.inboxes(read);
 query: 
 SELECT thread_id, message_id, network_id FROM inboxkeyspace.inboxes WHERE 
 user_id = ? AND message_id  ? AND read = ? LIMIT ? 
 The query works if run via cqlsh. However, when run through the datastax 
 client, on the client side we get a timeout exception and on the server side, 
 the Cassandra log shows this exception: 
 ERROR [ReadStage:4190] 2013-12-10 13:18:03,579 CassandraDaemon.java (line 
 187) Exception in thread Thread[ReadStage:4190,5,main]
 java.lang.RuntimeException: java.lang.ArrayIndexOutOfBoundsException: 0
 at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1940)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:722)
 Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
 at 
 org.apache.cassandra.db.filter.SliceQueryFilter.start(SliceQueryFilter.java:261)
 at 
 org.apache.cassandra.db.index.composites.CompositesSearcher.makePrefix(CompositesSearcher.java:66)
 at 
 org.apache.cassandra.db.index.composites.CompositesSearcher.getIndexedIterator(CompositesSearcher.java:101)
 at 
 org.apache.cassandra.db.index.composites.CompositesSearcher.search(CompositesSearcher.java:53)
 at 
 org.apache.cassandra.db.index.SecondaryIndexManager.search(SecondaryIndexManager.java:537)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.search(ColumnFamilyStore.java:1669)
 at 
 org.apache.cassandra.db.PagedRangeCommand.executeLocally(PagedRangeCommand.java:109)
 at 
 org.apache.cassandra.service.StorageProxy$LocalRangeSliceRunnable.runMayThrow(StorageProxy.java:1423)
 at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1936)
 ... 3 more





[3/3] git commit: Merge branch 'cassandra-2.0' into trunk

2013-12-12 Thread jbellis
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/611f328f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/611f328f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/611f328f

Branch: refs/heads/trunk
Commit: 611f328f3f7ad37c2e74302564e7b198d17535ab
Parents: b25ae0f fb5808d
Author: Jonathan Ellis jbel...@apache.org
Authored: Fri Dec 13 05:35:17 2013 +0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Dec 13 05:35:17 2013 +0600

--
 CHANGES.txt | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/611f328f/CHANGES.txt
--



[1/3] git commit: add #5125 to CHANGES

2013-12-12 Thread jbellis
Updated Branches:
  refs/heads/cassandra-2.0 c4d3a3138 - fb5808d43
  refs/heads/trunk b25ae0f92 - 611f328f3


add #5125 to CHANGES


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fb5808d4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fb5808d4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fb5808d4

Branch: refs/heads/cassandra-2.0
Commit: fb5808d431bc44a39033651a8811aeef169a1df4
Parents: c4d3a31
Author: Jonathan Ellis jbel...@apache.org
Authored: Fri Dec 13 05:35:07 2013 +0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Dec 13 05:35:07 2013 +0600

--
 CHANGES.txt | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/fb5808d4/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index d573e37..a4b34ca 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -345,6 +345,7 @@ Merged from 1.2:
 
 
 2.0.0-beta1
+ * Add support for indexing clustered columns (CASSANDRA-5125)
  * Removed on-heap row cache (CASSANDRA-5348)
  * use nanotime consistently for node-local timeouts (CASSANDRA-5581)
  * Avoid unnecessary second pass on name-based queries (CASSANDRA-5577)



[2/3] git commit: add #5125 to CHANGES

2013-12-12 Thread jbellis
add #5125 to CHANGES


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fb5808d4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fb5808d4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fb5808d4

Branch: refs/heads/trunk
Commit: fb5808d431bc44a39033651a8811aeef169a1df4
Parents: c4d3a31
Author: Jonathan Ellis jbel...@apache.org
Authored: Fri Dec 13 05:35:07 2013 +0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Dec 13 05:35:07 2013 +0600

--
 CHANGES.txt | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/fb5808d4/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index d573e37..a4b34ca 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -345,6 +345,7 @@ Merged from 1.2:
 
 
 2.0.0-beta1
+ * Add support for indexing clustered columns (CASSANDRA-5125)
  * Removed on-heap row cache (CASSANDRA-5348)
  * use nanotime consistently for node-local timeouts (CASSANDRA-5581)
  * Avoid unnecessary second pass on name-based queries (CASSANDRA-5577)



[jira] [Created] (CASSANDRA-6483) Possible Collections.sort assertion failure in STCS.filterColdSSTables

2013-12-12 Thread graham sanderson (JIRA)
graham sanderson created CASSANDRA-6483:
---

 Summary: Possible Collections.sort assertion failure in 
STCS.filterColdSSTables
 Key: CASSANDRA-6483
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6483
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: graham sanderson


We have observed the following stack trace periodically:

{code}
java.lang.IllegalArgumentException: Comparison method violates its general 
contract!
at java.util.TimSort.mergeLo(TimSort.java:747)
at java.util.TimSort.mergeAt(TimSort.java:483)
at java.util.TimSort.mergeCollapse(TimSort.java:410)
at java.util.TimSort.sort(TimSort.java:214)
at java.util.TimSort.sort(TimSort.java:173)
at java.util.Arrays.sort(Arrays.java:659)
at java.util.Collections.sort(Collections.java:217)
at 
org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.filterColdSSTables(SizeTieredCompactionStrategy.java:94)
at 
org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundSSTables(SizeTieredCompactionStrategy.java:59)
at 
org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundTask(SizeTieredCompactionStrategy.java:229)
at 
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:191)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
{code}

The comparator at SizeTieredCompactionStrategy line 94 breaks the assertions 
in the new JDK7 default sort algorithm, because (I think just) the hotness 
value (based on meter) may be modified concurrently by another thread.

This bug appears to have been introduced in CASSANDRA-6109
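The failure mode can be sketched in a few lines (hypothetical class and field names, not Cassandra code): a comparator that reads a mutable field concurrently updated by another thread can present TimSort with an inconsistent ordering, while snapshotting the field once before the sort avoids it.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.IdentityHashMap;
import java.util.List;
import java.util.Map;

public class HotnessSortSketch {
    // Hypothetical stand-in for an SSTable whose read-rate "hotness"
    // can be updated concurrently by another thread.
    static class Table {
        final String name;
        volatile double hotness;
        Table(String name, double hotness) { this.name = name; this.hotness = hotness; }
    }

    public static void main(String[] args) {
        List<Table> tables = new ArrayList<Table>(Arrays.asList(
                new Table("a", 3.0), new Table("b", 1.0), new Table("c", 2.0)));

        // UNSAFE (sketch of the reported bug): comparing on the live, mutable
        // field. If another thread updates hotness mid-sort, TimSort can see
        // an inconsistent ordering and throw IllegalArgumentException
        // ("Comparison method violates its general contract!").
        //
        // Collections.sort(tables, new Comparator<Table>() {
        //     public int compare(Table t1, Table t2) {
        //         return Double.compare(t1.hotness, t2.hotness);
        //     }
        // });

        // SAFER: snapshot each table's hotness once, then compare snapshots,
        // so every comparison of a given table sees the same value.
        final Map<Table, Double> snapshot = new IdentityHashMap<Table, Double>();
        for (Table t : tables) snapshot.put(t, t.hotness);
        Collections.sort(tables, new Comparator<Table>() {
            public int compare(Table t1, Table t2) {
                return Double.compare(snapshot.get(t1), snapshot.get(t2));
            }
        });

        for (Table t : tables) System.out.println(t.name);
    }
}
```

An IdentityHashMap is used so tables are keyed by reference rather than by any (possibly mutable) equals/hashCode state.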






[jira] [Commented] (CASSANDRA-6483) Possible Collections.sort assertion failure in STCS.filterColdSSTables

2013-12-12 Thread graham sanderson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13846982#comment-13846982
 ] 

graham sanderson commented on CASSANDRA-6483:
-

Note the Java system property java.util.Arrays.useLegacyMergeSort could be used as a 
workaround, but it is unclear to me whether that would produce desirable results.

 Possible Collections.sort assertion failure in STCS.filterColdSSTables
 --

 Key: CASSANDRA-6483
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6483
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: graham sanderson

 We have observed the following stack trace periodically:
 {code}
 java.lang.IllegalArgumentException: Comparison method violates its general 
 contract!
 at java.util.TimSort.mergeLo(TimSort.java:747)
 at java.util.TimSort.mergeAt(TimSort.java:483)
 at java.util.TimSort.mergeCollapse(TimSort.java:410)
 at java.util.TimSort.sort(TimSort.java:214)
 at java.util.TimSort.sort(TimSort.java:173)
 at java.util.Arrays.sort(Arrays.java:659)
 at java.util.Collections.sort(Collections.java:217)
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.filterColdSSTables(SizeTieredCompactionStrategy.java:94)
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundSSTables(SizeTieredCompactionStrategy.java:59)
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundTask(SizeTieredCompactionStrategy.java:229)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:191)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)
 {code}
 The comparator at SizeTieredCompactionStrategy line 94 breaks the assertions 
 in the new JDK7 default sort algorithm, because (I think just) the hotness 
 value (based on meter) may be modified concurrently by another thread.
 This bug appears to have been introduced in CASSANDRA-6109





[jira] [Commented] (CASSANDRA-6483) Possible Collections.sort assertion failure in STCS.filterColdSSTables

2013-12-12 Thread graham sanderson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13846987#comment-13846987
 ] 

graham sanderson commented on CASSANDRA-6483:
-

The simplest fix is probably just to precompute an IdentityMap of any of the 
mutable data, and use it from within the comparator (since the comparator 
happens to be non-static).

Alternatively, use a List of a new wrapper type and sort that instead.
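The wrapper-type alternative can be sketched like this (hypothetical names, not the actual patch): each wrapper captures the hotness once at construction time, so the sort only ever reads immutable state.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class WrapperSortSketch {
    // Hypothetical stand-in for an SSTable with concurrently updated hotness.
    static class Table {
        final String name;
        volatile double hotness;
        Table(String name, double hotness) { this.name = name; this.hotness = hotness; }
    }

    // The wrapper copies hotness once at construction; the sort then reads
    // only this immutable copy, so the ordering cannot change mid-sort even
    // if the underlying Table's hotness does.
    static class HotTable implements Comparable<HotTable> {
        final Table table;
        final double hotnessAtSort;
        HotTable(Table table) { this.table = table; this.hotnessAtSort = table.hotness; }
        public int compareTo(HotTable other) {
            return Double.compare(this.hotnessAtSort, other.hotnessAtSort);
        }
    }

    public static void main(String[] args) {
        List<Table> tables = Arrays.asList(
                new Table("a", 3.0), new Table("b", 1.0), new Table("c", 2.0));

        // Wrap, sort the wrappers, then unwrap in sorted order.
        List<HotTable> wrapped = new ArrayList<HotTable>();
        for (Table t : tables) wrapped.add(new HotTable(t));
        Collections.sort(wrapped);

        for (HotTable h : wrapped) System.out.println(h.table.name);
    }
}
```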

 Possible Collections.sort assertion failure in STCS.filterColdSSTables
 --

 Key: CASSANDRA-6483
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6483
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: graham sanderson

 We have observed the following stack trace periodically:
 {code}
 java.lang.IllegalArgumentException: Comparison method violates its general 
 contract!
 at java.util.TimSort.mergeLo(TimSort.java:747)
 at java.util.TimSort.mergeAt(TimSort.java:483)
 at java.util.TimSort.mergeCollapse(TimSort.java:410)
 at java.util.TimSort.sort(TimSort.java:214)
 at java.util.TimSort.sort(TimSort.java:173)
 at java.util.Arrays.sort(Arrays.java:659)
 at java.util.Collections.sort(Collections.java:217)
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.filterColdSSTables(SizeTieredCompactionStrategy.java:94)
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundSSTables(SizeTieredCompactionStrategy.java:59)
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundTask(SizeTieredCompactionStrategy.java:229)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:191)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)
 {code}
 The comparator at SizeTieredCompactionStrategy line 94 breaks the assertions 
 in the new JDK7 default sort algorithm, because (I think just) the hotness 
 value (based on meter) may be modified concurrently by another thread.
 This bug appears to have been introduced in CASSANDRA-6109





[jira] [Updated] (CASSANDRA-6483) Possible Collections.sort assertion failure in STCS.filterColdSSTables

2013-12-12 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6483:
--

Fix Version/s: 2.0.4
   Labels: compaction  (was: )

 Possible Collections.sort assertion failure in STCS.filterColdSSTables
 --

 Key: CASSANDRA-6483
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6483
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: graham sanderson
Assignee: Tyler Hobbs
  Labels: compaction
 Fix For: 2.0.4


 We have observed the following stack trace periodically:
 {code}
 java.lang.IllegalArgumentException: Comparison method violates its general 
 contract!
 at java.util.TimSort.mergeLo(TimSort.java:747)
 at java.util.TimSort.mergeAt(TimSort.java:483)
 at java.util.TimSort.mergeCollapse(TimSort.java:410)
 at java.util.TimSort.sort(TimSort.java:214)
 at java.util.TimSort.sort(TimSort.java:173)
 at java.util.Arrays.sort(Arrays.java:659)
 at java.util.Collections.sort(Collections.java:217)
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.filterColdSSTables(SizeTieredCompactionStrategy.java:94)
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundSSTables(SizeTieredCompactionStrategy.java:59)
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundTask(SizeTieredCompactionStrategy.java:229)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:191)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)
 {code}
 The comparator at SizeTieredCompactionStrategy line 94 breaks the assertions 
 in the new JDK7 default sort algorithm, because (I think just) the hotness 
 value (based on meter) may be modified concurrently by another thread.
 This bug appears to have been introduced in CASSANDRA-6109





[jira] [Assigned] (CASSANDRA-6483) Possible Collections.sort assertion failure in STCS.filterColdSSTables

2013-12-12 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reassigned CASSANDRA-6483:
-

Assignee: Tyler Hobbs

 Possible Collections.sort assertion failure in STCS.filterColdSSTables
 --

 Key: CASSANDRA-6483
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6483
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: graham sanderson
Assignee: Tyler Hobbs
  Labels: compaction
 Fix For: 2.0.4


 We have observed the following stack trace periodically:
 {code}
 java.lang.IllegalArgumentException: Comparison method violates its general 
 contract!
 at java.util.TimSort.mergeLo(TimSort.java:747)
 at java.util.TimSort.mergeAt(TimSort.java:483)
 at java.util.TimSort.mergeCollapse(TimSort.java:410)
 at java.util.TimSort.sort(TimSort.java:214)
 at java.util.TimSort.sort(TimSort.java:173)
 at java.util.Arrays.sort(Arrays.java:659)
 at java.util.Collections.sort(Collections.java:217)
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.filterColdSSTables(SizeTieredCompactionStrategy.java:94)
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundSSTables(SizeTieredCompactionStrategy.java:59)
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundTask(SizeTieredCompactionStrategy.java:229)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:191)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)
 {code}
 The comparator at SizeTieredCompactionStrategy line 94 breaks the assertions 
 in the new JDK7 default sort algorithm, because (I think just) the hotness 
 value (based on meter) may be modified concurrently by another thread.
 This bug appears to have been introduced in CASSANDRA-6109





[jira] [Commented] (CASSANDRA-6470) ArrayIndexOutOfBoundsException on range query from client

2013-12-12 Thread Marcos Trama (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13847099#comment-13847099
 ] 

Marcos Trama commented on CASSANDRA-6470:
-

I get the same error. I don't know when it started. I'm using Cassandra 
2.0.2 and Datastax Java Driver 2.0.0-beta2. The query works in cqlsh but fails when 
run from the client. I tried to re-create (DROP/CREATE) the column family, 
but the error persists.

Query in the cqlsh:

cqlsh:pollkan SELECT observer FROM observed WHERE observed = 
fa93c210-4bff-11e3-b48f-5714d8c6f3b2 AND observer  
--1000-- and blocked = false LIMIT 1;

 observer
--
 43814f60-5bb1-11e3-97c8-ad396a9e8180

(1 rows)

Query in the client:

2013-12-13/00:53:03.039/BRST [timeline_1] DEBUG 
br.com.pollkan.batch.CqlCommands Execute query [SELECT observer FROM observed 
WHERE observed = ? AND observer  ? and blocked = ? LIMIT 1;] arguments 
[[fa93c210-4bff-11e3-b48f-5714d8c6f3b2][--1000--][false]]

Error in cassandra:

ERROR [ReadStage:52] 2013-12-13 01:04:56,799 CassandraDaemon.java (line 187) 
Exception in thread Thread[ReadStage:52,5,main]
java.lang.RuntimeException: java.lang.ArrayIndexOutOfBoundsException: 0
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1931)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
at 
org.apache.cassandra.db.filter.SliceQueryFilter.start(SliceQueryFilter.java:261)
at 
org.apache.cassandra.db.index.composites.CompositesSearcher.makePrefix(CompositesSearcher.java:66)
at 
org.apache.cassandra.db.index.composites.CompositesSearcher.getIndexedIterator(CompositesSearcher.java:101)
at 
org.apache.cassandra.db.index.composites.CompositesSearcher.search(CompositesSearcher.java:53)
at 
org.apache.cassandra.db.index.SecondaryIndexManager.search(SecondaryIndexManager.java:537)
at 
org.apache.cassandra.db.ColumnFamilyStore.search(ColumnFamilyStore.java:1649)
at 
org.apache.cassandra.db.PagedRangeCommand.executeLocally(PagedRangeCommand.java:109)
at 
org.apache.cassandra.service.StorageProxy$LocalRangeSliceRunnable.runMayThrow(StorageProxy.java:1414)
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1927)
... 3 more

Error from driver log:

2013-12-13/01:05:06.798/BRST [timeline_1] ERROR 
br.com.pollkan.batch.CqlCommands Exception! [Cassandra timeout during read 
query at consistency ONE (1 responses were required but only 0 replica 
responded)]
com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout 
during read query at consistency ONE (1 responses were required but only 0 
replica responded)
at 
com.datastax.driver.core.exceptions.ReadTimeoutException.copy(ReadTimeoutException.java:69)
at 
com.datastax.driver.core.ResultSetFuture.extractCauseFromExecutionException(ResultSetFuture.java:271)
at 
com.datastax.driver.core.ResultSetFuture.getUninterruptibly(ResultSetFuture.java:187)
at com.datastax.driver.core.Session.execute(Session.java:126)
at br.com.pollkan.batch.CqlCommands.executeQuery(CqlCommands.java:149)
at br.com.pollkan.batch.BaseBatch.processChild(BaseBatch.java:364)
at br.com.pollkan.batch.BaseBatch.run(BaseBatch.java:640)
at java.lang.Thread.run(Thread.java:722)

If you need more information, please let me know. Thanks.

 ArrayIndexOutOfBoundsException on range query from client
 -

 Key: CASSANDRA-6470
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6470
 Project: Cassandra
  Issue Type: Bug
Reporter: Enrico Scalavino
Assignee: Ryan McGuire

 schema: 
 CREATE TABLE inboxkeyspace.inboxes(user_id bigint, message_id bigint, 
 thread_id bigint, network_id bigint, read boolean, PRIMARY KEY(user_id, 
 message_id)) WITH CLUSTERING ORDER BY (message_id DESC);
 CREATE INDEX ON inboxkeyspace.inboxes(read);
 query: 
 SELECT thread_id, message_id, network_id FROM inboxkeyspace.inboxes WHERE 
 user_id = ? AND message_id  ? AND read = ? LIMIT ? 
 The query works if run via cqlsh. However, when run through the datastax 
 client, on the client side we get a timeout exception and on the server side, 
 the Cassandra log shows this exception: 
 ERROR [ReadStage:4190] 2013-12-10 13:18:03,579 CassandraDaemon.java (line 
 187) Exception in thread Thread[ReadStage:4190,5,main]
 java.lang.RuntimeException: java.lang.ArrayIndexOutOfBoundsException: 0
 at 
 

[jira] [Comment Edited] (CASSANDRA-6470) ArrayIndexOutOfBoundsException on range query from client

2013-12-12 Thread Marcos Trama (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13847099#comment-13847099
 ] 

Marcos Trama edited comment on CASSANDRA-6470 at 12/13/13 3:11 AM:
---

I get the same error. I don't know when it started. I'm using Cassandra 
2.0.2 and Datastax Java Driver 2.0.0-beta2. The query works in cqlsh but fails when 
run from the client. I tried to re-create (DROP/CREATE) the column family, 
but the error persists.

=
Table layout:

cqlsh:pollkan desc table observed;

CREATE TABLE observed (
  observed timeuuid,
  observer timeuuid,
  blocked boolean,
  PRIMARY KEY (observed, observer)
) WITH
  bloom_filter_fp_chance=0.01 AND
  caching='KEYS_ONLY' AND
  comment='' AND
  dclocal_read_repair_chance=0.00 AND
  gc_grace_seconds=864000 AND
  index_interval=128 AND
  read_repair_chance=0.10 AND
  replicate_on_write='true' AND
  populate_io_cache_on_flush='false' AND
  default_time_to_live=0 AND
  speculative_retry='99.0PERCENTILE' AND
  memtable_flush_period_in_ms=0 AND
  compaction={'class': 'SizeTieredCompactionStrategy'} AND
  compression={'sstable_compression': 'LZ4Compressor'};

CREATE INDEX observedBlocked ON observed (blocked);

=
Query in the cqlsh:

cqlsh:pollkan SELECT observer FROM observed WHERE observed = 
fa93c210-4bff-11e3-b48f-5714d8c6f3b2 AND observer  
--1000-- and blocked = false LIMIT 1;

 observer
--
 43814f60-5bb1-11e3-97c8-ad396a9e8180

(1 rows)

=
Query in the client log:

2013-12-13/00:53:03.039/BRST [timeline_1] DEBUG 
br.com.pollkan.batch.CqlCommands Execute query [SELECT observer FROM observed 
WHERE observed = ? AND observer  ? and blocked = ? LIMIT 1;] arguments 
[[fa93c210-4bff-11e3-b48f-5714d8c6f3b2][--1000--][false]]

=
Error in cassandra:

ERROR [ReadStage:52] 2013-12-13 01:04:56,799 CassandraDaemon.java (line 187) 
Exception in thread Thread[ReadStage:52,5,main]
java.lang.RuntimeException: java.lang.ArrayIndexOutOfBoundsException: 0
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1931)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
at 
org.apache.cassandra.db.filter.SliceQueryFilter.start(SliceQueryFilter.java:261)
at 
org.apache.cassandra.db.index.composites.CompositesSearcher.makePrefix(CompositesSearcher.java:66)
at 
org.apache.cassandra.db.index.composites.CompositesSearcher.getIndexedIterator(CompositesSearcher.java:101)
at 
org.apache.cassandra.db.index.composites.CompositesSearcher.search(CompositesSearcher.java:53)
at 
org.apache.cassandra.db.index.SecondaryIndexManager.search(SecondaryIndexManager.java:537)
at 
org.apache.cassandra.db.ColumnFamilyStore.search(ColumnFamilyStore.java:1649)
at 
org.apache.cassandra.db.PagedRangeCommand.executeLocally(PagedRangeCommand.java:109)
at 
org.apache.cassandra.service.StorageProxy$LocalRangeSliceRunnable.runMayThrow(StorageProxy.java:1414)
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1927)
... 3 more

=
Error from driver log:

2013-12-13/01:05:06.798/BRST [timeline_1] ERROR 
br.com.pollkan.batch.CqlCommands Exception! [Cassandra timeout during read 
query at consistency ONE (1 responses were required but only 0 replica 
responded)]
com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout 
during read query at consistency ONE (1 responses were required but only 0 
replica responded)
at 
com.datastax.driver.core.exceptions.ReadTimeoutException.copy(ReadTimeoutException.java:69)
at 
com.datastax.driver.core.ResultSetFuture.extractCauseFromExecutionException(ResultSetFuture.java:271)
at 
com.datastax.driver.core.ResultSetFuture.getUninterruptibly(ResultSetFuture.java:187)
at com.datastax.driver.core.Session.execute(Session.java:126)
at br.com.pollkan.batch.CqlCommands.executeQuery(CqlCommands.java:149)
at br.com.pollkan.batch.BaseBatch.processChild(BaseBatch.java:364)
at br.com.pollkan.batch.BaseBatch.run(BaseBatch.java:640)
at java.lang.Thread.run(Thread.java:722)

If you need more information, please let me know. Thanks.


was (Author: marcostrama):
I get the same error. I dont know when it has been started. I'm using Cassandra 
2.0.2 and Datastax Java Driver 2.0.0-beta2. Query works in cqlsh but fail when 
running in the client. I tried to re-create (DROP/CREATE) the column family, 
but the error stills.

Query in the cqlsh:


[jira] [Commented] (CASSANDRA-6483) Possible Collections.sort assertion failure in STCS.filterColdSSTables

2013-12-12 Thread graham sanderson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13847129#comment-13847129
 ] 

graham sanderson commented on CASSANDRA-6483:
-

Adding my questions from dev-email thread

Note that the CASSANDRA-6109 feature claims to be “off” by default, however it 
isn’t immediately clear to me from that patch how “off” is implemented, and 
whether it is supposed to go down that code path even when “off”.

I’m guessing there is no actual downside (other than ERROR level messages in the 
logs which cause alerts), since it just fails a subset of compaction runs?

 Possible Collections.sort assertion failure in STCS.filterColdSSTables
 --

 Key: CASSANDRA-6483
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6483
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: graham sanderson
Assignee: Tyler Hobbs
  Labels: compaction
 Fix For: 2.0.4


 We have observed the following stack trace periodically:
 {code}
 java.lang.IllegalArgumentException: Comparison method violates its general 
 contract!
 at java.util.TimSort.mergeLo(TimSort.java:747)
 at java.util.TimSort.mergeAt(TimSort.java:483)
 at java.util.TimSort.mergeCollapse(TimSort.java:410)
 at java.util.TimSort.sort(TimSort.java:214)
 at java.util.TimSort.sort(TimSort.java:173)
 at java.util.Arrays.sort(Arrays.java:659)
 at java.util.Collections.sort(Collections.java:217)
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.filterColdSSTables(SizeTieredCompactionStrategy.java:94)
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundSSTables(SizeTieredCompactionStrategy.java:59)
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundTask(SizeTieredCompactionStrategy.java:229)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:191)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)
 {code}
 The comparator at SizeTieredCompactionStrategy line 94 breaks the assertions 
 in the new JDK7 default sort algorithm, because (I think just) the hotness 
 value (based on meter) may be modified concurrently by another thread.
 This bug appears to have been introduced in CASSANDRA-6109





[jira] [Comment Edited] (CASSANDRA-6483) Possible Collections.sort assertion failure in STCS.filterColdSSTables

2013-12-12 Thread graham sanderson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13847129#comment-13847129
 ] 

graham sanderson edited comment on CASSANDRA-6483 at 12/13/13 3:59 AM:
---

Adding my questions from dev-email thread

Note that the CASSANDRA-6109 feature claims to be “off” by default, however it 
isn’t immediately clear to me from that patch how “off” is implemented, and 
whether it is supposed to go down that code path even when “off”.

I’m guessing there is no actual downside (other than ERROR level messages in 
the logs which cause alerts), since it just fails a subset of compactions?


was (Author: graham sanderson):
Adding my questions from dev-email thread

Note that the CASSANDRA-6109 feature claims to be “off” by default, however it 
isn’t immediately clear to me from that patch how “off” is implemented, and 
whether it is supposed to go down that code path even when “off

I’m guessing there no actual downside (other than ERROR level messages in the 
logs which cause alerts), since it just fails a subset of compaction runs?

 Possible Collections.sort assertion failure in STCS.filterColdSSTables
 --

 Key: CASSANDRA-6483
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6483
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: graham sanderson
Assignee: Tyler Hobbs
  Labels: compaction
 Fix For: 2.0.4


 We have observed the following stack trace periodically:
 {code}
 java.lang.IllegalArgumentException: Comparison method violates its general 
 contract!
 at java.util.TimSort.mergeLo(TimSort.java:747)
 at java.util.TimSort.mergeAt(TimSort.java:483)
 at java.util.TimSort.mergeCollapse(TimSort.java:410)
 at java.util.TimSort.sort(TimSort.java:214)
 at java.util.TimSort.sort(TimSort.java:173)
 at java.util.Arrays.sort(Arrays.java:659)
 at java.util.Collections.sort(Collections.java:217)
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.filterColdSSTables(SizeTieredCompactionStrategy.java:94)
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundSSTables(SizeTieredCompactionStrategy.java:59)
 at 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundTask(SizeTieredCompactionStrategy.java:229)
 at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:191)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)
 {code}
 The comparator at SizeTieredCompactionStrategy line 94 breaks the assertions 
 in the new JDK7 default sort algorithm, because (I think just) the hotness 
 value (based on meter) may be modified concurrently by another thread.
 This bug appears to have been introduced in CASSANDRA-6109





[jira] [Commented] (CASSANDRA-6447) SELECT someColumns FROM table results in AssertionError in AbstractQueryPager.discardFirst

2013-12-12 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13847200#comment-13847200
 ] 

Julien Aymé commented on CASSANDRA-6447:


Thanks for looking into this issue, and sorry for the assumption that this was 
trivial (I am still not completely familiar with the architecture of Cassandra, 
but I am trying to dig into it as best I can).

 SELECT someColumns FROM table results in AssertionError in 
 AbstractQueryPager.discardFirst
 --

 Key: CASSANDRA-6447
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6447
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cluster: single node server (ubuntu)
 Cassandra version: 2.0.3 (server/client)
 Client: Datastax cassandra-driver-core 2.0.0-rc1
Reporter: Julien Aymé
Assignee: Julien Aymé
 Fix For: 2.0.4

 Attachments: 6447.txt, cassandra-2.0-6447.patch, stacktrace.txt


 I have a query which must read all the rows from the table:
 Query: SELECT key, col1, col2, col3 FROM mytable
 Here is the corresponding code (this is using datastax driver):
 {code}
 ResultSet result = session.execute("SELECT key, col1, col2, col3 FROM 
 mytable");
 for (Row row : result) {
  // do some work with row
 }
 {code}
 Messages sent from the client to Cassandra:
 * 1st: {{QUERY SELECT key, col1, col2, col3 FROM mytable([cl=ONE, vals=[], 
 skip=false, psize=5000, state=null, serialCl=ONE])}}
 * 2nd: {{QUERY SELECT key, col1, col2, col3 FROM mytable([cl=ONE, vals=[], 
 skip=false, psize=5000, state=java.nio.HeapByteBuffer[pos=24 lim=80 
 cap=410474], serialCl=ONE])}}
 On the first message, everything is fine, and the server returns 5000 rows.
 On the second message, paging is in progress, and the server fails in 
 AbstractQueryPager.discardFirst: AssertionError (stack trace attached).
 Here is some more info (step by step debugging on reception of 2nd message):
 {code}
 AbstractQueryPager.fetchPage(int):
 * pageSize=5000, currentPageSize=5001, rows size=5002, liveCount=5001
 * containsPreviousLast(rows.get(0)) returns true
 - AbstractQueryPager.discardFirst(List<Row>):
 * rows size=5002
 * first=TreeMapBackedSortedColumns[with TreeMap size=1]
 - AbstractQueryPager.discardHead(ColumnFamily, ...):
 * counter = ColumnCounter$GroupByPrefix
 * iter.hasNext() returns true (TreeMap$ValueIterator with TreeMap size=1)
 * Column c = DeletedColumn
 * counter.count() - c.isLive returns false (c is DeletedColumn)
 * counter.live() = 0
 * iter.hasNext() returns false
 * Math.min(0, toDiscard==1) returns 0
 - AbstractQueryPager.discardFirst(List<Row>):
 * discarded = 0;
 * count = newCf.getColumnCount() = 0;
 {code}
 -  assert discarded == 1 *throws AssertionError*
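The discard arithmetic in the trace above can be condensed into a standalone sketch (a hypothetical model, not the pager code itself): when the carried-over first row contains only a deleted column, the live count is 0, so nothing is discarded and the `discarded == 1` postcondition fails.

```java
public class DiscardFirstSketch {
    // Minimal model of the counting in AbstractQueryPager.discardHead:
    // walk the carried-over row's cells, count the live ones, and discard
    // at most min(liveCount, toDiscard).
    static int discardHead(boolean[] cellIsLive, int toDiscard) {
        int live = 0;
        for (boolean isLive : cellIsLive) {
            if (isLive) live++;            // a DeletedColumn is not live
        }
        return Math.min(live, toDiscard);  // cells actually discarded
    }

    public static void main(String[] args) {
        // As in the step-by-step trace: the previously-seen first row
        // contains a single DeletedColumn, so no cell is live.
        boolean[] firstRowCells = { false };

        int discarded = discardHead(firstRowCells, 1);
        System.out.println("discarded = " + discarded);

        // The pager then asserts discarded == 1, which fails in this case.
        System.out.println("postcondition holds: " + (discarded == 1));
    }
}
```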


