[jira] [Commented] (CASSANDRA-9532) Provide access to select statement's real column definitions

2015-06-17 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590358#comment-14590358
 ] 

Sam Tunnicliffe commented on CASSANDRA-9532:


Initially I was also keen on encapsulating the column definitions, however 
there are a couple of things which complicate that.

* Firstly, aliases belong to {{RawSelector}}, so we would need to extract those 
and pass them along with the list of prepared {{Selectable}} to 
{{createFactoriesAndCollectColumnDefinitions}} if we are to change that to 
collate the {{SelectionColumnMapping}}. This itself is slightly complicated by 
the recursive call in {{WithFunction.newSelectorFactory}}, as there the 
{{args}} are already {{Selectable}} instances. In fact, function arguments 
cannot be aliased, so it's straightforward to fake up a list of null aliases 
here, but it is a bit ugly and slightly reduces the net benefit of the 
refactoring.
* The second issue, also related to the recursive call, is that we would 
collect a selector -> column definition mapping on each recursive call, whereas 
with the current patch we only collate for the top-level factories. To 
illustrate, a query like {{SELECT ks.function1(col1) FROM ks.table1}} should 
generate a single mapping, {{\[ks.function1 -> \[col1\]\]}}. If we combine the 
{{selectorFactory}} creation with the collation though, the recursive call in 
{{WithFunction.newSelectorFactory}} which processes the args list would 
generate another mapping from the {{ColumnIdentifier}} argument, so we'd end up 
with {{\[ks.function1 -> \[col1\], col1 -> \[col1\]\]}}. Again, this could be 
fixed with a flag indicating whether to add to the mapping, but I'm not sure 
whether those two workarounds would destroy the value of the refactoring in the 
first place.
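To make the intended top-level-only collation concrete, here is a minimal sketch (hypothetical class and method names, not Cassandra's actual {{SelectionColumnMapping}} API) of the mapping the patch aims to produce for the example query:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical stand-in for SelectionColumnMapping: maps each *top-level*
// selector (as written in the SELECT clause) to its underlying columns.
public class SelectionMappingSketch {
    private final Map<String, List<String>> mappings = new LinkedHashMap<>();

    // Called once per top-level selector factory only; collating inside the
    // recursive newSelectorFactory call would also record "col1" -> [col1].
    public void addMapping(String selector, List<String> columns) {
        mappings.put(selector, columns);
    }

    public Map<String, List<String>> mappings() {
        return mappings;
    }

    public static void main(String[] args) {
        SelectionMappingSketch m = new SelectionMappingSketch();
        // SELECT ks.function1(col1) FROM ks.table1 -> one mapping only
        m.addMapping("ks.function1(col1)", List.of("col1"));
        System.out.println(m.mappings());
    }
}
```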

On the other points, I totally agree and have pushed commits to my dev branches 
incorporating them (and rebased the dev branches too).



 Provide access to select statement's real column definitions
 

 Key: CASSANDRA-9532
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9532
 Project: Cassandra
  Issue Type: Improvement
Reporter: mck
Assignee: Sam Tunnicliffe
 Fix For: 3.x, 2.1.x, 2.0.x, 2.2.x

 Attachments: 9532-2.0-v2.txt, 9532-2.1-v2.txt, 9532-2.2-v2.txt, 
 9532-trunk-v2.txt, cassandra-2.0-9532.txt, cassandra-2.1-9532.txt, 
 cassandra-2.2-9532.txt, trunk-9532.txt


 Currently there is no way to get access to the real ColumnDefinitions being 
 used in a SelectStatement.
 This information is there in
 {{selectStatement.selection.columns}} but is private.
 Giving public access would make it possible for third-party implementations 
 of a {{QueryHandler}} to work accurately with the real columns being queried, 
 and not have to work around column aliases (or cases where the rawSelectors 
 don't map directly to ColumnDefinitions, e.g. functions in 
 Selection.fromSelectors(..)), which is what one has to do today by going 
 through ResultSet.metadata.names.
 This issue provides a very minimal patch to provide access to the already 
 final and immutable fields.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9496) ArrivalWindow should use primitives

2015-06-17 Thread Jason Brown (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-9496:
---
Reviewer: Jason Brown

 ArrivalWindow should use primitives 
 

 Key: CASSANDRA-9496
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9496
 Project: Cassandra
  Issue Type: Improvement
Reporter: sankalp kohli
Assignee: sankalp kohli
Priority: Minor
 Attachments: 9496.txt


 While doing a heap analysis of a large cluster (1000+ nodes), I found that the 
 majority of strongly referenced objects on the heap come from ArrivalWindow. 
 Currently ArrivalWindow uses BoundedStatsDeque, which uses a 
 LinkedBlockingDeque<Long>. 
 For a cluster of size 1000, this translates into 2 million objects. 
 We can use primitives instead, via an array of longs (long[]). This will cut 
 down on the number of objects, and the change is not that big. 
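A rough sketch of what such a primitive-backed window could look like: a fixed-size ring buffer of longs with a running sum, replacing the deque of boxed Longs. The class and member names here are assumed for illustration (the name echoes the ArrayBackedBoundedStatsTest referenced later in this thread), not the actual patch:

```java
// Sketch: fixed-size window of primitive longs with O(1) add and mean,
// avoiding one boxed Long object per recorded interval.
public class ArrayBackedBoundedStats {
    private final long[] arrivalIntervals;
    private long sum = 0;
    private int index = 0;
    private boolean filled = false;

    public ArrayBackedBoundedStats(int size) {
        arrivalIntervals = new long[size];
    }

    public void add(long interval) {
        if (filled)
            sum -= arrivalIntervals[index]; // evict the oldest value
        arrivalIntervals[index] = interval;
        sum += interval;
        if (++index == arrivalIntervals.length) {
            index = 0;
            filled = true;
        }
    }

    public double mean() {
        int count = filled ? arrivalIntervals.length : index;
        return count == 0 ? 0.0 : (double) sum / count;
    }
}
```

With a window of, say, 1000 samples per endpoint, the per-node cost becomes one long[] rather than a linked deque node plus a boxed Long per sample.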
   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9610) Increased response time with cassandra 2.0.9 from 1.2.19

2015-06-17 Thread Maitrayee (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590480#comment-14590480
 ] 

Maitrayee edited comment on CASSANDRA-9610 at 6/17/15 8:14 PM:
---

in 1.2.19 yaml I have num_tokens: 256
How did you determine that vnodes are not enabled in 1.2.19?


was (Author: maitrayee_c):
in 1.2.19 yaml I have num_tokens: 256
How did you determine that nvodes are not enabled in 1.2.19?

 Increased response time with cassandra 2.0.9 from 1.2.19
 

 Key: CASSANDRA-9610
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9610
 Project: Cassandra
  Issue Type: Bug
Reporter: Maitrayee
 Attachments: servicedefinition_schema.txt, traceout_1.2.19, 
 traceout_2.0.9


 I was using Cassandra 1.2.19 and recently upgraded to 2.0.9. Queries with a 
 secondary index were completing much faster in 1.2.19. 
 Validated this with tracing on via cqlsh.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9610) Increased response time with cassandra 2.0.9 from 1.2.19

2015-06-17 Thread Maitrayee (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590480#comment-14590480
 ] 

Maitrayee edited comment on CASSANDRA-9610 at 6/17/15 8:15 PM:
---

in 1.2.19 yaml I have num_tokens: 256
How did you determine that vnodes are not enabled in 1.2.19?


was (Author: maitrayee_c):
in 1.2.19 yaml I have num_tokens: 256
How did you determine that vodes are not enabled in 1.2.19?

 Increased response time with cassandra 2.0.9 from 1.2.19
 

 Key: CASSANDRA-9610
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9610
 Project: Cassandra
  Issue Type: Bug
Reporter: Maitrayee
 Attachments: servicedefinition_schema.txt, traceout_1.2.19, 
 traceout_2.0.9


 I was using Cassandra 1.2.19 and recently upgraded to 2.0.9. Queries with a 
 secondary index were completing much faster in 1.2.19. 
 Validated this with tracing on via cqlsh.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8303) Create a capability limitation framework

2015-06-17 Thread Wei Deng (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590255#comment-14590255
 ] 

Wei Deng edited comment on CASSANDRA-8303 at 6/17/15 6:29 PM:
--

I'd like to add one more use case here: we need to allow the DBA to restrict 
the application code from enabling query tracing. Tracing is used for debugging 
query performance problems and can be very useful during development for 
Cassandra application developers and DBAs. However, if a developer is not 
careful to turn it off when rolling out application code to production, they 
might unknowingly incur performance overhead from query tracing. Sometimes this 
can be overwhelming, and you will start to see a lot of dropped _TRACE messages 
in nodetool tpstats. The DBA needs to be able to turn off the capability of 
enabling query tracing on all tables, so that they don't have to worry about 
something they don't control (if query tracing is enabled from the C* driver) 
but that can negatively impact C* performance.


was (Author: weideng):
I'd like to add one more use case here: we need to allow the DBA to restrict 
the application code from enabling query tracing. We know tracing is used for 
debugging query performance problems and can be very useful in the development 
stage for the Cassandra application developers and DBAs. However, if the 
developer is not careful to turn it off when they roll out the application code 
in production, then they might unknowingly incur performance overhead from 
query tracing. Sometimes this could be overwhelming and you will start to see a 
lot of dropped _TRACE messages in nodetool tpstats. The DBA needs to have 
this control to turn off the capability of enabling query tracing on all tables 
so that they don't have to worry about something they don't have control but 
can negatively impact the C* performance.

 Create a capability limitation framework
 

 Key: CASSANDRA-8303
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8303
 Project: Cassandra
  Issue Type: Improvement
Reporter: Anupam Arora
Assignee: Sam Tunnicliffe
 Fix For: 3.x


 In addition to our current Auth framework, which acts as a whitelist and 
 regulates access to data, functions, and roles, it would be beneficial to 
 have a different, capability limitation framework that would be orthogonal 
 to Auth and act as a blacklist.
 Example uses:
 - take away the ability to TRUNCATE from all users but the admin (TRUNCATE 
 itself would still require MODIFY permission)
 - take away the ability to use ALLOW FILTERING from all users but 
 Spark/Hadoop (SELECT would still require SELECT permission)
 - take away the ability to use UNLOGGED BATCH from everyone (the operation 
 itself would still require MODIFY permission)
 - take away the ability to use certain consistency levels (make certain 
 tables LWT-only for all users, for example)
 Original description:
 Please provide a strict mode option in cassandra that will kick out any CQL 
 queries that are expensive, e.g. any query with ALLOW FILTERING, 
 multi-partition queries, secondary index queries, etc.
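The example uses above share one shape: a per-role deny-list consulted after (and independently of) the permission check. A minimal sketch of that idea, with entirely hypothetical names (this is not Cassandra's API):

```java
import java.util.EnumSet;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the proposed blacklist: orthogonal to Auth, so a
// role may hold MODIFY permission yet still be denied TRUNCATE.
public class CapabilityRestrictions {
    public enum Capability { TRUNCATE, ALLOW_FILTERING, UNLOGGED_BATCH }

    private final Map<String, EnumSet<Capability>> restricted = new HashMap<>();

    public void restrict(String role, Capability c) {
        restricted.computeIfAbsent(role, r -> EnumSet.noneOf(Capability.class))
                  .add(c);
    }

    // Allowed unless explicitly blacklisted; the normal permission check
    // (the whitelist) would still run separately.
    public boolean isAllowed(String role, Capability c) {
        EnumSet<Capability> set = restricted.get(role);
        return set == null || !set.contains(c);
    }
}
```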



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9601) Allow an initial connection timeout to be set in cqlsh

2015-06-17 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-9601:
--
Fix Version/s: (was: 2.2.0 rc2)
   2.2.x

 Allow an initial connection timeout to be set in cqlsh
 --

 Key: CASSANDRA-9601
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9601
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Mike Adamson
Assignee: Stefania
  Labels: cqlsh
 Fix For: 2.2.x


 [PYTHON-206|https://datastax-oss.atlassian.net/browse/PYTHON-206] introduced 
 the ability to change the initial connection timeout on connections from the 
 default of 5s.
 This change was introduced because some auth providers (e.g. Kerberos) can 
 take longer than 5s to complete a first-time negotiation for a connection. 
 cqlsh should allow this setting to be changed. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9610) Increased response time with cassandra 2.0.9 from 1.2.19

2015-06-17 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590262#comment-14590262
 ] 

Tyler Hobbs commented on CASSANDRA-9610:


Can you post the schema, the query, the tracing information, and what your 
response times are for 1.2.19 and 2.0.9?

 Increased response time with cassandra 2.0.9 from 1.2.19
 

 Key: CASSANDRA-9610
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9610
 Project: Cassandra
  Issue Type: Bug
Reporter: Maitrayee

 I was using Cassandra 1.2.19 and recently upgraded to 2.0.9. Queries with a 
 secondary index were completing much faster in 1.2.19. 
 Validated this with tracing on via cqlsh.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9614) I am trying to execute queries using execute_concurrent() in python but I am getting the best throughput on a concurrency of 4 whereas according to the documentation

2015-06-17 Thread Kumar Saras (JIRA)
Kumar Saras created CASSANDRA-9614:
--

 Summary: I am trying to execute queries using execute_concurrent() 
in python but I am getting the best throughput on a concurrency of 4 whereas 
according to the documentation there can be 100 concurrent requests. Why this 
kind of behavior?
 Key: CASSANDRA-9614
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9614
 Project: Cassandra
  Issue Type: Test
  Components: Documentation & website
 Environment: Linux
Reporter: Kumar Saras
 Fix For: 1.2.x






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9610) Increased response time with cassandra 2.0.9 from 1.2.19

2015-06-17 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590403#comment-14590403
 ] 

Tyler Hobbs commented on CASSANDRA-9610:


It looks like your 1.2 cluster does not have vnodes enabled (num_tokens = 1) 
and the 2.0 cluster does.  In 2.1 we've improved the performance when vnodes 
are enabled with CASSANDRA-1337, but 2.0 doesn't have this improvement.  I 
suggest these options:
* Structure your data model so that you don't rely on secondary indexes for 
queries where performance matters,
* Upgrade to 2.1, or
* Don't enable vnodes in 2.0

 Increased response time with cassandra 2.0.9 from 1.2.19
 

 Key: CASSANDRA-9610
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9610
 Project: Cassandra
  Issue Type: Bug
Reporter: Maitrayee
 Attachments: servicedefinition_schema.txt, traceout_1.2.19, 
 traceout_2.0.9


 I was using Cassandra 1.2.19 and recently upgraded to 2.0.9. Queries with a 
 secondary index were completing much faster in 1.2.19. 
 Validated this with tracing on via cqlsh.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9496) ArrivalWindow should use primitives

2015-06-17 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590351#comment-14590351
 ] 

sankalp kohli commented on CASSANDRA-9496:
--

The patch is against 2.0 but I think it should work for later branches. 

 ArrivalWindow should use primitives 
 

 Key: CASSANDRA-9496
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9496
 Project: Cassandra
  Issue Type: Improvement
Reporter: sankalp kohli
Assignee: sankalp kohli
Priority: Minor
 Attachments: 9496.txt


 While doing a heap analysis of a large cluster (1000+ nodes), I found that the 
 majority of strongly referenced objects on the heap come from ArrivalWindow. 
 Currently ArrivalWindow uses BoundedStatsDeque, which uses a 
 LinkedBlockingDeque<Long>. 
 For a cluster of size 1000, this translates into 2 million objects. 
 We can use primitives instead, via an array of longs (long[]). This will cut 
 down on the number of objects, and the change is not that big. 
   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9610) Increased response time with cassandra 2.0.9 from 1.2.19

2015-06-17 Thread Maitrayee (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maitrayee updated CASSANDRA-9610:
-
Attachment: traceout_2.0.9
traceout_1.2.19
servicedefinition_schema.txt

 Increased response time with cassandra 2.0.9 from 1.2.19
 

 Key: CASSANDRA-9610
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9610
 Project: Cassandra
  Issue Type: Bug
Reporter: Maitrayee
 Attachments: servicedefinition_schema.txt, traceout_1.2.19, 
 traceout_2.0.9


 I was using Cassandra 1.2.19 and recently upgraded to 2.0.9. Queries with a 
 secondary index were completing much faster in 1.2.19. 
 Validated this with tracing on via cqlsh.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8303) Create a capability limitation framework

2015-06-17 Thread Wei Deng (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590255#comment-14590255
 ] 

Wei Deng commented on CASSANDRA-8303:
-

I'd like to add one more use case here: we need to allow the DBA to restrict 
the application code from enabling query tracing. Tracing is used for debugging 
query performance problems and can be very useful during development for 
Cassandra application developers and DBAs. However, if a developer is not 
careful to turn it off when rolling out application code to production, they 
might unknowingly incur performance overhead from query tracing. Sometimes this 
can be overwhelming, and you will start to see a lot of dropped _TRACE messages 
in nodetool tpstats. The DBA needs to be able to turn off the capability of 
enabling query tracing on all tables, so that they don't have to worry about 
something they don't control but that can negatively impact C* performance.

 Create a capability limitation framework
 

 Key: CASSANDRA-8303
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8303
 Project: Cassandra
  Issue Type: Improvement
Reporter: Anupam Arora
Assignee: Sam Tunnicliffe
 Fix For: 3.x


 In addition to our current Auth framework, which acts as a whitelist and 
 regulates access to data, functions, and roles, it would be beneficial to 
 have a different, capability limitation framework that would be orthogonal 
 to Auth and act as a blacklist.
 Example uses:
 - take away the ability to TRUNCATE from all users but the admin (TRUNCATE 
 itself would still require MODIFY permission)
 - take away the ability to use ALLOW FILTERING from all users but 
 Spark/Hadoop (SELECT would still require SELECT permission)
 - take away the ability to use UNLOGGED BATCH from everyone (the operation 
 itself would still require MODIFY permission)
 - take away the ability to use certain consistency levels (make certain 
 tables LWT-only for all users, for example)
 Original description:
 Please provide a strict mode option in cassandra that will kick out any CQL 
 queries that are expensive, e.g. any query with ALLOW FILTERING, 
 multi-partition queries, secondary index queries, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9610) Increased response time with cassandra 2.0.9 from 1.2.19

2015-06-17 Thread Maitrayee (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590383#comment-14590383
 ] 

Maitrayee commented on CASSANDRA-9610:
--

Attached trace output and schema definition. The 1.2.19 and 2.0.9 trace outputs 
are from 2 different environments (there are more nodes in the 1.2.19 cluster).

 Increased response time with cassandra 2.0.9 from 1.2.19
 

 Key: CASSANDRA-9610
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9610
 Project: Cassandra
  Issue Type: Bug
Reporter: Maitrayee
 Attachments: servicedefinition_schema.txt, traceout_1.2.19, 
 traceout_2.0.9


 I was using Cassandra 1.2.19 and recently upgraded to 2.0.9. Queries with a 
 secondary index were completing much faster in 1.2.19. 
 Validated this with tracing on via cqlsh.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/3] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-06-17 Thread jasobrown
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/utils/BoundedStatsDeque.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4c159701
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4c159701
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4c159701

Branch: refs/heads/cassandra-2.2
Commit: 4c15970119e021dd0fe4b2fe8b4f9c594d21f334
Parents: 7c5fc40 ad8047a
Author: Jason Brown jasedbr...@gmail.com
Authored: Wed Jun 17 14:36:59 2015 -0700
Committer: Jason Brown jasedbr...@gmail.com
Committed: Wed Jun 17 14:36:59 2015 -0700

--
 CHANGES.txt |  1 +
 .../apache/cassandra/gms/FailureDetector.java   | 59 +++--
 .../cassandra/utils/BoundedStatsDeque.java  | 68 
 .../gms/ArrayBackedBoundedStatsTest.java| 57 
 .../cassandra/utils/BoundedStatsDequeTest.java  | 66 ---
 5 files changed, 111 insertions(+), 140 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4c159701/CHANGES.txt
--
diff --cc CHANGES.txt
index 009d974,753fb1c..8f3f9f0
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,51 -1,8 +1,52 @@@
 -2.0.16:
 +2.1.7
 + * Fix memory leak in Ref due to ConcurrentLinkedQueue.remove() behaviour 
(CASSANDRA-9549)
 +Merged from 2.0
+  * ArrivalWindow should use primitives (CASSANDRA-9496)
   * Periodically submit background compaction tasks (CASSANDRA-9592)
   * Set HAS_MORE_PAGES flag to false when PagingState is null (CASSANDRA-9571)
 - * Backport indexed value validation fix from CASSANDRA-9057 (CASSANDRA-9564)
 +
 +
 +2.1.6
 + * (cqlsh) Fix using COPY through SOURCE or -f (CASSANDRA-9083)
 + * Fix occasional lack of `system` keyspace in schema tables (CASSANDRA-8487)
 + * Use ProtocolError code instead of ServerError code for native protocol
 +   error responses to unsupported protocol versions (CASSANDRA-9451)
 + * Default commitlog_sync_batch_window_in_ms changed to 2ms (CASSANDRA-9504)
 + * Fix empty partition assertion in unsorted sstable writing tools 
(CASSANDRA-9071)
 + * Ensure truncate without snapshot cannot produce corrupt responses 
(CASSANDRA-9388) 
 + * Consistent error message when a table mixes counter and non-counter
 +   columns (CASSANDRA-9492)
 + * Avoid getting unreadable keys during anticompaction (CASSANDRA-9508)
 + * (cqlsh) Better float precision by default (CASSANDRA-9224)
 + * Improve estimated row count (CASSANDRA-9107)
 + * Optimize range tombstone memory footprint (CASSANDRA-8603)
 + * Use configured gcgs in anticompaction (CASSANDRA-9397)
 + * Warn on misuse of unlogged batches (CASSANDRA-9282)
 + * Failure detector detects and ignores local pauses (CASSANDRA-9183)
 + * Add utility class to support for rate limiting a given log statement 
(CASSANDRA-9029)
 + * Add missing consistency levels to cassandra-stress (CASSANDRA-9361)
 + * Fix commitlog getCompletedTasks to not increment (CASSANDRA-9339)
 + * Fix for harmless exceptions logged as ERROR (CASSANDRA-8564)
 + * Delete processed sstables in sstablesplit/sstableupgrade (CASSANDRA-8606)
 + * Improve sstable exclusion from partition tombstones (CASSANDRA-9298)
 + * Validate the indexed column rather than the cell's contents for 2i 
(CASSANDRA-9057)
 + * Add support for top-k custom 2i queries (CASSANDRA-8717)
 + * Fix error when dropping table during compaction (CASSANDRA-9251)
 + * cassandra-stress supports validation operations over user profiles 
(CASSANDRA-8773)
 + * Add support for rate limiting log messages (CASSANDRA-9029)
 + * Log the partition key with tombstone warnings (CASSANDRA-8561)
 + * Reduce runWithCompactionsDisabled poll interval to 1ms (CASSANDRA-9271)
 + * Fix PITR commitlog replay (CASSANDRA-9195)
 + * GCInspector logs very different times (CASSANDRA-9124)
 + * Fix deleting from an empty list (CASSANDRA-9198)
 + * Update tuple and collection types that use a user-defined type when that 
UDT
 +   is modified (CASSANDRA-9148, CASSANDRA-9192)
 + * Use higher timeout for prepair and snapshot in repair (CASSANDRA-9261)
 + * Fix anticompaction blocking ANTI_ENTROPY stage (CASSANDRA-9151)
 + * Repair waits for anticompaction to finish (CASSANDRA-9097)
 + * Fix streaming not holding ref when stream error (CASSANDRA-9295)
 + * Fix canonical view returning early opened SSTables (CASSANDRA-9396)
 +Merged from 2.0:
   * Don't accumulate more range than necessary in RangeTombstone.Tracker 
(CASSANDRA-9486)
   * Add broadcast and rpc addresses to system.local (CASSANDRA-9436)
   * Always mark sstable suspect when corrupted (CASSANDRA-9478)


[jira] [Updated] (CASSANDRA-9496) ArrivalWindow should use primitives

2015-06-17 Thread Jason Brown (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-9496:
---
Fix Version/s: 2.2.0 rc2
   2.1.7
   2.0.16
   3.x

 ArrivalWindow should use primitives 
 

 Key: CASSANDRA-9496
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9496
 Project: Cassandra
  Issue Type: Improvement
Reporter: sankalp kohli
Assignee: sankalp kohli
Priority: Minor
 Fix For: 3.x, 2.0.16, 2.1.7, 2.2.0 rc2

 Attachments: 9496.txt


 While doing a heap analysis of a large cluster (1000+ nodes), I found that the 
 majority of strongly referenced objects on the heap come from ArrivalWindow. 
 Currently ArrivalWindow uses BoundedStatsDeque, which uses a 
 LinkedBlockingDeque<Long>. 
 For a cluster of size 1000, this translates into 2 million objects. 
 We can use primitives instead, via an array of longs (long[]). This will cut 
 down on the number of objects, and the change is not that big. 
   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9496) ArrivalWindow should use primitives

2015-06-17 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14590665#comment-14590665
 ] 

Jason Brown commented on CASSANDRA-9496:


+1, looks good. I also took the liberty of removing BoundedStatsDeque and its 
test, as they are no longer used after this change.

Committed to 2.0, 2.1, 2.2, and trunk. Thanks!

 ArrivalWindow should use primitives 
 

 Key: CASSANDRA-9496
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9496
 Project: Cassandra
  Issue Type: Improvement
Reporter: sankalp kohli
Assignee: sankalp kohli
Priority: Minor
 Attachments: 9496.txt


 While doing a heap analysis of a large cluster (1000+ nodes), I found that the 
 majority of strongly referenced objects on the heap come from ArrivalWindow. 
 Currently ArrivalWindow uses BoundedStatsDeque, which uses a 
 LinkedBlockingDeque<Long>. 
 For a cluster of size 1000, this translates into 2 million objects. 
 We can use primitives instead, via an array of longs (long[]). This will cut 
 down on the number of objects, and the change is not that big. 
   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-9609) org.apache.cassandra.db.marshal.DateType compares dates with negative timestamp incorrectly

2015-06-17 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs resolved CASSANDRA-9609.

Resolution: Duplicate

We fixed this in CASSANDRA-5723 by introducing the TimestampType and switching 
the CQL {{timestamp}} type to that by default.
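The symptom reported in this ticket is easy to reproduce outside Cassandra. Assuming the old {{DateType}} effectively compared the raw serialized bytes as unsigned values (a sketch of the mechanism, not the actual implementation), the sign bit of a pre-epoch timestamp makes its first byte compare high:

```java
import java.nio.ByteBuffer;

// Standalone illustration (not Cassandra code): comparing the big-endian
// two's-complement bytes of a long as *unsigned* values reverses the order
// whenever the signs differ, because a negative long starts with byte 0xFF.
public class SignedVsUnsigned {

    // Lexicographic unsigned byte comparison, as a byte-ordered comparator does
    public static int compareUnsigned(byte[] a, byte[] b) {
        for (int i = 0; i < a.length && i < b.length; i++) {
            int cmp = Integer.compare(a[i] & 0xff, b[i] & 0xff);
            if (cmp != 0)
                return cmp;
        }
        return Integer.compare(a.length, b.length);
    }

    public static byte[] longBytes(long v) {
        return ByteBuffer.allocate(8).putLong(v).array();
    }

    public static void main(String[] args) {
        long before = -100_000L; // a timestamp before the epoch
        long after  =  200_000L; // a timestamp after the epoch
        // Signed comparison orders them correctly (negative first):
        System.out.println(Long.compare(before, after));                          // -1
        // Unsigned byte-wise comparison puts the negative value last:
        System.out.println(compareUnsigned(longBytes(before), longBytes(after))); // 1
    }
}
```

This matches the cassandraCompare/javaCoreCompare sign flips in the test output below; TimestampType fixed it by comparing the decoded signed long.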

 org.apache.cassandra.db.marshal.DateType compares dates with negative 
 timestamp incorrectly
 ---

 Key: CASSANDRA-9609
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9609
 Project: Cassandra
  Issue Type: Bug
Reporter: Alexander Troshanin

 We hit issues in our application. 
 We have a column family with column names that are dates. 
 When we try to search these columns with criteria like 
 {code}
 // pseudocode
 colName < '01.01.2015'
 {code}
 then we don't receive columns whose names are less than 01.01.1970, i.e. with 
 a negative timestamp. 
 After a little research we found that the DateType class probably has a bug 
 in its compareTo method. 
 Here is a very small example that shows the incorrect behaviour of DateType: 
 {code}
 import org.apache.cassandra.db.marshal.DateType;
 import org.apache.cassandra.db.marshal.LongType;
 import org.apache.cassandra.serializers.LongSerializer;
 import org.apache.cassandra.serializers.TimestampSerializer;

 import java.util.Date;
 import java.util.Locale;
 import java.util.TimeZone;

 public class TestCassandraCompare {

     static TimestampSerializer dateSerializer = TimestampSerializer.instance;
     static LongSerializer longSerializer = LongSerializer.instance;
     static DateType dateType = DateType.instance;
     static LongType longType = LongType.instance;

     public static void main(String[] args) {
         Locale.setDefault(Locale.ENGLISH);
         TimeZone.setDefault(TimeZone.getTimeZone("GMT0"));

         // check longs
         doLongsCheck(10, 20);
         doLongsCheck(20, 10);
         doLongsCheck(-10, -20);
         doLongsCheck(-20, -10);
         doLongsCheck(-10, 20);
         doLongsCheck(20, -10);
         doLongsCheck(10, -20);
         doLongsCheck(-20, 10);

         // check dates
         doDatesCheck(new Date(10),  new Date(20),   1);
         doDatesCheck(new Date(20),  new Date(10),   2);
         doDatesCheck(new Date(-10), new Date(-20),  3);
         doDatesCheck(new Date(-20), new Date(-10),  4);
         doDatesCheck(new Date(-10), new Date(20),   5);
         doDatesCheck(new Date(20),  new Date(-10),  6);
         doDatesCheck(new Date(10),  new Date(-20),  7);
         doDatesCheck(new Date(-20), new Date(10),   8);
     }

     private static void doLongsCheck(long l1, long l2) {
         int cassandraCompare = longType.compare(longSerializer.serialize(l1),
                                                 longSerializer.serialize(l2));
         int javaCoreCompare = Long.compare(l1, l2);
         if (cassandraCompare != javaCoreCompare) {
             System.err.println("got incorrect result from LongType compare method." +
                     "\n\tlong1: " + l1 +
                     "\n\tlong2: " + l2 +
                     "\n\tcassandraCompare: " + cassandraCompare +
                     "\n\tjavaCoreCompare: " + javaCoreCompare);
         }
     }

     private static void doDatesCheck(Date d1, Date d2, int testNum) {
         int cassandraCompare = dateType.compare(dateSerializer.serialize(d1),
                                                 dateSerializer.serialize(d2));
         int javaCoreCompare = d1.compareTo(d2);
         if (cassandraCompare != javaCoreCompare) {
             System.err.println("[" + testNum + "]got incorrect result from DateType compare method." +
                     "\n\tdate1: " + d1 +
                     "\n\tdate2: " + d2 +
                     "\n\tcassandraCompare: " + cassandraCompare +
                     "\n\tjavaCoreCompare: " + javaCoreCompare);
         }
     }
 }
 {code}
 If you run the code you will see the following output:
 {code}
 [5]got incorrect result from DateType compare method.
   date1: Wed Dec 31 23:58:20 GMT 1969
   date2: Thu Jan 01 00:03:20 GMT 1970
   cassandraCompare: 1
   javaCoreCompare: -1
 [6]got incorrect result from DateType compare method.
   date1: Thu Jan 01 00:03:20 GMT 1970
   date2: Wed Dec 31 23:58:20 GMT 1969
   cassandraCompare: -1
   javaCoreCompare: 1
 [7]got incorrect result from DateType compare method.
   date1: Thu Jan 01 00:01:40 GMT 1970
   date2: Wed Dec 31 23:56:40 GMT 1969
   cassandraCompare: -1
   javaCoreCompare: 1
 [8]got incorrect result from DateType compare method.
   date1: Wed Dec 31 23:56:40 GMT 1969
   date2: Thu Jan 01 00:01:40 GMT 1970
   cassandraCompare: 1
   javaCoreCompare: -1
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9612) Assertion error while running `nodetool cfstats`

2015-06-17 Thread mlowicki (JIRA)
mlowicki created CASSANDRA-9612:
---

 Summary: Assertion error while running `nodetool cfstats`
 Key: CASSANDRA-9612
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9612
 Project: Cassandra
  Issue Type: Bug
 Environment: C* 2.1.6
Reporter: mlowicki


 nodetool cfstats sync.entity
{code}
Keyspace: sync
Read Count: 2916573
Read Latency: 0.26340278573517617 ms.
Write Count: 2356495
Write Latency: 0.03296340242606074 ms.
Pending Flushes: 0
Table: entity
SSTable count: 919
SSTables in each level: [50/4, 11/10, 101/100, 756, 0, 0, 0, 0, 
0]
Space used (live): 146265014558
Space used (total): 146265014558
Space used by snapshots (total): 0
Off heap memory used (total): 97950899
SSTable Compression Ratio: 0.1870809135227128
error: 
/var/lib/cassandra/data2/sync/entity-f73d1360770e11e49f1d673dc3e50a5f/sync-entity-tmplink-ka-516810-Data.db
-- StackTrace --
java.lang.AssertionError: 
/var/lib/cassandra/data2/sync/entity-f73d1360770e11e49f1d673dc3e50a5f/sync-entity-tmplink-ka-516810-Data.db
at 
org.apache.cassandra.io.sstable.SSTableReader.getApproximateKeyCount(SSTableReader.java:270)
at 
org.apache.cassandra.metrics.ColumnFamilyMetrics$9.value(ColumnFamilyMetrics.java:296)
at 
org.apache.cassandra.metrics.ColumnFamilyMetrics$9.value(ColumnFamilyMetrics.java:290)
at 
com.yammer.metrics.reporting.JmxReporter$Gauge.getValue(JmxReporter.java:63)
at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75)
at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
at 
com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
at 
com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
at 
com.sun.jmx.mbeanserver.PerInterface.getAttribute(PerInterface.java:83)
at 
com.sun.jmx.mbeanserver.MBeanSupport.getAttribute(MBeanSupport.java:206)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
at 
javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1464)
at 
javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97)
at 
javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328)
at 
javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420)
at 
javax.management.remote.rmi.RMIConnectionImpl.getAttribute(RMIConnectionImpl.java:657)
at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
at sun.rmi.transport.Transport$2.run(Transport.java:202)
at sun.rmi.transport.Transport$2.run(Transport.java:199)
at java.security.AccessController.doPrivileged(Native Method)
at sun.rmi.transport.Transport.serviceCall(Transport.java:198)
at 
sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:567)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:828)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.access$400(TCPTransport.java:619)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler$1.run(TCPTransport.java:684)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler$1.run(TCPTransport.java:681)
at java.security.AccessController.doPrivileged(Native Method)
at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:681)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9613) Omit (de)serialization of state variable in UDAs

2015-06-17 Thread Robert Stupp (JIRA)
Robert Stupp created CASSANDRA-9613:
---

 Summary: Omit (de)serialization of state variable in UDAs
 Key: CASSANDRA-9613
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9613
 Project: Cassandra
  Issue Type: Improvement
Reporter: Robert Stupp
Priority: Minor


Currently the result of each UDA's state-function call is serialized and then 
deserialized again for the next state-function invocation and, optionally, the 
final-function invocation.
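To illustrate the overhead being described, here is a hypothetical sketch of a trivial "sum" aggregate whose state makes a serialize/deserialize round trip on every row; the serializer and state function are made up for illustration and are not Cassandra's UDA machinery:

```java
import java.nio.ByteBuffer;

public class UdaStateRoundTrip
{
    static ByteBuffer serialize(long state)
    {
        return (ByteBuffer) ByteBuffer.allocate(8).putLong(state).flip();
    }

    static long deserialize(ByteBuffer buf)
    {
        return buf.duplicate().getLong(); // duplicate() so the buffer is re-readable
    }

    // state function of a trivial "sum" aggregate: newState = state + value
    static long stateFn(long state, long value)
    {
        return state + value;
    }

    public static void main(String[] args)
    {
        long[] rows = {1, 2, 3, 4};
        ByteBuffer state = serialize(0L); // INITCOND
        for (long v : rows)
        {
            // One deserialize + one serialize per row -- the cost this
            // ticket proposes to omit while the state stays inside the JVM.
            long s = deserialize(state);
            state = serialize(stateFn(s, v));
        }
        System.out.println(deserialize(state)); // 10
    }
}
```

Keeping the state as a live object across invocations would eliminate both conversions per row, touching serialization only once at the end (or not at all if a final function consumes the state directly).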



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9496) ArrivalWindow should use primitives

2015-06-17 Thread sankalp kohli (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sankalp kohli updated CASSANDRA-9496:
-
Attachment: 9496.txt

 ArrivalWindow should use primitives 
 

 Key: CASSANDRA-9496
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9496
 Project: Cassandra
  Issue Type: Improvement
Reporter: sankalp kohli
Assignee: sankalp kohli
Priority: Minor
 Attachments: 9496.txt


 While doing a heap analysis of a large cluster (1000+ nodes), I found that the 
 majority of the strongly referenced objects on the heap come from ArrivalWindow. 
 Currently ArrivalWindow uses BoundedStatsDeque, which is backed by a 
 LinkedBlockingDeque<Long>. 
 For a cluster of size 1000, this translates into 2 million objects. 
 We can instead use primitives in an array of long (long[]). This will cut down 
 on the number of objects, and the change is not that big. 
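A standalone sketch of the bounded ring buffer over primitives that the ticket proposes (the attached patch's actual class is ArrayBackedBoundedStats; this simplified version only illustrates the idea and, like the patch, is not thread safe):

```java
// A fixed long[] replaces a deque of boxed Long objects, so a window of
// N samples costs one array instead of N wrapper objects plus N queue nodes.
public class PrimitiveBoundedStats
{
    private final long[] samples;
    private long sum;
    private int index;
    private boolean filled;

    public PrimitiveBoundedStats(int size)
    {
        samples = new long[size];
    }

    public void add(long value)
    {
        if (filled)
            sum -= samples[index];  // evict the oldest sample's contribution
        samples[index] = value;
        sum += value;
        if (++index == samples.length)
        {
            index = 0;              // wrap around; buffer is now full
            filled = true;
        }
    }

    public int size()
    {
        return filled ? samples.length : index;
    }

    public double mean()
    {
        return (double) sum / size();
    }
}
```

For example, with a window of 3, adding 1, 2, 3, 4 evicts the 1 and yields a mean of (2 + 3 + 4) / 3 = 3.0.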
   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9610) Increased response time with cassandra 2.0.9 from 1.2.19

2015-06-17 Thread Maitrayee (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14590480#comment-14590480
 ] 

Maitrayee commented on CASSANDRA-9610:
--

In the 1.2.19 yaml I have num_tokens: 256.
How did you determine that vnodes are not enabled in 1.2.19?

 Increased response time with cassandra 2.0.9 from 1.2.19
 

 Key: CASSANDRA-9610
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9610
 Project: Cassandra
  Issue Type: Bug
Reporter: Maitrayee
 Attachments: servicedefinition_schema.txt, traceout_1.2.19, 
 traceout_2.0.9


 I was using Cassandra 1.2.19 and recently upgraded to 2.0.9. Queries with a 
 secondary index were completing much faster in 1.2.19.
 Validated this with tracing enabled via cqlsh.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[3/4] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-06-17 Thread jasobrown
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8b021db7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8b021db7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8b021db7

Branch: refs/heads/trunk
Commit: 8b021db7c2b4adbb5db8e8bba3ca916ef41fc4f7
Parents: 6068efb 4c15970
Author: Jason Brown jasedbr...@gmail.com
Authored: Wed Jun 17 14:39:09 2015 -0700
Committer: Jason Brown jasedbr...@gmail.com
Committed: Wed Jun 17 14:39:09 2015 -0700

--
 CHANGES.txt |  1 +
 .../apache/cassandra/gms/FailureDetector.java   | 59 +++--
 .../cassandra/utils/BoundedStatsDeque.java  | 68 
 .../gms/ArrayBackedBoundedStatsTest.java| 57 
 .../cassandra/utils/BoundedStatsDequeTest.java  | 66 ---
 5 files changed, 111 insertions(+), 140 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8b021db7/CHANGES.txt
--
diff --cc CHANGES.txt
index 3e9940c,8f3f9f0..c32596c
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,40 -1,12 +1,41 @@@
 -2.1.7
 +2.2
 + * Fix connection leak in CqlRecordWriter (CASSANDRA-9576)
 + * Mlockall before opening system sstables & remove boot_without_jna option 
(CASSANDRA-9573)
 + * Add functions to convert timeuuid to date or time, deprecate dateOf and 
unixTimestampOf (CASSANDRA-9229)
 + * Make sure we cancel non-compacting sstables from LifecycleTransaction 
(CASSANDRA-9566)
 + * Fix deprecated repair JMX API (CASSANDRA-9570)
 +Merged from 2.1:
   * Fix memory leak in Ref due to ConcurrentLinkedQueue.remove() behaviour 
(CASSANDRA-9549)
  Merged from 2.0
+  * ArrivalWindow should use primitives (CASSANDRA-9496)
   * Periodically submit background compaction tasks (CASSANDRA-9592)
   * Set HAS_MORE_PAGES flag to false when PagingState is null (CASSANDRA-9571)
 + * Add logback metrics (CASSANDRA-9378)
  
  
 -2.1.6
 +2.2.0-rc1
 + * Compressed commit log should measure compressed space used (CASSANDRA-9095)
 + * Fix comparison bug in CassandraRoleManager#collectRoles (CASSANDRA-9551)
 + * Add tinyint,smallint,time,date support for UDFs (CASSANDRA-9400)
 + * Deprecates SSTableSimpleWriter and SSTableSimpleUnsortedWriter 
(CASSANDRA-9546)
 + * Empty INITCOND treated as null in aggregate (CASSANDRA-9457)
 + * Remove use of Cell in Thrift MapReduce classes (CASSANDRA-8609)
 + * Integrate pre-release Java Driver 2.2-rc1, custom build (CASSANDRA-9493)
 + * Clean up gossiper logic for old versions (CASSANDRA-9370)
 + * Fix custom payload coding/decoding to match the spec (CASSANDRA-9515)
 + * ant test-all results incomplete when parsed (CASSANDRA-9463)
 + * Disallow frozen types in function arguments and return types for
 +   clarity (CASSANDRA-9411)
 + * Static Analysis to warn on unsafe use of Autocloseable instances 
(CASSANDRA-9431)
 + * Update commitlog archiving examples now that commitlog segments are
 +   not recycled (CASSANDRA-9350)
 + * Extend Transactional API to sstable lifecycle management (CASSANDRA-8568)
 + * (cqlsh) Add support for native protocol 4 (CASSANDRA-9399)
 + * Ensure that UDF and UDAs are keyspace-isolated (CASSANDRA-9409)
 + * Revert CASSANDRA-7807 (tracing completion client notifications) 
(CASSANDRA-9429)
 + * Add ability to stop compaction by ID (CASSANDRA-7207)
 + * Let CassandraVersion handle SNAPSHOT version (CASSANDRA-9438)
 +Merged from 2.1:
   * (cqlsh) Fix using COPY through SOURCE or -f (CASSANDRA-9083)
   * Fix occasional lack of `system` keyspace in schema tables (CASSANDRA-8487)
   * Use ProtocolError code instead of ServerError code for native protocol

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8b021db7/src/java/org/apache/cassandra/gms/FailureDetector.java
--



[2/4] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-06-17 Thread jasobrown
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/utils/BoundedStatsDeque.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4c159701
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4c159701
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4c159701

Branch: refs/heads/trunk
Commit: 4c15970119e021dd0fe4b2fe8b4f9c594d21f334
Parents: 7c5fc40 ad8047a
Author: Jason Brown jasedbr...@gmail.com
Authored: Wed Jun 17 14:36:59 2015 -0700
Committer: Jason Brown jasedbr...@gmail.com
Committed: Wed Jun 17 14:36:59 2015 -0700

--
 CHANGES.txt |  1 +
 .../apache/cassandra/gms/FailureDetector.java   | 59 +++--
 .../cassandra/utils/BoundedStatsDeque.java  | 68 
 .../gms/ArrayBackedBoundedStatsTest.java| 57 
 .../cassandra/utils/BoundedStatsDequeTest.java  | 66 ---
 5 files changed, 111 insertions(+), 140 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4c159701/CHANGES.txt
--
diff --cc CHANGES.txt
index 009d974,753fb1c..8f3f9f0
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,51 -1,8 +1,52 @@@
 -2.0.16:
 +2.1.7
 + * Fix memory leak in Ref due to ConcurrentLinkedQueue.remove() behaviour 
(CASSANDRA-9549)
 +Merged from 2.0
+  * ArrivalWindow should use primitives (CASSANDRA-9496)
   * Periodically submit background compaction tasks (CASSANDRA-9592)
   * Set HAS_MORE_PAGES flag to false when PagingState is null (CASSANDRA-9571)
 - * Backport indexed value validation fix from CASSANDRA-9057 (CASSANDRA-9564)
 +
 +
 +2.1.6
 + * (cqlsh) Fix using COPY through SOURCE or -f (CASSANDRA-9083)
 + * Fix occasional lack of `system` keyspace in schema tables (CASSANDRA-8487)
 + * Use ProtocolError code instead of ServerError code for native protocol
 +   error responses to unsupported protocol versions (CASSANDRA-9451)
 + * Default commitlog_sync_batch_window_in_ms changed to 2ms (CASSANDRA-9504)
 + * Fix empty partition assertion in unsorted sstable writing tools 
(CASSANDRA-9071)
 + * Ensure truncate without snapshot cannot produce corrupt responses 
(CASSANDRA-9388) 
 + * Consistent error message when a table mixes counter and non-counter
 +   columns (CASSANDRA-9492)
 + * Avoid getting unreadable keys during anticompaction (CASSANDRA-9508)
 + * (cqlsh) Better float precision by default (CASSANDRA-9224)
 + * Improve estimated row count (CASSANDRA-9107)
 + * Optimize range tombstone memory footprint (CASSANDRA-8603)
 + * Use configured gcgs in anticompaction (CASSANDRA-9397)
 + * Warn on misuse of unlogged batches (CASSANDRA-9282)
 + * Failure detector detects and ignores local pauses (CASSANDRA-9183)
 + * Add utility class to support for rate limiting a given log statement 
(CASSANDRA-9029)
 + * Add missing consistency levels to cassandra-stess (CASSANDRA-9361)
 + * Fix commitlog getCompletedTasks to not increment (CASSANDRA-9339)
 + * Fix for harmless exceptions logged as ERROR (CASSANDRA-8564)
 + * Delete processed sstables in sstablesplit/sstableupgrade (CASSANDRA-8606)
 + * Improve sstable exclusion from partition tombstones (CASSANDRA-9298)
 + * Validate the indexed column rather than the cell's contents for 2i 
(CASSANDRA-9057)
 + * Add support for top-k custom 2i queries (CASSANDRA-8717)
 + * Fix error when dropping table during compaction (CASSANDRA-9251)
 + * cassandra-stress supports validation operations over user profiles 
(CASSANDRA-8773)
 + * Add support for rate limiting log messages (CASSANDRA-9029)
 + * Log the partition key with tombstone warnings (CASSANDRA-8561)
 + * Reduce runWithCompactionsDisabled poll interval to 1ms (CASSANDRA-9271)
 + * Fix PITR commitlog replay (CASSANDRA-9195)
 + * GCInspector logs very different times (CASSANDRA-9124)
 + * Fix deleting from an empty list (CASSANDRA-9198)
 + * Update tuple and collection types that use a user-defined type when that 
UDT
 +   is modified (CASSANDRA-9148, CASSANDRA-9192)
 + * Use higher timeout for prepair and snapshot in repair (CASSANDRA-9261)
 + * Fix anticompaction blocking ANTI_ENTROPY stage (CASSANDRA-9151)
 + * Repair waits for anticompaction to finish (CASSANDRA-9097)
 + * Fix streaming not holding ref when stream error (CASSANDRA-9295)
 + * Fix canonical view returning early opened SSTables (CASSANDRA-9396)
 +Merged from 2.0:
   * Don't accumulate more range than necessary in RangeTombstone.Tracker 
(CASSANDRA-9486)
   * Add broadcast and rpc addresses to system.local (CASSANDRA-9436)
   * Always mark sstable suspect when corrupted (CASSANDRA-9478)


[3/3] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-06-17 Thread jasobrown
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8b021db7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8b021db7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8b021db7

Branch: refs/heads/cassandra-2.2
Commit: 8b021db7c2b4adbb5db8e8bba3ca916ef41fc4f7
Parents: 6068efb 4c15970
Author: Jason Brown jasedbr...@gmail.com
Authored: Wed Jun 17 14:39:09 2015 -0700
Committer: Jason Brown jasedbr...@gmail.com
Committed: Wed Jun 17 14:39:09 2015 -0700

--
 CHANGES.txt |  1 +
 .../apache/cassandra/gms/FailureDetector.java   | 59 +++--
 .../cassandra/utils/BoundedStatsDeque.java  | 68 
 .../gms/ArrayBackedBoundedStatsTest.java| 57 
 .../cassandra/utils/BoundedStatsDequeTest.java  | 66 ---
 5 files changed, 111 insertions(+), 140 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8b021db7/CHANGES.txt
--
diff --cc CHANGES.txt
index 3e9940c,8f3f9f0..c32596c
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,40 -1,12 +1,41 @@@
 -2.1.7
 +2.2
 + * Fix connection leak in CqlRecordWriter (CASSANDRA-9576)
 + * Mlockall before opening system sstables & remove boot_without_jna option 
(CASSANDRA-9573)
 + * Add functions to convert timeuuid to date or time, deprecate dateOf and 
unixTimestampOf (CASSANDRA-9229)
 + * Make sure we cancel non-compacting sstables from LifecycleTransaction 
(CASSANDRA-9566)
 + * Fix deprecated repair JMX API (CASSANDRA-9570)
 +Merged from 2.1:
   * Fix memory leak in Ref due to ConcurrentLinkedQueue.remove() behaviour 
(CASSANDRA-9549)
  Merged from 2.0
+  * ArrivalWindow should use primitives (CASSANDRA-9496)
   * Periodically submit background compaction tasks (CASSANDRA-9592)
   * Set HAS_MORE_PAGES flag to false when PagingState is null (CASSANDRA-9571)
 + * Add logback metrics (CASSANDRA-9378)
  
  
 -2.1.6
 +2.2.0-rc1
 + * Compressed commit log should measure compressed space used (CASSANDRA-9095)
 + * Fix comparison bug in CassandraRoleManager#collectRoles (CASSANDRA-9551)
 + * Add tinyint,smallint,time,date support for UDFs (CASSANDRA-9400)
 + * Deprecates SSTableSimpleWriter and SSTableSimpleUnsortedWriter 
(CASSANDRA-9546)
 + * Empty INITCOND treated as null in aggregate (CASSANDRA-9457)
 + * Remove use of Cell in Thrift MapReduce classes (CASSANDRA-8609)
 + * Integrate pre-release Java Driver 2.2-rc1, custom build (CASSANDRA-9493)
 + * Clean up gossiper logic for old versions (CASSANDRA-9370)
 + * Fix custom payload coding/decoding to match the spec (CASSANDRA-9515)
 + * ant test-all results incomplete when parsed (CASSANDRA-9463)
 + * Disallow frozen types in function arguments and return types for
 +   clarity (CASSANDRA-9411)
 + * Static Analysis to warn on unsafe use of Autocloseable instances 
(CASSANDRA-9431)
 + * Update commitlog archiving examples now that commitlog segments are
 +   not recycled (CASSANDRA-9350)
 + * Extend Transactional API to sstable lifecycle management (CASSANDRA-8568)
 + * (cqlsh) Add support for native protocol 4 (CASSANDRA-9399)
 + * Ensure that UDF and UDAs are keyspace-isolated (CASSANDRA-9409)
 + * Revert CASSANDRA-7807 (tracing completion client notifications) 
(CASSANDRA-9429)
 + * Add ability to stop compaction by ID (CASSANDRA-7207)
 + * Let CassandraVersion handle SNAPSHOT version (CASSANDRA-9438)
 +Merged from 2.1:
   * (cqlsh) Fix using COPY through SOURCE or -f (CASSANDRA-9083)
   * Fix occasional lack of `system` keyspace in schema tables (CASSANDRA-8487)
   * Use ProtocolError code instead of ServerError code for native protocol

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8b021db7/src/java/org/apache/cassandra/gms/FailureDetector.java
--



[1/3] cassandra git commit: ArrivalWindow should use primitives

2015-06-17 Thread jasobrown
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 6068efbaf -> 8b021db7c


ArrivalWindow should use primitives

patch by sankalp kohli; reviewed by jasobrown for CASSANDRA-9496


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ad8047ab
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ad8047ab
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ad8047ab

Branch: refs/heads/cassandra-2.2
Commit: ad8047abdf5db6652b9586e039debb1e855db09a
Parents: ec52e77
Author: Jason Brown jasedbr...@gmail.com
Authored: Wed Jun 17 14:33:44 2015 -0700
Committer: Jason Brown jasedbr...@gmail.com
Committed: Wed Jun 17 14:33:44 2015 -0700

--
 CHANGES.txt |  1 +
 .../apache/cassandra/gms/FailureDetector.java   | 59 ++--
 .../cassandra/utils/BoundedStatsDeque.java  | 72 
 .../gms/ArrayBackedBoundedStatsTest.java| 57 
 .../cassandra/utils/BoundedStatsDequeTest.java  | 66 --
 5 files changed, 111 insertions(+), 144 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ad8047ab/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6d031f6..753fb1c 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.16:
+ * ArrivalWindow should use primitives (CASSANDRA-9496)
  * Periodically submit background compaction tasks (CASSANDRA-9592)
  * Set HAS_MORE_PAGES flag to false when PagingState is null (CASSANDRA-9571)
  * Backport indexed value validation fix from CASSANDRA-9057 (CASSANDRA-9564)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ad8047ab/src/java/org/apache/cassandra/gms/FailureDetector.java
--
diff --git a/src/java/org/apache/cassandra/gms/FailureDetector.java 
b/src/java/org/apache/cassandra/gms/FailureDetector.java
index e247e48..8fdd99f 100644
--- a/src/java/org/apache/cassandra/gms/FailureDetector.java
+++ b/src/java/org/apache/cassandra/gms/FailureDetector.java
@@ -27,14 +27,12 @@ import java.util.concurrent.TimeUnit;
 import javax.management.MBeanServer;
 import javax.management.ObjectName;
 
-import org.apache.commons.lang3.StringUtils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import org.apache.cassandra.config.DatabaseDescriptor;
 import org.apache.cassandra.io.FSWriteError;
 import org.apache.cassandra.io.util.FileUtils;
-import org.apache.cassandra.utils.BoundedStatsDeque;
 import org.apache.cassandra.utils.FBUtilities;
 
 /**
@@ -289,11 +287,60 @@ public class FailureDetector implements IFailureDetector, 
FailureDetectorMBean
 }
 }
 
+/*
+ This class is not thread safe.
+ */
+class ArrayBackedBoundedStats
+{
+private final long[] arrivalIntervals;
+private long sum = 0;
+private int index = 0;
+private boolean isFilled = false;
+private volatile double mean = 0;
+
+public ArrayBackedBoundedStats(final int size)
+{
+arrivalIntervals = new long[size];
+}
+
+public void add(long interval)
+{
+if(index == arrivalIntervals.length)
+{
+isFilled = true;
+index = 0;
+}
+
+if(isFilled)
+sum = sum - arrivalIntervals[index];
+
+arrivalIntervals[index++] = interval;
+sum += interval;
+mean = (double)sum / size();
+}
+
+private int size()
+{
+return isFilled ? arrivalIntervals.length : index;
+}
+
+public double mean()
+{
+return mean;
+}
+
+public long[] getArrivalIntervals()
+{
+return arrivalIntervals;
+}
+
+}
+
 class ArrivalWindow
 {
 private static final Logger logger = 
LoggerFactory.getLogger(ArrivalWindow.class);
 private long tLast = 0L;
-private final BoundedStatsDeque arrivalIntervals;
+private final ArrayBackedBoundedStats arrivalIntervals;
 
 // this is useless except to provide backwards compatibility in 
phi_convict_threshold,
 // because everyone seems pretty accustomed to the default of 8, and users 
who have
@@ -309,7 +356,7 @@ class ArrivalWindow
 
 ArrivalWindow(int size)
 {
-arrivalIntervals = new BoundedStatsDeque(size);
+arrivalIntervals = new ArrayBackedBoundedStats(size);
 }
 
 private static long getMaxInterval()
@@ -355,14 +402,14 @@ class ArrivalWindow
 // see CASSANDRA-2597 for an explanation of the math at work here.
 double phi(long tnow)
 {
 -assert arrivalIntervals.size() > 0 && tLast > 0; // should not be called before any samples arrive
 +assert arrivalIntervals.mean() > 0 && tLast > 0; // should not be called before any samples arrive
 long t = tnow - tLast;
   

[1/4] cassandra git commit: ArrivalWindow should use primitives

2015-06-17 Thread jasobrown
Repository: cassandra
Updated Branches:
  refs/heads/trunk 075ff5000 -> 72ab4b944


ArrivalWindow should use primitives

patch by sankalp kohli; reviewed by jasobrown for CASSANDRA-9496


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ad8047ab
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ad8047ab
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ad8047ab

Branch: refs/heads/trunk
Commit: ad8047abdf5db6652b9586e039debb1e855db09a
Parents: ec52e77
Author: Jason Brown jasedbr...@gmail.com
Authored: Wed Jun 17 14:33:44 2015 -0700
Committer: Jason Brown jasedbr...@gmail.com
Committed: Wed Jun 17 14:33:44 2015 -0700

--
 CHANGES.txt |  1 +
 .../apache/cassandra/gms/FailureDetector.java   | 59 ++--
 .../cassandra/utils/BoundedStatsDeque.java  | 72 
 .../gms/ArrayBackedBoundedStatsTest.java| 57 
 .../cassandra/utils/BoundedStatsDequeTest.java  | 66 --
 5 files changed, 111 insertions(+), 144 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ad8047ab/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6d031f6..753fb1c 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.16:
+ * ArrivalWindow should use primitives (CASSANDRA-9496)
  * Periodically submit background compaction tasks (CASSANDRA-9592)
  * Set HAS_MORE_PAGES flag to false when PagingState is null (CASSANDRA-9571)
  * Backport indexed value validation fix from CASSANDRA-9057 (CASSANDRA-9564)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ad8047ab/src/java/org/apache/cassandra/gms/FailureDetector.java
--
diff --git a/src/java/org/apache/cassandra/gms/FailureDetector.java 
b/src/java/org/apache/cassandra/gms/FailureDetector.java
index e247e48..8fdd99f 100644
--- a/src/java/org/apache/cassandra/gms/FailureDetector.java
+++ b/src/java/org/apache/cassandra/gms/FailureDetector.java
@@ -27,14 +27,12 @@ import java.util.concurrent.TimeUnit;
 import javax.management.MBeanServer;
 import javax.management.ObjectName;
 
-import org.apache.commons.lang3.StringUtils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import org.apache.cassandra.config.DatabaseDescriptor;
 import org.apache.cassandra.io.FSWriteError;
 import org.apache.cassandra.io.util.FileUtils;
-import org.apache.cassandra.utils.BoundedStatsDeque;
 import org.apache.cassandra.utils.FBUtilities;
 
 /**
@@ -289,11 +287,60 @@ public class FailureDetector implements IFailureDetector, 
FailureDetectorMBean
 }
 }
 
+/*
+ This class is not thread safe.
+ */
+class ArrayBackedBoundedStats
+{
+private final long[] arrivalIntervals;
+private long sum = 0;
+private int index = 0;
+private boolean isFilled = false;
+private volatile double mean = 0;
+
+public ArrayBackedBoundedStats(final int size)
+{
+arrivalIntervals = new long[size];
+}
+
+public void add(long interval)
+{
+if(index == arrivalIntervals.length)
+{
+isFilled = true;
+index = 0;
+}
+
+if(isFilled)
+sum = sum - arrivalIntervals[index];
+
+arrivalIntervals[index++] = interval;
+sum += interval;
+mean = (double)sum / size();
+}
+
+private int size()
+{
+return isFilled ? arrivalIntervals.length : index;
+}
+
+public double mean()
+{
+return mean;
+}
+
+public long[] getArrivalIntervals()
+{
+return arrivalIntervals;
+}
+
+}
+
 class ArrivalWindow
 {
 private static final Logger logger = 
LoggerFactory.getLogger(ArrivalWindow.class);
 private long tLast = 0L;
-private final BoundedStatsDeque arrivalIntervals;
+private final ArrayBackedBoundedStats arrivalIntervals;
 
 // this is useless except to provide backwards compatibility in 
phi_convict_threshold,
 // because everyone seems pretty accustomed to the default of 8, and users 
who have
@@ -309,7 +356,7 @@ class ArrivalWindow
 
 ArrivalWindow(int size)
 {
-arrivalIntervals = new BoundedStatsDeque(size);
+arrivalIntervals = new ArrayBackedBoundedStats(size);
 }
 
 private static long getMaxInterval()
@@ -355,14 +402,14 @@ class ArrivalWindow
 // see CASSANDRA-2597 for an explanation of the math at work here.
 double phi(long tnow)
 {
 -assert arrivalIntervals.size() > 0 && tLast > 0; // should not be called before any samples arrive
 +assert arrivalIntervals.mean() > 0 && tLast > 0; // should not be called before any samples arrive
 long t = tnow - tLast;
 return t 

[4/4] cassandra git commit: Merge branch 'cassandra-2.2' into trunk

2015-06-17 Thread jasobrown
Merge branch 'cassandra-2.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/72ab4b94
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/72ab4b94
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/72ab4b94

Branch: refs/heads/trunk
Commit: 72ab4b944f915761d8b1bfd22ff75200499e6718
Parents: 075ff50 8b021db
Author: Jason Brown jasedbr...@gmail.com
Authored: Wed Jun 17 14:39:43 2015 -0700
Committer: Jason Brown jasedbr...@gmail.com
Committed: Wed Jun 17 14:39:43 2015 -0700

--
 CHANGES.txt |  1 +
 .../apache/cassandra/gms/FailureDetector.java   | 59 +++--
 .../cassandra/utils/BoundedStatsDeque.java  | 68 
 .../gms/ArrayBackedBoundedStatsTest.java| 57 
 .../cassandra/utils/BoundedStatsDequeTest.java  | 66 ---
 5 files changed, 111 insertions(+), 140 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/72ab4b94/CHANGES.txt
--



[jira] [Created] (CASSANDRA-9615) Speed up Windows launch scripts

2015-06-17 Thread Joshua McKenzie (JIRA)
Joshua McKenzie created CASSANDRA-9615:
--

 Summary: Speed up Windows launch scripts
 Key: CASSANDRA-9615
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9615
 Project: Cassandra
  Issue Type: Improvement
Reporter: Joshua McKenzie
Assignee: Joshua McKenzie
Priority: Minor
 Fix For: 2.2.x


Currently the async callback to start C* on Windows from within ccm is taking 
upwards of 1.5 to 2 seconds per node, due to a variety of somewhat expensive 
process launches we're doing in there (java version check, async port-open 
checking). Contrast this with a crisp 0-1 ms on Linux...

Some of that stuff can be cleaned up and sped up, which should both speed up our 
testing environment and iron out an error that pops up on the port check 
w/IPv6 (note: the node still starts, it just complains).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9119) Nodetool rebuild creates an additional rebuild session even if there is one already running

2015-06-17 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14590641#comment-14590641
 ] 

Joshua McKenzie commented on CASSANDRA-9119:


+1

 Nodetool rebuild creates an additional rebuild session even if there is one 
 already running
 ---

 Key: CASSANDRA-9119
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9119
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cassandra 2.0.12.200
 Cassandra 2.1.2.98
Reporter: Jose Martinez Poblete
Assignee: Yuki Morishita
 Fix For: 2.1.x, 2.2.0 rc2


 If a nodetool rebuild session is started and the shell session is terminated 
 for whatever reason, running nodetool rebuild a second time will spawn a second 
 rebuild filestream:
 {noformat}
 DC2-S1-100-29:~ # ps aux | grep nodetool 
 root 10304 0.0 0.0 4532 560 pts/3 S+ 05:23 0:00 grep nodetool 
 dds-user 20946 0.0 0.0 21180 1880 ? S 04:39 0:00 /bin/sh 
 /usr/share/dse/resources/cassandra/bin/nodetool rebuild group10 <-- there 
 is only one rebuild running
 DC2-S1-100-29:~ # nodetool netstats | grep -v /var/local/cassandra 
 Mode: NORMAL 
 Rebuild 818307b0-d9ba-11e4-8d4c-7bce93ffad70 <-- does this represent one 
 rebuild? 
 /10.96.100.22 
 Receiving 63 files, 221542605741 bytes total 
 /10.96.100.26 
 Receiving 48 files, 47712285610 bytes total 
 /10.96.100.25 
 /10.96.100.23 
 Receiving 57 files, 127515362783 bytes total 
 /10.96.100.27 
 /10.96.100.24 
 Rebuild 7bf9fcd0-d9bb-11e4-8d4c-7bce93ffad70 <--- does this represent a 
 second rebuild? 
 /10.96.100.25 
 /10.96.100.26 
 Receiving 56 files, 47717905924 bytes total 
 /10.96.100.24 
 /10.96.100.22 
 Receiving 61 files, 221558642440 bytes total 
 /10.96.100.23 
 Receiving 62 files, 127528841272 bytes total 
 /10.96.100.27 
 Read Repair Statistics: 
 Attempted: 0 
 Mismatch (Blocking): 0 
 Mismatch (Background): 0 
 Pool Name Active Pending Completed 
 Commands n/a 0 2151322 
 Responses n/a 0 3343981
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/2] cassandra git commit: ArrivalWindow should use primitives

2015-06-17 Thread jasobrown
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 7c5fc40b8 -> 4c1597011


ArrivalWindow should use primitives

patch by sankalp kohli; reviewed by jasobrown for CASSANDRA-9496


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ad8047ab
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ad8047ab
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ad8047ab

Branch: refs/heads/cassandra-2.1
Commit: ad8047abdf5db6652b9586e039debb1e855db09a
Parents: ec52e77
Author: Jason Brown jasedbr...@gmail.com
Authored: Wed Jun 17 14:33:44 2015 -0700
Committer: Jason Brown jasedbr...@gmail.com
Committed: Wed Jun 17 14:33:44 2015 -0700

--
 CHANGES.txt |  1 +
 .../apache/cassandra/gms/FailureDetector.java   | 59 ++--
 .../cassandra/utils/BoundedStatsDeque.java  | 72 
 .../gms/ArrayBackedBoundedStatsTest.java| 57 
 .../cassandra/utils/BoundedStatsDequeTest.java  | 66 --
 5 files changed, 111 insertions(+), 144 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ad8047ab/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6d031f6..753fb1c 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.16:
+ * ArrivalWindow should use primitives (CASSANDRA-9496)
  * Periodically submit background compaction tasks (CASSANDRA-9592)
  * Set HAS_MORE_PAGES flag to false when PagingState is null (CASSANDRA-9571)
  * Backport indexed value validation fix from CASSANDRA-9057 (CASSANDRA-9564)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ad8047ab/src/java/org/apache/cassandra/gms/FailureDetector.java
--
diff --git a/src/java/org/apache/cassandra/gms/FailureDetector.java 
b/src/java/org/apache/cassandra/gms/FailureDetector.java
index e247e48..8fdd99f 100644
--- a/src/java/org/apache/cassandra/gms/FailureDetector.java
+++ b/src/java/org/apache/cassandra/gms/FailureDetector.java
@@ -27,14 +27,12 @@ import java.util.concurrent.TimeUnit;
 import javax.management.MBeanServer;
 import javax.management.ObjectName;
 
-import org.apache.commons.lang3.StringUtils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import org.apache.cassandra.config.DatabaseDescriptor;
 import org.apache.cassandra.io.FSWriteError;
 import org.apache.cassandra.io.util.FileUtils;
-import org.apache.cassandra.utils.BoundedStatsDeque;
 import org.apache.cassandra.utils.FBUtilities;
 
 /**
@@ -289,11 +287,60 @@ public class FailureDetector implements IFailureDetector, 
FailureDetectorMBean
 }
 }
 
+/*
+ This class is not thread safe.
+ */
+class ArrayBackedBoundedStats
+{
+private final long[] arrivalIntervals;
+private long sum = 0;
+private int index = 0;
+private boolean isFilled = false;
+private volatile double mean = 0;
+
+public ArrayBackedBoundedStats(final int size)
+{
+arrivalIntervals = new long[size];
+}
+
+public void add(long interval)
+{
+if(index == arrivalIntervals.length)
+{
+isFilled = true;
+index = 0;
+}
+
+if(isFilled)
+sum = sum - arrivalIntervals[index];
+
+arrivalIntervals[index++] = interval;
+sum += interval;
+mean = (double)sum / size();
+}
+
+private int size()
+{
+return isFilled ? arrivalIntervals.length : index;
+}
+
+public double mean()
+{
+return mean;
+}
+
+public long[] getArrivalIntervals()
+{
+return arrivalIntervals;
+}
+
+}
+
 class ArrivalWindow
 {
 private static final Logger logger = 
LoggerFactory.getLogger(ArrivalWindow.class);
 private long tLast = 0L;
-private final BoundedStatsDeque arrivalIntervals;
+private final ArrayBackedBoundedStats arrivalIntervals;
 
 // this is useless except to provide backwards compatibility in 
phi_convict_threshold,
 // because everyone seems pretty accustomed to the default of 8, and users 
who have
@@ -309,7 +356,7 @@ class ArrivalWindow
 
 ArrivalWindow(int size)
 {
-arrivalIntervals = new BoundedStatsDeque(size);
+arrivalIntervals = new ArrayBackedBoundedStats(size);
 }
 
 private static long getMaxInterval()
@@ -355,14 +402,14 @@ class ArrivalWindow
 // see CASSANDRA-2597 for an explanation of the math at work here.
 double phi(long tnow)
 {
-assert arrivalIntervals.size() > 0 && tLast > 0; // should not be called before any samples arrive
+assert arrivalIntervals.mean() > 0 && tLast > 0; // should not be called before any samples arrive
 long t = tnow - tLast;
   

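The ArrayBackedBoundedStats class in the patch above replaces BoundedStatsDeque with a fixed-size ring buffer that maintains a running sum, making mean() O(1) with no per-sample allocation. A minimal standalone sketch of the same technique (hypothetical class name `RingStats`; this is an illustration, not the committed code):

```java
/** Fixed-size ring buffer keeping a running sum so the mean is O(1).
 *  Sketch of the technique used by ArrayBackedBoundedStats; not thread safe. */
class RingStats
{
    private final long[] values;
    private long sum = 0;
    private int index = 0;
    private boolean filled = false;

    RingStats(int size) { values = new long[size]; }

    void add(long v)
    {
        if (index == values.length) { filled = true; index = 0; }
        if (filled)
            sum -= values[index];   // evict the oldest sample from the running sum
        values[index++] = v;
        sum += v;
    }

    double mean()
    {
        int n = filled ? values.length : index;
        return n == 0 ? 0 : (double) sum / n;
    }

    public static void main(String[] args)
    {
        RingStats s = new RingStats(3);
        s.add(1); s.add(2); s.add(3);
        System.out.println(s.mean()); // prints 2.0 over the full window
        s.add(9);                     // wraps around, evicting the sample 1
        System.out.println(s.mean());
    }
}
```

Once the buffer wraps, each add() subtracts the overwritten value and adds the new one, so the sum always covers exactly the last `size` samples.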
[2/2] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-06-17 Thread jasobrown
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/utils/BoundedStatsDeque.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4c159701
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4c159701
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4c159701

Branch: refs/heads/cassandra-2.1
Commit: 4c15970119e021dd0fe4b2fe8b4f9c594d21f334
Parents: 7c5fc40 ad8047a
Author: Jason Brown jasedbr...@gmail.com
Authored: Wed Jun 17 14:36:59 2015 -0700
Committer: Jason Brown jasedbr...@gmail.com
Committed: Wed Jun 17 14:36:59 2015 -0700

--
 CHANGES.txt |  1 +
 .../apache/cassandra/gms/FailureDetector.java   | 59 +++--
 .../cassandra/utils/BoundedStatsDeque.java  | 68 
 .../gms/ArrayBackedBoundedStatsTest.java| 57 
 .../cassandra/utils/BoundedStatsDequeTest.java  | 66 ---
 5 files changed, 111 insertions(+), 140 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4c159701/CHANGES.txt
--
diff --cc CHANGES.txt
index 009d974,753fb1c..8f3f9f0
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,51 -1,8 +1,52 @@@
 -2.0.16:
 +2.1.7
 + * Fix memory leak in Ref due to ConcurrentLinkedQueue.remove() behaviour 
(CASSANDRA-9549)
 +Merged from 2.0
+  * ArrivalWindow should use primitives (CASSANDRA-9496)
   * Periodically submit background compaction tasks (CASSANDRA-9592)
   * Set HAS_MORE_PAGES flag to false when PagingState is null (CASSANDRA-9571)
 - * Backport indexed value validation fix from CASSANDRA-9057 (CASSANDRA-9564)
 +
 +
 +2.1.6
 + * (cqlsh) Fix using COPY through SOURCE or -f (CASSANDRA-9083)
 + * Fix occasional lack of `system` keyspace in schema tables (CASSANDRA-8487)
 + * Use ProtocolError code instead of ServerError code for native protocol
 +   error responses to unsupported protocol versions (CASSANDRA-9451)
 + * Default commitlog_sync_batch_window_in_ms changed to 2ms (CASSANDRA-9504)
 + * Fix empty partition assertion in unsorted sstable writing tools 
(CASSANDRA-9071)
 + * Ensure truncate without snapshot cannot produce corrupt responses 
(CASSANDRA-9388) 
 + * Consistent error message when a table mixes counter and non-counter
 +   columns (CASSANDRA-9492)
 + * Avoid getting unreadable keys during anticompaction (CASSANDRA-9508)
 + * (cqlsh) Better float precision by default (CASSANDRA-9224)
 + * Improve estimated row count (CASSANDRA-9107)
 + * Optimize range tombstone memory footprint (CASSANDRA-8603)
 + * Use configured gcgs in anticompaction (CASSANDRA-9397)
 + * Warn on misuse of unlogged batches (CASSANDRA-9282)
 + * Failure detector detects and ignores local pauses (CASSANDRA-9183)
 + * Add utility class to support for rate limiting a given log statement 
(CASSANDRA-9029)
 + * Add missing consistency levels to cassandra-stess (CASSANDRA-9361)
 + * Fix commitlog getCompletedTasks to not increment (CASSANDRA-9339)
 + * Fix for harmless exceptions logged as ERROR (CASSANDRA-8564)
 + * Delete processed sstables in sstablesplit/sstableupgrade (CASSANDRA-8606)
 + * Improve sstable exclusion from partition tombstones (CASSANDRA-9298)
 + * Validate the indexed column rather than the cell's contents for 2i 
(CASSANDRA-9057)
 + * Add support for top-k custom 2i queries (CASSANDRA-8717)
 + * Fix error when dropping table during compaction (CASSANDRA-9251)
 + * cassandra-stress supports validation operations over user profiles 
(CASSANDRA-8773)
 + * Add support for rate limiting log messages (CASSANDRA-9029)
 + * Log the partition key with tombstone warnings (CASSANDRA-8561)
 + * Reduce runWithCompactionsDisabled poll interval to 1ms (CASSANDRA-9271)
 + * Fix PITR commitlog replay (CASSANDRA-9195)
 + * GCInspector logs very different times (CASSANDRA-9124)
 + * Fix deleting from an empty list (CASSANDRA-9198)
 + * Update tuple and collection types that use a user-defined type when that 
UDT
 +   is modified (CASSANDRA-9148, CASSANDRA-9192)
 + * Use higher timeout for prepair and snapshot in repair (CASSANDRA-9261)
 + * Fix anticompaction blocking ANTI_ENTROPY stage (CASSANDRA-9151)
 + * Repair waits for anticompaction to finish (CASSANDRA-9097)
 + * Fix streaming not holding ref when stream error (CASSANDRA-9295)
 + * Fix canonical view returning early opened SSTables (CASSANDRA-9396)
 +Merged from 2.0:
   * Don't accumulate more range than necessary in RangeTombstone.Tracker 
(CASSANDRA-9486)
   * Add broadcast and rpc addresses to system.local (CASSANDRA-9436)
   * Always mark sstable suspect when corrupted (CASSANDRA-9478)


[jira] [Commented] (CASSANDRA-9475) Duplicate compilation of UDFs on coordinator

2015-06-17 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14590706#comment-14590706
 ] 

Tyler Hobbs commented on CASSANDRA-9475:


+1 on the second patch

 Duplicate compilation of UDFs on coordinator
 

 Key: CASSANDRA-9475
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9475
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Robert Stupp
Assignee: Robert Stupp
Priority: Minor
 Fix For: 2.2.x

 Attachments: 9475-2.txt, 9475.txt


 User-defined functions are compiled twice on the node actually executing the 
 CREATE FUNCTION statement.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: ArrivalWindow should use primitives

2015-06-17 Thread jasobrown
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 ec52e77ec -> ad8047abd


ArrivalWindow should use primitives

patch by sankalp kohli; reviewed by jasobrown for CASSANDRA-9496


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ad8047ab
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ad8047ab
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ad8047ab

Branch: refs/heads/cassandra-2.0
Commit: ad8047abdf5db6652b9586e039debb1e855db09a
Parents: ec52e77
Author: Jason Brown jasedbr...@gmail.com
Authored: Wed Jun 17 14:33:44 2015 -0700
Committer: Jason Brown jasedbr...@gmail.com
Committed: Wed Jun 17 14:33:44 2015 -0700

--
 CHANGES.txt |  1 +
 .../apache/cassandra/gms/FailureDetector.java   | 59 ++--
 .../cassandra/utils/BoundedStatsDeque.java  | 72 
 .../gms/ArrayBackedBoundedStatsTest.java| 57 
 .../cassandra/utils/BoundedStatsDequeTest.java  | 66 --
 5 files changed, 111 insertions(+), 144 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ad8047ab/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6d031f6..753fb1c 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.16:
+ * ArrivalWindow should use primitives (CASSANDRA-9496)
  * Periodically submit background compaction tasks (CASSANDRA-9592)
  * Set HAS_MORE_PAGES flag to false when PagingState is null (CASSANDRA-9571)
  * Backport indexed value validation fix from CASSANDRA-9057 (CASSANDRA-9564)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ad8047ab/src/java/org/apache/cassandra/gms/FailureDetector.java
--
diff --git a/src/java/org/apache/cassandra/gms/FailureDetector.java 
b/src/java/org/apache/cassandra/gms/FailureDetector.java
index e247e48..8fdd99f 100644
--- a/src/java/org/apache/cassandra/gms/FailureDetector.java
+++ b/src/java/org/apache/cassandra/gms/FailureDetector.java
@@ -27,14 +27,12 @@ import java.util.concurrent.TimeUnit;
 import javax.management.MBeanServer;
 import javax.management.ObjectName;
 
-import org.apache.commons.lang3.StringUtils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import org.apache.cassandra.config.DatabaseDescriptor;
 import org.apache.cassandra.io.FSWriteError;
 import org.apache.cassandra.io.util.FileUtils;
-import org.apache.cassandra.utils.BoundedStatsDeque;
 import org.apache.cassandra.utils.FBUtilities;
 
 /**
@@ -289,11 +287,60 @@ public class FailureDetector implements IFailureDetector, 
FailureDetectorMBean
 }
 }
 
+/*
+ This class is not thread safe.
+ */
+class ArrayBackedBoundedStats
+{
+private final long[] arrivalIntervals;
+private long sum = 0;
+private int index = 0;
+private boolean isFilled = false;
+private volatile double mean = 0;
+
+public ArrayBackedBoundedStats(final int size)
+{
+arrivalIntervals = new long[size];
+}
+
+public void add(long interval)
+{
+if(index == arrivalIntervals.length)
+{
+isFilled = true;
+index = 0;
+}
+
+if(isFilled)
+sum = sum - arrivalIntervals[index];
+
+arrivalIntervals[index++] = interval;
+sum += interval;
+mean = (double)sum / size();
+}
+
+private int size()
+{
+return isFilled ? arrivalIntervals.length : index;
+}
+
+public double mean()
+{
+return mean;
+}
+
+public long[] getArrivalIntervals()
+{
+return arrivalIntervals;
+}
+
+}
+
 class ArrivalWindow
 {
 private static final Logger logger = 
LoggerFactory.getLogger(ArrivalWindow.class);
 private long tLast = 0L;
-private final BoundedStatsDeque arrivalIntervals;
+private final ArrayBackedBoundedStats arrivalIntervals;
 
 // this is useless except to provide backwards compatibility in 
phi_convict_threshold,
 // because everyone seems pretty accustomed to the default of 8, and users 
who have
@@ -309,7 +356,7 @@ class ArrivalWindow
 
 ArrivalWindow(int size)
 {
-arrivalIntervals = new BoundedStatsDeque(size);
+arrivalIntervals = new ArrayBackedBoundedStats(size);
 }
 
 private static long getMaxInterval()
@@ -355,14 +402,14 @@ class ArrivalWindow
 // see CASSANDRA-2597 for an explanation of the math at work here.
 double phi(long tnow)
 {
-assert arrivalIntervals.size() > 0 && tLast > 0; // should not be called before any samples arrive
+assert arrivalIntervals.mean() > 0 && tLast > 0; // should not be called before any samples arrive
 long t = tnow - tLast;
   

[5/6] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-06-17 Thread yukim
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dee675f1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dee675f1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dee675f1

Branch: refs/heads/cassandra-2.2
Commit: dee675f1ef148b40351c365b6d42c39f081cb706
Parents: 8b021db 9966419
Author: Yuki Morishita yu...@apache.org
Authored: Wed Jun 17 20:48:40 2015 -0500
Committer: Yuki Morishita yu...@apache.org
Committed: Wed Jun 17 20:48:40 2015 -0500

--
 CHANGES.txt |  3 +-
 .../cassandra/service/StorageService.java   | 48 +---
 2 files changed, 34 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/dee675f1/CHANGES.txt
--
diff --cc CHANGES.txt
index c32596c,1d72c9a..3b16b6f
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,41 -1,13 +1,42 @@@
 -2.1.7
 +2.2
 + * Fix connection leak in CqlRecordWriter (CASSANDRA-9576)
 + * Mlockall before opening system sstables & remove boot_without_jna option (CASSANDRA-9573)
 + * Add functions to convert timeuuid to date or time, deprecate dateOf and 
unixTimestampOf (CASSANDRA-9229)
 + * Make sure we cancel non-compacting sstables from LifecycleTransaction 
(CASSANDRA-9566)
 + * Fix deprecated repair JMX API (CASSANDRA-9570)
++ * Add logback metrics (CASSANDRA-9378)
 +Merged from 2.1:
   * Fix memory leak in Ref due to ConcurrentLinkedQueue.remove() behaviour 
(CASSANDRA-9549)
++ * Make rebuild only run one at a time (CASSANDRA-9119)
  Merged from 2.0
   * ArrivalWindow should use primitives (CASSANDRA-9496)
   * Periodically submit background compaction tasks (CASSANDRA-9592)
   * Set HAS_MORE_PAGES flag to false when PagingState is null (CASSANDRA-9571)
-  * Add logback metrics (CASSANDRA-9378)
 - * Make rebuild only run one at a time (CASSANDRA-9119)
  
  
 -2.1.6
 +2.2.0-rc1
 + * Compressed commit log should measure compressed space used (CASSANDRA-9095)
 + * Fix comparison bug in CassandraRoleManager#collectRoles (CASSANDRA-9551)
 + * Add tinyint,smallint,time,date support for UDFs (CASSANDRA-9400)
 + * Deprecates SSTableSimpleWriter and SSTableSimpleUnsortedWriter 
(CASSANDRA-9546)
 + * Empty INITCOND treated as null in aggregate (CASSANDRA-9457)
 + * Remove use of Cell in Thrift MapReduce classes (CASSANDRA-8609)
 + * Integrate pre-release Java Driver 2.2-rc1, custom build (CASSANDRA-9493)
 + * Clean up gossiper logic for old versions (CASSANDRA-9370)
 + * Fix custom payload coding/decoding to match the spec (CASSANDRA-9515)
 + * ant test-all results incomplete when parsed (CASSANDRA-9463)
 + * Disallow frozen types in function arguments and return types for
 +   clarity (CASSANDRA-9411)
 + * Static Analysis to warn on unsafe use of Autocloseable instances 
(CASSANDRA-9431)
 + * Update commitlog archiving examples now that commitlog segments are
 +   not recycled (CASSANDRA-9350)
 + * Extend Transactional API to sstable lifecycle management (CASSANDRA-8568)
 + * (cqlsh) Add support for native protocol 4 (CASSANDRA-9399)
 + * Ensure that UDF and UDAs are keyspace-isolated (CASSANDRA-9409)
 + * Revert CASSANDRA-7807 (tracing completion client notifications) 
(CASSANDRA-9429)
 + * Add ability to stop compaction by ID (CASSANDRA-7207)
 + * Let CassandraVersion handle SNAPSHOT version (CASSANDRA-9438)
 +Merged from 2.1:
   * (cqlsh) Fix using COPY through SOURCE or -f (CASSANDRA-9083)
   * Fix occasional lack of `system` keyspace in schema tables (CASSANDRA-8487)
   * Use ProtocolError code instead of ServerError code for native protocol

http://git-wip-us.apache.org/repos/asf/cassandra/blob/dee675f1/src/java/org/apache/cassandra/service/StorageService.java
--
diff --cc src/java/org/apache/cassandra/service/StorageService.java
index 2dd56b5,e063c63..3edbe22
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@@ -50,7 -50,9 +50,8 @@@ import java.util.concurrent.Future
  import java.util.concurrent.FutureTask;
  import java.util.concurrent.TimeUnit;
  import java.util.concurrent.TimeoutException;
+ import java.util.concurrent.atomic.AtomicBoolean;
  import java.util.concurrent.atomic.AtomicInteger;
 -import java.util.concurrent.atomic.AtomicLong;
  
  import javax.management.JMX;
  import javax.management.MBeanServer;
@@@ -239,11 -235,15 +240,13 @@@ public class StorageService extends Not
  private InetAddress removingNode;
  
  /* Are we starting this node in bootstrap mode? */
 -private boolean isBootstrapMode;
 +private volatile boolean isBootstrapMode;
  
  /* we bootstrap but 

[2/6] cassandra git commit: Make rebuild only run one at a time

2015-06-17 Thread yukim
Make rebuild only run one at a time

patch by yukim; reviewed by jmckenzie for CASSANDRA-9119


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9966419d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9966419d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9966419d

Branch: refs/heads/cassandra-2.2
Commit: 9966419dbda995421f41ccc769d3b89d63940c82
Parents: 4c15970
Author: Yuki Morishita yu...@apache.org
Authored: Wed Jun 3 14:44:11 2015 -0500
Committer: Yuki Morishita yu...@apache.org
Committed: Wed Jun 17 20:41:16 2015 -0500

--
 CHANGES.txt |  1 +
 .../cassandra/service/StorageService.java   | 36 ++--
 2 files changed, 27 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9966419d/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 8f3f9f0..1d72c9a 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -4,6 +4,7 @@ Merged from 2.0
  * ArrivalWindow should use primitives (CASSANDRA-9496)
  * Periodically submit background compaction tasks (CASSANDRA-9592)
  * Set HAS_MORE_PAGES flag to false when PagingState is null (CASSANDRA-9571)
+ * Make rebuild only run one at a time (CASSANDRA-9119)
 
 
 2.1.6

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9966419d/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index 7c8e424..e063c63 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -50,6 +50,7 @@ import java.util.concurrent.Future;
 import java.util.concurrent.FutureTask;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.TimeoutException;
+import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicInteger;
 import java.util.concurrent.atomic.AtomicLong;
 
@@ -237,7 +238,9 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 private boolean isBootstrapMode;
 
 /* we bootstrap but do NOT join the ring unless told to do so */
-private boolean isSurveyMode= Boolean.parseBoolean(System.getProperty("cassandra.write_survey", "false"));
+private boolean isSurveyMode = Boolean.parseBoolean(System.getProperty("cassandra.write_survey", "false"));
+/* true if node is rebuilding and receiving data */
+private final AtomicBoolean isRebuilding = new AtomicBoolean();
 
 /* when intialized as a client, we shouldn't write to the system keyspace. 
*/
 private boolean isClientMode;
@@ -1023,19 +1026,27 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 
 public void rebuild(String sourceDc)
 {
-logger.info("rebuild from dc: {}", sourceDc == null ? "(any dc)" : sourceDc);
-
-RangeStreamer streamer = new RangeStreamer(tokenMetadata, FBUtilities.getBroadcastAddress(), "Rebuild");
-streamer.addSourceFilter(new RangeStreamer.FailureDetectorSourceFilter(FailureDetector.instance));
-if (sourceDc != null)
-streamer.addSourceFilter(new RangeStreamer.SingleDatacenterFilter(DatabaseDescriptor.getEndpointSnitch(), sourceDc));
+// check on going rebuild
+if (!isRebuilding.compareAndSet(false, true))
+{
+throw new IllegalStateException("Node is still rebuilding. Check nodetool netstats.");
+}

-for (String keyspaceName : Schema.instance.getNonSystemKeyspaces())
-streamer.addRanges(keyspaceName, getLocalRanges(keyspaceName));
+logger.info("rebuild from dc: {}", sourceDc == null ? "(any dc)" : sourceDc);

 try
 {
-streamer.fetchAsync().get();
+RangeStreamer streamer = new RangeStreamer(tokenMetadata, FBUtilities.getBroadcastAddress(), "Rebuild");
+streamer.addSourceFilter(new RangeStreamer.FailureDetectorSourceFilter(FailureDetector.instance));
+if (sourceDc != null)
+streamer.addSourceFilter(new RangeStreamer.SingleDatacenterFilter(DatabaseDescriptor.getEndpointSnitch(), sourceDc));
+
+for (String keyspaceName : Schema.instance.getNonSystemKeyspaces())
+streamer.addRanges(keyspaceName, getLocalRanges(keyspaceName));
+
+StreamResultFuture resultFuture = streamer.fetchAsync();
+// wait for result
+resultFuture.get();
 }
 catch (InterruptedException e)
 {
@@ -1047,6 +1058,11 @@ public class StorageService extends 

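The CASSANDRA-9119 patch above guards rebuild() with an AtomicBoolean: compareAndSet(false, true) atomically claims the flag, so a second concurrent caller fails fast with IllegalStateException instead of spawning a second stream session. A minimal standalone sketch of this guard pattern (hypothetical class `SingleRunGuard`, not the committed code; the committed patch releases the flag after the streaming future completes):

```java
import java.util.concurrent.atomic.AtomicBoolean;

/** Sketch of the single-run guard pattern used by StorageService.rebuild():
 *  only one caller at a time may hold the flag; others fail fast. */
class SingleRunGuard
{
    private final AtomicBoolean running = new AtomicBoolean();

    void run(Runnable task)
    {
        // atomically claim the flag; false means someone else holds it
        if (!running.compareAndSet(false, true))
            throw new IllegalStateException("Task is still running.");
        try
        {
            task.run();
        }
        finally
        {
            running.set(false); // always release, even if the task fails
        }
    }
}
```

Unlike a plain boolean field, compareAndSet makes the check-then-set a single atomic step, so two threads racing into run() cannot both pass the check.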
[6/6] cassandra git commit: Merge branch 'cassandra-2.2' into trunk

2015-06-17 Thread yukim
Merge branch 'cassandra-2.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/43d21c38
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/43d21c38
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/43d21c38

Branch: refs/heads/trunk
Commit: 43d21c384b4212f8731edfae33142c0a5676c474
Parents: 4c4c432 dee675f
Author: Yuki Morishita yu...@apache.org
Authored: Wed Jun 17 21:58:02 2015 -0500
Committer: Yuki Morishita yu...@apache.org
Committed: Wed Jun 17 21:58:02 2015 -0500

--
 CHANGES.txt |  3 +-
 .../cassandra/service/StorageService.java   | 48 +---
 2 files changed, 34 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/43d21c38/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/43d21c38/src/java/org/apache/cassandra/service/StorageService.java
--



[3/6] cassandra git commit: Make rebuild only run one at a time

2015-06-17 Thread yukim
Make rebuild only run one at a time

patch by yukim; reviewed by jmckenzie for CASSANDRA-9119


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9966419d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9966419d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9966419d

Branch: refs/heads/trunk
Commit: 9966419dbda995421f41ccc769d3b89d63940c82
Parents: 4c15970
Author: Yuki Morishita yu...@apache.org
Authored: Wed Jun 3 14:44:11 2015 -0500
Committer: Yuki Morishita yu...@apache.org
Committed: Wed Jun 17 20:41:16 2015 -0500

--
 CHANGES.txt |  1 +
 .../cassandra/service/StorageService.java   | 36 ++--
 2 files changed, 27 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9966419d/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 8f3f9f0..1d72c9a 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -4,6 +4,7 @@ Merged from 2.0
  * ArrivalWindow should use primitives (CASSANDRA-9496)
  * Periodically submit background compaction tasks (CASSANDRA-9592)
  * Set HAS_MORE_PAGES flag to false when PagingState is null (CASSANDRA-9571)
+ * Make rebuild only run one at a time (CASSANDRA-9119)
 
 
 2.1.6

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9966419d/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index 7c8e424..e063c63 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -50,6 +50,7 @@ import java.util.concurrent.Future;
 import java.util.concurrent.FutureTask;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.TimeoutException;
+import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicInteger;
 import java.util.concurrent.atomic.AtomicLong;
 
@@ -237,7 +238,9 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 private boolean isBootstrapMode;
 
 /* we bootstrap but do NOT join the ring unless told to do so */
-private boolean isSurveyMode= Boolean.parseBoolean(System.getProperty("cassandra.write_survey", "false"));
+private boolean isSurveyMode = Boolean.parseBoolean(System.getProperty("cassandra.write_survey", "false"));
+/* true if node is rebuilding and receiving data */
+private final AtomicBoolean isRebuilding = new AtomicBoolean();
 
 /* when intialized as a client, we shouldn't write to the system keyspace. 
*/
 private boolean isClientMode;
@@ -1023,19 +1026,27 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 
 public void rebuild(String sourceDc)
 {
-logger.info("rebuild from dc: {}", sourceDc == null ? "(any dc)" : sourceDc);
-
-RangeStreamer streamer = new RangeStreamer(tokenMetadata, FBUtilities.getBroadcastAddress(), "Rebuild");
-streamer.addSourceFilter(new RangeStreamer.FailureDetectorSourceFilter(FailureDetector.instance));
-if (sourceDc != null)
-streamer.addSourceFilter(new RangeStreamer.SingleDatacenterFilter(DatabaseDescriptor.getEndpointSnitch(), sourceDc));
+// check on going rebuild
+if (!isRebuilding.compareAndSet(false, true))
+{
+throw new IllegalStateException("Node is still rebuilding. Check nodetool netstats.");
+}

-for (String keyspaceName : Schema.instance.getNonSystemKeyspaces())
-streamer.addRanges(keyspaceName, getLocalRanges(keyspaceName));
+logger.info("rebuild from dc: {}", sourceDc == null ? "(any dc)" : sourceDc);

 try
 {
-streamer.fetchAsync().get();
+RangeStreamer streamer = new RangeStreamer(tokenMetadata, FBUtilities.getBroadcastAddress(), "Rebuild");
+streamer.addSourceFilter(new RangeStreamer.FailureDetectorSourceFilter(FailureDetector.instance));
+if (sourceDc != null)
+streamer.addSourceFilter(new RangeStreamer.SingleDatacenterFilter(DatabaseDescriptor.getEndpointSnitch(), sourceDc));
+
+for (String keyspaceName : Schema.instance.getNonSystemKeyspaces())
+streamer.addRanges(keyspaceName, getLocalRanges(keyspaceName));
+
+StreamResultFuture resultFuture = streamer.fetchAsync();
+// wait for result
+resultFuture.get();
 }
 catch (InterruptedException e)
 {
@@ -1047,6 +1058,11 @@ public class StorageService extends 

[1/6] cassandra git commit: Make rebuild only run one at a time

2015-06-17 Thread yukim
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 4c1597011 -> 9966419db
  refs/heads/cassandra-2.2 8b021db7c -> dee675f1e
  refs/heads/trunk 4c4c4327a -> 43d21c384


Make rebuild only run one at a time

patch by yukim; reviewed by jmckenzie for CASSANDRA-9119


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9966419d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9966419d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9966419d

Branch: refs/heads/cassandra-2.1
Commit: 9966419dbda995421f41ccc769d3b89d63940c82
Parents: 4c15970
Author: Yuki Morishita yu...@apache.org
Authored: Wed Jun 3 14:44:11 2015 -0500
Committer: Yuki Morishita yu...@apache.org
Committed: Wed Jun 17 20:41:16 2015 -0500

--
 CHANGES.txt |  1 +
 .../cassandra/service/StorageService.java   | 36 ++--
 2 files changed, 27 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9966419d/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 8f3f9f0..1d72c9a 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -4,6 +4,7 @@ Merged from 2.0
  * ArrivalWindow should use primitives (CASSANDRA-9496)
  * Periodically submit background compaction tasks (CASSANDRA-9592)
  * Set HAS_MORE_PAGES flag to false when PagingState is null (CASSANDRA-9571)
+ * Make rebuild only run one at a time (CASSANDRA-9119)
 
 
 2.1.6

http://git-wip-us.apache.org/repos/asf/cassandra/blob/9966419d/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index 7c8e424..e063c63 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -50,6 +50,7 @@ import java.util.concurrent.Future;
 import java.util.concurrent.FutureTask;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.TimeoutException;
+import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicInteger;
 import java.util.concurrent.atomic.AtomicLong;
 
@@ -237,7 +238,9 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 private boolean isBootstrapMode;
 
 /* we bootstrap but do NOT join the ring unless told to do so */
-private boolean isSurveyMode= Boolean.parseBoolean(System.getProperty("cassandra.write_survey", "false"));
+private boolean isSurveyMode = Boolean.parseBoolean(System.getProperty("cassandra.write_survey", "false"));
+/* true if node is rebuilding and receiving data */
+private final AtomicBoolean isRebuilding = new AtomicBoolean();
 
 /* when intialized as a client, we shouldn't write to the system keyspace. 
*/
 private boolean isClientMode;
@@ -1023,19 +1026,27 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 
 public void rebuild(String sourceDc)
 {
-logger.info("rebuild from dc: {}", sourceDc == null ? "(any dc)" : sourceDc);
-
-RangeStreamer streamer = new RangeStreamer(tokenMetadata, FBUtilities.getBroadcastAddress(), "Rebuild");
-streamer.addSourceFilter(new RangeStreamer.FailureDetectorSourceFilter(FailureDetector.instance));
-if (sourceDc != null)
-streamer.addSourceFilter(new RangeStreamer.SingleDatacenterFilter(DatabaseDescriptor.getEndpointSnitch(), sourceDc));
+// check on going rebuild
+if (!isRebuilding.compareAndSet(false, true))
+{
+throw new IllegalStateException("Node is still rebuilding. Check nodetool netstats.");
+}

-for (String keyspaceName : Schema.instance.getNonSystemKeyspaces())
-streamer.addRanges(keyspaceName, getLocalRanges(keyspaceName));
+logger.info("rebuild from dc: {}", sourceDc == null ? "(any dc)" : sourceDc);

 try
 {
-streamer.fetchAsync().get();
+RangeStreamer streamer = new RangeStreamer(tokenMetadata, FBUtilities.getBroadcastAddress(), "Rebuild");
+streamer.addSourceFilter(new RangeStreamer.FailureDetectorSourceFilter(FailureDetector.instance));
+if (sourceDc != null)
+streamer.addSourceFilter(new RangeStreamer.SingleDatacenterFilter(DatabaseDescriptor.getEndpointSnitch(), sourceDc));
+
+for (String keyspaceName : Schema.instance.getNonSystemKeyspaces())
+streamer.addRanges(keyspaceName, getLocalRanges(keyspaceName));
+
+StreamResultFuture resultFuture = streamer.fetchAsync();
+// 
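The single-rebuild guard in the patch boils down to an AtomicBoolean compare-and-set: the first caller flips the flag and proceeds, any concurrent caller fails fast. The sketch below is a hypothetical standalone illustration of that pattern (the `RebuildGuard` class and `rebuild(Runnable)` signature are mine, not from the patch; the real code streams ranges instead of running a task):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class RebuildGuard
{
    // false = idle, true = a rebuild is currently in flight
    private final AtomicBoolean isRebuilding = new AtomicBoolean();

    /** Runs the task unless another rebuild is already running. */
    public void rebuild(Runnable streamTask)
    {
        // Atomically flip false -> true; a concurrent caller sees true and fails fast.
        if (!isRebuilding.compareAndSet(false, true))
            throw new IllegalStateException("Node is still rebuilding. Check nodetool netstats.");
        try
        {
            streamTask.run();
        }
        finally
        {
            isRebuilding.set(false); // always clear so a later rebuild can run
        }
    }
}
```

Note that clearing the flag in `finally` matters: without it, a failed rebuild would permanently block all future rebuild attempts.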

[4/6] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-06-17 Thread yukim
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dee675f1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dee675f1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dee675f1

Branch: refs/heads/trunk
Commit: dee675f1ef148b40351c365b6d42c39f081cb706
Parents: 8b021db 9966419
Author: Yuki Morishita yu...@apache.org
Authored: Wed Jun 17 20:48:40 2015 -0500
Committer: Yuki Morishita yu...@apache.org
Committed: Wed Jun 17 20:48:40 2015 -0500

--
 CHANGES.txt |  3 +-
 .../cassandra/service/StorageService.java   | 48 +---
 2 files changed, 34 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/dee675f1/CHANGES.txt
--
diff --cc CHANGES.txt
index c32596c,1d72c9a..3b16b6f
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,41 -1,13 +1,42 @@@
 -2.1.7
 +2.2
 + * Fix connection leak in CqlRecordWriter (CASSANDRA-9576)
 + * Mlockall before opening system sstables & remove boot_without_jna option 
(CASSANDRA-9573)
 + * Add functions to convert timeuuid to date or time, deprecate dateOf and 
unixTimestampOf (CASSANDRA-9229)
 + * Make sure we cancel non-compacting sstables from LifecycleTransaction 
(CASSANDRA-9566)
 + * Fix deprecated repair JMX API (CASSANDRA-9570)
++ * Add logback metrics (CASSANDRA-9378)
 +Merged from 2.1:
   * Fix memory leak in Ref due to ConcurrentLinkedQueue.remove() behaviour 
(CASSANDRA-9549)
++ * Make rebuild only run one at a time (CASSANDRA-9119)
  Merged from 2.0
   * ArrivalWindow should use primitives (CASSANDRA-9496)
   * Periodically submit background compaction tasks (CASSANDRA-9592)
   * Set HAS_MORE_PAGES flag to false when PagingState is null (CASSANDRA-9571)
-  * Add logback metrics (CASSANDRA-9378)
 - * Make rebuild only run one at a time (CASSANDRA-9119)
  
  
 -2.1.6
 +2.2.0-rc1
 + * Compressed commit log should measure compressed space used (CASSANDRA-9095)
 + * Fix comparison bug in CassandraRoleManager#collectRoles (CASSANDRA-9551)
 + * Add tinyint,smallint,time,date support for UDFs (CASSANDRA-9400)
 + * Deprecates SSTableSimpleWriter and SSTableSimpleUnsortedWriter 
(CASSANDRA-9546)
 + * Empty INITCOND treated as null in aggregate (CASSANDRA-9457)
 + * Remove use of Cell in Thrift MapReduce classes (CASSANDRA-8609)
 + * Integrate pre-release Java Driver 2.2-rc1, custom build (CASSANDRA-9493)
 + * Clean up gossiper logic for old versions (CASSANDRA-9370)
 + * Fix custom payload coding/decoding to match the spec (CASSANDRA-9515)
 + * ant test-all results incomplete when parsed (CASSANDRA-9463)
 + * Disallow frozen types in function arguments and return types for
 +   clarity (CASSANDRA-9411)
 + * Static Analysis to warn on unsafe use of Autocloseable instances 
(CASSANDRA-9431)
 + * Update commitlog archiving examples now that commitlog segments are
 +   not recycled (CASSANDRA-9350)
 + * Extend Transactional API to sstable lifecycle management (CASSANDRA-8568)
 + * (cqlsh) Add support for native protocol 4 (CASSANDRA-9399)
 + * Ensure that UDF and UDAs are keyspace-isolated (CASSANDRA-9409)
 + * Revert CASSANDRA-7807 (tracing completion client notifications) 
(CASSANDRA-9429)
 + * Add ability to stop compaction by ID (CASSANDRA-7207)
 + * Let CassandraVersion handle SNAPSHOT version (CASSANDRA-9438)
 +Merged from 2.1:
   * (cqlsh) Fix using COPY through SOURCE or -f (CASSANDRA-9083)
   * Fix occasional lack of `system` keyspace in schema tables (CASSANDRA-8487)
   * Use ProtocolError code instead of ServerError code for native protocol

http://git-wip-us.apache.org/repos/asf/cassandra/blob/dee675f1/src/java/org/apache/cassandra/service/StorageService.java
--
diff --cc src/java/org/apache/cassandra/service/StorageService.java
index 2dd56b5,e063c63..3edbe22
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@@ -50,7 -50,9 +50,8 @@@ import java.util.concurrent.Future
  import java.util.concurrent.FutureTask;
  import java.util.concurrent.TimeUnit;
  import java.util.concurrent.TimeoutException;
+ import java.util.concurrent.atomic.AtomicBoolean;
  import java.util.concurrent.atomic.AtomicInteger;
 -import java.util.concurrent.atomic.AtomicLong;
  
  import javax.management.JMX;
  import javax.management.MBeanServer;
@@@ -239,11 -235,15 +240,13 @@@ public class StorageService extends Not
  private InetAddress removingNode;
  
  /* Are we starting this node in bootstrap mode? */
 -private boolean isBootstrapMode;
 +private volatile boolean isBootstrapMode;
  
  /* we bootstrap but do NOT 

[jira] [Resolved] (CASSANDRA-9614) I am trying to execute queries using execute_concurrent() in python but I am getting the best throughput on a concurrency of 4 whereas according to the documentation

2015-06-17 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-9614.
---
   Resolution: Invalid
Fix Version/s: (was: 1.2.x)

 I am trying to execute queries using execute_concurrent() in python but I am 
 getting the best throughput on a concurrency of 4 whereas according to the 
 documentation there can be 100 concurrent requests. Why this kind of behavior?
 --

 Key: CASSANDRA-9614
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9614
 Project: Cassandra
  Issue Type: Test
  Components: Documentation & website
 Environment: Linux
Reporter: Kumar Saras





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: use short-circuiting ops

2015-06-17 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/trunk 72ab4b944 -> 4c4c4327a


use short-circuiting ops


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4c4c4327
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4c4c4327
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4c4c4327

Branch: refs/heads/trunk
Commit: 4c4c4327a2f7653b27dd89a9a1f1ae024d7c76be
Parents: 72ab4b9
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Wed Jun 17 22:23:23 2015 -0400
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Wed Jun 17 22:23:23 2015 -0400

--
 .../apache/cassandra/io/util/UnbufferedDataOutputStreamPlus.java   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4c4c4327/src/java/org/apache/cassandra/io/util/UnbufferedDataOutputStreamPlus.java
--
diff --git 
a/src/java/org/apache/cassandra/io/util/UnbufferedDataOutputStreamPlus.java 
b/src/java/org/apache/cassandra/io/util/UnbufferedDataOutputStreamPlus.java
index 087b888..8f2cee0 100644
--- a/src/java/org/apache/cassandra/io/util/UnbufferedDataOutputStreamPlus.java
+++ b/src/java/org/apache/cassandra/io/util/UnbufferedDataOutputStreamPlus.java
@@ -309,7 +309,7 @@ public abstract class UnbufferedDataOutputStreamPlus 
extends DataOutputStreamPlu
 for (int i = 0 ; i < charRunLength ; i++)
 {
 char ch = str.charAt(offset + i);
-if ((ch > 0) & (ch <= 127))
+if ((ch > 0) && (ch <= 127))
 {
 utfBytes[utfIndex++] = (byte) ch;
 }
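The commit swaps the non-short-circuiting `&` for `&&`: with `&&`, the right-hand operand is skipped entirely when the left is already false, whereas `&` always evaluates both sides. A minimal sketch (hypothetical class, not from the commit) that counts evaluations of the right operand:

```java
public class ShortCircuit
{
    public static int evals = 0;

    // Counts how often the right-hand operand is actually evaluated.
    public static boolean touch(boolean v)
    {
        evals++;
        return v;
    }

    public static void main(String[] args)
    {
        evals = 0;
        boolean a = false & touch(true);   // non-short-circuiting: right side still runs
        System.out.println(evals);         // prints "1"

        evals = 0;
        boolean b = false && touch(true);  // short-circuiting: right side skipped
        System.out.println(evals);         // prints "0"
    }
}
```

In a hot inner loop like `writeUTF`, skipping the second comparison whenever the first already fails is a small but free win, and `&&` is the idiomatic choice for boolean operands anyway.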



[jira] [Commented] (CASSANDRA-9563) Rename class for DATE type in Java driver

2015-06-17 Thread Olivier Michallat (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14590565#comment-14590565
 ] 

Olivier Michallat commented on CASSANDRA-9563:
--

It's merged now.

 Rename class for DATE type in Java driver
 -

 Key: CASSANDRA-9563
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9563
 Project: Cassandra
  Issue Type: Improvement
Reporter: Olivier Michallat
Assignee: Robert Stupp
Priority: Minor
 Fix For: 2.2.0 rc2


 An early preview of the Java driver 2.2 was provided for inclusion in 
 Cassandra 2.2.0-rc1. It uses a custom Java type to represent CQL type 
 {{DATE}}. Currently that Java type is called {{DateWithoutTime}}.
 We'd like to rename it to {{LocalDate}}. This would be a breaking change for 
 Cassandra, because that type is visible from UDF implementations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9549) Memory leak

2015-06-17 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-9549:
---
 Reviewer: Marcus Eriksson
Reproduced In: 2.1.6, 2.1.5  (was: 2.1.5, 2.1.6)

 Memory leak 
 

 Key: CASSANDRA-9549
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9549
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cassandra 2.1.5. 9 node cluster in EC2 (m1.large nodes, 
 2 cores 7.5G memory, 800G platter for cassandra data, root partition and 
 commit log are on SSD EBS with sufficient IOPS), 3 nodes/availability zone, 1 
 replica/zone
 JVM: /usr/java/jdk1.8.0_40/jre/bin/java 
 JVM Flags besides CP: -ea -javaagent:/usr/share/cassandra/lib/jamm-0.3.0.jar 
 -XX:+CMSClassUnloadingEnabled -XX:+UseThreadPriorities 
 -XX:ThreadPriorityPolicy=42 -Xms2G -Xmx2G -Xmn200M 
 -XX:+HeapDumpOnOutOfMemoryError -Xss256k -XX:StringTableSize=103 
 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled 
 -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=1 
 -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly 
 -XX:+UseTLAB -XX:CompileCommandFile=/etc/cassandra/conf/hotspot_compiler 
 -XX:CMSWaitDuration=1 -XX:+CMSParallelInitialMarkEnabled 
 -XX:+CMSEdenChunksRecordAlways -XX:CMSWaitDuration=1 -XX:+UseCondCardMark 
 -Djava.net.preferIPv4Stack=true -Dcom.sun.management.jmxremote.port=7199 
 -Dcom.sun.management.jmxremote.rmi.port=7199 
 -Dcom.sun.management.jmxremote.ssl=false 
 -Dcom.sun.management.jmxremote.authenticate=false 
 -Dlogback.configurationFile=logback.xml -Dcassandra.logdir=/var/log/cassandra 
 -Dcassandra.storagedir= -Dcassandra-pidfile=/var/run/cassandra/cassandra.pid 
 Kernel: Linux 2.6.32-504.16.2.el6.x86_64 #1 SMP x86_64 x86_64 x86_64 GNU/Linux
Reporter: Ivar Thorson
Assignee: Benedict
Priority: Critical
 Fix For: 2.1.x

 Attachments: c4_system.log, c7fromboot.zip, cassandra.yaml, 
 cpu-load.png, memoryuse.png, ref-java-errors.jpeg, suspect.png, two-loads.png


 We have been experiencing a severe memory leak with Cassandra 2.1.5 that, 
 over the period of a couple of days, eventually consumes all of the available 
 JVM heap space, putting the JVM into GC hell where it keeps trying CMS 
 collection but can't free up any heap space. This pattern happens for every 
 node in our cluster and is requiring rolling cassandra restarts just to keep 
 the cluster running. We have upgraded the cluster per Datastax docs from the 
 2.0 branch a couple of months ago and have been using the data from this 
 cluster for more than a year without problem.
 As the heap fills up with non-GC-able objects, the CPU/OS load average grows 
 along with it. Heap dumps reveal an increasing number of 
 java.util.concurrent.ConcurrentLinkedQueue$Node objects. We took heap dumps 
 over a 2 day period, and watched the number of Node objects go from 4M, to 
 19M, to 36M, and eventually about 65M objects before the node stops 
 responding. The screen capture of our heap dump is from the 19M measurement.
 Load on the cluster is minimal. We can see this effect even with only a 
 handful of writes per second. (See attachments for Opscenter snapshots during 
 very light loads and heavier loads). Even with only 5 reads a sec we see this 
 behavior.
 Log files show repeated errors in Ref.java:181 and Ref.java:279 and LEAK 
 detected messages:
 {code}
 ERROR [CompactionExecutor:557] 2015-06-01 18:27:36,978 Ref.java:279 - Error 
 when closing class 
 org.apache.cassandra.io.sstable.SSTableReader$InstanceTidier@1302301946:/data1/data/ourtablegoeshere-ka-1150
 java.util.concurrent.RejectedExecutionException: Task 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@32680b31 
 rejected from 
 org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor@573464d6[Terminated,
  pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 1644]
 {code}
 {code}
 ERROR [Reference-Reaper:1] 2015-06-01 18:27:37,083 Ref.java:181 - LEAK 
 DETECTED: a reference 
 (org.apache.cassandra.utils.concurrent.Ref$State@74b5df92) to class 
 org.apache.cassandra.io.sstable.SSTableReader$DescriptorTypeTidy@2054303604:/data2/data/ourtablegoeshere-ka-1151
  was not released before the reference was garbage collected
 {code}
 This might be related to [CASSANDRA-8723]?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8099) Refactor and modernize the storage engine

2015-06-17 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14589679#comment-14589679
 ] 

Branimir Lambov commented on CASSANDRA-8099:


The changes look good, I accept that clarifying the side effects and switching 
to Iterable is not something that needs to be taken care of as part of this 
ticket.

On the range tombstones doc, I'm sorry, I totally failed to explain what I had 
in mind. I was looking for a statement in {{### Dealing with tombstones and 
shadowed cells}} that says an open marker is immediately followed by the 
corresponding close marker. This is a simple and easy to check statement which 
is equivalent to having both "does not shadow its own data" (pt 1) and "there 
is at most one active tombstone at any point" (pt 4).

However, to clarify this I went and looked further into how tombstones work in 
the new code and after looking at it I do not think the merging code could work 
correctly. To make certain I wrote a [partition merge 
test|https://github.com/apache/cassandra/commit/36709f4f8f81125def076c91ac8dffae2fdf71b0].
 The test results are very broken, for example:
{code}
Merging
6=[34] 8 8 19=[99] =36 40 47 50[56] 66 67 68 72=[26] =73 89=[97] 97 
2=[66] 19 19 34=[58] =35 36 40[94] 41 42[26] =48 55=[35] =57 58 83 88 
5 8 19 31=[49] =31 37=[70] =44 46[79] 55 65 72[57] 85 92[45] 93 93 
results in
2=[66] 19 19=[99] =36 37=[70] 40[94] 41=[70] 41=[70] 44=[26] 44=[26] 
46[79] 55=[56] 55=[56] 66 67 68 72=[26] 72[57] 85 88 89=[97] 97 
java.lang.AssertionError: 40[94] follows another open marker 37=[70]
java.lang.AssertionError: Duplicate marker 41=[70]
java.lang.AssertionError: Deletion time mismatch for position 44 
expected:deletedAt=70, localDeletion=70 but was:deletedAt=26, 
localDeletion=26
{code}
{code}
Merging
4 6 13=[62] =26 34=[89] =34 47 48[99] 52 54=[37] =57 78[6] =83 85 
88=[34] 91 
0 20 33=[58] =33 37=[84] 40 52 57 77 77 88[14] 91 92[15] 96 
1 8 31 41[17] 43 62=[25] =67 85 87 88 92 97 
results in
0 1 4 6 8 13=[62] =26 31 33=[58] =33 34=[89] =34 37=[84] 40 41[17] 43 
47 48[99] 52 52 54=[37] =57 62=[25] =67 77 77 78[6] =83 87 88=[34] 91 
91 92 92[15] 96 97 
java.lang.AssertionError: 91 should be preceded by open marker, found 91
java.lang.AssertionError: Deletion time mismatch for open 88=[34] and close 
91 expected:deletedAt=34, localDeletion=34 but was:deletedAt=14, 
localDeletion=14
{code}
(where {{x\[z\]}} means tombstone open marker at {{x}} with time {{z}} and 
{{y}} stands for close marker at {{y}}).

This is also broken with Benedict's implicit close markers, and his patch is 
not sufficient to fix it. The test does not even go far enough, as it does not 
include rows that are within the span of a tombstone but newer (as far as I can 
tell such a row should expand into three Unfiltered's: a close marker, row with 
deletion time, and open marker).

Am I somehow completely misunderstanding or missing some assumptions about the 
way tombstone ranges are supposed to work? Is there something very wrong in the 
way I'm doing this test?
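The invariants Branimir's test checks — no open marker while another is still active, and a close marker carrying the same deletion time as the open it matches — can be sketched as a small validator over a simplified marker stream. The int encoding below is mine for illustration only (`+t` opens a tombstone with deletion time `t`, `-t` closes it, `0` is a plain row); it is not the actual Unfiltered representation:

```java
public class MarkerCheck
{
    /**
     * Validates a simplified range-tombstone marker stream:
     * at most one tombstone may be active at any point, and every
     * close marker must match the deletion time of the open it pairs with.
     */
    public static boolean valid(int[] stream)
    {
        int open = 0; // deletion time of the currently open tombstone, 0 = none
        for (int m : stream)
        {
            if (m > 0)
            {
                if (open != 0) return false;  // open marker follows another open marker
                open = m;
            }
            else if (m < 0)
            {
                if (open != -m) return false; // missing open, or deletion time mismatch
                open = 0;
            }
            // m == 0: plain row; does not affect the marker invariant
        }
        return open == 0; // no dangling open marker at end of stream
    }
}
```

The broken merge outputs quoted above ("follows another open marker", "Deletion time mismatch", close without preceding open) are exactly the three failure branches of such a checker.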


 Refactor and modernize the storage engine
 -

 Key: CASSANDRA-8099
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8099
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 3.0 beta 1

 Attachments: 8099-nit


 The current storage engine (which for this ticket I'll loosely define as the 
 code implementing the read/write path) is suffering from old age. One of the 
 main problem is that the only structure it deals with is the cell, which 
 completely ignores the more high level CQL structure that groups cell into 
 (CQL) rows.
 This leads to many inefficiencies, like the fact that during a read we have 
 to group cells multiple times (to count on the replica, then to count on the 
 coordinator, then to produce the CQL resultset) because we forget about the 
 grouping right away each time (so lots of useless cell name comparisons in 
 particular). But beyond inefficiencies, having to manually recreate the CQL 
 structure every time we need it for something is hindering new features and 
 makes the code more complex than it should be.
 Said storage engine also has tons of technical debt. To pick an example, the 
 fact that during range queries we update {{SliceQueryFilter.count}} is pretty 
 hacky and error prone. Or the overly complex ways {{AbstractQueryPager}} has 
 to go into to simply remove the last query result.
 So I want to bite the bullet and modernize this storage engine. I propose to 
 do 2 main things:
 # Make the storage engine more aware of the CQL structure. In practice, 
 instead of having partitions be a simple iterable map of cells, it should be 
 an iterable list of row (each being itself composed of 

[jira] [Comment Edited] (CASSANDRA-8535) java.lang.RuntimeException: Failed to rename XXX to YYY

2015-06-17 Thread Andreas Schnitzerling (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14589635#comment-14589635
 ] 

Andreas Schnitzerling edited comment on CASSANDRA-8535 at 6/17/15 11:12 AM:


On Windows 7, files and directories cannot be renamed while they're still open, 
so we need to close them first:
{code:title=src/java/org/apache/cassandra/io/sstable/SSTableWriter.java}
private Pair<Descriptor, StatsMetadata> close(FinishType type, long repairedAt)
{
Descriptor descriptor = this.descriptor;
if (type.isFinal)
{
dataFile.writeFullChecksum(descriptor);
writeMetadata(descriptor, metadataComponents);
// save the table of components
SSTable.appendTOC(descriptor, components);
+++ dataFile.close();
descriptor = rename(descriptor, components);
}
{code}
Can somebody check/test and maybe commit it? Thx.


was (Author: andie78):
On Windows 7, files and directories cannot be renamed, if they're still open, 
so we need to close them before:
{code:title=\src\java\org\apache\cassandra\io\sstable\SSTableWriter.java}
private Pair<Descriptor, StatsMetadata> close(FinishType type, long repairedAt)
{
Descriptor descriptor = this.descriptor;
if (type.isFinal)
{
dataFile.writeFullChecksum(descriptor);
writeMetadata(descriptor, metadataComponents);
// save the table of components
SSTable.appendTOC(descriptor, components);
+++ dataFile.close();
descriptor = rename(descriptor, components);
}
{code}
Can somebody check/test and maybe commit it? Thx.

 java.lang.RuntimeException: Failed to rename XXX to YYY
 ---

 Key: CASSANDRA-8535
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8535
 Project: Cassandra
  Issue Type: Bug
 Environment: Windows 2008 X64
Reporter: Leonid Shalupov
Assignee: Joshua McKenzie
  Labels: Windows
 Fix For: 2.1.5

 Attachments: 8535_v1.txt, 8535_v2.txt, 8535_v3.txt


 {code}
 java.lang.RuntimeException: Failed to rename 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-tmp-ka-5-Index.db
  to 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-ka-5-Index.db
   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:170) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:154) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.rename(SSTableWriter.java:569) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.rename(SSTableWriter.java:561) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.close(SSTableWriter.java:535) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.finish(SSTableWriter.java:470) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finishAndMaybeThrow(SSTableRewriter.java:349)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finish(SSTableRewriter.java:324)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finish(SSTableRewriter.java:304)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:200)
  ~[main/:na]
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:75)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:226)
  ~[main/:na]
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_45]
   at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 ~[na:1.7.0_45]
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_45]
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_45]
   at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
 Caused by: java.nio.file.FileSystemException: 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-tmp-ka-5-Index.db
  -> 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-ka-5-Index.db:
  The process cannot access the file because it 

[jira] [Assigned] (CASSANDRA-9606) this query is not supported in new version

2015-06-17 Thread Benjamin Lerer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer reassigned CASSANDRA-9606:
-

Assignee: Benjamin Lerer

 this query is not supported in new version
 --

 Key: CASSANDRA-9606
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9606
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: cassandra 2.1.6
 jdk 1.7.0_55
Reporter: zhaoyan
Assignee: Benjamin Lerer

 Background:
 1. create a table:
 {code}
 CREATE TABLE test (
 a int,
 b int,
 c int,
   d int,
 PRIMARY KEY (a, b, c)
 );
 {code}
 2. query by a=1 and b<6
 {code}
 select * from test where a=1 and b<6;
  a | b | c | d
 ---+---+---+---
  1 | 3 | 1 | 2
  1 | 3 | 2 | 2
  1 | 3 | 4 | 2
  1 | 3 | 5 | 2
  1 | 4 | 4 | 2
  1 | 5 | 5 | 2
 (6 rows)
 {code}
 3. query by page
 first page:
 {code}
 select * from test where a=1 and b<6 limit 2;
  a | b | c | d
 ---+---+---+---
  1 | 3 | 1 | 2
  1 | 3 | 2 | 2
 (2 rows)
 {code}
 second page:
 {code}
 select * from test where a=1 and b<6 and (b,c) > (3,2) limit 2;
  a | b | c | d
 ---+---+---+---
  1 | 3 | 4 | 2
  1 | 3 | 5 | 2
 (2 rows)
 {code}
 last page:
 {code}
 select * from test where a=1 and b<6 and (b,c) > (3,5) limit 2;
  a | b | c | d
 ---+---+---+---
  1 | 4 | 4 | 2
  1 | 5 | 5 | 2
 (2 rows)
 {code}
 question:
 this query by page is ok when cassandra 2.0.8.
 but is not supported in the latest version 2.1.6
 when execute:
 {code}
 select * from test where a=1 and b<6 and (b,c) > (3,2) limit 2;
 {code}
 get one error message:
 InvalidRequest: code=2200 [Invalid query] message="Column b cannot have 
 both tuple-notation inequalities and single-column inequalities: (b, c) > (3, 
 2)"
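The paging scheme in the report relies on the lexicographic ordering of clustering tuples: `(b,c) > (3,2)` means "b greater than 3, or b equal to 3 and c greater than 2". A plain-Java sketch of that semantics, using the report's data (class and method names are hypothetical, for illustration only, not Cassandra code):

```java
import java.util.ArrayList;
import java.util.List;

public class TuplePaging
{
    /** Lexicographic comparison used by CQL tuple notation: (b,c) > (pb,pc). */
    public static boolean after(int b, int c, int pb, int pc)
    {
        return b > pb || (b == pb && c > pc);
    }

    /** Next page: rows with b < 6 that sort after the paging state (pb,pc). */
    public static List<int[]> page(int[][] rows, int pb, int pc, int limit)
    {
        List<int[]> out = new ArrayList<>();
        for (int[] r : rows)
        {
            // r[0] = b, r[1] = c; rows are assumed already in clustering order
            if (r[0] < 6 && after(r[0], r[1], pb, pc) && out.size() < limit)
                out.add(r);
        }
        return out;
    }
}
```

With the six rows from the ticket, paging from (3,2) yields (3,4) and (3,5), and paging from (3,5) yields (4,4) and (5,5) — matching the 2.0.8 behavior the reporter expects.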



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8535) java.lang.RuntimeException: Failed to rename XXX to YYY

2015-06-17 Thread Andreas Schnitzerling (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14589635#comment-14589635
 ] 

Andreas Schnitzerling commented on CASSANDRA-8535:
--

On Windows 7, files and directories cannot be renamed while they're still open, 
so we need to close them first:
{code:title=\src\java\org\apache\cassandra\io\sstable\SSTableWriter.java}
private Pair<Descriptor, StatsMetadata> close(FinishType type, long repairedAt)
{
Descriptor descriptor = this.descriptor;
if (type.isFinal)
{
dataFile.writeFullChecksum(descriptor);
writeMetadata(descriptor, metadataComponents);
// save the table of components
SSTable.appendTOC(descriptor, components);
+++ dataFile.close();
descriptor = rename(descriptor, components);
}
{code}
Can somebody check/test and maybe commit it? Thx.
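The close-before-rename ordering the comment argues for can be sketched with plain NIO. This is a hypothetical illustration of the Windows constraint, not the actual SSTableWriter code (names like `CloseThenRename.finish` are mine):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class CloseThenRename
{
    /**
     * Writes data to a tmp file, closes the handle, then renames it into place.
     * On Windows the move would fail with FileSystemException ("being used by
     * another process") if the handle were still open at rename time.
     */
    public static void finish(Path tmp, Path finalPath, byte[] data) throws IOException
    {
        OutputStream out = Files.newOutputStream(tmp);
        try
        {
            out.write(data);
        }
        finally
        {
            out.close(); // must happen before the move on Windows
        }
        Files.move(tmp, finalPath, StandardCopyOption.ATOMIC_MOVE);
    }
}
```

This mirrors the proposed fix: the `dataFile.close()` added before `rename(descriptor, components)` releases the handle so the subsequent move can succeed on Windows.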

 java.lang.RuntimeException: Failed to rename XXX to YYY
 ---

 Key: CASSANDRA-8535
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8535
 Project: Cassandra
  Issue Type: Bug
 Environment: Windows 2008 X64
Reporter: Leonid Shalupov
Assignee: Joshua McKenzie
  Labels: Windows
 Fix For: 2.1.5

 Attachments: 8535_v1.txt, 8535_v2.txt, 8535_v3.txt


 {code}
 java.lang.RuntimeException: Failed to rename 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-tmp-ka-5-Index.db
  to 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-ka-5-Index.db
   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:170) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:154) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.rename(SSTableWriter.java:569) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.rename(SSTableWriter.java:561) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.close(SSTableWriter.java:535) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.finish(SSTableWriter.java:470) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finishAndMaybeThrow(SSTableRewriter.java:349)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finish(SSTableRewriter.java:324)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finish(SSTableRewriter.java:304)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:200)
  ~[main/:na]
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:75)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:226)
  ~[main/:na]
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_45]
   at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 ~[na:1.7.0_45]
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_45]
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_45]
   at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
 Caused by: java.nio.file.FileSystemException: 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-tmp-ka-5-Index.db
  -> 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-ka-5-Index.db:
  The process cannot access the file because it is being used by another 
 process.
   at 
 sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86) 
 ~[na:1.7.0_45]
   at 
 sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97) 
 ~[na:1.7.0_45]
   at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301) 
 ~[na:1.7.0_45]
   at 
 sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287) 
 ~[na:1.7.0_45]
   at java.nio.file.Files.move(Files.java:1345) ~[na:1.7.0_45]
   at 
 org.apache.cassandra.io.util.FileUtils.atomicMoveWithFallback(FileUtils.java:184)
  ~[main/:na]
   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:166) 
 ~[main/:na]
   ... 18 common frames omitted
 {code}



--
This message was sent by Atlassian 

[jira] [Commented] (CASSANDRA-9549) Memory leak

2015-06-17 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14589683#comment-14589683
 ] 

Marcus Eriksson commented on CASSANDRA-9549:


+1

 Memory leak 
 

 Key: CASSANDRA-9549
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9549
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cassandra 2.1.5. 9 node cluster in EC2 (m1.large nodes, 
 2 cores 7.5G memory, 800G platter for cassandra data, root partition and 
 commit log are on SSD EBS with sufficient IOPS), 3 nodes/availability zone, 1 
 replica/zone
 JVM: /usr/java/jdk1.8.0_40/jre/bin/java 
 JVM Flags besides CP: -ea -javaagent:/usr/share/cassandra/lib/jamm-0.3.0.jar 
 -XX:+CMSClassUnloadingEnabled -XX:+UseThreadPriorities 
 -XX:ThreadPriorityPolicy=42 -Xms2G -Xmx2G -Xmn200M 
 -XX:+HeapDumpOnOutOfMemoryError -Xss256k -XX:StringTableSize=103 
 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled 
 -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=1 
 -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly 
 -XX:+UseTLAB -XX:CompileCommandFile=/etc/cassandra/conf/hotspot_compiler 
 -XX:CMSWaitDuration=1 -XX:+CMSParallelInitialMarkEnabled 
 -XX:+CMSEdenChunksRecordAlways -XX:CMSWaitDuration=1 -XX:+UseCondCardMark 
 -Djava.net.preferIPv4Stack=true -Dcom.sun.management.jmxremote.port=7199 
 -Dcom.sun.management.jmxremote.rmi.port=7199 
 -Dcom.sun.management.jmxremote.ssl=false 
 -Dcom.sun.management.jmxremote.authenticate=false 
 -Dlogback.configurationFile=logback.xml -Dcassandra.logdir=/var/log/cassandra 
 -Dcassandra.storagedir= -Dcassandra-pidfile=/var/run/cassandra/cassandra.pid 
 Kernel: Linux 2.6.32-504.16.2.el6.x86_64 #1 SMP x86_64 x86_64 x86_64 GNU/Linux
Reporter: Ivar Thorson
Assignee: Benedict
Priority: Critical
 Fix For: 2.1.x

 Attachments: c4_system.log, c7fromboot.zip, cassandra.yaml, 
 cpu-load.png, memoryuse.png, ref-java-errors.jpeg, suspect.png, two-loads.png


 We have been experiencing a severe memory leak with Cassandra 2.1.5 that, 
 over the period of a couple of days, eventually consumes all of the available 
 JVM heap space, putting the JVM into GC hell where it keeps trying CMS 
 collection but can't free up any heap space. This pattern happens for every 
 node in our cluster and is requiring rolling cassandra restarts just to keep 
 the cluster running. We have upgraded the cluster per Datastax docs from the 
 2.0 branch a couple of months ago and have been using the data from this 
 cluster for more than a year without problem.
 As the heap fills up with non-GC-able objects, the CPU/OS load average grows 
 along with it. Heap dumps reveal an increasing number of 
 java.util.concurrent.ConcurrentLinkedQueue$Node objects. We took heap dumps 
 over a 2 day period, and watched the number of Node objects go from 4M, to 
 19M, to 36M, and eventually about 65M objects before the node stops 
 responding. The screen capture of our heap dump is from the 19M measurement.
 Load on the cluster is minimal. We can see this effect even with only a 
 handful of writes per second. (See attachments for Opscenter snapshots during 
 very light loads and heavier loads). Even with only 5 reads a sec we see this 
 behavior.
 Log files show repeated errors in Ref.java:181 and Ref.java:279 and LEAK 
 detected messages:
 {code}
 ERROR [CompactionExecutor:557] 2015-06-01 18:27:36,978 Ref.java:279 - Error 
 when closing class 
 org.apache.cassandra.io.sstable.SSTableReader$InstanceTidier@1302301946:/data1/data/ourtablegoeshere-ka-1150
 java.util.concurrent.RejectedExecutionException: Task 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@32680b31 
 rejected from 
 org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor@573464d6[Terminated,
  pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 1644]
 {code}
 {code}
 ERROR [Reference-Reaper:1] 2015-06-01 18:27:37,083 Ref.java:181 - LEAK 
 DETECTED: a reference 
 (org.apache.cassandra.utils.concurrent.Ref$State@74b5df92) to class 
 org.apache.cassandra.io.sstable.SSTableReader$DescriptorTypeTidy@2054303604:/data2/data/ourtablegoeshere-ka-1151
  was not released before the reference was garbage collected
 {code}
 This might be related to [CASSANDRA-8723]?
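For readers unfamiliar with these messages: the LEAK DETECTED error is emitted when a reference-counted resource is garbage collected without its reference having been released. A stripped-down sketch of the general pattern (hypothetical names; Cassandra's actual {{Ref}} additionally uses a phantom-reference reaper thread to detect unreleased references, which is omitted here) looks like:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Minimal reference-counting sketch in the spirit of
// org.apache.cassandra.utils.concurrent.Ref. Names and structure are
// illustrative only, not Cassandra's actual code.
final class CountedResource {
    private final AtomicInteger refs = new AtomicInteger(1);
    private final Runnable tidy;              // cleanup to run on last release
    private volatile boolean closed = false;

    CountedResource(Runnable tidy) { this.tidy = tidy; }

    // Take an additional reference; fails if the resource is already closed.
    void ref() {
        int prev = refs.getAndIncrement();
        if (prev <= 0) {
            refs.decrementAndGet();
            throw new IllegalStateException("already closed");
        }
    }

    // Release one reference; the final release runs the tidy action exactly once.
    // Releasing more times than ref() was called is the "double free" analogue
    // of the leak the reaper reports.
    void release() {
        int now = refs.decrementAndGet();
        if (now == 0) { closed = true; tidy.run(); }
        else if (now < 0) throw new IllegalStateException("released too many times");
    }

    boolean isClosed() { return closed; }
}
```

The errors quoted above are the mirror image of the double release: a counted reference was garbage collected while its count was still positive, so the tidy action never ran and the underlying resources leaked.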



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9591) Scrub (recover) sstables even when -Index.db is missing

2015-06-17 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14589479#comment-14589479
 ] 

Benedict commented on CASSANDRA-9591:
-

If it's impossible to have those values wired up, we can pass a flag into 
{{updateLiveSet()}} asking it not to build an interval tree in the case we're 
offline (which is detected by {{cfstore}} being {{null}} in the {{Tracker}}).


 Scrub (recover) sstables even when -Index.db is missing
 ---

 Key: CASSANDRA-9591
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9591
 Project: Cassandra
  Issue Type: Improvement
Reporter: mck
Assignee: mck
  Labels: sstablescrub
 Fix For: 2.0.x

 Attachments: 9591-2.0.txt


 Today SSTableReader needs at minimum 3 files to load an sstable:
  - -Data.db
  - -CompressionInfo.db 
  - -Index.db
 But during the scrub process the -Index.db file isn't actually necessary, 
 unless there's corruption in the -Data.db and we want to be able to skip over 
 corrupted rows. Given that there is still a fair chance that there's nothing 
 wrong with the -Data.db file and we're just missing the -Index.db file, this 
 patch addresses that situation.
 So the following patch makes it possible for the StandaloneScrubber 
 (sstablescrub) to recover sstables despite missing -Index.db files.
 This can happen from a catastrophic incident where data directories have been 
 lost and/or corrupted, or wiped and the backup not healthy. I'm aware that 
 normally one depends on replicas or snapshots to avoid such situations, but 
 such catastrophic incidents do occur in the wild.
 I have not tested this patch against normal c* operations and all the other 
 (more critical) ways SSTableReader is used. I'll happily do that and add the 
 needed unit tests if people see merit in accepting the patch.
 Otherwise the patch can live with the issue, in case anyone else needs it. 
 There's also a cassandra distribution bundled with the patch 
 [here|https://github.com/michaelsembwever/cassandra/releases/download/2.0.15-recover-sstables-without-indexdb/apache-cassandra-2.0.15-recover-sstables-without-indexdb.tar.gz]
  to make life a little easier for anyone finding themselves in such a bad 
 situation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9499) Introduce writeVInt method to DataOutputStreamPlus

2015-06-17 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14589450#comment-14589450
 ] 

Sylvain Lebresne commented on CASSANDRA-9499:
-

bq. breaking compatibility with persisted caches?

We don't persist cache contents; we only persist the keys that are cached and 
reload them on startup, so this is a non-issue.

More generally, I think we should just drop 
{{EncodedDataOutput}}/{{EncodedDataInput}} and make sure we don't forget to 
update the serialization code from 8099 to use the new methods introduced here 
(which shouldn't take very long).

 Introduce writeVInt method to DataOutputStreamPlus
 --

 Key: CASSANDRA-9499
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9499
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Ariel Weisberg
Priority: Minor
 Fix For: 3.0 beta 1


 CASSANDRA-8099 really could do with a writeVInt method, for both fixing 
 CASSANDRA-9498 but also efficiently encoding timestamp/deletion deltas. It 
 should be possible to make an especially efficient implementation against 
 BufferedDataOutputStreamPlus.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9499) Introduce writeVInt method to DataOutputStreamPlus

2015-06-17 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14589470#comment-14589470
 ] 

Benedict commented on CASSANDRA-9499:
-

I'm confused as to why we need 10 bytes? Pretty much by definition a 
continuation bit encoding needs 9 bytes to represent 8 bytes. Let's not use 
Google's implementation. It looks pretty rubbish. 

The reason they use 10 bytes is they cannot be bothered to realise the last 
byte does not need a continuation bit. They also use a *terrible* 
implementation for deciding how long it needs to be.

Here's a simple change which makes it 9 bytes, and easily optimised: the 
continuation bits are all shifted to the first byte, which effectively encodes 
the length in run-length encoding, i.e. the number of contiguous top-order bits 
that are set to 1: {{length = Integer.numberOfLeadingZeros(firstByte ^ 
(byte) -1)}}

This way we read the first byte, and if there are any more to read, we read a 
long, and quickly truncate.
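As a rough illustration of the scheme described above (a sketch only; the class and method names here are invented, and this is not the implementation that ultimately landed in Cassandra): the run of leading 1-bits in the first byte carries the length, and everything else is payload.

```java
public final class VIntSketch {
    // Encode an unsigned long. k leading 1-bits in the first byte announce k
    // additional bytes; a 0-bit terminates the prefix (omitted when k == 8,
    // since 0xff already implies the 9-byte maximum). Worst case: 9 bytes.
    static byte[] encode(long v) {
        int bits = 64 - Long.numberOfLeadingZeros(v | 1); // bits needed, >= 1
        int k = Math.min(8, (bits - 1) / 7);              // number of extra bytes
        byte[] out = new byte[k + 1];
        for (int i = k; i > 0; i--) {                     // low bytes go at the end
            out[i] = (byte) v;
            v >>>= 8;
        }
        int prefix = (0xff00 >>> k) & 0xff;               // k leading 1-bits
        out[0] = (byte) (prefix | (int) v);               // leftover high bits
        return out;
    }

    // Decode: recover the length from the run of leading 1-bits in the first
    // byte, then accumulate the payload bytes.
    static long decode(byte[] in) {
        int first = in[0] & 0xff;
        int k = Integer.numberOfLeadingZeros(first ^ 0xff) - 24; // leading 1-bits
        long v = (k == 8) ? 0 : first & ((1 << (7 - k)) - 1);    // first-byte payload
        for (int i = 1; i <= k; i++)
            v = (v << 8) | (in[i] & 0xff);
        return v;
    }
}
```

With this sketch, {{encode(-1L)}} occupies exactly 9 bytes, where a conventional 7-bit continuation-bit varint would need 10, and the decoder learns the full length from the first byte alone.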

 Introduce writeVInt method to DataOutputStreamPlus
 --

 Key: CASSANDRA-9499
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9499
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Ariel Weisberg
Priority: Minor
 Fix For: 3.0 beta 1


 CASSANDRA-8099 really could do with a writeVInt method, for both fixing 
 CASSANDRA-9498 but also efficiently encoding timestamp/deletion deltas. It 
 should be possible to make an especially efficient implementation against 
 BufferedDataOutputStreamPlus.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9607) Get high load after upgrading from 2.1.3 to cassandra 2.1.6

2015-06-17 Thread Study Hsueh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14589589#comment-14589589
 ] 

Study Hsueh commented on CASSANDRA-9607:


This issue causes all of the nodes to go down.

 Get high load after upgrading from 2.1.3 to cassandra 2.1.6
 ---

 Key: CASSANDRA-9607
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9607
 Project: Cassandra
  Issue Type: Bug
 Environment: OS: 
 CentOS 6 * 4
 Ubuntu 14.04 * 2
 JDK: Oracle JDK 7
Reporter: Study Hsueh
Priority: Critical
 Attachments: load.png


 After upgrading cassandra version from 2.1.3 to 2.1.6, the average load of my 
 cassandra cluster grows from 0.x~1.x to 3.x~6.x. 
 What kind of additional information should I provide for this problem?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9499) Introduce writeVInt method to DataOutputStreamPlus

2015-06-17 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14589470#comment-14589470
 ] 

Benedict edited comment on CASSANDRA-9499 at 6/17/15 8:47 AM:
--

I'm confused as to why we need 10 bytes? Pretty much by definition a 
continuation bit encoding needs 9 bytes to represent 8 bytes. Let's not use 
Google's implementation. It looks pretty rubbish. 

The reason they use 10 bytes is they cannot be bothered to realise the last 
byte does not need a continuation bit. They also use a *terrible* 
implementation for deciding how long it needs to be.

Here's a simple change which makes it 9 bytes, and easily optimised: the 
continuation bits are all shifted to the first byte, which effectively encodes 
the length in run-length encoding, i.e. the number of contiguous top-order bits 
that are set to 1: {{length = Integer.numberOfLeadingZeros(firstByte ^ 
0xff)}}

This way we read the first byte, and if there are any more to read, we read a 
long, and quickly truncate.


was (Author: benedict):
I'm confused as to why we need 10 bytes? Pretty much by definition a 
continuation bit encoding needs 9 bytes to represent 8 bytes. Let's not use 
Google's implementation. It looks pretty rubbish. 

The reason they use 10 bytes is they cannot be bothered to realise the last 
byte does not need a continuation bit. They also use a *terrible* 
implementation for deciding how long it needs to be.

Here's  a simple change which makes it 9 bytes, and easily optimised: the 
continuation bits are all shifted to the first byte, which effectively encodes 
the length in run-length encoding, i.e. the number of contiguous top order bits 
that are set to 1. i.e. {{length = Integer.numberOfLeadingZeros(firstByte ^ 
(byte) -1)}}

This way we read the first byte, and if there are any more to read, we read a 
long, and quickly truncate.

 Introduce writeVInt method to DataOutputStreamPlus
 --

 Key: CASSANDRA-9499
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9499
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Ariel Weisberg
Priority: Minor
 Fix For: 3.0 beta 1


 CASSANDRA-8099 really could do with a writeVInt method, for both fixing 
 CASSANDRA-9498 but also efficiently encoding timestamp/deletion deltas. It 
 should be possible to make an especially efficient implementation against 
 BufferedDataOutputStreamPlus.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8535) java.lang.RuntimeException: Failed to rename XXX to YYY

2015-06-17 Thread Maxim Podkolzine (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14589583#comment-14589583
 ] 

Maxim Podkolzine commented on CASSANDRA-8535:
-

I'm afraid this issue has to be reopened. I've got this in 2.1.6:

2015-06-15T17:31:44,797 [CompactionExecutor:1] ERROR 
o.a.c.service.CassandraDaemon - Exception in thread 
Thread[CompactionExecutor:1,1,main]
java.lang.RuntimeException: Failed to rename 
E:\Upsource\Upsource_2.0.262_new_cassandra\data\cassandra\data\upsource\content-f13ce210136211e59a87137398728adc\upsource-content-tmp-ka-18-Index.db
 to E:\Upsource\Upsource_2.0.262_new_cassandra\
data\cassandra\data\upsource\content-f13ce210136211e59a87137398728adc\upsource-content-ka-18-Index.db
at 
org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:170) 
~[apache-cassandra-2.1.6.jar:2.1.6]
at 
org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:154) 
~[apache-cassandra-2.1.6.jar:2.1.6]
at 
org.apache.cassandra.io.sstable.SSTableWriter.rename(SSTableWriter.java:541) 
~[apache-cassandra-2.1.6.jar:2.1.6]
at 
org.apache.cassandra.io.sstable.SSTableWriter.rename(SSTableWriter.java:533) 
~[apache-cassandra-2.1.6.jar:2.1.6]

I can provide the full stacktrace if needed, but it looks the same as in the 
issue:

Caused by: java.nio.file.FileSystemException: 
E:\Upsource\Upsource_2.0.262_new_cassandra\data\cassandra\data\upsource\uniqueidhistory_t-eef64d70136211e59a87137398728adc\upsource-uniqueidhistory_t-tmplink-ka-23-Index.db:
 The process cannot access the file because it is being used by another process.

I can add that we had been running a Cassandra build at revision 879691dd and it 
worked fine on Windows under the same load. So it looks like one of the recent 
changes in the 2.1 branch broke it again.
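{{FileUtils.renameWithConfirm}}, the method at the top of this stack trace, essentially performs a move and then confirms the destination exists. A minimal sketch of that idea (illustrative only, not Cassandra's actual code; on Windows the move itself fails with the "being used by another process" error quoted above when another process still holds the source file open):

```java
import java.io.IOException;
import java.nio.file.AtomicMoveNotSupportedException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

final class RenameSketch {
    // Rename with confirmation: try an atomic move first, fall back to a plain
    // replacing move where atomic moves are unsupported, and fail loudly if
    // the target did not appear.
    static void renameWithConfirm(Path from, Path to) throws IOException {
        try {
            Files.move(from, to, StandardCopyOption.ATOMIC_MOVE);
        } catch (AtomicMoveNotSupportedException e) {
            Files.move(from, to, StandardCopyOption.REPLACE_EXISTING);
        }
        if (!Files.exists(to))
            throw new IOException("Failed to rename " + from + " to " + to);
    }
}
```

On Windows, {{Files.move}} throws {{FileSystemException}} when another process (or an unreleased memory mapping in the same process) still has the source open, which is exactly the failure mode reported in this issue.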

 java.lang.RuntimeException: Failed to rename XXX to YYY
 ---

 Key: CASSANDRA-8535
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8535
 Project: Cassandra
  Issue Type: Bug
 Environment: Windows 2008 X64
Reporter: Leonid Shalupov
Assignee: Joshua McKenzie
  Labels: Windows
 Fix For: 2.1.5

 Attachments: 8535_v1.txt, 8535_v2.txt, 8535_v3.txt


 {code}
 java.lang.RuntimeException: Failed to rename 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-tmp-ka-5-Index.db
  to 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-ka-5-Index.db
   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:170) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:154) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.rename(SSTableWriter.java:569) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.rename(SSTableWriter.java:561) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.close(SSTableWriter.java:535) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.finish(SSTableWriter.java:470) 
 ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finishAndMaybeThrow(SSTableRewriter.java:349)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finish(SSTableRewriter.java:324)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableRewriter.finish(SSTableRewriter.java:304)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:200)
  ~[main/:na]
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:75)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:226)
  ~[main/:na]
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
 ~[na:1.7.0_45]
   at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 ~[na:1.7.0_45]
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_45]
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_45]
   at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
 Caused by: java.nio.file.FileSystemException: 
 build\test\cassandra\data;0\system\schema_keyspaces-b0f2235744583cdb9631c43e59ce3676\system-schema_keyspaces-tmp-ka-5-Index.db
  - 
 

[jira] [Commented] (CASSANDRA-5383) Windows 7 deleting/renaming files problem

2015-06-17 Thread Andreas Schnitzerling (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14589504#comment-14589504
 ] 

Andreas Schnitzerling commented on CASSANDRA-5383:
--

I put it manually into the source:
/src/org/apache/cassandra/io/util/RandomAccessReader.java
Put (overwrite) the finalizer there.

To avoid unnecessary logs after patching, you can un-comment the warning 
(logger.error) in /sstable/SSTableDeletingTask.java

To build you need maven (I use 3.0.5), ant (1.9.2) and an internet connection. 
I have to use ant on a dedicated server in front of the proxy since I couldn't 
get proxy settings working in ant.
After the modifications you can type {{ant jar}} if you just need the patched 
jar(s); {{ant release}} builds the compressed bin-package.
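Spelled out, the build steps above might look like the following (the proxy host and port are hypothetical placeholders; ant accepts proxy settings via the {{ANT_OPTS}} environment variable, which may avoid needing a dedicated build machine):

```shell
# Hypothetical proxy settings -- substitute your own host/port, or omit
# entirely if you have a direct internet connection.
export ANT_OPTS="-Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=8080 \
-Dhttps.proxyHost=proxy.example.com -Dhttps.proxyPort=8080"

ant jar      # builds just the patched jar(s)
ant release  # builds the compressed bin-package
```

This is a configuration sketch, not a tested recipe; it assumes a checked-out Cassandra source tree in the current directory.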

 Windows 7 deleting/renaming files problem
 -

 Key: CASSANDRA-5383
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5383
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Affects Versions: 2.0 beta 1
Reporter: Ryan McGuire
Assignee: Marcus Eriksson
  Labels: qa-resolved
 Fix For: 2.0.2

 Attachments: 
 0001-CASSANDRA-5383-cant-move-a-file-on-top-of-another-fi.patch, 
 0001-CASSANDRA-5383-v2.patch, 
 0001-use-Java7-apis-for-deleting-and-moving-files-and-cre.patch, 
 5383-v3.patch, 5383_patch_v2_system.log, cant_move_file_patch.log, 
 test_log.5383.patch_v2.log.txt, v2+cant_move_file_patch.log


 Two unit tests are failing on Windows 7 due to errors in renaming/deleting 
 files:
 org.apache.cassandra.db.ColumnFamilyStoreTest: 
 {code}
 [junit] Testsuite: org.apache.cassandra.db.ColumnFamilyStoreTest
 [junit] Tests run: 27, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 
 13.904 sec
 [junit] 
 [junit] - Standard Error -
 [junit] ERROR 13:06:46,058 Unable to delete 
 build\test\cassandra\data\Keyspace1\Indexed2\Keyspace1-Indexed2.birthdate_index-ja-1-Data.db
  (it will be removed on server restart; we'll also retry after GC)
 [junit] ERROR 13:06:48,508 Fatal exception in thread 
 Thread[NonPeriodicTasks:1,5,main]
 [junit] java.lang.RuntimeException: Tried to hard link to file that does 
 not exist 
 build\test\cassandra\data\Keyspace1\Standard1\Keyspace1-Standard1-ja-7-Statistics.db
 [junit]   at 
 org.apache.cassandra.io.util.FileUtils.createHardLink(FileUtils.java:72)
 [junit]   at 
 org.apache.cassandra.io.sstable.SSTableReader.createLinks(SSTableReader.java:1057)
 [junit]   at 
 org.apache.cassandra.db.DataTracker$1.run(DataTracker.java:168)
 [junit]   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
 [junit]   at 
 java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
 [junit]   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
 [junit]   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
 [junit]   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)
 [junit]   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
 [junit]   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
 [junit]   at java.lang.Thread.run(Thread.java:662)
 [junit] -  ---
 [junit] Testcase: 
 testSliceByNamesCommandOldMetatada(org.apache.cassandra.db.ColumnFamilyStoreTest):
   Caused an ERROR
 [junit] Failed to rename 
 build\test\cassandra\data\Keyspace1\Standard1\Keyspace1-Standard1-ja-6-Statistics.db-tmp
  to 
 build\test\cassandra\data\Keyspace1\Standard1\Keyspace1-Standard1-ja-6-Statistics.db
 [junit] java.lang.RuntimeException: Failed to rename 
 build\test\cassandra\data\Keyspace1\Standard1\Keyspace1-Standard1-ja-6-Statistics.db-tmp
  to 
 build\test\cassandra\data\Keyspace1\Standard1\Keyspace1-Standard1-ja-6-Statistics.db
 [junit]   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:133)
 [junit]   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:122)
 [junit]   at 
 org.apache.cassandra.db.compaction.LeveledManifest.mutateLevel(LeveledManifest.java:575)
 [junit]   at 
 org.apache.cassandra.db.ColumnFamilyStore.loadNewSSTables(ColumnFamilyStore.java:589)
 [junit]   at 
 org.apache.cassandra.db.ColumnFamilyStoreTest.testSliceByNamesCommandOldMetatada(ColumnFamilyStoreTest.java:885)
 [junit] 
 [junit] 
 [junit] Testcase: 
 testRemoveUnifinishedCompactionLeftovers(org.apache.cassandra.db.ColumnFamilyStoreTest):
 Caused an ERROR
 [junit] java.io.IOException: Failed to delete 
 

[jira] [Created] (CASSANDRA-9607) Get high load after upgrading from 2.1.3 to cassandra 2.1.6

2015-06-17 Thread Study Hsueh (JIRA)
Study Hsueh created CASSANDRA-9607:
--

 Summary: Get high load after upgrading from 2.1.3 to cassandra 
2.1.6
 Key: CASSANDRA-9607
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9607
 Project: Cassandra
  Issue Type: Bug
 Environment: OS: 
CentOS 6 * 4
Ubuntu 14.04 * 2

JDK: Oracle JDK 7
Reporter: Study Hsueh
Priority: Critical
 Attachments: load.png

After upgrading cassandra version from 2.1.3 to 2.1.6, the average load of my 
cassandra cluster grows from 0.x~1.x to 3.x~6.x. 

What kind of additional information should I provide for this problem?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9607) Get high load after upgrading from 2.1.3 to cassandra 2.1.6

2015-06-17 Thread Study Hsueh (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14589589#comment-14589589
 ] 

Study Hsueh edited comment on CASSANDRA-9607 at 6/17/15 10:27 AM:
--

This issue causes all of the nodes to go down. I will attach logs later, after I 
downgrade to 2.1.3...


was (Author: study):
This issues cause all of nodes downs.

 Get high load after upgrading from 2.1.3 to cassandra 2.1.6
 ---

 Key: CASSANDRA-9607
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9607
 Project: Cassandra
  Issue Type: Bug
 Environment: OS: 
 CentOS 6 * 4
 Ubuntu 14.04 * 2
 JDK: Oracle JDK 7
Reporter: Study Hsueh
Priority: Critical
 Attachments: load.png


 After upgrading cassandra version from 2.1.3 to 2.1.6, the average load of my 
 cassandra cluster grows from 0.x~1.x to 3.x~6.x. 
 What kind of additional information should I provide for this problem?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9549) Memory leak

2015-06-17 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14589537#comment-14589537
 ] 

Benedict commented on CASSANDRA-9549:
-

I've added a regression test to the branch. Could I get a reviewer please, and 
can we ship this soon?

 Memory leak 
 

 Key: CASSANDRA-9549
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9549
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cassandra 2.1.5. 9 node cluster in EC2 (m1.large nodes, 
 2 cores 7.5G memory, 800G platter for cassandra data, root partition and 
 commit log are on SSD EBS with sufficient IOPS), 3 nodes/availability zone, 1 
 replica/zone
 JVM: /usr/java/jdk1.8.0_40/jre/bin/java 
 JVM Flags besides CP: -ea -javaagent:/usr/share/cassandra/lib/jamm-0.3.0.jar 
 -XX:+CMSClassUnloadingEnabled -XX:+UseThreadPriorities 
 -XX:ThreadPriorityPolicy=42 -Xms2G -Xmx2G -Xmn200M 
 -XX:+HeapDumpOnOutOfMemoryError -Xss256k -XX:StringTableSize=103 
 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled 
 -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=1 
 -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly 
 -XX:+UseTLAB -XX:CompileCommandFile=/etc/cassandra/conf/hotspot_compiler 
 -XX:CMSWaitDuration=1 -XX:+CMSParallelInitialMarkEnabled 
 -XX:+CMSEdenChunksRecordAlways -XX:CMSWaitDuration=1 -XX:+UseCondCardMark 
 -Djava.net.preferIPv4Stack=true -Dcom.sun.management.jmxremote.port=7199 
 -Dcom.sun.management.jmxremote.rmi.port=7199 
 -Dcom.sun.management.jmxremote.ssl=false 
 -Dcom.sun.management.jmxremote.authenticate=false 
 -Dlogback.configurationFile=logback.xml -Dcassandra.logdir=/var/log/cassandra 
 -Dcassandra.storagedir= -Dcassandra-pidfile=/var/run/cassandra/cassandra.pid 
 Kernel: Linux 2.6.32-504.16.2.el6.x86_64 #1 SMP x86_64 x86_64 x86_64 GNU/Linux
Reporter: Ivar Thorson
Assignee: Benedict
Priority: Critical
 Fix For: 2.1.x

 Attachments: c4_system.log, c7fromboot.zip, cassandra.yaml, 
 cpu-load.png, memoryuse.png, ref-java-errors.jpeg, suspect.png, two-loads.png


 We have been experiencing a severe memory leak with Cassandra 2.1.5 that, 
 over the period of a couple of days, eventually consumes all of the available 
 JVM heap space, putting the JVM into GC hell where it keeps trying CMS 
 collection but can't free up any heap space. This pattern happens for every 
 node in our cluster and is requiring rolling cassandra restarts just to keep 
 the cluster running. We have upgraded the cluster per Datastax docs from the 
 2.0 branch a couple of months ago and have been using the data from this 
 cluster for more than a year without problem.
 As the heap fills up with non-GC-able objects, the CPU/OS load average grows 
 along with it. Heap dumps reveal an increasing number of 
 java.util.concurrent.ConcurrentLinkedQueue$Node objects. We took heap dumps 
 over a 2 day period, and watched the number of Node objects go from 4M, to 
 19M, to 36M, and eventually about 65M objects before the node stops 
 responding. The screen capture of our heap dump is from the 19M measurement.
 Load on the cluster is minimal. We can see this effect even with only a 
 handful of writes per second. (See attachments for Opscenter snapshots during 
 very light loads and heavier loads). Even with only 5 reads a sec we see this 
 behavior.
 Log files show repeated errors in Ref.java:181 and Ref.java:279 and LEAK 
 detected messages:
 {code}
 ERROR [CompactionExecutor:557] 2015-06-01 18:27:36,978 Ref.java:279 - Error 
 when closing class 
 org.apache.cassandra.io.sstable.SSTableReader$InstanceTidier@1302301946:/data1/data/ourtablegoeshere-ka-1150
 java.util.concurrent.RejectedExecutionException: Task 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@32680b31 
 rejected from 
 org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor@573464d6[Terminated,
  pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 1644]
 {code}
 {code}
 ERROR [Reference-Reaper:1] 2015-06-01 18:27:37,083 Ref.java:181 - LEAK 
 DETECTED: a reference 
 (org.apache.cassandra.utils.concurrent.Ref$State@74b5df92) to class 
 org.apache.cassandra.io.sstable.SSTableReader$DescriptorTypeTidy@2054303604:/data2/data/ourtablegoeshere-ka-1151
  was not released before the reference was garbage collected
 {code}
 This might be related to [CASSANDRA-8723]?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7020) Incorrect result of query WHERE token(key) < -9223372036854775808 when using Murmur3Partitioner

2015-06-17 Thread Anton (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14589575#comment-14589575
 ] 

Anton commented on CASSANDRA-7020:
--

The same when equal:
{code}
cqlsh:test1> select * from test where token(key) = -9223372036854775808;

 key | value
-+--
   5 |   ee
  10 |j
   1 | 
   8 | 
   2 |  bbb
   4 |   dd
   7 | 
   6 |  fff
   9 | 
   3 |c
{code}
Too bad.

 Incorrect result of query WHERE token(key) < -9223372036854775808 when using 
 Murmur3Partitioner
 ---

 Key: CASSANDRA-7020
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7020
 Project: Cassandra
  Issue Type: Bug
 Environment: cassandra 2.0.6-snapshot 
Reporter: Piotr Kołaczkowski
Assignee: Marko Denda

 {noformat}
 cqlsh:test1> select * from test where token(key) < -9223372036854775807;
 (0 rows)
 cqlsh:test1> select * from test where token(key) < -9223372036854775808;
  key | value
 -+--
5 |   ee
   10 |j
1 | 
8 | 
2 |  bbb
4 |   dd
7 | 
6 |  fff
9 | 
3 |c
 {noformat}
 Expected: empty result.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9606) this query is not supported in new version

2015-06-17 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14589723#comment-14589723
 ] 

Benjamin Lerer edited comment on CASSANDRA-9606 at 6/17/15 12:55 PM:
-

The query should be invalid in 2.0. 

Cassandra does not allow mixing multi-column restrictions and single-column 
restrictions on the same column. What it allows is:
{{select * from test where a=1 and (b) < (6) and (b,c) > (3,2) limit 2;}}, which 
will give you exactly what you want.

Be aware that mixing multi and single column restrictions has only been 
properly fixed in 2.0.15 and 2.1.5 by CASSANDRA-8613.

If what you want to do is paging, I advise you to check whether the latest 
version of the driver you use supports manual paging, like the Java driver 
does: 



was (Author: blerer):
The query should be invalid in 2.0. 

We do not allow multi-column restrictions and single-column restrictions on the 
same column. What we allow is:
{{select * from test where a=1 and (b) < (6) and (b,c) > (3,2) limit 2;}}. Which 
will give you exactly what you want.

Be aware that mixing multi and single column restrictions has only been 
properly fixed in 2.0.15 and 2.1.5 by CASSANDRA-8613.

If what you want to do is paging, I advise you to check if the latest version 
of the driver that you use is not supporting manual paging like the java 
driver: 
http://datastax.github.io/java-driver/2.0.10/features/paging/#manual-paging


 this query is not supported in new version
 --

 Key: CASSANDRA-9606
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9606
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: cassandra 2.1.6
 jdk 1.7.0_55
Reporter: zhaoyan
Assignee: Benjamin Lerer

 Background:
 1. create a table:
 {code}
 CREATE TABLE test (
 a int,
 b int,
 c int,
   d int,
 PRIMARY KEY (a, b, c)
 );
 {code}
 2. query by a=1 and b<6
 {code}
 select * from test where a=1 and b<6;
  a | b | c | d
 ---+---+---+---
  1 | 3 | 1 | 2
  1 | 3 | 2 | 2
  1 | 3 | 4 | 2
  1 | 3 | 5 | 2
  1 | 4 | 4 | 2
  1 | 5 | 5 | 2
 (6 rows)
 {code}
 3. query by page
 first page:
 {code}
 select * from test where a=1 and b<6 limit 2;
  a | b | c | d
 ---+---+---+---
  1 | 3 | 1 | 2
  1 | 3 | 2 | 2
 (2 rows)
 {code}
 second page:
 {code}
 select * from test where a=1 and b<6 and (b,c) > (3,2) limit 2;
  a | b | c | d
 ---+---+---+---
  1 | 3 | 4 | 2
  1 | 3 | 5 | 2
 (2 rows)
 {code}
 last page:
 {code}
 select * from test where a=1 and b<6 and (b,c) > (3,5) limit 2;
  a | b | c | d
 ---+---+---+---
  1 | 4 | 4 | 2
  1 | 5 | 5 | 2
 (2 rows)
 {code}
 question:
 this query by page worked in cassandra 2.0.8,
 but is not supported in the latest version 2.1.6.
 when execute:
 {code}
 select * from test where a=1 and b<6 and (b,c) > (3,2) limit 2;
 {code}
 get one error message:
 InvalidRequest: code=2200 [Invalid query] message="Column b cannot have 
 both tuple-notation inequalities and single-column inequalities: (b, c) > (3, 
 2)"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9606) this query is not supported in new version

2015-06-17 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14589723#comment-14589723
 ] 

Benjamin Lerer commented on CASSANDRA-9606:
---

The query should be invalid in 2.0. 

We do not allow multi-column restrictions and single-column restrictions on the 
same column. What we allow is:
{{select * from test where a=1 and (b)(6) and (b,c)  (3,2) limit 2;}}. Which 
will give you exactly what you want.

Be aware that mixing multi and single column restrictions has only been 
properly fixed in 2.0.15 and 2.1.5 by CASSANDRA-8613.

If what you want to do is paging, I advise you to check if the latest version 
of the driver that you use is not supporting manual paging like the java 
driver: 
http://datastax.github.io/java-driver/2.0.10/features/paging/#manual-paging
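The workaround above (restricting with {{(b) < (6)}} plus a tuple relation on {{(b, c)}}) amounts to paging by lexicographic comparison of the clustering columns. A minimal Python sketch of that predicate over the ticket's sample data (illustrative only, not driver code):

```python
# Rows as (a, b, c, d); (b, c) is the clustering key within partition a.
rows = [
    (1, 3, 1, 2), (1, 3, 2, 2), (1, 3, 4, 2),
    (1, 3, 5, 2), (1, 4, 4, 2), (1, 5, 5, 2),
]

def page(rows, last=None, limit=2):
    """Return the next page: a=1, b<6, and (b, c) lexicographically
    greater than the last-seen clustering key, if any."""
    match = [r for r in rows
             if r[0] == 1 and r[1] < 6
             and (last is None or (r[1], r[2]) > last)]
    return match[:limit]

p1 = page(rows)                               # first page
p2 = page(rows, last=(p1[-1][1], p1[-1][2]))  # resume after (3, 2)
p3 = page(rows, last=(p2[-1][1], p2[-1][2]))  # resume after (3, 5)
```

Each call resumes strictly after the last clustering key of the previous page; the drivers' manual paging support automates the same resumption via a paging state.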




[jira] [Commented] (CASSANDRA-9606) this query is not supported in new version

2015-06-17 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14589727#comment-14589727
 ] 

Sylvain Lebresne commented on CASSANDRA-9606:
-

{{(b) < (6)}} should be absolutely equivalent to {{b < 6}}, or it's a bug.



[jira] [Commented] (CASSANDRA-8099) Refactor and modernize the storage engine

2015-06-17 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14589721#comment-14589721
 ] 

Sylvain Lebresne commented on CASSANDRA-8099:
-

I'll look more closely at your test and fix any brokenness: it does seem the 
results are not what they are supposed to be.

For the record however, I'll note that it's not true that an open marker is 
immediately followed by the corresponding close marker, there can be some rows 
between an open and a close marker. However, the guarantee that iterators 
should provide is that those rows between an open and close marker are not 
deleted by the range tombstone (this doesn't make the tests result above any 
more right, but wanted to clarify).

 Refactor and modernize the storage engine
 -

 Key: CASSANDRA-8099
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8099
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 3.0 beta 1

 Attachments: 8099-nit


 The current storage engine (which for this ticket I'll loosely define as the 
 code implementing the read/write path) is suffering from old age. One of the 
 main problems is that the only structure it deals with is the cell, which 
 completely ignores the higher-level CQL structure that groups cells into 
 (CQL) rows.
 This leads to many inefficiencies, like the fact that during a read we have 
 to group cells multiple times (to count on the replica, then to count on the 
 coordinator, then to produce the CQL resultset) because we forget about the 
 grouping right away each time (so lots of useless cell name comparisons in 
 particular). But inefficiencies aside, having to manually recreate the CQL 
 structure every time we need it for something is hindering new features and 
 makes the code more complex than it should be.
 Said storage engine also has tons of technical debt. To pick an example, the 
 fact that during range queries we update {{SliceQueryFilter.count}} is pretty 
 hacky and error prone. So are the overly complex lengths {{AbstractQueryPager}} 
 has to go to simply to remove the last query result.
 So I want to bite the bullet and modernize this storage engine. I propose to 
 do 2 main things:
 # Make the storage engine more aware of the CQL structure. In practice, 
 instead of having partitions be a simple iterable map of cells, they should be 
 an iterable list of rows (each being itself composed of per-column cells, 
 though obviously not exactly the same kind of cell we have today).
 # Make the engine more iterative. What I mean here is that in the read path, 
 we end up reading all cells into memory (we put them in a ColumnFamily 
 object), but there is really no reason to. If instead we were working with 
 iterators all the way through, we could get to a point where we're basically 
 transferring data from disk to the network, and we should be able to reduce 
 GC substantially.
 Please note that such a refactor should provide some performance improvements 
 right off the bat, but that is not its primary goal. Its primary goal is 
 to simplify the storage engine and add abstractions that are better suited to 
 further optimizations.
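A toy generator pipeline (Python, purely illustrative; the names are made up) of point 2 above: grouping a cell stream into rows lazily instead of materializing everything first.

```python
def read_cells():
    # Toy "sstable": (row_key, column, value) cells in clustering order.
    yield from [("k1", "a", 1), ("k1", "b", 2), ("k2", "a", 3)]

def group_rows(cells):
    """Group a cell stream into (key, {column: value}) rows lazily,
    holding at most one row in memory at a time."""
    row_key, row = None, {}
    for key, col, val in cells:
        if key != row_key:
            if row_key is not None:
                yield row_key, row
            row_key, row = key, {}
        row[col] = val
    if row_key is not None:
        yield row_key, row

# Consuming the pipeline streams rows; nothing is buffered beyond one row.
rows = list(group_rows(read_cells()))
```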





[jira] [Commented] (CASSANDRA-8099) Refactor and modernize the storage engine

2015-06-17 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14589744#comment-14589744
 ] 

Branimir Lambov commented on CASSANDRA-8099:


bq. it's not true that an open marker is immediately followed by the 
corresponding close marker, there can be some rows between an open and a close 
marker. However, the guarantee that iterators should provide is that those rows 
between an open and close marker are not deleted by the range tombstone.

It would be great to have this clarification in the doc. This is a point that I 
was missing; I will post a new version of the test that takes this into account.



[jira] [Commented] (CASSANDRA-9503) Update CQL doc reflecting current keywords

2015-06-17 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14589871#comment-14589871
 ] 

Tyler Hobbs commented on CASSANDRA-9503:


bq. Do we need to update that directly, or is it part of the release process?

Once we commit the changes to the textile file, I can regenerate the site 
documentation.  It's in a separate ASF repository.

 Update CQL doc reflecting current keywords
 --

 Key: CASSANDRA-9503
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9503
 Project: Cassandra
  Issue Type: Bug
  Components: Documentation & website
Reporter: Adam Holmberg
Assignee: Adam Holmberg
Priority: Trivial
 Fix For: 2.2.x

 Attachments: cql_keywords.txt


 The table in doc/cql3/CQL.textile#appendixA is outdated.





[jira] [Commented] (CASSANDRA-9532) Provide access to select statement's real column definitions

2015-06-17 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14589790#comment-14589790
 ] 

Benjamin Lerer commented on CASSANDRA-9532:
---

In my opinion, we should encapsulate the column definitions as part of 
{{SelectionColumnMapping}}. It makes sense to keep them together and will 
simplify the {{Selection}} code. In 2.2 and trunk we will then be able to 
populate {{SelectionColumnMapping}} via 
{{SelectorFactories.createFactoriesAndCollectColumnDefinitions}}.

For {{SelectStatement.getSelection()}} and 
{{SelectStatement.getRestrictions()}}, the javadoc should specify why they are 
public. I have a bad habit of removing unused methods.

I think the method {{SelectionColumnMapping.emptyMapping()}} has a 
confusing name. I expected it to create an immutable empty 
{{SelectionColumnMapping}} (like {{Collections.emptyList()}}).
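As a sketch of the encapsulation idea (hypothetical Python, not the actual {{SelectionColumnMapping}} API): keeping the per-selectable mappings and the distinct underlying column definitions together in one object.

```python
from types import MappingProxyType

class SelectionColumnMapping:
    """Toy sketch: a selectable -> columns mapping that also tracks the
    distinct column definitions it covers (names are illustrative)."""
    def __init__(self):
        self._mappings = {}   # selectable name -> tuple of column names
        self._columns = []    # distinct underlying columns, in order
    def add(self, selectable, columns):
        self._mappings[selectable] = tuple(columns)
        for c in columns:
            if c not in self._columns:
                self._columns.append(c)
        return self  # allow chaining
    @property
    def columns(self):
        return tuple(self._columns)
    def as_dict(self):
        # Expose an immutable view, in the spirit of Collections.emptyList().
        return MappingProxyType(dict(self._mappings))

m = (SelectionColumnMapping()
     .add("ks.function1(col1)", ["col1"])
     .add("col2", ["col2"]))
```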



 Provide access to select statement's real column definitions
 

 Key: CASSANDRA-9532
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9532
 Project: Cassandra
  Issue Type: Improvement
Reporter: mck
Assignee: Sam Tunnicliffe
 Fix For: 3.x, 2.1.x, 2.0.x, 2.2.x

 Attachments: 9532-2.0-v2.txt, 9532-2.1-v2.txt, 9532-2.2-v2.txt, 
 9532-trunk-v2.txt, cassandra-2.0-9532.txt, cassandra-2.1-9532.txt, 
 cassandra-2.2-9532.txt, trunk-9532.txt


 Currently there is no way to get access to the real ColumnDefinitions being 
 used in a SelectStatement.
 This information is there in
 {{selectStatement.selection.columns}} but is private.
 Giving public access would make it possible for third-party implementations 
 of a {{QueryHandler}} to work accurately with the real columns being queried, 
 without having to work around column aliases (or cases where the rawSelectors 
 don't map directly to ColumnDefinitions, e.g. functions in 
 Selection.fromSelectors(..)), which is what one has to do today when going 
 through ResultSet.metadata.names.
 This issue provides a very minimal patch to provide access to the already 
 final and immutable fields.





[jira] [Updated] (CASSANDRA-9460) NullPointerException Creating Digest

2015-06-17 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-9460:
---
Reviewer: Benjamin Lerer  (was: Tyler Hobbs)

Assigning [~blerer] to review.

bq. Do you think we need additional dtests or is consistency_test.py enough?

I think consistency_test.py is probably good enough, but since we know there 
can be a problem with race conditions, it might be good to make the test loop a 
few times instead of doing a single pass.

 NullPointerException Creating Digest
 

 Key: CASSANDRA-9460
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9460
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tyler Hobbs
Assignee: Stefania
 Fix For: 2.1.x

 Attachments: node2.log


 In the {{consistency_test.TestConsistency.short_read_test}} dtest against 
 cassandra-2.1, the following error occurred:
 {noformat}
 ERROR [ReadRepairStage:3] 2015-05-22 16:35:25,034 CassandraDaemon.java:223 - 
 Exception in thread Thread[ReadRepairStage:3,5,main]
 java.lang.NullPointerException: null
 at 
 org.apache.cassandra.db.ColumnFamily.updateDigest(ColumnFamily.java:390) 
 ~[main/:na]
 at org.apache.cassandra.db.ColumnFamily.digest(ColumnFamily.java:383) 
 ~[main/:na]
 at 
 org.apache.cassandra.service.RowDigestResolver.resolve(RowDigestResolver.java:84)
  ~[main/:na]
 at 
 org.apache.cassandra.service.RowDigestResolver.resolve(RowDigestResolver.java:28)
  ~[main/:na]
 at 
 org.apache.cassandra.service.ReadCallback$AsyncRepairRunner.run(ReadCallback.java:176)
  ~[main/:na]
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_80]
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  ~[na:1.7.0_80]
 at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_80]
 {noformat}
 From a glance at the code in the stacktrace, it looks like there was a null 
 cell in the ColumnFamily that we were creating a digest of.  This error is 
 probably particular to short reads.
 Here's the failing test: 
 http://cassci.datastax.com/job/cassandra-2.1_dtest/lastCompletedBuild/testReport/consistency_test/TestConsistency/short_read_test/.
   I've attached the logs for the node with the error.
 We saw this issue against 2.1, but the problem may also exist with 2.0 and/or 
 2.2.





[jira] [Updated] (CASSANDRA-9503) Update CQL doc reflecting current keywords

2015-06-17 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-9503:
---
Reviewer: Benjamin Lerer  (was: Adam Holmberg)
Assignee: Adam Holmberg  (was: Tyler Hobbs)

Assigning to [~blerer] for review.



[jira] [Commented] (CASSANDRA-9563) Rename class for DATE type in Java driver

2015-06-17 Thread Olivier Michallat (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14589804#comment-14589804
 ] 

Olivier Michallat commented on CASSANDRA-9563:
--

We have a patch ready for JAVA-810, let us know when you need a new version of 
the driver.
I agree that this has to go in 2.2rc2, could you change the fix version?

 Rename class for DATE type in Java driver
 -

 Key: CASSANDRA-9563
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9563
 Project: Cassandra
  Issue Type: Improvement
Reporter: Olivier Michallat
Assignee: Robert Stupp
Priority: Minor
 Fix For: 2.2.x


 An early preview of the Java driver 2.2 was provided for inclusion in 
 Cassandra 2.2.0-rc1. It uses a custom Java type to represent CQL type 
 {{DATE}}. Currently that Java type is called {{DateWithoutTime}}.
 We'd like to rename it to {{LocalDate}}. This would be a breaking change for 
 Cassandra, because that type is visible from UDF implementations.





[jira] [Updated] (CASSANDRA-9563) Rename class for DATE type in Java driver

2015-06-17 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-9563:

Fix Version/s: (was: 2.2.x)
   2.2.0 rc2



[jira] [Commented] (CASSANDRA-9499) Introduce writeVInt method to DataOutputStreamPlus

2015-06-17 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14589904#comment-14589904
 ] 

Ariel Weisberg commented on CASSANDRA-9499:
---

Doesn't shifting the continuation bits to the first byte penalize the range of 
1-byte values? Supposedly that is the most common case, so having 1- or 2-byte 
values with as much range as possible is a big advantage. Comments in the 
Google code make it seem like it's all about 1-byte values and that is the 
thing to optimize for.

In practice, if you only have to process one or two bytes, the current 
implementation won't have to loop that many times.

I don't understand how you can use a variable number of top-order bits in the 
first byte to indicate the number of bytes. How can you tell the difference 
between length bits and value bits? The way the existing code that doesn't 
use length extension works (and is simple) is that it drops the remaining bits 
of the first byte. That is something we want to get away from, I thought.


 Introduce writeVInt method to DataOutputStreamPlus
 --

 Key: CASSANDRA-9499
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9499
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Ariel Weisberg
Priority: Minor
 Fix For: 3.0 beta 1


 CASSANDRA-8099 really could do with a writeVInt method, for both fixing 
 CASSANDRA-9498 but also efficiently encoding timestamp/deletion deltas. It 
 should be possible to make an especially efficient implementation against 
 BufferedDataOutputStreamPlus.





[jira] [Commented] (CASSANDRA-9563) Rename class for DATE type in Java driver

2015-06-17 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14589887#comment-14589887
 ] 

Robert Stupp commented on CASSANDRA-9563:
-

Just give me a hint when it's merged into the driver's 2.2 branch. I can 
quickly build a snapshot of the driver and integrate it into the branch/patch.



[jira] [Commented] (CASSANDRA-9499) Introduce writeVInt method to DataOutputStreamPlus

2015-06-17 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14589920#comment-14589920
 ] 

Aleksey Yeschenko commented on CASSANDRA-9499:
--

I was thinking that we'd just implement what UTF-8 does, except without the 
self-synchronization semantics, which we don't really care about (so using the 
full range of the subsequent bytes, instead of using up their first 2 bits 
with the '10' marker).

The way you tell the difference between length bits and value bits in the 
first byte is by separating them with a mandatory 0 bit.
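A sketch of the scheme as I read this comment (my interpretation, not necessarily the final Cassandra encoding): the run of leading 1-bits in the first byte gives the number of extra bytes, a mandatory 0 bit terminates that length prefix, and the extra bytes use their full 8-bit range.

```python
def encode_vint(value):
    """Encode a non-negative int < 2**56. The count of leading 1-bits
    in the first byte is the number of extra bytes; a 0 bit ends the
    length prefix; the remaining bits of the first byte plus the full
    extra bytes (no '10' continuation markers) carry the value."""
    assert 0 <= value < 1 << 56
    extra = 0
    while value >= 1 << (7 + 7 * extra):   # capacity with `extra` bytes
        extra += 1
    prefix = (0xFF << (8 - extra)) & 0xFF  # `extra` leading 1s, then a 0
    first = prefix | (value >> (8 * extra))
    low = value & ((1 << (8 * extra)) - 1)
    return bytes([first]) + low.to_bytes(extra, "big")

def decode_vint(buf):
    """Return (value, bytes consumed) for a buffer starting at a vint."""
    first, extra = buf[0], 0
    while first & (0x80 >> extra):         # count leading 1-bits
        extra += 1
    value = first & (0x7F >> extra)        # value bits below the 0 terminator
    for b in buf[1:1 + extra]:
        value = (value << 8) | b
    return value, 1 + extra
```

Note the trade-off Ariel raises still holds in this sketch: each extra byte costs one bit of first-byte range, so 1-byte values top out at 127 and 2-byte values at 16383.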



[jira] [Updated] (CASSANDRA-9605) Error message when creating a KS replicated to a non-existent DC

2015-06-17 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-9605:
--
Assignee: Stefania

 Error message when creating a KS replicated to a non-existent DC
 

 Key: CASSANDRA-9605
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9605
 Project: Cassandra
  Issue Type: Bug
Reporter: Sebastian Estevez
Assignee: Stefania
Priority: Minor
 Fix For: 3.x


 One of the most common mistakes I see with beginners is they mix up the data 
 center configuration when using network topology strategy because they are 
 copying from some tutorial or sample code or have recently changed snitches.
 This should not be legal:
 {code}
 create KEYSPACE test1 WITH replication = {'class': 
 'NetworkTopologyStrategy', 'doesnotexist': 1};
 cqlsh> desc KEYSPACE test1;
 CREATE KEYSPACE test1 WITH replication = {'class': 'NetworkTopologyStrategy', 
 'doesnotexist': '1'} AND durable_writes = true;
 {code}
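The validation being asked for could look roughly like this (a Python sketch with hypothetical names; the real implementation would hook into keyspace creation and the snitch's view of datacenters):

```python
def validate_replication(opts, known_dcs):
    """Reject NetworkTopologyStrategy options that name unknown
    datacenters (sketch of the check the ticket requests)."""
    if opts.get("class") != "NetworkTopologyStrategy":
        return
    unknown = [dc for dc in opts if dc != "class" and dc not in known_dcs]
    if unknown:
        raise ValueError("Unrecognized datacenters: " + ", ".join(unknown))
```

With a cluster whose only datacenter is {{dc1}}, the ticket's example would then fail at CREATE KEYSPACE time instead of silently producing an unusable keyspace.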





[jira] [Updated] (CASSANDRA-9605) Error message when creating a KS replicated to a non-existent DC

2015-06-17 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-9605:
--
Issue Type: New Feature  (was: Bug)



[jira] [Updated] (CASSANDRA-9605) Error message when creating a KS replicated to a non-existent DC

2015-06-17 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-9605:
--
Fix Version/s: 3.x



[jira] [Updated] (CASSANDRA-9609) org.apache.cassandra.db.marshal.DateType compares dates with negative timestamp incorrectly

2015-06-17 Thread Alexander Troshanin (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Troshanin updated CASSANDRA-9609:
---
Description: 
We got issues in our application.
We have a column family with column names as dates.
When we try to search these columns with a criterion like
{code}
// pseudocode
colName < '01.01.2015'
{code}
then we don't receive columns whose names are earlier than 01.01.1970, i.e. 
those with a negative timestamp.

After a small investigation we found out that the DateType class probably has 
a bug in its compareTo method.
Here is a very small example that shows the incorrect behavior of DateType:

{code}
import org.apache.cassandra.db.marshal.DateType;
import org.apache.cassandra.db.marshal.LongType;
import org.apache.cassandra.serializers.LongSerializer;
import org.apache.cassandra.serializers.TimestampSerializer;

import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class TestCassandraCompare {

static TimestampSerializer dateSerializer = TimestampSerializer.instance;
static LongSerializer longSerializer = LongSerializer.instance;
static DateType dateType = DateType.instance;
static LongType longType = LongType.instance;

public static void main(String[] args) {

Locale.setDefault(Locale.ENGLISH);
TimeZone.setDefault(TimeZone.getTimeZone("GMT0"));

// check longs
doLongsCheck(10, 20);
doLongsCheck(20, 10);
doLongsCheck(-10, -20);
doLongsCheck(-20, -10);
doLongsCheck(-10, 20);
doLongsCheck(20, -10);
doLongsCheck(10, -20);
doLongsCheck(-20, 10);

// check dates
doDatesCheck(new Date(10),  new Date(20),   1);
doDatesCheck(new Date(20),  new Date(10),   2);
doDatesCheck(new Date(-10), new Date(-20),  3);
doDatesCheck(new Date(-20), new Date(-10),  4);
doDatesCheck(new Date(-10), new Date(20),   5);
doDatesCheck(new Date(20),  new Date(-10),  6);
doDatesCheck(new Date(10),  new Date(-20),  7);
doDatesCheck(new Date(-20), new Date(10),   8);

}

private static void doLongsCheck(long l1, long l2) {
int cassandraCompare = longType.compare(longSerializer.serialize(l1), 
longSerializer.serialize(l2));
int javaCoreCompare = Long.compare(l1, l2);
if (cassandraCompare != javaCoreCompare) {
System.err.println("got incorrect result from LongType compare method." +
        "\n\tlong1: " + l1 +
        "\n\tlong2: " + l2 +
        "\n\tcassandraCompare: " + cassandraCompare +
        "\n\tjavaCoreCompare: " + javaCoreCompare
);
}
}

private static void doDatesCheck(Date d1, Date d2, int testNum) {
int cassandraCompare = dateType.compare(dateSerializer.serialize(d1), 
dateSerializer.serialize(d2));
int javaCoreCompare = d1.compareTo(d2);
if (cassandraCompare != javaCoreCompare) {
System.err.println("[" + testNum + "] got incorrect result from " +
        "DateType compare method." +
        "\n\tdate1: " + d1 +
        "\n\tdate2: " + d2 +
        "\n\tcassandraCompare: " + cassandraCompare +
        "\n\tjavaCoreCompare: " + javaCoreCompare
);
}
}

}
{code}

If you run the code you will see the following output:
{code}
[5]got incorrect result from DateType compare method.
date1: Wed Dec 31 23:58:20 GMT 1969
date2: Thu Jan 01 00:03:20 GMT 1970
cassandraCompare: 1
javaCoreCompare: -1
[6]got incorrect result from DateType compare method.
date1: Thu Jan 01 00:03:20 GMT 1970
date2: Wed Dec 31 23:58:20 GMT 1969
cassandraCompare: -1
javaCoreCompare: 1
[7]got incorrect result from DateType compare method.
date1: Thu Jan 01 00:01:40 GMT 1970
date2: Wed Dec 31 23:56:40 GMT 1969
cassandraCompare: -1
javaCoreCompare: 1
[8]got incorrect result from DateType compare method.
date1: Wed Dec 31 23:56:40 GMT 1969
date2: Thu Jan 01 00:01:40 GMT 1970
cassandraCompare: 1
javaCoreCompare: -1
{code}
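The failures above are consistent with comparing the big-endian two's-complement timestamp bytes as unsigned data, which inverts the order whenever exactly one operand is negative. A small Python sketch of that effect (illustrative, not the actual DateType code):

```python
import struct

def unsigned_byte_compare(a: bytes, b: bytes) -> int:
    """Lexicographic unsigned byte comparison (memcmp-style)."""
    return (a > b) - (a < b)

def signed_long_compare(x: int, y: int) -> int:
    return (x > y) - (x < y)

pos = struct.pack(">q", 200_000)   # timestamp after the epoch
neg = struct.pack(">q", -100_000)  # timestamp before the epoch

# Unsigned comparison of the two's-complement encoding puts the
# negative (pre-1970) timestamp AFTER the positive one, because its
# leading byte is 0xFF...
cmp_bytes = unsigned_byte_compare(neg, pos)
# ...while signed comparison of the values gives the expected order.
cmp_values = signed_long_compare(-100_000, 200_000)
```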

  was:
We got issues in our application.
We have a column family with column names as dates.
When we try to search these columns with a criterion like
{code}
// pseudocode
colName < '01.01.2015'
{code}
then we don't receive columns whose names are earlier than 01.01.1970, i.e. 
those with a negative timestamp.

After a small investigation we found out that the DateType class probably has 
a bug in its compareTo method.
Here is a very small example that shows the incorrect behavior of DateType:

{code}
import com.netflix.astyanax.serializers.DateSerializer;
import com.netflix.astyanax.serializers.LongSerializer;
import org.apache.cassandra.db.marshal.DateType;
import org.apache.cassandra.db.marshal.LongType;


[jira] [Created] (CASSANDRA-9608) Support Java 9

2015-06-17 Thread Robert Stupp (JIRA)
Robert Stupp created CASSANDRA-9608:
---

 Summary: Support Java 9
 Key: CASSANDRA-9608
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9608
 Project: Cassandra
  Issue Type: Task
Reporter: Robert Stupp
Priority: Minor


This ticket is intended to group all issues found to support Java 9 in the 
future.

From what I've found out so far:
* Maven dependency {{com.sun:tools:jar:0}} via cobertura cannot be resolved. It 
can be easily solved using this patch:
{code}
-<dependency groupId="net.sourceforge.cobertura" artifactId="cobertura"/>
+<dependency groupId="net.sourceforge.cobertura" artifactId="cobertura">
+  <exclusion groupId="com.sun" artifactId="tools"/>
+</dependency>
{code}
* Another issue is that {{sun.misc.Unsafe}} no longer contains the methods 
{{monitorEnter}} + {{monitorExit}}. These methods are used by 
{{o.a.c.utils.concurrent.Locks}} which is only used by 
{{o.a.c.db.AtomicBTreeColumns}}.

I don't intend to start working on this yet, since Java 9 is still at too early 
a stage of development.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9583) test-compression could run multiple unit tests in parallel like test

2015-06-17 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14589889#comment-14589889
 ] 

Ariel Weisberg commented on CASSANDRA-9583:
---

The tests passed. If you +1 the last commit we will be in business.

 test-compression could run multiple unit tests in parallel like test
 

 Key: CASSANDRA-9583
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9583
 Project: Cassandra
  Issue Type: Test
Reporter: Ariel Weisberg
Assignee: Ariel Weisberg





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9499) Introduce writeVInt method to DataOutputStreamPlus

2015-06-17 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14589935#comment-14589935
 ] 

Benedict commented on CASSANDRA-9499:
-

bq. Doesn't shifting the continuation bits to the first byte penalize the range 
of 1-byte values? 

No. You have to have a single continuation bit in either encoding. They're 
literally equivalent, just with their bit positions swapped around; in the 
single-byte case they are exactly identical.

Run-length encoding means you count the number of contiguous set (or unset - 
unset would actually be cleaner) bits at the top; the first bit that isn't (or 
is) set tells you how many more bytes you need to read. I.e. if the first bit 
is unset, you're done, and the remaining 7 bits are the value. If all 8 bits 
are set, we need to read a full long.

This encoding gives us pretty ideal behaviour: a cheap fast path for the 
single-byte representation, followed by consistent behaviour across all the 
remaining possible values. It also lets us easily avoid wasting a whole byte 
for the final bit of a long.

This is equivalent to Aleksey's characterisation.
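A minimal sketch of the run-length scheme described above (my own reconstruction, not Cassandra's actual {{VIntCoding}}; class and method names are hypothetical): the count of leading set bits in the first byte says how many more bytes follow, and an all-ones first byte (0xff) means a full 8-byte long follows, so the final bit of a long costs no extra byte.

```java
public class VIntSketch {

    // 1..9 bytes: each extra byte buys 7 more payload bits, up to a full long.
    static int sizeInBytes(long value) {
        int magnitude = Long.numberOfLeadingZeros(value | 1);
        return 9 - ((magnitude - 1) / 7);
    }

    static byte[] encode(long value) {
        int size = sizeInBytes(value);
        byte[] out = new byte[size];
        int extra = size - 1;
        if (extra == 8) {
            out[0] = (byte) 0xff; // all 8 continuation bits set: full long follows
            for (int i = 0; i < 8; i++)
                out[1 + i] = (byte) (value >>> (8 * (7 - i)));
            return out;
        }
        for (int i = extra; i >= 1; i--) { // low bytes of the value, big-endian
            out[i] = (byte) value;
            value >>>= 8;
        }
        // 'extra' leading set bits, a zero bit, then the top bits of the value
        out[0] = (byte) (value | (~(0xff >>> extra) & 0xff));
        return out;
    }

    static long decode(byte[] in) {
        // count of leading ones in the first byte = number of extra bytes
        int extra = Integer.numberOfLeadingZeros(~in[0] & 0xff) - 24;
        long value = in[0] & (0xff >>> extra); // payload bits of the first byte
        for (int i = 1; i <= extra; i++)
            value = (value << 8) | (in[i] & 0xff);
        return value;
    }

    public static void main(String[] args) {
        for (long v : new long[] { 0, 127, 128, 16383, 16384, 1L << 62, Long.MAX_VALUE }) {
            if (decode(encode(v)) != v)
                throw new AssertionError("roundtrip failed for " + v);
        }
        System.out.println("ok");
    }
}
```

A value up to 127 encodes in one byte whose top bit is clear, so the common case is a single read with no shifting.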

 Introduce writeVInt method to DataOutputStreamPlus
 --

 Key: CASSANDRA-9499
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9499
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Ariel Weisberg
Priority: Minor
 Fix For: 3.0 beta 1


 CASSANDRA-8099 really could do with a writeVInt method, for both fixing 
 CASSANDRA-9498 but also efficiently encoding timestamp/deletion deltas. It 
 should be possible to make an especially efficient implementation against 
 BufferedDataOutputStreamPlus.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9549) Memory leak

2015-06-17 Thread Ivar Thorson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14589997#comment-14589997
 ] 

Ivar Thorson commented on CASSANDRA-9549:
-

We patched our 2.1.6 cluster on Wednesday and let it run for a day to let 
things accumulate. Looking at CPU activity and heap space for the last day 
suggests that the memory leak seems to have been fixed by the patch. Awesome 
work!

 Memory leak 
 

 Key: CASSANDRA-9549
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9549
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cassandra 2.1.5. 9 node cluster in EC2 (m1.large nodes, 
 2 cores 7.5G memory, 800G platter for cassandra data, root partition and 
 commit log are on SSD EBS with sufficient IOPS), 3 nodes/availablity zone, 1 
 replica/zone
 JVM: /usr/java/jdk1.8.0_40/jre/bin/java 
 JVM Flags besides CP: -ea -javaagent:/usr/share/cassandra/lib/jamm-0.3.0.jar 
 -XX:+CMSClassUnloadingEnabled -XX:+UseThreadPriorities 
 -XX:ThreadPriorityPolicy=42 -Xms2G -Xmx2G -Xmn200M 
 -XX:+HeapDumpOnOutOfMemoryError -Xss256k -XX:StringTableSize=103 
 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled 
 -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=1 
 -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly 
 -XX:+UseTLAB -XX:CompileCommandFile=/etc/cassandra/conf/hotspot_compiler 
 -XX:CMSWaitDuration=1 -XX:+CMSParallelInitialMarkEnabled 
 -XX:+CMSEdenChunksRecordAlways -XX:CMSWaitDuration=1 -XX:+UseCondCardMark 
 -Djava.net.preferIPv4Stack=true -Dcom.sun.management.jmxremote.port=7199 
 -Dcom.sun.management.jmxremote.rmi.port=7199 
 -Dcom.sun.management.jmxremote.ssl=false 
 -Dcom.sun.management.jmxremote.authenticate=false 
 -Dlogback.configurationFile=logback.xml -Dcassandra.logdir=/var/log/cassandra 
 -Dcassandra.storagedir= -Dcassandra-pidfile=/var/run/cassandra/cassandra.pid 
 Kernel: Linux 2.6.32-504.16.2.el6.x86_64 #1 SMP x86_64 x86_64 x86_64 GNU/Linux
Reporter: Ivar Thorson
Assignee: Benedict
Priority: Critical
 Fix For: 2.1.7

 Attachments: c4_system.log, c7fromboot.zip, cassandra.yaml, 
 cpu-load.png, memoryuse.png, ref-java-errors.jpeg, suspect.png, two-loads.png


 We have been experiencing a severe memory leak with Cassandra 2.1.5 that, 
 over the period of a couple of days, eventually consumes all of the available 
 JVM heap space, putting the JVM into GC hell where it keeps trying CMS 
 collection but can't free up any heap space. This pattern happens for every 
 node in our cluster and is requiring rolling cassandra restarts just to keep 
 the cluster running. We have upgraded the cluster per Datastax docs from the 
 2.0 branch a couple of months ago and have been using the data from this 
 cluster for more than a year without problem.
 As the heap fills up with non-GC-able objects, the CPU/OS load average grows 
 along with it. Heap dumps reveal an increasing number of 
 java.util.concurrent.ConcurrentLinkedQueue$Node objects. We took heap dumps 
 over a 2 day period, and watched the number of Node objects go from 4M, to 
 19M, to 36M, and eventually about 65M objects before the node stops 
 responding. The screen capture of our heap dump is from the 19M measurement.
 Load on the cluster is minimal. We can see this effect even with only a 
 handful of writes per second. (See attachments for Opscenter snapshots during 
 very light loads and heavier loads). Even with only 5 reads a sec we see this 
 behavior.
 Log files show repeated errors in Ref.java:181 and Ref.java:279 and LEAK 
 detected messages:
 {code}
 ERROR [CompactionExecutor:557] 2015-06-01 18:27:36,978 Ref.java:279 - Error 
 when closing class 
 org.apache.cassandra.io.sstable.SSTableReader$InstanceTidier@1302301946:/data1/data/ourtablegoeshere-ka-1150
 java.util.concurrent.RejectedExecutionException: Task 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@32680b31 
 rejected from 
 org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor@573464d6[Terminated,
  pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 1644]
 {code}
 {code}
 ERROR [Reference-Reaper:1] 2015-06-01 18:27:37,083 Ref.java:181 - LEAK 
 DETECTED: a reference 
 (org.apache.cassandra.utils.concurrent.Ref$State@74b5df92) to class 
 org.apache.cassandra.io.sstable.SSTableReader$DescriptorTypeTidy@2054303604:/data2/data/ourtablegoeshere-ka-1151
  was not released before the reference was garbage collected
 {code}
 This might be related to [CASSANDRA-8723]?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9610) Increased response time with cassandra 2.0.9 from 1.2.19

2015-06-17 Thread Maitrayee (JIRA)
Maitrayee created CASSANDRA-9610:


 Summary: Increased response time with cassandra 2.0.9 from 1.2.19
 Key: CASSANDRA-9610
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9610
 Project: Cassandra
  Issue Type: Bug
Reporter: Maitrayee


I was using Cassandra 1.2.19 and recently upgraded to 2.0.9. Queries with a 
secondary index were completing much faster in 1.2.19.

Validated this with tracing on via cqlsh.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9609) org.apache.cassandra.db.marshal.DateType compares dates with negative timestamp incorrectly

2015-06-17 Thread Alexander Troshanin (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Troshanin updated CASSANDRA-9609:
---
Description: 
We hit issues in our application.
We have a column family with column names as dates.
When we try to search these columns with a criterion like
{code}
// pseudocode
colName < '01.01.2015'
{code}
we don't receive columns whose names are less than 01.01.1970, i.e. those with
a negative timestamp.

After a little research we found that the DateType class probably has a bug in
its compareTo method.
Here is a very small example that shows the incorrect behaviour of DateType:

{code}
import com.netflix.astyanax.serializers.DateSerializer;
import com.netflix.astyanax.serializers.LongSerializer;
import org.apache.cassandra.db.marshal.DateType;
import org.apache.cassandra.db.marshal.LongType;

import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class TestCassandraCompare {

static DateSerializer dateSerializer = DateSerializer.get();
static LongSerializer longSerializer = LongSerializer.get();
static DateType dateType = DateType.instance;
static LongType longType = LongType.instance;

public static void main(String[] args) {

Locale.setDefault(Locale.ENGLISH);
TimeZone.setDefault(TimeZone.getTimeZone("GMT0"));

// check longs
doLongsCheck(10, 20);
doLongsCheck(20, 10);
doLongsCheck(-10, -20);
doLongsCheck(-20, -10);
doLongsCheck(-10, 20);
doLongsCheck(20, -10);
doLongsCheck(10, -20);
doLongsCheck(-20, 10);

// check dates
doDatesCheck(new Date(10),  new Date(20),   1);
doDatesCheck(new Date(20),  new Date(10),   2);
doDatesCheck(new Date(-10), new Date(-20),  3);
doDatesCheck(new Date(-20), new Date(-10),  4);
doDatesCheck(new Date(-10), new Date(20),   5);
doDatesCheck(new Date(20),  new Date(-10),  6);
doDatesCheck(new Date(10),  new Date(-20),  7);
doDatesCheck(new Date(-20), new Date(10),   8);

}

private static void doLongsCheck(long l1, long l2) {
int cassandraCompare = longType.compare(longSerializer.toByteBuffer(l1),
        longSerializer.toByteBuffer(l2));
int javaCoreCompare = Long.compare(l1, l2);
if (cassandraCompare != javaCoreCompare) {
    System.err.println("got incorrect result from LongType compare method." +
            "\n\tlong1: " + l1 +
            "\n\tlong2: " + l2 +
            "\n\tcassandraCompare: " + cassandraCompare +
            "\n\tjavaCoreCompare: " + javaCoreCompare);
}
}
}

private static void doDatesCheck(Date d1, Date d2, int testNum) {
int cassandraCompare = dateType.compare(dateSerializer.toByteBuffer(d1),
        dateSerializer.toByteBuffer(d2));
int javaCoreCompare = d1.compareTo(d2);
if (cassandraCompare != javaCoreCompare) {
    System.err.println("[" + testNum + "]got incorrect result from DateType compare method." +
            "\n\tdate1: " + d1 +
            "\n\tdate2: " + d2 +
            "\n\tcassandraCompare: " + cassandraCompare +
            "\n\tjavaCoreCompare: " + javaCoreCompare);
}
}
}

}
{code}

If you run the code you will see the following output:
{code}
[5]got incorrect result from DateType compare method.
date1: Wed Dec 31 23:58:20 GMT 1969
date2: Thu Jan 01 00:03:20 GMT 1970
cassandraCompare: 1
javaCoreCompare: -1
[6]got incorrect result from DateType compare method.
date1: Thu Jan 01 00:03:20 GMT 1970
date2: Wed Dec 31 23:58:20 GMT 1969
cassandraCompare: -1
javaCoreCompare: 1
[7]got incorrect result from DateType compare method.
date1: Thu Jan 01 00:01:40 GMT 1970
date2: Wed Dec 31 23:56:40 GMT 1969
cassandraCompare: -1
javaCoreCompare: 1
[8]got incorrect result from DateType compare method.
date1: Wed Dec 31 23:56:40 GMT 1969
date2: Thu Jan 01 00:01:40 GMT 1970
cassandraCompare: 1
javaCoreCompare: -1
{code}
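The failing cases above are exactly the ones mixing pre- and post-1970 dates, which points at the likely root cause: comparing the big-endian two's-complement bytes of a timestamp as *unsigned* values ranks negative timestamps above positive ones. A minimal sketch (my own illustration, not the actual Cassandra code) contrasts that with a sign-aware comparison of the kind LongType performs:

```java
import java.nio.ByteBuffer;

public class DateCompareSketch {

    // A Date's epoch-millis timestamp, serialized as a big-endian long.
    static ByteBuffer serialize(long timestamp) {
        return ByteBuffer.allocate(8).putLong(0, timestamp);
    }

    // Unsigned lexical byte comparison: the suspected buggy behaviour.
    static int compareUnsigned(ByteBuffer a, ByteBuffer b) {
        for (int i = 0; i < 8; i++) {
            int cmp = Integer.compare(a.get(i) & 0xff, b.get(i) & 0xff);
            if (cmp != 0) return cmp;
        }
        return 0;
    }

    // Sign-aware comparison: first (sign-carrying) byte signed, rest unsigned.
    static int compareSigned(ByteBuffer a, ByteBuffer b) {
        int cmp = Integer.compare(a.get(0), b.get(0));
        if (cmp != 0) return cmp;
        for (int i = 1; i < 8; i++) {
            cmp = Integer.compare(a.get(i) & 0xff, b.get(i) & 0xff);
            if (cmp != 0) return cmp;
        }
        return 0;
    }

    public static void main(String[] args) {
        long before1970 = -100000L, after1970 = 200000L;
        // unsigned comparison claims the pre-1970 date is larger: the bug
        System.out.println(compareUnsigned(serialize(before1970), serialize(after1970)) > 0);
        // sign-aware comparison agrees with Long.compare: the expected ordering
        System.out.println(compareSigned(serialize(before1970), serialize(after1970)) < 0);
    }
}
```

A negative timestamp's top byte is 0x80-0xff, so read unsigned it dominates any non-negative value, matching the sign of every mismatch in the output above.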

  was:
We hit issues in our application.
We have a column family with column names as dates.
When we try to search these columns with a criterion like
{code}
// pseudocode
colName < '01.01.2015'
{code}
we don't receive columns whose names are less than 01.01.1970, i.e. those with
a negative timestamp.

After a little research we found that the DateType class probably has a bug in
its compareTo method.
Here is a very small example that shows the incorrect behaviour of DateType:

{code}
import com.netflix.astyanax.serializers.DateSerializer;
import com.netflix.astyanax.serializers.LongSerializer;
import org.apache.cassandra.db.marshal.DateType;
import org.apache.cassandra.db.marshal.LongType;

import 

[jira] [Commented] (CASSANDRA-9549) Memory leak

2015-06-17 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=1458#comment-1458
 ] 

Benedict commented on CASSANDRA-9549:
-

Great, glad to hear it

 Memory leak 
 

 Key: CASSANDRA-9549
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9549
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cassandra 2.1.5. 9 node cluster in EC2 (m1.large nodes, 
 2 cores 7.5G memory, 800G platter for cassandra data, root partition and 
 commit log are on SSD EBS with sufficient IOPS), 3 nodes/availablity zone, 1 
 replica/zone
 JVM: /usr/java/jdk1.8.0_40/jre/bin/java 
 JVM Flags besides CP: -ea -javaagent:/usr/share/cassandra/lib/jamm-0.3.0.jar 
 -XX:+CMSClassUnloadingEnabled -XX:+UseThreadPriorities 
 -XX:ThreadPriorityPolicy=42 -Xms2G -Xmx2G -Xmn200M 
 -XX:+HeapDumpOnOutOfMemoryError -Xss256k -XX:StringTableSize=103 
 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled 
 -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=1 
 -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly 
 -XX:+UseTLAB -XX:CompileCommandFile=/etc/cassandra/conf/hotspot_compiler 
 -XX:CMSWaitDuration=1 -XX:+CMSParallelInitialMarkEnabled 
 -XX:+CMSEdenChunksRecordAlways -XX:CMSWaitDuration=1 -XX:+UseCondCardMark 
 -Djava.net.preferIPv4Stack=true -Dcom.sun.management.jmxremote.port=7199 
 -Dcom.sun.management.jmxremote.rmi.port=7199 
 -Dcom.sun.management.jmxremote.ssl=false 
 -Dcom.sun.management.jmxremote.authenticate=false 
 -Dlogback.configurationFile=logback.xml -Dcassandra.logdir=/var/log/cassandra 
 -Dcassandra.storagedir= -Dcassandra-pidfile=/var/run/cassandra/cassandra.pid 
 Kernel: Linux 2.6.32-504.16.2.el6.x86_64 #1 SMP x86_64 x86_64 x86_64 GNU/Linux
Reporter: Ivar Thorson
Assignee: Benedict
Priority: Critical
 Fix For: 2.1.7

 Attachments: c4_system.log, c7fromboot.zip, cassandra.yaml, 
 cpu-load.png, memoryuse.png, ref-java-errors.jpeg, suspect.png, two-loads.png


 We have been experiencing a severe memory leak with Cassandra 2.1.5 that, 
 over the period of a couple of days, eventually consumes all of the available 
 JVM heap space, putting the JVM into GC hell where it keeps trying CMS 
 collection but can't free up any heap space. This pattern happens for every 
 node in our cluster and is requiring rolling cassandra restarts just to keep 
 the cluster running. We have upgraded the cluster per Datastax docs from the 
 2.0 branch a couple of months ago and have been using the data from this 
 cluster for more than a year without problem.
 As the heap fills up with non-GC-able objects, the CPU/OS load average grows 
 along with it. Heap dumps reveal an increasing number of 
 java.util.concurrent.ConcurrentLinkedQueue$Node objects. We took heap dumps 
 over a 2 day period, and watched the number of Node objects go from 4M, to 
 19M, to 36M, and eventually about 65M objects before the node stops 
 responding. The screen capture of our heap dump is from the 19M measurement.
 Load on the cluster is minimal. We can see this effect even with only a 
 handful of writes per second. (See attachments for Opscenter snapshots during 
 very light loads and heavier loads). Even with only 5 reads a sec we see this 
 behavior.
 Log files show repeated errors in Ref.java:181 and Ref.java:279 and LEAK 
 detected messages:
 {code}
 ERROR [CompactionExecutor:557] 2015-06-01 18:27:36,978 Ref.java:279 - Error 
 when closing class 
 org.apache.cassandra.io.sstable.SSTableReader$InstanceTidier@1302301946:/data1/data/ourtablegoeshere-ka-1150
 java.util.concurrent.RejectedExecutionException: Task 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@32680b31 
 rejected from 
 org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor@573464d6[Terminated,
  pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 1644]
 {code}
 {code}
 ERROR [Reference-Reaper:1] 2015-06-01 18:27:37,083 Ref.java:181 - LEAK 
 DETECTED: a reference 
 (org.apache.cassandra.utils.concurrent.Ref$State@74b5df92) to class 
 org.apache.cassandra.io.sstable.SSTableReader$DescriptorTypeTidy@2054303604:/data2/data/ourtablegoeshere-ka-1151
  was not released before the reference was garbage collected
 {code}
 This might be related to [CASSANDRA-8723]?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9611) wikipedia demo behaves different depending on how the demos .sh's are called

2015-06-17 Thread Jon Moses (JIRA)
Jon Moses created CASSANDRA-9611:


 Summary: wikipedia demo behaves different depending on how the 
demos .sh's are called
 Key: CASSANDRA-9611
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9611
 Project: Cassandra
  Issue Type: Bug
Reporter: Jon Moses
Priority: Trivial


If you run the 1-add-schema.sh script from a directory other than the 
wikipedia demo directory, it will appear to succeed but will be unable to find 
the create_table.cql file.  Subsequently, setting the solr options will create 
the table, but with COMPACT STORAGE rather than via CQL.

Outside dir:

{noformat}
$ ./wikipedia/1-add-schema.sh
Creating Cassandra table...
./wikipedia/1-add-schema.sh: line 15: create_table.cql: No such file or 
directory
Posting solrconfig.xml to 
http://ip-172-31-22-166.ec2.internal:8983/solr/resource/wiki.solr/solrconfig.xml...
SUCCESS
Posted solrconfig.xml to 
http://ip-172-31-22-166.ec2.internal:8983/solr/resource/wiki.solr/solrconfig.xml
Posting schema.xml to 
http://ip-172-31-22-166.ec2.internal:8983/solr/resource/wiki.solr/schema.xml...
SUCCESS
Posted schema.xml to 
http://ip-172-31-22-166.ec2.internal:8983/solr/resource/wiki.solr/schema.xml
Creating index...
<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">0</int><int name="QTime">5620</int></lst>
</response>
Created index.
{noformat}

{noformat}
$ cqlsh -e "desc table wiki.solr"

/*
Warning: Table wiki.solr omitted because it has constructs not compatible with 
CQL (was created via legacy API).

Approximate structure, for reference:
(this should not be used to reproduce this schema)

CREATE TABLE wiki.solr (
key text PRIMARY KEY,
_docBoost text,
body text,
date text,
name text,
solr_query text,
title text
) WITH COMPACT STORAGE
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'min_threshold': '4', 'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = 'NONE';
CREATE CUSTOM INDEX wiki_solr__docBoost_index ON wiki.solr (_docBoost) USING 
'com.datastax.bdp.search.solr.ThriftSolrSecondaryIndex';
CREATE CUSTOM INDEX wiki_solr_body_index ON wiki.solr (body) USING 
'com.datastax.bdp.search.solr.ThriftSolrSecondaryIndex';
CREATE CUSTOM INDEX wiki_solr_date_index ON wiki.solr (date) USING 
'com.datastax.bdp.search.solr.ThriftSolrSecondaryIndex';
CREATE CUSTOM INDEX wiki_solr_name_index ON wiki.solr (name) USING 
'com.datastax.bdp.search.solr.ThriftSolrSecondaryIndex';
CREATE CUSTOM INDEX wiki_solr_solr_query_index ON wiki.solr (solr_query) USING 
'com.datastax.bdp.search.solr.ThriftSolrSecondaryIndex';
CREATE CUSTOM INDEX wiki_solr_title_index ON wiki.solr (title) USING 
'com.datastax.bdp.search.solr.ThriftSolrSecondaryIndex';
CREATE TRIGGER dse_augmentation ON wiki.solr USING 
'com.datastax.bdp.search.solr.triggers.SolrAugmentationTrigger';
*/
{noformat}


Inside dir:

{noformat}
$ ./1-add-schema.sh
Creating Cassandra table...
Posting solrconfig.xml to 
http://ip-172-31-22-166.ec2.internal:8983/solr/resource/wiki.solr/solrconfig.xml...
SUCCESS
Posted solrconfig.xml to 
http://ip-172-31-22-166.ec2.internal:8983/solr/resource/wiki.solr/solrconfig.xml
Posting schema.xml to 
http://ip-172-31-22-166.ec2.internal:8983/solr/resource/wiki.solr/schema.xml...
SUCCESS
Posted schema.xml to 
http://ip-172-31-22-166.ec2.internal:8983/solr/resource/wiki.solr/schema.xml
Creating index...
<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">0</int><int name="QTime">1272</int></lst>
</response>
Created index.
{noformat}

{noformat}
$ cqlsh -e "desc table wiki.solr"

CREATE TABLE wiki.solr (
id text PRIMARY KEY,
body text,
date text,
name text,
solr_query text,
title text
) WITH bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'min_threshold': '4', 'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';
CREATE CUSTOM INDEX wiki_solr_body_index ON wiki.solr (body) USING 

[jira] [Resolved] (CASSANDRA-9611) wikipedia demo behaves different depending on how the demos .sh's are called

2015-06-17 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp resolved CASSANDRA-9611.
-
Resolution: Invalid

The demo is specific to DSE (Search) and not related to OSS Apache Cassandra.

[~jmoses] please forward this to DSE Search team.

 wikipedia demo behaves different depending on how the demos .sh's are called
 

 Key: CASSANDRA-9611
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9611
 Project: Cassandra
  Issue Type: Bug
Reporter: Jon Moses
Priority: Trivial

 If you run the 1-add-schema.sh script from a directory other than the 
 wikipedia demo directory, it will appear to succeed but will be unable to 
 find the create_table.cql file.  Subsequently, setting the solr options will 
 create the table, but with COMPACT STORAGE rather than via CQL.
 Outside dir:
 {noformat}
 $ ./wikipedia/1-add-schema.sh
 Creating Cassandra table...
 ./wikipedia/1-add-schema.sh: line 15: create_table.cql: No such file or 
 directory
 Posting solrconfig.xml to 
 http://ip-172-31-22-166.ec2.internal:8983/solr/resource/wiki.solr/solrconfig.xml...
 SUCCESS
 Posted solrconfig.xml to 
 http://ip-172-31-22-166.ec2.internal:8983/solr/resource/wiki.solr/solrconfig.xml
 Posting schema.xml to 
 http://ip-172-31-22-166.ec2.internal:8983/solr/resource/wiki.solr/schema.xml...
 SUCCESS
 Posted schema.xml to 
 http://ip-172-31-22-166.ec2.internal:8983/solr/resource/wiki.solr/schema.xml
 Creating index...
 <?xml version="1.0" encoding="UTF-8"?>
 <response>
 <lst name="responseHeader"><int name="status">0</int><int name="QTime">5620</int></lst>
 </response>
 Created index.
 {noformat}
 {noformat}
 $ cqlsh -e desc table wiki.solr
 /*
 Warning: Table wiki.solr omitted because it has constructs not compatible 
 with CQL (was created via legacy API).
 Approximate structure, for reference:
 (this should not be used to reproduce this schema)
 CREATE TABLE wiki.solr (
 key text PRIMARY KEY,
 _docBoost text,
 body text,
 date text,
 name text,
 solr_query text,
 title text
 ) WITH COMPACT STORAGE
 AND caching = '{keys:ALL, rows_per_partition:NONE}'
 AND comment = ''
 AND compaction = {'min_threshold': '4', 'class': 
 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
 'max_threshold': '32'}
 AND compression = {'sstable_compression': 
 'org.apache.cassandra.io.compress.LZ4Compressor'}
 AND dclocal_read_repair_chance = 0.1
 AND default_time_to_live = 0
 AND gc_grace_seconds = 864000
 AND max_index_interval = 2048
 AND memtable_flush_period_in_ms = 0
 AND min_index_interval = 128
 AND read_repair_chance = 0.0
 AND speculative_retry = 'NONE';
 CREATE CUSTOM INDEX wiki_solr__docBoost_index ON wiki.solr (_docBoost) 
 USING 'com.datastax.bdp.search.solr.ThriftSolrSecondaryIndex';
 CREATE CUSTOM INDEX wiki_solr_body_index ON wiki.solr (body) USING 
 'com.datastax.bdp.search.solr.ThriftSolrSecondaryIndex';
 CREATE CUSTOM INDEX wiki_solr_date_index ON wiki.solr (date) USING 
 'com.datastax.bdp.search.solr.ThriftSolrSecondaryIndex';
 CREATE CUSTOM INDEX wiki_solr_name_index ON wiki.solr (name) USING 
 'com.datastax.bdp.search.solr.ThriftSolrSecondaryIndex';
 CREATE CUSTOM INDEX wiki_solr_solr_query_index ON wiki.solr (solr_query) 
 USING 'com.datastax.bdp.search.solr.ThriftSolrSecondaryIndex';
 CREATE CUSTOM INDEX wiki_solr_title_index ON wiki.solr (title) USING 
 'com.datastax.bdp.search.solr.ThriftSolrSecondaryIndex';
 CREATE TRIGGER dse_augmentation ON wiki.solr USING 
 'com.datastax.bdp.search.solr.triggers.SolrAugmentationTrigger';
 */
 {noformat}
 Inside dir:
 {noformat}
 $ ./1-add-schema.sh
 Creating Cassandra table...
 Posting solrconfig.xml to 
 http://ip-172-31-22-166.ec2.internal:8983/solr/resource/wiki.solr/solrconfig.xml...
 SUCCESS
 Posted solrconfig.xml to 
 http://ip-172-31-22-166.ec2.internal:8983/solr/resource/wiki.solr/solrconfig.xml
 Posting schema.xml to 
 http://ip-172-31-22-166.ec2.internal:8983/solr/resource/wiki.solr/schema.xml...
 SUCCESS
 Posted schema.xml to 
 http://ip-172-31-22-166.ec2.internal:8983/solr/resource/wiki.solr/schema.xml
 Creating index...
 <?xml version="1.0" encoding="UTF-8"?>
 <response>
 <lst name="responseHeader"><int name="status">0</int><int name="QTime">1272</int></lst>
 </response>
 Created index.
 {noformat}
 {noformat}
 $ cqlsh -e desc table wiki.solr
 CREATE TABLE wiki.solr (
 id text PRIMARY KEY,
 body text,
 date text,
 name text,
 solr_query text,
 title text
 ) WITH bloom_filter_fp_chance = 0.01
 AND caching = '{keys:ALL, rows_per_partition:NONE}'
 AND comment = ''
 AND compaction = {'min_threshold': '4', 'class': 
 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
 'max_threshold': '32'}
 AND compression = {'sstable_compression': 
 

[jira] [Commented] (CASSANDRA-9611) wikipedia demo behaves different depending on how the demos .sh's are called

2015-06-17 Thread Jon Moses (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14590041#comment-14590041
 ] 

Jon Moses commented on CASSANDRA-9611:
--

Whups, my bad.  Too many jira tabs open.

 wikipedia demo behaves different depending on how the demos .sh's are called
 

 Key: CASSANDRA-9611
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9611
 Project: Cassandra
  Issue Type: Bug
Reporter: Jon Moses
Priority: Trivial

 If you run the 1-add-schema.sh script from a directory other than the 
 wikipedia demo directory, it will appear to succeed but will be unable to 
 find the create_table.cql file.  Subsequently, setting the solr options will 
 create the table, but with COMPACT STORAGE rather than via CQL.
 Outside dir:
 {noformat}
 $ ./wikipedia/1-add-schema.sh
 Creating Cassandra table...
 ./wikipedia/1-add-schema.sh: line 15: create_table.cql: No such file or 
 directory
 Posting solrconfig.xml to 
 http://ip-172-31-22-166.ec2.internal:8983/solr/resource/wiki.solr/solrconfig.xml...
 SUCCESS
 Posted solrconfig.xml to 
 http://ip-172-31-22-166.ec2.internal:8983/solr/resource/wiki.solr/solrconfig.xml
 Posting schema.xml to 
 http://ip-172-31-22-166.ec2.internal:8983/solr/resource/wiki.solr/schema.xml...
 SUCCESS
 Posted schema.xml to 
 http://ip-172-31-22-166.ec2.internal:8983/solr/resource/wiki.solr/schema.xml
 Creating index...
 <?xml version="1.0" encoding="UTF-8"?>
 <response>
 <lst name="responseHeader"><int name="status">0</int><int name="QTime">5620</int></lst>
 </response>
 Created index.
 {noformat}
 {noformat}
 $ cqlsh -e desc table wiki.solr
 /*
 Warning: Table wiki.solr omitted because it has constructs not compatible 
 with CQL (was created via legacy API).
 Approximate structure, for reference:
 (this should not be used to reproduce this schema)
 CREATE TABLE wiki.solr (
 key text PRIMARY KEY,
 _docBoost text,
 body text,
 date text,
 name text,
 solr_query text,
 title text
 ) WITH COMPACT STORAGE
 AND caching = '{keys:ALL, rows_per_partition:NONE}'
 AND comment = ''
 AND compaction = {'min_threshold': '4', 'class': 
 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
 'max_threshold': '32'}
 AND compression = {'sstable_compression': 
 'org.apache.cassandra.io.compress.LZ4Compressor'}
 AND dclocal_read_repair_chance = 0.1
 AND default_time_to_live = 0
 AND gc_grace_seconds = 864000
 AND max_index_interval = 2048
 AND memtable_flush_period_in_ms = 0
 AND min_index_interval = 128
 AND read_repair_chance = 0.0
 AND speculative_retry = 'NONE';
 CREATE CUSTOM INDEX wiki_solr__docBoost_index ON wiki.solr (_docBoost) 
 USING 'com.datastax.bdp.search.solr.ThriftSolrSecondaryIndex';
 CREATE CUSTOM INDEX wiki_solr_body_index ON wiki.solr (body) USING 
 'com.datastax.bdp.search.solr.ThriftSolrSecondaryIndex';
 CREATE CUSTOM INDEX wiki_solr_date_index ON wiki.solr (date) USING 
 'com.datastax.bdp.search.solr.ThriftSolrSecondaryIndex';
 CREATE CUSTOM INDEX wiki_solr_name_index ON wiki.solr (name) USING 
 'com.datastax.bdp.search.solr.ThriftSolrSecondaryIndex';
 CREATE CUSTOM INDEX wiki_solr_solr_query_index ON wiki.solr (solr_query) 
 USING 'com.datastax.bdp.search.solr.ThriftSolrSecondaryIndex';
 CREATE CUSTOM INDEX wiki_solr_title_index ON wiki.solr (title) USING 
 'com.datastax.bdp.search.solr.ThriftSolrSecondaryIndex';
 CREATE TRIGGER dse_augmentation ON wiki.solr USING 
 'com.datastax.bdp.search.solr.triggers.SolrAugmentationTrigger';
 */
 {noformat}
 Inside dir:
 {noformat}
 $ ./1-add-schema.sh
 Creating Cassandra table...
 Posting solrconfig.xml to 
 http://ip-172-31-22-166.ec2.internal:8983/solr/resource/wiki.solr/solrconfig.xml...
 SUCCESS
 Posted solrconfig.xml to 
 http://ip-172-31-22-166.ec2.internal:8983/solr/resource/wiki.solr/solrconfig.xml
 Posting schema.xml to 
 http://ip-172-31-22-166.ec2.internal:8983/solr/resource/wiki.solr/schema.xml...
 SUCCESS
 Posted schema.xml to 
 http://ip-172-31-22-166.ec2.internal:8983/solr/resource/wiki.solr/schema.xml
 Creating index...
<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">0</int><int name="QTime">1272</int></lst>
</response>
 Created index.
 {noformat}
 {noformat}
$ cqlsh -e "desc table wiki.solr"
 CREATE TABLE wiki.solr (
 id text PRIMARY KEY,
 body text,
 date text,
 name text,
 solr_query text,
 title text
 ) WITH bloom_filter_fp_chance = 0.01
 AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
 AND comment = ''
 AND compaction = {'min_threshold': '4', 'class': 
 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
 'max_threshold': '32'}
 AND compression = {'sstable_compression': 
 'org.apache.cassandra.io.compress.LZ4Compressor'}
 AND 

cassandra git commit: Fix memory leak in Ref due to ConcurrentLinkedQueue behaviour

2015-06-17 Thread benedict
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 46cc577a8 -> 7c5fc40b8


Fix memory leak in Ref due to ConcurrentLinkedQueue behaviour

patch by benedict; reviewed by marcus for CASSANDRA-9549


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7c5fc40b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7c5fc40b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7c5fc40b

Branch: refs/heads/cassandra-2.1
Commit: 7c5fc40b8b644e05c32479f2581309f75f981421
Parents: 46cc577
Author: Benedict Elliott Smith bened...@apache.org
Authored: Wed Jun 17 17:02:03 2015 +0100
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Wed Jun 17 17:02:03 2015 +0100

--
 CHANGES.txt  |  1 +
 .../org/apache/cassandra/utils/concurrent/Ref.java   |  8 +++-
 .../cassandra/utils/concurrent/RefCountedTest.java   | 15 +++
 3 files changed, 19 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7c5fc40b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 5f9187c..009d974 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.7
+ * Fix memory leak in Ref due to ConcurrentLinkedQueue.remove() behaviour 
(CASSANDRA-9549)
 Merged from 2.0
  * Periodically submit background compaction tasks (CASSANDRA-9592)
  * Set HAS_MORE_PAGES flag to false when PagingState is null (CASSANDRA-9571)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7c5fc40b/src/java/org/apache/cassandra/utils/concurrent/Ref.java
--
diff --git a/src/java/org/apache/cassandra/utils/concurrent/Ref.java 
b/src/java/org/apache/cassandra/utils/concurrent/Ref.java
index 4e6cef7..f9876de 100644
--- a/src/java/org/apache/cassandra/utils/concurrent/Ref.java
+++ b/src/java/org/apache/cassandra/utils/concurrent/Ref.java
@@ -2,12 +2,10 @@ package org.apache.cassandra.utils.concurrent;
 
 import java.lang.ref.PhantomReference;
 import java.lang.ref.ReferenceQueue;
+import java.util.Collection;
 import java.util.Collections;
 import java.util.Set;
-import java.util.concurrent.ConcurrentHashMap;
-import java.util.concurrent.ConcurrentLinkedQueue;
-import java.util.concurrent.ExecutorService;
-import java.util.concurrent.Executors;
+import java.util.concurrent.*;
 import java.util.concurrent.atomic.AtomicInteger;
 import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;
 
@@ -233,7 +231,7 @@ public final class Ref<T> implements RefCounted<T>, AutoCloseable
 {
     // we need to retain a reference to each of the PhantomReference instances
     // we are using to track individual refs
-    private final ConcurrentLinkedQueue<State> locallyExtant = new ConcurrentLinkedQueue<>();
+    private final Collection<State> locallyExtant = new ConcurrentLinkedDeque<>();
 // the number of live refs
 private final AtomicInteger counts = new AtomicInteger();
 // the object to call to cleanup when our refs are all finished with

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7c5fc40b/test/unit/org/apache/cassandra/utils/concurrent/RefCountedTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/utils/concurrent/RefCountedTest.java 
b/test/unit/org/apache/cassandra/utils/concurrent/RefCountedTest.java
index a9247cd..bb173fe 100644
--- a/test/unit/org/apache/cassandra/utils/concurrent/RefCountedTest.java
+++ b/test/unit/org/apache/cassandra/utils/concurrent/RefCountedTest.java
@@ -21,6 +21,7 @@ package org.apache.cassandra.utils.concurrent;
 import org.junit.Test;
 
 import junit.framework.Assert;
+import org.apache.cassandra.utils.ObjectSizes;
 
 public class RefCountedTest
 {
@@ -82,4 +83,18 @@ public class RefCountedTest
 {
 }
 }
+
+    @Test
+    public void testMemoryLeak()
+    {
+        Tidier tidier = new Tidier();
+        Ref<Object> ref = new Ref<>(null, tidier);
+        long initialSize = ObjectSizes.measureDeep(ref);
+        for (int i = 0 ; i < 1000 ; i++)
+            ref.ref().release();
+        long finalSize = ObjectSizes.measureDeep(ref);
+        if (finalSize > initialSize * 2)
+            throw new AssertionError();
+        ref.release();
+    }
 }
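The acquire/release pattern that testMemoryLeak exercises can be sketched in isolation. The following is a deliberately simplified stand-in, not Cassandra's actual Ref implementation (which additionally tracks each acquire with a PhantomReference for leak detection): ref() bumps a shared count via CAS, release() drops it, and the tidier runs exactly once when the count reaches zero.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Simplified sketch of the ref-counting contract (NOT Cassandra's Ref):
// ref() bumps a shared count, release() drops it, and the tidier runs
// exactly once when the count reaches zero.
public class MiniRef {
    private final AtomicInteger counts = new AtomicInteger(1); // the initial ref
    private final Runnable tidier;

    public MiniRef(Runnable tidier) { this.tidier = tidier; }

    public MiniRef ref() {
        while (true) {
            int cur = counts.get();
            if (cur <= 0) // all refs already released; resource may be gone
                throw new IllegalStateException("already released");
            if (counts.compareAndSet(cur, cur + 1))
                return this;
        }
    }

    public void release() {
        if (counts.decrementAndGet() == 0)
            tidier.run(); // last release cleans up, exactly once
    }

    public static void main(String[] args) {
        AtomicInteger tidied = new AtomicInteger();
        MiniRef ref = new MiniRef(tidied::incrementAndGet);
        for (int i = 0; i < 1000; i++)
            ref.ref().release(); // paired acquire/release, like testMemoryLeak
        ref.release();           // drop the initial ref: tidier fires
        System.out.println(tidied.get()); // prints 1
    }
}
```

The point of the actual patch is that each paired acquire/release above also enqueues and dequeues internal bookkeeping state; ConcurrentLinkedQueue.remove() can fail to unlink the removed node, so the queue grows without bound under this pattern, which is what measuring the deep size before and after the loop detects.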



[jira] [Created] (CASSANDRA-9609) org.apache.cassandra.db.marshal.DateType compares dates with negative timestamp incorrectly

2015-06-17 Thread Alexander Troshanin (JIRA)
Alexander Troshanin created CASSANDRA-9609:
--

 Summary: org.apache.cassandra.db.marshal.DateType compares dates 
with negative timestamp incorrectly
 Key: CASSANDRA-9609
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9609
 Project: Cassandra
  Issue Type: Bug
Reporter: Alexander Troshanin


We ran into issues in our application. We have a column family whose column 
names are dates. When we try to search these columns with a criterion like 
{code}
// pseudocode
colName < '01.01.2015'
{code}
we don't receive any columns whose names are earlier than 01.01.1970, i.e. 
those with a negative timestamp.

After some research we found that the DateType class probably has a bug in its 
compareTo method. Here is a very small example that shows the incorrect 
behaviour of DateType:

{code}
import com.netflix.astyanax.serializers.DateSerializer;
import com.netflix.astyanax.serializers.LongSerializer;
import org.apache.cassandra.db.marshal.DateType;
import org.apache.cassandra.db.marshal.LongType;

import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class TestCassandraCompare {

static DateSerializer dateSerializer = DateSerializer.get();
static LongSerializer longSerializer = LongSerializer.get();
static DateType dateType = DateType.instance;
static LongType longType = LongType.instance;

public static void main(String[] args) {

Locale.setDefault(Locale.ENGLISH);
TimeZone.setDefault(TimeZone.getTimeZone("GMT0"));

// check longs
doLongsCheck(10, 20);
doLongsCheck(20, 10);
doLongsCheck(-10, -20);
doLongsCheck(-20, -10);
doLongsCheck(-10, 20);
doLongsCheck(20, -10);
doLongsCheck(10, -20);
doLongsCheck(-20, 10);

// check dates
doDatesCheck(new Date(10),  new Date(20),   1);
doDatesCheck(new Date(20),  new Date(10),   2);
doDatesCheck(new Date(-10), new Date(-20),  3);
doDatesCheck(new Date(-20), new Date(-10),  4);
doDatesCheck(new Date(-10), new Date(20),   5);
doDatesCheck(new Date(20),  new Date(-10),  6);
doDatesCheck(new Date(10),  new Date(-20),  7);
doDatesCheck(new Date(-20), new Date(10),   8);

}

private static void doLongsCheck(long l1, long l2) {
int cassandraCompare = 
longType.compare(longSerializer.toByteBuffer(l1), 
longSerializer.toByteBuffer(l2));
int javaCoreCompare = Long.compare(l1, l2);
if (cassandraCompare != javaCoreCompare) {
System.err.println("got incorrect result from LongType compare method." +
"\n\tlong1: " + l1 +
"\n\tlong2: " + l2 +
"\n\tcassandraCompare: " + cassandraCompare +
"\n\tjavaCoreCompare: " + javaCoreCompare
);
}
}

private static void doDatesCheck(Date d1, Date d2, int testNum) {
int cassandraCompare = 
dateType.compare(dateSerializer.toByteBuffer(d1), 
dateSerializer.toByteBuffer(d2));
int javaCoreCompare = d1.compareTo(d2);
if (cassandraCompare != javaCoreCompare) {
System.err.println("[" + testNum + "]got incorrect result from " +
"LongType compare method." +
"\n\tdate1: " + d1 +
"\n\tdate2: " + d2 +
"\n\tcassandraCompare: " + cassandraCompare +
"\n\tjavaCoreCompare: " + javaCoreCompare
);
}
}

}
{code}

If you run the code you will see the following output:
{code}
[5]got incorrect result from LongType compare method.
date1: Wed Dec 31 23:58:20 GMT 1969
date2: Thu Jan 01 00:03:20 GMT 1970
cassandraCompare: 1
javaCoreCompare: -1
[6]got incorrect result from LongType compare method.
date1: Thu Jan 01 00:03:20 GMT 1970
date2: Wed Dec 31 23:58:20 GMT 1969
cassandraCompare: -1
javaCoreCompare: 1
[7]got incorrect result from LongType compare method.
date1: Thu Jan 01 00:01:40 GMT 1970
date2: Wed Dec 31 23:56:40 GMT 1969
cassandraCompare: -1
javaCoreCompare: 1
[8]got incorrect result from LongType compare method.
date1: Wed Dec 31 23:56:40 GMT 1969
date2: Thu Jan 01 00:01:40 GMT 1970
cassandraCompare: 1
javaCoreCompare: -1
{code}
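For what it's worth, the symptom is consistent with the encoded timestamp being compared as unsigned bytes. A small self-contained sketch (hypothetical, not Cassandra's actual DateType code) shows why an unsigned lexicographic compare of big-endian two's-complement longs inverts the order whenever exactly one operand is negative: the sign bit makes the first byte of a negative value 0x80 or higher, so it sorts after every non-negative value.

```java
import java.nio.ByteBuffer;

// Hypothetical sketch of the suspected root cause (not Cassandra's code):
// comparing the 8-byte big-endian two's-complement encoding of a timestamp
// as UNSIGNED bytes reverses the order for negative (pre-1970) values.
public class SignedVsUnsigned {
    // lexicographic compare treating each byte as unsigned (0..255)
    public static int unsignedCompare(byte[] a, byte[] b) {
        for (int i = 0; i < a.length; i++) {
            int ai = a[i] & 0xFF, bi = b[i] & 0xFF;
            if (ai != bi) return ai < bi ? -1 : 1;
        }
        return 0;
    }

    // big-endian two's-complement encoding of a long
    public static byte[] encode(long v) {
        return ByteBuffer.allocate(8).putLong(v).array();
    }

    public static void main(String[] args) {
        long neg = -100_000L; // Wed Dec 31 23:58:20 GMT 1969 (as millis)
        long pos =  200_000L; // Thu Jan 01 00:03:20 GMT 1970 (as millis)
        // Unsigned byte order disagrees with numeric order:
        System.out.println(unsignedCompare(encode(neg), encode(pos))); // prints 1
        System.out.println(Long.compare(neg, pos));                    // prints -1
    }
}
```

Comparing the decoded long values, or treating only the leading byte as signed, restores numeric order, which would match the LongType results the test program above shows as correct.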



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[3/3] cassandra git commit: Merge branch 'cassandra-2.2' into trunk

2015-06-17 Thread benedict
Merge branch 'cassandra-2.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/075ff500
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/075ff500
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/075ff500

Branch: refs/heads/trunk
Commit: 075ff5000ced24b42f3b540815cae471bee4049d
Parents: 5ea5949 6068efb
Author: Benedict Elliott Smith bened...@apache.org
Authored: Wed Jun 17 17:14:28 2015 +0100
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Wed Jun 17 17:14:28 2015 +0100

--
 CHANGES.txt  |  4 +++-
 .../org/apache/cassandra/utils/concurrent/Ref.java   |  8 +++-
 .../cassandra/utils/concurrent/RefCountedTest.java   | 15 +++
 3 files changed, 21 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/075ff500/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/075ff500/src/java/org/apache/cassandra/utils/concurrent/Ref.java
--



[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-06-17 Thread benedict
Merge branch 'cassandra-2.1' into cassandra-2.2

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6068efba
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6068efba
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6068efba

Branch: refs/heads/cassandra-2.2
Commit: 6068efbaf43a18c6f9250638bece98010ce59100
Parents: 514dcd9 7c5fc40
Author: Benedict Elliott Smith bened...@apache.org
Authored: Wed Jun 17 17:08:07 2015 +0100
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Wed Jun 17 17:08:07 2015 +0100

--
 CHANGES.txt  |  4 +++-
 .../org/apache/cassandra/utils/concurrent/Ref.java   |  8 +++-
 .../cassandra/utils/concurrent/RefCountedTest.java   | 15 +++
 3 files changed, 21 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6068efba/CHANGES.txt
--
diff --cc CHANGES.txt
index 9dccd84,009d974..3e9940c
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,38 -1,11 +1,40 @@@
 -2.1.7
 +2.2
 + * Fix connection leak in CqlRecordWriter (CASSANDRA-9576)
 + * Mlockall before opening system sstables & remove boot_without_jna option 
(CASSANDRA-9573)
 + * Add functions to convert timeuuid to date or time, deprecate dateOf and 
unixTimestampOf (CASSANDRA-9229)
 + * Make sure we cancel non-compacting sstables from LifecycleTransaction 
(CASSANDRA-9566)
 + * Fix deprecated repair JMX API (CASSANDRA-9570)
- Merged from 2.0:
++Merged from 2.1:
+  * Fix memory leak in Ref due to ConcurrentLinkedQueue.remove() behaviour 
(CASSANDRA-9549)
+ Merged from 2.0
   * Periodically submit background compaction tasks (CASSANDRA-9592)
   * Set HAS_MORE_PAGES flag to false when PagingState is null (CASSANDRA-9571)
 + * Add logback metrics (CASSANDRA-9378)
  
  
 -2.1.6
 +2.2.0-rc1
 + * Compressed commit log should measure compressed space used (CASSANDRA-9095)
 + * Fix comparison bug in CassandraRoleManager#collectRoles (CASSANDRA-9551)
 + * Add tinyint,smallint,time,date support for UDFs (CASSANDRA-9400)
 + * Deprecates SSTableSimpleWriter and SSTableSimpleUnsortedWriter 
(CASSANDRA-9546)
 + * Empty INITCOND treated as null in aggregate (CASSANDRA-9457)
 + * Remove use of Cell in Thrift MapReduce classes (CASSANDRA-8609)
 + * Integrate pre-release Java Driver 2.2-rc1, custom build (CASSANDRA-9493)
 + * Clean up gossiper logic for old versions (CASSANDRA-9370)
 + * Fix custom payload coding/decoding to match the spec (CASSANDRA-9515)
 + * ant test-all results incomplete when parsed (CASSANDRA-9463)
 + * Disallow frozen types in function arguments and return types for
 +   clarity (CASSANDRA-9411)
 + * Static Analysis to warn on unsafe use of Autocloseable instances 
(CASSANDRA-9431)
 + * Update commitlog archiving examples now that commitlog segments are
 +   not recycled (CASSANDRA-9350)
 + * Extend Transactional API to sstable lifecycle management (CASSANDRA-8568)
 + * (cqlsh) Add support for native protocol 4 (CASSANDRA-9399)
 + * Ensure that UDF and UDAs are keyspace-isolated (CASSANDRA-9409)
 + * Revert CASSANDRA-7807 (tracing completion client notifications) 
(CASSANDRA-9429)
 + * Add ability to stop compaction by ID (CASSANDRA-7207)
 + * Let CassandraVersion handle SNAPSHOT version (CASSANDRA-9438)
 +Merged from 2.1:
   * (cqlsh) Fix using COPY through SOURCE or -f (CASSANDRA-9083)
   * Fix occasional lack of `system` keyspace in schema tables (CASSANDRA-8487)
   * Use ProtocolError code instead of ServerError code for native protocol

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6068efba/src/java/org/apache/cassandra/utils/concurrent/Ref.java
--



[2/3] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-06-17 Thread benedict
Merge branch 'cassandra-2.1' into cassandra-2.2

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6068efba
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6068efba
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6068efba

Branch: refs/heads/trunk
Commit: 6068efbaf43a18c6f9250638bece98010ce59100
Parents: 514dcd9 7c5fc40
Author: Benedict Elliott Smith bened...@apache.org
Authored: Wed Jun 17 17:08:07 2015 +0100
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Wed Jun 17 17:08:07 2015 +0100

--
 CHANGES.txt  |  4 +++-
 .../org/apache/cassandra/utils/concurrent/Ref.java   |  8 +++-
 .../cassandra/utils/concurrent/RefCountedTest.java   | 15 +++
 3 files changed, 21 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6068efba/CHANGES.txt
--
diff --cc CHANGES.txt
index 9dccd84,009d974..3e9940c
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,38 -1,11 +1,40 @@@
 -2.1.7
 +2.2
 + * Fix connection leak in CqlRecordWriter (CASSANDRA-9576)
 + * Mlockall before opening system sstables & remove boot_without_jna option 
(CASSANDRA-9573)
 + * Add functions to convert timeuuid to date or time, deprecate dateOf and 
unixTimestampOf (CASSANDRA-9229)
 + * Make sure we cancel non-compacting sstables from LifecycleTransaction 
(CASSANDRA-9566)
 + * Fix deprecated repair JMX API (CASSANDRA-9570)
- Merged from 2.0:
++Merged from 2.1:
+  * Fix memory leak in Ref due to ConcurrentLinkedQueue.remove() behaviour 
(CASSANDRA-9549)
+ Merged from 2.0
   * Periodically submit background compaction tasks (CASSANDRA-9592)
   * Set HAS_MORE_PAGES flag to false when PagingState is null (CASSANDRA-9571)
 + * Add logback metrics (CASSANDRA-9378)
  
  
 -2.1.6
 +2.2.0-rc1
 + * Compressed commit log should measure compressed space used (CASSANDRA-9095)
 + * Fix comparison bug in CassandraRoleManager#collectRoles (CASSANDRA-9551)
 + * Add tinyint,smallint,time,date support for UDFs (CASSANDRA-9400)
 + * Deprecates SSTableSimpleWriter and SSTableSimpleUnsortedWriter 
(CASSANDRA-9546)
 + * Empty INITCOND treated as null in aggregate (CASSANDRA-9457)
 + * Remove use of Cell in Thrift MapReduce classes (CASSANDRA-8609)
 + * Integrate pre-release Java Driver 2.2-rc1, custom build (CASSANDRA-9493)
 + * Clean up gossiper logic for old versions (CASSANDRA-9370)
 + * Fix custom payload coding/decoding to match the spec (CASSANDRA-9515)
 + * ant test-all results incomplete when parsed (CASSANDRA-9463)
 + * Disallow frozen types in function arguments and return types for
 +   clarity (CASSANDRA-9411)
 + * Static Analysis to warn on unsafe use of Autocloseable instances 
(CASSANDRA-9431)
 + * Update commitlog archiving examples now that commitlog segments are
 +   not recycled (CASSANDRA-9350)
 + * Extend Transactional API to sstable lifecycle management (CASSANDRA-8568)
 + * (cqlsh) Add support for native protocol 4 (CASSANDRA-9399)
 + * Ensure that UDF and UDAs are keyspace-isolated (CASSANDRA-9409)
 + * Revert CASSANDRA-7807 (tracing completion client notifications) 
(CASSANDRA-9429)
 + * Add ability to stop compaction by ID (CASSANDRA-7207)
 + * Let CassandraVersion handle SNAPSHOT version (CASSANDRA-9438)
 +Merged from 2.1:
   * (cqlsh) Fix using COPY through SOURCE or -f (CASSANDRA-9083)
   * Fix occasional lack of `system` keyspace in schema tables (CASSANDRA-8487)
   * Use ProtocolError code instead of ServerError code for native protocol

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6068efba/src/java/org/apache/cassandra/utils/concurrent/Ref.java
--



[1/2] cassandra git commit: Fix memory leak in Ref due to ConcurrentLinkedQueue behaviour

2015-06-17 Thread benedict
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 514dcd944 -> 6068efbaf


Fix memory leak in Ref due to ConcurrentLinkedQueue behaviour

patch by benedict; reviewed by marcus for CASSANDRA-9549


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7c5fc40b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7c5fc40b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7c5fc40b

Branch: refs/heads/cassandra-2.2
Commit: 7c5fc40b8b644e05c32479f2581309f75f981421
Parents: 46cc577
Author: Benedict Elliott Smith bened...@apache.org
Authored: Wed Jun 17 17:02:03 2015 +0100
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Wed Jun 17 17:02:03 2015 +0100

--
 CHANGES.txt  |  1 +
 .../org/apache/cassandra/utils/concurrent/Ref.java   |  8 +++-
 .../cassandra/utils/concurrent/RefCountedTest.java   | 15 +++
 3 files changed, 19 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7c5fc40b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 5f9187c..009d974 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.7
+ * Fix memory leak in Ref due to ConcurrentLinkedQueue.remove() behaviour 
(CASSANDRA-9549)
 Merged from 2.0
  * Periodically submit background compaction tasks (CASSANDRA-9592)
  * Set HAS_MORE_PAGES flag to false when PagingState is null (CASSANDRA-9571)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7c5fc40b/src/java/org/apache/cassandra/utils/concurrent/Ref.java
--
diff --git a/src/java/org/apache/cassandra/utils/concurrent/Ref.java 
b/src/java/org/apache/cassandra/utils/concurrent/Ref.java
index 4e6cef7..f9876de 100644
--- a/src/java/org/apache/cassandra/utils/concurrent/Ref.java
+++ b/src/java/org/apache/cassandra/utils/concurrent/Ref.java
@@ -2,12 +2,10 @@ package org.apache.cassandra.utils.concurrent;
 
 import java.lang.ref.PhantomReference;
 import java.lang.ref.ReferenceQueue;
+import java.util.Collection;
 import java.util.Collections;
 import java.util.Set;
-import java.util.concurrent.ConcurrentHashMap;
-import java.util.concurrent.ConcurrentLinkedQueue;
-import java.util.concurrent.ExecutorService;
-import java.util.concurrent.Executors;
+import java.util.concurrent.*;
 import java.util.concurrent.atomic.AtomicInteger;
 import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;
 
@@ -233,7 +231,7 @@ public final class Ref<T> implements RefCounted<T>, AutoCloseable
 {
     // we need to retain a reference to each of the PhantomReference instances
     // we are using to track individual refs
-    private final ConcurrentLinkedQueue<State> locallyExtant = new ConcurrentLinkedQueue<>();
+    private final Collection<State> locallyExtant = new ConcurrentLinkedDeque<>();
 // the number of live refs
 private final AtomicInteger counts = new AtomicInteger();
 // the object to call to cleanup when our refs are all finished with

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7c5fc40b/test/unit/org/apache/cassandra/utils/concurrent/RefCountedTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/utils/concurrent/RefCountedTest.java 
b/test/unit/org/apache/cassandra/utils/concurrent/RefCountedTest.java
index a9247cd..bb173fe 100644
--- a/test/unit/org/apache/cassandra/utils/concurrent/RefCountedTest.java
+++ b/test/unit/org/apache/cassandra/utils/concurrent/RefCountedTest.java
@@ -21,6 +21,7 @@ package org.apache.cassandra.utils.concurrent;
 import org.junit.Test;
 
 import junit.framework.Assert;
+import org.apache.cassandra.utils.ObjectSizes;
 
 public class RefCountedTest
 {
@@ -82,4 +83,18 @@ public class RefCountedTest
 {
 }
 }
+
+    @Test
+    public void testMemoryLeak()
+    {
+        Tidier tidier = new Tidier();
+        Ref<Object> ref = new Ref<>(null, tidier);
+        long initialSize = ObjectSizes.measureDeep(ref);
+        for (int i = 0 ; i < 1000 ; i++)
+            ref.ref().release();
+        long finalSize = ObjectSizes.measureDeep(ref);
+        if (finalSize > initialSize * 2)
+            throw new AssertionError();
+        ref.release();
+    }
 }



[1/3] cassandra git commit: Fix memory leak in Ref due to ConcurrentLinkedQueue behaviour

2015-06-17 Thread benedict
Repository: cassandra
Updated Branches:
  refs/heads/trunk 5ea5949f9 -> 075ff5000


Fix memory leak in Ref due to ConcurrentLinkedQueue behaviour

patch by benedict; reviewed by marcus for CASSANDRA-9549


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7c5fc40b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7c5fc40b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7c5fc40b

Branch: refs/heads/trunk
Commit: 7c5fc40b8b644e05c32479f2581309f75f981421
Parents: 46cc577
Author: Benedict Elliott Smith bened...@apache.org
Authored: Wed Jun 17 17:02:03 2015 +0100
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Wed Jun 17 17:02:03 2015 +0100

--
 CHANGES.txt  |  1 +
 .../org/apache/cassandra/utils/concurrent/Ref.java   |  8 +++-
 .../cassandra/utils/concurrent/RefCountedTest.java   | 15 +++
 3 files changed, 19 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7c5fc40b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 5f9187c..009d974 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.7
+ * Fix memory leak in Ref due to ConcurrentLinkedQueue.remove() behaviour 
(CASSANDRA-9549)
 Merged from 2.0
  * Periodically submit background compaction tasks (CASSANDRA-9592)
  * Set HAS_MORE_PAGES flag to false when PagingState is null (CASSANDRA-9571)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7c5fc40b/src/java/org/apache/cassandra/utils/concurrent/Ref.java
--
diff --git a/src/java/org/apache/cassandra/utils/concurrent/Ref.java 
b/src/java/org/apache/cassandra/utils/concurrent/Ref.java
index 4e6cef7..f9876de 100644
--- a/src/java/org/apache/cassandra/utils/concurrent/Ref.java
+++ b/src/java/org/apache/cassandra/utils/concurrent/Ref.java
@@ -2,12 +2,10 @@ package org.apache.cassandra.utils.concurrent;
 
 import java.lang.ref.PhantomReference;
 import java.lang.ref.ReferenceQueue;
+import java.util.Collection;
 import java.util.Collections;
 import java.util.Set;
-import java.util.concurrent.ConcurrentHashMap;
-import java.util.concurrent.ConcurrentLinkedQueue;
-import java.util.concurrent.ExecutorService;
-import java.util.concurrent.Executors;
+import java.util.concurrent.*;
 import java.util.concurrent.atomic.AtomicInteger;
 import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;
 
@@ -233,7 +231,7 @@ public final class Ref<T> implements RefCounted<T>, AutoCloseable
 {
     // we need to retain a reference to each of the PhantomReference instances
     // we are using to track individual refs
-    private final ConcurrentLinkedQueue<State> locallyExtant = new ConcurrentLinkedQueue<>();
+    private final Collection<State> locallyExtant = new ConcurrentLinkedDeque<>();
 // the number of live refs
 private final AtomicInteger counts = new AtomicInteger();
 // the object to call to cleanup when our refs are all finished with

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7c5fc40b/test/unit/org/apache/cassandra/utils/concurrent/RefCountedTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/utils/concurrent/RefCountedTest.java 
b/test/unit/org/apache/cassandra/utils/concurrent/RefCountedTest.java
index a9247cd..bb173fe 100644
--- a/test/unit/org/apache/cassandra/utils/concurrent/RefCountedTest.java
+++ b/test/unit/org/apache/cassandra/utils/concurrent/RefCountedTest.java
@@ -21,6 +21,7 @@ package org.apache.cassandra.utils.concurrent;
 import org.junit.Test;
 
 import junit.framework.Assert;
+import org.apache.cassandra.utils.ObjectSizes;
 
 public class RefCountedTest
 {
@@ -82,4 +83,18 @@ public class RefCountedTest
 {
 }
 }
+
+    @Test
+    public void testMemoryLeak()
+    {
+        Tidier tidier = new Tidier();
+        Ref<Object> ref = new Ref<>(null, tidier);
+        long initialSize = ObjectSizes.measureDeep(ref);
+        for (int i = 0 ; i < 1000 ; i++)
+            ref.ref().release();
+        long finalSize = ObjectSizes.measureDeep(ref);
+        if (finalSize > initialSize * 2)
+            throw new AssertionError();
+        ref.release();
+    }
 }