[jira] [Commented] (CASSANDRA-8067) NullPointerException in KeyCacheSerializer

2015-03-03 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345629#comment-14345629
 ] 

Benedict commented on CASSANDRA-8067:
-

bq. but hesitant to do that in 2.1.x

Agreed

> NullPointerException in KeyCacheSerializer
> --
>
> Key: CASSANDRA-8067
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8067
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Eric Leleu
>Assignee: Aleksey Yeschenko
> Fix For: 2.1.4
>
> Attachments: 8067.txt
>
>
> Hi,
> I have this stack trace in the logs of Cassandra server (v2.1)
> {code}
> ERROR [CompactionExecutor:14] 2014-10-06 23:32:02,098 
> CassandraDaemon.java:166 - Exception in thread 
> Thread[CompactionExecutor:14,1,main]
> java.lang.NullPointerException: null
> at 
> org.apache.cassandra.service.CacheService$KeyCacheSerializer.serialize(CacheService.java:475)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.service.CacheService$KeyCacheSerializer.serialize(CacheService.java:463)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.cache.AutoSavingCache$Writer.saveCache(AutoSavingCache.java:225)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$11.run(CompactionManager.java:1061)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at java.util.concurrent.Executors$RunnableAdapter.call(Unknown 
> Source) ~[na:1.7.0]
> at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source) 
> ~[na:1.7.0]
> at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
> [na:1.7.0]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
> [na:1.7.0]
> at java.lang.Thread.run(Unknown Source) [na:1.7.0]
> {code}
> It may not be critical because this error occurred in the AutoSavingCache. 
> However, line 475 involves the CFMetaData, so it may hide a bigger issue...
> {code}
>  474 CFMetaData cfm = 
> Schema.instance.getCFMetaData(key.desc.ksname, key.desc.cfname);
>  475 cfm.comparator.rowIndexEntrySerializer().serialize(entry, 
> out);
> {code}
> Regards,
> Eric
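The snippet above fails when Schema.instance.getCFMetaData returns null, presumably because the key cache still holds entries for a table that was dropped after they were cached. Below is a minimal Python sketch of the failure mode and the null guard; the names are hypothetical and the attached 8067.txt may implement the fix differently.

```python
# Illustrative sketch only, not Cassandra's Java code. The schema registry and
# entry shapes are hypothetical stand-ins.
schema = {("ks", "cf_live"): "metadata"}  # "cf_dropped" no longer exists

def save_key_cache(entries):
    saved = []
    for ks, cf, entry in entries:
        cfm = schema.get((ks, cf))  # may be None for a dropped table
        if cfm is None:
            continue  # skip the stale entry instead of raising, as a fix would
        saved.append(entry)  # stands in for serializing the cache entry
    return saved

print(save_key_cache([("ks", "cf_live", 1), ("ks", "cf_dropped", 2)]))  # [1]
```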



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)



[jira] [Commented] (CASSANDRA-8067) NullPointerException in KeyCacheSerializer

2015-03-03 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345621#comment-14345621
 ] 

Aleksey Yeschenko commented on CASSANDRA-8067:
--

Agreed, but hesitant to do that in 2.1.x. I'll open a separate 3.0 ticket for 
just that.






[jira] [Commented] (CASSANDRA-8899) cqlsh - not able to get row count with select(*) for large table

2015-03-03 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345599#comment-14345599
 ] 

Tyler Hobbs commented on CASSANDRA-8899:


This was resolved for 3.0 by CASSANDRA-4914.  I don't believe it would be too 
difficult to make 2.0 and 2.1 not use the limit for the max {{count()}} result 
(without backporting the rest of the aggregate function changes).

[~blerer] do you want to take a look and see how realistic that is?
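As I understand the 2.0/2.1 behavior, the query LIMIT (cqlsh defaults to 10000) is applied before aggregation, so count(*) can never report more rows than the limit. A toy Python sketch of that capping behavior, not Cassandra code:

```python
def count_with_limit(rows, limit=10000):
    """Count rows, but stop scanning once the query limit is reached."""
    seen = 0
    for _ in rows:
        seen += 1
        if seen >= limit:  # the LIMIT caps the rows fed to the aggregate
            break
    return seen

print(count_with_limit(range(50000)))         # capped at the default: 10000
print(count_with_limit(range(50000), 60000))  # limit above row count: 50000
```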

> cqlsh - not able to get row count with select(*) for large table
> 
>
> Key: CASSANDRA-8899
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8899
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 2.1.2 ubuntu12.04
>Reporter: Jeff Liu
>
>  I'm getting errors when running a query that looks at a large number of rows.
> {noformat}
> cqlsh:events> select count(*) from catalog;
>  count
> ---
>  1
> (1 rows)
> cqlsh:events> select count(*) from catalog limit 11000;
>  count
> ---
>  11000
> (1 rows)
> cqlsh:events> select count(*) from catalog limit 5;
> errors={}, last_host=127.0.0.1
> cqlsh:events> 
> {noformat}
> We are not able to use a select * query to get the row count.





[jira] [Commented] (CASSANDRA-8877) Ability to read the TTL and WRITE TIME of an element in a collection

2015-03-03 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345569#comment-14345569
 ] 

Robert Stupp commented on CASSANDRA-8877:
-

It's related to CASSANDRA-7396 - i.e. it uses basically the same functionality 
(selecting individual collection elements). I'd prefer to make this ticket 
depend on CASSANDRA-7396.

> Ability to read the TTL and WRITE TIME of an element in a collection
> 
>
> Key: CASSANDRA-8877
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8877
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Drew Kutcharian
>Assignee: Benjamin Lerer
>Priority: Minor
> Fix For: 3.0
>
>
> Currently it's possible to set the TTL and WRITE TIME of an element in a 
> collection using CQL, but there is no way to read them back. 





[jira] [Updated] (CASSANDRA-8883) Percentile computation should use ceil not floor in EstimatedHistogram

2015-03-03 Thread Carl Yeksigian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Yeksigian updated CASSANDRA-8883:
--
Attachment: 8883-2.1.txt

Since numpy has access to the original values, it provides interpolation 
between the points if the percentile isn't exactly on a boundary:
{code}
np.percentile(np.array([1, 2, 3, 4, 5, 6]), 50)
==> 3.5
{code}
Since we are using the histogram, we don't really know where the true value 
lands, so we just need to return a value inside the range. Currently we return 
the end of the range preceding the one in which the percentile falls.

I've changed EstimatedHistogram to use ceil instead of floor, and updated the 
tests accordingly.
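Using the ticket's five-element example, the floor/ceil difference can be reproduced with a small sketch of a histogram-based percentile. This is a hypothetical simplification, not the actual EstimatedHistogram code:

```python
import math

def percentile(bucket_offsets, buckets, p, use_ceil=True):
    """bucket_offsets[i] is the upper bound of bucket i; buckets[i] its count.
    Returns the offset of the bucket containing the p-th percentile."""
    total = sum(buckets)
    # target rank: number of elements at or below the percentile
    pcount = math.ceil(total * p) if use_ceil else math.floor(total * p)
    elements = 0
    for offset, count in zip(bucket_offsets, buckets):
        elements += count
        if elements >= pcount:
            return offset
    return bucket_offsets[-1]

offsets = [1, 2, 3, 4, 5]
counts = [1, 1, 1, 1, 1]
print(percentile(offsets, counts, 0.50, use_ceil=False))  # floor: 2
print(percentile(offsets, counts, 0.50, use_ceil=True))   # ceil: 3
```

With floor, the 50th-percentile rank of 2.5 truncates to 2 and the scan stops one bucket early; with ceil it rounds up to 3, matching numpy's answer for this data.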

> Percentile computation should use ceil not floor in EstimatedHistogram
> --
>
> Key: CASSANDRA-8883
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8883
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Chris Lohfink
>Assignee: Carl Yeksigian
>Priority: Minor
> Fix For: 2.1.4
>
> Attachments: 8883-2.1.txt
>
>
> When computing the pcount Cassandra uses floor and the comparison with 
> elements is >= so given a simple example of there being a total of five 
> elements
> {code}
> // data
> [1, 1, 1, 1, 1]
> // offsets
> [1, 2, 3, 4, 5]
> {code}
> Cassandra would report the 50th percentile as 2, while 3 is the more 
> expected value. As a comparison, using numpy:
> {code}
> import numpy as np
> np.percentile(np.array([1, 2, 3, 4, 5]), 50)
> ==> 3.0
> {code}
> The percentile computation was added in CASSANDRA-4022 but is now used 
> heavily in the metrics Cassandra reports. I think it should err on the side 
> of overestimating instead of underestimating. 





[jira] [Created] (CASSANDRA-8899) cqlsh not able to get row count with select(*) with large table

2015-03-03 Thread Jeff Liu (JIRA)
Jeff Liu created CASSANDRA-8899:
---

 Summary: cqlsh not able to get row count with select(*) with large 
table
 Key: CASSANDRA-8899
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8899
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.1.2 ubuntu12.04
Reporter: Jeff Liu


 I'm getting errors when running a query that looks at a large number of rows.
{noformat}
cqlsh:events> select count(*) from catalog;

 count
---
 1

(1 rows)

cqlsh:events> select count(*) from catalog limit 11000;

 count
---
 11000

(1 rows)

cqlsh:events> select count(*) from catalog limit 5;
errors={}, last_host=127.0.0.1
cqlsh:events> 
{noformat}

We don't make queries w/o a WHERE clause in Chisel itself, but I can't validate 
that the correct number of rows is being inserted into the table.





[jira] [Updated] (CASSANDRA-8899) cqlsh - not able to get row count with select(*) for large table

2015-03-03 Thread Jeff Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Liu updated CASSANDRA-8899:

Description: 
 I'm getting errors when running a query that looks at a large number of rows.
{noformat}
cqlsh:events> select count(*) from catalog;

 count
---
 1

(1 rows)

cqlsh:events> select count(*) from catalog limit 11000;

 count
---
 11000

(1 rows)

cqlsh:events> select count(*) from catalog limit 5;
errors={}, last_host=127.0.0.1
cqlsh:events> 
{noformat}

We are not able to make the select * query to get row count.

  was:
 I'm getting errors when running a query that looks at a large number of rows.
{noformat}
cqlsh:events> select count(*) from catalog;

 count
---
 1

(1 rows)

cqlsh:events> select count(*) from catalog limit 11000;

 count
---
 11000

(1 rows)

cqlsh:events> select count(*) from catalog limit 5;
errors={}, last_host=127.0.0.1
cqlsh:events> 
{noformat}

We are not able to make the select(*) query to get row count.







[jira] [Updated] (CASSANDRA-8899) cqlsh - not able to get row count with select(*) for large table

2015-03-03 Thread Jeff Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Liu updated CASSANDRA-8899:

Description: 
 I'm getting errors when running a query that looks at a large number of rows.
{noformat}
cqlsh:events> select count(*) from catalog;

 count
---
 1

(1 rows)

cqlsh:events> select count(*) from catalog limit 11000;

 count
---
 11000

(1 rows)

cqlsh:events> select count(*) from catalog limit 5;
errors={}, last_host=127.0.0.1
cqlsh:events> 
{noformat}

We are not able to make the select(*) query to get row count.

  was:
 I'm getting errors when running a query that looks at a large number of rows.
{noformat}
cqlsh:events> select count(*) from catalog;

 count
---
 1

(1 rows)

cqlsh:events> select count(*) from catalog limit 11000;

 count
---
 11000

(1 rows)

cqlsh:events> select count(*) from catalog limit 5;
errors={}, last_host=127.0.0.1
cqlsh:events> 
{noformat}

We don't make queries w/o a WHERE clause in Chisel itself but I can't validate 
the correct number of rows are being inserted into the table.







[jira] [Updated] (CASSANDRA-8899) cqlsh - not able to get row count with select(*) with large table

2015-03-03 Thread Jeff Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Liu updated CASSANDRA-8899:

Summary: cqlsh - not able to get row count with select(*) with large table  
(was: cqlsh not able to get row count with select(*) with large table)






[jira] [Updated] (CASSANDRA-8899) cqlsh - not able to get row count with select(*) for large table

2015-03-03 Thread Jeff Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Liu updated CASSANDRA-8899:

Summary: cqlsh - not able to get row count with select(*) for large table  
(was: cqlsh - not able to get row count with select(*) with large table)






[jira] [Updated] (CASSANDRA-7816) Duplicate DOWN/UP Events Pushed with Native Protocol

2015-03-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-7816:
---
  Component/s: (was: Documentation & website)
   API
 Priority: Minor  (was: Trivial)
Fix Version/s: 2.1.4
   2.0.13
   Issue Type: Bug  (was: Improvement)
  Summary: Duplicate DOWN/UP Events Pushed with Native Protocol  (was: 
Updated the "4.2.6. EVENT" section in the binary protocol specification)

I went ahead and committed the patch to update the native protocol specs as 
72c6ed288, since there was no debate there.

I've updated the ticket title and fields to reflect the current issue of 
duplicate notifications.
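The client-side behavior the updated spec text asks for can be sketched as a tiny dedup table: ignore a STATUS_CHANGE event that does not change the locally known state of the node. Illustrative Python, not taken from any particular driver:

```python
# Last known status per node address; a real client would key this per cluster.
node_state = {}

def on_status_change(addr, status):
    """Return True if the event changed our view of the node, False if it was
    a duplicate notification that should be ignored."""
    if node_state.get(addr) == status:
        return False  # same event delivered again: ignore it
    node_state[addr] = status
    return True

print(on_status_change("10.0.0.1", "UP"))    # True
print(on_status_change("10.0.0.1", "UP"))    # False (duplicate)
print(on_status_change("10.0.0.1", "DOWN"))  # True
```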

> Duplicate DOWN/UP Events Pushed with Native Protocol
> 
>
> Key: CASSANDRA-7816
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7816
> Project: Cassandra
>  Issue Type: Bug
>  Components: API
>Reporter: Michael Penick
>Assignee: Stefania
>Priority: Minor
> Fix For: 2.0.13, 2.1.4
>
> Attachments: tcpdump_repeating_status_change.txt, trunk-7816.txt
>
>
> Added "MOVED_NODE" as a possible type of topology change and also specified 
> that it is possible to receive the same event multiple times.





[1/2] cassandra git commit: Add missing MOVED_NODE event to native protocol spec

2015-03-03 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 2f7077c06 -> 3f6ad3c98


Add missing MOVED_NODE event to native protocol spec

Patch by Michael Penick; reviewed by Tyler Hobbs for CASSANDRA-7816


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/72c6ed28
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/72c6ed28
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/72c6ed28

Branch: refs/heads/cassandra-2.1
Commit: 72c6ed2883a24486f6785b53cf73fdc8e78e2765
Parents: 33a3a09
Author: Michael Penick 
Authored: Tue Mar 3 12:47:41 2015 -0600
Committer: Tyler Hobbs 
Committed: Tue Mar 3 12:47:41 2015 -0600

--
 doc/native_protocol_v1.spec | 7 +--
 doc/native_protocol_v2.spec | 8 ++--
 2 files changed, 11 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/72c6ed28/doc/native_protocol_v1.spec
--
diff --git a/doc/native_protocol_v1.spec b/doc/native_protocol_v1.spec
index bc2bb78..41146f9 100644
--- a/doc/native_protocol_v1.spec
+++ b/doc/native_protocol_v1.spec
@@ -486,8 +486,8 @@ Table of Contents
   Currently, events are sent when new nodes are added to the cluster, and
   when nodes are removed. The body of the message (after the event type)
   consists of a [string] and an [inet], corresponding respectively to the
-  type of change ("NEW_NODE" or "REMOVED_NODE") followed by the address of
-  the new/removed node.
+  type of change ("NEW_NODE", "REMOVED_NODE", or "MOVED_NODE") followed
+  by the address of the new/removed/moved node.
 - "STATUS_CHANGE": events related to change of node status. Currently,
   up/down events are sent. The body of the message (after the event type)
   consists of a [string] and an [inet], corresponding respectively to the
@@ -509,6 +509,9 @@ Table of Contents
   should be enough), otherwise they may experience a connection refusal at
   first.
 
+  It is possible for the same event to be sent multiple times. Therefore,
+  a client library should ignore the same event if it has already been notified
+  of a change.
 
 5. Compression
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/72c6ed28/doc/native_protocol_v2.spec
--
diff --git a/doc/native_protocol_v2.spec b/doc/native_protocol_v2.spec
index ef54099..584ae2f 100644
--- a/doc/native_protocol_v2.spec
+++ b/doc/native_protocol_v2.spec
@@ -604,8 +604,8 @@ Table of Contents
   Currently, events are sent when new nodes are added to the cluster, and
   when nodes are removed. The body of the message (after the event type)
   consists of a [string] and an [inet], corresponding respectively to the
-  type of change ("NEW_NODE" or "REMOVED_NODE") followed by the address of
-  the new/removed node.
+  type of change ("NEW_NODE", "REMOVED_NODE", or "MOVED_NODE") followed
+  by the address of the new/removed/moved node.
 - "STATUS_CHANGE": events related to change of node status. Currently,
   up/down events are sent. The body of the message (after the event type)
   consists of a [string] and an [inet], corresponding respectively to the
@@ -627,6 +627,10 @@ Table of Contents
   should be enough), otherwise they may experience a connection refusal at
   first.
 
+  It is possible for the same event to be sent multiple times. Therefore,
+  a client library should ignore the same event if it has already been notified
+  of a change.
+
 4.2.7. AUTH_CHALLENGE
 
   A server authentication challenge (see AUTH_RESPONSE (Section 4.1.2) for more



[2/3] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-03-03 Thread tylerhobbs
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3f6ad3c9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3f6ad3c9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3f6ad3c9

Branch: refs/heads/trunk
Commit: 3f6ad3c9886c01c2cdaed6cad10c6f0672004473
Parents: 2f7077c 72c6ed2
Author: Tyler Hobbs 
Authored: Tue Mar 3 12:50:20 2015 -0600
Committer: Tyler Hobbs 
Committed: Tue Mar 3 12:50:20 2015 -0600

--
 doc/native_protocol_v1.spec | 7 +--
 doc/native_protocol_v2.spec | 8 ++--
 doc/native_protocol_v3.spec | 8 ++--
 3 files changed, 17 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3f6ad3c9/doc/native_protocol_v1.spec
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3f6ad3c9/doc/native_protocol_v2.spec
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3f6ad3c9/doc/native_protocol_v3.spec
--
diff --cc doc/native_protocol_v3.spec
index 1d35d50,000..9894d76
mode 100644,00..100644
--- a/doc/native_protocol_v3.spec
+++ b/doc/native_protocol_v3.spec
@@@ -1,1027 -1,0 +1,1031 @@@
 +
 + CQL BINARY PROTOCOL v3
 +
 +
 +Table of Contents
 +
 +  1. Overview
 +  2. Frame header
 +2.1. version
 +2.2. flags
 +2.3. stream
 +2.4. opcode
 +2.5. length
 +  3. Notations
 +  4. Messages
 +4.1. Requests
 +  4.1.1. STARTUP
 +  4.1.2. AUTH_RESPONSE
 +  4.1.3. OPTIONS
 +  4.1.4. QUERY
 +  4.1.5. PREPARE
 +  4.1.6. EXECUTE
 +  4.1.7. BATCH
 +  4.1.8. REGISTER
 +4.2. Responses
 +  4.2.1. ERROR
 +  4.2.2. READY
 +  4.2.3. AUTHENTICATE
 +  4.2.4. SUPPORTED
 +  4.2.5. RESULT
 +4.2.5.1. Void
 +4.2.5.2. Rows
 +4.2.5.3. Set_keyspace
 +4.2.5.4. Prepared
 +4.2.5.5. Schema_change
 +  4.2.6. EVENT
 +  4.2.7. AUTH_CHALLENGE
 +  4.2.8. AUTH_SUCCESS
 +  5. Compression
 +  6. Data Type Serialization Formats
 +  7. User Defined Type Serialization
 +  8. Result paging
 +  9. Error codes
 +  10. Changes from v2
 +
 +
 +1. Overview
 +
 +  The CQL binary protocol is a frame based protocol. Frames are defined as:
 +
 +  0 8162432 40
 +  +-+-+-+-+-+
 +  | version |  flags  |  stream   | opcode  |
 +  +-+-+-+-+-+
 +  |length |
 +  +-+-+-+-+
 +  |   |
 +  ....  body ...  .
 +  .   .
 +  .   .
 +  +
 +
 +  The protocol is big-endian (network byte order).
 +
 +  Each frame contains a fixed size header (9 bytes) followed by a variable 
size
 +  body. The header is described in Section 2. The content of the body depends
 +  on the header opcode value (the body can in particular be empty for some
 +  opcode values). The list of allowed opcode is defined Section 2.3 and the
 +  details of each corresponding message is described Section 4.
 +
 +  The protocol distinguishes 2 types of frames: requests and responses. 
Requests
 +  are those frame sent by the clients to the server, response are the ones 
sent
 +  by the server. Note however that the protocol supports server pushes 
(events)
 +  so responses does not necessarily come right after a client request.
 +
 +  Note to client implementors: clients library should always assume that the
 +  body of a given frame may contain more data than what is described in this
 +  document. It will however always be safe to ignore the remaining of the 
frame
 +  body in such cases. The reason is that this may allow to sometimes extend 
the
 +  protocol with optional features without needing to change the protocol
 +  version.
 +
 +
 +
 +2. Frame header
 +
 +2.1. version
 +
 +  The version is a single byte that indicate both the direction of the message
 +  (request or response) and the version of the protocol in use. The up-most 
bit
 +  of version is used to define the direction of the message: 0 indicates a
 +  request, 1 indicates a responses. This can be useful for protocol analyzers 
to
 +  distinguish the nature of the packet from the direction which it is moving.
 +  The rest of that byte is the protocol version (3 for the protocol defined in
 +  this document). In other words, for this version of the pro

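The 9-byte big-endian frame header described in the v3 spec above (version, flags, stream, opcode, length) can be decoded in a few lines. A hedged sketch for illustration, not taken from any driver:

```python
import struct

def parse_header(frame):
    """Unpack a v3 frame header: 1B version, 1B flags, 2B signed stream,
    1B opcode, 4B length, all big-endian per the spec."""
    version, flags, stream, opcode, length = struct.unpack(">BBhBI", frame[:9])
    direction = "response" if version & 0x80 else "request"
    return direction, version & 0x7F, flags, stream, opcode, length

# A response frame (direction bit set, protocol version 3), opcode 8 = RESULT.
hdr = struct.pack(">BBhBI", 0x83, 0, 1, 8, 100)
print(parse_header(hdr))  # ('response', 3, 0, 1, 8, 100)
```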
[3/3] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-03-03 Thread tylerhobbs
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fccf0b4f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fccf0b4f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fccf0b4f

Branch: refs/heads/trunk
Commit: fccf0b4f66c9ed60fa5bad10174676424a97
Parents: 2818ca4 3f6ad3c
Author: Tyler Hobbs 
Authored: Tue Mar 3 12:50:48 2015 -0600
Committer: Tyler Hobbs 
Committed: Tue Mar 3 12:50:48 2015 -0600

--
 doc/native_protocol_v1.spec | 7 +--
 doc/native_protocol_v2.spec | 8 ++--
 doc/native_protocol_v3.spec | 8 ++--
 3 files changed, 17 insertions(+), 6 deletions(-)
--




[1/3] cassandra git commit: Add missing MOVED_NODE event to native protocol spec

2015-03-03 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk 2818ca4cf -> fccf0b4f6


Add missing MOVED_NODE event to native protocol spec

Patch by Michael Penick; reviewed by Tyler Hobbs for CASSANDRA-7816


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/72c6ed28
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/72c6ed28
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/72c6ed28

Branch: refs/heads/trunk
Commit: 72c6ed2883a24486f6785b53cf73fdc8e78e2765
Parents: 33a3a09
Author: Michael Penick 
Authored: Tue Mar 3 12:47:41 2015 -0600
Committer: Tyler Hobbs 
Committed: Tue Mar 3 12:47:41 2015 -0600

--
 doc/native_protocol_v1.spec | 7 +--
 doc/native_protocol_v2.spec | 8 ++--
 2 files changed, 11 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/72c6ed28/doc/native_protocol_v1.spec
--
diff --git a/doc/native_protocol_v1.spec b/doc/native_protocol_v1.spec
index bc2bb78..41146f9 100644
--- a/doc/native_protocol_v1.spec
+++ b/doc/native_protocol_v1.spec
@@ -486,8 +486,8 @@ Table of Contents
   Currently, events are sent when new nodes are added to the cluster, and
   when nodes are removed. The body of the message (after the event type)
   consists of a [string] and an [inet], corresponding respectively to the
-  type of change ("NEW_NODE" or "REMOVED_NODE") followed by the address of
-  the new/removed node.
+  type of change ("NEW_NODE", "REMOVED_NODE", or "MOVED_NODE") followed
+  by the address of the new/removed/moved node.
 - "STATUS_CHANGE": events related to change of node status. Currently,
   up/down events are sent. The body of the message (after the event type)
   consists of a [string] and an [inet], corresponding respectively to the
@@ -509,6 +509,9 @@ Table of Contents
   should be enough), otherwise they may experience a connection refusal at
   first.
 
+  It is possible for the same event to be sent multiple times. Therefore,
+  a client library should ignore the same event if it has already been notified
+  of a change.
 
 5. Compression
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/72c6ed28/doc/native_protocol_v2.spec
--
diff --git a/doc/native_protocol_v2.spec b/doc/native_protocol_v2.spec
index ef54099..584ae2f 100644
--- a/doc/native_protocol_v2.spec
+++ b/doc/native_protocol_v2.spec
@@ -604,8 +604,8 @@ Table of Contents
   Currently, events are sent when new nodes are added to the cluster, and
   when nodes are removed. The body of the message (after the event type)
   consists of a [string] and an [inet], corresponding respectively to the
-  type of change ("NEW_NODE" or "REMOVED_NODE") followed by the address of
-  the new/removed node.
+  type of change ("NEW_NODE", "REMOVED_NODE", or "MOVED_NODE") followed
+  by the address of the new/removed/moved node.
 - "STATUS_CHANGE": events related to change of node status. Currently,
   up/down events are sent. The body of the message (after the event type)
   consists of a [string] and an [inet], corresponding respectively to the
@@ -627,6 +627,10 @@ Table of Contents
   should be enough), otherwise they may experience a connection refusal at
   first.
 
+  It is possible for the same event to be sent multiple times. Therefore,
+  a client library should ignore the same event if it has already been notified
+  of a change.
+
 4.2.7. AUTH_CHALLENGE
 
   A server authentication challenge (see AUTH_RESPONSE (Section 4.1.2) for more
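Since the spec change above warns that the same event may be sent multiple times, a client library needs to ignore events it has already processed. A minimal sketch of such a registry (the class and method names are illustrative, not from any real driver):

```python
class TopologyTracker:
    """Tracks known cluster members so duplicate TOPOLOGY_CHANGE
    events can be ignored, as the amended spec recommends."""

    def __init__(self):
        self.nodes = set()

    def on_event(self, change, address):
        """Return True if the event changed our view of the cluster,
        False if it was a duplicate and should be ignored."""
        if change == "NEW_NODE":
            if address in self.nodes:
                return False          # already knew about this node
            self.nodes.add(address)
            return True
        if change == "REMOVED_NODE":
            if address not in self.nodes:
                return False          # already removed
            self.nodes.discard(address)
            return True
        if change == "MOVED_NODE":
            # A move does not alter membership; refreshing token
            # metadata is idempotent, so always process it.
            self.nodes.add(address)
            return True
        return False
```

Membership events are the easy case; a real driver would also refresh token ownership on MOVED_NODE.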



[2/2] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2015-03-03 Thread tylerhobbs
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3f6ad3c9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3f6ad3c9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3f6ad3c9

Branch: refs/heads/cassandra-2.1
Commit: 3f6ad3c9886c01c2cdaed6cad10c6f0672004473
Parents: 2f7077c 72c6ed2
Author: Tyler Hobbs 
Authored: Tue Mar 3 12:50:20 2015 -0600
Committer: Tyler Hobbs 
Committed: Tue Mar 3 12:50:20 2015 -0600

--
 doc/native_protocol_v1.spec | 7 +--
 doc/native_protocol_v2.spec | 8 ++--
 doc/native_protocol_v3.spec | 8 ++--
 3 files changed, 17 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3f6ad3c9/doc/native_protocol_v1.spec
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3f6ad3c9/doc/native_protocol_v2.spec
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3f6ad3c9/doc/native_protocol_v3.spec
--
diff --cc doc/native_protocol_v3.spec
index 1d35d50,000..9894d76
mode 100644,00..100644
--- a/doc/native_protocol_v3.spec
+++ b/doc/native_protocol_v3.spec
@@@ -1,1027 -1,0 +1,1031 @@@
 +
 + CQL BINARY PROTOCOL v3
 +
 +
 +Table of Contents
 +
 +  1. Overview
 +  2. Frame header
 +2.1. version
 +2.2. flags
 +2.3. stream
 +2.4. opcode
 +2.5. length
 +  3. Notations
 +  4. Messages
 +4.1. Requests
 +  4.1.1. STARTUP
 +  4.1.2. AUTH_RESPONSE
 +  4.1.3. OPTIONS
 +  4.1.4. QUERY
 +  4.1.5. PREPARE
 +  4.1.6. EXECUTE
 +  4.1.7. BATCH
 +  4.1.8. REGISTER
 +4.2. Responses
 +  4.2.1. ERROR
 +  4.2.2. READY
 +  4.2.3. AUTHENTICATE
 +  4.2.4. SUPPORTED
 +  4.2.5. RESULT
 +4.2.5.1. Void
 +4.2.5.2. Rows
 +4.2.5.3. Set_keyspace
 +4.2.5.4. Prepared
 +4.2.5.5. Schema_change
 +  4.2.6. EVENT
 +  4.2.7. AUTH_CHALLENGE
 +  4.2.8. AUTH_SUCCESS
 +  5. Compression
 +  6. Data Type Serialization Formats
 +  7. User Defined Type Serialization
 +  8. Result paging
 +  9. Error codes
 +  10. Changes from v2
 +
 +
 +1. Overview
 +
 +  The CQL binary protocol is a frame based protocol. Frames are defined as:
 +
 +  0         8        16        24        32         40
 +  +---------+---------+---------+---------+---------+
 +  | version |  flags  |  stream   | opcode  |
 +  +---------+---------+---------+---------+---------+
 +  |                length                 |
 +  +---------+---------+---------+---------+
 +  |                                       |
 +  .            ...  body ...              .
 +  .                                       .
 +  .                                       .
 +  +----------------------------------------
 +
 +  The protocol is big-endian (network byte order).
 +
 +  Each frame contains a fixed size header (9 bytes) followed by a variable size
 +  body. The header is described in Section 2. The content of the body depends
 +  on the header opcode value (the body can in particular be empty for some
 +  opcode values). The list of allowed opcode is defined Section 2.3 and the
 +  details of each corresponding message is described Section 4.
 +
 +  The protocol distinguishes 2 types of frames: requests and responses. Requests
 +  are those frame sent by the clients to the server, response are the ones sent
 +  by the server. Note however that the protocol supports server pushes (events)
 +  so responses does not necessarily come right after a client request.
 +
 +  Note to client implementors: clients library should always assume that the
 +  body of a given frame may contain more data than what is described in this
 +  document. It will however always be safe to ignore the remaining of the frame
 +  body in such cases. The reason is that this may allow to sometimes extend the
 +  protocol with optional features without needing to change the protocol
 +  version.
 +
 +
 +
 +2. Frame header
 +
 +2.1. version
 +
 +  The version is a single byte that indicate both the direction of the message
 +  (request or response) and the version of the protocol in use. The up-most bit
 +  of version is used to define the direction of the message: 0 indicates a
 +  request, 1 indicates a responses. This can be useful for protocol analyzers to
 +  distinguish the nature of the packet from the direction which it is moving.
 +  The rest of that byte is the protocol version (3 for the protocol defined in
 +  this document). In other words, for this version of
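The 9-byte v3 frame header laid out in the diagram above (1-byte version with a direction bit, 1-byte flags, 2-byte stream, 1-byte opcode, 4-byte length, all big-endian) can be unpacked in a few lines. A sketch, not taken from any driver:

```python
import struct

def parse_frame_header(header: bytes):
    """Parse a 9-byte native protocol v3 frame header (big-endian).

    Returns (direction, protocol_version, flags, stream, opcode, length),
    where direction is 0 for a request and 1 for a response.
    """
    version, flags, stream, opcode, length = struct.unpack('>BBhBI', header)
    direction = version >> 7           # up-most bit of the version byte
    protocol_version = version & 0x7F  # 3 for the protocol in this spec
    return direction, protocol_version, flags, stream, opcode, length
```

For example, `0x83` in the first byte means "response, protocol version 3".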

cassandra git commit: Add missing MOVED_NODE event to native protocol spec

2015-03-03 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 33a3a09cb -> 72c6ed288


Add missing MOVED_NODE event to native protocol spec

Patch by Michael Penick; reviewed by Tyler Hobbs for CASSANDRA-7816


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/72c6ed28
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/72c6ed28
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/72c6ed28

Branch: refs/heads/cassandra-2.0
Commit: 72c6ed2883a24486f6785b53cf73fdc8e78e2765
Parents: 33a3a09
Author: Michael Penick 
Authored: Tue Mar 3 12:47:41 2015 -0600
Committer: Tyler Hobbs 
Committed: Tue Mar 3 12:47:41 2015 -0600

--
 doc/native_protocol_v1.spec | 7 +--
 doc/native_protocol_v2.spec | 8 ++--
 2 files changed, 11 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/72c6ed28/doc/native_protocol_v1.spec
--
diff --git a/doc/native_protocol_v1.spec b/doc/native_protocol_v1.spec
index bc2bb78..41146f9 100644
--- a/doc/native_protocol_v1.spec
+++ b/doc/native_protocol_v1.spec
@@ -486,8 +486,8 @@ Table of Contents
   Currently, events are sent when new nodes are added to the cluster, and
   when nodes are removed. The body of the message (after the event type)
   consists of a [string] and an [inet], corresponding respectively to the
-  type of change ("NEW_NODE" or "REMOVED_NODE") followed by the address of
-  the new/removed node.
+  type of change ("NEW_NODE", "REMOVED_NODE", or "MOVED_NODE") followed
+  by the address of the new/removed/moved node.
 - "STATUS_CHANGE": events related to change of node status. Currently,
   up/down events are sent. The body of the message (after the event type)
   consists of a [string] and an [inet], corresponding respectively to the
@@ -509,6 +509,9 @@ Table of Contents
   should be enough), otherwise they may experience a connection refusal at
   first.
 
+  It is possible for the same event to be sent multiple times. Therefore,
+  a client library should ignore the same event if it has already been notified
+  of a change.
 
 5. Compression
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/72c6ed28/doc/native_protocol_v2.spec
--
diff --git a/doc/native_protocol_v2.spec b/doc/native_protocol_v2.spec
index ef54099..584ae2f 100644
--- a/doc/native_protocol_v2.spec
+++ b/doc/native_protocol_v2.spec
@@ -604,8 +604,8 @@ Table of Contents
   Currently, events are sent when new nodes are added to the cluster, and
   when nodes are removed. The body of the message (after the event type)
   consists of a [string] and an [inet], corresponding respectively to the
-  type of change ("NEW_NODE" or "REMOVED_NODE") followed by the address of
-  the new/removed node.
+  type of change ("NEW_NODE", "REMOVED_NODE", or "MOVED_NODE") followed
+  by the address of the new/removed/moved node.
 - "STATUS_CHANGE": events related to change of node status. Currently,
   up/down events are sent. The body of the message (after the event type)
   consists of a [string] and an [inet], corresponding respectively to the
@@ -627,6 +627,10 @@ Table of Contents
   should be enough), otherwise they may experience a connection refusal at
   first.
 
+  It is possible for the same event to be sent multiple times. Therefore,
+  a client library should ignore the same event if it has already been notified
+  of a change.
+
 4.2.7. AUTH_CHALLENGE
 
   A server authentication challenge (see AUTH_RESPONSE (Section 4.1.2) for more
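The TOPOLOGY_CHANGE body described in the patched spec is a [string] (2-byte length-prefixed UTF-8) followed by an [inet] (1-byte address size, raw address bytes, then a 4-byte signed port). A decoding sketch under those assumptions:

```python
import socket
import struct

def parse_topology_change(body: bytes):
    """Decode the [string][inet] body of a TOPOLOGY_CHANGE event.

    Returns (change_type, address, port), where change_type is one of
    "NEW_NODE", "REMOVED_NODE", or "MOVED_NODE".
    """
    (n,) = struct.unpack_from('>H', body, 0)     # [string] length prefix
    change = body[2:2 + n].decode('utf-8')
    off = 2 + n
    size = body[off]                             # 4 (IPv4) or 16 (IPv6)
    off += 1
    family = socket.AF_INET if size == 4 else socket.AF_INET6
    address = socket.inet_ntop(family, body[off:off + size])
    (port,) = struct.unpack_from('>i', body, off + size)  # [int] port
    return change, address, port
```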



[jira] [Commented] (CASSANDRA-8861) HyperLogLog Collection Type

2015-03-03 Thread Drew Kutcharian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345487#comment-14345487
 ] 

Drew Kutcharian commented on CASSANDRA-8861:


Thanks [~iamaleksey]

> HyperLogLog Collection Type
> ---
>
> Key: CASSANDRA-8861
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8861
> Project: Cassandra
>  Issue Type: Wish
>Reporter: Drew Kutcharian
>Assignee: Aleksey Yeschenko
> Fix For: 3.1
>
>
> Considering that HyperLogLog and its variants have become pretty popular in 
> analytics space and Cassandra has "read-before-write" collections (Lists), I 
> think it would not be too painful to add support for HyperLogLog "collection" 
> type. They would act similar to CQL 3 Sets, meaning you would be able to 
> "set" the value and "add" an element, but you won't be able to remove an 
> element. Also, when getting the value of a HyperLogLog collection column, 
> you'd get the cardinality.
> There are a couple of good attributes with HyperLogLog which fit Cassandra 
> pretty well.
> - Adding an element is idempotent (adding an existing element doesn't change 
> the HLL)
> - HLL can be thought of as a CRDT, since we can safely merge them. Which 
> means we can merge two HLLs during read-repair. But if that's too much work, 
> I guess we can even live with LWW since these counts are "estimates" after 
> all.
> There is already a proof of concept at:
> http://vilkeliskis.com/blog/2013/12/28/hacking_cassandra.html
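The two properties called out in the description, idempotent adds and CRDT-style merging, are easy to see in a toy HyperLogLog. A rough sketch (this is not the linked proof of concept and is not tuned for accuracy):

```python
import hashlib
import math

class HLL:
    """Toy HyperLogLog: 2**p registers, each holding the maximum
    leading-zero rank seen among hashes routed to it."""

    def __init__(self, p=10):
        self.p = p
        self.m = 1 << p
        self.regs = [0] * self.m

    def add(self, item: str):
        # Idempotent: re-adding an item can never lower a register.
        h = int.from_bytes(hashlib.sha1(item.encode()).digest()[:8], 'big')
        idx = h >> (64 - self.p)
        rest = h & ((1 << (64 - self.p)) - 1)
        rank = (64 - self.p) - rest.bit_length() + 1
        self.regs[idx] = max(self.regs[idx], rank)

    def merge(self, other: "HLL"):
        # CRDT-style merge: element-wise max is commutative, associative,
        # and idempotent, so replicas can be combined in any order.
        self.regs = [max(a, b) for a, b in zip(self.regs, other.regs)]

    def estimate(self) -> float:
        alpha = 0.7213 / (1 + 1.079 / self.m)
        raw = alpha * self.m * self.m / sum(2.0 ** -r for r in self.regs)
        zeros = self.regs.count(0)
        if raw <= 2.5 * self.m and zeros:
            return self.m * math.log(self.m / zeros)  # linear counting
        return raw
```

Because merge is just an element-wise max, merging two replicas during read-repair, as the comment suggests, is safe.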



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8877) Ability to read the TTL and WRITE TIME of an element in a collection

2015-03-03 Thread Drew Kutcharian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345474#comment-14345474
 ] 

Drew Kutcharian edited comment on CASSANDRA-8877 at 3/3/15 6:46 PM:


[~slebresne] you are correct that this relates to CASSANDRA-7396. The ideal 
situation would be:

1. Be able to select the value of an element in a collection individually, 
i.e. 
{code}
SELECT fields['first_name'] from user
{code}

2. Be able to select the value, TTL and writetime of an element in a 
collection individually
{code}
SELECT TTL(fields['first_name']), WRITETIME(fields['first_name']) from user
{code}

3. Be able to select the values of ALL the elements in a collection (this is 
the current functionality when selecting a collection column)
{code}
SELECT fields from user
{code}

Optionally:
4. Be able to select the value, TTL and writetime of ALL the elements in a 
collection. This is where I haven't come up with a good syntax but maybe 
something like this:
{code}
SELECT fields, METADATA(fields) from user
{code}

and the response would be
{code}
fields => { 'first_name': 'john', 'last_name': 'doe' }

METADATA(fields) => { 'first_name': {'ttl': , 'writetime': 
 }, 'last_name': {'ttl': , 'writetime':  } }
{code}

or alternatively (without adding a new function):
{code}
SELECT fields, TTL(fields), WRITETIME(fields) from user
{code}

and the response would be
{code}
fields => { 'first_name': 'john', 'last_name': 'doe' }

TTL(fields) => { 'first_name': , 'last_name':  }

WRITETIME(fields) => { 'first_name': , 'last_name': 
 }
{code}



was (Author: drew_kutchar):
[~slebresne] you are correct that this relates to CASSANDRA-7396. The ideal 
situation would be:

1. Be able to select the value of an element in a collection individually, 
i.e. 
{code}
SELECT fields['name'] from user
{code}

2. Be able to select the value, TTL and writetime of an element in a 
collection individually
{code}
SELECT TTL(fields['name']), WRITETIME(fields['name']) from user
{code}

3. Be able to select the values of ALL the elements in a collection (this is 
the current functionality when selecting a collection column)
{code}
SELECT fields from user
{code}

Optionally:
4. Be able to select the value, TTL and writetime of ALL the elements in a 
collection. This is where I haven't come up with a good syntax but maybe 
something like this:
{code}
SELECT fields, METADATA(fields) from user
{code}

and the response would be
{code}
fields: { 'name': 'john' }
METADATA(fields): { 'name': {'ttl': , 'writetime':  } }
{code}

or alternatively (without adding a new function):
{code}
SELECT fields, TTL(fields), WRITETIME(fields) from user
{code}

and the response would be
{code}
fields: { 'name': 'john' }
TTL(fields): { 'name':  }
WRITETIME(fields): { 'name':  }
{code}
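The METADATA(fields) response shape proposed above could be assembled client-side from per-element TTL and writetime maps. A sketch under that assumption (the maps, and the query that would produce them, are hypothetical; no such CQL feature exists yet):

```python
def element_metadata(ttls: dict, writetimes: dict) -> dict:
    """Combine hypothetical per-element TTL and writetime maps into
    the nested METADATA(...) shape proposed in the comment above.

    Elements missing from one map get None for that attribute.
    """
    keys = set(ttls) | set(writetimes)
    return {k: {'ttl': ttls.get(k), 'writetime': writetimes.get(k)}
            for k in keys}
```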


> Ability to read the TTL and WRITE TIME of an element in a collection
> 
>
> Key: CASSANDRA-8877
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8877
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Drew Kutcharian
>Assignee: Benjamin Lerer
>Priority: Minor
> Fix For: 3.0
>
>
> Currently it's possible to set the TTL and WRITE TIME of an element in a 
> collection using CQL, but there is no way to read them back. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8877) Ability to read the TTL and WRITE TIME of an element in a collection

2015-03-03 Thread Drew Kutcharian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345474#comment-14345474
 ] 

Drew Kutcharian edited comment on CASSANDRA-8877 at 3/3/15 6:42 PM:


[~slebresne] you are correct that this relates to CASSANDRA-7396. The ideal 
situation would be:

1. Be able to select the value of an element in a collection individually, 
i.e. 
{code}
SELECT fields['name'] from user
{code}

2. Be able to select the value, TTL and writetime of an element in a 
collection individually
{code}
SELECT TTL(fields['name']), WRITETIME(fields['name']) from user
{code}

3. Be able to select the values of ALL the elements in a collection (this is 
the current functionality when selecting a collection column)
{code}
SELECT fields from user
{code}

Optionally:
4. Be able to select the value, TTL and writetime of ALL the elements in a 
collection. This is where I haven't come up with a good syntax but maybe 
something like this:
{code}
SELECT fields, METADATA(fields) from user
{code}

and the response would be
{code}
fields: { 'name': 'john' }
METADATA(fields): { 'name': {'ttl': , 'writetime':  } }
{code}

or alternatively (without adding a new function):
{code}
SELECT fields, TTL(fields), WRITETIME(fields) from user
{code}

and the response would be
{code}
fields: { 'name': 'john' }
TTL(fields): { 'name':  }
WRITETIME(fields): { 'name':  }
{code}



was (Author: drew_kutchar):
[~slebresne] you are correct that this relates to CASSANDRA-7396. The ideal 
situation would be:

1. Be able to select the value of an element in a collection individually, 
i.e. 
{code}
SELECT fields['name'] from user
{code}

2. Be able to select the value, TTL and writetime of an element in a 
collection individually
{code}
SELECT TTL(fields['name']), WRITETIME(fields['name']) from user
{code}

3. Be able to select the values of ALL the elements in a collection (this is 
the current functionality when selecting a collection column)
{code}
SELECT fields from user
{code}

Optionally:
4. Be able to select the value, TTL and writetime of ALL the elements in a 
collection. This is where I haven't come up with a good syntax but maybe 
something like this:
{code}
SELECT fields, METADATA(fields) from user
{code}

and the response would be
{code}
fields: { 'name': 'john' }
METADATA(fields): { 'name': {'ttl': , 'writetime':  } }
{code}


> Ability to read the TTL and WRITE TIME of an element in a collection
> 
>
> Key: CASSANDRA-8877
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8877
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Drew Kutcharian
>Assignee: Benjamin Lerer
>Priority: Minor
> Fix For: 3.0
>
>
> Currently it's possible to set the TTL and WRITE TIME of an element in a 
> collection using CQL, but there is no way to read them back. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8877) Ability to read the TTL and WRITE TIME of an element in a collection

2015-03-03 Thread Drew Kutcharian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345474#comment-14345474
 ] 

Drew Kutcharian edited comment on CASSANDRA-8877 at 3/3/15 6:40 PM:


[~slebresne] you are correct that this relates to CASSANDRA-7396. The ideal 
situation would be:

1. Be able to select the value of an element in a collection individually, 
i.e. 
{code}
SELECT fields['name'] from user
{code}

2. Be able to select the value, TTL and writetime of an element in a 
collection individually
{code}
SELECT TTL(fields['name']), WRITETIME(fields['name']) from user
{code}

3. Be able to select the values of ALL the elements in a collection (this is 
the current functionality when selecting a collection column)
{code}
SELECT fields from user
{code}

Optionally:
4. Be able to select the value, TTL and writetime of ALL the elements in a 
collection. This is where I haven't come up with a good syntax but maybe 
something like this:
{code}
SELECT fields, METADATA(fields) from user
{code}

and the response would be
{code}
fields: { 'name': 'john' }
METADATA(fields): { 'name': {'ttl': , 'writetime':  } }
{code}



was (Author: drew_kutchar):
[~slebresne] you are correct that this relates to CASSANDRA-7396. The ideal 
situation would be:

1. Be able to select the value of an element in a collection individually, 
i.e. 
{code}
SELECT fields['name'] from user
{code}

2. Be able to select the value, TTL and writetime of an element in a 
collection individually
{code}
SELECT TTL(fields['name']), WRITETIME(fields['name']) from user
{code}

3. Be able to select the values of ALL the elements in a collection (this is 
the current functionality when selecting a collection column)
{code}
SELECT fields from user
{code}

Optionally:
4. Be able to select the value, TTL and writetime of ALL the elements in a 
collection. This is where I haven't come up with a good syntax but maybe 
something like this:
{code}
SELECT fields, metadata(fields) from user
{code}

and the response would be
{code}
fields: { 'name': 'john' }
metadata(fields): { 'name': {'ttl': , 'writetime':  } }
{code}


> Ability to read the TTL and WRITE TIME of an element in a collection
> 
>
> Key: CASSANDRA-8877
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8877
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Drew Kutcharian
>Assignee: Benjamin Lerer
>Priority: Minor
> Fix For: 3.0
>
>
> Currently it's possible to set the TTL and WRITE TIME of an element in a 
> collection using CQL, but there is no way to read them back. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8832) SSTableRewriter.abort() should be more robust to failure

2015-03-03 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345460#comment-14345460
 ] 

Branimir Lambov edited comment on CASSANDRA-8832 at 3/3/15 6:39 PM:


AFAICS the actual fix for the problem was committed [as part of 
7705|https://github.com/apache/cassandra/commit/c75ee4160cb8fcdf47c90bfce8bf0d861f32d268#diff-426d04d201a410848604b55984d1b370R291]
 and this patch only adds continued processing after exceptions. Can you 
confirm this?

A couple of comments on the patch:
* {{replaceWithFinishedReaders}} can also throw (e.g. due to a reference 
counting bug), hiding any earlier errors. It should also be wrapped in a 
try/merge block.
* The static {{merge}} of throwables will probably be needed in many other 
places. Could we move it to a more generic location?
* Is it possible to include a regression test for the bug?


was (Author: blambov):
AFAICS the actual fix for the problem was committed [as part of 
7705|https://github.com/apache/cassandra/commit/c75ee4160cb8fcdf47c90bfce8bf0d861f32d268]
 and this patch only adds continued processing after exceptions. Can you 
confirm this?

A couple of comments on the patch:
* {{replaceWithFinishedReaders}} can also throw (e.g. due to a reference 
counting bug), hiding any earlier errors. It should also be wrapped in a 
try/merge block.
* The static {{merge}} of throwables will probably be needed in many other 
places. Could we move it to a more generic location?
* Is it possible to include a regression test for the bug?
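The try/merge pattern suggested in the review, keep the first failure and record later ones as suppressed rather than letting them mask it, looks roughly like this. This is a Python sketch of the Java idea; the names `merge_errors` and `run_all` are illustrative, not from the Cassandra codebase:

```python
def merge_errors(existing, new):
    """Keep the first exception; attach later ones so they are not lost
    (analogous to Java's Throwable.addSuppressed)."""
    if existing is None:
        return new
    existing.suppressed = getattr(existing, 'suppressed', []) + [new]
    return existing

def run_all(actions):
    """Run every rollback action even if some fail, then raise the
    first failure with the rest recorded on it."""
    error = None
    for action in actions:
        try:
            action()
        except Exception as exc:
            error = merge_errors(error, exc)
    if error is not None:
        raise error
```

The point is that a later cleanup step throwing (e.g. a reference-counting assertion) can no longer hide the original error.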

> SSTableRewriter.abort() should be more robust to failure
> 
>
> Key: CASSANDRA-8832
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8832
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Benedict
>Assignee: Benedict
> Fix For: 2.1.4
>
>
> This fixes a bug introduced in CASSANDRA-8124 that attempts to open early 
> during abort, introducing a failure risk. This patch further preempts 
> CASSANDRA-8690 to wrap every rollback action in a try/catch block, so that 
> any internal assertion checks do not actually worsen the state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8877) Ability to read the TTL and WRITE TIME of an element in a collection

2015-03-03 Thread Drew Kutcharian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345474#comment-14345474
 ] 

Drew Kutcharian commented on CASSANDRA-8877:


[~slebresne] you are correct that this relates to CASSANDRA-7396. The ideal 
situation would be:

1. Be able to select the value of an element in a collection individually, 
i.e. 
{code}
SELECT fields['name'] from user
{code}

2. Be able to select the value, TTL and writetime of an element in a 
collection individually
{code}
SELECT TTL(fields['name']), WRITETIME(fields['name']) from user
{code}

3. Be able to select the values of ALL the elements in a collection (this is 
the current functionality when selecting a collection column)
{code}
SELECT fields from user
{code}

Optionally:
4. Be able to select the value, TTL and writetime of ALL the elements in a 
collection. This is where I haven't come up with a good syntax but maybe 
something like this:
{code}
SELECT fields, metadata(fields) from user
{code}

and the response would be
{code}
fields: { 'name': 'john' }
metadata(fields): { 'name': {'ttl': , 'writetime':  } }
{code}


> Ability to read the TTL and WRITE TIME of an element in a collection
> 
>
> Key: CASSANDRA-8877
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8877
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Drew Kutcharian
>Assignee: Benjamin Lerer
>Priority: Minor
> Fix For: 3.0
>
>
> Currently it's possible to set the TTL and WRITE TIME of an element in a 
> collection using CQL, but there is no way to read them back. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8870) Tombstone overwhelming issue aborts client queries

2015-03-03 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8870:
---
Tester: Shawn Kumar

> Tombstone overwhelming issue aborts client queries
> --
>
> Key: CASSANDRA-8870
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8870
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra 2.1.2 ubuntu 12.04
>Reporter: Jeff Liu
>
> We are getting query timeout issues on clients that are trying to 
> query data from the Cassandra cluster. 
> Nodetool status shows that all nodes are still up regardless.
> Logs from client side:
> {noformat}
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: 
> cass-chisel01.abc01.abc02.abc.abc.com/10.66.182.113:9042 
> (com.datastax.driver.core.TransportException: 
> [cass-chisel01.tgr01.iad02.testd.nestlabs.com/10.66.182.113:9042] Connection 
> has been closed))
> at 
> com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:108) 
> ~[com.datastax.cassandra.cassandra-driver-core-2.1.3.jar:na]
> at 
> com.datastax.driver.core.RequestHandler$1.run(RequestHandler.java:179) 
> ~[com.datastax.cassandra.cassandra-driver-core-2.1.3.jar:na]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_55]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  ~[na:1.7.0_55]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_55]
> {noformat}
> Logs from cassandra/system.log
> {noformat}
> ERROR [HintedHandoff:2] 2015-02-23 23:46:28,410 SliceQueryFilter.java:212 - 
> Scanned over 10 tombstones in system.hints; query aborted (see 
> tombstone_failure_threshold)
> ERROR [HintedHandoff:2] 2015-02-23 23:46:28,417 CassandraDaemon.java:153 - 
> Exception in thread Thread[HintedHandoff:2,1,main]
> org.apache.cassandra.db.filter.TombstoneOverwhelmingException: null
> at 
> org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:214)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:107)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:81)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:69)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:310)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:60)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1858)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1666)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.HintedHandOffManager.doDeliverHintsToEndpoint(HintedHandOffManager.java:385)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:344)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.HintedHandOffManager.access$400(HintedHandOffManager.java:94)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.HintedHandOffManager$5.run(HintedHandOffManager.java:555)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_55]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  ~[na:1.7.0_55]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_55]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
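The thresholds behind the aborted hint query above are configurable in cassandra.yaml. The values below are the 2.1-era defaults; raising the failure threshold only papers over the underlying tombstone buildup:

```yaml
# Warn in the log when a single query scans this many tombstones.
tombstone_warn_threshold: 1000
# Abort the query (TombstoneOverwhelmingException) past this many.
tombstone_failure_threshold: 100000
```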


[jira] [Commented] (CASSANDRA-8870) Tombstone overwhelming issue aborts client queries

2015-03-03 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345472#comment-14345472
 ] 

Philip Thompson commented on CASSANDRA-8870:


[~shawn.kumar] is handling reproduction.

> Tombstone overwhelming issue aborts client queries
> --
>
> Key: CASSANDRA-8870
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8870
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra 2.1.2 ubuntu 12.04
>Reporter: Jeff Liu
>
> We are getting query timeout issues on clients that are trying to 
> query data from the Cassandra cluster. 
> Nodetool status shows that all nodes are still up regardless.
> Logs from client side:
> {noformat}
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: 
> cass-chisel01.abc01.abc02.abc.abc.com/10.66.182.113:9042 
> (com.datastax.driver.core.TransportException: 
> [cass-chisel01.tgr01.iad02.testd.nestlabs.com/10.66.182.113:9042] Connection 
> has been closed))
> at 
> com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:108) 
> ~[com.datastax.cassandra.cassandra-driver-core-2.1.3.jar:na]
> at 
> com.datastax.driver.core.RequestHandler$1.run(RequestHandler.java:179) 
> ~[com.datastax.cassandra.cassandra-driver-core-2.1.3.jar:na]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_55]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  ~[na:1.7.0_55]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_55]
> {noformat}
> Logs from cassandra/system.log
> {noformat}
> ERROR [HintedHandoff:2] 2015-02-23 23:46:28,410 SliceQueryFilter.java:212 - 
> Scanned over 10 tombstones in system.hints; query aborted (see 
> tombstone_failure_threshold)
> ERROR [HintedHandoff:2] 2015-02-23 23:46:28,417 CassandraDaemon.java:153 - 
> Exception in thread Thread[HintedHandoff:2,1,main]
> org.apache.cassandra.db.filter.TombstoneOverwhelmingException: null
> at 
> org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:214)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:107)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:81)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:69)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:310)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:60)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1858)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1666)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.HintedHandOffManager.doDeliverHintsToEndpoint(HintedHandOffManager.java:385)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:344)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.HintedHandOffManager.access$400(HintedHandOffManager.java:94)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.HintedHandOffManager$5.run(HintedHandOffManager.java:555)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_55]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  ~[na:1.7.0_55]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_55]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8877) Ability to read the TTL and WRITE TIME of an element in a collection

2015-03-03 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345456#comment-14345456
 ] 

Tyler Hobbs commented on CASSANDRA-8877:


If we make this dependent on CASSANDRA-7396, would we only support it for 
single-element lookup, or would it be supported for slice syntax as well?  If 
we support it for slices, we will need to do what you suggest anyway (return a 
list).
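For illustration, the two access shapes under discussion might look like this in CQL; both forms are hypothetical sketches (the slice syntax depends on CASSANDRA-7396 and none of this is implemented):

```cql
-- Hypothetical syntax, not implemented CQL; 'favs' is an assumed map column.
SELECT WRITETIME(favs['color']) FROM users WHERE id = 1;  -- single element: one value
SELECT TTL(favs['a'..'c']) FROM users WHERE id = 1;       -- slice: would have to return a list
```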

> Ability to read the TTL and WRITE TIME of an element in a collection
> 
>
> Key: CASSANDRA-8877
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8877
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Drew Kutcharian
>Assignee: Benjamin Lerer
> Fix For: 3.0
>
>
> Currently it's possible to set the TTL and WRITE TIME of an element in a 
> collection using CQL, but there is no way to read them back. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8832) SSTableRewriter.abort() should be more robust to failure

2015-03-03 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345460#comment-14345460
 ] 

Branimir Lambov commented on CASSANDRA-8832:


AFAICS the actual fix for the problem was committed [as part of 
7705|https://github.com/apache/cassandra/commit/c75ee4160cb8fcdf47c90bfce8bf0d861f32d268]
 and this patch only adds continued processing after exceptions. Can you 
confirm this?

A couple of comments on the patch:
* {{replaceWithFinishedReaders}} can also throw (e.g. due to a reference 
counting bug), hiding any earlier errors. It should also be wrapped in a 
try/catch block that merges throwables.
* The static {{merge}} of throwables will probably be needed in many other 
places. Could we move it to a more generic location?
* Is it possible to include a regression test for the bug?
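The merge-and-continue pattern suggested above can be sketched roughly as follows; the class and method names are illustrative assumptions, not the committed Cassandra code:

```java
// Illustrative sketch: merge throwables during best-effort rollback so that a
// failure in one cleanup step does not hide earlier failures. Names
// (Throwables, merge) are assumptions, not the actual patch.
public final class Throwables
{
    private Throwables() {}

    // Attach 't' to 'accumulate' via suppression instead of losing it.
    public static Throwable merge(Throwable accumulate, Throwable t)
    {
        if (accumulate == null)
            return t;
        if (t != null)
            accumulate.addSuppressed(t); // keep the later failure attached
        return accumulate;
    }

    public static void main(String[] args)
    {
        Throwable acc = null;
        Runnable[] cleanupSteps = {
            () -> { throw new IllegalStateException("first"); },
            () -> { throw new IllegalStateException("second"); }
        };
        for (Runnable step : cleanupSteps)
        {
            try { step.run(); }
            catch (Throwable t) { acc = merge(acc, t); } // continue past the failure
        }
        System.out.println(acc.getMessage() + "/" + acc.getSuppressed().length);
        // prints "first/1": the first failure carries the second as suppressed
    }
}
```

Hoisting such a static helper into a shared utility class is what makes the "more generic location" suggestion above cheap to adopt.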

> SSTableRewriter.abort() should be more robust to failure
> 
>
> Key: CASSANDRA-8832
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8832
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Benedict
>Assignee: Benedict
> Fix For: 2.1.4
>
>
> This fixes a bug introduced in CASSANDRA-8124 that attempts to open early 
> during abort, introducing a failure risk. This patch further preempts 
> CASSANDRA-8690 to wrap every rollback action in a try/catch block, so that 
> any internal assertion checks do not actually worsen the state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8877) Ability to read the TTL and WRITE TIME of an element in a collection

2015-03-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8877:
---
Priority: Minor  (was: Major)

> Ability to read the TTL and WRITE TIME of an element in a collection
> 
>
> Key: CASSANDRA-8877
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8877
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Drew Kutcharian
>Assignee: Benjamin Lerer
>Priority: Minor
> Fix For: 3.0
>
>
> Currently it's possible to set the TTL and WRITE TIME of an element in a 
> collection using CQL, but there is no way to read them back. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8870) Tombstone overwhelming issue aborts client queries

2015-03-03 Thread Jeff Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345461#comment-14345461
 ] 

Jeff Liu commented on CASSANDRA-8870:
-

Another question I have been curious about is why we would see those tombstone 
errors. In our application, we are doing inserts and updates only. Will update 
operations generate tombstones?
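For context, workloads that never issue DELETE can still create tombstones; an illustrative CQL sketch (table and column names are assumed):

```cql
-- Updates that implicitly write tombstones:
UPDATE users SET email = null WHERE id = 1;       -- writing null creates a cell tombstone
UPDATE users SET tags = {'a', 'b'} WHERE id = 1;  -- overwriting a whole collection writes a range tombstone
```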

> Tombstone overwhelming issue aborts client queries
> --
>
> Key: CASSANDRA-8870
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8870
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra 2.1.2 ubuntu 12.04
>Reporter: Jeff Liu
>
> We are getting client queries timeout issues on the clients who are trying to 
> query data from cassandra cluster. 
> Nodetool status shows that all nodes are still up regardless.
> Logs from client side:
> {noformat}
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: 
> cass-chisel01.abc01.abc02.abc.abc.com/10.66.182.113:9042 
> (com.datastax.driver.core.TransportException: 
> [cass-chisel01.tgr01.iad02.testd.nestlabs.com/10.66.182.113:9042] Connection 
> has been closed))
> at 
> com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:108) 
> ~[com.datastax.cassandra.cassandra-driver-core-2.1.3.jar:na]
> at 
> com.datastax.driver.core.RequestHandler$1.run(RequestHandler.java:179) 
> ~[com.datastax.cassandra.cassandra-driver-core-2.1.3.jar:na]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_55]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  ~[na:1.7.0_55]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_55]
> {noformat}
> Logs from cassandra/system.log
> {noformat}
> ERROR [HintedHandoff:2] 2015-02-23 23:46:28,410 SliceQueryFilter.java:212 - 
> Scanned over 10 tombstones in system.hints; query aborted (see 
> tombstone_failure_threshold)
> ERROR [HintedHandoff:2] 2015-02-23 23:46:28,417 CassandraDaemon.java:153 - 
> Exception in thread Thread[HintedHandoff:2,1,main]
> org.apache.cassandra.db.filter.TombstoneOverwhelmingException: null
> at 
> org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:214)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:107)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:81)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:69)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:310)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:60)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1858)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1666)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.HintedHandOffManager.doDeliverHintsToEndpoint(HintedHandOffManager.java:385)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:344)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.HintedHandOffManager.access$400(HintedHandOffManager.java:94)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.HintedHandOffManager$5.run(HintedHandOffManager.java:555)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_55]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  ~[na:1.7.0_55]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_55]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8504) Stack trace is erroneously logged twice

2015-03-03 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams resolved CASSANDRA-8504.
-
Resolution: Not a Problem

> Stack trace is erroneously logged twice
> ---
>
> Key: CASSANDRA-8504
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8504
> Project: Cassandra
>  Issue Type: Bug
> Environment: OSX and Ubuntu
>Reporter: Philip Thompson
>Assignee: Stefania
>Priority: Minor
> Fix For: 3.0
>
> Attachments: node4.log
>
>
> The dtest 
> {{replace_address_test.TestReplaceAddress.replace_active_node_test}} is 
> failing on 3.0. The following can be seen in the log:{code}ERROR [main] 
> 2014-12-17 15:12:33,871 CassandraDaemon.java:496 - Exception encountered 
> during startup
> java.lang.UnsupportedOperationException: Cannot replace a live node...
> at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:773)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:593)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:483)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:356) 
> [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:479)
>  [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:571) 
> [main/:na]
> ERROR [main] 2014-12-17 15:12:33,872 CassandraDaemon.java:584 - Exception 
> encountered during startup
> java.lang.UnsupportedOperationException: Cannot replace a live node...
> at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:773)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:593)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:483)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:356) 
> [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:479)
>  [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:571) 
> [main/:na]
> INFO  [StorageServiceShutdownHook] 2014-12-17 15:12:33,873 Gossiper.java:1349 
> - Announcing shutdown
> INFO  [StorageServiceShutdownHook] 2014-12-17 15:12:35,876 
> MessagingService.java:708 - Waiting for messaging service to quiesce{code}
> The test starts up a three node cluster, loads some data, then attempts to 
> start a fourth node with replace_address against the IP of a live node. This 
> is expected to fail, with one ERROR message in the log. In 3.0, we are seeing 
> two messages. 2.1-HEAD is working as expected. Attached is the full log of 
> the fourth node.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8860) Too many java.util.HashMap$Entry objects in heap

2015-03-03 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345431#comment-14345431
 ] 

Tyler Hobbs commented on CASSANDRA-8860:


+1, patch looks good

> Too many java.util.HashMap$Entry objects in heap
> 
>
> Key: CASSANDRA-8860
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8860
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 2.1.3, jdk 1.7u51
>Reporter: Phil Yang
>Assignee: Marcus Eriksson
> Fix For: 2.1.4
>
> Attachments: 0001-remove-cold_reads_to_omit.patch, 8860-v2.txt, 
> 8860.txt, cassandra-env.sh, cassandra.yaml, jmap.txt, jstack.txt, 
> jstat-afterv1.txt, jstat-afterv2.txt, jstat-before.txt
>
>
> While upgrading my cluster to 2.1.3, I found that some nodes (not all) may 
> have GC issues after restarting successfully. The old generation grows very 
> fast and most of the space cannot be recycled immediately after the node's 
> status is set to normal. The QPS of both reads and writes is very low and 
> there is no heavy compaction.
> The jmap result seems strange in that there are too many 
> java.util.HashMap$Entry objects in the heap, whereas in my experience "[B" is 
> usually the No. 1.
> If I downgrade to 2.1.1, this issue does not appear.
> I uploaded the conf files and jstack/jmap outputs. I'll upload a heap dump if 
> someone needs it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8898) sstableloader utility should allow loading of data from mounted filesystem

2015-03-03 Thread Kenneth Failbus (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kenneth Failbus updated CASSANDRA-8898:
---
Description: 
When trying to load data from a mounted filesystem onto a new cluster, the 
following exceptions are observed intermittently, and at some point the 
sstableloader process hangs without completing the loading process.

Please note that in my case the scenario was loading existing sstables from an 
existing cluster onto a brand new cluster.

It was eventually found that the sstableloader utility makes some hard 
assumptions about responses from the filesystem, which do not hold for a 
mounted filesystem.

The work-around was to copy each existing node's sstable data files locally and 
then point sstableloader at that local filesystem to load the data onto the new 
cluster.

When restoring data from backups with sstableloader during disaster recovery, 
this copying of data files to a local filesystem before loading would take a 
long time.

It would be a good enhancement for the sstableloader utility to support 
mounted filesystems, as copying data locally and then loading is time 
consuming.

Below is the exception seen when using the mounted filesystem.
{code}
java.lang.AssertionError: Reference counter -1 for 
/opt/tmp/casapp-c1-c00053-g.ch.tvx.comcast.com/MinDataService/System/MinDataService-System-jb-5449-Data.db
 
at 
org.apache.cassandra.io.sstable.SSTableReader.releaseReference(SSTableReader.java:1146)
 
at 
org.apache.cassandra.streaming.StreamTransferTask.complete(StreamTransferTask.java:74)
 
at 
org.apache.cassandra.streaming.StreamSession.received(StreamSession.java:542) 
at 
org.apache.cassandra.streaming.StreamSession.messageReceived(StreamSession.java:424)
 
at 
org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:245)
 
at java.lang.Thread.run(Thread.java:744) 
WARN 21:07:16,853 [Stream #3e5a5ba0-bdef-11e4-a975-5777dbff0945] Stream failed

  at 
org.apache.cassandra.io.compress.CompressedRandomAccessReader.open(CompressedRandomAccessReader.java:59)
 
at 
org.apache.cassandra.io.sstable.SSTableReader.openDataReader(SSTableReader.java:1406)
 
at 
org.apache.cassandra.streaming.compress.CompressedStreamWriter.write(CompressedStreamWriter.java:55)
 
at 
org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:59)
 
at 
org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:42)
 
at 
org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:45)
 
at 
org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:339)
 
at 
org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:311)
 
at java.lang.Thread.run(Thread.java:744) 
Caused by: java.io.FileNotFoundException: 
/opt/tmp/casapp-c1-c00055-g.ch.tvx.comcast.com/MinDataService/System/MinDataService-System-jb-5997-Data.db
 (No such file or directory) 
at java.io.RandomAccessFile.open(Native Method) 
at java.io.RandomAccessFile.&lt;init&gt;(RandomAccessFile.java:241) 
at 
org.apache.cassandra.io.util.RandomAccessReader.&lt;init&gt;(RandomAccessReader.java:58)
 
at 
org.apache.cassandra.io.compress.CompressedRandomAccessReader.&lt;init&gt;(CompressedRandomAccessReader.java:76)
 
at 
org.apache.cassandra.io.compress.CompressedRandomAccessReader.open(CompressedRandomAccessReader.java:55)
 
... 8 more 
Exception in thread "STREAM-OUT-/96.115.88.196" java.lang.NullPointerException 
at 
org.apache.cassandra.streaming.ConnectionHandler$MessageHandler.signalCloseDone(ConnectionHandler.java:205)
 
at 
org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:331)
 
at java.lang.Thread.run(Thread.java:744) 
ERROR 20:49:35,646 [Stream #d9fce650-bdf3-11e4-b6c0-252cb9b3e9f3] Streaming 
error occurred 
java.lang.AssertionError: Reference counter -3 for 
/opt/tmp/casapp-c1-c00055-g.ch.tvx.comcast.com/MinDataService/System/MinDataService-System-jb-4897-Data.db
 
at 
org.apache.cassandra.io.sstable.SSTableReader.releaseReference(SSTableReader.java:1146)
 
at 
org.apache.cassandra.streaming.StreamTransferTask.complete(StreamTransferTask.java:74)
 
at 
org.apache.cassandra.streaming.StreamSession.received(StreamSession.java:542) 
at 
org.apache.cassandra.streaming.StreamSession.messageReceived(StreamSession.java:424)
 
at 
org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:245)
 
at java.lang.Thread.run(Thread.java:744) 
Exception in thread "STREAM-IN-/96.115.88.196" java.lang.NullPointerException 

[jira] [Commented] (CASSANDRA-8730) Optimize UUIDType comparisons

2015-03-03 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345416#comment-14345416
 ] 

Benedict commented on CASSANDRA-8730:
-

I've pushed a small change with a very simple trick to permit both faster and 
simpler signed byte comparison of the LSB in TimeUUIDType.
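The kind of trick being referred to can be sketched like this (an assumption about the approach, not the actual pushed change): flipping the sign bit of each byte turns per-byte signed comparison of an 8-byte big-endian word into a single unsigned long comparison.

```java
// Sketch only: lexicographic signed-byte comparison of two 8-byte words
// (packed big-endian into longs) via one unsigned long compare. This mirrors
// the general trick; the committed CASSANDRA-8730 change may differ.
public final class SignedByteCompare
{
    // Flip the sign bit of every byte lane.
    private static final long FLIP = 0x8080808080808080L;

    public static int compare(long a, long b)
    {
        // After flipping sign bits, unsigned byte order == signed byte order,
        // and unsigned long order == lexicographic unsigned byte order.
        return Long.compareUnsigned(a ^ FLIP, b ^ FLIP);
    }

    public static void main(String[] args)
    {
        // Leading byte 0xFF is -1 as a signed byte, so it sorts before 0x00.
        System.out.println(compare(0xFF00000000000000L, 0x0000000000000000L) < 0);
        // prints true
    }
}
```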

> Optimize UUIDType comparisons
> -
>
> Key: CASSANDRA-8730
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8730
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: J.B. Langston
>Assignee: Benedict
> Fix For: 3.0
>
>
> Compaction is slow on tables using compound keys containing UUIDs due to 
> being CPU bound by key comparison.  [~benedict] said he sees some easy 
> optimizations that could be made for UUID comparison.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8898) sstableloader utility should allow loading of data from mounted filesystem

2015-03-03 Thread Kenneth Failbus (JIRA)
Kenneth Failbus created CASSANDRA-8898:
--

 Summary: sstableloader utility should allow loading of data from 
mounted filesystem
 Key: CASSANDRA-8898
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8898
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
 Environment: 2.0.12
Reporter: Kenneth Failbus


When trying to load data from a mounted filesystem onto a new cluster, the 
following exceptions are observed intermittently, and at some point the 
sstableloader process hangs without completing the loading process.

Please note that in my case the scenario was loading existing sstables from an 
existing cluster onto a brand new cluster.

It was eventually found that the sstableloader utility makes some hard 
assumptions about responses from the filesystem, which do not hold for a 
mounted filesystem.

The work-around was to copy each existing node's sstable data files locally and 
then point sstableloader at that local filesystem to load the data.

When restoring data from backups with sstableloader during disaster recovery, 
this copying of data files to a local filesystem before loading would take a 
long time.

It would be a good enhancement for the sstableloader utility to support 
mounted filesystems, as copying data locally and then loading is time 
consuming.

Below is the exception seen during the use of the mounted filesystem.
{code}
java.lang.AssertionError: Reference counter -1 for 
/opt/tmp/casapp-c1-c00053-g.ch.tvx.comcast.com/MinDataService/System/MinDataService-System-jb-5449-Data.db
 
at 
org.apache.cassandra.io.sstable.SSTableReader.releaseReference(SSTableReader.java:1146)
 
at 
org.apache.cassandra.streaming.StreamTransferTask.complete(StreamTransferTask.java:74)
 
at 
org.apache.cassandra.streaming.StreamSession.received(StreamSession.java:542) 
at 
org.apache.cassandra.streaming.StreamSession.messageReceived(StreamSession.java:424)
 
at 
org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:245)
 
at java.lang.Thread.run(Thread.java:744) 
WARN 21:07:16,853 [Stream #3e5a5ba0-bdef-11e4-a975-5777dbff0945] Stream failed

  at 
org.apache.cassandra.io.compress.CompressedRandomAccessReader.open(CompressedRandomAccessReader.java:59)
 
at 
org.apache.cassandra.io.sstable.SSTableReader.openDataReader(SSTableReader.java:1406)
 
at 
org.apache.cassandra.streaming.compress.CompressedStreamWriter.write(CompressedStreamWriter.java:55)
 
at 
org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:59)
 
at 
org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:42)
 
at 
org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:45)
 
at 
org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:339)
 
at 
org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:311)
 
at java.lang.Thread.run(Thread.java:744) 
Caused by: java.io.FileNotFoundException: 
/opt/tmp/casapp-c1-c00055-g.ch.tvx.comcast.com/MinDataService/System/MinDataService-System-jb-5997-Data.db
 (No such file or directory) 
at java.io.RandomAccessFile.open(Native Method) 
at java.io.RandomAccessFile.&lt;init&gt;(RandomAccessFile.java:241) 
at 
org.apache.cassandra.io.util.RandomAccessReader.&lt;init&gt;(RandomAccessReader.java:58)
 
at 
org.apache.cassandra.io.compress.CompressedRandomAccessReader.&lt;init&gt;(CompressedRandomAccessReader.java:76)
 
at 
org.apache.cassandra.io.compress.CompressedRandomAccessReader.open(CompressedRandomAccessReader.java:55)
 
... 8 more 
Exception in thread "STREAM-OUT-/96.115.88.196" java.lang.NullPointerException 
at 
org.apache.cassandra.streaming.ConnectionHandler$MessageHandler.signalCloseDone(ConnectionHandler.java:205)
 
at 
org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:331)
 
at java.lang.Thread.run(Thread.java:744) 
ERROR 20:49:35,646 [Stream #d9fce650-bdf3-11e4-b6c0-252cb9b3e9f3] Streaming 
error occurred 
java.lang.AssertionError: Reference counter -3 for 
/opt/tmp/casapp-c1-c00055-g.ch.tvx.comcast.com/MinDataService/System/MinDataService-System-jb-4897-Data.db
 
at 
org.apache.cassandra.io.sstable.SSTableReader.releaseReference(SSTableReader.java:1146)
 
at 
org.apache.cassandra.streaming.StreamTransferTask.complete(StreamTransferTask.java:74)
 
at 
org.apache.cassandra.streaming.StreamSession.received(StreamSession.java:542) 
at 
org.apache.cassandra.streaming.StreamSession.messageReceived(StreamSession.java:424)
 
at 
org.apache.cassandra.strea

[jira] [Resolved] (CASSANDRA-7875) Prepared statements using dropped indexes are not handled correctly

2015-03-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs resolved CASSANDRA-7875.

   Resolution: Won't Fix
Fix Version/s: (was: 2.1.4)
   2.0.13
 Reviewer: Tyler Hobbs

+1 on leaving 2.0 alone.  I'm resolving this as Won't Fix, and we'll get the 
dtest merged.  Thanks Stefania!

> Prepared statements using dropped indexes are not handled correctly
> ---
>
> Key: CASSANDRA-7875
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7875
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Tyler Hobbs
>Assignee: Stefania
>Priority: Minor
> Fix For: 2.0.13
>
> Attachments: prepared_statements_test.py, repro.py
>
>
> When select statements are prepared, we verify that the column restrictions 
> use indexes (where necessary).  However, we don't perform a similar check 
> when the statement is executed, so it fails somewhere further down the line.  
> In this case, it hits an assertion:
> {noformat}
> java.lang.AssertionError: Sequential scan with filters is not supported (if 
> you just created an index, you need to wait for the creation to be propagated 
> to all nodes before querying it)
>   at 
> org.apache.cassandra.db.filter.ExtendedFilter$WithClauses.getExtraFilter(ExtendedFilter.java:259)
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1759)
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.getRangeSlice(ColumnFamilyStore.java:1709)
>   at 
> org.apache.cassandra.db.PagedRangeCommand.executeLocally(PagedRangeCommand.java:119)
>   at 
> org.apache.cassandra.service.StorageProxy$LocalRangeSliceRunnable.runMayThrow(StorageProxy.java:1394)
>   at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1936)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:724)
> {noformat}
> During execution, we should check that the indexes still exist and provide a 
> better error if they do not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8850) clean up options syntax for create/alter role

2015-03-03 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-8850:
---
Attachment: 8850-v2.txt

v2 attached where {{WITH}} & {{AND}} are mandatory in {{CREATE|ALTER ROLE}}. 

Update to auth_roles_dtest 
[here|https://github.com/riptano/cassandra-dtest/pull/178] 
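Presumably the v2 grammar looks something like the following CQL sketch (the exact option names and value forms are assumed from the ticket summary, not taken from the patch):

```cql
-- Assumed shape with mandatory WITH and AND:
CREATE ROLE foo WITH PASSWORD = 'password' AND LOGIN = true AND SUPERUSER = false;
ALTER ROLE foo WITH PASSWORD = 'newpassword' AND LOGIN = true;
```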

> clean up options syntax for create/alter role 
> --
>
> Key: CASSANDRA-8850
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8850
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Minor
> Fix For: 3.0
>
> Attachments: 8850-v2.txt, 8850.txt
>
>
> {{CREATE/ALTER ROLE}} syntax would be improved by using {{WITH}} and {{AND}} 
> in a way more consistent with other statements.
> e.g. {{CREATE ROLE foo WITH LOGIN AND SUPERUSER AND PASSWORD 'password'}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8516) NEW_NODE topology event emitted instead of MOVED_NODE by moving node

2015-03-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-8516:
---
 Reviewer: Brandon Williams
Reproduced In: 2.1.2, 2.0.11  (was: 2.0.11, 2.1.2)

> NEW_NODE topology event emitted instead of MOVED_NODE by moving node
> 
>
> Key: CASSANDRA-8516
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8516
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Tyler Hobbs
>Assignee: Stefania
>Priority: Minor
> Fix For: 2.0.13
>
> Attachments: cassandra_8516_a.txt, cassandra_8516_b.txt, 
> cassandra_8516_dtest.txt
>
>
> As discovered in CASSANDRA-8373, when you move a node in a single-node 
> cluster, a {{NEW_NODE}} event is generated instead of a {{MOVED_NODE}} event.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8897) Remove FileCacheService, instead pooling the buffers

2015-03-03 Thread Benedict (JIRA)
Benedict created CASSANDRA-8897:
---

 Summary: Remove FileCacheService, instead pooling the buffers
 Key: CASSANDRA-8897
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8897
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
 Fix For: 3.0


After CASSANDRA-8893, a RAR will be a very lightweight object and will not need 
caching, so we can eliminate this cache entirely. Instead we should have a pool 
of buffers that are page-aligned.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8878) Counter Tables should be more clearly identified

2015-03-03 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345332#comment-14345332
 ] 

Jonathan Ellis commented on CASSANDRA-8878:
---

What would we need to do to get rid of this distinction, then?  It's maybe the 
ugliest wart we have left at the CQL level.

> Counter Tables should be more clearly identified
> 
>
> Key: CASSANDRA-8878
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8878
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Michaël Figuière
>Assignee: Aleksey Yeschenko
>Priority: Minor
> Fix For: 3.0
>
>
> Counter tables are internally considered a particular kind of table, 
> different from regular ones. This counter-specific nature is implicitly 
> defined by the fact that columns within the table have the {{counter}} data 
> type. This nature turns out to be persistent over time; that is, if the user 
> does the following:
> {code}
> CREATE TABLE counttable (key uuid primary key, count counter);
> ALTER TABLE counttable DROP count;
> ALTER TABLE counttable ADD count2 int;
> {code} 
> The following error will be thrown:
> {code}
> Cannot add a non counter column (count2) in a counter column family
> {code}
> even though the table no longer has any counter columns. This implicit, 
> persistent nature can be challenging for users to understand (and impossible 
> to infer in the case above). For this reason a more explicit declaration of 
> counter tables would be appropriate, such as:
> {code}
> CREATE COUNTER TABLE counttable (key uuid primary key, count counter);
> {code}
> Besides that, adding a boolean {{counter_table}} column in the 
> {{system.schema_columnfamilies}} table would allow external tools to easily 
> differentiate a counter table from a regular one.
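The external-tool check the description proposes might read as follows; the {{counter_table}} column is the ticket's proposal, not a shipped schema column:

```cql
-- Hypothetical: list tables with their counter flag, if the proposed
-- boolean column existed in the schema tables.
SELECT keyspace_name, columnfamily_name, counter_table
FROM system.schema_columnfamilies;
```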



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8894) Our default buffer size for (uncompressed) buffered reads should be smaller, and based on the expected record size

2015-03-03 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345344#comment-14345344
 ] 

Benedict commented on CASSANDRA-8894:
-

bq. Sharing buffers across files is tricky because of the internals of 
RandomAccessReader. Maybe this should be a separate ticket.

I've filed CASSANDRA-8897 which encompasses this.

> Our default buffer size for (uncompressed) buffered reads should be smaller, 
> and based on the expected record size
> --
>
> Key: CASSANDRA-8894
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8894
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Benedict
>Assignee: Benedict
> Fix For: 3.0
>
>
> A large contributor to slower buffered reads than mmapped is likely that we 
> read a full 64Kb at once, when average record sizes may be as low as 140 
> bytes on our stress tests. The TLB has only 128 entries on a modern core, and 
> each read will touch 32 of these, meaning we will almost never hit the TLB 
> and will incur at least 30 unnecessary misses each 
> time (as well as the other costs of larger than necessary accesses). When 
> working with an SSD there is little to no benefit reading more than 4Kb at 
> once, and in either case reading more data than we need is wasteful. So, I 
> propose selecting a buffer size that is the next larger power of 2 than our 
> average record size (with a minimum of 4Kb), so that we expect to read in one 
> operation. I also propose that we create a pool of these buffers up-front, 
> and that we ensure they are all exactly aligned to a virtual page, so that 
> the source and target operations each touch exactly one virtual page per 4Kb 
> of expected record size.
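The proposed sizing rule can be sketched as a small helper; the method name is illustrative and not from any patch:

```java
// Sketch of the sizing rule proposed above: round the average record size up
// to the next power of two, with a 4KiB floor.
public final class BufferSizing
{
    public static int bufferSizeFor(int avgRecordSize)
    {
        int size = Integer.highestOneBit(Math.max(avgRecordSize, 1));
        if (size < avgRecordSize)
            size <<= 1;               // round up to the next power of two
        return Math.max(size, 4096);  // never smaller than one 4KiB page
    }

    public static void main(String[] args)
    {
        // the 140-byte stress-test record from the description gets the 4KiB floor
        System.out.println(bufferSizeFor(140)); // prints 4096
    }
}
```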



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8657) long-test LongCompactionsTest fails

2015-03-03 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-8657:
--
 Reviewer: Yuki Morishita
Reproduced In: 2.1.2, 2.0.12  (was: 2.0.12, 2.1.2)

[~yukim] to review

> long-test LongCompactionsTest fails
> ---
>
> Key: CASSANDRA-8657
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8657
> Project: Cassandra
>  Issue Type: Test
>  Components: Tests
>Reporter: Michael Shuler
>Assignee: Carl Yeksigian
>Priority: Minor
> Fix For: 2.0.13, 2.1.4
>
> Attachments: 8657-2.0.txt, system.log
>
>
> Same error on 3 of the 4 tests in this suite - the failure is the same for 
> the 2.0 and 2.1 branches:
> {noformat}
> [junit] Testsuite: org.apache.cassandra.db.compaction.LongCompactionsTest
> [junit] Tests run: 4, Failures: 3, Errors: 0, Skipped: 0, Time elapsed: 
> 27.294 sec
> [junit] 
> [junit] Testcase: 
> testCompactionMany(org.apache.cassandra.db.compaction.LongCompactionsTest):   
> FAILED
> [junit] 
> /tmp/Keyspace14247587528884809907Standard1/Keyspace1/Keyspace1-Standard1-jb-0-Data.db
>  is not correctly marked compacting
> [junit] junit.framework.AssertionFailedError: 
> /tmp/Keyspace14247587528884809907Standard1/Keyspace1/Keyspace1-Standard1-jb-0-Data.db
>  is not correctly marked compacting
> [junit] at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.(AbstractCompactionTask.java:49)
> [junit] at 
> org.apache.cassandra.db.compaction.CompactionTask.(CompactionTask.java:47)
> [junit] at 
> org.apache.cassandra.db.compaction.LongCompactionsTest.testCompaction(LongCompactionsTest.java:102)
> [junit] at 
> org.apache.cassandra.db.compaction.LongCompactionsTest.testCompactionMany(LongCompactionsTest.java:67)
> [junit] 
> [junit] 
> [junit] Testcase: 
> testCompactionSlim(org.apache.cassandra.db.compaction.LongCompactionsTest):   
> FAILED
> [junit] 
> /tmp/Keyspace13809058557206351042Standard1/Keyspace1/Keyspace1-Standard1-jb-0-Data.db
>  is not correctly marked compacting
> [junit] junit.framework.AssertionFailedError: 
> /tmp/Keyspace13809058557206351042Standard1/Keyspace1/Keyspace1-Standard1-jb-0-Data.db
>  is not correctly marked compacting
> [junit] at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.(AbstractCompactionTask.java:49)
> [junit] at 
> org.apache.cassandra.db.compaction.CompactionTask.(CompactionTask.java:47)
> [junit] at 
> org.apache.cassandra.db.compaction.LongCompactionsTest.testCompaction(LongCompactionsTest.java:102)
> [junit] at 
> org.apache.cassandra.db.compaction.LongCompactionsTest.testCompactionSlim(LongCompactionsTest.java:58)
> [junit] 
> [junit] 
> [junit] Testcase: 
> testCompactionWide(org.apache.cassandra.db.compaction.LongCompactionsTest):   
> FAILED
> [junit] 
> /tmp/Keyspace15276133158440321595Standard1/Keyspace1/Keyspace1-Standard1-jb-0-Data.db
>  is not correctly marked compacting
> [junit] junit.framework.AssertionFailedError: 
> /tmp/Keyspace15276133158440321595Standard1/Keyspace1/Keyspace1-Standard1-jb-0-Data.db
>  is not correctly marked compacting
> [junit] at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.(AbstractCompactionTask.java:49)
> [junit] at 
> org.apache.cassandra.db.compaction.CompactionTask.(CompactionTask.java:47)
> [junit] at 
> org.apache.cassandra.db.compaction.LongCompactionsTest.testCompaction(LongCompactionsTest.java:102)
> [junit] at 
> org.apache.cassandra.db.compaction.LongCompactionsTest.testCompactionWide(LongCompactionsTest.java:49)
> [junit] 
> [junit] 
> [junit] Test org.apache.cassandra.db.compaction.LongCompactionsTest FAILED
> {noformat}
> A system.log is attached from the above run on 2.0 HEAD.





[jira] [Updated] (CASSANDRA-8657) long-test LongCompactionsTest fails

2015-03-03 Thread Carl Yeksigian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Yeksigian updated CASSANDRA-8657:
--
Attachment: 8657-2.0.txt

The test wasn't properly marking the files as compacting, and also wasn't 
properly cleaning up between tests.

> long-test LongCompactionsTest fails
> ---
>
> Key: CASSANDRA-8657
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8657
> Project: Cassandra
>  Issue Type: Test
>  Components: Tests
>Reporter: Michael Shuler
>Assignee: Carl Yeksigian
>Priority: Minor
> Fix For: 2.1.4
>
> Attachments: 8657-2.0.txt, system.log
>
>
> Same error on 3 of the 4 tests in this suite - the failure is the same for 
> the 2.0 and 2.1 branches:
> {noformat}
> [junit] Testsuite: org.apache.cassandra.db.compaction.LongCompactionsTest
> [junit] Tests run: 4, Failures: 3, Errors: 0, Skipped: 0, Time elapsed: 
> 27.294 sec
> [junit] 
> [junit] Testcase: 
> testCompactionMany(org.apache.cassandra.db.compaction.LongCompactionsTest):   
> FAILED
> [junit] 
> /tmp/Keyspace14247587528884809907Standard1/Keyspace1/Keyspace1-Standard1-jb-0-Data.db
>  is not correctly marked compacting
> [junit] junit.framework.AssertionFailedError: 
> /tmp/Keyspace14247587528884809907Standard1/Keyspace1/Keyspace1-Standard1-jb-0-Data.db
>  is not correctly marked compacting
> [junit] at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.(AbstractCompactionTask.java:49)
> [junit] at 
> org.apache.cassandra.db.compaction.CompactionTask.(CompactionTask.java:47)
> [junit] at 
> org.apache.cassandra.db.compaction.LongCompactionsTest.testCompaction(LongCompactionsTest.java:102)
> [junit] at 
> org.apache.cassandra.db.compaction.LongCompactionsTest.testCompactionMany(LongCompactionsTest.java:67)
> [junit] 
> [junit] 
> [junit] Testcase: 
> testCompactionSlim(org.apache.cassandra.db.compaction.LongCompactionsTest):   
> FAILED
> [junit] 
> /tmp/Keyspace13809058557206351042Standard1/Keyspace1/Keyspace1-Standard1-jb-0-Data.db
>  is not correctly marked compacting
> [junit] junit.framework.AssertionFailedError: 
> /tmp/Keyspace13809058557206351042Standard1/Keyspace1/Keyspace1-Standard1-jb-0-Data.db
>  is not correctly marked compacting
> [junit] at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.(AbstractCompactionTask.java:49)
> [junit] at 
> org.apache.cassandra.db.compaction.CompactionTask.(CompactionTask.java:47)
> [junit] at 
> org.apache.cassandra.db.compaction.LongCompactionsTest.testCompaction(LongCompactionsTest.java:102)
> [junit] at 
> org.apache.cassandra.db.compaction.LongCompactionsTest.testCompactionSlim(LongCompactionsTest.java:58)
> [junit] 
> [junit] 
> [junit] Testcase: 
> testCompactionWide(org.apache.cassandra.db.compaction.LongCompactionsTest):   
> FAILED
> [junit] 
> /tmp/Keyspace15276133158440321595Standard1/Keyspace1/Keyspace1-Standard1-jb-0-Data.db
>  is not correctly marked compacting
> [junit] junit.framework.AssertionFailedError: 
> /tmp/Keyspace15276133158440321595Standard1/Keyspace1/Keyspace1-Standard1-jb-0-Data.db
>  is not correctly marked compacting
> [junit] at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.(AbstractCompactionTask.java:49)
> [junit] at 
> org.apache.cassandra.db.compaction.CompactionTask.(CompactionTask.java:47)
> [junit] at 
> org.apache.cassandra.db.compaction.LongCompactionsTest.testCompaction(LongCompactionsTest.java:102)
> [junit] at 
> org.apache.cassandra.db.compaction.LongCompactionsTest.testCompactionWide(LongCompactionsTest.java:49)
> [junit] 
> [junit] 
> [junit] Test org.apache.cassandra.db.compaction.LongCompactionsTest FAILED
> {noformat}
> A system.log is attached from the above run on 2.0 HEAD.





[jira] [Commented] (CASSANDRA-8086) Cassandra should have ability to limit the number of native connections

2015-03-03 Thread Norman Maurer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345314#comment-14345314
 ] 

Norman Maurer commented on CASSANDRA-8086:
--

Addressed comment and uploaded new patch

> Cassandra should have ability to limit the number of native connections
> ---
>
> Key: CASSANDRA-8086
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8086
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Vishy Kasar
>Assignee: Norman Maurer
> Fix For: 2.1.4
>
> Attachments: 
> 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c-2.0.patch, 
> 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c-final-v2.patch, 
> 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c-final.patch, 
> 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c.patch, 
> 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c.txt
>
>
> We have a production cluster with 72 instances spread across 2 DCs. We have a 
> large number ( ~ 40,000 ) of clients hitting this cluster. Client normally 
> connects to 4 cassandra instances. Some event (we think it is a schema change 
> on server side) triggered the client to establish connections to all 
> cassandra instances of local DC. This brought the server to its knees. The 
> client connections failed and client attempted re-connections. 
> Cassandra should protect itself from such attacks from clients. Do we have any 
> knobs to control the number of max connections? If not, we need to add that 
> knob.





[jira] [Updated] (CASSANDRA-8086) Cassandra should have ability to limit the number of native connections

2015-03-03 Thread Norman Maurer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Norman Maurer updated CASSANDRA-8086:
-
Attachment: 
0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c-final-v2.patch

Address comment... 

> Cassandra should have ability to limit the number of native connections
> ---
>
> Key: CASSANDRA-8086
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8086
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Vishy Kasar
>Assignee: Norman Maurer
> Fix For: 2.1.4
>
> Attachments: 
> 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c-2.0.patch, 
> 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c-final-v2.patch, 
> 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c-final.patch, 
> 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c.patch, 
> 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c.txt
>
>
> We have a production cluster with 72 instances spread across 2 DCs. We have a 
> large number ( ~ 40,000 ) of clients hitting this cluster. Client normally 
> connects to 4 cassandra instances. Some event (we think it is a schema change 
> on server side) triggered the client to establish connections to all 
> cassandra instances of local DC. This brought the server to its knees. The 
> client connections failed and client attempted re-connections. 
> Cassandra should protect itself from such attacks from clients. Do we have any 
> knobs to control the number of max connections? If not, we need to add that 
> knob.





[jira] [Commented] (CASSANDRA-8878) Counter Tables should be more clearly identified

2015-03-03 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345310#comment-14345310
 ] 

Sylvain Lebresne commented on CASSANDRA-8878:
-

AFAIK, none of the reasons for not allowing mixing of counter and non-counter 
columns will be removed by splitting counters into cells, so that wouldn't 
change anything for this issue.

> Counter Tables should be more clearly identified
> 
>
> Key: CASSANDRA-8878
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8878
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Michaël Figuière
>Assignee: Aleksey Yeschenko
>Priority: Minor
> Fix For: 3.0
>
>
> Counter tables are internally considered as a particular kind of table, 
> different from the regular ones. This counter specific nature is implicitly 
> defined by the fact that columns within a table have the {{counter}} data 
> type. This nature turns out to be persistent over time; that is, if the 
> user does the following:
> {code}
> CREATE TABLE counttable (key uuid primary key, count counter);
> ALTER TABLE counttable DROP count;
> ALTER TABLE counttable ADD count2 int;
> {code} 
> The following error will be thrown:
> {code}
> Cannot add a non counter column (count2) in a counter column family
> {code}
> The error is thrown even though the table no longer has any counter 
> column. This implicit, 
> persistent nature can be challenging to understand for users (and impossible 
> to infer in the case above). For this reason a more explicit declaration of 
> counter tables would be appropriate, as:
> {code}
> CREATE COUNTER TABLE counttable (key uuid primary key, count counter);
> {code}
> Besides that, adding a boolean {{counter_table}} column in the 
> {{system.schema_columnfamilies}} table would allow external tools to easily 
> differentiate a counter table from a regular one.





[jira] [Issue Comment Deleted] (CASSANDRA-8086) Cassandra should have ability to limit the number of native connections

2015-03-03 Thread Norman Maurer (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Norman Maurer updated CASSANDRA-8086:
-
Comment: was deleted

(was: you are right, sigh... fixing now)

> Cassandra should have ability to limit the number of native connections
> ---
>
> Key: CASSANDRA-8086
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8086
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Vishy Kasar
>Assignee: Norman Maurer
> Fix For: 2.1.4
>
> Attachments: 
> 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c-2.0.patch, 
> 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c-final.patch, 
> 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c.patch, 
> 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c.txt
>
>
> We have a production cluster with 72 instances spread across 2 DCs. We have a 
> large number ( ~ 40,000 ) of clients hitting this cluster. Client normally 
> connects to 4 cassandra instances. Some event (we think it is a schema change 
> on server side) triggered the client to establish connections to all 
> cassandra instances of local DC. This brought the server to its knees. The 
> client connections failed and client attempted re-connections. 
> Cassandra should protect itself from such attacks from clients. Do we have any 
> knobs to control the number of max connections? If not, we need to add that 
> knob.





[jira] [Assigned] (CASSANDRA-8889) CQL spec is missing doc for support of bind variables for LIMIT, TTL, and TIMESTAMP

2015-03-03 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs reassigned CASSANDRA-8889:
--

Assignee: Tyler Hobbs

> CQL spec is missing doc for support of bind variables for LIMIT, TTL, and 
> TIMESTAMP
> ---
>
> Key: CASSANDRA-8889
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8889
> Project: Cassandra
>  Issue Type: Bug
>  Components: Documentation & website
>Reporter: Jack Krupansky
>Assignee: Tyler Hobbs
>Priority: Minor
>
> CASSANDRA-4450 added the ability to specify a bind variable for the integer 
> value of a LIMIT, TTL, or TIMESTAMP option, but the CQL spec has not been 
> updated to reflect this enhancement.
> Also, the special predefined bind variable names are not documented in the 
> CQL spec: "[limit]", "[ttl]", and "[timestamp]".





[jira] [Commented] (CASSANDRA-8894) Our default buffer size for (uncompressed) buffered reads should be smaller, and based on the expected record size

2015-03-03 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345295#comment-14345295
 ] 

Jonathan Ellis commented on CASSANDRA-8894:
---

bq. I propose selecting a buffer size that is the next larger power of 2 than 
our average record size (with a minimum of 4Kb), so that we expect to read in 
one operation.

Makes sense to me.

> I also propose that we create a pool of these buffers up-front

Sharing buffers across files is tricky because of the internals of 
RandomAccessReader.  Maybe this should be a separate ticket.

> Our default buffer size for (uncompressed) buffered reads should be smaller, 
> and based on the expected record size
> --
>
> Key: CASSANDRA-8894
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8894
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Benedict
>Assignee: Benedict
> Fix For: 3.0
>
>
> A large contributor to buffered reads being slower than mmapped ones is 
> likely that we read a full 64Kb at once, when average record sizes may be as 
> low as 140 bytes in our stress tests. The TLB has only 128 entries on a 
> modern core, and each read will touch 32 of these, meaning we will almost 
> never hit the TLB, and will incur at least 30 unnecessary misses each 
> time (as well as the other costs of larger than necessary accesses). When 
> working with an SSD there is little to no benefit reading more than 4Kb at 
> once, and in either case reading more data than we need is wasteful. So, I 
> propose selecting a buffer size that is the next larger power of 2 than our 
> average record size (with a minimum of 4Kb), so that we expect to read in one 
> operation. I also propose that we create a pool of these buffers up-front, 
> and that we ensure they are all exactly aligned to a virtual page, so that 
> the source and target operations each touch exactly one virtual page per 4Kb 
> of expected record size.





[jira] [Comment Edited] (CASSANDRA-8894) Our default buffer size for (uncompressed) buffered reads should be smaller, and based on the expected record size

2015-03-03 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345295#comment-14345295
 ] 

Jonathan Ellis edited comment on CASSANDRA-8894 at 3/3/15 4:35 PM:
---

bq. I propose selecting a buffer size that is the next larger power of 2 than 
our average record size (with a minimum of 4Kb), so that we expect to read in 
one operation.

Makes sense to me.

bq. I also propose that we create a pool of these buffers up-front

Sharing buffers across files is tricky because of the internals of 
RandomAccessReader.  Maybe this should be a separate ticket.


was (Author: jbellis):
bq. I propose selecting a buffer size that is the next larger power of 2 than 
our average record size (with a minimum of 4Kb), so that we expect to read in 
one operation.

Makes sense to me.

> I also propose that we create a pool of these buffers up-front

Sharing buffers across files is tricky because of the internals of 
RandomAccessReader.  Maybe this should be a separate ticket.

> Our default buffer size for (uncompressed) buffered reads should be smaller, 
> and based on the expected record size
> --
>
> Key: CASSANDRA-8894
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8894
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Benedict
>Assignee: Benedict
> Fix For: 3.0
>
>
> A large contributor to buffered reads being slower than mmapped ones is 
> likely that we read a full 64Kb at once, when average record sizes may be as 
> low as 140 bytes in our stress tests. The TLB has only 128 entries on a 
> modern core, and each read will touch 32 of these, meaning we will almost 
> never hit the TLB, and will incur at least 30 unnecessary misses each 
> time (as well as the other costs of larger than necessary accesses). When 
> working with an SSD there is little to no benefit reading more than 4Kb at 
> once, and in either case reading more data than we need is wasteful. So, I 
> propose selecting a buffer size that is the next larger power of 2 than our 
> average record size (with a minimum of 4Kb), so that we expect to read in one 
> operation. I also propose that we create a pool of these buffers up-front, 
> and that we ensure they are all exactly aligned to a virtual page, so that 
> the source and target operations each touch exactly one virtual page per 4Kb 
> of expected record size.





[jira] [Commented] (CASSANDRA-8086) Cassandra should have ability to limit the number of native connections

2015-03-03 Thread Norman Maurer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345294#comment-14345294
 ] 

Norman Maurer commented on CASSANDRA-8086:
--

you are right, sigh... fixing now

> Cassandra should have ability to limit the number of native connections
> ---
>
> Key: CASSANDRA-8086
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8086
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Vishy Kasar
>Assignee: Norman Maurer
> Fix For: 2.1.4
>
> Attachments: 
> 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c-2.0.patch, 
> 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c-final.patch, 
> 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c.patch, 
> 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c.txt
>
>
> We have a production cluster with 72 instances spread across 2 DCs. We have a 
> large number ( ~ 40,000 ) of clients hitting this cluster. Client normally 
> connects to 4 cassandra instances. Some event (we think it is a schema change 
> on server side) triggered the client to establish connections to all 
> cassandra instances of local DC. This brought the server to its knees. The 
> client connections failed and client attempted re-connections. 
> Cassandra should protect itself from such attacks from clients. Do we have any 
> knobs to control the number of max connections? If not, we need to add that 
> knob.





[jira] [Commented] (CASSANDRA-8086) Cassandra should have ability to limit the number of native connections

2015-03-03 Thread Norman Maurer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345293#comment-14345293
 ] 

Norman Maurer commented on CASSANDRA-8086:
--

you are right, sigh... fixing now

> Cassandra should have ability to limit the number of native connections
> ---
>
> Key: CASSANDRA-8086
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8086
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Vishy Kasar
>Assignee: Norman Maurer
> Fix For: 2.1.4
>
> Attachments: 
> 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c-2.0.patch, 
> 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c-final.patch, 
> 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c.patch, 
> 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c.txt
>
>
> We have a production cluster with 72 instances spread across 2 DCs. We have a 
> large number ( ~ 40,000 ) of clients hitting this cluster. Client normally 
> connects to 4 cassandra instances. Some event (we think it is a schema change 
> on server side) triggered the client to establish connections to all 
> cassandra instances of local DC. This brought the server to its knees. The 
> client connections failed and client attempted re-connections. 
> Cassandra should protect itself from such attacks from clients. Do we have any 
> knobs to control the number of max connections? If not, we need to add that 
> knob.





[jira] [Commented] (CASSANDRA-8086) Cassandra should have ability to limit the number of native connections

2015-03-03 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345289#comment-14345289
 ] 

Joshua McKenzie commented on CASSANDRA-8086:


It appears you've double-decremented the connectionsPerClient record when 
the IP is over the limit:
{code}
if (perIpCount.incrementAndGet() > perIpLimit)
{
   perIpCount.decrementAndGet();
   // The decrement will be done in channelClosed(...)
{code}

While the counter is decremented in channelClosed, which is likely what that 
comment refers to, you're also decrementing the connectionsPerClient record 
again here for the address in question.

Other than that, LGTM.
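To make the point concrete, one way the admission check can avoid the double decrement is to undo only the increment made at accept time and leave the single close-time decrement to the channel-close handler. This is a hypothetical sketch, not the actual patch; the class, method names, and plain-string keys are invented:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of a per-IP connection limiter that decrements exactly
// once per path: a rejected connection undoes only its own increment, and an
// admitted connection is decremented once, from the close handler.
public final class PerIpConnectionLimiter
{
    private final int perIpLimit;
    private final ConcurrentHashMap<String, AtomicInteger> connectionsPerClient =
            new ConcurrentHashMap<>();

    public PerIpConnectionLimiter(int perIpLimit)
    {
        this.perIpLimit = perIpLimit;
    }

    /** Called when a channel opens; returns false if the client is over its limit. */
    public boolean tryAcquire(String ip)
    {
        AtomicInteger perIpCount = connectionsPerClient.computeIfAbsent(ip, k -> new AtomicInteger());
        if (perIpCount.incrementAndGet() > perIpLimit)
        {
            perIpCount.decrementAndGet(); // undo our own increment only
            return false;                 // no further decrement on close
        }
        return true;
    }

    /** Called exactly once from the channel-close path for admitted connections. */
    public void release(String ip)
    {
        AtomicInteger perIpCount = connectionsPerClient.get(ip);
        if (perIpCount != null)
            perIpCount.decrementAndGet();
    }

    public static void main(String[] args)
    {
        PerIpConnectionLimiter limiter = new PerIpConnectionLimiter(2);
        System.out.println(limiter.tryAcquire("10.0.0.1")); // true
        System.out.println(limiter.tryAcquire("10.0.0.1")); // true
        System.out.println(limiter.tryAcquire("10.0.0.1")); // false: over the limit
        limiter.release("10.0.0.1");
        System.out.println(limiter.tryAcquire("10.0.0.1")); // true: slot freed
    }
}
```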

> Cassandra should have ability to limit the number of native connections
> ---
>
> Key: CASSANDRA-8086
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8086
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Vishy Kasar
>Assignee: Norman Maurer
> Fix For: 2.1.4
>
> Attachments: 
> 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c-2.0.patch, 
> 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c-final.patch, 
> 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c.patch, 
> 0001-CASSANDRA-8086-Allow-to-limit-the-number-of-native-c.txt
>
>
> We have a production cluster with 72 instances spread across 2 DCs. We have a 
> large number ( ~ 40,000 ) of clients hitting this cluster. Client normally 
> connects to 4 cassandra instances. Some event (we think it is a schema change 
> on server side) triggered the client to establish connections to all 
> cassandra instances of local DC. This brought the server to its knees. The 
> client connections failed and client attempted re-connections. 
> Cassandra should protect itself from such attacks from clients. Do we have any 
> knobs to control the number of max connections? If not, we need to add that 
> knob.





[jira] [Commented] (CASSANDRA-8878) Counter Tables should be more clearly identified

2015-03-03 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345286#comment-14345286
 ] 

Jonathan Ellis commented on CASSANDRA-8878:
---

Won't we be able to mix counter and non-counter columns once Aleksey's counter 
cell format change is done?  In which case I'm reluctant to add special syntax.

> Counter Tables should be more clearly identified
> 
>
> Key: CASSANDRA-8878
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8878
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Michaël Figuière
>Assignee: Aleksey Yeschenko
>Priority: Minor
> Fix For: 3.0
>
>
> Counter tables are internally considered as a particular kind of table, 
> different from the regular ones. This counter specific nature is implicitly 
> defined by the fact that columns within a table have the {{counter}} data 
> type. This nature turns out to be persistent over time; that is, if the 
> user does the following:
> {code}
> CREATE TABLE counttable (key uuid primary key, count counter);
> ALTER TABLE counttable DROP count;
> ALTER TABLE counttable ADD count2 int;
> {code} 
> The following error will be thrown:
> {code}
> Cannot add a non counter column (count2) in a counter column family
> {code}
> The error is thrown even though the table no longer has any counter 
> column. This implicit, 
> persistent nature can be challenging to understand for users (and impossible 
> to infer in the case above). For this reason a more explicit declaration of 
> counter tables would be appropriate, as:
> {code}
> CREATE COUNTER TABLE counttable (key uuid primary key, count counter);
> {code}
> Besides that, adding a boolean {{counter_table}} column in the 
> {{system.schema_columnfamilies}} table would allow external tools to easily 
> differentiate a counter table from a regular one.





[jira] [Commented] (CASSANDRA-8877) Ability to read the TTL and WRITE TIME of an element in a collection

2015-03-03 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345280#comment-14345280
 ] 

Sylvain Lebresne commented on CASSANDRA-8877:
-

We should support that at some point, but that's probably dependent on 
CASSANDRA-7396. Unless we want to make {{writetime}} and {{ttl}} work on a 
collection column directly, but return a list of timestamp/ttl, one for each 
element (which can be done, though with the slight downside that it will make 
the code for handling timestamp and ttl in Selection a tad more complex).

> Ability to read the TTL and WRITE TIME of an element in a collection
> 
>
> Key: CASSANDRA-8877
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8877
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Drew Kutcharian
>Assignee: Benjamin Lerer
> Fix For: 3.0
>
>
> Currently it's possible to set the TTL and WRITE TIME of an element in a 
> collection using CQL, but there is no way to read them back. 





[jira] [Commented] (CASSANDRA-8850) clean up options syntax for create/alter role

2015-03-03 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345262#comment-14345262
 ] 

Sam Tunnicliffe commented on CASSANDRA-8850:


Certainly, we can do that, and I'm also not a fan of having multiple equivalent 
expressions for the same thing. The reasoning for making them optional was to 
preserve support for things like {{CREATE ROLE r NOSUPERUSER;}}, which was 
carried over from {{CREATE USER}} syntax; I assume it was there originally to 
emulate Postgres. 

I'll post a new patch (& a PR for dtests) directly.



> clean up options syntax for create/alter role 
> --
>
> Key: CASSANDRA-8850
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8850
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Minor
> Fix For: 3.0
>
> Attachments: 8850.txt
>
>
> {{CREATE/ALTER ROLE}} syntax would be improved by using {{WITH}} and {{AND}} 
> in a way more consistent with other statements.
> e.g. {{CREATE ROLE foo WITH LOGIN AND SUPERUSER AND PASSWORD 'password'}}





[jira] [Updated] (CASSANDRA-8832) SSTableRewriter.abort() should be more robust to failure

2015-03-03 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-8832:

Reviewer: Branimir Lambov

> SSTableRewriter.abort() should be more robust to failure
> 
>
> Key: CASSANDRA-8832
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8832
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Benedict
>Assignee: Benedict
> Fix For: 2.1.4
>
>
> This fixes a bug introduced in CASSANDRA-8124 that attempts to open early 
> during abort, introducing a failure risk. This patch further preempts 
> CASSANDRA-8690 to wrap every rollback action in a try/catch block, so that 
> any internal assertion checks do not actually worsen the state.





[jira] [Commented] (CASSANDRA-8850) clean up options syntax for create/alter role

2015-03-03 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345253#comment-14345253
 ] 

Sylvain Lebresne commented on CASSANDRA-8850:
-

I'll admit I'm not a huge fan of having a gazillion ways of expressing the same 
thing, especially when there isn't a meaningful difference in typing effort 
between the options. Since roles are new to 3.0, can't we just go with {{WITH}} 
and {{AND}} being mandatory (since that's how other DDL statements work)?

> clean up options syntax for create/alter role 
> --
>
> Key: CASSANDRA-8850
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8850
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Minor
> Fix For: 3.0
>
> Attachments: 8850.txt
>
>
> {{CREATE/ALTER ROLE}} syntax would be improved by using {{WITH}} and {{AND}} 
> in a way more consistent with other statements.
> e.g. {{CREATE ROLE foo WITH LOGIN AND SUPERUSER AND PASSWORD 'password'}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6060) Remove internal use of Strings for ks/cf names

2015-03-03 Thread Brian Hess (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345250#comment-14345250
 ] 

 Brian Hess commented on CASSANDRA-6060:


I know this ticket is closed, but there is another use case that might make 
this more useful.  Namely, with the advent of CTAS (CASSANDRA-8234), you could 
want to change the primary key of a table.  To do that, you could create a new 
table with the new primary key and select the old data into it.  The last step, 
for cleanliness, might be to drop the original table and rename the new table 
to the original table name, thereby completing the change of the primary 
key.

> Remove internal use of Strings for ks/cf names
> --
>
> Key: CASSANDRA-6060
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6060
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Jonathan Ellis
>Assignee: Ariel Weisberg
>  Labels: performance
>
> We toss a lot of Strings around internally, including across the network.  
> Once a request has been Prepared, we ought to be able to encode these as int 
> ids.
> Unfortunately, we moved from int to uuid in CASSANDRA-3794, which was a 
> reasonable move at the time, but a uuid is a lot bigger than an int.  Now 
> that we have CAS we can allow concurrent schema updates while still using 
> sequential int IDs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8709) Convert SequentialWriter from using RandomAccessFile to nio channel

2015-03-03 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345249#comment-14345249
 ] 

Joshua McKenzie commented on CASSANDRA-8709:


Branch updated.

bq. CompressedSW.flushData() calls crcMetadata.append(compressed.buffer.array() 
... is clearer.
Fixed. Left the rewind in place, since appendDirect relies on .position().

bq. In DataIntegrityMetadata, your new appendDirect call shouldn't be using 
mark and reset since it's racy. Better to .duplicate() the input buffer.
Switched to a duplicated ByteBuffer with mark/reset on that, as the counters 
are local-only and thus pose no raciness threat.

bq. In LZ4Compressor.compress() the source length should be using .remaining() 
not .limit()
Good catch - fixed.

bq. All of your non-direct byte buffer code makes me nervous since you are 
accessing .array()...
I went ahead and swapped all of those calls to the appendDirect form.

I also uncommented a block in CompressorTest that snuck into the patch file.

bq. Write test for CompressedSW across all compressors
Added. The unit tests uncovered what appears to be a bug in 
CompressedSequentialWriter.resetAndTruncate when resetting to a mark at a 
buffer-aligned length. I backported that test into current 2.0/2.1 and the same 
error occurs: we don't mark the current buffered data as dirty on 
resetAndTruncate, so if we reset to the chunkOffset with a full buffer it's 
never marked dirty by a subsequent write and reBuffer just drops the data.  
I'll open a ticket for 2.0.13 to get that fix in once we've confirmed it here.

> Convert SequentialWriter from using RandomAccessFile to nio channel
> ---
>
> Key: CASSANDRA-8709
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8709
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Joshua McKenzie
>Assignee: Joshua McKenzie
>  Labels: Windows
> Fix For: 3.0
>
>
> For non-mmap'ed I/O on Windows, using nio channels will give us substantially 
> more flexibility w/regards to renaming and moving files around while writing 
> them.  This change in conjunction with CASSANDRA-4050 should allow us to 
> remove the Windows bypass code in SSTableRewriter for non-memory-mapped I/O.
> In general, migrating from instances of RandomAccessFile to nio channels will 
> help make Windows and linux behavior more consistent.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8234) CTAS for COPY

2015-03-03 Thread Brian Hess (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345246#comment-14345246
 ] 

 Brian Hess commented on CASSANDRA-8234:


It would also be useful to be able to do: 
INSERT INTO foo(x, y, z) SELECT a, b, c FROM bar;

That is, you already have a table set up and want to INSERT into it.  This is 
essentially the second step of CTAS (step 1: create the table; step 2: insert 
the data into it).

> CTAS for COPY
> -
>
> Key: CASSANDRA-8234
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8234
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Tools
>Reporter: Robin Schumacher
> Fix For: 3.1
>
>
> A continual request from users is the ability to do CREATE TABLE AS SELECT... 
> The COPY command can be enhanced to perform simple and customized copies of 
> existing tables to satisfy this need. 
> - A simple copy is COPY table a TO new table b.
> - A custom copy can mimic Postgres (e.g. COPY (SELECT * FROM country WHERE 
> country_name LIKE 'A%') TO …)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8877) Ability to read the TTL and WRITE TIME of an element in a collection

2015-03-03 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8877:
---
Fix Version/s: 3.0
 Assignee: Benjamin Lerer

> Ability to read the TTL and WRITE TIME of an element in a collection
> 
>
> Key: CASSANDRA-8877
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8877
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Drew Kutcharian
>Assignee: Benjamin Lerer
> Fix For: 3.0
>
>
> Currently it's possible to set the TTL and WRITE TIME of an element in a 
> collection using CQL, but there is no way to read them back. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8850) clean up options syntax for create/alter role

2015-03-03 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-8850:
---
Attachment: 8850.txt

Attached patch removes the ordering constraints on options supplied to 
{{(CREATE|ALTER) ROLE}} statements. Also, {{WITH}} and {{AND}} become 
optional, allowing syntax like:

{code}
CREATE ROLE r WITH LOGIN AND PASSWORD = 'foo';
CREATE ROLE r WITH PASSWORD 'foo' AND LOGIN AND SUPERUSER;
CREATE ROLE r WITH SUPERUSER LOGIN PASSWORD = 'foo';
CREATE ROLE r NOLOGIN; // compatibility with existing syntax
CREATE ROLE r WITH PASSWORD = 'foo' LOGIN SUPERUSER;  // compatibility with 
existing syntax
{code}

All of the existing dtests in test_auth.py & test_auth_roles.py still pass and 
I added some unit tests to verify the various permutations of the syntax.

{{(CREATE|ALTER) USER}} remains as before. That is, only the following form is 
supported:

{code}
CREATE USER u WITH PASSWORD 'foo' SUPERUSER;
{code}

> clean up options syntax for create/alter role 
> --
>
> Key: CASSANDRA-8850
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8850
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Minor
> Fix For: 3.0
>
> Attachments: 8850.txt
>
>
> {{CREATE/ALTER ROLE}} syntax would be improved by using {{WITH}} and {{AND}} 
> in a way more consistent with other statements.
> e.g. {{CREATE ROLE foo WITH LOGIN AND SUPERUSER AND PASSWORD 'password'}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8890) Enhance cassandra-env.sh to handle Java version output in case of OpenJDK icedtea

2015-03-03 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345235#comment-14345235
 ] 

Philip Thompson commented on CASSANDRA-8890:


Feel free to submit this as a patch as explained here:
http://wiki.apache.org/cassandra/HowToContribute

> Enhance cassandra-env.sh to handle Java version output in case of OpenJDK 
> icedtea
> --
>
> Key: CASSANDRA-8890
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8890
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Config
> Environment: Red Hat Enterprise Linux Server release 6.4 (Santiago)
>Reporter: Sumod Pawgi
>Priority: Minor
> Fix For: 2.1.4
>
>
> Where observed - 
> Cassandra node has OpenJDK - 
> java version "1.7.0_09-icedtea"
> In some situations, external agents trying to monitor a C* cluster need to 
> run the cassandra -v command to determine the Cassandra version and expect 
> numerical output, e.g. java version "1.7.0_75", as in the case of the Oracle 
> JDK. But if the cluster has OpenJDK IcedTea installed, this condition is not 
> satisfied and the agents will not work correctly, as the output from 
> "cassandra -v" is 
> /opt/apache/cassandra/bin/../conf/cassandra-env.sh: line 102: [: 09-icedtea: 
> integer expression expected
> Cause - 
> The line which is causing this behavior is -
> jvmver=`echo "$java_ver_output" | grep '[openjdk|java] version' | awk -F'"' 
> 'NR==1 {print $2}'`
> Suggested enhancement -
> If we change the line to -
>  jvmver=`echo "$java_ver_output" | grep '[openjdk|java] version' | awk -F'"' 
> 'NR==1 {print $2}' | awk 'BEGIN {FS="-"};{print $1}'`,
> it will give $jvmver as - 1.7.0_09 for the above case. 
> Can we add this enhancement in the cassandra-env.sh? I would like to add it 
> myself and submit for review, but I am not familiar with C* check in process. 
> There might be better ways to do this, but I thought of this to be simplest 
> and as the edition is at the end of the line, it will be easy to reverse if 
> needed.
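The suggested pipeline can be checked in isolation. A minimal sketch, feeding 
it a sample IcedTea version string rather than live `java -version` output 
(the pipeline itself is taken verbatim from the suggestion above):

```shell
# Sample first line of `java -version` output from OpenJDK IcedTea
java_ver_output='java version "1.7.0_09-icedtea"'

# Suggested parsing: extract the quoted version, then strip any "-suffix"
jvmver=`echo "$java_ver_output" | grep '[openjdk|java] version' \
  | awk -F'"' 'NR==1 {print $2}' | awk 'BEGIN {FS="-"};{print $1}'`

echo "$jvmver"   # 1.7.0_09
```

With an Oracle JDK string such as java version "1.7.0_75" the same pipeline 
still yields 1.7.0_75, since there is no "-" to split on, so existing behavior 
is unchanged.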



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8834) Top partitions reporting wrong cardinality

2015-03-03 Thread Chris Lohfink (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345229#comment-14345229
 ] 

Chris Lohfink commented on CASSANDRA-8834:
--

So I can't seem to reproduce it, but when testing after upgrading to 2.1.3 
(along with the stress tool upgrade that happened there) I was getting that 
exception, which I can neither explain nor make happen again...  I assumed it 
was something involving the change in schema when going from the old pure 
thrift table to what cqlstress creates now.

> Top partitions reporting wrong cardinality
> --
>
> Key: CASSANDRA-8834
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8834
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Chris Lohfink
>Assignee: Chris Lohfink
> Fix For: 2.1.4
>
> Attachments: cardinality.patch
>
>
> It always reports a cardinality of 1.  Patch also includes a try/catch around 
> the conversion of partition keys that isn't always handled well in thrift cfs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8504) Stack trace is erroneously logged twice

2015-03-03 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345217#comment-14345217
 ] 

Philip Thompson commented on CASSANDRA-8504:


Yep, the test is now passing and that commit did fix it. Feel free to close 
this now.

> Stack trace is erroneously logged twice
> ---
>
> Key: CASSANDRA-8504
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8504
> Project: Cassandra
>  Issue Type: Bug
> Environment: OSX and Ubuntu
>Reporter: Philip Thompson
>Assignee: Stefania
>Priority: Minor
> Fix For: 3.0
>
> Attachments: node4.log
>
>
> The dtest 
> {{replace_address_test.TestReplaceAddress.replace_active_node_test}} is 
> failing on 3.0. The following can be seen in the log:{code}ERROR [main] 
> 2014-12-17 15:12:33,871 CassandraDaemon.java:496 - Exception encountered 
> during startup
> java.lang.UnsupportedOperationException: Cannot replace a live node...
> at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:773)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:593)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:483)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:356) 
> [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:479)
>  [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:571) 
> [main/:na]
> ERROR [main] 2014-12-17 15:12:33,872 CassandraDaemon.java:584 - Exception 
> encountered during startup
> java.lang.UnsupportedOperationException: Cannot replace a live node...
> at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:773)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:593)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:483)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:356) 
> [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:479)
>  [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:571) 
> [main/:na]
> INFO  [StorageServiceShutdownHook] 2014-12-17 15:12:33,873 Gossiper.java:1349 
> - Announcing shutdown
> INFO  [StorageServiceShutdownHook] 2014-12-17 15:12:35,876 
> MessagingService.java:708 - Waiting for messaging service to quiesce{code}
> The test starts up a three node cluster, loads some data, then attempts to 
> start a fourth node with replace_address against the IP of a live node. This 
> is expected to fail, with one ERROR message in the log. In 3.0, we are seeing 
> two messages. 2.1-HEAD is working as expected. Attached is the full log of 
> the fourth node.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8819) LOCAL_QUORUM writes returns wrong message

2015-03-03 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8819:
---
Tester: Alan Boudreault  (was: Philip Thompson)

> LOCAL_QUORUM writes returns wrong message
> -
>
> Key: CASSANDRA-8819
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8819
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: CentOS 6.6
>Reporter: Wei Zhu
>Assignee: Sylvain Lebresne
> Fix For: 2.0.13
>
> Attachments: 8819-2.0.patch
>
>
> We have two DCs, each with 7 nodes.
> Here is the keyspace setup:
>  create keyspace test
>  with placement_strategy = 'NetworkTopologyStrategy'
>  and strategy_options = {DC2 : 3, DC1 : 3}
>  and durable_writes = true;
> We brought down two nodes in DC2 for maintenance. We only write to DC1 using 
> local_quorum (using the DataStax Java client).
> But we see these errors in the log:
> Cassandra timeout during write query at consistency LOCAL_QUORUM (4 replica 
> were required but only 3 acknowledged the write
> Why does it say 4 replicas were required? And why would it return an error to 
> the client, since local_quorum should succeed?
> Here are the output from nodetool status
> Note: Ownership information does not include topology; for complete 
> information, specify a keyspace
> Datacenter: DC2
> ===
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address  Load   Tokens  Owns   Host ID
>Rack
> UN  10.2.0.1  10.92 GB   256 7.9%     RAC206
> UN  10.2.0.2   6.17 GB256 8.0%     RAC106
> UN  10.2.0.3  6.63 GB256 7.3%     RAC107
> DL  10.2.0.4  1.54 GB256 7.7%    RAC107
> UN  10.2.0.5  6.02 GB256 6.6%     RAC106
> UJ  10.2.0.6   3.68 GB256 ?    RAC205
> UN  10.2.0.7  7.22 GB256 7.7%    RAC205
> Datacenter: DC1
> ===
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address  Load   Tokens  Owns   Host ID
>Rack
> UN  10.1.0.1   6.04 GB256 8.6%    RAC10
> UN  10.1.0.2   7.55 GB256 7.4%     RAC8
> UN  10.1.0.3   5.83 GB256 7.0%     RAC9
> UN  10.1.0.47.34 GB256 7.9%     RAC6
> UN  10.1.0.5   7.57 GB256 8.0%    RAC7
> UN  10.1.0.6   5.31 GB256 7.3%     RAC10
> UN  10.1.0.7   5.47 GB256 8.6%    RAC9
> I did a cql trace on the query and here is the trace; it does say 
>Write timeout; received 3 of 4 required replies | 17:27:52,831 |  10.1.0.1 
> |2002873
> at the end. I guess that is where the client gets the error from. But the 
> rows were inserted into Cassandra correctly. I also traced a read with 
> local_quorum and it behaves correctly; the reads don't go to DC2. The 
> problem is only with writes at local_quorum.
> {code}
> Tracing session: 5a789fb0-b70d-11e4-8fca-99bff9c19890
>  activity 
>| timestamp
> | source  | source_elapsed
> -+--+-+
>   
> execute_cql3_query | 17:27:50,828 
> |  10.1.0.1 |  0
>  Parsing insert into test (user_id, created, event_data, event_id)values ( 
> 123456789 , 9eab8950-b70c-11e4-8fca-99bff9c19891, 'test', '16'); | 
> 17:27:50,828 |  10.1.0.1 | 39
>   
>Preparing statement | 17:27:50,828 
> |  10.1.0.1 |135
>   
>  Message received from /10.1.0.1 | 17:27:50,829 | 
>  10.1.0.5 | 25
>   
> Sending message to /10.1.0.5 | 17:27:50,829 | 
>  10.1.0.1 |421
>   
>  Executing single-partition query on users | 17:27:50,829

[jira] [Updated] (CASSANDRA-8894) Our default buffer size for (uncompressed) buffered reads should be smaller, and based on the expected record size

2015-03-03 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-8894:

Description: A large contributor to slower buffered reads than mmapped is 
likely that we read a full 64Kb at once, when average record sizes may be as 
low as 140 bytes on our stress tests. The TLB has only 128 entries on a modern 
core, and each read will touch 32 of these, meaning we will almost never hit 
the TLB and will incur at least 30 unnecessary misses each time (as well as 
the other costs of larger-than-necessary accesses). When 
working with an SSD there is little to no benefit reading more than 4Kb at 
once, and in either case reading more data than we need is wasteful. So, I 
propose selecting a buffer size that is the next larger power of 2 than our 
average record size (with a minimum of 4Kb), so that we expect to read in one 
operation. I also propose that we create a pool of these buffers up-front, and 
that we ensure they are all exactly aligned to a virtual page, so that the 
source and target operations each touch exactly one virtual page per 4Kb of 
expected record size.  (was: A large contributor to slower buffered reads than 
mmapped is likely that we read a full 64Kb at once, when average record sizes 
may be as low as 140 bytes on our stress tests. The TLB has only 128 entries on 
a modern core, and each read will touch 16 of these, meaning we are unlikely to 
almost ever be hitting the TLB, and will be incurring at least 15 unnecessary 
misses each time (as well as the other costs of larger than necessary 
accesses). When working with an SSD there is little to no benefit reading more 
than 4Kb at once, and in either case reading more data than we need is 
wasteful. So, I propose selecting a buffer size that is the next larger power 
of 2 than our average record size (with a minimum of 4Kb), so that we expect to 
read in one operation. I also propose that we create a pool of these buffers 
up-front, and that we ensure they are all exactly aligned to a virtual page, so 
that the source and target operations each touch exactly one virtual page per 
4Kb of expected record size.)

> Our default buffer size for (uncompressed) buffered reads should be smaller, 
> and based on the expected record size
> --
>
> Key: CASSANDRA-8894
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8894
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Benedict
>Assignee: Benedict
> Fix For: 3.0
>
>
> A large contributor to slower buffered reads than mmapped is likely that we 
> read a full 64Kb at once, when average record sizes may be as low as 140 
> bytes on our stress tests. The TLB has only 128 entries on a modern core, and 
> each read will touch 32 of these, meaning we will almost never hit the TLB 
> and will incur at least 30 unnecessary misses each time (as well as the 
> other costs of larger-than-necessary accesses). When 
> working with an SSD there is little to no benefit reading more than 4Kb at 
> once, and in either case reading more data than we need is wasteful. So, I 
> propose selecting a buffer size that is the next larger power of 2 than our 
> average record size (with a minimum of 4Kb), so that we expect to read in one 
> operation. I also propose that we create a pool of these buffers up-front, 
> and that we ensure they are all exactly aligned to a virtual page, so that 
> the source and target operations each touch exactly one virtual page per 4Kb 
> of expected record size.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8067) NullPointerException in KeyCacheSerializer

2015-03-03 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345198#comment-14345198
 ] 

Benedict commented on CASSANDRA-8067:
-

+1, although I think this code could do with refactoring, as there's a bit of 
poor separation of concerns: the caller and callee of the CacheSerializer 
methods repeat much of the same work.

> NullPointerException in KeyCacheSerializer
> --
>
> Key: CASSANDRA-8067
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8067
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Eric Leleu
>Assignee: Aleksey Yeschenko
> Fix For: 2.1.4
>
> Attachments: 8067.txt
>
>
> Hi,
> I have this stack trace in the logs of Cassandra server (v2.1)
> {code}
> ERROR [CompactionExecutor:14] 2014-10-06 23:32:02,098 
> CassandraDaemon.java:166 - Exception in thread 
> Thread[CompactionExecutor:14,1,main]
> java.lang.NullPointerException: null
> at 
> org.apache.cassandra.service.CacheService$KeyCacheSerializer.serialize(CacheService.java:475)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.service.CacheService$KeyCacheSerializer.serialize(CacheService.java:463)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.cache.AutoSavingCache$Writer.saveCache(AutoSavingCache.java:225)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$11.run(CompactionManager.java:1061)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at java.util.concurrent.Executors$RunnableAdapter.call(Unknown 
> Source) ~[na:1.7.0]
> at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source) 
> ~[na:1.7.0]
> at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
> [na:1.7.0]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
> [na:1.7.0]
> at java.lang.Thread.run(Unknown Source) [na:1.7.0]
> {code}
> It may not be critical because this error occurred in the AutoSavingCache. 
> However, line 475 is about the CFMetaData, so it may hide a bigger issue...
> {code}
>  474 CFMetaData cfm = 
> Schema.instance.getCFMetaData(key.desc.ksname, key.desc.cfname);
>  475 cfm.comparator.rowIndexEntrySerializer().serialize(entry, 
> out);
> {code}
> Regards,
> Eric



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8896) Investigate upstream changes to compressors to fit contents exactly to one page

2015-03-03 Thread Benedict (JIRA)
Benedict created CASSANDRA-8896:
---

 Summary: Investigate upstream changes to compressors to fit 
contents exactly to one page
 Key: CASSANDRA-8896
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8896
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict


For optimal disk performance, it makes most sense to choose our compression 
boundaries based on compressed size, not uncompressed. If our compressors could 
take a target length, and return the number of source bytes they managed to fit 
into that space, this would permit us to lower the number of disk accesses per 
read. [~blambov]: you've dived into LZ4. How tricky do you think this might be?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8895) Compressed sstables should only compress if the win is above a certain threshold, and should use a variable block size

2015-03-03 Thread Benedict (JIRA)
Benedict created CASSANDRA-8895:
---

 Summary: Compressed sstables should only compress if the win is 
above a certain threshold, and should use a variable block size
 Key: CASSANDRA-8895
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8895
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
 Fix For: 3.0


When performing a flush to disk, we should assess whether the data we're 
flushing will actually compress substantially, and how large the page should 
be to balance compression ratio against read latency. Decompressing 64Kb 
chunks is wasteful when reading small records.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8894) Our default buffer size for (uncompressed) buffered reads should be smaller, and based on the expected record size

2015-03-03 Thread Benedict (JIRA)
Benedict created CASSANDRA-8894:
---

 Summary: Our default buffer size for (uncompressed) buffered reads 
should be smaller, and based on the expected record size
 Key: CASSANDRA-8894
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8894
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
 Fix For: 3.0


A large contributor to slower buffered reads than mmapped is likely that we 
read a full 64Kb at once, when average record sizes may be as low as 140 bytes 
on our stress tests. The TLB has only 128 entries on a modern core, and each 
read will touch 16 of these, meaning we will almost never hit the TLB and will 
incur at least 15 unnecessary misses each time (as well as the other costs of 
larger-than-necessary accesses). When working with 
an SSD there is little to no benefit reading more than 4Kb at once, and in 
either case reading more data than we need is wasteful. So, I propose selecting 
a buffer size that is the next larger power of 2 than our average record size 
(with a minimum of 4Kb), so that we expect to read in one operation. I also 
propose that we create a pool of these buffers up-front, and that we ensure 
they are all exactly aligned to a virtual page, so that the source and target 
operations each touch exactly one virtual page per 4Kb of expected record size.
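The proposed sizing rule could be sketched as follows. This is an illustrative 
sketch only, not the eventual implementation; it assumes "next larger power of 
2" means the smallest power of two that covers the average record size, with 
the 4Kb floor stated above:

```python
PAGE_SIZE = 4096  # 4Kb: one virtual page, the proposed minimum buffer size

def select_buffer_size(avg_record_size: int) -> int:
    """Smallest power of two >= avg_record_size, with a 4Kb floor."""
    size = PAGE_SIZE
    while size < avg_record_size:
        size <<= 1  # stays a power of two, so buffers remain page-aligned in length
    return size

# 140-byte average records (the stress-test figure above) still get one page
print(select_buffer_size(140))    # 4096
print(select_buffer_size(6000))   # 8192
```

With 140-byte records the floor dominates, so a read touches exactly one 
virtual page instead of the 32 touched by a 64Kb buffer.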



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8893) RandomAccessReader should share its FileChannel with all instances (via SegmentedFile)

2015-03-03 Thread Benedict (JIRA)
Benedict created CASSANDRA-8893:
---

 Summary: RandomAccessReader should share its FileChannel with all 
instances (via SegmentedFile)
 Key: CASSANDRA-8893
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8893
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
 Fix For: 3.0


There's no good reason to open a FileChannel for each 
(Compressed)?RandomAccessReader, and this would simplify RandomAccessReader 
to just a thin wrapper.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8793) Avoid memory allocation when searching index summary

2015-03-03 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345156#comment-14345156
 ] 

Benedict commented on CASSANDRA-8793:
-

Updated the repo with those changes. As for other callers, getKeySamples() is 
the only example I can see, and it is called rarely. It would require a 
somewhat ugly refactor, which is probably not worth the low yield IMO.

> Avoid memory allocation when searching index summary
> 
>
> Key: CASSANDRA-8793
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8793
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Benedict
>Assignee: Benedict
>Priority: Minor
> Fix For: 3.0
>
>
> Currently we build a byte[] for each comparison, when we could just fill the 
> details into a DirectByteBuffer



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8764) Refactor the way we notify compaction strategies about early opened files

2015-03-03 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson resolved CASSANDRA-8764.

Resolution: Duplicate

> Refactor the way we notify compaction strategies about early opened files
> -
>
> Key: CASSANDRA-8764
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8764
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
> Fix For: 2.1.4
>
>
> We currently only notify compaction strategies about when we get a new 
> instance of an sstable reader - for example when we move the start position. 
> We don't notify when we create a new 'temporary' sstable when opening early.
> We should probably only track actual files, with their original first/last 
> keys to make it easier for compaction strategies to not have to keep track of 
> what files are 'real' and what files have had their starts moved etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CASSANDRA-8764) Refactor the way we notify compaction strategies about early opened files

2015-03-03 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson reopened CASSANDRA-8764:


> Refactor the way we notify compaction strategies about early opened files
> -
>
> Key: CASSANDRA-8764
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8764
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
> Fix For: 2.1.4
>
>
> We currently only notify compaction strategies when we get a new 
> instance of an sstable reader - for example, when we move the start position. 
> We don't notify them when we create a new 'temporary' sstable when opening early.
> We should probably only track actual files, with their original first/last 
> keys, to make it easier for compaction strategies to avoid having to keep track of 
> which files are 'real' and which files have had their starts moved etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8764) Refactor the way we notify compaction strategies about early opened files

2015-03-03 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson resolved CASSANDRA-8764.

Resolution: Fixed

> Refactor the way we notify compaction strategies about early opened files
> -
>
> Key: CASSANDRA-8764
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8764
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
> Fix For: 2.1.4
>
>
> We currently only notify compaction strategies when we get a new 
> instance of an sstable reader - for example, when we move the start position. 
> We don't notify them when we create a new 'temporary' sstable when opening early.
> We should probably only track actual files, with their original first/last 
> keys, to make it easier for compaction strategies to avoid having to keep track of 
> which files are 'real' and which files have had their starts moved etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8884) Opening a non-system keyspace before first accessing the system keyspace results in deadlock

2015-03-03 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345093#comment-14345093
 ] 

Benjamin Lerer commented on CASSANDRA-8884:
---

I am not able to reproduce the problem. My code uses {{CQLSSTableWriter}} without 
first calling {{Keyspace.open("system")}} and does not trigger any deadlock. 
It would help to have a simple program that I can use to reproduce the problem.
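For what it's worth, the jstack in the description has the shape of a classic class-monitor deadlock: a worker task needs a synchronized static method whose lock the submitting thread already holds. A self-contained illustration of that general pattern (not Cassandra code; all names here are made up, and the wait is given a timeout so the sketch terminates):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class KeyspaceOpenDeadlock
{
    static final ExecutorService BATCH_OPEN = Executors.newFixedThreadPool(2);

    // Stand-in for Keyspace.open: a synchronized static method, so every
    // caller contends on the same Class-level monitor.
    static synchronized boolean open(String name) throws Exception
    {
        if (!name.equals("user_ks"))
            return false; // "system" and anything else opens trivially here

        // While still holding the class monitor, wait on a worker task that
        // itself needs open("system") -> it blocks on the monitor we hold.
        Future<?> f = BATCH_OPEN.submit(() -> {
            try { open("system"); } catch (Exception ignored) {}
        });
        try
        {
            f.get(1, TimeUnit.SECONDS);
            return false;
        }
        catch (TimeoutException e)
        {
            return true; // worker is stuck waiting for our class lock
        }
    }

    public static void main(String[] args) throws Exception
    {
        System.out.println(open("user_ks") ? "deadlocked" : "ok");
        BATCH_OPEN.shutdown();
    }
}
```

In the real code the executor is the SSTableBatchOpen pool and the monitor is the {{Keyspace}} class lock, which is why opening the system keyspace first avoids the cycle.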

> Opening a non-system keyspace before first accessing the system keyspace 
> results in deadlock
> 
>
> Key: CASSANDRA-8884
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8884
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Piotr Kołaczkowski
>Assignee: Benjamin Lerer
> Attachments: bulk.jstack
>
>
> I created a writer like this:
> {code}
> val writer = CQLSSTableWriter.builder()
>   .forTable(tableDef.cql)
>   .using(insertStatement)
>   .withPartitioner(partitioner)
>   .inDirectory(outputDirectory)
>   .withBufferSizeInMB(bufferSizeInMB)
>   .build()
> {code}
> Then I'm trying to write a row with {{addRow}} and it blocks forever.
> Everything related to {{CQLSSTableWriter}}, including its creation, is 
> happening in only one thread.
> {noformat}
> "SSTableBatchOpen:3" daemon prio=10 tid=0x7f4b399d7000 nid=0x4778 waiting 
> for monitor entry [0x7f4b240a7000]
>java.lang.Thread.State: BLOCKED (on object monitor)
>   at org.apache.cassandra.db.Keyspace.open(Keyspace.java:118)
>   - waiting to lock <0xe35fd6d0> (a java.lang.Class for 
> org.apache.cassandra.db.Keyspace)
>   at org.apache.cassandra.db.Keyspace.open(Keyspace.java:99)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepare(SelectStatement.java:1464)
>   at 
> org.apache.cassandra.cql3.QueryProcessor.getStatement(QueryProcessor.java:517)
>   at 
> org.apache.cassandra.cql3.QueryProcessor.parseStatement(QueryProcessor.java:265)
>   at 
> org.apache.cassandra.cql3.QueryProcessor.prepareInternal(QueryProcessor.java:306)
>   at 
> org.apache.cassandra.cql3.QueryProcessor.executeInternal(QueryProcessor.java:316)
>   at 
> org.apache.cassandra.db.SystemKeyspace.getSSTableReadMeter(SystemKeyspace.java:910)
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.<init>(SSTableReader.java:561)
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:433)
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:343)
>   at 
> org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:480)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> "SSTableBatchOpen:2" daemon prio=10 tid=0x7f4b399e7800 nid=0x4777 waiting 
> for monitor entry [0x7f4b23ca3000]
>java.lang.Thread.State: BLOCKED (on object monitor)
>   at org.apache.cassandra.db.Keyspace.open(Keyspace.java:118)
>   - waiting to lock <0xe35fd6d0> (a java.lang.Class for 
> org.apache.cassandra.db.Keyspace)
>   at org.apache.cassandra.db.Keyspace.open(Keyspace.java:99)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepare(SelectStatement.java:1464)
>   at 
> org.apache.cassandra.cql3.QueryProcessor.getStatement(QueryProcessor.java:517)
>   at 
> org.apache.cassandra.cql3.QueryProcessor.parseStatement(QueryProcessor.java:265)
>   at 
> org.apache.cassandra.cql3.QueryProcessor.prepareInternal(QueryProcessor.java:306)
>   at 
> org.apache.cassandra.cql3.QueryProcessor.executeInternal(QueryProcessor.java:316)
>   at 
> org.apache.cassandra.db.SystemKeyspace.getSSTableReadMeter(SystemKeyspace.java:910)
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.<init>(SSTableReader.java:561)
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:433)
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:343)
>   at 
> org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:480)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> 

[jira] [Updated] (CASSANDRA-8884) Opening a non-system keyspace before first accessing the system keyspace results in deadlock

2015-03-03 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-8884:

Summary: Opening a non-system keyspace before first accessing the system 
keyspace results in deadlock  (was: CQLSSTableWriter freezes on addRow)

> Opening a non-system keyspace before first accessing the system keyspace 
> results in deadlock
> 
>
> Key: CASSANDRA-8884
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8884
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Piotr Kołaczkowski
>Assignee: Benjamin Lerer
> Attachments: bulk.jstack
>
>
> I created a writer like this:
> {code}
> val writer = CQLSSTableWriter.builder()
>   .forTable(tableDef.cql)
>   .using(insertStatement)
>   .withPartitioner(partitioner)
>   .inDirectory(outputDirectory)
>   .withBufferSizeInMB(bufferSizeInMB)
>   .build()
> {code}
> Then I'm trying to write a row with {{addRow}} and it blocks forever.
> Everything related to {{CQLSSTableWriter}}, including its creation, is 
> happening in only one thread.
> {noformat}
> "SSTableBatchOpen:3" daemon prio=10 tid=0x7f4b399d7000 nid=0x4778 waiting 
> for monitor entry [0x7f4b240a7000]
>java.lang.Thread.State: BLOCKED (on object monitor)
>   at org.apache.cassandra.db.Keyspace.open(Keyspace.java:118)
>   - waiting to lock <0xe35fd6d0> (a java.lang.Class for 
> org.apache.cassandra.db.Keyspace)
>   at org.apache.cassandra.db.Keyspace.open(Keyspace.java:99)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepare(SelectStatement.java:1464)
>   at 
> org.apache.cassandra.cql3.QueryProcessor.getStatement(QueryProcessor.java:517)
>   at 
> org.apache.cassandra.cql3.QueryProcessor.parseStatement(QueryProcessor.java:265)
>   at 
> org.apache.cassandra.cql3.QueryProcessor.prepareInternal(QueryProcessor.java:306)
>   at 
> org.apache.cassandra.cql3.QueryProcessor.executeInternal(QueryProcessor.java:316)
>   at 
> org.apache.cassandra.db.SystemKeyspace.getSSTableReadMeter(SystemKeyspace.java:910)
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.<init>(SSTableReader.java:561)
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:433)
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:343)
>   at 
> org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:480)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> "SSTableBatchOpen:2" daemon prio=10 tid=0x7f4b399e7800 nid=0x4777 waiting 
> for monitor entry [0x7f4b23ca3000]
>java.lang.Thread.State: BLOCKED (on object monitor)
>   at org.apache.cassandra.db.Keyspace.open(Keyspace.java:118)
>   - waiting to lock <0xe35fd6d0> (a java.lang.Class for 
> org.apache.cassandra.db.Keyspace)
>   at org.apache.cassandra.db.Keyspace.open(Keyspace.java:99)
>   at 
> org.apache.cassandra.cql3.statements.SelectStatement$RawStatement.prepare(SelectStatement.java:1464)
>   at 
> org.apache.cassandra.cql3.QueryProcessor.getStatement(QueryProcessor.java:517)
>   at 
> org.apache.cassandra.cql3.QueryProcessor.parseStatement(QueryProcessor.java:265)
>   at 
> org.apache.cassandra.cql3.QueryProcessor.prepareInternal(QueryProcessor.java:306)
>   at 
> org.apache.cassandra.cql3.QueryProcessor.executeInternal(QueryProcessor.java:316)
>   at 
> org.apache.cassandra.db.SystemKeyspace.getSSTableReadMeter(SystemKeyspace.java:910)
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.<init>(SSTableReader.java:561)
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:433)
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:343)
>   at 
> org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:480)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> "SSTableBatchOpen:1" daemon prio=10 tid=0x7f4b399e7000 nid=0x4776 waiting 
> for monitor entry [0x7f4b2359d000]
>java.lang.Thread.State: BLOCKED

[jira] [Commented] (CASSANDRA-8757) IndexSummaryBuilder should construct itself offheap, and share memory between the result of each build() invocation

2015-03-03 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345066#comment-14345066
 ] 

Benedict commented on CASSANDRA-8757:
-

OK, I've pushed a new version to the repository that improves the comments and 
integrates SafeMemoryWriter with DataOutputTest (also slightly changing the 
behaviour of SafeMemoryWriter to support this, but in a way that is probably 
generally sensible anyway).

> IndexSummaryBuilder should construct itself offheap, and share memory between 
> the result of each build() invocation
> ---
>
> Key: CASSANDRA-8757
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8757
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Benedict
>Assignee: Benedict
> Fix For: 2.1.4
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8739) Don't check for overlap with sstables that have had their start positions moved in LCS

2015-03-03 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345036#comment-14345036
 ] 

Benedict commented on CASSANDRA-8739:
-

I'm hoping to fix this in CASSANDRA-8568 as well.

> Don't check for overlap with sstables that have had their start positions 
> moved in LCS
> --
>
> Key: CASSANDRA-8739
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8739
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 2.1.4
>
> Attachments: 0001-8739.patch
>
>
> When picking compaction candidates in LCS, we check that we won't cause any 
> overlap in the higher level. The problem is that we compare the files that have 
> had their start positions moved, meaning we can cause overlap. We also need to 
> include the tmplink files when checking this.
> Note that in 2.1 overlap is not as big a problem as before: if adding an 
> sstable would cause overlap, we send it back to L0 instead, meaning we do a 
> bit more compaction but never actually end up with overlap.
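A rough sketch of the proposed check, with hypothetical types rather than the actual LCS code: test overlap against each sstable's original on-disk bounds, not against a start position that may already have been moved by early opening.

```java
import java.util.List;

public class OverlapCheck
{
    // Simplified sstable bounds: the start a reader currently reports may have
    // been moved forward by early opening; originalFirst keeps the on-disk start.
    static class Bounds
    {
        final long originalFirst, first, last;
        Bounds(long originalFirst, long first, long last)
        {
            this.originalFirst = originalFirst;
            this.first = first;
            this.last = last;
        }
    }

    // Overlap test against the *original* file extents, so a moved start
    // cannot hide an overlap with the higher level.
    static boolean overlapsAny(Bounds candidate, List<Bounds> higherLevel)
    {
        for (Bounds s : higherLevel)
            if (candidate.originalFirst <= s.last && s.originalFirst <= candidate.last)
                return true;
        return false;
    }
}
```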



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8739) Don't check for overlap with sstables that have had their start positions moved in LCS

2015-03-03 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345034#comment-14345034
 ] 

Marcus Eriksson commented on CASSANDRA-8739:


The new compacting-L0 calculation takes the sstable *instances* from the 
datatracker compacting set - these instances are not the same as the ones in 
LCS L0 (the instances in LCS L0 can have had their start positions moved); 
hoping to fix that in CASSANDRA-8764.
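A toy illustration of the instance-identity problem (hypothetical types, not the datatracker API): a set of reader instances fails to find a moved-start clone of the same file, while tracking the underlying file path does not.

```java
import java.util.HashSet;
import java.util.Set;

public class CompactingSet
{
    // Two reader instances for the same physical file: same path, different objects.
    static class Reader
    {
        final String path;
        final long first; // start position; may differ between instances
        Reader(String path, long first) { this.path = path; this.first = first; }
        // No equals/hashCode override: HashSet falls back to identity semantics.
    }

    // Tracking instances: a moved-start clone of the same file is "not found".
    static boolean trackedByInstance(Set<Reader> compacting, Reader candidate)
    {
        return compacting.contains(candidate);
    }

    // Tracking the underlying file instead sidesteps the instance mismatch.
    static boolean trackedByFile(Set<String> compactingPaths, Reader candidate)
    {
        return compactingPaths.contains(candidate.path);
    }
}
```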

> Don't check for overlap with sstables that have had their start positions 
> moved in LCS
> --
>
> Key: CASSANDRA-8739
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8739
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 2.1.4
>
> Attachments: 0001-8739.patch
>
>
> When picking compaction candidates in LCS, we check that we won't cause any 
> overlap in the higher level. Problem is that we compare the files that have 
> had their start positions moved meaning we can cause overlap. We need to also 
> include the tmplink files when checking this.
> Note that in 2.1 overlap is not as big problem as earlier, if adding an 
> sstable would cause overlap, we send it back to L0 instead, meaning we do a 
> bit more compaction but we never actually have overlap.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8761) Make custom role options accessible from IRoleManager

2015-03-03 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-8761:
---
Reviewer: Aleksey Yeschenko

> Make custom role options accessible from IRoleManager
> -
>
> Key: CASSANDRA-8761
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8761
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Minor
> Fix For: 3.0
>
> Attachments: 8761.txt
>
>
> IRoleManager implementations may support custom OPTIONS arguments to CREATE & 
> ALTER ROLE. If supported, these custom options should be retrievable from the 
> IRoleManager and included in the results of LIST ROLES queries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8849) ListUsersStatement should consider inherited superuser status

2015-03-03 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-8849:
---
Attachment: 8849-v2.txt

Attached a v2 patch with the comments addressed. I've named the new class 
managing the caching {{RolesCache}}, but that seems slightly inaccurate, as 
caching may actually be disabled by setting {{roles_validity_in_ms}} to 0 (or 
if {{AllowAllAuthenticator}} is in use). The new class encapsulates this 
behaviour, so perhaps it would be better named {{CachingRoleProvider}} or similar. 

If we do rename along those lines, we may want to follow up with something 
similar for {{PermissionsCache}}.


bq. Additionally, a dtest would be nice to have.

Sorry, I forgot to link to the PR with the new dtest:
https://github.com/riptano/cassandra-dtest/pull/174
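A minimal sketch of the disabled-cache behaviour described above (a hypothetical class, not the patch itself): with a validity of 0 every lookup bypasses the cache and goes straight to the loader.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class RolesCacheSketch<K, V>
{
    private static final class Entry<V>
    {
        final V value;
        final long loadedAt;
        Entry(V value, long loadedAt) { this.value = value; this.loadedAt = loadedAt; }
    }

    private final long validityMillis;
    private final Map<K, Entry<V>> cache = new ConcurrentHashMap<>();

    public RolesCacheSketch(long validityMillis) { this.validityMillis = validityMillis; }

    // With validity <= 0 the cache is effectively disabled: every call goes
    // straight to the underlying role manager (the loader).
    public V get(K key, Function<K, V> loader)
    {
        if (validityMillis <= 0)
            return loader.apply(key);
        Entry<V> e = cache.get(key);
        long now = System.currentTimeMillis();
        if (e == null || now - e.loadedAt > validityMillis)
        {
            e = new Entry<>(loader.apply(key), now);
            cache.put(key, e);
        }
        return e.value;
    }
}
```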

> ListUsersStatement should consider inherited superuser status
> -
>
> Key: CASSANDRA-8849
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8849
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Minor
> Fix For: 3.0
>
> Attachments: 8849-v2.txt, 8849.txt
>
>
> When introducing roles in CASSANDRA-7653, we retained {{LIST USERS}} support 
> for backwards compatibility. However, the {{super}} column in its results is 
> derived from {{IRoleManager#isSuper}}, which only returns the superuser status 
> of the named role and doesn't consider any other roles granted to it. 
> {{LIST USERS}} then incorrectly shows a role which does not directly have 
> superuser status, but which inherits it, as not-a-superuser.
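A small sketch of the fix's intent, with made-up data structures rather than IRoleManager's real API: superuser status must be resolved transitively over granted roles, not just from the role's own flag.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class SuperuserCheck
{
    // role -> directly granted roles
    static final Map<String, Set<String>> GRANTS = new HashMap<>();
    static final Set<String> SUPERUSERS = new HashSet<>();

    // Direct flag only -- roughly what the pre-fix LIST USERS reported.
    static boolean isSuperDirect(String role)
    {
        return SUPERUSERS.contains(role);
    }

    // Walk granted roles transitively so inherited superuser status counts;
    // the 'seen' set guards against cycles in the grant graph.
    static boolean isSuperTransitive(String role)
    {
        return isSuperTransitive(role, new HashSet<>());
    }

    private static boolean isSuperTransitive(String role, Set<String> seen)
    {
        if (!seen.add(role))
            return false;
        if (SUPERUSERS.contains(role))
            return true;
        for (String granted : GRANTS.getOrDefault(role, Collections.emptySet()))
            if (isSuperTransitive(granted, seen))
                return true;
        return false;
    }
}
```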



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8860) Too many java.util.HashMap$Entry objects in heap

2015-03-03 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344917#comment-14344917
 ] 

Marcus Eriksson edited comment on CASSANDRA-8860 at 3/3/15 11:36 AM:
-

patch to remove option attached

keeping the actual parameter in 2.1 in case anyone has automated creating tables 
etc., but it will be removed entirely in 3.0


was (Author: krummas):
patch to remove option attached

keeping the option in 2.1 if anyone has automated creating tables etc, but will 
remove entirely in 3.0

> Too many java.util.HashMap$Entry objects in heap
> 
>
> Key: CASSANDRA-8860
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8860
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 2.1.3, jdk 1.7u51
>Reporter: Phil Yang
>Assignee: Marcus Eriksson
> Fix For: 2.1.4
>
> Attachments: 0001-remove-cold_reads_to_omit.patch, 8860-v2.txt, 
> 8860.txt, cassandra-env.sh, cassandra.yaml, jmap.txt, jstack.txt, 
> jstat-afterv1.txt, jstat-afterv2.txt, jstat-before.txt
>
>
> While upgrading my cluster to 2.1.3, I found that some nodes (not all) may have 
> a GC issue after the node restarts successfully. The old gen grows very fast and 
> most of the space cannot be recycled, even immediately after the node's status 
> is set to normal. The qps of both reading and writing is very low and there is 
> no heavy compaction.
> The jmap result seems strange: there are too many java.util.HashMap$Entry 
> objects in the heap, whereas in my experience "[B" is usually the No. 1.
> If I downgrade to 2.1.1, this issue does not appear.
> I uploaded the conf files and jstack/jmap outputs. I'll upload a heap dump if 
> someone needs it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8860) Too many java.util.HashMap$Entry objects in heap

2015-03-03 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-8860:
---
Reviewer: Tyler Hobbs  (was: Benedict)

> Too many java.util.HashMap$Entry objects in heap
> 
>
> Key: CASSANDRA-8860
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8860
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 2.1.3, jdk 1.7u51
>Reporter: Phil Yang
>Assignee: Marcus Eriksson
> Fix For: 2.1.4
>
> Attachments: 0001-remove-cold_reads_to_omit.patch, 8860-v2.txt, 
> 8860.txt, cassandra-env.sh, cassandra.yaml, jmap.txt, jstack.txt, 
> jstat-afterv1.txt, jstat-afterv2.txt, jstat-before.txt
>
>
> While upgrading my cluster to 2.1.3, I found that some nodes (not all) may have 
> a GC issue after the node restarts successfully. The old gen grows very fast and 
> most of the space cannot be recycled, even immediately after the node's status 
> is set to normal. The qps of both reading and writing is very low and there is 
> no heavy compaction.
> The jmap result seems strange: there are too many java.util.HashMap$Entry 
> objects in the heap, whereas in my experience "[B" is usually the No. 1.
> If I downgrade to 2.1.1, this issue does not appear.
> I uploaded the conf files and jstack/jmap outputs. I'll upload a heap dump if 
> someone needs it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-8860) Too many java.util.HashMap$Entry objects in heap

2015-03-03 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson reassigned CASSANDRA-8860:
--

Assignee: Marcus Eriksson  (was: Phil Yang)

> Too many java.util.HashMap$Entry objects in heap
> 
>
> Key: CASSANDRA-8860
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8860
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 2.1.3, jdk 1.7u51
>Reporter: Phil Yang
>Assignee: Marcus Eriksson
> Fix For: 2.1.4
>
> Attachments: 0001-remove-cold_reads_to_omit.patch, 8860-v2.txt, 
> 8860.txt, cassandra-env.sh, cassandra.yaml, jmap.txt, jstack.txt, 
> jstat-afterv1.txt, jstat-afterv2.txt, jstat-before.txt
>
>
> While upgrading my cluster to 2.1.3, I found that some nodes (not all) may have 
> a GC issue after the node restarts successfully. The old gen grows very fast and 
> most of the space cannot be recycled, even immediately after the node's status 
> is set to normal. The qps of both reading and writing is very low and there is 
> no heavy compaction.
> The jmap result seems strange: there are too many java.util.HashMap$Entry 
> objects in the heap, whereas in my experience "[B" is usually the No. 1.
> If I downgrade to 2.1.1, this issue does not appear.
> I uploaded the conf files and jstack/jmap outputs. I'll upload a heap dump if 
> someone needs it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8860) Too many java.util.HashMap$Entry objects in heap

2015-03-03 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-8860:
---
Attachment: 0001-remove-cold_reads_to_omit.patch

patch to remove option attached

keeping the option in 2.1 in case anyone has automated creating tables etc., but 
it will be removed entirely in 3.0

> Too many java.util.HashMap$Entry objects in heap
> 
>
> Key: CASSANDRA-8860
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8860
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 2.1.3, jdk 1.7u51
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 2.1.4
>
> Attachments: 0001-remove-cold_reads_to_omit.patch, 8860-v2.txt, 
> 8860.txt, cassandra-env.sh, cassandra.yaml, jmap.txt, jstack.txt, 
> jstat-afterv1.txt, jstat-afterv2.txt, jstat-before.txt
>
>
> While upgrading my cluster to 2.1.3, I found that some nodes (not all) may have 
> a GC issue after the node restarts successfully. The old gen grows very fast and 
> most of the space cannot be recycled, even immediately after the node's status 
> is set to normal. The qps of both reading and writing is very low and there is 
> no heavy compaction.
> The jmap result seems strange: there are too many java.util.HashMap$Entry 
> objects in the heap, whereas in my experience "[B" is usually the No. 1.
> If I downgrade to 2.1.1, this issue does not appear.
> I uploaded the conf files and jstack/jmap outputs. I'll upload a heap dump if 
> someone needs it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8366) Repair grows data on nodes, causes load to become unbalanced

2015-03-03 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson resolved CASSANDRA-8366.

   Resolution: Fixed
Fix Version/s: 2.1.4
Reproduced In: 2.1.2, 2.1.1  (was: 2.1.1, 2.1.2)

ok, thanks all, committed with that change

> Repair grows data on nodes, causes load to become unbalanced
> 
>
> Key: CASSANDRA-8366
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8366
> Project: Cassandra
>  Issue Type: Bug
> Environment: 4 node cluster
> 2.1.2 Cassandra
> Inserts and reads are done with CQL driver
>Reporter: Jan Karlsson
>Assignee: Marcus Eriksson
> Fix For: 2.1.4
>
> Attachments: 0001-8366.patch, results-1000-inc-repairs.txt, 
> results-1750_inc_repair.txt, results-500_1_inc_repairs.txt, 
> results-500_2_inc_repairs.txt, 
> results-500_full_repair_then_inc_repairs.txt, 
> results-500_inc_repairs_not_parallel.txt, 
> run1_with_compact_before_repair.log, run2_no_compact_before_repair.log, 
> run3_no_compact_before_repair.log, test.sh, testv2.sh
>
>
> There seems to be something weird going on when repairing data.
> I have a program that runs for 2 hours, inserting 250 random numbers and reading 
> 250 times per second. It creates 2 keyspaces with SimpleStrategy and an RF of 3. 
> I use size-tiered compaction for my cluster. 
> After those 2 hours I run a repair and the load of all nodes goes up. If I 
> run an incremental repair the load goes up a lot more. I saw the load shoot up 
> to 8 times the original size multiple times with incremental repair (from 2G to 
> 16G).
> With nodes 9, 8, 7 and 6 the repro procedure looked like this:
> (Note that running a full repair first is not a requirement to reproduce.)
> {noformat}
> After 2 hours of 250 reads + 250 writes per second:
> UN  9  583.39 MB  256 ?   28220962-26ae-4eeb-8027-99f96e377406  rack1
> UN  8  584.01 MB  256 ?   f2de6ea1-de88-4056-8fde-42f9c476a090  rack1
> UN  7  583.72 MB  256 ?   2b6b5d66-13c8-43d8-855c-290c0f3c3a0b  rack1
> UN  6  583.84 MB  256 ?   b8bd67f1-a816-46ff-b4a4-136ad5af6d4b  rack1
> Repair -pr -par on all nodes sequentially
> UN  9  746.29 MB  256 ?   28220962-26ae-4eeb-8027-99f96e377406  rack1
> UN  8  751.02 MB  256 ?   f2de6ea1-de88-4056-8fde-42f9c476a090  rack1
> UN  7  748.89 MB  256 ?   2b6b5d66-13c8-43d8-855c-290c0f3c3a0b  rack1
> UN  6  758.34 MB  256 ?   b8bd67f1-a816-46ff-b4a4-136ad5af6d4b  rack1
> repair -inc -par on all nodes sequentially
> UN  9  2.41 GB    256 ?   28220962-26ae-4eeb-8027-99f96e377406  rack1
> UN  8  2.53 GB    256 ?   f2de6ea1-de88-4056-8fde-42f9c476a090  rack1
> UN  7  2.6 GB     256 ?   2b6b5d66-13c8-43d8-855c-290c0f3c3a0b  rack1
> UN  6  2.17 GB    256 ?   b8bd67f1-a816-46ff-b4a4-136ad5af6d4b  rack1
> after rolling restart
> UN  9  1.47 GB    256 ?   28220962-26ae-4eeb-8027-99f96e377406  rack1
> UN  8  1.5 GB     256 ?   f2de6ea1-de88-4056-8fde-42f9c476a090  rack1
> UN  7  2.46 GB    256 ?   2b6b5d66-13c8-43d8-855c-290c0f3c3a0b  rack1
> UN  6  1.19 GB    256 ?   b8bd67f1-a816-46ff-b4a4-136ad5af6d4b  rack1
> compact all nodes sequentially
> UN  9  989.99 MB  256 ?   28220962-26ae-4eeb-8027-99f96e377406  rack1
> UN  8  994.75 MB  256 ?   f2de6ea1-de88-4056-8fde-42f9c476a090  rack1
> UN  7  1.46 GB    256 ?   2b6b5d66-13c8-43d8-855c-290c0f3c3a0b  rack1
> UN  6  758.82 MB  256 ?   b8bd67f1-a816-46ff-b4a4-136ad5af6d4b  rack1
> repair -inc -par on all nodes sequentially
> UN  9  1.98 GB    256 ?   28220962-26ae-4eeb-8027-99f96e377406  rack1
> UN  8  2.3 GB     256 ?   f2de6ea1-de88-4056-8fde-42f9c476a090  rack1
> UN  7  3.71 GB    256 ?   2b6b5d66-13c8-43d8-855c-290c0f3c3a0b  rack1
> UN  6  1.68 GB    256 ?   b8bd67f1-a816-46ff-b4a4-136ad5af6d4b  rack1
> restart once more
> UN  9  2 GB       256 ?   28220962-26ae-4eeb-8027-99f96e377406  rack1
> UN  8  2.05 GB    256 ?   f2de6ea1-de88-4056-8fde-42f9c476a090  rack1
> UN  7  4.1 GB     256 ?   2b6b5d66-13c8-43d8-855c-290c0f3c3a0b  rack1
> UN  6  1.68 GB    256 ?   b8bd67f1-a816-46ff-b4a4-136ad5af6d4b  rack1
> {noformat}
> Is there something I'm missing, or is this strange behavior?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/2] cassandra git commit: Pick sstables to validate as late as possible with inc repairs

2015-03-03 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/trunk 7cc51f7ae -> 2818ca4cf


Pick sstables to validate as late as possible with inc repairs

Patch by marcuse; reviewed by yukim for CASSANDRA-8366


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2f7077c0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2f7077c0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2f7077c0

Branch: refs/heads/trunk
Commit: 2f7077c06ccbd5e8e7259c6891fe98d83ec3359d
Parents: 33279dd
Author: Marcus Eriksson 
Authored: Tue Feb 17 16:20:35 2015 +0100
Committer: Marcus Eriksson 
Committed: Tue Mar 3 10:32:46 2015 +0100

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  | 14 +++
 .../db/compaction/CompactionManager.java| 14 ++-
 .../cassandra/service/ActiveRepairService.java  | 41 +++-
 4 files changed, 42 insertions(+), 28 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2f7077c0/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 15a5a61..c3c7a19 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.4
+ * Pick sstables for validation as late as possible inc repairs 
(CASSANDRA-8366)
  * Fix commitlog getPendingTasks to not increment (CASSANDRA-8856)
  * Fix parallelism adjustment in range and secondary index queries
when the first fetch does not satisfy the limit (CASSANDRA-8856)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2f7077c0/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 62aadf9..e4531f2 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -2926,4 +2926,18 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 return new ArrayList<>(view.sstables);
 }
 };
+
+public static final Function<DataTracker.View, List<SSTableReader>> 
UNREPAIRED_SSTABLES = new Function<DataTracker.View, List<SSTableReader>>()
+{
+public List<SSTableReader> apply(DataTracker.View view)
+{
+List<SSTableReader> sstables = new ArrayList<>();
+for (SSTableReader sstable : view.sstables)
+{
+if (!sstable.isRepaired())
+sstables.add(sstable);
+}
+return sstables;
+}
+};
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2f7077c0/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index 68313a3..e54a25f 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -956,7 +956,19 @@ public class CompactionManager implements 
CompactionManagerMBean
 if (validator.desc.parentSessionId == null || 
ActiveRepairService.instance.getParentRepairSession(validator.desc.parentSessionId)
 == null)
 sstables = 
cfs.selectAndReference(ColumnFamilyStore.ALL_SSTABLES).refs;
 else
-sstables = 
ActiveRepairService.instance.getParentRepairSession(validator.desc.parentSessionId).getAndReferenceSSTables(cfs.metadata.cfId);
+{
+ColumnFamilyStore.RefViewFragment refView = 
cfs.selectAndReference(ColumnFamilyStore.UNREPAIRED_SSTABLES);
+sstables = refView.refs;
+Set currentlyRepairing = 
ActiveRepairService.instance.currentlyRepairing(cfs.metadata.cfId, 
validator.desc.parentSessionId);
+
+if (!Sets.intersection(currentlyRepairing, 
Sets.newHashSet(refView.sstables)).isEmpty())
+{
+logger.error("Cannot start multiple repair sessions 
over the same sstables");
+throw new RuntimeException("Cannot start multiple 
repair sessions over the same sstables");
+}
+
+
ActiveRepairService.instance.getParentRepairSession(validator.desc.parentSessionId).addSSTables(cfs.metadata.cfId,
 refView.sstables);
+}
 
 if (validator.gcBefore > 0)
 gcBefore = validator.gcBefore;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2f7077c0/src/java/org/apache/cassandra/service/ActiveRepairService.java
--

cassandra git commit: Pick sstables to validate as late as possible with inc repairs

2015-03-03 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 33279dd8c -> 2f7077c06


Pick sstables to validate as late as possible with inc repairs

Patch by marcuse; reviewed by yukim for CASSANDRA-8366


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2f7077c0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2f7077c0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2f7077c0

Branch: refs/heads/cassandra-2.1
Commit: 2f7077c06ccbd5e8e7259c6891fe98d83ec3359d
Parents: 33279dd
Author: Marcus Eriksson 
Authored: Tue Feb 17 16:20:35 2015 +0100
Committer: Marcus Eriksson 
Committed: Tue Mar 3 10:32:46 2015 +0100

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  | 14 +++
 .../db/compaction/CompactionManager.java| 14 ++-
 .../cassandra/service/ActiveRepairService.java  | 41 +++-
 4 files changed, 42 insertions(+), 28 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2f7077c0/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 15a5a61..c3c7a19 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.4
+ * Pick sstables for validation as late as possible inc repairs 
(CASSANDRA-8366)
  * Fix commitlog getPendingTasks to not increment (CASSANDRA-8856)
  * Fix parallelism adjustment in range and secondary index queries
when the first fetch does not satisfy the limit (CASSANDRA-8856)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2f7077c0/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 62aadf9..e4531f2 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -2926,4 +2926,18 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
             return new ArrayList<>(view.sstables);
         }
     };
+
+    public static final Function<DataTracker.View, List<SSTableReader>> UNREPAIRED_SSTABLES = new Function<DataTracker.View, List<SSTableReader>>()
+    {
+        public List<SSTableReader> apply(DataTracker.View view)
+        {
+            List<SSTableReader> sstables = new ArrayList<>();
+            for (SSTableReader sstable : view.sstables)
+            {
+                if (!sstable.isRepaired())
+                    sstables.add(sstable);
+            }
+            return sstables;
+        }
+    };
 }
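The new UNREPAIRED_SSTABLES function above narrows a DataTracker view down to the sstables not yet marked repaired, using Guava's Function interface. The same selection can be sketched with stdlib streams; the SSTable record below is a hypothetical stand-in for SSTableReader, not the real class:

```java
import java.util.List;
import java.util.stream.Collectors;

public class UnrepairedFilter
{
    // Hypothetical stand-in for SSTableReader: just a name and a repaired flag.
    public record SSTable(String name, boolean repaired) {}

    /** Keep only the sstables that have not been through a successful repair yet. */
    public static List<SSTable> unrepaired(List<SSTable> view)
    {
        return view.stream()
                   .filter(s -> !s.repaired())
                   .collect(Collectors.toList());
    }
}
```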

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2f7077c0/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index 68313a3..e54a25f 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -956,7 +956,19 @@ public class CompactionManager implements CompactionManagerMBean
         if (validator.desc.parentSessionId == null || ActiveRepairService.instance.getParentRepairSession(validator.desc.parentSessionId) == null)
             sstables = cfs.selectAndReference(ColumnFamilyStore.ALL_SSTABLES).refs;
         else
-            sstables = ActiveRepairService.instance.getParentRepairSession(validator.desc.parentSessionId).getAndReferenceSSTables(cfs.metadata.cfId);
+        {
+            ColumnFamilyStore.RefViewFragment refView = cfs.selectAndReference(ColumnFamilyStore.UNREPAIRED_SSTABLES);
+            sstables = refView.refs;
+            Set<SSTableReader> currentlyRepairing = ActiveRepairService.instance.currentlyRepairing(cfs.metadata.cfId, validator.desc.parentSessionId);
+
+            if (!Sets.intersection(currentlyRepairing, Sets.newHashSet(refView.sstables)).isEmpty())
+            {
+                logger.error("Cannot start multiple repair sessions over the same sstables");
+                throw new RuntimeException("Cannot start multiple repair sessions over the same sstables");
+            }
+
+            ActiveRepairService.instance.getParentRepairSession(validator.desc.parentSessionId).addSSTables(cfs.metadata.cfId, refView.sstables);
+        }
 
         if (validator.gcBefore > 0)
             gcBefore = validator.gcBefore;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2f7077c0/src/java/org/apache/cassandra/service/ActiveRepairService.java
--
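The guard added in the hunk above refuses to start a validation when any of the selected sstables is already part of another repair session, by intersecting the two sets. A stdlib-only sketch of that check (the patch itself uses Guava's Sets.intersection; the OverlapCheck class here is hypothetical):

```java
import java.util.HashSet;
import java.util.Set;

public class OverlapCheck
{
    /**
     * True if any candidate sstable is already part of another repair session.
     * Stdlib equivalent of the patch's !Sets.intersection(a, b).isEmpty().
     */
    public static <T> boolean overlaps(Set<T> currentlyRepairing, Set<T> candidates)
    {
        Set<T> overlap = new HashSet<>(candidates);
        overlap.retainAll(currentlyRepairing); // keep only elements present in both sets
        return !overlap.isEmpty();
    }

    public static void main(String[] args)
    {
        // Disjoint sets: the validation may proceed.
        System.out.println(overlaps(Set.of("sstable-a"), Set.of("sstable-b")));
        // Shared sstable: the patch would log and throw at this point.
        System.out.println(overlaps(Set.of("sstable-a"), Set.of("sstable-a", "sstable-b")));
    }
}
```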

[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-03-03 Thread marcuse
Merge branch 'cassandra-2.1' into trunk

Conflicts:
src/java/org/apache/cassandra/db/compaction/CompactionManager.java
src/java/org/apache/cassandra/service/ActiveRepairService.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2818ca4c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2818ca4c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2818ca4c

Branch: refs/heads/trunk
Commit: 2818ca4cf24f05a75041d220af4d3b0aa5203dbf
Parents: 7cc51f7 2f7077c
Author: Marcus Eriksson 
Authored: Tue Mar 3 10:44:21 2015 +0100
Committer: Marcus Eriksson 
Committed: Tue Mar 3 10:44:21 2015 +0100

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  | 14 +
 .../db/compaction/CompactionManager.java| 22 +---
 .../cassandra/service/ActiveRepairService.java  |  9 
 4 files changed, 29 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2818ca4c/CHANGES.txt
--
diff --cc CHANGES.txt
index 3892bbb,c3c7a19..23f9590
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,65 -1,5 +1,66 @@@
 +3.0
 + * Add role based access control (CASSANDRA-7653, 8650, 7216, 8760)
 + * Avoid accessing partitioner through StorageProxy (CASSANDRA-8244, 8268)
 + * Upgrade Metrics library and remove depricated metrics (CASSANDRA-5657)
 + * Serializing Row cache alternative, fully off heap (CASSANDRA-7438)
 + * Duplicate rows returned when in clause has repeated values (CASSANDRA-6707)
 + * Make CassandraException unchecked, extend RuntimeException (CASSANDRA-8560)
 + * Support direct buffer decompression for reads (CASSANDRA-8464)
 + * DirectByteBuffer compatible LZ4 methods (CASSANDRA-7039)
 + * Group sstables for anticompaction correctly (CASSANDRA-8578)
 + * Add ReadFailureException to native protocol, respond
 +   immediately when replicas encounter errors while handling
 +   a read request (CASSANDRA-7886)
 + * Switch CommitLogSegment from RandomAccessFile to nio (CASSANDRA-8308)
 + * Allow mixing token and partition key restrictions (CASSANDRA-7016)
 + * Support index key/value entries on map collections (CASSANDRA-8473)
 + * Modernize schema tables (CASSANDRA-8261)
 + * Support for user-defined aggregation functions (CASSANDRA-8053)
 + * Fix NPE in SelectStatement with empty IN values (CASSANDRA-8419)
 + * Refactor SelectStatement, return IN results in natural order instead
 +   of IN value list order and ignore duplicate values in partition key IN 
restrictions (CASSANDRA-7981)
 + * Support UDTs, tuples, and collections in user-defined
 +   functions (CASSANDRA-7563)
 + * Fix aggregate fn results on empty selection, result column name,
 +   and cqlsh parsing (CASSANDRA-8229)
 + * Mark sstables as repaired after full repair (CASSANDRA-7586)
 + * Extend Descriptor to include a format value and refactor reader/writer
 +   APIs (CASSANDRA-7443)
 + * Integrate JMH for microbenchmarks (CASSANDRA-8151)
 + * Keep sstable levels when bootstrapping (CASSANDRA-7460)
 + * Add Sigar library and perform basic OS settings check on startup 
(CASSANDRA-7838)
 + * Support for aggregation functions (CASSANDRA-4914)
 + * Remove cassandra-cli (CASSANDRA-7920)
 + * Accept dollar quoted strings in CQL (CASSANDRA-7769)
 + * Make assassinate a first class command (CASSANDRA-7935)
 + * Support IN clause on any partition key column (CASSANDRA-7855)
 + * Support IN clause on any clustering column (CASSANDRA-4762)
 + * Improve compaction logging (CASSANDRA-7818)
 + * Remove YamlFileNetworkTopologySnitch (CASSANDRA-7917)
 + * Do anticompaction in groups (CASSANDRA-6851)
 + * Support user-defined functions (CASSANDRA-7395, 7526, 7562, 7740, 7781, 
7929,
 +   7924, 7812, 8063, 7813, 7708)
 + * Permit configurable timestamps with cassandra-stress (CASSANDRA-7416)
 + * Move sstable RandomAccessReader to nio2, which allows using the
 +   FILE_SHARE_DELETE flag on Windows (CASSANDRA-4050)
 + * Remove CQL2 (CASSANDRA-5918)
 + * Add Thrift get_multi_slice call (CASSANDRA-6757)
 + * Optimize fetching multiple cells by name (CASSANDRA-6933)
 + * Allow compilation in java 8 (CASSANDRA-7028)
 + * Make incremental repair default (CASSANDRA-7250)
 + * Enable code coverage thru JaCoCo (CASSANDRA-7226)
 + * Switch external naming of 'column families' to 'tables' (CASSANDRA-4369) 
 + * Shorten SSTable path (CASSANDRA-6962)
 + * Use unsafe mutations for most unit tests (CASSANDRA-6969)
 + * Fix race condition during calculation of pending ranges (CASSANDRA-7390)
 + * Fail on very large batch sizes (CASSANDRA-8011)
 + * Improve concurrency of repair (CASSANDRA-6455, 8208)
 + * Select optimal CRC32 implementation at runtime (CASSANDRA-8614)
 + * Evaluate

[jira] [Commented] (CASSANDRA-8860) Too many java.util.HashMap$Entry objects in heap

2015-03-03 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344879#comment-14344879
 ] 

Benedict commented on CASSANDRA-8860:
-

[~yangzhe1991] yes, both statements are correct, but it's kind of moot since it 
looks like we'll be removing it :)

> Too many java.util.HashMap$Entry objects in heap
> 
>
> Key: CASSANDRA-8860
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8860
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 2.1.3, jdk 1.7u51
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 2.1.4
>
> Attachments: 8860-v2.txt, 8860.txt, cassandra-env.sh, cassandra.yaml, 
> jmap.txt, jstack.txt, jstat-afterv1.txt, jstat-afterv2.txt, jstat-before.txt
>
>
> While upgrading my cluster to 2.1.3, I found that some nodes (not all) may
> hit a GC issue after restarting successfully. The old gen grows very fast and
> most of the space cannot be recycled, even immediately after the node's
> status is set to normal. The qps of both reads and writes is very low and
> there is no heavy compaction.
> The jmap result is strange: there are too many java.util.HashMap$Entry
> objects in the heap, whereas in my experience "[B" (byte arrays) is usually
> the top entry.
> If I downgrade to 2.1.1, this issue does not appear.
> I uploaded the conf files and jstack/jmap outputs, and I'll upload a heap
> dump if someone needs it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8878) Counter Tables should be more clearly identified

2015-03-03 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344861#comment-14344861
 ] 

Sylvain Lebresne commented on CASSANDRA-8878:
-

For the record, I'm in favor of adding the {{CREATE COUNTER TABLE}} syntax as
that's imo the more explicit option (and counter tables are different enough
that it's worth being explicit), though obviously for the sake of backward
compatibility we'll have to leave it optional. Still, we can encourage using
the new syntax once we have it, and perhaps more importantly, cqlsh
{{DESCRIBE}} can start including it for counter tables, making examples like
the one in the description less surprising.

> Counter Tables should be more clearly identified
> 
>
> Key: CASSANDRA-8878
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8878
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Michaël Figuière
>Assignee: Aleksey Yeschenko
>Priority: Minor
> Fix For: 3.0
>
>
> Counter tables are internally considered a particular kind of table,
> different from regular ones. This counter-specific nature is implicitly
> defined by the fact that columns within the table have the {{counter}} data
> type. This nature turns out to be persistent over time, that is, if the
> user does the following:
> {code}
> CREATE TABLE counttable (key uuid primary key, count counter);
> ALTER TABLE counttable DROP count;
> ALTER TABLE counttable ADD count2 int;
> {code} 
> The following error will be thrown:
> {code}
> Cannot add a non counter column (count2) in a counter column family
> {code}
> This happens even though the table no longer has any counter columns. This
> implicit, persistent nature can be challenging for users to understand (and
> impossible to infer in the case above). For this reason a more explicit
> declaration of counter tables would be appropriate, such as:
> {code}
> CREATE COUNTER TABLE counttable (key uuid primary key, count counter);
> {code}
> Besides that, adding a boolean {{counter_table}} column in the 
> {{system.schema_columnfamilies}} table would allow external tools to easily 
> differentiate a counter table from a regular one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7816) Updated the "4.2.6. EVENT" section in the binary protocol specification

2015-03-03 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344785#comment-14344785
 ] 

Stefania commented on CASSANDRA-7816:
-

It's quite easy to reproduce. I added a new test, {{restart_node_test}}, to
pushed_notifications_test.py, available in this pull request:
https://github.com/riptano/cassandra-dtest/pull/177.

There are always two DOWN notifications, and this is deterministic. They are 
generated by:

{code}
INFO  [GossipStage:1] 2015-03-03 01:10:47,156 Server.java:413 - 
Thread[GossipStage:1,5,main]
at java.lang.Thread.getStackTrace(Thread.java:1589)
at 
org.apache.cassandra.transport.Server$EventNotifier.getStackTrace(Server.java:396)
at 
org.apache.cassandra.transport.Server$EventNotifier.onDown(Server.java:413)
at 
org.apache.cassandra.service.StorageService.onDead(StorageService.java:2049)
at org.apache.cassandra.gms.Gossiper.markDead(Gossiper.java:932)
at org.apache.cassandra.gms.Gossiper.convict(Gossiper.java:319)
at 
org.apache.cassandra.gms.FailureDetector.forceConviction(FailureDetector.java:251)
at 
org.apache.cassandra.gms.GossipShutdownVerbHandler.doVerb(GossipShutdownVerbHandler.java:37)
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:62)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{code}

and 

{code}
INFO  [GossipStage:1] 2015-03-03 01:11:04,254 Server.java:413 - 
Thread[GossipStage:1,5,main]
at java.lang.Thread.getStackTrace(Thread.java:1589)
at 
org.apache.cassandra.transport.Server$EventNotifier.getStackTrace(Server.java:396)
at 
org.apache.cassandra.transport.Server$EventNotifier.onDown(Server.java:413)
at 
org.apache.cassandra.service.StorageService.onDead(StorageService.java:2049)
at 
org.apache.cassandra.service.StorageService.onRestart(StorageService.java:2057)
at 
org.apache.cassandra.gms.Gossiper.handleMajorStateChange(Gossiper.java:958)
at 
org.apache.cassandra.gms.Gossiper.applyStateLocally(Gossiper.java:1024)
at 
org.apache.cassandra.gms.GossipDigestAckVerbHandler.doVerb(GossipDigestAckVerbHandler.java:58)
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:62)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{code}

There are one or more UP notifications; the count is not deterministic, but
the extra notifications tend to appear by the third time the node is
restarted. They are generated by the same stack trace but from different
threads, indicating a contention problem to be investigated further:

{code}
INFO  [SharedPool-Worker-2] 2015-03-03 01:11:04,419 Gossiper.java:916 - 
InetAddress /127.0.0.2 is now UP
INFO  [SharedPool-Worker-2] 2015-03-03 01:11:04,421 Server.java:407 - 
Thread[SharedPool-Worker-2,10,main]
at java.lang.Thread.getStackTrace(Thread.java:1589)
at 
org.apache.cassandra.transport.Server$EventNotifier.getStackTrace(Server.java:396)
at 
org.apache.cassandra.transport.Server$EventNotifier.onUp(Server.java:407)
at 
org.apache.cassandra.service.StorageService.onAlive(StorageService.java:2033)
at org.apache.cassandra.gms.Gossiper.realMarkAlive(Gossiper.java:918)
at org.apache.cassandra.gms.Gossiper.access$900(Gossiper.java:67)
at org.apache.cassandra.gms.Gossiper$2.response(Gossiper.java:900)
at 
org.apache.cassandra.net.ResponseVerbHandler.doVerb(ResponseVerbHandler.java:54)
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:62)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at 
org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105)
at java.lang.Thread.run(Thread.java:745)
{code}

Sample output of the test (with assertions commented out):

{code}
KEEP_LOGS=true PRINT_DEBUG=true nosetests -s -a 'selected' 
pushed_notifications_test.py
cluster ccm directory: /tmp/dtest-AQzO0X
Restarting second node...
Source 127.0.0.1 sent DOWN for 127.0.0.2
Source 127.0.0.1 sent DOWN for 127.0.0.2
Source 127.0.0.1 sent UP for 127.0.0.2
Waiting for notifications from 127.0.0.1
Restarting second node...
Source 127.0.0.1 sent DOWN for 127.0.0.2
Source 127.0.0.1 sent DOWN for 127.0.0.2
Source 127.0.0.1 sent UP for 127.0.0.2
Waiting for notifications from 127.0.0.1
Restarting second node...
Source 127.0.0.1 sent DO

[jira] [Commented] (CASSANDRA-7875) Prepared statements using dropped indexes are not handled correctly

2015-03-03 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344767#comment-14344767
 ] 

Stefania commented on CASSANDRA-7875:
-

The dtest is available in this pull request: 
https://github.com/riptano/cassandra-dtest/pull/177.

> Prepared statements using dropped indexes are not handled correctly
> ---
>
> Key: CASSANDRA-7875
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7875
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Tyler Hobbs
>Assignee: Stefania
>Priority: Minor
> Fix For: 2.1.4
>
> Attachments: prepared_statements_test.py, repro.py
>
>
> When select statements are prepared, we verify that the column restrictions 
> use indexes (where necessary).  However, we don't perform a similar check 
> when the statement is executed, so it fails somewhere further down the line.  
> In this case, it hits an assertion:
> {noformat}
> java.lang.AssertionError: Sequential scan with filters is not supported (if 
> you just created an index, you need to wait for the creation to be propagated 
> to all nodes before querying it)
>   at 
> org.apache.cassandra.db.filter.ExtendedFilter$WithClauses.getExtraFilter(ExtendedFilter.java:259)
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1759)
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.getRangeSlice(ColumnFamilyStore.java:1709)
>   at 
> org.apache.cassandra.db.PagedRangeCommand.executeLocally(PagedRangeCommand.java:119)
>   at 
> org.apache.cassandra.service.StorageProxy$LocalRangeSliceRunnable.runMayThrow(StorageProxy.java:1394)
>   at 
> org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1936)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:724)
> {noformat}
> During execution, we should check that the indexes still exist and provide a 
> better error if they do not.
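The suggested fix is to re-validate the statement's index dependencies at execution time instead of failing with the late AssertionError above. A minimal sketch of such a guard; everything here (class and method names, the string-based index lists) is hypothetical illustration, not the actual Cassandra API:

```java
import java.util.ArrayList;
import java.util.List;

public class PreparedSelect
{
    // Hypothetical: indexes the prepared query plan relies on.
    private final List<String> requiredIndexes;

    public PreparedSelect(List<String> requiredIndexes)
    {
        this.requiredIndexes = requiredIndexes;
    }

    /** Indexes dropped since preparation; an empty list means safe to execute. */
    public List<String> missingIndexes(List<String> existingIndexes)
    {
        List<String> missing = new ArrayList<>(requiredIndexes);
        missing.removeAll(existingIndexes);
        return missing;
    }

    /** Fail early with a clear message rather than deep inside the read path. */
    public void validateForExecution(List<String> existingIndexes)
    {
        List<String> missing = missingIndexes(existingIndexes);
        if (!missing.isEmpty())
            throw new IllegalStateException("Index(es) " + missing + " no longer exist; re-prepare the statement");
    }
}
```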



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8516) NEW_NODE topology event emitted instead of MOVED_NODE by moving node

2015-03-03 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14344765#comment-14344765
 ] 

Stefania commented on CASSANDRA-8516:
-

The dtest patch is available in this pull request: 
https://github.com/riptano/cassandra-dtest/pull/177.

> NEW_NODE topology event emitted instead of MOVED_NODE by moving node
> 
>
> Key: CASSANDRA-8516
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8516
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Tyler Hobbs
>Assignee: Stefania
>Priority: Minor
> Fix For: 2.0.13
>
> Attachments: cassandra_8516_a.txt, cassandra_8516_b.txt, 
> cassandra_8516_dtest.txt
>
>
> As discovered in CASSANDRA-8373, when you move a node in a single-node 
> cluster, a {{NEW_NODE}} event is generated instead of a {{MOVED_NODE}} event.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/3] cassandra git commit: Improve MD5Digest.hashCode()

2015-03-03 Thread snazy
Improve MD5Digest.hashCode()

Patch by Robert Stupp; Reviewed by Aleksey Yeschenko for CASSANDRA-8847


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/33279dd8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/33279dd8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/33279dd8

Branch: refs/heads/trunk
Commit: 33279dd8c567ce6bcc6fa1c60b1304708a880abc
Parents: 1e74dd0
Author: Robert Stupp 
Authored: Tue Mar 3 09:44:18 2015 +0100
Committer: Robert Stupp 
Committed: Tue Mar 3 09:44:18 2015 +0100

--
 src/java/org/apache/cassandra/utils/MD5Digest.java | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/33279dd8/src/java/org/apache/cassandra/utils/MD5Digest.java
--
diff --git a/src/java/org/apache/cassandra/utils/MD5Digest.java 
b/src/java/org/apache/cassandra/utils/MD5Digest.java
index 59c1aba..3f46458 100644
--- a/src/java/org/apache/cassandra/utils/MD5Digest.java
+++ b/src/java/org/apache/cassandra/utils/MD5Digest.java
@@ -30,10 +30,12 @@ import java.util.Arrays;
 public class MD5Digest
 {
 public final byte[] bytes;
+private final int hashCode;
 
 private MD5Digest(byte[] bytes)
 {
 this.bytes = bytes;
+hashCode = Arrays.hashCode(bytes);
 }
 
 public static MD5Digest wrap(byte[] digest)
@@ -54,7 +56,7 @@ public class MD5Digest
 @Override
 public final int hashCode()
 {
-return Arrays.hashCode(bytes);
+return hashCode;
 }
 
 @Override
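The change caches Arrays.hashCode(bytes) at construction time, which is safe because an MD5Digest is immutable, and turns every later hashCode() call (e.g. prepared-statement cache lookups keyed by digest) into a plain field read. The pattern in isolation, with a hypothetical class name:

```java
import java.util.Arrays;

public final class CachedHashKey
{
    public final byte[] bytes;
    private final int hashCode; // computed once; valid because bytes are never mutated

    public CachedHashKey(byte[] bytes)
    {
        this.bytes = bytes;
        this.hashCode = Arrays.hashCode(bytes);
    }

    @Override
    public int hashCode()
    {
        return hashCode; // O(1) instead of rescanning the array on every map lookup
    }

    @Override
    public boolean equals(Object o)
    {
        return o instanceof CachedHashKey && Arrays.equals(bytes, ((CachedHashKey) o).bytes);
    }
}
```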



[1/3] cassandra git commit: Improve MD5Digest.hashCode()

2015-03-03 Thread snazy
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 1e74dd0d1 -> 33279dd8c
  refs/heads/trunk 7310c054f -> 7cc51f7ae


Improve MD5Digest.hashCode()

Patch by Robert Stupp; Reviewed by Aleksey Yeschenko for CASSANDRA-8847


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/33279dd8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/33279dd8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/33279dd8

Branch: refs/heads/cassandra-2.1
Commit: 33279dd8c567ce6bcc6fa1c60b1304708a880abc
Parents: 1e74dd0
Author: Robert Stupp 
Authored: Tue Mar 3 09:44:18 2015 +0100
Committer: Robert Stupp 
Committed: Tue Mar 3 09:44:18 2015 +0100

--
 src/java/org/apache/cassandra/utils/MD5Digest.java | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/33279dd8/src/java/org/apache/cassandra/utils/MD5Digest.java
--
diff --git a/src/java/org/apache/cassandra/utils/MD5Digest.java 
b/src/java/org/apache/cassandra/utils/MD5Digest.java
index 59c1aba..3f46458 100644
--- a/src/java/org/apache/cassandra/utils/MD5Digest.java
+++ b/src/java/org/apache/cassandra/utils/MD5Digest.java
@@ -30,10 +30,12 @@ import java.util.Arrays;
 public class MD5Digest
 {
 public final byte[] bytes;
+private final int hashCode;
 
 private MD5Digest(byte[] bytes)
 {
 this.bytes = bytes;
+hashCode = Arrays.hashCode(bytes);
 }
 
 public static MD5Digest wrap(byte[] digest)
@@ -54,7 +56,7 @@ public class MD5Digest
 @Override
 public final int hashCode()
 {
-return Arrays.hashCode(bytes);
+return hashCode;
 }
 
 @Override



[3/3] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2015-03-03 Thread snazy
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7cc51f7a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7cc51f7a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7cc51f7a

Branch: refs/heads/trunk
Commit: 7cc51f7aef3cc436a036f67b58d1151237de4ddf
Parents: 7310c05 33279dd
Author: Robert Stupp 
Authored: Tue Mar 3 09:45:32 2015 +0100
Committer: Robert Stupp 
Committed: Tue Mar 3 09:45:32 2015 +0100

--

--




<    1   2   3   >