[jira] [Assigned] (CASSANDRA-8371) DateTieredCompactionStrategy is always compacting

2014-11-24 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson reassigned CASSANDRA-8371:
--

Assignee: Björn Hegerfors

Can you have a look [~Bj0rn]?

[~michaelsembwever] could you attach logs? If you have logs from after switching to 
DTCS, they would probably help

> DateTieredCompactionStrategy is always compacting 
> --
>
> Key: CASSANDRA-8371
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8371
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: mck
>Assignee: Björn Hegerfors
>  Labels: compaction, performance
> Attachments: java_gc_counts_rate-month.png, read-latency.png, 
> sstables.png, vg2_iad-month.png
>
>
> Running 2.0.11 and having switched a table to 
> [DTCS|https://issues.apache.org/jira/browse/CASSANDRA-6602] we've seen that 
> disk IO and gc count increase, along with the number of reads happening in 
> the "compaction" hump of cfhistograms.
> Data, and generally performance, looks good, but compactions are always 
> happening, and pending compactions are building up.
> The schema for this is 
> {code}CREATE TABLE search (
>   loginid text,
>   searchid timeuuid,
>   description text,
>   searchkey text,
>   searchurl text,
>   PRIMARY KEY ((loginid), searchid)
> );{code}
> We're sitting on about 82G (per replica) across 6 nodes in 4 DCs.
> CQL executed against this keyspace, and traffic patterns, can be seen in 
> slides 7+8 of https://prezi.com/b9-aj6p2esft
> Attached are sstables-per-read and read-latency graphs from cfhistograms, and 
> screenshots of our munin graphs as we have gone from STCS, to LCS (week ~44), 
> to DTCS (week ~46).
> These screenshots are also found in the prezi on slides 9-11.
> [~pmcfadin], [~Bj0rn], 
> Can this be a consequence of occasional deleted rows, as is described under 
> (3) in the description of CASSANDRA-6602 ?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8371) DateTieredCompactionStrategy is always compacting

2014-11-24 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14224158#comment-14224158
 ] 

mck commented on CASSANDRA-8371:


[~pmcfadin] All strategies were used with default settings.

[~krummas] No. Once we switched to DTCS, the first thing we did was a major 
compaction. (I only just read in one of the other DTCS tickets that compacting 
beforehand would have been advantageous.)

As far as the deletes go, there's ~one row deleted per minute. (The pattern 
leans towards some active users liking to erase their search history).
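
(For reference, the switch and major compaction described above amount to something 
like the following; the keyspace name is a placeholder, as it is not given in the 
ticket, and the compaction options shown are just the DTCS defaults.)

{code}
ALTER TABLE search
  WITH compaction = {'class': 'DateTieredCompactionStrategy'};
{code}

followed by {{nodetool compact <keyspace> search}}.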

> DateTieredCompactionStrategy is always compacting 
> --
>
> Key: CASSANDRA-8371
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8371
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: mck
>  Labels: compaction, performance
> Attachments: java_gc_counts_rate-month.png, read-latency.png, 
> sstables.png, vg2_iad-month.png
>
>
> Running 2.0.11 and having switched a table to 
> [DTCS|https://issues.apache.org/jira/browse/CASSANDRA-6602] we've seen that 
> disk IO and gc count increase, along with the number of reads happening in 
> the "compaction" hump of cfhistograms.
> Data, and generally performance, looks good, but compactions are always 
> happening, and pending compactions are building up.
> The schema for this is 
> {code}CREATE TABLE search (
>   loginid text,
>   searchid timeuuid,
>   description text,
>   searchkey text,
>   searchurl text,
>   PRIMARY KEY ((loginid), searchid)
> );{code}
> We're sitting on about 82G (per replica) across 6 nodes in 4 DCs.
> CQL executed against this keyspace, and traffic patterns, can be seen in 
> slides 7+8 of https://prezi.com/b9-aj6p2esft
> Attached are sstables-per-read and read-latency graphs from cfhistograms, and 
> screenshots of our munin graphs as we have gone from STCS, to LCS (week ~44), 
> to DTCS (week ~46).
> These screenshots are also found in the prezi on slides 9-11.
> [~pmcfadin], [~Bj0rn], 
> Can this be a consequence of occasional deleted rows, as is described under 
> (3) in the description of CASSANDRA-6602 ?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8061) tmplink files are not removed

2014-11-24 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14224152#comment-14224152
 ] 

Marcus Eriksson commented on CASSANDRA-8061:


I suspect it can look like we are leaking tmplink fds, since we open a 
new one every 50MB and keep a reference to the old one (via the 
replacedBy/replaces fields). I have seen many open files during compaction, but 
they always disappear once the compaction is done.

abort() has been fixed in CASSANDRA-8320
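
(A rough way to check this on a running node, not taken from the ticket: count the 
tmplink descriptors the process is currently holding.)

{noformat}
lsof -p $(pgrep -f CassandraDaemon) | grep tmplink | wc -l
{noformat}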


> tmplink files are not removed
> -
>
> Key: CASSANDRA-8061
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8061
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Linux
>Reporter: Gianluca Borello
>Assignee: Joshua McKenzie
>Priority: Critical
> Fix For: 2.1.3
>
> Attachments: 8061_v1.txt, 8248-thread_dump.txt
>
>
> After installing 2.1.0, I'm experiencing a bunch of tmplink files that are 
> filling my disk. I found https://issues.apache.org/jira/browse/CASSANDRA-7803 
> and that is very similar, and I confirm it happens both on 2.1.0 as well as 
> from the latest commit on the cassandra-2.1 branch 
> (https://github.com/apache/cassandra/commit/aca80da38c3d86a40cc63d9a122f7d45258e4685
>  from the cassandra-2.1)
> Even starting with a clean keyspace, after a few hours I get:
> {noformat}
> $ sudo find /raid0 | grep tmplink | xargs du -hs
> 2.7G  
> /raid0/cassandra/data/draios/protobuf1-ccc6dce04beb11e4abf997b38fbf920b/draios-protobuf1-tmplink-ka-4515-Data.db
> 13M   
> /raid0/cassandra/data/draios/protobuf1-ccc6dce04beb11e4abf997b38fbf920b/draios-protobuf1-tmplink-ka-4515-Index.db
> 1.8G  
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-1788-Data.db
> 12M   
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-1788-Index.db
> 5.2M  
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-2678-Index.db
> 822M  
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-2678-Data.db
> 7.3M  
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-3283-Index.db
> 1.2G  
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-3283-Data.db
> 6.7M  
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-3951-Index.db
> 1.1G  
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-3951-Data.db
> 11M   
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-4799-Index.db
> 1.7G  
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-4799-Data.db
> 812K  
> /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-234-Index.db
> 122M  
> /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-208-Data.db
> 744K  
> /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-739-Index.db
> 660K  
> /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-193-Index.db
> 796K  
> /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-230-Index.db
> 137M  
> /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-230-Data.db
> 161M  
> /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-269-Data.db
> 139M  
> /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-234-Data.db
> 940K  
> /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-786-Index.db
> 936K  
> /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-269-Index.db
> 161M  
> /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-786-Data.db
> 672K  
> /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-

[jira] [Commented] (CASSANDRA-8371) DateTieredCompactionStrategy is always compacting

2014-11-24 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14224143#comment-14224143
 ] 

Marcus Eriksson commented on CASSANDRA-8371:


Did you start fresh with DTCS? If not, then it makes sense: you will have 
totally mixed timestamps in your sstables, meaning you will most likely spend 
a lot of time compacting
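
(One way to see how mixed the timestamps are, purely as an illustration with 
placeholder paths: compare the min/max timestamps that sstablemetadata reports 
for each sstable.)

{noformat}
sstablemetadata /var/lib/cassandra/data/<keyspace>/search/*-Data.db | grep -i timestamp
{noformat}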

> DateTieredCompactionStrategy is always compacting 
> --
>
> Key: CASSANDRA-8371
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8371
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: mck
>  Labels: compaction, performance
> Attachments: java_gc_counts_rate-month.png, read-latency.png, 
> sstables.png, vg2_iad-month.png
>
>
> Running 2.0.11 and having switched a table to 
> [DTCS|https://issues.apache.org/jira/browse/CASSANDRA-6602] we've seen that 
> disk IO and gc count increase, along with the number of reads happening in 
> the "compaction" hump of cfhistograms.
> Data, and generally performance, looks good, but compactions are always 
> happening, and pending compactions are building up.
> The schema for this is 
> {code}CREATE TABLE search (
>   loginid text,
>   searchid timeuuid,
>   description text,
>   searchkey text,
>   searchurl text,
>   PRIMARY KEY ((loginid), searchid)
> );{code}
> We're sitting on about 82G (per replica) across 6 nodes in 4 DCs.
> CQL executed against this keyspace, and traffic patterns, can be seen in 
> slides 7+8 of https://prezi.com/b9-aj6p2esft
> Attached are sstables-per-read and read-latency graphs from cfhistograms, and 
> screenshots of our munin graphs as we have gone from STCS, to LCS (week ~44), 
> to DTCS (week ~46).
> These screenshots are also found in the prezi on slides 9-11.
> [~pmcfadin], [~Bj0rn], 
> Can this be a consequence of occasional deleted rows, as is described under 
> (3) in the description of CASSANDRA-6602 ?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8348) allow takeColumnFamilySnapshot to take a list of ColumnFamilies

2014-11-24 Thread Nick Bailey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14219742#comment-14219742
 ] 

Nick Bailey edited comment on CASSANDRA-8348 at 11/25/14 2:26 AM:
--

It may make sense to include a method that takes a list of ks.cf pairs to 
snapshot as well.


was (Author: nickmbailey):
It make make sense to include a method that takes a list of ks.cf pairs to 
snapshot as well.
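
(A purely hypothetical sketch of what such an addition could look like; the second 
method name and signature below are illustrative only, not the committed API.)

{code}
import java.io.IOException;

public interface StorageServiceMBean
{
    // existing shape: one keyspace, one column family per call
    void takeColumnFamilySnapshot(String keyspaceName, String columnFamilyName, String tag) throws IOException;

    // hypothetical addition: several column families, passed as "keyspace.columnfamily" entries
    void takeMultipleColumnFamilySnapshot(String tag, String... keyspaceColumnFamilyEntries) throws IOException;
}
{code}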

> allow takeColumnFamilySnapshot to take a list of ColumnFamilies
> ---
>
> Key: CASSANDRA-8348
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8348
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Peter Halliday
>Priority: Minor
> Fix For: 3.0, 2.1.3
>
>
> Within StorageServiceMBean.java the function takeSnapshot allows for a list 
> of keyspaces to snapshot.  However, the function takeColumnFamilySnapshot 
> only allows for a single ColumnFamily to snapshot.  This should allow for 
> multiple ColumnFamilies within the same Keyspace.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8348) allow takeColumnFamilySnapshot to take a list of ColumnFamilies

2014-11-24 Thread Nick Bailey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Bailey updated CASSANDRA-8348:
---
Fix Version/s: 2.1.3
   3.0

> allow takeColumnFamilySnapshot to take a list of ColumnFamilies
> ---
>
> Key: CASSANDRA-8348
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8348
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Peter Halliday
>Priority: Minor
> Fix For: 3.0, 2.1.3
>
>
> Within StorageServiceMBean.java the function takeSnapshot allows for a list 
> of keyspaces to snapshot.  However, the function takeColumnFamilySnapshot 
> only allows for a single ColumnFamily to snapshot.  This should allow for 
> multiple ColumnFamilies within the same Keyspace.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8010) cassandra-stress needs better docs for rate options

2014-11-24 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler resolved CASSANDRA-8010.
---
Resolution: Not a Problem

Thanks for the help  :)

> cassandra-stress needs better docs for rate options
> ---
>
> Key: CASSANDRA-8010
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8010
> Project: Cassandra
>  Issue Type: Bug
>  Components: Documentation & website, Examples, Tools
>Reporter: Matt Stump
>Priority: Minor
>  Labels: lhf
>
> It's not obvious how to use the rate option. I wasn't able to figure it out 
> via the source, or from the docs. I kept trying to do -rate= or -threads=. I 
> had to search confluence for usage examples.
> Need something like this in the docs:
> -rate threads=900
> -rate threads<=900
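
(For context, complete invocations using those options look roughly like this; the 
operation and row count are placeholders.)

{noformat}
cassandra-stress write n=1000000 -rate threads=900
# the <= form needs quoting when run from a shell
cassandra-stress write n=1000000 -rate "threads<=900"
{noformat}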



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: fix to run test

2014-11-24 Thread yukim
Repository: cassandra
Updated Branches:
  refs/heads/trunk 7c9043c67 -> 395720c37


fix to run test


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/395720c3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/395720c3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/395720c3

Branch: refs/heads/trunk
Commit: 395720c37ad772f6031cde79c0402fb2805df015
Parents: 7c9043c
Author: Yuki Morishita 
Authored: Mon Nov 24 18:57:50 2014 -0600
Committer: Yuki Morishita 
Committed: Mon Nov 24 18:57:50 2014 -0600

--
 build.xml | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/395720c3/build.xml
--
diff --git a/build.xml b/build.xml
index e5c5c83..06c79e0 100644
--- a/build.xml
+++ b/build.xml
@@ -1113,7 +1113,8 @@
 
 
 
-
+
+
 
 
 



[jira] [Resolved] (CASSANDRA-8228) Log malfunctioning host on prepareForRepair

2014-11-24 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita resolved CASSANDRA-8228.
---
   Resolution: Fixed
Fix Version/s: 2.1.3

Committed, with one change.
I think ConcurrentSkipListSet is too much here, so I changed it to 
Collections.synchronizedSet.

Thanks!

> Log malfunctioning host on prepareForRepair
> ---
>
> Key: CASSANDRA-8228
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8228
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Juho Mäkinen
>Assignee: Rajanarayanan Thottuvaikkatumana
>Priority: Trivial
>  Labels: lhf
> Fix For: 2.1.3
>
> Attachments: cassandra-trunk-8228.txt
>
>
> Repair startup goes through ActiveRepairService.prepareForRepair(), which might 
> result in a "Repair failed with error Did not get positive replies from all 
> endpoints." error, but there's no other logging regarding this error.
> It seems that it would be trivial to modify prepareForRepair() to log the 
> host address which caused the error, thus easing the debugging effort.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[3/3] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2014-11-24 Thread yukim
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7c9043c6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7c9043c6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7c9043c6

Branch: refs/heads/trunk
Commit: 7c9043c679100ba3510ad0bd18bdf8004084bb20
Parents: c023d49 2943684
Author: Yuki Morishita 
Authored: Mon Nov 24 18:49:13 2014 -0600
Committer: Yuki Morishita 
Committed: Mon Nov 24 18:49:13 2014 -0600

--
 CHANGES.txt| 1 +
 src/java/org/apache/cassandra/service/ActiveRepairService.java | 6 --
 2 files changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7c9043c6/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7c9043c6/src/java/org/apache/cassandra/service/ActiveRepairService.java
--
diff --cc src/java/org/apache/cassandra/service/ActiveRepairService.java
index 15d786e,17cf6ef..252bcd1
--- a/src/java/org/apache/cassandra/service/ActiveRepairService.java
+++ b/src/java/org/apache/cassandra/service/ActiveRepairService.java
@@@ -229,12 -241,13 +229,13 @@@ public class ActiveRepairServic
  return neighbors;
  }
  
 -public UUID prepareForRepair(Set endpoints, 
Collection> ranges, List columnFamilyStores)
 +public UUID prepareForRepair(Set endpoints, RepairOption 
options, List columnFamilyStores)
  {
  UUID parentRepairSession = UUIDGen.getTimeUUID();
 -registerParentRepairSession(parentRepairSession, columnFamilyStores, 
ranges);
 +registerParentRepairSession(parentRepairSession, columnFamilyStores, 
options.getRanges(), options.isIncremental());
  final CountDownLatch prepareLatch = new 
CountDownLatch(endpoints.size());
  final AtomicBoolean status = new AtomicBoolean(true);
+ final Set failedNodes = Collections.synchronizedSet(new 
HashSet());
  IAsyncCallbackWithFailure callback = new IAsyncCallbackWithFailure()
  {
  public void response(MessageIn msg)



[2/3] cassandra git commit: Log failed host when preparing incremental repair

2014-11-24 Thread yukim
Log failed host when preparing incremental repair

patch by Rajanarayanan Thottuvaikkatumana; reviewed by yukim for CASSANDRA-8228


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/29436845
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/29436845
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/29436845

Branch: refs/heads/trunk
Commit: 29436845f8dfe0ab6d26ca1cd11ad22e2861bb1a
Parents: 326a9ff
Author: Rajanarayanan Thottuvaikkatumana 
Authored: Mon Nov 24 18:48:44 2014 -0600
Committer: Yuki Morishita 
Committed: Mon Nov 24 18:48:44 2014 -0600

--
 CHANGES.txt| 1 +
 src/java/org/apache/cassandra/service/ActiveRepairService.java | 6 --
 2 files changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/29436845/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index fa3ce8a..f022b19 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -12,6 +12,7 @@
  * Have paxos reuse the timestamp generation of normal queries (CASSANDRA-7801)
  * Fix incremental repair not remove parent session on remote (CASSANDRA-8291)
  * Improve JBOD disk utilization (CASSANDRA-7386)
+ * Log failed host when preparing incremental repair (CASSANDRA-8228)
 Merged from 2.0:
  * Ignore Paxos commits for truncated tables (CASSANDRA-7538)
  * Validate size of indexed column values (CASSANDRA-8280)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/29436845/src/java/org/apache/cassandra/service/ActiveRepairService.java
--
diff --git a/src/java/org/apache/cassandra/service/ActiveRepairService.java 
b/src/java/org/apache/cassandra/service/ActiveRepairService.java
index d43143e..17cf6ef 100644
--- a/src/java/org/apache/cassandra/service/ActiveRepairService.java
+++ b/src/java/org/apache/cassandra/service/ActiveRepairService.java
@@ -247,6 +247,7 @@ public class ActiveRepairService
 registerParentRepairSession(parentRepairSession, columnFamilyStores, 
ranges);
 final CountDownLatch prepareLatch = new 
CountDownLatch(endpoints.size());
 final AtomicBoolean status = new AtomicBoolean(true);
+final Set failedNodes = Collections.synchronizedSet(new 
HashSet());
 IAsyncCallbackWithFailure callback = new IAsyncCallbackWithFailure()
 {
 public void response(MessageIn msg)
@@ -262,6 +263,7 @@ public class ActiveRepairService
 public void onFailure(InetAddress from)
 {
 status.set(false);
+failedNodes.add(from.getHostAddress());
 prepareLatch.countDown();
 }
 };
@@ -283,13 +285,13 @@ public class ActiveRepairService
 catch (InterruptedException e)
 {
 parentRepairSessions.remove(parentRepairSession);
-throw new RuntimeException("Did not get replies from all 
endpoints.", e);
+throw new RuntimeException("Did not get replies from all 
endpoints. List of failed endpoint(s): " + failedNodes.toString(), e);
 }
 
 if (!status.get())
 {
 parentRepairSessions.remove(parentRepairSession);
-throw new RuntimeException("Did not get positive replies from all 
endpoints.");
+throw new RuntimeException("Did not get positive replies from all 
endpoints. List of failed endpoint(s): " + failedNodes.toString());
 }
 
 return parentRepairSession;



[1/3] cassandra git commit: Log failed host when preparing incremental repair

2014-11-24 Thread yukim
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 326a9ff2f -> 29436845f
  refs/heads/trunk c023d4922 -> 7c9043c67


Log failed host when preparing incremental repair

patch by Rajanarayanan Thottuvaikkatumana; reviewed by yukim for CASSANDRA-8228


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/29436845
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/29436845
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/29436845

Branch: refs/heads/cassandra-2.1
Commit: 29436845f8dfe0ab6d26ca1cd11ad22e2861bb1a
Parents: 326a9ff
Author: Rajanarayanan Thottuvaikkatumana 
Authored: Mon Nov 24 18:48:44 2014 -0600
Committer: Yuki Morishita 
Committed: Mon Nov 24 18:48:44 2014 -0600

--
 CHANGES.txt| 1 +
 src/java/org/apache/cassandra/service/ActiveRepairService.java | 6 --
 2 files changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/29436845/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index fa3ce8a..f022b19 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -12,6 +12,7 @@
  * Have paxos reuse the timestamp generation of normal queries (CASSANDRA-7801)
  * Fix incremental repair not remove parent session on remote (CASSANDRA-8291)
  * Improve JBOD disk utilization (CASSANDRA-7386)
+ * Log failed host when preparing incremental repair (CASSANDRA-8228)
 Merged from 2.0:
  * Ignore Paxos commits for truncated tables (CASSANDRA-7538)
  * Validate size of indexed column values (CASSANDRA-8280)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/29436845/src/java/org/apache/cassandra/service/ActiveRepairService.java
--
diff --git a/src/java/org/apache/cassandra/service/ActiveRepairService.java 
b/src/java/org/apache/cassandra/service/ActiveRepairService.java
index d43143e..17cf6ef 100644
--- a/src/java/org/apache/cassandra/service/ActiveRepairService.java
+++ b/src/java/org/apache/cassandra/service/ActiveRepairService.java
@@ -247,6 +247,7 @@ public class ActiveRepairService
 registerParentRepairSession(parentRepairSession, columnFamilyStores, 
ranges);
 final CountDownLatch prepareLatch = new 
CountDownLatch(endpoints.size());
 final AtomicBoolean status = new AtomicBoolean(true);
+final Set failedNodes = Collections.synchronizedSet(new 
HashSet());
 IAsyncCallbackWithFailure callback = new IAsyncCallbackWithFailure()
 {
 public void response(MessageIn msg)
@@ -262,6 +263,7 @@ public class ActiveRepairService
 public void onFailure(InetAddress from)
 {
 status.set(false);
+failedNodes.add(from.getHostAddress());
 prepareLatch.countDown();
 }
 };
@@ -283,13 +285,13 @@ public class ActiveRepairService
 catch (InterruptedException e)
 {
 parentRepairSessions.remove(parentRepairSession);
-throw new RuntimeException("Did not get replies from all 
endpoints.", e);
+throw new RuntimeException("Did not get replies from all 
endpoints. List of failed endpoint(s): " + failedNodes.toString(), e);
 }
 
 if (!status.get())
 {
 parentRepairSessions.remove(parentRepairSession);
-throw new RuntimeException("Did not get positive replies from all 
endpoints.");
+throw new RuntimeException("Did not get positive replies from all 
endpoints. List of failed endpoint(s): " + failedNodes.toString());
 }
 
 return parentRepairSession;



[jira] [Commented] (CASSANDRA-8285) OOME in Cassandra 2.0.11

2014-11-24 Thread Kishan Karunaratne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14223845#comment-14223845
 ] 

Kishan Karunaratne commented on CASSANDRA-8285:
---

Both the duration and endurance tests on the Ruby side finished successfully 
without any C* errors. These were run with larger instances, thus I launched 
another 6-day trial against C* 2.0 head with regular instance sizes.

> OOME in Cassandra 2.0.11
> 
>
> Key: CASSANDRA-8285
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8285
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 2.0.11 + java-driver 2.0.8-SNAPSHOT
> Cassandra 2.0.11 + ruby-driver 1.0-beta
>Reporter: Pierre Laporte
>Assignee: Aleksey Yeschenko
> Attachments: OOME_node_system.log, gc-1416849312.log.gz, gc.log.gz, 
> heap-usage-after-gc-zoom.png, heap-usage-after-gc.png, system.log.gz
>
>
> We ran drivers 3-days endurance tests against Cassandra 2.0.11 and C* crashed 
> with an OOME.  This happened both with ruby-driver 1.0-beta and java-driver 
> 2.0.8-snapshot.
> Attached are :
> | OOME_node_system.log | The system.log of one Cassandra node that crashed |
> | gc.log.gz | The GC log on the same node |
> | heap-usage-after-gc.png | The heap occupancy evolution after every GC cycle 
> |
> | heap-usage-after-gc-zoom.png | A focus on when things start to go wrong |
> Workload :
> Our test executes 5 CQL statements (select, insert, select, delete, select) 
> for a given unique id, during 3 days, using multiple threads.  There is no 
> change in the workload during the test.
> Symptoms :
> In the attached log, it seems something starts in Cassandra between 
> 2014-11-06 10:29:22 and 2014-11-06 10:45:32.  This causes an allocation that 
> fills the heap.  We eventually get stuck in a Full GC storm and get an OOME 
> in the logs.
> I have run the java-driver tests against Cassandra 1.2.19 and 2.1.1.  The 
> error does not occur.  It seems specific to 2.0.11.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8371) DateTieredCompactionStrategy is always compacting

2014-11-24 Thread Patrick McFadin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14223840#comment-14223840
 ] 

Patrick McFadin commented on CASSANDRA-8371:


Mck, can you post your compaction settings for this keyspace?

For the benefit of anyone else reading this Jira, the line legends are:
Orange - STCS
Green - LCS
Purple - DTCS

> DateTieredCompactionStrategy is always compacting 
> --
>
> Key: CASSANDRA-8371
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8371
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: mck
>  Labels: compaction, performance
> Attachments: java_gc_counts_rate-month.png, read-latency.png, 
> sstables.png, vg2_iad-month.png
>
>
> Running 2.0.11 and having switched a table to 
> [DTCS|https://issues.apache.org/jira/browse/CASSANDRA-6602] we've seen that 
> disk IO and gc count increase, along with the number of reads happening in 
> the "compaction" hump of cfhistograms.
> Data, and generally performance, looks good, but compactions are always 
> happening, and pending compactions are building up.
> The schema for this is 
> {code}CREATE TABLE search (
>   loginid text,
>   searchid timeuuid,
>   description text,
>   searchkey text,
>   searchurl text,
>   PRIMARY KEY ((loginid), searchid)
> );{code}
> We're sitting on about 82G (per replica) across 6 nodes in 4 DCs.
> CQL executed against this keyspace, and traffic patterns, can be seen in 
> slides 7+8 of https://prezi.com/b9-aj6p2esft
> Attached are sstables-per-read and read-latency graphs from cfhistograms, and 
> screenshots of our munin graphs as we have gone from STCS, to LCS (week ~44), 
> to DTCS (week ~46).
> These screenshots are also found in the prezi on slides 9-11.
> [~pmcfadin], [~Bj0rn], 
> Can this be a consequence of occasional deleted rows, as is described under 
> (3) in the description of CASSANDRA-6602 ?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8281) CQLSSTableWriter close does not work

2014-11-24 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14223822#comment-14223822
 ] 

Yuki Morishita commented on CASSANDRA-8281:
---

We can set {{Config.setClientMode(true)}} by default; in fact, clients like 
o.a.c.hadoop.cql3.CqlBulkRecordWriter already do so.

Though I worry that there can be other issues, and if that happens, 
CQLSSTableWriter just leaves the writer thread running and the program won't stop.
Is there a way to stop the thread on an exception?
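
(A minimal sketch of the client-mode usage under discussion, assuming the 2.1 builder 
API; the schema, insert statement, and output directory are placeholders.)

{code}
import org.apache.cassandra.config.Config;
import org.apache.cassandra.io.sstable.CQLSSTableWriter;

public class WriterSketch
{
    public static void main(String[] args) throws Exception
    {
        Config.setClientMode(true); // what CqlBulkRecordWriter does, per the comment above

        String schema = "CREATE TABLE ks.t (k text PRIMARY KEY, v int)";
        String insert = "INSERT INTO ks.t (k, v) VALUES (?, ?)";

        CQLSSTableWriter writer = CQLSSTableWriter.builder()
                                                  .inDirectory("/tmp/ks/t")
                                                  .forTable(schema)
                                                  .using(insert)
                                                  .build();
        try
        {
            writer.addRow("key", 42);
        }
        finally
        {
            writer.close(); // the call reported here as not letting the program exit on 2.1.1
        }
    }
}
{code}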

> CQLSSTableWriter close does not work
> 
>
> Key: CASSANDRA-8281
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8281
> Project: Cassandra
>  Issue Type: Bug
>  Components: API
> Environment: Cassandra 2.1.1
>Reporter: Xu Zhongxing
>Assignee: Benjamin Lerer
> Attachments: CASSANDRA-8281.txt
>
>
> I called CQLSSTableWriter.close(), but the program still cannot exit. The 
> same code works fine on Cassandra 2.0.10.
> It seems that CQLSSTableWriter cannot be closed, and blocks the program from 
> exiting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7994) Commit logs on the fly compression

2014-11-24 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-7994:
--
Assignee: Oleg Anastasyev

> Commit logs on the fly compression 
> ---
>
> Key: CASSANDRA-7994
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7994
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Oleg Anastasyev
>Assignee: Oleg Anastasyev
> Attachments: CompressedCommitLogs-7994.txt
>
>
> This patch employs the lz4 algorithm to compress commit logs. This could be useful 
> either to conserve disk space when archiving commit logs for a long time, or to 
> conserve iops for use cases with frequent and large mutations updating the 
> same record.
> The compression is performed on blocks of 64k, for better cross-mutation 
> compression. CRC is computed on each 64k block, unlike the original code, which 
> computes it on each individual mutation.
> On one of our real production clusters this saved 2/3 of the space consumed by 
> commit logs. The replay is 20-30% slower for the same number of mutations.
> While doing this, I also refactored the commit log reading code into a CommitLogReader 
> class, which I believe makes the code cleaner.
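
(Not the attached patch, just a minimal sketch of the idea using the lz4-java and 
java.util.zip classes Cassandra already ships: compress a 64k block and checksum it 
once per block rather than once per mutation.)

{code}
import java.util.Arrays;
import java.util.zip.CRC32;

import net.jpountz.lz4.LZ4Compressor;
import net.jpountz.lz4.LZ4Factory;

public class CommitLogBlockSketch
{
    static final int BLOCK_SIZE = 64 * 1024;
    static final LZ4Compressor COMPRESSOR = LZ4Factory.fastestInstance().fastCompressor();

    /** Compress one block; crc receives a single checksum covering the whole block. */
    static byte[] compressBlock(byte[] block, int length, CRC32 crc)
    {
        assert length <= BLOCK_SIZE : "blocks are at most 64k";
        crc.reset();
        crc.update(block, 0, length); // one CRC per 64k block (the patch may checksum the compressed bytes instead)
        byte[] out = new byte[COMPRESSOR.maxCompressedLength(length)];
        int compressedLength = COMPRESSOR.compress(block, 0, length, out, 0);
        return Arrays.copyOf(out, compressedLength);
    }
}
{code}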



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8263) Cqlshlib tests are mostly stubs

2014-11-24 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8263:
---
Fix Version/s: 3.0

> Cqlshlib tests are mostly stubs
> ---
>
> Key: CASSANDRA-8263
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8263
> Project: Cassandra
>  Issue Type: Test
>Reporter: Philip Thompson
>Assignee: Philip Thompson
>  Labels: cqlsh
> Fix For: 3.0, 2.1.3
>
>
> Most of the tests in cqlshlib/tests are just stubs that look like: {code}
> def test_parse_create_index(self):
> pass
> def test_parse_drop_index(self):
> pass
> {code}
> These tests need to be implemented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8061) tmplink files are not removed

2014-11-24 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14223672#comment-14223672
 ] 

Joshua McKenzie commented on CASSANDRA-8061:


[~gianlucaborello] / [~sterligovak] / [~Antauri] - Do any of you see the 
following message in your system.log:
"Cannot initialize un-mmaper.  (Are you using a non-Oracle JVM?)  Compacted 
data files will not be removed promptly.  Consider using an Oracle JVM or using 
standard disk access mode"

Or the following:
"Error while unmapping segments"

Either should show up on 2.0.X or 2.1.X.  The "pressure" from a failure to 
unmap successfully will be greatly increased in 2.1 due to the increased 
frequency of hard-linking during the compaction process.  The tmplink files are 
created as part of an optimization for data hotness in the page cache (see 
CASSANDRA-6916).  The tmplink files are hard links to the files that are being 
written by the compaction process and should be removed during the 
replace-chaining process (replaces early opened by another per a configurable 
mb limit), and finally removed when the compaction completes.  As they're 
hard-links to the new sstable being written there should be minimal drive-space 
overhead associated with this process.

On CASSANDRA-8248, [~sterligovak]: you had a large collection of index files 
that still had references hanging around in the /proc filesystem but not data 
files.  As the default mode with compression memory maps index files but not 
data files, this implies there might have been a problem with unmapping the 
index files - the messages I mentioned above should show up if that's the 
problem you're facing.

So long as there are compactions in progress we will expect to see some count 
of tmplink files as they're actively created during the compaction process.  
Once that compaction completes, however, the tmplink files should no longer be 
on disk and you certainly shouldn't see tmplink files for sstables that are no 
longer present.

No luck replicating it on this end yet.
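
(An easy way to check for either message; the log path is a placeholder and varies 
by install.)

{noformat}
grep -E "Cannot initialize un-mmaper|Error while unmapping segments" /var/log/cassandra/system.log
{noformat}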

> tmplink files are not removed
> -
>
> Key: CASSANDRA-8061
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8061
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Linux
>Reporter: Gianluca Borello
>Assignee: Joshua McKenzie
>Priority: Critical
> Fix For: 2.1.3
>
> Attachments: 8061_v1.txt, 8248-thread_dump.txt
>
>
> After installing 2.1.0, I'm experiencing a bunch of tmplink files that are 
> filling my disk. I found https://issues.apache.org/jira/browse/CASSANDRA-7803 
> and that is very similar, and I confirm it happens both on 2.1.0 as well as 
> from the latest commit on the cassandra-2.1 branch 
> (https://github.com/apache/cassandra/commit/aca80da38c3d86a40cc63d9a122f7d45258e4685
>  from the cassandra-2.1)
> Even starting with a clean keyspace, after a few hours I get:
> {noformat}
> $ sudo find /raid0 | grep tmplink | xargs du -hs
> 2.7G  
> /raid0/cassandra/data/draios/protobuf1-ccc6dce04beb11e4abf997b38fbf920b/draios-protobuf1-tmplink-ka-4515-Data.db
> 13M   
> /raid0/cassandra/data/draios/protobuf1-ccc6dce04beb11e4abf997b38fbf920b/draios-protobuf1-tmplink-ka-4515-Index.db
> 1.8G  
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-1788-Data.db
> 12M   
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-1788-Index.db
> 5.2M  
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-2678-Index.db
> 822M  
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-2678-Data.db
> 7.3M  
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-3283-Index.db
> 1.2G  
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-3283-Data.db
> 6.7M  
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-3951-Index.db
> 1.1G  
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-3951-Data.db
> 11M   
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-4799-Index.db
> 1.7G  
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-4799-Data.db
> 812K  
> /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-234-Index.db
> 122M  

[jira] [Commented] (CASSANDRA-7874) Validate functionality of different JSR-223 providers in UDFs

2014-11-24 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14223655#comment-14223655
 ] 

Mikhail Stepura commented on CASSANDRA-7874:


for some reason I'm unable to apply the patch
{code}
4:21 $ git apply 7874v6.txt
7874v6.txt:29: trailing whitespace.

7874v6.txt:30: trailing whitespace.
REM JSR223 - collect all JSR223 engines' jars
7874v6.txt:31: trailing whitespace.
for /D %%P in ("%CASSANDRA_HOME%\lib\jsr223\*.*") do (
7874v6.txt:32: trailing whitespace.
for %%i in ("%%P\*.jar") do call :append "%%i"
7874v6.txt:33: trailing whitespace.
)
error: patch failed: bin/cassandra.bat:85
error: bin/cassandra.bat: patch does not apply
error: patch failed: bin/cassandra.in.bat:49
error: bin/cassandra.in.bat: patch does not apply
error: patch failed: conf/cassandra-env.ps1:197
error: conf/cassandra-env.ps1: patch does not apply
✘-1 ~/Documents/workspace/cassandra [trunk|…1⚑ 5]
{code}

> Validate functionality of different JSR-223 providers in UDFs
> -
>
> Key: CASSANDRA-7874
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7874
> Project: Cassandra
>  Issue Type: Task
>  Components: Core
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>  Labels: udf
> Fix For: 3.0
>
> Attachments: 7874.txt, 7874v2.txt, 7874v3.txt, 7874v4.txt, 
> 7874v5.txt, 7874v6.txt
>
>
> CASSANDRA-7526 introduces ability to support optional JSR-223 providers like 
> Clojure, Jython, Groovy or JRuby.
> This ticket is about to test functionality with these providers but not to 
> include them in C* distribution.
> Expected result is a "how to" document, wiki page or similar.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8342) Remove historical guidance for concurrent reader and writer tunings.

2014-11-24 Thread Ryan McGuire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14223618#comment-14223618
 ] 

Ryan McGuire commented on CASSANDRA-8342:
-

Do we only care about testing the tweak on reads?

I'll start it tomorrow during the day; that instance is $7/hr (!), so it requires 
some babysitting time.

> Remove historical guidance for concurrent reader and writer tunings.
> 
>
> Key: CASSANDRA-8342
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8342
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Matt Stump
>Assignee: Ryan McGuire
>
> The cassandra.yaml and documentation provide guidance on tuning concurrent 
> readers or concurrent writers to system resources (cores, spindles). Testing 
> performed by both myself and customers demonstrates no benefit for thread 
> pool sizes above 64, and a decrease in throughput for pools larger than 128. 
> This is due to thread scheduling and synchronization 
> bottlenecks within Cassandra. 
> Additionally, for lower end systems reducing these thread pools provides very 
> little benefit because the bottleneck is typically moved to either IO or CPU.
> I propose that we set the default value to 64 (current default is 32), and 
> remove all guidance/recommendations regarding tuning.
> This recommendation may change in 3.0, but that would require further 
> experimentation.
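
(The settings in question, as an illustrative cassandra.yaml excerpt with the 
proposed default.)

{noformat}
concurrent_reads: 64
concurrent_writes: 64
{noformat}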



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6091) Better Vnode support in hadoop/pig

2014-11-24 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14223556#comment-14223556
 ] 

mck commented on CASSANDRA-6091:


[~jbellis], [~alexliu68] any thoughts on that last patch? I'm pretty keen to 
wrap it up w/ CFIF+CFRR and submit a proper patch for it all.

> Better Vnode support in hadoop/pig
> --
>
> Key: CASSANDRA-6091
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6091
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hadoop
>Reporter: Alex Liu
>Assignee: Alex Liu
>
> CASSANDRA-6084 shows there are some issues when running hadoop/pig jobs if 
> vnodes are enabled. Also, the hadoop performance of vnode-enabled nodes is 
> bad because there are so many splits.
> The idea is to combine the vnode splits into big pseudo-splits, so that to the 
> hadoop/pig job it works as if vnodes were disabled



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[6/8] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-11-24 Thread yukim
http://git-wip-us.apache.org/repos/asf/cassandra/blob/326a9ff2/src/java/org/apache/cassandra/tools/NodeTool.java
--
diff --cc src/java/org/apache/cassandra/tools/NodeTool.java
index 8a59e8d,000..1db0245
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/tools/NodeTool.java
+++ b/src/java/org/apache/cassandra/tools/NodeTool.java
@@@ -1,2466 -1,0 +1,2476 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.cassandra.tools;
 +
 +import java.io.*;
 +import java.lang.management.MemoryUsage;
 +import java.net.InetAddress;
 +import java.net.UnknownHostException;
 +import java.text.DecimalFormat;
 +import java.text.SimpleDateFormat;
 +import java.util.*;
 +import java.util.Map.Entry;
 +import java.util.concurrent.ExecutionException;
 +
 +import javax.management.openmbean.TabularData;
 +
 +import com.google.common.base.Joiner;
 +import com.google.common.base.Throwables;
 +import com.google.common.collect.ArrayListMultimap;
 +import com.google.common.collect.LinkedHashMultimap;
 +import com.google.common.collect.Maps;
 +import com.yammer.metrics.reporting.JmxReporter;
 +
 +import io.airlift.command.*;
 +
 +import org.apache.cassandra.concurrent.JMXEnabledThreadPoolExecutorMBean;
 +import org.apache.cassandra.config.Schema;
 +import org.apache.cassandra.db.ColumnFamilyStoreMBean;
 +import org.apache.cassandra.db.Keyspace;
 +import org.apache.cassandra.db.compaction.CompactionManagerMBean;
 +import org.apache.cassandra.db.compaction.OperationType;
 +import org.apache.cassandra.io.util.FileUtils;
 +import org.apache.cassandra.locator.EndpointSnitchInfoMBean;
 +import org.apache.cassandra.locator.LocalStrategy;
 +import org.apache.cassandra.net.MessagingServiceMBean;
++import org.apache.cassandra.repair.RepairParallelism;
 +import org.apache.cassandra.service.CacheServiceMBean;
 +import org.apache.cassandra.streaming.ProgressInfo;
 +import org.apache.cassandra.streaming.SessionInfo;
 +import org.apache.cassandra.streaming.StreamState;
 +import org.apache.cassandra.utils.EstimatedHistogram;
 +import org.apache.cassandra.utils.FBUtilities;
 +import org.apache.cassandra.utils.JVMStabilityInspector;
 +
 +import static com.google.common.base.Preconditions.checkArgument;
 +import static com.google.common.base.Preconditions.checkState;
 +import static com.google.common.base.Throwables.getStackTraceAsString;
 +import static com.google.common.collect.Iterables.toArray;
 +import static com.google.common.collect.Lists.newArrayList;
 +import static java.lang.Integer.parseInt;
 +import static java.lang.String.format;
 +import static org.apache.commons.lang3.ArrayUtils.EMPTY_STRING_ARRAY;
 +import static org.apache.commons.lang3.StringUtils.*;
 +
 +public class NodeTool
 +{
 +private static final String HISTORYFILE = "nodetool.history";
 +
 +public static void main(String... args)
 +{
 +List> commands = newArrayList(
 +Help.class,
 +Info.class,
 +Ring.class,
 +NetStats.class,
 +CfStats.class,
 +CfHistograms.class,
 +Cleanup.class,
 +ClearSnapshot.class,
 +Compact.class,
 +Scrub.class,
 +Flush.class,
 +UpgradeSSTable.class,
 +DisableAutoCompaction.class,
 +EnableAutoCompaction.class,
 +CompactionStats.class,
 +CompactionHistory.class,
 +Decommission.class,
 +DescribeCluster.class,
 +DisableBinary.class,
 +EnableBinary.class,
 +EnableGossip.class,
 +DisableGossip.class,
 +EnableHandoff.class,
 +EnableThrift.class,
 +GcStats.class,
 +GetCompactionThreshold.class,
 +GetCompactionThroughput.class,
 +GetStreamThroughput.class,
 +GetEndpoints.class,
 +GetSSTables.class,
 +GossipInfo.class,
 +InvalidateKeyCache.class,
 +Inval

[7/8] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-11-24 Thread yukim
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
src/java/org/apache/cassandra/repair/RepairJob.java
src/java/org/apache/cassandra/repair/RepairSession.java
src/java/org/apache/cassandra/service/ActiveRepairService.java
src/java/org/apache/cassandra/service/StorageService.java
src/java/org/apache/cassandra/service/StorageServiceMBean.java
src/java/org/apache/cassandra/tools/NodeCmd.java
src/java/org/apache/cassandra/tools/NodeProbe.java
src/resources/org/apache/cassandra/tools/NodeToolHelp.yaml


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/326a9ff2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/326a9ff2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/326a9ff2

Branch: refs/heads/cassandra-2.1
Commit: 326a9ff2f831eeafedbc37b7a4b8f8f4a709e399
Parents: eac7781 41469ec
Author: Yuki Morishita 
Authored: Mon Nov 24 15:21:34 2014 -0600
Committer: Yuki Morishita 
Committed: Mon Nov 24 15:21:34 2014 -0600

--
 CHANGES.txt |   1 +
 .../DatacenterAwareRequestCoordinator.java  |  73 +++
 .../cassandra/repair/IRequestCoordinator.java   |  28 
 .../cassandra/repair/IRequestProcessor.java |  23 
 .../repair/ParallelRequestCoordinator.java  |  49 +++
 .../org/apache/cassandra/repair/RepairJob.java  |  32 -
 .../cassandra/repair/RepairParallelism.java |  22 
 .../apache/cassandra/repair/RepairSession.java  |  14 +-
 .../cassandra/repair/RequestCoordinator.java| 128 ---
 .../repair/SequentialRequestCoordinator.java|  58 +
 .../cassandra/service/ActiveRepairService.java  |   6 +-
 .../cassandra/service/StorageService.java   |  49 +--
 .../cassandra/service/StorageServiceMBean.java  |  20 ++-
 .../org/apache/cassandra/tools/NodeProbe.java   |  29 +++--
 .../org/apache/cassandra/tools/NodeTool.java|  14 +-
 .../repair/RequestCoordinatorTest.java  | 124 ++
 16 files changed, 499 insertions(+), 171 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/326a9ff2/CHANGES.txt
--
diff --cc CHANGES.txt
index c9e35d5,7519653..fa3ce8a
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -26,34 -12,7 +26,35 @@@ Merged from 2.0
   * Avoid overlap in L1 when L0 contains many nonoverlapping
 sstables (CASSANDRA-8211)
   * Improve PropertyFileSnitch logging (CASSANDRA-8183)
 - * Abort liveRatio calculation if the memtable is flushed (CASSANDRA-8164)
++ * Add DC-aware sequential repair (CASSANDRA-8193)
 +
 +
 +2.1.2
 + * (cqlsh) parse_for_table_meta errors out on queries with undefined
 +   grammars (CASSANDRA-8262)
 + * (cqlsh) Fix SELECT ... TOKEN() function broken in C* 2.1.1 (CASSANDRA-8258)
 + * Fix Cassandra crash when running on JDK8 update 40 (CASSANDRA-8209)
 + * Optimize partitioner tokens (CASSANDRA-8230)
 + * Improve compaction of repaired/unrepaired sstables (CASSANDRA-8004)
 + * Make cache serializers pluggable (CASSANDRA-8096)
 + * Fix issues with CONTAINS (KEY) queries on secondary indexes
 +   (CASSANDRA-8147)
 + * Fix read-rate tracking of sstables for some queries (CASSANDRA-8239)
 + * Fix default timestamp in QueryOptions (CASSANDRA-8246)
 + * Set socket timeout when reading remote version (CASSANDRA-8188)
 + * Refactor how we track live size (CASSANDRA-7852)
 + * Make sure unfinished compaction files are removed (CASSANDRA-8124)
 + * Fix shutdown when run as Windows service (CASSANDRA-8136)
 + * Fix DESCRIBE TABLE with custom indexes (CASSANDRA-8031)
 + * Fix race in RecoveryManagerTest (CASSANDRA-8176)
 + * Avoid IllegalArgumentException while sorting sstables in
 +   IndexSummaryManager (CASSANDRA-8182)
 + * Shutdown JVM on file descriptor exhaustion (CASSANDRA-7579)
 + * Add 'die' policy for commit log and disk failure (CASSANDRA-7927)
 + * Fix installing as service on Windows (CASSANDRA-8115)
 + * Fix CREATE TABLE for CQL2 (CASSANDRA-8144)
 + * Avoid boxing in ColumnStats min/max trackers (CASSANDRA-8109)
 +Merged from 2.0:
   * Correctly handle non-text column names in cql3 (CASSANDRA-8178)
   * Fix deletion for indexes on primary key columns (CASSANDRA-8206)
   * Add 'nodetool statusgossip' (CASSANDRA-8125)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/326a9ff2/src/java/org/apache/cassandra/repair/RepairJob.java
--
diff --cc src/java/org/apache/cassandra/repair/RepairJob.java
index 8057ed5,7c791aa..20d5d97
--- a/src/java/org/apache/cassandra/repair/RepairJob.java
+++ b/src/java/org/apache/cassandra/repair/RepairJob.java
@@@ -73,12 -72,14 +73,14 @@@ public class RepairJo
   ListeningExecu

[5/8] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-11-24 Thread yukim
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
src/java/org/apache/cassandra/repair/RepairJob.java
src/java/org/apache/cassandra/repair/RepairSession.java
src/java/org/apache/cassandra/service/ActiveRepairService.java
src/java/org/apache/cassandra/service/StorageService.java
src/java/org/apache/cassandra/service/StorageServiceMBean.java
src/java/org/apache/cassandra/tools/NodeCmd.java
src/java/org/apache/cassandra/tools/NodeProbe.java
src/resources/org/apache/cassandra/tools/NodeToolHelp.yaml


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/326a9ff2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/326a9ff2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/326a9ff2

Branch: refs/heads/trunk
Commit: 326a9ff2f831eeafedbc37b7a4b8f8f4a709e399
Parents: eac7781 41469ec
Author: Yuki Morishita 
Authored: Mon Nov 24 15:21:34 2014 -0600
Committer: Yuki Morishita 
Committed: Mon Nov 24 15:21:34 2014 -0600

--
 CHANGES.txt |   1 +
 .../DatacenterAwareRequestCoordinator.java  |  73 +++
 .../cassandra/repair/IRequestCoordinator.java   |  28 
 .../cassandra/repair/IRequestProcessor.java |  23 
 .../repair/ParallelRequestCoordinator.java  |  49 +++
 .../org/apache/cassandra/repair/RepairJob.java  |  32 -
 .../cassandra/repair/RepairParallelism.java |  22 
 .../apache/cassandra/repair/RepairSession.java  |  14 +-
 .../cassandra/repair/RequestCoordinator.java| 128 ---
 .../repair/SequentialRequestCoordinator.java|  58 +
 .../cassandra/service/ActiveRepairService.java  |   6 +-
 .../cassandra/service/StorageService.java   |  49 +--
 .../cassandra/service/StorageServiceMBean.java  |  20 ++-
 .../org/apache/cassandra/tools/NodeProbe.java   |  29 +++--
 .../org/apache/cassandra/tools/NodeTool.java|  14 +-
 .../repair/RequestCoordinatorTest.java  | 124 ++
 16 files changed, 499 insertions(+), 171 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/326a9ff2/CHANGES.txt
--
diff --cc CHANGES.txt
index c9e35d5,7519653..fa3ce8a
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -26,34 -12,7 +26,35 @@@ Merged from 2.0
   * Avoid overlap in L1 when L0 contains many nonoverlapping
 sstables (CASSANDRA-8211)
   * Improve PropertyFileSnitch logging (CASSANDRA-8183)
 - * Abort liveRatio calculation if the memtable is flushed (CASSANDRA-8164)
++ * Add DC-aware sequential repair (CASSANDRA-8193)
 +
 +
 +2.1.2
 + * (cqlsh) parse_for_table_meta errors out on queries with undefined
 +   grammars (CASSANDRA-8262)
 + * (cqlsh) Fix SELECT ... TOKEN() function broken in C* 2.1.1 (CASSANDRA-8258)
 + * Fix Cassandra crash when running on JDK8 update 40 (CASSANDRA-8209)
 + * Optimize partitioner tokens (CASSANDRA-8230)
 + * Improve compaction of repaired/unrepaired sstables (CASSANDRA-8004)
 + * Make cache serializers pluggable (CASSANDRA-8096)
 + * Fix issues with CONTAINS (KEY) queries on secondary indexes
 +   (CASSANDRA-8147)
 + * Fix read-rate tracking of sstables for some queries (CASSANDRA-8239)
 + * Fix default timestamp in QueryOptions (CASSANDRA-8246)
 + * Set socket timeout when reading remote version (CASSANDRA-8188)
 + * Refactor how we track live size (CASSANDRA-7852)
 + * Make sure unfinished compaction files are removed (CASSANDRA-8124)
 + * Fix shutdown when run as Windows service (CASSANDRA-8136)
 + * Fix DESCRIBE TABLE with custom indexes (CASSANDRA-8031)
 + * Fix race in RecoveryManagerTest (CASSANDRA-8176)
 + * Avoid IllegalArgumentException while sorting sstables in
 +   IndexSummaryManager (CASSANDRA-8182)
 + * Shutdown JVM on file descriptor exhaustion (CASSANDRA-7579)
 + * Add 'die' policy for commit log and disk failure (CASSANDRA-7927)
 + * Fix installing as service on Windows (CASSANDRA-8115)
 + * Fix CREATE TABLE for CQL2 (CASSANDRA-8144)
 + * Avoid boxing in ColumnStats min/max trackers (CASSANDRA-8109)
 +Merged from 2.0:
   * Correctly handle non-text column names in cql3 (CASSANDRA-8178)
   * Fix deletion for indexes on primary key columns (CASSANDRA-8206)
   * Add 'nodetool statusgossip' (CASSANDRA-8125)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/326a9ff2/src/java/org/apache/cassandra/repair/RepairJob.java
--
diff --cc src/java/org/apache/cassandra/repair/RepairJob.java
index 8057ed5,7c791aa..20d5d97
--- a/src/java/org/apache/cassandra/repair/RepairJob.java
+++ b/src/java/org/apache/cassandra/repair/RepairJob.java
@@@ -73,12 -72,14 +73,14 @@@ public class RepairJo
   ListeningExecutorServi

[2/8] cassandra git commit: Add DC-aware sequential repair

2014-11-24 Thread yukim
Add DC-aware sequential repair

patch by Jimmy Mårdell; reviewed by yukim for CASSANDRA-8193


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/41469ecf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/41469ecf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/41469ecf

Branch: refs/heads/cassandra-2.1
Commit: 41469ecf6a27e94441f96ef905ed3b5354c23987
Parents: 17de36f
Author: Jimmy Mårdell 
Authored: Mon Nov 24 15:07:33 2014 -0600
Committer: Yuki Morishita 
Committed: Mon Nov 24 15:09:41 2014 -0600

--
 CHANGES.txt |   1 +
 .../DatacenterAwareRequestCoordinator.java  |  73 +++
 .../cassandra/repair/IRequestCoordinator.java   |  28 
 .../cassandra/repair/IRequestProcessor.java |  23 
 .../repair/ParallelRequestCoordinator.java  |  49 +++
 .../org/apache/cassandra/repair/RepairJob.java  |  32 -
 .../cassandra/repair/RepairParallelism.java |  22 
 .../apache/cassandra/repair/RepairSession.java  |  14 +-
 .../cassandra/repair/RequestCoordinator.java| 128 ---
 .../repair/SequentialRequestCoordinator.java|  58 +
 .../cassandra/service/ActiveRepairService.java  |   6 +-
 .../cassandra/service/StorageService.java   |  64 ++
 .../cassandra/service/StorageServiceMBean.java  |  19 ++-
 .../org/apache/cassandra/tools/NodeCmd.java |  21 ++-
 .../org/apache/cassandra/tools/NodeProbe.java   |  30 +++--
 .../apache/cassandra/tools/NodeToolHelp.yaml|   1 +
 .../repair/RequestCoordinatorTest.java  | 124 ++
 17 files changed, 506 insertions(+), 187 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/41469ecf/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index fe23248..7519653 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -24,6 +24,7 @@
  * Allow concurrent writing of the same table in the same JVM using
CQLSSTableWriter (CASSANDRA-7463)
  * Fix totalDiskSpaceUsed calculation (CASSANDRA-8205)
+ * Add DC-aware sequential repair (CASSANDRA-8193)
 
 
 2.0.11:

http://git-wip-us.apache.org/repos/asf/cassandra/blob/41469ecf/src/java/org/apache/cassandra/repair/DatacenterAwareRequestCoordinator.java
--
diff --git a/src/java/org/apache/cassandra/repair/DatacenterAwareRequestCoordinator.java b/src/java/org/apache/cassandra/repair/DatacenterAwareRequestCoordinator.java
new file mode 100644
index 000..ab3e03e
--- /dev/null
+++ b/src/java/org/apache/cassandra/repair/DatacenterAwareRequestCoordinator.java
@@ -0,0 +1,73 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.repair;
+
+import org.apache.cassandra.config.DatabaseDescriptor;
+
+import java.net.InetAddress;
+import java.util.HashMap;
+import java.util.LinkedList;
+import java.util.Map;
+import java.util.Queue;
+
+public class DatacenterAwareRequestCoordinator implements IRequestCoordinator<InetAddress>
+{
+    private Map<String, Queue<InetAddress>> requestsByDatacenter = new HashMap<>();
+    private int remaining = 0;
+    private final IRequestProcessor<InetAddress> processor;
+
+    protected DatacenterAwareRequestCoordinator(IRequestProcessor<InetAddress> processor)
+    {
+        this.processor = processor;
+    }
+
+    public void add(InetAddress request)
+    {
+        String dc = DatabaseDescriptor.getEndpointSnitch().getDatacenter(request);
+        Queue<InetAddress> queue = requestsByDatacenter.get(dc);
+        if (queue == null)
+        {
+            queue = new LinkedList<>();
+            requestsByDatacenter.put(dc, queue);
+        }
+        queue.add(request);
+        remaining++;
+    }
+
+    public void start()
+    {
+        for (Queue<InetAddress> requests : requestsByDatacenter.values())
+        {
+            if (!requests.isEmpty())
+                processor.process(requests.peek());
+        }
+    }
+
+    // Returns how many request remains
+    public int completed(InetAddress reque
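The message is truncated here. For readability, a minimal sketch of how the rest of
completed() plausibly continues, based only on the fields shown above (pop the finished
endpoint from its datacenter's queue, start validation on the next endpoint in that
datacenter, and report how many requests remain); this is an illustration, not a quote
of the committed code:

    public int completed(InetAddress request)
    {
        // Sketch: look up the queue for the datacenter of the finished endpoint
        String dc = DatabaseDescriptor.getEndpointSnitch().getDatacenter(request);
        Queue<InetAddress> requests = requestsByDatacenter.get(dc);
        requests.poll();                        // drop the completed request
        remaining--;
        if (!requests.isEmpty())
            processor.process(requests.peek()); // keep one validation in flight per DC
        return remaining;
    }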

[jira] [Created] (CASSANDRA-8371) DateTieredCompactionStrategy is always compacting

2014-11-24 Thread mck (JIRA)
mck created CASSANDRA-8371:
--

 Summary: DateTieredCompactionStrategy is always compacting 
 Key: CASSANDRA-8371
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8371
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: mck
 Attachments: java_gc_counts_rate-month.png, read-latency.png, 
sstables.png, vg2_iad-month.png

Running 2.0.11 and having switched a table to 
[DTCS|https://issues.apache.org/jira/browse/CASSANDRA-6602] we've seen that 
disk IO and gc count increase, along with the number of reads happening in the 
"compaction" hump of cfhistograms.

Data, and generally performance, looks good, but compactions are always 
happening, and pending compactions are building up.

The schema for this is 
{code}CREATE TABLE search (
  loginid text,
  searchid timeuuid,
  description text,
  searchkey text,
  searchurl text,
  PRIMARY KEY ((loginid), searchid)
);{code}

We're sitting on about 82G (per replica) across 6 nodes in 4 DCs.
CQL executed against this keyspace, and traffic patterns, can be seen in slides 
7+8 of https://prezi.com/b9-aj6p2esft

Attached are sstables-per-read and read-latency graphs from cfhistograms, and 
screenshots of our munin graphs as we have gone from STCS, to LCS (week ~44), 
to DTCS (week ~46).

These screenshots are also found in the prezi on slides 9-11.

[~pmcfadin], [~Bj0rn], 

Can this be a consequence of occasional deleted rows, as is described under (3) 
in the description of CASSANDRA-6602 ?




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8338) Simplify Token Selection

2014-11-24 Thread Joaquin Casares (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14223539#comment-14223539
 ] 

Joaquin Casares commented on CASSANDRA-8338:


The idea was to avoid having to copy-paste formulas around from different Chef, 
Puppet, Docker, etc scripts and instead have token generation be something that 
Cassandra provides out of the box.

> Simplify Token Selection
> 
>
> Key: CASSANDRA-8338
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8338
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Config
>Reporter: Joaquin Casares
>Assignee: Jeremiah Jordan
>Priority: Trivial
>  Labels: lhf
>
> When creating provisioning scripts, especially when running tools like Chef, 
> each node is launched individually. When not using vnodes your initial setup 
> will always be unbalanced unless you handle token assignment within your 
> scripts. 
> I spoke to someone recently who was using this in production and his 
> operations team wasn't too pleased that they had to use OpsCenter as an extra 
> step for rebalancing. Instead, we should provide this functionality out of 
> the box for new clusters.
> Instead, could we have the following options below the initial_token section?
> {CODE}
> # datacenter_index: 0
> # node_index: 0
> # datacenter_size: 1
> {CODE}
> The above configuration options, when uncommented, would do the math of:
> {CODE}
> token = node_index * (range / datacenter_size) + (datacenter_index * 100) 
> + start_of_range
> {CODE}
> This means that users don't have to repeatedly implement the initial_token 
> selection code nor know the range and offsets of their partitioner.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[8/8] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2014-11-24 Thread yukim
Merge branch 'cassandra-2.1' into trunk

Conflicts:
src/java/org/apache/cassandra/repair/RepairJob.java
src/java/org/apache/cassandra/repair/RepairSession.java
src/java/org/apache/cassandra/service/ActiveRepairService.java
src/java/org/apache/cassandra/service/StorageService.java
src/java/org/apache/cassandra/service/StorageServiceMBean.java
src/java/org/apache/cassandra/tools/NodeProbe.java
src/java/org/apache/cassandra/tools/NodeTool.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c023d492
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c023d492
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c023d492

Branch: refs/heads/trunk
Commit: c023d4922863bd4e7d3c959035a8634cd370a829
Parents: 5841131 326a9ff
Author: Yuki Morishita 
Authored: Mon Nov 24 15:25:55 2014 -0600
Committer: Yuki Morishita 
Committed: Mon Nov 24 15:25:55 2014 -0600

--
 CHANGES.txt |   1 +
 .../org/apache/cassandra/repair/RepairJob.java  | 123 ---
 .../cassandra/repair/RepairParallelism.java |  57 +
 .../apache/cassandra/repair/RepairSession.java  |  10 +-
 .../cassandra/repair/messages/RepairOption.java |  27 ++--
 .../cassandra/service/ActiveRepairService.java  |   5 +-
 .../cassandra/service/StorageService.java   |  42 +--
 .../cassandra/service/StorageServiceMBean.java  |  10 +-
 .../org/apache/cassandra/tools/NodeProbe.java   |   7 +-
 .../org/apache/cassandra/tools/NodeTool.java|  11 +-
 .../cassandra/repair/RepairSessionTest.java |   2 +-
 .../repair/messages/RepairOptionTest.java   |   8 +-
 12 files changed, 249 insertions(+), 54 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c023d492/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c023d492/src/java/org/apache/cassandra/repair/RepairJob.java
--
diff --cc src/java/org/apache/cassandra/repair/RepairJob.java
index 0b40d4a,20d5d97..34b4217
--- a/src/java/org/apache/cassandra/repair/RepairJob.java
+++ b/src/java/org/apache/cassandra/repair/RepairJob.java
@@@ -18,18 -18,22 +18,17 @@@
  package org.apache.cassandra.repair;
  
  import java.net.InetAddress;
- import java.util.ArrayList;
- import java.util.Collection;
- import java.util.List;
+ import java.util.*;
 -import java.util.concurrent.atomic.AtomicInteger;
 -import java.util.concurrent.locks.Condition;
  
  import com.google.common.util.concurrent.*;
  import org.slf4j.Logger;
  import org.slf4j.LoggerFactory;
  
++import org.apache.cassandra.config.DatabaseDescriptor;
  import org.apache.cassandra.db.Keyspace;
 -import org.apache.cassandra.dht.Range;
 -import org.apache.cassandra.dht.Token;
 -import org.apache.cassandra.net.MessagingService;
 -import org.apache.cassandra.repair.messages.ValidationRequest;
 +import org.apache.cassandra.gms.FailureDetector;
  import org.apache.cassandra.utils.FBUtilities;
 -import org.apache.cassandra.utils.MerkleTree;
 -import org.apache.cassandra.utils.concurrent.SimpleCondition;
 +import org.apache.cassandra.utils.Pair;
  
  /**
   * RepairJob runs repair on given ColumnFamily.
@@@ -38,49 -42,87 +37,50 @@@ public class RepairJob extends Abstract
  {
  private static Logger logger = LoggerFactory.getLogger(RepairJob.class);
  
 -public final RepairJobDesc desc;
 +private final RepairSession session;
 +private final RepairJobDesc desc;
- private final boolean isSequential;
+ private final RepairParallelism parallelismDegree;
 -// first we send tree requests. this tracks the endpoints remaining to hear from
 -private final IRequestCoordinator<InetAddress> treeRequests;
 -// tree responses are then tracked here
 -private final List<TreeResponse> trees = new ArrayList<>();
 -// once all responses are received, each tree is compared with each other, and differencer tasks
 -// are submitted. the job is done when all differencers are complete.
 +private final long repairedAt;
  private final ListeningExecutorService taskExecutor;
  
  /**
   * Create repair job to run on specific columnfamily
 + *
 + * @param session RepairSession that this RepairJob belongs
 + * @param columnFamily name of the ColumnFamily to repair
-  * @param isSequential when true, validation runs sequentially among replica
++ * @param parallelismDegree how to run repair job in parallel
 + * @param repairedAt when the repair occurred (millis)
 + * @param taskExecutor Executor to run various repair tasks
   */
 -public RepairJob(IRepairJobEventListener listener,
 - UUID parentSess

[1/8] cassandra git commit: Add DC-aware sequential repair

2014-11-24 Thread yukim
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 17de36f24 -> 41469ecf6
  refs/heads/cassandra-2.1 eac7781e7 -> 326a9ff2f
  refs/heads/trunk 584113103 -> c023d4922


Add DC-aware sequential repair

patch by Jimmy Mårdell; reviewed by yukim for CASSANDRA-8193


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/41469ecf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/41469ecf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/41469ecf

Branch: refs/heads/cassandra-2.0
Commit: 41469ecf6a27e94441f96ef905ed3b5354c23987
Parents: 17de36f
Author: Jimmy Mårdell 
Authored: Mon Nov 24 15:07:33 2014 -0600
Committer: Yuki Morishita 
Committed: Mon Nov 24 15:09:41 2014 -0600

--
 CHANGES.txt |   1 +
 .../DatacenterAwareRequestCoordinator.java  |  73 +++
 .../cassandra/repair/IRequestCoordinator.java   |  28 
 .../cassandra/repair/IRequestProcessor.java |  23 
 .../repair/ParallelRequestCoordinator.java  |  49 +++
 .../org/apache/cassandra/repair/RepairJob.java  |  32 -
 .../cassandra/repair/RepairParallelism.java |  22 
 .../apache/cassandra/repair/RepairSession.java  |  14 +-
 .../cassandra/repair/RequestCoordinator.java| 128 ---
 .../repair/SequentialRequestCoordinator.java|  58 +
 .../cassandra/service/ActiveRepairService.java  |   6 +-
 .../cassandra/service/StorageService.java   |  64 ++
 .../cassandra/service/StorageServiceMBean.java  |  19 ++-
 .../org/apache/cassandra/tools/NodeCmd.java |  21 ++-
 .../org/apache/cassandra/tools/NodeProbe.java   |  30 +++--
 .../apache/cassandra/tools/NodeToolHelp.yaml|   1 +
 .../repair/RequestCoordinatorTest.java  | 124 ++
 17 files changed, 506 insertions(+), 187 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/41469ecf/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index fe23248..7519653 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -24,6 +24,7 @@
  * Allow concurrent writing of the same table in the same JVM using
CQLSSTableWriter (CASSANDRA-7463)
  * Fix totalDiskSpaceUsed calculation (CASSANDRA-8205)
+ * Add DC-aware sequential repair (CASSANDRA-8193)
 
 
 2.0.11:

http://git-wip-us.apache.org/repos/asf/cassandra/blob/41469ecf/src/java/org/apache/cassandra/repair/DatacenterAwareRequestCoordinator.java
--
diff --git a/src/java/org/apache/cassandra/repair/DatacenterAwareRequestCoordinator.java b/src/java/org/apache/cassandra/repair/DatacenterAwareRequestCoordinator.java
new file mode 100644
index 000..ab3e03e
--- /dev/null
+++ b/src/java/org/apache/cassandra/repair/DatacenterAwareRequestCoordinator.java
@@ -0,0 +1,73 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.repair;
+
+import org.apache.cassandra.config.DatabaseDescriptor;
+
+import java.net.InetAddress;
+import java.util.HashMap;
+import java.util.LinkedList;
+import java.util.Map;
+import java.util.Queue;
+
+public class DatacenterAwareRequestCoordinator implements IRequestCoordinator<InetAddress>
+{
+    private Map<String, Queue<InetAddress>> requestsByDatacenter = new HashMap<>();
+    private int remaining = 0;
+    private final IRequestProcessor<InetAddress> processor;
+
+    protected DatacenterAwareRequestCoordinator(IRequestProcessor<InetAddress> processor)
+    {
+        this.processor = processor;
+    }
+
+    public void add(InetAddress request)
+    {
+        String dc = DatabaseDescriptor.getEndpointSnitch().getDatacenter(request);
+        Queue<InetAddress> queue = requestsByDatacenter.get(dc);
+        if (queue == null)
+        {
+            queue = new LinkedList<>();
+            requestsByDatacenter.put(dc, queue);
+        }
+        queue.add(request);
+        remaining++;
+    }
+
+    public void start()
+    {
+        for (Queue<InetAddress> requests : requestsByDatacenter.values())
+        {
+

[3/8] cassandra git commit: Add DC-aware sequential repair

2014-11-24 Thread yukim
Add DC-aware sequential repair

patch by Jimmy Mårdell; reviewed by yukim for CASSANDRA-8193


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/41469ecf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/41469ecf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/41469ecf

Branch: refs/heads/trunk
Commit: 41469ecf6a27e94441f96ef905ed3b5354c23987
Parents: 17de36f
Author: Jimmy Mårdell 
Authored: Mon Nov 24 15:07:33 2014 -0600
Committer: Yuki Morishita 
Committed: Mon Nov 24 15:09:41 2014 -0600

--
 CHANGES.txt |   1 +
 .../DatacenterAwareRequestCoordinator.java  |  73 +++
 .../cassandra/repair/IRequestCoordinator.java   |  28 
 .../cassandra/repair/IRequestProcessor.java |  23 
 .../repair/ParallelRequestCoordinator.java  |  49 +++
 .../org/apache/cassandra/repair/RepairJob.java  |  32 -
 .../cassandra/repair/RepairParallelism.java |  22 
 .../apache/cassandra/repair/RepairSession.java  |  14 +-
 .../cassandra/repair/RequestCoordinator.java| 128 ---
 .../repair/SequentialRequestCoordinator.java|  58 +
 .../cassandra/service/ActiveRepairService.java  |   6 +-
 .../cassandra/service/StorageService.java   |  64 ++
 .../cassandra/service/StorageServiceMBean.java  |  19 ++-
 .../org/apache/cassandra/tools/NodeCmd.java |  21 ++-
 .../org/apache/cassandra/tools/NodeProbe.java   |  30 +++--
 .../apache/cassandra/tools/NodeToolHelp.yaml|   1 +
 .../repair/RequestCoordinatorTest.java  | 124 ++
 17 files changed, 506 insertions(+), 187 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/41469ecf/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index fe23248..7519653 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -24,6 +24,7 @@
  * Allow concurrent writing of the same table in the same JVM using
CQLSSTableWriter (CASSANDRA-7463)
  * Fix totalDiskSpaceUsed calculation (CASSANDRA-8205)
+ * Add DC-aware sequential repair (CASSANDRA-8193)
 
 
 2.0.11:

http://git-wip-us.apache.org/repos/asf/cassandra/blob/41469ecf/src/java/org/apache/cassandra/repair/DatacenterAwareRequestCoordinator.java
--
diff --git a/src/java/org/apache/cassandra/repair/DatacenterAwareRequestCoordinator.java b/src/java/org/apache/cassandra/repair/DatacenterAwareRequestCoordinator.java
new file mode 100644
index 000..ab3e03e
--- /dev/null
+++ b/src/java/org/apache/cassandra/repair/DatacenterAwareRequestCoordinator.java
@@ -0,0 +1,73 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.repair;
+
+import org.apache.cassandra.config.DatabaseDescriptor;
+
+import java.net.InetAddress;
+import java.util.HashMap;
+import java.util.LinkedList;
+import java.util.Map;
+import java.util.Queue;
+
+public class DatacenterAwareRequestCoordinator implements IRequestCoordinator<InetAddress>
+{
+    private Map<String, Queue<InetAddress>> requestsByDatacenter = new HashMap<>();
+    private int remaining = 0;
+    private final IRequestProcessor<InetAddress> processor;
+
+    protected DatacenterAwareRequestCoordinator(IRequestProcessor<InetAddress> processor)
+    {
+        this.processor = processor;
+    }
+
+    public void add(InetAddress request)
+    {
+        String dc = DatabaseDescriptor.getEndpointSnitch().getDatacenter(request);
+        Queue<InetAddress> queue = requestsByDatacenter.get(dc);
+        if (queue == null)
+        {
+            queue = new LinkedList<>();
+            requestsByDatacenter.put(dc, queue);
+        }
+        queue.add(request);
+        remaining++;
+    }
+
+    public void start()
+    {
+        for (Queue<InetAddress> requests : requestsByDatacenter.values())
+        {
+            if (!requests.isEmpty())
+                processor.process(requests.peek());
+        }
+    }
+
+    // Returns how many request remains
+    public int completed(InetAddress request)
+

[4/8] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-11-24 Thread yukim
http://git-wip-us.apache.org/repos/asf/cassandra/blob/326a9ff2/src/java/org/apache/cassandra/tools/NodeTool.java
--
diff --cc src/java/org/apache/cassandra/tools/NodeTool.java
index 8a59e8d,000..1db0245
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/tools/NodeTool.java
+++ b/src/java/org/apache/cassandra/tools/NodeTool.java
@@@ -1,2466 -1,0 +1,2476 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.cassandra.tools;
 +
 +import java.io.*;
 +import java.lang.management.MemoryUsage;
 +import java.net.InetAddress;
 +import java.net.UnknownHostException;
 +import java.text.DecimalFormat;
 +import java.text.SimpleDateFormat;
 +import java.util.*;
 +import java.util.Map.Entry;
 +import java.util.concurrent.ExecutionException;
 +
 +import javax.management.openmbean.TabularData;
 +
 +import com.google.common.base.Joiner;
 +import com.google.common.base.Throwables;
 +import com.google.common.collect.ArrayListMultimap;
 +import com.google.common.collect.LinkedHashMultimap;
 +import com.google.common.collect.Maps;
 +import com.yammer.metrics.reporting.JmxReporter;
 +
 +import io.airlift.command.*;
 +
 +import org.apache.cassandra.concurrent.JMXEnabledThreadPoolExecutorMBean;
 +import org.apache.cassandra.config.Schema;
 +import org.apache.cassandra.db.ColumnFamilyStoreMBean;
 +import org.apache.cassandra.db.Keyspace;
 +import org.apache.cassandra.db.compaction.CompactionManagerMBean;
 +import org.apache.cassandra.db.compaction.OperationType;
 +import org.apache.cassandra.io.util.FileUtils;
 +import org.apache.cassandra.locator.EndpointSnitchInfoMBean;
 +import org.apache.cassandra.locator.LocalStrategy;
 +import org.apache.cassandra.net.MessagingServiceMBean;
++import org.apache.cassandra.repair.RepairParallelism;
 +import org.apache.cassandra.service.CacheServiceMBean;
 +import org.apache.cassandra.streaming.ProgressInfo;
 +import org.apache.cassandra.streaming.SessionInfo;
 +import org.apache.cassandra.streaming.StreamState;
 +import org.apache.cassandra.utils.EstimatedHistogram;
 +import org.apache.cassandra.utils.FBUtilities;
 +import org.apache.cassandra.utils.JVMStabilityInspector;
 +
 +import static com.google.common.base.Preconditions.checkArgument;
 +import static com.google.common.base.Preconditions.checkState;
 +import static com.google.common.base.Throwables.getStackTraceAsString;
 +import static com.google.common.collect.Iterables.toArray;
 +import static com.google.common.collect.Lists.newArrayList;
 +import static java.lang.Integer.parseInt;
 +import static java.lang.String.format;
 +import static org.apache.commons.lang3.ArrayUtils.EMPTY_STRING_ARRAY;
 +import static org.apache.commons.lang3.StringUtils.*;
 +
 +public class NodeTool
 +{
 +private static final String HISTORYFILE = "nodetool.history";
 +
 +public static void main(String... args)
 +{
 +List<Class<? extends Runnable>> commands = newArrayList(
 +Help.class,
 +Info.class,
 +Ring.class,
 +NetStats.class,
 +CfStats.class,
 +CfHistograms.class,
 +Cleanup.class,
 +ClearSnapshot.class,
 +Compact.class,
 +Scrub.class,
 +Flush.class,
 +UpgradeSSTable.class,
 +DisableAutoCompaction.class,
 +EnableAutoCompaction.class,
 +CompactionStats.class,
 +CompactionHistory.class,
 +Decommission.class,
 +DescribeCluster.class,
 +DisableBinary.class,
 +EnableBinary.class,
 +EnableGossip.class,
 +DisableGossip.class,
 +EnableHandoff.class,
 +EnableThrift.class,
 +GcStats.class,
 +GetCompactionThreshold.class,
 +GetCompactionThroughput.class,
 +GetStreamThroughput.class,
 +GetEndpoints.class,
 +GetSSTables.class,
 +GossipInfo.class,
 +InvalidateKeyCache.class,
 +Inval

[jira] [Updated] (CASSANDRA-8253) cassandra-stress 2.1 doesn't support LOCAL_ONE

2014-11-24 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-8253:
--
Priority: Minor  (was: Major)

> cassandra-stress 2.1 doesn't support LOCAL_ONE
> --
>
> Key: CASSANDRA-8253
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8253
> Project: Cassandra
>  Issue Type: Bug
>Reporter: J.B. Langston
>Assignee: Liang Xie
>Priority: Minor
> Fix For: 2.1.3
>
> Attachments: CASSANDRA-8253.txt
>
>
> Looks like a simple oversight in argument parsing:
> ➜  bin  ./cassandra-stress write cl=LOCAL_ONE
> Invalid value LOCAL_ONE; must match pattern 
> ONE|QUORUM|LOCAL_QUORUM|EACH_QUORUM|ALL|ANY
> Also, CASSANDRA-7077 argues that it should be using LOCAL_ONE by default.
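For illustration only, a sketch of how the accepted values could be derived from the
ConsistencyLevel enum instead of a hard-coded pattern, so that new levels such as
LOCAL_ONE are picked up automatically (the helper class below is an assumption, not the
actual cassandra-stress option-parsing code):

{code}
import org.apache.cassandra.db.ConsistencyLevel;

final class ConsistencyArg // hypothetical helper for the sketch
{
    static ConsistencyLevel parse(String value)
    {
        try
        {
            return ConsistencyLevel.valueOf(value.toUpperCase());
        }
        catch (IllegalArgumentException e)
        {
            // Build the error message from the enum itself rather than a hard-coded regex
            StringBuilder allowed = new StringBuilder();
            for (ConsistencyLevel cl : ConsistencyLevel.values())
                allowed.append(allowed.length() == 0 ? "" : "|").append(cl.name());
            throw new IllegalArgumentException("Invalid value " + value + "; must match pattern " + allowed);
        }
    }
}
{code}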



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8253) cassandra-stress 2.1 doesn't support LOCAL_ONE

2014-11-24 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-8253:
--
Fix Version/s: 2.1.3

> cassandra-stress 2.1 doesn't support LOCAL_ONE
> --
>
> Key: CASSANDRA-8253
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8253
> Project: Cassandra
>  Issue Type: Bug
>Reporter: J.B. Langston
>Assignee: Liang Xie
> Fix For: 2.1.3
>
> Attachments: CASSANDRA-8253.txt
>
>
> Looks like a simple oversight in argument parsing:
> ➜  bin  ./cassandra-stress write cl=LOCAL_ONE
> Invalid value LOCAL_ONE; must match pattern 
> ONE|QUORUM|LOCAL_QUORUM|EACH_QUORUM|ALL|ANY
> Also, CASSANDRA-7077 argues that it should be using LOCAL_ONE by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8303) Provide "strict mode" for CQL Queries

2014-11-24 Thread Anupam Arora (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14223513#comment-14223513
 ] 

Anupam Arora commented on CASSANDRA-8303:
-

I am out of office 11/24-28, and will not be able to check my e-mails. I will 
reply as soon as I can.


> Provide "strict mode" for CQL Queries
> -
>
> Key: CASSANDRA-8303
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8303
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Anupam Arora
> Fix For: 3.0
>
>
> Please provide a "strict mode" option in cassandra that will kick out any CQL 
> queries that are expensive, e.g. any query with ALLOW FILTERING, 
> multi-partition queries, secondary index queries, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8303) Provide "strict mode" for CQL Queries

2014-11-24 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8303:
---
Fix Version/s: 3.0

> Provide "strict mode" for CQL Queries
> -
>
> Key: CASSANDRA-8303
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8303
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Anupam Arora
> Fix For: 3.0
>
>
> Please provide a "strict mode" option in cassandra that will kick out any CQL 
> queries that are expensive, e.g. any query with ALLOW FILTERING, 
> multi-partition queries, secondary index queries, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-8342) Remove historical guidance for concurrent reader and writer tunings.

2014-11-24 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire reassigned CASSANDRA-8342:
---

Assignee: Ryan McGuire

> Remove historical guidance for concurrent reader and writer tunings.
> 
>
> Key: CASSANDRA-8342
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8342
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Matt Stump
>Assignee: Ryan McGuire
>
> The cassandra.yaml and documentation provide guidance on tuning concurrent 
> readers or concurrent writers to system resources (cores, spindles). Testing 
> performed by both myself and customers demonstrates no benefit for thread 
> pool sizes above 64, and a decrease in throughput for thread pools greater 
> than 128. This is due to thread scheduling and synchronization 
> bottlenecks within Cassandra. 
> Additionally, for lower end systems reducing these thread pools provides very 
> little benefit because the bottleneck is typically moved to either IO or CPU.
> I propose that we set the default value to 64 (current default is 32), and 
> remove all guidance/recommendations regarding tuning.
> This recommendation may change in 3.0, but that would require further 
> experimentation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-8366) Repair grows data on nodes, causes load to become unbalanced

2014-11-24 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire reassigned CASSANDRA-8366:
---

Assignee: Alan Boudreault

> Repair grows data on nodes, causes load to become unbalanced
> 
>
> Key: CASSANDRA-8366
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8366
> Project: Cassandra
>  Issue Type: Bug
> Environment: 4 node cluster
> 2.1.2 Cassandra
> Inserts and reads are done with CQL driver
>Reporter: Jan Karlsson
>Assignee: Alan Boudreault
>
> There seems to be something weird going on when repairing data.
> I have a program that runs for 2 hours which inserts 250 random numbers and reads 
> 250 times per second. It creates 2 keyspaces with SimpleStrategy and RF of 3. 
> I use size-tiered compaction for my cluster. 
> After those 2 hours I run a repair and the load of all nodes goes up. If I 
> run incremental repair the load goes up a lot more. I saw the load shoot up 8 
> times the original size multiple times with incremental repair. (from 2G to 
> 16G)
> With nodes 9, 8, 7 and 6 the repro procedure looked like this:
> (Note that running full repair first is not a requirement to reproduce.)
> After 2 hours of 250 reads + 250 writes per second:
> UN  9  583.39 MB  256 ?   28220962-26ae-4eeb-8027-99f96e377406  rack1
> UN  8  584.01 MB  256 ?   f2de6ea1-de88-4056-8fde-42f9c476a090  rack1
> UN  7  583.72 MB  256 ?   2b6b5d66-13c8-43d8-855c-290c0f3c3a0b  rack1
> UN  6  583.84 MB  256 ?   b8bd67f1-a816-46ff-b4a4-136ad5af6d4b  rack1
> Repair -pr -par on all nodes sequentially
> UN  9  746.29 MB  256 ?   28220962-26ae-4eeb-8027-99f96e377406  rack1
> UN  8  751.02 MB  256 ?   f2de6ea1-de88-4056-8fde-42f9c476a090  rack1
> UN  7  748.89 MB  256 ?   2b6b5d66-13c8-43d8-855c-290c0f3c3a0b  rack1
> UN  6  758.34 MB  256 ?   b8bd67f1-a816-46ff-b4a4-136ad5af6d4b  rack1
> repair -inc -par on all nodes sequentially
> UN  9  2.41 GB256 ?   28220962-26ae-4eeb-8027-99f96e377406  rack1
> UN  8  2.53 GB256 ?   f2de6ea1-de88-4056-8fde-42f9c476a090  rack1
> UN  7  2.6 GB 256 ?   2b6b5d66-13c8-43d8-855c-290c0f3c3a0b  rack1
> UN  6  2.17 GB256 ?   b8bd67f1-a816-46ff-b4a4-136ad5af6d4b  rack1
> after rolling restart
> UN  9  1.47 GB256 ?   28220962-26ae-4eeb-8027-99f96e377406  rack1
> UN  8  1.5 GB 256 ?   f2de6ea1-de88-4056-8fde-42f9c476a090  rack1
> UN  7  2.46 GB256 ?   2b6b5d66-13c8-43d8-855c-290c0f3c3a0b  rack1
> UN  6  1.19 GB256 ?   b8bd67f1-a816-46ff-b4a4-136ad5af6d4b  rack1
> compact all nodes sequentially
> UN  9  989.99 MB  256 ?   28220962-26ae-4eeb-8027-99f96e377406  rack1
> UN  8  994.75 MB  256 ?   f2de6ea1-de88-4056-8fde-42f9c476a090  rack1
> UN  7  1.46 GB256 ?   2b6b5d66-13c8-43d8-855c-290c0f3c3a0b  rack1
> UN  6  758.82 MB  256 ?   b8bd67f1-a816-46ff-b4a4-136ad5af6d4b  rack1
> repair -inc -par on all nodes sequentially
> UN  9  1.98 GB256 ?   28220962-26ae-4eeb-8027-99f96e377406  rack1
> UN  8  2.3 GB 256 ?   f2de6ea1-de88-4056-8fde-42f9c476a090  rack1
> UN  7  3.71 GB256 ?   2b6b5d66-13c8-43d8-855c-290c0f3c3a0b  rack1
> UN  6  1.68 GB256 ?   b8bd67f1-a816-46ff-b4a4-136ad5af6d4b  rack1
> restart once more
> UN  9  2 GB   256 ?   28220962-26ae-4eeb-8027-99f96e377406  rack1
> UN  8  2.05 GB256 ?   f2de6ea1-de88-4056-8fde-42f9c476a090  rack1
> UN  7  4.1 GB 256 ?   2b6b5d66-13c8-43d8-855c-290c0f3c3a0b  rack1
> UN  6  1.68 GB256 ?   b8bd67f1-a816-46ff-b4a4-136ad5af6d4b  rack1
> Is there something I'm missing, or is this strange behavior?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7438) Serializing Row cache alternative (Fully off heap)

2014-11-24 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14223479#comment-14223479
 ] 

Ariel Weisberg commented on CASSANDRA-7438:
---

I think that for caches the behavior you want to avoid most is slowly growing 
heap. People hate that because it's unpredictable and they don't know when it's 
going to stop. You can always start with jemalloc and get the feature working 
and then iterate on memory management.

Fixed block sizes is a baby and bath water scenario to get the desirable fixed 
memory utilization property. When you want to build everything out of fixed 
size pages you have to slot the pages or do some other internal page management 
strategy so you can pack multiple things and rewrite pages as they fragment. 
You also need size tiered free lists and fragmentation metadata for pages so 
you can find partial free pages. That kind of thing only makes sense in ye 
olden database land where rewriting an already dirty page is cheaper than more 
IOPs. In memory you can relocate objects. 

Memcached used to have the problem that, instead of the heap growing, the cache 
would lose capacity to fragmentation. FB implemented slab rebalancing in their 
fork, and then Memcached did its own implementation. The issue was internal 
fragmentation due to having too many of the wrong size slabs. 

For Robert
* Executor service shutdown: never really got why it takes a timeout nor why 
there is no blocking version. 99% of the time, if it doesn't shut down within the 
timeout it's a bug and you don't want to ignore it. We are pedantic about 
everything else, why not this? It's also unused right now.
* Stats could go into an atomic long array with padding. It really depends on 
the access pattern. You want data that is read/written at the same time on the 
same cache line. These are global counters, so they will be contended by 
everyone accessing the cache; better that they only have to pull in one cache 
line with all counters than multiple, and have to wait for exclusive access 
before writing to each one. Also consider LongAdder (a quick sketch follows after this list).
* If you want to do your own memory management strategy I think something like 
segregated storage as in boost pool with size tiers for powers of two and power 
of two plus previous power of two. You can CAS the head of the free list for 
each tier to make it thread safe, and lock when allocating out a new block 
instead of the free list. This won't adapt to changing size distributions. For 
that stuff needs to be relocatable
* I'll bet you could use a stamped lock pattern and readers might not have to 
lock at all. I think getting it working with just a lock is fine.
* I am not sure shrinking is very important? The table is pretty dense and 
should be a small portion of total memory once all the other memory is 
accounted for. You would need a lot of tiny cache entries to really bloat the 
table and then the population distribution would need to change to make that a 
waste.
* LRU lists per segment seem like they're not viable. That isn't a close enough 
approximation to LRU since we want at most two or three entries per partition.
* Some loops of very similar byte munging in HashEntryAccess
* Periodic cleanup check is maybe not so nice. An edge trigger via a CAS field 
would be nicer and move that up to > 80% since on a big-memory machine that is 
a lot of wasted cache space. Walking the entire LRU could take several seconds, 
but if it is amortized across a lot of expiration maybe it is ok.
* Some rehash required checking is duplicated in OHCacheImpl
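
On the stats point above, a minimal LongAdder-based sketch (illustrative only, not 
OHC's actual StatsHolder; LongAdder stripes its cells internally, so contended 
hit/miss updates don't all serialize on one cache line):

{code}
import java.util.concurrent.atomic.LongAdder;

final class CacheStats // hypothetical name for the sketch
{
    private final LongAdder hits = new LongAdder();
    private final LongAdder misses = new LongAdder();
    private final LongAdder evictions = new LongAdder();

    void recordHit()      { hits.increment(); }
    void recordMiss()     { misses.increment(); }
    void recordEviction() { evictions.increment(); }

    // Snapshots sum the striped cells; no locking needed on either path.
    long hitCount()      { return hits.sum(); }
    long missCount()     { return misses.sum(); }
    long evictionCount() { return evictions.sum(); }
}
{code}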

For Vijay
* sun.misc.Hashing doesn't seem to exist for me, maybe a Java 8 issue?
* The queue really needs to be bounded, producer and consumer could proceed at 
different rates. With striped 
* Tasks submitted to executor services via submit will wrap the result 
including exceptions in a future which silently discards them. The library 
might take at initialization time a listener for these errors, or if it is 
going to be C* specific it could use the wrapped runnable or similar.
* A lot of locking that was spin locking (which unbounded I don't think is 
great) is now blocking locking. There is no adaptive spinning if you don't use 
synchronized. If you are already using unsafe maybe you could do monitor 
enter/exit. Never tried it.
* It looks like concurrent calls to rehash could cause the table to rehash 
twice since the rebalance field is not CASed. You should do the volatile read, 
and then attempt the CAS (avoids putting the cache line in exclusive state 
every time).
* StatsHolder, same AtomicLongArray suggestion. Also consider LongAdder.
* In Segment.java, in the replace path, AtomicLong.addAndGet is called back to 
back, could be called once with the math already done. I believe each of those 
stalls processing until the store buffers have flushed. The put path does 
something similar and could have the same op

[jira] [Commented] (CASSANDRA-8354) A better story for dealing with empty values

2014-11-24 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14223472#comment-14223472
 ] 

Aleksey Yeschenko commented on CASSANDRA-8354:
--

bq. What if for instance we added an option strict_cql_values to 3.0 that 
defaults to false. When enabled it rejects nonsensical empty values. For 3.1 we 
default to true, and give people a tool to convert empty to null or some other 
value. For 4.0 it stays permanently true.

That. Except it's not just CQL, there is thrift too, where we should enforce 
this, so maybe we should name it 'reject_empty_types' or something. As a tool, 
upgradesstables will probably do.

Don't want to legitimize it on CQL syntax level either.
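
As a purely illustrative sketch of the kind of guard such a flag could enable (the 
flag follows the 'reject_empty_types' name above; the class and the boolean parameter 
are assumptions, not committed code):

{code}
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import org.apache.cassandra.db.marshal.AbstractType;
import org.apache.cassandra.db.marshal.Int32Type;
import org.apache.cassandra.db.marshal.LongType;
import org.apache.cassandra.db.marshal.TimeUUIDType;
import org.apache.cassandra.db.marshal.UUIDType;

final class EmptyValueGuard // hypothetical
{
    // Types for which a zero-length value is nonsensical (not an exhaustive list).
    private static final Set<AbstractType<?>> NO_EMPTY_ALLOWED = new HashSet<>(Arrays.asList(
            Int32Type.instance, LongType.instance, UUIDType.instance, TimeUUIDType.instance));

    static void validate(boolean rejectEmptyTypes, AbstractType<?> type, ByteBuffer value)
    {
        // The real write path would throw an InvalidRequestException; kept generic here.
        if (rejectEmptyTypes && value != null && value.remaining() == 0 && NO_EMPTY_ALLOWED.contains(type))
            throw new IllegalArgumentException("Empty value is not allowed for this column type");
    }
}
{code}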

> A better story for dealing with empty values
> 
>
> Key: CASSANDRA-8354
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8354
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
> Fix For: 3.0
>
>
> In CQL, a value of any type can be "empty", even for types for which such 
> values don't make any sense (int, uuid, ...). Note that it's different from 
> having no value (i.e. a {{null}}). This is due to historical reasons, and we 
> can't entirely disallow it for backward compatibility, but it's pretty 
> painful when working with CQL since you always need to be defensive about 
> such largely non-sensical values.
> This is particularly annoying with UDF: those empty values are represented as 
> {{null}} for UDF and that plays weirdly with UDF that use unboxed native 
> types.
> So I would suggest that we introduce variations of the types that don't 
> accept empty byte buffers for those types for which it's not a particularly 
> sensible value.
> Ideally we'd use those variant by default, that is:
> {noformat}
> CREATE TABLE foo (k text PRIMARY, v int)
> {noformat}
> would not accept empty values for {{v}}. But
> {noformat}
> CREATE TABLE foo (k text PRIMARY, v int ALLOW EMPTY)
> {noformat}
> would.
> Similarly, for UDF, a function like:
> {noformat}
> CREATE FUNCTION incr(v int) RETURNS int LANGUAGE JAVA AS 'return v + 1';
> {noformat}
> would be guaranteed it can only be applied where no empty values are allowed. 
> A
> function that wants to handle empty values could be created with:
> {noformat}
> CREATE FUNCTION incr(v int ALLOW EMPTY) RETURNS int ALLOW EMPTY LANGUAGE JAVA 
> AS 'return (v == null) ? null : v + 1';
> {noformat}
> Of course, doing that has the problem of backward compatibility. One option 
> could be to say that if a type doesn't accept empties, but we do have an 
> empty internally, then we convert it to some reasonably sensible default 
> value (0 for numeric values, the smallest possible uuid for uuids, etc...). 
> This way, we could allow conversion of types to and from 'ALLOW EMPTY'. And 
> maybe we'd say that existing compact tables gets the 'ALLOW EMPTY' flag for 
> their types by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8067) NullPointerException in KeyCacheSerializer

2014-11-24 Thread Julien Anguenot (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14223464#comment-14223464
 ] 

Julien Anguenot commented on CASSANDRA-8067:


[~mokemokechicken] [~Andie78] The error is definitely still occurring in a cluster 
where all nodes are running 2.1.2: it happens against every node of the 
cluster on a regular basis. The same cluster running 2.0.11 did not have that 
particular issue before migration.

> NullPointerException in KeyCacheSerializer
> --
>
> Key: CASSANDRA-8067
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8067
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Eric Leleu
> Fix For: 2.1.1
>
>
> Hi,
> I have this stack trace in the logs of Cassandra server (v2.1)
> {code}
> ERROR [CompactionExecutor:14] 2014-10-06 23:32:02,098 
> CassandraDaemon.java:166 - Exception in thread 
> Thread[CompactionExecutor:14,1,main]
> java.lang.NullPointerException: null
> at 
> org.apache.cassandra.service.CacheService$KeyCacheSerializer.serialize(CacheService.java:475)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.service.CacheService$KeyCacheSerializer.serialize(CacheService.java:463)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.cache.AutoSavingCache$Writer.saveCache(AutoSavingCache.java:225)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$11.run(CompactionManager.java:1061)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at java.util.concurrent.Executors$RunnableAdapter.call(Unknown 
> Source) ~[na:1.7.0]
> at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source) 
> ~[na:1.7.0]
> at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
> [na:1.7.0]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
> [na:1.7.0]
> at java.lang.Thread.run(Unknown Source) [na:1.7.0]
> {code}
> It may not be critical because this error occured in the AutoSavingCache. 
> However the line 475 is about the CFMetaData so it may hide bigger issue...
> {code}
>  474 CFMetaData cfm = 
> Schema.instance.getCFMetaData(key.desc.ksname, key.desc.cfname);
>  475 cfm.comparator.rowIndexEntrySerializer().serialize(entry, 
> out);
> {code}
> Regards,
> Eric



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8354) A better story for dealing with empty values

2014-11-24 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14223462#comment-14223462
 ] 

Jonathan Ellis commented on CASSANDRA-8354:
---

Is there a way we can avoid permanently enshrining this wart?

What if for instance we added an option {{strict_cql_values}} to 3.0 that 
defaults to false.  When enabled it rejects nonsensical empty values.  For 3.1 
we default to true, and give people a tool to convert empty to null or some 
other value.  For 4.0 it stays permanently true.

> A better story for dealing with empty values
> 
>
> Key: CASSANDRA-8354
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8354
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
> Fix For: 3.0
>
>
> In CQL, a value of any type can be "empty", even for types for which such 
> values don't make any sense (int, uuid, ...). Note that it's different from 
> having no value (i.e. a {{null}}). This is due to historical reasons, and we 
> can't entirely disallow it for backward compatibility, but it's pretty 
> painful when working with CQL since you always need to be defensive about 
> such largely non-sensical values.
> This is particularly annoying with UDF: those empty values are represented as 
> {{null}} for UDF and that plays weirdly with UDF that use unboxed native 
> types.
> So I would suggest that we introduce variations of the types that don't 
> accept empty byte buffers for those types for which it's not a particularly 
> sensible value.
> Ideally we'd use those variant by default, that is:
> {noformat}
> CREATE TABLE foo (k text PRIMARY, v int)
> {noformat}
> would not accept empty values for {{v}}. But
> {noformat}
> CREATE TABLE foo (k text PRIMARY, v int ALLOW EMPTY)
> {noformat}
> would.
> Similarly, for UDF, a function like:
> {noformat}
> CREATE FUNCTION incr(v int) RETURNS int LANGUAGE JAVA AS 'return v + 1';
> {noformat}
> would be guaranteed it can only be applied where no empty values are allowed. 
> A
> function that wants to handle empty values could be created with:
> {noformat}
> CREATE FUNCTION incr(v int ALLOW EMPTY) RETURNS int ALLOW EMPTY LANGUAGE JAVA 
> AS 'return (v == null) ? null : v + 1';
> {noformat}
> Of course, doing that has the problem of backward compatibility. One option 
> could be to say that if a type doesn't accept empties, but we do have an 
> empty internally, then we convert it to some reasonably sensible default 
> value (0 for numeric values, the smallest possible uuid for uuids, etc...). 
> This way, we could allow conversion of types to and from 'ALLOW EMPTY'. And 
> maybe we'd say that existing compact tables gets the 'ALLOW EMPTY' flag for 
> their types by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8365) CamelCase name is used as index name instead of lowercase

2014-11-24 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8365:
---
Reproduced In: 2.1.2
Fix Version/s: 2.1.3

> CamelCase name is used as index name instead of lowercase
> -
>
> Key: CASSANDRA-8365
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8365
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Pierre Laporte
>Priority: Minor
>  Labels: cqlsh
> Fix For: 2.1.3
>
>
> In cqlsh, when I execute a CREATE INDEX FooBar ... statement, the CamelCase 
> name is used as index name, even though it is unquoted. Trying to quote the 
> index name results in a syntax error.
> However, when I try to delete the index, I have to quote the index name, 
> otherwise I get an invalid-query error telling me that the index (lowercase) 
> does not exist.
> This seems inconsistent. Shouldn't the index name be lowercased before the 
> index is created?
> Here is the code to reproduce the issue :
> {code}
> cqlsh:schemabuilderit> CREATE TABLE IndexTest (a int primary key, b int);
> cqlsh:schemabuilderit> CREATE INDEX FooBar on indextest (b);
> cqlsh:schemabuilderit> DESCRIBE TABLE indextest ;
> CREATE TABLE schemabuilderit.indextest (
> a int PRIMARY KEY,
> b int
> ) ;
> CREATE INDEX FooBar ON schemabuilderit.indextest (b);
> cqlsh:schemabuilderit> DROP INDEX FooBar;
> code=2200 [Invalid query] message="Index 'foobar' could not be found in any 
> of the tables of keyspace 'schemabuilderit'"
> {code}
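
For context, the expected CQL identifier rule is that unquoted names fold to lower 
case while quoted names keep their exact case. A one-line sketch of that folding 
(illustrative only, not the parser's actual code):

{code}
static String toInternalName(String rawIdentifier, boolean wasQuoted)
{
    // Unquoted identifiers are case-insensitive and stored lower-cased;
    // quoted identifiers are taken verbatim.
    return wasQuoted ? rawIdentifier : rawIdentifier.toLowerCase(java.util.Locale.US);
}
{code}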



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7970) JSON support for CQL

2014-11-24 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-7970:

Issue Type: New Feature  (was: Bug)

> JSON support for CQL
> 
>
> Key: CASSANDRA-7970
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7970
> Project: Cassandra
>  Issue Type: New Feature
>  Components: API
>Reporter: Jonathan Ellis
>Assignee: Tyler Hobbs
> Fix For: 3.0
>
>
> JSON is popular enough that not supporting it is becoming a competitive 
> weakness.  We can add JSON support in a way that is compatible with our 
> performance goals by *mapping* JSON to an existing schema: one JSON document 
> maps to one CQL row.
> Thus, it is NOT a goal to support schemaless documents, which is a misfeature 
> [1] [2] [3].  Rather, it is to allow a convenient way to easily turn a JSON 
> document from a service or a user into a CQL row, with all the validation 
> that entails.
> Since we are not looking to support schemaless documents, we will not be 
> adding a JSON data type (CASSANDRA-6833) a la postgresql.  Rather, we will 
> map the JSON to UDT, collections, and primitive CQL types.
> Here's how this might look:
> {code}
> CREATE TYPE address (
>   street text,
>   city text,
>   zip_code int,
>   phones set
> );
> CREATE TABLE users (
>   id uuid PRIMARY KEY,
>   name text,
>   addresses map
> );
> INSERT INTO users JSON
> {‘id’: 4b856557-7153,
>‘name’: ‘jbellis’,
>‘address’: {“home”: {“street”: “123 Cassandra Dr”,
> “city”: “Austin”,
> “zip_code”: 78747,
> “phones”: [2101234567]}}};
> SELECT JSON id, address FROM users;
> {code}
> (We would also want to_json and from_json functions to allow mapping a single 
> column's worth of data.  These would not require extra syntax.)
> [1] http://rustyrazorblade.com/2014/07/the-myth-of-schema-less/
> [2] https://blog.compose.io/schema-less-is-usually-a-lie/
> [3] http://dl.acm.org/citation.cfm?id=2481247



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8338) Simplify Token Selection

2014-11-24 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14223425#comment-14223425
 ] 

Jeremiah Jordan commented on CASSANDRA-8338:


If you know the DC size to put it in the file:

{noformat}
# datacenter_index: 0
# node_index: 0
# datacenter_size: 1
{noformat}

Then you can just include the formula in your chef script.
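
For illustration, a minimal Java sketch of that formula, assuming Murmur3Partitioner's 
token range of -2^63 to 2^63-1 (the class and method names are made up for the example):

{code}
import java.math.BigInteger;

final class InitialTokenCalc // hypothetical
{
    // token = node_index * (range / datacenter_size) + (datacenter_index * 100) + start_of_range
    static BigInteger token(int nodeIndex, int datacenterIndex, int datacenterSize)
    {
        BigInteger startOfRange = BigInteger.valueOf(2).pow(63).negate(); // -2^63
        BigInteger range = BigInteger.valueOf(2).pow(64);                 // total ring width
        return BigInteger.valueOf(nodeIndex)
                         .multiply(range.divide(BigInteger.valueOf(datacenterSize)))
                         .add(BigInteger.valueOf(datacenterIndex * 100L))
                         .add(startOfRange);
    }

    public static void main(String[] args)
    {
        // Second node (node_index 1) of a 3-node DC with datacenter_index 0
        System.out.println(token(1, 0, 3)); // -3074457345618258603
    }
}
{code}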

> Simplify Token Selection
> 
>
> Key: CASSANDRA-8338
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8338
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Config
>Reporter: Joaquin Casares
>Assignee: Jeremiah Jordan
>Priority: Trivial
>  Labels: lhf
>
> When creating provisioning scripts, especially when running tools like Chef, 
> each node is launched individually. When not using vnodes your initial setup 
> will always be unbalanced unless you handle token assignment within your 
> scripts. 
> I spoke to someone recently who was using this in production and his 
> operations team wasn't too pleased that they had to use OpsCenter as an extra 
> step for rebalancing. Instead, we should provide this functionality out of 
> the box for new clusters.
> Instead, could we have the following options below the initial_token section?
> {CODE}
> # datacenter_index: 0
> # node_index: 0
> # datacenter_size: 1
> {CODE}
> The above configuration options, when uncommented, would do the math of:
> {CODE}
> token = node_index * (range / datacenter_size) + (datacenter_index * 100) 
> + start_of_range
> {CODE}
> This means that users don't have to repeatedly implement the initial_token 
> selection code nor know the range and offsets of their partitioner.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8338) Simplify Token Selection

2014-11-24 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan resolved CASSANDRA-8338.

Resolution: Won't Fix

> Simplify Token Selection
> 
>
> Key: CASSANDRA-8338
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8338
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Config
>Reporter: Joaquin Casares
>Assignee: Jeremiah Jordan
>Priority: Trivial
>  Labels: lhf
>
> When creating provisioning scripts, especially when running tools like Chef, 
> each node is launched individually. When not using vnodes your initial setup 
> will always be unbalanced unless you handle token assignment within your 
> scripts. 
> I spoke to someone recently who was using this in production and his 
> operations team wasn't too pleased that they had to use OpsCenter as an extra 
> step for rebalancing. Instead, we should provide this functionality out of 
> the box for new clusters.
> Instead, could we have the following options below the initial_token section?
> {CODE}
> # datacenter_index: 0
> # node_index: 0
> # datacenter_size: 1
> {CODE}
> The above configuration options, when uncommented, would do the math of:
> {CODE}
> token = node_index * (range / datacenter_size) + (datacenter_index * 100) 
> + start_of_range
> {CODE}
> This means that users don't have to repeatedly implement the initial_token 
> selection code nor know the range and offsets of their partitioner.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8150) Revaluate Default JVM tuning parameters

2014-11-24 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14223415#comment-14223415
 ] 

T Jake Luciani commented on CASSANDRA-8150:
---

I can run it through some workloads...

> Revaluate Default JVM tuning parameters
> ---
>
> Key: CASSANDRA-8150
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8150
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Config
>Reporter: Matt Stump
>Assignee: Brandon Williams
> Attachments: upload.png
>
>
> It's been found that the old twitter recommendations of 100m per core up to 
> 800m are harmful and should no longer be used.
> Instead the formula used should be 1/3 or 1/4 max heap with a max of 2G. 1/3 
> or 1/4 is debatable and I'm open to suggestions. If I were to hazard a guess 
> 1/3 is probably better for releases greater than 2.1.
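
Read literally, the proposed rule (which, as I read the description, applies to the 
HEAP_NEWSIZE calculation) is just min(max_heap / 4, 2G), or / 3 for the alternative. A 
tiny sketch, with the caveat that the real change would land in cassandra-env.sh rather 
than Java code:

{code}
final class HeapSizing // illustrative only
{
    static long newGenSizeBytes(long maxHeapBytes)
    {
        long candidate = maxHeapBytes / 4;          // or maxHeapBytes / 3, as discussed
        long cap = 2L * 1024 * 1024 * 1024;         // 2G ceiling
        return Math.min(candidate, cap);
    }
}
{code}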



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8338) Simplify Token Selection

2014-11-24 Thread Nick Bailey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14223366#comment-14223366
 ] 

Nick Bailey commented on CASSANDRA-8338:


It might be worth putting this in a different file than cassandra.yaml. It's 
already confusing that some options in there (initial_token, num_tokens) only 
matter the very first time a node starts up. I'm not sure if we should be 
adding more. Also we should make sure we convey that this only helps when the 
entire cluster is being set up for the first time, not when adding nodes.

Lastly, this will need to incorporate rack information as well if we want it to 
work correctly when not everything is in the same rack.

> Simplify Token Selection
> 
>
> Key: CASSANDRA-8338
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8338
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Config
>Reporter: Joaquin Casares
>Assignee: Jeremiah Jordan
>Priority: Trivial
>  Labels: lhf
>
> When creating provisioning scripts, especially when running tools like Chef, 
> each node is launched individually. When not using vnodes your initial setup 
> will always be unbalanced unless you handle token assignment within your 
> scripts. 
> I spoke to someone recently who was using this in production and his 
> operations team wasn't too pleased that they had to use OpsCenter as an extra 
> step for rebalancing. Instead, we should provide this functionality out of 
> the box for new clusters.
> Instead, could we have the following options below the initial_token section?
> {CODE}
> # datacenter_index: 0
> # node_index: 0
> # datacenter_size: 1
> {CODE}
> The above configuration options, when uncommented, would do the math of:
> {CODE}
> token = node_index * (range / datacenter_size) + (datacenter_index * 100) 
> + start_of_range
> {CODE}
> This means that users don't have to repeatedly implement the initial_token 
> selection code nor know the range and offsets of their partitioner.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8352) Timeout Exception on Node Failure in Remote Data Center

2014-11-24 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-8352.
---
Resolution: Cannot Reproduce

Upgrade to 2.0.11 and let us know if you still see unexpected behavior.

> Timeout Exception on Node Failure in Remote Data Center
> ---
>
> Key: CASSANDRA-8352
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8352
> Project: Cassandra
>  Issue Type: Bug
> Environment: Unix, Cassandra 2.0.3
>Reporter: Akhtar Hussain
>  Labels: DataCenter, GEO-Red
>
> We have a Geo-red setup with 2 data centers having 3 nodes each. When we 
> bring a single Cassandra node down in DC2 with kill -9, reads fail on DC1 
> with TimedOutException for a brief amount of time (~15-20 sec).
> Questions:
> 1. We need to understand why reads fail on DC1 when a node in another DC, 
> i.e. DC2, fails. As we are using LOCAL_QUORUM for both reads/writes in DC1, 
> the request should return once 2 nodes in the local DC have replied instead 
> of timing out because of a node in the remote DC.
> 2. We want to make sure that no Cassandra requests fail in case of node 
> failures. We used rapid read protection of ALWAYS/99percentile/10ms as 
> mentioned in 
> http://www.datastax.com/dev/blog/rapid-read-protection-in-cassandra-2-0-2, 
> but nothing worked. How do we ensure zero request failures when a node fails?
> 3. What is the right way of handling HTimedOutException exceptions in 
> Hector?
> 4. Please confirm whether we are using public/private hostnames as expected.
> We are using Cassandra 2.0.3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8150) Revaluate Default JVM tuning parameters

2014-11-24 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14223371#comment-14223371
 ] 

Jonathan Ellis commented on CASSANDRA-8150:
---

[~tjake] are you going to run the tests or do you want to delegate to 
[~enigmacurry]'s team?

> Revaluate Default JVM tuning parameters
> ---
>
> Key: CASSANDRA-8150
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8150
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Config
>Reporter: Matt Stump
>Assignee: Brandon Williams
> Attachments: upload.png
>
>
> It's been found that the old Twitter recommendation of 100m per core up to 
> 800m is harmful and should no longer be used.
> Instead, the formula used should be 1/3 or 1/4 of the max heap, with a max of 
> 2G. 1/3 or 1/4 is debatable and I'm open to suggestions. If I were to hazard 
> a guess, 1/3 is probably better for releases greater than 2.1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8188) don't block SocketThread for MessagingService

2014-11-24 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14223362#comment-14223362
 ] 

Jonathan Ellis commented on CASSANDRA-8188:
---

I'd be okay with adding this to 2.0.12.  Brandon?

> don't block SocketThread for MessagingService
> -
>
> Key: CASSANDRA-8188
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8188
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: yangwei
>Assignee: yangwei
> Fix For: 2.1.2
>
> Attachments: 
> 0001-don-t-block-SocketThread-for-MessagingService.patch, handshake.stack.txt
>
>
> We have two datacenters A and B.
> The node in A cannot handshake version with nodes in B; logs in A as follows:
> {noformat}
>   INFO [HANDSHAKE-/B] 2014-10-24 04:29:49,075 OutboundTcpConnection.java 
> (line 395) Cannot handshake version with B
> TRACE [WRITE-/B] 2014-10-24 11:02:49,044 OutboundTcpConnection.java (line 
> 368) unable to connect to /B
>   java.net.ConnectException: Connection refused
> at sun.nio.ch.Net.connect0(Native Method)
> at sun.nio.ch.Net.connect(Net.java:364)
> at sun.nio.ch.Net.connect(Net.java:356)
> at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:623)
> at java.nio.channels.SocketChannel.open(SocketChannel.java:184)
> at 
> org.apache.cassandra.net.OutboundTcpConnectionPool.newSocket(OutboundTcpConnectionPool.java:134)
> at 
> org.apache.cassandra.net.OutboundTcpConnectionPool.newSocket(OutboundTcpConnectionPool.java:119)
> at 
> org.apache.cassandra.net.OutboundTcpConnection.connect(OutboundTcpConnection.java:299)
> at 
> org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:150)
> {noformat}
> 
> The jstack output of nodes in B shows it blocks in inputStream.readInt, 
> resulting in the SocketThread not accepting sockets any more; logs as follows:
> {noformat}
>  java.lang.Thread.State: RUNNABLE
> at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
> at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
> at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
> at sun.nio.ch.IOUtil.read(IOUtil.java:197)
> at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
> - locked <0x0007963747e8> (a java.lang.Object)
> at 
> sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:203)
> - locked <0x000796374848> (a java.lang.Object)
> at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103)
> - locked <0x0007a5c7ca88> (a 
> sun.nio.ch.SocketAdaptor$SocketInputStream)
> at java.io.InputStream.read(InputStream.java:101)
> at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:81)
> - locked <0x0007a5c7ca88> (a 
> sun.nio.ch.SocketAdaptor$SocketInputStream)
> at java.io.DataInputStream.readInt(DataInputStream.java:387)
> at 
> org.apache.cassandra.net.MessagingService$SocketThread.run(MessagingService.java:879)
> {noformat}
>
> On nodes of B, tcpdump shows retransmission of SYN,ACK during the TCP 
> three-way handshake phase because the TCP implementation drops the last ACK 
> when the backlog queue is full.
> On nodes of B, ss -tl shows "Recv-Q 51 Send-Q 50".
> 
> On nodes of B, netstat -s shows “SYNs to LISTEN sockets dropped” and “times 
> the listen queue of a socket overflowed” are both increasing.
> This patch sets read timeout to 2 * 
> OutboundTcpConnection.WAIT_FOR_VERSION_MAX_TIME for the accepted socket.
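As a rough sketch of the idea in the last paragraph (simplified names, not the attached patch), bounding the blocking read keeps one unresponsive peer from wedging the single accept/handshake thread; the timeout constant's value below is only illustrative.

{code}
import java.io.DataInputStream;
import java.net.Socket;

public class HandshakeSketch {
    // Stand-in for the constant the description refers to; value here is illustrative only.
    static final int WAIT_FOR_VERSION_MAX_TIME = 10_000; // ms

    static int readHandshakeVersion(Socket accepted) throws Exception {
        // A blocking read with no timeout can wedge the accept thread forever if the
        // peer never sends its version. Bounding the read lets the thread move on.
        accepted.setSoTimeout(2 * WAIT_FOR_VERSION_MAX_TIME);
        DataInputStream in = new DataInputStream(accepted.getInputStream());
        return in.readInt(); // throws SocketTimeoutException instead of blocking indefinitely
    }
}
{code}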



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8346) Paxos operation can use stale data during multiple range movements

2014-11-24 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-8346:
--
Reviewer: sankalp kohli

[~kohlisankalp] to review

> Paxos operation can use stale data during multiple range movements
> --
>
> Key: CASSANDRA-8346
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8346
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
> Fix For: 2.0.12
>
> Attachments: 8346.txt
>
>
> Paxos operations correctly account for pending ranges for all operations 
> pertaining to the Paxos state, but those pending ranges are not taken into 
> account when reading the data to check for the conditions or during a serial 
> read. It's thus possible to break the LWT guarantees by reading a stale 
> value.  This requires 2 node movements (on the same token range) to be a 
> problem though.
> Basically, we have {{RF}} replicas + {{P}} pending nodes. For the Paxos 
> prepare/propose phases, the number of required participants (the "Paxos 
> QUORUM") is {{(RF + P + 1) / 2}} ({{SP.getPaxosParticipants}}), but the read 
> done to check conditions or for serial reads is done at a "normal" QUORUM (or 
> LOCAL_QUORUM), and so a weaker {{(RF + 1) / 2}}. We have a problem if it's 
> possible that said read can read only from nodes that were not part of the 
> paxos participants, and so we have a problem if:
> {noformat}
> "normal quorum" == (RF + 1) / 2 <= (RF + P) - ((RF + P + 1) / 2) == 
> "participants considered - blocked for"
> {noformat}
> We're good if {{P = 0}} or {{P = 1}} since this inequality gives us 
> respectively {{RF + 1 <= RF - 1}} and {{RF + 1 <= RF}}, both of which are 
> impossible. But at {{P = 2}} (2 pending nodes), this inequality is equivalent 
> to {{RF <= RF}} and so we might read stale data.
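To make the inequality concrete, a small arithmetic sketch (plain Java, not Cassandra code) that evaluates the simplified form obtained by multiplying both sides of the quoted inequality by 2.

{code}
public class PaxosStaleReadCheck {
    // From the description:
    //   (RF + 1) / 2  <=  (RF + P) - (RF + P + 1) / 2
    // Multiplying both sides by 2 (exact arithmetic, no integer division):
    //   RF + 1  <=  RF + P - 1   <=>   P >= 2
    static boolean staleReadPossible(int rf, int pending) {
        return (rf + 1) <= (rf + pending - 1);
    }

    public static void main(String[] args) {
        int rf = 3;
        for (int pending = 0; pending <= 2; pending++) {
            System.out.println("RF=" + rf + ", pending=" + pending
                    + " -> stale read possible: " + staleReadPossible(rf, pending));
        }
        // Prints false, false, true: two simultaneous pending nodes on a range are needed.
    }
}
{code}

This matches the description's conclusion that P = 0 and P = 1 are safe while P = 2 is not.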



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8253) cassandra-stress 2.1 doesn't support LOCAL_ONE

2014-11-24 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14223350#comment-14223350
 ] 

Jonathan Ellis commented on CASSANDRA-8253:
---

[~tjake] to review

> cassandra-stress 2.1 doesn't support LOCAL_ONE
> --
>
> Key: CASSANDRA-8253
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8253
> Project: Cassandra
>  Issue Type: Bug
>Reporter: J.B. Langston
>Assignee: Liang Xie
> Attachments: CASSANDRA-8253.txt
>
>
> Looks like a simple oversight in argument parsing:
> ➜  bin  ./cassandra-stress write cl=LOCAL_ONE
> Invalid value LOCAL_ONE; must match pattern 
> ONE|QUORUM|LOCAL_QUORUM|EACH_QUORUM|ALL|ANY
> Also, CASSANDRA-7077 argues that it should be using LOCAL_ONE by default.
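As a sketch of the general idea behind such a fix (not the attached patch or the real cassandra-stress code), validating against an enum of consistency levels rather than a hard-coded pattern avoids exactly this class of oversight; the enum below is a local stand-in.

{code}
import java.util.Arrays;

public class ConsistencyArgSketch {
    // Local stand-in for the real consistency-level enum; LOCAL_ONE is the value
    // missing from the hard-coded pattern quoted in the report.
    enum CL { ONE, QUORUM, LOCAL_QUORUM, EACH_QUORUM, ALL, ANY, LOCAL_ONE }

    static CL parse(String arg) {
        try {
            return CL.valueOf(arg.toUpperCase());
        } catch (IllegalArgumentException e) {
            throw new IllegalArgumentException(
                    "Invalid value " + arg + "; must be one of " + Arrays.toString(CL.values()));
        }
    }

    public static void main(String[] args) {
        System.out.println(parse("LOCAL_ONE")); // accepted, unlike the hard-coded regex
    }
}
{code}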



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8338) Simplify Token Selection

2014-11-24 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-8338:
--
Assignee: Jeremiah Jordan  (was: Jonathan Ellis)

> Simplify Token Selection
> 
>
> Key: CASSANDRA-8338
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8338
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Config
>Reporter: Joaquin Casares
>Assignee: Jeremiah Jordan
>Priority: Trivial
>  Labels: lhf
>
> When creating provisioning scripts, especially when running tools like Chef, 
> each node is launched individually. When not using vnodes your initial setup 
> will always be unbalanced unless you handle token assignment within your 
> scripts. 
> I spoke to someone recently who was using this in production and his 
> operations team wasn't too pleased that they had to use OpsCenter as an extra 
> step for rebalancing. Instead, we should provide this functionality out of 
> the box for new clusters.
> Instead, could we have the following options below the initial_token section?
> {CODE}
> # datacenter_index: 0
> # node_index: 0
> # datacenter_size: 1
> {CODE}
> The above configuration options, when uncommented, would do the math of:
> {CODE}
> token = node_index * (range / datacenter_size) + (datacenter_index * 100) 
> + start_of_range
> {CODE}
> This means that users don't have to repeatedly implement the initial_token 
> selection code nor know the range and offsets of their partitioner.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7933) Update cassandra-stress README

2014-11-24 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14223349#comment-14223349
 ] 

Jonathan Ellis commented on CASSANDRA-7933:
---

Can you review [~tjake]?

> Update cassandra-stress README
> --
>
> Key: CASSANDRA-7933
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7933
> Project: Cassandra
>  Issue Type: Task
>Reporter: Benedict
>Assignee: Liang Xie
>Priority: Minor
> Attachments: CASSANDRA-7933.txt
>
>
> There is a README in the tools/stress directory. It is completely out of date.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-8338) Simplify Token Selection

2014-11-24 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reassigned CASSANDRA-8338:
-

Assignee: Jonathan Ellis

> Simplify Token Selection
> 
>
> Key: CASSANDRA-8338
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8338
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Config
>Reporter: Joaquin Casares
>Assignee: Jonathan Ellis
>Priority: Trivial
>  Labels: lhf
>
> When creating provisioning scripts, especially when running tools like Chef, 
> each node is launched individually. When not using vnodes your initial setup 
> will always be unbalanced unless you handle token assignment within your 
> scripts. 
> I spoke to someone recently who was using this in production and his 
> operations team wasn't too pleased that they had to use OpsCenter as an extra 
> step for rebalancing. Instead, we should provide this functionality out of 
> the box for new clusters.
> Instead, could we have the following options below the initial_token section?
> {CODE}
> # datacenter_index: 0
> # node_index: 0
> # datacenter_size: 1
> {CODE}
> The above configuration options, when uncommented, would do the math of:
> {CODE}
> token = node_index * (range / datacenter_size) + (datacenter_index * 100) 
> + start_of_range
> {CODE}
> This means that users don't have to repeatedly implement the initial_token 
> selection code nor know the range and offsets of their partitioner.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8343) Secondary index creation causes moves/bootstraps to fail

2014-11-24 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-8343:
--
Assignee: Yuki Morishita

> Secondary index creation causes moves/bootstraps to fail
> 
>
> Key: CASSANDRA-8343
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8343
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Michael Frisch
>Assignee: Yuki Morishita
>
> Node moves/bootstraps are failing if the stream timeout is set to a value too 
> low for secondary index creation to complete.  This happens because at the 
> end of the very last stream the StreamInSession.closeIfFinished() function 
> calls maybeBuildSecondaryIndexes on every column family.  If the stream time 
> + all CF's index creation takes longer than your stream timeout then the 
> socket closes from the sender's side, the receiver of the stream tries to 
> write to said socket because it's not null, an IOException is thrown but not 
> caught in closeIfFinished(), the exception is caught somewhere and not 
> logged, AbstractStreamSession.close() is never called, and the CountDownLatch 
> is never decremented.  This causes the move/bootstrap to continue forever 
> until the node is restarted.
> This problem of stream time + secondary index creation time exists on 
> decommissioning/unbootstrap as well but since it's on the sending side the 
> timeout triggers the onFailure() callback which does decrement the 
> CountDownLatch leading to completion.
> A cursory glance at the 2.0 code leads me to believe this problem would exist 
> there as well.
> Temporary workaround: set a really high/infinite stream timeout.
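The failure mode described above boils down to an exception escaping before the latch is decremented. A generic sketch of the pattern that avoids it is shown below; the names are simplified and this is not the actual streaming code.

{code}
import java.util.concurrent.CountDownLatch;

public class StreamCompletionSketch {
    private final CountDownLatch latch = new CountDownLatch(1);

    void closeIfFinished(Runnable buildSecondaryIndexes) {
        try {
            // May throw (e.g. an IOException wrapped in a RuntimeException) if the sender
            // already closed the socket because index builds exceeded the stream timeout.
            buildSecondaryIndexes.run();
        } finally {
            // Whatever happens above, the session must be closed so the
            // bootstrap/move latch is released and the operation can terminate.
            latch.countDown();
        }
    }

    void awaitCompletion() throws InterruptedException {
        latch.await();
    }
}
{code}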



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8285) OOME in Cassandra 2.0.11

2014-11-24 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14223346#comment-14223346
 ] 

Russ Hatch commented on CASSANDRA-8285:
---

The heap dump Pierre included directly above looks similar to the earlier one. 
One difference here is that there appear to be 2 active memtables (the earlier 
heap dump had just one) consuming most of the heap.

In this newer heap dump the MemTable objects appear to be contained/referenced 
by DataTracker objects, whereas in the earlier heap the voracious MemTable 
looked kinda like a "top level" object. But I could be 
misreading/misunderstanding the MAT report.

> OOME in Cassandra 2.0.11
> 
>
> Key: CASSANDRA-8285
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8285
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 2.0.11 + java-driver 2.0.8-SNAPSHOT
> Cassandra 2.0.11 + ruby-driver 1.0-beta
>Reporter: Pierre Laporte
>Assignee: Aleksey Yeschenko
> Attachments: OOME_node_system.log, gc-1416849312.log.gz, gc.log.gz, 
> heap-usage-after-gc-zoom.png, heap-usage-after-gc.png, system.log.gz
>
>
> We ran drivers 3-days endurance tests against Cassandra 2.0.11 and C* crashed 
> with an OOME.  This happened both with ruby-driver 1.0-beta and java-driver 
> 2.0.8-snapshot.
> Attached are :
> | OOME_node_system.log | The system.log of one Cassandra node that crashed |
> | gc.log.gz | The GC log on the same node |
> | heap-usage-after-gc.png | The heap occupancy evolution after every GC cycle 
> |
> | heap-usage-after-gc-zoom.png | A focus on when things start to go wrong |
> Workload :
> Our test executes 5 CQL statements (select, insert, select, delete, select) 
> for a given unique id, during 3 days, using multiple threads.  There is not 
> change in the workload during the test.
> Symptoms :
> In the attached log, it seems something starts in Cassandra between 
> 2014-11-06 10:29:22 and 2014-11-06 10:45:32.  This causes an allocation that 
> fills the heap.  We eventually get stuck in a Full GC storm and get an OOME 
> in the logs.
> I have run the java-driver tests against Cassandra 1.2.19 and 2.1.1.  The 
> error does not occur.  It seems specific to 2.0.11.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8342) Remove historical guidance for concurrent reader and writer tunings.

2014-11-24 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14223342#comment-14223342
 ] 

Jonathan Ellis commented on CASSANDRA-8342:
---

[~enigmacurry] as a sanity check, can you stress i2.8xl reads at 64, 128, and 
256 concurrent read threads?

> Remove historical guidance for concurrent reader and writer tunings.
> 
>
> Key: CASSANDRA-8342
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8342
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Matt Stump
>
> The cassandra.yaml and documentation provide guidance on tuning concurrent 
> readers or concurrent writers to system resources (cores, spindles). Testing 
> performed by both myself and customers demonstrates no benefit for thread 
> pool sizes above 64, and a decrease in throughput for thread pools larger 
> than 128. This is due to thread scheduling and synchronization 
> bottlenecks within Cassandra. 
> Additionally, for lower-end systems, reducing these thread pools provides very 
> little benefit because the bottleneck is typically moved to either IO or CPU.
> I propose that we set the default value to 64 (current default is 32), and 
> remove all guidance/recommendations regarding tuning.
> This recommendation may change in 3.0, but that would require further 
> experimentation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8285) OOME in Cassandra 2.0.11

2014-11-24 Thread Pierre Laporte (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14223321#comment-14223321
 ] 

Pierre Laporte edited comment on CASSANDRA-8285 at 11/24/14 7:09 PM:
-

I just reproduced the issue on my machine against Cassandra 2.1.2.

*Howto*

Create a 3-node C* cluster

{code}ccm create -n 3 -v 2.1.2 -b -s -i 127.0.0. cassandra-2.1{code}

Insert/delete a lot of rows inside a single table.  I was actually trying to 
reproduce the TombstoneOverwhelmingException but got an OOME instead.

{code}
import static com.datastax.driver.core.querybuilder.QueryBuilder.eq;

import java.io.IOException;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Host;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.Statement;
import com.datastax.driver.core.policies.LoadBalancingPolicy;
import com.datastax.driver.core.policies.RoundRobinPolicy;
import com.datastax.driver.core.querybuilder.Batch;
import com.datastax.driver.core.querybuilder.QueryBuilder;

public class CassandraTest implements AutoCloseable {
    public static final String KEYSPACE = "TombstonesOverwhelming";

    private Cluster cluster;
    protected Session session;

    public CassandraTest() {
        this(new RoundRobinPolicy());
    }

    public CassandraTest(LoadBalancingPolicy loadBalancingPolicy) {
        System.out.println("Creating builder...");
        cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")
                .withLoadBalancingPolicy(loadBalancingPolicy)
                .build();
        for (Host host : cluster.getMetadata().getAllHosts()) {
            System.out.println("Found host " + host.getAddress() + " in DC " + host.getDatacenter());
        }
        session = cluster.connect();
    }

    private void executeQuietly(String query) {
        try {
            execute(query);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private ResultSet execute(String query) {
        return session.execute(query);
    }

    private ResultSet execute(Statement statement) {
        return session.execute(statement);
    }

    @Override
    public void close() throws IOException {
        cluster.close();
    }

    public static void main(String... args) throws Exception {
        try (CassandraTest test = new CassandraTest()) {
            test.executeQuietly("DROP KEYSPACE IF EXISTS " + KEYSPACE);
            test.execute("CREATE KEYSPACE " + KEYSPACE + " WITH REPLICATION = " +
                    "{ 'class' : 'SimpleStrategy', 'replication_factor' : 3 }");
            test.execute("USE " + KEYSPACE);
            test.execute("CREATE TABLE useful (run int, iteration int, copy int, " +
                    "PRIMARY KEY (run, iteration, copy))");

            System.out.println("Press ENTER to start the test");
            System.in.read();

            for (int run = 0; run < 1_000_000; run++) {
                System.out.printf("Starting run % 7d... ", run);
                System.out.print("Inserting...");
                for (int iteration = 0; iteration < 1_000_000; iteration++) {
                    // Insert 100 copies per (run, iteration) in a single QUORUM batch
                    Batch batch = QueryBuilder.batch();
                    batch.setConsistencyLevel(ConsistencyLevel.QUORUM);
                    for (int copy = 0; copy < 100; copy++) {
                        batch.add(QueryBuilder.insertInto("useful")
                                .value("run", run).value("iteration", iteration).value("copy", copy));
                    }
                    test.execute(batch);
                }
                System.out.println("Deleting...");
                for (int iteration = 0; iteration < 1_000_000; iteration++) {
                    // Delete the same rows again, generating tombstones
                    Batch batch = QueryBuilder.batch();
                    batch.setConsistencyLevel(ConsistencyLevel.QUORUM);
                    for (int copy = 0; copy < 100; copy++) {
                        batch.add(QueryBuilder.delete().from("useful")
                                .where(eq("run", run)).and(eq("iteration", iteration)).and(eq("copy", copy)));
                    }
                    test.execute(batch);
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
{code}

It took ~50 minutes before two instances OOME'd.  Please find attached the gc 
log (gc-1416849312.log.gz) and the system log (system.log.gz).  If needed, I 
can upload a heap dump too.

Hope that helps


was (Author: pingtimeout):
I just reproduced the issue on my machine against Cassandra 2.1.2.

*Howto*

Create 3-nodes C* cluster

{code}ccm create -n 3 -v 2.1.2 -b -s -i 127.0.0. cassandra-2.1{code}

Insert/delete a lot of rows inside a single table.  I was actually trying to 
reproduce the TombstoneOverwhelmingException but got an OOME instead.

{code}
public class CassandraTest implements AutoCloseable {
public static final String KEYSPACE = "TombstonesOverwhelming";

private Cluster cluster;
protected Session session;

public CassandraTest() {
this(new RoundRobinPolicy());
}

public CassandraTest(LoadBalancingPolicy loadBalancingPolicy) {
System.out.println("Creating builder...");
cluster = 
Cluster.builder().addContactPoint("127.0.0.1").withLoadBalancingPolicy(loadBalancingPolicy).build();
for (Host host : cluster.getMetadata().getAllHosts()) {
System.out.println("Found host " + host.getAddress() + "

[jira] [Commented] (CASSANDRA-8365) CamelCase name is used as index name instead of lowercase

2014-11-24 Thread Pierre Laporte (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14223328#comment-14223328
 ] 

Pierre Laporte commented on CASSANDRA-8365:
---

[~philipthompson] I am using 2.1.2

> CamelCase name is used as index name instead of lowercase
> -
>
> Key: CASSANDRA-8365
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8365
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Pierre Laporte
>Priority: Minor
>  Labels: cqlsh
>
> In cqlsh, when I execute a CREATE INDEX FooBar ... statement, the CamelCase 
> name is used as index name, even though it is unquoted. Trying to quote the 
> index name results in a syntax error.
> However, when I try to delete the index, I have to quote the index name, 
> otherwise I get an invalid-query error telling me that the index (lowercase) 
> does not exist.
> This seems inconsistent.  Shouldn't the index name be lowercased before the 
> index is created ?
> Here is the code to reproduce the issue :
> {code}
> cqlsh:schemabuilderit> CREATE TABLE IndexTest (a int primary key, b int);
> cqlsh:schemabuilderit> CREATE INDEX FooBar on indextest (b);
> cqlsh:schemabuilderit> DESCRIBE TABLE indextest ;
> CREATE TABLE schemabuilderit.indextest (
> a int PRIMARY KEY,
> b int
> ) ;
> CREATE INDEX FooBar ON schemabuilderit.indextest (b);
> cqlsh:schemabuilderit> DROP INDEX FooBar;
> code=2200 [Invalid query] message="Index 'foobar' could not be found in any 
> of the tables of keyspace 'schemabuilderit'"
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8285) OOME in Cassandra 2.0.11

2014-11-24 Thread Pierre Laporte (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pierre Laporte updated CASSANDRA-8285:
--
Attachment: system.log.gz
gc-1416849312.log.gz

I just reproduced the issue on my machine against Cassandra 2.1.2.

*Howto*

Create a 3-node C* cluster

{code}ccm create -n 3 -v 2.1.2 -b -s -i 127.0.0. cassandra-2.1{code}

Insert/delete a lot of rows inside a single table.  I was actually trying to 
reproduce the TombstoneOverwhelmingException but got an OOME instead.

{code}
import static com.datastax.driver.core.querybuilder.QueryBuilder.eq;

import java.io.IOException;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Host;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.Statement;
import com.datastax.driver.core.policies.LoadBalancingPolicy;
import com.datastax.driver.core.policies.RoundRobinPolicy;
import com.datastax.driver.core.querybuilder.Batch;
import com.datastax.driver.core.querybuilder.QueryBuilder;

public class CassandraTest implements AutoCloseable {
    public static final String KEYSPACE = "TombstonesOverwhelming";

    private Cluster cluster;
    protected Session session;

    public CassandraTest() {
        this(new RoundRobinPolicy());
    }

    public CassandraTest(LoadBalancingPolicy loadBalancingPolicy) {
        System.out.println("Creating builder...");
        cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")
                .withLoadBalancingPolicy(loadBalancingPolicy)
                .build();
        for (Host host : cluster.getMetadata().getAllHosts()) {
            System.out.println("Found host " + host.getAddress() + " in DC " + host.getDatacenter());
        }
        session = cluster.connect();
    }

    private void executeQuietly(String query) {
        try {
            execute(query);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private ResultSet execute(String query) {
        return session.execute(query);
    }

    private ResultSet execute(Statement statement) {
        return session.execute(statement);
    }

    @Override
    public void close() throws IOException {
        cluster.close();
    }

    public static void main(String... args) throws Exception {
        try (CassandraTest test = new CassandraTest()) {
            test.executeQuietly("DROP KEYSPACE IF EXISTS " + KEYSPACE);
            test.execute("CREATE KEYSPACE " + KEYSPACE + " WITH REPLICATION = " +
                    "{ 'class' : 'SimpleStrategy', 'replication_factor' : 3 }");
            test.execute("USE " + KEYSPACE);
            test.execute("CREATE TABLE useful (run int, iteration int, copy int, " +
                    "PRIMARY KEY (run, iteration, copy))");

            System.out.println("Press ENTER to start the test");
            System.in.read();

            for (int run = 0; run < 1_000_000; run++) {
                System.out.printf("Starting run % 7d... ", run);
                System.out.print("Inserting...");
                for (int iteration = 0; iteration < 1_000_000; iteration++) {
                    // Insert 100 copies per (run, iteration) in a single QUORUM batch
                    Batch batch = QueryBuilder.batch();
                    batch.setConsistencyLevel(ConsistencyLevel.QUORUM);
                    for (int copy = 0; copy < 100; copy++) {
                        batch.add(QueryBuilder.insertInto("useful")
                                .value("run", run).value("iteration", iteration).value("copy", copy));
                    }
                    test.execute(batch);
                }
                System.out.println("Deleting...");
                for (int iteration = 0; iteration < 1_000_000; iteration++) {
                    // Delete the same rows again, generating tombstones
                    Batch batch = QueryBuilder.batch();
                    batch.setConsistencyLevel(ConsistencyLevel.QUORUM);
                    for (int copy = 0; copy < 100; copy++) {
                        batch.add(QueryBuilder.delete().from("useful")
                                .where(eq("run", run)).and(eq("iteration", iteration)).and(eq("copy", copy)));
                    }
                    test.execute(batch);
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
{code}

It took ~50 minutes before two instances OOME'd.  Please find attached the gc 
log and the system log.  If needed, I can upload a heap dump too.

Hope that helps

> OOME in Cassandra 2.0.11
> 
>
> Key: CASSANDRA-8285
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8285
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 2.0.11 + java-driver 2.0.8-SNAPSHOT
> Cassandra 2.0.11 + ruby-driver 1.0-beta
>Reporter: Pierre Laporte
>Assignee: Aleksey Yeschenko
> Attachments: OOME_node_system.log, gc-1416849312.log.gz, gc.log.gz, 
> heap-usage-after-gc-zoom.png, heap-usage-after-gc.png, system.log.gz
>
>
> We ran drivers 3-days endurance tests against Cassandra 2.0.11 and C* crashed 
> with an OOME.  This happened both with ruby-driver 1.0-beta and java-driver 
> 2.0.8-snapshot.
> Attached are :
> | OOME_node_system.log | The system.log of one Cassandra node that crashed |
> | gc.log.gz | The GC log on the same node |
> | heap-usage-after-gc.png | The heap occupancy evolution after every GC cycle 
> |
> | heap-usage-after-gc-zoom.png | A focus on when things start to go wrong |
> Workload 

[jira] [Commented] (CASSANDRA-8061) tmplink files are not removed

2014-11-24 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14223283#comment-14223283
 ] 

Michael Shuler commented on CASSANDRA-8061:
---

[~JoshuaMcKenzie] my comment was not a repro of the files remaining on disk - I 
was able to monkey with the test to see the deleted files in lsof, but they 
were gone from disk.

> tmplink files are not removed
> -
>
> Key: CASSANDRA-8061
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8061
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Linux
>Reporter: Gianluca Borello
>Assignee: Joshua McKenzie
>Priority: Critical
> Fix For: 2.1.3
>
> Attachments: 8061_v1.txt, 8248-thread_dump.txt
>
>
> After installing 2.1.0, I'm experiencing a bunch of tmplink files that are 
> filling my disk. I found https://issues.apache.org/jira/browse/CASSANDRA-7803 
> and that is very similar, and I confirm it happens both on 2.1.0 as well as 
> from the latest commit on the cassandra-2.1 branch 
> (https://github.com/apache/cassandra/commit/aca80da38c3d86a40cc63d9a122f7d45258e4685
>  from the cassandra-2.1)
> Even starting with a clean keyspace, after a few hours I get:
> {noformat}
> $ sudo find /raid0 | grep tmplink | xargs du -hs
> 2.7G  
> /raid0/cassandra/data/draios/protobuf1-ccc6dce04beb11e4abf997b38fbf920b/draios-protobuf1-tmplink-ka-4515-Data.db
> 13M   
> /raid0/cassandra/data/draios/protobuf1-ccc6dce04beb11e4abf997b38fbf920b/draios-protobuf1-tmplink-ka-4515-Index.db
> 1.8G  
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-1788-Data.db
> 12M   
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-1788-Index.db
> 5.2M  
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-2678-Index.db
> 822M  
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-2678-Data.db
> 7.3M  
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-3283-Index.db
> 1.2G  
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-3283-Data.db
> 6.7M  
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-3951-Index.db
> 1.1G  
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-3951-Data.db
> 11M   
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-4799-Index.db
> 1.7G  
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-4799-Data.db
> 812K  
> /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-234-Index.db
> 122M  
> /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-208-Data.db
> 744K  
> /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-739-Index.db
> 660K  
> /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-193-Index.db
> 796K  
> /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-230-Index.db
> 137M  
> /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-230-Data.db
> 161M  
> /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-269-Data.db
> 139M  
> /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-234-Data.db
> 940K  
> /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-786-Index.db
> 936K  
> /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-269-Index.db
> 161M  
> /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-786-Data.db
> 672K  
> /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-197-Index.db
> 113M  
> /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/dr

[jira] [Updated] (CASSANDRA-8365) CamelCase name is used as index name instead of lowercase

2014-11-24 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8365:
---
Labels: cqlsh  (was: )

> CamelCase name is used as index name instead of lowercase
> -
>
> Key: CASSANDRA-8365
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8365
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Pierre Laporte
>Priority: Minor
>  Labels: cqlsh
>
> In cqlsh, when I execute a CREATE INDEX FooBar ... statement, the CamelCase 
> name is used as index name, even though it is unquoted. Trying to quote the 
> index name results in a syntax error.
> However, when I try to delete the index, I have to quote the index name, 
> otherwise I get an invalid-query error telling me that the index (lowercase) 
> does not exist.
> This seems inconsistent.  Shouldn't the index name be lowercased before the 
> index is created ?
> Here is the code to reproduce the issue :
> {code}
> cqlsh:schemabuilderit> CREATE TABLE IndexTest (a int primary key, b int);
> cqlsh:schemabuilderit> CREATE INDEX FooBar on indextest (b);
> cqlsh:schemabuilderit> DESCRIBE TABLE indextest ;
> CREATE TABLE schemabuilderit.indextest (
> a int PRIMARY KEY,
> b int
> ) ;
> CREATE INDEX FooBar ON schemabuilderit.indextest (b);
> cqlsh:schemabuilderit> DROP INDEX FooBar;
> code=2200 [Invalid query] message="Index 'foobar' could not be found in any 
> of the tables of keyspace 'schemabuilderit'"
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8365) CamelCase name is used as index name instead of lowercase

2014-11-24 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14223214#comment-14223214
 ] 

Philip Thompson commented on CASSANDRA-8365:


What Cassandra version were you running, [~pingtimeout]?

> CamelCase name is used as index name instead of lowercase
> -
>
> Key: CASSANDRA-8365
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8365
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Pierre Laporte
>Priority: Minor
>  Labels: cqlsh
>
> In cqlsh, when I execute a CREATE INDEX FooBar ... statement, the CamelCase 
> name is used as index name, even though it is unquoted. Trying to quote the 
> index name results in a syntax error.
> However, when I try to delete the index, I have to quote the index name, 
> otherwise I get an invalid-query error telling me that the index (lowercase) 
> does not exist.
> This seems inconsistent.  Shouldn't the index name be lowercased before the 
> index is created ?
> Here is the code to reproduce the issue :
> {code}
> cqlsh:schemabuilderit> CREATE TABLE IndexTest (a int primary key, b int);
> cqlsh:schemabuilderit> CREATE INDEX FooBar on indextest (b);
> cqlsh:schemabuilderit> DESCRIBE TABLE indextest ;
> CREATE TABLE schemabuilderit.indextest (
> a int PRIMARY KEY,
> b int
> ) ;
> CREATE INDEX FooBar ON schemabuilderit.indextest (b);
> cqlsh:schemabuilderit> DROP INDEX FooBar;
> code=2200 [Invalid query] message="Index 'foobar' could not be found in any 
> of the tables of keyspace 'schemabuilderit'"
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8370) cqlsh doesn't handle LIST statements correctly

2014-11-24 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-8370:
---
Description: 
{{LIST USERS}} and {{LIST PERMISSIONS}} statements are not handled correctly by 
cqlsh in 2.1 (since CASSANDRA-6307).

Running such a query results in errors along the lines of:

{noformat}
sam@easy:~/projects/cassandra$ bin/cqlsh --debug -u cassandra -p cassandra
Using CQL driver: 
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 2.1.2-SNAPSHOT | CQL spec 3.2.0 | Native protocol v3]
Use HELP for help.
cassandra@cqlsh> list users;
Traceback (most recent call last):
  File "bin/cqlsh", line 879, in onecmd
self.handle_statement(st, statementtext)
  File "bin/cqlsh", line 920, in handle_statement
return self.perform_statement(cqlruleset.cql_extract_orig(tokens, srcstr))
  File "bin/cqlsh", line 953, in perform_statement
result = self.perform_simple_statement(stmt)
  File "bin/cqlsh", line 989, in perform_simple_statement
self.print_result(rows, self.parse_for_table_meta(statement.query_string))
  File "bin/cqlsh", line 970, in parse_for_table_meta
return self.get_table_meta(ks, cf)
  File "bin/cqlsh", line 732, in get_table_meta
ksmeta = self.get_keyspace_meta(ksname)
  File "bin/cqlsh", line 717, in get_keyspace_meta
raise KeyspaceNotFound('Keyspace %r not found.' % ksname)
KeyspaceNotFound: Keyspace None not found.
{noformat}

  was:
{{LIST USERS}} and {{LIST PERMISSIONS}} statements are not handled correctly by 
cqlsh in 2.1 (since CASSANDRA-6307).

Running such a query results in errors along the lines of:

{{noformat}}
sam@easy:~/projects/cassandra$ bin/cqlsh --debug -u cassandra -p cassandra
Using CQL driver: 
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 2.1.2-SNAPSHOT | CQL spec 3.2.0 | Native protocol v3]
Use HELP for help.
cassandra@cqlsh> list users;
Traceback (most recent call last):
  File "bin/cqlsh", line 879, in onecmd
self.handle_statement(st, statementtext)
  File "bin/cqlsh", line 920, in handle_statement
return self.perform_statement(cqlruleset.cql_extract_orig(tokens, srcstr))
  File "bin/cqlsh", line 953, in perform_statement
result = self.perform_simple_statement(stmt)
  File "bin/cqlsh", line 989, in perform_simple_statement
self.print_result(rows, self.parse_for_table_meta(statement.query_string))
  File "bin/cqlsh", line 970, in parse_for_table_meta
return self.get_table_meta(ks, cf)
  File "bin/cqlsh", line 732, in get_table_meta
ksmeta = self.get_keyspace_meta(ksname)
  File "bin/cqlsh", line 717, in get_keyspace_meta
raise KeyspaceNotFound('Keyspace %r not found.' % ksname)
KeyspaceNotFound: Keyspace None not found.
{{noformat}}


> cqlsh doesn't handle LIST statements correctly
> --
>
> Key: CASSANDRA-8370
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8370
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Minor
>  Labels: cqlsh
> Fix For: 2.1.3
>
> Attachments: 8370.txt
>
>
> {{LIST USERS}} and {{LIST PERMISSIONS}} statements are not handled correctly 
> by cqlsh in 2.1 (since CASSANDRA-6307).
> Running such a query results in errors along the lines of:
> {noformat}
> sam@easy:~/projects/cassandra$ bin/cqlsh --debug -u cassandra -p cassandra
> Using CQL driver:  '/home/sam/projects/cassandra/bin/../lib/cassandra-driver-internal-only-2.1.2.zip/cassandra-driver-2.1.2/cassandra/__init__.py'>
> Connected to Test Cluster at 127.0.0.1:9042.
> [cqlsh 5.0.1 | Cassandra 2.1.2-SNAPSHOT | CQL spec 3.2.0 | Native protocol v3]
> Use HELP for help.
> cassandra@cqlsh> list users;
> Traceback (most recent call last):
>   File "bin/cqlsh", line 879, in onecmd
> self.handle_statement(st, statementtext)
>   File "bin/cqlsh", line 920, in handle_statement
> return self.perform_statement(cqlruleset.cql_extract_orig(tokens, srcstr))
>   File "bin/cqlsh", line 953, in perform_statement
> result = self.perform_simple_statement(stmt)
>   File "bin/cqlsh", line 989, in perform_simple_statement
> self.print_result(rows, self.parse_for_table_meta(statement.query_string))
>   File "bin/cqlsh", line 970, in parse_for_table_meta
> return self.get_table_meta(ks, cf)
>   File "bin/cqlsh", line 732, in get_table_meta
> ksmeta = self.get_keyspace_meta(ksname)
>   File "bin/cqlsh", line 717, in get_keyspace_meta
> raise KeyspaceNotFound('Keyspace %r not found.' % ksname)
> KeyspaceNotFound: Keyspace None not found.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8370) cqlsh doesn't handle LIST statements correctly

2014-11-24 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-8370:
---
 Reviewer: Mikhail Stepura
Reproduced In: 2.1.2, 2.1.1, 2.1.0  (was: 2.1.0, 2.1.1, 2.1.2)

> cqlsh doesn't handle LIST statements correctly
> --
>
> Key: CASSANDRA-8370
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8370
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Minor
>  Labels: cqlsh
> Fix For: 2.1.3
>
> Attachments: 8370.txt
>
>
> {{LIST USERS}} and {{LIST PERMISSIONS}} statements are not handled correctly 
> by cqlsh in 2.1 (since CASSANDRA-6307).
> Running such a query results in errors along the lines of:
> {{noformat}}
> sam@easy:~/projects/cassandra$ bin/cqlsh --debug -u cassandra -p cassandra
> Using CQL driver:  '/home/sam/projects/cassandra/bin/../lib/cassandra-driver-internal-only-2.1.2.zip/cassandra-driver-2.1.2/cassandra/__init__.py'>
> Connected to Test Cluster at 127.0.0.1:9042.
> [cqlsh 5.0.1 | Cassandra 2.1.2-SNAPSHOT | CQL spec 3.2.0 | Native protocol v3]
> Use HELP for help.
> cassandra@cqlsh> list users;
> Traceback (most recent call last):
>   File "bin/cqlsh", line 879, in onecmd
> self.handle_statement(st, statementtext)
>   File "bin/cqlsh", line 920, in handle_statement
> return self.perform_statement(cqlruleset.cql_extract_orig(tokens, srcstr))
>   File "bin/cqlsh", line 953, in perform_statement
> result = self.perform_simple_statement(stmt)
>   File "bin/cqlsh", line 989, in perform_simple_statement
> self.print_result(rows, self.parse_for_table_meta(statement.query_string))
>   File "bin/cqlsh", line 970, in parse_for_table_meta
> return self.get_table_meta(ks, cf)
>   File "bin/cqlsh", line 732, in get_table_meta
> ksmeta = self.get_keyspace_meta(ksname)
>   File "bin/cqlsh", line 717, in get_keyspace_meta
> raise KeyspaceNotFound('Keyspace %r not found.' % ksname)
> KeyspaceNotFound: Keyspace None not found.
> {{noformat}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8370) cqlsh doesn't handle LIST statements correctly

2014-11-24 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-8370:
---
Attachment: 8370.txt

Attached a trivial patch & opened dtest PR: 
https://github.com/riptano/cassandra-dtest/pull/120


> cqlsh doesn't handle LIST statements correctly
> --
>
> Key: CASSANDRA-8370
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8370
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sam Tunnicliffe
>Assignee: Sam Tunnicliffe
>Priority: Minor
>  Labels: cqlsh
> Fix For: 2.1.3
>
> Attachments: 8370.txt
>
>
> {{LIST USERS}} and {{LIST PERMISSIONS}} statements are not handled correctly 
> by cqlsh in 2.1 (since CASSANDRA-6307).
> Running such a query results in errors along the lines of:
> {{noformat}}
> sam@easy:~/projects/cassandra$ bin/cqlsh --debug -u cassandra -p cassandra
> Using CQL driver:  '/home/sam/projects/cassandra/bin/../lib/cassandra-driver-internal-only-2.1.2.zip/cassandra-driver-2.1.2/cassandra/__init__.py'>
> Connected to Test Cluster at 127.0.0.1:9042.
> [cqlsh 5.0.1 | Cassandra 2.1.2-SNAPSHOT | CQL spec 3.2.0 | Native protocol v3]
> Use HELP for help.
> cassandra@cqlsh> list users;
> Traceback (most recent call last):
>   File "bin/cqlsh", line 879, in onecmd
> self.handle_statement(st, statementtext)
>   File "bin/cqlsh", line 920, in handle_statement
> return self.perform_statement(cqlruleset.cql_extract_orig(tokens, srcstr))
>   File "bin/cqlsh", line 953, in perform_statement
> result = self.perform_simple_statement(stmt)
>   File "bin/cqlsh", line 989, in perform_simple_statement
> self.print_result(rows, self.parse_for_table_meta(statement.query_string))
>   File "bin/cqlsh", line 970, in parse_for_table_meta
> return self.get_table_meta(ks, cf)
>   File "bin/cqlsh", line 732, in get_table_meta
> ksmeta = self.get_keyspace_meta(ksname)
>   File "bin/cqlsh", line 717, in get_keyspace_meta
> raise KeyspaceNotFound('Keyspace %r not found.' % ksname)
> KeyspaceNotFound: Keyspace None not found.
> {{noformat}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8370) cqlsh doesn't handle LIST statements correctly

2014-11-24 Thread Sam Tunnicliffe (JIRA)
Sam Tunnicliffe created CASSANDRA-8370:
--

 Summary: cqlsh doesn't handle LIST statements correctly
 Key: CASSANDRA-8370
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8370
 Project: Cassandra
  Issue Type: Bug
Reporter: Sam Tunnicliffe
Assignee: Sam Tunnicliffe
Priority: Minor
 Fix For: 2.1.3


{{LIST USERS}} and {{LIST PERMISSIONS}} statements are not handled correctly by 
cqlsh in 2.1 (since CASSANDRA-6307).

Running such a query results in errors along the lines of:

{{noformat}}
sam@easy:~/projects/cassandra$ bin/cqlsh --debug -u cassandra -p cassandra
Using CQL driver: 
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 2.1.2-SNAPSHOT | CQL spec 3.2.0 | Native protocol v3]
Use HELP for help.
cassandra@cqlsh> list users;
Traceback (most recent call last):
  File "bin/cqlsh", line 879, in onecmd
self.handle_statement(st, statementtext)
  File "bin/cqlsh", line 920, in handle_statement
return self.perform_statement(cqlruleset.cql_extract_orig(tokens, srcstr))
  File "bin/cqlsh", line 953, in perform_statement
result = self.perform_simple_statement(stmt)
  File "bin/cqlsh", line 989, in perform_simple_statement
self.print_result(rows, self.parse_for_table_meta(statement.query_string))
  File "bin/cqlsh", line 970, in parse_for_table_meta
return self.get_table_meta(ks, cf)
  File "bin/cqlsh", line 732, in get_table_meta
ksmeta = self.get_keyspace_meta(ksname)
  File "bin/cqlsh", line 717, in get_keyspace_meta
raise KeyspaceNotFound('Keyspace %r not found.' % ksname)
KeyspaceNotFound: Keyspace None not found.
{{noformat}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8168) Require Java 8

2014-11-24 Thread Dave Brosius (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14223159#comment-14223159
 ] 

Dave Brosius commented on CASSANDRA-8168:
-

{quote}
One thing I've learned is java 8 should give us some performance boost due to 
improvements in CAS contention improvements. We use these types extensively on 
the hot path
http://ashkrit.blogspot.com/2014/02/atomicinteger-java-7-vs-java-8.html
{quote}

Yes, but isn't that the JVM doing that? Compiling with 8 shouldn't affect that.

I'm not a big fan of going to 8 at the moment, just because I'm not sure 
the added value outweighs the pain it adds for users.

> Require Java 8
> --
>
> Key: CASSANDRA-8168
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8168
> Project: Cassandra
>  Issue Type: Task
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
> Fix For: 3.0
>
>
> This is to discuss requiring Java 8 for version >= 3.0  
> There are a couple big reasons for this.
> * Better support for complex async work  e.g (CASSANDRA-5239)
> http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html
> * Use Nashorn for Javascript UDFs CASSANDRA-7395
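For readers unfamiliar with the first point, a minimal Java 8 example (unrelated to any Cassandra internals) of the kind of non-blocking async composition CompletableFuture enables:

{code}
import java.util.concurrent.CompletableFuture;

public class AsyncSketch {
    public static void main(String[] args) throws Exception {
        CompletableFuture<Integer> result = CompletableFuture
                .supplyAsync(() -> 21)       // run work on the common pool
                .thenApply(n -> n * 2)       // compose a transformation without blocking
                .exceptionally(t -> -1);     // attach error handling to the chain
        System.out.println(result.get());    // 42
    }
}
{code}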



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7124) Use JMX Notifications to Indicate Success/Failure of Long-Running Operations

2014-11-24 Thread Rajanarayanan Thottuvaikkatumana (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14223153#comment-14223153
 ] 

Rajanarayanan Thottuvaikkatumana commented on CASSANDRA-7124:
-

[~yukim], Once you have a look at the patch attached and if the methodology is 
right, I can start working on the other ones like "compact, decommission, move, 
relocate" etc. Thanks 

> Use JMX Notifications to Indicate Success/Failure of Long-Running Operations
> 
>
> Key: CASSANDRA-7124
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7124
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Tyler Hobbs
>Assignee: Rajanarayanan Thottuvaikkatumana
>Priority: Minor
>  Labels: lhf
> Fix For: 3.0
>
> Attachments: cassandra-trunk-cleanup-7124.txt
>
>
> If {{nodetool cleanup}} or some other long-running operation takes too long 
> to complete, you'll see an error like the one in CASSANDRA-2126, so you can't 
> tell if the operation completed successfully or not.  CASSANDRA-4767 fixed 
> this for repairs with JMX notifications.  We should do something similar for 
> nodetool cleanup, compact, decommission, move, relocate, etc.
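As background on the approach, a minimal sketch of emitting a JMX notification via the standard javax.management API; the class name and notification type string below are illustrative, not Cassandra's actual MBeans.

{code}
import java.util.concurrent.atomic.AtomicLong;
import javax.management.Notification;
import javax.management.NotificationBroadcasterSupport;

public class LongRunningOpNotifier extends NotificationBroadcasterSupport {
    private final AtomicLong sequence = new AtomicLong();

    /** Notify JMX listeners (e.g. nodetool) whether a long-running operation succeeded. */
    public void notifyCompletion(String operation, boolean success) {
        Notification n = new Notification(
                "org.example.operation",            // notification type (illustrative)
                this,                               // source MBean
                sequence.incrementAndGet(),
                operation + (success ? " completed successfully" : " failed"));
        sendNotification(n);
    }
}
{code}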



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8061) tmplink files are not removed

2014-11-24 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-8061:
---
Attachment: 8061_v1.txt

[~enigmacurry] [~mshuler] Have we been able to reproduce this?

From tracing through the code I see a couple of places where it looks like we 
could potentially leak tmplink files in SSTableWriter.openEarly.  We create 
the hard links, grab an sstablereader w/those internally, and then if our 
iwriter.getMaxReadableKey comes back null, immediately return null from the 
method without calling releaseReference on the SSTR.  I've added a 
releaseReference call in there to prevent that.

I've also converted SSTableRewriter to implement AutoCloseable as the current 
pattern of having to manually finish or abort is prone to error, though a 
manual inspection of our usage doesn't uncover any errors.  I've also tightened 
up the finish() method by protecting against duplicate usage, removed the 
redundant / premature optimization of tracking early opened SSTableReaders 
separately from their SSTableWriters (prevented construction of list on abort 
path), and did a little renaming cleanup.
   
While the logic to track tmplink / early opened SSTR's and release references 
to them is complex it appears to be sound from my first inspection - either the 
finish() or abort() path should correctly remove those files.

I'm setting up a long running test with the above schema from Alexander to see 
if I can replicate this locally.

v1 attached with the above changes.  If anyone has a reproduction of this 
problem and could take a spin with this patch, that would be great!
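For readers following along, making the rewriter AutoCloseable lets callers move from the manual finish/abort pattern to try-with-resources. Below is a generic sketch of that pattern with hypothetical names; it is not the actual SSTableRewriter API.

{code}
public class RewriterSketch implements AutoCloseable {
    private boolean finished;

    void append(Object row) { /* write to the new sstable */ }

    void finish() { finished = true; /* promote tmplink files to live sstables */ }

    private void abort() { /* delete tmplink/tmp files, release references */ }

    @Override
    public void close() {
        // If finish() was never reached (exception on the write path),
        // abort and delete the tmplink files instead of leaking them.
        if (!finished) {
            abort();
        }
    }

    public static void main(String[] args) {
        try (RewriterSketch rewriter = new RewriterSketch()) {
            rewriter.append(new Object());
            rewriter.finish(); // normal path
        } // close() guarantees cleanup even if an exception is thrown before finish()
    }
}
{code}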


> tmplink files are not removed
> -
>
> Key: CASSANDRA-8061
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8061
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Linux
>Reporter: Gianluca Borello
>Assignee: Joshua McKenzie
>Priority: Critical
> Fix For: 2.1.3
>
> Attachments: 8061_v1.txt, 8248-thread_dump.txt
>
>
> After installing 2.1.0, I'm experiencing a bunch of tmplink files that are 
> filling my disk. I found https://issues.apache.org/jira/browse/CASSANDRA-7803 
> and that is very similar, and I confirm it happens both on 2.1.0 as well as 
> from the latest commit on the cassandra-2.1 branch 
> (https://github.com/apache/cassandra/commit/aca80da38c3d86a40cc63d9a122f7d45258e4685
>  from the cassandra-2.1)
> Even starting with a clean keyspace, after a few hours I get:
> {noformat}
> $ sudo find /raid0 | grep tmplink | xargs du -hs
> 2.7G  
> /raid0/cassandra/data/draios/protobuf1-ccc6dce04beb11e4abf997b38fbf920b/draios-protobuf1-tmplink-ka-4515-Data.db
> 13M   
> /raid0/cassandra/data/draios/protobuf1-ccc6dce04beb11e4abf997b38fbf920b/draios-protobuf1-tmplink-ka-4515-Index.db
> 1.8G  
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-1788-Data.db
> 12M   
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-1788-Index.db
> 5.2M  
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-2678-Index.db
> 822M  
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-2678-Data.db
> 7.3M  
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-3283-Index.db
> 1.2G  
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-3283-Data.db
> 6.7M  
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-3951-Index.db
> 1.1G  
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-3951-Data.db
> 11M   
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-4799-Index.db
> 1.7G  
> /raid0/cassandra/data/draios/protobuf_by_agent1-cd071a304beb11e4abf997b38fbf920b/draios-protobuf_by_agent1-tmplink-ka-4799-Data.db
> 812K  
> /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-234-Index.db
> 122M  
> /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-208-Data.db
> 744K  
> /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmplink-ka-739-Index.db
> 660K  
> /raid0/cassandra/data/draios/mounted_fs_by_agent1-d7bf3e304beb11e4abf997b38fbf920b/draios-mounted_fs_by_agent1-tmpl

[jira] [Commented] (CASSANDRA-8231) Wrong size of cached prepared statements

2014-11-24 Thread Rajanarayanan Thottuvaikkatumana (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14223140#comment-14223140
 ] 

Rajanarayanan Thottuvaikkatumana commented on CASSANDRA-8231:
-

Hi Benjamin,
Still the problem persists. Here are the steps I have gone through.

Rajanarayanans-MacBook-Pro:lib RajT$ rm jamm-0.3.0.jar
Rajanarayanans-MacBook-Pro:lib RajT$ wget 
http://search.maven.org/remotecontent?filepath=com/github/jbellis/jamm/0.3.0/jamm-0.3.0.jar
--2014-11-24 16:24:19--  
http://search.maven.org/remotecontent?filepath=com/github/jbellis/jamm/0.3.0/jamm-0.3.0.jar
Resolving search.maven.org... 207.223.241.72
Connecting to search.maven.org|207.223.241.72|:80... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: 
https://repo1.maven.org/maven2/com/github/jbellis/jamm/0.3.0/jamm-0.3.0.jar 
[following]
--2014-11-24 16:24:19--  
https://repo1.maven.org/maven2/com/github/jbellis/jamm/0.3.0/jamm-0.3.0.jar
Resolving repo1.maven.org... 185.31.18.209
Connecting to repo1.maven.org|185.31.18.209|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 21033 (21K) [application/java-archive]
Saving to: 
'remotecontent?filepath=com%2Fgithub%2Fjbellis%2Fjamm%2F0.3.0%2Fjamm-0.3.0.jar'

100%[>]
 21,033  --.-K/s   in 0.04s   

2014-11-24 16:24:20 (582 KB/s) - 
'remotecontent?filepath=com%2Fgithub%2Fjbellis%2Fjamm%2F0.3.0%2Fjamm-0.3.0.jar' 
saved [21033/21033]

Rajanarayanans-MacBook-Pro:lib RajT$ mv 
remotecontent?filepath=com%2Fgithub%2Fjbellis%2Fjamm%2F0.3.0%2Fjamm-0.3.0.jar 
jamm-0.3.0.jar
Rajanarayanans-MacBook-Pro:lib RajT$ cd ..
Rajanarayanans-MacBook-Pro:cassandra-trunk RajT$ ant test 
-Dtest.name=CleanupTest
Buildfile: /Users/RajT/cassandra-source/cassandra-trunk/build.xml

init:

maven-ant-tasks-localrepo:

maven-ant-tasks-download:

maven-ant-tasks-init:

maven-declare-dependencies:

maven-ant-tasks-retrieve-build:

init-dependencies:
 [echo] Loading dependency paths from file: 
/Users/RajT/cassandra-source/cassandra-trunk/build/build-dependencies.xml
[unzip] Expanding: 
/Users/RajT/cassandra-source/cassandra-trunk/build/lib/jars/org.jacoco.agent-0.7.1.201405082137.jar
 into /Users/RajT/cassandra-source/cassandra-trunk/build/lib/jars

check-gen-cql3-grammar:

gen-cql3-grammar:

build-project:
 [echo] apache-cassandra: 
/Users/RajT/cassandra-source/cassandra-trunk/build.xml

createVersionPropFile:
[propertyfile] Updating property file: 
/Users/RajT/cassandra-source/cassandra-trunk/src/resources/org/apache/cassandra/config/version.properties
 [copy] Copying 1 file to 
/Users/RajT/cassandra-source/cassandra-trunk/build/classes/main

build:

build-test:

test:

testlist:
 [echo] running test bucket 0 tests
[junit] WARNING: multiple versions of ant detected in path for junit 
[junit]  
jar:file:/usr/local/Cellar/ant/1.9.4/libexec/lib/ant.jar!/org/apache/tools/ant/Project.class
[junit]  and 
jar:file:/Users/RajT/cassandra-source/cassandra-trunk/build/lib/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
[junit] Error occurred during initialization of VM
[junit] agent library failed to init: instrument
[junit] objc[7769]: Class JavaLaunchHelper is implemented in both 
/Library/Java/JavaVirtualMachines/jdk1.7.0_67.jdk/Contents/Home/jre/bin/java 
and 
/Library/Java/JavaVirtualMachines/jdk1.7.0_67.jdk/Contents/Home/jre/lib/libinstrument.dylib.
 One of the two will be used. Which one is undefined.
[junit] Error opening zip file or JAR manifest missing : 
/Users/RajT/cassandra-source/cassandra-trunk/lib/jamm-0.3.0.jar 
[junit] Testsuite: org.apache.cassandra.db.CleanupTest
[junit] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0 
sec
[junit] 
[junit] Testcase: org.apache.cassandra.db.CleanupTest:null: Caused an ERROR
[junit] Forked Java VM exited abnormally. Please note the time in the 
report does not reflect the time until the VM exit.
[junit] junit.framework.AssertionFailedError: Forked Java VM exited 
abnormally. Please note the time in the report does not reflect the time until 
the VM exit.
[junit] at java.lang.Thread.run(Thread.java:745)
[junit] 
[junit] 
[junit] Test org.apache.cassandra.db.CleanupTest FAILED (crashed)
[junitreport] Processing 
/Users/RajT/cassandra-source/cassandra-trunk/build/test/TESTS-TestSuites.xml to 
/var/folders/nf/trtmyt9534z03kq8p8zgbnxhgn/T/null1829125363
[junitreport] Loading stylesheet 
jar:file:/usr/local/Cellar/ant/1.9.4/libexec/lib/ant-junit.jar!/org/apache/tools/ant/taskdefs/optional/junit/xsl/junit-frames.xsl
[junitreport] Transform time: 2046ms
[junitreport] Deleting: 
/var/folders/nf/trtmyt9534z03kq8p8zgbnxhgn/T/null1829125363

BUILD FAILED
/Users/RajT/cassandr

[jira] [Updated] (CASSANDRA-8332) Null pointer after droping keyspace

2014-11-24 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-8332:
--
Fix Version/s: 2.0.12

> Null pointer after droping keyspace
> ---
>
> Key: CASSANDRA-8332
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8332
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Chris Lohfink
>Assignee: T Jake Luciani
>Priority: Minor
> Fix For: 2.0.12, 2.1.3
>
> Attachments: 8332.txt, CassandraStressTest-8332.zip
>
>
> After dropping keyspace, sometimes I see this in logs:
> {code}
> ERROR 03:40:29 Exception in thread Thread[CompactionExecutor:2,1,main]
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.io.compress.CompressionParameters.setLiveMetadata(CompressionParameters.java:108)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.getCompressionMetadata(SSTableReader.java:1142)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.openDataReader(SSTableReader.java:1896)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.sstable.SSTableScanner.(SSTableScanner.java:68) 
> ~[main/:na]
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1681)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1693)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.compaction.LeveledCompactionStrategy.getScanners(LeveledCompactionStrategy.java:181)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.compaction.WrappingCompactionStrategy.getScanners(WrappingCompactionStrategy.java:320)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:340)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:151)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
>  ~[main/:na]
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:75)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:233)
>  ~[main/:na]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_71]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> ~[na:1.7.0_71]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_71]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_71]
>   at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]
> {code}
> Minor issue since it doesn't really affect anything, but the error makes it 
> look like something's wrong.  Seen on 2.1 branch 
> (1b21aef8152d96a180e75f2fcc5afad9ded6c595), not sure how far back (may be 
> post 2.1.2).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8332) Null pointer after droping keyspace

2014-11-24 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-8332:
--
Attachment: 8332.txt

> Null pointer after droping keyspace
> ---
>
> Key: CASSANDRA-8332
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8332
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Chris Lohfink
>Assignee: T Jake Luciani
>Priority: Minor
> Fix For: 2.0.12, 2.1.3
>
> Attachments: 8332.txt, CassandraStressTest-8332.zip
>
>
> After dropping keyspace, sometimes I see this in logs:
> {code}
> ERROR 03:40:29 Exception in thread Thread[CompactionExecutor:2,1,main]
> java.lang.AssertionError: null
>   at 
> org.apache.cassandra.io.compress.CompressionParameters.setLiveMetadata(CompressionParameters.java:108)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.getCompressionMetadata(SSTableReader.java:1142)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.openDataReader(SSTableReader.java:1896)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.sstable.SSTableScanner.(SSTableScanner.java:68) 
> ~[main/:na]
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1681)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1693)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.compaction.LeveledCompactionStrategy.getScanners(LeveledCompactionStrategy.java:181)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.compaction.WrappingCompactionStrategy.getScanners(WrappingCompactionStrategy.java:320)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:340)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:151)
>  ~[main/:na]
>   at 
> org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
>  ~[main/:na]
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[main/:na]
>   at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:75)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[main/:na]
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:233)
>  ~[main/:na]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_71]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> ~[na:1.7.0_71]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_71]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_71]
>   at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]
> {code}
> Minor issue since it doesn't really affect anything, but the error makes it 
> look like something's wrong.  Seen on 2.1 branch 
> (1b21aef8152d96a180e75f2fcc5afad9ded6c595), not sure how far back (may be 
> post 2.1.2).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8285) OOME in Cassandra 2.0.11

2014-11-24 Thread Pierre Laporte (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14223083#comment-14223083
 ] 

Pierre Laporte commented on CASSANDRA-8285:
---

I hit the issue after ~1.5 days on the endurance test of java-driver 2.1.3 
against 2.0.12.

Please find the associated heap dump 
[here|https://drive.google.com/open?id=0BxvGkaXP3ayeOElqY1ZNQTlBNTg&authuser=1]



> OOME in Cassandra 2.0.11
> 
>
> Key: CASSANDRA-8285
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8285
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 2.0.11 + java-driver 2.0.8-SNAPSHOT
> Cassandra 2.0.11 + ruby-driver 1.0-beta
>Reporter: Pierre Laporte
>Assignee: Aleksey Yeschenko
> Attachments: OOME_node_system.log, gc.log.gz, 
> heap-usage-after-gc-zoom.png, heap-usage-after-gc.png
>
>
> We ran drivers 3-days endurance tests against Cassandra 2.0.11 and C* crashed 
> with an OOME.  This happened both with ruby-driver 1.0-beta and java-driver 
> 2.0.8-snapshot.
> Attached are :
> | OOME_node_system.log | The system.log of one Cassandra node that crashed |
> | gc.log.gz | The GC log on the same node |
> | heap-usage-after-gc.png | The heap occupancy evolution after every GC cycle 
> |
> | heap-usage-after-gc-zoom.png | A focus on when things start to go wrong |
> Workload :
> Our test executes 5 CQL statements (select, insert, select, delete, select) 
> for a given unique id, during 3 days, using multiple threads.  There is no 
> change in the workload during the test.
> Symptoms :
> In the attached log, it seems something starts in Cassandra between 
> 2014-11-06 10:29:22 and 2014-11-06 10:45:32.  This causes an allocation that 
> fills the heap.  We eventually get stuck in a Full GC storm and get an OOME 
> in the logs.
> I have run the java-driver tests against Cassandra 1.2.19 and 2.1.1.  The 
> error does not occur.  It seems specific to 2.0.11.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8192) AssertionError in Memory.java

2014-11-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14223070#comment-14223070
 ] 

Andreas Ländle commented on CASSANDRA-8192:
---

In any case, I'm using a 64-bit JVM (JDK 7u60). For now I've tried running 
Cassandra with a 4GB heap size (instead of the 3GB before), and so far I 
haven't been able to reproduce the error.

> AssertionError in Memory.java
> -
>
> Key: CASSANDRA-8192
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8192
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Windows-7-32 bit, 3GB RAM, Java 1.7.0_67
>Reporter: Andreas Schnitzerling
>Assignee: Joshua McKenzie
> Attachments: cassandra.bat, cassandra.yaml, system.log
>
>
> Since update of 1 of 12 nodes from 2.1.0-rel to 2.1.1-rel Exception during 
> start up.
> {panel:title=system.log}
> ERROR [SSTableBatchOpen:1] 2014-10-27 09:44:00,079 CassandraDaemon.java:153 - 
> Exception in thread Thread[SSTableBatchOpen:1,5,main]
> java.lang.AssertionError: null
>   at org.apache.cassandra.io.util.Memory.size(Memory.java:307) 
> ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at 
> org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:135)
>  ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at 
> org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:83)
>  ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at 
> org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:50)
>  ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:48)
>  ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:766) 
> ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:725) 
> ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:402) 
> ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:302) 
> ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at 
> org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:438) 
> ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) 
> ~[na:1.7.0_55]
>   at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0_55]
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
> [na:1.7.0_55]
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
> [na:1.7.0_55]
>   at java.lang.Thread.run(Unknown Source) [na:1.7.0_55]
> {panel}
> In the attached log you can still see as well CASSANDRA-8069 and 
> CASSANDRA-6283.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8168) Require Java 8

2014-11-24 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14223062#comment-14223062
 ] 

T Jake Luciani commented on CASSANDRA-8168:
---

bq.  And that CompletableFuture gives us something that guava's 
ListenableFuture don't give us?

Agreed.  I do think lambdas help make this code more maintainable, but they're 
not required.


One thing I've learned is that Java 8 should give us some performance boost 
due to improvements in CAS contention handling. We use these types extensively 
on the hot path:
http://ashkrit.blogspot.com/2014/02/atomicinteger-java-7-vs-java-8.html
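For context, a minimal JDK-only example of the kind of composition that 
CompletableFuture plus lambdas enables (nothing here is Cassandra code; the 
class and values are made up):
{code}
import java.util.concurrent.CompletableFuture;

public class CompletableFutureSketch
{
    public static void main(String[] args)
    {
        // Compose async stages with lambdas; no explicit listener/callback plumbing needed
        CompletableFuture<Integer> result =
                CompletableFuture.supplyAsync(() -> 21)   // e.g. an async read
                                 .thenApply(x -> x * 2)   // transform the result
                                 .exceptionally(t -> -1); // fall back on failure
        System.out.println(result.join());                // prints 42
    }
}
{code}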



> Require Java 8
> --
>
> Key: CASSANDRA-8168
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8168
> Project: Cassandra
>  Issue Type: Task
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
> Fix For: 3.0
>
>
> This is to discuss requiring Java 8 for version >= 3.0  
> There are a couple big reasons for this.
> * Better support for complex async work  e.g (CASSANDRA-5239)
> http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html
> * Use Nashorn for Javascript UDFs CASSANDRA-7395



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8369) Better error handling in CQLSH for invalid password

2014-11-24 Thread Johnny Miller (JIRA)
Johnny Miller created CASSANDRA-8369:


 Summary: Better error handling in CQLSH for invalid password
 Key: CASSANDRA-8369
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8369
 Project: Cassandra
  Issue Type: Improvement
Reporter: Johnny Miller
Priority: Minor


On C* 2.0.11 / cqlsh 4.1.1, logging in with an invalid password produces a 
stack trace rather than a user-friendly error. It would be better if this were 
handled more gracefully.

For example - this is what you get back now:

root@cass1:~# cqlsh -u cassandra -p johnny
Traceback (most recent call last):
  File "/usr/bin/cqlsh", line 2113, in 
main(*read_options(sys.argv[1:], os.environ))
  File "/usr/bin/cqlsh", line 2093, in main
single_statement=options.execute)
  File "/usr/bin/cqlsh", line 505, in __init__
password=password, cql_version=cqlver, transport=transport)
  File 
"/usr/share/dse/cassandra/lib/cql-internal-only-1.4.1.zip/cql-1.4.1/cql/connection.py",
 line 143, in connect
  File 
"/usr/share/dse/cassandra/lib/cql-internal-only-1.4.1.zip/cql-1.4.1/cql/connection.py",
 line 59, in __init__
  File 
"/usr/share/dse/cassandra/lib/cql-internal-only-1.4.1.zip/cql-1.4.1/cql/thrifteries.py",
 line 157, in establish_connection
  File 
"/usr/share/dse/cassandra/lib/cql-internal-only-1.4.1.zip/cql-1.4.1/cql/cassandra/Cassandra.py",
 line 465, in login
  File 
"/usr/share/dse/cassandra/lib/cql-internal-only-1.4.1.zip/cql-1.4.1/cql/cassandra/Cassandra.py",
 line 486, in recv_login
cql.cassandra.ttypes.AuthenticationException: 
AuthenticationException(why='Username and/or password are incorrect')



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8192) AssertionError in Memory.java

2014-11-24 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14223047#comment-14223047
 ] 

Joshua McKenzie commented on CASSANDRA-8192:


I assume a 32-bit JVM due to the heap size limitation.  Were you able to test 
out the 64-bit environment w/3G heap outside of Upsource?

> AssertionError in Memory.java
> -
>
> Key: CASSANDRA-8192
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8192
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Windows-7-32 bit, 3GB RAM, Java 1.7.0_67
>Reporter: Andreas Schnitzerling
>Assignee: Joshua McKenzie
> Attachments: cassandra.bat, cassandra.yaml, system.log
>
>
> Since update of 1 of 12 nodes from 2.1.0-rel to 2.1.1-rel Exception during 
> start up.
> {panel:title=system.log}
> ERROR [SSTableBatchOpen:1] 2014-10-27 09:44:00,079 CassandraDaemon.java:153 - 
> Exception in thread Thread[SSTableBatchOpen:1,5,main]
> java.lang.AssertionError: null
>   at org.apache.cassandra.io.util.Memory.size(Memory.java:307) 
> ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at 
> org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:135)
>  ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at 
> org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:83)
>  ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at 
> org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:50)
>  ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:48)
>  ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:766) 
> ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:725) 
> ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:402) 
> ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:302) 
> ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at 
> org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:438) 
> ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) 
> ~[na:1.7.0_55]
>   at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0_55]
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
> [na:1.7.0_55]
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
> [na:1.7.0_55]
>   at java.lang.Thread.run(Unknown Source) [na:1.7.0_55]
> {panel}
> In the attached log you can still see as well CASSANDRA-8069 and 
> CASSANDRA-6283.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7874) Validate functionality of different JSR-223 providers in UDFs

2014-11-24 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14223046#comment-14223046
 ] 

Joshua McKenzie commented on CASSANDRA-7874:


+1 from me on the Windows changes

> Validate functionality of different JSR-223 providers in UDFs
> -
>
> Key: CASSANDRA-7874
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7874
> Project: Cassandra
>  Issue Type: Task
>  Components: Core
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>  Labels: udf
> Fix For: 3.0
>
> Attachments: 7874.txt, 7874v2.txt, 7874v3.txt, 7874v4.txt, 
> 7874v5.txt, 7874v6.txt
>
>
> CASSANDRA-7526 introduces ability to support optional JSR-223 providers like 
> Clojure, Jython, Groovy or JRuby.
> This ticket is about to test functionality with these providers but not to 
> include them in C* distribution.
> Expected result is a "how to" document, wiki page or similar.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7949) LCS compaction low performance, many pending compactions, nodes are almost idle

2014-11-24 Thread Nikolai Grigoriev (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14223036#comment-14223036
 ] 

Nikolai Grigoriev commented on CASSANDRA-7949:
--

I have recently realized that there may be a relatively cheap (operationally 
and development-wise) workaround for this limitation. It would also partially 
address the problem of bootstrapping a new node. The root cause of all this is 
a large amount of data in a single CF on a single node when using LCS for that 
CF. The performance of a single compaction task running on a single thread is 
limited anyway. One of the obvious ways to break this limitation is to shard 
the data across multiple "clones" of that CF at the application level - 
something as dumb as taking the row key hash mod X and appending that suffix 
to the CF name. In my case it looks like X=4 would be more than enough to 
solve the problem.
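For illustration, a tiny sketch of that sharding scheme (a hypothetical 
helper, not part of Cassandra or of the application code mentioned above):
{code}
public class CfShardingSketch
{
    // Route each row to one of X "clone" column families based on its key hash.
    static String shardedCfName(String baseCf, String rowKey, int shards)
    {
        int shard = (rowKey.hashCode() & Integer.MAX_VALUE) % shards; // mask keeps the hash non-negative
        return baseCf + "_" + shard;                                  // e.g. table_list1_0 .. table_list1_3 for X=4
    }
}
{code}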

> LCS compaction low performance, many pending compactions, nodes are almost 
> idle
> ---
>
> Key: CASSANDRA-7949
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7949
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: DSE 4.5.1-1, Cassandra 2.0.8
>Reporter: Nikolai Grigoriev
> Attachments: iostats.txt, nodetool_compactionstats.txt, 
> nodetool_tpstats.txt, pending compactions 2day.png, system.log.gz, vmstat.txt
>
>
> I've been evaluating new cluster of 15 nodes (32 core, 6x800Gb SSD disks + 
> 2x600Gb SAS, 128Gb RAM, OEL 6.5) and I've built a simulator that creates the 
> load similar to the load in our future product. Before running the simulator 
> I had to pre-generate enough data. This was done using Java code and DataStax 
> Java driver. To avoid going deep into details, two tables have been 
> generated. Each table currently has about 55M rows and between few dozens and 
> few thousands of columns in each row.
> This data generation process was generating massive amount of non-overlapping 
> data. Thus, the activity was write-only and highly parallel. This is not the 
> type of the traffic that the system will have ultimately to deal with, it 
> will be mix of reads and updates to the existing data in the future. This is 
> just to explain the choice of LCS, not mentioning the expensive SSD disk 
> space.
> At some point while generating the data I have noticed that the compactions 
> started to pile up. I knew that I was overloading the cluster but I still 
> wanted the generation test to complete. I was expecting to give the cluster 
> enough time to finish the pending compactions and get ready for real traffic.
> However, after the storm of write requests have been stopped I have noticed 
> that the number of pending compactions remained constant (and even climbed up 
> a little bit) on all nodes. After trying to tune some parameters (like 
> setting the compaction bandwidth cap to 0) I have noticed a strange pattern: 
> the nodes were compacting one of the CFs in a single stream using virtually 
> no CPU and no disk I/O. This process was taking hours. After that it would be 
> followed by a short burst of few dozens of compactions running in parallel 
> (CPU at 2000%, some disk I/O - up to 10-20%) and then getting stuck again for 
> many hours doing one compaction at time. So it looks like this:
> # nodetool compactionstats
> pending tasks: 3351
>   compaction typekeyspace   table   completed 
>   total  unit  progress
>Compaction  myks table_list1 66499295588   
> 1910515889913 bytes 3.48%
> Active compaction remaining time :n/a
> # df -h
> ...
> /dev/sdb1.5T  637G  854G  43% /cassandra-data/disk1
> /dev/sdc1.5T  425G  1.1T  29% /cassandra-data/disk2
> /dev/sdd1.5T  429G  1.1T  29% /cassandra-data/disk3
> # find . -name **table_list1**Data** | grep -v snapshot | wc -l
> 1310
> Among these files I see:
> 1043 files of 161Mb (my sstable size is 160Mb)
> 9 large files - 3 between 1 and 2Gb, 3 of 5-8Gb, 55Gb, 70Gb and 370Gb
> 263 files of various sizes - between a few dozen Kb and 160Mb
> I've been running the heavy load for about 1,5days and it's been close to 3 
> days after that and the number of pending compactions does not go down.
> I have applied one of the not-so-obvious recommendations to disable 
> multithreaded compactions and that seems to be helping a bit - I see some 
> nodes started to have fewer pending compactions. About half of the cluster, 
> in fact. But even there I see they are sitting idle most of the time lazily 
> compacting in one stream with CPU at ~140% and occasionally doing the bursts 
> of compaction work for few minutes.
> I am wondering if this is really a bug or something in the LCS logic that 
> would manife

[jira] [Commented] (CASSANDRA-8267) Only stream from unrepaired sstables during incremental repair

2014-11-24 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14223034#comment-14223034
 ] 

Marcus Eriksson commented on CASSANDRA-8267:


To solve this we need to tell a node whether or not this is an incremental 
repair when requesting ranges from it. This breaks streaming message 
versioning, meaning we would not be able to stream between two nodes unless 
they were both upgraded, which would suck in a minor release.

One "solution" could be to only break streaming for incremental repairs (when 
they are initiated on an upgraded node) by adding a new 
IncrementalStreamRequest message and failing early if we notice that not all 
endpoints included in the incremental repair are upgraded. This would make 
old-style repairs still work since they don't use the new message (and full 
repairs are the default in 2.1).
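For illustration, a rough sketch of the "fail early" check described above 
(all names are hypothetical and this is not how the streaming code is actually 
structured):
{code}
import java.net.InetAddress;
import java.util.Map;

public class RepairVersionCheck
{
    // Abort the incremental repair up front if any participating endpoint
    // does not understand the new stream request message version.
    static void assertAllUpgraded(Map<InetAddress, Integer> endpointStreamVersions, int requiredVersion)
    {
        for (Map.Entry<InetAddress, Integer> e : endpointStreamVersions.entrySet())
            if (e.getValue() < requiredVersion)
                throw new IllegalStateException("Cannot run incremental repair: " + e.getKey() + " is not upgraded");
    }
}
{code}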

WDYT [~yukim]? Would this be acceptable or do you have a better solution? We 
kind of have to fix this in 2.1 since it makes incremental repairs quite bad.

> Only stream from unrepaired sstables during incremental repair
> --
>
> Key: CASSANDRA-8267
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8267
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 2.1.3
>
>
> Seems we stream from all sstables even if we do incremental repair, we should 
> limit this to only stream from the unrepaired sstables if we do incremental 
> repair



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8368) Consider not using hints for batchlog replay, in any capacity

2014-11-24 Thread Aleksey Yeschenko (JIRA)
Aleksey Yeschenko created CASSANDRA-8368:


 Summary: Consider not using hints for batchlog replay, in any 
capacity
 Key: CASSANDRA-8368
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8368
 Project: Cassandra
  Issue Type: Bug
Reporter: Aleksey Yeschenko
 Fix For: 3.0


Currently, when replaying a batch, if a request times out, we simply write a 
hint for it and call it a day.

It's simple, but it does tie us to hints, which some people prefer to disable 
altogether (and some still will even after CASSANDRA-6230).

It also potentially violates the consistency level of the original request.

As an alternative, once CASSANDRA-7237 is complete, I suggest we stop relying 
on hints at all, and do this instead:

1. Store the consistency level as batch metadata
2. On replay, hint in case of a timeout, but not if the node is down as per the FD
3. If CL is met, consider the batch replayed and discard it, but do not count 
the hints towards CL (as per the usual write path), unless CL.ANY is being used
4. If CL is *not* met, write a new batch with the contents of the current one, 
but with the timeuuid set in the future, for later replay (delayed by a fixed 
configurable time or exponentially backed off). With that new batch, store the 
list of nodes we've delivered hints to, so that next time we replay it we don't 
waste writes. A rough sketch of this decision follows below.
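A rough, self-contained sketch of that discard-or-delay decision, purely for 
illustration (all names are hypothetical, not Cassandra APIs):
{code}
public class BatchReplaySketch
{
    // Hints only count towards the CL when the stored CL is ANY, as per the usual write path.
    static boolean batchCanBeDiscarded(int ackedReplicas, int hintedReplicas, int requiredByCL, boolean clIsAny)
    {
        int counted = clIsAny ? ackedReplicas + hintedReplicas : ackedReplicas;
        return counted >= requiredByCL; // otherwise re-write the batch with a future timeuuid for delayed replay
    }
}
{code}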



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-8367) Clash between Cassandra and Crunch mapreduce config

2014-11-24 Thread Radovan Zvoncek (JIRA)
Radovan Zvoncek created CASSANDRA-8367:
--

 Summary: Clash between Cassandra and Crunch mapreduce config
 Key: CASSANDRA-8367
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8367
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: Radovan Zvoncek
Priority: Minor


We would like to use Cassandra's (Cql)BulkOutputFormats to implement Resource 
IOs for Crunch. We want to do this to allow Crunch users to write the results 
of their jobs directly to Cassandra (thus skipping writing them to the file 
system).

In the process of doing this, we found out that there is a clash in the 
mapreduce job config. The affected config key is 'mapreduce.output.basename'. 
Cassandra uses it [1] for something different than Crunch does [2]. This 
results in some obscure behavior I personally don't understand, but it causes 
the jobs to fail.

We went ahead and re-implemented the output format classes to use a different 
config key, but we'd very much like to stop using them.

[1] 
https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/hadoop/ConfigHelper.java#L54
[2] 
https://github.com/apache/crunch/blob/3f13ee65c9debcf6bd7366607f58beae6c73ffe2/crunch-core/src/main/java/org/apache/crunch/io/CrunchOutputs.java#L99
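To make the clash concrete, a small sketch (assuming hadoop-common on the 
classpath; the values set below are placeholders, not what Cassandra or Crunch 
actually store under the key):
{code}
import org.apache.hadoop.conf.Configuration;

public class BasenameClashSketch
{
    public static void main(String[] args)
    {
        // Both frameworks write to the same key, so whichever sets it last wins.
        Configuration conf = new Configuration();
        conf.set("mapreduce.output.basename", "cassandra-bulk-value");
        conf.set("mapreduce.output.basename", "crunch-value");
        System.out.println(conf.get("mapreduce.output.basename")); // prints crunch-value
    }
}
{code}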




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8192) AssertionError in Memory.java

2014-11-24 Thread Andreas Schnitzerling (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14222995#comment-14222995
 ] 

Andreas Schnitzerling commented on CASSANDRA-8192:
--

I managed to start 2.1.2 with the finalizer patch and -Xms1229M -Xmx1229M. Same 
error. Windows Task Manager shows only 308MB used after that error. So one node 
with a 1GB JVM heap gives no error (my post before), and another identical node 
with a 1.2GB JVM heap throws that error.

> AssertionError in Memory.java
> -
>
> Key: CASSANDRA-8192
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8192
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Windows-7-32 bit, 3GB RAM, Java 1.7.0_67
>Reporter: Andreas Schnitzerling
>Assignee: Joshua McKenzie
> Attachments: cassandra.bat, cassandra.yaml, system.log
>
>
> Since update of 1 of 12 nodes from 2.1.0-rel to 2.1.1-rel Exception during 
> start up.
> {panel:title=system.log}
> ERROR [SSTableBatchOpen:1] 2014-10-27 09:44:00,079 CassandraDaemon.java:153 - 
> Exception in thread Thread[SSTableBatchOpen:1,5,main]
> java.lang.AssertionError: null
>   at org.apache.cassandra.io.util.Memory.size(Memory.java:307) 
> ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at 
> org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:135)
>  ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at 
> org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:83)
>  ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at 
> org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:50)
>  ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:48)
>  ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:766) 
> ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:725) 
> ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:402) 
> ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:302) 
> ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at 
> org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:438) 
> ~[apache-cassandra-2.1.1.jar:2.1.1]
>   at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) 
> ~[na:1.7.0_55]
>   at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0_55]
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
> [na:1.7.0_55]
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
> [na:1.7.0_55]
>   at java.lang.Thread.run(Unknown Source) [na:1.7.0_55]
> {panel}
> In the attached log you can still see as well CASSANDRA-8069 and 
> CASSANDRA-6283.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8231) Wrong size of cached prepared statements

2014-11-24 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14222982#comment-14222982
 ] 

Benjamin Lerer commented on CASSANDRA-8231:
---

Could you try the following: 
1) Delete the jar 
2) Download the jar from 
http://search.maven.org/remotecontent?filepath=com/github/jbellis/jamm/0.3.0/jamm-0.3.0.jar
 and put it in the lib folder
3) Tell me if you still have the error


> Wrong size of cached prepared statements
> 
>
> Key: CASSANDRA-8231
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8231
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jaroslav Kamenik
>Assignee: Benjamin Lerer
> Fix For: 2.1.3
>
> Attachments: 8231-notes.txt, CASSANDRA-8231-V2-trunk.txt, 
> CASSANDRA-8231-V2.txt, CASSANDRA-8231.txt, Unsafes.java
>
>
> Cassandra counts memory footprint of prepared statements for caching 
> purposes. It seems, that there is problem with some statements, ie 
> SelectStatement. Even simple selects is counted as 100KB object, updates, 
> deletes etc have few hundreds or thousands bytes. Result is that cache - 
> QueryProcessor.preparedStatements  - holds just fraction of statements..
> I dig a little into the code, and it seems that problem is in jamm in class 
> MemoryMeter. It seems that if instance contains reference to class, it counts 
> size of whole class too. SelectStatement references EnumSet through 
> ResultSet.Metadata and EnumSet holds reference to Enum class...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/2] cassandra git commit: Ignore Paxos commits for truncated tables

2014-11-24 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 cab2b25b0 -> eac7781e7


Ignore Paxos commits for truncated tables

patch by Sam Tunnicliffe; reviewed by Aleksey Yeschenko for
CASSANDRA-7538


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/17de36f2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/17de36f2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/17de36f2

Branch: refs/heads/cassandra-2.1
Commit: 17de36f246c912287b85eb7015583a35f5040919
Parents: 0e3d9fc
Author: Sam Tunnicliffe 
Authored: Mon Nov 24 16:07:17 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Mon Nov 24 16:07:17 2014 +0300

--
 CHANGES.txt |   1 +
 .../cassandra/service/paxos/PaxosState.java |  17 ++-
 .../cassandra/service/PaxosStateTest.java   | 108 +++
 3 files changed, 122 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/17de36f2/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 412eb59..fe23248 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.12:
+ * Ignore Paxos commits for truncated tables (CASSANDRA-7538)
  * Validate size of indexed column values (CASSANDRA-8280)
  * Make LCS split compaction results over all data directories (CASSANDRA-8329)
  * Fix some failing queries that use multi-column relations

http://git-wip-us.apache.org/repos/asf/cassandra/blob/17de36f2/src/java/org/apache/cassandra/service/paxos/PaxosState.java
--
diff --git a/src/java/org/apache/cassandra/service/paxos/PaxosState.java 
b/src/java/org/apache/cassandra/service/paxos/PaxosState.java
index 0196122..2adecec 100644
--- a/src/java/org/apache/cassandra/service/paxos/PaxosState.java
+++ b/src/java/org/apache/cassandra/service/paxos/PaxosState.java
@@ -31,6 +31,7 @@ import org.apache.cassandra.db.RowMutation;
 import org.apache.cassandra.db.Keyspace;
 import org.apache.cassandra.db.SystemKeyspace;
 import org.apache.cassandra.tracing.Tracing;
+import org.apache.cassandra.utils.UUIDGen;
 
 public class PaxosState
 {
@@ -132,10 +133,18 @@ public class PaxosState
 // Committing it is however always safe due to column timestamps, 
so always do it. However,
 // if our current in-progress ballot is strictly greater than the 
proposal one, we shouldn't
 // erase the in-progress update.
-Tracing.trace("Committing proposal {}", proposal);
-RowMutation rm = proposal.makeMutation();
-Keyspace.open(rm.getKeyspaceName()).apply(rm, true);
-
+// The table may have been truncated since the proposal was 
initiated. In that case, we
+// don't want to perform the mutation and potentially resurrect 
truncated data
+if (UUIDGen.unixTimestamp(proposal.ballot) >= 
SystemKeyspace.getTruncatedAt(proposal.update.metadata().cfId))
+{
+Tracing.trace("Committing proposal {}", proposal);
+RowMutation rm = proposal.makeMutation();
+Keyspace.open(rm.getKeyspaceName()).apply(rm, true);
+}
+else
+{
+Tracing.trace("Not committing proposal {} as ballot timestamp 
predates last truncation time", proposal);
+}
 // We don't need to lock, we're just blindly updating
 SystemKeyspace.savePaxosCommit(proposal);
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/17de36f2/test/unit/org/apache/cassandra/service/PaxosStateTest.java
--
diff --git a/test/unit/org/apache/cassandra/service/PaxosStateTest.java 
b/test/unit/org/apache/cassandra/service/PaxosStateTest.java
new file mode 100644
index 000..306c424
--- /dev/null
+++ b/test/unit/org/apache/cassandra/service/PaxosStateTest.java
@@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing per

[2/3] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-11-24 Thread aleksey
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/service/paxos/PaxosState.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/eac7781e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/eac7781e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/eac7781e

Branch: refs/heads/trunk
Commit: eac7781e7c429ac25b56ddc2ec20bc34f3244df6
Parents: cab2b25 17de36f
Author: Aleksey Yeschenko 
Authored: Mon Nov 24 16:27:56 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Mon Nov 24 16:27:56 2014 +0300

--
 CHANGES.txt |   1 +
 .../cassandra/service/paxos/PaxosState.java |  17 ++-
 .../cassandra/service/PaxosStateTest.java   | 104 +++
 3 files changed, 118 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/eac7781e/CHANGES.txt
--
diff --cc CHANGES.txt
index 9db65e9,fe23248..c9e35d5
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,18 -1,5 +1,19 @@@
 -2.0.12:
 +2.1.3
 + * Fix high size calculations for prepared statements (CASSANDRA-8231)
 + * Centralize shared executors (CASSANDRA-8055)
 + * Fix filtering for CONTAINS (KEY) relations on frozen collection
 +   clustering columns when the query is restricted to a single
 +   partition (CASSANDRA-8203)
 + * Do more aggressive entire-sstable TTL expiry checks (CASSANDRA-8243)
 + * Add more log info if readMeter is null (CASSANDRA-8238)
 + * add check of the system wall clock time at startup (CASSANDRA-8305)
 + * Support for frozen collections (CASSANDRA-7859)
 + * Fix overflow on histogram computation (CASSANDRA-8028)
 + * Have paxos reuse the timestamp generation of normal queries 
(CASSANDRA-7801)
 + * Fix incremental repair not remove parent session on remote (CASSANDRA-8291)
 + * Improve JBOD disk utilization (CASSANDRA-7386)
 +Merged from 2.0:
+  * Ignore Paxos commits for truncated tables (CASSANDRA-7538)
   * Validate size of indexed column values (CASSANDRA-8280)
   * Make LCS split compaction results over all data directories 
(CASSANDRA-8329)
   * Fix some failing queries that use multi-column relations

http://git-wip-us.apache.org/repos/asf/cassandra/blob/eac7781e/src/java/org/apache/cassandra/service/paxos/PaxosState.java
--
diff --cc src/java/org/apache/cassandra/service/paxos/PaxosState.java
index abd173c,2adecec..01e03f4
--- a/src/java/org/apache/cassandra/service/paxos/PaxosState.java
+++ b/src/java/org/apache/cassandra/service/paxos/PaxosState.java
@@@ -18,17 -19,19 +18,18 @@@
   * under the License.
   * 
   */
 -
 +package org.apache.cassandra.service.paxos;
  
  import java.nio.ByteBuffer;
 +import java.util.concurrent.locks.Lock;
  
 -import org.slf4j.Logger;
 -import org.slf4j.LoggerFactory;
 +import com.google.common.util.concurrent.Striped;
  
  import org.apache.cassandra.config.CFMetaData;
 -import org.apache.cassandra.db.RowMutation;
 -import org.apache.cassandra.db.Keyspace;
 -import org.apache.cassandra.db.SystemKeyspace;
 +import org.apache.cassandra.config.DatabaseDescriptor;
 +import org.apache.cassandra.db.*;
  import org.apache.cassandra.tracing.Tracing;
+ import org.apache.cassandra.utils.UUIDGen;
  
  public class PaxosState
  {
@@@ -131,10 -133,18 +132,18 @@@
  // Committing it is however always safe due to column timestamps, 
so always do it. However,
  // if our current in-progress ballot is strictly greater than the 
proposal one, we shouldn't
  // erase the in-progress update.
- Tracing.trace("Committing proposal {}", proposal);
- Mutation mutation = proposal.makeMutation();
- Keyspace.open(mutation.getKeyspaceName()).apply(mutation, true);
- 
+ // The table may have been truncated since the proposal was 
initiated. In that case, we
+ // don't want to perform the mutation and potentially resurrect 
truncated data
+ if (UUIDGen.unixTimestamp(proposal.ballot) >= 
SystemKeyspace.getTruncatedAt(proposal.update.metadata().cfId))
+ {
+ Tracing.trace("Committing proposal {}", proposal);
 -RowMutation rm = proposal.makeMutation();
 -Keyspace.open(rm.getKeyspaceName()).apply(rm, true);
++Mutation mutation = proposal.makeMutation();
++Keyspace.open(mutation.getKeyspaceName()).apply(mutation, 
true);
+ }
+ else
+ {
+ Tracing.trace("Not committing proposal {} as ballot timestamp 
predates last truncation time", proposal);
+ }
  // We don't need to loc

[1/3] cassandra git commit: Ignore Paxos commits for truncated tables

2014-11-24 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk 41435ef6c -> 584113103


Ignore Paxos commits for truncated tables

patch by Sam Tunnicliffe; reviewed by Aleksey Yeschenko for
CASSANDRA-7538


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/17de36f2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/17de36f2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/17de36f2

Branch: refs/heads/trunk
Commit: 17de36f246c912287b85eb7015583a35f5040919
Parents: 0e3d9fc
Author: Sam Tunnicliffe 
Authored: Mon Nov 24 16:07:17 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Mon Nov 24 16:07:17 2014 +0300

--
 CHANGES.txt |   1 +
 .../cassandra/service/paxos/PaxosState.java |  17 ++-
 .../cassandra/service/PaxosStateTest.java   | 108 +++
 3 files changed, 122 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/17de36f2/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 412eb59..fe23248 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.12:
+ * Ignore Paxos commits for truncated tables (CASSANDRA-7538)
  * Validate size of indexed column values (CASSANDRA-8280)
  * Make LCS split compaction results over all data directories (CASSANDRA-8329)
  * Fix some failing queries that use multi-column relations

http://git-wip-us.apache.org/repos/asf/cassandra/blob/17de36f2/src/java/org/apache/cassandra/service/paxos/PaxosState.java
--
diff --git a/src/java/org/apache/cassandra/service/paxos/PaxosState.java 
b/src/java/org/apache/cassandra/service/paxos/PaxosState.java
index 0196122..2adecec 100644
--- a/src/java/org/apache/cassandra/service/paxos/PaxosState.java
+++ b/src/java/org/apache/cassandra/service/paxos/PaxosState.java
@@ -31,6 +31,7 @@ import org.apache.cassandra.db.RowMutation;
 import org.apache.cassandra.db.Keyspace;
 import org.apache.cassandra.db.SystemKeyspace;
 import org.apache.cassandra.tracing.Tracing;
+import org.apache.cassandra.utils.UUIDGen;
 
 public class PaxosState
 {
@@ -132,10 +133,18 @@ public class PaxosState
 // Committing it is however always safe due to column timestamps, 
so always do it. However,
 // if our current in-progress ballot is strictly greater than the 
proposal one, we shouldn't
 // erase the in-progress update.
-Tracing.trace("Committing proposal {}", proposal);
-RowMutation rm = proposal.makeMutation();
-Keyspace.open(rm.getKeyspaceName()).apply(rm, true);
-
+// The table may have been truncated since the proposal was 
initiated. In that case, we
+// don't want to perform the mutation and potentially resurrect 
truncated data
+if (UUIDGen.unixTimestamp(proposal.ballot) >= 
SystemKeyspace.getTruncatedAt(proposal.update.metadata().cfId))
+{
+Tracing.trace("Committing proposal {}", proposal);
+RowMutation rm = proposal.makeMutation();
+Keyspace.open(rm.getKeyspaceName()).apply(rm, true);
+}
+else
+{
+Tracing.trace("Not committing proposal {} as ballot timestamp 
predates last truncation time", proposal);
+}
 // We don't need to lock, we're just blindly updating
 SystemKeyspace.savePaxosCommit(proposal);
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/17de36f2/test/unit/org/apache/cassandra/service/PaxosStateTest.java
--
diff --git a/test/unit/org/apache/cassandra/service/PaxosStateTest.java 
b/test/unit/org/apache/cassandra/service/PaxosStateTest.java
new file mode 100644
index 000..306c424
--- /dev/null
+++ b/test/unit/org/apache/cassandra/service/PaxosStateTest.java
@@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ *

[3/3] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2014-11-24 Thread aleksey
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/58411310
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/58411310
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/58411310

Branch: refs/heads/trunk
Commit: 5841131037155ab76fcfb0e84cacb3b00400830b
Parents: 41435ef eac7781
Author: Aleksey Yeschenko 
Authored: Mon Nov 24 16:28:23 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Mon Nov 24 16:28:51 2014 +0300

--
 CHANGES.txt |   4 +-
 .../cassandra/service/paxos/PaxosState.java |  17 ++-
 .../cassandra/service/PaxosStateTest.java   | 104 +++
 3 files changed, 120 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/58411310/CHANGES.txt
--
diff --cc CHANGES.txt
index 1beb2e2,c9e35d5..af73426
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,38 -1,5 +1,39 @@@
 +3.0
 + * Fix aggregate fn results on empty selection, result column name,
 +   and cqlsh parsing (CASSANDRA-8229)
 + * Mark sstables as repaired after full repair (CASSANDRA-7586)
 + * Extend Descriptor to include a format value and refactor reader/writer 
apis (CASSANDRA-7443)
 + * Integrate JMH for microbenchmarks (CASSANDRA-8151)
 + * Keep sstable levels when bootstrapping (CASSANDRA-7460)
 + * Add Sigar library and perform basic OS settings check on startup 
(CASSANDRA-7838)
 + * Support for aggregation functions (CASSANDRA-4914)
 + * Remove cassandra-cli (CASSANDRA-7920)
 + * Accept dollar quoted strings in CQL (CASSANDRA-7769)
 + * Make assassinate a first class command (CASSANDRA-7935)
 + * Support IN clause on any clustering column (CASSANDRA-4762)
 + * Improve compaction logging (CASSANDRA-7818)
 + * Remove YamlFileNetworkTopologySnitch (CASSANDRA-7917)
 + * Do anticompaction in groups (CASSANDRA-6851)
 + * Support pure user-defined functions (CASSANDRA-7395, 7526, 7562, 7740, 
7781, 7929,
 +   7924, 7812, 8063, 7813)
 + * Permit configurable timestamps with cassandra-stress (CASSANDRA-7416)
 + * Move sstable RandomAccessReader to nio2, which allows using the
 +   FILE_SHARE_DELETE flag on Windows (CASSANDRA-4050)
 + * Remove CQL2 (CASSANDRA-5918)
 + * Add Thrift get_multi_slice call (CASSANDRA-6757)
 + * Optimize fetching multiple cells by name (CASSANDRA-6933)
 + * Allow compilation in java 8 (CASSANDRA-7028)
 + * Make incremental repair default (CASSANDRA-7250)
 + * Enable code coverage thru JaCoCo (CASSANDRA-7226)
 + * Switch external naming of 'column families' to 'tables' (CASSANDRA-4369) 
 + * Shorten SSTable path (CASSANDRA-6962)
 + * Use unsafe mutations for most unit tests (CASSANDRA-6969)
 + * Fix race condition during calculation of pending ranges (CASSANDRA-7390)
 + * Fail on very large batch sizes (CASSANDRA-8011)
-  * improve concurrency of repair (CASSANDRA-6455, 8208)
++ * Improve concurrency of repair (CASSANDRA-6455, 8208)
++
 +
  2.1.3
 - * Fix high size calculations for prepared statements (CASSANDRA-8231)
   * Centralize shared executors (CASSANDRA-8055)
   * Fix filtering for CONTAINS (KEY) relations on frozen collection
 clustering columns when the query is restricted to a single



[2/2] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-11-24 Thread aleksey
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/service/paxos/PaxosState.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/eac7781e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/eac7781e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/eac7781e

Branch: refs/heads/cassandra-2.1
Commit: eac7781e7c429ac25b56ddc2ec20bc34f3244df6
Parents: cab2b25 17de36f
Author: Aleksey Yeschenko 
Authored: Mon Nov 24 16:27:56 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Mon Nov 24 16:27:56 2014 +0300

--
 CHANGES.txt |   1 +
 .../cassandra/service/paxos/PaxosState.java |  17 ++-
 .../cassandra/service/PaxosStateTest.java   | 104 +++
 3 files changed, 118 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/eac7781e/CHANGES.txt
--
diff --cc CHANGES.txt
index 9db65e9,fe23248..c9e35d5
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,18 -1,5 +1,19 @@@
 -2.0.12:
 +2.1.3
 + * Fix high size calculations for prepared statements (CASSANDRA-8231)
 + * Centralize shared executors (CASSANDRA-8055)
 + * Fix filtering for CONTAINS (KEY) relations on frozen collection
 +   clustering columns when the query is restricted to a single
 +   partition (CASSANDRA-8203)
 + * Do more aggressive entire-sstable TTL expiry checks (CASSANDRA-8243)
 + * Add more log info if readMeter is null (CASSANDRA-8238)
 + * add check of the system wall clock time at startup (CASSANDRA-8305)
 + * Support for frozen collections (CASSANDRA-7859)
 + * Fix overflow on histogram computation (CASSANDRA-8028)
 + * Have paxos reuse the timestamp generation of normal queries 
(CASSANDRA-7801)
 + * Fix incremental repair not remove parent session on remote (CASSANDRA-8291)
 + * Improve JBOD disk utilization (CASSANDRA-7386)
 +Merged from 2.0:
+  * Ignore Paxos commits for truncated tables (CASSANDRA-7538)
   * Validate size of indexed column values (CASSANDRA-8280)
   * Make LCS split compaction results over all data directories 
(CASSANDRA-8329)
   * Fix some failing queries that use multi-column relations

http://git-wip-us.apache.org/repos/asf/cassandra/blob/eac7781e/src/java/org/apache/cassandra/service/paxos/PaxosState.java
--
diff --cc src/java/org/apache/cassandra/service/paxos/PaxosState.java
index abd173c,2adecec..01e03f4
--- a/src/java/org/apache/cassandra/service/paxos/PaxosState.java
+++ b/src/java/org/apache/cassandra/service/paxos/PaxosState.java
@@@ -18,17 -19,19 +18,18 @@@
   * under the License.
   * 
   */
 -
 +package org.apache.cassandra.service.paxos;
  
  import java.nio.ByteBuffer;
 +import java.util.concurrent.locks.Lock;
  
 -import org.slf4j.Logger;
 -import org.slf4j.LoggerFactory;
 +import com.google.common.util.concurrent.Striped;
  
  import org.apache.cassandra.config.CFMetaData;
 -import org.apache.cassandra.db.RowMutation;
 -import org.apache.cassandra.db.Keyspace;
 -import org.apache.cassandra.db.SystemKeyspace;
 +import org.apache.cassandra.config.DatabaseDescriptor;
 +import org.apache.cassandra.db.*;
  import org.apache.cassandra.tracing.Tracing;
+ import org.apache.cassandra.utils.UUIDGen;
  
  public class PaxosState
  {
@@@ -131,10 -133,18 +132,18 @@@
  // Committing it is however always safe due to column timestamps, 
so always do it. However,
  // if our current in-progress ballot is strictly greater than the 
proposal one, we shouldn't
  // erase the in-progress update.
- Tracing.trace("Committing proposal {}", proposal);
- Mutation mutation = proposal.makeMutation();
- Keyspace.open(mutation.getKeyspaceName()).apply(mutation, true);
- 
+ // The table may have been truncated since the proposal was 
initiated. In that case, we
+ // don't want to perform the mutation and potentially resurrect 
truncated data
+ if (UUIDGen.unixTimestamp(proposal.ballot) >= 
SystemKeyspace.getTruncatedAt(proposal.update.metadata().cfId))
+ {
+ Tracing.trace("Committing proposal {}", proposal);
 -RowMutation rm = proposal.makeMutation();
 -Keyspace.open(rm.getKeyspaceName()).apply(rm, true);
++Mutation mutation = proposal.makeMutation();
++Keyspace.open(mutation.getKeyspaceName()).apply(mutation, 
true);
+ }
+ else
+ {
+ Tracing.trace("Not committing proposal {} as ballot timestamp 
predates last truncation time", proposal);
+ }
  // We don't nee

[jira] [Assigned] (CASSANDRA-8267) Only stream from unrepaired sstables during incremental repair

2014-11-24 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson reassigned CASSANDRA-8267:
--

Assignee: Marcus Eriksson

> Only stream from unrepaired sstables during incremental repair
> --
>
> Key: CASSANDRA-8267
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8267
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
> Fix For: 2.1.3
>
>
> Seems we stream from all sstables even when we do incremental repair; we should 
> limit this to streaming only from the unrepaired sstables when doing incremental 
> repair.
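> One way to express that restriction (a sketch only, not the committed patch; it assumes 
> 2.1's org.apache.cassandra.io.sstable.SSTableReader, Guava, and that the streaming/validation 
> input is an Iterable of sstables):
> {code}
> import com.google.common.base.Predicate;
> import com.google.common.collect.Iterables;
> import org.apache.cassandra.io.sstable.SSTableReader;
> 
> Iterable<SSTableReader> unrepaired = Iterables.filter(sstables, new Predicate<SSTableReader>()
> {
>     public boolean apply(SSTableReader sstable)
>     {
>         // sstables already marked repaired (repairedAt set) are skipped for incremental repair
>         return !sstable.isRepaired();
>     }
> });
> {code}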



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Ignore Paxos commits for truncated tables

2014-11-24 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 0e3d9fc14 -> 17de36f24


Ignore Paxos commits for truncated tables

patch by Sam Tunnicliffe; reviewed by Aleksey Yeschenko for
CASSANDRA-7538


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/17de36f2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/17de36f2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/17de36f2

Branch: refs/heads/cassandra-2.0
Commit: 17de36f246c912287b85eb7015583a35f5040919
Parents: 0e3d9fc
Author: Sam Tunnicliffe 
Authored: Mon Nov 24 16:07:17 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Mon Nov 24 16:07:17 2014 +0300

--
 CHANGES.txt |   1 +
 .../cassandra/service/paxos/PaxosState.java |  17 ++-
 .../cassandra/service/PaxosStateTest.java   | 108 +++
 3 files changed, 122 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/17de36f2/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 412eb59..fe23248 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.0.12:
+ * Ignore Paxos commits for truncated tables (CASSANDRA-7538)
  * Validate size of indexed column values (CASSANDRA-8280)
  * Make LCS split compaction results over all data directories (CASSANDRA-8329)
  * Fix some failing queries that use multi-column relations

http://git-wip-us.apache.org/repos/asf/cassandra/blob/17de36f2/src/java/org/apache/cassandra/service/paxos/PaxosState.java
--
diff --git a/src/java/org/apache/cassandra/service/paxos/PaxosState.java 
b/src/java/org/apache/cassandra/service/paxos/PaxosState.java
index 0196122..2adecec 100644
--- a/src/java/org/apache/cassandra/service/paxos/PaxosState.java
+++ b/src/java/org/apache/cassandra/service/paxos/PaxosState.java
@@ -31,6 +31,7 @@ import org.apache.cassandra.db.RowMutation;
 import org.apache.cassandra.db.Keyspace;
 import org.apache.cassandra.db.SystemKeyspace;
 import org.apache.cassandra.tracing.Tracing;
+import org.apache.cassandra.utils.UUIDGen;
 
 public class PaxosState
 {
@@ -132,10 +133,18 @@ public class PaxosState
 // Committing it is however always safe due to column timestamps, 
so always do it. However,
 // if our current in-progress ballot is strictly greater than the 
proposal one, we shouldn't
 // erase the in-progress update.
-Tracing.trace("Committing proposal {}", proposal);
-RowMutation rm = proposal.makeMutation();
-Keyspace.open(rm.getKeyspaceName()).apply(rm, true);
-
+// The table may have been truncated since the proposal was 
initiated. In that case, we
+// don't want to perform the mutation and potentially resurrect 
truncated data
+if (UUIDGen.unixTimestamp(proposal.ballot) >= 
SystemKeyspace.getTruncatedAt(proposal.update.metadata().cfId))
+{
+Tracing.trace("Committing proposal {}", proposal);
+RowMutation rm = proposal.makeMutation();
+Keyspace.open(rm.getKeyspaceName()).apply(rm, true);
+}
+else
+{
+Tracing.trace("Not committing proposal {} as ballot timestamp 
predates last truncation time", proposal);
+}
 // We don't need to lock, we're just blindly updating
 SystemKeyspace.savePaxosCommit(proposal);
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/17de36f2/test/unit/org/apache/cassandra/service/PaxosStateTest.java
--
diff --git a/test/unit/org/apache/cassandra/service/PaxosStateTest.java 
b/test/unit/org/apache/cassandra/service/PaxosStateTest.java
new file mode 100644
index 000..306c424
--- /dev/null
+++ b/test/unit/org/apache/cassandra/service/PaxosStateTest.java
@@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing per

[jira] [Commented] (CASSANDRA-8231) Wrong size of cached prepared statements

2014-11-24 Thread Rajanarayanan Thottuvaikkatumana (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14222962#comment-14222962
 ] 

Rajanarayanan Thottuvaikkatumana commented on CASSANDRA-8231:
-

Looks like the issue still persists even after getting the latest… Here is 
my git log, and I see the jamm jar file changes in there.

Rajanarayanans-MacBook-Pro:cassandra-trunk RajT$ git log --oneline
41435ef Merge branch 'cassandra-2.1' into trunk
cab2b25 Merge branch 'cassandra-2.0' into cassandra-2.1
0e3d9fc Validate size of indexed column values
065aeeb Merge branch 'trunk' of 
https://git-wip-us.apache.org/repos/asf/cassandra into trunk
35f173a Merge branch 'cassandra-2.1' of 
https://git-wip-us.apache.org/repos/asf/cassandra into cassandra-2.1
cd4f729 Merge branch 'cassandra-2.1' into trunk
528cc3d Merge branch 'cassandra-2.1' into trunk
6ae1b42 Better jamm 0.3.0 jar


The error message is the same though:
[junit] Error opening zip file or JAR manifest missing : 
/Users/RajT/cassandra-source/cassandra-trunk/lib/jamm-0.3.0.jar 
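(That error is typically the JVM failing to open the jar passed via -javaagent, so it usually 
means the jar at that path is missing, empty or corrupt. A quick sanity check, using the path 
from the message above:
{noformat}
ls -l lib/jamm-0.3.0.jar
unzip -l lib/jamm-0.3.0.jar
{noformat}
If the jar is zero-length or not a valid zip, re-fetching it or doing a clean rebuild should 
restore it.)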

Thanks
-Raj





> Wrong size of cached prepared statements
> 
>
> Key: CASSANDRA-8231
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8231
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jaroslav Kamenik
>Assignee: Benjamin Lerer
> Fix For: 2.1.3
>
> Attachments: 8231-notes.txt, CASSANDRA-8231-V2-trunk.txt, 
> CASSANDRA-8231-V2.txt, CASSANDRA-8231.txt, Unsafes.java
>
>
> Cassandra counts the memory footprint of prepared statements for caching 
> purposes. It seems that there is a problem with some statements, e.g. 
> SelectStatement. Even a simple select is counted as a 100KB object; updates, 
> deletes etc. come to a few hundred or a few thousand bytes. The result is that the 
> cache - QueryProcessor.preparedStatements - holds just a fraction of the statements.
> I dug a little into the code, and it seems that the problem is in jamm, in the class 
> MemoryMeter. It seems that if an instance contains a reference to a class, it counts 
> the size of the whole class too. SelectStatement references an EnumSet through 
> ResultSet.Metadata, and EnumSet holds a reference to the Enum class...
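> A minimal way to see the effect described above (a sketch only; it requires running with 
> -javaagent:lib/jamm-0.3.0.jar, and ConsistencyLevel is just a convenient enum to build an 
> EnumSet from):
> {code}
> import java.util.EnumSet;
> import org.apache.cassandra.db.ConsistencyLevel;
> import org.github.jamm.MemoryMeter;
> 
> public class MeterDemo
> {
>     public static void main(String[] args)
>     {
>         MemoryMeter meter = new MemoryMeter();
>         EnumSet<ConsistencyLevel> set = EnumSet.noneOf(ConsistencyLevel.class);
>         // shallow size of the EnumSet instance itself: a handful of bytes
>         System.out.println(meter.measure(set));
>         // deep size: this is where the huge numbers show up if the walk
>         // follows the Class/Enum references, as described above
>         System.out.println(meter.measureDeep(set));
>     }
> }
> {code}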



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7886) TombstoneOverwhelmingException should not wait for timeout

2014-11-24 Thread Christian Spriegel (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14222955#comment-14222955
 ] 

Christian Spriegel commented on CASSANDRA-7886:
---

Hi [~slebresne], I finally had the time to port my patch to trunk and add error 
handling to the ErrorMessage class.

Thrift and "CQL protocol 3" will get an Unavailable error instead of my new 
READ_FAILURE. CQL protocol >= 4 will get the new READ_FAILURE.

It seems there is no CQL protocol 4 yet, so my code always returns Unavailable 
at the moment.
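
In shorthand, the fallback is just a protocol-version gate, something like this (stand-in 
names, not the exact classes in the patch):
{code}
// Illustrative only: which error a tombstone-overwhelmed read surfaces as,
// depending on the client's native protocol version.
public class ReadErrorMapping
{
    enum ClientError { READ_FAILURE, UNAVAILABLE }

    static ClientError errorFor(int nativeProtocolVersion)
    {
        // Protocol v4+ clients understand the new READ_FAILURE error;
        // Thrift and protocol <= 3 clients fall back to UNAVAILABLE.
        return nativeProtocolVersion >= 4 ? ClientError.READ_FAILURE
                                          : ClientError.UNAVAILABLE;
    }
}
{code}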

Let me know if you want me to improve anything.

> TombstoneOverwhelmingException should not wait for timeout
> --
>
> Key: CASSANDRA-7886
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7886
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
> Environment: Tested with Cassandra 2.0.8
>Reporter: Christian Spriegel
>Assignee: Christian Spriegel
>Priority: Minor
> Fix For: 3.0
>
> Attachments: 7886_v1.txt, 7886_v2_trunk.txt
>
>
> *Issue*
> When you have TombstoneOverwhelmingExceptions occurring in queries, this will 
> cause the query to be simply dropped on every data node, but no response is 
> sent back to the coordinator. Instead the coordinator waits for the specified 
> read_request_timeout_in_ms.
> On the application side this can cause memory issues, since the application 
> is waiting for the timeout interval for every request. Therefore, if our 
> application runs into TombstoneOverwhelmingExceptions, then (sooner or later) 
> our entire application cluster goes down :-(
> *Proposed solution*
> I think the data nodes should send an error message to the coordinator when 
> they run into a TombstoneOverwhelmingException. Then the coordinator does not 
> have to wait for the timeout interval.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-8356) Slice query on a super column family with counters doesn't get all the data

2014-11-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14222950#comment-14222950
 ] 

Nicolas Lalevée edited comment on CASSANDRA-8356 at 11/24/14 12:50 PM:
---

I got the snapshot data from a node onto my local machine, and I tried to load it 
up in a local Cassandra 2.0.11 node.
The node did the "opening" of the files correctly, but querying against it is 
impossible; I hit the following error:
{noformat}
ERROR 11:28:45,693 Exception in thread Thread[ReadStage:2,5,main]
java.lang.RuntimeException: java.lang.IllegalArgumentException
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1981)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalArgumentException
at java.nio.Buffer.limit(Buffer.java:267)
at 
org.apache.cassandra.utils.ByteBufferUtil.readBytes(ByteBufferUtil.java:587)
at 
org.apache.cassandra.utils.ByteBufferUtil.readBytesWithShortLength(ByteBufferUtil.java:596)
at 
org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:61)
at 
org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:1)
at 
org.apache.cassandra.db.RangeTombstoneList.insertFrom(RangeTombstoneList.java:436)
at 
org.apache.cassandra.db.RangeTombstoneList.add(RangeTombstoneList.java:141)
at 
org.apache.cassandra.db.RangeTombstoneList.add(RangeTombstoneList.java:113)
at org.apache.cassandra.db.DeletionInfo.add(DeletionInfo.java:202)
at 
org.apache.cassandra.db.AbstractThreadUnsafeSortedColumns.delete(AbstractThreadUnsafeSortedColumns.java:54)
at org.apache.cassandra.db.ColumnFamily.addAtom(ColumnFamily.java:155)
at 
org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:168)
at 
org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:140)
at 
org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:144)
at 
org.apache.cassandra.utils.MergeIterator$ManyToOne.<init>(MergeIterator.java:87)
at org.apache.cassandra.utils.MergeIterator.get(MergeIterator.java:46)
at 
org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:120)
at 
org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
at 
org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
at 
org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
at 
org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:56)
at 
org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1547)
at 
org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1376)
at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:333)
at 
org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
at 
org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1413)
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1977)
... 3 more
{noformat}

This reminded me of an error we had on our test cluster, when we tested the 
upgrade to 2.0.x : CASSANDRA-6733
So here, I ran upgradesstables on our production cluster, and now the slice 
queries return all the expected data. So everything is back to normal (and I am 
very pleased by the lower CPU activity with 2.0.x for the same load).

I looked again at the logs in prod; I still don't see any such Buffer.limit 
errors. I don't know what was going wrong.

As for CASSANDRA-6733, I have a snapshot of the data taken before running 
upgradesstables (unfortunately I don't have a snapshot from before the version 
upgrade itself, but some sstables are still in the old format). If someone wants 
the data to analyse it, contact me: 
nlalevee at scoop.it.
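
(For reference, that step is just the standard nodetool command run on each node; the 
keyspace/table arguments are optional and shown here as placeholders:
{noformat}
nodetool upgradesstables <keyspace> <table>
{noformat}
Without arguments it rewrites, for every table, any sstable that is not already on the 
current on-disk format.)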



was (Author: hibou):
I got the snapshot data from a node on my local machine, and I tried to load it 
up in a local cassandra node 2.0.11.
The node did the "opening" of the files correctly. But querying against it is 
impossible, I hit the following error:
{noformat}
ERROR 11:28:45,693 Exception in thread Thread[ReadStage:2,5,main]
java.lang.RuntimeException: java.lang.IllegalArgumentException
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1981)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:6

[jira] [Commented] (CASSANDRA-8356) Slice query on a super column family with counters doesn't get all the data

2014-11-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14222950#comment-14222950
 ] 

Nicolas Lalevée commented on CASSANDRA-8356:


I got the snapshot data from a node onto my local machine, and I tried to load it 
up in a local Cassandra 2.0.11 node.
The node did the "opening" of the files correctly, but querying against it is 
impossible; I hit the following error:
{noformat}
ERROR 11:28:45,693 Exception in thread Thread[ReadStage:2,5,main]
java.lang.RuntimeException: java.lang.IllegalArgumentException
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1981)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalArgumentException
at java.nio.Buffer.limit(Buffer.java:267)
at 
org.apache.cassandra.utils.ByteBufferUtil.readBytes(ByteBufferUtil.java:587)
at 
org.apache.cassandra.utils.ByteBufferUtil.readBytesWithShortLength(ByteBufferUtil.java:596)
at 
org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:61)
at 
org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:1)
at 
org.apache.cassandra.db.RangeTombstoneList.insertFrom(RangeTombstoneList.java:436)
at 
org.apache.cassandra.db.RangeTombstoneList.add(RangeTombstoneList.java:141)
at 
org.apache.cassandra.db.RangeTombstoneList.add(RangeTombstoneList.java:113)
at org.apache.cassandra.db.DeletionInfo.add(DeletionInfo.java:202)
at 
org.apache.cassandra.db.AbstractThreadUnsafeSortedColumns.delete(AbstractThreadUnsafeSortedColumns.java:54)
at org.apache.cassandra.db.ColumnFamily.addAtom(ColumnFamily.java:155)
at 
org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:168)
at 
org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:140)
at 
org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:144)
at 
org.apache.cassandra.utils.MergeIterator$ManyToOne.<init>(MergeIterator.java:87)
at org.apache.cassandra.utils.MergeIterator.get(MergeIterator.java:46)
at 
org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:120)
at 
org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
at 
org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
at 
org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
at 
org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:56)
at 
org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1547)
at 
org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1376)
at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:333)
at 
org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
at 
org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1413)
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1977)
... 3 more
{noformat}

This reminded me of an error we had on our test cluster, when we tested the 
upgrade to 2.0.x : CASSANDRA-6733
So here, I ran upgradesstables on our production cluster, and now the slice 
queries return all the expected data. So everything is back to normal (and I am 
very pleased by the lower CPU activity with 2.0.x for the same load).

I looked again at the logs in prod; I still don't see any such Buffer.limit 
errors. I don't know what was going wrong.

As for CASSANDRA-6733, I have a snapshot of the data taken before running 
upgradesstables (unfortunately I don't have a snapshot from before the version 
upgrade itself, but some sstables are still in the old format). If someone wants 
the data to analyse it, contact me: 
nlalevee at scoop.it.


> Slice query on a super column family with counters doesn't get all the data
> ---
>
> Key: CASSANDRA-8356
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8356
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Nicolas Lalevée
>Assignee: Aleksey Yeschenko
> Fix For: 2.0.12
>
>
> We've finally been able to upgrade our cluster to 2.0.11, after 
> CASSANDRA-7188 being fixed.
> But now slice queries on a super column family with counters don't return 
> all the expected data. We first thought, because of all the trouble we had, that 
> we had lost data, but there a

[jira] [Updated] (CASSANDRA-7886) TombstoneOverwhelmingException should not wait for timeout

2014-11-24 Thread Christian Spriegel (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christian Spriegel updated CASSANDRA-7886:
--
Attachment: 7886_v2_trunk.txt

> TombstoneOverwhelmingException should not wait for timeout
> --
>
> Key: CASSANDRA-7886
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7886
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
> Environment: Tested with Cassandra 2.0.8
>Reporter: Christian Spriegel
>Assignee: Christian Spriegel
>Priority: Minor
> Fix For: 3.0
>
> Attachments: 7886_v1.txt, 7886_v2_trunk.txt
>
>
> *Issue*
> When you have TombstoneOverwhelmingExceptions occurring in queries, this will 
> cause the query to be simply dropped on every data node, but no response is 
> sent back to the coordinator. Instead the coordinator waits for the specified 
> read_request_timeout_in_ms.
> On the application side this can cause memory issues, since the application 
> is waiting for the timeout interval for every request. Therefore, if our 
> application runs into TombstoneOverwhelmingExceptions, then (sooner or later) 
> our entire application cluster goes down :-(
> *Proposed solution*
> I think the data nodes should send an error message to the coordinator when 
> they run into a TombstoneOverwhelmingException. Then the coordinator does not 
> have to wait for the timeout interval.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8366) Repair grows data on nodes, causes load to become unbalanced

2014-11-24 Thread Jan Karlsson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Karlsson updated CASSANDRA-8366:

Description: 
There seems to be something weird going on when repairing data.

I have a program that runs for 2 hours, inserting 250 random numbers and performing 
250 reads per second. It creates 2 keyspaces with SimpleStrategy and an RF of 3. 

I use size-tiered compaction for my cluster. 

After those 2 hours I run a repair and the load of all nodes goes up. If I run an 
incremental repair the load goes up a lot more; I have seen the load shoot up to 
8 times the original size multiple times with incremental repair (from 2G to 16G).


With nodes 9, 8, 7 and 6, the repro procedure looked like this (the nodetool commands 
behind each step are summarized after the listing):
(Note that running full repair first is not a requirement to reproduce.)

After 2 hours of 250 reads + 250 writes per second:
UN  9  583.39 MB  256 ?   28220962-26ae-4eeb-8027-99f96e377406  rack1
UN  8  584.01 MB  256 ?   f2de6ea1-de88-4056-8fde-42f9c476a090  rack1
UN  7  583.72 MB  256 ?   2b6b5d66-13c8-43d8-855c-290c0f3c3a0b  rack1
UN  6  583.84 MB  256 ?   b8bd67f1-a816-46ff-b4a4-136ad5af6d4b  rack1

Repair -pr -par on all nodes sequentially
UN  9  746.29 MB  256 ?   28220962-26ae-4eeb-8027-99f96e377406  rack1
UN  8  751.02 MB  256 ?   f2de6ea1-de88-4056-8fde-42f9c476a090  rack1
UN  7  748.89 MB  256 ?   2b6b5d66-13c8-43d8-855c-290c0f3c3a0b  rack1
UN  6  758.34 MB  256 ?   b8bd67f1-a816-46ff-b4a4-136ad5af6d4b  rack1

repair -inc -par on all nodes sequentially
UN  9  2.41 GB256 ?   28220962-26ae-4eeb-8027-99f96e377406  rack1
UN  8  2.53 GB256 ?   f2de6ea1-de88-4056-8fde-42f9c476a090  rack1
UN  7  2.6 GB 256 ?   2b6b5d66-13c8-43d8-855c-290c0f3c3a0b  rack1
UN  6  2.17 GB256 ?   b8bd67f1-a816-46ff-b4a4-136ad5af6d4b  rack1

after rolling restart
UN  9  1.47 GB256 ?   28220962-26ae-4eeb-8027-99f96e377406  rack1
UN  8  1.5 GB 256 ?   f2de6ea1-de88-4056-8fde-42f9c476a090  rack1
UN  7  2.46 GB256 ?   2b6b5d66-13c8-43d8-855c-290c0f3c3a0b  rack1
UN  6  1.19 GB256 ?   b8bd67f1-a816-46ff-b4a4-136ad5af6d4b  rack1

compact all nodes sequentially
UN  9  989.99 MB  256 ?   28220962-26ae-4eeb-8027-99f96e377406  rack1
UN  8  994.75 MB  256 ?   f2de6ea1-de88-4056-8fde-42f9c476a090  rack1
UN  7  1.46 GB256 ?   2b6b5d66-13c8-43d8-855c-290c0f3c3a0b  rack1
UN  6  758.82 MB  256 ?   b8bd67f1-a816-46ff-b4a4-136ad5af6d4b  rack1

repair -inc -par on all nodes sequentially
UN  9  1.98 GB256 ?   28220962-26ae-4eeb-8027-99f96e377406  rack1
UN  8  2.3 GB 256 ?   f2de6ea1-de88-4056-8fde-42f9c476a090  rack1
UN  7  3.71 GB256 ?   2b6b5d66-13c8-43d8-855c-290c0f3c3a0b  rack1
UN  6  1.68 GB256 ?   b8bd67f1-a816-46ff-b4a4-136ad5af6d4b  rack1

restart once more
UN  9  2 GB   256 ?   28220962-26ae-4eeb-8027-99f96e377406  rack1
UN  8  2.05 GB256 ?   f2de6ea1-de88-4056-8fde-42f9c476a090  rack1
UN  7  4.1 GB 256 ?   2b6b5d66-13c8-43d8-855c-290c0f3c3a0b  rack1
UN  6  1.68 GB256 ?   b8bd67f1-a816-46ff-b4a4-136ad5af6d4b  rack1


Is there something I'm missing, or is this strange behavior?
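
For clarity, the nodetool commands behind the steps above (run on every node, one node 
at a time) were along these lines; keyspace/table arguments are omitted:
{noformat}
nodetool repair -pr -par     # full, parallel repair of the primary range
nodetool repair -inc -par    # incremental, parallel repair
nodetool compact             # major compaction
{noformat}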

  was:
There seems to be something weird going on when repairing data.

I have a program that runs 2 hours which inserts 250 random numbers and reads 
250 times per second. It creates 2 keyspaces with SimpleStrategy and RF of 3. 

I use size-tiered compaction for my cluster. 

After those 2 hours I run a repair and the load of all nodes goes up. If I run 
incremental repair the load goes up alot more. I saw the load shoot up 8 times 
the original size multiple times with incremental repair. (from 2G to 16G)


with node 9 8 7 and 7 the repro procedure looked like this:
(Note that running full repair first is not a requirement to reproduce.)

After 2 hours of 250 reads + 250 writes per second:
UN  9  583.39 MB  256 ?   28220962-26ae-4eeb-8027-99f96e377406  rack1
UN  8  584.01 MB  256 ?   f2de6ea1-de88-4056-8fde-42f9c476a090  rack1
UN  7  583.72 MB  256 ?   2b6b5d66-13c8-43d8-855c-290c0f3c3a0b  rack1
UN  6  583.84 MB  256 ?   b8bd67f1-a816-46ff-b4a4-136ad5af6d4b  rack1

Repair -pr -par on all nodes sequentially
UN  9  746.29 MB  256 ?   28220962-26ae-4eeb-8027-99f96e377406  rack1
UN  8  751.02 MB  256 ?   f2de6ea1-de88-4056-8fde-42f9c476a090  rack1
UN  7  748.89 MB  256 ?   2b6b5d66-13c8-43d8-855c-290c0f3c3a0b  rack1
UN  6  758.34 MB  256 ?   b8bd67f1-a816-46ff-b4a4-136ad5af6d4b  rack1

repair -inc -par on all nodes sequentially
UN  9  2.41 GB256 ?   28220962-26ae-4eeb-8027-99f96e377406  rack1
UN  8  2.53 GB256 ?   f2de6ea1-de88-4056-8fde-42f9c476a090  rack1
UN  7  2.6 GB 256 ?   2b6b5d66-13c8-43d8-855c-290c

[4/4] cassandra git commit: Merge branch 'cassandra-2.1' into trunk

2014-11-24 Thread aleksey
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/41435ef6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/41435ef6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/41435ef6

Branch: refs/heads/trunk
Commit: 41435ef6c1fec1cadf6606eb6eb66fe15bd8c46d
Parents: 065aeeb cab2b25
Author: Aleksey Yeschenko 
Authored: Mon Nov 24 15:19:33 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Mon Nov 24 15:19:33 2014 +0300

--
 CHANGES.txt |  1 +
 .../cql3/statements/UpdateStatement.java| 17 +++-
 .../io/sstable/format/big/BigTableWriter.java   |  9 ++
 .../cql3/IndexedValuesValidationTest.java   | 86 
 4 files changed, 112 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/41435ef6/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/41435ef6/src/java/org/apache/cassandra/cql3/statements/UpdateStatement.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/41435ef6/src/java/org/apache/cassandra/io/sstable/format/big/BigTableWriter.java
--
diff --cc 
src/java/org/apache/cassandra/io/sstable/format/big/BigTableWriter.java
index ec53b4e,000..5221509
mode 100644,00..100644
--- a/src/java/org/apache/cassandra/io/sstable/format/big/BigTableWriter.java
+++ b/src/java/org/apache/cassandra/io/sstable/format/big/BigTableWriter.java
@@@ -1,541 -1,0 +1,550 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one
 + * or more contributor license agreements.  See the NOTICE file
 + * distributed with this work for additional information
 + * regarding copyright ownership.  The ASF licenses this file
 + * to you under the Apache License, Version 2.0 (the
 + * "License"); you may not use this file except in compliance
 + * with the License.  You may obtain a copy of the License at
 + *
 + * http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.cassandra.io.sstable.format.big;
 +
 +import java.io.Closeable;
 +import java.io.DataInput;
 +import java.io.File;
 +import java.io.FileOutputStream;
 +import java.io.IOException;
 +import java.nio.ByteBuffer;
 +import java.util.Collections;
 +import java.util.Iterator;
 +import java.util.List;
 +import java.util.Map;
 +import java.util.Set;
 +
 +import org.apache.cassandra.db.*;
 +import org.apache.cassandra.io.sstable.*;
 +import org.apache.cassandra.io.sstable.format.SSTableReader;
 +import org.apache.cassandra.io.sstable.format.SSTableWriter;
 +import org.apache.cassandra.io.sstable.format.Version;
 +import org.apache.cassandra.io.util.*;
 +import org.slf4j.Logger;
 +import org.slf4j.LoggerFactory;
 +
 +import org.apache.cassandra.config.CFMetaData;
 +import org.apache.cassandra.config.DatabaseDescriptor;
 +import org.apache.cassandra.db.compaction.AbstractCompactedRow;
 +import org.apache.cassandra.dht.IPartitioner;
 +import org.apache.cassandra.io.FSWriteError;
 +import org.apache.cassandra.io.compress.CompressedSequentialWriter;
 +import org.apache.cassandra.io.sstable.metadata.MetadataCollector;
 +import org.apache.cassandra.io.sstable.metadata.MetadataComponent;
 +import org.apache.cassandra.io.sstable.metadata.MetadataType;
 +import org.apache.cassandra.io.sstable.metadata.StatsMetadata;
 +import org.apache.cassandra.io.util.DataOutputPlus;
 +import org.apache.cassandra.io.util.DataOutputStreamAndChannel;
 +import org.apache.cassandra.io.util.FileMark;
 +import org.apache.cassandra.io.util.FileUtils;
 +import org.apache.cassandra.io.util.SegmentedFile;
 +import org.apache.cassandra.io.util.SequentialWriter;
 +import org.apache.cassandra.service.ActiveRepairService;
 +import org.apache.cassandra.service.StorageService;
 +import org.apache.cassandra.utils.ByteBufferUtil;
++import org.apache.cassandra.utils.FBUtilities;
 +import org.apache.cassandra.utils.FilterFactory;
 +import org.apache.cassandra.utils.IFilter;
 +import org.apache.cassandra.utils.Pair;
 +import org.apache.cassandra.utils.StreamingHistogram;
 +
 +public class BigTableWriter extends SSTableWriter
 +{
 +private static final Logger logger = 
LoggerFactory.getLogger(BigTableWriter.class);
 +
 +// not very random, but the only value that can'
