[jira] [Commented] (CASSANDRA-6307) Switch cqlsh from cassandra-dbapi2 to python-driver

2014-03-11 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13931386#comment-13931386
 ] 

Mikhail Stepura commented on CASSANDRA-6307:


https://github.com/Mishail/cassandra/compare/apache:cassandra-2.1...CASSANDRA-6307-cqlsh-driver

1) Removed duplicated methods - 
https://github.com/Mishail/cassandra/commit/4ad5e6df0e8d447ca0914670b19f2ae48337e084
2) Switched to the driver's Tracing - 
https://github.com/Mishail/cassandra/commit/393833c54574d6c625ab23f53a35cac0fb80bece
3) Added options for client's certs - 
https://github.com/Mishail/cassandra/commit/d8862cfd96de3bbd41d2c26b368152e667bc307d


> Switch cqlsh from cassandra-dbapi2 to python-driver
> ---
>
> Key: CASSANDRA-6307
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6307
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Aleksey Yeschenko
>Assignee: Mikhail Stepura
>Priority: Minor
> Fix For: 2.1 beta2
>
>
> python-driver is hitting 1.0 soon. cassandra-dbapi2 development has stalled.
> It's time to switch cqlsh to the native protocol and python-driver, especially 
> now that
> 1. Some CQL3 things are not supported by the Thrift transport
> 2. cqlsh no longer has to support CQL2 (dropped in 2.0)





[jira] [Updated] (CASSANDRA-6838) FileCacheService overcounting its memoryUsage

2014-03-11 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6838:
--

 Reviewer: Jonathan Ellis
 Priority: Minor  (was: Major)
Fix Version/s: 2.0.7

> FileCacheService overcounting its memoryUsage
> -
>
> Key: CASSANDRA-6838
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6838
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Benedict
>Assignee: Benedict
>Priority: Minor
>  Labels: performance
> Fix For: 2.0.7, 2.1 beta2
>
> Attachments: 6838.txt
>
>
> On investigating why I was seeing dramatically worse performance for counter 
> updates over prepared CQL3 statements compared to unprepared CQL2 statements, 
> I stumbled upon a bug in FileCacheService wherein, on returning a cached 
> reader back to the pool, its memory is counted again towards the total memory 
> usage, but is not matched by a decrement when checked out. So we effectively 
> are probably not caching readers most of the time.
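
To make the accounting issue above concrete, here is a minimal, self-contained Java sketch (illustrative names such as ReaderPoolSketch and bufferSize(), not the real FileCacheService API): every return to the pool increments memoryUsage, so without the matching decrement on checkout the counter only ever grows and readers stop being cached once the budget is reached.

{code:java}
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicLong;

// Simplified model of a reader pool with a global memory budget.
class ReaderPoolSketch
{
    static class Reader { long bufferSize() { return 65536; } }

    private final ConcurrentLinkedQueue<Reader> pool = new ConcurrentLinkedQueue<>();
    private final AtomicLong memoryUsage = new AtomicLong();
    private final long memoryLimit = 512L * 1024 * 1024;

    Reader checkOut()
    {
        Reader reader = pool.poll();
        if (reader != null)
            memoryUsage.addAndGet(-reader.bufferSize()); // the decrement the patch adds; without it usage is double counted
        return reader;
    }

    void checkIn(Reader reader)
    {
        if (memoryUsage.addAndGet(reader.bufferSize()) > memoryLimit)
        {
            memoryUsage.addAndGet(-reader.bufferSize()); // over budget: drop the reader instead of caching it
            return;
        }
        pool.add(reader);
    }
}
{code}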





[3/3] git commit: merge from 2.0

2014-03-11 Thread jbellis
merge from 2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6e037824
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6e037824
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6e037824

Branch: refs/heads/cassandra-2.1
Commit: 6e0378249a54e8005302aac3e4d4f8b67c6c8f39
Parents: e22d0b1 31cbdfd
Author: Jonathan Ellis 
Authored: Tue Mar 11 23:19:39 2014 -0500
Committer: Jonathan Ellis 
Committed: Tue Mar 11 23:19:39 2014 -0500

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/service/FileCacheService.java | 3 +++
 2 files changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6e037824/CHANGES.txt
--
diff --cc CHANGES.txt
index 06331ad,d8a348d..0442b4e
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,19 -1,10 +1,20 @@@
 -2.0.7
 +2.1.0-beta2
 + * Allow cassandra-stress to set compaction strategy options (CASSANDRA-6451)
 + * Add broadcast_rpc_address option to cassandra.yaml (CASSANDRA-5899)
 + * Auto reload GossipingPropertyFileSnitch config (CASSANDRA-5897)
 + * Fix overflow of memtable_total_space_in_mb (CASSANDRA-6573)
 + * Fix ABTC NPE (CASSANDRA-6692)
 + * Allow nodetool to use a file or prompt for password (CASSANDRA-6660)
 + * Fix AIOOBE when concurrently accessing ABSC (CASSANDRA-6742)
 + * Fix assertion error in ALTER TYPE RENAME (CASSANDRA-6705)
 + * Scrub should not always clear out repaired status (CASSANDRA-5351)
 + * Improve handling of range tombstone for wide partitions (CASSANDRA-6446)
 + * Fix ClassCastException for compact table with composites (CASSANDRA-6738)
 + * Fix potentially repairing with wrong nodes (CASSANDRA-6808)
 +Merged from 2.0:
   * Fix saving triggers to schema (CASSANDRA-6789)
   * Fix trigger mutations when base mutation list is immutable (CASSANDRA-6790)
+  * Fix accounting in FileCacheService to allow re-using RAR (CASSANDRA-6838)
 -
 -
 -2.0.6
   * Avoid race-prone second "scrub" of system keyspace (CASSANDRA-6797)
   * Pool CqlRecordWriter clients by inetaddress rather than Range 
 (CASSANDRA-6665)



[1/3] git commit: Fix accounting in FileCacheService to allow re-using RAR Patch by Benedict Elliott Smith; reviewed by jbellis for CASSANDRA-6838

2014-03-11 Thread jbellis
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 fc9cad90d -> 31cbdfd7b
  refs/heads/cassandra-2.1 e22d0b1b0 -> 6e0378249


Fix accounting in FileCacheService to allow re-using RAR
Patch by Benedict Elliott Smith; reviewed by jbellis for CASSANDRA-6838


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/31cbdfd7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/31cbdfd7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/31cbdfd7

Branch: refs/heads/cassandra-2.0
Commit: 31cbdfd7ba9e7ff2ae5f99f3f0f1a7831cd88147
Parents: fc9cad9
Author: Jonathan Ellis 
Authored: Tue Mar 11 23:18:54 2014 -0500
Committer: Jonathan Ellis 
Committed: Tue Mar 11 23:18:54 2014 -0500

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/service/FileCacheService.java | 3 +++
 2 files changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/31cbdfd7/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 91037d1..d8a348d 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,6 +1,7 @@
 2.0.7
  * Fix saving triggers to schema (CASSANDRA-6789)
  * Fix trigger mutations when base mutation list is immutable (CASSANDRA-6790)
+ * Fix accounting in FileCacheService to allow re-using RAR (CASSANDRA-6838)
 
 
 2.0.6

http://git-wip-us.apache.org/repos/asf/cassandra/blob/31cbdfd7/src/java/org/apache/cassandra/service/FileCacheService.java
--
diff --git a/src/java/org/apache/cassandra/service/FileCacheService.java 
b/src/java/org/apache/cassandra/service/FileCacheService.java
index c939a6f..d22763b 100644
--- a/src/java/org/apache/cassandra/service/FileCacheService.java
+++ b/src/java/org/apache/cassandra/service/FileCacheService.java
@@ -91,7 +91,10 @@ public class FileCacheService
         Queue<RandomAccessReader> instances = getCacheFor(path);
         RandomAccessReader result = instances.poll();
         if (result != null)
+        {
             metrics.hits.mark();
+            memoryUsage.addAndGet(-result.getTotalBufferSize());
+        }
 
         return result;
     }



[2/3] git commit: Fix accounting in FileCacheService to allow re-using RAR Patch by Benedict Elliott Smith; reviewed by jbellis for CASSANDRA-6838

2014-03-11 Thread jbellis
Fix accounting in FileCacheService to allow re-using RAR
Patch by Benedict Elliott Smith; reviewed by jbellis for CASSANDRA-6838


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/31cbdfd7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/31cbdfd7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/31cbdfd7

Branch: refs/heads/cassandra-2.1
Commit: 31cbdfd7ba9e7ff2ae5f99f3f0f1a7831cd88147
Parents: fc9cad9
Author: Jonathan Ellis 
Authored: Tue Mar 11 23:18:54 2014 -0500
Committer: Jonathan Ellis 
Committed: Tue Mar 11 23:18:54 2014 -0500

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/service/FileCacheService.java | 3 +++
 2 files changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/31cbdfd7/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 91037d1..d8a348d 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,6 +1,7 @@
 2.0.7
  * Fix saving triggers to schema (CASSANDRA-6789)
  * Fix trigger mutations when base mutation list is immutable (CASSANDRA-6790)
+ * Fix accounting in FileCacheService to allow re-using RAR (CASSANDRA-6838)
 
 
 2.0.6

http://git-wip-us.apache.org/repos/asf/cassandra/blob/31cbdfd7/src/java/org/apache/cassandra/service/FileCacheService.java
--
diff --git a/src/java/org/apache/cassandra/service/FileCacheService.java 
b/src/java/org/apache/cassandra/service/FileCacheService.java
index c939a6f..d22763b 100644
--- a/src/java/org/apache/cassandra/service/FileCacheService.java
+++ b/src/java/org/apache/cassandra/service/FileCacheService.java
@@ -91,7 +91,10 @@ public class FileCacheService
         Queue<RandomAccessReader> instances = getCacheFor(path);
         RandomAccessReader result = instances.poll();
         if (result != null)
+        {
             metrics.hits.mark();
+            memoryUsage.addAndGet(-result.getTotalBufferSize());
+        }
 
         return result;
     }



[jira] [Resolved] (CASSANDRA-6811) nodetool no longer shows node joining

2014-03-11 Thread Vijay (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay resolved CASSANDRA-6811.
--

Resolution: Fixed

Committed, Thanks!

> nodetool no longer shows node joining
> -
>
> Key: CASSANDRA-6811
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6811
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Brandon Williams
>Assignee: Vijay
>Priority: Minor
> Fix For: 1.2.16
>
> Attachments: 0001-CASSANDRA-6811-v2.patch, ringfix.txt
>
>
> When we added effective ownership output to nodetool ring/status, we 
> accidentally began excluding joining nodes because we iterate the ownership 
> maps instead of the endpoint-to-token map when printing the output, and 
> the joining nodes don't have any ownership.  The simplest thing to do is 
> probably iterate the token map instead, and not output any ownership info for 
> joining nodes.
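
As a rough illustration of the suggested approach (a sketch with simplified types and hypothetical names, not the committed NodeCmd patch, which introduces SetHostStat and getOwnershipByDc as shown in the commits below): iterate the token-to-endpoint map so joining nodes still appear, and print "?" when an endpoint has no effective ownership.

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

public class RingOwnershipSketch
{
    // tokenToEndpoint: token -> endpoint address; ownership: endpoint address -> effective ownership
    static void print(Map<String, String> tokenToEndpoint, Map<String, Float> ownership)
    {
        for (Map.Entry<String, String> entry : tokenToEndpoint.entrySet())
        {
            String endpoint = entry.getValue();
            Float owns = ownership.get(endpoint); // null for joining nodes, which own nothing yet
            System.out.printf("%-22s %-12s %s%n",
                              entry.getKey(),
                              endpoint,
                              owns == null ? "?" : String.format("%.2f%%", owns * 100));
        }
    }

    public static void main(String[] args)
    {
        Map<String, String> tokenToEndpoint = new LinkedHashMap<>();
        tokenToEndpoint.put("-9223372036854775808", "10.0.0.1");
        tokenToEndpoint.put("0", "10.0.0.2"); // joining node
        Map<String, Float> ownership = new LinkedHashMap<>();
        ownership.put("10.0.0.1", 1.0f);      // the joining node has no ownership entry
        print(tokenToEndpoint, ownership);
    }
}
{code}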





[5/5] git commit: Merge branch 'cassandra-2.1' into trunk

2014-03-11 Thread vijay
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5023486f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5023486f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5023486f

Branch: refs/heads/trunk
Commit: 5023486f4c7b6caa2d1b628cc4f702d993553bba
Parents: 2037a8d e22d0b1
Author: Vijay 
Authored: Tue Mar 11 21:14:44 2014 -0700
Committer: Vijay 
Committed: Tue Mar 11 21:14:44 2014 -0700

--
 .../org/apache/cassandra/tools/NodeTool.java| 217 +--
 1 file changed, 102 insertions(+), 115 deletions(-)
--




[4/5] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-03-11 Thread vijay
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
src/java/org/apache/cassandra/tools/NodeCmd.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e22d0b1b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e22d0b1b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e22d0b1b

Branch: refs/heads/trunk
Commit: e22d0b1b0f8d4185ca983bb37fbe805b63409639
Parents: 8e360f8 fc9cad9
Author: Vijay 
Authored: Tue Mar 11 21:13:30 2014 -0700
Committer: Vijay 
Committed: Tue Mar 11 21:13:30 2014 -0700

--
 .../org/apache/cassandra/tools/NodeTool.java| 217 +--
 1 file changed, 102 insertions(+), 115 deletions(-)
--




[1/5] git commit: nodetool no longer shows node joining (Also fix nodetool status) patch by Vijay; reviewed by driftx for CASSANDRA-6811

2014-03-11 Thread vijay
Repository: cassandra
Updated Branches:
  refs/heads/trunk 2037a8d7a -> 5023486f4


nodetool no longer shows node joining (Also fix nodetool status)
patch by Vijay; reviewed by driftx for CASSANDRA-6811


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/91d220b3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/91d220b3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/91d220b3

Branch: refs/heads/trunk
Commit: 91d220b350f512ef283748dfcbcc304bde2f9db2
Parents: dfd28d2
Author: Vijay 
Authored: Tue Mar 11 02:52:45 2014 -0700
Committer: Vijay 
Committed: Tue Mar 11 20:13:03 2014 -0700

--
 .../org/apache/cassandra/tools/NodeCmd.java | 197 +--
 1 file changed, 95 insertions(+), 102 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/91d220b3/src/java/org/apache/cassandra/tools/NodeCmd.java
--
diff --git a/src/java/org/apache/cassandra/tools/NodeCmd.java 
b/src/java/org/apache/cassandra/tools/NodeCmd.java
index 75af915..85afdc1 100644
--- a/src/java/org/apache/cassandra/tools/NodeCmd.java
+++ b/src/java/org/apache/cassandra/tools/NodeCmd.java
@@ -29,8 +29,10 @@ import java.util.Map.Entry;
 import java.util.concurrent.ExecutionException;
 
 import com.google.common.base.Joiner;
+import com.google.common.collect.ArrayListMultimap;
 import com.google.common.collect.LinkedHashMultimap;
 import com.google.common.collect.Maps;
+
 import org.apache.cassandra.config.DatabaseDescriptor;
 import org.apache.cassandra.utils.FBUtilities;
 import org.apache.commons.cli.*;
@@ -38,7 +40,6 @@ import org.yaml.snakeyaml.Loader;
 import org.yaml.snakeyaml.TypeDescription;
 import org.yaml.snakeyaml.Yaml;
 import org.yaml.snakeyaml.constructor.Constructor;
-
 import org.apache.cassandra.concurrent.JMXEnabledThreadPoolExecutorMBean;
 import org.apache.cassandra.db.ColumnFamilyStoreMBean;
 import org.apache.cassandra.db.Table;
@@ -268,16 +269,7 @@ public class NodeCmd
 try
 {
 outs.println();
-Map> perDcOwnerships = 
Maps.newLinkedHashMap();
-// get the different datasets and map to tokens
-for (Map.Entry ownership : 
ownerships.entrySet())
-{
-String dc = 
probe.getEndpointSnitchInfoProxy().getDatacenter(ownership.getKey().getHostAddress());
-if (!perDcOwnerships.containsKey(dc))
-perDcOwnerships.put(dc, new LinkedHashMap());
-perDcOwnerships.get(dc).put(ownership.getKey(), 
ownership.getValue());
-}
-for (Map.Entry> entry : 
perDcOwnerships.entrySet())
+for (Entry entry : getOwnershipByDc(false, 
tokensToEndpoints, ownerships).entrySet())
 printDc(outs, format, entry.getKey(), endpointsToTokens, 
keyspaceSelected, entry.getValue());
 }
 catch (UnknownHostException e)
@@ -293,7 +285,7 @@ public class NodeCmd
 }
 
 private void printDc(PrintStream outs, String format, String dc, 
LinkedHashMultimap endpointsToTokens,
-boolean keyspaceSelected, Map 
filteredOwnerships)
+ boolean keyspaceSelected, SetHostStat hoststats)
 {
 Collection liveNodes = probe.getLiveNodes();
 Collection deadNodes = probe.getUnreachableNodes();
@@ -310,27 +302,27 @@ public class NodeCmd
 float totalReplicas = 0f;
 String lastToken = "";
 
-for (Map.Entry entry : 
filteredOwnerships.entrySet())
+for (HostStat stat : hoststats)
 {
-
tokens.addAll(endpointsToTokens.get(entry.getKey().getHostAddress()));
+tokens.addAll(endpointsToTokens.get(stat.ip));
 lastToken = tokens.get(tokens.size() - 1);
-totalReplicas += entry.getValue();
+if (stat.owns != null)
+totalReplicas += stat.owns;
 }
 
-
 if (keyspaceSelected)
 outs.print("Replicas: " + (int) totalReplicas + "\n\n");
 
 outs.printf(format, "Address", "Rack", "Status", "State", "Load", 
"Owns", "Token");
 
-if (filteredOwnerships.size() > 1)
+if (hoststats.size() > 1)
 outs.printf(format, "", "", "", "", "", "", lastToken);
 else
 outs.println();
 
-for (Map.Entry entry : endpointsToTokens.entries())
+for (HostStat stat : hoststats)
 {
-String endpoint = entry.getKey();
+String endpoint = stat.ip;
 String rack;
 try
 {
@@ -359,18 +351,8 @@ public class NodeCmd
 String load = loadMap.containsKey(endpoint)
 ? loadMap.get(endpoint)
 : "?";
-String owns;
-try

[2/5] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-03-11 Thread vijay
Merge branch 'cassandra-1.2' into cassandra-2.0

Conflicts:
src/java/org/apache/cassandra/tools/NodeCmd.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fc9cad90
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fc9cad90
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fc9cad90

Branch: refs/heads/trunk
Commit: fc9cad90d532a3af89dbbf1b004bfd333a85b33e
Parents: f7eca98 91d220b
Author: Vijay 
Authored: Tue Mar 11 20:32:07 2014 -0700
Committer: Vijay 
Committed: Tue Mar 11 20:32:07 2014 -0700

--
 .../org/apache/cassandra/tools/NodeCmd.java | 194 +--
 1 file changed, 93 insertions(+), 101 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/fc9cad90/src/java/org/apache/cassandra/tools/NodeCmd.java
--
diff --cc src/java/org/apache/cassandra/tools/NodeCmd.java
index 89cfb94,85afdc1..0e7ff2a
--- a/src/java/org/apache/cassandra/tools/NodeCmd.java
+++ b/src/java/org/apache/cassandra/tools/NodeCmd.java
@@@ -27,22 -27,25 +27,23 @@@ import java.text.SimpleDateFormat
  import java.util.*;
  import java.util.Map.Entry;
  import java.util.concurrent.ExecutionException;
 +import javax.management.openmbean.TabularData;
  
  import com.google.common.base.Joiner;
+ import com.google.common.collect.ArrayListMultimap;
  import com.google.common.collect.LinkedHashMultimap;
  import com.google.common.collect.Maps;
+ 
  import org.apache.cassandra.config.DatabaseDescriptor;
  import org.apache.cassandra.utils.FBUtilities;
  import org.apache.commons.cli.*;
 -import org.yaml.snakeyaml.Loader;
 -import org.yaml.snakeyaml.TypeDescription;
  import org.yaml.snakeyaml.Yaml;
  import org.yaml.snakeyaml.constructor.Constructor;
- 
  import org.apache.cassandra.concurrent.JMXEnabledThreadPoolExecutorMBean;
  import org.apache.cassandra.db.ColumnFamilyStoreMBean;
 -import org.apache.cassandra.db.Table;
 +import org.apache.cassandra.db.Keyspace;
  import org.apache.cassandra.db.compaction.CompactionManagerMBean;
  import org.apache.cassandra.db.compaction.OperationType;
 -import org.apache.cassandra.exceptions.ConfigurationException;
  import org.apache.cassandra.io.util.FileUtils;
  import org.apache.cassandra.locator.EndpointSnitchInfoMBean;
  import org.apache.cassandra.net.MessagingServiceMBean;
@@@ -318,18 -299,23 +310,17 @@@ public class NodeCm
  
  // get the total amount of replicas for this dc and the last token in 
this dc's ring
  List tokens = new ArrayList();
 -float totalReplicas = 0f;
  String lastToken = "";
  
- for (Map.Entry entry : 
filteredOwnerships.entrySet())
+ for (HostStat stat : hoststats)
  {
- 
tokens.addAll(endpointsToTokens.get(entry.getKey().getHostAddress()));
+ tokens.addAll(endpointsToTokens.get(stat.ip));
  lastToken = tokens.get(tokens.size() - 1);
 -if (stat.owns != null)
 -totalReplicas += stat.owns;
  }
  
- 
 -if (keyspaceSelected)
 -outs.print("Replicas: " + (int) totalReplicas + "\n\n");
 -
  outs.printf(format, "Address", "Rack", "Status", "State", "Load", 
"Owns", "Token");
  
- if (filteredOwnerships.size() > 1)
+ if (hoststats.size() > 1)
  outs.printf(format, "", "", "", "", "", "", lastToken);
  else
  outs.println();
@@@ -584,7 -508,70 +513,70 @@@
  }
  }
  
+ private Map getOwnershipByDc(boolean resolveIp, 
Map tokenToEndpoint, 
+   Map 
ownerships) throws UnknownHostException
+ {
+ Map ownershipByDc = Maps.newLinkedHashMap();
+ EndpointSnitchInfoMBean epSnitchInfo = 
probe.getEndpointSnitchInfoProxy();
+ 
+ for (Entry tokenAndEndPoint : 
tokenToEndpoint.entrySet())
+ {
+ String dc = 
epSnitchInfo.getDatacenter(tokenAndEndPoint.getValue());
+ if (!ownershipByDc.containsKey(dc))
+ ownershipByDc.put(dc, new SetHostStat(resolveIp));
+ ownershipByDc.get(dc).add(tokenAndEndPoint.getKey(), 
tokenAndEndPoint.getValue(), ownerships);
+ }
+ 
+ return ownershipByDc;
+ }
+ 
+ static class SetHostStat implements Iterable {
+ final List hostStats = new ArrayList();
+ final boolean resolveIp;
+ 
+ public SetHostStat(boolean resolveIp)
+ {
+ this.resolveIp = resolveIp;
+ }
+ 
+ public int size()
+ {
+ return hostStats.size();
+ }
+ 
+ @Override
+ public Iterator iterator() {
+ return hostStats.iterator();
+ }
+ 
+ public void add(String token, Str

[2/4] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-03-11 Thread vijay
Merge branch 'cassandra-1.2' into cassandra-2.0

Conflicts:
src/java/org/apache/cassandra/tools/NodeCmd.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fc9cad90
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fc9cad90
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fc9cad90

Branch: refs/heads/cassandra-2.1
Commit: fc9cad90d532a3af89dbbf1b004bfd333a85b33e
Parents: f7eca98 91d220b
Author: Vijay 
Authored: Tue Mar 11 20:32:07 2014 -0700
Committer: Vijay 
Committed: Tue Mar 11 20:32:07 2014 -0700

--
 .../org/apache/cassandra/tools/NodeCmd.java | 194 +--
 1 file changed, 93 insertions(+), 101 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/fc9cad90/src/java/org/apache/cassandra/tools/NodeCmd.java
--
diff --cc src/java/org/apache/cassandra/tools/NodeCmd.java
index 89cfb94,85afdc1..0e7ff2a
--- a/src/java/org/apache/cassandra/tools/NodeCmd.java
+++ b/src/java/org/apache/cassandra/tools/NodeCmd.java
@@@ -27,22 -27,25 +27,23 @@@ import java.text.SimpleDateFormat
  import java.util.*;
  import java.util.Map.Entry;
  import java.util.concurrent.ExecutionException;
 +import javax.management.openmbean.TabularData;
  
  import com.google.common.base.Joiner;
+ import com.google.common.collect.ArrayListMultimap;
  import com.google.common.collect.LinkedHashMultimap;
  import com.google.common.collect.Maps;
+ 
  import org.apache.cassandra.config.DatabaseDescriptor;
  import org.apache.cassandra.utils.FBUtilities;
  import org.apache.commons.cli.*;
 -import org.yaml.snakeyaml.Loader;
 -import org.yaml.snakeyaml.TypeDescription;
  import org.yaml.snakeyaml.Yaml;
  import org.yaml.snakeyaml.constructor.Constructor;
- 
  import org.apache.cassandra.concurrent.JMXEnabledThreadPoolExecutorMBean;
  import org.apache.cassandra.db.ColumnFamilyStoreMBean;
 -import org.apache.cassandra.db.Table;
 +import org.apache.cassandra.db.Keyspace;
  import org.apache.cassandra.db.compaction.CompactionManagerMBean;
  import org.apache.cassandra.db.compaction.OperationType;
 -import org.apache.cassandra.exceptions.ConfigurationException;
  import org.apache.cassandra.io.util.FileUtils;
  import org.apache.cassandra.locator.EndpointSnitchInfoMBean;
  import org.apache.cassandra.net.MessagingServiceMBean;
@@@ -318,18 -299,23 +310,17 @@@ public class NodeCm
  
  // get the total amount of replicas for this dc and the last token in 
this dc's ring
  List tokens = new ArrayList();
 -float totalReplicas = 0f;
  String lastToken = "";
  
- for (Map.Entry entry : 
filteredOwnerships.entrySet())
+ for (HostStat stat : hoststats)
  {
- 
tokens.addAll(endpointsToTokens.get(entry.getKey().getHostAddress()));
+ tokens.addAll(endpointsToTokens.get(stat.ip));
  lastToken = tokens.get(tokens.size() - 1);
 -if (stat.owns != null)
 -totalReplicas += stat.owns;
  }
  
- 
 -if (keyspaceSelected)
 -outs.print("Replicas: " + (int) totalReplicas + "\n\n");
 -
  outs.printf(format, "Address", "Rack", "Status", "State", "Load", 
"Owns", "Token");
  
- if (filteredOwnerships.size() > 1)
+ if (hoststats.size() > 1)
  outs.printf(format, "", "", "", "", "", "", lastToken);
  else
  outs.println();
@@@ -584,7 -508,70 +513,70 @@@
  }
  }
  
+ private Map getOwnershipByDc(boolean resolveIp, 
Map tokenToEndpoint, 
+   Map 
ownerships) throws UnknownHostException
+ {
+ Map ownershipByDc = Maps.newLinkedHashMap();
+ EndpointSnitchInfoMBean epSnitchInfo = 
probe.getEndpointSnitchInfoProxy();
+ 
+ for (Entry tokenAndEndPoint : 
tokenToEndpoint.entrySet())
+ {
+ String dc = 
epSnitchInfo.getDatacenter(tokenAndEndPoint.getValue());
+ if (!ownershipByDc.containsKey(dc))
+ ownershipByDc.put(dc, new SetHostStat(resolveIp));
+ ownershipByDc.get(dc).add(tokenAndEndPoint.getKey(), 
tokenAndEndPoint.getValue(), ownerships);
+ }
+ 
+ return ownershipByDc;
+ }
+ 
+ static class SetHostStat implements Iterable {
+ final List hostStats = new ArrayList();
+ final boolean resolveIp;
+ 
+ public SetHostStat(boolean resolveIp)
+ {
+ this.resolveIp = resolveIp;
+ }
+ 
+ public int size()
+ {
+ return hostStats.size();
+ }
+ 
+ @Override
+ public Iterator iterator() {
+ return hostStats.iterator();
+ }
+ 
+ public void add(String to

[1/4] git commit: nodetool no longer shows node joining (Also fix nodetool status) patch by Vijay; reviewed by driftx for CASSANDRA-6811

2014-03-11 Thread vijay
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 8e360f80f -> e22d0b1b0


nodetool no longer shows node joining (Also fix nodetool status)
patch by Vijay; reviewed by driftx for CASSANDRA-6811


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/91d220b3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/91d220b3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/91d220b3

Branch: refs/heads/cassandra-2.1
Commit: 91d220b350f512ef283748dfcbcc304bde2f9db2
Parents: dfd28d2
Author: Vijay 
Authored: Tue Mar 11 02:52:45 2014 -0700
Committer: Vijay 
Committed: Tue Mar 11 20:13:03 2014 -0700

--
 .../org/apache/cassandra/tools/NodeCmd.java | 197 +--
 1 file changed, 95 insertions(+), 102 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/91d220b3/src/java/org/apache/cassandra/tools/NodeCmd.java
--
diff --git a/src/java/org/apache/cassandra/tools/NodeCmd.java 
b/src/java/org/apache/cassandra/tools/NodeCmd.java
index 75af915..85afdc1 100644
--- a/src/java/org/apache/cassandra/tools/NodeCmd.java
+++ b/src/java/org/apache/cassandra/tools/NodeCmd.java
@@ -29,8 +29,10 @@ import java.util.Map.Entry;
 import java.util.concurrent.ExecutionException;
 
 import com.google.common.base.Joiner;
+import com.google.common.collect.ArrayListMultimap;
 import com.google.common.collect.LinkedHashMultimap;
 import com.google.common.collect.Maps;
+
 import org.apache.cassandra.config.DatabaseDescriptor;
 import org.apache.cassandra.utils.FBUtilities;
 import org.apache.commons.cli.*;
@@ -38,7 +40,6 @@ import org.yaml.snakeyaml.Loader;
 import org.yaml.snakeyaml.TypeDescription;
 import org.yaml.snakeyaml.Yaml;
 import org.yaml.snakeyaml.constructor.Constructor;
-
 import org.apache.cassandra.concurrent.JMXEnabledThreadPoolExecutorMBean;
 import org.apache.cassandra.db.ColumnFamilyStoreMBean;
 import org.apache.cassandra.db.Table;
@@ -268,16 +269,7 @@ public class NodeCmd
 try
 {
 outs.println();
-Map> perDcOwnerships = 
Maps.newLinkedHashMap();
-// get the different datasets and map to tokens
-for (Map.Entry ownership : 
ownerships.entrySet())
-{
-String dc = 
probe.getEndpointSnitchInfoProxy().getDatacenter(ownership.getKey().getHostAddress());
-if (!perDcOwnerships.containsKey(dc))
-perDcOwnerships.put(dc, new LinkedHashMap());
-perDcOwnerships.get(dc).put(ownership.getKey(), 
ownership.getValue());
-}
-for (Map.Entry> entry : 
perDcOwnerships.entrySet())
+for (Entry entry : getOwnershipByDc(false, 
tokensToEndpoints, ownerships).entrySet())
 printDc(outs, format, entry.getKey(), endpointsToTokens, 
keyspaceSelected, entry.getValue());
 }
 catch (UnknownHostException e)
@@ -293,7 +285,7 @@ public class NodeCmd
 }
 
 private void printDc(PrintStream outs, String format, String dc, 
LinkedHashMultimap endpointsToTokens,
-boolean keyspaceSelected, Map 
filteredOwnerships)
+ boolean keyspaceSelected, SetHostStat hoststats)
 {
 Collection liveNodes = probe.getLiveNodes();
 Collection deadNodes = probe.getUnreachableNodes();
@@ -310,27 +302,27 @@ public class NodeCmd
 float totalReplicas = 0f;
 String lastToken = "";
 
-for (Map.Entry entry : 
filteredOwnerships.entrySet())
+for (HostStat stat : hoststats)
 {
-
tokens.addAll(endpointsToTokens.get(entry.getKey().getHostAddress()));
+tokens.addAll(endpointsToTokens.get(stat.ip));
 lastToken = tokens.get(tokens.size() - 1);
-totalReplicas += entry.getValue();
+if (stat.owns != null)
+totalReplicas += stat.owns;
 }
 
-
 if (keyspaceSelected)
 outs.print("Replicas: " + (int) totalReplicas + "\n\n");
 
 outs.printf(format, "Address", "Rack", "Status", "State", "Load", 
"Owns", "Token");
 
-if (filteredOwnerships.size() > 1)
+if (hoststats.size() > 1)
 outs.printf(format, "", "", "", "", "", "", lastToken);
 else
 outs.println();
 
-for (Map.Entry entry : endpointsToTokens.entries())
+for (HostStat stat : hoststats)
 {
-String endpoint = entry.getKey();
+String endpoint = stat.ip;
 String rack;
 try
 {
@@ -359,18 +351,8 @@ public class NodeCmd
 String load = loadMap.containsKey(endpoint)
 ? loadMap.get(endpoint)
 : "?";
-String owns;

[4/4] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-03-11 Thread vijay
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
src/java/org/apache/cassandra/tools/NodeCmd.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e22d0b1b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e22d0b1b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e22d0b1b

Branch: refs/heads/cassandra-2.1
Commit: e22d0b1b0f8d4185ca983bb37fbe805b63409639
Parents: 8e360f8 fc9cad9
Author: Vijay 
Authored: Tue Mar 11 21:13:30 2014 -0700
Committer: Vijay 
Committed: Tue Mar 11 21:13:30 2014 -0700

--
 .../org/apache/cassandra/tools/NodeTool.java| 217 +--
 1 file changed, 102 insertions(+), 115 deletions(-)
--




[2/2] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-03-11 Thread vijay
Merge branch 'cassandra-1.2' into cassandra-2.0

Conflicts:
src/java/org/apache/cassandra/tools/NodeCmd.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/fc9cad90
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/fc9cad90
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/fc9cad90

Branch: refs/heads/cassandra-2.0
Commit: fc9cad90d532a3af89dbbf1b004bfd333a85b33e
Parents: f7eca98 91d220b
Author: Vijay 
Authored: Tue Mar 11 20:32:07 2014 -0700
Committer: Vijay 
Committed: Tue Mar 11 20:32:07 2014 -0700

--
 .../org/apache/cassandra/tools/NodeCmd.java | 194 +--
 1 file changed, 93 insertions(+), 101 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/fc9cad90/src/java/org/apache/cassandra/tools/NodeCmd.java
--
diff --cc src/java/org/apache/cassandra/tools/NodeCmd.java
index 89cfb94,85afdc1..0e7ff2a
--- a/src/java/org/apache/cassandra/tools/NodeCmd.java
+++ b/src/java/org/apache/cassandra/tools/NodeCmd.java
@@@ -27,22 -27,25 +27,23 @@@ import java.text.SimpleDateFormat
  import java.util.*;
  import java.util.Map.Entry;
  import java.util.concurrent.ExecutionException;
 +import javax.management.openmbean.TabularData;
  
  import com.google.common.base.Joiner;
+ import com.google.common.collect.ArrayListMultimap;
  import com.google.common.collect.LinkedHashMultimap;
  import com.google.common.collect.Maps;
+ 
  import org.apache.cassandra.config.DatabaseDescriptor;
  import org.apache.cassandra.utils.FBUtilities;
  import org.apache.commons.cli.*;
 -import org.yaml.snakeyaml.Loader;
 -import org.yaml.snakeyaml.TypeDescription;
  import org.yaml.snakeyaml.Yaml;
  import org.yaml.snakeyaml.constructor.Constructor;
- 
  import org.apache.cassandra.concurrent.JMXEnabledThreadPoolExecutorMBean;
  import org.apache.cassandra.db.ColumnFamilyStoreMBean;
 -import org.apache.cassandra.db.Table;
 +import org.apache.cassandra.db.Keyspace;
  import org.apache.cassandra.db.compaction.CompactionManagerMBean;
  import org.apache.cassandra.db.compaction.OperationType;
 -import org.apache.cassandra.exceptions.ConfigurationException;
  import org.apache.cassandra.io.util.FileUtils;
  import org.apache.cassandra.locator.EndpointSnitchInfoMBean;
  import org.apache.cassandra.net.MessagingServiceMBean;
@@@ -318,18 -299,23 +310,17 @@@ public class NodeCm
  
  // get the total amount of replicas for this dc and the last token in 
this dc's ring
  List tokens = new ArrayList();
 -float totalReplicas = 0f;
  String lastToken = "";
  
- for (Map.Entry entry : 
filteredOwnerships.entrySet())
+ for (HostStat stat : hoststats)
  {
- 
tokens.addAll(endpointsToTokens.get(entry.getKey().getHostAddress()));
+ tokens.addAll(endpointsToTokens.get(stat.ip));
  lastToken = tokens.get(tokens.size() - 1);
 -if (stat.owns != null)
 -totalReplicas += stat.owns;
  }
  
- 
 -if (keyspaceSelected)
 -outs.print("Replicas: " + (int) totalReplicas + "\n\n");
 -
  outs.printf(format, "Address", "Rack", "Status", "State", "Load", 
"Owns", "Token");
  
- if (filteredOwnerships.size() > 1)
+ if (hoststats.size() > 1)
  outs.printf(format, "", "", "", "", "", "", lastToken);
  else
  outs.println();
@@@ -584,7 -508,70 +513,70 @@@
  }
  }
  
+ private Map getOwnershipByDc(boolean resolveIp, 
Map tokenToEndpoint, 
+   Map 
ownerships) throws UnknownHostException
+ {
+ Map ownershipByDc = Maps.newLinkedHashMap();
+ EndpointSnitchInfoMBean epSnitchInfo = 
probe.getEndpointSnitchInfoProxy();
+ 
+ for (Entry tokenAndEndPoint : 
tokenToEndpoint.entrySet())
+ {
+ String dc = 
epSnitchInfo.getDatacenter(tokenAndEndPoint.getValue());
+ if (!ownershipByDc.containsKey(dc))
+ ownershipByDc.put(dc, new SetHostStat(resolveIp));
+ ownershipByDc.get(dc).add(tokenAndEndPoint.getKey(), 
tokenAndEndPoint.getValue(), ownerships);
+ }
+ 
+ return ownershipByDc;
+ }
+ 
+ static class SetHostStat implements Iterable {
+ final List hostStats = new ArrayList();
+ final boolean resolveIp;
+ 
+ public SetHostStat(boolean resolveIp)
+ {
+ this.resolveIp = resolveIp;
+ }
+ 
+ public int size()
+ {
+ return hostStats.size();
+ }
+ 
+ @Override
+ public Iterator iterator() {
+ return hostStats.iterator();
+ }
+ 
+ public void add(String to

[1/2] git commit: nodetool no longer shows node joining (Also fix nodetool status) patch by Vijay; reviewed by driftx for CASSANDRA-6811

2014-03-11 Thread vijay
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 f7eca98a7 -> fc9cad90d


nodetool no longer shows node joining (Also fix nodetool status)
patch by Vijay; reviewed by driftx for CASSANDRA-6811


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/91d220b3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/91d220b3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/91d220b3

Branch: refs/heads/cassandra-2.0
Commit: 91d220b350f512ef283748dfcbcc304bde2f9db2
Parents: dfd28d2
Author: Vijay 
Authored: Tue Mar 11 02:52:45 2014 -0700
Committer: Vijay 
Committed: Tue Mar 11 20:13:03 2014 -0700

--
 .../org/apache/cassandra/tools/NodeCmd.java | 197 +--
 1 file changed, 95 insertions(+), 102 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/91d220b3/src/java/org/apache/cassandra/tools/NodeCmd.java
--
diff --git a/src/java/org/apache/cassandra/tools/NodeCmd.java 
b/src/java/org/apache/cassandra/tools/NodeCmd.java
index 75af915..85afdc1 100644
--- a/src/java/org/apache/cassandra/tools/NodeCmd.java
+++ b/src/java/org/apache/cassandra/tools/NodeCmd.java
@@ -29,8 +29,10 @@ import java.util.Map.Entry;
 import java.util.concurrent.ExecutionException;
 
 import com.google.common.base.Joiner;
+import com.google.common.collect.ArrayListMultimap;
 import com.google.common.collect.LinkedHashMultimap;
 import com.google.common.collect.Maps;
+
 import org.apache.cassandra.config.DatabaseDescriptor;
 import org.apache.cassandra.utils.FBUtilities;
 import org.apache.commons.cli.*;
@@ -38,7 +40,6 @@ import org.yaml.snakeyaml.Loader;
 import org.yaml.snakeyaml.TypeDescription;
 import org.yaml.snakeyaml.Yaml;
 import org.yaml.snakeyaml.constructor.Constructor;
-
 import org.apache.cassandra.concurrent.JMXEnabledThreadPoolExecutorMBean;
 import org.apache.cassandra.db.ColumnFamilyStoreMBean;
 import org.apache.cassandra.db.Table;
@@ -268,16 +269,7 @@ public class NodeCmd
 try
 {
 outs.println();
-Map> perDcOwnerships = 
Maps.newLinkedHashMap();
-// get the different datasets and map to tokens
-for (Map.Entry ownership : 
ownerships.entrySet())
-{
-String dc = 
probe.getEndpointSnitchInfoProxy().getDatacenter(ownership.getKey().getHostAddress());
-if (!perDcOwnerships.containsKey(dc))
-perDcOwnerships.put(dc, new LinkedHashMap());
-perDcOwnerships.get(dc).put(ownership.getKey(), 
ownership.getValue());
-}
-for (Map.Entry> entry : 
perDcOwnerships.entrySet())
+for (Entry entry : getOwnershipByDc(false, 
tokensToEndpoints, ownerships).entrySet())
 printDc(outs, format, entry.getKey(), endpointsToTokens, 
keyspaceSelected, entry.getValue());
 }
 catch (UnknownHostException e)
@@ -293,7 +285,7 @@ public class NodeCmd
 }
 
 private void printDc(PrintStream outs, String format, String dc, 
LinkedHashMultimap endpointsToTokens,
-boolean keyspaceSelected, Map 
filteredOwnerships)
+ boolean keyspaceSelected, SetHostStat hoststats)
 {
 Collection liveNodes = probe.getLiveNodes();
 Collection deadNodes = probe.getUnreachableNodes();
@@ -310,27 +302,27 @@ public class NodeCmd
 float totalReplicas = 0f;
 String lastToken = "";
 
-for (Map.Entry entry : 
filteredOwnerships.entrySet())
+for (HostStat stat : hoststats)
 {
-
tokens.addAll(endpointsToTokens.get(entry.getKey().getHostAddress()));
+tokens.addAll(endpointsToTokens.get(stat.ip));
 lastToken = tokens.get(tokens.size() - 1);
-totalReplicas += entry.getValue();
+if (stat.owns != null)
+totalReplicas += stat.owns;
 }
 
-
 if (keyspaceSelected)
 outs.print("Replicas: " + (int) totalReplicas + "\n\n");
 
 outs.printf(format, "Address", "Rack", "Status", "State", "Load", 
"Owns", "Token");
 
-if (filteredOwnerships.size() > 1)
+if (hoststats.size() > 1)
 outs.printf(format, "", "", "", "", "", "", lastToken);
 else
 outs.println();
 
-for (Map.Entry entry : endpointsToTokens.entries())
+for (HostStat stat : hoststats)
 {
-String endpoint = entry.getKey();
+String endpoint = stat.ip;
 String rack;
 try
 {
@@ -359,18 +351,8 @@ public class NodeCmd
 String load = loadMap.containsKey(endpoint)
 ? loadMap.get(endpoint)
 : "?";
-String owns;

git commit: nodetool no longer shows node joining (Also fix nodetool status) patch by Vijay; reviewed by driftx for CASSANDRA-6811

2014-03-11 Thread vijay
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-1.2 dfd28d226 -> 91d220b35


nodetool no longer shows node joining (Also fix nodetool status)
patch by Vijay; reviewed by driftx for CASSANDRA-6811


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/91d220b3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/91d220b3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/91d220b3

Branch: refs/heads/cassandra-1.2
Commit: 91d220b350f512ef283748dfcbcc304bde2f9db2
Parents: dfd28d2
Author: Vijay 
Authored: Tue Mar 11 02:52:45 2014 -0700
Committer: Vijay 
Committed: Tue Mar 11 20:13:03 2014 -0700

--
 .../org/apache/cassandra/tools/NodeCmd.java | 197 +--
 1 file changed, 95 insertions(+), 102 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/91d220b3/src/java/org/apache/cassandra/tools/NodeCmd.java
--
diff --git a/src/java/org/apache/cassandra/tools/NodeCmd.java 
b/src/java/org/apache/cassandra/tools/NodeCmd.java
index 75af915..85afdc1 100644
--- a/src/java/org/apache/cassandra/tools/NodeCmd.java
+++ b/src/java/org/apache/cassandra/tools/NodeCmd.java
@@ -29,8 +29,10 @@ import java.util.Map.Entry;
 import java.util.concurrent.ExecutionException;
 
 import com.google.common.base.Joiner;
+import com.google.common.collect.ArrayListMultimap;
 import com.google.common.collect.LinkedHashMultimap;
 import com.google.common.collect.Maps;
+
 import org.apache.cassandra.config.DatabaseDescriptor;
 import org.apache.cassandra.utils.FBUtilities;
 import org.apache.commons.cli.*;
@@ -38,7 +40,6 @@ import org.yaml.snakeyaml.Loader;
 import org.yaml.snakeyaml.TypeDescription;
 import org.yaml.snakeyaml.Yaml;
 import org.yaml.snakeyaml.constructor.Constructor;
-
 import org.apache.cassandra.concurrent.JMXEnabledThreadPoolExecutorMBean;
 import org.apache.cassandra.db.ColumnFamilyStoreMBean;
 import org.apache.cassandra.db.Table;
@@ -268,16 +269,7 @@ public class NodeCmd
 try
 {
 outs.println();
-Map> perDcOwnerships = 
Maps.newLinkedHashMap();
-// get the different datasets and map to tokens
-for (Map.Entry ownership : 
ownerships.entrySet())
-{
-String dc = 
probe.getEndpointSnitchInfoProxy().getDatacenter(ownership.getKey().getHostAddress());
-if (!perDcOwnerships.containsKey(dc))
-perDcOwnerships.put(dc, new LinkedHashMap());
-perDcOwnerships.get(dc).put(ownership.getKey(), 
ownership.getValue());
-}
-for (Map.Entry> entry : 
perDcOwnerships.entrySet())
+for (Entry entry : getOwnershipByDc(false, 
tokensToEndpoints, ownerships).entrySet())
 printDc(outs, format, entry.getKey(), endpointsToTokens, 
keyspaceSelected, entry.getValue());
 }
 catch (UnknownHostException e)
@@ -293,7 +285,7 @@ public class NodeCmd
 }
 
 private void printDc(PrintStream outs, String format, String dc, 
LinkedHashMultimap endpointsToTokens,
-boolean keyspaceSelected, Map 
filteredOwnerships)
+ boolean keyspaceSelected, SetHostStat hoststats)
 {
 Collection liveNodes = probe.getLiveNodes();
 Collection deadNodes = probe.getUnreachableNodes();
@@ -310,27 +302,27 @@ public class NodeCmd
 float totalReplicas = 0f;
 String lastToken = "";
 
-for (Map.Entry entry : 
filteredOwnerships.entrySet())
+for (HostStat stat : hoststats)
 {
-
tokens.addAll(endpointsToTokens.get(entry.getKey().getHostAddress()));
+tokens.addAll(endpointsToTokens.get(stat.ip));
 lastToken = tokens.get(tokens.size() - 1);
-totalReplicas += entry.getValue();
+if (stat.owns != null)
+totalReplicas += stat.owns;
 }
 
-
 if (keyspaceSelected)
 outs.print("Replicas: " + (int) totalReplicas + "\n\n");
 
 outs.printf(format, "Address", "Rack", "Status", "State", "Load", 
"Owns", "Token");
 
-if (filteredOwnerships.size() > 1)
+if (hoststats.size() > 1)
 outs.printf(format, "", "", "", "", "", "", lastToken);
 else
 outs.println();
 
-for (Map.Entry entry : endpointsToTokens.entries())
+for (HostStat stat : hoststats)
 {
-String endpoint = entry.getKey();
+String endpoint = stat.ip;
 String rack;
 try
 {
@@ -359,18 +351,8 @@ public class NodeCmd
 String load = loadMap.containsKey(endpoint)
 ? loadMap.get(endpoint)
 : "?";
-String owns;

git commit: use junit assertions over assert

2014-03-11 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/trunk 5bc76b97e -> 2037a8d7a


use junit assertions over assert


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2037a8d7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2037a8d7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2037a8d7

Branch: refs/heads/trunk
Commit: 2037a8d7acb4d3a3a44204f077663fbd5869995c
Parents: 5bc76b9
Author: Dave Brosius 
Authored: Tue Mar 11 22:47:11 2014 -0400
Committer: Dave Brosius 
Committed: Tue Mar 11 22:47:11 2014 -0400

--
 .../org/apache/cassandra/config/DefsTest.java   | 123 ++-
 1 file changed, 62 insertions(+), 61 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2037a8d7/test/unit/org/apache/cassandra/config/DefsTest.java
--
diff --git a/test/unit/org/apache/cassandra/config/DefsTest.java 
b/test/unit/org/apache/cassandra/config/DefsTest.java
index 1251ff7..6c06648 100644
--- a/test/unit/org/apache/cassandra/config/DefsTest.java
+++ b/test/unit/org/apache/cassandra/config/DefsTest.java
@@ -40,6 +40,7 @@ import org.apache.cassandra.service.MigrationManager;
 import org.apache.cassandra.utils.ByteBufferUtil;
 import static org.apache.cassandra.Util.cellname;
 
+import org.junit.Assert;
 import org.junit.Ignore;
 import org.junit.Test;
 import org.junit.runner.RunWith;
@@ -68,7 +69,7 @@ public class DefsTest extends SchemaLoader
.maxCompactionThreshold(500);
 
 // we'll be adding this one later. make sure it's not already there.
-assert cfm.getColumnDefinition(ByteBuffer.wrap(new byte[] { 5 })) == 
null;
+Assert.assertNull(cfm.getColumnDefinition(ByteBuffer.wrap(new byte[] { 
5 })));
 
 CFMetaData cfNew = cfm.clone();
 
@@ -80,14 +81,14 @@ public class DefsTest extends SchemaLoader
 // remove one.
 ColumnDefinition removeIndexDef = ColumnDefinition.regularDef(cfm, 
ByteBuffer.wrap(new byte[] { 0 }), BytesType.instance, null)
   .setIndex("0", 
IndexType.KEYS, null);
-assert cfNew.removeColumnDefinition(removeIndexDef);
+Assert.assertTrue(cfNew.removeColumnDefinition(removeIndexDef));
 
 cfm.apply(cfNew);
 
 for (int i = 1; i < cfm.allColumns().size(); i++)
-assert cfm.getColumnDefinition(ByteBuffer.wrap(new byte[] { 1 })) 
!= null;
-assert cfm.getColumnDefinition(ByteBuffer.wrap(new byte[] { 0 })) == 
null;
-assert cfm.getColumnDefinition(ByteBuffer.wrap(new byte[] { 5 })) != 
null;
+Assert.assertNotNull(cfm.getColumnDefinition(ByteBuffer.wrap(new 
byte[] { 1 })));
+Assert.assertNull(cfm.getColumnDefinition(ByteBuffer.wrap(new byte[] { 
0 })));
+Assert.assertNotNull(cfm.getColumnDefinition(ByteBuffer.wrap(new 
byte[] { 5 })));
 }
 
 @Test
@@ -95,11 +96,11 @@ public class DefsTest extends SchemaLoader
 {
 String[] valid = {"1", "a", "_1", "b_", "__", "1_a"};
 for (String s : valid)
-assert CFMetaData.isNameValid(s);
+Assert.assertTrue(CFMetaData.isNameValid(s));
 
 String[] invalid = {"b@t", "dash-y", "", " ", "dot.s", ".hidden"};
 for (String s : invalid)
-assert !CFMetaData.isNameValid(s);
+Assert.assertFalse(CFMetaData.isNameValid(s));
 }
 
 @Ignore
@@ -112,12 +113,12 @@ public class DefsTest extends SchemaLoader
 DefsTables.dumpToStorage(first);
 List defs = new 
ArrayList(DefsTables.loadFromStorage(first));
 
-assert defs.size() > 0;
-assert defs.size() == Schema.instance.getNonSystemKeyspaces().size();
+Assert.assertTrue(defs.size() > 0);
+Assert.assertEquals(defs.size(), 
Schema.instance.getNonSystemKeyspaces().size());
 for (KSMetaData loaded : defs)
 {
 KSMetaData defined = 
Schema.instance.getKeyspaceDefinition(loaded.name);
-assert defined.equals(loaded) : String.format("%s != %s", loaded, 
defined);
+Assert.assertTrue(String.format("%s != %s", loaded, defined), 
defined.equals(loaded));
 }
 */
 }
@@ -145,11 +146,11 @@ public class DefsTest extends SchemaLoader
 
 CFMetaData newCf = addTestCF(original.name, cf, null);
 
-assert 
!Schema.instance.getKSMetaData(ks).cfMetaData().containsKey(newCf.cfName);
+
Assert.assertFalse(Schema.instance.getKSMetaData(ks).cfMetaData().containsKey(newCf.cfName));
 MigrationManager.announceNewColumnFamily(newCf);
 
-assert 
Schema.instance.getKSMetaData(ks).cfMetaData().containsKey(newCf.cfName);
-assert 
Schema.instance.getKSMetaData(ks).cfMetaData().get(newCf.cfName).equa

[jira] [Created] (CASSANDRA-6838) FileCacheService dramatically overcounting its memoryUsage

2014-03-11 Thread Benedict (JIRA)
Benedict created CASSANDRA-6838:
---

 Summary: FileCacheService dramatically overcounting its memoryUsage
 Key: CASSANDRA-6838
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6838
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Benedict
Assignee: Benedict
 Fix For: 2.1 beta2


On investigating why I was seeing dramatically worse performance for counter 
updates over prepared CQL3 statements compared to unprepared CQL2 statements, I 
stumbled upon a bug in FileCacheService wherein, on returning a cached reader 
back to the pool, its memory is counted again towards the total memory usage, 
but is not matched by a decrement when checked out. So we effectively are 
probably not caching readers most of the time.






[jira] [Updated] (CASSANDRA-6838) FileCacheService dramatically overcounting its memoryUsage

2014-03-11 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-6838:


Attachment: 6838.txt

> FileCacheService dramatically overcounting its memoryUsage
> --
>
> Key: CASSANDRA-6838
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6838
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Benedict
>Assignee: Benedict
>  Labels: performance
> Fix For: 2.1 beta2
>
> Attachments: 6838.txt
>
>
> On investigating why I was seeing dramatically worse performance for counter 
> updates over prepared CQL3 statements compared to unprepared CQL2 statements, 
> I stumbled upon a bug in FileCacheService wherein, on returning a cached 
> reader back to the pool, its memory is counted again towards the total memory 
> usage, but is not matched by a decrement when checked out. So we effectively 
> are probably not caching readers most of the time.





[jira] [Commented] (CASSANDRA-6833) Add json data type

2014-03-11 Thread Robert Coli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13931115#comment-13931115
 ] 

Robert Coli commented on CASSANDRA-6833:


I agree with Aleksey, above. If you make a JSON data type that validates, you 
*will* see users constantly using it. If we don't want them to do Stupid 
Things, we shouldn't suggest that Cassandra expects them to do said Stupid 
Things and wants to make it easier by providing validation. As it is trivial 
for them to validate outside of Cassandra, validation within Cassandra suggests 
endorsement.

> Add json data type
> --
>
> Key: CASSANDRA-6833
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6833
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jonathan Ellis
>Priority: Minor
> Fix For: 2.0.7
>
>
> While recognizing that UDT (CASSANDRA-5590) is the Right Way to store 
> hierarchical data in C*, it can still be useful to store json blobs as text.  
> Adding a json type would allow validating that data.  (And adding formatting 
> support in cqlsh?)





[jira] [Commented] (CASSANDRA-6835) cassandra-stress should support a variable number of counter columns

2014-03-11 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13931206#comment-13931206
 ] 

Benedict commented on CASSANDRA-6835:
-

Uploaded patch 
[here|https://github.com/belliottsmith/cassandra/commits/iss-6835]

This actually makes a lot more changes than planned, a couple of which are 
pretty important:

# It _fixes counter reads_ - they've been hitting the non-counter table since 
this stress was introduced, which is kind of not the point
# Super column reads had the same problem
# As part of this fix, I rescind the ability to specify a CF name, as it 
doesn't really make much sense, and it only overrode the CF name for 
non-counter, non-supercolumn operations. Making it more generic seemed like too 
much work for the payoff
# It permits operations on counters to operate over a variable number of 
columns, selecting a random sample of the possible column names (note that 
reads may still fail if they get nothing back, so ideally all possible columns 
should be populated once before any random read/write workload is let loose)
# It permits varying the amount a counter is incremented by, based on a 
distribution (see the sketch after this list)
# It permits selecting whether you want to perform a range slice query (select *) 
or a name filter query for reads (defaulting to the latter where possible)
# It slightly modifies the -mode parameter spec to make it clearer what kind of 
CQL3/2 connection you're making
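
Purely as an illustration of points 4 and 5 above (hypothetical class and method names, not the stress tool's actual code): sample a random subset of the possible counter column names for each operation, and draw the increment amount from a distribution instead of always adding 1.

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class CounterOpSketch
{
    private static final Random random = new Random();

    // pick `count` distinct column names at random from the full set
    static List<String> sampleColumns(List<String> allColumns, int count)
    {
        List<String> shuffled = new ArrayList<>(allColumns);
        Collections.shuffle(shuffled, random);
        return shuffled.subList(0, Math.min(count, shuffled.size()));
    }

    // gaussian increment, clamped so we always add at least 1
    static long sampleIncrement(double mean, double stddev)
    {
        return Math.max(1, Math.round(mean + random.nextGaussian() * stddev));
    }

    public static void main(String[] args)
    {
        List<String> columns = Arrays.asList("C0", "C1", "C2", "C3", "C4");
        for (String column : sampleColumns(columns, 3))
            System.out.printf("UPDATE Counter1 SET %s = %s + %d WHERE KEY = 'key1';%n",
                              column, column, sampleIncrement(10, 3));
    }
}
{code}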


> cassandra-stress should support a variable number of counter columns
> 
>
> Key: CASSANDRA-6835
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6835
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benedict
>Assignee: Benedict
>Priority: Minor
>






[jira] [Commented] (CASSANDRA-5708) Add DELETE ... IF EXISTS to CQL3

2014-03-11 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13931204#comment-13931204
 ] 

Tyler Hobbs commented on CASSANDRA-5708:


There's one scenario where I'm not sure what the best behavior would be:

{noformat}
CREATE TABLE foo (k int PRIMARY KEY, v int);
INSERT INTO foo (k, v) VALUES (0, 0);
DELETE v FROM foo WHERE k=0 IF EXISTS;  -- cas succeeds
DELETE v FROM foo WHERE k=0 IF EXISTS;  -- cas fails
DELETE FROM foo WHERE k=0 IF EXISTS; -- cas succeeds
{noformat}

When deleting a set of columns (instead of the entire row), should EXISTS only 
check to see if any of the deleted cells are live, or should it check to see if 
the entire row has any live cells?  (I think the latter behavior is less 
surprising.)

> Add DELETE ... IF EXISTS to CQL3
> 
>
> Key: CASSANDRA-5708
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5708
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Assignee: Tyler Hobbs
>Priority: Minor
> Fix For: 2.0.7
>
>
> I've been slightly lazy in CASSANDRA-5443 and didn't add a {{DELETE .. IF 
> EXISTS}} syntax to CQL because it wasn't immediately clear what was the 
> correct condition to use for the "IF EXISTS". But at least for CQL3 tables, 
> this is in fact pretty easy to do using the row marker so we should probably 
> add it.





[jira] [Commented] (CASSANDRA-6833) Add json data type

2014-03-11 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13931173#comment-13931173
 ] 

Pavel Yaskevich commented on CASSANDRA-6833:


+1 with [~iamaleksey]. If users really want validation for JSON strings to be 
handled by Cassandra, they can just add JSONType to their project and use it 
(that's still supported). That way at least it would be clear what it does; 
otherwise it would be the same as super columns: I've seen a couple of examples 
where people started prototyping with them and moved to production unchanged, 
just because it felt natural for the type of data they were storing, so no 
thought was given to re-modeling until the very end, right before they hit the 
bottleneck.

> Add json data type
> --
>
> Key: CASSANDRA-6833
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6833
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jonathan Ellis
>Priority: Minor
> Fix For: 2.0.7
>
>
> While recognizing that UDT (CASSANDRA-5590) is the Right Way to store 
> hierarchical data in C*, it can still be useful to store json blobs as text.  
> Adding a json type would allow validating that data.  (And adding formatting 
> support in cqlsh?)





[jira] [Updated] (CASSANDRA-6838) FileCacheService overcounting its memoryUsage

2014-03-11 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-6838:


Summary: FileCacheService overcounting its memoryUsage  (was: 
FileCacheService dramatically overcounting its memoryUsage)

> FileCacheService overcounting its memoryUsage
> -
>
> Key: CASSANDRA-6838
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6838
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Benedict
>Assignee: Benedict
>  Labels: performance
> Fix For: 2.1 beta2
>
> Attachments: 6838.txt
>
>
> On investigating why I was seeing dramatically worse performance for counter 
> updates over prepared CQL3 statements compared to unprepared CQL2 statements, 
> I stumbled upon a bug in FileCacheService wherein, on returning a cached 
> reader back to the pool, its memory is counted again towards the total memory 
> usage, but is not matched by a decrement when checked out. So we effectively 
> are probably not caching readers most of the time.





[jira] [Commented] (CASSANDRA-6307) Switch cqlsh from cassandra-dbapi2 to python-driver

2014-03-11 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13931047#comment-13931047
 ] 

Mikhail Stepura commented on CASSANDRA-6307:


thanks [~thobbs].

I've also discovered that Python 2.7.x can't connect to Cassandra running on 
Java 7 if Cassandra's keypair was generated using the instructions from 
(http://www.datastax.com/documentation/cassandra/2.0/cassandra/security/secureSSLCertificates_t.html?scroll=task_ds_c14_xjy_2k):
 {{keytool -genkey -alias  -keystore .keystore}}

In this case Python will fail with {{SSLError(1, '_ssl.c:507: 
error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure')}}

So I had to generate keys with {{-keyalg RSA}} to work around that. Not sure 
how that will impact existing setups.

http://stackoverflow.com/questions/14167508/intermittent-sslv3-alert-handshake-failure-under-python
 suggests to {{disable DHE cipher suites (at either end)}}, so I'll try to do 
that on the cqlsh side.


> Switch cqlsh from cassandra-dbapi2 to python-driver
> ---
>
> Key: CASSANDRA-6307
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6307
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Aleksey Yeschenko
>Assignee: Mikhail Stepura
>Priority: Minor
> Fix For: 2.1 beta2
>
>
> python-driver is hitting 1.0 soon. cassandra-dbapi2 development has stalled.
> It's time to switch cqlsh to the native protocol and python-driver, especially 
> now that
> 1. Some CQL3 things are not supported by Thrift transport
> 2. cqlsh no longer has to support CQL2 (dropped in 2.0)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6800) ant codecoverage no longer works due jdk 1.7

2014-03-11 Thread Edward Capriolo (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Capriolo updated CASSANDRA-6800:
---

Assignee: (was: Edward Capriolo)

> ant codecoverage no longer works due jdk 1.7
> 
>
> Key: CASSANDRA-6800
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6800
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tests
>Reporter: Edward Capriolo
>Priority: Minor
> Fix For: 2.1 beta2
>
>
> Code coverage does not currently run due to a Cobertura/JDK incompatibility. 
> A fix is coming. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (CASSANDRA-6704) Create wide row scanners

2014-03-11 Thread Edward Capriolo (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Capriolo resolved CASSANDRA-6704.


Resolution: Won't Fix

No point in doing this, since no one cares to support Thrift any more. CQL does 
everything better.

> Create wide row scanners
> 
>
> Key: CASSANDRA-6704
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6704
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
>
> The BigTable white paper demonstrates the use of scanners to iterate over 
> rows and columns. 
> http://static.googleusercontent.com/media/research.google.com/en/us/archive/bigtable-osdi06.pdf
> Because Cassandra does not have primary sorting on row keys, scanning over 
> ranges of row keys is less useful. 
> However, we can use the scanner concept to operate on wide rows. For example, 
> many times a user wishes to do some custom processing inside a row and does 
> not wish to carry the data across the network to do that processing. 
> I have already implemented Thrift methods to compile dynamic Groovy code into 
> Filters, as well as some code that uses a Filter to page through and process 
> data on the server side.
> https://github.com/edwardcapriolo/cassandra/compare/apache:trunk...trunk
> The following is a working code snippet.
> {code}
> @Test
> public void test_scanner() throws Exception
> {
>   ColumnParent cp = new ColumnParent();
>   cp.setColumn_family("Standard1");
>   ByteBuffer key = ByteBuffer.wrap("rscannerkey".getBytes());
>   for (char a='a'; a < 'g'; a++){
> Column c1 = new Column();
> c1.setName((a+"").getBytes());
> c1.setValue(new byte [0]);
> c1.setTimestamp(System.nanoTime());
> server.insert(key, cp, c1, ConsistencyLevel.ONE);
>   }
>   
>   FilterDesc d = new FilterDesc();
>   d.setSpec("GROOVY_CLASS_LOADER");
>   d.setName("limit3");
>   d.setCode("import org.apache.cassandra.dht.* \n" +
>   "import org.apache.cassandra.thrift.* \n" +
>   "public class Limit3 implements SFilter { \n " +
>   "public FilterReturn filter(ColumnOrSuperColumn col, 
> List filtered) {\n"+
>   " filtered.add(col);\n"+
>   " return filtered.size()< 3 ? FilterReturn.FILTER_MORE : 
> FilterReturn.FILTER_DONE;\n"+
>   "} \n" +
> "}\n");
>   server.create_filter(d);
>   
>   
>   ScannerResult res = server.create_scanner("Standard1", "limit3", key, 
> ByteBuffer.wrap("a".getBytes()));
>   Assert.assertEquals(3, res.results.size());
> }
> {code}
> I am going to be working on this code over the next few weeks, but I wanted to 
> get the concept out early so the design can get some criticism.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6833) Add json data type

2014-03-11 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930948#comment-13930948
 ] 

Aleksey Yeschenko commented on CASSANDRA-6833:
--

bq. A timestamp is "just" a bigint by that reasoning. Should we not support 
that extra layer of meaning either?

It's about messaging. Having a JSON type, even if it's really a blob with some 
extra validation, sends a message that putting JSON blobs in cells is OK, where 
in reality it's more often NOT OK. You might not see it that way, but users 
will. We should not be encouraging it.

The risk of a poisonous message vs. the extremely minor benefit of the type (a 
blob with validation) makes this issue a no-brainer won't-fix, imo.

> Add json data type
> --
>
> Key: CASSANDRA-6833
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6833
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jonathan Ellis
>Priority: Minor
> Fix For: 2.0.7
>
>
> While recognizing that UDT (CASSANDRA-5590) is the Right Way to store 
> hierarchical data in C*, it can still be useful to store json blobs as text.  
> Adding a json type would allow validating that data.  (And adding formatting 
> support in cqlsh?)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6833) Add json data type

2014-03-11 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930944#comment-13930944
 ] 

Jonathan Ellis commented on CASSANDRA-6833:
---

A timestamp is "just" a bigint by that reasoning.  Should we not support that 
extra layer of meaning either?

> Add json data type
> --
>
> Key: CASSANDRA-6833
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6833
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jonathan Ellis
>Priority: Minor
> Fix For: 2.0.7
>
>
> While recognizing that UDT (CASSANDRA-5590) is the Right Way to store 
> hierarchical data in C*, it can still be useful to store json blobs as text.  
> Adding a json type would allow validating that data.  (And adding formatting 
> support in cqlsh?)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6833) Add json data type

2014-03-11 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930936#comment-13930936
 ] 

Aleksey Yeschenko commented on CASSANDRA-6833:
--

bq. What if I don't really care about the json contents per se, I just want to 
store json from a third party?

Validate it and then put it in a text column?
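
A minimal sketch of that suggestion, assuming Jackson is available on the application side (this is not part of any Cassandra patch):

{code}
import java.io.IOException;

import com.fasterxml.jackson.databind.ObjectMapper;

// Minimal sketch: validate in the application, then write the string to an
// ordinary text column as usual. The method name is made up for the example.
public static String requireValidJson(String json)
{
    try
    {
        new ObjectMapper().readTree(json);   // parses; throws on malformed input
        return json;
    }
    catch (IOException e)
    {
        throw new IllegalArgumentException("not valid JSON: " + e.getMessage(), e);
    }
}
{code}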

> Add json data type
> --
>
> Key: CASSANDRA-6833
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6833
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jonathan Ellis
>Priority: Minor
> Fix For: 2.0.7
>
>
> While recognizing that UDT (CASSANDRA-5590) is the Right Way to store 
> hierarchical data in C*, it can still be useful to store json blobs as text.  
> Adding a json type would allow validating that data.  (And adding formatting 
> support in cqlsh?)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6833) Add json data type

2014-03-11 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930931#comment-13930931
 ] 

Jonathan Ellis commented on CASSANDRA-6833:
---

What if I don't really care about the json contents per se, I just want to 
store json from a third party?

> Add json data type
> --
>
> Key: CASSANDRA-6833
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6833
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jonathan Ellis
>Priority: Minor
> Fix For: 2.0.7
>
>
> While recognizing that UDT (CASSANDRA-5590) is the Right Way to store 
> hierarchical data in C*, it can still be useful to store json blobs as text.  
> Adding a json type would allow validating that data.  (And adding formatting 
> support in cqlsh?)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6833) Add json data type

2014-03-11 Thread Sergio Bossa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930922#comment-13930922
 ] 

Sergio Bossa commented on CASSANDRA-6833:
-

I agree with [~iamaleksey]: that would definitely give users the wrong message, 
and encourage what in the end is a bad practice.
I've personally done that in the past (stuffing JSON blobs inside columns), and 
that's a pretty opaque and inefficient way of modelling your data, as opposed 
to CQL3.

> Add json data type
> --
>
> Key: CASSANDRA-6833
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6833
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jonathan Ellis
>Priority: Minor
> Fix For: 2.0.7
>
>
> While recognizing that UDT (CASSANDRA-5590) is the Right Way to store 
> hierarchical data in C*, it can still be useful to store json blobs as text.  
> Adding a json type would allow validating that data.  (And adding formatting 
> support in cqlsh?)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6779) BooleanType is not too boolean

2014-03-11 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930914#comment-13930914
 ] 

Jonathan Ellis commented on CASSANDRA-6779:
---

([~thobbs] to review)

> BooleanType is not too boolean
> --
>
> Key: CASSANDRA-6779
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6779
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
>Priority: Minor
> Fix For: 2.0.7
>
> Attachments: 6779.txt
>
>
> The BooleanType validator accepts any byte (it only checks that it's one byte 
> long), and the comparator actually uses the ByteBuffer.compareTo() method on 
> its input. So BooleanType is really ByteType and accepts 256 values.
> Note that in practice, it's likely no-one or almost no-one has ever used 
> BooleanType as a comparator, and almost surely the handful that might have 
> done it have stuck to sending only 0 for false and 1 for true. Still, it's 
> probably worth fixing before it actually hurts someone. 
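
For illustration, a hedged sketch of the behaviour a fixed comparator would want, rather than the attached 6779.txt:

{code}
import java.nio.ByteBuffer;

// Sketch only: decode each single-byte value as a boolean and compare the
// booleans, so any non-zero byte counts as true and false sorts before true,
// instead of comparing raw bytes (which is what makes BooleanType act like ByteType).
public static int compareBooleans(ByteBuffer b1, ByteBuffer b2)
{
    boolean v1 = b1.get(b1.position()) != 0;
    boolean v2 = b2.get(b2.position()) != 0;
    return Boolean.compare(v1, v2);
}
{code}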



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6779) BooleanType is not too boolean

2014-03-11 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6779:
--

Reviewer: Tyler Hobbs

> BooleanType is not too boolean
> --
>
> Key: CASSANDRA-6779
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6779
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
>Priority: Minor
> Fix For: 2.0.7
>
> Attachments: 6779.txt
>
>
> The BooleanType validator accepts any byte (it only checks that it's one byte 
> long), and the comparator actually uses the ByteBuffer.compareTo() method on 
> its input. So BooleanType is really ByteType and accepts 256 values.
> Note that in practice, it's likely no-one or almost no-one has ever used 
> BooleanType as a comparator, and almost surely the handful that might have 
> done it have stuck to sending only 0 for false and 1 for true. Still, it's 
> probably worth fixing before it actually hurts someone. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6833) Add json data type

2014-03-11 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930911#comment-13930911
 ] 

Jeremiah Jordan commented on CASSANDRA-6833:


I am +0 on it; json type validation seems pretty easy to do as long as we 
aren't going to add json 2i's or something.

> Add json data type
> --
>
> Key: CASSANDRA-6833
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6833
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jonathan Ellis
>Priority: Minor
> Fix For: 2.0.7
>
>
> While recognizing that UDT (CASSANDRA-5590) is the Right Way to store 
> hierarchical data in C*, it can still be useful to store json blobs as text.  
> Adding a json type would allow validating that data.  (And adding formatting 
> support in cqlsh?)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-4165) Generate Digest file for compressed SSTables

2014-03-11 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4165:
--

Reviewer: Marcus Eriksson  (was: Jonathan Ellis)
Assignee: Jonathan Ellis

> Generate Digest file for compressed SSTables
> 
>
> Key: CASSANDRA-4165
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4165
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Marcus Eriksson
>Assignee: Jonathan Ellis
>Priority: Minor
>  Labels: performance
> Fix For: 2.1 beta2
>
> Attachments: 0001-Generate-digest-for-compressed-files-as-well.patch, 
> 0002-dont-do-crc-and-add-digests-for-compressed-files.txt, 4165-rebased.txt
>
>
> We use the generated *Digest.sha1 files to verify backups; it would be nice if 
> they were generated for compressed sstables as well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-4165) Generate Digest file for compressed SSTables

2014-03-11 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930913#comment-13930913
 ] 

Jonathan Ellis commented on CASSANDRA-4165:
---

Can you review that branch, [~krummas]?

> Generate Digest file for compressed SSTables
> 
>
> Key: CASSANDRA-4165
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4165
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Marcus Eriksson
>Assignee: Jonathan Ellis
>Priority: Minor
>  Labels: performance
> Fix For: 2.1 beta2
>
> Attachments: 0001-Generate-digest-for-compressed-files-as-well.patch, 
> 0002-dont-do-crc-and-add-digests-for-compressed-files.txt, 4165-rebased.txt
>
>
> We use the generated *Digest.sha1 files to verify backups; it would be nice if 
> they were generated for compressed sstables as well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6745) Require specifying rows_per_partition_to_cache

2014-03-11 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6745:
--

Reviewer: Sylvain Lebresne

> Require specifying rows_per_partition_to_cache
> --
>
> Key: CASSANDRA-6745
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6745
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jonathan Ellis
>Assignee: Marcus Eriksson
>Priority: Trivial
> Fix For: 2.1 beta2
>
> Attachments: 0001-wip-caching-options.patch
>
>
> We should require specifying rows_to_cache_per_partition for new tables, or 
> newly ALTERed ones, when row caching is enabled.
> Pre-upgrade tables should be grandfathered in as ALL to match existing semantics.
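
As a rough illustration of what table definitions would then look like; the option names here are an assumption based on the WIP patch direction, not the final committed syntax:

{code}
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;

// Sketch only: the option spelling ('keys', 'rows_per_partition') and the
// keyspace/table names are assumptions for illustration.
public static void createCachedTable(Session session)
{
    session.execute(new SimpleStatement(
            "CREATE TABLE ks.events (id int, ts timeuuid, payload text, " +
            "PRIMARY KEY (id, ts)) " +
            "WITH caching = {'keys': 'ALL', 'rows_per_partition': '100'}"));
}
{code}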



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6745) Require specifying rows_per_partition_to_cache

2014-03-11 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930907#comment-13930907
 ] 

Jonathan Ellis commented on CASSANDRA-6745:
---

([~slebresne] to review)

> Require specifying rows_per_partition_to_cache
> --
>
> Key: CASSANDRA-6745
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6745
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jonathan Ellis
>Assignee: Marcus Eriksson
>Priority: Trivial
> Fix For: 2.1 beta2
>
> Attachments: 0001-wip-caching-options.patch
>
>
> We should require specifying rows_to_cache_per_partition for new tables, or 
> newly ALTERed ones, when row caching is enabled.
> Pre-upgrade tables should be grandfathered in as ALL to match existing semantics.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6783) Collections should have a proper compare() method for UDT

2014-03-11 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930905#comment-13930905
 ] 

Jonathan Ellis commented on CASSANDRA-6783:
---

([~thobbs] to review)

> Collections should have a proper compare() method for UDT
> -
>
> Key: CASSANDRA-6783
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6783
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
> Fix For: 2.1 beta2
>
> Attachments: 6783.txt
>
>
> So far, ListType, SetType and MapType don't have a proper implementation of 
> compare() (they throw UnsupportedOperationException) because we haven't needed 
> one: as far as the cell comparator is concerned, only parts of a 
> collection end up in the comparator and need to be compared, but the full 
> collection itself does not.
> But a UDT can nest a collection, and that sometimes requires being able to 
> compare them. Typically, I pushed a dtest 
> [here|https://github.com/riptano/cassandra-dtest/commit/290e9496d1b2c45158c7d7f5487d09ba48897a7f]
>  that ends up throwing:
> {noformat}
> java.lang.UnsupportedOperationException: CollectionType should not be use 
> directly as a comparator
> at 
> org.apache.cassandra.db.marshal.CollectionType.compare(CollectionType.java:72)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.marshal.CollectionType.compare(CollectionType.java:37)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.marshal.AbstractType.compareCollectionMembers(AbstractType.java:174)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:101)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:35)
>  ~[main/:na]
> at java.util.TreeMap.compare(TreeMap.java:1188) ~[na:1.7.0_45]
> at java.util.TreeMap.put(TreeMap.java:531) ~[na:1.7.0_45]
> at java.util.TreeSet.add(TreeSet.java:255) ~[na:1.7.0_45]
> at org.apache.cassandra.cql3.Sets$DelayedValue.bind(Sets.java:205) 
> ~[main/:na]
> at org.apache.cassandra.cql3.Sets$Literal.prepare(Sets.java:91) 
> ~[main/:na]
> at 
> org.apache.cassandra.cql3.UserTypes$Literal.prepare(UserTypes.java:60) 
> ~[main/:na]
> at 
> org.apache.cassandra.cql3.Operation$SetElement.prepare(Operation.java:221) 
> ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.UpdateStatement$ParsedUpdate.prepareInternal(UpdateStatement.java:201)
>  ~[main/:na]
> ...
> {noformat}
> Note that this stack doesn't involve cell name comparison at all; it's just 
> that CQL3 sometimes uses a SortedSet underneath to deal with set literals 
> (since internal sets are sorted by their value), and so when a set contains 
> UDTs that have sets themselves, we need the collection comparison. That being 
> said, for some cases, like having a UDT as a map key, we would need 
> collections to be comparable for the purpose of cell name comparison.
> Attaching a relatively simple patch. The patch is a bit bigger than it should 
> be because, while adding the 3 simple compare() methods, I realized that we had 
> methods to read a short length (a 2-byte unsigned short) from a ByteBuffer 
> duplicated all over the place and that it was time to consolidate them in 
> ByteBufferUtil, where they should have been from day one (thus removing the 
> duplication). I can separate that trivial refactor into a separate patch if we 
> really need to, but really, the new stuff is the compare() method 
> implementations in ListType, SetType and MapType; the rest is a bit of 
> trivial cleanup. 
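
To make the shape of the fix concrete, a hedged sketch of an element-wise list comparison; the attached 6783.txt works directly on the serialized collection format, this only shows the idea:

{code}
import java.nio.ByteBuffer;
import java.util.Comparator;
import java.util.List;

// Illustrative sketch only: compare two lists of serialized elements
// lexicographically, delegating to the element comparator; on a common-prefix
// tie the shorter list sorts first.
public static int compareLists(List<ByteBuffer> l1, List<ByteBuffer> l2,
                               Comparator<ByteBuffer> elementComparator)
{
    int common = Math.min(l1.size(), l2.size());
    for (int i = 0; i < common; i++)
    {
        int cmp = elementComparator.compare(l1.get(i), l2.get(i));
        if (cmp != 0)
            return cmp;
    }
    return Integer.compare(l1.size(), l2.size());
}
{code}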



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6783) Collections should have a proper compare() method for UDT

2014-03-11 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6783:
--

Reviewer: Tyler Hobbs

> Collections should have a proper compare() method for UDT
> -
>
> Key: CASSANDRA-6783
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6783
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
> Fix For: 2.1 beta2
>
> Attachments: 6783.txt
>
>
> So far, ListType, SetType and MapType don't have a proper implementation of 
> compare() (they throw UnsupportedOperationException) because we haven't needed 
> one: as far as the cell comparator is concerned, only parts of a 
> collection end up in the comparator and need to be compared, but the full 
> collection itself does not.
> But a UDT can nest a collection, and that sometimes requires being able to 
> compare them. Typically, I pushed a dtest 
> [here|https://github.com/riptano/cassandra-dtest/commit/290e9496d1b2c45158c7d7f5487d09ba48897a7f]
>  that ends up throwing:
> {noformat}
> java.lang.UnsupportedOperationException: CollectionType should not be use 
> directly as a comparator
> at 
> org.apache.cassandra.db.marshal.CollectionType.compare(CollectionType.java:72)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.marshal.CollectionType.compare(CollectionType.java:37)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.marshal.AbstractType.compareCollectionMembers(AbstractType.java:174)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:101)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:35)
>  ~[main/:na]
> at java.util.TreeMap.compare(TreeMap.java:1188) ~[na:1.7.0_45]
> at java.util.TreeMap.put(TreeMap.java:531) ~[na:1.7.0_45]
> at java.util.TreeSet.add(TreeSet.java:255) ~[na:1.7.0_45]
> at org.apache.cassandra.cql3.Sets$DelayedValue.bind(Sets.java:205) 
> ~[main/:na]
> at org.apache.cassandra.cql3.Sets$Literal.prepare(Sets.java:91) 
> ~[main/:na]
> at 
> org.apache.cassandra.cql3.UserTypes$Literal.prepare(UserTypes.java:60) 
> ~[main/:na]
> at 
> org.apache.cassandra.cql3.Operation$SetElement.prepare(Operation.java:221) 
> ~[main/:na]
> at 
> org.apache.cassandra.cql3.statements.UpdateStatement$ParsedUpdate.prepareInternal(UpdateStatement.java:201)
>  ~[main/:na]
> ...
> {noformat}
> Note that this stack doesn't involve cell name comparison at all; it's just 
> that CQL3 sometimes uses a SortedSet underneath to deal with set literals 
> (since internal sets are sorted by their value), and so when a set contains 
> UDTs that have sets themselves, we need the collection comparison. That being 
> said, for some cases, like having a UDT as a map key, we would need 
> collections to be comparable for the purpose of cell name comparison.
> Attaching a relatively simple patch. The patch is a bit bigger than it should 
> be because, while adding the 3 simple compare() methods, I realized that we had 
> methods to read a short length (a 2-byte unsigned short) from a ByteBuffer 
> duplicated all over the place and that it was time to consolidate them in 
> ByteBufferUtil, where they should have been from day one (thus removing the 
> duplication). I can separate that trivial refactor into a separate patch if we 
> really need to, but really, the new stuff is the compare() method 
> implementations in ListType, SetType and MapType; the rest is a bit of 
> trivial cleanup. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6793) NPE in Hadoop Word count example

2014-03-11 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930873#comment-13930873
 ] 

Jonathan Ellis commented on CASSANDRA-6793:
---

IMO we should come up with a separate example for that; otherwise people are 
going to get the wrong idea, since word count really shouldn't be that 
complicated.

> NPE in Hadoop Word count example
> 
>
> Key: CASSANDRA-6793
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6793
> Project: Cassandra
>  Issue Type: Bug
>  Components: Examples
>Reporter: Chander S Pechetty
>Assignee: Chander S Pechetty
>Priority: Minor
>  Labels: hadoop
> Attachments: trunk-6793.txt
>
>
> The partition keys requested in WordCount.java do not match the primary key 
> set up in the table output_words. It looks like this patch was not merged 
> properly from 
> [CASSANDRA-5622|https://issues.apache.org/jira/browse/CASSANDRA-5622]. The 
> attached patch addresses the NPE and uses the correct keys defined in #5622.
> I am assuming there is no need to fix the actual NPE itself, e.g. by throwing an 
> InvalidRequestException back to the user to fix the partition keys, as it would 
> be trivial to get the same from the TableMetadata using the driver API.
> java.lang.NullPointerException
>   at 
> org.apache.cassandra.dht.Murmur3Partitioner.getToken(Murmur3Partitioner.java:92)
>   at 
> org.apache.cassandra.dht.Murmur3Partitioner.getToken(Murmur3Partitioner.java:40)
>   at org.apache.cassandra.client.RingCache.getRange(RingCache.java:117)
>   at 
> org.apache.cassandra.hadoop.cql3.CqlRecordWriter.write(CqlRecordWriter.java:163)
>   at 
> org.apache.cassandra.hadoop.cql3.CqlRecordWriter.write(CqlRecordWriter.java:63)
>   at 
> org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:587)
>   at 
> org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
>   at WordCount$ReducerToCassandra.reduce(Unknown Source)
>   at WordCount$ReducerToCassandra.reduce(Unknown Source)
>   at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:176)
>   at 
> org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:649)
>   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:417)
>   at 
> org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:260)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-5483) Repair tracing

2014-03-11 Thread Ben Chan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930861#comment-13930861
 ] 

Ben Chan commented on CASSANDRA-5483:
-

Okay, final set of changes (I think).

repro code:
{noformat}
#git checkout 5483
W=https://issues.apache.org/jira/secure/attachment
for url in \
  
$W/12633989/5483-v07-07-Better-constructor-parameters-for-DebuggableThreadPoolExecutor.patch
 \
  $W/12633990/5483-v07-08-Fix-brace-style.patch \
  
$W/12633991/5483-v07-09-Add-trace-option-to-a-more-complete-set-of-repair-functions.patch
 \
  $W/12633992/5483-v07-10-Correct-name-of-boolean-repairedAt-to-fullRepair.patch
do [ -e $(basename $url) ] || curl -sO $url; done &&
git apply 5483-v07-*.patch &&
ant clean && ant

./ccm-repair-test
{noformat}

Comments:
* {{v07-07}} I get multiple {{RepairJobTask:...}} ids in the trace, so 
hopefully this resolves the issue. This is as much as I can say with any 
assurance, since I don't normally work with threading.
* {{v07-09}} Without having to worry about backwards compatibility, I decided 
to trace-enable all repair functions I could find and conveniently convert. No 
real change functionally, since nothing in the source tree calls those 
functions and/or overloadings (otherwise I would have had to change much more 
in order to get things to compile).
* {{v07-10}} Again, no functional change. I noticed the difference in parameter 
name when doing the conversion.

You can take or leave the last two; they don't do anything, really.


> Repair tracing
> --
>
> Key: CASSANDRA-5483
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5483
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Yuki Morishita
>Assignee: Ben Chan
>Priority: Minor
>  Labels: repair
> Attachments: 5483-v06-04-Allow-tracing-ttl-to-be-configured.patch, 
> 5483-v06-05-Add-a-command-column-to-system_traces.events.patch, 
> 5483-v06-06-Fix-interruption-in-tracestate-propagation.patch, 
> 5483-v07-07-Better-constructor-parameters-for-DebuggableThreadPoolExecutor.patch,
>  5483-v07-08-Fix-brace-style.patch, 
> 5483-v07-09-Add-trace-option-to-a-more-complete-set-of-repair-functions.patch,
>  5483-v07-10-Correct-name-of-boolean-repairedAt-to-fullRepair.patch, 
> ccm-repair-test, test-5483-system_traces-events.txt, 
> trunk@4620823-5483-v02-0001-Trace-filtering-and-tracestate-propagation.patch, 
> trunk@4620823-5483-v02-0002-Put-a-few-traces-parallel-to-the-repair-logging.patch,
>  tr...@8ebeee1-5483-v01-001-trace-filtering-and-tracestate-propagation.txt, 
> tr...@8ebeee1-5483-v01-002-simple-repair-tracing.txt, 
> v02p02-5483-v03-0003-Make-repair-tracing-controllable-via-nodetool.patch, 
> v02p02-5483-v04-0003-This-time-use-an-EnumSet-to-pass-boolean-repair-options.patch,
>  v02p02-5483-v05-0003-Use-long-instead-of-EnumSet-to-work-with-JMX.patch
>
>
> I think it would be nice to log repair stats and results the way query tracing 
> stores traces in the system keyspace. With it, you don't have to look up each log 
> file to see what the status was and how the repair you invoked performed. 
> Instead, you can query the repair log with the session ID to see the state and 
> stats of all nodes involved in that repair session.
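
As a sketch of the kind of lookup described above, against the existing system_traces tables; the session id value is a placeholder:

{code}
import java.util.UUID;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

// Sketch only: read the trace events recorded for one session. Assumes a node
// on localhost; the repair-specific columns added by this ticket are not shown.
public static void printTraceEvents(UUID sessionId)
{
    Cluster cluster = new Cluster.Builder().addContactPoint("127.0.0.1").build();
    Session session = cluster.connect();
    ResultSet rs = session.execute(
            "SELECT source, activity FROM system_traces.events WHERE session_id = " + sessionId);
    for (Row row : rs)
        System.out.println(row.getInet("source") + " " + row.getString("activity"));
    cluster.close();
}
{code}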



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-5483) Repair tracing

2014-03-11 Thread Ben Chan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Chan updated CASSANDRA-5483:


Attachment: 
5483-v07-10-Correct-name-of-boolean-repairedAt-to-fullRepair.patch

5483-v07-09-Add-trace-option-to-a-more-complete-set-of-repair-functions.patch
5483-v07-08-Fix-brace-style.patch

5483-v07-07-Better-constructor-parameters-for-DebuggableThreadPoolExecutor.patch

> Repair tracing
> --
>
> Key: CASSANDRA-5483
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5483
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Yuki Morishita
>Assignee: Ben Chan
>Priority: Minor
>  Labels: repair
> Attachments: 5483-v06-04-Allow-tracing-ttl-to-be-configured.patch, 
> 5483-v06-05-Add-a-command-column-to-system_traces.events.patch, 
> 5483-v06-06-Fix-interruption-in-tracestate-propagation.patch, 
> 5483-v07-07-Better-constructor-parameters-for-DebuggableThreadPoolExecutor.patch,
>  5483-v07-08-Fix-brace-style.patch, 
> 5483-v07-09-Add-trace-option-to-a-more-complete-set-of-repair-functions.patch,
>  5483-v07-10-Correct-name-of-boolean-repairedAt-to-fullRepair.patch, 
> ccm-repair-test, test-5483-system_traces-events.txt, 
> trunk@4620823-5483-v02-0001-Trace-filtering-and-tracestate-propagation.patch, 
> trunk@4620823-5483-v02-0002-Put-a-few-traces-parallel-to-the-repair-logging.patch,
>  tr...@8ebeee1-5483-v01-001-trace-filtering-and-tracestate-propagation.txt, 
> tr...@8ebeee1-5483-v01-002-simple-repair-tracing.txt, 
> v02p02-5483-v03-0003-Make-repair-tracing-controllable-via-nodetool.patch, 
> v02p02-5483-v04-0003-This-time-use-an-EnumSet-to-pass-boolean-repair-options.patch,
>  v02p02-5483-v05-0003-Use-long-instead-of-EnumSet-to-work-with-JMX.patch
>
>
> I think it would be nice to log repair stats and results the way query tracing 
> stores traces in the system keyspace. With it, you don't have to look up each log 
> file to see what the status was and how the repair you invoked performed. 
> Instead, you can query the repair log with the session ID to see the state and 
> stats of all nodes involved in that repair session.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6793) NPE in Hadoop Word count example

2014-03-11 Thread Alex Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930740#comment-13930740
 ] 

Alex Liu commented on CASSANDRA-6793:
-

bq. (word text primary key, count int), and make a similar simplification for the 
input.

This should work.

The original implementation is meant to show how to use a composite primary key, 
so it has PRIMARY KEY ((row_id1, row_id2), word).

> NPE in Hadoop Word count example
> 
>
> Key: CASSANDRA-6793
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6793
> Project: Cassandra
>  Issue Type: Bug
>  Components: Examples
>Reporter: Chander S Pechetty
>Assignee: Chander S Pechetty
>Priority: Minor
>  Labels: hadoop
> Attachments: trunk-6793.txt
>
>
> The partition keys requested in WordCount.java do not match the primary key 
> set up in the table output_words. It looks like this patch was not merged 
> properly from 
> [CASSANDRA-5622|https://issues.apache.org/jira/browse/CASSANDRA-5622]. The 
> attached patch addresses the NPE and uses the correct keys defined in #5622.
> I am assuming there is no need to fix the actual NPE itself, e.g. by throwing an 
> InvalidRequestException back to the user to fix the partition keys, as it would 
> be trivial to get the same from the TableMetadata using the driver API.
> java.lang.NullPointerException
>   at 
> org.apache.cassandra.dht.Murmur3Partitioner.getToken(Murmur3Partitioner.java:92)
>   at 
> org.apache.cassandra.dht.Murmur3Partitioner.getToken(Murmur3Partitioner.java:40)
>   at org.apache.cassandra.client.RingCache.getRange(RingCache.java:117)
>   at 
> org.apache.cassandra.hadoop.cql3.CqlRecordWriter.write(CqlRecordWriter.java:163)
>   at 
> org.apache.cassandra.hadoop.cql3.CqlRecordWriter.write(CqlRecordWriter.java:63)
>   at 
> org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:587)
>   at 
> org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
>   at WordCount$ReducerToCassandra.reduce(Unknown Source)
>   at WordCount$ReducerToCassandra.reduce(Unknown Source)
>   at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:176)
>   at 
> org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:649)
>   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:417)
>   at 
> org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:260)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6811) nodetool no longer shows node joining

2014-03-11 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930717#comment-13930717
 ] 

Brandon Williams commented on CASSANDRA-6811:
-

LGTM, and more efficient by not making a jmx call for every node just to get 
the first token. +1

> nodetool no longer shows node joining
> -
>
> Key: CASSANDRA-6811
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6811
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Brandon Williams
>Assignee: Vijay
>Priority: Minor
> Fix For: 1.2.16
>
> Attachments: 0001-CASSANDRA-6811-v2.patch, ringfix.txt
>
>
> When we added effective ownership output to nodetool ring/status, we 
> accidentally began excluding joining nodes because we iterate the ownership 
> maps instead of the endpoint-to-token map when printing the output, and 
> the joining nodes don't have any ownership.  The simplest thing to do is 
> probably iterate the token map instead, and not output any ownership info for 
> joining nodes.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (CASSANDRA-6837) Batch CAS does not support LOCAL_SERIAL

2014-03-11 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams reassigned CASSANDRA-6837:
---

Assignee: Sylvain Lebresne

> Batch CAS does not support LOCAL_SERIAL
> ---
>
> Key: CASSANDRA-6837
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6837
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Nicolas Favre-Felix
>Assignee: Sylvain Lebresne
>
> The batch CAS feature introduced in Cassandra 2.0.6 does not support the 
> LOCAL_SERIAL consistency level, and always uses SERIAL.
> Create a cluster with 4 nodes with the following topology:
> {code}
> Datacenter: DC2
> ===
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  AddressLoad   Tokens  Owns   Host ID  
>  Rack
> UN  127.0.0.3  269 KB 256 26.3%  ae92d997-6042-42d9-b447-943080569742 
>  RAC1
> UN  127.0.0.4  197.81 KB  256 25.1%  3edc92d7-9d1b-472a-8452-24dddbc4502c 
>  RAC1
> Datacenter: DC1
> ===
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  AddressLoad   Tokens  Owns   Host ID  
>  Rack
> UN  127.0.0.1  226.92 KB  256 24.8%  dbc17bd7-1ede-47a2-9b31-6063752d6eb3 
>  RAC1
> UN  127.0.0.2  179.27 KB  256 23.7%  bb0ad285-34d2-4989-a664-b068986ab6fa 
>  RAC1
> {code}
> In cqlsh:
> {code}
> cqlsh> CREATE KEYSPACE foo WITH replication = {'class': 
> 'NetworkTopologyStrategy', 'DC1': 2, 'DC2': 2};
> cqlsh> USE foo;
> cqlsh:foo> CREATE TABLE bar (x text, y bigint, z bigint, t bigint, PRIMARY 
> KEY(x,y));
> {code}
> Kill nodes 127.0.0.3 and 127.0.0.4:
> {code}
> Datacenter: DC2
> ===
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  AddressLoad   Tokens  Owns   Host ID  
>  Rack
> DN  127.0.0.3  262.37 KB  256 26.3%  ae92d997-6042-42d9-b447-943080569742 
>  RAC1
> DN  127.0.0.4  208.04 KB  256 25.1%  3edc92d7-9d1b-472a-8452-24dddbc4502c 
>  RAC1
> Datacenter: DC1
> ===
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  AddressLoad   Tokens  Owns   Host ID  
>  Rack
> UN  127.0.0.1  214.82 KB  256 24.8%  dbc17bd7-1ede-47a2-9b31-6063752d6eb3 
>  RAC1
> UN  127.0.0.2  178.23 KB  256 23.7%  bb0ad285-34d2-4989-a664-b068986ab6fa 
>  RAC1
> {code}
> Connect to 127.0.0.1 in DC1 and run a CAS batch at 
> CL.LOCAL_SERIAL+LOCAL_QUORUM:
> {code}
> final Cluster cluster = new Cluster.Builder()
> .addContactPoint("127.0.0.1")
> .withLoadBalancingPolicy(new DCAwareRoundRobinPolicy("DC1"))
> .build();
> final Session session = cluster.connect("foo");
> Batch batch = QueryBuilder.batch();
> batch.add(new SimpleStatement("INSERT INTO bar (x,y,z) VALUES ('abc', 
> 123, 1) IF NOT EXISTS"));
> batch.add(new SimpleStatement("UPDATE bar SET t=2 WHERE x='abc' AND 
> y=123"));
> batch.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
> batch.setSerialConsistencyLevel(ConsistencyLevel.LOCAL_SERIAL);
> session.execute(batch);
> {code}
> The batch fails with:
> {code}
> Caused by: com.datastax.driver.core.exceptions.UnavailableException: Not 
> enough replica available for query at consistency SERIAL (3 required but only 
> 2 alive)
>   at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:44)
>   at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:33)
>   at 
> com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:182)
>   at 
> org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:66)
>   ... 21 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[2/3] git commit: Allow cassandra-stress to set compaction strategy options patch by Benedict Elliott Smith; reviewed by Russell Spitzer for CASSANDRA-6451

2014-03-11 Thread jbellis
Allow cassandra-stress to set compaction strategy options
patch by Benedict Elliott Smith; reviewed by Russell Spitzer for CASSANDRA-6451


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8e360f80
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8e360f80
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8e360f80

Branch: refs/heads/trunk
Commit: 8e360f80f4454c1c40edfefdf44b92bfbb9be6f1
Parents: b4f262e
Author: Jonathan Ellis 
Authored: Tue Mar 11 13:00:28 2014 -0500
Committer: Jonathan Ellis 
Committed: Tue Mar 11 13:01:10 2014 -0500

--
 CHANGES.txt |   1 +
 .../stress/settings/OptionCompaction.java   |  62 ++
 .../cassandra/stress/settings/OptionMulti.java  |  62 +-
 .../stress/settings/OptionReplication.java  | 112 ++-
 .../cassandra/stress/settings/OptionSimple.java |  59 +++---
 .../stress/settings/SettingsCommandMixed.java   |   2 +-
 .../stress/settings/SettingsSchema.java |  32 +++---
 7 files changed, 213 insertions(+), 117 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e360f80/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 607e2dc..06331ad 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.0-beta2
+ * Allow cassandra-stress to set compaction strategy options (CASSANDRA-6451)
  * Add broadcast_rpc_address option to cassandra.yaml (CASSANDRA-5899)
  * Auto reload GossipingPropertyFileSnitch config (CASSANDRA-5897)
  * Fix overflow of memtable_total_space_in_mb (CASSANDRA-6573)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e360f80/tools/stress/src/org/apache/cassandra/stress/settings/OptionCompaction.java
--
diff --git 
a/tools/stress/src/org/apache/cassandra/stress/settings/OptionCompaction.java 
b/tools/stress/src/org/apache/cassandra/stress/settings/OptionCompaction.java
new file mode 100644
index 000..da74e43
--- /dev/null
+++ 
b/tools/stress/src/org/apache/cassandra/stress/settings/OptionCompaction.java
@@ -0,0 +1,62 @@
+package org.apache.cassandra.stress.settings;
+
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+
+import com.google.common.base.Function;
+
+import org.apache.cassandra.config.CFMetaData;
+import org.apache.cassandra.exceptions.ConfigurationException;
+
+/**
+ * For specifying replication options
+ */
+class OptionCompaction extends OptionMulti
+{
+
+private final OptionSimple strategy = new OptionSimple("strategy=", new 
StrategyAdapter(), null, "The compaction strategy to use", false);
+
+public OptionCompaction()
+{
+super("compaction", "Define the compaction strategy and any 
parameters", true);
+}
+
+public String getStrategy()
+{
+return strategy.value();
+}
+
+public Map getOptions()
+{
+return extraOptions();
+}
+
+protected List options()
+{
+return Arrays.asList(strategy);
+}
+
+@Override
+public boolean happy()
+{
+return true;
+}
+
+private static final class StrategyAdapter implements Function<String, String>
+{
+
+public String apply(String name)
+{
+try
+{
+CFMetaData.createCompactionStrategy(name);
+} catch (ConfigurationException e)
+{
+throw new IllegalArgumentException("Invalid compaction 
strategy: " + name);
+}
+return name;
+}
+}
+
+}

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e360f80/tools/stress/src/org/apache/cassandra/stress/settings/OptionMulti.java
--
diff --git 
a/tools/stress/src/org/apache/cassandra/stress/settings/OptionMulti.java 
b/tools/stress/src/org/apache/cassandra/stress/settings/OptionMulti.java
index 1901587..7074dc6 100644
--- a/tools/stress/src/org/apache/cassandra/stress/settings/OptionMulti.java
+++ b/tools/stress/src/org/apache/cassandra/stress/settings/OptionMulti.java
@@ -22,7 +22,11 @@ package org.apache.cassandra.stress.settings;
 
 
 import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.LinkedHashMap;
 import java.util.List;
+import java.util.Map;
 import java.util.regex.Matcher;
 import java.util.regex.Pattern;
 
@@ -39,21 +43,34 @@ abstract class OptionMulti extends Option
 @Override
 public List options()
 {
-return OptionMulti.this.options();
+if (collectAsMap == null)
+return OptionMulti.this.options();
+
+List options = new ArrayList<>(OptionMulti.th

[1/3] git commit: Allow cassandra-stress to set compaction strategy options patch by Benedict Elliott Smith; reviewed by Russell Spitzer for CASSANDRA-6451

2014-03-11 Thread jbellis
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 b4f262e1b -> 8e360f80f
  refs/heads/trunk 2d92f14ba -> 5bc76b97e


Allow cassandra-stress to set compaction strategy options
patch by Benedict Elliott Smith; reviewed by Russell Spitzer for CASSANDRA-6451


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8e360f80
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8e360f80
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8e360f80

Branch: refs/heads/cassandra-2.1
Commit: 8e360f80f4454c1c40edfefdf44b92bfbb9be6f1
Parents: b4f262e
Author: Jonathan Ellis 
Authored: Tue Mar 11 13:00:28 2014 -0500
Committer: Jonathan Ellis 
Committed: Tue Mar 11 13:01:10 2014 -0500

--
 CHANGES.txt |   1 +
 .../stress/settings/OptionCompaction.java   |  62 ++
 .../cassandra/stress/settings/OptionMulti.java  |  62 +-
 .../stress/settings/OptionReplication.java  | 112 ++-
 .../cassandra/stress/settings/OptionSimple.java |  59 +++---
 .../stress/settings/SettingsCommandMixed.java   |   2 +-
 .../stress/settings/SettingsSchema.java |  32 +++---
 7 files changed, 213 insertions(+), 117 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e360f80/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 607e2dc..06331ad 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.0-beta2
+ * Allow cassandra-stress to set compaction strategy options (CASSANDRA-6451)
  * Add broadcast_rpc_address option to cassandra.yaml (CASSANDRA-5899)
  * Auto reload GossipingPropertyFileSnitch config (CASSANDRA-5897)
  * Fix overflow of memtable_total_space_in_mb (CASSANDRA-6573)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e360f80/tools/stress/src/org/apache/cassandra/stress/settings/OptionCompaction.java
--
diff --git 
a/tools/stress/src/org/apache/cassandra/stress/settings/OptionCompaction.java 
b/tools/stress/src/org/apache/cassandra/stress/settings/OptionCompaction.java
new file mode 100644
index 000..da74e43
--- /dev/null
+++ 
b/tools/stress/src/org/apache/cassandra/stress/settings/OptionCompaction.java
@@ -0,0 +1,62 @@
+package org.apache.cassandra.stress.settings;
+
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+
+import com.google.common.base.Function;
+
+import org.apache.cassandra.config.CFMetaData;
+import org.apache.cassandra.exceptions.ConfigurationException;
+
+/**
+ * For specifying replication options
+ */
+class OptionCompaction extends OptionMulti
+{
+
+private final OptionSimple strategy = new OptionSimple("strategy=", new 
StrategyAdapter(), null, "The compaction strategy to use", false);
+
+public OptionCompaction()
+{
+super("compaction", "Define the compaction strategy and any 
parameters", true);
+}
+
+public String getStrategy()
+{
+return strategy.value();
+}
+
+public Map getOptions()
+{
+return extraOptions();
+}
+
+protected List options()
+{
+return Arrays.asList(strategy);
+}
+
+@Override
+public boolean happy()
+{
+return true;
+}
+
+private static final class StrategyAdapter implements Function<String, String>
+{
+
+public String apply(String name)
+{
+try
+{
+CFMetaData.createCompactionStrategy(name);
+} catch (ConfigurationException e)
+{
+throw new IllegalArgumentException("Invalid compaction 
strategy: " + name);
+}
+return name;
+}
+}
+
+}

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e360f80/tools/stress/src/org/apache/cassandra/stress/settings/OptionMulti.java
--
diff --git 
a/tools/stress/src/org/apache/cassandra/stress/settings/OptionMulti.java 
b/tools/stress/src/org/apache/cassandra/stress/settings/OptionMulti.java
index 1901587..7074dc6 100644
--- a/tools/stress/src/org/apache/cassandra/stress/settings/OptionMulti.java
+++ b/tools/stress/src/org/apache/cassandra/stress/settings/OptionMulti.java
@@ -22,7 +22,11 @@ package org.apache.cassandra.stress.settings;
 
 
 import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.LinkedHashMap;
 import java.util.List;
+import java.util.Map;
 import java.util.regex.Matcher;
 import java.util.regex.Pattern;
 
@@ -39,21 +43,34 @@ abstract class OptionMulti extends Option
 @Override
 public List options()
 {
-return OptionMulti.this.options();
+

[3/3] git commit: Merge branch 'cassandra-2.1' into trunk

2014-03-11 Thread jbellis
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5bc76b97
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5bc76b97
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5bc76b97

Branch: refs/heads/trunk
Commit: 5bc76b97e4843fd366819523bb9e035964c07b37
Parents: 2d92f14 8e360f8
Author: Jonathan Ellis 
Authored: Tue Mar 11 13:01:16 2014 -0500
Committer: Jonathan Ellis 
Committed: Tue Mar 11 13:01:16 2014 -0500

--
 CHANGES.txt |   1 +
 .../stress/settings/OptionCompaction.java   |  62 ++
 .../cassandra/stress/settings/OptionMulti.java  |  62 +-
 .../stress/settings/OptionReplication.java  | 112 ++-
 .../cassandra/stress/settings/OptionSimple.java |  59 +++---
 .../stress/settings/SettingsCommandMixed.java   |   2 +-
 .../stress/settings/SettingsSchema.java |  32 +++---
 7 files changed, 213 insertions(+), 117 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5bc76b97/CHANGES.txt
--
diff --cc CHANGES.txt
index 107db23,06331ad..e867867
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,9 -1,5 +1,10 @@@
 +3.0
 + * Remove CQL2 (CASSANDRA-5918)
 + * add Thrift get_multi_slice call (CASSANDRA-6757)
 +
 +
  2.1.0-beta2
+  * Allow cassandra-stress to set compaction strategy options (CASSANDRA-6451)
   * Add broadcast_rpc_address option to cassandra.yaml (CASSANDRA-5899)
   * Auto reload GossipingPropertyFileSnitch config (CASSANDRA-5897)
   * Fix overflow of memtable_total_space_in_mb (CASSANDRA-6573)



[jira] [Commented] (CASSANDRA-6436) AbstractColumnFamilyInputFormat does not use start and end tokens configured via ConfigHelper.setInputRange()

2014-03-11 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930659#comment-13930659
 ] 

Jonathan Ellis commented on CASSANDRA-6436:
---

[~pkolaczk] can you review?

> AbstractColumnFamilyInputFormat does not use start and end tokens configured 
> via ConfigHelper.setInputRange()
> -
>
> Key: CASSANDRA-6436
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6436
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hadoop
>Reporter: Paulo Ricardo Motta Gomes
>  Labels: hadoop, patch
> Attachments: cassandra-1.2-6436.txt, cassandra-1.2-6436.txt
>
>
> ConfigHelper allows setting a token input range via the setInputRange(conf, 
> startToken, endToken) call (ConfigHelper:254).
> We used this feature to limit a Hadoop job's range to a single Cassandra node's 
> range, or even to a single row key, mostly for testing purposes. 
> This worked before the fix for CASSANDRA-5536 
> (https://github.com/apache/cassandra/commit/aaf18bd08af50bbaae0954d78d5e6cbb684aded9),
>  but after this ColumnFamilyInputFormat never uses the value of 
> KeyRange.start_token when defining the input splits 
> (AbstractColumnFamilyInputFormat:142-160), but only KeyRange.start_key, which 
> needs an order preserving partitioner to work.
> I propose the attached fix in order to allow defining Cassandra token ranges 
> for a given Hadoop job even when using a non-order preserving partitioner.
> Example use of ConfigHelper.setInputRange(conf, startToken, endToken) to 
> limit the range to a single Cassandra Key with RandomPartitioner: 
> IPartitioner part = ConfigHelper.getInputPartitioner(job.getConfiguration());
> Token token = part.getToken(ByteBufferUtil.bytes("Cassandra Key"));
> BigInteger endToken = (BigInteger) new 
> BigIntegerConverter().convert(BigInteger.class, 
> part.getTokenFactory().toString(token));
> BigInteger startToken = endToken.subtract(new BigInteger("1"));
> ConfigHelper.setInputRange(job.getConfiguration(), startToken.toString(), 
> endToken.toString());



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6793) NPE in Hadoop Word count example

2014-03-11 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930645#comment-13930645
 ] 

Jonathan Ellis commented on CASSANDRA-6793:
---

I confess that I'm mystified by the schema introduced in CASSANDRA-4421:

{noformat}
/**
 * This counts the occurrences of words in ColumnFamily
 *   cql3_worldcount ( user_id text,
 *   category_id text,
 *   sub_category_id text,
 *   title  text,
 *   body  text,
 *   PRIMARY KEY (user_id, category_id, sub_category_id))
 *
 * For each word, we output the total number of occurrences across all body 
texts.
 *
 * When outputting to Cassandra, we write the word counts to column family
 *  output_words ( row_id1 text,
 * row_id2 text,
 * word text,
 * count_num text,
 * PRIMARY KEY ((row_id1, row_id2), word))
 * as a {word, count} to columns: word, count_num with a row key of "word sum"
 */
{noformat}

Both the input and output tables look far more complex than necessary.  

My preferred solution would be to just strip the output down to {(word text 
primary key, count int)}, and make a similar simplification for the input.

Can you shed any light [~alexliu68]?
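
For reference, a hedged sketch of the stripped-down output table that suggestion implies, written through the Java driver; the keyspace name and sample row are invented and this is not part of any attached patch:

{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;

// Sketch only: create the simplified output table and write one made-up row.
public static void writeSimplifiedOutput()
{
    Cluster cluster = new Cluster.Builder().addContactPoint("127.0.0.1").build();
    Session session = cluster.connect();
    session.execute(new SimpleStatement(
            "CREATE TABLE IF NOT EXISTS wordcount.output_words " +
            "(word text PRIMARY KEY, count int)"));
    session.execute(new SimpleStatement(
            "INSERT INTO wordcount.output_words (word, count) VALUES ('cassandra', 42)"));
    cluster.close();
}
{code}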

> NPE in Hadoop Word count example
> 
>
> Key: CASSANDRA-6793
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6793
> Project: Cassandra
>  Issue Type: Bug
>  Components: Examples
>Reporter: Chander S Pechetty
>Assignee: Chander S Pechetty
>Priority: Minor
>  Labels: hadoop
> Attachments: trunk-6793.txt
>
>
> The partition keys requested in WordCount.java do not match the primary key 
> set up in the table output_words. It looks like this patch was not merged 
> properly from 
> [CASSANDRA-5622|https://issues.apache.org/jira/browse/CASSANDRA-5622]. The 
> attached patch addresses the NPE and uses the correct keys defined in #5622.
> I am assuming there is no need to fix the actual NPE itself, e.g. by throwing an 
> InvalidRequestException back to the user to fix the partition keys, as it would 
> be trivial to get the same from the TableMetadata using the driver API.
> java.lang.NullPointerException
>   at 
> org.apache.cassandra.dht.Murmur3Partitioner.getToken(Murmur3Partitioner.java:92)
>   at 
> org.apache.cassandra.dht.Murmur3Partitioner.getToken(Murmur3Partitioner.java:40)
>   at org.apache.cassandra.client.RingCache.getRange(RingCache.java:117)
>   at 
> org.apache.cassandra.hadoop.cql3.CqlRecordWriter.write(CqlRecordWriter.java:163)
>   at 
> org.apache.cassandra.hadoop.cql3.CqlRecordWriter.write(CqlRecordWriter.java:63)
>   at 
> org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:587)
>   at 
> org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
>   at WordCount$ReducerToCassandra.reduce(Unknown Source)
>   at WordCount$ReducerToCassandra.reduce(Unknown Source)
>   at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:176)
>   at 
> org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:649)
>   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:417)
>   at 
> org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:260)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (CASSANDRA-6793) NPE in Hadoop Word count example

2014-03-11 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930645#comment-13930645
 ] 

Jonathan Ellis edited comment on CASSANDRA-6793 at 3/11/14 5:50 PM:


I confess that I'm mystified by the schema introduced in CASSANDRA-4421:

{noformat}
/**
 * This counts the occurrences of words in ColumnFamily
 *   cql3_worldcount ( user_id text,
 *   category_id text,
 *   sub_category_id text,
 *   title  text,
 *   body  text,
 *   PRIMARY KEY (user_id, category_id, sub_category_id))
 *
 * For each word, we output the total number of occurrences across all body 
texts.
 *
 * When outputting to Cassandra, we write the word counts to column family
 *  output_words ( row_id1 text,
 * row_id2 text,
 * word text,
 * count_num text,
 * PRIMARY KEY ((row_id1, row_id2), word))
 * as a {word, count} to columns: word, count_num with a row key of "word sum"
 */
/**
 * This counts the occurrences of words in ColumnFamily
 *   cql3_worldcount ( user_id text,
 *   category_id text,
 *   sub_category_id text,
 *   title  text,
 *   body  text,
 *   PRIMARY KEY (user_id, category_id, sub_category_id))
 *
 * For each word, we output the total number of occurrences across all body 
texts.
 *
 * When outputting to Cassandra, we write the word counts to column family
 *  output_words ( row_id1 text,
 * row_id2 text,
 * word text,
 * count_num text,
 * PRIMARY KEY ((row_id1, row_id2), word))
 * as a {word, count} to columns: word, count_num with a row key of "word sum"
 */
{noformat}

Both the input and output tables look far more complex than necessary.  

My preferred solution would be to just strip the output down to {{(word text 
primary key, count int)}}, and make a similar simplification for the input.

Can you shed any light [~alexliu68]?


was (Author: jbellis):
I confess that I'm mystified by the schema introduced in CASSANDRA-4421:

{noformat}
/**
 * This counts the occurrences of words in ColumnFamily
 *   cql3_worldcount ( user_id text,
 *   category_id text,
 *   sub_category_id text,
 *   title  text,
 *   body  text,
 *   PRIMARY KEY (user_id, category_id, sub_category_id))
 *
 * For each word, we output the total number of occurrences across all body 
texts.
 *
 * When outputting to Cassandra, we write the word counts to column family
 *  output_words ( row_id1 text,
 * row_id2 text,
 * word text,
 * count_num text,
 * PRIMARY KEY ((row_id1, row_id2), word))
 * as a {word, count} to columns: word, count_num with a row key of "word sum"
 */
/**
 * This counts the occurrences of words in ColumnFamily
 *   cql3_worldcount ( user_id text,
 *   category_id text,
 *   sub_category_id text,
 *   title  text,
 *   body  text,
 *   PRIMARY KEY (user_id, category_id, sub_category_id))
 *
 * For each word, we output the total number of occurrences across all body 
texts.
 *
 * When outputting to Cassandra, we write the word counts to column family
 *  output_words ( row_id1 text,
 * row_id2 text,
 * word text,
 * count_num text,
 * PRIMARY KEY ((row_id1, row_id2), word))
 * as a {word, count} to columns: word, count_num with a row key of "word sum"
 */
{noformat}

Both the input and output tables look far more complex than necessary.  

My preferred solution would be to just strip the output down to {(word text 
primary key, count int)}, and make a similar simplification for the input.

Can you shed any light [~alexliu68]?

> NPE in Hadoop Word count example
> 
>
> Key: CASSANDRA-6793
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6793
> Project: Cassandra
>  Issue Type: Bug
>  Components: Examples
>Reporter: Chander S Pechetty
>Assignee: Chander S Pechetty
>Priority: Minor
>  Labels: hadoop
> Attachments: trunk-6793.txt
>
>
> The partition keys requested in WordCount.java do not match the primary key 
> set up in the table output_words. It looks this patch was not merged properly 
> from 
> [CASSANDRA-5622|https://issues.apache.org/jira/browse/CASSANDRA-5622].The 
> attached patch addresses the NPE and uses the correct keys defined in #5622.
> I am assuming there is no need to fix the actual NPE like throwing an 
> InvalidRequestException back to user t

[jira] [Comment Edited] (CASSANDRA-6793) NPE in Hadoop Word count example

2014-03-11 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930645#comment-13930645
 ] 

Jonathan Ellis edited comment on CASSANDRA-6793 at 3/11/14 5:51 PM:


I confess that I'm mystified by the schema introduced in CASSANDRA-4421:

{noformat}
/**
 * This counts the occurrences of words in ColumnFamily
 *   cql3_worldcount ( user_id text,
 *   category_id text,
 *   sub_category_id text,
 *   title  text,
 *   body  text,
 *   PRIMARY KEY (user_id, category_id, sub_category_id))
 *
 * For each word, we output the total number of occurrences across all body 
texts.
 *
 * When outputting to Cassandra, we write the word counts to column family
 *  output_words ( row_id1 text,
 * row_id2 text,
 * word text,
 * count_num text,
 * PRIMARY KEY ((row_id1, row_id2), word))
 * as a {word, count} to columns: word, count_num with a row key of "word sum"
 */
{noformat}

Both the input and output tables look far more complex than necessary.  

My preferred solution would be to just strip the output down to {{(word text 
primary key, count int)}}, and make a similar simplification for the input.

Can you shed any light [~alexliu68]?


was (Author: jbellis):
I confess that I'm mystified by the schema introduced in CASSANDRA-4421:

{noformat}
/**
 * This counts the occurrences of words in ColumnFamily
 *   cql3_worldcount ( user_id text,
 *   category_id text,
 *   sub_category_id text,
 *   title  text,
 *   body  text,
 *   PRIMARY KEY (user_id, category_id, sub_category_id))
 *
 * For each word, we output the total number of occurrences across all body 
texts.
 *
 * When outputting to Cassandra, we write the word counts to column family
 *  output_words ( row_id1 text,
 * row_id2 text,
 * word text,
 * count_num text,
 * PRIMARY KEY ((row_id1, row_id2), word))
 * as a {word, count} to columns: word, count_num with a row key of "word sum"
 */
/**
 * This counts the occurrences of words in ColumnFamily
 *   cql3_worldcount ( user_id text,
 *   category_id text,
 *   sub_category_id text,
 *   title  text,
 *   body  text,
 *   PRIMARY KEY (user_id, category_id, sub_category_id))
 *
 * For each word, we output the total number of occurrences across all body 
texts.
 *
 * When outputting to Cassandra, we write the word counts to column family
 *  output_words ( row_id1 text,
 * row_id2 text,
 * word text,
 * count_num text,
 * PRIMARY KEY ((row_id1, row_id2), word))
 * as a {word, count} to columns: word, count_num with a row key of "word sum"
 */
{noformat}

Both the input and output tables look far more complex than necessary.  

My preferred solution would be to just strip the output down to {{(word text 
primary key, count int)}}, and make a similar simplification for the input.

Can you shed any light [~alexliu68]?

> NPE in Hadoop Word count example
> 
>
> Key: CASSANDRA-6793
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6793
> Project: Cassandra
>  Issue Type: Bug
>  Components: Examples
>Reporter: Chander S Pechetty
>Assignee: Chander S Pechetty
>Priority: Minor
>  Labels: hadoop
> Attachments: trunk-6793.txt
>
>
> The partition keys requested in WordCount.java do not match the primary key 
> set up in the table output_words. It looks this patch was not merged properly 
> from 
> [CASSANDRA-5622|https://issues.apache.org/jira/browse/CASSANDRA-5622].The 
> attached patch addresses the NPE and uses the correct keys defined in #5622.
> I am assuming there is no need to fix the actual NPE like throwing an 
> InvalidRequestException back to user to fix the partition keys, as it would 
> be trivial to get the same from the TableMetadata using the driver API.
> java.lang.NullPointerException
>   at 
> org.apache.cassandra.dht.Murmur3Partitioner.getToken(Murmur3Partitioner.java:92)
>   at 
> org.apache.cassandra.dht.Murmur3Partitioner.getToken(Murmur3Partitioner.java:40)
>   at org.apache.cassandra.client.RingCache.getRange(RingCache.java:117)
>   at 
> org.apache.cassandra.hadoop.cql3.CqlRecordWriter.write(CqlRecordWriter.java:163)
>   at 
> org.apache.cassandra.hadoop.cql3.CqlRecordWriter.write(CqlRecordWriter.java:63)
>   at 
> org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:587)
>   at 
> org.apache.hadoop.mapreduce.TaskInputO

[Cassandra Wiki] Trivial Update of "GettingStarted" by TylerHobbs

2014-03-11 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for 
change notification.

The "GettingStarted" page has been changed by TylerHobbs:
https://wiki.apache.org/cassandra/GettingStarted?action=diff&rev1=97&rev2=98

Comment:
Fix DataModel link

  }}}
  
  == Write your application ==
- Review the resources on DataModeling.  The full CQL documentation is 
[[http://www.datastax.com/documentation/cql/3.0/webhelp/index.html|here]].
+ Review the resources on how to DataModel.  The full CQL documentation is 
[[http://www.datastax.com/documentation/cql/3.0/webhelp/index.html|here]].
  
  DataStax sponsors development of the CQL drivers at 
https://github.com/datastax.  The full list of CQL drivers is on the 
ClientOptions page.
  


[jira] [Commented] (CASSANDRA-6307) Switch cqlsh from cassandra-dbapi2 to python-driver

2014-03-11 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930610#comment-13930610
 ] 

Tyler Hobbs commented on CASSANDRA-6307:


bq. Tyler Hobbs is it possible to get a trace for any session which was traced 
before? I believe the driver only populates trace for statements executed with 
trace=True. But CQLSH has to support SHOW SESSION  command, for any 
particular session, and I see no way to retrieve that info from the driver

[~mishail] you can create a new {{cassandra.query.Trace}} object (which takes a 
trace uuid and a Session to query with) and call {{populate()}} on it.  That 
would work for any trace.
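
(For reference, a minimal sketch of that approach; in the released python-driver 
the class is {{cassandra.query.QueryTrace}}, and the trace id and contact point 
below are just placeholders:)

{code}
# Re-populate an arbitrary trace by its session id (placeholder UUID).
from uuid import UUID
from cassandra.cluster import Cluster
from cassandra.query import QueryTrace

session = Cluster(['127.0.0.1']).connect()
trace = QueryTrace(UUID('00000000-0000-0000-0000-000000000000'), session)
trace.populate()                  # reads system_traces for this trace id
for event in trace.events:
    print(event.description)
{code}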

> Switch cqlsh from cassandra-dbapi2 to python-driver
> ---
>
> Key: CASSANDRA-6307
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6307
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Aleksey Yeschenko
>Assignee: Mikhail Stepura
>Priority: Minor
> Fix For: 2.1 beta2
>
>
> python-driver is hitting 1.0 soon. cassandra-dbapi2 development has stalled.
> It's time to switch cqlsh to native protocol and cassandra-dbapi2, especially 
> now that
> 1. Some CQL3 things are not supported by Thrift transport
> 2. cqlsh no longer has to support CQL2 (dropped in 2.0)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6828) inline thrift documentation is slightly sparse

2014-03-11 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930594#comment-13930594
 ] 

Tyler Hobbs commented on CASSANDRA-6828:


Overall the new docs look good, thanks.  The one point I disagree with is this:

{quote}
Batch mutations are very efficient and should be prefered over doing multiple 
inserts.
{quote}

Batch mutations also have downsides.  They put more temporary load on the 
coordinator, which can cause GC problems and spikes in latency when the batches 
are large.  When a large batch mutation fails, you have to retry the entire 
thing, even if only one of the mutations in the batch failed.  I would just 
tone down the "preferred" language and add those disclaimers.
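
(As an illustration of the retry-granularity point: a hedged python-driver 
sketch rather than the Thrift API the docs cover; keyspace, table, and column 
names are made up.)

{code}
# Illustrative only: smaller batches keep coordinator load down and let a
# failed chunk be retried on its own.  Keyspace/table names are hypothetical.
from cassandra.cluster import Cluster
from cassandra.query import BatchStatement, SimpleStatement

session = Cluster(['127.0.0.1']).connect('demo_ks')
insert = SimpleStatement(
    "INSERT INTO word_counts (word, count_num) VALUES (%s, %s)")

def write_in_chunks(rows, chunk_size=50):
    for start in range(0, len(rows), chunk_size):
        batch = BatchStatement()
        for word, count in rows[start:start + chunk_size]:
            batch.add(insert, (word, count))
        session.execute(batch)    # a timeout here only costs this chunk a retry
{code}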

Other than that, I think this is good to go.

> inline thrift documentation is slightly sparse 
> ---
>
> Key: CASSANDRA-6828
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6828
> Project: Cassandra
>  Issue Type: Improvement
>  Components: API, Documentation & website
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6823) TimedOutException/dropped mutations running stress on 2.1

2014-03-11 Thread dan jatnieks (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930507#comment-13930507
 ] 

dan jatnieks commented on CASSANDRA-6823:
-

yup, thanks Benedict


> TimedOutException/dropped mutations running stress on 2.1 
> --
>
> Key: CASSANDRA-6823
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6823
> Project: Cassandra
>  Issue Type: Bug
>Reporter: dan jatnieks
>Priority: Minor
>  Labels: stress
> Attachments: stress.log, system.log
>
>
> While testing CASSANDRA-6357, I am seeing TimedOutException errors running 
> stress on both 2.1 and trunk, and system log is showing dropped mutation 
> messages.
> {noformat}
> $ ant -Dversion=2.1.0-SNAPSHOT jar
> $ ./bin/cassandra
> $ ./cassandra-2.1/tools/bin/cassandra-stress write n=1000
> Created keyspaces. Sleeping 1s for propagation.
> Warming up WRITE with 5 iterations...
> Connected to cluster: Test Cluster
> Datatacenter: datacenter1; Host: localhost/127.0.0.1; Rack: rack1
> Sleeping 2s...
> Running WRITE with 50 threads  for 1000 iterations
> ops   ,op/s,adj op/s,   key/s,mean, med, .95, .99,
> .999, max,   time,   stderr
> 74597 ,   74590,   74590,   74590, 0.7, 0.3, 1.7, 7.8,
> 39.4,   156.0,1.0,  0.0
> 175807,  100469,  111362,  100469, 0.5, 0.3, 1.0, 2.2,
> 16.4,   105.2,2.0,  0.0
> 278037,  100483,  110412,  100483, 0.5, 0.4, 0.9, 2.2,
> 15.9,95.4,3.0,  0.13983
> 366806,   86301,   86301,   86301, 0.6, 0.4, 0.9, 2.4,
> 97.6,   107.0,4.1,  0.10002
> 473244,  105209,  115906,  105209, 0.5, 0.3, 1.0, 2.2,
> 10.2,99.6,5.1,  0.08246
> 574363,   99939,  112606,   99939, 0.5, 0.3, 1.0, 2.2,
>  8.4,   115.3,6.1,  0.07297
> 665162,   89343,   89343,   89343, 0.6, 0.3, 1.1, 2.3,
> 12.5,   116.4,7.1,  0.06256
> 768575,  102028,  102028,  102028, 0.5, 0.3, 1.0, 2.1,
> 10.7,   116.0,8.1,  0.05703
> 870318,  100383,  112278,  100383, 0.5, 0.4, 1.0, 2.1,
>  8.2,   109.1,9.1,  0.04984
> 972584,  100496,  111616,  100496, 0.5, 0.3, 1.0, 2.3,
> 10.3,   109.1,   10.1,  0.04542
> 1063466   ,   88566,   88566,   88566, 0.6, 0.3, 1.1, 2.5,   
> 107.3,   116.9,   11.2,  0.04152
> 1163218   ,   98512,  107549,   98512, 0.5, 0.3, 1.2, 3.4,
> 17.9,92.9,   12.2,  0.04007
> 1257989   ,   93578,  103808,   93578, 0.5, 0.3, 1.4, 3.8,
> 12.6,   105.6,   13.2,  0.03687
> 1349628   ,   90205,   99257,   90205, 0.6, 0.3, 1.2, 2.9,
> 20.3,99.6,   14.2,  0.03401
> 1448125   ,   97133,  106429,   97133, 0.5, 0.3, 1.2, 2.9,
> 11.9,   102.2,   15.2,  0.03170
> 1536662   ,   87137,   95464,   87137, 0.6, 0.4, 1.1, 2.9,
> 83.7,94.0,   16.2,  0.02964
> 1632373   ,   94446,  102735,   94446, 0.5, 0.4, 1.1, 2.6,
> 11.7,85.5,   17.2,  0.02818
> 1717028   ,   83533,   83533,   83533, 0.6, 0.4, 1.1, 2.7,
> 87.4,   101.8,   18.3,  0.02651
> 1817081   ,   97807,  108004,   97807, 0.5, 0.3, 1.1, 2.5,
> 14.5,99.1,   19.3,  0.02712
> 1904103   ,   85634,   94846,   85634, 0.6, 0.3, 1.2, 3.0,
> 92.4,   105.3,   20.3,  0.02585
> 2001438   ,   95991,  104822,   95991, 0.5, 0.3, 1.2, 2.7,
> 13.5,95.3,   21.3,  0.02482
> 2086571   ,   89121,   99429,   89121, 0.6, 0.3, 1.2, 3.2,
> 30.9,   103.3,   22.3,  0.02367
> 2184096   ,   88718,   97020,   88718, 0.6, 0.3, 1.3, 3.2,
> 85.6,98.0,   23.4,  0.02262
> 2276823   ,   91795,   91795,   91795, 0.5, 0.3, 1.3, 3.5,
> 81.1,   102.1,   24.4,  0.02174
> 2381493   ,  101074,  101074,  101074, 0.5, 0.3, 1.3, 3.3,
> 12.9,99.1,   25.4,  0.02123
> 2466415   ,   83368,   92292,   83368, 0.6, 0.4, 1.2, 3.0,
> 14.3,   188.5,   26.4,  0.02037
> 2567406   ,  100099,  109267,  100099, 0.5, 0.3, 1.4, 3.3,
> 10.9,94.2,   27.4,  0.01989
> 2653040   ,   84476,   91922,   84476, 0.6, 0.3, 1.4, 3.2,
> 77.0,   100.3,   28.5,  0.01937
> TimedOutException(acknowledged_by:0)
> TimedOutException(acknowledged_by:0)
> TimedOutException(acknowledged_by:0)
> TimedOutException(acknowledged_by:0)
> TimedOutException(acknowledged_by:0)
> TimedOutException(acknowledged_by:0)
> TimedOutException(acknowledged_by:0)
> TimedOutException(acknowledged_by:0)
> ...
> 9825371   ,   84636,   91716,   84636,   

[jira] [Created] (CASSANDRA-6837) Batch CAS does not support LOCAL_SERIAL

2014-03-11 Thread Nicolas Favre-Felix (JIRA)
Nicolas Favre-Felix created CASSANDRA-6837:
--

 Summary: Batch CAS does not support LOCAL_SERIAL
 Key: CASSANDRA-6837
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6837
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Nicolas Favre-Felix


The batch CAS feature introduced in Cassandra 2.0.6 does not support the 
LOCAL_SERIAL consistency level, and always uses SERIAL.

Create a cluster with 4 nodes with the following topology:

{code}
Datacenter: DC2
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  AddressLoad   Tokens  Owns   Host ID   
Rack
UN  127.0.0.3  269 KB 256 26.3%  ae92d997-6042-42d9-b447-943080569742  
RAC1
UN  127.0.0.4  197.81 KB  256 25.1%  3edc92d7-9d1b-472a-8452-24dddbc4502c  
RAC1
Datacenter: DC1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  AddressLoad   Tokens  Owns   Host ID   
Rack
UN  127.0.0.1  226.92 KB  256 24.8%  dbc17bd7-1ede-47a2-9b31-6063752d6eb3  
RAC1
UN  127.0.0.2  179.27 KB  256 23.7%  bb0ad285-34d2-4989-a664-b068986ab6fa  
RAC1
{code}

In cqlsh:
{code}
cqlsh> CREATE KEYSPACE foo WITH replication = {'class': 
'NetworkTopologyStrategy', 'DC1': 2, 'DC2': 2};
cqlsh> USE foo;
cqlsh:foo> CREATE TABLE bar (x text, y bigint, z bigint, t bigint, PRIMARY 
KEY(x,y));
{code}

Kill nodes 127.0.0.3 and 127.0.0.4:

{code}
Datacenter: DC2
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  AddressLoad   Tokens  Owns   Host ID   
Rack
DN  127.0.0.3  262.37 KB  256 26.3%  ae92d997-6042-42d9-b447-943080569742  
RAC1
DN  127.0.0.4  208.04 KB  256 25.1%  3edc92d7-9d1b-472a-8452-24dddbc4502c  
RAC1
Datacenter: DC1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  AddressLoad   Tokens  Owns   Host ID   
Rack
UN  127.0.0.1  214.82 KB  256 24.8%  dbc17bd7-1ede-47a2-9b31-6063752d6eb3  
RAC1
UN  127.0.0.2  178.23 KB  256 23.7%  bb0ad285-34d2-4989-a664-b068986ab6fa  
RAC1
{code}

Connect to 127.0.0.1 in DC1 and run a CAS batch at CL.LOCAL_SERIAL+LOCAL_QUORUM:

{code}
final Cluster cluster = new Cluster.Builder()
.addContactPoint("127.0.0.1")
.withLoadBalancingPolicy(new DCAwareRoundRobinPolicy("DC1"))
.build();

final Session session = cluster.connect("foo");

Batch batch = QueryBuilder.batch();
batch.add(new SimpleStatement("INSERT INTO bar (x,y,z) VALUES ('abc', 
123, 1) IF NOT EXISTS"));
batch.add(new SimpleStatement("UPDATE bar SET t=2 WHERE x='abc' AND 
y=123"));

batch.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
batch.setSerialConsistencyLevel(ConsistencyLevel.LOCAL_SERIAL);

session.execute(batch);
{code}

The batch fails with:

{code}
Caused by: com.datastax.driver.core.exceptions.UnavailableException: Not enough 
replica available for query at consistency SERIAL (3 required but only 2 alive)
at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:44)
at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:33)
at 
com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:182)
at 
org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:66)
... 21 more
{code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-6836) WriteTimeoutException always reports that the serial CL is "SERIAL"

2014-03-11 Thread Nicolas Favre-Felix (JIRA)
Nicolas Favre-Felix created CASSANDRA-6836:
--

 Summary: WriteTimeoutException always reports that the serial CL 
is "SERIAL"
 Key: CASSANDRA-6836
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6836
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Nicolas Favre-Felix
Priority: Minor


In StorageProxy.proposePaxos, the WriteTimeoutException is thrown with 
information about the consistency level. This CL is hardcoded to 
ConsistencyLevel.SERIAL, which might be wrong when LOCAL_SERIAL is used:

{code}
if (timeoutIfPartial && !callback.isFullyRefused())
throw new WriteTimeoutException(WriteType.CAS, 
ConsistencyLevel.SERIAL, callback.getAcceptCount(), requiredParticipants);
{code}

Suggested fix: pass consistencyForPaxos as a parameter to proposePaxos().



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (CASSANDRA-6823) TimedOutException/dropped mutations running stress on 2.1

2014-03-11 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict resolved CASSANDRA-6823.
-

Resolution: Not A Problem

> TimedOutException/dropped mutations running stress on 2.1 
> --
>
> Key: CASSANDRA-6823
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6823
> Project: Cassandra
>  Issue Type: Bug
>Reporter: dan jatnieks
>Priority: Minor
>  Labels: stress
> Attachments: stress.log, system.log
>
>
> While testing CASSANDRA-6357, I am seeing TimedOutException errors running 
> stress on both 2.1 and trunk, and system log is showing dropped mutation 
> messages.
> {noformat}
> $ ant -Dversion=2.1.0-SNAPSHOT jar
> $ ./bin/cassandra
> $ ./cassandra-2.1/tools/bin/cassandra-stress write n=1000
> Created keyspaces. Sleeping 1s for propagation.
> Warming up WRITE with 5 iterations...
> Connected to cluster: Test Cluster
> Datatacenter: datacenter1; Host: localhost/127.0.0.1; Rack: rack1
> Sleeping 2s...
> Running WRITE with 50 threads  for 1000 iterations
> ops   ,op/s,adj op/s,   key/s,mean, med, .95, .99,
> .999, max,   time,   stderr
> 74597 ,   74590,   74590,   74590, 0.7, 0.3, 1.7, 7.8,
> 39.4,   156.0,1.0,  0.0
> 175807,  100469,  111362,  100469, 0.5, 0.3, 1.0, 2.2,
> 16.4,   105.2,2.0,  0.0
> 278037,  100483,  110412,  100483, 0.5, 0.4, 0.9, 2.2,
> 15.9,95.4,3.0,  0.13983
> 366806,   86301,   86301,   86301, 0.6, 0.4, 0.9, 2.4,
> 97.6,   107.0,4.1,  0.10002
> 473244,  105209,  115906,  105209, 0.5, 0.3, 1.0, 2.2,
> 10.2,99.6,5.1,  0.08246
> 574363,   99939,  112606,   99939, 0.5, 0.3, 1.0, 2.2,
>  8.4,   115.3,6.1,  0.07297
> 665162,   89343,   89343,   89343, 0.6, 0.3, 1.1, 2.3,
> 12.5,   116.4,7.1,  0.06256
> 768575,  102028,  102028,  102028, 0.5, 0.3, 1.0, 2.1,
> 10.7,   116.0,8.1,  0.05703
> 870318,  100383,  112278,  100383, 0.5, 0.4, 1.0, 2.1,
>  8.2,   109.1,9.1,  0.04984
> 972584,  100496,  111616,  100496, 0.5, 0.3, 1.0, 2.3,
> 10.3,   109.1,   10.1,  0.04542
> 1063466   ,   88566,   88566,   88566, 0.6, 0.3, 1.1, 2.5,   
> 107.3,   116.9,   11.2,  0.04152
> 1163218   ,   98512,  107549,   98512, 0.5, 0.3, 1.2, 3.4,
> 17.9,92.9,   12.2,  0.04007
> 1257989   ,   93578,  103808,   93578, 0.5, 0.3, 1.4, 3.8,
> 12.6,   105.6,   13.2,  0.03687
> 1349628   ,   90205,   99257,   90205, 0.6, 0.3, 1.2, 2.9,
> 20.3,99.6,   14.2,  0.03401
> 1448125   ,   97133,  106429,   97133, 0.5, 0.3, 1.2, 2.9,
> 11.9,   102.2,   15.2,  0.03170
> 1536662   ,   87137,   95464,   87137, 0.6, 0.4, 1.1, 2.9,
> 83.7,94.0,   16.2,  0.02964
> 1632373   ,   94446,  102735,   94446, 0.5, 0.4, 1.1, 2.6,
> 11.7,85.5,   17.2,  0.02818
> 1717028   ,   83533,   83533,   83533, 0.6, 0.4, 1.1, 2.7,
> 87.4,   101.8,   18.3,  0.02651
> 1817081   ,   97807,  108004,   97807, 0.5, 0.3, 1.1, 2.5,
> 14.5,99.1,   19.3,  0.02712
> 1904103   ,   85634,   94846,   85634, 0.6, 0.3, 1.2, 3.0,
> 92.4,   105.3,   20.3,  0.02585
> 2001438   ,   95991,  104822,   95991, 0.5, 0.3, 1.2, 2.7,
> 13.5,95.3,   21.3,  0.02482
> 2086571   ,   89121,   99429,   89121, 0.6, 0.3, 1.2, 3.2,
> 30.9,   103.3,   22.3,  0.02367
> 2184096   ,   88718,   97020,   88718, 0.6, 0.3, 1.3, 3.2,
> 85.6,98.0,   23.4,  0.02262
> 2276823   ,   91795,   91795,   91795, 0.5, 0.3, 1.3, 3.5,
> 81.1,   102.1,   24.4,  0.02174
> 2381493   ,  101074,  101074,  101074, 0.5, 0.3, 1.3, 3.3,
> 12.9,99.1,   25.4,  0.02123
> 2466415   ,   83368,   92292,   83368, 0.6, 0.4, 1.2, 3.0,
> 14.3,   188.5,   26.4,  0.02037
> 2567406   ,  100099,  109267,  100099, 0.5, 0.3, 1.4, 3.3,
> 10.9,94.2,   27.4,  0.01989
> 2653040   ,   84476,   91922,   84476, 0.6, 0.3, 1.4, 3.2,
> 77.0,   100.3,   28.5,  0.01937
> TimedOutException(acknowledged_by:0)
> TimedOutException(acknowledged_by:0)
> TimedOutException(acknowledged_by:0)
> TimedOutException(acknowledged_by:0)
> TimedOutException(acknowledged_by:0)
> TimedOutException(acknowledged_by:0)
> TimedOutException(acknowledged_by:0)
> TimedOutException(acknowledged_by:0)
> ...
> 9825371   ,   84636,   91716,   84636, 0.6, 0.3, 1.4, 4.5,
> 23.4,86.4, 

[jira] [Commented] (CASSANDRA-6833) Add json data type

2014-03-11 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930411#comment-13930411
 ] 

Aleksey Yeschenko commented on CASSANDRA-6833:
--

bq. Actually, to my mind this whole thing makes a lot of sense: use jsonblob 
for prototyping then once the schema settles convert to UDT. The json 
read/writes still work as intended, but magically it gets better for field 
lookups etc.

Or, you know, just put it in a blob, since it only affects validation anyway, 
and changes literally nothing else - except maybe cqlsh output formatting.

> Add json data type
> --
>
> Key: CASSANDRA-6833
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6833
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jonathan Ellis
>Priority: Minor
> Fix For: 2.0.7
>
>
> While recognizing that UDT (CASSANDRA-5590) is the Right Way to store 
> hierarchical data in C*, it can still be useful to store json blobs as text.  
> Adding a json type would allow validating that data.  (And adding formatting 
> support in cqlsh?)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6833) Add json data type

2014-03-11 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930412#comment-13930412
 ] 

Aleksey Yeschenko commented on CASSANDRA-6833:
--

Or a text field, so it still looks reasonably decent in cqlsh.

> Add json data type
> --
>
> Key: CASSANDRA-6833
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6833
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jonathan Ellis
>Priority: Minor
> Fix For: 2.0.7
>
>
> While recognizing that UDT (CASSANDRA-5590) is the Right Way to store 
> hierarchical data in C*, it can still be useful to store json blobs as text.  
> Adding a json type would allow validating that data.  (And adding formatting 
> support in cqlsh?)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6833) Add json data type

2014-03-11 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930410#comment-13930410
 ] 

Benedict commented on CASSANDRA-6833:
-

Actually, to my mind this whole thing makes a lot of sense: use jsonblob for 
prototyping then once the schema settles convert to UDT. The json read/writes 
still work as intended, but magically it gets better for field lookups etc.

> Add json data type
> --
>
> Key: CASSANDRA-6833
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6833
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jonathan Ellis
>Priority: Minor
> Fix For: 2.0.7
>
>
> While recognizing that UDT (CASSANDRA-5590) is the Right Way to store 
> hierarchical data in C*, it can still be useful to store json blobs as text.  
> Adding a json type would allow validating that data.  (And adding formatting 
> support in cqlsh?)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6833) Add json data type

2014-03-11 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930407#comment-13930407
 ] 

Benedict commented on CASSANDRA-6833:
-

If we call the datatype a 'jsonblob' maybe it will remind people that it isn't 
efficient.

> Add json data type
> --
>
> Key: CASSANDRA-6833
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6833
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jonathan Ellis
>Priority: Minor
> Fix For: 2.0.7
>
>
> While recognizing that UDT (CASSANDRA-5590) is the Right Way to store 
> hierarchical data in C*, it can still be useful to store json blobs as text.  
> Adding a json type would allow validating that data.  (And adding formatting 
> support in cqlsh?)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6790) Triggers are broken in trunk because of imutable list

2014-03-11 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930408#comment-13930408
 ] 

Edward Capriolo commented on CASSANDRA-6790:


Sweet

> Triggers are broken in trunk because of imutable list
> -
>
> Key: CASSANDRA-6790
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6790
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Edward Capriolo
>Assignee: Sam Tunnicliffe
>Priority: Minor
> Fix For: 2.0.7
>
> Attachments: 
> 0001-Apply-trigger-mutations-when-base-mutation-list-is-i.patch
>
>
> The trigger code is uncovered by any tests (that I can find). When inserting 
> single columns an immutable list is created. When the trigger attempts to 
> edit this list the operation fails.
> Fix coming shortly.
> {noformat}
> java.lang.UnsupportedOperationException
> at java.util.AbstractList.add(AbstractList.java:148)
> at java.util.AbstractList.add(AbstractList.java:108)
> at 
> java.util.AbstractCollection.addAll(AbstractCollection.java:342)
> at 
> org.apache.cassandra.service.StorageProxy.mutateWithTriggers(StorageProxy.java:522)
> at 
> org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:1084)
> at 
> org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:1066)
> at 
> org.apache.cassandra.thrift.CassandraServer.internal_insert(CassandraServer.java:676)
> at 
> org.apache.cassandra.thrift.CassandraServer.insert(CassandraServer.java:697)
> at 
> org.apache.cassandra.triggers.TriggerTest.createATriggerWithCqlAndReadItBackFromthrift(TriggerTest.java:108)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:44)
> at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41)
> at 
> org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
> at 
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
> at 
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6689) Partially Off Heap Memtables

2014-03-11 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930404#comment-13930404
 ] 

Benedict commented on CASSANDRA-6689:
-

[~krummas]:

It will make me sad, but you're absolutely right, it isn't necessary just yet.

The only thing to question is the change of default in conf/cassandra.yaml, but 
I'm guessing this is a debugging oversight.

> Partially Off Heap Memtables
> 
>
> Key: CASSANDRA-6689
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6689
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core
>Reporter: Benedict
>Assignee: Benedict
> Fix For: 2.1 beta2
>
> Attachments: CASSANDRA-6689-small-changes.patch
>
>
> Move the contents of ByteBuffers off-heap for records written to a memtable.
> (See comments for details)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6833) Add json data type

2014-03-11 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930394#comment-13930394
 ] 

Jonathan Ellis commented on CASSANDRA-6833:
---

bq. I'm also strongly -1 on adding new CQL syntax for it, and even stronger -1 
on making it cqlsh-only. There is an expectation that CQL queries that work in 
cqlsh can be copied to the actual application code and be used with the 
java/python-drivers, and this would violate that expectation.

I'm not sure what you're reacting to here, but nothing I actually wrote 
suggests adding queries that work in cqlsh but not python|java|other drivers.

> Add json data type
> --
>
> Key: CASSANDRA-6833
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6833
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jonathan Ellis
>Priority: Minor
> Fix For: 2.0.7
>
>
> While recognizing that UDT (CASSANDRA-5590) is the Right Way to store 
> hierarchical data in C*, it can still be useful to store json blobs as text.  
> Adding a json type would allow validating that data.  (And adding formatting 
> support in cqlsh?)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6689) Partially Off Heap Memtables

2014-03-11 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930396#comment-13930396
 ] 

Marcus Eriksson commented on CASSANDRA-6689:


Reviewing under the assumption that 1-3 will go in 2.1 and the rest into 3.0; 
otherwise there is some stuff in #1 that should be in #3, etc., but I'm leaving 
that aside for now.

My main point when reviewing was that I found myself trying to wrap my head 
around the Group concept several times, especially since it is not actually 
adding any functionality at this stage (I know it will when we do GC). We 
should probably remove it, since it adds indirection that we don't need right 
now. I've pushed a branch with the DataGroup and various Group classes in 
o.a.c.u.memory removed here: 
https://github.com/krummas/cassandra/commits/bes/6689-3.1, wdyt?



> Partially Off Heap Memtables
> 
>
> Key: CASSANDRA-6689
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6689
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core
>Reporter: Benedict
>Assignee: Benedict
> Fix For: 2.1 beta2
>
> Attachments: CASSANDRA-6689-small-changes.patch
>
>
> Move the contents of ByteBuffers off-heap for records written to a memtable.
> (See comments for details)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6736) Windows7 AccessDeniedException on commit log

2014-03-11 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930391#comment-13930391
 ] 

Joshua McKenzie commented on CASSANDRA-6736:


Bill - thanks for the heads up.  As far as I know nobody else has seen this, and 
I haven't been able to reproduce it even with a threefold increase in batchers.  
Excluding C* folders from AV processing is probably something we need to 
document from a performance-implications perspective, regardless of file 
locking.



> Windows7 AccessDeniedException on commit log 
> -
>
> Key: CASSANDRA-6736
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6736
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Windows 7, quad core, 8GB RAM, single Cassandra node, 
> Cassandra 2.0.5 with leakdetect patch from CASSANDRA-6283
>Reporter: Bill Mitchell
>Assignee: Joshua McKenzie
> Attachments: 2014-02-18-22-16.log
>
>
> Similar to the data file deletion of CASSANDRA-6283, under heavy load with 
> logged batches, I am seeing a problem where the Commit log cannot be deleted:
>  ERROR [COMMIT-LOG-ALLOCATOR] 2014-02-18 22:15:58,252 CassandraDaemon.java 
> (line 192) Exception in thread Thread[COMMIT-LOG-ALLOCATOR,5,main]
>  FSWriteError in C:\Program Files\DataStax 
> Community\data\commitlog\CommitLog-3-1392761510706.log
>   at 
> org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:120)
>   at 
> org.apache.cassandra.db.commitlog.CommitLogSegment.discard(CommitLogSegment.java:150)
>   at 
> org.apache.cassandra.db.commitlog.CommitLogAllocator$4.run(CommitLogAllocator.java:217)
>   at 
> org.apache.cassandra.db.commitlog.CommitLogAllocator$1.runMayThrow(CommitLogAllocator.java:95)
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>   at java.lang.Thread.run(Unknown Source)
> Caused by: java.nio.file.AccessDeniedException: C:\Program Files\DataStax 
> Community\data\commitlog\CommitLog-3-1392761510706.log
>   at sun.nio.fs.WindowsException.translateToIOException(Unknown Source)
>   at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
>   at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
>   at sun.nio.fs.WindowsFileSystemProvider.implDelete(Unknown Source)
>   at sun.nio.fs.AbstractFileSystemProvider.delete(Unknown Source)
>   at java.nio.file.Files.delete(Unknown Source)
>   at 
> org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:116)
>   ... 5 more
> (Attached in 2014-02-18-22-16.log is a larger excerpt from the cassandra.log.)
> In this particular case, I was trying to do 100 million inserts into two 
> tables in parallel, one with a single wide row and one with narrow rows, and 
> the error appeared after inserting 43,151,232 rows.  So it does take a while 
> to trip over this timing issue.  
> It may be aggravated by the size of the batches. This test was writing 10,000 
> rows to each table in a batch.  
> When I try switching the same test from using a logged batch to an unlogged 
> batch, and no such failure appears. So the issue could be related to the use 
> of large, logged batches, or it could be that unlogged batches just change 
> the probability of failure.  



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6833) Add json data type

2014-03-11 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930317#comment-13930317
 ] 

Aleksey Yeschenko commented on CASSANDRA-6833:
--

I'm relatively strongly -1 on this. Even if it's just validation, it does send 
a wrong message to users - especially the newcomers.

I'm afraid that people migrating from other databases, esp. with JSON-centric 
data models, would choose the 'easy' migration route and just continue sticking 
their stuff into C* JSON columns (now using C* as a primitive key->JSON value 
store), instead of remodeling it for wide C* partitions, collections, and user 
types.

We shouldn't be making the wrong way to do things easier (and it's already easy 
as it is - you can stick all your JSON into a blob/text column). Adding an 
official JSON type on top of that only legitimizes it and thus makes it worse.
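
(For what it's worth, a small sketch of that existing route via the 
python-driver; the keyspace, table, and columns here are hypothetical:)

{code}
# Storing JSON in a plain text column today; no new type needed.
# Keyspace/table/column names are hypothetical.
import json
from cassandra.cluster import Cluster

session = Cluster(['127.0.0.1']).connect('demo_ks')
doc = {"name": "widget", "tags": ["a", "b"]}
session.execute("INSERT INTO docs (id, body) VALUES (%s, %s)",
                (1, json.dumps(doc)))
for row in session.execute("SELECT body FROM docs WHERE id = %s", (1,)):
    restored = json.loads(row.body)   # any validation happens app-side
{code}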

I'm also strongly -1 on adding new CQL syntax for it, and even stronger -1 on 
making it cqlsh-only. There is an expectation that CQL queries that work in 
cqlsh can be copied to the actual application code and be used with the 
java/python-drivers, and this would violate that expectation.

> Add json data type
> --
>
> Key: CASSANDRA-6833
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6833
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jonathan Ellis
>Priority: Minor
> Fix For: 2.0.7
>
>
> While recognizing that UDT (CASSANDRA-5590) is the Right Way to store 
> hierarchical data in C*, it can still be useful to store json blobs as text.  
> Adding a json type would allow validating that data.  (And adding formatting 
> support in cqlsh?)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6834) cassandra-stress should fail if the same option is provided multiple times

2014-03-11 Thread Lyuben Todorov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930315#comment-13930315
 ] 

Lyuben Todorov commented on CASSANDRA-6834:
---

+1

> cassandra-stress should fail if the same option is provided multiple times
> --
>
> Key: CASSANDRA-6834
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6834
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Benedict
>Assignee: Benedict
>Priority: Minor
> Fix For: 2.1 beta2
>
> Attachments: 6834.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-2380) Cassandra requires hostname is resolvable even when specifying IP's for listen and rpc addresses

2014-03-11 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930287#comment-13930287
 ] 

Johan Idrén commented on CASSANDRA-2380:


Uncommenting and editing that line in cassandra-env.sh does not actually help.

Adding a line in /etc/hosts works fine, but if the hostname doesn't resolve to 
anything, it fails regardless of the IP addresses specified in the 
configuration.

I suggest reopening, as this is actually broken, even if not very serious.

Cassandra 2.0.5, jdk-1.7.0_51-fcs.x86_64.

> Cassandra requires hostname is resolvable even when specifying IP's for 
> listen and rpc addresses
> 
>
> Key: CASSANDRA-2380
> URL: https://issues.apache.org/jira/browse/CASSANDRA-2380
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 0.7.4
> Environment: open jdk 1.6.0_20 64-Bit 
>Reporter: Eric Tamme
>Priority: Trivial
>
> A strange looking error is printed out, with no stack trace and no other log, 
> when hostname is not resolvable regardless of whether or not the hostname is 
> being used to specify a listen or rpc address.  I am specifically using IPv6 
> addresses but I have tested it with IPv4 and gotten the same result.
> Error: Exception thrown by the agent : java.net.MalformedURLException: Local 
> host name unknown: java.net.UnknownHostException
> I have spent several hours trying to track down what is happening and have 
> been unable to determine if this is down in the java 
> getByName->getAllByName->getAllByName0 set of methods that is happening when  
> listenAddress = InetAddress.getByName(conf.listen_address);
> is called from DatabaseDescriptor.java
> I am not able to replicate the error in a stand alone java program (see 
> below) so I am not sure what cassandra is doing to force name resolution.  
> Perhaps the issue is not in DatabaseDescriptor, but some where else?  I get 
> no log output, and no stack trace when this happens, only the single line 
> error.
> import java.net.InetAddress;
> import java.net.UnknownHostException;
> class Test
> {
> public static void main(String args[])
> {
> try
> {
> InetAddress listenAddress = InetAddress.getByName("foo");
> System.out.println(listenAddress);
> }
> catch (UnknownHostException e)
> {
> System.out.println("Unable to parse address");
> }
> }
> }
> People have just said "oh go put a line in your hosts file" and while that 
> does work, it is not right.  If I am not using my hostname for any reason 
> cassandra should not have to resolve it, and carrying around that application 
> specific stuff in your hosts file is not correct.
> Regardless of if this bug gets fixed, I want to better understand what the 
> heck is going on that makes cassandra crash and print out that exception.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6789) Triggers can not be added from thrift

2014-03-11 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-6789:
-

Priority: Minor  (was: Major)

> Triggers can not be added from thrift
> -
>
> Key: CASSANDRA-6789
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6789
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Edward Capriolo
>Assignee: Sam Tunnicliffe
>Priority: Minor
> Fix For: 2.0.7
>
> Attachments: 0001-Include-trigger-defs-in-CFMetaData.toSchema.patch
>
>
> While playing with groovy triggers, I determined that you can not add 
> triggers from thrift, unless I am doing something wrong. (I see no coverage 
> of this feature from thrift/python)
> https://github.com/edwardcapriolo/cassandra/compare/trigger_coverage?expand=1
> {code}
> package org.apache.cassandra.triggers;
> import java.io.IOException;
> import java.net.InetSocketAddress;
> import java.nio.ByteBuffer;
> import java.util.ArrayList;
> import java.util.Arrays;
> import java.util.HashMap;
> import java.util.List;
> import java.util.Map;
> import junit.framework.Assert;
> import org.apache.cassandra.SchemaLoader;
> import org.apache.cassandra.config.Schema;
> import org.apache.cassandra.service.EmbeddedCassandraService;
> import org.apache.cassandra.thrift.CassandraServer;
> import org.apache.cassandra.thrift.CfDef;
> import org.apache.cassandra.thrift.ColumnParent;
> import org.apache.cassandra.thrift.KsDef;
> import org.apache.cassandra.thrift.ThriftSessionManager;
> import org.apache.cassandra.thrift.TriggerDef;
> import org.apache.cassandra.utils.ByteBufferUtil;
> import org.apache.thrift.TException;
> import org.junit.BeforeClass;
> import org.junit.Test;
> public class TriggerTest extends SchemaLoader
> {
> private static CassandraServer server;
> 
> @BeforeClass
> public static void setup() throws IOException, TException
> {
> Schema.instance.clear(); // Schema are now written on disk and will 
> be reloaded
> new EmbeddedCassandraService().start();
> ThriftSessionManager.instance.setCurrentSocket(new 
> InetSocketAddress(9160));
> server = new CassandraServer();
> server.set_keyspace("Keyspace1");
> }
> 
> @Test
> public void createATrigger() throws TException
> {
> TriggerDef td = new TriggerDef();
> td.setName("gimme5");
> Map options = new HashMap<>();
> options.put("class", "org.apache.cassandra.triggers.ITriggerImpl");
> td.setOptions(options);
> CfDef cfDef = new CfDef();
> cfDef.setKeyspace("Keyspace1");
> cfDef.setTriggers(Arrays.asList(td));
> cfDef.setName("triggercf");
> server.system_add_column_family(cfDef);
> 
> KsDef keyspace1 = server.describe_keyspace("Keyspace1");
> CfDef triggerCf = null;
> for (CfDef cfs :keyspace1.cf_defs){
>   if (cfs.getName().equals("triggercf")){
> triggerCf=cfs;
>   }
> }
> Assert.assertNotNull(triggerCf);
> Assert.assertEquals(1, triggerCf.getTriggers().size());
> }
> }
> {code}
> junit.framework.AssertionFailedError: expected:<1> but was:<0>



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6790) Triggers are broken in trunk because of imutable list

2014-03-11 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-6790:
-

 Priority: Minor  (was: Major)
Fix Version/s: (was: 2.1 beta2)
   2.0.7

> Triggers are broken in trunk because of imutable list
> -
>
> Key: CASSANDRA-6790
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6790
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Edward Capriolo
>Assignee: Sam Tunnicliffe
>Priority: Minor
> Fix For: 2.0.7
>
> Attachments: 
> 0001-Apply-trigger-mutations-when-base-mutation-list-is-i.patch
>
>
> The trigger code is uncovered by any tests (that I can find). When inserting 
> single columns an immutable list is created. When the trigger attempts to 
> edit this list the operation fails.
> Fix coming shortly.
> {noformat}
> java.lang.UnsupportedOperationException
> at java.util.AbstractList.add(AbstractList.java:148)
> at java.util.AbstractList.add(AbstractList.java:108)
> at 
> java.util.AbstractCollection.addAll(AbstractCollection.java:342)
> at 
> org.apache.cassandra.service.StorageProxy.mutateWithTriggers(StorageProxy.java:522)
> at 
> org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:1084)
> at 
> org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:1066)
> at 
> org.apache.cassandra.thrift.CassandraServer.internal_insert(CassandraServer.java:676)
> at 
> org.apache.cassandra.thrift.CassandraServer.insert(CassandraServer.java:697)
> at 
> org.apache.cassandra.triggers.TriggerTest.createATriggerWithCqlAndReadItBackFromthrift(TriggerTest.java:108)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:44)
> at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41)
> at 
> org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
> at 
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
> at 
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[2/7] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-03-11 Thread aleksey
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3f383612
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3f383612
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3f383612

Branch: refs/heads/trunk
Commit: 3f38361271ffc84d4aca32e29b9b5af996825424
Parents: 8d2c3fe dfd28d2
Author: Sylvain Lebresne 
Authored: Mon Mar 10 18:02:46 2014 +0100
Committer: Sylvain Lebresne 
Committed: Mon Mar 10 18:02:46 2014 +0100

--
 doc/cql3/CQL.textile | 3 ---
 1 file changed, 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3f383612/doc/cql3/CQL.textile
--
diff --cc doc/cql3/CQL.textile
index aa2c176,ecd3b7e..2de59d1
--- a/doc/cql3/CQL.textile
+++ b/doc/cql3/CQL.textile
@@@ -219,12 -214,9 +219,9 @@@ bc(syntax).
'('  ( ','  )* ')'
( WITH  ( AND )* )?
  
 - ::=   ( PRIMARY KEY )?
 + ::=   ( STATIC )? ( PRIMARY KEY )?
| PRIMARY KEY '('  ( ','  )* 
')'
  
-  ::= 
-   | '('  ( ','  )* ')'
- 
   ::= 
| '('  (','  )* ')'
  



[4/7] git commit: Fix trigger mutations when base mutation list is immutable

2014-03-11 Thread aleksey
Fix trigger mutations when base mutation list is immutable

patch by Sam Tunnicliffe; reviewed by Aleksey Yeschenko for
CASSANDRA-6790


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f7eca98a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f7eca98a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f7eca98a

Branch: refs/heads/trunk
Commit: f7eca98a7487b5e4013fbc07e43ebf0055520856
Parents: 553401d
Author: Sam Tunnicliffe 
Authored: Tue Mar 11 14:55:16 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Tue Mar 11 14:55:16 2014 +0300

--
 CHANGES.txt |   1 +
 .../apache/cassandra/service/StorageProxy.java  |   6 +-
 .../apache/cassandra/triggers/TriggersTest.java | 179 +++
 3 files changed, 183 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7eca98a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 39656ff..91037d1 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,5 +1,6 @@
 2.0.7
  * Fix saving triggers to schema (CASSANDRA-6789)
+ * Fix trigger mutations when base mutation list is immutable (CASSANDRA-6790)
 
 
 2.0.6

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7eca98a/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java 
b/src/java/org/apache/cassandra/service/StorageProxy.java
index 14c1ce3..a6db9cd 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -508,13 +508,13 @@ public class StorageProxy implements StorageProxyMBean
 }
 }
 
-public static void mutateWithTriggers(Collection 
mutations, ConsistencyLevel consistencyLevel, boolean mutateAtomically) throws 
WriteTimeoutException, UnavailableException,
-OverloadedException, InvalidRequestException
+public static void mutateWithTriggers(Collection 
mutations, ConsistencyLevel consistencyLevel, boolean mutateAtomically)
+throws WriteTimeoutException, UnavailableException, OverloadedException, 
InvalidRequestException
 {
 Collection tmutations = 
TriggerExecutor.instance.execute(mutations);
 if (mutateAtomically || tmutations != null)
 {
-Collection allMutations = (Collection) 
mutations;
+Collection allMutations = new 
ArrayList<>((Collection) mutations);
 if (tmutations != null)
 allMutations.addAll(tmutations);
 StorageProxy.mutateAtomically(allMutations, consistencyLevel);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7eca98a/test/unit/org/apache/cassandra/triggers/TriggersTest.java
--
diff --git a/test/unit/org/apache/cassandra/triggers/TriggersTest.java 
b/test/unit/org/apache/cassandra/triggers/TriggersTest.java
new file mode 100644
index 000..6ca3880
--- /dev/null
+++ b/test/unit/org/apache/cassandra/triggers/TriggersTest.java
@@ -0,0 +1,179 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.triggers;
+
+import java.net.InetAddress;
+import java.nio.ByteBuffer;
+import java.util.Collection;
+import java.util.Collections;
+
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.Test;
+
+import org.apache.cassandra.SchemaLoader;
+import org.apache.cassandra.config.Schema;
+import org.apache.cassandra.cql3.QueryProcessor;
+import org.apache.cassandra.cql3.UntypedResultSet;
+import org.apache.cassandra.db.ArrayBackedSortedColumns;
+import org.apache.cassandra.db.Column;
+import org.apache.cassandra.db.ColumnFamily;
+import org.apache.cassandra.db.ConsistencyLevel;
+import org.apache.cassandra.db.RowMutation;
+import org.apache.cassandra.service.StorageService;
+import org.apache.cassandra.thrift.Cassandr

[6/7] git commit: Fix TriggersTest

2014-03-11 Thread aleksey
Fix TriggersTest


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b4f262e1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b4f262e1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b4f262e1

Branch: refs/heads/trunk
Commit: b4f262e1b0520a683666186d952f9913f568a71b
Parents: 362148d
Author: Aleksey Yeschenko 
Authored: Tue Mar 11 15:32:25 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Tue Mar 11 15:32:25 2014 +0300

--
 test/unit/org/apache/cassandra/triggers/TriggersTest.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b4f262e1/test/unit/org/apache/cassandra/triggers/TriggersTest.java
--
diff --git a/test/unit/org/apache/cassandra/triggers/TriggersTest.java 
b/test/unit/org/apache/cassandra/triggers/TriggersTest.java
index 947674f..b374759 100644
--- a/test/unit/org/apache/cassandra/triggers/TriggersTest.java
+++ b/test/unit/org/apache/cassandra/triggers/TriggersTest.java
@@ -169,7 +169,7 @@ public class TriggersTest extends SchemaLoader
 public Collection augment(ByteBuffer key, ColumnFamily 
update)
 {
 ColumnFamily extraUpdate = 
update.cloneMeShallow(ArrayBackedSortedColumns.factory, false);
-extraUpdate.addColumn(new 
Cell(CellNames.compositeDense(bytes("v2")),
+extraUpdate.addColumn(new 
Cell(update.metadata().comparator.makeCellName(bytes("v2")),
bytes(999)));
 Mutation mutation = new Mutation(ksName, key);
 mutation.add(extraUpdate);



[5/7] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-03-11 Thread aleksey
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/config/CFMetaData.java
src/java/org/apache/cassandra/service/StorageProxy.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/362148dd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/362148dd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/362148dd

Branch: refs/heads/trunk
Commit: 362148dd233001e3139b7631a9d4f3b06f51b6f2
Parents: 639ddac f7eca98
Author: Aleksey Yeschenko 
Authored: Tue Mar 11 15:20:45 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Tue Mar 11 15:20:45 2014 +0300

--
 CHANGES.txt |   2 +
 doc/cql3/CQL.textile|   3 -
 .../org/apache/cassandra/config/CFMetaData.java |   3 +
 .../apache/cassandra/service/StorageProxy.java  |   6 +-
 .../cassandra/triggers/TriggersSchemaTest.java  | 126 +
 .../apache/cassandra/triggers/TriggersTest.java | 179 +++
 6 files changed, 313 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/362148dd/CHANGES.txt
--
diff --cc CHANGES.txt
index 709b05a,91037d1..607e2dc
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,16 -1,9 +1,18 @@@
 -2.0.7
 +2.1.0-beta2
 + * Add broadcast_rpc_address option to cassandra.yaml (CASSANDRA-5899)
 + * Auto reload GossipingPropertyFileSnitch config (CASSANDRA-5897)
 + * Fix overflow of memtable_total_space_in_mb (CASSANDRA-6573)
 + * Fix ABTC NPE (CASSANDRA-6692)
 + * Allow nodetool to use a file or prompt for password (CASSANDRA-6660)
 + * Fix AIOOBE when concurrently accessing ABSC (CASSANDRA-6742)
 + * Fix assertion error in ALTER TYPE RENAME (CASSANDRA-6705)
 + * Scrub should not always clear out repaired status (CASSANDRA-5351)
 + * Improve handling of range tombstone for wide partitions (CASSANDRA-6446)
 + * Fix ClassCastException for compact table with composites (CASSANDRA-6738)
 + * Fix potentially repairing with wrong nodes (CASSANDRA-6808)
 +Merged from 2.0:
+  * Fix saving triggers to schema (CASSANDRA-6789)
+  * Fix trigger mutations when base mutation list is immutable (CASSANDRA-6790)
 -
 -
 -2.0.6
   * Avoid race-prone second "scrub" of system keyspace (CASSANDRA-6797)
   * Pool CqlRecordWriter clients by inetaddress rather than Range 
 (CASSANDRA-6665)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/362148dd/doc/cql3/CQL.textile
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/362148dd/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --cc src/java/org/apache/cassandra/config/CFMetaData.java
index 25b7314,ff40e65..ac5dea7
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@@ -1670,45 -1507,39 +1670,48 @@@ public final class CFMetaDat
   *
   * @param timestamp Timestamp to use
   *
 - * @return RowMutation to use to completely remove cf from schema
 + * @return Mutation to use to completely remove cf from schema
   */
 -public RowMutation dropFromSchema(long timestamp)
 +public Mutation dropFromSchema(long timestamp)
  {
 -RowMutation rm = new RowMutation(Keyspace.SYSTEM_KS, 
SystemKeyspace.getSchemaKSKey(ksName));
 -ColumnFamily cf = rm.addOrGet(SchemaColumnFamiliesCf);
 +Mutation mutation = new Mutation(Keyspace.SYSTEM_KS, 
SystemKeyspace.getSchemaKSKey(ksName));
 +ColumnFamily cf = mutation.addOrGet(SchemaColumnFamiliesCf);
  int ldt = (int) (System.currentTimeMillis() / 1000);
  
 -ColumnNameBuilder builder = 
SchemaColumnFamiliesCf.getCfDef().getColumnNameBuilder();
 -builder.add(ByteBufferUtil.bytes(cfName));
 -cf.addAtom(new RangeTombstone(builder.build(), 
builder.buildAsEndOfRange(), timestamp, ldt));
 +Composite prefix = SchemaColumnFamiliesCf.comparator.make(cfName);
 +cf.addAtom(new RangeTombstone(prefix, prefix.end(), timestamp, ldt));
  
 -for (ColumnDefinition cd : column_metadata.values())
 -cd.deleteFromSchema(rm, cfName, 
getColumnDefinitionComparator(cd), timestamp);
 +for (ColumnDefinition cd : allColumns())
 +cd.deleteFromSchema(mutation, timestamp);
  
  for (TriggerDefinition td : triggers.values())
 -td.deleteFromSchema(rm, cfName, timestamp);
 +td.deleteFromSchema(mutation, cfName, timestamp);
 +
 +return mutation;
 +}
  
 -return rm;
 +public boolean isPurged()
 +{
 +return isPurged;
 +}
 

[3/7] git commit: Fix saving triggers to schema

2014-03-11 Thread aleksey
Fix saving triggers to schema

patch by Sam Tunnicliffe; reviewed by Aleksey Yeschenko for
CASSANDRA-6789


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/553401d2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/553401d2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/553401d2

Branch: refs/heads/trunk
Commit: 553401d2fef2a8ab66b2da7a79d865be4dd669d9
Parents: 3f38361
Author: Sam Tunnicliffe 
Authored: Tue Mar 11 14:48:53 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Tue Mar 11 14:48:53 2014 +0300

--
 CHANGES.txt |   4 +
 .../org/apache/cassandra/config/CFMetaData.java |   3 +
 .../cassandra/triggers/TriggersSchemaTest.java  | 126 +++
 3 files changed, 133 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/553401d2/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 920f073..39656ff 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,3 +1,7 @@
+2.0.7
+ * Fix saving triggers to schema (CASSANDRA-6789)
+
+
 2.0.6
  * Avoid race-prone second "scrub" of system keyspace (CASSANDRA-6797)
  * Pool CqlRecordWriter clients by inetaddress rather than Range 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/553401d2/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java 
b/src/java/org/apache/cassandra/config/CFMetaData.java
index a319930..ff40e65 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -1532,6 +1532,9 @@ public final class CFMetaData
 {
 toSchemaNoColumnsNoTriggers(rm, timestamp);
 
+for (TriggerDefinition td : triggers.values())
+td.toSchema(rm, cfName, timestamp);
+
 for (ColumnDefinition cd : column_metadata.values())
 cd.toSchema(rm, cfName, getColumnDefinitionComparator(cd), 
timestamp);
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/553401d2/test/unit/org/apache/cassandra/triggers/TriggersSchemaTest.java
--
diff --git a/test/unit/org/apache/cassandra/triggers/TriggersSchemaTest.java 
b/test/unit/org/apache/cassandra/triggers/TriggersSchemaTest.java
new file mode 100644
index 000..f9d71ee
--- /dev/null
+++ b/test/unit/org/apache/cassandra/triggers/TriggersSchemaTest.java
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.triggers;
+
+import java.util.Collections;
+
+import org.junit.Test;
+
+import org.apache.cassandra.SchemaLoader;
+import org.apache.cassandra.config.CFMetaData;
+import org.apache.cassandra.config.KSMetaData;
+import org.apache.cassandra.config.Schema;
+import org.apache.cassandra.config.TriggerDefinition;
+import org.apache.cassandra.locator.SimpleStrategy;
+import org.apache.cassandra.service.MigrationManager;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+public class TriggersSchemaTest extends SchemaLoader
+{
+String ksName = "ks" + System.nanoTime();
+String cfName = "cf" + System.nanoTime();
+String triggerName = "trigger_" + System.nanoTime();
+String triggerClass = "org.apache.cassandra.triggers.NoSuchTrigger.class";
+
+@Test
+public void newKsContainsCfWithTrigger() throws Exception
+{
+TriggerDefinition td = TriggerDefinition.create(triggerName, 
triggerClass);
+CFMetaData cfm1 = CFMetaData.compile(String.format("CREATE TABLE %s (k 
int PRIMARY KEY, v int)", cfName), ksName);
+cfm1.addTriggerDefinition(td);
+KSMetaData ksm = KSMetaData.newKeyspace(ksName,
+SimpleStrategy.class,
+
Collection

[1/7] git commit: Fix CQL doc

2014-03-11 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk e185afab6 -> 2d92f14ba


Fix CQL doc


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dfd28d22
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dfd28d22
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dfd28d22

Branch: refs/heads/trunk
Commit: dfd28d226abe5eb2087b633b0e9634b207d32655
Parents: 57f6f92
Author: Sylvain Lebresne 
Authored: Mon Mar 10 18:02:20 2014 +0100
Committer: Sylvain Lebresne 
Committed: Mon Mar 10 18:02:30 2014 +0100

--
 doc/cql3/CQL.textile | 3 ---
 1 file changed, 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/dfd28d22/doc/cql3/CQL.textile
--
diff --git a/doc/cql3/CQL.textile b/doc/cql3/CQL.textile
index 8d853c5..ecd3b7e 100644
--- a/doc/cql3/CQL.textile
+++ b/doc/cql3/CQL.textile
@@ -217,9 +217,6 @@ bc(syntax)..
  ::=   ( PRIMARY KEY )?
   | PRIMARY KEY '('  ( ','  )* 
')'
 
- ::= 
-  | '('  ( ','  )* ')'
-
  ::= 
   | '('  (','  )* ')'
 



[7/7] git commit: Merge branch 'cassandra-2.1' into trunk

2014-03-11 Thread aleksey
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2d92f14b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2d92f14b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2d92f14b

Branch: refs/heads/trunk
Commit: 2d92f14baaae7f2dd4a61f602896dd3a4abf7d1f
Parents: e185afa b4f262e
Author: Aleksey Yeschenko 
Authored: Tue Mar 11 15:33:10 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Tue Mar 11 15:33:10 2014 +0300

--
 CHANGES.txt |   2 +
 doc/cql3/CQL.textile|   3 -
 .../org/apache/cassandra/config/CFMetaData.java |   3 +
 .../apache/cassandra/service/StorageProxy.java  |   6 +-
 .../cassandra/triggers/TriggersSchemaTest.java  | 126 +
 .../apache/cassandra/triggers/TriggersTest.java | 179 +++
 6 files changed, 313 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2d92f14b/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2d92f14b/src/java/org/apache/cassandra/config/CFMetaData.java
--



git commit: Fix TriggersTest

2014-03-11 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 362148dd2 -> b4f262e1b


Fix TriggersTest


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b4f262e1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b4f262e1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b4f262e1

Branch: refs/heads/cassandra-2.1
Commit: b4f262e1b0520a683666186d952f9913f568a71b
Parents: 362148d
Author: Aleksey Yeschenko 
Authored: Tue Mar 11 15:32:25 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Tue Mar 11 15:32:25 2014 +0300

--
 test/unit/org/apache/cassandra/triggers/TriggersTest.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b4f262e1/test/unit/org/apache/cassandra/triggers/TriggersTest.java
--
diff --git a/test/unit/org/apache/cassandra/triggers/TriggersTest.java 
b/test/unit/org/apache/cassandra/triggers/TriggersTest.java
index 947674f..b374759 100644
--- a/test/unit/org/apache/cassandra/triggers/TriggersTest.java
+++ b/test/unit/org/apache/cassandra/triggers/TriggersTest.java
@@ -169,7 +169,7 @@ public class TriggersTest extends SchemaLoader
 public Collection augment(ByteBuffer key, ColumnFamily 
update)
 {
 ColumnFamily extraUpdate = 
update.cloneMeShallow(ArrayBackedSortedColumns.factory, false);
-extraUpdate.addColumn(new 
Cell(CellNames.compositeDense(bytes("v2")),
+extraUpdate.addColumn(new 
Cell(update.metadata().comparator.makeCellName(bytes("v2")),
bytes(999)));
 Mutation mutation = new Mutation(ksName, key);
 mutation.add(extraUpdate);



[jira] [Updated] (CASSANDRA-6834) cassandra-stress should fail if the same option is provided multiple times

2014-03-11 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-6834:


Attachment: 6834.txt

Attached a fix for this, and also a tidy-up of some command-line help printing 
(distributions now have an explanation next to them, and commands supporting 
multiple writes/reads at once, e.g. readmulti, correctly print the at-once 
option).
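
As a rough illustration of the behaviour being requested here (a standalone sketch, 
not the attached 6834.txt patch; the class and method names are made up), the option 
parser can remember which option tokens it has already consumed and fail fast on a 
repeat:

    import java.util.HashSet;
    import java.util.Set;

    public final class DuplicateOptionCheck
    {
        // Rejects a command line in which the same option token appears twice,
        // e.g. "write n=1000000 -rate ... -rate ...".
        public static void validate(String... args)
        {
            Set<String> seen = new HashSet<>();
            for (String arg : args)
            {
                if (arg.startsWith("-") && !seen.add(arg.toLowerCase()))
                    throw new IllegalArgumentException("Option provided multiple times: " + arg);
            }
        }

        public static void main(String[] args)
        {
            validate("write", "n=1000000", "-rate", "-rate"); // throws IllegalArgumentException
        }
    }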

> cassandra-stress should fail if the same option is provided multiple times
> --
>
> Key: CASSANDRA-6834
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6834
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Benedict
>Assignee: Benedict
>Priority: Minor
> Fix For: 2.1 beta2
>
> Attachments: 6834.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[3/5] git commit: Fix saving triggers to schema

2014-03-11 Thread aleksey
Fix saving triggers to schema

patch by Sam Tunnicliffe; reviewed by Aleksey Yeschenko for
CASSANDRA-6789


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/553401d2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/553401d2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/553401d2

Branch: refs/heads/cassandra-2.1
Commit: 553401d2fef2a8ab66b2da7a79d865be4dd669d9
Parents: 3f38361
Author: Sam Tunnicliffe 
Authored: Tue Mar 11 14:48:53 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Tue Mar 11 14:48:53 2014 +0300

--
 CHANGES.txt |   4 +
 .../org/apache/cassandra/config/CFMetaData.java |   3 +
 .../cassandra/triggers/TriggersSchemaTest.java  | 126 +++
 3 files changed, 133 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/553401d2/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 920f073..39656ff 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,3 +1,7 @@
+2.0.7
+ * Fix saving triggers to schema (CASSANDRA-6789)
+
+
 2.0.6
  * Avoid race-prone second "scrub" of system keyspace (CASSANDRA-6797)
  * Pool CqlRecordWriter clients by inetaddress rather than Range 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/553401d2/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java 
b/src/java/org/apache/cassandra/config/CFMetaData.java
index a319930..ff40e65 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -1532,6 +1532,9 @@ public final class CFMetaData
 {
 toSchemaNoColumnsNoTriggers(rm, timestamp);
 
+for (TriggerDefinition td : triggers.values())
+td.toSchema(rm, cfName, timestamp);
+
 for (ColumnDefinition cd : column_metadata.values())
 cd.toSchema(rm, cfName, getColumnDefinitionComparator(cd), 
timestamp);
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/553401d2/test/unit/org/apache/cassandra/triggers/TriggersSchemaTest.java
--
diff --git a/test/unit/org/apache/cassandra/triggers/TriggersSchemaTest.java 
b/test/unit/org/apache/cassandra/triggers/TriggersSchemaTest.java
new file mode 100644
index 000..f9d71ee
--- /dev/null
+++ b/test/unit/org/apache/cassandra/triggers/TriggersSchemaTest.java
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.triggers;
+
+import java.util.Collections;
+
+import org.junit.Test;
+
+import org.apache.cassandra.SchemaLoader;
+import org.apache.cassandra.config.CFMetaData;
+import org.apache.cassandra.config.KSMetaData;
+import org.apache.cassandra.config.Schema;
+import org.apache.cassandra.config.TriggerDefinition;
+import org.apache.cassandra.locator.SimpleStrategy;
+import org.apache.cassandra.service.MigrationManager;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+public class TriggersSchemaTest extends SchemaLoader
+{
+String ksName = "ks" + System.nanoTime();
+String cfName = "cf" + System.nanoTime();
+String triggerName = "trigger_" + System.nanoTime();
+String triggerClass = "org.apache.cassandra.triggers.NoSuchTrigger.class";
+
+@Test
+public void newKsContainsCfWithTrigger() throws Exception
+{
+TriggerDefinition td = TriggerDefinition.create(triggerName, 
triggerClass);
+CFMetaData cfm1 = CFMetaData.compile(String.format("CREATE TABLE %s (k 
int PRIMARY KEY, v int)", cfName), ksName);
+cfm1.addTriggerDefinition(td);
+KSMetaData ksm = KSMetaData.newKeyspace(ksName,
+SimpleStrategy.class,
+
Co

[2/5] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-03-11 Thread aleksey
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3f383612
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3f383612
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3f383612

Branch: refs/heads/cassandra-2.1
Commit: 3f38361271ffc84d4aca32e29b9b5af996825424
Parents: 8d2c3fe dfd28d2
Author: Sylvain Lebresne 
Authored: Mon Mar 10 18:02:46 2014 +0100
Committer: Sylvain Lebresne 
Committed: Mon Mar 10 18:02:46 2014 +0100

--
 doc/cql3/CQL.textile | 3 ---
 1 file changed, 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3f383612/doc/cql3/CQL.textile
--
diff --cc doc/cql3/CQL.textile
index aa2c176,ecd3b7e..2de59d1
--- a/doc/cql3/CQL.textile
+++ b/doc/cql3/CQL.textile
@@@ -219,12 -214,9 +219,9 @@@ bc(syntax).
'('  ( ','  )* ')'
( WITH  ( AND )* )?
  
 - ::=   ( PRIMARY KEY )?
 + ::=   ( STATIC )? ( PRIMARY KEY )?
| PRIMARY KEY '('  ( ','  )* 
')'
  
-  ::= 
-   | '('  ( ','  )* ')'
- 
   ::= 
| '('  (','  )* ')'
  



[5/5] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-03-11 Thread aleksey
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/config/CFMetaData.java
src/java/org/apache/cassandra/service/StorageProxy.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/362148dd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/362148dd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/362148dd

Branch: refs/heads/cassandra-2.1
Commit: 362148dd233001e3139b7631a9d4f3b06f51b6f2
Parents: 639ddac f7eca98
Author: Aleksey Yeschenko 
Authored: Tue Mar 11 15:20:45 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Tue Mar 11 15:20:45 2014 +0300

--
 CHANGES.txt |   2 +
 doc/cql3/CQL.textile|   3 -
 .../org/apache/cassandra/config/CFMetaData.java |   3 +
 .../apache/cassandra/service/StorageProxy.java  |   6 +-
 .../cassandra/triggers/TriggersSchemaTest.java  | 126 +
 .../apache/cassandra/triggers/TriggersTest.java | 179 +++
 6 files changed, 313 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/362148dd/CHANGES.txt
--
diff --cc CHANGES.txt
index 709b05a,91037d1..607e2dc
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,16 -1,9 +1,18 @@@
 -2.0.7
 +2.1.0-beta2
 + * Add broadcast_rpc_address option to cassandra.yaml (CASSANDRA-5899)
 + * Auto reload GossipingPropertyFileSnitch config (CASSANDRA-5897)
 + * Fix overflow of memtable_total_space_in_mb (CASSANDRA-6573)
 + * Fix ABTC NPE (CASSANDRA-6692)
 + * Allow nodetool to use a file or prompt for password (CASSANDRA-6660)
 + * Fix AIOOBE when concurrently accessing ABSC (CASSANDRA-6742)
 + * Fix assertion error in ALTER TYPE RENAME (CASSANDRA-6705)
 + * Scrub should not always clear out repaired status (CASSANDRA-5351)
 + * Improve handling of range tombstone for wide partitions (CASSANDRA-6446)
 + * Fix ClassCastException for compact table with composites (CASSANDRA-6738)
 + * Fix potentially repairing with wrong nodes (CASSANDRA-6808)
 +Merged from 2.0:
+  * Fix saving triggers to schema (CASSANDRA-6789)
+  * Fix trigger mutations when base mutation list is immutable (CASSANDRA-6790)
 -
 -
 -2.0.6
   * Avoid race-prone second "scrub" of system keyspace (CASSANDRA-6797)
   * Pool CqlRecordWriter clients by inetaddress rather than Range 
 (CASSANDRA-6665)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/362148dd/doc/cql3/CQL.textile
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/362148dd/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --cc src/java/org/apache/cassandra/config/CFMetaData.java
index 25b7314,ff40e65..ac5dea7
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@@ -1670,45 -1507,39 +1670,48 @@@ public final class CFMetaDat
   *
   * @param timestamp Timestamp to use
   *
 - * @return RowMutation to use to completely remove cf from schema
 + * @return Mutation to use to completely remove cf from schema
   */
 -public RowMutation dropFromSchema(long timestamp)
 +public Mutation dropFromSchema(long timestamp)
  {
 -RowMutation rm = new RowMutation(Keyspace.SYSTEM_KS, 
SystemKeyspace.getSchemaKSKey(ksName));
 -ColumnFamily cf = rm.addOrGet(SchemaColumnFamiliesCf);
 +Mutation mutation = new Mutation(Keyspace.SYSTEM_KS, 
SystemKeyspace.getSchemaKSKey(ksName));
 +ColumnFamily cf = mutation.addOrGet(SchemaColumnFamiliesCf);
  int ldt = (int) (System.currentTimeMillis() / 1000);
  
 -ColumnNameBuilder builder = 
SchemaColumnFamiliesCf.getCfDef().getColumnNameBuilder();
 -builder.add(ByteBufferUtil.bytes(cfName));
 -cf.addAtom(new RangeTombstone(builder.build(), 
builder.buildAsEndOfRange(), timestamp, ldt));
 +Composite prefix = SchemaColumnFamiliesCf.comparator.make(cfName);
 +cf.addAtom(new RangeTombstone(prefix, prefix.end(), timestamp, ldt));
  
 -for (ColumnDefinition cd : column_metadata.values())
 -cd.deleteFromSchema(rm, cfName, 
getColumnDefinitionComparator(cd), timestamp);
 +for (ColumnDefinition cd : allColumns())
 +cd.deleteFromSchema(mutation, timestamp);
  
  for (TriggerDefinition td : triggers.values())
 -td.deleteFromSchema(rm, cfName, timestamp);
 +td.deleteFromSchema(mutation, cfName, timestamp);
 +
 +return mutation;
 +}
  
 -return rm;
 +public boolean isPurged()
 +{
 +return isPurged;
 

[1/5] git commit: Fix CQL doc

2014-03-11 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 639ddace4 -> 362148dd2


Fix CQL doc


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dfd28d22
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dfd28d22
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dfd28d22

Branch: refs/heads/cassandra-2.1
Commit: dfd28d226abe5eb2087b633b0e9634b207d32655
Parents: 57f6f92
Author: Sylvain Lebresne 
Authored: Mon Mar 10 18:02:20 2014 +0100
Committer: Sylvain Lebresne 
Committed: Mon Mar 10 18:02:30 2014 +0100

--
 doc/cql3/CQL.textile | 3 ---
 1 file changed, 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/dfd28d22/doc/cql3/CQL.textile
--
diff --git a/doc/cql3/CQL.textile b/doc/cql3/CQL.textile
index 8d853c5..ecd3b7e 100644
--- a/doc/cql3/CQL.textile
+++ b/doc/cql3/CQL.textile
@@ -217,9 +217,6 @@ bc(syntax)..
  ::=   ( PRIMARY KEY )?
   | PRIMARY KEY '('  ( ','  )* 
')'
 
- ::= 
-  | '('  ( ','  )* ')'
-
  ::= 
   | '('  (','  )* ')'
 



[4/5] git commit: Fix trigger mutations when base mutation list is immutable

2014-03-11 Thread aleksey
Fix trigger mutations when base mutation list is immutable

patch by Sam Tunnicliffe; reviewed by Aleksey Yeschenko for
CASSANDRA-6790


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f7eca98a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f7eca98a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f7eca98a

Branch: refs/heads/cassandra-2.1
Commit: f7eca98a7487b5e4013fbc07e43ebf0055520856
Parents: 553401d
Author: Sam Tunnicliffe 
Authored: Tue Mar 11 14:55:16 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Tue Mar 11 14:55:16 2014 +0300

--
 CHANGES.txt |   1 +
 .../apache/cassandra/service/StorageProxy.java  |   6 +-
 .../apache/cassandra/triggers/TriggersTest.java | 179 +++
 3 files changed, 183 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7eca98a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 39656ff..91037d1 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,5 +1,6 @@
 2.0.7
  * Fix saving triggers to schema (CASSANDRA-6789)
+ * Fix trigger mutations when base mutation list is immutable (CASSANDRA-6790)
 
 
 2.0.6

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7eca98a/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java 
b/src/java/org/apache/cassandra/service/StorageProxy.java
index 14c1ce3..a6db9cd 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -508,13 +508,13 @@ public class StorageProxy implements StorageProxyMBean
 }
 }
 
-public static void mutateWithTriggers(Collection 
mutations, ConsistencyLevel consistencyLevel, boolean mutateAtomically) throws 
WriteTimeoutException, UnavailableException,
-OverloadedException, InvalidRequestException
+public static void mutateWithTriggers(Collection 
mutations, ConsistencyLevel consistencyLevel, boolean mutateAtomically)
+throws WriteTimeoutException, UnavailableException, OverloadedException, 
InvalidRequestException
 {
 Collection tmutations = 
TriggerExecutor.instance.execute(mutations);
 if (mutateAtomically || tmutations != null)
 {
-Collection allMutations = (Collection) 
mutations;
+Collection allMutations = new 
ArrayList<>((Collection) mutations);
 if (tmutations != null)
 allMutations.addAll(tmutations);
 StorageProxy.mutateAtomically(allMutations, consistencyLevel);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7eca98a/test/unit/org/apache/cassandra/triggers/TriggersTest.java
--
diff --git a/test/unit/org/apache/cassandra/triggers/TriggersTest.java 
b/test/unit/org/apache/cassandra/triggers/TriggersTest.java
new file mode 100644
index 000..6ca3880
--- /dev/null
+++ b/test/unit/org/apache/cassandra/triggers/TriggersTest.java
@@ -0,0 +1,179 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.triggers;
+
+import java.net.InetAddress;
+import java.nio.ByteBuffer;
+import java.util.Collection;
+import java.util.Collections;
+
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.Test;
+
+import org.apache.cassandra.SchemaLoader;
+import org.apache.cassandra.config.Schema;
+import org.apache.cassandra.cql3.QueryProcessor;
+import org.apache.cassandra.cql3.UntypedResultSet;
+import org.apache.cassandra.db.ArrayBackedSortedColumns;
+import org.apache.cassandra.db.Column;
+import org.apache.cassandra.db.ColumnFamily;
+import org.apache.cassandra.db.ConsistencyLevel;
+import org.apache.cassandra.db.RowMutation;
+import org.apache.cassandra.service.StorageService;
+import org.apache.cassandra.thrift.

[2/2] git commit: Fix trigger mutations when base mutation list is immutable

2014-03-11 Thread aleksey
Fix trigger mutations when base mutation list is immutable

patch by Sam Tunnicliffe; reviewed by Aleksey Yeschenko for
CASSANDRA-6790


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f7eca98a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f7eca98a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f7eca98a

Branch: refs/heads/cassandra-2.0
Commit: f7eca98a7487b5e4013fbc07e43ebf0055520856
Parents: 553401d
Author: Sam Tunnicliffe 
Authored: Tue Mar 11 14:55:16 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Tue Mar 11 14:55:16 2014 +0300

--
 CHANGES.txt |   1 +
 .../apache/cassandra/service/StorageProxy.java  |   6 +-
 .../apache/cassandra/triggers/TriggersTest.java | 179 +++
 3 files changed, 183 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7eca98a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 39656ff..91037d1 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,5 +1,6 @@
 2.0.7
  * Fix saving triggers to schema (CASSANDRA-6789)
+ * Fix trigger mutations when base mutation list is immutable (CASSANDRA-6790)
 
 
 2.0.6

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7eca98a/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java 
b/src/java/org/apache/cassandra/service/StorageProxy.java
index 14c1ce3..a6db9cd 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -508,13 +508,13 @@ public class StorageProxy implements StorageProxyMBean
 }
 }
 
-public static void mutateWithTriggers(Collection 
mutations, ConsistencyLevel consistencyLevel, boolean mutateAtomically) throws 
WriteTimeoutException, UnavailableException,
-OverloadedException, InvalidRequestException
+public static void mutateWithTriggers(Collection 
mutations, ConsistencyLevel consistencyLevel, boolean mutateAtomically)
+throws WriteTimeoutException, UnavailableException, OverloadedException, 
InvalidRequestException
 {
 Collection tmutations = 
TriggerExecutor.instance.execute(mutations);
 if (mutateAtomically || tmutations != null)
 {
-Collection allMutations = (Collection) 
mutations;
+Collection allMutations = new 
ArrayList<>((Collection) mutations);
 if (tmutations != null)
 allMutations.addAll(tmutations);
 StorageProxy.mutateAtomically(allMutations, consistencyLevel);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7eca98a/test/unit/org/apache/cassandra/triggers/TriggersTest.java
--
diff --git a/test/unit/org/apache/cassandra/triggers/TriggersTest.java 
b/test/unit/org/apache/cassandra/triggers/TriggersTest.java
new file mode 100644
index 000..6ca3880
--- /dev/null
+++ b/test/unit/org/apache/cassandra/triggers/TriggersTest.java
@@ -0,0 +1,179 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.triggers;
+
+import java.net.InetAddress;
+import java.nio.ByteBuffer;
+import java.util.Collection;
+import java.util.Collections;
+
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.Test;
+
+import org.apache.cassandra.SchemaLoader;
+import org.apache.cassandra.config.Schema;
+import org.apache.cassandra.cql3.QueryProcessor;
+import org.apache.cassandra.cql3.UntypedResultSet;
+import org.apache.cassandra.db.ArrayBackedSortedColumns;
+import org.apache.cassandra.db.Column;
+import org.apache.cassandra.db.ColumnFamily;
+import org.apache.cassandra.db.ConsistencyLevel;
+import org.apache.cassandra.db.RowMutation;
+import org.apache.cassandra.service.StorageService;
+import org.apache.cassandra.thrift.

[1/2] git commit: Fix saving triggers to schema

2014-03-11 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 3f3836127 -> f7eca98a7


Fix saving triggers to schema

patch by Sam Tunnicliffe; reviewed by Aleksey Yeschenko for
CASSANDRA-6789


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/553401d2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/553401d2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/553401d2

Branch: refs/heads/cassandra-2.0
Commit: 553401d2fef2a8ab66b2da7a79d865be4dd669d9
Parents: 3f38361
Author: Sam Tunnicliffe 
Authored: Tue Mar 11 14:48:53 2014 +0300
Committer: Aleksey Yeschenko 
Committed: Tue Mar 11 14:48:53 2014 +0300

--
 CHANGES.txt |   4 +
 .../org/apache/cassandra/config/CFMetaData.java |   3 +
 .../cassandra/triggers/TriggersSchemaTest.java  | 126 +++
 3 files changed, 133 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/553401d2/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 920f073..39656ff 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,3 +1,7 @@
+2.0.7
+ * Fix saving triggers to schema (CASSANDRA-6789)
+
+
 2.0.6
  * Avoid race-prone second "scrub" of system keyspace (CASSANDRA-6797)
  * Pool CqlRecordWriter clients by inetaddress rather than Range 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/553401d2/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java 
b/src/java/org/apache/cassandra/config/CFMetaData.java
index a319930..ff40e65 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -1532,6 +1532,9 @@ public final class CFMetaData
 {
 toSchemaNoColumnsNoTriggers(rm, timestamp);
 
+for (TriggerDefinition td : triggers.values())
+td.toSchema(rm, cfName, timestamp);
+
 for (ColumnDefinition cd : column_metadata.values())
 cd.toSchema(rm, cfName, getColumnDefinitionComparator(cd), 
timestamp);
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/553401d2/test/unit/org/apache/cassandra/triggers/TriggersSchemaTest.java
--
diff --git a/test/unit/org/apache/cassandra/triggers/TriggersSchemaTest.java 
b/test/unit/org/apache/cassandra/triggers/TriggersSchemaTest.java
new file mode 100644
index 000..f9d71ee
--- /dev/null
+++ b/test/unit/org/apache/cassandra/triggers/TriggersSchemaTest.java
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.triggers;
+
+import java.util.Collections;
+
+import org.junit.Test;
+
+import org.apache.cassandra.SchemaLoader;
+import org.apache.cassandra.config.CFMetaData;
+import org.apache.cassandra.config.KSMetaData;
+import org.apache.cassandra.config.Schema;
+import org.apache.cassandra.config.TriggerDefinition;
+import org.apache.cassandra.locator.SimpleStrategy;
+import org.apache.cassandra.service.MigrationManager;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+public class TriggersSchemaTest extends SchemaLoader
+{
+String ksName = "ks" + System.nanoTime();
+String cfName = "cf" + System.nanoTime();
+String triggerName = "trigger_" + System.nanoTime();
+String triggerClass = "org.apache.cassandra.triggers.NoSuchTrigger.class";
+
+@Test
+public void newKsContainsCfWithTrigger() throws Exception
+{
+TriggerDefinition td = TriggerDefinition.create(triggerName, 
triggerClass);
+CFMetaData cfm1 = CFMetaData.compile(String.format("CREATE TABLE %s (k 
int PRIMARY KEY, v int)", cfName), ksName);
+cfm1.addTriggerDefinition(td);
+KSMetaData ksm = KSMetaData.newKeyspace(ksName,
+  

[jira] [Created] (CASSANDRA-6835) cassandra-stress should support a variable number of counter columns

2014-03-11 Thread Benedict (JIRA)
Benedict created CASSANDRA-6835:
---

 Summary: cassandra-stress should support a variable number of 
counter columns
 Key: CASSANDRA-6835
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6835
 Project: Cassandra
  Issue Type: Improvement
Reporter: Benedict
Assignee: Benedict
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-6834) cassandra-stress should fail if the same option is provided multiple times

2014-03-11 Thread Benedict (JIRA)
Benedict created CASSANDRA-6834:
---

 Summary: cassandra-stress should fail if the same option is 
provided multiple times
 Key: CASSANDRA-6834
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6834
 Project: Cassandra
  Issue Type: Bug
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 2.1 beta2






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6823) TimedOutException/dropped mutations running stress on 2.1

2014-03-11 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930265#comment-13930265
 ] 

Benedict commented on CASSANDRA-6823:
-

Basically, the new stress is just too brutal, and C* doesn't currently degrade 
gracefully in the event of overload.

So, yes, if you need it to survive longer, try imposing a rate limit (this can be 
done within stress with the -rate option).
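
For reference, a rate-limited run along the lines suggested above would add the 
-rate option to an invocation such as the one quoted below, e.g. 
./tools/bin/cassandra-stress write n=1000000 -rate threads=50 limit=50000/s 
(the threads= and limit= sub-options are assumptions here; the authoritative 
spelling is whatever the stress help output prints for -rate).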

> TimedOutException/dropped mutations running stress on 2.1 
> --
>
> Key: CASSANDRA-6823
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6823
> Project: Cassandra
>  Issue Type: Bug
>Reporter: dan jatnieks
>Priority: Minor
>  Labels: stress
> Attachments: stress.log, system.log
>
>
> While testing CASSANDRA-6357, I am seeing TimedOutException errors running 
> stress on both 2.1 and trunk, and system log is showing dropped mutation 
> messages.
> {noformat}
> $ ant -Dversion=2.1.0-SNAPSHOT jar
> $ ./bin/cassandra
> $ ./cassandra-2.1/tools/bin/cassandra-stress write n=1000
> Created keyspaces. Sleeping 1s for propagation.
> Warming up WRITE with 5 iterations...
> Connected to cluster: Test Cluster
> Datatacenter: datacenter1; Host: localhost/127.0.0.1; Rack: rack1
> Sleeping 2s...
> Running WRITE with 50 threads  for 1000 iterations
> ops   ,op/s,adj op/s,   key/s,mean, med, .95, .99,
> .999, max,   time,   stderr
> 74597 ,   74590,   74590,   74590, 0.7, 0.3, 1.7, 7.8,
> 39.4,   156.0,1.0,  0.0
> 175807,  100469,  111362,  100469, 0.5, 0.3, 1.0, 2.2,
> 16.4,   105.2,2.0,  0.0
> 278037,  100483,  110412,  100483, 0.5, 0.4, 0.9, 2.2,
> 15.9,95.4,3.0,  0.13983
> 366806,   86301,   86301,   86301, 0.6, 0.4, 0.9, 2.4,
> 97.6,   107.0,4.1,  0.10002
> 473244,  105209,  115906,  105209, 0.5, 0.3, 1.0, 2.2,
> 10.2,99.6,5.1,  0.08246
> 574363,   99939,  112606,   99939, 0.5, 0.3, 1.0, 2.2,
>  8.4,   115.3,6.1,  0.07297
> 665162,   89343,   89343,   89343, 0.6, 0.3, 1.1, 2.3,
> 12.5,   116.4,7.1,  0.06256
> 768575,  102028,  102028,  102028, 0.5, 0.3, 1.0, 2.1,
> 10.7,   116.0,8.1,  0.05703
> 870318,  100383,  112278,  100383, 0.5, 0.4, 1.0, 2.1,
>  8.2,   109.1,9.1,  0.04984
> 972584,  100496,  111616,  100496, 0.5, 0.3, 1.0, 2.3,
> 10.3,   109.1,   10.1,  0.04542
> 1063466   ,   88566,   88566,   88566, 0.6, 0.3, 1.1, 2.5,   
> 107.3,   116.9,   11.2,  0.04152
> 1163218   ,   98512,  107549,   98512, 0.5, 0.3, 1.2, 3.4,
> 17.9,92.9,   12.2,  0.04007
> 1257989   ,   93578,  103808,   93578, 0.5, 0.3, 1.4, 3.8,
> 12.6,   105.6,   13.2,  0.03687
> 1349628   ,   90205,   99257,   90205, 0.6, 0.3, 1.2, 2.9,
> 20.3,99.6,   14.2,  0.03401
> 1448125   ,   97133,  106429,   97133, 0.5, 0.3, 1.2, 2.9,
> 11.9,   102.2,   15.2,  0.03170
> 1536662   ,   87137,   95464,   87137, 0.6, 0.4, 1.1, 2.9,
> 83.7,94.0,   16.2,  0.02964
> 1632373   ,   94446,  102735,   94446, 0.5, 0.4, 1.1, 2.6,
> 11.7,85.5,   17.2,  0.02818
> 1717028   ,   83533,   83533,   83533, 0.6, 0.4, 1.1, 2.7,
> 87.4,   101.8,   18.3,  0.02651
> 1817081   ,   97807,  108004,   97807, 0.5, 0.3, 1.1, 2.5,
> 14.5,99.1,   19.3,  0.02712
> 1904103   ,   85634,   94846,   85634, 0.6, 0.3, 1.2, 3.0,
> 92.4,   105.3,   20.3,  0.02585
> 2001438   ,   95991,  104822,   95991, 0.5, 0.3, 1.2, 2.7,
> 13.5,95.3,   21.3,  0.02482
> 2086571   ,   89121,   99429,   89121, 0.6, 0.3, 1.2, 3.2,
> 30.9,   103.3,   22.3,  0.02367
> 2184096   ,   88718,   97020,   88718, 0.6, 0.3, 1.3, 3.2,
> 85.6,98.0,   23.4,  0.02262
> 2276823   ,   91795,   91795,   91795, 0.5, 0.3, 1.3, 3.5,
> 81.1,   102.1,   24.4,  0.02174
> 2381493   ,  101074,  101074,  101074, 0.5, 0.3, 1.3, 3.3,
> 12.9,99.1,   25.4,  0.02123
> 2466415   ,   83368,   92292,   83368, 0.6, 0.4, 1.2, 3.0,
> 14.3,   188.5,   26.4,  0.02037
> 2567406   ,  100099,  109267,  100099, 0.5, 0.3, 1.4, 3.3,
> 10.9,94.2,   27.4,  0.01989
> 2653040   ,   84476,   91922,   84476, 0.6, 0.3, 1.4, 3.2,
> 77.0,   100.3,   28.5,  0.01937
> TimedOutException(acknowledged_by:0)
> TimedOutException(acknowledged_by:0)
> TimedOutException(acknowledged_by:0)
> TimedOutException(acknowledged_by:0)
> 

[jira] [Assigned] (CASSANDRA-6790) Triggers are broken in trunk because of immutable list

2014-03-11 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe reassigned CASSANDRA-6790:
--

Assignee: Sam Tunnicliffe  (was: Edward Capriolo)

> Triggers are broken in trunk because of immutable list
> -
>
> Key: CASSANDRA-6790
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6790
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Edward Capriolo
>Assignee: Sam Tunnicliffe
> Fix For: 2.1 beta2
>
> Attachments: 
> 0001-Apply-trigger-mutations-when-base-mutation-list-is-i.patch
>
>
> The trigger code is not covered by any tests (that I can find). When inserting 
> single columns, an immutable list is created, and when the trigger attempts to 
> edit this list the operation fails.
> Fix coming shortly.
> {noformat}
> java.lang.UnsupportedOperationException
> at java.util.AbstractList.add(AbstractList.java:148)
> at java.util.AbstractList.add(AbstractList.java:108)
> at 
> java.util.AbstractCollection.addAll(AbstractCollection.java:342)
> at 
> org.apache.cassandra.service.StorageProxy.mutateWithTriggers(StorageProxy.java:522)
> at 
> org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:1084)
> at 
> org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:1066)
> at 
> org.apache.cassandra.thrift.CassandraServer.internal_insert(CassandraServer.java:676)
> at 
> org.apache.cassandra.thrift.CassandraServer.insert(CassandraServer.java:697)
> at 
> org.apache.cassandra.triggers.TriggerTest.createATriggerWithCqlAndReadItBackFromthrift(TriggerTest.java:108)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:44)
> at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41)
> at 
> org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
> at 
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
> at 
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
> at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)
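
To make the failure described in CASSANDRA-6790 above concrete, here is a minimal 
standalone sketch (plain JDK code, not Cassandra's; the class name and strings are 
illustrative) of why calling addAll on an immutable base mutation list throws, and 
why the committed fix copies it into a new ArrayList before appending the 
trigger-generated mutations:

    import java.util.ArrayList;
    import java.util.Collection;
    import java.util.Collections;

    public final class ImmutableListDemo
    {
        public static void main(String[] args)
        {
            // A single-column insert effectively arrives as a fixed-size, immutable list.
            Collection<String> mutations = Collections.singletonList("base-mutation");
            Collection<String> triggerMutations = Collections.singletonList("trigger-mutation");

            try
            {
                mutations.addAll(triggerMutations); // what the pre-fix code effectively did
            }
            catch (UnsupportedOperationException e)
            {
                System.out.println("addAll on the immutable list fails: " + e);
            }

            // The committed fix: defensively copy before appending trigger mutations.
            Collection<String> all = new ArrayList<>(mutations);
            all.addAll(triggerMutations);
            System.out.println(all); // [base-mutation, trigger-mutation]
        }
    }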


[jira] [Updated] (CASSANDRA-6789) Triggers can not be added from thrift

2014-03-11 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-6789:
---

Fix Version/s: 2.0.7

> Triggers can not be added from thrift
> -
>
> Key: CASSANDRA-6789
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6789
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Edward Capriolo
>Assignee: Sam Tunnicliffe
> Fix For: 2.0.7
>
> Attachments: 0001-Include-trigger-defs-in-CFMetaData.toSchema.patch
>
>
> While playing with Groovy triggers, I determined that you can not add 
> triggers from thrift, unless I am doing something wrong. (I see no coverage 
> of this feature from thrift/python)
> https://github.com/edwardcapriolo/cassandra/compare/trigger_coverage?expand=1
> {code}
> package org.apache.cassandra.triggers;
> import java.io.IOException;
> import java.net.InetSocketAddress;
> import java.nio.ByteBuffer;
> import java.util.ArrayList;
> import java.util.Arrays;
> import java.util.HashMap;
> import java.util.List;
> import java.util.Map;
> import junit.framework.Assert;
> import org.apache.cassandra.SchemaLoader;
> import org.apache.cassandra.config.Schema;
> import org.apache.cassandra.service.EmbeddedCassandraService;
> import org.apache.cassandra.thrift.CassandraServer;
> import org.apache.cassandra.thrift.CfDef;
> import org.apache.cassandra.thrift.ColumnParent;
> import org.apache.cassandra.thrift.KsDef;
> import org.apache.cassandra.thrift.ThriftSessionManager;
> import org.apache.cassandra.thrift.TriggerDef;
> import org.apache.cassandra.utils.ByteBufferUtil;
> import org.apache.thrift.TException;
> import org.junit.BeforeClass;
> import org.junit.Test;
> public class TriggerTest extends SchemaLoader
> {
> private static CassandraServer server;
> 
> @BeforeClass
> public static void setup() throws IOException, TException
> {
> Schema.instance.clear(); // Schema are now written on disk and will 
> be reloaded
> new EmbeddedCassandraService().start();
> ThriftSessionManager.instance.setCurrentSocket(new 
> InetSocketAddress(9160));
> server = new CassandraServer();
> server.set_keyspace("Keyspace1");
> }
> 
> @Test
> public void createATrigger() throws TException
> {
> TriggerDef td = new TriggerDef();
> td.setName("gimme5");
> Map options = new HashMap<>();
> options.put("class", "org.apache.cassandra.triggers.ITriggerImpl");
> td.setOptions(options);
> CfDef cfDef = new CfDef();
> cfDef.setKeyspace("Keyspace1");
> cfDef.setTriggers(Arrays.asList(td));
> cfDef.setName("triggercf");
> server.system_add_column_family(cfDef);
> 
> KsDef keyspace1 = server.describe_keyspace("Keyspace1");
> CfDef triggerCf = null;
> for (CfDef cfs :keyspace1.cf_defs){
>   if (cfs.getName().equals("triggercf")){
> triggerCf=cfs;
>   }
> }
> Assert.assertNotNull(triggerCf);
> Assert.assertEquals(1, triggerCf.getTriggers().size());
> }
> }
> {code}
> junit.framework.AssertionFailedError: expected:<1> but was:<0>



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6789) Triggers can not be added from thrift

2014-03-11 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-6789:
---

Reviewer: Aleksey Yeschenko

> Triggers can not be added from thrift
> -
>
> Key: CASSANDRA-6789
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6789
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Edward Capriolo
>Assignee: Sam Tunnicliffe
> Attachments: 0001-Include-trigger-defs-in-CFMetaData.toSchema.patch
>
>
> While playing with Groovy triggers, I determined that you can not add 
> triggers from thrift, unless I am doing something wrong. (I see no coverage 
> of this feature from thrift/python)
> https://github.com/edwardcapriolo/cassandra/compare/trigger_coverage?expand=1
> {code}
> package org.apache.cassandra.triggers;
> import java.io.IOException;
> import java.net.InetSocketAddress;
> import java.nio.ByteBuffer;
> import java.util.ArrayList;
> import java.util.Arrays;
> import java.util.HashMap;
> import java.util.List;
> import java.util.Map;
> import junit.framework.Assert;
> import org.apache.cassandra.SchemaLoader;
> import org.apache.cassandra.config.Schema;
> import org.apache.cassandra.service.EmbeddedCassandraService;
> import org.apache.cassandra.thrift.CassandraServer;
> import org.apache.cassandra.thrift.CfDef;
> import org.apache.cassandra.thrift.ColumnParent;
> import org.apache.cassandra.thrift.KsDef;
> import org.apache.cassandra.thrift.ThriftSessionManager;
> import org.apache.cassandra.thrift.TriggerDef;
> import org.apache.cassandra.utils.ByteBufferUtil;
> import org.apache.thrift.TException;
> import org.junit.BeforeClass;
> import org.junit.Test;
> public class TriggerTest extends SchemaLoader
> {
> private static CassandraServer server;
> 
> @BeforeClass
> public static void setup() throws IOException, TException
> {
> Schema.instance.clear(); // Schema is now written on disk and will be reloaded
> new EmbeddedCassandraService().start();
> ThriftSessionManager.instance.setCurrentSocket(new InetSocketAddress(9160));
> server = new CassandraServer();
> server.set_keyspace("Keyspace1");
> }
> 
> @Test
> public void createATrigger() throws TException
> {
> TriggerDef td = new TriggerDef();
> td.setName("gimme5");
> Map options = new HashMap<>();
> options.put("class", "org.apache.cassandra.triggers.ITriggerImpl");
> td.setOptions(options);
> CfDef cfDef = new CfDef();
> cfDef.setKeyspace("Keyspace1");
> cfDef.setTriggers(Arrays.asList(td));
> cfDef.setName("triggercf");
> server.system_add_column_family(cfDef);
> 
> KsDef keyspace1 = server.describe_keyspace("Keyspace1");
> CfDef triggerCf = null;
> for (CfDef cfs :keyspace1.cf_defs){
>   if (cfs.getName().equals("triggercf")){
> triggerCf=cfs;
>   }
> }
> Assert.assertNotNull(triggerCf);
> Assert.assertEquals(1, triggerCf.getTriggers().size());
> }
> }
> {code}
> junit.framework.AssertionFailedError: expected:<1> but was:<0>



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6789) Triggers can not be added from thrift

2014-03-11 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-6789:
---

Attachment: 0001-Include-trigger-defs-in-CFMetaData.toSchema.patch

Attaching patch against 2.0 branch

> Triggers can not be added from thrift
> -
>
> Key: CASSANDRA-6789
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6789
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
> Attachments: 0001-Include-trigger-defs-in-CFMetaData.toSchema.patch
>
>
> While playing with Groovy triggers, I determined that you cannot add 
> triggers from thrift, unless I am doing something wrong. (I see no coverage 
> of this feature from thrift/python.)
> https://github.com/edwardcapriolo/cassandra/compare/trigger_coverage?expand=1
> {code}
> package org.apache.cassandra.triggers;
> import java.io.IOException;
> import java.net.InetSocketAddress;
> import java.nio.ByteBuffer;
> import java.util.ArrayList;
> import java.util.Arrays;
> import java.util.HashMap;
> import java.util.List;
> import java.util.Map;
> import junit.framework.Assert;
> import org.apache.cassandra.SchemaLoader;
> import org.apache.cassandra.config.Schema;
> import org.apache.cassandra.service.EmbeddedCassandraService;
> import org.apache.cassandra.thrift.CassandraServer;
> import org.apache.cassandra.thrift.CfDef;
> import org.apache.cassandra.thrift.ColumnParent;
> import org.apache.cassandra.thrift.KsDef;
> import org.apache.cassandra.thrift.ThriftSessionManager;
> import org.apache.cassandra.thrift.TriggerDef;
> import org.apache.cassandra.utils.ByteBufferUtil;
> import org.apache.thrift.TException;
> import org.junit.BeforeClass;
> import org.junit.Test;
> public class TriggerTest extends SchemaLoader
> {
> private static CassandraServer server;
> 
> @BeforeClass
> public static void setup() throws IOException, TException
> {
> Schema.instance.clear(); // Schema is now written on disk and will be reloaded
> new EmbeddedCassandraService().start();
> ThriftSessionManager.instance.setCurrentSocket(new InetSocketAddress(9160));
> server = new CassandraServer();
> server.set_keyspace("Keyspace1");
> }
> 
> @Test
> public void createATrigger() throws TException
> {
> TriggerDef td = new TriggerDef();
> td.setName("gimme5");
> Map options = new HashMap<>();
> options.put("class", "org.apache.cassandra.triggers.ITriggerImpl");
> td.setOptions(options);
> CfDef cfDef = new CfDef();
> cfDef.setKeyspace("Keyspace1");
> cfDef.setTriggers(Arrays.asList(td));
> cfDef.setName("triggercf");
> server.system_add_column_family(cfDef);
> 
> KsDef keyspace1 = server.describe_keyspace("Keyspace1");
> CfDef triggerCf = null;
> for (CfDef cfs :keyspace1.cf_defs){
>   if (cfs.getName().equals("triggercf")){
> triggerCf=cfs;
>   }
> }
> Assert.assertNotNull(triggerCf);
> Assert.assertEquals(1, triggerCf.getTriggers().size());
> }
> }
> {code}
> junit.framework.AssertionFailedError: expected:<1> but was:<0>
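
As a cross-check for anyone reproducing this: if system_add_column_family actually persisted the trigger definition, it should also be visible through the schema tables. The sketch below is only an illustration (not part of the attached patch); it assumes the 2.0 system.schema_triggers table and reuses the same embedded CassandraServer the test above sets up.

{code}
import org.apache.cassandra.thrift.CassandraServer;
import org.apache.cassandra.thrift.Compression;
import org.apache.cassandra.thrift.ConsistencyLevel;
import org.apache.cassandra.thrift.CqlResult;
import org.apache.cassandra.utils.ByteBufferUtil;
import org.apache.thrift.TException;

public class TriggerSchemaCheck
{
    // Counts the rows of system.schema_triggers through the thrift server used by the test.
    // While the trigger definition is being dropped this returns 0, matching the empty
    // triggers list that describe_keyspace() reports.
    static int triggerRowCount(CassandraServer server) throws TException
    {
        CqlResult result = server.execute_cql3_query(
                ByteBufferUtil.bytes("SELECT * FROM system.schema_triggers"),
                Compression.NONE,
                ConsistencyLevel.ONE);
        return result.getRowsSize();
    }
}
{code}

Calling this right after the failing assertion gives a second, independent view of whether the definition ever reached the schema.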



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6790) Triggers are broken in trunk because of immutable list

2014-03-11 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-6790:
---

Reviewer: Aleksey Yeschenko

> Triggers are broken in trunk because of immutable list
> -
>
> Key: CASSANDRA-6790
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6790
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
> Fix For: 2.1 beta2
>
> Attachments: 
> 0001-Apply-trigger-mutations-when-base-mutation-list-is-i.patch
>
>
> The trigger code is not covered by any tests (that I can find). When inserting 
> single columns, an immutable list is created; when the trigger attempts to 
> edit this list, the operation fails.
> Fix coming shortly.
> {noformat}
> java.lang.UnsupportedOperationException
> at java.util.AbstractList.add(AbstractList.java:148)
> at java.util.AbstractList.add(AbstractList.java:108)
> at java.util.AbstractCollection.addAll(AbstractCollection.java:342)
> at org.apache.cassandra.service.StorageProxy.mutateWithTriggers(StorageProxy.java:522)
> at org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:1084)
> at org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:1066)
> at org.apache.cassandra.thrift.CassandraServer.internal_insert(CassandraServer.java:676)
> at org.apache.cassandra.thrift.CassandraServer.insert(CassandraServer.java:697)
> at org.apache.cassandra.triggers.TriggerTest.createATriggerWithCqlAndReadItBackFromthrift(TriggerTest.java:108)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
> at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
> at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
> at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
> at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
> at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:44)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41)
> at org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173)
> at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
> at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
> at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
> at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
> at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
> at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
> at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> {noformat}
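
For readers following along, here is a minimal standalone sketch of the failure mode described above, plus the usual defensive-copy workaround. It is not the attached patch; the Mutation class is a hypothetical stand-in for the real mutation type, and only the list handling matters.

{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;

public class FixedSizeListSketch
{
    // Hypothetical stand-in for the real mutation type.
    static class Mutation {}

    public static void main(String[] args)
    {
        // A single-column insert ends up wrapped in a fixed-size list (e.g. via Arrays.asList()).
        Collection<Mutation> base = Arrays.asList(new Mutation());
        List<Mutation> fromTrigger = Arrays.asList(new Mutation());

        try
        {
            // The shape of the failing call in the stack trace above:
            // addAll() delegates to AbstractList.add(), which throws for fixed-size lists.
            base.addAll(fromTrigger);
        }
        catch (UnsupportedOperationException e)
        {
            System.out.println("cannot grow the fixed-size list in place");
        }

        // The workaround: copy into a growable list before appending the trigger-generated mutations.
        List<Mutation> augmented = new ArrayList<>(base);
        augmented.addAll(fromTrigger);
        System.out.println("augmented size = " + augmented.size()); // prints 2
    }
}
{code}

Whatever the final fix looks like, the core constraint is the same: the code that merges trigger-generated mutations cannot assume the caller-supplied collection is growable.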



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6790) Triggers are broken in trunk because of immutable list

2014-03-11 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930248#comment-13930248
 ] 

Sam Tunnicliffe commented on CASSANDRA-6790:


Attached patch applies to 2.0 branch.

> Triggers are broken in trunk because of immutable list
> -
>
> Key: CASSANDRA-6790
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6790
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
> Fix For: 2.1 beta2
>
> Attachments: 
> 0001-Apply-trigger-mutations-when-base-mutation-list-is-i.patch
>
>
> The trigger code is not covered by any tests (that I can find). When inserting 
> single columns, an immutable list is created; when the trigger attempts to 
> edit this list, the operation fails.
> Fix coming shortly.
> {noformat}
> java.lang.UnsupportedOperationException
> at java.util.AbstractList.add(AbstractList.java:148)
> at java.util.AbstractList.add(AbstractList.java:108)
> at java.util.AbstractCollection.addAll(AbstractCollection.java:342)
> at org.apache.cassandra.service.StorageProxy.mutateWithTriggers(StorageProxy.java:522)
> at org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:1084)
> at org.apache.cassandra.thrift.CassandraServer.doInsert(CassandraServer.java:1066)
> at org.apache.cassandra.thrift.CassandraServer.internal_insert(CassandraServer.java:676)
> at org.apache.cassandra.thrift.CassandraServer.insert(CassandraServer.java:697)
> at org.apache.cassandra.triggers.TriggerTest.createATriggerWithCqlAndReadItBackFromthrift(TriggerTest.java:108)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
> at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
> at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
> at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
> at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
> at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:44)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41)
> at org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173)
> at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
> at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
> at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
> at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
> at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
> at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
> at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

