[jira] [Updated] (CASSANDRA-6212) TimestampType doesn't support pre-epoch long

2013-10-21 Thread Mikhail Stepura (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Stepura updated CASSANDRA-6212:
---

Attachment: cassandra-2.0-6212.patch

Changed the regexp to accept a leading -

 TimestampType doesn't support pre-epoch long
 

 Key: CASSANDRA-6212
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6212
 Project: Cassandra
  Issue Type: Bug
  Components: API, Core
Reporter: Simon Hopkin
Assignee: Mikhail Stepura
Priority: Minor
 Fix For: 2.0.2

 Attachments: cassandra-2.0-6212.patch


 org.apache.cassandra.db.marshal.TimestampType.dateStringToTimestamp() 
 contains a regular expression that checks whether the String argument 
 contains a number; if so, it parses the argument as a long timestamp. 
 However, pre-epoch timestamps are negative, and the code doesn't account for 
 this, so it falls back to parsing the value as a formatted Date. A tweak to 
 the regular expression in TimestampType.dateStringToTimestamp() would solve 
 this issue.
 I could use formatted date strings instead, but the TimestampType date parser 
 uses ISO8601 patterns, which would round the timestamp to the nearest second.
 Currently I get the following exception message:
 unable to coerce '-8640' to a formatted date (long)
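
A minimal standalone sketch of the behaviour (illustrative only; the real fix is the one-line regexp change to TimestampType.dateStringToTimestamp() in the attached patch):

{noformat}
import java.util.regex.Pattern;

public class PreEpochTimestampDemo
{
    // Old pattern: digits only, so a pre-epoch value such as "-8640" falls
    // through to the ISO8601 date parser and fails to coerce.
    private static final Pattern OLD = Pattern.compile("^\\d+$");
    // Patched pattern: an optional leading '-' lets negative (pre-epoch) longs through.
    private static final Pattern NEW = Pattern.compile("^-?\\d+$");

    public static void main(String[] args)
    {
        String preEpoch = "-8640"; // milliseconds before the epoch
        System.out.println(OLD.matcher(preEpoch).matches()); // false
        System.out.println(NEW.matcher(preEpoch).matches()); // true, so Long.parseLong(preEpoch) is used
    }
}
{noformat}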



--
This message was sent by Atlassian JIRA
(v6.1#6144)


git commit: CQL3: support pre-epoch longs for TimestampType

2013-10-21 Thread aleksey
Updated Branches:
  refs/heads/cassandra-2.0 1187c7aa0 -> 28caff5a4


CQL3: support pre-epoch longs for TimestampType

patch by Mikhail Stepura; reviewed by Aleksey Yeschenko for
CASSANDRA-6212


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/28caff5a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/28caff5a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/28caff5a

Branch: refs/heads/cassandra-2.0
Commit: 28caff5a44c3128394984fe8c968f62a5c3db0ff
Parents: 1187c7a
Author: Aleksey Yeschenko alek...@apache.org
Authored: Mon Oct 21 15:14:16 2013 +0800
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Mon Oct 21 15:14:16 2013 +0800

--
 src/java/org/apache/cassandra/db/marshal/TimestampType.java | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/28caff5a/src/java/org/apache/cassandra/db/marshal/TimestampType.java
--
diff --git a/src/java/org/apache/cassandra/db/marshal/TimestampType.java 
b/src/java/org/apache/cassandra/db/marshal/TimestampType.java
index 69ef07d..cf1ea41 100644
--- a/src/java/org/apache/cassandra/db/marshal/TimestampType.java
+++ b/src/java/org/apache/cassandra/db/marshal/TimestampType.java
@@ -20,10 +20,10 @@ package org.apache.cassandra.db.marshal;
 import java.nio.ByteBuffer;
 import java.text.ParseException;
 import java.util.Date;
+import java.util.regex.Pattern;
 
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
-
 import org.apache.cassandra.cql3.CQL3Type;
 import org.apache.cassandra.serializers.TypeSerializer;
 import org.apache.cassandra.serializers.MarshalException;
@@ -44,6 +44,8 @@ public class TimestampType extends AbstractType<Date>
 
 public static final TimestampType instance = new TimestampType();
 
+    private static final Pattern timestampPattern = Pattern.compile("^-?\\d+$");
+
 private TimestampType() {} // singleton
 
 public int compare(ByteBuffer o1, ByteBuffer o2)
@@ -69,7 +71,7 @@ public class TimestampType extends AbstractType<Date>
   millis = System.currentTimeMillis();
   }
   // Milliseconds since epoch?
-  else if (source.matches("^\\d+$"))
+  else if (timestampPattern.matcher(source).matches())
   {
   try
   {



[1/2] git commit: CQL3: support pre-epoch longs for TimestampType

2013-10-21 Thread aleksey
Updated Branches:
  refs/heads/trunk 2d46a2bdb -> c747874a5


CQL3: support pre-epoch longs for TimestampType

patch by Mikhail Stepura; reviewed by Aleksey Yeschenko for
CASSANDRA-6212


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/28caff5a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/28caff5a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/28caff5a

Branch: refs/heads/trunk
Commit: 28caff5a44c3128394984fe8c968f62a5c3db0ff
Parents: 1187c7a
Author: Aleksey Yeschenko alek...@apache.org
Authored: Mon Oct 21 15:14:16 2013 +0800
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Mon Oct 21 15:14:16 2013 +0800

--
 src/java/org/apache/cassandra/db/marshal/TimestampType.java | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/28caff5a/src/java/org/apache/cassandra/db/marshal/TimestampType.java
--
diff --git a/src/java/org/apache/cassandra/db/marshal/TimestampType.java 
b/src/java/org/apache/cassandra/db/marshal/TimestampType.java
index 69ef07d..cf1ea41 100644
--- a/src/java/org/apache/cassandra/db/marshal/TimestampType.java
+++ b/src/java/org/apache/cassandra/db/marshal/TimestampType.java
@@ -20,10 +20,10 @@ package org.apache.cassandra.db.marshal;
 import java.nio.ByteBuffer;
 import java.text.ParseException;
 import java.util.Date;
+import java.util.regex.Pattern;
 
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
-
 import org.apache.cassandra.cql3.CQL3Type;
 import org.apache.cassandra.serializers.TypeSerializer;
 import org.apache.cassandra.serializers.MarshalException;
@@ -44,6 +44,8 @@ public class TimestampType extends AbstractType<Date>
 
 public static final TimestampType instance = new TimestampType();
 
+    private static final Pattern timestampPattern = Pattern.compile("^-?\\d+$");
+
 private TimestampType() {} // singleton
 
 public int compare(ByteBuffer o1, ByteBuffer o2)
@@ -69,7 +71,7 @@ public class TimestampType extends AbstractType<Date>
   millis = System.currentTimeMillis();
   }
   // Milliseconds since epoch?
-  else if (source.matches("^\\d+$"))
+  else if (timestampPattern.matcher(source).matches())
   {
   try
   {



[2/2] git commit: Merge branch 'cassandra-2.0' into trunk

2013-10-21 Thread aleksey
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c747874a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c747874a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c747874a

Branch: refs/heads/trunk
Commit: c747874a5306cde9628d9e2c93679a84e22644c7
Parents: 2d46a2b 28caff5
Author: Aleksey Yeschenko alek...@apache.org
Authored: Mon Oct 21 15:17:46 2013 +0800
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Mon Oct 21 15:17:46 2013 +0800

--
 src/java/org/apache/cassandra/db/marshal/TimestampType.java | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)
--




git commit: CASSANDRA-6212 CHANGES.txt

2013-10-21 Thread aleksey
Updated Branches:
  refs/heads/cassandra-2.0 28caff5a4 -> 146f813e5


CASSANDRA-6212 CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/146f813e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/146f813e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/146f813e

Branch: refs/heads/cassandra-2.0
Commit: 146f813e56a97472c88d67d1917d78c4e0f3e81a
Parents: 28caff5
Author: Aleksey Yeschenko alek...@apache.org
Authored: Mon Oct 21 15:20:03 2013 +0800
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Mon Oct 21 15:20:03 2013 +0800

--
 CHANGES.txt | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/146f813e/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 41e885e..401b3ff 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -23,6 +23,7 @@
  * Use Java7 file-handling APIs and fix file moving on Windows (CASSANDRA-5383)
  * Save compaction history to system keyspace (CASSANDRA-5078)
  * Fix NPE if StorageService.getOperationMode() is executed before full 
startup (CASSANDRA-6166)
+ * CQL3: support pre-epoch longs for TimestampType (CASSANDRA-6212)
 Merged from 1.2:
  * (Hadoop) Require CFRR batchSize to be at least 2 (CASSANDRA-6114)
  * Add a warning for small LCS sstable size (CASSANDRA-6191)



[2/2] git commit: Merge branch 'cassandra-2.0' into trunk

2013-10-21 Thread aleksey
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/46681108
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/46681108
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/46681108

Branch: refs/heads/trunk
Commit: 46681108e60cfa2afc229cf15157620b7ea0852f
Parents: c747874 146f813
Author: Aleksey Yeschenko alek...@apache.org
Authored: Mon Oct 21 15:20:18 2013 +0800
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Mon Oct 21 15:20:18 2013 +0800

--
 CHANGES.txt | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/46681108/CHANGES.txt
--



[1/2] git commit: CASSANDRA-6212 CHANGES.txt

2013-10-21 Thread aleksey
Updated Branches:
  refs/heads/trunk c747874a5 -> 46681108e


CASSANDRA-6212 CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/146f813e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/146f813e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/146f813e

Branch: refs/heads/trunk
Commit: 146f813e56a97472c88d67d1917d78c4e0f3e81a
Parents: 28caff5
Author: Aleksey Yeschenko alek...@apache.org
Authored: Mon Oct 21 15:20:03 2013 +0800
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Mon Oct 21 15:20:03 2013 +0800

--
 CHANGES.txt | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/146f813e/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 41e885e..401b3ff 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -23,6 +23,7 @@
  * Use Java7 file-handling APIs and fix file moving on Windows (CASSANDRA-5383)
  * Save compaction history to system keyspace (CASSANDRA-5078)
  * Fix NPE if StorageService.getOperationMode() is executed before full 
startup (CASSANDRA-6166)
+ * CQL3: support pre-epoch longs for TimestampType (CASSANDRA-6212)
 Merged from 1.2:
  * (Hadoop) Require CFRR batchSize to be at least 2 (CASSANDRA-6114)
  * Add a warning for small LCS sstable size (CASSANDRA-6191)



[jira] [Updated] (CASSANDRA-6196) Add compaction, compression to cqlsh tab completion for CREATE TABLE

2013-10-21 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-6196:
-

 Reviewer: Aleksey Yeschenko  (was: Brandon Williams)
Reproduced In: 2.0.1, 1.2.10  (was: 1.2.10, 2.0.1)

 Add compaction, compression to cqlsh tab completion for CREATE TABLE
 

 Key: CASSANDRA-6196
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6196
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Jonathan Ellis
Assignee: Mikhail Stepura
Priority: Minor
 Fix For: 1.2.12, 2.0.2

 Attachments: cassandra-2.0-6196.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6185) Can't update int column to blob type.

2013-10-21 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13800424#comment-13800424
 ] 

Aleksey Yeschenko commented on CASSANDRA-6185:
--

^

 Can't update int column to blob type.
 -

 Key: CASSANDRA-6185
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6185
 Project: Cassandra
  Issue Type: Bug
Reporter: Nick Bailey
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 1.2.12, 2.0.2

 Attachments: 6185.txt


 Patch for dtests:
 {noformat}
 diff --git a/cql_tests.py b/cql_tests.py
 index 11461e4..405c998 100644
 --- a/cql_tests.py
 +++ b/cql_tests.py
 @@ -1547,35 +1547,35 @@ class TestCQL(Tester):
  CREATE TABLE test (
  k text,
  c text,
 -v text,
 +v int,
  PRIMARY KEY (k, c)
  )
  )
 -req = "INSERT INTO test (k, c, v) VALUES ('%s', '%s', '%s')"
 +req = "INSERT INTO test (k, c, v) VALUES ('%s', '%s', %d)"
  # using utf8 character so that we can see the transition to BytesType
 -cursor.execute(req % ('ɸ', 'ɸ', 'ɸ'))
 +cursor.execute(req % ('ɸ', 'ɸ', 1))
  cursor.execute("SELECT * FROM test")
  cursor.execute("SELECT * FROM test")
  res = cursor.fetchall()
 -assert res == [[u'ɸ', u'ɸ', u'ɸ']], res
 +assert res == [[u'ɸ', u'ɸ', 1]], res
  cursor.execute("ALTER TABLE test ALTER v TYPE blob")
  cursor.execute("SELECT * FROM test")
  res = cursor.fetchall()
  # the last should not be utf8 but a raw string
 -assert res == [[u'ɸ', u'ɸ', 'ɸ']], res
 +assert res == [[u'ɸ', u'ɸ', '\x00\x00\x00\x01']], res
  cursor.execute("ALTER TABLE test ALTER k TYPE blob")
  cursor.execute("SELECT * FROM test")
  res = cursor.fetchall()
 -assert res == [['ɸ', u'ɸ', 'ɸ']], res
 +assert res == [['ɸ', u'ɸ', '\x00\x00\x00\x01']], res
  cursor.execute("ALTER TABLE test ALTER c TYPE blob")
  cursor.execute("SELECT * FROM test")
  res = cursor.fetchall()
 -assert res == [['ɸ', 'ɸ', 'ɸ']], res
 +assert res == [['ɸ', 'ɸ', '\x00\x00\x00\x01']], res
  @since('1.2')
  def composite_row_key_test(self):
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


git commit: Add reloadtriggers command to nodetool

2013-10-21 Thread aleksey
Updated Branches:
  refs/heads/cassandra-2.0 146f813e5 -> edd1226fd


Add reloadtriggers command to nodetool

patch by Suresh; reviewed by Aleksey Yeschenko for CASSANDRA-4949


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/edd1226f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/edd1226f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/edd1226f

Branch: refs/heads/cassandra-2.0
Commit: edd1226fda7408f97ded52a8cdab0ba1dea8d0df
Parents: 146f813
Author: Aleksey Yeschenko alek...@apache.org
Authored: Mon Oct 21 15:38:33 2013 +0800
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Mon Oct 21 15:38:33 2013 +0800

--
 CHANGES.txt  | 1 +
 src/java/org/apache/cassandra/service/StorageProxy.java  | 2 +-
 src/java/org/apache/cassandra/service/StorageProxyMBean.java | 2 +-
 src/java/org/apache/cassandra/tools/NodeCmd.java | 7 ++-
 src/java/org/apache/cassandra/tools/NodeProbe.java   | 5 +
 src/resources/org/apache/cassandra/tools/NodeToolHelp.yaml   | 3 +++
 6 files changed, 17 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/edd1226f/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 401b3ff..351c625 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -24,6 +24,7 @@
  * Save compaction history to system keyspace (CASSANDRA-5078)
  * Fix NPE if StorageService.getOperationMode() is executed before full 
startup (CASSANDRA-6166)
  * CQL3: support pre-epoch longs for TimestampType (CASSANDRA-6212)
+ * Add reloadtriggers command to nodetool (CASSANDRA-4949)
 Merged from 1.2:
  * (Hadoop) Require CFRR batchSize to be at least 2 (CASSANDRA-6114)
  * Add a warning for small LCS sstable size (CASSANDRA-6191)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/edd1226f/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java 
b/src/java/org/apache/cassandra/service/StorageProxy.java
index 259d2f5..e177eed 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -2043,7 +2043,7 @@ public class StorageProxy implements StorageProxyMBean
 
 public Long getTruncateRpcTimeout() { return 
DatabaseDescriptor.getTruncateRpcTimeout(); }
 public void setTruncateRpcTimeout(Long timeoutInMillis) { 
DatabaseDescriptor.setTruncateRpcTimeout(timeoutInMillis); }
-public void reloadTriggerClass() { 
TriggerExecutor.instance.reloadClasses(); }
+public void reloadTriggerClasses() { 
TriggerExecutor.instance.reloadClasses(); }
 
 
 public long getReadRepairAttempted() {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/edd1226f/src/java/org/apache/cassandra/service/StorageProxyMBean.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxyMBean.java 
b/src/java/org/apache/cassandra/service/StorageProxyMBean.java
index 98c1850..ad7d4c7 100644
--- a/src/java/org/apache/cassandra/service/StorageProxyMBean.java
+++ b/src/java/org/apache/cassandra/service/StorageProxyMBean.java
@@ -92,7 +92,7 @@ public interface StorageProxyMBean
 public Long getTruncateRpcTimeout();
 public void setTruncateRpcTimeout(Long timeoutInMillis);
 
-public void reloadTriggerClass();
+public void reloadTriggerClasses();
 
 public long getReadRepairAttempted();
 public long getReadRepairRepairedBlocking();

http://git-wip-us.apache.org/repos/asf/cassandra/blob/edd1226f/src/java/org/apache/cassandra/tools/NodeCmd.java
--
diff --git a/src/java/org/apache/cassandra/tools/NodeCmd.java 
b/src/java/org/apache/cassandra/tools/NodeCmd.java
index 62b15dd..57de7d0 100644
--- a/src/java/org/apache/cassandra/tools/NodeCmd.java
+++ b/src/java/org/apache/cassandra/tools/NodeCmd.java
@@ -169,7 +169,8 @@ public class NodeCmd
 RESETLOCALSCHEMA,
 ENABLEBACKUP,
 DISABLEBACKUP,
-SETCACHEKEYSTOSAVE
+SETCACHEKEYSTOSAVE,
+RELOADTRIGGERS
 }
 
 
@@ -1299,6 +1300,10 @@ public class NodeCmd
 nodeCmd.printRangeKeySample(System.out);
 break;
 
+case RELOADTRIGGERS :
+probe.reloadTriggers();
+break;
+
 default :
 throw new RuntimeException("Unreachable code.");
 }


[1/2] git commit: Add reloadtriggers command to nodetool

2013-10-21 Thread aleksey
Updated Branches:
  refs/heads/trunk 46681108e -> d1f8c6f7a


Add reloadtriggers command to nodetool

patch by Suresh; reviewed by Aleksey Yeschenko for CASSANDRA-4949


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/edd1226f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/edd1226f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/edd1226f

Branch: refs/heads/trunk
Commit: edd1226fda7408f97ded52a8cdab0ba1dea8d0df
Parents: 146f813
Author: Aleksey Yeschenko alek...@apache.org
Authored: Mon Oct 21 15:38:33 2013 +0800
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Mon Oct 21 15:38:33 2013 +0800

--
 CHANGES.txt  | 1 +
 src/java/org/apache/cassandra/service/StorageProxy.java  | 2 +-
 src/java/org/apache/cassandra/service/StorageProxyMBean.java | 2 +-
 src/java/org/apache/cassandra/tools/NodeCmd.java | 7 ++-
 src/java/org/apache/cassandra/tools/NodeProbe.java   | 5 +
 src/resources/org/apache/cassandra/tools/NodeToolHelp.yaml   | 3 +++
 6 files changed, 17 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/edd1226f/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 401b3ff..351c625 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -24,6 +24,7 @@
  * Save compaction history to system keyspace (CASSANDRA-5078)
  * Fix NPE if StorageService.getOperationMode() is executed before full 
startup (CASSANDRA-6166)
  * CQL3: support pre-epoch longs for TimestampType (CASSANDRA-6212)
+ * Add reloadtriggers command to nodetool (CASSANDRA-4949)
 Merged from 1.2:
  * (Hadoop) Require CFRR batchSize to be at least 2 (CASSANDRA-6114)
  * Add a warning for small LCS sstable size (CASSANDRA-6191)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/edd1226f/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java 
b/src/java/org/apache/cassandra/service/StorageProxy.java
index 259d2f5..e177eed 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -2043,7 +2043,7 @@ public class StorageProxy implements StorageProxyMBean
 
 public Long getTruncateRpcTimeout() { return 
DatabaseDescriptor.getTruncateRpcTimeout(); }
 public void setTruncateRpcTimeout(Long timeoutInMillis) { 
DatabaseDescriptor.setTruncateRpcTimeout(timeoutInMillis); }
-public void reloadTriggerClass() { 
TriggerExecutor.instance.reloadClasses(); }
+public void reloadTriggerClasses() { 
TriggerExecutor.instance.reloadClasses(); }
 
 
 public long getReadRepairAttempted() {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/edd1226f/src/java/org/apache/cassandra/service/StorageProxyMBean.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxyMBean.java 
b/src/java/org/apache/cassandra/service/StorageProxyMBean.java
index 98c1850..ad7d4c7 100644
--- a/src/java/org/apache/cassandra/service/StorageProxyMBean.java
+++ b/src/java/org/apache/cassandra/service/StorageProxyMBean.java
@@ -92,7 +92,7 @@ public interface StorageProxyMBean
 public Long getTruncateRpcTimeout();
 public void setTruncateRpcTimeout(Long timeoutInMillis);
 
-public void reloadTriggerClass();
+public void reloadTriggerClasses();
 
 public long getReadRepairAttempted();
 public long getReadRepairRepairedBlocking();

http://git-wip-us.apache.org/repos/asf/cassandra/blob/edd1226f/src/java/org/apache/cassandra/tools/NodeCmd.java
--
diff --git a/src/java/org/apache/cassandra/tools/NodeCmd.java 
b/src/java/org/apache/cassandra/tools/NodeCmd.java
index 62b15dd..57de7d0 100644
--- a/src/java/org/apache/cassandra/tools/NodeCmd.java
+++ b/src/java/org/apache/cassandra/tools/NodeCmd.java
@@ -169,7 +169,8 @@ public class NodeCmd
 RESETLOCALSCHEMA,
 ENABLEBACKUP,
 DISABLEBACKUP,
-SETCACHEKEYSTOSAVE
+SETCACHEKEYSTOSAVE,
+RELOADTRIGGERS
 }
 
 
@@ -1299,6 +1300,10 @@ public class NodeCmd
 nodeCmd.printRangeKeySample(System.out);
 break;
 
+case RELOADTRIGGERS :
+probe.reloadTriggers();
+break;
+
 default :
 throw new RuntimeException("Unreachable code.");
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/edd1226f/src/java/org/apache/cassandra/tools/NodeProbe.java

[2/2] git commit: Merge branch 'cassandra-2.0' into trunk

2013-10-21 Thread aleksey
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d1f8c6f7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d1f8c6f7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d1f8c6f7

Branch: refs/heads/trunk
Commit: d1f8c6f7a8492a32efa8a2dd29a6061d0c618550
Parents: 4668110 edd1226
Author: Aleksey Yeschenko alek...@apache.org
Authored: Mon Oct 21 15:40:30 2013 +0800
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Mon Oct 21 15:40:30 2013 +0800

--
 CHANGES.txt  | 1 +
 src/java/org/apache/cassandra/service/StorageProxy.java  | 2 +-
 src/java/org/apache/cassandra/service/StorageProxyMBean.java | 2 +-
 src/java/org/apache/cassandra/tools/NodeCmd.java | 7 ++-
 src/java/org/apache/cassandra/tools/NodeProbe.java   | 5 +
 src/resources/org/apache/cassandra/tools/NodeToolHelp.yaml   | 3 +++
 6 files changed, 17 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d1f8c6f7/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d1f8c6f7/src/java/org/apache/cassandra/service/StorageProxy.java
--



git commit: Fix altering column types

2013-10-21 Thread slebresne
Updated Branches:
  refs/heads/cassandra-1.2 df188cc8d -> 189a60728


Fix altering column types

patch by slebresne; reviewed by iamaleksey for CASSANDRA-6185


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/189a6072
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/189a6072
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/189a6072

Branch: refs/heads/cassandra-1.2
Commit: 189a60728db1e01bfeaa664b41431701fd684f5f
Parents: df188cc
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Mon Oct 21 10:10:54 2013 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Mon Oct 21 10:10:54 2013 +0200

--
 CHANGES.txt |  1 +
 .../cassandra/config/ColumnDefinition.java  |  3 +-
 .../cql3/statements/AlterTableStatement.java| 35 
 .../cassandra/db/marshal/AbstractType.java  | 14 +++-
 .../apache/cassandra/db/marshal/BytesType.java  |  7 
 .../cassandra/db/marshal/CompositeType.java | 24 ++
 6 files changed, 76 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/189a6072/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 70bb919..117a200 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,5 +1,6 @@
 1.2.12
  * (Hadoop) Require CFRR batchSize to be at least 2 (CASSANDRA-6114)
+ * Fix altering column types (CASSANDRA-6185)
 
 
 1.2.11

http://git-wip-us.apache.org/repos/asf/cassandra/blob/189a6072/src/java/org/apache/cassandra/config/ColumnDefinition.java
--
diff --git a/src/java/org/apache/cassandra/config/ColumnDefinition.java 
b/src/java/org/apache/cassandra/config/ColumnDefinition.java
index db5f7ed..807f008 100644
--- a/src/java/org/apache/cassandra/config/ColumnDefinition.java
+++ b/src/java/org/apache/cassandra/config/ColumnDefinition.java
@@ -180,8 +180,9 @@ public class ColumnDefinition
         if (getIndexType() != null && def.getIndexType() != null)
         {
             // If an index is set (and not drop by this update), the validator shouldn't be change to a non-compatible one
+            // (and we want true comparator compatibility, not just value one, since the validator is used by LocalPartitioner to order index rows)
             if (!def.getValidator().isCompatibleWith(getValidator()))
-                throw new ConfigurationException(String.format("Cannot modify validator to a non-compatible one for column %s since an index is set", comparator.getString(name)));
+                throw new ConfigurationException(String.format("Cannot modify validator to a non-order-compatible one for column %s since an index is set", comparator.getString(name)));
 
 assert getIndexName() != null;
 if (!getIndexName().equals(def.getIndexName()))

http://git-wip-us.apache.org/repos/asf/cassandra/blob/189a6072/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
index a247a4d..36ec56d 100644
--- a/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
@@ -134,24 +134,45 @@ public class AlterTableStatement extends SchemaAlteringStatement
                     throw new InvalidRequestException(String.format("counter type is not supported for PRIMARY KEY part %s", columnName));
                 if (cfDef.hasCompositeKey)
                 {
-                    List<AbstractType<?>> newTypes = new ArrayList<AbstractType<?>>(((CompositeType) cfm.getKeyValidator()).types);
+                    List<AbstractType<?>> oldTypes = ((CompositeType) cfm.getKeyValidator()).types;
+                    if (!newType.isValueCompatibleWith(oldTypes.get(name.position)))
+                        throw new ConfigurationException(String.format("Cannot change %s from type %s to type %s: types are incompatible.",
+                                                                       columnName,
+                                                                       oldTypes.get(name.position).asCQL3Type(),
+                                                                       validator));
+
+                    List<AbstractType<?>> newTypes = new ArrayList<AbstractType<?>>(oldTypes);
                     newTypes.set(name.position, newType);
 

[1/2] git commit: Fix altering column types

2013-10-21 Thread slebresne
Updated Branches:
  refs/heads/cassandra-2.0 edd1226fd -> 5c5426233


Fix altering column types

patch by slebresne; reviewed by iamaleksey for CASSANDRA-6185


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/189a6072
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/189a6072
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/189a6072

Branch: refs/heads/cassandra-2.0
Commit: 189a60728db1e01bfeaa664b41431701fd684f5f
Parents: df188cc
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Mon Oct 21 10:10:54 2013 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Mon Oct 21 10:10:54 2013 +0200

--
 CHANGES.txt |  1 +
 .../cassandra/config/ColumnDefinition.java  |  3 +-
 .../cql3/statements/AlterTableStatement.java| 35 
 .../cassandra/db/marshal/AbstractType.java  | 14 +++-
 .../apache/cassandra/db/marshal/BytesType.java  |  7 
 .../cassandra/db/marshal/CompositeType.java | 24 ++
 6 files changed, 76 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/189a6072/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 70bb919..117a200 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,5 +1,6 @@
 1.2.12
  * (Hadoop) Require CFRR batchSize to be at least 2 (CASSANDRA-6114)
+ * Fix altering column types (CASSANDRA-6185)
 
 
 1.2.11

http://git-wip-us.apache.org/repos/asf/cassandra/blob/189a6072/src/java/org/apache/cassandra/config/ColumnDefinition.java
--
diff --git a/src/java/org/apache/cassandra/config/ColumnDefinition.java 
b/src/java/org/apache/cassandra/config/ColumnDefinition.java
index db5f7ed..807f008 100644
--- a/src/java/org/apache/cassandra/config/ColumnDefinition.java
+++ b/src/java/org/apache/cassandra/config/ColumnDefinition.java
@@ -180,8 +180,9 @@ public class ColumnDefinition
         if (getIndexType() != null && def.getIndexType() != null)
         {
             // If an index is set (and not drop by this update), the validator shouldn't be change to a non-compatible one
+            // (and we want true comparator compatibility, not just value one, since the validator is used by LocalPartitioner to order index rows)
             if (!def.getValidator().isCompatibleWith(getValidator()))
-                throw new ConfigurationException(String.format("Cannot modify validator to a non-compatible one for column %s since an index is set", comparator.getString(name)));
+                throw new ConfigurationException(String.format("Cannot modify validator to a non-order-compatible one for column %s since an index is set", comparator.getString(name)));
 
 assert getIndexName() != null;
 if (!getIndexName().equals(def.getIndexName()))

http://git-wip-us.apache.org/repos/asf/cassandra/blob/189a6072/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
index a247a4d..36ec56d 100644
--- a/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
@@ -134,24 +134,45 @@ public class AlterTableStatement extends SchemaAlteringStatement
                     throw new InvalidRequestException(String.format("counter type is not supported for PRIMARY KEY part %s", columnName));
                 if (cfDef.hasCompositeKey)
                 {
-                    List<AbstractType<?>> newTypes = new ArrayList<AbstractType<?>>(((CompositeType) cfm.getKeyValidator()).types);
+                    List<AbstractType<?>> oldTypes = ((CompositeType) cfm.getKeyValidator()).types;
+                    if (!newType.isValueCompatibleWith(oldTypes.get(name.position)))
+                        throw new ConfigurationException(String.format("Cannot change %s from type %s to type %s: types are incompatible.",
+                                                                       columnName,
+                                                                       oldTypes.get(name.position).asCQL3Type(),
+                                                                       validator));
+
+                    List<AbstractType<?>> newTypes = new ArrayList<AbstractType<?>>(oldTypes);
                     newTypes.set(name.position, newType);
 

[2/2] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-10-21 Thread slebresne
Merge branch 'cassandra-1.2' into cassandra-2.0

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/config/ColumnDefinition.java
src/java/org/apache/cassandra/db/marshal/CompositeType.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5c542623
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5c542623
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5c542623

Branch: refs/heads/cassandra-2.0
Commit: 5c54262336430ee2fdca2eb5df6cdff525abf78b
Parents: edd1226 189a607
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Mon Oct 21 10:15:59 2013 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Mon Oct 21 10:15:59 2013 +0200

--
 CHANGES.txt |  1 +
 .../cassandra/config/ColumnDefinition.java  |  4 +--
 .../cql3/statements/AlterTableStatement.java| 35 
 .../cassandra/db/marshal/AbstractType.java  | 14 +++-
 .../apache/cassandra/db/marshal/BytesType.java  |  7 
 .../cassandra/db/marshal/CompositeType.java | 24 ++
 6 files changed, 76 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c542623/CHANGES.txt
--
diff --cc CHANGES.txt
index 351c625,117a200..02bbc1d
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -60,43 -36,9 +60,44 @@@ Merged from 1.2
   * Fix validation of empty column names for compact tables (CASSANDRA-6152)
   * Skip replaying mutations that pass CRC but fail to deserialize 
(CASSANDRA-6183)
   * Rework token replacement to use replace_address (CASSANDRA-5916)
++ * Fix altering column types (CASSANDRA-6185)
  
  
 -1.2.10
 +2.0.1
 + * Fix bug that could allow reading deleted data temporarily (CASSANDRA-6025)
 + * Improve memory use defaults (CASSANDRA-5069)
 + * Make ThriftServer more easlly extensible (CASSANDRA-6058)
 + * Remove Hadoop dependency from ITransportFactory (CASSANDRA-6062)
 + * add file_cache_size_in_mb setting (CASSANDRA-5661)
 + * Improve error message when yaml contains invalid properties 
(CASSANDRA-5958)
 + * Improve leveled compaction's ability to find non-overlapping L0 compactions
 +   to work on concurrently (CASSANDRA-5921)
 + * Notify indexer of columns shadowed by range tombstones (CASSANDRA-5614)
 + * Log Merkle tree stats (CASSANDRA-2698)
 + * Switch from crc32 to adler32 for compressed sstable checksums 
(CASSANDRA-5862)
 + * Improve offheap memcpy performance (CASSANDRA-5884)
 + * Use a range aware scanner for cleanup (CASSANDRA-2524)
 + * Cleanup doesn't need to inspect sstables that contain only local data
 +   (CASSANDRA-5722)
 + * Add ability for CQL3 to list partition keys (CASSANDRA-4536)
 + * Improve native protocol serialization (CASSANDRA-5664)
 + * Upgrade Thrift to 0.9.1 (CASSANDRA-5923)
 + * Require superuser status for adding triggers (CASSANDRA-5963)
 + * Make standalone scrubber handle old and new style leveled manifest
 +   (CASSANDRA-6005)
 + * Fix paxos bugs (CASSANDRA-6012, 6013, 6023)
 + * Fix paged ranges with multiple replicas (CASSANDRA-6004)
 + * Fix potential AssertionError during tracing (CASSANDRA-6041)
 + * Fix NPE in sstablesplit (CASSANDRA-6027)
 + * Migrate pre-2.0 key/value/column aliases to system.schema_columns
 +   (CASSANDRA-6009)
 + * Paging filter empty rows too agressively (CASSANDRA-6040)
 + * Support variadic parameters for IN clauses (CASSANDRA-4210)
 + * cqlsh: return the result of CAS writes (CASSANDRA-5796)
 + * Fix validation of IN clauses with 2ndary indexes (CASSANDRA-6050)
 + * Support named bind variables in CQL (CASSANDRA-6033)
 +Merged from 1.2:
 + * Allow cache-keys-to-save to be set at runtime (CASSANDRA-5980)
   * Avoid second-guessing out-of-space state (CASSANDRA-5605)
   * Tuning knobs for dealing with large blobs and many CFs (CASSANDRA-5982)
   * (Hadoop) Fix CQLRW for thrift tables (CASSANDRA-6002)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c542623/src/java/org/apache/cassandra/config/ColumnDefinition.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c542623/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c542623/src/java/org/apache/cassandra/db/marshal/AbstractType.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c542623/src/java/org/apache/cassandra/db/marshal/BytesType.java
--


[3/3] git commit: Merge branch 'cassandra-2.0' into trunk

2013-10-21 Thread slebresne
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/477191b2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/477191b2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/477191b2

Branch: refs/heads/trunk
Commit: 477191b272942dba7f4a9e0fcd9fc835e6bef9b7
Parents: d1f8c6f 5c54262
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Mon Oct 21 10:16:48 2013 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Mon Oct 21 10:16:48 2013 +0200

--
 CHANGES.txt |  1 +
 .../cassandra/config/ColumnDefinition.java  |  4 +--
 .../cql3/statements/AlterTableStatement.java| 35 
 .../cassandra/db/marshal/AbstractType.java  | 14 +++-
 .../apache/cassandra/db/marshal/BytesType.java  |  7 
 .../cassandra/db/marshal/CompositeType.java | 24 ++
 6 files changed, 76 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/477191b2/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/477191b2/src/java/org/apache/cassandra/config/ColumnDefinition.java
--



[2/3] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-10-21 Thread slebresne
Merge branch 'cassandra-1.2' into cassandra-2.0

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/config/ColumnDefinition.java
src/java/org/apache/cassandra/db/marshal/CompositeType.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5c542623
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5c542623
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5c542623

Branch: refs/heads/trunk
Commit: 5c54262336430ee2fdca2eb5df6cdff525abf78b
Parents: edd1226 189a607
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Mon Oct 21 10:15:59 2013 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Mon Oct 21 10:15:59 2013 +0200

--
 CHANGES.txt |  1 +
 .../cassandra/config/ColumnDefinition.java  |  4 +--
 .../cql3/statements/AlterTableStatement.java| 35 
 .../cassandra/db/marshal/AbstractType.java  | 14 +++-
 .../apache/cassandra/db/marshal/BytesType.java  |  7 
 .../cassandra/db/marshal/CompositeType.java | 24 ++
 6 files changed, 76 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c542623/CHANGES.txt
--
diff --cc CHANGES.txt
index 351c625,117a200..02bbc1d
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -60,43 -36,9 +60,44 @@@ Merged from 1.2
   * Fix validation of empty column names for compact tables (CASSANDRA-6152)
   * Skip replaying mutations that pass CRC but fail to deserialize 
(CASSANDRA-6183)
   * Rework token replacement to use replace_address (CASSANDRA-5916)
++ * Fix altering column types (CASSANDRA-6185)
  
  
 -1.2.10
 +2.0.1
 + * Fix bug that could allow reading deleted data temporarily (CASSANDRA-6025)
 + * Improve memory use defaults (CASSANDRA-5069)
 + * Make ThriftServer more easlly extensible (CASSANDRA-6058)
 + * Remove Hadoop dependency from ITransportFactory (CASSANDRA-6062)
 + * add file_cache_size_in_mb setting (CASSANDRA-5661)
 + * Improve error message when yaml contains invalid properties 
(CASSANDRA-5958)
 + * Improve leveled compaction's ability to find non-overlapping L0 compactions
 +   to work on concurrently (CASSANDRA-5921)
 + * Notify indexer of columns shadowed by range tombstones (CASSANDRA-5614)
 + * Log Merkle tree stats (CASSANDRA-2698)
 + * Switch from crc32 to adler32 for compressed sstable checksums 
(CASSANDRA-5862)
 + * Improve offheap memcpy performance (CASSANDRA-5884)
 + * Use a range aware scanner for cleanup (CASSANDRA-2524)
 + * Cleanup doesn't need to inspect sstables that contain only local data
 +   (CASSANDRA-5722)
 + * Add ability for CQL3 to list partition keys (CASSANDRA-4536)
 + * Improve native protocol serialization (CASSANDRA-5664)
 + * Upgrade Thrift to 0.9.1 (CASSANDRA-5923)
 + * Require superuser status for adding triggers (CASSANDRA-5963)
 + * Make standalone scrubber handle old and new style leveled manifest
 +   (CASSANDRA-6005)
 + * Fix paxos bugs (CASSANDRA-6012, 6013, 6023)
 + * Fix paged ranges with multiple replicas (CASSANDRA-6004)
 + * Fix potential AssertionError during tracing (CASSANDRA-6041)
 + * Fix NPE in sstablesplit (CASSANDRA-6027)
 + * Migrate pre-2.0 key/value/column aliases to system.schema_columns
 +   (CASSANDRA-6009)
 + * Paging filter empty rows too agressively (CASSANDRA-6040)
 + * Support variadic parameters for IN clauses (CASSANDRA-4210)
 + * cqlsh: return the result of CAS writes (CASSANDRA-5796)
 + * Fix validation of IN clauses with 2ndary indexes (CASSANDRA-6050)
 + * Support named bind variables in CQL (CASSANDRA-6033)
 +Merged from 1.2:
 + * Allow cache-keys-to-save to be set at runtime (CASSANDRA-5980)
   * Avoid second-guessing out-of-space state (CASSANDRA-5605)
   * Tuning knobs for dealing with large blobs and many CFs (CASSANDRA-5982)
   * (Hadoop) Fix CQLRW for thrift tables (CASSANDRA-6002)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c542623/src/java/org/apache/cassandra/config/ColumnDefinition.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c542623/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c542623/src/java/org/apache/cassandra/db/marshal/AbstractType.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c542623/src/java/org/apache/cassandra/db/marshal/BytesType.java
--


[1/3] git commit: Fix altering column types

2013-10-21 Thread slebresne
Updated Branches:
  refs/heads/trunk d1f8c6f7a -> 477191b27


Fix altering column types

patch by slebresne; reviewed by iamaleksey for CASSANDRA-6185


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/189a6072
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/189a6072
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/189a6072

Branch: refs/heads/trunk
Commit: 189a60728db1e01bfeaa664b41431701fd684f5f
Parents: df188cc
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Mon Oct 21 10:10:54 2013 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Mon Oct 21 10:10:54 2013 +0200

--
 CHANGES.txt |  1 +
 .../cassandra/config/ColumnDefinition.java  |  3 +-
 .../cql3/statements/AlterTableStatement.java| 35 
 .../cassandra/db/marshal/AbstractType.java  | 14 +++-
 .../apache/cassandra/db/marshal/BytesType.java  |  7 
 .../cassandra/db/marshal/CompositeType.java | 24 ++
 6 files changed, 76 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/189a6072/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 70bb919..117a200 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,5 +1,6 @@
 1.2.12
  * (Hadoop) Require CFRR batchSize to be at least 2 (CASSANDRA-6114)
+ * Fix altering column types (CASSANDRA-6185)
 
 
 1.2.11

http://git-wip-us.apache.org/repos/asf/cassandra/blob/189a6072/src/java/org/apache/cassandra/config/ColumnDefinition.java
--
diff --git a/src/java/org/apache/cassandra/config/ColumnDefinition.java 
b/src/java/org/apache/cassandra/config/ColumnDefinition.java
index db5f7ed..807f008 100644
--- a/src/java/org/apache/cassandra/config/ColumnDefinition.java
+++ b/src/java/org/apache/cassandra/config/ColumnDefinition.java
@@ -180,8 +180,9 @@ public class ColumnDefinition
         if (getIndexType() != null && def.getIndexType() != null)
         {
             // If an index is set (and not drop by this update), the validator shouldn't be change to a non-compatible one
+            // (and we want true comparator compatibility, not just value one, since the validator is used by LocalPartitioner to order index rows)
             if (!def.getValidator().isCompatibleWith(getValidator()))
-                throw new ConfigurationException(String.format("Cannot modify validator to a non-compatible one for column %s since an index is set", comparator.getString(name)));
+                throw new ConfigurationException(String.format("Cannot modify validator to a non-order-compatible one for column %s since an index is set", comparator.getString(name)));
 
 assert getIndexName() != null;
 if (!getIndexName().equals(def.getIndexName()))

http://git-wip-us.apache.org/repos/asf/cassandra/blob/189a6072/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
index a247a4d..36ec56d 100644
--- a/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/AlterTableStatement.java
@@ -134,24 +134,45 @@ public class AlterTableStatement extends SchemaAlteringStatement
                     throw new InvalidRequestException(String.format("counter type is not supported for PRIMARY KEY part %s", columnName));
                 if (cfDef.hasCompositeKey)
                 {
-                    List<AbstractType<?>> newTypes = new ArrayList<AbstractType<?>>(((CompositeType) cfm.getKeyValidator()).types);
+                    List<AbstractType<?>> oldTypes = ((CompositeType) cfm.getKeyValidator()).types;
+                    if (!newType.isValueCompatibleWith(oldTypes.get(name.position)))
+                        throw new ConfigurationException(String.format("Cannot change %s from type %s to type %s: types are incompatible.",
+                                                                       columnName,
+                                                                       oldTypes.get(name.position).asCQL3Type(),
+                                                                       validator));
+
+                    List<AbstractType<?>> newTypes = new ArrayList<AbstractType<?>>(oldTypes);
                     newTypes.set(name.position, newType);
 

[jira] [Updated] (CASSANDRA-3578) Multithreaded commitlog

2013-10-21 Thread Vijay (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3578:
-

Attachment: ComitlogStress.java

Micro benchmark code attached; it tries to update the commit log as fast as 
possible (choosing a small mutation to avoid active segment starvation, we are 
still creating ~1 CL segment per second).

It was creating a commit log segment per second, so I'm not sure if this is a 
valid comparison to the real world at this time. But the good part is that the 
patch consumes less memory and has fewer swings. 

http://pastebin.com/WeJ0QL8p

 Multithreaded commitlog
 ---

 Key: CASSANDRA-3578
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3578
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jonathan Ellis
Assignee: Vijay
Priority: Minor
  Labels: performance
 Attachments: 0001-CASSANDRA-3578.patch, ComitlogStress.java, 
 parallel_commit_log_2.patch


 Brian Aker pointed out a while ago that allowing multiple threads to modify 
 the commitlog simultaneously (reserving space for each with a CAS first, the 
 way we do in the SlabAllocator.Region.allocate) can improve performance, 
 since you're not bottlenecking on a single thread to do all the copying and 
 CRC computation.
 Now that we use mmap'd CommitLog segments (CASSANDRA-3411) this becomes 
 doable.
 (moved from CASSANDRA-622, which was getting a bit muddled.)
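
As a rough illustration of the CAS-reservation idea described above (a sketch only, loosely modelled on SlabAllocator.Region.allocate rather than the actual commitlog patch):

{noformat}
import java.util.concurrent.atomic.AtomicInteger;

// Many writer threads claim disjoint slices of a shared segment with a CAS,
// then copy and checksum their mutation into their own slice in parallel,
// instead of funnelling every write through a single appender thread.
public class CasRegion
{
    private final byte[] buffer;
    private final AtomicInteger nextOffset = new AtomicInteger(0);

    public CasRegion(int size)
    {
        this.buffer = new byte[size];
    }

    /** Returns the start offset of the reserved slice, or -1 if the segment is full. */
    public int allocate(int size)
    {
        while (true)
        {
            int old = nextOffset.get();
            if (old + size > buffer.length)
                return -1; // caller moves on to the next segment
            if (nextOffset.compareAndSet(old, old + size))
                return old; // this thread now exclusively owns [old, old + size)
        }
    }
}
{noformat}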



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6139) Cqlsh shouldn't display empty value alias

2013-10-21 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-6139:
-

Attachment: 6139.txt

 Cqlsh shouldn't display empty value alias
 ---

 Key: CASSANDRA-6139
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6139
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Aleksey Yeschenko
Priority: Minor
 Fix For: 2.0.2

 Attachments: 6139.txt


 When someone creates:
 {noformat}
 CREATE TABLE foo (
k int,
v int,
PRIMARY KEY (k, v)
 ) WITH COMPACT STORAGE
 {noformat}
 then we internally create a value alias (1.2)/compact value definition 
 (2.0) with an empty name. It seems that cqlsh doesn't recognize that fact and 
 displays it as:
 {noformat}
 cqlsh:ks> DESC TABLE foo;
 CREATE TABLE foo (
   k int,
   v int,
blob,
   PRIMARY KEY (k, v)
 ) WITH COMPACT STORAGE AND ...
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6211) NPE in system.log

2013-10-21 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13800496#comment-13800496
 ] 

Benedict commented on CASSANDRA-6211:
-

I've seen this before - especially with NPEs, the VM can optimise away the 
stack trace in certain cases (helpful, right?) - I've seen it be a particular 
problem in small, highly parallelized workloads.

Try running with -XX:-OmitStackTraceInFastThrow to see if it helps
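
A quick way to see the effect (a hypothetical standalone demo, not Cassandra code; whether and when the trace disappears depends on JIT warmup):

{noformat}
public class FastThrowDemo
{
    private static Object maybeNull()
    {
        return null;
    }

    public static void main(String[] args)
    {
        // Run as-is: once the loop gets hot, HotSpot may replace the NPE with a
        // preallocated exception that has no stack trace (OmitStackTraceInFastThrow
        // is on by default). Run with -XX:-OmitStackTraceInFastThrow to keep traces.
        for (int i = 0; i < 100000; i++)
        {
            try
            {
                maybeNull().toString(); // always throws NullPointerException
            }
            catch (NullPointerException e)
            {
                if (i % 20000 == 0)
                    System.out.println(i + ": stack depth = " + e.getStackTrace().length);
            }
        }
    }
}
{noformat}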


 NPE in system.log
 -

 Key: CASSANDRA-6211
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6211
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: java version 1.7.0_25
 Java(TM) SE Runtime Environment (build 1.7.0_25-b15)
 Java HotSpot(TM) 64-Bit Server VM (build 23.25-b01, mixed mode)
 Linux hostname 2.6.32-279.el6.x86_64 #1 SMP Thu Jun 21 15:00:18 EDT 2012 
 x86_64 x86_64 x86_64 GNU/Linux
Reporter: Mikhail Mazursky
  Labels: npe, nullpointerexception

 I wrote a stresstest to test C* and my code that uses CAS heavily. I see 
 strange exception messages in logs:
 {noformat}
 ERROR [MutationStage:320] 2013-10-17 13:59:10,710 CassandraDaemon.java (line 
 185) Exception in thread Thread[MutationStage:320,5,main]
 java.lang.NullPointerException
 ERROR [MutationStage:328] 2013-10-17 13:59:10,718 CassandraDaemon.java (line 
 185) Exception in thread Thread[MutationStage:328,5,main]
 java.lang.NullPointerException
 ERROR [MutationStage:327] 2013-10-17 13:59:10,732 CassandraDaemon.java (line 
 185) Exception in thread Thread[MutationStage:327,5,main]
 java.lang.NullPointerException
 ERROR [MutationStage:325] 2013-10-17 13:59:10,750 CassandraDaemon.java (line 
 185) Exception in thread Thread[MutationStage:325,5,main]
 java.lang.NullPointerException
 ERROR [MutationStage:326] 2013-10-17 13:59:10,762 CassandraDaemon.java (line 
 185) Exception in thread Thread[MutationStage:326,5,main]
 java.lang.NullPointerException
 ERROR [MutationStage:330] 2013-10-17 13:59:10,768 CassandraDaemon.java (line 
 185) Exception in thread Thread[MutationStage:330,5,main]
 java.lang.NullPointerException
 ERROR [MutationStage:331] 2013-10-17 13:59:10,775 CassandraDaemon.java (line 
 185) Exception in thread Thread[MutationStage:331,5,main]
 java.lang.NullPointerException
 ERROR [MutationStage:334] 2013-10-17 13:59:10,789 CassandraDaemon.java (line 
 185) Exception in thread Thread[MutationStage:334,5,main]
 java.lang.NullPointerException
 ERROR [MutationStage:329] 2013-10-17 13:59:10,803 CassandraDaemon.java (line 
 185) Exception in thread Thread[MutationStage:329,5,main]
 java.lang.NullPointerException
 ERROR [MutationStage:335] 2013-10-17 13:59:10,812 CassandraDaemon.java (line 
 185) Exception in thread Thread[MutationStage:335,5,main]
 java.lang.NullPointerException
 ERROR [MutationStage:333] 2013-10-17 13:59:10,826 CassandraDaemon.java (line 
 185) Exception in thread Thread[MutationStage:333,5,main]
 java.lang.NullPointerException
 ERROR [MutationStage:332] 2013-10-17 13:59:10,834 CassandraDaemon.java (line 
 185) Exception in thread Thread[MutationStage:332,5,main]
 java.lang.NullPointerException
 ERROR [MutationStage:337] 2013-10-17 13:59:10,842 CassandraDaemon.java (line 
 185) Exception in thread Thread[MutationStage:337,5,main]
 java.lang.NullPointerException
 ERROR [MutationStage:336] 2013-10-17 13:59:10,859 CassandraDaemon.java (line 
 185) Exception in thread Thread[MutationStage:336,5,main]
 java.lang.NullPointerException
 ERROR [MutationStage:338] 2013-10-17 13:59:10,870 CassandraDaemon.java (line 
 185) Exception in thread Thread[MutationStage:338,5,main]
 java.lang.NullPointerException
 ERROR [MutationStage:339] 2013-10-17 13:59:10,884 CassandraDaemon.java (line 
 185) Exception in thread Thread[MutationStage:339,5,main]
 java.lang.NullPointerException
 ERROR [MutationStage:341] 2013-10-17 13:59:10,894 CassandraDaemon.java (line 
 185) Exception in thread Thread[MutationStage:341,5,main]
 java.lang.NullPointerException
 ERROR [MutationStage:340] 2013-10-17 13:59:10,910 CassandraDaemon.java (line 
 185) Exception in thread Thread[MutationStage:340,5,main]
 java.lang.NullPointerException
 ERROR [MutationStage:344] 2013-10-17 13:59:10,920 CassandraDaemon.java (line 
 185) Exception in thread Thread[MutationStage:344,5,main]
 java.lang.NullPointerException
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6205) sstableloader broken in 2.0 HEAD

2013-10-21 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13800498#comment-13800498
 ] 

Sylvain Lebresne commented on CASSANDRA-6205:
-

+1

 sstableloader broken in 2.0 HEAD
 

 Key: CASSANDRA-6205
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6205
 Project: Cassandra
  Issue Type: Bug
Reporter: Nick Bailey
Assignee: Tyler Hobbs
 Fix For: 2.0.2

 Attachments: 6205.patch


 The code for tracking sstable coldness also executes when running 
 sstableloader, which causes problems.
 {noformat}
 Exception in thread "main" java.lang.RuntimeException: Error validating 
 SELECT * FROM sstable_activity WHERE keyspace_name='test_backup_restore' and 
 columnfamily_name='cf0' and generation=1
 at 
 org.apache.cassandra.cql3.QueryProcessor.processInternal(QueryProcessor.java:190)
 at 
 org.apache.cassandra.db.SystemKeyspace.getSSTableReadMeter(SystemKeyspace.java:907)
 at 
 org.apache.cassandra.io.sstable.SSTableReader.<init>(SSTableReader.java:337)
 at 
 org.apache.cassandra.io.sstable.SSTableReader.openForBatch(SSTableReader.java:160)
 at 
 org.apache.cassandra.io.sstable.SSTableLoader$1.accept(SSTableLoader.java:112)
 at java.io.File.list(File.java:1087)
 at 
 org.apache.cassandra.io.sstable.SSTableLoader.openSSTables(SSTableLoader.java:73)
 at 
 org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:155)
 at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:68)
 Caused by: org.apache.cassandra.db.KeyspaceNotDefinedException: Keyspace 
 system does not exist
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


git commit: ninja NodeProbe compactionHistory() -> getCompactionHistory()

2013-10-21 Thread aleksey
Updated Branches:
  refs/heads/cassandra-2.0 5c5426233 -> b365edc46


ninja NodeProbe compactionHistory() -> getCompactionHistory()


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b365edc4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b365edc4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b365edc4

Branch: refs/heads/cassandra-2.0
Commit: b365edc46266925a6aec028345f6a47d415a1f0d
Parents: 5c54262
Author: Aleksey Yeschenko alek...@apache.org
Authored: Mon Oct 21 17:38:10 2013 +0800
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Mon Oct 21 17:38:10 2013 +0800

--
 src/java/org/apache/cassandra/tools/NodeCmd.java   | 2 +-
 src/java/org/apache/cassandra/tools/NodeProbe.java | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b365edc4/src/java/org/apache/cassandra/tools/NodeCmd.java
--
diff --git a/src/java/org/apache/cassandra/tools/NodeCmd.java 
b/src/java/org/apache/cassandra/tools/NodeCmd.java
index 57de7d0..034ff29 100644
--- a/src/java/org/apache/cassandra/tools/NodeCmd.java
+++ b/src/java/org/apache/cassandra/tools/NodeCmd.java
@@ -1329,7 +1329,7 @@ public class NodeCmd
 {
 out.println("Compaction History: ");
 
-TabularData tabularData = this.probe.compactionHistory();
+TabularData tabularData = this.probe.getCompactionHistory();
 if (tabularData.isEmpty())
 {
 out.printf("There is no compaction history");

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b365edc4/src/java/org/apache/cassandra/tools/NodeProbe.java
--
diff --git a/src/java/org/apache/cassandra/tools/NodeProbe.java 
b/src/java/org/apache/cassandra/tools/NodeProbe.java
index 2610b2f..0008325 100644
--- a/src/java/org/apache/cassandra/tools/NodeProbe.java
+++ b/src/java/org/apache/cassandra/tools/NodeProbe.java
@@ -871,7 +871,7 @@ public class NodeProbe
 return spProxy.getReadRepairRepairedBackground();
 }
 
-public TabularData compactionHistory()
+public TabularData getCompactionHistory()
 {
 return compactionProxy.getCompactionHistory();
 }



[1/2] git commit: ninja NodeProbe compactionHistory() -> getCompactionHistory()

2013-10-21 Thread aleksey
Updated Branches:
  refs/heads/trunk 477191b27 -> 66957ece2


ninja NodeProbe compactionHistory() -> getCompactionHistory()


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b365edc4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b365edc4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b365edc4

Branch: refs/heads/trunk
Commit: b365edc46266925a6aec028345f6a47d415a1f0d
Parents: 5c54262
Author: Aleksey Yeschenko alek...@apache.org
Authored: Mon Oct 21 17:38:10 2013 +0800
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Mon Oct 21 17:38:10 2013 +0800

--
 src/java/org/apache/cassandra/tools/NodeCmd.java   | 2 +-
 src/java/org/apache/cassandra/tools/NodeProbe.java | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b365edc4/src/java/org/apache/cassandra/tools/NodeCmd.java
--
diff --git a/src/java/org/apache/cassandra/tools/NodeCmd.java 
b/src/java/org/apache/cassandra/tools/NodeCmd.java
index 57de7d0..034ff29 100644
--- a/src/java/org/apache/cassandra/tools/NodeCmd.java
+++ b/src/java/org/apache/cassandra/tools/NodeCmd.java
@@ -1329,7 +1329,7 @@ public class NodeCmd
 {
 out.println("Compaction History: ");
 
-TabularData tabularData = this.probe.compactionHistory();
+TabularData tabularData = this.probe.getCompactionHistory();
 if (tabularData.isEmpty())
 {
 out.printf("There is no compaction history");

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b365edc4/src/java/org/apache/cassandra/tools/NodeProbe.java
--
diff --git a/src/java/org/apache/cassandra/tools/NodeProbe.java 
b/src/java/org/apache/cassandra/tools/NodeProbe.java
index 2610b2f..0008325 100644
--- a/src/java/org/apache/cassandra/tools/NodeProbe.java
+++ b/src/java/org/apache/cassandra/tools/NodeProbe.java
@@ -871,7 +871,7 @@ public class NodeProbe
 return spProxy.getReadRepairRepairedBackground();
 }
 
-public TabularData compactionHistory()
+public TabularData getCompactionHistory()
 {
 return compactionProxy.getCompactionHistory();
 }



[jira] [Commented] (CASSANDRA-6106) QueryState.getTimestamp() & FBUtilities.timestampMicros() reads current timestamp with System.currentTimeMillis() * 1000 instead of System.nanoTime() / 1000

2013-10-21 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13800513#comment-13800513
 ] 

Sylvain Lebresne commented on CASSANDRA-6106:
-

bq. since 1.2.11 is going out without this issue resolved

I'm not against targeting 2.0 but this won't go into the 1.2 branch as this is 
really just an improvement.

bq. What about my alternative of using random data for the lower bits?

I don't think we want to do that, because at the very least we want to keep the 
behavior that timestamps generated for the same client connection are always 
strictly increasing, and it seems to me that randomizing is not really 
compatible with that.

bq. nanoTime, perhaps with periodic recalibration

It's an option, though I'll admit that it feels a bit like a hack. I'm not totally 
opposed, I guess, but like Jonathan, I think I'd be fine with using gettimeofday 
and leaving platforms that don't support it with the status quo.

Though in the longer run, I'm starting to be convinced that we should slowly 
move back to client-side timestamps by default (CASSANDRA-6178), so it's unclear 
to me how much effort is worth putting into this (given that, at the end of the 
day, this won't ensure timestamp uniqueness anyway and you'd still have to be 
aware that on a timestamp tie, the resolution is based on the value).

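For reference, a minimal, self-contained sketch (not the project's actual code) of the behavior being discussed: a per-connection microsecond clock that stays strictly increasing even though System.currentTimeMillis() only ticks once per millisecond. The class and method names are illustrative only.

{code}
import java.util.concurrent.atomic.AtomicLong;

// Sketch only: remembers the last value handed out and never returns a smaller one,
// so timestamps from the same generator are strictly increasing in microseconds.
public final class MonotonicMicros
{
    private final AtomicLong last = new AtomicLong();

    public long next()
    {
        while (true)
        {
            long nowMicros = System.currentTimeMillis() * 1000; // microsecond units, ms resolution
            long prev = last.get();
            long candidate = Math.max(nowMicros, prev + 1);     // bump past ties and clock steps back
            if (last.compareAndSet(prev, candidate))
                return candidate;
        }
    }
}
{code}
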

 QueryState.getTimestamp() & FBUtilities.timestampMicros() reads current 
 timestamp with System.currentTimeMillis() * 1000 instead of System.nanoTime() 
 / 1000
 

 Key: CASSANDRA-6106
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6106
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: DSE Cassandra 3.1, but also HEAD
Reporter: Christopher Smith
Priority: Minor
  Labels: collision, conflict, timestamp
 Attachments: microtimstamp.patch, microtimstamp_random.patch, 
 microtimstamp_random_rev2.patch


 I noticed this blog post: http://aphyr.com/posts/294-call-me-maybe-cassandra 
 mentioned issues with millisecond rounding in timestamps and was able to 
 reproduce the issue. If I specify a timestamp in a mutating query, I get 
 microsecond precision, but if I don't, I get timestamps rounded to the 
 nearest millisecond, at least for my first query on a given connection, which 
 substantially increases the possibilities of collision.
 I believe I found the offending code, though I am by no means sure this is 
 comprehensive. I think we probably need a fairly comprehensive replacement of 
 all uses of System.currentTimeMillis() with System.nanoTime().



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6220) Unable to select multiple entries using In clause on clustering part of compound key

2013-10-21 Thread Ashot Golovenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashot Golovenko updated CASSANDRA-6220:
---

Attachment: inserts.zip

I've generated some insert scripts... Well, the bug disappeared in the meantime, but 
it'll be back =)

 Unable to select multiple entries using In clause on clustering part of 
 compound key
 

 Key: CASSANDRA-6220
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6220
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Ashot Golovenko
 Attachments: inserts.zip


 I have the following table:
 CREATE TABLE rating (
 id bigint,
 mid int,
 hid int,
 r double,
 PRIMARY KEY ((id, mid), hid));
 And I get really, really strange result sets on the following queries:
 cqlsh:bm> SELECT hid, r FROM rating WHERE id = 755349113 and mid = 201310 and hid = 201329320;
  hid       | r
 -----------+--------
  201329320 | 45.476
 (1 rows)
 cqlsh:bm> SELECT hid, r FROM rating WHERE id = 755349113 and mid = 201310 and hid = 201329220;
  hid       | r
 -----------+-------
  201329220 | 53.62
 (1 rows)
 cqlsh:bm> SELECT hid, r FROM rating WHERE id = 755349113 and mid = 201310 and hid in (201329320, 201329220);
  hid       | r
 -----------+--------
  201329320 | 45.476
 (1 rows)  -- WRONG - should be two records
 As you can see, although both records exist, I'm not able to fetch all of them 
 using an IN clause. For now I have to loop over my roughly 30 requests one by one, 
 which I find highly inefficient given that I'm querying physically the same row. 
 What's more, it doesn't happen all the time! For different id values I 
 sometimes get the correct result set.
 Ideally I'd like the following select to work:
 SELECT hid, r FROM rating WHERE id = 755349113 and mid in ? and hid in ?;
 Which doesn't work either.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-3578) Multithreaded commitlog

2013-10-21 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13800570#comment-13800570
 ] 

Jonathan Ellis commented on CASSANDRA-3578:
---

I take it that max_mb, allocated_mb, and free_mb are heap numbers?  I'm not sure what 
to make of those, really; e.g. we could have a higher free_mb for one because it's 
CMSing constantly.  Suggest measuring pause time from the JVM GC log instead.

 Multithreaded commitlog
 ---

 Key: CASSANDRA-3578
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3578
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jonathan Ellis
Assignee: Vijay
Priority: Minor
  Labels: performance
 Attachments: 0001-CASSANDRA-3578.patch, ComitlogStress.java, 
 parallel_commit_log_2.patch


 Brian Aker pointed out a while ago that allowing multiple threads to modify 
 the commitlog simultaneously (reserving space for each with a CAS first, the 
 way we do in the SlabAllocator.Region.allocate) can improve performance, 
 since you're not bottlenecking on a single thread to do all the copying and 
 CRC computation.
 Now that we use mmap'd CommitLog segments (CASSANDRA-3411) this becomes 
 doable.
 (moved from CASSANDRA-622, which was getting a bit muddled.)
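
A minimal, self-contained sketch of the CAS-reservation idea described above (illustrative only, not Cassandra's implementation): each writer thread claims a disjoint range of the segment with compareAndSet and then copies its data into that range without holding a lock.

{code}
import java.nio.ByteBuffer;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch only: 'buffer' stands in for an mmap'd commitlog segment.
final class SegmentSketch
{
    private final ByteBuffer buffer;
    private final AtomicInteger position = new AtomicInteger(0);

    SegmentSketch(int size) { this.buffer = ByteBuffer.allocate(size); }

    // Reserve 'size' bytes; returns the start offset, or -1 if the segment is full
    // (the caller would then roll over to a new segment).
    int reserve(int size)
    {
        while (true)
        {
            int current = position.get();
            if (current + size > buffer.capacity())
                return -1;
            if (position.compareAndSet(current, current + size))
                return current; // this thread now owns [current, current + size)
        }
    }

    // Copy the serialized mutation into the reserved range; the CRC would be computed here too.
    void write(int offset, byte[] serializedMutation)
    {
        ByteBuffer slice = buffer.duplicate(); // independent position per writer thread
        slice.position(offset);
        slice.put(serializedMutation);
    }
}
{code}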



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6212) TimestampType doesn't support pre-epoch long

2013-10-21 Thread Simon Hopkin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13800619#comment-13800619
 ] 

Simon Hopkin commented on CASSANDRA-6212:
-

Thanks for the quick turnaround on this issue guys.

 TimestampType doesn't support pre-epoch long
 

 Key: CASSANDRA-6212
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6212
 Project: Cassandra
  Issue Type: Bug
  Components: API, Core
Reporter: Simon Hopkin
Assignee: Mikhail Stepura
Priority: Minor
 Fix For: 2.0.2

 Attachments: cassandra-2.0-6212.patch


 org.apache.cassandra.db.marshal.TimestampType.dateStringToTimestamp() 
 contains a regular expression that checks to see if the String argument 
 contains a number.  If so it parses it as a long timestamp.  However 
 pre-epoch timestamps are negative and the code doesn't account for this so it 
 tries to parse it as a formatted Date.  A tweak to the regular expression in 
 TimestampType.dateStringToTimestamp() would solve this issue.
 I could use formatted date strings instead, but the TimestampType date parser 
 uses ISO8601 patterns which would cause the timestamp to be rounded to the 
 nearest second.
 Currently I get the following exception message:
 unable to coerce '-8640' to a  formatted date (long)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6182) Unable to modify column_metadata via thrift

2013-10-21 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-6182:


Attachment: 6182.txt

Attaching patch. This is a regression from CASSANDRA-5579 for column 
definitions where the comparator is not UTF8Type (as is the case in this 
example).

 Unable to modify column_metadata via thrift
 ---

 Key: CASSANDRA-6182
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6182
 Project: Cassandra
  Issue Type: Bug
Reporter: Nick Bailey
Assignee: Sylvain Lebresne
 Fix For: 2.0.2

 Attachments: 6182.txt


 Reproduced on 2.0 HEAD
 {noformat}
 [default@unknown] use opscenter;
 Authenticated to keyspace: OpsCenter
 [default@OpsCenter] create column family test with column_metadata = 
 [{column_name: '', validation_class: LongType}];
 637fffa1-a10f-3d89-8be6-8a316af05dd2
 [default@OpsCenter] update column family test with column_metadata=[];
 e49e435b-ba2a-3a08-8af0-32b897b872b8
 [default@OpsCenter] show schema;
 other entries removed
 create column family test
   with column_type = 'Standard'
   and comparator = 'BytesType'
   and default_validation_class = 'BytesType'
   and key_validation_class = 'BytesType'
   and read_repair_chance = 0.1
   and dclocal_read_repair_chance = 0.0
   and populate_io_cache_on_flush = false
   and gc_grace = 864000
   and min_compaction_threshold = 4
   and max_compaction_threshold = 32
   and replicate_on_write = true
   and compaction_strategy = 
 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'
   and caching = 'KEYS_ONLY'
   and default_time_to_live = 0
   and speculative_retry = 'NONE'
   and column_metadata = [
 {column_name : '',
 validation_class : LongType}]
   and compression_options = {'sstable_compression' : 
 'org.apache.cassandra.io.compress.LZ4Compressor'}
   and index_interval = 128;
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6106) QueryState.getTimestamp() & FBUtilities.timestampMicros() reads current timestamp with System.currentTimeMillis() * 1000 instead of System.nanoTime() / 1000

2013-10-21 Thread Christopher Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13800702#comment-13800702
 ] 

Christopher Smith commented on CASSANDRA-6106:
--

The random patch still ensures that a given server's generated timestamps are 
strictly increasing.

 QueryState.getTimestamp() & FBUtilities.timestampMicros() reads current 
 timestamp with System.currentTimeMillis() * 1000 instead of System.nanoTime() 
 / 1000
 

 Key: CASSANDRA-6106
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6106
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: DSE Cassandra 3.1, but also HEAD
Reporter: Christopher Smith
Priority: Minor
  Labels: collision, conflict, timestamp
 Attachments: microtimstamp.patch, microtimstamp_random.patch, 
 microtimstamp_random_rev2.patch


 I noticed this blog post: http://aphyr.com/posts/294-call-me-maybe-cassandra 
 mentioned issues with millisecond rounding in timestamps and was able to 
 reproduce the issue. If I specify a timestamp in a mutating query, I get 
 microsecond precision, but if I don't, I get timestamps rounded to the 
 nearest millisecond, at least for my first query on a given connection, which 
 substantially increases the possibilities of collision.
 I believe I found the offending code, though I am by no means sure this is 
 comprehensive. I think we probably need a fairly comprehensive replacement of 
 all uses of System.currentTimeMillis() with System.nanoTime().



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6106) QueryState.getTimestamp() & FBUtilities.timestampMicros() reads current timestamp with System.currentTimeMillis() * 1000 instead of System.nanoTime() / 1000

2013-10-21 Thread Christopher Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13800704#comment-13800704
 ] 

Christopher Smith commented on CASSANDRA-6106:
--

I think this is worth addressing simply because the collision probabilities are 
surprisingly high for anyone working off the documentation.

 QueryState.getTimestamp() & FBUtilities.timestampMicros() reads current 
 timestamp with System.currentTimeMillis() * 1000 instead of System.nanoTime() 
 / 1000
 

 Key: CASSANDRA-6106
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6106
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: DSE Cassandra 3.1, but also HEAD
Reporter: Christopher Smith
Priority: Minor
  Labels: collision, conflict, timestamp
 Attachments: microtimstamp.patch, microtimstamp_random.patch, 
 microtimstamp_random_rev2.patch


 I noticed this blog post: http://aphyr.com/posts/294-call-me-maybe-cassandra 
 mentioned issues with millisecond rounding in timestamps and was able to 
 reproduce the issue. If I specify a timestamp in a mutating query, I get 
 microsecond precision, but if I don't, I get timestamps rounded to the 
 nearest millisecond, at least for my first query on a given connection, which 
 substantially increases the possibilities of collision.
 I believe I found the offending code, though I am by no means sure this is 
 comprehensive. I think we probably need a fairly comprehensive replacement of 
 all uses of System.currentTimeMillis() with System.nanoTime().



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6220) Unable to select multiple entries using In clause on clustering part of compound key

2013-10-21 Thread Constance Eustace (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13800709#comment-13800709
 ] 

Constance Eustace commented on CASSANDRA-6220:
--

One of the CASS-6137 comments has a GitHub repo with a reproduction script if you 
need to reproduce this reliably. Takes about 400,000 inserts + 6,000 updates.

 Unable to select multiple entries using In clause on clustering part of 
 compound key
 

 Key: CASSANDRA-6220
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6220
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Ashot Golovenko
 Attachments: inserts.zip


 I have the following table:
 CREATE TABLE rating (
 id bigint,
 mid int,
 hid int,
 r double,
 PRIMARY KEY ((id, mid), hid));
 And I get really, really strange result sets on the following queries:
 cqlsh:bm> SELECT hid, r FROM rating WHERE id = 755349113 and mid = 201310 and hid = 201329320;
  hid       | r
 -----------+--------
  201329320 | 45.476
 (1 rows)
 cqlsh:bm> SELECT hid, r FROM rating WHERE id = 755349113 and mid = 201310 and hid = 201329220;
  hid       | r
 -----------+-------
  201329220 | 53.62
 (1 rows)
 cqlsh:bm> SELECT hid, r FROM rating WHERE id = 755349113 and mid = 201310 and hid in (201329320, 201329220);
  hid       | r
 -----------+--------
  201329320 | 45.476
 (1 rows)  -- WRONG - should be two records
 As you can see, although both records exist, I'm not able to fetch all of them 
 using an IN clause. For now I have to loop over my roughly 30 requests one by one, 
 which I find highly inefficient given that I'm querying physically the same row. 
 What's more, it doesn't happen all the time! For different id values I 
 sometimes get the correct result set.
 Ideally I'd like the following select to work:
 SELECT hid, r FROM rating WHERE id = 755349113 and mid in ? and hid in ?;
 Which doesn't work either.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6220) Unable to select multiple entries using In clause on clustering part of compound key

2013-10-21 Thread Constance Eustace (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13800710#comment-13800710
 ] 

Constance Eustace commented on CASSANDRA-6220:
--

What do you use? Cass-jdbc, binary protocol, or is this simply cqlsh scripts?

 Unable to select multiple entries using In clause on clustering part of 
 compound key
 

 Key: CASSANDRA-6220
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6220
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Ashot Golovenko
 Attachments: inserts.zip


 I have the following table:
 CREATE TABLE rating (
 id bigint,
 mid int,
 hid int,
 r double,
 PRIMARY KEY ((id, mid), hid));
 And I get really, really strange result sets on the following queries:
 cqlsh:bm> SELECT hid, r FROM rating WHERE id = 755349113 and mid = 201310 and hid = 201329320;
  hid       | r
 -----------+--------
  201329320 | 45.476
 (1 rows)
 cqlsh:bm> SELECT hid, r FROM rating WHERE id = 755349113 and mid = 201310 and hid = 201329220;
  hid       | r
 -----------+-------
  201329220 | 53.62
 (1 rows)
 cqlsh:bm> SELECT hid, r FROM rating WHERE id = 755349113 and mid = 201310 and hid in (201329320, 201329220);
  hid       | r
 -----------+--------
  201329320 | 45.476
 (1 rows)  -- WRONG - should be two records
 As you can see, although both records exist, I'm not able to fetch all of them 
 using an IN clause. For now I have to loop over my roughly 30 requests one by one, 
 which I find highly inefficient given that I'm querying physically the same row. 
 What's more, it doesn't happen all the time! For different id values I 
 sometimes get the correct result set.
 Ideally I'd like the following select to work:
 SELECT hid, r FROM rating WHERE id = 755349113 and mid in ? and hid in ?;
 Which doesn't work either.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6139) Cqlsh shouldn't display empty value alias

2013-10-21 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13800714#comment-13800714
 ] 

Brandon Williams commented on CASSANDRA-6139:
-

+1

 Cqlsh shouldn't display empty value alias
 ---

 Key: CASSANDRA-6139
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6139
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Aleksey Yeschenko
Priority: Minor
 Fix For: 2.0.2

 Attachments: 6139.txt


 When someone creates:
 {noformat}
 CREATE TABLE foo (
k int,
v int,
PRIMARY KEY (k, v)
 ) WITH COMPACT STORAGE
 {noformat}
 then we internally create a value alias (1.2)/compact value definition 
 (2.0) with an empty name. It seems that cqlsh doesn't recognize that fact and 
 displays it as:
 {noformat}
 cqlsh:ks> DESC TABLE foo;
 CREATE TABLE foo (
   k int,
   v int,
blob,
   PRIMARY KEY (k, v)
 ) WITH COMPACT STORAGE AND ...
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Comment Edited] (CASSANDRA-6220) Unable to select multiple entries using In clause on clustering part of compound key

2013-10-21 Thread Constance Eustace (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13800709#comment-13800709
 ] 

Constance Eustace edited comment on CASSANDRA-6220 at 10/21/13 2:58 PM:


One of the CASS-6137 comments has a GitHub repo with a reproduction script if you 
need to reproduce this reliably. Takes about 400,000 inserts + 6,000 updates for me, 
on a single node

https://github.com/cowarlydragon/CASS-6137


was (Author: cowardlydragon):
one of the CASS-6137 comments has a github with a reproduction script if you 
need to reliably reproduce. Takes about 400,000 inserts + 6000 updates 

 Unable to select multiple entries using In clause on clustering part of 
 compound key
 

 Key: CASSANDRA-6220
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6220
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Ashot Golovenko
 Attachments: inserts.zip


 I have the following table:
 CREATE TABLE rating (
 id bigint,
 mid int,
 hid int,
 r double,
 PRIMARY KEY ((id, mid), hid));
 And I get really, really strange result sets on the following queries:
 cqlsh:bm> SELECT hid, r FROM rating WHERE id = 755349113 and mid = 201310 and hid = 201329320;
  hid       | r
 -----------+--------
  201329320 | 45.476
 (1 rows)
 cqlsh:bm> SELECT hid, r FROM rating WHERE id = 755349113 and mid = 201310 and hid = 201329220;
  hid       | r
 -----------+-------
  201329220 | 53.62
 (1 rows)
 cqlsh:bm> SELECT hid, r FROM rating WHERE id = 755349113 and mid = 201310 and hid in (201329320, 201329220);
  hid       | r
 -----------+--------
  201329320 | 45.476
 (1 rows)  -- WRONG - should be two records
 As you can see, although both records exist, I'm not able to fetch all of them 
 using an IN clause. For now I have to loop over my roughly 30 requests one by one, 
 which I find highly inefficient given that I'm querying physically the same row. 
 What's more, it doesn't happen all the time! For different id values I 
 sometimes get the correct result set.
 Ideally I'd like the following select to work:
 SELECT hid, r FROM rating WHERE id = 755349113 and mid in ? and hid in ?;
 Which doesn't work either.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6220) Unable to select multiple entries using In clause on clustering part of compound key

2013-10-21 Thread Ashot Golovenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13800720#comment-13800720
 ] 

Ashot Golovenko commented on CASSANDRA-6220:


For inserts I was using the DataStax Java driver 1.0.3 with Cassandra 2.0.1, 
a single node on Mac OS X 10.8.5 with an SSD.
The wrong result sets can be seen through both the Java driver and cqlsh.

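For anyone who wants to try the same thing from the Java driver, a rough, self-contained reproduction sketch (it assumes a local node and the 'rating' table from the description already populated; this is not the reporter's actual code):

{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;

public class InClauseRepro
{
    public static void main(String[] args)
    {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("bm");

        ResultSet rs = session.execute(
            "SELECT hid, r FROM rating WHERE id = 755349113 AND mid = 201310 " +
            "AND hid IN (201329320, 201329220)");

        // Two rows are expected; the bug manifests as only one row coming back.
        System.out.println("rows returned: " + rs.all().size());

        cluster.shutdown();
    }
}
{code}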

 Unable to select multiple entries using In clause on clustering part of 
 compound key
 

 Key: CASSANDRA-6220
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6220
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Ashot Golovenko
 Attachments: inserts.zip


 I have the following table:
 CREATE TABLE rating (
 id bigint,
 mid int,
 hid int,
 r double,
 PRIMARY KEY ((id, mid), hid));
 And I get really, really strange result sets on the following queries:
 cqlsh:bm> SELECT hid, r FROM rating WHERE id = 755349113 and mid = 201310 and hid = 201329320;
  hid       | r
 -----------+--------
  201329320 | 45.476
 (1 rows)
 cqlsh:bm> SELECT hid, r FROM rating WHERE id = 755349113 and mid = 201310 and hid = 201329220;
  hid       | r
 -----------+-------
  201329220 | 53.62
 (1 rows)
 cqlsh:bm> SELECT hid, r FROM rating WHERE id = 755349113 and mid = 201310 and hid in (201329320, 201329220);
  hid       | r
 -----------+--------
  201329320 | 45.476
 (1 rows)  -- WRONG - should be two records
 As you can see, although both records exist, I'm not able to fetch all of them 
 using an IN clause. For now I have to loop over my roughly 30 requests one by one, 
 which I find highly inefficient given that I'm querying physically the same row. 
 What's more, it doesn't happen all the time! For different id values I 
 sometimes get the correct result set.
 Ideally I'd like the following select to work:
 SELECT hid, r FROM rating WHERE id = 755349113 and mid in ? and hid in ?;
 Which doesn't work either.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


git commit: cqlsh: ignore empty 'value alias' in DESCRIBE

2013-10-21 Thread aleksey
Updated Branches:
  refs/heads/cassandra-2.0 b365edc46 -> 20f1b816b


cqlsh: ignore empty 'value alias' in DESCRIBE

patch by Aleksey Yeschenko; reviewed by Brandon Williams for
CASSANDRA-6139


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/20f1b816
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/20f1b816
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/20f1b816

Branch: refs/heads/cassandra-2.0
Commit: 20f1b816b36befb1c142d883cf9ef76c7be5bce0
Parents: b365edc
Author: Aleksey Yeschenko alek...@apache.org
Authored: Mon Oct 21 23:12:21 2013 +0800
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Mon Oct 21 23:14:37 2013 +0800

--
 CHANGES.txt| 1 +
 bin/cqlsh  | 2 +-
 pylib/cqlshlib/cql3handling.py | 4 ++--
 3 files changed, 4 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/20f1b816/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 02bbc1d..895ffcc 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -25,6 +25,7 @@
  * Fix NPE if StorageService.getOperationMode() is executed before full 
startup (CASSANDRA-6166)
  * CQL3: support pre-epoch longs for TimestampType (CASSANDRA-6212)
  * Add reloadtriggers command to nodetool (CASSANDRA-4949)
+ * cqlsh: ignore empty 'value alias' in DESCRIBE (CASSANDRA-6139)
 Merged from 1.2:
  * (Hadoop) Require CFRR batchSize to be at least 2 (CASSANDRA-6114)
  * Add a warning for small LCS sstable size (CASSANDRA-6191)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/20f1b816/bin/cqlsh
--
diff --git a/bin/cqlsh b/bin/cqlsh
index a062dcd..82c9906 100755
--- a/bin/cqlsh
+++ b/bin/cqlsh
@@ -32,7 +32,7 @@ exit 1
 from __future__ import with_statement
 
 description = "CQL Shell for Apache Cassandra"
-version = "4.0.1"
+version = "4.0.2"
 
 from StringIO import StringIO
 from itertools import groupby

http://git-wip-us.apache.org/repos/asf/cassandra/blob/20f1b816/pylib/cqlshlib/cql3handling.py
--
diff --git a/pylib/cqlshlib/cql3handling.py b/pylib/cqlshlib/cql3handling.py
index 3b50cc9..8ec3573 100644
--- a/pylib/cqlshlib/cql3handling.py
+++ b/pylib/cqlshlib/cql3handling.py
@@ -1186,8 +1186,8 @@ class CqlTableDef:
 for attr in ('compaction_strategy_options', 'compression_parameters'):
 setattr(cf, attr, json.loads(getattr(cf, attr)))
 
-# deal with columns
-columns = map(CqlColumnDef.from_layout, coldefs)
+# deal with columns, filter out empty column names (see CASSANDRA-6139)
+columns = filter(lambda c: c.name, map(CqlColumnDef.from_layout, 
coldefs))
 
 partition_key_cols = filter(lambda c: c.component_type == 
u'partition_key', columns)
 partition_key_cols.sort(key=lambda c: c.component_index)



[1/2] git commit: cqlsh: ignore empty 'value alias' in DESCRIBE

2013-10-21 Thread aleksey
Updated Branches:
  refs/heads/trunk 66957ece2 -> 9957ed667


cqlsh: ignore empty 'value alias' in DESCRIBE

patch by Aleksey Yeschenko; reviewed by Brandon Williams for
CASSANDRA-6139


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/20f1b816
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/20f1b816
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/20f1b816

Branch: refs/heads/trunk
Commit: 20f1b816b36befb1c142d883cf9ef76c7be5bce0
Parents: b365edc
Author: Aleksey Yeschenko alek...@apache.org
Authored: Mon Oct 21 23:12:21 2013 +0800
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Mon Oct 21 23:14:37 2013 +0800

--
 CHANGES.txt| 1 +
 bin/cqlsh  | 2 +-
 pylib/cqlshlib/cql3handling.py | 4 ++--
 3 files changed, 4 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/20f1b816/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 02bbc1d..895ffcc 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -25,6 +25,7 @@
  * Fix NPE if StorageService.getOperationMode() is executed before full 
startup (CASSANDRA-6166)
  * CQL3: support pre-epoch longs for TimestampType (CASSANDRA-6212)
  * Add reloadtriggers command to nodetool (CASSANDRA-4949)
+ * cqlsh: ignore empty 'value alias' in DESCRIBE (CASSANDRA-6139)
 Merged from 1.2:
  * (Hadoop) Require CFRR batchSize to be at least 2 (CASSANDRA-6114)
  * Add a warning for small LCS sstable size (CASSANDRA-6191)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/20f1b816/bin/cqlsh
--
diff --git a/bin/cqlsh b/bin/cqlsh
index a062dcd..82c9906 100755
--- a/bin/cqlsh
+++ b/bin/cqlsh
@@ -32,7 +32,7 @@ exit 1
 from __future__ import with_statement
 
 description = "CQL Shell for Apache Cassandra"
-version = "4.0.1"
+version = "4.0.2"
 
 from StringIO import StringIO
 from itertools import groupby

http://git-wip-us.apache.org/repos/asf/cassandra/blob/20f1b816/pylib/cqlshlib/cql3handling.py
--
diff --git a/pylib/cqlshlib/cql3handling.py b/pylib/cqlshlib/cql3handling.py
index 3b50cc9..8ec3573 100644
--- a/pylib/cqlshlib/cql3handling.py
+++ b/pylib/cqlshlib/cql3handling.py
@@ -1186,8 +1186,8 @@ class CqlTableDef:
 for attr in ('compaction_strategy_options', 'compression_parameters'):
 setattr(cf, attr, json.loads(getattr(cf, attr)))
 
-# deal with columns
-columns = map(CqlColumnDef.from_layout, coldefs)
+# deal with columns, filter out empty column names (see CASSANDRA-6139)
+columns = filter(lambda c: c.name, map(CqlColumnDef.from_layout, 
coldefs))
 
 partition_key_cols = filter(lambda c: c.component_type == 
u'partition_key', columns)
 partition_key_cols.sort(key=lambda c: c.component_index)



[2/2] git commit: Merge branch 'cassandra-2.0' into trunk

2013-10-21 Thread aleksey
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9957ed66
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9957ed66
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9957ed66

Branch: refs/heads/trunk
Commit: 9957ed667505fc4be39007714b3be646b42b9549
Parents: 66957ec 20f1b81
Author: Aleksey Yeschenko alek...@apache.org
Authored: Mon Oct 21 23:15:27 2013 +0800
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Mon Oct 21 23:15:27 2013 +0800

--
 CHANGES.txt| 1 +
 bin/cqlsh  | 2 +-
 pylib/cqlshlib/cql3handling.py | 4 ++--
 3 files changed, 4 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9957ed66/CHANGES.txt
--



[jira] [Commented] (CASSANDRA-6135) Add beforeChange Notification to Gossiper State.

2013-10-21 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13800724#comment-13800724
 ] 

Brandon Williams commented on CASSANDRA-6135:
-

Hmm, that's unfortunate.  We can't break the interface in a minor (1.2) 
release, and I'm hesitant to do it even in 2.0 at this point, since I know 
there are a decent number of custom snitches in use.  WDYT [~jbellis]?

 Add beforeChange Notification to Gossiper State.
 

 Key: CASSANDRA-6135
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6135
 Project: Cassandra
  Issue Type: New Feature
Reporter: Benjamin Coverston
Assignee: Sergio Bossa
 Attachments: 
 0001-New-Gossiper-notification-to-IEndpointStateChangeSub.patch, 
 0002-CASSANDRA-6135.diff, CASSANDRA-6135-V3.patch


 We would like an internal notification to be fired before state changes 
 happen so we can intercept them, and in some cases defer them.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Comment Edited] (CASSANDRA-6179) Load calculated in nodetool info is strange/inaccurate in JBOD setups

2013-10-21 Thread J. Ryan Earl (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13800731#comment-13800731
 ] 

J. Ryan Earl edited comment on CASSANDRA-6179 at 10/21/13 3:31 PM:
---

'nodetool cfstats' below:

{noformat}
[jre@cassandra5 ~]$ nodetool cfstats
Keyspace: system_traces
Read Count: 16
Read Latency: 1.5970625 ms.
Write Count: 5171
Write Latency: 0.06117172693869658 ms.
Pending Tasks: 0
Table: sessions
SSTable count: 0
Space used (live), bytes: 0
Space used (total), bytes: 0
SSTable Compression Ratio: 0.0
Number of keys (estimate): 0
Memtable cell count: 455
Memtable data size, bytes: 255170
Memtable switch count: 0
Read count: 8
Read latency, micros: 0.179 ms.
Write count: 158
Write latency, micros: 0.041 ms.
Pending tasks: 0
Bloom filter false positives: 0
Bloom filter false ratio: 0.0
Bloom filter space used, bytes: 0
Compacted partition minimum size, bytes: 0
Compacted partition maximum size, bytes: 0
Compacted partition mean size, bytes: 0

Table: events
SSTable count: 0
Space used (live), bytes: 0
Space used (total), bytes: 0
SSTable Compression Ratio: 0.0
Number of keys (estimate): 0
Memtable cell count: 2
Memtable data size, bytes: 1048576
Memtable switch count: 0
Read count: 8
Read latency, micros: 3.016 ms.
Write count: 5013
Write latency, micros: 0.062 ms.
Pending tasks: 0
Bloom filter false positives: 0
Bloom filter false ratio: 0.0
Bloom filter space used, bytes: 0
Compacted partition minimum size, bytes: 0
Compacted partition maximum size, bytes: 0
Compacted partition mean size, bytes: 0


Keyspace: VcellsPolishedData
Read Count: 0
Read Latency: NaN ms.
Write Count: 0
Write Latency: NaN ms.
Pending Tasks: 0
Table: vcells_polished_data
SSTable count: 0
Space used (live), bytes: 0
Space used (total), bytes: 0
SSTable Compression Ratio: 0.0
Number of keys (estimate): 0
Memtable cell count: 0
Memtable data size, bytes: 0
Memtable switch count: 0
Read count: 0
Read latency, micros: NaN ms.
Write count: 0
Write latency, micros: NaN ms.
Pending tasks: 0
Bloom filter false positives: 0
Bloom filter false ratio: 0.0
Bloom filter space used, bytes: 0
Compacted partition minimum size, bytes: 0
Compacted partition maximum size, bytes: 0
Compacted partition mean size, bytes: 0


Keyspace: system
Read Count: 502538
Read Latency: 0.337618096541953 ms.
Write Count: 1063
Write Latency: 0.19078080903104422 ms.
Pending Tasks: 0
Table: NodeIdInfo
SSTable count: 0
Space used (live), bytes: 0
Space used (total), bytes: 0
SSTable Compression Ratio: 0.0
Number of keys (estimate): 0
Memtable cell count: 0
Memtable data size, bytes: 0
Memtable switch count: 0
Read count: 0
Read latency, micros: NaN ms.
Write count: 0
Write latency, micros: NaN ms.
Pending tasks: 0
Bloom filter false positives: 0
Bloom filter false ratio: 0.0
Bloom filter space used, bytes: 0
Compacted partition minimum size, bytes: 0
Compacted partition maximum size, bytes: 0
Compacted partition mean size, bytes: 0

Table: batchlog
SSTable count: 0
Space used (live), bytes: 0
Space used (total), bytes: 0
SSTable Compression Ratio: 0.0
Number of keys (estimate): 0
Memtable cell count: 0
Memtable data size, bytes: 0
Memtable switch count: 0
Read count: 0
Read latency, micros: NaN ms.
Write count: 0
 

[jira] [Commented] (CASSANDRA-6179) Load calculated in nodetool info is strange/inaccurate in JBOD setups

2013-10-21 Thread J. Ryan Earl (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13800731#comment-13800731
 ] 

J. Ryan Earl commented on CASSANDRA-6179:
-

'nodetool cfstats' below:

{noformat}
[jre@cassandra5 ~]$ nodetool cfstats
Keyspace: system_traces
Read Count: 16
Read Latency: 1.5970625 ms.
Write Count: 5171
Write Latency: 0.06117172693869658 ms.
Pending Tasks: 0
Table: sessions
SSTable count: 0
Space used (live), bytes: 0
Space used (total), bytes: 0
SSTable Compression Ratio: 0.0
Number of keys (estimate): 0
Memtable cell count: 455
Memtable data size, bytes: 255170
Memtable switch count: 0
Read count: 8
Read latency, micros: 0.179 ms.
Write count: 158
Write latency, micros: 0.041 ms.
Pending tasks: 0
Bloom filter false positives: 0
Bloom filter false ratio: 0.0
Bloom filter space used, bytes: 0
Compacted partition minimum size, bytes: 0
Compacted partition maximum size, bytes: 0
Compacted partition mean size, bytes: 0

Table: events
SSTable count: 0
Space used (live), bytes: 0
Space used (total), bytes: 0
SSTable Compression Ratio: 0.0
Number of keys (estimate): 0
Memtable cell count: 2
Memtable data size, bytes: 1048576
Memtable switch count: 0
Read count: 8
Read latency, micros: 3.016 ms.
Write count: 5013
Write latency, micros: 0.062 ms.
Pending tasks: 0
Bloom filter false positives: 0
Bloom filter false ratio: 0.0
Bloom filter space used, bytes: 0
Compacted partition minimum size, bytes: 0
Compacted partition maximum size, bytes: 0
Compacted partition mean size, bytes: 0


Keyspace: VcellsPolishedData
Read Count: 0
Read Latency: NaN ms.
Write Count: 0
Write Latency: NaN ms.
Pending Tasks: 0
Table: vcells_polished_data
SSTable count: 0
Space used (live), bytes: 0
Space used (total), bytes: 0
SSTable Compression Ratio: 0.0
Number of keys (estimate): 0
Memtable cell count: 0
Memtable data size, bytes: 0
Memtable switch count: 0
Read count: 0
Read latency, micros: NaN ms.
Write count: 0
Write latency, micros: NaN ms.
Pending tasks: 0
Bloom filter false positives: 0
Bloom filter false ratio: 0.0
Bloom filter space used, bytes: 0
Compacted partition minimum size, bytes: 0
Compacted partition maximum size, bytes: 0
Compacted partition mean size, bytes: 0


Keyspace: system
Read Count: 502538
Read Latency: 0.337618096541953 ms.
Write Count: 1063
Write Latency: 0.19078080903104422 ms.
Pending Tasks: 0
Table: NodeIdInfo
SSTable count: 0
Space used (live), bytes: 0
Space used (total), bytes: 0
SSTable Compression Ratio: 0.0
Number of keys (estimate): 0
Memtable cell count: 0
Memtable data size, bytes: 0
Memtable switch count: 0
Read count: 0
Read latency, micros: NaN ms.
Write count: 0
Write latency, micros: NaN ms.
Pending tasks: 0
Bloom filter false positives: 0
Bloom filter false ratio: 0.0
Bloom filter space used, bytes: 0
Compacted partition minimum size, bytes: 0
Compacted partition maximum size, bytes: 0
Compacted partition mean size, bytes: 0

Table: batchlog
SSTable count: 0
Space used (live), bytes: 0
Space used (total), bytes: 0
SSTable Compression Ratio: 0.0
Number of keys (estimate): 0
Memtable cell count: 0
Memtable data size, bytes: 0
Memtable switch count: 0
Read count: 0
Read latency, micros: NaN ms.
Write count: 0
Write latency, micros: NaN ms.
   

[jira] [Commented] (CASSANDRA-6220) Unable to select multiple entries using In clause on clustering part of compound key

2013-10-21 Thread Constance Eustace (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13800734#comment-13800734
 ] 

Constance Eustace commented on CASSANDRA-6220:
--

Thanks, I was going to write a Java driver reproduction in case cass-jdbc was 
somehow creating the problem, but if you've reproduced it that way I don't have 
to...


 Unable to select multiple entries using In clause on clustering part of 
 compound key
 

 Key: CASSANDRA-6220
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6220
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Ashot Golovenko
 Attachments: inserts.zip


 I have the following table:
 CREATE TABLE rating (
 id bigint,
 mid int,
 hid int,
 r double,
 PRIMARY KEY ((id, mid), hid));
 And I get really, really strange result sets on the following queries:
 cqlsh:bm> SELECT hid, r FROM rating WHERE id = 755349113 and mid = 201310 and hid = 201329320;
  hid       | r
 -----------+--------
  201329320 | 45.476
 (1 rows)
 cqlsh:bm> SELECT hid, r FROM rating WHERE id = 755349113 and mid = 201310 and hid = 201329220;
  hid       | r
 -----------+-------
  201329220 | 53.62
 (1 rows)
 cqlsh:bm> SELECT hid, r FROM rating WHERE id = 755349113 and mid = 201310 and hid in (201329320, 201329220);
  hid       | r
 -----------+--------
  201329320 | 45.476
 (1 rows)  -- WRONG - should be two records
 As you can see, although both records exist, I'm not able to fetch all of them 
 using an IN clause. For now I have to loop over my roughly 30 requests one by one, 
 which I find highly inefficient given that I'm querying physically the same row. 
 What's more, it doesn't happen all the time! For different id values I 
 sometimes get the correct result set.
 Ideally I'd like the following select to work:
 SELECT hid, r FROM rating WHERE id = 755349113 and mid in ? and hid in ?;
 Which doesn't work either.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6156) Poor resilience and recovery for bootstrapping node - unable to fetch range

2013-10-21 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13800740#comment-13800740
 ] 

Jonathan Ellis commented on CASSANDRA-6156:
---

[~yukim] is this worth keeping open?  I note that "unable to fetch range" is 
gone in 2.0.

 Poor resilience and recovery for bootstrapping node - unable to fetch range
 -

 Key: CASSANDRA-6156
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6156
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Alyssa Kwan
 Fix For: 1.2.8


 We have an 8 node cluster on 1.2.8 using vnodes.  One of our nodes failed and 
 we are having lots of trouble bootstrapping it back.  On each attempt, 
 bootstrapping eventually fails with a RuntimeException Unable to fetch 
 range.  As far as we can tell, long GC pauses on the sender side cause 
 heartbeat drops or delays, which leads the gossip controller to convict the 
 connection and mark the sender dead.  We've done significant GC tuning to 
 minimize the duration of pauses and raised phi_convict to its max.  It merely 
 lets the bootstrap process take longer to fail.
 The inability to reliably add nodes significantly affects our ability to 
 scale.
 We're not the only ones:  
 http://stackoverflow.com/questions/19199349/cassandra-bootstrap-fails-with-unable-to-fetch-range
 What can we do in the immediate term to bring this node in?  And what's the 
 long term solution?
 One possible solution would be to allow bootstrapping to be an incremental 
 process with individual transfers of vnode ownership instead of attempting to 
 transfer the whole set of vnodes transactionally.  (I assume that's what's 
 happening now.)  I don't know what would have to change on the gossip and 
 token-aware client side to support this.
 Another solution would be to partition sstable files by vnode and allow 
 transfer of those files directly with some sort of checkpointing of and 
 incremental transfer of writes after the sstable is transferred.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6146) CQL-native stress

2013-10-21 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13800742#comment-13800742
 ] 

Jonathan Ellis commented on CASSANDRA-6146:
---

Want to tackle something larger [~ash2k]? :)

 CQL-native stress
 -

 Key: CASSANDRA-6146
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6146
 Project: Cassandra
  Issue Type: New Feature
  Components: Tools
Reporter: Jonathan Ellis

 The existing CQL support in stress is not worth discussing.  We need to 
 start over, and we might as well kill two birds with one stone and move to 
 the native protocol while we're at it.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6140) Cassandra-cli backward compatibility issue with Cassandra 2.0.1

2013-10-21 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13800743#comment-13800743
 ] 

Jonathan Ellis commented on CASSANDRA-6140:
---

Can you try 2.0 HEAD?

 Cassandra-cli backward compatibility issue with Cassandra 2.0.1
 ---

 Key: CASSANDRA-6140
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6140
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: Linux Ubuntu, Cassandra 2.0.0
Reporter: DOAN DuyHai

 Currently we are using Cassandra 1.2.6 and we want to migrate to 2.0.1.
  We still use Thrift for some column families (the migration to CQL3 is not done 
 yet for them). We have a cassandra-cli script to drop/create a fresh keyspace, 
 re-create the column families and populate referential data:
 *Schema creation script*
 {code}
 drop keyspace xxx;
 create keyspace xxx with placement_strategy ...
 create column family offers with 
 key_validation_class = UTF8Type and
 comparator = 'CompositeType(UTF8Type)'  and 
 default_validation_class = UTF8Type;
 {code}
 *Data insertion script*:
 {code}
 set offers['OFFER1']['PRODUCT1']='test_product';
 ...
 {code}
  When executing the data insertion script with Cassandra 2.0.1, we have the 
 following stack trace:
 {code}
 Invalid cell for CQL3 table offers. The CQL3 column component (COL1) does not 
 correspond to a defined CQL3 column
 InvalidRequestException(why:Invalid cell for CQL3 table offers. The CQL3 
 column component (COL1) does not correspond to a defined CQL3 column)
   at 
 org.apache.cassandra.thrift.Cassandra$insert_result$insert_resultStandardScheme.read(Cassandra.java:21447)
   at 
 org.apache.cassandra.thrift.Cassandra$insert_result$insert_resultStandardScheme.read(Cassandra.java:21433)
   at 
 org.apache.cassandra.thrift.Cassandra$insert_result.read(Cassandra.java:21367)
   at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
   at 
 org.apache.cassandra.thrift.Cassandra$Client.recv_insert(Cassandra.java:898)
   at 
 org.apache.cassandra.thrift.Cassandra$Client.insert(Cassandra.java:882)
   at org.apache.cassandra.cli.CliClient.executeSet(CliClient.java:987)
   at 
 org.apache.cassandra.cli.CliClient.executeCLIStatement(CliClient.java:231)
   at 
 org.apache.cassandra.cli.CliMain.processStatementInteractive(CliMain.java:201)
   at org.apache.cassandra.cli.CliMain.main(CliMain.java:327)
 {code}
  This data insertion script works perfectly with Cassandra 1.2.6.
  We face the same issue with Cassandra 2.0.0. It looks like the cassandra-cli 
 commands no longer work with Cassandra 2.0.0...
   



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (CASSANDRA-6130) Secondary Index does not work properly

2013-10-21 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-6130.
---

Resolution: Duplicate

 Secondary Index does not work properly
 --

 Key: CASSANDRA-6130
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6130
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: cassandra 2.0.1
 jdk 7
Reporter: koray sariteke

 When a secondary index is created, we are not able to query by the created 
 index. We searched the logs and did not notice any info about the index. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (CASSANDRA-6121) CASS 2.0, possibly 1.2.8 as well: Secondary Indexes not working

2013-10-21 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-6121.
---

Resolution: Duplicate

 CASS 2.0, possibly 1.2.8 as well: Secondary Indexes not working
 ---

 Key: CASSANDRA-6121
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6121
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: AWS ebs striped quad-volume for data directories, ubuntu 
 12.10. Currently single-node, but also possibly seen in two-node 
 configuration.
Reporter: Constance Eustace
 Fix For: 1.2.8, 2.0.1


 I will attach the schema we are using. We are using CQL3, fed via the 
 cass-jdbc driver project. 
 We are using off-heap JNA-enabled key and row caching.
 We implement an entity model using cassandra wide/sparse rows. So an 
 entityid is the rowkey, and various properties are stored with the property 
 names as the column key and various value information (type, value(s)), etc. 
 Some of these sparse columns are indexed so there can be searches on the 
 values. 
 We have a fairly large number of indexes. 
 Data is populated using heavy batch intakes (1.2 million row keys done in 
 about 16 minutes).
 We will attempt to reproduce reliably, get stats, logs, traces. Filing the 
 bug for now as a placeholder.
 These 1.2 million rowkey updates are split into individual batches of about 
 200 statements, with a commonly shared timestamp specified for the batch so 
 that update contention can be dealt with. 
 I have seen the previously filed bugs on compaction on TTL columns (not used by 
 us) and rowkey caching having impacts on the indexes. We may attempt 
 experiments where we do not use rowkey caching, toggling JNA/offheap, etc. 
 Any advice would be appreciated for detecting index failure...
 Our schema: (we have another 8-10 near copies of this keyspace that split the 
 data for vendors/storefronts/etc)
 CREATE KEYSPACE internal_submission WITH REPLICATION= { 
 'class':'SimpleStrategy', 
 'replication_factor': <%=node.ingest.db.replication_factor%> };
 CREATE TABLE internal_submission.Relation (ParentID text,ChildID 
 text,GraphType text,Info map<text,text>,PRIMARY KEY (ParentID,ChildID)) with 
 caching = 'all';
 CREATE TABLE internal_submission.RelationBACKREF (ChildID text,ParentID 
 text,PRIMARY KEY (ChildID,ParentID)) with caching = 'all';
 CREATE TABLE internal_submission.Blob (BlobID text,Type text,SubType 
 text,Encoding map<text,text>,BlobData blob,PRIMARY KEY (BlobID)) with caching 
 = 'keys_only';
 CREATE TABLE internal_submission.Entity_Job (e_EntID text,e_EntName 
 text,e_EntType text,e_EntLinks text,p_Prop text,p_Storage text,p_PropID 
 text,p_Flags text,p_Val text,p_ValType text,p_ValUnit text,p_ValLang 
 text,p_ValLinks text,p_Vars text,p_PropLinks text,p_SubEnts text,PartnerID 
 text,UserID text,SubmitDate bigint,SourceIP text,SubmitEvent text,Size 
 int,Describes text,Version text,IngestStatus text,IngestStatusDetail 
 text,ReferenceID text,DNDCondition text,PRIMARY KEY (e_EntID,p_Prop)) with 
 caching = 'all';
 CREATE TABLE internal_submission.Processing (EntityID text,InProcess 
 counter,Complete counter,Success counter,Fail counter,Warn counter,Redo 
 counter,Hold counter,PRIMARY KEY (EntityID)) with caching = 'all';
 CREATE TABLE internal_submission.Entity_Asset (e_EntID text,e_EntName 
 text,e_EntType text,e_EntLinks text,p_Prop text,p_Storage text,p_PropID 
 text,p_Flags text,p_Val text,p_ValType text,p_ValUnit text,p_ValLang 
 text,p_ValLinks text,p_Vars text,p_PropLinks text,p_SubEnts text,IngestStatus 
 text,IngestStatusDetail text,PRIMARY KEY (e_EntID,p_Prop)) with caching = 
 'all';
 CREATE TABLE internal_submission.Entity_MetaDataDef (e_EntID text,e_EntName 
 text,e_EntType text,e_EntLinks text,p_Prop text,p_Storage text,p_PropID 
 text,p_Flags text,p_Val text,p_ValType text,p_ValUnit text,p_ValLang 
 text,p_ValLinks text,p_Vars text,p_PropLinks text,p_SubEnts text,PRIMARY KEY 
 (e_EntID,p_Prop)) with caching = 'all';
 CREATE TABLE internal_submission.Entity_HierarchyDef (e_EntID text,e_EntName 
 text,e_EntType text,e_EntLinks text,p_Prop text,p_Storage text,p_PropID 
 text,p_Flags text,p_Val text,p_ValType text,p_ValUnit text,p_ValLang 
 text,p_ValLinks text,p_Vars text,p_PropLinks text,p_SubEnts text,Describes 
 text,Version text,PRIMARY KEY (e_EntID,p_Prop)) with caching = 'all';
 CREATE TABLE internal_submission.Entity_CategoryDef (e_EntID text,e_EntName 
 text,e_EntType text,e_EntLinks text,p_Prop text,p_Storage text,p_PropID 
 text,p_Flags text,p_Val text,p_ValType text,p_ValUnit text,p_ValLang 
 text,p_ValLinks text,p_Vars text,p_PropLinks text,p_SubEnts text,Describes 
 text,Version text,PRIMARY KEY (e_EntID,p_Prop)) with caching = 'all';
 CREATE TABLE internal_submission.Entity_ProductDef 

[jira] [Updated] (CASSANDRA-6120) Boolean constants syntax is not consistent between DDL and DML in CQL

2013-10-21 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6120:
--

Assignee: Sylvain Lebresne

 Boolean constants syntax is not consistent between DDL and DML in CQL
 -

 Key: CASSANDRA-6120
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6120
 Project: Cassandra
  Issue Type: Bug
Reporter: Michaël Figuière
Assignee: Sylvain Lebresne
Priority: Trivial

 DDL statements allow boolean constants to be either quoted or unquoted as:
 {code}
 CREATE KEYSPACE ks WITH replication = {'class': 'SimpleStrategy', 
 'replication_factor': 1} AND durable_writes = true;
 CREATE KEYSPACE ks WITH replication = {'class': 'SimpleStrategy', 
 'replication_factor': 1} AND durable_writes = 'true';
 {code}
 DML statements, however, only allow unquoted boolean constants.
 While this is not a big deal, it can introduce a bit of confusion for 
 users. Fixing this lack of syntax consistency would break existing 
 scripts, so that's something we might want to consider the next time we 
 introduce some breaking changes in CQL...



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6206) Thrift socket listen backlog

2013-10-21 Thread Nenad Merdanovic (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13800746#comment-13800746
 ] 

Nenad Merdanovic commented on CASSANDRA-6206:
-

Although I saw no benefit in doing this, since the issue doesn't relate to the 
application's way of handling connections but to the socket limits handled 
in-kernel, I have tried changing the RPC server type to 'hsha'. It didn't help.

The backlog is set with the listen(2) system call, and Java's implementation 
defaults it to 50 if not specified. This is way too low for any production traffic 
that is not using connection pooling (for example, PDO-Cassandra doesn't support 
it at all). Patching this should be extremely simple, and I kindly ask you to 
reconsider fixing it.

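For reference, plain java.net illustrates the default being discussed (this is not Cassandra's Thrift server code): a ServerSocket bound without an explicit backlog gets the JDK default of 50 pending connections, while the explicit form lets the server ask for more (the kernel may still cap it, e.g. via net.core.somaxconn on Linux). The port and backlog values below are examples only.

{code}
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class BacklogExample
{
    public static void main(String[] args) throws Exception
    {
        int port = 9160;     // example value only
        int backlog = 1024;  // requested accept-queue length; the kernel may cap it

        // new ServerSocket(port) would leave the backlog at the JDK default of 50.
        ServerSocket socket = new ServerSocket();
        socket.bind(new InetSocketAddress(port), backlog);
        System.out.println("listening on " + port + " with backlog " + backlog);
        socket.close();
    }
}
{code}
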
 Thrift socket listen backlog
 

 Key: CASSANDRA-6206
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6206
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Debian Linux, Java 7
Reporter: Nenad Merdanovic
 Fix For: 2.0.1

 Attachments: cassandra.patch


 Although Thrift is a deprecated method of accessing Cassandra, the default 
 backlog is way too low on that socket. It shouldn't be a problem to implement 
 a fix, and I am including a POC patch for this (sorry, really low on time and with 
 limited Java knowledge, so it's just to give an idea).
 This is an old report which was never addressed and the bug remains to this 
 day, except in my case I have a much larger-scale application with 3rd-party 
 software which I cannot modify to include connection pooling:
 https://issues.apache.org/jira/browse/CASSANDRA-1663
 There is also a pending change in the Thrift itself which Cassandra should be 
 able to use for parts using TServerSocket (SSL):
 https://issues.apache.org/jira/browse/THRIFT-1868



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6130) Secondary Index does not work properly

2013-10-21 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13800745#comment-13800745
 ] 

Jonathan Ellis commented on CASSANDRA-6130:
---

sounds like CASSANDRA-5732 to me

 Secondary Index does not work properly
 --

 Key: CASSANDRA-6130
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6130
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: cassandra 2.0.1
 jdk 7
Reporter: koray sariteke

 When a secondary index is created, we are not able to query by the created 
 index. We searched the logs and did not notice any info about the index. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6121) CASS 2.0, possibly 1.2.8 as well: Secondary Indexes not working

2013-10-21 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13800748#comment-13800748
 ] 

Jonathan Ellis commented on CASSANDRA-6121:
---

sounds like CASSANDRA-5732

 CASS 2.0, possibly 1.2.8 as well: Secondary Indexes not working
 ---

 Key: CASSANDRA-6121
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6121
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: AWS ebs striped quad-volume for data directories, ubuntu 
 12.10. Currently single-node, but also possibly seen in two-node 
 configuration.
Reporter: Constance Eustace
 Fix For: 1.2.8, 2.0.1


 I will attach the schema we are using. We are using CQL3, fed via the 
 cass-jdbc driver project. 
 We are using off-heap JNA-enabled key and row caching.
 We implement an entity model using cassandra wide/sparse row. So an 
 entityid is the rowkey, and various properties are stored with property 
 names as the column key and various value information (type, value(s)), etc. 
 Some of these sparse columns are indexed so there can be searches on the 
 values. 
 We have a fairly large number of indexes. 
 Data is populated using heavy batch intakes (1.2 million row keys done in 
 about 16 minutes).
 We will attempt to reproduce reliably, get stats, logs, traces. Filing the 
 bug for now as a placeholder.
 These 1.2 million rowkey updates are split into individual batches of about 
 200 statements, with a commonly shared timestamp specified for the batch so 
 that update contention can be dealt with. 
 I have seen the previously filed bugs on compaction on TTL columns (not used by 
 us) and rowkey caching having impacts on the indexes. We may attempt 
 experiments where we do not use rowkey caching, toggling JNA/offheap, etc. 
 Any advice would be appreciated for detecting index failure...
 Our schema: (we have another 8-10 near copies of this keyspace that split the 
 data for vendors/storefronts/etc)
 CREATE KEYSPACE internal_submission WITH REPLICATION= { 
 'class':'SimpleStrategy', 
 'replication_factor': <%= node.ingest.db.replication_factor %> };
 CREATE TABLE internal_submission.Relation (ParentID text,ChildID 
 text,GraphType text,Info map<text,text>,PRIMARY KEY (ParentID,ChildID)) with 
 caching = 'all';
 CREATE TABLE internal_submission.RelationBACKREF (ChildID text,ParentID 
 text,PRIMARY KEY (ChildID,ParentID)) with caching = 'all';
 CREATE TABLE internal_submission.Blob (BlobID text,Type text,SubType 
 text,Encoding map<text,text>,BlobData blob,PRIMARY KEY (BlobID)) with caching 
 = 'keys_only';
 CREATE TABLE internal_submission.Entity_Job (e_EntID text,e_EntName 
 text,e_EntType text,e_EntLinks text,p_Prop text,p_Storage text,p_PropID 
 text,p_Flags text,p_Val text,p_ValType text,p_ValUnit text,p_ValLang 
 text,p_ValLinks text,p_Vars text,p_PropLinks text,p_SubEnts text,PartnerID 
 text,UserID text,SubmitDate bigint,SourceIP text,SubmitEvent text,Size 
 int,Describes text,Version text,IngestStatus text,IngestStatusDetail 
 text,ReferenceID text,DNDCondition text,PRIMARY KEY (e_EntID,p_Prop)) with 
 caching = 'all';
 CREATE TABLE internal_submission.Processing (EntityID text,InProcess 
 counter,Complete counter,Success counter,Fail counter,Warn counter,Redo 
 counter,Hold counter,PRIMARY KEY (EntityID)) with caching = 'all';
 CREATE TABLE internal_submission.Entity_Asset (e_EntID text,e_EntName 
 text,e_EntType text,e_EntLinks text,p_Prop text,p_Storage text,p_PropID 
 text,p_Flags text,p_Val text,p_ValType text,p_ValUnit text,p_ValLang 
 text,p_ValLinks text,p_Vars text,p_PropLinks text,p_SubEnts text,IngestStatus 
 text,IngestStatusDetail text,PRIMARY KEY (e_EntID,p_Prop)) with caching = 
 'all';
 CREATE TABLE internal_submission.Entity_MetaDataDef (e_EntID text,e_EntName 
 text,e_EntType text,e_EntLinks text,p_Prop text,p_Storage text,p_PropID 
 text,p_Flags text,p_Val text,p_ValType text,p_ValUnit text,p_ValLang 
 text,p_ValLinks text,p_Vars text,p_PropLinks text,p_SubEnts text,PRIMARY KEY 
 (e_EntID,p_Prop)) with caching = 'all';
 CREATE TABLE internal_submission.Entity_HierarchyDef (e_EntID text,e_EntName 
 text,e_EntType text,e_EntLinks text,p_Prop text,p_Storage text,p_PropID 
 text,p_Flags text,p_Val text,p_ValType text,p_ValUnit text,p_ValLang 
 text,p_ValLinks text,p_Vars text,p_PropLinks text,p_SubEnts text,Describes 
 text,Version text,PRIMARY KEY (e_EntID,p_Prop)) with caching = 'all';
 CREATE TABLE internal_submission.Entity_CategoryDef (e_EntID text,e_EntName 
 text,e_EntType text,e_EntLinks text,p_Prop text,p_Storage text,p_PropID 
 text,p_Flags text,p_Val text,p_ValType text,p_ValUnit text,p_ValLang 
 text,p_ValLinks text,p_Vars text,p_PropLinks text,p_SubEnts text,Describes 
 text,Version text,PRIMARY KEY (e_EntID,p_Prop)) with caching = 

[jira] [Commented] (CASSANDRA-6105) Cassandra Triggers to execute on replicas

2013-10-21 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13800750#comment-13800750
 ] 

Jonathan Ellis commented on CASSANDRA-6105:
---

If you're going to do this, you should use the existing indexing hooks instead 
of reinventing them.

 Cassandra Triggers to execute on replicas
 -

 Key: CASSANDRA-6105
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6105
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Michael
Priority: Minor

 We would like to keep ElasticSearch eventually consistent across data centers 
 while keeping ElasticSearch clusters local to each data center. The idea is 
 to utilize Cassandra to replicate data across data centers and use triggers to 
 kick off an event which would populate data into the ElasticSearch clusters, 
 thus keeping dispersed ElasticSearch clusters eventually consistent while not 
 extending ElasticSearch across data centers. 
 That in mind, it would be very useful if a trigger could be made to execute 
 on every replica. Or at least one replica per data center.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (CASSANDRA-6105) Cassandra Triggers to execute on replicas

2013-10-21 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-6105.
---

Resolution: Won't Fix

 Cassandra Triggers to execute on replicas
 -

 Key: CASSANDRA-6105
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6105
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Michael
Priority: Minor

 We would like to keep ElasticSearch eventually consistent across data centers 
 while keeping ElasticSearch clusters local to each data center. The idea is 
 to utilize Cassandra to replicate data across data centers and use triggers to 
 kick off an event which would populate data into the ElasticSearch clusters, 
 thus keeping dispersed ElasticSearch clusters eventually consistent while not 
 extending ElasticSearch across data centers. 
 That in mind, it would be very useful if a trigger could be made to execute 
 on every replica. Or at least one replica per data center.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (CASSANDRA-6150) How do I remove the node that has Host Id is null ?

2013-10-21 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams resolved CASSANDRA-6150.
-

Resolution: Cannot Reproduce

Use the Gossiper.unsafeAssassinateEndpoint JMX call to remove it.
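
If it helps, a rough sketch of invoking that operation over JMX from plain Java 
(the MBean name org.apache.cassandra.net:type=Gossiper and port 7199 are 
assumed defaults, so verify them with jconsole first, and connect to a live 
node rather than the dead one):

{code}
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class AssassinateEndpoint
{
    public static void main(String[] args) throws Exception
    {
        // JMX endpoint of a live node in the ring.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://192.168.61.127:7199/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try
        {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // Assumed MBean name for the Gossiper; check it in jconsole if unsure.
            ObjectName gossiper = new ObjectName("org.apache.cassandra.net:type=Gossiper");
            // The argument is the address of the stuck/leaving endpoint.
            mbs.invoke(gossiper, "unsafeAssassinateEndpoint",
                       new Object[]{ "192.168.61.131" },
                       new String[]{ "java.lang.String" });
        }
        finally
        {
            connector.close();
        }
    }
}
{code}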

 How do I remove the node that has Host Id is null ?
 -

 Key: CASSANDRA-6150
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6150
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: CentOS 5.8 , Dell , 32Gb Memory
Reporter: yunwoo oh
 Fix For: 1.2.3


 My team runs 15 Cassandra nodes.
 I executed nodetool -h 192.168.61.131 decommission
 and then killed the Cassandra daemon.
 But when I execute ./nodetool status
 there is a 192.168.61.131 node that has a null value in the HOST ID column
 and its State is Leaving.
 How do I remove a node whose Host ID is null?
 [root@u2metadbm06 bin]# ./nodetool status
 Datacenter: datacenter1
 ===
 Status=Up/Down
 |/ State=Normal/Leaving/Joining/Moving
 --  Address         Load       Tokens  Owns   Host ID                               Rack
 UN  192.168.61.127  187.14 GB  256     2.2%   d45096ba-32ea-4fbe-aab3-649879771ffb  rack1
 UN  192.168.61.124  191.26 GB  256     2.4%   935804c5-aa5f-4186-b4d3-32352bc80e9c  rack1
 UN  192.168.61.157  186.02 GB  256     2.7%   edcd56a3-bfc7-4bdd-8bac-da3e64840e9d  rack1
 UN  192.168.61.130  186.86 GB  256     2.3%   c8a5f722-fedd-4df4-8262-3b49f804ee0d  rack1
 UL  192.168.61.131  136.3 GB   256     2.8%   null                                  rack1
 UN  192.168.61.159  165.32 GB  256     2.3%   9dc25aa7-1637-43dd-8767-6bd83cd6cfdb  rack1
 .
 thanks.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6075) The token function should allow column identifiers in the correct order only

2013-10-21 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6075:
--

Assignee: Sylvain Lebresne

 The token function should allow column identifiers in the correct order only
 

 Key: CASSANDRA-6075
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6075
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 1.2.9
Reporter: Michaël Figuière
Assignee: Sylvain Lebresne
Priority: Minor

 Given the following table:
 {code}
 CREATE TABLE t1 (a int, b text, PRIMARY KEY ((a, b)));
 {code}
 The following request returns an error in cqlsh as literal arguments order is 
 incorrect:
 {code}
 SELECT * FROM t1 WHERE token(a, b) > token('s', 1);
 Bad Request: Type error: 's' cannot be passed as argument 0 of function token 
 of type int
 {code}
 But surprisingly if we provide the column identifier arguments in the wrong 
 order no error is returned:
 {code}
 SELECT * FROM t1 WHERE token(a, b) > token(1, 'a'); // correct order is valid
 SELECT * FROM t1 WHERE token(b, a) > token(1, 'a'); // incorrect order is valid as well
 {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6066) LHF 2i performance improvements

2013-10-21 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6066:
--

Reviewer: Aleksey Yeschenko
Assignee: Lyuben Todorov

 LHF 2i performance improvements
 ---

 Key: CASSANDRA-6066
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6066
 Project: Cassandra
  Issue Type: Improvement
Reporter: Aleksey Yeschenko
Assignee: Lyuben Todorov
Priority: Minor
 Fix For: 2.0.2


 We should perform more aggressive paging over the index partition (costs us 
 nothing) and also fetch the rows from the base table in one slice query (at 
 least the ones belonging to the same partition).



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-5981) Netty frame length exception when storing data to Cassandra using binary protocol

2013-10-21 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13800755#comment-13800755
 ] 

Jonathan Ellis commented on CASSANDRA-5981:
---

Hmm, timeout on that request.  [~norman], could you review v2?

 Netty frame length exception when storing data to Cassandra using binary 
 protocol
 -

 Key: CASSANDRA-5981
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5981
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Linux, Java 7
Reporter: Justin Sweeney
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 2.0.2

 Attachments: 0001-Correctly-catch-frame-too-long-exceptions.txt, 
 0002-Allow-to-configure-the-max-frame-length.txt, 5981-v2.txt


 Using Cassandra 1.2.8, I am running into an issue where when I send a large 
 amount of data using the binary protocol, I get the following netty exception 
 in the Cassandra log file:
 {quote}
 ERROR 09:08:35,845 Unexpected exception during request
 org.jboss.netty.handler.codec.frame.TooLongFrameException: Adjusted frame 
 length exceeds 268435456: 292413714 - discarded
 at 
 org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder.fail(LengthFieldBasedFrameDecoder.java:441)
 at 
 org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder.failIfNecessary(LengthFieldBasedFrameDecoder.java:412)
 at 
 org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder.decode(LengthFieldBasedFrameDecoder.java:372)
 at org.apache.cassandra.transport.Frame$Decoder.decode(Frame.java:181)
 at 
 org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:422)
 at 
 org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
 at 
 org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
 at 
 org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
 at 
 org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:84)
 at 
 org.jboss.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:472)
 at 
 org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:333)
 at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:35)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
 at java.lang.Thread.run(Thread.java:722)
 {quote}
 I am using the Datastax driver and using CQL to execute insert queries. The 
 query that is failing is using atomic batching executing a large number of 
 statements (~55).
 Looking into the code a bit, I saw that in the 
 org.apache.cassandra.transport.Frame$Decoder class, the MAX_FRAME_LENGTH is 
 hard-coded to 256 MB.
 Is this something that should be configurable or is this a hard limit that 
 will prevent batch statements of this size from executing for some reason?
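 As a rough illustration of where that ceiling lives at the Netty 3.x level 
 (this is not Cassandra's actual Frame.Decoder, and the length-field offsets 
 below are placeholders): the first constructor argument of 
 LengthFieldBasedFrameDecoder is the maximum frame length, and exceeding it is 
 what produces the TooLongFrameException above.
 {code}
 import org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder;

 public class ConfigurableFrameDecoder extends LengthFieldBasedFrameDecoder
 {
     // Mirrors the 256 MB ceiling mentioned above.
     private static final int DEFAULT_MAX_FRAME_LENGTH = 256 * 1024 * 1024;

     public ConfigurableFrameDecoder(int maxFrameLengthInBytes)
     {
         // lengthFieldOffset / lengthFieldLength are placeholder values here;
         // they depend on the wire format of the protocol being decoded.
         super(maxFrameLengthInBytes, 4, 4);
     }

     public ConfigurableFrameDecoder()
     {
         this(DEFAULT_MAX_FRAME_LENGTH);
     }
 }
 {code}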



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (CASSANDRA-6082) 1.1.12 -> 1.2.x upgrade may result inconsistent ring

2013-10-21 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams resolved CASSANDRA-6082.
-

Resolution: Cannot Reproduce

Resolving as cantrepro since I can't figure out how this would happen, and the 
gossipinfo appears to be after the resolution, so we don't have much to go on.

 1.1.12 -> 1.2.x upgrade may result inconsistent ring
 -

 Key: CASSANDRA-6082
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6082
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 1.1.12 -> 1.2.9
Reporter: Chris Burroughs
Priority: Minor
 Attachments: c-gossipinfo, c-status


 This happened to me once, and since I don't have any more 1.1.x clusters I 
 won't be testing again.  I hope the attached files are enough for someone to 
 connect the dots.
 I did a rolling restart to upgrade from 1.1.12 -> 1.2.9.  About a week later 
 I discovered that one node was in an inconsistent state in the ring.  It was 
 either:
  * up
  * host-id=null
  * missing
 Depending on which node I ran nodetool status from.  I *think* I just missed 
 this during the upgrade but can not rule out the possibility that it just 
 happened for no reason some time after the upgrade.  It was detected when 
 running repair in such a ring, which caused all sorts of terrible data 
 duplication, and performance tanked.  Restarting the seeds + bad node caused the ring to 
 be consistent again.
 Two possibly suspicious things are an ArrayIndexOutOfBoundsException on 
 startup:
 {noformat}
 ERROR [GossipStage:1] 2013-09-06 10:45:35,213 CassandraDaemon.java (line 194) 
 Exception in thread Thread[GossipStage:1,5,main]
 java.lang.ArrayIndexOutOfBoundsException: 2
 at 
 org.apache.cassandra.service.StorageService.extractExpireTime(StorageService.java:1660)
 at 
 org.apache.cassandra.service.StorageService.handleStateRemoving(StorageService.java:1607)
 at 
 org.apache.cassandra.service.StorageService.onChange(StorageService.java:1230)
 at 
 org.apache.cassandra.service.StorageService.onJoin(StorageService.java:1958)
 at 
 org.apache.cassandra.gms.Gossiper.handleMajorStateChange(Gossiper.java:841)
 at 
 org.apache.cassandra.gms.Gossiper.applyStateLocally(Gossiper.java:919)
 at 
 org.apache.cassandra.gms.GossipDigestAck2VerbHandler.doVerb(GossipDigestAck2VerbHandler.java:50)
 at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:56)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 {noformat}
 and problems with hint delivery to multiple nodes.
 {noformat}
 ERROR [MutationStage:11] 2013-09-06 13:59:19,604 CassandraDaemon.java (line 
 194) Exception in thread Thread[MutationStage:11,5,main]
 java.lang.AssertionError: Missing host ID for 10.20.2.45
 at 
 org.apache.cassandra.service.StorageProxy.writeHintForMutation(StorageProxy.java:583)
 at 
 org.apache.cassandra.service.StorageProxy$5.runMayThrow(StorageProxy.java:552)
 at 
 org.apache.cassandra.service.StorageProxy$HintRunnable.run(StorageProxy.java:1658)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
 at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
 at java.util.concurrent.FutureTask.run(FutureTask.java:138)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 {noformat}
 Note however that while there were delivery problems to multiple nodes during 
 the rolling upgrade, only one node was in a funky state a week later.
 Attached are the results of running gossipinfo and status on every node.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6206) Thrift socket listen backlog

2013-10-21 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-6206:


Fix Version/s: (was: 2.0.1)
   2.0.2

 Thrift socket listen backlog
 

 Key: CASSANDRA-6206
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6206
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Debian Linux, Java 7
Reporter: Nenad Merdanovic
 Fix For: 2.0.2

 Attachments: cassandra.patch


 Although Thrift is a deprecated method of accessing Cassandra, the default 
 backlog is way too low on that socket. It shouldn't be a problem to implement 
 it and I am including a POC patch for this (sorry, really low on time with 
 limited Java knowledge so just to give an idea).
 This is an old report which was never addressed and the bug remains till this 
 day, except in my case I have a much larger scale application with 3rd party 
 software which I cannot modify to include connection pooling:
 https://issues.apache.org/jira/browse/CASSANDRA-1663
 There is also a pending change in the Thrift itself which Cassandra should be 
 able to use for parts using TServerSocket (SSL):
 https://issues.apache.org/jira/browse/THRIFT-1868



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Reopened] (CASSANDRA-6206) Thrift socket listen backlog

2013-10-21 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams reopened CASSANDRA-6206:
-


 Thrift socket listen backlog
 

 Key: CASSANDRA-6206
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6206
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Debian Linux, Java 7
Reporter: Nenad Merdanovic
 Fix For: 2.0.2

 Attachments: cassandra.patch


 Although Thrift is a deprecated method of accessing Cassandra, the default 
 backlog is way too low on that socket. It shouldn't be a problem to implement 
 it and I am including a POC patch for this (sorry, really low on time with 
 limited Java knowledge so just to give an idea).
 This is an old report which was never addressed and the bug remains till this 
 day, except in my case I have a much larger scale application with 3rd party 
 software which I cannot modify to include connection pooling:
 https://issues.apache.org/jira/browse/CASSANDRA-1663
 There is also a pending change in the Thrift itself which Cassandra should be 
 able to use for parts using TServerSocket (SSL):
 https://issues.apache.org/jira/browse/THRIFT-1868



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6206) Thrift socket listen backlog

2013-10-21 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-6206:


Reviewer: Vijay

 Thrift socket listen backlog
 

 Key: CASSANDRA-6206
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6206
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Debian Linux, Java 7
Reporter: Nenad Merdanovic
 Fix For: 2.0.2

 Attachments: cassandra.patch


 Although Thrift is a deprecated method of accessing Cassandra, the default 
 backlog is way too low on that socket. It shouldn't be a problem to implement 
 it and I am including a POC patch for this (sorry, really low on time with 
 limited Java knowledge so just to give an idea).
 This is an old report which was never addressed and the bug remains till this 
 day, except in my case I have a much larger scale application with 3rd party 
 software which I cannot modify to include connection pooling:
 https://issues.apache.org/jira/browse/CASSANDRA-1663
 There is also a pending change in the Thrift itself which Cassandra should be 
 able to use for parts using TServerSocket (SSL):
 https://issues.apache.org/jira/browse/THRIFT-1868



--
This message was sent by Atlassian JIRA
(v6.1#6144)


git commit: BM.forceBatchlogReplay() should be executed in batchlogTasks

2013-10-21 Thread aleksey
Updated Branches:
  refs/heads/cassandra-1.2 189a60728 - 7e057f504


BM.forceBatchlogReplay() should be executed in batchlogTasks

follow-up to CASSANDRA-6079


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7e057f50
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7e057f50
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7e057f50

Branch: refs/heads/cassandra-1.2
Commit: 7e057f504613e68082a76642983d353f3f0400fb
Parents: 189a607
Author: Aleksey Yeschenko alek...@apache.org
Authored: Tue Oct 22 00:26:05 2013 +0800
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Tue Oct 22 00:26:05 2013 +0800

--
 src/java/org/apache/cassandra/db/BatchlogManager.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7e057f50/src/java/org/apache/cassandra/db/BatchlogManager.java
--
diff --git a/src/java/org/apache/cassandra/db/BatchlogManager.java 
b/src/java/org/apache/cassandra/db/BatchlogManager.java
index 2488458..5fd55a3 100644
--- a/src/java/org/apache/cassandra/db/BatchlogManager.java
+++ b/src/java/org/apache/cassandra/db/BatchlogManager.java
@@ -118,7 +118,7 @@ public class BatchlogManager implements BatchlogManagerMBean
 replayAllFailedBatches();
 }
 };
-StorageService.optionalTasks.execute(runnable);
+batchlogTasks.execute(runnable);
 }
 
 public static RowMutation getBatchlogMutationFor(Collection<RowMutation> 
mutations, UUID uuid)



[1/2] git commit: BM.forceBatchlogReplay() should be executed in batchlogTasks

2013-10-21 Thread aleksey
Updated Branches:
  refs/heads/cassandra-2.0 20f1b816b - 0c5f05bd9


BM.forceBatchlogReplay() should be executed in batchlogTasks

follow-up to CASSANDRA-6079


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7e057f50
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7e057f50
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7e057f50

Branch: refs/heads/cassandra-2.0
Commit: 7e057f504613e68082a76642983d353f3f0400fb
Parents: 189a607
Author: Aleksey Yeschenko alek...@apache.org
Authored: Tue Oct 22 00:26:05 2013 +0800
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Tue Oct 22 00:26:05 2013 +0800

--
 src/java/org/apache/cassandra/db/BatchlogManager.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7e057f50/src/java/org/apache/cassandra/db/BatchlogManager.java
--
diff --git a/src/java/org/apache/cassandra/db/BatchlogManager.java 
b/src/java/org/apache/cassandra/db/BatchlogManager.java
index 2488458..5fd55a3 100644
--- a/src/java/org/apache/cassandra/db/BatchlogManager.java
+++ b/src/java/org/apache/cassandra/db/BatchlogManager.java
@@ -118,7 +118,7 @@ public class BatchlogManager implements BatchlogManagerMBean
 replayAllFailedBatches();
 }
 };
-StorageService.optionalTasks.execute(runnable);
+batchlogTasks.execute(runnable);
 }
 
 public static RowMutation getBatchlogMutationFor(Collection<RowMutation> 
mutations, UUID uuid)



[2/2] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-10-21 Thread aleksey
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0c5f05bd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0c5f05bd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0c5f05bd

Branch: refs/heads/cassandra-2.0
Commit: 0c5f05bd966edfdfdb78baa31dd57e1a90488227
Parents: 20f1b81 7e057f5
Author: Aleksey Yeschenko alek...@apache.org
Authored: Tue Oct 22 00:27:56 2013 +0800
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Tue Oct 22 00:27:56 2013 +0800

--
 src/java/org/apache/cassandra/db/BatchlogManager.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0c5f05bd/src/java/org/apache/cassandra/db/BatchlogManager.java
--



[3/3] git commit: Merge branch 'cassandra-2.0' into trunk

2013-10-21 Thread aleksey
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/35cbc198
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/35cbc198
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/35cbc198

Branch: refs/heads/trunk
Commit: 35cbc19800b1bf2c14394402b5d670745bb666e2
Parents: 9957ed6 0c5f05b
Author: Aleksey Yeschenko alek...@apache.org
Authored: Tue Oct 22 00:28:16 2013 +0800
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Tue Oct 22 00:28:16 2013 +0800

--
 src/java/org/apache/cassandra/db/BatchlogManager.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--




[2/3] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2013-10-21 Thread aleksey
Merge branch 'cassandra-1.2' into cassandra-2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0c5f05bd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0c5f05bd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0c5f05bd

Branch: refs/heads/trunk
Commit: 0c5f05bd966edfdfdb78baa31dd57e1a90488227
Parents: 20f1b81 7e057f5
Author: Aleksey Yeschenko alek...@apache.org
Authored: Tue Oct 22 00:27:56 2013 +0800
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Tue Oct 22 00:27:56 2013 +0800

--
 src/java/org/apache/cassandra/db/BatchlogManager.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0c5f05bd/src/java/org/apache/cassandra/db/BatchlogManager.java
--



[1/3] git commit: BM.forceBatchlogReplay() should be executed in batchlogTasks

2013-10-21 Thread aleksey
Updated Branches:
  refs/heads/trunk 9957ed667 - 35cbc1980


BM.forceBatchlogReplay() should be executed in batchlogTasks

follow-up to CASSANDRA-6079


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7e057f50
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7e057f50
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7e057f50

Branch: refs/heads/trunk
Commit: 7e057f504613e68082a76642983d353f3f0400fb
Parents: 189a607
Author: Aleksey Yeschenko alek...@apache.org
Authored: Tue Oct 22 00:26:05 2013 +0800
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Tue Oct 22 00:26:05 2013 +0800

--
 src/java/org/apache/cassandra/db/BatchlogManager.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7e057f50/src/java/org/apache/cassandra/db/BatchlogManager.java
--
diff --git a/src/java/org/apache/cassandra/db/BatchlogManager.java 
b/src/java/org/apache/cassandra/db/BatchlogManager.java
index 2488458..5fd55a3 100644
--- a/src/java/org/apache/cassandra/db/BatchlogManager.java
+++ b/src/java/org/apache/cassandra/db/BatchlogManager.java
@@ -118,7 +118,7 @@ public class BatchlogManager implements BatchlogManagerMBean
 replayAllFailedBatches();
 }
 };
-StorageService.optionalTasks.execute(runnable);
+batchlogTasks.execute(runnable);
 }
 
 public static RowMutation getBatchlogMutationFor(Collection<RowMutation> 
mutations, UUID uuid)



[jira] [Commented] (CASSANDRA-6221) CQL3 statements not executed properly inside BATCH operation.

2013-10-21 Thread Andy Klages (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13800803#comment-13800803
 ] 

Andy Klages commented on CASSANDRA-6221:


Glad to hear this is a known issue and fixed in 2.0.2. Thanks!

 CQL3 statements not executed properly inside BATCH operation.
 -

 Key: CASSANDRA-6221
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6221
 Project: Cassandra
  Issue Type: Bug
 Environment: Running on Linux RHEL 6.2 with just a single node 
 cluster. A very basic configuration. You need a CQL3 table with a composite 
 key. Bug occurs while attempting to do both a DELETE and INSERT INTO 
 operation inside a BATCH block.
Reporter: Andy Klages
 Fix For: 2.0.2


 I'm encountering a problem introduced in 2.0.0 where I have 2 CQL3 statements 
 within a BEGIN BATCH - APPLY BATCH operator and the first one seems to be 
 ignored. Both statements operate on the same table and the first one does a 
 DELETE of an existing record, followed by an INSERT of a new record. The 
 table must have a composite key. NOTE that this worked fine in 1.2.10. 
 Here is a simple example of CQL3 statements to reproduce this:
 -- Following table has a composite key.
 CREATE TABLE users (
 user_id bigint,
 id  bigint,
 namevarchar,
 PRIMARY KEY(user_id, id)
 );
 -- Insert record with key 100,1
 INSERT INTO users (user_id,id,name) VALUES (100,1,'jdoe');
 -- Following returns 1 row as expected.
 SELECT * FROM users;
 -- Attempt to delete 100,1 while inserting 100,2 as BATCH
 BEGIN BATCH
 DELETE FROM users WHERE user_id=100 AND id=1;
 INSERT INTO users (user_id,id,name) VALUES (100,2,'jdoe');
 APPLY BATCH;
 -- Following should return only 100,2 but 100,1 is also returned
 SELECT * FROM users;
 The output from the first select which is correct:
  user_id | id | name
 ---------+----+------
      100 |  1 | jdoe
 The output from the second select which is incorrect is:
  user_id | id | name
 ---------+----+------
      100 |  1 | jdoe
      100 |  2 | jdoe
 Only the second row (100,2) should've been returned. This was the behavior 
 in 1.2.10.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6179) Load calculated in nodetool info is strange/inaccurate in JBOD setups

2013-10-21 Thread Mikhail Stepura (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13800840#comment-13800840
 ] 

Mikhail Stepura commented on CASSANDRA-6179:


Thanks [~jre]

Could you also post the output of {{du -sh}} for *each* directory specified in 
_data_file_directories_, _commitlog_directory_ and _saved_caches_directory_ 
settings from your _cassandra.yaml_
For a default setup it would be 
{code}
du -sh /var/lib/cassandra/commitlog
du -sh /var/lib/cassandra/data
du -sh /var/lib/cassandra/saved_caches
{code}

 Load calculated in nodetool info is strange/inaccurate in JBOD setups
 -

 Key: CASSANDRA-6179
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6179
 Project: Cassandra
  Issue Type: Bug
 Environment: JBOD layouts
Reporter: J. Ryan Earl
Assignee: Mikhail Stepura

 We recently noticed that the storage capacity on Cassandra nodes using JBOD 
 layout was returning what looks close to the average data volume size, 
 instead of the sum of all JBOD data volumes.  It's not exactly an average and 
  I haven't had time to dig into the code to see what it's really doing; it's 
  like some sort of sample of the JBOD volume sizes.
 So looking at the JBOD volumes we see:
 {noformat}
 [jre@cassandra2 ~]$ df -h
 FilesystemSize  Used Avail Use% Mounted on
 [...]
 /dev/sdc1 1.1T  9.4G  1.1T   1% /data/1
 /dev/sdd1 1.1T  9.2G  1.1T   1% /data/2
 /dev/sde1 1.1T   11G  1.1T   1% /data/3
 /dev/sdf1 1.1T   11G  1.1T   1% /data/4
 /dev/sdg1 1.1T  9.2G  1.1T   1% /data/5
 /dev/sdh1 1.1T   11G  1.1T   1% /data/6
 /dev/sdi1 1.1T  9.8G  1.1T   1% /data/7
 {noformat}
 Looking at 'nodetool info' we see:
 {noformat}
 [jre@cassandra2 ~]$ nodetool info
 Token: (invoke with -T/--tokens to see all 256 tokens)
 ID   : 631f0be3-ce52-4eb9-b48b-069fbfdf0a97
 Gossip active: true
 Thrift active: true
 Native Transport active: true
 Load : 10.57 GB
 {noformat}
 So there are 7 disks in a JBOD configuration in this example, the sum should 
 be closer to 70G for each node.  Maybe we're misinterpreting what this value 
 should be, but things like OpsCenter appear to use this load value as the 
 size of data on the local node, which I expect to be the sum of JBOD volumes.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6134) More efficient BatchlogManager

2013-10-21 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13800851#comment-13800851
 ] 

Aleksey Yeschenko commented on CASSANDRA-6134:
--

bq. It runs every 1/2 of the write timeout and replays all batches written within 
0.9 * write timeout from now. This way we ensure that batched updates will be 
replayed by the moment the client times out from the coordinator.

This isn't what we want to ensure though. The current timeout (write timeout * 
2) is there to account for maximum batchlog write timeout + actual data write 
timeout. Avoiding extra mutations is IMO more important than having less delay 
in the failure scenario (and slow writes would happen more often than outright 
failures). And you definitely don't want to hammer an already slow node with 
twice the load. So -1 on this particular change.

bq. It submits all mutations from a single batch in parallel (like StorageProxy 
does). The old implementation played them one-by-one, so the client can see 
half-applied batches in the CF for a long time (depending on the size of the batch).

This is fine. But yeah, we could/should parallelize batchlog replay more (can 
be done w/out modifying the schema).

bq. It fixes a subtle racing bug with incorrect hint TTL calculation

Care to elaborate? I think there was a tricky open bug related to this, but 
can't find the JIRA #.

To avoid random reads, we could read the mutation blob in 
replayAllFailedBatches() and pass it to replayBatch() (I thought we were 
already doing that). To make replay more async, as you suggest, we could read 
several batches and initiate their replay async instead of replaying them one by 
one (but w/ RateLimiter in place).

To avoid iterating over the already replayed batches (tombstones), we could 
purge the replayed batches directly from the memtable (although I'd need to see 
a benchmark proving that it's worth doing it first).

Other stuff, in no particular order:

- making the table COMPACT STORAGE limits our flexibility wrt future batchlog 
schema changes, so -1 on that
- we should probably rate-limit batchlog replay w/ RateLimiter (a sketch follows 
this list)
- +1 on moving forceBatchlogReplay() to batchlogTasks as well (this was an 
omission from CASSANDRA-6079, ninja-committed it in 
7e057f504613e68082a76642983d353f3f0400fb)
- +1 on running cleanup() on startup
- -1 on using writeTime for TTL calculation from the UUID (the time can 
actually jump, but uuids will always increase, and it's not what we want for 
TTL calc)
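
For the rate-limiting point, a minimal sketch with Guava's RateLimiter (the 
rate and the send() helper are hypothetical; the real cap would come from 
configuration):

{code}
import com.google.common.util.concurrent.RateLimiter;

public class ReplayRateLimitSketch
{
    // Hypothetical cap of 1000 replayed mutations per second.
    private final RateLimiter replayLimiter = RateLimiter.create(1000);

    void replay(Iterable<byte[]> serializedMutations)
    {
        for (byte[] mutation : serializedMutations)
        {
            replayLimiter.acquire();   // blocks until a permit is available
            send(mutation);            // hypothetical: hand the mutation off for delivery
        }
    }

    private void send(byte[] mutation)
    {
        // delivery to the target replicas would happen here
    }
}
{code}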

In general:

I like some of the suggested changes, and would like to see the ones that are 
possible w/out the schema change implemented first. I'm strongly against 
altering the batchlog schema, unless the benchmarks can clearly prove that the 
version with the partitioned schema is significantly better than what we could 
come up with without altering the schema, and many of them can be. We should 
avoid any potentially brittle/breaking extra migration code on the already 
slow-ish startup.

Could you give it a try, [~m0nstermind]? Namely,
- replaying several mutations read in replayAllFailedBatches() simultaneously 
instead of 1-by-1
- avoiding the random read by passing the read blob to replayBatch()
- measure the effect of purging the replayed batch from the memtable (when not 
read from the disk)

If this gives us most of the win of a version with the altered schema, then 
I'll be satisfied with just those changes. If benchmarks say that we have a lot 
extra relative and absolute efficiency to gain from the schema change, then I 
won't argue with the data.

 More efficient BatchlogManager
 --

 Key: CASSANDRA-6134
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6134
 Project: Cassandra
  Issue Type: Improvement
Reporter: Oleg Anastasyev
Priority: Minor
 Attachments: BatchlogManager.txt


 As we discussed earlier in CASSANDRA-6079 this is the new BatchManager.
 It stores batch records in 
 {code}
 CREATE TABLE batchlog (
   id_partition int,
   id timeuuid,
   data blob,
   PRIMARY KEY (id_partition, id)
 ) WITH COMPACT STORAGE AND
   CLUSTERING ORDER BY (id DESC)
 {code}
 where id_partition is minute-since-epoch of id uuid. 
 So when it scans for batches to replay, it scans within a single partition for 
  a slice of ids from the last processed date until now minus the write timeout.
 So no full batchlog CF scan and no flood of random reads happen in the normal 
 cycle. 
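 As a rough sketch of that partitioning rule (plain version-1 UUID arithmetic, 
 not the attached patch's actual code), id_partition would be derived from the 
 batch's timeuuid roughly like this:
 {code}
 import java.util.UUID;

 public final class BatchlogPartitioner
 {
     // 100ns intervals between the UUID epoch (1582-10-15) and the Unix epoch.
     private static final long UUID_EPOCH_OFFSET = 0x01b21dd213814000L;

     /** Minute-since-epoch partition for a version-1 (time-based) UUID. */
     public static int idPartition(UUID id)
     {
         long millisSinceEpoch = (id.timestamp() - UUID_EPOCH_OFFSET) / 10000;
         return (int) (millisSinceEpoch / 60000);
     }
 }
 {code}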
 Other improvements:
 1. It runs every 1/2 of the write timeout and replays all batches written 
 within 0.9 * write timeout from now. This way we ensure that batched updates 
 will be replayed by the moment the client times out from the coordinator.
 2. It submits all mutations from a single batch in parallel (like StorageProxy 
 does). The old implementation played them one-by-one, so the client can see 
 half-applied batches in the CF for a long time (depending on the size of the batch).

[jira] [Commented] (CASSANDRA-6048) Add the ability to use multiple indexes in a single query

2013-10-21 Thread Alex Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13800888#comment-13800888
 ] 

Alex Liu commented on CASSANDRA-6048:
-

If we allow hash collisions, we can use bitmaps to join all the indexes. Any 
false positives can be filtered out while filtering the base CF. First find the 
most selective index (the primary one) and use it as a base. Once all bitmaps 
are ANDed, we can scan the primary index to keep the good results, then fetch 
the base CF and filter out any remaining bad results.
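
A compact sketch of that flow in plain Java (the bitmaps, the key list from the 
most selective index, and the base-CF re-check are all hypothetical inputs; the 
point is the AND plus the final filtering step that discards hash collisions):

{code}
import java.util.ArrayList;
import java.util.BitSet;
import java.util.List;

public class BitmapIndexJoinSketch
{
    interface RowPredicate { boolean matches(String rowKey); }

    /**
     * primaryIndexKeys: candidate row keys from the most selective (primary) index.
     * otherIndexBitmaps: one BitSet per additional index; bit i set means that index
     * (possibly via a colliding hash) also matched candidate i.
     */
    static List<String> query(List<String> primaryIndexKeys,
                              List<BitSet> otherIndexBitmaps,
                              RowPredicate baseCfCheck)
    {
        BitSet joined = new BitSet(primaryIndexKeys.size());
        joined.set(0, primaryIndexKeys.size());          // start with every primary candidate
        for (BitSet bitmap : otherIndexBitmaps)
            joined.and(bitmap);                          // AND in each remaining index

        List<String> results = new ArrayList<String>();
        for (int i = joined.nextSetBit(0); i >= 0; i = joined.nextSetBit(i + 1))
        {
            String key = primaryIndexKeys.get(i);
            if (baseCfCheck.matches(key))                // drop false positives from collisions
                results.add(key);
        }
        return results;
    }
}
{code}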

 Add the ability to use multiple indexes in a single query
 -

 Key: CASSANDRA-6048
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6048
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Alex Liu
Assignee: Alex Liu
 Fix For: 2.1

 Attachments: 6048-1.2-branch.txt, 6048-trunk.txt


 Existing data filtering uses the following algorithm
 {code}
1. find best selective predicate based on the smallest mean columns count
2. fetch rows for the best selective predicate, then filter the 
 data based on the other predicates left.
 {code}
 So potentially we could improve the performance by
 {code}
1.  joining multiple predicates then do the data filtering for other 
 predicates.
2.  fine tune the best predicate selection algorithm
 {code}
 For a multiple-predicate join, it could improve performance if one predicate 
 has many entries and another predicate has very few entries. It means a few 
 index CF reads, joining the row keys, fetching the rows, then filtering on the 
 other predicates.
 Another approach is to have an index on multiple columns.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-3578) Multithreaded commitlog

2013-10-21 Thread Vijay (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-3578:
-

Attachment: Multi-Threded-CL.png
Current-CL.png

Hi Jonathan, you can ignore those; I was experimenting with a few other things 
(like UUID.random was locking and the numbers were all bad, etc.) and hence 
added those metrics (didn't mean to confuse). But if you are interested in the 
GC profile please see the attached.

 Multithreaded commitlog
 ---

 Key: CASSANDRA-3578
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3578
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jonathan Ellis
Assignee: Vijay
Priority: Minor
  Labels: performance
 Attachments: 0001-CASSANDRA-3578.patch, ComitlogStress.java, 
 Current-CL.png, Multi-Threded-CL.png, parallel_commit_log_2.patch


 Brian Aker pointed out a while ago that allowing multiple threads to modify 
 the commitlog simultaneously (reserving space for each with a CAS first, the 
 way we do in the SlabAllocator.Region.allocate) can improve performance, 
 since you're not bottlenecking on a single thread to do all the copying and 
 CRC computation.
 Now that we use mmap'd CommitLog segments (CASSANDRA-3411) this becomes 
 doable.
 (moved from CASSANDRA-622, which was getting a bit muddled.)
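 The reservation idea reads roughly like the following sketch (modelled on the 
 SlabAllocator-style CAS loop mentioned above, not on the attached patches): 
 each writer claims a disjoint slice of the segment with a compare-and-swap and 
 then copies and checksums its mutation into that slice without holding a 
 global lock.
 {code}
 import java.util.concurrent.atomic.AtomicInteger;

 public class CommitLogSegmentSketch
 {
     private final byte[] buffer;                       // stand-in for the mmap'd segment
     private final AtomicInteger allocated = new AtomicInteger(0);

     public CommitLogSegmentSketch(int capacity)
     {
         this.buffer = new byte[capacity];
     }

     /** Reserve 'size' bytes with a CAS loop; returns the start offset, or -1 if the segment is full. */
     public int allocate(int size)
     {
         while (true)
         {
             int current = allocated.get();
             int next = current + size;
             if (next > buffer.length)
                 return -1;                             // caller would roll over to a new segment
             if (allocated.compareAndSet(current, next))
                 return current;                        // this thread now owns [current, next)
         }
     }

     /** Each writer copies its serialized mutation into its own slice -- no shared lock needed. */
     public void append(byte[] serializedMutation, int offset)
     {
         System.arraycopy(serializedMutation, 0, buffer, offset, serializedMutation.length);
         // CRC computation over the slice would also happen here.
     }
 }
 {code}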



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6196) Add compaction, compression to cqlsh tab completion for CREATE TABLE

2013-10-21 Thread Mikhail Stepura (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Stepura updated CASSANDRA-6196:
---

Attachment: (was: cassandra-2.0-6196.patch)

 Add compaction, compression to cqlsh tab completion for CREATE TABLE
 

 Key: CASSANDRA-6196
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6196
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Jonathan Ellis
Assignee: Mikhail Stepura
Priority: Minor
 Fix For: 1.2.12, 2.0.2






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6179) Load calculated in nodetool info is strange/inaccurate in JBOD setups

2013-10-21 Thread J. Ryan Earl (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13800933#comment-13800933
 ] 

J. Ryan Earl commented on CASSANDRA-6179:
-

Follows:
{noformat}
[jre@cassandra5 ~]$ du -sh /var/lib/cassandra/saved_caches /data/? /commit
1020K   /var/lib/cassandra/saved_caches
12G /data/1
12G /data/2
12G /data/3
12G /data/4
12G /data/5
13G /data/6
12G /data/7
1.1G/commit
{noformat}

These are all backed by separate physical disks.

 Load calculated in nodetool info is strange/inaccurate in JBOD setups
 -

 Key: CASSANDRA-6179
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6179
 Project: Cassandra
  Issue Type: Bug
 Environment: JBOD layouts
Reporter: J. Ryan Earl
Assignee: Mikhail Stepura

 We recently noticed that the storage capacity on Cassandra nodes using JBOD 
 layout was returning what looks close to the average data volume size, 
 instead of the sum of all JBOD data volumes.  It's not exactly an average and 
  I haven't had time to dig into the code to see what it's really doing; it's 
  like some sort of sample of the JBOD volume sizes.
 So looking at the JBOD volumes we see:
 {noformat}
 [jre@cassandra2 ~]$ df -h
 FilesystemSize  Used Avail Use% Mounted on
 [...]
 /dev/sdc1 1.1T  9.4G  1.1T   1% /data/1
 /dev/sdd1 1.1T  9.2G  1.1T   1% /data/2
 /dev/sde1 1.1T   11G  1.1T   1% /data/3
 /dev/sdf1 1.1T   11G  1.1T   1% /data/4
 /dev/sdg1 1.1T  9.2G  1.1T   1% /data/5
 /dev/sdh1 1.1T   11G  1.1T   1% /data/6
 /dev/sdi1 1.1T  9.8G  1.1T   1% /data/7
 {noformat}
 Looking at 'nodetool info' we see:
 {noformat}
 [jre@cassandra2 ~]$ nodetool info
 Token: (invoke with -T/--tokens to see all 256 tokens)
 ID   : 631f0be3-ce52-4eb9-b48b-069fbfdf0a97
 Gossip active: true
 Thrift active: true
 Native Transport active: true
 Load : 10.57 GB
 {noformat}
 So there are 7 disks in a JBOD configuration in this example, the sum should 
 be closer to 70G for each node.  Maybe we're misinterpreting what this value 
 should be, but things like OpsCenter appear to use this load value as the 
 size of data on the local node, which I expect to be the sum of JBOD volumes.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6048) Add the ability to use multiple indexes in a single query

2013-10-21 Thread Alex Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13800948#comment-13800948
 ] 

Alex Liu commented on CASSANDRA-6048:
-

http://remis-thoughts.blogspot.com/2012/03/perfect-hashes-in-java-given-set-of-m.html
 provides a perfect hash for a set. We can select the primary index and 
secondary index combination set to create a perfect hash. 

Usage is here.
http://code.google.com/p/perfect-hashes/source/browse/trunk/src/test/java/com/googlecode/perfecthashes/PerfectHashesTest.java?r=33

 Add the ability to use multiple indexes in a single query
 -

 Key: CASSANDRA-6048
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6048
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Alex Liu
Assignee: Alex Liu
 Fix For: 2.1

 Attachments: 6048-1.2-branch.txt, 6048-trunk.txt


 Existing data filtering uses the following algorithm
 {code}
1. find best selective predicate based on the smallest mean columns count
2. fetch rows for the best selective predicate, then filter the 
 data based on the other predicates left.
 {code}
 So potentially we could improve the performance by
 {code}
1.  joining multiple predicates then do the data filtering for other 
 predicates.
2.  fine tune the best predicate selection algorithm
 {code}
 For a multiple-predicate join, it could improve performance if one predicate 
 has many entries and another predicate has very few entries. It means a few 
 index CF reads, joining the row keys, fetching the rows, then filtering on the 
 other predicates.
 Another approach is to have an index on multiple columns.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (CASSANDRA-6224) CQL3 Column family / tables disappear, get unconfigured columnfamily errors

2013-10-21 Thread Constance Eustace (JIRA)
Constance Eustace created CASSANDRA-6224:


 Summary: CQL3 Column family / tables disappear, get unconfigured 
columnfamily errors
 Key: CASSANDRA-6224
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6224
 Project: Cassandra
  Issue Type: Bug
  Components: API, Core
 Environment: Cassandra 2.0.1 Amazon AWS Ubuntu Single-node
Reporter: Constance Eustace


We're seeing CQL3 tables seemingly arbitrarily disappear. The need to repair for 
prod meant we reconstructed the affected schema before nodetool repairs or 
similar attempts could be made.

It seems to take a few days for the issue to appear. Volumes are not tremendously high yet...

Caused by: java.sql.SQLSyntaxErrorException: 
InvalidRequestException(why:unconfigured columnfamily entity_hierarchydef)
at 
org.apache.cassandra.cql.jdbc.CassandraPreparedStatement.<init>(CassandraPreparedStatement.java:103)
 ~[cassandra-jdbc-1.2.5.jar:na]
at 
org.apache.cassandra.cql.jdbc.CassandraConnection.prepareStatement(CassandraConnection.java:388)
 ~[cassandra-jdbc-1.2.5.jar:na]
at 
org.apache.cassandra.cql.jdbc.CassandraConnection.prepareStatement(CassandraConnection.java:372)
 ~[cassandra-jdbc-1.2.5.jar:na]
at 
org.apache.cassandra.cql.jdbc.CassandraConnection.prepareStatement(CassandraConnection.java:50)
 ~[cassandra-jdbc-1.2.5.jar:na]
at 
org.apache.commons.dbcp.DelegatingConnection.prepareStatement(DelegatingConnection.java:281)
 ~[commons-dbcp-1.3.jar:1.3]
at 
org.apache.commons.dbcp.PoolingDataSource$PoolGuardConnectionWrapper.prepareStatement(PoolingDataSource.java:313)
 ~[commons-dbcp-1.3.jar:1.3]
at 
com.bestbuy.contentsystems.cupcake.storage.cassandra.cqlentity.CQL.tool.CassPSC.createPreparedStatement(CassPSC.java:61)
 ~[ingest-storage-QA-SNAPSHOT.jar:QA-SNAPSHOT]
at 
org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:583) 
~[spring-jdbc-3.2.4.RELEASE.jar:3.2.4.RELEASE]
... 148 common frames omitted
Caused by: org.apache.cassandra.thrift.InvalidRequestException: null
at 
org.apache.cassandra.thrift.Cassandra$prepare_cql3_query_result.read(Cassandra.java:39567)
 ~[cassandra-thrift-1.2.8.jar:1.2.8]
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78) 
~[libthrift-0.7.0.jar:0.7.0]
at 
org.apache.cassandra.thrift.Cassandra$Client.recv_prepare_cql3_query(Cassandra.java:1625)
 ~[cassandra-thrift-1.2.8.jar:1.2.8]
at 
org.apache.cassandra.thrift.Cassandra$Client.prepare_cql3_query(Cassandra.java:1611)
 ~[cassandra-thrift-1.2.8.jar:1.2.8]
at 
org.apache.cassandra.cql.jdbc.CassandraConnection.prepare(CassandraConnection.java:517)
 ~[cassandra-jdbc-1.2.5.jar:na]
at 
org.apache.cassandra.cql.jdbc.CassandraConnection.prepare(CassandraConnection.java:532)
 ~[cassandra-jdbc-1.2.5.jar:na]
at 
org.apache.cassandra.cql.jdbc.CassandraPreparedStatement.<init>(CassandraPreparedStatement.java:96)
 ~[cassandra-jdbc-1.2.5.jar:na]




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6206) Thrift socket listen backlog

2013-10-21 Thread Vijay (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13800956#comment-13800956
 ] 

Vijay commented on CASSANDRA-6206:
--

Nenad, can you change the default backlog config to be the Java default?

 Thrift socket listen backlog
 

 Key: CASSANDRA-6206
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6206
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Debian Linux, Java 7
Reporter: Nenad Merdanovic
 Fix For: 2.0.2

 Attachments: cassandra.patch


 Although Thrift is a deprecated method of accessing Cassandra, the default 
 backlog is way too low on that socket. It shouldn't be a problem to implement 
 it and I am including a POC patch for this (sorry, really low on time with 
 limited Java knowledge so just to give an idea).
 This is an old report which was never addressed and the bug remains till this 
 day, except in my case I have a much larger scale application with 3rd party 
 software which I cannot modify to include connection pooling:
 https://issues.apache.org/jira/browse/CASSANDRA-1663
 There is also a pending change in the Thrift itself which Cassandra should be 
 able to use for parts using TServerSocket (SSL):
 https://issues.apache.org/jira/browse/THRIFT-1868



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6196) Add compaction, compression to cqlsh tab completion for CREATE TABLE

2013-10-21 Thread Mikhail Stepura (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Stepura updated CASSANDRA-6196:
---

Attachment: cassandra-2.0-6196.patch

New patch for 2.0 to address ALTER TABLE as well.

Fixing it for 1.2 will be trickier for me as 1.2 doesn't contain 
https://github.com/apache/cassandra/commit/7f6ac19efb9a9d51a3ebdb58197c8fe35476034f

 Add compaction, compression to cqlsh tab completion for CREATE TABLE
 

 Key: CASSANDRA-6196
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6196
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Jonathan Ellis
Assignee: Mikhail Stepura
Priority: Minor
 Fix For: 1.2.12, 2.0.2

 Attachments: cassandra-2.0-6196.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6206) Thrift socket listen backlog

2013-10-21 Thread Nenad Merdanovic (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13800968#comment-13800968
 ] 

Nenad Merdanovic commented on CASSANDRA-6206:
-

Hello Vijay,

Not sure I understand: you want me to have a default value in the code that is 
the same as the Java default? So basically, in the patch I should change:
+   public Integer rpc_listen_backlog = 1024;
to
+   public Integer rpc_listen_backlog = 50;

If that is it, no problem, I'll attach another patch.


 Thrift socket listen backlog
 

 Key: CASSANDRA-6206
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6206
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Debian Linux, Java 7
Reporter: Nenad Merdanovic
 Fix For: 2.0.2

 Attachments: cassandra.patch


 Although Thrift is a deprecated method of accessing Cassandra, the default 
 backlog is way too low on that socket. It shouldn't be a problem to implement 
 it and I am including a POC patch for this (sorry, really low on time with 
 limited Java knowledge so just to give an idea).
 This is an old report which was never addressed and the bug remains till this 
 day, except in my case I have a much larger scale application with 3rd party 
 software which I cannot modify to include connection pooling:
 https://issues.apache.org/jira/browse/CASSANDRA-1663
 There is also a pending change in the Thrift itself which Cassandra should be 
 able to use for parts using TServerSocket (SSL):
 https://issues.apache.org/jira/browse/THRIFT-1868



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[2/2] git commit: Merge branch 'cassandra-2.0' into trunk

2013-10-21 Thread aleksey
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/20693928
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/20693928
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/20693928

Branch: refs/heads/trunk
Commit: 20693928ae848676e1f859ecf4a473c9818707c9
Parents: 35cbc19 c4c8bca
Author: Aleksey Yeschenko alek...@apache.org
Authored: Tue Oct 22 03:15:22 2013 +0800
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Tue Oct 22 03:15:22 2013 +0800

--
 CHANGES.txt|  1 +
 pylib/cqlshlib/cql3handling.py | 68 +
 2 files changed, 2 insertions(+), 67 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/20693928/CHANGES.txt
--



git commit: cqlsh: fix CREATE/ALTER WITH completion

2013-10-21 Thread aleksey
Updated Branches:
  refs/heads/cassandra-2.0 0c5f05bd9 - c4c8bca8d


cqlsh: fix CREATE/ALTER WITH completion

patch by Mikhail Stepura; reviewed by Aleksey Yeschenko for
CASSANDRA-6196


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c4c8bca8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c4c8bca8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c4c8bca8

Branch: refs/heads/cassandra-2.0
Commit: c4c8bca8d55142d64762fc8f2557eed80c1ceaf8
Parents: 0c5f05b
Author: Aleksey Yeschenko alek...@apache.org
Authored: Tue Oct 22 03:14:24 2013 +0800
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Tue Oct 22 03:14:24 2013 +0800

--
 CHANGES.txt|  1 +
 pylib/cqlshlib/cql3handling.py | 68 +
 2 files changed, 2 insertions(+), 67 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c4c8bca8/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 895ffcc..40d752c 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -26,6 +26,7 @@
  * CQL3: support pre-epoch longs for TimestampType (CASSANDRA-6212)
  * Add reloadtriggers command to nodetool (CASSANDRA-4949)
  * cqlsh: ignore empty 'value alias' in DESCRIBE (CASSANDRA-6139)
+ * cqlsh: fix CREATE/ALTER WITH completion (CASSANDRA-6196)
 Merged from 1.2:
  * (Hadoop) Require CFRR batchSize to be at least 2 (CASSANDRA-6114)
  * Add a warning for small LCS sstable size (CASSANDRA-6191)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c4c8bca8/pylib/cqlshlib/cql3handling.py
--
diff --git a/pylib/cqlshlib/cql3handling.py b/pylib/cqlshlib/cql3handling.py
index 8ec3573..bc349a7 100644
--- a/pylib/cqlshlib/cql3handling.py
+++ b/pylib/cqlshlib/cql3handling.py
@@ -830,72 +830,6 @@ def create_ks_wat_completer(ctxt, cass):
 return ['KEYSPACE']
 return ['KEYSPACE', 'SCHEMA']
 
-@completer_for('property', 'propname')
-def keyspace_properties_option_name_completer(ctxt, cass):
-optsseen = ctxt.get_binding('propname', ())
-if 'replication' not in optsseen:
-return ['replication']
-return ["durable_writes"]
-
-@completer_for('propertyValue', 'propsimpleval')
-def property_value_completer(ctxt, cass):
-optname = ctxt.get_binding('propname')[-1]
-if optname == 'durable_writes':
-return ['true', 'false']
-if optname == 'replication':
-return ["{'class': '"]
-return ()
-
-@completer_for('propertyValue', 'propmapkey')
-def keyspace_properties_map_key_completer(ctxt, cass):
-optname = ctxt.get_binding('propname')[-1]
-if optname != 'replication':
-return ()
-keysseen = map(dequote_value, ctxt.get_binding('propmapkey', ()))
-valsseen = map(dequote_value, ctxt.get_binding('propmapval', ()))
-for k, v in zip(keysseen, valsseen):
-if k == 'class':
-repclass = v
-break
-else:
-return ['class']
-if repclass in CqlRuleSet.replication_factor_strategies:
-opts = set(('replication_factor',))
-elif repclass == 'NetworkTopologyStrategy':
-return [Hint('dc_name')]
-return map(escape_value, opts.difference(keysseen))
-
-@completer_for('propertyValue', 'propmapval')
-def keyspace_properties_map_value_completer(ctxt, cass):
-optname = ctxt.get_binding('propname')[-1]
-if optname != 'replication':
-return ()
-currentkey = dequote_value(ctxt.get_binding('propmapkey')[-1])
-if currentkey == 'class':
-return map(escape_value, CqlRuleSet.replication_strategies)
-return [Hint('value')]
-
-@completer_for('propertyValue', 'ender')
-def keyspace_properties_map_ender_completer(ctxt, cass):
-optname = ctxt.get_binding('propname')[-1]
-if optname != 'replication':
-return [',']
-keysseen = map(dequote_value, ctxt.get_binding('propmapkey', ()))
-valsseen = map(dequote_value, ctxt.get_binding('propmapval', ()))
-for k, v in zip(keysseen, valsseen):
-if k == 'class':
-repclass = v
-break
-else:
-return [',']
-if repclass in CqlRuleSet.replication_factor_strategies:
-opts = set(('replication_factor',))
-if 'replication_factor' not in keysseen:
-return [',']
-if repclass == 'NetworkTopologyStrategy' and len(keysseen) == 1:
-return [',']
-return ['}']
-
 syntax_rules += r'''
 createColumnFamilyStatement ::= CREATE wat=( COLUMNFAMILY | TABLE ) 
(IF NOT EXISTS)?
 ( ks=nonSystemKeyspaceName dot=. )? 
cf=cfOrKsName
@@ -1021,7 +955,7 @@ def drop_index_completer(ctxt, cass):
 return map(maybe_escape_name, cass.get_index_names())
 
 

[1/2] git commit: cqlsh: fix CREATE/ALTER WITH completion

2013-10-21 Thread aleksey
Updated Branches:
  refs/heads/trunk 35cbc1980 -> 20693928a


cqlsh: fix CREATE/ALTER WITH completion

patch by Mikhail Stepura; reviewed by Aleksey Yeschenko for
CASSANDRA-6196


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c4c8bca8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c4c8bca8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c4c8bca8

Branch: refs/heads/trunk
Commit: c4c8bca8d55142d64762fc8f2557eed80c1ceaf8
Parents: 0c5f05b
Author: Aleksey Yeschenko alek...@apache.org
Authored: Tue Oct 22 03:14:24 2013 +0800
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Tue Oct 22 03:14:24 2013 +0800

--
 CHANGES.txt|  1 +
 pylib/cqlshlib/cql3handling.py | 68 +
 2 files changed, 2 insertions(+), 67 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c4c8bca8/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 895ffcc..40d752c 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -26,6 +26,7 @@
  * CQL3: support pre-epoch longs for TimestampType (CASSANDRA-6212)
  * Add reloadtriggers command to nodetool (CASSANDRA-4949)
  * cqlsh: ignore empty 'value alias' in DESCRIBE (CASSANDRA-6139)
+ * cqlsh: fix CREATE/ALTER WITH completion (CASSANDRA-6196)
 Merged from 1.2:
  * (Hadoop) Require CFRR batchSize to be at least 2 (CASSANDRA-6114)
  * Add a warning for small LCS sstable size (CASSANDRA-6191)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c4c8bca8/pylib/cqlshlib/cql3handling.py
--
diff --git a/pylib/cqlshlib/cql3handling.py b/pylib/cqlshlib/cql3handling.py
index 8ec3573..bc349a7 100644
--- a/pylib/cqlshlib/cql3handling.py
+++ b/pylib/cqlshlib/cql3handling.py
@@ -830,72 +830,6 @@ def create_ks_wat_completer(ctxt, cass):
 return ['KEYSPACE']
 return ['KEYSPACE', 'SCHEMA']
 
-@completer_for('property', 'propname')
-def keyspace_properties_option_name_completer(ctxt, cass):
-optsseen = ctxt.get_binding('propname', ())
-if 'replication' not in optsseen:
-return ['replication']
-return ["durable_writes"]
-
-@completer_for('propertyValue', 'propsimpleval')
-def property_value_completer(ctxt, cass):
-optname = ctxt.get_binding('propname')[-1]
-if optname == 'durable_writes':
-return ['true', 'false']
-if optname == 'replication':
-return ["{'class': '"]
-return ()
-
-@completer_for('propertyValue', 'propmapkey')
-def keyspace_properties_map_key_completer(ctxt, cass):
-optname = ctxt.get_binding('propname')[-1]
-if optname != 'replication':
-return ()
-keysseen = map(dequote_value, ctxt.get_binding('propmapkey', ()))
-valsseen = map(dequote_value, ctxt.get_binding('propmapval', ()))
-for k, v in zip(keysseen, valsseen):
-if k == 'class':
-repclass = v
-break
-else:
-return ['class']
-if repclass in CqlRuleSet.replication_factor_strategies:
-opts = set(('replication_factor',))
-elif repclass == 'NetworkTopologyStrategy':
-return [Hint('dc_name')]
-return map(escape_value, opts.difference(keysseen))
-
-@completer_for('propertyValue', 'propmapval')
-def keyspace_properties_map_value_completer(ctxt, cass):
-optname = ctxt.get_binding('propname')[-1]
-if optname != 'replication':
-return ()
-currentkey = dequote_value(ctxt.get_binding('propmapkey')[-1])
-if currentkey == 'class':
-return map(escape_value, CqlRuleSet.replication_strategies)
-return [Hint('value')]
-
-@completer_for('propertyValue', 'ender')
-def keyspace_properties_map_ender_completer(ctxt, cass):
-optname = ctxt.get_binding('propname')[-1]
-if optname != 'replication':
-return [',']
-keysseen = map(dequote_value, ctxt.get_binding('propmapkey', ()))
-valsseen = map(dequote_value, ctxt.get_binding('propmapval', ()))
-for k, v in zip(keysseen, valsseen):
-if k == 'class':
-repclass = v
-break
-else:
-return [',']
-if repclass in CqlRuleSet.replication_factor_strategies:
-opts = set(('replication_factor',))
-if 'replication_factor' not in keysseen:
-return [',']
-if repclass == 'NetworkTopologyStrategy' and len(keysseen) == 1:
-return [',']
-return ['}']
-
 syntax_rules += r'''
 createColumnFamilyStatement ::= CREATE wat=( COLUMNFAMILY | TABLE ) 
(IF NOT EXISTS)?
 ( ks=nonSystemKeyspaceName dot=. )? 
cf=cfOrKsName
@@ -1021,7 +955,7 @@ def drop_index_completer(ctxt, cass):
 return map(maybe_escape_name, cass.get_index_names())
 
 syntax_rules += 

[jira] [Resolved] (CASSANDRA-6196) Add compaction, compression to cqlsh tab completion for CREATE TABLE

2013-10-21 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko resolved CASSANDRA-6196.
--

   Resolution: Fixed
Fix Version/s: (was: 1.2.12)
Reproduced In: 2.0.1, 1.2.10  (was: 1.2.10, 2.0.1)

Committed as is, thanks.

Re: 1.2 - if you break completion for CQL2/CQL3-beta while fixing it for 
CQL3-proper, I wouldn't object. So if that's the only thing that's stopping 
you, feel free to break it.

 Add compaction, compression to cqlsh tab completion for CREATE TABLE
 

 Key: CASSANDRA-6196
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6196
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Jonathan Ellis
Assignee: Mikhail Stepura
Priority: Minor
 Fix For: 2.0.2

 Attachments: cassandra-2.0-6196.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6220) Unable to select multiple entries using In clause on clustering part of compound key

2013-10-21 Thread Constance Eustace (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13800984#comment-13800984
 ] 

Constance Eustace commented on CASSANDRA-6220:
--

Does nodetool compact keyspace tablename fix the corruption? It did for me, 
but I don't think it stops the ongoing corruption...

 Unable to select multiple entries using In clause on clustering part of 
 compound key
 

 Key: CASSANDRA-6220
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6220
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Ashot Golovenko
 Attachments: inserts.zip


 I have the following table:
 CREATE TABLE rating (
 id bigint,
 mid int,
 hid int,
 r double,
 PRIMARY KEY ((id, mid), hid));
 And I get really really strange result sets on the following queries:
 cqlsh:bm> SELECT hid, r FROM rating WHERE id = 755349113 and mid = 201310 
 and hid = 201329320;
  hid       | r
 -----------+--------
  201329320 | 45.476
 (1 rows)
 cqlsh:bm> SELECT hid, r FROM rating WHERE id = 755349113 and mid = 201310 
 and hid = 201329220;
  hid       | r
 -----------+-------
  201329220 | 53.62
 (1 rows)
 cqlsh:bm> SELECT hid, r FROM rating WHERE id = 755349113 and mid = 201310 
 and hid in (201329320, 201329220);
  hid       | r
 -----------+--------
  201329320 | 45.476
 (1 rows)  -- WRONG - should be two records
 As you can see, although both records exist I'm not able to fetch all of them 
 using the IN clause. For now I have to cycle through my requests, which number 
 about 30, and I find that highly inefficient given that I'm querying physically 
 the same row. 
 What's more, it doesn't happen all the time! For different id values 
 sometimes I get the correct dataset.
 Ideally I'd like the following select to work:
 SELECT hid, r FROM rating WHERE id = 755349113 and mid in ? and hid in ?;
 Which doesn't work either.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6137) CQL3 SELECT IN CLAUSE inconsistent

2013-10-21 Thread Constance Eustace (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13800983#comment-13800983
 ] 

Constance Eustace commented on CASSANDRA-6137:
--

nodetool scrub did nothing to fix it.

UPDATE: nodetool compact seems to repair the damage! It probably doesn't 
prevent the recurrence, though, and doesn't allow us to trust the queries...

 CQL3 SELECT IN CLAUSE inconsistent
 --

 Key: CASSANDRA-6137
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6137
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Ubuntu AWS Cassandra 2.0.1 SINGLE NODE on EBS RAID 
 storage
 OSX Cassandra 1.2.8 on SSD storage
Reporter: Constance Eustace
Priority: Minor

 I am elevating this to Critical after doing some tracing and reproducing in 
 several environments. No one from the cassandra team has commented on this 
 bug, and I view unreliable/corrupted data as a pretty big deal. We are 
 considering pulling cassandra and using something else.
 We have the data state reproduced locally in an environment that we can set 
 TRACE logging, attach a debugger, etc. Some guidance as to where to look 
 would be greatly appreciated.
 --
 We are encountering inconsistent results from CQL3 queries with column keys 
 using IN clause in WHERE. This has been reproduced in cqlsh and the jdbc 
 driver.
 Rowkey is e_entid
 Column key is p_prop
 This returns roughly 21 rows for 21 column keys that match p_prop.
 cqlsh> SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB';
 These three queries each return one row for the requested single column key 
 in the IN clause:
 SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
 ('urn:bby:pcm:job:ingest:content:complete:count');
 SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
 ('urn:bby:pcm:job:ingest:content:all:count');
 SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
 ('urn:bby:pcm:job:ingest:content:fail:count');
 This query returns ONLY ONE ROW (one column key), not three as I would expect 
 from the three-column-key IN clause:
 cqlsh> SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
 ('urn:bby:pcm:job:ingest:content:complete:count','urn:bby:pcm:job:ingest:content:all:count','urn:bby:pcm:job:ingest:content:fail:count');
 This query does return two rows however for the requested two column keys:
 cqlsh> SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in (  
   
 'urn:bby:pcm:job:ingest:content:all:count','urn:bby:pcm:job:ingest:content:fail:count');
 cqlsh> describe table internal_submission.entity_job;
 CREATE TABLE entity_job (
   e_entid text,
   p_prop text,
   describes text,
   dndcondition text,
   e_entlinks text,
   e_entname text,
   e_enttype text,
   ingeststatus text,
   ingeststatusdetail text,
   p_flags text,
   p_propid text,
   p_proplinks text,
   p_storage text,
   p_subents text,
   p_val text,
   p_vallang text,
   p_vallinks text,
   p_valtype text,
   p_valunit text,
   p_vars text,
   partnerid text,
   referenceid text,
   size int,
   sourceip text,
   submitdate bigint,
   submitevent text,
   userid text,
   version text,
   PRIMARY KEY (e_entid, p_prop)
 ) WITH
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   index_interval=128 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   default_time_to_live=0 AND
   speculative_retry='NONE' AND
   memtable_flush_period_in_ms=0 AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   

[jira] [Commented] (CASSANDRA-3578) Multithreaded commitlog

2013-10-21 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13800990#comment-13800990
 ] 

Jonathan Ellis commented on CASSANDRA-3578:
---

Hmm, interesting that the new MT code doesn't have pauses every 10s or so.  Is 
that where the current code has to block when it runs out of segments?

 Multithreaded commitlog
 ---

 Key: CASSANDRA-3578
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3578
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jonathan Ellis
Assignee: Vijay
Priority: Minor
  Labels: performance
 Attachments: 0001-CASSANDRA-3578.patch, ComitlogStress.java, 
 Current-CL.png, Multi-Threded-CL.png, parallel_commit_log_2.patch


 Brian Aker pointed out a while ago that allowing multiple threads to modify 
 the commitlog simultaneously (reserving space for each with a CAS first, the 
 way we do in the SlabAllocator.Region.allocate) can improve performance, 
 since you're not bottlenecking on a single thread to do all the copying and 
 CRC computation.
 Now that we use mmap'd CommitLog segments (CASSANDRA-3411) this becomes 
 doable.
 (moved from CASSANDRA-622, which was getting a bit muddled.)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6206) Thrift socket listen backlog

2013-10-21 Thread Vijay (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13800991#comment-13800991
 ] 

Vijay commented on CASSANDRA-6206:
--

Hi Nenad, Yep, thanks!

 Thrift socket listen backlog
 

 Key: CASSANDRA-6206
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6206
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Debian Linux, Java 7
Reporter: Nenad Merdanovic
 Fix For: 2.0.2

 Attachments: cassandra.patch


 Although Thrift is a deprecated method of accessing Cassandra, the default 
 backlog is way too low on that socket. It shouldn't be a problem to implement 
 it, and I am including a POC patch for this (sorry, I'm really low on time and 
 have limited Java knowledge, so it's just to give an idea).
 This is an old report which was never addressed, and the bug remains to this 
 day, except in my case I have a much larger-scale application with 3rd-party 
 software which I cannot modify to include connection pooling:
 https://issues.apache.org/jira/browse/CASSANDRA-1663
 There is also a pending change in the Thrift itself which Cassandra should be 
 able to use for parts using TServerSocket (SSL):
 https://issues.apache.org/jira/browse/THRIFT-1868



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-6206) Thrift socket listen backlog

2013-10-21 Thread Nenad Merdanovic (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nenad Merdanovic updated CASSANDRA-6206:


Attachment: cassandra-v2.patch

Hello Vijay, attached the cassandra-v2.patch file as requested.

Thanks,
Nenad

 Thrift socket listen backlog
 

 Key: CASSANDRA-6206
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6206
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Debian Linux, Java 7
Reporter: Nenad Merdanovic
 Fix For: 2.0.2

 Attachments: cassandra.patch, cassandra-v2.patch


 Although Thrift is a deprecated method of accessing Cassandra, the default 
 backlog is way too low on that socket. It shouldn't be a problem to implement 
 it, and I am including a POC patch for this (sorry, I'm really low on time and 
 have limited Java knowledge, so it's just to give an idea).
 This is an old report which was never addressed, and the bug remains to this 
 day, except in my case I have a much larger-scale application with 3rd-party 
 software which I cannot modify to include connection pooling:
 https://issues.apache.org/jira/browse/CASSANDRA-1663
 There is also a pending change in the Thrift itself which Cassandra should be 
 able to use for parts using TServerSocket (SSL):
 https://issues.apache.org/jira/browse/THRIFT-1868



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Issue Comment Deleted] (CASSANDRA-5591) Windows failure renaming LCS json.

2013-10-21 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5591:
--

Comment: was deleted

(was: This adds the files mentioned in the Jira and tweaks 
cassandra-shuffle.bat.)

 Windows failure renaming LCS json.
 --

 Key: CASSANDRA-5591
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5591
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.4
 Environment: Windows
Reporter: Jeremiah Jordan

 Had someone report that on Windows, under load, the LCS json file sometimes 
 fails to be renamed.
 {noformat}
 ERROR [CompactionExecutor:1] 2013-05-23 14:43:55,848 CassandraDaemon.java 
 (line 174) Exception in thread Thread[CompactionExecutor:1,1,main]
  java.lang.RuntimeException: Failed to rename C:\development\tools\DataStax 
 Community\data\data\zzz\zzz\zzz.json to C:\development\tools\DataStax 
 Community\data\data\zzz\zzz\zzz-old.json
   at 
 org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:133)
   at 
 org.apache.cassandra.db.compaction.LeveledManifest.serialize(LeveledManifest.java:617)
   at 
 org.apache.cassandra.db.compaction.LeveledManifest.promote(LeveledManifest.java:229)
   at 
 org.apache.cassandra.db.compaction.LeveledCompactionStrategy.handleNotification(LeveledCompactionStrategy.java:155)
   at 
 org.apache.cassandra.db.DataTracker.notifySSTablesChanged(DataTracker.java:410)
   at 
 org.apache.cassandra.db.DataTracker.replaceCompactedSSTables(DataTracker.java:223)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.replaceCompactedSSTables(ColumnFamilyStore.java:991)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:230)
   at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:58)
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
   at 
 org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:188)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
   at java.lang.Thread.run(Thread.java:662)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-5351) Avoid repairing already-repaired data by default

2013-10-21 Thread Lyuben Todorov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13801007#comment-13801007
 ] 

Lyuben Todorov commented on CASSANDRA-5351:
---

For STCS
The best choice seems to be compacting into repaired sstables and unrepaired 
sstables based on the SSTableMetadata#repairedAt field.

For LCS
We have two main choices. One is to repair data going from L0 -> L1. That should 
be reasonable, since repairing at each promotion means we don't need to carry out 
a lot of repairs during compaction, but it does mean we need to trigger repairs 
automatically (I don't like the sound of extra work before/during compaction); 
this is why I prefer [~krummas]'s idea of keeping separate levels of LCS. The next 
step is to work out how the data at UnrepairedLevelN can jump to RepairedLevelN 
without having to go through each level.
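
As a rough illustration of the STCS idea (names are stand-ins, not actual 
Cassandra classes), assuming each sstable exposes its SSTableMetadata#repairedAt 
value with 0 meaning "unrepaired":

{code}
import java.util.ArrayList;
import java.util.List;

// Illustrative only: split compaction candidates into repaired and unrepaired
// buckets so a strategy never mixes the two in a single compaction task.
class SSTableStub
{
    final String name;
    final long repairedAt; // 0 = unrepaired, otherwise the repair timestamp

    SSTableStub(String name, long repairedAt)
    {
        this.name = name;
        this.repairedAt = repairedAt;
    }

    boolean isRepaired()
    {
        return repairedAt > 0;
    }
}

public class RepairBuckets
{
    public static List<List<SSTableStub>> split(List<SSTableStub> candidates)
    {
        List<SSTableStub> repaired = new ArrayList<>();
        List<SSTableStub> unrepaired = new ArrayList<>();
        for (SSTableStub s : candidates)
            (s.isRepaired() ? repaired : unrepaired).add(s);

        List<List<SSTableStub>> buckets = new ArrayList<>();
        buckets.add(repaired);   // repaired sstables compact only with repaired ones
        buckets.add(unrepaired); // unrepaired only with unrepaired
        return buckets;
    }
}
{code}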

 Avoid repairing already-repaired data by default
 

 Key: CASSANDRA-5351
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5351
 Project: Cassandra
  Issue Type: Task
  Components: Core
Reporter: Jonathan Ellis
Assignee: Lyuben Todorov
  Labels: repair
 Fix For: 2.1


 Repair has always built its merkle tree from all the data in a columnfamily, 
 which is guaranteed to work but is inefficient.
 We can improve this by remembering which sstables have already been 
 successfully repaired, and only repairing sstables new since the last repair. 
  (This automatically makes CASSANDRA-3362 much less of a problem too.)
 The tricky part is, compaction will (if not taught otherwise) mix repaired 
 data together with non-repaired.  So we should segregate unrepaired sstables 
 from the repaired ones.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6137) CQL3 SELECT IN CLAUSE inconsistent

2013-10-21 Thread Constance Eustace (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13801014#comment-13801014
 ] 

Constance Eustace commented on CASSANDRA-6137:
--

Once the initial data run is done, a nodetool compact fixes the initial 
corruption; my second run is now about 4x further along than it has gotten before, 
and no corruption/bad WHERE IN results have occurred. 

This could be some initial confusion in the internal data structures on newly 
created keyspaces that lack data, or something that resolves once the compaction 
thread catches up. 

 CQL3 SELECT IN CLAUSE inconsistent
 --

 Key: CASSANDRA-6137
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6137
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Ubuntu AWS Cassandra 2.0.1 SINGLE NODE on EBS RAID 
 storage
 OSX Cassandra 1.2.8 on SSD storage
Reporter: Constance Eustace
Priority: Minor

 I am elevating this to Critical after doing some tracing and reproducing in 
 several environments. No one from the cassandra team has commented on this 
 bug, and I view unreliable/corrupted data as a pretty big deal. We are 
 considering pulling cassandra and using something else.
 We have the data state reproduced locally in an environment that we can set 
 TRACE logging, attach a debugger, etc. Some guidance as to where to look 
 would be greatly appreciated.
 --
 We are encountering inconsistent results from CQL3 queries with column keys 
 using IN clause in WHERE. This has been reproduced in cqlsh and the jdbc 
 driver.
 Rowkey is e_entid
 Column key is p_prop
 This returns roughly 21 rows for 21 column keys that match p_prop.
 cqlsh> SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB';
 These three queries each return one row for the requested single column key 
 in the IN clause:
 SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
 ('urn:bby:pcm:job:ingest:content:complete:count');
 SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
 ('urn:bby:pcm:job:ingest:content:all:count');
 SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
 ('urn:bby:pcm:job:ingest:content:fail:count');
 This query returns ONLY ONE ROW (one column key), not three as I would expect 
 from the three-column-key IN clause:
 cqlsh> SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in 
 ('urn:bby:pcm:job:ingest:content:complete:count','urn:bby:pcm:job:ingest:content:all:count','urn:bby:pcm:job:ingest:content:fail:count');
 This query does return two rows however for the requested two column keys:
 cqlsh> SELECT 
 e_entid,e_entname,e_enttype,p_prop,p_flags,p_propid,e_entlinks,p_proplinks,p_subents,p_val,p_vallinks,p_vars
  FROM internal_submission.Entity_Job WHERE e_entid = 
 '845b38f1-2b91-11e3-854d-126aad0075d4-CJOB'  AND p_prop in (  
   
 'urn:bby:pcm:job:ingest:content:all:count','urn:bby:pcm:job:ingest:content:fail:count');
 cqlsh> describe table internal_submission.entity_job;
 CREATE TABLE entity_job (
   e_entid text,
   p_prop text,
   describes text,
   dndcondition text,
   e_entlinks text,
   e_entname text,
   e_enttype text,
   ingeststatus text,
   ingeststatusdetail text,
   p_flags text,
   p_propid text,
   p_proplinks text,
   p_storage text,
   p_subents text,
   p_val text,
   p_vallang text,
   p_vallinks text,
   p_valtype text,
   p_valunit text,
   p_vars text,
   partnerid text,
   referenceid text,
   size int,
   sourceip text,
   submitdate bigint,
   submitevent text,
   userid text,
   version text,
   PRIMARY KEY (e_entid, p_prop)
 ) WITH
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   index_interval=128 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   

[jira] [Updated] (CASSANDRA-6196) Add compaction, compression to cqlsh tab completion for CREATE TABLE

2013-10-21 Thread Mikhail Stepura (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Stepura updated CASSANDRA-6196:
---

Attachment: cassandra-1.2-6196.patch

Patch for 1.2. 
* Removed static KS-only completers
* Fix ALTER TABLE

 Add compaction, compression to cqlsh tab completion for CREATE TABLE
 

 Key: CASSANDRA-6196
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6196
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Jonathan Ellis
Assignee: Mikhail Stepura
Priority: Minor
 Fix For: 2.0.2

 Attachments: cassandra-1.2-6196.patch, cassandra-2.0-6196.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Comment Edited] (CASSANDRA-6220) Unable to select multiple entries using In clause on clustering part of compound key

2013-10-21 Thread Constance Eustace (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13800984#comment-13800984
 ] 

Constance Eustace edited comment on CASSANDRA-6220 at 10/21/13 8:15 PM:


Does nodetool compact keyspace tablename fix the corruption? It did for me, 
but I don't think it stops the ongoing corruption...


EDIT: my reproduction seems to indicate nodetool compact MAY fix ongoing 
updates after the nodetool compact was executed... I was unable to generate 
bad queries after another 1.5 million row inserts and 30,000 updates to 
existing data. 


was (Author: cowardlydragon):
Does nodetool compact keyspace tablename fix the corruption? It did for me, 
but I don't think it stops the ongoing corruption...

 Unable to select multiple entries using In clause on clustering part of 
 compound key
 

 Key: CASSANDRA-6220
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6220
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Ashot Golovenko
 Attachments: inserts.zip


 I have the following table:
 CREATE TABLE rating (
 id bigint,
 mid int,
 hid int,
 r double,
 PRIMARY KEY ((id, mid), hid));
 And I get really really strange result sets on the following queries:
 cqlsh:bm> SELECT hid, r FROM rating WHERE id = 755349113 and mid = 201310 
 and hid = 201329320;
  hid       | r
 -----------+--------
  201329320 | 45.476
 (1 rows)
 cqlsh:bm> SELECT hid, r FROM rating WHERE id = 755349113 and mid = 201310 
 and hid = 201329220;
  hid       | r
 -----------+-------
  201329220 | 53.62
 (1 rows)
 cqlsh:bm> SELECT hid, r FROM rating WHERE id = 755349113 and mid = 201310 
 and hid in (201329320, 201329220);
  hid       | r
 -----------+--------
  201329320 | 45.476
 (1 rows)  -- WRONG - should be two records
 As you can see, although both records exist I'm not able to fetch all of them 
 using the IN clause. For now I have to cycle through my requests, which number 
 about 30, and I find that highly inefficient given that I'm querying physically 
 the same row. 
 What's more, it doesn't happen all the time! For different id values 
 sometimes I get the correct dataset.
 Ideally I'd like the following select to work:
 SELECT hid, r FROM rating WHERE id = 755349113 and mid in ? and hid in ?;
 Which doesn't work either.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-6151) CqlPagingRecorderReader Used when Partition Key Is Explicitly Stated

2013-10-21 Thread Alex Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13801039#comment-13801039
 ] 

Alex Liu commented on CASSANDRA-6151:
-

[~jbellis] To fix this one, we can modify CqlPagingRecordReader to check the where 
clauses. If it has EQUAL clauses for all the partitioning keys, we use the query

{code}
  SELECT * FROM data 
  WHERE occurday='A Great Day' 
   AND seqnumber=1 LIMIT 1000 ALLOW FILTERING
{code}

instead of 
{code}
  SELECT * FROM data 
  WHERE token(occurday,seqnumber) > ? 
   AND token(occurday,seqnumber) <= ? 
   AND occurday='A Great Day' 
   AND seqnumber=1 LIMIT 1000 ALLOW FILTERING
{code}

Any comments before I proceed?
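
A rough sketch of that check (illustrative names only, not the actual reader 
code): if the user-supplied where_clause restricts every partition key column 
with an equality, build the single-partition query and skip the token() range 
predicates; otherwise keep the paging form.

{code}
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

public class WhereClauseCheck
{
    // True if the where-clause contains "<column> =" for every partition key column.
    static boolean restrictsWholePartitionKey(String whereClause, List<String> partitionKeys)
    {
        for (String key : partitionKeys)
        {
            Pattern eq = Pattern.compile("\\b" + Pattern.quote(key) + "\\s*=",
                                         Pattern.CASE_INSENSITIVE);
            if (!eq.matcher(whereClause).find())
                return false;
        }
        return true;
    }

    static String buildQuery(String table, String whereClause, List<String> partitionKeys, int pageSize)
    {
        if (restrictsWholePartitionKey(whereClause, partitionKeys))
            // Single partition: no token() paging predicates needed.
            return "SELECT * FROM " + table + " WHERE " + whereClause
                   + " LIMIT " + pageSize + " ALLOW FILTERING";

        String keys = String.join(",", partitionKeys);
        return "SELECT * FROM " + table
               + " WHERE token(" + keys + ") > ? AND token(" + keys + ") <= ?"
               + " AND " + whereClause + " LIMIT " + pageSize + " ALLOW FILTERING";
    }

    public static void main(String[] args)
    {
        System.out.println(buildQuery("data", "occurday='A Great Day' AND seqnumber=1",
                                      Arrays.asList("occurday", "seqnumber"), 1000));
    }
}
{code}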

 CqlPagingRecorderReader Used when Partition Key Is Explicitly Stated
 

 Key: CASSANDRA-6151
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6151
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: Russell Alexander Spitzer
Assignee: Alex Liu
Priority: Minor

 From 
 http://stackoverflow.com/questions/19189649/composite-key-in-cassandra-with-pig/19211546#19211546
 The user was attempting to load a single partition using a where clause in a 
 pig load statement. 
 CQL Table
 {code}
 CREATE table data (
   occurday  text,
   seqnumber int,
   occurtimems bigint,
   unique bigint,
   fields map<text, text>,
   primary key ((occurday, seqnumber), occurtimems, unique)
 )
 {code}
 Pig Load statement Query
 {code}
 data = LOAD 
 'cql://ks/data?where_clause=seqnumber%3D10%20AND%20occurday%3D%272013-10-01%27'
  USING CqlStorage();
 {code}
 This results in an exception when processed by the CqlPagingRecordReader, 
 which attempts to page this query even though it contains at most one 
 partition key. This leads to an invalid CQL statement. 
 CqlPagingRecordReader Query
 {code}
 SELECT * FROM data WHERE token(occurday,seqnumber) > ? AND
 token(occurday,seqnumber) <= ? AND occurday='A Great Day' 
 AND seqnumber=1 LIMIT 1000 ALLOW FILTERING
 {code}
 Exception
 {code}
  InvalidRequestException(why:occurday cannot be restricted by more than one 
 relation if it includes an Equal)
 {code}
 I'm not sure it is worth the special case, but a modification to not use the 
 paging record reader when the entire partition key is specified would solve 
 this issue. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-5911) Commit logs are not removed after nodetool flush or nodetool drain

2013-10-21 Thread Robert Coli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13801055#comment-13801055
 ] 

Robert Coli commented on CASSANDRA-5911:


What are the since/affects versions for this issue?

 Commit logs are not removed after nodetool flush or nodetool drain
 --

 Key: CASSANDRA-5911
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5911
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: J.B. Langston
Assignee: Vijay
Priority: Minor
 Fix For: 2.0.2

 Attachments: 0001-CASSANDRA-5911.patch, 
 6528_140171_knwmuqxe9bjv5re_system.log


 Commit logs are not removed after nodetool flush or nodetool drain. This can 
 lead to unnecessary commit log replay during startup.  I've reproduced this 
 on Apache Cassandra 1.2.8.  Usually this isn't much of an issue but on a 
 Solr-indexed column family in DSE, each replayed mutation has to be reindexed 
 which can make startup take a long time (on the order of 20-30 min).
 Reproduction follows:
 {code}
 jblangston:bin jblangston$ ./cassandra > /dev/null
 jblangston:bin jblangston$ ../tools/bin/cassandra-stress -n 2000 > 
 /dev/null
 jblangston:bin jblangston$ du -h ../commitlog
 576M  ../commitlog
 jblangston:bin jblangston$ nodetool flush
 jblangston:bin jblangston$ du -h ../commitlog
 576M  ../commitlog
 jblangston:bin jblangston$ nodetool drain
 jblangston:bin jblangston$ du -h ../commitlog
 576M  ../commitlog
 jblangston:bin jblangston$ pkill java
 jblangston:bin jblangston$ du -h ../commitlog
 576M  ../commitlog
 jblangston:bin jblangston$ ./cassandra -f | grep Replaying
  INFO 10:03:42,915 Replaying 
 /opt/apache-cassandra-1.2.8/commitlog/CommitLog-2-1377096566761.log, 
 /opt/apache-cassandra-1.2.8/commitlog/CommitLog-2-1377096566762.log, 
 /opt/apache-cassandra-1.2.8/commitlog/CommitLog-2-1377096566763.log, 
 /opt/apache-cassandra-1.2.8/commitlog/CommitLog-2-1377096566764.log, 
 /opt/apache-cassandra-1.2.8/commitlog/CommitLog-2-1377096566765.log, 
 /opt/apache-cassandra-1.2.8/commitlog/CommitLog-2-1377096566766.log, 
 /opt/apache-cassandra-1.2.8/commitlog/CommitLog-2-1377096566767.log, 
 /opt/apache-cassandra-1.2.8/commitlog/CommitLog-2-1377096566768.log, 
 /opt/apache-cassandra-1.2.8/commitlog/CommitLog-2-1377096566769.log, 
 /opt/apache-cassandra-1.2.8/commitlog/CommitLog-2-1377096566770.log, 
 /opt/apache-cassandra-1.2.8/commitlog/CommitLog-2-1377096566771.log, 
 /opt/apache-cassandra-1.2.8/commitlog/CommitLog-2-1377096566772.log, 
 /opt/apache-cassandra-1.2.8/commitlog/CommitLog-2-1377096566773.log, 
 /opt/apache-cassandra-1.2.8/commitlog/CommitLog-2-1377096566774.log, 
 /opt/apache-cassandra-1.2.8/commitlog/CommitLog-2-1377096566775.log, 
 /opt/apache-cassandra-1.2.8/commitlog/CommitLog-2-1377096566776.log, 
 /opt/apache-cassandra-1.2.8/commitlog/CommitLog-2-1377096566777.log, 
 /opt/apache-cassandra-1.2.8/commitlog/CommitLog-2-1377096566778.log
  INFO 10:03:42,922 Replaying 
 /opt/apache-cassandra-1.2.8/commitlog/CommitLog-2-1377096566761.log
  INFO 10:03:43,907 Replaying 
 /opt/apache-cassandra-1.2.8/commitlog/CommitLog-2-1377096566762.log
  INFO 10:03:43,907 Replaying 
 /opt/apache-cassandra-1.2.8/commitlog/CommitLog-2-1377096566763.log
  INFO 10:03:43,907 Replaying 
 /opt/apache-cassandra-1.2.8/commitlog/CommitLog-2-1377096566764.log
  INFO 10:03:43,908 Replaying 
 /opt/apache-cassandra-1.2.8/commitlog/CommitLog-2-1377096566765.log
  INFO 10:03:43,908 Replaying 
 /opt/apache-cassandra-1.2.8/commitlog/CommitLog-2-1377096566766.log
  INFO 10:03:43,908 Replaying 
 /opt/apache-cassandra-1.2.8/commitlog/CommitLog-2-1377096566767.log
  INFO 10:03:43,909 Replaying 
 /opt/apache-cassandra-1.2.8/commitlog/CommitLog-2-1377096566768.log
  INFO 10:03:43,909 Replaying 
 /opt/apache-cassandra-1.2.8/commitlog/CommitLog-2-1377096566769.log
  INFO 10:03:43,909 Replaying 
 /opt/apache-cassandra-1.2.8/commitlog/CommitLog-2-1377096566770.log
  INFO 10:03:43,910 Replaying 
 /opt/apache-cassandra-1.2.8/commitlog/CommitLog-2-1377096566771.log
  INFO 10:03:43,910 Replaying 
 /opt/apache-cassandra-1.2.8/commitlog/CommitLog-2-1377096566772.log
  INFO 10:03:43,911 Replaying 
 /opt/apache-cassandra-1.2.8/commitlog/CommitLog-2-1377096566773.log
  INFO 10:03:43,911 Replaying 
 /opt/apache-cassandra-1.2.8/commitlog/CommitLog-2-1377096566774.log
  INFO 10:03:43,911 Replaying 
 /opt/apache-cassandra-1.2.8/commitlog/CommitLog-2-1377096566775.log
  INFO 10:03:43,912 Replaying 
 /opt/apache-cassandra-1.2.8/commitlog/CommitLog-2-1377096566776.log
  INFO 10:03:43,912 Replaying 
 /opt/apache-cassandra-1.2.8/commitlog/CommitLog-2-1377096566777.log
  INFO 10:03:43,912 Replaying 
 /opt/apache-cassandra-1.2.8/commitlog/CommitLog-2-1377096566778.log
 {code}



--
This message was 

[jira] [Commented] (CASSANDRA-6151) CqlPagingRecorderReader Used when Partition Key Is Explicitly Stated

2013-10-21 Thread Russell Alexander Spitzer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13801072#comment-13801072
 ] 

Russell Alexander Spitzer commented on CASSANDRA-6151:
--

I like that solution, but if we special case this shouldn't we also add a case 
for  'IN (tuple)' statements? 

Example:
{code}
Select key from table where key in ('keyvalue1','keyvalue2','keyvalue3')
{code}

 CqlPagingRecorderReader Used when Partition Key Is Explicitly Stated
 

 Key: CASSANDRA-6151
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6151
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: Russell Alexander Spitzer
Assignee: Alex Liu
Priority: Minor

 From 
 http://stackoverflow.com/questions/19189649/composite-key-in-cassandra-with-pig/19211546#19211546
 The user was attempting to load a single partition using a where clause in a 
 pig load statement. 
 CQL Table
 {code}
 CREATE table data (
   occurday  text,
   seqnumber int,
   occurtimems bigint,
   unique bigint,
   fields map<text, text>,
   primary key ((occurday, seqnumber), occurtimems, unique)
 )
 {code}
 Pig Load statement Query
 {code}
 data = LOAD 
 'cql://ks/data?where_clause=seqnumber%3D10%20AND%20occurday%3D%272013-10-01%27'
  USING CqlStorage();
 {code}
 This results in an exception when processed by the CqlPagingRecordReader, 
 which attempts to page this query even though it contains at most one 
 partition key. This leads to an invalid CQL statement. 
 CqlPagingRecordReader Query
 {code}
 SELECT * FROM data WHERE token(occurday,seqnumber) > ? AND
 token(occurday,seqnumber) <= ? AND occurday='A Great Day' 
 AND seqnumber=1 LIMIT 1000 ALLOW FILTERING
 {code}
 Exception
 {code}
  InvalidRequestException(why:occurday cannot be restricted by more than one 
 relation if it includes an Equal)
 {code}
 I'm not sure it is worth the special case, but a modification to not use the 
 paging record reader when the entire partition key is specified would solve 
 this issue. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (CASSANDRA-5988) Make hint TTL customizable

2013-10-21 Thread Vishy Kasar (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishy Kasar updated CASSANDRA-5988:
---

Attachment: 5988.txt

 Make hint TTL customizable
 --

 Key: CASSANDRA-5988
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5988
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Oleg Kibirev
  Labels: patch
 Attachments: 5988.txt


 Currently the time to live for stored hints is hardcoded to be gc_grace_seconds. 
 This causes problems for applications using backdated deletes as a form of 
 optimistic locking. Hints for updates made to the same data on which a delete 
 was attempted can persist for days, making it impossible to determine whether 
 the delete succeeded by doing a read(ALL) after a reasonable delay. We need a 
 way to explicitly configure the hint TTL, either through a schema parameter or 
 through a yaml file.
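
A minimal sketch of what the yaml-driven variant could look like; the option 
name max_hint_ttl_in_seconds is hypothetical, and the effective TTL is simply 
capped at gc_grace_seconds:

{code}
// Hypothetical sketch only; the option name and plumbing are illustrative.
public class HintTtlSketch
{
    // Would come from cassandra.yaml; null means "keep today's behaviour".
    public static Integer max_hint_ttl_in_seconds = null;

    public static int hintTtl(int gcGraceSeconds)
    {
        if (max_hint_ttl_in_seconds == null)
            return gcGraceSeconds;                      // current hardcoded behaviour
        return Math.min(max_hint_ttl_in_seconds, gcGraceSeconds);
    }

    public static void main(String[] args)
    {
        max_hint_ttl_in_seconds = 3600;                 // e.g. cap hints at one hour
        System.out.println(hintTtl(864000));            // prints 3600
    }
}
{code}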



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-5988) Make hint TTL customizable

2013-10-21 Thread Vishy Kasar (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13801114#comment-13801114
 ] 

Vishy Kasar commented on CASSANDRA-5988:


Attached the diff as a file 5988.txt

 Make hint TTL customizable
 --

 Key: CASSANDRA-5988
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5988
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Oleg Kibirev
  Labels: patch
 Attachments: 5988.txt


 Currently the time to live for stored hints is hardcoded to be gc_grace_seconds. 
 This causes problems for applications using backdated deletes as a form of 
 optimistic locking. Hints for updates made to the same data on which a delete 
 was attempted can persist for days, making it impossible to determine whether 
 the delete succeeded by doing a read(ALL) after a reasonable delay. We need a 
 way to explicitly configure the hint TTL, either through a schema parameter or 
 through a yaml file.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (CASSANDRA-3578) Multithreaded commitlog

2013-10-21 Thread Vijay (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13801163#comment-13801163
 ] 

Vijay commented on CASSANDRA-3578:
--

Yeah, we do a CAS instead of queue.take() in http://goo.gl/JbNWM5, but we do 
allocate new segments every second; not sure why the dip... I will do more 
profiling on it.
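
For reference, a toy sketch of the CAS reservation pattern being referred to 
(in the spirit of SlabAllocator.Region.allocate; class and method names are 
illustrative, not the actual commitlog code): writers claim disjoint ranges of 
a shared segment by CAS on an offset, so appends proceed in parallel without a 
queue.

{code}
import java.nio.ByteBuffer;
import java.util.concurrent.atomic.AtomicInteger;

public class CasSegment
{
    private final ByteBuffer buffer;
    private final AtomicInteger nextOffset = new AtomicInteger(0);

    public CasSegment(int capacity)
    {
        this.buffer = ByteBuffer.allocate(capacity);
    }

    // Reserve 'size' bytes; returns the start offset, or -1 if the segment is full.
    public int allocate(int size)
    {
        while (true)
        {
            int current = nextOffset.get();
            if (current + size > buffer.capacity())
                return -1; // caller switches to a fresh segment
            if (nextOffset.compareAndSet(current, current + size))
                return current; // this thread now owns [current, current + size)
        }
    }

    // Copy a serialized mutation into the reserved slice; each writer touches
    // only its own range, so this is safe to do concurrently.
    public void write(int offset, byte[] serializedMutation)
    {
        ByteBuffer slice = buffer.duplicate();
        slice.position(offset);
        slice.put(serializedMutation);
    }
}
{code}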

 Multithreaded commitlog
 ---

 Key: CASSANDRA-3578
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3578
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jonathan Ellis
Assignee: Vijay
Priority: Minor
  Labels: performance
 Attachments: 0001-CASSANDRA-3578.patch, ComitlogStress.java, 
 Current-CL.png, Multi-Threded-CL.png, parallel_commit_log_2.patch


 Brian Aker pointed out a while ago that allowing multiple threads to modify 
 the commitlog simultaneously (reserving space for each with a CAS first, the 
 way we do in the SlabAllocator.Region.allocate) can improve performance, 
 since you're not bottlenecking on a single thread to do all the copying and 
 CRC computation.
 Now that we use mmap'd CommitLog segments (CASSANDRA-3411) this becomes 
 doable.
 (moved from CASSANDRA-622, which was getting a bit muddled.)



--
This message was sent by Atlassian JIRA
(v6.1#6144)

