Git Push Summary

2014-03-31 Thread slebresne
Repository: cassandra
Updated Tags:  refs/tags/1.2.16-tentative [deleted] 05fcfa2be


Git Push Summary

2014-03-31 Thread slebresne
Repository: cassandra
Updated Tags:  refs/tags/cassandra-1.2.16 [created] ef3b9b7d1


[jira] [Commented] (CASSANDRA-6931) BatchLogManager shouldn't serialize mutations with version 1.2 in 2.1.

2014-03-31 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955038#comment-13955038
 ] 

Sylvain Lebresne commented on CASSANDRA-6931:
-

It's a bit annoying to have to bump the minimum requirement to upgrade to 2.1 
once again: can't we just special case the 2.1 patch to use the 1.2 version for 
any node before 2.1 (which would make the 2.0 patch unnecessary as a bonus,  
not that the 2.0 patch is extremely complex)?

 BatchLogManager shouldn't serialize mutations with version 1.2 in 2.1.
 --

 Key: CASSANDRA-6931
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6931
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Aleksey Yeschenko
 Fix For: 2.1 beta2


 BatchLogManager serializes and deserializes mutations using 
 MessagingService.VERSION_12, and this is hardcoded. Meaning that it does that 
 in 2.0, 2.1 and trunk, even though in 2.1 the 1.2 format is not serialized 
 properly since [this 
 commit|https://github.com/apache/cassandra/commit/cca65d7c1638dcd9370b080f08fd55faefc2733e]
  (meaning that I'm pretty sure batch logs on super columns are broken on 2.1 
 currently). And keeping the 1.2 format indefinitely just for the batchlog is 
 unrealistic.
 So the batchlog needs to do something like hints: record the messaging format 
 used to encode every mutation and use that for deserialization, but always 
 serialize with the current format.
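The scheme described above (record the encoding version alongside each mutation, deserialize with the recorded version, always serialize with the current one) can be sketched roughly as follows. This is a hypothetical standalone illustration, not Cassandra's actual BatchlogManager API; `CURRENT_VERSION` and the string payload stand in for a real messaging version and a real serialized mutation:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class VersionedBatchlogSketch
{
    static final int CURRENT_VERSION = 7; // assumed stand-in for the node's current messaging version

    // Serialize: always use the current format, but record which version was used.
    static byte[] writeRecord(String mutation) throws IOException
    {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.writeInt(CURRENT_VERSION);                 // version stored alongside the payload
        byte[] payload = mutation.getBytes(StandardCharsets.UTF_8);
        out.writeInt(payload.length);
        out.write(payload);
        return bytes.toByteArray();
    }

    // Deserialize: use whatever version the record says it was written with.
    static String readRecord(byte[] record) throws IOException
    {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(record));
        int version = in.readInt();                    // the recorded format version
        byte[] payload = new byte[in.readInt()];
        in.readFully(payload);
        // A real implementation would dispatch to a version-specific deserializer here.
        return version + ":" + new String(payload, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException
    {
        System.out.println(readRecord(writeRecord("mutation-1")));
    }
}
```

With this shape, records written before an upgrade still carry their old version and replay decodes them correctly, while every new record uses the current format.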



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6931) BatchLogManager shouldn't serialize mutations with version 1.2 in 2.1.

2014-03-31 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955039#comment-13955039
 ] 

Sylvain Lebresne commented on CASSANDRA-6931:
-

bq. can't we just special case the 2.1 patch to use the 1.2 version for any 
node before 2.1

Hum, never mind, that would force us to re-add the serialization code for super 
columns, which is probably more error-prone than not.
Well, +1 to the patches then.

 BatchLogManager shouldn't serialize mutations with version 1.2 in 2.1.
 --

 Key: CASSANDRA-6931
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6931
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Aleksey Yeschenko
 Fix For: 2.1 beta2


 BatchLogManager serializes and deserializes mutations using 
 MessagingService.VERSION_12, and this is hardcoded. Meaning that it does that 
 in 2.0, 2.1 and trunk, even though in 2.1 the 1.2 format is not serialized 
 properly since [this 
 commit|https://github.com/apache/cassandra/commit/cca65d7c1638dcd9370b080f08fd55faefc2733e]
  (meaning that I'm pretty sure batch logs on super columns are broken on 2.1 
 currently). And keeping the 1.2 format indefinitely just for the batchlog is 
 unrealistic.
 So the batchlog needs to do something like hints: record the messaging format 
 used to encode every mutation and use that for deserialization, but always 
 serialize with the current format.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6869) Broken 1.2 sstables support in 2.1

2014-03-31 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955040#comment-13955040
 ] 

Sylvain Lebresne commented on CASSANDRA-6869:
-

bq. this looks extremely dangerous to me when unpaged

Right, I forgot to page it; agreed that it should be done. But as you said, we 
need to keep this for CASSANDRA-6931, so let's forget about the last two 
patches then; as I said, they are not directly related to fixing this ticket 
anyway. What about the first patch then, are we good on that?

 Broken 1.2 sstables support in 2.1
 --

 Key: CASSANDRA-6869
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6869
 Project: Cassandra
  Issue Type: Bug
Reporter: Aleksey Yeschenko
Assignee: Sylvain Lebresne
 Fix For: 2.1 beta2

 Attachments: 0001-Drop-support-for-pre-2.0-sstables.txt, 
 0002-Check-for-remaining-1.2-hints-when-upgrading-to-2.1.txt, 
 0003-Remove-remnant-of-pre-2.0-messaging-format.txt


 CASSANDRA-5417 has broken 1.2 (ic) sstables support in at least two ways.
 1. CFMetaData.getOnDiskSerializer(), used by SSTableNamesIterator and 
 IndexedSliceReader, doesn't account for pre-2.0 supercolumn sstables
 2. More importantly, ACCNT.CompositeDeserializer doesn't handle ic tables' 
 cell counts, and maybeReadNext() might throw EOFException while expecting the 
 partition end marker. SimpleDeserializer is likely just as broken.
 I'd expect more issues like this, but less obvious, in the code, and thus am 
 torn between forcing people to run upgradesstables on 2.0 and actually fixing 
 these issues while hoping that we haven't missed anything.
 Implementing a supercolumn-aware AtomDeserializer is not hard, and fixing 
 CompositeDeserializer and SimpleDeserializer isn't very hard either, but I 
 really am worried about stuff that's less obvious. Plus, if we drop that 
 support, we can get rid of some legacy supercolumn code in 2.1. The minus, 
 obviously, is a bit of extra pain for 2.0-2.1 upgraders still having 1.2 
 sstables around.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-6955) Version of commons-cli is still at commons-cli-1.1.jar even in C* 2.0

2014-03-31 Thread Sucwinder Bassi (JIRA)
Sucwinder Bassi created CASSANDRA-6955:
--

 Summary: Version of commons-cli is still at commons-cli-1.1.jar 
even in C* 2.0
 Key: CASSANDRA-6955
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6955
 Project: Cassandra
  Issue Type: Bug
Reporter: Sucwinder Bassi
Priority: Minor


I found that in C* 2.0.5.22 the version of commons-cli is still 
commons-cli-1.1.jar. This should really be updated to a later version to keep 
it from falling too far behind and causing confusion.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6869) Broken 1.2 sstables support in 2.1

2014-03-31 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955059#comment-13955059
 ] 

Aleksey Yeschenko commented on CASSANDRA-6869:
--

bq. What about the first patch then, are we good on that?

Almost. SSTableImportTest fails to build (trivially fixed), and, once fixed, 
fails (slightly less trivially fixed). Feel free to fix on commit.

 Broken 1.2 sstables support in 2.1
 --

 Key: CASSANDRA-6869
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6869
 Project: Cassandra
  Issue Type: Bug
Reporter: Aleksey Yeschenko
Assignee: Sylvain Lebresne
 Fix For: 2.1 beta2

 Attachments: 0001-Drop-support-for-pre-2.0-sstables.txt, 
 0002-Check-for-remaining-1.2-hints-when-upgrading-to-2.1.txt, 
 0003-Remove-remnant-of-pre-2.0-messaging-format.txt


 CASSANDRA-5417 has broken 1.2 (ic) sstables support in at least two ways.
 1. CFMetaData.getOnDiskSerializer(), used by SSTableNamesIterator and 
 IndexedSliceReader, doesn't account for pre-2.0 supercolumn sstables
 2. More importantly, ACCNT.CompositeDeserializer doesn't handle ic tables' 
 cell counts, and maybeReadNext() might throw EOFException while expecting the 
 partition end marker. SimpleDeserializer is likely just as broken.
 I'd expect more issues like this, but less obvious, in the code, and thus am 
 torn between forcing people to run upgradesstables on 2.0 and actually fixing 
 these issues while hoping that we haven't missed anything.
 Implementing a supercolumn-aware AtomDeserializer is not hard, and fixing 
 CompositeDeserializer and SimpleDeserializer isn't very hard either, but I 
 really am worried about stuff that's less obvious. Plus, if we drop that 
 support, we can get rid of some legacy supercolumn code in 2.1. The minus, 
 obviously, is a bit of extra pain for 2.0-2.1 upgraders still having 1.2 
 sstables around.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


git commit: 2.0 compatibility modifications for CASSANDRA-6931

2014-03-31 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 6874aaa0d -> d049017ac


2.0 compatibility modifications for CASSANDRA-6931

patch by Aleksey Yeschenko; reviewed by Sylvain Lebresne for
CASSANDRA-6931


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d049017a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d049017a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d049017a

Branch: refs/heads/cassandra-2.0
Commit: d049017ac85ce22e7dcf87879e94b386987b19e6
Parents: 6874aaa
Author: Aleksey Yeschenko alek...@apache.org
Authored: Mon Mar 31 12:53:24 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Mon Mar 31 12:53:24 2014 +0300

--
 .../org/apache/cassandra/config/CFMetaData.java |  3 ++-
 .../apache/cassandra/db/BatchlogManager.java| 22 ++--
 2 files changed, 13 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d049017a/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java 
b/src/java/org/apache/cassandra/config/CFMetaData.java
index ff40e65..1f25cea 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -233,7 +233,8 @@ public final class CFMetaData
     public static final CFMetaData BatchlogCf = compile("CREATE TABLE " + SystemKeyspace.BATCHLOG_CF + " ("
                                                         + "id uuid PRIMARY KEY,"
                                                         + "written_at timestamp,"
-                                                        + "data blob"
+                                                        + "data blob,"
+                                                        + "version int,"
                                                         + ") WITH COMMENT='uncommited batches' AND gc_grace_seconds=0 "
                                                         + "AND COMPACTION={'class' : 'SizeTieredCompactionStrategy', 'min_threshold' : 2}");
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d049017a/src/java/org/apache/cassandra/db/BatchlogManager.java
--
diff --git a/src/java/org/apache/cassandra/db/BatchlogManager.java 
b/src/java/org/apache/cassandra/db/BatchlogManager.java
index 23cacca..2e09285 100644
--- a/src/java/org/apache/cassandra/db/BatchlogManager.java
+++ b/src/java/org/apache/cassandra/db/BatchlogManager.java
@@ -66,7 +66,6 @@ import org.apache.cassandra.utils.WrappedRunnable;
 public class BatchlogManager implements BatchlogManagerMBean
 {
     private static final String MBEAN_NAME = "org.apache.cassandra.db:type=BatchlogManager";
-    private static final int VERSION = MessagingService.VERSION_12;
     private static final long REPLAY_INTERVAL = 60 * 1000; // milliseconds
     private static final int PAGE_SIZE = 128; // same as HHOM, for now, w/out using any heuristics. TODO: set based on avg batch size.
 
@@ -151,7 +150,7 @@ public class BatchlogManager implements BatchlogManagerMBean
     {
         out.writeInt(mutations.size());
         for (RowMutation rm : mutations)
-            RowMutation.serializer.serialize(rm, out, VERSION);
+            RowMutation.serializer.serialize(rm, out, MessagingService.VERSION_12);
     }
     catch (IOException e)
     {
@@ -176,7 +175,7 @@ public class BatchlogManager implements BatchlogManagerMBean
 
     try
     {
-        UntypedResultSet page = process("SELECT id, data, written_at FROM %s.%s LIMIT %d",
+        UntypedResultSet page = process("SELECT id, data, written_at, version FROM %s.%s LIMIT %d",
                                         Keyspace.SYSTEM_KS,
                                         SystemKeyspace.BATCHLOG_CF,
                                         PAGE_SIZE);
@@ -188,7 +187,7 @@ public class BatchlogManager implements BatchlogManagerMBean
         if (page.size() < PAGE_SIZE)
             break; // we've exhausted the batchlog, next query would be empty.
 
-        page = process("SELECT id, data, written_at FROM %s.%s WHERE token(id) > token(%s) LIMIT %d",
+        page = process("SELECT id, data, written_at, version FROM %s.%s WHERE token(id) > token(%s) LIMIT %d",
                        Keyspace.SYSTEM_KS,
                        SystemKeyspace.BATCHLOG_CF,
                        id,
@@ -213,22 +212,23 @@ public class BatchlogManager implements BatchlogManagerMBean
     {
         id = row.getUUID("id");
         long writtenAt = row.getLong("written_at");

[jira] [Commented] (CASSANDRA-6953) Optimise CounterCell#reconcile

2014-03-31 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955065#comment-13955065
 ] 

Sylvain Lebresne commented on CASSANDRA-6953:
-

lgtm, +1.

 Optimise CounterCell#reconcile
 --

 Key: CASSANDRA-6953
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6953
 Project: Cassandra
  Issue Type: Improvement
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
Priority: Minor
 Fix For: 2.1 beta2


 Can't have CASSANDRA-6506 in 2.1, but we can get *some* of the benefit by 
 optimising CounterCell#reconcile() as it is.
 Specifically, if one context is a superset of the other one, there is no need 
 to allocate a BB for the resulting context and merge them - we can just 
 return the superset context. With 2.1 producing global shards exclusively, 
 this would minimize allocations by a lot for reads from multiple sstables.
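The optimization described above (return the superset context directly instead of allocating and merging) can be sketched with a deliberately simplified model. This is a hypothetical illustration, not the real CounterCell/CounterContext code: each context is modeled as a map from shard id to clock, and `subsumes` checks whether one context dominates the other:

```java
import java.util.HashMap;
import java.util.Map;

public class CounterContextSketch
{
    // True iff every shard of `b` appears in `a` with an equal-or-higher clock,
    // i.e. `a` already contains everything `b` knows.
    static boolean subsumes(Map<String, Long> a, Map<String, Long> b)
    {
        for (Map.Entry<String, Long> shard : b.entrySet())
        {
            Long clock = a.get(shard.getKey());
            if (clock == null || clock < shard.getValue())
                return false; // b has a shard that a doesn't dominate: a real merge is needed
        }
        return true;
    }

    static Map<String, Long> reconcile(Map<String, Long> a, Map<String, Long> b)
    {
        if (subsumes(a, b))
            return a;  // superset shortcut: reuse the existing context, no allocation
        if (subsumes(b, a))
            return b;
        Map<String, Long> merged = new HashMap<>(a);  // fall back to an allocating merge
        b.forEach((id, clock) -> merged.merge(id, clock, Math::max));
        return merged;
    }
}
```

When most reads hit sstables whose contexts dominate one another (as with 2.1's global shards), the shortcut path returns an existing context and the merge allocation is skipped entirely.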



--
This message was sent by Atlassian JIRA
(v6.2#6252)


svn commit: r1583281 - in /cassandra/site: publish/download/index.html src/settings.py

2014-03-31 Thread slebresne
Author: slebresne
Date: Mon Mar 31 09:57:43 2014
New Revision: 1583281

URL: http://svn.apache.org/r1583281
Log:
Update website for 1.2.16

Modified:
cassandra/site/publish/download/index.html
cassandra/site/src/settings.py

Modified: cassandra/site/publish/download/index.html
URL: 
http://svn.apache.org/viewvc/cassandra/site/publish/download/index.html?rev=1583281&r1=1583280&r2=1583281&view=diff
==
--- cassandra/site/publish/download/index.html (original)
+++ cassandra/site/publish/download/index.html Mon Mar 31 09:57:43 2014
@@ -118,16 +118,16 @@
   <p>
   Previous stable branches of Cassandra continue to see periodic maintenance
   for some time after a new major release is made. The lastest release on the
-  1.2 branch is 1.2.15 (released on
-  2014-02-07).
+  1.2 branch is 1.2.16 (released on
+  2014-03-31).
   </p>
 
   <ul>
 <li>
-<a class="filename" href="http://www.apache.org/dyn/closer.cgi?path=/cassandra/1.2.15/apache-cassandra-1.2.15-bin.tar.gz">apache-cassandra-1.2.15-bin.tar.gz</a>
-[<a href="http://www.apache.org/dist/cassandra/1.2.15/apache-cassandra-1.2.15-bin.tar.gz.asc">PGP</a>]
-[<a href="http://www.apache.org/dist/cassandra/1.2.15/apache-cassandra-1.2.15-bin.tar.gz.md5">MD5</a>]
-[<a href="http://www.apache.org/dist/cassandra/1.2.15/apache-cassandra-1.2.15-bin.tar.gz.sha1">SHA1</a>]
+<a class="filename" href="http://www.apache.org/dyn/closer.cgi?path=/cassandra/1.2.16/apache-cassandra-1.2.16-bin.tar.gz">apache-cassandra-1.2.16-bin.tar.gz</a>
+[<a href="http://www.apache.org/dist/cassandra/1.2.16/apache-cassandra-1.2.16-bin.tar.gz.asc">PGP</a>]
+[<a href="http://www.apache.org/dist/cassandra/1.2.16/apache-cassandra-1.2.16-bin.tar.gz.md5">MD5</a>]
+[<a href="http://www.apache.org/dist/cassandra/1.2.16/apache-cassandra-1.2.16-bin.tar.gz.sha1">SHA1</a>]
 </li>
   </ul>
   
@@ -170,10 +170,10 @@
 </li>
   
 <li>
-<a class="filename" href="http://www.apache.org/dyn/closer.cgi?path=/cassandra/1.2.15/apache-cassandra-1.2.15-src.tar.gz">apache-cassandra-1.2.15-src.tar.gz</a>
-[<a href="http://www.apache.org/dist/cassandra/1.2.15/apache-cassandra-1.2.15-src.tar.gz.asc">PGP</a>]
-[<a href="http://www.apache.org/dist/cassandra/1.2.15/apache-cassandra-1.2.15-src.tar.gz.md5">MD5</a>]
-[<a href="http://www.apache.org/dist/cassandra/1.2.15/apache-cassandra-1.2.15-src.tar.gz.sha1">SHA1</a>]
+<a class="filename" href="http://www.apache.org/dyn/closer.cgi?path=/cassandra/1.2.16/apache-cassandra-1.2.16-src.tar.gz">apache-cassandra-1.2.16-src.tar.gz</a>
+[<a href="http://www.apache.org/dist/cassandra/1.2.16/apache-cassandra-1.2.16-src.tar.gz.asc">PGP</a>]
+[<a href="http://www.apache.org/dist/cassandra/1.2.16/apache-cassandra-1.2.16-src.tar.gz.md5">MD5</a>]
+[<a href="http://www.apache.org/dist/cassandra/1.2.16/apache-cassandra-1.2.16-src.tar.gz.sha1">SHA1</a>]
 </li>
   
   

Modified: cassandra/site/src/settings.py
URL: 
http://svn.apache.org/viewvc/cassandra/site/src/settings.py?rev=1583281&r1=1583280&r2=1583281&view=diff
==
--- cassandra/site/src/settings.py (original)
+++ cassandra/site/src/settings.py Mon Mar 31 09:57:43 2014
@@ -92,8 +92,8 @@ SITE_POST_PROCESSORS = {
 }
 
 class CassandraDef(object):
-    oldstable_version = '1.2.15'
-    oldstable_release_date = '2014-02-07'
+    oldstable_version = '1.2.16'
+    oldstable_release_date = '2014-03-31'
     oldstable_exists = True
     veryoldstable_version = '1.1.12'
     veryoldstable_release_date = '2013-05-27'




[jira] [Created] (CASSANDRA-6956) SELECT ... LIMIT offset by 1 with static columns

2014-03-31 Thread Pavel Eremeev (JIRA)
Pavel Eremeev created CASSANDRA-6956:


 Summary: SELECT ... LIMIT offset by 1 with static columns
 Key: CASSANDRA-6956
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6956
 Project: Cassandra
  Issue Type: Bug
 Environment: cqlsh 4.1.1 | Cassandra 2.0.6 | CQL spec 3.1.1
Reporter: Pavel Eremeev


First, repro case:

{code}
cqlsh:test> create table test ( pk1 text, pk2 timeuuid, data1 text static, data2 text, PRIMARY KEY( pk1, pk2 ) );
cqlsh:test> update test set data1 = 'data1', data2 = 'data2' where pk1 = 'pk1' and pk2 = now();
cqlsh:test> update test set data1 = 'data1', data2 = 'data2' where pk1 = 'pk1' and pk2 = now();
cqlsh:test> select * from test limit 1;

 pk1 | pk2  | data1 | data2
-----+------+-------+-------
 pk1 | null | data1 |  null

(1 rows)

cqlsh:test> select * from test limit 2;

 pk1 | pk2                                  | data1 | data2
-----+--------------------------------------+-------+-------
 pk1 | 9b068ee0-b8b0-11e3-a345-49baa9ac32e6 | data1 | data2

(1 rows)

cqlsh:test> select * from test limit 3;

 pk1 | pk2                                  | data1 | data2
-----+--------------------------------------+-------+-------
 pk1 | 9b068ee0-b8b0-11e3-a345-49baa9ac32e6 | data1 | data2
 pk1 | 0af67a40-b8ba-11e3-a345-49baa9ac32e6 | data1 | data2

(2 rows)
{code}

I think that either 1) if this is a static-columns feature, it should be 
documented so I can use it safely, or 2) it should be fixed (the query above 
should return 2 rows with limit 2).




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6924) Data Inserted Immediately After Secondary Index Creation is not Indexed

2014-03-31 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-6924:


Fix Version/s: (was: 1.2.16)
   2.0.7

 Data Inserted Immediately After Secondary Index Creation is not Indexed
 ---

 Key: CASSANDRA-6924
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6924
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tyler Hobbs
 Fix For: 2.0.7

 Attachments: repro.py


 The head of the cassandra-1.2 branch (currently 1.2.16-tentative) contains a 
 regression from 1.2.15.  Data that is inserted immediately after secondary 
 index creation may never get indexed.
 You can reproduce the issue with a [pycassa integration 
 test|https://github.com/pycassa/pycassa/blob/master/tests/test_autopacking.py#L793]
  by running:
 {noformat}
 nosetests tests/test_autopacking.py:TestKeyValidators.test_get_indexed_slices
 {noformat}
 from the pycassa directory.
 The operation order goes like this:
 # create CF
 # create secondary index
 # insert data
 # query secondary index
 If a short sleep is added in between steps 2 and 3, the data gets indexed and 
 the query is successful.
 If a sleep is only added in between steps 3 and 4, some of the data is never 
 indexed and the query will return incomplete results.  This appears to be the 
 case even if the sleep is relatively long (30s), which makes me think the 
 data may never get indexed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[1/3] git commit: 2.0 compatibility modifications for CASSANDRA-6931

2014-03-31 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 684446289 -> b7ac8f96c


2.0 compatibility modifications for CASSANDRA-6931

patch by Aleksey Yeschenko; reviewed by Sylvain Lebresne for
CASSANDRA-6931


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d049017a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d049017a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d049017a

Branch: refs/heads/cassandra-2.1
Commit: d049017ac85ce22e7dcf87879e94b386987b19e6
Parents: 6874aaa
Author: Aleksey Yeschenko alek...@apache.org
Authored: Mon Mar 31 12:53:24 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Mon Mar 31 12:53:24 2014 +0300

--
 .../org/apache/cassandra/config/CFMetaData.java |  3 ++-
 .../apache/cassandra/db/BatchlogManager.java| 22 ++--
 2 files changed, 13 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d049017a/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java 
b/src/java/org/apache/cassandra/config/CFMetaData.java
index ff40e65..1f25cea 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -233,7 +233,8 @@ public final class CFMetaData
     public static final CFMetaData BatchlogCf = compile("CREATE TABLE " + SystemKeyspace.BATCHLOG_CF + " ("
                                                         + "id uuid PRIMARY KEY,"
                                                         + "written_at timestamp,"
-                                                        + "data blob"
+                                                        + "data blob,"
+                                                        + "version int,"
                                                         + ") WITH COMMENT='uncommited batches' AND gc_grace_seconds=0 "
                                                         + "AND COMPACTION={'class' : 'SizeTieredCompactionStrategy', 'min_threshold' : 2}");
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d049017a/src/java/org/apache/cassandra/db/BatchlogManager.java
--
diff --git a/src/java/org/apache/cassandra/db/BatchlogManager.java 
b/src/java/org/apache/cassandra/db/BatchlogManager.java
index 23cacca..2e09285 100644
--- a/src/java/org/apache/cassandra/db/BatchlogManager.java
+++ b/src/java/org/apache/cassandra/db/BatchlogManager.java
@@ -66,7 +66,6 @@ import org.apache.cassandra.utils.WrappedRunnable;
 public class BatchlogManager implements BatchlogManagerMBean
 {
     private static final String MBEAN_NAME = "org.apache.cassandra.db:type=BatchlogManager";
-    private static final int VERSION = MessagingService.VERSION_12;
     private static final long REPLAY_INTERVAL = 60 * 1000; // milliseconds
     private static final int PAGE_SIZE = 128; // same as HHOM, for now, w/out using any heuristics. TODO: set based on avg batch size.
 
@@ -151,7 +150,7 @@ public class BatchlogManager implements BatchlogManagerMBean
     {
         out.writeInt(mutations.size());
         for (RowMutation rm : mutations)
-            RowMutation.serializer.serialize(rm, out, VERSION);
+            RowMutation.serializer.serialize(rm, out, MessagingService.VERSION_12);
     }
     catch (IOException e)
     {
@@ -176,7 +175,7 @@ public class BatchlogManager implements BatchlogManagerMBean
 
     try
     {
-        UntypedResultSet page = process("SELECT id, data, written_at FROM %s.%s LIMIT %d",
+        UntypedResultSet page = process("SELECT id, data, written_at, version FROM %s.%s LIMIT %d",
                                         Keyspace.SYSTEM_KS,
                                         SystemKeyspace.BATCHLOG_CF,
                                         PAGE_SIZE);
@@ -188,7 +187,7 @@ public class BatchlogManager implements BatchlogManagerMBean
         if (page.size() < PAGE_SIZE)
             break; // we've exhausted the batchlog, next query would be empty.
 
-        page = process("SELECT id, data, written_at FROM %s.%s WHERE token(id) > token(%s) LIMIT %d",
+        page = process("SELECT id, data, written_at, version FROM %s.%s WHERE token(id) > token(%s) LIMIT %d",
                        Keyspace.SYSTEM_KS,
                        SystemKeyspace.BATCHLOG_CF,
                        id,
@@ -213,22 +212,23 @@ public class BatchlogManager implements BatchlogManagerMBean
     {
         id = row.getUUID("id");
         long writtenAt = row.getLong("written_at");

[2/3] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-03-31 Thread aleksey
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
src/java/org/apache/cassandra/db/BatchlogManager.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f6daf4e3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f6daf4e3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f6daf4e3

Branch: refs/heads/cassandra-2.1
Commit: f6daf4e30767c8a4891b88116108db54387847f9
Parents: 6844462 d049017
Author: Aleksey Yeschenko alek...@apache.org
Authored: Mon Mar 31 13:04:10 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Mon Mar 31 13:04:10 2014 +0300

--
 .../org/apache/cassandra/config/CFMetaData.java |  3 ++-
 .../apache/cassandra/db/BatchlogManager.java| 22 ++--
 2 files changed, 13 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f6daf4e3/src/java/org/apache/cassandra/config/CFMetaData.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f6daf4e3/src/java/org/apache/cassandra/db/BatchlogManager.java
--
diff --cc src/java/org/apache/cassandra/db/BatchlogManager.java
index 8024769,2e09285..603b9d8
--- a/src/java/org/apache/cassandra/db/BatchlogManager.java
+++ b/src/java/org/apache/cassandra/db/BatchlogManager.java
@@@ -147,9 -148,9 +146,9 @@@ public class BatchlogManager implement
  
      try
      {
 -        out.writeInt(mutations.size());
 -        for (RowMutation rm : mutations)
 -            RowMutation.serializer.serialize(rm, out, MessagingService.VERSION_12);
 +        buf.writeInt(mutations.size());
 +        for (Mutation mutation : mutations)
-             Mutation.serializer.serialize(mutation, buf, VERSION);
++            Mutation.serializer.serialize(mutation, buf, MessagingService.VERSION_12);
  }
  catch (IOException e)
  {
@@@ -250,14 -252,14 +250,14 @@@
      DataInputStream in = new DataInputStream(ByteBufferUtil.inputStream(data));
      int size = in.readInt();
      for (int i = 0; i < size; i++)
-         replaySerializedMutation(Mutation.serializer.deserialize(in, VERSION), writtenAt, rateLimiter);
 -        replaySerializedMutation(RowMutation.serializer.deserialize(in, version), writtenAt, version, rateLimiter);
++        replaySerializedMutation(Mutation.serializer.deserialize(in, version), writtenAt, version, rateLimiter);
  }
  
  /*
   * We try to deliver the mutations to the replicas ourselves if they are alive and only resort to writing hints
   * when a replica is down or a write request times out.
   */
- private void replaySerializedMutation(Mutation mutation, long writtenAt, RateLimiter rateLimiter)
 -private void replaySerializedMutation(RowMutation mutation, long writtenAt, int version, RateLimiter rateLimiter)
++private void replaySerializedMutation(Mutation mutation, long writtenAt, int version, RateLimiter rateLimiter)
  {
      int ttl = calculateHintTTL(mutation, writtenAt);
      if (ttl <= 0)
@@@ -266,7 -268,7 +266,7 @@@
      Set<InetAddress> liveEndpoints = new HashSet<>();
      String ks = mutation.getKeyspaceName();
      Token<?> tk = StorageService.getPartitioner().getToken(mutation.key());
- int mutationSize = (int) Mutation.serializer.serializedSize(mutation, VERSION);
 -int mutationSize = (int) RowMutation.serializer.serializedSize(mutation, version);
++int mutationSize = (int) Mutation.serializer.serializedSize(mutation, version);
  
      for (InetAddress endpoint : Iterables.concat(StorageService.instance.getNaturalEndpoints(ks, tk),
                                                   StorageService.instance.getTokenMetadata().pendingEndpointsFor(tk, ks)))


[3/3] git commit: Serialize batchlog mutations with the version of the target node

2014-03-31 Thread aleksey
Serialize batchlog mutations with the version of the target node

patch by Aleksey Yeschenko; reviewed by Sylvain Lebresne for
CASSANDRA-6931


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b7ac8f96
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b7ac8f96
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b7ac8f96

Branch: refs/heads/cassandra-2.1
Commit: b7ac8f96c169a3cfe18dd50ca1f27ce2b21fd78b
Parents: f6daf4e
Author: Aleksey Yeschenko alek...@apache.org
Authored: Mon Mar 31 13:09:36 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Mon Mar 31 13:09:36 2014 +0300

--
 NEWS.txt|  2 +-
 .../apache/cassandra/db/BatchlogManager.java| 28 ---
 .../apache/cassandra/net/MessagingService.java  |  6 +--
 .../apache/cassandra/service/StorageProxy.java  | 51 
 .../cassandra/db/BatchlogManagerTest.java   |  7 ++-
 5 files changed, 50 insertions(+), 44 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b7ac8f96/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 23f6522..b53795e 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -33,7 +33,7 @@ New features
 
 Upgrading
 -
-   - Rolling upgrades from anything pre-2.0.6 is not supported.
+   - Rolling upgrade from anything pre-2.0.7 is not supported.
- For leveled compaction users, 2.0 must be atleast started before
  upgrading to 2.1 due to the fact that the old JSON leveled
  manifest is migrated into the sstable metadata files on startup

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b7ac8f96/src/java/org/apache/cassandra/db/BatchlogManager.java
--
diff --git a/src/java/org/apache/cassandra/db/BatchlogManager.java 
b/src/java/org/apache/cassandra/db/BatchlogManager.java
index 603b9d8..47eb77a 100644
--- a/src/java/org/apache/cassandra/db/BatchlogManager.java
+++ b/src/java/org/apache/cassandra/db/BatchlogManager.java
@@ -44,9 +44,7 @@ import org.apache.cassandra.config.CFMetaData;
 import org.apache.cassandra.config.DatabaseDescriptor;
 import org.apache.cassandra.cql3.QueryProcessor;
 import org.apache.cassandra.cql3.UntypedResultSet;
-import org.apache.cassandra.db.composites.CellName;
 import org.apache.cassandra.db.compaction.CompactionManager;
-import org.apache.cassandra.db.marshal.LongType;
 import org.apache.cassandra.db.marshal.UUIDType;
 import org.apache.cassandra.dht.Token;
 import org.apache.cassandra.exceptions.WriteTimeoutException;
@@ -121,26 +119,23 @@ public class BatchlogManager implements 
BatchlogManagerMBean
 batchlogTasks.execute(runnable);
 }
 
-public static Mutation getBatchlogMutationFor(CollectionMutation 
mutations, UUID uuid)
+public static Mutation getBatchlogMutationFor(CollectionMutation 
mutations, UUID uuid, int version)
 {
-return getBatchlogMutationFor(mutations, uuid, 
FBUtilities.timestampMicros());
+return getBatchlogMutationFor(mutations, uuid, version, 
FBUtilities.timestampMicros());
 }
 
 @VisibleForTesting
-static Mutation getBatchlogMutationFor(CollectionMutation mutations, 
UUID uuid, long now)
+static Mutation getBatchlogMutationFor(CollectionMutation mutations, 
UUID uuid, int version, long now)
 {
-ByteBuffer writtenAt = LongType.instance.decompose(now / 1000);
-ByteBuffer data = serializeMutations(mutations);
-
 ColumnFamily cf = 
ArrayBackedSortedColumns.factory.create(CFMetaData.BatchlogCf);
-cf.addColumn(new Cell(cellName(), ByteBufferUtil.EMPTY_BYTE_BUFFER, 
now));
-cf.addColumn(new Cell(cellName(data), data, now));
-cf.addColumn(new Cell(cellName(written_at), writtenAt, now));
-
+CFRowAdder adder = new CFRowAdder(cf, 
CFMetaData.BatchlogCf.comparator.builder().build(), now);
+adder.add(data, serializeMutations(mutations, version))
+ .add(written_at, new Date(now / 1000))
+ .add(version, version);
 return new Mutation(Keyspace.SYSTEM_KS, 
UUIDType.instance.decompose(uuid), cf);
 }
 
-private static ByteBuffer serializeMutations(CollectionMutation 
mutations)
+private static ByteBuffer serializeMutations(CollectionMutation 
mutations, int version)
 {
 DataOutputBuffer buf = new DataOutputBuffer();
 
@@ -148,7 +143,7 @@ public class BatchlogManager implements BatchlogManagerMBean
 {
 buf.writeInt(mutations.size());
 for (Mutation mutation : mutations)
-Mutation.serializer.serialize(mutation, buf, MessagingService.VERSION_12);
+Mutation.serializer.serialize(mutation, buf, version);

[2/4] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-03-31 Thread aleksey
Merge branch 'cassandra-2.0' into cassandra-2.1

Conflicts:
src/java/org/apache/cassandra/db/BatchlogManager.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f6daf4e3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f6daf4e3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f6daf4e3

Branch: refs/heads/trunk
Commit: f6daf4e30767c8a4891b88116108db54387847f9
Parents: 6844462 d049017
Author: Aleksey Yeschenko alek...@apache.org
Authored: Mon Mar 31 13:04:10 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Mon Mar 31 13:04:10 2014 +0300

--
 .../org/apache/cassandra/config/CFMetaData.java |  3 ++-
 .../apache/cassandra/db/BatchlogManager.java| 22 ++--
 2 files changed, 13 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f6daf4e3/src/java/org/apache/cassandra/config/CFMetaData.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f6daf4e3/src/java/org/apache/cassandra/db/BatchlogManager.java
--
diff --cc src/java/org/apache/cassandra/db/BatchlogManager.java
index 8024769,2e09285..603b9d8
--- a/src/java/org/apache/cassandra/db/BatchlogManager.java
+++ b/src/java/org/apache/cassandra/db/BatchlogManager.java
@@@ -147,9 -148,9 +146,9 @@@ public class BatchlogManager implement
  
  try
  {
 -out.writeInt(mutations.size());
 -for (RowMutation rm : mutations)
 -RowMutation.serializer.serialize(rm, out, MessagingService.VERSION_12);
 +buf.writeInt(mutations.size());
 +for (Mutation mutation : mutations)
- Mutation.serializer.serialize(mutation, buf, VERSION);
++Mutation.serializer.serialize(mutation, buf, MessagingService.VERSION_12);
  }
  catch (IOException e)
  {
@@@ -250,14 -252,14 +250,14 @@@
  DataInputStream in = new DataInputStream(ByteBufferUtil.inputStream(data));
  int size = in.readInt();
  for (int i = 0; i < size; i++)
- replaySerializedMutation(Mutation.serializer.deserialize(in, VERSION), writtenAt, rateLimiter);
 -replaySerializedMutation(RowMutation.serializer.deserialize(in, version), writtenAt, version, rateLimiter);
++replaySerializedMutation(Mutation.serializer.deserialize(in, version), writtenAt, version, rateLimiter);
  }
  
  /*
   * We try to deliver the mutations to the replicas ourselves if they are alive and only resort to writing hints
   * when a replica is down or a write request times out.
   */
- private void replaySerializedMutation(Mutation mutation, long writtenAt, RateLimiter rateLimiter)
 -private void replaySerializedMutation(RowMutation mutation, long writtenAt, int version, RateLimiter rateLimiter)
++private void replaySerializedMutation(Mutation mutation, long writtenAt, int version, RateLimiter rateLimiter)
  {
  int ttl = calculateHintTTL(mutation, writtenAt);
  if (ttl <= 0)
@@@ -266,7 -268,7 +266,7 @@@
  Set<InetAddress> liveEndpoints = new HashSet<>();
  String ks = mutation.getKeyspaceName();
  Token<?> tk = StorageService.getPartitioner().getToken(mutation.key());
- int mutationSize = (int) Mutation.serializer.serializedSize(mutation, VERSION);
 -int mutationSize = (int) RowMutation.serializer.serializedSize(mutation, version);
++int mutationSize = (int) Mutation.serializer.serializedSize(mutation, version);
  
  for (InetAddress endpoint : Iterables.concat(StorageService.instance.getNaturalEndpoints(ks, tk),
                                               StorageService.instance.getTokenMetadata().pendingEndpointsFor(tk, ks)))



[3/4] git commit: Serialize batchlog mutations with the version of the target node

2014-03-31 Thread aleksey
Serialize batchlog mutations with the version of the target node

patch by Aleksey Yeschenko; reviewed by Sylvain Lebresne for
CASSANDRA-6931


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b7ac8f96
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b7ac8f96
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b7ac8f96

Branch: refs/heads/trunk
Commit: b7ac8f96c169a3cfe18dd50ca1f27ce2b21fd78b
Parents: f6daf4e
Author: Aleksey Yeschenko alek...@apache.org
Authored: Mon Mar 31 13:09:36 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Mon Mar 31 13:09:36 2014 +0300

--
 NEWS.txt|  2 +-
 .../apache/cassandra/db/BatchlogManager.java| 28 ---
 .../apache/cassandra/net/MessagingService.java  |  6 +--
 .../apache/cassandra/service/StorageProxy.java  | 51 
 .../cassandra/db/BatchlogManagerTest.java   |  7 ++-
 5 files changed, 50 insertions(+), 44 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b7ac8f96/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 23f6522..b53795e 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -33,7 +33,7 @@ New features
 
 Upgrading
 -
-   - Rolling upgrades from anything pre-2.0.6 is not supported.
+   - Rolling upgrade from anything pre-2.0.7 is not supported.
- For leveled compaction users, 2.0 must be at least started before
  upgrading to 2.1 due to the fact that the old JSON leveled
  manifest is migrated into the sstable metadata files on startup

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b7ac8f96/src/java/org/apache/cassandra/db/BatchlogManager.java
--
diff --git a/src/java/org/apache/cassandra/db/BatchlogManager.java 
b/src/java/org/apache/cassandra/db/BatchlogManager.java
index 603b9d8..47eb77a 100644
--- a/src/java/org/apache/cassandra/db/BatchlogManager.java
+++ b/src/java/org/apache/cassandra/db/BatchlogManager.java
@@ -44,9 +44,7 @@ import org.apache.cassandra.config.CFMetaData;
 import org.apache.cassandra.config.DatabaseDescriptor;
 import org.apache.cassandra.cql3.QueryProcessor;
 import org.apache.cassandra.cql3.UntypedResultSet;
-import org.apache.cassandra.db.composites.CellName;
 import org.apache.cassandra.db.compaction.CompactionManager;
-import org.apache.cassandra.db.marshal.LongType;
 import org.apache.cassandra.db.marshal.UUIDType;
 import org.apache.cassandra.dht.Token;
 import org.apache.cassandra.exceptions.WriteTimeoutException;
@@ -121,26 +119,23 @@ public class BatchlogManager implements 
BatchlogManagerMBean
 batchlogTasks.execute(runnable);
 }
 
-public static Mutation getBatchlogMutationFor(Collection<Mutation> mutations, UUID uuid)
+public static Mutation getBatchlogMutationFor(Collection<Mutation> mutations, UUID uuid, int version)
 {
-return getBatchlogMutationFor(mutations, uuid, FBUtilities.timestampMicros());
+return getBatchlogMutationFor(mutations, uuid, version, FBUtilities.timestampMicros());
 }
 
 @VisibleForTesting
-static Mutation getBatchlogMutationFor(Collection<Mutation> mutations, UUID uuid, long now)
+static Mutation getBatchlogMutationFor(Collection<Mutation> mutations, UUID uuid, int version, long now)
 {
-ByteBuffer writtenAt = LongType.instance.decompose(now / 1000);
-ByteBuffer data = serializeMutations(mutations);
-
 ColumnFamily cf = ArrayBackedSortedColumns.factory.create(CFMetaData.BatchlogCf);
-cf.addColumn(new Cell(cellName(""), ByteBufferUtil.EMPTY_BYTE_BUFFER, now));
-cf.addColumn(new Cell(cellName("data"), data, now));
-cf.addColumn(new Cell(cellName("written_at"), writtenAt, now));
-
+CFRowAdder adder = new CFRowAdder(cf, CFMetaData.BatchlogCf.comparator.builder().build(), now);
+adder.add("data", serializeMutations(mutations, version))
+     .add("written_at", new Date(now / 1000))
+     .add("version", version);
 return new Mutation(Keyspace.SYSTEM_KS, UUIDType.instance.decompose(uuid), cf);
 }
 
-private static ByteBuffer serializeMutations(Collection<Mutation> mutations)
+private static ByteBuffer serializeMutations(Collection<Mutation> mutations, int version)
 {
 DataOutputBuffer buf = new DataOutputBuffer();
 
@@ -148,7 +143,7 @@ public class BatchlogManager implements BatchlogManagerMBean
 {
 buf.writeInt(mutations.size());
 for (Mutation mutation : mutations)
-Mutation.serializer.serialize(mutation, buf, MessagingService.VERSION_12);
+Mutation.serializer.serialize(mutation, buf, version);

[4/4] git commit: Merge branch 'cassandra-2.1' into trunk

2014-03-31 Thread aleksey
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a2a463a6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a2a463a6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a2a463a6

Branch: refs/heads/trunk
Commit: a2a463a66878cd62ecd0ea096b07f806d99f7548
Parents: eeef406 b7ac8f9
Author: Aleksey Yeschenko alek...@apache.org
Authored: Mon Mar 31 13:11:00 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Mon Mar 31 13:11:00 2014 +0300

--
 NEWS.txt|  2 +-
 .../org/apache/cassandra/config/CFMetaData.java |  3 +-
 .../apache/cassandra/db/BatchlogManager.java| 48 --
 .../apache/cassandra/net/MessagingService.java  |  6 +--
 .../apache/cassandra/service/StorageProxy.java  | 51 
 .../cassandra/db/BatchlogManagerTest.java   |  7 ++-
 6 files changed, 62 insertions(+), 55 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a2a463a6/NEWS.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a2a463a6/src/java/org/apache/cassandra/config/CFMetaData.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a2a463a6/src/java/org/apache/cassandra/net/MessagingService.java
--



[1/4] git commit: 2.0 compatibility modifications for CASSANDRA-6931

2014-03-31 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk eeef4061b -> a2a463a66


2.0 compatibility modifications for CASSANDRA-6931

patch by Aleksey Yeschenko; reviewed by Sylvain Lebresne for
CASSANDRA-6931


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d049017a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d049017a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d049017a

Branch: refs/heads/trunk
Commit: d049017ac85ce22e7dcf87879e94b386987b19e6
Parents: 6874aaa
Author: Aleksey Yeschenko alek...@apache.org
Authored: Mon Mar 31 12:53:24 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Mon Mar 31 12:53:24 2014 +0300

--
 .../org/apache/cassandra/config/CFMetaData.java |  3 ++-
 .../apache/cassandra/db/BatchlogManager.java| 22 ++--
 2 files changed, 13 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d049017a/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java 
b/src/java/org/apache/cassandra/config/CFMetaData.java
index ff40e65..1f25cea 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -233,7 +233,8 @@ public final class CFMetaData
 public static final CFMetaData BatchlogCf = compile("CREATE TABLE " + SystemKeyspace.BATCHLOG_CF + " ("
                                                     + "id uuid PRIMARY KEY,"
                                                     + "written_at timestamp,"
-                                                    + "data blob"
+                                                    + "data blob,"
+                                                    + "version int,"
                                                     + ") WITH COMMENT='uncommited batches' AND gc_grace_seconds=0 "
                                                     + "AND COMPACTION={'class' : 'SizeTieredCompactionStrategy', 'min_threshold' : 2}");
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d049017a/src/java/org/apache/cassandra/db/BatchlogManager.java
--
diff --git a/src/java/org/apache/cassandra/db/BatchlogManager.java 
b/src/java/org/apache/cassandra/db/BatchlogManager.java
index 23cacca..2e09285 100644
--- a/src/java/org/apache/cassandra/db/BatchlogManager.java
+++ b/src/java/org/apache/cassandra/db/BatchlogManager.java
@@ -66,7 +66,6 @@ import org.apache.cassandra.utils.WrappedRunnable;
 public class BatchlogManager implements BatchlogManagerMBean
 {
 private static final String MBEAN_NAME = "org.apache.cassandra.db:type=BatchlogManager";
-private static final int VERSION = MessagingService.VERSION_12;
 private static final long REPLAY_INTERVAL = 60 * 1000; // milliseconds
 private static final int PAGE_SIZE = 128; // same as HHOM, for now, w/out using any heuristics. TODO: set based on avg batch size.
 
@@ -151,7 +150,7 @@ public class BatchlogManager implements BatchlogManagerMBean
 {
 out.writeInt(mutations.size());
 for (RowMutation rm : mutations)
-RowMutation.serializer.serialize(rm, out, VERSION);
+RowMutation.serializer.serialize(rm, out, MessagingService.VERSION_12);
 }
 catch (IOException e)
 {
@@ -176,7 +175,7 @@ public class BatchlogManager implements BatchlogManagerMBean
 
 try
 {
-UntypedResultSet page = process("SELECT id, data, written_at FROM %s.%s LIMIT %d",
+UntypedResultSet page = process("SELECT id, data, written_at, version FROM %s.%s LIMIT %d",
 Keyspace.SYSTEM_KS,
 SystemKeyspace.BATCHLOG_CF,
 PAGE_SIZE);
@@ -188,7 +187,7 @@ public class BatchlogManager implements BatchlogManagerMBean
 if (page.size() < PAGE_SIZE)
 break; // we've exhausted the batchlog, next query would be empty.
 
-page = process("SELECT id, data, written_at FROM %s.%s WHERE token(id) > token(%s) LIMIT %d",
+page = process("SELECT id, data, written_at, version FROM %s.%s WHERE token(id) > token(%s) LIMIT %d",
Keyspace.SYSTEM_KS,
SystemKeyspace.BATCHLOG_CF,
id,
@@ -213,22 +212,23 @@ public class BatchlogManager implements 
BatchlogManagerMBean
 {
 id = row.getUUID("id");
 long writtenAt = row.getLong("written_at");
+int 

[2/2] git commit: Merge branch 'cassandra-2.1' into trunk

2014-03-31 Thread aleksey
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6f78382f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6f78382f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6f78382f

Branch: refs/heads/trunk
Commit: 6f78382fa41d85318fcd3e9e4ca0d028b6ce59cc
Parents: a2a463a 86c79d6
Author: Aleksey Yeschenko alek...@apache.org
Authored: Mon Mar 31 13:15:33 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Mon Mar 31 13:15:33 2014 +0300

--
 CHANGES.txt | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6f78382f/CHANGES.txt
--
diff --cc CHANGES.txt
index a0bf7bd,b27a52d..04df2d6
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -39,7 -34,10 +39,9 @@@
   * Add multiple memory allocation options for memtables (CASSANDRA-6689)
   * Remove adjusted op rate from stress output (CASSANDRA-6921)
   * Add optimized CF.hasColumns() implementations (CASSANDRA-6941)
+  * Serialize batchlog mutations with the version of the target node
+(CASSANDRA-6931)
  Merged from 2.0:
 - * Restrict Windows to parallel repairs (CASSANDRA-6907)
   * (Hadoop) Allow manually specifying start/end tokens in CFIF 
(CASSANDRA-6436)
   * Fix NPE in MeteredFlusher (CASSANDRA-6820)
   * Fix race processing range scan responses (CASSANDRA-6820)



git commit: Add missing CHANGES.txt entry for CASSANDRA-6931

2014-03-31 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 b7ac8f96c -> 86c79d6d8


Add missing CHANGES.txt entry for CASSANDRA-6931


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/86c79d6d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/86c79d6d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/86c79d6d

Branch: refs/heads/cassandra-2.1
Commit: 86c79d6d8caca879ad19c31fee77ac1eb421f39a
Parents: b7ac8f9
Author: Aleksey Yeschenko alek...@apache.org
Authored: Mon Mar 31 13:15:19 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Mon Mar 31 13:15:19 2014 +0300

--
 CHANGES.txt | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/86c79d6d/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 196fa0d..b27a52d 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -34,6 +34,8 @@
  * Add multiple memory allocation options for memtables (CASSANDRA-6689)
  * Remove adjusted op rate from stress output (CASSANDRA-6921)
  * Add optimized CF.hasColumns() implementations (CASSANDRA-6941)
+ * Serialize batchlog mutations with the version of the target node
+   (CASSANDRA-6931)
 Merged from 2.0:
  * Restrict Windows to parallel repairs (CASSANDRA-6907)
  * (Hadoop) Allow manually specifying start/end tokens in CFIF (CASSANDRA-6436)



[1/2] git commit: Add missing CHANGES.txt entry for CASSANDRA-6931

2014-03-31 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk a2a463a66 -> 6f78382fa


Add missing CHANGES.txt entry for CASSANDRA-6931


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/86c79d6d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/86c79d6d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/86c79d6d

Branch: refs/heads/trunk
Commit: 86c79d6d8caca879ad19c31fee77ac1eb421f39a
Parents: b7ac8f9
Author: Aleksey Yeschenko alek...@apache.org
Authored: Mon Mar 31 13:15:19 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Mon Mar 31 13:15:19 2014 +0300

--
 CHANGES.txt | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/86c79d6d/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 196fa0d..b27a52d 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -34,6 +34,8 @@
  * Add multiple memory allocation options for memtables (CASSANDRA-6689)
  * Remove adjusted op rate from stress output (CASSANDRA-6921)
  * Add optimized CF.hasColumns() implementations (CASSANDRA-6941)
+ * Serialize batchlog mutations with the version of the target node
+   (CASSANDRA-6931)
 Merged from 2.0:
  * Restrict Windows to parallel repairs (CASSANDRA-6907)
  * (Hadoop) Allow manually specifying start/end tokens in CFIF (CASSANDRA-6436)



[jira] [Updated] (CASSANDRA-6931) BatchLogManager shouldn't serialize mutations with version 1.2 in 2.1.

2014-03-31 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-6931:
-

Fix Version/s: 2.0.7

 BatchLogManager shouldn't serialize mutations with version 1.2 in 2.1.
 --

 Key: CASSANDRA-6931
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6931
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Aleksey Yeschenko
 Fix For: 2.0.7, 2.1 beta2


 BatchLogManager serializes and deserializes mutations using 
 MessagingService.VERSION_12, and this is hardcoded. Meaning that it does that 
 in 2.0, 2.1 and trunk, even though in 2.1 the 1.2 format is not serialized 
 properly since [this 
 commit|https://github.com/apache/cassandra/commit/cca65d7c1638dcd9370b080f08fd55faefc2733e]
 (meaning that I'm pretty sure batch logs on super columns are broken on 2.1 
 currently). And keeping the 1.2 format indefinitely just for batchlog is 
 unrealistic.
 So batchlog needs to do something like hints: record the messaging format 
 used to encode every mutation and use that for deserialization, but always 
 serialize with the current format.
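The per-mutation versioning proposed here — record the format each batch was encoded with, and deserialize using the recorded format — can be sketched roughly as follows. This is a minimal illustration, not Cassandra's actual serializer API; the class, method names, and version constants are hypothetical stand-ins.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

// Sketch only: prefix each stored batch with the messaging version it was
// encoded with, then read that recorded version back on replay.
public class VersionedBatchSketch {
    static final int VERSION_12 = 6;  // hypothetical wire-version constants
    static final int VERSION_20 = 7;

    // Serialize a payload, recording the version used to encode it.
    static byte[] encode(byte[] payload, int version) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bos);
            out.writeInt(version);        // record the messaging version
            out.writeInt(payload.length);
            out.write(payload);
            return bos.toByteArray();
        } catch (IOException e) { throw new UncheckedIOException(e); }
    }

    // On replay, read the recorded version first and use it for decoding.
    // Returns {version, payloadLength} for brevity.
    static int[] decode(byte[] stored) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(stored));
            int version = in.readInt();   // the version recorded at write time
            byte[] payload = new byte[in.readInt()];
            in.readFully(payload);
            return new int[] { version, payload.length };
        } catch (IOException e) { throw new UncheckedIOException(e); }
    }
}
```

The point of the design is that writers can always use the format appropriate for the target node, while replay never has to guess which format an old batch was written in.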



--
This message was sent by Atlassian JIRA
(v6.2#6252)


git commit: Optimize CounterColumn#reconcile()

2014-03-31 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 86c79d6d8 -> a79d54eea


Optimize CounterColumn#reconcile()

patch by Aleksey Yeschenko; reviewed by Sylvain Lebresne for
CASSANDRA-6953


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a79d54ee
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a79d54ee
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a79d54ee

Branch: refs/heads/cassandra-2.1
Commit: a79d54eeafc7880e0257775e344b11c1252a8e89
Parents: 86c79d6
Author: Aleksey Yeschenko alek...@apache.org
Authored: Mon Mar 31 13:23:36 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Mon Mar 31 13:23:36 2014 +0300

--
 CHANGES.txt |   1 +
 .../org/apache/cassandra/db/CounterCell.java|  17 ++-
 .../cassandra/db/context/CounterContext.java| 138 ---
 3 files changed, 98 insertions(+), 58 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a79d54ee/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index b27a52d..0457b5e 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -36,6 +36,7 @@
  * Add optimized CF.hasColumns() implementations (CASSANDRA-6941)
  * Serialize batchlog mutations with the version of the target node
(CASSANDRA-6931)
+ * Optimize CounterColumn#reconcile() (CASSANDRA-6953)
 Merged from 2.0:
  * Restrict Windows to parallel repairs (CASSANDRA-6907)
  * (Hadoop) Allow manually specifying start/end tokens in CFIF (CASSANDRA-6436)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a79d54ee/src/java/org/apache/cassandra/db/CounterCell.java
--
diff --git a/src/java/org/apache/cassandra/db/CounterCell.java 
b/src/java/org/apache/cassandra/db/CounterCell.java
index cc26ef5..6b588ef 100644
--- a/src/java/org/apache/cassandra/db/CounterCell.java
+++ b/src/java/org/apache/cassandra/db/CounterCell.java
@@ -168,11 +168,18 @@ public class CounterCell extends Cell
  // live last delete >= live
  if (timestampOfLastDelete() >= cell.timestamp())
  return this;
-// live + live: merge clocks; update value
-return new CounterCell(name(),
-   contextManager.merge(value(), cell.value()),
-   Math.max(timestamp(), cell.timestamp()),
-   Math.max(timestampOfLastDelete(), ((CounterCell) cell).timestampOfLastDelete()));
+
+// live + live. return one of the cells if its context is a superset of the other's, or merge them otherwise
+ByteBuffer context = contextManager.merge(value(), cell.value());
+if (context == value() && timestamp() >= cell.timestamp() && timestampOfLastDelete() >= ((CounterCell) cell).timestampOfLastDelete())
+return this;
+else if (context == cell.value() && cell.timestamp() >= timestamp() && ((CounterCell) cell).timestampOfLastDelete() >= timestampOfLastDelete())
+return cell;
+else // merge clocks and timestamps.
+return new CounterCell(name(),
+   context,
+   Math.max(timestamp(), cell.timestamp()),
+   Math.max(timestampOfLastDelete(), ((CounterCell) cell).timestampOfLastDelete()));
 }
 
 @Override

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a79d54ee/src/java/org/apache/cassandra/db/context/CounterContext.java
--
diff --git a/src/java/org/apache/cassandra/db/context/CounterContext.java 
b/src/java/org/apache/cassandra/db/context/CounterContext.java
index 1e830a6..0b1677b 100644
--- a/src/java/org/apache/cassandra/db/context/CounterContext.java
+++ b/src/java/org/apache/cassandra/db/context/CounterContext.java
@@ -22,7 +22,6 @@ import java.security.MessageDigest;
 import java.util.ArrayList;
 import java.util.List;
 
-import org.apache.cassandra.serializers.MarshalException;
 import com.google.common.annotations.VisibleForTesting;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -30,6 +29,7 @@ import org.slf4j.LoggerFactory;
 import org.apache.cassandra.db.ClockAndCount;
 import org.apache.cassandra.db.TypeSizes;
 import org.apache.cassandra.db.compaction.CompactionManager;
+import org.apache.cassandra.serializers.MarshalException;
 import org.apache.cassandra.utils.*;
 
 /**
@@ -256,6 +256,9 @@ public class CounterContext
  */
 public ByteBuffer merge(ByteBuffer left, ByteBuffer right)
 {
+boolean leftIsSuperSet = true;
+boolean rightIsSuperSet = true;
+
 int globalCount = 0;
 int 
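The reconcile() change in this commit avoids allocating a new cell whenever merge() hands back one of its input buffers unchanged and that input's timestamps dominate. A rough stand-alone sketch of that reference-identity shortcut, with a toy `merge` standing in for CounterContext.merge and strings standing in for the returned cells:

```java
import java.nio.ByteBuffer;

public class ReconcileSketch {
    // Toy stand-in for contextManager.merge(): return the left buffer itself
    // when it already subsumes the right one (here: when it is at least as long).
    static ByteBuffer merge(ByteBuffer left, ByteBuffer right) {
        return left.remaining() >= right.remaining() ? left : right;
    }

    // Mirror of the reconcile() structure above: reuse an existing cell when
    // its context is a superset and its timestamp dominates; else build anew.
    static String reconcile(ByteBuffer v1, long ts1, ByteBuffer v2, long ts2) {
        ByteBuffer context = merge(v1, v2);
        if (context == v1 && ts1 >= ts2)
            return "keep-this";   // no new allocation needed
        if (context == v2 && ts2 >= ts1)
            return "keep-other";  // no new allocation needed
        return "new-cell";        // fall back to constructing a merged cell
    }
}
```

The `==` comparisons are deliberate reference checks, not equals(): the contract being exploited is that merge() returns the very same buffer object when one input is already a superset.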

[2/2] git commit: Merge branch 'cassandra-2.1' into trunk

2014-03-31 Thread aleksey
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cbde9672
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cbde9672
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cbde9672

Branch: refs/heads/trunk
Commit: cbde96724d6f4bd586be33e5bc4fbeb2c1f5daa4
Parents: 6f78382 a79d54e
Author: Aleksey Yeschenko alek...@apache.org
Authored: Mon Mar 31 13:26:38 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Mon Mar 31 13:26:38 2014 +0300

--
 CHANGES.txt |   1 +
 .../org/apache/cassandra/db/CounterCell.java|  17 ++-
 .../cassandra/db/context/CounterContext.java| 138 ---
 3 files changed, 98 insertions(+), 58 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cbde9672/CHANGES.txt
--
diff --cc CHANGES.txt
index 04df2d6,0457b5e..e604304
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -41,7 -36,9 +41,8 @@@
   * Add optimized CF.hasColumns() implementations (CASSANDRA-6941)
   * Serialize batchlog mutations with the version of the target node
 (CASSANDRA-6931)
+  * Optimize CounterColumn#reconcile() (CASSANDRA-6953)
  Merged from 2.0:
 - * Restrict Windows to parallel repairs (CASSANDRA-6907)
   * (Hadoop) Allow manually specifying start/end tokens in CFIF 
(CASSANDRA-6436)
   * Fix NPE in MeteredFlusher (CASSANDRA-6820)
   * Fix race processing range scan responses (CASSANDRA-6820)



[1/2] git commit: Optimize CounterColumn#reconcile()

2014-03-31 Thread aleksey
Repository: cassandra
Updated Branches:
  refs/heads/trunk 6f78382fa -> cbde96724


Optimize CounterColumn#reconcile()

patch by Aleksey Yeschenko; reviewed by Sylvain Lebresne for
CASSANDRA-6953


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a79d54ee
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a79d54ee
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a79d54ee

Branch: refs/heads/trunk
Commit: a79d54eeafc7880e0257775e344b11c1252a8e89
Parents: 86c79d6
Author: Aleksey Yeschenko alek...@apache.org
Authored: Mon Mar 31 13:23:36 2014 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Mon Mar 31 13:23:36 2014 +0300

--
 CHANGES.txt |   1 +
 .../org/apache/cassandra/db/CounterCell.java|  17 ++-
 .../cassandra/db/context/CounterContext.java| 138 ---
 3 files changed, 98 insertions(+), 58 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a79d54ee/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index b27a52d..0457b5e 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -36,6 +36,7 @@
  * Add optimized CF.hasColumns() implementations (CASSANDRA-6941)
  * Serialize batchlog mutations with the version of the target node
(CASSANDRA-6931)
+ * Optimize CounterColumn#reconcile() (CASSANDRA-6953)
 Merged from 2.0:
  * Restrict Windows to parallel repairs (CASSANDRA-6907)
  * (Hadoop) Allow manually specifying start/end tokens in CFIF (CASSANDRA-6436)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a79d54ee/src/java/org/apache/cassandra/db/CounterCell.java
--
diff --git a/src/java/org/apache/cassandra/db/CounterCell.java 
b/src/java/org/apache/cassandra/db/CounterCell.java
index cc26ef5..6b588ef 100644
--- a/src/java/org/apache/cassandra/db/CounterCell.java
+++ b/src/java/org/apache/cassandra/db/CounterCell.java
@@ -168,11 +168,18 @@ public class CounterCell extends Cell
  // live last delete >= live
  if (timestampOfLastDelete() >= cell.timestamp())
  return this;
-// live + live: merge clocks; update value
-return new CounterCell(name(),
-   contextManager.merge(value(), cell.value()),
-   Math.max(timestamp(), cell.timestamp()),
-   Math.max(timestampOfLastDelete(), ((CounterCell) cell).timestampOfLastDelete()));
+
+// live + live. return one of the cells if its context is a superset of the other's, or merge them otherwise
+ByteBuffer context = contextManager.merge(value(), cell.value());
+if (context == value() && timestamp() >= cell.timestamp() && timestampOfLastDelete() >= ((CounterCell) cell).timestampOfLastDelete())
+return this;
+else if (context == cell.value() && cell.timestamp() >= timestamp() && ((CounterCell) cell).timestampOfLastDelete() >= timestampOfLastDelete())
+return cell;
+else // merge clocks and timestamps.
+return new CounterCell(name(),
+   context,
+   Math.max(timestamp(), cell.timestamp()),
+   Math.max(timestampOfLastDelete(), ((CounterCell) cell).timestampOfLastDelete()));
 }
 
 @Override

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a79d54ee/src/java/org/apache/cassandra/db/context/CounterContext.java
--
diff --git a/src/java/org/apache/cassandra/db/context/CounterContext.java 
b/src/java/org/apache/cassandra/db/context/CounterContext.java
index 1e830a6..0b1677b 100644
--- a/src/java/org/apache/cassandra/db/context/CounterContext.java
+++ b/src/java/org/apache/cassandra/db/context/CounterContext.java
@@ -22,7 +22,6 @@ import java.security.MessageDigest;
 import java.util.ArrayList;
 import java.util.List;
 
-import org.apache.cassandra.serializers.MarshalException;
 import com.google.common.annotations.VisibleForTesting;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -30,6 +29,7 @@ import org.slf4j.LoggerFactory;
 import org.apache.cassandra.db.ClockAndCount;
 import org.apache.cassandra.db.TypeSizes;
 import org.apache.cassandra.db.compaction.CompactionManager;
+import org.apache.cassandra.serializers.MarshalException;
 import org.apache.cassandra.utils.*;
 
 /**
@@ -256,6 +256,9 @@ public class CounterContext
  */
 public ByteBuffer merge(ByteBuffer left, ByteBuffer right)
 {
+boolean leftIsSuperSet = true;
+boolean rightIsSuperSet = true;
+
 int globalCount = 0;
 int localCount = 0;
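The `leftIsSuperSet`/`rightIsSuperSet` flags introduced at the top of merge() above track whether either input context already contains everything in the other, so the winning buffer can be returned by reference instead of allocating a merged one. A speculative, much-simplified sketch of that idea over plain integer sets rather than Cassandra's shard layout:

```java
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

public class MergeSketch {
    // Merge two "contexts" (here just int arrays treated as sets): if one
    // already contains every element of the other, return that input array
    // unchanged, by reference, so the caller can detect the superset case.
    static int[] merge(int[] left, int[] right) {
        Set<Integer> l = new HashSet<>();
        Set<Integer> r = new HashSet<>();
        for (int x : left) l.add(x);
        for (int x : right) r.add(x);

        boolean leftIsSuperSet = l.containsAll(r);
        boolean rightIsSuperSet = r.containsAll(l);
        if (leftIsSuperSet) return left;    // avoid allocating a merged context
        if (rightIsSuperSet) return right;

        TreeSet<Integer> union = new TreeSet<>(l);  // neither subsumes the other
        union.addAll(r);
        int[] out = new int[union.size()];
        int i = 0;
        for (int x : union) out[i++] = x;
        return out;
    }
}
```

The real implementation computes the flags in a single pass over the two sorted shard lists rather than building sets, but the contract — same-reference return for a superset input — is what the CounterCell change above relies on.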

[2/2] git commit: Merge branch 'cassandra-2.1' of https://git-wip-us.apache.org/repos/asf/cassandra into cassandra-2.1

2014-03-31 Thread slebresne
Merge branch 'cassandra-2.1' of 
https://git-wip-us.apache.org/repos/asf/cassandra into cassandra-2.1

Conflicts:
CHANGES.txt
NEWS.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3632811f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3632811f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3632811f

Branch: refs/heads/cassandra-2.1
Commit: 3632811fac3222c7e14a625755385fb12f087c5c
Parents: 8e172c8 a79d54e
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Mon Mar 31 12:34:24 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Mon Mar 31 12:34:24 2014 +0200

--
 CHANGES.txt |   3 +
 NEWS.txt|   8 +-
 .../org/apache/cassandra/config/CFMetaData.java |   3 +-
 .../apache/cassandra/db/BatchlogManager.java|  48 +++
 .../org/apache/cassandra/db/CounterCell.java|  17 ++-
 .../cassandra/db/context/CounterContext.java| 138 ---
 .../apache/cassandra/net/MessagingService.java  |   6 +-
 .../apache/cassandra/service/StorageProxy.java  |  51 ---
 .../cassandra/db/BatchlogManagerTest.java   |   7 +-
 9 files changed, 165 insertions(+), 116 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3632811f/CHANGES.txt
--
diff --cc CHANGES.txt
index c224c8f,0457b5e..9e104e0
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -34,7 -34,9 +34,10 @@@
   * Add multiple memory allocation options for memtables (CASSANDRA-6689)
   * Remove adjusted op rate from stress output (CASSANDRA-6921)
   * Add optimized CF.hasColumns() implementations (CASSANDRA-6941)
+  * Serialize batchlog mutations with the version of the target node
+(CASSANDRA-6931)
+  * Optimize CounterColumn#reconcile() (CASSANDRA-6953)
 + * Properly remove 1.2 sstable support in 2.1 (CASSANDRA-6869)
  Merged from 2.0:
   * Restrict Windows to parallel repairs (CASSANDRA-6907)
   * (Hadoop) Allow manually specifying start/end tokens in CFIF 
(CASSANDRA-6436)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3632811f/NEWS.txt
--
diff --cc NEWS.txt
index 7cb7565,b53795e..9567ef3
--- a/NEWS.txt
+++ b/NEWS.txt
@@@ -33,11 -33,11 +33,11 @@@ New feature
  
  Upgrading
  -
-- Rolling upgrades from anything pre-2.0.6 is not supported. Furthermore
-- Pre-2.0 sstables are not supported. This means that before upgrading
-  a node a 2.1, this node must be started on 2.0 and
 -   - Rolling upgrade from anything pre-2.0.7 is not supported.
 -   - For leveled compaction users, 2.0 must be atleast started before
 - upgrading to 2.1 due to the fact that the old JSON leveled
 - manifest is migrated into the sstable metadata files on startup
 - in 2.0 and this code is gone from 2.1.
++   - Rolling upgrades from anything pre-2.0.7 is not supported. Furthermore
++ pre-2.0 sstables are not supported. This means that before upgrading
++ a node on 2.1, this node must be started on 2.0 and
 + 'nodetool upgdradesstables' must be run (and this even in the case
-  of no-rolling upgrades).
++ of not-rolling upgrades).
 - For size-tiered compaction users, Cassandra now defaults to ignoring
   the coldest 5% of sstables.  This can be customized with the
   cold_reads_to_omit compaction option; 0.0 omits nothing (the old

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3632811f/src/java/org/apache/cassandra/config/CFMetaData.java
--



[1/2] git commit: Remove 1.2 sstable support in 2.1

2014-03-31 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 a79d54eea -> 3632811fa


Remove 1.2 sstable support in 2.1

patch by slebresne; reviewed by iamaleksey for CASSANDRA-6869


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8e172c85
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8e172c85
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8e172c85

Branch: refs/heads/cassandra-2.1
Commit: 8e172c8563a995808a72a1a7e81a06f3c2a355ce
Parents: 6844462
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Mon Mar 31 12:30:50 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Mon Mar 31 12:30:50 2014 +0200

--
 CHANGES.txt |  1 +
 NEWS.txt| 10 ++--
 .../org/apache/cassandra/config/CFMetaData.java | 10 ++--
 src/java/org/apache/cassandra/db/Cell.java  | 10 
 .../apache/cassandra/db/ColumnFamilyStore.java  |  2 +-
 .../db/columniterator/IndexedSliceReader.java   | 53 +++-
 .../db/columniterator/SSTableNamesIterator.java | 14 ++
 .../db/columniterator/SimpleSliceReader.java|  7 +--
 .../cassandra/db/compaction/Scrubber.java   | 23 ++---
 .../apache/cassandra/io/sstable/Descriptor.java | 17 +--
 .../io/sstable/SSTableIdentityIterator.java |  3 +-
 .../cassandra/io/sstable/SSTableReader.java |  6 +--
 .../cassandra/io/sstable/SSTableScanner.java|  2 -
 .../cassandra/io/sstable/SSTableWriter.java | 11 +---
 .../metadata/LegacyMetadataSerializer.java  | 37 --
 .../io/sstable/metadata/StatsMetadata.java  | 36 +
 .../apache/cassandra/tools/SSTableExport.java   |  6 +--
 .../cassandra/io/sstable/LegacySSTableTest.java |  4 +-
 18 files changed, 60 insertions(+), 192 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e172c85/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 196fa0d..c224c8f 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -34,6 +34,7 @@
  * Add multiple memory allocation options for memtables (CASSANDRA-6689)
  * Remove adjusted op rate from stress output (CASSANDRA-6921)
  * Add optimized CF.hasColumns() implementations (CASSANDRA-6941)
+ * Properly remove 1.2 sstable support in 2.1 (CASSANDRA-6869)
 Merged from 2.0:
  * Restrict Windows to parallel repairs (CASSANDRA-6907)
  * (Hadoop) Allow manually specifying start/end tokens in CFIF (CASSANDRA-6436)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e172c85/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 23f6522..7cb7565 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -33,11 +33,11 @@ New features
 
 Upgrading
 -
-   - Rolling upgrades from anything pre-2.0.6 is not supported.
-   - For leveled compaction users, 2.0 must be atleast started before
- upgrading to 2.1 due to the fact that the old JSON leveled
- manifest is migrated into the sstable metadata files on startup
- in 2.0 and this code is gone from 2.1.
+   - Rolling upgrades from anything pre-2.0.6 is not supported. Furthermore
+   - Pre-2.0 sstables are not supported. This means that before upgrading
+ a node a 2.1, this node must be started on 2.0 and
+ 'nodetool upgdradesstables' must be run (and this even in the case
+ of no-rolling upgrades).
- For size-tiered compaction users, Cassandra now defaults to ignoring
  the coldest 5% of sstables.  This can be customized with the
  cold_reads_to_omit compaction option; 0.0 omits nothing (the old

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e172c85/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java 
b/src/java/org/apache/cassandra/config/CFMetaData.java
index 8a4f147..1ca9880 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -1379,16 +1379,14 @@ public final class CFMetaData
 return (cfName + "_" + columnName + "_idx").replaceAll("\\W", "");
 }
 
-public IteratorOnDiskAtom getOnDiskIterator(DataInput in, int count, 
Descriptor.Version version)
+public IteratorOnDiskAtom getOnDiskIterator(DataInput in, 
Descriptor.Version version)
 {
-return getOnDiskIterator(in, count, ColumnSerializer.Flag.LOCAL, 
Integer.MIN_VALUE, version);
+return getOnDiskIterator(in, ColumnSerializer.Flag.LOCAL, 
Integer.MIN_VALUE, version);
 }
 
-public IteratorOnDiskAtom getOnDiskIterator(DataInput in, int count, 
ColumnSerializer.Flag flag, int expireBefore, 

[2/3] git commit: Merge branch 'cassandra-2.1' of https://git-wip-us.apache.org/repos/asf/cassandra into cassandra-2.1

2014-03-31 Thread slebresne
Merge branch 'cassandra-2.1' of 
https://git-wip-us.apache.org/repos/asf/cassandra into cassandra-2.1

Conflicts:
CHANGES.txt
NEWS.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3632811f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3632811f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3632811f

Branch: refs/heads/trunk
Commit: 3632811fac3222c7e14a625755385fb12f087c5c
Parents: 8e172c8 a79d54e
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Mon Mar 31 12:34:24 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Mon Mar 31 12:34:24 2014 +0200

--
 CHANGES.txt |   3 +
 NEWS.txt|   8 +-
 .../org/apache/cassandra/config/CFMetaData.java |   3 +-
 .../apache/cassandra/db/BatchlogManager.java|  48 +++
 .../org/apache/cassandra/db/CounterCell.java|  17 ++-
 .../cassandra/db/context/CounterContext.java| 138 ---
 .../apache/cassandra/net/MessagingService.java  |   6 +-
 .../apache/cassandra/service/StorageProxy.java  |  51 ---
 .../cassandra/db/BatchlogManagerTest.java   |   7 +-
 9 files changed, 165 insertions(+), 116 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3632811f/CHANGES.txt
--
diff --cc CHANGES.txt
index c224c8f,0457b5e..9e104e0
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -34,7 -34,9 +34,10 @@@
   * Add multiple memory allocation options for memtables (CASSANDRA-6689)
   * Remove adjusted op rate from stress output (CASSANDRA-6921)
   * Add optimized CF.hasColumns() implementations (CASSANDRA-6941)
+  * Serialize batchlog mutations with the version of the target node
+(CASSANDRA-6931)
+  * Optimize CounterColumn#reconcile() (CASSANDRA-6953)
 + * Properly remove 1.2 sstable support in 2.1 (CASSANDRA-6869)
  Merged from 2.0:
   * Restrict Windows to parallel repairs (CASSANDRA-6907)
   * (Hadoop) Allow manually specifying start/end tokens in CFIF 
(CASSANDRA-6436)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3632811f/NEWS.txt
--
diff --cc NEWS.txt
index 7cb7565,b53795e..9567ef3
--- a/NEWS.txt
+++ b/NEWS.txt
@@@ -33,11 -33,11 +33,11 @@@ New feature
  
  Upgrading
  -
-- Rolling upgrades from anything pre-2.0.6 is not supported. Furthermore
-- Pre-2.0 sstables are not supported. This means that before upgrading
-  a node a 2.1, this node must be started on 2.0 and
 -   - Rolling upgrade from anything pre-2.0.7 is not supported.
 -   - For leveled compaction users, 2.0 must be atleast started before
 - upgrading to 2.1 due to the fact that the old JSON leveled
 - manifest is migrated into the sstable metadata files on startup
 - in 2.0 and this code is gone from 2.1.
++   - Rolling upgrades from anything pre-2.0.7 is not supported. Furthermore
++ pre-2.0 sstables are not supported. This means that before upgrading
++ a node on 2.1, this node must be started on 2.0 and
 + 'nodetool upgdradesstables' must be run (and this even in the case
-  of no-rolling upgrades).
++ of not-rolling upgrades).
 - For size-tiered compaction users, Cassandra now defaults to ignoring
   the coldest 5% of sstables.  This can be customized with the
   cold_reads_to_omit compaction option; 0.0 omits nothing (the old

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3632811f/src/java/org/apache/cassandra/config/CFMetaData.java
--



[1/3] git commit: Remove 1.2 sstable support in 2.1

2014-03-31 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/trunk cbde96724 -> 93bd9ec25


Remove 1.2 sstable support in 2.1

patch by slebresne; reviewed by iamaleksey for CASSANDRA-6869


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8e172c85
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8e172c85
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8e172c85

Branch: refs/heads/trunk
Commit: 8e172c8563a995808a72a1a7e81a06f3c2a355ce
Parents: 6844462
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Mon Mar 31 12:30:50 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Mon Mar 31 12:30:50 2014 +0200

--
 CHANGES.txt |  1 +
 NEWS.txt| 10 ++--
 .../org/apache/cassandra/config/CFMetaData.java | 10 ++--
 src/java/org/apache/cassandra/db/Cell.java  | 10 
 .../apache/cassandra/db/ColumnFamilyStore.java  |  2 +-
 .../db/columniterator/IndexedSliceReader.java   | 53 +++-
 .../db/columniterator/SSTableNamesIterator.java | 14 ++
 .../db/columniterator/SimpleSliceReader.java|  7 +--
 .../cassandra/db/compaction/Scrubber.java   | 23 ++---
 .../apache/cassandra/io/sstable/Descriptor.java | 17 +--
 .../io/sstable/SSTableIdentityIterator.java |  3 +-
 .../cassandra/io/sstable/SSTableReader.java |  6 +--
 .../cassandra/io/sstable/SSTableScanner.java|  2 -
 .../cassandra/io/sstable/SSTableWriter.java | 11 +---
 .../metadata/LegacyMetadataSerializer.java  | 37 --
 .../io/sstable/metadata/StatsMetadata.java  | 36 +
 .../apache/cassandra/tools/SSTableExport.java   |  6 +--
 .../cassandra/io/sstable/LegacySSTableTest.java |  4 +-
 18 files changed, 60 insertions(+), 192 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e172c85/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 196fa0d..c224c8f 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -34,6 +34,7 @@
  * Add multiple memory allocation options for memtables (CASSANDRA-6689)
  * Remove adjusted op rate from stress output (CASSANDRA-6921)
  * Add optimized CF.hasColumns() implementations (CASSANDRA-6941)
+ * Properly remove 1.2 sstable support in 2.1 (CASSANDRA-6869)
 Merged from 2.0:
  * Restrict Windows to parallel repairs (CASSANDRA-6907)
  * (Hadoop) Allow manually specifying start/end tokens in CFIF (CASSANDRA-6436)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e172c85/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index 23f6522..7cb7565 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -33,11 +33,11 @@ New features
 
 Upgrading
 -
-   - Rolling upgrades from anything pre-2.0.6 is not supported.
-   - For leveled compaction users, 2.0 must be atleast started before
- upgrading to 2.1 due to the fact that the old JSON leveled
- manifest is migrated into the sstable metadata files on startup
- in 2.0 and this code is gone from 2.1.
+   - Rolling upgrades from anything pre-2.0.6 is not supported. Furthermore
+   - Pre-2.0 sstables are not supported. This means that before upgrading
+ a node a 2.1, this node must be started on 2.0 and
+ 'nodetool upgdradesstables' must be run (and this even in the case
+ of no-rolling upgrades).
- For size-tiered compaction users, Cassandra now defaults to ignoring
  the coldest 5% of sstables.  This can be customized with the
  cold_reads_to_omit compaction option; 0.0 omits nothing (the old

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8e172c85/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java 
b/src/java/org/apache/cassandra/config/CFMetaData.java
index 8a4f147..1ca9880 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -1379,16 +1379,14 @@ public final class CFMetaData
 return (cfName + "_" + columnName + "_idx").replaceAll("\\W", "");
 }
 
-public IteratorOnDiskAtom getOnDiskIterator(DataInput in, int count, 
Descriptor.Version version)
+public IteratorOnDiskAtom getOnDiskIterator(DataInput in, 
Descriptor.Version version)
 {
-return getOnDiskIterator(in, count, ColumnSerializer.Flag.LOCAL, 
Integer.MIN_VALUE, version);
+return getOnDiskIterator(in, ColumnSerializer.Flag.LOCAL, 
Integer.MIN_VALUE, version);
 }
 
-public IteratorOnDiskAtom getOnDiskIterator(DataInput in, int count, 
ColumnSerializer.Flag flag, int expireBefore, Descriptor.Version version)

[3/3] git commit: Merge branch 'cassandra-2.1' into trunk

2014-03-31 Thread slebresne
Merge branch 'cassandra-2.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/93bd9ec2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/93bd9ec2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/93bd9ec2

Branch: refs/heads/trunk
Commit: 93bd9ec25382fdd651f9931de924fd1872bdb60b
Parents: cbde967 3632811
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Mon Mar 31 12:36:36 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Mon Mar 31 12:36:36 2014 +0200

--
 CHANGES.txt |  1 +
 NEWS.txt| 10 ++--
 .../org/apache/cassandra/config/CFMetaData.java | 10 ++--
 src/java/org/apache/cassandra/db/Cell.java  | 10 
 .../apache/cassandra/db/ColumnFamilyStore.java  |  2 +-
 .../db/columniterator/IndexedSliceReader.java   | 53 +++-
 .../db/columniterator/SSTableNamesIterator.java | 14 ++
 .../db/columniterator/SimpleSliceReader.java|  7 +--
 .../cassandra/db/compaction/Scrubber.java   | 23 ++---
 .../apache/cassandra/io/sstable/Descriptor.java | 17 +--
 .../io/sstable/SSTableIdentityIterator.java |  3 +-
 .../cassandra/io/sstable/SSTableReader.java |  6 +--
 .../cassandra/io/sstable/SSTableScanner.java|  2 -
 .../cassandra/io/sstable/SSTableWriter.java | 11 +---
 .../metadata/LegacyMetadataSerializer.java  | 37 --
 .../io/sstable/metadata/StatsMetadata.java  | 36 +
 .../apache/cassandra/tools/SSTableExport.java   |  6 +--
 .../cassandra/io/sstable/LegacySSTableTest.java |  4 +-
 18 files changed, 60 insertions(+), 192 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/93bd9ec2/CHANGES.txt
--
diff --cc CHANGES.txt
index e604304,9e104e0..7166ed9
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -42,7 -37,9 +42,8 @@@
   * Serialize batchlog mutations with the version of the target node
 (CASSANDRA-6931)
   * Optimize CounterColumn#reconcile() (CASSANDRA-6953)
+  * Properly remove 1.2 sstable support in 2.1 (CASSANDRA-6869)
  Merged from 2.0:
 - * Restrict Windows to parallel repairs (CASSANDRA-6907)
   * (Hadoop) Allow manually specifying start/end tokens in CFIF 
(CASSANDRA-6436)
   * Fix NPE in MeteredFlusher (CASSANDRA-6820)
   * Fix race processing range scan responses (CASSANDRA-6820)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/93bd9ec2/NEWS.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/93bd9ec2/src/java/org/apache/cassandra/config/CFMetaData.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/93bd9ec2/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java
--



[jira] [Resolved] (CASSANDRA-6738) java.lang.ClassCastException: org.apache.cassandra.db.composites.CompoundComposite cannot be cast to org.apache.cassandra.db.composites.CellName

2014-03-31 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-6738.
-

Resolution: Cannot Reproduce

As said above, I think there is something wrong with the sstable attached; in 
the meantime no one seems to have been able to repro this, so closing. If 
someone has fresh info on this and/or is able to repro, feel free to reopen.

 java.lang.ClassCastException: 
 org.apache.cassandra.db.composites.CompoundComposite cannot be cast to 
 org.apache.cassandra.db.composites.CellName
 

 Key: CASSANDRA-6738
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6738
 Project: Cassandra
  Issue Type: Bug
Reporter: Mateusz Gajewski
Assignee: Sylvain Lebresne
 Fix For: 2.1 beta2

 Attachments: 6738.txt, user_attribs.tar.gz


 When using nodetool upgradesstables (2.0.4 - 2.1-beta) class cast exception 
 occurs:
 ERROR [CompactionExecutor:7] 2014-02-19 21:34:16,839 CassandraDaemon.java:165 
 - Exception in thread Thread[CompactionExecutor:7,1,main]
 java.lang.ClassCastException: 
 org.apache.cassandra.db.composites.CompoundComposite cannot be cast to 
 org.apache.cassandra.db.composites.CellName
   at 
 org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:86)
  ~[main/:na]
   at org.apache.cassandra.db.Cell$1.computeNext(Cell.java:75) ~[main/:na]
   at org.apache.cassandra.db.Cell$1.computeNext(Cell.java:64) ~[main/:na]
   at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
  ~[guava-16.0.jar:na]
   at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
 ~[guava-16.0.jar:na]
   at 
 org.apache.cassandra.io.sstable.SSTableIdentityIterator.hasNext(SSTableIdentityIterator.java:129)
  ~[main/:na]
   at 
 org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:200)
  ~[main/:na]
   at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
  ~[guava-16.0.jar:na]
   at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
 ~[guava-16.0.jar:na]
   at 
 com.google.common.collect.Iterators$7.computeNext(Iterators.java:645) 
 ~[guava-16.0.jar:na]
   at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
  ~[guava-16.0.jar:na]
   at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
 ~[guava-16.0.jar:na]
   at 
 org.apache.cassandra.db.ColumnIndex$Builder.buildForCompaction(ColumnIndex.java:165)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.LazilyCompactedRow.write(LazilyCompactedRow.java:110)
  ~[main/:na]
   at 
 org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:178) 
 ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:172)
  ~[main/:na]
   at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
  ~[main/:na]
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
 ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:67)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:64)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionManager$4.perform(CompactionManager.java:262)
  ~[main/:na]
   at 
 org.apache.cassandra.db.compaction.CompactionManager$2.call(CompactionManager.java:227)
  ~[main/:na]
   at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
 ~[na:1.7.0_45]
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  ~[na:1.7.0_45]
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  [na:1.7.0_45]
   at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6764) Using Batch commitlog_sync is slow and doesn't actually batch writes

2014-03-31 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955122#comment-13955122
 ] 

Jonathan Ellis commented on CASSANDRA-6764:
---

batch and periodic have always differed only by the guarantees provided.  I 
think it makes sense to preserve that.

 Using Batch commitlog_sync is slow and doesn't actually batch writes
 

 Key: CASSANDRA-6764
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6764
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: John Carrino
Assignee: John Carrino
 Fix For: 2.1 beta2

 Attachments: 6764.fix.txt, cassandra_6764_v2.patch, 
 cassandra_6764_v3.patch


 The assumption behind batch commit mode is that the client does its own 
 batching and wants to wait until the write is durable before returning.  The 
 problem is that the queue that cassandra uses under the covers only allows 
 for a single ROW (RowMutation) per thread (concurrent_writes).  This means 
 that commitlog_sync_batch_window_in_ms should really be called 
 sleep_between_each_concurrent_writes_rows_in_ms.
 I assume the reason this slipped by for so long is that no one uses batch 
 mode, probably because people say it's slow.  We need durability so this 
 isn't an option.
 However it doesn't need to be this slow.
 Also, if you write a row that is larger than the commit log size it silently 
 (warn) fails to put it in the commit log.  This is not ideal for batch mode.
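What the ticket asks for is true group commit: many writers append to the log buffer, then share a single fsync per batch window instead of sleeping between individual writes. A minimal, hypothetical sketch of that pattern follows; class and method names such as `BatchSyncer` are illustrative, not Cassandra's actual commit log API.

```java
import java.util.concurrent.CountDownLatch;

// Hypothetical group-commit sketch: writers append to the log buffer, then
// block on the latch for the current batch; a single syncer thread performs
// one fsync per batch window and releases every writer in that batch.
// Names are illustrative, not Cassandra's real classes.
class BatchSyncer {
    private volatile CountDownLatch current = new CountDownLatch(1);
    private final Object wakeup = new Object();

    // Called by a writer after appending its mutation: wait for the fsync
    // that covers this batch instead of syncing (or sleeping) individually.
    void awaitSync() throws InterruptedException {
        CountDownLatch mine = current;
        synchronized (wakeup) { wakeup.notify(); } // nudge the syncer
        mine.await();
    }

    // Syncer loop: sleep up to the batch window, swap in a fresh latch for
    // new arrivals, perform one fsync, then release the closed batch.
    void syncLoop() throws InterruptedException {
        while (true) {
            synchronized (wakeup) { wakeup.wait(10); } // ~10ms batch window
            CountDownLatch batch = current;
            current = new CountDownLatch(1);
            // fileChannel.force(false) would go here: one fsync per batch
            batch.countDown();
        }
    }
}
```

With this shape, concurrent_writes threads no longer serialize on the sync window; throughput scales with batch size while every acknowledged write is still durable.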



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (CASSANDRA-6957) testNewRepairedSSTable fails intermittently

2014-03-31 Thread Jonathan Ellis (JIRA)
Jonathan Ellis created CASSANDRA-6957:
-

 Summary: testNewRepairedSSTable fails intermittently
 Key: CASSANDRA-6957
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6957
 Project: Cassandra
  Issue Type: Bug
Reporter: Jonathan Ellis
Assignee: Marcus Eriksson
 Fix For: 2.1 beta2






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6764) Using Batch commitlog_sync is slow and doesn't actually batch writes

2014-03-31 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955126#comment-13955126
 ] 

Benedict commented on CASSANDRA-6764:
-

Well this _does_ come under the heading of 'guarantees' in my book :)

Okay, in that case I'm a smidgen concerned we may end up breaking some people 
unexpectedly in 2.1, but they can just bump their CL size so it shouldn't be 
too bad. Perhaps we should mention this in the error message at least?

I'll update the test.

 Using Batch commitlog_sync is slow and doesn't actually batch writes
 

 Key: CASSANDRA-6764
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6764
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: John Carrino
Assignee: John Carrino
 Fix For: 2.1 beta2

 Attachments: 6764.fix.txt, cassandra_6764_v2.patch, 
 cassandra_6764_v3.patch


 The assumption behind batch commit mode is that the client does its own 
 batching and wants to wait until the write is durable before returning.  The 
 problem is that the queue that cassandra uses under the covers only allows 
 for a single ROW (RowMutation) per thread (concurrent_writes).  This means 
 that commitlog_sync_batch_window_in_ms should really be called 
 sleep_between_each_concurrent_writes_rows_in_ms.
 I assume the reason this slipped by for so long is that no one uses batch 
 mode, probably because people say it's slow.  We need durability so this 
 isn't an option.
 However it doesn't need to be this slow.
 Also, if you write a row that is larger than the commit log size it silently 
 (warn) fails to put it in the commit log.  This is not ideal for batch mode.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (CASSANDRA-6956) SELECT ... LIMIT offset by 1 with static columns

2014-03-31 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne reassigned CASSANDRA-6956:
---

Assignee: Sylvain Lebresne

 SELECT ... LIMIT offset by 1 with static columns
 

 Key: CASSANDRA-6956
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6956
 Project: Cassandra
  Issue Type: Bug
 Environment: cqlsh 4.1.1 | Cassandra 2.0.6 | CQL spec 3.1.1
Reporter: Pavel Eremeev
Assignee: Sylvain Lebresne
 Fix For: 2.0.7


 First, repro case:
 {code}
 cqlsh:test> create table test ( pk1 text, pk2 timeuuid, data1 text static, 
 data2 text, PRIMARY KEY( pk1, pk2 ) );
 cqlsh:test> update test set data1 = 'data1', data2 = 'data2' where pk1 = 
 'pk1' and pk2 = now();
 cqlsh:test> update test set data1 = 'data1', data2 = 'data2' where pk1 = 
 'pk1' and pk2 = now();
 cqlsh:test> select * from test limit 1;
  pk1 | pk2  | data1 | data2
 -+--+---+---
  pk1 | null | data1 |  null
 (1 rows)
 cqlsh:test> select * from test limit 2;
  pk1 | pk2  | data1 | data2
 -+--+---+---
  pk1 | 9b068ee0-b8b0-11e3-a345-49baa9ac32e6 | data1 | data2
 (1 rows)
 cqlsh:test> select * from test limit 3;
  pk1 | pk2  | data1 | data2
 -+--+---+---
  pk1 | 9b068ee0-b8b0-11e3-a345-49baa9ac32e6 | data1 | data2
  pk1 | 0af67a40-b8ba-11e3-a345-49baa9ac32e6 | data1 | data2
 (2 rows)
 {code}
 I think that: 1) if this is a static columns feature it should be documented 
 so I can use it safely or 2) it should be fixed (return 2 rows with limit 2 
 for query above).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6956) SELECT ... LIMIT offset by 1 with static columns

2014-03-31 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-6956:


Fix Version/s: 2.0.7

 SELECT ... LIMIT offset by 1 with static columns
 

 Key: CASSANDRA-6956
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6956
 Project: Cassandra
  Issue Type: Bug
 Environment: cqlsh 4.1.1 | Cassandra 2.0.6 | CQL spec 3.1.1
Reporter: Pavel Eremeev
Assignee: Sylvain Lebresne
 Fix For: 2.0.7


 First, repro case:
 {code}
 cqlsh:test> create table test ( pk1 text, pk2 timeuuid, data1 text static, 
 data2 text, PRIMARY KEY( pk1, pk2 ) );
 cqlsh:test> update test set data1 = 'data1', data2 = 'data2' where pk1 = 
 'pk1' and pk2 = now();
 cqlsh:test> update test set data1 = 'data1', data2 = 'data2' where pk1 = 
 'pk1' and pk2 = now();
 cqlsh:test> select * from test limit 1;
  pk1 | pk2  | data1 | data2
 -+--+---+---
  pk1 | null | data1 |  null
 (1 rows)
 cqlsh:test> select * from test limit 2;
  pk1 | pk2  | data1 | data2
 -+--+---+---
  pk1 | 9b068ee0-b8b0-11e3-a345-49baa9ac32e6 | data1 | data2
 (1 rows)
 cqlsh:test> select * from test limit 3;
  pk1 | pk2  | data1 | data2
 -+--+---+---
  pk1 | 9b068ee0-b8b0-11e3-a345-49baa9ac32e6 | data1 | data2
  pk1 | 0af67a40-b8ba-11e3-a345-49baa9ac32e6 | data1 | data2
 (2 rows)
 {code}
 I think that: 1) if this is a static columns feature it should be documented 
 so I can use it safely or 2) it should be fixed (return 2 rows with limit 2 
 for query above).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-4050) Unable to remove snapshot files on Windows while original sstables are live

2014-03-31 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13955152#comment-13955152
 ] 

Benedict commented on CASSANDRA-4050:
-

I've uploaded a tidied up version 
[here|https://github.com/belliottsmith/cassandra/tree/4050-nio2]

I've eliminated some unnecessary variables, simplified a couple of 
loops/conditions, and unified the AbstractDataInput/Small hierarchy. Also fixed 
a minor bug with getPosition() in RAR after close(), and CRAR now ensures 
that the current position is restored after rebuffer() - whilst currently this 
wouldn't cause any problems, it seems like an oversight.

 Unable to remove snapshot files on Windows while original sstables are live
 ---

 Key: CASSANDRA-4050
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4050
 Project: Cassandra
  Issue Type: Bug
 Environment: Windows 7
Reporter: Jim Newsham
Assignee: Joshua McKenzie
Priority: Minor
 Attachments: CASSANDRA-4050_v1.patch


 I'm using Cassandra 1.0.8, on Windows 7.  When I take a snapshot of the 
 database, I find that I am unable to delete the snapshot directory (i.e., dir 
 named {datadir}\{keyspacename}\snapshots\{snapshottag}) while Cassandra is 
 running:  "The action can't be completed because the folder or a file in it 
 is open in another program.  Close the folder or file and try again" [in 
 Windows Explorer].  If I terminate Cassandra, then I can delete the directory 
 with no problem.
 I expect to be able to move or delete the snapshotted files while Cassandra 
 is running, as this should not affect the runtime operation of Cassandra.
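The 4050-nio2 branch name above points at the root cause: legacy java.io streams open files on Windows without the FILE_SHARE_DELETE share mode, so any open snapshot hardlink blocks deletion, while NIO2's FileChannel.open requests shared delete and allows the file to be removed while still open. A small self-contained demonstration of that NIO2 behavior (on POSIX systems deletion succeeds either way, so the difference only shows up on Windows):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// FileChannel.open (java.nio.file) opens files on Windows with
// FILE_SHARE_DELETE, so the file can be deleted while a channel still holds
// it open; legacy java.io streams do not set that share mode, producing the
// "open in another program" error reported above.
class ShareDeleteDemo {
    // Returns true if the file could be deleted while a channel was open on it.
    static boolean deleteWhileOpen() throws IOException {
        Path p = Files.createTempFile("snapshot-", ".db");
        try (FileChannel ch = FileChannel.open(p, StandardOpenOption.READ)) {
            Files.delete(p); // with java.io.FileInputStream this fails on Windows
        }
        return Files.notExists(p);
    }

    public static void main(String[] args) throws IOException {
        System.out.println("deleted while open: " + deleteWhileOpen());
    }
}
```

This is why reworking Cassandra's readers onto NIO2 lets snapshots be moved or deleted while the node keeps running.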



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6956) SELECT ... LIMIT offset by 1 with static columns

2014-03-31 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-6956:


Attachment: 6956.txt

Currently the columnCounter counts the static block as one row, which throws 
off LIMIT. Attaching a patch to fix that.
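The off-by-one is pure accounting: the static block is stored as a cell group at the head of the partition and was being tallied as a live row, so LIMIT n returned n-1 CQL rows. A hypothetical sketch of the corrected counting logic; the names below are illustrative stand-ins, not the actual ColumnCounter code.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of LIMIT accounting with static columns: cell groups
// are walked in clustering order, and the group whose clustering equals the
// static prefix carries static columns but must not count as a CQL row.
// Names are illustrative, not the real Cassandra 2.0 implementation.
class RowCounter {
    private final String staticPrefix;
    private int liveRows;

    RowCounter(String staticPrefix) { this.staticPrefix = staticPrefix; }

    int countLiveRows(List<String> clusteringGroups) {
        for (String clustering : clusteringGroups) {
            if (clustering.equals(staticPrefix))
                continue; // static block: not a row, so it no longer eats LIMIT
            liveRows++;
        }
        return liveRows;
    }
}
```

With the static block skipped, the repro above would return both inserted rows for LIMIT 2 instead of one.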

 SELECT ... LIMIT offset by 1 with static columns
 

 Key: CASSANDRA-6956
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6956
 Project: Cassandra
  Issue Type: Bug
 Environment: cqlsh 4.1.1 | Cassandra 2.0.6 | CQL spec 3.1.1
Reporter: Pavel Eremeev
Assignee: Sylvain Lebresne
 Fix For: 2.0.7

 Attachments: 6956.txt


 First, repro case:
 {code}
 cqlsh:test> create table test ( pk1 text, pk2 timeuuid, data1 text static, 
 data2 text, PRIMARY KEY( pk1, pk2 ) );
 cqlsh:test> update test set data1 = 'data1', data2 = 'data2' where pk1 = 
 'pk1' and pk2 = now();
 cqlsh:test> update test set data1 = 'data1', data2 = 'data2' where pk1 = 
 'pk1' and pk2 = now();
 cqlsh:test> select * from test limit 1;
  pk1 | pk2  | data1 | data2
 -+--+---+---
  pk1 | null | data1 |  null
 (1 rows)
 cqlsh:test> select * from test limit 2;
  pk1 | pk2  | data1 | data2
 -+--+---+---
  pk1 | 9b068ee0-b8b0-11e3-a345-49baa9ac32e6 | data1 | data2
 (1 rows)
 cqlsh:test> select * from test limit 3;
  pk1 | pk2  | data1 | data2
 -+--+---+---
  pk1 | 9b068ee0-b8b0-11e3-a345-49baa9ac32e6 | data1 | data2
  pk1 | 0af67a40-b8ba-11e3-a345-49baa9ac32e6 | data1 | data2
 (2 rows)
 {code}
 I think that: 1) if this is a static columns feature it should be documented 
 so I can use it safely or 2) it should be fixed (return 2 rows with limit 2 
 for query above).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6764) Using Batch commitlog_sync is slow and doesn't actually batch writes

2014-03-31 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-6764:


Attachment: 6764.fix2.txt

Take two

 Using Batch commitlog_sync is slow and doesn't actually batch writes
 

 Key: CASSANDRA-6764
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6764
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: John Carrino
Assignee: John Carrino
 Fix For: 2.1 beta2

 Attachments: 6764.fix.txt, 6764.fix2.txt, cassandra_6764_v2.patch, 
 cassandra_6764_v3.patch


 The assumption behind batch commit mode is that the client does its own 
 batching and wants to wait until the write is durable before returning.  The 
 problem is that the queue that cassandra uses under the covers only allows 
 for a single ROW (RowMutation) per thread (concurrent_writes).  This means 
 that commitlog_sync_batch_window_in_ms should really be called 
 sleep_between_each_concurrent_writes_rows_in_ms.
 I assume the reason this slipped by for so long is that no one uses batch 
 mode, probably because people say it's slow.  We need durability, so this 
 isn't an option.
 However, it doesn't need to be this slow.
 Also, if you write a row that is larger than the commit log size, it silently 
 fails (with only a warning) to put it in the commit log.  This is not ideal for batch mode.
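The difference between true group commit and one-mutation-per-sync can be sketched with a toy model. This is hypothetical arithmetic, not Cassandra's CommitLog code, and `maxPerSync` is an illustrative parameter, not a real setting:

```java
// Minimal sketch (hypothetical, not Cassandra's CommitLog): with real group
// commit, one fsync makes every queued mutation durable at once, instead of
// performing one sync per mutation with a sleep in between.
public class GroupCommitSketch {
    static int syncsNeeded(int mutations, int maxPerSync) {
        int syncs = 0, pending = mutations;
        while (pending > 0) {           // each sync drains everything queued
            pending -= Math.min(pending, maxPerSync);
            syncs++;
        }
        return syncs;
    }

    public static void main(String[] args) {
        System.out.println(syncsNeeded(100, 1));  // 100: one row per sync
        System.out.println(syncsNeeded(100, 32)); // 4: writers batched per fsync
    }
}
```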





[jira] [Created] (CASSANDRA-6958) upgradesstables does not maintain levels for existing SSTables

2014-03-31 Thread Wei Deng (JIRA)
Wei Deng created CASSANDRA-6958:
---

 Summary: upgradesstables does not maintain levels for existing 
SSTables
 Key: CASSANDRA-6958
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6958
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Wei Deng
Priority: Critical


Initially ran into this issue on a DSE 3.2 (C* 1.2) to DSE 4.0 (C* 2.0) 
upgrade, and then I was able to reproduce it when testing an upgrade from C* 
2.0.5 to C* 2.1-beta so the problem still exists in the latest code.

Basically after you've upgraded to the new version and run nodetool 
upgradesstables on a CF/table that has been using LCS, then all of the non-L0 
SSTables will be changed to L0 in the upgraded SSTables. In other words, they 
don't maintain their level and will have to go through the compaction again. 
The problem is that if you've got thousands of non-L0 SSTables before the 
upgrade, then all of these files showing up in L0 will push the system to do 
STCS and start to build some huge L0 tables. If a user doesn't budget enough 
free space (for example, if they followed the recommended guideline and only 
budgeted 10% of free space because LCS is in use), then this STCS-in-L0 effect 
will make them run out of space.





[jira] [Updated] (CASSANDRA-6958) upgradesstables does not maintain levels for existing SSTables

2014-03-31 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-6958:
--

Assignee: Marcus Eriksson

 upgradesstables does not maintain levels for existing SSTables
 --

 Key: CASSANDRA-6958
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6958
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Wei Deng
Assignee: Marcus Eriksson
Priority: Critical

 Initially ran into this issue on a DSE 3.2 (C* 1.2) to DSE 4.0 (C* 2.0) 
 upgrade, and then I was able to reproduce it when testing an upgrade from C* 
 2.0.5 to C* 2.1-beta so the problem still exists in the latest code.
 Basically after you've upgraded to the new version and run nodetool 
 upgradesstables on a CF/table that has been using LCS, then all of the 
 non-L0 SSTables will be changed to L0 in the upgraded SSTables. In other 
 words, they don't maintain their level and will have to go through the 
 compaction again. The problem is that if you've got thousands of non-L0 
 SSTables before the upgrade, then all of these files showing up in L0 will 
 push the system to do STCS and start to build some huge L0 tables. If a user 
 doesn't budget enough free space (for example, if they followed the recommended 
 guideline and only budgeted 10% of free space because LCS is in use), then 
 this STCS-in-L0 effect will make them run out of space.





[jira] [Commented] (CASSANDRA-6912) SSTableReader.isReplaced does not allow for safe resource cleanup

2014-03-31 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955236#comment-13955236
 ] 

Jonathan Ellis commented on CASSANDRA-6912:
---

Can you summarize the changes in SSTR?

 SSTableReader.isReplaced does not allow for safe resource cleanup
 -

 Key: CASSANDRA-6912
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6912
 Project: Cassandra
  Issue Type: Bug
Reporter: Benedict
Assignee: Benedict
 Fix For: 2.1 beta2


 There are a number of possible race conditions on resource cleanup from the 
 use of cloneWithNewSummarySamplingLevel, because the replacement sstable can 
 be itself replaced/obsoleted while the prior sstable is still referenced 
 (this is actually quite easy with compaction, but can happen in other 
 circumstances less commonly).





[jira] [Assigned] (CASSANDRA-6954) Native protocol v2 spec is missing column type definition for text

2014-03-31 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams reassigned CASSANDRA-6954:
---

Assignee: Sylvain Lebresne

 Native protocol v2 spec is missing column type definition for text
 --

 Key: CASSANDRA-6954
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6954
 Project: Cassandra
  Issue Type: Bug
  Components: Core, Documentation & website
Reporter: Matt Stump
Assignee: Sylvain Lebresne
Priority: Trivial
  Labels: native_protocol

 Native protocol v2 spec is missing column type definition for text. Should be 
 0x000A.
 https://github.com/apache/cassandra/blob/trunk/doc/native_protocol_v2.spec#L526





[jira] [Commented] (CASSANDRA-6477) Partitioned indexes

2014-03-31 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955258#comment-13955258
 ] 

Jonathan Ellis commented on CASSANDRA-6477:
---

bq. The problem is that this means we can't do lazy updates of the index; we 
need to keep the index perfectly (or, eventually perfectly) in sync with the 
base table.

To clarify: Suppose you have your index on the age of users, and we have an 
entry for {{24: user1}} in the index table.  Now two threads update user1's 
age: one to 25, and one to 26.  Each thread will

# Read existing age
# Delete index entry for existing age
# Update user record and insert index entry for new age

The problem is that if each thread reads the existing age of 24, then we'll end 
up with both {{25: user1}} and {{26: user1}} index entries.  (Atomic batches do 
not help with this.)  With normal indexes, we clean up stale entries at 
compaction + read time; we could still do this here, but the performance penalty 
is a lot higher.
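The interleaving above can be simulated directly. This is a hypothetical in-memory model of the race, not Cassandra's secondary-index code:

```java
import java.util.*;

// Hypothetical model of the race described above (not Cassandra code): both
// updaters read age 24 before either writes, so each deletes the 24 entry and
// inserts its own new value -- the index ends up with both 25 and 26.
public class IndexRaceSketch {
    static Map<Integer, Set<String>> runRace() {
        Map<Integer, Set<String>> index = new TreeMap<>();
        index.computeIfAbsent(24, k -> new HashSet<>()).add("user1");

        int readByA = 24, readByB = 24; // both threads read before either writes

        // thread A: delete the entry for the age it read, insert entry for 25
        index.get(readByA).remove("user1");
        index.computeIfAbsent(25, k -> new HashSet<>()).add("user1");

        // thread B: also deletes the (already empty) 24 entry, inserts 26
        index.get(readByB).remove("user1");
        index.computeIfAbsent(26, k -> new HashSet<>()).add("user1");
        return index;
    }

    public static void main(String[] args) {
        Map<Integer, Set<String>> index = runRace();
        // both entries survive; one of them is stale
        System.out.println(index.get(25) + " " + index.get(26));
    }
}
```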

Sylvain had another idea.



 Partitioned indexes
 ---

 Key: CASSANDRA-6477
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6477
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Reporter: Jonathan Ellis
 Fix For: 3.0


 Local indexes are suitable for low-cardinality data, where spreading the 
 index across the cluster is a Good Thing.  However, for high-cardinality 
 data, local indexes require querying most nodes in the cluster even if only a 
 handful of rows is returned.





[jira] [Commented] (CASSANDRA-6477) Partitioned indexes

2014-03-31 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955260#comment-13955260
 ] 

Jonathan Ellis commented on CASSANDRA-6477:
---

Sylvain had a different idea:

Instead of just writing a {{24, user1}} tombstone, write a tombstone that 
indicates what the value changed to: {{24, user1 -> 25}} for one thread, and 
{{24, user1 -> 26}} for the other.

When the tombstones are merged for compaction or read, you can say: wait, two 
people tried to erase that, one with 25 and the other with 26; let's check which 
one has the higher timestamp and delete any obsolete entries.
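A minimal sketch of that merge rule, assuming each racing tombstone carries the new value and a timestamp (a hypothetical representation, not the actual storage format):

```java
import java.util.*;

// Sketch of the merge rule above (hypothetical representation): each racing
// tombstone for (24, user1) records the value it changed to plus a timestamp;
// at merge the highest-timestamp transition wins and every other recorded new
// value is an obsolete index entry to delete.
public class TransitionTombstoneSketch {
    // each tombstone: {newValue, timestamp}
    static List<Integer> obsoleteEntries(long[][] racingTombstones) {
        long[] winner = racingTombstones[0];
        for (long[] t : racingTombstones)
            if (t[1] > winner[1]) winner = t; // highest timestamp wins
        List<Integer> obsolete = new ArrayList<>();
        for (long[] t : racingTombstones)
            if (t != winner) obsolete.add((int) t[0]); // entries to delete
        return obsolete;
    }

    public static void main(String[] args) {
        // thread A recorded 24 -> 25 at ts=1000, thread B recorded 24 -> 26 at ts=1001
        long[][] racing = { {25, 1000}, {26, 1001} };
        System.out.println(obsoleteEntries(racing)); // 25 is obsolete; 26 wins
    }
}
```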


 Partitioned indexes
 ---

 Key: CASSANDRA-6477
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6477
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Reporter: Jonathan Ellis
 Fix For: 3.0


 Local indexes are suitable for low-cardinality data, where spreading the 
 index across the cluster is a Good Thing.  However, for high-cardinality 
 data, local indexes require querying most nodes in the cluster even if only a 
 handful of rows is returned.





[jira] [Commented] (CASSANDRA-6477) Partitioned indexes

2014-03-31 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955262#comment-13955262
 ] 

Jonathan Ellis commented on CASSANDRA-6477:
---

This does mean that a tombstone is not just a tombstone, i.e., we will have 
to keep all tombstones of this type for gcgs or a similar period, not just the 
most recent post-merge tombstone as we do currently.

But it should be relatively rare to have racing tombstones, so the penalty vs. 
the status quo is not actually large in practice.

/cc [~mstump]

 Partitioned indexes
 ---

 Key: CASSANDRA-6477
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6477
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Reporter: Jonathan Ellis
 Fix For: 3.0


 Local indexes are suitable for low-cardinality data, where spreading the 
 index across the cluster is a Good Thing.  However, for high-cardinality 
 data, local indexes require querying most nodes in the cluster even if only a 
 handful of rows is returned.





[jira] [Updated] (CASSANDRA-6892) Cassandra 2.0.x validates Thrift columns incorrectly and causes InvalidRequestException

2014-03-31 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-6892:


Attachment: 6892-2.0-v2.txt

The fact that a default alias could conflict with a column name introduced 
through thrift is definitely not on purpose. And in fact, this is not really 
just limited to default aliases, since with a compact table with 1 clustering 
column, you currently cannot add a value for that clustering column if it 
conflicts with any of the CQL column names (which don't have to be default 
aliases in particular). Or rather, CQL will let you insert it, but if you scrub 
later, it will tell you the data is invalid, which is bogus.

All this to say that this is not really entirely thrift related, and is really 
a bug in getColumnDefinitionFromColumnName (and that's what we should fix). 
This method should never return a definition that is not a REGULAR or STATIC 
one, since those are the only 2 cases where a column definition is stored inside 
an internal column/cell name.  That, plus the fact that getValueValidator() 
should always use getColumnDefinitionFromColumnName, since it's really only 
ever used when dealing with cell names. So anyway, attaching a v2 that does 
those modifications: it fixes getColumnDefinitionFromColumnName and fixes 
getValueValidator to use it (renaming it to getValueValidatorFromColumnName for 
consistency's sake). It also removes Schema.getValueValidator as it is really 
unnecessary (but that's more of a nit). The patch does include the unit tests 
from the first patch and adds a new one for the 'compact table with clustering 
column' case I describe above.


 Cassandra 2.0.x validates Thrift columns incorrectly and causes 
 InvalidRequestException
 ---

 Key: CASSANDRA-6892
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6892
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Christian Spriegel
Assignee: Tyler Hobbs
Priority: Minor
 Fix For: 2.0.7

 Attachments: 6892-2.0-v2.txt, 6892-2.0.txt, CASSANDRA-6892_V1.patch


 I just upgraded my local dev machine to Cassandra 2.0, which now causes one of 
 my automated tests to fail. With the latest 1.2.x it was working fine.
 The Exception I get on my client (using Hector) is:
 {code}
 me.prettyprint.hector.api.exceptions.HInvalidRequestException: 
 InvalidRequestException(why:(Expected 8 or 0 byte long (21)) 
 [MDS_0][MasterdataIndex][key2] failed validation)
   at 
 me.prettyprint.cassandra.service.ExceptionsTranslatorImpl.translate(ExceptionsTranslatorImpl.java:52)
   at 
 me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:265)
   at 
 me.prettyprint.cassandra.model.ExecutingKeyspace.doExecuteOperation(ExecutingKeyspace.java:113)
   at 
 me.prettyprint.cassandra.model.MutatorImpl.execute(MutatorImpl.java:243)
   at 
 me.prettyprint.cassandra.service.template.AbstractColumnFamilyTemplate.executeBatch(AbstractColumnFamilyTemplate.java:115)
   at 
 me.prettyprint.cassandra.service.template.AbstractColumnFamilyTemplate.executeIfNotBatched(AbstractColumnFamilyTemplate.java:163)
   at 
 me.prettyprint.cassandra.service.template.ColumnFamilyTemplate.update(ColumnFamilyTemplate.java:69)
   at 
 com.mycompany.spring3utils.dataaccess.cassandra.AbstractCassandraDAO.doUpdate(AbstractCassandraDAO.java:482)
   
 Caused by: InvalidRequestException(why:(Expected 8 or 0 byte long (21)) 
 [MDS_0][MasterdataIndex][key2] failed validation)
   at 
 org.apache.cassandra.thrift.Cassandra$batch_mutate_result.read(Cassandra.java:20833)
   at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
   at 
 org.apache.cassandra.thrift.Cassandra$Client.recv_batch_mutate(Cassandra.java:964)
   at 
 org.apache.cassandra.thrift.Cassandra$Client.batch_mutate(Cassandra.java:950)
   at 
 me.prettyprint.cassandra.model.MutatorImpl$3.execute(MutatorImpl.java:246)
   at 
 me.prettyprint.cassandra.model.MutatorImpl$3.execute(MutatorImpl.java:1)
   at 
 me.prettyprint.cassandra.service.Operation.executeAndSetResult(Operation.java:104)
   at 
 me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:258)
   ... 46 more
 {code}
 The schema of my column family is:
 {code}
 create column family MasterdataIndex with
 compression_options = {sstable_compression:SnappyCompressor, 
 chunk_length_kb:64} and
 comparator = UTF8Type and
 key_validation_class = 'CompositeType(UTF8Type,LongType)' and
 default_validation_class = BytesType;
 {code}
 From the error message it looks like Cassandra is trying to validate the 
 value with the key-validator! (My 

[jira] [Commented] (CASSANDRA-6523) Unable to contact any seeds! with multi-DC cluster and listen != broadcast address

2014-03-31 Thread Chris Burroughs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955303#comment-13955303
 ] 

Chris Burroughs commented on CASSANDRA-6523:


I believe it's a regression caused by CASSANDRA-5768

 Unable to contact any seeds! with multi-DC cluster and listen != broadcast 
 address
 

 Key: CASSANDRA-6523
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6523
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 1.2.13ish
Reporter: Chris Burroughs

 New cluster:
  * Seeds: list of 6 internal IPs
  * listen address: internal ip
  * broadcast: external ip
 Two-DC cluster, using GPFS where the external IPs are NATed.  Cluster fails 
 to start with "Unable to contact any seeds!"
  * Fail: Try to start a seed node
  * Fail: Try to start two seed nodes at the same time in the same DC
  * Success: Start two seed nodes at the same time in different DCs.
 Presumably related to CASSANDRA-5768





git commit: Add missing entry in protocol spec (#6954)

2014-03-31 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.0 d049017ac -> 07dc6e189


Add missing entry in protocol spec (#6954)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/07dc6e18
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/07dc6e18
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/07dc6e18

Branch: refs/heads/cassandra-2.0
Commit: 07dc6e189176bca07597e3fdb8d9d9f0e4240cef
Parents: d049017
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Mon Mar 31 17:43:32 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Mon Mar 31 17:43:41 2014 +0200

--
 doc/native_protocol_v2.spec | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/07dc6e18/doc/native_protocol_v2.spec
--
diff --git a/doc/native_protocol_v2.spec b/doc/native_protocol_v2.spec
index 44061da..11d380f 100644
--- a/doc/native_protocol_v2.spec
+++ b/doc/native_protocol_v2.spec
@@ -523,6 +523,7 @@ Table of Contents
 0x0007    Double
 0x0008    Float
 0x0009    Int
+0x000A    Text
 0x000B    Timestamp
 0x000C    Uuid
 0x000D    Varchar



[1/2] git commit: Add missing entry in protocol spec (#6954)

2014-03-31 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 3632811fa -> 2bb30af66


Add missing entry in protocol spec (#6954)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/07dc6e18
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/07dc6e18
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/07dc6e18

Branch: refs/heads/cassandra-2.1
Commit: 07dc6e189176bca07597e3fdb8d9d9f0e4240cef
Parents: d049017
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Mon Mar 31 17:43:32 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Mon Mar 31 17:43:41 2014 +0200

--
 doc/native_protocol_v2.spec | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/07dc6e18/doc/native_protocol_v2.spec
--
diff --git a/doc/native_protocol_v2.spec b/doc/native_protocol_v2.spec
index 44061da..11d380f 100644
--- a/doc/native_protocol_v2.spec
+++ b/doc/native_protocol_v2.spec
@@ -523,6 +523,7 @@ Table of Contents
 0x0007    Double
 0x0008    Float
 0x0009    Int
+0x000A    Text
 0x000B    Timestamp
 0x000C    Uuid
 0x000D    Varchar



[2/2] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1

2014-03-31 Thread slebresne
Merge branch 'cassandra-2.0' into cassandra-2.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2bb30af6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2bb30af6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2bb30af6

Branch: refs/heads/cassandra-2.1
Commit: 2bb30af662069aa35ea8e2dd1be4890cd70bf330
Parents: 3632811 07dc6e1
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Mon Mar 31 17:44:22 2014 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Mon Mar 31 17:44:22 2014 +0200

--
 doc/native_protocol_v2.spec | 1 +
 1 file changed, 1 insertion(+)
--




[jira] [Commented] (CASSANDRA-6912) SSTableReader.isReplaced does not allow for safe resource cleanup

2014-03-31 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955313#comment-13955313
 ] 

Benedict commented on CASSANDRA-6912:
-

Sure: basically, instead of a boolean isReplaced flag, we build a linked-list 
chain of replacements, which we synchronise using a shared object to keep 
maintenance of the list across multiple threads simple. Then on close we check 
whether any of the closeable resources differ from the links on either side of 
us, and any that appear in neither of the adjacent links (if any) are closed. 
It's worth pointing out that as of this patch only one of those resources can 
possibly differ, but I think it is more correct to test all of them even 
knowing this, since it is not expensive and is future-proof. Once we've made 
this decision we remove ourselves from the linked list, so that anybody 
behind/ahead will compare only against other still-open resources.

I've also folded the close() and releaseReferences() tidying into one tidy() 
method with a boolean flag for the kind of release we're doing, as this seemed 
more explicit to me.
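The cleanup rule can be sketched as follows. This is a hypothetical model of the replacement chain; names like `closeable` and the resource strings are illustrative, not the actual SSTableReader API:

```java
import java.util.*;

// Hypothetical sketch of the chain-of-replacements idea described above (not
// the actual SSTableReader code): a closing reader releases only the resources
// that neither neighbour in the replacement chain still shares with it.
public class ReplacementChainSketch {
    static Set<String> closeable(List<Set<String>> chain, int i) {
        Set<String> toClose = new HashSet<>(chain.get(i));
        if (i > 0) toClose.removeAll(chain.get(i - 1));                // still used by prior link
        if (i < chain.size() - 1) toClose.removeAll(chain.get(i + 1)); // still used by next link
        return toClose;
    }

    public static void main(String[] args) {
        List<Set<String>> chain = new ArrayList<>();
        chain.add(new HashSet<>(Arrays.asList("data", "summaryA")));
        chain.add(new HashSet<>(Arrays.asList("data", "summaryB"))); // replacement reader
        // closing the old reader must keep "data" alive for its replacement
        System.out.println(closeable(chain, 0)); // only summaryA is safe to close
        chain.remove(0); // unlink, so later closes compare only against live readers
    }
}
```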

 SSTableReader.isReplaced does not allow for safe resource cleanup
 -

 Key: CASSANDRA-6912
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6912
 Project: Cassandra
  Issue Type: Bug
Reporter: Benedict
Assignee: Benedict
 Fix For: 2.1 beta2


 There are a number of possible race conditions on resource cleanup from the 
 use of cloneWithNewSummarySamplingLevel, because the replacement sstable can 
 be itself replaced/obsoleted while the prior sstable is still referenced 
 (this is actually quite easy with compaction, but can happen in other 
 circumstances less commonly).





[jira] [Updated] (CASSANDRA-6892) Cassandra 2.0.x validates Thrift columns incorrectly and causes InvalidRequestException

2014-03-31 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-6892:


Attachment: (was: 6892-2.0-v2.txt)

 Cassandra 2.0.x validates Thrift columns incorrectly and causes 
 InvalidRequestException
 ---

 Key: CASSANDRA-6892
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6892
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Christian Spriegel
Assignee: Tyler Hobbs
Priority: Minor
 Fix For: 2.0.7

 Attachments: 6892-2.0.txt, CASSANDRA-6892_V1.patch


 I just upgraded my local dev machine to Cassandra 2.0, which now causes one of 
 my automated tests to fail. With the latest 1.2.x it was working fine.
 The Exception I get on my client (using Hector) is:
 {code}
 me.prettyprint.hector.api.exceptions.HInvalidRequestException: 
 InvalidRequestException(why:(Expected 8 or 0 byte long (21)) 
 [MDS_0][MasterdataIndex][key2] failed validation)
   at 
 me.prettyprint.cassandra.service.ExceptionsTranslatorImpl.translate(ExceptionsTranslatorImpl.java:52)
   at 
 me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:265)
   at 
 me.prettyprint.cassandra.model.ExecutingKeyspace.doExecuteOperation(ExecutingKeyspace.java:113)
   at 
 me.prettyprint.cassandra.model.MutatorImpl.execute(MutatorImpl.java:243)
   at 
 me.prettyprint.cassandra.service.template.AbstractColumnFamilyTemplate.executeBatch(AbstractColumnFamilyTemplate.java:115)
   at 
 me.prettyprint.cassandra.service.template.AbstractColumnFamilyTemplate.executeIfNotBatched(AbstractColumnFamilyTemplate.java:163)
   at 
 me.prettyprint.cassandra.service.template.ColumnFamilyTemplate.update(ColumnFamilyTemplate.java:69)
   at 
 com.mycompany.spring3utils.dataaccess.cassandra.AbstractCassandraDAO.doUpdate(AbstractCassandraDAO.java:482)
   
 Caused by: InvalidRequestException(why:(Expected 8 or 0 byte long (21)) 
 [MDS_0][MasterdataIndex][key2] failed validation)
   at 
 org.apache.cassandra.thrift.Cassandra$batch_mutate_result.read(Cassandra.java:20833)
   at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
   at 
 org.apache.cassandra.thrift.Cassandra$Client.recv_batch_mutate(Cassandra.java:964)
   at 
 org.apache.cassandra.thrift.Cassandra$Client.batch_mutate(Cassandra.java:950)
   at 
 me.prettyprint.cassandra.model.MutatorImpl$3.execute(MutatorImpl.java:246)
   at 
 me.prettyprint.cassandra.model.MutatorImpl$3.execute(MutatorImpl.java:1)
   at 
 me.prettyprint.cassandra.service.Operation.executeAndSetResult(Operation.java:104)
   at 
 me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:258)
   ... 46 more
 {code}
 The schema of my column family is:
 {code}
 create column family MasterdataIndex with
 compression_options = {sstable_compression:SnappyCompressor, 
 chunk_length_kb:64} and
 comparator = UTF8Type and
 key_validation_class = 'CompositeType(UTF8Type,LongType)' and
 default_validation_class = BytesType;
 {code}
 From the error message it looks like Cassandra is trying to validate the 
 value with the key-validator! (My value in this case is 21 bytes long.)
 I studied the Cassandra 2.0 code and found something wrong. It seems in 
 CFMetaData.addDefaultKeyAliases it passes the KeyValidator into 
 ColumnDefinition.partitionKeyDef. Inside ColumnDefinition the validator is 
 expected to be the value validator!
 In CFMetaData:
 {code}
 private List<ColumnDefinition> addDefaultKeyAliases(List<ColumnDefinition> pkCols)
 {
     for (int i = 0; i < pkCols.size(); i++)
     {
         if (pkCols.get(i) == null)
         {
             Integer idx = null;
             AbstractType<?> type = keyValidator;
             if (keyValidator instanceof CompositeType)
             {
                 idx = i;
                 type = ((CompositeType)keyValidator).types.get(i);
             }
             // For compatibility sake, we call the first alias 'key' rather than 'key1'. This
             // is inconsistent with column alias, but it's probably not worth risking breaking compatibility now.
             ByteBuffer name = ByteBufferUtil.bytes(i == 0 ? DEFAULT_KEY_ALIAS : DEFAULT_KEY_ALIAS + (i + 1));
             ColumnDefinition newDef = ColumnDefinition.partitionKeyDef(name, type, idx); // type is LongType in my case, as it uses keyValidator !!!
             column_metadata.put(newDef.name, newDef);
             pkCols.set(i, newDef);
         }
     }
     return pkCols;
 }
 ...
 public AbstractType<?>

[jira] [Updated] (CASSANDRA-6892) Cassandra 2.0.x validates Thrift columns incorrectly and causes InvalidRequestException

2014-03-31 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-6892:


Attachment: 6892-2.0-v2.txt

 Cassandra 2.0.x validates Thrift columns incorrectly and causes 
 InvalidRequestException
 ---

 Key: CASSANDRA-6892
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6892
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Christian Spriegel
Assignee: Tyler Hobbs
Priority: Minor
 Fix For: 2.0.7

 Attachments: 6892-2.0-v2.txt, 6892-2.0.txt, CASSANDRA-6892_V1.patch


 I just upgraded my local dev machine to Cassandra 2.0, which now causes one of 
 my automated tests to fail. With the latest 1.2.x it was working fine.
 The Exception I get on my client (using Hector) is:
 {code}
 me.prettyprint.hector.api.exceptions.HInvalidRequestException: 
 InvalidRequestException(why:(Expected 8 or 0 byte long (21)) 
 [MDS_0][MasterdataIndex][key2] failed validation)
   at 
 me.prettyprint.cassandra.service.ExceptionsTranslatorImpl.translate(ExceptionsTranslatorImpl.java:52)
   at 
 me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:265)
   at 
 me.prettyprint.cassandra.model.ExecutingKeyspace.doExecuteOperation(ExecutingKeyspace.java:113)
   at 
 me.prettyprint.cassandra.model.MutatorImpl.execute(MutatorImpl.java:243)
   at 
 me.prettyprint.cassandra.service.template.AbstractColumnFamilyTemplate.executeBatch(AbstractColumnFamilyTemplate.java:115)
   at 
 me.prettyprint.cassandra.service.template.AbstractColumnFamilyTemplate.executeIfNotBatched(AbstractColumnFamilyTemplate.java:163)
   at 
 me.prettyprint.cassandra.service.template.ColumnFamilyTemplate.update(ColumnFamilyTemplate.java:69)
   at 
 com.mycompany.spring3utils.dataaccess.cassandra.AbstractCassandraDAO.doUpdate(AbstractCassandraDAO.java:482)
   
 Caused by: InvalidRequestException(why:(Expected 8 or 0 byte long (21)) 
 [MDS_0][MasterdataIndex][key2] failed validation)
   at 
 org.apache.cassandra.thrift.Cassandra$batch_mutate_result.read(Cassandra.java:20833)
   at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
   at 
 org.apache.cassandra.thrift.Cassandra$Client.recv_batch_mutate(Cassandra.java:964)
   at 
 org.apache.cassandra.thrift.Cassandra$Client.batch_mutate(Cassandra.java:950)
   at 
 me.prettyprint.cassandra.model.MutatorImpl$3.execute(MutatorImpl.java:246)
   at 
 me.prettyprint.cassandra.model.MutatorImpl$3.execute(MutatorImpl.java:1)
   at 
 me.prettyprint.cassandra.service.Operation.executeAndSetResult(Operation.java:104)
   at 
 me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:258)
   ... 46 more
 {code}
 The schema of my column family is:
 {code}
 create column family MasterdataIndex with
 compression_options = {sstable_compression:SnappyCompressor, 
 chunk_length_kb:64} and
 comparator = UTF8Type and
 key_validation_class = 'CompositeType(UTF8Type,LongType)' and
 default_validation_class = BytesType;
 {code}
 From the error message it looks like Cassandra is trying to validate the 
 value with the key-validator! (My value in this case is 21 bytes long.)
 I studied the Cassandra 2.0 code and found something wrong. It seems in 
 CFMetaData.addDefaultKeyAliases it passes the KeyValidator into 
 ColumnDefinition.partitionKeyDef. Inside ColumnDefinition the validator is 
 expected to be the value validator!
 In CFMetaData:
 {code}
 private List<ColumnDefinition> addDefaultKeyAliases(List<ColumnDefinition> pkCols)
 {
     for (int i = 0; i < pkCols.size(); i++)
     {
         if (pkCols.get(i) == null)
         {
             Integer idx = null;
             AbstractType<?> type = keyValidator;
             if (keyValidator instanceof CompositeType)
             {
                 idx = i;
                 type = ((CompositeType)keyValidator).types.get(i);
             }
             // For compatibility sake, we call the first alias 'key' rather than 'key1'. This
             // is inconsistent with column alias, but it's probably not worth risking breaking compatibility now.
             ByteBuffer name = ByteBufferUtil.bytes(i == 0 ? DEFAULT_KEY_ALIAS : DEFAULT_KEY_ALIAS + (i + 1));
             ColumnDefinition newDef = ColumnDefinition.partitionKeyDef(name, type, idx); // type is LongType in my case, as it uses keyValidator !!!
             column_metadata.put(newDef.name, newDef);
             pkCols.set(i, newDef);
         }
     }
     return pkCols;
 }
 ...
 public

[jira] [Resolved] (CASSANDRA-6954) Native protocol v2 spec is missing column type definition for text

2014-03-31 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-6954.
-

Resolution: Fixed

Weird. Well, pushed fix, thanks.

 Native protocol v2 spec is missing column type definition for text
 --

 Key: CASSANDRA-6954
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6954
 Project: Cassandra
  Issue Type: Bug
  Components: Core, Documentation & website
Reporter: Matt Stump
Assignee: Sylvain Lebresne
Priority: Trivial
  Labels: native_protocol

 Native protocol v2 spec is missing column type definition for text. Should be 
 0x000A.
 https://github.com/apache/cassandra/blob/trunk/doc/native_protocol_v2.spec#L526





[jira] [Updated] (CASSANDRA-6952) Cannot bind variables to USE statements

2014-03-31 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-6952:


Issue Type: New Feature  (was: Bug)

 Cannot bind variables to USE statements
 ---

 Key: CASSANDRA-6952
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6952
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Matt Stump
Priority: Minor
  Labels: cql3

 Attempting to bind a variable for a USE query results in a syntax error.
 Example Invocation:
 {code}
 ResultSet result = session.execute("USE ?", "system");
 {code}
 Error:
 {code}
 ERROR SYNTAX_ERROR: line 1:4 no viable alternative at input '?', v=2
 {code}





[jira] [Commented] (CASSANDRA-6952) Cannot bind variables to USE statements

2014-03-31 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955327#comment-13955327
 ] 

Sylvain Lebresne commented on CASSANDRA-6952:
-

Marking as new feature, as this is kind of the currently implemented behavior: 
we never support bind markers when a keyspace or a table name is expected. And 
at least for DML statements (which are really the only statements for which we 
allow bind markers), we can't support them, because everything we do during 
preparation requires knowing which table the statement applies to.

That said, it would be trivial to implement in the case of {{USE}}. But given 
what's above, I'm wondering out loud whether it's not more consistent to just 
say "sorry, we never support preparing a keyspace or table name".



[jira] [Commented] (CASSANDRA-6477) Partitioned indexes

2014-03-31 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955333#comment-13955333
 ] 

Sylvain Lebresne commented on CASSANDRA-6477:
-

I'll note that the idea above has the downside to be only eventually 
consistent, but with no good user control about how eventual (we're dependent 
on when read/compaction happen to heal the denormalized index).

 Partitioned indexes
 ---

 Key: CASSANDRA-6477
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6477
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Reporter: Jonathan Ellis
 Fix For: 3.0


 Local indexes are suitable for low-cardinality data, where spreading the 
 index across the cluster is a Good Thing.  However, for high-cardinality 
 data, local indexes require querying most nodes in the cluster even if only a 
 handful of rows is returned.





[jira] [Commented] (CASSANDRA-6952) Cannot bind variables to USE statements

2014-03-31 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955336#comment-13955336
 ] 

Aleksey Yeschenko commented on CASSANDRA-6952:
--

It's only useful for avoiding escaping the keyspace name when it's not all 
lower-case. Which isn't much, really, and ideally you should be using fully 
qualified table names anyway instead of USE.

Consistency feels more important to me, so +1 to "sorry, we never support 
preparing a keyspace or table name".



[jira] [Commented] (CASSANDRA-6952) Cannot bind variables to USE statements

2014-03-31 Thread Matt Stump (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955357#comment-13955357
 ] 

Matt Stump commented on CASSANDRA-6952:
---

Can we get a better error message or a note in the docs/spec?



[jira] [Commented] (CASSANDRA-6477) Partitioned indexes

2014-03-31 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955358#comment-13955358
 ] 

Benedict commented on CASSANDRA-6477:
-

I may be being dim here, but it seems to me that with this scheme you would 
need to write a reverse record of 25, user1-replaced 24, so when you lookup on 
25, you can then read 24 and check there were no competing updates? Either that 
or read the original record, which sort of defeats the point of 
denormalisation...



[jira] [Comment Edited] (CASSANDRA-6477) Partitioned indexes

2014-03-31 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955260#comment-13955260
 ] 

Jonathan Ellis edited comment on CASSANDRA-6477 at 3/31/14 4:49 PM:


Sylvain had a different idea:

Instead of just writing a {{24, user1}} tombstone, write a tombstone that 
indicates what the value changed to: {{24, user1 -> 25}} for one thread, and 
{{24, user1 -> 26}} for the other.

When the tombstones are merged for compaction or read, you can say "wait, 2 
people tried to erase that, one with 25, the other with 26; let's check which 
one has a higher timestamp" and delete any obsolete entries.
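The merge rule sketched above can be modeled in a few lines. This is a hypothetical illustration, not Cassandra's actual tombstone classes; the names {{IndexTombstoneMerge}}, {{Tombstone}}, and {{resolve}} are made up for this sketch. Each index tombstone records the entry it erased, the replacement value, and its write timestamp; competing tombstones for the same old entry are resolved by taking the one with the highest timestamp.

```java
import java.util.Comparator;
import java.util.List;

// Hypothetical model of the proposed tombstone format: "24, user1 -> 25"
// becomes (oldEntry="24:user1", newEntry="25:user1", timestamp).
public class IndexTombstoneMerge {
    static final class Tombstone {
        final String oldEntry;
        final String newEntry;
        final long timestamp;
        Tombstone(String oldEntry, String newEntry, long timestamp) {
            this.oldEntry = oldEntry;
            this.newEntry = newEntry;
            this.timestamp = timestamp;
        }
    }

    // Of all tombstones that erased the same old entry, the one with the
    // highest timestamp names the surviving new entry; every other "new"
    // value is obsolete and should itself be deleted at compaction/read time.
    static String resolve(List<Tombstone> competing) {
        return competing.stream()
                .max(Comparator.comparingLong(t -> t.timestamp))
                .map(t -> t.newEntry)
                .orElseThrow(IllegalArgumentException::new);
    }

    public static void main(String[] args) {
        // Two threads both erased {24, user1}: one moved it to 25, one to 26.
        List<Tombstone> ts = List.of(
                new Tombstone("24:user1", "25:user1", 1000L),
                new Tombstone("24:user1", "26:user1", 1001L));
        System.out.println(resolve(ts)); // prints "26:user1"
    }
}
```

Under this sketch, the loser's entry ({{25: user1}}) is identified and can be deleted whenever the two tombstones meet in the same merge.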



was (Author: jbellis):
Sylvain had a different idea:

Instead of just writing a {{24, user1}} tombstone, write a tombstone that 
indicates what the value changed to: {{24, user1 - 25}} for one thread, and 
{{24, user1 - 26}} for the other.

When the tombstones is merged for compaction or read, you can say wait 2 
people tried to erase that, one with 25 the other with 26, let's check which 
was has a higher timestamp and delete any obsolete entries.




[jira] [Comment Edited] (CASSANDRA-6477) Partitioned indexes

2014-03-31 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955258#comment-13955258
 ] 

Jonathan Ellis edited comment on CASSANDRA-6477 at 3/31/14 4:49 PM:


bq. The problem is that this means we can't do lazy updates of the index; we 
need to keep the index perfectly (or, eventually perfectly) in sync with the 
base table.

To clarify: Suppose you have your index on the age of users, and we have an 
entry for {{24: user1}} in the index table.  Now two threads update user1's 
age; one to 25, and one to 26.  Each thread will

# Read existing age
# Delete index entry for existing age
# Update user record and insert index entry for new age

The problem is that if each thread reads the existing age of 24, then we'll end 
up with both {{25: user1}} and {{26: user1}} index entries.  (Atomic batches do 
not help with this.)  With normal indexes, we clean up stale entries at 
compaction + read time; we could still do this here, but the performance 
penalty is a lot higher.
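The interleaving in the three steps above can be reproduced deterministically with a toy in-memory model. This is a hypothetical sketch, not Cassandra code; {{IndexRaceDemo}} and {{runRace}} are invented names, and plain maps stand in for the base table and the index table. If both writers read age 24 before either one deletes, both new index entries survive.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy model of a base table plus a denormalized age index, showing how the
// read-delete-insert race leaves index entries for both 25 and 26.
public class IndexRaceDemo {
    static Map<Integer, Set<String>> runRace() {
        Map<String, Integer> users = new HashMap<>();         // base table
        Map<Integer, Set<String>> ageIndex = new HashMap<>(); // index table
        users.put("user1", 24);
        ageIndex.computeIfAbsent(24, k -> new HashSet<>()).add("user1");

        // Step 1 for both writers: each reads the existing age first.
        int readByA = users.get("user1");
        int readByB = users.get("user1");

        // Writer A: delete the index entry it read, then insert the new one.
        ageIndex.getOrDefault(readByA, new HashSet<>()).remove("user1");
        users.put("user1", 25);
        ageIndex.computeIfAbsent(25, k -> new HashSet<>()).add("user1");

        // Writer B: its delete targets the same, already-removed old entry,
        // so it does nothing useful before B inserts its own new entry.
        ageIndex.getOrDefault(readByB, new HashSet<>()).remove("user1");
        users.put("user1", 26);
        ageIndex.computeIfAbsent(26, k -> new HashSet<>()).add("user1");

        return ageIndex; // both 25 and 26 now claim user1
    }

    public static void main(String[] args) {
        Map<Integer, Set<String>> idx = runRace();
        System.out.println(idx.get(25)); // prints "[user1]" (stale entry)
        System.out.println(idx.get(26)); // prints "[user1]"
    }
}
```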



was (Author: jbellis):
bq. The problem is that this means we can't do lazy updates of the index; we 
need to keep the index perfectly (or, eventually perfectly) in sync with the 
base table.

To clarify: Suppose you have you index on the age of users, and we have an 
entry for {{24: user1}} in the index table.  Now two threads update user1's 
age; one to 25, and one to 26.  Each thread will

# Read existing age
# Delete index entry for existing age
# Update user record and insert index entry for new age

The problem is if each thread reads the existing age of 24, then we'll end up 
with both {{25: user1}} and {{26: user1} index entries.  (Atomic batches do not 
help with this.)  With normal indexes, we clean up stale entries at compaction 
+ read time; we could still do this here but the performance penalty is a lot 
higher.

Sylvain had another idea.





[jira] [Comment Edited] (CASSANDRA-6477) Partitioned indexes

2014-03-31 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955260#comment-13955260
 ] 

Jonathan Ellis edited comment on CASSANDRA-6477 at 3/31/14 4:50 PM:


Sylvain had a different idea:

Instead of just writing a {{24, user1}} tombstone, write a tombstone that 
indicates what the value changed to: {{24, user1 -> 25}} for one thread, and 
{{24, user1 -> 26}} for the other.

When the tombstones are merged for compaction or read you can say, Wait! 2 
people tried to erase that, one with 25 the other with 26, let's check which 
one has a higher timestamp and delete any obsolete entries.



was (Author: jbellis):
Sylvain had a different idea:

Instead of just writing a {{24, user1}} tombstone, write a tombstone that 
indicates what the value changed to: {{24, user1 - 25}} for one thread, and 
{{24, user1 - 26}} for the other.

When the tombstones are merged for compaction or read, you can say wait 2 
people tried to erase that, one with 25 the other with 26, let's check which 
was has a higher timestamp and delete any obsolete entries.




[jira] [Comment Edited] (CASSANDRA-6477) Partitioned indexes

2014-03-31 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955384#comment-13955384
 ] 

Jeremiah Jordan edited comment on CASSANDRA-6477 at 3/31/14 5:02 PM:
-

bq. I'll note that the idea above has the downside to be only eventually 
consistent, but with no good user control about how eventual (we're dependent 
on when read/compaction happen to heal the denormalized index).

I think this might be OK, as this is really only an issue in the case of a 
race, so both tombstones will end up in memtables and be resolved immediately, 
or in sstables written near each other in time (which should hopefully compact 
together fairly quickly).  In both cases resolving the conflict *should* happen 
fairly quickly, though there are probably edge cases.

The issue I see here is that compaction now has to issue queries, and we need 
to make sure those deletes issued by compaction MUST happen, or else the index 
will get out of whack, and we will have already thrown out the extra tombstone.


was (Author: jjordan):
bq. I'll note that the idea above has the downside to be only eventually 
consistent, but with no good user control about how eventual (we're dependent 
on when read/compaction happen to heal the denormalized index).

I think this might be OK, as this is really only an issue in the case of a 
race, so both tombstones will end up in meltables and be resolved immediately, 
or in sstables written near each other in time (which should hopefully compact 
together fairly quickly).  In both cases resolving the conflict *should* happen 
fairly quickly, though there are probably edge cases.

The issue I see here is that compaction now has to issue queries, and we need 
to make sure those deletes issue by compaction MUST happen, or else the index 
will get out of whack, and we will have already thrown out the extra tombstone.



[jira] [Commented] (CASSANDRA-6477) Partitioned indexes

2014-03-31 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955384#comment-13955384
 ] 

Jeremiah Jordan commented on CASSANDRA-6477:


bq. I'll note that the idea above has the downside to be only eventually 
consistent, but with no good user control about how eventual (we're dependent 
on when read/compaction happen to heal the denormalized index).

I think this might be OK, as this is really only an issue in the case of a 
race, so both tombstones will end up in memtables and be resolved immediately, 
or in sstables written near each other in time (which should hopefully compact 
together fairly quickly).  In both cases resolving the conflict *should* happen 
fairly quickly, though there are probably edge cases.

The issue I see here is that compaction now has to issue queries, and we need 
to make sure those deletes issued by compaction MUST happen, or else the index 
will get out of whack, and we will have already thrown out the extra tombstone.



[jira] [Commented] (CASSANDRA-6956) SELECT ... LIMIT offset by 1 with static columns

2014-03-31 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955387#comment-13955387
 ] 

Aleksey Yeschenko commented on CASSANDRA-6956:
--

+1 (would rename lastGroupIsStatic to previousGroupWasStatic, and last to 
previous, but that's just a personal preference).

 SELECT ... LIMIT offset by 1 with static columns
 

 Key: CASSANDRA-6956
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6956
 Project: Cassandra
  Issue Type: Bug
 Environment: cqlsh 4.1.1 | Cassandra 2.0.6 | CQL spec 3.1.1
Reporter: Pavel Eremeev
Assignee: Sylvain Lebresne
 Fix For: 2.0.7

 Attachments: 6956.txt


 First, repro case:
 {code}
 cqlsh:test> create table test ( pk1 text, pk2 timeuuid, data1 text static, data2 text, PRIMARY KEY( pk1, pk2 ) );
 cqlsh:test> update test set data1 = 'data1', data2 = 'data2' where pk1 = 'pk1' and pk2 = now();
 cqlsh:test> update test set data1 = 'data1', data2 = 'data2' where pk1 = 'pk1' and pk2 = now();
 cqlsh:test> select * from test limit 1;

  pk1 | pk2  | data1 | data2
 -----+------+-------+-------
  pk1 | null | data1 |  null

 (1 rows)

 cqlsh:test> select * from test limit 2;

  pk1 | pk2                                  | data1 | data2
 -----+--------------------------------------+-------+-------
  pk1 | 9b068ee0-b8b0-11e3-a345-49baa9ac32e6 | data1 | data2

 (1 rows)

 cqlsh:test> select * from test limit 3;

  pk1 | pk2                                  | data1 | data2
 -----+--------------------------------------+-------+-------
  pk1 | 9b068ee0-b8b0-11e3-a345-49baa9ac32e6 | data1 | data2
  pk1 | 0af67a40-b8ba-11e3-a345-49baa9ac32e6 | data1 | data2

 (2 rows)
 {code}
 I think that: 1) if this is a static columns feature it should be documented 
 so I can use it safely or 2) it should be fixed (return 2 rows with limit 2 
 for query above).





[jira] [Commented] (CASSANDRA-6477) Partitioned indexes

2014-03-31 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955394#comment-13955394
 ] 

Jeremiah Jordan commented on CASSANDRA-6477:


bq. I may be being dim here, but it seems to me that with this scheme you would 
need to write a reverse record of 25, user1-replaced 24, so when you lookup on 
25, you can then read 24 and check there were no competing updates? Either that 
or read the original record, which sort of defeats the point of 
denormalisation...

No, you resolve it in compaction or on lookup of 24.  Compaction sees the two 
different tombstones for 24 and then resolves them to the correct new value, 
deleting the wrong new value.  Or a look up of 24 pulls in the two 
tombstones, resolves them to the correct one, deletes the wrong one, and 
returns none to the user.



[jira] [Updated] (CASSANDRA-6958) upgradesstables does not maintain levels for existing SSTables

2014-03-31 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-6958:
---

Attachment: 0001-Use-LeveledCompactionTask-for-upgradesstables-when-L.patch

 upgradesstables does not maintain levels for existing SSTables
 --

 Key: CASSANDRA-6958
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6958
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Wei Deng
Assignee: Marcus Eriksson
Priority: Critical
 Fix For: 2.0.7

 Attachments: 
 0001-Use-LeveledCompactionTask-for-upgradesstables-when-L.patch


 Initially ran into this issue on a DSE 3.2 (C* 1.2) to DSE 4.0 (C* 2.0) 
 upgrade, and then I was able to reproduce it when testing an upgrade from C* 
 2.0.5 to C* 2.1-beta so the problem still exists in the latest code.
 Basically after you've upgraded to the new version and run nodetool 
 upgradesstables on a CF/table that has been using LCS, then all of the 
 non-L0 SSTables will be changed to L0 in the upgraded SSTables. In other 
 words, they don't maintain their level and will have to go through the 
 compaction again. The problem is that if you've got thousands of non-L0 
 SSTables before the upgrade, then all of these files showing up in L0 will 
 push the system to do STCS and start to build some huge L0 tables. If a user 
 doesn't budget enough free space (for example, if they used the recommended 
 guideline and only budgeted 10% of free space because LCS is in use), then 
 this STCS in L0 effect will have them run out of space.





[jira] [Updated] (CASSANDRA-6958) upgradesstables does not maintain levels for existing SSTables

2014-03-31 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-6958:
---

Priority: Major  (was: Critical)



[jira] [Commented] (CASSANDRA-6477) Partitioned indexes

2014-03-31 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955397#comment-13955397
 ] 

Benedict commented on CASSANDRA-6477:
-

bq. No, you resolve it in compaction or on lookup of 24.

That only resolves deletes. How do you resolve *seeing the wrong data*?



[jira] [Commented] (CASSANDRA-6477) Partitioned indexes

2014-03-31 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955400#comment-13955400
 ] 

Jeremiah Jordan commented on CASSANDRA-6477:


If you have the race, you may briefly see the other value, but it's a race, and 
it would be just like you had read before update #2 happened, so as long as the 
period of time where you can get the wrong data is small, it is ok.



[jira] [Commented] (CASSANDRA-6477) Partitioned indexes

2014-03-31 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955412#comment-13955412
 ] 

Benedict commented on CASSANDRA-6477:
-

[~jjordan] is that in response to me? Because I don't see how this would work: 
if both deleted 24 and inserted 25 and 26, then we now have a record of both 25 
and 26 mapping to user1, despite only one of them being true, and no means of 
tidying it up. So people can indefinitely look up on both values. This is only 
resolved if we look up the original record after every 2i result, which maybe 
was always the plan. I'm not sure.



[jira] [Commented] (CASSANDRA-6477) Partitioned indexes

2014-03-31 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955399#comment-13955399
 ] 

Jonathan Ellis commented on CASSANDRA-6477:
---

That's why Sylvain said "it's eventually consistent, but with no good user 
control about how eventual."



[jira] [Commented] (CASSANDRA-6106) QueryState.getTimestamp() & FBUtilities.timestampMicros() reads current timestamp with System.currentTimeMillis() * 1000 instead of System.nanoTime() / 1000

2014-03-31 Thread Christopher Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955416#comment-13955416
 ] 

Christopher Smith commented on CASSANDRA-6106:
--

Isn't using gettimeofday or clock_gettime crossing the "custom JNI code" 
rubicon anyway?

 QueryState.getTimestamp() & FBUtilities.timestampMicros() reads current 
 timestamp with System.currentTimeMillis() * 1000 instead of System.nanoTime() 
 / 1000
 

 Key: CASSANDRA-6106
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6106
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: DSE Cassandra 3.1, but also HEAD
Reporter: Christopher Smith
Assignee: Benedict
Priority: Minor
  Labels: timestamps
 Fix For: 2.1 beta2

 Attachments: microtimstamp.patch, microtimstamp_random.patch, 
 microtimstamp_random_rev2.patch


 I noticed this blog post: http://aphyr.com/posts/294-call-me-maybe-cassandra 
 mentioned issues with millisecond rounding in timestamps and was able to 
 reproduce the issue. If I specify a timestamp in a mutating query, I get 
 microsecond precision, but if I don't, I get timestamps rounded to the 
 nearest millisecond, at least for my first query on a given connection, which 
 substantially increases the possibilities of collision.
 I believe I found the offending code, though I am by no means sure this is 
 comprehensive. I think we probably need a fairly comprehensive replacement of 
 all uses of System.currentTimeMillis() with System.nanoTime().
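 The rounding described above is easy to demonstrate. The following is a 
 standalone sketch, not the actual FBUtilities code; {{TimestampGranularity}} 
 and {{timestampMicrosFromMillis}} are invented names mirroring the pattern 
 under discussion. A "microsecond" timestamp derived by multiplying a 
 millisecond clock by 1000 can only ever be a multiple of 1000, so the low 
 three digits carry no extra precision and concurrent writers within the same 
 millisecond collide on the same timestamp.

```java
public class TimestampGranularity {
    // Mirrors the pattern under discussion: deriving "microseconds" by
    // multiplying milliseconds by 1000 yields timestamps rounded to the
    // millisecond.
    static long timestampMicrosFromMillis() {
        return System.currentTimeMillis() * 1000;
    }

    public static void main(String[] args) {
        long ts = timestampMicrosFromMillis();
        // The last three digits are always zero: no sub-millisecond entropy.
        System.out.println(ts % 1000 == 0); // prints "true"
    }
}
```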





[jira] [Updated] (CASSANDRA-6958) upgradesstables does not maintain levels for existing SSTables

2014-03-31 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-6958:


Reviewer: Yuki Morishita



[jira] [Commented] (CASSANDRA-6958) upgradesstables does not maintain levels for existing SSTables

2014-03-31 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955431#comment-13955431
 ] 

Yuki Morishita commented on CASSANDRA-6958:
---

We need to cover standalone offline upgrade (sstableupgrade) as well.



[jira] [Commented] (CASSANDRA-4050) Unable to remove snapshot files on Windows while original sstables are live

2014-03-31 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955439#comment-13955439
 ] 

Joshua McKenzie commented on CASSANDRA-4050:


Good catch on getPosition - I accounted for that in current() but that hadn't 
triggered on any testing and was an oversight.

I kept AbstractDataInput and AbstractDataInputSmall separate in the type 
hierarchy because I didn't want to push the int -> long signature change down 
to all the classes that implemented the base.  I'm not sure if the added 
footprint justifies the added complexity or not - I was trying to minimize 
changes to unrelated classes due to the loss of RAF code.  I didn't like it, 
but I don't like the alternative that much either.  It looks like we run the 
risk of Bad Things if someone does a MemoryInputStream.skipBytes that pushes 
the position past Max Int - this impl has us casting off the remainder on a 
seek call, so you could end up in negative territory.

As for the tidying up - looks good to me.  Thanks for taking the time to do 
that - clean idiomatic usage of the nio API's clearly makes things easier to 
parse.

Tests on linux look good, snapshots on Windows behave w/benedict's revisions 
and no mmap, and read performance looks comparable so I +1 the changes with the 
above caveat.
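The "negative territory" risk raised above is plain integer truncation, and can be shown standalone. A minimal demo, where castPosition is a hypothetical stand-in for the long-to-int cast done on seek, not the real method:

```java
public class OverflowDemo
{
    // Cast a long file position down to int, as a careless seek might;
    // the high bits are simply cast off (JLS narrowing conversion).
    static int castPosition(long position)
    {
        return (int) position;
    }

    public static void main(String[] args)
    {
        long past = (long) Integer.MAX_VALUE + 5; // position pushed past Max Int
        int seen = castPosition(past);            // high bits cast off
        System.out.println(seen);                 // negative: -2147483644
    }
}
```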

 Unable to remove snapshot files on Windows while original sstables are live
 ---

 Key: CASSANDRA-4050
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4050
 Project: Cassandra
  Issue Type: Bug
 Environment: Windows 7
Reporter: Jim Newsham
Assignee: Joshua McKenzie
Priority: Minor
 Attachments: CASSANDRA-4050_v1.patch


 I'm using Cassandra 1.0.8, on Windows 7.  When I take a snapshot of the 
 database, I find that I am unable to delete the snapshot directory (i.e., dir 
 named {datadir}\{keyspacename}\snapshots\{snapshottag}) while Cassandra is 
 running: "The action can't be completed because the folder or a file in it 
 is open in another program. Close the folder or file and try again" [in 
 Windows Explorer].  If I terminate Cassandra, then I can delete the directory 
 with no problem.
 I expect to be able to move or delete the snapshotted files while Cassandra 
 is running, as this should not affect the runtime operation of Cassandra.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-4050) Unable to remove snapshot files on Windows while original sstables are live

2014-03-31 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955451#comment-13955451
 ] 

Benedict commented on CASSANDRA-4050:
-

bq. It looks like we run the risk of Bad Things if someone does a 
MemoryInputStream.skipBytes that pushes the position past Max Int - this impl 
has us casting off the remainder on a seek call so you could end up in negative 
territory.

How so? The MemoryInputStream defines what its limit is, and the skipBytes 
method ensures it never goes above this. So seek() can never be called with a 
value that is out of range (since it is a protected method). We could put in an 
assert if we want to be doubly certain, however, and that's probably not a bad 
idea for simple declaration of intent.

I think the reduced code duplication (from readLine and skipBytes now being 
shared), and cleaner hierarchy is preferable, especially as ADISmall is not a 
very clear distinction from ADI. Think the overall footprint is reduced rather 
than increased...?

bq. Thanks for taking the time to do that - clean idiomatic usage of the nio 
API's clearly makes things easier to parse.

I find the NIO library tough to parse at the best of times, and wanted to be 
sure I was reading it right, so it was a freebie to change as I reviewed :)



[jira] [Updated] (CASSANDRA-6958) upgradesstables does not maintain levels for existing SSTables

2014-03-31 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-6958:
---

Attachment: 0001-Use-6958-v2.patch

ah right, attached



[jira] [Commented] (CASSANDRA-4050) Unable to remove snapshot files on Windows while original sstables are live

2014-03-31 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955474#comment-13955474
 ] 

Joshua McKenzie commented on CASSANDRA-4050:


{quote}
the skipBytes method ensures it never goes above this
{quote}

How is skipBytes protecting against blowing past our limit?  (note: me just 
being dense here is not out of the question)
{code:java, title=skipBytes}
public int skipBytes(int n) throws IOException
{
    if (n <= 0)
        return 0;
    seek(getPosition() + n);
    return position;
}
{code}

It looks like this exposes seek() to the outside world with a protection 
against negative inputs but not much else.  That being said - the old code 
looks like it has the same potential problem:

{code:java, title=old code}
public int skipBytes(int n) throws IOException
{
seekInternal(getPosition() + n);
return position;
}
{code}
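For comparison, a bounds-safe variant might clamp the skip against the stream's limit before seeking. A sketch only - SafeSkip, its fields, and the return-skipped-count behaviour are illustrative assumptions, not the actual ADI/MIS implementation:

```java
public class SafeSkip
{
    private int position;
    private final int limit; // total readable bytes

    public SafeSkip(int limit) { this.limit = limit; }

    public int getPosition() { return position; }

    protected void seek(int pos)
    {
        // Declaration of intent: callers must stay inside [0, limit].
        assert pos >= 0 && pos <= limit : "seek out of range: " + pos;
        position = pos;
    }

    // Clamp n to the bytes actually remaining, per the DataInput
    // contract that skipBytes skips between 0 and n bytes; this also
    // keeps position + n from ever overflowing past the limit.
    public int skipBytes(int n)
    {
        if (n <= 0)
            return 0;
        int skipped = Math.min(n, limit - position);
        seek(position + skipped);
        return skipped;
    }

    public static void main(String[] args)
    {
        SafeSkip s = new SafeSkip(100);
        int skipped = s.skipBytes(Integer.MAX_VALUE); // would blow past limit if unclamped
        System.out.println(skipped + " " + s.getPosition()); // 100 100
    }
}
```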



[jira] [Commented] (CASSANDRA-4050) Unable to remove snapshot files on Windows while original sstables are live

2014-03-31 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955480#comment-13955480
 ] 

Benedict commented on CASSANDRA-4050:
-

Ah, this is my failure to delete the skipBytes method from MIS, as it now 
occurs in ADI (in a safe manner).



[jira] [Commented] (CASSANDRA-4050) Unable to remove snapshot files on Windows while original sstables are live

2014-03-31 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955483#comment-13955483
 ] 

Benedict commented on CASSANDRA-4050:
-

In fact, it looks like that is simply a bug that has always been present - the 
new behaviour is no worse than the old, but deleting it is still the correct 
fix.

Good spot.



[jira] [Commented] (CASSANDRA-4050) Unable to remove snapshot files on Windows while original sstables are live

2014-03-31 Thread Joshua McKenzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955486#comment-13955486
 ] 

Joshua McKenzie commented on CASSANDRA-4050:


Sure enough.  Given that the docs for skipBytes say 0-n bytes are skipped, I 
think the code in ADI looks good.  I'd much rather we not add more types to 
the hierarchy in this context.



[jira] [Commented] (CASSANDRA-6924) Data Inserted Immediately After Secondary Index Creation is not Indexed

2014-03-31 Thread Ryan McGuire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955489#comment-13955489
 ] 

Ryan McGuire commented on CASSANDRA-6924:
-

I ported [~thobbs]' pycassa test to [a CQL based 
dtest|https://github.com/riptano/cassandra-dtest/commit/36960090d219ab8dbc7f108faa91c3ea5cea2bec].
 It's failing on 1.2, 2.0, and 2.1 HEAD.

 Data Inserted Immediately After Secondary Index Creation is not Indexed
 ---

 Key: CASSANDRA-6924
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6924
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tyler Hobbs
 Fix For: 2.0.7

 Attachments: repro.py


 The head of the cassandra-1.2 branch (currently 1.2.16-tentative) contains a 
 regression from 1.2.15.  Data that is inserted immediately after secondary 
 index creation may never get indexed.
 You can reproduce the issue with a [pycassa integration 
 test|https://github.com/pycassa/pycassa/blob/master/tests/test_autopacking.py#L793]
  by running:
 {noformat}
 nosetests tests/test_autopacking.py:TestKeyValidators.test_get_indexed_slices
 {noformat}
 from the pycassa directory.
 The operation order goes like this:
 # create CF
 # create secondary index
 # insert data
 # query secondary index
 If a short sleep is added in between steps 2 and 3, the data gets indexed and 
 the query is successful.
 If a sleep is only added in between steps 3 and 4, some of the data is never 
 indexed and the query will return incomplete results.  This appears to be the 
 case even if the sleep is relatively long (30s), which makes me think the 
 data may never get indexed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6924) Data Inserted Immediately After Secondary Index Creation is not Indexed

2014-03-31 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-6924:


Reproduced In: 2.1 beta1, 2.0.6, 1.2.16



[jira] [Created] (CASSANDRA-6959) Reusing Keyspace and CF raises assertion errors

2014-03-31 Thread Ryan McGuire (JIRA)
Ryan McGuire created CASSANDRA-6959:
---

 Summary: Reusing Keyspace and CF raises assertion errors
 Key: CASSANDRA-6959
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6959
 Project: Cassandra
  Issue Type: Bug
Reporter: Ryan McGuire


The [dtest I 
introduced|https://github.com/riptano/cassandra-dtest/commit/36960090d219ab8dbc7f108faa91c3ea5cea2bec]
 to test CASSANDRA-6924 introduces some log errors which I think may be related 
to  CASSANDRA-5202. 

On 2.1 :

{code}
ERROR [MigrationStage:1] 2014-03-31 14:36:43,463 
CommitLogSegmentManager.java:306 - Failed waiting for a forced recycle of 
in-use commit log segments
java.lang.AssertionError: null
at 
org.apache.cassandra.db.commitlog.CommitLogSegmentManager.forceRecycleAll(CommitLogSegmentManager.java:301)
 ~[main/:na]
at 
org.apache.cassandra.db.commitlog.CommitLog.forceRecycleAllSegments(CommitLog.java:160)
 [main/:na]
at 
org.apache.cassandra.db.DefsTables.dropColumnFamily(DefsTables.java:497) 
[main/:na]
at 
org.apache.cassandra.db.DefsTables.mergeColumnFamilies(DefsTables.java:296) 
[main/:na]
at org.apache.cassandra.db.DefsTables.mergeSchema(DefsTables.java:181) 
[main/:na]
at 
org.apache.cassandra.db.DefinitionsUpdateVerbHandler$1.runMayThrow(DefinitionsUpdateVerbHandler.java:49)
 [main/:na]
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
[main/:na]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
[na:1.7.0_51]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
[na:1.7.0_51]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
[na:1.7.0_51]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
[na:1.7.0_51]
at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
{code}

On 2.0: 

{code}
ERROR [ReadStage:3] 2014-03-31 13:28:11,014 CassandraDaemon.java (line 198) 
Exception in thread Thread[ReadStage:3,5,main]
java.lang.AssertionError
at 
org.apache.cassandra.db.filter.ExtendedFilter$WithClauses.getExtraFilter(ExtendedFilter.java:258)
at 
org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1744)
at 
org.apache.cassandra.db.ColumnFamilyStore.getRangeSlice(ColumnFamilyStore.java:1699)
at 
org.apache.cassandra.db.PagedRangeCommand.executeLocally(PagedRangeCommand.java:119)
at 
org.apache.cassandra.service.RangeSliceVerbHandler.doVerb(RangeSliceVerbHandler.java:39)
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:60)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
{code}

To reproduce, you may need to comment out the assertion in that test, as it is 
not 100% reproducible on the first try.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6924) Data Inserted Immediately After Secondary Index Creation is not Indexed

2014-03-31 Thread Ryan McGuire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955505#comment-13955505
 ] 

Ryan McGuire commented on CASSANDRA-6924:
-

Also, this test raises some assertion errors that may be related, but I created 
CASSANDRA-6959 for them.



[jira] [Commented] (CASSANDRA-6236) Update native protocol server to Netty 4

2014-03-31 Thread Norman Maurer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955528#comment-13955528
 ] 

Norman Maurer commented on CASSANDRA-6236:
--

There is now even the recorded video for this:

https://www.youtube.com/watch?v=_GRIyCMNGGI

Anyway, what do you guys think about having me do the heavy work and submit a 
patch?

 Update native protocol server to Netty 4
 

 Key: CASSANDRA-6236
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6236
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Benedict
Priority: Minor
 Fix For: 2.1 beta2


 We should switch to Netty 4 at some point, since it's the future.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6477) Partitioned indexes

2014-03-31 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955527#comment-13955527
 ] 

Jeremiah Jordan commented on CASSANDRA-6477:


[~benedict] two threads update age = null, generating two tombstones {{24, 
user1-null}} - those are OK and not a problem since they update to the same 
value, but we also need to generate {{null: user1}} as an append to the index. 
 Then update age=25 generates tombstone {{null, user1-25}} and age=26 
generates tombstone {{null, user1-26}}.  Those two tombstones will be resolved 
on compaction/memtable clash, or when someone asks for age=null in a query.  
This will require keeping track of null columns in the index.  Something 
similar would need to be done for a full delete of the row.

 Partitioned indexes
 ---

 Key: CASSANDRA-6477
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6477
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Reporter: Jonathan Ellis
 Fix For: 3.0


 Local indexes are suitable for low-cardinality data, where spreading the 
 index across the cluster is a Good Thing.  However, for high-cardinality 
 data, local indexes require querying most nodes in the cluster even if only a 
 handful of rows is returned.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6236) Update native protocol server to Netty 4

2014-03-31 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955535#comment-13955535
 ] 

Benedict commented on CASSANDRA-6236:
-

A quick bit of Google-due-diligence suggests you *might* be capable of it. I'm 
willing to give you a chance anyway :)



[jira] [Commented] (CASSANDRA-6236) Update native protocol server to Netty 4

2014-03-31 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955538#comment-13955538
 ] 

Benedict commented on CASSANDRA-6236:
-

i.e. that would be great!



[jira] [Updated] (CASSANDRA-6959) Reusing Keyspace and CF names raises assertion errors

2014-03-31 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-6959:


Reproduced In: 2.1 beta1, 2.0.6  (was: 2.0.6, 2.1 beta1)
  Summary: Reusing Keyspace and CF names raises assertion errors  (was: 
Reusing Keyspace and CF raises assertion errors)



[jira] [Commented] (CASSANDRA-6236) Update native protocol server to Netty 4

2014-03-31 Thread Norman Maurer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955542#comment-13955542
 ] 

Norman Maurer commented on CASSANDRA-6236:
--

lol.. Ok tell me from which branch etc to start and I will come back to you 
guys in the next days ;)



[jira] [Commented] (CASSANDRA-6236) Update native protocol server to Netty 4

2014-03-31 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955545#comment-13955545
 ] 

Aleksey Yeschenko commented on CASSANDRA-6236:
--

[~norman] cassandra-2.1



[jira] [Created] (CASSANDRA-6960) Cassandra requires allow filtering

2014-03-31 Thread J.B. Langston (JIRA)
J.B. Langston created CASSANDRA-6960:


 Summary: Cassandra requires allow filtering
 Key: CASSANDRA-6960
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6960
 Project: Cassandra
  Issue Type: Bug
Reporter: J.B. Langston






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (CASSANDRA-6960) Cassandra requires ALLOW FILTERING for a range scan

2014-03-31 Thread J.B. Langston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

J.B. Langston updated CASSANDRA-6960:
-

Reproduced In: 2.0.5
  Description: 
Given this table definition:

{code}
CREATE TABLE metric_log_a (
  destination_id text,
  rate_plan_id int,
  metric_name text,
  extraction_date 'org.apache.cassandra.db.marshal.TimestampType',
  metric_value text,
  PRIMARY KEY (destination_id, rate_plan_id, metric_name, extraction_date)
);
{code}

It seems that Cassandra should be able to perform the following query without 
ALLOW FILTERING:

{code}
select destination_id, rate_plan_id, metric_name, extraction_date, metric_value 
from metric_log_a 
where token(destination_id) > ? 
and token(destination_id) <= ? 
and rate_plan_id=90 
and metric_name='minutesOfUse' 
and extraction_date >= '2014-03-05' 
and extraction_date <= '2014-03-05' 
allow filtering;
{code}

However, it will refuse to run unless ALLOW FILTERING is specified.
  Summary: Cassandra requires ALLOW FILTERING for a range scan  (was: 
Cassandra requires allow filtering)

 Cassandra requires ALLOW FILTERING for a range scan
 ---

 Key: CASSANDRA-6960
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6960
 Project: Cassandra
  Issue Type: Bug
Reporter: J.B. Langston

 Given this table definition:
 {code}
 CREATE TABLE metric_log_a (
   destination_id text,
   rate_plan_id int,
   metric_name text,
   extraction_date 'org.apache.cassandra.db.marshal.TimestampType',
   metric_value text,
   PRIMARY KEY (destination_id, rate_plan_id, metric_name, extraction_date)
 );
 {code}
 It seems that Cassandra should be able to perform the following query without 
 ALLOW FILTERING:
 {code}
 select destination_id, rate_plan_id, metric_name, extraction_date, 
 metric_value 
 from metric_log_a 
 where token(destination_id) > ? 
 and token(destination_id) <= ? 
 and rate_plan_id=90 
 and metric_name='minutesOfUse' 
 and extraction_date >= '2014-03-05' 
 and extraction_date <= '2014-03-05' 
 allow filtering;
 {code}
 However, it will refuse to run unless ALLOW FILTERING is specified.
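To make the reporter's point concrete, here is a small Python sketch (purely illustrative, not Cassandra code, with made-up data) of what a coordinator effectively does for a token-range scan that carries extra clustering-column restrictions: it walks every row in the token range and discards non-matching rows one by one. That per-row discarding is exactly the cost ALLOW FILTERING asks the user to acknowledge.

```python
# Hypothetical rows as a coordinator might see them during a token-range scan.
rows = [
    {"destination_id": "a", "rate_plan_id": 90,
     "metric_name": "minutesOfUse", "extraction_date": "2014-03-05"},
    {"destination_id": "b", "rate_plan_id": 91,
     "metric_name": "minutesOfUse", "extraction_date": "2014-03-05"},
]

def range_scan_with_filtering(rows):
    """Scan every row in the token range, then drop rows whose
    clustering-column values do not match the restrictions."""
    for r in rows:
        if (r["rate_plan_id"] == 90
                and r["metric_name"] == "minutesOfUse"
                and r["extraction_date"] == "2014-03-05"):
            yield r

matched = [r["destination_id"] for r in range_scan_with_filtering(rows)]
# Only partition "a" survives the filter; "b" was read and discarded.
```

The debate in the ticket is whether this particular shape of query could be served without that discard step, given that the restricted columns form a clustering prefix within each partition.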



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6450) sstable2json hangs if keyspace uses authentication

2014-03-31 Thread Chris Lohfink (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955588#comment-13955588
 ] 

Chris Lohfink commented on CASSANDRA-6450:
--

Not sure if it's related to authentication; it might be something with your 
data? I enabled authentication/authorization in cassandra.yaml with the 
default cassandra/cassandra user and gave it a quick try:

{code}
/var/lib/cassandra/data/Keyspace1/Standard1$ sstable2json 
Keyspace1-Standard1-ic-1-Data.db 

 WARN 14:43:53,224 MemoryMeter uninitialized (jamm not specified as java 
agent); KeyCache size in JVM Heap will not be calculated accurately. Usually 
this means cassandra-env.sh disabled jamm because you are using a buggy JRE; 
upgrade to the Sun JRE instead
[
{key: 30313236303933,columns: [[C0,
...
{code}

Or perhaps there's something involved in adding data after authentication was 
enabled? Are there any other steps to help reproduce this?

 sstable2json hangs if keyspace uses authentication
 --

 Key: CASSANDRA-6450
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6450
 Project: Cassandra
  Issue Type: Bug
 Environment: Ubuntu 12
Reporter: Josh Dzielak
Priority: Minor
  Labels: lhf

 Running sstable2json against an authenticated keyspace hangs indefinitely. 
 True for other utilities based on SSTableExport as well.
 Running sstable2json against other unauthenticated keyspaces in the same 
 node/cluster was successful. Running against any CF in the keyspace with 
 password authentication on resulted in a hang.
 It looks like it gets about to:
 Table table = Table.open(descriptor.ksname); or
 table.getColumnFamilyStore(baseName);
 in SSTableExport.java but no farther.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (CASSANDRA-6106) QueryState.getTimestamp() & FBUtilities.timestampMicros() reads current timestamp with System.currentTimeMillis() * 1000 instead of System.nanoTime() / 1000

2014-03-31 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13955596#comment-13955596
 ] 

Jonathan Ellis commented on CASSANDRA-6106:
---

Shouldn't we be able to do that with JNA?

 QueryState.getTimestamp() & FBUtilities.timestampMicros() reads current 
 timestamp with System.currentTimeMillis() * 1000 instead of System.nanoTime() 
 / 1000
 

 Key: CASSANDRA-6106
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6106
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
 Environment: DSE Cassandra 3.1, but also HEAD
Reporter: Christopher Smith
Assignee: Benedict
Priority: Minor
  Labels: timestamps
 Fix For: 2.1 beta2

 Attachments: microtimstamp.patch, microtimstamp_random.patch, 
 microtimstamp_random_rev2.patch


 I noticed this blog post: http://aphyr.com/posts/294-call-me-maybe-cassandra 
 mentioned issues with millisecond rounding in timestamps and was able to 
 reproduce the issue. If I specify a timestamp in a mutating query, I get 
 microsecond precision, but if I don't, I get timestamps rounded to the 
 nearest millisecond, at least for my first query on a given connection, which 
 substantially increases the possibilities of collision.
 I believe I found the offending code, though I am by no means sure this is 
 comprehensive. I think we probably need a fairly comprehensive replacement of 
 all uses of System.currentTimeMillis() with System.nanoTime().
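The rounding the report describes can be sketched in Python (an illustration only, not Cassandra's actual code): a timestamp derived from a millisecond clock and multiplied by 1000 always has zeros in its microsecond digits, so any two writes that land in the same millisecond receive an identical timestamp and collide.

```python
import time

def micros_from_millis():
    """Mimics System.currentTimeMillis() * 1000: the last three digits
    are always zero, so there are only ~1000 distinct timestamps per
    second and same-millisecond writes collide."""
    return int(time.time() * 1000) * 1000

def micros_true():
    """A genuine microsecond-resolution clock read, for contrast."""
    return time.time_ns() // 1000

ts = micros_from_millis()
# Millisecond rounding: the sub-millisecond digits are always lost.
assert ts % 1000 == 0
```

Note that System.nanoTime() alone is not a drop-in replacement, since it measures elapsed time from an arbitrary origin rather than wall-clock time; the attached patches combine the two sources.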



--
This message was sent by Atlassian JIRA
(v6.2#6252)

