[jira] [Commented] (CASSANDRA-4421) Support cql3 table definitions in Hadoop InputFormat

2013-05-16 Thread Jeremy Hanna (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13659340#comment-13659340
 ] 

Jeremy Hanna commented on CASSANDRA-4421:
-

Mike: you're right, cql3 does filter range ghosts. FWIW, I have seen that if I 
use the default consistency level of ONE (for the CFRR) when counting rows, an 
inconsistent number may come back.
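
For reference, a minimal sketch of raising that read consistency level on the 
Hadoop job configuration (assuming ConfigHelper.setReadConsistencyLevel is 
available in the version in use; keyspace/column family names are placeholders):

import org.apache.hadoop.conf.Configuration;
import org.apache.cassandra.hadoop.ConfigHelper;

public class CountJobSetup
{
    public static void configure(Configuration conf)
    {
        // point ColumnFamilyRecordReader at the column family being counted
        ConfigHelper.setInputColumnFamily(conf, "Keyspace1", "Standard1");
        // override the default read consistency of ONE so counts are repeatable
        ConfigHelper.setReadConsistencyLevel(conf, "QUORUM");
    }
}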

 Support cql3 table definitions in Hadoop InputFormat
 

 Key: CASSANDRA-4421
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4421
 Project: Cassandra
  Issue Type: Improvement
  Components: API
Affects Versions: 1.1.0
 Environment: Debian Squeeze
Reporter: bert Passek
  Labels: cql3
 Fix For: 1.2.5

 Attachments: 4421-1.txt, 4421-2.txt, 4421-3.txt, 4421.txt


 Hello,
 I faced a bug while writing composite column values and the subsequent 
 validation on the server side.
 This is the setup for reproduction:
 1. create a keyspace
 create keyspace test with strategy_class = 'SimpleStrategy' and 
 strategy_options:replication_factor = 1;
 2. create a cf via cql (3.0)
 create table test1 (
 a int,
 b int,
 c int,
 primary key (a, b)
 );
 If I have a look at the schema in the CLI, I notice that there is no column 
 metadata for columns that are not part of the primary key.
 create column family test1
   with column_type = 'Standard'
   and comparator = 
 'CompositeType(org.apache.cassandra.db.marshal.Int32Type,org.apache.cassandra.db.marshal.UTF8Type)'
   and default_validation_class = 'UTF8Type'
   and key_validation_class = 'Int32Type'
   and read_repair_chance = 0.1
   and dclocal_read_repair_chance = 0.0
   and gc_grace = 864000
   and min_compaction_threshold = 4
   and max_compaction_threshold = 32
   and replicate_on_write = true
   and compaction_strategy = 
 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'
   and caching = 'KEYS_ONLY'
   and compression_options = {'sstable_compression' : 
 'org.apache.cassandra.io.compress.SnappyCompressor'};
 Please notice the default validation class: UTF8Type.
 Now I would like to insert a value > 127 via the Cassandra client (no CQL; this 
 is part of MR jobs). Have a look at the attachment.
 Batch mutate fails:
 InvalidRequestException(why:(String didn't validate.) [test][test1][1:c] 
 failed validation)
 A validator for the column value is fetched in 
 ThriftValidation::validateColumnData, which always returns the default 
 validator, UTF8Type, as described above (the ColumnDefinition for the given 
 column name 'c' is always null).
 In UTF8Type there is a check like:
 if (b > 127)
 return false;
 Anyway, maybe I'm doing something wrong, but I used CQL 3.0 for table 
 creation. I assigned data types to all columns, but I cannot set values for 
 a composite column because the default validation class is used.
 I think the schema should know the correct validator even for composite 
 columns. Falling back to the default validation class does not make sense.
 Best Regards 
 Bert Passek
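
Purely as an illustration of the kind of Thrift-side write described above: an 
MR job builds the composite cell name (clustering value for b plus the CQL3 
column name "c") and an int value whose bytes are not valid UTF-8, which the 
default UTF8Type validator then rejects. All names and values below are made 
up, and the builder API may differ slightly between versions:

import java.nio.ByteBuffer;
import java.util.Arrays;
import org.apache.cassandra.db.marshal.AbstractType;
import org.apache.cassandra.db.marshal.CompositeType;
import org.apache.cassandra.db.marshal.Int32Type;
import org.apache.cassandra.db.marshal.UTF8Type;

public class CompositeCellSketch
{
    public static ByteBuffer[] buildCell()
    {
        // comparator declared by the table: CompositeType(Int32Type, UTF8Type)
        CompositeType comp = CompositeType.getInstance(
                Arrays.<AbstractType<?>>asList(Int32Type.instance, UTF8Type.instance));
        CompositeType.Builder builder = new CompositeType.Builder(comp);
        builder.add(Int32Type.instance.decompose(1));   // clustering component b
        builder.add(UTF8Type.instance.decompose("c"));  // CQL3 column name
        ByteBuffer name = builder.build();
        ByteBuffer value = Int32Type.instance.decompose(200); // 0x000000C8 is not valid UTF-8
        return new ByteBuffer[]{ name, value };
    }
}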

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5572) Write row markers when serializing columnfamilies and columns schema

2013-05-16 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13659347#comment-13659347
 ] 

Sylvain Lebresne commented on CASSANDRA-5572:
-

lgtm, +1.

Makes me think it might simplify things a bit to use range tombstones in 
dropFromSchema. And maybe we could start using CQL3 queries (with 
processInternal) to avoid having to deal with row markers manually? But anyway, 
we can definitely do that later. 
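
For illustration only, a rough sketch of the processInternal idea (not the 
committed fix; the DELETE text, SystemTable constant usage, and variable names 
are just an assumption of how it could look):

import org.apache.cassandra.cql3.QueryProcessor;
import org.apache.cassandra.db.SystemTable;

public class SchemaDropSketch
{
    public static void dropColumnFamilyRow(String ksName, String cfName)
    {
        // let CQL3 take care of the row marker / tombstone bookkeeping
        QueryProcessor.processInternal(String.format(
                "DELETE FROM system.%s WHERE keyspace_name = '%s' AND columnfamily_name = '%s'",
                SystemTable.SCHEMA_COLUMNFAMILIES_CF, ksName, cfName));
    }
}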

 Write row markers when serializing columnfamilies and columns schema
 

 Key: CASSANDRA-5572
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5572
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.4
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
Priority: Minor
 Attachments: 5572.txt


 ColumnDefinition.toSchema() and CFMetaData.toSchemaNoColumns() currently 
 don't write the row markers, which leads to certain queries not returning the 
 expected results, e.g.
 select keyspace_name, columnfamily_name from system.schema_columnfamilies 
 where keyspace_name = 'system' and columnfamily_name = 'hints' -> [].
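
For context, the row marker written by the attached patch is simply a cell with 
an empty name and an empty value for each schema row, as in the commit further 
down this thread:

cf.addColumn(Column.create("", timestamp, cfName, ""));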

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


git commit: Slightly simplify/optimize columns collation

2013-05-16 Thread slebresne
Updated Branches:
  refs/heads/trunk ad191c55d -> 9bb3441fe


Slightly simplify/optimize columns collation


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9bb3441f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9bb3441f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9bb3441f

Branch: refs/heads/trunk
Commit: 9bb3441fe23ddff2ccf12469860e37492e1092d6
Parents: ad191c5
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Thu May 16 11:03:22 2013 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Thu May 16 11:03:22 2013 +0200

--
 .../apache/cassandra/db/filter/QueryFilter.java|   17 --
 1 files changed, 10 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9bb3441f/src/java/org/apache/cassandra/db/filter/QueryFilter.java
--
diff --git a/src/java/org/apache/cassandra/db/filter/QueryFilter.java 
b/src/java/org/apache/cassandra/db/filter/QueryFilter.java
index e65e85a..8187294 100644
--- a/src/java/org/apache/cassandra/db/filter/QueryFilter.java
+++ b/src/java/org/apache/cassandra/db/filter/QueryFilter.java
@@ -25,6 +25,7 @@ import org.apache.cassandra.db.columniterator.OnDiskAtomIterator;
 import org.apache.cassandra.db.columniterator.IdentityQueryFilter;
 import org.apache.cassandra.io.sstable.SSTableReader;
 import org.apache.cassandra.io.util.FileDataInput;
+import org.apache.cassandra.utils.HeapAllocator;
 import org.apache.cassandra.utils.MergeIterator;
 
 public class QueryFilter
@@ -83,23 +84,25 @@ public class QueryFilter
 
 public void collateColumns(final ColumnFamily returnCF, List<? extends Iterator<Column>> toCollate, final int gcBefore)
 {
-Comparator<Column> fcomp = filter.getColumnComparator(returnCF.getComparator());
+final Comparator<Column> fcomp = filter.getColumnComparator(returnCF.getComparator());
 // define a 'reduced' iterator that merges columns w/ the same name, which
 // greatly simplifies computing liveColumns in the presence of tombstones.
 MergeIterator.Reducer<Column, Column> reducer = new MergeIterator.Reducer<Column, Column>()
 {
-ColumnFamily curCF = returnCF.cloneMeShallow();
+Column current;
 
-public void reduce(Column current)
+public void reduce(Column next)
 {
-curCF.addColumn(current);
+assert current == null || fcomp.compare(current, next) == 0;
+current = current == null ? next : current.reconcile(next, HeapAllocator.instance);
 }
 
 protected Column getReduced()
 {
-Column c = curCF.iterator().next();
-curCF.clear();
-return c;
+assert current != null;
+Column toReturn = current;
+current = null;
+return toReturn;
 }
 };
 Iterator<Column> reduced = MergeIterator.get(toCollate, fcomp, reducer);



[jira] [Assigned] (CASSANDRA-4693) CQL Protocol should allow multiple PreparedStatements to be atomically executed

2013-05-16 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne reassigned CASSANDRA-4693:
---

Assignee: Sylvain Lebresne

 CQL Protocol should allow multiple PreparedStatements to be atomically 
 executed
 ---

 Key: CASSANDRA-4693
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4693
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Michaël Figuière
Assignee: Sylvain Lebresne
  Labels: cql, protocol
 Fix For: 2.0


 Currently the only way to insert multiple records on the same partition key, 
 atomically and using PreparedStatements, is to use a CQL BATCH command. 
 Unfortunately, when doing so the number of records to be inserted must be 
 known before preparing the statement, which is rarely the case. Thus the only 
 workaround, if one wants to keep atomicity, is currently to use unprepared 
 statements, sending a bulk of CQL strings, which is fairly inefficient.
 Therefore the CQL protocol should allow clients to send multiple 
 PreparedStatements to be executed with guarantees and semantics similar to the 
 CQL BATCH command.
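
For illustration, the unprepared workaround mentioned above boils down to 
concatenating the statements into a single BATCH string on the client; a sketch 
with made-up keyspace, table and column names (not a driver API):

import java.util.List;

public class UnpreparedBatchSketch
{
    public static String build(long partitionKey, List<String> values)
    {
        StringBuilder batch = new StringBuilder("BEGIN BATCH\n");
        for (int i = 0; i < values.size(); i++)
            batch.append(String.format("INSERT INTO ks.cf (pk, ck, val) VALUES (%d, %d, '%s');%n",
                                       partitionKey, i, values.get(i)));
        batch.append("APPLY BATCH;");
        return batch.toString(); // sent as one unprepared query string
    }
}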

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5572) Write row markers when serializing columnfamilies and columns schema

2013-05-16 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13659523#comment-13659523
 ] 

Aleksey Yeschenko commented on CASSANDRA-5572:
--

Yeah, thought about range tombstones here as well yesterday. But, later.

 Write row markers when serializing columnfamilies and columns schema
 

 Key: CASSANDRA-5572
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5572
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.4
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
Priority: Minor
 Attachments: 5572.txt


 ColumnDefinition.toSchema() and CFMetaData.toSchemaNoColumns() currently 
 don't write the row markers, which leads to certain queries not returning the 
 expected results, e.g.
 select keyspace_name, columnfamily_name from system.schema_columnfamilies 
 where keyspace_name = 'system' and columnfamily_name = 'hints' -> [].

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


git commit: Write row markers when serializing schema

2013-05-16 Thread aleksey
Updated Branches:
  refs/heads/cassandra-1.2 8986e8f9f -> 61567e7b4


Write row markers when serializing schema

patch by Aleksey Yeschenko; reviewed by Sylvain Lebresne for
CASSANDRA-5572


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/61567e7b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/61567e7b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/61567e7b

Branch: refs/heads/cassandra-1.2
Commit: 61567e7b4676a7075979e005b54c3c1f7ff8d04b
Parents: 8986e8f
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu May 16 16:43:45 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu May 16 16:43:45 2013 +0300

--
 CHANGES.txt|4 
 .../org/apache/cassandra/config/CFMetaData.java|3 ++-
 .../apache/cassandra/config/ColumnDefinition.java  |2 ++
 3 files changed, 8 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/61567e7b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 2182768..6d5c117 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,3 +1,7 @@
+1.2.6
+ * Write row markers when serializing schema (CASSANDRA-5572)
+
+
 1.2.5
  * make BytesToken.toString only return hex bytes (CASSANDRA-5566)
  * Ensure that submitBackground enqueues at least one task (CASSANDRA-5554)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/61567e7b/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java 
b/src/java/org/apache/cassandra/config/CFMetaData.java
index 0b2be66..81afd23 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -1295,6 +1295,7 @@ public final class CFMetaData
 ColumnFamily cf = rm.addOrGet(SystemTable.SCHEMA_COLUMNFAMILIES_CF);
 int ldt = (int) (System.currentTimeMillis() / 1000);
 
+cf.addColumn(DeletedColumn.create(ldt, timestamp, cfName, ""));
 cf.addColumn(DeletedColumn.create(ldt, timestamp, cfName, "id"));
 cf.addColumn(DeletedColumn.create(ldt, timestamp, cfName, "type"));
 cf.addColumn(DeletedColumn.create(ldt, timestamp, cfName, "comparator"));
@@ -1341,10 +1342,10 @@ public final class CFMetaData
 int ldt = (int) (System.currentTimeMillis() / 1000);
 
 Integer oldId = Schema.instance.convertNewCfId(cfId);
-
 if (oldId != null) // keep old ids (see CASSANDRA-3794 for details)
 cf.addColumn(Column.create(oldId, timestamp, cfName, "id"));
 
+cf.addColumn(Column.create("", timestamp, cfName, ""));
 cf.addColumn(Column.create(cfType.toString(), timestamp, cfName, "type"));
 cf.addColumn(Column.create(comparator.toString(), timestamp, cfName, "comparator"));
 if (subcolumnComparator != null)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/61567e7b/src/java/org/apache/cassandra/config/ColumnDefinition.java
--
diff --git a/src/java/org/apache/cassandra/config/ColumnDefinition.java 
b/src/java/org/apache/cassandra/config/ColumnDefinition.java
index 328d0ff..97f57e1 100644
--- a/src/java/org/apache/cassandra/config/ColumnDefinition.java
+++ b/src/java/org/apache/cassandra/config/ColumnDefinition.java
@@ -150,6 +150,7 @@ public class ColumnDefinition
 ColumnFamily cf = rm.addOrGet(SystemTable.SCHEMA_COLUMNS_CF);
 int ldt = (int) (System.currentTimeMillis() / 1000);
 
+cf.addColumn(DeletedColumn.create(ldt, timestamp, cfName, comparator.getString(name), ""));
 cf.addColumn(DeletedColumn.create(ldt, timestamp, cfName, comparator.getString(name), "validator"));
 cf.addColumn(DeletedColumn.create(ldt, timestamp, cfName, comparator.getString(name), "index_type"));
 cf.addColumn(DeletedColumn.create(ldt, timestamp, cfName, comparator.getString(name), "index_options"));
@@ -162,6 +163,7 @@ public class ColumnDefinition
 ColumnFamily cf = rm.addOrGet(SystemTable.SCHEMA_COLUMNS_CF);
 int ldt = (int) (System.currentTimeMillis() / 1000);
 
+cf.addColumn(Column.create("", timestamp, cfName, comparator.getString(name), ""));
 cf.addColumn(Column.create(validator.toString(), timestamp, cfName, comparator.getString(name), "validator"));
 cf.addColumn(index_type == null ? DeletedColumn.create(ldt, timestamp, cfName, comparator.getString(name), "index_type")
                                 : Column.create(index_type.toString(), timestamp, cfName, comparator.getString(name), "index_type"));



[1/2] git commit: Write row markers when serializing schema

2013-05-16 Thread aleksey
Updated Branches:
  refs/heads/trunk 9bb3441fe -> 405c2515f


Write row markers when serializing schema

patch by Aleksey Yeschenko; reviewed by Sylvain Lebresne for
CASSANDRA-5572


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/61567e7b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/61567e7b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/61567e7b

Branch: refs/heads/trunk
Commit: 61567e7b4676a7075979e005b54c3c1f7ff8d04b
Parents: 8986e8f
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu May 16 16:43:45 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu May 16 16:43:45 2013 +0300

--
 CHANGES.txt|4 
 .../org/apache/cassandra/config/CFMetaData.java|3 ++-
 .../apache/cassandra/config/ColumnDefinition.java  |2 ++
 3 files changed, 8 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/61567e7b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 2182768..6d5c117 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,3 +1,7 @@
+1.2.6
+ * Write row markers when serializing schema (CASSANDRA-5572)
+
+
 1.2.5
  * make BytesToken.toString only return hex bytes (CASSANDRA-5566)
  * Ensure that submitBackground enqueues at least one task (CASSANDRA-5554)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/61567e7b/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java 
b/src/java/org/apache/cassandra/config/CFMetaData.java
index 0b2be66..81afd23 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -1295,6 +1295,7 @@ public final class CFMetaData
 ColumnFamily cf = rm.addOrGet(SystemTable.SCHEMA_COLUMNFAMILIES_CF);
 int ldt = (int) (System.currentTimeMillis() / 1000);
 
+cf.addColumn(DeletedColumn.create(ldt, timestamp, cfName, ""));
 cf.addColumn(DeletedColumn.create(ldt, timestamp, cfName, "id"));
 cf.addColumn(DeletedColumn.create(ldt, timestamp, cfName, "type"));
 cf.addColumn(DeletedColumn.create(ldt, timestamp, cfName, "comparator"));
@@ -1341,10 +1342,10 @@ public final class CFMetaData
 int ldt = (int) (System.currentTimeMillis() / 1000);
 
 Integer oldId = Schema.instance.convertNewCfId(cfId);
-
 if (oldId != null) // keep old ids (see CASSANDRA-3794 for details)
 cf.addColumn(Column.create(oldId, timestamp, cfName, "id"));
 
+cf.addColumn(Column.create("", timestamp, cfName, ""));
 cf.addColumn(Column.create(cfType.toString(), timestamp, cfName, "type"));
 cf.addColumn(Column.create(comparator.toString(), timestamp, cfName, "comparator"));
 if (subcolumnComparator != null)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/61567e7b/src/java/org/apache/cassandra/config/ColumnDefinition.java
--
diff --git a/src/java/org/apache/cassandra/config/ColumnDefinition.java 
b/src/java/org/apache/cassandra/config/ColumnDefinition.java
index 328d0ff..97f57e1 100644
--- a/src/java/org/apache/cassandra/config/ColumnDefinition.java
+++ b/src/java/org/apache/cassandra/config/ColumnDefinition.java
@@ -150,6 +150,7 @@ public class ColumnDefinition
 ColumnFamily cf = rm.addOrGet(SystemTable.SCHEMA_COLUMNS_CF);
 int ldt = (int) (System.currentTimeMillis() / 1000);
 
+cf.addColumn(DeletedColumn.create(ldt, timestamp, cfName, comparator.getString(name), ""));
 cf.addColumn(DeletedColumn.create(ldt, timestamp, cfName, comparator.getString(name), "validator"));
 cf.addColumn(DeletedColumn.create(ldt, timestamp, cfName, comparator.getString(name), "index_type"));
 cf.addColumn(DeletedColumn.create(ldt, timestamp, cfName, comparator.getString(name), "index_options"));
@@ -162,6 +163,7 @@ public class ColumnDefinition
 ColumnFamily cf = rm.addOrGet(SystemTable.SCHEMA_COLUMNS_CF);
 int ldt = (int) (System.currentTimeMillis() / 1000);
 
+cf.addColumn(Column.create("", timestamp, cfName, comparator.getString(name), ""));
 cf.addColumn(Column.create(validator.toString(), timestamp, cfName, comparator.getString(name), "validator"));
 cf.addColumn(index_type == null ? DeletedColumn.create(ldt, timestamp, cfName, comparator.getString(name), "index_type")
                                 : Column.create(index_type.toString(), timestamp, cfName, comparator.getString(name), "index_type"));



[2/2] git commit: Merge branch 'cassandra-1.2' into trunk

2013-05-16 Thread aleksey
Merge branch 'cassandra-1.2' into trunk

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/config/CFMetaData.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/405c2515
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/405c2515
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/405c2515

Branch: refs/heads/trunk
Commit: 405c2515f1f1810c1cdab405105fd08b8756fddb
Parents: 9bb3441 61567e7
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu May 16 16:49:48 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu May 16 16:49:48 2013 +0300

--
 CHANGES.txt|4 
 .../org/apache/cassandra/config/CFMetaData.java|2 ++
 .../apache/cassandra/config/ColumnDefinition.java  |2 ++
 3 files changed, 8 insertions(+), 0 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/405c2515/CHANGES.txt
--
diff --cc CHANGES.txt
index 11be8ba,6d5c117..543765c
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,51 -1,7 +1,55 @@@
 +2.0
 + * JEMalloc support for off-heap allocation (CASSANDRA-3997)
 + * Single-pass compaction (CASSANDRA-4180)
 + * Removed token range bisection (CASSANDRA-5518)
 + * Removed compatibility with pre-1.2.5 sstables and network messages
 +   (CASSANDRA-5511)
 + * removed PBSPredictor (CASSANDRA-5455)
 + * CAS support (CASSANDRA-5062, 5441, 5443)
 + * Leveled compaction performs size-tiered compactions in L0 
 +   (CASSANDRA-5371, 5439)
 + * Add yaml network topology snitch for mixed ec2/other envs (CASSANDRA-5339)
 + * Log when a node is down longer than the hint window (CASSANDRA-4554)
 + * Optimize tombstone creation for ExpiringColumns (CASSANDRA-4917)
 + * Improve LeveledScanner work estimation (CASSANDRA-5250, 5407)
 + * Replace compaction lock with runWithCompactionsDisabled (CASSANDRA-3430)
 + * Change Message IDs to ints (CASSANDRA-5307)
 + * Move sstable level information into the Stats component, removing the
 +   need for a separate Manifest file (CASSANDRA-4872)
 + * avoid serializing to byte[] on commitlog append (CASSANDRA-5199)
 + * make index_interval configurable per columnfamily (CASSANDRA-3961)
 + * add default_time_to_live (CASSANDRA-3974)
 + * add memtable_flush_period_in_ms (CASSANDRA-4237)
 + * replace supercolumns internally by composites (CASSANDRA-3237, 5123)
 + * upgrade thrift to 0.9.0 (CASSANDRA-3719)
 + * drop unnecessary keyspace parameter from user-defined compaction API 
 +   (CASSANDRA-5139)
 + * more robust solution to incomplete compactions + counters (CASSANDRA-5151)
 + * Change order of directory searching for c*.in.sh (CASSANDRA-3983)
 + * Add tool to reset SSTable compaction level for LCS (CASSANDRA-5271)
 + * Allow custom configuration loader (CASSANDRA-5045)
 + * Remove memory emergency pressure valve logic (CASSANDRA-3534)
 + * Reduce request latency with eager retry (CASSANDRA-4705)
 + * cqlsh: Remove ASSUME command (CASSANDRA-5331)
 + * Rebuild BF when loading sstables if bloom_filter_fp_chance
 +   has changed since compaction (CASSANDRA-5015)
 + * remove row-level bloom filters (CASSANDRA-4885)
 + * Change Kernel Page Cache skipping into row preheating (disabled by default)
 +   (CASSANDRA-4937)
 + * Improve repair by deciding on a gcBefore before sending
 +   out TreeRequests (CASSANDRA-4932)
 + * Add an official way to disable compactions (CASSANDRA-5074)
 + * Reenable ALTER TABLE DROP with new semantics (CASSANDRA-3919)
 + * Add binary protocol versioning (CASSANDRA-5436)
 + * Swap THshaServer for TThreadedSelectorServer (CASSANDRA-5530)
 + * Add alias support to SELECT statement (CASSANDRA-5075)
 + * Don't create empty RowMutations in CommitLogReplayer (CASSANDRA-5541)
 +
 +
+ 1.2.6
+  * Write row markers when serializing schema (CASSANDRA-5572)
+ 
+ 
  1.2.5
   * make BytesToken.toString only return hex bytes (CASSANDRA-5566)
   * Ensure that submitBackground enqueues at least one task (CASSANDRA-5554)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/405c2515/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --cc src/java/org/apache/cassandra/config/CFMetaData.java
index a686bf6,81afd23..e9ed8bb
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@@ -1477,22 -1341,15 +1478,23 @@@ public final class CFMetaDat
  ColumnFamily cf = rm.addOrGet(SystemTable.SCHEMA_COLUMNFAMILIES_CF);
  int ldt = (int) (System.currentTimeMillis() / 1000);
  
 -Integer oldId = Schema.instance.convertNewCfId(cfId);
 -if (oldId != null) // keep old ids (see CASSANDRA-3794 for details)
 -   

[jira] [Updated] (CASSANDRA-4693) CQL Protocol should allow multiple PreparedStatements to be atomically executed

2013-05-16 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-4693:


Attachment: 0001-Binary-protocol-adds-message-to-batch-prepared-or-not-.txt

Attaching a patch for this. It adds a new BATCH message to the protocol that 
allows passing a list of either query strings (+ optional values for one-shot 
binding) or prepared statement ids + values, and batches all of them server 
side.

I ran a small manual test and it seems to work correctly.

 CQL Protocol should allow multiple PreparedStatements to be atomically 
 executed
 ---

 Key: CASSANDRA-4693
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4693
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Michaël Figuière
Assignee: Sylvain Lebresne
  Labels: cql, protocol
 Fix For: 2.0

 Attachments: 
 0001-Binary-protocol-adds-message-to-batch-prepared-or-not-.txt


 Currently the only way to insert multiple records on the same partition key, 
 atomically and using PreparedStatements, is to use a CQL BATCH command. 
 Unfortunately, when doing so the number of records to be inserted must be 
 known before preparing the statement, which is rarely the case. Thus the only 
 workaround, if one wants to keep atomicity, is currently to use unprepared 
 statements, sending a bulk of CQL strings, which is fairly inefficient.
 Therefore the CQL protocol should allow clients to send multiple 
 PreparedStatements to be executed with guarantees and semantics similar to the 
 CQL BATCH command.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-4693) CQL Protocol should allow multiple PreparedStatements to be atomically executed

2013-05-16 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4693:
--

Reviewer: iamaleksey

 CQL Protocol should allow multiple PreparedStatements to be atomically 
 executed
 ---

 Key: CASSANDRA-4693
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4693
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Michaël Figuière
Assignee: Sylvain Lebresne
  Labels: cql, protocol
 Fix For: 2.0

 Attachments: 
 0001-Binary-protocol-adds-message-to-batch-prepared-or-not-.txt


 Currently the only way to insert multiple records on the same partition key, 
 atomically and using PreparedStatements, is to use a CQL BATCH command. 
 Unfortunately, when doing so the number of records to be inserted must be 
 known before preparing the statement, which is rarely the case. Thus the only 
 workaround, if one wants to keep atomicity, is currently to use unprepared 
 statements, sending a bulk of CQL strings, which is fairly inefficient.
 Therefore the CQL protocol should allow clients to send multiple 
 PreparedStatements to be executed with guarantees and semantics similar to the 
 CQL BATCH command.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[1/3] git commit: Check only SSTables for the requested range when streaming patch by Rick Branson; reviewed by yukim for CASSANDRA-5569

2013-05-16 Thread yukim
Updated Branches:
  refs/heads/cassandra-1.2 61567e7b4 -> 8b96334a0
  refs/heads/trunk 405c2515f -> c7b67666d


Check only SSTables for the requested range when streaming patch by Rick 
Branson; reviewed by yukim for CASSANDRA-5569


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8b96334a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8b96334a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8b96334a

Branch: refs/heads/cassandra-1.2
Commit: 8b96334a0c107216604d85d59ff50b1edbec89fa
Parents: 61567e7
Author: Rick Branson r...@diodeware.com
Authored: Thu May 16 11:36:50 2013 -0500
Committer: Yuki Morishita yu...@apache.org
Committed: Thu May 16 11:39:03 2013 -0500

--
 CHANGES.txt|1 +
 .../org/apache/cassandra/db/ColumnFamilyStore.java |   96 ++-
 .../org/apache/cassandra/streaming/StreamOut.java  |   31 -
 .../cassandra/streaming/StreamingRepairTask.java   |7 +-
 4 files changed, 96 insertions(+), 39 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8b96334a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6d5c117..619e415 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,5 +1,6 @@
 1.2.6
  * Write row markers when serializing schema (CASSANDRA-5572)
+ * Check only SSTables for the requested range when streaming (CASSANDRA-5569)
 
 
 1.2.5

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8b96334a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 4ed7f82..055c415 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -28,10 +28,7 @@ import java.util.concurrent.atomic.AtomicLong;
 import java.util.regex.Pattern;
 import javax.management.*;
 
-import com.google.common.collect.AbstractIterator;
-import com.google.common.collect.ImmutableSet;
-import com.google.common.collect.Iterables;
-import com.google.common.collect.Sets;
+import com.google.common.collect.*;
 import com.google.common.util.concurrent.Futures;
 import org.cliffc.high_scale_lib.NonBlockingHashMap;
 import org.slf4j.Logger;
@@ -60,6 +57,7 @@ import org.apache.cassandra.db.index.SecondaryIndex;
 import org.apache.cassandra.db.index.SecondaryIndexManager;
 import org.apache.cassandra.db.marshal.AbstractType;
 import org.apache.cassandra.dht.*;
+import org.apache.cassandra.dht.Range;
 import org.apache.cassandra.exceptions.ConfigurationException;
 import org.apache.cassandra.io.compress.CompressionParameters;
 import org.apache.cassandra.io.sstable.*;
@@ -1277,56 +1275,90 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
 return markCurrentViewReferenced().sstables;
 }
 
-/**
- * @return a ViewFragment containing the sstables and memtables that may need to be merged
- * for the given @param key, according to the interval tree
- */
-public ViewFragment markReferenced(DecoratedKey key)
+abstract class AbstractViewSSTableFinder
 {
-assert !key.isMinimum();
-DataTracker.View view;
-List<SSTableReader> sstables;
-while (true)
+abstract List<SSTableReader> findSSTables(DataTracker.View view);
+protected List<SSTableReader> sstablesForRowBounds(AbstractBounds<RowPosition> rowBounds, DataTracker.View view)
 {
-view = data.getView();
-sstables = view.intervalTree.search(key);
-if (SSTableReader.acquireReferences(sstables))
-break;
-// retry w/ new view
+RowPosition stopInTree = rowBounds.right.isMinimum() ? view.intervalTree.max() : rowBounds.right;
+return view.intervalTree.search(Interval.<RowPosition, SSTableReader>create(rowBounds.left, stopInTree));
 }
-return new ViewFragment(sstables, Iterables.concat(Collections.singleton(view.memtable), view.memtablesPendingFlush));
 }
 
-/**
- * @return a ViewFragment containing the sstables and memtables that may need to be merged
- * for rows between @param startWith and @param stopAt, inclusive, according to the interval tree
- */
-public ViewFragment markReferenced(RowPosition startWith, RowPosition stopAt)
+private ViewFragment markReferenced(AbstractViewSSTableFinder finder)
 {
-DataTracker.View view;
 List<SSTableReader> sstables;
+DataTracker.View view;
+
 while (true)
 {
 view = data.getView();
-// startAt == 

[2/3] git commit: Check only SSTables for the requested range when streaming patch by Rick Branson; reviewed by yukim for CASSANDRA-5569

2013-05-16 Thread yukim
Check only SSTables for the requested range when streaming patch by Rick 
Branson; reviewed by yukim for CASSANDRA-5569


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8b96334a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8b96334a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8b96334a

Branch: refs/heads/trunk
Commit: 8b96334a0c107216604d85d59ff50b1edbec89fa
Parents: 61567e7
Author: Rick Branson r...@diodeware.com
Authored: Thu May 16 11:36:50 2013 -0500
Committer: Yuki Morishita yu...@apache.org
Committed: Thu May 16 11:39:03 2013 -0500

--
 CHANGES.txt|1 +
 .../org/apache/cassandra/db/ColumnFamilyStore.java |   96 ++-
 .../org/apache/cassandra/streaming/StreamOut.java  |   31 -
 .../cassandra/streaming/StreamingRepairTask.java   |7 +-
 4 files changed, 96 insertions(+), 39 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8b96334a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6d5c117..619e415 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,5 +1,6 @@
 1.2.6
  * Write row markers when serializing schema (CASSANDRA-5572)
+ * Check only SSTables for the requested range when streaming (CASSANDRA-5569)
 
 
 1.2.5

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8b96334a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 4ed7f82..055c415 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -28,10 +28,7 @@ import java.util.concurrent.atomic.AtomicLong;
 import java.util.regex.Pattern;
 import javax.management.*;
 
-import com.google.common.collect.AbstractIterator;
-import com.google.common.collect.ImmutableSet;
-import com.google.common.collect.Iterables;
-import com.google.common.collect.Sets;
+import com.google.common.collect.*;
 import com.google.common.util.concurrent.Futures;
 import org.cliffc.high_scale_lib.NonBlockingHashMap;
 import org.slf4j.Logger;
@@ -60,6 +57,7 @@ import org.apache.cassandra.db.index.SecondaryIndex;
 import org.apache.cassandra.db.index.SecondaryIndexManager;
 import org.apache.cassandra.db.marshal.AbstractType;
 import org.apache.cassandra.dht.*;
+import org.apache.cassandra.dht.Range;
 import org.apache.cassandra.exceptions.ConfigurationException;
 import org.apache.cassandra.io.compress.CompressionParameters;
 import org.apache.cassandra.io.sstable.*;
@@ -1277,56 +1275,90 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
 return markCurrentViewReferenced().sstables;
 }
 
-/**
- * @return a ViewFragment containing the sstables and memtables that may need to be merged
- * for the given @param key, according to the interval tree
- */
-public ViewFragment markReferenced(DecoratedKey key)
+abstract class AbstractViewSSTableFinder
 {
-assert !key.isMinimum();
-DataTracker.View view;
-List<SSTableReader> sstables;
-while (true)
+abstract List<SSTableReader> findSSTables(DataTracker.View view);
+protected List<SSTableReader> sstablesForRowBounds(AbstractBounds<RowPosition> rowBounds, DataTracker.View view)
 {
-view = data.getView();
-sstables = view.intervalTree.search(key);
-if (SSTableReader.acquireReferences(sstables))
-break;
-// retry w/ new view
+RowPosition stopInTree = rowBounds.right.isMinimum() ? view.intervalTree.max() : rowBounds.right;
+return view.intervalTree.search(Interval.<RowPosition, SSTableReader>create(rowBounds.left, stopInTree));
 }
-return new ViewFragment(sstables, Iterables.concat(Collections.singleton(view.memtable), view.memtablesPendingFlush));
 }
 
-/**
- * @return a ViewFragment containing the sstables and memtables that may need to be merged
- * for rows between @param startWith and @param stopAt, inclusive, according to the interval tree
- */
-public ViewFragment markReferenced(RowPosition startWith, RowPosition stopAt)
+private ViewFragment markReferenced(AbstractViewSSTableFinder finder)
 {
-DataTracker.View view;
 List<SSTableReader> sstables;
+DataTracker.View view;
+
 while (true)
 {
 view = data.getView();
-// startAt == minimum is ok, but stopAt == minimum is confusing because all IntervalTree deals with
-// is Comparable, so 

[3/3] git commit: Merge branch 'cassandra-1.2' into trunk

2013-05-16 Thread yukim
Merge branch 'cassandra-1.2' into trunk

Conflicts:
src/java/org/apache/cassandra/db/ColumnFamilyStore.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c7b67666
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c7b67666
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c7b67666

Branch: refs/heads/trunk
Commit: c7b67666d8665f13a66aba218c64c5979c3f20bf
Parents: 405c251 8b96334
Author: Yuki Morishita yu...@apache.org
Authored: Thu May 16 11:42:42 2013 -0500
Committer: Yuki Morishita yu...@apache.org
Committed: Thu May 16 11:42:42 2013 -0500

--
 CHANGES.txt|1 +
 .../org/apache/cassandra/db/ColumnFamilyStore.java |   90 ++-
 .../org/apache/cassandra/streaming/StreamOut.java  |   31 +-
 .../cassandra/streaming/StreamingRepairTask.java   |7 +-
 4 files changed, 94 insertions(+), 35 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c7b67666/CHANGES.txt
--
diff --cc CHANGES.txt
index 543765c,619e415..8d1410e
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,53 -1,6 +1,54 @@@
 +2.0
 + * JEMalloc support for off-heap allocation (CASSANDRA-3997)
 + * Single-pass compaction (CASSANDRA-4180)
 + * Removed token range bisection (CASSANDRA-5518)
 + * Removed compatibility with pre-1.2.5 sstables and network messages
 +   (CASSANDRA-5511)
 + * removed PBSPredictor (CASSANDRA-5455)
 + * CAS support (CASSANDRA-5062, 5441, 5443)
 + * Leveled compaction performs size-tiered compactions in L0 
 +   (CASSANDRA-5371, 5439)
 + * Add yaml network topology snitch for mixed ec2/other envs (CASSANDRA-5339)
 + * Log when a node is down longer than the hint window (CASSANDRA-4554)
 + * Optimize tombstone creation for ExpiringColumns (CASSANDRA-4917)
 + * Improve LeveledScanner work estimation (CASSANDRA-5250, 5407)
 + * Replace compaction lock with runWithCompactionsDisabled (CASSANDRA-3430)
 + * Change Message IDs to ints (CASSANDRA-5307)
 + * Move sstable level information into the Stats component, removing the
 +   need for a separate Manifest file (CASSANDRA-4872)
 + * avoid serializing to byte[] on commitlog append (CASSANDRA-5199)
 + * make index_interval configurable per columnfamily (CASSANDRA-3961)
 + * add default_time_to_live (CASSANDRA-3974)
 + * add memtable_flush_period_in_ms (CASSANDRA-4237)
 + * replace supercolumns internally by composites (CASSANDRA-3237, 5123)
 + * upgrade thrift to 0.9.0 (CASSANDRA-3719)
 + * drop unnecessary keyspace parameter from user-defined compaction API 
 +   (CASSANDRA-5139)
 + * more robust solution to incomplete compactions + counters (CASSANDRA-5151)
 + * Change order of directory searching for c*.in.sh (CASSANDRA-3983)
 + * Add tool to reset SSTable compaction level for LCS (CASSANDRA-5271)
 + * Allow custom configuration loader (CASSANDRA-5045)
 + * Remove memory emergency pressure valve logic (CASSANDRA-3534)
 + * Reduce request latency with eager retry (CASSANDRA-4705)
 + * cqlsh: Remove ASSUME command (CASSANDRA-5331)
 + * Rebuild BF when loading sstables if bloom_filter_fp_chance
 +   has changed since compaction (CASSANDRA-5015)
 + * remove row-level bloom filters (CASSANDRA-4885)
 + * Change Kernel Page Cache skipping into row preheating (disabled by default)
 +   (CASSANDRA-4937)
 + * Improve repair by deciding on a gcBefore before sending
 +   out TreeRequests (CASSANDRA-4932)
 + * Add an official way to disable compactions (CASSANDRA-5074)
 + * Reenable ALTER TABLE DROP with new semantics (CASSANDRA-3919)
 + * Add binary protocol versioning (CASSANDRA-5436)
 + * Swap THshaServer for TThreadedSelectorServer (CASSANDRA-5530)
 + * Add alias support to SELECT statement (CASSANDRA-5075)
 + * Don't create empty RowMutations in CommitLogReplayer (CASSANDRA-5541)
 +
 +
  1.2.6
   * Write row markers when serializing schema (CASSANDRA-5572)
+  * Check only SSTables for the requested range when streaming (CASSANDRA-5569)
  
  
  1.2.5

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c7b67666/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --cc src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 05784ce,055c415..36c1db0
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@@ -1461,9 -1413,9 +1495,9 @@@ public class ColumnFamilyStore implemen
  final RowPosition startWith = range.left;
  final RowPosition stopAt = range.right;
  
 -QueryFilter filter = new QueryFilter(null, new QueryPath(columnFamily, superColumn, null), columnFilter);
 +QueryFilter filter = new QueryFilter(null, name, 

[jira] [Commented] (CASSANDRA-1311) Triggers

2013-05-16 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13659835#comment-13659835
 ] 

Jonathan Ellis commented on CASSANDRA-1311:
---

Nit: would like to see javadoc for TriggerExecutor methods.

Otherwise LGTM!

bq. I posted the sample to https://github.com/Vijay2win/inverted-index, i am 
really happy to move it to contrib

(I meant examples, not contrib.  Old memories...)

 Triggers
 

 Key: CASSANDRA-1311
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1311
 Project: Cassandra
  Issue Type: New Feature
Reporter: Maxim Grinev
Assignee: Vijay
 Fix For: 2.0

 Attachments: 0001-1311-v3.patch, HOWTO-PatchAndRunTriggerExample.txt, 
 HOWTO-PatchAndRunTriggerExample-update1.txt, ImplementationDetails.pdf, 
 ImplementationDetails-update1.pdf, trunk-967053.txt, 
 trunk-984391-update1.txt, trunk-984391-update2.txt


 Asynchronous triggers is a basic mechanism to implement various use cases of 
 asynchronous execution of application code at database side. For example to 
 support indexes and materialized views, online analytics, push-based data 
 propagation.
 Please find the motivation, triggers description and list of applications:
 http://maxgrinev.com/2010/07/23/extending-cassandra-with-asynchronous-triggers/
 An example of using triggers for indexing:
 http://maxgrinev.com/2010/07/23/managing-indexes-in-cassandra-using-async-triggers/
 Implementation details are attached.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


git commit: inline maxChange

2013-05-16 Thread jbellis
Updated Branches:
  refs/heads/trunk c7b67666d -> b6a0284fb


inline maxChange


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b6a0284f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b6a0284f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b6a0284f

Branch: refs/heads/trunk
Commit: b6a0284fb909c976f4bcf61c7623dd75de2cdd32
Parents: c7b6766
Author: Jonathan Ellis jbel...@apache.org
Authored: Thu May 16 14:01:18 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Thu May 16 14:28:40 2013 -0500

--
 .../apache/cassandra/db/filter/QueryFilter.java|5 ++---
 1 files changed, 2 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/b6a0284f/src/java/org/apache/cassandra/db/filter/QueryFilter.java
--
diff --git a/src/java/org/apache/cassandra/db/filter/QueryFilter.java 
b/src/java/org/apache/cassandra/db/filter/QueryFilter.java
index 8187294..4d8e640 100644
--- a/src/java/org/apache/cassandra/db/filter/QueryFilter.java
+++ b/src/java/org/apache/cassandra/db/filter/QueryFilter.java
@@ -175,9 +175,8 @@ public class QueryFilter
 // the column itself must be not gc-able (it is live, or a still relevant tombstone, or has live subcolumns), (1)
 // and if its container is deleted, the column must be changed more recently than the container tombstone (2)
 // (since otherwise, the only thing repair cares about is the container tombstone)
-long maxChange = column.timestamp();
-return (column.getLocalDeletionTime() >= gcBefore || maxChange > column.getMarkedForDeleteAt()) // (1)
-       && (!container.deletionInfo().isDeleted(column.name(), maxChange)); // (2)
+return (column.getLocalDeletionTime() >= gcBefore || column.timestamp() > column.getMarkedForDeleteAt()) // (1)
+       && (!container.deletionInfo().isDeleted(column.name(), column.timestamp())); // (2)
 }
 
 /**



[jira] [Commented] (CASSANDRA-5573) Querying with an empty (impossible) range returns incorrect results

2013-05-16 Thread Alex Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13659909#comment-13659909
 ] 

Alex Liu commented on CASSANDRA-5573:
-

 * The semantics of start keys and tokens are slightly different.
 * Keys are start-inclusive; tokens are start-exclusive.  Token
 * ranges may also wrap -- that is, the end token may be less
 * than the start one.  Thus, a range from keyX to keyX is a
 * one-element range, but a range from tokenY to tokenY is the
 * full ring.

So that query covers the whole ring. I think it's a bug in the Hadoop code; I 
will fix it in CASSANDRA-4421.
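
Illustrative only: the kind of guard the Hadoop-side split/paging code could 
add so that a degenerate restriction like token(key) > T AND token(key) <= T 
is treated as an empty range rather than the full ring (class and variable 
names are made up):

import java.math.BigInteger;
import java.util.Collections;
import java.util.List;

public class SplitRangeGuard
{
    public static List<String> rowsForSplit(BigInteger startToken, BigInteger endToken)
    {
        // tokens are start-exclusive and end-inclusive, so (T, T] as a token
        // range wraps the whole ring; skip it when the caller meant "no rows"
        if (startToken.equals(endToken))
            return Collections.emptyList();
        return fetchRows(startToken, endToken); // hypothetical fetch
    }

    private static List<String> fetchRows(BigInteger start, BigInteger end)
    {
        return Collections.emptyList(); // placeholder
    }
}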

 Querying with an empty (impossible) range returns incorrect results
 ---

 Key: CASSANDRA-5573
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5573
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.4
Reporter: Mike Schrag

 SELECT * FROM cf WHERE token(key) > 2000 AND token(key) <= 2000 LIMIT 1000 
 ALLOW FILTERING;
 This should return nothing, but instead appears to freak out and return 
 arbitrary token values.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (CASSANDRA-5573) Querying with an empty (impossible) range returns incorrect results

2013-05-16 Thread Alex Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Liu resolved CASSANDRA-5573.
-

Resolution: Invalid

 Querying with an empty (impossible) range returns incorrect results
 ---

 Key: CASSANDRA-5573
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5573
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.4
Reporter: Mike Schrag

 SELECT * FROM cf WHERE token(key) > 2000 AND token(key) <= 2000 LIMIT 1000 
 ALLOW FILTERING;
 This should return nothing, but instead appears to freak out and return 
 arbitrary token values.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[1/2] git commit: move QF.isRelevant into CF.addIfRelevant

2013-05-16 Thread jbellis
Updated Branches:
  refs/heads/trunk b6a0284fb -> c94bc106e


move QF.isRelevant into CF.addIfRelevant


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/439ce7e4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/439ce7e4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/439ce7e4

Branch: refs/heads/trunk
Commit: 439ce7e4cc90fc7daf9e6d32f549a28627d3cc3d
Parents: b6a0284
Author: Jonathan Ellis jbel...@apache.org
Authored: Thu May 16 14:32:55 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Thu May 16 14:32:55 2013 -0500

--
 src/java/org/apache/cassandra/db/ColumnFamily.java |   12 
 .../cassandra/db/filter/NamesQueryFilter.java  |6 +-
 .../apache/cassandra/db/filter/QueryFilter.java|9 -
 .../cassandra/db/filter/SliceQueryFilter.java  |4 +---
 4 files changed, 14 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/439ce7e4/src/java/org/apache/cassandra/db/ColumnFamily.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamily.java 
b/src/java/org/apache/cassandra/db/ColumnFamily.java
index 4186460..868d95d 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamily.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamily.java
@@ -98,6 +98,18 @@ public abstract class ColumnFamily implements Iterable<Column>, IRowCacheEntry
 return metadata;
 }
 
+public void addIfRelevant(Column column, int gcBefore)
+{
+// the column itself must be not gc-able (it is live, or a still relevant tombstone, or has live subcolumns), (1)
+// and if its container is deleted, the column must be changed more recently than the container tombstone (2)
+// (since otherwise, the only thing repair cares about is the container tombstone)
+if ((column.getLocalDeletionTime() >= gcBefore || column.timestamp() > column.getMarkedForDeleteAt()) // (1)
+    && (!deletionInfo().isDeleted(column.name(), column.timestamp())))                                // (2)
+{
+addColumn(column);
+}
+}
+
 public void addColumn(Column column)
 {
 addColumn(column, HeapAllocator.instance);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/439ce7e4/src/java/org/apache/cassandra/db/filter/NamesQueryFilter.java
--
diff --git a/src/java/org/apache/cassandra/db/filter/NamesQueryFilter.java 
b/src/java/org/apache/cassandra/db/filter/NamesQueryFilter.java
index 349ec2e..e7ef6a7 100644
--- a/src/java/org/apache/cassandra/db/filter/NamesQueryFilter.java
+++ b/src/java/org/apache/cassandra/db/filter/NamesQueryFilter.java
@@ -95,11 +95,7 @@ public class NamesQueryFilter implements IDiskAtomFilter
 public void collectReducedColumns(ColumnFamily container, Iterator<Column> reducedColumns, int gcBefore)
 {
 while (reducedColumns.hasNext())
-{
-Column column = reducedColumns.next();
-if (QueryFilter.isRelevant(column, container, gcBefore))
-container.addColumn(column);
-}
+container.addIfRelevant(reducedColumns.next(), gcBefore);
 }
 
 public Comparator<Column> getColumnComparator(AbstractType<?> comparator)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/439ce7e4/src/java/org/apache/cassandra/db/filter/QueryFilter.java
--
diff --git a/src/java/org/apache/cassandra/db/filter/QueryFilter.java 
b/src/java/org/apache/cassandra/db/filter/QueryFilter.java
index 4d8e640..ab4a64e 100644
--- a/src/java/org/apache/cassandra/db/filter/QueryFilter.java
+++ b/src/java/org/apache/cassandra/db/filter/QueryFilter.java
@@ -170,15 +170,6 @@ public class QueryFilter
 return cfName;
 }
 
-public static boolean isRelevant(Column column, ColumnFamily container, int gcBefore)
-{
-// the column itself must be not gc-able (it is live, or a still relevant tombstone, or has live subcolumns), (1)
-// and if its container is deleted, the column must be changed more recently than the container tombstone (2)
-// (since otherwise, the only thing repair cares about is the container tombstone)
-return (column.getLocalDeletionTime() >= gcBefore || column.timestamp() > column.getMarkedForDeleteAt()) // (1)
-       && (!container.deletionInfo().isDeleted(column.name(), column.timestamp())); // (2)
-}
-
 /**
  * @return a QueryFilter object to satisfy the given slice criteria:
  * @param key the row to slice


[2/2] git commit: update NEWS for JEMalloc

2013-05-16 Thread jbellis
update NEWS for JEMalloc


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c94bc106
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c94bc106
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c94bc106

Branch: refs/heads/trunk
Commit: c94bc106e9bf2e38a3e73a5f790e49bca0084474
Parents: 439ce7e
Author: Jonathan Ellis jbel...@apache.org
Authored: Thu May 16 15:37:37 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Thu May 16 15:37:37 2013 -0500

--
 NEWS.txt|1 +
 conf/cassandra.yaml |6 --
 2 files changed, 5 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c94bc106/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index f231df4..7116a6b 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -59,6 +59,7 @@ Features
 
 - Alias support has been added to CQL3 SELECT statement. Refer to
   CQL3 documentation (http://cassandra.apache.org/doc/cql3/CQL.html) for 
details.
+- JEMalloc support (see memory_allocator in cassandra.yaml)
 
 
 1.2.5

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c94bc106/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index ed32572..214b14a 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -188,8 +188,10 @@ row_cache_save_period: 0
 # Defaults to SerializingCacheProvider
 row_cache_provider: SerializingCacheProvider
 
-# The pluggable memory allocation for off-heap row cache, Experiments show that JEMAlloc
-# saves some memory than the native GCC allocator.
+# The off-heap memory allocator.  Affects storage engine metadata as
+# well as caches.  Experiments show that JEMAlloc saves some memory
+# than the native GCC allocator (i.e., JEMalloc is more
+# fragmentation-resistant).
 # 
 # Supported values are: NativeAllocator, JEMallocAllocator
 #



[jira] [Commented] (CASSANDRA-5234) Table created through CQL3 are not accessble to Pig 0.10

2013-05-16 Thread Alex Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13660058#comment-13660058
 ] 

Alex Liu commented on CASSANDRA-5234:
-

To fix it, we need to modify CassandraStorage to get the CF metadata from the 
system schema tables instead of Thrift describe_keyspace, because CQL3 tables 
don't show up in that call.
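
A rough sketch of what reading the metadata from the schema tables over the 
existing Thrift CQL3 call could look like (the query text and method are an 
assumption, not the eventual patch):

import java.nio.ByteBuffer;
import org.apache.cassandra.thrift.Cassandra;
import org.apache.cassandra.thrift.Compression;
import org.apache.cassandra.thrift.ConsistencyLevel;
import org.apache.cassandra.thrift.CqlResult;
import org.apache.cassandra.utils.ByteBufferUtil;

public class SchemaLookupSketch
{
    public static CqlResult describeCf(Cassandra.Client client, String ks, String cf) throws Exception
    {
        String q = String.format(
            "SELECT * FROM system.schema_columnfamilies WHERE keyspace_name = '%s' AND columnfamily_name = '%s'",
            ks, cf);
        // CQL3 tables are visible here even though describe_keyspace does not list them
        return client.execute_cql3_query(ByteBufferUtil.bytes(q), Compression.NONE, ConsistencyLevel.ONE);
    }
}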

 Table created through CQL3 are not accessble to Pig 0.10
 

 Key: CASSANDRA-5234
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5234
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Affects Versions: 1.2.1
 Environment: Red hat linux 5
Reporter: Shamim Ahmed
 Fix For: 1.2.2


 Hi,
   I have faced a bug when creating a table through CQL3 and trying to load data 
 through Pig 0.10, as follows:
 java.lang.RuntimeException: Column family 'abc' not found in keyspace 'XYZ'
   at 
 org.apache.cassandra.hadoop.pig.CassandraStorage.initSchema(CassandraStorage.java:1112)
   at 
 org.apache.cassandra.hadoop.pig.CassandraStorage.setLocation(CassandraStorage.java:615).
 This affects everything from simple tables to tables with compound keys.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-3475) LoadBroadcaster never removes endpoints

2013-05-16 Thread Roshan Pradeep (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13660138#comment-13660138
 ] 

Roshan Pradeep commented on CASSANDRA-3475:
---

This still happens to me with version 1.0.11. The JMX LoadMap shows already 
decommissioned nodes.

 LoadBroadcaster never removes endpoints
 ---

 Key: CASSANDRA-3475
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3475
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Brandon Williams
Assignee: Brandon Williams
Priority: Trivial
  Labels: lhf
 Fix For: 1.0.3

 Attachments: 3475.txt


 As the title says.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[1/4] Cassandra Triggers! patch by Vijay; reviewed by Jonathan Ellis for CASSANDRA-1311

2013-05-16 Thread vijay
Updated Branches:
  refs/heads/trunk 997ab9593 -> 72a6cff6e


http://git-wip-us.apache.org/repos/asf/cassandra/blob/72a6cff6/src/java/org/apache/cassandra/cql3/CFPropDefs.java
--
diff --git a/src/java/org/apache/cassandra/cql3/CFPropDefs.java 
b/src/java/org/apache/cassandra/cql3/CFPropDefs.java
index cc4c457..2c0ad0d 100644
--- a/src/java/org/apache/cassandra/cql3/CFPropDefs.java
+++ b/src/java/org/apache/cassandra/cql3/CFPropDefs.java
@@ -45,6 +45,7 @@ public class CFPropDefs extends PropertyDefinitions
 public static final String KW_POPULATE_IO_CACHE_ON_FLUSH = 
"populate_io_cache_on_flush";
 public static final String KW_BF_FP_CHANCE = "bloom_filter_fp_chance";
 public static final String KW_MEMTABLE_FLUSH_PERIOD = 
"memtable_flush_period_in_ms";
+public static final String KW_TRIGGER_CLASS = "trigger_class";
 
 public static final String KW_COMPACTION = "compaction";
 public static final String KW_COMPRESSION = "compression";
@@ -69,6 +70,7 @@ public class CFPropDefs extends PropertyDefinitions
 keywords.add(KW_COMPACTION);
 keywords.add(KW_COMPRESSION);
 keywords.add(KW_MEMTABLE_FLUSH_PERIOD);
+keywords.add(KW_TRIGGER_CLASS);
 
 obsoleteKeywords.add("compaction_strategy_class");
 obsoleteKeywords.add("compaction_strategy_options");
@@ -150,6 +152,8 @@ public class CFPropDefs extends PropertyDefinitions
 
 cfm.speculativeRetry(CFMetaData.SpeculativeRetry.fromString(getString(KW_SPECULATIVE_RETRY,
 cfm.getSpeculativeRetry().toString())));
 cfm.memtableFlushPeriod(getInt(KW_MEMTABLE_FLUSH_PERIOD, 
cfm.getMemtableFlushPeriod()));
 cfm.populateIoCacheOnFlush(getBoolean(KW_POPULATE_IO_CACHE_ON_FLUSH, 
cfm.populateIoCacheOnFlush()));
+if (hasProperty(KW_TRIGGER_CLASS))
+cfm.triggerClass(getSet(KW_TRIGGER_CLASS, cfm.getTriggerClass()));
 
 if (compactionStrategyClass != null)
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/72a6cff6/src/java/org/apache/cassandra/cql3/PropertyDefinitions.java
--
diff --git a/src/java/org/apache/cassandra/cql3/PropertyDefinitions.java 
b/src/java/org/apache/cassandra/cql3/PropertyDefinitions.java
index ba83e45..82a1b82 100644
--- a/src/java/org/apache/cassandra/cql3/PropertyDefinitions.java
+++ b/src/java/org/apache/cassandra/cql3/PropertyDefinitions.java
@@ -76,6 +76,16 @@ public class PropertyDefinitions
 return (Map<String, String>)val;
 }
 
+protected Set<String> getSet(String name, Set<String> defaultValue) throws 
SyntaxException
+{
+Object val = properties.get(name);
+if (val == null)
+return defaultValue;
+if (!(val instanceof Set))
+throw new SyntaxException(String.format("Invalid value for 
property '%s'", name));
+return (Set<String>) val;
+}
+
 public Boolean hasProperty(String name)
 {
 return properties.containsKey(name);

http://git-wip-us.apache.org/repos/asf/cassandra/blob/72a6cff6/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
index e9623a2..c7cd9ae 100644
--- a/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/BatchStatement.java
@@ -125,11 +125,8 @@ public class BatchStatement implements CQLStatement
 throw new InvalidRequestException("Invalid empty consistency 
level");
 
 Collection<? extends IMutation> mutations = getMutations(variables, 
false, cl, queryState.getTimestamp());
-if (type == Type.LOGGED && mutations.size() > 1)
-StorageProxy.mutateAtomically((Collection<RowMutation>) mutations, 
cl);
-else
-StorageProxy.mutate(mutations, cl);
-
+boolean mutateAtomic = (type == Type.LOGGED && mutations.size() > 1);
+StorageProxy.mutateWithTriggers(mutations, cl, mutateAtomic);
 return null;
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/72a6cff6/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
index 45fbecc..5b3e718 100644
--- a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
@@ -351,7 +351,7 @@ public abstract class ModificationStatement implements 
CQLStatement
 else
 cl.validateForWrite(cfm.ksName);
 
-StorageProxy.mutate(getMutations(variables, 

[4/4] git commit: Cassandra Triggers! patch by Vijay; reviewed by Jonathan Ellis for CASSANDRA-1311

2013-05-16 Thread vijay
Cassandra Triggers!
patch by Vijay; reviewed by Jonathan Ellis for CASSANDRA-1311

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/72a6cff6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/72a6cff6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/72a6cff6

Branch: refs/heads/trunk
Commit: 72a6cff6e883320a6ceec88e41b42ca15dff1e2e
Parents: 997ab95
Author: Vijay Parthasarathy vijay2...@gmail.com
Authored: Thu May 16 18:19:55 2013 -0700
Committer: Vijay Parthasarathy vijay2...@gmail.com
Committed: Thu May 16 18:19:55 2013 -0700

--
 interface/cassandra.thrift |1 +
 .../org/apache/cassandra/thrift/Cassandra.java |  952 +++---
 .../org/apache/cassandra/thrift/CfDef.java |  283 -
 .../cassandra/thrift/CounterSuperColumn.java   |4 +-
 .../org/apache/cassandra/thrift/CqlMetadata.java   |   88 +-
 .../apache/cassandra/thrift/CqlPreparedResult.java |   64 +-
 .../org/apache/cassandra/thrift/CqlResult.java |   36 +-
 .../org/apache/cassandra/thrift/CqlRow.java|   36 +-
 .../org/apache/cassandra/thrift/IndexClause.java   |4 +-
 .../org/apache/cassandra/thrift/KeyRange.java  |4 +-
 .../org/apache/cassandra/thrift/KeySlice.java  |4 +-
 .../org/apache/cassandra/thrift/KsDef.java |   80 +-
 .../apache/cassandra/thrift/SlicePredicate.java|4 +-
 .../org/apache/cassandra/thrift/SuperColumn.java   |4 +-
 .../org/apache/cassandra/thrift/TokenRange.java|   12 +-
 src/java/org/apache/cassandra/cli/CliClient.java   |   16 +-
 .../org/apache/cassandra/config/CFMetaData.java|   29 +-
 .../apache/cassandra/cql/AlterTableStatement.java  |1 +
 src/java/org/apache/cassandra/cql/CFPropDefs.java  |   12 +
 .../cassandra/cql/CreateColumnFamilyStatement.java |9 +-
 .../org/apache/cassandra/cql/QueryProcessor.java   |6 +-
 src/java/org/apache/cassandra/cql3/CFPropDefs.java |4 +
 .../apache/cassandra/cql3/PropertyDefinitions.java |   10 +
 .../cassandra/cql3/statements/BatchStatement.java  |7 +-
 .../cql3/statements/ModificationStatement.java |2 +-
 .../org/apache/cassandra/db/CounterMutation.java   |5 +
 src/java/org/apache/cassandra/db/IMutation.java|1 +
 .../org/apache/cassandra/service/StorageProxy.java |   20 +-
 .../cassandra/service/StorageProxyMBean.java   |2 +
 .../apache/cassandra/thrift/CassandraServer.java   |5 +-
 .../cassandra/triggers/CustomClassLoader.java  |  113 ++
 .../org/apache/cassandra/triggers/ITrigger.java|   31 +
 .../apache/cassandra/triggers/TriggerExecutor.java |  129 ++
 .../org/apache/cassandra/utils/FBUtilities.java|6 +
 .../org/apache/cassandra/cli/CliHelp.yaml  |3 +
 35 files changed, 1262 insertions(+), 725 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/72a6cff6/interface/cassandra.thrift
--
diff --git a/interface/cassandra.thrift b/interface/cassandra.thrift
index b057fa0..1e78d51 100644
--- a/interface/cassandra.thrift
+++ b/interface/cassandra.thrift
@@ -448,6 +448,7 @@ struct CfDef {
 40: optional i32 default_time_to_live,
 41: optional i32 index_interval,
 42: optional string speculative_retry=NONE,
+43: optional set<string> trigger_class,
 
 /* All of the following are now ignored and unsupplied. */
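
For what it's worth, here is a purely illustrative sketch of how a Thrift client 
might attach a trigger to an existing column family through the new CfDef field. 
The generated setter name (setTrigger_class) is assumed from standard Thrift 
codegen, and "com.example.MyTrigger" is a placeholder; this is not code from the 
patch.

{code}
import java.util.Collections;

import org.apache.cassandra.thrift.Cassandra;
import org.apache.cassandra.thrift.CfDef;
import org.apache.cassandra.thrift.KsDef;

public class AttachTrigger
{
    // Fetch the current CfDef, set the assumed trigger_class field, and push
    // the updated definition back to the cluster.
    public static void attach(Cassandra.Client client, String keyspace, String cf) throws Exception
    {
        client.set_keyspace(keyspace);
        KsDef ksDef = client.describe_keyspace(keyspace);
        for (CfDef def : ksDef.getCf_defs())
        {
            if (def.getName().equals(cf))
            {
                def.setTrigger_class(Collections.singleton("com.example.MyTrigger"));
                client.system_update_column_family(def);
                return;
            }
        }
        throw new IllegalArgumentException("column family not found: " + cf);
    }
}
{code}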
 



[jira] [Created] (CASSANDRA-5574) Add trigger examples

2013-05-16 Thread Vijay (JIRA)
Vijay created CASSANDRA-5574:


 Summary: Add trigger examples 
 Key: CASSANDRA-5574
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5574
 Project: Cassandra
  Issue Type: Test
Reporter: Vijay
Assignee: Vijay
Priority: Trivial


Since CASSANDRA-1311 is committed, we need some example code to show the power and 
usage of triggers, similar to the ones in the examples directory.
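
For reference, a minimal trigger could look something like the sketch below. It 
assumes the ITrigger interface committed for CASSANDRA-1311 exposes a single 
augment(ByteBuffer, ColumnFamily) method returning additional RowMutations, and 
that returning an empty collection is acceptable; the class itself is illustrative, 
not the example that should ship.

{code}
import java.nio.ByteBuffer;
import java.util.Collection;
import java.util.Collections;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import org.apache.cassandra.db.ColumnFamily;
import org.apache.cassandra.db.RowMutation;
import org.apache.cassandra.triggers.ITrigger;
import org.apache.cassandra.utils.ByteBufferUtil;

// Illustrative trigger: log each incoming update and add no extra mutations.
public class LoggingTrigger implements ITrigger
{
    private static final Logger logger = LoggerFactory.getLogger(LoggingTrigger.class);

    public Collection<RowMutation> augment(ByteBuffer partitionKey, ColumnFamily update)
    {
        // getColumnCount() and bytesToHex() are existing utilities; the
        // augment() signature itself is the assumption noted above.
        logger.info("trigger fired for key {} ({} columns in update)",
                    ByteBufferUtil.bytesToHex(partitionKey),
                    update.getColumnCount());
        return Collections.emptyList();
    }
}
{code}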

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


git commit: don't declare throwing exceptions that aren't thrown

2013-05-16 Thread dbrosius
Updated Branches:
  refs/heads/trunk 72a6cff6e - 410142b06


don't declare throwing exceptions that aren't thrown


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/410142b0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/410142b0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/410142b0

Branch: refs/heads/trunk
Commit: 410142b06a9c7575def0e73a807540e8cfdf8e9f
Parents: 72a6cff
Author: Dave Brosius dbros...@apache.org
Authored: Thu May 16 22:59:30 2013 -0400
Committer: Dave Brosius dbros...@apache.org
Committed: Thu May 16 22:59:30 2013 -0400

--
 .../db/columniterator/SSTableNamesIterator.java|2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/410142b0/src/java/org/apache/cassandra/db/columniterator/SSTableNamesIterator.java
--
diff --git 
a/src/java/org/apache/cassandra/db/columniterator/SSTableNamesIterator.java 
b/src/java/org/apache/cassandra/db/columniterator/SSTableNamesIterator.java
index 20ca0cb..39cfa87 100644
--- a/src/java/org/apache/cassandra/db/columniterator/SSTableNamesIterator.java
+++ b/src/java/org/apache/cassandra/db/columniterator/SSTableNamesIterator.java
@@ -152,7 +152,7 @@ public class SSTableNamesIterator extends 
AbstractIterator<OnDiskAtom> implement
 iter = result.iterator();
 }
 
-private void readSimpleColumns(FileDataInput file, SortedSet<ByteBuffer> 
columnNames, List<OnDiskAtom> result, int columnCount) throws IOException
+private void readSimpleColumns(FileDataInput file, SortedSet<ByteBuffer> 
columnNames, List<OnDiskAtom> result, int columnCount)
 {
 Iterator<OnDiskAtom> atomIterator = 
cf.metadata().getOnDiskIterator(file, columnCount, sstable.descriptor.version);
 int n = 0;



[jira] [Created] (CASSANDRA-5575) permanent client failures: attempting batch_mutate on data that serializes to more than thrift_framed_transport_size_in_mb fails forever

2013-05-16 Thread John R. Frank (JIRA)
John R. Frank created CASSANDRA-5575:


 Summary: permanent client failures:  attempting batch_mutate on 
data that serializes to more than thrift_framed_transport_size_in_mb fails 
forever 
 Key: CASSANDRA-5575
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5575
 Project: Cassandra
  Issue Type: Bug
Reporter: John R. Frank


Since batch_mutate is a thrift interface, it unifies all of the data in a batch 
into a single thrift message.  This means that clients cannot easily predict 
whether a batch will exceed thrift_framed_transport_size_in_mb.

Thrift's client libraries do not yet raise an exception on exceeding the frame 
size:
https://issues.apache.org/jira/browse/THRIFT-1324 

So, Cassandra clients are doomed to the infinite loop illustrated here: 
http://mail-archives.apache.org/mod_mbox/cassandra-user/201305.mbox/%3calpine.deb.2.00.1305101202190.25...@computableinsights.com%3E


I still don't understand why Cassandra has both of these parameters -- the 
second parameter appears to be superfluous:
{code:borderStyle=solid}
# Frame size for thrift (maximum field length).
thrift_framed_transport_size_in_mb: 1500

# The max length of a thrift message, including all fields and
# internal thrift overhead.
thrift_max_message_length_in_mb: 1600
{code}

(Note the monstrous message sizes we are now using to avoid zombie clients; 
this is clearly too brittle to go into production.  Is Cassandra really only 
for small batches?)

Possible solutions:

1) fix Thrift and catch the error inside all the Cassandra clients, subdivide 
the batch, and raise a further error if an individual message is too large 
(see the sketch below).

2) change batch_mutate to serialize each mutation separately and assemble the 
messages into a thrift transmission controlled more directly by the client

3) plan the end-of-life of the Thrift interfaces to Cassandra and replace them 
with something else -- the new binary streaming protocol we've been hearing 
about?
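
As a stopgap along the lines of (1), a client could at least estimate each 
Mutation's serialized size with Thrift's TSerializer and split the batch before 
sending. A rough sketch only; the frame budget and the splitting policy are 
assumptions, not an agreed design:

{code}
import java.util.ArrayList;
import java.util.List;

import org.apache.thrift.TException;
import org.apache.thrift.TSerializer;
import org.apache.thrift.protocol.TBinaryProtocol;

import org.apache.cassandra.thrift.Mutation;

public class BatchSplitter
{
    // Assumed client-side budget: stay well under thrift_framed_transport_size_in_mb.
    private static final long FRAME_BUDGET_BYTES = 15L * 1024 * 1024;

    // Split a flat list of mutations into chunks whose *estimated* serialized
    // size stays under the budget; per-mutation thrift overhead is ignored,
    // which is why the budget should be conservative.
    public static List<List<Mutation>> split(List<Mutation> mutations) throws TException
    {
        TSerializer serializer = new TSerializer(new TBinaryProtocol.Factory());
        List<List<Mutation>> chunks = new ArrayList<>();
        List<Mutation> current = new ArrayList<>();
        long currentBytes = 0;

        for (Mutation m : mutations)
        {
            long size = serializer.serialize(m).length;
            if (size > FRAME_BUDGET_BYTES)
                throw new TException("single mutation larger than frame budget: " + size);
            if (currentBytes + size > FRAME_BUDGET_BYTES && !current.isEmpty())
            {
                chunks.add(current);
                current = new ArrayList<>();
                currentBytes = 0;
            }
            current.add(m);
            currentBytes += size;
        }
        if (!current.isEmpty())
            chunks.add(current);
        return chunks;
    }
}
{code}

Each chunk could then be sent as its own batch_mutate call, at the cost of losing 
atomicity across chunks.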

Other ideas?


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5435) Support range tombstones from thrift

2013-05-16 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13660301#comment-13660301
 ] 

Edward Capriolo commented on CASSANDRA-5435:


I get that all the other thrift tests are written in Python, but the 
instructions here are not right:

http://wiki.apache.org/cassandra/HowToContribute

I have installed all this Python stuff and the tests are not working out 
of the box.


{noformat}
E
==
ERROR: system.test_thrift_server.TestMutations.test_bad_batch_calls
--
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 133, in run
self.runTest(result)
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 151, in runTest
test(result)
  File "/usr/lib64/python2.7/unittest/case.py", line 429, in __call__
return self.run(*args, **kwds)
  File "/usr/lib64/python2.7/unittest/case.py", line 362, in run
result.addError(self, sys.exc_info())
  File "/usr/lib/python2.7/site-packages/nose/proxy.py", line 131, in addError
formatted = plugins.formatError(self.test, err)
  File "/usr/lib/python2.7/site-packages/nose/plugins/manager.py", line 94, in 
__call__
return self.call(*arg, **kw)
  File "/usr/lib/python2.7/site-packages/nose/plugins/manager.py", line 136, in 
chain
result = meth(*arg, **kw)
  File "/usr/lib/python2.7/site-packages/nose/plugins/capture.py", line 81, in 
formatError
return (ec, self.addCaptureToErr(ev, output), tb)
  File "/usr/lib/python2.7/site-packages/nose/plugins/capture.py", line 106, in 
addCaptureToErr
output, ln(u'>> end captured stdout <<')])
TypeError: sequence item 0: expected string or Unicode, exceptions.SystemExit 
found

{noformat}

This is what I am trying to add.
{noformat}
[edward@jackintosh system]$ diff test_thrift_server.py 
/tmp/test_thrift_server.py 
221a222,238
>     def test_range_tombstone(self):
>         _set_keyspace('Keyspace1')
>         client.insert('keyrange1', ColumnParent('Standard1'), Column('a', 'a', 0), ConsistencyLevel.ONE)
>         client.insert('keyrange1', ColumnParent('Standard1'), Column('b', 'b', 0), ConsistencyLevel.ONE)
>         client.insert('keyrange1', ColumnParent('Standard1'), Column('c', 'c', 0), ConsistencyLevel.ONE)
>         client.insert('keyrange1', ColumnParent('Standard1'), Column('d', 'd', 0), ConsistencyLevel.ONE)
>         update_map = {'keyrange1': {'Standard1': [
>             Mutation(deletion=Deletion(predicate=SlicePredicate(slice_range=SliceRange('b', 'c', False, 1000)))),
>         ]}}
>         client.batch_mutate(update_map, ConsistencyLevel.ONE)
>         p = SlicePredicate(slice_range=SliceRange('', '', False, 10))
>         column_parent = ColumnParent('Standard1')
>         slice = [result.column
>                  for result in client.get_slice('keyrange1', column_parent, p, ConsistencyLevel.ONE)]
>         assert slice == [Column('a', 'a', 0), Column('c', 'c', 0), Column('d', 'd', 0)], slice
>
>
{noformat}

I'm not a Python developer.
The Python steps in the documentation are not correct.
I have other stuff to do in life than struggle with Python.

Q. Is it really necessary that we test a Java project using Python? Don't 
the current Java tests work and prove the feature works? 





 Support range tombstones from thrift
 

 Key: CASSANDRA-5435
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5435
 Project: Cassandra
  Issue Type: New Feature
  Components: API
Reporter: Edward Capriolo
Assignee: Edward Capriolo
Priority: Minor

 I see a RangeTombstone test and methods in RowMutation. However, thrift's 
 validate method throws an exception when Deletions have a slice predicate. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-1311) Triggers

2013-05-16 Thread Patrick McFadin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13660316#comment-13660316
 ] 

Patrick McFadin commented on CASSANDRA-1311:


I'm going to have to object one more time to storing a jar file in the file 
system. With large scale deployments, this is going to be a disaster waiting to 
happen. One last plea for https://issues.apache.org/jira/browse/CASSANDRA-4954 ?

 Triggers
 

 Key: CASSANDRA-1311
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1311
 Project: Cassandra
  Issue Type: New Feature
Reporter: Maxim Grinev
Assignee: Vijay
 Fix For: 2.0

 Attachments: 0001-1311-v3.patch, HOWTO-PatchAndRunTriggerExample.txt, 
 HOWTO-PatchAndRunTriggerExample-update1.txt, ImplementationDetails.pdf, 
 ImplementationDetails-update1.pdf, trunk-967053.txt, 
 trunk-984391-update1.txt, trunk-984391-update2.txt


 Asynchronous triggers is a basic mechanism to implement various use cases of 
 asynchronous execution of application code at database side. For example to 
 support indexes and materialized views, online analytics, push-based data 
 propagation.
 Please find the motivation, triggers description and list of applications:
 http://maxgrinev.com/2010/07/23/extending-cassandra-with-asynchronous-triggers/
 An example of using triggers for indexing:
 http://maxgrinev.com/2010/07/23/managing-indexes-in-cassandra-using-async-triggers/
 Implementation details are attached.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5435) Support range tombstones from thrift

2013-05-16 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13660324#comment-13660324
 ] 

Aleksey Yeschenko commented on CASSANDRA-5435:
--

Your stack trace suggests that you haven't actually rebased against a recent-ish 
trunk, as I recommended in the last comment. You should see no errors once you 
rebase, out of the box.

 Support range tombstones from thrift
 

 Key: CASSANDRA-5435
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5435
 Project: Cassandra
  Issue Type: New Feature
  Components: API
Reporter: Edward Capriolo
Assignee: Edward Capriolo
Priority: Minor

 I see a RangeTombstone test and methods in RowMutation. However, thrift's 
 validate method throws an exception when Deletions have a slice predicate. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira