[jira] [Updated] (CASSANDRA-5626) Support empty IN queries

2013-07-24 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-5626:
-

Attachment: 5626.txt

 Support empty IN queries
 

 Key: CASSANDRA-5626
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5626
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Alexander Solovyev
Assignee: Aleksey Yeschenko
Priority: Minor
 Attachments: 5626.txt


 It would be nice to have support of empty IN queries. 
 Example: SELECT a FROM t WHERE aKey IN (). 
 One of the reasons is to have such support in DataStax Java Driver (see 
 discussion here: https://datastax-oss.atlassian.net/browse/JAVA-106).
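 To illustrate the driver-side motivation, a minimal sketch (plain Java string 
 building, not the DataStax Java Driver API; all names here are made up) of how 
 an empty key list naturally produces IN ():
 {code}
 import java.util.List;
 import java.util.stream.Collectors;

 public class EmptyInSketch
 {
     // Builds the query text from a key list computed at runtime.
     static String selectByKeys(List<String> keys)
     {
         String in = keys.stream()
                         .map(k -> "'" + k + "'")
                         .collect(Collectors.joining(", "));
         return "SELECT a FROM t WHERE aKey IN (" + in + ")";
     }

     public static void main(String[] args)
     {
         System.out.println(selectByKeys(List.of("k1", "k2"))); // IN ('k1', 'k2')
         System.out.println(selectByKeys(List.of()));           // IN () -- currently rejected
     }
 }
 {code}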

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5626) Support empty IN queries

2013-07-24 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-5626:
-

 Reviewer: slebresne
Affects Version/s: 1.2.0
   Labels: cql3  (was: )

 Support empty IN queries
 

 Key: CASSANDRA-5626
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5626
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.2.0
Reporter: Alexander Solovyev
Assignee: Aleksey Yeschenko
Priority: Minor
  Labels: cql3
 Attachments: 5626.txt


 It would be nice to have support of empty IN queries. 
 Example: SELECT a FROM t WHERE aKey IN (). 
 One of the reasons is to have such support in DataStax Java Driver (see 
 discussion here: https://datastax-oss.atlassian.net/browse/JAVA-106).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5795) now() is being rejected in INSERTs when inside collections

2013-07-24 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-5795:


Attachment: 5795.txt

CASSANDRA-5616 was indeed slightly more work than I thought for collections, my 
bad. Attaching a patch to fix it (on the plus side, if we ever want to support 
bind markers inside collections (not that I think it's useful), the code will be 
mostly there). I've pushed a dtest too.
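
For illustration only (this is not the attached 5795.txt, and the types below are 
stand-ins rather than Cassandra's cql3 classes): the idea is that inside a 
collection literal only bind markers need to stay unsupported, while other 
non-terminals such as now() can simply be evaluated at execution time.
{code}
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

interface Term { Object evaluate(); }

final class BindMarker implements Term {               // a "?" placeholder
    public Object evaluate() { throw new UnsupportedOperationException("unbound marker"); }
}

final class NowFunction implements Term {              // non-terminal, but evaluable at execution time
    public Object evaluate() { return UUID.randomUUID(); }  // placeholder for a timeuuid
}

final class Constant implements Term {
    private final Object value;
    Constant(Object value) { this.value = value; }
    public Object evaluate() { return value; }
}

public class CollectionLiteralSketch {
    static List<Object> prepareListLiteral(List<Term> elements) {
        List<Object> prepared = new ArrayList<>();
        for (Term t : elements) {
            if (t instanceof BindMarker)               // only bind markers stay unsupported
                throw new IllegalArgumentException("bind variables are not supported inside collection literals");
            prepared.add(t.evaluate());                // now() and constants are accepted
        }
        return prepared;
    }

    public static void main(String[] args) {
        System.out.println(prepareListLiteral(List.of(new NowFunction(), new Constant(42))));
    }
}
{code}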

 now() is being rejected in INSERTs when inside collections
 --

 Key: CASSANDRA-5795
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5795
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.6
Reporter: Aleksey Yeschenko
Assignee: Sylvain Lebresne
 Attachments: 5795.txt


 Lists, Sets and Maps reject NonTerminal terms in prepare:
 {code}
 if (t instanceof Term.NonTerminal)
     throw new InvalidRequestException(String.format("Invalid list literal for %s: bind variables are not supported inside collection literals", receiver));
 {code}
 and now() is instanceof NonTerminal since CASSANDRA-5616, hence
 {noformat}
 cqlsh:test> insert into demo (id, timeuuids) values (0, [now()]);
 Bad Request: Invalid list literal for tus: bind variables are not supported 
 inside collection literals
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5614) W/O specified columns ASPCSI does not get notified of deletes

2013-07-24 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe updated CASSANDRA-5614:
---

Attachment: 0004-CASSANDRA-5614-Consider-timestamps-when-checking-col.patch

 W/O specified columns ASPCSI does not get notified of deletes
 -

 Key: CASSANDRA-5614
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5614
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.0
Reporter: Benjamin Coverston
Assignee: Sam Tunnicliffe
Priority: Minor
 Fix For: 1.2.7

 Attachments: 
 0001-CASSANDRA-5614-PreCompactedRow-updates-2i-correctly.patch, 
 0002-CASSANDRA-5614-LazilyCompactedRow-outputs-SSTables-a.patch, 
 0003-CASSANDRA-5614-Memtable-updates-with-RowTombstone-up.patch, 
 0004-CASSANDRA-5614-Consider-timestamps-when-checking-col.patch


 I'm working on a secondary index implementation based on the composite index 
 type.
 AbstractSimplePerColumnSecondaryIndex.java#delete is not called when CQL 
 delete statements do not specify columns.
 When I specify columns it is called. Pretty sure this is a bug.
 Setup:
 {code}
 cqlsh> create KEYSPACE foo WITH replication = {'class': 'SimpleStrategy' , 
 'replication_factor': 1};
 cqlsh> use foo;
 cqlsh:foo> CREATE TABLE albums (artist text, album text, rating int, release 
 int, PRIMARY KEY (artist, album));
 cqlsh:foo> CREATE INDEX ON albums (rating);
 {code}
 {code}
 cqlsh:foo> insert into albums (artist, album, rating, release) VALUES 
 ('artist', 'album', 1, 2);
 {code}
 Does not get called here:
 {code}
 cqlsh:foo> DELETE FROM albums where artist='artist' and album='album';
 {code}
 {code}
 cqlsh:foo> insert into albums (artist, album, rating, release) VALUES 
 ('artist', 'album', 1, 2);
 {code}
 gets called here:
 {code}
 cqlsh:foo> DELETE rating FROM albums where artist='artist' and album='album';
 {code}
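 For illustration, a self-contained toy (not Cassandra code; every name is made 
 up) showing the asymmetry being reported: the per-column index is cleaned up 
 when the delete names a column, but a whole-row delete path can skip that step:
 {code}
 import java.util.HashMap;
 import java.util.HashSet;
 import java.util.Map;
 import java.util.Set;

 public class RowDeleteIndexToy {
     // indexed value -> row keys, standing in for a per-column secondary index
     static final Map<Object, Set<String>> ratingIndex = new HashMap<>();
     static final Map<String, Map<String, Object>> rows = new HashMap<>();

     static void insert(String key, int rating) {
         rows.computeIfAbsent(key, k -> new HashMap<>()).put("rating", rating);
         ratingIndex.computeIfAbsent(rating, r -> new HashSet<>()).add(key);
     }

     static void deleteColumn(String key, String column) {
         Map<String, Object> row = rows.get(key);
         if (row == null)
             return;
         Object old = row.remove(column);
         if (old != null)
             ratingIndex.getOrDefault(old, new HashSet<>()).remove(key);  // index IS notified
     }

     static void deleteRow(String key) {
         rows.remove(key);  // the reported bug: the index never hears about the old value
     }

     public static void main(String[] args) {
         insert("artist:album", 1);
         deleteRow("artist:album");
         System.out.println("stale index entries after row delete: " + ratingIndex.get(1));
     }
 }
 {code}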

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5614) W/O specified columns ASPCSI does not get notified of deletes

2013-07-24 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13718111#comment-13718111
 ] 

Sam Tunnicliffe commented on CASSANDRA-5614:


I realised I'd missed the case where columns are added after a RT, so attaching 
a fourth patch which handles that. 
My github branch is https://github.com/beobal/cassandra/tree/5614


 W/O specified columns ASPCSI does not get notified of deletes
 -

 Key: CASSANDRA-5614
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5614
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.0
Reporter: Benjamin Coverston
Assignee: Sam Tunnicliffe
Priority: Minor
 Fix For: 1.2.7

 Attachments: 
 0001-CASSANDRA-5614-PreCompactedRow-updates-2i-correctly.patch, 
 0002-CASSANDRA-5614-LazilyCompactedRow-outputs-SSTables-a.patch, 
 0003-CASSANDRA-5614-Memtable-updates-with-RowTombstone-up.patch, 
 0004-CASSANDRA-5614-Consider-timestamps-when-checking-col.patch


 I'm working on a secondary index implementation based on the composite index 
 type.
 AbstractSimplePerColumnSecondaryIndex.java#delete is not called when CQL 
 delete statements do not specify columns.
 When I specify columns it is called. Pretty sure this is a bug.
 Setup:
 {code}
 cqlsh> create KEYSPACE foo WITH replication = {'class': 'SimpleStrategy' , 
 'replication_factor': 1};
 cqlsh> use foo;
 cqlsh:foo> CREATE TABLE albums (artist text, album text, rating int, release 
 int, PRIMARY KEY (artist, album));
 cqlsh:foo> CREATE INDEX ON albums (rating);
 {code}
 {code}
 cqlsh:foo> insert into albums (artist, album, rating, release) VALUES 
 ('artist', 'album', 1, 2);
 {code}
 Does not get called here:
 {code}
 cqlsh:foo> DELETE FROM albums where artist='artist' and album='album';
 {code}
 {code}
 cqlsh:foo> insert into albums (artist, album, rating, release) VALUES 
 ('artist', 'album', 1, 2);
 {code}
 gets called here:
 {code}
 cqlsh:foo> DELETE rating FROM albums where artist='artist' and album='album';
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5761) Issue with secondary index sstable.

2013-07-24 Thread Andriy Yevsyukov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13718133#comment-13718133
 ] 

Andriy Yevsyukov commented on CASSANDRA-5761:
-

OK, we had no success reproducing this with a test application, so I will try to 
explain our system a bit. We have 4 Cassandra nodes; here is our yaml:
{noformat}
cluster_name: 'Our Cassandra Cluster'
num_tokens: 256
initial_token:
max_hint_window_in_ms: 10800000 # 3 hours
hinted_handoff_throttle_in_kb: 1024
max_hints_delivery_threads: 2
authenticator: org.apache.cassandra.auth.AllowAllAuthenticator
authorizer: org.apache.cassandra.auth.AllowAllAuthorizer
permissions_validity_in_ms: 2000
partitioner: org.apache.cassandra.dht.Murmur3Partitioner
commitlog_directory: /commit/cassandra/commitlog
disk_failure_policy: stop
key_cache_size_in_mb:
key_cache_save_period: 14400
row_cache_size_in_mb: 0
row_cache_save_period: 0
row_cache_provider: SerializingCacheProvider
saved_caches_directory: /commit/cassandra/saved_caches
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000
commitlog_segment_size_in_mb: 32
seed_provider:
  - seeds: our secret seed ip
flush_largest_memtables_at: 0.75
reduce_cache_sizes_at: 0.85
reduce_cache_capacity_to: 0.6
concurrent_reads: 64
concurrent_writes: 128
memtable_total_space_in_mb: 1024
memtable_flush_writers: 2
memtable_flush_queue_size: 24
trickle_fsync: false
trickle_fsync_interval_in_kb: 10240
storage_port: 7000
ssl_storage_port: 7001
listen_address: 192.168.10.6
native_transport_port: 9042
start_rpc: true
rpc_address:
rpc_port: 9160
rpc_keepalive: true
rpc_server_type: sync
thrift_framed_transport_size_in_mb: 15
thrift_max_message_length_in_mb: 16
incremental_backups: false
snapshot_before_compaction: false
auto_snapshot: true
column_index_size_in_kb: 64
in_memory_compaction_limit_in_mb: 64
multithreaded_compaction: false
compaction_throughput_mb_per_sec: 16
compaction_preheat_key_cache: true
read_request_timeout_in_ms: 60000
range_request_timeout_in_ms: 60000
write_request_timeout_in_ms: 60000
truncate_request_timeout_in_ms: 60000
request_timeout_in_ms: 60000
cross_node_timeout: false
endpoint_snitch: SimpleSnitch
dynamic_snitch_update_interval_in_ms: 100 
dynamic_snitch_reset_interval_in_ms: 600000
dynamic_snitch_badness_threshold: 0.1
request_scheduler: org.apache.cassandra.scheduler.NoScheduler
index_interval: 128
server_encryption_options:
internode_encryption: none
keystore: conf/.keystore
keystore_password: cassandra
truststore: conf/.truststore
truststore_password: cassandra
client_encryption_options:
enabled: false
keystore: conf/.keystore
keystore_password: cassandra
internode_compression: all
inter_dc_tcp_nodelay: true
{noformat}
As I said, we use CQL3 to create our table structures before we start; here is a 
sample of how we do that for one CF:
{noformat}
CREATE KEYSPACE common WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': 1};

CREATE TABLE events(
key varchar,
event_id bigint, 
rotation_number bigint, 
source_name varchar, 
event_date bigint, 
event_day bigint, 
timestamp bigint, 
description varchar,
type varchar, 
league varchar, 
points float,
units varchar,
has_result boolean, 
result varchar,
result_desc varchar,
result_score varchar,
actual boolean,
 PRIMARY KEY (key) 
)WITH COMPACT STORAGE;

CREATE INDEX ON events (event_id);
CREATE INDEX ON events (rotation_number);
CREATE INDEX ON events (source_name);
CREATE INDEX ON events (event_date);
CREATE INDEX ON events (event_day);
CREATE INDEX ON events (type);
CREATE INDEX ON events (league);
CREATE INDEX ON events (common_total_units);
CREATE INDEX ON events (has_result);
CREATE INDEX ON events (actual);
{noformat}

 We use our own resource adapter to work with Cassandra and our own API over 
Hector. Here is the fragment showing how we store data into the CF:
{noformat}
class RowMutationBatchContextImpl<K,N> extends AbstractBuilderContext<K,N> 
implements RowMutationBatchContext<K,N> {

    private K key;
    private Mutator mutator;
    private RowMutation<K, N> rowMutation;
    private MutationBatch mutationBatch;

    RowMutationBatchContextImpl(Mutator mutator) {
        this.mutator = mutator;
    }

    @Override
    public void setKey(K key) {
        this.key = key;
    }

    @Override
    public void setRowMutation(RowMutation<K, N> rowMutation) {
        this.rowMutation = rowMutation;
    }

    @Override
    public RowMutation<K,N> getRowMutation() {
        return rowMutation;
    }

    @Override
    public void setMutationBatch(MutationBatch mutationBatch) {
        this.mutationBatch = mutationBatch;
    }

    @Override
    public MutationBatch getMutationBatch() {
        return mutationBatch;
    }

    @Override
    public <V> void insert(ColumnMetaData<N, V> columnDef, V value) {
        mutator.addInsertion(toKeyComponents(key), 
            getColumnFamilyMetaData().getName(), 

[jira] [Commented] (CASSANDRA-5626) Support empty IN queries

2013-07-24 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13718185#comment-13718185
 ] 

Sylvain Lebresne commented on CASSANDRA-5626:
-

On the patch:
* The simplification in getKeyBounds is not completely correct. If we do a 
2ndary index query with an EQ on the partition key (which is supported), we will 
still go through getKeyBounds even though we don't use the token function (more 
precisely, this patch breaks the cql_tests.py:TestCQL.indexed_with_eq_test 
dtest). We can still simplify things a bit, because we don't need the current 
generality, but I think I'd rather open a separate ticket for that.
* SelectStatement.getSliceCommands() was assuming that names filters were never 
null, which is not the case anymore with this patch. Concretely, I've pushed a 
test for this in dtests (http://goo.gl/VpqD46) and with the attached patch 
there's an NPE thrown by this test.
* Why remove the break when checking if we have at least one indexed EQ clause?
* Nit: in getIndexExpressions, let's add an assert that we're not in the IN 
case, if only for documenting that we only use the first element on purpose.


 Support empty IN queries
 

 Key: CASSANDRA-5626
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5626
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.2.0
Reporter: Alexander Solovyev
Assignee: Aleksey Yeschenko
Priority: Minor
  Labels: cql3
 Attachments: 5626.txt


 It would be nice to have support of empty IN queries. 
 Example: SELECT a FROM t WHERE aKey IN (). 
 One of the reasons is to have such support in DataStax Java Driver (see 
 discussion here: https://datastax-oss.atlassian.net/browse/JAVA-106).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


git commit: Fix SELECT validation

2013-07-24 Thread slebresne
Updated Branches:
  refs/heads/trunk 7b841aade -> 4021fffa0


Fix SELECT validation


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4021fffa
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4021fffa
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4021fffa

Branch: refs/heads/trunk
Commit: 4021fffa0c252b36a872501449d292e0828dd182
Parents: 7b841aa
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Jul 24 12:36:58 2013 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Jul 24 12:37:11 2013 +0200

--
 .../apache/cassandra/cql3/statements/SelectStatement.java   | 9 +++--
 1 file changed, 3 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4021fffa/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
--
diff --git a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
index ce0d50b..2f9627b 100644
--- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java
@@ -1176,12 +1176,9 @@ public class SelectStatement implements CQLStatement
 }
 else
 {
-if (hasQueriableIndex)
-{
-stmt.usesSecondaryIndexing = true;
-break;
-}
-throw new InvalidRequestException("Only EQ and IN relation are supported on the partition key for random partitioners (unless you use the token() function)");
+// Non EQ relation is not supported without token(), even if we have a 2ndary index (since even those are ordered by partitioner).
+// Note: In theory we could allow it for 2ndary index queries with ALLOW FILTERING, but that would probably require some special casing
+throw new InvalidRequestException("Only EQ and IN relation are supported on the partition key (unless you use the token() function)");
 }
 previous = cname;
 }



[jira] [Commented] (CASSANDRA-5797) DC-local CAS

2013-07-24 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13718268#comment-13718268
 ] 

Sylvain Lebresne commented on CASSANDRA-5797:
-

bq. Not sure what CQL syntax for this is. Is it protocol level the way CL is?

That's a good question and I'm not really sure what the right answer is.

I think it may make the most sense to make it protocol level because of reads.  
For CAS writes, we do have a CQL syntax for it, so we could extend it with, say:
{noformat}
UPDATE foo SET v1 = 2, v2 = 3 WHERE k = 1 IF v1 = 1 AND v2 = 1 IN LOCAL DC
{noformat}
But for reads, we don't have any syntax; the consistency level (SERIAL) is the 
only thing that makes a read go through paxos, so I'm afraid adding some CQL 
syntax in that case would be confusing.

But even making it protocol level is not that easy. For thrift, on the read 
side, the only way I can see us supporting this DC-local CAS would be to add a 
LOCAL_SERIAL consistency level (short of duplicating all read methods for CAS 
reads, that is). But that doesn't really work for writes, since the consistency 
level for writes is really the consistency of the paxos learn/commit phase.

One option (the best I can come up with so far) would be to add the 
LOCAL_SERIAL consistency level, and then to change CAS write to take 2 CL: the 
first one would be for the commit (learn) phase (as we have now, but we would 
refuse CL.SERIAL and CL.LOCAL_SERIAL in that case) and a 2nd CL that would 
control the Paxos consistency (and for that one, only CL.SERIAL or 
CL.LOCAL_SERIAL would be valid). It's not perfect however, because the one thing 
you can't properly express is the ability to do CL.SERIAL for paxos but not 
wait on any node for the learn phase. Unless we make CL.ANY for the commit 
consistency mean that, but that's a slight stretch.
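
A rough sketch of that option (illustrative only; none of these names are real 
Cassandra or Thrift API): CAS writes would carry two consistency levels, one 
restricted to SERIAL/LOCAL_SERIAL for the Paxos round and one for the 
commit/learn phase.
{code}
enum ConsistencyLevel { ANY, ONE, QUORUM, LOCAL_QUORUM, SERIAL, LOCAL_SERIAL }

public final class CasWriteOptions {
    final ConsistencyLevel paxosConsistency;   // only SERIAL or LOCAL_SERIAL allowed
    final ConsistencyLevel commitConsistency;  // the usual write CL; SERIAL/LOCAL_SERIAL rejected

    public CasWriteOptions(ConsistencyLevel paxosConsistency, ConsistencyLevel commitConsistency) {
        if (paxosConsistency != ConsistencyLevel.SERIAL && paxosConsistency != ConsistencyLevel.LOCAL_SERIAL)
            throw new IllegalArgumentException("paxos consistency must be SERIAL or LOCAL_SERIAL");
        if (commitConsistency == ConsistencyLevel.SERIAL || commitConsistency == ConsistencyLevel.LOCAL_SERIAL)
            throw new IllegalArgumentException("commit consistency cannot be SERIAL or LOCAL_SERIAL");
        this.paxosConsistency = paxosConsistency;
        this.commitConsistency = commitConsistency;
    }

    public static void main(String[] args) {
        // DC-local CAS: Paxos stays in the local DC, commit at LOCAL_QUORUM.
        CasWriteOptions dcLocal = new CasWriteOptions(ConsistencyLevel.LOCAL_SERIAL, ConsistencyLevel.LOCAL_QUORUM);
        System.out.println(dcLocal.paxosConsistency + " / " + dcLocal.commitConsistency);
    }
}
{code}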

In any case, we should probably make sure to shove that into 2.0.0, because I 
don't want to change the thrift API nor break the native protocol in 2.0.1.

Any better idea?


 DC-local CAS
 

 Key: CASSANDRA-5797
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5797
 Project: Cassandra
  Issue Type: Bug
  Components: API
Affects Versions: 2.0
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 2.0.1


 For two-datacenter deployments where the second DC is strictly for disaster 
 failover, it would be useful to restrict CAS to a single DC to avoid cross-DC 
 round trips.
 (This would require manually truncating {{system.paxos}} when failing over.)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5739) Class unloading triggers container crash

2013-07-24 Thread Vivek Mishra (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13718282#comment-13718282
 ] 

Vivek Mishra commented on CASSANDRA-5739:
-

as per error:

org.apache.cassandra.exceptions.ConfigurationException: Cannot locate 
cassandra.yaml
at 
org.apache.cassandra.config.DatabaseDescriptor.getStorageConfigURL(DatabaseDescriptor.java:114)
 [cassandra-all-1.2.4.jar:1.2.4]
at 
org.apache.cassandra.config.DatabaseDescriptor.loadYaml(DatabaseDescriptor.java:131)
 [cassandra-all-1.2.4.jar:1.2.4]
at 
org.apache.cassandra.config.DatabaseDescriptor.<clinit>(DatabaseDescriptor.java:123)
 [cassandra-all-1.2.4.jar:1.2.4]
at 
org.apache.cassandra.service.StorageService.getPartitioner(StorageService.java:140)
 [cassandra-all-1.2.4.jar:1.2.4]
at 
org.apache.cassandra.service.StorageService.<init>(StorageService.java:132) 
[cassandra-all-1.2.4.jar:1.2.4]
at 
org.apache.cassandra.service.StorageService.<clinit>(StorageService.java:134) 
[cassandra-all-1.2.4.jar:1.2.4]
at org.apache.cassandra.db.Table.<clinit>(Table.java:67) 
[cassandra-all-1.2.4.jar:1.2.4]
at sun.misc.Unsafe.ensureClassInitialized(Native Method) [na:1.7.0_21]

It happens because StorageService initialization is triggered when you undeploy 
(when the class loader tries to clear held references).

The error is caused by:

public VersionedValue.VersionedValueFactory valueFactory = new 
VersionedValue.VersionedValueFactory(getPartitioner());

I may be wrong, but shouldn't StorageService handle this and keep it 
uninitialized, rather than loading it statically?
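
A minimal sketch of that lazy-initialization idea (illustrative only, not the 
actual StorageService code): the partitioner-dependent factory is only created 
on first use, so merely loading or unloading the class does not force 
cassandra.yaml to be read.
{code}
public final class LazyInitSketch {
    private LazyInitSketch() {}

    private static final class FactoryHolder {
        // Initialized only when valueFactory() is first called, not at class load time.
        static final Object FACTORY = createFactory();
    }

    static Object valueFactory() {
        return FactoryHolder.FACTORY;
    }

    private static Object createFactory() {
        // In the real code this is where getPartitioner() would run and pull in
        // the configuration (cassandra.yaml).
        System.out.println("loading configuration now, on first use");
        return new Object();
    }

    public static void main(String[] args) {
        System.out.println("class loaded, nothing read yet");
        valueFactory();  // configuration is only touched here
    }
}
{code}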


Now, Kundera (a Cassandra ORM solution) needs to load cassandra packages such as 
org.apache.cassandra.locator and org.apache.cassandra.db.marshal for processing 
POJOs.

AFAIK these are available with:

<dependency>
   <groupId>org.apache.cassandra</groupId>
   <artifactId>cassandra-all</artifactId>
   <version>${cassandra.version}</version>
</dependency>

Which in turn brings in StorageService as well. Is there any other alternative 
dependency for this?

-Vivek

 Class unloading triggers container crash
 

 Key: CASSANDRA-5739
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5739
 Project: Cassandra
  Issue Type: Bug
  Components: Drivers
Affects Versions: 1.2.4
Reporter: Paulo Pires

 While undeploying a Java webapp on a Glassfish cluster, I get an exception and 
 consequently Glassfish crashes. The log can be seen at 
 http://pastebin.com/CG6LKPEv
 After some research, WebappClassLoader tries to clear references on each 
 loaded class, triggering the call to static StorageService.getPartitioner() 
 which then tries to load the cassandra.yaml file.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-5739) Class unloading triggers container crash

2013-07-24 Thread Vivek Mishra (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13718282#comment-13718282
 ] 

Vivek Mishra edited comment on CASSANDRA-5739 at 7/24/13 12:06 PM:
---

as per error:

org.apache.cassandra.exceptions.ConfigurationException: Cannot locate 
cassandra.yaml
at 
org.apache.cassandra.config.DatabaseDescriptor.getStorageConfigURL(DatabaseDescriptor.java:114)
 [cassandra-all-1.2.4.jar:1.2.4]
at 
org.apache.cassandra.config.DatabaseDescriptor.loadYaml(DatabaseDescriptor.java:131)
 [cassandra-all-1.2.4.jar:1.2.4]
at 
org.apache.cassandra.config.DatabaseDescriptor.<clinit>(DatabaseDescriptor.java:123)
 [cassandra-all-1.2.4.jar:1.2.4]
at 
org.apache.cassandra.service.StorageService.getPartitioner(StorageService.java:140)
 [cassandra-all-1.2.4.jar:1.2.4]
at 
org.apache.cassandra.service.StorageService.<init>(StorageService.java:132) 
[cassandra-all-1.2.4.jar:1.2.4]
at 
org.apache.cassandra.service.StorageService.<clinit>(StorageService.java:134) 
[cassandra-all-1.2.4.jar:1.2.4]
at org.apache.cassandra.db.Table.<clinit>(Table.java:67) 
[cassandra-all-1.2.4.jar:1.2.4]
at sun.misc.Unsafe.ensureClassInitialized(Native Method) [na:1.7.0_21]

It happens because StorageService initialization is triggered when you undeploy 
(when the class loader tries to clear held references).

The error is caused by:

public VersionedValue.VersionedValueFactory valueFactory = new 
VersionedValue.VersionedValueFactory(getPartitioner());

I may be wrong, but shouldn't StorageService handle this and keep it 
uninitialized, rather than loading it statically?


Now, Kundera (a Cassandra ORM solution) needs to load cassandra packages such as 
org.apache.cassandra.locator and org.apache.cassandra.db.marshal for processing 
POJOs.

AFAIK these are available with:

<dependency>
   <groupId>org.apache.cassandra</groupId>
   <artifactId>cassandra-all</artifactId>
   <version>${cassandra.version}</version>
</dependency>

Which in turn brings in StorageService as well. Is there any other alternative 
dependency for this?

-Vivek

  was (Author: mishravivek):
as per error:

org.apache.cassandra.exceptions.ConfigurationException: Cannot locate 
cassandra.yaml
at 
org.apache.cassandra.config.DatabaseDescriptor.getStorageConfigURL(DatabaseDescriptor.java:114)
 [cassandra-all-1.2.4.jar:1.2.4]
at 
org.apache.cassandra.config.DatabaseDescriptor.loadYaml(DatabaseDescriptor.java:131)
 [cassandra-all-1.2.4.jar:1.2.4]
at 
org.apache.cassandra.config.DatabaseDescriptor.<clinit>(DatabaseDescriptor.java:123)
 [cassandra-all-1.2.4.jar:1.2.4]
at 
org.apache.cassandra.service.StorageService.getPartitioner(StorageService.java:140)
 [cassandra-all-1.2.4.jar:1.2.4]
at 
org.apache.cassandra.service.StorageService.<init>(StorageService.java:132) 
[cassandra-all-1.2.4.jar:1.2.4]
at 
org.apache.cassandra.service.StorageService.<clinit>(StorageService.java:134) 
[cassandra-all-1.2.4.jar:1.2.4]
at org.apache.cassandra.db.Table.<clinit>(Table.java:67) 
[cassandra-all-1.2.4.jar:1.2.4]
at sun.misc.Unsafe.ensureClassInitialized(Native Method) [na:1.7.0_21]

It happens as StorageService initialization happens, when you undeploy(when 
tries to clear held references). 

Error is caused by:

public VersionedValue.VersionedValueFactory valueFactory = new 
VersionedValue.VersionedValueFactory(getPartitioner());

I may be wrong but, shouldn't StorageService handle such and keep it 
uninitialized, rather than static loading.


Now, Kundera(Cassandra ORM solution) needs to load cassandra packages such as 
org.apache.cassandra.locator, org.apache.cassandra.db.marshal for processing 
POJOs.

AFAIK these are available with:

<dependency>
   <groupId>org.apache.cassandra</groupId>
   <artifactId>cassandra-all</artifactId>
   <version>${cassandra.version}</version>
</dependency>

Which in turns bring in StorageService as well. Is there any other alternative 
dependency for this?

-Vivek
  
 Class unloading triggers container crash
 

 Key: CASSANDRA-5739
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5739
 Project: Cassandra
  Issue Type: Bug
  Components: Drivers
Affects Versions: 1.2.4
Reporter: Paulo Pires

 While undeploying a Java webapp on a Glassfish cluster, I get an exception and 
 consequently Glassfish crashes. The log can be seen at 
 http://pastebin.com/CG6LKPEv
 After some research, WebappClassLoader tries to clear references on each 
 loaded class, triggering the call to static StorageService.getPartitioner() 
 which then tries to load the cassandra.yaml file.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: 

git commit: Add sanity checks

2013-07-24 Thread slebresne
Updated Branches:
  refs/heads/cassandra-1.2 25a46eae5 -> 9ae960a1a


Add sanity checks


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9ae960a1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9ae960a1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9ae960a1

Branch: refs/heads/cassandra-1.2
Commit: 9ae960a1a4f57e3c9ec018f3cbb32fd3312d7a6e
Parents: 25a46ea
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Jul 24 14:18:57 2013 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Jul 24 14:18:57 2013 +0200

--
 .../db/index/AbstractSimplePerColumnSecondaryIndex.java | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9ae960a1/src/java/org/apache/cassandra/db/index/AbstractSimplePerColumnSecondaryIndex.java
--
diff --git 
a/src/java/org/apache/cassandra/db/index/AbstractSimplePerColumnSecondaryIndex.java
 
b/src/java/org/apache/cassandra/db/index/AbstractSimplePerColumnSecondaryIndex.java
index 63af51b..2ff2d27 100644
--- 
a/src/java/org/apache/cassandra/db/index/AbstractSimplePerColumnSecondaryIndex.java
+++ 
b/src/java/org/apache/cassandra/db/index/AbstractSimplePerColumnSecondaryIndex.java
@@ -94,7 +94,9 @@ public abstract class AbstractSimplePerColumnSecondaryIndex 
extends PerColumnSec
 DecoratedKey valueKey = getIndexKeyFor(column.value());
 int localDeletionTime = (int) (System.currentTimeMillis() / 1000);
 ColumnFamily cfi = ColumnFamily.create(indexCfs.metadata);
-cfi.addTombstone(makeIndexColumnName(rowKey, column), localDeletionTime, column.timestamp());
+ByteBuffer name = makeIndexColumnName(rowKey, column);
+assert name.remaining() > 0 && name.remaining() <= IColumn.MAX_NAME_LENGTH : name.remaining();
+cfi.addTombstone(name, localDeletionTime, column.timestamp());
 indexCfs.apply(valueKey, cfi, SecondaryIndexManager.nullUpdater);
 if (logger.isDebugEnabled())
 logger.debug("removed index entry for cleaned-up value {}:{}", valueKey, cfi);
@@ -105,6 +107,7 @@ public abstract class AbstractSimplePerColumnSecondaryIndex 
extends PerColumnSec
 DecoratedKey valueKey = getIndexKeyFor(column.value());
 ColumnFamily cfi = ColumnFamily.create(indexCfs.metadata);
 ByteBuffer name = makeIndexColumnName(rowKey, column);
+assert name.remaining() > 0 && name.remaining() <= IColumn.MAX_NAME_LENGTH : name.remaining();
 if (column instanceof ExpiringColumn)
 {
 ExpiringColumn ec = (ExpiringColumn)column;



[jira] [Commented] (CASSANDRA-5761) Issue with secondary index sstable.

2013-07-24 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13718314#comment-13718314
 ] 

Sylvain Lebresne commented on CASSANDRA-5761:
-

I've added an assertion in commit 9ae960a1a4f57e3c9ec018f3cbb32fd3312d7a6e to 
validate, at the time of the index update, that we don't write an empty column 
name. But what you describe should not happen, so short of being able to 
reproduce it, I'm not sure what else we can do. Are you sure you don't have a 
broken disk?

 Issue with secondary index sstable.
 ---

 Key: CASSANDRA-5761
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5761
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.5
Reporter: Andriy Yevsyukov
Priority: Critical

 With Cassandra 1.2.5 having issue very similar to 
 [CASSANDRA-5225|https://issues.apache.org/jira/browse/CASSANDRA-5225] but for 
 secondary index sstable. Every query that uses this index fails in Hector 
 with ConnectionTimeout but cassandra log says that reason is:
 {noformat}
 ERROR [ReadStage:55803] 2013-07-15 12:11:35,392 CassandraDaemon.java (line 
 175) Exception in thread Thread[ReadStage:55803,5,main]
 java.lang.RuntimeException: 
 org.apache.cassandra.io.sstable.CorruptSSTableException: 
 org.apache.cassandra.db.ColumnSerializer$CorruptColumnException: invalid 
 column name length 0 
 (/data/cassandra/data/betting/events/betting-events.events_sport_type_idx-ic-1-Data.db,
  19658 bytes remaining)
   at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1582)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: org.apache.cassandra.io.sstable.CorruptSSTableException: 
 org.apache.cassandra.db.ColumnSerializer$CorruptColumnException: invalid 
 column name length 0 
 (/data/cassandra/data/betting/events/betting-events.events_sport_type_idx-ic-1-Data.db,
  19658 bytes remaining)
   at 
 org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:108)
   at 
 org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:39)
   at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
   at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
   at 
 org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:90)
   at 
 org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:171)
   at 
 org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:154)
   at 
 org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:143)
   at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.<init>(MergeIterator.java:86)
   at org.apache.cassandra.utils.MergeIterator.get(MergeIterator.java:45)
   at 
 org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:134)
   at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:84)
   at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:293)
   at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:65)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1357)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1214)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1126)
   at 
 org.apache.cassandra.db.index.keys.KeysSearcher$1.computeNext(KeysSearcher.java:140)
   at 
 org.apache.cassandra.db.index.keys.KeysSearcher$1.computeNext(KeysSearcher.java:109)
   at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
   at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1466)
   at 
 org.apache.cassandra.db.index.keys.KeysSearcher.search(KeysSearcher.java:82)
   at 
 org.apache.cassandra.db.index.SecondaryIndexManager.search(SecondaryIndexManager.java:548)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.search(ColumnFamilyStore.java:1454)
   at 
 org.apache.cassandra.service.RangeSliceVerbHandler.executeLocally(RangeSliceVerbHandler.java:44)
   at 
 org.apache.cassandra.service.StorageProxy$LocalRangeSliceRunnable.runMayThrow(StorageProxy.java:1076)
   at 
 

[jira] [Commented] (CASSANDRA-5761) Issue with secondary index sstable.

2013-07-24 Thread Andriy Yevsyukov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13718317#comment-13718317
 ] 

Andriy Yevsyukov commented on CASSANDRA-5761:
-

Hi, thanks for the answer. No, it happens on every node, so it shouldn't be a 
disk issue.

 Issue with secondary index sstable.
 ---

 Key: CASSANDRA-5761
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5761
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.5
Reporter: Andriy Yevsyukov
Priority: Critical

 With Cassandra 1.2.5 having issue very similar to 
 [CASSANDRA-5225|https://issues.apache.org/jira/browse/CASSANDRA-5225] but for 
 secondary index sstable. Every query that uses this index fails in Hector 
 with ConnectionTimeout but cassandra log says that reason is:
 {noformat}
 ERROR [ReadStage:55803] 2013-07-15 12:11:35,392 CassandraDaemon.java (line 
 175) Exception in thread Thread[ReadStage:55803,5,main]
 java.lang.RuntimeException: 
 org.apache.cassandra.io.sstable.CorruptSSTableException: 
 org.apache.cassandra.db.ColumnSerializer$CorruptColumnException: invalid 
 column name length 0 
 (/data/cassandra/data/betting/events/betting-events.events_sport_type_idx-ic-1-Data.db,
  19658 bytes remaining)
   at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1582)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: org.apache.cassandra.io.sstable.CorruptSSTableException: 
 org.apache.cassandra.db.ColumnSerializer$CorruptColumnException: invalid 
 column name length 0 
 (/data/cassandra/data/betting/events/betting-events.events_sport_type_idx-ic-1-Data.db,
  19658 bytes remaining)
   at 
 org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:108)
   at 
 org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:39)
   at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
   at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
   at 
 org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:90)
   at 
 org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:171)
   at 
 org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:154)
   at 
 org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:143)
   at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.<init>(MergeIterator.java:86)
   at org.apache.cassandra.utils.MergeIterator.get(MergeIterator.java:45)
   at 
 org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:134)
   at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:84)
   at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:293)
   at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:65)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1357)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1214)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1126)
   at 
 org.apache.cassandra.db.index.keys.KeysSearcher$1.computeNext(KeysSearcher.java:140)
   at 
 org.apache.cassandra.db.index.keys.KeysSearcher$1.computeNext(KeysSearcher.java:109)
   at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
   at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1466)
   at 
 org.apache.cassandra.db.index.keys.KeysSearcher.search(KeysSearcher.java:82)
   at 
 org.apache.cassandra.db.index.SecondaryIndexManager.search(SecondaryIndexManager.java:548)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.search(ColumnFamilyStore.java:1454)
   at 
 org.apache.cassandra.service.RangeSliceVerbHandler.executeLocally(RangeSliceVerbHandler.java:44)
   at 
 org.apache.cassandra.service.StorageProxy$LocalRangeSliceRunnable.runMayThrow(StorageProxy.java:1076)
   at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1578)
   ... 3 more
 Caused by: org.apache.cassandra.db.ColumnSerializer$CorruptColumnException: 
 invalid column name length 0 
 

[jira] [Commented] (CASSANDRA-5626) Support empty IN queries

2013-07-24 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13718351#comment-13718351
 ] 

Aleksey Yeschenko commented on CASSANDRA-5626:
--

bq. Why remove the break when checking if we have at least one indexed EQ 
clause?

Otherwise the validation at 
https://github.com/apache/cassandra/blob/cassandra-1.2/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java#L1160
 is reached non-deterministically, depending on the names of the indexed columns 
and on the hashmap seed (and then get(0) in getIndexExpressions throws an NPE 
with IN () half the time). So you need to iterate over all restrictions.
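
A self-contained toy of the point above (not SelectStatement code; names made 
up): with a break on the first indexed EQ restriction, whether the other 
restrictions are ever inspected depends on map iteration order.
{code}
import java.util.HashMap;
import java.util.Map;

public class BreakOrderToy {
    public static void main(String[] args) {
        Map<String, String> restrictions = new HashMap<>();
        restrictions.put("colA", "EQ on an indexed column");
        restrictions.put("colB", "IN ()  <- must be validated");

        // With the break, whether colB is reached depends on HashMap iteration
        // order, which is unspecified; iterating over all restrictions (no break)
        // makes the validation deterministic.
        for (Map.Entry<String, String> e : restrictions.entrySet()) {
            System.out.println("inspecting " + e.getKey());
            if (e.getValue().startsWith("EQ"))
                break;
        }
    }
}
{code}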

 Support empty IN queries
 

 Key: CASSANDRA-5626
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5626
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.2.0
Reporter: Alexander Solovyev
Assignee: Aleksey Yeschenko
Priority: Minor
  Labels: cql3
 Attachments: 5626.txt


 It would be nice to have support of empty IN queries. 
 Example: SELECT a FROM t WHERE aKey IN (). 
 One of the reasons is to have such support in DataStax Java Driver (see 
 discussion here: https://datastax-oss.atlassian.net/browse/JAVA-106).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5799) Column can expire while lazy compacting it...

2013-07-24 Thread Fabien Rousseau (JIRA)
Fabien Rousseau created CASSANDRA-5799:
--

 Summary: Column can expire while lazy compacting it...
 Key: CASSANDRA-5799
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5799
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.7
Reporter: Fabien Rousseau


Using TTL + range tombstones can lead to failure while lazy compacting rows.

Scenario to reproduce :
 - create an SSTable with one row and some columns and a TTL of 8 seconds
 - wait one second
 - create a second SSTable with the same rowkey as above, and add a range 
tombstone
 - start the first pass of the lazy compaction before the columns with TTL are 
expired
 - wait 10 seconds (enough for columns with TTL to expire)
 - continue lazy expiration
 - the following assertion will fail :
[junit] junit.framework.AssertionFailedError: originally calculated column 
size of 1379 but now it is 1082
[junit] at 
org.apache.cassandra.db.compaction.LazilyCompactedRow.write(LazilyCompactedRow.java:150)





--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5799) Column can expire while lazy compacting it...

2013-07-24 Thread Fabien Rousseau (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13718370#comment-13718370
 ] 

Fabien Rousseau commented on CASSANDRA-5799:


Note : it was initially reported here 
http://www.mail-archive.com/user@cassandra.apache.org/msg31277.html

The row size is around 153 MB; thus, compacting at 16 MB/s takes around 9 s, 
which is an open window for columns to expire.
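
A toy model of that window (illustrative only, nothing here is Cassandra's 
LazilyCompactedRow; run with java -ea): the size is computed in one pass and 
checked in a later pass, so anything time-dependent in between breaks the check.
{code}
public class TwoPassSizeToy {
    static final long EXPIRES_AT_MILLIS = System.currentTimeMillis() + 50;  // a short TTL

    // Pretend serialization: an expired cell is written as a smaller tombstone.
    static int serializedSize(long nowMillis) {
        return nowMillis >= EXPIRES_AT_MILLIS ? 10 : 42;
    }

    public static void main(String[] args) throws InterruptedException {
        int firstPass = serializedSize(System.currentTimeMillis());
        Thread.sleep(100);  // the gap between the two compaction passes
        int secondPass = serializedSize(System.currentTimeMillis());
        assert firstPass == secondPass
            : "originally calculated column size of " + firstPass + " but now it is " + secondPass;
        System.out.println("sizes matched");
    }
}
{code}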

 Column can expire while lazy compacting it...
 -

 Key: CASSANDRA-5799
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5799
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.7
Reporter: Fabien Rousseau

 Using TTL + range tombstones can lead to failure while lazy compacting rows.
 Scenario to reproduce :
  - create an SSTable with one row and some columns and a TTL of 8 seconds
  - wait one second
  - create a second SSTable with the same rowkey as above, and add a range 
 tombstone
  - start the first pass of the lazy compaction before the columns with TTL 
 are expired
  - wait 10 seconds (enough for columns with TTL to expire)
  - continue lazy expiration
  - the following assertion will fail :
 [junit] junit.framework.AssertionFailedError: originally calculated 
 column size of 1379 but now it is 1082
 [junit]   at 
 org.apache.cassandra.db.compaction.LazilyCompactedRow.write(LazilyCompactedRow.java:150)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (CASSANDRA-5799) Column can expire while lazy compacting it...

2013-07-24 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne reassigned CASSANDRA-5799:
---

Assignee: Sylvain Lebresne

 Column can expire while lazy compacting it...
 -

 Key: CASSANDRA-5799
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5799
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.7
Reporter: Fabien Rousseau
Assignee: Sylvain Lebresne

 Using TTL + range tombstones can lead to failure while lazy compacting rows.
 Scenario to reproduce :
  - create an SSTable with one row and some columns and a TTL of 8 seconds
  - wait one second
  - create a second SSTable with the same rowkey as above, and add a range 
 tombstone
  - start the first pass of the lazy compaction before the columns with TTL 
 are expired
  - wait 10 seconds (enough for columns with TTL to expire)
  - continue lazy expiration
  - the following assertion will fail :
 [junit] junit.framework.AssertionFailedError: originally calculated 
 column size of 1379 but now it is 1082
 [junit]   at 
 org.apache.cassandra.db.compaction.LazilyCompactedRow.write(LazilyCompactedRow.java:150)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4958) Snappy 1.0.4 doesn't work on OSX / Java 7

2013-07-24 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13718395#comment-13718395
 ] 

Jonathan Ellis commented on CASSANDRA-4958:
---

Just to be clear, if we had known 1.0.5 would break RHEL5 we would have waited 
for 2.0.  But since we didn't find out until 1.2.6 was out in the wild, I think 
you can consider RHEL5 support dropped in Apache Cassandra, since reverting to 
the older version would break OS X users.

However, I note that DataStax Enterprise 3.1 (based on 1.2.x) still packages 
the old Snappy (and does not support OS X).

 Snappy 1.0.4 doesn't work on OSX / Java 7
 -

 Key: CASSANDRA-4958
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4958
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.0 beta 2
Reporter: Colin Taylor
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 1.2.6

 Attachments: 0001-CASSANDRA-4958-1.2.patch


 Fixed in 1.0.5-M3 see :
 https://github.com/xerial/snappy-java/issues/6

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5799) Column can expire while lazy compacting it...

2013-07-24 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13718402#comment-13718402
 ] 

Jonathan Ellis commented on CASSANDRA-5799:
---

If it requires compaction to take longer than the TTL to be a problem, I'm 
inclined to say don't do that in 1.2. 2.0 has single-pass compaction, so it 
should not matter there.

 Column can expire while lazy compacting it...
 -

 Key: CASSANDRA-5799
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5799
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.7
Reporter: Fabien Rousseau
Assignee: Sylvain Lebresne

 Using TTL + range tombstones can lead to failure while lazy compacting rows.
 Scenario to reproduce :
  - create an SSTable with one row and some columns and a TTL of 8 seconds
  - wait one second
  - create a second SSTable with the same rowkey as above, and add a range 
 tombstone
  - start the first pass of the lazy compaction before the columns with TTL 
 are expired
  - wait 10 seconds (enough for columns with TTL to expire)
  - continue lazy expiration
  - the following assertion will fail :
 [junit] junit.framework.AssertionFailedError: originally calculated 
 column size of 1379 but now it is 1082
 [junit]   at 
 org.apache.cassandra.db.compaction.LazilyCompactedRow.write(LazilyCompactedRow.java:150)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5138) Provide a better CQL error when table data does not conform to CQL metadata.

2013-07-24 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13718404#comment-13718404
 ] 

Jonathan Ellis commented on CASSANDRA-5138:
---

+1

 Provide a better CQL error when table data does not conform to CQL metadata.
 

 Key: CASSANDRA-5138
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5138
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
 Environment: Mac OS X running 1.2
Reporter: Brian ONeill
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 1.2.7

 Attachments: 5138-2.txt, 5138.txt, northpole.cql


 When you create a table via CQL, then insert into it via Thrift and 
 inadvertently leave out a component of the column name, in CQL you receive:
 TSocket read 0 bytes
 Server-side the following exception is logged:
 ERROR 15:19:18,016 Error occurred during processing of message.
 java.lang.ArrayIndexOutOfBoundsException: 3
   at 
 org.apache.cassandra.cql3.statements.ColumnGroupMap.add(ColumnGroupMap.java:43)
   at 
 org.apache.cassandra.cql3.statements.ColumnGroupMap.access$200(ColumnGroupMap.java:31)
   at 
 org.apache.cassandra.cql3.statements.ColumnGroupMap$Builder.add(ColumnGroupMap.java:138)
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:805)
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:145)
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:134)
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:61)
   at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:132)
   at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:140)
   at 
 org.apache.cassandra.thrift.CassandraServer.execute_cql3_query(CassandraServer.java:1686)
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4074)
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4062)
   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
   at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:199)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:680)
 I'll submit a schema, and steps to reproduce.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5138) Provide a better CQL error when table data does not conform to CQL metadata.

2013-07-24 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13718407#comment-13718407
 ] 

Jonathan Ellis commented on CASSANDRA-5138:
---

... although we might want to make this 2.0-only to be on the safe side.

 Provide a better CQL error when table data does not conform to CQL metadata.
 

 Key: CASSANDRA-5138
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5138
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
 Environment: Mac OS X running 1.2
Reporter: Brian ONeill
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 1.2.7

 Attachments: 5138-2.txt, 5138.txt, northpole.cql


 When you create a table via CQL, then insert into it via Thrift and 
 inadvertently leave out a component of the column name, in CQL you receive:
 TSocket read 0 bytes
 Server-side the following exception is logged:
 ERROR 15:19:18,016 Error occurred during processing of message.
 java.lang.ArrayIndexOutOfBoundsException: 3
   at 
 org.apache.cassandra.cql3.statements.ColumnGroupMap.add(ColumnGroupMap.java:43)
   at 
 org.apache.cassandra.cql3.statements.ColumnGroupMap.access$200(ColumnGroupMap.java:31)
   at 
 org.apache.cassandra.cql3.statements.ColumnGroupMap$Builder.add(ColumnGroupMap.java:138)
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:805)
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:145)
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:134)
   at 
 org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:61)
   at 
 org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:132)
   at 
 org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:140)
   at 
 org.apache.cassandra.thrift.CassandraServer.execute_cql3_query(CassandraServer.java:1686)
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4074)
   at 
 org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4062)
   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
   at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:199)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:680)
 I'll submit a schema, and steps to reproduce.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5799) Column can expire while lazy compacting it...

2013-07-24 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-5799:


Attachment: 5799.txt

bq. If it requires compaction to take longer than TTL to be a problem

No, it only requires that a TTL expires between the first and second phase, 
which can happen whatever the TTL and compaction time are. For some reason, 
DeletionTime.isDeleted(), which is just supposed to check if the deletion time 
shadows a given column, is completely broken (it checks if the column is 
deleted, which shouldn't even matter for that method) and depends on the current 
time. Attaching a simple patch to fix this.
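
For illustration (this is not the attached 5799.txt): whether a deletion shadows 
a column should depend only on timestamps, never on the wall clock, otherwise 
the answer can change between the two compaction passes.
{code}
public final class DeletionTimeSketch {
    final long markedForDeleteAt;

    DeletionTimeSketch(long markedForDeleteAt) { this.markedForDeleteAt = markedForDeleteAt; }

    // Time-independent: gives the same answer on both compaction passes.
    boolean shadows(long columnTimestamp) {
        return columnTimestamp <= markedForDeleteAt;
    }

    // Time-dependent variant (the problematic behaviour described above): mixing in
    // "is the column expired right now?" makes the result drift as TTLs expire.
    boolean isDeletedBuggy(long columnTimestamp, boolean columnExpiredNow) {
        return columnExpiredNow && columnTimestamp <= markedForDeleteAt;
    }

    public static void main(String[] args) {
        DeletionTimeSketch dt = new DeletionTimeSketch(100);
        System.out.println(dt.shadows(50));                // true, and stays true
        System.out.println(dt.isDeletedBuggy(50, false));  // false before the TTL expires...
        System.out.println(dt.isDeletedBuggy(50, true));   // ...true afterwards, same column
    }
}
{code}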


 Column can expire while lazy compacting it...
 -

 Key: CASSANDRA-5799
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5799
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.7
Reporter: Fabien Rousseau
Assignee: Sylvain Lebresne
 Attachments: 5799.txt


 Using TTL + range tombstones can lead to failure while lazy compacting rows.
 Scenario to reproduce :
  - create an SSTable with one row and some columns and a TTL of 8 seconds
  - wait one second
  - create a second SSTable with the same rowkey as above, and add a range 
 tombstone
  - start the first pass of the lazy compaction before the columns with TTL 
 are expired
  - wait 10 seconds (enough for columns with TTL to expire)
  - continue lazy expiration
  - the following assertion will fail :
 [junit] junit.framework.AssertionFailedError: originally calculated 
 column size of 1379 but now it is 1082
 [junit]   at 
 org.apache.cassandra.db.compaction.LazilyCompactedRow.write(LazilyCompactedRow.java:150)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5626) Support empty IN queries

2013-07-24 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-5626:
-

Attachment: 5626-v2.txt

 Support empty IN queries
 

 Key: CASSANDRA-5626
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5626
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.2.0
Reporter: Alexander Solovyev
Assignee: Aleksey Yeschenko
Priority: Minor
  Labels: cql3
 Attachments: 5626.txt, 5626-v2.txt


 It would be nice to have support of empty IN queries. 
 Example: SELECT a FROM t WHERE aKey IN (). 
 One of the reasons is to have such support in DataStax Java Driver (see 
 discussion here: https://datastax-oss.atlassian.net/browse/JAVA-106).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5626) Support empty IN queries

2013-07-24 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13718417#comment-13718417
 ] 

Aleksey Yeschenko commented on CASSANDRA-5626:
--

bq. We can still simplify things a bit, because we don't need the current 
generality, but I think I'd rather open a separate ticket for that.

Agreed. Removed it.

bq. SelectStatement.getSliceCommands() was assuming that names filters were 
never null, which is not the case anymore with this patch.

Oops, missed 1/3. Corrected.

bq. Nit: in getIndexExpressions, let's add an assert that we're not in the IN 
case, if only for documenting that we only use the first element on purpose.

Addressed.

 Support empty IN queries
 

 Key: CASSANDRA-5626
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5626
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.2.0
Reporter: Alexander Solovyev
Assignee: Aleksey Yeschenko
Priority: Minor
  Labels: cql3
 Attachments: 5626.txt, 5626-v2.txt


 It would be nice to have support of empty IN queries. 
 Example: SELECT a FROM t WHERE aKey IN (). 
 One of the reasons is to have such support in DataStax Java Driver (see 
 discussion here: https://datastax-oss.atlassian.net/browse/JAVA-106).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5799) Column can expire while lazy compacting it...

2013-07-24 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13718420#comment-13718420
 ] 

Jonathan Ellis commented on CASSANDRA-5799:
---

+1

 Column can expire while lazy compacting it...
 -

 Key: CASSANDRA-5799
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5799
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.7
Reporter: Fabien Rousseau
Assignee: Sylvain Lebresne
 Attachments: 5799.txt


 Using TTL + range tombstones can lead to failure while lazy compacting rows.
 Scenario to reproduce :
  - create an SSTable with one row and some columns and a TTL of 8 seconds
  - wait one second
  - create a second SSTable with the same rowkey as above, and add a range 
 tombstone
  - start the first pass of the lazy compaction before the columns with TTL 
 are expired
  - wait 10 seconds (enough for columns with TTL to expire)
  - continue lazy expiration
  - the following assertion will fail :
 [junit] junit.framework.AssertionFailedError: originally calculated 
 column size of 1379 but now it is 1082
 [junit]   at 
 org.apache.cassandra.db.compaction.LazilyCompactedRow.write(LazilyCompactedRow.java:150)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5799) Column can expire while lazy compacting it...

2013-07-24 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-5799:


Affects Version/s: (was: 1.2.7)
   1.2.6

 Column can expire while lazy compacting it...
 -

 Key: CASSANDRA-5799
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5799
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.6
Reporter: Fabien Rousseau
Assignee: Sylvain Lebresne
 Attachments: 5799.txt


 Using TTL + range tombstones can lead to failure while lazy compacting rows.
 Scenario to reproduce :
  - create an SSTable with one row and some columns and a TTL of 8 seconds
  - wait one second
  - create a second SSTable with the same rowkey as above, and add a range 
 tombstone
  - start the first pass of the lazy compaction before the columns with TTL 
 are expired
  - wait 10 seconds (enough for columns with TTL to expire)
  - continue lazy expiration
  - the following assertion will fail :
 [junit] junit.framework.AssertionFailedError: originally calculated 
 column size of 1379 but now it is 1082
 [junit]   at 
 org.apache.cassandra.db.compaction.LazilyCompactedRow.write(LazilyCompactedRow.java:150)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5799) Column can expire while lazy compacting it...

2013-07-24 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13718428#comment-13718428
 ] 

Sylvain Lebresne commented on CASSANDRA-5799:
-

Alright, committed, thanks

 Column can expire while lazy compacting it...
 -

 Key: CASSANDRA-5799
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5799
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.6
Reporter: Fabien Rousseau
Assignee: Sylvain Lebresne
 Fix For: 1.2.7

 Attachments: 5799.txt


 Using TTL + range tombstones can lead to failure while lazy compacting rows.
 Scenario to reproduce :
  - create an SSTable with one row and some columns and a TTL of 8 seconds
  - wait one second
  - create a second SSTable with the same rowkey as above, and add a range 
 tombstone
  - start the first pass of the lazy compaction before the columns with TTL 
 are expired
  - wait 10 seconds (enough for columns with TTL to expire)
  - continue lazy expiration
  - the following assertion will fail :
 [junit] junit.framework.AssertionFailedError: originally calculated 
 column size of 1379 but now it is 1082
 [junit]   at 
 org.apache.cassandra.db.compaction.LazilyCompactedRow.write(LazilyCompactedRow.java:150)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5799) Column can expire while lazy compacting it...

2013-07-24 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-5799:


Fix Version/s: 1.2.7

 Column can expire while lazy compacting it...
 -

 Key: CASSANDRA-5799
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5799
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.6
Reporter: Fabien Rousseau
Assignee: Sylvain Lebresne
 Fix For: 1.2.7

 Attachments: 5799.txt


 Using TTL + range tombstones can lead to failure while lazy compacting rows.
 Scenario to reproduce :
  - create an SSTable with one row and some columns and a TTL of 8 seconds
  - wait one second
  - create a second SSTable with the same rowkey as above, and add a range 
 tombstone
  - start the first pass of the lazy compaction before the columns with TTL 
 are expired
  - wait 10 seconds (enough for columns with TTL to expire)
  - continue lazy expiration
  - the following assertion will fail :
 [junit] junit.framework.AssertionFailedError: originally calculated 
 column size of 1379 but now it is 1082
 [junit]   at 
 org.apache.cassandra.db.compaction.LazilyCompactedRow.write(LazilyCompactedRow.java:150)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[3/3] git commit: Merge branch 'cassandra-1.2' into trunk

2013-07-24 Thread slebresne
Merge branch 'cassandra-1.2' into trunk

Conflicts:
src/java/org/apache/cassandra/db/DeletionTime.java
src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java

src/java/org/apache/cassandra/db/index/AbstractSimplePerColumnSecondaryIndex.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6cc50946
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6cc50946
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6cc50946

Branch: refs/heads/trunk
Commit: 6cc50946300ba0a24d1028a0e266a3dc627ed404
Parents: 4021fff d38446a
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Jul 24 16:58:24 2013 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Jul 24 16:58:24 2013 +0200

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/db/DeletionTime.java  | 4 ++--
 src/java/org/apache/cassandra/db/RangeTombstone.java| 4 ++--
 .../org/apache/cassandra/db/compaction/LazilyCompactedRow.java  | 3 +--
 .../db/index/AbstractSimplePerColumnSecondaryIndex.java | 5 -
 5 files changed, 10 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6cc50946/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6cc50946/src/java/org/apache/cassandra/db/DeletionTime.java
--
diff --cc src/java/org/apache/cassandra/db/DeletionTime.java
index 5296529,deab30b..3d6fad4
--- a/src/java/org/apache/cassandra/db/DeletionTime.java
+++ b/src/java/org/apache/cassandra/db/DeletionTime.java
@@@ -83,9 -83,9 +83,9 @@@ public class DeletionTime implements Co
  return localDeletionTime < gcBefore;
  }
  
- public boolean isDeleted(Column column, long now)
 -public boolean isDeleted(IColumn column)
++public boolean isDeleted(Column column)
  {
- return column.isMarkedForDelete(now) && column.getMarkedForDeleteAt() <= markedForDeleteAt;
 -return column.mostRecentLiveChangeAt() <= markedForDeleteAt;
++return column.timestamp() <= markedForDeleteAt;
  }
  
  public long memorySize()

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6cc50946/src/java/org/apache/cassandra/db/RangeTombstone.java
--
diff --cc src/java/org/apache/cassandra/db/RangeTombstone.java
index cac50e8,5e87847..02dddb2
--- a/src/java/org/apache/cassandra/db/RangeTombstone.java
+++ b/src/java/org/apache/cassandra/db/RangeTombstone.java
@@@ -237,7 -239,7 +237,7 @@@ public class RangeTombstone extends Int
  }
  }
  
- public boolean isDeleted(Column column, long now)
 -public boolean isDeleted(IColumn column)
++public boolean isDeleted(Column column)
  {
  for (RangeTombstone tombstone : ranges)
  {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6cc50946/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
--
diff --cc src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
index 1f38494,9a03598..6390b14
--- a/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
+++ b/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
@@@ -246,10 -293,10 +246,9 @@@ public class LazilyCompactedRow extend
  
  // PrecompactedRow.removeDeletedAndOldShards have only 
checked the top-level CF deletion times,
  // not the range tombstone. For that we use the columnIndexer 
tombstone tracker.
- // Note that this doesn't work for super columns.
- if (indexBuilder.tombstoneTracker().isDeleted(reduced, 
System.currentTimeMillis()))
+ if (indexBuilder.tombstoneTracker().isDeleted(reduced))
  return null;
  
 -serializedSize += reduced.serializedSizeForSSTable();
  columns++;
  minTimestampSeen = Math.min(minTimestampSeen, 
reduced.minTimestamp());
  maxTimestampSeen = Math.max(maxTimestampSeen, 
reduced.maxTimestamp());

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6cc50946/src/java/org/apache/cassandra/db/index/AbstractSimplePerColumnSecondaryIndex.java
--
diff --cc 
src/java/org/apache/cassandra/db/index/AbstractSimplePerColumnSecondaryIndex.java
index 73f818f,2ff2d27..afc7409
--- 
a/src/java/org/apache/cassandra/db/index/AbstractSimplePerColumnSecondaryIndex.java
+++ 

[2/3] git commit: Fix columns expiring in the middle of 2 phase compactions

2013-07-24 Thread slebresne
Fix columns expiring in the middle of 2 phase compactions

patch by slebresne; reviewed by jbellis for CASSANDRA-5799


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d38446a1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d38446a1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d38446a1

Branch: refs/heads/trunk
Commit: d38446a1bbca0abf7a2af8986fecf348355abafb
Parents: 9ae960a
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Jul 24 16:43:38 2013 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Jul 24 16:43:38 2013 +0200

--
 CHANGES.txt| 1 +
 src/java/org/apache/cassandra/db/DeletionTime.java | 2 +-
 .../org/apache/cassandra/db/compaction/LazilyCompactedRow.java | 1 -
 3 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d38446a1/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index da03306..aaeb10b 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -26,6 +26,7 @@
  * Don't rely on row marker for queries in general to hide lost markers
after TTL expires (CASSANDRA-5762)
  * Sort nodetool help output (CASSANDRA-5776)
+ * Fix column expiring during 2 phases compaction (CASSANDRA-5799)
 
 
 1.2.6

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d38446a1/src/java/org/apache/cassandra/db/DeletionTime.java
--
diff --git a/src/java/org/apache/cassandra/db/DeletionTime.java 
b/src/java/org/apache/cassandra/db/DeletionTime.java
index bcb1b01..deab30b 100644
--- a/src/java/org/apache/cassandra/db/DeletionTime.java
+++ b/src/java/org/apache/cassandra/db/DeletionTime.java
@@ -85,7 +85,7 @@ public class DeletionTime implements Comparable<DeletionTime>
 
 public boolean isDeleted(IColumn column)
 {
-return column.isMarkedForDelete() && column.getMarkedForDeleteAt() <= markedForDeleteAt;
+return column.mostRecentLiveChangeAt() <= markedForDeleteAt;
 }
 
 public long memorySize()

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d38446a1/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java 
b/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
index 9575d41..9a03598 100644
--- a/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
+++ b/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
@@ -293,7 +293,6 @@ public class LazilyCompactedRow extends 
AbstractCompactedRow implements Iterable
 
 // PrecompactedRow.removeDeletedAndOldShards have only checked 
the top-level CF deletion times,
 // not the range tombstone. For that we use the columnIndexer 
tombstone tracker.
-// Note that this doesn't work for super columns.
 if (indexBuilder.tombstoneTracker().isDeleted(reduced))
 return null;
 



[1/3] git commit: Add sanity checks

2013-07-24 Thread slebresne
Updated Branches:
  refs/heads/trunk 4021fffa0 -> 6cc509463


Add sanity checks


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9ae960a1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9ae960a1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9ae960a1

Branch: refs/heads/trunk
Commit: 9ae960a1a4f57e3c9ec018f3cbb32fd3312d7a6e
Parents: 25a46ea
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Jul 24 14:18:57 2013 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Jul 24 14:18:57 2013 +0200

--
 .../db/index/AbstractSimplePerColumnSecondaryIndex.java | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9ae960a1/src/java/org/apache/cassandra/db/index/AbstractSimplePerColumnSecondaryIndex.java
--
diff --git 
a/src/java/org/apache/cassandra/db/index/AbstractSimplePerColumnSecondaryIndex.java
 
b/src/java/org/apache/cassandra/db/index/AbstractSimplePerColumnSecondaryIndex.java
index 63af51b..2ff2d27 100644
--- 
a/src/java/org/apache/cassandra/db/index/AbstractSimplePerColumnSecondaryIndex.java
+++ 
b/src/java/org/apache/cassandra/db/index/AbstractSimplePerColumnSecondaryIndex.java
@@ -94,7 +94,9 @@ public abstract class AbstractSimplePerColumnSecondaryIndex 
extends PerColumnSec
 DecoratedKey valueKey = getIndexKeyFor(column.value());
 int localDeletionTime = (int) (System.currentTimeMillis() / 1000);
 ColumnFamily cfi = ColumnFamily.create(indexCfs.metadata);
-cfi.addTombstone(makeIndexColumnName(rowKey, column), 
localDeletionTime, column.timestamp());
+ByteBuffer name = makeIndexColumnName(rowKey, column);
+assert name.remaining() > 0 && name.remaining() <= IColumn.MAX_NAME_LENGTH : name.remaining();
+cfi.addTombstone(name, localDeletionTime, column.timestamp());
 indexCfs.apply(valueKey, cfi, SecondaryIndexManager.nullUpdater);
 if (logger.isDebugEnabled())
 logger.debug("removed index entry for cleaned-up value {}:{}", valueKey, cfi);
@@ -105,6 +107,7 @@ public abstract class AbstractSimplePerColumnSecondaryIndex 
extends PerColumnSec
 DecoratedKey valueKey = getIndexKeyFor(column.value());
 ColumnFamily cfi = ColumnFamily.create(indexCfs.metadata);
 ByteBuffer name = makeIndexColumnName(rowKey, column);
+assert name.remaining() > 0 && name.remaining() <= IColumn.MAX_NAME_LENGTH : name.remaining();
 if (column instanceof ExpiringColumn)
 {
 ExpiringColumn ec = (ExpiringColumn)column;



git commit: Better validation when accessing CQL3 table from thrift

2013-07-24 Thread slebresne
Updated Branches:
  refs/heads/trunk 6cc509463 -> 371e7bf69


Better validation when accessing CQL3 table from thrift

patch by slebresne; reviewed by jbellis for CASSANDRA-5138


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/371e7bf6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/371e7bf6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/371e7bf6

Branch: refs/heads/trunk
Commit: 371e7bf69bbfdca0f5415e32a8c0f652711f5257
Parents: 6cc5094
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Mon Jul 22 16:48:24 2013 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Jul 24 17:04:13 2013 +0200

--
 CHANGES.txt |  1 +
 .../cassandra/thrift/ThriftValidation.java  | 34 
 2 files changed, 35 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/371e7bf6/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index c9e229c..c081b45 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,6 +1,7 @@
 2.0.0-rc1
  * fix potential spurious wakeup in AsyncOneResponse (CASSANDRA-5690)
  * fix schema-related trigger issues (CASSANDRA-5774)
+ * Better validation when accessing CQL3 table from thrift (CASSANDRA-5138)
 
 
 2.0.0-beta2

http://git-wip-us.apache.org/repos/asf/cassandra/blob/371e7bf6/src/java/org/apache/cassandra/thrift/ThriftValidation.java
--
diff --git a/src/java/org/apache/cassandra/thrift/ThriftValidation.java 
b/src/java/org/apache/cassandra/thrift/ThriftValidation.java
index 23d21f8..ce8fdcb 100644
--- a/src/java/org/apache/cassandra/thrift/ThriftValidation.java
+++ b/src/java/org/apache/cassandra/thrift/ThriftValidation.java
@@ -25,13 +25,17 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import org.apache.cassandra.config.*;
+import org.apache.cassandra.cql3.CFDefinition;
+import org.apache.cassandra.cql3.ColumnIdentifier;
 import org.apache.cassandra.db.*;
 import org.apache.cassandra.db.filter.IDiskAtomFilter;
 import org.apache.cassandra.db.filter.NamesQueryFilter;
 import org.apache.cassandra.db.filter.SliceQueryFilter;
 import org.apache.cassandra.db.index.SecondaryIndexManager;
 import org.apache.cassandra.db.marshal.AbstractType;
+import org.apache.cassandra.db.marshal.ColumnToCollectionType;
 import org.apache.cassandra.db.marshal.CompositeType;
+import org.apache.cassandra.db.marshal.UTF8Type;
 import org.apache.cassandra.dht.IPartitioner;
 import org.apache.cassandra.dht.Token;
 import org.apache.cassandra.service.StorageService;
@@ -207,6 +211,8 @@ public class ThriftValidation
 throw new org.apache.cassandra.exceptions.InvalidRequestException("supercolumn specified to ColumnFamily " + metadata.cfName + " containing normal columns");
 }
 AbstractType<?> comparator = SuperColumns.getComparatorFor(metadata, superColumnName);
+CFDefinition cfDef = metadata.getCfDef();
+boolean isCQL3Table = cfDef.isComposite && !cfDef.isCompact && !metadata.isSuper();
 for (ByteBuffer name : column_names)
 {
 if (name.remaining() > maxNameLength)
@@ -221,6 +227,34 @@ public class ThriftValidation
 {
 throw new 
org.apache.cassandra.exceptions.InvalidRequestException(e.getMessage());
 }
+
+if (isCQL3Table)
+{
+// CQL3 table don't support having only part of their 
composite column names set
+CompositeType composite = (CompositeType)comparator;
+ByteBuffer[] components = composite.split(name);
+int minComponents = composite.types.size() - 
(cfDef.hasCollections ? 1 : 0);
+if (components.length < minComponents)
+throw new org.apache.cassandra.exceptions.InvalidRequestException(String.format("Not enough component (found %d but %d expected) for column name since %s is a CQL3 table",
+                                                                                metadata.cfName, components.length, minComponents));
+
+// Furthermore, the column name must be a declared one.
+int columnIndex = composite.types.size() - 
(cfDef.hasCollections ? 2 : 1);
+ByteBuffer CQL3ColumnName = components[columnIndex];
+ColumnIdentifier columnId = new 
ColumnIdentifier(CQL3ColumnName, composite.types.get(columnIndex));
+if (cfDef.columns.get(columnId) == null)
+throw new 
org.apache.cassandra.exceptions.InvalidRequestException(String.format("Invalid 
cell for CQL3 table %s. The CQL3 column component (%s) does not 

[jira] [Commented] (CASSANDRA-5797) DC-local CAS

2013-07-24 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13718464#comment-13718464
 ] 

Jonathan Ellis commented on CASSANDRA-5797:
---

Sounds reasonable, although I think it would be better to come up w/ a 
different enum for the Paxos phases than re-use CL, most of whose options are 
not appropriate.

I actually think CL.ANY on commit is fine.

 DC-local CAS
 

 Key: CASSANDRA-5797
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5797
 Project: Cassandra
  Issue Type: Bug
  Components: API
Affects Versions: 2.0
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 2.0.1


 For two-datacenter deployments where the second DC is strictly for disaster 
 failover, it would be useful to restrict CAS to a single DC to avoid cross-DC 
 round trips.
 (This would require manually truncating {{system.paxos}} when failing over.)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5797) DC-local CAS

2013-07-24 Thread Patrick McFadin (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13718476#comment-13718476
 ] 

Patrick McFadin commented on CASSANDRA-5797:


I like LOCAL_SERIAL over ANY. It makes a closer match to LOCAL_QUORUM in that 
it's not meant to cross datacenter boundaries. There is enough confusion about 
ANY as it is and I think this would simplify things.

 DC-local CAS
 

 Key: CASSANDRA-5797
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5797
 Project: Cassandra
  Issue Type: Bug
  Components: API
Affects Versions: 2.0
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 2.0.1


 For two-datacenter deployments where the second DC is strictly for disaster 
 failover, it would be useful to restrict CAS to a single DC to avoid cross-DC 
 round trips.
 (This would require manually truncating {{system.paxos}} when failing over.)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5795) now() is being rejected in INSERTs when inside collections

2013-07-24 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13718477#comment-13718477
 ] 

Aleksey Yeschenko commented on CASSANDRA-5795:
--

+1

nits:
- The copy-pasted DelayedValue code from Lists in Sets still references 'List' 
in bind ("List value too long")
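
A minimal, self-contained sketch of the point (not the committed code; the 
constant and exception type below are stand-ins so the example compiles on its 
own): the copied check should simply say Set instead of List.

{code}
import java.nio.ByteBuffer;

// Sketch only: the real code throws InvalidRequestException and uses
// FBUtilities.MAX_UNSIGNED_SHORT; both are mimicked here for a standalone example.
public class SetValueCheckSketch
{
    static final int MAX_UNSIGNED_SHORT = 65535;

    static void validate(ByteBuffer bytes)
    {
        if (bytes.remaining() > MAX_UNSIGNED_SHORT)
            throw new IllegalArgumentException(String.format(
                "Set value is too long. Set values are limited to %d bytes but %d bytes value provided",
                MAX_UNSIGNED_SHORT, bytes.remaining()));
    }

    public static void main(String[] args)
    {
        validate(ByteBuffer.allocate(16)); // fine: well under the limit
    }
}
{code}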


 now() is being rejected in INSERTs when inside collections
 --

 Key: CASSANDRA-5795
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5795
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.6
Reporter: Aleksey Yeschenko
Assignee: Sylvain Lebresne
 Fix For: 1.2.8

 Attachments: 5795.txt


 Lists, Sets and Maps reject NonTerminal terms in prepare:
 {code}
 if (t instanceof Term.NonTerminal)
 throw new InvalidRequestException(String.format(Invalid 
 list literal for %s: bind variables are not supported inside collection 
 literals, receiver));
 {code}
 and now() is instanceof NonTerminal since CASSANDRA-5616, hence
 {noformat}
 cqlsh:test insert into demo (id, timeuuids) values (0, [now()]);
 Bad Request: Invalid list literal for tus: bind variables are not supported 
 inside collection literals
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5797) DC-local CAS

2013-07-24 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13718473#comment-13718473
 ] 

Sylvain Lebresne commented on CASSANDRA-5797:
-

bq. although I think it would be better to come up w/ a different enum for the 
Paxos phases than re-use CL

The thing is that for reads, we must have SERIAL and LOCAL_SERIAL in CL if we 
want thrift to support it. So once we have them in CL, is it really worth 
adding a separate enum for the write case? (honest question, I'm fine doing it, 
just wonder if it's worth bothering since things will be mixed up for reads 
anyway).

 DC-local CAS
 

 Key: CASSANDRA-5797
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5797
 Project: Cassandra
  Issue Type: Bug
  Components: API
Affects Versions: 2.0
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 2.0.1


 For two-datacenter deployments where the second DC is strictly for disaster 
 failover, it would be useful to restrict CAS to a single DC to avoid cross-DC 
 round trips.
 (This would require manually truncating {{system.paxos}} when failing over.)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4775) Counters 2.0

2013-07-24 Thread Nicolas Favre-Felix (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13718481#comment-13718481
 ] 

Nicolas Favre-Felix commented on CASSANDRA-4775:


[~jbellis] sorry about the late answer.

I am not convinced that locking before the commit log entry is a great idea.
First, it does not *solve* the retry problem, even if it does mitigate it 
somewhat. It allows batches to be retried internally but doesn't give any 
guarantee to the client in the case of a timeout before the batch is added to 
the batchlog.
I implemented a read-modify-write (RMW) counter as a personal exercise last 
year and gave up on the idea because its performance was much lower than the 
current implementation. Cassandra currently allows concurrent updates to the 
same counter, with two clients applying deltas +x and +y, resulting in two 
replication reads that might both read (+x+y). This is not possible with a 
locked RMW and I remember observing many more timeouts on hot counters due to 
contention on this very coarse lock.
My toy implementation did not even lock around the commit log entry, which 
would be even slower.

It is true that the read in a RMW design is cheaper than the current read, which 
might touch several SSTables, but it's still very expensive, and I'm worried 
that the internal retry safety wouldn't be enough to convince users that these 
slower counters are better.

What do you think?
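
To make the contention point concrete, a toy sketch (plain Java, not Cassandra 
code): delta increments commute, so concurrent writers never have to coordinate, 
whereas a read-modify-write design funnels every update to a hot counter through 
the same critical section.

{code}
import java.util.concurrent.atomic.AtomicLong;

// Toy illustration only, not Cassandra code.
public class CounterContentionSketch
{
    public static void main(String[] args) throws InterruptedException
    {
        AtomicLong counter = new AtomicLong(0);
        final long x = 3, y = 4;

        // Delta style: each writer just adds its delta; the result is x + y
        // regardless of interleaving, so no coarse lock is needed.
        Thread t1 = new Thread(() -> counter.addAndGet(x));
        Thread t2 = new Thread(() -> counter.addAndGet(y));
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter.get()); // always 7

        // A locked RMW would instead do: lock; v = read(); write(v + delta); unlock;
        // serializing every update to the same counter behind one lock.
    }
}
{code}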

 Counters 2.0
 

 Key: CASSANDRA-4775
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4775
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Arya Goudarzi
Assignee: Aleksey Yeschenko
  Labels: counters
 Fix For: 2.1


 The existing partitioned counters remain a source of frustration for most 
 users almost two years after being introduced.  The remaining problems are 
 inherent in the design, not something that can be fixed given enough 
 time/eyeballs.
 Ideally a solution would give us
 - similar performance
 - less special cases in the code
 - potential for a retry mechanism

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[1/2] git commit: Fix columns expiring in the middle of 2 phase compactions

2013-07-24 Thread slebresne
Updated Branches:
  refs/heads/cassandra-1.2 9ae960a1a -> ce4e4b9b5


Fix columns expiring in the middle of 2 phase compactions

patch by slebresne; reviewed by jbellis for CASSANDRA-5799


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d38446a1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d38446a1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d38446a1

Branch: refs/heads/cassandra-1.2
Commit: d38446a1bbca0abf7a2af8986fecf348355abafb
Parents: 9ae960a
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Jul 24 16:43:38 2013 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Jul 24 16:43:38 2013 +0200

--
 CHANGES.txt| 1 +
 src/java/org/apache/cassandra/db/DeletionTime.java | 2 +-
 .../org/apache/cassandra/db/compaction/LazilyCompactedRow.java | 1 -
 3 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d38446a1/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index da03306..aaeb10b 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -26,6 +26,7 @@
  * Don't rely on row marker for queries in general to hide lost markers
after TTL expires (CASSANDRA-5762)
  * Sort nodetool help output (CASSANDRA-5776)
+ * Fix column expiring during 2 phases compaction (CASSANDRA-5799)
 
 
 1.2.6

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d38446a1/src/java/org/apache/cassandra/db/DeletionTime.java
--
diff --git a/src/java/org/apache/cassandra/db/DeletionTime.java 
b/src/java/org/apache/cassandra/db/DeletionTime.java
index bcb1b01..deab30b 100644
--- a/src/java/org/apache/cassandra/db/DeletionTime.java
+++ b/src/java/org/apache/cassandra/db/DeletionTime.java
@@ -85,7 +85,7 @@ public class DeletionTime implements Comparable<DeletionTime>
 
 public boolean isDeleted(IColumn column)
 {
-return column.isMarkedForDelete() && column.getMarkedForDeleteAt() <= markedForDeleteAt;
+return column.mostRecentLiveChangeAt() <= markedForDeleteAt;
 }
 
 public long memorySize()

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d38446a1/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java 
b/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
index 9575d41..9a03598 100644
--- a/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
+++ b/src/java/org/apache/cassandra/db/compaction/LazilyCompactedRow.java
@@ -293,7 +293,6 @@ public class LazilyCompactedRow extends 
AbstractCompactedRow implements Iterable
 
 // PrecompactedRow.removeDeletedAndOldShards have only checked 
the top-level CF deletion times,
 // not the range tombstone. For that we use the columnIndexer 
tombstone tracker.
-// Note that this doesn't work for super columns.
 if (indexBuilder.tombstoneTracker().isDeleted(reduced))
 return null;
 



[2/2] git commit: now() is being rejected in INSERTs when inside collections

2013-07-24 Thread slebresne
now() is being rejected in INSERTs when inside collections

patch by slebresne; reviewed by iamaleksey for CASSANDRA-5795


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ce4e4b9b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ce4e4b9b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ce4e4b9b

Branch: refs/heads/cassandra-1.2
Commit: ce4e4b9b5b29b0e0518aafe02544de3765ea9dd7
Parents: d38446a
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Jul 24 17:46:52 2013 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Jul 24 17:46:52 2013 +0200

--
 CHANGES.txt |  1 +
 .../apache/cassandra/cql3/AbstractMarker.java   |  5 ++
 src/java/org/apache/cassandra/cql3/Lists.java   | 76 ++
 src/java/org/apache/cassandra/cql3/Maps.java| 84 +++-
 src/java/org/apache/cassandra/cql3/Sets.java| 73 +
 src/java/org/apache/cassandra/cql3/Term.java| 28 ++-
 .../cassandra/cql3/functions/FunctionCall.java  | 10 +++
 7 files changed, 219 insertions(+), 58 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ce4e4b9b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index aaeb10b..71a956b 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -27,6 +27,7 @@
after TTL expires (CASSANDRA-5762)
  * Sort nodetool help output (CASSANDRA-5776)
  * Fix column expiring during 2 phases compaction (CASSANDRA-5799)
+ * now() is being rejected in INSERTs when inside collections (CASSANDRA-5795)
 
 
 1.2.6

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ce4e4b9b/src/java/org/apache/cassandra/cql3/AbstractMarker.java
--
diff --git a/src/java/org/apache/cassandra/cql3/AbstractMarker.java 
b/src/java/org/apache/cassandra/cql3/AbstractMarker.java
index d8353a1..b4a4143 100644
--- a/src/java/org/apache/cassandra/cql3/AbstractMarker.java
+++ b/src/java/org/apache/cassandra/cql3/AbstractMarker.java
@@ -40,6 +40,11 @@ public abstract class AbstractMarker extends Term.NonTerminal
 boundNames[bindIndex] = receiver;
 }
 
+public boolean containsBindMarker()
+{
+return true;
+}
+
 /**
  * A parsed, but non prepared, bind marker.
  */

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ce4e4b9b/src/java/org/apache/cassandra/cql3/Lists.java
--
diff --git a/src/java/org/apache/cassandra/cql3/Lists.java 
b/src/java/org/apache/cassandra/cql3/Lists.java
index b579b8c..ae95dca 100644
--- a/src/java/org/apache/cassandra/cql3/Lists.java
+++ b/src/java/org/apache/cassandra/cql3/Lists.java
@@ -19,6 +19,7 @@ package org.apache.cassandra.cql3;
 
 import java.nio.ByteBuffer;
 import java.util.ArrayList;
+import java.util.Collections;
 import java.util.List;
 import java.util.concurrent.atomic.AtomicReference;
 
@@ -60,34 +61,27 @@ public abstract class Lists
 this.elements = elements;
 }
 
-public Value prepare(ColumnSpecification receiver) throws 
InvalidRequestException
+public Term prepare(ColumnSpecification receiver) throws 
InvalidRequestException
 {
 validateAssignableTo(receiver);
 
 ColumnSpecification valueSpec = Lists.valueSpecOf(receiver);
-List<ByteBuffer> values = new ArrayList<ByteBuffer>(elements.size());
+List<Term> values = new ArrayList<Term>(elements.size());
+boolean allTerminal = true;
 for (Term.Raw rt : elements)
 {
 Term t = rt.prepare(valueSpec);
 
-if (t instanceof Term.NonTerminal)
+if (t.containsBindMarker())
 throw new InvalidRequestException(String.format("Invalid list literal for %s: bind variables are not supported inside collection literals", receiver));
 
-// We don't allow prepared marker in collections, nor nested 
collections (for the later, prepare will throw an exception)
-assert t instanceof Constants.Value;
-ByteBuffer bytes = ((Constants.Value)t).bytes;
-if (bytes == null)
-throw new InvalidRequestException("null is not supported inside collections");
-
-// We don't support value > 64K because the serialization format encode the length as an unsigned short.
-if (bytes.remaining() > FBUtilities.MAX_UNSIGNED_SHORT)
-throw new InvalidRequestException(String.format("List value is too long. List values are limited to %d bytes but %d bytes value provided",
-

Git Push Summary

2013-07-24 Thread slebresne
Updated Tags:  refs/tags/1.2.7-tentative [created] ce4e4b9b5


Git Push Summary

2013-07-24 Thread slebresne
Updated Tags:  refs/tags/1.2.7-tentative [deleted] ae62c947f


[3/4] git commit: fix KeyCacheTest to work with globalized key cache

2013-07-24 Thread yukim
fix KeyCacheTest to work with globalized key cache


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/422d2236
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/422d2236
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/422d2236

Branch: refs/heads/cassandra-1.2
Commit: 422d2236b59b80e3cafef9fb8cad3235280e8626
Parents: ce4e4b9
Author: Yuki Morishita yu...@apache.org
Authored: Wed Jul 24 10:58:07 2013 -0500
Committer: Yuki Morishita yu...@apache.org
Committed: Wed Jul 24 11:00:05 2013 -0500

--
 .../org/apache/cassandra/db/KeyCacheTest.java   | 75 +---
 1 file changed, 49 insertions(+), 26 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/422d2236/test/unit/org/apache/cassandra/db/KeyCacheTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/KeyCacheTest.java 
b/test/unit/org/apache/cassandra/db/KeyCacheTest.java
index 93f1fea..3458961 100644
--- a/test/unit/org/apache/cassandra/db/KeyCacheTest.java
+++ b/test/unit/org/apache/cassandra/db/KeyCacheTest.java
@@ -1,6 +1,4 @@
-package org.apache.cassandra.db;
 /*
- *
  * Licensed to the Apache Software Foundation (ASF) under one
  * or more contributor license agreements.  See the NOTICE file
  * distributed with this work for additional information
@@ -9,35 +7,34 @@ package org.apache.cassandra.db;
  * "License"); you may not use this file except in compliance
  * with the License.  You may obtain a copy of the License at
  *
- *   http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing,
- * software distributed under the License is distributed on an
- * AS IS BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- * KIND, either express or implied.  See the License for the
- * specific language governing permissions and limitations
- * under the License.
+ * http://www.apache.org/licenses/LICENSE-2.0
  *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
  */
+package org.apache.cassandra.db;
 
 import java.io.IOException;
 import java.util.HashMap;
 import java.util.Map;
 import java.util.concurrent.ExecutionException;
 
-import org.apache.cassandra.cache.KeyCacheKey;
-import org.apache.cassandra.db.filter.QueryFilter;
-import org.apache.cassandra.service.CacheService;
-import org.apache.cassandra.thrift.ColumnParent;
-
 import org.junit.AfterClass;
 import org.junit.Test;
 
 import org.apache.cassandra.SchemaLoader;
 import org.apache.cassandra.Util;
-import org.apache.cassandra.db.filter.QueryPath;
+import org.apache.cassandra.cache.KeyCacheKey;
 import org.apache.cassandra.db.compaction.CompactionManager;
+import org.apache.cassandra.db.filter.QueryFilter;
+import org.apache.cassandra.db.filter.QueryPath;
+import org.apache.cassandra.service.CacheService;
+import org.apache.cassandra.thrift.ColumnParent;
 import org.apache.cassandra.utils.ByteBufferUtil;
+
 import static junit.framework.Assert.assertEquals;
 
 public class KeyCacheTest extends SchemaLoader
@@ -61,7 +58,7 @@ public class KeyCacheTest extends SchemaLoader
 
 // empty the cache
 CacheService.instance.invalidateKeyCache();
-assert CacheService.instance.keyCache.size() == 0;
+assertKeyCacheSize(0, TABLE1, COLUMN_FAMILY2);
 
 // insert data and force to disk
 insertData(TABLE1, COLUMN_FAMILY2, 0, 100);
@@ -69,20 +66,37 @@ public class KeyCacheTest extends SchemaLoader
 
 // populate the cache
 readData(TABLE1, COLUMN_FAMILY2, 0, 100);
-assertEquals(100, CacheService.instance.keyCache.size());
+assertKeyCacheSize(100, TABLE1, COLUMN_FAMILY2);
 
 // really? our caches don't implement the map interface? (hence no 
.addAll)
 Map<KeyCacheKey, RowIndexEntry> savedMap = new HashMap<KeyCacheKey, RowIndexEntry>();
 for (KeyCacheKey k : CacheService.instance.keyCache.getKeySet())
 {
-savedMap.put(k, CacheService.instance.keyCache.get(k));
+if (k.desc.ksname.equals(TABLE1) && k.desc.cfname.equals(COLUMN_FAMILY2))
+savedMap.put(k, CacheService.instance.keyCache.get(k));
 }
 
 // force the cache to disk
 CacheService.instance.keyCache.submitWrite(Integer.MAX_VALUE).get();
 
 CacheService.instance.invalidateKeyCache();
-assert CacheService.instance.keyCache.size() == 0;
+assertKeyCacheSize(0, TABLE1, COLUMN_FAMILY2);
+
+

[1/4] git commit: now() is being rejected in INSERTs when inside collections

2013-07-24 Thread yukim
Updated Branches:
  refs/heads/cassandra-1.2 ce4e4b9b5 -> 422d2236b
  refs/heads/trunk 371e7bf69 -> 813577e32


now() is being rejected in INSERTs when inside collections

patch by slebresne; reviewed by iamaleksey for CASSANDRA-5795


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ce4e4b9b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ce4e4b9b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ce4e4b9b

Branch: refs/heads/trunk
Commit: ce4e4b9b5b29b0e0518aafe02544de3765ea9dd7
Parents: d38446a
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Wed Jul 24 17:46:52 2013 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Wed Jul 24 17:46:52 2013 +0200

--
 CHANGES.txt |  1 +
 .../apache/cassandra/cql3/AbstractMarker.java   |  5 ++
 src/java/org/apache/cassandra/cql3/Lists.java   | 76 ++
 src/java/org/apache/cassandra/cql3/Maps.java| 84 +++-
 src/java/org/apache/cassandra/cql3/Sets.java| 73 +
 src/java/org/apache/cassandra/cql3/Term.java| 28 ++-
 .../cassandra/cql3/functions/FunctionCall.java  | 10 +++
 7 files changed, 219 insertions(+), 58 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ce4e4b9b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index aaeb10b..71a956b 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -27,6 +27,7 @@
after TTL expires (CASSANDRA-5762)
  * Sort nodetool help output (CASSANDRA-5776)
  * Fix column expiring during 2 phases compaction (CASSANDRA-5799)
+ * now() is being rejected in INSERTs when inside collections (CASSANDRA-5795)
 
 
 1.2.6

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ce4e4b9b/src/java/org/apache/cassandra/cql3/AbstractMarker.java
--
diff --git a/src/java/org/apache/cassandra/cql3/AbstractMarker.java 
b/src/java/org/apache/cassandra/cql3/AbstractMarker.java
index d8353a1..b4a4143 100644
--- a/src/java/org/apache/cassandra/cql3/AbstractMarker.java
+++ b/src/java/org/apache/cassandra/cql3/AbstractMarker.java
@@ -40,6 +40,11 @@ public abstract class AbstractMarker extends Term.NonTerminal
 boundNames[bindIndex] = receiver;
 }
 
+public boolean containsBindMarker()
+{
+return true;
+}
+
 /**
  * A parsed, but non prepared, bind marker.
  */

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ce4e4b9b/src/java/org/apache/cassandra/cql3/Lists.java
--
diff --git a/src/java/org/apache/cassandra/cql3/Lists.java 
b/src/java/org/apache/cassandra/cql3/Lists.java
index b579b8c..ae95dca 100644
--- a/src/java/org/apache/cassandra/cql3/Lists.java
+++ b/src/java/org/apache/cassandra/cql3/Lists.java
@@ -19,6 +19,7 @@ package org.apache.cassandra.cql3;
 
 import java.nio.ByteBuffer;
 import java.util.ArrayList;
+import java.util.Collections;
 import java.util.List;
 import java.util.concurrent.atomic.AtomicReference;
 
@@ -60,34 +61,27 @@ public abstract class Lists
 this.elements = elements;
 }
 
-public Value prepare(ColumnSpecification receiver) throws 
InvalidRequestException
+public Term prepare(ColumnSpecification receiver) throws 
InvalidRequestException
 {
 validateAssignableTo(receiver);
 
 ColumnSpecification valueSpec = Lists.valueSpecOf(receiver);
-List<ByteBuffer> values = new ArrayList<ByteBuffer>(elements.size());
+List<Term> values = new ArrayList<Term>(elements.size());
+boolean allTerminal = true;
 for (Term.Raw rt : elements)
 {
 Term t = rt.prepare(valueSpec);
 
-if (t instanceof Term.NonTerminal)
+if (t.containsBindMarker())
 throw new InvalidRequestException(String.format("Invalid list literal for %s: bind variables are not supported inside collection literals", receiver));
 
-// We don't allow prepared marker in collections, nor nested 
collections (for the later, prepare will throw an exception)
-assert t instanceof Constants.Value;
-ByteBuffer bytes = ((Constants.Value)t).bytes;
-if (bytes == null)
-throw new InvalidRequestException("null is not supported inside collections");
-
-// We don't support value > 64K because the serialization format encode the length as an unsigned short.
-if (bytes.remaining() > FBUtilities.MAX_UNSIGNED_SHORT)
-throw new InvalidRequestException(String.format("List value is too 

[4/4] git commit: Merge branch 'cassandra-1.2' into trunk

2013-07-24 Thread yukim
Merge branch 'cassandra-1.2' into trunk

Conflicts:
test/unit/org/apache/cassandra/db/KeyCacheTest.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/813577e3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/813577e3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/813577e3

Branch: refs/heads/trunk
Commit: 813577e32b7e9a40ff1e71875d019eda44c84586
Parents: 371e7bf 422d223
Author: Yuki Morishita yu...@apache.org
Authored: Wed Jul 24 11:07:31 2013 -0500
Committer: Yuki Morishita yu...@apache.org
Committed: Wed Jul 24 11:07:31 2013 -0500

--
 CHANGES.txt |  1 +
 .../apache/cassandra/cql3/AbstractMarker.java   |  5 ++
 src/java/org/apache/cassandra/cql3/Lists.java   | 76 ++
 src/java/org/apache/cassandra/cql3/Maps.java| 84 +++-
 src/java/org/apache/cassandra/cql3/Sets.java| 73 +
 src/java/org/apache/cassandra/cql3/Term.java| 28 ++-
 .../cassandra/cql3/functions/FunctionCall.java  | 10 +++
 .../org/apache/cassandra/db/KeyCacheTest.java   | 70 ++--
 8 files changed, 266 insertions(+), 81 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/813577e3/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/813577e3/src/java/org/apache/cassandra/cql3/Lists.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/813577e3/src/java/org/apache/cassandra/cql3/Maps.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/813577e3/src/java/org/apache/cassandra/cql3/Sets.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/813577e3/test/unit/org/apache/cassandra/db/KeyCacheTest.java
--
diff --cc test/unit/org/apache/cassandra/db/KeyCacheTest.java
index e8e524d,3458961..10bcedd
--- a/test/unit/org/apache/cassandra/db/KeyCacheTest.java
+++ b/test/unit/org/apache/cassandra/db/KeyCacheTest.java
@@@ -35,8 -27,14 +27,12 @@@ import org.junit.Test
  
  import org.apache.cassandra.SchemaLoader;
  import org.apache.cassandra.Util;
+ import org.apache.cassandra.cache.KeyCacheKey;
  import org.apache.cassandra.db.compaction.CompactionManager;
+ import org.apache.cassandra.db.filter.QueryFilter;
 -import org.apache.cassandra.db.filter.QueryPath;
+ import org.apache.cassandra.service.CacheService;
 -import org.apache.cassandra.thrift.ColumnParent;
  import org.apache.cassandra.utils.ByteBufferUtil;
+ 
  import static junit.framework.Assert.assertEquals;
  
  public class KeyCacheTest extends SchemaLoader
@@@ -60,28 -58,45 +56,45 @@@
  
  // empty the cache
  CacheService.instance.invalidateKeyCache();
- assert CacheService.instance.keyCache.size() == 0;
 -assertKeyCacheSize(0, TABLE1, COLUMN_FAMILY2);
++assertKeyCacheSize(0, KEYSPACE1, COLUMN_FAMILY2);
  
  // insert data and force to disk
 -insertData(TABLE1, COLUMN_FAMILY2, 0, 100);
 +insertData(KEYSPACE1, COLUMN_FAMILY2, 0, 100);
  store.forceBlockingFlush();
  
  // populate the cache
 -readData(TABLE1, COLUMN_FAMILY2, 0, 100);
 -assertKeyCacheSize(100, TABLE1, COLUMN_FAMILY2);
 +readData(KEYSPACE1, COLUMN_FAMILY2, 0, 100);
- assertEquals(100, CacheService.instance.keyCache.size());
++assertKeyCacheSize(100, KEYSPACE1, COLUMN_FAMILY2);
  
  // really? our caches don't implement the map interface? (hence no 
.addAll)
  Map<KeyCacheKey, RowIndexEntry> savedMap = new HashMap<KeyCacheKey, RowIndexEntry>();
  for (KeyCacheKey k : CacheService.instance.keyCache.getKeySet())
  {
- savedMap.put(k, CacheService.instance.keyCache.get(k));
 -if (k.desc.ksname.equals(TABLE1) && k.desc.cfname.equals(COLUMN_FAMILY2))
++if (k.desc.ksname.equals(KEYSPACE1) && k.desc.cfname.equals(COLUMN_FAMILY2))
+ savedMap.put(k, CacheService.instance.keyCache.get(k));
  }
  
  // force the cache to disk
  CacheService.instance.keyCache.submitWrite(Integer.MAX_VALUE).get();
  
  CacheService.instance.invalidateKeyCache();
- assert CacheService.instance.keyCache.size() == 0;
 -assertKeyCacheSize(0, TABLE1, COLUMN_FAMILY2);
++assertKeyCacheSize(0, KEYSPACE1, COLUMN_FAMILY2);
+ 
+ CacheService.instance.keyCache.loadSaved(store);
 -assertKeyCacheSize(savedMap.size(), TABLE1, COLUMN_FAMILY2);
++

[2/4] git commit: fix KeyCacheTest to work with globalized key cache

2013-07-24 Thread yukim
fix KeyCacheTest to work with globalized key cache


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/422d2236
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/422d2236
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/422d2236

Branch: refs/heads/trunk
Commit: 422d2236b59b80e3cafef9fb8cad3235280e8626
Parents: ce4e4b9
Author: Yuki Morishita yu...@apache.org
Authored: Wed Jul 24 10:58:07 2013 -0500
Committer: Yuki Morishita yu...@apache.org
Committed: Wed Jul 24 11:00:05 2013 -0500

--
 .../org/apache/cassandra/db/KeyCacheTest.java   | 75 +---
 1 file changed, 49 insertions(+), 26 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/422d2236/test/unit/org/apache/cassandra/db/KeyCacheTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/KeyCacheTest.java 
b/test/unit/org/apache/cassandra/db/KeyCacheTest.java
index 93f1fea..3458961 100644
--- a/test/unit/org/apache/cassandra/db/KeyCacheTest.java
+++ b/test/unit/org/apache/cassandra/db/KeyCacheTest.java
@@ -1,6 +1,4 @@
-package org.apache.cassandra.db;
 /*
- *
  * Licensed to the Apache Software Foundation (ASF) under one
  * or more contributor license agreements.  See the NOTICE file
  * distributed with this work for additional information
@@ -9,35 +7,34 @@ package org.apache.cassandra.db;
  * "License"); you may not use this file except in compliance
  * with the License.  You may obtain a copy of the License at
  *
- *   http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing,
- * software distributed under the License is distributed on an
- * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- * KIND, either express or implied.  See the License for the
- * specific language governing permissions and limitations
- * under the License.
+ * http://www.apache.org/licenses/LICENSE-2.0
  *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
  */
+package org.apache.cassandra.db;
 
 import java.io.IOException;
 import java.util.HashMap;
 import java.util.Map;
 import java.util.concurrent.ExecutionException;
 
-import org.apache.cassandra.cache.KeyCacheKey;
-import org.apache.cassandra.db.filter.QueryFilter;
-import org.apache.cassandra.service.CacheService;
-import org.apache.cassandra.thrift.ColumnParent;
-
 import org.junit.AfterClass;
 import org.junit.Test;
 
 import org.apache.cassandra.SchemaLoader;
 import org.apache.cassandra.Util;
-import org.apache.cassandra.db.filter.QueryPath;
+import org.apache.cassandra.cache.KeyCacheKey;
 import org.apache.cassandra.db.compaction.CompactionManager;
+import org.apache.cassandra.db.filter.QueryFilter;
+import org.apache.cassandra.db.filter.QueryPath;
+import org.apache.cassandra.service.CacheService;
+import org.apache.cassandra.thrift.ColumnParent;
 import org.apache.cassandra.utils.ByteBufferUtil;
+
 import static junit.framework.Assert.assertEquals;
 
 public class KeyCacheTest extends SchemaLoader
@@ -61,7 +58,7 @@ public class KeyCacheTest extends SchemaLoader
 
 // empty the cache
 CacheService.instance.invalidateKeyCache();
-assert CacheService.instance.keyCache.size() == 0;
+assertKeyCacheSize(0, TABLE1, COLUMN_FAMILY2);
 
 // insert data and force to disk
 insertData(TABLE1, COLUMN_FAMILY2, 0, 100);
@@ -69,20 +66,37 @@ public class KeyCacheTest extends SchemaLoader
 
 // populate the cache
 readData(TABLE1, COLUMN_FAMILY2, 0, 100);
-assertEquals(100, CacheService.instance.keyCache.size());
+assertKeyCacheSize(100, TABLE1, COLUMN_FAMILY2);
 
 // really? our caches don't implement the map interface? (hence no 
.addAll)
 Map<KeyCacheKey, RowIndexEntry> savedMap = new HashMap<KeyCacheKey, RowIndexEntry>();
 for (KeyCacheKey k : CacheService.instance.keyCache.getKeySet())
 {
-savedMap.put(k, CacheService.instance.keyCache.get(k));
+if (k.desc.ksname.equals(TABLE1) && k.desc.cfname.equals(COLUMN_FAMILY2))
+savedMap.put(k, CacheService.instance.keyCache.get(k));
 }
 
 // force the cache to disk
 CacheService.instance.keyCache.submitWrite(Integer.MAX_VALUE).get();
 
 CacheService.instance.invalidateKeyCache();
-assert CacheService.instance.keyCache.size() == 0;
+assertKeyCacheSize(0, TABLE1, COLUMN_FAMILY2);
+
+

[jira] [Updated] (CASSANDRA-5626) Support empty IN queries

2013-07-24 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-5626:
-

Fix Version/s: 1.2.8

 Support empty IN queries
 

 Key: CASSANDRA-5626
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5626
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.2.0
Reporter: Alexander Solovyev
Assignee: Aleksey Yeschenko
Priority: Minor
  Labels: cql3
 Fix For: 1.2.8

 Attachments: 5626.txt, 5626-v2.txt


 It would be nice to have support of empty IN queries. 
 Example: SELECT a FROM t WHERE aKey IN (). 
 One of the reasons is to have such support in DataStax Java Driver (see 
 discussion here: https://datastax-oss.atlassian.net/browse/JAVA-106).
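
 For context on why client libraries ask for this, here is a minimal sketch 
 (hypothetical names, not DataStax driver code) of how a query string built 
 from a possibly empty key collection naturally degenerates into an empty IN:
{noformat}
import java.util.Arrays;
import java.util.List;

// Hypothetical illustration: a query assembled from a possibly empty key list
// naturally ends up as "... IN ()", which clients would like the server to
// accept (and simply match no rows) rather than reject.
public class EmptyInDemo {
    static String selectByKeys(List<String> keys) {
        StringBuilder in = new StringBuilder();
        for (int i = 0; i < keys.size(); i++) {
            if (i > 0) in.append(", ");
            // escape single quotes to keep the literal valid CQL
            in.append('\'').append(keys.get(i).replace("'", "''")).append('\'');
        }
        return "SELECT a FROM t WHERE aKey IN (" + in + ")";
    }

    public static void main(String[] args) {
        System.out.println(selectByKeys(Arrays.asList("k1", "k2"))); // ... IN ('k1', 'k2')
        System.out.println(selectByKeys(Arrays.<String>asList()));   // ... IN ()
    }
}
{noformat}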

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5698) cqlsh should support collections in COPY FROM

2013-07-24 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-5698:
-

Fix Version/s: (was: 1.2.7)
   1.2.8

 cqlsh should support collections in COPY FROM
 -

 Key: CASSANDRA-5698
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5698
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 1.1.2, 1.1.3
 Environment: Using cqlsh's COPY, when the data includes "{" and "[" 
 (i.e., collection types), I think data integrity is compromised in the 
 export/import process.
Reporter: Hiroshi Kise
Assignee: Aleksey Yeschenko
Priority: Minor
  Labels: collections, cqlsh
 Fix For: 1.2.8

 Attachments: 5698.txt


 Concrete operation is as follows.
 -*-*-*-*-*-*-*-*
 (1)map type's export/import
 export
 [root@castor bin]# ./cqlsh
 Connected to Test Cluster at localhost:9160.
 [cqlsh 3.0.2 | Cassandra 1.2.5 | CQL spec 3.0.0 | Thrift protocol 19.36.0]
 Use HELP for help.
 cqlsh> create keyspace maptestks with replication = { 'class' : 
 'SimpleStrategy', 'replication_factor' : '1' };
 cqlsh> use maptestks;
 cqlsh:maptestks> create table maptestcf (rowkey varchar PRIMARY KEY, 
 targetmap map<varchar,varchar>);
 cqlsh:maptestks> insert into maptestcf (rowkey, targetmap) values 
 ('rowkey',{'mapkey':'mapvalue'});
 cqlsh:maptestks> select * from maptestcf;
  rowkey | targetmap
 --------+--------------------
  rowkey | {mapkey: mapvalue}
 cqlsh:maptestks> copy maptestcf to 'maptestcf-20130619.txt';
 1 rows exported in 0.008 seconds.
 cqlsh:maptestks> exit;
 [root@castor bin]# cat maptestcf-20130619.txt
 rowkey,{mapkey: mapvalue}
    (a)
 import
 [root@castor bin]# ./cqlsh
 Connected to Test Cluster at localhost:9160.
 [cqlsh 3.0.2 | Cassandra 1.2.5 | CQL spec 3.0.0 | Thrift protocol 19.36.0]
 Use HELP for help.
 cqlsh> create keyspace mapimptestks with replication = { 'class' : 
 'SimpleStrategy', 'replication_factor' : '1' };
 cqlsh> use mapimptestks;
 cqlsh:mapimptestks> create table mapimptestcf (rowkey varchar PRIMARY KEY, 
 targetmap map<varchar,varchar>);
 cqlsh:mapimptestks> copy mapimptestcf from ' maptestcf-20130619.txt ';
 Bad Request: line 1:83 no viable alternative at input '}'
 Aborting import at record #0 (line 1). Previously-inserted values still 
 present.
 0 rows imported in 0.025 seconds.
 -*-*-*-*-*-*-*-*
 (2)list type's export/import
 export
 [root@castor bin]#./cqlsh
 Connected to Test Cluster at localhost:9160.
 [cqlsh 3.0.2 | Cassandra 1.2.5 | CQL spec 3.0.0 | Thrift protocol 19.36.0]
 Use HELP for help.
 cqlsh> create keyspace listtestks with replication = { 'class' : 
 'SimpleStrategy', 'replication_factor' : '1' };
 cqlsh> use listtestks;
 cqlsh:listtestks> create table listtestcf (rowkey varchar PRIMARY KEY, value 
 list<varchar>);
 cqlsh:listtestks> insert into listtestcf (rowkey,value) values 
 ('rowkey',['value1','value2']);
 cqlsh:listtestks> select * from listtestcf;
  rowkey | value
 --------+------------------
  rowkey | [value1, value2]
 cqlsh:listtestks> copy listtestcf to 'listtestcf-20130619.txt';
 1 rows exported in 0.014 seconds.
 cqlsh:listtestks> exit;
 [root@castor bin]# cat listtestcf-20130619.txt
 rowkey,[value1, value2]
    (b)
 import
 [root@castor bin]# ./cqlsh
 Connected to Test Cluster at localhost:9160.
 [cqlsh 3.0.2 | Cassandra 1.2.5 | CQL spec 3.0.0 | Thrift protocol 19.36.0]
 Use HELP for help.
 cqlsh> create keyspace listimptestks with replication = { 'class' : 
 'SimpleStrategy', 'replication_factor' : '1' };
 cqlsh> use listimptestks;
 cqlsh:listimptestks> create table listimptestcf (rowkey varchar PRIMARY KEY, 
 value list<varchar>);
 cqlsh:listimptestks> copy listimptestcf from ' listtestcf-20130619.txt ';
 Bad Request: line 1:79 no viable alternative at input ']'
 Aborting import at record #0 (line 1). Previously-inserted values still 
 present.
 0 rows imported in 0.030 seconds.
 -*-*-*-*-*-*-*-*
 Reference (whether this is correct or a separate error is another matter): 
 I manually rewrote the export file.
 [root@castor bin]# cat nlisttestcf-20130619.txt
 rowkey,['value1',' value2']
 
 cqlsh:listimptestks> copy listimptestcf from 'nlisttestcf-20130619.txt';
 1 rows imported in 0.035 seconds.
 cqlsh:listimptestks> select * from implisttestcf;
  rowkey | value
 --------+------------------
  rowkey | [value1, value2]
 cqlsh:implisttestks> exit;
 [root@castor bin]# cat nmaptestcf-20130619.txt
 rowkey,"{'mapkey': 'mapvalue'}"
 [root@castor 
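 For illustration only (cqlsh itself is Python; the helper below is a 
 hypothetical Java sketch, not the attached patch): the core of the round-trip 
 problem is that collection elements must be re-quoted as CQL string literals, 
 and the whole literal CSV-quoted since it can contain commas, roughly like so:
{noformat}
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical helper: a map rendered as {mapkey: mapvalue} must become a
// CSV-quoted CQL literal such as "{'mapkey': 'mapvalue'}" to survive export
// and re-import.
public final class CollectionCsvDemo {
    static String toCqlMapLiteral(Map<String, String> m) {
        StringBuilder sb = new StringBuilder("{");
        boolean first = true;
        for (Map.Entry<String, String> e : m.entrySet()) {
            if (!first) sb.append(", ");
            sb.append('\'').append(e.getKey()).append("': '").append(e.getValue()).append('\'');
            first = false;
        }
        return sb.append('}').toString();
    }

    static String csvQuote(String field) {
        // CSV quoting: wrap in double quotes, double any embedded double quotes
        return '"' + field.replace("\"", "\"\"") + '"';
    }

    public static void main(String[] args) {
        Map<String, String> m = new LinkedHashMap<String, String>();
        m.put("mapkey", "mapvalue");
        System.out.println(csvQuote(toCqlMapLiteral(m))); // "{'mapkey': 'mapvalue'}"
    }
}
{noformat}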

[jira] [Commented] (CASSANDRA-5797) DC-local CAS

2013-07-24 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13718549#comment-13718549
 ] 

Sylvain Lebresne commented on CASSANDRA-5797:
-

bq. I like LOCAL_SERIAL over ANY

I think there is some confusion. The suggestion of CL.ANY was for the commit 
part of Paxos. That part is basically a standard write (it happens after the 
Paxos algorithm has unfolded, but it does impact the visibility of the CAS write 
by non-serial reads). For that, LOCAL_SERIAL doesn't really make sense imo (it's 
even wrong). ANY is what matches most closely what actually happens: you are 
guaranteed the write is replicated somewhere (Paxos ensures that), but you may 
not be able to see your write right away with normal reads, even at CL.ALL 
(which is also the case with CL.ANY).
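
To keep the two levels being discussed distinct, here is a purely illustrative 
sketch (hypothetical names, not Cassandra's or any driver's API) separating the 
consistency of the Paxos rounds from the consistency of the commit write:
{noformat}
// Purely illustrative: two separate knobs -- one for the Paxos prepare/propose
// rounds, one for the commit write whose visibility normal reads observe.
enum SerialConsistency { SERIAL, LOCAL_SERIAL }
enum CommitConsistency { ANY, ONE, QUORUM, LOCAL_QUORUM, ALL }

public class CasConsistencyDemo {
    static String describe(SerialConsistency paxos, CommitConsistency commit) {
        return "paxos=" + paxos + ", commit=" + commit;
    }

    public static void main(String[] args) {
        // A DC-local CAS as proposed here would pair LOCAL_SERIAL for the Paxos
        // rounds with a commit consistency such as LOCAL_QUORUM or ANY.
        System.out.println(describe(SerialConsistency.LOCAL_SERIAL, CommitConsistency.LOCAL_QUORUM));
    }
}
{noformat}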

 DC-local CAS
 

 Key: CASSANDRA-5797
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5797
 Project: Cassandra
  Issue Type: Bug
  Components: API
Affects Versions: 2.0
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 2.0.1


 For two-datacenter deployments where the second DC is strictly for disaster 
 failover, it would be useful to restrict CAS to a single DC to avoid cross-DC 
 round trips.
 (This would require manually truncating {{system.paxos}} when failing over.)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5626) Support empty IN queries

2013-07-24 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13718552#comment-13718552
 ] 

Sylvain Lebresne commented on CASSANDRA-5626:
-

v2 lgtm, +1

 Support empty IN queries
 

 Key: CASSANDRA-5626
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5626
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.2.0
Reporter: Alexander Solovyev
Assignee: Aleksey Yeschenko
Priority: Minor
  Labels: cql3
 Fix For: 1.2.8

 Attachments: 5626.txt, 5626-v2.txt


 It would be nice to have support of empty IN queries. 
 Example: SELECT a FROM t WHERE aKey IN (). 
 One of the reasons is to have such support in DataStax Java Driver (see 
 discussion here: https://datastax-oss.atlassian.net/browse/JAVA-106).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5800) Support pre-1.2 release CQL3 tables in CqlPagingRecordReader

2013-07-24 Thread Alex Liu (JIRA)
Alex Liu created CASSANDRA-5800:
---

 Summary: Support pre-1.2 release CQL3 tables in 
CqlPagingRecordReader
 Key: CASSANDRA-5800
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5800
 Project: Cassandra
  Issue Type: Improvement
  Components: Hadoop
Affects Versions: 1.2.6
Reporter: Alex Liu
Assignee: Alex Liu


Pre-1.2 release CQL3 table stores the key in system.schema_columnfamilies 
key_alias column which is different from 1.2 release. We should support it in 
CqlPagingRecordReader as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5800) Support pre-1.2 release CQL3 tables in CqlPagingRecordReader

2013-07-24 Thread Alex Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Liu updated CASSANDRA-5800:


Attachment: 5800-1.2-branch.txt

Patch on 1.2 branch is attached

 Support pre-1.2 release CQL3 tables in CqlPagingRecordReader
 

 Key: CASSANDRA-5800
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5800
 Project: Cassandra
  Issue Type: Improvement
  Components: Hadoop
Affects Versions: 1.2.6
Reporter: Alex Liu
Assignee: Alex Liu
 Attachments: 5800-1.2-branch.txt


 Pre-1.2 release CQL3 table stores the key in system.schema_columnfamilies 
 key_alias column which is different from 1.2 release. We should support it in 
 CqlPagingRecordReader as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5801) AE during validation compaction

2013-07-24 Thread Brandon Williams (JIRA)
Brandon Williams created CASSANDRA-5801:
---

 Summary: AE during validation compaction
 Key: CASSANDRA-5801
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5801
 Project: Cassandra
  Issue Type: Bug
Reporter: Brandon Williams
Assignee: Sylvain Lebresne
 Fix For: 2.0


While repairing with vnodes enabled:

{noformat}
ERROR [ValidationExecutor:1] 2013-07-24 12:09:36,326 Validator.java (line 197) 
Failed creating a merkle tree for [repair #d13fd210-f483-11e2-b6fb-f1fe0a5dda64 
on ks/cf, (9214460999857687863,-9209369219500956981]], /127.0.0.1 (see log for 
details)
ERROR [ValidationExecutor:1] 2013-07-24 12:09:36,328 CassandraDaemon.java (line 
196) Exception in thread Thread[ValidationExecutor:1,1,main]
java.lang.AssertionError: -9191651187195735134 is not contained in 
(9214460999857687863,-9209369219500956981]
at org.apache.cassandra.repair.Validator.add(Validator.java:136)
at 
org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:669)
at 
org.apache.cassandra.db.compaction.CompactionManager.access$600(CompactionManager.java:64)
at 
org.apache.cassandra.db.compaction.CompactionManager$8.call(CompactionManager.java:395)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
{noformat}
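
For reference, a rough sketch (not Cassandra's actual Range class) of how 
containment in a (left, right] token range works when the range wraps around 
the ring, using the values from the assertion above:
{noformat}
// Rough sketch of (left, right] containment on a token ring: when left > right
// the range wraps, so containment is "greater than left OR at most right".
public class WrappingRangeDemo {
    static boolean contains(long left, long right, long token) {
        if (left < right)
            return token > left && token <= right;
        return token > left || token <= right; // wrapping range
    }

    public static void main(String[] args) {
        long left = 9214460999857687863L;
        long right = -9209369219500956981L;
        // prints false, matching the AssertionError: the token is outside the range
        System.out.println(contains(left, right, -9191651187195735134L));
    }
}
{noformat}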

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5802) NPE in HH metrics

2013-07-24 Thread Brandon Williams (JIRA)
Brandon Williams created CASSANDRA-5802:
---

 Summary: NPE in HH metrics
 Key: CASSANDRA-5802
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5802
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 2.0 beta 2
Reporter: Brandon Williams
Assignee: Tyler Hobbs
 Fix For: 2.0


{noformat}
[junit] Testcase: 
testCompactionOfHintsCF(org.apache.cassandra.db.HintedHandOffTest):   
Caused an ERROR
[junit] null
[junit] java.lang.NullPointerException
[junit] at 
com.google.common.base.Preconditions.checkNotNull(Preconditions.java:191)
[junit] at com.google.common.cache.LocalCache.get(LocalCache.java:3989)
[junit] at 
com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3994)
[junit] at 
com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4878)
[junit] at 
com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4884)
[junit] at 
org.apache.cassandra.metrics.HintedHandoffMetrics.incrCreatedHints(HintedHandoffMetrics.java:67)
[junit] at 
org.apache.cassandra.db.HintedHandOffManager.hintFor(HintedHandOffManager.java:125)
[junit] at 
org.apache.cassandra.db.HintedHandOffTest.testCompactionOfHintsCF(HintedHandOffTest.java:68)
[junit] 
[junit] 
[junit] Test org.apache.cassandra.db.HintedHandOffTest FAILED
{noformat}
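
The failure mode in the trace can be reproduced in isolation with Guava alone 
(assuming Guava on the classpath; this is not the HintedHandoffMetrics fix 
itself): a LoadingCache rejects a null key via Preconditions.checkNotNull 
before the loader ever runs.
{noformat}
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

// Standalone repro of the NPE path in the trace: passing a null key to a
// Guava LoadingCache throws NullPointerException from Preconditions.checkNotNull.
public class NullCacheKeyDemo {
    public static void main(String[] args) {
        LoadingCache<String, Integer> cache = CacheBuilder.newBuilder()
                .build(new CacheLoader<String, Integer>() {
                    @Override
                    public Integer load(String key) {
                        return key.length();
                    }
                });
        cache.getUnchecked(null); // throws NullPointerException
    }
}
{noformat}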

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5803) Support pre-1.2 CQL3 tables in Pig

2013-07-24 Thread Alex Liu (JIRA)
Alex Liu created CASSANDRA-5803:
---

 Summary: Support pre-1.2 CQL3 tables in Pig
 Key: CASSANDRA-5803
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5803
 Project: Cassandra
  Issue Type: Improvement
  Components: Hadoop
Affects Versions: 1.2.6
Reporter: Alex Liu
Assignee: Alex Liu


Pre-1.2 release CQL3 table stores the key in system.schema_columnfamilies 
key_alias column which is different from 1.2 release. We should support it in 
CqlPagingRecordReader as well.

The patch is on top of CASSANDRA-5800

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5803) Support pre-1.2 CQL3 tables in Pig

2013-07-24 Thread Alex Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Liu updated CASSANDRA-5803:


Attachment: 5803-1.2-branch.txt

patch for 1.2 branch is attached

 Support pre-1.2 CQL3 tables in Pig
 --

 Key: CASSANDRA-5803
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5803
 Project: Cassandra
  Issue Type: Improvement
  Components: Hadoop
Affects Versions: 1.2.6
Reporter: Alex Liu
Assignee: Alex Liu
 Attachments: 5803-1.2-branch.txt


 Pre-1.2 release CQL3 table stores the key in system.schema_columnfamilies 
 key_alias column which is different from 1.2 release. We should support it in 
 CqlPagingRecordReader as well.
 The patch is on top of CASSANDRA-5800

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5768) If a Seed can't be contacted, a new node comes up as a cluster of 1

2013-07-24 Thread Robert Coli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13718931#comment-13718931
 ] 

Robert Coli commented on CASSANDRA-5768:


This comment is not particularly substantive, I just want to express my 
enthusiasm for this behavior being fixed. YAY! This formerly very 
confusing-to-noobs behavior will now be much less potentially confusing. Thanks 
to the reporter and to the contributors of the patch! :D

 If a Seed can't be contacted, a new node comes up as a cluster of 1
 ---

 Key: CASSANDRA-5768
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5768
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 2.0 beta 1
Reporter: Andy Cobley
Assignee: Brandon Williams
Priority: Minor
 Fix For: 1.2.7, 2.0 beta 2

 Attachments: 5768.txt, cassandra.yaml


 Setting up a new test cluster using 2.0.0-beta1, I noticed the following 
 behaviour with vnodes turned on.
 I bring up one node, all well and good. However, if I bring up a second node 
 that can't contact the first (the first being the seed for the second), then 
 after a short period of time the second goes ahead and assumes it's the only 
 node and bootstraps with all tokens.
 NOTE also this email from Robert Coli 
 To: u...@cassandra.apache.org
 Obviously if you have defined a seed and cannot contact it, the node should 
 not start as a cluster of one. I have a to-do list item to file a JIRA on the 
 subject, but if you wanted to file and link us, that'd be super. :)
 Startup trace (from the can't contact the seed messages below).
 http://aep.appspot.com/display/ABcWltCES1srzPrj5CkS69-GB8o/

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5797) DC-local CAS

2013-07-24 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13719041#comment-13719041
 ] 

Jonathan Ellis commented on CASSANDRA-5797:
---

bq. The thing is that for reads, we must have SERIAL and LOCAL_SERIAL in CL if 
we want thrift to support it. So once we have them in CL, is it really worth 
adding a separate enum for the write case?

The problem is that none of {ANY, ONE, TWO, THREE, LOCAL_QUORUM, EACH_QUORUM} 
are valid on writes, which isn't very clear if we reuse CL for everything.

Then again, ANY is already not a valid CL for reads, and EACH_QUORUM is not 
valid for writes. I dunno.

 DC-local CAS
 

 Key: CASSANDRA-5797
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5797
 Project: Cassandra
  Issue Type: Bug
  Components: API
Affects Versions: 2.0
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 2.0.1


 For two-datacenter deployments where the second DC is strictly for disaster 
 failover, it would be useful to restrict CAS to a single DC to avoid cross-DC 
 round trips.
 (This would require manually truncating {{system.paxos}} when failing over.)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5800) Support pre-1.2 release CQL3 tables in CqlPagingRecordReader

2013-07-24 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13719043#comment-13719043
 ] 

Jonathan Ellis commented on CASSANDRA-5800:
---

I don't understand what the goal is here.  CPRR is only part of 1.2.

 Support pre-1.2 release CQL3 tables in CqlPagingRecordReader
 

 Key: CASSANDRA-5800
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5800
 Project: Cassandra
  Issue Type: Improvement
  Components: Hadoop
Affects Versions: 1.2.6
Reporter: Alex Liu
Assignee: Alex Liu
 Attachments: 5800-1.2-branch.txt


 Pre-1.2 release CQL3 table stores the key in system.schema_columnfamilies 
 key_alias column which is different from 1.2 release. We should support it in 
 CqlPagingRecordReader as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5800) Support pre-1.2 release CQL3 tables in CqlPagingRecordReader

2013-07-24 Thread Alex Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13719094#comment-13719094
 ] 

Alex Liu commented on CASSANDRA-5800:
-

If the user upgrades from a pre-1.2 release to a 1.2 release, any CQL3 table 
created in the pre-1.2 release doesn't work with CqlPagingRecordReader, because 
the key name is stored in the key_alias column instead of the key_aliases 
column. The fix helps with upgrading.
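
A hedged sketch of the kind of fallback involved (hypothetical helper, not the 
attached patch): read the 1.2-style key_aliases list when present, otherwise 
fall back to the pre-1.2 key_alias column.
{noformat}
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Hypothetical helper: resolve partition-key column names from either the
// 1.2-style "key_aliases" list or the pre-1.2 single "key_alias" value.
public final class KeyAliasResolver {
    static List<String> resolve(String keyAliases, String keyAlias) {
        if (keyAliases != null && !keyAliases.equals("[]"))
            // e.g. ["id","bucket"] -> [id, bucket]; real code would use a JSON parser
            return Arrays.asList(keyAliases.replaceAll("[\\[\\]\"\\s]", "").split(","));
        if (keyAlias != null && !keyAlias.isEmpty())
            return Collections.singletonList(keyAlias);
        return Collections.emptyList();
    }

    public static void main(String[] args) {
        System.out.println(resolve("[\"id\",\"bucket\"]", null)); // [id, bucket]
        System.out.println(resolve("[]", "key"));                 // [key]
    }
}
{noformat}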


 Support pre-1.2 release CQL3 tables in CqlPagingRecordReader
 

 Key: CASSANDRA-5800
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5800
 Project: Cassandra
  Issue Type: Improvement
  Components: Hadoop
Affects Versions: 1.2.6
Reporter: Alex Liu
Assignee: Alex Liu
 Attachments: 5800-1.2-branch.txt


 Pre-1.2 release CQL3 table stores the key in system.schema_columnfamilies 
 key_alias column which is different from 1.2 release. We should support it in 
 CqlPagingRecordReader as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


git commit: fix String.format parameter order for error string

2013-07-24 Thread dbrosius
Updated Branches:
  refs/heads/trunk 813577e32 -> 843e92c96


fix String.format parameter order for error string


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/843e92c9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/843e92c9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/843e92c9

Branch: refs/heads/trunk
Commit: 843e92c96b0ce5a8418ba42bbef5a6c26750fe79
Parents: 813577e
Author: Dave Brosius dbros...@apache.org
Authored: Thu Jul 25 01:36:17 2013 -0400
Committer: Dave Brosius dbros...@apache.org
Committed: Thu Jul 25 01:36:17 2013 -0400

--
 src/java/org/apache/cassandra/thrift/ThriftValidation.java | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/843e92c9/src/java/org/apache/cassandra/thrift/ThriftValidation.java
--
diff --git a/src/java/org/apache/cassandra/thrift/ThriftValidation.java 
b/src/java/org/apache/cassandra/thrift/ThriftValidation.java
index ce8fdcb..ec3bb00 100644
--- a/src/java/org/apache/cassandra/thrift/ThriftValidation.java
+++ b/src/java/org/apache/cassandra/thrift/ThriftValidation.java
@@ -235,8 +235,8 @@ public class ThriftValidation
 ByteBuffer[] components = composite.split(name);
 int minComponents = composite.types.size() - 
(cfDef.hasCollections ? 1 : 0);
 if (components.length < minComponents)
-throw new 
org.apache.cassandra.exceptions.InvalidRequestException(String.format(Not 
enough component (found %d but %d expected) for column name since %s is a CQL3 
table,
-   
 metadata.cfName, components.length, minComponents));
+throw new 
org.apache.cassandra.exceptions.InvalidRequestException(String.format(Not 
enough components (found %d but %d expected) for column name since %s is a CQL3 
table,
+   
 components.length, minComponents, metadata.cfName));
 
 // Furthermore, the column name must be a declared one.
 int columnIndex = composite.types.size() - 
(cfDef.hasCollections ? 2 : 1);
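
For reference, a minimal standalone illustration of why the argument order 
matters here: the %d placeholders must receive the numeric arguments and %s the 
string, otherwise String.format throws IllegalFormatConversionException at 
runtime.
{noformat}
// With the pre-fix ordering (the String passed to %d), String.format would
// throw IllegalFormatConversionException instead of producing the message.
public class FormatOrderDemo {
    public static void main(String[] args) {
        String cf = "users";
        int found = 1, expected = 2;
        System.out.println(String.format(
            "Not enough components (found %d but %d expected) for column name since %s is a CQL3 table",
            found, expected, cf));
    }
}
{noformat}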



git commit: CompositeType.build is a static method

2013-07-24 Thread dbrosius
Updated Branches:
  refs/heads/trunk 843e92c96 -> a2a31244a


CompositeType.build is a static method


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a2a31244
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a2a31244
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a2a31244

Branch: refs/heads/trunk
Commit: a2a31244a7434cd93409754d6bbeb180147d3df1
Parents: 843e92c
Author: Dave Brosius dbros...@apache.org
Authored: Thu Jul 25 01:48:09 2013 -0400
Committer: Dave Brosius dbros...@apache.org
Committed: Thu Jul 25 01:48:09 2013 -0400

--
 src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a2a31244/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java 
b/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java
index cc8500e..6d50775 100644
--- a/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java
+++ b/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java
@@ -306,7 +306,7 @@ final class CqlRecordWriter extends 
AbstractColumnFamilyRecordWriter<Map<String,
 for (int i = 0; i < keys.length; i++)
 keys[i] = keyColumns.get(partitionKeyColumns[i]);
 
-partitionKey = ((CompositeType) keyValidator).build(keys);
+partitionKey = CompositeType.build(keys);
 }
 else
 {
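
For reference, a small standalone example of why calling a static method 
through an instance reference (as the removed cast-and-call did) is misleading: 
the call is resolved against the compile-time type of the reference, not the 
object.
{noformat}
// Calling a static method via an instance compiles, but it ignores the runtime
// type of the reference and obscures that no instance state is involved.
class Base { static String who() { return "Base"; } }
class Derived extends Base { static String who() { return "Derived"; } }

public class StaticCallDemo {
    public static void main(String[] args) {
        Base b = new Derived();
        System.out.println(b.who());    // prints "Base" -- resolved at compile time
        System.out.println(Base.who()); // clearer: call via the class name
    }
}
{noformat}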



git commit: don't fall thru cases unless you really want to

2013-07-24 Thread dbrosius
Updated Branches:
  refs/heads/trunk a2a31244a -> 4fbed40c6


don't fall thru cases unless you really want to


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4fbed40c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4fbed40c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4fbed40c

Branch: refs/heads/trunk
Commit: 4fbed40c6c7f2dc9af8feb222139d06adc0eecbf
Parents: a2a3124
Author: Dave Brosius dbros...@apache.org
Authored: Thu Jul 25 01:50:05 2013 -0400
Committer: Dave Brosius dbros...@apache.org
Committed: Thu Jul 25 01:50:05 2013 -0400

--
 src/java/org/apache/cassandra/transport/ServerConnection.java | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4fbed40c/src/java/org/apache/cassandra/transport/ServerConnection.java
--
diff --git a/src/java/org/apache/cassandra/transport/ServerConnection.java 
b/src/java/org/apache/cassandra/transport/ServerConnection.java
index 97b6b5a..cb9081c 100644
--- a/src/java/org/apache/cassandra/transport/ServerConnection.java
+++ b/src/java/org/apache/cassandra/transport/ServerConnection.java
@@ -106,6 +106,7 @@ public class ServerConnection extends Connection
 // we won't use the authenticator again, null it so that 
it can be GC'd
 saslAuthenticator = null;
 }
+break;
 case READY:
 break;
 default:
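
For reference, a minimal standalone example of the fall-through behavior the 
added break prevents: without a break, execution continues into the following 
case labels.
{noformat}
public class FallThroughDemo {
    // Returns every state name reached from 'state' when breaks are omitted,
    // demonstrating the fall-through the commit above guards against.
    static String withoutBreaks(int state) {
        StringBuilder sb = new StringBuilder();
        switch (state) {
            case 0: sb.append("AUTHENTICATING ");
            case 1: sb.append("READY ");
            default: sb.append("DEFAULT");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(withoutBreaks(0)); // AUTHENTICATING READY DEFAULT
        System.out.println(withoutBreaks(1)); // READY DEFAULT
    }
}
{noformat}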