[jira] [Commented] (CASSANDRA-5125) Support indexes on composite column components

2013-04-03 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13620729#comment-13620729
 ] 

Sylvain Lebresne commented on CASSANDRA-5125:
-

Things move fast on trunk lately, so I've pushed a rebased version at 
https://github.com/pcmanus/cassandra/commits/5125-2 to avoid rebasing every day.

 Support indexes on composite column components
 --

 Key: CASSANDRA-5125
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5125
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
 Fix For: 2.0

 Attachments: 0001-Refactor-aliases-into-column_metadata.txt, 
 0002-Generalize-CompositeIndex-for-all-column-type.txt, 
 0003-Handle-new-type-of-IndexExpression.txt, 
 0004-Handle-partition-key-indexing.txt


 Given
 {code}
 CREATE TABLE foo (
   a int,
   b int,
   c int,
   PRIMARY KEY (a, b)
 );
 {code}
 We should support {{CREATE INDEX ON foo(b)}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (CASSANDRA-5381) java.io.EOFException exception while executing nodetool repair with compression enabled

2013-04-03 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson resolved CASSANDRA-5381.


Resolution: Duplicate

 java.io.EOFException exception while executing nodetool repair with 
 compression enabled
 ---

 Key: CASSANDRA-5381
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5381
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.3
 Environment: Linux Virtual Machines, Red Hat Enterprise release 6.4, 
 kernel version  2.6.32-358.2.1.el6.x86_64. Each VM has 8GB memory and 4vCPUS.
Reporter: Neil Thomson
Priority: Minor

 Very similar to issue reported in CASSANDRA-5105. I have 3 nodes configured 
 in a cluster. The nodes are configured with compression enabled. When 
 attempting a nodetool repair on one node, I get exceptions in the other nodes 
 in the cluster.
 Disabling compression on the column family allows nodetool repair to run 
 without error.
 Exception:
 INFO [Streaming to /3.69.211.179:2] 2013-03-25 12:30:27,874 
 StreamReplyVerbHandler.java (line 50) Need to re-stream file 
 /var/lib/cassandra/data/rt/values/rt-values-ib-1-Data.db to /3.69.211.179
 INFO [Streaming to /3.69.211.179:2] 2013-03-25 12:30:27,991 
 StreamReplyVerbHandler.java (line 50) Need to re-stream file 
 /var/lib/cassandra/data/rt/values/rt-values-ib-1-Data.db to /3.69.211.179
 ERROR [Streaming to /3.69.211.179:2] 2013-03-25 12:30:28,113 
 CassandraDaemon.java (line 164) Exception in thread Thread[Streaming to 
 /3.69.211.179:2,5,main]
 java.lang.RuntimeException: java.io.EOFException
 at com.google.common.base.Throwables.propagate(Throwables.java:160)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
 at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown 
 Source)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)
 Caused by: java.io.EOFException
 at java.io.DataInputStream.readInt(Unknown Source)
 at 
 org.apache.cassandra.streaming.FileStreamTask.receiveReply(FileStreamTask.java:193)
 at 
 org.apache.cassandra.streaming.compress.CompressedFileStreamTask.stream(CompressedFileStreamTask.java:114)
 at 
 org.apache.cassandra.streaming.FileStreamTask.runMayThrow(FileStreamTask.java:91)
 at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 ... 3 more
 Keyspace configuration is as follows:
 Keyspace: rt:
   Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
   Durable Writes: true
 Options: [replication_factor:3]
   Column Families:
 ColumnFamily: tagname
   Key Validation Class: org.apache.cassandra.db.marshal.BytesType
   Default column value validator: 
 org.apache.cassandra.db.marshal.BytesType
   Columns sorted by: org.apache.cassandra.db.marshal.BytesType
   GC grace seconds: 864000
   Compaction min/max thresholds: 4/32
   Read repair chance: 0.1
   DC Local Read repair chance: 0.0
   Populate IO Cache on flush: false
   Replicate on write: true
   Caching: KEYS_ONLY
   Bloom Filter FP chance: default
   Built indexes: []
   Compaction Strategy: 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy
 ColumnFamily: values
   Key Validation Class: org.apache.cassandra.db.marshal.BytesType
   Default column value validator: 
 org.apache.cassandra.db.marshal.BytesType
   Columns sorted by: org.apache.cassandra.db.marshal.BytesType
   GC grace seconds: 864000
   Compaction min/max thresholds: 4/32
   Read repair chance: 0.1
   DC Local Read repair chance: 0.0
   Populate IO Cache on flush: false
   Replicate on write: true
   Caching: KEYS_ONLY
   Bloom Filter FP chance: default
   Built indexes: []
   Compaction Strategy: 
 org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5391) SSL problems with inter-DC communication

2013-04-03 Thread Jan Chochol (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13620761#comment-13620761
 ] 

Jan Chochol commented on CASSANDRA-5391:


Hi everyone, I am working with Ondrej on the same problem.
I just looked at the patch (git commit cc6429e2722e764dce8cad77660732146ed596ab) 
and I am not sure that it is exactly correct.
The problematic situation is when {{length}} > {{CHUNK_SIZE}} (in 
{{CompressedFileStreamTask.stream()}}). In this case the data will be sent in more 
chunks, but before every chunk this code will be executed:
{noformat}
file.seek(section.left);
{noformat}
Sending only the first chunk every time will probably lead to the described error.
I would suggest moving {{file.seek}} before the beginning of the {{while}} loop (then the 
file pointer will be advanced by {{readFully}}), or changing the mentioned code to
{noformat}
file.seek(section.left + bytesTransferred);
{noformat}
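Jan's diagnosis can be sketched with a toy model. This is a hypothetical simplification of the chunked-transfer loop, not the real Cassandra API: an in-memory byte array stands in for the file, and the class and method names are illustrative. The buggy variant seeks back to {{section.left}} before every chunk, so each chunk re-sends the section's first bytes; the fixed variant offsets the seek by the bytes already transferred.

```java
import java.util.Arrays;

// Toy model of a chunked-transfer loop (illustrative names, not Cassandra's API).
public class SeekBugDemo {
    static final int CHUNK_SIZE = 4;

    // Buggy variant: seeking to the section start inside the loop means every
    // chunk is read from the same position, re-sending the first bytes.
    static byte[] streamBuggy(byte[] file, int left, int length) {
        byte[] out = new byte[length];
        int bytesTransferred = 0;
        while (bytesTransferred < length) {
            int pos = left; // equivalent of file.seek(section.left) -- the bug
            int toTransfer = Math.min(CHUNK_SIZE, length - bytesTransferred);
            System.arraycopy(file, pos, out, bytesTransferred, toTransfer);
            bytesTransferred += toTransfer;
        }
        return out;
    }

    // Fixed variant: the seek accounts for bytes already sent, the equivalent
    // of file.seek(section.left + bytesTransferred).
    static byte[] streamFixed(byte[] file, int left, int length) {
        byte[] out = new byte[length];
        int bytesTransferred = 0;
        while (bytesTransferred < length) {
            int pos = left + bytesTransferred; // advance past bytes already sent
            int toTransfer = Math.min(CHUNK_SIZE, length - bytesTransferred);
            System.arraycopy(file, pos, out, bytesTransferred, toTransfer);
            bytesTransferred += toTransfer;
        }
        return out;
    }
}
```

With an 8-byte section and a 4-byte chunk, the buggy variant emits the first 4 bytes twice, which would produce exactly the kind of corrupted stream the receiver then fails to decompress.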

 SSL problems with inter-DC communication
 

 Key: CASSANDRA-5391
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5391
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.3
 Environment: $ /etc/alternatives/jre_1.6.0/bin/java -version
 java version "1.6.0_23"
 Java(TM) SE Runtime Environment (build 1.6.0_23-b05)
 Java HotSpot(TM) 64-Bit Server VM (build 19.0-b09, mixed mode)
 $ uname -a
 Linux hostname 2.6.32-358.2.1.el6.x86_64 #1 SMP Tue Mar 12 14:18:09 CDT 2013 
 x86_64 x86_64 x86_64 GNU/Linux
 $ cat /etc/redhat-release 
 Scientific Linux release 6.3 (Carbon)
 $ facter | grep ec2
 ...
 ec2_placement = availability_zone=us-east-1d
 ...
 $ rpm -qi cassandra
 cassandra-1.2.3-1.el6.cmp1.noarch
 (custom built rpm from cassandra tarball distribution)
Reporter: Ondřej Černoš
Assignee: Yuki Morishita
Priority: Blocker
 Fix For: 1.2.4

 Attachments: 5391-1.2.txt


 I get SSL and snappy compression errors in a multiple-datacenter setup.
 The setup is simple: 3 nodes in AWS east, 3 nodes in Rackspace. I use a 
 slightly modified Ec2MultiRegionSnitch in Rackspace (I just added a regex 
 able to parse the Rackspace/OpenStack availability zone, which happens to be 
 in an unusual format).
 During {{nodetool rebuild}} tests I managed to (consistently) trigger the 
 following error:
 {noformat}
 2013-03-19 12:42:16.059+0100 [Thread-13] [DEBUG] 
 IncomingTcpConnection.java(79) 
 org.apache.cassandra.net.IncomingTcpConnection: IOException reading from 
 socket; closing
 java.io.IOException: FAILED_TO_UNCOMPRESS(5)
   at org.xerial.snappy.SnappyNative.throw_error(SnappyNative.java:78)
   at org.xerial.snappy.SnappyNative.rawUncompress(Native Method)
   at org.xerial.snappy.Snappy.rawUncompress(Snappy.java:391)
   at 
 org.apache.cassandra.io.compress.SnappyCompressor.uncompress(SnappyCompressor.java:93)
   at 
 org.apache.cassandra.streaming.compress.CompressedInputStream.decompress(CompressedInputStream.java:101)
   at 
 org.apache.cassandra.streaming.compress.CompressedInputStream.read(CompressedInputStream.java:79)
   at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:337)
   at 
 org.apache.cassandra.utils.BytesReadTracker.readUnsignedShort(BytesReadTracker.java:140)
   at 
 org.apache.cassandra.utils.ByteBufferUtil.readShortLength(ByteBufferUtil.java:361)
   at 
 org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:371)
   at 
 org.apache.cassandra.streaming.IncomingStreamReader.streamIn(IncomingStreamReader.java:160)
   at 
 org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:122)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:226)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.handleStream(IncomingTcpConnection.java:166)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:66)
 {noformat}
 The exception is raised during DB file download. What is strange is the 
 following:
 * the exception is raised only when rebuilding from AWS into Rackspace
 * the exception is raised only when all nodes are up and running in AWS (all 
 3). In other words, if I bootstrap from one or two nodes in AWS, the command 
 succeeds.
 Packet-level inspection revealed malformed packets _on both ends of 
 communication_ (the packet is considered malformed on the machine it 
 originates on).
 Further investigation raised two more concerns:
 * We managed to get another stacktrace when testing the scenario. The 
 exception was raised only once during the tests and was raised when I 
 throttled the inter-datacenter bandwidth to 1Mbps.
 {noformat}
 java.lang.RuntimeException: javax.net.ssl.SSLException: bad record MAC
   at 

[jira] [Created] (CASSANDRA-5421) java.lang.ArrayIndexOutOfBoundsException when cassandra started on hibernated virtual instance

2013-04-03 Thread JIRA
Ondřej Černoš created CASSANDRA-5421:


 Summary: java.lang.ArrayIndexOutOfBoundsException when cassandra 
started on hibernated virtual instance
 Key: CASSANDRA-5421
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5421
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.1
Reporter: Ondřej Černoš


We have a cql3 table records:

{noformat}
CREATE TABLE records (
  all varchar,
  record_id varchar,
  uid varchar,
  validity bigint,
  some_property int,
  PRIMARY KEY (all, record_id)
) WITH comment = 'Records';
{noformat}

with an index:

{noformat}
CREATE INDEX tokens_uid_idx ON tokens (uid);
{noformat}

We stored a couple of values in the table before the weekend with TTL set to 
see if the records expire. The instance we tested the behaviour on was put to 
sleep during the weekend.

We started the instance yesterday at 11:31:41,809 and at 13:57:28,195 we tried 
the following:

{noformat}
select * from tokens;
{noformat}

just to check that the records were deleted on TTL.

This is what we got:

{noformat}
TSocket read 0 bytes
{noformat}

We found the following exception in the log:

{noformat}
ERROR 13:57:28,195 Error occurred during processing of message.
java.lang.ArrayIndexOutOfBoundsException: 1
at 
org.apache.cassandra.cql3.statements.ColumnGroupMap.add(ColumnGroupMap.java:43)
at 
org.apache.cassandra.cql3.statements.ColumnGroupMap.access$200(ColumnGroupMap.java:31)
at 
org.apache.cassandra.cql3.statements.ColumnGroupMap$Builder.add(ColumnGroupMap.java:128)
at 
org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:804)
at 
org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:146)
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:135)
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:62)
at 
org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:132)
at 
org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:140)
at 
org.apache.cassandra.thrift.CassandraServer.execute_cql3_query(CassandraServer.java:1739)
at 
org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4074)
at 
org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4062)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
at 
org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:199)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
{noformat}

Second try raised the following error in {{cqlsh}}:

{noformat}
Traceback (most recent call last):
  File "bin/cqlsh", line 1001, in perform_statement_untraced
    self.cursor.execute(statement, decoder=decoder)
  File "bin/../lib/cql-internal-only-1.4.0.zip/cql-1.4.0/cql/cursor.py", line 80, in execute
    response = self.get_response(prepared_q, cl)
  File "bin/../lib/cql-internal-only-1.4.0.zip/cql-1.4.0/cql/thrifteries.py", line 77, in get_response
    return self.handle_cql_execution_errors(doquery, compressed_q, compress, cl)
  File "bin/../lib/cql-internal-only-1.4.0.zip/cql-1.4.0/cql/thrifteries.py", line 96, in handle_cql_execution_errors
    return executor(*args, **kwargs)
  File "bin/../lib/cql-internal-only-1.4.0.zip/cql-1.4.0/cql/cassandra/Cassandra.py", line 1782, in execute_cql3_query
    self.send_execute_cql3_query(query, compression, consistency)
  File "bin/../lib/cql-internal-only-1.4.0.zip/cql-1.4.0/cql/cassandra/Cassandra.py", line 1793, in send_execute_cql3_query
    self._oprot.trans.flush()
  File "bin/../lib/thrift-python-internal-only-0.7.0.zip/thrift/transport/TTransport.py", line 293, in flush
    self.__trans.write(buf)
  File "bin/../lib/thrift-python-internal-only-0.7.0.zip/thrift/transport/TSocket.py", line 117, in write
    plus = self.handle.send(buff)
error: [Errno 32] Broken pipe
{noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5421) java.lang.ArrayIndexOutOfBoundsException when cassandra started on hibernated virtual instance

2013-04-03 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ondřej Černoš updated CASSANDRA-5421:
-

Description: 
We have a cql3 table records:

{noformat}
CREATE TABLE records (
  all varchar,
  record_id varchar,
  uid varchar,
  validity bigint,
  some_property int,
  PRIMARY KEY (all, record_id)
) WITH comment = 'Records';
{noformat}

with an index:

{noformat}
CREATE INDEX records_uid_idx ON records (uid);
{noformat}

We stored a couple of values in the table before the weekend with TTL set to 
see if the records expire. The instance we tested the behaviour on was put to 
sleep during the weekend.

We started the instance yesterday at 11:31:41,809 and at 13:57:28,195 we tried 
the following:

{noformat}
select * from records;
{noformat}

just to check that the records were deleted on TTL.

This is what we got:

{noformat}
TSocket read 0 bytes
{noformat}

We found the following exception in the log:

{noformat}
ERROR 13:57:28,195 Error occurred during processing of message.
java.lang.ArrayIndexOutOfBoundsException: 1
at 
org.apache.cassandra.cql3.statements.ColumnGroupMap.add(ColumnGroupMap.java:43)
at 
org.apache.cassandra.cql3.statements.ColumnGroupMap.access$200(ColumnGroupMap.java:31)
at 
org.apache.cassandra.cql3.statements.ColumnGroupMap$Builder.add(ColumnGroupMap.java:128)
at 
org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:804)
at 
org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:146)
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:135)
at 
org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:62)
at 
org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:132)
at 
org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:140)
at 
org.apache.cassandra.thrift.CassandraServer.execute_cql3_query(CassandraServer.java:1739)
at 
org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4074)
at 
org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4062)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
at 
org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:199)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
{noformat}

Second try raised the following error in {{cqlsh}}:

{noformat}
Traceback (most recent call last):
  File "bin/cqlsh", line 1001, in perform_statement_untraced
    self.cursor.execute(statement, decoder=decoder)
  File "bin/../lib/cql-internal-only-1.4.0.zip/cql-1.4.0/cql/cursor.py", line 80, in execute
    response = self.get_response(prepared_q, cl)
  File "bin/../lib/cql-internal-only-1.4.0.zip/cql-1.4.0/cql/thrifteries.py", line 77, in get_response
    return self.handle_cql_execution_errors(doquery, compressed_q, compress, cl)
  File "bin/../lib/cql-internal-only-1.4.0.zip/cql-1.4.0/cql/thrifteries.py", line 96, in handle_cql_execution_errors
    return executor(*args, **kwargs)
  File "bin/../lib/cql-internal-only-1.4.0.zip/cql-1.4.0/cql/cassandra/Cassandra.py", line 1782, in execute_cql3_query
    self.send_execute_cql3_query(query, compression, consistency)
  File "bin/../lib/cql-internal-only-1.4.0.zip/cql-1.4.0/cql/cassandra/Cassandra.py", line 1793, in send_execute_cql3_query
    self._oprot.trans.flush()
  File "bin/../lib/thrift-python-internal-only-0.7.0.zip/thrift/transport/TTransport.py", line 293, in flush
    self.__trans.write(buf)
  File "bin/../lib/thrift-python-internal-only-0.7.0.zip/thrift/transport/TSocket.py", line 117, in write
    plus = self.handle.send(buff)
error: [Errno 32] Broken pipe
{noformat}

  was:
We have a cql3 table records:

{noformat}
CREATE TABLE records (
  all varchar,
  record_id varchar,
  uid varchar,
  validity bigint,
  some_property int,
  PRIMARY KEY (all, record_id)
) WITH comment = 'Records';
{noformat}

with an index:

{noformat}
CREATE INDEX tokens_uid_idx ON tokens (uid);
{noformat}

We stored a couple of values in the table before the weekend with TTL set to 
see if the records expire. The instance we tested the behaviour on was put to 
sleep during the weekend.

We started the instance yesterday at 11:31:41,809 and at 13:57:28,195 we tried 
the following:

{noformat}
select * from tokens;
{noformat}

just to check that the records were deleted on TTL.

This is what we got:

{noformat}
TSocket read 0 bytes
{noformat}

We found the 

[jira] [Reopened] (CASSANDRA-5391) SSL problems with inter-DC communication

2013-04-03 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson reopened CASSANDRA-5391:



That sounds and looks reasonable; reopening to let @yukim have a look.

 SSL problems with inter-DC communication
 

 Key: CASSANDRA-5391
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5391
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.3
 Environment: $ /etc/alternatives/jre_1.6.0/bin/java -version
 java version "1.6.0_23"
 Java(TM) SE Runtime Environment (build 1.6.0_23-b05)
 Java HotSpot(TM) 64-Bit Server VM (build 19.0-b09, mixed mode)
 $ uname -a
 Linux hostname 2.6.32-358.2.1.el6.x86_64 #1 SMP Tue Mar 12 14:18:09 CDT 2013 
 x86_64 x86_64 x86_64 GNU/Linux
 $ cat /etc/redhat-release 
 Scientific Linux release 6.3 (Carbon)
 $ facter | grep ec2
 ...
 ec2_placement = availability_zone=us-east-1d
 ...
 $ rpm -qi cassandra
 cassandra-1.2.3-1.el6.cmp1.noarch
 (custom built rpm from cassandra tarball distribution)
Reporter: Ondřej Černoš
Assignee: Yuki Morishita
Priority: Blocker
 Fix For: 1.2.4

 Attachments: 5391-1.2.txt


 I get SSL and snappy compression errors in a multiple-datacenter setup.
 The setup is simple: 3 nodes in AWS east, 3 nodes in Rackspace. I use a 
 slightly modified Ec2MultiRegionSnitch in Rackspace (I just added a regex 
 able to parse the Rackspace/OpenStack availability zone, which happens to be 
 in an unusual format).
 During {{nodetool rebuild}} tests I managed to (consistently) trigger the 
 following error:
 {noformat}
 2013-03-19 12:42:16.059+0100 [Thread-13] [DEBUG] 
 IncomingTcpConnection.java(79) 
 org.apache.cassandra.net.IncomingTcpConnection: IOException reading from 
 socket; closing
 java.io.IOException: FAILED_TO_UNCOMPRESS(5)
   at org.xerial.snappy.SnappyNative.throw_error(SnappyNative.java:78)
   at org.xerial.snappy.SnappyNative.rawUncompress(Native Method)
   at org.xerial.snappy.Snappy.rawUncompress(Snappy.java:391)
   at 
 org.apache.cassandra.io.compress.SnappyCompressor.uncompress(SnappyCompressor.java:93)
   at 
 org.apache.cassandra.streaming.compress.CompressedInputStream.decompress(CompressedInputStream.java:101)
   at 
 org.apache.cassandra.streaming.compress.CompressedInputStream.read(CompressedInputStream.java:79)
   at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:337)
   at 
 org.apache.cassandra.utils.BytesReadTracker.readUnsignedShort(BytesReadTracker.java:140)
   at 
 org.apache.cassandra.utils.ByteBufferUtil.readShortLength(ByteBufferUtil.java:361)
   at 
 org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:371)
   at 
 org.apache.cassandra.streaming.IncomingStreamReader.streamIn(IncomingStreamReader.java:160)
   at 
 org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:122)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:226)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.handleStream(IncomingTcpConnection.java:166)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:66)
 {noformat}
 The exception is raised during DB file download. What is strange is the 
 following:
 * the exception is raised only when rebuilding from AWS into Rackspace
 * the exception is raised only when all nodes are up and running in AWS (all 
 3). In other words, if I bootstrap from one or two nodes in AWS, the command 
 succeeds.
 Packet-level inspection revealed malformed packets _on both ends of 
 communication_ (the packet is considered malformed on the machine it 
 originates on).
 Further investigation raised two more concerns:
 * We managed to get another stacktrace when testing the scenario. The 
 exception was raised only once during the tests and was raised when I 
 throttled the inter-datacenter bandwidth to 1Mbps.
 {noformat}
 java.lang.RuntimeException: javax.net.ssl.SSLException: bad record MAC
   at com.google.common.base.Throwables.propagate(Throwables.java:160)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: javax.net.ssl.SSLException: bad record MAC
   at com.sun.net.ssl.internal.ssl.Alerts.getSSLException(Alerts.java:190)
   at 
 com.sun.net.ssl.internal.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1649)
   at 
 com.sun.net.ssl.internal.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1607)
   at 
 com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:859)
   at 
 com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:755)
   at 
 

[jira] [Commented] (CASSANDRA-5391) SSL problems with inter-DC communication

2013-04-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13620791#comment-13620791
 ] 

Ondřej Černoš commented on CASSANDRA-5391:
--

I tested the patch and verified it doesn't fix the issue.

 SSL problems with inter-DC communication
 

 Key: CASSANDRA-5391
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5391
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.3
 Environment: $ /etc/alternatives/jre_1.6.0/bin/java -version
 java version "1.6.0_23"
 Java(TM) SE Runtime Environment (build 1.6.0_23-b05)
 Java HotSpot(TM) 64-Bit Server VM (build 19.0-b09, mixed mode)
 $ uname -a
 Linux hostname 2.6.32-358.2.1.el6.x86_64 #1 SMP Tue Mar 12 14:18:09 CDT 2013 
 x86_64 x86_64 x86_64 GNU/Linux
 $ cat /etc/redhat-release 
 Scientific Linux release 6.3 (Carbon)
 $ facter | grep ec2
 ...
 ec2_placement = availability_zone=us-east-1d
 ...
 $ rpm -qi cassandra
 cassandra-1.2.3-1.el6.cmp1.noarch
 (custom built rpm from cassandra tarball distribution)
Reporter: Ondřej Černoš
Assignee: Yuki Morishita
Priority: Blocker
 Fix For: 1.2.4

 Attachments: 5391-1.2.txt


 I get SSL and snappy compression errors in a multiple-datacenter setup.
 The setup is simple: 3 nodes in AWS east, 3 nodes in Rackspace. I use a 
 slightly modified Ec2MultiRegionSnitch in Rackspace (I just added a regex 
 able to parse the Rackspace/OpenStack availability zone, which happens to be 
 in an unusual format).
 During {{nodetool rebuild}} tests I managed to (consistently) trigger the 
 following error:
 {noformat}
 2013-03-19 12:42:16.059+0100 [Thread-13] [DEBUG] 
 IncomingTcpConnection.java(79) 
 org.apache.cassandra.net.IncomingTcpConnection: IOException reading from 
 socket; closing
 java.io.IOException: FAILED_TO_UNCOMPRESS(5)
   at org.xerial.snappy.SnappyNative.throw_error(SnappyNative.java:78)
   at org.xerial.snappy.SnappyNative.rawUncompress(Native Method)
   at org.xerial.snappy.Snappy.rawUncompress(Snappy.java:391)
   at 
 org.apache.cassandra.io.compress.SnappyCompressor.uncompress(SnappyCompressor.java:93)
   at 
 org.apache.cassandra.streaming.compress.CompressedInputStream.decompress(CompressedInputStream.java:101)
   at 
 org.apache.cassandra.streaming.compress.CompressedInputStream.read(CompressedInputStream.java:79)
   at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:337)
   at 
 org.apache.cassandra.utils.BytesReadTracker.readUnsignedShort(BytesReadTracker.java:140)
   at 
 org.apache.cassandra.utils.ByteBufferUtil.readShortLength(ByteBufferUtil.java:361)
   at 
 org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:371)
   at 
 org.apache.cassandra.streaming.IncomingStreamReader.streamIn(IncomingStreamReader.java:160)
   at 
 org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:122)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:226)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.handleStream(IncomingTcpConnection.java:166)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:66)
 {noformat}
 The exception is raised during DB file download. What is strange is the 
 following:
 * the exception is raised only when rebuilding from AWS into Rackspace
 * the exception is raised only when all nodes are up and running in AWS (all 
 3). In other words, if I bootstrap from one or two nodes in AWS, the command 
 succeeds.
 Packet-level inspection revealed malformed packets _on both ends of 
 communication_ (the packet is considered malformed on the machine it 
 originates on).
 Further investigation raised two more concerns:
 * We managed to get another stacktrace when testing the scenario. The 
 exception was raised only once during the tests and was raised when I 
 throttled the inter-datacenter bandwidth to 1Mbps.
 {noformat}
 java.lang.RuntimeException: javax.net.ssl.SSLException: bad record MAC
   at com.google.common.base.Throwables.propagate(Throwables.java:160)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: javax.net.ssl.SSLException: bad record MAC
   at com.sun.net.ssl.internal.ssl.Alerts.getSSLException(Alerts.java:190)
   at 
 com.sun.net.ssl.internal.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1649)
   at 
 com.sun.net.ssl.internal.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1607)
   at 
 com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:859)
   at 
 com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:755)
   at 
 

[Cassandra Wiki] Trivial Update of JasmineBa by JasmineBa

2013-04-03 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on Cassandra Wiki for 
change notification.

The JasmineBa page has been changed by JasmineBa:
http://wiki.apache.org/cassandra/JasmineBa

New page:
My name is Jasmine Barbee. I live in Cosio Di Arroscia (Italia).

Here is my blog post: [[http://www.shumzagracan.com/EstherK88|More Information]]


[Cassandra Wiki] Trivial Update of RosemaryV by RosemaryV

2013-04-03 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on Cassandra Wiki for 
change notification.

The RosemaryV page has been changed by RosemaryV:
http://wiki.apache.org/cassandra/RosemaryV

New page:
Not much to say about myself at all.
Feels good to be a part of this community.
I really wish I'm useful at all.

Feel free to surf to my web blog :: [[http://www.phi9.com/|web hosting]]


[jira] [Commented] (CASSANDRA-5391) SSL problems with inter-DC communication

2013-04-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13620853#comment-13620853
 ] 

Ondřej Černoš commented on CASSANDRA-5391:
--

I tried Jan's proposal to move the seek out of the while loop and let the 
pointer be moved by the readFully method call, but with no luck. I'll let 
someone with more Cassandra internals knowledge dive into this.

 SSL problems with inter-DC communication
 

 Key: CASSANDRA-5391
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5391
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.3
 Environment: $ /etc/alternatives/jre_1.6.0/bin/java -version
 java version "1.6.0_23"
 Java(TM) SE Runtime Environment (build 1.6.0_23-b05)
 Java HotSpot(TM) 64-Bit Server VM (build 19.0-b09, mixed mode)
 $ uname -a
 Linux hostname 2.6.32-358.2.1.el6.x86_64 #1 SMP Tue Mar 12 14:18:09 CDT 2013 
 x86_64 x86_64 x86_64 GNU/Linux
 $ cat /etc/redhat-release 
 Scientific Linux release 6.3 (Carbon)
 $ facter | grep ec2
 ...
 ec2_placement = availability_zone=us-east-1d
 ...
 $ rpm -qi cassandra
 cassandra-1.2.3-1.el6.cmp1.noarch
 (custom built rpm from cassandra tarball distribution)
Reporter: Ondřej Černoš
Assignee: Yuki Morishita
Priority: Blocker
 Fix For: 1.2.4

 Attachments: 5391-1.2.txt


 I get SSL and snappy compression errors in a multiple-datacenter setup.
 The setup is simple: 3 nodes in AWS east, 3 nodes in Rackspace. I use a 
 slightly modified Ec2MultiRegionSnitch in Rackspace (I just added a regex 
 able to parse the Rackspace/OpenStack availability zone, which happens to be 
 in an unusual format).
 During {{nodetool rebuild}} tests I managed to (consistently) trigger the 
 following error:
 {noformat}
 2013-03-19 12:42:16.059+0100 [Thread-13] [DEBUG] 
 IncomingTcpConnection.java(79) 
 org.apache.cassandra.net.IncomingTcpConnection: IOException reading from 
 socket; closing
 java.io.IOException: FAILED_TO_UNCOMPRESS(5)
   at org.xerial.snappy.SnappyNative.throw_error(SnappyNative.java:78)
   at org.xerial.snappy.SnappyNative.rawUncompress(Native Method)
   at org.xerial.snappy.Snappy.rawUncompress(Snappy.java:391)
   at 
 org.apache.cassandra.io.compress.SnappyCompressor.uncompress(SnappyCompressor.java:93)
   at 
 org.apache.cassandra.streaming.compress.CompressedInputStream.decompress(CompressedInputStream.java:101)
   at 
 org.apache.cassandra.streaming.compress.CompressedInputStream.read(CompressedInputStream.java:79)
   at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:337)
   at 
 org.apache.cassandra.utils.BytesReadTracker.readUnsignedShort(BytesReadTracker.java:140)
   at 
 org.apache.cassandra.utils.ByteBufferUtil.readShortLength(ByteBufferUtil.java:361)
   at 
 org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:371)
   at 
 org.apache.cassandra.streaming.IncomingStreamReader.streamIn(IncomingStreamReader.java:160)
   at 
 org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:122)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:226)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.handleStream(IncomingTcpConnection.java:166)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:66)
 {noformat}
 The exception is raised during DB file download. What is strange is the 
 following:
 * the exception is raised only when rebuilding from AWS into Rackspace
 * the exception is raised only when all nodes are up and running in AWS (all 
 3). In other words, if I bootstrap from one or two nodes in AWS, the command 
 succeeds.
 Packet-level inspection revealed malformed packets _on both ends of 
 communication_ (the packet is considered malformed on the machine it 
 originates on).
 Further investigation raised two more concerns:
 * We managed to get another stacktrace when testing the scenario. The 
 exception was raised only once during the tests and was raised when I 
 throttled the inter-datacenter bandwidth to 1Mbps.
 {noformat}
 java.lang.RuntimeException: javax.net.ssl.SSLException: bad record MAC
   at com.google.common.base.Throwables.propagate(Throwables.java:160)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: javax.net.ssl.SSLException: bad record MAC
   at com.sun.net.ssl.internal.ssl.Alerts.getSSLException(Alerts.java:190)
   at 
 com.sun.net.ssl.internal.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1649)
   at 
 com.sun.net.ssl.internal.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1607)
   at 
 



[jira] [Updated] (CASSANDRA-5391) SSL problems with inter-DC communication

2013-04-03 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-5391:
--

Attachment: 5391-v2-1.2.txt
5391-1.2.3.txt

Jan, Ondřej,

Thanks for reporting.
What I wanted to do was position the file pointer at the beginning of the 
section on each loop iteration, as the uncompressed version does.
I attached two patches (-1.2.3 applies to the 1.2.3 release, -1.2 to the 
current 1.2 branch). Can you try these?
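The repositioning described above can be sketched as follows. This is a minimal illustration, not Cassandra's CompressedInputStream: `SectionReader` and `Section` are hypothetical names, assuming the on-disk layout is a list of (offset, length) sections.

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

// Seek to each section's start inside the loop, then let readFully advance
// the pointer within that section only. Illustrative, not Cassandra code.
public class SectionReader {
    public static final class Section {
        final long offset;
        final int length;
        public Section(long offset, int length) { this.offset = offset; this.length = length; }
    }

    public static byte[][] readSections(RandomAccessFile file, Section[] sections) throws IOException {
        byte[][] out = new byte[sections.length][];
        for (int i = 0; i < sections.length; i++) {
            file.seek(sections[i].offset);   // reposition per iteration, not once before the loop
            out[i] = new byte[sections[i].length];
            file.readFully(out[i]);          // advances the pointer within this section
        }
        return out;
    }

    // Self-check: write six bytes, read back two non-adjacent sections.
    public static boolean demo() {
        try {
            File f = File.createTempFile("sections", ".bin");
            f.deleteOnExit();
            RandomAccessFile raf = new RandomAccessFile(f, "rw");
            raf.write(new byte[] { 1, 2, 3, 4, 5, 6 });
            byte[][] r = readSections(raf, new Section[] { new Section(0, 2), new Section(4, 2) });
            raf.close();
            return r[0][0] == 1 && r[0][1] == 2 && r[1][0] == 5 && r[1][1] == 6;
        } catch (IOException e) {
            return false;
        }
    }
}
```

The per-iteration seek is what makes the reader robust when a previous section's read stops short of the next section's start.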

 SSL problems with inter-DC communication
 

 Key: CASSANDRA-5391
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5391
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.3
 Environment: $ /etc/alternatives/jre_1.6.0/bin/java -version
 java version "1.6.0_23"
 Java(TM) SE Runtime Environment (build 1.6.0_23-b05)
 Java HotSpot(TM) 64-Bit Server VM (build 19.0-b09, mixed mode)
 $ uname -a
 Linux hostname 2.6.32-358.2.1.el6.x86_64 #1 SMP Tue Mar 12 14:18:09 CDT 2013 
 x86_64 x86_64 x86_64 GNU/Linux
 $ cat /etc/redhat-release 
 Scientific Linux release 6.3 (Carbon)
 $ facter | grep ec2
 ...
 ec2_placement = availability_zone=us-east-1d
 ...
 $ rpm -qi cassandra
 cassandra-1.2.3-1.el6.cmp1.noarch
 (custom built rpm from cassandra tarball distribution)
Reporter: Ondřej Černoš
Assignee: Yuki Morishita
Priority: Blocker
 Fix For: 1.2.4

 Attachments: 5391-1.2.3.txt, 5391-1.2.txt, 5391-v2-1.2.txt


 I get SSL and snappy compression errors in a multiple-datacenter setup.
 The setup is simple: 3 nodes in AWS east, 3 nodes in Rackspace. I use a 
 slightly modified Ec2MultiRegionSnitch in Rackspace (I just added a regex 
 able to parse the Rackspace/Openstack availability zone, which happens to be 
 in an unusual format).
 During {{nodetool rebuild}} tests I managed to (consistently) trigger the 
 following error:
 {noformat}
 2013-03-19 12:42:16.059+0100 [Thread-13] [DEBUG] 
 IncomingTcpConnection.java(79) 
 org.apache.cassandra.net.IncomingTcpConnection: IOException reading from 
 socket; closing
 java.io.IOException: FAILED_TO_UNCOMPRESS(5)
   at org.xerial.snappy.SnappyNative.throw_error(SnappyNative.java:78)
   at org.xerial.snappy.SnappyNative.rawUncompress(Native Method)
   at org.xerial.snappy.Snappy.rawUncompress(Snappy.java:391)
   at 
 org.apache.cassandra.io.compress.SnappyCompressor.uncompress(SnappyCompressor.java:93)
   at 
 org.apache.cassandra.streaming.compress.CompressedInputStream.decompress(CompressedInputStream.java:101)
   at 
 org.apache.cassandra.streaming.compress.CompressedInputStream.read(CompressedInputStream.java:79)
   at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:337)
   at 
 org.apache.cassandra.utils.BytesReadTracker.readUnsignedShort(BytesReadTracker.java:140)
   at 
 org.apache.cassandra.utils.ByteBufferUtil.readShortLength(ByteBufferUtil.java:361)
   at 
 org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:371)
   at 
 org.apache.cassandra.streaming.IncomingStreamReader.streamIn(IncomingStreamReader.java:160)
   at 
 org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:122)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:226)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.handleStream(IncomingTcpConnection.java:166)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:66)
 {noformat}
 The exception is raised during DB file download. What is strange is the 
 following:
 * the exception is raised only when rebuilding from AWS into Rackspace
 * the exception is raised only when all nodes are up and running in AWS (all 
 3). In other words, if I bootstrap from one or two nodes in AWS, the command 
 succeeds.
 Packet-level inspection revealed malformed packets _on both ends of 
 communication_ (the packet is considered malformed on the machine it 
 originates on).
 Further investigation raised two more concerns:
 * We managed to get another stacktrace when testing the scenario. The 
 exception was raised only once during the tests and was raised when I 
 throttled the inter-datacenter bandwidth to 1Mbps.
 {noformat}
 java.lang.RuntimeException: javax.net.ssl.SSLException: bad record MAC
   at com.google.common.base.Throwables.propagate(Throwables.java:160)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: javax.net.ssl.SSLException: bad record MAC
   at com.sun.net.ssl.internal.ssl.Alerts.getSSLException(Alerts.java:190)
   at 
 com.sun.net.ssl.internal.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1649)
   at 
 

git commit: IndexHelper.skipBloomFilters won't skip non-SHA filters, part 2 patch by Carl Yeksigian; reviewed by jasobrown for CASSANDRA-5385

2013-04-03 Thread jasobrown
Updated Branches:
  refs/heads/trunk abbb8601e - 4c28cfb57


IndexHelper.skipBloomFilters won't skip non-SHA filters, part 2
patch by Carl Yeksigian; reviewed by jasobrown for CASSANDRA-5385


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4c28cfb5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4c28cfb5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4c28cfb5

Branch: refs/heads/trunk
Commit: 4c28cfb57bfcd8f52991c3ee44eee0044fb3ba65
Parents: abbb860
Author: Jason Brown jasedbr...@gmail.com
Authored: Wed Apr 3 06:29:05 2013 -0700
Committer: Jason Brown jasedbr...@gmail.com
Committed: Wed Apr 3 06:29:05 2013 -0700

--
 .../db/columniterator/IndexedSliceReader.java  |2 +-
 .../db/columniterator/SSTableNamesIterator.java|5 +-
 .../db/columniterator/SimpleSliceReader.java   |2 +-
 .../apache/cassandra/io/sstable/Descriptor.java|2 +
 .../apache/cassandra/io/sstable/IndexHelper.java   |   13 +
 .../io/sstable/SSTableIdentityIterator.java|4 +-
 test/unit/org/apache/cassandra/db/ScrubTest.java   |  245 +++
 7 files changed, 268 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4c28cfb5/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java
--
diff --git 
a/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java 
b/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java
index e02316c..f63c577 100644
--- a/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java
+++ b/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java
@@ -93,7 +93,7 @@ class IndexedSliceReader extends AbstractIterator<OnDiskAtom> 
implements OnDiskA
 else
 {
 setToRowStart(sstable, indexEntry, input);
-IndexHelper.skipBloomFilter(file, version.filterType);
+IndexHelper.skipSSTableBloomFilter(file, version);
 this.indexes = IndexHelper.deserializeIndex(file);
 this.emptyColumnFamily = 
EmptyColumns.factory.create(sstable.metadata);
 
emptyColumnFamily.delete(DeletionInfo.serializer().deserializeFromSSTable(file, 
version));

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4c28cfb5/src/java/org/apache/cassandra/db/columniterator/SSTableNamesIterator.java
--
diff --git 
a/src/java/org/apache/cassandra/db/columniterator/SSTableNamesIterator.java 
b/src/java/org/apache/cassandra/db/columniterator/SSTableNamesIterator.java
index fe9d84f..415a1b8 100644
--- a/src/java/org/apache/cassandra/db/columniterator/SSTableNamesIterator.java
+++ b/src/java/org/apache/cassandra/db/columniterator/SSTableNamesIterator.java
@@ -124,7 +124,10 @@ public class SSTableNamesIterator extends 
SimpleAbstractColumnIterator implement
 else
 {
 assert file != null;
-IndexHelper.skipBloomFilter(file, 
sstable.descriptor.version.filterType );
+if (sstable.descriptor.version.hasRowLevelBF)
+{
+IndexHelper.skipSSTableBloomFilter(file, 
sstable.descriptor.version);
+}
 indexList = IndexHelper.deserializeIndex(file);
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4c28cfb5/src/java/org/apache/cassandra/db/columniterator/SimpleSliceReader.java
--
diff --git 
a/src/java/org/apache/cassandra/db/columniterator/SimpleSliceReader.java 
b/src/java/org/apache/cassandra/db/columniterator/SimpleSliceReader.java
index ac556b3..58d8774 100644
--- a/src/java/org/apache/cassandra/db/columniterator/SimpleSliceReader.java
+++ b/src/java/org/apache/cassandra/db/columniterator/SimpleSliceReader.java
@@ -69,7 +69,7 @@ class SimpleSliceReader extends AbstractIterator<OnDiskAtom> 
implements OnDiskAt
 if (!version.hasPromotedIndexes)
 {
 if(sstable.descriptor.version.hasRowLevelBF)
-IndexHelper.skipBloomFilter(file, version.filterType);
+IndexHelper.skipSSTableBloomFilter(file, version);
 IndexHelper.skipIndex(file);
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4c28cfb5/src/java/org/apache/cassandra/io/sstable/Descriptor.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/Descriptor.java 
b/src/java/org/apache/cassandra/io/sstable/Descriptor.java
index fabfbb8..c8c87c2 100644
--- a/src/java/org/apache/cassandra/io/sstable/Descriptor.java
+++ 



[jira] [Updated] (CASSANDRA-5074) Add an official way to disable compaction

2013-04-03 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5074:
--

 Reviewer: slebresne
Fix Version/s: (was: 1.2.4)
   2.0
 Assignee: Marcus Eriksson

 Add an official way to disable compaction
 -

 Key: CASSANDRA-5074
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5074
 Project: Cassandra
  Issue Type: Bug
Reporter: Jonathan Ellis
Assignee: Marcus Eriksson
Priority: Minor
 Fix For: 2.0


 We've traditionally used min or max compaction threshold = 0 to disable 
 compaction, but this isn't exactly intuitive and it's inconsistently 
 implemented -- allowed from jmx, not allowed from cli.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (CASSANDRA-5273) Hanging system after OutOfMemory. Server cannot die due to uncaughtException handling

2013-04-03 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reassigned CASSANDRA-5273:
-

Assignee: Marcus Eriksson

 Hanging system after OutOfMemory. Server cannot die due to uncaughtException 
 handling
 -

 Key: CASSANDRA-5273
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5273
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.1
 Environment: linux, 64 bit
Reporter: Ignace Desimpel
Assignee: Marcus Eriksson
Priority: Minor
 Fix For: 1.2.4

 Attachments: CassHangs.txt


 On an out-of-memory exception, an uncaughtException handler calls 
 System.exit(). However, multiple threads call this handler, causing a 
 deadlock, and the server cannot stop. See 
 http://www.mail-archive.com/user@cassandra.apache.org/msg27898.html, and see 
 the stack trace in the attachment.
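One common pattern for avoiding this class of hang, sketched under the assumption that the deadlock comes from several threads entering System.exit() concurrently: let only the first caller run the orderly shutdown, and have later callers halt the VM directly instead of blocking behind the shutdown hooks. `ExitOnce` is an illustrative helper, not Cassandra's actual handler.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Only one thread may initiate JVM shutdown; any later caller halts
// immediately rather than re-entering (and blocking inside) System.exit.
public class ExitOnce {
    private static final AtomicBoolean exiting = new AtomicBoolean(false);

    // True exactly once, for the first thread that claims the exit.
    static boolean firstCaller() {
        return exiting.compareAndSet(false, true);
    }

    public static void exit(int status) {
        if (firstCaller())
            System.exit(status);               // runs shutdown hooks once
        else
            Runtime.getRuntime().halt(status); // skips hooks; never deadlocks
    }
}
```

Runtime.halt() terminates without running shutdown hooks, which is exactly the escape hatch needed when a second thread would otherwise wait forever on the hooks the first thread is already running.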

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5385) IndexHelper.skipBloomFilters won't skip non-SHA filters

2013-04-03 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13620916#comment-13620916
 ] 

Jason Brown commented on CASSANDRA-5385:


Committed to trunk, and ScrubTest and LegacySSTableTest now pass. [~carlyeks], 
does this need to be applied to 1.2, as well? Seems like it should, but the 
ScrubTest and LegacySSTableTest pass on 1.2 without this.

 IndexHelper.skipBloomFilters won't skip non-SHA filters
 ---

 Key: CASSANDRA-5385
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5385
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.0, 2.0
Reporter: Carl Yeksigian
Assignee: Carl Yeksigian
 Fix For: 1.2.4, 2.0

 Attachments: 5385.patch, 5385-v2.patch


 Currently, if the bloom filter is not of SHA type, we do not properly skip 
 the bytes. We need to read out the number of bytes, as happens in the Murmur 
 deserializer, then skip that many bytes instead of just skipping the hash 
 size. The version needs to be passed into the method as well, so that it 
 knows what type of filter it is and does the appropriate skipping.
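The length-prefixed skipping described above can be sketched as follows. This assumes the filter is serialized with a 4-byte length prefix (as a Murmur-style serializer would write it); `FilterSkipper` is an illustrative name, not the Cassandra IndexHelper API.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.IOException;

// Read the serialized length, then skip exactly that many bytes, rather
// than assuming a fixed hash size. Illustrative, not Cassandra code.
public class FilterSkipper {
    public static void skipLengthPrefixed(DataInput in) throws IOException {
        int length = in.readInt();          // byte count written by the serializer
        int skipped = in.skipBytes(length); // skip the filter body itself
        if (skipped != length)
            throw new IOException("EOF while skipping " + length + "-byte filter");
    }

    // Self-check: after skipping a length-prefixed filter, the next byte
    // read must be the payload that follows it.
    public static int nextByteAfterSkip(byte[] data) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
            skipLengthPrefixed(in);
            return in.readByte();
        } catch (IOException e) {
            return -1;
        }
    }
}
```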

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4762) Support multiple OR clauses for CQL3 Compact storage

2013-04-03 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13620917#comment-13620917
 ] 

Jonathan Ellis commented on CASSANDRA-4762:
---

At this point should we retarget for 2.0?

 Support multiple OR clauses for CQL3 Compact storage
 

 Key: CASSANDRA-4762
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4762
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: T Jake Luciani
Assignee: T Jake Luciani
  Labels: cql3
 Fix For: 1.2.4

 Attachments: 4762-1.txt


 Given CASSANDRA-3885
 It seems it should be possible to store multiple ranges for many predicates 
 even the inner parts of a composite column.
 They could be expressed as a expanded set of filter queries.
 example:
 {code}
 CREATE TABLE test (
name text,
tdate timestamp,
tdate2 timestamp,
tdate3 timestamp,
num double,
PRIMARY KEY(name,tdate,tdate2,tdate3)
  ) WITH COMPACT STORAGE;
 SELECT * FROM test WHERE 
   name IN ('a','b') and
   tdate IN ('2010-01-01','2011-01-01') and
   tdate2 IN ('2010-01-01','2011-01-01') and
   tdate3 IN ('2010-01-01','2011-01-01') 
 {code}
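The expansion the ticket describes (several IN clauses becoming an expanded set of single-value filter queries) is a cartesian product over the clause values. A minimal sketch, with `InExpander` as a hypothetical helper rather than CQL3 internals:

```java
import java.util.ArrayList;
import java.util.List;

// Expand multiple IN clauses into the cartesian product of fully
// specified single-value filter queries. Illustrative, not CQL3 code.
public class InExpander {
    public static List<List<String>> expand(List<List<String>> inValues) {
        List<List<String>> rows = new ArrayList<>();
        rows.add(new ArrayList<String>());           // start with one empty prefix
        for (List<String> clause : inValues) {
            List<List<String>> next = new ArrayList<>();
            for (List<String> prefix : rows)
                for (String value : clause) {
                    List<String> row = new ArrayList<>(prefix);
                    row.add(value);
                    next.add(row);                   // one fully specified filter query
                }
            rows = next;
        }
        return rows;
    }
}
```

For the SELECT above (2 names and 2 values for each of the three date columns), this yields 2 × 2 × 2 × 2 = 16 single-value filter queries.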

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (CASSANDRA-5349) Add binary protocol support for bind variables to non-prepared statements

2013-04-03 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reassigned CASSANDRA-5349:
-

Assignee: Marcus Eriksson  (was: Sylvain Lebresne)

 Add binary protocol support for bind variables to non-prepared statements
 -

 Key: CASSANDRA-5349
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5349
 Project: Cassandra
  Issue Type: Task
  Components: API
Affects Versions: 1.2.0
Reporter: Jonathan Ellis
Assignee: Marcus Eriksson
Priority: Minor
 Fix For: 1.2.4


 Currently, the binary protocol allows requests as a string or as a [prepared 
 statement] id + bind vars.  Allowing string + bind vars as well would 
 simplify life for users with one-off statements, who otherwise must choose 
 between adding boilerplate for a prepared statement and manually escaping 
 parameters, which is particularly painful for binary data.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5422) Binary protocol sanity check

2013-04-03 Thread Jonathan Ellis (JIRA)
Jonathan Ellis created CASSANDRA-5422:
-

 Summary: Binary protocol sanity check
 Key: CASSANDRA-5422
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5422
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Jonathan Ellis
Assignee: Marcus Eriksson


With MutationStatement.execute turned into a no-op, I only get about 33k 
insert_prepared ops/s on my laptop.  That is: this is an upper bound for our 
performance if Cassandra were infinitely fast, limited by netty handling the 
protocol + connections.

This is up from about 13k/s with MS.execute running normally.

~40% overhead from netty seems awfully high to me, especially for 
insert_prepared where the return value is tiny.  (I also used 4-byte column 
values to minimize that part as well.)
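The arithmetic behind the ~40% figure can be made explicit: with the handler stubbed out, throughput measures the framework alone, and comparing per-op costs gives the framework's share of total time. A small sketch (the numbers plugged in below are the ones reported in this ticket):

```java
// Derive the framework's share of per-op cost from two throughput
// measurements: one with the real handler, one with it stubbed to a no-op.
public class OverheadBound {
    // opsPerSecNoop: throughput with MutationStatement.execute as a no-op.
    // opsPerSecReal: throughput with the real handler.
    public static double frameworkFraction(double opsPerSecNoop, double opsPerSecReal) {
        double frameworkSecondsPerOp = 1.0 / opsPerSecNoop; // framework alone
        double totalSecondsPerOp = 1.0 / opsPerSecReal;     // end to end
        return frameworkSecondsPerOp / totalSecondsPerOp;   // = real/noop ratio
    }
}
```

With 33k no-op ops/s and 13k real ops/s, frameworkFraction(33000, 13000) is 13/33, roughly 0.39, matching the "~40% overhead from netty" observation.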

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5422) Binary protocol sanity check

2013-04-03 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5422:
--

Attachment: 5422-test.txt

Patch to disable MS.execute (and batch_mutate) attached.

To run the binary protocol stress test,

{{mvn install -Dmaven.test.skip=true}} from java-driver root 
(https://github.com/datastax/java-driver)

then {{mvn assembly:single from driver-example/stress}}

finally, {{java -jar 
target/cassandra-driver-examples-stress-1.0.0-beta2-SNAPSHOT-jar-with-dependencies.jar
 insert_prepared --value-size 4}}

 Binary protocol sanity check
 

 Key: CASSANDRA-5422
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5422
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Jonathan Ellis
Assignee: Marcus Eriksson
 Attachments: 5422-test.txt


 With MutationStatement.execute turned into a no-op, I only get about 33k 
 insert_prepared ops/s on my laptop.  That is: this is an upper bound for our 
 performance if Cassandra were infinitely fast, limited by netty handling the 
 protocol + connections.
 This is up from about 13k/s with MS.execute running normally.
 ~40% overhead from netty seems awfully high to me, especially for 
 insert_prepared where the return value is tiny.  (I also used 4-byte column 
 values to minimize that part as well.)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-5422) Binary protocol sanity check

2013-04-03 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13620935#comment-13620935
 ] 

Jonathan Ellis edited comment on CASSANDRA-5422 at 4/3/13 2:00 PM:
---

Patch to disable MS.execute (and batch_mutate) attached.

To run the binary protocol stress test,

{{mvn install -Dmaven.test.skip=true}} from java-driver root 
(https://github.com/datastax/java-driver)

then {{mvn assembly:single}} from driver-example/stress

finally, {{java -jar 
target/cassandra-driver-examples-stress-1.0.0-beta2-SNAPSHOT-jar-with-dependencies.jar
 insert_prepared --value-size 4}}

  was (Author: jbellis):
Patch to disable MS.execute (and batch_mutate) attached.

To run the binary protocol stress test,

{{mvn install -Dmaven.test.skip=true}} from java-driver root 
(https://github.com/datastax/java-driver)

then {{mvn assembly:single from driver-example/stress}}

finally, {{java -jar 
target/cassandra-driver-examples-stress-1.0.0-beta2-SNAPSHOT-jar-with-dependencies.jar
 insert_prepared --value-size 4}}
  
 Binary protocol sanity check
 

 Key: CASSANDRA-5422
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5422
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Jonathan Ellis
Assignee: Marcus Eriksson
 Attachments: 5422-test.txt


 With MutationStatement.execute turned into a no-op, I only get about 33k 
 insert_prepared ops/s on my laptop.  That is: this is an upper bound for our 
 performance if Cassandra were infinitely fast, limited by netty handling the 
 protocol + connections.
 This is up from about 13k/s with MS.execute running normally.
 ~40% overhead from netty seems awfully high to me, especially for 
 insert_prepared where the return value is tiny.  (I also used 4-byte column 
 values to minimize that part as well.)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira



[jira] [Assigned] (CASSANDRA-2524) Use SSTableBoundedScanner for cleanup

2013-04-03 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reassigned CASSANDRA-2524:
-

Assignee: Marcus Eriksson

(SSTBS has been replaced by a SSTS constructor taking a range)

 Use SSTableBoundedScanner for cleanup
 -

 Key: CASSANDRA-2524
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2524
 Project: Cassandra
  Issue Type: Improvement
Reporter: Stu Hood
Assignee: Marcus Eriksson
Priority: Minor
  Labels: lhf
 Fix For: 2.0

 Attachments: 
 0001-Use-a-SSTableBoundedScanner-for-cleanup-and-improve-cl.txt, 
 0002-Oops.-When-indexes-or-counters-are-in-use-must-continu.txt


 SSTableBoundedScanner seeks rather than scanning through rows, so it would be 
 significantly more efficient than the existing per-key filtering that cleanup 
 does.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (CASSANDRA-3429) Fix truncate/compaction race without locking

2013-04-03 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3429.
---

Resolution: Duplicate
  Assignee: (was: Jonathan Ellis)

done by CASSANDRA-3430

 Fix truncate/compaction race without locking
 

 Key: CASSANDRA-3429
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3429
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
Priority: Minor
  Labels: compaction
 Fix For: 2.0


 See CASSANDRA-3399 for original problem description.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (CASSANDRA-4476) Support 2ndary index queries with only non-EQ clauses

2013-04-03 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reassigned CASSANDRA-4476:
-

Assignee: Marcus Eriksson

 Support 2ndary index queries with only non-EQ clauses
 -

 Key: CASSANDRA-4476
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4476
 Project: Cassandra
  Issue Type: Improvement
  Components: API, Core
Reporter: Sylvain Lebresne
Assignee: Marcus Eriksson
Priority: Minor
 Fix For: 2.0


 Currently, a query that uses 2ndary indexes must have at least one EQ clause 
 (on an indexed column). Given that indexed CFs are local (and use a 
 LocalPartitioner that orders the rows by the type of the indexed column), we 
 should extend 2ndary indexes to allow querying indexed columns even when no 
 EQ clause is provided.
 As far as I can tell, the main problem to solve for this is to update 
 KeysSearcher.highestSelectivityPredicate(), i.e. how do we estimate the 
 selectivity of non-EQ clauses? I note however that if we can do that estimate 
 reasonably accurately, this might provide better performance even for index 
 queries that have both EQ and non-EQ clauses, because some non-EQ clauses may 
 have a much better selectivity than EQ ones (say you index both the user 
 country and birth date; for SELECT * FROM users WHERE country = 'US' AND 
 birthdate > 'Jan 2009' AND birthdate < 'July 2009', you'd better use the 
 birthdate index first).
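
A hedged sketch of that idea (names are illustrative, not the actual KeysSearcher API): pick the clause with the lowest estimated match fraction, regardless of whether it is an EQ clause, assuming per-clause selectivity estimates are available from index statistics.

```java
import java.util.*;

public class SelectivityDemo {
    // clause -> estimated fraction of rows it matches (lower = more selective);
    // in practice these estimates would come from index statistics
    static String mostSelective(Map<String, Double> clauseSelectivity) {
        return Collections.min(clauseSelectivity.entrySet(),
                               Map.Entry.comparingByValue()).getKey();
    }

    public static void main(String[] args) {
        Map<String, Double> clauses = new HashMap<>();
        clauses.put("country = 'US'", 0.30);            // EQ clause, but matches 30% of rows
        clauses.put("birthdate > '2009-01-01'", 0.05);  // non-EQ, yet far more selective
        // the non-EQ birthdate clause is chosen first
        System.out.println(mostSelective(clauses));
    }
}
```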

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4981) Error when starting a node with vnodes while counter-add operations underway

2013-04-03 Thread Ryan McGuire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13620963#comment-13620963
 ] 

Ryan McGuire commented on CASSANDRA-4981:
-

I haven't been able to reproduce this in the DataStax lab. I will try it on 
ec2/ubuntu like Tyler tried.

 Error when starting a node with vnodes while counter-add operations underway
 

 Key: CASSANDRA-4981
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4981
 Project: Cassandra
  Issue Type: Bug
 Environment: 2-node cluster on ec2, ubuntu, cassandra-1.2.0 commit 
 a32eb9f7d2f2868e8154d178e96e045859e1d855
Reporter: Tyler Patterson
Assignee: Ryan McGuire
Priority: Minor
 Attachments: system.log


 Start both nodes, start stress on one node like this: cassandra-stress 
 --replication-factor=2 --operation=COUNTER_ADD
 While that is running: On the other node, kill cassandra, wait for nodetool 
 status to show the node as down, and restart cassandra. I sometimes have to 
 kill and restart cassandra several times to get the problem to happen.
 I get this error several times in the log:
 {code}
 ERROR 15:39:33,198 Exception in thread Thread[MutationStage:16,5,main]
 java.lang.AssertionError
   at 
 org.apache.cassandra.locator.TokenMetadata.firstTokenIndex(TokenMetadata.java:748)
   at 
 org.apache.cassandra.locator.TokenMetadata.firstToken(TokenMetadata.java:762)
   at 
 org.apache.cassandra.locator.AbstractReplicationStrategy.getNaturalEndpoints(AbstractReplicationStrategy.java:95)
   at 
 org.apache.cassandra.service.StorageService.getNaturalEndpoints(StorageService.java:2426)
   at 
 org.apache.cassandra.service.StorageProxy.performWrite(StorageProxy.java:396)
   at 
 org.apache.cassandra.service.StorageProxy.applyCounterMutationOnLeader(StorageProxy.java:755)
   at 
 org.apache.cassandra.db.CounterMutationVerbHandler.doVerb(CounterMutationVerbHandler.java:53)
   at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:56)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-5391) SSL problems with inter-DC communication

2013-04-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13621012#comment-13621012
 ] 

Ondřej Černoš edited comment on CASSANDRA-5391 at 4/3/13 3:59 PM:
--

[This|https://issues.apache.org/jira/secure/attachment/12576777/5391-1.2.3.txt] 
patch seems to work. So the issue may be resolved now. Thanks!

  was (Author: ondrej.cernos):
[This] patch seems to work. So the issue may be resolved now. Thanks!
  
 SSL problems with inter-DC communication
 

 Key: CASSANDRA-5391
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5391
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.3
 Environment: $ /etc/alternatives/jre_1.6.0/bin/java -version
 java version 1.6.0_23
 Java(TM) SE Runtime Environment (build 1.6.0_23-b05)
 Java HotSpot(TM) 64-Bit Server VM (build 19.0-b09, mixed mode)
 $ uname -a
 Linux hostname 2.6.32-358.2.1.el6.x86_64 #1 SMP Tue Mar 12 14:18:09 CDT 2013 
 x86_64 x86_64 x86_64 GNU/Linux
 $ cat /etc/redhat-release 
 Scientific Linux release 6.3 (Carbon)
 $ facter | grep ec2
 ...
 ec2_placement = availability_zone=us-east-1d
 ...
 $ rpm -qi cassandra
 cassandra-1.2.3-1.el6.cmp1.noarch
 (custom built rpm from cassandra tarball distribution)
Reporter: Ondřej Černoš
Assignee: Yuki Morishita
Priority: Blocker
 Fix For: 1.2.4

 Attachments: 5391-1.2.3.txt, 5391-1.2.txt, 5391-v2-1.2.txt


 I get SSL and snappy compression errors in multiple datacenter setup.
 The setup is simple: 3 nodes in AWS east, 3 nodes in Rackspace. I use 
 slightly modified Ec2MultiRegionSnitch in Rackspace (I just added a regex 
 able to parse the Rackspace/Openstack availability zone which happens to be 
 in unusual format).
 During {{nodetool rebuild}} tests I managed to (consistently) trigger the 
 following error:
 {noformat}
 2013-03-19 12:42:16.059+0100 [Thread-13] [DEBUG] 
 IncomingTcpConnection.java(79) 
 org.apache.cassandra.net.IncomingTcpConnection: IOException reading from 
 socket; closing
 java.io.IOException: FAILED_TO_UNCOMPRESS(5)
   at org.xerial.snappy.SnappyNative.throw_error(SnappyNative.java:78)
   at org.xerial.snappy.SnappyNative.rawUncompress(Native Method)
   at org.xerial.snappy.Snappy.rawUncompress(Snappy.java:391)
   at 
 org.apache.cassandra.io.compress.SnappyCompressor.uncompress(SnappyCompressor.java:93)
   at 
 org.apache.cassandra.streaming.compress.CompressedInputStream.decompress(CompressedInputStream.java:101)
   at 
 org.apache.cassandra.streaming.compress.CompressedInputStream.read(CompressedInputStream.java:79)
   at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:337)
   at 
 org.apache.cassandra.utils.BytesReadTracker.readUnsignedShort(BytesReadTracker.java:140)
   at 
 org.apache.cassandra.utils.ByteBufferUtil.readShortLength(ByteBufferUtil.java:361)
   at 
 org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:371)
   at 
 org.apache.cassandra.streaming.IncomingStreamReader.streamIn(IncomingStreamReader.java:160)
   at 
 org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:122)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:226)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.handleStream(IncomingTcpConnection.java:166)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:66)
 {noformat}
 The exception is raised during DB file download. What is strange is the 
 following:
 * the exception is raised only when rebuilding from AWS into Rackspace
 * the exception is raised only when all nodes are up and running in AWS (all 
 3). In other words, if I bootstrap from one or two nodes in AWS, the command 
 succeeds.
 Packet-level inspection revealed malformed packets _on both ends of 
 communication_ (the packet is considered malformed on the machine it 
 originates on).
 Further investigation raised two more concerns:
 * We managed to get another stacktrace when testing the scenario. The 
 exception was raised only once during the tests and was raised when I 
 throttled the inter-datacenter bandwidth to 1Mbps.
 {noformat}
 java.lang.RuntimeException: javax.net.ssl.SSLException: bad record MAC
   at com.google.common.base.Throwables.propagate(Throwables.java:160)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: javax.net.ssl.SSLException: bad record MAC
   at com.sun.net.ssl.internal.ssl.Alerts.getSSLException(Alerts.java:190)
   at 
 

[jira] [Commented] (CASSANDRA-5391) SSL problems with inter-DC communication

2013-04-03 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13621012#comment-13621012
 ] 

Ondřej Černoš commented on CASSANDRA-5391:
--

[This] patch seems to work. So the issue may be resolved now. Thanks!

 SSL problems with inter-DC communication
 

 Key: CASSANDRA-5391
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5391
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.3
 Environment: $ /etc/alternatives/jre_1.6.0/bin/java -version
 java version 1.6.0_23
 Java(TM) SE Runtime Environment (build 1.6.0_23-b05)
 Java HotSpot(TM) 64-Bit Server VM (build 19.0-b09, mixed mode)
 $ uname -a
 Linux hostname 2.6.32-358.2.1.el6.x86_64 #1 SMP Tue Mar 12 14:18:09 CDT 2013 
 x86_64 x86_64 x86_64 GNU/Linux
 $ cat /etc/redhat-release 
 Scientific Linux release 6.3 (Carbon)
 $ facter | grep ec2
 ...
 ec2_placement = availability_zone=us-east-1d
 ...
 $ rpm -qi cassandra
 cassandra-1.2.3-1.el6.cmp1.noarch
 (custom built rpm from cassandra tarball distribution)
Reporter: Ondřej Černoš
Assignee: Yuki Morishita
Priority: Blocker
 Fix For: 1.2.4

 Attachments: 5391-1.2.3.txt, 5391-1.2.txt, 5391-v2-1.2.txt


 I get SSL and snappy compression errors in multiple datacenter setup.
 The setup is simple: 3 nodes in AWS east, 3 nodes in Rackspace. I use 
 slightly modified Ec2MultiRegionSnitch in Rackspace (I just added a regex 
 able to parse the Rackspace/Openstack availability zone which happens to be 
 in unusual format).
 During {{nodetool rebuild}} tests I managed to (consistently) trigger the 
 following error:
 {noformat}
 2013-03-19 12:42:16.059+0100 [Thread-13] [DEBUG] 
 IncomingTcpConnection.java(79) 
 org.apache.cassandra.net.IncomingTcpConnection: IOException reading from 
 socket; closing
 java.io.IOException: FAILED_TO_UNCOMPRESS(5)
   at org.xerial.snappy.SnappyNative.throw_error(SnappyNative.java:78)
   at org.xerial.snappy.SnappyNative.rawUncompress(Native Method)
   at org.xerial.snappy.Snappy.rawUncompress(Snappy.java:391)
   at 
 org.apache.cassandra.io.compress.SnappyCompressor.uncompress(SnappyCompressor.java:93)
   at 
 org.apache.cassandra.streaming.compress.CompressedInputStream.decompress(CompressedInputStream.java:101)
   at 
 org.apache.cassandra.streaming.compress.CompressedInputStream.read(CompressedInputStream.java:79)
   at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:337)
   at 
 org.apache.cassandra.utils.BytesReadTracker.readUnsignedShort(BytesReadTracker.java:140)
   at 
 org.apache.cassandra.utils.ByteBufferUtil.readShortLength(ByteBufferUtil.java:361)
   at 
 org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:371)
   at 
 org.apache.cassandra.streaming.IncomingStreamReader.streamIn(IncomingStreamReader.java:160)
   at 
 org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:122)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:226)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.handleStream(IncomingTcpConnection.java:166)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:66)
 {noformat}
 The exception is raised during DB file download. What is strange is the 
 following:
 * the exception is raised only when rebuilding from AWS into Rackspace
 * the exception is raised only when all nodes are up and running in AWS (all 
 3). In other words, if I bootstrap from one or two nodes in AWS, the command 
 succeeds.
 Packet-level inspection revealed malformed packets _on both ends of 
 communication_ (the packet is considered malformed on the machine it 
 originates on).
 Further investigation raised two more concerns:
 * We managed to get another stacktrace when testing the scenario. The 
 exception was raised only once during the tests and was raised when I 
 throttled the inter-datacenter bandwidth to 1Mbps.
 {noformat}
 java.lang.RuntimeException: javax.net.ssl.SSLException: bad record MAC
   at com.google.common.base.Throwables.propagate(Throwables.java:160)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: javax.net.ssl.SSLException: bad record MAC
   at com.sun.net.ssl.internal.ssl.Alerts.getSSLException(Alerts.java:190)
   at 
 com.sun.net.ssl.internal.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1649)
   at 
 com.sun.net.ssl.internal.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1607)
   at 
 com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:859)
   at 
 

[jira] [Commented] (CASSANDRA-5417) Push composites support in the storage engine

2013-04-03 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13621017#comment-13621017
 ] 

Sylvain Lebresne commented on CASSANDRA-5417:
-

I've pushed some initial patch to 
https://github.com/pcmanus/cassandra/commits/5417. It's on top of 
CASSANDRA-5125 as said above (so only the last 3 commits are for this ticket).

I'll note right away that this patch mainly concerns itself with the ground 
work, not with the different optimizations I discuss in the description. 
Typically, it doesn't do any prefix sharing and a bit more work will be 
necessary to get that, but it makes it at least possible. Also, while the 
patch does clean up the CQL3 code quite a bit imo, I think there are more 
cleanup opportunities that are not taken just yet, but I feel that the patch 
is huge enough as it is, so I'd rather keep those for later.

The patch passes the unit tests as well as (almost all) the cql dtests. For 
the cql dtests, due to some refactoring, the patch introduces a small 
limitation: if you use an IN on the last clustering key, you have to include 
that last clustering key in the selected columns. This makes one test fail. 
We should definitely fix it, but as it happens, the solution for that is 
exactly the one needed for CASSANDRA-4911. So since that's a bit of a detail, 
and since this patch is already pretty big, I suggest leaving the fix of that 
remaining dtest to CASSANDRA-4911, and I'll take it on myself to fix that 
latter ticket in the 2.0 timeframe.

I'll also note that this patch does bias the code a bit towards composites, 
in the sense that for non-composite CFs, we allocate one more object per 
column (compared to without the patch). I haven't done much performance 
testing at this point however.


 Push composites support in the storage engine
 -

 Key: CASSANDRA-5417
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5417
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 2.0


 CompositeType happens to be very useful and is now widely used: CQL3 heavily 
 relies on it, and super columns are now using it too internally. Besides, 
 CompositeType has been advised as a replacement for super columns on the 
 thrift side for a while, so it's safe to assume that it's generally used 
 there too.
 CompositeType has initially been introduced as just another AbstractType, 
 meaning that the storage engine has no notion whatsoever of composites 
 being, well, composite. This has the following drawbacks:
 * Because internally a composite value is handled as just a ByteBuffer, we 
 end up doing a lot of extra work. Typically, each time we compare 2 composite 
 values, we end up deserializing the components (which, while it doesn't copy 
 data per se because we just slice the global ByteBuffer, still wastes some 
 cpu cycles and allocates a bunch of ByteBuffer objects). And since compare 
 can be called *a lot*, this is likely not negligible.
 * This makes CQL3 code uglier than necessary. Basically, CQL3 makes 
 extensive use of composites, and since it gets back ByteBuffers from the 
 internal columns, it always has to check whether it's actually a 
 CompositeType or not, and then split it and pick the different parts it 
 needs. It's only an API problem, but having things exposed as composites 
 directly would definitely make things cleaner. In particular, in most cases, 
 CQL3 doesn't care whether it has a composite with only one component or a 
 not-really-composite value, but we still always distinguish both cases. 
 Lastly, if we do expose composites more directly internally, it's not a lot 
 more work to internalize better the different parts of the cell name that 
 CQL3 uses (what's the clustering key, what's the actual CQL3 column name, 
 what's the collection element), making things cleaner. Last but not least, 
 there is currently a bunch of places where methods take a ByteBuffer as 
 argument and it's hard to know whether they expect a cell name or a CQL3 
 column name. This is pretty error prone.
 * It makes it hard (or impossible) to do a number of performance 
 improvements. Consider CASSANDRA-4175: I'm not really sure how you can do it 
 properly (in memory) if cell names are just ByteBuffers (since CQL3 column 
 names are just one of the components in general). But we also miss 
 opportunities for sharing prefixes. If we were able to share prefixes of 
 composite names in memory we would 1) lower the memory footprint and 2) 
 potentially speed up comparison (of the prefixes) by checking reference 
 equality first (also, doing prefix sharing on-disk, which is a separate 
 concern btw, might be easier to do if we do prefix sharing in memory).
 So I suggest pushing 
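
The reference-equality shortcut alluded to above can be sketched as follows. This is a toy model (illustrative names, not Cassandra's internal representation): when two cell names hold the same interned prefix object, the comparator skips the prefix entirely and only compares the tail.

```java
// Toy model of prefix-sharing for composite cell names: a shared, interned
// prefix object lets comparison short-circuit via a reference-equality check.
public class PrefixShareDemo {
    public static class Composite {
        final String[] prefix;  // shared, interned prefix components
        final String last;      // final component (e.g. the CQL3 column name)
        public Composite(String[] prefix, String last) {
            this.prefix = prefix;
            this.last = last;
        }
    }

    public static int compare(Composite a, Composite b) {
        if (a.prefix != b.prefix) {  // walk components only when prefix objects differ
            int n = Math.min(a.prefix.length, b.prefix.length);
            for (int i = 0; i < n; i++) {
                int c = a.prefix[i].compareTo(b.prefix[i]);
                if (c != 0) return c;
            }
            if (a.prefix.length != b.prefix.length)
                return Integer.compare(a.prefix.length, b.prefix.length);
        }
        return a.last.compareTo(b.last);  // shared prefix: only the tail is compared
    }

    public static void main(String[] args) {
        String[] shared = { "clustering1", "clustering2" };
        Composite x = new Composite(shared, "colA");
        Composite y = new Composite(shared, "colB");
        System.out.println(compare(x, y) < 0);  // true: the prefix is skipped entirely
    }
}
```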

[jira] [Updated] (CASSANDRA-5410) incremental backups race

2013-04-03 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5410:
--

Attachment: 5410.txt

Patch to snapshot synchronously before creating a new View.

 incremental backups race
 

 Key: CASSANDRA-5410
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5410
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.0
Reporter: Jonathan Ellis
Priority: Minor
 Fix For: 1.2.4

 Attachments: 5410.txt


 Incremental backups do not mark sstables referenced or compacting, so an 
 sstable could get compacted away before createLinks runs. Occasionally you 
 can see this happen during ColumnFamilyStoreTest. (Since it runs on the 
 background tasks stage, it does not fail the test.)
 {noformat}
 [junit] java.lang.RuntimeException: Tried to hard link to file that does 
 not exist 
 build/test/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-ja-8-Statistics.db
 [junit]   at 
 org.apache.cassandra.io.util.FileUtils.createHardLink(FileUtils.java:72)
 [junit]   at 
 org.apache.cassandra.io.sstable.SSTableReader.createLinks(SSTableReader.java:1066)
 [junit]   at 
 org.apache.cassandra.db.DataTracker$1.run(DataTracker.java:168)
 [junit]   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
 [junit]   at 
 java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
 [junit]   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
 [junit]   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
 {noformat}
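
The race is easy to reproduce in isolation. A minimal stand-alone sketch (file names are illustrative; this is not Cassandra code) of what happens when compaction deletes the sstable before the hard link is created:

```java
import java.io.IOException;
import java.nio.file.*;

public class HardLinkRaceDemo {
    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("backup-demo");
        Path sstable = Files.createFile(dir.resolve("Keyspace1-Standard1-ja-8-Data.db"));
        Files.delete(sstable);  // simulate compaction removing the file first
        try {
            // this is what createLinks attempts after the source is gone
            Files.createLink(dir.resolve("backups-hardlink.db"), sstable);
        } catch (IOException e) {
            // the failure mode behind "Tried to hard link to file that does not exist"
            System.out.println("hard link failed: " + e.getClass().getSimpleName());
        }
    }
}
```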

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


git commit: comments and simplify comparator

2013-04-03 Thread jbellis
Updated Branches:
  refs/heads/trunk 4c28cfb57 -> e9c3ee979


comments and simplify comparator


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e9c3ee97
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e9c3ee97
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e9c3ee97

Branch: refs/heads/trunk
Commit: e9c3ee979f1f0dca49b002a2892bd23c6f69235c
Parents: 4c28cfb
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Apr 3 11:39:13 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Apr 3 11:39:21 2013 -0500

--
 .../compaction/SizeTieredCompactionStrategy.java   |   16 +-
 1 files changed, 6 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e9c3ee97/src/java/org/apache/cassandra/db/compaction/SizeTieredCompactionStrategy.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/SizeTieredCompactionStrategy.java 
b/src/java/org/apache/cassandra/db/compaction/SizeTieredCompactionStrategy.java
index 8d990e5..6febc07 100644
--- 
a/src/java/org/apache/cassandra/db/compaction/SizeTieredCompactionStrategy.java
+++ 
b/src/java/org/apache/cassandra/db/compaction/SizeTieredCompactionStrategy.java
@@ -19,8 +19,8 @@ package org.apache.cassandra.db.compaction;
 
 import java.util.*;
 import java.util.Map.Entry;
-import java.util.concurrent.Callable;
 
+import com.google.common.primitives.Longs;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -28,7 +28,6 @@ import org.apache.cassandra.cql3.CFPropDefs;
 import org.apache.cassandra.db.ColumnFamilyStore;
 import org.apache.cassandra.exceptions.ConfigurationException;
 import org.apache.cassandra.io.sstable.SSTableReader;
-import org.apache.cassandra.utils.FBUtilities;
 import org.apache.cassandra.utils.Pair;
 
 public class SizeTieredCompactionStrategy extends AbstractCompactionStrategy
@@ -75,6 +74,7 @@ public class SizeTieredCompactionStrategy extends AbstractCompactionStrategy
 logger.debug("Compaction buckets are {}", buckets);
 updateEstimatedCompactionsByTasks(buckets);
 
+// skip buckets containing less than minThreshold sstables, and limit other buckets to maxThreshold entries
 List<List<SSTableReader>> prunedBuckets = new ArrayList<List<SSTableReader>>();
 for (List<SSTableReader> bucket : buckets)
 {
@@ -92,10 +92,10 @@ public class SizeTieredCompactionStrategy extends AbstractCompactionStrategy
 prunedBuckets.add(prunedBucket);
 }
 
+// if there is no sstable to compact in standard way, try compacting single sstable whose droppable tombstone
+// ratio is greater than threshold.
 if (prunedBuckets.isEmpty())
 {
-// if there is no sstable to compact in standard way, try compacting single sstable whose droppable tombstone
-// ratio is greater than threshold.
 for (List<SSTableReader> bucket : buckets)
 {
 for (SSTableReader table : bucket)
@@ -109,16 +109,12 @@ public class SizeTieredCompactionStrategy extends AbstractCompactionStrategy
 return Collections.emptyList();
 }
 
+// prefer compacting buckets with smallest average size; that will yield the fastest improvement for read performance
 return Collections.min(prunedBuckets, new Comparator<List<SSTableReader>>()
 {
 public int compare(List<SSTableReader> o1, List<SSTableReader> o2)
 {
-long n = avgSize(o1) - avgSize(o2);
-if (n < 0)
-return -1;
-if (n > 0)
-return 1;
-return 0;
+return Longs.compare(avgSize(o1), avgSize(o2));
 }
 
 private long avgSize(List<SSTableReader> sstables)
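
`Longs.compare` is Guava; the JDK's `Long.compare` performs the same three-way test. A small stand-alone check of the equivalence, which also shows the overflow edge case the subtraction idiom has at extreme operands (irrelevant for real sstable sizes, but worth knowing):

```java
public class ComparatorDemo {
    // the hand-written three-way test the patch removes
    static int manual(long a, long b) {
        long n = a - b;
        if (n < 0) return -1;
        if (n > 0) return 1;
        return 0;
    }

    public static void main(String[] args) {
        // agree on ordinary values
        System.out.println(manual(3, 7) == Long.compare(3, 7));  // true
        // the subtraction wraps around here, so the two disagree
        System.out.println(manual(Long.MIN_VALUE, 1) == Long.compare(Long.MIN_VALUE, 1));  // false
    }
}
```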



buildbot success in ASF Buildbot on cassandra-trunk

2013-04-03 Thread buildbot
The Buildbot has detected a restored build on builder cassandra-trunk while 
building cassandra.
Full details are available at:
 http://ci.apache.org/builders/cassandra-trunk/builds/2516

Buildbot URL: http://ci.apache.org/

Buildslave for this Build: portunus_ubuntu

Build Reason: scheduler
Build Source Stamp: [branch trunk] e9c3ee979f1f0dca49b002a2892bd23c6f69235c
Blamelist: Jonathan Ellis jbel...@apache.org

Build succeeded!

sincerely,
 -The Buildbot





[jira] [Created] (CASSANDRA-5423) PasswordAuthenticator is incompatible with various Cassandra clients

2013-04-03 Thread Sven Delmas (JIRA)
Sven Delmas created CASSANDRA-5423:
--

 Summary: PasswordAuthenticator is incompatible with various 
Cassandra clients
 Key: CASSANDRA-5423
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5423
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.3
Reporter: Sven Delmas


Evidently with the old authenticator it was allowed to set the keyspace and 
then log in. With org.apache.cassandra.auth.PasswordAuthenticator you have to 
log in and then set the keyspace.

For backwards compatibility it would be good to allow setting the keyspace 
before login, and perform the actual operation/validation later, after the 
login.



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5424) nodetool repair -pr on all nodes won't repair the full range when a Keyspace isn't in all DC's

2013-04-03 Thread Jeremiah Jordan (JIRA)
Jeremiah Jordan created CASSANDRA-5424:
--

 Summary: nodetool repair -pr on all nodes won't repair the full 
range when a Keyspace isn't in all DC's
 Key: CASSANDRA-5424
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5424
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.9
Reporter: Jeremiah Jordan
Priority: Critical


nodetool repair -pr on all nodes won't repair the full range when a Keyspace 
isn't in all DC's

Commands follow, but the TL;DR of it: range 
(127605887595351923798765477786913079296,0] doesn't get repaired between the 
.38 node and the .236 node until I run a repair, without -pr, on .38.

It seems like primary range calculation doesn't take the schema into account, 
but deciding who to ask for merkle trees from does.
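
A toy model of that mismatch (a hypothetical helper, not Cassandra's TokenMetadata API): the ring-primary owner of a token is simply the next node clockwise on the full ring, computed with no regard to which keyspaces that node actually replicates. The wrap-around range is therefore "primary" for the Cassandra-DC node, which stores nothing for a keyspace replicated only to Analytics.

```java
import java.util.*;

public class PrimaryRangeDemo {
    // ring-primary owner of a token: the next node clockwise (wrapping)
    public static String primaryOwner(TreeMap<Long, String> ring, long token) {
        Map.Entry<Long, String> e = ring.ceilingEntry(token);
        return (e != null) ? e.getValue() : ring.firstEntry().getValue();
    }

    public static void main(String[] args) {
        // token -> node, mirroring the ring in the report (tokens abbreviated)
        TreeMap<Long, String> ring = new TreeMap<>();
        ring.put(0L,   "10.72.111.225 / Cassandra DC");
        ring.put(42L,  "10.2.29.38 / Analytics DC");
        ring.put(127L, "10.46.113.236 / Analytics DC");

        // a token in the wrap-around range (127, 0] is primary for the
        // Cassandra-DC node, even though the Analytics-only keyspace has no
        // replica there -- so "repair -pr" on every node still skips it
        System.out.println(primaryOwner(ring, 200L));
    }
}
```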

{noformat}
Address DC  RackStatus State   LoadOwns 
   Token   

   127605887595351923798765477786913079296 
10.72.111.225   Cassandra   rack1   Up Normal  455.87 KB   25.00%   
   0   
10.2.29.38  Analytics   rack1   Up Normal  40.74 MB25.00%   
   42535295865117307932921825928971026432  
10.46.113.236   Analytics   rack1   Up Normal  20.65 MB50.00%   
   127605887595351923798765477786913079296 

create keyspace Keyspace1
  with placement_strategy = 'NetworkTopologyStrategy'
  and strategy_options = {Analytics : 2}
  and durable_writes = true;

---
# nodetool -h 10.2.29.38 repair -pr Keyspace1 Standard1
[2013-04-03 15:46:58,000] Starting repair command #1, repairing 1 ranges for 
keyspace Keyspace1
[2013-04-03 15:47:00,881] Repair session b79b4850-9c75-11e2--8b5bf6ebea9e 
for range (0,42535295865117307932921825928971026432] finished
[2013-04-03 15:47:00,881] Repair command #1 finished

root@ip-10-2-29-38:/home/ubuntu# grep b79b4850-9c75-11e2--8b5bf6ebea9e 
/var/log/cassandra/system.log
 INFO [AntiEntropySessions:1] 2013-04-03 15:46:58,009 AntiEntropyService.java 
(line 676) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] new session: will 
sync a1/10.2.29.38, /10.46.113.236 on range 
(0,42535295865117307932921825928971026432] for Keyspace1.[Standard1]
 INFO [AntiEntropySessions:1] 2013-04-03 15:46:58,015 AntiEntropyService.java 
(line 881) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] requesting merkle 
trees for Standard1 (to [/10.46.113.236, a1/10.2.29.38])
 INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,202 AntiEntropyService.java 
(line 211) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Received merkle tree 
for Standard1 from /10.46.113.236
 INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,697 AntiEntropyService.java 
(line 211) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Received merkle tree 
for Standard1 from a1/10.2.29.38
 INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,879 AntiEntropyService.java 
(line 1015) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Endpoints 
/10.46.113.236 and a1/10.2.29.38 are consistent for Standard1
 INFO [AntiEntropyStage:1] 2013-04-03 15:47:00,880 AntiEntropyService.java 
(line 788) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Standard1 is fully 
synced
 INFO [AntiEntropySessions:1] 2013-04-03 15:47:00,880 AntiEntropyService.java 
(line 722) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] session completed 
successfully

root@ip-10-46-113-236:/home/ubuntu# grep b79b4850-9c75-11e2--8b5bf6ebea9e 
/var/log/cassandra/system.log
 INFO [AntiEntropyStage:1] 2013-04-03 15:46:59,944 AntiEntropyService.java 
(line 244) [repair #b79b4850-9c75-11e2--8b5bf6ebea9e] Sending completed 
merkle tree to /10.2.29.38 for (Keyspace1,Standard1)

root@ip-10-72-111-225:/home/ubuntu# grep b79b4850-9c75-11e2--8b5bf6ebea9e 
/var/log/cassandra/system.log
root@ip-10-72-111-225:/home/ubuntu# 

---
# nodetool -h 10.46.113.236  repair -pr Keyspace1 Standard1
[2013-04-03 15:48:00,274] Starting repair command #1, repairing 1 ranges for 
keyspace Keyspace1
[2013-04-03 15:48:02,032] Repair session dcb91540-9c75-11e2--a839ee2ccbef 
for range 
(42535295865117307932921825928971026432,127605887595351923798765477786913079296]
 finished
[2013-04-03 15:48:02,033] Repair command #1 finished

root@ip-10-46-113-236:/home/ubuntu# grep dcb91540-9c75-11e2--a839ee2ccbef 
/var/log/cassandra/system.log
 INFO [AntiEntropySessions:5] 2013-04-03 15:48:00,280 AntiEntropyService.java 
(line 676) [repair #dcb91540-9c75-11e2--a839ee2ccbef] new session: will 
sync a0/10.46.113.236, /10.2.29.38 on range 
(42535295865117307932921825928971026432,127605887595351923798765477786913079296]
 for Keyspace1.[Standard1]
 INFO [AntiEntropySessions:5] 2013-04-03 15:48:00,285 AntiEntropyService.java 
(line 881) [repair 

[jira] [Assigned] (CASSANDRA-5423) PasswordAuthenticator is incompatible with various Cassandra clients

2013-04-03 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko reassigned CASSANDRA-5423:


Assignee: Aleksey Yeschenko

 PasswordAuthenticator is incompatible with various Cassandra clients
 

 Key: CASSANDRA-5423
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5423
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.3
Reporter: Sven Delmas
Assignee: Aleksey Yeschenko

 Evidently with the old authenticator it was allowed to set the keyspace and then 
 login. With org.apache.cassandra.auth.PasswordAuthenticator you have to 
 login first and then set the keyspace.
 For backwards compatibility it would be good to allow setting the keyspace before 
 login, and perform the actual operation/validation later, after the login.
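The backwards-compatible behavior requested above can be sketched as a session object that remembers a pre-login set-keyspace request and validates it only once authentication has succeeded. This is an illustrative sketch only, not Cassandra's code; every name here (DeferredKeyspaceSession, validate, currentKeyspace) is invented for the example.

```java
// Hedged sketch, not Cassandra's actual code: a client session that accepts
// set_keyspace before login by remembering the request and validating it
// only after authentication, as the ticket asks for.
class DeferredKeyspaceSession {
    private String pendingKeyspace;  // set before login, not yet validated
    private String keyspace;         // validated, active keyspace
    private boolean loggedIn;

    void setKeyspace(String ks) {
        if (loggedIn)
            keyspace = validate(ks); // normal path: validate immediately
        else
            pendingKeyspace = ks;    // defer validation until after login
    }

    void login(String user, String password) {
        // ... authenticate against the configured authenticator here ...
        loggedIn = true;
        if (pendingKeyspace != null) { // apply the deferred set_keyspace
            keyspace = validate(pendingKeyspace);
            pendingKeyspace = null;
        }
    }

    private String validate(String ks) {
        // stand-in for a real existence check against the schema
        if (ks == null || ks.isEmpty())
            throw new IllegalArgumentException("unknown keyspace: " + ks);
        return ks;
    }

    String currentKeyspace() { return keyspace; }
}
```

With this ordering, a client that calls set-keyspace first and login second ends up in the same state as one that does the reverse.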

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5332) 1.2.3 Nodetool reports 1.1.10 nodes as down

2013-04-03 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13621118#comment-13621118
 ] 

Brandon Williams commented on CASSANDRA-5332:
-

Can you post gossipinfo from 1.1.10?

 1.2.3 Nodetool reports 1.1.10 nodes as down
 ---

 Key: CASSANDRA-5332
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5332
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.3
 Environment: Sun Java 6_u39
 Ubuntu 10.04.1
 Cassandra 1.2.3 and 1.1.10
Reporter: Arya Goudarzi

 While exercising a rolling upgrade from 1.1.10 to 1.2.3, I upgraded one 
 node from 1.1.10 to 1.2.3 today. It appears that the node running 1.2.3 reports 
 the nodes running 1.1.10 as Down in nodetool, while the nodes running 1.1.10 
 see all other nodes, including the 1.2.3 one, as Up. Here are the ring and 
 gossip output from the 1.1.10 nodes, for example; 231.121 is the upgraded node:
 Address DC  RackStatus State   Load
 Effective-Ownership Token
   
  141784319550391026443072753098378663700
 XX.180.36us-east 1b  Up Normal  49.47 GB25.00%
   1808575600
 XX.231.121  us-east 1c  Up Normal  47.08 GB25.00% 
  7089215977519551322153637656637080005
 XX.177.177  us-east 1d  Up Normal  33.64 GB25.00% 
  14178431955039102644307275311465584410
 XX.7.148us-east 1b  Up Normal  41.27 GB25.00% 
  42535295865117307932921825930779602030
 XX.20.9 us-east 1c  Up Normal  38.51 GB25.00% 
  49624511842636859255075463585608106435
 XX.86.255us-east 1d  Up Normal  34.78 GB25.00%
   56713727820156410577229101240436610840
 XX.63.230us-east 1b  Up Normal  38.11 GB25.00%
   85070591730234615865843651859750628460
 XX.163.36   us-east 1c  Up Normal  44.25 GB25.00% 
  92159807707754167187997289514579132865
 XX.31.234us-east 1d  Up Normal  44.66 GB25.00%
   99249023685273718510150927169407637270
 XX.132.169   us-east 1b  Up Normal  44.2 GB 25.00%
   127605887595351923798765477788721654890
 XX.71.63 us-east 1c  Up Normal  38.74 GB25.00%
   134695103572871475120919115443550159295
 XX.197.209  us-east 1d  Up Normal  41.5 GB 25.00% 
  141784319550391026443072753098378663700
 /XX.71.63
   RACK:1c
   SCHEMA:99dce53b-487e-3e7b-a958-a1cc48d9f575
   LOAD:4.1598705272E10
   DC:us-east
   INTERNAL_IP:XX.194.92
   STATUS:NORMAL,134695103572871475120919115443550159295
   RPC_ADDRESS:XX.194.92
   RELEASE_VERSION:1.1.6
 /XX.86.255
   RACK:1d
   SCHEMA:99dce53b-487e-3e7b-a958-a1cc48d9f575
   LOAD:3.734334162E10
   DC:us-east
   INTERNAL_IP:XX.6.195
   STATUS:NORMAL,56713727820156410577229101240436610840
   RPC_ADDRESS:XX.6.195
   RELEASE_VERSION:1.1.6
 /XX.7.148
   RACK:1b
   SCHEMA:99dce53b-487e-3e7b-a958-a1cc48d9f575
   LOAD:4.4316975808E10
   DC:us-east
   INTERNAL_IP:XX.47.250
   STATUS:NORMAL,42535295865117307932921825930779602030
   RPC_ADDRESS:XX.47.250
   RELEASE_VERSION:1.1.6
 /XX.63.230
   RACK:1b
   SCHEMA:99dce53b-487e-3e7b-a958-a1cc48d9f575
   LOAD:4.0918593305E10
   DC:us-east
   INTERNAL_IP:XX.89.127
   STATUS:NORMAL,85070591730234615865843651859750628460
   RPC_ADDRESS:XX.89.127
   RELEASE_VERSION:1.1.6
 /XX.132.169
   RACK:1b
   SCHEMA:99dce53b-487e-3e7b-a958-a1cc48d9f575
   LOAD:4.745883458E10
   DC:us-east
   INTERNAL_IP:XX.94.161
   STATUS:NORMAL,127605887595351923798765477788721654890
   RPC_ADDRESS:XX.94.161
   RELEASE_VERSION:1.1.6
 /XX.180.36
   RACK:1b
   SCHEMA:99dce53b-487e-3e7b-a958-a1cc48d9f575
   LOAD:5.311963027E10
   DC:us-east
   INTERNAL_IP:XX.123.112
   STATUS:NORMAL,1808575600
   RPC_ADDRESS:XX.123.112
   RELEASE_VERSION:1.1.6
 /XX.163.36
   RACK:1c
   SCHEMA:99dce53b-487e-3e7b-a958-a1cc48d9f575
   LOAD:4.7516755022E10
   DC:us-east
   INTERNAL_IP:XX.163.180
   STATUS:NORMAL,92159807707754167187997289514579132865
   RPC_ADDRESS:XX.163.180
   RELEASE_VERSION:1.1.6
 /XX.31.234
   RACK:1d
   SCHEMA:99dce53b-487e-3e7b-a958-a1cc48d9f575
   LOAD:4.7954372912E10
   DC:us-east
   INTERNAL_IP:XX.192.159
   STATUS:NORMAL,99249023685273718510150927169407637270
   RPC_ADDRESS:XX.192.159
   RELEASE_VERSION:1.1.6
 /XX.197.209
   RACK:1d
   SCHEMA:99dce53b-487e-3e7b-a958-a1cc48d9f575
   LOAD:4.4558968005E10
   DC:us-east
   INTERNAL_IP:XX.66.205
   STATUS:NORMAL,141784319550391026443072753098378663700
   RPC_ADDRESS:XX.66.205
  

[jira] [Commented] (CASSANDRA-5400) Allow multiple ports to gossip from a single IP address

2013-04-03 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13621161#comment-13621161
 ] 

Brandon Williams commented on CASSANDRA-5400:
-

Gossiper isn't happy with this:

{noformat}
ERROR 18:39:10,774 Exception in thread Thread[GossipStage:1,5,main]
java.lang.AssertionError: 
org.apache.cassandra.exceptions.InvalidRequestException: Unknown identifier 
rpc_address
at 
org.apache.cassandra.cql3.QueryProcessor.processInternal(QueryProcessor.java:183)
at 
org.apache.cassandra.db.SystemTable.updatePeerInfo(SystemTable.java:354)
at 
org.apache.cassandra.service.StorageService.onChange(StorageService.java:1215)
at 
org.apache.cassandra.service.StorageService.onJoin(StorageService.java:1938)
at 
org.apache.cassandra.gms.Gossiper.handleMajorStateChange(Gossiper.java:817)
at 
org.apache.cassandra.gms.Gossiper.applyStateLocally(Gossiper.java:895)
at 
org.apache.cassandra.gms.GossipDigestAck2VerbHandler.doVerb(GossipDigestAck2VerbHandler.java:49)
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:56)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
at java.lang.Thread.run(Thread.java:662)
Caused by: org.apache.cassandra.exceptions.InvalidRequestException: Unknown 
identifier rpc_address
at 
org.apache.cassandra.cql3.statements.UpdateStatement.prepare(UpdateStatement.java:299)
at 
org.apache.cassandra.cql3.statements.UpdateStatement.prepare(UpdateStatement.java:358)
at 
org.apache.cassandra.cql3.QueryProcessor.getStatement(QueryProcessor.java:270)
at 
org.apache.cassandra.cql3.QueryProcessor.processInternal(QueryProcessor.java:169)
{noformat}

 Allow multiple ports to gossip from a single IP address
 ---

 Key: CASSANDRA-5400
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5400
 Project: Cassandra
  Issue Type: New Feature
Affects Versions: 2.0
Reporter: Carl Yeksigian
Assignee: Carl Yeksigian
 Fix For: 2.0

 Attachments: 5400.txt, 5400-v2.txt, 5400-v3.patch


 If a fat client is running on the same machine as a Cassandra node, the fat 
 client must be allocated a new IP address. However, since the node is now a 
 part of the gossip, the other nodes in the ring must be able to talk to it. 
 This means that a local only address (127.0.0.n) won't actually work for the 
 rest of the ring.
 This also would allow for multiple Cassandra service instances to run on a 
 single machine, or from a group of machines behind a NAT.
 The change is simple in concept: instead of using an InetAddress, use a 
 different class. Instead of using an InetSocketAddress, which would still tie 
 us to using InetAddress, I've added a new class, CassandraInstanceEndpoint. 
 The serializer allows for reading a serialized Inet4Address or Inet6Address; 
 also, the message service can still communicate with 
 non-CassandraInstanceEndpoint aware code.
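The core of the change — endpoint identity becoming an (address, port) pair instead of a bare InetAddress — can be illustrated with a toy value class. This is a hedged sketch, not the patch's CassandraInstanceEndpoint; the name InstanceEndpoint and the use of a String address are invented to keep the example self-contained.

```java
import java.util.Objects;

// Illustrative sketch of the idea behind the patch (not its actual code):
// if equals/hashCode cover both address and port, several Cassandra
// instances can gossip from the same IP and still be distinct endpoints.
class InstanceEndpoint {
    final String address; // dotted-quad or hostname; String keeps the sketch simple
    final int port;

    InstanceEndpoint(String address, int port) {
        this.address = address;
        this.port = port;
    }

    @Override public boolean equals(Object o) {
        if (!(o instanceof InstanceEndpoint)) return false;
        InstanceEndpoint e = (InstanceEndpoint) o;
        return port == e.port && address.equals(e.address);
    }

    @Override public int hashCode() { return Objects.hash(address, port); }

    @Override public String toString() { return address + ":" + port; }
}
```

Two instances on 127.0.0.1 with different ports compare unequal, which is exactly what a plain InetAddress key cannot express.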

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (CASSANDRA-5420) Very slow responses to trivial query.

2013-04-03 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams resolved CASSANDRA-5420.
-

Resolution: Invalid

You should raise this question on the user list, not JIRA; this is not a bug.

 Very slow responses to trivial query.
 -

 Key: CASSANDRA-5420
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5420
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.2
 Environment: 4 node cluster with replication=3, 16Gb ram, Xeon CPU 
 E5-2620@ 2.00GHz
Reporter: Igor Ivanov

 Some requests complete fine, but others sometimes take more than 
 60s, which is very weird given the hardware specs.
 Following is CQL3 session with TRACING ON:
 {code}
 cqlsh:footballsite> DESCRIBE TABLE content_list_lookup 
 CREATE TABLE content_list_lookup (
   editionContentId text,
   listId text,
   listKey text,
   PRIMARY KEY (editionContentId, listId)
 ) WITH
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};
 cqlsh:footballsite> SELECT * FROM content_list_lookup WHERE 
 editionContentId = '8bf5cf79-f588-54b4-9114-05023d78d630.pt' ORDER BY 
 listId ASC LIMIT 2 ;
 Tracing session: 3a0d6870-9bfb-11e2-8b38-3183185e53e2
  activity| timestamp| 
 source  | source_elapsed
 -+--+-+
   execute_cql3_query | 18:10:08,760 | 
 10.10.45.60 |  0
Parsing statement | 18:10:08,760 | 
 10.10.45.60 | 30
   Peparing statement | 18:10:08,760 | 
 10.10.45.60 |116
  Sending message to /10.10.45.62 | 18:10:08,760 | 
 10.10.45.60 |394
   Message received from /10.10.45.60 | 18:10:08,767 | 
 10.10.45.62 |108
  Executing single-partition query on content_list_lookup | 18:10:08,768 | 
 10.10.45.62 |   1661
 Acquiring sstable references | 18:10:08,768 | 
 10.10.45.62 |   1736
Merging memtable contents | 18:10:08,768 | 
 10.10.45.62 |   1859
   Key cache hit for sstable 3643 | 18:10:08,769 | 
 10.10.45.62 |   2000
   Key cache hit for sstable 3453 | 18:10:08,769 | 
 10.10.45.62 |   2542
   Key cache hit for sstable 3451 | 18:10:08,770 | 
 10.10.45.62 |   3009
   Merging data from memtables and 3 sstables | 18:10:08,770 | 
 10.10.45.62 |   3417
 Read 0 live cells and 720 tombstoned | 18:11:00,203 | 
 10.10.45.62 |   51436194
   Enqueuing response to /10.10.45.60 | 18:11:00,203 | 
 10.10.45.62 |   51436661
  Sending message to /10.10.45.60 | 18:11:00,203 | 
 10.10.45.62 |   51436861
   Message received from /10.10.45.62 | 18:11:00,240 | 
 10.10.45.60 |   51480364
Processing response from /10.10.45.62 | 18:11:00,241 | 
 10.10.45.60 |   51480509
 Request complete | 18:11:00,240 | 
 10.10.45.60 |   51480626
 cqlsh:footballsite> SELECT * FROM content_list_lookup WHERE 
 editionContentId = '8bf5cf79-f588-54b4-9114-05023d78d630.pt' ORDER BY 
 listId ASC LIMIT 2 ;
 Request did not complete within rpc_timeout.
 Tracing session: a8266330-9bff-11e2-8b38-3183185e53e2
  activity| timestamp| 
 source  | source_elapsed
 -+--+-+
   execute_cql3_query | 18:41:51,460 | 
 10.10.45.60 |  0
Parsing statement | 18:41:51,460 | 
 10.10.45.60 | 37
   Peparing statement | 18:41:51,460 | 
 10.10.45.60 |141
  Sending message to /10.10.45.59 | 18:41:51,461 | 
 10.10.45.60 |377
   Message received from /10.10.45.60 | 18:41:51,462 | 
 10.10.45.59 | 24
  Executing single-partition query on content_list_lookup | 18:41:51,462 | 
 10.10.45.59 |190
 

git commit: OCD-ninja CFMetaData.BatchlogCF -> CFMetaData.BatchlogCf fix

2013-04-03 Thread aleksey
Updated Branches:
  refs/heads/cassandra-1.2 3f883bf4d -> 5a43c39f3


OCD-ninja CFMetaData.BatchlogCF -> CFMetaData.BatchlogCf fix


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5a43c39f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5a43c39f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5a43c39f

Branch: refs/heads/cassandra-1.2
Commit: 5a43c39f38c8ea519b9be9c7ed9782035ecba383
Parents: 3f883bf
Author: Aleksey Yeschenko alek...@apache.org
Authored: Wed Apr 3 22:52:57 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Wed Apr 3 22:52:57 2013 +0300

--
 .../org/apache/cassandra/config/CFMetaData.java|2 +-
 .../org/apache/cassandra/config/KSMetaData.java|2 +-
 .../org/apache/cassandra/db/BatchlogManager.java   |4 ++--
 3 files changed, 4 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5a43c39f/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java 
b/src/java/org/apache/cassandra/config/CFMetaData.java
index 18cdd93..31720b2 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -214,7 +214,7 @@ public final class CFMetaData
+ "  PRIMARY KEY (session_id, event_id)"
+ ");", Tracing.TRACE_KS);
 
-public static final CFMetaData BatchlogCF = compile(16, "CREATE TABLE " + SystemTable.BATCHLOG_CF + " ("
+public static final CFMetaData BatchlogCf = compile(16, "CREATE TABLE " + SystemTable.BATCHLOG_CF + " ("
 + "id uuid PRIMARY KEY,"
 + "written_at timestamp,"
 + "data blob"

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5a43c39f/src/java/org/apache/cassandra/config/KSMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/KSMetaData.java 
b/src/java/org/apache/cassandra/config/KSMetaData.java
index 138e24b..a5bc4a0 100644
--- a/src/java/org/apache/cassandra/config/KSMetaData.java
+++ b/src/java/org/apache/cassandra/config/KSMetaData.java
@@ -78,7 +78,7 @@ public final class KSMetaData
 
 public static KSMetaData systemKeyspace()
 {
-List<CFMetaData> cfDefs = Arrays.asList(CFMetaData.BatchlogCF,
+List<CFMetaData> cfDefs = Arrays.asList(CFMetaData.BatchlogCf,
 CFMetaData.RangeXfersCf,
 CFMetaData.LocalCf,
 CFMetaData.PeersCf,

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5a43c39f/src/java/org/apache/cassandra/db/BatchlogManager.java
--
diff --git a/src/java/org/apache/cassandra/db/BatchlogManager.java 
b/src/java/org/apache/cassandra/db/BatchlogManager.java
index 843cf44..9da9b2d 100644
--- a/src/java/org/apache/cassandra/db/BatchlogManager.java
+++ b/src/java/org/apache/cassandra/db/BatchlogManager.java
@@ -135,7 +135,7 @@ public class BatchlogManager implements BatchlogManagerMBean
 ByteBuffer writtenAt = LongType.instance.decompose(timestamp / 1000);
 ByteBuffer data = serializeRowMutations(mutations);
 
-ColumnFamily cf = ColumnFamily.create(CFMetaData.BatchlogCF);
+ColumnFamily cf = ColumnFamily.create(CFMetaData.BatchlogCf);
 cf.addColumn(new Column(WRITTEN_AT, writtenAt, timestamp));
 cf.addColumn(new Column(DATA, data, timestamp));
 RowMutation rm = new RowMutation(Table.SYSTEM_KS, 
UUIDType.instance.decompose(uuid));
@@ -253,7 +253,7 @@ public class BatchlogManager implements BatchlogManagerMBean
 private static ByteBuffer columnName(String name)
 {
 ByteBuffer raw = UTF8Type.instance.decompose(name);
-return 
CFMetaData.BatchlogCF.getCfDef().getColumnNameBuilder().add(raw).build();
+return 
CFMetaData.BatchlogCf.getCfDef().getColumnNameBuilder().add(raw).build();
 }
 
 private static List<Row> getRangeSlice(IDiskAtomFilter columnFilter)



[2/2] git commit: Merge branch 'cassandra-1.2' into trunk

2013-04-03 Thread aleksey
Merge branch 'cassandra-1.2' into trunk

Conflicts:
src/java/org/apache/cassandra/db/BatchlogManager.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4063bcc8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4063bcc8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4063bcc8

Branch: refs/heads/trunk
Commit: 4063bcc850ccc18a0238c8abdceb776a7ef8da71
Parents: e9c3ee9 5a43c39
Author: Aleksey Yeschenko alek...@apache.org
Authored: Wed Apr 3 22:57:34 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Wed Apr 3 22:57:34 2013 +0300

--
 .../org/apache/cassandra/config/CFMetaData.java|2 +-
 .../org/apache/cassandra/config/KSMetaData.java|2 +-
 .../org/apache/cassandra/db/BatchlogManager.java   |4 ++--
 3 files changed, 4 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4063bcc8/src/java/org/apache/cassandra/config/CFMetaData.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4063bcc8/src/java/org/apache/cassandra/config/KSMetaData.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4063bcc8/src/java/org/apache/cassandra/db/BatchlogManager.java
--
diff --cc src/java/org/apache/cassandra/db/BatchlogManager.java
index 500021d,9da9b2d..a5139e8
--- a/src/java/org/apache/cassandra/db/BatchlogManager.java
+++ b/src/java/org/apache/cassandra/db/BatchlogManager.java
@@@ -134,11 -135,13 +134,11 @@@ public class BatchlogManager implement
  ByteBuffer writtenAt = LongType.instance.decompose(timestamp / 1000);
  ByteBuffer data = serializeRowMutations(mutations);
  
- ColumnFamily cf = 
ArrayBackedSortedColumns.factory.create(CFMetaData.BatchlogCF);
 -ColumnFamily cf = ColumnFamily.create(CFMetaData.BatchlogCf);
 -cf.addColumn(new Column(WRITTEN_AT, writtenAt, timestamp));
++ColumnFamily cf = 
ArrayBackedSortedColumns.factory.create(CFMetaData.BatchlogCf);
  cf.addColumn(new Column(DATA, data, timestamp));
 -RowMutation rm = new RowMutation(Table.SYSTEM_KS, 
UUIDType.instance.decompose(uuid));
 -rm.add(cf);
 +cf.addColumn(new Column(WRITTEN_AT, writtenAt, timestamp));
  
 -return rm;
 +return new RowMutation(Table.SYSTEM_KS, 
UUIDType.instance.decompose(uuid), cf);
  }
  
 private static ByteBuffer serializeRowMutations(Collection<RowMutation> 
mutations)



[1/2] git commit: OCD-ninja CFMetaData.BatchlogCF -> CFMetaData.BatchlogCf fix

2013-04-03 Thread aleksey
Updated Branches:
  refs/heads/trunk e9c3ee979 -> 4063bcc85


OCD-ninja CFMetaData.BatchlogCF -> CFMetaData.BatchlogCf fix


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5a43c39f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5a43c39f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5a43c39f

Branch: refs/heads/trunk
Commit: 5a43c39f38c8ea519b9be9c7ed9782035ecba383
Parents: 3f883bf
Author: Aleksey Yeschenko alek...@apache.org
Authored: Wed Apr 3 22:52:57 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Wed Apr 3 22:52:57 2013 +0300

--
 .../org/apache/cassandra/config/CFMetaData.java|2 +-
 .../org/apache/cassandra/config/KSMetaData.java|2 +-
 .../org/apache/cassandra/db/BatchlogManager.java   |4 ++--
 3 files changed, 4 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5a43c39f/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java 
b/src/java/org/apache/cassandra/config/CFMetaData.java
index 18cdd93..31720b2 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -214,7 +214,7 @@ public final class CFMetaData
+ "  PRIMARY KEY (session_id, event_id)"
+ ");", Tracing.TRACE_KS);
 
-public static final CFMetaData BatchlogCF = compile(16, "CREATE TABLE " + SystemTable.BATCHLOG_CF + " ("
+public static final CFMetaData BatchlogCf = compile(16, "CREATE TABLE " + SystemTable.BATCHLOG_CF + " ("
 + "id uuid PRIMARY KEY,"
 + "written_at timestamp,"
 + "data blob"

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5a43c39f/src/java/org/apache/cassandra/config/KSMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/KSMetaData.java 
b/src/java/org/apache/cassandra/config/KSMetaData.java
index 138e24b..a5bc4a0 100644
--- a/src/java/org/apache/cassandra/config/KSMetaData.java
+++ b/src/java/org/apache/cassandra/config/KSMetaData.java
@@ -78,7 +78,7 @@ public final class KSMetaData
 
 public static KSMetaData systemKeyspace()
 {
-List<CFMetaData> cfDefs = Arrays.asList(CFMetaData.BatchlogCF,
+List<CFMetaData> cfDefs = Arrays.asList(CFMetaData.BatchlogCf,
 CFMetaData.RangeXfersCf,
 CFMetaData.LocalCf,
 CFMetaData.PeersCf,

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5a43c39f/src/java/org/apache/cassandra/db/BatchlogManager.java
--
diff --git a/src/java/org/apache/cassandra/db/BatchlogManager.java 
b/src/java/org/apache/cassandra/db/BatchlogManager.java
index 843cf44..9da9b2d 100644
--- a/src/java/org/apache/cassandra/db/BatchlogManager.java
+++ b/src/java/org/apache/cassandra/db/BatchlogManager.java
@@ -135,7 +135,7 @@ public class BatchlogManager implements BatchlogManagerMBean
 ByteBuffer writtenAt = LongType.instance.decompose(timestamp / 1000);
 ByteBuffer data = serializeRowMutations(mutations);
 
-ColumnFamily cf = ColumnFamily.create(CFMetaData.BatchlogCF);
+ColumnFamily cf = ColumnFamily.create(CFMetaData.BatchlogCf);
 cf.addColumn(new Column(WRITTEN_AT, writtenAt, timestamp));
 cf.addColumn(new Column(DATA, data, timestamp));
 RowMutation rm = new RowMutation(Table.SYSTEM_KS, 
UUIDType.instance.decompose(uuid));
@@ -253,7 +253,7 @@ public class BatchlogManager implements BatchlogManagerMBean
 private static ByteBuffer columnName(String name)
 {
 ByteBuffer raw = UTF8Type.instance.decompose(name);
-return 
CFMetaData.BatchlogCF.getCfDef().getColumnNameBuilder().add(raw).build();
+return 
CFMetaData.BatchlogCf.getCfDef().getColumnNameBuilder().add(raw).build();
 }
 
 private static List<Row> getRangeSlice(IDiskAtomFilter columnFilter)



git commit: OCD-ninja CFMetaData.CompactionLogCF -> CFMetaData.CompactionLogCf fix

2013-04-03 Thread aleksey
Updated Branches:
  refs/heads/trunk 4063bcc85 -> 4271d19a1


OCD-ninja CFMetaData.CompactionLogCF -> CFMetaData.CompactionLogCf fix


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4271d19a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4271d19a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4271d19a

Branch: refs/heads/trunk
Commit: 4271d19a1580b717fa7f548b00b27fef54949898
Parents: 4063bcc
Author: Aleksey Yeschenko alek...@apache.org
Authored: Wed Apr 3 23:01:20 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Wed Apr 3 23:01:20 2013 +0300

--
 .../org/apache/cassandra/config/CFMetaData.java|2 +-
 .../org/apache/cassandra/config/KSMetaData.java|2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4271d19a/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java 
b/src/java/org/apache/cassandra/config/CFMetaData.java
index ca16f7c..74228a1 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -236,7 +236,7 @@ public final class CFMetaData
  + "requested_at timestamp"
  + ") WITH COMMENT='ranges requested for transfer here'");
 
-public static final CFMetaData CompactionLogCF = compile(18, "CREATE TABLE " + SystemTable.COMPACTION_LOG + " ("
+public static final CFMetaData CompactionLogCf = compile(18, "CREATE TABLE " + SystemTable.COMPACTION_LOG + " ("
 + "id uuid PRIMARY KEY,"
 + "keyspace_name text,"
 + "columnfamily_name text,"

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4271d19a/src/java/org/apache/cassandra/config/KSMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/KSMetaData.java 
b/src/java/org/apache/cassandra/config/KSMetaData.java
index 6628714..1e58864 100644
--- a/src/java/org/apache/cassandra/config/KSMetaData.java
+++ b/src/java/org/apache/cassandra/config/KSMetaData.java
@@ -88,7 +88,7 @@ public final class KSMetaData
 CFMetaData.SchemaKeyspacesCf,
 
CFMetaData.SchemaColumnFamiliesCf,
 CFMetaData.SchemaColumnsCf,
-CFMetaData.CompactionLogCF,
+CFMetaData.CompactionLogCf,
 CFMetaData.OldStatusCf,
 CFMetaData.OldHintsCf,
 CFMetaData.OldMigrationsCf,



[jira] [Commented] (CASSANDRA-5397) Updates to PerRowSecondaryIndex don't use most current values

2013-04-03 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13621283#comment-13621283
 ] 

Jonathan Ellis commented on CASSANDRA-5397:
---

Is this intended for 1.2 or 2.0?  I'm getting lots of conflicts on both.

 Updates to PerRowSecondaryIndex don't use most current values 
 --

 Key: CASSANDRA-5397
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5397
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.3
Reporter: Sam Tunnicliffe
Assignee: Sam Tunnicliffe
Priority: Minor
 Attachments: 5397.txt


 The way that updates to secondary indexes are performed using 
 SecondaryIndexManager.Updater is flawed for PerRowSecondaryIndexes. Unlike 
 PerColumnSecondaryIndexes, which only require the old & new values for a 
 single column, the expectation is that a PerRow indexer can be given just a 
 key which it will use to retrieve the entire row (or as many columns as it 
 requires) and perform its indexing on those columns. As the indexes are 
 updated before the memtable atomic swap occurs, a per-row indexer may only 
 read the previous values for the row, not the new ones that are being 
 written. In the case of an insert, there is no previous value and so nothing 
 is added to the index.
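The ordering problem described above can be reduced to a toy model: an indexer that re-reads the store before the new row has been swapped in sees the stale (or empty) row, while an indexer handed the incoming columns directly indexes the right values. This is a hedged illustration only, not Cassandra code; the maps and method names stand in for the memtable and SecondaryIndexManager.Updater.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the bug (invented names, not Cassandra's classes): "store"
// plays the memtable, "index" plays the per-row secondary index.
class PerRowIndexSketch {
    final Map<String, Map<String, String>> store = new HashMap<>();
    final Map<String, Map<String, String>> index = new HashMap<>();

    // Flawed order from the ticket: index by re-reading the store
    // before the write is applied, so a fresh insert indexes nothing.
    void writeIndexingFromStore(String key, Map<String, String> row) {
        index.put(key, new HashMap<>(store.getOrDefault(key, new HashMap<>())));
        store.put(key, row); // the "memtable swap" happens after indexing
    }

    // Sketch of a fix: hand the indexer the incoming columns directly.
    void writeIndexingNewValues(String key, Map<String, String> row) {
        index.put(key, new HashMap<>(row));
        store.put(key, row);
    }
}
```

In the flawed path an insert leaves an empty index entry; in the fixed path the index reflects the newly written columns.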

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


git commit: fix regression when requested range does not overlap an sstable at all patch by marcuse; reviewed by jbellis for CASSANDRA-5407

2013-04-03 Thread jbellis
Updated Branches:
  refs/heads/trunk 4271d19a1 -> e306a87b7


fix regression when requested range does not overlap an sstable at all
patch by marcuse; reviewed by jbellis for CASSANDRA-5407


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e306a87b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e306a87b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e306a87b

Branch: refs/heads/trunk
Commit: e306a87b7d8e1ef15a5269006d7706a4c97d1798
Parents: 4271d19
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Apr 3 15:23:07 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Apr 3 15:23:25 2013 -0500

--
 CHANGES.txt|2 +-
 .../io/sstable/SSTableBoundedScanner.java  |5 +-
 .../apache/cassandra/io/sstable/SSTableReader.java |   46 ++-
 .../cassandra/io/sstable/SSTableReaderTest.java|   25 
 4 files changed, 73 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e306a87b/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 060561f..004607e 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -2,7 +2,7 @@
  * Add yaml network topology snitch for mixed ec2/other envs (CASSANDRA-5339)
  * Log when a node is down longer than the hint window (CASSANDRA-4554)
  * Optimize tombstone creation for ExpiringColumns (CASSANDRA-4917)
- * Improve LeveledScanner work estimation (CASSANDRA-5250)
+ * Improve LeveledScanner work estimation (CASSANDRA-5250, 5407)
  * Replace compaction lock with runWithCompactionsDisabled (CASSANDRA-3430)
  * Change Message IDs to ints (CASSANDRA-5307)
  * Move sstable level information into the Stats component, removing the

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e306a87b/src/java/org/apache/cassandra/io/sstable/SSTableBoundedScanner.java
--
diff --git 
a/src/java/org/apache/cassandra/io/sstable/SSTableBoundedScanner.java 
b/src/java/org/apache/cassandra/io/sstable/SSTableBoundedScanner.java
index af6a654..592ce28 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableBoundedScanner.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableBoundedScanner.java
@@ -35,10 +35,11 @@ public class SSTableBoundedScanner extends SSTableScanner
 private final Iterator<Pair<Long, Long>> rangeIterator;
 private Pair<Long, Long> currentRange;
 
-SSTableBoundedScanner(SSTableReader sstable, Range<Token> range)
+SSTableBoundedScanner(SSTableReader sstable, Iterator<Pair<Long, Long>> 
rangeIterator)
 {
 super(sstable);
-this.rangeIterator = 
sstable.getPositionsForRanges(Collections.singletonList(range)).iterator();
+assert rangeIterator.hasNext(); // use EmptyCompactionScanner otherwise
+this.rangeIterator = rangeIterator;
 currentRange = rangeIterator.next();
 dfile.seek(currentRange.left);
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e306a87b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableReader.java b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
index e73bd67..4bdfebb 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
@@ -1019,8 +1019,11 @@ public class SSTableReader extends SSTable
     {
         if (range == null)
             return getScanner();
-
-        return new SSTableBoundedScanner(this, range);
+        Iterator<Pair<Long, Long>> rangeIterator = getPositionsForRanges(Collections.singletonList(range)).iterator();
+        if (rangeIterator.hasNext())
+            return new SSTableBoundedScanner(this, rangeIterator);
+        else
+            return new EmptyCompactionScanner(getFilename());
     }
 
 public FileDataInput getFileDataInput(long position)
@@ -1286,4 +1289,43 @@ public class SSTableReader extends SSTable
             FileUtils.closeQuietly(file);
         }
     }
+
+    protected class EmptyCompactionScanner implements ICompactionScanner
+    {
+        private final String filename;
+
+        public EmptyCompactionScanner(String filename)
+        {
+            this.filename = filename;
+        }
+
+        public long getLengthInBytes()
+        {
+            return 0;
+        }
+
+        public long getCurrentPosition()
+        {
+            return 0;
+        }
+
+        public String getBackingFiles()
+        {
+            return filename;
+        }
+
+        public boolean hasNext()
+        {
+            return false;
+  
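The patch above replaces a direct SSTableBoundedScanner construction with a guard that returns an empty scanner when the requested range maps to no on-disk positions, so the bounded scanner can safely assume a non-empty iterator. A minimal, self-contained Java sketch of the same null-object pattern (the Scanner interface, BoundedScanner, and scannerFor are illustrative stand-ins, not the actual Cassandra types):

```java
import java.util.Collections;
import java.util.Iterator;
import java.util.List;

public class EmptyScannerSketch
{
    // Illustrative stand-in for ICompactionScanner: only what the sketch needs.
    interface Scanner { long lengthInBytes(); boolean hasNext(); }

    // Scanner over concrete positions; asserts non-emptiness like the patch does.
    static final class BoundedScanner implements Scanner
    {
        private final Iterator<long[]> ranges;
        BoundedScanner(Iterator<long[]> ranges)
        {
            assert ranges.hasNext(); // use EmptyScanner otherwise
            this.ranges = ranges;
        }
        public long lengthInBytes() { return 1; }
        public boolean hasNext() { return ranges.hasNext(); }
    }

    // Null object returned when the range covers nothing on disk.
    static final class EmptyScanner implements Scanner
    {
        public long lengthInBytes() { return 0; }
        public boolean hasNext() { return false; }
    }

    // Mirrors the new getScanner logic: empty position list => EmptyScanner.
    static Scanner scannerFor(List<long[]> positions)
    {
        Iterator<long[]> it = positions.iterator();
        return it.hasNext() ? new BoundedScanner(it) : new EmptyScanner();
    }

    public static void main(String[] args)
    {
        System.out.println(scannerFor(Collections.<long[]>emptyList()).lengthInBytes()); // prints 0
        System.out.println(scannerFor(List.of(new long[]{ 0L, 10L })).hasNext()); // prints true
    }
}
```

The design choice is the usual null-object one: callers get a harmless scanner that reports zero length instead of a constructor assertion failure on an empty range.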

[jira] [Updated] (CASSANDRA-5423) PasswordAuthenticator is incompatible with various Cassandra clients

2013-04-03 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-5423:
-

Attachment: 5423.txt

 PasswordAuthenticator is incompatible with various Cassandra clients
 

 Key: CASSANDRA-5423
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5423
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.3
Reporter: Sven Delmas
Assignee: Aleksey Yeschenko
 Attachments: 5423.txt


 Evidently the old authenticator allowed setting the keyspace and then logging 
 in. With org.apache.cassandra.auth.PasswordAuthenticator you have to log in 
 and then set the keyspace.
 For backwards compatibility it would be good to allow setting the keyspace 
 before login, and perform the actual operation/validation later, after the login. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5423) PasswordAuthenticator is incompatible with various Cassandra clients

2013-04-03 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-5423:
-

Priority: Minor  (was: Major)

 PasswordAuthenticator is incompatible with various Cassandra clients
 

 Key: CASSANDRA-5423
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5423
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.3
Reporter: Sven Delmas
Assignee: Aleksey Yeschenko
Priority: Minor
  Labels: security
 Fix For: 1.2.4

 Attachments: 5423.txt


 Evidently the old authenticator allowed setting the keyspace and then logging 
 in. With org.apache.cassandra.auth.PasswordAuthenticator you have to log in 
 and then set the keyspace.
 For backwards compatibility it would be good to allow setting the keyspace 
 before login, and perform the actual operation/validation later, after the login. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5332) 1.2.3 Nodetool reports 1.1.10 nodes as down

2013-04-03 Thread Arya Goudarzi (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13621336#comment-13621336
 ] 

Arya Goudarzi commented on CASSANDRA-5332:
--

nodetool gossipinfo from a 1.1.10 node:

a...@adbc-f397f09e.us-east-1.test:~
test $ nodetool gossipinfo
/X.X.X.246
  STATUS:NORMAL,14178431955039102644307275311465584410
  RPC_ADDRESS:X.X.X.215
  LOAD:4.8634508696E10
  SCHEMA:b3be65e0-e128-3d8b-af85-9bd8e3585825
  RELEASE_VERSION:1.1.10
  INTERNAL_IP:X.X.X.215
  DC:us-east
  RACK:1d
/X.X.X.165
  STATUS:NORMAL,85070591730234615865843651859750628460
  RPC_ADDRESS:X.X.X.200
  LOAD:4.9058836394E10
  SCHEMA:b3be65e0-e128-3d8b-af85-9bd8e3585825
  RELEASE_VERSION:1.1.10
  DC:us-east
  INTERNAL_IP:X.X.X.200
  RACK:1b
/X.X.X.224
  STATUS:NORMAL,7089215977519551322153637656637080005
  RPC_ADDRESS:X.X.X.16
  LOAD:7.6291067719E10
  SCHEMA:b3be65e0-e128-3d8b-af85-9bd8e3585825
  RELEASE_VERSION:1.1.10
  INTERNAL_IP:X.X.X.16
  DC:us-east
  RACK:1c
/X.X.X.213
  STATUS:NORMAL,99249023685273718510150927169407637270
  RPC_ADDRESS:X.X.X.223
  LOAD:5.5689972443E10
  SCHEMA:b3be65e0-e128-3d8b-af85-9bd8e3585825
  RELEASE_VERSION:1.1.10
  INTERNAL_IP:X.X.X.223
  DC:us-east
  RACK:1d
/X.X.X.77
  STATUS:NORMAL,42535295865117307932921825930779602030
  RPC_ADDRESS:X.X.X.23
  LOAD:3.4091226262E10
  SCHEMA:b3be65e0-e128-3d8b-af85-9bd8e3585825
  RELEASE_VERSION:1.1.10
  INTERNAL_IP:X.X.X.23
  DC:us-east
  RACK:1b
/X.X.X.230
  X5:k̬p
  RPC_ADDRESS:X.X.X.224
  X4:296d0ef1-57c3-418d-95a2-28efdefdb71a
  X2:0.0
  SCHEMA:0b46de13-b46e-3234-9264-e4203165a16b
  RELEASE_VERSION:1.2.3
  X3:6
  INTERNAL_IP:X.X.X.224
  DC:us-east
  RACK:1b
/X.X.X.173
  STATUS:NORMAL,92159807707754167187997289514579132865
  RPC_ADDRESS:X.X.X.22
  LOAD:5.3358992463E10
  SCHEMA:b3be65e0-e128-3d8b-af85-9bd8e3585825
  RELEASE_VERSION:1.1.10
  DC:us-east
  INTERNAL_IP:X.X.X.22
  RACK:1c
/X.X.X.247
  STATUS:NORMAL,127605887595351923798765477788721654890
  RPC_ADDRESS:X.X.X.96
  LOAD:4.3159749625E10
  SCHEMA:b3be65e0-e128-3d8b-af85-9bd8e3585825
  RELEASE_VERSION:1.1.10
  INTERNAL_IP:X.X.X.96
  DC:us-east
  RACK:1b
/X.X.X.201
  STATUS:NORMAL,49624511842636859255075463585608106435
  RPC_ADDRESS:X.X.X.31
  LOAD:3.6405666294E10
  SCHEMA:b3be65e0-e128-3d8b-af85-9bd8e3585825
  RELEASE_VERSION:1.1.10
  INTERNAL_IP:X.X.X.31
  DC:us-east
  RACK:1c
/X.X.X.23
  STATUS:NORMAL,141784319550391026443072753098378663700
  RPC_ADDRESS:X.X.X.246
  LOAD:5.0944074814E10
  SCHEMA:b3be65e0-e128-3d8b-af85-9bd8e3585825
  RELEASE_VERSION:1.1.10
  DC:us-east
  INTERNAL_IP:X.X.X.246
  RACK:1d
/X.X.X.246
  STATUS:NORMAL,134695103572871475120919115443550159295
  RPC_ADDRESS:X.X.X.24
  LOAD:4.418898364E10
  SCHEMA:b3be65e0-e128-3d8b-af85-9bd8e3585825
  RELEASE_VERSION:1.1.10
  DC:us-east
  INTERNAL_IP:X.X.X.24
  RACK:1c
/X.X.X.32
  RPC_ADDRESS:X.X.X.116
  X4:fbd2cdbf-a69c-4d32-9fdd-efdeaf85337c
  SCHEMA:0b46de13-b46e-3234-9264-e4203165a16b
  RELEASE_VERSION:1.2.3
  X3:6
  INTERNAL_IP:X.X.X.116
  DC:us-east
  RACK:1d

This morning I decided to upgrade a second node to 1.2.3. After the upgrade, the 
previously upgraded 1.2.3 node, which I'd restarted with -Dcassandra.load_ring_state=false, 
started seeing everybody in the ring, but only itself and the other upgraded 
node were marked as Up. The rest of the 1.1.10 nodes were marked as Down. This is 
the gossipinfo for a 1.2.3 node:

/X.X.X.246
  SCHEMA:b3be65e0-e128-3d8b-af85-9bd8e3585825
  DC:us-east
  INTERNAL_IP:X.X.X.215
  RPC_ADDRESS:X.X.X.215
  LOAD:4.8634508696E10
  RELEASE_VERSION:1.1.10
  RACK:1d
  STATUS:NORMAL,14178431955039102644307275311465584410
/X.X.X.165
  SCHEMA:b3be65e0-e128-3d8b-af85-9bd8e3585825
  DC:us-east
  INTERNAL_IP:X.X.X.200
  RPC_ADDRESS:X.X.X.200
  LOAD:4.9058836394E10
  RELEASE_VERSION:1.1.10
  RACK:1b
  STATUS:NORMAL,85070591730234615865843651859750628460
/X.X.X.224
  SCHEMA:b3be65e0-e128-3d8b-af85-9bd8e3585825
  DC:us-east
  INTERNAL_IP:X.X.X.16
  RPC_ADDRESS:X.X.X.16
  LOAD:7.6291067719E10
  RELEASE_VERSION:1.1.10
  RACK:1c
  STATUS:NORMAL,7089215977519551322153637656637080005
/X.X.X.213
  SCHEMA:b3be65e0-e128-3d8b-af85-9bd8e3585825
  DC:us-east
  INTERNAL_IP:X.X.X.223
  RPC_ADDRESS:X.X.X.223
  LOAD:5.5689972443E10
  RELEASE_VERSION:1.1.10
  RACK:1d
  STATUS:NORMAL,99249023685273718510150927169407637270
/X.X.X.77
  SCHEMA:b3be65e0-e128-3d8b-af85-9bd8e3585825
  DC:us-east
  INTERNAL_IP:X.X.X.23
  RPC_ADDRESS:X.X.X.23
  LOAD:3.4091226262E10
  RELEASE_VERSION:1.1.10
  RACK:1b
  STATUS:NORMAL,42535295865117307932921825930779602030
/X.X.X.230
  SCHEMA:1c8b46ab-4aa5-3383-ad1a-d57a00314402
  DC:us-east
  INTERNAL_IP:10.202.150.224
  SEVERITY:-2.710505431213761E-20
  RPC_ADDRESS:10.202.150.224
  NET_VERSION:6
  LOAD:5.1597691774E10
  HOST_ID:296d0ef1-57c3-418d-95a2-28efdefdb71a
  RELEASE_VERSION:1.2.3
  RACK:1b
  STATUS:NORMAL,1808575600
/X.X.X.173
  SCHEMA:b3be65e0-e128-3d8b-af85-9bd8e3585825
  DC:us-east
  INTERNAL_IP:X.X.X.22
  

[jira] [Updated] (CASSANDRA-5412) Lots of deleted rows came back to life after upgrade from 1.1.6 to 1.1.10

2013-04-03 Thread Arya Goudarzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arya Goudarzi updated CASSANDRA-5412:
-

Affects Version/s: 1.1.10

 Lots of deleted rows came back to life after upgrade from 1.1.6 to 1.1.10
 -

 Key: CASSANDRA-5412
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5412
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.10
 Environment: Ubuntu 10.04 LTS
 Sun Java 6 u39
 1.1.6 and 1.1.10
Reporter: Arya Goudarzi

 Also per discussion here:  
 http://www.mail-archive.com/user@cassandra.apache.org/msg28905.html
 I was not able to find any answers as to why a simple upgrade process could 
 bring a lot (millions) of deleted rows back to life. We have successful 
 repairs running on our cluster every night. Unless repair is not doing its 
 job, it is not possible, to the best of my knowledge, for the deleted rows to 
 come back unless there is a bug. I had previously experienced this issue 
 when I upgraded our sandbox cluster. I failed at every single attempt to 
 reproduce the issue by restoring a fresh cluster from a snapshot and 
 performing the upgrade from 1.1.6 to 1.1.10. I even exercised this with the 
 snapshot of our production cluster before upgrading and was not successful. 
 So, I finally made the decision to upgrade, and guess what?! Millions of 
 deleted rows came back after the upgrade. 
 This time I confirmed the timestamps of the deleted rows that came back; they 
 were actually from before the time they were deleted. So, this is just like when 
 tombstones get purged before they get propagated. We use nanosecond-precision 
 timestamps (19 digits).
 My discussion on the mailing list did not lead anywhere, though Aaron helped 
 me find another possible way this could happen, via Hinted Handoff, which I 
 filed a separate ticket for. I don't believe this is an issue for us, as we 
 don't have nodes down for long periods of time. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5412) Lots of deleted rows came back to life after upgrade from 1.1.6 to 1.1.10

2013-04-03 Thread Arya Goudarzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arya Goudarzi updated CASSANDRA-5412:
-

Component/s: Core

 Lots of deleted rows came back to life after upgrade from 1.1.6 to 1.1.10
 -

 Key: CASSANDRA-5412
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5412
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.10
 Environment: Ubuntu 10.04 LTS
 Sun Java 6 u39
 1.1.6 and 1.1.10
Reporter: Arya Goudarzi

 Also per discussion here:  
 http://www.mail-archive.com/user@cassandra.apache.org/msg28905.html
 I was not able to find any answers as to why a simple upgrade process could 
 bring a lot (millions) of deleted rows back to life. We have successful 
 repairs running on our cluster every night. Unless repair is not doing its 
 job, it is not possible, to the best of my knowledge, for the deleted rows to 
 come back unless there is a bug. I had previously experienced this issue 
 when I upgraded our sandbox cluster. I failed at every single attempt to 
 reproduce the issue by restoring a fresh cluster from a snapshot and 
 performing the upgrade from 1.1.6 to 1.1.10. I even exercised this with the 
 snapshot of our production cluster before upgrading and was not successful. 
 So, I finally made the decision to upgrade, and guess what?! Millions of 
 deleted rows came back after the upgrade. 
 This time I confirmed the timestamps of the deleted rows that came back; they 
 were actually from before the time they were deleted. So, this is just like when 
 tombstones get purged before they get propagated. We use nanosecond-precision 
 timestamps (19 digits).
 My discussion on the mailing list did not lead anywhere, though Aaron helped 
 me find another possible way this could happen, via Hinted Handoff, which I 
 filed a separate ticket for. I don't believe this is an issue for us, as we 
 don't have nodes down for long periods of time. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5412) Lots of deleted rows came back to life after upgrade from 1.1.6 to 1.1.10

2013-04-03 Thread Arya Goudarzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arya Goudarzi updated CASSANDRA-5412:
-

Environment: 
Ubuntu 10.04 LTS
Sun Java 6 u39
1.1.6 => 1.1.10

  was:
Ubuntu 10.04 LTS
Sun Java 6 u39
1.1.6 and 1.1.10


 Lots of deleted rows came back to life after upgrade from 1.1.6 to 1.1.10
 -

 Key: CASSANDRA-5412
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5412
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.10
 Environment: Ubuntu 10.04 LTS
 Sun Java 6 u39
 1.1.6 => 1.1.10
Reporter: Arya Goudarzi

 Also per discussion here:  
 http://www.mail-archive.com/user@cassandra.apache.org/msg28905.html
 I was not able to find any answers as to why a simple upgrade process could 
 bring a lot (millions) of deleted rows back to life. We have successful 
 repairs running on our cluster every night. Unless repair is not doing its 
 job, it is not possible, to the best of my knowledge, for the deleted rows to 
 come back unless there is a bug. I had previously experienced this issue 
 when I upgraded our sandbox cluster. I failed at every single attempt to 
 reproduce the issue by restoring a fresh cluster from a snapshot and 
 performing the upgrade from 1.1.6 to 1.1.10. I even exercised this with the 
 snapshot of our production cluster before upgrading and was not successful. 
 So, I finally made the decision to upgrade, and guess what?! Millions of 
 deleted rows came back after the upgrade. 
 This time I confirmed the timestamps of the deleted rows that came back; they 
 were actually from before the time they were deleted. So, this is just like when 
 tombstones get purged before they get propagated. We use nanosecond-precision 
 timestamps (19 digits).
 My discussion on the mailing list did not lead anywhere, though Aaron helped 
 me find another possible way this could happen, via Hinted Handoff, which I 
 filed a separate ticket for. I don't believe this is an issue for us, as we 
 don't have nodes down for long periods of time. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5425) disablebinary nodetool command

2013-04-03 Thread Joaquin Casares (JIRA)
Joaquin Casares created CASSANDRA-5425:
--

 Summary: disablebinary nodetool command
 Key: CASSANDRA-5425
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5425
 Project: Cassandra
  Issue Type: Bug
  Components: Drivers
Affects Versions: 1.2.3
Reporter: Joaquin Casares
Priority: Minor


The following commands are available via `nodetool`:
{CODE}
  disablehandoff - Disable the future hints storing on the current node
  disablegossip  - Disable gossip (effectively marking the node dead)
  disablethrift  - Disable thrift server
{CODE}

Is it possible to get disablebinary added to help with the testing of binary 
client drivers?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5319) Add new GC Log Rotation options to cassandra-env.sh

2013-04-03 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5319:
--

  Component/s: Packaging
Fix Version/s: 1.2.3

 Add new GC Log Rotation options to cassandra-env.sh
 ---

 Key: CASSANDRA-5319
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5319
 Project: Cassandra
  Issue Type: Improvement
  Components: Packaging
Reporter: Jeremiah Jordan
Assignee: Jeremiah Jordan
Priority: Trivial
 Fix For: 1.2.3

 Attachments: CASSANDRA-5319-1.1.diff, CASSANDRA-5319-1.2.diff, 
 CASSANDRA-5319-trunk.diff


 JDK u34 and later added built in GC log rotation:
 http://www.oracle.com/technetwork/java/javase/2col/6u34-bugfixes-1733379.html
 http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6941923
 1) -XX:+UseGCLogFileRotation   must be used with -Xloggc:<filename>
 2) -XX:NumberOfGCLogFiles=<number of files>   must be >=1, default is one
 3) -XX:GCLogFileSize=<number>M (or K)   default will be set to 512K
 If UseGCLogFileRotation is set and -Xloggc gives a file name, log rotation 
 depends on the other flag settings:
 if NumberOfGCLogFiles = 1, use the same file, rotating the log when it reaches 
 the file capacity (set by GCLogFileSize) in <filename>.0
 if NumberOfGCLogFiles > 1, rotate between files <filename>.0, <filename>.1, 
 ..., <filename>.n-1
 We should add these to the commented-out GC logging options in 
 cassandra-env.sh

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
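The rotation flags in the ticket above are JVM options destined for cassandra-env.sh. As a quick reference, here is a hedged Java sketch that assembles the option set the ticket describes; the log path, file count, and size are placeholder values, and the full flag spelling `-XX:+UseGCLogFileRotation` is assumed from the JDK rather than taken verbatim from the ticket text:

```java
import java.util.List;

public class GcLogOptions
{
    // Builds the GC-log rotation options described in the ticket.
    // filename, fileCount, and fileSize are caller-supplied placeholders.
    static List<String> rotationOpts(String filename, int fileCount, String fileSize)
    {
        if (fileCount < 1)
            throw new IllegalArgumentException("NumberOfGCLogFiles must be >= 1");
        return List.of(
            "-Xloggc:" + filename,             // rotation requires an explicit log file
            "-XX:+UseGCLogFileRotation",
            "-XX:NumberOfGCLogFiles=" + fileCount,
            "-XX:GCLogFileSize=" + fileSize);  // e.g. "10M"; the JVM default is 512K
    }

    public static void main(String[] args)
    {
        // Example flag set as it might be appended to JVM_OPTS.
        System.out.println(String.join(" ", rotationOpts("/var/log/cassandra/gc.log", 10, "10M")));
    }
}
```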


[jira] [Commented] (CASSANDRA-5371) Perform size-tiered compactions in L0

2013-04-03 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13621414#comment-13621414
 ] 

Jonathan Ellis commented on CASSANDRA-5371:
---

Alternate implementation pushed to 
http://github.com/jbellis/cassandra/commits/5371 with the following 
improvements:

- Only applies STCS to L0 if L0 gets behind (defined as accumulating more than 
MAX_COMPACTING_L0 sstables)
- Performs true STCS, rather than compacting in sets of four and then never again
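The two bullet points describe a fallback rule: leveled compaction checks L0's backlog and switches to size-tiered candidate selection only past a threshold. A hedged Java sketch of that decision follows; the threshold value and the crude "within 2x of the smallest" bucketing are illustrative assumptions, not the actual patch:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class L0FallbackSketch
{
    static final int MAX_COMPACTING_L0 = 32; // illustrative threshold, not the patch's value

    // Returns sstable sizes to compact together: if L0 is behind, pick a
    // size-tiered bucket from L0; otherwise leave selection to normal LCS.
    static List<Long> l0Candidates(List<Long> l0SizesInBytes)
    {
        if (l0SizesInBytes.size() <= MAX_COMPACTING_L0)
            return List.of(); // L0 is keeping up; normal leveled selection applies

        // Crude size-tiering: sort by size, take the bucket of tables
        // similar in size to the smallest one.
        List<Long> sorted = new ArrayList<>(l0SizesInBytes);
        sorted.sort(Comparator.naturalOrder());
        List<Long> bucket = new ArrayList<>();
        long smallest = sorted.get(0);
        for (long s : sorted)
            if (s <= smallest * 2) // "similar size" = within 2x of the smallest
                bucket.add(s);
        return bucket;
    }

    public static void main(String[] args)
    {
        // Only 3 sstables in L0: not behind, so no STCS bucket is chosen.
        System.out.println(l0Candidates(List.of(1L, 2L, 3L)).isEmpty()); // prints true
    }
}
```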

 Perform size-tiered compactions in L0
 -

 Key: CASSANDRA-5371
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5371
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
Assignee: T Jake Luciani
 Fix For: 2.0

 Attachments: HybridCompactionStrategy.java


 If LCS gets behind, read performance deteriorates as we have to check bloom 
 filters on many sstables in L0.  For wide rows, this can mean having to seek 
 for each one since the BF doesn't help us reject much.
 Performing size-tiered compaction in L0 will mitigate this until we can catch 
 up on merging it into higher levels.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5371) Perform size-tiered compactions in L0

2013-04-03 Thread Carl Yeksigian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Yeksigian updated CASSANDRA-5371:
--

Reviewer: tjake
Assignee: Jonathan Ellis  (was: T Jake Luciani)

 Perform size-tiered compactions in L0
 -

 Key: CASSANDRA-5371
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5371
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
 Fix For: 2.0

 Attachments: HybridCompactionStrategy.java


 If LCS gets behind, read performance deteriorates as we have to check bloom 
 filters on many sstables in L0.  For wide rows, this can mean having to seek 
 for each one since the BF doesn't help us reject much.
 Performing size-tiered compaction in L0 will mitigate this until we can catch 
 up on merging it into higher levels.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (CASSANDRA-5425) disablebinary nodetool command

2013-04-03 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams reassigned CASSANDRA-5425:
---

Assignee: Michał Michalski

That seems reasonable.

 disablebinary nodetool command
 --

 Key: CASSANDRA-5425
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5425
 Project: Cassandra
  Issue Type: Bug
  Components: Drivers
Affects Versions: 1.2.3
Reporter: Joaquin Casares
Assignee: Michał Michalski
Priority: Minor
  Labels: datastax_qa

 The following commands are available via `nodetool`:
 {CODE}
   disablehandoff - Disable the future hints storing on the current 
 node
   disablegossip  - Disable gossip (effectively marking the node dead)
   disablethrift  - Disable thrift server
 {CODE}
 Is it possible to get disablebinary added to help with the testing of binary 
 client drivers?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira



git commit: semi-ninja trivial IOE cleanup

2013-04-03 Thread aleksey
Updated Branches:
  refs/heads/cassandra-1.2 5a43c39f3 -> 6a03b1103


semi-ninja trivial IOE cleanup

patch by Aleksey Yeschenko; reviewed by Brandon Williams


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6a03b110
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6a03b110
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6a03b110

Branch: refs/heads/cassandra-1.2
Commit: 6a03b1103513bffbf8bc4dacb5abb6b5b3726e37
Parents: 5a43c39
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu Apr 4 01:54:48 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu Apr 4 01:54:48 2013 +0300

--
 .../org/apache/cassandra/cql/QueryProcessor.java   |   33 +++
 .../cql3/statements/ModificationStatement.java |   45 ++-
 .../cassandra/cql3/statements/SelectStatement.java |   25 ++---
 .../org/apache/cassandra/db/CounterMutation.java   |2 +-
 src/java/org/apache/cassandra/db/ReadCommand.java  |2 +-
 .../org/apache/cassandra/db/ReadVerbHandler.java   |   25 +++--
 .../cassandra/db/RetriedSliceFromReadCommand.java  |6 --
 .../cassandra/db/SliceByNamesReadCommand.java  |2 +-
 .../cassandra/service/IResponseResolver.java   |6 +-
 .../service/RangeSliceResponseResolver.java|5 +-
 .../org/apache/cassandra/service/ReadCallback.java |   10 +--
 .../apache/cassandra/service/RowDataResolver.java  |7 +--
 .../cassandra/service/RowDigestResolver.java   |5 +-
 .../org/apache/cassandra/service/StorageProxy.java |   10 ++--
 .../apache/cassandra/thrift/CassandraServer.java   |   17 --
 15 files changed, 59 insertions(+), 141 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6a03b110/src/java/org/apache/cassandra/cql/QueryProcessor.java
--
diff --git a/src/java/org/apache/cassandra/cql/QueryProcessor.java b/src/java/org/apache/cassandra/cql/QueryProcessor.java
index 01e5ba3..5977301 100644
--- a/src/java/org/apache/cassandra/cql/QueryProcessor.java
+++ b/src/java/org/apache/cassandra/cql/QueryProcessor.java
@@ -116,14 +116,7 @@ public class QueryProcessor
             }
         }
 
-        try
-        {
-            return StorageProxy.read(commands, select.getConsistencyLevel());
-        }
-        catch (IOException e)
-        {
-            throw new RuntimeException(e);
-        }
+        return StorageProxy.read(commands, select.getConsistencyLevel());
     }
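The hunk above deletes a try/catch that existed only to re-wrap a checked IOException in RuntimeException; once the callee stopped declaring IOException, the wrapper became dead weight. A small Java sketch of the before/after caller shape (readAll and process are invented for illustration, standing in for StorageProxy.read and its CQL caller):

```java
import java.util.List;

public class IoeCleanupSketch
{
    // After the cleanup: the callee declares only exceptions it can actually
    // throw, so there is no "throws IOException" here at all.
    static List<String> readAll(List<String> commands)
    {
        return commands; // stand-in for StorageProxy.read(commands, cl)
    }

    // Caller shape after the patch: a plain call, with no
    // catch (IOException e) { throw new RuntimeException(e); } boilerplate.
    static List<String> process(List<String> commands)
    {
        return readAll(commands);
    }

    public static void main(String[] args)
    {
        System.out.println(process(List.of("a", "b")).size()); // prints 2
    }
}
```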
 
     private static SortedSet<ByteBuffer> getColumnNames(SelectStatement select, CFMetaData metadata, List<ByteBuffer> variables)
@@ -144,7 +137,6 @@ public class QueryProcessor
     private static List<org.apache.cassandra.db.Row> multiRangeSlice(CFMetaData metadata, SelectStatement select, List<ByteBuffer> variables)
     throws ReadTimeoutException, UnavailableException, InvalidRequestException
     {
-        List<org.apache.cassandra.db.Row> rows;
         IPartitioner<?> p = StorageService.getPartitioner();
 
         AbstractType<?> keyType = Schema.instance.getCFMetaData(metadata.ksName, select.getColumnFamily()).getKeyValidator();
@@ -187,21 +179,14 @@ public class QueryProcessor
                       ? select.getNumRecords() + 1
                       : select.getNumRecords();
 
-        try
-        {
-            rows = StorageProxy.getRangeSlice(new RangeSliceCommand(metadata.ksName,
-                                                                    select.getColumnFamily(),
-                                                                    null,
-                                                                    columnFilter,
-                                                                    bounds,
-                                                                    expressions,
-                                                                    limit),
-                                              select.getConsistencyLevel());
-        }
-        catch (IOException e)
-        {
-            throw new RuntimeException(e);
-        }
+        List<org.apache.cassandra.db.Row> rows = StorageProxy.getRangeSlice(new RangeSliceCommand(metadata.ksName,
+                                                                                                  select.getColumnFamily(),
+                                                                                                  null,
+                                                                                                  columnFilter,
+                                                                                                  bounds,
+                                                                                                  expressions,
+ 


[2/2] git commit: Merge branch 'cassandra-1.2' into trunk

2013-04-03 Thread aleksey
Merge branch 'cassandra-1.2' into trunk

Conflicts:
src/java/org/apache/cassandra/cql/QueryProcessor.java
src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
src/java/org/apache/cassandra/db/RetriedSliceFromReadCommand.java
src/java/org/apache/cassandra/service/ReadCallback.java
src/java/org/apache/cassandra/service/RowDataResolver.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/27392484
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/27392484
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/27392484

Branch: refs/heads/trunk
Commit: 273924847b255f7c358defa41fa5906d55a22025
Parents: e306a87 6a03b11
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu Apr 4 02:10:43 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu Apr 4 02:13:26 2013 +0300

--
 .../org/apache/cassandra/cql/QueryProcessor.java   |   31 +++
 .../cql3/statements/ModificationStatement.java |   45 ++-
 .../cassandra/cql3/statements/SelectStatement.java |   25 ++---
 .../org/apache/cassandra/db/CounterMutation.java   |2 +-
 src/java/org/apache/cassandra/db/ReadCommand.java  |2 +-
 .../org/apache/cassandra/db/ReadVerbHandler.java   |   25 +++--
 .../cassandra/db/RetriedSliceFromReadCommand.java  |1 -
 .../cassandra/db/SliceByNamesReadCommand.java  |2 +-
 .../cassandra/service/AbstractReadExecutor.java|2 +-
 .../cassandra/service/IResponseResolver.java   |6 +-
 .../service/RangeSliceResponseResolver.java|5 +-
 .../org/apache/cassandra/service/ReadCallback.java |8 +--
 .../apache/cassandra/service/RowDataResolver.java  |5 +-
 .../cassandra/service/RowDigestResolver.java   |3 +-
 .../org/apache/cassandra/service/StorageProxy.java |   10 ++--
 .../apache/cassandra/thrift/CassandraServer.java   |   16 -
 16 files changed, 58 insertions(+), 130 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/27392484/src/java/org/apache/cassandra/cql/QueryProcessor.java
--
diff --cc src/java/org/apache/cassandra/cql/QueryProcessor.java
index e2fba6f,5977301..b365644
--- a/src/java/org/apache/cassandra/cql/QueryProcessor.java
+++ b/src/java/org/apache/cassandra/cql/QueryProcessor.java
@@@ -183,20 -179,14 +175,13 @@@ public class QueryProcesso
? select.getNumRecords() + 1
: select.getNumRecords();
  
- try
- {
- rows = StorageProxy.getRangeSlice(new RangeSliceCommand(metadata.ksName,
-                                                         select.getColumnFamily(),
-                                                         columnFilter,
-                                                         bounds,
-                                                         expressions,
-                                                         limit),
-                                   select.getConsistencyLevel());
- }
- catch (IOException e)
- {
- throw new RuntimeException(e);
- }
+ List<org.apache.cassandra.db.Row> rows = StorageProxy.getRangeSlice(new RangeSliceCommand(metadata.ksName,
+                                                                                           select.getColumnFamily(),
 -                                                                                          null,
+                                                                                           columnFilter,
+                                                                                           bounds,
+                                                                                           expressions,
+                                                                                           limit),
+                                                                     select.getConsistencyLevel());
  
  // if start key was set and relation was greater than
  if (select.getKeyStart() != null && !select.includeStartKey() && !rows.isEmpty())

http://git-wip-us.apache.org/repos/asf/cassandra/blob/27392484/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
--
diff --cc src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
index 6118937,28a003e..87843d2
--- a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
+++ 

[1/2] git commit: semi-ninja trivial IOE cleanup

2013-04-03 Thread aleksey
Updated Branches:
  refs/heads/trunk e306a87b7 -> 273924847


semi-ninja trivial IOE cleanup

patch by Aleksey Yeschenko; reviewed by Brandon Williams


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6a03b110
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6a03b110
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6a03b110

Branch: refs/heads/trunk
Commit: 6a03b1103513bffbf8bc4dacb5abb6b5b3726e37
Parents: 5a43c39
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu Apr 4 01:54:48 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu Apr 4 01:54:48 2013 +0300

--
 .../org/apache/cassandra/cql/QueryProcessor.java   |   33 +++
 .../cql3/statements/ModificationStatement.java |   45 ++-
 .../cassandra/cql3/statements/SelectStatement.java |   25 ++---
 .../org/apache/cassandra/db/CounterMutation.java   |2 +-
 src/java/org/apache/cassandra/db/ReadCommand.java  |2 +-
 .../org/apache/cassandra/db/ReadVerbHandler.java   |   25 +++--
 .../cassandra/db/RetriedSliceFromReadCommand.java  |6 --
 .../cassandra/db/SliceByNamesReadCommand.java  |2 +-
 .../cassandra/service/IResponseResolver.java   |6 +-
 .../service/RangeSliceResponseResolver.java|5 +-
 .../org/apache/cassandra/service/ReadCallback.java |   10 +--
 .../apache/cassandra/service/RowDataResolver.java  |7 +--
 .../cassandra/service/RowDigestResolver.java   |5 +-
 .../org/apache/cassandra/service/StorageProxy.java |   10 ++--
 .../apache/cassandra/thrift/CassandraServer.java   |   17 --
 15 files changed, 59 insertions(+), 141 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6a03b110/src/java/org/apache/cassandra/cql/QueryProcessor.java
--
diff --git a/src/java/org/apache/cassandra/cql/QueryProcessor.java 
b/src/java/org/apache/cassandra/cql/QueryProcessor.java
index 01e5ba3..5977301 100644
--- a/src/java/org/apache/cassandra/cql/QueryProcessor.java
+++ b/src/java/org/apache/cassandra/cql/QueryProcessor.java
@@ -116,14 +116,7 @@ public class QueryProcessor
 }
 }
 
-try
-{
-return StorageProxy.read(commands, select.getConsistencyLevel());
-}
-catch (IOException e)
-{
-throw new RuntimeException(e);
-}
+return StorageProxy.read(commands, select.getConsistencyLevel());
 }
 
private static SortedSet<ByteBuffer> getColumnNames(SelectStatement select, CFMetaData metadata, List<ByteBuffer> variables)
@@ -144,7 +137,6 @@ public class QueryProcessor
 private static List<org.apache.cassandra.db.Row> multiRangeSlice(CFMetaData metadata, SelectStatement select, List<ByteBuffer> variables)
 throws ReadTimeoutException, UnavailableException, InvalidRequestException
 {
-    List<org.apache.cassandra.db.Row> rows;
     IPartitioner<?> p = StorageService.getPartitioner();
 
     AbstractType<?> keyType = Schema.instance.getCFMetaData(metadata.ksName, select.getColumnFamily()).getKeyValidator();
@@ -187,21 +179,14 @@ public class QueryProcessor
                   ? select.getNumRecords() + 1
                   : select.getNumRecords();
 
-        try
-        {
-            rows = StorageProxy.getRangeSlice(new RangeSliceCommand(metadata.ksName,
-                                                                    select.getColumnFamily(),
-                                                                    null,
-                                                                    columnFilter,
-                                                                    bounds,
-                                                                    expressions,
-                                                                    limit),
-                                              select.getConsistencyLevel());
-        }
-        catch (IOException e)
-        {
-            throw new RuntimeException(e);
-        }
+        List<org.apache.cassandra.db.Row> rows = StorageProxy.getRangeSlice(new RangeSliceCommand(metadata.ksName,
+                                                                                                  select.getColumnFamily(),
+                                                                                                  null,
+                                                                                                  columnFilter,
+                                                                                                  bounds,
+                                                                                                  expressions,
+

git commit: IndexHelper.skipBloomFilters won't skip non-SHA filters (backport of patch2 to cassandra-1.2) patch by Carl Yeksigian; reviewed by jasobrown for CASSANDRA-5385

2013-04-03 Thread jasobrown
Updated Branches:
  refs/heads/cassandra-1.2 6a03b1103 - 213063399


IndexHelper.skipBloomFilters won't skip non-SHA filters (backport of patch2 to 
cassandra-1.2)
patch by Carl Yeksigian; reviewed by jasobrown for CASSANDRA-5385


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/21306339
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/21306339
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/21306339

Branch: refs/heads/cassandra-1.2
Commit: 213063399cee1433ed73b563ba38f91e9374aacf
Parents: 6a03b11
Author: Jason Brown jasedbr...@gmail.com
Authored: Wed Apr 3 16:12:54 2013 -0700
Committer: Jason Brown jasedbr...@gmail.com
Committed: Wed Apr 3 16:14:41 2013 -0700

--
 .../db/columniterator/IndexedSliceReader.java  |2 +-
 .../db/columniterator/SimpleSliceReader.java   |2 +-
 .../apache/cassandra/io/sstable/Descriptor.java|2 ++
 .../apache/cassandra/io/sstable/IndexHelper.java   |   12 
 .../io/sstable/SSTableIdentityIterator.java|4 ++--
 5 files changed, 18 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/21306339/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java
--
diff --git 
a/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java 
b/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java
index 9b34a6a..7289ab0 100644
--- a/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java
+++ b/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java
@@ -96,7 +96,7 @@ class IndexedSliceReader extends AbstractIterator<OnDiskAtom> implements OnDiskAtomIterator
 else
 {
     setToRowStart(sstable, indexEntry, input);
-    IndexHelper.skipBloomFilter(file, version.filterType);
+    IndexHelper.skipSSTableBloomFilter(file, version);
     this.indexes = IndexHelper.deserializeIndex(file);
     this.emptyColumnFamily = ColumnFamily.create(sstable.metadata);
     emptyColumnFamily.delete(DeletionInfo.serializer().deserializeFromSSTable(file, version));

http://git-wip-us.apache.org/repos/asf/cassandra/blob/21306339/src/java/org/apache/cassandra/db/columniterator/SimpleSliceReader.java
--
diff --git 
a/src/java/org/apache/cassandra/db/columniterator/SimpleSliceReader.java 
b/src/java/org/apache/cassandra/db/columniterator/SimpleSliceReader.java
index 132f9cb..b30d360 100644
--- a/src/java/org/apache/cassandra/db/columniterator/SimpleSliceReader.java
+++ b/src/java/org/apache/cassandra/db/columniterator/SimpleSliceReader.java
@@ -75,7 +75,7 @@ class SimpleSliceReader extends AbstractIterator<OnDiskAtom> implements OnDiskAtomIterator
 Descriptor.Version version = sstable.descriptor.version;
 if (!version.hasPromotedIndexes)
 {
-    IndexHelper.skipBloomFilter(file, version.filterType);
+    IndexHelper.skipSSTableBloomFilter(file, version);
     IndexHelper.skipIndex(file);
 }
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/21306339/src/java/org/apache/cassandra/io/sstable/Descriptor.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/Descriptor.java 
b/src/java/org/apache/cassandra/io/sstable/Descriptor.java
index f0709d8..0338044 100644
--- a/src/java/org/apache/cassandra/io/sstable/Descriptor.java
+++ b/src/java/org/apache/cassandra/io/sstable/Descriptor.java
@@ -85,6 +85,7 @@ public class Descriptor
 public final boolean hasPromotedIndexes;
 public final FilterFactory.Type filterType;
 public final boolean hasAncestors;
+public final boolean hasBloomFilterSizeInHeader;
 
 public Version(String version)
 {
@@ -108,6 +109,7 @@ public class Descriptor
 filterType = FilterFactory.Type.MURMUR2;
 else
 filterType = FilterFactory.Type.MURMUR3;
+hasBloomFilterSizeInHeader = version.compareTo("ia") < 0;
 }
 
 /**

http://git-wip-us.apache.org/repos/asf/cassandra/blob/21306339/src/java/org/apache/cassandra/io/sstable/IndexHelper.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/IndexHelper.java 
b/src/java/org/apache/cassandra/io/sstable/IndexHelper.java
index 14b2cda..b81f7b8 100644
--- a/src/java/org/apache/cassandra/io/sstable/IndexHelper.java
+++ b/src/java/org/apache/cassandra/io/sstable/IndexHelper.java
@@ -36,6 +36,18 @@ import 

[jira] [Commented] (CASSANDRA-5385) IndexHelper.skipBloomFilters won't skip non-SHA filters

2013-04-03 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13621486#comment-13621486
 ] 

Jason Brown commented on CASSANDRA-5385:


Backported and committed to cassandra-1.2. Thanks for the effort, Carl!

 IndexHelper.skipBloomFilters won't skip non-SHA filters
 ---

 Key: CASSANDRA-5385
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5385
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.0, 2.0
Reporter: Carl Yeksigian
Assignee: Carl Yeksigian
 Fix For: 1.2.4, 2.0

 Attachments: 5385.patch, 5385-v2.patch


 Currently, if the bloom filter is not of SHA type, we do not properly skip 
 the bytes. We need to read out the number of bytes, as happens in the Murmur 
 deserializer, then skip that many bytes instead of just skipping the hash 
 size. The version needs to be passed into the method as well, so that it 
 knows what type of index it is, and does the appropriate skipping.
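The fix described above (read out how many bytes the serializer wrote, then skip exactly that many) can be sketched as follows. The on-disk layout here — an int count of longs followed by the long values — is an assumption for illustration, not the actual format IndexHelper handles:

```java
import java.io.DataInput;
import java.io.IOException;

public class SkipBloomFilterSketch
{
    // Hypothetical layout: [int longCount][longCount * 8 bytes of bitset].
    // Instead of skipping a fixed hash size, read the count first and skip
    // exactly the number of bytes the serializer wrote.
    public static void skipBloomFilter(DataInput in) throws IOException
    {
        int longCount = in.readInt();   // number of longs in the serialized bitset
        in.skipBytes(longCount * 8);    // skip the filter bits themselves
    }
}
```

The point is that the reader never has to understand the filter's hash function; it only needs the length prefix to position the stream past the filter.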

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (CASSANDRA-5385) IndexHelper.skipBloomFilters won't skip non-SHA filters

2013-04-03 Thread Jason Brown (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown resolved CASSANDRA-5385.


Resolution: Fixed

 IndexHelper.skipBloomFilters won't skip non-SHA filters
 ---

 Key: CASSANDRA-5385
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5385
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.0, 2.0
Reporter: Carl Yeksigian
Assignee: Carl Yeksigian
 Fix For: 1.2.4, 2.0

 Attachments: 5385.patch, 5385-v2.patch


 Currently, if the bloom filter is not of SHA type, we do not properly skip 
 the bytes. We need to read out the number of bytes, as happens in the Murmur 
 deserializer, then skip that many bytes instead of just skipping the hash 
 size. The version needs to be passed into the method as well, so that it 
 knows what type of index it is, and does the appropriate skipping.



[jira] [Created] (CASSANDRA-5426) Redesign repair messages

2013-04-03 Thread Yuki Morishita (JIRA)
Yuki Morishita created CASSANDRA-5426:
-

 Summary: Redesign repair messages
 Key: CASSANDRA-5426
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5426
 Project: Cassandra
  Issue Type: Bug
Reporter: Yuki Morishita
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 2.0


Many people have been reporting 'repair hang' when something goes wrong.
Two major causes of hang are 1) validation failure and 2) streaming failure.
Currently, when those failures happen, the failed node does not respond back 
to the repair initiator.
The goal of this ticket is to redesign the message flows around repair so 
that repair never hangs.



[jira] [Updated] (CASSANDRA-5426) Redesign repair messages

2013-04-03 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-5426:
--

Issue Type: Improvement  (was: Bug)

 Redesign repair messages
 

 Key: CASSANDRA-5426
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5426
 Project: Cassandra
  Issue Type: Improvement
Reporter: Yuki Morishita
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 2.0


 Many people have been reporting 'repair hang' when something goes wrong.
 Two major causes of hang are 1) validation failure and 2) streaming failure.
 Currently, when those failures happen, the failed node does not respond back 
 to the repair initiator.
 The goal of this ticket is to redesign the message flows around repair so 
 that repair never hangs.



[jira] [Commented] (CASSANDRA-5426) Redesign repair messages

2013-04-03 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13621523#comment-13621523
 ] 

Yuki Morishita commented on CASSANDRA-5426:
---

Work in progress is pushed to: https://github.com/yukim/cassandra/commits/5426-1

Only implemented for normal case that works.

--

First of all, ActiveRepairService is broken down into several classes and 
placed into o.a.c.repair to make my work easier.

The main design change around messages is that all repair-related messages are 
packed into a RepairMessage and handled in RepairMessageVerbHandler, which is 
executed in the ANTI_ENTROPY stage. A RepairMessage carries a 
RepairMessageHeader and its content (if any). The RepairMessageHeader 
basically indicates which repair job the message belongs to and specifies the 
content type. There are currently 6 content types defined in 
RepairMessageType: VALIDATION_REQUEST, VALIDATION_COMPLETE, VALIDATION_FAILED, 
SYNC_REQUEST, SYNC_COMPLETE, and SYNC_FAILED.

*VALIDATION_REQUEST*

VALIDATION_REQUEST is sent from the repair initiator (coordinator) to request 
a Merkle tree.

*VALIDATION_COMPLETE*/*VALIDATION_FAILED*

The calculated Merkle tree is sent back in a VALIDATION_COMPLETE message. 
A VALIDATION_FAILED message is used when something goes wrong on the remote node.

*SYNC_REQUEST*

SYNC_REQUEST is sent when we have to repair two remote nodes. This is the 
forwarded StreamingRepairTask we have today.

*SYNC_COMPLETE*/*SYNC_FAILED*

When there is no need to exchange data, or data needed to be exchanged and 
streaming has completed, the node (this includes the node that received 
SYNC_REQUEST) sends back SYNC_COMPLETE. If streaming the data fails, it sends 
back SYNC_FAILED.

The whole repair process depends on asynchronous message exchange via 
MessagingService, so there is still a chance of hanging when a node fails to 
deliver a message (see CASSANDRA-5393).

Any feedback is appreciated.
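A minimal sketch of the message taxonomy described above (the constant names follow this comment; the actual classes in o.a.c.repair may differ):

```java
// The six repair message content types from the description above.
public enum RepairMessageType
{
    VALIDATION_REQUEST,   // coordinator asks a replica for a Merkle tree
    VALIDATION_COMPLETE,  // replica returns the calculated Merkle tree
    VALIDATION_FAILED,    // validation went wrong on the remote node
    SYNC_REQUEST,         // ask two remote nodes to sync (forwarded StreamingRepairTask)
    SYNC_COMPLETE,        // nothing to exchange, or streaming finished
    SYNC_FAILED           // streaming the data failed
}
```

Having explicit FAILED variants is what lets the coordinator distinguish "still working" from "gave up", which is the crux of the never-hang goal.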

 Redesign repair messages
 

 Key: CASSANDRA-5426
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5426
 Project: Cassandra
  Issue Type: Improvement
Reporter: Yuki Morishita
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 2.0


 Many people have been reporting 'repair hang' when something goes wrong.
 Two major causes of hang are 1) validation failure and 2) streaming failure.
 Currently, when those failures happen, the failed node would not respond back 
 to the repair initiator.
 The goal of this ticket is to redesign message flows around repair so that 
 repair never hang.



[Cassandra Wiki] Trivial Update of SusannahW by SusannahW

2013-04-03 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on Cassandra Wiki for 
change notification.

The SusannahW page has been changed by SusannahW:
http://wiki.apache.org/cassandra/SusannahW



git commit: fix seeking at the wrong place

2013-04-03 Thread yukim
Updated Branches:
  refs/heads/cassandra-1.2 213063399 - 128177c41


fix seeking at the wrong place


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/128177c4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/128177c4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/128177c4

Branch: refs/heads/cassandra-1.2
Commit: 128177c41248734e76499ef382152fd3c40378bd
Parents: 2130633
Author: Yuki Morishita yu...@apache.org
Authored: Wed Apr 3 19:30:53 2013 -0500
Committer: Yuki Morishita yu...@apache.org
Committed: Wed Apr 3 19:30:53 2013 -0500

--
 .../compress/CompressedFileStreamTask.java |4 +++-
 1 files changed, 3 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/128177c4/src/java/org/apache/cassandra/streaming/compress/CompressedFileStreamTask.java
--
diff --git 
a/src/java/org/apache/cassandra/streaming/compress/CompressedFileStreamTask.java
 
b/src/java/org/apache/cassandra/streaming/compress/CompressedFileStreamTask.java
index 2551ca0..60f5a00 100644
--- 
a/src/java/org/apache/cassandra/streaming/compress/CompressedFileStreamTask.java
+++ 
b/src/java/org/apache/cassandra/streaming/compress/CompressedFileStreamTask.java
@@ -77,6 +77,9 @@ public class CompressedFileStreamTask extends FileStreamTask
 // stream each of the required sections of the file
 for (PairLong, Long section : sections)
 {
+// seek to the beginning of the section when socket channel is 
not available
+if (sc == null)
+file.seek(section.left);
 // length of the section to stream
 long length = section.right - section.left;
 // tracks write progress
@@ -92,7 +95,6 @@ public class CompressedFileStreamTask extends FileStreamTask
 }
 else
 {
-file.seek(section.left);
 // NIO is not available. Fall back to normal streaming.
 // This happens when inter-node encryption is turned 
on.
 if (transferBuffer == null)



[jira] [Resolved] (CASSANDRA-5391) SSL problems with inter-DC communication

2013-04-03 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita resolved CASSANDRA-5391.
---

Resolution: Fixed

Committed, thanks!

 SSL problems with inter-DC communication
 

 Key: CASSANDRA-5391
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5391
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.3
 Environment: $ /etc/alternatives/jre_1.6.0/bin/java -version
 java version 1.6.0_23
 Java(TM) SE Runtime Environment (build 1.6.0_23-b05)
 Java HotSpot(TM) 64-Bit Server VM (build 19.0-b09, mixed mode)
 $ uname -a
 Linux hostname 2.6.32-358.2.1.el6.x86_64 #1 SMP Tue Mar 12 14:18:09 CDT 2013 
 x86_64 x86_64 x86_64 GNU/Linux
 $ cat /etc/redhat-release 
 Scientific Linux release 6.3 (Carbon)
 $ facter | grep ec2
 ...
 ec2_placement = availability_zone=us-east-1d
 ...
 $ rpm -qi cassandra
 cassandra-1.2.3-1.el6.cmp1.noarch
 (custom built rpm from cassandra tarball distribution)
Reporter: Ondřej Černoš
Assignee: Yuki Morishita
Priority: Blocker
 Fix For: 1.2.4

 Attachments: 5391-1.2.3.txt, 5391-1.2.txt, 5391-v2-1.2.txt


 I get SSL and snappy compression errors in a multiple datacenter setup.
 The setup is simple: 3 nodes in AWS east, 3 nodes in Rackspace. I use a 
 slightly modified Ec2MultiRegionSnitch in Rackspace (I just added a regex 
 able to parse the Rackspace/Openstack availability zone, which happens to 
 be in an unusual format).
 During {{nodetool rebuild}} tests I managed to (consistently) trigger the 
 following error:
 {noformat}
 2013-03-19 12:42:16.059+0100 [Thread-13] [DEBUG] 
 IncomingTcpConnection.java(79) 
 org.apache.cassandra.net.IncomingTcpConnection: IOException reading from 
 socket; closing
 java.io.IOException: FAILED_TO_UNCOMPRESS(5)
   at org.xerial.snappy.SnappyNative.throw_error(SnappyNative.java:78)
   at org.xerial.snappy.SnappyNative.rawUncompress(Native Method)
   at org.xerial.snappy.Snappy.rawUncompress(Snappy.java:391)
   at 
 org.apache.cassandra.io.compress.SnappyCompressor.uncompress(SnappyCompressor.java:93)
   at 
 org.apache.cassandra.streaming.compress.CompressedInputStream.decompress(CompressedInputStream.java:101)
   at 
 org.apache.cassandra.streaming.compress.CompressedInputStream.read(CompressedInputStream.java:79)
   at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:337)
   at 
 org.apache.cassandra.utils.BytesReadTracker.readUnsignedShort(BytesReadTracker.java:140)
   at 
 org.apache.cassandra.utils.ByteBufferUtil.readShortLength(ByteBufferUtil.java:361)
   at 
 org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:371)
   at 
 org.apache.cassandra.streaming.IncomingStreamReader.streamIn(IncomingStreamReader.java:160)
   at 
 org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:122)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:226)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.handleStream(IncomingTcpConnection.java:166)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:66)
 {noformat}
 The exception is raised during DB file download. What is strange is the 
 following:
 * the exception is raised only when rebuilding from AWS into Rackspace
 * the exception is raised only when all nodes are up and running in AWS (all 
 3). In other words, if I bootstrap from one or two nodes in AWS, the command 
 succeeds.
 Packet-level inspection revealed malformed packets _on both ends of 
 communication_ (the packet is considered malformed on the machine it 
 originates on).
 Further investigation raised two more concerns:
 * We managed to get another stacktrace when testing the scenario. The 
 exception was raised only once during the tests and was raised when I 
 throttled the inter-datacenter bandwidth to 1Mbps.
 {noformat}
 java.lang.RuntimeException: javax.net.ssl.SSLException: bad record MAC
   at com.google.common.base.Throwables.propagate(Throwables.java:160)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: javax.net.ssl.SSLException: bad record MAC
   at com.sun.net.ssl.internal.ssl.Alerts.getSSLException(Alerts.java:190)
   at 
 com.sun.net.ssl.internal.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1649)
   at 
 com.sun.net.ssl.internal.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1607)
   at 
 com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:859)
   at 
 com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:755)
   at 
 

[jira] [Commented] (CASSANDRA-5371) Perform size-tiered compactions in L0

2013-04-03 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13621653#comment-13621653
 ] 

T Jake Luciani commented on CASSANDRA-5371:
---

Oh good, this is what I wanted the implementation to end up being.

In LeveledManifest.getCompactionCandidates:

I think there is a bug in the size tier candidate checks.  You seem to be size 
tiering across all the non-compacting sstables and not the level0 ones.  I 
think you meant to intersect the level0 sstables with the non-compacting ones. 
You should also add a check after that to make sure the non-compacting level0 
sstables are still > MAX_COMPACTING_L0.

Also, the code only checks for STCS when a higher level is ready to be 
compacted.  Maybe move this check to the top, before the higher-level checks. 
We know the higher levels are seek-bounded, but the code should try to keep up 
with level 0 flushes as much as possible.

 Perform size-tiered compactions in L0
 -

 Key: CASSANDRA-5371
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5371
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
 Fix For: 2.0

 Attachments: HybridCompactionStrategy.java


 If LCS gets behind, read performance deteriorates as we have to check bloom 
 filters on many sstables in L0.  For wide rows, this can mean having to seek 
 for each one since the BF doesn't help us reject much.
 Performing size-tiered compaction in L0 will mitigate this until we can catch 
 up on merging it into higher levels.



[Cassandra Wiki] Trivial Update of TonyaWill by TonyaWill

2013-04-03 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on Cassandra Wiki for 
change notification.

The TonyaWill page has been changed by TonyaWill:
http://wiki.apache.org/cassandra/TonyaWill?action=diffrev1=1rev2=3



[jira] [Commented] (CASSANDRA-5371) Perform size-tiered compactions in L0

2013-04-03 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13621679#comment-13621679
 ] 

Jonathan Ellis commented on CASSANDRA-5371:
---

bq. I think you meant to intersect the level0 sstables with the non-compacting 
ones

Right.  Fix pushed.

bq. the code only checks for STCS when a higher level is ready to be compacted

The idea is, we'd prefer to do normal LCS compaction on L0.  So if the higher 
levels are okay, we'll treat L0 the same as before.  But if we do need to 
compact a higher level, we'll first check and see if L0 is far enough behind 
that we should do an STCS round there as a stop-gap.

bq. You should also add a check after that to make sure the non-compacting 
level0 sstables are still > MAX_COMPACTING_L0

I think it's more correct as written -- basically, we're doing L0 out-of-turn, 
since for max throughput we'd do the higher level next.  So, we'll do L0 STCS 
until it's under MCL0, then we'll go back to the higher levels until we catch 
up and can actually apply leveling to L0.
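The decision order Jonathan describes can be sketched as follows (the method name and threshold constant are illustrative, not the actual LeveledManifest code):

```java
public class L0CompactionChoice
{
    static final int MAX_COMPACTING_L0 = 32;  // illustrative threshold

    // Prefer normal LCS. Only when a higher level is ready to compact do we
    // check whether L0 has fallen far enough behind to warrant doing an STCS
    // round there out-of-turn, as a stop-gap.
    public static boolean preferStcsInL0(boolean higherLevelReady, int nonCompactingL0Count)
    {
        return higherLevelReady && nonCompactingL0Count > MAX_COMPACTING_L0;
    }
}
```

In other words, L0 STCS runs repeatedly while L0 stays above the threshold; once it drops below, the higher levels get their turn until L0 can be leveled normally again.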

 Perform size-tiered compactions in L0
 -

 Key: CASSANDRA-5371
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5371
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
 Fix For: 2.0

 Attachments: HybridCompactionStrategy.java


 If LCS gets behind, read performance deteriorates as we have to check bloom 
 filters on many sstables in L0.  For wide rows, this can mean having to seek 
 for each one since the BF doesn't help us reject much.
 Performing size-tiered compaction in L0 will mitigate this until we can catch 
 up on merging it into higher levels.



[jira] [Updated] (CASSANDRA-5417) Push composites support in the storage engine

2013-04-03 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-5417:
--

Reviewer: tjake

 Push composites support in the storage engine
 -

 Key: CASSANDRA-5417
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5417
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 2.0


 CompositeType happens to be very useful and is now widely used: CQL3 relies 
 heavily on it, and super columns now use it internally too. Besides, 
 CompositeType has been advised as a replacement for super columns on the 
 thrift side for a while, so it's safe to assume that it's generally used 
 there too.
 CompositeType was initially introduced as just another AbstractType, meaning 
 that the storage engine has no knowledge whatsoever of composites being, 
 well, composite. This has the following drawbacks:
 * Because internally a composite value is handled as just a ByteBuffer, we 
 end up doing a lot of extra work. Typically, each time we compare 2 composite 
 values, we end up deserializing the components (which, while it doesn't copy 
 data per se because we just slice the global ByteBuffer, still wastes some 
 cpu cycles and allocates a bunch of ByteBuffer objects). And since compare 
 can be called *a lot*, this is likely not negligible.
 * It makes CQL3 code uglier than necessary. Basically, CQL3 makes extensive 
 use of composites, and since it gets back ByteBuffers from the internal 
 columns, it always has to check whether it's actually a CompositeType or 
 not, and then split it and pick the different parts it needs. It's only an 
 API problem, but having things exposed as composites directly would 
 definitely make things cleaner. In particular, in most cases CQL3 doesn't 
 care whether it has a composite with only one component or a 
 not-really-composite value, but we still always distinguish both cases. 
 Lastly, if we do expose composites more directly internally, it's not a lot 
 more work to better internalize the different parts of the cell name that 
 CQL3 uses (what's the clustering key, what's the actual CQL3 column name, 
 what's the collection element), making things cleaner. Last but not least, 
 there is currently a bunch of places where methods take a ByteBuffer as an 
 argument and it's hard to know whether they expect a cell name or a CQL3 
 column name. This is pretty error prone.
 * It makes it hard (or impossible) to do a number of performance 
 improvements. Consider CASSANDRA-4175: I'm not really sure how you can do it 
 properly (in memory) if cell names are just ByteBuffers (since CQL3 column 
 names are just one of the components in general). We also miss opportunities 
 to share prefixes. If we were able to share prefixes of composite names in 
 memory, we would 1) lower the memory footprint and 2) potentially speed up 
 comparison (of the prefixes) by checking reference equality first (also, 
 doing prefix sharing on disk, which is a separate concern btw, might be 
 easier if we do prefix sharing in memory).
 So I suggest pushing CompositeType support inside the storage engine. 
 Concretely, that means changing the internal {{Column.name}} from ByteBuffer 
 to some CellName type. A CellName would, API-wise, just be a list of 
 ByteBuffers. But in practice, we'd have a specific CellName implementation 
 for not-really-composite names, and the truly composite implementation would 
 allow some prefix sharing. From the external API, however, nothing would 
 change: we would pack the composite as usual before sending it back to the 
 client, but at least internally, comparisons won't have to deserialize the 
 components every time, and the CQL3 code will be cleaner.
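A rough sketch of the CellName idea described in this ticket (all names here are hypothetical; the real design, including prefix sharing for truly composite names, would live in the storage engine):

```java
import java.nio.ByteBuffer;

// API-wise, a cell name is just a list of components.
interface CellName
{
    int componentCount();
    ByteBuffer component(int i);
}

// Implementation for not-really-composite names: a single component, with no
// composite encoding and no per-comparison deserialization needed.
final class SimpleCellName implements CellName
{
    private final ByteBuffer name;

    SimpleCellName(ByteBuffer name)
    {
        this.name = name;
    }

    public int componentCount()
    {
        return 1;
    }

    public ByteBuffer component(int i)
    {
        if (i != 0)
            throw new IndexOutOfBoundsException("single-component name");
        return name;
    }
}
```

A truly composite implementation could hold its components as separate ByteBuffers (possibly shared with other names), so comparisons walk components directly instead of re-slicing a packed buffer each time.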



[Cassandra Wiki] Trivial Update of Allison42 by Allison42

2013-04-03 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on Cassandra Wiki for 
change notification.

The Allison42 page has been changed by Allison42:
http://wiki.apache.org/cassandra/Allison42?action=diffrev1=1rev2=2



[jira] [Updated] (CASSANDRA-5400) Allow multiple ports to gossip from a single IP address

2013-04-03 Thread Carl Yeksigian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Yeksigian updated CASSANDRA-5400:
--

Attachment: 5400-v4.patch

Updated to use the rpc_address field. Also, the cie type is renamed to endpoint 
in the CQL schema definitions.

 Allow multiple ports to gossip from a single IP address
 ---

 Key: CASSANDRA-5400
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5400
 Project: Cassandra
  Issue Type: New Feature
Affects Versions: 2.0
Reporter: Carl Yeksigian
Assignee: Carl Yeksigian
 Fix For: 2.0

 Attachments: 5400.txt, 5400-v2.txt, 5400-v3.patch, 5400-v4.patch


 If a fat client is running on the same machine as a Cassandra node, the fat 
 client must be allocated a new IP address. However, since the node is now a 
 part of the gossip, the other nodes in the ring must be able to talk to it. 
 This means that a local only address (127.0.0.n) won't actually work for the 
 rest of the ring.
 This also would allow for multiple Cassandra service instances to run on a 
 single machine, or from a group of machines behind a NAT.
 The change is simple in concept: instead of using an InetAddress, use a 
 different class. Instead of using an InetSocketAddress, which would still tie 
 us to using InetAddress, I've added a new class, CassandraInstanceEndpoint. 
 The serializer allows for reading a serialized Inet4Address or Inet6Address; 
 also, the message service can still communicate with 
 non-CassandraInstanceEndpoint aware code.
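 The core of the change can be sketched as follows (a hypothetical 
 reconstruction from the description above, not the patch itself): an endpoint 
 keyed on address plus port, so two instances behind one IP compare as distinct 
 gossip participants.

```java
import java.net.InetAddress;
import java.util.Objects;

// Illustrative sketch: identity is (address, port), not just the address,
// so several Cassandra instances can gossip from a single IP.
final class CassandraInstanceEndpoint {
    final InetAddress address;
    final int port;

    CassandraInstanceEndpoint(InetAddress address, int port) {
        this.address = address;
        this.port = port;
    }

    @Override public boolean equals(Object o) {
        if (!(o instanceof CassandraInstanceEndpoint)) return false;
        CassandraInstanceEndpoint e = (CassandraInstanceEndpoint) o;
        return port == e.port && address.equals(e.address);
    }

    @Override public int hashCode() { return Objects.hash(address, port); }

    @Override public String toString() {
        return address.getHostAddress() + ":" + port;
    }
}
```

 With this, a fat client at 127.0.0.1:7001 and a node at 127.0.0.1:7000 are 
 distinct endpoints even though they share the loopback address.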

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5400) Allow multiple ports to gossip from a single IP address

2013-04-03 Thread Carl Yeksigian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Yeksigian updated CASSANDRA-5400:
--

Attachment: (was: 5400-v4.patch)


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5400) Allow multiple ports to gossip from a single IP address

2013-04-03 Thread Carl Yeksigian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Yeksigian updated CASSANDRA-5400:
--

Attachment: 5400-v4.patch


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5125) Support indexes on composite column components

2013-04-03 Thread Carl Yeksigian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13621748#comment-13621748
 ] 

Carl Yeksigian commented on CASSANDRA-5125:
---

I hit a few errors when running the test suite:

Testcase: testCli(org.apache.cassandra.cli.CliTest):Caused an ERROR
java.lang.RuntimeException: org.apache.cassandra.db.marshal.MarshalException: A 
long is exactly 8 bytes: 4
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1533)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
at java.lang.Thread.run(Thread.java:680)
Caused by: org.apache.cassandra.db.marshal.MarshalException: A long is exactly 
8 bytes: 4
at org.apache.cassandra.db.marshal.LongType.getString(LongType.java:69)
at 
org.apache.cassandra.db.index.AbstractSimplePerColumnSecondaryIndex.insert(AbstractSimplePerColumnSecondaryIndex.java:121)
at 
org.apache.cassandra.db.index.SecondaryIndexManager$PerColumnIndexUpdater.update(SecondaryIndexManager.java:623)
at 
org.apache.cassandra.db.AtomicSortedColumns$Holder.addColumn(AtomicSortedColumns.java:313)
at 
org.apache.cassandra.db.AtomicSortedColumns.addAllWithSizeDelta(AtomicSortedColumns.java:168)
at org.apache.cassandra.db.Memtable.resolve(Memtable.java:253)
at org.apache.cassandra.db.Memtable.put(Memtable.java:169)
at 
org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:852)
at org.apache.cassandra.db.Table.apply(Table.java:379)
at org.apache.cassandra.db.Table.apply(Table.java:342)
at org.apache.cassandra.db.RowMutation.apply(RowMutation.java:189)
at 
org.apache.cassandra.service.StorageProxy$6.runMayThrow(StorageProxy.java:667)
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1529)
... 3 more

Testcase: testIndexDeletions(org.apache.cassandra.db.ColumnFamilyStoreTest):
Caused an ERROR
A long is exactly 8 bytes: 4
org.apache.cassandra.db.marshal.MarshalException: A long is exactly 8 bytes: 4
at org.apache.cassandra.db.marshal.LongType.getString(LongType.java:69)
at 
org.apache.cassandra.db.index.AbstractSimplePerColumnSecondaryIndex.insert(AbstractSimplePerColumnSecondaryIndex.java:121)
at 
org.apache.cassandra.db.index.SecondaryIndexManager$PerColumnIndexUpdater.update(SecondaryIndexManager.java:623)
at 
org.apache.cassandra.db.AtomicSortedColumns$Holder.addColumn(AtomicSortedColumns.java:313)
at 
org.apache.cassandra.db.AtomicSortedColumns.addAllWithSizeDelta(AtomicSortedColumns.java:168)
at org.apache.cassandra.db.Memtable.resolve(Memtable.java:253)
at org.apache.cassandra.db.Memtable.put(Memtable.java:169)
at 
org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:852)
at org.apache.cassandra.db.Table.apply(Table.java:379)
at org.apache.cassandra.db.Table.apply(Table.java:342)
at org.apache.cassandra.db.RowMutation.apply(RowMutation.java:189)
at 
org.apache.cassandra.db.ColumnFamilyStoreTest.testIndexDeletions(ColumnFamilyStoreTest.java:301)


Testcase: testIndexUpdate(org.apache.cassandra.db.ColumnFamilyStoreTest):   
Caused an ERROR
Index: 0, Size: 0
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.RangeCheck(ArrayList.java:547)
at java.util.ArrayList.get(ArrayList.java:322)
at 
org.apache.cassandra.db.ColumnFamilyStoreTest.testIndexUpdate(ColumnFamilyStoreTest.java:398)

 Support indexes on composite column components
 --

 Key: CASSANDRA-5125
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5125
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
 Fix For: 2.0

 Attachments: 0001-Refactor-aliases-into-column_metadata.txt, 
 0002-Generalize-CompositeIndex-for-all-column-type.txt, 
 0003-Handle-new-type-of-IndexExpression.txt, 
 0004-Handle-partition-key-indexing.txt


 Given
 {code}
 CREATE TABLE foo (
   a int,
   b int,
   c int,
   PRIMARY KEY (a, b)
 );
 {code}
 We should support {{CREATE INDEX ON foo(b)}}.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

