[jira] [Updated] (CASSANDRA-3378) Allow configuration of storage protocol socket buffer

2013-01-29 Thread Jason Brown (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-3378:
---

Attachment: 3378-v3.patch

Minor cleanup of Michał's patch. Rebased against the current 1.2 branch. 
Restored the Buffered*outStream size to 4096 and added a yaml comment.
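
Since a yaml comment is mentioned, here is a sketch of how the exposed settings 
might read in cassandra.yaml, mirroring the existing rpc_* options (the 
internode_* names below are assumptions, not necessarily what the patch uses):

# Uncomment to set socket buffer sizes for internode (storage protocol)
# connections. Mostly useful on high-latency links; when unset, the OS
# defaults apply.
# internode_send_buff_size_in_bytes:
# internode_recv_buff_size_in_bytes: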

 Allow configuration of storage protocol socket buffer
 -

 Key: CASSANDRA-3378
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3378
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Brandon Williams
Assignee: Michał Michalski
Priority: Minor
  Labels: lhf
 Attachments: 3378-v3.patch, cassandra-3378-v1.patch, 
 cassandra-3378-v2.patch


 Similar to rpc_[send,recv]_buff_size_in_bytes, we should expose this for 
 high-latency connections.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5191) BufferOverflowException in CommitLogSegment

2013-01-29 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565389#comment-13565389
 ] 

André Borgqvist commented on CASSANDRA-5191:


I've added a catch and a log print at line 265 in CommitLogSegment:

try
{
    buffer.putLong(checksum.getValue());
}
catch (BufferOverflowException e)
{
    // Log the offending mutation, its checksum and the remaining buffer space before rethrowing.
    logger.error("BUFFEROVERFLOW: mutation=" + rowMutation + ", checksum=" + checksum.getValue()
                 + ", bufferLen=" + buffer.remaining());
    throw e;
}

The result in the log (column names contain colons from our application logic, 
which looks a bit weird in the printout; for example, 'c:res:State' is a column name):

ERROR [COMMIT-LOG-WRITER] 2013-01-29 12:40:15,003 CommitLogSegment.java (line 
271) BUFFEROVERFLOW: mutation=RowMutation(keyspace='cake', 
key='37636634396664363365326431363162643838373631636631643233313365333165303063366566653035626333',
 modifications=[ColumnFamily(vouchers 
[c:res:State:false:7@1359459603537005,c:res:TemporaryState:true:4@1359459603537007,c:res:subscriberId:false:12@1359459603537003,c:res:transactionId:false:5@1359459603537001,])]),
 checksum=3781223592, bufferLen=4


I noticed that "marked for deletion" was true on 
c:res:TemporaryState:true:4@1359459603537007, and since I know we write with a 
very short TTL (5s), I suspected this had something to do with it.

So I made a small test program that loops and inserts a column with a TTL of 
5s, and reproduced the error with it. The program is run against node 1 in the 
cluster, and node 2 crashes after some minutes (I started three instances of 
the test program to try to speed it up).
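
The attached BufferOverflowTest.java isn't reproduced here, but a minimal 
sketch of that kind of loop against the Thrift API might look as follows 
(keyspace, column family and column names are taken from the log below; the 
structure of the loop is an assumption):

import java.nio.ByteBuffer;
import org.apache.cassandra.thrift.Cassandra;
import org.apache.cassandra.thrift.Column;
import org.apache.cassandra.thrift.ColumnParent;
import org.apache.cassandra.thrift.ConsistencyLevel;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;

public class BufferOverflowTest
{
    public static void main(String[] args) throws Exception
    {
        // Connect to node 1 on the standard Thrift port.
        TFramedTransport transport = new TFramedTransport(new TSocket("localhost", 9160));
        transport.open();
        Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(transport));
        client.set_keyspace("cake");

        ColumnParent parent = new ColumnParent("vouchers");
        for (long i = 0; ; i++) // runs until killed; the nodes fail first
        {
            // Insert one short-lived column per row; the 5s TTL is the key ingredient.
            Column col = new Column(ByteBuffer.wrap("testcol".getBytes("UTF-8")));
            col.setValue(ByteBuffer.wrap("test".getBytes("UTF-8")));
            col.setTimestamp(System.currentTimeMillis());
            col.setTtl(5);
            client.insert(ByteBuffer.wrap(Long.toString(i).getBytes("UTF-8")),
                          parent, col, ConsistencyLevel.ONE);
        }
    }
}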

ERROR [COMMIT-LOG-WRITER] 2013-01-29 14:52:43,304 CommitLogSegment.java (line 
271) BUFFEROVERFLOW: mutation=RowMutation(keyspace='cake', 
key='31383037363038', modifications=[ColumnFamily(vouchers 
[testcol:true:4@1359467551818,])]), checksum=2632894560, bufferLen=6

I just noticed that NTP is not correctly configured and the system clock on 
node 2 is 12 seconds ahead.




 BufferOverflowException in CommitLogSegment
 ---

 Key: CASSANDRA-5191
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5191
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.9
 Environment: RHEL 2.6.32-220.el6.x86_64, jdk1.6.0_27
Reporter: André Borgqvist

 Running mixed reads, writes and deletes on a single column family in a two 
 node cluster. After a few minutes the following appears in the system log:
 ERROR [COMMIT-LOG-WRITER] 2013-01-25 12:49:55,955 
 AbstractCassandraDaemon.java (line 135) Exception in thread 
 Thread[COMMIT-LOG-WRITER,5,main]
 java.nio.BufferOverflowException
   at java.nio.Buffer.nextPutIndex(Buffer.java:499)
   at java.nio.DirectByteBuffer.putLong(DirectByteBuffer.java:756)
   at 
 org.apache.cassandra.db.commitlog.CommitLogSegment.write(CommitLogSegment.java:265)
   at 
 org.apache.cassandra.db.commitlog.CommitLog$LogRecordAdder.run(CommitLog.java:382)
   at 
 org.apache.cassandra.db.commitlog.PeriodicCommitLogExecutorService$1.runMayThrow(PeriodicCommitLogExecutorService.java:50)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
   at java.lang.Thread.run(Thread.java:662)
 Possibly related to https://issues.apache.org/jira/browse/CASSANDRA-3615

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5191) BufferOverflowException in CommitLogSegment

2013-01-29 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

André Borgqvist updated CASSANDRA-5191:
---

Attachment: BufferOverflowTest.java

inserts columns with short ttl to localhost

 BufferOverflowException in CommitLogSegment
 ---

 Key: CASSANDRA-5191
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5191
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.9
 Environment: RHEL 2.6.32-220.el6.x86_64, jdk1.6.0_27
Reporter: André Borgqvist
 Attachments: BufferOverflowTest.java


 Running mixed reads, writes and deletes on a single column family in a two 
 node cluster. After a few minutes the following appears in the system log:
 ERROR [COMMIT-LOG-WRITER] 2013-01-25 12:49:55,955 
 AbstractCassandraDaemon.java (line 135) Exception in thread 
 Thread[COMMIT-LOG-WRITER,5,main]
 java.nio.BufferOverflowException
   at java.nio.Buffer.nextPutIndex(Buffer.java:499)
   at java.nio.DirectByteBuffer.putLong(DirectByteBuffer.java:756)
   at 
 org.apache.cassandra.db.commitlog.CommitLogSegment.write(CommitLogSegment.java:265)
   at 
 org.apache.cassandra.db.commitlog.CommitLog$LogRecordAdder.run(CommitLog.java:382)
   at 
 org.apache.cassandra.db.commitlog.PeriodicCommitLogExecutorService$1.runMayThrow(PeriodicCommitLogExecutorService.java:50)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
   at java.lang.Thread.run(Thread.java:662)
 Possibly related to https://issues.apache.org/jira/browse/CASSANDRA-3615

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Issue Comment Deleted] (CASSANDRA-5191) BufferOverflowException in CommitLogSegment

2013-01-29 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

André Borgqvist updated CASSANDRA-5191:
---

Comment: was deleted

(was: inserts columns with short ttl to localhost)

 BufferOverflowException in CommitLogSegment
 ---

 Key: CASSANDRA-5191
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5191
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.9
 Environment: RHEL 2.6.32-220.el6.x86_64, jdk1.6.0_27
Reporter: André Borgqvist
 Attachments: BufferOverflowTest.java


 Running mixed reads, writes and deletes on a single column family in a two 
 node cluster. After a few minutes the following appears in the system log:
 ERROR [COMMIT-LOG-WRITER] 2013-01-25 12:49:55,955 
 AbstractCassandraDaemon.java (line 135) Exception in thread 
 Thread[COMMIT-LOG-WRITER,5,main]
 java.nio.BufferOverflowException
   at java.nio.Buffer.nextPutIndex(Buffer.java:499)
   at java.nio.DirectByteBuffer.putLong(DirectByteBuffer.java:756)
   at 
 org.apache.cassandra.db.commitlog.CommitLogSegment.write(CommitLogSegment.java:265)
   at 
 org.apache.cassandra.db.commitlog.CommitLog$LogRecordAdder.run(CommitLog.java:382)
   at 
 org.apache.cassandra.db.commitlog.PeriodicCommitLogExecutorService$1.runMayThrow(PeriodicCommitLogExecutorService.java:50)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
   at java.lang.Thread.run(Thread.java:662)
 Possibly related to https://issues.apache.org/jira/browse/CASSANDRA-3615

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5189) compact storage metadata is broken

2013-01-29 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565413#comment-13565413
 ] 

Jason Brown commented on CASSANDRA-5189:


Tested it out locally and worked properly. LGTM. +1

 compact storage metadata is broken
 --

 Key: CASSANDRA-5189
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5189
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
 Fix For: 1.2.2

 Attachments: 5189.txt


 {noformat}
 cqlsh:foo> CREATE TABLE bar (
... id int primary key,
... i int
... ) WITH COMPACT STORAGE;
 cqlsh:foo> INSERT INTO bar (id, i) VALUES (1, 2);
 Bad Request: Missing PRIMARY KEY part column1
 Perhaps you meant to use CQL 2? Try using the -2 option when starting cqlsh.
 cqlsh:foo> INSERT INTO bar (id, column1) VALUES (1, 2);
 Bad Request: Missing mandatory column i
 Perhaps you meant to use CQL 2? Try using the -2 option when starting cqlsh.
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-5189) compact storage metadata is broken

2013-01-29 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565413#comment-13565413
 ] 

Jason Brown edited comment on CASSANDRA-5189 at 1/29/13 2:47 PM:
-

Tested locally and it worked properly. LGTM. +1

  was (Author: jasobrown):
Tested it out locally and worked properly. LGTM. +1
  
 compact storage metadata is broken
 --

 Key: CASSANDRA-5189
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5189
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
 Fix For: 1.2.2

 Attachments: 5189.txt


 {noformat}
 cqlsh:foo> CREATE TABLE bar (
... id int primary key,
... i int
... ) WITH COMPACT STORAGE;
 cqlsh:foo> INSERT INTO bar (id, i) VALUES (1, 2);
 Bad Request: Missing PRIMARY KEY part column1
 Perhaps you meant to use CQL 2? Try using the -2 option when starting cqlsh.
 cqlsh:foo> INSERT INTO bar (id, column1) VALUES (1, 2);
 Bad Request: Missing mandatory column i
 Perhaps you meant to use CQL 2? Try using the -2 option when starting cqlsh.
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5189) compact storage metadata is broken

2013-01-29 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565482#comment-13565482
 ] 

Jonathan Ellis commented on CASSANDRA-5189:
---

I'm not actually sure how this fixes the problem, although it apparently does 
-- it looks like only validation changes were made to CREATE.

 compact storage metadata is broken
 --

 Key: CASSANDRA-5189
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5189
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
 Fix For: 1.2.2

 Attachments: 5189.txt


 {noformat}
 cqlsh:foo> CREATE TABLE bar (
... id int primary key,
... i int
... ) WITH COMPACT STORAGE;
 cqlsh:foo> INSERT INTO bar (id, i) VALUES (1, 2);
 Bad Request: Missing PRIMARY KEY part column1
 Perhaps you meant to use CQL 2? Try using the -2 option when starting cqlsh.
 cqlsh:foo> INSERT INTO bar (id, column1) VALUES (1, 2);
 Bad Request: Missing mandatory column i
 Perhaps you meant to use CQL 2? Try using the -2 option when starting cqlsh.
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5195) Offline scrub does not migrate the directory structure on migration from 1.0.x to 1.1.x and causes the keyspace to disappear

2013-01-29 Thread Omid Aladini (JIRA)
Omid Aladini created CASSANDRA-5195:
---

 Summary: Offline scrub does not migrate the directory structure on 
migration from 1.0.x to 1.1.x and causes the keyspace to disappear
 Key: CASSANDRA-5195
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5195
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.9
Reporter: Omid Aladini


Due to CASSANDRA-4411, upon migration from 1.0.x to 1.1.x with data containing 
LCS-compacted sstables, an offline scrub should be run before Cassandra 1.1.x 
is started. But Cassandra 1.1.x uses a new directory structure (CASSANDRA-2749) 
that the offline scrubber doesn't detect or try to migrate.

How to reproduce:

1- Run cassandra 1.0.12.
2- Run the stress tool, and let Cassandra flush Keyspace1 or flush manually.
3- Stop cassandra 1.0.12.
4- Run ./bin/sstablescrub Keyspace1 Standard1,
  which returns "Unknown keyspace/columnFamily Keyspace1.Standard1"; notice 
the data directory isn't migrated.
5- Run cassandra 1.1.9. Keyspace1 doesn't get loaded and Cassandra doesn't try 
to migrate the directory structure. Commitlog entries also get skipped: 
"Skipped X mutations from unknown (probably removed) CF with id 1000".

Without the unsuccessful step 4, Cassandra 1.1.9 loads and migrates the 
keyspace correctly.

  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-3620) Proposal for distributed deletes - fully automatic Reaper Model rather than GCSeconds and manual repairs

2013-01-29 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565533#comment-13565533
 ] 

Sylvain Lebresne commented on CASSANDRA-3620:
-

bq. extend the coordinator's ack-wait callback (which we currently use to write 
hints if a replica times out) to write a "delete successful" message

There is one problem, I'm afraid: without the batch commit log, we cannot 
guarantee that an acknowledged write won't be lost by a node.

Don't get me wrong, it's sad, because otherwise it's a fairly simple solution 
to implement. Typically, the "delete successful" message could just rewrite 
the same tombstone(s) we just wrote, but with a localDeletionTime set to 0 (or 
Integer.MIN_VALUE) to make them readily gcable (which may be what you had in 
mind).
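
As an aside, a one-line illustration of why a localDeletionTime of 0 makes a 
tombstone immediately collectable; the class, method and parameter names below 
are hypothetical, modeled on the gcBefore convention used at compaction time:

class TombstoneGc
{
    // A tombstone may be purged once its local deletion time falls before the
    // GC horizon (now - gc_grace_seconds, in seconds since the epoch).
    static boolean isGcAble(int localDeletionTime, int gcBefore)
    {
        // 0 (or Integer.MIN_VALUE) sorts before any real horizon, so such a
        // tombstone becomes collectable at the very next compaction.
        return localDeletionTime < gcBefore;
    }
}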

 Proposal for distributed deletes - fully automatic Reaper Model rather than 
 GCSeconds and manual repairs
 --

 Key: CASSANDRA-3620
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3620
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Dominic Williams
  Labels: GCSeconds, deletes, distributed_deletes, merkle_trees, repair
 Fix For: 2.0

   Original Estimate: 504h
  Remaining Estimate: 504h

 Proposal for an improved system for handling distributed deletes, which 
 removes the requirement to regularly run repair processes to maintain 
 performance and data integrity. 
 h2. The Problem
 There are various issues with repair:
 * Repair is expensive to run
 * Repair jobs are often made more expensive than they should be by other 
 issues (nodes dropping requests, hinted handoff not working, downtime etc)
 * Repair processes can often fail and need restarting, for example in cloud 
 environments where network issues make a node disappear from the ring for a 
 brief moment
 * When you fail to run repair within GCSeconds, either by error or because of 
 issues with Cassandra, data written to a node that did not see a later delete 
 can reappear (and a node might miss a delete for several reasons including 
 being down or simply dropping requests during load shedding)
 * If you cannot run repair and have to increase GCSeconds to prevent deleted 
 data reappearing, in some cases the growing tombstone overhead can 
 significantly degrade performance
 Because of the foregoing, in high throughput environments it can be very 
 difficult to make repair a cron job. It can be preferable to keep a terminal 
 open and run repair jobs one by one, making sure they succeed and keeping an 
 eye on overall load to reduce system impact. This isn't desirable, and 
 problems are exacerbated when there are lots of column families in a database 
 or it is necessary to run a column family with a low GCSeconds to reduce 
 tombstone load (because there are many write/deletes to that column family). 
 The database owner must run repair within the GCSeconds window, or increase 
 GCSeconds, to avoid potentially losing delete operations. 
 It would be much better if there was no ongoing requirement to run repair to 
 ensure deletes aren't lost, and no GCSeconds window. Ideally repair would be 
 an optional maintenance utility used in special cases, or to ensure ONE reads 
 get consistent data. 
 h2. Reaper Model Proposal
 # Tombstones do not expire, and there is no GCSeconds
 # Tombstones have associated ACK lists, which record the replicas that have 
 acknowledged them
 # Tombstones are deleted (or marked for compaction) when they have been 
 acknowledged by all replicas
 # When a tombstone is deleted, it is added to a relic index. The relic 
 index makes it possible for a reaper to acknowledge a tombstone after it is 
 deleted
 # The ACK lists and relic index are held in memory for speed
 # Background reaper threads constantly stream ACK requests to other nodes, 
 and stream ACK responses back for requests they have received (throttling 
 their usage of CPU and bandwidth so as not to affect performance)
 # If a reaper receives a request to ACK a tombstone that does not exist, it 
 creates the tombstone and adds an ACK for the requestor, and replies with an 
 ACK. This is the worst that can happen, and does not cause data corruption. 
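 To make steps 2-5 concrete, a toy sketch of the bookkeeping described (all 
 names are hypothetical; an illustration of the model, not code from the 
 proposal):

import java.net.InetAddress;
import java.util.Map;
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

class ReaperState
{
    // Tombstone id -> replicas that have acknowledged it (step 2).
    private final Map<UUID, Set<InetAddress>> ackLists = new ConcurrentHashMap<>();
    // "Relic index": tombstones already deleted, so late ACKs are still recognized (step 4).
    private final Set<UUID> relics = ConcurrentHashMap.newKeySet();

    // Record an ACK; returns true once every replica has acknowledged (step 3).
    boolean ack(UUID tombstone, InetAddress replica, int replicaCount)
    {
        if (relics.contains(tombstone))
            return true; // already reaped; a late ACK is harmless
        Set<InetAddress> acks = ackLists.computeIfAbsent(tombstone, t -> ConcurrentHashMap.newKeySet());
        acks.add(replica);
        if (acks.size() < replicaCount)
            return false;
        relics.add(tombstone);      // all replicas have ACKed: eligible for deletion (step 3)
        ackLists.remove(tombstone); // move the bookkeeping to the relic index (step 4)
        return true;
    }
}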
 ADDENDUM
 The proposal to hold the ACK and relic lists in memory was added after the 
 first posting. Please see comments for full reasons. Furthermore, a proposal 
 for enhancements to repair was posted to comments, which would cause 
 tombstones to be scavenged when repair completes (the author had assumed this 
 was the case anyway, but it seems at time of writing they are only scavenged 
 during compaction on GCSeconds timeout). The proposals are not exclusive and 
 this proposal is 

[jira] [Commented] (CASSANDRA-5195) Offline scrub does not migrate the directory structure on migration from 1.0.x to 1.1.x and causes the keyspace to disappear

2013-01-29 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565540#comment-13565540
 ] 

Jonathan Ellis commented on CASSANDRA-5195:
---

Ryan, can you reproduce?

 Offline scrub does not migrate the directory structure on migration from 
 1.0.x to 1.1.x and causes the keyspace to disappear
 

 Key: CASSANDRA-5195
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5195
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.9
Reporter: Omid Aladini
 Fix For: 1.1.9


 Due to CASSANDRA-4411, upon migration from 1.0.x to 1.1.x with data containing 
 LCS-compacted sstables, an offline scrub should be run before Cassandra 1.1.x 
 is started. But Cassandra 1.1.x uses a new directory structure 
 (CASSANDRA-2749) that the offline scrubber doesn't detect or try to migrate.
 How to reproduce:
 1- Run cassandra 1.0.12.
 2- Run the stress tool, and let Cassandra flush Keyspace1 or flush manually.
 3- Stop cassandra 1.0.12.
 4- Run ./bin/sstablescrub Keyspace1 Standard1,
   which returns "Unknown keyspace/columnFamily Keyspace1.Standard1"; notice 
 the data directory isn't migrated.
 5- Run cassandra 1.1.9. Keyspace1 doesn't get loaded and Cassandra doesn't 
 try to migrate the directory structure. Commitlog entries also get skipped: 
 "Skipped X mutations from unknown (probably removed) CF with id 1000".
 Without the unsuccessful step 4, Cassandra 1.1.9 loads and migrates the 
 keyspace correctly.
   

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5195) Offline scrub does not migrate the directory structure on migration from 1.0.x to 1.1.x and causes the keyspace to disappear

2013-01-29 Thread Omid Aladini (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omid Aladini updated CASSANDRA-5195:


Attachment: 5195.patch

I tried to fix the issue in offline scrub, but the patch doesn't fully fix it: 
Cassandra 1.1.9 with this patch only loads the migrated keyspaces on the second 
restart, after the offline scrub has applied the migration.

 Offline scrub does not migrate the directory structure on migration from 
 1.0.x to 1.1.x and causes the keyspace to disappear
 

 Key: CASSANDRA-5195
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5195
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.9
Reporter: Omid Aladini
 Fix For: 1.1.9

 Attachments: 5195.patch


 Due to CASSANDRA-4411, upon migration from 1.0.x to 1.1.x with data containing 
 LCS-compacted sstables, an offline scrub should be run before Cassandra 1.1.x 
 is started. But Cassandra 1.1.x uses a new directory structure 
 (CASSANDRA-2749) that the offline scrubber doesn't detect or try to migrate.
 How to reproduce:
 1- Run cassandra 1.0.12.
 2- Run the stress tool, and let Cassandra flush Keyspace1 or flush manually.
 3- Stop cassandra 1.0.12.
 4- Run ./bin/sstablescrub Keyspace1 Standard1,
   which returns "Unknown keyspace/columnFamily Keyspace1.Standard1"; notice 
 the data directory isn't migrated.
 5- Run cassandra 1.1.9. Keyspace1 doesn't get loaded and Cassandra doesn't 
 try to migrate the directory structure. Commitlog entries also get skipped: 
 "Skipped X mutations from unknown (probably removed) CF with id 1000".
 Without the unsuccessful step 4, Cassandra 1.1.9 loads and migrates the 
 keyspace correctly.
   

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-3620) Proposal for distributed deletes - fully automatic Reaper Model rather than GCSeconds and manual repairs

2013-01-29 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565559#comment-13565559
 ] 

Jonathan Ellis commented on CASSANDRA-3620:
---

bq. Without batch commit log, we cannot guarantee that an acknowledged write 
won't be lost by a node.

Right.  I'm willing to live with that. :)

If you're not, we could just check for BCL and only enable this if they're in 
batch mode.  That's Good Enough for me.  And in a couple of years everyone will 
be on SSD and we can make BCL the default. :)

 Proposal for distributed deletes - fully automatic Reaper Model rather than 
 GCSeconds and manual repairs
 --

 Key: CASSANDRA-3620
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3620
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Dominic Williams
  Labels: GCSeconds, deletes, distributed_deletes, merkle_trees, repair
 Fix For: 2.0

   Original Estimate: 504h
  Remaining Estimate: 504h

 Proposal for an improved system for handling distributed deletes, which 
 removes the requirement to regularly run repair processes to maintain 
 performance and data integrity. 
 h2. The Problem
 There are various issues with repair:
 * Repair is expensive to run
 * Repair jobs are often made more expensive than they should be by other 
 issues (nodes dropping requests, hinted handoff not working, downtime etc)
 * Repair processes can often fail and need restarting, for example in cloud 
 environments where network issues make a node disappear from the ring for a 
 brief moment
 * When you fail to run repair within GCSeconds, either by error or because of 
 issues with Cassandra, data written to a node that did not see a later delete 
 can reappear (and a node might miss a delete for several reasons including 
 being down or simply dropping requests during load shedding)
 * If you cannot run repair and have to increase GCSeconds to prevent deleted 
 data reappearing, in some cases the growing tombstone overhead can 
 significantly degrade performance
 Because of the foregoing, in high throughput environments it can be very 
 difficult to make repair a cron job. It can be preferable to keep a terminal 
 open and run repair jobs one by one, making sure they succeed and keeping an 
 eye on overall load to reduce system impact. This isn't desirable, and 
 problems are exacerbated when there are lots of column families in a database 
 or it is necessary to run a column family with a low GCSeconds to reduce 
 tombstone load (because there are many write/deletes to that column family). 
 The database owner must run repair within the GCSeconds window, or increase 
 GCSeconds, to avoid potentially losing delete operations. 
 It would be much better if there was no ongoing requirement to run repair to 
 ensure deletes aren't lost, and no GCSeconds window. Ideally repair would be 
 an optional maintenance utility used in special cases, or to ensure ONE reads 
 get consistent data. 
 h2. Reaper Model Proposal
 # Tombstones do not expire, and there is no GCSeconds
 # Tombstones have associated ACK lists, which record the replicas that have 
 acknowledged them
 # Tombstones are deleted (or marked for compaction) when they have been 
 acknowledged by all replicas
 # When a tombstone is deleted, it is added to a relic index. The relic 
 index makes it possible for a reaper to acknowledge a tombstone after it is 
 deleted
 # The ACK lists and relic index are held in memory for speed
 # Background reaper threads constantly stream ACK requests to other nodes, 
 and stream ACK responses back for requests they have received (throttling 
 their usage of CPU and bandwidth so as not to affect performance)
 # If a reaper receives a request to ACK a tombstone that does not exist, it 
 creates the tombstone and adds an ACK for the requestor, and replies with an 
 ACK. This is the worst that can happen, and does not cause data corruption. 
 ADDENDUM
 The proposal to hold the ACK and relic lists in memory was added after the 
 first posting. Please see comments for full reasons. Furthermore, a proposal 
 for enhancements to repair was posted to comments, which would cause 
 tombstones to be scavenged when repair completes (the author had assumed this 
 was the case anyway, but it seems at time of writing they are only scavenged 
 during compaction on GCSeconds timeout). The proposals are not exclusive and 
 this proposal is extended to include the possible enhancements to repair 
 described.
 NOTES
 * If a node goes down for a prolonged period, the worst that can happen is 
 that some tombstones are recreated across the cluster when it restarts, which 
 does not corrupt data (and 

[jira] [Commented] (CASSANDRA-5189) compact storage metadata is broken

2013-01-29 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565571#comment-13565571
 ] 

Sylvain Lebresne commented on CASSANDRA-5189:
-

In fact it's not only a validation change; the change to the first {{if}} is 
what fixes it.
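
For reference, the condition in question, as it appears in the committed patch 
(see the commit later in this digest):

// before: if (useCompactStorage && stmt.columns.size() <= 1)
// after:  if (useCompactStorage && !stmt.columnAliases.isEmpty())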

 compact storage metadata is broken
 --

 Key: CASSANDRA-5189
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5189
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
 Fix For: 1.2.2

 Attachments: 5189.txt


 {noformat}
 cqlsh:foo> CREATE TABLE bar (
... id int primary key,
... i int
... ) WITH COMPACT STORAGE;
 cqlsh:foo> INSERT INTO bar (id, i) VALUES (1, 2);
 Bad Request: Missing PRIMARY KEY part column1
 Perhaps you meant to use CQL 2? Try using the -2 option when starting cqlsh.
 cqlsh:foo> INSERT INTO bar (id, column1) VALUES (1, 2);
 Bad Request: Missing mandatory column i
 Perhaps you meant to use CQL 2? Try using the -2 option when starting cqlsh.
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5189) compact storage metadata is broken

2013-01-29 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565585#comment-13565585
 ] 

Jonathan Ellis commented on CASSANDRA-5189:
---

+1


 compact storage metadata is broken
 --

 Key: CASSANDRA-5189
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5189
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
 Fix For: 1.2.2

 Attachments: 5189.txt


 {noformat}
 cqlsh:foo> CREATE TABLE bar (
... id int primary key,
... i int
... ) WITH COMPACT STORAGE;
 cqlsh:foo> INSERT INTO bar (id, i) VALUES (1, 2);
 Bad Request: Missing PRIMARY KEY part column1
 Perhaps you meant to use CQL 2? Try using the -2 option when starting cqlsh.
 cqlsh:foo> INSERT INTO bar (id, column1) VALUES (1, 2);
 Bad Request: Missing mandatory column i
 Perhaps you meant to use CQL 2? Try using the -2 option when starting cqlsh.
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-3620) Proposal for distributed deletes - fully automatic Reaper Model rather than GCSeconds and manual repairs

2013-01-29 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565598#comment-13565598
 ] 

Sylvain Lebresne commented on CASSANDRA-3620:
-

bq. Right. I'm willing to live with that.

I'm not. This means that when a node fails, you have a very good chance of 
having data resurrect if there were deletes in the last 10 seconds before the 
crash (or whatever you've set for periodic). I'm pretty sure an optimization 
that breaks correctness is not what people want.

bq. we could just check for BCL and only enable this if they're in batch 
mode.

I'd be fine with that, though I note that to do that properly a node would 
have to know the commit log mode of the other nodes. Of course we could say 
"if you use batch, use it on all nodes", but I'm always a bit reluctant to 
assume people will do what we consider the right thing without any validation. 
Not that it would be hard to gossip the commit log mode, btw, just pointing it 
out.

I'm just saying it may be worth spending a bit more time thinking about this 
issue before rushing into a solution that might not be useful to everyone 
today.


 Proposal for distributed deletes - fully automatic Reaper Model rather than 
 GCSeconds and manual repairs
 --

 Key: CASSANDRA-3620
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3620
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Dominic Williams
  Labels: GCSeconds, deletes, distributed_deletes, merkle_trees, repair
 Fix For: 2.0

   Original Estimate: 504h
  Remaining Estimate: 504h

 Proposal for an improved system for handling distributed deletes, which 
 removes the requirement to regularly run repair processes to maintain 
 performance and data integrity. 
 h2. The Problem
 There are various issues with repair:
 * Repair is expensive to run
 * Repair jobs are often made more expensive than they should be by other 
 issues (nodes dropping requests, hinted handoff not working, downtime etc)
 * Repair processes can often fail and need restarting, for example in cloud 
 environments where network issues make a node disappear from the ring for a 
 brief moment
 * When you fail to run repair within GCSeconds, either by error or because of 
 issues with Cassandra, data written to a node that did not see a later delete 
 can reappear (and a node might miss a delete for several reasons including 
 being down or simply dropping requests during load shedding)
 * If you cannot run repair and have to increase GCSeconds to prevent deleted 
 data reappearing, in some cases the growing tombstone overhead can 
 significantly degrade performance
 Because of the foregoing, in high throughput environments it can be very 
 difficult to make repair a cron job. It can be preferable to keep a terminal 
 open and run repair jobs one by one, making sure they succeed and keeping an 
 eye on overall load to reduce system impact. This isn't desirable, and 
 problems are exacerbated when there are lots of column families in a database 
 or it is necessary to run a column family with a low GCSeconds to reduce 
 tombstone load (because there are many write/deletes to that column family). 
 The database owner must run repair within the GCSeconds window, or increase 
 GCSeconds, to avoid potentially losing delete operations. 
 It would be much better if there was no ongoing requirement to run repair to 
 ensure deletes aren't lost, and no GCSeconds window. Ideally repair would be 
 an optional maintenance utility used in special cases, or to ensure ONE reads 
 get consistent data. 
 h2. Reaper Model Proposal
 # Tombstones do not expire, and there is no GCSeconds
 # Tombstones have associated ACK lists, which record the replicas that have 
 acknowledged them
 # Tombstones are deleted (or marked for compaction) when they have been 
 acknowledged by all replicas
 # When a tombstone is deleted, it is added to a relic index. The relic 
 index makes it possible for a reaper to acknowledge a tombstone after it is 
 deleted
 # The ACK lists and relic index are held in memory for speed
 # Background reaper threads constantly stream ACK requests to other nodes, 
 and stream ACK responses back for requests they have received (throttling 
 their usage of CPU and bandwidth so as not to affect performance)
 # If a reaper receives a request to ACK a tombstone that does not exist, it 
 creates the tombstone and adds an ACK for the requestor, and replies with an 
 ACK. This is the worst that can happen, and does not cause data corruption. 
 ADDENDUM
 The proposal to hold the ACK and relic lists in memory was added after the 
 first posting. Please see comments for full reasons. 

git commit: Fix bug in compact storage metadata handling

2013-01-29 Thread slebresne
Updated Branches:
  refs/heads/cassandra-1.2 4feb87d37 -> be36736d3


Fix bug in compact storage metadata handling

patch by slebresne; reviewed by jasobrown for CASSANDRA-5189


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/be36736d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/be36736d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/be36736d

Branch: refs/heads/cassandra-1.2
Commit: be36736d38eb5793bab6040817260fd5c6cd166b
Parents: 4feb87d
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Tue Jan 29 19:08:41 2013 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Tue Jan 29 19:08:41 2013 +0100

--
 CHANGES.txt                                        |    1 +
 .../statements/CreateColumnFamilyStatement.java    |   14 ++++++++------
 2 files changed, 9 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/be36736d/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 46d79cc..e1af9ee 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,5 +1,6 @@
 1.2.2
  * fix symlinks under data dir not working (CASSANDRA-5185)
+ * fix bug in compact storage metadata handling (CASSANDRA-5189)
 
 1.2.1
  * stream undelivered hints on decommission (CASSANDRA-5128)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/be36736d/src/java/org/apache/cassandra/cql3/statements/CreateColumnFamilyStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/CreateColumnFamilyStatement.java
 
b/src/java/org/apache/cassandra/cql3/statements/CreateColumnFamilyStatement.java
index 4b96167..483e083 100644
--- 
a/src/java/org/apache/cassandra/cql3/statements/CreateColumnFamilyStatement.java
+++ 
b/src/java/org/apache/cassandra/cql3/statements/CreateColumnFamilyStatement.java
@@ -276,13 +276,10 @@ public class CreateColumnFamilyStatement extends SchemaAlteringStatement
             }
         }
 
-        if (useCompactStorage && stmt.columns.size() <= 1)
+        if (useCompactStorage && !stmt.columnAliases.isEmpty())
         {
             if (stmt.columns.isEmpty())
             {
-                if (columnAliases.isEmpty())
-                    throw new InvalidRequestException(String.format("COMPACT STORAGE with non-composite PRIMARY KEY require one column not part of the PRIMARY KEY (got: %s)", StringUtils.join(stmt.columns.keySet(), ", ")));
-
                 // The only value we'll insert will be the empty one, so the default validator don't matter
                 stmt.defaultValidator = BytesType.instance;
                 // We need to distinguish between
@@ -293,6 +290,9 @@ public class CreateColumnFamilyStatement extends SchemaAlteringStatement
             }
             else
             {
+                if (stmt.columns.size() > 1)
+                    throw new InvalidRequestException(String.format("COMPACT STORAGE with composite PRIMARY KEY allows no more than one column not part of the PRIMARY KEY (got: %s)", StringUtils.join(stmt.columns.keySet(), ", ")));
+
                 Map.Entry<ColumnIdentifier, AbstractType> lastEntry = stmt.columns.entrySet().iterator().next();
                 stmt.defaultValidator = lastEntry.getValue();
                 stmt.valueAlias = lastEntry.getKey().key;
@@ -301,8 +301,10 @@ public class CreateColumnFamilyStatement extends SchemaAlteringStatement
         }
         else
         {
-            if (useCompactStorage && !columnAliases.isEmpty())
-                throw new InvalidRequestException(String.format("COMPACT STORAGE with composite PRIMARY KEY allows no more than one column not part of the PRIMARY KEY (got: %s)", StringUtils.join(stmt.columns.keySet(), ", ")));
+            // For compact, we are in the static case, so we need at least one column defined. For non-compact however, having
+            // just the PK is fine since we have CQL3 row marker.
+            if (useCompactStorage && stmt.columns.isEmpty())
+                throw new InvalidRequestException("COMPACT STORAGE with non-composite PRIMARY KEY require one column not part of the PRIMARY KEY, none given");
 
             // There is no way to insert/access a column that is not defined for non-compact storage, so
             // the actual validator don't matter much (except that we want to recognize counter CF as limitation apply to them).



[4/4] git commit: Merge branch 'cassandra-1.2' into trunk

2013-01-29 Thread slebresne
Updated Branches:
  refs/heads/trunk c25a6a14c -> de0743fd0


Merge branch 'cassandra-1.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/de0743fd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/de0743fd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/de0743fd

Branch: refs/heads/trunk
Commit: de0743fd00e469ea42524f1a258a81eced51d0e4
Parents: c25a6a1 be36736
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Tue Jan 29 19:09:52 2013 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Tue Jan 29 19:09:52 2013 +0100

--
 CHANGES.txt                                        |    2 +
 .../statements/CreateColumnFamilyStatement.java    |   14 +++---
 .../cassandra/hadoop/ColumnFamilyOutputFormat.java |    4 +-
 .../cassandra/hadoop/ColumnFamilyRecordReader.java |    4 +-
 .../org/apache/cassandra/hadoop/ConfigHelper.java  |   34 ++-
 .../apache/cassandra/thrift/ITransportFactory.java |    3 +-
 .../apache/cassandra/thrift/TBinaryProtocol.java   |    8 +++
 .../cassandra/thrift/TFramedTransportFactory.java  |    7 ++-
 8 files changed, 61 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/de0743fd/CHANGES.txt
--
diff --cc CHANGES.txt
index 5ae58fe,e1af9ee..7ab72d1
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,14 -1,7 +1,15 @@@
 +1.3
 + * make index_interval configurable per columnfamily (CASSANDRA-3961)
 + * add default_time_to_live (CASSANDRA-3974)
 + * add memtable_flush_period_in_ms (CASSANDRA-4237)
 + * replace supercolumns internally by composites (CASSANDRA-3237, 5123)
 + * upgrade thrift to 0.9.0 (CASSANDRA-3719)
 +
  1.2.2
   * fix symlinks under data dir not working (CASSANDRA-5185)
+  * fix bug in compact storage metadata handling (CASSANDRA-5189)
  
 +
  1.2.1
   * stream undelivered hints on decommission (CASSANDRA-5128)
   * GossipingPropertyFileSnitch loads saved dc/rack info if needed 
(CASSANDRA-5133)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/de0743fd/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
--



[2/4] git commit: merge from 1.1

2013-01-29 Thread slebresne
merge from 1.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4feb87d3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4feb87d3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4feb87d3

Branch: refs/heads/trunk
Commit: 4feb87d37544b9fde722786555475f2f790059ca
Parents: 7752f01 73d828e
Author: Pavel Yaskevich xe...@apache.org
Authored: Mon Jan 28 10:45:47 2013 -0800
Committer: Pavel Yaskevich xe...@apache.org
Committed: Mon Jan 28 10:45:47 2013 -0800

--
 CHANGES.txt                                        |    1 +
 .../cassandra/hadoop/ColumnFamilyOutputFormat.java |    4 +-
 .../cassandra/hadoop/ColumnFamilyRecordReader.java |    4 +-
 .../org/apache/cassandra/hadoop/ConfigHelper.java  |   34 ++-
 .../apache/cassandra/thrift/ITransportFactory.java |    3 +-
 .../apache/cassandra/thrift/TBinaryProtocol.java   |    8 +++
 .../cassandra/thrift/TFramedTransportFactory.java  |    7 ++-
 7 files changed, 52 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4feb87d3/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4feb87d3/src/java/org/apache/cassandra/hadoop/ColumnFamilyOutputFormat.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4feb87d3/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4feb87d3/src/java/org/apache/cassandra/hadoop/ConfigHelper.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4feb87d3/src/java/org/apache/cassandra/thrift/TBinaryProtocol.java
--



[1/4] git commit: add ConfigHelper support for Thrift frame and max message sizes patch by Pavel Yaskevich; reviewed by Brandon Williams for CASSANDRA-5188

2013-01-29 Thread slebresne
add ConfigHelper support for Thrift frame and max message sizes
patch by Pavel Yaskevich; reviewed by Brandon Williams for CASSANDRA-5188
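
For context, a sketch of how a Hadoop job might set the new knobs before the 
snippets below; the setter names are assumptions inferred from the 
getThriftMaxMessageLength getter in the diff, not verified against the final API:

import org.apache.hadoop.conf.Configuration;
import org.apache.cassandra.hadoop.ConfigHelper;

public class JobSetupSketch
{
    public static void main(String[] args)
    {
        Configuration conf = new Configuration();
        ConfigHelper.setInputRpcPort(conf, "9160");
        // Hypothetical setters for the Thrift frame size and max message size:
        ConfigHelper.setThriftFramedTransportSizeInMb(conf, 15);
        ConfigHelper.setThriftMaxMessageLengthInMb(conf, 16);
    }
}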


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/73d828e4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/73d828e4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/73d828e4

Branch: refs/heads/trunk
Commit: 73d828e4e8023b9f7ca8fafd12becec34eb59211
Parents: 3298c2f
Author: Pavel Yaskevich xe...@apache.org
Authored: Fri Jan 25 21:49:25 2013 -0800
Committer: Pavel Yaskevich xe...@apache.org
Committed: Mon Jan 28 10:31:13 2013 -0800

--
 CHANGES.txt                                        |    1 +
 .../cassandra/hadoop/ColumnFamilyOutputFormat.java |    4 +-
 .../cassandra/hadoop/ColumnFamilyRecordReader.java |    4 +-
 .../org/apache/cassandra/hadoop/ConfigHelper.java  |   34 ++-
 .../apache/cassandra/thrift/ITransportFactory.java |    3 +-
 .../apache/cassandra/thrift/TBinaryProtocol.java   |    8 +++
 .../cassandra/thrift/TFramedTransportFactory.java  |    7 ++-
 7 files changed, 52 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/73d828e4/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 1ad77b1..1c414bc 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -3,6 +3,7 @@
  * fix ConcurrentModificationException in getBootstrapSource (CASSANDRA-5170)
  * fix sstable maxtimestamp for row deletes and pre-1.1.1 sstables 
(CASSANDRA-5153)
  * fix start key/end token validation for wide row iteration (CASSANDRA-5168)
+ * add ConfigHelper support for Thrift frame and max message sizes 
(CASSANDRA-5188)
 
 
 1.1.9

http://git-wip-us.apache.org/repos/asf/cassandra/blob/73d828e4/src/java/org/apache/cassandra/hadoop/ColumnFamilyOutputFormat.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/ColumnFamilyOutputFormat.java 
b/src/java/org/apache/cassandra/hadoop/ColumnFamilyOutputFormat.java
index e01ada5..caea616 100644
--- a/src/java/org/apache/cassandra/hadoop/ColumnFamilyOutputFormat.java
+++ b/src/java/org/apache/cassandra/hadoop/ColumnFamilyOutputFormat.java
@@ -154,8 +154,8 @@ public class ColumnFamilyOutputFormat extends OutputFormat<ByteBuffer,List<Mutation>>
         throws InvalidRequestException, TException, AuthenticationException, AuthorizationException, LoginException
     {
         logger.debug("Creating authenticated client for CF output format");
-        TTransport transport = ConfigHelper.getOutputTransportFactory(conf).openTransport(socket);
-        TBinaryProtocol binaryProtocol = new TBinaryProtocol(transport);
+        TTransport transport = ConfigHelper.getOutputTransportFactory(conf).openTransport(socket, conf);
+        TBinaryProtocol binaryProtocol = new TBinaryProtocol(transport, ConfigHelper.getThriftMaxMessageLength(conf));
         Cassandra.Client client = new Cassandra.Client(binaryProtocol);
         client.set_keyspace(ConfigHelper.getOutputKeyspace(conf));
         if (ConfigHelper.getOutputKeyspaceUserName(conf) != null)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/73d828e4/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java 
b/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
index 83e436b..a40e6c5 100644
--- a/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
+++ b/src/java/org/apache/cassandra/hadoop/ColumnFamilyRecordReader.java
@@ -161,8 +161,8 @@ public class ColumnFamilyRecordReader extends RecordReader<ByteBuffer, SortedMap<ByteBuffer, IColumn>>
         // create connection using thrift
         String location = getLocation();
         socket = new TSocket(location, ConfigHelper.getInputRpcPort(conf));
-        TTransport transport = ConfigHelper.getInputTransportFactory(conf).openTransport(socket);
-        TBinaryProtocol binaryProtocol = new TBinaryProtocol(transport);
+        TTransport transport = ConfigHelper.getInputTransportFactory(conf).openTransport(socket, conf);
+        TBinaryProtocol binaryProtocol = new TBinaryProtocol(transport, ConfigHelper.getThriftMaxMessageLength(conf));
         client = new Cassandra.Client(binaryProtocol);
 
         // log in

http://git-wip-us.apache.org/repos/asf/cassandra/blob/73d828e4/src/java/org/apache/cassandra/hadoop/ConfigHelper.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/ConfigHelper.java 
b/src/java/org/apache/cassandra/hadoop/ConfigHelper.java
index 

[3/4] git commit: Fix bug in compact storage metadata handling

2013-01-29 Thread slebresne
Fix bug in compact storage metadata handling

patch by slebresne; reviewed by jasobrown for CASSANDRA-5189


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/be36736d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/be36736d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/be36736d

Branch: refs/heads/trunk
Commit: be36736d38eb5793bab6040817260fd5c6cd166b
Parents: 4feb87d
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Tue Jan 29 19:08:41 2013 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Tue Jan 29 19:08:41 2013 +0100

--
 CHANGES.txt                                        |    1 +
 .../statements/CreateColumnFamilyStatement.java    |   14 ++++++++------
 2 files changed, 9 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/be36736d/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 46d79cc..e1af9ee 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,5 +1,6 @@
 1.2.2
  * fix symlinks under data dir not working (CASSANDRA-5185)
+ * fix bug in compact storage metadata handling (CASSANDRA-5189)
 
 1.2.1
  * stream undelivered hints on decommission (CASSANDRA-5128)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/be36736d/src/java/org/apache/cassandra/cql3/statements/CreateColumnFamilyStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/CreateColumnFamilyStatement.java
 
b/src/java/org/apache/cassandra/cql3/statements/CreateColumnFamilyStatement.java
index 4b96167..483e083 100644
--- 
a/src/java/org/apache/cassandra/cql3/statements/CreateColumnFamilyStatement.java
+++ 
b/src/java/org/apache/cassandra/cql3/statements/CreateColumnFamilyStatement.java
@@ -276,13 +276,10 @@ public class CreateColumnFamilyStatement extends SchemaAlteringStatement
             }
         }
 
-        if (useCompactStorage && stmt.columns.size() <= 1)
+        if (useCompactStorage && !stmt.columnAliases.isEmpty())
        {
             if (stmt.columns.isEmpty())
             {
-                if (columnAliases.isEmpty())
-                    throw new InvalidRequestException(String.format("COMPACT STORAGE with non-composite PRIMARY KEY require one column not part of the PRIMARY KEY (got: %s)", StringUtils.join(stmt.columns.keySet(), ", ")));
-
                 // The only value we'll insert will be the empty one, so the default validator don't matter
                 stmt.defaultValidator = BytesType.instance;
                 // We need to distinguish between
@@ -293,6 +290,9 @@ public class CreateColumnFamilyStatement extends SchemaAlteringStatement
             }
             else
             {
+                if (stmt.columns.size() > 1)
+                    throw new InvalidRequestException(String.format("COMPACT STORAGE with composite PRIMARY KEY allows no more than one column not part of the PRIMARY KEY (got: %s)", StringUtils.join(stmt.columns.keySet(), ", ")));
+
                 Map.Entry<ColumnIdentifier, AbstractType> lastEntry = stmt.columns.entrySet().iterator().next();
                 stmt.defaultValidator = lastEntry.getValue();
                 stmt.valueAlias = lastEntry.getKey().key;
@@ -301,8 +301,10 @@ public class CreateColumnFamilyStatement extends SchemaAlteringStatement
         }
         else
         {
-            if (useCompactStorage && !columnAliases.isEmpty())
-                throw new InvalidRequestException(String.format("COMPACT STORAGE with composite PRIMARY KEY allows no more than one column not part of the PRIMARY KEY (got: %s)", StringUtils.join(stmt.columns.keySet(), ", ")));
+            // For compact, we are in the static case, so we need at least one column defined. For non-compact however, having
+            // just the PK is fine since we have CQL3 row marker.
+            if (useCompactStorage && stmt.columns.isEmpty())
+                throw new InvalidRequestException("COMPACT STORAGE with non-composite PRIMARY KEY require one column not part of the PRIMARY KEY, none given");
 
             // There is no way to insert/access a column that is not defined for non-compact storage, so
             // the actual validator don't matter much (except that we want to recognize counter CF as limitation apply to them).



[jira] [Commented] (CASSANDRA-3620) Proposal for distributed deletes - fully automatic Reaper Model rather than GCSeconds and manual repairs

2013-01-29 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13565615#comment-13565615
 ] 

Jonathan Ellis commented on CASSANDRA-3620:
---

bq. we could say if you use batch, use it on all nodes

Also fine with this. :)

 Proposal for distributed deletes - fully automatic Reaper Model rather than 
 GCSeconds and manual repairs
 --

 Key: CASSANDRA-3620
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3620
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Dominic Williams
  Labels: GCSeconds, deletes, distributed_deletes, merkle_trees, repair
 Fix For: 2.0

   Original Estimate: 504h
  Remaining Estimate: 504h

 Proposal for an improved system for handling distributed deletes, which 
 removes the requirement to regularly run repair processes to maintain 
 performance and data integrity. 
 h2. The Problem
 There are various issues with repair:
 * Repair is expensive to run
 * Repair jobs are often made more expensive than they should be by other 
 issues (nodes dropping requests, hinted handoff not working, downtime etc)
 * Repair processes can often fail and need restarting, for example in cloud 
 environments where network issues make a node disappear from the ring for a 
 brief moment
 * When you fail to run repair within GCSeconds, either by error or because of 
 issues with Cassandra, data written to a node that did not see a later delete 
 can reappear (and a node might miss a delete for several reasons including 
 being down or simply dropping requests during load shedding)
 * If you cannot run repair and have to increase GCSeconds to prevent deleted 
 data reappearing, in some cases the growing tombstone overhead can 
 significantly degrade performance
 Because of the foregoing, in high throughput environments it can be very 
 difficult to make repair a cron job. It can be preferable to keep a terminal 
 open and run repair jobs one by one, making sure they succeed and keeping an 
 eye on overall load to reduce system impact. This isn't desirable, and 
 problems are exacerbated when there are lots of column families in a database 
 or it is necessary to run a column family with a low GCSeconds to reduce 
 tombstone load (because there are many write/deletes to that column family). 
 The database owner must run repair within the GCSeconds window, or increase 
 GCSeconds, to avoid potentially losing delete operations. 
 It would be much better if there was no ongoing requirement to run repair to 
 ensure deletes aren't lost, and no GCSeconds window. Ideally repair would be 
 an optional maintenance utility used in special cases, or to ensure ONE reads 
 get consistent data. 
 h2. Reaper Model Proposal
 # Tombstones do not expire, and there is no GCSeconds
 # Tombstones have associated ACK lists, which record the replicas that have 
 acknowledged them
 # Tombstones are deleted (or marked for compaction) when they have been 
 acknowledged by all replicas
 # When a tombstone is deleted, it is added to a relic index. The relic 
 index makes it possible for a reaper to acknowledge a tombstone after it is 
 deleted
 # The ACK lists and relic index are held in memory for speed
 # Background reaper threads constantly stream ACK requests to other nodes, 
 and stream ACK responses back for requests they have received (throttling 
 their usage of CPU and bandwidth so as not to affect performance)
 # If a reaper receives a request to ACK a tombstone that does not exist, it 
 creates the tombstone and adds an ACK for the requestor, and replies with an 
 ACK. This is the worst that can happen, and does not cause data corruption. 
 ADDENDUM
 The proposal to hold the ACK and relic lists in memory was added after the 
 first posting. Please see comments for full reasons. Furthermore, a proposal 
 for enhancements to repair was posted to comments, which would cause 
 tombstones to be scavenged when repair completes (the author had assumed this 
 was the case anyway, but it seems at time of writing they are only scavenged 
 during compaction on GCSeconds timeout). The proposals are not exclusive and 
 this proposal is extended to include the possible enhancements to repair 
 described.
 NOTES
 * If a node goes down for a prolonged period, the worst that can happen is 
 that some tombstones are recreated across the cluster when it restarts, which 
 does not corrupt data (and this will only occur with a very small number of 
 tombstones)
 * The system is simple to implement and predictable 
 * With the reaper model, repair would become an optional process for 
 optimizing the database to increase the consistency seen by 
 ConsistencyLevel.ONE 

[jira] [Commented] (CASSANDRA-3620) Proposal for distributed deletes - fully automatic Reaper Model rather than GCSeconds and manual repairs

2013-01-29 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13565747#comment-13565747
 ] 

Brandon Williams commented on CASSANDRA-3620:
-

bq. Not that it would be hard to gossip the commit log mode btw

I'd be fine with gossiping that as a safety check in addition to saying "use 
batch everywhere", since that would be a difficult thing to troubleshoot if 
they weren't.

 Proposal for distributed deletes - fully automatic Reaper Model rather than 
 GCSeconds and manual repairs
 --

 Key: CASSANDRA-3620
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3620
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Dominic Williams
  Labels: GCSeconds, deletes, distributed_deletes, merkle_trees, repair
 Fix For: 2.0

   Original Estimate: 504h
  Remaining Estimate: 504h

 Proposal for an improved system for handling distributed deletes, which 
 removes the requirement to regularly run repair processes to maintain 
 performance and data integrity. 
 h2. The Problem
 There are various issues with repair:
 * Repair is expensive to run
 * Repair jobs are often made more expensive than they should be by other 
 issues (nodes dropping requests, hinted handoff not working, downtime etc)
 * Repair processes can often fail and need restarting, for example in cloud 
 environments where network issues make a node disappear from the ring for a 
 brief moment
 * When you fail to run repair within GCSeconds, either by error or because of 
 issues with Cassandra, data written to a node that did not see a later delete 
 can reappear (and a node might miss a delete for several reasons including 
 being down or simply dropping requests during load shedding)
 * If you cannot run repair and have to increase GCSeconds to prevent deleted 
 data reappearing, in some cases the growing tombstone overhead can 
 significantly degrade performance
 Because of the foregoing, in high throughput environments it can be very 
 difficult to make repair a cron job. It can be preferable to keep a terminal 
 open and run repair jobs one by one, making sure they succeed and keeping an 
 eye on overall load to reduce system impact. This isn't desirable, and 
 problems are exacerbated when there are lots of column families in a database 
 or it is necessary to run a column family with a low GCSeconds to reduce 
 tombstone load (because there are many write/deletes to that column family). 
 The database owner must run repair within the GCSeconds window, or increase 
 GCSeconds, to avoid potentially losing delete operations. 
 It would be much better if there was no ongoing requirement to run repair to 
 ensure deletes aren't lost, and no GCSeconds window. Ideally repair would be 
 an optional maintenance utility used in special cases, or to ensure ONE reads 
 get consistent data. 
 h2. Reaper Model Proposal
 # Tombstones do not expire, and there is no GCSeconds
 # Tombstones have associated ACK lists, which record the replicas that have 
 acknowledged them
 # Tombstones are deleted (or marked for compaction) when they have been 
 acknowledged by all replicas
 # When a tombstone is deleted, it is added to a relic index. The relic 
 index makes it possible for a reaper to acknowledge a tombstone after it is 
 deleted
 # The ACK lists and relic index are held in memory for speed
 # Background reaper threads constantly stream ACK requests to other nodes, 
 and stream ACK responses back for the requests they have received (throttling 
 their usage of CPU and bandwidth so as not to affect performance)
 # If a reaper receives a request to ACK a tombstone that does not exist, it 
 creates the tombstone and adds an ACK for the requestor, and replies with an 
 ACK. This is the worst that can happen, and does not cause data corruption. 
 ADDENDUM
 The proposal to hold the ACK and relic lists in memory was added after the 
 first posting. Please see comments for full reasons. Furthermore, a proposal 
 for enhancements to repair was posted to comments, which would cause 
 tombstones to be scavenged when repair completes (the author had assumed this 
 was the case anyway, but it seems at time of writing they are only scavenged 
 during compaction on GCSeconds timeout). The proposals are not exclusive and 
 this proposal is extended to include the possible enhancements to repair 
 described.
 NOTES
 * If a node goes down for a prolonged period, the worst that can happen is 
 that some tombstones are recreated across the cluster when it restarts, which 
 does not corrupt data (and this will only occur with a very small number of 
 tombstones)
 * The system is simple to implement and predictable 
 * With the reaper model, repair would become an optional process for 
 optimizing the database to increase the consistency seen by 
 ConsistencyLevel.ONE

[jira] [Assigned] (CASSANDRA-5195) Offline scrub does not migrate the directory structure on migration from 1.0.x to 1.1.x and causes the keyspace to disappear

2013-01-29 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reassigned CASSANDRA-5195:
-

Assignee: Ryan McGuire

2nd try of assign-to-ryan

 Offline scrub does not migrate the directory structure on migration from 
 1.0.x to 1.1.x and causes the keyspace to disappear
 

 Key: CASSANDRA-5195
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5195
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.9
Reporter: Omid Aladini
Assignee: Ryan McGuire
 Fix For: 1.1.9

 Attachments: 5195.patch


 Due to CASSANDRA-4411, upon migration from 1.0.x to 1.1.x containing 
 LCS-compacted sstables, an offline scrub should be run before Cassandra 1.1.x 
 is started. But Cassandra 1.1.x uses a new directory structure 
 (CASSANDRA-2749) that offline scrubber doesn't detect or try to migrate.
 How to reproduce:
 1- Run cassandra 1.0.12.
 2- Run stress tool, let Cassandra flush Keyspace1 or flush manually.
 3- Stop cassandra 1.0.12
 4- Run ./bin/sstablescrub Keyspace1 Standard1
   which returns "Unknown keyspace/columnFamily Keyspace1.Standard1" and 
 notice the data directory isn't migrated.
 5- Run cassandra 1.1.9. Keyspace1 doesn't get loaded and Cassandra doesn't 
 try to migrate the directory structure. Also commitlog entries get skipped: 
 "Skipped X mutations from unknown (probably removed) CF with id 1000"
 Without the unsuccessful step 4, Cassandra 1.1.9 loads and migrates the 
 Keyspace correctly.
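
For orientation, a hypothetical sketch of the missing step: detect sstables in 
the pre-1.1 flat layout and move them into the per-CF directories introduced 
by CASSANDRA-2749 before scrubbing. Every name below is invented; this is not 
Omid's patch, and it assumes old components sit directly under 
data/<keyspace>/ with a <columnfamily>- filename prefix.

{noformat}
import java.io.File;

// Hypothetical: migrate pre-1.1 flat sstables into per-CF directories.
public class LegacyLayoutMigrator
{
    public static void migrate(File dataDir, String keyspace, String columnFamily)
    {
        File ksDir = new File(dataDir, keyspace);
        File cfDir = new File(ksDir, columnFamily);
        if (!cfDir.exists() && !cfDir.mkdirs())
            throw new RuntimeException("cannot create " + cfDir);

        File[] files = ksDir.listFiles();
        if (files == null)
            return; // keyspace directory missing or unreadable
        for (File f : files)
        {
            // move each old-layout component (Data, Index, Filter, ...) into place
            if (f.isFile() && f.getName().startsWith(columnFamily + "-")
                    && !f.renameTo(new File(cfDir, f.getName())))
                throw new RuntimeException("failed to move " + f);
        }
    }
}
{noformat}

With a check of this kind up front, sstablescrub would at least find the 
keyspace instead of reporting it as unknown.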
   

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (CASSANDRA-5191) BufferOverflowException in CommitLogSegment

2013-01-29 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-5191.
---

Resolution: Won't Fix

You're right, it has to do with the short ttl.

What happens is, node 1 sends an ExpiredColumn to node 2 
(ColumnSerializer.serialize).  Node 2 reads it (ColumnSerializer.deserialize), 
but because its clock is ahead, reads it as a DeletedColumn (see 
ExpiredColumn.create).

CommitLog$LogRecordAdder checks to see if there is enough room for the 
mutation (hasCapacityFor).  DeletedColumn says, "I'm X bytes."  Then 
CommitLogSegment.write goes to write the actual data, but it's over-clever, and 
re-uses the bytes it was originally sent from node 1, which was the 
ExpiringColumn, which is 8 bytes larger.

This is fixed in 1.2 thanks to the MessagingService rewrite there, but for 1.1 
you'll have to either use a longer ttl or sync your clocks better.
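
A toy illustration of that mismatch (sizes and names are made up; this is not 
the 1.1 column format): the capacity check is sized from the smaller 
deserialized form, while the write path reuses the larger original bytes.

{noformat}
import java.nio.ByteBuffer;

// Illustrative only: capacity checked against one size, write done with another.
public class OverflowDemo
{
    public static void main(String[] args)
    {
        byte[] wireBytes = new byte[20];              // pretend ExpiringColumn from node 1
        int deserializedSize = wireBytes.length - 8;  // read back as DeletedColumn: 8 bytes smaller

        // a hasCapacityFor-style check reserves room for the DeletedColumn...
        ByteBuffer segment = ByteBuffer.allocate(deserializedSize);

        // ...but the write path re-uses the original ExpiringColumn bytes.
        segment.put(wireBytes); // throws java.nio.BufferOverflowException, as in the report
    }
}
{noformat}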

 BufferOverflowException in CommitLogSegment
 ---

 Key: CASSANDRA-5191
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5191
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.9
 Environment: RHEL 2.6.32-220.el6.x86_64, jdk1.6.0_27
Reporter: André Borgqvist
 Attachments: BufferOverflowTest.java


 Running mixed reads, writes and deletes on a single column family in a two 
 node cluster. After a few minutes the following appears in the system log:
 ERROR [COMMIT-LOG-WRITER] 2013-01-25 12:49:55,955 
 AbstractCassandraDaemon.java (line 135) Exception in thread 
 Thread[COMMIT-LOG-WRITER,5,main]
 java.nio.BufferOverflowException
   at java.nio.Buffer.nextPutIndex(Buffer.java:499)
   at java.nio.DirectByteBuffer.putLong(DirectByteBuffer.java:756)
   at 
 org.apache.cassandra.db.commitlog.CommitLogSegment.write(CommitLogSegment.java:265)
   at 
 org.apache.cassandra.db.commitlog.CommitLog$LogRecordAdder.run(CommitLog.java:382)
   at 
 org.apache.cassandra.db.commitlog.PeriodicCommitLogExecutorService$1.runMayThrow(PeriodicCommitLogExecutorService.java:50)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
   at java.lang.Thread.run(Thread.java:662)
 Possibly related to https://issues.apache.org/jira/browse/CASSANDRA-3615

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5196) Uninformative exception thrown when running new installation with old data directories

2013-01-29 Thread Robbie Strickland (JIRA)
Robbie Strickland created CASSANDRA-5196:


 Summary: Uninformative exception thrown when running new 
installation with old data directories
 Key: CASSANDRA-5196
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5196
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.1
 Environment: CentOS 5.5
Reporter: Robbie Strickland
Priority: Minor


If you install 1.2.1 when there are existing data directories, the scrub 
operation fails, throwing this exception:

ERROR [main] 2013-01-29 15:05:06,564 FileUtils.java (line 373) Stopping the 
gossiper and the RPC server
ERROR [main] 2013-01-29 15:05:06,564 CassandraDaemon.java (line 387) Exception 
encountered during startup
java.lang.IllegalStateException: No configured daemon
at 
org.apache.cassandra.service.StorageService.stopRPCServer(StorageService.java:314)
at 
org.apache.cassandra.io.util.FileUtils.handleFSError(FileUtils.java:375)
at org.apache.cassandra.db.Directories.init(Directories.java:113)
at org.apache.cassandra.db.Directories.create(Directories.java:91)
at 
org.apache.cassandra.db.ColumnFamilyStore.scrubDataDirectories(ColumnFamilyStore.java:403)
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:174)
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:370)
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:413)

This condition produce a more reasonable exception.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5196) Uninformative exception thrown when running new installation with old data directories

2013-01-29 Thread Robbie Strickland (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robbie Strickland updated CASSANDRA-5196:
-

Description: 
If you install 1.2.1 when there are existing data directories, the scrub 
operation fails, throwing this exception:

ERROR [main] 2013-01-29 15:05:06,564 FileUtils.java (line 373) Stopping the 
gossiper and the RPC server
ERROR [main] 2013-01-29 15:05:06,564 CassandraDaemon.java (line 387) Exception 
encountered during startup
java.lang.IllegalStateException: No configured daemon
at 
org.apache.cassandra.service.StorageService.stopRPCServer(StorageService.java:314)
at 
org.apache.cassandra.io.util.FileUtils.handleFSError(FileUtils.java:375)
at org.apache.cassandra.db.Directories.init(Directories.java:113)
at org.apache.cassandra.db.Directories.create(Directories.java:91)
at 
org.apache.cassandra.db.ColumnFamilyStore.scrubDataDirectories(ColumnFamilyStore.java:403)
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:174)
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:370)
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:413)

This condition should produce a more reasonable exception.

  was:
If you install 1.2.1 when there are existing data directories, the scrub 
operation fails, throwing this exception:

ERROR [main] 2013-01-29 15:05:06,564 FileUtils.java (line 373) Stopping the 
gossiper and the RPC server
ERROR [main] 2013-01-29 15:05:06,564 CassandraDaemon.java (line 387) Exception 
encountered during startup
java.lang.IllegalStateException: No configured daemon
at 
org.apache.cassandra.service.StorageService.stopRPCServer(StorageService.java:314)
at 
org.apache.cassandra.io.util.FileUtils.handleFSError(FileUtils.java:375)
at org.apache.cassandra.db.Directories.init(Directories.java:113)
at org.apache.cassandra.db.Directories.create(Directories.java:91)
at 
org.apache.cassandra.db.ColumnFamilyStore.scrubDataDirectories(ColumnFamilyStore.java:403)
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:174)
at 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:370)
at 
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:413)

This condition produce a more reasonable exception.


 Uninformative exception thrown when running new installation with old data 
 directories
 --

 Key: CASSANDRA-5196
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5196
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.1
 Environment: CentOS 5.5
Reporter: Robbie Strickland
Priority: Minor

 If you install 1.2.1 when there are existing data directories, the scrub 
 operation fails, throwing this exception:
 ERROR [main] 2013-01-29 15:05:06,564 FileUtils.java (line 373) Stopping the 
 gossiper and the RPC server
 ERROR [main] 2013-01-29 15:05:06,564 CassandraDaemon.java (line 387) 
 Exception encountered during startup
 java.lang.IllegalStateException: No configured daemon
   at 
 org.apache.cassandra.service.StorageService.stopRPCServer(StorageService.java:314)
   at 
 org.apache.cassandra.io.util.FileUtils.handleFSError(FileUtils.java:375)
   at org.apache.cassandra.db.Directories.init(Directories.java:113)
   at org.apache.cassandra.db.Directories.create(Directories.java:91)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.scrubDataDirectories(ColumnFamilyStore.java:403)
   at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:174)
   at 
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:370)
   at 
 org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:413)
 This condition should produce a more reasonable exception.
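
One possible shape for that, purely as a hypothetical sketch (not the 
committed fix; the names are invented): validate the data directories up front 
and fail with a message naming the directory, rather than letting 
FileUtils.handleFSError try to stop a daemon that was never configured.

{noformat}
import java.io.File;

// Hypothetical: a startup-time check that yields an actionable error message.
public class DataDirectoryCheck
{
    public static void ensureUsable(File dir)
    {
        if (!dir.exists() && !dir.mkdirs())
            throw new RuntimeException("Cannot create data directory " + dir
                    + "; check permissions/ownership of directories left by a previous install");
        if (!dir.canWrite())
            throw new RuntimeException("Data directory " + dir
                    + " is not writable by the Cassandra process");
    }
}
{noformat}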

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5196) IllegalStateException thrown when running new installation with old data directories

2013-01-29 Thread Robbie Strickland (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robbie Strickland updated CASSANDRA-5196:
-

Summary: IllegalStateException thrown when running new installation with 
old data directories  (was: Uninformative exception thrown when running new 
installation with old data directories)

 IllegalStateException thrown when running new installation with old data 
 directories
 

 Key: CASSANDRA-5196
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5196
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.1
 Environment: CentOS 5.5
Reporter: Robbie Strickland
Priority: Minor

 If you install 1.2.1 when there are existing data directories, the scrub 
 operation fails, throwing this exception:
 ERROR [main] 2013-01-29 15:05:06,564 FileUtils.java (line 373) Stopping the 
 gossiper and the RPC server
 ERROR [main] 2013-01-29 15:05:06,564 CassandraDaemon.java (line 387) 
 Exception encountered during startup
 java.lang.IllegalStateException: No configured daemon
   at 
 org.apache.cassandra.service.StorageService.stopRPCServer(StorageService.java:314)
   at 
 org.apache.cassandra.io.util.FileUtils.handleFSError(FileUtils.java:375)
   at org.apache.cassandra.db.Directories.init(Directories.java:113)
   at org.apache.cassandra.db.Directories.create(Directories.java:91)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.scrubDataDirectories(ColumnFamilyStore.java:403)
   at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:174)
   at 
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:370)
   at 
 org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:413)
 This condition should produce a more reasonable exception.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (CASSANDRA-5196) IllegalStateException thrown when running new installation with old data directories

2013-01-29 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams reassigned CASSANDRA-5196:
---

Assignee: Aleksey Yeschenko

 IllegalStateException thrown when running new installation with old data 
 directories
 

 Key: CASSANDRA-5196
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5196
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.1
 Environment: CentOS 5.5
Reporter: Robbie Strickland
Assignee: Aleksey Yeschenko
Priority: Minor

 If you install 1.2.1 when there are existing data directories, the scrub 
 operation fails, throwing this exception:
 ERROR [main] 2013-01-29 15:05:06,564 FileUtils.java (line 373) Stopping the 
 gossiper and the RPC server
 ERROR [main] 2013-01-29 15:05:06,564 CassandraDaemon.java (line 387) 
 Exception encountered during startup
 java.lang.IllegalStateException: No configured daemon
   at 
 org.apache.cassandra.service.StorageService.stopRPCServer(StorageService.java:314)
   at 
 org.apache.cassandra.io.util.FileUtils.handleFSError(FileUtils.java:375)
   at org.apache.cassandra.db.Directories.init(Directories.java:113)
   at org.apache.cassandra.db.Directories.create(Directories.java:91)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.scrubDataDirectories(ColumnFamilyStore.java:403)
   at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:174)
   at 
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:370)
   at 
 org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:413)
 This condition should produce a more reasonable exception.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5197) Loading persisted ring state in a mixed cluster can throw AE

2013-01-29 Thread Brandon Williams (JIRA)
Brandon Williams created CASSANDRA-5197:
---

 Summary: Loading persisted ring state in a mixed cluster can throw 
AE
 Key: CASSANDRA-5197
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5197
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.1
Reporter: Brandon Williams
Assignee: Brandon Williams
 Fix For: 1.2.2


{noformat}
 INFO 02:07:16,263 Loading persisted ring state
java.lang.AssertionError
at 
org.apache.cassandra.locator.TokenMetadata.updateHostId(TokenMetadata.java:221)
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:451)
at 
org.apache.cassandra.service.StorageService.initServer(StorageService.java:406)
at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:282)
at 
org.apache.cassandra.service.CassandraDaemon.init(CassandraDaemon.java:315)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:212)
{noformat}

We assume every host has a hostid, but this is not always true.
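
A hedged sketch of the kind of guard implied, with invented names (not the 
actual patch): when loading persisted ring state, tolerate a peer that never 
gossiped a host id instead of tripping the assertion in 
TokenMetadata.updateHostId.

{noformat}
import java.net.InetAddress;
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Illustrative only: skip peers without a host id rather than asserting.
public class PersistedRingStateLoader
{
    private final Map<InetAddress, UUID> hostIds = new HashMap<InetAddress, UUID>();

    public void loadPeer(InetAddress peer, UUID hostId)
    {
        if (hostId == null)
        {
            // a pre-1.2 peer in a mixed cluster may have no host id on record
            System.err.println("No persisted host id for " + peer + "; skipping");
            return;
        }
        hostIds.put(peer, hostId);
    }
}
{noformat}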

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5198) token () function automatically coerses types leading to confusing output

2013-01-29 Thread Edward Capriolo (JIRA)
Edward Capriolo created CASSANDRA-5198:
--

 Summary: token () function automatically coerses types leading to 
confusing output
 Key: CASSANDRA-5198
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5198
 Project: Cassandra
  Issue Type: Bug
Reporter: Edward Capriolo
Priority: Minor




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5198) token () function automatically coerses types leading to confusing output

2013-01-29 Thread Edward Capriolo (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Capriolo updated CASSANDRA-5198:
---

Affects Version/s: 1.2.1

 token () function automatically coerses types leading to confusing output
 -

 Key: CASSANDRA-5198
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5198
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.1
Reporter: Edward Capriolo
Priority: Minor



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5198) token () function automatically coerses types leading to confusing output

2013-01-29 Thread Edward Capriolo (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Capriolo updated CASSANDRA-5198:
---

Description: 
This works as it should.

{noformat}
cqlsh:movies> select * from users where token (username) > token('') ;

 username  | created_date | email | firstname | lastname | password
---+--+---+---+--+--
bsmith | null |  null |   bob |smith | null
 scapriolo | null |  null |stacey | capriolo | null
 ecapriolo | null |  null |edward | capriolo | null

cqlsh:movies> select * from users where token (username) > token('bsmith') ;

 username  | created_date | email | firstname | lastname | password
---+--+---+---+--+--
 scapriolo | null |  null |stacey | capriolo | null
 ecapriolo | null |  null |edward | capriolo | null

cqlsh:movies> select * from users where token (username) > token('scapriolo') ;

 username  | created_date | email | firstname | lastname | password
---+--+---+---+--+--
 ecapriolo | null |  null |edward | capriolo | null

{noformat}

But look what happens when you supply numbers into the token function.


{noformat}
cqlsh:movies> select * from users where token (username) > token(0) ;
 username  | created_date | email | firstname | lastname | password
---+--+---+---+--+--
 ecapriolo | null |  null |edward | capriolo | null
cqlsh:movies> select * from users where token (username) > token(1134314) ;

 username  | created_date | email | firstname | lastname | password
---+--+---+---+--+--
bsmith | null |  null |   bob |smith | null
 scapriolo | null |  null |stacey | capriolo | null
 ecapriolo | null |  null |edward | capriolo | null

cqlsh:movies> select * from users where token (username) > token(113431431) ;
 username  | created_date | email | firstname | lastname | password
---+--+---+---+--+--
 scapriolo | null |  null |stacey | capriolo | null
 ecapriolo | null |  null |edward | capriolo | null

cqlsh:movies> select * from users where token (username) > token(1134) ;
 username  | created_date | email | firstname | lastname | password
---+--+---+---+--+--
 ecapriolo | null |  null |edward | capriolo | null
cqlsh:movies> select * from users where token (username) > token(1134434) ;
 username  | created_date | email | firstname | lastname | password
---+--+---+---+--+--
 scapriolo | null |  null |stacey | capriolo | null
{noformat}

This does not make sense to me. The token function is apparently converting 
integers to strings, leading to seemingly unpredictable results. 

However, I find this syntax odd; I feel like I should be able to say 
'token(username) > 0 and token(username) < 10', because from the thrift side I 
can page tokens or I can page keys. In this case, I guess, I am only able to 
page keys because the token is not returned to the user.

Is token 0 = ''? How do I arrive at the minimal token for an int column? 

Should the token() function at least be smart enough to reject integers for 
string columns?

  was:
This works as it should.

{noformat}
cqlsh:movies> select * from users where token (username) > token('') ;

 username  | created_date | email | firstname | lastname | password
---+--+---+---+--+--
bsmith | null |  null |   bob |smith | null
 scapriolo | null |  null |stacey | capriolo | null
 ecapriolo | null |  null |edward | capriolo | null

cqlsh:movies> select * from users where token (username) > token('bsmith') ;

 username  | created_date | email | firstname | lastname | password
---+--+---+---+--+--
 scapriolo | null |  null |stacey | capriolo | null
 ecapriolo | null |  null |edward | capriolo | null

cqlsh:movies> select * from users where token (username) > token('scapriolo') ;

 username  | created_date | email | firstname | lastname | password
---+--+---+---+--+--
 ecapriolo | null |  null |edward | capriolo | null

{noformat}

But look what happens when you supply numbers into the token function.


{noformat}
cqlsh:movies> select * from users where token (username) > token(0) ;
 username  | created_date | email | firstname | lastname | password

[jira] [Updated] (CASSANDRA-5198) token () function automatically coerses types leading to confusing output

2013-01-29 Thread Edward Capriolo (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Capriolo updated CASSANDRA-5198:
---

Description: 
This works as it should.

{noformat}
cqlsh:movies> select * from users where token (username) > token('') ;

 username  | created_date | email | firstname | lastname | password
---+--+---+---+--+--
bsmith | null |  null |   bob |smith | null
 scapriolo | null |  null |stacey | capriolo | null
 ecapriolo | null |  null |edward | capriolo | null

cqlsh:movies> select * from users where token (username) > token('bsmith') ;

 username  | created_date | email | firstname | lastname | password
---+--+---+---+--+--
 scapriolo | null |  null |stacey | capriolo | null
 ecapriolo | null |  null |edward | capriolo | null

cqlsh:movies> select * from users where token (username) > token('scapriolo') ;

 username  | created_date | email | firstname | lastname | password
---+--+---+---+--+--
 ecapriolo | null |  null |edward | capriolo | null

{noformat}

But look what happens when you supply numbers into the token function.


{noformat}
cqlsh:movies> select * from users where token (username) > token(0) ;
 username  | created_date | email | firstname | lastname | password
---+--+---+---+--+--
 ecapriolo | null |  null |edward | capriolo | null
cqlsh:movies> select * from users where token (username) > token(1134314) ;

 username  | created_date | email | firstname | lastname | password
---+--+---+---+--+--
bsmith | null |  null |   bob |smith | null
 scapriolo | null |  null |stacey | capriolo | null
 ecapriolo | null |  null |edward | capriolo | null

cqlsh:movies> select * from users where token (username) > token(113431431) ;
 username  | created_date | email | firstname | lastname | password
---+--+---+---+--+--
 scapriolo | null |  null |stacey | capriolo | null
 ecapriolo | null |  null |edward | capriolo | null

cqlsh:movies> select * from users where token (username) > token(1134) ;
 username  | created_date | email | firstname | lastname | password
---+--+---+---+--+--
 ecapriolo | null |  null |edward | capriolo | null
cqlsh:movies> select * from users where token (username) > token(1134434) ;
 username  | created_date | email | firstname | lastname | password
---+--+---+---+--+--
 scapriolo | null |  null |stacey | capriolo | null
{noformat}

This does not make sense to me. The token function is apparently converting 
integers to strings, leading to seemingly unpredictable results. 

However, I find this syntax odd; I feel like I should be able to say 
'token(username) > 0 and token(username) < 10', because from the thrift side I 
can page tokens or I can page keys. In this case, I guess, I am only able to 
page keys because the token is not returned to the user.

Is token 0 = ''? How do I arrive at the minimal token for an int column? 

 token () function automatically coerses types leading to confusing output
 -

 Key: CASSANDRA-5198
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5198
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.1
Reporter: Edward Capriolo
Priority: Minor

 This works as it should.
 {noformat}
 cqlsh:movies> select * from users where token (username) > token('') ;
  username  | created_date | email | firstname | lastname | password
 ---+--+---+---+--+--
 bsmith | null |  null |   bob |smith | null
  scapriolo | null |  null |stacey | capriolo | null
  ecapriolo | null |  null |edward | capriolo | null
 cqlsh:movies> select * from users where token (username) > token('bsmith') ;
  username  | created_date | email | firstname | lastname | password
 ---+--+---+---+--+--
  scapriolo | null |  null |stacey | capriolo | null
  ecapriolo | null |  null |edward | capriolo | null
 cqlsh:movies> select * from users where token (username) > token('scapriolo') ;
  username  | created_date | email | firstname | lastname | password
 ---+--+---+---+--+--
  ecapriolo | null |  null 

[jira] [Updated] (CASSANDRA-5198) token () function automatically coerces types leading to confusing output

2013-01-29 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-5198:


Summary: token () function automatically coerces types leading to confusing 
output  (was: token () function automatically coerses types leading to 
confusing output)

 token () function automatically coerces types leading to confusing output
 -

 Key: CASSANDRA-5198
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5198
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.1
Reporter: Edward Capriolo
Priority: Minor

 This works as it should.
 {noformat}
 cqlsh:movies> select * from users where token (username) > token('') ;
  username  | created_date | email | firstname | lastname | password
 ---+--+---+---+--+--
 bsmith | null |  null |   bob |smith | null
  scapriolo | null |  null |stacey | capriolo | null
  ecapriolo | null |  null |edward | capriolo | null
 cqlsh:movies> select * from users where token (username) > token('bsmith') ;
  username  | created_date | email | firstname | lastname | password
 ---+--+---+---+--+--
  scapriolo | null |  null |stacey | capriolo | null
  ecapriolo | null |  null |edward | capriolo | null
 cqlsh:movies> select * from users where token (username) > token('scapriolo') ;
  username  | created_date | email | firstname | lastname | password
 ---+--+---+---+--+--
  ecapriolo | null |  null |edward | capriolo | null
 {noformat}
 But look what happens when you supply numbers into the token function.
 {noformat}
 cqlsh:movies> select * from users where token (username) > token(0) ;
  username  | created_date | email | firstname | lastname | password
 ---+--+---+---+--+--
  ecapriolo | null |  null |edward | capriolo | null
 cqlsh:movies> select * from users where token (username) > token(1134314) ;
  username  | created_date | email | firstname | lastname | password
 ---+--+---+---+--+--
 bsmith | null |  null |   bob |smith | null
  scapriolo | null |  null |stacey | capriolo | null
  ecapriolo | null |  null |edward | capriolo | null
 cqlsh:movies> select * from users where token (username) > token(113431431) ;
  username  | created_date | email | firstname | lastname | password
 ---+--+---+---+--+--
  scapriolo | null |  null |stacey | capriolo | null
  ecapriolo | null |  null |edward | capriolo | null
 cqlsh:movies> select * from users where token (username) > token(1134) ;
  username  | created_date | email | firstname | lastname | password
 ---+--+---+---+--+--
  ecapriolo | null |  null |edward | capriolo | null
 cqlsh:movies> select * from users where token (username) > token(1134434) ;
  username  | created_date | email | firstname | lastname | password
 ---+--+---+---+--+--
  scapriolo | null |  null |stacey | capriolo | null
 {noformat}
 This does not make sense to me. The token function is apparently converting 
 integers to strings, leading to seemingly unpredictable results. 
 However, I find this syntax odd; I feel like I should be able to say 
 'token(username) > 0 and token(username) < 10', because from the thrift side I 
 can page tokens or I can page keys. In this case, I guess, I am only able to 
 page keys because the token is not returned to the user.
 Is token 0 = ''? How do I arrive at the minimal token for an int column? 
 Should the token() function at least be smart enough to reject integers for 
 string columns?
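
One way to see why the coerced literal pages so unpredictably, as a 
self-contained illustration (MD5 stands in for the pre-1.2 RandomPartitioner, 
and the hash-the-textual-bytes behaviour is assumed from the observations 
above): the token of the textual form of a number is unrelated to the token of 
its 4-byte int encoding.

{noformat}
import java.math.BigInteger;
import java.nio.ByteBuffer;
import java.security.MessageDigest;

// Illustrative only: textual vs. binary encodings of the same number
// hash to unrelated tokens, so a coerced literal lands arbitrarily.
public class TokenCoercionDemo
{
    static BigInteger token(byte[] key) throws Exception
    {
        return new BigInteger(MessageDigest.getInstance("MD5").digest(key)).abs();
    }

    public static void main(String[] args) throws Exception
    {
        byte[] asText = "1134314".getBytes("UTF-8");                    // what the coercion hashes
        byte[] asInt  = ByteBuffer.allocate(4).putInt(1134314).array(); // an int column's encoding

        System.out.println("token(\"1134314\") = " + token(asText));
        System.out.println("token(1134314)    = " + token(asInt));
    }
}
{noformat}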

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (CASSANDRA-5195) Offline scrub does not migrate the directory structure on migration from 1.0.x to 1.1.x and causes the keyspace to disappear

2013-01-29 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire reassigned CASSANDRA-5195:
---

Assignee: Jonathan Ellis  (was: Ryan McGuire)

 Offline scrub does not migrate the directory structure on migration from 
 1.0.x to 1.1.x and causes the keyspace to disappear
 

 Key: CASSANDRA-5195
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5195
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.9
Reporter: Omid Aladini
Assignee: Jonathan Ellis
 Fix For: 1.1.9

 Attachments: 5195.patch


 Due to CASSANDRA-4411, upon migration from 1.0.x to 1.1.x containing 
 LCS-compacted sstables, an offline scrub should be run before Cassandra 1.1.x 
 is started. But Cassandra 1.1.x uses a new directory structure 
 (CASSANDRA-2749) that offline scrubber doesn't detect or try to migrate.
 How to reproduce:
 1- Run cassandra 1.0.12.
 2- Run stress tool, let Cassandra flush Keyspace1 or flush manually.
 3- Stop cassandra 1.0.12
 4- Run ./bin/sstablescrub Keyspace1 Standard1
   which returns "Unknown keyspace/columnFamily Keyspace1.Standard1" and 
 notice the data directory isn't migrated.
 5- Run cassandra 1.1.9. Keyspace1 doesn't get loaded and Cassandra doesn't 
 try to migrate the directory structure. Also commitlog entries get skipped: 
 "Skipped X mutations from unknown (probably removed) CF with id 1000"
 Without the unsuccessful step 4, Cassandra 1.1.9 loads and migrates the 
 Keyspace correctly.
   

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5195) Offline scrub does not migrate the directory structure on migration from 1.0.x to 1.1.x and causes the keyspace to disappear

2013-01-29 Thread Ryan McGuire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13566128#comment-13566128
 ] 

Ryan McGuire commented on CASSANDRA-5195:
-

I have reproduced this issue. Omid's patch works as he described: it does not 
fully fix the issue, but does allow the keyspace to be loaded on the 2nd 
restart of cassandra. Below is my verification workflow:

* Checkout/build 1.0.12
** cd $CASSANDRA_DIR
** git checkout -b 5195-1.0.12
** git reset --hard cassandra-1.0.12
** git clean -f -d
** ant build
* Run 1.0.12 test:
** sudo rm -rf /var/lib/cassandra
** sudo cassandra
** cd tool/stress
** ant build
** ./bin/stress
** sudo pkill -f CassandraDaemon
* Verify the keyspace/cf was created by stress:
** 10:20 PM:~/git/datastax/cassandra/tools/stress[5195-1.0.12*]$ cqlsh
   Connected to Test Cluster at localhost:9160.
   [cqlsh 2.0.0 | Cassandra unknown | CQL spec unknown | Thrift protocol 
19.20.0]
   Use HELP for help.
   cqlsh> use Keyspace1 ;
   cqlsh:Keyspace1> select count(*) from Standard1;
   count
   ---
   1
* Checkout/build 1.1.9
** cd $CASSANDRA_DIR
** git checkout -b 5195-1.1.9
** git reset --hard cassandra-1.1.9
** git clean -f -d
** ant build
* Run 1.1.9 test:
** sudo ./bin/sstablescrub Keyspace1 Standard1
*** stdout: "Unknown keyspace/columnFamily Keyspace1.Standard1"
** sudo cassandra
*** log:  INFO [main] 2013-01-29 22:28:44,800 CommitLogReplayer.java (line 103) 
Skipped 585748 mutations from unknown (probably removed) CF with id 1000
* Verify that Keyspace1 does or does not exist:
** 10:30 PM:~/git/datastax/cassandra[5195-1.1.9*]$ cqlsh
   Connected to Test Cluster at localhost:9160.
   [cqlsh 2.2.0 | Cassandra 1.1.9-SNAPSHOT | CQL spec 2.0.0 | Thrift protocol 
19.33.0]
   Use HELP for help.
   cqlsh> use Keyspace1 ;
   Bad Request: Keyspace 'Keyspace1' does not exist
* Run 1.1.9 test again without the sstablescrub (restoring /var/lib/cassandra 
from before):
** sudo pkill -f CassandraDaemon
** sudo cassandra
*** log:  INFO 22:33:01,240 Replaying 
/var/lib/cassandra/commitlog/CommitLog-1359515707503.log, 
/var/lib/cassandra/commitlog/CommitLog-1359515946450.log
INFO 22:33:01,244 Replaying 
/var/lib/cassandra/commitlog/CommitLog-1359515707503.log
INFO 22:33:02,318 CFS(Keyspace='Keyspace1', ColumnFamily='Standard1') 
liveRatio is 4.55084790673026 (just-counted was 4.55084790673026).  calculation 
took 866ms for 4590 columns
INFO 22:33:02,930 CFS(Keyspace='Keyspace1', ColumnFamily='Standard1') 
liveRatio is 5.226616220760892 (just-counted was 5.226616220760892).  
calculation took 357ms for 11635 columns
INFO 22:33:04,186 CFS(Keyspace='Keyspace1', ColumnFamily='Standard1') 
liveRatio is 5.094053078093754 (just-counted was 4.9614899354266155).  
calculation took 859ms for 26720 columns
* Verify that Keyspace1 does or does not exist:
** 10:36 PM:~/git/datastax/cassandra[5195-1.1.9*]$ cqlsh
   Connected to Test Cluster at localhost:9160.
   [cqlsh 2.2.0 | Cassandra 1.1.9-SNAPSHOT | CQL spec 2.0.0 | Thrift protocol 
19.33.0]
   Use HELP for help.
   cqlsh> use Keyspace1;
   cqlsh:Keyspace1> select count(*) from Standard1;
   count
   ---
   1
* Apply patch and retest:
** cd $CASSANDRA_DIR
** git apply ~/Downloads/5195.patch
** ant clean build 
** sudo rm -rf /var/lib/cassandra
** (restore /var/lib/cassandra from 1.0.12)
** sudo pkill -f CassandraDaemon
** sudo ./bin/sstablescrub Keyspace1 Standard1
*** stdout:
Pre-scrub sstables snapshotted into snapshot pre-scrub-1359517364042
Scrubbing 
SSTableReader(path='/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-hd-17-Data.db')
Scrub of 
SSTableReader(path='/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-hd-17-Data.db')
 complete: 63608 rows in new sstable and 0 empty (tombstoned) rows dropped
Scrubbing 
SSTableReader(path='/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-hd-10-Data.db')
Scrub of 
SSTableReader(path='/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-hd-10-Data.db')
 complete: 258153 rows in new sstable and 0 empty (tombstoned) rows dropped
Scrubbing 
SSTableReader(path='/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-hd-18-Data.db')
Scrub of 
SSTableReader(path='/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-hd-18-Data.db')
 complete: 65207 rows in new sstable and 0 empty (tombstoned) rows dropped
Scrubbing 
SSTableReader(path='/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-hd-15-Data.db')
Scrub of 
SSTableReader(path='/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-hd-15-Data.db')
 complete: 254487 rows in new sstable and 0 empty (tombstoned) rows dropped
Scrubbing 
SSTableReader(path='/var/lib/cassandra/data/Keyspace1/Standard1/Keyspace1-Standard1-hd-5-Data.db')
Scrub of 

[jira] [Created] (CASSANDRA-5199) Avoid serializing to byte[] on commitlog append

2013-01-29 Thread Jonathan Ellis (JIRA)
Jonathan Ellis created CASSANDRA-5199:
-

 Summary: Avoid serializing to byte[] on commitlog append
 Key: CASSANDRA-5199
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5199
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 2.0


We used to avoid re-serializing RowMutations by caching the byte[] that we read 
off the wire.  We don't do that anymore since we fixed MessagingService to not 
create intermediate byte[].  So we should serialize the mutation directly to 
the commitlog.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5199) Avoid serializing to byte[] on commitlog append

2013-01-29 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5199:
--

Attachment: 5199-1.2.txt

1.2 version just gets rid of the byte[] caching, since it's never actually 
re-used.

 Avoid serializing to byte[] on commitlog append
 ---

 Key: CASSANDRA-5199
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5199
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 2.0

 Attachments: 5199-1.2.txt


 We used to avoid re-serializing RowMutations by caching the byte[] that we 
 read off the wire.  We don't do that anymore since we fixed MessagingService 
 to not create intermediate byte[].  So we should serialize the mutation 
 directly to the commitlog.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5195) Offline scrub does not migrate the directory structure on migration from 1.0.x to 1.1.x and causes the keyspace to disappear

2013-01-29 Thread Ryan McGuire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13566142#comment-13566142
 ] 

Ryan McGuire commented on CASSANDRA-5195:
-

One other interesting aspect: Removing the patch, resetting /var/lib/cassandra 
to the 1.0.12 state, re-running sstablescrub, restarting cassandra TWICE allows 
the keyspace to be read, but the table is empty! :


* 11:03 PM:~/git/datastax/cassandra[5195-1.1.9*]$ cqlsh
  Connected to Test Cluster at localhost:9160.
  [cqlsh 2.2.0 | Cassandra 1.1.9-SNAPSHOT | CQL spec 2.0.0 | Thrift   protocol 
19.33.0]
  Use HELP for help.
  cqlsh> use Keyspace1;
  cqlsh:Keyspace1> select count(*) from Standard1;
   count
   ---
0


 Offline scrub does not migrate the directory structure on migration from 
 1.0.x to 1.1.x and causes the keyspace to disappear
 

 Key: CASSANDRA-5195
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5195
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.9
Reporter: Omid Aladini
Assignee: Jonathan Ellis
 Fix For: 1.1.9

 Attachments: 5195.patch


 Due to CASSANDRA-4411, upon migration from 1.0.x to 1.1.x containing 
 LCS-compacted sstables, an offline scrub should be run before Cassandra 1.1.x 
 is started. But Cassandra 1.1.x uses a new directory structure 
 (CASSANDRA-2749) that offline scrubber doesn't detect or try to migrate.
 How to reproduce:
 1- Run cassandra 1.0.12.
 2- Run stress tool, let Cassandra flush Keyspace1 or flush manually.
 3- Stop cassandra 1.0.12
 4- Run ./bin/sstablescrub Keyspace1 Standard1
   which returns "Unknown keyspace/columnFamily Keyspace1.Standard1" and 
 notice the data directory isn't migrated.
 5- Run cassandra 1.1.9. Keyspace1 doesn't get loaded and Cassandra doesn't 
 try to migrate the directory structure. Also commitlog entries get skipped: 
 "Skipped X mutations from unknown (probably removed) CF with id 1000"
 Without the unsuccessful step 4, Cassandra 1.1.9 loads and migrates the 
 Keyspace correctly.
   

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-5195) Offline scrub does not migrate the directory structure on migration from 1.0.x to 1.1.x and causes the keyspace to disappear

2013-01-29 Thread Ryan McGuire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13566142#comment-13566142
 ] 

Ryan McGuire edited comment on CASSANDRA-5195 at 1/30/13 4:07 AM:
--

One other interesting aspect: Removing the patch, resetting /var/lib/cassandra 
to the 1.0.12 state, re-running sstablescrub, restarting cassandra (1.1.9) 
TWICE allows the keyspace to be read, but the table is empty! :


* 11:03 PM:~/git/datastax/cassandra[5195-1.1.9*]$ cqlsh
  Connected to Test Cluster at localhost:9160.
  [cqlsh 2.2.0 | Cassandra 1.1.9-SNAPSHOT | CQL spec 2.0.0 | Thrift   protocol 
19.33.0]
  Use HELP for help.
  cqlsh> use Keyspace1;
  cqlsh:Keyspace1> select count(*) from Standard1;
   count
   ---
0


  was (Author: enigmacurry):
One other interesting aspect: Removing the patch, resetting 
/var/lib/cassandra to the 1.0.12 state, re-running sstablescrub, restarting 
cassandra TWICE allows the keyspace to be read, but the table is empty! :


* 11:03 PM:~/git/datastax/cassandra[5195-1.1.9*]$ cqlsh
  Connected to Test Cluster at localhost:9160.
  [cqlsh 2.2.0 | Cassandra 1.1.9-SNAPSHOT | CQL spec 2.0.0 | Thrift   protocol 
19.33.0]
  Use HELP for help.
  cqlsh> use Keyspace1;
  cqlsh:Keyspace1> select count(*) from Standard1;
   count
   ---
0

  
 Offline scrub does not migrate the directory structure on migration from 
 1.0.x to 1.1.x and causes the keyspace to disappear
 

 Key: CASSANDRA-5195
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5195
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.9
Reporter: Omid Aladini
Assignee: Jonathan Ellis
 Fix For: 1.1.9

 Attachments: 5195.patch


 Due to CASSANDRA-4411, upon migration from 1.0.x to 1.1.x containing 
 LCS-compacted sstables, an offline scrub should be run before Cassandra 1.1.x 
 is started. But Cassandra 1.1.x uses a new directory structure 
 (CASSANDRA-2749) that offline scrubber doesn't detect or try to migrate.
 How to reproduce:
 1- Run cassandra 1.0.12.
 2- Run stress tool, let Cassandra flush Keyspace1 or flush manually.
 3- Stop cassandra 1.0.12
 4- Run ./bin/sstablescrub Keyspace1 Standard1
   which returns "Unknown keyspace/columnFamily Keyspace1.Standard1" and 
 notice the data directory isn't migrated.
 5- Run cassandra 1.1.9. Keyspace1 doesn't get loaded and Cassandra doesn't 
 try to migrate the directory structure. Also commitlog entries get skipped: 
 "Skipped X mutations from unknown (probably removed) CF with id 1000"
 Without the unsuccessful step 4, Cassandra 1.1.9 loads and migrates the 
 Keyspace correctly.
   

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5199) Avoid serializing to byte[] on commitlog append

2013-01-29 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5199:
--

Attachment: 5199-2.0.txt

2.0 version adds ByteBufferOutputStream and ChecksummedOutputStream to get rid 
of byte[] serialization entirely.  Also fixes mutation-length checksumming to 
include the entire length, not just the first eight bits.
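
Since ByteBufferOutputStream and ChecksummedOutputStream are named but not 
shown, a minimal sketch of the idea (the bodies are guesses, not the patch): an 
OutputStream that writes straight into the segment's ByteBuffer, wrapped so 
every byte also feeds the checksum.

{noformat}
import java.io.IOException;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.util.zip.CRC32;
import java.util.zip.Checksum;

// Hypothetical sketch: serialize directly into the commitlog buffer, checksummed.
class ByteBufferOutputStream extends OutputStream
{
    private final ByteBuffer buffer;
    ByteBufferOutputStream(ByteBuffer buffer) { this.buffer = buffer; }
    public void write(int b) { buffer.put((byte) b); }
    public void write(byte[] b, int off, int len) { buffer.put(b, off, len); }
}

class ChecksummedOutputStream extends OutputStream
{
    private final OutputStream out;
    private final Checksum checksum = new CRC32();
    ChecksummedOutputStream(OutputStream out) { this.out = out; }
    public void write(int b) throws IOException { out.write(b); checksum.update(b); }
    public long getChecksum() { return checksum.getValue(); }
}
{noformat}

The "first eight bits" remark is a java.util.zip pitfall worth spelling out: 
Checksum.update(int) consumes only the low-order byte of its argument, so 
checksumming a 4-byte length takes four update calls:

{noformat}
import java.util.zip.CRC32;

// Checksum.update(int) uses only the low eight bits of its argument.
public class LengthChecksumDemo
{
    public static void main(String[] args)
    {
        int length = 0x01020304;

        CRC32 lowByteOnly = new CRC32();
        lowByteOnly.update(length); // only 0x04 is consumed

        CRC32 allFourBytes = new CRC32();
        allFourBytes.update((length >>> 24) & 0xFF);
        allFourBytes.update((length >>> 16) & 0xFF);
        allFourBytes.update((length >>> 8) & 0xFF);
        allFourBytes.update(length & 0xFF);

        // The two values differ whenever the high bytes of the length matter.
        System.out.println(lowByteOnly.getValue() + " vs " + allFourBytes.getValue());
    }
}
{noformat}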

 Avoid serializing to byte[] on commitlog append
 ---

 Key: CASSANDRA-5199
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5199
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 2.0

 Attachments: 5199-1.2.txt, 5199-2.0.txt


 We used to avoid re-serializing RowMutations by caching the byte[] that we 
 read off the wire.  We don't do that anymore since we fixed MessagingService 
 to not create intermediate byte[].  So we should serialize the mutation 
 directly to the commitlog.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira