[jira] [Commented] (CASSANDRA-1472) Add bitmap secondary indexes

2013-02-14 Thread Alexander Shutyaev (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578221#comment-13578221
 ] 

Alexander Shutyaev commented on CASSANDRA-1472:
---

What is the current status of this issue? Is it planned to be implemented? If 
so, is there a timeframe?

 Add bitmap secondary indexes
 

 Key: CASSANDRA-1472
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1472
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Stu Hood
 Attachments: 0.7-1472-v5.tgz, 0.7-1472-v6.tgz, 1472-v3.tgz, 
 1472-v4.tgz, 1472-v5.tgz, anatomy.png, 
 ASF.LICENSE.NOT.GRANTED--0001-CASSANDRA-1472-rebased-to-0.7-branch.txt, 
 ASF.LICENSE.NOT.GRANTED--0019-Rename-bugfixes-and-fileclose.txt, 
 v4-bench-c32.txt


 Bitmap indexes are a very efficient structure for dealing with immutable 
 data. We can take advantage of the fact that SSTables are immutable by 
 attaching them directly to SSTables as a new component (supported by 
 CASSANDRA-1471).
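
For readers unfamiliar with the structure, here is a minimal, hypothetical sketch 
(plain Java, not taken from the attached patches) of why bitmap indexes pair well 
with immutable data: each distinct indexed value maps to a bitset of row positions, 
the bitsets are write-once because the segment never changes, and queries reduce to 
cheap AND/OR operations over bitsets.

{code}
import java.util.BitSet;
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration only (not the attached patch): a per-segment
// bitmap index. Each distinct value maps to a bitset of row positions; since
// the segment is immutable (like an SSTable), the bitsets are write-once.
public final class SegmentBitmapIndex
{
    private final Map<String, BitSet> bitmaps = new HashMap<String, BitSet>();

    // Called once per row while the segment is being written.
    public void add(int rowPosition, String value)
    {
        BitSet bits = bitmaps.get(value);
        if (bits == null)
        {
            bits = new BitSet();
            bitmaps.put(value, bits);
        }
        bits.set(rowPosition);
    }

    // Rows matching both values: a single AND over two bitsets.
    public BitSet matchingBoth(String v1, String v2)
    {
        BitSet result = (BitSet) lookup(v1).clone();
        result.and(lookup(v2));
        return result;
    }

    private BitSet lookup(String value)
    {
        BitSet bits = bitmaps.get(value);
        return bits == null ? new BitSet() : bits;
    }
}
{code}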

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5251) Hadoop support should be able to work with multiple column families

2013-02-14 Thread Illarion Kovalchuk (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578241#comment-13578241
 ] 

Illarion Kovalchuk commented on CASSANDRA-5251:
---

No. It's about multiple input and multiple output column families. I am 
currently preparing a patch for this ticket, so you'll be able to review it.

 Hadoop support should be able to work with multiple column families
 ---

 Key: CASSANDRA-5251
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5251
 Project: Cassandra
  Issue Type: Improvement
  Components: Hadoop
Affects Versions: 1.2.1
Reporter: Illarion Kovalchuk
Priority: Minor



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5251) Hadoop support should be able to work with multiple column families

2013-02-14 Thread Illarion Kovalchuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Illarion Kovalchuk updated CASSANDRA-5251:
--

Affects Version/s: (was: 1.2.1)
   2.0

 Hadoop support should be able to work with multiple column families
 ---

 Key: CASSANDRA-5251
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5251
 Project: Cassandra
  Issue Type: Improvement
  Components: Hadoop
Affects Versions: 2.0
Reporter: Illarion Kovalchuk
Priority: Minor
 Attachments: trunk-5251.txt




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5251) Hadoop support should be able to work with multiple column families

2013-02-14 Thread Illarion Kovalchuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Illarion Kovalchuk updated CASSANDRA-5251:
--

Attachment: trunk-5251.txt

patch

 Hadoop support should be able to work with multiple column families
 ---

 Key: CASSANDRA-5251
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5251
 Project: Cassandra
  Issue Type: Improvement
  Components: Hadoop
Affects Versions: 2.0
Reporter: Illarion Kovalchuk
Priority: Minor
 Attachments: trunk-5251.txt




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5251) Hadoop support should be able to work with multiple column families

2013-02-14 Thread Illarion Kovalchuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Illarion Kovalchuk updated CASSANDRA-5251:
--

Description: 
This patch affects the API, so I changed the Hadoop example in it. The main 
difference is that ColumnFamilyInputFormat now generates splits for all input 
column families, and ColumnFamilyOutputFormat works not with List<Mutation>, 
but with List<Pair<String, Mutatiom>>, where Pair.left is the column family 
name.

Thank you
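
For context, a rough sketch of what building the value for the proposed multi-CF 
output API might look like. This is illustrative only and is not code from 
trunk-5251.txt; the helper class below, the use of Pair.create(...), and the 
column family names are assumptions.

{code}
import java.util.ArrayList;
import java.util.List;

import org.apache.cassandra.thrift.Column;
import org.apache.cassandra.thrift.ColumnOrSuperColumn;
import org.apache.cassandra.thrift.Mutation;
import org.apache.cassandra.utils.ByteBufferUtil;
import org.apache.cassandra.utils.Pair;

// Hypothetical sketch: build the (column family name, mutation) pairs that the
// proposed ColumnFamilyOutputFormat would consume, instead of a plain
// List<Mutation>. Pair.left carries the destination column family.
public class MultiCfMutations
{
    public static List<Pair<String, Mutation>> build(long count)
    {
        Column col = new Column();
        col.setName(ByteBufferUtil.bytes("count"));
        col.setValue(ByteBufferUtil.bytes(count));
        col.setTimestamp(System.currentTimeMillis());

        Mutation m = new Mutation();
        m.setColumn_or_supercolumn(new ColumnOrSuperColumn().setColumn(col));

        List<Pair<String, Mutation>> out = new ArrayList<Pair<String, Mutation>>();
        out.add(Pair.create("word_counts", m));       // Pair.left = output CF name
        out.add(Pair.create("word_counts_daily", m)); // same mutation, second CF
        return out;
    }
}
{code}

A reduce task would then call context.write(rowKey, MultiCfMutations.build(sum)) 
once per key, letting a single job write to several column families.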

 Hadoop support should be able to work with multiple column families
 ---

 Key: CASSANDRA-5251
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5251
 Project: Cassandra
  Issue Type: Improvement
  Components: Hadoop
Affects Versions: 2.0
Reporter: Illarion Kovalchuk
Priority: Minor
 Attachments: trunk-5251.txt


 This patch affects the API, so I changed the Hadoop example in it. The main 
 difference is that ColumnFamilyInputFormat now generates splits for all input 
 column families, and ColumnFamilyOutputFormat works not with List<Mutation>, 
 but with List<Pair<String, Mutatiom>>, where Pair.left is the column family 
 name.
 Thank you

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5251) Hadoop support should be able to work with multiple column families

2013-02-14 Thread Illarion Kovalchuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Illarion Kovalchuk updated CASSANDRA-5251:
--

Description: 
This patch affects the API, so I changed the Hadoop example in it. The main 
difference is that ColumnFamilyInputFormat now generates splits for all input 
column families, and ColumnFamilyOutputFormat works not with List<Mutation>, 
but with List<Pair<String, Mutation>>, where Pair.left is the column family 
name.

Thank you

  was:
This patch affects the API, so I changed the Hadoop example in it. The main 
difference is that ColumnFamilyInputFormat now generates splits for all input 
column families, and ColumnFamilyOutputFormat works not with List<Mutation>, 
but with List<Pair<String, Mutatiom>>, where Pair.left is the column family 
name.

Thank you


 Hadoop support should be able to work with multiple column families
 ---

 Key: CASSANDRA-5251
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5251
 Project: Cassandra
  Issue Type: Improvement
  Components: Hadoop
Affects Versions: 2.0
Reporter: Illarion Kovalchuk
Priority: Minor
 Attachments: trunk-5251.txt


 This patch affects the API, so I changed the Hadoop example in it. The main 
 difference is that ColumnFamilyInputFormat now generates splits for all input 
 column families, and ColumnFamilyOutputFormat works not with List<Mutation>, 
 but with List<Pair<String, Mutation>>, where Pair.left is the column family 
 name.
 Thank you

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4785) Secondary Index Sporadically Doesn't Return Rows

2013-02-14 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578307#comment-13578307
 ] 

André Cruz commented on CASSANDRA-4785:
---

After trying everything else, the problem was solved by dropping the index and 
creating it again.

One pattern I noticed is that this problem only happened when I had 
caching=ALL along with secondary indexes. On CFs with the default 
caching=keys_only, the secondary indexes were fine.

 Secondary Index Sporadically Doesn't Return Rows
 

 Key: CASSANDRA-4785
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4785
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.5, 1.1.6
 Environment: Ubuntu 10.04
 Java 6 Sun
 Cassandra 1.1.5 upgraded from 1.1.2 -> 1.1.3 -> 1.1.5
Reporter: Arya Goudarzi

 I have a ColumnFamily with caching = ALL. I have 2 secondary indexes on it. I 
 have noticed if I query using the secondary index in the where clause, 
 sometimes I get the results and sometimes I don't. Until 2 weeks ago, the 
 caching option on this CF was set to NONE. So, I suspect something happened 
 in secondary index caching scheme. 
 Here are things I tried:
 1. I rebuilt indexes for that CF on all nodes;
 2. I set the caching to KEYS_ONLY and rebuilt the index again;
 3. I set the caching to NONE and rebuilt the index again;
 None of the above helped. I suppose the caching still exists, as this behavior 
 looks like a cache mismatch.
 I did a bit of research and found CASSANDRA-4197, which could be related.
 Please advise.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-1472) Add bitmap secondary indexes

2013-02-14 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578374#comment-13578374
 ] 

Jonathan Ellis commented on CASSANDRA-1472:
---

Nobody is actively working on this.

 Add bitmap secondary indexes
 

 Key: CASSANDRA-1472
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1472
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Stu Hood
 Attachments: 0.7-1472-v5.tgz, 0.7-1472-v6.tgz, 1472-v3.tgz, 
 1472-v4.tgz, 1472-v5.tgz, anatomy.png, 
 ASF.LICENSE.NOT.GRANTED--0001-CASSANDRA-1472-rebased-to-0.7-branch.txt, 
 ASF.LICENSE.NOT.GRANTED--0019-Rename-bugfixes-and-fileclose.txt, 
 v4-bench-c32.txt


 Bitmap indexes are a very efficient structure for dealing with immutable 
 data. We can take advantage of the fact that SSTables are immutable by 
 attaching them directly to SSTables as a new component (supported by 
 CASSANDRA-1471).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5253) NPE while loading Saved KeyCache

2013-02-14 Thread Ahmed Guecioueur (JIRA)
Ahmed Guecioueur created CASSANDRA-5253:
---

 Summary: NPE while loading Saved KeyCache
 Key: CASSANDRA-5253
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5253
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.1
 Environment: JVM vendor/version: Java HotSpot(TM) Client VM/1.7.0_11
OS: Windows 7 Enterprise SP1, x64.
Reporter: Ahmed Guecioueur
Priority: Minor


This bug occurred in the Beta version and was marked as fixed in this Jira: 
CASSANDRA-4553

However, it seems to have reoccurred in the production 1.2.1 release. This is 
the first install I have made of Cassandra (so a clean install), which I 
downloaded prepackaged from 
http://www.apache.org/dyn/closer.cgi?path=/cassandra/1.2.1/apache-cassandra-1.2.1-bin.tar.gz

I have created a keyspace but not inserted any data, so that is not the issue 
either.

Here is a sample from the logs all the way from startup
{code}
 INFO [main] 2013-02-07 19:48:54,109 CassandraDaemon.java (line 101) Logging 
initialized
 INFO [main] 2013-02-07 19:48:54,125 CassandraDaemon.java (line 123) JVM 
vendor/version: Java HotSpot(TM) Client VM/1.7.0_11
 INFO [main] 2013-02-07 19:48:54,125 CassandraDaemon.java (line 124) Heap size: 
1067057152/1067057152
 INFO [main] 2013-02-07 19:48:54,126 CassandraDaemon.java (line 125) Classpath: 
C:\Cassandra\\conf;C:\Cassandra\\lib\antlr-3.2.jar;C:\Cassandra\\lib\apache-cassandra-1.2.1.jar;C:\Cassandra\\lib\apache-cassandra-clientutil-1.2.1.jar;C:\Cassandra\\lib\apache-cassandra-thrift-1.2.1.jar;C:\Cassandra\\lib\avro-1.4.0-fixes.jar;C:\Cassandra\\lib\avro-1.4.0-sources-fixes.jar;C:\Cassandra\\lib\commons-cli-1.1.jar;C:\Cassandra\\lib\commons-codec-1.2.jar;C:\Cassandra\\lib\commons-lang-2.6.jar;C:\Cassandra\\lib\compress-lzf-0.8.4.jar;C:\Cassandra\\lib\concurrentlinkedhashmap-lru-1.3.jar;C:\Cassandra\\lib\guava-13.0.1.jar;C:\Cassandra\\lib\high-scale-lib-1.1.2.jar;C:\Cassandra\\lib\jackson-core-asl-1.9.2.jar;C:\Cassandra\\lib\jackson-mapper-asl-1.9.2.jar;C:\Cassandra\\lib\jamm-0.2.5.jar;C:\Cassandra\\lib\jline-1.0.jar;C:\Cassandra\\lib\json-simple-1.1.jar;C:\Cassandra\\lib\libthrift-0.7.0.jar;C:\Cassandra\\lib\log4j-1.2.16.jar;C:\Cassandra\\lib\metrics-core-2.0.3.jar;C:\Cassandra\\lib\netty-3.5.9.Final.jar;C:\Cassandra\\lib\servlet-api-2.5-20081211.jar;C:\Cassandra\\lib\slf4j-api-1.7.2.jar;C:\Cassandra\\lib\slf4j-log4j12-1.7.2.jar;C:\Cassandra\\lib\snakeyaml-1.6.jar;C:\Cassandra\\lib\snappy-java-1.0.4.1.jar;C:\Cassandra\\lib\snaptree-0.1.jar;C:\Cassandra\\build\classes\main;C:\Cassandra\\build\classes\thrift;C:\Cassandra\\lib\jamm-0.2.5.jar
 INFO [main] 2013-02-07 19:48:54,130 CLibrary.java (line 61) JNA not found. 
Native methods will be disabled.
 INFO [main] 2013-02-07 19:48:54,147 DatabaseDescriptor.java (line 131) Loading 
settings from file:/C:/Cassandra/conf/cassandra.yaml
 INFO [main] 2013-02-07 19:48:54,515 DatabaseDescriptor.java (line 150) 32bit 
JVM detected.  It is recommended to run Cassandra on a 64bit JVM for better 
performance.
 INFO [main] 2013-02-07 19:48:54,516 DatabaseDescriptor.java (line 190) 
DiskAccessMode 'auto' determined to be standard, indexAccessMode is standard
 INFO [main] 2013-02-07 19:48:54,516 DatabaseDescriptor.java (line 204) 
disk_failure_policy is stop
 INFO [main] 2013-02-07 19:48:54,524 DatabaseDescriptor.java (line 267) Global 
memtable threshold is enabled at 339MB
 INFO [main] 2013-02-07 19:48:55,099 CacheService.java (line 111) Initializing 
key cache with capacity of 50 MBs.
 INFO [main] 2013-02-07 19:48:55,109 CacheService.java (line 140) Scheduling 
key cache save to each 14400 seconds (going to save all keys).
 INFO [main] 2013-02-07 19:48:55,110 CacheService.java (line 154) Initializing 
row cache with capacity of 0 MBs and provider 
org.apache.cassandra.cache.SerializingCacheProvider
 INFO [main] 2013-02-07 19:48:55,117 CacheService.java (line 166) Scheduling 
row cache save to each 0 seconds (going to save all keys).
 INFO [SSTableBatchOpen:1] 2013-02-07 19:48:55,452 SSTableReader.java (line 
164) Opening 
C:\Cassandra\data\system\schema_keyspaces\system-schema_keyspaces-ib-1 (258 
bytes)
 INFO [SSTableBatchOpen:1] 2013-02-07 19:48:55,484 SSTableReader.java (line 
164) Opening 
C:\Cassandra\data\system\schema_keyspaces\system-schema_keyspaces-ib-3 (262 
bytes)
 INFO [SSTableBatchOpen:1] 2013-02-07 19:48:55,489 SSTableReader.java (line 
164) Opening 
C:\Cassandra\data\system\schema_keyspaces\system-schema_keyspaces-ib-2 (262 
bytes)
 INFO [SSTableBatchOpen:1] 2013-02-07 19:48:55,517 SSTableReader.java (line 
164) Opening 
C:\Cassandra\data\system\schema_columnfamilies\system-schema_columnfamilies-ib-1
 (4420 bytes)
 INFO [SSTableBatchOpen:1] 2013-02-07 19:48:55,522 SSTableReader.java (line 
164) Opening 
C:\Cassandra\data\system\schema_columnfamilies\system-schema_columnfamilies-ib-3
 (4424 bytes)
 INFO 

[jira] [Assigned] (CASSANDRA-5253) NPE while loading Saved KeyCache

2013-02-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reassigned CASSANDRA-5253:
-

Assignee: Dave Brosius

 NPE while loading Saved KeyCache
 

 Key: CASSANDRA-5253
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5253
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.1
 Environment: JVM vendor/version: Java HotSpot(TM) Client VM/1.7.0_11
 OS: Windows 7 Enterprise SP1, x64.
Reporter: Ahmed Guecioueur
Assignee: Dave Brosius
Priority: Minor

 This bug occurred in the Beta version and was marked as fixed in this Jira: 
 CASSANDRA-4553
 However, it seems to have reoccurred in the production 1.2.1 release. This is 
 the first install I have made of Cassandra (so a clean install), which I 
 downloaded prepackaged from 
 http://www.apache.org/dyn/closer.cgi?path=/cassandra/1.2.1/apache-cassandra-1.2.1-bin.tar.gz
 I have created a keyspace but not inserted any data, so that is not the issue 
 either.
 Here is a sample from the logs all the way from startup
 {code}
  INFO [main] 2013-02-07 19:48:54,109 CassandraDaemon.java (line 101) Logging 
 initialized
  INFO [main] 2013-02-07 19:48:54,125 CassandraDaemon.java (line 123) JVM 
 vendor/version: Java HotSpot(TM) Client VM/1.7.0_11
  INFO [main] 2013-02-07 19:48:54,125 CassandraDaemon.java (line 124) Heap 
 size: 1067057152/1067057152
  INFO [main] 2013-02-07 19:48:54,126 CassandraDaemon.java (line 125) 
 Classpath: 
 C:\Cassandra\\conf;C:\Cassandra\\lib\antlr-3.2.jar;C:\Cassandra\\lib\apache-cassandra-1.2.1.jar;C:\Cassandra\\lib\apache-cassandra-clientutil-1.2.1.jar;C:\Cassandra\\lib\apache-cassandra-thrift-1.2.1.jar;C:\Cassandra\\lib\avro-1.4.0-fixes.jar;C:\Cassandra\\lib\avro-1.4.0-sources-fixes.jar;C:\Cassandra\\lib\commons-cli-1.1.jar;C:\Cassandra\\lib\commons-codec-1.2.jar;C:\Cassandra\\lib\commons-lang-2.6.jar;C:\Cassandra\\lib\compress-lzf-0.8.4.jar;C:\Cassandra\\lib\concurrentlinkedhashmap-lru-1.3.jar;C:\Cassandra\\lib\guava-13.0.1.jar;C:\Cassandra\\lib\high-scale-lib-1.1.2.jar;C:\Cassandra\\lib\jackson-core-asl-1.9.2.jar;C:\Cassandra\\lib\jackson-mapper-asl-1.9.2.jar;C:\Cassandra\\lib\jamm-0.2.5.jar;C:\Cassandra\\lib\jline-1.0.jar;C:\Cassandra\\lib\json-simple-1.1.jar;C:\Cassandra\\lib\libthrift-0.7.0.jar;C:\Cassandra\\lib\log4j-1.2.16.jar;C:\Cassandra\\lib\metrics-core-2.0.3.jar;C:\Cassandra\\lib\netty-3.5.9.Final.jar;C:\Cassandra\\lib\servlet-api-2.5-20081211.jar;C:\Cassandra\\lib\slf4j-api-1.7.2.jar;C:\Cassandra\\lib\slf4j-log4j12-1.7.2.jar;C:\Cassandra\\lib\snakeyaml-1.6.jar;C:\Cassandra\\lib\snappy-java-1.0.4.1.jar;C:\Cassandra\\lib\snaptree-0.1.jar;C:\Cassandra\\build\classes\main;C:\Cassandra\\build\classes\thrift;C:\Cassandra\\lib\jamm-0.2.5.jar
  INFO [main] 2013-02-07 19:48:54,130 CLibrary.java (line 61) JNA not found. 
 Native methods will be disabled.
  INFO [main] 2013-02-07 19:48:54,147 DatabaseDescriptor.java (line 131) 
 Loading settings from file:/C:/Cassandra/conf/cassandra.yaml
  INFO [main] 2013-02-07 19:48:54,515 DatabaseDescriptor.java (line 150) 32bit 
 JVM detected.  It is recommended to run Cassandra on a 64bit JVM for better 
 performance.
  INFO [main] 2013-02-07 19:48:54,516 DatabaseDescriptor.java (line 190) 
 DiskAccessMode 'auto' determined to be standard, indexAccessMode is standard
  INFO [main] 2013-02-07 19:48:54,516 DatabaseDescriptor.java (line 204) 
 disk_failure_policy is stop
  INFO [main] 2013-02-07 19:48:54,524 DatabaseDescriptor.java (line 267) 
 Global memtable threshold is enabled at 339MB
  INFO [main] 2013-02-07 19:48:55,099 CacheService.java (line 111) 
 Initializing key cache with capacity of 50 MBs.
  INFO [main] 2013-02-07 19:48:55,109 CacheService.java (line 140) Scheduling 
 key cache save to each 14400 seconds (going to save all keys).
  INFO [main] 2013-02-07 19:48:55,110 CacheService.java (line 154) 
 Initializing row cache with capacity of 0 MBs and provider 
 org.apache.cassandra.cache.SerializingCacheProvider
  INFO [main] 2013-02-07 19:48:55,117 CacheService.java (line 166) Scheduling 
 row cache save to each 0 seconds (going to save all keys).
  INFO [SSTableBatchOpen:1] 2013-02-07 19:48:55,452 SSTableReader.java (line 
 164) Opening 
 C:\Cassandra\data\system\schema_keyspaces\system-schema_keyspaces-ib-1 (258 
 bytes)
  INFO [SSTableBatchOpen:1] 2013-02-07 19:48:55,484 SSTableReader.java (line 
 164) Opening 
 C:\Cassandra\data\system\schema_keyspaces\system-schema_keyspaces-ib-3 (262 
 bytes)
  INFO [SSTableBatchOpen:1] 2013-02-07 19:48:55,489 SSTableReader.java (line 
 164) Opening 
 C:\Cassandra\data\system\schema_keyspaces\system-schema_keyspaces-ib-2 (262 
 bytes)
  INFO [SSTableBatchOpen:1] 2013-02-07 19:48:55,517 SSTableReader.java (line 
 164) Opening 
 

[jira] [Commented] (CASSANDRA-4898) Authentication provider in Cassandra itself

2013-02-14 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578419#comment-13578419
 ] 

Aleksey Yeschenko commented on CASSANDRA-4898:
--

https://github.com/iamaleksey/cassandra/compare/4898

Besides adding CQL3-based IAuthenticator and IAuthorizer implementations, this 
branch also:
- removes SimpleAuth examples since they are no longer actually good examples 
after the interfaces changed
- makes small backwards-compatible changes to IAuthenticator and IAuthorizer 
declared exceptions
- drops limitations on protectedResources except for schema modification

What I'm not 100% sure about is naming - especially for the authorizer. The 
only thing that matters about IAuthorizer implementations is where they keep 
permissions data, so CassandraAuthorizer makes sense, but I'm afraid it's too 
generic. Haven't come up with a better name though.

What needs to be done - after/if the names are confirmed:
- add an entry to NEWS about the new implementations and about the alterable 
system_auth ks (CASSANDRA-5112)
- possibly a comment in cassandra.yaml about the available implementations ?
- dtests for the whole thing - now that we've got working baked-in 
implementations, we can and should test all the auth-related cql3 statements

 Authentication provider in Cassandra itself
 ---

 Key: CASSANDRA-4898
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4898
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.1.6
Reporter: Dirkjan Bussink
  Labels: authentication, authorization

 I've been working on an implementation for both IAuthority2 and 
 IAuthenticator that uses Cassandra itself to store the necessary credentials. 
 I'm planning on open sourcing this shortly.
 Is there any interest in this? It tries to provide reasonable security, for 
 example using PBKDF2 to store passwords with a configurable configuration 
 cycle and managing all the rights available in IAuthority2. 
 My main goal isn't security/confidentiality of the data, but rather that I 
 don't want multiple consumers of the cluster to accidentally screw stuff up. 
 Only certain users can write data; others can read it out again and further 
 process it.
 I'm planning on releasing this soon under an open source license (probably 
 the same as Cassandra itself). Would there be interest in incorporating it as 
 a new reference implementation instead of the properties file implementation 
 perhaps? Or can I better maintain it separately? I would love if people from 
 the community would want to review it, since I have been dabbling in the 
 Cassandra source code only for a short while now.
 During the development of this I've encountered a few bumps and I wonder 
 whether they could be addressed or not.
 = Moment when validateConfiguration() runs =
 Is there a deliberate reason that validateConfiguration() is executed before 
 all information about keyspaces, column families etc. is available? In the 
 current form I therefore can't validate whether column families etc. are 
 available for authentication since they aren't loaded yet.
 I've wanted to use this to make relatively easy bootstrapping possible. My 
 approach here would be to only enable authentication if the needed keyspace 
 is available. This allows for configuring the cluster, then import the 
 necessary authentication data for an admin user to bootstrap further and then 
 restart every node in the cluster.
 Basically the questions here are, can the moment when validateConfiguration() 
 runs for an authentication provider be changed? Is this approach to 
 bootstrapping reasonable or do people have better ideas?
 = AbstractReplicationStrategy has package-visible constructor =
 I've added a strategy that basically says that data should be available on 
 all nodes. The amount of data used for authentication is very limited. 
 Replicating it to every node is therefore not very problematic and allows 
 every node to have all the data locally available for verifying requests.
 I wanted to put this strategy into its own package inside the authentication 
 module, but since the constructor of AbstractReplicationStrategy has no 
 visibility explicitly marked, it's only available inside the same package.
 I'm not sure whether implementing a strategy to replicate data to all nodes 
 is a sane idea and whether my implementation of this strategy is correct. 
 What do you people think of this? Would people want to review the 
 implementation?
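
As a point of reference for the PBKDF2 approach described above, here is a 
minimal, self-contained sketch in plain Java. It is illustrative only: it is not 
the reporter's implementation, it does not touch the IAuthenticator/IAuthority2 
interfaces, and the iteration count, salt size, and key length are assumptions.

{code}
import java.security.SecureRandom;
import java.security.spec.KeySpec;
import java.util.Arrays;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

// Hypothetical sketch of PBKDF2-based password storage. The stored salt and
// hash would live in whatever column family the provider keeps credentials in.
public final class Pbkdf2Passwords
{
    private static final int ITERATIONS = 10000;      // assumption, tune as needed
    private static final int KEY_LENGTH_BITS = 256;   // assumption

    public static byte[] newSalt()
    {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    public static byte[] hash(char[] password, byte[] salt) throws Exception
    {
        KeySpec spec = new PBEKeySpec(password, salt, ITERATIONS, KEY_LENGTH_BITS);
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
        return factory.generateSecret(spec).getEncoded();
    }

    // Compare a login attempt against the stored salt + hash. A real provider
    // should use a constant-time comparison instead of Arrays.equals.
    public static boolean verify(char[] attempt, byte[] salt, byte[] stored) throws Exception
    {
        return Arrays.equals(hash(attempt, salt), stored);
    }
}
{code}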

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-4898) Authentication provider in Cassandra itself

2013-02-14 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-4898:
-

Fix Version/s: 1.2.2
 Reviewer: jbellis

 Authentication provider in Cassandra itself
 ---

 Key: CASSANDRA-4898
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4898
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.1.6
Reporter: Dirkjan Bussink
  Labels: authentication, authorization
 Fix For: 1.2.2


 I've been working on an implementation for both IAuthority2 and 
 IAuthenticator that uses Cassandra itself to store the necessary credentials. 
 I'm planning on open sourcing this shortly.
 Is there any interest in this? It tries to provide reasonable security, for 
 example using PBKDF2 to store passwords with a configurable configuration 
 cycle and managing all the rights available in IAuthority2. 
 My main goal isn't security/confidentiality of the data, but rather that I 
 don't want multiple consumers of the cluster to accidentally screw stuff up. 
 Only certain users can write data; others can read it out again and further 
 process it.
 I'm planning on releasing this soon under an open source license (probably 
 the same as Cassandra itself). Would there be interest in incorporating it as 
 a new reference implementation instead of the properties file implementation 
 perhaps? Or can I better maintain it separately? I would love if people from 
 the community would want to review it, since I have been dabbling in the 
 Cassandra source code only for a short while now.
 During the development of this I've encountered a few bumps and I wonder 
 whether they could be addressed or not.
 = Moment when validateConfiguration() runs =
 Is there a deliberate reason that validateConfiguration() is executed before 
 all information about keyspaces, column families etc. is available? In the 
 current form I therefore can't validate whether column families etc. are 
 available for authentication since they aren't loaded yet.
 I've wanted to use this to make relatively easy bootstrapping possible. My 
 approach here would be to only enable authentication if the needed keyspace 
 is available. This allows for configuring the cluster, then import the 
 necessary authentication data for an admin user to bootstrap further and then 
 restart every node in the cluster.
 Basically the questions here are, can the moment when validateConfiguration() 
 runs for an authentication provider be changed? Is this approach to 
 bootstrapping reasonable or do people have better ideas?
 = AbstractReplicationStrategy has package-visible constructor =
 I've added a strategy that basically says that data should be available on 
 all nodes. The amount of data used for authentication is very limited. 
 Replicating it to every node is therefore not very problematic and allows 
 every node to have all the data locally available for verifying requests.
 I wanted to put this strategy into its own package inside the authentication 
 module, but since the constructor of AbstractReplicationStrategy has no 
 visibility explicitly marked, it's only available inside the same package.
 I'm not sure whether implementing a strategy to replicate data to all nodes 
 is a sane idea and whether my implementation of this strategy is correct. 
 What do you people think of this? Would people want to review the 
 implementation?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (CASSANDRA-4898) Authentication provider in Cassandra itself

2013-02-14 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko reassigned CASSANDRA-4898:


Assignee: Aleksey Yeschenko

 Authentication provider in Cassandra itself
 ---

 Key: CASSANDRA-4898
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4898
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.1.6
Reporter: Dirkjan Bussink
Assignee: Aleksey Yeschenko
  Labels: authentication, authorization
 Fix For: 1.2.2


 I've been working on an implementation for both IAuthority2 and 
 IAuthenticator that uses Cassandra itself to store the necessary credentials. 
 I'm planning on open sourcing this shortly.
 Is there any interest in this? It tries to provide reasonable security, for 
 example using PBKDF2 to store passwords with a configurable configuration 
 cycle and managing all the rights available in IAuthority2. 
 My main goal isn't security/confidentiality of the data, but rather that I 
 don't want multiple consumers of the cluster to accidentally screw stuff up. 
 Only certain users can write data; others can read it out again and further 
 process it.
 I'm planning on releasing this soon under an open source license (probably 
 the same as Cassandra itself). Would there be interest in incorporating it as 
 a new reference implementation instead of the properties file implementation 
 perhaps? Or can I better maintain it separately? I would love if people from 
 the community would want to review it, since I have been dabbling in the 
 Cassandra source code only for a short while now.
 During the development of this I've encountered a few bumps and I wonder 
 whether they could be addressed or not.
 = Moment when validateConfiguration() runs =
 Is there a deliberate reason that validateConfiguration() is executed before 
 all information about keyspaces, column families etc. is available? In the 
 current form I therefore can't validate whether column families etc. are 
 available for authentication since they aren't loaded yet.
 I've wanted to use this to make relatively easy bootstrapping possible. My 
 approach here would be to only enable authentication if the needed keyspace 
 is available. This allows for configuring the cluster, then import the 
 necessary authentication data for an admin user to bootstrap further and then 
 restart every node in the cluster.
 Basically the questions here are, can the moment when validateConfiguration() 
 runs for an authentication provider be changed? Is this approach to 
 bootstrapping reasonable or do people have better ideas?
 = AbstractReplicationStrategy has package-visible constructor =
 I've added a strategy that basically says that data should be available on 
 all nodes. The amount of data used for authentication is very limited. 
 Replicating it to every node is therefore not very problematic and allows 
 every node to have all the data locally available for verifying requests.
 I wanted to put this strategy into its own package inside the authentication 
 module, but since the constructor of AbstractReplicationStrategy has no 
 visibility explicitly marked, it's only available inside the same package.
 I'm not sure whether implementing a strategy to replicate data to all nodes 
 is a sane idea and whether my implementation of this strategy is correct. 
 What do you people think of this? Would people want to review the 
 implementation?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-4898) Authentication provider in Cassandra itself

2013-02-14 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578419#comment-13578419
 ] 

Aleksey Yeschenko edited comment on CASSANDRA-4898 at 2/14/13 3:15 PM:
---

https://github.com/iamaleksey/cassandra/compare/4898

Besides adding CQL3-based IAuthenticator and IAuthorizer implementations, this 
branch also:
- removes SimpleAuth examples since they are no longer actually good examples 
after the interfaces changed
- makes small backwards-compatible changes to IAuthenticator and IAuthorizer 
declared exceptions
- drops limitations on protectedResources except for schema modification

What I'm not 100% sure about is naming - especially for the authorizer. The 
only thing that matters about IAuthorizer implementations is where they keep 
permissions data, so CassandraAuthorizer makes sense, but I'm afraid it's too 
generic. Haven't come up with a better name though.

What needs to be done - after/if the names are confirmed:
- add an entry to NEWS about the new implementations and about the alterable 
system_auth ks (CASSANDRA-5112)
- add a NEWS entry for permissions_validity_in_ms (CASSANDRA-4295)
- possibly a comment in cassandra.yaml about the available implementations ?
- dtests for the whole thing - now that we've got working baked-in 
implementations, we can and should test all the auth-related cql3 statements

  was (Author: iamaleksey):
https://github.com/iamaleksey/cassandra/compare/4898

Besides adding CQL3-based IAuthenticator and IAuthorizer implementations, this 
branch also:
- removes SimpleAuth examples since they are no longer actually good examples 
after the interfaces changed
- makes small backwards-compatible changes to IAuthenticator and IAuthorizer 
declared exceptions
- drops limitations on protectedResources except for schema modification

What I'm not 100% sure about is naming - especially for the authorizer. The 
only thing that matters about IAuthorizer implementations is where they keep 
permissions data, so CassandraAuthorizer makes sense, but I'm afraid it's too 
generic. Haven't come up with a better name though.

What needs to be done - after/if the names are confirmed:
- add an entry to NEWS about the new implementations and about the alterable 
system_auth ks (CASSANDRA-5112)
- possibly a comment in cassandra.yaml about the available implementations ?
- dtests for the whole thing - now that we've got working baked-in 
implementations, we can and should test all the auth-related cql3 statements
  
 Authentication provider in Cassandra itself
 ---

 Key: CASSANDRA-4898
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4898
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.1.6
Reporter: Dirkjan Bussink
Assignee: Aleksey Yeschenko
  Labels: authentication, authorization
 Fix For: 1.2.2


 I've been working on an implementation for both IAuthority2 and 
 IAuthenticator that uses Cassandra itself to store the necessary credentials. 
 I'm planning on open sourcing this shortly.
 Is there any interest in this? It tries to provide reasonable security, for 
 example using PBKDF2 to store passwords with a configurable configuration 
 cycle and managing all the rights available in IAuthority2. 
 My main goal isn't security/confidentiality of the data, but rather that I 
 don't want multiple consumers of the cluster to accidentally screw stuff up. 
 Only certain users can write data; others can read it out again and further 
 process it.
 I'm planning on releasing this soon under an open source license (probably 
 the same as Cassandra itself). Would there be interest in incorporating it as 
 a new reference implementation instead of the properties file implementation 
 perhaps? Or can I better maintain it separately? I would love if people from 
 the community would want to review it, since I have been dabbling in the 
 Cassandra source code only for a short while now.
 During the development of this I've encountered a few bumps and I wonder 
 whether they could be addressed or not.
 = Moment when validateConfiguration() runs =
 Is there a deliberate reason that validateConfiguration() is executed before 
 all information about keyspaces, column families etc. is available? In the 
 current form I therefore can't validate whether column families etc. are 
 available for authentication since they aren't loaded yet.
 I've wanted to use this to make relatively easy bootstrapping possible. My 
 approach here would be to only enable authentication if the needed keyspace 
 is available. This allows for configuring the cluster, then import the 
 necessary authentication data for an admin user to bootstrap further and then 
 restart 


[jira] [Commented] (CASSANDRA-5151) Implement better way of eliminating compaction left overs.

2013-02-14 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578469#comment-13578469
 ] 

Yuki Morishita commented on CASSANDRA-5151:
---

OK, I will revert this from the 1.2 branch but leave it in trunk.

 Implement better way of eliminating compaction left overs.
 --

 Key: CASSANDRA-5151
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5151
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.3
Reporter: Yuki Morishita
Assignee: Yuki Morishita
 Fix For: 1.2.2

 Attachments: 
 0001-move-scheduling-MeteredFlusher-to-CassandraDaemon.patch, 5151-1.2.txt, 
 5151-v2.txt


 This is from discussion in CASSANDRA-5137. Currently we skip loading SSTables 
 that are left over from incomplete compactions so as not to over-count counters, but 
 the way we track compaction completion is not secure.
 One possible solution is to create system CF like:
 {code}
 create table compaction_log (
   id uuid primary key,
   inputs set<int>,
   outputs set<int>
 );
 {code}
 to track incomplete compaction.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5254) Nodes can be marked up after gossip sends the goodbye command

2013-02-14 Thread Brandon Williams (JIRA)
Brandon Williams created CASSANDRA-5254:
---

 Summary: Nodes can be marked up after gossip sends the goodbye 
command
 Key: CASSANDRA-5254
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5254
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.1
Reporter: Brandon Williams
Assignee: Brandon Williams
Priority: Minor


Finally tracked this down on dtestbot after setting the rpc_timeout to 
ridiculous levels:

{noformat}
== logs/last/node1.log ==
 INFO [FlushWriter:1] 2013-02-14 10:01:10,311 Memtable.java (line 305) 
Completed flushing 
/tmp/dtest-iaYzzR/test/node1/data/system/schema_columns/system-schema_columns-hf-2-Data.db
 (558 bytes) for commitlog position ReplayPosition(segmentId=1360857665931, 
position=4770)
 INFO [MemoryMeter:1] 2013-02-14 10:01:10,974 Memtable.java (line 213) 
CFS(Keyspace='ks', ColumnFamily='cf') liveRatio is 20.488836662749705 
(just-counted was 20.488836662749705).  calculation took 96ms for 144 columns
 INFO [GossipStage:1] 2013-02-14 10:01:12,119 Gossiper.java (line 831) 
InetAddress /127.0.0.3 is now dead.

== logs/last/node2.log ==
 INFO [GossipStage:1] 2013-02-14 10:01:12,119 Gossiper.java (line 831) 
InetAddress /127.0.0.3 is now dead.
 INFO [GossipStage:1] 2013-02-14 10:01:12,238 Gossiper.java (line 817) 
InetAddress /127.0.0.3 is now UP
 INFO [GossipTasks:1] 2013-02-14 10:01:26,386 Gossiper.java (line 831) 
InetAddress /127.0.0.3 is now dead.

== logs/last/node3.log ==
 INFO [StorageServiceShutdownHook] 2013-02-14 10:01:11,115 Gossiper.java (line 
1134) Announcing shutdown
 INFO [StorageServiceShutdownHook] 2013-02-14 10:01:12,118 
MessagingService.java (line 549) Waiting for messaging service to quiesce
 INFO [ACCEPT-/127.0.0.3] 2013-02-14 10:01:12,119 MessagingService.java (line 
705) MessagingService shutting down server thread.
{noformat}

node2 receives the goodbye command from node3, and node1 has already marked 
node3 down, but some kind of signal is still coming from node3 to node2 marking 
it up again.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[2/3] git commit: comment

2013-02-14 Thread jbellis
comment


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/694e3a74
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/694e3a74
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/694e3a74

Branch: refs/heads/trunk
Commit: 694e3a74a9b3b31d43eed5477dd6fed3db4c293e
Parents: 3925f56
Author: Jonathan Ellis jbel...@apache.org
Authored: Thu Feb 14 11:17:51 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Thu Feb 14 11:17:51 2013 -0600

--
 .../cassandra/locator/DynamicEndpointSnitch.java   |3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/694e3a74/src/java/org/apache/cassandra/locator/DynamicEndpointSnitch.java
--
diff --git a/src/java/org/apache/cassandra/locator/DynamicEndpointSnitch.java 
b/src/java/org/apache/cassandra/locator/DynamicEndpointSnitch.java
index 214208e..1094a00 100644
--- a/src/java/org/apache/cassandra/locator/DynamicEndpointSnitch.java
+++ b/src/java/org/apache/cassandra/locator/DynamicEndpointSnitch.java
@@ -254,7 +254,8 @@ public class DynamicEndpointSnitch extends AbstractEndpointSnitch implements ILa
             else
                 // there's a chance a host was added to the samples after our previous loop to get the time penalties.  Add 1.0 to it, or '100% bad' for the time penalty.
                 score += 1; // maxPenalty / maxPenalty
-            // finally, add the severity without any weighting, since hosts scale this relative to their own load and the size of the task causing the severity
+            // finally, add the severity without any weighting, since hosts scale this relative to their own load and the size of the task causing the severity.
+            // Severity is basically a measure of compaction activity (CASSANDRA-3722).
             score += StorageService.instance.getSeverity(entry.getKey());
             // lowest score (least amount of badness) wins.
             scores.put(entry.getKey(), score);



[3/3] git commit: Merge branch 'cassandra-1.2' into trunk

2013-02-14 Thread jbellis
Updated Branches:
  refs/heads/cassandra-1.2 3925f5606 -> 694e3a74a
  refs/heads/trunk 7242a0054 -> 5673ab3b6


Merge branch 'cassandra-1.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5673ab3b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5673ab3b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5673ab3b

Branch: refs/heads/trunk
Commit: 5673ab3b618ca4c2fb0c739ad4cd48d6d7d757d5
Parents: 7242a00 694e3a7
Author: Jonathan Ellis jbel...@apache.org
Authored: Thu Feb 14 11:17:55 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Thu Feb 14 11:17:55 2013 -0600

--
 .../cassandra/locator/DynamicEndpointSnitch.java   |3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)
--




[1/3] git commit: comment

2013-02-14 Thread jbellis
comment


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/694e3a74
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/694e3a74
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/694e3a74

Branch: refs/heads/cassandra-1.2
Commit: 694e3a74a9b3b31d43eed5477dd6fed3db4c293e
Parents: 3925f56
Author: Jonathan Ellis jbel...@apache.org
Authored: Thu Feb 14 11:17:51 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Thu Feb 14 11:17:51 2013 -0600

--
 .../cassandra/locator/DynamicEndpointSnitch.java   |3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/694e3a74/src/java/org/apache/cassandra/locator/DynamicEndpointSnitch.java
--
diff --git a/src/java/org/apache/cassandra/locator/DynamicEndpointSnitch.java 
b/src/java/org/apache/cassandra/locator/DynamicEndpointSnitch.java
index 214208e..1094a00 100644
--- a/src/java/org/apache/cassandra/locator/DynamicEndpointSnitch.java
+++ b/src/java/org/apache/cassandra/locator/DynamicEndpointSnitch.java
@@ -254,7 +254,8 @@ public class DynamicEndpointSnitch extends AbstractEndpointSnitch implements ILa
             else
                 // there's a chance a host was added to the samples after our previous loop to get the time penalties.  Add 1.0 to it, or '100% bad' for the time penalty.
                 score += 1; // maxPenalty / maxPenalty
-            // finally, add the severity without any weighting, since hosts scale this relative to their own load and the size of the task causing the severity
+            // finally, add the severity without any weighting, since hosts scale this relative to their own load and the size of the task causing the severity.
+            // Severity is basically a measure of compaction activity (CASSANDRA-3722).
             score += StorageService.instance.getSeverity(entry.getKey());
             // lowest score (least amount of badness) wins.
             scores.put(entry.getKey(), score);



[3/3] git commit: Merge branch 'cassandra-1.2' into trunk

2013-02-14 Thread yukim
Updated Branches:
  refs/heads/cassandra-1.2 694e3a74a -> 7ff2805c7
  refs/heads/trunk 5673ab3b6 -> 0411d5a2b


Merge branch 'cassandra-1.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0411d5a2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0411d5a2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0411d5a2

Branch: refs/heads/trunk
Commit: 0411d5a2b29c8dc2f2e202c6daa1cc38978537a9
Parents: 5673ab3 7ff2805
Author: Yuki Morishita yu...@apache.org
Authored: Thu Feb 14 11:27:32 2013 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Thu Feb 14 11:27:32 2013 -0600

--
 CHANGES.txt |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0411d5a2/CHANGES.txt
--
diff --cc CHANGES.txt
index ff0a9ba,aa25b63..6047e66
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,16 -1,5 +1,16 @@@
 +1.3
 + * avoid serializing to byte[] on commitlog append (CASSANDRA-5199)
 + * make index_interval configurable per columnfamily (CASSANDRA-3961)
 + * add default_time_to_live (CASSANDRA-3974)
 + * add memtable_flush_period_in_ms (CASSANDRA-4237)
 + * replace supercolumns internally by composites (CASSANDRA-3237, 5123)
 + * upgrade thrift to 0.9.0 (CASSANDRA-3719)
 + * drop unnecessary keyspace from user-defined compaction API (CASSANDRA-5139)
++ * more robust solution to incomplete compactions + counters (CASSANDRA-5151)
 +
 +
  1.2.2
   * avoid no-op caching of byte[] on commitlog append (CASSANDRA-5199)
-  * more robust solution to incomplete compactions + counters (CASSANDRA-5151)
   * fix symlinks under data dir not working (CASSANDRA-5185)
   * fix bug in compact storage metadata handling (CASSANDRA-5189)
   * Validate login for USE queries (CASSANDRA-5207)



[2/3] git commit: revert CASSANDRA-5151 from 1.2

2013-02-14 Thread yukim
revert CASSANDRA-5151 from 1.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7ff2805c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7ff2805c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7ff2805c

Branch: refs/heads/trunk
Commit: 7ff2805c7e6a362eb451e6b8dfaec59642fee0f2
Parents: 694e3a7
Author: Yuki Morishita yu...@apache.org
Authored: Thu Feb 14 11:13:11 2013 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Thu Feb 14 11:24:09 2013 -0600

--
 CHANGES.txt|1 -
 .../org/apache/cassandra/config/CFMetaData.java|7 -
 .../org/apache/cassandra/config/KSMetaData.java|1 -
 .../org/apache/cassandra/db/ColumnFamilyStore.java |   95 --
 src/java/org/apache/cassandra/db/SystemTable.java  |   69 ---
 .../cassandra/db/compaction/CompactionTask.java|7 -
 .../apache/cassandra/service/CassandraDaemon.java  |9 --
 .../apache/cassandra/db/ColumnFamilyStoreTest.java |   55 -
 .../cassandra/db/compaction/CompactionsTest.java   |   34 -
 9 files changed, 27 insertions(+), 251 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7ff2805c/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9281b5e..aa25b63 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,6 +1,5 @@
 1.2.2
  * avoid no-op caching of byte[] on commitlog append (CASSANDRA-5199)
- * more robust solution to incomplete compactions + counters (CASSANDRA-5151)
  * fix symlinks under data dir not working (CASSANDRA-5185)
  * fix bug in compact storage metadata handling (CASSANDRA-5189)
  * Validate login for USE queries (CASSANDRA-5207)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7ff2805c/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java 
b/src/java/org/apache/cassandra/config/CFMetaData.java
index 9c76c9b..73b7b6b 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -226,13 +226,6 @@ public final class CFMetaData
                                                               + "requested_at timestamp"
                                                               + ") WITH COMMENT='ranges requested for transfer here'");
 
-    public static final CFMetaData CompactionLogCF = compile(18, "CREATE TABLE " + SystemTable.COMPACTION_LOG + " ("
-                                                                  + "id uuid PRIMARY KEY,"
-                                                                  + "keyspace_name text,"
-                                                                  + "columnfamily_name text,"
-                                                                  + "inputs set<int>"
-                                                                  + ") WITH COMMENT='unfinished compactions'");
-
 public enum Caching
 {
 ALL, KEYS_ONLY, ROWS_ONLY, NONE;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7ff2805c/src/java/org/apache/cassandra/config/KSMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/KSMetaData.java 
b/src/java/org/apache/cassandra/config/KSMetaData.java
index b0764cc..9522aa0 100644
--- a/src/java/org/apache/cassandra/config/KSMetaData.java
+++ b/src/java/org/apache/cassandra/config/KSMetaData.java
@@ -89,7 +89,6 @@ public final class KSMetaData
 CFMetaData.SchemaKeyspacesCf,
 
CFMetaData.SchemaColumnFamiliesCf,
 CFMetaData.SchemaColumnsCf,
-CFMetaData.CompactionLogCF,
 CFMetaData.OldStatusCf,
 CFMetaData.OldHintsCf,
 CFMetaData.OldMigrationsCf,

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7ff2805c/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 0769d5c..6043be5 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -61,7 +61,6 @@ import org.apache.cassandra.db.index.SecondaryIndexManager;
 import org.apache.cassandra.db.marshal.AbstractType;
 import 

[1/3] git commit: revert CASSANDRA-5151 from 1.2

2013-02-14 Thread yukim
revert CASSANDRA-5151 from 1.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7ff2805c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7ff2805c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7ff2805c

Branch: refs/heads/cassandra-1.2
Commit: 7ff2805c7e6a362eb451e6b8dfaec59642fee0f2
Parents: 694e3a7
Author: Yuki Morishita yu...@apache.org
Authored: Thu Feb 14 11:13:11 2013 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Thu Feb 14 11:24:09 2013 -0600

--
 CHANGES.txt|1 -
 .../org/apache/cassandra/config/CFMetaData.java|7 -
 .../org/apache/cassandra/config/KSMetaData.java|1 -
 .../org/apache/cassandra/db/ColumnFamilyStore.java |   95 --
 src/java/org/apache/cassandra/db/SystemTable.java  |   69 ---
 .../cassandra/db/compaction/CompactionTask.java|7 -
 .../apache/cassandra/service/CassandraDaemon.java  |9 --
 .../apache/cassandra/db/ColumnFamilyStoreTest.java |   55 -
 .../cassandra/db/compaction/CompactionsTest.java   |   34 -
 9 files changed, 27 insertions(+), 251 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7ff2805c/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9281b5e..aa25b63 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,6 +1,5 @@
 1.2.2
  * avoid no-op caching of byte[] on commitlog append (CASSANDRA-5199)
- * more robust solution to incomplete compactions + counters (CASSANDRA-5151)
  * fix symlinks under data dir not working (CASSANDRA-5185)
  * fix bug in compact storage metadata handling (CASSANDRA-5189)
  * Validate login for USE queries (CASSANDRA-5207)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7ff2805c/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java 
b/src/java/org/apache/cassandra/config/CFMetaData.java
index 9c76c9b..73b7b6b 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -226,13 +226,6 @@ public final class CFMetaData
                                                               + "requested_at timestamp"
                                                               + ") WITH COMMENT='ranges requested for transfer here'");
 
-    public static final CFMetaData CompactionLogCF = compile(18, "CREATE TABLE " + SystemTable.COMPACTION_LOG + " ("
-                                                                  + "id uuid PRIMARY KEY,"
-                                                                  + "keyspace_name text,"
-                                                                  + "columnfamily_name text,"
-                                                                  + "inputs set<int>"
-                                                                  + ") WITH COMMENT='unfinished compactions'");
-
 public enum Caching
 {
 ALL, KEYS_ONLY, ROWS_ONLY, NONE;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7ff2805c/src/java/org/apache/cassandra/config/KSMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/KSMetaData.java 
b/src/java/org/apache/cassandra/config/KSMetaData.java
index b0764cc..9522aa0 100644
--- a/src/java/org/apache/cassandra/config/KSMetaData.java
+++ b/src/java/org/apache/cassandra/config/KSMetaData.java
@@ -89,7 +89,6 @@ public final class KSMetaData
 CFMetaData.SchemaKeyspacesCf,
 
CFMetaData.SchemaColumnFamiliesCf,
 CFMetaData.SchemaColumnsCf,
-CFMetaData.CompactionLogCF,
 CFMetaData.OldStatusCf,
 CFMetaData.OldHintsCf,
 CFMetaData.OldMigrationsCf,

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7ff2805c/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 0769d5c..6043be5 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -61,7 +61,6 @@ import org.apache.cassandra.db.index.SecondaryIndexManager;
 import org.apache.cassandra.db.marshal.AbstractType;
 import 

[jira] [Updated] (CASSANDRA-5151) Implement better way of eliminating compaction left overs.

2013-02-14 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-5151:
--

Fix Version/s: (was: 1.2.2)
   2.0

 Implement better way of eliminating compaction left overs.
 --

 Key: CASSANDRA-5151
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5151
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.3
Reporter: Yuki Morishita
Assignee: Yuki Morishita
 Fix For: 2.0

 Attachments: 
 0001-move-scheduling-MeteredFlusher-to-CassandraDaemon.patch, 5151-1.2.txt, 
 5151-v2.txt


 This is from discussion in CASSANDRA-5137. Currently we skip loading SSTables 
 that are left over from incomplete compactions so as not to over-count counters, 
 but the way we track compaction completion is not reliable.
 One possible solution is to create a system CF like:
 {code}
 create table compaction_log (
   id uuid primary key,
   inputs set<int>,
   outputs set<int>
 );
 {code}
 to track incomplete compaction.
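
A hypothetical sketch of how such a log could be used, following the semantics described above rather than the attached patch (the class and method names are made up): record the input sstable generations when a compaction begins, clear the entry when it completes, and treat anything still present at startup as an incomplete compaction whose leftovers should not be loaded.

{code}
// Illustrative only; not Cassandra's actual code.
import java.util.*;

class CompactionLogSketch
{
    private final Map<UUID, Set<Integer>> unfinished = new HashMap<UUID, Set<Integer>>();

    synchronized UUID begin(Set<Integer> inputGenerations)
    {
        UUID id = UUID.randomUUID();
        unfinished.put(id, inputGenerations); // persisted to the system CF in the real proposal
        return id;
    }

    synchronized void complete(UUID id)
    {
        unfinished.remove(id);                // entry removed once the compaction finishes cleanly
    }

    // at startup: generations referenced by compactions that never completed
    synchronized Set<Integer> incompleteInputs()
    {
        Set<Integer> all = new HashSet<Integer>();
        for (Set<Integer> s : unfinished.values())
            all.addAll(s);
        return all;
    }
}
{code}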

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5255) dsnitch severity is not correctly set for compaction info

2013-02-14 Thread Brandon Williams (JIRA)
Brandon Williams created CASSANDRA-5255:
---

 Summary: dsnitch severity is not correctly set for compaction info
 Key: CASSANDRA-5255
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5255
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.0 beta 1
Reporter: Brandon Williams
Priority: Minor
 Fix For: 1.2.2


We're doing two things wrong in CI.  First, load can change between calls, 
which can cause a negative severity even though we only meant to subtract 
whatever we added before.  Second, we should report based on how much IO we're 
using, since a 1TB compaction throttled to 5MB/s is less impactful than a 
100MB one running at full speed.
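
As an illustration of the first point, here is a minimal, hypothetical sketch (the names are made up and do not come from DynamicEndpointSnitch): if the load is re-sampled when the compaction ends, subtracting the newer value can leave the severity negative.

{code}
// Illustrative only; not Cassandra's actual severity handling.
public class SeveritySketch
{
    private double severity;

    // buggy pattern: add the load seen at start, subtract the load seen at end
    void onCompactionStart(double currentLoad) { severity += currentLoad; }
    void onCompactionEnd(double currentLoad)   { severity -= currentLoad; }

    public static void main(String[] args)
    {
        SeveritySketch s = new SeveritySketch();
        s.onCompactionStart(0.5);       // load measured when the compaction begins
        s.onCompactionEnd(0.8);         // load has grown in the meantime
        System.out.println(s.severity); // roughly -0.3: the severity went negative
    }
}
{code}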

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5151) Implement better way of eliminating compaction left overs.

2013-02-14 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13578512#comment-13578512
 ] 

Yuki Morishita commented on CASSANDRA-5151:
---

I reverted the change in 1.2 and changed the fixver to 2.0.
However, we still need to fix the conflict in opening CFS with the previously 
attached patch.
Should I create a new ticket and move that work there?

 Implement better way of eliminating compaction left overs.
 --

 Key: CASSANDRA-5151
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5151
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.3
Reporter: Yuki Morishita
Assignee: Yuki Morishita
 Fix For: 2.0

 Attachments: 
 0001-move-scheduling-MeteredFlusher-to-CassandraDaemon.patch, 5151-1.2.txt, 
 5151-v2.txt


 This is from discussion in CASSANDRA-5137. Currently we skip loading SSTables 
 that are left over from incomplete compactions so as not to over-count counters, 
 but the way we track compaction completion is not reliable.
 One possible solution is to create a system CF like:
 {code}
 create table compaction_log (
   id uuid primary key,
   inputs set<int>,
   outputs set<int>
 );
 {code}
 to track incomplete compaction.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[3/3] git commit: Merge branch 'cassandra-1.2' into trunk

2013-02-14 Thread brandonwilliams
Updated Branches:
  refs/heads/cassandra-1.2 7ff2805c7 -> bc428351e
  refs/heads/trunk 0411d5a2b -> 8caedfc2c


Merge branch 'cassandra-1.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8caedfc2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8caedfc2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8caedfc2

Branch: refs/heads/trunk
Commit: 8caedfc2c02fe587aff94f73b5b5bf0f621a9884
Parents: 0411d5a bc42835
Author: Brandon Williams brandonwilli...@apache.org
Authored: Thu Feb 14 11:56:47 2013 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Thu Feb 14 11:56:47 2013 -0600

--
 .../locator/DynamicEndpointSnitchMBean.java|2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
--




[2/3] git commit: comment

2013-02-14 Thread brandonwilliams
comment


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bc428351
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bc428351
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bc428351

Branch: refs/heads/trunk
Commit: bc428351ed5df3a7f215142b2144a34a0e15cd63
Parents: 7ff2805
Author: Brandon Williams brandonwilli...@apache.org
Authored: Thu Feb 14 11:56:04 2013 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Thu Feb 14 11:56:08 2013 -0600

--
 .../locator/DynamicEndpointSnitchMBean.java|2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bc428351/src/java/org/apache/cassandra/locator/DynamicEndpointSnitchMBean.java
--
diff --git 
a/src/java/org/apache/cassandra/locator/DynamicEndpointSnitchMBean.java 
b/src/java/org/apache/cassandra/locator/DynamicEndpointSnitchMBean.java
index becbacf..a413bc5 100644
--- a/src/java/org/apache/cassandra/locator/DynamicEndpointSnitchMBean.java
+++ b/src/java/org/apache/cassandra/locator/DynamicEndpointSnitchMBean.java
@@ -30,7 +30,7 @@ public interface DynamicEndpointSnitchMBean {
 public String getSubsnitchClassName();
 public List<Double> dumpTimings(String hostname) throws UnknownHostException;
 /**
- * Use this if you want to specify a severity it can be -ve
+ * Use this if you want to specify a severity; it can be negative
  * Example: Page cache is cold and you want data to be sent 
  *  though it is not preferred one.
  */



[1/3] git commit: comment

2013-02-14 Thread brandonwilliams
comment


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bc428351
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bc428351
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bc428351

Branch: refs/heads/cassandra-1.2
Commit: bc428351ed5df3a7f215142b2144a34a0e15cd63
Parents: 7ff2805
Author: Brandon Williams brandonwilli...@apache.org
Authored: Thu Feb 14 11:56:04 2013 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Thu Feb 14 11:56:08 2013 -0600

--
 .../locator/DynamicEndpointSnitchMBean.java|2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bc428351/src/java/org/apache/cassandra/locator/DynamicEndpointSnitchMBean.java
--
diff --git 
a/src/java/org/apache/cassandra/locator/DynamicEndpointSnitchMBean.java 
b/src/java/org/apache/cassandra/locator/DynamicEndpointSnitchMBean.java
index becbacf..a413bc5 100644
--- a/src/java/org/apache/cassandra/locator/DynamicEndpointSnitchMBean.java
+++ b/src/java/org/apache/cassandra/locator/DynamicEndpointSnitchMBean.java
@@ -30,7 +30,7 @@ public interface DynamicEndpointSnitchMBean {
 public String getSubsnitchClassName();
 public List<Double> dumpTimings(String hostname) throws UnknownHostException;
 /**
- * Use this if you want to specify a severity it can be -ve
+ * Use this if you want to specify a severity; it can be negative
  * Example: Page cache is cold and you want data to be sent 
  *  though it is not preferred one.
  */



[3/3] git commit: Merge branch 'cassandra-1.2' into trunk

2013-02-14 Thread yukim
Updated Branches:
  refs/heads/cassandra-1.2 bc428351e -> ee0be0624
  refs/heads/trunk 8caedfc2c -> 12e4c524b


Merge branch 'cassandra-1.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/12e4c524
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/12e4c524
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/12e4c524

Branch: refs/heads/trunk
Commit: 12e4c524badccfb13fe5c4a28c9ec2f65571403c
Parents: 8caedfc ee0be06
Author: Yuki Morishita yu...@apache.org
Authored: Thu Feb 14 13:43:15 2013 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Thu Feb 14 13:43:15 2013 -0600

--
 CHANGES.txt|1 +
 .../cassandra/io/compress/CompressionMetadata.java |1 +
 .../compress/CompressedInputStreamTest.java|  106 +++
 3 files changed, 108 insertions(+), 0 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/12e4c524/CHANGES.txt
--



[1/3] git commit: fix compressed stream sending extra chunk; patch by yukim reviewed by mkjellman for CASSANDRA-5105

2013-02-14 Thread yukim
fix compressed stream sending extra chunk; patch by yukim reviewed by mkjellman 
for CASSANDRA-5105


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ee0be062
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ee0be062
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ee0be062

Branch: refs/heads/cassandra-1.2
Commit: ee0be0624b0ee1b2151b9be92a5d4e6334801cde
Parents: bc42835
Author: Yuki Morishita yu...@apache.org
Authored: Thu Feb 14 13:43:01 2013 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Thu Feb 14 13:43:01 2013 -0600

--
 CHANGES.txt|1 +
 .../cassandra/io/compress/CompressionMetadata.java |1 +
 .../compress/CompressedInputStreamTest.java|  106 +++
 3 files changed, 108 insertions(+), 0 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ee0be062/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index aa25b63..39b1948 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -19,6 +19,7 @@
  * Fix missing columns in wide rows queries (CASSANDRA-5225)
  * Simplify auth setup and make system_auth ks alterable (CASSANDRA-5112)
  * Stop compactions from hanging during bootstrap (CASSANDRA-5244)
+ * fix compressed streaming sending extra chunk (CASSANDRA-5105)
 
 
 1.2.1

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ee0be062/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
--
diff --git a/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java 
b/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
index 0a3f8de..93b0091 100644
--- a/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
+++ b/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
@@ -199,6 +199,7 @@ public class CompressionMetadata
 {
 int startIndex = (int) (section.left / parameters.chunkLength());
 int endIndex = (int) (section.right / parameters.chunkLength());
+endIndex = section.right % parameters.chunkLength() == 0 ? endIndex - 1 : endIndex;
 for (int i = startIndex; i <= endIndex; i++)
 {
 long offset = i * 8;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ee0be062/test/unit/org/apache/cassandra/streaming/compress/CompressedInputStreamTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/streaming/compress/CompressedInputStreamTest.java
 
b/test/unit/org/apache/cassandra/streaming/compress/CompressedInputStreamTest.java
new file mode 100644
index 000..ecb6e14
--- /dev/null
+++ 
b/test/unit/org/apache/cassandra/streaming/compress/CompressedInputStreamTest.java
@@ -0,0 +1,106 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * License); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.streaming.compress;
+
+import java.io.ByteArrayInputStream;
+import java.io.DataInputStream;
+import java.io.File;
+import java.io.RandomAccessFile;
+import java.util.*;
+
+import org.junit.Test;
+
+import org.apache.cassandra.io.compress.*;
+import org.apache.cassandra.io.sstable.Component;
+import org.apache.cassandra.io.sstable.Descriptor;
+import org.apache.cassandra.io.sstable.SSTableMetadata;
+import org.apache.cassandra.utils.Pair;
+
+/**
+ */
+public class CompressedInputStreamTest
+{
+@Test
+public void testCompressedRead() throws Exception
+{
+testCompressedReadWith(new long[]{0L});
+testCompressedReadWith(new long[]{1L});
+testCompressedReadWith(new long[]{100L});
+
+testCompressedReadWith(new long[]{1L, 122L, 123L, 124L, 456L});
+}
+
+/**
+ * @param valuesToCheck array of longs of range(0-999)
+ * @throws Exception
+ */
+private void testCompressedReadWith(long[] valuesToCheck) throws Exception
+{
+assert valuesToCheck != null  
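
A self-contained sketch of the off-by-one this commit fixes, using illustrative numbers rather than Cassandra's actual streaming code: when a transferred section ends exactly on a chunk boundary, the naive end index points one chunk past the data, so an extra chunk would be streamed.

{code}
// Illustrative only; mirrors the logic of the CompressionMetadata change above.
public class ChunkIndexSketch
{
    static int[] chunkIndexes(long left, long right, int chunkLength)
    {
        int startIndex = (int) (left / chunkLength);
        int endIndex = (int) (right / chunkLength);
        // the fix: step back one chunk when 'right' falls exactly on a boundary
        if (right % chunkLength == 0)
            endIndex--;
        return new int[]{ startIndex, endIndex };
    }

    public static void main(String[] args)
    {
        // section covering bytes [0, 65536) with 64KiB chunks: exactly one chunk
        int[] range = chunkIndexes(0, 65536, 65536);
        System.out.println(range[0] + ".." + range[1]); // prints 0..0, not 0..1
    }
}
{code}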

[2/3] git commit: fix compressed stream sending extra chunk; patch by yukim reviewed by mkjellman for CASSANDRA-5105

2013-02-14 Thread yukim
fix compressed stream sending extra chunk; patch by yukim reviewed by mkjellman 
for CASSANDRA-5105


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ee0be062
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ee0be062
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ee0be062

Branch: refs/heads/trunk
Commit: ee0be0624b0ee1b2151b9be92a5d4e6334801cde
Parents: bc42835
Author: Yuki Morishita yu...@apache.org
Authored: Thu Feb 14 13:43:01 2013 -0600
Committer: Yuki Morishita yu...@apache.org
Committed: Thu Feb 14 13:43:01 2013 -0600

--
 CHANGES.txt|1 +
 .../cassandra/io/compress/CompressionMetadata.java |1 +
 .../compress/CompressedInputStreamTest.java|  106 +++
 3 files changed, 108 insertions(+), 0 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ee0be062/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index aa25b63..39b1948 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -19,6 +19,7 @@
  * Fix missing columns in wide rows queries (CASSANDRA-5225)
  * Simplify auth setup and make system_auth ks alterable (CASSANDRA-5112)
  * Stop compactions from hanging during bootstrap (CASSANDRA-5244)
+ * fix compressed streaming sending extra chunk (CASSANDRA-5105)
 
 
 1.2.1

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ee0be062/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
--
diff --git a/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java 
b/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
index 0a3f8de..93b0091 100644
--- a/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
+++ b/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java
@@ -199,6 +199,7 @@ public class CompressionMetadata
 {
 int startIndex = (int) (section.left / parameters.chunkLength());
 int endIndex = (int) (section.right / parameters.chunkLength());
+endIndex = section.right % parameters.chunkLength() == 0 ? endIndex - 1 : endIndex;
 for (int i = startIndex; i <= endIndex; i++)
 {
 long offset = i * 8;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ee0be062/test/unit/org/apache/cassandra/streaming/compress/CompressedInputStreamTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/streaming/compress/CompressedInputStreamTest.java
 
b/test/unit/org/apache/cassandra/streaming/compress/CompressedInputStreamTest.java
new file mode 100644
index 000..ecb6e14
--- /dev/null
+++ 
b/test/unit/org/apache/cassandra/streaming/compress/CompressedInputStreamTest.java
@@ -0,0 +1,106 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * License); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an AS IS BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.streaming.compress;
+
+import java.io.ByteArrayInputStream;
+import java.io.DataInputStream;
+import java.io.File;
+import java.io.RandomAccessFile;
+import java.util.*;
+
+import org.junit.Test;
+
+import org.apache.cassandra.io.compress.*;
+import org.apache.cassandra.io.sstable.Component;
+import org.apache.cassandra.io.sstable.Descriptor;
+import org.apache.cassandra.io.sstable.SSTableMetadata;
+import org.apache.cassandra.utils.Pair;
+
+/**
+ */
+public class CompressedInputStreamTest
+{
+@Test
+public void testCompressedRead() throws Exception
+{
+testCompressedReadWith(new long[]{0L});
+testCompressedReadWith(new long[]{1L});
+testCompressedReadWith(new long[]{100L});
+
+testCompressedReadWith(new long[]{1L, 122L, 123L, 124L, 456L});
+}
+
+/**
+ * @param valuesToCheck array of longs of range(0-999)
+ * @throws Exception
+ */
+private void testCompressedReadWith(long[] valuesToCheck) throws Exception
+{
+assert valuesToCheck != null  


[jira] [Created] (CASSANDRA-5256) Memory was freed AssertionError During Major Compaction

2013-02-14 Thread C. Scott Andreas (JIRA)
C. Scott Andreas created CASSANDRA-5256:
---

 Summary: Memory was freed AssertionError During Major Compaction
 Key: CASSANDRA-5256
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5256
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.1
 Environment: Linux ashbdrytest01p 3.2.0-37-generic #58-Ubuntu SMP Thu 
Jan 24 15:28:10 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

java version 1.6.0_30
Java(TM) SE Runtime Environment (build 1.6.0_30-b12)
Java HotSpot(TM) 64-Bit Server VM (build 20.5-b03, mixed mode)

Ubuntu 12.04.2 LTS
Reporter: C. Scott Andreas


When initiating a major compaction with `./nodetool -h localhost compact`, an 
AssertionError is thrown in the CompactionExecutor from o.a.c.io.util.Memory:

ERROR [CompactionExecutor:41495] 2013-02-14 14:38:35,720 CassandraDaemon.java 
(line 133) Exception in thread Thread[CompactionExecutor:41495,1,RMI Runtime]
java.lang.AssertionError: Memory was freed
  at org.apache.cassandra.io.util.Memory.checkPosition(Memory.java:146)
at org.apache.cassandra.io.util.Memory.getLong(Memory.java:116)
at 
org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:176)
at 
org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:88)
at 
org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:327)
at java.io.RandomAccessFile.readInt(RandomAccessFile.java:755)
at java.io.RandomAccessFile.readLong(RandomAccessFile.java:792)
at 
org.apache.cassandra.utils.BytesReadTracker.readLong(BytesReadTracker.java:114)
at 
org.apache.cassandra.db.ColumnSerializer.deserializeColumnBody(ColumnSerializer.java:101)
at 
org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:92)
at 
org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumnsFromSSTable(ColumnFamilySerializer.java:149)
at 
org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:235)
at 
org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:109)
at 
org.apache.cassandra.db.compaction.PrecompactedRow.init(PrecompactedRow.java:93)
at 
org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:162)
at 
org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:76)
at 
org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:57)
at 
org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:114)
at 
org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:97)
at 
com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at 
com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at 
org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:158)
at 
org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:71)
at 
org.apache.cassandra.db.compaction.CompactionManager$6.runMayThrow(CompactionManager.java:342)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)

---

I've invoked `nodetool compact` three times; this occurred after each. The node 
has been up for a couple of days accepting writes and has not been restarted.

Here's the server's log since it was started a few days ago: 
https://gist.github.com/cscotta/4956472/raw/95e7cbc68de1aefaeca11812cbb98d5d46f534e8/cassandra.log

Here's the code being used to issue writes to the datastore: 
https://gist.github.com/cscotta/20cbd36c2503c71d06e9

---

Configuration: One node, one keyspace, one column family. ~60 writes/second of 
data with a TTL of 86400, zero reads. Stock cassandra.yaml.

Keyspace DDL:

create keyspace jetpack;
use jetpack;
create column family Metrics with key_validation_class = 'UTF8Type' and 
comparator = 'IntegerType';

--
This message is automatically generated by JIRA.
If you think it was sent 

[jira] [Updated] (CASSANDRA-5256) Memory was freed AssertionError During Major Compaction

2013-02-14 Thread C. Scott Andreas (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

C. Scott Andreas updated CASSANDRA-5256:


Description: 
When initiating a major compaction with `./nodetool -h localhost compact`, an 
AssertionError is thrown in the CompactionExecutor from o.a.c.io.util.Memory:

ERROR [CompactionExecutor:41495] 2013-02-14 14:38:35,720 CassandraDaemon.java 
(line 133) Exception in thread Thread[CompactionExecutor:41495,1,RMI Runtime]
java.lang.AssertionError: Memory was freed
  at org.apache.cassandra.io.util.Memory.checkPosition(Memory.java:146)
at org.apache.cassandra.io.util.Memory.getLong(Memory.java:116)
at 
org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:176)
at 
org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:88)
at 
org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:327)
at java.io.RandomAccessFile.readInt(RandomAccessFile.java:755)
at java.io.RandomAccessFile.readLong(RandomAccessFile.java:792)
at 
org.apache.cassandra.utils.BytesReadTracker.readLong(BytesReadTracker.java:114)
at 
org.apache.cassandra.db.ColumnSerializer.deserializeColumnBody(ColumnSerializer.java:101)
at 
org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:92)
at 
org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumnsFromSSTable(ColumnFamilySerializer.java:149)
at 
org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:235)
at 
org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:109)
at 
org.apache.cassandra.db.compaction.PrecompactedRow.init(PrecompactedRow.java:93)
at 
org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:162)
at 
org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:76)
at 
org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:57)
at 
org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:114)
at 
org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:97)
at 
com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at 
com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at 
org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:158)
at 
org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:71)
at 
org.apache.cassandra.db.compaction.CompactionManager$6.runMayThrow(CompactionManager.java:342)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)

---

I've invoked the `nodetool compact` three times; this occurred after each. The 
node has been up for a couple days accepting writes and has not been restarted.

Here's the server's log since it was started a few days ago: 
https://gist.github.com/cscotta/4956472/raw/95e7cbc68de1aefaeca11812cbb98d5d46f534e8/cassandra.log

Here's the code being used to issue writes to the datastore: 
https://gist.github.com/cscotta/20cbd36c2503c71d06e9

---

Configuration: One node, one keyspace, one column family. ~60 writes/second of 
data with a TTL of 86400, zero reads. Stock cassandra.yaml.

Keyspace DDL:

create keyspace jetpack;
use jetpack;
create column family Metrics with key_validation_class = 'UTF8Type' and 
comparator = 'IntegerType';

  was:
When initiating a major compaction with `./nodetool -h localhost compact`, an 
AssertionError is thrown in the CompactionExecutor from o.a.c.io.util.Memory:

ERROR [CompactionExecutor:41495] 2013-02-14 14:38:35,720 CassandraDaemon.java 
(line 133) Exception in thread Thread[CompactionExecutor:41495,1,RMI Runtime]
java.lang.AssertionError: Memory was freed
  at org.apache.cassandra.io.util.Memory.checkPosition(Memory.java:146)
at org.apache.cassandra.io.util.Memory.getLong(Memory.java:116)
at 

[jira] [Created] (CASSANDRA-5257) Compaction race allows sstables to be in multiple compactions simultaneously

2013-02-14 Thread Jonathan Ellis (JIRA)
Jonathan Ellis created CASSANDRA-5257:
-

 Summary: Compaction race allows sstables to be in multiple 
compactions simultaneously
 Key: CASSANDRA-5257
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5257
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
Reporter: Jonathan Ellis
Priority: Critical
 Fix For: 1.2.2


Reported by [~cscotta] on Twitter.  Here is a log fragment showing the 2110 
sstable pulled into two compactions:

{noformat}
 INFO [CompactionExecutor:41495] 2013-02-14 14:19:26,621 CompactionTask.java 
(line 118) Compacting 
[SSTableReader(path='/home/cscotta/jetpack/cassandra/data/jetpack/Metrics/jetpack-Metrics-ib-2110-Data.db'),
 
SSTableReader(path='/home/cscotta/jetpack/cassandra/data/jetpack/Metrics/jetpack-Metrics-ib-1711-Data.db'),
 
SSTableReader(path='/home/cscotta/jetpack/cassandra/data/jetpack/Metrics/jetpack-Metrics-ib-2068-Data.db'),
 
SSTableReader(path='/home/cscotta/jetpack/cassandra/data/jetpack/Metrics/jetpack-Metrics-ib-1391-Data.db'),
 
SSTableReader(path='/home/cscotta/jetpack/cassandra/data/jetpack/Metrics/jetpack-Metrics-ib-2115-Data.db'),
 
SSTableReader(path='/home/cscotta/jetpack/cassandra/data/jetpack/Metrics/jetpack-Metrics-ib-2052-Data.db'),
 
SSTableReader(path='/home/cscotta/jetpack/cassandra/data/jetpack/Metrics/jetpack-Metrics-ib-2089-Data.db')]
 INFO [OptionalTasks:1] 2013-02-14 14:20:28,978 MeteredFlusher.java (line 58) 
flushing high-traffic column family CFS(Keyspace='jetpack', 
ColumnFamily='Metrics') (estimated 399218463 bytes)
 INFO [OptionalTasks:1] 2013-02-14 14:20:28,979 ColumnFamilyStore.java (line 
640) Enqueuing flush of Memtable-Metrics@347404907(60626496/399218463 
serialized/live bytes, 2165232 ops)
 INFO [FlushWriter:1590] 2013-02-14 14:20:28,980 Memtable.java (line 447) 
Writing Memtable-Metrics@347404907(60626496/399218463 serialized/live bytes, 
2165232 ops)
 INFO [FlushWriter:1590] 2013-02-14 14:20:30,606 Memtable.java (line 481) 
Completed flushing 
/home/cscotta/jetpack/cassandra/data/jetpack/Metrics/jetpack-Metrics-ib-2117-Data.db
 (21066414 bytes) for commitlog position 
ReplayPosition(segmentId=1360737579813, position=4969777)
 INFO [OptionalTasks:1] 2013-02-14 14:21:51,046 MeteredFlusher.java (line 58) 
flushing high-traffic column family CFS(Keyspace='jetpack', 
ColumnFamily='Metrics') (estimated 399221413 bytes)
 INFO [OptionalTasks:1] 2013-02-14 14:21:51,046 ColumnFamilyStore.java (line 
640) Enqueuing flush of Memtable-Metrics@663159049(60626944/399221413 
serialized/live bytes, 2165248 ops)
 INFO [FlushWriter:1591] 2013-02-14 14:21:51,047 Memtable.java (line 447) 
Writing Memtable-Metrics@663159049(60626944/399221413 serialized/live bytes, 
2165248 ops)
 INFO [FlushWriter:1591] 2013-02-14 14:21:52,692 Memtable.java (line 481) 
Completed flushing 
/home/cscotta/jetpack/cassandra/data/jetpack/Metrics/jetpack-Metrics-ib-2118-Data.db
 (21071657 bytes) for commitlog position 
ReplayPosition(segmentId=1360737579815, position=14067099)
 INFO [OptionalTasks:1] 2013-02-14 14:23:13,059 MeteredFlusher.java (line 58) 
flushing high-traffic column family CFS(Keyspace='jetpack', 
ColumnFamily='Metrics') (estimated 399214407 bytes)
 INFO [OptionalTasks:1] 2013-02-14 14:23:13,060 ColumnFamilyStore.java (line 
640) Enqueuing flush of Memtable-Metrics@480704694(60625880/399214407 
serialized/live bytes, 2165210 ops)
 INFO [FlushWriter:1592] 2013-02-14 14:23:13,061 Memtable.java (line 447) 
Writing Memtable-Metrics@480704694(60625880/399214407 serialized/live bytes, 
2165210 ops)
 INFO [FlushWriter:1592] 2013-02-14 14:23:14,677 Memtable.java (line 481) 
Completed flushing 
/home/cscotta/jetpack/cassandra/data/jetpack/Metrics/jetpack-Metrics-ib-2119-Data.db
 (21074605 bytes) for commitlog position 
ReplayPosition(segmentId=1360737579817, position=23128798)
 INFO [OptionalTasks:1] 2013-02-14 14:24:35,073 MeteredFlusher.java (line 58) 
flushing high-traffic column family CFS(Keyspace='jetpack', 
ColumnFamily='Metrics') (estimated 399214407 bytes)
 INFO [OptionalTasks:1] 2013-02-14 14:24:35,073 ColumnFamilyStore.java (line 
640) Enqueuing flush of Memtable-Metrics@2075924408(60625880/399214407 
serialized/live bytes, 2165210 ops)
 INFO [FlushWriter:1593] 2013-02-14 14:24:35,074 Memtable.java (line 447) 
Writing Memtable-Metrics@2075924408(60625880/399214407 serialized/live bytes, 
2165210 ops)
 INFO [FlushWriter:1593] 2013-02-14 14:24:36,683 Memtable.java (line 481) 
Completed flushing 
/home/cscotta/jetpack/cassandra/data/jetpack/Metrics/jetpack-Metrics-ib-2120-Data.db
 (21044055 bytes) for commitlog position 
ReplayPosition(segmentId=1360737579819, position=32199135)
 INFO [CompactionExecutor:41571] 2013-02-14 14:24:36,684 CompactionTask.java 
(line 118) Compacting 

[jira] [Resolved] (CASSANDRA-5257) Compaction race allows sstables to be in multiple compactions simultaneously

2013-02-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-5257.
---

Resolution: Duplicate

duplicate of CASSANDRA-5256

 Compaction race allows sstables to be in multiple compactions simultaneously
 

 Key: CASSANDRA-5257
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5257
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
Reporter: Jonathan Ellis
Priority: Critical
  Labels: compaction
 Fix For: 1.2.2


 Reported by [~cscotta] on Twitter.  Here is a log fragment showing the 2110 
 sstable pulled into two compactions:
 {noformat}
  INFO [CompactionExecutor:41495] 2013-02-14 14:19:26,621 CompactionTask.java 
 (line 118) Compacting 
 [SSTableReader(path='/home/cscotta/jetpack/cassandra/data/jetpack/Metrics/jetpack-Metrics-ib-2110-Data.db'),
  
 SSTableReader(path='/home/cscotta/jetpack/cassandra/data/jetpack/Metrics/jetpack-Metrics-ib-1711-Data.db'),
  
 SSTableReader(path='/home/cscotta/jetpack/cassandra/data/jetpack/Metrics/jetpack-Metrics-ib-2068-Data.db'),
  
 SSTableReader(path='/home/cscotta/jetpack/cassandra/data/jetpack/Metrics/jetpack-Metrics-ib-1391-Data.db'),
  
 SSTableReader(path='/home/cscotta/jetpack/cassandra/data/jetpack/Metrics/jetpack-Metrics-ib-2115-Data.db'),
  
 SSTableReader(path='/home/cscotta/jetpack/cassandra/data/jetpack/Metrics/jetpack-Metrics-ib-2052-Data.db'),
  
 SSTableReader(path='/home/cscotta/jetpack/cassandra/data/jetpack/Metrics/jetpack-Metrics-ib-2089-Data.db')]
  INFO [OptionalTasks:1] 2013-02-14 14:20:28,978 MeteredFlusher.java (line 58) 
 flushing high-traffic column family CFS(Keyspace='jetpack', 
 ColumnFamily='Metrics') (estimated 399218463 bytes)
  INFO [OptionalTasks:1] 2013-02-14 14:20:28,979 ColumnFamilyStore.java (line 
 640) Enqueuing flush of Memtable-Metrics@347404907(60626496/399218463 
 serialized/live bytes, 2165232 ops)
  INFO [FlushWriter:1590] 2013-02-14 14:20:28,980 Memtable.java (line 447) 
 Writing Memtable-Metrics@347404907(60626496/399218463 serialized/live bytes, 
 2165232 ops)
  INFO [FlushWriter:1590] 2013-02-14 14:20:30,606 Memtable.java (line 481) 
 Completed flushing 
 /home/cscotta/jetpack/cassandra/data/jetpack/Metrics/jetpack-Metrics-ib-2117-Data.db
  (21066414 bytes) for commitlog position 
 ReplayPosition(segmentId=1360737579813, position=4969777)
  INFO [OptionalTasks:1] 2013-02-14 14:21:51,046 MeteredFlusher.java (line 58) 
 flushing high-traffic column family CFS(Keyspace='jetpack', 
 ColumnFamily='Metrics') (estimated 399221413 bytes)
  INFO [OptionalTasks:1] 2013-02-14 14:21:51,046 ColumnFamilyStore.java (line 
 640) Enqueuing flush of Memtable-Metrics@663159049(60626944/399221413 
 serialized/live bytes, 2165248 ops)
  INFO [FlushWriter:1591] 2013-02-14 14:21:51,047 Memtable.java (line 447) 
 Writing Memtable-Metrics@663159049(60626944/399221413 serialized/live bytes, 
 2165248 ops)
  INFO [FlushWriter:1591] 2013-02-14 14:21:52,692 Memtable.java (line 481) 
 Completed flushing 
 /home/cscotta/jetpack/cassandra/data/jetpack/Metrics/jetpack-Metrics-ib-2118-Data.db
  (21071657 bytes) for commitlog position 
 ReplayPosition(segmentId=1360737579815, position=14067099)
  INFO [OptionalTasks:1] 2013-02-14 14:23:13,059 MeteredFlusher.java (line 58) 
 flushing high-traffic column family CFS(Keyspace='jetpack', 
 ColumnFamily='Metrics') (estimated 399214407 bytes)
  INFO [OptionalTasks:1] 2013-02-14 14:23:13,060 ColumnFamilyStore.java (line 
 640) Enqueuing flush of Memtable-Metrics@480704694(60625880/399214407 
 serialized/live bytes, 2165210 ops)
  INFO [FlushWriter:1592] 2013-02-14 14:23:13,061 Memtable.java (line 447) 
 Writing Memtable-Metrics@480704694(60625880/399214407 serialized/live bytes, 
 2165210 ops)
  INFO [FlushWriter:1592] 2013-02-14 14:23:14,677 Memtable.java (line 481) 
 Completed flushing 
 /home/cscotta/jetpack/cassandra/data/jetpack/Metrics/jetpack-Metrics-ib-2119-Data.db
  (21074605 bytes) for commitlog position 
 ReplayPosition(segmentId=1360737579817, position=23128798)
  INFO [OptionalTasks:1] 2013-02-14 14:24:35,073 MeteredFlusher.java (line 58) 
 flushing high-traffic column family CFS(Keyspace='jetpack', 
 ColumnFamily='Metrics') (estimated 399214407 bytes)
  INFO [OptionalTasks:1] 2013-02-14 14:24:35,073 ColumnFamilyStore.java (line 
 640) Enqueuing flush of Memtable-Metrics@2075924408(60625880/399214407 
 serialized/live bytes, 2165210 ops)
  INFO [FlushWriter:1593] 2013-02-14 14:24:35,074 Memtable.java (line 447) 
 Writing Memtable-Metrics@2075924408(60625880/399214407 serialized/live bytes, 
 2165210 ops)
  INFO [FlushWriter:1593] 2013-02-14 14:24:36,683 Memtable.java (line 481) 
 Completed flushing 
 

[jira] [Updated] (CASSANDRA-5256) Memory was freed AssertionError During Major Compaction

2013-02-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5256:
--

 Priority: Critical  (was: Major)
Affects Version/s: (was: 1.2.1)
   1.2.0

Some race is allowing sstables to participate in multiple concurrent 
compactions; when the first finishes, it frees the compression metadata, which 
causes the above AssertionError in the other.  Look at sstable 2110 for example.
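
A hypothetical illustration of the kind of guard this points to (the names are made up and this is not Cassandra's actual tracker code): a compaction should only proceed if it can atomically claim every input sstable, so no sstable can be freed by one compaction while another is still reading it.

{code}
// Illustrative only; not Cassandra's actual code.
import java.util.Collection;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

class CompactingGuard<T>
{
    private final Set<T> compacting = new HashSet<T>();

    // returns true only if all inputs were free and are now claimed
    synchronized boolean markCompacting(Collection<T> inputs)
    {
        if (!Collections.disjoint(compacting, inputs))
            return false; // some input already belongs to another compaction
        compacting.addAll(inputs);
        return true;
    }

    synchronized void unmark(Collection<T> inputs)
    {
        compacting.removeAll(inputs);
    }
}
{code}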

 Memory was freed AssertionError During Major Compaction
 -

 Key: CASSANDRA-5256
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5256
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
 Environment: Linux ashbdrytest01p 3.2.0-37-generic #58-Ubuntu SMP Thu 
 Jan 24 15:28:10 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
 java version 1.6.0_30
 Java(TM) SE Runtime Environment (build 1.6.0_30-b12)
 Java HotSpot(TM) 64-Bit Server VM (build 20.5-b03, mixed mode)
 Ubuntu 12.04.2 LTS
Reporter: C. Scott Andreas
Priority: Critical

 When initiating a major compaction with `./nodetool -h localhost compact`, an 
 AssertionError is thrown in the CompactionExecutor from o.a.c.io.util.Memory:
 ERROR [CompactionExecutor:41495] 2013-02-14 14:38:35,720 CassandraDaemon.java 
 (line 133) Exception in thread Thread[CompactionExecutor:41495,1,RMI Runtime]
 java.lang.AssertionError: Memory was freed
   at org.apache.cassandra.io.util.Memory.checkPosition(Memory.java:146)
   at org.apache.cassandra.io.util.Memory.getLong(Memory.java:116)
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:176)
   at 
 org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:88)
   at 
 org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:327)
   at java.io.RandomAccessFile.readInt(RandomAccessFile.java:755)
   at java.io.RandomAccessFile.readLong(RandomAccessFile.java:792)
   at 
 org.apache.cassandra.utils.BytesReadTracker.readLong(BytesReadTracker.java:114)
   at 
 org.apache.cassandra.db.ColumnSerializer.deserializeColumnBody(ColumnSerializer.java:101)
   at 
 org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:92)
   at 
 org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumnsFromSSTable(ColumnFamilySerializer.java:149)
   at 
 org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:235)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:109)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.init(PrecompactedRow.java:93)
   at 
 org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:162)
   at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:76)
   at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:57)
   at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:114)
   at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:97)
   at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
   at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:158)
   at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:71)
   at 
 org.apache.cassandra.db.compaction.CompactionManager$6.runMayThrow(CompactionManager.java:342)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 ---
 I've invoked the `nodetool compact` three times; this occurred after each. 
 The node has been up for a couple days accepting writes and has not been 
 restarted.
 Here's the server's log since it was started a few days ago: 
 

[jira] [Commented] (CASSANDRA-5256) Memory was freed AssertionError During Major Compaction

2013-02-14 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13578694#comment-13578694
 ] 

Jonathan Ellis commented on CASSANDRA-5256:
---

Is this LeveledCompactionStrategy or SizeTiered?

 Memory was freed AssertionError During Major Compaction
 -

 Key: CASSANDRA-5256
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5256
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
 Environment: Linux ashbdrytest01p 3.2.0-37-generic #58-Ubuntu SMP Thu 
 Jan 24 15:28:10 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
 java version 1.6.0_30
 Java(TM) SE Runtime Environment (build 1.6.0_30-b12)
 Java HotSpot(TM) 64-Bit Server VM (build 20.5-b03, mixed mode)
 Ubuntu 12.04.2 LTS
Reporter: C. Scott Andreas
Priority: Critical

 When initiating a major compaction with `./nodetool -h localhost compact`, an 
 AssertionError is thrown in the CompactionExecutor from o.a.c.io.util.Memory:
 ERROR [CompactionExecutor:41495] 2013-02-14 14:38:35,720 CassandraDaemon.java 
 (line 133) Exception in thread Thread[CompactionExecutor:41495,1,RMI Runtime]
 java.lang.AssertionError: Memory was freed
   at org.apache.cassandra.io.util.Memory.checkPosition(Memory.java:146)
   at org.apache.cassandra.io.util.Memory.getLong(Memory.java:116)
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:176)
   at 
 org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:88)
   at 
 org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:327)
   at java.io.RandomAccessFile.readInt(RandomAccessFile.java:755)
   at java.io.RandomAccessFile.readLong(RandomAccessFile.java:792)
   at 
 org.apache.cassandra.utils.BytesReadTracker.readLong(BytesReadTracker.java:114)
   at 
 org.apache.cassandra.db.ColumnSerializer.deserializeColumnBody(ColumnSerializer.java:101)
   at 
 org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:92)
   at 
 org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumnsFromSSTable(ColumnFamilySerializer.java:149)
   at 
 org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:235)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:109)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.init(PrecompactedRow.java:93)
   at 
 org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:162)
   at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:76)
   at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:57)
   at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:114)
   at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:97)
   at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
   at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:158)
   at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:71)
   at 
 org.apache.cassandra.db.compaction.CompactionManager$6.runMayThrow(CompactionManager.java:342)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 ---
 I've invoked the `nodetool compact` three times; this occurred after each. 
 The node has been up for a couple days accepting writes and has not been 
 restarted.
 Here's the server's log since it was started a few days ago: 
 https://gist.github.com/cscotta/4956472/raw/95e7cbc68de1aefaeca11812cbb98d5d46f534e8/cassandra.log
 Here's the code being used to issue writes to the datastore: 
 https://gist.github.com/cscotta/20cbd36c2503c71d06e9
 ---
 Configuration: One node, one keyspace, one column 

[jira] [Commented] (CASSANDRA-4898) Authentication provider in Cassandra itself

2013-02-14 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13578695#comment-13578695
 ] 

Jonathan Ellis commented on CASSANDRA-4898:
---

LGTM.  Okay with the class names.

 Authentication provider in Cassandra itself
 ---

 Key: CASSANDRA-4898
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4898
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.1.6
Reporter: Dirkjan Bussink
Assignee: Aleksey Yeschenko
  Labels: authentication, authorization
 Fix For: 1.2.2


 I've been working on an implementation for both IAuthority2 and 
 IAuthenticator that uses Cassandra itself to store the necessary credentials. 
 I'm planning on open sourcing this shortly.
 Is there any interest in this? It tries to provide reasonable security, for 
 example using PBKDF2 to store passwords with a configurable iteration count, 
 and it manages all the rights available in IAuthority2. 
 My main use goal isn't security / confidentiality of the data, but more that 
 I don't want multiple consumers of the cluster to accidentally screw stuff 
 up. Only certain users can write data, others can read it out again and 
 further process it.
 I'm planning on releasing this soon under an open source license (probably 
 the same as Cassandra itself). Would there be interest in incorporating it as 
 a new reference implementation instead of the properties file implementation 
 perhaps? Or can I better maintain it separately? I would love if people from 
 the community would want to review it, since I have been dabbling in the 
 Cassandra source code only for a short while now.
 During the development of this I've encountered a few bumps and I wonder 
 whether they could be addressed or not.
 = Moment when validateConfiguration() runs =
 Is there a deliberate reason that validateConfiguration() is executed before 
 all information about keyspaces, column families etc. is available? In the 
 current form I therefore can't validate whether column families etc. are 
 available for authentication since they aren't loaded yet.
 I've wanted to use this to make relatively easy bootstrapping possible. My 
 approach here would be to only enable authentication if the needed keyspace 
 is available. This allows for configuring the cluster, then import the 
 necessary authentication data for an admin user to bootstrap further and then 
 restart every node in the cluster.
 Basically the questions here are, can the moment when validateConfiguration() 
 runs for an authentication provider be changed? Is this approach to 
 bootstrapping reasonable or do people have better ideas?
 = AbstractReplicationStrategy has package visible constructor =
 I've added a strategy that basically says that data should be available on 
 all nodes. The amount of data use for authentication is very limited. 
 Replicating it to every node is there for not very problematic and allows for 
 every node to have all data locally available for verifying requests.
 I wanted to put this strategy into it's own package inside the authentication 
 module, but since the constructor of AbstractReplicationStrategy has no 
 visibility explicitly marked, it's only available inside the same package.
 I'm not sure whether implementing a strategy to replicate data to all nodes 
 is a sane idea and whether my implementation of this strategy is correct. 
 What do you people think of this? Would people want to review the 
 implementation?
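
For reference, a minimal JDK-only sketch of the PBKDF2 approach mentioned in the description; the parameters are illustrative, and the change that was ultimately committed (note the jbcrypt jar in the merge further down) appears to use bcrypt instead.

{code}
// Illustrative only; parameter choices are not from the ticket's implementation.
import java.security.SecureRandom;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class Pbkdf2Sketch
{
    public static byte[] hash(char[] password, byte[] salt, int iterations) throws Exception
    {
        PBEKeySpec spec = new PBEKeySpec(password, salt, iterations, 256);
        SecretKeyFactory f = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
        return f.generateSecret(spec).getEncoded();
    }

    public static void main(String[] args) throws Exception
    {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);         // store the salt next to the digest
        byte[] digest = hash("s3cret".toCharArray(), salt, 10000);
        System.out.println(digest.length * 8 + "-bit derived key");
    }
}
{code}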

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-4872) Move manifest into sstable metadata

2013-02-14 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-4872:
---

Attachment: 
0001-CASSANDRA-4872-move-sstable-level-into-sstable-metad-v4.patch

v4 fixes overlapping sstables within a level on startup

 Move manifest into sstable metadata
 ---

 Key: CASSANDRA-4872
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4872
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jonathan Ellis
Assignee: Marcus Eriksson
Priority: Minor
 Fix For: 2.0

 Attachments: 
 0001-CASSANDRA-4872-move-sstable-level-into-sstable-metad-v1.patch, 
 0001-CASSANDRA-4872-move-sstable-level-into-sstable-metad-v2.patch, 
 0001-CASSANDRA-4872-move-sstable-level-into-sstable-metad-v3.patch, 
 0001-CASSANDRA-4872-move-sstable-level-into-sstable-metad-v4.patch


 Now that we have a metadata component it would be better to keep sstable 
 level there, than in a separate manifest.  With information per-sstable we 
 don't need to do a full re-level if there is corruption.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5256) Memory was freed AssertionError During Major Compaction

2013-02-14 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13578723#comment-13578723
 ] 

Brandon Williams commented on CASSANDRA-5256:
-

It's STS.

 Memory was freed AssertionError During Major Compaction
 -

 Key: CASSANDRA-5256
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5256
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
 Environment: Linux ashbdrytest01p 3.2.0-37-generic #58-Ubuntu SMP Thu 
 Jan 24 15:28:10 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
 java version 1.6.0_30
 Java(TM) SE Runtime Environment (build 1.6.0_30-b12)
 Java HotSpot(TM) 64-Bit Server VM (build 20.5-b03, mixed mode)
 Ubuntu 12.04.2 LTS
Reporter: C. Scott Andreas
Priority: Critical

 When initiating a major compaction with `./nodetool -h localhost compact`, an 
 AssertionError is thrown in the CompactionExecutor from o.a.c.io.util.Memory:
 ERROR [CompactionExecutor:41495] 2013-02-14 14:38:35,720 CassandraDaemon.java 
 (line 133) Exception in thread Thread[CompactionExecutor:41495,1,RMI Runtime]
 java.lang.AssertionError: Memory was freed
   at org.apache.cassandra.io.util.Memory.checkPosition(Memory.java:146)
   at org.apache.cassandra.io.util.Memory.getLong(Memory.java:116)
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:176)
   at 
 org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:88)
   at 
 org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:327)
   at java.io.RandomAccessFile.readInt(RandomAccessFile.java:755)
   at java.io.RandomAccessFile.readLong(RandomAccessFile.java:792)
   at 
 org.apache.cassandra.utils.BytesReadTracker.readLong(BytesReadTracker.java:114)
   at 
 org.apache.cassandra.db.ColumnSerializer.deserializeColumnBody(ColumnSerializer.java:101)
   at 
 org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:92)
   at 
 org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumnsFromSSTable(ColumnFamilySerializer.java:149)
   at 
 org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:235)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:109)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.init(PrecompactedRow.java:93)
   at 
 org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:162)
   at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:76)
   at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:57)
   at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:114)
   at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:97)
   at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
   at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:158)
   at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:71)
   at 
 org.apache.cassandra.db.compaction.CompactionManager$6.runMayThrow(CompactionManager.java:342)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 ---
 I've invoked the `nodetool compact` three times; this occurred after each. 
 The node has been up for a couple days accepting writes and has not been 
 restarted.
 Here's the server's log since it was started a few days ago: 
 https://gist.github.com/cscotta/4956472/raw/95e7cbc68de1aefaeca11812cbb98d5d46f534e8/cassandra.log
 Here's the code being used to issue writes to the datastore: 
 https://gist.github.com/cscotta/20cbd36c2503c71d06e9
 ---
 Configuration: One node, one keyspace, one column family. ~60 writes/second 
 of data 

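For readers unfamiliar with the error itself: the stack trace above shows the assertion
being raised in o.a.c.io.util.Memory.checkPosition while a compaction reads compressed-chunk
offsets. The sketch below uses invented names and placeholder native calls (it is not the
actual Memory implementation), but it shows the general shape of an off-heap wrapper whose
accessors assert the allocation has not already been released -- which is what "Memory was
freed" means when a compaction keeps reading an sstable's CompressionMetadata after its
off-heap buffer has been freed.

// Minimal sketch only; names and native calls are assumptions, not the actual
// org.apache.cassandra.io.util.Memory implementation.
import java.util.concurrent.atomic.AtomicBoolean;

final class OffHeapMemorySketch
{
    private final long size;
    private long peer;                              // address of the native allocation (0 once freed)
    private final AtomicBoolean freed = new AtomicBoolean(false);

    OffHeapMemorySketch(long size)
    {
        this.size = size;
        this.peer = allocateNative(size);           // stand-in for an unsafe.allocateMemory-style call
    }

    long getLong(long offset)
    {
        checkPosition(offset);
        return readNativeLong(peer + offset);       // stand-in for a raw off-heap read
    }

    void free()
    {
        if (freed.compareAndSet(false, true))
        {
            releaseNative(peer);                    // stand-in for freeing the native allocation
            peer = 0;
        }
    }

    private void checkPosition(long offset)
    {
        // The shape of the failing check: once free() has run, any further read
        // trips the assertion seen in the report ("Memory was freed").
        assert !freed.get() : "Memory was freed";
        assert offset >= 0 && offset + 8 <= size : "position out of bounds";
    }

    // Placeholders so the sketch stays self-contained and compiles.
    private static long allocateNative(long size)    { return 1L; }
    private static long readNativeLong(long address) { return 0L; }
    private static void releaseNative(long address)  { }
}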

[2/2] git commit: Merge branch 'cassandra-1.2' into trunk

2013-02-14 Thread aleksey
Updated Branches:
  refs/heads/trunk 12e4c524b -> 36f632c9e


Merge branch 'cassandra-1.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/36f632c9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/36f632c9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/36f632c9

Branch: refs/heads/trunk
Commit: 36f632c9eb866b995673b3d03ae9d469eb261098
Parents: 12e4c52 0b83682
Author: Aleksey Yeschenko alek...@apache.org
Authored: Fri Feb 15 01:27:36 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Fri Feb 15 01:27:36 2013 +0300

--
 CHANGES.txt|2 +
 NEWS.txt   |   17 +
 build.xml  |3 +-
 conf/cassandra.yaml|   22 +-
 examples/simple_authentication/README.txt  |   25 --
 .../simple_authentication/conf/access.properties   |   39 ---
 .../simple_authentication/conf/passwd.properties   |   24 --
 .../apache/cassandra/auth/SimpleAuthenticator.java |  142 
 .../apache/cassandra/auth/SimpleAuthorizer.java|  157 -
 lib/jbcrypt-0.3m.jar   |  Bin 0 -> 17750 bytes
 lib/licenses/jbcrypt-0.3m.txt  |   17 +
 src/java/org/apache/cassandra/auth/Auth.java   |8 +-
 .../apache/cassandra/auth/CassandraAuthorizer.java |  254 +++
 .../org/apache/cassandra/auth/IAuthenticator.java  |   20 +-
 .../org/apache/cassandra/auth/IAuthorizer.java |   24 +-
 .../apache/cassandra/auth/LegacyAuthenticator.java |   14 +-
 .../apache/cassandra/auth/LegacyAuthorizer.java|4 +-
 .../cassandra/auth/PasswordAuthenticator.java  |  223 +
 .../org/apache/cassandra/cli/CliSessionState.java  |4 +-
 .../cql3/statements/AlterUserStatement.java|   15 +-
 .../cql3/statements/CreateUserStatement.java   |   12 +-
 .../cql3/statements/DropUserStatement.java |9 +-
 .../cassandra/cql3/statements/GrantStatement.java  |6 +-
 .../cql3/statements/ListPermissionsStatement.java  |   16 +-
 .../statements/PermissionAlteringStatement.java|   22 +-
 .../cassandra/cql3/statements/RevokeStatement.java |6 +-
 .../org/apache/cassandra/service/ClientState.java  |5 +-
 test/conf/access.properties|   29 --
 28 files changed, 630 insertions(+), 489 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/36f632c9/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/36f632c9/build.xml
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/36f632c9/conf/cassandra.yaml
--



[jira] [Created] (CASSANDRA-5258) Create dtests for all auth CQL3 statements + PasswordAuthenticator + CassandraAuthorizer

2013-02-14 Thread Aleksey Yeschenko (JIRA)
Aleksey Yeschenko created CASSANDRA-5258:


 Summary: Create dtests for all auth CQL3 statements + 
PasswordAuthenticator + CassandraAuthorizer
 Key: CASSANDRA-5258
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5258
 Project: Cassandra
  Issue Type: Test
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
Priority: Trivial


Now that we ship default implementations of IAuthenticator and IAuthorizer it's 
time to create dtests for everything auth-related (and the new implementations, 
too).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4973) Secondary Index stops returning rows when caching=ALL

2013-02-14 Thread Matthew F. Dennis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13578823#comment-13578823
 ] 

Matthew F. Dennis commented on CASSANDRA-4973:
--

duplicate of CASSANDRA-4785 ?

 Secondary Index stops returning rows when caching=ALL
 -

 Key: CASSANDRA-4973
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4973
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.2, 1.1.6
 Environment: Centos 6.3, Java 1.6.0_35, cass. 1.1.2 upgraded to 1.1.6
Reporter: Daniel Strawson
 Attachments: secondary_index_rowcache_restart_test.py


 I've been using Cassandra on a project for a little while in development and 
 have recently started having an issue where the secondary index suddenly stops 
 working. This is happening on my new production system; we are not yet live. 
 Things work OK one moment, then suddenly queries to the CF through the 
 secondary index stop returning data. I've seen it happen on 3 CFs. I've 
 tried:
 - various nodetools repair / scrub / rebuild_indexes options, none seem to 
 make a difference.
 - Doing a 'update column family whatever with column_metadata=[]' then 
 repeating with my correct column_metadata definition.  This seems to fix the 
 problem (temporarily) until it comes back.
 The last time it happened I had just restarted Cassandra, so it could be the 
 restart that is causing the issue. The production system is OK at the moment; 
 I will try restarting a bit later when it's not being used, and if I can get 
 the issue to recur I will add more information.
 The problem first manifested itself in 1.1.2, so I upgraded to 1.1.6, this 
 has not fixed it.
 Here is an example of the create column family I'm using for one of the CFs 
 that affected:
 create column family region
   with column_type = 'Standard'
   and comparator = 'UTF8Type'
   and default_validation_class = 'BytesType'
   and key_validation_class = 'UTF8Type'
   and read_repair_chance = 0.1
   and dclocal_read_repair_chance = 0.0
   and gc_grace = 864000
   and min_compaction_threshold = 4
   and max_compaction_threshold = 32
   and replicate_on_write = true
   and compaction_strategy = 
 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'
   and caching = 'KEYS_ONLY'
   and column_metadata = [
 
 {column_name : 'label',
 validation_class : UTF8Type},
 
 {column_name : 'countryCode',
 validation_class : UTF8Type,
 index_name : 'region_countryCode_idx',
 index_type : 0},
 
 ]
   and compression_options = {'sstable_compression' : 
 'org.apache.cassandra.io.compress.SnappyCompressor'};
 I've noticed that CASSANDRA-4785 looks similar, in my case once the system 
 has the problem, it doesn't go away until I fix it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (CASSANDRA-5256) Memory was freed AssertionError During Major Compaction

2013-02-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reassigned CASSANDRA-5256:
-

Assignee: Jonathan Ellis

 Memory was freed AssertionError During Major Compaction
 -

 Key: CASSANDRA-5256
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5256
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
 Environment: Linux ashbdrytest01p 3.2.0-37-generic #58-Ubuntu SMP Thu 
 Jan 24 15:28:10 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
 java version 1.6.0_30
 Java(TM) SE Runtime Environment (build 1.6.0_30-b12)
 Java HotSpot(TM) 64-Bit Server VM (build 20.5-b03, mixed mode)
 Ubuntu 12.04.2 LTS
Reporter: C. Scott Andreas
Assignee: Jonathan Ellis
Priority: Critical

 When initiating a major compaction with `./nodetool -h localhost compact`, an 
 AssertionError is thrown in the CompactionExecutor from o.a.c.io.util.Memory:
 ERROR [CompactionExecutor:41495] 2013-02-14 14:38:35,720 CassandraDaemon.java 
 (line 133) Exception in thread Thread[CompactionExecutor:41495,1,RMI Runtime]
 java.lang.AssertionError: Memory was freed
   at org.apache.cassandra.io.util.Memory.checkPosition(Memory.java:146)
   at org.apache.cassandra.io.util.Memory.getLong(Memory.java:116)
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:176)
   at 
 org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:88)
   at 
 org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:327)
   at java.io.RandomAccessFile.readInt(RandomAccessFile.java:755)
   at java.io.RandomAccessFile.readLong(RandomAccessFile.java:792)
   at 
 org.apache.cassandra.utils.BytesReadTracker.readLong(BytesReadTracker.java:114)
   at 
 org.apache.cassandra.db.ColumnSerializer.deserializeColumnBody(ColumnSerializer.java:101)
   at 
 org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:92)
   at 
 org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumnsFromSSTable(ColumnFamilySerializer.java:149)
   at 
 org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:235)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:109)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.init(PrecompactedRow.java:93)
   at 
 org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:162)
   at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:76)
   at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:57)
   at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:114)
   at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:97)
   at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
   at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:158)
   at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:71)
   at 
 org.apache.cassandra.db.compaction.CompactionManager$6.runMayThrow(CompactionManager.java:342)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 ---
 I've invoked the `nodetool compact` three times; this occurred after each. 
 The node has been up for a couple days accepting writes and has not been 
 restarted.
 Here's the server's log since it was started a few days ago: 
 https://gist.github.com/cscotta/4956472/raw/95e7cbc68de1aefaeca11812cbb98d5d46f534e8/cassandra.log
 Here's the code being used to issue writes to the datastore: 
 https://gist.github.com/cscotta/20cbd36c2503c71d06e9
 ---
 Configuration: One node, one keyspace, one column family. ~60 writes/second 
 of data with a 

[jira] [Created] (CASSANDRA-5259) getNextBackgroundTask gives up too easily

2013-02-14 Thread Jonathan Ellis (JIRA)
Jonathan Ellis created CASSANDRA-5259:
-

 Summary: getNextBackgroundTask gives up too easily
 Key: CASSANDRA-5259
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5259
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 1.2.2


getNextBackgroundTask only attempts once to markSSTablesForCompaction.  So if 
many task invocations are scanning compaction candidates concurrently (e.g. 
because submitBackground fires off a bunch of gNBT at once), it's highly likely 
that all will compute the same victim list and only one will return a 
CompactionTask, resulting in less compaction concurrency than desired.
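
To make the race concrete, here is a minimal sketch with invented names (it is not the
actual CompactionManager or strategy code): each caller computes the same victim list and
makes exactly one attempt to claim it, so when several background tasks are submitted at
once only the first claim returns a task and the rest return nothing instead of looking
for different candidates.

// Illustration only: invented names, not CompactionManager code.
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public final class OneShotBackgroundTaskSketch
{
    // Identifiers of sstables currently claimed by some compaction.
    private static final Set<String> COMPACTING =
            Collections.synchronizedSet(new HashSet<String>());

    // One-shot attempt: claim the victims or return null; never retry with other victims.
    static List<String> getNextBackgroundTask(List<String> victims)
    {
        synchronized (COMPACTING)
        {
            if (!Collections.disjoint(COMPACTING, victims))
                return null;                  // someone else already owns (part of) this list
            COMPACTING.addAll(victims);
            return victims;
        }
    }

    public static void main(String[] args)
    {
        // Every caller computes the same "best" bucket of sstables to compact.
        List<String> victims = Arrays.asList("sstable-1", "sstable-2", "sstable-3");
        for (int caller = 1; caller <= 3; caller++)
        {
            List<String> task = getNextBackgroundTask(victims);
            // Caller 1 gets the victim list; callers 2 and 3 get null, so only
            // one compaction runs even though three tasks were submitted.
            System.out.println("caller " + caller + " got task: " + task);
        }
    }
}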

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5256) Memory was freed AssertionError During Major Compaction

2013-02-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5256:
--

Attachment: 5256.txt

compactionLock is ugly and confusing but I think the logic is correct as far as 
it goes.  I think the only problem is that STCS.getMaximalTask doesn't mark its 
victims as compacting, so a minor compaction is free to attempt them as well.

There is a similar problem with STCS.getUserDefinedCompaction.  Both addressed 
in attached patch.

I've also created CASSANDRA-5259 to make getNextBackgroundTask try harder 
before deciding there is no work to do.

Finally, I have a patch to remove compactionLock entirely that I'll attach to 
CASSANDRA-3430.  I think the prudent course is to do that for trunk only.
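
A minimal sketch of the direction described above, with invented names rather than the
attached patch: the maximal and user-defined paths claim their victims through the same
compacting set that minor compactions consult, so a minor compaction can no longer pick
up the same sstables.

// Sketch only; not the attached 5256 patch.
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

final class ClaimBeforeTaskSketch
{
    // Shared record of sstables already claimed by some compaction.
    private final Set<String> compacting = Collections.synchronizedSet(new HashSet<String>());

    // Returns the claimed sstables, or null if another compaction already owns some of them.
    Set<String> getMaximalTask(Set<String> allSSTables)
    {
        synchronized (compacting)
        {
            if (!Collections.disjoint(compacting, allSSTables))
                return null;                  // a concurrent compaction got there first
            compacting.addAll(allSSTables);   // claim before anyone builds the task
        }
        return allSSTables;
    }

    // Same claim step for a user-defined compaction over explicitly named sstables.
    Set<String> getUserDefinedTask(Set<String> requested)
    {
        return getMaximalTask(requested);
    }

    // Release the claim when the task completes or is abandoned.
    void finished(Set<String> claimed)
    {
        compacting.removeAll(claimed);
    }
}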

 Memory was freed AssertionError During Major Compaction
 -

 Key: CASSANDRA-5256
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5256
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
 Environment: Linux ashbdrytest01p 3.2.0-37-generic #58-Ubuntu SMP Thu 
 Jan 24 15:28:10 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
 java version 1.6.0_30
 Java(TM) SE Runtime Environment (build 1.6.0_30-b12)
 Java HotSpot(TM) 64-Bit Server VM (build 20.5-b03, mixed mode)
 Ubuntu 12.04.2 LTS
Reporter: C. Scott Andreas
Assignee: Jonathan Ellis
Priority: Critical

 When initiating a major compaction with `./nodetool -h localhost compact`, an 
 AssertionError is thrown in the CompactionExecutor from o.a.c.io.util.Memory:
 ERROR [CompactionExecutor:41495] 2013-02-14 14:38:35,720 CassandraDaemon.java 
 (line 133) Exception in thread Thread[CompactionExecutor:41495,1,RMI Runtime]
 java.lang.AssertionError: Memory was freed
   at org.apache.cassandra.io.util.Memory.checkPosition(Memory.java:146)
   at org.apache.cassandra.io.util.Memory.getLong(Memory.java:116)
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:176)
   at 
 org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:88)
   at 
 org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:327)
   at java.io.RandomAccessFile.readInt(RandomAccessFile.java:755)
   at java.io.RandomAccessFile.readLong(RandomAccessFile.java:792)
   at 
 org.apache.cassandra.utils.BytesReadTracker.readLong(BytesReadTracker.java:114)
   at 
 org.apache.cassandra.db.ColumnSerializer.deserializeColumnBody(ColumnSerializer.java:101)
   at 
 org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:92)
   at 
 org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumnsFromSSTable(ColumnFamilySerializer.java:149)
   at 
 org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:235)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:109)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.init(PrecompactedRow.java:93)
   at 
 org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:162)
   at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:76)
   at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:57)
   at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:114)
   at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:97)
   at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
   at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:158)
   at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:71)
   at 
 org.apache.cassandra.db.compaction.CompactionManager$6.runMayThrow(CompactionManager.java:342)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 

[jira] [Updated] (CASSANDRA-5256) Memory was freed AssertionError During Major Compaction

2013-02-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5256:
--

Attachment: (was: 5256.txt)

 Memory was freed AssertionError During Major Compaction
 -

 Key: CASSANDRA-5256
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5256
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
 Environment: Linux ashbdrytest01p 3.2.0-37-generic #58-Ubuntu SMP Thu 
 Jan 24 15:28:10 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
 java version 1.6.0_30
 Java(TM) SE Runtime Environment (build 1.6.0_30-b12)
 Java HotSpot(TM) 64-Bit Server VM (build 20.5-b03, mixed mode)
 Ubuntu 12.04.2 LTS
Reporter: C. Scott Andreas
Assignee: Jonathan Ellis
Priority: Critical
 Attachments: 5256.txt


 When initiating a major compaction with `./nodetool -h localhost compact`, an 
 AssertionError is thrown in the CompactionExecutor from o.a.c.io.util.Memory:
 ERROR [CompactionExecutor:41495] 2013-02-14 14:38:35,720 CassandraDaemon.java 
 (line 133) Exception in thread Thread[CompactionExecutor:41495,1,RMI Runtime]
 java.lang.AssertionError: Memory was freed
   at org.apache.cassandra.io.util.Memory.checkPosition(Memory.java:146)
   at org.apache.cassandra.io.util.Memory.getLong(Memory.java:116)
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:176)
   at 
 org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:88)
   at 
 org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:327)
   at java.io.RandomAccessFile.readInt(RandomAccessFile.java:755)
   at java.io.RandomAccessFile.readLong(RandomAccessFile.java:792)
   at 
 org.apache.cassandra.utils.BytesReadTracker.readLong(BytesReadTracker.java:114)
   at 
 org.apache.cassandra.db.ColumnSerializer.deserializeColumnBody(ColumnSerializer.java:101)
   at 
 org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:92)
   at 
 org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumnsFromSSTable(ColumnFamilySerializer.java:149)
   at 
 org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:235)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:109)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.init(PrecompactedRow.java:93)
   at 
 org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:162)
   at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:76)
   at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:57)
   at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:114)
   at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:97)
   at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
   at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:158)
   at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:71)
   at 
 org.apache.cassandra.db.compaction.CompactionManager$6.runMayThrow(CompactionManager.java:342)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 ---
 I've invoked the `nodetool compact` three times; this occurred after each. 
 The node has been up for a couple days accepting writes and has not been 
 restarted.
 Here's the server's log since it was started a few days ago: 
 https://gist.github.com/cscotta/4956472/raw/95e7cbc68de1aefaeca11812cbb98d5d46f534e8/cassandra.log
 Here's the code being used to issue writes to the datastore: 
 https://gist.github.com/cscotta/20cbd36c2503c71d06e9
 ---
 Configuration: One node, one keyspace, one column family. ~60 

[jira] [Updated] (CASSANDRA-5256) Memory was freed AssertionError During Major Compaction

2013-02-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5256:
--

Attachment: 5256.txt

 Memory was freed AssertionError During Major Compaction
 -

 Key: CASSANDRA-5256
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5256
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
 Environment: Linux ashbdrytest01p 3.2.0-37-generic #58-Ubuntu SMP Thu 
 Jan 24 15:28:10 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
 java version 1.6.0_30
 Java(TM) SE Runtime Environment (build 1.6.0_30-b12)
 Java HotSpot(TM) 64-Bit Server VM (build 20.5-b03, mixed mode)
 Ubuntu 12.04.2 LTS
Reporter: C. Scott Andreas
Assignee: Jonathan Ellis
Priority: Critical
  Labels: compaction
 Fix For: 1.2.2

 Attachments: 5256.txt


 When initiating a major compaction with `./nodetool -h localhost compact`, an 
 AssertionError is thrown in the CompactionExecutor from o.a.c.io.util.Memory:
 ERROR [CompactionExecutor:41495] 2013-02-14 14:38:35,720 CassandraDaemon.java 
 (line 133) Exception in thread Thread[CompactionExecutor:41495,1,RMI Runtime]
 java.lang.AssertionError: Memory was freed
   at org.apache.cassandra.io.util.Memory.checkPosition(Memory.java:146)
   at org.apache.cassandra.io.util.Memory.getLong(Memory.java:116)
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:176)
   at 
 org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:88)
   at 
 org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:327)
   at java.io.RandomAccessFile.readInt(RandomAccessFile.java:755)
   at java.io.RandomAccessFile.readLong(RandomAccessFile.java:792)
   at 
 org.apache.cassandra.utils.BytesReadTracker.readLong(BytesReadTracker.java:114)
   at 
 org.apache.cassandra.db.ColumnSerializer.deserializeColumnBody(ColumnSerializer.java:101)
   at 
 org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:92)
   at 
 org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumnsFromSSTable(ColumnFamilySerializer.java:149)
   at 
 org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:235)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:109)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.init(PrecompactedRow.java:93)
   at 
 org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:162)
   at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:76)
   at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:57)
   at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:114)
   at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:97)
   at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
   at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:158)
   at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:71)
   at 
 org.apache.cassandra.db.compaction.CompactionManager$6.runMayThrow(CompactionManager.java:342)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 ---
 I've invoked the `nodetool compact` three times; this occurred after each. 
 The node has been up for a couple days accepting writes and has not been 
 restarted.
 Here's the server's log since it was started a few days ago: 
 https://gist.github.com/cscotta/4956472/raw/95e7cbc68de1aefaeca11812cbb98d5d46f534e8/cassandra.log
 Here's the code being used to issue writes to the datastore: 
 https://gist.github.com/cscotta/20cbd36c2503c71d06e9
 ---
 

[jira] [Updated] (CASSANDRA-5256) Memory was freed AssertionError During Major Compaction

2013-02-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5256:
--

Attachment: (was: 5256.txt)

 Memory was freed AssertionError During Major Compaction
 -

 Key: CASSANDRA-5256
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5256
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
 Environment: Linux ashbdrytest01p 3.2.0-37-generic #58-Ubuntu SMP Thu 
 Jan 24 15:28:10 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
 java version 1.6.0_30
 Java(TM) SE Runtime Environment (build 1.6.0_30-b12)
 Java HotSpot(TM) 64-Bit Server VM (build 20.5-b03, mixed mode)
 Ubuntu 12.04.2 LTS
Reporter: C. Scott Andreas
Assignee: Jonathan Ellis
Priority: Critical
  Labels: compaction
 Fix For: 1.2.2


 When initiating a major compaction with `./nodetool -h localhost compact`, an 
 AssertionError is thrown in the CompactionExecutor from o.a.c.io.util.Memory:
 ERROR [CompactionExecutor:41495] 2013-02-14 14:38:35,720 CassandraDaemon.java 
 (line 133) Exception in thread Thread[CompactionExecutor:41495,1,RMI Runtime]
 java.lang.AssertionError: Memory was freed
   at org.apache.cassandra.io.util.Memory.checkPosition(Memory.java:146)
   at org.apache.cassandra.io.util.Memory.getLong(Memory.java:116)
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:176)
   at 
 org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:88)
   at 
 org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:327)
   at java.io.RandomAccessFile.readInt(RandomAccessFile.java:755)
   at java.io.RandomAccessFile.readLong(RandomAccessFile.java:792)
   at 
 org.apache.cassandra.utils.BytesReadTracker.readLong(BytesReadTracker.java:114)
   at 
 org.apache.cassandra.db.ColumnSerializer.deserializeColumnBody(ColumnSerializer.java:101)
   at 
 org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:92)
   at 
 org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumnsFromSSTable(ColumnFamilySerializer.java:149)
   at 
 org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:235)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:109)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.init(PrecompactedRow.java:93)
   at 
 org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:162)
   at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:76)
   at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:57)
   at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:114)
   at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:97)
   at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
   at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:158)
   at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:71)
   at 
 org.apache.cassandra.db.compaction.CompactionManager$6.runMayThrow(CompactionManager.java:342)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 ---
 I've invoked the `nodetool compact` three times; this occurred after each. 
 The node has been up for a couple days accepting writes and has not been 
 restarted.
 Here's the server's log since it was started a few days ago: 
 https://gist.github.com/cscotta/4956472/raw/95e7cbc68de1aefaeca11812cbb98d5d46f534e8/cassandra.log
 Here's the code being used to issue writes to the datastore: 
 https://gist.github.com/cscotta/20cbd36c2503c71d06e9
 ---
 Configuration: One node, one 

[jira] [Updated] (CASSANDRA-5256) Memory was freed AssertionError During Major Compaction

2013-02-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5256:
--

Attachment: 5256.txt

 Memory was freed AssertionError During Major Compaction
 -

 Key: CASSANDRA-5256
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5256
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
 Environment: Linux ashbdrytest01p 3.2.0-37-generic #58-Ubuntu SMP Thu 
 Jan 24 15:28:10 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
 java version 1.6.0_30
 Java(TM) SE Runtime Environment (build 1.6.0_30-b12)
 Java HotSpot(TM) 64-Bit Server VM (build 20.5-b03, mixed mode)
 Ubuntu 12.04.2 LTS
Reporter: C. Scott Andreas
Assignee: Jonathan Ellis
Priority: Critical
  Labels: compaction
 Fix For: 1.2.2

 Attachments: 5256.txt


 When initiating a major compaction with `./nodetool -h localhost compact`, an 
 AssertionError is thrown in the CompactionExecutor from o.a.c.io.util.Memory:
 ERROR [CompactionExecutor:41495] 2013-02-14 14:38:35,720 CassandraDaemon.java 
 (line 133) Exception in thread Thread[CompactionExecutor:41495,1,RMI Runtime]
 java.lang.AssertionError: Memory was freed
   at org.apache.cassandra.io.util.Memory.checkPosition(Memory.java:146)
   at org.apache.cassandra.io.util.Memory.getLong(Memory.java:116)
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:176)
   at 
 org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:88)
   at 
 org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:327)
   at java.io.RandomAccessFile.readInt(RandomAccessFile.java:755)
   at java.io.RandomAccessFile.readLong(RandomAccessFile.java:792)
   at 
 org.apache.cassandra.utils.BytesReadTracker.readLong(BytesReadTracker.java:114)
   at 
 org.apache.cassandra.db.ColumnSerializer.deserializeColumnBody(ColumnSerializer.java:101)
   at 
 org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:92)
   at 
 org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumnsFromSSTable(ColumnFamilySerializer.java:149)
   at 
 org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:235)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:109)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.init(PrecompactedRow.java:93)
   at 
 org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:162)
   at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:76)
   at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:57)
   at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:114)
   at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:97)
   at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
   at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:158)
   at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:71)
   at 
 org.apache.cassandra.db.compaction.CompactionManager$6.runMayThrow(CompactionManager.java:342)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 ---
 I've invoked the `nodetool compact` three times; this occurred after each. 
 The node has been up for a couple days accepting writes and has not been 
 restarted.
 Here's the server's log since it was started a few days ago: 
 https://gist.github.com/cscotta/4956472/raw/95e7cbc68de1aefaeca11812cbb98d5d46f534e8/cassandra.log
 Here's the code being used to issue writes to the datastore: 
 https://gist.github.com/cscotta/20cbd36c2503c71d06e9
 ---
 

[3/3] git commit: Merge branch 'cassandra-1.2' into trunk

2013-02-14 Thread jbellis
Updated Branches:
  refs/heads/cassandra-1.2 0b83682b4 -> f2a57f04c
  refs/heads/trunk 36f632c9e -> 7049ab44d


Merge branch 'cassandra-1.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7049ab44
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7049ab44
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7049ab44

Branch: refs/heads/trunk
Commit: 7049ab44df7ac89229c16973ddedb772ee23db16
Parents: 36f632c f2a57f0
Author: Jonathan Ellis jbel...@apache.org
Authored: Thu Feb 14 19:06:24 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Thu Feb 14 19:06:24 2013 -0600

--
 .../compaction/SizeTieredCompactionStrategy.java   |1 +
 1 files changed, 1 insertions(+), 0 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7049ab44/src/java/org/apache/cassandra/db/compaction/SizeTieredCompactionStrategy.java
--



[2/3] git commit: comment

2013-02-14 Thread jbellis
comment


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f2a57f04
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f2a57f04
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f2a57f04

Branch: refs/heads/trunk
Commit: f2a57f04c4ea11705d848e08f230b8201d475fc3
Parents: 0b83682
Author: Jonathan Ellis jbel...@apache.org
Authored: Thu Feb 14 19:06:00 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Thu Feb 14 19:06:14 2013 -0600

--
 .../compaction/SizeTieredCompactionStrategy.java   |1 +
 1 files changed, 1 insertions(+), 0 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f2a57f04/src/java/org/apache/cassandra/db/compaction/SizeTieredCompactionStrategy.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/SizeTieredCompactionStrategy.java 
b/src/java/org/apache/cassandra/db/compaction/SizeTieredCompactionStrategy.java
index 5e01733..fab087e 100644
--- 
a/src/java/org/apache/cassandra/db/compaction/SizeTieredCompactionStrategy.java
+++ 
b/src/java/org/apache/cassandra/db/compaction/SizeTieredCompactionStrategy.java
@@ -58,6 +58,7 @@ public class SizeTieredCompactionStrategy extends 
AbstractCompactionStrategy
 cfs.setCompactionThresholds(cfs.metadata.getMinCompactionThreshold(), 
cfs.metadata.getMaxCompactionThreshold());
 }
 
+// synchronized so that multiple callers as in 
CompactionManager.submitBackground will compute different candidates
 public synchronized AbstractCompactionTask getNextBackgroundTask(final int 
gcBefore)
 {
 // make local copies so they can't be changed out from under us 
mid-method



[1/3] git commit: comment

2013-02-14 Thread jbellis
comment


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f2a57f04
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f2a57f04
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f2a57f04

Branch: refs/heads/cassandra-1.2
Commit: f2a57f04c4ea11705d848e08f230b8201d475fc3
Parents: 0b83682
Author: Jonathan Ellis jbel...@apache.org
Authored: Thu Feb 14 19:06:00 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Thu Feb 14 19:06:14 2013 -0600

--
 .../compaction/SizeTieredCompactionStrategy.java   |1 +
 1 files changed, 1 insertions(+), 0 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f2a57f04/src/java/org/apache/cassandra/db/compaction/SizeTieredCompactionStrategy.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/SizeTieredCompactionStrategy.java 
b/src/java/org/apache/cassandra/db/compaction/SizeTieredCompactionStrategy.java
index 5e01733..fab087e 100644
--- 
a/src/java/org/apache/cassandra/db/compaction/SizeTieredCompactionStrategy.java
+++ 
b/src/java/org/apache/cassandra/db/compaction/SizeTieredCompactionStrategy.java
@@ -58,6 +58,7 @@ public class SizeTieredCompactionStrategy extends 
AbstractCompactionStrategy
 cfs.setCompactionThresholds(cfs.metadata.getMinCompactionThreshold(), 
cfs.metadata.getMaxCompactionThreshold());
 }
 
+// synchronized so that multiple callers as in 
CompactionManager.submitBackground will compute different candidates
 public synchronized AbstractCompactionTask getNextBackgroundTask(final int 
gcBefore)
 {
 // make local copies so they can't be changed out from under us 
mid-method



[jira] [Resolved] (CASSANDRA-5259) getNextBackgroundTask gives up too easily

2013-02-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-5259.
---

Resolution: Not A Problem

Not actually a problem -- gNBT is synchronized to prevent this.  Not an elegant 
solution but adequate.  Added a comment.

 getNextBackgroundTask gives up too easily
 -

 Key: CASSANDRA-5259
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5259
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 1.2.2


 getNextBackgroundTask only attempts once to markSSTablesForCompaction.  So if 
 many task invocations are scanning compaction candidates concurrently (e.g. 
 because submitBackground fires off a bunch of gNBT at once), it's highly 
 likely that all will compute the same victim list and only one will return a 
 CompactionTask, resulting in less compaction concurrency than desired.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5260) Stuck in GC loop on startup

2013-02-14 Thread kanwar sangha (JIRA)
kanwar sangha created CASSANDRA-5260:


 Summary: Stuck in GC loop on startup
 Key: CASSANDRA-5260
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5260
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
 Environment: Linux, Cent OS, 16 Core, 132 GB RAM.
Reporter: kanwar sangha
Priority: Critical


Populated 2 TB of data across 2 nodes with replication enabled.

The system does not come up and is stuck in GC cycles. Key cache disabled. Row 
cache disabled.

No compaction tasks pending.


INFO [MemoryMeter:1] 2013-02-14 21:41:31,057 Memtable.java (line 207) 
CFS(Keyspace='system', ColumnFamily='hints') liveRatio is 4.620251329611602 
(just-counted was 4.620251329611602).  calculation took 50ms for 16258 columns
 INFO [ScheduledTasks:1] 2013-02-14 21:41:31,629 GCInspector.java (line 119) GC 
for ParNew: 201 ms for 1 collections, 8962506680 used; max is 14465368064
 INFO [ScheduledTasks:1] 2013-02-14 21:41:33,630 GCInspector.java (line 119) GC 
for ParNew: 471 ms for 2 collections, 7728324072 used; max is 14465368064
 INFO [ScheduledTasks:1] 2013-02-14 21:41:35,631 GCInspector.java (line 119) GC 
for ParNew: 384 ms for 1 collections, 6473280848 used; max is 14465368064
 INFO [ScheduledTasks:1] 2013-02-14 21:41:37,632 GCInspector.java (line 119) GC 
for ParNew: 453 ms for 1 collections, 5185352184 used; max is 14465368064
 INFO [ScheduledTasks:1] 2013-02-14 21:41:38,920 GCInspector.java (line 119) GC 
for ParNew: 318 ms for 1 collections, 4007593160 used; max is 14465368064
 INFO [ScheduledTasks:1] 2013-02-14 21:41:42,998 GCInspector.java (line 119) GC 
for ParNew: 501 ms for 1 collections, 4518308744 used; max is 14465368064
 INFO [ScheduledTasks:1] 2013-02-14 21:41:45,000 GCInspector.java (line 119) GC 
for ParNew: 378 ms for 1 collections, 4963476368 used; max is 14465368064
 INFO [ScheduledTasks:1] 2013-02-14 21:41:47,155 GCInspector.java (line 119) GC 
for ParNew: 417 ms for 2 collections, 5210050616 used; max is 14465368064

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5260) Stuck in GC loop on startup

2013-02-14 Thread kanwar sangha (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kanwar sangha updated CASSANDRA-5260:
-

Attachment: system.rar

system.log attached

 Stuck in GC loop on startup
 ---

 Key: CASSANDRA-5260
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5260
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
 Environment: Linux, Cent OS, 16 Core, 132 GB RAM.
Reporter: kanwar sangha
Priority: Critical
 Attachments: system.rar


 Populated 2 TB of data across 2 nodes with replication enabled.
 The system does not come up and is stuck in GC cycles. Key cache disabled. Row 
 cache disabled.
 No compaction tasks pending.
 INFO [MemoryMeter:1] 2013-02-14 21:41:31,057 Memtable.java (line 207) 
 CFS(Keyspace='system', ColumnFamily='hints') liveRatio is 4.620251329611602 
 (just-counted was 4.620251329611602).  calculation took 50ms for 16258 columns
  INFO [ScheduledTasks:1] 2013-02-14 21:41:31,629 GCInspector.java (line 119) 
 GC for ParNew: 201 ms for 1 collections, 8962506680 used; max is 14465368064
  INFO [ScheduledTasks:1] 2013-02-14 21:41:33,630 GCInspector.java (line 119) 
 GC for ParNew: 471 ms for 2 collections, 7728324072 used; max is 14465368064
  INFO [ScheduledTasks:1] 2013-02-14 21:41:35,631 GCInspector.java (line 119) 
 GC for ParNew: 384 ms for 1 collections, 6473280848 used; max is 14465368064
  INFO [ScheduledTasks:1] 2013-02-14 21:41:37,632 GCInspector.java (line 119) 
 GC for ParNew: 453 ms for 1 collections, 5185352184 used; max is 14465368064
  INFO [ScheduledTasks:1] 2013-02-14 21:41:38,920 GCInspector.java (line 119) 
 GC for ParNew: 318 ms for 1 collections, 4007593160 used; max is 14465368064
  INFO [ScheduledTasks:1] 2013-02-14 21:41:42,998 GCInspector.java (line 119) 
 GC for ParNew: 501 ms for 1 collections, 4518308744 used; max is 14465368064
  INFO [ScheduledTasks:1] 2013-02-14 21:41:45,000 GCInspector.java (line 119) 
 GC for ParNew: 378 ms for 1 collections, 4963476368 used; max is 14465368064
  INFO [ScheduledTasks:1] 2013-02-14 21:41:47,155 GCInspector.java (line 119) 
 GC for ParNew: 417 ms for 2 collections, 5210050616 used; max is 14465368064

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (CASSANDRA-5260) Stuck in GC loop on startup

2013-02-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-5260.
---

Resolution: Invalid

Raise ops questions on the mailing list, please.

P.S. You're clearly not running out of memory, so I don't see anything to panic 
about here.

 Stuck in GC loop on startup
 ---

 Key: CASSANDRA-5260
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5260
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
 Environment: Linux, Cent OS, 16 Core, 132 GB RAM.
Reporter: kanwar sangha
Priority: Critical
 Attachments: system.rar


 Populated 2 TB of data across 2 nodes with replication enabled.
 The system does not come up and is stuck in GC cycles. Key cache disabled. Row 
 cache disabled.
 No compaction tasks pending.
 INFO [MemoryMeter:1] 2013-02-14 21:41:31,057 Memtable.java (line 207) 
 CFS(Keyspace='system', ColumnFamily='hints') liveRatio is 4.620251329611602 
 (just-counted was 4.620251329611602).  calculation took 50ms for 16258 columns
  INFO [ScheduledTasks:1] 2013-02-14 21:41:31,629 GCInspector.java (line 119) 
 GC for ParNew: 201 ms for 1 collections, 8962506680 used; max is 14465368064
  INFO [ScheduledTasks:1] 2013-02-14 21:41:33,630 GCInspector.java (line 119) 
 GC for ParNew: 471 ms for 2 collections, 7728324072 used; max is 14465368064
  INFO [ScheduledTasks:1] 2013-02-14 21:41:35,631 GCInspector.java (line 119) 
 GC for ParNew: 384 ms for 1 collections, 6473280848 used; max is 14465368064
  INFO [ScheduledTasks:1] 2013-02-14 21:41:37,632 GCInspector.java (line 119) 
 GC for ParNew: 453 ms for 1 collections, 5185352184 used; max is 14465368064
  INFO [ScheduledTasks:1] 2013-02-14 21:41:38,920 GCInspector.java (line 119) 
 GC for ParNew: 318 ms for 1 collections, 4007593160 used; max is 14465368064
  INFO [ScheduledTasks:1] 2013-02-14 21:41:42,998 GCInspector.java (line 119) 
 GC for ParNew: 501 ms for 1 collections, 4518308744 used; max is 14465368064
  INFO [ScheduledTasks:1] 2013-02-14 21:41:45,000 GCInspector.java (line 119) 
 GC for ParNew: 378 ms for 1 collections, 4963476368 used; max is 14465368064
  INFO [ScheduledTasks:1] 2013-02-14 21:41:47,155 GCInspector.java (line 119) 
 GC for ParNew: 417 ms for 2 collections, 5210050616 used; max is 14465368064

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5256) Memory was freed AssertionError During Major Compaction

2013-02-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5256:
--

Attachment: 5266-v2.txt

v2 does a little more cleanup: removed unnecessary reference-keeping in 
submitUserDefined, and standardized concurrency control on markCompacting instead 
of a mix of that and synchronized.
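
As a rough illustration of what standardizing on markCompacting could look like (invented
names, not the actual v2 patch): a single compare-and-set claim over an immutable set of
in-progress sstables replaces the mix of a marking step plus synchronized methods, and a
caller that loses the race simply recomputes against the new set and retries without ever
holding a lock while candidates are scanned.

// Hedged sketch only; names are invented and this is not the real DataTracker /
// CompactionManager code.
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.atomic.AtomicReference;

final class MarkCompactingSketch
{
    private final AtomicReference<Set<String>> compacting =
            new AtomicReference<Set<String>>(Collections.<String>emptySet());

    // Atomically claim the candidates; false means another task already owns one of them.
    boolean markCompacting(Set<String> candidates)
    {
        while (true)
        {
            Set<String> current = compacting.get();
            for (String s : candidates)
                if (current.contains(s))
                    return false;            // overlap: caller must pick other sstables

            Set<String> updated = new HashSet<String>(current);
            updated.addAll(candidates);
            if (compacting.compareAndSet(current, Collections.unmodifiableSet(updated)))
                return true;                 // claim succeeded atomically
            // else: another thread changed the set first; recompute and retry
        }
    }

    // Release a claim once the compaction finishes or is abandoned.
    void unmarkCompacting(Set<String> candidates)
    {
        while (true)
        {
            Set<String> current = compacting.get();
            Set<String> updated = new HashSet<String>(current);
            updated.removeAll(candidates);
            if (compacting.compareAndSet(current, Collections.unmodifiableSet(updated)))
                return;
        }
    }
}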

 Memory was freed AssertionError During Major Compaction
 -

 Key: CASSANDRA-5256
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5256
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
 Environment: Linux ashbdrytest01p 3.2.0-37-generic #58-Ubuntu SMP Thu 
 Jan 24 15:28:10 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
 java version 1.6.0_30
 Java(TM) SE Runtime Environment (build 1.6.0_30-b12)
 Java HotSpot(TM) 64-Bit Server VM (build 20.5-b03, mixed mode)
 Ubuntu 12.04.2 LTS
Reporter: C. Scott Andreas
Assignee: Jonathan Ellis
Priority: Critical
  Labels: compaction
 Fix For: 1.2.2

 Attachments: 5256.txt, 5266-v2.txt


 When initiating a major compaction with `./nodetool -h localhost compact`, an 
 AssertionError is thrown in the CompactionExecutor from o.a.c.io.util.Memory:
 ERROR [CompactionExecutor:41495] 2013-02-14 14:38:35,720 CassandraDaemon.java 
 (line 133) Exception in thread Thread[CompactionExecutor:41495,1,RMI Runtime]
 java.lang.AssertionError: Memory was freed
   at org.apache.cassandra.io.util.Memory.checkPosition(Memory.java:146)
   at org.apache.cassandra.io.util.Memory.getLong(Memory.java:116)
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:176)
   at 
 org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:88)
   at 
 org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:327)
   at java.io.RandomAccessFile.readInt(RandomAccessFile.java:755)
   at java.io.RandomAccessFile.readLong(RandomAccessFile.java:792)
   at 
 org.apache.cassandra.utils.BytesReadTracker.readLong(BytesReadTracker.java:114)
   at 
 org.apache.cassandra.db.ColumnSerializer.deserializeColumnBody(ColumnSerializer.java:101)
   at 
 org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:92)
   at 
 org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumnsFromSSTable(ColumnFamilySerializer.java:149)
   at 
 org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:235)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:109)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.init(PrecompactedRow.java:93)
   at 
 org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:162)
   at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:76)
   at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:57)
   at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:114)
   at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:97)
   at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
   at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:158)
   at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:71)
   at 
 org.apache.cassandra.db.compaction.CompactionManager$6.runMayThrow(CompactionManager.java:342)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 ---
 I've invoked the `nodetool compact` three times; this occurred after each. 
 The node has been up for a couple days accepting writes and has not been 
 restarted.
 Here's the server's log since it was started a few days ago: 
 

[jira] [Updated] (CASSANDRA-5256) Memory was freed AssertionError During Major Compaction

2013-02-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5256:
--

Attachment: (was: 5266-v2.txt)

 Memory was freed AssertionError During Major Compaction
 -

 Key: CASSANDRA-5256
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5256
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
 Environment: Linux ashbdrytest01p 3.2.0-37-generic #58-Ubuntu SMP Thu 
 Jan 24 15:28:10 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
 java version 1.6.0_30
 Java(TM) SE Runtime Environment (build 1.6.0_30-b12)
 Java HotSpot(TM) 64-Bit Server VM (build 20.5-b03, mixed mode)
 Ubuntu 12.04.2 LTS
Reporter: C. Scott Andreas
Assignee: Jonathan Ellis
Priority: Critical
  Labels: compaction
 Fix For: 1.2.2

 Attachments: 5256.txt, 5256-v2.txt


 When initiating a major compaction with `./nodetool -h localhost compact`, an 
 AssertionError is thrown in the CompactionExecutor from o.a.c.io.util.Memory:
 ERROR [CompactionExecutor:41495] 2013-02-14 14:38:35,720 CassandraDaemon.java 
 (line 133) Exception in thread Thread[CompactionExecutor:41495,1,RMI Runtime]
 java.lang.AssertionError: Memory was freed
   at org.apache.cassandra.io.util.Memory.checkPosition(Memory.java:146)
   at org.apache.cassandra.io.util.Memory.getLong(Memory.java:116)
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:176)
   at 
 org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:88)
   at 
 org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:327)
   at java.io.RandomAccessFile.readInt(RandomAccessFile.java:755)
   at java.io.RandomAccessFile.readLong(RandomAccessFile.java:792)
   at 
 org.apache.cassandra.utils.BytesReadTracker.readLong(BytesReadTracker.java:114)
   at 
 org.apache.cassandra.db.ColumnSerializer.deserializeColumnBody(ColumnSerializer.java:101)
   at 
 org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:92)
   at 
 org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumnsFromSSTable(ColumnFamilySerializer.java:149)
   at 
 org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:235)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:109)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.init(PrecompactedRow.java:93)
   at 
 org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:162)
   at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:76)
   at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:57)
   at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:114)
   at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:97)
   at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
   at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:158)
   at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:71)
   at 
 org.apache.cassandra.db.compaction.CompactionManager$6.runMayThrow(CompactionManager.java:342)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 ---
 I've invoked the `nodetool compact` three times; this occurred after each. 
 The node has been up for a couple days accepting writes and has not been 
 restarted.
 Here's the server's log since it was started a few days ago: 
 https://gist.github.com/cscotta/4956472/raw/95e7cbc68de1aefaeca11812cbb98d5d46f534e8/cassandra.log
 Here's the code being used to issue writes to the datastore: 
 

[jira] [Updated] (CASSANDRA-5256) Memory was freed AssertionError During Major Compaction

2013-02-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5256:
--

Attachment: 5256-v2.txt

 Memory was freed AssertionError During Major Compaction
 -

 Key: CASSANDRA-5256
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5256
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.0
 Environment: Linux ashbdrytest01p 3.2.0-37-generic #58-Ubuntu SMP Thu 
 Jan 24 15:28:10 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
 java version 1.6.0_30
 Java(TM) SE Runtime Environment (build 1.6.0_30-b12)
 Java HotSpot(TM) 64-Bit Server VM (build 20.5-b03, mixed mode)
 Ubuntu 12.04.2 LTS
Reporter: C. Scott Andreas
Assignee: Jonathan Ellis
Priority: Critical
  Labels: compaction
 Fix For: 1.2.2

 Attachments: 5256.txt, 5256-v2.txt


 When initiating a major compaction with `./nodetool -h localhost compact`, an 
 AssertionError is thrown in the CompactionExecutor from o.a.c.io.util.Memory:
 ERROR [CompactionExecutor:41495] 2013-02-14 14:38:35,720 CassandraDaemon.java 
 (line 133) Exception in thread Thread[CompactionExecutor:41495,1,RMI Runtime]
 java.lang.AssertionError: Memory was freed
   at org.apache.cassandra.io.util.Memory.checkPosition(Memory.java:146)
   at org.apache.cassandra.io.util.Memory.getLong(Memory.java:116)
   at 
 org.apache.cassandra.io.compress.CompressionMetadata.chunkFor(CompressionMetadata.java:176)
   at 
 org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:88)
   at 
 org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:327)
   at java.io.RandomAccessFile.readInt(RandomAccessFile.java:755)
   at java.io.RandomAccessFile.readLong(RandomAccessFile.java:792)
   at 
 org.apache.cassandra.utils.BytesReadTracker.readLong(BytesReadTracker.java:114)
   at 
 org.apache.cassandra.db.ColumnSerializer.deserializeColumnBody(ColumnSerializer.java:101)
   at 
 org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:92)
   at 
 org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumnsFromSSTable(ColumnFamilySerializer.java:149)
   at 
 org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:235)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:109)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.init(PrecompactedRow.java:93)
   at 
 org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:162)
   at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:76)
   at 
 org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:57)
   at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:114)
   at 
 org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:97)
   at 
 com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
   at 
 com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:158)
   at 
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:71)
   at 
 org.apache.cassandra.db.compaction.CompactionManager$6.runMayThrow(CompactionManager.java:342)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 ---
 I've invoked the `nodetool compact` three times; this occurred after each. 
 The node has been up for a couple days accepting writes and has not been 
 restarted.
 Here's the server's log since it was started a few days ago: 
 https://gist.github.com/cscotta/4956472/raw/95e7cbc68de1aefaeca11812cbb98d5d46f534e8/cassandra.log
 Here's the code being used to issue writes to the datastore: 
 https://gist.github.com/cscotta/20cbd36c2503c71d06e9
 

[jira] [Resolved] (CASSANDRA-4951) Leveled compaction manifest sometimes references nonexistent sstables in a snapshot

2013-02-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4951.
---

Resolution: Duplicate

Addressed by CASSANDRA-4872

 Leveled compaction manifest sometimes references nonexistent sstables in a 
 snapshot
 ---

 Key: CASSANDRA-4951
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4951
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.4
Reporter: Oleg Kibirev
Assignee: Yuki Morishita

 After nodetool snapshot on a node under load, we sometimes see sstables not 
 referenced in the leveled compaction json manifest, or sstables in the 
 manifest which are not found on disk. There are two concerns with this:
 1. What would happened to leveled compaction and to reads if the snapshot is 
 restored with missing or extra sstables?
 2. Is this a sign of a snapshot not having a complete copy of the data?
 To support automated restore, manifest and/or a list of links should be made 
 correct at snapshot time.  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5116) upgradesstables does not upgrade all sstables on a node

2013-02-14 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13578951#comment-13578951
 ] 

Jonathan Ellis commented on CASSANDRA-5116:
---

Confused about #1 -- sounds like you're saying that we don't mark upgrade 
targets as compacting?  That's done by performAllSSTableOperation.  Probably 
I'm misunderstanding...

 upgradesstables does not upgrade all sstables on a node
 ---

 Key: CASSANDRA-5116
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5116
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.0
 Environment: Ubuntu 12.04
Reporter: Michael Kjellman
Assignee: Yuki Morishita

 upgradesstables appears to be skipping sstables randomly.
 Finding an sstable with an mtime older than the upgrade time and grepping through the 
 logs for a corresponding compaction log line, I find nothing. I have 
 reproduced this on all of my nodes across the cluster.
 Is performAllSSTableOperation somehow skipping sstables?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5104) new nodes should not attempt to bootstrap and stream until entire cluster is on the same major version

2013-02-14 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13578952#comment-13578952
 ] 

Jonathan Ellis commented on CASSANDRA-5104:
---

This sounds messy...  There have been major versions that are 
streaming-compatible, and versions that didn't need upgradesstables.  Those are 
rules of thumb, not iron laws.

 new nodes should not attempt to bootstrap and stream until entire cluster is 
 on the same major version
 --

 Key: CASSANDRA-5104
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5104
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Michael Kjellman
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 2.0


 current behavior for bootstrapping nodes is for an exception to be thrown 
 while a node attempts to stream from another node that has not had 
 upgradesstables run yet.
 A node should not attempt to bootstrap into the cluster until the entire 
 cluster is on the same major version and upgradesstables has already been run 
 on every node in the ring.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (CASSANDRA-4338) Experiment with direct buffer in SequentialWriter

2013-02-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reassigned CASSANDRA-4338:
-

Assignee: Aleksey Yeschenko  (was: Yuki Morishita)

Let's see if another set of eyes here would be useful.

 Experiment with direct buffer in SequentialWriter
 -

 Key: CASSANDRA-4338
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4338
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jonathan Ellis
Assignee: Aleksey Yeschenko
Priority: Minor
 Attachments: 4338-gc.tar.gz, gc-4338-patched.png, gc-trunk.png


 Using a direct buffer instead of a heap-based byte[] should let us avoid a 
 copy into native memory when we flush the buffer.
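
To illustrate the copy being referred to (a generic NIO sketch, unrelated to SequentialWriter's actual internals): when a heap-backed buffer is written to a FileChannel, the JVM typically stages it through a temporary native buffer first, whereas a direct buffer can be handed to the OS as-is.

{noformat}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

// Generic NIO sketch, not SequentialWriter: contrasts a heap buffer (extra native
// copy on write in typical JVM implementations) with a direct buffer (written as-is).
public class DirectVsHeapWrite
{
    public static void main(String[] args) throws IOException
    {
        FileChannel channel = FileChannel.open(Paths.get("out.bin"),
                StandardOpenOption.CREATE, StandardOpenOption.WRITE);
        try
        {
            ByteBuffer heap = ByteBuffer.wrap(new byte[64 * 1024]);
            ByteBuffer direct = ByteBuffer.allocateDirect(64 * 1024);

            channel.write(heap);   // copied into a temporary native buffer first
            channel.write(direct); // handed to the OS without that copy
        }
        finally
        {
            channel.close();
        }
    }
}
{noformat}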

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (CASSANDRA-3620) Proposal for distributed deletes - fully automatic Reaper Model rather than GCSeconds and manual repairs

2013-02-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reassigned CASSANDRA-3620:
-

Assignee: Aleksey Yeschenko

 Proposal for distributed deletes - fully automatic Reaper Model rather than 
 GCSeconds and manual repairs
 --

 Key: CASSANDRA-3620
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3620
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Dominic Williams
Assignee: Aleksey Yeschenko
  Labels: GCSeconds, deletes, distributed_deletes, merkle_trees, repair
 Fix For: 2.0

   Original Estimate: 504h
  Remaining Estimate: 504h

 Proposal for an improved system for handling distributed deletes, which 
 removes the requirement to regularly run repair processes to maintain 
 performance and data integrity. 
 h2. The Problem
 There are various issues with repair:
 * Repair is expensive to run
 * Repair jobs are often made more expensive than they should be by other 
 issues (nodes dropping requests, hinted handoff not working, downtime etc)
 * Repair processes can often fail and need restarting, for example in cloud 
 environments where network issues make a node disappear from the ring for a 
 brief moment
 * When you fail to run repair within GCSeconds, either by error or because of 
 issues with Cassandra, data written to a node that did not see a later delete 
 can reappear (and a node might miss a delete for several reasons including 
 being down or simply dropping requests during load shedding)
 * If you cannot run repair and have to increase GCSeconds to prevent deleted 
 data reappearing, in some cases the growing tombstone overhead can 
 significantly degrade performance
 Because of the foregoing, in high throughput environments it can be very 
 difficult to make repair a cron job. It can be preferable to keep a terminal 
 open and run repair jobs one by one, making sure they succeed and keeping an 
 eye on overall load to reduce system impact. This isn't desirable, and 
 problems are exacerbated when there are lots of column families in a database 
 or it is necessary to run a column family with a low GCSeconds to reduce 
 tombstone load (because there are many write/deletes to that column family). 
 The database owner must run repair within the GCSeconds window, or increase 
 GCSeconds, to avoid potentially losing delete operations. 
 It would be much better if there was no ongoing requirement to run repair to 
 ensure deletes aren't lost, and no GCSeconds window. Ideally repair would be 
 an optional maintenance utility used in special cases, or to ensure ONE reads 
 get consistent data. 
 h2. Reaper Model Proposal
 # Tombstones do not expire, and there is no GCSeconds
 # Tombstones have associated ACK lists, which record the replicas that have 
 acknowledged them
 # Tombstones are deleted (or marked for compaction) when they have been 
 acknowledged by all replicas
 # When a tombstone is deleted, it is added to a relic index. The relic 
 index makes it possible for a reaper to acknowledge a tombstone after it is 
 deleted
 # The ACK lists and relic index are held in memory for speed
 # Background reaper threads constantly stream ACK requests to other nodes, 
 and stream ACK responses back to requests they have received (throttling 
 their usage of CPU and bandwidth so as not to affect performance)
 # If a reaper receives a request to ACK a tombstone that does not exist, it 
 creates the tombstone and adds an ACK for the requestor, and replies with an 
 ACK. This is the worst that can happen, and does not cause data corruption. 
 ADDENDUM
 The proposal to hold the ACK and relic lists in memory was added after the 
 first posting. Please see comments for full reasons. Furthermore, a proposal 
 for enhancements to repair was posted to comments, which would cause 
 tombstones to be scavenged when repair completes (the author had assumed this 
 was the case anyway, but it seems at time of writing they are only scavenged 
 during compaction on GCSeconds timeout). The proposals are not exclusive and 
 this proposal is extended to include the possible enhancements to repair 
 described.
 NOTES
 * If a node goes down for a prolonged period, the worst that can happen is 
 that some tombstones are recreated across the cluster when it restarts, which 
 does not corrupt data (and this will only occur with a very small number of 
 tombstones)
 * The system is simple to implement and predictable 
 * With the reaper model, repair would become an optional process for 
 optimizing the database to increase the consistency seen by 
 ConsistencyLevel.ONE reads, and for fixing up nodes, for example after an 
 sstable was lost
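
As a rough illustration of the bookkeeping the proposal implies (names and types are invented here, not taken from the ticket or any patch), the ACK list and relic index could be modelled like this:

{noformat}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.UUID;

// Illustrative sketch of the proposed in-memory structures; all names are invented.
// A tombstone is retained until every replica for its key has acknowledged it,
// then it moves to the "relic index" so late ACK requests can still be answered.
class TombstoneAckTracker
{
    private final Map<UUID, Set<UUID>> acks = new HashMap<UUID, Set<UUID>>();
    private final Set<UUID> relics = new HashSet<UUID>();

    /** Records an ACK; returns true once the tombstone is fully acknowledged. */
    synchronized boolean acknowledge(UUID tombstone, UUID replica, Set<UUID> allReplicas)
    {
        if (relics.contains(tombstone))
            return true; // already fully acknowledged and purged

        Set<UUID> acked = acks.get(tombstone);
        if (acked == null)
        {
            acked = new HashSet<UUID>();
            acks.put(tombstone, acked);
        }
        acked.add(replica);

        if (acked.containsAll(allReplicas))
        {
            relics.add(tombstone);   // remember it so stragglers still get an ACK
            acks.remove(tombstone);  // the tombstone itself can now be compacted away
            return true;
        }
        return false;
    }
}
{noformat}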
 

[jira] [Commented] (CASSANDRA-5104) new nodes should not attempt to bootstrap and stream until entire cluster is on the same major version

2013-02-14 Thread Michael Kjellman (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13578955#comment-13578955
 ] 

Michael Kjellman commented on CASSANDRA-5104:
-

If the versions are compatible, by all means it should be allowed, but it is 
equally messy for the end user in the middle of an upgrade currently. I'm not 
saying this is the best solution, but the current one is not the best, IMHO.

 new nodes should not attempt to bootstrap and stream until entire cluster is 
 on the same major version
 --

 Key: CASSANDRA-5104
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5104
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Michael Kjellman
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 2.0


 current behavior for bootstrapping nodes is for an exception to be thrown 
 while a node attempts to stream from another node that has not had 
 upgradesstables run yet.
 A node should not attempt to bootstrap into the cluster until the entire 
 cluster is on the same major version and upgradesstables has already been run 
 on every node in the ring.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


git commit: add missing string format specifier char

2013-02-14 Thread dbrosius
Updated Branches:
  refs/heads/cassandra-1.2 f2a57f04c -> 0d30c8cea


add missing string format specifier char


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0d30c8ce
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0d30c8ce
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0d30c8ce

Branch: refs/heads/cassandra-1.2
Commit: 0d30c8ceaef95e05979a0271b8b11baac4ee5502
Parents: f2a57f0
Author: Dave Brosius dbros...@apache.org
Authored: Fri Feb 15 00:22:54 2013 -0500
Committer: Dave Brosius dbros...@apache.org
Committed: Fri Feb 15 00:22:54 2013 -0500

--
 src/java/org/apache/cassandra/cql3/Operation.java |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0d30c8ce/src/java/org/apache/cassandra/cql3/Operation.java
--
diff --git a/src/java/org/apache/cassandra/cql3/Operation.java 
b/src/java/org/apache/cassandra/cql3/Operation.java
index 9754f50..90887e0 100644
--- a/src/java/org/apache/cassandra/cql3/Operation.java
+++ b/src/java/org/apache/cassandra/cql3/Operation.java
@@ -248,7 +248,7 @@ public abstract class Operation
 if (!(receiver.type instanceof CollectionType))
 {
 if (!(receiver.type instanceof CounterColumnType))
-throw new InvalidRequestException(String.format("Invalid operation for non counter column %s", toString(receiver), receiver));
+throw new InvalidRequestException(String.format("Invalid operation (%s) for non counter column %s", toString(receiver), receiver));
 return new Constants.Adder(receiver.kind == CFDefinition.Name.Kind.VALUE_ALIAS ? null : receiver.name, v);
 }
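
For context on why the fix matters: String.format silently ignores surplus arguments, so before this change the first argument filled the lone %s and the column name was dropped from the message entirely. A standalone illustration (variable names invented):

{noformat}
public class FormatSpecifierExample
{
    public static void main(String[] args)
    {
        String operation = "+";   // invented stand-ins for toString(receiver) and receiver
        String column = "name";

        // Before the fix: two arguments but only one %s; the operation fills the slot
        // and the column name is silently dropped.
        System.out.println(String.format("Invalid operation for non counter column %s", operation, column));
        // prints: Invalid operation for non counter column +

        // After the fix: one specifier per argument.
        System.out.println(String.format("Invalid operation (%s) for non counter column %s", operation, column));
        // prints: Invalid operation (+) for non counter column name
    }
}
{noformat}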
 



[jira] [Commented] (CASSANDRA-4885) Remove or rework per-row bloom filters

2013-02-14 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13578957#comment-13578957
 ] 

Jonathan Ellis commented on CASSANDRA-4885:
---

Ping again.  I can ask Aleksey to take a look at this if you like.  (In a bit 
of a hurry since it's blocking CASSANDRA-4180.)

 Remove or rework per-row bloom filters
 --

 Key: CASSANDRA-4885
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4885
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jonathan Ellis
Assignee: Jason Brown
 Fix For: 2.0

 Attachments: 0001-CASSANRDA-4885-Remove-per-row-bloom-filter.patch, 
 0002-CASSANRDA-4885-update-test.patch


 Per-row bloom filters may be a misfeature.
 On small rows we don't create them.
 On large rows we essentially only do slice queries that can't take advantage 
 of it.
 And on very large rows if we ever did deserialize it, the performance hit of 
 doing so would outweigh the benefit of skipping the actual read.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[2/2] git commit: Merge branch 'cassandra-1.2' into trunk

2013-02-14 Thread dbrosius
Updated Branches:
  refs/heads/trunk 7049ab44d -> 22609801d


Merge branch 'cassandra-1.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/22609801
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/22609801
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/22609801

Branch: refs/heads/trunk
Commit: 22609801d9bb03b768912cae8800411054ce6150
Parents: 7049ab4 0d30c8c
Author: Dave Brosius dbros...@apache.org
Authored: Fri Feb 15 00:30:04 2013 -0500
Committer: Dave Brosius dbros...@apache.org
Committed: Fri Feb 15 00:30:04 2013 -0500

--
 src/java/org/apache/cassandra/cql3/Operation.java |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
--




[1/2] git commit: add missing string format specifier char

2013-02-14 Thread dbrosius
add missing string format specifier char


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0d30c8ce
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0d30c8ce
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0d30c8ce

Branch: refs/heads/trunk
Commit: 0d30c8ceaef95e05979a0271b8b11baac4ee5502
Parents: f2a57f0
Author: Dave Brosius dbros...@apache.org
Authored: Fri Feb 15 00:22:54 2013 -0500
Committer: Dave Brosius dbros...@apache.org
Committed: Fri Feb 15 00:22:54 2013 -0500

--
 src/java/org/apache/cassandra/cql3/Operation.java |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0d30c8ce/src/java/org/apache/cassandra/cql3/Operation.java
--
diff --git a/src/java/org/apache/cassandra/cql3/Operation.java 
b/src/java/org/apache/cassandra/cql3/Operation.java
index 9754f50..90887e0 100644
--- a/src/java/org/apache/cassandra/cql3/Operation.java
+++ b/src/java/org/apache/cassandra/cql3/Operation.java
@@ -248,7 +248,7 @@ public abstract class Operation
 if (!(receiver.type instanceof CollectionType))
 {
 if (!(receiver.type instanceof CounterColumnType))
-throw new InvalidRequestException(String.format("Invalid operation for non counter column %s", toString(receiver), receiver));
+throw new InvalidRequestException(String.format("Invalid operation (%s) for non counter column %s", toString(receiver), receiver));
 return new Constants.Adder(receiver.kind == CFDefinition.Name.Kind.VALUE_ALIAS ? null : receiver.name, v);
 }
 



[jira] [Resolved] (CASSANDRA-2655) update wiki with CLI help

2013-02-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-2655.
---

Resolution: Not A Problem

Resolving under "unlikely to get attention any time soon".

 update wiki with CLI help
 -

 Key: CASSANDRA-2655
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2655
 Project: Cassandra
  Issue Type: Task
  Components: Documentation  website
Affects Versions: 0.8.0
Reporter: amorton
Assignee: amorton
Priority: Minor
 Attachments: 0001-add-command-text-to-help-sections.patch, 
 yaml-to-mm.py


 Need a way to update the wiki with the help written for the CLI. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-3983) Change order of directory searching for c*.in.sh

2013-02-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3983:
--

Reviewer: iamaleksey  (was: thepaul)

 Change order of directory searching for c*.in.sh
 

 Key: CASSANDRA-3983
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3983
 Project: Cassandra
  Issue Type: Improvement
Reporter: Nick Bailey
Assignee: Benjamin Coverston
Priority: Minor
 Attachments: 3983.txt


 When you have a c* package installed but attempt to run from a source build, 
 'bin/cassandra' will search the packaged dirs for 'cassandra.in.sh' before 
 searching the dirs in your source build. We should reverse the order of that 
 search so it checks locally first. Also the init scripts for a package should 
 set the environment variables correctly so no searching needs to be done and 
 there is no worry of the init scripts loading the wrong file.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-3983) Change order of directory searching for c*.in.sh

2013-02-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-3983:
--

  Component/s: Packaging
Fix Version/s: 2.0

 Change order of directory searching for c*.in.sh
 

 Key: CASSANDRA-3983
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3983
 Project: Cassandra
  Issue Type: Improvement
  Components: Packaging
Reporter: Nick Bailey
Assignee: Benjamin Coverston
Priority: Minor
 Fix For: 2.0

 Attachments: 3983.txt


 When you have a c* package installed but attempt to run from a source build, 
 'bin/cassandra' will search the packaged dirs for 'cassandra.in.sh' before 
 searching the dirs in your source build. We should reverse the order of that 
 search so it checks locally first. Also the init scripts for a package should 
 set the environment variables correctly so no searching needs to be done and 
 there is no worry of the init scripts loading the wrong file.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (CASSANDRA-2143) Uncaught exceptions counter

2013-02-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-2143.
---

Resolution: Duplicate
  Assignee: (was: Chris Goffinet)

done as part of CASSANDRA-2521

 Uncaught exceptions counter
 ---

 Key: CASSANDRA-2143
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2143
 Project: Cassandra
  Issue Type: Improvement
Reporter: Chris Goffinet
Priority: Trivial

 We should keep a counter, exposed through JMX, of how many errors were thrown 
 in the uncaught exception handler. This allows us to do better alerting when an 
 error occurs, instead of grepping logs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (CASSANDRA-4615) cannot set log4j level on modules/classes

2013-02-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4615.
---

Resolution: Not A Problem

I think this is a non-issue after switching tracing to its own API.

 cannot set log4j level on modules/classes
 -

 Key: CASSANDRA-4615
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4615
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.0 beta 1
Reporter: Brandon Williams
Assignee: David Alves
Priority: Minor

 For example, setting log4j.logger.org.apache.cassandra.db=DEBUG in the config 
 has no effect. Perhaps there is something else that needs to be set as well, 
 but we should at least document that requirement there.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (CASSANDRA-4971) Token Generator needs to be partitioner aware

2013-02-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4971.
---

Resolution: Won't Fix

vnodes makes tokengen obsolete.

 Token Generator needs to be partitioner aware
 -

 Key: CASSANDRA-4971
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4971
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 1.2.0 beta 2
Reporter: Edward Capriolo
Assignee: Edward Capriolo
Priority: Minor

 I do not agree with the decision to change the default partitioner from RP to 
 Murmur3. Also, whatever performance gains the micro-benchmarks demonstrate are 
 completely outweighed by disk latency anyway.
 Assuming we are not going to change the default back, there is another issue.
 Historically, many blogs and reference materials show how to make 
 tokens for RandomPartitioner. Indeed, the relatively new TokenGenerator is 
 'unaware' of this change, as it is still giving the user tokens for the Random 
 Partitioner. 
  
 {noformat}
 [edward@tablitha 2]$ vi 
 /home/edward/.ccm/repository/1.2.0-beta2/tools/bin/token-generator 
 [edward@tablitha 2]$ python 
 /home/edward/.ccm/repository/1.2.0-beta2/tools/bin/token-generator 
 Token Generator Interactive Mode
 
  How many datacenters will participate in this Cassandra cluster? 1
  How many nodes are in datacenter #1? 3
 DC #1:
   Node #1:0
   Node #2:   56713727820156410577229101238628035242
   Node #3:  113427455640312821154458202477256070484
 {noformat}
 This will lead to confusion among new users and imbalanced rings. We should 
 enhance the token-generator so it requires input from the user on which 
 partitioner they are using, so it can do the appropriate math and give users 
 the correct information.
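
To make the mismatch concrete, here is a small sketch (not the token-generator tool itself, and with the spacing arithmetic simplified) of evenly spaced tokens for the two partitioners: RandomPartitioner tokens live in roughly 0..2^127, while Murmur3Partitioner tokens live in -2^63..2^63-1, so values computed for one are meaningless for the other.

{noformat}
import java.math.BigInteger;

// Illustrative sketch: evenly spaced initial tokens for both partitioners.
// Not the actual token-generator script; shown only to make the range mismatch concrete.
public class EvenTokens
{
    public static void main(String[] args)
    {
        int nodes = 3;

        // RandomPartitioner: tokens in 0 .. 2^127
        BigInteger rpStep = BigInteger.ONE.shiftLeft(127).divide(BigInteger.valueOf(nodes));
        for (int i = 0; i < nodes; i++)
            System.out.println("RP node " + (i + 1) + ": " + rpStep.multiply(BigInteger.valueOf(i)));

        // Murmur3Partitioner: tokens in -2^63 .. 2^63-1
        BigInteger m3Min = BigInteger.valueOf(Long.MIN_VALUE);
        BigInteger m3Step = BigInteger.ONE.shiftLeft(64).divide(BigInteger.valueOf(nodes));
        for (int i = 0; i < nodes; i++)
            System.out.println("M3 node " + (i + 1) + ": " + m3Min.add(m3Step.multiply(BigInteger.valueOf(i))));
    }
}
{noformat}

The RP values reproduce the interactive output quoted above; feeding them to a Murmur3 cluster is what produces the imbalance described.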

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-4971) Token Generator needs to be partitioner aware

2013-02-14 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13578966#comment-13578966
 ] 

Jonathan Ellis edited comment on CASSANDRA-4971 at 2/15/13 5:44 AM:


vnodes makes tokengen obsolete (CASSANDRA-5261)

  was (Author: jbellis):
vnodes makes tokengen obsolete.
  
 Token Generator needs to be partitioner aware
 -

 Key: CASSANDRA-4971
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4971
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 1.2.0 beta 2
Reporter: Edward Capriolo
Assignee: Edward Capriolo
Priority: Minor

 I do not agree with the decision to change the default partitioner from RP to 
 Murmur3. Also, whatever performance gains the micro-benchmarks demonstrate are 
 completely outweighed by disk latency anyway.
 Assuming we are not going to change the default back, there is another issue.
 Historically, many blogs and reference materials show how to make 
 tokens for RandomPartitioner. Indeed, the relatively new TokenGenerator is 
 'unaware' of this change, as it is still giving the user tokens for the Random 
 Partitioner. 
  
 {noformat}
 [edward@tablitha 2]$ vi 
 /home/edward/.ccm/repository/1.2.0-beta2/tools/bin/token-generator 
 [edward@tablitha 2]$ python 
 /home/edward/.ccm/repository/1.2.0-beta2/tools/bin/token-generator 
 Token Generator Interactive Mode
 
  How many datacenters will participate in this Cassandra cluster? 1
  How many nodes are in datacenter #1? 3
 DC #1:
   Node #1:0
   Node #2:   56713727820156410577229101238628035242
   Node #3:  113427455640312821154458202477256070484
 {noformat}
 This will lead to confusion among new users and imbalanced rings. We should 
 enhance the token-generator so it requires input from the user on which 
 partitioner they are using, so it can do the appropriate math and give users 
 the correct information.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5261) Remove token generator

2013-02-14 Thread Jonathan Ellis (JIRA)
Jonathan Ellis created CASSANDRA-5261:
-

 Summary: Remove token generator
 Key: CASSANDRA-5261
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5261
 Project: Cassandra
  Issue Type: Bug
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor


Obsoleted by vnodes

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (CASSANDRA-2192) Daemons should not spew to console

2013-02-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-2192.
---

Resolution: Won't Fix
  Assignee: (was: Eric Evans)

 Daemons should not spew to console
 --

 Key: CASSANDRA-2192
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2192
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 0.7.2
Reporter: Benson Margulies
Priority: Minor

 When I start cassandra on MacOSX from command line, it daemonizes, but it 
 still spews log into my current terminal window.
 I submit to you that this wrong. If it's going to daemonize, it should just 
 log to a file.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (CASSANDRA-3880) Random Partitioner does not check if tokens are outside of its range

2013-02-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3880.
---

Resolution: Not A Problem
  Assignee: (was: Harish Doddi)

Resolving NotAProblem since modern versions do check per Marcel's investigation.

 Random Partitioner does not check if tokens are outside of its range
 

 Key: CASSANDRA-3880
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3880
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 0.7.6, 1.0.7
Reporter: Marcel Steinbach
Priority: Minor

 Setting up a ring where the tokens are outside RP's token range leads to an 
 unbalanced cluster. The partitioner still reports equally distributed 
 ownership since it calculates ownership only with the _distances_ of the 
 tokens in relation to the maximum token. 
 E.g. maximum token = 15
 token1 = 5
 token2 = 10
 token3 = 15
 token4 = 20
 ownership4 = (token4 - token3) / maximum_token = 5 / 15 = 1/3
 So token4 claims to own 33.33% of the ring but is not responsible for any 
 primary replicas.
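
For reference, the kind of validation the ticket asks for is just a bounds check against RandomPartitioner's 0..2^127 range. A sketch (not the actual partitioner code):

{noformat}
import java.math.BigInteger;

// Sketch of the bounds check the ticket asks for; not the actual partitioner code.
class RandomPartitionerTokenCheck
{
    private static final BigInteger MIN = BigInteger.ZERO;
    private static final BigInteger MAX = BigInteger.ONE.shiftLeft(127); // 2^127

    static void validate(BigInteger token)
    {
        if (token.compareTo(MIN) < 0 || token.compareTo(MAX) > 0)
            throw new IllegalArgumentException("token " + token
                    + " is outside RandomPartitioner's range [0, 2^127]");
    }
}
{noformat}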

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (CASSANDRA-3911) Basic QoS support for helping reduce OOMing cluster

2013-02-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3911.
---

   Resolution: Won't Fix
Fix Version/s: (was: 2.0)
 Assignee: (was: Harish Doddi)

WontFixing for lack of progress.  (And modern efforts here should focus on 
CQL3/binary protocol IMO.)

 Basic QoS support for helping reduce OOMing cluster
 ---

 Key: CASSANDRA-3911
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3911
 Project: Cassandra
  Issue Type: Improvement
Reporter: Chris Goffinet
Priority: Minor
 Attachments: CASSANDRA-3911-trunk.txt


 We'd like to propose adding some basic QoS features to Cassandra. There is a 
 lot that could be done here, but to keep v1 less invasive while still providing 
 the basics, we would like to contribute the following features and see if 
 the community thinks this is OK.
 We would set these on the server (cassandra.yaml). If a threshold is crossed, we 
 throw an exception up to the client.
 1) Limit how many rows a client can fetch over RPC through multi-get.
 2) Limit how many columns may be returned; if count > N, throw an exception 
 before processing.
 3) Limit how many rows and columns a client can try to batch mutate.
 This can be added in our Thrift logic, before any processing is done. The 
 big reason we want to do this is so that customers don't shoot 
 themselves in the foot by making mistakes or not knowing how many columns 
 they might have returned.
 We can build logic like this into a basic client, but I propose one of the 
 features we might want in Cassandra is support for not being able to OOM a 
 node. We've done lots of work around memtable flushing, dropping messages, 
 etc.
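
A rough sketch of what such server-side guards could look like (the class name, thresholds, and exception type are invented; this is not the attached patch):

{noformat}
// Illustrative sketch of pre-processing request guards; names and thresholds are
// invented and this is not the attached CASSANDRA-3911 patch.
class RequestGuards
{
    static final int MAX_MULTIGET_ROWS = 1000;     // would come from cassandra.yaml
    static final int MAX_COLUMNS_PER_READ = 10000; // would come from cassandra.yaml
    static final int MAX_BATCH_MUTATIONS = 50000;  // would come from cassandra.yaml

    static void checkMultiget(int requestedRows)
    {
        if (requestedRows > MAX_MULTIGET_ROWS)
            throw new IllegalArgumentException("multiget of " + requestedRows
                    + " rows exceeds limit " + MAX_MULTIGET_ROWS);
    }

    static void checkColumnCount(int requestedColumns)
    {
        if (requestedColumns > MAX_COLUMNS_PER_READ)
            throw new IllegalArgumentException("read of " + requestedColumns
                    + " columns exceeds limit " + MAX_COLUMNS_PER_READ);
    }

    static void checkBatchSize(int totalMutations)
    {
        if (totalMutations > MAX_BATCH_MUTATIONS)
            throw new IllegalArgumentException("batch of " + totalMutations
                    + " mutations exceeds limit " + MAX_BATCH_MUTATIONS);
    }
}
{noformat}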

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (CASSANDRA-3125) Move gossip library into a standalone artifact

2013-02-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3125.
---

Resolution: Won't Fix
  Assignee: (was: Jake Farrell)

WontFixing for lack of progress, but I'm still not opposed in principle.

 Move gossip library into a standalone artifact 
 ---

 Key: CASSANDRA-3125
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3125
 Project: Cassandra
  Issue Type: Task
Reporter: Jake Farrell
Priority: Minor
  Labels: gsoc, gsoc2012, mentor

 There has been some talk on the mailing list of people wanting to use the gossip 
 portion of Cassandra in their own applications. The goal for this ticket will be to 
 create a standalone artifact.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5080) cassandra-cli doesn't support JMX authentication.

2013-02-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5080:
--

Reviewer: iamaleksey

 cassandra-cli doesn't support JMX authentication.
 -

 Key: CASSANDRA-5080
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5080
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 1.1.6, 1.1.7
Reporter: Sergey Olefir
Assignee: Michał Michalski
 Attachments: 5080-v1.patch, enable-jmx-authentication.patch


 It seems that cassandra-cli doesn't support JMX user authentication.
 Specifically I went about securing our Cassandra cluster slightly -- I've 
 added cassandra-level authentication (which cassandra-cli does support), but 
 then I discovered that nodetool is still completely unprotected. So I went 
 ahead and secured JMX (via -Dcom.sun.management.jmxremote.password.file and 
 -Dcom.sun.management.jmxremote.access.file). Nodetool supports JMX 
 authentication via -u and -pw options.
 However it seems that cassandra-cli doesn't support JMX authentication, e.g.:
 {quote}
 apache-cassandra-1.1.6\bin>cassandra-cli -h hostname -u experiment -pw password
 Starting Cassandra Client
 Connected to: db on hostname/9160
 Welcome to Cassandra CLI version 1.1.6
 [experiment@unknown] show keyspaces;
 WARNING: Could not connect to the JMX on hostname:7199, information won't be 
 shown.
 Keyspace: system:
   Replication Strategy: org.apache.cassandra.locator.LocalStrategy
   Durable Writes: true
 Options: [replication_factor:1]
 ... (rest of keyspace output snipped)
 {quote}
 "help connect;" and "cassandra-cli --help" do not seem to indicate that there's 
 any way to specify JMX login information.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (CASSANDRA-3878) Allow posix_fadvise call to be skipped based on a configuration option

2013-02-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-3878.
---

Resolution: Duplicate

We're ripping out fadvise from compaction reads (CASSANDRA-4937), so that would 
seem to take care of that.

 Allow posix_fadvise call to be skipped based on a configuration option
 --

 Key: CASSANDRA-3878
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3878
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Rick Branson
Assignee: Rick Branson
  Labels: datastax_qa

 It is sometimes desirable to skip the posix_fadvise() call performed on 
 SSTable files when Cassandra has JNA enabled, without disabling JNA as other 
 functionality is dependent on JNA such as the off-heap row cache. This can be 
 either for performance reasons, or to avoid bugs caused by faulty 
 interactions of the fadvise call with the underlying hardware and its 
 drivers. 
 The enhancement is to be able to disable the fadvise() call using an option 
 in the cassandra.yaml file.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (CASSANDRA-3613) Commit Log Test broken on Windows

2013-02-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reassigned CASSANDRA-3613:
-

Assignee: (was: Rick Branson)

 Commit Log Test broken on Windows
 -

 Key: CASSANDRA-3613
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3613
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.0
 Environment: Windows
Reporter: Rick Branson

 Get errors like this when trying to run trunk CommitLogTest on Windows:
 java.io.IOException: Failed to delete 
 C:\Users\Piotr\Projekty\cassandra\build\test\cassandra\commitlog\CommitLog-210658686438781.log

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (CASSANDRA-4616) add test to make sure KEYS indexes handle row deletions

2013-02-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4616.
---

   Resolution: Won't Fix
Fix Version/s: (was: 1.2.2)
 Assignee: (was: Sam Tunnicliffe)

 add test to make sure KEYS indexes handle row deletions
 ---

 Key: CASSANDRA-4616
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4616
 Project: Cassandra
  Issue Type: Test
Affects Versions: 1.2.0 beta 1
Reporter: Jonathan Ellis

 pretty sure we lost this in the refactoring we did for CASSANDRA-2897

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (CASSANDRA-4126) write tests for vnodes

2013-02-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4126.
---

Resolution: Fixed
  Assignee: (was: Sam Overton)

Done in https://github.com/riptano/cassandra-dtest

 write tests for vnodes
 --

 Key: CASSANDRA-4126
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4126
 Project: Cassandra
  Issue Type: Sub-task
  Components: Core
Reporter: Sam Overton



--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5262) Don't generate compation statistics if logging isn't enabled

2013-02-14 Thread Dave Brosius (JIRA)
Dave Brosius created CASSANDRA-5262:
---

 Summary: Don't generate compation statistics if logging isn't 
enabled
 Key: CASSANDRA-5262
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5262
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.1.9
Reporter: Dave Brosius
Assignee: Dave Brosius
Priority: Trivial
 Fix For: 1.1.10


 wrap compaction statistics generation in an if (logger.isInfoEnabled()) block
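
The pattern being proposed is the standard log-level guard: skip building an expensive log message when the level is disabled. A minimal sketch (the statistics string below is a stand-in, not the real compaction summary):

{noformat}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Minimal sketch of the proposed guard; buildStatsString() stands in for the
// real (relatively costly) compaction summary construction.
class CompactionLoggingSketch
{
    private static final Logger logger = LoggerFactory.getLogger(CompactionLoggingSketch.class);

    void logCompactionSummary()
    {
        if (logger.isInfoEnabled())
        {
            // Only pay for building ratios, counts, etc. when INFO is actually on.
            logger.info(buildStatsString());
        }
    }

    private String buildStatsString()
    {
        return "Compacted N sstables to [...] (placeholder for the real summary)";
    }
}
{noformat}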

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-4123) vnodes aware Replication Strategy

2013-02-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4123:
--

Issue Type: New Feature  (was: Sub-task)
Parent: (was: CASSANDRA-4119)

 vnodes aware Replication Strategy 
 --

 Key: CASSANDRA-4123
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4123
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Sam Overton
Assignee: Sam Overton

 The simplest implementation for this would be if NTS regarded a single host 
 as a distinct rack. This would prevent replicas being placed on the same 
 host. The rest of the logic for replica selection would be identical to NTS 
 (but this would be removing a level of topology hierarchy). This would be 
 achievable just by writing a snitch to place hosts in their own rack.
 A better solution would be to add an extra level of hierarchy to NTS so that 
 it still supported DC > rack, and IP would be the new level at the bottom of 
 the hierarchy. The logic would remain largely the same.
 I would very much like to build in Peter Schuller's notion of Distribution 
 Factor (as described in 
 http://www.mail-archive.com/dev@cassandra.apache.org/msg03844.html). This 
 requires a method of defining a replica set for each host and then treating 
 it in a similar way to a DC (ie. RF replicas are chosen from that set, 
 instead of from the whole cluster). 
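
As a concrete reading of "writing a snitch to place hosts in their own rack", here is a hedged sketch; it does not implement Cassandra's real snitch interface, it only shows the DC/rack mapping involved, and the class name is invented.

{noformat}
import java.net.InetAddress;

// Hedged sketch of the "one host = one rack" idea. This is not an implementation of
// Cassandra's snitch interface; it only illustrates the mapping logic.
class HostAsRackSnitchSketch
{
    private final String datacenter;

    HostAsRackSnitchSketch(String datacenter)
    {
        this.datacenter = datacenter;
    }

    String getDatacenter(InetAddress endpoint)
    {
        return datacenter; // a single DC in this sketch
    }

    String getRack(InetAddress endpoint)
    {
        // Treating every host as its own "rack" means NTS-style placement never
        // puts two replicas of the same range on the same physical host.
        return endpoint.getHostAddress();
    }
}
{noformat}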

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5262) Don't generate compation statistics if logging isn't enabled

2013-02-14 Thread Dave Brosius (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Brosius updated CASSANDRA-5262:


Attachment: 5262.txt

 Don't generate compation statistics if logging isn't enabled
 

 Key: CASSANDRA-5262
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5262
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.1.9
Reporter: Dave Brosius
Assignee: Dave Brosius
Priority: Trivial
 Fix For: 1.1.10

 Attachments: 5262.txt


 wrap compaction statistics generation in an if (logger.isInfoEnabled()) block

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (CASSANDRA-4124) staggered repair for multiple ranges

2013-02-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4124.
---

Resolution: Later
  Assignee: (was: Sam Overton)

 staggered repair for multiple ranges
 

 Key: CASSANDRA-4124
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4124
 Project: Cassandra
  Issue Type: Sub-task
  Components: Core
Reporter: Sam Overton

 One difference to take into account when considering the benefit of 
 staggering repair: with the introduction of a distribution factor, replicas 
 for all ranges on one host will be spread across a larger number of other 
 hosts (DF > RF), so the impact of repair will be less with larger values of 
 DF.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (CASSANDRA-4119) Support multiple non-consecutive tokens per host (virtual nodes)

2013-02-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4119.
---

   Resolution: Fixed
Fix Version/s: 1.2.0

 Support multiple non-consecutive tokens per host (virtual nodes)
 

 Key: CASSANDRA-4119
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4119
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Sam Overton
Assignee: Sam Overton
  Labels: virtualnodes, vnodes
 Fix For: 1.2.0


 This is the parent ticket for the virtual nodes implementation which was 
 proposed here: 
 http://www.mail-archive.com/dev@cassandra.apache.org/msg03837.html and 
 discussed in the subsequent thread.
 The goals of this ticket are:
 * reduced operations complexity for scaling up/down
 * reduced rebuild time in event of failure
 * evenly distributed load impact in the event of failure
 * evenly distributed impact of streaming operations
 * more viable support for heterogeneity of hardware
 The intention is that this can be done in a way which is
 * fully backwards-compatible
 * optionally enabled
 The latter of these can be trivially achieved by setting the number of tokens 
 per host to 1, to reproduce the existing behaviour.
 Implementation detail can be added and discussed in the sub-tickets, but here 
 is an overview of the proposed changes:
 * TokenMetadata will allow multiple tokens per host
 * Hosts will be referred to by a UUID instead of token (e.g. in Gossip, when 
 storing hints, etc.)
 * A bootstrapping node can get multiple tokens from initial_token (comma 
 separated) or by random allocation
 * NetworkTopologyStrategy will be extended to be aware of virtual nodes so 
 that replicas are not placed on the same host (similar to racks now)
 * Repairs will be staggered similar to CASSANDRA-3721
 * Nodetool operations will be virtual-node aware, while maintaining backwards 
 compatibility (ie. existing scripts won't have to change)
 * Upgrade will be a standard rolling upgrade, with optional rolling migration 
 to full vnode support
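
A minimal sketch of the first two bullet points above (multiple non-consecutive tokens per host, hosts identified by UUID); the class and method names here are hypothetical and are not the real TokenMetadata API:

{code}
// Illustrative data structure only, not Cassandra's TokenMetadata.
import java.util.*;

public class VnodeTokenMapSketch
{
    // token -> owning host; the ring is the sorted key set of this map
    private final NavigableMap<Long, UUID> ring = new TreeMap<>();
    // host -> all tokens it owns (num_tokens of them, not necessarily contiguous)
    private final Map<UUID, SortedSet<Long>> tokensByHost = new HashMap<>();

    public void addHost(UUID hostId, Collection<Long> tokens)
    {
        SortedSet<Long> owned = tokensByHost.computeIfAbsent(hostId, id -> new TreeSet<>());
        for (Long token : tokens)
        {
            ring.put(token, hostId);
            owned.add(token);
        }
    }

    // Host owning the range the key's token falls into (wraps around the ring;
    // assumes at least one host has been added).
    public UUID primaryReplica(long keyToken)
    {
        Map.Entry<Long, UUID> e = ring.ceilingEntry(keyToken);
        return e != null ? e.getValue() : ring.firstEntry().getValue();
    }
}
{code}

Registering each host with a single token degenerates this to the existing one-token-per-node ring, which is how the backwards-compatible, optionally-enabled goals fall out.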

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5262) Don't generate compaction statistics if logging isn't enabled

2013-02-14 Thread Dave Brosius (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Brosius updated CASSANDRA-5262:


Summary: Don't generate compaction statistics if logging isn't enabled  
(was: Don't generate compation statistics if logging isn't enabled)

 Don't generate compaction statistics if logging isn't enabled
 -

 Key: CASSANDRA-5262
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5262
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.1.9
Reporter: Dave Brosius
Assignee: Dave Brosius
Priority: Trivial
 Fix For: 1.1.10

 Attachments: 5262.txt


 wrap compaction statistics generation in an if (logger.isInfoEnabled()) block

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-4123) vnodes aware Replication Strategy

2013-02-14 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4123:
--

Fix Version/s: 2.0

Are you still looking at addressing this, Sam?

 vnodes aware Replication Strategy 
 --

 Key: CASSANDRA-4123
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4123
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Sam Overton
Assignee: Sam Overton
 Fix For: 2.0


 The simplest implementation for this would be if NTS regarded a single host 
 as a distinct rack. This would prevent replicas being placed on the same 
 host. The rest of the logic for replica selection would be identical to NTS 
 (but this would be removing a level of topology hierarchy). This would be 
 achievable just by writing a snitch to place hosts in their own rack.
 A better solution would be to add an extra level of hierarchy to NTS so that 
 it still supported DC & rack, and IP would be the new level at the bottom of 
 the hierarchy. The logic would remain largely the same.
 I would very much like to build in Peter Schuller's notion of Distribution 
 Factor (as described in 
 http://www.mail-archive.com/dev@cassandra.apache.org/msg03844.html). This 
 requires a method of defining a replica set for each host and then treating 
 it in a similar way to a DC (ie. RF replicas are chosen from that set, 
 instead of from the whole cluster). 
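
The "each host is its own rack" option can be pictured as a snitch that derives the rack from the endpoint itself; the class below is a hypothetical stand-in, not Cassandra's IEndpointSnitch implementation:

{code}
// Sketch of the simplest option from the ticket: every endpoint reports a
// unique "rack", so rack-aware placement degenerates into host-aware placement
// and NTS never puts two replicas on the same host. Names are illustrative.
import java.net.InetAddress;

public class OwnRackSnitchSketch
{
    private final String datacenter;

    public OwnRackSnitchSketch(String datacenter)
    {
        this.datacenter = datacenter;
    }

    public String getDatacenter(InetAddress endpoint)
    {
        return datacenter; // DC assignment is unchanged
    }

    public String getRack(InetAddress endpoint)
    {
        return endpoint.getHostAddress(); // one "rack" per host
    }
}
{code}

The second option described above would instead keep DC and rack and add the endpoint as a third, lowest level of the placement hierarchy.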

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4762) Support multiple OR clauses for CQL3 Compact storage

2013-02-14 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13578977#comment-13578977
 ] 

Jonathan Ellis commented on CASSANDRA-4762:
---

Still working on this?

 Support multiple OR clauses for CQL3 Compact storage
 

 Key: CASSANDRA-4762
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4762
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: T Jake Luciani
Assignee: T Jake Luciani
  Labels: cql3
 Fix For: 1.2.2

 Attachments: 4762-1.txt


 Given CASSANDRA-3885, it seems it should be possible to store multiple ranges 
 for many predicates, even the inner parts of a composite column.
 They could be expressed as an expanded set of filter queries.
 example:
 {code}
 CREATE TABLE test (
name text,
tdate timestamp,
tdate2 timestamp,
tdate3 timestamp,
num double,
PRIMARY KEY(name,tdate,tdate2,tdate3)
  ) WITH COMPACT STORAGE;
 SELECT * FROM test WHERE 
   name IN ('a','b') and
   tdate IN ('2010-01-01','2011-01-01') and
   tdate2 IN ('2010-01-01','2011-01-01') and
   tdate3 IN ('2010-01-01','2011-01-01') 
 {code}
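
One way to read "an expanded set of filter queries" is as the Cartesian product of the IN lists; a hedged sketch of that expansion (not the actual CQL3 code path), which for the query above yields 2 * 2 * 2 * 2 = 16 filters:

{code}
// Illustrative expansion of IN clauses into the cross product of individual
// equality filters. This mimics the "expanded set of filter queries" idea;
// it is not the real CQL3 planner.
import java.util.ArrayList;
import java.util.List;

public class InClauseExpansionSketch
{
    // Each inner list holds the IN values for one PRIMARY KEY component, in key order.
    public static List<List<String>> expand(List<List<String>> inLists)
    {
        List<List<String>> combos = new ArrayList<>();
        combos.add(new ArrayList<>());
        for (List<String> values : inLists)
        {
            List<List<String>> next = new ArrayList<>();
            for (List<String> prefix : combos)
                for (String v : values)
                {
                    List<String> extended = new ArrayList<>(prefix);
                    extended.add(v);
                    next.add(extended);
                }
            combos = next;
        }
        return combos; // each tuple becomes one equality-filter query
    }
}
{code}

Each resulting tuple is one filter query, which is also why large IN lists multiply the work quickly.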

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

