[jira] [Updated] (CASSANDRA-5834) Changing LCS SSTable size does not work/error using compaction_strategy_options

2013-07-31 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5834:
--

Priority: Trivial  (was: Major)

 Changing LCS SSTable size does not work/error using 
 compaction_strategy_options
 ---

 Key: CASSANDRA-5834
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5834
 Project: Cassandra
  Issue Type: Bug
  Components: API
Affects Versions: 1.2.4
Reporter: Keith Wright
Priority: Trivial

 The following alter neither fails nor succeeds:
 alter table <table> with compaction_strategy_options = 
 {'sstable_size_in_mb': <size as int>};
 Correct alter:
  alter table <table> with compaction = {'sstable_size_in_mb': '<size as int>', 
 'class': 'LeveledCompactionStrategy'};
 The same is true when creating the table.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5833) Duplicate classes in Cassandra-all package.

2013-07-31 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13725203#comment-13725203
 ] 

Brandon Williams commented on CASSANDRA-5833:
-

It seems like the easiest thing to do would be to not use the extension, or to 
configure it to ignore cassandra-thrift.

 Duplicate classes in Cassandra-all package.
 ---

 Key: CASSANDRA-5833
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5833
 Project: Cassandra
  Issue Type: Bug
  Components: API, Packaging
Affects Versions: 1.1.6, 1.2.7
Reporter: sam schumer
  Labels: maven

 As of Cassandra-All version 1.1.6, the classes 
 org.apache.cassandra.thrift.ITransportFactory and 
 org.apache.cassandra.thrift.TFramedTransportFactory are located in both the 
 cassandra-thrift and the cassandra-all Maven JARs, and cassandra-thrift is 
 imported by the cassandra-all POM. This makes the cassandra-all package 
 unbuildable when using the duplicate-finder Maven extension. The files were 
 originally copied over due to 
 [CASSANDRA-4668|https://issues.apache.org/jira/browse/CASSANDRA-4668]. All 
 versions since have failed to build when using this Maven extension.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5833) Duplicate classes in Cassandra-all package.

2013-07-31 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13725206#comment-13725206
 ] 

Jonathan Ellis commented on CASSANDRA-5833:
---

bq. cassandra-thrift is imported by cassandra-all POM

That also sounds like a problem to me.

 Duplicate classes in Cassandra-all package.
 ---

 Key: CASSANDRA-5833
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5833
 Project: Cassandra
  Issue Type: Bug
  Components: API, Packaging
Affects Versions: 1.1.6, 1.2.7
Reporter: sam schumer
  Labels: maven

 As of Cassandra-All version 1.1.6, the classes 
 org.apache.cassandra.thrift.ITransportFactory and 
 org.apache.cassandra.thrift.TFramedTransportFactory are located in both the 
 cassandra-thrift and the cassandra-all Maven JARs, and cassandra-thrift is 
 imported by the cassandra-all POM. This makes the cassandra-all package 
 unbuildable when using the duplicate-finder Maven extension. The files were 
 originally copied over due to 
 [CASSANDRA-4668|https://issues.apache.org/jira/browse/CASSANDRA-4668]. All 
 versions since have failed to build when using this Maven extension.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4871) get_paged_slice does not obey SlicePredicate

2013-07-31 Thread Adam Masters (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13725241#comment-13725241
 ] 

Adam Masters commented on CASSANDRA-4871:
-

I'm finding this issue is still present in 1.2.6, and the attached patch 
(still) doesn't apply to trunk.

 get_paged_slice does not obey SlicePredicate
 

 Key: CASSANDRA-4871
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4871
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Hadoop
Affects Versions: 1.1.0
Reporter: Scott Fines
Priority: Minor
 Attachments: CASSANDRA-4816.patch


 When experimenting with WideRow support, I noticed that it is not possible to 
 specify a bounding SlicePredicate. This means that, no matter what you may 
 wish, the entire Column Family will be used during a get_paged_slice call. 
 This is unfortunate, if (for example) you are attempting to do MapReduce over 
 a subset of your column range.
 get_paged_slice should support a SlicePredicate, which will bound the column 
 range over which data is returned. It seems like this SlicePredicate should 
 be optional, so that existing code is not broken--when the SlicePredicate is 
 not specified, have it default to going over the entire column range.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (CASSANDRA-5834) Changing LCS SSTable size does not work/error using compaction_strategy_options

2013-07-31 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams resolved CASSANDRA-5834.
-

Resolution: Invalid

The first query produces:

{noformat}
 WARN 13:37:24,836 Ignoring obsolete property compaction_strategy_options
{noformat}

which explains why it doesn't work.

 Changing LCS SSTable size does not work/error using 
 compaction_strategy_options
 ---

 Key: CASSANDRA-5834
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5834
 Project: Cassandra
  Issue Type: Bug
  Components: API
Affects Versions: 1.2.4
Reporter: Keith Wright
Priority: Trivial

 The following alter neither fails nor succeeds:
 alter table <table> with compaction_strategy_options = 
 {'sstable_size_in_mb': <size as int>};
 Correct alter:
  alter table <table> with compaction = {'sstable_size_in_mb': '<size as int>', 
 'class': 'LeveledCompactionStrategy'};
 The same is true when creating the table.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5834) Changing LCS SSTable size does not work/error using compaction_strategy_options

2013-07-31 Thread Keith Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13725253#comment-13725253
 ] 

Keith Wright commented on CASSANDRA-5834:
-

I do not see that within cqlsh.  Are you saying that it's only output to 
system.log?  It should preferably fail the alter, or at least show that warning 
within cqlsh.

 Changing LCS SSTable size does not work/error using 
 compaction_strategy_options
 ---

 Key: CASSANDRA-5834
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5834
 Project: Cassandra
  Issue Type: Bug
  Components: API
Affects Versions: 1.2.4
Reporter: Keith Wright
Priority: Trivial

 The following alter neither fails nor succeeds:
 alter table <table> with compaction_strategy_options = 
 {'sstable_size_in_mb': <size as int>};
 Correct alter:
  alter table <table> with compaction = {'sstable_size_in_mb': '<size as int>', 
 'class': 'LeveledCompactionStrategy'};
 The same is true when creating the table.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5823) nodetool history logging

2013-07-31 Thread Dave Brosius (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13725255#comment-13725255
 ] 

Dave Brosius commented on CASSANDRA-5823:
-

{quote}Maybe FBU.getToolsOutputDirectory() ?{quote}

+1

 nodetool history logging
 

 Key: CASSANDRA-5823
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5823
 Project: Cassandra
  Issue Type: New Feature
  Components: Tools
Reporter: Jason Brown
Assignee: Jason Brown
Priority: Minor
 Fix For: 1.2.8, 2.0 rc1

 Attachments: 5823-v1.patch, 5823-v2.patch


 Capture the commands and time executed from nodetool into a log file, similar 
 to the cli.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4871) get_paged_slice does not obey SlicePredicate

2013-07-31 Thread Adam Masters (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13725335#comment-13725335
 ] 

Adam Masters commented on CASSANDRA-4871:
-

Looking at the word_count example on GitHub, it's interesting to note the 
comment which has been added against the wide-row functionality: "this will 
cause the predicate to be ignored in favor of scanning everything as a wide 
row". This suggests that ignoring the SlicePredicate for wide rows is by 
design. In which case, how does one limit the columns when using wide rows?

 get_paged_slice does not obey SlicePredicate
 

 Key: CASSANDRA-4871
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4871
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Hadoop
Affects Versions: 1.1.0
Reporter: Scott Fines
Priority: Minor
 Attachments: CASSANDRA-4816.patch


 When experimenting with WideRow support, I noticed that it is not possible to 
 specify a bounding SlicePredicate. This means that, no matter what you may 
 wish, the entire Column Family will be used during a get_paged_slice call. 
 This is unfortunate, if (for example) you are attempting to do MapReduce over 
 a subset of your column range.
 get_paged_slice should support a SlicePredicate, which will bound the column 
 range over which data is returned. It seems like this SlicePredicate should 
 be optional, so that existing code is not broken--when the SlicePredicate is 
 not specified, have it default to going over the entire column range.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4871) get_paged_slice does not obey SlicePredicate

2013-07-31 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13725382#comment-13725382
 ] 

Jonathan Ellis commented on CASSANDRA-4871:
---

Right.  Basically, wide-row mode and get_paged_slice are hacks that are 
obsolete now that we have CQL and CqlPagingRecordReader.  See 
http://www.datastax.com/dev/blog/does-cql-support-dynamic-columns-wide-rows for 
background.

 get_paged_slice does not obey SlicePredicate
 

 Key: CASSANDRA-4871
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4871
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Hadoop
Affects Versions: 1.1.0
Reporter: Scott Fines
Priority: Minor
 Attachments: CASSANDRA-4816.patch


 When experimenting with WideRow support, I noticed that it is not possible to 
 specify a bounding SlicePredicate. This means that, no matter what you may 
 wish, the entire Column Family will be used during a get_paged_slice call. 
 This is unfortunate, if (for example) you are attempting to do MapReduce over 
 a subset of your column range.
 get_paged_slice should support a SlicePredicate, which will bound the column 
 range over which data is returned. It seems like this SlicePredicate should 
 be optional, so that existing code is not broken--when the SlicePredicate is 
 not specified, have it default to going over the entire column range.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5833) Duplicate classes in Cassandra-all package.

2013-07-31 Thread Alex Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13725447#comment-13725447
 ] 

Alex Tang commented on CASSANDRA-5833:
--

Yes, the fact that cassandra-all imports cassandra-thrift means that you 
can't get away from this problem. Disabling the duplicate-finder extension is a 
sub-optimal way to fix the problem, as having the exact same class in two 
separate jar files can lead to very odd bugs later on if the files ever 
diverge.

 Duplicate classes in Cassandra-all package.
 ---

 Key: CASSANDRA-5833
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5833
 Project: Cassandra
  Issue Type: Bug
  Components: API, Packaging
Affects Versions: 1.1.6, 1.2.7
Reporter: sam schumer
  Labels: maven

 As of Cassandra-All version 1.1.6, the classes 
 org.apache.cassandra.thrift.ITransportFactory and 
 org.apache.cassandra.thrift.TFramedTransportFactory are located in both the 
 cassandra-thrift and the cassandra-all Maven JARs, and cassandra-thrift is 
 imported by the cassandra-all POM. This makes the cassandra-all package 
 unbuildable when using the duplicate-finder Maven extension. The files were 
 originally copied over due to 
 [CASSANDRA-4668|https://issues.apache.org/jira/browse/CASSANDRA-4668]. All 
 versions since have failed to build when using this Maven extension.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5826) Fix trigger directory detection code

2013-07-31 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5826:
--

Fix Version/s: 2.0 rc1

 Fix trigger directory detection code
 

 Key: CASSANDRA-5826
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5826
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 2.0 beta 2
 Environment: OS X
Reporter: Aleksey Yeschenko
Assignee: Vijay
  Labels: triggers
 Fix For: 2.0 rc1


 At least when building from source, Cassandra determines the trigger 
 directory incorrectly: C* calculates the trigger directory as 'build/triggers' 
 instead of 'triggers'.
 FBUtilities.cassandraHomeDir() is to blame, and should be replaced with 
 something more robust.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-2524) Use SSTableBoundedScanner for cleanup

2013-07-31 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-2524:
---

Attachment: 2524-v1.txt

Patch {{2524-v1.txt}} (and 
[branch|https://github.com/thobbs/cassandra/tree/CASSANDRA-2524]) builds on 
Marcus's work and adds multiple range support to SSTableScanner.  There didn't 
end up being much overlap with the work on CASSANDRA-5722.

 Use SSTableBoundedScanner for cleanup
 -

 Key: CASSANDRA-2524
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2524
 Project: Cassandra
  Issue Type: Improvement
Reporter: Stu Hood
Assignee: Tyler Hobbs
Priority: Minor
  Labels: lhf
 Fix For: 2.0.1

 Attachments: 
 0001-CASSANDRA-2524-use-SSTableBoundedScanner-for-cleanup.patch, 
 0001-Use-a-SSTableBoundedScanner-for-cleanup-and-improve-cl.txt, 
 0002-Oops.-When-indexes-or-counters-are-in-use-must-continu.txt, 2524-v1.txt


 SSTableBoundedScanner seeks rather than scanning through rows, so it would be 
 significantly more efficient than the existing per-key filtering that cleanup 
 does.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5831) Running sstableupgrade on C* 1.0 data dir, before starting C* 1.2 for the first time breaks stuff

2013-07-31 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13725523#comment-13725523
 ] 

Tyler Hobbs commented on CASSANDRA-5831:


bq. I think all we need to do here is don't run upgradesstables if the ks/cf/ 
hierarchy doesn't exist already for the system tables.

If I'm interpreting you correctly, we'll just want upgradesstables to error out 
in that case and mention something about starting Cassandra 1.1+ before running 
upgradesstables again, is that correct?

 Running sstableupgrade on C* 1.0 data dir, before starting C* 1.2 for the 
 first time breaks stuff
 -

 Key: CASSANDRA-5831
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5831
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Jeremiah Jordan
Assignee: Tyler Hobbs
Priority: Minor
 Fix For: 1.2.9


 If you try to upgrade from C* 1.0.X to 1.2.X and run offline sstableupgrade 
 to try and migrate the sstables before starting 1.2.X for the first time, it 
 messes up the system folder, because it doesn't migrate it right, and then C* 
 1.2 can't start.
 sstableupgrade should either refuse to run against a C* 1.0 data folder, or 
 migrate stuff the right way.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-2524) Use SSTableBoundedScanner for cleanup

2013-07-31 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13725525#comment-13725525
 ] 

Jonathan Ellis commented on CASSANDRA-2524:
---

With a range-aware scanner, do we still need 5722?

 Use SSTableBoundedScanner for cleanup
 -

 Key: CASSANDRA-2524
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2524
 Project: Cassandra
  Issue Type: Improvement
Reporter: Stu Hood
Assignee: Tyler Hobbs
Priority: Minor
  Labels: lhf
 Fix For: 2.0.1

 Attachments: 
 0001-CASSANDRA-2524-use-SSTableBoundedScanner-for-cleanup.patch, 
 0001-Use-a-SSTableBoundedScanner-for-cleanup-and-improve-cl.txt, 
 0002-Oops.-When-indexes-or-counters-are-in-use-must-continu.txt, 2524-v1.txt


 SSTableBoundedScanner seeks rather than scanning through rows, so it would be 
 significantly more efficient than the existing per-key filtering that cleanup 
 does.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5831) Running sstableupgrade on C* 1.0 data dir, before starting C* 1.2 for the first time breaks stuff

2013-07-31 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13725527#comment-13725527
 ] 

Jonathan Ellis commented on CASSANDRA-5831:
---

Right.

 Running sstableupgrade on C* 1.0 data dir, before starting C* 1.2 for the 
 first time breaks stuff
 -

 Key: CASSANDRA-5831
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5831
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Jeremiah Jordan
Assignee: Tyler Hobbs
Priority: Minor
 Fix For: 1.2.9


 If you try to upgrade from C* 1.0.X to 1.2.X and run offline sstableupgrade 
 to try and migrate the sstables before starting 1.2.X for the first time, it 
 messes up the system folder, because it doesn't migrate it right, and then C* 
 1.2 can't start.
 sstableupgrade should either refuse to run against a C* 1.0 data folder, or 
 migrate stuff the right way.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


git commit: nodetool history logging patch by jasobrown; reviewed by Dave Brosius for CASSANDRA-5823

2013-07-31 Thread jasobrown
Updated Branches:
  refs/heads/cassandra-1.2 1a4942583 -> ba274adb7


nodetool history logging
patch by jasobrown; reviewed by Dave Brosius for CASSANDRA-5823


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ba274adb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ba274adb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ba274adb

Branch: refs/heads/cassandra-1.2
Commit: ba274adb7a0ee5f85df93eb5f3e40423772e1c07
Parents: 1a49425
Author: Jason Brown jasedbr...@gmail.com
Authored: Mon Jul 29 11:17:29 2013 -0700
Committer: Jason Brown jasedbr...@gmail.com
Committed: Wed Jul 31 11:14:36 2013 -0700

--
 CHANGES.txt |  1 +
 bin/cqlsh   | 15 +++-
 .../org/apache/cassandra/cli/CliClient.java | 17 +++--
 src/java/org/apache/cassandra/cli/CliMain.java  | 23 +---
 .../org/apache/cassandra/tools/NodeCmd.java | 39 ++--
 .../org/apache/cassandra/utils/FBUtilities.java |  8 
 6 files changed, 88 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ba274adb/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index a809bc6..da1ec20 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -5,6 +5,7 @@
  * update default LCS sstable size to 160MB (CASSANDRA-5727)
  * Allow compacting 2Is via nodetool (CASSANDRA-5670)
  * Hex-encode non-String keys in OPP (CASSANDRA-5793)
+ * nodetool history logging (CASSANDRA-5823)
 
 
 1.2.8

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ba274adb/bin/cqlsh
--
diff --git a/bin/cqlsh b/bin/cqlsh
index 59aac0d..f0db1b3 100755
--- a/bin/cqlsh
+++ b/bin/cqlsh
@@ -118,8 +118,19 @@ from cqlshlib.formatting import format_by_type
 from cqlshlib.util import trim_if_present
 from cqlshlib.tracing import print_trace_session
 
-CONFIG_FILE = os.path.expanduser(os.path.join('~', '.cqlshrc'))
-HISTORY = os.path.expanduser(os.path.join('~', '.cqlsh_history'))
+HISTORY_DIR = os.path.expanduser(os.path.join('~', '.cassandra'))
+CONFIG_FILE = os.path.join(HISTORY_DIR, 'cqlshrc')
+HISTORY = os.path.join(HISTORY_DIR, 'cqlsh_history')
+if not os.path.exists(HISTORY_DIR):
+    os.mkdir(HISTORY_DIR)
+
+OLD_CONFIG_FILE = os.path.expanduser(os.path.join('~', '.cqlshrc'))
+if os.path.exists(OLD_CONFIG_FILE):
+    os.rename(OLD_CONFIG_FILE, CONFIG_FILE)
+OLD_HISTORY = os.path.expanduser(os.path.join('~', '.cqlsh_history'))
+if os.path.exists(OLD_HISTORY):
+    os.rename(OLD_HISTORY, HISTORY)
+
 DEFAULT_HOST = 'localhost'
 DEFAULT_PORT = 9160
 DEFAULT_CQLVER = '3'

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ba274adb/src/java/org/apache/cassandra/cli/CliClient.java
--
diff --git a/src/java/org/apache/cassandra/cli/CliClient.java 
b/src/java/org/apache/cassandra/cli/CliClient.java
index 6857aea..2229207 100644
--- a/src/java/org/apache/cassandra/cli/CliClient.java
+++ b/src/java/org/apache/cassandra/cli/CliClient.java
@@ -3032,6 +3032,7 @@ public class CliClient
 
 class CfAssumptions
 {
+    private static final String ASSUMPTIONS_FILENAME = "assumptions.json";
     //Map<KeySpace, Map<ColumnFamily, Map<Property, Value>>>
     private Map<String, Map<String, Map<String, String>>> assumptions;
     private boolean assumptionsChanged;
@@ -3041,8 +3042,16 @@ public class CliClient
     {
         assumptions = new HashMap<String, Map<String, Map<String, String>>>();
         assumptionsChanged = false;
-        assumptionDirectory = new File(System.getProperty("user.home"), ".cassandra-cli");
-        assumptionDirectory.mkdirs();
+        assumptionDirectory = FBUtilities.getToolsOutputDirectory();
+
+        File oldAssumptionDir = new File(System.getProperty("user.home") + File.separator + ".cassandra-cli");
+        if (oldAssumptionDir.exists())
+        {
+            File oldAssumptionFile = new File(oldAssumptionDir, ASSUMPTIONS_FILENAME);
+            if (oldAssumptionFile.exists())
+                FileUtils.renameWithConfirm(oldAssumptionFile, new File(assumptionDirectory, ASSUMPTIONS_FILENAME));
+            FileUtils.deleteRecursive(oldAssumptionDir);
+        }
     }
 
     public void addAssumption(String keyspace, String columnFamily, String property, String value)
@@ -3088,7 +3097,7 @@
 
     private void readAssumptions()
     {
-        File assumptionFile = new File(assumptionDirectory, "assumptions.json");
+        File assumptionFile = new File(assumptionDirectory, ASSUMPTIONS_FILENAME);
         if 

[1/2] git commit: nodetool history logging patch by jasobrown; reviewed by Dave Brosius for CASSANDRA-5823

2013-07-31 Thread jasobrown
Updated Branches:
  refs/heads/trunk 22bf2c40e -> e01c238fa


nodetool history logging
patch by jasobrown; reviewed by Dave Brosius for CASSANDRA-5823


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ba274adb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ba274adb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ba274adb

Branch: refs/heads/trunk
Commit: ba274adb7a0ee5f85df93eb5f3e40423772e1c07
Parents: 1a49425
Author: Jason Brown jasedbr...@gmail.com
Authored: Mon Jul 29 11:17:29 2013 -0700
Committer: Jason Brown jasedbr...@gmail.com
Committed: Wed Jul 31 11:14:36 2013 -0700

--
 CHANGES.txt |  1 +
 bin/cqlsh   | 15 +++-
 .../org/apache/cassandra/cli/CliClient.java | 17 +++--
 src/java/org/apache/cassandra/cli/CliMain.java  | 23 +---
 .../org/apache/cassandra/tools/NodeCmd.java | 39 ++--
 .../org/apache/cassandra/utils/FBUtilities.java |  8 
 6 files changed, 88 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ba274adb/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index a809bc6..da1ec20 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -5,6 +5,7 @@
  * update default LCS sstable size to 160MB (CASSANDRA-5727)
  * Allow compacting 2Is via nodetool (CASSANDRA-5670)
  * Hex-encode non-String keys in OPP (CASSANDRA-5793)
+ * nodetool history logging (CASSANDRA-5823)
 
 
 1.2.8

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ba274adb/bin/cqlsh
--
diff --git a/bin/cqlsh b/bin/cqlsh
index 59aac0d..f0db1b3 100755
--- a/bin/cqlsh
+++ b/bin/cqlsh
@@ -118,8 +118,19 @@ from cqlshlib.formatting import format_by_type
 from cqlshlib.util import trim_if_present
 from cqlshlib.tracing import print_trace_session
 
-CONFIG_FILE = os.path.expanduser(os.path.join('~', '.cqlshrc'))
-HISTORY = os.path.expanduser(os.path.join('~', '.cqlsh_history'))
+HISTORY_DIR = os.path.expanduser(os.path.join('~', '.cassandra'))
+CONFIG_FILE = os.path.join(HISTORY_DIR, 'cqlshrc')
+HISTORY = os.path.join(HISTORY_DIR, 'cqlsh_history')
+if not os.path.exists(HISTORY_DIR):
+    os.mkdir(HISTORY_DIR)
+
+OLD_CONFIG_FILE = os.path.expanduser(os.path.join('~', '.cqlshrc'))
+if os.path.exists(OLD_CONFIG_FILE):
+    os.rename(OLD_CONFIG_FILE, CONFIG_FILE)
+OLD_HISTORY = os.path.expanduser(os.path.join('~', '.cqlsh_history'))
+if os.path.exists(OLD_HISTORY):
+    os.rename(OLD_HISTORY, HISTORY)
+
 DEFAULT_HOST = 'localhost'
 DEFAULT_PORT = 9160
 DEFAULT_CQLVER = '3'

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ba274adb/src/java/org/apache/cassandra/cli/CliClient.java
--
diff --git a/src/java/org/apache/cassandra/cli/CliClient.java 
b/src/java/org/apache/cassandra/cli/CliClient.java
index 6857aea..2229207 100644
--- a/src/java/org/apache/cassandra/cli/CliClient.java
+++ b/src/java/org/apache/cassandra/cli/CliClient.java
@@ -3032,6 +3032,7 @@ public class CliClient
 
 class CfAssumptions
 {
+    private static final String ASSUMPTIONS_FILENAME = "assumptions.json";
     //Map<KeySpace, Map<ColumnFamily, Map<Property, Value>>>
     private Map<String, Map<String, Map<String, String>>> assumptions;
     private boolean assumptionsChanged;
@@ -3041,8 +3042,16 @@ public class CliClient
     {
         assumptions = new HashMap<String, Map<String, Map<String, String>>>();
         assumptionsChanged = false;
-        assumptionDirectory = new File(System.getProperty("user.home"), ".cassandra-cli");
-        assumptionDirectory.mkdirs();
+        assumptionDirectory = FBUtilities.getToolsOutputDirectory();
+
+        File oldAssumptionDir = new File(System.getProperty("user.home") + File.separator + ".cassandra-cli");
+        if (oldAssumptionDir.exists())
+        {
+            File oldAssumptionFile = new File(oldAssumptionDir, ASSUMPTIONS_FILENAME);
+            if (oldAssumptionFile.exists())
+                FileUtils.renameWithConfirm(oldAssumptionFile, new File(assumptionDirectory, ASSUMPTIONS_FILENAME));
+            FileUtils.deleteRecursive(oldAssumptionDir);
+        }
     }
 
     public void addAssumption(String keyspace, String columnFamily, String property, String value)
@@ -3088,7 +3097,7 @@
 
     private void readAssumptions()
     {
-        File assumptionFile = new File(assumptionDirectory, "assumptions.json");
+        File assumptionFile = new File(assumptionDirectory, ASSUMPTIONS_FILENAME);
         if 

[2/2] git commit: Merge branch 'cassandra-1.2' into trunk

2013-07-31 Thread jasobrown
Merge branch 'cassandra-1.2' into trunk

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/tools/NodeCmd.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e01c238f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e01c238f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e01c238f

Branch: refs/heads/trunk
Commit: e01c238fa1f9ef45fc098385462a8eec46001620
Parents: 22bf2c4 ba274ad
Author: Jason Brown jasedbr...@gmail.com
Authored: Wed Jul 31 11:24:41 2013 -0700
Committer: Jason Brown jasedbr...@gmail.com
Committed: Wed Jul 31 11:24:41 2013 -0700

--
 CHANGES.txt |  4 +++
 bin/cqlsh   | 15 +++--
 .../org/apache/cassandra/cli/CliClient.java | 17 ---
 src/java/org/apache/cassandra/cli/CliMain.java  | 23 +++---
 .../org/apache/cassandra/tools/NodeCmd.java | 32 ++--
 .../org/apache/cassandra/utils/FBUtilities.java |  8 +
 6 files changed, 85 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e01c238f/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e01c238f/bin/cqlsh
--
diff --cc bin/cqlsh
index 0b9c6c6,f0db1b3..21d43c6
--- a/bin/cqlsh
+++ b/bin/cqlsh
@@@ -117,11 -118,22 +117,22 @@@ from cqlshlib.formatting import format_
  from cqlshlib.util import trim_if_present
  from cqlshlib.tracing import print_trace_session
  
- CONFIG_FILE = os.path.expanduser(os.path.join('~', '.cqlshrc'))
- HISTORY = os.path.expanduser(os.path.join('~', '.cqlsh_history'))
+ HISTORY_DIR = os.path.expanduser(os.path.join('~', '.cassandra'))
+ CONFIG_FILE = os.path.join(HISTORY_DIR, 'cqlshrc')
+ HISTORY = os.path.join(HISTORY_DIR, 'cqlsh_history')
+ if not os.path.exists(HISTORY_DIR):
+     os.mkdir(HISTORY_DIR)
+ 
+ OLD_CONFIG_FILE = os.path.expanduser(os.path.join('~', '.cqlshrc'))
+ if os.path.exists(OLD_CONFIG_FILE):
+     os.rename(OLD_CONFIG_FILE, CONFIG_FILE)
+ OLD_HISTORY = os.path.expanduser(os.path.join('~', '.cqlsh_history'))
+ if os.path.exists(OLD_HISTORY):
+     os.rename(OLD_HISTORY, HISTORY)
+ 
  DEFAULT_HOST = 'localhost'
  DEFAULT_PORT = 9160
 -DEFAULT_CQLVER = '3'
 +DEFAULT_CQLVER = '3.1.0'
  DEFAULT_TRANSPORT_FACTORY = 'cqlshlib.tfactory.regular_transport_factory'
  
  DEFAULT_TIME_FORMAT = '%Y-%m-%d %H:%M:%S%z'

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e01c238f/src/java/org/apache/cassandra/cli/CliClient.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e01c238f/src/java/org/apache/cassandra/cli/CliMain.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e01c238f/src/java/org/apache/cassandra/tools/NodeCmd.java
--
diff --cc src/java/org/apache/cassandra/tools/NodeCmd.java
index 24a4c57,f6d4310..1d415a8
--- a/src/java/org/apache/cassandra/tools/NodeCmd.java
+++ b/src/java/org/apache/cassandra/tools/NodeCmd.java
@@@ -29,9 -28,13 +28,11 @@@ import java.util.*
  import java.util.Map.Entry;
  import java.util.concurrent.ExecutionException;
  
+ import com.google.common.base.Joiner;
  import com.google.common.collect.LinkedHashMultimap;
  import com.google.common.collect.Maps;
+ import org.apache.cassandra.utils.FBUtilities;
  import org.apache.commons.cli.*;
 -import org.yaml.snakeyaml.Loader;
 -import org.yaml.snakeyaml.TypeDescription;
  import org.yaml.snakeyaml.Yaml;
  import org.yaml.snakeyaml.constructor.Constructor;
  
@@@ -1026,6 -1089,6 +1028,9 @@@ public class NodeCm
  
          NodeCmd nodeCmd = new NodeCmd(probe);
  
++        //print history here after we've already determined we can reasonably call cassandra
++        printHistory(args, cmd);
++
          // Execute the requested command.
          String[] arguments = cmd.getCommandArguments();
          String tag;
@@@ -1256,6 -1333,34 +1261,27 @@@
          System.exit(probe.isFailed() ? 1 : 0);
      }
  
+     private static void printHistory(String[] args, ToolCommandLine cmd)
+     {
+         //don't bother to print if no args passed (meaning, nodetool is just printing out the sub-commands list)
+         if (args.length == 0)
+             return;
+         String cmdLine = Joiner.on(" ").skipNulls().join(args);
+         final String password = cmd.getOptionValue(PASSWORD_OPT.left);
+         if (password != null)
+             cmdLine = cmdLine.replace(password, "<hidden>");
+ 
 -        FileWriter writer = null;
 -        try

[jira] [Commented] (CASSANDRA-5823) nodetool history logging

2013-07-31 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13725547#comment-13725547
 ] 

Jason Brown commented on CASSANDRA-5823:


Decided to punt on the additional check to see if the new file is there before 
overwriting. Didn't think it added much value, and it would just make the code 
even messier (for a very minor edge case (us C* devs)).

Committed to 1.2 and trunk.

 nodetool history logging
 

 Key: CASSANDRA-5823
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5823
 Project: Cassandra
  Issue Type: New Feature
  Components: Tools
Reporter: Jason Brown
Assignee: Jason Brown
Priority: Minor
 Fix For: 1.2.8, 2.0 rc1

 Attachments: 5823-v1.patch, 5823-v2.patch


 Capture the commands and time executed from nodetool into a log file, similar 
 to the cli.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4774) IndexOutOfBoundsException in org.apache.cassandra.gms.Gossiper.sendGossip

2013-07-31 Thread Chris Lohfink (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13725548#comment-13725548
 ] 

Chris Lohfink commented on CASSANDRA-4774:
--

Saw this once on a 1.1.5 node under high load with significant heap pressure; 
GCInspector reported the heap as 95% full right before.

{code}
 WARN [ScheduledTasks:1] 2013-07-07 19:37:03,834 GCInspector.java (line 145) 
Heap is 0.9552299542288667 full.  You may need to reduce memtable and/or cache 
sizes.  Cassandra will now flush up to the two largest memtables to free up 
memory.  Adjust flush_largest_memtables_at threshold in cassandra.yaml if you 
don't want Cassandra to do this automatically
 WARN [ScheduledTasks:1] 2013-07-07 19:37:03,834 StorageService.java (line 
2855) Flushing CFS(Keyspace='x', ColumnFamily='x') to relieve memory pressure
 INFO [ScheduledTasks:1] 2013-07-07 19:37:03,834 ColumnFamilyStore.java (line 
659) Enqueuing flush of Memtable-x@766608353(261434/1801824 serialized/live 
bytes, 5150 ops)
 INFO [GossipStage:1] 2013-07-07 19:37:05,125 Gossiper.java (line 816) 
InetAddress /10.x.x.x is now UP
 INFO [GossipStage:1] 2013-07-07 19:37:05,146 Gossiper.java (line 816) 
InetAddress /10.x.x.x is now UP
ERROR [GossipTasks:1] 2013-07-07 19:37:05,155 Gossiper.java (line 171) Gossip 
error
java.lang.IndexOutOfBoundsException: Index: 10, Size: 10
at java.util.ArrayList.RangeCheck(ArrayList.java:547)
at java.util.ArrayList.get(ArrayList.java:322)
at org.apache.cassandra.gms.Gossiper.sendGossip(Gossiper.java:560)
at 
org.apache.cassandra.gms.Gossiper.doGossipToUnreachableMember(Gossiper.java:594)
at org.apache.cassandra.gms.Gossiper.access$300(Gossiper.java:61)
at org.apache.cassandra.gms.Gossiper$GossipTask.run(Gossiper.java:143)
at 
org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:79)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at 
java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
{code}


 IndexOutOfBoundsException in org.apache.cassandra.gms.Gossiper.sendGossip
 -

 Key: CASSANDRA-4774
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4774
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.0.0
 Environment: Saw this when looking through some logs in version 1.0.0; the 
 system was under a lot of load.
Reporter: Benjamin Coverston
Assignee: Brandon Williams
Priority: Minor

 ERROR [GossipTasks:1] 2012-10-06 10:47:48,390 Gossiper.java (line 169) Gossip 
 error
 java.lang.IndexOutOfBoundsException: Index: 13, Size: 5
   at java.util.ArrayList.RangeCheck(ArrayList.java:547)
   at java.util.ArrayList.get(ArrayList.java:322)
   at org.apache.cassandra.gms.Gossiper.sendGossip(Gossiper.java:541)
   at 
 org.apache.cassandra.gms.Gossiper.doGossipToUnreachableMember(Gossiper.java:575)
   at org.apache.cassandra.gms.Gossiper.access$300(Gossiper.java:59)
   at org.apache.cassandra.gms.Gossiper$GossipTask.run(Gossiper.java:141)
   at 
 org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:79)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at 
 java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 

[jira] [Commented] (CASSANDRA-2524) Use SSTableBoundedScanner for cleanup

2013-07-31 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13725565#comment-13725565
 ] 

Tyler Hobbs commented on CASSANDRA-2524:


bq. With a range-aware scanner, do we still need 5722?

When there are counters and secondary indexes, a full scan would still need to 
be performed, so 5722 would let us avoid that case when it's unnecessary.

 Use SSTableBoundedScanner for cleanup
 -

 Key: CASSANDRA-2524
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2524
 Project: Cassandra
  Issue Type: Improvement
Reporter: Stu Hood
Assignee: Tyler Hobbs
Priority: Minor
  Labels: lhf
 Fix For: 2.0.1

 Attachments: 
 0001-CASSANDRA-2524-use-SSTableBoundedScanner-for-cleanup.patch, 
 0001-Use-a-SSTableBoundedScanner-for-cleanup-and-improve-cl.txt, 
 0002-Oops.-When-indexes-or-counters-are-in-use-must-continu.txt, 2524-v1.txt


 SSTableBoundedScanner seeks rather than scanning through rows, so it would be 
 significantly more efficient than the existing per-key filtering that cleanup 
 does.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-4774) IndexOutOfBoundsException in org.apache.cassandra.gms.Gossiper.sendGossip

2013-07-31 Thread Chris Lohfink (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Lohfink updated CASSANDRA-4774:
-

Attachment: patch.txt

 IndexOutOfBoundsException in org.apache.cassandra.gms.Gossiper.sendGossip
 -

 Key: CASSANDRA-4774
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4774
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.0.0
 Environment: Saw this when looking through some logs in version 1.0.0; the 
 system was under a lot of load.
Reporter: Benjamin Coverston
Assignee: Brandon Williams
Priority: Minor
 Attachments: patch.txt


 ERROR [GossipTasks:1] 2012-10-06 10:47:48,390 Gossiper.java (line 169) Gossip 
 error
 java.lang.IndexOutOfBoundsException: Index: 13, Size: 5
   at java.util.ArrayList.RangeCheck(ArrayList.java:547)
   at java.util.ArrayList.get(ArrayList.java:322)
   at org.apache.cassandra.gms.Gossiper.sendGossip(Gossiper.java:541)
   at 
 org.apache.cassandra.gms.Gossiper.doGossipToUnreachableMember(Gossiper.java:575)
   at org.apache.cassandra.gms.Gossiper.access$300(Gossiper.java:59)
   at org.apache.cassandra.gms.Gossiper$GossipTask.run(Gossiper.java:141)
   at 
 org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:79)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at 
 java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4774) IndexOutOfBoundsException in org.apache.cassandra.gms.Gossiper.sendGossip

2013-07-31 Thread Chris Lohfink (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13725587#comment-13725587
 ] 

Chris Lohfink commented on CASSANDRA-4774:
--

doGossipToUnreachableMember calls
{code}
sendGossip(prod, unreachableEndpoints.keySet());
{code}
The keySet returned is backed by the map, so changes to the map are reflected in 
the set. Since sendGossip gets the size, then picks a random index, then does a 
get on a list created from the set, this creates a race condition where the list 
can be smaller than the keySet. Attached a possible change to address this.
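
For illustration, here is a minimal, self-contained sketch of the shape of that 
race. This is hypothetical code, not the attached patch.txt and not the actual 
Gossiper internals; the names are made up.

{code}
import java.net.InetAddress;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.Set;

// Hypothetical, simplified illustration of the race described above.
public class GossipRaceSketch
{
    private static final Random random = new Random();

    // Racy shape: the size is read from the live keySet view, and the list copy is
    // taken afterwards, so a concurrent removal can make the index run past the end.
    static InetAddress pickRacy(Set<InetAddress> epSet)
    {
        int size = epSet.size();
        if (size == 0)
            return null;
        List<InetAddress> eps = new ArrayList<InetAddress>(epSet);
        return eps.get(random.nextInt(size)); // may throw IndexOutOfBoundsException
    }

    // Safer shape: snapshot once, then derive both the size and the element from it.
    static InetAddress pickSafe(Set<InetAddress> epSet)
    {
        List<InetAddress> eps = new ArrayList<InetAddress>(epSet);
        return eps.isEmpty() ? null : eps.get(random.nextInt(eps.size()));
    }
}
{code}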

 IndexOutOfBoundsException in org.apache.cassandra.gms.Gossiper.sendGossip
 -

 Key: CASSANDRA-4774
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4774
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.0.0
 Environment: Saw this when looking through some logs in version 1.0.0; the 
 system was under a lot of load.
Reporter: Benjamin Coverston
Assignee: Brandon Williams
Priority: Minor
 Attachments: patch.txt


 ERROR [GossipTasks:1] 2012-10-06 10:47:48,390 Gossiper.java (line 169) Gossip 
 error
 java.lang.IndexOutOfBoundsException: Index: 13, Size: 5
   at java.util.ArrayList.RangeCheck(ArrayList.java:547)
   at java.util.ArrayList.get(ArrayList.java:322)
   at org.apache.cassandra.gms.Gossiper.sendGossip(Gossiper.java:541)
   at 
 org.apache.cassandra.gms.Gossiper.doGossipToUnreachableMember(Gossiper.java:575)
   at org.apache.cassandra.gms.Gossiper.access$300(Gossiper.java:59)
   at org.apache.cassandra.gms.Gossiper$GossipTask.run(Gossiper.java:141)
   at 
 org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:79)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at 
 java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-4774) IndexOutOfBoundsException in org.apache.cassandra.gms.Gossiper.sendGossip

2013-07-31 Thread Chris Lohfink (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13725587#comment-13725587
 ] 

Chris Lohfink edited comment on CASSANDRA-4774 at 7/31/13 6:48 PM:
---

doGossipToUnreachableMember calls
{code}
sendGossip(prod, unreachableEndpoints.keySet());
{code}
The keySet returned is backed by the map, so changes to the map are reflected in 
the set. Since sendGossip gets the size, then picks a random index, then does a 
get on a list created from the set, this creates a race condition where the list 
can be smaller than the keySet. Attached a possible change to address this.

Note: patch is off of 1.1.

  was (Author: cnlwsu):
doGossipToUnreachableMember calls
{code}
sendGossip(prod, unreachableEndpoints.keySet());
{code}
The keySet returned is backed by the map, so changes to the map are reflected in 
the set. Since sendGossip gets the size, then picks a random index, then does a 
get on a list created from the set, this creates a race condition where the list 
can be smaller than the keySet. Attached a possible change to address this.
  
 IndexOutOfBoundsException in org.apache.cassandra.gms.Gossiper.sendGossip
 -

 Key: CASSANDRA-4774
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4774
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.0.0
 Environment: Saw this when looking through some logs in version 1.0.0; the 
 system was under a lot of load.
Reporter: Benjamin Coverston
Assignee: Brandon Williams
Priority: Minor
 Attachments: patch.txt


 ERROR [GossipTasks:1] 2012-10-06 10:47:48,390 Gossiper.java (line 169) Gossip 
 error
 java.lang.IndexOutOfBoundsException: Index: 13, Size: 5
   at java.util.ArrayList.RangeCheck(ArrayList.java:547)
   at java.util.ArrayList.get(ArrayList.java:322)
   at org.apache.cassandra.gms.Gossiper.sendGossip(Gossiper.java:541)
   at 
 org.apache.cassandra.gms.Gossiper.doGossipToUnreachableMember(Gossiper.java:575)
   at org.apache.cassandra.gms.Gossiper.access$300(Gossiper.java:59)
   at org.apache.cassandra.gms.Gossiper$GossipTask.run(Gossiper.java:141)
   at 
 org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:79)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at 
 java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-2698) Instrument repair to be able to assess it's efficiency (precision)

2013-07-31 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13725604#comment-13725604
 ] 

Benedict commented on CASSANDRA-2698:
-

Hi Yuki,

Had some fun rebasing, but think everything looks good now. A few things to 
note:

1) I'm not sure what you mean by not serializing those: for correctness I 
serialize all of the data in a node. Do you want me to change the serialization 
methods to not send these values? I don't log them at the other end, but I would 
prefer they were sent to ensure no surprises for users of the data, and also 
because of some optimisations to difference() that rely on knowing the number 
of rows for each sub-tree. It's not a tremendous amount of data after all.

2) I've modified DifferencerTest, and created two versions of the 
testDifference() method: one that tests differences on an empty tree, and one 
which tests a tree that has been populated with rows. Previously only the 
former was tested. This is because the change I made to difference() for my 
previous patch (which I have retained, and which ensures contiguous ranges are 
emitted where possible) treats the entire empty tree as one contiguous 
difference range, since the only non-empty sub-range in the tree is different, 
and that was breaking the previous test. The new test now works with the fully 
populated tree, and the previous test now confirms that the whole tree is 
considered different when it is empty. It's possible you may want to not deploy 
these improvements in this patch, but it seems a good idea to me whilst it's 
being modified, and given that I'd made the change already. Since we're not 
logging the ranges themselves at this time it won't have any direct impact, but 
it will be useful if that ever changes, and might help with future debugging.

3) I've updated the MerkleTreeTest methods to test the serialization and 
difference changes, and introduced a new HistogramBuilderTest.

4) The histogram is built differently from my first patch, and is described in 
HistogramBuilder. Basically, rather than creating neat linear ranges, I 
calculate the mean and create ranges that are multiples of the standard 
deviation either side of the mean, up to min/max (or, in this case, 3 stdevs, 
plus one range to min/max); a rough sketch follows after this list.

5) One thing we might want to consider changing is the format of the 
EstimatedHistogram ranges in the log messages. I've faithfully reproduced the 
boundary conventions of the EstimatedHistogram, but this is not a user-friendly 
convention: it has an exclusive lower bound and an inclusive upper bound, as 
opposed to the typical opposite convention. As such we get ranges like (-1, 0] 
to represent the range containing only 0, as opposed to [0, 1).
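
As promised in (4), a rough, hypothetical sketch of that bucketing, under the 
assumption that interior boundaries sit at mean plus or minus k standard 
deviations (k up to 3) clamped to the observed min/max; this is not the actual 
HistogramBuilder from the patch.

{code}
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch only: boundaries at mean + k*stdev for k in [-3, 3],
// clamped to [min, max], with one range out to min and max on each side.
public class StddevBucketsSketch
{
    static long[] boundaries(long[] values)
    {
        if (values.length == 0)
            return new long[0];

        long min = Long.MAX_VALUE, max = Long.MIN_VALUE;
        double sum = 0, sumOfSquares = 0;
        for (long v : values)
        {
            min = Math.min(min, v);
            max = Math.max(max, v);
            sum += v;
            sumOfSquares += (double) v * v;
        }
        double mean = sum / values.length;
        double stdev = Math.sqrt(Math.max(0, sumOfSquares / values.length - mean * mean));

        List<Long> bounds = new ArrayList<Long>();
        bounds.add(min);
        for (int k = -3; k <= 3; k++)
        {
            long b = Math.round(mean + k * stdev);
            if (b > bounds.get(bounds.size() - 1) && b < max)
                bounds.add(b); // skip duplicates and anything outside (min, max)
        }
        if (max > bounds.get(bounds.size() - 1))
            bounds.add(max);

        long[] result = new long[bounds.size()];
        for (int i = 0; i < result.length; i++)
            result[i] = bounds.get(i);
        return result;
    }
}
{code}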

Think that's everything. Should respond quickly to queries at the moment, so 
drop me a line if you have any questions.

 Instrument repair to be able to assess it's efficiency (precision)
 --

 Key: CASSANDRA-2698
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2698
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Benedict
Priority: Minor
  Labels: lhf
 Attachments: nodetool_repair_and_cfhistogram.tar.gz, 
 patch_2698_v1.txt, patch.diff, patch-rebased.diff, patch.taketwo.alpha.diff


 Some reports indicate that repair sometimes transfers huge amounts of data. One 
 hypothesis is that the merkle tree precision may deteriorate too much at some 
 data size. To check this hypothesis, it would be reasonable to gather 
 statistics during the merkle tree building of how many rows each merkle tree 
 range accounts for (and the size that this represents). It is probably an 
 interesting statistic to have anyway.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5826) Fix trigger directory detection code

2013-07-31 Thread Vijay (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vijay updated CASSANDRA-5826:
-

Attachment: 0001-5826.patch

Attached a small patch that moves the trigger directory into the conf directory; 
hope that is fine. That way we can just search for the triggers directory on the 
classpath (which is conf). Thanks!
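
For illustration, roughly the kind of classpath lookup this implies (a 
hypothetical sketch only, not the attached 0001-5826.patch; the class and 
method names are made up):

{code}
import java.io.File;
import java.net.URISyntaxException;
import java.net.URL;

// Hypothetical sketch: locate a "triggers" directory via the classpath (the conf
// directory is on Cassandra's classpath), instead of guessing cassandra.home.
public class TriggerDirLookupSketch
{
    static File triggersDirectory()
    {
        URL url = TriggerDirLookupSketch.class.getClassLoader().getResource("triggers");
        if (url == null)
            return null;
        try
        {
            return new File(url.toURI());
        }
        catch (URISyntaxException e)
        {
            return new File(url.getPath());
        }
    }
}
{code}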

 Fix trigger directory detection code
 

 Key: CASSANDRA-5826
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5826
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 2.0 beta 2
 Environment: OS X
Reporter: Aleksey Yeschenko
Assignee: Vijay
  Labels: triggers
 Fix For: 2.0 rc1

 Attachments: 0001-5826.patch


 At least when building from source, Cassandra determines the trigger 
 directory incorrectly: C* calculates the trigger directory as 'build/triggers' 
 instead of 'triggers'.
 FBUtilities.cassandraHomeDir() is to blame, and should be replaced with 
 something more robust.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-2698) Instrument repair to be able to assess it's efficiency (precision)

2013-07-31 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-2698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13725607#comment-13725607
 ] 

Benedict edited comment on CASSANDRA-2698 at 7/31/13 7:06 PM:
--

Oh, and patch is against main trunk, as before

  was (Author: benedict):
against main trunk, as before
  
 Instrument repair to be able to assess it's efficiency (precision)
 --

 Key: CASSANDRA-2698
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2698
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Benedict
Priority: Minor
  Labels: lhf
 Attachments: nodetool_repair_and_cfhistogram.tar.gz, 
 patch_2698_v1.txt, patch.diff, patch-rebased.diff, patch.taketwo.alpha.diff, 
 patch.taketwo.forreview.diff


 Some reports indicate that repair sometimes transfers huge amounts of data. One 
 hypothesis is that the merkle tree precision may deteriorate too much at some 
 data size. To check this hypothesis, it would be reasonable to gather 
 statistics during the merkle tree building of how many rows each merkle tree 
 range accounts for (and the size that this represents). It is probably an 
 interesting statistic to have anyway.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-2698) Instrument repair to be able to assess its efficiency (precision)

2013-07-31 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-2698:


Attachment: patch.taketwo.forreview.diff

against main trunk, as before

 Instrument repair to be able to assess its efficiency (precision)
 --

 Key: CASSANDRA-2698
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2698
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Benedict
Priority: Minor
  Labels: lhf
 Attachments: nodetool_repair_and_cfhistogram.tar.gz, 
 patch_2698_v1.txt, patch.diff, patch-rebased.diff, patch.taketwo.alpha.diff, 
 patch.taketwo.forreview.diff


 Some reports indicate that repair sometimes transfers huge amounts of data. One 
 hypothesis is that the merkle tree precision may deteriorate too much at some 
 data size. To check this hypothesis, it would be reasonable to gather 
 statistics during the merkle tree building of how many rows each merkle tree 
 range accounts for (and the size that this represents). It is probably an 
 interesting statistic to have anyway.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-2524) Use SSTableBoundedScanner for cleanup

2013-07-31 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-2524:
--

Reviewer: krummas  (was: jbellis)

 Use SSTableBoundedScanner for cleanup
 -

 Key: CASSANDRA-2524
 URL: https://issues.apache.org/jira/browse/CASSANDRA-2524
 Project: Cassandra
  Issue Type: Improvement
Reporter: Stu Hood
Assignee: Tyler Hobbs
Priority: Minor
  Labels: lhf
 Fix For: 2.0.1

 Attachments: 
 0001-CASSANDRA-2524-use-SSTableBoundedScanner-for-cleanup.patch, 
 0001-Use-a-SSTableBoundedScanner-for-cleanup-and-improve-cl.txt, 
 0002-Oops.-When-indexes-or-counters-are-in-use-must-continu.txt, 2524-v1.txt


 SSTableBoundedScanner seeks rather than scanning through rows, so it would be 
 significantly more efficient than the existing per-key filtering that cleanup 
 does.
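
 To illustrate the difference (with hypothetical interfaces, not Cassandra's actual scanner API): a bounded scanner seeks directly to the locally owned ranges, while the per-key approach reads every row and filters it.

{code:java}
import java.util.List;

// Illustrative sketch only: the interfaces below are hypothetical stand-ins,
// not Cassandra's SSTableScanner/SSTableBoundedScanner API.
interface RowScanner
{
    void seekTo(long token);  // jump to the first row with token >= the argument
    boolean hasNext();
    long nextToken();         // advance and return the row's token
}

final class Cleanup
{
    // Old style: read every row and filter it against the owned (left, right] ranges.
    static long filterEverything(RowScanner scanner, List<long[]> ownedRanges)
    {
        long kept = 0;
        while (scanner.hasNext())
        {
            long token = scanner.nextToken();
            for (long[] range : ownedRanges)
            {
                if (token > range[0] && token <= range[1])
                {
                    kept++;
                    break;
                }
            }
        }
        return kept;
    }

    // Bounded style: seek to each owned range and read only the rows inside it.
    static long scanOwnedRangesOnly(RowScanner scanner, List<long[]> ownedRanges)
    {
        long kept = 0;
        for (long[] range : ownedRanges)
        {
            scanner.seekTo(range[0] + 1);
            while (scanner.hasNext() && scanner.nextToken() <= range[1])
                kept++;
        }
        return kept;
    }
}
{code}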

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Reopened] (CASSANDRA-4774) IndexOutOfBoundsException in org.apache.cassandra.gms.Gossiper.sendGossip

2013-07-31 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reopened CASSANDRA-4774:
---


 IndexOutOfBoundsException in org.apache.cassandra.gms.Gossiper.sendGossip
 -

 Key: CASSANDRA-4774
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4774
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.0.0
 Environment: Saw this when looking through some logs in version 1.0.0; 
 the system was under a lot of load.
Reporter: Benjamin Coverston
Assignee: Brandon Williams
Priority: Minor
 Attachments: patch.txt


 ERROR [GossipTasks:1] 2012-10-06 10:47:48,390 Gossiper.java (line 169) Gossip 
 error
 java.lang.IndexOutOfBoundsException: Index: 13, Size: 5
   at java.util.ArrayList.RangeCheck(ArrayList.java:547)
   at java.util.ArrayList.get(ArrayList.java:322)
   at org.apache.cassandra.gms.Gossiper.sendGossip(Gossiper.java:541)
   at 
 org.apache.cassandra.gms.Gossiper.doGossipToUnreachableMember(Gossiper.java:575)
   at org.apache.cassandra.gms.Gossiper.access$300(Gossiper.java:59)
   at org.apache.cassandra.gms.Gossiper$GossipTask.run(Gossiper.java:141)
   at 
 org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:79)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at 
 java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4687) Exception: DecoratedKey(xxx, yyy) != DecoratedKey(zzz, kkk)

2013-07-31 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13725630#comment-13725630
 ] 

Balázs Póka commented on CASSANDRA-4687:


I've just reproduced it with Cassandra 1.2.6 (on an Ubuntu 12.04 OpenVZ box), on a 
1-node cluster.

I have multiple big column families, and have been doing lots of reads/writes 
while running a data migration program. The CFs that get most of the reads and 
writes seem to be the most affected.

Log of the first exception and lines immediately before it:

INFO [OptionalTasks:1] 2013-07-23 16:36:08,651 ColumnFamilyStore.java (line 
631) Enqueuing flush of Memtable-measuredata_201305@1749058828(11711/26488 
serialized/live bytes, 276 ops)
 INFO [FlushWriter:755] 2013-07-23 16:36:08,681 Memtable.java (line 461) 
Writing Memtable-measuredata_201305@1749058828(11711/26488 serialized/live 
bytes, 276 ops)
 INFO [FlushWriter:755] 2013-07-23 16:36:08,789 Memtable.java (line 495) 
Completed flushing 
/mnt/db/cassandra/gps/measuredata_201305/gps-measuredata_201305-ic-1-Data.db 
(7808 bytes) for commitlog position ReplayPosition(segmentId=136978869451
5, position=7660)
ERROR [ReadStage:55564] 2013-07-23 17:10:00,268 CassandraDaemon.java (line 175) 
Exception in thread Thread[ReadStage:55564,5,main]
java.lang.RuntimeException: java.lang.IllegalArgumentException: unable to seek 
to position 60965 in 
/mnt/db/cassandra/gps/measuredata_201305/gps-measuredata_201305-ic-1-Data.db 
(12801 bytes) in read-only mode
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1582)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.IllegalArgumentException: unable to seek to position 60965 
in /mnt/db/cassandra/gps/measuredata_201305/gps-measuredata_201305-ic-1-Data.db 
(12801 bytes) in read-only mode
at 
org.apache.cassandra.io.util.RandomAccessReader.seek(RandomAccessReader.java:306)
at 
org.apache.cassandra.io.util.PoolingSegmentedFile.getSegment(PoolingSegmentedFile.java:42)
at 
org.apache.cassandra.io.sstable.SSTableReader.getFileDataInput(SSTableReader.java:976)
at 
org.apache.cassandra.db.columniterator.SSTableNamesIterator.createFileDataInput(SSTableNamesIterator.java:94)
at 
org.apache.cassandra.db.columniterator.SSTableNamesIterator.read(SSTableNamesIterator.java:112)
at 
org.apache.cassandra.db.columniterator.SSTableNamesIterator.init(SSTableNamesIterator.java:60)
at 
org.apache.cassandra.db.filter.NamesQueryFilter.getSSTableColumnIterator(NamesQueryFilter.java:81)
at 
org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:68)
at 
org.apache.cassandra.db.CollationController.collectTimeOrderedData(CollationController.java:133)
at 
org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:65)
at 
org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1357)
at 
org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1214)
at 
org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1126)
at org.apache.cassandra.db.Table.getRow(Table.java:347)
at 
org.apache.cassandra.db.SliceByNamesReadCommand.getRow(SliceByNamesReadCommand.java:64)
at 
org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1052)
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1578)
... 3 more

Clearing the key cache with nodetool invalidatekeycache effectively fixed the 
problem. I have not disabled the key cache; the row cache is disabled.

 Exception: DecoratedKey(xxx, yyy) != DecoratedKey(zzz, kkk)
 ---

 Key: CASSANDRA-4687
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4687
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: CentOS 6.3 64-bit, Oracle JRE 1.6.0.33 64-bit, single 
 node cluster
Reporter: Leonid Shalupov
Priority: Minor
 Attachments: 4687-debugging.txt


 Under heavy write load, Cassandra sometimes fails with an assertion error.
 git bisect leads to commit 295aedb278e7a495213241b66bc46d763fd4ce66.
 It works fine if the global key/row caches are disabled in code.
 {quote}
 java.lang.AssertionError: DecoratedKey(xxx, yyy) != DecoratedKey(zzz, kkk) in 
 /var/lib/cassandra/data/...-he-1-Data.db
   at 
 org.apache.cassandra.db.columniterator.SSTableSliceIterator.init(SSTableSliceIterator.java:60)
   at 
 

[jira] [Updated] (CASSANDRA-4774) IndexOutOfBoundsException in org.apache.cassandra.gms.Gossiper.sendGossip

2013-07-31 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4774:
--

 Reviewer: brandon.williams
Fix Version/s: 1.2.9
 Assignee: Chris Lohfink  (was: Brandon Williams)

 IndexOutOfBoundsException in org.apache.cassandra.gms.Gossiper.sendGossip
 -

 Key: CASSANDRA-4774
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4774
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.0.0
 Environment: Saw this when looking through some logs in version 1.0.0; 
 the system was under a lot of load.
Reporter: Benjamin Coverston
Assignee: Chris Lohfink
Priority: Minor
 Fix For: 1.2.9

 Attachments: patch.txt


 ERROR [GossipTasks:1] 2012-10-06 10:47:48,390 Gossiper.java (line 169) Gossip 
 error
 java.lang.IndexOutOfBoundsException: Index: 13, Size: 5
   at java.util.ArrayList.RangeCheck(ArrayList.java:547)
   at java.util.ArrayList.get(ArrayList.java:322)
   at org.apache.cassandra.gms.Gossiper.sendGossip(Gossiper.java:541)
   at 
 org.apache.cassandra.gms.Gossiper.doGossipToUnreachableMember(Gossiper.java:575)
   at org.apache.cassandra.gms.Gossiper.access$300(Gossiper.java:59)
   at org.apache.cassandra.gms.Gossiper$GossipTask.run(Gossiper.java:141)
   at 
 org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:79)
   at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
   at 
 java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
   at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-4687) Exception: DecoratedKey(xxx, yyy) != DecoratedKey(zzz, kkk)

2013-07-31 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13725630#comment-13725630
 ] 

Balázs Póka edited comment on CASSANDRA-4687 at 7/31/13 7:36 PM:
-

I've just reproduced it with Cassandra 1.2.6 (on an Ubuntu 12.04 OpenVZ box), on a 
1-node cluster running Oracle Java 1.7.0_u21.

I have multiple big column families, and have been doing lots of reads/writes 
while running a data migration program. The CFs that get most of the reads and 
writes seem to be the most affected.

Log of the first exception and lines immediately before it:

INFO [OptionalTasks:1] 2013-07-23 16:36:08,651 ColumnFamilyStore.java (line 
631) Enqueuing flush of Memtable-measuredata_201305@1749058828(11711/26488 
serialized/live bytes, 276 ops)
 INFO [FlushWriter:755] 2013-07-23 16:36:08,681 Memtable.java (line 461) 
Writing Memtable-measuredata_201305@1749058828(11711/26488 serialized/live 
bytes, 276 ops)
 INFO [FlushWriter:755] 2013-07-23 16:36:08,789 Memtable.java (line 495) 
Completed flushing 
/mnt/db/cassandra/gps/measuredata_201305/gps-measuredata_201305-ic-1-Data.db 
(7808 bytes) for commitlog position ReplayPosition(segmentId=136978869451
5, position=7660)
ERROR [ReadStage:55564] 2013-07-23 17:10:00,268 CassandraDaemon.java (line 175) 
Exception in thread Thread[ReadStage:55564,5,main]
java.lang.RuntimeException: java.lang.IllegalArgumentException: unable to seek 
to position 60965 in 
/mnt/db/cassandra/gps/measuredata_201305/gps-measuredata_201305-ic-1-Data.db 
(12801 bytes) in read-only mode
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1582)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.IllegalArgumentException: unable to seek to position 60965 
in /mnt/db/cassandra/gps/measuredata_201305/gps-measuredata_201305-ic-1-Data.db 
(12801 bytes) in read-only mode
at 
org.apache.cassandra.io.util.RandomAccessReader.seek(RandomAccessReader.java:306)
at 
org.apache.cassandra.io.util.PoolingSegmentedFile.getSegment(PoolingSegmentedFile.java:42)
at 
org.apache.cassandra.io.sstable.SSTableReader.getFileDataInput(SSTableReader.java:976)
at 
org.apache.cassandra.db.columniterator.SSTableNamesIterator.createFileDataInput(SSTableNamesIterator.java:94)
at 
org.apache.cassandra.db.columniterator.SSTableNamesIterator.read(SSTableNamesIterator.java:112)
at 
org.apache.cassandra.db.columniterator.SSTableNamesIterator.init(SSTableNamesIterator.java:60)
at 
org.apache.cassandra.db.filter.NamesQueryFilter.getSSTableColumnIterator(NamesQueryFilter.java:81)
at 
org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:68)
at 
org.apache.cassandra.db.CollationController.collectTimeOrderedData(CollationController.java:133)
at 
org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:65)
at 
org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1357)
at 
org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1214)
at 
org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1126)
at org.apache.cassandra.db.Table.getRow(Table.java:347)
at 
org.apache.cassandra.db.SliceByNamesReadCommand.getRow(SliceByNamesReadCommand.java:64)
at 
org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1052)
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1578)
... 3 more

Clearing the key cache with nodetool invalidatekeycache effectively fixed the 
problem. I have not disabled the key cache; the row cache is disabled.

  was (Author: pokabalazs):
I've just reproduced it with Cassandra 1.2.6 (on an Ubuntu 12.04 OpenVZ box), 
on a 1-node cluster.

I have multiple big column families, and have been doing lots of reads/writes 
while running a data migration program. The CFs that get most of the reads and 
writes seem to be the most affected.

Log of the first exception and lines immediately before it:

INFO [OptionalTasks:1] 2013-07-23 16:36:08,651 ColumnFamilyStore.java (line 
631) Enqueuing flush of Memtable-measuredata_201305@1749058828(11711/26488 
serialized/live bytes, 276 ops)
 INFO [FlushWriter:755] 2013-07-23 16:36:08,681 Memtable.java (line 461) 
Writing Memtable-measuredata_201305@1749058828(11711/26488 serialized/live 
bytes, 276 ops)
 INFO [FlushWriter:755] 2013-07-23 16:36:08,789 Memtable.java (line 495) 
Completed flushing 
/mnt/db/cassandra/gps/measuredata_201305/gps-measuredata_201305-ic-1-Data.db 
(7808 bytes) for 

[jira] [Updated] (CASSANDRA-5831) Running sstableupgrade on C* 1.0 data dir, before starting C* 1.2 for the first time breaks stuff

2013-07-31 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-5831:
---

Attachment: 0001-Check-for-current-directory-layout-before-upgrading.patch

Attached patch 0001 does exactly that: it checks for the current directory layout before upgrading.
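
For illustration only (the real check is in the attached patch), detecting the newer layout can be as simple as verifying that sstables no longer sit directly under the keyspace directory:

{code:java}
import java.io.File;

// Hedged sketch: a crude check for the per-table <data>/<keyspace>/<table>/ layout,
// as opposed to the older flat <data>/<keyspace>/ layout. Not the attached patch.
public final class LayoutCheck
{
    public static boolean looksLikeCurrentLayout(File keyspaceDir)
    {
        File[] children = keyspaceDir.listFiles();
        if (children == null)
            return false;
        for (File child : children)
        {
            // In the old layout, Data.db files sit directly under the keyspace dir.
            if (child.isFile() && child.getName().endsWith("-Data.db"))
                return false;
        }
        return true;
    }
}
{code}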

 Running sstableupgrade on C* 1.0 data dir, before starting C* 1.2 for the 
 first time breaks stuff
 -

 Key: CASSANDRA-5831
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5831
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Jeremiah Jordan
Assignee: Tyler Hobbs
Priority: Minor
 Fix For: 1.2.9

 Attachments: 
 0001-Check-for-current-directory-layout-before-upgrading.patch


 If you try to upgrade from C* 1.0.X to 1.2.X and run the offline sstableupgrade 
 tool to migrate the sstables before starting 1.2.X for the first time, it 
 messes up the system folder, because it doesn't migrate it correctly, and then C* 
 1.2 can't start.
 sstableupgrade should either refuse to run against a C* 1.0 data folder, or 
 migrate the data the right way.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-5752) Thrift tables are not supported from CqlPagingInputFormat

2013-07-31 Thread Alex Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13725731#comment-13725731
 ] 

Alex Liu edited comment on CASSANDRA-5752 at 7/31/13 9:23 PM:
--

5752-2-1.2-branch.txt is attached to clean up the code as suggested. It also 
includes the fix for CqlRecordWriter.

  was (Author: alexliu68):
5752-2-1.2-branch.txt is attached to clean up the code as suggested.
  
 Thrift tables are not supported from CqlPagingInputFormat
 -

 Key: CASSANDRA-5752
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5752
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Affects Versions: 1.2.6
Reporter: Jonathan Ellis
Assignee: Alex Liu
 Fix For: 1.2.9

 Attachments: 5752-1-1.2-branch.txt, 5752-1.2-branch.txt, 
 5752-2-1.2-branch.txt


 CqlPagingInputFormat inspects the system schema to generate the WHERE clauses 
 needed to page wide rows, but for a classic Thrift table there are no 
 entries for the default column names of "key", "column1", "column2", ..., "value", 
 so CPIF breaks.
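
 For context, the WHERE clauses in question resume scanning from the last token and clustering value seen; roughly (illustrative query shapes with placeholder keyspace/table names, not the exact strings CPIF builds):

{code:java}
// Illustrative only: the rough shape of the paging queries discussed above,
// with placeholder names. For a classic Thrift table the partition key column is
// effectively "key" and the first comparator column "column1", which is exactly
// the metadata missing from the 1.2 system schema.
public final class PagingQueries
{
    // First page of a partition range.
    static final String FIRST_PAGE =
        "SELECT * FROM ks.cf WHERE token(key) > ? AND token(key) <= ? LIMIT 1000";

    // Subsequent pages of the same wide partition resume after the last clustering value.
    static final String NEXT_PAGE =
        "SELECT * FROM ks.cf WHERE token(key) = token(?) AND column1 > ? LIMIT 1000";
}
{code}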

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5752) Thrift tables are not supported from CqlPagingInputFormat

2013-07-31 Thread Alex Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Liu updated CASSANDRA-5752:


Attachment: 5752-2-1.2-branch.txt

5752-2-1.2-branch.txt is attached to clean up the code as suggested.

 Thrift tables are not supported from CqlPagingInputFormat
 -

 Key: CASSANDRA-5752
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5752
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Affects Versions: 1.2.6
Reporter: Jonathan Ellis
Assignee: Alex Liu
 Fix For: 1.2.9

 Attachments: 5752-1-1.2-branch.txt, 5752-1.2-branch.txt, 
 5752-2-1.2-branch.txt


 CqlPagingInputFormat inspects the system schema to generate the WHERE clauses 
 needed to page wide rows, but for a classic Thrift table there are no 
 entries for the default column names of "key", "column1", "column2", ..., "value", 
 so CPIF breaks.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5835) Pig CqlStorage doesn't support classic thrift tables

2013-07-31 Thread Alex Liu (JIRA)
Alex Liu created CASSANDRA-5835:
---

 Summary: Pig CqlStorage doesn't support classic thrift tables
 Key: CASSANDRA-5835
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5835
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Affects Versions: 1.2.7
Reporter: Alex Liu
Assignee: Alex Liu


CASSANDRA-5752 fixes the issue to support Thrift tables in Hadoop. This ticket is to 
support Thrift tables in Pig CqlStorage.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5835) Pig CqlStorage doesn't support classic thrift tables

2013-07-31 Thread Alex Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Liu updated CASSANDRA-5835:


Attachment: 5853-1.2-branch.txt

Attached the patch.

 Pig CqlStorage doesn't support classic thrift tables
 

 Key: CASSANDRA-5835
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5835
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Affects Versions: 1.2.7
Reporter: Alex Liu
Assignee: Alex Liu
 Attachments: 5853-1.2-branch.txt


 CASSANDRA-5752 fixes the issue to support Thrift tables in Hadoop. This ticket 
 is to support Thrift tables in Pig CqlStorage.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-5835) Pig CqlStorage doesn't support classic thrift tables

2013-07-31 Thread Alex Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13725737#comment-13725737
 ] 

Alex Liu edited comment on CASSANDRA-5835 at 7/31/13 9:27 PM:
--

Attached the patch. It is on top of CASSANDRA-5752.

  was (Author: alexliu68):
Attached the patch.
  
 Pig CqlStorage doesn't support classic thrift tables
 

 Key: CASSANDRA-5835
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5835
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Affects Versions: 1.2.7
Reporter: Alex Liu
Assignee: Alex Liu
 Attachments: 5853-1.2-branch.txt


 CASSANDRA-5752 fixes the issue to support Thrift tables in Hadoop. This ticket 
 is to support Thrift tables in Pig CqlStorage.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5825) StatusLogger should print out the All time blocked stat like tpstats does

2013-07-31 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-5825:
---

Attachment: 0002-Add-dropped-message-counts-to-status-log.patch
0001-Add-completed-total-blocked-to-TP-status-logs.patch

0001 adds the Completed and All Time Blocked columns to the TP status log.

0002 adds a new section with dropped message counts by type.
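
As a rough sketch of what printing those extra columns involves (plain JDK executors and a hypothetical all-time-blocked counter, not Cassandra's actual StatusLogger):

{code:java}
import java.util.Map;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.atomic.AtomicLong;

// Hedged sketch: print one status line per pool with the extra columns discussed
// above. "allTimeBlocked" is a hypothetical counter; Cassandra tracks this in its
// own executor classes, not in the JDK ThreadPoolExecutor used here.
public final class PoolStatusPrinter
{
    public static void print(Map<String, ThreadPoolExecutor> pools, Map<String, AtomicLong> allTimeBlocked)
    {
        System.out.printf("%-20s%10s%10s%12s%10s%18s%n",
                          "Pool Name", "Active", "Pending", "Completed", "Blocked", "All Time Blocked");
        for (Map.Entry<String, ThreadPoolExecutor> e : pools.entrySet())
        {
            ThreadPoolExecutor tpe = e.getValue();
            System.out.printf("%-20s%10d%10d%12d%10d%18d%n",
                              e.getKey(),
                              tpe.getActiveCount(),
                              tpe.getQueue().size(),
                              tpe.getCompletedTaskCount(),
                              0, // "currently blocked" has no JDK equivalent; placeholder only
                              allTimeBlocked.getOrDefault(e.getKey(), new AtomicLong()).get());
        }
    }
}
{code}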

 StatusLogger should print out the All time blocked stat like tpstats does
 ---

 Key: CASSANDRA-5825
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5825
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jeremiah Jordan
Assignee: Tyler Hobbs
Priority: Minor
  Labels: lhf
 Fix For: 1.2.9

 Attachments: 
 0001-Add-completed-total-blocked-to-TP-status-logs.patch, 
 0002-Add-dropped-message-counts-to-status-log.patch


 StatusLogger currently prints out Pool Name, Active, Pending, Blocked.
 We should change it to be Pool Name, Active, Pending, Completed, 
 Blocked, All time blocked like tpstats has.
 The DROPPED counts would be nice in there too.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5825) StatusLogger should print out the All time blocked stat like tpstats does

2013-07-31 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13725808#comment-13725808
 ] 

Jonathan Ellis commented on CASSANDRA-5825:
---

I'm not really sold on this, to be honest.  StatusLogger kicks in when Bad 
Things Happen; lifetime numbers are just noise when it comes to diagnosing the 
immediate problem at hand.  It's not intended to be a replacement for 
tpstats, cfhistograms, et al.

 StatusLogger should print out the All time blocked stat like tpstats does
 ---

 Key: CASSANDRA-5825
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5825
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jeremiah Jordan
Assignee: Tyler Hobbs
Priority: Minor
  Labels: lhf
 Fix For: 1.2.9

 Attachments: 
 0001-Add-completed-total-blocked-to-TP-status-logs.patch, 
 0002-Add-dropped-message-counts-to-status-log.patch


 StatusLogger currently prints out Pool Name, Active, Pending, Blocked.
 We should change it to be Pool Name, Active, Pending, Completed, 
 Blocked, All time blocked like tpstats has.
 The DROPPED counts would be nice in there too.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5831) Running sstableupgrade on C* 1.0 data dir, before starting C* 1.2 for the first time breaks stuff

2013-07-31 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13725810#comment-13725810
 ] 

Jonathan Ellis commented on CASSANDRA-5831:
---

I think we probably want to decouple sstableNeedsMigration from its caller in 
CassandraDaemon; it throws a bunch of exceptions that are probably not expected.

However, I also note that StandaloneScrubber calls sNM, so maybe making 
upgradesstables able to perform the migration automagically isn't as painful as 
I thought.

 Running sstableupgrade on C* 1.0 data dir, before starting C* 1.2 for the 
 first time breaks stuff
 -

 Key: CASSANDRA-5831
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5831
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Jeremiah Jordan
Assignee: Tyler Hobbs
Priority: Minor
 Fix For: 1.2.9

 Attachments: 
 0001-Check-for-current-directory-layout-before-upgrading.patch


 If you try to upgrade from C* 1.0.X to 1.2.X and run the offline sstableupgrade 
 tool to migrate the sstables before starting 1.2.X for the first time, it 
 messes up the system folder, because it doesn't migrate it correctly, and then C* 
 1.2 can't start.
 sstableupgrade should either refuse to run against a C* 1.0 data folder, or 
 migrate the data the right way.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5836) Seed nodes should be able to bootstrap without manual intervention

2013-07-31 Thread Bill Hathaway (JIRA)
Bill Hathaway created CASSANDRA-5836:


 Summary: Seed nodes should be able to bootstrap without manual 
intervention
 Key: CASSANDRA-5836
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5836
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Bill Hathaway
Priority: Minor


The current logic doesn't allow a seed node to be bootstrapped.  If a user 
wants to bootstrap a node configured as a seed (for example to replace a seed 
node via replace_token), they first need to remove the node's own IP from the 
seed list, and then start the bootstrap process.  This seems like an 
unnecessary step since a node never uses itself as a seed.

I think it would be a better experience if the logic was changed to allow a 
seed node to bootstrap without manual intervention when there are other seed 
nodes up in a ring.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[1/3] git commit: fix support for Thrift tables in CqlPagingRecordReader patch by Alex Liu; reviewed by jbellis for CASSANDRA-5752

2013-07-31 Thread jbellis
Updated Branches:
  refs/heads/cassandra-1.2 ba274adb7 - 7a3942107
  refs/heads/trunk e01c238fa - 2451140b8


fix support for Thrift tables in CqlPagingRecordReader
patch by Alex Liu; reviewed by jbellis for CASSANDRA-5752


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7a394210
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7a394210
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7a394210

Branch: refs/heads/cassandra-1.2
Commit: 7a39421074d3d14bfc1a4fa1ab986b4fa614f324
Parents: ba274ad
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Jul 31 18:15:40 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Jul 31 18:16:23 2013 -0500

--
 CHANGES.txt |  2 +
 .../hadoop/cql3/CqlPagingRecordReader.java  | 41 ++--
 .../cassandra/hadoop/cql3/CqlRecordWriter.java  | 33 
 3 files changed, 72 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7a394210/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index da1ec20..377b5a1 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -6,6 +6,8 @@
  * Allow compacting 2Is via nodetool (CASSANDRA-5670)
  * Hex-encode non-String keys in OPP (CASSANDRA-5793)
  * nodetool history logging (CASSANDRA-5823)
+ * (Hadoop) fix support for Thrift tables in CqlPagingRecordReader 
+   (CASSANDRA-5752)
 
 
 1.2.8

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7a394210/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java
--
diff --git 
a/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java 
b/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java
index fc07131..db77c9e 100644
--- a/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java
+++ b/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java
@@ -29,6 +29,9 @@ import com.google.common.collect.Iterables;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import org.apache.cassandra.config.CFMetaData;
+import org.apache.cassandra.cql3.CFDefinition;
+import org.apache.cassandra.cql3.ColumnIdentifier;
 import org.apache.cassandra.db.marshal.AbstractType;
 import org.apache.cassandra.db.marshal.CompositeType;
 import org.apache.cassandra.db.marshal.LongType;
@@ -671,6 +674,11 @@ public class CqlPagingRecordReader extends 
RecordReaderMapString, ByteBuffer,
 
 for (String key : keys)
 partitionBoundColumns.add(new BoundColumn(key));
+if (partitionBoundColumns.size() == 0)
+{
+retrieveKeysForThriftTables();
+return;
+}
 
 keyString = 
ByteBufferUtil.string(ByteBuffer.wrap(cqlRow.columns.get(1).getValue()));
 logger.debug("cluster columns: {}", keyString);
@@ -679,10 +687,35 @@ public class CqlPagingRecordReader extends 
RecordReaderMapString, ByteBuffer,
 for (String key : keys)
 clusterColumns.add(new BoundColumn(key));
 
-Column rawKeyValidator = cqlRow.columns.get(2);
-String validator = 
ByteBufferUtil.string(ByteBuffer.wrap(rawKeyValidator.getValue()));
-logger.debug("row key validator: {}", validator);
-keyValidator = parseType(validator);
+
parseKeyValidators(ByteBufferUtil.string(ByteBuffer.wrap(cqlRow.columns.get(2).getValue(;
+}
+
+/** 
+ * retrieve the fake partition keys and cluster keys for classic thrift 
table 
+ * use CFDefinition to get keys and columns
+ * */
+private void retrieveKeysForThriftTables() throws Exception
+{
+KsDef ksDef = client.describe_keyspace(keyspace);
+for (CfDef cfDef : ksDef.cf_defs)
+{
+if (cfDef.name.equalsIgnoreCase(cfName))
+{
+CFMetaData cfMeta = CFMetaData.fromThrift(cfDef);
+CFDefinition cfDefinition = new CFDefinition(cfMeta);
+for (ColumnIdentifier columnIdentifier : 
cfDefinition.keys.keySet())
+partitionBoundColumns.add(new 
BoundColumn(columnIdentifier.toString()));
+parseKeyValidators(cfDef.key_validation_class);
+return;
+}
+}
+}
+
+/** parse key validators */
+private void parseKeyValidators(String rowKeyValidator) throws IOException
+{
+logger.debug("row key validator: {} ", rowKeyValidator);
+keyValidator = parseType(rowKeyValidator);
 
 if (keyValidator instanceof CompositeType)
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7a394210/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java

[2/3] git commit: fix support for Thrift tables in CqlPagingRecordReader patch by Alex Liu; reviewed by jbellis for CASSANDRA-5752

2013-07-31 Thread jbellis
fix support for Thrift tables in CqlPagingRecordReader
patch by Alex Liu; reviewed by jbellis for CASSANDRA-5752


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7a394210
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7a394210
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7a394210

Branch: refs/heads/trunk
Commit: 7a39421074d3d14bfc1a4fa1ab986b4fa614f324
Parents: ba274ad
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Jul 31 18:15:40 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Jul 31 18:16:23 2013 -0500

--
 CHANGES.txt |  2 +
 .../hadoop/cql3/CqlPagingRecordReader.java  | 41 ++--
 .../cassandra/hadoop/cql3/CqlRecordWriter.java  | 33 
 3 files changed, 72 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7a394210/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index da1ec20..377b5a1 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -6,6 +6,8 @@
  * Allow compacting 2Is via nodetool (CASSANDRA-5670)
  * Hex-encode non-String keys in OPP (CASSANDRA-5793)
  * nodetool history logging (CASSANDRA-5823)
+ * (Hadoop) fix support for Thrift tables in CqlPagingRecordReader 
+   (CASSANDRA-5752)
 
 
 1.2.8

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7a394210/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java
--
diff --git 
a/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java 
b/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java
index fc07131..db77c9e 100644
--- a/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java
+++ b/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java
@@ -29,6 +29,9 @@ import com.google.common.collect.Iterables;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import org.apache.cassandra.config.CFMetaData;
+import org.apache.cassandra.cql3.CFDefinition;
+import org.apache.cassandra.cql3.ColumnIdentifier;
 import org.apache.cassandra.db.marshal.AbstractType;
 import org.apache.cassandra.db.marshal.CompositeType;
 import org.apache.cassandra.db.marshal.LongType;
@@ -671,6 +674,11 @@ public class CqlPagingRecordReader extends 
RecordReaderMapString, ByteBuffer,
 
 for (String key : keys)
 partitionBoundColumns.add(new BoundColumn(key));
+if (partitionBoundColumns.size() == 0)
+{
+retrieveKeysForThriftTables();
+return;
+}
 
 keyString = 
ByteBufferUtil.string(ByteBuffer.wrap(cqlRow.columns.get(1).getValue()));
 logger.debug("cluster columns: {}", keyString);
@@ -679,10 +687,35 @@ public class CqlPagingRecordReader extends 
RecordReaderMapString, ByteBuffer,
 for (String key : keys)
 clusterColumns.add(new BoundColumn(key));
 
-Column rawKeyValidator = cqlRow.columns.get(2);
-String validator = 
ByteBufferUtil.string(ByteBuffer.wrap(rawKeyValidator.getValue()));
-logger.debug("row key validator: {}", validator);
-keyValidator = parseType(validator);
+
parseKeyValidators(ByteBufferUtil.string(ByteBuffer.wrap(cqlRow.columns.get(2).getValue(;
+}
+
+/** 
+ * retrieve the fake partition keys and cluster keys for classic thrift 
table 
+ * use CFDefinition to get keys and columns
+ * */
+private void retrieveKeysForThriftTables() throws Exception
+{
+KsDef ksDef = client.describe_keyspace(keyspace);
+for (CfDef cfDef : ksDef.cf_defs)
+{
+if (cfDef.name.equalsIgnoreCase(cfName))
+{
+CFMetaData cfMeta = CFMetaData.fromThrift(cfDef);
+CFDefinition cfDefinition = new CFDefinition(cfMeta);
+for (ColumnIdentifier columnIdentifier : 
cfDefinition.keys.keySet())
+partitionBoundColumns.add(new 
BoundColumn(columnIdentifier.toString()));
+parseKeyValidators(cfDef.key_validation_class);
+return;
+}
+}
+}
+
+/** parse key validators */
+private void parseKeyValidators(String rowKeyValidator) throws IOException
+{
+logger.debug("row key validator: {} ", rowKeyValidator);
+keyValidator = parseType(rowKeyValidator);
 
 if (keyValidator instanceof CompositeType)
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7a394210/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java
--
diff --git 

[3/3] git commit: Merge branch 'cassandra-1.2' into trunk

2013-07-31 Thread jbellis
Merge branch 'cassandra-1.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2451140b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2451140b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2451140b

Branch: refs/heads/trunk
Commit: 2451140b868faed033c291ba0589a05564aae04b
Parents: e01c238 7a39421
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Jul 31 18:16:31 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Jul 31 18:16:31 2013 -0500

--
 CHANGES.txt |  2 +
 .../hadoop/cql3/CqlPagingRecordReader.java  | 41 ++--
 .../cassandra/hadoop/cql3/CqlRecordWriter.java  | 33 
 3 files changed, 72 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/2451140b/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2451140b/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2451140b/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java
--



git commit: r/m retrieveKeysForThriftTables since it is unneeded in 2.0

2013-07-31 Thread jbellis
Updated Branches:
  refs/heads/trunk 2451140b8 - d410a719d


r/m retrieveKeysForThriftTables since it is unneeded in 2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d410a719
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d410a719
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d410a719

Branch: refs/heads/trunk
Commit: d410a719d2cbe9d8e7cb89fdf2075792e06a303f
Parents: 2451140
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Jul 31 18:20:17 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Jul 31 18:20:17 2013 -0500

--
 .../hadoop/cql3/CqlPagingRecordReader.java  | 41 ++--
 1 file changed, 4 insertions(+), 37 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d410a719/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java
--
diff --git 
a/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java 
b/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java
index a342ac4..54506e9 100644
--- a/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java
+++ b/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java
@@ -29,9 +29,6 @@ import com.google.common.collect.Iterables;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import org.apache.cassandra.config.CFMetaData;
-import org.apache.cassandra.cql3.CFDefinition;
-import org.apache.cassandra.cql3.ColumnIdentifier;
 import org.apache.cassandra.db.marshal.AbstractType;
 import org.apache.cassandra.db.marshal.CompositeType;
 import org.apache.cassandra.db.marshal.LongType;
@@ -674,11 +671,6 @@ public class CqlPagingRecordReader extends 
RecordReaderMapString, ByteBuffer,
 
 for (String key : keys)
 partitionBoundColumns.add(new BoundColumn(key));
-if (partitionBoundColumns.size() == 0)
-{
-retrieveKeysForThriftTables();
-return;
-}
 
 keyString = 
ByteBufferUtil.string(ByteBuffer.wrap(cqlRow.columns.get(1).getValue()));
 logger.debug("cluster columns: {}", keyString);
@@ -687,35 +679,10 @@ public class CqlPagingRecordReader extends 
RecordReaderMapString, ByteBuffer,
 for (String key : keys)
 clusterColumns.add(new BoundColumn(key));
 
-
parseKeyValidators(ByteBufferUtil.string(ByteBuffer.wrap(cqlRow.columns.get(2).getValue(;
-}
-
-/** 
- * retrieve the fake partition keys and cluster keys for classic thrift 
table 
- * use CFDefinition to get keys and columns
- * */
-private void retrieveKeysForThriftTables() throws Exception
-{
-KsDef ksDef = client.describe_keyspace(keyspace);
-for (CfDef cfDef : ksDef.cf_defs)
-{
-if (cfDef.name.equalsIgnoreCase(cfName))
-{
-CFMetaData cfMeta = CFMetaData.fromThrift(cfDef);
-CFDefinition cfDefinition = new CFDefinition(cfMeta);
-for (ColumnIdentifier columnIdentifier : 
cfDefinition.keys.keySet())
-partitionBoundColumns.add(new 
BoundColumn(columnIdentifier.toString()));
-parseKeyValidators(cfDef.key_validation_class);
-return;
-}
-}
-}
-
-/** parse key validators */
-private void parseKeyValidators(String rowKeyValidator) throws IOException
-{
 -logger.debug("row key validator: {} ", rowKeyValidator);
-keyValidator = parseType(rowKeyValidator);
+Column rawKeyValidator = cqlRow.columns.get(2);
+String validator = 
ByteBufferUtil.string(ByteBuffer.wrap(rawKeyValidator.getValue()));
 +logger.debug("row key validator: {}", validator);
+keyValidator = parseType(validator);
 
 if (keyValidator instanceof CompositeType)
 {



[jira] [Updated] (CASSANDRA-5752) Thrift tables are not supported from CqlPagingInputFormat

2013-07-31 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5752:
--

   Tester: enigmacurry
Fix Version/s: 2.0 rc1

Committed.

I left the CqlPRR changes out of 2.0 since the system tables include 
information for Thrift tables there, as mentioned above.

 Thrift tables are not supported from CqlPagingInputFormat
 -

 Key: CASSANDRA-5752
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5752
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Affects Versions: 1.2.6
Reporter: Jonathan Ellis
Assignee: Alex Liu
 Fix For: 2.0 rc1, 1.2.9

 Attachments: 5752-1-1.2-branch.txt, 5752-1.2-branch.txt, 
 5752-2-1.2-branch.txt


 CqlPagingInputFormat inspects the system schema to generate the WHERE clauses 
 needed to page wide rows, but for a classic Thrift table there are no 
 entries for the default column names of "key", "column1", "column2", ..., "value", 
 so CPIF breaks.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[1/3] git commit: cleanup

2013-07-31 Thread jbellis
Updated Branches:
  refs/heads/cassandra-1.2 7a3942107 - a7202effa
  refs/heads/trunk d410a719d - 14943766e


cleanup


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a7202eff
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a7202eff
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a7202eff

Branch: refs/heads/cassandra-1.2
Commit: a7202effa7067604ef23eae441caa8b9480a63f4
Parents: 7a39421
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Jul 31 18:22:49 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Jul 31 18:22:49 2013 -0500

--
 .../org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java   | 2 +-
 src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java| 3 ++-
 2 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a7202eff/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java
--
diff --git 
a/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java 
b/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java
index db77c9e..c6eb46d 100644
--- a/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java
+++ b/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java
@@ -674,7 +674,7 @@ public class CqlPagingRecordReader extends 
RecordReaderMapString, ByteBuffer,
 
 for (String key : keys)
 partitionBoundColumns.add(new BoundColumn(key));
-if (partitionBoundColumns.size() == 0)
+if (partitionBoundColumns.isEmpty())
 {
 retrieveKeysForThriftTables();
 return;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a7202eff/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java 
b/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java
index 76d419e..3d03486 100644
--- a/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java
+++ b/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java
@@ -340,7 +340,7 @@ final class CqlRecordWriter extends 
AbstractColumnFamilyRecordWriterMapString,
 logger.debug("partition keys: " + keyString);
 
 List<String> keys = FBUtilities.fromJsonList(keyString);
-if (keys.size() == 0)
+if (keys.isEmpty())
 {
 retrieveKeysForThriftTables(client);
 return;
@@ -385,6 +385,7 @@ final class CqlRecordWriter extends 
AbstractColumnFamilyRecordWriterMapString,
 }
 }
 }
+
 private AbstractType<?> parseType(String type) throws 
ConfigurationException
 {
 try



git commit: r/m retrieveKeysForThriftTables since it is unneeded in 2.0

2013-07-31 Thread jbellis
Updated Branches:
  refs/heads/trunk 14943766e - 0f49210cb


r/m retrieveKeysForThriftTables since it is unneeded in 2.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0f49210c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0f49210c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0f49210c

Branch: refs/heads/trunk
Commit: 0f49210cb6af7a57b9d8a37cab4f3b1edbd64ec1
Parents: 1494376
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Jul 31 18:23:39 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Jul 31 18:23:39 2013 -0500

--
 .../cassandra/hadoop/cql3/CqlRecordWriter.java  | 33 
 1 file changed, 33 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0f49210c/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java 
b/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java
index 838dbda..4746f8a 100644
--- a/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java
+++ b/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java
@@ -26,9 +26,6 @@ import java.util.concurrent.ConcurrentHashMap;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import org.apache.cassandra.config.CFMetaData;
-import org.apache.cassandra.cql3.CFDefinition;
-import org.apache.cassandra.cql3.ColumnIdentifier;
 import org.apache.cassandra.db.marshal.AbstractType;
 import org.apache.cassandra.db.marshal.CompositeType;
 import org.apache.cassandra.db.marshal.LongType;
@@ -340,11 +337,6 @@ final class CqlRecordWriter extends 
AbstractColumnFamilyRecordWriterMapString,
 logger.debug("partition keys: " + keyString);
 
 List<String> keys = FBUtilities.fromJsonList(keyString);
-if (keys.size() == 0)
-{
-retrieveKeysForThriftTables(client);
-return;
-}
 partitionKeyColumns = new String[keys.size()];
 int i = 0;
 for (String key : keys)
@@ -360,31 +352,6 @@ final class CqlRecordWriter extends 
AbstractColumnFamilyRecordWriterMapString,
 clusterColumns = FBUtilities.fromJsonList(clusterColumnString);
 }
 
-/** 
- * retrieve the fake partition keys and cluster keys for classic thrift 
table 
- * use CFDefinition to get keys and columns
- * */
-private void retrieveKeysForThriftTables(Cassandra.Client client) throws 
Exception
-{
-String keyspace = ConfigHelper.getOutputKeyspace(conf);
-String cfName = ConfigHelper.getOutputColumnFamily(conf);
-KsDef ksDef = client.describe_keyspace(keyspace);
-for (CfDef cfDef : ksDef.cf_defs)
-{
-if (cfDef.name.equalsIgnoreCase(cfName))
-{
-CFMetaData cfMeta = CFMetaData.fromThrift(cfDef);
-CFDefinition cfDefinition = new CFDefinition(cfMeta);
-int i = 0;
-for (ColumnIdentifier column : cfDefinition.keys.keySet())
-{
-partitionKeyColumns[i] = column.toString();
-i++;
-}
-return;
-}
-}
-}
 private AbstractType<?> parseType(String type) throws 
ConfigurationException
 {
 try



[3/3] git commit: Merge branch 'cassandra-1.2' into trunk

2013-07-31 Thread jbellis
Merge branch 'cassandra-1.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/14943766
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/14943766
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/14943766

Branch: refs/heads/trunk
Commit: 14943766ef48d8e2e7a1c6978512904c5a1d5aaa
Parents: d410a71 a7202ef
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Jul 31 18:23:08 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Jul 31 18:23:08 2013 -0500

--

--




[2/3] git commit: cleanup

2013-07-31 Thread jbellis
cleanup


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a7202eff
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a7202eff
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a7202eff

Branch: refs/heads/trunk
Commit: a7202effa7067604ef23eae441caa8b9480a63f4
Parents: 7a39421
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Jul 31 18:22:49 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Jul 31 18:22:49 2013 -0500

--
 .../org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java   | 2 +-
 src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java| 3 ++-
 2 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a7202eff/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java
--
diff --git 
a/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java 
b/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java
index db77c9e..c6eb46d 100644
--- a/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java
+++ b/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java
@@ -674,7 +674,7 @@ public class CqlPagingRecordReader extends 
RecordReaderMapString, ByteBuffer,
 
 for (String key : keys)
 partitionBoundColumns.add(new BoundColumn(key));
-if (partitionBoundColumns.size() == 0)
+if (partitionBoundColumns.isEmpty())
 {
 retrieveKeysForThriftTables();
 return;

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a7202eff/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java 
b/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java
index 76d419e..3d03486 100644
--- a/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java
+++ b/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java
@@ -340,7 +340,7 @@ final class CqlRecordWriter extends 
AbstractColumnFamilyRecordWriterMapString,
 logger.debug("partition keys: " + keyString);
 
 List<String> keys = FBUtilities.fromJsonList(keyString);
-if (keys.size() == 0)
+if (keys.isEmpty())
 {
 retrieveKeysForThriftTables(client);
 return;
@@ -385,6 +385,7 @@ final class CqlRecordWriter extends 
AbstractColumnFamilyRecordWriterMapString,
 }
 }
 }
+
+private AbstractType<?> parseType(String type) throws 
ConfigurationException
 {
 try



git commit: and finally r/m irrelevant line from CHANGES

2013-07-31 Thread jbellis
Updated Branches:
  refs/heads/trunk 0f49210cb - ec4b1fe61


and finally r/m irrelevant line from CHANGES


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ec4b1fe6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ec4b1fe6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ec4b1fe6

Branch: refs/heads/trunk
Commit: ec4b1fe61591497246e887c5f7e49ce9d684f44d
Parents: 0f49210
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Jul 31 18:25:01 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Jul 31 18:25:01 2013 -0500

--
 CHANGES.txt | 2 --
 1 file changed, 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ec4b1fe6/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index c1b83b9..ff3ea41 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -16,8 +16,6 @@ Merged from 1.2:
  * Allow compacting 2Is via nodetool (CASSANDRA-5670)
  * Hex-encode non-String keys in OPP (CASSANDRA-5793)
  * nodetool history logging (CASSANDRA-5823)
- * (Hadoop) fix support for Thrift tables in CqlPagingRecordReader 
-   (CASSANDRA-5752)
 
 
 1.2.8



[jira] [Updated] (CASSANDRA-5752) Thrift tables are not supported from CqlPagingInputFormat

2013-07-31 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5752:
--

Fix Version/s: (was: 2.0 rc1)

 Thrift tables are not supported from CqlPagingInputFormat
 -

 Key: CASSANDRA-5752
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5752
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Affects Versions: 1.2.6
Reporter: Jonathan Ellis
Assignee: Alex Liu
 Fix For: 1.2.9

 Attachments: 5752-1-1.2-branch.txt, 5752-1.2-branch.txt, 
 5752-2-1.2-branch.txt


 CqlPagingInputFormat inspects the system schema to generate the WHERE clauses 
 needed to page wide rows, but for a classic Thrift table there are no 
 entries for the default column names of "key", "column1", "column2", ..., "value", 
 so CPIF breaks.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-5752) Thrift tables are not supported from CqlPagingInputFormat

2013-07-31 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13725823#comment-13725823
 ] 

Jonathan Ellis edited comment on CASSANDRA-5752 at 7/31/13 11:24 PM:
-

Committed to 1.2 only. (In 2.0 the system schema includes the required 
information for Thrift tables, as mentioned above.)

  was (Author: jbellis):
Committed.

I left the CqlPRR changes out of 2.0 since the system tables include 
information for Thrift tables there, as mentioned above.
  
 Thrift tables are not supported from CqlPagingInputFormat
 -

 Key: CASSANDRA-5752
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5752
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Affects Versions: 1.2.6
Reporter: Jonathan Ellis
Assignee: Alex Liu
 Fix For: 2.0 rc1, 1.2.9

 Attachments: 5752-1-1.2-branch.txt, 5752-1.2-branch.txt, 
 5752-2-1.2-branch.txt


 CqlPagingInputFormat inspects the system schema to generate the WHERE clauses 
 needed to page wide rows, but for a classic Thrift table there are no 
 entries for the default column names of key, column1, column2, ..., value 
 so CPIF breaks.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (CASSANDRA-5836) Seed nodes should be able to bootstrap without manual intervention

2013-07-31 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-5836.
---

Resolution: Won't Fix

The list of special cases here is complex enough without adding more.  Tweaking 
a config file (no restart is required) doesn't seem unreasonable.

 Seed nodes should be able to bootstrap without manual intervention
 --

 Key: CASSANDRA-5836
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5836
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Bill Hathaway
Priority: Minor

 The current logic doesn't allow a seed node to be bootstrapped.  If a user 
 wants to bootstrap a node configured as a seed (for example to replace a seed 
 node via replace_token), they first need to remove the node's own IP from the 
 seed list, and then start the bootstrap process.  This seems like an 
 unnecessary step since a node never uses itself as a seed.
 I think it would be a better experience if the logic was changed to allow a 
 seed node to bootstrap without manual intervention when there are other seed 
 nodes up in a ring.
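
A minimal sketch of the behaviour being requested, using a hypothetical helper rather than Cassandra's actual startup code: when deciding whether a node may auto-bootstrap, ignore its own address in the seed list, since a node never uses itself as a seed.
{code}
import java.net.InetAddress;
import java.util.HashSet;
import java.util.Set;

public class SeedBootstrapSketch
{
    // Hypothetical check: a seed may bootstrap as long as other seeds remain to learn the ring from.
    static boolean mayAutoBootstrap(InetAddress self, Set<InetAddress> seeds)
    {
        Set<InetAddress> others = new HashSet<InetAddress>(seeds);
        others.remove(self);        // the node never gossips with itself anyway
        return !others.isEmpty();   // only refuse when this node is the sole seed
    }
}
{code}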

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5830) Paxos loops endlessly due to faulty condition check

2013-07-31 Thread Soumava Ghosh (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Soumava Ghosh updated CASSANDRA-5830:
-

Labels: 2.0 cas paxos  (was: )

 Paxos loops endlessly due to faulty condition check
 ---

 Key: CASSANDRA-5830
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5830
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 2.0 beta 2
Reporter: Soumava Ghosh
  Labels: 2.0, cas, paxos

 Following is the code segment (StorageProxy.java:361) which causes the issue: 
 start is the start time of the Paxos operation and is always less than the 
 current System.nanoTime(), so start - System.nanoTime() is negative and 
 therefore always less than the timeout, meaning the loop can never exit on 
 the timeout check. 
 {code:title=StorageProxy.java|borderStyle=solid}
 private static UUID beginAndRepairPaxos(long start, ByteBuffer key, CFMetaData metadata,
                                         List<InetAddress> liveEndpoints, int requiredParticipants,
                                         ConsistencyLevel consistencyForPaxos)
 throws WriteTimeoutException
 {
     long timeout = TimeUnit.MILLISECONDS.toNanos(DatabaseDescriptor.getCasContentionTimeout());
     PrepareCallback summary = null;
     while (start - System.nanoTime() < timeout)
     {
         long ballotMillis = summary == null
                           ? System.currentTimeMillis()
                           : Math.max(System.currentTimeMillis(), 1 + UUIDGen.unixTimestamp(summary.inProgressCommit.ballot));
         UUID ballot = UUIDGen.getTimeUUID(ballotMillis);
 {code}
 Here, the paxos gets stuck when PREPARE returns 'true' but with 
 inProgressCommit. The code in StorageProxy.java:beginAndRepairPaxos() then 
 tries to issue a PROPOSE and COMMIT for the inProgressCommit, and if it 
 repeatedly receives 'false' as a PREPARE_RESPONSE it gets stuck in an endless 
 loop until PREPARE_RESPONSE is true. 
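
For illustration, a minimal sketch of the elapsed-time check the report is pointing at (variable and method names are made up; this is not the committed fix): with start taken from System.nanoTime(), elapsed time is now - start, so the loop below really does give up once the contention timeout passes.
{code}
import java.util.concurrent.TimeUnit;

public class PaxosTimeoutSketch
{
    static boolean retryUntilTimeout(long contentionTimeoutMillis)
    {
        long start = System.nanoTime();
        long timeout = TimeUnit.MILLISECONDS.toNanos(contentionTimeoutMillis);
        while (System.nanoTime() - start < timeout)   // not: start - System.nanoTime() < timeout
        {
            if (tryOnePaxosRound())
                return true;                          // prepare/propose/commit succeeded
        }
        return false;                                 // caller would surface a WriteTimeoutException
    }

    static boolean tryOnePaxosRound() { return false; } // stand-in for the real Paxos round
}
{code}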

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5830) Paxos loops endlessly due to faulty condition check

2013-07-31 Thread Soumava Ghosh (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Soumava Ghosh updated CASSANDRA-5830:
-

Labels: 2.0 paxos  (was: 2.0 cas paxos)

 Paxos loops endlessly due to faulty condition check
 ---

 Key: CASSANDRA-5830
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5830
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 2.0 beta 2
Reporter: Soumava Ghosh
  Labels: 2.0, paxos

 Following is the code segment (StorageProxy.java:361) which causes the issue: 
 start is the start time of the Paxos operation and is always less than the 
 current System.nanoTime(), so start - System.nanoTime() is negative and 
 therefore always less than the timeout, meaning the loop can never exit on 
 the timeout check. 
 {code:title=StorageProxy.java|borderStyle=solid}
 private static UUID beginAndRepairPaxos(long start, ByteBuffer key, CFMetaData metadata,
                                         List<InetAddress> liveEndpoints, int requiredParticipants,
                                         ConsistencyLevel consistencyForPaxos)
 throws WriteTimeoutException
 {
     long timeout = TimeUnit.MILLISECONDS.toNanos(DatabaseDescriptor.getCasContentionTimeout());
     PrepareCallback summary = null;
     while (start - System.nanoTime() < timeout)
     {
         long ballotMillis = summary == null
                           ? System.currentTimeMillis()
                           : Math.max(System.currentTimeMillis(), 1 + UUIDGen.unixTimestamp(summary.inProgressCommit.ballot));
         UUID ballot = UUIDGen.getTimeUUID(ballotMillis);
 {code}
 Here, the paxos gets stuck when PREPARE returns 'true' but with 
 inProgressCommit. The code in StorageProxy.java:beginAndRepairPaxos() then 
 tries to issue a PROPOSE and COMMIT for the inProgressCommit, and if it 
 repeatedly receives 'false' as a PREPARE_RESPONSE it gets stuck in an endless 
 loop until PREPARE_RESPONSE is true. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5830) Paxos loops endlessly due to faulty condition check

2013-07-31 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5830:
--

Labels: paxos  (was: 2.0 paxos)

 Paxos loops endlessly due to faulty condition check
 ---

 Key: CASSANDRA-5830
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5830
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 2.0 beta 2
Reporter: Soumava Ghosh
  Labels: paxos

 Following is the code segment (StorageProxy.java:361) which causes the issue: 
 start is the start time of the Paxos operation and is always less than the 
 current System.nanoTime(), so start - System.nanoTime() is negative and 
 therefore always less than the timeout, meaning the loop can never exit on 
 the timeout check. 
 {code:title=StorageProxy.java|borderStyle=solid}
 private static UUID beginAndRepairPaxos(long start, ByteBuffer key, CFMetaData metadata,
                                         List<InetAddress> liveEndpoints, int requiredParticipants,
                                         ConsistencyLevel consistencyForPaxos)
 throws WriteTimeoutException
 {
     long timeout = TimeUnit.MILLISECONDS.toNanos(DatabaseDescriptor.getCasContentionTimeout());
     PrepareCallback summary = null;
     while (start - System.nanoTime() < timeout)
     {
         long ballotMillis = summary == null
                           ? System.currentTimeMillis()
                           : Math.max(System.currentTimeMillis(), 1 + UUIDGen.unixTimestamp(summary.inProgressCommit.ballot));
         UUID ballot = UUIDGen.getTimeUUID(ballotMillis);
 {code}
 Here, the paxos gets stuck when PREPARE returns 'true' but with 
 inProgressCommit. The code in StorageProxy.java:beginAndRepairPaxos() then 
 tries to issue a PROPOSE and COMMIT for the inProgressCommit, and if it 
 repeatedly receives 'false' as a PREPARE_RESPONSE it gets stuck in an endless 
 loop until PREPARE_RESPONSE is true. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[3/3] git commit: Merge branch 'cassandra-1.2' into trunk

2013-07-31 Thread jbellis
Merge branch 'cassandra-1.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/05d27ea2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/05d27ea2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/05d27ea2

Branch: refs/heads/trunk
Commit: 05d27ea27e2f8f2e76b5ff104d30871b8264261d
Parents: ec4b1fe 5c28958
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Jul 31 19:22:41 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Jul 31 19:22:41 2013 -0500

--

--




[2/3] git commit: r/m PBS test since it keeps heisenfailing (PBS is already gone for 2.0; see #5455)

2013-07-31 Thread jbellis
r/m PBS test since it keeps heisenfailing (PBS is already gone for 2.0; see 
#5455)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5c289588
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5c289588
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5c289588

Branch: refs/heads/trunk
Commit: 5c2895881b8f4ff080f1f7236f5d759f0323ea91
Parents: a7202ef
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Jul 31 19:22:29 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Jul 31 19:22:29 2013 -0500

--
 .../cassandra/service/PBSPredictorTest.java | 114 ---
 1 file changed, 114 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c289588/test/unit/org/apache/cassandra/service/PBSPredictorTest.java
--
diff --git a/test/unit/org/apache/cassandra/service/PBSPredictorTest.java 
b/test/unit/org/apache/cassandra/service/PBSPredictorTest.java
deleted file mode 100644
index 92e863d..000
--- a/test/unit/org/apache/cassandra/service/PBSPredictorTest.java
+++ /dev/null
@@ -1,114 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *   http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing,
- * software distributed under the License is distributed on an
- * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- * KIND, either express or implied.  See the License for the
- * specific language governing permissions and limitations
- * under the License.
- */
-package org.apache.cassandra.service;
-
-import org.junit.Test;
-import static org.junit.Assert.*;
-
-public class PBSPredictorTest
-{
-private static PBSPredictor predictor = PBSPredictor.instance();
-
-private void createWriteResponse(long W, long A, String id)
-{
-predictor.startWriteOperation(id, 0);
-predictor.logWriteResponse(id, W, W+A);
-}
-
-private void createReadResponse(long R, long S, String id)
-{
-predictor.startReadOperation(id, 0);
-predictor.logReadResponse(id, R, R+S);
-}
-
-@Test
-public void testDoPrediction()
-{
-try {
-predictor.enableConsistencyPredictionLogging();
-predictor.init();
-
-/*
-Ensure accuracy given a set of basic latencies
-Predictions here match a prior Python implementation
- */
-
-for (int i = 0; i < 10; ++i)
-{
-createWriteResponse(10, 0, String.format("W%d", i));
-createReadResponse(0, 0, String.format("R%d", i));
-}
-
-for (int i = 0; i < 10; ++i)
-{
-createWriteResponse(0, 0, String.format("WS%d", i));
-}
-
-// 10ms after write
-PBSPredictionResult result = predictor.doPrediction(2,1,1,10.0f,1, 
0.99f);
-
-assertEquals(1, result.getConsistencyProbability(), 0);
-assertEquals(2.5, result.getAverageWriteLatency(), .5);
-
-// 0ms after write
-result = predictor.doPrediction(2,1,1,0f,1, 0.99f);
-
-assertEquals(.75, result.getConsistencyProbability(), 0.05);
-
-// k=5 versions staleness
-result = predictor.doPrediction(2,1,1,5.0f,5, 0.99f);
-assertEquals(.98, result.getConsistencyProbability(), 0.05);
-assertEquals(2.5, result.getAverageWriteLatency(), .5);
-
-for (int i = 0; i < 10; ++i)
-{
-createWriteResponse(20, 0, String.format("WL%d", i));
-}
-
-// 5ms after write
-result = predictor.doPrediction(2,1,1,5.0f,1, 0.99f);
-
-assertEquals(.67, result.getConsistencyProbability(), .05);
-
-// N = 5
-result = predictor.doPrediction(5,1,1,5.0f,1, 0.99f);
-
-assertEquals(.42, result.getConsistencyProbability(), .05);
-assertEquals(1.33, result.getAverageWriteLatency(), .5);
-
-for (int i = 0; i < 10; ++i)
-{
-createWriteResponse(100, 100, String.format("WVL%d", i));
-createReadResponse(100, 100, String.format("RL%d", i));
-}
-
-result = predictor.doPrediction(2,1,1,0f,1, 0.99f);
-
-assertEquals(.860, 

[1/3] git commit: r/m PBS test since it keeps heisenfailing (PBS is already gone for 2.0; see #5455)

2013-07-31 Thread jbellis
Updated Branches:
  refs/heads/cassandra-1.2 a7202effa - 5c2895881
  refs/heads/trunk ec4b1fe61 - 05d27ea27


r/m PBS test since it keeps heisenfailing (PBS is already gone for 2.0; see 
#5455)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5c289588
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5c289588
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5c289588

Branch: refs/heads/cassandra-1.2
Commit: 5c2895881b8f4ff080f1f7236f5d759f0323ea91
Parents: a7202ef
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Jul 31 19:22:29 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Jul 31 19:22:29 2013 -0500

--
 .../cassandra/service/PBSPredictorTest.java | 114 ---
 1 file changed, 114 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5c289588/test/unit/org/apache/cassandra/service/PBSPredictorTest.java
--
diff --git a/test/unit/org/apache/cassandra/service/PBSPredictorTest.java 
b/test/unit/org/apache/cassandra/service/PBSPredictorTest.java
deleted file mode 100644
index 92e863d..000
--- a/test/unit/org/apache/cassandra/service/PBSPredictorTest.java
+++ /dev/null
@@ -1,114 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *   http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing,
- * software distributed under the License is distributed on an
- * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- * KIND, either express or implied.  See the License for the
- * specific language governing permissions and limitations
- * under the License.
- */
-package org.apache.cassandra.service;
-
-import org.junit.Test;
-import static org.junit.Assert.*;
-
-public class PBSPredictorTest
-{
-private static PBSPredictor predictor = PBSPredictor.instance();
-
-private void createWriteResponse(long W, long A, String id)
-{
-predictor.startWriteOperation(id, 0);
-predictor.logWriteResponse(id, W, W+A);
-}
-
-private void createReadResponse(long R, long S, String id)
-{
-predictor.startReadOperation(id, 0);
-predictor.logReadResponse(id, R, R+S);
-}
-
-@Test
-public void testDoPrediction()
-{
-try {
-predictor.enableConsistencyPredictionLogging();
-predictor.init();
-
-/*
-Ensure accuracy given a set of basic latencies
-Predictions here match a prior Python implementation
- */
-
-for (int i = 0; i < 10; ++i)
-{
-createWriteResponse(10, 0, String.format("W%d", i));
-createReadResponse(0, 0, String.format("R%d", i));
-}
-
-for (int i = 0; i < 10; ++i)
-{
-createWriteResponse(0, 0, String.format("WS%d", i));
-}
-
-// 10ms after write
-PBSPredictionResult result = predictor.doPrediction(2,1,1,10.0f,1, 
0.99f);
-
-assertEquals(1, result.getConsistencyProbability(), 0);
-assertEquals(2.5, result.getAverageWriteLatency(), .5);
-
-// 0ms after write
-result = predictor.doPrediction(2,1,1,0f,1, 0.99f);
-
-assertEquals(.75, result.getConsistencyProbability(), 0.05);
-
-// k=5 versions staleness
-result = predictor.doPrediction(2,1,1,5.0f,5, 0.99f);
-assertEquals(.98, result.getConsistencyProbability(), 0.05);
-assertEquals(2.5, result.getAverageWriteLatency(), .5);
-
-for (int i = 0; i < 10; ++i)
-{
-createWriteResponse(20, 0, String.format("WL%d", i));
-}
-
-// 5ms after write
-result = predictor.doPrediction(2,1,1,5.0f,1, 0.99f);
-
-assertEquals(.67, result.getConsistencyProbability(), .05);
-
-// N = 5
-result = predictor.doPrediction(5,1,1,5.0f,1, 0.99f);
-
-assertEquals(.42, result.getConsistencyProbability(), .05);
-assertEquals(1.33, result.getAverageWriteLatency(), .5);
-
-for (int i = 0; i < 10; ++i)
-{
-createWriteResponse(100, 100, String.format("WVL%d", i));
-createReadResponse(100, 100, String.format("RL%d", i));
-}
-
-  

git commit: Don't swallow ConfigurationException for unknown compaction properties

2013-07-31 Thread aleksey
Updated Branches:
  refs/heads/trunk 05d27ea27 - 4cdd75a58


Don't swallow ConfigurationException for unknown compaction properties


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4cdd75a5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4cdd75a5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4cdd75a5

Branch: refs/heads/trunk
Commit: 4cdd75a5887f39cc6f2cb7971b06699d534663d4
Parents: 05d27ea
Author: Aleksey Yeschenko alek...@apache.org
Authored: Thu Aug 1 06:34:47 2013 +0300
Committer: Aleksey Yeschenko alek...@apache.org
Committed: Thu Aug 1 06:34:47 2013 +0300

--
 src/java/org/apache/cassandra/config/CFMetaData.java | 4 
 1 file changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4cdd75a5/src/java/org/apache/cassandra/config/CFMetaData.java
--
diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java 
b/src/java/org/apache/cassandra/config/CFMetaData.java
index 3a44226..2da2361 100644
--- a/src/java/org/apache/cassandra/config/CFMetaData.java
+++ b/src/java/org/apache/cassandra/config/CFMetaData.java
@@ -1070,6 +1070,10 @@ public final class CFMetaData
 throw (ConfigurationException) e.getTargetException();
 throw new ConfigurationException("Failed to validate compaction options");
 }
+catch (ConfigurationException e)
+{
+throw e;
+}
 catch (Exception e)
 {
 throw new ConfigurationException("Failed to validate compaction options");
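
For readers skimming the diff, a stripped-down illustration of what changes (the validate() helper and local exception class here are stand-ins, not Cassandra's classes): without the dedicated catch, a ConfigurationException raised while validating the options would fall through to the generic catch and have its specific message replaced.
{code}
import java.util.Map;

public class RethrowSketch
{
    static class ConfigurationException extends Exception   // stand-in for Cassandra's ConfigurationException
    {
        ConfigurationException(String msg) { super(msg); }
    }

    static void validateCompactionOptions(Map<String, String> options) throws ConfigurationException
    {
        try
        {
            validate(options);                       // may itself throw ConfigurationException
        }
        catch (ConfigurationException e)
        {
            throw e;                                 // keep the specific error message
        }
        catch (Exception e)
        {
            throw new ConfigurationException("Failed to validate compaction options");
        }
    }

    static void validate(Map<String, String> options) throws Exception {}
}
{code}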



[jira] [Commented] (CASSANDRA-5664) Improve serialization in the native protocol

2013-07-31 Thread Daniel Norberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13726026#comment-13726026
 ] 

Daniel Norberg commented on CASSANDRA-5664:
---

Some quick observations:

1. There's quite a bit of string encoding and object serialization going on in 
some of the encodedSize() methods. This means that strings/objects will be 
encoded/serialized twice.

2. byte[] allocation and copying in encode() should be possible to avoid when 
serializing strings by careful use of ChannelBuffer.toByteBuffer(), 
CharBuffer.wrap() and CharsetEncoder.encode().

3. It might be worth investigating whether the code duplication between encode() and 
encodedSize() can be eliminated, e.g. by having encode() operate on a higher-level 
buffer interface with writeString()/writeValue()/etc. methods (essentially the 
writeXYZ() methods in CBUtil) and providing a counting implementation of that 
interface. The counting implementation would simply sum up the size of the output 
without performing any actual writing/encoding, while a writing implementation 
would perform the encoding/serialization and write to a ChannelBuffer. encode() 
could then be used both to calculate the size of the output buffer and to do the 
actual serialization; a rough sketch of this shape follows below.
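
A rough sketch of that shape, with made-up names rather than Cassandra's actual CBUtil/encoder API: encoders write against a small writer interface, the counting implementation sizes the frame without allocating or encoding into a real buffer, and a Netty-backed implementation (not shown) would do the actual writing to a ChannelBuffer.
{code}
public class FrameWriterSketch
{
    interface FrameWriter
    {
        void writeString(String s);      // native-protocol [string]: 2-byte length + UTF-8 bytes
        void writeBytes(byte[] bytes);   // native-protocol [bytes]: 4-byte length + raw bytes
    }

    static final class CountingWriter implements FrameWriter
    {
        long size = 0;
        public void writeString(String s) { size += 2 + utf8Length(s); }
        public void writeBytes(byte[] bytes) { size += 4 + bytes.length; }
    }

    // UTF-8 encoded length computed without materializing a byte[].
    static int utf8Length(String s)
    {
        int len = 0;
        for (int i = 0; i < s.length(); )
        {
            int cp = s.codePointAt(i);
            i += Character.charCount(cp);
            len += cp < 0x80 ? 1 : cp < 0x800 ? 2 : cp < 0x10000 ? 3 : 4;
        }
        return len;
    }
}
{code}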


 Improve serialization in the native protocol
 

 Key: CASSANDRA-5664
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5664
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 2.0

 Attachments: 0001-Rewrite-encoding-methods.txt, 
 0002-Avoid-copy-when-compressing-native-protocol-frames.txt


 Message serialization in the native protocol currently uses Netty's 
 ChannelBuffers.wrappedBuffer(). The rationale was to avoid copying the 
 value bytes when values are large. This has a cost, however, especially 
 with lots of small values, and as suggested in CASSANDRA-5422, that might 
 well be the more common scenario for Cassandra, so let's consider serializing 
 directly into a newly allocated buffer.
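
As a rough illustration of the trade-off being described (the byte[] arguments are made up and this is not the attached patch): wrapping references the existing value bytes without copying, at the price of building a composite of many small buffers, while serializing into one freshly allocated buffer copies everything once.
{code}
import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.buffer.ChannelBuffers;

public class WrapVsCopySketch
{
    // Zero-copy: the frame references the existing arrays (cheap for big values).
    static ChannelBuffer wrap(byte[] header, byte[] value)
    {
        return ChannelBuffers.wrappedBuffer(ChannelBuffers.wrappedBuffer(header),
                                            ChannelBuffers.wrappedBuffer(value));
    }

    // Copying: serialize everything into one newly allocated buffer
    // (cheaper overall when there are lots of small values).
    static ChannelBuffer copy(byte[] header, byte[] value)
    {
        ChannelBuffer out = ChannelBuffers.buffer(header.length + value.length);
        out.writeBytes(header);
        out.writeBytes(value);
        return out;
    }
}
{code}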

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira