git commit: Fix ClassCastException for super columns

2014-01-06 Thread slebresne
Updated Branches:
  refs/heads/trunk e674abe8e -> 694988015


Fix ClassCastException for super columns


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/69498801
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/69498801
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/69498801

Branch: refs/heads/trunk
Commit: 694988015f67685dc2decaf950153f557f7f598f
Parents: e674abe
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Mon Jan 6 09:42:48 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Mon Jan 6 09:42:48 2014 +0100

--
 src/java/org/apache/cassandra/thrift/CassandraServer.java | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/69498801/src/java/org/apache/cassandra/thrift/CassandraServer.java
--
diff --git a/src/java/org/apache/cassandra/thrift/CassandraServer.java b/src/java/org/apache/cassandra/thrift/CassandraServer.java
index e91bea2..5859f92 100644
--- a/src/java/org/apache/cassandra/thrift/CassandraServer.java
+++ b/src/java/org/apache/cassandra/thrift/CassandraServer.java
@@ -474,10 +474,10 @@ public class CassandraServer implements Cassandra.Iface
 IDiskAtomFilter filter;
 if (metadata.isSuper())
 {
-                CellNameType type = metadata.comparator;
-                SortedSet<ByteBuffer> names = new TreeSet<ByteBuffer>(column_path.column == null ? type.subtype(0) : type.subtype(1));
-                names.add(column_path.column == null ? column_path.super_column : column_path.column);
-                filter = SuperColumns.fromSCNamesFilter(type, column_path.column == null ? null : column_path.bufferForSuper_column(), new NamesQueryFilter(names));
+                CellNameType columnType = new SimpleDenseCellNameType(metadata.comparator.subtype(column_path.column == null ? 0 : 1));
+                SortedSet<CellName> names = new TreeSet<CellName>(columnType);
+                names.add(columnType.cellFromByteBuffer(column_path.column == null ? column_path.super_column : column_path.column));
+                filter = SuperColumns.fromSCNamesFilter(metadata.comparator, column_path.column == null ? null : column_path.bufferForSuper_column(), new NamesQueryFilter(names));
 }
 else
 {



[jira] [Commented] (CASSANDRA-6446) Faster range tombstones on wide partitions

2014-01-06 Thread Oleg Anastasyev (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13862957#comment-13862957
 ] 

Oleg Anastasyev commented on CASSANDRA-6446:


Read patch v2 looks good to me.

 Faster range tombstones on wide partitions
 --

 Key: CASSANDRA-6446
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6446
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Oleg Anastasyev
Assignee: Oleg Anastasyev
 Fix For: 2.1

 Attachments: 0001-6446-write-path-v2.txt, 
 0002-6446-Read-patch-v2.txt, RangeTombstonesReadOptimization.diff, 
 RangeTombstonesWriteOptimization.diff


 Having wide CQL rows (~1M in a single partition) and after deleting some of 
 them, we found inefficiencies in the handling of range tombstones on both the 
 write and read paths.
 I attached 2 patches here, one for the write path 
 (RangeTombstonesWriteOptimization.diff) and another for the read path 
 (RangeTombstonesReadOptimization.diff).
 On the write path, when you delete CQL rows by primary key, each deletion is 
 represented by a range tombstone. On putting this tombstone into the memtable, 
 the original code takes all columns of the partition from the memtable and 
 checks DeletionInfo.isDeleted in a brute-force loop to decide whether each 
 column should stay in the memtable or was deleted by the new tombstone. 
 Needless to say, the more columns you have in a partition, the slower 
 deletions become, heating your CPU with brute-force range tombstone checks.
 The RangeTombstonesWriteOptimization.diff patch, for partitions with more than 
 1 column, loops over the tombstones instead and checks the existence of 
 columns for each of them. It also copies the whole memtable range tombstone 
 list only if there are changes to be made there (the original code copies the 
 range tombstone list on every write).
 On the read path, the original code scans the whole range tombstone list of a 
 partition to match sstable columns to their range tombstones. The 
 RangeTombstonesReadOptimization.diff patch scans only the necessary range of 
 tombstones, according to the filter used for the read.
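The write-path idea described above can be sketched in miniature. This is an illustrative, simplified model using plain Java collections (not Cassandra's memtable or DeletionInfo types): instead of testing every column against the tombstones, loop over the tombstones and clear only the covered sub-ranges of a sorted map.

```java
import java.util.*;

// Simplified stand-in for the write-path optimization: columns are a sorted
// map keyed by column name, and a range tombstone is an inclusive [start, end]
// pair. All names here are illustrative, not Cassandra's actual classes.
public class TombstoneApplyDemo
{
    // Brute-force: visit every column and check it against every tombstone.
    static NavigableMap<Integer, String> bruteForce(NavigableMap<Integer, String> columns, List<int[]> tombstones)
    {
        NavigableMap<Integer, String> result = new TreeMap<>(columns);
        result.keySet().removeIf(name -> tombstones.stream().anyMatch(t -> t[0] <= name && name <= t[1]));
        return result;
    }

    // Tombstone-driven: for each tombstone, drop just the covered sub-range.
    // Cost scales with tombstones + deleted columns, not total columns.
    static NavigableMap<Integer, String> byTombstones(NavigableMap<Integer, String> columns, List<int[]> tombstones)
    {
        NavigableMap<Integer, String> result = new TreeMap<>(columns);
        for (int[] t : tombstones)
            result.subMap(t[0], true, t[1], true).clear();
        return result;
    }

    public static void main(String[] args)
    {
        NavigableMap<Integer, String> columns = new TreeMap<>();
        for (int i = 0; i < 1000; i++)
            columns.put(i, "v" + i);
        List<int[]> tombstones = Arrays.asList(new int[]{ 10, 19 }, new int[]{ 500, 509 });
        NavigableMap<Integer, String> a = bruteForce(columns, tombstones);
        NavigableMap<Integer, String> b = byTombstones(columns, tombstones);
        if (!a.equals(b)) throw new AssertionError("results differ");
        System.out.println(a.size()); // 980
    }
}
```

Both paths produce the same surviving columns; only the amount of work differs when the partition is wide and the deletions are narrow.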



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-6381) Refactor nodetool

2014-01-06 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Clément Lardeur updated CASSANDRA-6381:
---

Attachment: trunk-6381.patch

The patch includes 2 binary files 'airline-06.jar' and 'javax.inject-1.jar' 
placed in the /lib folder.

 Refactor nodetool
 -

 Key: CASSANDRA-6381
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6381
 Project: Cassandra
  Issue Type: Improvement
Reporter: Yuki Morishita
Priority: Minor
  Labels: lhf, nodetool
 Attachments: trunk-6381.patch


 We have way too many nodetool commands (more than 40) packed into one NodeCmd 
 class, and we are trying to add more commands.
 https://github.com/airlift/airline could be a good fit for taking each 
 command out into its own subcommand class.
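The refactoring shape being proposed can be sketched without the airline dependency. Airline does this with @Command annotations and a Cli builder; the stand-in below only shows the structure (one class per subcommand behind a common interface, plus a dispatcher), and every name in it is illustrative.

```java
import java.util.*;

// Hedged sketch of "one class per subcommand" instead of a 40-way branch in a
// single NodeCmd class. The airline library generates the dispatcher and help
// text from annotations; here a plain map plays that role.
public class NodeToolSketch
{
    interface Command { String run(); }

    static class Status implements Command { public String run() { return "status: all nodes up"; } }
    static class Ring   implements Command { public String run() { return "ring: 3 tokens"; } }

    static final Map<String, Command> COMMANDS = new HashMap<>();
    static
    {
        COMMANDS.put("status", new Status());
        COMMANDS.put("ring", new Ring());
    }

    static String dispatch(String name)
    {
        Command c = COMMANDS.get(name);
        return c == null ? "unknown command: " + name : c.run();
    }

    public static void main(String[] args)
    {
        System.out.println(dispatch(args.length > 0 ? args[0] : "status"));
    }
}
```

Adding a command then means adding one small class and one registration line, rather than growing a monolithic class.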





[jira] [Updated] (CASSANDRA-6438) Decide if we want to make user types keyspace scoped

2014-01-06 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-6438:


Attachment: (was: 6438.txt)

 Decide if we want to make user types keyspace scoped
 

 Key: CASSANDRA-6438
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6438
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 2.1

 Attachments: 6438.txt


 Currently, user types are declared at the top level. I wonder however if we 
 might not want to make them scoped to a given keyspace. It was not done in 
 the initial patch for simplicity and because I was not sure of the advantages 
 of doing so. However, if we ever want to use user types in system tables, 
 having them scoped by keyspace means we won't have to care about the new type 
 conflicting with another existing type.
 Besides, having user types be part of a keyspace would allow for slightly 
 more fine-grained permissions on them. 





[jira] [Updated] (CASSANDRA-6438) Decide if we want to make user types keyspace scoped

2014-01-06 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-6438:


Attachment: 6438.txt

Rebased version attached.

 Decide if we want to make user types keyspace scoped
 

 Key: CASSANDRA-6438
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6438
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 2.1

 Attachments: 6438.txt


 Currently, user types are declared at the top level. I wonder however if we 
 might not want to make them scoped to a given keyspace. It was not done in 
 the initial patch for simplicity and because I was not sure of the advantages 
 of doing so. However, if we ever want to use user types in system tables, 
 having them scoped by keyspace means we won't have to care about the new type 
 conflicting with another existing type.
 Besides, having user types be part of a keyspace would allow for slightly 
 more fine-grained permissions on them. 





[jira] [Commented] (CASSANDRA-5201) Cassandra/Hadoop does not support current Hadoop releases

2014-01-06 Thread Jeremy Hanna (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13862986#comment-13862986
 ] 

Jeremy Hanna commented on CASSANDRA-5201:
-

Thanks [~dvryaboy]!

Just for completeness the twitter thread is 
https://twitter.com/jeromatron/status/419607697588510721

[~bcoverston] [~jbellis] what do you think?  Do you mind 

As for me, it sounds like the EB (or, if it makes more sense, Parquet) dependency 
makes sense. The hadoop-incompatibility findbugs detector also sounds great to 
include, to catch anything before it is committed. So I'm +1 on this approach.

 Cassandra/Hadoop does not support current Hadoop releases
 -

 Key: CASSANDRA-5201
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5201
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Affects Versions: 1.2.0
Reporter: Brian Jeltema
Assignee: Dave Brosius
 Attachments: 5201_a.txt, hadoopCompat.patch


 Using Hadoop 0.22.0 with Cassandra results in the stack trace below.
 It appears that version 0.21+ changed org.apache.hadoop.mapreduce.JobContext
 from a class to an interface.
 Exception in thread "main" java.lang.IncompatibleClassChangeError: Found 
 interface org.apache.hadoop.mapreduce.JobContext, but class was expected
   at 
 org.apache.cassandra.hadoop.ColumnFamilyInputFormat.getSplits(ColumnFamilyInputFormat.java:103)
   at 
 org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:445)
   at 
 org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:462)
   at 
 org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:357)
   at org.apache.hadoop.mapreduce.Job$2.run(Job.java:1045)
   at org.apache.hadoop.mapreduce.Job$2.run(Job.java:1042)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1153)
   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1042)
   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1062)
   at MyHadoopApp.run(MyHadoopApp.java:163)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:69)
   at MyHadoopApp.main(MyHadoopApp.java:82)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at org.apache.hadoop.util.RunJar.main(RunJar.java:192)





[jira] [Resolved] (CASSANDRA-6501) Cannot run pig examples on current 2.0 branch

2014-01-06 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams resolved CASSANDRA-6501.
-

Resolution: Cannot Reproduce

 Cannot run pig examples on current 2.0 branch
 -

 Key: CASSANDRA-6501
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6501
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: Jeremy Hanna
Assignee: Alex Liu
  Labels: pig

 I checked out the cassandra-2.0 branch to try the pig examples because the 
 2.0.3 release has the CASSANDRA-6309 problem which is fixed on the branch.  I 
 tried to run both the cql and the CassandraStorage examples in local mode 
 with pig 0.10.1, 0.11.1, and 0.12.0 and all of them give the following error 
 and stack trace:
 {quote}
 ERROR 2998: Unhandled internal error. readLength_
 java.lang.NoSuchFieldError: readLength_
   at 
 org.apache.cassandra.thrift.TBinaryProtocol$Factory.getProtocol(TBinaryProtocol.java:57)
   at org.apache.thrift.TSerializer.init(TSerializer.java:66)
   at 
 org.apache.cassandra.hadoop.pig.AbstractCassandraStorage.cfdefToString(AbstractCassandraStorage.java:508)
   at 
 org.apache.cassandra.hadoop.pig.AbstractCassandraStorage.initSchema(AbstractCassandraStorage.java:470)
   at 
 org.apache.cassandra.hadoop.pig.CassandraStorage.setLocation(CassandraStorage.java:318)
   at 
 org.apache.cassandra.hadoop.pig.CassandraStorage.getSchema(CassandraStorage.java:357)
   at 
 org.apache.pig.newplan.logical.relational.LOLoad.getSchemaFromMetaData(LOLoad.java:151)
   at 
 org.apache.pig.newplan.logical.relational.LOLoad.getSchema(LOLoad.java:110)
   at 
 org.apache.pig.parser.LogicalPlanGenerator.alias_col_ref(LogicalPlanGenerator.java:15356)
   at 
 org.apache.pig.parser.LogicalPlanGenerator.col_ref(LogicalPlanGenerator.java:15203)
   at 
 org.apache.pig.parser.LogicalPlanGenerator.projectable_expr(LogicalPlanGenerator.java:8881)
   at 
 org.apache.pig.parser.LogicalPlanGenerator.var_expr(LogicalPlanGenerator.java:8632)
   at 
 org.apache.pig.parser.LogicalPlanGenerator.expr(LogicalPlanGenerator.java:7984)
   at 
 org.apache.pig.parser.LogicalPlanGenerator.flatten_generated_item(LogicalPlanGenerator.java:5962)
   at 
 org.apache.pig.parser.LogicalPlanGenerator.generate_clause(LogicalPlanGenerator.java:14101)
   at 
 org.apache.pig.parser.LogicalPlanGenerator.foreach_plan(LogicalPlanGenerator.java:12493)
   at 
 org.apache.pig.parser.LogicalPlanGenerator.foreach_clause(LogicalPlanGenerator.java:12360)
   at 
 org.apache.pig.parser.LogicalPlanGenerator.op_clause(LogicalPlanGenerator.java:1577)
   at 
 org.apache.pig.parser.LogicalPlanGenerator.general_statement(LogicalPlanGenerator.java:789)
   at 
 org.apache.pig.parser.LogicalPlanGenerator.statement(LogicalPlanGenerator.java:507)
   at 
 org.apache.pig.parser.LogicalPlanGenerator.query(LogicalPlanGenerator.java:382)
   at 
 org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:175)
   at org.apache.pig.PigServer$Graph.parseQuery(PigServer.java:1589)
   at org.apache.pig.PigServer$Graph.registerQuery(PigServer.java:1540)
   at org.apache.pig.PigServer.registerQuery(PigServer.java:540)
   at 
 org.apache.pig.tools.grunt.GruntParser.processPig(GruntParser.java:970)
   at 
 org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:386)
   at 
 org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:189)
   at 
 org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:165)
   at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
   at org.apache.pig.Main.run(Main.java:555)
   at org.apache.pig.Main.main(Main.java:111)
 
 {quote}





[jira] [Updated] (CASSANDRA-6503) sstables from stalled repair sessions become live after a reboot and can resurrect deleted data

2014-01-06 Thread Jason Brown (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-6503:
---

Fix Version/s: 1.2.14

 sstables from stalled repair sessions become live after a reboot and can 
 resurrect deleted data
 ---

 Key: CASSANDRA-6503
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6503
 Project: Cassandra
  Issue Type: Bug
Reporter: Jeremiah Jordan
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 1.2.14, 2.0.5

 Attachments: 6503_c1.2-v1.patch


 The sstables streamed in during a repair session don't become active until 
 the session finishes.  If something causes the repair session to hang for 
 some reason, those sstables will hang around until the next reboot, and 
 become active then.  If you don't reboot for 3 months, this can cause data to 
 resurrect, as GC grace has expired, so tombstones for the data in those 
 sstables may have already been collected.





[jira] [Updated] (CASSANDRA-6503) sstables from stalled repair sessions become live after a reboot and can resurrect deleted data

2014-01-06 Thread Jason Brown (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Brown updated CASSANDRA-6503:
---

Attachment: 6503_c1.2-v1.patch

Attached patch, 6503_c1.2-v1, defers the release of the sstables to the CFS 
until the session is complete. Note: that patch is only for 1.2.

For c* 2.0, I'd like [~yukim]'s advice. I have a WIP here: 
https://github.com/jasobrown/cassandra/tree/6503_c2.0. The problem I'm running 
into is that FileMessage.sstable is of type SSTableReader, which we need on the 
sender side, but on the receiver side we want SSTableWriter (if we are going to 
defer the release of the sstables). For the sake of hacking things up, I've just 
changed FileMessage.sstable to a plain SSTable and let the users do the casting 
- which happens in only two places, one of which is the 
FileMessage.Serializer.serialize() method. Not very extensive, but perhaps a 
bit sloppy.

Yuki, do you think it's worthwhile to split up the FileMessage object into two 
classes like OutFileMessage (which has an SSTR) and InFileMessage (which has an 
SSTW)? 
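The split being proposed can be sketched in a few lines. The class names OutFileMessage and InFileMessage come from the comment itself; SSTableReader and SSTableWriter below are stub stand-ins, not Cassandra's real types. The point is that each side of the stream carries exactly the representation it needs, so no shared SSTable field and no casting.

```java
// Hypothetical sketch of splitting FileMessage into sender/receiver variants.
// All types here are illustrative stubs.
public class FileMessageSketch
{
    static class SSTable { final String path; SSTable(String path) { this.path = path; } }
    static class SSTableReader extends SSTable { SSTableReader(String p) { super(p); } }
    static class SSTableWriter extends SSTable { SSTableWriter(String p) { super(p); } }

    // Sender side: always holds a reader of a live sstable.
    static class OutFileMessage
    {
        final SSTableReader sstable;
        OutFileMessage(SSTableReader sstable) { this.sstable = sstable; }
    }

    // Receiver side: always holds a writer, whose finalization can be deferred
    // until the whole repair session completes.
    static class InFileMessage
    {
        final SSTableWriter sstable;
        InFileMessage(SSTableWriter sstable) { this.sstable = sstable; }
    }

    public static void main(String[] args)
    {
        OutFileMessage out = new OutFileMessage(new SSTableReader("a-Data.db"));
        InFileMessage in = new InFileMessage(new SSTableWriter("tmp-a-Data.db"));
        System.out.println(out.sstable.path + " -> " + in.sstable.path);
    }
}
```

The type system then enforces what the "plain SSTable plus casting" version only checks at runtime.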

 sstables from stalled repair sessions become live after a reboot and can 
 resurrect deleted data
 ---

 Key: CASSANDRA-6503
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6503
 Project: Cassandra
  Issue Type: Bug
Reporter: Jeremiah Jordan
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 1.2.14, 2.0.5

 Attachments: 6503_c1.2-v1.patch


 The sstables streamed in during a repair session don't become active until 
 the session finishes.  If something causes the repair session to hang for 
 some reason, those sstables will hang around until the next reboot, and 
 become active then.  If you don't reboot for 3 months, this can cause data to 
 resurrect, as GC grace has expired, so tombstones for the data in those 
 sstables may have already been collected.





[jira] [Commented] (CASSANDRA-6381) Refactor nodetool

2014-01-06 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863025#comment-13863025
 ] 

Jonathan Ellis commented on CASSANDRA-6381:
---

I don't see any usages of javax.inject -- is that a dependency of airline?

 Refactor nodetool
 -

 Key: CASSANDRA-6381
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6381
 Project: Cassandra
  Issue Type: Improvement
Reporter: Yuki Morishita
Priority: Minor
  Labels: lhf, nodetool
 Attachments: trunk-6381.patch


 We have way too many nodetool commands (more than 40) packed into one NodeCmd 
 class, and we are trying to add more commands.
 https://github.com/airlift/airline could be a good fit for taking each 
 command out into its own subcommand class.





[jira] [Updated] (CASSANDRA-6381) Refactor nodetool

2014-01-06 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6381:
--

Reviewer: Mikhail Stepura
Assignee: Clément Lardeur

Can you review, [~mishail]?

 Refactor nodetool
 -

 Key: CASSANDRA-6381
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6381
 Project: Cassandra
  Issue Type: Improvement
Reporter: Yuki Morishita
Assignee: Clément Lardeur
Priority: Minor
  Labels: lhf, nodetool
 Attachments: trunk-6381.patch


 We have way too many nodetool commands (more than 40) packed into one NodeCmd 
 class, and we are trying to add more commands.
 https://github.com/airlift/airline could be a good fit for taking each 
 command out into its own subcommand class.





[jira] [Commented] (CASSANDRA-6463) cleanup causes permission problems

2014-01-06 Thread Andreas Schnitzerling (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863039#comment-13863039
 ] 

Andreas Schnitzerling commented on CASSANDRA-6463:
--

Which node(s) need repair? The one(s) cleaned up or all nodes of the cluster?

 cleanup causes permission problems
 --

 Key: CASSANDRA-6463
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6463
 Project: Cassandra
  Issue Type: Bug
  Components: Core, Tools
 Environment: Windows 7 / Java 1.7.0.25
Reporter: Andreas Schnitzerling
  Labels: authentication, cleanup, consistency, cql3, newbie, 
 nodetool, permissions
 Fix For: 2.0.5


 After a cleanup I lose permissions (e.g. SELECT, MODIFY). When I listed the 
 system_auth/permissions CF after cleanup, I noticed that around half of all 
 permissions were lost - BUT: if I list the permissions table with consistency 
 ALL, all entries appear again and my program continues working. Only if I 
 manually trigger that read-repair! That tells me, indirectly, that system_auth 
 is read using consistency ONE, which causes problems after cleanup (?). A good 
 approach (?) could be to re-read system_auth after a permission failure with 
 consistency > ONE, to trigger read-repair once and keep speed.
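The fallback being suggested (fast read first, higher-consistency retry only on a miss) can be modeled with a toy function. This is an illustration of the control flow only, with made-up names, not Cassandra's auth internals.

```java
import java.util.*;
import java.util.function.Function;

// Toy model of "retry the permissions read at higher consistency on a miss":
// the higher-consistency read triggers a read-repair, after which the fast
// path works again. The storage function is a simulated replica set.
public class AuthReadFallbackDemo
{
    enum Consistency { ONE, ALL }

    static Optional<String> readPermission(Function<Consistency, Optional<String>> storage)
    {
        Optional<String> fast = storage.apply(Consistency.ONE);
        if (fast.isPresent())
            return fast;
        // Fallback: the higher-consistency read repairs inconsistent replicas.
        return storage.apply(Consistency.ALL);
    }

    public static void main(String[] args)
    {
        // Simulated store where the single-replica read lost the row after cleanup.
        Function<Consistency, Optional<String>> storage =
            c -> c == Consistency.ALL ? Optional.of("SELECT") : Optional.empty();
        System.out.println(readPermission(storage).orElse("denied")); // SELECT
    }
}
```

The common case stays on the cheap ONE read; the expensive ALL read runs only when something actually looks wrong.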





[jira] [Commented] (CASSANDRA-6381) Refactor nodetool

2014-01-06 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863032#comment-13863032
 ] 

Clément Lardeur commented on CASSANDRA-6381:


Yes, it is airline that needs javax.inject.

 Refactor nodetool
 -

 Key: CASSANDRA-6381
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6381
 Project: Cassandra
  Issue Type: Improvement
Reporter: Yuki Morishita
Priority: Minor
  Labels: lhf, nodetool
 Attachments: trunk-6381.patch


 We have way too many nodetool commands (more than 40) packed into one NodeCmd 
 class, and we are trying to add more commands.
 https://github.com/airlift/airline could be a good fit for taking each 
 command out into its own subcommand class.





[jira] [Commented] (CASSANDRA-4687) Exception: DecoratedKey(xxx, yyy) != DecoratedKey(zzz, kkk)

2014-01-06 Thread Jacek Furmankiewicz (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863104#comment-13863104
 ] 

Jacek Furmankiewicz commented on CASSANDRA-4687:


Hi, sorry for the delay, was out during the holidays.

So, I have good news and I have bad news.

The custom patch that was created seemed to have fixed the issue. The customer 
was able to do the initial data load and complete it without Cassandra freezing 
in the middle of it, as before. That is the good news.

Unfortunately, after a few hours of usage of this custom 1.2 version they 
started seeing other errors they had never seen before:

ERROR [ReadStage:103] 2013-12-29 08:20:49,676 CassandraDaemon.java (line 191) 
Exception in thread Thread[ReadStage:103,5,main]
java.lang.RuntimeException: java.lang.IllegalArgumentException: unable to seek 
to position 315117685 in 
/app/cassandra-int/data/SCHEDULE/TRIGGER_EVENT/SCHEDULE-TRIGGER_EVENT-ic-5-Data.db
 (191774 bytes) in read-only mode
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1614)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.lang.IllegalArgumentException: unable to seek to position 
315117685 in 
/app/cassandra-int/data/SCHEDULE/TRIGGER_EVENT/SCHEDULE-TRIGGER_EVENT-ic-5-Data.db
 (191774 bytes) in read-only mode
at 
org.apache.cassandra.io.util.RandomAccessReader.seek(RandomAccessReader.java:306)
at 
org.apache.cassandra.io.util.PoolingSegmentedFile.getSegment(PoolingSegmentedFile.java:42)
at 
org.apache.cassandra.io.sstable.SSTableReader.getFileDataInput(SSTableReader.java:1048)
at 
org.apache.cassandra.db.columniterator.IndexedSliceReader.setToRowStart(IndexedSliceReader.java:130)
at 
org.apache.cassandra.db.columniterator.IndexedSliceReader.&lt;init&gt;(IndexedSliceReader.java:91)
at 
org.apache.cassandra.db.columniterator.SSTableSliceIterator.createReader(SSTableSliceIterator.java:68)
at 
org.apache.cassandra.db.columniterator.SSTableSliceIterator.&lt;init&gt;(SSTableSliceIterator.java:44)
at 
org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:104)
at 
org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:68)
at 
org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:272)
at 
org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:65)
at 
org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1397)
at 
org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1213)
at 
org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1129)
at org.apache.cassandra.db.Table.getRow(Table.java:344)
at 
org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:70)
at 
org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1058)
at 
org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1610)
... 3 more

They never saw these errors with their current production 1.1.12 version.

So I am not sure if this problem is related to the patch or whether it is a new 
bug in the 1.2 version that they are just happening to see.

We reverted back to 1.1.12 and disabled the key cache so we could continue 
towards our live date.

But there is still something fishy going on here.

 Exception: DecoratedKey(xxx, yyy) != DecoratedKey(zzz, kkk)
 ---

 Key: CASSANDRA-4687
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4687
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: CentOS 6.3 64-bit, Oracle JRE 1.6.0.33 64-bit, single 
 node cluster
Reporter: Leonid Shalupov
Priority: Minor
 Attachments: 4687-debugging.txt, 
 apache-cassandra-1.2.13-SNAPSHOT.jar, guava-backed-cache.patch


 Under heavy write load sometimes cassandra fails with assertion error.
 git bisect leads to commit 295aedb278e7a495213241b66bc46d763fd4ce66.
 works fine if global key/row caches disabled in code.
 {quote}
 java.lang.AssertionError: DecoratedKey(xxx, yyy) != DecoratedKey(zzz, kkk) in 
 /var/lib/cassandra/data/...-he-1-Data.db
   at 
 org.apache.cassandra.db.columniterator.SSTableSliceIterator.&lt;init&gt;(SSTableSliceIterator.java:60)
   at 
 org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:67)
   at 
 

[jira] [Commented] (CASSANDRA-6503) sstables from stalled repair sessions become live after a reboot and can resurrect deleted data

2014-01-06 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863083#comment-13863083
 ] 

Yuki Morishita commented on CASSANDRA-6503:
---

[~jasobrown] What I'm thinking of doing is to closeAndOpenReader without renaming 
from tmp as we receive, and rename them all at once at the end. And for renaming 
multiple files at once, we probably want some kind of lockfile (also resolving 
CASSANDRA-2900?).

Though finalizing the SSTable write in closeAndOpenReader takes some time, so 
completely deferring the finalize as you do may be a good idea. I think splitting 
FileMessage is better than casting.
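The "rename all at once at the end" idea can be sketched with java.nio file operations. This is an illustrative stand-in, not Cassandra's streaming code: each received file lands under a tmp name, and only a completed session promotes everything to its live name, so a stalled session never leaves live sstables behind.

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.*;

// Illustrative sketch of deferring renames until a session completes.
public class DeferredRenameDemo
{
    static void finalizeSession(Map<Path, Path> tmpToFinal) throws IOException
    {
        // Verify every expected tmp file arrived before promoting any of them;
        // an incomplete session leaves only tmp files on disk.
        for (Path tmp : tmpToFinal.keySet())
            if (!Files.exists(tmp))
                throw new IOException("session incomplete, missing: " + tmp);
        for (Map.Entry<Path, Path> e : tmpToFinal.entrySet())
            Files.move(e.getKey(), e.getValue(), StandardCopyOption.ATOMIC_MOVE);
    }

    public static void main(String[] args) throws IOException
    {
        Path dir = Files.createTempDirectory("session");
        Map<Path, Path> files = new LinkedHashMap<>();
        for (int i = 0; i < 3; i++)
        {
            Path tmp = dir.resolve("tmp-" + i + "-Data.db");
            Files.write(tmp, ("sstable " + i).getBytes());
            files.put(tmp, dir.resolve(i + "-Data.db"));
        }
        finalizeSession(files);
        System.out.println(Files.exists(dir.resolve("0-Data.db"))); // true
    }
}
```

Each individual rename is atomic, but the pass over multiple files is not, which is why the comment mentions wanting a lockfile to make the batch recoverable.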

 sstables from stalled repair sessions become live after a reboot and can 
 resurrect deleted data
 ---

 Key: CASSANDRA-6503
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6503
 Project: Cassandra
  Issue Type: Bug
Reporter: Jeremiah Jordan
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 1.2.14, 2.0.5

 Attachments: 6503_c1.2-v1.patch


 The sstables streamed in during a repair session don't become active until 
 the session finishes.  If something causes the repair session to hang for 
 some reason, those sstables will hang around until the next reboot, and 
 become active then.  If you don't reboot for 3 months, this can cause data to 
 resurrect, as GC grace has expired, so tombstones for the data in those 
 sstables may have already been collected.





[jira] [Commented] (CASSANDRA-6480) Custom secondary index options in CQL3

2014-01-06 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863080#comment-13863080
 ] 

Sylvain Lebresne commented on CASSANDRA-6480:
-

Minor remarks on the validation:
* I'd reject 'class_name' as an option, instead of silently overriding it.
* I'd rather let non-custom {{CREATE INDEX}} with options pass the parser but 
be rejected later. An antlr parse error is a lot more confusing than a clear 
message telling you that options are not supported for non-custom indexes (I 
know there are existing places where we do something similar, but there's no 
reason not to improve our ways :)).

Nit: I'd move the isCustom and customClass fields inside IndexPropDefs.
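The two validation points can be sketched as a small check run after parsing. Everything below is illustrative (the class and method names are made up, not Cassandra's actual IndexPropDefs API): treat 'class_name' as reserved, and reject options on non-custom indexes with a clear message instead of a parse error.

```java
import java.util.*;

// Illustrative post-parse validation for index options.
public class IndexOptionsValidator
{
    static final String CLASS_NAME_KEY = "class_name";

    static void validate(boolean isCustom, Map<String, String> options)
    {
        // Clear message instead of an antlr parse error.
        if (!isCustom && !options.isEmpty())
            throw new IllegalArgumentException("options are not supported for non-custom indexes");
        // Reject the reserved key instead of silently overriding it.
        if (options.containsKey(CLASS_NAME_KEY))
            throw new IllegalArgumentException("'" + CLASS_NAME_KEY + "' is a reserved option name");
    }

    public static void main(String[] args)
    {
        validate(true, Map.of("mode", "contains")); // accepted
        try
        {
            validate(true, Map.of(CLASS_NAME_KEY, "com.example.MyIndex"));
        }
        catch (IllegalArgumentException e)
        {
            System.out.println(e.getMessage());
        }
    }
}
```

Doing this after the parser keeps the grammar simple while still failing fast with an actionable message.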


 Custom secondary index options in CQL3
 --

 Key: CASSANDRA-6480
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6480
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Andrés de la Peña
Assignee: Aleksey Yeschenko
Priority: Minor
  Labels: cql3, index
 Fix For: 2.0.5

 Attachments: 6480-v2.txt


 The CQL3 create index statement syntax does not allow specifying the 
 options map internally used by custom indexes. 





[jira] [Commented] (CASSANDRA-4687) Exception: DecoratedKey(xxx, yyy) != DecoratedKey(zzz, kkk)

2014-01-06 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863113#comment-13863113
 ] 

Balázs Póka commented on CASSANDRA-4687:


I'd like to point out that Jacek's last comment contains an exception which is 
_very_ similar to mine, posted quite a while back. It suggests that this bug 
manifests itself differently in Cassandra 1.1 vs 1.2, but I can imagine the 
root cause to be the same or very similar. If that is true, we could infer that 
the bug was not fixed with the customized 1.1 version.

 Exception: DecoratedKey(xxx, yyy) != DecoratedKey(zzz, kkk)
 ---

 Key: CASSANDRA-4687
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4687
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: CentOS 6.3 64-bit, Oracle JRE 1.6.0.33 64-bit, single 
 node cluster
Reporter: Leonid Shalupov
Priority: Minor
 Attachments: 4687-debugging.txt, 
 apache-cassandra-1.2.13-SNAPSHOT.jar, guava-backed-cache.patch


 Under heavy write load sometimes cassandra fails with assertion error.
 git bisect leads to commit 295aedb278e7a495213241b66bc46d763fd4ce66.
 works fine if global key/row caches disabled in code.
 {quote}
 java.lang.AssertionError: DecoratedKey(xxx, yyy) != DecoratedKey(zzz, kkk) in 
 /var/lib/cassandra/data/...-he-1-Data.db
   at 
 org.apache.cassandra.db.columniterator.SSTableSliceIterator.&lt;init&gt;(SSTableSliceIterator.java:60)
   at 
 org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:67)
   at 
 org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:79)
   at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:256)
   at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:64)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1345)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1207)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1142)
   at org.apache.cassandra.db.Table.getRow(Table.java:378)
   at 
 org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:69)
   at 
 org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:819)
   at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1253)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 {quote}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-4687) Exception: DecoratedKey(xxx, yyy) != DecoratedKey(zzz, kkk)

2014-01-06 Thread Jacek Furmankiewicz (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13863118#comment-13863118
 ] 

Jacek Furmankiewicz commented on CASSANDRA-4687:


The version we were running was a customized 1.2. It had the patch + libsnappy 
was downgraded from 0.5 to 0.4, so that it could still run on RHEL5.



[jira] [Resolved] (CASSANDRA-6528) TombstoneOverwhelmingException is thrown while populating data in recently truncated CF

2014-01-06 Thread Nikolai Grigoriev (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolai Grigoriev resolved CASSANDRA-6528.
--

Resolution: Cannot Reproduce

Closing since I cannot reproduce it anymore. Will reopen if I manage to 
reproduce it again and capture the debug information as per instructions above.

 TombstoneOverwhelmingException is thrown while populating data in recently 
 truncated CF
 ---

 Key: CASSANDRA-6528
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6528
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cassandra 2.0.3, Linux, 6 nodes
Reporter: Nikolai Grigoriev
Priority: Minor

 I am running some performance tests and recently I had to flush the data from 
 one of the tables and repopulate it. I have about 30M rows with a few columns 
 in each, about 5kb per row in total. In order to repopulate the data I do 
 truncate table from CQLSH and then relaunch the test. The test simply 
 inserts the data in the table, does not read anything. Shortly after 
 restarting the data generator I see this on one of the nodes:
 {code}
  INFO [HintedHandoff:655] 2013-12-26 16:45:42,185 HintedHandOffManager.java 
 (line 323) Started hinted handoff for host: 985c8a08-3d92-4fad-a1d1-7135b2b9774a with IP: /10.5.45.158
 ERROR [HintedHandoff:655] 2013-12-26 16:45:42,680 SliceQueryFilter.java (line 
 200) Scanned over 10 tombstones; query aborted (see tombstone_fail_threshold)
 ERROR [HintedHandoff:655] 2013-12-26 16:45:42,680 CassandraDaemon.java (line 
 187) Exception in thread Thread[HintedHandoff:655,1,main]
 org.apache.cassandra.db.filter.TombstoneOverwhelmingException
 at 
 org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:201)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:122)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
 at 
 org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
 at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
 at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:56)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1487)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1306)
 at 
 org.apache.cassandra.db.HintedHandOffManager.doDeliverHintsToEndpoint(HintedHandOffManager.java:351)
 at 
 org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:309)
 at 
 org.apache.cassandra.db.HintedHandOffManager.access$4(HintedHandOffManager.java:281)
 at 
 org.apache.cassandra.db.HintedHandOffManager$4.run(HintedHandOffManager.java:530)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:724)
  INFO [OptionalTasks:1] 2013-12-26 16:45:53,946 MeteredFlusher.java (line 63) 
 flushing high-traffic column family CFS(Keyspace='test_jmeter', 
 ColumnFamily='test_profiles') (estimated 192717267 bytes)
 {code}
 I am inserting the data with CL=1.
 It seems to be happening every time I do it. But I do not see any errors on 
 the client side and the node seems to continue operating, this is why I think 
 it is not a major issue. Maybe not an issue at all, but the message is logged 
 as ERROR.
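 The abort in the log can be pictured as a simple counter check during the
 slice read. The sketch below is illustrative only, not Cassandra's actual
 SliceQueryFilter: the class, method, and threshold parameter are hypothetical
 stand-ins for the tombstone_fail_threshold behavior described above.
 {code}
```java
import java.util.List;

public class TombstoneGuard {
    /**
     * Counts live cells in a slice, aborting once more than
     * failureThreshold tombstones have been scanned (illustrative only).
     */
    public static int countLive(List<Boolean> cellIsTombstone, int failureThreshold) {
        int tombstones = 0;
        int live = 0;
        for (boolean isTombstone : cellIsTombstone) {
            if (isTombstone) {
                tombstones++;
                if (tombstones > failureThreshold) {
                    // corresponds to the TombstoneOverwhelmingException in the log above
                    throw new IllegalStateException(
                        "Scanned over " + tombstones + " tombstones; query aborted");
                }
            } else {
                live++;
            }
        }
        return live;
    }
}
```
 {code}
 A truncate that leaves many tombstones behind makes the counter trip long
 before the slice finds enough live cells, which matches the ERROR above.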





[jira] [Commented] (CASSANDRA-6528) TombstoneOverwhelmingException is thrown while populating data in recently truncated CF

2014-01-06 Thread Nikolai Grigoriev (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13863123#comment-13863123
 ] 

Nikolai Grigoriev commented on CASSANDRA-6528:
--

I have retried the same test 5 times after recovering from sstable corruption 
and can no longer reproduce the problem :(

One possibility is that one or more nodes in the cluster were suffering from a 
network problem caused by broken irqbalance. Since then I have got rid of it. 

 



[jira] [Comment Edited] (CASSANDRA-5201) Cassandra/Hadoop does not support current Hadoop releases

2014-01-06 Thread Jeremy Hanna (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13862986#comment-13862986
 ] 

Jeremy Hanna edited comment on CASSANDRA-5201 at 1/6/14 5:00 PM:
-

Thanks [~dvryaboy]!

Just for completeness the twitter thread is 
https://twitter.com/jeromatron/status/419607697588510721

[~bcoverston] [~jbellis] what do you think?

As for me, it sounds like the EB (or, if it makes more sense, Parquet) dependency 
makes sense.  The Hadoop-incompatibility FindBugs detector also sounds great to 
include to catch anything before it is committed.  So I'm +1 on this approach.


was (Author: jeromatron):
Thanks [~dvryaboy]!

Just for completeness the twitter thread is 
https://twitter.com/jeromatron/status/419607697588510721

[~bcoverston] [~jbellis] what do you think?  Do you mind 

As for me, it sounds like the EB (or if it makes more sense Parquet) dependency 
makes sense.  The hadoop incompatibility findbugs detector also sounds great to 
include to catch anything before it is committed.  So I'm +1 on this approach.

 Cassandra/Hadoop does not support current Hadoop releases
 -

 Key: CASSANDRA-5201
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5201
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Affects Versions: 1.2.0
Reporter: Brian Jeltema
Assignee: Dave Brosius
 Attachments: 5201_a.txt, hadoopCompat.patch


 Using Hadoop 0.22.0 with Cassandra results in the stack trace below.
 It appears that version 0.21+ changed org.apache.hadoop.mapreduce.JobContext
 from a class to an interface.
 Exception in thread "main" java.lang.IncompatibleClassChangeError: Found 
 interface org.apache.hadoop.mapreduce.JobContext, but class was expected
   at 
 org.apache.cassandra.hadoop.ColumnFamilyInputFormat.getSplits(ColumnFamilyInputFormat.java:103)
   at 
 org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:445)
   at 
 org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:462)
   at 
 org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:357)
   at org.apache.hadoop.mapreduce.Job$2.run(Job.java:1045)
   at org.apache.hadoop.mapreduce.Job$2.run(Job.java:1042)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1153)
   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1042)
   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1062)
   at MyHadoopApp.run(MyHadoopApp.java:163)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:69)
   at MyHadoopApp.main(MyHadoopApp.java:82)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at org.apache.hadoop.util.RunJar.main(RunJar.java:192)
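 The class-vs-interface mismatch arises because javac emits a class or an
 interface dispatch at the call site depending on what JobContext was at
 compile time. A common workaround, the general approach taken by Hadoop
 compatibility shims (this is a generic sketch, not the attached
 hadoopCompat.patch), is to bind the call at runtime via reflection:
 {code}
```java
import java.lang.reflect.Method;

public class HadoopCompat {
    /**
     * Invokes a no-argument method by name, deciding class-vs-interface
     * dispatch at runtime instead of at compile time.
     */
    public static Object invokeNoArg(Object target, String methodName) {
        try {
            Method m = target.getClass().getMethod(methodName);
            return m.invoke(target);
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("incompatible API: " + methodName, e);
        }
    }
}
```
 {code}
 A caller would fetch the job configuration as
 invokeNoArg(context, "getConfiguration") and never hard-code whether
 JobContext is a class or an interface.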





[jira] [Commented] (CASSANDRA-4687) Exception: DecoratedKey(xxx, yyy) != DecoratedKey(zzz, kkk)

2014-01-06 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13863130#comment-13863130
 ] 

Balázs Póka commented on CASSANDRA-4687:


I'm sorry, I must have misunderstood something. My first statement may still be 
valid, though. Since this stack was from 1.2, it's not a big surprise that it 
looks like mine.



[jira] [Commented] (CASSANDRA-5549) Remove Table.switchLock

2014-01-06 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13863143#comment-13863143
 ] 

Jonathan Ellis commented on CASSANDRA-5549:
---

What happens now when I hit my memtable memory ceiling, before flush makes more 
room?

 Remove Table.switchLock
 ---

 Key: CASSANDRA-5549
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5549
 Project: Cassandra
  Issue Type: Bug
Reporter: Jonathan Ellis
Assignee: Benedict
  Labels: performance
 Fix For: 2.1

 Attachments: 5549-removed-switchlock.png, 5549-sunnyvale.png


 As discussed in CASSANDRA-5422, Table.switchLock is a bottleneck on the write 
 path.  ReentrantReadWriteLock is not lightweight, even if there is no 
 contention per se between readers and writers of the lock (in Cassandra, 
 memtable updates and switches).





[jira] [Commented] (CASSANDRA-5549) Remove Table.switchLock

2014-01-06 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13863148#comment-13863148
 ] 

Benedict commented on CASSANDRA-5549:
-

Largely the same as before, the mutation thread blocks until enough memory 
becomes available to complete the request. The difference is, depending on 
where/why the breach occurs, it may not block until after it completes its 
modification (as some of the memory bookkeeping is batched to the end for ease 
and speed).

Any call to MemoryOwner.allocate() is potentially a blocking call, if there 
isn't enough room available to satisfy the allocation.
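A minimal way to picture that behavior (an illustrative sketch only, not
Cassandra's actual MemoryOwner or allocator): treat memtable memory as a pool
of permits that mutations acquire and flush releases, so a writer blocks
inside allocate() whenever the pool is exhausted.
{code}
```java
import java.util.concurrent.Semaphore;

public class MemtablePool {
    private final Semaphore permits;

    public MemtablePool(int capacityBytes) {
        this.permits = new Semaphore(capacityBytes);
    }

    /** Potentially blocking: waits until the requested bytes are available. */
    public void allocate(int bytes) {
        permits.acquireUninterruptibly(bytes);
    }

    /** Called when a flush reclaims memory, unblocking waiting writers. */
    public void release(int bytes) {
        permits.release(bytes);
    }

    public int availableBytes() {
        return permits.availablePermits();
    }
}
```
{code}
In this picture, hitting the memory ceiling simply parks the mutation thread
in allocate() until a flush calls release(), which mirrors the blocking
behavior described above.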

 Remove Table.switchLock
 ---

 Key: CASSANDRA-5549
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5549
 Project: Cassandra
  Issue Type: Bug
Reporter: Jonathan Ellis
Assignee: Benedict
  Labels: performance
 Fix For: 2.1

 Attachments: 5549-removed-switchlock.png, 5549-sunnyvale.png


 As discussed in CASSANDRA-5422, Table.switchLock is a bottleneck on the write 
 path.  ReentrantReadWriteLock is not lightweight, even if there is no 
 contention per se between readers and writers of the lock (in Cassandra, 
 memtable updates and switches).





[jira] [Commented] (CASSANDRA-6348) TimeoutException throws if Cql query allows data filtering and index is too big and it can't find the data in base CF after filtering

2014-01-06 Thread Alex Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13863159#comment-13863159
 ] 

Alex Liu commented on CASSANDRA-6348:
-

Adding [~bcoverston]. This issue hits customers hard if Hadoop uses multiple 
indexes.

 TimeoutException throws if Cql query allows data filtering and index is too 
 big and it can't find the data in base CF after filtering 
 --

 Key: CASSANDRA-6348
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6348
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Alex Liu
Assignee: Alex Liu
 Attachments: 6348.txt


 If an index row is too big and filtering can't find the matching CQL row in the base 
 CF, it keeps scanning the index row and retrieving the base CF until the index row 
 is scanned completely, which may take too long, and the Thrift server returns a 
 TimeoutException. This is one of the reasons why we shouldn't index a column 
 if the index is too big.
 Merging multiple indexes can resolve the case where there are only EQUAL 
 clauses (CASSANDRA-6048 addresses it).
 If the query has non-EQUAL clauses, we still need to do data filtering, which 
 might lead to a timeout exception.
 We can either disable those kinds of queries or WARN the user that data 
 filtering might lead to a timeout exception or OOM.
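 One shape such a guard could take (a hypothetical sketch, not the attached
 6348.txt patch): bound the index-row walk by a deadline and surface a
 timeout instead of scanning the whole index row.
 {code}
```java
import java.util.Iterator;

public class BoundedIndexScan {
    /**
     * Walks index entries looking for a match, but gives up once the
     * deadline passes; a null return would surface as a timeout upstream.
     */
    public static Integer firstMatch(Iterator<Integer> indexEntries,
                                     int wanted, long deadlineNanos) {
        while (indexEntries.hasNext()) {
            if (System.nanoTime() > deadlineNanos) {
                return null; // abort the scan rather than let the whole request time out
            }
            Integer candidate = indexEntries.next();
            if (candidate == wanted) {
                return candidate;
            }
        }
        return null;
    }
}
```
 {code}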





[jira] [Comment Edited] (CASSANDRA-6517) Loose of secondary index entries if nodetool cleanup called before compaction

2014-01-06 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13863161#comment-13863161
 ] 

Michael Shuler edited comment on CASSANDRA-6517 at 1/6/14 5:59 PM:
---

I reproduced on 2.0.4 with a simple 3-node RF=3 ccm cluster.  Repair did not 
change the results and querying a different node has the same result.
{code}
cqlsh:mwerrch> select key,node,computer from B4Container_Demo where computer=50;

(0 rows)

cqlsh:mwerrch> select * from B4Container_Demo;

 key  | archived | bytes | computer | deleted | 
description | doarchive | filename | first | frames | ifversion | imported | 
jobid | keepuntil | nextchunk | node | recordingkey | recstart | recstop | 
simulationid | systemstart | systemstop | tapelabel | version
--+--+---+--+-+-+---+--+---++---+--+---+---+---+--+--+--+-+--+-++---+-
 78c70562-1f98-3971-9c28-2c3d8e09c10f | null |  null |   50 |null | 
   null |  null | null |  null |   null |  null | null |  
null |  null |  null |   50 | null | null |null |   
  null |null |   null |  null |null

(1 rows)

cqlsh:mwerrch>
{code}

Update: full cluster restart shows the same results



 Loose of secondary index entries if nodetool cleanup called before compaction
 -

 Key: CASSANDRA-6517
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6517
 Project: Cassandra
  Issue Type: Bug
  Components: API
 Environment: Ubuntu 12.0.4 with 8+ GB RAM and 40GB hard disk for data 
 directory.
Reporter: Christoph Werres
Assignee: Michael Shuler

 From time to time we had the feeling of not getting all results that should 
 have been returned using secondary indexes. Now we tracked down some 
 situations and found out it happened:
 1) To primary keys that were already deleted and have been re-created later on
 2) After our nightly maintenance scripts were running
 We can now reproduce the following scenario:
 - create a row entry with an indexed column included
 - query it and use the secondary index criteria - Success
 - delete it, query again - entry gone as expected
 - re-create it with the same key, query it - success again
 Now use, in exactly that sequence:
 nodetool cleanup
 nodetool flush
 nodetool compact
 When issuing the query now, we don't get the result using the index. The 
 entry is indeed available in its table when I just ask for the key. Below is 
 the exact copy-paste output from CQL when I reproduced the problem with an 
 example entry on one of our tables.
 mwerrch@mstc01401:/opt/cassandra$ current/bin/cqlsh
 Connected to 14-15-Cluster at localhost:9160.
 [cqlsh 4.1.0 | Cassandra 2.0.3 | CQL spec 3.1.1 | Thrift protocol 19.38.0] 
 Use HELP for help.
 cqlsh> use mwerrch;
 cqlsh:mwerrch> desc tables;
 B4Container_Demo
 cqlsh:mwerrch> desc table B4Container_Demo;
 CREATE TABLE B4Container_Demo (
   key uuid,
   archived boolean,
   bytes int,
   computer int,
   deleted boolean,
   description text,
   doarchive boolean,
   filename text,
   first boolean,
   frames int,
   ifversion int,
   imported boolean,
   jobid int,
   keepuntil bigint,
   nextchunk text,
   node int,
   recordingkey blob,
   recstart bigint,
   recstop bigint,
   

[jira] [Commented] (CASSANDRA-5201) Cassandra/Hadoop does not support current Hadoop releases

2014-01-06 Thread Benjamin Coverston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13863170#comment-13863170
 ] 

Benjamin Coverston commented on CASSANDRA-5201:
---

I'll submit the reporter impl upstream. Thanks [~dvryaboy]!




[jira] [Comment Edited] (CASSANDRA-6517) Loose of secondary index entries if nodetool cleanup called before compaction

2014-01-06 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13863161#comment-13863161
 ] 

Michael Shuler edited comment on CASSANDRA-6517 at 1/6/14 6:16 PM:
---

I reproduced on 2.0.4 with a simple 3-node RF=3 ccm cluster.  Repair did not 
change the results and querying a different node has the same result.
{code}
cqlsh:mwerrch> select key,node,computer from B4Container_Demo where computer=50;

(0 rows)

cqlsh:mwerrch> select * from B4Container_Demo;

 key  | archived | bytes | computer | deleted | 
description | doarchive | filename | first | frames | ifversion | imported | 
jobid | keepuntil | nextchunk | node | recordingkey | recstart | recstop | 
simulationid | systemstart | systemstop | tapelabel | version
--+--+---+--+-+-+---+--+---++---+--+---+---+---+--+--+--+-+--+-++---+-
 78c70562-1f98-3971-9c28-2c3d8e09c10f | null |  null |   50 |null | 
   null |  null | null |  null |   null |  null | null |  
null |  null |  null |   50 | null | null |null |   
  null |null |   null |  null |null

(1 rows)

cqlsh:mwerrch>
{code}

Update1: full cluster restart shows the same results

Update2: debug logs, query run on node1 ... where computer=50 resulting in (0 
rows)
{code}
DEBUG [Thrift:1] 2014-01-06 12:11:42,122 CassandraServer.java (line 1954) 
execute_cql3_query
DEBUG [Thrift:1] 2014-01-06 12:11:42,136 AbstractReplicationStrategy.java (line 
86) clearing cached endpoints
DEBUG [WRITE-/127.0.0.2] 2014-01-06 12:11:42,270 OutboundTcpConnection.java 
(line 290) attempting to connect to /127.0.0.2
 INFO [HANDSHAKE-/127.0.0.2] 2014-01-06 12:11:42,271 OutboundTcpConnection.java 
(line 386) Handshaking version with /127.0.0.2

== .ccm/test/node2/logs/system.log ==
DEBUG [ACCEPT-/127.0.0.2] 2014-01-06 12:11:42,271 MessagingService.java (line 
850) Connection version 7 from /127.0.0.1
DEBUG [Thread-7] 2014-01-06 12:11:42,272 IncomingTcpConnection.java (line 107) 
Upgrading incoming connection to be compressed
DEBUG [Thread-7] 2014-01-06 12:11:42,274 IncomingTcpConnection.java (line 115) 
Max version for /127.0.0.1 is 7
DEBUG [Thread-7] 2014-01-06 12:11:42,274 MessagingService.java (line 743) 
Setting version 7 for /127.0.0.1
DEBUG [Thread-7] 2014-01-06 12:11:42,274 IncomingTcpConnection.java (line 124) 
set version for /127.0.0.1 to 7
DEBUG [ReadStage:1] 2014-01-06 12:11:42,283 KeysSearcher.java (line 69) 
Most-selective indexed predicate is 'B4Container_Demo.computer EQ 50'
DEBUG [ReadStage:1] 2014-01-06 12:11:42,285 FileCacheService.java (line 70) 
Evicting cold readers for 
/home/mshuler/.ccm/test/node2/data/system/schema_columns/system-schema_columns-jb-7-Data.db
DEBUG [ReadStage:1] 2014-01-06 12:11:42,286 FileCacheService.java (line 115) 
Estimated memory usage is 11033316 compared to actual usage 0
DEBUG [ReadStage:1] 2014-01-06 12:11:42,286 FileCacheService.java (line 115) 
Estimated memory usage is 11164665 compared to actual usage 131349
DEBUG [ReadStage:1] 2014-01-06 12:11:42,286 FileCacheService.java (line 115) 
Estimated memory usage is 11296014 compared to actual usage 262698

== .ccm/test/node1/logs/system.log ==
DEBUG [Thrift:1] 2014-01-06 12:11:42,287 Tracing.java (line 159) request 
complete
{code}



 Loose of secondary index entries if 

[jira] [Updated] (CASSANDRA-6438) Make user types keyspace scoped

2014-01-06 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-6438:
-

Summary: Make user types keyspace scoped  (was: Decide if we want to make 
user types keyspace scoped)

 Make user types keyspace scoped
 ---

 Key: CASSANDRA-6438
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6438
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 2.1

 Attachments: 6438.txt


 Currently, user types are declared at the top level. I wonder however if we 
 might not want to make them scoped to a given keyspace. It was not done in 
 the initial patch for simplicity and because I was not sure of the advantages 
 of doing so. However, if we ever want to use user types in system tables, 
 having them scoped by keyspace means we won't have to care about the new type 
 conflicting with another existing type.
 Besides, having user types be part of a keyspace would allow for slightly 
 more fine grained permissions on them. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Comment Edited] (CASSANDRA-6517) Loose of secondary index entries if nodetool cleanup called before compaction

2014-01-06 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13863161#comment-13863161
 ] 

Michael Shuler edited comment on CASSANDRA-6517 at 1/6/14 6:20 PM:
---

I reproduced on 2.0.4 with a simple 3-node RF=3 ccm cluster.  Repair did not 
change the results and querying a different node has the same result.
{code}
cqlsh:mwerrch select key,node,computer from B4Container_Demo where 
computer=50;

(0 rows)

cqlsh:mwerrch select * from B4Container_Demo;

 key                                  | archived | bytes | computer | deleted | description | doarchive | filename | first | frames | ifversion | imported | jobid | keepuntil | nextchunk | node | recordingkey | recstart | recstop | simulationid | systemstart | systemstop | tapelabel | version
--------------------------------------+----------+-------+----------+---------+-------------+-----------+----------+-------+--------+-----------+----------+-------+-----------+-----------+------+--------------+----------+---------+--------------+-------------+------------+-----------+---------
 78c70562-1f98-3971-9c28-2c3d8e09c10f |     null |  null |       50 |    null |        null |      null |     null |  null |   null |      null |     null |  null |      null |      null |   50 |         null |     null |    null |         null |        null |       null |      null |    null

(1 rows)

cqlsh:mwerrch
{code}

Update1: full cluster restart shows the same results

Update2: debug logs, query run on node1 ... where computer=50 resulting in (0 
rows)
{code}
DEBUG [Thrift:1] 2014-01-06 12:11:42,122 CassandraServer.java (line 1954) 
execute_cql3_query
DEBUG [Thrift:1] 2014-01-06 12:11:42,136 AbstractReplicationStrategy.java (line 
86) clearing cached endpoints
DEBUG [WRITE-/127.0.0.2] 2014-01-06 12:11:42,270 OutboundTcpConnection.java 
(line 290) attempting to connect to /127.0.0.2
 INFO [HANDSHAKE-/127.0.0.2] 2014-01-06 12:11:42,271 OutboundTcpConnection.java 
(line 386) Handshaking version with /127.0.0.2

== .ccm/test/node2/logs/system.log ==
DEBUG [ACCEPT-/127.0.0.2] 2014-01-06 12:11:42,271 MessagingService.java (line 
850) Connection version 7 from /127.0.0.1
DEBUG [Thread-7] 2014-01-06 12:11:42,272 IncomingTcpConnection.java (line 107) 
Upgrading incoming connection to be compressed
DEBUG [Thread-7] 2014-01-06 12:11:42,274 IncomingTcpConnection.java (line 115) 
Max version for /127.0.0.1 is 7
DEBUG [Thread-7] 2014-01-06 12:11:42,274 MessagingService.java (line 743) 
Setting version 7 for /127.0.0.1
DEBUG [Thread-7] 2014-01-06 12:11:42,274 IncomingTcpConnection.java (line 124) 
set version for /127.0.0.1 to 7
DEBUG [ReadStage:1] 2014-01-06 12:11:42,283 KeysSearcher.java (line 69) 
Most-selective indexed predicate is 'B4Container_Demo.computer EQ 50'
DEBUG [ReadStage:1] 2014-01-06 12:11:42,285 FileCacheService.java (line 70) 
Evicting cold readers for 
/home/mshuler/.ccm/test/node2/data/system/schema_columns/system-schema_columns-jb-7-Data.db
DEBUG [ReadStage:1] 2014-01-06 12:11:42,286 FileCacheService.java (line 115) 
Estimated memory usage is 11033316 compared to actual usage 0
DEBUG [ReadStage:1] 2014-01-06 12:11:42,286 FileCacheService.java (line 115) 
Estimated memory usage is 11164665 compared to actual usage 131349
DEBUG [ReadStage:1] 2014-01-06 12:11:42,286 FileCacheService.java (line 115) 
Estimated memory usage is 11296014 compared to actual usage 262698

== .ccm/test/node1/logs/system.log ==
DEBUG [Thrift:1] 2014-01-06 12:11:42,287 Tracing.java (line 159) request 
complete
{code}

The 'select * ...' query debug log is just:
{code}
DEBUG [Thrift:1] 2014-01-06 12:18:07,087 CassandraServer.java (line 1954) 
execute_cql3_query
DEBUG [Thrift:1] 2014-01-06 12:18:07,122 Tracing.java (line 159) request 
complete
{code}



[jira] [Comment Edited] (CASSANDRA-6517) Loose of secondary index entries if nodetool cleanup called before compaction

2014-01-06 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13863161#comment-13863161
 ] 

Michael Shuler edited comment on CASSANDRA-6517 at 1/6/14 6:30 PM:
---

I reproduced on 2.0.4 with a simple 3-node RF=3 ccm cluster.  Repair did not 
change the results and querying a different node has the same result.
{code}
cqlsh:mwerrch select key,node,computer from B4Container_Demo where 
computer=50;

(0 rows)

cqlsh:mwerrch select * from B4Container_Demo;

 key                                  | archived | bytes | computer | deleted | description | doarchive | filename | first | frames | ifversion | imported | jobid | keepuntil | nextchunk | node | recordingkey | recstart | recstop | simulationid | systemstart | systemstop | tapelabel | version
--------------------------------------+----------+-------+----------+---------+-------------+-----------+----------+-------+--------+-----------+----------+-------+-----------+-----------+------+--------------+----------+---------+--------------+-------------+------------+-----------+---------
 78c70562-1f98-3971-9c28-2c3d8e09c10f |     null |  null |       50 |    null |        null |      null |     null |  null |   null |      null |     null |  null |      null |      null |   50 |         null |     null |    null |         null |        null |       null |      null |    null

(1 rows)

cqlsh:mwerrch
{code}

Update1: full cluster restart shows the same results

Update2: debug logs, query run on node1 ... where computer=50 resulting in (0 
rows)
{code}
DEBUG [Thrift:1] 2014-01-06 12:11:42,122 CassandraServer.java (line 1954) 
execute_cql3_query
DEBUG [Thrift:1] 2014-01-06 12:11:42,136 AbstractReplicationStrategy.java (line 
86) clearing cached endpoints
DEBUG [WRITE-/127.0.0.2] 2014-01-06 12:11:42,270 OutboundTcpConnection.java 
(line 290) attempting to connect to /127.0.0.2
 INFO [HANDSHAKE-/127.0.0.2] 2014-01-06 12:11:42,271 OutboundTcpConnection.java 
(line 386) Handshaking version with /127.0.0.2

== .ccm/test/node2/logs/system.log ==
DEBUG [ACCEPT-/127.0.0.2] 2014-01-06 12:11:42,271 MessagingService.java (line 
850) Connection version 7 from /127.0.0.1
DEBUG [Thread-7] 2014-01-06 12:11:42,272 IncomingTcpConnection.java (line 107) 
Upgrading incoming connection to be compressed
DEBUG [Thread-7] 2014-01-06 12:11:42,274 IncomingTcpConnection.java (line 115) 
Max version for /127.0.0.1 is 7
DEBUG [Thread-7] 2014-01-06 12:11:42,274 MessagingService.java (line 743) 
Setting version 7 for /127.0.0.1
DEBUG [Thread-7] 2014-01-06 12:11:42,274 IncomingTcpConnection.java (line 124) 
set version for /127.0.0.1 to 7
DEBUG [ReadStage:1] 2014-01-06 12:11:42,283 KeysSearcher.java (line 69) 
Most-selective indexed predicate is 'B4Container_Demo.computer EQ 50'
DEBUG [ReadStage:1] 2014-01-06 12:11:42,285 FileCacheService.java (line 70) 
Evicting cold readers for 
/home/mshuler/.ccm/test/node2/data/system/schema_columns/system-schema_columns-jb-7-Data.db
DEBUG [ReadStage:1] 2014-01-06 12:11:42,286 FileCacheService.java (line 115) 
Estimated memory usage is 11033316 compared to actual usage 0
DEBUG [ReadStage:1] 2014-01-06 12:11:42,286 FileCacheService.java (line 115) 
Estimated memory usage is 11164665 compared to actual usage 131349
DEBUG [ReadStage:1] 2014-01-06 12:11:42,286 FileCacheService.java (line 115) 
Estimated memory usage is 11296014 compared to actual usage 262698

== .ccm/test/node1/logs/system.log ==
DEBUG [Thrift:1] 2014-01-06 12:11:42,287 Tracing.java (line 159) request 
complete
{code}

The 'select * ...' query debug log is just:
{code}
DEBUG [Thrift:1] 2014-01-06 12:18:07,087 CassandraServer.java (line 1954) 
execute_cql3_query
DEBUG [Thrift:1] 2014-01-06 12:18:07,122 Tracing.java (line 159) request 
complete
{code}

Update4: went back to fresh cluster, inserted the data and queried ... where 
computer=50
{code}
DEBUG [Thrift:1] 2014-01-06 12:28:01,995 CassandraServer.java (line 1954) 
execute_cql3_query

== .ccm/test/node2/logs/system.log ==
DEBUG [ReadStage:33] 2014-01-06 12:28:02,015 KeysSearcher.java (line 69) 
Most-selective indexed predicate is 'B4Container_Demo.computer EQ 50'

== .ccm/test/node1/logs/system.log ==
DEBUG [Thrift:1] 2014-01-06 12:28:02,019 Tracing.java (line 159) request 
complete
{code}



[jira] [Commented] (CASSANDRA-6545) LOCAL_QUORUM still doesn't work with SimpleStrategy but don't throw a meaningful error message anymore

2014-01-06 Thread Alex Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13863210#comment-13863210
 ] 

Alex Liu commented on CASSANDRA-6545:
-

Lift the restriction for both EACH_QUORUM and LOCAL_QUORUM. Cast the strategy to 
NTS only if it's actually an instance of NTS; otherwise don't cast it, and treat 
the level the same way as QUORUM.
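The suggested check can be sketched as follows. This is a toy Python sketch of the proposed logic only, not the actual Java patch; the class and function names here are illustrative.

```python
# Toy sketch of the proposed fix: only treat the strategy as
# NetworkTopologyStrategy when it really is one; anything else (e.g.
# SimpleStrategy) falls back to plain QUORUM semantics instead of
# raising the old ClassCastException.

class SimpleStrategy:
    def __init__(self, rf):
        self.rf = rf

class NetworkTopologyStrategy(SimpleStrategy):
    def __init__(self, rf_per_dc):
        self.rf_per_dc = rf_per_dc
        self.rf = sum(rf_per_dc.values())

def quorum_for(rf):
    # Majority of rf replicas.
    return rf // 2 + 1

def block_for_local_quorum(strategy, local_dc):
    if isinstance(strategy, NetworkTopologyStrategy):
        # Per-DC quorum, as before.
        return quorum_for(strategy.rf_per_dc[local_dc])
    # Degrade LOCAL_QUORUM to ordinary QUORUM for non-NTS strategies.
    return quorum_for(strategy.rf)
```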

 LOCAL_QUORUM still doesn't work with SimpleStrategy but don't throw a 
 meaningful error message anymore
 --

 Key: CASSANDRA-6545
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6545
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Alex Liu
 Fix For: 1.2.14


 It seems it was the intent of CASSANDRA-6238 originally, though I've tracked it 
 to the commit of CASSANDRA-6309 (f7efaffadace3e344eeb4a1384fa72c73d8422b0 to 
 be precise), but in any case ConsistencyLevel.validateForWrite does not 
 reject LOCAL_QUORUM when SimpleStrategy is used anymore, yet 
 ConsistencyLevel.blockFor definitely casts the strategy to NTS for 
 LOCAL_QUORUM (in localQuorumFor(), to be precise), which results in a 
 ClassCastException as reported by 
 https://datastax-oss.atlassian.net/browse/JAVA-241.
 Note that while we're at it, I tend to agree with Aleksey's comment on 
 CASSANDRA-6238: why not make EACH_QUORUM == QUORUM for SimpleStrategy too?





[jira] [Updated] (CASSANDRA-6517) Loss of secondary index entries if nodetool cleanup called before compaction

2014-01-06 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6517:
--

Fix Version/s: 2.0.5
 Assignee: Sam Tunnicliffe  (was: Michael Shuler)
  Summary: Loss of secondary index entries if nodetool cleanup called 
before compaction  (was: Loose of secondary index entries if nodetool cleanup 
called before compaction)

Do you have time to take a look, Sam?

 Loss of secondary index entries if nodetool cleanup called before compaction
 

 Key: CASSANDRA-6517
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6517
 Project: Cassandra
  Issue Type: Bug
  Components: API
 Environment: Ubuntu 12.0.4 with 8+ GB RAM and 40GB hard disk for data 
 directory.
Reporter: Christoph Werres
Assignee: Sam Tunnicliffe
 Fix For: 2.0.5


 From time to time we had the feeling of not getting all results that should 
 have been returned using secondary indexes. Now we tracked down some 
 situations and found out it happened:
 1) To primary keys that were already deleted and have been re-created later on
 2) After our nightly maintenance scripts were running
 We can now reproduce the following scenario:
 - create a row entry with an indexed column included
 - query it and use the secondary index criteria - Success
 - delete it, query again - entry gone as expected
 - re-create it with the same key, query it - success again
 Now use in exactly that sequence
 nodetool cleanup
 nodetool flush
 nodetool compact
 When issuing the query now, we don't get the result using the index. The 
 entry is indeed available in its table when I just ask for the key. Below is 
 the exact copy-paste output from CQL when I reproduced the problem with an 
 example entry on one of our tables.
 mwerrch@mstc01401:/opt/cassandra$ current/bin/cqlsh Connected to 
 14-15-Cluster at localhost:9160.
 [cqlsh 4.1.0 | Cassandra 2.0.3 | CQL spec 3.1.1 | Thrift protocol 19.38.0] 
 Use HELP for help.
 cqlsh use mwerrch;
 cqlsh:mwerrch desc tables;
 B4Container_Demo
 cqlsh:mwerrch desc table B4Container_Demo;
 CREATE TABLE B4Container_Demo (
   key uuid,
   archived boolean,
   bytes int,
   computer int,
   deleted boolean,
   description text,
   doarchive boolean,
   filename text,
   first boolean,
   frames int,
   ifversion int,
   imported boolean,
   jobid int,
   keepuntil bigint,
   nextchunk text,
   node int,
   recordingkey blob,
   recstart bigint,
   recstop bigint,
   simulationid bigint,
   systemstart bigint,
   systemstop bigint,
   tapelabel bigint,
   version blob,
   PRIMARY KEY (key)
 ) WITH COMPACT STORAGE AND
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='demo' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=604800 AND
   index_interval=128 AND
   read_repair_chance=1.00 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   default_time_to_live=0 AND
   speculative_retry='NONE' AND
   memtable_flush_period_in_ms=0 AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'LZ4Compressor'};
 CREATE INDEX mwerrch_Demo_computer ON B4Container_Demo (computer);
 CREATE INDEX mwerrch_Demo_node ON B4Container_Demo (node);
 CREATE INDEX mwerrch_Demo_recordingkey ON B4Container_Demo (recordingkey);
 cqlsh:mwerrch INSERT INTO B4Container_Demo (key,computer,node) VALUES 
 (78c70562-1f98-3971-9c28-2c3d8e09c10f, 50, 50); cqlsh:mwerrch select 
 key,node,computer from B4Container_Demo where computer=50;
  key  | node | computer
 --+--+--
  78c70562-1f98-3971-9c28-2c3d8e09c10f |   50 |   50
 (1 rows)
 cqlsh:mwerrch DELETE FROM B4Container_Demo WHERE 
 key=78c70562-1f98-3971-9c28-2c3d8e09c10f;
 cqlsh:mwerrch select key,node,computer from B4Container_Demo where 
 computer=50;
 (0 rows)
 cqlsh:mwerrch INSERT INTO B4Container_Demo (key,computer,node) VALUES 
 (78c70562-1f98-3971-9c28-2c3d8e09c10f, 50, 50); cqlsh:mwerrch select 
 key,node,computer from B4Container_Demo where computer=50;
  key  | node | computer
 --+--+--
  78c70562-1f98-3971-9c28-2c3d8e09c10f |   50 |   50
 (1 rows)
 **
 Now we execute (maybe from a different shell so we don't have to close this 
 session) from /opt/cassandra/current/bin directory:
 ./nodetool cleanup
 ./nodetool flush
 ./nodetool compact
 Going back to our CQL session the result will no longer be available if 
 queried via the index:
 *
 cqlsh:mwerrch select key,node,computer from B4Container_Demo where 
 computer=50;
 (0 rows)




[jira] [Created] (CASSANDRA-6552) nodetool repair -remote option

2014-01-06 Thread Rich Reffner (JIRA)
Rich Reffner created CASSANDRA-6552:
---

 Summary: nodetool repair -remote option
 Key: CASSANDRA-6552
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6552
 Project: Cassandra
  Issue Type: New Feature
Reporter: Rich Reffner


Add a nodetool repair -remote option that will only repair against nodes 
outside of the local data center. The customer wants to be able to repair -pr a 
node in a local data center when they know the only valid replicas of data are 
in other data centers. They expect this will reduce streaming and OOM errors on 
the repairing node. Request is from Apple.
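No such nodetool option exists yet; the selection rule the request describes (repair only against replicas outside the coordinator's data center) might be sketched like this, with purely illustrative names:

```python
# Hypothetical sketch of the requested "-remote" behavior: given the
# replicas grouped by data center, keep only neighbors that live in a
# data center other than the local one.
def remote_neighbors(replicas_by_dc, local_dc):
    return [node
            for dc, nodes in replicas_by_dc.items()
            if dc != local_dc
            for node in nodes]
```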





[jira] [Resolved] (CASSANDRA-6304) Better handling of authorization for User Types

2014-01-06 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko resolved CASSANDRA-6304.
--

Resolution: Not A Problem

Now that CASSANDRA-6438 made the types keyspace-scoped, this issue is no longer 
relevant.

 Better handling of authorization for User Types
 ---

 Key: CASSANDRA-6304
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6304
 Project: Cassandra
  Issue Type: New Feature
Reporter: Aleksey Yeschenko
Assignee: Aleksey Yeschenko
 Fix For: 2.1


 Currently, we require CREATE/ALTER/DROP on ALL KEYSPACES, which is a bit 
 excessive, and not entirely correct (but is the best we can do atm).
 We should:
 1. create a new IResource implementation for user types (TypeResource)
 2. extend CQL3 GRANT/REVOKE to allow CREATE/ALTER/DROP on (ALL TYPES|TYPE 
 name)
 3. require CREATE/ALTER/DROP permissions instead of requiring all keyspace 
 access
 We could (should?) optionally require ALTER permission on the columnfamilies 
 affected by ALTER TYPE. Not sure about this?
 We also don't currently allow dropping a type that's in use by a CF. So 
 someone might start using a type in the cf, and the 'owner' of the type would 
 not be able to drop it. So we should either add some kind of USE permission 
 for types, or make it possible to drop a type that's currently in use.





[jira] [Commented] (CASSANDRA-6438) Make user types keyspace scoped

2014-01-06 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13863301#comment-13863301
 ] 

Aleksey Yeschenko commented on CASSANDRA-6438:
--

- {Create|Alter|Drop}TypeStatement.checkAccess() should now just ask for 
hasKeyspaceAccess(keyspace(), Permission.ALTER) (closed CASSANDRA-6304)
- Schema.ignoredSchemaRow() shouldn't exclude SCHEMA_USER_TYPES_CF now

Otherwise LGTM, let the new dtests catch everything else.

 Make user types keyspace scoped
 ---

 Key: CASSANDRA-6438
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6438
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 2.1

 Attachments: 6438.txt


 Currently, user types are declared at the top level. I wonder however if we 
 might not want to make them scoped to a given keyspace. It was not done in 
 the initial patch for simplicity and because I was not sure of the advantages 
 of doing so. However, if we ever want to use user types in system tables, 
 having them scoped by keyspace means we won't have to care about the new type 
 conflicting with another existing type.
 Besides, having user types be part of a keyspace would allow for slightly 
 more fine grained permissions on them. 





[jira] [Commented] (CASSANDRA-6465) DES scores fluctuate too much for cache pinning

2014-01-06 Thread Ian Barfield (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13863309#comment-13863309
 ] 

Ian Barfield commented on CASSANDRA-6465:
-

I believe the purpose of time penalty was to more quickly detect problematic 
nodes. If a node was suddenly suffering severe issues, that wouldn't be 
reflected in its latency metric until the current outstanding queries resolved. 
That might take until the maximum duration timeout which can be arbitrarily 
long, and in many cases is a lot longer than you'd like. By using timeDelay, 
the snitch can somewhat immediately penalize problem nodes since the queries do 
not have to timeout first. That said, it has numerous flaws both conceptually 
and in its implementation.

I was working on this problem a couple weeks ago, but have been distracted 
since, so I might not be able to give the best summary. Here are a couple of 
issues off the top of my head, though:
- if the time delay values are low, then high jitter throws the scores way off. 
It isn't unreasonable to expect situations where the time delay shifts 
semi-randomly between 0 and 1 ms. This means very little in terms of whether a 
node is a suitable target but can cause a drastic difference in scores if there 
is no slow node to anchor the scores.
- if the node response periods aren't low (say they average around 50 ms), then 
by definition the time delays are highly random, since the score could be 
calculated at any point along 0 to 50 ms.
- it has a lot of complex interactions outside of its original scope of 
detecting bad nodes
- when calculating scores, if there is no lastReceived value for a node (e.g. 
the node has just been added to the cluster), then the logic defaults to using 
the current time (essentially 0 or maximum 'good'). You might instead take the 
view that an unproven, cache-cold node would be a bad selection.
- sensitive to local noise. Each time the score is calculated, the timePenalty 
is calculated fresh. Since there is no concept of persistence or scope, events 
that corrupt the scoring process are extra harmful, e.g. GC, CPU load / thread 
scheduling, and concurrency shenanigans occurring between the lastReceived.get() 
and System.currentTimeMillis()
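The first bullet above is easy to see with a toy normalization. This is an illustration with assumed numbers, not the snitch's actual scoring formula:

```python
# Toy illustration of why millisecond-level jitter can flip dynamic-snitch
# scores when every node is fast: with no slow node to anchor the scale,
# normalizing by the worst sample makes a 1 ms swing look like a 2x
# difference in badness.
def scores(latencies_ms):
    # Lower score = better; each node's sample divided by the worst one.
    worst = max(latencies_ms.values())
    return {node: lat / worst for node, lat in latencies_ms.items()}

s1 = scores({"a": 1.0, "b": 2.0})  # one sample: a looks half as bad as b
s2 = scores({"a": 2.0, "b": 1.0})  # next sample: the preference reverses
```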

Some of these issues are somewhat alleviated by the switch to using nanos, and 
I've been tempted to backport that for this class, at least for testing, but 
this logic fails in complex ways. I think at some point I was able to confirm 
some wildly fluctuating values of the subcomponents of the scores (specifically 
timePenalty) by checking the mbeans, working under the assumption that 
timePenalty was likely the only component of well-rounded scores -- if you have 
at least one node whose timePenalty exceeds UPDATE_INTERVAL_IN_MS, it gets cut 
off to UPDATE_INTERVAL_IN_MS, which as a divisor makes for nicely formed 
floating point numbers.

There are also a lot of issues with the other score components, and some of the 
overall logic, but... some other time. Apologies if I've gotten something quite 
wrong; I've never really used Cassandra.

 DES scores fluctuate too much for cache pinning
 ---

 Key: CASSANDRA-6465
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6465
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 1.2.11, 2 DC cluster
Reporter: Chris Burroughs
Assignee: Tyler Hobbs
Priority: Minor
  Labels: gossip
 Fix For: 2.0.5

 Attachments: des-score-graph.png, des.sample.15min.csv, get-scores.py


 To quote the conf:
 {noformat}
 # if set greater than zero and read_repair_chance is < 1.0, this will allow
 # 'pinning' of replicas to hosts in order to increase cache capacity.
 # The badness threshold will control how much worse the pinned host has to be
 # before the dynamic snitch will prefer other replicas over it.  This is
 # expressed as a double which represents a percentage.  Thus, a value of
 # 0.2 means Cassandra would continue to prefer the static snitch values
 # until the pinned host was 20% worse than the fastest.
 dynamic_snitch_badness_threshold: 0.1
 {noformat}
 An assumption of this feature is that scores will vary by less than 
 dynamic_snitch_badness_threshold during normal operations.  Attached is the 
 result of polling a node for the scores of 6 different endpoints at 1 Hz for 
 15 minutes.  The endpoints to sample were chosen with `nodetool getendpoints` 
 for row that is known to get reads.  The node was acting as a coordinator for 
 a few hundred req/second, so it should have sufficient data to work with.  
 Other traces on a second cluster have produced similar results.
  * The scores vary by far more than I would expect, as show by the difficulty 
 of seeing anything useful in that graph.
  * The difference between the best and 

[jira] [Updated] (CASSANDRA-6545) LOCAL_QUORUM still doesn't work with SimpleStrategy but don't throw a meaningful error message anymore

2014-01-06 Thread Alex Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Liu updated CASSANDRA-6545:


Attachment: 6545-1.2-branch.txt

 LOCAL_QUORUM still doesn't work with SimpleStrategy but don't throw a 
 meaningful error message anymore
 --

 Key: CASSANDRA-6545
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6545
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Alex Liu
 Fix For: 1.2.14

 Attachments: 6545-1.2-branch.txt


 It seems it was the intent of CASSANDRA-6238 originally, though I've tracked it 
 to the commit of CASSANDRA-6309 (f7efaffadace3e344eeb4a1384fa72c73d8422b0 to 
 be precise), but in any case ConsistencyLevel.validateForWrite does not 
 reject LOCAL_QUORUM when SimpleStrategy is used anymore, yet 
 ConsistencyLevel.blockFor definitely casts the strategy to NTS for 
 LOCAL_QUORUM (in localQuorumFor(), to be precise), which results in a 
 ClassCastException as reported by 
 https://datastax-oss.atlassian.net/browse/JAVA-241.
 Note that while we're at it, I tend to agree with Aleksey's comment on 
 CASSANDRA-6238: why not make EACH_QUORUM == QUORUM for SimpleStrategy too?





[jira] [Updated] (CASSANDRA-6545) LOCAL_QUORUM still doesn't work with SimpleStrategy but don't throw a meaningful error message anymore

2014-01-06 Thread Alex Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Liu updated CASSANDRA-6545:


Attachment: 6545-2.0-branch.txt

 LOCAL_QUORUM still doesn't work with SimpleStrategy but don't throw a 
 meaningful error message anymore
 --

 Key: CASSANDRA-6545
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6545
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Alex Liu
 Fix For: 1.2.14

 Attachments: 6545-1.2-branch.txt, 6545-2.0-branch.txt


 It seems it was the intent of CASSANDRA-6238 originally, though I've tracked it 
 to the commit of CASSANDRA-6309 (f7efaffadace3e344eeb4a1384fa72c73d8422b0 to 
 be precise), but in any case ConsistencyLevel.validateForWrite does not 
 reject LOCAL_QUORUM when SimpleStrategy is used anymore, yet 
 ConsistencyLevel.blockFor definitely casts the strategy to NTS for 
 LOCAL_QUORUM (in localQuorumFor(), to be precise), which results in a 
 ClassCastException as reported by 
 https://datastax-oss.atlassian.net/browse/JAVA-241.
 Note that while we're at it, I tend to agree with Aleksey's comment on 
 CASSANDRA-6238: why not make EACH_QUORUM == QUORUM for SimpleStrategy too?





[jira] [Updated] (CASSANDRA-6480) Custom secondary index options in CQL3

2014-01-06 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-6480:
-

Attachment: 6480-v3.txt

Attaching the updated v3.

 Custom secondary index options in CQL3
 --

 Key: CASSANDRA-6480
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6480
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Andrés de la Peña
Assignee: Aleksey Yeschenko
Priority: Minor
  Labels: cql3, index
 Fix For: 2.0.5

 Attachments: 6480-v2.txt, 6480-v3.txt


 The CQL3 create index statement syntax does not allow specifying the 
 options map internally used by custom indexes. 





[jira] [Commented] (CASSANDRA-6421) Add bash completion to nodetool

2014-01-06 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13863378#comment-13863378
 ] 

Michael Shuler commented on CASSANDRA-6421:
---

Nice!  (added my TABs in there)
{code}
mshuler@hana:~$ wget -q 
https://raw.github.com/cscetbon/cassandra/nodetool-completion/etc/bash_completion.d/nodetool
mshuler@hana:~$ sudo mv nodetool /etc/bash_completion.d/
mshuler@hana:~$ . .profile
mshuler@hana:~$ nodetool  [TAB][TAB]
cfhistograms     enablebackup             join                 scrub
cfstats          enablebinary             move                 setcachecapacity
cleanup          enablegossip             netstats             setcompactionthreshold
clearsnapshot    enablehandoff            pausehandoff         setcompactionthroughput
compact          enablethrift             predictconsistency   setstreamthroughput
compactionstats  flush                    proxyhistograms      settraceprobability
decommission     getcompactionthreshold   rangekeysample       snapshot
describecluster  getcompactionthroughput  rebuild              status
describering     getendpoints             rebuild_index        statusbinary
disablebackup    getsstables              refresh              statusthrift
disablebinary    getstreamthroughput      removenode           stop
disablegossip    gossipinfo               repair               tpstats
disablehandoff   info                     resetlocalschema     upgradesstables
disablethrift    invalidatekeycache       resumehandoff        version
drain            invalidaterowcache       ring
mshuler@hana:~$ nodetool v[TAB]
mshuler@hana:~$ nodetool version 
ReleaseVersion: 2.1-SNAPSHOT
mshuler@hana:~$ nodetool st[TAB][TAB]
statusstatusbinary  statusthrift  stop  
mshuler@hana:~$ nodetool sta[TAB]
mshuler@hana:~$ nodetool status
Datacenter: datacenter1
===
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address    Load      Tokens  Owns (effective)  Host ID                               Rack
UN  127.0.0.1  40.44 KB  256     100.0%            68f18675-c52d-4491-af37-06c1637547c1  rack1
mshuler@hana:~$ nodetool rep[TAB]
mshuler@hana:~$ nodetool repair [TAB]
mshuler@hana:~$ nodetool repair system[TAB][TAB]
system system_traces  
mshuler@hana:~$ nodetool repair system
[2014-01-06 14:27:00,593] Nothing to repair for keyspace 'system'
mshuler@hana:~$ nodetool rep[TAB]
mshuler@hana:~$ nodetool repair [TAB]
mshuler@hana:~$ nodetool repair system_[add _ and TAB]
mshuler@hana:~$ nodetool repair system_traces 
[2014-01-06 14:27:04,946] Starting repair command #2, repairing 256 ranges for 
keyspace system_traces
[2014-01-06 14:27:06,028] Repair command #2 finished
mshuler@hana:~$
{code}
I will look at a patch for adding to the debian package and I suppose this goes 
in the top level conf/ directory in the tar, maybe?

 Add bash completion to nodetool
 ---

 Key: CASSANDRA-6421
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6421
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Cyril Scetbon
Assignee: Cyril Scetbon
Priority: Trivial
 Fix For: 1.2.14, 2.0.5


 You can find the bash-completion file at 
 https://raw.github.com/cscetbon/cassandra/nodetool-completion/etc/bash_completion.d/nodetool
 it uses cqlsh to get keyspaces and namespaces and could use an environment 
 variable (not implemented) to choose which cqlsh to use if authentication is 
 needed. But I think that's really a good start :)



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6421) Add bash completion to nodetool

2014-01-06 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13863396#comment-13863396
 ] 

Michael Shuler commented on CASSANDRA-6421:
---

Forgot to add - the above is a stock Debian Wheezy install, which includes 
bash-completion and the hook in ~/.bashrc to find the completion rules.  I 
don't have an OS X machine to help with, but if configured right, I assume it 
should Just Work there, too.



[jira] [Updated] (CASSANDRA-6552) nodetool repair -remote option

2014-01-06 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6552:
--

Description: Add a nodetool repair -remote option that will only repair 
against nodes outside of the local data center. The customer wants to be able 
to repair -pr a node in a local data center when they know the only valid 
replicas of data are in other data centers. They expect this will reduce 
streaming and OOM errors on the repairing node.   (was: Add a nodetool repair 
-remote option that will only repair against nodes outside of the local data 
center. The customer wants to be able to repair -pr a node in a local data 
center when they know the only valid replicas of data are in other data 
centers. They expect this will reduce streaming and OOM errors on the repairing 
node. Request is from Apple.)

Is this a subset of CASSANDRA-6440?

 nodetool repair -remote option
 --

 Key: CASSANDRA-6552
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6552
 Project: Cassandra
  Issue Type: New Feature
Reporter: Rich Reffner

 Add a nodetool repair -remote option that will only repair against nodes 
 outside of the local data center. The customer wants to be able to repair -pr 
 a node in a local data center when they know the only valid replicas of data 
 are in other data centers. They expect this will reduce streaming and OOM 
 errors on the repairing node. 





[jira] [Commented] (CASSANDRA-6552) nodetool repair -remote option

2014-01-06 Thread Rich Reffner (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13863462#comment-13863462
 ] 

Rich Reffner commented on CASSANDRA-6552:
-

Yes, looks like a subset of 6440. From 6440, this would be a great solution: 
Another easy way to do this is to have repair command take nodes with which 
you want to repair with.



[jira] [Resolved] (CASSANDRA-6552) nodetool repair -remote option

2014-01-06 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-6552.
---

Resolution: Duplicate

Resolving as duplicate then.



[jira] [Commented] (CASSANDRA-6440) Repair should allow repairing particular endpoints to reduce WAN usage.

2014-01-06 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13863514#comment-13863514
 ] 

sankalp kohli commented on CASSANDRA-6440:
--

bq. Could we alter the IllegalArgumentException for cfs in the system keyspace? IAE always gets thrown for any CF in the system KS regardless of what hosts are supplied.

I did not follow what you mean by this :)

 Repair should allow repairing particular endpoints to reduce WAN usage. 
 

 Key: CASSANDRA-6440
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6440
 Project: Cassandra
  Issue Type: New Feature
Reporter: sankalp kohli
Assignee: sankalp kohli
Priority: Minor
 Attachments: JIRA-6440.diff


 The way we send out data that does not match over WAN can be improved. 
 Example: Say there are four nodes(A,B,C,D) which are replica of a range we 
 are repairing. A, B is in DC1 and C,D is in DC2. If A does not have the data 
 which other replicas have, then we will have following streams
 1) A to B and back
 2) A to C and back(Goes over WAN)
 3) A to D and back(Goes over WAN)
 One of the ways of doing it to reduce WAN traffic is this.
 1) Repair A and B only with each other and C and D with each other starting 
 at same time t. 
 2) Once these repairs have finished, A,B and C,D are in sync with respect to 
 time t. 
 3) Now run a repair between A and C, the streams which are exchanged as a 
 result of the diff will also be streamed to B and D via A and C(C and D 
 behaves like a proxy to the streams).
 For a replication of DC1:2,DC2:2, the WAN traffic will get reduced by 50% and 
 even more for higher replication factors.
 Another easy way to do this is to have repair command take nodes with which 
 you want to repair with. Then we can do something like this.
 1) Run repair between (A and B) and (C and D)
 2) Run repair between (A and C)
 3) Run repair between (A and B) and (C and D)
 But this will increase the traffic inside the DC as we won't be doing proxy.
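The 50% figure above follows from counting WAN exchanges: naively the coordinator exchanges streams with every remote replica, while the proxy approach needs only one WAN exchange, fanned out locally at each end. A quick sketch of that arithmetic (assuming one exchange per remote replica in the naive case):

```shell
# WAN repair exchanges: naive (one per remote replica) vs proxied (always one).
for remote_replicas in 2 3 4; do
    reduction=$(( 100 * (remote_replicas - 1) / remote_replicas ))
    echo "remote replicas: $remote_replicas  WAN exchanges: $remote_replicas -> 1  (${reduction}% fewer)"
done
```

For two remote replicas (DC1:2,DC2:2) this gives the 50% reduction cited above, and 66%/75% for three and four, matching "even more for higher replication factors".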





[jira] [Commented] (CASSANDRA-6545) LOCAL_QUORUM still doesn't work with SimpleStrategy but don't throw a meaningful error message anymore

2014-01-06 Thread Joaquin Casares (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13863575#comment-13863575
 ] 

Joaquin Casares commented on CASSANDRA-6545:


I'm also sometimes seeing this in 2.0.4 while testing the java-driver. If I 
rerun the failing test by itself, it passes.

 LOCAL_QUORUM still doesn't work with SimpleStrategy but don't throw a 
 meaningful error message anymore
 --

 Key: CASSANDRA-6545
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6545
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Alex Liu
 Fix For: 1.2.14

 Attachments: 6545-1.2-branch.txt, 6545-2.0-branch.txt


 It seems this was the intent of CASSANDRA-6238 originally, though I've traced 
 it to the commit for CASSANDRA-6309 (f7efaffadace3e344eeb4a1384fa72c73d8422b0 
 to be precise). In any case, ConsistencyLevel.validateForWrite no longer 
 rejects LOCAL_QUORUM when SimpleStrategy is used, yet 
 ConsistencyLevel.blockFor definitely casts the strategy to NTS for 
 LOCAL_QUORUM (in localQuorumFor(), to be precise), which results in a 
 ClassCastException as reported by 
 https://datastax-oss.atlassian.net/browse/JAVA-241.
 Note that while we're at it, I tend to agree with Aleksey's comment on 
 CASSANDRA-6238: why not make EACH_QUORUM == QUORUM for SimpleStrategy too?





[jira] [Commented] (CASSANDRA-6421) Add bash completion to nodetool

2014-01-06 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13863604#comment-13863604
 ] 

Cyril Scetbon commented on CASSANDRA-6421:
--

Great to see you enjoyed it :) FYI I've tested it on OSX (where I developed 
it), Ubuntu Lucid (10.04) and Ubuntu Precise (12.04).

Something interesting: with cleanup, for example, the choice list is refreshed 
as you add column families:
{code}
$ nodetool cleanup [TAB]
pns_fr         system         system_auth    system_traces  test
$ nodetool cleanup system[TAB]
system         system_auth    system_traces
$ nodetool cleanup system [TAB]
HintsColumnFamily  Migrations   batchlog     peer_events  schema_columnfamilies
IndexInfo          NodeIdInfo   hints        peers        schema_columns
LocationInfo       Schema       local        range_xfers  schema_keyspaces
$ nodetool cleanup system Schema [TAB]
HintsColumnFamily  Migrations   hints        peers        schema_columns
IndexInfo          NodeIdInfo   local        range_xfers  schema_keyspaces
LocationInfo       batchlog     peer_events  schema_columnfamilies
$ nodetool cleanup system Schema Migrations [TAB]
HintsColumnFamily  NodeIdInfo   local        range_xfers  schema_keyspaces
IndexInfo          batchlog     peer_events  schema_columnfamilies
LocationInfo       hints        peers        schema_columns
{code}



[jira] [Commented] (CASSANDRA-6421) Add bash completion to nodetool

2014-01-06 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13863622#comment-13863622
 ] 

Cyril Scetbon commented on CASSANDRA-6421:
--

I think we could add a nodetool.bash-completion file to the tar at the root 
level or in another directory, but not in the conf directory, because I suppose 
some users/tools copy everything from conf to /etc/cassandra (like brew on 
OSX, for example), and if we put the nodetool script there, it would be copied 
too :(
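For illustration, a sketch of a packaging step that stages the file outside conf/ (the file name nodetool.bash-completion and the /etc/bash_completion.d target here are assumptions for the example, not a decision from this thread):

```shell
# Stage a completion file into a package root, deliberately outside conf/
# so tools that copy conf/* to /etc/cassandra don't pick it up.
DESTDIR="$(mktemp -d)"
mkdir -p "$DESTDIR/etc/bash_completion.d"
printf '# nodetool completion rules would go here\n' > "$DESTDIR/nodetool.bash-completion"
install -m 644 "$DESTDIR/nodetool.bash-completion" "$DESTDIR/etc/bash_completion.d/nodetool"
ls "$DESTDIR/etc/bash_completion.d"
```

On a stock Debian install, bash-completion sources everything in /etc/bash_completion.d, so a file placed there is picked up in new shells automatically.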



[jira] [Commented] (CASSANDRA-6503) sstables from stalled repair sessions become live after a reboot and can resurrect deleted data

2014-01-06 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13863773#comment-13863773
 ] 

Jason Brown commented on CASSANDRA-6503:


bq. we probably want some kind of lockfile

Interesting. What problems do you see this solving (I'm probably missing 
something in my understanding)?

bq. splitting FileMessage is better than casting

Yeah, I knew you were gonna say that :)

 sstables from stalled repair sessions become live after a reboot and can 
 resurrect deleted data
 ---

 Key: CASSANDRA-6503
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6503
 Project: Cassandra
  Issue Type: Bug
Reporter: Jeremiah Jordan
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 1.2.14, 2.0.5

 Attachments: 6503_c1.2-v1.patch


 The sstables streamed in during a repair session don't become active until 
 the session finishes.  If something causes the repair session to hang for 
 some reason, those sstables will hang around until the next reboot, and 
 become active then.  If you don't reboot for 3 months, this can cause data to 
 resurrect, as GC grace has expired, so tombstones for the data in those 
 sstables may have already been collected.





[jira] [Updated] (CASSANDRA-6157) Selectively Disable hinted handoff for a data center

2014-01-06 Thread sankalp kohli (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sankalp kohli updated CASSANDRA-6157:
-

Attachment: trunk-6157-v2.diff

 Selectively Disable hinted handoff for a data center
 

 Key: CASSANDRA-6157
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6157
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: sankalp kohli
Assignee: sankalp kohli
Priority: Minor
 Fix For: 2.0.5

 Attachments: trunk-6157-v2.diff, trunk-6157.txt


 Cassandra supports disabling hints or reducing the hint window. 
 It would be helpful to have a switch which stops hints to a down data center 
 but continues hints to other DCs.
 This is helpful during data center failover, as hints would put more 
 unnecessary pressure on the DC taking double traffic. Also, since Cassandra is 
 now under reduced redundancy, we don't want to disable hints within the DC. 





[jira] [Commented] (CASSANDRA-6157) Selectively Disable hinted handoff for a data center

2014-01-06 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13863796#comment-13863796
 ] 

sankalp kohli commented on CASSANDRA-6157:
--

Attached v2 patch with Jonathan's changes. Please review. 



[jira] [Created] (CASSANDRA-6553) Benchmark counter improvements (counters++)

2014-01-06 Thread Ryan McGuire (JIRA)
Ryan McGuire created CASSANDRA-6553:
---

 Summary: Benchmark counter improvements (counters++)
 Key: CASSANDRA-6553
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6553
 Project: Cassandra
  Issue Type: Test
Reporter: Ryan McGuire


Benchmark the difference in performance between CASSANDRA-6504 and trunk.

* Updating totally unrelated counters (different partitions)
* Updating the same counters a lot (same cells in the same partition)
* Different cells in the same few partitions (hot counter partition)

benchmark: https://github.com/iamaleksey/cassandra/commits/6504
compared to: https://github.com/iamaleksey/cassandra/commits/trunk





[jira] [Assigned] (CASSANDRA-6553) Benchmark counter improvements (counters++)

2014-01-06 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire reassigned CASSANDRA-6553:
---

Assignee: Ryan McGuire

 Benchmark counter improvements (counters++)
 ---

 Key: CASSANDRA-6553
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6553
 Project: Cassandra
  Issue Type: Test
Reporter: Ryan McGuire
Assignee: Ryan McGuire

 Benchmark the difference in performance between CASSANDRA-6504 and trunk.
 * Updating totally unrelated counters (different partitions)
 * Updating the same counters a lot (same cells in the same partition)
 * Different cells in the same few partitions (hot counter partition)
 benchmark: https://github.com/iamaleksey/cassandra/commits/6504
 compared to: https://github.com/iamaleksey/cassandra/commits/trunk





[jira] [Updated] (CASSANDRA-6553) Benchmark counter improvements (counters++)

2014-01-06 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-6553:


Fix Version/s: 2.1

 Benchmark counter improvements (counters++)
 ---

 Key: CASSANDRA-6553
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6553
 Project: Cassandra
  Issue Type: Test
Reporter: Ryan McGuire
Assignee: Ryan McGuire
 Fix For: 2.1


 Benchmark the difference in performance between CASSANDRA-6504 and trunk.
 * Updating totally unrelated counters (different partitions)
 * Updating the same counters a lot (same cells in the same partition)
 * Different cells in the same few partitions (hot counter partition)
 benchmark: https://github.com/iamaleksey/cassandra/commits/6504
 compared to: https://github.com/iamaleksey/cassandra/commits/trunk
 So far, the above changes should only affect the write path.





[jira] [Updated] (CASSANDRA-6553) Benchmark counter improvements (counters++)

2014-01-06 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire updated CASSANDRA-6553:


Description: 
Benchmark the difference in performance between CASSANDRA-6504 and trunk.

* Updating totally unrelated counters (different partitions)
* Updating the same counters a lot (same cells in the same partition)
* Different cells in the same few partitions (hot counter partition)

benchmark: https://github.com/iamaleksey/cassandra/commits/6504
compared to: https://github.com/iamaleksey/cassandra/commits/trunk

So far, the above changes should only affect the write path.

  was:
Benchmark the difference in performance between CASSANDRA-6504 and trunk.

* Updating totally unrelated counters (different partitions)
* Updating the same counters a lot (same cells in the same partition)
* Different cells in the same few partitions (hot counter partition)

benchmark: https://github.com/iamaleksey/cassandra/commits/6504
compared to: https://github.com/iamaleksey/cassandra/commits/trunk




[jira] [Commented] (CASSANDRA-6553) Benchmark counter improvements (counters++)

2014-01-06 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13863822#comment-13863822
 ] 

Aleksey Yeschenko commented on CASSANDRA-6553:
--

To clarify - we expect that (under contention) the latency will go up, and the 
throughput will decrease. We need to quantify the change, however.



[jira] [Commented] (CASSANDRA-6552) nodetool repair -remote option

2014-01-06 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13863833#comment-13863833
 ] 

sankalp kohli commented on CASSANDRA-6552:
--

To repair remote data centers, we can use repair by DC as added in 
CASSANDRA-6218.
For repairs within a DC, we can use the options from CASSANDRA-6440.



[jira] [Commented] (CASSANDRA-6456) log listen address at startup

2014-01-06 Thread Lyuben Todorov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13863867#comment-13863867
 ] 

Lyuben Todorov commented on CASSANDRA-6456:
---

v3 LGTM.

 log listen address at startup
 -

 Key: CASSANDRA-6456
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6456
 Project: Cassandra
  Issue Type: Wish
  Components: Core
Reporter: Jeremy Hanna
Assignee: Sean Bridges
Priority: Trivial
 Attachments: CASSANDRA-6456-2.patch, CASSANDRA-6456-3.patch, 
 CASSANDRA-6456.patch


 When looking through logs from a cluster, sometimes it's handy to know the 
 address a node is from the logs.  It would be convenient if on startup, we 
 indicated the listen address for that node.





[jira] [Issue Comment Deleted] (CASSANDRA-6456) log listen address at startup

2014-01-06 Thread Lyuben Todorov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lyuben Todorov updated CASSANDRA-6456:
--

Comment: was deleted

(was: v3 LGTM.)



[jira] [Created] (CASSANDRA-6554) Cluster is read-only during upgrade of nodes from 1.2 -> 2.0

2014-01-06 Thread Michael Shuler (JIRA)
Michael Shuler created CASSANDRA-6554:
-

 Summary: Cluster is read-only during upgrade of nodes from 1.2 -> 2.0
 Key: CASSANDRA-6554
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6554
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: EC2 Ubuntu Precise 12.04
Oracle JRE 1.7_25
C* 1.2.13 upgrade to 2.0.4

Reporter: Michael Shuler


During an upgrade from 1.2.13 to 2.0.3/2.0.4, the cluster is read-only and 
writes fail, until the entire cluster is fully upgraded.
(I'm gathering complete repro steps, test results, and logs to try to help and 
will post those asap, as well as try other versions to see what happens)





[jira] [Updated] (CASSANDRA-6456) log listen address at startup

2014-01-06 Thread Lyuben Todorov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lyuben Todorov updated CASSANDRA-6456:
--

Attachment: 6456_v4_trunk.patch

Patch is technically good but there are some minor nits: 
- Space needed after try in YamlConfigurationLoader#loadConfig() - 
{{try(InputStream is = url.openStream())}}
- Imports need to be cleaned up in YamlConfigurationLoader and CassandraDaemon 
(see [Code Style - Imports|http://wiki.apache.org/cassandra/CodeStyle#imports])
- Catch needs to go on a new line in {{CassandraDaemon#setup()}}

Attaching v4 with the above nits addressed, and +1 from me. 


 log listen address at startup
 -

 Key: CASSANDRA-6456
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6456
 Project: Cassandra
  Issue Type: Wish
  Components: Core
Reporter: Jeremy Hanna
Assignee: Sean Bridges
Priority: Trivial
 Attachments: 6456_v4_trunk.patch, CASSANDRA-6456-2.patch, 
 CASSANDRA-6456-3.patch, CASSANDRA-6456.patch


 When looking through logs from a cluster, sometimes it's handy to know the 
 address a node is from the logs.  It would be convenient if on startup, we 
 indicated the listen address for that node.





[jira] [Comment Edited] (CASSANDRA-6542) nodetool removenode hangs

2014-01-06 Thread Justen Walker (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13863911#comment-13863911
 ] 

Justen Walker edited comment on CASSANDRA-6542 at 1/7/14 5:06 AM:
--

I've had this happen in 1.1.11. 

Unfortunately this version does not have a {{removenode force}} command; you 
have to use unsafe JMX methods to force-remove the node.


was (Author: justen_walker):
I've had this happen in 1.1.11. 

Unfortunately this version does not have a {removenode force} command; you have 
to use unsafe JMX methods to force-remove the node.

 nodetool removenode hangs
 -

 Key: CASSANDRA-6542
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6542
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Ubuntu 12, 1.2.11 DSE
Reporter: Eric Lubow

 Running *nodetool removenode $host-id* doesn't actually remove the node from 
 the ring.  I've let it run anywhere from 5 minutes to 3 days and there are no 
 messages in the log about it hanging or failing, the command just sits there 
 running.  So the regular response has been to run *nodetool removenode 
 $host-id*, give it about 10-15 minutes and then run *nodetool removenode 
 force*.





[jira] [Commented] (CASSANDRA-3486) Node Tool command to stop repair

2014-01-06 Thread Justen Walker (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13863914#comment-13863914
 ] 

Justen Walker commented on CASSANDRA-3486:
--

I also hit this today on 1.1.11, same problem as Bill describes

 Node Tool command to stop repair
 

 Key: CASSANDRA-3486
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3486
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
 Environment: JVM
Reporter: Vijay
Assignee: Yuki Morishita
Priority: Minor
  Labels: repair
 Fix For: 2.1

 Attachments: 0001-stop-repair-3583.patch


 After CASSANDRA-1740, If the validation compaction is stopped then the repair 
 will hang. This ticket will allow users to kill the original repair.





git commit: Verify that the keyspace exists in describeRing and print nicer error message in BulkLoader.

2014-01-06 Thread marcuse
Updated Branches:
  refs/heads/cassandra-2.0 95f1b5f29 -> ae0a1e0f5


Verify that the keyspace exists in describeRing and print nicer error message 
in BulkLoader.

Patch by marcuse, reviewed by thobbs for CASSANDRA-6529


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ae0a1e0f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ae0a1e0f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ae0a1e0f

Branch: refs/heads/cassandra-2.0
Commit: ae0a1e0f5a88d66974582f383a942f6bf42b6ac5
Parents: 95f1b5f
Author: Marcus Eriksson marc...@apache.org
Authored: Tue Jan 7 06:48:08 2014 +0100
Committer: Marcus Eriksson marc...@apache.org
Committed: Tue Jan 7 06:48:08 2014 +0100

--
 .../apache/cassandra/service/StorageService.java   |  3 +++
 .../org/apache/cassandra/tools/BulkLoader.java | 17 -
 2 files changed, 19 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ae0a1e0f/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index cca7b00..102e0d8 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -1178,6 +1178,9 @@ public class StorageService extends NotificationBroadcasterSupport implements IE
 
     private List<TokenRange> describeRing(String keyspace, boolean includeOnlyLocalDC) throws InvalidRequestException
     {
+        if (!Schema.instance.getKeyspaces().contains(keyspace))
+            throw new InvalidRequestException("No such keyspace: " + keyspace);
+
         if (keyspace == null || Keyspace.open(keyspace).getReplicationStrategy() instanceof LocalStrategy)
             throw new InvalidRequestException("There is no ring for the keyspace: " + keyspace);
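
The intent of the new guard can be sketched in isolation. This is a minimal, hypothetical stand-in: `KNOWN_KEYSPACES` replaces `Schema.instance.getKeyspaces()`, and `IllegalArgumentException` replaces Thrift's `InvalidRequestException`.

```java
import java.util.Set;

// Minimal sketch of the describeRing guard. KNOWN_KEYSPACES stands in for
// Schema.instance.getKeyspaces(); IllegalArgumentException stands in for
// the Thrift InvalidRequestException used by the real code.
public class DescribeRingGuard
{
    static final Set<String> KNOWN_KEYSPACES = Set.of("system", "ks1");

    static void checkKeyspace(String keyspace)
    {
        // Reject unknown (or null) keyspaces before any ring computation,
        // so callers get a clear error instead of a downstream failure.
        if (keyspace == null || !KNOWN_KEYSPACES.contains(keyspace))
            throw new IllegalArgumentException("No such keyspace: " + keyspace);
    }

    public static void main(String[] args)
    {
        checkKeyspace("ks1"); // known keyspace: passes silently
        try
        {
            checkKeyspace("missing");
        }
        catch (IllegalArgumentException e)
        {
            System.out.println(e.getMessage());
        }
    }
}
```

Before the patch, an unknown keyspace fell through to `Keyspace.open(keyspace)`, which failed with a much less helpful error; checking the schema first turns that into a direct "No such keyspace" response.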
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ae0a1e0f/src/java/org/apache/cassandra/tools/BulkLoader.java
--
diff --git a/src/java/org/apache/cassandra/tools/BulkLoader.java 
b/src/java/org/apache/cassandra/tools/BulkLoader.java
index 15c8df8..4756bd3 100644
--- a/src/java/org/apache/cassandra/tools/BulkLoader.java
+++ b/src/java/org/apache/cassandra/tools/BulkLoader.java
@@ -76,7 +76,22 @@ public class BulkLoader
         OutputHandler handler = new OutputHandler.SystemOutput(options.verbose, options.debug);
         SSTableLoader loader = new SSTableLoader(options.directory, new ExternalClient(options.hosts, options.rpcPort, options.user, options.passwd, options.transportFactory), handler);
         DatabaseDescriptor.setStreamThroughputOutboundMegabitsPerSec(options.throttle);
-        StreamResultFuture future = loader.stream(options.ignores);
+        StreamResultFuture future = null;
+        try
+        {
+            future = loader.stream(options.ignores);
+        }
+        catch (Exception e)
+        {
+            System.err.println(e.getMessage());
+            if (e.getCause() != null)
+                System.err.println(e.getCause());
+            if (options.debug)
+                e.printStackTrace(System.err);
+            else
+                System.err.println("Run with --debug to get full stack trace or --help to get help.");
+            System.exit(1);
+        }
         future.addEventListener(new ProgressIndicator());
         try
         {
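
Stripped of the streaming machinery, the pattern above is: print the exception message and its cause, offer --debug for the full trace, and exit non-zero. A self-contained sketch, with the caveat that `stream()`, the `debug` flag, and `handleFailure` are illustrative stand-ins, not Cassandra's actual API:

```java
// Self-contained sketch of BulkLoader's new error handling. stream() is an
// illustrative stub that fails the way loader.stream(...) might.
public class LoaderErrorSketch
{
    static boolean debug = false; // would come from --debug on the real CLI

    static void stream()
    {
        throw new RuntimeException("Could not stream sstables",
                new IllegalStateException("connection refused"));
    }

    static int handleFailure(Exception e)
    {
        // Report the failure and its cause instead of dumping a raw stack
        // trace at the user; keep the trace behind the debug flag.
        System.err.println(e.getMessage());
        if (e.getCause() != null)
            System.err.println(e.getCause());
        if (debug)
            e.printStackTrace(System.err);
        else
            System.err.println("Run with --debug to get full stack trace or --help to get help.");
        return 1;
    }

    public static void main(String[] args)
    {
        int status = 0;
        try
        {
            stream();
        }
        catch (Exception e)
        {
            status = handleFailure(e);
        }
        // The real BulkLoader calls System.exit(status) here; we just report it.
        System.out.println("exit status: " + status);
    }
}
```

The point of the change is user experience: before it, a bad keyspace or unreachable host surfaced as an uncaught exception and full stack trace from `loader.stream(...)`; afterwards the tool prints a one-line diagnosis and reserves the trace for --debug.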



[2/2] git commit: Merge branch 'cassandra-2.0' into trunk

2014-01-06 Thread marcuse
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1dc43bda
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1dc43bda
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1dc43bda

Branch: refs/heads/trunk
Commit: 1dc43bdad2beb52793253c4aacb737c8de74cd4a
Parents: 6949880 ae0a1e0
Author: Marcus Eriksson marc...@apache.org
Authored: Tue Jan 7 06:50:43 2014 +0100
Committer: Marcus Eriksson marc...@apache.org
Committed: Tue Jan 7 06:50:43 2014 +0100

--
 .../apache/cassandra/service/StorageService.java   |  3 +++
 .../org/apache/cassandra/tools/BulkLoader.java | 17 -
 2 files changed, 19 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1dc43bda/src/java/org/apache/cassandra/service/StorageService.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1dc43bda/src/java/org/apache/cassandra/tools/BulkLoader.java
--



[1/2] git commit: Verify that the keyspace exists in describeRing and print nicer error message in BulkLoader.

2014-01-06 Thread marcuse
Updated Branches:
  refs/heads/trunk 694988015 -> 1dc43bdad


Verify that the keyspace exists in describeRing and print nicer error message 
in BulkLoader.

Patch by marcuse, reviewed by thobbs for CASSANDRA-6529


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ae0a1e0f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ae0a1e0f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ae0a1e0f

Branch: refs/heads/trunk
Commit: ae0a1e0f5a88d66974582f383a942f6bf42b6ac5
Parents: 95f1b5f
Author: Marcus Eriksson marc...@apache.org
Authored: Tue Jan 7 06:48:08 2014 +0100
Committer: Marcus Eriksson marc...@apache.org
Committed: Tue Jan 7 06:48:08 2014 +0100

--
 .../apache/cassandra/service/StorageService.java   |  3 +++
 .../org/apache/cassandra/tools/BulkLoader.java | 17 -
 2 files changed, 19 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ae0a1e0f/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index cca7b00..102e0d8 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -1178,6 +1178,9 @@ public class StorageService extends NotificationBroadcasterSupport implements IE
 
     private List<TokenRange> describeRing(String keyspace, boolean includeOnlyLocalDC) throws InvalidRequestException
     {
+        if (!Schema.instance.getKeyspaces().contains(keyspace))
+            throw new InvalidRequestException("No such keyspace: " + keyspace);
+
         if (keyspace == null || Keyspace.open(keyspace).getReplicationStrategy() instanceof LocalStrategy)
             throw new InvalidRequestException("There is no ring for the keyspace: " + keyspace);
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ae0a1e0f/src/java/org/apache/cassandra/tools/BulkLoader.java
--
diff --git a/src/java/org/apache/cassandra/tools/BulkLoader.java 
b/src/java/org/apache/cassandra/tools/BulkLoader.java
index 15c8df8..4756bd3 100644
--- a/src/java/org/apache/cassandra/tools/BulkLoader.java
+++ b/src/java/org/apache/cassandra/tools/BulkLoader.java
@@ -76,7 +76,22 @@ public class BulkLoader
         OutputHandler handler = new OutputHandler.SystemOutput(options.verbose, options.debug);
         SSTableLoader loader = new SSTableLoader(options.directory, new ExternalClient(options.hosts, options.rpcPort, options.user, options.passwd, options.transportFactory), handler);
         DatabaseDescriptor.setStreamThroughputOutboundMegabitsPerSec(options.throttle);
-        StreamResultFuture future = loader.stream(options.ignores);
+        StreamResultFuture future = null;
+        try
+        {
+            future = loader.stream(options.ignores);
+        }
+        catch (Exception e)
+        {
+            System.err.println(e.getMessage());
+            if (e.getCause() != null)
+                System.err.println(e.getCause());
+            if (options.debug)
+                e.printStackTrace(System.err);
+            else
+                System.err.println("Run with --debug to get full stack trace or --help to get help.");
+            System.exit(1);
+        }
         future.addEventListener(new ProgressIndicator());
         try
         {



[Cassandra Wiki] Update of Committers by MarcusEriksson

2014-01-06 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on Cassandra Wiki for 
change notification.

The Committers page has been changed by MarcusEriksson:
https://wiki.apache.org/cassandra/Committers?action=diff&rev1=36&rev2=37

  ||Yuki Morishita||May 2012||Datastax|| ||
  ||Aleksey Yeschenko||Nov 2012||Datastax|| ||
  ||Jason Brown||Feb 2013||Netflix|| ||
- ||Marcus Eriksson||April 2013||Spotify|| ||
+ ||Marcus Eriksson||April 2013||Datastax|| ||
  
  {{https://c.statcounter.com/9397521/0/fe557aad/1/|stats}}
  


[jira] [Commented] (CASSANDRA-5202) CFs should have globally and temporally unique CF IDs to prevent reusing data from earlier incarnation of same CF name

2014-01-06 Thread Pavel Yaskevich (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863941#comment-13863941
 ] 

Pavel Yaskevich commented on CASSANDRA-5202:


+1

 CFs should have globally and temporally unique CF IDs to prevent reusing 
 data from earlier incarnation of same CF name
 

 Key: CASSANDRA-5202
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5202
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.9
 Environment: OS: Windows 7, 
 Server: Cassandra 1.1.9 release drop
 Client: astyanax 1.56.21, 
 JVM: Sun/Oracle JVM 64 bit (jdk1.6.0_27)
Reporter: Marat Bedretdinov
Assignee: Yuki Morishita
  Labels: test
 Fix For: 2.1

 Attachments: 5202.txt, astyanax-stress-driver.zip


 Attached is a driver that sequentially:
 1. Drops keyspace
 2. Creates keyspace
 3. Creates 2 column families
 4. Seeds 1M rows with 100 columns
 5. Queries these 2 column families
 The above steps are repeated 1000 times.
 The following exception is observed at random (race - SEDA?):
 ERROR [ReadStage:55] 2013-01-29 19:24:52,676 AbstractCassandraDaemon.java 
 (line 135) Exception in thread Thread[ReadStage:55,5,main]
 java.lang.AssertionError: DecoratedKey(-1, ) != 
 DecoratedKey(62819832764241410631599989027761269388, 313a31) in 
 C:\var\lib\cassandra\data\user_role_reverse_index\business_entity_role\user_role_reverse_index-business_entity_role-hf-1-Data.db
   at 
 org.apache.cassandra.db.columniterator.SSTableSliceIterator.init(SSTableSliceIterator.java:60)
   at 
 org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:67)
   at 
 org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:79)
   at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:256)
   at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:64)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1367)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1229)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1164)
   at org.apache.cassandra.db.Table.getRow(Table.java:378)
   at 
 org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:69)
   at 
 org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:822)
   at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1271)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 This exception appears in the server at the time of client submitting a query 
 request (row slice) and not at the time data is seeded. The client times out 
 and this data can no longer be queried as the same exception would always 
 occur from there on.
 Also on iteration 201, it appears that dropping column families failed and as 
 a result their recreation failed with unique column family name violation 
 (see exception below). Note that the data files are actually gone, so it 
 appears that the server runtime responsible for creating column family was 
 out of sync with the piece that dropped them:
 Starting dropping column families
 Dropped column families
 Starting dropping keyspace
 Dropped keyspace
 Starting creating column families
 Created column families
 Starting seeding data
 Total rows inserted: 100 in 5105 ms
 Iteration: 200; Total running time for 1000 queries is 232; Average running 
 time of 1000 queries is 0 ms
 Starting dropping column families
 Dropped column families
 Starting dropping keyspace
 Dropped keyspace
 Starting creating column families
 Created column families
 Starting seeding data
 Total rows inserted: 100 in 5361 ms
 Iteration: 201; Total running time for 1000 queries is 222; Average running 
 time of 1000 queries is 0 ms
 Starting dropping column families
 Starting creating column families
 Exception in thread main 
 com.netflix.astyanax.connectionpool.exceptions.BadRequestException: 
 BadRequestException: [host=127.0.0.1(127.0.0.1):9160, latency=2468(2469), 
 attempts=1]InvalidRequestException(why:Keyspace names must be 
 case-insensitively unique (user_role_reverse_index conflicts with 
 user_role_reverse_index))
   at 
 com.netflix.astyanax.thrift.ThriftConverter.ToConnectionPoolException(ThriftConverter.java:159)
   at 
 

[jira] [Updated] (CASSANDRA-5357) Query cache

2014-01-06 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5357:
--

Fix Version/s: 2.1

 Query cache
 ---

 Key: CASSANDRA-5357
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5357
 Project: Cassandra
  Issue Type: Bug
Reporter: Jonathan Ellis
Assignee: Marcus Eriksson
 Fix For: 2.1


 I think that most people expect the row cache to act like a query cache, 
 because that's a reasonable model.  Caching the entire partition is, in 
 retrospect, not really reasonable, so it's not surprising that it catches 
 people off guard, especially given the confusion we've inflicted on ourselves 
 as to what a row constitutes.
 I propose replacing it with a true query cache.





[jira] [Assigned] (CASSANDRA-5357) Query cache

2014-01-06 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reassigned CASSANDRA-5357:
-

Assignee: Marcus Eriksson  (was: Vijay)

 Query cache
 ---

 Key: CASSANDRA-5357
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5357
 Project: Cassandra
  Issue Type: Bug
Reporter: Jonathan Ellis
Assignee: Marcus Eriksson
 Fix For: 2.1


 I think that most people expect the row cache to act like a query cache, 
 because that's a reasonable model.  Caching the entire partition is, in 
 retrospect, not really reasonable, so it's not surprising that it catches 
 people off guard, especially given the confusion we've inflicted on ourselves 
 as to what a row constitutes.
 I propose replacing it with a true query cache.


