[jira] [Updated] (CASSANDRA-3885) Support multiple ranges in SliceQueryFilter

2012-06-15 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-3885:


Attachment: 3885-v2.txt

Ok, I like the last patch much more and I've convinced myself that this 
prefetching thing is a good idea with multiple slices.

I believe this can be simplified a tiny bit further though. Typically the 
{{readBackward}} business is a tad confusing, and since only SimpleBlockFetcher 
really cares about it, it can be moved there. I also think the switch of 
slice/lookup of index block can be cleaned up by doing both together. I'm 
attaching a v2 that does those things, a few other cleanups, adds a bunch of 
comments and also adds two optimizations:
* Since we always know the order in which we read columns, we can remember when 
we've entered a slice to avoid a bunch of comparisons until we leave it.
* When we switch to the next slice, we reuse the binary search to locate 
where that new slice starts rather than just going to the next block; otherwise 
we may end up reading lots of unnecessary blocks.

I'll note that imho, with those optimizations, SimpleSliceReader (not to be 
confused with SimpleBlockFetcher) isn't really useful anymore, but we probably 
want to make sure by benchmarking it.
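To make the two optimizations concrete, here is a hypothetical, simplified sketch: plain sorted lists of column names standing in for on-disk blocks, not the actual IndexedSliceReader code. The "entered a slice" part shows up as the while loop that only checks the end bound, and the binary search jumps straight to where the next slice begins.

```java
// Hypothetical sketch of a single-pass, multi-slice scan over a sorted
// column list (all names here are illustrative, not Cassandra's).
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class MultiSliceScan
{
    /** Returns the columns falling in any [start, end] slice, in one pass.
     *  Slices are assumed sorted and non-overlapping. */
    public static List<String> scan(List<String> sortedColumns, String[][] slices)
    {
        List<String> result = new ArrayList<>();
        int pos = 0;
        for (String[] slice : slices)
        {
            String start = slice[0], end = slice[1];
            // Optimization 2: binary-search the start of the next slice
            // instead of stepping block by block.
            int idx = Collections.binarySearch(sortedColumns.subList(pos, sortedColumns.size()), start);
            pos += idx >= 0 ? idx : -idx - 1;
            // Optimization 1: once inside the slice, only the end bound
            // needs comparing until we leave it.
            while (pos < sortedColumns.size() && sortedColumns.get(pos).compareTo(end) <= 0)
                result.add(sortedColumns.get(pos++));
        }
        return result;
    }

    public static void main(String[] args)
    {
        List<String> cols = Arrays.asList("a", "b", "c", "d", "e", "f");
        String[][] slices = { { "b", "c" }, { "e", "f" } };
        System.out.println(scan(cols, slices)); // prints [b, c, e, f]
    }
}
```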

Anyway, outside of IndexedSliceReader, there were a few problems (all 
solved by the attached v2):
* We needed to deal with multiple slices for the memtable iterator too, 
otherwise we'd end up returning the wrong columns.
* We needed to correctly serialize multiple slices for the inter-node protocol.

I'll note that with this patch, db.SerializationsTest is broken, but that is no 
mystery: it's trying to read old messages using the current protocol version. 
So I could regenerate the binary messages, but I'm confused about what 
SerializationsTest is actually testing. I thought it was making sure we don't 
break backward compatibility, but if we regenerate the binary messages at each 
release we're not testing that at all.
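For what it's worth, testing backward compatibility would require the serialized bytes for old versions to be frozen once and never regenerated. A toy illustration of that golden-copy idea (hypothetical names and wire format, not the actual SerializationsTest):

```java
// Hypothetical golden-copy compatibility check: the v1 bytes are frozen at
// release time; regenerating them from current code would only test
// round-tripping, not compatibility. Format and names are illustrative.
public class GoldenSerializationCheck
{
    // Captured when v1 shipped: version byte, 2-byte length, payload "ab".
    static final byte[] GOLDEN_V1 = { 0x01, 0x00, 0x02, 0x61, 0x62 };

    /** Toy "old protocol" decoder: version, big-endian length, then payload. */
    static String deserializeV1(byte[] in)
    {
        if (in[0] != 0x01)
            throw new IllegalArgumentException("not a v1 message");
        int len = (in[1] << 8) | in[2];
        return new String(in, 3, len);
    }

    public static void main(String[] args)
    {
        // Current code must still understand the frozen v1 bytes.
        System.out.println(deserializeV1(GOLDEN_V1)); // prints ab
    }
}
```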

ps: @david, it seems your editor is splitting lines where it shouldn't and is 
reordering imports in a way that doesn't respect 
http://wiki.apache.org/cassandra/CodeStyle.


 Support multiple ranges in SliceQueryFilter
 ---

 Key: CASSANDRA-3885
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3885
 Project: Cassandra
  Issue Type: Sub-task
  Components: Core
Reporter: Jonathan Ellis
Assignee: David Alves
 Fix For: 1.2

 Attachments: 3885-v2.txt, CASSANDRA-3885.patch, CASSANDRA-3885.patch, 
 CASSANDRA-3885.patch, CASSANDRA-3885.patch, CASSANDRA-3885.patch


 This is logically a subtask of CASSANDRA-2710, but Jira doesn't allow 
 sub-sub-tasks.
 We need to support multiple ranges in a SliceQueryFilter, and we want 
 querying them to be efficient, i.e., one pass through the row to get all of 
 the ranges, rather than one pass per range.
 Supercolumns are irrelevant since the goal is to replace them anyway.  Ignore 
 supercolumn-related code or rip it out, whichever is easier.
 This is ONLY dealing with the storage engine part, not the StorageProxy and 
 Command intra-node messages or the Thrift or CQL client APIs.  Thus, a unit 
 test should be added to ColumnFamilyStoreTest to demonstrate that it works.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (CASSANDRA-4344) Windows tools don't work and litter the environment

2012-06-15 Thread JIRA
Holger Hoffstätte created CASSANDRA-4344:


 Summary: Windows tools don't work and litter the environment
 Key: CASSANDRA-4344
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4344
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 1.1.1
 Environment: any Windows, any JDK
Reporter: Holger Hoffstätte


On Windows the tools either don't work at all (cassandra-stress) and litter the 
shell environment (cassandra-stress & sstablemetadata) by repeatedly appending 
the same information to variables, eventually running out of space.






[jira] [Updated] (CASSANDRA-4344) Windows tools don't work and litter the environment

2012-06-15 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Holger Hoffstätte updated CASSANDRA-4344:
-

Description: 
On Windows the tools either don't work at all (cassandra-stress) and/or litter 
the shell environment (cassandra-stress & sstablemetadata) by repeatedly 
appending the same information to variables, eventually running out of space.


  was:
On Windows the tools either don't work at all (cassandra-stress) and litter the 
shell environment (cassandra-stress & sstablemetadata) by repeatedly appending 
the same information to variables, eventually running out of space.



 Windows tools don't work and litter the environment
 ---

 Key: CASSANDRA-4344
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4344
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 1.1.1
 Environment: any Windows, any JDK
Reporter: Holger Hoffstätte






[jira] [Updated] (CASSANDRA-4344) Windows tools don't work and litter the environment

2012-06-15 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Holger Hoffstätte updated CASSANDRA-4344:
-

Attachment: stress.patch
sstablemeta.patch

Trivial fixes for tools to consistently find the right classes and not litter 
the environment.

 Windows tools don't work and litter the environment
 ---

 Key: CASSANDRA-4344
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4344
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 1.1.1
 Environment: any Windows, any JDK
Reporter: Holger Hoffstätte
 Attachments: sstablemeta.patch, stress.patch







[jira] [Commented] (CASSANDRA-3885) Support multiple ranges in SliceQueryFilter

2012-06-15 Thread David Alves (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13295647#comment-13295647
 ] 

David Alves commented on CASSANDRA-3885:


Thanks for reviewing and extending the patch to the rest of the system, Sylvain.

PS: wrt code style, I've made changes to the imports to meet the code style, but 
I was already using tjake's cassandra eclipse profile...






[wiki.cassandra-jdbc] push by wfs...@gmail.com - Update to show new branches on 2012-06-15 03:08 GMT

2012-06-15 Thread cassandra-jdbc . apache-extras . org

Revision: 21bc62dab500
Author:   wfshaw wfs...@gmail.com
Date: Thu Jun 14 20:08:11 2012
Log:  Update to show new branches
http://code.google.com/a/apache-extras.org/p/cassandra-jdbc/source/detail?r=21bc62dab500repo=wiki

Modified:
 /Branch_Versions.wiki

===
--- /Branch_Versions.wiki   Sat Mar 24 19:34:56 2012
+++ /Branch_Versions.wiki   Thu Jun 14 20:08:11 2012
@@ -5,9 +5,13 @@

 {{{cassandra-jdbc}}} manages a number of branches. This wiki page should be kept up to date to document the names and descriptions of each of the branches.

+See the Source/Changes tab of the branch of interest to see the change history for that branch

 = Details =

 ||*Branch Name* || *Status*  || *Description* ||
-||trunk || _active_  || New work that is following Cassandra 1.1.0 is committed here. Latest development work (1.1-dev-SNAPSHOT)||
-||v1.0.5|| _active_  || Current version that follows the Cassandra 1.0.x releases ||
+||v1.0.5|| _inactive_|| Older version that follows the Cassandra 1.0.x releases. Only critical update will be made ||
+||v1.1.0|| _active_  || Version that follows the Cassandra 1.1.0 release. (Server side PreparedStatement support added here)||
+||v1.1.1|| _active_  || Version that follows the Cassandra 1.1.0 release. (Current Stable Release) ||
+||master|| _active_  || Copy of the Current stable release. (master is the default branch)||
+||trunk || _active_  || New work that is following Cassandra 1.2 is committed here. Latest development work...||


[wiki.cassandra-jdbc] push by wfs...@gmail.com - Fix typo on 2012-06-15 03:10 GMT

2012-06-15 Thread cassandra-jdbc . apache-extras . org

Revision: 7b8878b2816f
Author:   wfshaw wfs...@gmail.com
Date: Thu Jun 14 20:10:27 2012
Log:  Fix typo
http://code.google.com/a/apache-extras.org/p/cassandra-jdbc/source/detail?r=7b8878b2816frepo=wiki

Modified:
 /Branch_Versions.wiki

===
--- /Branch_Versions.wiki   Thu Jun 14 20:08:11 2012
+++ /Branch_Versions.wiki   Thu Jun 14 20:10:27 2012
@@ -12,6 +12,6 @@
 ||*Branch Name* || *Status*  || *Description* ||
 ||v1.0.5|| _inactive_|| Older version that follows the Cassandra 1.0.x releases. Only critical update will be made ||
 ||v1.1.0|| _active_  || Version that follows the Cassandra 1.1.0 release. (Server side PreparedStatement support added here)||
-||v1.1.1|| _active_  || Version that follows the Cassandra 1.1.0 release. (Current Stable Release) ||
+||v1.1.1|| _active_  || Version that follows the Cassandra 1.1.1 release. (Current Stable Release) ||
 ||master|| _active_  || Copy of the Current stable release. (master is the default branch)||
 ||trunk || _active_  || New work that is following Cassandra 1.2 is committed here. Latest development work...||


[cassandra-jdbc] 5 new revisions pushed by wfs...@gmail.com on 2012-06-15 02:41 GMT

2012-06-15 Thread cassandra-jdbc . apache-extras . org

5 new revisions:

Revision: f97b06dd798c
Author:   Rick Shaw wfs...@gmail.com
Date: Thu Jun 14 18:56:51 2012
Log:  Remove the log4j.properties file from src/main/resources...
http://code.google.com/a/apache-extras.org/p/cassandra-jdbc/source/detail?r=f97b06dd798c

Revision: 5a690f77f8e7
Author:   Rick Shaw wfs...@gmail.com
Date: Thu Jun 14 13:49:16 2012
Log:  Show more information in error messages from Server
http://code.google.com/a/apache-extras.org/p/cassandra-jdbc/source/detail?r=5a690f77f8e7

Revision: 4930d2526457
Author:   Rick Shaw wfs...@gmail.com
Date: Thu Jun 14 13:50:55 2012
Log:  Add test for Issue #33
http://code.google.com/a/apache-extras.org/p/cassandra-jdbc/source/detail?r=4930d2526457

Revision: 320b789c5fb1
Author:   Rick Shaw wfs...@gmail.com
Date: Thu Jun 14 18:57:35 2012
Log:  Merge branch 'issue-33' into v1.1.0
http://code.google.com/a/apache-extras.org/p/cassandra-jdbc/source/detail?r=320b789c5fb1

Revision: af085cd94571
Author:   Rick Shaw wfs...@gmail.com
Date: Thu Jun 14 19:29:00 2012
Log:  Update dependencies in build.xml and pom.xml to version 1.1.1 of  
C*

http://code.google.com/a/apache-extras.org/p/cassandra-jdbc/source/detail?r=af085cd94571

==
Revision: f97b06dd798c
Author:   Rick Shaw wfs...@gmail.com
Date: Thu Jun 14 18:56:51 2012
Log:  Remove the log4j.properties file from src/main/resources

log4j file was getting put into the jar overriding client usage
http://code.google.com/a/apache-extras.org/p/cassandra-jdbc/source/detail?r=f97b06dd798c

Deleted:
 /src/main/resources/log4j.properties

===
--- /src/main/resources/log4j.properties   Mon Nov 21 14:37:51 2011
+++ /dev/null
@@ -1,8 +0,0 @@
-#  Test Log4J Properties File
-
-log4j.rootLogger=WARN, stdout
-log4j.logger.org.apache.cassandra.cql.jdbc=INFO
-
-log4j.appender.stdout=org.apache.log4j.ConsoleAppender
-log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
-log4j.appender.stdout.layout.ConversionPattern=%-6r %-5p [%-30c{3}] = %m%n

==
Revision: 5a690f77f8e7
Author:   Rick Shaw wfs...@gmail.com
Date: Thu Jun 14 13:49:16 2012
Log:  Show more information in error messages from Server
http://code.google.com/a/apache-extras.org/p/cassandra-jdbc/source/detail?r=5a690f77f8e7

Modified:
 /src/main/java/org/apache/cassandra/cql/jdbc/CassandraStatement.java

===
--- /src/main/java/org/apache/cassandra/cql/jdbc/CassandraStatement.java   Wed Dec 21 21:16:09 2011
+++ /src/main/java/org/apache/cassandra/cql/jdbc/CassandraStatement.java   Thu Jun 14 13:49:16 2012

@@ -176,7 +176,7 @@
 }
 catch (InvalidRequestException e)
 {
-throw new SQLSyntaxErrorException(e.getWhy());
-throw new SQLSyntaxErrorException(e.getWhy());
+throw new SQLSyntaxErrorException(e.getWhy()+"\n'"+sql+"'",e);
 }
 catch (UnavailableException e)
 {
@@ -184,7 +184,7 @@
 }
 catch (TimedOutException e)
 {
-throw new SQLTransientConnectionException(e.getMessage());
+throw new SQLTransientConnectionException(e);
 }
 catch (SchemaDisagreementException e)
 {
@@ -192,7 +192,7 @@
 }
 catch (TException e)
 {
-throw new SQLNonTransientConnectionException(e.getMessage());
+throw new SQLNonTransientConnectionException(e);
 }

 }
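The hunks above replace message-only rethrows with cause-chaining constructors. A brief illustration of why that matters (hypothetical helper names, not the cassandra-jdbc code): only the chained form keeps the original exception, and with it the original stack trace.

```java
// Hypothetical illustration: wrapping with the cause preserves the original
// exception; copying only the message discards it.
import java.sql.SQLNonTransientConnectionException;

public class CauseChaining
{
    static SQLNonTransientConnectionException wrapMessageOnly(Exception e)
    {
        // Only the text survives; getCause() will be null.
        return new SQLNonTransientConnectionException(e.getMessage());
    }

    static SQLNonTransientConnectionException wrapWithCause(Exception e)
    {
        // The original exception (and its stack trace) is retained.
        return new SQLNonTransientConnectionException(e);
    }

    public static void main(String[] args)
    {
        Exception root = new IllegalStateException("connection reset");
        System.out.println(wrapMessageOnly(root).getCause()); // prints null
        System.out.println(wrapWithCause(root).getCause());   // prints the root exception
    }
}
```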

==
Revision: 4930d2526457
Author:   Rick Shaw wfs...@gmail.com
Date: Thu Jun 14 13:50:55 2012
Log:  Add test for Issue #33.

 o also add a bit of tidying up.
http://code.google.com/a/apache-extras.org/p/cassandra-jdbc/source/detail?r=4930d2526457

Modified:
 /src/test/java/org/apache/cassandra/cql/jdbc/JdbcRegressionTest.java

===
--- /src/test/java/org/apache/cassandra/cql/jdbc/JdbcRegressionTest.java   Sat Mar 24 18:43:56 2012
+++ /src/test/java/org/apache/cassandra/cql/jdbc/JdbcRegressionTest.java   Thu Jun 14 13:50:55 2012

@@ -48,7 +48,10 @@
 public static void setUpBeforeClass() throws Exception
 {
 Class.forName("org.apache.cassandra.cql.jdbc.CassandraDriver");
-con = DriverManager.getConnection(String.format("jdbc:cassandra://%s:%d/%s",HOST,PORT,"system"));
+String URL = String.format("jdbc:cassandra://%s:%d/%s",HOST,PORT,"system");
+System.out.println("Connection URL = '"+URL+"'");
+
+con = DriverManager.getConnection(URL);
 Statement stmt = con.createStatement();

 // Drop Keyspace
@@ -56,10 +59,12 @@

 try { stmt.execute(dropKS);}
 catch (Exception e){/* Exception on DROP is OK */}
-
+
 // Create KeySpace
 String createKS = String.format(CREATE KEYSPACE %s WITH  
strategy_class = 

[jira] [Updated] (CASSANDRA-4321) stackoverflow building interval tree & possible sstable corruptions

2012-06-15 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-4321:


Attachment: (was: 
0001-Change-Range-Bounds-in-LeveledManifest.overlapping-v2.txt)

 stackoverflow building interval tree & possible sstable corruptions
 ---

 Key: CASSANDRA-4321
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4321
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.1
Reporter: Anton Winter
Assignee: Sylvain Lebresne
 Fix For: 1.1.2

 Attachments: 
 0001-Change-Range-Bounds-in-LeveledManifest.overlapping-v3.txt, 
 0001-Change-Range-Bounds-in-LeveledManifest.overlapping.txt, 
 0002-Scrub-detects-and-repair-outOfOrder-rows.txt, 
 ooyala-hastur-stacktrace.txt


 After upgrading to 1.1.1 (from 1.1.0) I have started experiencing 
 StackOverflowError's resulting in compaction backlog and failure to restart. 
 The ring currently consists of 6 DC's and 22 nodes using LCS & compression.  
 This issue was first noted on 2 nodes in one DC and then appears to have 
 spread to various other nodes in the other DC's.  
 When the first occurrence of this was found I restarted the instance but it 
 failed to start so I cleared its data and treated it as a replacement node 
 for the token it was previously responsible for.  This node successfully 
 streamed all the relevant data back but failed again a number of hours later 
 with the same StackOverflowError and again was unable to restart. 
 The initial stack overflow error on a running instance looks like this:
 ERROR [CompactionExecutor:314] 2012-06-07 09:59:43,017 
 AbstractCassandraDaemon.java (line 134) Exception in thread 
 Thread[CompactionExecutor:314,1,main]
 java.lang.StackOverflowError
 at java.util.Arrays.mergeSort(Arrays.java:1157)
 at java.util.Arrays.sort(Arrays.java:1092)
 at java.util.Collections.sort(Collections.java:134)
 at org.apache.cassandra.utils.IntervalTree.IntervalNode.findMinMedianMax(IntervalNode.java:114)
 at org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:49)
 at org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62)
 at org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62)
 at org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62)
 [snip - this repeats until stack overflow.  Compactions stop from this point 
 onwards]
 I restarted this failing instance with DEBUG logging enabled and it throws 
 the following exception part way through startup:
 ERROR 11:37:51,046 Exception in thread Thread[OptionalTasks:1,5,main]
 java.lang.StackOverflowError
 at org.slf4j.helpers.MessageFormatter.safeObjectAppend(MessageFormatter.java:307)
 at org.slf4j.helpers.MessageFormatter.deeplyAppendParameter(MessageFormatter.java:276)
 at org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:230)
 at org.slf4j.helpers.MessageFormatter.format(MessageFormatter.java:124)
 at org.slf4j.impl.Log4jLoggerAdapter.debug(Log4jLoggerAdapter.java:228)
 at org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:45)
 at org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62)
 at org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62)
 [snip - this repeats until stack overflow]
 at org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62)
 at org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:64)
 at org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62)
 at org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62)
 at org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:64)
 at org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62)
 at org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62)
 at org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:64)
 at org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62)
 at org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62)
 at org.apache.cassandra.utils.IntervalTree.IntervalNode.<init>(IntervalNode.java:62)
 at org.apache.cassandra.utils.IntervalTree.IntervalTree.<init>(IntervalTree.java:39)
 at org.apache.cassandra.db.DataTracker.buildIntervalTree(DataTracker.java:560)
 at 
 

[jira] [Updated] (CASSANDRA-4321) stackoverflow building interval tree & possible sstable corruptions

2012-06-15 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-4321:


Attachment: (was: 
0001-Change-Range-Bounds-in-LeveledManifest.overlapping-v3.txt)

 stackoverflow building interval tree & possible sstable corruptions
 ---

 Key: CASSANDRA-4321
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4321
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.1
Reporter: Anton Winter
Assignee: Sylvain Lebresne
 Fix For: 1.1.2

 Attachments: 
 0001-Change-Range-Bounds-in-LeveledManifest.overlapping-v3.txt, 
 0002-Scrub-detects-and-repair-outOfOrder-rows-v3.txt, 
 0003-Create-standalone-scrub-v3.txt, ooyala-hastur-stacktrace.txt



[jira] [Updated] (CASSANDRA-4321) stackoverflow building interval tree & possible sstable corruptions

2012-06-15 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-4321:


Attachment: (was: 
0001-Change-Range-Bounds-in-LeveledManifest.overlapping.txt)

 stackoverflow building interval tree & possible sstable corruptions
 ---

 Key: CASSANDRA-4321
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4321
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.1
Reporter: Anton Winter
Assignee: Sylvain Lebresne
 Fix For: 1.1.2

 Attachments: 
 0001-Change-Range-Bounds-in-LeveledManifest.overlapping-v3.txt, 
 0002-Scrub-detects-and-repair-outOfOrder-rows-v3.txt, 
 0003-Create-standalone-scrub-v3.txt, ooyala-hastur-stacktrace.txt



[jira] [Updated] (CASSANDRA-4321) stackoverflow building interval tree & possible sstable corruptions

2012-06-15 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-4321:


Attachment: 0003-Create-standalone-scrub-v3.txt
0002-Scrub-detects-and-repair-outOfOrder-rows-v3.txt
0001-Change-Range-Bounds-in-LeveledManifest.overlapping-v3.txt

bq. Tried the patch but the server still doesn't start.

Right. So the problem is, as you noticed, that there is really no way to start 
the server and have it load a broken sstable, which means there is no way to 
run scrub on it. Even without assertions, we rely on interval trees, which 
break if the sstable's first key is not before its last one.

After having looked a bit more closely at that problem, I think the cleaner way 
to solve this is to provide a way to run scrub offline, which allows us to skip 
the interval trees. So I'm attaching a 3rd patch that provides that. It adds a 
new binary 'sstablescrub' that takes a keyspace name and a column family name 
as arguments and scrubs the relevant sstables, and does so without breaking if 
the sstables have some out-of-order keys. I kind of think that having an 
offline scrub is not a bad idea anyway.

With that, you should be able to stop the node, run 'sstablescrub ksname 
cfname', restart the node, and you should be good to go.
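For illustration, the ordering invariant the interval tree depends on, and that scrub has to detect, boils down to every row key being at least its predecessor. A hypothetical sketch of that check (illustrative names, not the actual sstablescrub code):

```java
// Hypothetical out-of-order detection over an sstable's row keys: the
// interval tree assumes sorted keys, so any row sorting before its
// predecessor marks corruption that scrub would need to repair.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class KeyOrderCheck
{
    /** Returns the indexes of rows that sort before their predecessor. */
    public static List<Integer> outOfOrderRows(List<String> keys)
    {
        List<Integer> bad = new ArrayList<>();
        for (int i = 1; i < keys.size(); i++)
            if (keys.get(i).compareTo(keys.get(i - 1)) < 0)
                bad.add(i);
        return bad;
    }

    public static void main(String[] args)
    {
        // "b" at index 2 sorts before its predecessor "c": broken ordering.
        System.out.println(outOfOrderRows(Arrays.asList("a", "c", "b", "d"))); // prints [2]
    }
}
```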


 stackoverflow building interval tree &amp; possible sstable corruptions
 ---

 Key: CASSANDRA-4321
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4321
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.1
Reporter: Anton Winter
Assignee: Sylvain Lebresne
 Fix For: 1.1.2

 Attachments: 
 0001-Change-Range-Bounds-in-LeveledManifest.overlapping-v3.txt, 
 0002-Scrub-detects-and-repair-outOfOrder-rows-v3.txt, 
 0003-Create-standalone-scrub-v3.txt, ooyala-hastur-stacktrace.txt


 After upgrading to 1.1.1 (from 1.1.0) I have started experiencing 
 StackOverflowErrors resulting in compaction backlog and failure to restart. 
 The ring currently consists of 6 DCs and 22 nodes using LCS &amp; compression.  
 This issue was first noted on 2 nodes in one DC and then appears to have 
 spread to various other nodes in the other DCs.  
 When the first occurrence of this was found I restarted the instance but it 
 failed to start so I cleared its data and treated it as a replacement node 
 for the token it was previously responsible for.  This node successfully 
 streamed all the relevant data back but failed again a number of hours later 
 with the same StackOverflowError and again was unable to restart. 
 The initial stack overflow error on a running instance looks like this:
 ERROR [CompactionExecutor:314] 2012-06-07 09:59:43,017 
 AbstractCassandraDaemon.java (line 134) Exception in thread 
 Thread[CompactionExecutor:314,1,main]
 java.lang.StackOverflowError
 at java.util.Arrays.mergeSort(Arrays.java:1157)
 at java.util.Arrays.sort(Arrays.java:1092)
 at java.util.Collections.sort(Collections.java:134)
 at org.apache.cassandra.utils.IntervalTree.IntervalNode.findMinMedianMax(IntervalNode.java:114)
 at org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:49)
 at org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:62)
 at org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:62)
 at org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:62)
 [snip - this repeats until stack overflow.  Compactions stop from this point onwards]
 I restarted this failing instance with DEBUG logging enabled and it throws 
 the following exception part way through startup:
 ERROR 11:37:51,046 Exception in thread Thread[OptionalTasks:1,5,main]
 java.lang.StackOverflowError
 at org.slf4j.helpers.MessageFormatter.safeObjectAppend(MessageFormatter.java:307)
 at org.slf4j.helpers.MessageFormatter.deeplyAppendParameter(MessageFormatter.java:276)
 at org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:230)
 at org.slf4j.helpers.MessageFormatter.format(MessageFormatter.java:124)
 at org.slf4j.impl.Log4jLoggerAdapter.debug(Log4jLoggerAdapter.java:228)
 at org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:45)
 at org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:62)
 at org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:62)
 [snip - this repeats until stack overflow]
 at org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:62)

[jira] [Updated] (CASSANDRA-4321) stackoverflow building interval tree possible sstable corruptions

2012-06-15 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-4321:


Attachment: (was: 0002-Scrub-detects-and-repair-outOfOrder-rows.txt)


[jira] [Commented] (CASSANDRA-4304) Add bytes-limit clause to queries

2012-06-15 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13295722#comment-13295722
 ] 

Brandon Williams commented on CASSANDRA-4304:
-

I think I do like the idea of limiting by bytes instead of count, as 
CASSANDRA-3911 does.  However, I think that ticket has the right approach in 
that it should be the operator that defines that limit, not clients, since they 
will still have the ability to abuse it and OOM the server, and OOM is the 
operator's problem.

 Add bytes-limit clause to queries
 -

 Key: CASSANDRA-4304
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4304
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Reporter: Christian Spriegel
 Fix For: 1.2

 Attachments: TestImplForSlices.patch


 Idea is to add a second limit clause to (slice)queries. This would allow easy 
 loading of batches, even if content is variable sized.
 Imagine the following use case:
 You want to load a batch of XMLs, where each is between 100 bytes and 5 MB 
 in size.
 Currently you can load either
 - a large number of XMLs, but risk OOMs or timeouts
 or
 - a small number of XMLs, and do too many queries where each query usually 
 retrieves very little data.
 With Cassandra being able to limit by size and not just count, we could do a 
 single query which would never OOM but always return a decent amount of data 
 -- with no extra overhead for multiple queries.
 Few thoughts from my side:
 - The limit should be a soft limit, not a hard limit. Therefore it will 
 always return at least one row/column, even if that one is larger than the 
 limit specifies.
 - HintedHandoffManager:303 is already doing a 
 InMemoryCompactionLimit/averageColumnSize to avoid OOM. It could then simply 
 use the new limit clause :-)
 - A bytes-limit on a range- or indexed-query should always return a complete 
 row
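The soft-limit semantics proposed above can be sketched as follows. This is a hypothetical illustration, not Cassandra code: plain integers stand in for serialized column sizes, and the class and method names are invented. The key property is that the limit only stops accumulation once at least one element has been included, so a single oversized column is still returned.

```java
import java.util.ArrayList;
import java.util.List;

public class ByteLimitedSlice
{
    // columnSizes stand in for the serialized sizes of successive columns;
    // returns the prefix of columns that fits within byteLimit, but always
    // at least one column (the "soft" part of the limit).
    public static List<Integer> sliceByBytes(List<Integer> columnSizes, long byteLimit)
    {
        List<Integer> result = new ArrayList<>();
        long accumulated = 0;
        for (int size : columnSizes)
        {
            // Only enforce the limit once at least one column is included
            if (!result.isEmpty() && accumulated + size > byteLimit)
                break;
            result.add(size);
            accumulated += size;
        }
        return result;
    }
}
```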

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (CASSANDRA-4345) New (bootstrapping) Supplant Healthy Nodes

2012-06-15 Thread Benjamin Coverston (JIRA)
Benjamin Coverston created CASSANDRA-4345:
-

 Summary: New (bootstrapping) Supplant Healthy Nodes
 Key: CASSANDRA-4345
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4345
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.0.9
Reporter: Benjamin Coverston


Copied a config from an existing node and fired up a new node, which happily 
inserted itself at token 0 of a running ring. The surprising and worrisome part 
is that EVERY node started throwing:

java.lang.NullPointerException
ERROR [RPC-Thread:205459] 2012-06-14 19:16:31,948 Cassandra.java (line 3041) 
Internal error processing get_slice
java.lang.NullPointerException
ERROR [RPC-Thread:205427] 2012-06-14 19:16:31,949 Cassandra.java (line 3041) 
Internal error processing get_slice
java.lang.NullPointerException
ERROR [RPC-Thread:205459] 2012-06-14 19:16:31,952 Cassandra.java (line 3041) 
Internal error processing get_slice
java.lang.NullPointerException

---

Resulting in:
INFO [GossipStage:1] 2012-06-14 18:24:37,472 Gossiper.java (line 838) Node 
/192.168.88.48 is now part of the cluster
 INFO [GossipStage:1] 2012-06-14 18:24:37,473 Gossiper.java (line 804) 
InetAddress /192.168.88.48 is now UP
 INFO [GossipStage:1] 2012-06-14 18:24:37,473 StorageService.java (line 1008) 
Nodes /192.168.88.48 and /192.168.88.70 have the same token 0.  /192.168.88.48 is the new owner
 WARN [GossipStage:1] 2012-06-14 18:24:37,474 TokenMetadata.java (line 135) 
Token 0 changing ownership from /192.168.88.70 to /192.168.88.48
 INFO [GossipStage:1] 2012-06-14 18:24:37,475 ColumnFamilyStore.java (line 705) 
Enqueuing flush of Memtable-LocationInfo@961917618(20/25 serialized/live 
bytes, 1 ops)
 INFO [FlushWriter:1272] 2012-06-14 18:24:37,475 Memtable.java (line 246) 
Writing Memtable-LocationInfo@961917618(20/25 serialized/live bytes, 1 ops)
 INFO [FlushWriter:1272] 2012-06-14 18:24:37,492 Memtable.java (line 283) 
Completed flushing /cass_ssd/system/LocationInfo-hc-23-Data.db (74 bytes)
ERROR [RPC-Thread:200943] 2012-06-14 18:24:38,007 Cassandra.java (line 3041) 
Internal error processing get_slice
java.lang.NullPointerException
at org.apache.cassandra.locator.PropertyFileSnitch.getDatacenter(PropertyFileSnitch.java:104)
at org.apache.cassandra.locator.DynamicEndpointSnitch.getDatacenter(DynamicEndpointSnitch.java:122)
at org.apache.cassandra.locator.NetworkTopologyStrategy.calculateNaturalEndpoints(NetworkTopologyStrategy.java:93)
at org.apache.cassandra.locator.AbstractReplicationStrategy.getNaturalEndpoints(AbstractReplicationStrategy.java:100)
at org.apache.cassandra.service.StorageService.getLiveNaturalEndpoints(StorageService.java:2002)
at org.apache.cassandra.service.StorageService.getLiveNaturalEndpoints(StorageService.java:1996)
at org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:604)
at org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:564)
at org.apache.cassandra.thrift.CassandraServer.readColumnFamily(CassandraServer.java:128)
at org.apache.cassandra.thrift.CassandraServer.getSlice(CassandraServer.java:283)
at org.apache.cassandra.thrift.CassandraServer.multigetSliceInternal(CassandraServer.java:365)
at org.apache.cassandra.thrift.CassandraServer.get_slice(CassandraServer.java:326)
at org.apache.cassandra.thrift.Cassandra$Processor$get_slice.process(Cassandra.java:3033)
at org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2889)
at org.apache.thrift.server.TNonblockingServer$FrameBuffer.invoke(TNonblockingServer.java:631)
at org.apache.cassandra.thrift.CustomTHsHaServer$Invocation.run(CustomTHsHaServer.java:105)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)







[jira] [Commented] (CASSANDRA-4345) New (bootstrapping) Supplant Healthy Nodes

2012-06-15 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13295737#comment-13295737
 ] 

Jonathan Ellis commented on CASSANDRA-4345:
---

While we could improve the error message, PFS shouldn't just Make Something Up 
when given an unknown node to replicate to.
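The failure mode Jonathan describes can be sketched like this. It is a hypothetical stand-in, not PropertyFileSnitch's actual code: a plain map replaces the snitch's internal topology state, and the point is simply that an unknown endpoint should produce a descriptive error at the lookup site rather than a null that surfaces later as the NPE in the trace above.

```java
import java.util.Map;

public class SnitchLookup
{
    // endpointToDc stands in for the parsed cassandra-topology.properties mapping.
    public static String getDatacenter(Map<String, String> endpointToDc, String endpoint)
    {
        String dc = endpointToDc.get(endpoint);
        // Fail loudly with context instead of returning null (the NPE source)
        // or inventing a default datacenter for a node we know nothing about.
        if (dc == null)
            throw new IllegalStateException("Endpoint " + endpoint + " is not declared in the topology file");
        return dc;
    }
}
```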

 New (bootstrapping) Supplant Healthy Nodes
 --

 Key: CASSANDRA-4345
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4345
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.0.9
Reporter: Benjamin Coverston






[jira] [Commented] (CASSANDRA-4304) Add bytes-limit clause to queries

2012-06-15 Thread Christian Spriegel (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13295743#comment-13295743
 ] 

Christian Spriegel commented on CASSANDRA-4304:
---

Brandon, thank you for your feedback. I also see the need for these 
operator-limits. But I think they should be implemented in addition to 
client-specified limits as proposed by me.

Here is why:
# An operator-limit should throw an exception if too much data is loaded (maybe 
not an exception but some kind of flag in the result). If the server would 
silently reduce the amount of results, then the client would not know if there 
simply is no more data or if it was limited due to size. Think of some client 
asking for fixed-size batches for some processing - the operator would silently 
break the application by turning on the size limit.
# More important (to me): I have different queries that expect different 
batch-sizes. Therefore I need the application to be able to control the result 
size. For example: mobile devices need smaller batches than a backend system 
that calls our middleware.

Is there any reason not to have a client-limit? I agree that adding another 
limit parameter does not look nice. In thrift we could reuse the existing 
limit parameter and use the negative value range for byte limits :-). In 
cql/cli a new keyword might be nicer though.

... but I digress. Any thoughts?

I don't know if it helps, but I would be willing to contribute.
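The thrift-compatibility idea above (reusing the negative value range of the existing signed count for byte limits) could look roughly like this. The class and method names are hypothetical and not part of the Thrift interface; this only shows the decoding convention.

```java
public class SliceLimit
{
    public final boolean byBytes; // false: limit counts columns; true: limit counts bytes
    public final long limit;

    private SliceLimit(boolean byBytes, long limit)
    {
        this.byBytes = byBytes;
        this.limit = limit;
    }

    // Reuse the existing signed thrift count field: non-negative values keep
    // their current meaning (column count), negative values encode a byte
    // limit of magnitude -count. Cast to long before negating so that
    // Integer.MIN_VALUE decodes correctly.
    public static SliceLimit fromThriftCount(int count)
    {
        return count >= 0 ? new SliceLimit(false, count)
                          : new SliceLimit(true, -(long) count);
    }
}
```

One drawback of this encoding is that it overloads a field's sign with a second meaning, which is easy to misuse; as noted above, a dedicated keyword would be cleaner in CQL/CLI.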






[jira] [Commented] (CASSANDRA-4338) Experiment with direct buffer in SequentialWriter

2012-06-15 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13295771#comment-13295771
 ] 

Jonathan Ellis commented on CASSANDRA-4338:
---

Using direct buffers for RAR and CRAR may also help avoid heap fragmentation.

 Experiment with direct buffer in SequentialWriter
 -

 Key: CASSANDRA-4338
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4338
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Jonathan Ellis
Assignee: Yuki Morishita
Priority: Minor
 Fix For: 1.2


 Using a direct buffer instead of a heap-based byte[] should let us avoid a 
 copy into native memory when we flush the buffer.
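A minimal illustration of the idea, not SequentialWriter's actual implementation: writes accumulate in a direct ByteBuffer, and flushing hands that buffer straight to the FileChannel, which can transfer from native memory without the extra heap-to-native copy a heap byte[] requires. The class name and buffer size are assumptions for the sketch.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class DirectBufferWriter implements AutoCloseable
{
    private final FileChannel channel;
    private final ByteBuffer buffer;

    public DirectBufferWriter(Path path, int bufferSize) throws IOException
    {
        channel = FileChannel.open(path, StandardOpenOption.CREATE, StandardOpenOption.WRITE);
        buffer = ByteBuffer.allocateDirect(bufferSize); // native memory, not heap
    }

    public void write(byte[] data) throws IOException
    {
        int offset = 0;
        while (offset < data.length)
        {
            int n = Math.min(buffer.remaining(), data.length - offset);
            buffer.put(data, offset, n);
            offset += n;
            if (!buffer.hasRemaining())
                flush(); // buffer full: push it to the channel
        }
    }

    public void flush() throws IOException
    {
        buffer.flip();
        while (buffer.hasRemaining())
            channel.write(buffer); // direct buffer: no intermediate heap copy
        buffer.clear();
    }

    @Override
    public void close() throws IOException
    {
        flush();
        channel.close();
    }
}
```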





[1/5] git commit: Merge branch 'cassandra-1.1' into trunk

2012-06-15 Thread jbellis
Updated Branches:
  refs/heads/cassandra-1.1 a4fab90af -> f74ed12a7
  refs/heads/trunk 8cd09 -> 7398e9363


Merge branch 'cassandra-1.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7398e936
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7398e936
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7398e936

Branch: refs/heads/trunk
Commit: 7398e9363ae99220d195d30eea3f3479485618f9
Parents: 8cd f74ed12
Author: Jonathan Ellis jbel...@apache.org
Authored: Fri Jun 15 12:27:40 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Jun 15 12:27:40 2012 -0500

--
 CHANGES.txt|1 +
 .../cassandra/db/compaction/LeveledManifest.java   |   60 ++--
 .../cassandra/db/compaction/CompactionsTest.java   |  114 +++---
 3 files changed, 103 insertions(+), 72 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7398e936/CHANGES.txt
--
diff --cc CHANGES.txt
index 2eb1b57,693b03b..7064eb0
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,28 -1,5 +1,29 @@@
 +1.2-dev
 + * update MS protocol with a version handshake + broadcast address id
 +   (CASSANDRA-4311)
 + * multithreaded hint replay (CASSANDRA-4189)
 + * add inter-node message compression (CASSANDRA-3127)
 + * enforce 1m min keycache for auto (CASSANDRA-4306)
 + * remove COPP (CASSANDRA-2479)
 + * Track tombstone expiration and compact when tombstone content is
 +   higher than a configurable threshold, default 20% (CASSANDRA-3442)
 + * update MurmurHash to version 3 (CASSANDRA-2975)
 + * (CLI) track elapsed time for `delete' operation (CASSANDRA-4060)
 + * (CLI) jline version is bumped to 1.0 to properly  support
 +   'delete' key function (CASSANDRA-4132)
 + * Save IndexSummary into new SSTable 'Summary' component (CASSANDRA-2392)
 + * Add support for range tombstones (CASSANDRA-3708)
 + * Improve MessagingService efficiency (CASSANDRA-3617)
 + * Avoid ID conflicts from concurrent schema changes (CASSANDRA-3794)
 + * Set thrift HSHA server thread limit to unlimited by default (CASSANDRA-4277)
 + * Avoids double serialization of CF id in RowMutation messages
 +   (CASSANDRA-4293)
 + * fix Summary component and caches to use correct partitioner 
(CASSANDRA-4289)
 + * stream compressed sstables directly with java nio (CASSANDRA-4297)
 +
 +
  1.1.2
+  * fix bug in sstable blacklisting with LCS (CASSANDRA-4343)
   * LCS no longer promotes tiny sstables out of L0 (CASSANDRA-4341)
   * skip tombstones during hint replay (CASSANDRA-4320)
   * fix NPE in compactionstats (CASSANDRA-4318)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7398e936/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7398e936/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java
--
diff --cc test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java
index 1476b4a,4f87c86..2b134d1
--- a/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java
+++ b/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java
@@@ -85,69 -87,20 +85,69 @@@ public class CompactionsTest extends Sc
  rm.apply();
  inserted.add(key);
  }
- store.forceBlockingFlush();
- assertMaxTimestamp(store, maxTimestampExpected);
- assertEquals(inserted.toString(), inserted.size(), 
Util.getRangeSlice(store).size());
+ cfs.forceBlockingFlush();
+ assertMaxTimestamp(cfs, maxTimestampExpected);
+ assertEquals(inserted.toString(), inserted.size(), 
Util.getRangeSlice(cfs).size());
  }
  
- forceCompactions(store);
+ forceCompactions(cfs);
  
- assertEquals(inserted.size(), Util.getRangeSlice(store).size());
+ assertEquals(inserted.size(), Util.getRangeSlice(cfs).size());
  
  // make sure max timestamp of compacted sstables is recorded properly 
after compaction.
- assertMaxTimestamp(store, maxTimestampExpected);
- store.truncate();
+ assertMaxTimestamp(cfs, maxTimestampExpected);
+ cfs.truncate();
  }
  
 +/**
 + * Test to see if sstable has enough expired columns, it is compacted 
itself.
 + */
 +@Test
 +public void testSingleSSTableCompaction() throws Exception
 +{
 +Table table = Table.open(TABLE1);
 +ColumnFamilyStore store = table.getColumnFamilyStore("Standard1");
 +store.clearUnsafe();
 +store.metadata.gcGraceSeconds(1);
 +

[2/5] git commit: fix bug in sstable blacklisting with LCS patch by jbellis; reviewed by yukim for CASSANDRA-4343

2012-06-15 Thread jbellis
fix bug in sstable blacklisting with LCS
patch by jbellis; reviewed by yukim for CASSANDRA-4343


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f74ed12a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f74ed12a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f74ed12a

Branch: refs/heads/cassandra-1.1
Commit: f74ed12a70ba0e59efef20a2207ef6bf4df0ae04
Parents: 55d5c04
Author: Jonathan Ellis jbel...@apache.org
Authored: Thu Jun 14 20:09:21 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Jun 15 12:25:59 2012 -0500

--
 CHANGES.txt|1 +
 .../cassandra/db/compaction/LeveledManifest.java   |   60 +++
 2 files changed, 46 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f74ed12a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index b0e667d..693b03b 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 1.1.2
+ * fix bug in sstable blacklisting with LCS (CASSANDRA-4343)
  * LCS no longer promotes tiny sstables out of L0 (CASSANDRA-4341)
  * skip tombstones during hint replay (CASSANDRA-4320)
  * fix NPE in compactionstats (CASSANDRA-4318)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f74ed12a/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java 
b/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
index 4ed5fac..a53d519 100644
--- a/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
+++ b/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
@@ -277,15 +277,10 @@ public class LeveledManifest
 if (score > 1.001 || (i == 0 && sstables.size() > 1))
 {
 Collection<SSTableReader> candidates = getCandidatesFor(i);
-
 if (logger.isDebugEnabled())
 logger.debug("Compaction candidates for L{} are {}", i, toString(candidates));
-
-// check if have any SSTables marked as suspected,
-// saves us filter time when no SSTables are suspects
-return hasSuspectSSTables(candidates)
-? filterSuspectSSTables(candidates)
-: candidates;
+if (!candidates.isEmpty())
+return candidates;
 }
 }
 
@@ -386,6 +381,10 @@ public class LeveledManifest
 // 2. At most MAX_COMPACTING_L0 sstables will be compacted at once
 // 3. If total candidate size is less than maxSSTableSizeInMB, we 
won't bother compacting with L1,
 //and the result of the compaction will stay in L0 instead of 
being promoted (see promote())
+//
+// Note that we ignore suspect-ness of L1 sstables here, since if 
an L1 sstable is suspect we're
+// basically screwed, since we expect all or most L0 sstables to 
overlap with each L1 sstable.
+// So if an L1 sstable is suspect we can't do much besides try 
anyway and hope for the best.
 Set<SSTableReader> candidates = new HashSet<SSTableReader>();
 Set<SSTableReader> remaining = new HashSet<SSTableReader>(generations[0]);
 List<SSTableReader> ageSortedSSTables = new ArrayList<SSTableReader>(generations[0]);
@@ -395,9 +394,14 @@ public class LeveledManifest
 if (candidates.contains(sstable))
 continue;
 
-List<SSTableReader> newCandidates = overlapping(sstable, remaining);
-candidates.addAll(newCandidates);
-remaining.removeAll(newCandidates);
+for (SSTableReader newCandidate : overlapping(sstable, 
remaining))
+{
+if (!newCandidate.isMarkedSuspect())
+{
+candidates.add(newCandidate);
+remaining.remove(newCandidate);
+}
+}
 
 if (candidates.size() > MAX_COMPACTING_L0)
 {
@@ -421,14 +425,40 @@ public class LeveledManifest
 
 // for non-L0 compactions, pick up where we left off last time
 Collections.sort(generations[level], SSTable.sstableComparator);
-for (SSTableReader sstable : generations[level])
+int start = 0; // handles case where the prior compaction touched the 
very last range
+for (int i = 0; i  generations[level].size(); i++)
 {
-// the first sstable that is > the marked
+SSTableReader sstable = generations[level].get(i);
 

[3/5] git commit: fix bug in sstable blacklisting with LCS patch by jbellis; reviewed by yukim for CASSANDRA-4343

2012-06-15 Thread jbellis
fix bug in sstable blacklisting with LCS
patch by jbellis; reviewed by yukim for CASSANDRA-4343


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f74ed12a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f74ed12a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f74ed12a

Branch: refs/heads/trunk
Commit: f74ed12a70ba0e59efef20a2207ef6bf4df0ae04
Parents: 55d5c04
Author: Jonathan Ellis jbel...@apache.org
Authored: Thu Jun 14 20:09:21 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Fri Jun 15 12:25:59 2012 -0500

--
 CHANGES.txt|1 +
 .../cassandra/db/compaction/LeveledManifest.java   |   60 +++
 2 files changed, 46 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/f74ed12a/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index b0e667d..693b03b 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 1.1.2
+ * fix bug in sstable blacklisting with LCS (CASSANDRA-4343)
  * LCS no longer promotes tiny sstables out of L0 (CASSANDRA-4341)
  * skip tombstones during hint replay (CASSANDRA-4320)
  * fix NPE in compactionstats (CASSANDRA-4318)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f74ed12a/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java 
b/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
index 4ed5fac..a53d519 100644
--- a/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
+++ b/src/java/org/apache/cassandra/db/compaction/LeveledManifest.java
@@ -277,15 +277,10 @@ public class LeveledManifest
 if (score > 1.001 || (i == 0 && sstables.size() > 1))
 {
 Collection<SSTableReader> candidates = getCandidatesFor(i);
-
 if (logger.isDebugEnabled())
 logger.debug("Compaction candidates for L{} are {}", i, toString(candidates));
-
-// check if have any SSTables marked as suspected,
-// saves us filter time when no SSTables are suspects
-return hasSuspectSSTables(candidates)
-? filterSuspectSSTables(candidates)
-: candidates;
+if (!candidates.isEmpty())
+return candidates;
 }
 }
 
@@ -386,6 +381,10 @@ public class LeveledManifest
 // 2. At most MAX_COMPACTING_L0 sstables will be compacted at once
 // 3. If total candidate size is less than maxSSTableSizeInMB, we won't bother compacting with L1,
 //    and the result of the compaction will stay in L0 instead of being promoted (see promote())
+//
+// Note that we ignore suspect-ness of L1 sstables here, since if an L1 sstable is suspect we're
+// basically screwed, since we expect all or most L0 sstables to overlap with each L1 sstable.
+// So if an L1 sstable is suspect we can't do much besides try anyway and hope for the best.
 Set<SSTableReader> candidates = new HashSet<SSTableReader>();
 Set<SSTableReader> remaining = new HashSet<SSTableReader>(generations[0]);
 List<SSTableReader> ageSortedSSTables = new ArrayList<SSTableReader>(generations[0]);
@@ -395,9 +394,14 @@ public class LeveledManifest
 if (candidates.contains(sstable))
 continue;
 
-List<SSTableReader> newCandidates = overlapping(sstable, remaining);
-candidates.addAll(newCandidates);
-remaining.removeAll(newCandidates);
+for (SSTableReader newCandidate : overlapping(sstable, remaining))
+{
+    if (!newCandidate.isMarkedSuspect())
+    {
+        candidates.add(newCandidate);
+        remaining.remove(newCandidate);
+    }
+}
 
 if (candidates.size() > MAX_COMPACTING_L0)
 {
@@ -421,14 +425,40 @@ public class LeveledManifest
 
 // for non-L0 compactions, pick up where we left off last time
 Collections.sort(generations[level], SSTable.sstableComparator);
-for (SSTableReader sstable : generations[level])
+int start = 0; // handles case where the prior compaction touched the very last range
+for (int i = 0; i < generations[level].size(); i++)
 {
-// the first sstable that is > than the marked
+SSTableReader sstable = generations[level].get(i);
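The suspect-skipping change in the hunks above can be sketched as a small standalone program. This is a hedged illustration only: the `SSTable` class and `pickCandidates` method below are simplified stand-ins, not Cassandra's actual `SSTableReader` API, but they show the same idea — drop suspect sstables while gathering overlapping L0 candidates rather than filtering the whole candidate set afterwards.

```java
import java.util.*;

public class SuspectFilterSketch
{
    // Minimal stand-in for an sstable with a "marked suspect" flag
    static final class SSTable
    {
        final String name;
        final boolean markedSuspect;
        SSTable(String name, boolean markedSuspect)
        {
            this.name = name;
            this.markedSuspect = markedSuspect;
        }
    }

    /** Add each non-suspect overlapping sstable to candidates and drop it from remaining. */
    static Set<SSTable> pickCandidates(Collection<SSTable> overlapping, Set<SSTable> remaining)
    {
        Set<SSTable> candidates = new HashSet<>();
        for (SSTable t : overlapping)
        {
            if (!t.markedSuspect)
            {
                candidates.add(t);
                remaining.remove(t);
            }
        }
        return candidates;
    }

    public static void main(String[] args)
    {
        SSTable a = new SSTable("a", false), b = new SSTable("b", true), c = new SSTable("c", false);
        Set<SSTable> remaining = new HashSet<>(Arrays.asList(a, b, c));
        Set<SSTable> candidates = pickCandidates(Arrays.asList(a, b, c), remaining);
        // suspect "b" is never promoted to a candidate and stays in remaining
        System.out.println(candidates.size() + " " + remaining.size()); // prints "2 1"
    }
}
```

The design point of the fix is that excluding suspect sstables at selection time keeps the candidate set and the remaining set consistent in one pass, instead of post-filtering a set that was built assuming every overlap would compact.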
 

[4/5] git commit: rename store -> cfs

2012-06-15 Thread jbellis
rename store -> cfs


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/55d5c041
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/55d5c041
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/55d5c041

Branch: refs/heads/trunk
Commit: 55d5c041382a3185387def648f6a7c7a76847f75
Parents: a4fab90
Author: Jonathan Ellis jbel...@apache.org
Authored: Thu Jun 14 18:40:05 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Thu Jun 14 18:40:05 2012 -0500

--
 .../cassandra/db/compaction/CompactionsTest.java   |  114 +++---
 1 files changed, 57 insertions(+), 57 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/55d5c041/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java 
b/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java
index 3916669..4f87c86 100644
--- a/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java
+++ b/test/unit/org/apache/cassandra/db/compaction/CompactionsTest.java
@@ -65,13 +65,13 @@ public class CompactionsTest extends SchemaLoader
 {
 // this test does enough rows to force multiple block indexes to be used
 Table table = Table.open(TABLE1);
-ColumnFamilyStore store = table.getColumnFamilyStore("Standard1");
+ColumnFamilyStore cfs = table.getColumnFamilyStore("Standard1");

 final int ROWS_PER_SSTABLE = 10;
 final int SSTABLES = DatabaseDescriptor.getIndexInterval() * 3 / ROWS_PER_SSTABLE;

 // disable compaction while flushing
-store.disableAutoCompaction();
+cfs.disableAutoCompaction();

 long maxTimestampExpected = Long.MIN_VALUE;
 Set<DecoratedKey> inserted = new HashSet<DecoratedKey>();
@@ -87,18 +87,18 @@ public class CompactionsTest extends SchemaLoader
 rm.apply();
 inserted.add(key);
 }
-store.forceBlockingFlush();
-assertMaxTimestamp(store, maxTimestampExpected);
-assertEquals(inserted.toString(), inserted.size(), Util.getRangeSlice(store).size());
+cfs.forceBlockingFlush();
+assertMaxTimestamp(cfs, maxTimestampExpected);
+assertEquals(inserted.toString(), inserted.size(), Util.getRangeSlice(cfs).size());
 }

-forceCompactions(store);
+forceCompactions(cfs);

-assertEquals(inserted.size(), Util.getRangeSlice(store).size());
+assertEquals(inserted.size(), Util.getRangeSlice(cfs).size());

 // make sure max timestamp of compacted sstables is recorded properly after compaction.
-assertMaxTimestamp(store, maxTimestampExpected);
-store.truncate();
+assertMaxTimestamp(cfs, maxTimestampExpected);
+cfs.truncate();
 }
 
 
@@ -106,13 +106,13 @@ public class CompactionsTest extends SchemaLoader
 public void testSuperColumnCompactions() throws IOException, ExecutionException, InterruptedException
 {
 Table table = Table.open(TABLE1);
-ColumnFamilyStore store = table.getColumnFamilyStore("Super1");
+ColumnFamilyStore cfs = table.getColumnFamilyStore("Super1");

 final int ROWS_PER_SSTABLE = 10;
 final int SSTABLES = DatabaseDescriptor.getIndexInterval() * 3 / ROWS_PER_SSTABLE;

 //disable compaction while flushing
-store.disableAutoCompaction();
+cfs.disableAutoCompaction();

 long maxTimestampExpected = Long.MIN_VALUE;
 Set<DecoratedKey> inserted = new HashSet<DecoratedKey>();
@@ -131,47 +131,47 @@ public class CompactionsTest extends SchemaLoader
 rm.apply();
 inserted.add(key);
 }
-store.forceBlockingFlush();
-assertMaxTimestamp(store, maxTimestampExpected);
-assertEquals(inserted.toString(), inserted.size(), Util.getRangeSlice(store, superColumn).size());
+cfs.forceBlockingFlush();
+assertMaxTimestamp(cfs, maxTimestampExpected);
+assertEquals(inserted.toString(), inserted.size(), Util.getRangeSlice(cfs, superColumn).size());
 }

-forceCompactions(store);
+forceCompactions(cfs);

-assertEquals(inserted.size(), Util.getRangeSlice(store, superColumn).size());
+assertEquals(inserted.size(), Util.getRangeSlice(cfs, superColumn).size());

 // make sure max timestamp of compacted sstables is recorded properly after compaction.
-assertMaxTimestamp(store, maxTimestampExpected);
+assertMaxTimestamp(cfs, maxTimestampExpected);
 }
 
-public void assertMaxTimestamp(ColumnFamilyStore 

[5/5] git commit: rename store -> cfs

2012-06-15 Thread jbellis
rename store -> cfs


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/55d5c041
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/55d5c041
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/55d5c041

Branch: refs/heads/cassandra-1.1
Commit: 55d5c041382a3185387def648f6a7c7a76847f75
Parents: a4fab90
Author: Jonathan Ellis jbel...@apache.org
Authored: Thu Jun 14 18:40:05 2012 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Thu Jun 14 18:40:05 2012 -0500

--
 .../cassandra/db/compaction/CompactionsTest.java   |  114 +++---
 1 files changed, 57 insertions(+), 57 deletions(-)
--



[jira] [Commented] (CASSANDRA-3647) Support set and map value types in CQL

2012-06-15 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13295810#comment-13295810
 ] 

Sylvain Lebresne commented on CASSANDRA-3647:
-

I've rebased and changed the syntax to match what's above. The new branch is at 
https://github.com/pcmanus/cassandra/commits/3647-2.

To sum up the new syntax:
* lists:
{noformat}
UPDATE L = L + [ 'a' , 'b' ] WHERE ... // Appends to list
UPDATE L = [ 'a' , 'b' ] + L WHERE ... // Prepends to list
UPDATE L[1] = 'c' WHERE ...// Sets by idx
UPDATE L = L - [ 'a', 'b' ] WHERE ...  // Remove all occurrences of values 'a' and 'b' from list
DELETE L[1] WHERE ...  // Deletes by idx
{noformat}
* sets:
{noformat}
UPDATE S = S + { 'a', 'b' } WHERE ... // Adds to set
UPDATE S = S - { 'a', 'b' } WHERE ... // Remove values 'a' and 'b' from set
{noformat}
* maps:
{noformat}
UPDATE M['a'] = 'c' WHERE ...  // Put key,value
UPDATE M = M + { 'a' : 'c' } WHERE ... // Equivalent to previous
DELETE M['a'] WHERE ...// Remove value for key 'a'
{noformat}

A few remarks:
* We could rename list -> array. I figured one reason to keep list could be to 
emphasize that there is no predefined size. But I'm good with array.
* There is no support, for maps, of
{noformat}
UPDATE M = M - { 'a' : 'c' } WHERE ...
{noformat}
or some other syntax to remove an element of a map by value. The reason is that 
I don't think we can implement that correctly due to concurrency.
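The concurrency objection to map remove-by-value can be made concrete with a toy model. The sketch below is an assumption-laden illustration, not Cassandra's storage code: it models each map entry as an independent cell resolved by last-write-wins timestamps. Puts and deletes-by-key are blind writes, but delete-by-value needs a read-before-write, so a concurrent put can slip in between the read and the delete.

```java
import java.util.*;

public class MapCellSketch
{
    // Toy last-write-wins cell: a value plus its write timestamp
    static final class Cell
    {
        final String value;
        final long timestamp;
        Cell(String value, long timestamp) { this.value = value; this.timestamp = timestamp; }
    }

    final Map<String, Cell> cells = new HashMap<>();

    /** Blind write: no read required, the later timestamp simply wins. */
    void put(String key, String value, long timestamp)
    {
        Cell existing = cells.get(key);
        if (existing == null || timestamp > existing.timestamp)
            cells.put(key, new Cell(value, timestamp));
    }

    /** Read step of a hypothetical delete-by-value: find keys holding the value. */
    List<String> keysHolding(String value)
    {
        List<String> keys = new ArrayList<>();
        for (Map.Entry<String, Cell> e : cells.entrySet())
            if (e.getValue().value.equals(value))
                keys.add(e.getKey());
        return keys;
    }

    public static void main(String[] args)
    {
        MapCellSketch m = new MapCellSketch();
        m.put("a", "old", 1);
        List<String> toDelete = m.keysHolding("old"); // read step of delete-by-value
        m.put("a", "new", 2);                         // concurrent writer updates "a"
        for (String k : toDelete)
            m.cells.remove(k);                        // removes "a" although it now holds "new"
        System.out.println(m.cells.containsKey("a")); // prints false: the newer write is lost
    }
}
```

Under this model, put and delete-by-key commute regardless of arrival order, which is why they are safe; delete-by-value does not, which is the concurrency problem alluded to above.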


 Support set and map value types in CQL
 --

 Key: CASSANDRA-3647
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3647
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
  Labels: cql
 Fix For: 1.2


 Composite columns introduce the ability to have arbitrarily nested data in a 
 Cassandra row.  We should expose this through CQL.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3885) Support multiple ranges in SliceQueryFilter

2012-06-15 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-3885:


Attachment: (was: 3885-v2.txt)

 Support multiple ranges in SliceQueryFilter
 ---

 Key: CASSANDRA-3885
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3885
 Project: Cassandra
  Issue Type: Sub-task
  Components: Core
Reporter: Jonathan Ellis
Assignee: David Alves
 Fix For: 1.2

 Attachments: 3885-v2.txt, CASSANDRA-3885.patch, CASSANDRA-3885.patch, 
 CASSANDRA-3885.patch, CASSANDRA-3885.patch, CASSANDRA-3885.patch


 This is logically a subtask of CASSANDRA-2710, but Jira doesn't allow 
 sub-sub-tasks.
 We need to support multiple ranges in a SliceQueryFilter, and we want 
 querying them to be efficient, i.e., one pass through the row to get all of 
 the ranges, rather than one pass per range.
 Supercolumns are irrelevant since the goal is to replace them anyway.  Ignore 
 supercolumn-related code or rip it out, whichever is easier.
 This is ONLY dealing with the storage engine part, not the StorageProxy and 
 Command intra-node messages or the Thrift or CQL client APIs.  Thus, a unit 
 test should be added to ColumnFamilyStoreTest to demonstrate that it works.





[jira] [Updated] (CASSANDRA-3885) Support multiple ranges in SliceQueryFilter

2012-06-15 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-3885:


Attachment: 3885-v2.txt

Patch needed rebase. Rebased version attached. I've also pushed it at 
https://github.com/pcmanus/cassandra/tree/3885.

 Support multiple ranges in SliceQueryFilter
 ---

 Key: CASSANDRA-3885
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3885
 Project: Cassandra
  Issue Type: Sub-task
  Components: Core
Reporter: Jonathan Ellis
Assignee: David Alves
 Fix For: 1.2

 Attachments: 3885-v2.txt, CASSANDRA-3885.patch, CASSANDRA-3885.patch, 
 CASSANDRA-3885.patch, CASSANDRA-3885.patch, CASSANDRA-3885.patch







[jira] [Commented] (CASSANDRA-3855) RemoveDeleted dominates compaction time for large sstable counts

2012-06-15 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13295818#comment-13295818
 ] 

Sylvain Lebresne commented on CASSANDRA-3855:
-

I'll note that I did try a quick test back in the day to see if I could 
reproduce this, but wasn't really able to reproduce something similar to the 
attached hprof log. I didn't wait up to 100,000,000 keys though.

 RemoveDeleted dominates compaction time for large sstable counts
 

 Key: CASSANDRA-3855
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3855
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.0
Reporter: Stu Hood
Assignee: Yuki Morishita
  Labels: compaction, deletes, leveled
 Attachments: with-cleaning-java.hprof.txt


 With very large numbers of sstables (2000+ generated by a `bin/stress -n 
 100,000,000` run with LeveledCompactionStrategy), 
 PrecompactedRow.removeDeletedAndOldShards dominates compaction runtime, such 
 that commenting it out takes compaction throughput from 200KB/s to 12MB/s.
 Stack attached.





[jira] [Commented] (CASSANDRA-4345) New (bootstrapping) Supplant Healthy Nodes

2012-06-15 Thread Joaquin Casares (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13295827#comment-13295827
 ] 

Joaquin Casares commented on CASSANDRA-4345:


I was able to reproduce this error as well.

 New (bootstrapping) Supplant Healthy Nodes
 --

 Key: CASSANDRA-4345
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4345
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.0.9
Reporter: Benjamin Coverston
  Labels: datastax_qa

 Copied a config from an existing node and fired up a new node, which happily 
 inserted itself at token 0 of a running ring. The surprising and worrisome 
 part is that EVERY node started throwing:
 java.lang.NullPointerException
 ERROR [RPC-Thread:205459] 2012-06-14 19:16:31,948 Cassandra.java (line 3041) 
 Internal error processing get_slice
 java.lang.NullPointerException
 ERROR [RPC-Thread:205427] 2012-06-14 19:16:31,949 Cassandra.java (line 3041) 
 Internal error processing get_slice
 java.lang.NullPointerException
 ERROR [RPC-Thread:205459] 2012-06-14 19:16:31,952 Cassandra.java (line 3041) 
 Internal error processing get_slice
 java.lang.NullPointerException
 ---
 Resulting in:
 INFO [GossipStage:1] 2012-06-14 18:24:37,472 Gossiper.java (line 838) Node 
 /192.168.88.48 is now part of the cluster
  INFO [GossipStage:1] 2012-06-14 18:24:37,473 Gossiper.java (line 804) 
 InetAddress /192.168.88.48 is now UP
  INFO [GossipStage:1] 2012-06-14 18:24:37,473 StorageService.java (line 1008) 
 Nodes /192.168.88.48 and /192.168.88.70 have the same token 0.  /192.168.88.48 is the new owner
  WARN [GossipStage:1] 2012-06-14 18:24:37,474 TokenMetadata.java (line 135) 
 Token 0 changing ownership from /192.168.88.70 to /192.168.88.48
  INFO [GossipStage:1] 2012-06-14 18:24:37,475 ColumnFamilyStore.java (line 
 705) Enqueuing flush of Memtable-LocationInfo@961917618(20/25 serialized/live 
 bytes, 1 ops)
  INFO [FlushWriter:1272] 2012-06-14 18:24:37,475 Memtable.java (line 246) 
 Writing Memtable-LocationInfo@961917618(20/25 serialized/live bytes, 1 ops)
  INFO [FlushWriter:1272] 2012-06-14 18:24:37,492 Memtable.java (line 283) 
 Completed flushing /cass_ssd/system/LocationInfo-hc-23-Data.db (74 bytes)
 ERROR [RPC-Thread:200943] 2012-06-14 18:24:38,007 Cassandra.java (line 3041) 
 Internal error processing get_slice
 java.lang.NullPointerException
 at 
 org.apache.cassandra.locator.PropertyFileSnitch.getDatacenter(PropertyFileSnitch.java:104)
 at 
 org.apache.cassandra.locator.DynamicEndpointSnitch.getDatacenter(DynamicEndpointSnitch.java:122)
 at 
 org.apache.cassandra.locator.NetworkTopologyStrategy.calculateNaturalEndpoints(NetworkTopologyStrategy.java:93)
 at 
 org.apache.cassandra.locator.AbstractReplicationStrategy.getNaturalEndpoints(AbstractReplicationStrategy.java:100)
 at 
 org.apache.cassandra.service.StorageService.getLiveNaturalEndpoints(StorageService.java:2002)
 at 
 org.apache.cassandra.service.StorageService.getLiveNaturalEndpoints(StorageService.java:1996)
 at 
 org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:604)
 at 
 org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:564)
 at 
 org.apache.cassandra.thrift.CassandraServer.readColumnFamily(CassandraServer.java:128)
 at 
 org.apache.cassandra.thrift.CassandraServer.getSlice(CassandraServer.java:283)
 at 
 org.apache.cassandra.thrift.CassandraServer.multigetSliceInternal(CassandraServer.java:365)
 at 
 org.apache.cassandra.thrift.CassandraServer.get_slice(CassandraServer.java:326)
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$get_slice.process(Cassandra.java:3033)
 at 
 org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2889)
 at 
 org.apache.thrift.server.TNonblockingServer$FrameBuffer.invoke(TNonblockingServer.java:631)
 at 
 org.apache.cassandra.thrift.CustomTHsHaServer$Invocation.run(CustomTHsHaServer.java:105)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)





[jira] [Commented] (CASSANDRA-4345) New (bootstrapping) Supplant Healthy Nodes

2012-06-15 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13295863#comment-13295863
 ] 

Brandon Williams commented on CASSANDRA-4345:
-

If you mean the NPE, that is no surprise.

 New (bootstrapping) Supplant Healthy Nodes
 --

 Key: CASSANDRA-4345
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4345
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.0.9
Reporter: Benjamin Coverston
  Labels: datastax_qa






[jira] [Comment Edited] (CASSANDRA-4345) New (bootstrapping) Supplant Healthy Nodes

2012-06-15 Thread Joaquin Casares (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13295827#comment-13295827
 ] 

Joaquin Casares edited comment on CASSANDRA-4345 at 6/15/12 7:08 PM:
-

I was able to reproduce the original node getting kicked out as well, with 
auto_bootstrap set to true on the new, clean machine.

  was (Author: j.casares):
I was able to reproduce this error as well.
  
 New (bootstrapping) Supplant Healthy Nodes
 --

 Key: CASSANDRA-4345
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4345
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.0.9
Reporter: Benjamin Coverston
  Labels: datastax_qa


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-3337) Create a way to obliterate nodes from the ring

2012-06-15 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-3337:


Summary: Create a way to obliterate nodes from the ring  (was: Create a 
'killtoken' command)

 Create a way to obliterate nodes from the ring
 --

 Key: CASSANDRA-3337
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3337
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Brandon Williams
Assignee: Brandon Williams
  Labels: gossip
 Fix For: 1.0.6

 Attachments: 3337.txt


 Sometimes you just want a token gone: no re-replication, nothing, just excise 
 it.





[Cassandra Wiki] Update of FAQ by TylerHobbs

2012-06-15 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on Cassandra Wiki for 
change notification.

The FAQ page has been changed by TylerHobbs:
http://wiki.apache.org/cassandra/FAQ?action=diff&rev1=147&rev2=148

Comment:
Add a few client-specific details to the #iter_world faq entry

  == How can I iterate over all the rows in a ColumnFamily? ==
  Simple but slow: Use get_range_slices, start with the empty string, and after 
each call use the last key read as the start key in the next iteration.
  
+ Most clients support an easy way to do this.  For example, 
[[http://pycassa.github.com/pycassa/api/pycassa/columnfamily.html#pycassa.columnfamily.ColumnFamily.get_range|pycassa's
 get_range()]], and 
[[http://thobbs.github.com/phpcassa/api/class-phpcassa.ColumnFamily.html#_get_range|phpcassa's
 get_range()]] return an iterator that fetches the next batch of rows 
automatically.  Hector has an 
[[https://github.com/zznate/hector-examples/blob/master/src/main/java/com/riptano/cassandra/hector/example/PaginateGetRangeSlices.java|example
 of how to do this]].
+ 
  Better: use HadoopSupport.
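
A minimal Python sketch of that pagination loop, using a sorted in-memory dict as a stand-in for get_range_slices (the function names and batch handling here are illustrative, and assume an order-preserving partitioner):

```python
def get_range_slices(store, start_key, count):
    # Stand-in for the Thrift call: return up to `count` (key, row) pairs
    # with key >= start_key, in key order (order-preserving partitioner).
    keys = sorted(k for k in store if k >= start_key)
    return [(k, store[k]) for k in keys[:count]]

def iterate_all_rows(store, batch=100):
    # Start with the empty string; after each call, reuse the last key
    # read as the start key of the next call, dropping that key from the
    # new batch since the range bounds are inclusive.
    assert batch >= 2  # need room for the duplicated start key plus progress
    start, first = "", True
    while True:
        rows = get_range_slices(store, start, batch)
        if not first and rows:
            rows = rows[1:]  # skip the start key we already yielded
        if not rows:
            break
        for key, row in rows:
            yield key, row
        start, first = key, False

store = {"a": 1, "b": 2, "c": 3, "d": 4, "e": 5}
rows = list(iterate_all_rows(store, batch=2))  # a through e, in order
```

The client libraries mentioned above wrap roughly this loop for you, including the duplicate-start-key handling.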
  
  Anchor(no_keyspaces)


[jira] [Created] (CASSANDRA-4346) Hive autocreate C* tables/other app instead of CLI create CF - need to restart C* CLI to see new CF's created in existing Keyspaces

2012-06-15 Thread Alex Liu (JIRA)
Alex Liu created CASSANDRA-4346:
---

 Summary: Hive autocreate C* tables/other app instead of CLI create 
CF - need to restart C* CLI to see new CF's created in existing Keyspaces
 Key: CASSANDRA-4346
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4346
 Project: Cassandra
  Issue Type: Bug
Reporter: Alex Liu
Priority: Minor


The Cassandra CliClient class keeps a local cache:

private final Map<String, KsDef> keyspacesMap = new HashMap<String, KsDef>();

which is refreshed each time a keyspace is added, a user logs in, or a new CF is created 
through the CLI; any other metadata changes not made by the CLI don't refresh the local cache.

We could add a new CLI command that refreshes the metadata for a keyspace,

or remove the local cache entirely, so that metadata changes made by any other app 
show up in the CLI in real time
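
A hedged sketch of the staleness being described (Python stand-ins; the real CliClient is Java and talks Thrift, and the names here are illustrative):

```python
class CliClient:
    """Sketch of a client that caches keyspace definitions locally."""
    def __init__(self, server):
        self.server = server                 # shared schema store (stand-in)
        self.keyspaces = dict(server)        # local cache, copied at login

    def add_keyspace(self, name, ksdef):
        self.server[name] = ksdef
        self.keyspaces[name] = ksdef         # cache refreshed on own writes only

    def describe(self, name):
        # Re-reading from the server is what refreshes a stale entry.
        self.keyspaces[name] = self.server[name]
        return self.keyspaces[name]

server = {"ks1": {"cf": ["users"]}}
cli = CliClient(server)
server["ks1"] = {"cf": ["users", "events"]}  # schema changed by another app
stale = cli.keyspaces["ks1"]                 # still the old definition
fresh = cli.describe("ks1")                  # describe forces a refresh
```

Any operation that re-reads the server-side definition (such as a describe) repairs the stale entry, which is why dropping the cache or forcing a refresh both solve the problem.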






[jira] [Created] (CASSANDRA-4347) IP change of node requires assassinate to really remove old IP

2012-06-15 Thread Karl Mueller (JIRA)
Karl Mueller created CASSANDRA-4347:
---

 Summary: IP change of node requires assassinate to really remove 
old IP
 Key: CASSANDRA-4347
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4347
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.0.10
 Environment: RHEL6, 64bit
Reporter: Karl Mueller
Priority: Minor


When changing the IP addresses of nodes one by one, the node successfully moves 
itself and its token.  Everything works properly.

However, the node which had its IP changed (but NOT other nodes in the ring) 
continues to have some type of state associated with the old IP and produces 
log messages like this:


 INFO [GossipStage:1] 2012-06-15 15:25:01,490 Gossiper.java (line 838) Node 
/10.12.9.157 is now part of the cluster
 INFO [GossipStage:1] 2012-06-15 15:25:01,490 Gossiper.java (line 804) 
InetAddress /10.12.9.157 is now UP
 INFO [GossipStage:1] 2012-06-15 15:25:01,491 StorageService.java (line 1017) 
Nodes /10.12.9.157 and dev-cass01.sv.walmartlabs.com/10.93.15.11 have the same 
token 113427455640312821154458202477256070484.  Ignoring /10.12.9.157
 INFO [GossipTasks:1] 2012-06-15 15:25:11,373 Gossiper.java (line 818) 
InetAddress /10.12.9.157 is now dead.
 INFO [GossipTasks:1] 2012-06-15 15:25:32,380 Gossiper.java (line 632) 
FatClient /10.12.9.157 has been silent for 3ms, removing from gossip
 INFO [GossipStage:1] 2012-06-15 15:26:32,490 Gossiper.java (line 838) Node 
/10.12.9.157 is now part of the cluster
 INFO [GossipStage:1] 2012-06-15 15:26:32,491 Gossiper.java (line 804) 
InetAddress /10.12.9.157 is now UP
 INFO [GossipStage:1] 2012-06-15 15:26:32,491 StorageService.java (line 1017) 
Nodes /10.12.9.157 and dev-cass01.sv.walmartlabs.com/10.93.15.11 have the same 
token 113427455640312821154458202477256070484.  Ignoring /10.12.9.157
 INFO [GossipTasks:1] 2012-06-15 15:26:42,402 Gossiper.java (line 818) 
InetAddress /10.12.9.157 is now dead.
 INFO [GossipTasks:1] 2012-06-15 15:27:03,410 Gossiper.java (line 632) 
FatClient /10.12.9.157 has been silent for 3ms, removing from gossip
 INFO [GossipStage:1] 2012-06-15 15:28:04,533 Gossiper.java (line 838) Node 
/10.12.9.157 is now part of the cluster


Other nodes do NOT have the old IP showing up in logs.  It's only the node that 
moved.

The old IP doesn't show up in ring anywhere or in any other fashion.  The 
cluster seems to be fully operational, so I think it's just a cleanup issue.





[jira] [Commented] (CASSANDRA-4346) Hive autocreate C* tables/other app instead of CLI create CF - need to restart C* CLI to see new CF's created in existing Keyspaces

2012-06-15 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13295994#comment-13295994
 ] 

Jonathan Ellis commented on CASSANDRA-4346:
---

specifically, you can simply issue a describe to force a refresh

 Hive autocreate C* tables/other app instead of CLI create CF - need to 
 restart C* CLI to see new CF's created in existing Keyspaces
 ---

 Key: CASSANDRA-4346
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4346
 Project: Cassandra
  Issue Type: Bug
Reporter: Alex Liu
Priority: Minor

 Cassandra CliClient class keep a local cache of
 private final Map<String, KsDef> keyspacesMap = new HashMap<String, KsDef>();
 which is refreshed each time add new keyspace, login, add new cf through the 
 CLI, any other meta data changes not made by CLI don't refresh the local 
 cache.
 we can add one new command to CLI, refresh the metadata of the keyspace
 or remove the local cache, so that any other app changes the meta data will 
 show in CLI real time





[jira] [Resolved] (CASSANDRA-4346) Hive autocreate C* tables/other app instead of CLI create CF - need to restart C* CLI to see new CF's created in existing Keyspaces

2012-06-15 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4346.
---

Resolution: Duplicate

done in CASSANDRA-4052

 Hive autocreate C* tables/other app instead of CLI create CF - need to 
 restart C* CLI to see new CF's created in existing Keyspaces
 ---

 Key: CASSANDRA-4346
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4346
 Project: Cassandra
  Issue Type: Bug
Reporter: Alex Liu
Priority: Minor

 Cassandra CliClient class keep a local cache of
 private final Map<String, KsDef> keyspacesMap = new HashMap<String, KsDef>();
 which is refreshed each time add new keyspace, login, add new cf through the 
 CLI, any other meta data changes not made by CLI don't refresh the local 
 cache.
 we can add one new command to CLI, refresh the metadata of the keyspace
 or remove the local cache, so that any other app changes the meta data will 
 show in CLI real time





[jira] [Commented] (CASSANDRA-4321) stackoverflow building interval tree & possible sstable corruptions

2012-06-15 Thread Al Tobey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13296040#comment-13296040
 ] 

Al Tobey commented on CASSANDRA-4321:
-

What SHA / tag should these patches apply against? I've tried trunk, 1.1.1 and 
1.1.0 and can't get a clean apply. I'll try a manual merge tomorrow.

 stackoverflow building interval tree & possible sstable corruptions
 ---

 Key: CASSANDRA-4321
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4321
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.1
Reporter: Anton Winter
Assignee: Sylvain Lebresne
 Fix For: 1.1.2

 Attachments: 
 0001-Change-Range-Bounds-in-LeveledManifest.overlapping-v3.txt, 
 0002-Scrub-detects-and-repair-outOfOrder-rows-v3.txt, 
 0003-Create-standalone-scrub-v3.txt, ooyala-hastur-stacktrace.txt


 After upgrading to 1.1.1 (from 1.1.0) I have started experiencing 
 StackOverflowError's resulting in compaction backlog and failure to restart. 
 The ring currently consists of 6 DC's and 22 nodes using LCS & compression.  
 This issue was first noted on 2 nodes in one DC and then appears to have 
 spread to various other nodes in the other DC's.  
 When the first occurrence of this was found I restarted the instance but it 
 failed to start so I cleared its data and treated it as a replacement node 
 for the token it was previously responsible for.  This node successfully 
 streamed all the relevant data back but failed again a number of hours later 
 with the same StackOverflowError and again was unable to restart. 
 The initial stack overflow error on a running instance looks like this:
 ERROR [CompactionExecutor:314] 2012-06-07 09:59:43,017 
 AbstractCassandraDaemon.java (line 134) Exception in thread 
 Thread[CompactionExecutor:314,1,main]
 java.lang.StackOverflowError
 at java.util.Arrays.mergeSort(Arrays.java:1157)
 at java.util.Arrays.sort(Arrays.java:1092)
 at java.util.Collections.sort(Collections.java:134)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.findMinMedianMax(IntervalNode.java:114)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:49)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:62)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:62)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:62)
 [snip - this repeats until stack overflow.  Compactions stop from this point 
 onwards]
 I restarted this failing instance with DEBUG logging enabled and it throws 
 the following exception part way through startup:
 ERROR 11:37:51,046 Exception in thread Thread[OptionalTasks:1,5,main]
 java.lang.StackOverflowError
 at 
 org.slf4j.helpers.MessageFormatter.safeObjectAppend(MessageFormatter.java:307)
 at 
 org.slf4j.helpers.MessageFormatter.deeplyAppendParameter(MessageFormatter.java:276)
 at 
 org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:230)
 at 
 org.slf4j.helpers.MessageFormatter.format(MessageFormatter.java:124)
 at 
 org.slf4j.impl.Log4jLoggerAdapter.debug(Log4jLoggerAdapter.java:228)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:45)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:62)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:62)
 [snip - this repeats until stack overflow]
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:62)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:64)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:62)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:62)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:64)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:62)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:62)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:64)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:62)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:62)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:62)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalTree.init(IntervalTree.java:39)
 at 
 

[jira] [Commented] (CASSANDRA-4321) stackoverflow building interval tree & possible sstable corruptions

2012-06-15 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13296042#comment-13296042
 ] 

Jonathan Ellis commented on CASSANDRA-4321:
---

cassandra-1.1 branch

 stackoverflow building interval tree & possible sstable corruptions
 ---

 Key: CASSANDRA-4321
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4321
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.1
Reporter: Anton Winter
Assignee: Sylvain Lebresne
 Fix For: 1.1.2

 Attachments: 
 0001-Change-Range-Bounds-in-LeveledManifest.overlapping-v3.txt, 
 0002-Scrub-detects-and-repair-outOfOrder-rows-v3.txt, 
 0003-Create-standalone-scrub-v3.txt, ooyala-hastur-stacktrace.txt


 After upgrading to 1.1.1 (from 1.1.0) I have started experiencing 
 StackOverflowError's resulting in compaction backlog and failure to restart. 
 The ring currently consists of 6 DC's and 22 nodes using LCS & compression.  
 This issue was first noted on 2 nodes in one DC and then appears to have 
 spread to various other nodes in the other DC's.  
 When the first occurrence of this was found I restarted the instance but it 
 failed to start so I cleared its data and treated it as a replacement node 
 for the token it was previously responsible for.  This node successfully 
 streamed all the relevant data back but failed again a number of hours later 
 with the same StackOverflowError and again was unable to restart. 
 The initial stack overflow error on a running instance looks like this:
 ERROR [CompactionExecutor:314] 2012-06-07 09:59:43,017 
 AbstractCassandraDaemon.java (line 134) Exception in thread 
 Thread[CompactionExecutor:314,1,main]
 java.lang.StackOverflowError
 at java.util.Arrays.mergeSort(Arrays.java:1157)
 at java.util.Arrays.sort(Arrays.java:1092)
 at java.util.Collections.sort(Collections.java:134)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.findMinMedianMax(IntervalNode.java:114)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:49)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:62)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:62)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:62)
 [snip - this repeats until stack overflow.  Compactions stop from this point 
 onwards]
 I restarted this failing instance with DEBUG logging enabled and it throws 
 the following exception part way through startup:
 ERROR 11:37:51,046 Exception in thread Thread[OptionalTasks:1,5,main]
 java.lang.StackOverflowError
 at 
 org.slf4j.helpers.MessageFormatter.safeObjectAppend(MessageFormatter.java:307)
 at 
 org.slf4j.helpers.MessageFormatter.deeplyAppendParameter(MessageFormatter.java:276)
 at 
 org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:230)
 at 
 org.slf4j.helpers.MessageFormatter.format(MessageFormatter.java:124)
 at 
 org.slf4j.impl.Log4jLoggerAdapter.debug(Log4jLoggerAdapter.java:228)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:45)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:62)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:62)
 [snip - this repeats until stack overflow]
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:62)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:64)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:62)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:62)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:64)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:62)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:62)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:64)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:62)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:62)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalNode.init(IntervalNode.java:62)
 at 
 org.apache.cassandra.utils.IntervalTree.IntervalTree.init(IntervalTree.java:39)
 at 
 org.apache.cassandra.db.DataTracker.buildIntervalTree(DataTracker.java:560)
 at 
 

git commit: quell ant runtime warnings while executing target build-test

2012-06-15 Thread dbrosius
Updated Branches:
  refs/heads/trunk 7398e9363 -> 7030d1e23


quell ant runtime warnings while executing target build-test


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7030d1e2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7030d1e2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7030d1e2

Branch: refs/heads/trunk
Commit: 7030d1e233aa7bcbef24fb3190e04bb819c3dc7d
Parents: 7398e93
Author: Dave Brosius dbros...@apache.org
Authored: Fri Jun 15 20:36:53 2012 -0400
Committer: Dave Brosius dbros...@apache.org
Committed: Fri Jun 15 20:36:53 2012 -0400

--
 build.xml |3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7030d1e2/build.xml
--
diff --git a/build.xml b/build.xml
index 2f005f7..c5778ff 100644
--- a/build.xml
+++ b/build.xml
@@ -997,7 +997,8 @@
 <javac
  debug="true"
  debuglevel="${debuglevel}"
- destdir="${test.classes}">
+ destdir="${test.classes}"
+ includeantruntime="false">
   <classpath>
     <path refid="cassandra.classpath"/>
   </classpath>



[jira] [Commented] (CASSANDRA-3885) Support multiple ranges in SliceQueryFilter

2012-06-15 Thread Vijay (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13296050#comment-13296050
 ] 

Vijay commented on CASSANDRA-3885:
--

{quote}
 So I could regenerate the binary messages, but I'm confused on what 
SerializationsTest is actually testing.
{quote}
I always thought we have /cassandra/test/data/serialization/x.x if you want to 
test the older versions.

LGTM +1

nit: setStart() should have an assert that checks there is only one slice.
It kind of affects:
{code}
// As soon as we'd done our first call, we want to reset the start column if we're paging
if (isPaging)
    ((SliceQueryFilter) initialFilter()).setStart(ByteBufferUtil.EMPTY_BYTE_BUFFER);
{code}
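
The assert being asked for could look roughly like this (a Python stand-in for the Java SliceQueryFilter, with hypothetical names): resetting the start column is only well-defined when the filter holds a single slice.

```python
class SliceQueryFilter:
    """Sketch: a filter over one or more (start, finish) column slices."""
    def __init__(self, slices):
        self.slices = list(slices)

    def set_start(self, start):
        # Resetting the start only makes sense for a single-slice filter;
        # with multiple slices it is ambiguous which slice should be reset.
        assert len(self.slices) == 1, "setStart requires exactly one slice"
        self.slices[0] = (start, self.slices[0][1])

f = SliceQueryFilter([("col_a", "col_z")])
f.set_start("")  # paging: reset the start to the empty column
```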

PS: I was actually doing parallel work without the prefetch queue and thought I'd 
share: 
(https://github.com/Vijay2win/cassandra/commit/31ca6fd9e1bafc1f4d8dfe858929586637bffdef#L3L18)

 Support multiple ranges in SliceQueryFilter
 ---

 Key: CASSANDRA-3885
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3885
 Project: Cassandra
  Issue Type: Sub-task
  Components: Core
Reporter: Jonathan Ellis
Assignee: David Alves
 Fix For: 1.2

 Attachments: 3885-v2.txt, CASSANDRA-3885.patch, CASSANDRA-3885.patch, 
 CASSANDRA-3885.patch, CASSANDRA-3885.patch, CASSANDRA-3885.patch


 This is logically a subtask of CASSANDRA-2710, but Jira doesn't allow 
 sub-sub-tasks.
 We need to support multiple ranges in a SliceQueryFilter, and we want 
 querying them to be efficient, i.e., one pass through the row to get all of 
 the ranges, rather than one pass per range.
 Supercolumns are irrelevant since the goal is to replace them anyway.  Ignore 
 supercolumn-related code or rip it out, whichever is easier.
 This is ONLY dealing with the storage engine part, not the StorageProxy and 
 Command intra-node messages or the Thrift or CQL client APIs.  Thus, a unit 
 test should be added to ColumnFamilyStoreTest to demonstrate that it works.





[jira] [Commented] (CASSANDRA-4345) New (bootstrapping) Supplant Healthy Nodes

2012-06-15 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13296076#comment-13296076
 ] 

Jonathan Ellis commented on CASSANDRA-4345:
---

Looks like we check whether the token already exists in the ring for move, but not 
for bootstrap.
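
The missing guard could be sketched like this (hedged Python; the real check would live in the Java StorageService, and the names here are illustrative):

```python
def validate_bootstrap_token(token, token_to_endpoint, endpoint):
    """Refuse to bootstrap onto a token another node already owns,
    mirroring the existing check done for move."""
    owner = token_to_endpoint.get(token)
    if owner is not None and owner != endpoint:
        raise RuntimeError(
            f"Bootstrap token {token} is already owned by {owner}; "
            "pick a different initial_token or remove the owner first")

ring = {0: "192.168.88.70"}
validate_bootstrap_token(100, ring, "192.168.88.48")  # ok: token is free
```

With such a check, the new node in the report above would have refused to start instead of silently supplanting the healthy owner of token 0.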

 New (bootstrapping) Supplant Healthy Nodes
 --

 Key: CASSANDRA-4345
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4345
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.0.9
Reporter: Benjamin Coverston
  Labels: datastax_qa

 Copied a config from an existing node and fired up a new node, which happily 
 inserted itself at token 0 of a running ring. The surprising and worrisome 
 part is that EVERY node started throwing:
 java.lang.NullPointerException
 ERROR [RPC-Thread:205459] 2012-06-14 19:16:31,948 Cassandra.java (line 3041) 
 Internal error processing get_slice
 java.lang.NullPointerException
 ERROR [RPC-Thread:205427] 2012-06-14 19:16:31,949 Cassandra.java (line 3041) 
 Internal error processing get_slice
 java.lang.NullPointerException
 ERROR [RPC-Thread:205459] 2012-06-14 19:16:31,952 Cassandra.java (line 3041) 
 Internal error processing get_slice
 java.lang.NullPointerException
 ---
 Resulting in:
 INFO [GossipStage:1] 2012-06-14 18:24:37,472 Gossiper.java (line 838) Node 
 /192.168.88.48 is now part of the cluster
  INFO [GossipStage:1] 2012-06-14 18:24:37,473 Gossiper.java (line 804) 
 InetAddress /192.168.88.48 is now UP
  INFO [GossipStage:1] 2012-06-14 18:24:37,473 StorageService.java (line 1008) 
 Nodes /192.168.88.48 and /192.168.88.70 have the same token 0.  /192.168.88
 .48 is the new owner
  WARN [GossipStage:1] 2012-06-14 18:24:37,474 TokenMetadata.java (line 135) 
 Token 0 changing ownership from /192.168.88.70 to /192.168.88.48
  INFO [GossipStage:1] 2012-06-14 18:24:37,475 ColumnFamilyStore.java (line 
 705) Enqueuing flush of Memtable-LocationInfo@961917618(20/25 serialized/live 
 bytes, 1 ops)
  INFO [FlushWriter:1272] 2012-06-14 18:24:37,475 Memtable.java (line 246) 
 Writing Memtable-LocationInfo@961917618(20/25 serialized/live bytes, 1 ops)
  INFO [FlushWriter:1272] 2012-06-14 18:24:37,492 Memtable.java (line 283) 
 Completed flushing /cass_ssd/system/LocationInfo-hc-23-Data.db (74 bytes)
 ERROR [RPC-Thread:200943] 2012-06-14 18:24:38,007 Cassandra.java (line 3041) 
 Internal error processing get_slice
 java.lang.NullPointerException
 at 
 org.apache.cassandra.locator.PropertyFileSnitch.getDatacenter(PropertyFileSnitch.java:104)
 at 
 org.apache.cassandra.locator.DynamicEndpointSnitch.getDatacenter(DynamicEndpointSnitch.java:122)
 at 
 org.apache.cassandra.locator.NetworkTopologyStrategy.calculateNaturalEndpoints(NetworkTopologyStrategy.java:93)
 at 
 org.apache.cassandra.locator.AbstractReplicationStrategy.getNaturalEndpoints(AbstractReplicationStrategy.java:100)
 at 
 org.apache.cassandra.service.StorageService.getLiveNaturalEndpoints(StorageService.java:2002)
 at 
 org.apache.cassandra.service.StorageService.getLiveNaturalEndpoints(StorageService.java:1996)
 at 
 org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:604)
 at 
 org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:564)
 at 
 org.apache.cassandra.thrift.CassandraServer.readColumnFamily(CassandraServer.java:128)
 at 
 org.apache.cassandra.thrift.CassandraServer.getSlice(CassandraServer.java:283)
 at 
 org.apache.cassandra.thrift.CassandraServer.multigetSliceInternal(CassandraServer.java:365)
 at 
 org.apache.cassandra.thrift.CassandraServer.get_slice(CassandraServer.java:326)
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$get_slice.process(Cassandra.java:3033)
 at 
 org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2889)
 at 
 org.apache.thrift.server.TNonblockingServer$FrameBuffer.invoke(TNonblockingServer.java:631)
 at 
 org.apache.cassandra.thrift.CustomTHsHaServer$Invocation.run(CustomTHsHaServer.java:105)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)





[jira] [Created] (CASSANDRA-4348) node should refuse to bootstrap if told to use a token that already exists in the ring

2012-06-15 Thread Jonathan Ellis (JIRA)
Jonathan Ellis created CASSANDRA-4348:
-

 Summary: node should refuse to bootstrap if told to use a token 
that already exists in the ring
 Key: CASSANDRA-4348
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4348
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jonathan Ellis
Assignee: Brandon Williams
 Fix For: 1.1.2


See CASSANDRA-4345





[jira] [Created] (CASSANDRA-4349) PFS should give a friendlier error message when a node has not been configured

2012-06-15 Thread Jonathan Ellis (JIRA)
Jonathan Ellis created CASSANDRA-4349:
-

 Summary: PFS should give a friendlier error message when a node 
has not been configured
 Key: CASSANDRA-4349
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4349
 Project: Cassandra
  Issue Type: Bug
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 1.1.2


see CASSANDRA-4345
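
The friendlier behavior could be sketched like this (hedged Python stand-in for the PropertyFileSnitch lookup; the real code is Java and reads cassandra-topology.properties):

```python
def get_datacenter(endpoint, topology, default=None):
    """Sketch: a PFS-style lookup that fails with a descriptive message
    instead of an NPE when an endpoint is missing from the topology file."""
    entry = topology.get(endpoint, default)
    if entry is None:
        raise ValueError(
            f"Endpoint {endpoint} is not configured in "
            "cassandra-topology.properties and no default entry is set")
    return entry["dc"]

topology = {"192.168.88.70": {"dc": "DC1", "rack": "RAC1"}}
get_datacenter("192.168.88.70", topology)  # "DC1"
```

This is exactly the failure mode in the CASSANDRA-4345 traces: a null lookup for the unconfigured node surfaced as a bare NullPointerException deep inside get_slice.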





[jira] [Resolved] (CASSANDRA-4345) New (bootstrapping) Supplant Healthy Nodes

2012-06-15 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-4345.
---

Resolution: Duplicate

Split into CASSANDRA-4348 and CASSANDRA-4349.

 New (bootstrapping) Supplant Healthy Nodes
 --

 Key: CASSANDRA-4345
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4345
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.0.9
Reporter: Benjamin Coverston
  Labels: datastax_qa

 Copied a config from an existing node and fired up a new node, which happily 
 inserted itself at token 0 of a running ring. The surprising and worrisome 
 part is that EVERY node started throwing:
 java.lang.NullPointerException
 ERROR [RPC-Thread:205459] 2012-06-14 19:16:31,948 Cassandra.java (line 3041) 
 Internal error processing get_slice
 java.lang.NullPointerException
 ERROR [RPC-Thread:205427] 2012-06-14 19:16:31,949 Cassandra.java (line 3041) 
 Internal error processing get_slice
 java.lang.NullPointerException
 ERROR [RPC-Thread:205459] 2012-06-14 19:16:31,952 Cassandra.java (line 3041) 
 Internal error processing get_slice
 java.lang.NullPointerException
 ---
 Resulting in:
 INFO [GossipStage:1] 2012-06-14 18:24:37,472 Gossiper.java (line 838) Node 
 /192.168.88.48 is now part of the cluster
  INFO [GossipStage:1] 2012-06-14 18:24:37,473 Gossiper.java (line 804) 
 InetAddress /192.168.88.48 is now UP
  INFO [GossipStage:1] 2012-06-14 18:24:37,473 StorageService.java (line 1008) 
 Nodes /192.168.88.48 and /192.168.88.70 have the same token 0.  /192.168.88
 .48 is the new owner
  WARN [GossipStage:1] 2012-06-14 18:24:37,474 TokenMetadata.java (line 135) 
 Token 0 changing ownership from /192.168.88.70 to /192.168.88.48
  INFO [GossipStage:1] 2012-06-14 18:24:37,475 ColumnFamilyStore.java (line 
 705) Enqueuing flush of Memtable-LocationInfo@961917618(20/25 serialized/live 
 bytes, 1 ops)
  INFO [FlushWriter:1272] 2012-06-14 18:24:37,475 Memtable.java (line 246) 
 Writing Memtable-LocationInfo@961917618(20/25 serialized/live bytes, 1 ops)
  INFO [FlushWriter:1272] 2012-06-14 18:24:37,492 Memtable.java (line 283) 
 Completed flushing /cass_ssd/system/LocationInfo-hc-23-Data.db (74 bytes)
 ERROR [RPC-Thread:200943] 2012-06-14 18:24:38,007 Cassandra.java (line 3041) 
 Internal error processing get_slice
 java.lang.NullPointerException
 at 
 org.apache.cassandra.locator.PropertyFileSnitch.getDatacenter(PropertyFileSnitch.java:104)
 at 
 org.apache.cassandra.locator.DynamicEndpointSnitch.getDatacenter(DynamicEndpointSnitch.java:122)
 at 
 org.apache.cassandra.locator.NetworkTopologyStrategy.calculateNaturalEndpoints(NetworkTopologyStrategy.java:93)
 at 
 org.apache.cassandra.locator.AbstractReplicationStrategy.getNaturalEndpoints(AbstractReplicationStrategy.java:100)
 at 
 org.apache.cassandra.service.StorageService.getLiveNaturalEndpoints(StorageService.java:2002)
 at 
 org.apache.cassandra.service.StorageService.getLiveNaturalEndpoints(StorageService.java:1996)
 at 
 org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:604)
 at 
 org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:564)
 at 
 org.apache.cassandra.thrift.CassandraServer.readColumnFamily(CassandraServer.java:128)
 at 
 org.apache.cassandra.thrift.CassandraServer.getSlice(CassandraServer.java:283)
 at 
 org.apache.cassandra.thrift.CassandraServer.multigetSliceInternal(CassandraServer.java:365)
 at 
 org.apache.cassandra.thrift.CassandraServer.get_slice(CassandraServer.java:326)
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$get_slice.process(Cassandra.java:3033)
 at 
 org.apache.cassandra.thrift.Cassandra$Processor.process(Cassandra.java:2889)
 at 
 org.apache.thrift.server.TNonblockingServer$FrameBuffer.invoke(TNonblockingServer.java:631)
 at 
 org.apache.cassandra.thrift.CustomTHsHaServer$Invocation.run(CustomTHsHaServer.java:105)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
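
The stack trace bottoms out in PropertyFileSnitch.getDatacenter, which points at an unguarded topology lookup for an endpoint that was never configured. A minimal, hypothetical sketch of that failure mode follows; the class, map, and method names here are illustrative, not Cassandra's actual implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: an endpoint missing from the topology map yields null,
// and dereferencing that null produces the bare NPE seen in the report.
public class SnitchSketch {
    private static final Map<String, String[]> endpointMap = new HashMap<>();
    static {
        // Only configured nodes are present; the new node 192.168.88.48 is not.
        endpointMap.put("192.168.88.70", new String[]{"DC1", "RAC1"});
    }

    static String getDatacenterUnguarded(String endpoint) {
        // Mirrors the failing pattern: null map hit, immediate array access -> NPE.
        return endpointMap.get(endpoint)[0];
    }

    public static void main(String[] args) {
        System.out.println(getDatacenterUnguarded("192.168.88.70")); // prints DC1
        try {
            getDatacenterUnguarded("192.168.88.48");
        } catch (NullPointerException e) {
            System.out.println("NPE for unconfigured node, as in the report");
        }
    }
}
```

Because get_slice resolves replicas through the snitch on every request, a single unconfigured gossip participant is enough to poison reads cluster-wide, which matches the symptom above.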

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (CASSANDRA-4349) PFS should give a friendlier error message when a node has not been configured

2012-06-15 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-4349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-4349:
--

Attachment: 4349.txt

Patch attached.
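
I haven't reproduced the contents of 4349.txt inline, but the shape of the fix is presumably an explicit null check that replaces the bare NPE with a message naming the unconfigured endpoint. A minimal hypothetical sketch (class name, map, and message wording are illustrative only):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the friendlier-error approach: check the topology
// lookup and fail with a descriptive message instead of a NullPointerException.
// This illustrates the idea; it is not the actual patch.
public class GuardedSnitchSketch {
    private static final Map<String, String[]> endpointMap = new HashMap<>();
    static {
        endpointMap.put("192.168.88.70", new String[]{"DC1", "RAC1"});
    }

    static String getDatacenter(String endpoint) {
        String[] info = endpointMap.get(endpoint);
        if (info == null)
            throw new RuntimeException("Could not find datacenter for endpoint " + endpoint
                    + "; is it listed in the topology configuration?");
        return info[0];
    }

    public static void main(String[] args) {
        try {
            getDatacenter("192.168.88.48");
        } catch (RuntimeException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The operational win is that the error now identifies which node is missing from the topology file, instead of an anonymous NPE deep in the read path.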

 PFS should give a friendlier error message when a node has not been configured
 --

 Key: CASSANDRA-4349
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4349
 Project: Cassandra
  Issue Type: Bug
Reporter: Jonathan Ellis
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 1.1.2

 Attachments: 4349.txt


 see CASSANDRA-4345
