[jira] [Commented] (CASSANDRA-5112) Setting up authentication tables with custom authentication plugin

2013-02-06 Thread Dirkjan Bussink (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13572383#comment-13572383
 ] 

Dirkjan Bussink commented on CASSANDRA-5112:


I was wondering: as part of this issue, are you also addressing the second 
issue, about being able to change the strategy and RF for the system_auth 
keyspace? Or is there another solution that would avoid a certain set of users 
being unable to access the database if a node goes down?

 Setting up authentication tables with custom authentication plugin
 --

 Key: CASSANDRA-5112
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5112
 Project: Cassandra
  Issue Type: Improvement
  Components: API
Affects Versions: 1.2.0
Reporter: Dirkjan Bussink
Assignee: Aleksey Yeschenko
Priority: Minor
 Fix For: 1.2.2


 I'm working on updating https://github.com/nedap/cassandra-auth with the new 
 authentication APIs in Cassandra 1.2.0. I have stumbled on an issue and I'm 
 not really sure how to handle it.
 For the authentication I want to set up additional column families for the 
 passwords and permissions. As recommended in the documentation of 
 IAuthorizer, I'm trying to create these tables during setup(): "Setup is 
 called once upon system startup to initialize the IAuthorizer. For example, 
 use this method to create any required keyspaces/column families."
 The problem is that doing this seems to be a lot harder than I would think, 
 or I'm perhaps missing something obvious. I've tried various attempts, but 
 all have failed:
 - CQL and QueryProcessor.processInternal to set up additional column families. 
 This fails, since processInternal will throw an UnsupportedOperationException 
 due to it being a SchemaAlteringStatement.
 - CQL and QueryProcessor.process. This works after the system has 
 successfully started, but due to the moment setup() is called in the 
 Cassandra boot process, it will fail. It will throw an AssertionError in 
 MigrationManager.java:320, because the gossiper hasn't been started yet.
 - Internal APIs. Mimicking how other column families are set up, using 
 CFMetadata and Schema.load. This seems to get the system in some inconsistent 
 state where some parts do see the additional column family, but others don't.
 Does anyone have a recommendation for the path to follow here? What would be 
 the recommended approach for actually setting up those column families during 
 starting for authentication?
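The shape of the usual workaround for this kind of boot-ordering problem, deferring the schema work until the node is fully started, can be sketched in plain Java. Every name below is illustrative; none of it is an actual Cassandra API:

```java
// Sketch only: defer schema creation out of setup() until the node is ready.
// DeferredAuthSetup, onNodeReady, and the Runnable body are all stand-ins.
public class DeferredAuthSetup {
    private Runnable pendingSetup;
    private boolean tablesCreated = false;

    // Called from IAuthorizer.setup() during boot: record the work, don't run it,
    // because the gossiper/migration machinery is not started yet.
    public void setup() {
        pendingSetup = () -> tablesCreated = true; // stand-in for CREATE TABLE via CQL
    }

    // Called once startup has completed; schema-altering statements are now safe.
    public void onNodeReady() {
        if (pendingSetup != null) {
            pendingSetup.run();
            pendingSetup = null;
        }
    }

    public boolean tablesCreated() {
        return tablesCreated;
    }
}
```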
 From working on this, I also have another question. I see the default 
 system_auth keyspace is created with a SimpleStrategy and a replication 
 factor of 1. Is this a deliberate choice? I can imagine that if a node in a 
 cluster dies, losing the authentication information that happens to be 
 available on that node could be very problematic. If I'm missing any 
 reasoning here, please let me know, but it struck me as something that could 
 cause potential problems. I also don't see a way I could reconfigure this at 
 the moment, and APIs such as CREATE USER do seem to depend on this keyspace.
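For reference, the remedy that later became the usual advice for this is to raise the keyspace's replication settings directly. A hedged sketch, assuming a version where system_auth is alterable; the strategy, datacenter name, and RF are illustrative:

```sql
-- Illustrative only: raise system_auth replication so auth data survives
-- the loss of a single node, then repair system_auth on each node.
ALTER KEYSPACE system_auth
  WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3};
```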

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4464) expose 2I CFs to the rest of nodetool

2013-02-06 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13572408#comment-13572408
 ] 

Jason Brown commented on CASSANDRA-4464:


Looking into it, nodetool's interface is like this:

{code}snapshot [keyspaces...] -cf [columnfamilyName] -t [snapshotName]{code}

If you execute this (on a KS named jason, with a CF named dog, with index 
age_idx):

{code}nodetool snapshot jason -cf dog.age_idx{code}

you get this output:

{code}Requested snapshot for: jeb and column family: dog.age_idx
Exception in thread "main" java.lang.IllegalArgumentException: Cannot take a 
snapshot of a secondary index by itself. Run snapshot on the column family that 
owns the index.
at 
org.apache.cassandra.service.StorageService.takeColumnFamilySnapshot(StorageService.java:2164)
…{code}

Not an unreasonable message, I hope. That said, I'd prefer to catch the 
exception and make a prettier (less verbose) message, but if I catch the 
exception rather than let it escape out of main(), I wouldn't have a chance to 
set the exit status code to a non-0 value (at least, not without bypassing the 
finally{} block at the end of main()).

However, that does not match the exception message you mentioned, that the CF 
does not exist. I think that comes from SS.takeSnapshot(), which would print 
"Table dog.age_idx does not exist" if you execute it like this:

{code}nodetool snapshot dog.age_idx{code}

In this case, it is true, 'dog.age_idx' does not exist as a table/keyspace, as 
NodeCmd is expecting keyspace names as parameters. However, we can (trivially) 
change the message to say 'Keyspace xyz does not exist' (in SS.getValidTable()) 
to make it a little clearer to users. When they realize the correct syntax 
of the nodetool snapshot command, they'll run into the IAE (like above), so 
that should stop them from trying to do any more damage :).

Please let me know if I'm missing something here.
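The exit-status concern above can be sketched with a self-contained example: record the status, let the finally block run, and exit afterwards. The names here are illustrative, not NodeCmd's actual structure:

```java
// Sketch only: report the snapshot failure concisely, still run the finally
// block, and hand back a non-zero status for the caller to pass to System.exit.
public class ExitStatusSketch {
    public static int run(String[] args) {
        int rc = 0;
        try {
            takeSnapshot(args);                    // may throw IllegalArgumentException
        } catch (IllegalArgumentException e) {
            System.err.println("error: " + e.getMessage()); // short message, no stack trace
            rc = 1;                                // remember the failure...
        } finally {
            cleanup();                             // ...while cleanup still runs
        }
        return rc;                                 // main() would call System.exit(rc)
    }

    // Stand-in for the JMX snapshot call; mimics the server-side IAE for 2I names.
    static void takeSnapshot(String[] args) {
        if (args.length > 0 && args[0].contains("."))
            throw new IllegalArgumentException(
                "Cannot take a snapshot of a secondary index by itself.");
    }

    static void cleanup() { /* close the JMX connection, etc. */ }
}
```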


 expose 2I CFs to the rest of nodetool
 -

 Key: CASSANDRA-4464
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4464
 Project: Cassandra
  Issue Type: New Feature
  Components: Tools
Reporter: Jonathan Ellis
Assignee: Jason Brown
Priority: Minor
 Fix For: 1.2.2

 Attachments: 4464.txt, 4464-v1.patch, 4464-v2.patch


 This was begun in CASSANDRA-4063.  We should extend it to scrub as well, and 
 probably compact since any sane way to do it for scrub should give the other 
 for free.
 Not sure how easy these will be since they go through CompactionManager via 
 StorageProxy.  I think getValidColumnFamilies could be updated to check for 
 index CFs with dot notation.
 (Other operations like flush or snapshot don't make sense for 2I CFs in 
 isolation of their parent.)



[jira] [Commented] (CASSANDRA-5211) Migrating Clusters with gossip tables that have old dead nodes causes NPE, inability to join cluster

2013-02-06 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13572413#comment-13572413
 ] 

Brandon Williams commented on CASSANDRA-5211:
-

Rick, I don't suppose you might happen to have one of these problem system 
tables left anywhere?  The trace indicates that the rack was missing but the dc 
wasn't, and we only write those together in a single insert.

 Migrating Clusters with gossip tables that have old dead nodes causes NPE, 
 inability to join cluster
 

 Key: CASSANDRA-5211
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5211
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.1
Reporter: Rick Branson
Assignee: Brandon Williams

 I had done a removetoken on this cluster when it was 1.1.x, and it had a 
 ghost entry for the removed node still in the stored ring data. When the 
 nodes loaded the table up after conversion to 1.2 and attempting to migrate 
 to VNodes, I got the following traceback:
 ERROR [WRITE-/10.0.0.0] 2013-01-31 18:35:44,788 CassandraDaemon.java (line 
 133) Exception in thread Thread[WRITE-/10.0.0.0,5,main]
 java.lang.NullPointerException
   at 
 org.apache.cassandra.utils.ByteBufferUtil.string(ByteBufferUtil.java:167)
   at 
 org.apache.cassandra.utils.ByteBufferUtil.string(ByteBufferUtil.java:124)
   at org.apache.cassandra.cql.jdbc.JdbcUTF8.getString(JdbcUTF8.java:73)
   at org.apache.cassandra.cql.jdbc.JdbcUTF8.compose(JdbcUTF8.java:93)
   at org.apache.cassandra.db.marshal.UTF8Type.compose(UTF8Type.java:32)
   at 
 org.apache.cassandra.cql3.UntypedResultSet$Row.getString(UntypedResultSet.java:96)
   at 
 org.apache.cassandra.db.SystemTable.loadDcRackInfo(SystemTable.java:402)
   at 
 org.apache.cassandra.locator.Ec2Snitch.getDatacenter(Ec2Snitch.java:117)
   at 
 org.apache.cassandra.locator.DynamicEndpointSnitch.getDatacenter(DynamicEndpointSnitch.java:127)
   at 
 org.apache.cassandra.net.OutboundTcpConnection.isLocalDC(OutboundTcpConnection.java:74)
   at 
 org.apache.cassandra.net.OutboundTcpConnection.connect(OutboundTcpConnection.java:270)
   at 
 org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:142)
 This is because these ghost nodes had a NULL tokens list in the system/peers 
 table. A workaround was to delete the offending row in the system/peers table 
 and restart the node.
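The workaround described above corresponds to a CQL delete against the local system keyspace; the peer address is taken from the trace and is illustrative. Run it on the affected node, then restart that node:

```sql
-- Remove the ghost entry for the removed node from the local peers table.
DELETE FROM system.peers WHERE peer = '10.0.0.0';
```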



[jira] [Updated] (CASSANDRA-5211) Migrating Clusters with gossip tables that have old dead nodes causes NPE, inability to join cluster

2013-02-06 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-5211:


Attachment: 5211.txt

Confirmed that a null tokens list won't cause this.  Regardless of how we got 
here, it's more correct to confirm the existence of the dc and rack than just 
the dc, so patch to do so.

 Migrating Clusters with gossip tables that have old dead nodes causes NPE, 
 inability to join cluster
 

 Key: CASSANDRA-5211
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5211
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.1
Reporter: Rick Branson
Assignee: Brandon Williams
 Attachments: 5211.txt





[1/3] git commit: Expose 2I to the rest of nodetool Patch by Jason Brown, reviewed by brandonwilliams for CASSANDRA-4464

2013-02-06 Thread brandonwilliams
Expose 2I to the rest of nodetool
Patch by Jason Brown, reviewed by brandonwilliams for CASSANDRA-4464


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cef8eb07
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cef8eb07
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cef8eb07

Branch: refs/heads/cassandra-1.2
Commit: cef8eb07dfb4b81cd4e985bc86c5506602650c93
Parents: f309183
Author: Brandon Williams brandonwilli...@apache.org
Authored: Wed Feb 6 08:33:33 2013 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Wed Feb 6 08:33:33 2013 -0600

--
 CHANGES.txt|1 +
 .../cassandra/db/compaction/CompactionManager.java |1 -
 .../apache/cassandra/service/StorageService.java   |   67 ---
 src/java/org/apache/cassandra/tools/NodeCmd.java   |7 ++-
 src/java/org/apache/cassandra/tools/NodeProbe.java |   69 --
 5 files changed, 122 insertions(+), 23 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/cef8eb07/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index bdab1c5..905db57 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -10,6 +10,7 @@
  * Make sstable directory picking blacklist-aware again (CASSANDRA-5193)
  * Correctly expire gossip states for edge cases (CASSANDRA-5216)
  * Improve handling of directory creation failures (CASSANDRA-5196)
+ * Expose secondary indicies to the rest of nodetool (CASSANDRA-4464)
 
 
 1.2.1

http://git-wip-us.apache.org/repos/asf/cassandra/blob/cef8eb07/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index 168a3f3..1d9af16 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -278,7 +278,6 @@ public class CompactionManager implements 
CompactionManagerMBean
 {
  public void perform(ColumnFamilyStore cfs, Collection<SSTableReader> sstables)
 {
-assert !cfs.isIndex();
 for (final SSTableReader sstable : sstables)
 {
 // SSTables are marked by the caller

http://git-wip-us.apache.org/repos/asf/cassandra/blob/cef8eb07/src/java/org/apache/cassandra/service/StorageService.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageService.java 
b/src/java/org/apache/cassandra/service/StorageService.java
index 34b2f12..5f00c88 100644
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@ -35,6 +35,8 @@ import javax.management.NotificationBroadcasterSupport;
 import javax.management.ObjectName;
 
 import com.google.common.collect.*;
+
+import org.apache.cassandra.db.index.SecondaryIndex;
 import org.apache.log4j.Level;
 import org.apache.commons.lang.StringUtils;
 import org.slf4j.Logger;
@@ -2084,7 +2086,7 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
  throw new RuntimeException("Cleanup of the system table is neither necessary nor wise");
 
 CounterId.OneShotRenewer counterIdRenewer = new 
CounterId.OneShotRenewer();
-for (ColumnFamilyStore cfStore : getValidColumnFamilies(tableName, 
columnFamilies))
+for (ColumnFamilyStore cfStore : getValidColumnFamilies(false, false, 
tableName, columnFamilies))
 {
 cfStore.forceCleanup(counterIdRenewer);
 }
@@ -2092,19 +2094,19 @@ public class StorageService extends 
NotificationBroadcasterSupport implements IE
 
 public void scrub(String tableName, String... columnFamilies) throws 
IOException, ExecutionException, InterruptedException
 {
-for (ColumnFamilyStore cfStore : getValidColumnFamilies(tableName, 
columnFamilies))
+for (ColumnFamilyStore cfStore : getValidColumnFamilies(false, false, 
tableName, columnFamilies))
 cfStore.scrub();
 }
 
 public void upgradeSSTables(String tableName, String... columnFamilies) 
throws IOException, ExecutionException, InterruptedException
 {
-for (ColumnFamilyStore cfStore : getValidColumnFamilies(tableName, 
columnFamilies))
+for (ColumnFamilyStore cfStore : getValidColumnFamilies(true, true, 
tableName, columnFamilies))
 cfStore.sstablesRewrite();
 }
 
 public void forceTableCompaction(String tableName, String... 
columnFamilies) 

[2/3] git commit: Expose 2I to the rest of nodetool Patch by Jason Brown, reviewed by brandonwilliams for CASSANDRA-4464

2013-02-06 Thread brandonwilliams
Expose 2I to the rest of nodetool
Patch by Jason Brown, reviewed by brandonwilliams for CASSANDRA-4464


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cef8eb07
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cef8eb07
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cef8eb07

Branch: refs/heads/trunk
Commit: cef8eb07dfb4b81cd4e985bc86c5506602650c93
Parents: f309183
Author: Brandon Williams brandonwilli...@apache.org
Authored: Wed Feb 6 08:33:33 2013 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Wed Feb 6 08:33:33 2013 -0600

--
 CHANGES.txt|1 +
 .../cassandra/db/compaction/CompactionManager.java |1 -
 .../apache/cassandra/service/StorageService.java   |   67 ---
 src/java/org/apache/cassandra/tools/NodeCmd.java   |7 ++-
 src/java/org/apache/cassandra/tools/NodeProbe.java |   69 --
 5 files changed, 122 insertions(+), 23 deletions(-)
--



[3/3] git commit: Merge branch 'cassandra-1.2' into trunk

2013-02-06 Thread brandonwilliams
Updated Branches:
  refs/heads/cassandra-1.2 f3091835e -> cef8eb07d
  refs/heads/trunk 95ffb5d2d -> ed79a59d9


Merge branch 'cassandra-1.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ed79a59d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ed79a59d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ed79a59d

Branch: refs/heads/trunk
Commit: ed79a59d95578502ae3ff087dcaccb8abc9dba40
Parents: 95ffb5d cef8eb0
Author: Brandon Williams brandonwilli...@apache.org
Authored: Wed Feb 6 08:34:03 2013 -0600
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Wed Feb 6 08:34:03 2013 -0600

--
 CHANGES.txt|1 +
 .../cassandra/db/compaction/CompactionManager.java |1 -
 .../apache/cassandra/service/StorageService.java   |   67 ---
 src/java/org/apache/cassandra/tools/NodeCmd.java   |7 ++-
 src/java/org/apache/cassandra/tools/NodeProbe.java |   69 --
 5 files changed, 122 insertions(+), 23 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ed79a59d/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ed79a59d/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ed79a59d/src/java/org/apache/cassandra/service/StorageService.java
--
diff --cc src/java/org/apache/cassandra/service/StorageService.java
index 1eca2b9,5f00c88..ef3237d
--- a/src/java/org/apache/cassandra/service/StorageService.java
+++ b/src/java/org/apache/cassandra/service/StorageService.java
@@@ -2240,9 -2285,9 +2285,9 @@@ public class StorageService extends Not
  public void forceTableFlush(final String tableName, final String... 
columnFamilies)
  throws IOException, ExecutionException, InterruptedException
  {
- for (ColumnFamilyStore cfStore : getValidColumnFamilies(tableName, 
columnFamilies))
+ for (ColumnFamilyStore cfStore : getValidColumnFamilies(true, false, 
tableName, columnFamilies))
  {
 -logger.debug("Forcing flush on keyspace " + tableName + ", CF " + cfStore.getColumnFamilyName());
 +logger.debug("Forcing flush on keyspace " + tableName + ", CF " + cfStore.name);
  cfStore.forceBlockingFlush();
  }
  }
@@@ -2367,9 -2412,9 +2412,9 @@@
  public AntiEntropyService.RepairFuture forceTableRepair(final Range<Token> range, final String tableName, boolean isSequential, boolean isLocal, final String... columnFamilies) throws IOException
  {
  ArrayList<String> names = new ArrayList<String>();
- for (ColumnFamilyStore cfStore : getValidColumnFamilies(tableName, 
columnFamilies))
+ for (ColumnFamilyStore cfStore : getValidColumnFamilies(false, false, 
tableName, columnFamilies))
  {
 -names.add(cfStore.getColumnFamilyName());
 +names.add(cfStore.name);
  }
  
  if (names.isEmpty())



[jira] [Commented] (CASSANDRA-4464) expose 2I CFs to the rest of nodetool

2013-02-06 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13572449#comment-13572449
 ] 

Brandon Williams commented on CASSANDRA-4464:
-

Oops, you're right, I think I forgot the '-cf' flag so it treated them both as 
keyspaces. +1 and committed, thanks.




[jira] [Updated] (CASSANDRA-5182) Deletable rows are sometimes not removed during compaction

2013-02-06 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-5182:
--

Attachment: 5182-1.2.txt

Patch attached for 1.2 and above. It checks the index file using getPosition 
if the sstable has an AlwaysPresentFilter as its bloom filter.

 Deletable rows are sometimes not removed during compaction
 --

 Key: CASSANDRA-5182
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5182
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.0.7
Reporter: Binh Van Nguyen
Assignee: Yuki Morishita
 Fix For: 1.2.2

 Attachments: 5182-1.1.txt, 5182-1.2.txt, test_ttl.tar.gz


 Our use case is write-heavy and read-seldom. To optimize the space used, 
 we've set bloom_filter_fp_ratio=1.0. That, along with the fact that each 
 row is only written to once and that there are more than 20 SSTables, 
 keeps the rows from ever being compacted. Here is the code:
 https://github.com/apache/cassandra/blob/cassandra-1.1/src/java/org/apache/cassandra/db/compaction/CompactionController.java#L162
 We hit this corner case, and because of it C* keeps consuming more and more 
 space on disk when it should not.




[jira] [Commented] (CASSANDRA-5112) Setting up authentication tables with custom authentication plugin

2013-02-06 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13572562#comment-13572562
 ] 

Aleksey Yeschenko commented on CASSANDRA-5112:
--

bq. are you also addressing the second issue? About being able to change the 
strategy and RF for the system_auth keyspace?

That's the plan.

 Setting up authentication tables with custom authentication plugin
 --

 Key: CASSANDRA-5112
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5112
 Project: Cassandra
  Issue Type: Improvement
  Components: API
Affects Versions: 1.2.0
Reporter: Dirkjan Bussink
Assignee: Aleksey Yeschenko
Priority: Minor
 Fix For: 1.2.2


 I'm working on updating https://github.com/nedap/cassandra-auth with the new 
 authentication API's in Cassandra 1.2.0. I have stumbled on an issue and I'm 
 not really sure how to handle it.
 For the authentication I want to setup additional column families for the 
 passwords and permissions. As recommended in the documentation of 
 IAuthorizer, I'm trying to create these tables during setup(): Setup is 
 called once upon system startup to initialize the IAuthorizer. For example, 
 use this method to create any required keyspaces/column families..
 The problem is that doing this seems to be a lot harder than I would think, 
 or I'm perhaps missing something obvious. I've tried various attempts, but 
 all have failed:
 - CQL and QueryProcessor.processInternal to setup additional column families. 
 This fails, since processInternal will throw a UnsupportedOperationException 
 due to it being a SchemaAlteringStatement.
 - CQL and QueryProcessor.process. This works after the system has 
 successfully started, but due to the moment setup() is called in the 
 Cassandra boot process, it will fail. It will throw an AssertionError in 
 MigrationManager.java:320, because the gossiper hasn't been started yet.
 - Internal API's. Mimicking how other column families are set up, using 
 CFMetadata and Schema.load. This seems to get the system in some inconsistent 
 state where some parts do see the additional column family, but others don't.
 Does anyone have a recommendation for the path to follow here? What would be 
 the recommended approach for actually setting up those column families during 
 starting for authentication?
 From working on this, I also have another question. I see the default 
 system_auth keyspace is created with SimpleStrategy and a replication 
 factor of 1. Is this a deliberate choice? I can imagine that if a node in a 
 cluster dies, losing the authentication information that happens to be 
 available on that node could be very problematic. If I'm missing any 
 reasoning here, please let me know, but it struck me as something that could 
 cause potential problems. I also don't see a way I could reconfigure this at 
 the moment, and APIs such as CREATE USER do seem to depend on this keyspace.
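 On the replication-factor question: if reconfiguring system_auth is (or 
 becomes) supported, the natural mechanism would be the standard keyspace 
 replication settings, along the lines of the sketch below ('DC1' stands in 
 for an actual datacenter name; whether 1.2 actually permits this for 
 system_auth is exactly the open question raised above):

```sql
-- Sketch: raise replication of the authentication keyspace so credentials
-- survive a node failure ('DC1': 3 assumes a datacenter named DC1):
ALTER KEYSPACE system_auth
  WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3};
```

 This would need to be followed by running `nodetool repair system_auth` on 
 each node so that existing credential rows reach the new replicas.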

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5225) Missing columns, errors when requesting specific columns from wide rows

2013-02-06 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-5225:
---

Attachment: pycassa-repro.py

Attached python script reproduces the issue with pycassa.

 Missing columns, errors when requesting specific columns from wide rows
 ---

 Key: CASSANDRA-5225
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5225
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.1
Reporter: Tyler Hobbs
Priority: Critical
 Attachments: pycassa-repro.py


 With Cassandra 1.2.1 (and probably 1.2.0), I'm seeing some problems with 
 Thrift queries that request a set of specific column names when the row is 
 very wide.
 To reproduce, I'm inserting 10 million columns into a single row and then 
 randomly requesting three columns by name in a loop.  It's common for only 
 one or two of the three columns to be returned.  I'm also seeing stack traces 
 like the following in the Cassandra log:
 {noformat}
 ERROR 13:12:01,017 Exception in thread Thread[ReadStage:76,5,main]
 java.lang.RuntimeException: 
 org.apache.cassandra.io.sstable.CorruptSSTableException: 
 org.apache.cassandra.db.ColumnSerializer$CorruptColumnException: invalid 
 column name length 0 
 (/var/lib/cassandra/data/Keyspace1/CF1/Keyspace1-CF1-ib-5-Data.db, 14035168 
 bytes remaining)
   at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1576)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: org.apache.cassandra.io.sstable.CorruptSSTableException: 
 org.apache.cassandra.db.ColumnSerializer$CorruptColumnException: invalid 
 column name length 0 
 (/var/lib/cassandra/data/Keyspace1/CF1/Keyspace1-CF1-ib-5-Data.db, 14035168 
 bytes remaining)
   at 
 org.apache.cassandra.db.columniterator.SSTableNamesIterator.init(SSTableNamesIterator.java:69)
   at 
 org.apache.cassandra.db.filter.NamesQueryFilter.getSSTableColumnIterator(NamesQueryFilter.java:81)
   at 
 org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:68)
   at 
 org.apache.cassandra.db.CollationController.collectTimeOrderedData(CollationController.java:133)
   at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:65)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1358)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1215)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1127)
   at org.apache.cassandra.db.Table.getRow(Table.java:355)
   at 
 org.apache.cassandra.db.SliceByNamesReadCommand.getRow(SliceByNamesReadCommand.java:64)
   at 
 org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1052)
   at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1572)
   ... 3 more
 {noformat}
 This doesn't seem to happen when the row is smaller, so it might have 
 something to do with incremental large row compaction.



[jira] [Created] (CASSANDRA-5225) Missing columns, errors when requesting specific columns from wide rows

2013-02-06 Thread Tyler Hobbs (JIRA)
Tyler Hobbs created CASSANDRA-5225:
--

 Summary: Missing columns, errors when requesting specific columns 
from wide rows
 Key: CASSANDRA-5225
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5225
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.1
Reporter: Tyler Hobbs
Priority: Critical
 Attachments: pycassa-repro.py




[jira] [Created] (CASSANDRA-5226) CQL3 refactor to allow conversion function

2013-02-06 Thread Sylvain Lebresne (JIRA)
Sylvain Lebresne created CASSANDRA-5226:
---

 Summary: CQL3 refactor to allow conversion function
 Key: CASSANDRA-5226
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5226
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne


In CASSANDRA-5198, we've fixed CQL3 type validation and talked about adding 
conversion functions to ease working with the different data types. However, 
the current CQL3 code makes it fairly hard to add such functions in a non-hacky 
way. In fact, we already support a few conversion functions (token, 
minTimeuuid, maxTimeuuid, now) but the way we support them is extremely ugly 
(the token function is completely special-cased and min/maxTimeuuid are ugly 
hacks in TimeUUIDType that I'm really not proud of).

So I'm attaching a refactor that cleans that up, making it easy to add new 
conversion functions. Now, said refactor is a big one. While the goal is to 
make it easy to add functions, I think it also improves the code in the 
following ways:
* It much more clearly separates the phase of validating the query from 
executing it. In particular, it moves more work into the preparation phase.  
Typically, the parsing of constants is now done in the preparation phase, not the 
execution one. It also groups validation code much more cleanly imo.
* It simplifies UpdateStatement. The Operation business was not very clean and in 
particular the same operations were not handled by the same code depending on 
whether they were prepared or not, which was error prone. This is no longer the 
case.
* It somewhat simplifies the parser. A few parsing rules were a bit too 
convoluted, trying to enforce invariants that are much more easily checked post 
parsing (and doing it post parsing often allows better error messages; the 
parser tends to produce cryptic errors).

The first attached part is the initial refactor. It also adds some relatively 
generic code for adding conversion functions (it would typically not be very 
hard to allow user defined functions, though that's not part of the patch at 
all) and uses that to handle the existing token, minTimeuuid and maxTimeuuid 
functions.

It's also worth mentioning that this first patch introduces type casts. The 
main reason is that it allows multiple overloads of the same function. 
Typically, the minTimeuuid function accepts both string arguments (for dates) and 
integer ones (for timestamps), so when you have:
{noformat}
SELECT * FROM foo WHERE t > minTimeuuid(?);
{noformat}
then the code doesn't know which function to use. So it complains. But you can 
remove the ambiguity with
{noformat}
SELECT * FROM foo WHERE t > minTimeuuid((bigint)?);
{noformat}

The 2nd patch finishes what the first one started by extending this conversion 
function support to select clauses. So after this 2nd patch you can do stuff 
like:
{noformat}
SELECT token(k), k FROM foo;
{noformat}
for instance.

The 3rd patch builds on that to actually add new conversion functions. Namely, 
for every existing CQL3 type it adds a blobTo<type> and a <type>ToBlob 
function that convert from and to blobs. And so you can do (not that this 
example is particularly smart):
{noformat}
SELECT varintToBlob(v) FROM foo WHERE v > blobToVarint(bigintToBlob(3));
{noformat}
Honestly this last patch is more for demonstration purposes and we can discuss 
it separately. In particular, we may want better names for those 
functions. But at least it should highlight that adding new functions is easy 
(this could be used to add methods to work with dates for instance).

Now, at least considering the first 2 patches, this is not a small amount of 
code but I would still suggest pushing this in 1.2 (the patches are against 
1.2) for the following reasons:
 * It fixes a few existing bugs (CASSANDRA-5198 broke prepared statements 
for instance, which this patch fixes) and adds missing validation in a few 
places (we are allowing set literals like \{ 1, 1 \} for instance, which is 
kind of wrong as it suggests we support multisets). We could fix those 
separately but honestly I'm not sure we won't miss some.
 * We do have a fair amount of CQL dtests and I've checked that all pass. The refactor 
also cleans up some parts of the code quite a bit imo. So overall I think I'm 
almost more confident in the code post-refactor than in the current one.
 * We're early in 1.2 and it's an improvement after all. It would be a bit sad 
to have to wait for 2.0 to get this.




[jira] [Updated] (CASSANDRA-5226) CQL3 refactor to allow conversion function

2013-02-06 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-5226:


Fix Version/s: 1.2.2

 CQL3 refactor to allow conversion function
 --

 Key: CASSANDRA-5226
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5226
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 1.2.2




[jira] [Updated] (CASSANDRA-5226) CQL3 refactor to allow conversion function

2013-02-06 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-5226:


Attachment: 0003-Add-bytes-conversion-functions.txt
0002-Allow-functions-in-selection.txt
0001-Refactor-to-support-CQL3-functions.txt

 CQL3 refactor to allow conversion function
 --

 Key: CASSANDRA-5226
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5226
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 1.2.2

 Attachments: 0001-Refactor-to-support-CQL3-functions.txt, 
 0002-Allow-functions-in-selection.txt, 0003-Add-bytes-conversion-functions.txt




[jira] [Updated] (CASSANDRA-5226) CQL3 refactor to allow conversion function

2013-02-06 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5226:
--

Reviewer: iamaleksey

 CQL3 refactor to allow conversion function
 --

 Key: CASSANDRA-5226
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5226
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 1.2.2

 Attachments: 0001-Refactor-to-support-CQL3-functions.txt, 
 0002-Allow-functions-in-selection.txt, 0003-Add-bytes-conversion-functions.txt



git commit: update WordCount for SuperColumn refactor

2013-02-06 Thread jbellis
Updated Branches:
  refs/heads/trunk ed79a59d9 -> 22d8e8448


update WordCount for SuperColumn refactor


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/22d8e844
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/22d8e844
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/22d8e844

Branch: refs/heads/trunk
Commit: 22d8e8448a5db16a2c550664773780748e801bf7
Parents: ed79a59
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Feb 6 16:31:24 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Feb 6 16:31:24 2013 -0600

--
 examples/hadoop_word_count/src/WordCount.java |   10 +-
 1 files changed, 5 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/22d8e844/examples/hadoop_word_count/src/WordCount.java
--
diff --git a/examples/hadoop_word_count/src/WordCount.java b/examples/hadoop_word_count/src/WordCount.java
index a0ad913..398a7cb 100644
--- a/examples/hadoop_word_count/src/WordCount.java
+++ b/examples/hadoop_word_count/src/WordCount.java
@@ -25,7 +25,7 @@ import org.apache.cassandra.hadoop.ColumnFamilyOutputFormat;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import org.apache.cassandra.db.IColumn;
+import org.apache.cassandra.db.Column;
 import org.apache.cassandra.hadoop.ColumnFamilyInputFormat;
 import org.apache.cassandra.hadoop.ConfigHelper;
 import org.apache.cassandra.utils.ByteBufferUtil;
@@ -70,7 +70,7 @@ public class WordCount extends Configured implements Tool
 System.exit(0);
 }
 
-public static class TokenizerMapper extends Mapper<ByteBuffer, SortedMap<ByteBuffer, IColumn>, Text, IntWritable>
+public static class TokenizerMapper extends Mapper<ByteBuffer, SortedMap<ByteBuffer, Column>, Text, IntWritable>
 {
 private final static IntWritable one = new IntWritable(1);
 private Text word = new Text();
@@ -81,9 +81,9 @@ public class WordCount extends Configured implements Tool
 {
 }
 
-public void map(ByteBuffer key, SortedMap<ByteBuffer, IColumn> columns, Context context) throws IOException, InterruptedException
+public void map(ByteBuffer key, SortedMap<ByteBuffer, Column> columns, Context context) throws IOException, InterruptedException
 {
-for (IColumn column : columns.values())
+for (Column column : columns.values())
 {
 String name  = ByteBufferUtil.string(column.name());
 String value = null;
@@ -137,7 +137,7 @@ public class WordCount extends Configured implements Tool
 
 private static Mutation getMutation(Text word, int sum)
 {
-Column c = new Column();
+org.apache.cassandra.thrift.Column c = new org.apache.cassandra.thrift.Column();
 c.setName(Arrays.copyOf(word.getBytes(), word.getLength()));
 c.setValue(ByteBufferUtil.bytes(sum));
 c.setTimestamp(System.currentTimeMillis());
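Stripped of the Hadoop and Cassandra plumbing, the tokenize-and-count logic that TokenizerMapper applies to each column value can be sketched in plain Java. The aggregation into a Map stands in for emitting (word, 1) pairs to the Hadoop context; this is an illustration of the pattern, not the patched class:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.StringTokenizer;

public class WordCountSketch {
    // For each column value, split on whitespace and tally tokens,
    // mirroring the map() loop in WordCount's TokenizerMapper.
    static Map<String, Integer> count(Iterable<String> columnValues) {
        Map<String, Integer> counts = new HashMap<>();
        for (String value : columnValues) {
            StringTokenizer itr = new StringTokenizer(value);
            while (itr.hasMoreTokens())
                counts.merge(itr.nextToken(), 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(count(List.of("word count word")));
    }
}
```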



git commit: update WordCountCounters for SuperColumn refactor

2013-02-06 Thread jbellis
Updated Branches:
  refs/heads/trunk 22d8e8448 -> 63cc6b0c7


update WordCountCounters for SuperColumn refactor


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/63cc6b0c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/63cc6b0c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/63cc6b0c

Branch: refs/heads/trunk
Commit: 63cc6b0c79759121cb777a5baaf5d6e2983d07c8
Parents: 22d8e84
Author: Jonathan Ellis jbel...@apache.org
Authored: Wed Feb 6 16:34:37 2013 -0600
Committer: Jonathan Ellis jbel...@apache.org
Committed: Wed Feb 6 16:34:37 2013 -0600

--
 .../hadoop_word_count/src/WordCountCounters.java   |8 
 1 files changed, 4 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/63cc6b0c/examples/hadoop_word_count/src/WordCountCounters.java
--
diff --git a/examples/hadoop_word_count/src/WordCountCounters.java b/examples/hadoop_word_count/src/WordCountCounters.java
index e5a2460..55d0889 100644
--- a/examples/hadoop_word_count/src/WordCountCounters.java
+++ b/examples/hadoop_word_count/src/WordCountCounters.java
@@ -34,7 +34,7 @@ import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
 import org.apache.hadoop.util.Tool;
 import org.apache.hadoop.util.ToolRunner;
 
-import org.apache.cassandra.db.IColumn;
+import org.apache.cassandra.db.Column;
 import org.apache.cassandra.hadoop.ColumnFamilyInputFormat;
 import org.apache.cassandra.hadoop.ConfigHelper;
 import org.apache.cassandra.thrift.*;
@@ -60,12 +60,12 @@ public class WordCountCounters extends Configured implements Tool
 System.exit(0);
 }
 
-public static class SumMapper extends Mapper<ByteBuffer, SortedMap<ByteBuffer, IColumn>, Text, LongWritable>
+public static class SumMapper extends Mapper<ByteBuffer, SortedMap<ByteBuffer, Column>, Text, LongWritable>
 {
-public void map(ByteBuffer key, SortedMap<ByteBuffer, IColumn> columns, Context context) throws IOException, InterruptedException
+public void map(ByteBuffer key, SortedMap<ByteBuffer, Column> columns, Context context) throws IOException, InterruptedException
 {
 long sum = 0;
-for (IColumn column : columns.values())
+for (Column column : columns.values())
 {
  logger.debug("read " + key + ":" + column.name() + " from " + context.getInputSplit());
 sum += ByteBufferUtil.toLong(column.value());
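The summing loop above just decodes each counter column's 8-byte value into a long and accumulates it. A minimal stand-in for ByteBufferUtil.toLong, assuming the standard big-endian long encoding, looks like:

```java
import java.nio.ByteBuffer;

public class ToLongSketch {
    // Stand-in for ByteBufferUtil.toLong: read an 8-byte big-endian long
    // without disturbing the buffer's position (assumed encoding).
    static long toLong(ByteBuffer b) {
        return b.duplicate().getLong();
    }

    public static void main(String[] args) {
        ByteBuffer b = ByteBuffer.allocate(8).putLong(0, 42L);
        System.out.println(toLong(b)); // prints 42
    }
}
```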



[jira] [Updated] (CASSANDRA-5225) Missing columns, errors when requesting specific columns from wide rows

2013-02-06 Thread Brandon Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-5225:


Fix Version/s: 1.2.2

Bisect says the winner is CASSANDRA-3885, but I never encountered the corrupt 
sstable exception, I don't think that's related.

 Missing columns, errors when requesting specific columns from wide rows
 ---

 Key: CASSANDRA-5225
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5225
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.1
Reporter: Tyler Hobbs
Priority: Critical
 Fix For: 1.2.2

 Attachments: pycassa-repro.py


 With Cassandra 1.2.1 (and probably 1.2.0), I'm seeing some problems with 
 Thrift queries that request a set of specific column names when the row is 
 very wide.
 To reproduce, I'm inserting 10 million columns into a single row and then 
 randomly requesting three columns by name in a loop.  It's common for only 
 one or two of the three columns to be returned.  I'm also seeing stack traces 
 like the following in the Cassandra log:
 {noformat}
 ERROR 13:12:01,017 Exception in thread Thread[ReadStage:76,5,main]
 java.lang.RuntimeException: 
 org.apache.cassandra.io.sstable.CorruptSSTableException: 
 org.apache.cassandra.db.ColumnSerializer$CorruptColumnException: invalid 
 column name length 0 
 (/var/lib/cassandra/data/Keyspace1/CF1/Keyspace1-CF1-ib-5-Data.db, 14035168 
 bytes remaining)
   at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1576)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: org.apache.cassandra.io.sstable.CorruptSSTableException: 
 org.apache.cassandra.db.ColumnSerializer$CorruptColumnException: invalid 
 column name length 0 
 (/var/lib/cassandra/data/Keyspace1/CF1/Keyspace1-CF1-ib-5-Data.db, 14035168 
 bytes remaining)
   at 
 org.apache.cassandra.db.columniterator.SSTableNamesIterator.init(SSTableNamesIterator.java:69)
   at 
 org.apache.cassandra.db.filter.NamesQueryFilter.getSSTableColumnIterator(NamesQueryFilter.java:81)
   at 
 org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:68)
   at 
 org.apache.cassandra.db.CollationController.collectTimeOrderedData(CollationController.java:133)
   at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:65)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1358)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1215)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1127)
   at org.apache.cassandra.db.Table.getRow(Table.java:355)
   at 
 org.apache.cassandra.db.SliceByNamesReadCommand.getRow(SliceByNamesReadCommand.java:64)
   at 
 org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1052)
   at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1572)
   ... 3 more
 {noformat}
 This doesn't seem to happen when the row is smaller, so it might have 
 something to do with incremental large row compaction.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-5225) Missing columns, errors when requesting specific columns from wide rows

2013-02-06 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13572972#comment-13572972
 ] 

Brandon Williams edited comment on CASSANDRA-5225 at 2/6/13 11:01 PM:
--

Bisect says the winner is CASSANDRA-3885, but I never encountered the corrupt 
sstable exception.

  was (Author: brandon.williams):
Bisect says the winner is CASSANDRA-3885, but I never encountered the 
corrupt sstable exception, I don't think that's related.
  

