[jira] [Updated] (CASSANDRA-6281) Use Atomic*FieldUpdater to save memory

2014-01-02 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-6281:
---

Attachment: 0001-Use-Atomic-FieldUpdater-to-save-memory.patch

 Use Atomic*FieldUpdater to save memory
 --

 Key: CASSANDRA-6281
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6281
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
Priority: Minor
 Fix For: 2.1

 Attachments: 0001-Use-Atomic-FieldUpdater-to-save-memory.patch


 Follow-up to CASSANDRA-6278: use Atomic*FieldUpdater in:
 AtomicSortedColumns
 ReadCallback
 WriteResponseHandler
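For illustration only (this is not the attached patch), a minimal sketch of the Atomic*FieldUpdater pattern: one static updater is shared by all instances, so each instance pays for a plain volatile int instead of a separate AtomicInteger object. The `Counter` class is hypothetical.

```java
import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;

// One static updater replaces a per-instance AtomicInteger: each instance
// now holds only a volatile int (4 bytes) instead of a reference plus a
// separate AtomicInteger object (header + field).
public class Counter
{
    // Must be volatile and updated only through the updater.
    private volatile int count = 0;

    private static final AtomicIntegerFieldUpdater<Counter> COUNT_UPDATER =
            AtomicIntegerFieldUpdater.newUpdater(Counter.class, "count");

    public int increment()
    {
        return COUNT_UPDATER.incrementAndGet(this);
    }

    public int get()
    {
        return count;
    }
}
```

The savings matter only for classes instantiated in large numbers, which is exactly the situation for the per-operation objects listed above.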



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6522) DroppableTombstoneRatio JMX value is 0.0 for all CFs

2014-01-02 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860120#comment-13860120
 ] 

Marcus Eriksson commented on CASSANDRA-6522:


This metric only counts deleted columns; I guess you have been doing entire-row 
deletes?

[~jbellis] [~yukim] Should we try to estimate how many columns are affected by 
a row/range tombstone and include that estimate in this metric? 
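As a hedged sketch of the estimate being proposed (all names are hypothetical, not Cassandra's actual metric code): fold an estimated column count for row-level tombstones into the ratio, so whole-row deletes stop reading as 0.0.

```java
// Hypothetical sketch: today's ratio counts only column (cell) tombstones,
// so whole-row deletes contribute nothing. The idea discussed above is to
// estimate the columns shadowed by row/range tombstones and fold that in.
public class TombstoneRatio
{
    /**
     * @param columnTombstones  gc-able column tombstones (what the metric counts today)
     * @param rowTombstones     gc-able row-level tombstones (currently ignored)
     * @param meanColumnsPerRow estimated mean columns per row, e.g. from sstable metadata
     * @param totalColumns      total column count in the sstable
     */
    public static double droppableRatio(long columnTombstones,
                                        long rowTombstones,
                                        double meanColumnsPerRow,
                                        long totalColumns)
    {
        if (totalColumns == 0)
            return 0.0;
        double estimated = columnTombstones + rowTombstones * meanColumnsPerRow;
        return estimated / totalColumns;
    }
}
```

With pure row deletes (columnTombstones = 0) the current behavior is recovered by dropping the second term, which is why the reporter sees 0.0.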

 DroppableTombstoneRatio JMX value is 0.0 for all CFs
 

 Key: CASSANDRA-6522
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6522
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Ubuntu 12.04 LTS, Cassandra 1.2.8
Reporter: Daniel Kador
Priority: Minor

 We're seeing that the JMX value for DroppableTombstoneRatio for all our CFs 
 is 0.0. On the face of it that seems wrong since we've definitely issued a 
 ton of deletes for row keys to expire some old data that we no longer need 
 (and it definitely hasn't been reclaimed from disk yet). Am I 
 misunderstanding what this means / how to use it? We're on 1.2.8 and using 
 leveled compaction for all our CFs.
 gc_grace_seconds is set to 1 day and we've issued a series of deletes over a 
 day ago, so gc_grace has elapsed.
 Cluster is 18 nodes.  Two DCs, so 9 nodes in each DC.  Each node has capacity 
 for 1.5TB or so and is sitting with about 1TB under management.  That's why 
 we wanted to do deletes, obviously.  Most of that 1TB is a single CF (called 
 events) which represents intermediate state for us that we can delete.
 Happy to provide any more info, just let me know.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-6497) Iterable CqlPagingRecordReader

2014-01-02 Thread Luca Rosellini (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luca Rosellini updated CASSANDRA-6497:
--

Priority: Major  (was: Minor)

 Iterable CqlPagingRecordReader
 --

 Key: CASSANDRA-6497
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6497
 Project: Cassandra
  Issue Type: Improvement
  Components: Hadoop
Reporter: Luca Rosellini
 Fix For: 2.1

 Attachments: iterable-CqlPagingRecordReader.diff


 The current CqlPagingRecordReader implementation provides a non-standard way 
 of iterating over the underlying {{rowIterator}}. It would be nice to have an 
 Iterable CqlPagingRecordReader like the one proposed in the attached diff.
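The attached diff is not reproduced here; as a hedged sketch of the general idea, a pull-style reader contract (nextKeyValue()/getCurrentValue(), as in Hadoop's RecordReader) can be adapted to java.lang.Iterable so callers get the enhanced for-loop. The types below are simplified stand-ins, not the real CqlPagingRecordReader.

```java
import java.util.Iterator;
import java.util.NoSuchElementException;

// Adapt a pull-style reader into Iterable. The nested Reader interface is a
// minimal stand-in for Hadoop's RecordReader contract used by
// CqlPagingRecordReader; it is hypothetical, for illustration only.
public class IterableReader<T> implements Iterable<T>
{
    public interface Reader<T>
    {
        boolean nextKeyValue();  // advance; false when exhausted
        T getCurrentValue();     // value at the current position
    }

    private final Reader<T> reader;

    public IterableReader(Reader<T> reader)
    {
        this.reader = reader;
    }

    @Override
    public Iterator<T> iterator()
    {
        return new Iterator<T>()
        {
            private T next;
            private boolean done;

            @Override
            public boolean hasNext()
            {
                if (next == null && !done)
                {
                    if (reader.nextKeyValue())
                        next = reader.getCurrentValue();
                    else
                        done = true;
                }
                return next != null;
            }

            @Override
            public T next()
            {
                if (!hasNext())
                    throw new NoSuchElementException();
                T v = next;
                next = null;
                return v;
            }
        };
    }
}
```

Note the single-pass caveat: a wrapper like this can only hand out one usable Iterator, since the underlying reader cannot be rewound.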



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-6529) sstableloader shows no progress or errors when pointed at a bad directory

2014-01-02 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-6529:
---

Attachment: 0001-verify-that-the-keyspace-exists-in-describeRing.patch

 sstableloader shows no progress or errors when pointed at a bad directory
 -

 Key: CASSANDRA-6529
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6529
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Tyler Hobbs
Assignee: Marcus Eriksson
Priority: Minor
 Fix For: 2.0.5

 Attachments: 
 0001-verify-that-the-keyspace-exists-in-describeRing.patch


 With sstableloader, the source directory is supposed to be in the format 
 {{keyspace_name/table_name/}}.  If you incorrectly put the sstables directly 
 in a {{keyspace_name/}} directory, the sstableloader process will not show 
 any progress, errors, or other output; it will simply hang.
 This was initially reported on the user ML here: 
 http://www.mail-archive.com/user@cassandra.apache.org/msg33916.html
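The attached patch verifies the keyspace in describeRing; as a hedged sketch of that fail-fast shape (hypothetical names, not the patch itself), the point is to raise a clear error instead of silently producing an empty stream plan:

```java
import java.util.Set;

// Hypothetical sketch: fail fast with an actionable message when the
// keyspace inferred from the source directory is unknown, instead of
// hanging with no output.
public class DescribeRingCheck
{
    public static void validateKeyspace(Set<String> knownKeyspaces, String keyspace)
    {
        if (!knownKeyspaces.contains(keyspace))
            throw new IllegalArgumentException(
                "Unknown keyspace '" + keyspace + "'; did you point sstableloader at "
                + "a keyspace_name/table_name/ directory?");
    }
}
```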



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


git commit: Validate CF existence on execution for prepared statement

2014-01-02 Thread slebresne
Updated Branches:
  refs/heads/cassandra-1.2 2f63bbadf -> 7171b7a2c


Validate CF existence on execution for prepared statement

patch by aholmber; reviewed by slebresne for CASSANDRA-6535


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7171b7a2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7171b7a2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7171b7a2

Branch: refs/heads/cassandra-1.2
Commit: 7171b7a2c621c2a0b4f876bef23e4f1ebc5332b0
Parents: 2f63bba
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Thu Jan 2 14:15:06 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Thu Jan 2 14:15:06 2014 +0100

--
 CHANGES.txt| 1 +
 src/java/org/apache/cassandra/service/ClientState.java | 2 ++
 2 files changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7171b7a2/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6c63f9d..64146c1 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -6,6 +6,7 @@
  * Don't allow null max_hint_window_in_ms (CASSANDRA-6419)
  * Validate SliceRange start and finish lengths (CASSANDRA-6521)
  * fsync compression metadata (CASSANDRA-6531)
+ * Validate CF existence on execution for prepared statement (CASSANDRA-6535)
 
 
 1.2.13

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7171b7a2/src/java/org/apache/cassandra/service/ClientState.java
--
diff --git a/src/java/org/apache/cassandra/service/ClientState.java b/src/java/org/apache/cassandra/service/ClientState.java
index e6b0f97..87ccfda 100644
--- a/src/java/org/apache/cassandra/service/ClientState.java
+++ b/src/java/org/apache/cassandra/service/ClientState.java
@@ -36,6 +36,7 @@ import org.apache.cassandra.db.Table;
 import org.apache.cassandra.exceptions.AuthenticationException;
 import org.apache.cassandra.exceptions.InvalidRequestException;
 import org.apache.cassandra.exceptions.UnauthorizedException;
+import org.apache.cassandra.thrift.ThriftValidation;
 import org.apache.cassandra.utils.Pair;
 import org.apache.cassandra.utils.SemanticVersion;
 
@@ -144,6 +145,7 @@ public class ClientState
     public void hasColumnFamilyAccess(String keyspace, String columnFamily, Permission perm)
     throws UnauthorizedException, InvalidRequestException
     {
+        ThriftValidation.validateColumnFamily(keyspace, columnFamily);
         hasAccess(keyspace, perm, DataResource.columnFamily(keyspace, columnFamily));
     }
 



[jira] [Resolved] (CASSANDRA-6535) Prepared Statement on Defunct CF Can Impact Cluster Availability

2014-01-02 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-6535.
-

   Resolution: Fixed
Fix Version/s: 1.2.14
 Reviewer: Sylvain Lebresne

+1, committed, thanks.

 Prepared Statement on Defunct CF Can Impact Cluster Availability
 

 Key: CASSANDRA-6535
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6535
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cassandra 1.2.12
 CentOS 6.4
Reporter: Adam Holmberg
 Fix For: 1.2.14

 Attachments: 6535.txt


 *Synopsis:* misbehaving clients can cause DoS on a cluster with a defunct 
 prepared statement
 *Scenario:* 
 1.) Create prepared INSERT statement on existing table X
 2.) Table X is dropped
 3.) Continue using prepared statement from (1)
 *Result:* 
 a.) on coordinator node: COMMIT-LOG-WRITER + MutationStage errors
 b.) on other nodes: UnknownColumnFamilyException reading from socket; 
 closing  -- leads to thrashing inter-node connections
 c.) Other clients of the cluster suffer from I/O timeouts, presumably a 
 result of (b)
 *Other observations:*
 * On single-node clusters, clients return from insert without error because 
 mutation errors are swallowed.
 * On multiple-node clusters, clients receive a confounded 'read timeout' 
 error because the closed internode connections do not propagate the error 
 back.
 * With prepared SELECT statements (as opposed to the INSERT described above), 
 a NullPointerException occurs on the server, and no meaningful error is 
 returned to the client.
 Besides the obvious "don't do that" to the integrator, it would be good if 
 the cluster could handle this error case more gracefully and avoid undue 
 impact.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[2/2] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-01-02 Thread slebresne
Merge branch 'cassandra-1.2' into cassandra-2.0

Conflicts:
src/java/org/apache/cassandra/service/ClientState.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5284e129
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5284e129
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5284e129

Branch: refs/heads/cassandra-2.0
Commit: 5284e129f65ed737897e06af1d79cc6ce9bc4645
Parents: 4ed2234 7171b7a
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Thu Jan 2 14:18:29 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Thu Jan 2 14:18:29 2014 +0100

--
 CHANGES.txt| 1 +
 src/java/org/apache/cassandra/service/ClientState.java | 2 ++
 2 files changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5284e129/CHANGES.txt
--
diff --cc CHANGES.txt
index 958369a,64146c1..7946927
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,30 -1,15 +1,31 @@@
 -1.2.14
 - * Allow executing CREATE statements multiple times (CASSANDRA-6471)
 - * Don't send confusing info with timeouts (CASSANDRA-6491)
 - * Don't resubmit counter mutation runnables internally (CASSANDRA-6427)
 - * Don't drop local mutations without a trace (CASSANDRA-6510)
 - * Don't allow null max_hint_window_in_ms (CASSANDRA-6419)
 - * Validate SliceRange start and finish lengths (CASSANDRA-6521)
 +2.0.5
 +* Delete unfinished compaction incrementally (CASSANDRA-6086)
 +Merged from 1.2:
   * fsync compression metadata (CASSANDRA-6531)
+  * Validate CF existence on execution for prepared statement (CASSANDRA-6535)
  
  
 -1.2.13
 +2.0.4
 + * Allow removing snapshots of no-longer-existing CFs (CASSANDRA-6418)
 + * add StorageService.stopDaemon() (CASSANDRA-4268)
 + * add IRE for invalid CF supplied to get_count (CASSANDRA-5701)
 + * add client encryption support to sstableloader (CASSANDRA-6378)
 + * Fix accept() loop for SSL sockets post-shutdown (CASSANDRA-6468)
 + * Fix size-tiered compaction in LCS L0 (CASSANDRA-6496)
 + * Fix assertion failure in filterColdSSTables (CASSANDRA-6483)
 + * Fix row tombstones in larger-than-memory compactions (CASSANDRA-6008)
 + * Fix cleanup ClassCastException (CASSANDRA-6462)
 + * Reduce gossip memory use by interning VersionedValue strings (CASSANDRA-6410)
 + * Allow specifying datacenters to participate in a repair (CASSANDRA-6218)
 + * Fix divide-by-zero in PCI (CASSANDRA-6403)
 + * Fix setting last compacted key in the wrong level for LCS (CASSANDRA-6284)
 + * Add millisecond precision formats to the timestamp parser (CASSANDRA-6395)
 + * Expose a total memtable size metric for a CF (CASSANDRA-6391)
 + * cqlsh: handle symlinks properly (CASSANDRA-6425)
 + * Fix potential infinite loop when paging query with IN (CASSANDRA-6464)
 + * Fix assertion error in AbstractQueryPager.discardFirst (CASSANDRA-6447)
 + * Fix streaming older SSTable yields unnecessary tombstones (CASSANDRA-6527)
 +Merged from 1.2:
   * Improved error message on bad properties in DDL queries (CASSANDRA-6453)
   * Randomize batchlog candidates selection (CASSANDRA-6481)
   * Fix thundering herd on endpoint cache invalidation (CASSANDRA-6345, 6485)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5284e129/src/java/org/apache/cassandra/service/ClientState.java
--
diff --cc src/java/org/apache/cassandra/service/ClientState.java
index 472fe53,87ccfda..7f312a9
--- a/src/java/org/apache/cassandra/service/ClientState.java
+++ b/src/java/org/apache/cassandra/service/ClientState.java
@@@ -36,7 -36,7 +36,8 @@@ import org.apache.cassandra.db.SystemKe
  import org.apache.cassandra.exceptions.AuthenticationException;
  import org.apache.cassandra.exceptions.InvalidRequestException;
  import org.apache.cassandra.exceptions.UnauthorizedException;
 +import org.apache.cassandra.tracing.Tracing;
+ import org.apache.cassandra.thrift.ThriftValidation;
  import org.apache.cassandra.utils.Pair;
  import org.apache.cassandra.utils.SemanticVersion;
  



[1/2] git commit: Validate CF existence on execution for prepared statement

2014-01-02 Thread slebresne
Updated Branches:
  refs/heads/cassandra-2.0 4ed223407 -> 5284e129f


Validate CF existence on execution for prepared statement

patch by aholmber; reviewed by slebresne for CASSANDRA-6535


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7171b7a2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7171b7a2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7171b7a2

Branch: refs/heads/cassandra-2.0
Commit: 7171b7a2c621c2a0b4f876bef23e4f1ebc5332b0
Parents: 2f63bba
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Thu Jan 2 14:15:06 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Thu Jan 2 14:15:06 2014 +0100

--
 CHANGES.txt| 1 +
 src/java/org/apache/cassandra/service/ClientState.java | 2 ++
 2 files changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7171b7a2/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6c63f9d..64146c1 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -6,6 +6,7 @@
  * Don't allow null max_hint_window_in_ms (CASSANDRA-6419)
  * Validate SliceRange start and finish lengths (CASSANDRA-6521)
  * fsync compression metadata (CASSANDRA-6531)
+ * Validate CF existence on execution for prepared statement (CASSANDRA-6535)
 
 
 1.2.13

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7171b7a2/src/java/org/apache/cassandra/service/ClientState.java
--
diff --git a/src/java/org/apache/cassandra/service/ClientState.java b/src/java/org/apache/cassandra/service/ClientState.java
index e6b0f97..87ccfda 100644
--- a/src/java/org/apache/cassandra/service/ClientState.java
+++ b/src/java/org/apache/cassandra/service/ClientState.java
@@ -36,6 +36,7 @@ import org.apache.cassandra.db.Table;
 import org.apache.cassandra.exceptions.AuthenticationException;
 import org.apache.cassandra.exceptions.InvalidRequestException;
 import org.apache.cassandra.exceptions.UnauthorizedException;
+import org.apache.cassandra.thrift.ThriftValidation;
 import org.apache.cassandra.utils.Pair;
 import org.apache.cassandra.utils.SemanticVersion;
 
@@ -144,6 +145,7 @@ public class ClientState
     public void hasColumnFamilyAccess(String keyspace, String columnFamily, Permission perm)
     throws UnauthorizedException, InvalidRequestException
     {
+        ThriftValidation.validateColumnFamily(keyspace, columnFamily);
         hasAccess(keyspace, perm, DataResource.columnFamily(keyspace, columnFamily));
     }
 



[3/3] git commit: Merge branch 'cassandra-2.0' into trunk

2014-01-02 Thread slebresne
Merge branch 'cassandra-2.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8165af5d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8165af5d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8165af5d

Branch: refs/heads/trunk
Commit: 8165af5db91c43dd564f879bd7f124275e3b9608
Parents: 80548b3 5284e12
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Thu Jan 2 14:19:47 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Thu Jan 2 14:19:47 2014 +0100

--
 CHANGES.txt| 1 +
 src/java/org/apache/cassandra/service/ClientState.java | 2 ++
 2 files changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8165af5d/CHANGES.txt
--



[2/3] git commit: Merge branch 'cassandra-1.2' into cassandra-2.0

2014-01-02 Thread slebresne
Merge branch 'cassandra-1.2' into cassandra-2.0

Conflicts:
src/java/org/apache/cassandra/service/ClientState.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5284e129
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5284e129
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5284e129

Branch: refs/heads/trunk
Commit: 5284e129f65ed737897e06af1d79cc6ce9bc4645
Parents: 4ed2234 7171b7a
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Thu Jan 2 14:18:29 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Thu Jan 2 14:18:29 2014 +0100

--
 CHANGES.txt| 1 +
 src/java/org/apache/cassandra/service/ClientState.java | 2 ++
 2 files changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5284e129/CHANGES.txt
--
diff --cc CHANGES.txt
index 958369a,64146c1..7946927
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,30 -1,15 +1,31 @@@
 -1.2.14
 - * Allow executing CREATE statements multiple times (CASSANDRA-6471)
 - * Don't send confusing info with timeouts (CASSANDRA-6491)
 - * Don't resubmit counter mutation runnables internally (CASSANDRA-6427)
 - * Don't drop local mutations without a trace (CASSANDRA-6510)
 - * Don't allow null max_hint_window_in_ms (CASSANDRA-6419)
 - * Validate SliceRange start and finish lengths (CASSANDRA-6521)
 +2.0.5
 +* Delete unfinished compaction incrementally (CASSANDRA-6086)
 +Merged from 1.2:
   * fsync compression metadata (CASSANDRA-6531)
+  * Validate CF existence on execution for prepared statement (CASSANDRA-6535)
  
  
 -1.2.13
 +2.0.4
 + * Allow removing snapshots of no-longer-existing CFs (CASSANDRA-6418)
 + * add StorageService.stopDaemon() (CASSANDRA-4268)
 + * add IRE for invalid CF supplied to get_count (CASSANDRA-5701)
 + * add client encryption support to sstableloader (CASSANDRA-6378)
 + * Fix accept() loop for SSL sockets post-shutdown (CASSANDRA-6468)
 + * Fix size-tiered compaction in LCS L0 (CASSANDRA-6496)
 + * Fix assertion failure in filterColdSSTables (CASSANDRA-6483)
 + * Fix row tombstones in larger-than-memory compactions (CASSANDRA-6008)
 + * Fix cleanup ClassCastException (CASSANDRA-6462)
 + * Reduce gossip memory use by interning VersionedValue strings (CASSANDRA-6410)
 + * Allow specifying datacenters to participate in a repair (CASSANDRA-6218)
 + * Fix divide-by-zero in PCI (CASSANDRA-6403)
 + * Fix setting last compacted key in the wrong level for LCS (CASSANDRA-6284)
 + * Add millisecond precision formats to the timestamp parser (CASSANDRA-6395)
 + * Expose a total memtable size metric for a CF (CASSANDRA-6391)
 + * cqlsh: handle symlinks properly (CASSANDRA-6425)
 + * Fix potential infinite loop when paging query with IN (CASSANDRA-6464)
 + * Fix assertion error in AbstractQueryPager.discardFirst (CASSANDRA-6447)
 + * Fix streaming older SSTable yields unnecessary tombstones (CASSANDRA-6527)
 +Merged from 1.2:
   * Improved error message on bad properties in DDL queries (CASSANDRA-6453)
   * Randomize batchlog candidates selection (CASSANDRA-6481)
   * Fix thundering herd on endpoint cache invalidation (CASSANDRA-6345, 6485)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/5284e129/src/java/org/apache/cassandra/service/ClientState.java
--
diff --cc src/java/org/apache/cassandra/service/ClientState.java
index 472fe53,87ccfda..7f312a9
--- a/src/java/org/apache/cassandra/service/ClientState.java
+++ b/src/java/org/apache/cassandra/service/ClientState.java
@@@ -36,7 -36,7 +36,8 @@@ import org.apache.cassandra.db.SystemKe
  import org.apache.cassandra.exceptions.AuthenticationException;
  import org.apache.cassandra.exceptions.InvalidRequestException;
  import org.apache.cassandra.exceptions.UnauthorizedException;
 +import org.apache.cassandra.tracing.Tracing;
+ import org.apache.cassandra.thrift.ThriftValidation;
  import org.apache.cassandra.utils.Pair;
  import org.apache.cassandra.utils.SemanticVersion;
  



[1/3] git commit: Validate CF existence on execution for prepared statement

2014-01-02 Thread slebresne
Updated Branches:
  refs/heads/trunk 80548b359 -> 8165af5db


Validate CF existence on execution for prepared statement

patch by aholmber; reviewed by slebresne for CASSANDRA-6535


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7171b7a2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7171b7a2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7171b7a2

Branch: refs/heads/trunk
Commit: 7171b7a2c621c2a0b4f876bef23e4f1ebc5332b0
Parents: 2f63bba
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Thu Jan 2 14:15:06 2014 +0100
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Thu Jan 2 14:15:06 2014 +0100

--
 CHANGES.txt| 1 +
 src/java/org/apache/cassandra/service/ClientState.java | 2 ++
 2 files changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7171b7a2/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 6c63f9d..64146c1 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -6,6 +6,7 @@
  * Don't allow null max_hint_window_in_ms (CASSANDRA-6419)
  * Validate SliceRange start and finish lengths (CASSANDRA-6521)
  * fsync compression metadata (CASSANDRA-6531)
+ * Validate CF existence on execution for prepared statement (CASSANDRA-6535)
 
 
 1.2.13

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7171b7a2/src/java/org/apache/cassandra/service/ClientState.java
--
diff --git a/src/java/org/apache/cassandra/service/ClientState.java b/src/java/org/apache/cassandra/service/ClientState.java
index e6b0f97..87ccfda 100644
--- a/src/java/org/apache/cassandra/service/ClientState.java
+++ b/src/java/org/apache/cassandra/service/ClientState.java
@@ -36,6 +36,7 @@ import org.apache.cassandra.db.Table;
 import org.apache.cassandra.exceptions.AuthenticationException;
 import org.apache.cassandra.exceptions.InvalidRequestException;
 import org.apache.cassandra.exceptions.UnauthorizedException;
+import org.apache.cassandra.thrift.ThriftValidation;
 import org.apache.cassandra.utils.Pair;
 import org.apache.cassandra.utils.SemanticVersion;
 
@@ -144,6 +145,7 @@ public class ClientState
 public void hasColumnFamilyAccess(String keyspace, String columnFamily, 
Permission perm)
 throws UnauthorizedException, InvalidRequestException
 {
+ThriftValidation.validateColumnFamily(keyspace, columnFamily);
 hasAccess(keyspace, perm, DataResource.columnFamily(keyspace, 
columnFamily));
 }
 



[jira] [Commented] (CASSANDRA-5351) Avoid repairing already-repaired data by default

2014-01-02 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860240#comment-13860240
 ] 

Jonathan Ellis commented on CASSANDRA-5351:
---

Ping?

 Avoid repairing already-repaired data by default
 

 Key: CASSANDRA-5351
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5351
 Project: Cassandra
  Issue Type: Task
  Components: Core
Reporter: Jonathan Ellis
Assignee: Lyuben Todorov
  Labels: repair
 Fix For: 2.1

 Attachments: node1.log, node1_v2_full.log, node2.log, 
 node2_v2_full.log, node3.log, node3_v2_full.log


 Repair has always built its merkle tree from all the data in a columnfamily, 
 which is guaranteed to work but is inefficient.
 We can improve this by remembering which sstables have already been 
 successfully repaired, and only repairing sstables new since the last repair. 
  (This automatically makes CASSANDRA-3362 much less of a problem too.)
 The tricky part is, compaction will (if not taught otherwise) mix repaired 
 data together with non-repaired.  So we should segregate unrepaired sstables 
 from the repaired ones.
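As a hedged sketch of the segregation idea only (hypothetical types; the assumption here is that each sstable carries a repaired-at marker, with 0 meaning never repaired): never feed repaired and unrepaired sstables into the same compaction.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: split compaction candidates into repaired and
// unrepaired buckets so compaction never mixes the two, keeping the
// "already repaired" property meaningful for sstables.
public class RepairSegregation
{
    public static class SSTable
    {
        final long repairedAt;  // assumed convention: 0 means never repaired

        public SSTable(long repairedAt)
        {
            this.repairedAt = repairedAt;
        }
    }

    /** Returns [repaired, unrepaired] buckets for separate compactions. */
    public static List<List<SSTable>> segregate(List<SSTable> candidates)
    {
        List<SSTable> repaired = new ArrayList<>();
        List<SSTable> unrepaired = new ArrayList<>();
        for (SSTable s : candidates)
            (s.repairedAt > 0 ? repaired : unrepaired).add(s);
        List<List<SSTable>> buckets = new ArrayList<>();
        buckets.add(repaired);
        buckets.add(unrepaired);
        return buckets;
    }
}
```

With this split, the next repair only needs to build its Merkle tree from the unrepaired bucket.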



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-5351) Avoid repairing already-repaired data by default

2014-01-02 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5351:
--

Reviewer: Marcus Eriksson  (was: Sylvain Lebresne)

 Avoid repairing already-repaired data by default
 

 Key: CASSANDRA-5351
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5351
 Project: Cassandra
  Issue Type: Task
  Components: Core
Reporter: Jonathan Ellis
Assignee: Lyuben Todorov
  Labels: repair
 Fix For: 2.1

 Attachments: node1.log, node1_v2_full.log, node2.log, 
 node2_v2_full.log, node3.log, node3_v2_full.log


 Repair has always built its merkle tree from all the data in a columnfamily, 
 which is guaranteed to work but is inefficient.
 We can improve this by remembering which sstables have already been 
 successfully repaired, and only repairing sstables new since the last repair. 
  (This automatically makes CASSANDRA-3362 much less of a problem too.)
 The tricky part is, compaction will (if not taught otherwise) mix repaired 
 data together with non-repaired.  So we should segregate unrepaired sstables 
 from the repaired ones.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-6456) log listen address at startup

2014-01-02 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6456:
--

Reviewer: Lyuben Todorov
Assignee: Sean Bridges  (was: Jeremy Hanna)

 log listen address at startup
 -

 Key: CASSANDRA-6456
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6456
 Project: Cassandra
  Issue Type: Wish
  Components: Core
Reporter: Jeremy Hanna
Assignee: Sean Bridges
Priority: Trivial
 Attachments: CASSANDRA-6456.patch


 When looking through logs from a cluster, sometimes it's handy to know the 
 address a node is from the logs.  It would be convenient if on startup, we 
 indicated the listen address for that node.
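A minimal sketch of what such a startup line could look like (the logger wiring is simplified to a returned string; Cassandra uses slf4j, and the method name here is hypothetical):

```java
import java.net.InetAddress;

// Hypothetical sketch: format a startup log line that records the node's
// listen address, so logs gathered from a cluster identify their origin.
public class StartupLog
{
    public static String listenAddressLine(InetAddress listenAddress)
    {
        return "Starting up: listen address is " + listenAddress.getHostAddress();
    }
}
```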



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


svn commit: r1554829 - in /cassandra/site: publish/download/index.html publish/index.html src/settings.py

2014-01-02 Thread slebresne
Author: slebresne
Date: Thu Jan  2 15:18:29 2014
New Revision: 1554829

URL: http://svn.apache.org/r1554829
Log:
Update website for 2.0.4 release

Modified:
cassandra/site/publish/download/index.html
cassandra/site/publish/index.html
cassandra/site/src/settings.py

Modified: cassandra/site/publish/download/index.html
URL: 
http://svn.apache.org/viewvc/cassandra/site/publish/download/index.html?rev=1554829r1=1554828r2=1554829view=diff
==
--- cassandra/site/publish/download/index.html (original)
+++ cassandra/site/publish/download/index.html Thu Jan  2 15:18:29 2014
@@ -49,8 +49,8 @@
   Cassandra releases include the core server, the <a href="http://wiki.apache.org/cassandra/NodeTool">nodetool</a> administration command-line interface, and a development shell (<a href="http://cassandra.apache.org/doc/cql/CQL.html"><tt>cqlsh</tt></a> and the old <tt>cassandra-cli</tt>).
 
   <p>
-  The latest stable release of Apache Cassandra is 2.0.3
-  (released on 2013-11-25).  <i>If you're just
+  The latest stable release of Apache Cassandra is 2.0.4
+  (released on 2013-12-30).  <i>If you're just
   starting out, download this one.</i>
   </p>
 
@@ -59,13 +59,13 @@
   <ul>
     <li>
         <a class="filename"
-           href="http://www.apache.org/dyn/closer.cgi?path=/cassandra/2.0.3/apache-cassandra-2.0.3-bin.tar.gz"
+           href="http://www.apache.org/dyn/closer.cgi?path=/cassandra/2.0.4/apache-cassandra-2.0.4-bin.tar.gz"
            onclick="javascript: pageTracker._trackPageview('/clicks/binary_download');">
-          apache-cassandra-2.0.3-bin.tar.gz
+          apache-cassandra-2.0.4-bin.tar.gz
         </a>
-        [<a href="http://www.apache.org/dist/cassandra/2.0.3/apache-cassandra-2.0.3-bin.tar.gz.asc">PGP</a>]
-        [<a href="http://www.apache.org/dist/cassandra/2.0.3/apache-cassandra-2.0.3-bin.tar.gz.md5">MD5</a>]
-        [<a href="http://www.apache.org/dist/cassandra/2.0.3/apache-cassandra-2.0.3-bin.tar.gz.sha1">SHA1</a>]
+        [<a href="http://www.apache.org/dist/cassandra/2.0.4/apache-cassandra-2.0.4-bin.tar.gz.asc">PGP</a>]
+        [<a href="http://www.apache.org/dist/cassandra/2.0.4/apache-cassandra-2.0.4-bin.tar.gz.md5">MD5</a>]
+        [<a href="http://www.apache.org/dist/cassandra/2.0.4/apache-cassandra-2.0.4-bin.tar.gz.sha1">SHA1</a>]
     </li>
     <li>
         <a href="http://wiki.apache.org/cassandra/DebianPackaging">Debian installation instructions</a>
@@ -144,13 +144,13 @@
   <ul>
     <li>
         <a class="filename"
-           href="http://www.apache.org/dyn/closer.cgi?path=/cassandra/2.0.3/apache-cassandra-2.0.3-src.tar.gz"
+           href="http://www.apache.org/dyn/closer.cgi?path=/cassandra/2.0.4/apache-cassandra-2.0.4-src.tar.gz"
            onclick="javascript: pageTracker._trackPageview('/clicks/source_download');">
-          apache-cassandra-2.0.3-src.tar.gz
+          apache-cassandra-2.0.4-src.tar.gz
         </a>
-        [<a href="http://www.apache.org/dist/cassandra/2.0.3/apache-cassandra-2.0.3-src.tar.gz.asc">PGP</a>]
-        [<a href="http://www.apache.org/dist/cassandra/2.0.3/apache-cassandra-2.0.3-src.tar.gz.md5">MD5</a>]
-        [<a href="http://www.apache.org/dist/cassandra/2.0.3/apache-cassandra-2.0.3-src.tar.gz.sha1">SHA1</a>]
+        [<a href="http://www.apache.org/dist/cassandra/2.0.4/apache-cassandra-2.0.4-src.tar.gz.asc">PGP</a>]
+        [<a href="http://www.apache.org/dist/cassandra/2.0.4/apache-cassandra-2.0.4-src.tar.gz.md5">MD5</a>]
+        [<a href="http://www.apache.org/dist/cassandra/2.0.4/apache-cassandra-2.0.4-src.tar.gz.sha1">SHA1</a>]
     </li>

     <li>

Modified: cassandra/site/publish/index.html
URL: 
http://svn.apache.org/viewvc/cassandra/site/publish/index.html?rev=1554829r1=1554828r2=1554829view=diff
==
--- cassandra/site/publish/index.html (original)
+++ cassandra/site/publish/index.html Thu Jan  2 15:18:29 2014
@@ -76,8 +76,8 @@
   <h2>Download</h2>
   <div class="inner rc">
     <p>
-        The latest release is <b>2.0.3</b>
-        <span class="relnotes">(<a href="http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=CHANGES.txt;hb=refs/tags/cassandra-2.0.3">Changes</a>)</span>
+        The latest release is <b>2.0.4</b>
+        <span class="relnotes">(<a href="http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=CHANGES.txt;hb=refs/tags/cassandra-2.0.4">Changes</a>)</span>
     </p>

     <p><a class="filename" href="/download/">Download options</a></p>

Modified: cassandra/site/src/settings.py
URL: 
http://svn.apache.org/viewvc/cassandra/site/src/settings.py?rev=1554829r1=1554828r2=1554829view=diff
==
--- cassandra/site/src/settings.py (original)
+++ cassandra/site/src/settings.py Thu Jan  2 15:18:29 2014
@@ -98,8 +98,8 @@ class CassandraDef(object):
 veryoldstable_version = '1.1.12'
 veryoldstable_release_date = '2013-05-27'
 veryoldstable_exists = True
-stable_version = '2.0.3'
-stable_release_date = '2013-11-25'
+stable_version = '2.0.4'
+stable_release_date = '2013-12-30'
 

[jira] [Assigned] (CASSANDRA-6522) DroppableTombstoneRatio JMX value is 0.0 for all CFs

2014-01-02 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reassigned CASSANDRA-6522:
-

Assignee: Marcus Eriksson

Yes.

 DroppableTombstoneRatio JMX value is 0.0 for all CFs
 

 Key: CASSANDRA-6522
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6522
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Ubuntu 12.04 LTS, Cassandra 1.2.8
Reporter: Daniel Kador
Assignee: Marcus Eriksson
Priority: Minor

 We're seeing that the JMX value for DroppableTombstoneRatio for all our CFs 
 is 0.0. On the face of it that seems wrong since we've definitely issued a 
 ton of deletes for row keys to expire some old data that we no longer need 
 (and it definitely hasn't been reclaimed from disk yet). Am I 
 misunderstanding what this means / how to use it? We're on 1.2.8 and using 
 leveled compaction for all our CFs.
 gc_grace_seconds is set to 1 day and we've issued a series of deletes over a 
 day ago, so gc_grace has elapsed.
 Cluster is 18 nodes.  Two DCs, so 9 nodes in each DC.  Each node has capacity 
 for 1.5TB or so and is sitting with about 1TB under management.  That's why 
 we wanted to do deletes, obviously.  Most of that 1TB is a single CF (called 
 events) which represents intermediate state for us that we can delete.
 Happy to provide any more info, just let me know.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-6529) sstableloader shows no progress or errors when pointed at a bad directory

2014-01-02 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6529:
--

Reviewer: Tyler Hobbs

 sstableloader shows no progress or errors when pointed at a bad directory
 -

 Key: CASSANDRA-6529
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6529
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Tyler Hobbs
Assignee: Marcus Eriksson
Priority: Minor
 Fix For: 2.0.5

 Attachments: 
 0001-verify-that-the-keyspace-exists-in-describeRing.patch


 With sstableloader, the source directory is supposed to be in the format 
 {{keyspace_name/table_name/}}.  If you incorrectly just put the sstables 
 in a {{keyspace_name/}} directory, the sstableloader process will not show 
 any progress, errors, or other output; it will simply hang.
 This was initially reported on the user ML here: 
 http://www.mail-archive.com/user@cassandra.apache.org/msg33916.html



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-6281) Use Atomic*FieldUpdater to save memory

2014-01-02 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6281:
--

Reviewer: Benedict

This is probably safe for 2.0.5, right?

 Use Atomic*FieldUpdater to save memory
 --

 Key: CASSANDRA-6281
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6281
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
Priority: Minor
 Fix For: 2.1

 Attachments: 0001-Use-Atomic-FieldUpdater-to-save-memory.patch


 Followup to CASSANDRA-6278, use Atomic*FieldUpdater in;
 AtomicSortedColumns
 ReadCallback
 WriteResponseHandler



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-6522) DroppableTombstoneRatio JMX value is 0.0 for all CFs

2014-01-02 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6522:
--

Fix Version/s: 2.0.5

 DroppableTombstoneRatio JMX value is 0.0 for all CFs
 

 Key: CASSANDRA-6522
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6522
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Ubuntu 12.04 LTS, Cassandra 1.2.8
Reporter: Daniel Kador
Assignee: Marcus Eriksson
Priority: Minor
 Fix For: 2.0.5


 We're seeing that the JMX value for DroppableTombstoneRatio for all our CFs 
 is 0.0. On the face of it that seems wrong since we've definitely issued a 
 ton of deletes for row keys to expire some old data that we no longer need 
 (and it definitely hasn't been reclaimed from disk yet). Am I 
 misunderstanding what this means / how to use it? We're on 1.2.8 and using 
 leveled compaction for all our CFs.
 gc_grace_seconds is set to 1 day and we've issued a series of deletes over a 
 day ago, so gc_grace has elapsed.
 Cluster is 18 nodes.  Two DCs, so 9 nodes in each DC.  Each node has capacity 
 for 1.5TB or so and is sitting with about 1TB under management.  That's why 
 we wanted to do deletes, obviously.  Most of that 1TB is a single CF (called 
 events) which represents intermediate state for us that we can delete.
 Happy to provide any more info, just let me know.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (CASSANDRA-6537) Starting node with auto_bootstrap false causes node to become replica for all ranges

2014-01-02 Thread T Jake Luciani (JIRA)
T Jake Luciani created CASSANDRA-6537:
-

 Summary: Starting node with auto_bootstrap false causes node to 
become replica for all ranges
 Key: CASSANDRA-6537
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6537
 Project: Cassandra
  Issue Type: Bug
Reporter: T Jake Luciani


We have a datacenter with 8 nodes and RF=3

When trying to add a new node with auto_bootstrap false I noticed that nodetool 
describering showed the new node was in the endpoint list for all ranges.





--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Assigned] (CASSANDRA-6537) Starting node with auto_bootstrap false causes node to become replica for all ranges

2014-01-02 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis reassigned CASSANDRA-6537:
-

Assignee: Ryan McGuire

 Starting node with auto_bootstrap false causes node to become replica for all 
 ranges
 

 Key: CASSANDRA-6537
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6537
 Project: Cassandra
  Issue Type: Bug
Reporter: T Jake Luciani
Assignee: Ryan McGuire

 We have a datacenter with 8 nodes and RF=3
 When trying to add a new node with auto_bootstrap false I noticed that 
 nodetool describering showed the new node was in the endpoint list for all 
 ranges.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6281) Use Atomic*FieldUpdater to save memory

2014-01-02 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860285#comment-13860285
 ] 

Sylvain Lebresne commented on CASSANDRA-6281:
-

Playing devil's advocate here, I'd rather stick to 2.1. There is no such thing 
as an entirely safe patch, and this sounds like a pretty minor improvement in 
terms of concrete impact (it's definitely nice to have, though).

 Use Atomic*FieldUpdater to save memory
 --

 Key: CASSANDRA-6281
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6281
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
Priority: Minor
 Fix For: 2.1

 Attachments: 0001-Use-Atomic-FieldUpdater-to-save-memory.patch


 Followup to CASSANDRA-6278, use Atomic*FieldUpdater in;
 AtomicSortedColumns
 ReadCallback
 WriteResponseHandler



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6281) Use Atomic*FieldUpdater to save memory

2014-01-02 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860297#comment-13860297
 ] 

Benedict commented on CASSANDRA-6281:
-

Don't even need to apply the patch to see it looks good. Every {{-}} line is 
matched by a semantically identical {{+}} line, except in DWRH, which is almost 
identical and definitely fine.

Note, I did apply it, just to make sure. Everything looks hunky-dory.

I'm comfortable with applying it to 2.0.5; the patch is about as safe as they 
come, although I think the benefit is fairly minimal, so I don't think it is 
urgent. The overall contribution of these fields to memory consumption in the 
2.0 branch is going to be small, probably ~2% or less of memtable consumption 
on average.

The only thing I would say is that the {{received = 0}} assignment in 
ReadCallback is unnecessary.
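For readers following along, the pattern under review can be sketched as below; a minimal illustration, with class and field names that are hypothetical rather than taken from the patch. The point is that one static updater is shared across all instances, replacing a per-instance AtomicInteger object:

```java
import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;

// Illustrative sketch: a volatile int plus one shared static updater
// replaces a per-instance AtomicInteger, saving the wrapper object on
// every instance. Names are made up, not from the Cassandra patch.
class Callback
{
    // One updater object is shared by every Callback instance.
    private static final AtomicIntegerFieldUpdater<Callback> receivedUpdater =
            AtomicIntegerFieldUpdater.newUpdater(Callback.class, "received");

    // Must be volatile (and named exactly as passed to newUpdater above).
    private volatile int received;

    int incrementReceived()
    {
        return receivedUpdater.incrementAndGet(this);
    }

    int received()
    {
        return received;
    }
}

public class FieldUpdaterDemo
{
    public static void main(String[] args)
    {
        Callback cb = new Callback();
        cb.incrementReceived();
        cb.incrementReceived();
        System.out.println(cb.received()); // prints 2
    }
}
```

Since a volatile int field already defaults to 0, no explicit initialisation is needed; this is also why an explicit zero assignment alongside such an updater would be redundant.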


 Use Atomic*FieldUpdater to save memory
 --

 Key: CASSANDRA-6281
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6281
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
Priority: Minor
 Fix For: 2.1

 Attachments: 0001-Use-Atomic-FieldUpdater-to-save-memory.patch


 Followup to CASSANDRA-6278, use Atomic*FieldUpdater in;
 AtomicSortedColumns
 ReadCallback
 WriteResponseHandler



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6281) Use Atomic*FieldUpdater to save memory

2014-01-02 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860307#comment-13860307
 ] 

Jonathan Ellis commented on CASSANDRA-6281:
---

For 2% benefit I'm fine w/ waiting for 2.1.

 Use Atomic*FieldUpdater to save memory
 --

 Key: CASSANDRA-6281
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6281
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
Priority: Minor
 Fix For: 2.1

 Attachments: 0001-Use-Atomic-FieldUpdater-to-save-memory.patch


 Followup to CASSANDRA-6278, use Atomic*FieldUpdater in;
 AtomicSortedColumns
 ReadCallback
 WriteResponseHandler



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-5351) Avoid repairing already-repaired data by default

2014-01-02 Thread Lyuben Todorov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lyuben Todorov updated CASSANDRA-5351:
--

Attachment: (was: node1_v2_full.log)

 Avoid repairing already-repaired data by default
 

 Key: CASSANDRA-5351
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5351
 Project: Cassandra
  Issue Type: Task
  Components: Core
Reporter: Jonathan Ellis
Assignee: Lyuben Todorov
  Labels: repair
 Fix For: 2.1

 Attachments: node1.log, node2.log, node2_v2_full.log, node3.log, 
 node3_v2_full.log


 Repair has always built its merkle tree from all the data in a columnfamily, 
 which is guaranteed to work but is inefficient.
 We can improve this by remembering which sstables have already been 
 successfully repaired, and only repairing sstables new since the last repair. 
  (This automatically makes CASSANDRA-3362 much less of a problem too.)
 The tricky part is, compaction will (if not taught otherwise) mix repaired 
 data together with non-repaired.  So we should segregate unrepaired sstables 
 from the repaired ones.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-5351) Avoid repairing already-repaired data by default

2014-01-02 Thread Lyuben Todorov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lyuben Todorov updated CASSANDRA-5351:
--

Attachment: (was: node2_v2_full.log)

 Avoid repairing already-repaired data by default
 

 Key: CASSANDRA-5351
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5351
 Project: Cassandra
  Issue Type: Task
  Components: Core
Reporter: Jonathan Ellis
Assignee: Lyuben Todorov
  Labels: repair
 Fix For: 2.1


 Repair has always built its merkle tree from all the data in a columnfamily, 
 which is guaranteed to work but is inefficient.
 We can improve this by remembering which sstables have already been 
 successfully repaired, and only repairing sstables new since the last repair. 
  (This automatically makes CASSANDRA-3362 much less of a problem too.)
 The tricky part is, compaction will (if not taught otherwise) mix repaired 
 data together with non-repaired.  So we should segregate unrepaired sstables 
 from the repaired ones.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-5351) Avoid repairing already-repaired data by default

2014-01-02 Thread Lyuben Todorov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lyuben Todorov updated CASSANDRA-5351:
--

Attachment: (was: node3.log)

 Avoid repairing already-repaired data by default
 

 Key: CASSANDRA-5351
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5351
 Project: Cassandra
  Issue Type: Task
  Components: Core
Reporter: Jonathan Ellis
Assignee: Lyuben Todorov
  Labels: repair
 Fix For: 2.1


 Repair has always built its merkle tree from all the data in a columnfamily, 
 which is guaranteed to work but is inefficient.
 We can improve this by remembering which sstables have already been 
 successfully repaired, and only repairing sstables new since the last repair. 
  (This automatically makes CASSANDRA-3362 much less of a problem too.)
 The tricky part is, compaction will (if not taught otherwise) mix repaired 
 data together with non-repaired.  So we should segregate unrepaired sstables 
 from the repaired ones.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-5351) Avoid repairing already-repaired data by default

2014-01-02 Thread Lyuben Todorov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lyuben Todorov updated CASSANDRA-5351:
--

Attachment: (was: node1.log)

 Avoid repairing already-repaired data by default
 

 Key: CASSANDRA-5351
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5351
 Project: Cassandra
  Issue Type: Task
  Components: Core
Reporter: Jonathan Ellis
Assignee: Lyuben Todorov
  Labels: repair
 Fix For: 2.1


 Repair has always built its merkle tree from all the data in a columnfamily, 
 which is guaranteed to work but is inefficient.
 We can improve this by remembering which sstables have already been 
 successfully repaired, and only repairing sstables new since the last repair. 
  (This automatically makes CASSANDRA-3362 much less of a problem too.)
 The tricky part is, compaction will (if not taught otherwise) mix repaired 
 data together with non-repaired.  So we should segregate unrepaired sstables 
 from the repaired ones.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-5351) Avoid repairing already-repaired data by default

2014-01-02 Thread Lyuben Todorov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lyuben Todorov updated CASSANDRA-5351:
--

Attachment: (was: node3_v2_full.log)

 Avoid repairing already-repaired data by default
 

 Key: CASSANDRA-5351
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5351
 Project: Cassandra
  Issue Type: Task
  Components: Core
Reporter: Jonathan Ellis
Assignee: Lyuben Todorov
  Labels: repair
 Fix For: 2.1


 Repair has always built its merkle tree from all the data in a columnfamily, 
 which is guaranteed to work but is inefficient.
 We can improve this by remembering which sstables have already been 
 successfully repaired, and only repairing sstables new since the last repair. 
  (This automatically makes CASSANDRA-3362 much less of a problem too.)
 The tricky part is, compaction will (if not taught otherwise) mix repaired 
 data together with non-repaired.  So we should segregate unrepaired sstables 
 from the repaired ones.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-5351) Avoid repairing already-repaired data by default

2014-01-02 Thread Lyuben Todorov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lyuben Todorov updated CASSANDRA-5351:
--

Attachment: (was: node2.log)

 Avoid repairing already-repaired data by default
 

 Key: CASSANDRA-5351
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5351
 Project: Cassandra
  Issue Type: Task
  Components: Core
Reporter: Jonathan Ellis
Assignee: Lyuben Todorov
  Labels: repair
 Fix For: 2.1


 Repair has always built its merkle tree from all the data in a columnfamily, 
 which is guaranteed to work but is inefficient.
 We can improve this by remembering which sstables have already been 
 successfully repaired, and only repairing sstables new since the last repair. 
  (This automatically makes CASSANDRA-3362 much less of a problem too.)
 The tricky part is, compaction will (if not taught otherwise) mix repaired 
 data together with non-repaired.  So we should segregate unrepaired sstables 
 from the repaired ones.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6281) Use Atomic*FieldUpdater to save memory

2014-01-02 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860315#comment-13860315
 ] 

Benedict commented on CASSANDRA-6281:
-

Just for absolute correctness, I'll clarify:

The current overhead per column is around 100 bytes, plus a little extra for 
the row as a whole. This will save around 16 bytes per row on average in the 
memtable (the ReadCallbacks/WriteResponseHandlers are too short-lived to have 
a major impact). So in a table with only one almost-empty column this might be 
a large saving, 10-15%, but it goes down rapidly. And I doubt many people have 
tables with fewer than 4-5 columns, at which point it's about 4% excluding the 
data itself (so maybe 3% including it), getting smaller with each additional 
column / byte of payload.
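As a back-of-envelope check of those figures (ignoring the extra per-row overhead mentioned above, which is why the one-column case here lands slightly above the 10-15% range quoted):

```latex
% Approximate memtable saving for a row of n columns, assuming
% ~100 bytes of overhead per column and ~16 bytes saved per row:
\mathrm{saving}(n) \approx \frac{16}{100\,n},
\qquad \mathrm{saving}(1) \approx 16\%, \quad \mathrm{saving}(4) \approx 4\%
```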

 Use Atomic*FieldUpdater to save memory
 --

 Key: CASSANDRA-6281
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6281
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
Priority: Minor
 Fix For: 2.1

 Attachments: 0001-Use-Atomic-FieldUpdater-to-save-memory.patch


 Followup to CASSANDRA-6278, use Atomic*FieldUpdater in;
 AtomicSortedColumns
 ReadCallback
 WriteResponseHandler



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-5351) Avoid repairing already-repaired data by default

2014-01-02 Thread Lyuben Todorov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lyuben Todorov updated CASSANDRA-5351:
--

Attachment: 5351_node1.log
5351_nodetool.log
5351_node2.log
5351_node3.log

Adding the updated logs. The exception didn't show up in the previous logs 
because the assert error does not get thrown (even when running with -ea), so 
I switched out the assert for a [runtime 
exception|https://gist.github.com/lyubent/8221953].

 Avoid repairing already-repaired data by default
 

 Key: CASSANDRA-5351
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5351
 Project: Cassandra
  Issue Type: Task
  Components: Core
Reporter: Jonathan Ellis
Assignee: Lyuben Todorov
  Labels: repair
 Fix For: 2.1

 Attachments: 5351_node1.log, 5351_node2.log, 5351_node3.log, 
 5351_nodetool.log


 Repair has always built its merkle tree from all the data in a columnfamily, 
 which is guaranteed to work but is inefficient.
 We can improve this by remembering which sstables have already been 
 successfully repaired, and only repairing sstables new since the last repair. 
  (This automatically makes CASSANDRA-3362 much less of a problem too.)
 The tricky part is, compaction will (if not taught otherwise) mix repaired 
 data together with non-repaired.  So we should segregate unrepaired sstables 
 from the repaired ones.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Comment Edited] (CASSANDRA-6537) Starting node with auto_bootstrap false causes node to become replica for all ranges

2014-01-02 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860318#comment-13860318
 ] 

T Jake Luciani edited comment on CASSANDRA-6537 at 1/2/14 4:40 PM:
---

My guess is we need to add the following to the autobootstrap=false case:

{code}
// if our schema hasn't matched yet, keep sleeping until it does
// (post CASSANDRA-1391 we don't expect this to be necessary very
// often, but it doesn't hurt to be careful)
while (!MigrationManager.isReadyForBootstrap())
{
    setMode(Mode.JOINING, "waiting for schema information to complete", true);
    try
    {
        Thread.sleep(1000);
    }
    catch (InterruptedException e)
    {
        throw new AssertionError(e);
    }
}
setMode(Mode.JOINING, "waiting for pending range calculation", true);
PendingRangeCalculatorService.instance.blockUntilFinished();
setMode(Mode.JOINING, "calculation complete, ready to bootstrap", true);
{code}


was (Author: tjake):
My guess is we need to add the following to the autobootstrap case:

{code}
// if our schema hasn't matched yet, keep sleeping until it does
// (post CASSANDRA-1391 we don't expect this to be necessary very
// often, but it doesn't hurt to be careful)
while (!MigrationManager.isReadyForBootstrap())
{
    setMode(Mode.JOINING, "waiting for schema information to complete", true);
    try
    {
        Thread.sleep(1000);
    }
    catch (InterruptedException e)
    {
        throw new AssertionError(e);
    }
}
setMode(Mode.JOINING, "waiting for pending range calculation", true);
PendingRangeCalculatorService.instance.blockUntilFinished();
setMode(Mode.JOINING, "calculation complete, ready to bootstrap", true);
{code}

 Starting node with auto_bootstrap false causes node to become replica for all 
 ranges
 

 Key: CASSANDRA-6537
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6537
 Project: Cassandra
  Issue Type: Bug
Reporter: T Jake Luciani
Assignee: Ryan McGuire

 We have a datacenter with 8 nodes and RF=3
 When trying to add a new node with auto_bootstrap false I noticed that 
 nodetool describering showed the new node was in the endpoint list for all 
 ranges.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6537) Starting node with auto_bootstrap false causes node to become replica for all ranges

2014-01-02 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860318#comment-13860318
 ] 

T Jake Luciani commented on CASSANDRA-6537:
---

My guess is we need to add the following to the autobootstrap case:

{code}
// if our schema hasn't matched yet, keep sleeping until it does
// (post CASSANDRA-1391 we don't expect this to be necessary very
// often, but it doesn't hurt to be careful)
while (!MigrationManager.isReadyForBootstrap())
{
    setMode(Mode.JOINING, "waiting for schema information to complete", true);
    try
    {
        Thread.sleep(1000);
    }
    catch (InterruptedException e)
    {
        throw new AssertionError(e);
    }
}
setMode(Mode.JOINING, "waiting for pending range calculation", true);
PendingRangeCalculatorService.instance.blockUntilFinished();
setMode(Mode.JOINING, "calculation complete, ready to bootstrap", true);
{code}

 Starting node with auto_bootstrap false causes node to become replica for all 
 ranges
 

 Key: CASSANDRA-6537
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6537
 Project: Cassandra
  Issue Type: Bug
Reporter: T Jake Luciani
Assignee: Ryan McGuire

 We have a datacenter with 8 nodes and RF=3
 When trying to add a new node with auto_bootstrap false I noticed that 
 nodetool describering showed the new node was in the endpoint list for all 
 ranges.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6529) sstableloader shows no progress or errors when pointed at a bad directory

2014-01-02 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860335#comment-13860335
 ] 

Tyler Hobbs commented on CASSANDRA-6529:


It would be nice to have an error message instead of a stack trace:

{noformat}
~/cassandra $ bin/sstableloader /foo/bar -d 127.0.0.1
Exception in thread "main" java.lang.RuntimeException: Could not retrieve 
endpoint ranges: 
at 
org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:239)
at 
org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:149)
at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:79)
Caused by: InvalidRequestException(why:No such keyspace: foo)
at 
org.apache.cassandra.thrift.Cassandra$describe_ring_result$describe_ring_resultStandardScheme.read(Cassandra.java:34055)
at 
org.apache.cassandra.thrift.Cassandra$describe_ring_result$describe_ring_resultStandardScheme.read(Cassandra.java:34022)
at 
org.apache.cassandra.thrift.Cassandra$describe_ring_result.read(Cassandra.java:33964)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at 
org.apache.cassandra.thrift.Cassandra$Client.recv_describe_ring(Cassandra.java:1251)
at 
org.apache.cassandra.thrift.Cassandra$Client.describe_ring(Cassandra.java:1238)
at 
org.apache.cassandra.tools.BulkLoader$ExternalClient.init(BulkLoader.java:215)
{noformat}

Can we wrap the describe_ring() call in a try/catch for InvalidRequestException 
and just print the {{why}} attribute plus a brief message that suggests 
checking the usage?
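A self-contained sketch of that suggested handling; here InvalidRequestException and the describe-ring call are stand-ins for the Thrift-generated classes, and the exact surrounding code in BulkLoader.java may differ:

```java
// Sketch: surface the failure's "why" text plus a usage hint, rather than
// letting a raw stack trace escape. The exception class below is a stand-in
// for the Thrift-generated org.apache.cassandra.thrift.InvalidRequestException.
class InvalidRequestException extends Exception
{
    final String why;
    InvalidRequestException(String why) { super(why); this.why = why; }
}

public class LoaderErrorDemo
{
    // Stand-in for Cassandra$Client.describe_ring(keyspace).
    static void describeRing(String keyspace) throws InvalidRequestException
    {
        throw new InvalidRequestException("No such keyspace: " + keyspace);
    }

    // Returns a friendly one-line error instead of propagating the exception.
    static String friendlyError(String keyspace)
    {
        try
        {
            describeRing(keyspace);
            return null;
        }
        catch (InvalidRequestException e)
        {
            return "Could not retrieve endpoint ranges: " + e.why
                 + " -- check that the source directory is <keyspace>/<table>/";
        }
    }

    public static void main(String[] args)
    {
        System.out.println(friendlyError("foo"));
    }
}
```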

 sstableloader shows no progress or errors when pointed at a bad directory
 -

 Key: CASSANDRA-6529
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6529
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Tyler Hobbs
Assignee: Marcus Eriksson
Priority: Minor
 Fix For: 2.0.5

 Attachments: 
 0001-verify-that-the-keyspace-exists-in-describeRing.patch


 With sstableloader, the source directory is supposed to be in the format 
 {{keyspace_name/table_name/}}.  If you incorrectly just put the sstables 
 in a {{keyspace_name/}} directory, the sstableloader process will not show 
 any progress, errors, or other output; it will simply hang.
 This was initially reported on the user ML here: 
 http://www.mail-archive.com/user@cassandra.apache.org/msg33916.html



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6522) DroppableTombstoneRatio JMX value is 0.0 for all CFs

2014-01-02 Thread Daniel Kador (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860352#comment-13860352
 ] 

Daniel Kador commented on CASSANDRA-6522:
-

Marcus Eriksson: Yeah, we were doing row deletes, not column deletes.

Thanks for getting this scheduled.

 DroppableTombstoneRatio JMX value is 0.0 for all CFs
 

 Key: CASSANDRA-6522
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6522
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Ubuntu 12.04 LTS, Cassandra 1.2.8
Reporter: Daniel Kador
Assignee: Marcus Eriksson
Priority: Minor
 Fix For: 2.0.5


 We're seeing that the JMX value for DroppableTombstoneRatio for all our CFs 
 is 0.0. On the face of it that seems wrong since we've definitely issued a 
 ton of deletes for row keys to expire some old data that we no longer need 
 (and it definitely hasn't been reclaimed from disk yet). Am I 
 misunderstanding what this means / how to use it? We're on 1.2.8 and using 
 leveled compaction for all our CFs.
 gc_grace_seconds is set to 1 day and we've issued a series of deletes over a 
 day ago, so gc_grace has elapsed.
 Cluster is 18 nodes.  Two DCs, so 9 nodes in each DC.  Each node has capacity 
 for 1.5TB or so and is sitting with about 1TB under management.  That's why 
 we wanted to do deletes, obviously.  Most of that 1TB is a single CF (called 
 events) which represents intermediate state for us that we can delete.
 Happy to provide any more info, just let me know.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6131) JAVA_HOME on cassandra-env.sh is ignored on Debian packages

2014-01-02 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860368#comment-13860368
 ] 

Michael Shuler commented on CASSANDRA-6131:
---

Works fine for me when setting JAVA_HOME in the recommended location, 
/etc/default/cassandra, or in cassandra-env.sh.

 JAVA_HOME on cassandra-env.sh is ignored on Debian packages
 ---

 Key: CASSANDRA-6131
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6131
 Project: Cassandra
  Issue Type: Bug
  Components: Packaging
Reporter: Sebastián Lacuesta
Assignee: Eric Evans
  Labels: debian
 Fix For: 2.0.5

 Attachments: 6131-2.patch, 6131.patch


 I've just got upgraded to 2.0.1 package from the apache repositories using 
 apt. I had the JAVA_HOME environment variable set in 
 /etc/cassandra/cassandra-env.sh but after the upgrade it only worked by 
 setting it on /usr/sbin/cassandra script. I can't configure java 7 system 
 wide, only for cassandra.
 Off-toppic: Thanks for getting rid of the jsvc mess.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6537) Starting node with auto_bootstrap false causes node to become replica for all ranges

2014-01-02 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860381#comment-13860381
 ] 

T Jake Luciani commented on CASSANDRA-6537:
---

I think this is happening because the node is part of a new rack. No, it's not 
related to bootstrapping...

 Starting node with auto_bootstrap false causes node to become replica for all 
 ranges
 

 Key: CASSANDRA-6537
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6537
 Project: Cassandra
  Issue Type: Bug
Reporter: T Jake Luciani
Assignee: Ryan McGuire

 We have a datacenter with 8 nodes and RF=3
 When trying to add a new node with auto_bootstrap false I noticed that 
 nodetool describering showed the new node was in the endpoint list for all 
 ranges.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Resolved] (CASSANDRA-6537) Starting node with auto_bootstrap false causes node to become replica for all ranges

2014-01-02 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-6537.
---

Resolution: Not A Problem
  Assignee: (was: Ryan McGuire)

 Starting node with auto_bootstrap false causes node to become replica for all 
 ranges
 

 Key: CASSANDRA-6537
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6537
 Project: Cassandra
  Issue Type: Bug
Reporter: T Jake Luciani

 We have a datacenter with 8 nodes and RF=3
 When trying to add a new node with auto_bootstrap false I noticed that 
 nodetool describering showed the new node was in the endpoint list for all 
 ranges.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (CASSANDRA-6538) Provide a read-time CQL function to display the data size of columns and rows

2014-01-02 Thread Johnny Miller (JIRA)
Johnny Miller created CASSANDRA-6538:


 Summary: Provide a read-time CQL function to display the data size 
of columns and rows
 Key: CASSANDRA-6538
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6538
 Project: Cassandra
  Issue Type: Improvement
Reporter: Johnny Miller


It would be extremely useful to be able to work out the size of rows and 
columns via CQL. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6271) Replace SnapTree in AtomicSortedColumns

2014-01-02 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860551#comment-13860551
 ] 

Jonathan Ellis commented on CASSANDRA-6271:
---

Moving on to Modifier[Level].

What makes this hard to understand is that it does not map closely to the 
classic b-tree insert algorithm:
# Add to the appropriate leaf
# Split the leaf if necessary, add the median to the parent
# Split the parent if necessary, etc.



 Replace SnapTree in AtomicSortedColumns
 ---

 Key: CASSANDRA-6271
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6271
 Project: Cassandra
  Issue Type: Improvement
Reporter: Benedict
Assignee: Benedict
  Labels: performance
 Attachments: oprate.svg


 On the write path a huge percentage of time is spent in GC (50% in my tests, 
 if accounting for slow down due to parallel marking). SnapTrees are both GC 
 unfriendly due to their structure and also very expensive to keep around - 
 each column name in AtomicSortedColumns uses > 100 bytes on average 
 (excluding the actual ByteBuffer).
 I suggest using a sorted array; changes are supplied at-once, as opposed to 
 one at a time, and if < 10% of the keys in the array change (and data equal 
 to < 10% of the size of the key array) we simply overlay a new array of 
 changes only over the top. Otherwise we rewrite the array. This method should 
 ensure much less GC overhead, and also save approximately 80% of the current 
 memory overhead.
 TreeMap is a similarly difficult object for the GC, and a related task might be 
 to remove it where not strictly necessary, even though we don't keep them 
 hanging around for long. TreeMapBackedSortedColumns, for instance, seems to 
 be used in a lot of places where we could simply sort the columns.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6538) Provide a read-time CQL function to display the data size of columns and rows

2014-01-02 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860576#comment-13860576
 ] 

Jonathan Ellis commented on CASSANDRA-6538:
---

Useful for what?

 Provide a read-time CQL function to display the data size of columns and rows
 -

 Key: CASSANDRA-6538
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6538
 Project: Cassandra
  Issue Type: Improvement
Reporter: Johnny Miller

 It would be extremely useful to be able to work out the size of rows and 
 columns via CQL. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Assigned] (CASSANDRA-6536) SStable gets corrupted after keyspace drop and recreation

2014-01-02 Thread Ryan McGuire (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McGuire reassigned CASSANDRA-6536:
---

Assignee: Russ Hatch  (was: Ryan McGuire)

Russ, can you reproduce this? If so, let's create a dtest for it.

 SStable gets corrupted after keyspace drop and recreation
 -

 Key: CASSANDRA-6536
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6536
 Project: Cassandra
  Issue Type: Bug
  Environment: Cassandra 1.2.12 & 1.2.13
Reporter: Dominic Letz
Assignee: Russ Hatch

 ERROR [ReadStage:41] 2014-01-02 14:27:00,629 CassandraDaemon.java (line 191) 
 Exception in thread Thread[ReadStage:41,5,main]
 java.lang.RuntimeException: 
 org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.IOException: 
 Corrupt (negative) value length encountered
 When running a test like this the SECOND TIME:
 DROP KEYSPACE testspace;
 CREATE KEYSPACE testspace with REPLICATION = {'class':'SimpleStrategy', 
 'replication_factor':1} AND durable_writes = false;
 USE testspace;
 CREATE TABLE testtable (id text PRIMARY KEY, group text) WITH compression = 
 {'sstable_compression':'LZ4Compressor'};
 CREATE INDEX testindex ON testtable (group);
 INSERT INTO testtable (id, group) VALUES ('1', 'beta');
 INSERT INTO testtable (id, group) VALUES ('2', 'gamma');
 INSERT INTO testtable (id, group) VALUES ('3', 'delta');
 INSERT INTO testtable (id, group) VALUES ('4', 'epsilon');
 INSERT INTO testtable (id, group) VALUES ('5', 'alpha');
 INSERT INTO testtable (id, group) VALUES ('6', 'beta');
 INSERT INTO testtable (id, group) VALUES ('7', 'gamma');
 INSERT INTO testtable (id, group) VALUES ('8', 'delta');
 INSERT INTO testtable (id, group) VALUES ('9', 'epsilon');
 INSERT INTO testtable (id, group) VALUES ('00010', 'alpha');
 INSERT INTO testtable (id, group) VALUES ('00011', 'beta');
 INSERT INTO testtable (id, group) VALUES ('00012', 'gamma');
 INSERT INTO testtable (id, group) VALUES ('00013', 'delta');
 INSERT INTO testtable (id, group) VALUES ('00014', 'epsilon');
 INSERT INTO testtable (id, group) VALUES ('00015', 'alpha');
 INSERT INTO testtable (id, group) VALUES ('00016', 'beta');
 INSERT INTO testtable (id, group) VALUES ('00017', 'gamma');
 ... 
 INSERT INTO testtable (id, group) VALUES ('10', 'alpha');
 SELECT COUNT(*) FROM testspace.testtable WHERE group = 'alpha' LIMIT 11;



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6456) log listen address at startup

2014-01-02 Thread Lyuben Todorov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860592#comment-13860592
 ] 

Lyuben Todorov commented on CASSANDRA-6456:
---

I think we should change the format to a single line (helps when grep'ing); see 
[this gist|https://gist.github.com/lyubent/8224068].

There is a code formatting nit too:
- YamlConfigurationLoader#logConfig - the opening brace should be placed on a new line

Other than that the patch looks good.

 log listen address at startup
 -

 Key: CASSANDRA-6456
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6456
 Project: Cassandra
  Issue Type: Wish
  Components: Core
Reporter: Jeremy Hanna
Assignee: Sean Bridges
Priority: Trivial
 Attachments: CASSANDRA-6456.patch


 When looking through logs from a cluster, sometimes it's handy to know the 
 address a node is from the logs.  It would be convenient if on startup, we 
 indicated the listen address for that node.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-5930) Offline scrubs can choke on broken files

2014-01-02 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860591#comment-13860591
 ] 

Tyler Hobbs commented on CASSANDRA-5930:


[~jeffpotter] what version of Cassandra were you running when you hit the above 
error?

As far as the original stacktrace for this ticket goes, it's unfortunately 
necessary for counter CFs.  CASSANDRA-2759 explains the reasoning.  I suppose I 
could make the error message mention that and point to the ticket.

The scrub code looks reasonably robust in general, so I think it's better to 
wait for individual bugs to get reported than to try to improve the code 
without any failure examples.

 Offline scrubs can choke on broken files
 

 Key: CASSANDRA-5930
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5930
 Project: Cassandra
  Issue Type: Bug
Reporter: Jeremiah Jordan
Assignee: Tyler Hobbs
Priority: Minor

 There are cases where offline scrub can hit an exception and die, like:
 {noformat}
 WARNING: Non-fatal error reading row (stacktrace follows)
 Exception in thread main java.io.IOError: java.io.IOError: 
 java.io.EOFException
   at org.apache.cassandra.db.compaction.Scrubber.scrub(Scrubber.java:242)
   at 
 org.apache.cassandra.tools.StandaloneScrubber.main(StandaloneScrubber.java:121)
 Caused by: java.io.IOError: java.io.EOFException
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:116)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.init(PrecompactedRow.java:99)
   at 
 org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:176)
   at 
 org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:182)
   at org.apache.cassandra.db.compaction.Scrubber.scrub(Scrubber.java:171)
   ... 1 more
 Caused by: java.io.EOFException
   at java.io.RandomAccessFile.readFully(RandomAccessFile.java:399)
   at java.io.RandomAccessFile.readFully(RandomAccessFile.java:377)
   at 
 org.apache.cassandra.utils.BytesReadTracker.readFully(BytesReadTracker.java:95)
   at 
 org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:401)
   at 
 org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:363)
   at 
 org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:120)
   at 
 org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:37)
   at 
 org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumns(ColumnFamilySerializer.java:144)
   at 
 org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:234)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:112)
   ... 5 more
 {noformat}
 Since the purpose of offline scrub is to fix broken stuff, it should be more 
 resilient to broken stuff...



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-5351) Avoid repairing already-repaired data by default

2014-01-02 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860604#comment-13860604
 ] 

Jonathan Ellis commented on CASSANDRA-5351:
---

Hmm, you should merge/rebase to latest trunk, or at least grab the logback 
settings to get thread names in there.

But it looks like what I predicted: you have a compaction stomping on your 
to-be-repaired sstable:

{noformat}
INFO  16:17:01 Compacting 
[SSTableReader(path='/Users/lyubentodorov/.ccm/5351_dec8/node2/data/system/local/system-local-jc-4-Data.db'),
 
SSTableReader(path='/Users/lyubentodorov/.ccm/5351_dec8/node2/data/system/local/system-local-jc-2-Data.db'),
 
SSTableReader(path='/Users/lyubentodorov/.ccm/5351_dec8/node2/data/system/local/system-local-jc-1-Data.db'),
 
SSTableReader(path='/Users/lyubentodorov/.ccm/5351_dec8/node2/data/system/local/system-local-jc-3-Data.db')]
...
DEBUG 16:17:01 Marking 
/Users/lyubentodorov/.ccm/5351_dec8/node2/data/system/local/system-local-jc-1-Data.db
 compacted
{noformat}

Interestingly it looks like it almost catches this:
{noformat}
INFO  16:17:45 Skipping anticompaction for 1, required sstable was compacted 
and is no longer available.
{noformat}

... but then it proceeds to repair/anticompact and hit the error anyway.

 Avoid repairing already-repaired data by default
 

 Key: CASSANDRA-5351
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5351
 Project: Cassandra
  Issue Type: Task
  Components: Core
Reporter: Jonathan Ellis
Assignee: Lyuben Todorov
  Labels: repair
 Fix For: 2.1

 Attachments: 5351_node1.log, 5351_node2.log, 5351_node3.log, 
 5351_nodetool.log


 Repair has always built its merkle tree from all the data in a columnfamily, 
 which is guaranteed to work but is inefficient.
 We can improve this by remembering which sstables have already been 
 successfully repaired, and only repairing sstables new since the last repair. 
  (This automatically makes CASSANDRA-3362 much less of a problem too.)
 The tricky part is, compaction will (if not taught otherwise) mix repaired 
 data together with non-repaired.  So we should segregate unrepaired sstables 
 from the repaired ones.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6456) log listen address at startup

2014-01-02 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860609#comment-13860609
 ] 

Jonathan Ellis commented on CASSANDRA-6456:
---

I think this makes some ad-hoc config logging redundant as well?

 log listen address at startup
 -

 Key: CASSANDRA-6456
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6456
 Project: Cassandra
  Issue Type: Wish
  Components: Core
Reporter: Jeremy Hanna
Assignee: Sean Bridges
Priority: Trivial
 Attachments: CASSANDRA-6456.patch


 When looking through logs from a cluster, sometimes it's handy to know the 
 address a node is from the logs.  It would be convenient if on startup, we 
 indicated the listen address for that node.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6271) Replace SnapTree in AtomicSortedColumns

2014-01-02 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860614#comment-13860614
 ] 

Benedict commented on CASSANDRA-6271:
-

It sort of does, it just does it all in a batch and does it to a new copy.

1. It finds the appropriate leaf (but scanning from the previous insertion 
point, instead of the root of the tree)
1a. It copies any elements between the previous insertion point and the new 
insertion point into the new tree (necessary because we're immutable, but does 
not affect semantics)
2. Puts it in the leaf if there is room (splitting if not), but since it can be 
inserted ahead of some pre-existing elements being copied to the new version, 
the split may be delayed until *they* are copied forwards in the next (1a) 
step. These later copies can also be thought of as inserts.
3. Splits the parent if necessary  (in the addChild method, which is called 
whenever we have a completed node to pass to the parent)

It can also, perhaps more easily, be thought of as performing an optimised 
insert of the union of all elements in the original tree with the updating 
collection. Instead of going through each item in the original tree, when a 
sub-tree range doesn't intersect with the updating collection it doesn't 
descend into the tree, but copies it as is (semantically equivalent to 
inserting them all). 

If they do intersect, it descends and performs the optimised insert on the 
sub-tree. The optimised insert on a leaf is simply a binary search for the 
insertion point, with an array copy to move any leaves between the last 
insertion point and the new insertion point. If our next insert takes us back 
up out of a sub-tree/leaf, we just finish inserting all of the remaining 
elements from the original tree, spilling up when necessary as with any insert.

If at any point any of the inserts causes an overflow we split by creating a 
node with half of all elements we've buffered to a new node and adding it 
immediately to the parent. This can happen an arbitrary number of times for a 
sub-tree, since we don't bound the size of the updating collection, a huge 
portion of which could intersect with a given sub-tree.
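Benedict's "optimised insert of the union" framing has a flat analogue: a single merge pass over a sorted base array and a sorted batch of updates, copying untouched runs wholesale into a new (immutable) copy. A minimal sketch under that analogy; the names and types here are illustrative, not Cassandra's:

```java
import java.util.Arrays;

public class BatchMerge
{
    // Merge a sorted base array with a sorted batch of updates into a new
    // array -- a flat analogue of the tree walk described above: untouched
    // elements are copied forward, updates are interleaved at their
    // insertion points, and the original array is never mutated.
    public static int[] mergeSorted(int[] base, int[] updates)
    {
        int[] out = new int[base.length + updates.length];
        int i = 0, j = 0, k = 0;
        while (i < base.length && j < updates.length)
            out[k++] = base[i] <= updates[j] ? base[i++] : updates[j++];
        // "Finish inserting all of the remaining elements from the original
        // tree" once the updating collection is exhausted (or vice versa).
        while (i < base.length)
            out[k++] = base[i++];
        while (j < updates.length)
            out[k++] = updates[j++];
        return out;
    }

    public static void main(String[] args)
    {
        int[] merged = mergeSorted(new int[]{1, 4, 7}, new int[]{2, 5});
        System.out.println(Arrays.toString(merged)); // [1, 2, 4, 5, 7]
    }
}
```

The real B-tree version adds node splitting on overflow, but the copy-forward/insert-interleave structure is the same.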


 Replace SnapTree in AtomicSortedColumns
 ---

 Key: CASSANDRA-6271
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6271
 Project: Cassandra
  Issue Type: Improvement
Reporter: Benedict
Assignee: Benedict
  Labels: performance
 Attachments: oprate.svg


 On the write path a huge percentage of time is spent in GC (50% in my tests, 
 if accounting for slow down due to parallel marking). SnapTrees are both GC 
 unfriendly due to their structure and also very expensive to keep around - 
 each column name in AtomicSortedColumns uses > 100 bytes on average 
 (excluding the actual ByteBuffer).
 I suggest using a sorted array; changes are supplied at-once, as opposed to 
 one at a time, and if < 10% of the keys in the array change (and data equal 
 to < 10% of the size of the key array) we simply overlay a new array of 
 changes only over the top. Otherwise we rewrite the array. This method should 
 ensure much less GC overhead, and also save approximately 80% of the current 
 memory overhead.
 TreeMap is a similarly difficult object for the GC, and a related task might be 
 to remove it where not strictly necessary, even though we don't keep them 
 hanging around for long. TreeMapBackedSortedColumns, for instance, seems to 
 be used in a lot of places where we could simply sort the columns.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (CASSANDRA-6539) Track metrics at a keyspace level as well as column family level

2014-01-02 Thread Nick Bailey (JIRA)
Nick Bailey created CASSANDRA-6539:
--

 Summary: Track metrics at a keyspace level as well as column 
family level
 Key: CASSANDRA-6539
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6539
 Project: Cassandra
  Issue Type: Bug
Reporter: Nick Bailey


It would be useful to be able to see aggregated metrics (write/read 
count/latency) at a keyspace level as well as at the individual column family 
level.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6538) Provide a read-time CQL function to display the data size of columns and rows

2014-01-02 Thread Johnny Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860622#comment-13860622
 ] 

Johnny Miller commented on CASSANDRA-6538:
--

When debugging issues in environments where the suspicion is that specific rows 
contain larger-than-expected data, and where I am unable to write a client to 
read the data and check its size.

 Provide a read-time CQL function to display the data size of columns and rows
 -

 Key: CASSANDRA-6538
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6538
 Project: Cassandra
  Issue Type: Improvement
Reporter: Johnny Miller

 It would be extremely useful to be able to work out the size of rows and 
 columns via CQL. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6456) log listen address at startup

2014-01-02 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860626#comment-13860626
 ] 

Jeremiah Jordan commented on CASSANDRA-6456:


For the original intent of this JIRA I think we need to add a call to get the 
address or something, as the IPs in the yaml can be left blank.

 log listen address at startup
 -

 Key: CASSANDRA-6456
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6456
 Project: Cassandra
  Issue Type: Wish
  Components: Core
Reporter: Jeremy Hanna
Assignee: Sean Bridges
Priority: Trivial
 Attachments: CASSANDRA-6456.patch


 When looking through logs from a cluster, sometimes it's handy to know the 
 address a node is from the logs.  It would be convenient if on startup, we 
 indicated the listen address for that node.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6539) Track metrics at a keyspace level as well as column family level

2014-01-02 Thread Nick Bailey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860631#comment-13860631
 ] 

Nick Bailey commented on CASSANDRA-6539:


To be a bit clearer, this is more useful for data models and clusters where 
there are a very large number of column families per keyspace (hundreds or 
thousands). Tracking only individual column families can be burdensome at that 
level.

 Track metrics at a keyspace level as well as column family level
 

 Key: CASSANDRA-6539
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6539
 Project: Cassandra
  Issue Type: Bug
Reporter: Nick Bailey

 It would be useful to be able to see aggregated metrics (write/read 
 count/latency) at a keyspace level as well as at the individual column family 
 level.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-6540) Allow clearsnapshot jmx command to remove snapshots for cfs/keyspaces that have been dropped

2014-01-02 Thread Nick Bailey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Bailey updated CASSANDRA-6540:
---

  Priority: Minor  (was: Major)
Issue Type: Improvement  (was: Bug)

 Allow clearsnapshot jmx command to remove snapshots for cfs/keyspaces that 
 have been dropped
 

 Key: CASSANDRA-6540
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6540
 Project: Cassandra
  Issue Type: Improvement
Reporter: Nick Bailey
Priority: Minor





--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (CASSANDRA-6540) Allow clearsnapshot jmx command to remove snapshots for cfs/keyspaces that have been dropped

2014-01-02 Thread Nick Bailey (JIRA)
Nick Bailey created CASSANDRA-6540:
--

 Summary: Allow clearsnapshot jmx command to remove snapshots for 
cfs/keyspaces that have been dropped
 Key: CASSANDRA-6540
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6540
 Project: Cassandra
  Issue Type: Bug
Reporter: Nick Bailey






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-6539) Track metrics at a keyspace level as well as column family level

2014-01-02 Thread Nick Bailey (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Bailey updated CASSANDRA-6539:
---

  Priority: Minor  (was: Major)
Issue Type: Improvement  (was: Bug)

 Track metrics at a keyspace level as well as column family level
 

 Key: CASSANDRA-6539
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6539
 Project: Cassandra
  Issue Type: Improvement
Reporter: Nick Bailey
Priority: Minor

 It would be useful to be able to see aggregated metrics (write/read 
 count/latency) at a keyspace level as well as at the individual column family 
 level.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6398) nodetool removenode error 'Endpoint /x.x.x.x generation changed while trying to remove it' occurs regularly.

2014-01-02 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860673#comment-13860673
 ] 

Tyler Hobbs commented on CASSANDRA-6398:


[~enigmacurry] do you have a somewhat reliable way to repro this?  So far I 
haven't seen this.

The exception is thrown when a new heartbeat version for the removed node comes 
in over gossip during a 30s sleep period.  I'm not 100% sure why this check 
exists, but I would guess it's to avoid removing a node that's still alive.  
(We also have another check for this.)

 nodetool removenode error 'Endpoint /x.x.x.x generation changed while trying 
 to remove it' occurs regularly.
 

 Key: CASSANDRA-6398
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6398
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Ryan McGuire
Assignee: Tyler Hobbs
Priority: Minor
 Fix For: 2.0.5


 I see this error somewhat regularly when running a *{{nodetool removenode}}* 
 command. The command completes successfully, ie, the node is removed, so I'm 
 not sure what this message is telling me.
  
 {code}
 $ nodetool -p 7100 removenode bff9072e-4bb6-42fa-937c-bb73bcc094bc
 Exception in thread main java.lang.RuntimeException: Endpoint /127.0.0.2 
 generation changed while trying to remove it
   at 
 org.apache.cassandra.gms.Gossiper.advertiseRemoving(Gossiper.java:421)
   at 
 org.apache.cassandra.service.StorageService.removeNode(StorageService.java:3080)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:111)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:45)
   at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:235)
   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:250)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
   at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:791)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1486)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:96)
   at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1327)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1419)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:847)
   at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
   at sun.rmi.transport.Transport$1.run(Transport.java:177)
   at sun.rmi.transport.Transport$1.run(Transport.java:174)
   at java.security.AccessController.doPrivileged(Native Method)
   at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
   at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:722)
 {code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-5872) Bundle JNA

2014-01-02 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860675#comment-13860675
 ] 

Jonathan Ellis commented on CASSANDRA-5872:
---

{noformat}
<dependency groupId="jna" artifactId="jna"/>
{noformat}

I don't think the groupId changed from net.java.dev.jna, and version is missing.

Should also add license to lib/licenses.


 Bundle JNA
 --

 Key: CASSANDRA-5872
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5872
 Project: Cassandra
  Issue Type: Task
  Components: Core
Reporter: Jonathan Ellis
Assignee: Lyuben Todorov
Priority: Minor
 Fix For: 2.1

 Attachments: 5872-trunk.patch, 5872_debian.patch


 JNA 4.0 is reported to be dual-licensed LGPL/APL.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6256) Gossip race condition can be missing HOST_ID

2014-01-02 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860680#comment-13860680
 ] 

Tyler Hobbs commented on CASSANDRA-6256:


The line numbers don't match 2.0 any more, but I'm guessing this is the line:

{noformat}
return 
UUID.fromString(getEndpointStateForEndpoint(endpoint).getApplicationState(ApplicationState.HOST_ID).value);
{noformat}

Are you sure that it wasn't the endpoint state that was null instead of the 
host ID?
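Tyler's question (was it the endpoint state or the host ID that was null?) is exactly the ambiguity a chained call creates: an NPE from one long expression cannot tell you which link was missing. A hedged sketch with stand-in maps, not the real Gossiper API, showing how decomposing the chain pinpoints the missing link:

```java
import java.util.HashMap;
import java.util.Map;

public class ChainedNull
{
    // Stand-in for gossip state: endpoint -> (application-state name -> value).
    static Map<String, Map<String, String>> states = new HashMap<>();

    // Decomposed version of the chained
    // getEndpointStateForEndpoint(ep).getApplicationState(HOST_ID).value
    // lookup: each null is caught separately, so the failure names the
    // exact link that was missing.
    static String getHostId(String endpoint)
    {
        Map<String, String> epState = states.get(endpoint);
        if (epState == null)
            throw new IllegalStateException("no endpoint state for " + endpoint);
        String hostId = epState.get("HOST_ID");
        if (hostId == null)
            throw new IllegalStateException("no HOST_ID for " + endpoint);
        return hostId;
    }

    public static void main(String[] args)
    {
        // Endpoint state exists but the HOST_ID application state is absent,
        // mirroring one of the two possible NPE sources in the stacktrace.
        states.put("/127.0.0.2", new HashMap<>());
        try
        {
            getHostId("/127.0.0.2");
        }
        catch (IllegalStateException e)
        {
            System.out.println(e.getMessage()); // no HOST_ID for /127.0.0.2
        }
    }
}
```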

 Gossip race condition can be missing HOST_ID
 

 Key: CASSANDRA-6256
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6256
 Project: Cassandra
  Issue Type: Bug
Reporter: Brandon Williams
Assignee: Tyler Hobbs

 A very rare and tight race of some sort can cause:
 {noformat}
 ERROR [GossipStage:1] 2013-10-26 00:48:32,071 CassandraDaemon.java (line 191) 
 Exception in thread Thread[GossipStage:1,5,main]
 java.lang.NullPointerException
 at org.apache.cassandra.gms.Gossiper.getHostId(Gossiper.java:696)
 at 
 org.apache.cassandra.service.StorageService.handleStateNormal(StorageService.java:1388)
 at 
 org.apache.cassandra.service.StorageService.onChange(StorageService.java:1257)
 at 
 org.apache.cassandra.service.StorageService.onJoin(StorageService.java:1876)
 at 
 org.apache.cassandra.gms.Gossiper.handleMajorStateChange(Gossiper.java:861)
 at 
 org.apache.cassandra.gms.Gossiper.applyStateLocally(Gossiper.java:939)
 at 
 org.apache.cassandra.gms.GossipDigestAckVerbHandler.doVerb(GossipDigestAckVerbHandler.java:58)
 at 
 org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:56)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)
 {noformat}
 It isn't immediately clear how this happens since we set HOST_ID before the 
 gossiper even starts.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6281) Use Atomic*FieldUpdater to save memory

2014-01-02 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860708#comment-13860708
 ] 

Marcus Eriksson commented on CASSANDRA-6281:


CASSANDRA-6278 went into 2.1, so I figured the same here

 Use Atomic*FieldUpdater to save memory
 --

 Key: CASSANDRA-6281
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6281
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
Priority: Minor
 Fix For: 2.1

 Attachments: 0001-Use-Atomic-FieldUpdater-to-save-memory.patch


 Followup to CASSANDRA-6278, use Atomic*FieldUpdater in;
 AtomicSortedColumns
 ReadCallback
 WriteResponseHandler



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
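The pattern this ticket applies, replacing a per-instance AtomicReference with a volatile field plus one shared static AtomicReferenceFieldUpdater, can be sketched in isolation. The class below is an illustrative stand-in, not Cassandra code:

```java
import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;

public class Counter
{
    // A volatile field plus a single shared static updater gives the same
    // CAS semantics as an AtomicReference, while saving one wrapper object
    // (and one pointer indirection) per instance -- the memory win the
    // ticket is after.
    private volatile String state = "initial";

    private static final AtomicReferenceFieldUpdater<Counter, String> stateUpdater =
        AtomicReferenceFieldUpdater.newUpdater(Counter.class, String.class, "state");

    public boolean advance(String expected, String next)
    {
        // Equivalent to AtomicReference.compareAndSet, but on the field.
        return stateUpdater.compareAndSet(this, expected, next);
    }

    public static void main(String[] args)
    {
        Counter c = new Counter();
        System.out.println(c.advance("initial", "running")); // true
        System.out.println(c.state);                         // running
        System.out.println(c.advance("initial", "done"));    // false: CAS fails
    }
}
```

The field must be declared volatile and be accessible to the updater's declaring class, which is why the patched classes keep the field private and hold the updater as a static final alongside it.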


git commit: Use Atomic*FieldUpdater to save memory.

2014-01-02 Thread marcuse
Updated Branches:
  refs/heads/trunk 8165af5db - 7aa3364e0


Use Atomic*FieldUpdater to save memory.

Patch by marcuse, reviewed by belliottsmith for CASSANDRA-6281.


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7aa3364e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7aa3364e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7aa3364e

Branch: refs/heads/trunk
Commit: 7aa3364e04b286ac7b41cfadda568df41e4e2821
Parents: 8165af5
Author: Marcus Eriksson marc...@apache.org
Authored: Thu Jan 2 21:17:14 2014 +0100
Committer: Marcus Eriksson marc...@apache.org
Committed: Thu Jan 2 21:17:14 2014 +0100

--
 .../cassandra/db/AtomicSortedColumns.java   | 56 ++--
 .../service/DatacenterWriteResponseHandler.java |  3 +-
 .../apache/cassandra/service/ReadCallback.java  | 18 ---
 .../cassandra/service/WriteResponseHandler.java | 12 +++--
 4 files changed, 47 insertions(+), 42 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7aa3364e/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
--
diff --git a/src/java/org/apache/cassandra/db/AtomicSortedColumns.java 
b/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
index 6e4fd01..b1f1e59 100644
--- a/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
+++ b/src/java/org/apache/cassandra/db/AtomicSortedColumns.java
@@ -18,7 +18,7 @@
 package org.apache.cassandra.db;
 
 import java.util.*;
-import java.util.concurrent.atomic.AtomicReference;
+import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;
 
 import com.google.common.base.Function;
 import com.google.common.collect.Iterables;
@@ -49,7 +49,9 @@ import org.apache.cassandra.utils.Allocator;
  */
 public class AtomicSortedColumns extends ColumnFamily
 {
-private final AtomicReference<Holder> ref;
+private volatile Holder ref;
+private static final AtomicReferenceFieldUpdater<AtomicSortedColumns, Holder> refUpdater
+    = AtomicReferenceFieldUpdater.newUpdater(AtomicSortedColumns.class, Holder.class, "ref");
 
 public static final ColumnFamily.Factory<AtomicSortedColumns> factory = new Factory<AtomicSortedColumns>()
 {
@@ -67,12 +69,12 @@ public class AtomicSortedColumns extends ColumnFamily
 private AtomicSortedColumns(CFMetaData metadata, Holder holder)
 {
 super(metadata);
-this.ref = new AtomicReference<>(holder);
+this.ref = holder;
 }
 
 public CellNameType getComparator()
 {
-return (CellNameType)ref.get().map.comparator();
+return (CellNameType)ref.map.comparator();
 }
 
 public ColumnFamily.Factory getFactory()
@@ -82,12 +84,12 @@ public class AtomicSortedColumns extends ColumnFamily
 
 public ColumnFamily cloneMe()
 {
-return new AtomicSortedColumns(metadata, ref.get().cloneMe());
+return new AtomicSortedColumns(metadata, ref.cloneMe());
 }
 
 public DeletionInfo deletionInfo()
 {
-return ref.get().deletionInfo;
+return ref.deletionInfo;
 }
 
 public void delete(DeletionTime delTime)
@@ -108,29 +110,29 @@ public class AtomicSortedColumns extends ColumnFamily
 // Keeping deletion info for max markedForDeleteAt value
 while (true)
 {
-Holder current = ref.get();
+Holder current = ref;
 DeletionInfo newDelInfo = current.deletionInfo.copy().add(info);
-if (ref.compareAndSet(current, current.with(newDelInfo)))
+if (refUpdater.compareAndSet(this, current, current.with(newDelInfo)))
 break;
 }
 }
 
 public void setDeletionInfo(DeletionInfo newInfo)
 {
-ref.set(ref.get().with(newInfo));
+ref = ref.with(newInfo);
 }
 
 public void purgeTombstones(int gcBefore)
 {
 while (true)
 {
-Holder current = ref.get();
+Holder current = ref;
 if (!current.deletionInfo.hasPurgeableTombstones(gcBefore))
 break;
 
 DeletionInfo purgedInfo = current.deletionInfo.copy();
 purgedInfo.purge(gcBefore);
-if (ref.compareAndSet(current, current.with(purgedInfo)))
+if (refUpdater.compareAndSet(this, current, current.with(purgedInfo)))
 break;
 }
 }
@@ -140,11 +142,11 @@ public class AtomicSortedColumns extends ColumnFamily
 Holder current, modified;
 do
 {
-current = ref.get();
+current = ref;
 modified = current.cloneMe();
 modified.addColumn(cell, allocator, SecondaryIndexManager.nullUpdater);
 }
-while (!ref.compareAndSet(current, modified));
+  
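The pattern the patch applies can be sketched in isolation. This is a minimal, hypothetical class (not Cassandra code): instead of each instance holding its own AtomicReference wrapper object, a single static AtomicReferenceFieldUpdater performs compare-and-set directly on a volatile field, saving one object header and reference per instance while keeping the same CAS retry loop.

```java
import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;

// Hypothetical example class illustrating the Atomic*FieldUpdater pattern.
public class Counter {
    // The target field must be volatile and is named (as a string) in newUpdater().
    private volatile String state = "initial";

    // One static updater shared by all instances, instead of one
    // AtomicReference object allocated per instance.
    private static final AtomicReferenceFieldUpdater<Counter, String> STATE =
        AtomicReferenceFieldUpdater.newUpdater(Counter.class, String.class, "state");

    // Typical retry loop, mirroring the read/derive/CAS shape in the diff:
    // read the current value, compute a replacement, CAS until it sticks.
    public String append(String suffix) {
        while (true) {
            String current = state;
            String modified = current + suffix;
            if (STATE.compareAndSet(this, current, modified))
                return modified;
        }
    }

    public String get() { return state; }

    public static void main(String[] args) {
        Counter c = new Counter();
        c.append("-updated");
        System.out.println(c.get()); // prints "initial-updated"
    }
}
```

The string passed to newUpdater() must exactly name the volatile field, which is why the diff keeps the field name `ref` unchanged.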

[jira] [Created] (CASSANDRA-6541) Add JVM_OPTS="$JVM_OPTS -XX:+CMSClassUnloadingEnabled" to cassandra-env.sh

2014-01-02 Thread jonathan lacefield (JIRA)
jonathan lacefield created CASSANDRA-6541:
-

 Summary: Add JVM_OPTS="$JVM_OPTS -XX:+CMSClassUnloadingEnabled" to 
cassandra-env.sh
 Key: CASSANDRA-6541
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6541
 Project: Cassandra
  Issue Type: Bug
  Components: Config
Reporter: jonathan lacefield
Priority: Minor


Newer versions of Oracle's HotSpot JVM, post 6u45, are experiencing issues 
with GC and JMX where the heap slowly fills up over time until an OOM or a full 
GC event occurs, specifically when CMS is leveraged.  Running repair exacerbates 
this issue.  The configuration added to the Summary line helps alleviate this 
behavior and should be included in the C* config files by default.
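A minimal sketch of how the proposed flag could be wired in; the placement inside conf/cassandra-env.sh next to the other JVM_OPTS lines is an assumption, not part of the ticket:

```shell
# In conf/cassandra-env.sh: unload classes during CMS collections so
# JMX-related class/metadata churn cannot slowly fill the heap.
JVM_OPTS="$JVM_OPTS -XX:+CMSClassUnloadingEnabled"
```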



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (CASSANDRA-6542) nodetool removenode hangs

2014-01-02 Thread Eric Lubow (JIRA)
Eric Lubow created CASSANDRA-6542:
-

 Summary: nodetool removenode hangs
 Key: CASSANDRA-6542
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6542
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Ubuntu 12, 1.2.11 DSE
Reporter: Eric Lubow
 Fix For: 1.2.11


Running *nodetool removenode $host-id* doesn't actually remove the node from 
the ring.  I've let it run anywhere from 5 minutes to 3 days and there are no 
messages in the log about it hanging or failing, the command just sits there 
running.  So the regular response has been to run *nodetool removenode 
$host-id*, give it about 10-15 minutes and then run *nodetool removenode force*.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6541) Add JVM_OPTS="$JVM_OPTS -XX:+CMSClassUnloadingEnabled" to cassandra-env.sh

2014-01-02 Thread Matt Stump (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860743#comment-13860743
 ] 

Matt Stump commented on CASSANDRA-6541:
---

I've seen at least 3 users in the field hit this issue; in each case enabling 
CMSClassUnloadingEnabled solved it.

 Add JVM_OPTS="$JVM_OPTS -XX:+CMSClassUnloadingEnabled" to cassandra-env.sh
 --

 Key: CASSANDRA-6541
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6541
 Project: Cassandra
  Issue Type: Bug
  Components: Config
Reporter: jonathan lacefield
Priority: Minor

 Newer versions of Oracle's HotSpot JVM, post 6u45, are experiencing issues 
 with GC and JMX where the heap slowly fills up over time until an OOM or a full 
 GC event occurs, specifically when CMS is leveraged.  Running repair exacerbates 
 this issue.  The configuration added to the Summary line helps alleviate this 
 behavior and should be included in the C* config files by default.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6541) Add JVM_OPTS="$JVM_OPTS -XX:+CMSClassUnloadingEnabled" to cassandra-env.sh

2014-01-02 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860778#comment-13860778
 ] 

Jason Brown commented on CASSANDRA-6541:


I think we've been seeing the same issue with JMX as well. Do you have any 
links or refs about the issue/resolution?

 Add JVM_OPTS="$JVM_OPTS -XX:+CMSClassUnloadingEnabled" to cassandra-env.sh
 --

 Key: CASSANDRA-6541
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6541
 Project: Cassandra
  Issue Type: Bug
  Components: Config
Reporter: jonathan lacefield
Priority: Minor

 Newer versions of Oracle's HotSpot JVM, post 6u45, are experiencing issues 
 with GC and JMX where the heap slowly fills up over time until an OOM or a full 
 GC event occurs, specifically when CMS is leveraged.  Running repair exacerbates 
 this issue.  The configuration added to the Summary line helps alleviate this 
 behavior and should be included in the C* config files by default.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6398) nodetool removenode error 'Endpoint /x.x.x.x generation changed while trying to remove it' occurs regularly.

2014-01-02 Thread Ryan McGuire (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860811#comment-13860811
 ] 

Ryan McGuire commented on CASSANDRA-6398:
-

Indeed, it works for me as well.

Judging from my paste above, and since I didn't use an absolute path to 
nodetool, I'm inclined to think I may have been using a mismatched version. 
I would suggest marking this as cannot reproduce; I haven't seen it recently 
myself.

 nodetool removenode error 'Endpoint /x.x.x.x generation changed while trying 
 to remove it' occurs regularly.
 

 Key: CASSANDRA-6398
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6398
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Ryan McGuire
Assignee: Tyler Hobbs
Priority: Minor
 Fix For: 2.0.5


 I see this error somewhat regularly when running a *{{nodetool removenode}}* 
 command. The command completes successfully, ie, the node is removed, so I'm 
 not sure what this message is telling me.
  
 {code}
 $ nodetool -p 7100 removenode bff9072e-4bb6-42fa-937c-bb73bcc094bc
 Exception in thread "main" java.lang.RuntimeException: Endpoint /127.0.0.2 
 generation changed while trying to remove it
   at 
 org.apache.cassandra.gms.Gossiper.advertiseRemoving(Gossiper.java:421)
   at 
 org.apache.cassandra.service.StorageService.removeNode(StorageService.java:3080)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:111)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:45)
   at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:235)
   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:250)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
   at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:791)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1486)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:96)
   at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1327)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1419)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:847)
   at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
   at sun.rmi.transport.Transport$1.run(Transport.java:177)
   at sun.rmi.transport.Transport$1.run(Transport.java:174)
   at java.security.AccessController.doPrivileged(Native Method)
   at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
   at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:722)
 {code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6415) Snapshot repair blocks for ever if something happens to the "I made my snapshot" response

2014-01-02 Thread Nick Bailey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860841#comment-13860841
 ] 

Nick Bailey commented on CASSANDRA-6415:


This was also fixed in the 2.0 branch in 2.0.4 correct?

 Snapshot repair blocks for ever if something happens to the "I made my 
 snapshot" response
 -

 Key: CASSANDRA-6415
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6415
 Project: Cassandra
  Issue Type: Bug
Reporter: Jeremiah Jordan
Assignee: Yuki Morishita
  Labels: repair
 Fix For: 1.2.13

 Attachments: 6415-1.2.txt


 The snapshotLatch.await() call can wait forever and block all repair 
 operations indefinitely if something happens such that another node doesn't 
 respond.
 {noformat}
 public void makeSnapshots(Collection<InetAddress> endpoints)
 {
 try
 {
 snapshotLatch = new CountDownLatch(endpoints.size());
 IAsyncCallback callback = new IAsyncCallback()
 {
 public boolean isLatencyForSnitch()
 {
 return false;
 }
 public void response(MessageIn msg)
 {
 RepairJob.this.snapshotLatch.countDown();
 }
 };
 for (InetAddress endpoint : endpoints)
 MessagingService.instance().sendRR(new 
 SnapshotCommand(tablename, cfname, sessionName, false).createMessage(), 
 endpoint, callback);
 snapshotLatch.await();
 snapshotLatch = null;
 }
 catch (InterruptedException e)
 {
 throw new RuntimeException(e);
 }
 }
 {noformat}
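One plausible shape for the fix (a sketch only; this is not the actual 6415-1.2.txt patch) is to bound the wait with a timeout, so a lost "I made my snapshot" response fails the repair session instead of blocking it forever:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Hedged sketch: the timed-await variant of the latch wait shown above.
// Class and method names are hypothetical, not Cassandra's.
public class SnapshotWait {
    // Waits up to timeoutSeconds for all expected snapshot responses;
    // returns false if some endpoint never answered, rather than hanging.
    public static boolean awaitSnapshots(CountDownLatch latch, long timeoutSeconds)
            throws InterruptedException {
        return latch.await(timeoutSeconds, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        // One endpoint that never responds: the untimed await() would block
        // forever here, while the timed variant gives up after one second.
        CountDownLatch latch = new CountDownLatch(1);
        boolean ok = awaitSnapshots(latch, 1);
        System.out.println(ok); // prints "false"
    }
}
```

The caller would then treat a false return as a failed snapshot phase and abort or retry the repair session.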



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-6415) Snapshot repair blocks for ever if something happens to the "I made my snapshot" response

2014-01-02 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6415:
--

Fix Version/s: 2.0.4

Yes.

 Snapshot repair blocks for ever if something happens to the "I made my 
 snapshot" response
 -

 Key: CASSANDRA-6415
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6415
 Project: Cassandra
  Issue Type: Bug
Reporter: Jeremiah Jordan
Assignee: Yuki Morishita
  Labels: repair
 Fix For: 1.2.13, 2.0.4

 Attachments: 6415-1.2.txt


 The snapshotLatch.await() call can wait forever and block all repair 
 operations indefinitely if something happens such that another node doesn't 
 respond.
 {noformat}
 public void makeSnapshots(Collection<InetAddress> endpoints)
 {
 try
 {
 snapshotLatch = new CountDownLatch(endpoints.size());
 IAsyncCallback callback = new IAsyncCallback()
 {
 public boolean isLatencyForSnitch()
 {
 return false;
 }
 public void response(MessageIn msg)
 {
 RepairJob.this.snapshotLatch.countDown();
 }
 };
 for (InetAddress endpoint : endpoints)
 MessagingService.instance().sendRR(new 
 SnapshotCommand(tablename, cfname, sessionName, false).createMessage(), 
 endpoint, callback);
 snapshotLatch.await();
 snapshotLatch = null;
 }
 catch (InterruptedException e)
 {
 throw new RuntimeException(e);
 }
 }
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Resolved] (CASSANDRA-6398) nodetool removenode error 'Endpoint /x.x.x.x generation changed while trying to remove it' occurs regularly.

2014-01-02 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs resolved CASSANDRA-6398.


Resolution: Cannot Reproduce

Thanks, marked as Can't Repro for now.

 nodetool removenode error 'Endpoint /x.x.x.x generation changed while trying 
 to remove it' occurs regularly.
 

 Key: CASSANDRA-6398
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6398
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Ryan McGuire
Assignee: Tyler Hobbs
Priority: Minor
 Fix For: 2.0.5


 I see this error somewhat regularly when running a *{{nodetool removenode}}* 
 command. The command completes successfully, ie, the node is removed, so I'm 
 not sure what this message is telling me.
  
 {code}
 $ nodetool -p 7100 removenode bff9072e-4bb6-42fa-937c-bb73bcc094bc
 Exception in thread "main" java.lang.RuntimeException: Endpoint /127.0.0.2 
 generation changed while trying to remove it
   at 
 org.apache.cassandra.gms.Gossiper.advertiseRemoving(Gossiper.java:421)
   at 
 org.apache.cassandra.service.StorageService.removeNode(StorageService.java:3080)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:111)
   at 
 com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:45)
   at 
 com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:235)
   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:250)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
   at 
 com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:791)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1486)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:96)
   at 
 javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1327)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1419)
   at 
 javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:847)
   at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:601)
   at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322)
   at sun.rmi.transport.Transport$1.run(Transport.java:177)
   at sun.rmi.transport.Transport$1.run(Transport.java:174)
   at java.security.AccessController.doPrivileged(Native Method)
   at sun.rmi.transport.Transport.serviceCall(Transport.java:173)
   at 
 sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808)
   at 
 sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:722)
 {code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-5930) Offline scrubs can choke on broken files

2014-01-02 Thread J Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860912#comment-13860912
 ] 

J Potter commented on CASSANDRA-5930:
-

Hi Tyler -- based on my notes, it should have been Cassandra 1.2.6.1 (DSE 3.1), 
at least, that's what other tickets we have filed at this same time suggest.

 Offline scrubs can choke on broken files
 

 Key: CASSANDRA-5930
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5930
 Project: Cassandra
  Issue Type: Bug
Reporter: Jeremiah Jordan
Assignee: Tyler Hobbs
Priority: Minor

 There are cases where offline scrub can hit an exception and die, like:
 {noformat}
 WARNING: Non-fatal error reading row (stacktrace follows)
 Exception in thread "main" java.io.IOError: java.io.IOError: 
 java.io.EOFException
   at org.apache.cassandra.db.compaction.Scrubber.scrub(Scrubber.java:242)
   at 
 org.apache.cassandra.tools.StandaloneScrubber.main(StandaloneScrubber.java:121)
 Caused by: java.io.IOError: java.io.EOFException
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:116)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.<init>(PrecompactedRow.java:99)
   at 
 org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:176)
   at 
 org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:182)
   at org.apache.cassandra.db.compaction.Scrubber.scrub(Scrubber.java:171)
   ... 1 more
 Caused by: java.io.EOFException
   at java.io.RandomAccessFile.readFully(RandomAccessFile.java:399)
   at java.io.RandomAccessFile.readFully(RandomAccessFile.java:377)
   at 
 org.apache.cassandra.utils.BytesReadTracker.readFully(BytesReadTracker.java:95)
   at 
 org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:401)
   at 
 org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:363)
   at 
 org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:120)
   at 
 org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:37)
   at 
 org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumns(ColumnFamilySerializer.java:144)
   at 
 org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:234)
   at 
 org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:112)
   ... 5 more
 {noformat}
 Since the purpose of offline scrub is to fix broken stuff, it should be more 
 resilient to broken stuff...



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


git commit: do not ignore user configured JAVA_HOME

2014-01-02 Thread eevans
Updated Branches:
  refs/heads/cassandra-2.0 5284e129f -> d278b7c2d


do not ignore user configured JAVA_HOME

Patch by eevans; reviewed by Michael Shuler for CASSANDRA-6131


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d278b7c2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d278b7c2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d278b7c2

Branch: refs/heads/cassandra-2.0
Commit: d278b7c2d5f4bc74b8c621b6b18503fc7d08422d
Parents: 5284e12
Author: Eric Evans eev...@apache.org
Authored: Thu Jan 2 17:05:43 2014 -0600
Committer: Eric Evans eev...@apache.org
Committed: Thu Jan 2 17:05:43 2014 -0600

--
 debian/init | 43 +++
 1 file changed, 7 insertions(+), 36 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d278b7c2/debian/init
--
diff --git a/debian/init b/debian/init
index 26faeba..d132441 100644
--- a/debian/init
+++ b/debian/init
@@ -24,9 +24,6 @@ WAIT_FOR_START=10
 CASSANDRA_HOME=/usr/share/cassandra
 FD_LIMIT=10
 
-# The first existing directory is used for JAVA_HOME if needed.
-JVM_SEARCH_DIRS=/usr/lib/jvm/default-java
-
 [ -e /usr/share/cassandra/apache-cassandra.jar ] || exit 0
 [ -e /etc/cassandra/cassandra.yaml ] || exit 0
 [ -e /etc/cassandra/cassandra-env.sh ] || exit 0
@@ -34,34 +31,6 @@ JVM_SEARCH_DIRS=/usr/lib/jvm/default-java
 # Read configuration variable file if it is present
 [ -r /etc/default/$NAME ] && . /etc/default/$NAME
 
-# If JAVA_HOME has not been set, try to determine it.
-if [ -z "$JAVA_HOME" ]; then
-    # If java is in PATH, use a JAVA_HOME that corresponds to that. This is
-    # both consistent with how the upstream startup script works, and how
-    # Debian works (read: the use of alternatives to set a system JVM).
-    if [ -n "`which java`" ]; then
-        java=`which java`
-        # Dereference symlink(s)
-        while true; do
-            if [ -h "$java" ]; then
-                java=`readlink $java`
-                continue
-            fi
-            break
-        done
-        JAVA_HOME="`dirname $java`/../"
-    # No JAVA_HOME set and no java found in PATH, search for a JVM.
-    else
-        for jdir in $JVM_SEARCH_DIRS; do
-            if [ -x "$jdir/bin/java" ]; then
-                JAVA_HOME="$jdir"
-                break
-            fi
-        done
-    fi
-fi
-JAVA="$JAVA_HOME/bin/java"
-
 # Read Cassandra environment file.
 . /etc/cassandra/cassandra-env.sh
 
@@ -70,6 +39,12 @@ if [ -z $JVM_OPTS ]; then
 exit 3
 fi
 
+# Add JNA to EXTRA_CLASSPATH
+export EXTRA_CLASSPATH=/usr/share/java/jna.jar:$EXTRA_CLASSPATH
+
+# Export JAVA_HOME, if set.
+[ -n $JAVA_HOME ]  export JAVA_HOME
+
 # Load the VERBOSE setting and other rcS variables
 . /lib/init/vars.sh
 
@@ -77,10 +52,6 @@ fi
 # Depend on lsb-base (= 3.0-6) to ensure that this file is present.
 . /lib/lsb/init-functions
 
-# If JNA is installed, add it to EXTRA_CLASSPATH
-#
-EXTRA_CLASSPATH="/usr/share/java/jna.jar:$EXTRA_CLASSPATH"
-
 #
 # Function that returns 0 if process is running, or nonzero if not.
 #
@@ -119,7 +90,7 @@ do_start()
 [ -e `dirname $PIDFILE` ] || \
 install -d -ocassandra -gcassandra -m750 `dirname $PIDFILE`
 
-export EXTRA_CLASSPATH
+
 
 start-stop-daemon -S -c cassandra -a /usr/sbin/cassandra -q -p $PIDFILE 
-t /dev/null || return 1
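With the search loop removed, the init script simply honors an explicitly configured JAVA_HOME. A sketch of how an admin could set it via the defaults file the script already sources (`/etc/default/$NAME`); the file path assumes NAME=cassandra and the JDK path is a placeholder:

```shell
# /etc/default/cassandra -- sourced by the init script before
# cassandra-env.sh; a JAVA_HOME set here is now exported as-is instead
# of being overwritten by the old JVM search logic.
JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
```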
 



[jira] [Commented] (CASSANDRA-6530) Fix logback configuration in scripts and debian packaging for trunk/2.1

2014-01-02 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860932#comment-13860932
 ] 

Michael Shuler commented on CASSANDRA-6530:
---

This appears to already be set up in the logback config:
{code}
(c169)mshuler@hana:~/git/cassandra$ grep thrift conf/logback.xml 
  <logger name="org.apache.thrift.server.TNonblockingServer" level="ERROR"/>
{code}

Running stress and killing it a dozen times or so did not give the error 
output as above.

 Fix logback configuration in scripts and debian packaging for trunk/2.1
 ---

 Key: CASSANDRA-6530
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6530
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Michael Shuler
Assignee: Michael Shuler
Priority: Minor
 Fix For: 2.1

 Attachments: logback_configurations_final.patch






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6271) Replace SnapTree in AtomicSortedColumns

2014-01-02 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860935#comment-13860935
 ] 

Jonathan Ellis commented on CASSANDRA-6271:
---

bq. Splits the parent if necessary (in the addChild method, which is called 
whenever we have a completed node to pass to the parent)

Hmm, I don't see an addChild, and ensureChild/addExtraChild aren't doing any 
obvious parent-splitting.

 Replace SnapTree in AtomicSortedColumns
 ---

 Key: CASSANDRA-6271
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6271
 Project: Cassandra
  Issue Type: Improvement
Reporter: Benedict
Assignee: Benedict
  Labels: performance
 Attachments: oprate.svg


 On the write path a huge percentage of time is spent in GC (50% in my tests, 
 if accounting for slow down due to parallel marking). SnapTrees are both GC 
 unfriendly due to their structure and also very expensive to keep around - 
 each column name in AtomicSortedColumns uses > 100 bytes on average 
 (excluding the actual ByteBuffer).
 I suggest using a sorted array; changes are supplied at-once, as opposed to 
 one at a time, and if < 10% of the keys in the array change (and data equal 
 to < 10% of the size of the key array) we simply overlay a new array of 
 changes only over the top. Otherwise we rewrite the array. This method should 
 ensure much less GC overhead, and also save approximately 80% of the current 
 memory overhead.
 TreeMap is a similarly difficult object for the GC, and a related task might be 
 to remove it where not strictly necessary, even though we don't keep them 
 hanging around for long. TreeMapBackedSortedColumns, for instance, seems to 
 be used in a lot of places where we could simply sort the columns.
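The core of the rewrite-the-array case can be sketched as a plain sorted merge. This is a simplified illustration of the idea, not the ticket's actual b-tree code: ints stand in for column names, the whole batch of changes is applied in one pass, and an updated entry replaces its old version instead of mutating a tree node.

```java
import java.util.Arrays;

// Hedged sketch of merging a sorted batch of updates into a sorted base
// array, producing one new flat array (GC sees one allocation, no tree nodes).
public class SortedArrayMerge {
    // Merge two sorted arrays; an entry in `updates` equal to an entry in
    // `base` replaces it (equality stands in for "same column name").
    public static int[] merge(int[] base, int[] updates) {
        int[] out = new int[base.length + updates.length];
        int i = 0, j = 0, k = 0;
        while (i < base.length && j < updates.length) {
            if (base[i] < updates[j])      out[k++] = base[i++];
            else if (base[i] > updates[j]) out[k++] = updates[j++];
            else { out[k++] = updates[j++]; i++; } // update wins on collision
        }
        while (i < base.length)    out[k++] = base[i++];
        while (j < updates.length) out[k++] = updates[j++];
        return Arrays.copyOf(out, k); // trim collisions
    }

    public static void main(String[] args) {
        int[] merged = merge(new int[]{1, 3, 5}, new int[]{3, 4});
        System.out.println(Arrays.toString(merged)); // prints "[1, 3, 4, 5]"
    }
}
```

The overlay case in the proposal would skip this rewrite when the batch is small, layering the small change array over the base instead.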



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Comment Edited] (CASSANDRA-6530) Fix logback configuration in scripts and debian packaging for trunk/2.1

2014-01-02 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860932#comment-13860932
 ] 

Michael Shuler edited comment on CASSANDRA-6530 at 1/2/14 11:26 PM:


This appears to already be set up in the logback config:
{code}
(c169)mshuler@hana:~/git/cassandra$ grep thrift conf/logback.xml 
  logger name=org.apache.thrift.server.TNonblockingServer level=ERROR/
{code}

Setting rpc_server_type: hsha and running stress and killing it a dozen times 
or so did not give the error output as above.


was (Author: mshuler):
This appears to already be set up in the logback config:
{code}
(c169)mshuler@hana:~/git/cassandra$ grep thrift conf/logback.xml 
  logger name=org.apache.thrift.server.TNonblockingServer level=ERROR/
{code}

Running stress and killing it a dozen times or so did not give the the error 
output as above.

 Fix logback configuration in scripts and debian packaging for trunk/2.1
 ---

 Key: CASSANDRA-6530
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6530
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Michael Shuler
Assignee: Michael Shuler
Priority: Minor
 Fix For: 2.1

 Attachments: logback_configurations_final.patch






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (CASSANDRA-6271) Replace SnapTree in AtomicSortedColumns

2014-01-02 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860943#comment-13860943
 ] 

Benedict commented on CASSANDRA-6271:
-

Sorry, was going from memory. addExtraChild() is what I was referring to, and 
it calls ensureRoom(), which does the parent splitting if necessary.

 Replace SnapTree in AtomicSortedColumns
 ---

 Key: CASSANDRA-6271
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6271
 Project: Cassandra
  Issue Type: Improvement
Reporter: Benedict
Assignee: Benedict
  Labels: performance
 Attachments: oprate.svg


 On the write path a huge percentage of time is spent in GC (50% in my tests, 
 if accounting for slow down due to parallel marking). SnapTrees are both GC 
 unfriendly due to their structure and also very expensive to keep around - 
 each column name in AtomicSortedColumns uses > 100 bytes on average 
 (excluding the actual ByteBuffer).
 I suggest using a sorted array; changes are supplied at-once, as opposed to 
 one at a time, and if < 10% of the keys in the array change (and data equal 
 to < 10% of the size of the key array) we simply overlay a new array of 
 changes only over the top. Otherwise we rewrite the array. This method should 
 ensure much less GC overhead, and also save approximately 80% of the current 
 memory overhead.
 TreeMap is a similarly difficult object for the GC, and a related task might be 
 to remove it where not strictly necessary, even though we don't keep them 
 hanging around for long. TreeMapBackedSortedColumns, for instance, seems to 
 be used in a lot of places where we could simply sort the columns.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (CASSANDRA-6543) CASSANDRA 2.0 : java driver : blobs not retrieving correctly

2014-01-02 Thread Constance Eustace (JIRA)
Constance Eustace created CASSANDRA-6543:


 Summary: CASSANDRA 2.0 : java driver : blobs not retrieving 
correctly
 Key: CASSANDRA-6543
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6543
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Constance Eustace


Might be the wrong place, but I didn't find where the bugs should go and saw 
some java-driver ones in here...

Simple retrieval of data from a blob CQL3 column; tried getBytes() and 
getBytesUnsafe(), neither seemed to matter.

getBytes(col).array() 

Anyway, the input is 1760 bytes, and checked in cqlsh and the data looks 
correctly inserted. 

Retrieval buffer is consistently 1863 bytes... ResultSet column definitions 
indicate it is of type blob, well, and getBytes shouldn't work. 
bytebuffer.getCapacity is 1863 bytes. The first four values are definitely 
different for the retrieved BB than the one sent to storage.

Is there a mode or something? Maybe some assumed UTF8 decode is occurring? 
Compression? The blob I'm storing has already been compressed via java's zip 
support, so a rezip would probably make it larger?



Here is the blob value in cqlsh, I'll try to get the post-retrieval array:  






[jira] [Commented] (CASSANDRA-6543) CASSANDRA 2.0 : java driver : blobs not retrieving correctly

2014-01-02 Thread Constance Eustace (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860969#comment-13860969
 ] 

Constance Eustace commented on CASSANDRA-6543:
--

8208073F0002000100020013696E7465726E616C5F7375626D697373696F6E0004626C6F62000474797065000D0008626C6F6264617461000300010004475A495006F41F8B0800EDDCCB6E1A7718C6E1BDAF6294D8533A424E1B9B538D58446A6C832155058DD820D9725D4EC6C487AA24CABD1716BD032A7DFFF18330F2C28B27FA2DDE4598EFE0DDBBE5D3EDDDF57CB978BABECD968BF9D3EDF5F26EB1BC7EBA5DCCEFD60FD74F777F67ADE6F6FDF3E949A3D5FCB391655F5ACDDAD77AB3517BCC8B4A7E944F8A3FB2615E7C1EBF2DC695CFC561B7FBD36C9E55B2EEAF95D145AB59E4E3C9B0DEA8D56BA72727B57FDE9FBEFF7A523BFD3DAFB76EF21F8BE9B8A8ACFEBAF80E02020202525AC841180908080808486921C60604040404C4D88080808080A40F31362020202020C60604040404247D88B10101010101F9FFC7E687CB229BAD5FA6DD2CFB74B9A964EBE9C33A9BBE3C37EBADFC6832392A1E1BF561D6DA343E8ECEE73783AB5EFFB8FAE1BED3EE5CF4AEE6D3D5E2B8DAEDF656C7F7D541F57E55BD1F54078BE5FC62B11C2C672FC3EC316B6C3F36A3F3C1AADD5FF5DBF7D5EDCFF6F3B8DFEEF43BFFFDD2EE6C7FE9B5579D6CB59C2F7AD34D5119E79546B3951793CA64FB71941F1E4D0EDFEEFECD95E1F9F8F9F16336DB7C98CF96A3CBF1E568341E6E5E9EB387D9BAD19C76CFCEBEBDF9EDCD5537FBA5973DF4AE16EDB365F561DDBFB8FCB41A1479D6B86BD5B77FB67EC966F3C54D162744F41207AF2745F012420821841042082184104208218410420821841042082184104208218410420821841031431CA4FDBFD671BEAD17470202020202525A88B1010101010131362020202020E9438C0D0808080888B10101010101491F626C40404040408C0D0808080848FA10CF03440911BD842B76514A0821841042082184104208218410420821841042082184104208218410420821841042C40CE18ADDFEBE4110070302020202525A88B1010101010131362020202020E9438C0D0808080888B10101010101491F626C40404040408C0D0808080848FA108F04440911BD844376514A0821841042082184104208218410420821841042082184104208218410420821841042C40CE190DD1E0FD9EDDEAFBEC8EE15A60A08080808486921C60604040404C4D88080808080A40F31362020202020C60604040404247D88B1010101010131362020202020E9433C1B112544F4122EFA45292184104208218410420821841042082184104208218410420821841042082184104208113344EAF7E3E2B408230101010101292DC4D88080808080181B1010101090F421C60604040404C4D88080808080A40F31362020202020C60604040404247D88E701A284885EC215BB2825
8410420821841042082184104208218410420821841042082184104208218410420821628670C56E7FDF2088830101010101292DC4D88080808080181B1010101090F421C60604040404C4D88080808080A40F31362020202020C60604040404247D884702A284885EC221BB28258410420821841042082184104208218410420821841042082184104208218410420821628670C86E8F87EC76EF575F64F70A530504040404A4B410630302020202626C4040404040D287181B1010101010630302020202923EC4D88080808080181B1010101090F4219E8D8812227A0917FDA2941042082184104208218410420821841042082184104208218410420821841042082184881922F5FB71715A849180808080809416626C40404040408C0D0808080848FA10630302020202626C4040404040D287181B1010101010630302020202923EC4F3005142442FE18A5D941242082184104208218410420821841042082184104208218410420821841042082184103143B862B7BF6F10C4C180808080809416626C40404040408C0D0808080848FA10630302020202626C4040404040D287181B1010101010630302020202923EC423015142442FE1905D9412420821841042082184104208218410420821841042082184104208218410420821841031433864B7C74376BBF7AB2FB27B85A90202020202525A88B1010101010131362020202020E9438C0D0808080888B10101010101491F626C40404040408C0D0808080848FA10CF46440911BD848B7E514A0821841042082184104208218410420821841042082184104208218410420821841042C40C91FAFDB8382DC24840404040404A0B31362020202020C60604040404247D88B1010101010131362020202020E9438C0D0808080888B10101010101491FE279802821A29770C52E4A09218410420821841042082184104208218410420821841042082184104208218410428898215CB1DBDF3708E26040404040404A0B31362020202020C60604040404247D88B1010101010131362020202020E9438C0D0808080888B10101010101491FE291802821A29770C82E4A09218410420821841042082184104208218410420821841042082184104208218410428898211CB2DBE321BBDDFBD517D9BEFE0555FF7BBE84B50300


 CASSANDRA 2.0 : java driver : blobs not retrieving correctly
 

 Key: CASSANDRA-6543
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6543
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Constance Eustace

 Might be wrong place but didn't find where the bugs should go and saw some 
 java-driver ones in here...
 Simple retrieval of data from a blob CQL3 column, tried getBytes() and 
 getBytes() unsafe, neither seemed to matter.
 getBytes(col).array() 
 Anyway, the input is 1760 bytes, and checked in cqlsh and the data looks 
 correctly inserted. 
 Retrieval buffer is consistently 1863 bytes... ResultSet column definitions 
 indicate it is of type blob, well, and getBytes shouldn't work. 
 bytebuffer.getCapacity is 1863 bytes. The first four values are definitely 
 different for the retrieved BB than the one sent to storage.
 Is 

[jira] [Updated] (CASSANDRA-6543) CASSANDRA 2.0 : java driver : blobs not retrieving correctly

2014-01-02 Thread Constance Eustace (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Constance Eustace updated CASSANDRA-6543:
-

Description: 
Might be wrong place but didn't find where the bugs should go and saw some 
java-driver ones in here...

Simple retrieval of data from a blob CQL3 column; tried getBytes() and 
getBytesUnsafe(), neither seemed to matter.

getBytes(col).array() 

Anyway, the input is 1760 bytes, and checked in cqlsh and the data looks 
correctly inserted. 

Retrieval buffer is consistently 1863 bytes... ResultSet column definitions 
indicate it is of type blob, well, and getBytes shouldn't work. 
ByteBuffer.capacity() is 1863 bytes. The first four values are definitely 
different for the retrieved BB than the one sent to storage.

Is there a mode or something? Maybe some assumed UTF8 decode is occurring? 
Compression? The blob I'm storing has already been compressed via java's zip 
support, so a rezip would probably make it larger?



Here is the blob value in cqlsh, I'll try to get the post-retrieval array:  

1f8b0800eddccb6e1a7718c6e1bdaf6294d8533a424e1b9b538d58446a6c832155058dd820d9725d4ec6c487aa24cabd1716bd032a7dfff18330f2c28b27fa2dde4598efe0ddbbe5d3edddf57cb978babecd968bf9d3edf5f26eb1bc7eba5dccefd60fd74f777f67ade6f6fdf3e949a3d5fcb391655f5acddad77ab3517bcc8b4a7e944f8a3fb2615e7c1ebf2dc695cfc561b7fbd36c9e55b2eeaf95d145ab59e4e3c9b0dea8d56ba72727b57fde9fbeff7a523bfd3dafb76ef21f8be9b8a8acfebaf80e02020202525ac841180908080808486921c60604040404c4d88080808080a40f31362020202020c60604040404247d88b10101010101f9ffc7e687cb229bad5fa6dd2cfb74b9a964ebe9c33a9bbe3c37ebadfc6832392a1e1bf561d6da343e8ecee73783ab5effb8fae1bed3ee5cf4aee6d3d5e2b8daedf656c7f7d541f57e55bd1f54078be5fc62b11c2c672fc3ec316b6c3f36a3f3c1aadd5ff5dbf7d5edcff6f3b8dfeef43bfffdd2ee6c7fe9b5579d6cb59c2f7ad34d5119e79546b3951793ca64fb71941f1e4d0edfeefecd95e1f9f8f9f16336db7c98cf96a3cbf1e568341e6e5e9eb387d9bad19c76cfcebebdf9edcd5537fba5973df4ae16edb365f561ddbfb8fcb41a1479d6b86bd5b77fb67ec966f3c54d162744f41207af2745f012420821841042082184104208218410420821841042082184104208218410420821841031431ca4fdbfd671bead17470202020202525a88b1010101010131362020202020e9438c0d0808080888b10101010101491f626c40404040408c0d0808080848fa10cf03440911bd842b76514a0821841042082184104208218410420821841042082184104208218410420821841042c40ce18addfebe4110070302020202525a88b1010101010131362020202020e9438c0d0808080888b10101010101491f626c40404040408c0d0808080848fa108f04440911bd844376514a0821841042082184104208218410420821841042082184104208218410420821841042c40ce190dd1e0fd9eddeafbec8ee15a60a08080808486921c60604040404c4d88080808080a40f31362020202020c60604040404247d88b1010101010131362020202020e9433c1b112544f4122efa45292184104208218410420821841042082184104208218410420821841042082184104208113344eaf7e3e2b408230101010101292dc4d88080808080181b1010101090f421c60604040404c4d88080808080a40f31362020202020c60604040404247d88e701a284885ec215bb28258410420821841042082184104208218410420821841042082184104208218410420821628670c56e7fdf2088830101010101292dc4d88080808080181b1010101090f4
21c60604040404c4d88080808080a40f31362020202020c60604040404247d884702a284885ec221bb28258410420821841042082184104208218410420821841042082184104208218410420821628670c86e8f87ec76ef575f64f70a530504040404a4b410630302020202626c4040404040d287181b1010101010630302020202923ec4d88080808080181b1010101090f4219e8d8812227a0917fda2941042082184104208218410420821841042082184104208218410420821841042082184881922f5fb71715a849180808080809416626c40404040408c0d0808080848fa10630302020202626c4040404040d287181b1010101010630302020202923ec4f3005142442fe18a5d941242082184104208218410420821841042082184104208218410420821841042082184103143b862b7bf6f10c4c180808080809416626c40404040408c0d0808080848fa10630302020202626c4040404040d287181b1010101010630302020202923ec423015142442fe1905d9412420821841042082184104208218410420821841042082184104208218410420821841031433864b7c74376bbf7ab2fb27b85a90202020202525a88b1010101010131362020202020e9438c0d0808080888b10101010101491f626c40404040408c0d0808080848fa10cf46440911bd848b7e514a0821841042082184104208218410420821841042082184104208218410420821841042c40c91fafdb8382dc24840404040404a0b31362020202020c60604040404247d88b1010101010131362020202020e9438c0d0808080888b10101010101491fe279802821a29770c52e4a09218410420821841042082184104208218410420821841042082184104208218410428898215cb1dbdf3708e26040404040404a0b31362020202020c60604040404247d88b1010101010131362020202020e9438c0d0808080888b10101010101491fe291802821a29770c82e4a09218410420821841042082184104208218410420821841042082184104208218410428898211cb2dbe321bbddfbd517d9befe0555ff7bbe84b50300


quick and dirty Stack Overflow byte[] -> hexstring of the post-retrieval array is: 


[jira] [Issue Comment Deleted] (CASSANDRA-6543) CASSANDRA 2.0 : java driver : blobs not retrieving correctly

2014-01-02 Thread Constance Eustace (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Constance Eustace updated CASSANDRA-6543:
-

Comment: was deleted

(was: Using a quick and dirty Stack Overflow byte[] -> hexstring:

)

 CASSANDRA 2.0 : java driver : blobs not retrieving correctly
 

 Key: CASSANDRA-6543
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6543
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Constance Eustace

 Might be wrong place but didn't find where the bugs should go and saw some 
 java-driver ones in here...
 Simple retrieval of data from a blob CQL3 column, tried getBytes() and 
 getBytes() unsafe, neither seemed to matter.
 getBytes(col).array() 
 Anyway, the input is 1760 bytes, and checked in cqlsh and the data looks 
 correctly inserted. 
 Retrieval buffer is consistently 1863 bytes... ResultSet column definitions 
 indicate it is of type blob, well, and getBytes shouldn't work. 
 bytebuffer.getCapacity is 1863 bytes. The first four values are definitely 
 different for the retrieved BB than the one sent to storage.
 Is there a mode or something? Maybe some assumed UTF8 decode is occurring? 
 Compression? The blob I'm storing has already been compressed via java's zip 
 support, so a rezip would probably make it larger?
 Here is the blob value in cqlsh, I'll try to get the post-retrieval array:  
 

[jira] [Commented] (CASSANDRA-6543) CASSANDRA 2.0 : java driver : blobs not retrieving correctly

2014-01-02 Thread Constance Eustace (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860967#comment-13860967
 ] 

Constance Eustace commented on CASSANDRA-6543:
--

Using a quick and dirty Stack Overflow byte[] -> hexstring:



 CASSANDRA 2.0 : java driver : blobs not retrieving correctly
 

 Key: CASSANDRA-6543
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6543
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Constance Eustace

 Might be wrong place but didn't find where the bugs should go and saw some 
 java-driver ones in here...
 Simple retrieval of data from a blob CQL3 column, tried getBytes() and 
 getBytes() unsafe, neither seemed to matter.
 getBytes(col).array() 
 Anyway, the input is 1760 bytes, and checked in cqlsh and the data looks 
 correctly inserted. 
 Retrieval buffer is consistently 1863 bytes... ResultSet column definitions 
 indicate it is of type blob, well, and getBytes shouldn't work. 
 bytebuffer.getCapacity is 1863 bytes. The first four values are definitely 
 different for the retrieved BB than the one sent to storage.
 Is there a mode or something? Maybe some assumed UTF8 decode is occurring? 
 Compression? The blob I'm storing has already been compressed via java's zip 
 support, so a rezip would probably make it larger?
 Here is the blob value in cqlsh, I'll try to get the post-retrieval array:  
 

[jira] [Issue Comment Deleted] (CASSANDRA-6543) CASSANDRA 2.0 : java driver : blobs not retrieving correctly

2014-01-02 Thread Constance Eustace (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Constance Eustace updated CASSANDRA-6543:
-

Comment: was deleted

(was: 
8208073F0002000100020013696E7465726E616C5F7375626D697373696F6E0004626C6F62000474797065000D0008626C6F6264617461000300010004475A495006F41F8B0800EDDCCB6E1A7718C6E1BDAF6294D8533A424E1B9B538D58446A6C832155058DD820D9725D4EC6C487AA24CABD1716BD032A7DFFF18330F2C28B27FA2DDE4598EFE0DDBBE5D3EDDDF57CB978BABECD968BF9D3EDF5F26EB1BC7EBA5DCCEFD60FD74F777F67ADE6F6FDF3E949A3D5FCB391655F5ACDDAD77AB3517BCC8B4A7E944F8A3FB2615E7C1EBF2DC695CFC561B7FBD36C9E55B2EEAF95D145AB59E4E3C9B0DEA8D56BA72727B57FDE9FBEFF7A523BFD3DAFB76EF21F8BE9B8A8ACFEBAF80E02020202525AC841180908080808486921C60604040404C4D88080808080A40F31362020202020C60604040404247D88B10101010101F9FFC7E687CB229BAD5FA6DD2CFB74B9A964EBE9C33A9BBE3C37EBADFC6832392A1E1BF561D6DA343E8ECEE73783AB5EFFB8FAE1BED3EE5CF4AEE6D3D5E2B8DAEDF656C7F7D541F57E55BD1F54078BE5FC62B11C2C672FC3EC316B6C3F36A3F3C1AADD5FF5DBF7D5EDCFF6F3B8DFEEF43BFFFDD2EE6C7FE9B5579D6CB59C2F7AD34D5119E79546B3951793CA64FB71941F1E4D0EDFEEFECD95E1F9F8F9F16336DB7C98CF96A3CBF1E568341E6E5E9EB387D9BAD19C76CFCEBEBDF9EDCD5537FBA5973DF4AE16EDB365F561DDBFB8FCB41A1479D6B86BD5B77FB67EC966F3C54D162744F41207AF2745F012420821841042082184104208218410420821841042082184104208218410420821841031431CA4FDBFD671BEAD17470202020202525A88B1010101010131362020202020E9438C0D0808080888B10101010101491F626C40404040408C0D0808080848FA10CF03440911BD842B76514A0821841042082184104208218410420821841042082184104208218410420821841042C40CE18ADDFEBE4110070302020202525A88B1010101010131362020202020E9438C0D0808080888B10101010101491F626C40404040408C0D0808080848FA108F04440911BD844376514A0821841042082184104208218410420821841042082184104208218410420821841042C40CE190DD1E0FD9EDDEAFBEC8EE15A60A08080808486921C60604040404C4D88080808080A40F31362020202020C60604040404247D88B1010101010131362020202020E9433C1B112544F4122EFA45292184104208218410420821841042082184104208218410420821841042082184104208113344EAF7E3E2B408230101010101292DC4D88080808080181B1010101090F421C60604040404C4D88080808080A40F31362020202020C60604040404247D88E701A284885EC215BB2825
8410420821841042082184104208218410420821841042082184104208218410420821628670C56E7FDF2088830101010101292DC4D88080808080181B1010101090F421C60604040404C4D88080808080A40F31362020202020C60604040404247D884702A284885EC221BB28258410420821841042082184104208218410420821841042082184104208218410420821628670C86E8F87EC76EF575F64F70A530504040404A4B410630302020202626C4040404040D287181B1010101010630302020202923EC4D88080808080181B1010101090F4219E8D8812227A0917FDA2941042082184104208218410420821841042082184104208218410420821841042082184881922F5FB71715A849180808080809416626C40404040408C0D0808080848FA10630302020202626C4040404040D287181B1010101010630302020202923EC4F3005142442FE18A5D941242082184104208218410420821841042082184104208218410420821841042082184103143B862B7BF6F10C4C180808080809416626C40404040408C0D0808080848FA10630302020202626C4040404040D287181B1010101010630302020202923EC423015142442FE1905D9412420821841042082184104208218410420821841042082184104208218410420821841031433864B7C74376BBF7AB2FB27B85A90202020202525A88B1010101010131362020202020E9438C0D0808080888B10101010101491F626C40404040408C0D0808080848FA10CF46440911BD848B7E514A0821841042082184104208218410420821841042082184104208218410420821841042C40C91FAFDB8382DC24840404040404A0B31362020202020C60604040404247D88B1010101010131362020202020E9438C0D0808080888B10101010101491FE279802821A29770C52E4A09218410420821841042082184104208218410420821841042082184104208218410428898215CB1DBDF3708E26040404040404A0B31362020202020C60604040404247D88B1010101010131362020202020E9438C0D0808080888B10101010101491FE291802821A29770C82E4A09218410420821841042082184104208218410420821841042082184104208218410428898211CB2DBE321BBDDFBD517D9BEFE0555FF7BBE84B50300
)

 CASSANDRA 2.0 : java driver : blobs not retrieving correctly
 

 Key: CASSANDRA-6543
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6543
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Constance Eustace

 Might be wrong place but didn't find where the bugs should go and saw some 
 java-driver ones in here...
 Simple retrieval of data from a blob CQL3 column, tried getBytes() and 
 getBytes() unsafe, neither seemed to matter.
 getBytes(col).array() 
 Anyway, the input is 1760 bytes, and checked in cqlsh and the data looks 
 correctly inserted. 
 Retrieval buffer is consistently 1863 bytes... ResultSet column definitions 
 indicate it is of type blob, well, and getBytes shouldn't work. 
 bytebuffer.getCapacity is 1863 bytes. The first four values are definitely 
 different for the retrieved BB than the one sent to storage.
 Is there a mode or 

[jira] [Commented] (CASSANDRA-6543) CASSANDRA 2.0 : java driver : blobs not retrieving correctly

2014-01-02 Thread Constance Eustace (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13860976#comment-13860976
 ] 

Constance Eustace commented on CASSANDRA-6543:
--

Hm:

8208073F0002000100020013696E7465726E616C5F7375626D697373696F6E0004626C6F62000474797065000D0008626C6F6264617461000300010004475A495006F4
 

seems to be prepended to my value
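One plausible explanation for extra leading bytes (a hedged guess from the symptom, not something confirmed in this thread): calling array() on the returned ByteBuffer exposes the driver's entire backing buffer, ignoring position and limit, so any frame metadata stored before the payload in that buffer gets included. A minimal sketch of the pitfall and the safe copy:

```java
import java.nio.ByteBuffer;

public class BlobReadExample
{
    // Pitfall: ByteBuffer.array() returns the ENTIRE backing array and
    // ignores position/limit. If the buffer is a view into a larger
    // network frame buffer, array() yields metadata bytes prepended to
    // the actual blob payload.
    static byte[] wrongCopy(ByteBuffer bb)
    {
        return bb.array();
    }

    // Safe: copy only the remaining() bytes between position and limit.
    static byte[] safeCopy(ByteBuffer bb)
    {
        ByteBuffer dup = bb.duplicate(); // leave the caller's buffer untouched
        byte[] out = new byte[dup.remaining()];
        dup.get(out);
        return out;
    }

    public static void main(String[] args)
    {
        // Simulated frame: 4 metadata bytes followed by a 3-byte payload.
        byte[] frame = { (byte) 0x82, 0x08, 0x07, 0x3F, 1, 2, 3 };
        ByteBuffer bb = ByteBuffer.wrap(frame, 4, 3);

        System.out.println(wrongCopy(bb).length); // 7 -- metadata included
        System.out.println(safeCopy(bb).length);  // 3 -- payload only
    }
}
```

Note that array() on a read-only or direct buffer throws instead; the silent length mismatch only happens for array-backed views like the simulated one here.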

 CASSANDRA 2.0 : java driver : blobs not retrieving correctly
 

 Key: CASSANDRA-6543
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6543
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Constance Eustace

 Might be wrong place but didn't find where the bugs should go and saw some 
 java-driver ones in here...
 Simple retrieval of data from a blob CQL3 column, tried getBytes() and 
 getBytes() unsafe, neither seemed to matter.
 getBytes(col).array() 
 Anyway, the input is 1760 bytes, and checked in cqlsh and the data looks 
 correctly inserted. 
 Retrieval buffer is consistently 1863 bytes... ResultSet column definitions 
 indicate it is of type blob, well, and getBytes shouldn't work. 
 bytebuffer.getCapacity is 1863 bytes. The first four values are definitely 
 different for the retrieved BB than the one sent to storage.
 Is there a mode or something? Maybe some assumed UTF8 decode is occurring? 
 Compression? The blob I'm storing has already been compressed via java's zip 
 support, so a rezip would probably make it larger?
 Here is the blob value in cqlsh, I'll try to get the post-retrieval array:  
 

[jira] [Resolved] (CASSANDRA-6543) CASSANDRA 2.0 : java driver : blobs not retrieving correctly

2014-01-02 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis resolved CASSANDRA-6543.
---

Resolution: Invalid

Please file java driver bugs with the java driver github project.

 CASSANDRA 2.0 : java driver : blobs not retrieving correctly
 

 Key: CASSANDRA-6543
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6543
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Constance Eustace

 Might be wrong place but didn't find where the bugs should go and saw some 
 java-driver ones in here...
 Simple retrieval of data from a blob CQL3 column, tried getBytes() and 
 getBytes() unsafe, neither seemed to matter.
 getBytes(col).array() 
 Anyway, the input is 1760 bytes, and checked in cqlsh and the data looks 
 correctly inserted. 
 Retrieval buffer is consistently 1863 bytes... ResultSet column definitions 
 indicate it is of type blob, well, and getBytes shouldn't work. 
 bytebuffer.getCapacity is 1863 bytes. The first four values are definitely 
 different for the retrieved BB than the one sent to storage.
 Is there a mode or something? Maybe some assumed UTF8 decode is occurring? 
 Compression? The blob I'm storing has already been compressed via java's zip 
 support, so a rezip would probably make it larger?
 Here is the blob value in cqlsh, I'll try to get the post-retrieval array:  
 

[jira] [Commented] (CASSANDRA-6465) DES scores fluctuate too much for cache pinning

2014-01-02 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13861012#comment-13861012
 ] 

Tyler Hobbs commented on CASSANDRA-6465:


I can reproduce Chris's results, and in my experimentation it looks like almost 
all of the variation is due to the timePenalty, which is basically how long 
it has been since the last entry from an endpoint.  I can see why something 
like the time penalty might be useful for the phi FD, which expects messages on 
a periodic basis, but it doesn't make sense to me to use it in a load balancing 
measure.  My suggestion would be to remove the time penalty.

bq. Are we sure that this mechanism of producing cache pinning is worth the 
complexity here, especially given speculative execution?

Effective cache utilization is extremely important, so I would say it's well 
worth the additional complexity.  I don't think speculative execution should 
affect this greatly, but I might be missing something; care to expand on that?
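For reference, the pinning behavior described in the conf comes down to a single comparison. This is a simplified model only, not the actual DynamicEndpointSnitch code, and the scores are hypothetical normalized latencies where lower is better:

```java
public class BadnessThresholdSketch
{
    // Returns true when the dynamic snitch should abandon the pinned
    // replica in favor of the fastest one. With the default threshold
    // of 0.1, the pinned host must be more than 10% worse than the
    // fastest replica before the ordering changes.
    static boolean switchAwayFromPinned(double pinnedScore,
                                        double bestScore,
                                        double badnessThreshold)
    {
        return pinnedScore > bestScore * (1.0 + badnessThreshold);
    }

    public static void main(String[] args)
    {
        System.out.println(switchAwayFromPinned(1.05, 1.0, 0.1)); // false
        System.out.println(switchAwayFromPinned(1.25, 1.0, 0.1)); // true
    }
}
```

This comparison is exactly why score jitter larger than the threshold defeats pinning: fluctuations above 10% constantly trip the switch regardless of which host is genuinely faster.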

 DES scores fluctuate too much for cache pinning
 ---

 Key: CASSANDRA-6465
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6465
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 1.2.11, 2 DC cluster
Reporter: Chris Burroughs
Assignee: Tyler Hobbs
Priority: Minor
  Labels: gossip
 Fix For: 2.0.5

 Attachments: des-score-graph.png, des.sample.15min.csv, get-scores.py


 To quote the conf:
 {noformat}
 # if set greater than zero and read_repair_chance is < 1.0, this will allow
 # 'pinning' of replicas to hosts in order to increase cache capacity.
 # The badness threshold will control how much worse the pinned host has to be
 # before the dynamic snitch will prefer other replicas over it.  This is
 # expressed as a double which represents a percentage.  Thus, a value of
 # 0.2 means Cassandra would continue to prefer the static snitch values
 # until the pinned host was 20% worse than the fastest.
 dynamic_snitch_badness_threshold: 0.1
 {noformat}
 An assumption of this feature is that scores will vary by less than 
 dynamic_snitch_badness_threshold during normal operations.  Attached is the 
 result of polling a node for the scores of 6 different endpoints at 1 Hz for 
 15 minutes.  The endpoints to sample were chosen with `nodetool getendpoints` 
 for row that is known to get reads.  The node was acting as a coordinator for 
 a few hundred req/second, so it should have sufficient data to work with.  
 Other traces on a second cluster have produced similar results.
  * The scores vary by far more than I would expect, as shown by the difficulty 
 of seeing anything useful in that graph.
  * The difference between the best and next-best score is usually > 10% 
 (default dynamic_snitch_badness_threshold).
 Neither ClientRequest nor ColumnFamily metrics showed wild changes during the 
 data gathering period.
 Attachments:
  * jython script cobbled together to gather the data (based on work on the 
 mailing list from Maki Watanabe a while back)
  * csv of DES scores for 6 endpoints, polled about once a second
  * Attempt at making a graph





git commit: make it clear that varargs are in effect

2014-01-02 Thread dbrosius
Updated Branches:
  refs/heads/trunk 7aa3364e0 -> af3ad31c3


make it clear that varargs are in effect


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/af3ad31c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/af3ad31c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/af3ad31c

Branch: refs/heads/trunk
Commit: af3ad31c3056fc5c54a3860e5e9d7f9662fa15f0
Parents: 7aa3364
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Thu Jan 2 19:58:42 2014 -0500
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Thu Jan 2 19:58:42 2014 -0500

--
 .../src/org/apache/cassandra/stress/util/JavaDriverClient.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/af3ad31c/tools/stress/src/org/apache/cassandra/stress/util/JavaDriverClient.java
--
diff --git 
a/tools/stress/src/org/apache/cassandra/stress/util/JavaDriverClient.java 
b/tools/stress/src/org/apache/cassandra/stress/util/JavaDriverClient.java
index 7f2ab16..cf37040 100644
--- a/tools/stress/src/org/apache/cassandra/stress/util/JavaDriverClient.java
+++ b/tools/stress/src/org/apache/cassandra/stress/util/JavaDriverClient.java
@@ -105,7 +105,7 @@ public class JavaDriverClient
 {
 
 stmt.setConsistencyLevel(from(consistency));
-        BoundStatement bstmt = stmt.bind(queryParams.toArray(new ByteBuffer[queryParams.size()]));
+        BoundStatement bstmt = stmt.bind((Object[]) queryParams.toArray(new ByteBuffer[queryParams.size()]));
 return getSession().execute(bstmt);
 }
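The effect of the added (Object[]) cast can be shown in isolation (a standalone sketch, not the stress-tool code): passing an array to a varargs parameter spreads it element by element, the (Object[]) cast only makes that intent explicit, while casting to a single Object wraps the whole array as one argument.

```java
import java.nio.ByteBuffer;

public class VarargsCastExample
{
    static int countArgs(Object... values)
    {
        return values.length;
    }

    public static void main(String[] args)
    {
        ByteBuffer[] params = { ByteBuffer.allocate(4), ByteBuffer.allocate(4) };

        // A ByteBuffer[] is already an Object[], so it spreads as varargs.
        System.out.println(countArgs(params));            // 2
        // The cast changes nothing at runtime; it only documents that the
        // array is intentionally spread into the varargs parameter.
        System.out.println(countArgs((Object[]) params)); // 2
        // Casting to a single Object instead wraps the whole array.
        System.out.println(countArgs((Object) params));   // 1
    }
}
```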
 



[jira] [Commented] (CASSANDRA-6405) When making heavy use of counters, neighbor nodes occasionally enter spiral of constant memory consumption

2014-01-02 Thread Jason Harvey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13861050#comment-13861050
 ] 

Jason Harvey commented on CASSANDRA-6405:
-

[~mishail] 100M currently. preheat is turned on.

 When making heavy use of counters, neighbor nodes occasionally enter spiral 
 of constant memory consumption
 -

 Key: CASSANDRA-6405
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6405
 Project: Cassandra
  Issue Type: Bug
 Environment: RF of 3, 15 nodes.
 Sun Java 7 (also occurred in OpenJDK 6, and Sun Java 6).
 Xmx of 8G.
 No row cache.
Reporter: Jason Harvey
 Attachments: threaddump.txt


 We're randomly running into an interesting issue on our ring. When making use 
 of counters, we'll occasionally have 3 nodes (always neighbors) suddenly 
 start immediately filling up memory, CMSing, fill up again, repeat. This 
 pattern goes on for 5-20 minutes. Nearly all requests to the nodes time out 
 during this period. Restarting one, two, or all three of the nodes does not 
 resolve the spiral; after a restart the three nodes immediately start hogging 
 up memory again and CMSing constantly.
 When the issue resolves itself, all 3 nodes immediately get better. Sometimes 
 it reoccurs in bursts, where it will be trashed for 20 minutes, fine for 5, 
 trashed for 20, and repeat that cycle a few times.
 There are no unusual logs provided by cassandra during this period of time, 
 other than recording of the constant dropped read requests and the constant 
 CMS runs. I have analyzed the log files prior to multiple distinct instances 
 of this issue and have found no preceding events which are associated with 
 this issue.
 I have verified that our apps are not performing any unusual number or type 
 of requests during this time.
 This behaviour occurred on 1.0.12, 1.1.7, and now on 1.2.11.
 The way I've narrowed this down to counters is a bit naive. It started 
 happening when we started making use of counter columns, went away after we 
 rolled back use of counter columns. I've repeated this attempted rollout on 
[jira] [Commented] (CASSANDRA-6405) When making heavy use of counters, neighbor nodes occasionally enter spiral of constant memory consumption

2014-01-02 Thread Jason Harvey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13861066#comment-13861066
 ] 

Jason Harvey commented on CASSANDRA-6405:
-

I should note, [~brandon.williams] took a peek at the heap dump and it was 
unfortunately caught just after a CMS, so it doesn't tell us much. I've been 
unable to get a heap dump from when the memory is full. Despite the thing 
constantly CMSing, every dump I've taken is what the heap looked like just 
after a CMS.

The only solid clue remaining is the instance count of CounterColumn.

 When making heavy use of counters, neighbor nodes occasionally enter spiral 
 of constant memory consumption
 -

 Key: CASSANDRA-6405
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6405
 Project: Cassandra
  Issue Type: Bug
 Environment: RF of 3, 15 nodes.
 Sun Java 7 (also occurred in OpenJDK 6, and Sun Java 6).
 Xmx of 8G.
 No row cache.
Reporter: Jason Harvey
 Attachments: threaddump.txt


 We're randomly running into an interesting issue on our ring. When making use 
 of counters, we'll occasionally have 3 nodes (always neighbors) suddenly 
 start immediately filling up memory, CMSing, fill up again, repeat. This 
 pattern goes on for 5-20 minutes. Nearly all requests to the nodes time out 
 during this period. Restarting one, two, or all three of the nodes does not 
 resolve the spiral; after a restart the three nodes immediately start hogging 
 up memory again and CMSing constantly.
 When the issue resolves itself, all 3 nodes immediately get better. Sometimes 
 it reoccurs in bursts, where it will be trashed for 20 minutes, fine for 5, 
 trashed for 20, and repeat that cycle a few times.
 There are no unusual logs provided by cassandra during this period of time, 
 other than recording of the constant dropped read requests and the constant 
 CMS runs. I have analyzed the log files prior to multiple distinct instances 
 of this issue and have found no preceding events which are associated with 
 this issue.
 I have verified that our apps are not performing any unusual number or type 
 of requests during this time.
 This behaviour occurred on 1.0.12, 1.1.7, and now on 1.2.11.
 The way I've narrowed this down to counters is a bit naive. It started 
 happening when we started making use of counter columns, went away after we 
 rolled back use of counter columns. I've repeated this attempted rollout on 
 each version now, and it consistently rears its head every time. I should 
 note this incident does _seem_ to happen more rarely on 1.2.11 compared to 
 the previous versions.
 This incident has been consistent across multiple different types of 
 hardware, as well as major kernel version changes (2.6 all the way to 3.2). 
 The OS is operating normally during the event.
 I managed to get an hprof dump while the issue was happening in the wild. 
 Something notable appears in the class instance counts as reported by jhat. 
 Here are the top 5 counts for this one node:
 {code}
 5967846 instances of class org.apache.cassandra.db.CounterColumn 
 1247525 instances of class 
 com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$WeightedValue 
 1247310 instances of class org.apache.cassandra.cache.KeyCacheKey 
 1246648 instances of class 
 com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
 1237526 instances of class org.apache.cassandra.db.RowIndexEntry 
 {code}
 Is it normal or expected for CounterColumn to have that number of instances?
 The data model for how we use counters is as follows: between 50-2 
 counter columns per key. We currently have around 3 million keys total, but 
 this issue also replicated when we only had a few thousand keys total. 
 Average column count is around 1k, and 90th is 18k. New columns are added 
 regularly, and columns are incremented regularly. No column or key deletions 
 occur. We probably have 1-5k hot keys at any given time, spread across the 
 entire ring. R:W ratio is typically around 50:1. This is the only CF we're 
 using counters on, at this time. CF details are as follows:
 {code}
 ColumnFamily: CommentTree
   Key Validation Class: org.apache.cassandra.db.marshal.AsciiType
   Default column value validator: 
 org.apache.cassandra.db.marshal.CounterColumnType
   Cells sorted by: 
 org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType,org.apache.cassandra.db.marshal.LongType)
   GC grace seconds: 864000
   Compaction min/max thresholds: 4/32
   Read repair chance: 0.01
   DC Local Read repair chance: 0.0
   Populate IO Cache on flush: false
   Replicate on write: true
   Caching: KEYS_ONLY
   

[jira] [Updated] (CASSANDRA-6465) DES scores fluctuate too much for cache pinning

2014-01-02 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-6465:
--

Since Version: 1.2.0

This was introduced by CASSANDRA-3722.  It's not clear to me what that code is 
trying to do.  Or maybe I'm still grumpy about calling I/O activity "severity".

 DES scores fluctuate too much for cache pinning
 ---

 Key: CASSANDRA-6465
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6465
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: 1.2.11, 2 DC cluster
Reporter: Chris Burroughs
Assignee: Tyler Hobbs
Priority: Minor
  Labels: gossip
 Fix For: 2.0.5

 Attachments: des-score-graph.png, des.sample.15min.csv, get-scores.py


 To quote the conf:
 {noformat}
 # if set greater than zero and read_repair_chance is < 1.0, this will allow
 # 'pinning' of replicas to hosts in order to increase cache capacity.
 # The badness threshold will control how much worse the pinned host has to be
 # before the dynamic snitch will prefer other replicas over it.  This is
 # expressed as a double which represents a percentage.  Thus, a value of
 # 0.2 means Cassandra would continue to prefer the static snitch values
 # until the pinned host was 20% worse than the fastest.
 dynamic_snitch_badness_threshold: 0.1
 {noformat}
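The pinning comparison the conf describes can be sketched as follows. This is a hypothetical helper for illustration only, not Cassandra's actual DynamicEndpointSnitch code; scores here are latency-based, so lower is better:

```java
public class SnitchSketch {
    // Decide whether to keep reading from the statically preferred ("pinned")
    // replica. With a badness threshold of 0.1, the pinned host is kept until
    // its score is 10% worse than the fastest replica's.
    public static boolean keepPinned(double pinnedScore, double bestScore,
                                     double badnessThreshold) {
        return pinnedScore <= bestScore * (1.0 + badnessThreshold);
    }
}
```

Under this comparison, scores that routinely fluctuate by more than the threshold (as the attached traces show) defeat pinning, since the pinned host keeps crossing the cutoff.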
 An assumption of this feature is that scores will vary by less than 
 dynamic_snitch_badness_threshold during normal operations.  Attached is the 
 result of polling a node for the scores of 6 different endpoints at 1 Hz for 
 15 minutes.  The endpoints to sample were chosen with `nodetool getendpoints` 
 for row that is known to get reads.  The node was acting as a coordinator for 
 a few hundred req/second, so it should have sufficient data to work with.  
 Other traces on a second cluster have produced similar results.
  * The scores vary by far more than I would expect, as shown by the difficulty 
 of seeing anything useful in that graph.
  * The difference between the best and next-best score is usually < 10% 
 (default dynamic_snitch_badness_threshold).
 Neither ClientRequest nor ColumnFamily metrics showed wild changes during the 
 data gathering period.
 Attachments:
  * jython script cobbled together to gather the data (based on work on the 
 mailing list from Maki Watanabe a while back)
  * csv of DES scores for 6 endpoints, polled about once a second
  * Attempt at making a graph



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (CASSANDRA-5202) CFs should have globally and temporally unique CF IDs to prevent reusing data from earlier incarnation of same CF name

2014-01-02 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-5202:
--

Attachment: (was: 5202-2.0.0.txt)

 CFs should have globally and temporally unique CF IDs to prevent reusing 
 data from earlier incarnation of same CF name
 

 Key: CASSANDRA-5202
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5202
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.9
 Environment: OS: Windows 7, 
 Server: Cassandra 1.1.9 release drop
 Client: astyanax 1.56.21, 
 JVM: Sun/Oracle JVM 64 bit (jdk1.6.0_27)
Reporter: Marat Bedretdinov
Assignee: Yuki Morishita
  Labels: test
 Fix For: 2.1

 Attachments: astyanax-stress-driver.zip


 Attached is a driver that sequentially:
 1. Drops keyspace
 2. Creates keyspace
 4. Creates 2 column families
 5. Seeds 1M rows with 100 columns
 6. Queries these 2 column families
 The above steps are repeated 1000 times.
 The following exception is observed at random (race - SEDA?):
 ERROR [ReadStage:55] 2013-01-29 19:24:52,676 AbstractCassandraDaemon.java 
 (line 135) Exception in thread Thread[ReadStage:55,5,main]
 java.lang.AssertionError: DecoratedKey(-1, ) != 
 DecoratedKey(62819832764241410631599989027761269388, 313a31) in 
 C:\var\lib\cassandra\data\user_role_reverse_index\business_entity_role\user_role_reverse_index-business_entity_role-hf-1-Data.db
   at 
 org.apache.cassandra.db.columniterator.SSTableSliceIterator.<init>(SSTableSliceIterator.java:60)
   at 
 org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:67)
   at 
 org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:79)
   at 
 org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:256)
   at 
 org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:64)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1367)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1229)
   at 
 org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1164)
   at org.apache.cassandra.db.Table.getRow(Table.java:378)
   at 
 org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:69)
   at 
 org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:822)
   at 
 org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1271)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:662)
 This exception appears in the server at the time of client submitting a query 
 request (row slice) and not at the time data is seeded. The client times out 
 and this data can no longer be queried as the same exception would always 
 occur from there on.
 Also on iteration 201, it appears that dropping column families failed and as 
 a result their recreation failed with unique column family name violation 
 (see exception below). Note that the data files are actually gone, so it 
 appears that the server runtime responsible for creating column family was 
 out of sync with the piece that dropped them:
 Starting dropping column families
 Dropped column families
 Starting dropping keyspace
 Dropped keyspace
 Starting creating column families
 Created column families
 Starting seeding data
 Total rows inserted: 100 in 5105 ms
 Iteration: 200; Total running time for 1000 queries is 232; Average running 
 time of 1000 queries is 0 ms
 Starting dropping column families
 Dropped column families
 Starting dropping keyspace
 Dropped keyspace
 Starting creating column families
 Created column families
 Starting seeding data
 Total rows inserted: 100 in 5361 ms
 Iteration: 201; Total running time for 1000 queries is 222; Average running 
 time of 1000 queries is 0 ms
 Starting dropping column families
 Starting creating column families
 Exception in thread main 
 com.netflix.astyanax.connectionpool.exceptions.BadRequestException: 
 BadRequestException: [host=127.0.0.1(127.0.0.1):9160, latency=2468(2469), 
 attempts=1]InvalidRequestException(why:Keyspace names must be 
 case-insensitively unique (user_role_reverse_index conflicts with 
 user_role_reverse_index))
   at 
 com.netflix.astyanax.thrift.ThriftConverter.ToConnectionPoolException(ThriftConverter.java:159)
   at 
 

[jira] [Updated] (CASSANDRA-5202) CFs should have globally and temporally unique CF IDs to prevent reusing data from earlier incarnation of same CF name

2014-01-02 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-5202:
--

Attachment: (was: 5202-1.1.txt)


[jira] [Updated] (CASSANDRA-5202) CFs should have globally and temporally unique CF IDs to prevent reusing data from earlier incarnation of same CF name

2014-01-02 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-5202:
--

Attachment: 5202.txt

(also: https://github.com/yukim/cassandra/commits/5202)

Patch attached for review.

* CF ID is generated randomly upon new CFMetaData creation. CFs under system 
keyspaces and ones from older versions have a deterministic CF ID based on 
their name.
* SSTable directories are created as ks/cf-cfid, where cfid is the hex 
encoding of the UUID bytes. When upgrading, the older ks/cf format is still 
used.
* The saved key cache file name also has the cfid appended at the end, and 
key cache lookup is CF ID aware.
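As an illustration of the scheme above, here is a minimal sketch. Class and method names are hypothetical, not taken from the attached patch:

```java
import java.nio.charset.StandardCharsets;
import java.util.UUID;

public class CfIdSketch {
    // New CFs get a random, globally unique ID.
    public static UUID newCfId() {
        return UUID.randomUUID();
    }

    // System keyspaces and pre-upgrade CFs get a deterministic ID derived
    // from the CF name, so every node computes the same value without
    // coordination.
    public static UUID deterministicCfId(String cfName) {
        return UUID.nameUUIDFromBytes(cfName.getBytes(StandardCharsets.UTF_8));
    }

    // SSTable directory "ks/cf-cfid", where cfid is the hex encoding of the
    // 16 UUID bytes.
    public static String sstableDir(String ks, String cf, UUID cfId) {
        return String.format("%s/%s-%016x%016x", ks, cf,
                cfId.getMostSignificantBits(), cfId.getLeastSignificantBits());
    }
}
```

The deterministic path matters because schema for system tables must agree across nodes before gossip can exchange anything.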



[jira] [Updated] (CASSANDRA-6456) log listen address at startup

2014-01-02 Thread Sean Bridges (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Bridges updated CASSANDRA-6456:


Attachment: CASSANDRA-6456-2.patch

 log listen address at startup
 -

 Key: CASSANDRA-6456
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6456
 Project: Cassandra
  Issue Type: Wish
  Components: Core
Reporter: Jeremy Hanna
Assignee: Sean Bridges
Priority: Trivial
 Attachments: CASSANDRA-6456-2.patch, CASSANDRA-6456.patch


 When looking through logs from a cluster, sometimes it's handy to know the 
 address a node is from the logs.  It would be convenient if on startup, we 
 indicated the listen address for that node.





[jira] [Commented] (CASSANDRA-6456) log listen address at startup

2014-01-02 Thread Sean Bridges (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13861183#comment-13861183
 ] 

Sean Bridges commented on CASSANDRA-6456:
-

New patch attached.

{quote}
I think we should change the format to a single line (helps when grep'ing) 
(see this gist)

Changed to log on a single line with slightly modified format to be consistent 
with other log lines. 

{quote}
For the original intent of this JIRA I think we need to add a call to get the 
address or something, as the IPs in the yaml can be left blank.
{quote}

I added a line to log InetAddress.getLocalHost() on startup in case the 
listen address is not set.
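A minimal sketch of that fallback (hypothetical helper, not the attached patch):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class ListenAddressSketch {
    // listen_address may be left blank in cassandra.yaml; fall back to the
    // JVM's view of the local host so the startup log still records a
    // concrete address.
    public static String startupAddress(String configuredListenAddress) {
        if (configuredListenAddress != null && !configuredListenAddress.isEmpty())
            return configuredListenAddress;
        try {
            return InetAddress.getLocalHost().getHostAddress();
        } catch (UnknownHostException e) {
            return "unknown";
        }
    }
}
```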


{quote}
I think this makes some ad-hoc config logging redundant as well?
{quote}

A couple of log lines were removed with the original patch, let me know if 
there are more to remove.







[jira] [Created] (CASSANDRA-6544) Reduce GC activity during compaction

2014-01-02 Thread Vijay (JIRA)
Vijay created CASSANDRA-6544:


 Summary: Reduce GC activity during compaction
 Key: CASSANDRA-6544
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6544
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Vijay
Assignee: Vijay
 Fix For: 2.1


We are noticing an increase in P99 latency while compactions are running at 
full steam. Most of it is because of the increased GC activity (followed by 
full GC).

The obvious solution/workaround is to throttle the compactions, but with SSDs 
we can get more disk bandwidth for reads and compactions.

It would be nice to move the compaction object allocations off heap. A first 
step might be to create an off-heap slab allocator sized to the compaction 
in-memory size and recycle it. 

Also we might want to make it configurable so folks can disable it when they 
don't have off-heap memory to reserve.
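A minimal sketch of such a recycled off-heap slab, assuming a single direct ByteBuffer sized to the compaction in-memory size (hypothetical names, not a proposed implementation):

```java
import java.nio.ByteBuffer;

public class OffHeapSlabSketch {
    // One direct (off-heap) region, recycled across compactions instead of
    // churning the heap with per-row allocations.
    private final ByteBuffer slab;

    public OffHeapSlabSketch(int slabSizeBytes) {
        slab = ByteBuffer.allocateDirect(slabSizeBytes);
    }

    // Bump-pointer allocation: carve a slice, or return null when the slab is
    // full (the caller would then fall back to on-heap allocation, matching
    // the "configurable/disable" idea above).
    public ByteBuffer allocate(int size) {
        if (slab.remaining() < size)
            return null;
        ByteBuffer slice = slab.slice();
        slice.limit(size);
        slab.position(slab.position() + size);
        return slice;
    }

    // Reset the whole slab for the next compaction instead of re-allocating.
    public void recycle() {
        slab.clear();
    }
}
```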





[jira] [Commented] (CASSANDRA-6538) Provide a read-time CQL function to display the data size of columns and rows

2014-01-02 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13861305#comment-13861305
 ] 

Sylvain Lebresne commented on CASSANDRA-6538:
-

Whenever we have CASSANDRA-4914, it should be relatively simple to write an 
aggregation function that sums the data size of queried columns, but in the 
meantime I'm highly skeptical that it's worth adding special-casing for it (in 
CQL at least).

If it's only to check that a given partition isn't a lot bigger than one 
though, maybe a simpler option could be a JMX call that given a partition key, 
returns the total size it occupies on disk (which as a bonus we can do without 
actually reading the data, just the index).

 Provide a read-time CQL function to display the data size of columns and rows
 -

 Key: CASSANDRA-6538
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6538
 Project: Cassandra
  Issue Type: Improvement
Reporter: Johnny Miller

 It would be extremely useful to be able to work out the size of rows and 
 columns via CQL. 





[jira] [Updated] (CASSANDRA-6538) Provide a read-time CQL function to display the data size of columns and rows

2014-01-02 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-6538:


Priority: Minor  (was: Major)






[jira] [Commented] (CASSANDRA-6543) CASSANDRA 2.0 : java driver : blobs not retrieving correctly

2014-01-02 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13861314#comment-13861314
 ] 

Sylvain Lebresne commented on CASSANDRA-6543:
-

This is indeed not the correct place for Java driver issues 
(https://datastax-oss.atlassian.net/browse/JAVA is), but for the sake of saving 
everyone's time, this is not a bug, just a misuse of the .array() method of 
ByteBuffer. Please see [this 
thread|https://groups.google.com/a/lists.datastax.com/forum/#!searchin/java-driver-user/blob$20ByteBuffer/java-driver-user/4_KegVX0teo/2OOZ8YOwtBcJ]
 for more details.
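For illustration, the pitfall and the fix can be sketched as follows (a hypothetical helper, not the driver's code): `array()` exposes the ByteBuffer's entire backing array, ignoring position, limit, and arrayOffset(), which is exactly how extra framing bytes leak into the "retrieved" blob.

```java
import java.nio.ByteBuffer;

public class BlobReadSketch {
    // WRONG: returns the whole backing array, which can be larger than the
    // blob and start with unrelated bytes (e.g. protocol framing).
    public static byte[] wrong(ByteBuffer bb) {
        return bb.array();
    }

    // RIGHT: copy exactly the readable window [position, limit).
    public static byte[] right(ByteBuffer bb) {
        byte[] out = new byte[bb.remaining()];
        bb.duplicate().get(out);  // duplicate() leaves the caller's position untouched
        return out;
    }
}
```

This matches the reporter's symptom below: a 1760-byte blob coming back as a consistently larger buffer whose first bytes differ from what was stored.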

 CASSANDRA 2.0 : java driver : blobs not retrieving correctly
 

 Key: CASSANDRA-6543
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6543
 Project: Cassandra
  Issue Type: Bug
  Components: API
Reporter: Constance Eustace

 Might be wrong place but didn't find where the bugs should go and saw some 
 java-driver ones in here...
 Simple retrieval of data from a blob CQL3 column; tried getBytes() and 
 getBytesUnsafe(), neither seemed to matter.
 getBytes(col).array() 
 Anyway, the input is 1760 bytes, and checked in cqlsh and the data looks 
 correctly inserted. 
 Retrieval buffer is consistently 1863 bytes... ResultSet column definitions 
 indicate it is of type blob, well, and getBytes shouldn't work. 
 bytebuffer.getCapacity is 1863 bytes. The first four values are definitely 
 different for the retrieved BB than the one sent to storage.
 Is there a mode or something? Maybe some assumed UTF8 decode is occurring? 
 Compression? The blob I'm storing has already been compressed via java's zip 
 support, so a rezip would probably make it larger?
 Here is the blob value in cqlsh, I'll try to get the post-retrieval array: