git commit: remove redundant startTime

2013-07-01 Thread jbellis
Updated Branches:
  refs/heads/trunk 5fe804c46 -> 09ea74620


remove redundant startTime


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/09ea7462
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/09ea7462
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/09ea7462

Branch: refs/heads/trunk
Commit: 09ea746207a448d8ede85e7905f05e5f9535fb1f
Parents: 5fe804c
Author: Jonathan Ellis jbel...@apache.org
Authored: Sat Jun 29 18:43:42 2013 -0700
Committer: Jonathan Ellis jbel...@apache.org
Committed: Sun Jun 30 23:25:22 2013 -0700

--
 src/java/org/apache/cassandra/service/StorageProxy.java | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/09ea7462/src/java/org/apache/cassandra/service/StorageProxy.java
--
diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java 
b/src/java/org/apache/cassandra/service/StorageProxy.java
index c57d01e..9f75fce 100644
--- a/src/java/org/apache/cassandra/service/StorageProxy.java
+++ b/src/java/org/apache/cassandra/service/StorageProxy.java
@@ -1076,7 +1076,7 @@ public class StorageProxy implements StorageProxyMBean
 throw new IsBootstrappingException();
 }
 
-long startTime = System.nanoTime();
+long start = System.nanoTime();
 List<Row> rows = null;
 try
 {
@@ -1089,7 +1089,6 @@ public class StorageProxy implements StorageProxyMBean
 ReadCommand command = commands.get(0);
 CFMetaData metadata = 
Schema.instance.getCFMetaData(command.ksName, command.cfName);
 
-long start = System.nanoTime();
 long timeout = 
TimeUnit.MILLISECONDS.toNanos(DatabaseDescriptor.getCasContentionTimeout());
 while (true)
 {
@@ -1125,7 +1124,7 @@ public class StorageProxy implements StorageProxyMBean
 }
 finally
 {
-readMetrics.addNano(System.nanoTime() - startTime);
+readMetrics.addNano(System.nanoTime() - start);
 }
 return rows;
 }
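The simplification above works because the try/finally spans the whole method: a single timestamp taken before the try is visible in the finally block on both the success and exception paths, so a second start variable inside the body was redundant. A minimal standalone sketch (names are hypothetical, not Cassandra's actual metrics API):

```java
import java.util.concurrent.TimeUnit;

public class TimedRead {
    // Hypothetical stand-in for Cassandra's readMetrics.addNano(...)
    static long recordedNanos = -1;

    static void recordLatency(long nanos) {
        recordedNanos = nanos;
    }

    static String doRead() {
        long start = System.nanoTime();  // one timestamp taken up front
        try {
            return "rows";               // the guarded work
        } finally {
            // finally runs on success and failure alike, so the single
            // 'start' variable covers the whole method
            recordLatency(System.nanoTime() - start);
        }
    }

    public static void main(String[] args) {
        String rows = doRead();
        System.out.println("got " + rows + " in "
            + TimeUnit.NANOSECONDS.toMicros(recordedNanos) + " us");
    }
}
```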



[jira] [Commented] (CASSANDRA-5151) Implement better way of eliminating compaction left overs.

2013-07-01 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13696624#comment-13696624
 ] 

Sylvain Lebresne commented on CASSANDRA-5151:
-

Ok, so any opposition to closing this now and re-opening if it turns out 
Michael's bug hasn't been fixed by CASSANDRA-5241?

 Implement better way of eliminating compaction left overs.
 --

 Key: CASSANDRA-5151
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5151
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.3
Reporter: Yuki Morishita
Assignee: Yuki Morishita
 Fix For: 2.0 beta 1

 Attachments: 
 0001-move-scheduling-MeteredFlusher-to-CassandraDaemon.patch, 5151-1.2.txt, 
 5151-v2.txt


 This is from the discussion in CASSANDRA-5137. Currently we skip loading SSTables 
 that are left over from an incomplete compaction so as not to over-count counters, 
 but the way we track compaction completion is not robust.
 One possible solution is to create system CF like:
 {code}
 create table compaction_log (
   id uuid primary key,
   inputs set<int>,
   outputs set<int>
 );
 {code}
 to track incomplete compaction.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


git commit: Fix sometimes skipping range tombstones during reverse queries

2013-07-01 Thread slebresne
Updated Branches:
  refs/heads/cassandra-1.2 8c2a28050 -> 1a8f7230a


Fix sometimes skipping range tombstones during reverse queries

patch by slebresne; reviewed by jbellis for CASSANDRA-5712


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1a8f7230
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1a8f7230
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1a8f7230

Branch: refs/heads/cassandra-1.2
Commit: 1a8f7230a1b56d8e58c33ef2922f4460e7b6f913
Parents: 8c2a280
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Mon Jul 1 09:29:48 2013 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Mon Jul 1 09:32:21 2013 +0200

--
 CHANGES.txt |  3 +-
 .../db/columniterator/IndexedSliceReader.java   | 28 ++-
 .../apache/cassandra/db/RangeTombstoneTest.java | 37 
 3 files changed, 66 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1a8f7230/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 1b9634c..843bb53 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -3,8 +3,9 @@
  * Fix serialization of the LEFT gossip value (CASSANDRA-5696)
  * Pig: support for cql3 tables (CASSANDRA-5234)
  * cqlsh: Don't show 'null' in place of empty values (CASSANDRA-5675)
- * Race condition in detecting version on a mixed 1.1/1.2 cluster 
+ * Race condition in detecting version on a mixed 1.1/1.2 cluster
(CASSANDRA-5692)
+ * Fix skipping range tombstones with reverse queries (CASSANDRA-5712)
 
 
 1.2.6

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1a8f7230/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java
--
diff --git 
a/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java 
b/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java
index 4ca0ea5..21eb48b 100644
--- a/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java
+++ b/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java
@@ -29,6 +29,7 @@ import org.apache.cassandra.db.ColumnFamily;
 import org.apache.cassandra.db.DecoratedKey;
 import org.apache.cassandra.db.DeletionInfo;
 import org.apache.cassandra.db.OnDiskAtom;
+import org.apache.cassandra.db.RangeTombstone;
 import org.apache.cassandra.db.RowIndexEntry;
 import org.apache.cassandra.db.filter.ColumnSlice;
 import org.apache.cassandra.db.marshal.AbstractType;
@@ -60,6 +61,9 @@ class IndexedSliceReader extends AbstractIterator<OnDiskAtom> 
implements OnDiskA
 private final Deque<OnDiskAtom> blockColumns = new 
ArrayDeque<OnDiskAtom>();
 private final AbstractType<?> comparator;
 
+// Holds range tombstone in reverse queries. See addColumn()
+private final Deque<OnDiskAtom> rangeTombstonesReversed;
+
 /**
  * This slice reader assumes that slices are sorted correctly, e.g. that 
for forward lookup slices are in
  * lexicographic order of start elements and that for reverse lookup they 
are in reverse lexicographic order of
@@ -74,6 +78,7 @@ class IndexedSliceReader extends AbstractIterator<OnDiskAtom> 
implements OnDiskA
 this.reversed = reversed;
 this.slices = slices;
 this.comparator = sstable.metadata.comparator;
+this.rangeTombstonesReversed = reversed ? new ArrayDeque<OnDiskAtom>() 
: null;
 
 try
 {
@@ -147,6 +152,14 @@ class IndexedSliceReader extends 
AbstractIterator<OnDiskAtom> implements OnDiskA
 {
 while (true)
 {
+if (reversed)
+{
+// Return all tombstone for the block first (see addColumn() 
below)
+OnDiskAtom column = rangeTombstonesReversed.poll();
+if (column != null)
+return column;
+}
+
 OnDiskAtom column = blockColumns.poll();
 if (column == null)
 {
@@ -169,9 +182,22 @@ class IndexedSliceReader extends 
AbstractIterator<OnDiskAtom> implements OnDiskA
 protected void addColumn(OnDiskAtom col)
 {
 if (reversed)
-blockColumns.addFirst(col);
+{
+/*
+ * We put range tomstone markers at the beginning of the range 
they delete. But for reversed queries,
+ * the caller still need to know about a RangeTombstone before it 
sees any column that it covers.
+ * To make that simple, we keep said tombstones separate and 
return them all before any column for
+ * a given block.
+ */
+if (col instanceof RangeTombstone)
+rangeTombstonesReversed.addFirst(col);
+else
+ 
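The two-deque scheme that the comment above describes can be sketched independently of Cassandra's types: for reversed reads, range tombstones go into their own deque and are drained before any column of the block, so a caller always sees a tombstone before any column it covers. Atom, Column, and Tombstone below are hypothetical stand-ins, not the real OnDiskAtom/RangeTombstone classes:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class ReversedBlockReader {
    // Hypothetical stand-ins for OnDiskAtom / RangeTombstone, only to
    // illustrate the buffering technique.
    interface Atom {}
    static final class Column implements Atom {}
    static final class Tombstone implements Atom {}

    private final Deque<Atom> blockColumns = new ArrayDeque<>();
    private final Deque<Atom> tombstonesReversed = new ArrayDeque<>();

    // Reversed read path: atoms arrive in on-disk order but must be
    // emitted back-to-front, and a tombstone must still be seen before
    // any column it covers, so tombstones get a deque of their own.
    void addReversed(Atom a) {
        if (a instanceof Tombstone)
            tombstonesReversed.addFirst(a);
        else
            blockColumns.addFirst(a);
    }

    Atom next() {
        Atom t = tombstonesReversed.poll();  // drain tombstones first
        return t != null ? t : blockColumns.poll();
    }
}
```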

[2/2] git commit: Merge branch 'cassandra-1.2' into trunk

2013-07-01 Thread slebresne
Merge branch 'cassandra-1.2' into trunk

Conflicts:
src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java
test/unit/org/apache/cassandra/db/RangeTombstoneTest.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4b889732
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4b889732
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4b889732

Branch: refs/heads/trunk
Commit: 4b8897327794855e442b93edc470a5710d240a83
Parents: 09ea746 1a8f723
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Mon Jul 1 09:37:08 2013 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Mon Jul 1 09:37:08 2013 +0200

--
 CHANGES.txt |  3 +-
 .../db/columniterator/IndexedSliceReader.java   | 27 ++-
 .../apache/cassandra/db/RangeTombstoneTest.java | 36 
 3 files changed, 64 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4b889732/CHANGES.txt
--
diff --cc CHANGES.txt
index 87aa99d,843bb53..66c8b1e
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -72,9 -1,11 +72,10 @@@
  1.2.7
   * Fix loading key cache when a saved entry is no longer valid 
(CASSANDRA-5706)
   * Fix serialization of the LEFT gossip value (CASSANDRA-5696)
 - * Pig: support for cql3 tables (CASSANDRA-5234)
   * cqlsh: Don't show 'null' in place of empty values (CASSANDRA-5675)
-  * Race condition in detecting version on a mixed 1.1/1.2 cluster 
+  * Race condition in detecting version on a mixed 1.1/1.2 cluster
 (CASSANDRA-5692)
+  * Fix skipping range tombstones with reverse queries (CASSANDRA-5712)
  
  
  1.2.6

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4b889732/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4b889732/test/unit/org/apache/cassandra/db/RangeTombstoneTest.java
--
diff --cc test/unit/org/apache/cassandra/db/RangeTombstoneTest.java
index 9d04cfd,c2f8b83..b240a5f
--- a/test/unit/org/apache/cassandra/db/RangeTombstoneTest.java
+++ b/test/unit/org/apache/cassandra/db/RangeTombstoneTest.java
@@@ -151,9 -154,41 +151,40 @@@ public class RangeTombstoneTest extend
 assert !isLive(cf, cf.getColumn(b(i))) : "Column " + i + " shouldn't be live";
  }
  
+ @Test
+ public void reverseQueryTest() throws Exception
+ {
 -Table table = Table.open(KSNAME);
++Keyspace table = Keyspace.open(KSNAME);
+ ColumnFamilyStore cfs = table.getColumnFamilyStore(CFNAME);
+ 
+ // Inserting data
+ String key = "k3";
+ RowMutation rm;
+ ColumnFamily cf;
+ 
+ rm = new RowMutation(KSNAME, ByteBufferUtil.bytes(key));
+ add(rm, 2, 0);
+ rm.apply();
+ cfs.forceBlockingFlush();
+ 
+ rm = new RowMutation(KSNAME, ByteBufferUtil.bytes(key));
+ // Deletes everything but without being a row tombstone
+ delete(rm.addOrGet(CFNAME), 0, 10, 1);
+ add(rm, 1, 2);
+ rm.apply();
+ cfs.forceBlockingFlush();
+ 
+ // Get the last value of the row
 -QueryPath path = new QueryPath(CFNAME);
 -cf = cfs.getColumnFamily(QueryFilter.getSliceFilter(dk(key), path, 
ByteBufferUtil.EMPTY_BYTE_BUFFER, ByteBufferUtil.EMPTY_BYTE_BUFFER, true, 1));
++cf = cfs.getColumnFamily(QueryFilter.getSliceFilter(dk(key), CFNAME, 
ByteBufferUtil.EMPTY_BYTE_BUFFER, ByteBufferUtil.EMPTY_BYTE_BUFFER, true, 1, 
System.currentTimeMillis()));
+ 
+ assert !cf.isEmpty();
+ int last = i(cf.getSortedColumns().iterator().next().name());
+ assert last == 1 : "Last column should be column 1 since column 2 has been deleted";
+ }
+ 
 -private static boolean isLive(ColumnFamily cf, IColumn c)
 +private static boolean isLive(ColumnFamily cf, Column c)
  {
 -return c != null && !c.isMarkedForDelete() && !cf.deletionInfo().isDeleted(c);
 +return c != null && !c.isMarkedForDelete(System.currentTimeMillis()) && !cf.deletionInfo().isDeleted(c);
  }
  
  private static ByteBuffer b(int i)
@@@ -161,9 -196,14 +192,14 @@@
  return ByteBufferUtil.bytes(i);
  }
  
+ private static int i(ByteBuffer i)
+ {
+ return ByteBufferUtil.toInt(i);
+ }
+ 
  private static void add(RowMutation rm, int value, long timestamp)
  {
 -rm.add(new QueryPath(CFNAME, null, b(value)), b(value), timestamp);
 +rm.add(CFNAME, b(value), b(value), timestamp);
  }
  
  private static void 

[1/2] git commit: Fix sometimes skipping range tombstones during reverse queries

2013-07-01 Thread slebresne
Updated Branches:
  refs/heads/trunk 09ea74620 -> 4b8897327


Fix sometimes skipping range tombstones during reverse queries

patch by slebresne; reviewed by jbellis for CASSANDRA-5712


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1a8f7230
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1a8f7230
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1a8f7230

Branch: refs/heads/trunk
Commit: 1a8f7230a1b56d8e58c33ef2922f4460e7b6f913
Parents: 8c2a280
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Mon Jul 1 09:29:48 2013 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Mon Jul 1 09:32:21 2013 +0200

--
 CHANGES.txt |  3 +-
 .../db/columniterator/IndexedSliceReader.java   | 28 ++-
 .../apache/cassandra/db/RangeTombstoneTest.java | 37 
 3 files changed, 66 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1a8f7230/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 1b9634c..843bb53 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -3,8 +3,9 @@
  * Fix serialization of the LEFT gossip value (CASSANDRA-5696)
  * Pig: support for cql3 tables (CASSANDRA-5234)
  * cqlsh: Don't show 'null' in place of empty values (CASSANDRA-5675)
- * Race condition in detecting version on a mixed 1.1/1.2 cluster 
+ * Race condition in detecting version on a mixed 1.1/1.2 cluster
(CASSANDRA-5692)
+ * Fix skipping range tombstones with reverse queries (CASSANDRA-5712)
 
 
 1.2.6

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1a8f7230/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java
--
diff --git 
a/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java 
b/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java
index 4ca0ea5..21eb48b 100644
--- a/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java
+++ b/src/java/org/apache/cassandra/db/columniterator/IndexedSliceReader.java
@@ -29,6 +29,7 @@ import org.apache.cassandra.db.ColumnFamily;
 import org.apache.cassandra.db.DecoratedKey;
 import org.apache.cassandra.db.DeletionInfo;
 import org.apache.cassandra.db.OnDiskAtom;
+import org.apache.cassandra.db.RangeTombstone;
 import org.apache.cassandra.db.RowIndexEntry;
 import org.apache.cassandra.db.filter.ColumnSlice;
 import org.apache.cassandra.db.marshal.AbstractType;
@@ -60,6 +61,9 @@ class IndexedSliceReader extends AbstractIterator<OnDiskAtom> 
implements OnDiskA
 private final Deque<OnDiskAtom> blockColumns = new 
ArrayDeque<OnDiskAtom>();
 private final AbstractType<?> comparator;
 
+// Holds range tombstone in reverse queries. See addColumn()
+private final Deque<OnDiskAtom> rangeTombstonesReversed;
+
 /**
  * This slice reader assumes that slices are sorted correctly, e.g. that 
for forward lookup slices are in
  * lexicographic order of start elements and that for reverse lookup they 
are in reverse lexicographic order of
@@ -74,6 +78,7 @@ class IndexedSliceReader extends AbstractIterator<OnDiskAtom> 
implements OnDiskA
 this.reversed = reversed;
 this.slices = slices;
 this.comparator = sstable.metadata.comparator;
+this.rangeTombstonesReversed = reversed ? new ArrayDeque<OnDiskAtom>() 
: null;
 
 try
 {
@@ -147,6 +152,14 @@ class IndexedSliceReader extends 
AbstractIterator<OnDiskAtom> implements OnDiskA
 {
 while (true)
 {
+if (reversed)
+{
+// Return all tombstone for the block first (see addColumn() 
below)
+OnDiskAtom column = rangeTombstonesReversed.poll();
+if (column != null)
+return column;
+}
+
 OnDiskAtom column = blockColumns.poll();
 if (column == null)
 {
@@ -169,9 +182,22 @@ class IndexedSliceReader extends 
AbstractIterator<OnDiskAtom> implements OnDiskA
 protected void addColumn(OnDiskAtom col)
 {
 if (reversed)
-blockColumns.addFirst(col);
+{
+/*
+ * We put range tomstone markers at the beginning of the range 
they delete. But for reversed queries,
+ * the caller still need to know about a RangeTombstone before it 
sees any column that it covers.
+ * To make that simple, we keep said tombstones separate and 
return them all before any column for
+ * a given block.
+ */
+if (col instanceof RangeTombstone)
+rangeTombstonesReversed.addFirst(col);
+else
+

git commit: Changing column_index_size_in_kb on different nodes might corrupt files

2013-07-01 Thread slebresne
Updated Branches:
  refs/heads/trunk 4b8897327 -> 82b920b66


Changing column_index_size_in_kb on different nodes might corrupt files

patch by slebresne; reviewed by jbellis for CASSANDRA-5454


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/82b920b6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/82b920b6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/82b920b6

Branch: refs/heads/trunk
Commit: 82b920b66e7bb551856551838e228adad043a685
Parents: 4b88973
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Mon Jul 1 09:40:08 2013 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Mon Jul 1 09:40:08 2013 +0200

--
 CHANGES.txt |  2 ++
 .../org/apache/cassandra/db/ColumnIndex.java| 20 +---
 .../cassandra/io/sstable/SSTableWriter.java |  2 +-
 3 files changed, 8 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/82b920b6/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 66c8b1e..2520b23 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -67,6 +67,8 @@
  * Fix ALTER RENAME post-5125 (CASSANDRA-5702)
  * Disallow renaming a 2ndary indexed column (CASSANDRA-5705)
  * Rename Table to Keyspace (CASSANDRA-5613)
+ * Ensure changing column_index_size_in_kb on different nodes don't corrupt the
+   sstable (CASSANDRA-5454)
 
 
 1.2.7

http://git-wip-us.apache.org/repos/asf/cassandra/blob/82b920b6/src/java/org/apache/cassandra/db/ColumnIndex.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnIndex.java 
b/src/java/org/apache/cassandra/db/ColumnIndex.java
index bcc5c2f..1501fcd 100644
--- a/src/java/org/apache/cassandra/db/ColumnIndex.java
+++ b/src/java/org/apache/cassandra/db/ColumnIndex.java
@@ -71,8 +71,7 @@ public class ColumnIndex
 
 public Builder(ColumnFamily cf,
ByteBuffer key,
-   DataOutput output,
-   boolean fromStream)
+   DataOutput output)
 {
 assert cf != null;
 assert key != null;
@@ -83,14 +82,7 @@ public class ColumnIndex
 this.indexOffset = rowHeaderSize(key, deletionInfo);
 this.result = new ColumnIndex(new 
ArrayList<IndexHelper.IndexInfo>());
 this.output = output;
-this.tombstoneTracker = fromStream ? null : new 
RangeTombstone.Tracker(cf.getComparator());
-}
-
-public Builder(ColumnFamily cf,
-   ByteBuffer key,
-   DataOutput output)
-{
-this(cf, key, output, false);
+this.tombstoneTracker = new 
RangeTombstone.Tracker(cf.getComparator());
 }
 
 /**
@@ -113,7 +105,7 @@ public class ColumnIndex
 
 public int writtenAtomCount()
 {
-return tombstoneTracker == null ? atomCount : atomCount + 
tombstoneTracker.writtenAtom();
+return atomCount + tombstoneTracker.writtenAtom();
 }
 
 /**
@@ -173,8 +165,7 @@ public class ColumnIndex
 firstColumn = column;
 startPosition = endPosition;
 // TODO: have that use the firstColumn as min + make sure we 
optimize that on read
-if (tombstoneTracker != null)
-endPosition += 
tombstoneTracker.writeOpenedMarker(firstColumn, output, atomSerializer);
+endPosition += tombstoneTracker.writeOpenedMarker(firstColumn, 
output, atomSerializer);
 blockSize = 0; // We don't count repeated tombstone marker in 
the block size, to avoid a situation
// where we wouldn't make any progress because 
a block is filled by said marker
 }
@@ -196,8 +187,7 @@ public class ColumnIndex
 atomSerializer.serializeForSSTable(column, output);
 
 // TODO: Should deal with removing unneeded tombstones
-if (tombstoneTracker != null)
-tombstoneTracker.update(column);
+tombstoneTracker.update(column);
 
 lastColumn = column;
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/82b920b6/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java 
b/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java
index 879c9bc..2f54e1a 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java
@@ -221,7 +221,7 @@ public 

[jira] [Updated] (CASSANDRA-5619) CAS UPDATE for a lost race: save round trip by returning column values

2013-07-01 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-5619:


Attachment: 5619_thrift_fixup.txt

Damn you thrift. I guess returning an empty list works as well, so I'm attaching 
a simple patch to do that.

I changed it only on the Thrift side, and not in the StorageProxy call, because 
it felt cleaner internally to keep null, and I didn't see the point of allocating 
an empty CF object in the CQL3 case.

The patch also makes it so that expected.isEmpty() tests existence (like 
expected == null). That's what makes sense, and I figured that maybe Thrift 
can't pass null as a parameter either, so...


 CAS UPDATE for a lost race: save round trip by returning column values
 --

 Key: CASSANDRA-5619
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5619
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 2.0 beta 1
Reporter: Blair Zajac
Assignee: Sylvain Lebresne
 Fix For: 2.0 beta 1

 Attachments: 5619_thrift_fixup.txt, 5619.txt


 Looking at the new CAS CQL3 support examples [1], if one lost a race for an 
 UPDATE, to save a round trip to get the current values to decide if you need 
 to perform your work, could the columns that were used in the IF clause also 
 be returned to the caller?  Maybe the column values from the SET part 
 could also be returned.
 I don't know if this is generally useful though.
 In the case of creating a new user account with a given username which is the 
 partition key, if one lost the race to another person creating an account 
 with the same username, it doesn't matter to the loser what the column values 
 are, just that they lost.
 I'm new to Cassandra, so maybe there are other use cases, such as doing 
 incremental amounts of work on a row.  In pure Java projects I've done while 
 loops around AtomicReference#compareAndSet() until the work was done on the 
 referenced object, to handle multiple threads each making forward progress 
 in updating the referenced object.
 [1] https://github.com/riptano/cassandra-dtest/blob/master/cql_tests.py#L3044
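The compare-and-set retry loop described in the report can be sketched with java.util.concurrent; the increment here is an arbitrary placeholder for the per-thread work:

```java
import java.util.concurrent.atomic.AtomicReference;

public class CasRetryLoop {
    // Classic retry loop around compareAndSet: recompute the next state
    // from the freshly observed current state, and retry whenever another
    // thread won the race in between.
    static int incrementAndGet(AtomicReference<Integer> ref) {
        while (true) {
            Integer current = ref.get();
            Integer next = current + 1;   // the "work" done on what we saw
            if (ref.compareAndSet(current, next))
                return next;              // our update was applied
            // else: lost the race; loop and redo the work on the new value
        }
    }

    public static void main(String[] args) {
        AtomicReference<Integer> ref = new AtomicReference<>(0);
        System.out.println(incrementAndGet(ref));
    }
}
```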

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-4131) Integrate Hive support to be in core cassandra

2013-07-01 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13696646#comment-13696646
 ] 

Cyril Scetbon commented on CASSANDRA-4131:
--

Are the duplicates just tombstones not filtered as said at 
https://issues.apache.org/jira/browse/CASSANDRA-4421?focusedCommentId=13658450page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13658450
 ?

 Integrate Hive support to be in core cassandra
 --

 Key: CASSANDRA-4131
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4131
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jeremy Hanna
Assignee: Edward Capriolo
  Labels: hadoop, hive

 The standalone hive support (at https://github.com/riptano/hive) would be 
 great to have in-tree so that people don't have to go out to github to 
 download it and wonder if it's a left-for-dead external shim.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (CASSANDRA-4131) Integrate Hive support to be in core cassandra

2013-07-01 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13696646#comment-13696646
 ] 

Cyril Scetbon edited comment on CASSANDRA-4131 at 7/1/13 8:05 AM:
--

Are the duplicates just tombstones not filtered out as said at 
https://issues.apache.org/jira/browse/CASSANDRA-4421?focusedCommentId=13658450page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13658450
 ?

  was (Author: cscetbon):
Are the duplicates just tombstones not filtered as said at 
https://issues.apache.org/jira/browse/CASSANDRA-4421?focusedCommentId=13658450page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13658450
 ?
  
 Integrate Hive support to be in core cassandra
 --

 Key: CASSANDRA-4131
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4131
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jeremy Hanna
Assignee: Edward Capriolo
  Labels: hadoop, hive

 The standalone hive support (at https://github.com/riptano/hive) would be 
 great to have in-tree so that people don't have to go out to github to 
 download it and wonder if it's a left-for-dead external shim.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-5466) Compaction task eats 100% CPU for a long time for tables with collection typed columns

2013-07-01 Thread Fabien Rousseau (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13696651#comment-13696651
 ] 

Fabien Rousseau commented on CASSANDRA-5466:


It's probably related to : https://issues.apache.org/jira/browse/CASSANDRA-5677

( Also this discussion : 
http://www.mail-archive.com/user@cassandra.apache.org/msg30641.html )

 Compaction task eats 100% CPU for a long time for tables with collection 
 typed columns
 --

 Key: CASSANDRA-5466
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5466
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.4
 Environment: ubuntu 12.10, sun-6-java 1.6.0.37, Core-i7, 8GB RAM
Reporter: Alexey Tereschenko
Assignee: Alex Zarutin

 For the table:
 {code:sql}
 create table test (
 user_id bigint,
  first_list list<bigint>,
  second_list list<bigint>,
  third_list list<bigint>,
 PRIMARY KEY (user_id)
 );
 {code}
 I do thousands of updates like the following:
 {code:sql}
 UPDATE test SET first_list = [1], second_list = [2], third_list = [3] WHERE 
 user_id = ?;
 {code}
 In several minutes a compaction task starts running. {{nodetool 
 compactionstats}} shows that remaining time is 2 seconds but in fact it can 
 take hours to really complete the compaction tasks. And during that time 
 Cassandra consumes 100% of CPU and slows down so significantly that it gives 
 connection timeout exceptions to any client code trying to establish 
 connection with Cassandra. This happens only with tables with collection 
 typed columns.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5715) CAS on 'primary key only' table

2013-07-01 Thread Sylvain Lebresne (JIRA)
Sylvain Lebresne created CASSANDRA-5715:
---

 Summary: CAS on 'primary key only' table
 Key: CASSANDRA-5715
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5715
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Priority: Minor


Given a table with only a primary key, like
{noformat}
CREATE TABLE test (k int PRIMARY KEY)
{noformat}
there is currently no way to CAS a row in that table into existence because:
# INSERT doesn't currently support IF
# UPDATE has no way to update such table

So we should probably allow IF conditions on INSERT statements.

In addition (or alternatively), we could work on allowing UPDATE to update such 
a table. One motivation for that could be to make UPDATE always be more general 
than INSERT. That is, currently there are a bunch of operations that INSERT 
cannot do (counter increments, collection appends), but this primary-key-only 
table case is, afaik, the only case where you *need* to use INSERT. However, 
because CQL segregates the PK values into the WHERE clause rather than the SET 
clause, the only syntax that I can see working would be:
{noformat}
UPDATE WHERE k=0;
{noformat}
which maybe is too ugly to allow?
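For concreteness, a conditional INSERT under this proposal might look like the following; this is purely an illustrative sketch of syntax that does not exist yet:
{noformat}
INSERT INTO test (k) VALUES (0) IF NOT EXISTS;
{noformat}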

 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (CASSANDRA-1585) Support renaming columnfamilies and keyspaces

2013-07-01 Thread Alain RODRIGUEZ (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-1585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13696667#comment-13696667
 ] 

Alain RODRIGUEZ commented on CASSANDRA-1585:


Does anyone know if this feature will be available again? It was removed 
almost 3 years ago and this issue is marked with resolution: later.

It is not a critical feature, but since CQL3's purpose is to be similar to the 
SQL norm, it would be interesting to be able to rename databases (keyspaces) and 
tables (CFs).

This is a feature people would use for sure.

 Support renaming columnfamilies and keyspaces
 -

 Key: CASSANDRA-1585
 URL: https://issues.apache.org/jira/browse/CASSANDRA-1585
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Stu Hood
Priority: Minor

 Renames were briefly supported but were race-prone.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CASSANDRA-5716) Remark on cassandra-5273 : Hanging system after OutOfMemory. Server cannot die due to uncaughtException handling

2013-07-01 Thread Ignace Desimpel (JIRA)
Ignace Desimpel created CASSANDRA-5716:
--

 Summary: Remark on cassandra-5273 : Hanging system after 
OutOfMemory. Server cannot die due to uncaughtException handling
 Key: CASSANDRA-5716
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5716
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 1.2.6
 Environment: linux
Reporter: Ignace Desimpel
Priority: Minor
 Fix For: 1.2.6


Possible incorrect handling of an OOM as a result of the modifications made for 
issue CASSANDRA-5273. I could reproduce the OOM with the patch of CASSANDRA-5273 
applied. The good news is that, at least in my case, it works fine: the system 
did die!
 
However, due to multiple uncaughtException handlers firing, multiple threads 
call the exitThread.start() routine, causing an IllegalStateException. There are 
some other exceptions as well, but those seem logical. Also, after calling the 
start() function, the thread(s) continue to run, which may not be wanted.
 
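One common way to avoid the double-start race described above is to let only the first handler thread through with an atomic guard; a hedged sketch, not the actual CASSANDRA-5273 code:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class OneShotExit {
    // A Thread may only be start()ed once; a second start() throws
    // IllegalStateException. Guarding with compareAndSet lets exactly one
    // uncaught-exception handler invocation start the exit thread, and
    // all later callers return quietly.
    private static final AtomicBoolean exitStarted = new AtomicBoolean(false);
    static final Thread exitThread =
        new Thread(() -> { /* System.exit(100) in a real handler */ });

    static boolean startExitThreadOnce() {
        if (exitStarted.compareAndSet(false, true)) {
            exitThread.start();
            return true;   // this caller performed the one allowed start
        }
        return false;      // already started by another handler thread
    }

    public static void main(String[] args) throws InterruptedException {
        startExitThreadOnce();
        startExitThreadOnce();  // safe: no IllegalStateException
        exitThread.join();
    }
}
```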
Below I pasted the stack trace.
Just for your information: after all this, I could restart the 
Cassandra server and redo the OOM.
 
2013-06-27 16:28:15.384 Unable to reduce heap usage since there are no dirty 
column families
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid4808.hprof ...
Heap dump file created [278960302 bytes in 2.659 secs]
2013-06-27 16:28:42.655 Exception in thread Thread[qtp1564441079-31,5,main]
java.lang.OutOfMemoryError: Java heap space
2013-06-27 16:28:42.655 Exception in thread Thread[qtp1564441079-36,5,main]
java.lang.OutOfMemoryError: Java heap space
2013-06-27 16:28:42.655 Exception in thread Thread[qtp1564441079-30,5,main]
java.lang.OutOfMemoryError: Java heap space
2013-06-27 16:28:42.655 Exception in thread Thread[GossipTasks:1,5,main]
java.lang.OutOfMemoryError: Java heap space
2013-06-27 16:28:42.655 Exception in thread Thread[OptionalTasks:1,5,main]
java.lang.OutOfMemoryError: Java heap space
2013-06-27 16:28:42.655 Exception in thread 
Thread[PERIODIC-COMMIT-LOG-SYNCER,5,main]
java.lang.OutOfMemoryError: Java heap space
2013-06-27 16:28:42.655 Exception in thread 
Thread[metrics-meter-tick-thread-2,5,main]
java.lang.OutOfMemoryError: Java heap space
2013-06-27 16:28:42.656 Exception in thread Thread[EXPIRING-MAP-REAPER:1,5,main]
java.lang.OutOfMemoryError: Java heap space
   at 
java.util.concurrent.ConcurrentHashMap$EntrySet.iterator(ConcurrentHashMap.java:1202)
 ~[na:1.6.0_29]
   at org.apache.cassandra.utils.ExpiringMap$1.run(ExpiringMap.java:88) 
~[thrift/:na]
   at 
org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:75)
 ~[thrift/:na]
   at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441) 
[na:1.6.0_29]
   at 
java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317) 
[na:1.6.0_29]
   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150) 
[na:1.6.0_29]
   at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
 [na:1.6.0_29]
   at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
 [na:1.6.0_29]
   at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
 [na:1.6.0_29]
   at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 [na:1.6.0_29]
   at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) 
[na:1.6.0_29]
   at java.lang.Thread.run(Thread.java:662) [na:1.6.0_29]
2013-06-27 16:28:42.656 Exception in thread Thread[ScheduledTasks:1,5,main]
java.lang.OutOfMemoryError: Java heap space
   at com.yammer.metrics.stats.Snapshot.<init>(Snapshot.java:30) 
~[metrics-core-2.0.3.jar:na]
   at 
com.yammer.metrics.stats.ExponentiallyDecayingSample.getSnapshot(ExponentiallyDecayingSample.java:107)
 ~[metrics-core-2.0.3.jar:na]
   at 
org.apache.cassandra.locator.DynamicEndpointSnitch.updateScores(DynamicEndpointSnitch.java:237)
 ~[thrift/:na]
   at 
org.apache.cassandra.locator.DynamicEndpointSnitch.access$0(DynamicEndpointSnitch.java:217)
 ~[thrift/:na]
   at 
org.apache.cassandra.locator.DynamicEndpointSnitch$1.run(DynamicEndpointSnitch.java:71)
 ~[thrift/:na]
   at 
org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:75)
 ~[thrift/:na]
   at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441) 
[na:1.6.0_29]
   at 
java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317) 
[na:1.6.0_29]
   ...
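The IllegalStateException reported above is easy to reproduce outside Cassandra: Thread.start() may be called at most once, so when several uncaught-exception handlers race to start the same exit thread, all but one caller throw. A minimal illustrative sketch (class and method names are made up, not Cassandra code):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class DoubleStartRace {
    // Counts how many of `racers` concurrent callers of exitThread.start()
    // get an IllegalStateException. Thread.start() is synchronized and
    // rejects a second start, so exactly one caller succeeds.
    static int illegalStateCount(int racers) throws InterruptedException {
        Thread exitThread = new Thread(() -> {
            try { Thread.sleep(10_000); } catch (InterruptedException ignored) {}
        });
        CountDownLatch go = new CountDownLatch(1);
        AtomicInteger failures = new AtomicInteger();
        Thread[] callers = new Thread[racers];
        for (int i = 0; i < racers; i++) {
            callers[i] = new Thread(() -> {
                try {
                    go.await();              // line everyone up
                    exitThread.start();      // only one of these can win
                } catch (IllegalStateException e) {
                    failures.incrementAndGet();
                } catch (InterruptedException ignored) {}
            });
            callers[i].start();
        }
        go.countDown();
        for (Thread c : callers) c.join();
        exitThread.interrupt();              // stop the dummy exit thread
        return failures.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(illegalStateCount(4)); // prints 3
    }
}
```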

[jira] [Updated] (CASSANDRA-5699) Streaming (2.0) can deadlock

2013-07-01 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-5699:


Attachment: (was: 5699.txt)

 Streaming (2.0) can deadlock
 

 Key: CASSANDRA-5699
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5699
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 2.0 beta 1


 The new streaming implementation (CASSANDRA-5286) creates 2 threads per host 
 for streaming, one for the incoming stream and one for the outgoing one. 
 However, both currently share the same socket, and since we use synchronous 
 I/O, a read can block a write; this can result in a deadlock if 2 nodes are 
 both blocked on a read at the same time, each thereby blocking the other's 
 writes (this is actually fairly easy to reproduce with a simple repair).
 So instead, attaching a patch that uses one socket per thread.
 The patch also corrects the stream throughput throttling calculation, which 
 was 8000 times lower than it should be.
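The read-blocks-write cycle described above can be reproduced in miniature with blocking pipes standing in for the shared socket. This is an illustrative sketch only, not Cassandra code; all names are made up:

```java
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

public class SharedSocketDeadlock {
    // Each side performs a blocking read before it writes, so with a single
    // channel per pair of "nodes" neither side ever reaches its write.
    static boolean bothStuck() throws Exception {
        PipedInputStream aIn = new PipedInputStream();
        PipedOutputStream bOut = new PipedOutputStream(aIn); // B -> A
        PipedInputStream bIn = new PipedInputStream();
        PipedOutputStream aOut = new PipedOutputStream(bIn); // A -> B

        Thread a = new Thread(() -> {
            try { aIn.read(); aOut.write(1); } catch (IOException ignored) {}
        });
        Thread b = new Thread(() -> {
            try { bIn.read(); bOut.write(1); } catch (IOException ignored) {}
        });
        a.start(); b.start();
        a.join(500); b.join(500);            // give both a chance to finish
        boolean stuck = a.isAlive() && b.isAlive();
        a.interrupt(); b.interrupt();        // unblock the reads so we exit
        return stuck;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(bothStuck()); // prints true
    }
}
```

With one socket per direction, each thread only ever reads its own socket, so a slow read can no longer starve the outgoing stream.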



[jira] [Updated] (CASSANDRA-5699) Streaming (2.0) can deadlock

2013-07-01 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-5699:


Attachment: 5699.txt

bq. Is the stream lifecycle documented anywhere

Yuki had written https://gist.github.com/yukim/5672508. The one thing this 
patch changes compared to that design document is that the initialization 
phase is slightly more complex, since we need to create 2 connections. So the 
first node sends a StreamInit to the other end to create the first connection 
(as was done previously), but then the remote side creates a connection back, 
sending a StreamInit message of its own. Then and only then do we go to the 
prepare phase.

In any case, we probably want that lifecycle documented in the javadoc or 
we'll lose track of it, so I've described this in relative detail at the head 
of StreamSession.

So I updated the patch with those added comments and the modification to 
StreamRepairTask cleaned up.


 Streaming (2.0) can deadlock
 

 Key: CASSANDRA-5699
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5699
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 2.0 beta 1

 Attachments: 5699.txt


 The new streaming implementation (CASSANDRA-5286) creates 2 threads per host 
 for streaming, one for the incoming stream and one for the outgoing one. 
 However, both currently share the same socket, and since we use synchronous 
 I/O, a read can block a write; this can result in a deadlock if 2 nodes are 
 both blocked on a read at the same time, each thereby blocking the other's 
 writes (this is actually fairly easy to reproduce with a simple repair).
 So instead, attaching a patch that uses one socket per thread.
 The patch also corrects the stream throughput throttling calculation, which 
 was 8000 times lower than it should be.



[jira] [Commented] (CASSANDRA-5151) Implement better way of eliminating compaction left overs.

2013-07-01 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13696844#comment-13696844
 ] 

Jonathan Ellis commented on CASSANDRA-5151:
---

SGTM.

 Implement better way of eliminating compaction left overs.
 --

 Key: CASSANDRA-5151
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5151
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.3
Reporter: Yuki Morishita
Assignee: Yuki Morishita
 Fix For: 2.0 beta 1

 Attachments: 
 0001-move-scheduling-MeteredFlusher-to-CassandraDaemon.patch, 5151-1.2.txt, 
 5151-v2.txt


 This is from discussion in CASSANDRA-5137. Currently we skip loading SSTables 
 that are left over from incomplete compaction to avoid over-counting 
 counters, but the way we track compaction completion is not reliable.
 One possible solution is to create a system CF like:
 {code}
 create table compaction_log (
   id uuid primary key,
   inputs set<int>,
   outputs set<int>
 );
 {code}
 to track incomplete compactions.
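One way such a log could be used at startup: any row still present in compaction_log marks an incomplete compaction, so its output sstables are skipped while its inputs are kept. A hedged sketch of that rule with in-memory sets standing in for the table (hypothetical helper, not Cassandra code):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class CompactionLogSketch {
    // Given the sstable generations on disk and the outputs recorded in a
    // still-present compaction_log row, return the generations safe to load:
    // everything except the partial outputs (the inputs are among onDisk).
    static Set<Integer> sstablesToLoad(Set<Integer> onDisk,
                                       Set<Integer> incompleteOutputs) {
        Set<Integer> load = new HashSet<>(onDisk);
        load.removeAll(incompleteOutputs); // drop outputs of the unfinished compaction
        return load;
    }

    public static void main(String[] args) {
        Set<Integer> onDisk = new HashSet<>(Arrays.asList(1, 2, 3, 10));
        // log row: inputs {1,2}, outputs {10}; the row survived, so the
        // compaction never completed and generation 10 must be discarded
        System.out.println(sstablesToLoad(onDisk, new HashSet<>(Arrays.asList(10))));
    }
}
```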



[jira] [Commented] (CASSANDRA-5715) CAS on 'primary key only' table

2013-07-01 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13696846#comment-13696846
 ] 

Jonathan Ellis commented on CASSANDRA-5715:
---

{{UPDATE test SET k=0 WHERE k=0}} is legal SQL, if a bit odd.

 CAS on 'primary key only' table
 ---

 Key: CASSANDRA-5715
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5715
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Priority: Minor

 Given a table with only a primary key, like
 {noformat}
 CREATE TABLE test (k int PRIMARY KEY)
 {noformat}
 there is currently no way to CAS a row in that table into existence, because:
 # INSERT doesn't currently support IF
 # UPDATE has no way to update such a table
 So we should probably allow IF conditions on INSERT statements.
 In addition (or alternatively), we could work on allowing UPDATE to update 
 such a table. One motivation for that could be to make UPDATE always be more 
 general than INSERT. That is, currently there are a bunch of operations that 
 INSERT cannot do (counter increments, collection appends), but the primary 
 key only table case is, afaik, the only case where you *need* to use INSERT. 
 However, because CQL forces segregation of PK values into the WHERE clause 
 and not the SET one, the only syntax that I can see working would be:
 {noformat}
 UPDATE WHERE k=0;
 {noformat}
 which maybe is too ugly to allow?
  
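For reference, the INSERT-with-IF direction proposed above might look like the following for the table in the description. This is hypothetical syntax sketching the proposal, not a statement that existed at the time of this discussion:

{noformat}
-- create the row only if it does not already exist (CAS a row into existence)
INSERT INTO test (k) VALUES (0) IF NOT EXISTS;
{noformat}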



[jira] [Updated] (CASSANDRA-5716) Remark on cassandra-5273 : Hanging system after OutOfMemory. Server cannot die due to uncaughtException handling

2013-07-01 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5716:
--

Affects Version/s: (was: 1.2.6)
   2.0 beta 1
Fix Version/s: (was: 1.2.6)
   2.0
   Issue Type: Bug  (was: Improvement)

 Remark on cassandra-5273 : Hanging system after OutOfMemory. Server cannot 
 die due to uncaughtException handling
 

 Key: CASSANDRA-5716
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5716
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 2.0 beta 1
 Environment: linux
Reporter: Ignace Desimpel
Priority: Minor
 Fix For: 2.0


 Possible incorrect handling of an OOM as a result of modifications made for 
 issue CASSANDRA-5273.
 I could reproduce the OOM with the patch from CASSANDRA-5273 applied.
 The good news is that, at least in my case, it works fine: the system did 
 die!
  
 However, because the uncaughtException handler is invoked on multiple 
 threads, multiple threads call the exitThread.start() routine, causing an 
 IllegalStateException. There are some other exceptions as well, but those 
 seem logical. Also, after calling the start() function, the calling 
 thread(s) continue to run, which may not be intended.
  
 Below I pasted the stack trace.
 Just for your information: after all this, I could restart the Cassandra 
 server and reproduce the OOM.
  
 2013-06-27 16:28:15.384 Unable to reduce heap usage since there are no dirty 
 column families
 java.lang.OutOfMemoryError: Java heap space
 Dumping heap to java_pid4808.hprof ...
 Heap dump file created [278960302 bytes in 2.659 secs]
 2013-06-27 16:28:42.655 Exception in thread Thread[qtp1564441079-31,5,main]
 java.lang.OutOfMemoryError: Java heap space
 2013-06-27 16:28:42.655 Exception in thread Thread[qtp1564441079-36,5,main]
 java.lang.OutOfMemoryError: Java heap space
 2013-06-27 16:28:42.655 Exception in thread Thread[qtp1564441079-30,5,main]
 java.lang.OutOfMemoryError: Java heap space
 2013-06-27 16:28:42.655 Exception in thread Thread[GossipTasks:1,5,main]
 java.lang.OutOfMemoryError: Java heap space
 2013-06-27 16:28:42.655 Exception in thread Thread[OptionalTasks:1,5,main]
 java.lang.OutOfMemoryError: Java heap space
 2013-06-27 16:28:42.655 Exception in thread 
 Thread[PERIODIC-COMMIT-LOG-SYNCER,5,main]
 java.lang.OutOfMemoryError: Java heap space
 2013-06-27 16:28:42.655 Exception in thread 
 Thread[metrics-meter-tick-thread-2,5,main]
 java.lang.OutOfMemoryError: Java heap space
 2013-06-27 16:28:42.656 Exception in thread 
 Thread[EXPIRING-MAP-REAPER:1,5,main]
 java.lang.OutOfMemoryError: Java heap space
at 
 java.util.concurrent.ConcurrentHashMap$EntrySet.iterator(ConcurrentHashMap.java:1202)
  ~[na:1.6.0_29]
at org.apache.cassandra.utils.ExpiringMap$1.run(ExpiringMap.java:88) 
 ~[thrift/:na]
at 
 org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run(DebuggableScheduledThreadPoolExecutor.java:75)
  ~[thrift/:na]
at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441) 
 [na:1.6.0_29]
at 
 java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317) 
 [na:1.6.0_29]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150) 
 [na:1.6.0_29]
at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
  [na:1.6.0_29]
at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
  [na:1.6.0_29]
at 
 java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
  [na:1.6.0_29]
at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
  [na:1.6.0_29]
at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
  [na:1.6.0_29]
at java.lang.Thread.run(Thread.java:662) [na:1.6.0_29]
 2013-06-27 16:28:42.656 Exception in thread Thread[ScheduledTasks:1,5,main]
 java.lang.OutOfMemoryError: Java heap space
at com.yammer.metrics.stats.Snapshot.<init>(Snapshot.java:30) 
 ~[metrics-core-2.0.3.jar:na]
at 
 com.yammer.metrics.stats.ExponentiallyDecayingSample.getSnapshot(ExponentiallyDecayingSample.java:107)
  ~[metrics-core-2.0.3.jar:na]
at 
 org.apache.cassandra.locator.DynamicEndpointSnitch.updateScores(DynamicEndpointSnitch.java:237)
  ~[thrift/:na]
at 
 org.apache.cassandra.locator.DynamicEndpointSnitch.access$0(DynamicEndpointSnitch.java:217)
  ~[thrift/:na]
 ...

[jira] [Commented] (CASSANDRA-5715) CAS on 'primary key only' table

2013-07-01 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13696860#comment-13696860
 ] 

Sylvain Lebresne commented on CASSANDRA-5715:
-

I understand that, but CQL doesn't ever allow a PK column in the SET clause, 
contrary to SQL. So allowing it in just that case wouldn't make sense unless 
we start allowing PK columns in the SET clause in general. Not sure it's 
worth going there.

 CAS on 'primary key only' table
 ---

 Key: CASSANDRA-5715
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5715
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Priority: Minor

 Given a table with only a primary key, like
 {noformat}
 CREATE TABLE test (k int PRIMARY KEY)
 {noformat}
 there is currently no way to CAS a row in that table into existence, because:
 # INSERT doesn't currently support IF
 # UPDATE has no way to update such a table
 So we should probably allow IF conditions on INSERT statements.
 In addition (or alternatively), we could work on allowing UPDATE to update 
 such a table. One motivation for that could be to make UPDATE always be more 
 general than INSERT. That is, currently there are a bunch of operations that 
 INSERT cannot do (counter increments, collection appends), but the primary 
 key only table case is, afaik, the only case where you *need* to use INSERT. 
 However, because CQL forces segregation of PK values into the WHERE clause 
 and not the SET one, the only syntax that I can see working would be:
 {noformat}
 UPDATE WHERE k=0;
 {noformat}
 which maybe is too ugly to allow?
  



[jira] [Created] (CASSANDRA-5717) Repair causes streaming errors

2013-07-01 Thread Yoan Arnaudov (JIRA)
Yoan Arnaudov created CASSANDRA-5717:


 Summary: Repair causes streaming errors
 Key: CASSANDRA-5717
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5717
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.6
 Environment: CentOS release 6.3 (Final)
Reporter: Yoan Arnaudov


I've changed the replication factor for one of my keyspaces and am now running 
repairs on its column families (manually). I have a 3-node cluster. Here is 
the error from the log of one of the nodes.

{code:title=Error Log}
ERROR [Streaming to /208.94.232.9:1] 2013-07-01 09:31:29,819 
CassandraDaemon.java (line 192) Exception in thread Thread[Streaming to 
/208.94.232.9:1,5,main]
java.lang.RuntimeException: java.io.IOException: Broken pipe
at com.google.common.base.Throwables.propagate(Throwables.java:160)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown 
Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.io.IOException: Broken pipe
at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
at sun.nio.ch.FileChannelImpl.transferToDirectly(Unknown Source)
at sun.nio.ch.FileChannelImpl.transferTo(Unknown Source)
at 
org.apache.cassandra.streaming.compress.CompressedFileStreamTask.stream(CompressedFileStreamTask.java:93)
at 
org.apache.cassandra.streaming.FileStreamTask.runMayThrow(FileStreamTask.java:91)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
... 3 more
ERROR [Streaming to /208.94.232.135:2] 2013-07-01 09:44:18,372 
CassandraDaemon.java (line 192) Exception in thread Thread[Streaming to 
/208.94.232.135:2,5,main]
java.lang.RuntimeException: java.io.IOException: Broken pipe
at com.google.common.base.Throwables.propagate(Throwables.java:160)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown 
Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.io.IOException: Broken pipe
at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
at sun.nio.ch.FileChannelImpl.transferToDirectly(Unknown Source)
at sun.nio.ch.FileChannelImpl.transferTo(Unknown Source)
at 
org.apache.cassandra.streaming.compress.CompressedFileStreamTask.stream(CompressedFileStreamTask.java:93)
at 
org.apache.cassandra.streaming.FileStreamTask.runMayThrow(FileStreamTask.java:91)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
... 3 more
{code}


{code:title=netstats for one of the nodes}
Mode: NORMAL
Not sending any streams.
Streaming from: /IP
   KEYSPACE: 
/var/lib/cassandra/data/KEYSPACE/ns_history_org/KEYSPACE-ns_history_org-hf-282-Data.db
 sections=1 progress=0/1393915753 - 0%
   KEYSPACE: 
/var/lib/cassandra/data/KEYSPACE/ns_history_org/KEYSPACE-ns_history_org-ic-455-Data.db
 sections=1 progress=0/792707 - 0%
Streaming from: /IP
   KEYSPACE: 
/var/lib/cassandra/data/KEYSPACE/ns_history_org/KEYSPACE-ns_history_org-hf-255-Data.db
 sections=1 progress=0/1398197628 - 0%
   KEYSPACE: 
/var/lib/cassandra/data/KEYSPACE/ns_history_biz/KEYSPACE-ns_history_biz-ic-341-Data.db
 sections=1 progress=0/6153542 - 0%
   KEYSPACE: 
/var/lib/cassandra/data/KEYSPACE/listing/KEYSPACE-listing-hf-539-Data.db 
sections=1 progress=0/86968194 - 0%
   KEYSPACE: 
/var/lib/cassandra/data/KEYSPACE/ns_history_biz/KEYSPACE-ns_history_biz-hf-244-Data.db
 sections=1 progress=0/322197762 - 0%
   KEYSPACE: 
/var/lib/cassandra/data/KEYSPACE/ns_history_biz/KEYSPACE-ns_history_biz-ic-346-Data.db
 sections=1 progress=0/6219503 - 0%
   KEYSPACE: 
/var/lib/cassandra/data/KEYSPACE/listing/KEYSPACE-listing-hf-487-Data.db 
sections=1 progress=0/2689291466 - 0%
   KEYSPACE: 
/var/lib/cassandra/data/KEYSPACE/listing/KEYSPACE-listing-ic-684-Data.db 
sections=1 progress=0/3717513 - 0%
   KEYSPACE: 
/var/lib/cassandra/data/KEYSPACE/ns_history_org/KEYSPACE-ns_history_org-ic-413-Data.db
 sections=1 progress=0/22256993 - 0%
   KEYSPACE: 
/var/lib/cassandra/data/KEYSPACE/listing/KEYSPACE-listing-hf-529-Data.db 
sections=1 progress=0/345419053 - 0%
   KEYSPACE: 
/var/lib/cassandra/data/KEYSPACE/listing/KEYSPACE-listing-hf-534-Data.db 
sections=1 progress=0/88759930 - 0%
   KEYSPACE: 
/var/lib/cassandra/data/KEYSPACE/listing/KEYSPACE-listing-hf-509-Data.db 
sections=1 progress=0/365451892 - 0%
Read Repair Statistics:
Attempted: 10696592
Mismatch (Blocking): 48873
Mismatch (Background): 47308
Pool Name                    Active   Pending      Completed
Commands                        n/a         0  

[jira] [Commented] (CASSANDRA-5715) CAS on 'primary key only' table

2013-07-01 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13696892#comment-13696892
 ] 

Jonathan Ellis commented on CASSANDRA-5715:
---

Ah, I see.

We could special-case it then as {{UPDATE test SET PRIMARY KEY WHERE k=0}}, 
slightly less awkward than no SET at all, IMO.  But I'm okay either way.

 CAS on 'primary key only' table
 ---

 Key: CASSANDRA-5715
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5715
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sylvain Lebresne
Priority: Minor

 Given a table with only a primary key, like
 {noformat}
 CREATE TABLE test (k int PRIMARY KEY)
 {noformat}
 there is currently no way to CAS a row in that table into existence, because:
 # INSERT doesn't currently support IF
 # UPDATE has no way to update such a table
 So we should probably allow IF conditions on INSERT statements.
 In addition (or alternatively), we could work on allowing UPDATE to update 
 such a table. One motivation for that could be to make UPDATE always be more 
 general than INSERT. That is, currently there are a bunch of operations that 
 INSERT cannot do (counter increments, collection appends), but the primary 
 key only table case is, afaik, the only case where you *need* to use INSERT. 
 However, because CQL forces segregation of PK values into the WHERE clause 
 and not the SET one, the only syntax that I can see working would be:
 {noformat}
 UPDATE WHERE k=0;
 {noformat}
 which maybe is too ugly to allow?
  



[jira] [Updated] (CASSANDRA-5716) Remark on cassandra-5273 : Hanging system after OutOfMemory. Server cannot die due to uncaughtException handling

2013-07-01 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5716:
--

Description: 
Possible incorrect handling of an OOM as a result of modifications made for 
issue CASSANDRA-5273.
I could reproduce the OOM with the patch from CASSANDRA-5273 applied.
The good news is that, at least in my case, it works fine: the system did die!
 
However, because the uncaughtException handler is invoked on multiple threads, 
multiple threads call the exitThread.start() routine, causing an 
IllegalStateException. There are some other exceptions as well, but those seem 
logical. Also, after calling the start() function, the calling thread(s) 
continue to run, which may not be intended.
 
Below I pasted the stack trace.
Just for your information: after all this, I could restart the Cassandra 
server and reproduce the OOM.

[stack trace moved to 
http://aep.appspot.com/display/mQFNFHUh1VvQJYGcxRK0lQSM2j8/ ]


[2/3] git commit: avoid starting exitThread multiple times patch by jbellis for CASSANDRA-5716

2013-07-01 Thread jbellis
avoid starting exitThread multiple times
patch by jbellis for CASSANDRA-5716


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/40f0bdce
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/40f0bdce
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/40f0bdce

Branch: refs/heads/trunk
Commit: 40f0bdce069db14e912f28d7351c4b602389c6a5
Parents: 1a8f723
Author: Jonathan Ellis jbel...@apache.org
Authored: Mon Jul 1 09:10:06 2013 -0700
Committer: Jonathan Ellis jbel...@apache.org
Committed: Mon Jul 1 09:12:12 2013 -0700

--
 src/java/org/apache/cassandra/service/CassandraDaemon.java | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/40f0bdce/src/java/org/apache/cassandra/service/CassandraDaemon.java
--
diff --git a/src/java/org/apache/cassandra/service/CassandraDaemon.java 
b/src/java/org/apache/cassandra/service/CassandraDaemon.java
index 53c653f..af21f07 100644
--- a/src/java/org/apache/cassandra/service/CassandraDaemon.java
+++ b/src/java/org/apache/cassandra/service/CassandraDaemon.java
@@ -195,7 +195,13 @@ public class CassandraDaemon
 {
 // some code, like FileChannel.map, will wrap an 
OutOfMemoryError in another exception
 if (e2 instanceof OutOfMemoryError)
-exitThread.start();
+{
+synchronized (exitThread)
+{
+if (!exitThread.isAlive())
+exitThread.start();
+}
+}
 
 if (e2 instanceof FSError)
 {

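In isolation, the guard added by this patch behaves like the following sketch. It is illustrative, not Cassandra code; a long-sleeping task stands in for the exit thread, whose isAlive() stays true because it never returns normally (which is why the isAlive() check suffices there):

```java
public class GuardedStart {
    // Two "handlers" run the patched guard in turn: the first starts the
    // thread, the second sees isAlive() and skips start(), so no
    // IllegalStateException is thrown.
    static boolean secondStartSkipped() throws InterruptedException {
        Thread exitThread = new Thread(() -> {
            try { Thread.sleep(10_000); } catch (InterruptedException ignored) {}
        });
        synchronized (exitThread) {
            if (!exitThread.isAlive())
                exitThread.start();          // first handler: starts it
        }
        synchronized (exitThread) {
            if (!exitThread.isAlive())
                exitThread.start();          // second handler: guard skips this
        }
        boolean stillRunning = exitThread.isAlive();
        exitThread.interrupt();              // stop the stand-in thread
        exitThread.join();
        return stillRunning;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(secondStartSkipped()); // prints true
    }
}
```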


[jira] [Updated] (CASSANDRA-5273) Hanging system after OutOfMemory. Server cannot die due to uncaughtException handling

2013-07-01 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5273:
--

Fix Version/s: (was: 2.0 beta 1)
   1.2.6

 Hanging system after OutOfMemory. Server cannot die due to uncaughtException 
 handling
 -

 Key: CASSANDRA-5273
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5273
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.1
 Environment: linux, 64 bit
Reporter: Ignace Desimpel
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 1.2.6

 Attachments: 
 0001-CASSANDRA-5273-add-timeouts-to-the-blocking-commitlo.patch, 
 0001-CASSANDRA-5273-add-timeouts-to-the-blocking-commitlo.patch, 5273-v2.txt, 
 5273-v3.txt, CassHangs.txt


 On out of memory exception, there is an uncaughtexception handler that is 
 calling System.exit(). However, multiple threads are calling this handler 
 causing a deadlock and the server cannot stop working. See 
 http://www.mail-archive.com/user@cassandra.apache.org/msg27898.html. And see 
 stack trace in attachement.



[1/3] git commit: avoid starting exitThread multiple times patch by jbellis for CASSANDRA-5716

2013-07-01 Thread jbellis
Updated Branches:
  refs/heads/cassandra-1.2 1a8f7230a - 40f0bdce0
  refs/heads/trunk 82b920b66 - b621078f2


avoid starting exitThread multiple times
patch by jbellis for CASSANDRA-5716


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/40f0bdce
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/40f0bdce
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/40f0bdce

Branch: refs/heads/cassandra-1.2
Commit: 40f0bdce069db14e912f28d7351c4b602389c6a5
Parents: 1a8f723
Author: Jonathan Ellis jbel...@apache.org
Authored: Mon Jul 1 09:10:06 2013 -0700
Committer: Jonathan Ellis jbel...@apache.org
Committed: Mon Jul 1 09:12:12 2013 -0700

--
 src/java/org/apache/cassandra/service/CassandraDaemon.java | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/40f0bdce/src/java/org/apache/cassandra/service/CassandraDaemon.java
--
diff --git a/src/java/org/apache/cassandra/service/CassandraDaemon.java 
b/src/java/org/apache/cassandra/service/CassandraDaemon.java
index 53c653f..af21f07 100644
--- a/src/java/org/apache/cassandra/service/CassandraDaemon.java
+++ b/src/java/org/apache/cassandra/service/CassandraDaemon.java
@@ -195,7 +195,13 @@ public class CassandraDaemon
 {
  // some code, like FileChannel.map, will wrap an OutOfMemoryError in another exception
 if (e2 instanceof OutOfMemoryError)
-exitThread.start();
+{
+synchronized (exitThread)
+{
+if (!exitThread.isAlive())
+exitThread.start();
+}
+}
 
 if (e2 instanceof FSError)
 {
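The hunk above serializes the call to Thread.start(), which throws IllegalStateException when invoked a second time. A minimal standalone sketch of the same guard, using an AtomicBoolean instead of the patch's synchronized/isAlive() check (class and method names here are invented for illustration, not Cassandra's actual code):

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

public class ExitOnce {
    static final AtomicInteger starts = new AtomicInteger();
    // Stand-in for Cassandra's exitThread; here it just counts how often it ran.
    static final Thread exitThread = new Thread(starts::incrementAndGet);
    static final AtomicBoolean started = new AtomicBoolean(false);

    static void requestExit() {
        // compareAndSet succeeds exactly once, so Thread.start() can never
        // be called a second time (which would throw IllegalStateException).
        if (started.compareAndSet(false, true))
            exitThread.start();
    }

    public static void main(String[] args) throws InterruptedException {
        requestExit();
        requestExit();   // no-op: the guard already flipped
        exitThread.join();
        System.out.println(starts.get());   // prints 1
    }
}
```

The same idea works with any number of concurrent callers, since only one compareAndSet can observe false.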



[3/3] git commit: Merge branch 'cassandra-1.2' into trunk

2013-07-01 Thread jbellis
Merge branch 'cassandra-1.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b621078f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b621078f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b621078f

Branch: refs/heads/trunk
Commit: b621078f20587573b91ba76ed1309a7c0a91a122
Parents: 82b920b 40f0bdc
Author: Jonathan Ellis jbel...@apache.org
Authored: Mon Jul 1 09:12:34 2013 -0700
Committer: Jonathan Ellis jbel...@apache.org
Committed: Mon Jul 1 09:12:34 2013 -0700

--

--




[jira] [Updated] (CASSANDRA-5716) Remark on cassandra-5273 : Hanging system after OutOfMemory. Server cannot die due to uncaughtException handling

2013-07-01 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5716:
--

Affects Version/s: (was: 2.0 beta 1)
   1.2.6
Fix Version/s: (was: 2.0)
   1.2.7
 Assignee: Jonathan Ellis

fixed in 40f0bdce069db14e912f28d7351c4b602389c6a5

 Remark on cassandra-5273 : Hanging system after OutOfMemory. Server cannot 
 die due to uncaughtException handling
 

 Key: CASSANDRA-5716
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5716
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.6
 Environment: linux
Reporter: Ignace Desimpel
Assignee: Jonathan Ellis
Priority: Minor
 Fix For: 1.2.7


 Possible incorrect handling of an OOM as a result of modifications made for 
 issue cassandra-5273.
 I could reproduce the OOM, with the patch of Cassandra-5273 applied.
 The good news is that, at least in my case, it works fine: the system did die!
  
 However, because the uncaughtException handler fires in multiple threads, 
 several threads call the exitThread.start() routine, causing an 
 IllegalStateException. There are some other exceptions too, but those seem 
 logical. Also, after calling start(), the calling thread(s) continue to run, 
 which may not be desirable.
  
 Below I pasted the stack trace.
 For your information, after all this the system still works: I could restart 
 the Cassandra server and reproduce the OOM.
 [stack trace moved to 
 http://aep.appspot.com/display/mQFNFHUh1VvQJYGcxRK0lQSM2j8/ ]



[jira] [Commented] (CASSANDRA-4430) optional pluggable o.a.c.metrics reporters

2013-07-01 Thread Chris Burroughs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13696969#comment-13696969
 ] 

Chris Burroughs commented on CASSANDRA-4430:


Yep, hope to have something to show for it soon.

 optional pluggable o.a.c.metrics reporters
 --

 Key: CASSANDRA-4430
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4430
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Chris Burroughs
Assignee: Chris Burroughs
Priority: Minor
 Fix For: 2.1

 Attachments: cassandra-ganglia-example.png


 CASSANDRA-4009 expanded the use of the metrics library, which has a set of 
 reporter modules (http://metrics.codahale.com/manual/core/#reporters). You can 
 report to flat files, ganglia, spit everything over http, etc. The next step 
 is a mechanism for using those reporters with o.a.c.metrics. To avoid 
 bundling everything, I suggest following the mx4j approach of "enable only if 
 on classpath" coupled with a reporter configuration file.
 Strawman file:
 {noformat}
 console:
   time: 1
   timeunit: seconds
 csv:
  - time: 1
    timeunit: minutes
    file: foo.csv
  - time: 10
    timeunit: seconds
    file: bar.csv
 ganglia:
  - time: 30
    timeunit: seconds
    host: server-1
    port: 8649
  - time: 30
    timeunit: seconds
    host: server-2
    port: 8649
 {noformat}
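The "enable only if on classpath" idea can be probed reflectively. A minimal sketch of that check (the second class name is a deliberately nonexistent placeholder, not the metrics library's real reporter class):

```java
// Minimal sketch of the mx4j-style "enable only if on classpath" check the
// comment proposes: try to load the reporter class reflectively and skip its
// configuration section when the jar is absent.
public class ReporterProbe {
    static boolean onClasspath(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(onClasspath("java.util.ArrayList"));         // stdlib: always present
        System.out.println(onClasspath("com.example.GangliaReporter")); // placeholder: absent
    }
}
```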
  



svn commit: r1498573 - in /cassandra/site: publish/index.html publish/media/img/summit-eu-2013.jpg src/content/index.html src/media/img/summit-eu-2013.jpg

2013-07-01 Thread jbellis
Author: jbellis
Date: Mon Jul  1 17:13:21 2013
New Revision: 1498573

URL: http://svn.apache.org/r1498573
Log:
add Cassandra EU banner

Added:
cassandra/site/publish/media/img/summit-eu-2013.jpg   (with props)
cassandra/site/src/media/img/summit-eu-2013.jpg   (with props)
Modified:
cassandra/site/publish/index.html
cassandra/site/src/content/index.html

Modified: cassandra/site/publish/index.html
URL: 
http://svn.apache.org/viewvc/cassandra/site/publish/index.html?rev=1498573&r1=1498572&r2=1498573&view=diff
==
--- cassandra/site/publish/index.html (original)
+++ cassandra/site/publish/index.html Mon Jul  1 17:13:21 2013
@@ -86,8 +86,8 @@
 
 
 <div class="span-24">
-<a href="http://www.datastax.com/company/news-and-events/events/cassandrasummit2013">
-  <img src="/media/img/summit2013.jpg">
+<a href="http://www.datastax.com/cassandraeurope2013/">
+  <img src="/media/img/summit-eu-2013.jpg">
 </a>
 </div>
 

Added: cassandra/site/publish/media/img/summit-eu-2013.jpg
URL: 
http://svn.apache.org/viewvc/cassandra/site/publish/media/img/summit-eu-2013.jpg?rev=1498573&view=auto
==
Binary file - no diff available.

Propchange: cassandra/site/publish/media/img/summit-eu-2013.jpg
--
svn:mime-type = application/octet-stream

Modified: cassandra/site/src/content/index.html
URL: 
http://svn.apache.org/viewvc/cassandra/site/src/content/index.html?rev=1498573&r1=1498572&r2=1498573&view=diff
==
--- cassandra/site/src/content/index.html (original)
+++ cassandra/site/src/content/index.html Mon Jul  1 17:13:21 2013
@@ -32,8 +32,8 @@
 {% include skeleton/_download.html %}
 
 <div class="span-24">
-<a href="http://www.datastax.com/company/news-and-events/events/cassandrasummit2013">
-  <img src="/media/img/summit2013.jpg">
+<a href="http://www.datastax.com/cassandraeurope2013/">
+  <img src="/media/img/summit-eu-2013.jpg">
 </a>
 </div>
 

Added: cassandra/site/src/media/img/summit-eu-2013.jpg
URL: 
http://svn.apache.org/viewvc/cassandra/site/src/media/img/summit-eu-2013.jpg?rev=1498573&view=auto
==
Binary file - no diff available.

Propchange: cassandra/site/src/media/img/summit-eu-2013.jpg
--
svn:mime-type = application/octet-stream




[jira] [Comment Edited] (CASSANDRA-4131) Integrate Hive support to be in core cassandra

2013-07-01 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13696646#comment-13696646
 ] 

Cyril Scetbon edited comment on CASSANDRA-4131 at 7/1/13 5:25 PM:
--

Are the duplicates just tombstones not filtered out as said at 
https://issues.apache.org/jira/browse/CASSANDRA-4421?focusedCommentId=13658450&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13658450
 ? If yes, we should use Column.isLive() function to skip them

  was (Author: cscetbon):
Are the duplicates just tombstones not filtered out as said at 
https://issues.apache.org/jira/browse/CASSANDRA-4421?focusedCommentId=13658450&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13658450
 ?
  
 Integrate Hive support to be in core cassandra
 --

 Key: CASSANDRA-4131
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4131
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jeremy Hanna
Assignee: Edward Capriolo
  Labels: hadoop, hive

 The standalone hive support (at https://github.com/riptano/hive) would be 
 great to have in-tree so that people don't have to go out to github to 
 download it and wonder if it's a left-for-dead external shim.

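The Column.isLive() suggestion above amounts to a liveness filter applied before rows are handed to Hive. A standalone sketch of that filtering, using a stand-in Column type (not Cassandra's actual class or API):

```java
import java.util.ArrayList;
import java.util.List;

public class LiveFilter {
    // Stand-in for a Cassandra column; tombstones report isLive() == false.
    static class Column {
        final String name;
        final boolean live;
        Column(String name, boolean live) { this.name = name; this.live = live; }
        boolean isLive() { return live; }
    }

    // Drop tombstoned columns so downstream consumers never see duplicates.
    static List<Column> skipTombstones(List<Column> columns) {
        List<Column> out = new ArrayList<>();
        for (Column c : columns)
            if (c.isLive())
                out.add(c);
        return out;
    }

    public static void main(String[] args) {
        List<Column> cols = new ArrayList<>();
        cols.add(new Column("a", true));
        cols.add(new Column("b", false));  // tombstone
        cols.add(new Column("c", true));
        System.out.println(skipTombstones(cols).size());  // prints 2
    }
}
```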


[jira] [Comment Edited] (CASSANDRA-4131) Integrate Hive support to be in core cassandra

2013-07-01 Thread Cyril Scetbon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13696646#comment-13696646
 ] 

Cyril Scetbon edited comment on CASSANDRA-4131 at 7/1/13 5:26 PM:
--

Are the duplicates just tombstones not filtered out as said at 
https://issues.apache.org/jira/browse/CASSANDRA-4421?focusedCommentId=13658450&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13658450
 ? If yes, we should use Column.isLive() function to identify and skip them

  was (Author: cscetbon):
Are the duplicates just tombstones not filtered out as said at 
https://issues.apache.org/jira/browse/CASSANDRA-4421?focusedCommentId=13658450&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13658450
 ? If yes, we should use Column.isLive() function to skip them
  
 Integrate Hive support to be in core cassandra
 --

 Key: CASSANDRA-4131
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4131
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jeremy Hanna
Assignee: Edward Capriolo
  Labels: hadoop, hive

 The standalone hive support (at https://github.com/riptano/hive) would be 
 great to have in-tree so that people don't have to go out to github to 
 download it and wonder if it's a left-for-dead external shim.



[jira] [Updated] (CASSANDRA-5717) Repair causes streaming errors

2013-07-01 Thread Yoan Arnaudov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yoan Arnaudov updated CASSANDRA-5717:
-

Description: 
I've changed the replication factor for one of the keyspaces and now I'm 
running repairs on column families (manually). I have a 3-node cluster. Here is 
the error in the error log for one of the nodes.

{code:title=Error Log}
ERROR [Streaming to /NODE_IP:1] 2013-07-01 09:31:29,819 CassandraDaemon.java 
(line 192) Exception in thread Thread[Streaming to /NODE_IP:1,5,main]
java.lang.RuntimeException: java.io.IOException: Broken pipe
at com.google.common.base.Throwables.propagate(Throwables.java:160)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown 
Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.io.IOException: Broken pipe
at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
at sun.nio.ch.FileChannelImpl.transferToDirectly(Unknown Source)
at sun.nio.ch.FileChannelImpl.transferTo(Unknown Source)
at 
org.apache.cassandra.streaming.compress.CompressedFileStreamTask.stream(CompressedFileStreamTask.java:93)
at 
org.apache.cassandra.streaming.FileStreamTask.runMayThrow(FileStreamTask.java:91)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
... 3 more
ERROR [Streaming to /NODE_IP:2] 2013-07-01 09:44:18,372 CassandraDaemon.java 
(line 192) Exception in thread Thread[Streaming to /NODE_IP:2,5,main]
java.lang.RuntimeException: java.io.IOException: Broken pipe
at com.google.common.base.Throwables.propagate(Throwables.java:160)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown 
Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Caused by: java.io.IOException: Broken pipe
at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
at sun.nio.ch.FileChannelImpl.transferToDirectly(Unknown Source)
at sun.nio.ch.FileChannelImpl.transferTo(Unknown Source)
at 
org.apache.cassandra.streaming.compress.CompressedFileStreamTask.stream(CompressedFileStreamTask.java:93)
at 
org.apache.cassandra.streaming.FileStreamTask.runMayThrow(FileStreamTask.java:91)
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
... 3 more
{code}


{code:title=netstats for one of the nodes}
Mode: NORMAL
Not sending any streams.
Streaming from: /IP
   KEYSPACE: 
/var/lib/cassandra/data/KEYSPACE/ns_history_org/KEYSPACE-ns_history_org-hf-282-Data.db
 sections=1 progress=0/1393915753 - 0%
   KEYSPACE: 
/var/lib/cassandra/data/KEYSPACE/ns_history_org/KEYSPACE-ns_history_org-ic-455-Data.db
 sections=1 progress=0/792707 - 0%
Streaming from: /IP
   KEYSPACE: 
/var/lib/cassandra/data/KEYSPACE/ns_history_org/KEYSPACE-ns_history_org-hf-255-Data.db
 sections=1 progress=0/1398197628 - 0%
   KEYSPACE: 
/var/lib/cassandra/data/KEYSPACE/ns_history_biz/KEYSPACE-ns_history_biz-ic-341-Data.db
 sections=1 progress=0/6153542 - 0%
   KEYSPACE: 
/var/lib/cassandra/data/KEYSPACE/listing/KEYSPACE-listing-hf-539-Data.db 
sections=1 progress=0/86968194 - 0%
   KEYSPACE: 
/var/lib/cassandra/data/KEYSPACE/ns_history_biz/KEYSPACE-ns_history_biz-hf-244-Data.db
 sections=1 progress=0/322197762 - 0%
   KEYSPACE: 
/var/lib/cassandra/data/KEYSPACE/ns_history_biz/KEYSPACE-ns_history_biz-ic-346-Data.db
 sections=1 progress=0/6219503 - 0%
   KEYSPACE: 
/var/lib/cassandra/data/KEYSPACE/listing/KEYSPACE-listing-hf-487-Data.db 
sections=1 progress=0/2689291466 - 0%
   KEYSPACE: 
/var/lib/cassandra/data/KEYSPACE/listing/KEYSPACE-listing-ic-684-Data.db 
sections=1 progress=0/3717513 - 0%
   KEYSPACE: 
/var/lib/cassandra/data/KEYSPACE/ns_history_org/KEYSPACE-ns_history_org-ic-413-Data.db
 sections=1 progress=0/22256993 - 0%
   KEYSPACE: 
/var/lib/cassandra/data/KEYSPACE/listing/KEYSPACE-listing-hf-529-Data.db 
sections=1 progress=0/345419053 - 0%
   KEYSPACE: 
/var/lib/cassandra/data/KEYSPACE/listing/KEYSPACE-listing-hf-534-Data.db 
sections=1 progress=0/88759930 - 0%
   KEYSPACE: 
/var/lib/cassandra/data/KEYSPACE/listing/KEYSPACE-listing-hf-509-Data.db 
sections=1 progress=0/365451892 - 0%
Read Repair Statistics:
Attempted: 10696592
Mismatch (Blocking): 48873
Mismatch (Background): 47308
Pool NameActive   Pending  Completed
Commandsn/a 0  339083095
Responses   n/a 0  270274923
{code}
Netstats shows zero progress, and it is not advancing.

  was:
I've changed the replication factor for one of keyspaces and now I'm 

[jira] [Assigned] (CASSANDRA-5714) Allow coordinator failover for cursors

2013-07-01 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne reassigned CASSANDRA-5714:
---

Assignee: Sylvain Lebresne

 Allow coordinator failover for cursors
 --

 Key: CASSANDRA-5714
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5714
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.2.0 beta 1
Reporter: Michaël Figuière
Assignee: Sylvain Lebresne
Priority: Minor

 With CASSANDRA-4415 if a coordinator fails or gets slow, causing the {{NEXT}} 
 request to timeout, the client application won't be able to complete its 
 browsing of the result. That implies that most of the time when the developer 
 will rely on cursors he will have to write some logic to handle a retry 
 request for results starting where the iteration failed. This will quickly 
 become painful.
 Ideally the driver should handle this failover by itself by transparently 
 issuing this retry query when {{NEXT}} fails, but as the driver doesn't 
 understand CQL queries, the only thing it's aware of is the number of rows 
 already read. Therefore we should allow an optional parameter 
 {{initial_row_number}} in {{QUERY}} and {{EXECUTE}} messages that would 
 allow a kind of stateless failover of cursors.
 With such an option, developers wouldn't have to write any failover/retry 
 logic on failure as they would know that everything has already been tried by 
 the driver.



[jira] [Resolved] (CASSANDRA-5151) Implement better way of eliminating compaction left overs.

2013-07-01 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-5151.
-

Resolution: Fixed

 Implement better way of eliminating compaction left overs.
 --

 Key: CASSANDRA-5151
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5151
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.1.3
Reporter: Yuki Morishita
Assignee: Yuki Morishita
 Fix For: 2.0 beta 1

 Attachments: 
 0001-move-scheduling-MeteredFlusher-to-CassandraDaemon.patch, 5151-1.2.txt, 
 5151-v2.txt


 This is from discussion in CASSANDRA-5137. Currently we skip loading SSTables 
 that are left over from incomplete compaction to not over-count counter, but 
 the way we track compaction completion is not secure.
 One possible solution is to create system CF like:
 {code}
  create table compaction_log (
    id uuid primary key,
    inputs set<int>,
    outputs set<int>
  );
 {code}
 to track incomplete compaction.



[jira] [Commented] (CASSANDRA-5649) Move resultset type information into prepare, not execute

2013-07-01 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697154#comment-13697154
 ] 

Sylvain Lebresne commented on CASSANDRA-5649:
-

Additional notes on the patch. We only return the result-set metadata in a 
prepare response in the case of a select. Meaning, we don't do it for the 
result sets returned by conditional updates, but in that case we can't predict 
what the returned columns will be.

There is also the issue of SELECT * FROM  What if they drop/add a column? 
After checking, it's actually fine, because the set of columns returned in the 
resultSet is computed during preparation of the statement.

Lastly, the patch allows the no_metadata flag for QUERY messages too, which is 
obviously much less useful, but it kind of makes sense to have it there for 
symmetry (and it takes no space, since it's just a bit flag from a byte the 
QUERY messages have anyway). I figured that smart high-level clients that 
generate the query could easily compute the metadata of the resultSet (it's 
not really rocket science), so it could be useful in the long run.

 Move resultset type information into prepare, not execute
 -

 Key: CASSANDRA-5649
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5649
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
 Fix For: 2.0 beta 1


 Native protocol 1.0 sends type information on execute.  This is a minor 
 inefficiency for large resultsets; unfortunately, single-row resultsets are 
 common.
 This does represent a performance regression from Thrift; Thrift does not 
 send type information at all.  (Bad for driver complexity, but good for 
 performance.)



[jira] [Commented] (CASSANDRA-5714) Allow coordinator failover for cursors

2013-07-01 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697306#comment-13697306
 ] 

Michaël Figuière commented on CASSANDRA-5714:
-

I like this solution. Some extra advantages that I can see in detaching the 
paging state from the connection:
* The previous solution could lead to an over-consumption of StreamIds, as the 
driver cannot reuse them until the paging is complete. In some use cases that 
would have forced the user to provision a large number of connections just to 
avoid slowdowns due to StreamId exhaustion.
* This design offers many API options on the client side: implicit automatic 
paging, explicit paging with sync/async calls (e.g. to allow multiple pagings 
in parallel from a single client thread), copying the paging state to a 
different process, etc.
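The stateless-failover idea discussed in this ticket can be sketched as a client loop that tracks only the number of rows consumed and re-issues the query from that offset on a timeout; the fetchPage callback, its signature, and the fake data source below are invented for illustration, not a driver API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiFunction;

public class CursorRetry {
    // fetchPage(offset, size) stands in for a QUERY/EXECUTE carrying an
    // initial_row_number; any coordinator can serve the retried request.
    static List<Integer> readAll(BiFunction<Integer, Integer, List<Integer>> fetchPage,
                                 int pageSize) {
        List<Integer> rows = new ArrayList<>();
        while (true) {
            List<Integer> page;
            try {
                page = fetchPage.apply(rows.size(), pageSize); // resume at row count
            } catch (RuntimeException timeout) {
                continue; // retry from the same offset (a real client would cap retries)
            }
            if (page.isEmpty())
                return rows;
            rows.addAll(page);
        }
    }

    public static void main(String[] args) {
        // Fake 10-row data source that times out once mid-iteration.
        final boolean[] failedOnce = {false};
        List<Integer> out = readAll((offset, size) -> {
            if (offset == 4 && !failedOnce[0]) {
                failedOnce[0] = true;
                throw new RuntimeException("timeout");
            }
            List<Integer> page = new ArrayList<>();
            for (int i = offset; i < Math.min(offset + size, 10); i++)
                page.add(i);
            return page;
        }, 4);
        System.out.println(out.size());   // prints 10: no rows lost or duplicated
    }
}
```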

 Allow coordinator failover for cursors
 --

 Key: CASSANDRA-5714
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5714
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.2.0 beta 1
Reporter: Michaël Figuière
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 2.0 beta 1


 With CASSANDRA-4415 if a coordinator fails or gets slow, causing the {{NEXT}} 
 request to timeout, the client application won't be able to complete its 
 browsing of the result. That implies that most of the time when the developer 
 will rely on cursors he will have to write some logic to handle a retry 
 request for results starting where the iteration failed. This will quickly 
 become painful.
 Ideally the driver should handle this failover by itself by transparently 
 issuing this retry query when {{NEXT}} fails, but as the driver doesn't 
 understand CQL queries, the only thing it's aware of is the number of rows 
 already read. Therefore we should allow an optional parameter 
 {{initial_row_number}} in {{QUERY}} and {{EXECUTE}} messages that would 
 allow a kind of stateless failover of cursors.
 With such an option, developers wouldn't have to write any failover/retry 
 logic on failure as they would know that everything has already been tried by 
 the driver.



[jira] [Commented] (CASSANDRA-5619) CAS UPDATE for a lost race: save round trip by returning column values

2013-07-01 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697307#comment-13697307
 ] 

Jonathan Ellis commented on CASSANDRA-5619:
---

How do we differentiate now between "it didn't work, because the row doesn't 
exist and you specified non-empty columns" and "it did work"?

-0 on allowing both null and empty in parameters, btw.  Let's pick one and 
stick to it.  (null works fine as a Thrift parameter, incidentally.)

You can use test_cas as quick sanity check, btw: {{PYTHONPATH=test nosetests 
--tests=system.test_thrift_server:TestMutations.test_cas}}

(I ran into this problem when trying to update it for the new API.)

 CAS UPDATE for a lost race: save round trip by returning column values
 --

 Key: CASSANDRA-5619
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5619
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Affects Versions: 2.0 beta 1
Reporter: Blair Zajac
Assignee: Sylvain Lebresne
 Fix For: 2.0 beta 1

 Attachments: 5619_thrift_fixup.txt, 5619.txt


 Looking at the new CAS CQL3 support examples [1], if one lost a race for an 
 UPDATE, to save a round trip to get the current values to decide if you need 
 to perform your work, could the columns that were used in the IF clause also 
 be returned to the caller?  Maybe the columns values as part of the SET part 
 could also be returned.
 I don't know if this is generally useful though.
 In the case of creating a new user account with a given username which is the 
 partition key, if one lost the race to another person creating an account 
 with the same username, it doesn't matter to the loser what the column values 
 are, just that they lost.
 I'm new to Cassandra, so maybe there's other use cases, such as doing 
 incremental amount of work on a row.  In pure Java projects I've done while 
 loops around AtomicReference#compareAndSet() until the work was done on 
 the referenced object, to handle multiple threads each making forward progress 
 in updating the referenced object.
 [1] https://github.com/riptano/cassandra-dtest/blob/master/cql_tests.py#L3044
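The compareAndSet retry loop the reporter describes looks like this in plain Java (names and the list payload are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

public class CasLoop {
    static final AtomicReference<List<String>> ref =
            new AtomicReference<>(new ArrayList<>());

    // Each thread re-reads the current value, computes an updated copy, and
    // retries until its compareAndSet wins; a lost race just means "loop again".
    static void append(String item) {
        while (true) {
            List<String> current = ref.get();
            List<String> updated = new ArrayList<>(current);
            updated.add(item);
            if (ref.compareAndSet(current, updated))
                return;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] ts = new Thread[4];
        for (int t = 0; t < 4; t++) {
            ts[t] = new Thread(() -> { for (int i = 0; i < 100; i++) append("x"); });
            ts[t].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(ref.get().size());   // prints 400: no lost updates
    }
}
```

Unlike CAS in Cassandra, the loser here retries automatically; the ticket asks for the analogous convenience of getting the current values back on a lost race.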



[jira] [Commented] (CASSANDRA-5391) SSL problems with inter-DC communication

2013-07-01 Thread Mike (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697313#comment-13697313
 ] 

Mike commented on CASSANDRA-5391:
-

I don't think I see any code changes in the 1.1.x branch as a result of this 
bug. Does the bug not apply to 1.1.x (aka, it was introduced in the 1.2.0 
streaming refactor?), or does 1.1.12 (and 1.1.9, on which Datastax Enterprise 
is based) still suffer from this?

 SSL problems with inter-DC communication
 

 Key: CASSANDRA-5391
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5391
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.3
 Environment: $ /etc/alternatives/jre_1.6.0/bin/java -version
 java version 1.6.0_23
 Java(TM) SE Runtime Environment (build 1.6.0_23-b05)
 Java HotSpot(TM) 64-Bit Server VM (build 19.0-b09, mixed mode)
 $ uname -a
 Linux hostname 2.6.32-358.2.1.el6.x86_64 #1 SMP Tue Mar 12 14:18:09 CDT 2013 
 x86_64 x86_64 x86_64 GNU/Linux
 $ cat /etc/redhat-release 
 Scientific Linux release 6.3 (Carbon)
 $ facter | grep ec2
 ...
 ec2_placement = availability_zone=us-east-1d
 ...
 $ rpm -qi cassandra
 cassandra-1.2.3-1.el6.cmp1.noarch
 (custom built rpm from cassandra tarball distribution)
Reporter: Ondřej Černoš
Assignee: Yuki Morishita
Priority: Blocker
 Fix For: 1.2.4

 Attachments: 5391-1.2.3.txt, 5391-1.2.txt, 5391-v2-1.2.txt


 I get SSL and snappy compression errors in multiple datacenter setup.
 The setup is simple: 3 nodes in AWS east, 3 nodes in Rackspace. I use 
 slightly modified Ec2MultiRegionSnitch in Rackspace (I just added a regex 
 able to parse the Rackspace/Openstack availability zone which happens to be 
 in unusual format).
 During {{nodetool rebuild}} tests I managed to (consistently) trigger the 
 following error:
 {noformat}
 2013-03-19 12:42:16.059+0100 [Thread-13] [DEBUG] 
 IncomingTcpConnection.java(79) 
 org.apache.cassandra.net.IncomingTcpConnection: IOException reading from 
 socket; closing
 java.io.IOException: FAILED_TO_UNCOMPRESS(5)
   at org.xerial.snappy.SnappyNative.throw_error(SnappyNative.java:78)
   at org.xerial.snappy.SnappyNative.rawUncompress(Native Method)
   at org.xerial.snappy.Snappy.rawUncompress(Snappy.java:391)
   at 
 org.apache.cassandra.io.compress.SnappyCompressor.uncompress(SnappyCompressor.java:93)
   at 
 org.apache.cassandra.streaming.compress.CompressedInputStream.decompress(CompressedInputStream.java:101)
   at 
 org.apache.cassandra.streaming.compress.CompressedInputStream.read(CompressedInputStream.java:79)
   at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:337)
   at 
 org.apache.cassandra.utils.BytesReadTracker.readUnsignedShort(BytesReadTracker.java:140)
   at 
 org.apache.cassandra.utils.ByteBufferUtil.readShortLength(ByteBufferUtil.java:361)
   at 
 org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:371)
   at 
 org.apache.cassandra.streaming.IncomingStreamReader.streamIn(IncomingStreamReader.java:160)
   at 
 org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:122)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:226)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.handleStream(IncomingTcpConnection.java:166)
   at 
 org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:66)
 {noformat}
 The exception is raised during DB file download. What is strange is the 
 following:
 * the exception is raised only when rebuildig from AWS into Rackspace
 * the exception is raised only when all nodes are up and running in AWS (all 
 3). In other words, if I bootstrap from one or two nodes in AWS, the command 
 succeeds.
 Packet-level inspection revealed malformed packets _on both ends of 
 communication_ (the packet is considered malformed on the machine it 
 originates on).
 Further investigation raised two more concerns:
 * We managed to get another stacktrace when testing the scenario. The 
 exception was raised only once during the tests and was raised when I 
 throttled the inter-datacenter bandwidth to 1Mbps.
 {noformat}
 java.lang.RuntimeException: javax.net.ssl.SSLException: bad record MAC
   at com.google.common.base.Throwables.propagate(Throwables.java:160)
   at 
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
   at java.lang.Thread.run(Thread.java:662)
 Caused by: javax.net.ssl.SSLException: bad record MAC
   at com.sun.net.ssl.internal.ssl.Alerts.getSSLException(Alerts.java:190)
   at 
 com.sun.net.ssl.internal.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1649)
   at 
 

[jira] [Commented] (CASSANDRA-5699) Streaming (2.0) can deadlock

2013-07-01 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697341#comment-13697341
 ] 

Jonathan Ellis commented on CASSANDRA-5699:
---

It's somewhat confusing that we use StreamInit both as leader and as follower, 
to mean different things, but I get how the MessagingService architecture makes 
it difficult to do otherwise.  As a minor improvement, suggest renaming 
isInitiator to sentByInitiator.

What is going on with the switch from {{Set<UUID> ongoingSessions}} to 
{{Map<InetAddress, StreamSession> ongoingSessions}} in SRF?

Nit: onConnect could mean "leader connects to follower" or [what it actually 
means] "stream is fully connected".  Suggest renaming, e.g. 
onSessionEstablished.

Other comments on new Streaming:

Somewhat confused by logic in complete() -- "I received a Complete message.  If 
I'm already waiting for a complete message, close the session.  Otherwise, wait 
for [another] Complete message"?

It looks like there may be synchronization issues with state; some accesses 
are synchronized and some are not.

Why is init broken out from construction?  Makes some things awkward, e.g. 
streamResult which is final-post-init but we have to null-check until then.

Why do we include a String description in SIM?

When does the follower immediately have something to stream to the leader? Is 
this a Repair optimization?



 Streaming (2.0) can deadlock
 

 Key: CASSANDRA-5699
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5699
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
 Fix For: 2.0 beta 1

 Attachments: 5699.txt


 The new streaming implementation (CASSANDRA-5286) creates 2 threads per host 
 for streaming, one for the incoming stream and one for the outgoing one. 
 However, both currently share the same socket, and since we use synchronous 
 I/O, a read can block a write, which can result in a deadlock if 2 nodes are 
 both blocking on a read at the same time, thus blocking their respective 
 writes (this is actually fairly easy to reproduce with a simple repair).
 So instead I'm attaching a patch that uses one socket per thread.
 The patch also corrects the stream throughput throttling calculation, which 
 was 8000 times lower than it should be.
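The description above, each side blocking in a read on a shared socket so neither gets to write, is a classic symmetric-blocking deadlock. A minimal sketch of the one-socket-per-direction approach the patch takes (class and method names here are illustrative, not Cassandra's actual streaming code):

```java
import java.io.*;
import java.net.*;

// Sketch only: with synchronous I/O on a single shared socket, a blocking
// read can stall the writer on the same connection; if both peers block in
// read() first, neither ever writes and both deadlock. Giving each
// direction its own socket decouples them: a peer's read blocks only its
// incoming socket, while its outgoing socket stays free to write.
public class TwoSocketDemo
{
    public static String exchange() throws Exception
    {
        try (ServerSocket server = new ServerSocket(0))
        {
            // One socket per direction: A->B and B->A.
            Socket aToB = new Socket("localhost", server.getLocalPort());
            Socket bSideIn = server.accept();
            Socket bToA = new Socket("localhost", server.getLocalPort());
            Socket aSideIn = server.accept();

            // Writers run on their own threads, on their own sockets, so a
            // peer blocked reading never holds up the opposite direction.
            Thread writerA = new Thread(() -> send(aToB, "ping"));
            Thread writerB = new Thread(() -> send(bToA, "pong"));
            writerA.start();
            writerB.start();

            String atB = recv(bSideIn);   // blocks harmlessly until writerA sends
            String atA = recv(aSideIn);   // blocks harmlessly until writerB sends
            writerA.join();
            writerB.join();
            aToB.close(); bToA.close(); bSideIn.close(); aSideIn.close();
            return atB + "/" + atA;
        }
    }

    private static void send(Socket s, String msg)
    {
        try { new DataOutputStream(s.getOutputStream()).writeUTF(msg); }
        catch (IOException e) { throw new UncheckedIOException(e); }
    }

    private static String recv(Socket s) throws IOException
    {
        return new DataInputStream(s.getInputStream()).readUTF();
    }

    public static void main(String[] args) throws Exception
    {
        System.out.println(exchange());
    }
}
```

With a single shared socket instead, the two symmetric reads would block the connection while each peer's pending write waits, and neither message would ever be sent.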

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5714) Allow coordinator failover for cursors

2013-07-01 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5714:
--

Reviewer: iamaleksey

 Allow coordinator failover for cursors
 --

 Key: CASSANDRA-5714
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5714
 Project: Cassandra
  Issue Type: Improvement
Affects Versions: 1.2.0 beta 1
Reporter: Michaël Figuière
Assignee: Sylvain Lebresne
Priority: Minor
 Fix For: 2.0 beta 1


 With CASSANDRA-4415, if a coordinator fails or gets slow, causing the {{NEXT}} 
 request to time out, the client application won't be able to complete its 
 browsing of the result. That implies that whenever developers rely on 
 cursors, they will usually have to write retry logic that requests results 
 starting where the iteration failed. This will quickly become painful.
 Ideally the driver should handle this failover by itself, transparently 
 issuing the retry query when {{NEXT}} fails, but as the driver doesn't 
 understand CQL queries, the only thing it's aware of is the number of rows 
 already read. Therefore we should allow an optional parameter 
 {{initial_row_number}} in {{QUERY}} and {{EXECUTE}} messages that would 
 enable a kind of stateless failover of cursors.
 With such an option, developers wouldn't have to write any failover/retry 
 logic on failure as they would know that everything has already been tried by 
 the driver.
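The retry loop that a parameter like the proposed {{initial_row_number}} would enable on the driver side can be sketched as follows. The names here ({{CursorFailover}}, {{fetchPage}}) are hypothetical stand-ins for a driver's EXECUTE round-trip, not an actual API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiFunction;

// Hypothetical sketch of transparent cursor failover: the driver only
// tracks how many rows the application has consumed, and on a timeout it
// re-issues the query asking the (possibly new) coordinator to start at
// that row offset.
public class CursorFailover
{
    // fetchPage.apply(initialRowNumber, pageSize) stands in for one
    // EXECUTE/NEXT round-trip; it may throw on coordinator timeout.
    public static List<String> fetchAll(BiFunction<Integer, Integer, List<String>> fetchPage,
                                        int pageSize)
    {
        List<String> rows = new ArrayList<>();
        while (true)
        {
            List<String> page;
            try
            {
                // Skip the rows.size() rows we already consumed.
                page = fetchPage.apply(rows.size(), pageSize);
            }
            catch (RuntimeException timeout)
            {
                continue; // retry (e.g. on another coordinator), same offset
            }
            rows.addAll(page);
            if (page.size() < pageSize)
                return rows; // short page means end of result set
        }
    }
}
```

The point of the ticket is that this loop needs no CQL awareness at all: the row offset is the only state the driver must carry across coordinators.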

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5649) Move resultset type information into prepare, not execute

2013-07-01 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-5649:
--

Reviewer: iamaleksey

 Move resultset type information into prepare, not execute
 -

 Key: CASSANDRA-5649
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5649
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jonathan Ellis
Assignee: Sylvain Lebresne
 Fix For: 2.0 beta 1


 Native protocol 1.0 sends type information on execute.  This is a minor 
 inefficiency for large resultsets; unfortunately, single-row resultsets are 
 common.
 This does represent a performance regression from Thrift; Thrift does not 
 send type information at all.  (Bad for driver complexity, but good for 
 performance.)
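The cost difference can be illustrated with toy arithmetic (the byte counts below are made up for illustration; this is not the actual wire format):

```java
// Toy model: if resultset metadata rides on every EXECUTE response, a
// single-row result pays the full metadata cost each time; sending it
// once at PREPARE amortizes it across all executions.
public class MetadataCost
{
    static int responseBytes(int rows, int rowBytes, int metadataBytes)
    {
        return metadataBytes + rows * rowBytes;
    }

    public static void main(String[] args)
    {
        int metadata = 200, row = 40; // assumed sizes, for illustration only
        // v1 behavior: metadata on every single-row response.
        int v1 = 1000 * responseBytes(1, row, metadata);
        // proposed: metadata once at prepare time, none per execute.
        int v2 = metadata + 1000 * responseBytes(1, row, 0);
        System.out.println(v1 + " bytes vs " + v2 + " bytes");
    }
}
```

Under these assumed sizes, 1000 single-row executes cost 240000 bytes of responses in the v1 scheme versus 40200 with prepare-time metadata, which is why single-row resultsets being common makes the change worthwhile.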

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-5466) Compaction task eats 100% CPU for a long time for tables with collection typed columns

2013-07-01 Thread Alex Zarutin (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-5466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Zarutin updated CASSANDRA-5466:


Attachment: 
nodetool-compactionstats-cass-5466-output-30-threads-1578752-req-LeveledCompactionStrategy.log

nodetool-compactionstats-cass-5466-output-30-threads-1386752-req-Default-LCS.log

logs-system-cass-5466-output-30-threads-1578752-req-LeveledCompactionStrategy.log

logs-system-cass-5466-output-30-threads-1386752-req-Default-LCS.log
CASSANDRA-5466.txt
Cassandra_JDBC_Updater.tar.gz

I made a couple of repro steps, trying to reproduce this issue on C* 1.2.4 
using CCM on a 1-node cluster running with 4 and 8 GB.
The cluster was built using the default cluster configuration (test I) and 
with the compaction strategy set to LeveledCompactionStrategy (test II).

With thousands of updates nothing much happens: C* takes 100% of CPU or more 
for a while, while nodetool compactionstats does not really show anything 
(pending tasks: 0, Active compaction remaining time: n/a), but it soon drops 
to an average of ~1% CPU utilization.

So I increased the load, generating 1.3 - 1.5M concurrent updates (using 30 
threads). The behavior became more like what the bug reporter mentioned; 
however, after a few minutes CPU utilization went back to normal.

Attached documents:
- repro steps
- Java-based client (the tar.gz.hello project; rename xxx.tar.gz.hello to 
xxx.tar.gz)

- system.log for the test I
- system.log for the test II
- nodetool compactionstats output for the test I
- nodetool compactionstats output for the test II

 Compaction task eats 100% CPU for a long time for tables with collection 
 typed columns
 --

 Key: CASSANDRA-5466
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5466
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.2.4
 Environment: ubuntu 12.10, sun-6-java 1.6.0.37, Core-i7, 8GB RAM
Reporter: Alexey Tereschenko
Assignee: Alex Zarutin
 Attachments: CASSANDRA-5466.txt, Cassandra_JDBC_Updater.tar.gz, 
 logs-system-cass-5466-output-30-threads-1386752-req-Default-LCS.log, 
 logs-system-cass-5466-output-30-threads-1578752-req-LeveledCompactionStrategy.log,
  
 nodetool-compactionstats-cass-5466-output-30-threads-1386752-req-Default-LCS.log,
  
 nodetool-compactionstats-cass-5466-output-30-threads-1578752-req-LeveledCompactionStrategy.log


 For the table:
 {code:sql}
 create table test (
 user_id bigint,
 first_list list<bigint>,
 second_list list<bigint>,
 third_list list<bigint>,
 PRIMARY KEY (user_id)
 );
 {code}
 I do thousands of updates like the following:
 {code:sql}
 UPDATE test SET first_list = [1], second_list = [2], third_list = [3] WHERE 
 user_id = ?;
 {code}
 In several minutes a compaction task starts running. {{nodetool 
 compactionstats}} shows that the remaining time is 2 seconds, but in fact it 
 can take hours to really complete the compaction tasks. During that time 
 Cassandra consumes 100% of CPU and slows down so significantly that it gives 
 connection timeout exceptions to any client code trying to establish a 
 connection with Cassandra. This happens only with tables with collection 
 typed columns.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (CASSANDRA-3416) nodetool and show schema give different value for compact threshold

2013-07-01 Thread Alex Zarutin (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-3416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Zarutin updated CASSANDRA-3416:


Assignee: Alex Zarutin  (was: Ryan McGuire)

 nodetool and show schema give different value for compact threshold
 ---

 Key: CASSANDRA-3416
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3416
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Affects Versions: 1.0.0
 Environment: ALL
Reporter: mike li
Assignee: Alex Zarutin
Priority: Minor

 On Thu, Oct 27, 2011 at 10:06 PM,  mike...@thomsonreuters.com wrote:
  Why these two gives different results?
  ./nodetool -h 172.xx.xxx.xx  getcompactionthreshold Timeseries TickData
  Current compaction thresholds for Timeseries/TickData:
 
  min = 1,  max = 2147483647
 
 
  [default@Timeseries] show schema;
 
  use Timeseries;
 
  ...
 
and min_compaction_threshold = 4
 
and max_compaction_threshold = 32
 
 
  If we use leveledCompaction, does compaction threshold setting matter?
 
 No, it doesn't matter with leveled compaction. The code overrides the min 
 threshold to 1 and the max to Integer.MAX_VALUE, which is what you are seeing 
 with nodetool.
 It obviously doesn't override it everywhere it should, given the output of 
 show schema.
 Do you mind opening a JIRA ticket so we can fix it?
 --
 Sylvain
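
A minimal sketch of the behavior described above (illustrative names, not 
Cassandra's actual classes): leveled compaction forces the thresholds at 
runtime, while the stored schema keeps the user-configured values, hence the 
mismatch between nodetool and show schema.

```java
// Illustrative only: nodetool reads the runtime values (forced to 1 and
// Integer.MAX_VALUE under leveled compaction), while show schema prints
// the configured 4 and 32 that were never overridden in the schema output.
public class ThresholdOverride
{
    public int minCompactionThreshold = 4;   // user-configured default
    public int maxCompactionThreshold = 32;  // user-configured default

    public void applyLeveledCompaction()
    {
        // Leveled compaction ignores the size-tiered thresholds.
        minCompactionThreshold = 1;
        maxCompactionThreshold = Integer.MAX_VALUE;
    }
}
```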
 
 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


git commit: simple javadoc fix

2013-07-01 Thread dbrosius
Updated Branches:
  refs/heads/trunk b621078f2 -> ec3b8f817


simple javadoc fix


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ec3b8f81
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ec3b8f81
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ec3b8f81

Branch: refs/heads/trunk
Commit: ec3b8f8173e6bdae313bf7f3814e3718a0cd3348
Parents: b621078
Author: Dave Brosius dbros...@apache.org
Authored: Mon Jul 1 22:36:04 2013 -0400
Committer: Dave Brosius dbros...@apache.org
Committed: Mon Jul 1 22:36:04 2013 -0400

--
 src/java/org/apache/cassandra/hadoop/pig/CqlStorage.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ec3b8f81/src/java/org/apache/cassandra/hadoop/pig/CqlStorage.java
--
diff --git a/src/java/org/apache/cassandra/hadoop/pig/CqlStorage.java 
b/src/java/org/apache/cassandra/hadoop/pig/CqlStorage.java
index b0d9dee..d5b8fed 100644
--- a/src/java/org/apache/cassandra/hadoop/pig/CqlStorage.java
+++ b/src/java/org/apache/cassandra/hadoop/pig/CqlStorage.java
@@ -62,7 +62,7 @@ public class CqlStorage extends AbstractCassandraStorage
 this(1000);
 }
 
-/** @param limit number of CQL rows to fetch in a thrift request */
+/** @param pageSize limit number of CQL rows to fetch in a thrift request */
 public CqlStorage(int pageSize)
 {
 super();