[jira] [Commented] (CASSANDRA-9160) Migrate CQL dtests to unit tests

2015-05-22 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14555771#comment-14555771
 ] 

Stefania commented on CASSANDRA-9160:
-

[~jbellis], [~slebresne]: 

The conversion of the CQL dtests from Python to Java is complete; please refer to 
the spreadsheet for details. Here's a summary:

- Several dtests were testing CAS statements, for which I added support in the 
ModificationStatement and BatchStatement executeInternal() methods. However, 
this simply checks that the conditions are met and applies the mutations if 
they are. Unlike the dtests, it obviously doesn't exercise Paxos 
(although most of the CAS dtests were using only one node).

- A few dtests were exercising the QueryPager functionality of 
SelectStatement.execute(), which is not supported by executeInternal(). Should 
we add paging to executeInternal()? 

- I wasn't able to convert 5 dtests, for various reasons (decoder or driver 
features, Thrift protocol). 

- I converted tests for two patches not yet delivered to trunk, CASSANDRA-7281 
and CASSANDRA-7396. We can either commit those tests with the @Ignore annotation, 
or remove them from this patch and attach them as separate patches to the 
respective tickets.

Overall I feel we may end up with a coverage gap if we move the CQL tests entirely 
to unit tests, specifically around the statement execute() methods and the code 
paths behind them (e.g. StorageProxy). Perhaps we should extend CQLTester to 
execute over the network rather than only internally, or leave some very basic 
CQL statements as dtests (one per statement, for example).

*TODO*:

- The dtests still need to be deleted; so far I have just marked each one with its 
equivalent Java test. The remaining 5 dtests should be re-housed.

- The rearrangement of converted or existing CQL unit tests is not done yet. I 
parked most of the converted tests in a class called TableTest unless it was 
clear they belonged somewhere else (a sketch of the style they follow is below). 
To rearrange them properly I would need some guidance on a top-level structure. 
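
For reference, a minimal sketch of the CQLTester style the converted tests follow 
(illustrative only: the table, values and assertions are made up, assuming the 
createTable/execute/assertRows/row helpers CQLTester already provides):

{code}
import org.junit.Test;

import org.apache.cassandra.cql3.CQLTester;

public class TableTest extends CQLTester
{
    @Test
    public void testSimpleInsertAndSelect() throws Throwable
    {
        // createTable() substitutes a generated, per-test table name for %s
        createTable("CREATE TABLE %s (k int PRIMARY KEY, v text)");

        execute("INSERT INTO %s (k, v) VALUES (?, ?)", 0, "zero");
        execute("INSERT INTO %s (k, v) VALUES (?, ?)", 1, "one");

        // execute() runs internally (executeInternal), prepared or unprepared
        // depending on the framework switch mentioned in the description
        assertRows(execute("SELECT k, v FROM %s WHERE k = ?", 1),
                   row(1, "one"));
    }
}
{code}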

 Migrate CQL dtests to unit tests
 

 Key: CASSANDRA-9160
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9160
 Project: Cassandra
  Issue Type: Test
Reporter: Sylvain Lebresne
Assignee: Stefania

 We have CQL tests in 2 places: dtests and unit tests. The unit tests are 
 actually somewhat better in the sense that they have the ability to test both 
 prepared and unprepared statements at the flip of a switch. It's also better 
 to have all those tests in the same place so we can improve the test 
 framework in only one place (CASSANDRA-7959, CASSANDRA-9159, etc...). So we 
 should move the CQL dtests to the unit tests (which will be a good occasion 
 to organize them better).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/6] cassandra git commit: Extend Transactional API to sstable lifecycle management

2015-05-22 Thread benedict
http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5a76bdb/test/unit/org/apache/cassandra/db/lifecycle/HelpersTest.java
--
diff --git a/test/unit/org/apache/cassandra/db/lifecycle/HelpersTest.java 
b/test/unit/org/apache/cassandra/db/lifecycle/HelpersTest.java
new file mode 100644
index 000..d53a830
--- /dev/null
+++ b/test/unit/org/apache/cassandra/db/lifecycle/HelpersTest.java
@@ -0,0 +1,158 @@
+/*
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing,
+* software distributed under the License is distributed on an
+* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+* KIND, either express or implied.  See the License for the
+* specific language governing permissions and limitations
+* under the License.
+*/
+package org.apache.cassandra.db.lifecycle;
+
+import java.util.Map;
+import java.util.Set;
+
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.ImmutableSet;
+import com.google.common.collect.Lists;
+
+import org.junit.Test;
+
+import junit.framework.Assert;
+import org.apache.cassandra.MockSchema;
+import org.apache.cassandra.Util;
+import org.apache.cassandra.io.sstable.Descriptor;
+import org.apache.cassandra.io.sstable.format.SSTableReader;
+import org.apache.cassandra.io.sstable.format.big.BigTableReader;
+import org.apache.cassandra.utils.concurrent.Refs;
+
+public class HelpersTest
+{
+
+static Set<Integer> a = set(1, 2, 3);
+static Set<Integer> b = set(4, 5, 6);
+static Set<Integer> c = set(7, 8, 9);
+static Set<Integer> abc = set(1, 2, 3, 4, 5, 6, 7, 8, 9);
+
+// this also tests orIn
+@Test
+public void testFilterIn()
+{
+check(Helpers.filterIn(abc, a), a);
+check(Helpers.filterIn(abc, a, c), set(1, 2, 3, 7, 8, 9));
+check(Helpers.filterIn(a, c), set());
+}
+
+// this also tests notIn
+@Test
+public void testFilterOut()
+{
+check(Helpers.filterOut(abc, a), set(4, 5, 6, 7, 8, 9));
+check(Helpers.filterOut(abc, b), set(1, 2, 3, 7, 8, 9));
+check(Helpers.filterOut(a, a), set());
+}
+
+@Test
+public void testConcatUniq()
+{
+check(Helpers.concatUniq(a, b, a, c, b, a), abc);
+}
+
+@Test
+public void testIdentityMap()
+{
+Integer one = new Integer(1);
+Integer two = new Integer(2);
+Integer three = new Integer(3);
+Map<Integer, Integer> identity = Helpers.identityMap(set(one, two, three));
+Assert.assertEquals(3, identity.size());
+Assert.assertSame(one, identity.get(1));
+Assert.assertSame(two, identity.get(2));
+Assert.assertSame(three, identity.get(3));
+}
+
+@Test
+public void testReplace()
+{
+boolean failure;
+failure = false;
+try
+{
+Helpers.replace(abc, a, c);
+}
+catch (AssertionError e)
+{
+failure = true;
+}
+Assert.assertTrue(failure);
+
+failure = false;
+try
+{
+Helpers.replace(a, abc, c);
+}
+catch (AssertionError e)
+{
+failure = true;
+}
+Assert.assertTrue(failure);
+
+failure = false;
+try
+{
+Map<Integer, Integer> notIdentity = ImmutableMap.of(1, new Integer(1), 2, 2, 3, 3);
+Helpers.replace(notIdentity, a, b);
+}
+catch (AssertionError e)
+{
+failure = true;
+}
+Assert.assertTrue(failure);
+
+// check it actually works when correct values provided
+check(Helpers.replace(a, a, b), b);
+}
+
+private static Set<Integer> set(Integer ... contents)
+{
+return ImmutableSet.copyOf(contents);
+}
+
+private static void check(Iterable<Integer> check, Set<Integer> expected)
+{
+Assert.assertEquals(expected, ImmutableSet.copyOf(check));
+}
+
+@Test
+public void testSetupDeletionNotification()
+{
+Iterable<SSTableReader> readers = Lists.newArrayList(MockSchema.sstable(1), MockSchema.sstable(2));
+Throwable accumulate = Helpers.setReplaced(readers, null);
+Assert.assertNull(accumulate);
+for (SSTableReader reader : readers)
+Assert.assertTrue(reader.isReplaced());
+accumulate = Helpers.setReplaced(readers, null);
+Assert.assertNotNull(accumulate);
+}
+
+@Test
+public void 

[3/6] cassandra git commit: Extend Transactional API to sstable lifecycle management

2015-05-22 Thread benedict
http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5a76bdb/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java 
b/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java
index a526ec9..8029075 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java
@@ -20,33 +20,28 @@ package org.apache.cassandra.io.sstable;
 import java.util.*;
 
 import com.google.common.annotations.VisibleForTesting;
-import com.google.common.collect.ImmutableList;
 
 import org.apache.cassandra.config.DatabaseDescriptor;
 import org.apache.cassandra.db.ColumnFamilyStore;
-import org.apache.cassandra.db.DataTracker;
 import org.apache.cassandra.db.DecoratedKey;
 import org.apache.cassandra.db.RowIndexEntry;
 import org.apache.cassandra.db.compaction.AbstractCompactedRow;
 import org.apache.cassandra.io.sstable.format.SSTableReader;
 import org.apache.cassandra.io.sstable.format.SSTableWriter;
+import org.apache.cassandra.db.lifecycle.LifecycleTransaction;
 import org.apache.cassandra.utils.CLibrary;
-import org.apache.cassandra.utils.FBUtilities;
-import org.apache.cassandra.utils.concurrent.Refs;
 import org.apache.cassandra.utils.concurrent.Transactional;
 
-import static org.apache.cassandra.utils.Throwables.merge;
-
 /**
  * Wraps one or more writers as output for rewriting one or more readers: 
every sstable_preemptive_open_interval_in_mb
  * we look in the summary we're collecting for the latest writer for the 
penultimate key that we know to have been fully
  * flushed to the index file, and then double check that the key is fully 
present in the flushed data file.
- * Then we move the starts of each reader forwards to that point, replace them 
in the datatracker, and attach a runnable
+ * Then we move the starts of each reader forwards to that point, replace them 
in the Tracker, and attach a runnable
  * for on-close (i.e. when all references expire) that drops the page cache 
prior to that key position
  *
  * hard-links are created for each partially written sstable so that readers 
opened against them continue to work past
  * the rename of the temporary file, which is deleted once all readers against 
the hard-link have been closed.
- * If for any reason the writer is rolled over, we immediately rename and 
fully expose the completed file in the DataTracker.
+ * If for any reason the writer is rolled over, we immediately rename and 
fully expose the completed file in the Tracker.
  *
  * On abort we restore the original lower bounds to the existing readers and 
delete any temporary files we had in progress,
  * but leave any hard-links in place for the readers we opened to cleanup when 
they're finished as we would had we finished
@@ -74,26 +69,19 @@ public class SSTableRewriter extends 
Transactional.AbstractTransactional impleme
 return preemptiveOpenInterval;
 }
 
-private final DataTracker dataTracker;
 private final ColumnFamilyStore cfs;
 
 private final long maxAge;
 private long repairedAt = -1;
 // the set of final readers we will expose on commit
+private final LifecycleTransaction transaction; // the readers we are rewriting (updated as they are replaced)
 private final List<SSTableReader> preparedForCommit = new ArrayList<>();
-private final Set<SSTableReader> rewriting; // the readers we are rewriting (updated as they are replaced)
-private final Map<Descriptor, DecoratedKey> originalStarts = new HashMap<>(); // the start key for each reader we are rewriting
 private final Map<Descriptor, Integer> fileDescriptors = new HashMap<>(); // the file descriptors for each reader descriptor we are rewriting
 
-private SSTableReader currentlyOpenedEarly; // the reader for the most 
recent (re)opening of the target file
 private long currentlyOpenedEarlyAt; // the position (in MB) in the target 
file we last (re)opened at
 
-private final List<Finished> finishedWriters = new ArrayList<>();
-// as writers are closed from finishedWriters, their last readers are 
moved into discard, so that abort can cleanup
-// after us safely; we use a set so we can add in both prepareToCommit and 
abort
-private final Set<SSTableReader> discard = new HashSet<>();
-// true for operations that are performed without Cassandra running 
(prevents updates of DataTracker)
-private final boolean isOffline;
+private final List<SSTableWriter> writers = new ArrayList<>();
+private final boolean isOffline; // true for operations that are performed 
without Cassandra running (prevents updates of Tracker)
 
 private SSTableWriter writer;
 private Map<DecoratedKey, RowIndexEntry> cachedKeys = new HashMap<>();
@@ -101,15 +89,11 @@ public class SSTableRewriter extends 
Transactional.AbstractTransactional impleme
 // for testing 

[1/6] cassandra git commit: Extend Transactional API to sstable lifecycle management

2015-05-22 Thread benedict
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 33d71b825 -> e5a76bdb5


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5a76bdb/test/unit/org/apache/cassandra/io/sstable/SSTableRewriterTest.java
--
diff --git a/test/unit/org/apache/cassandra/io/sstable/SSTableRewriterTest.java 
b/test/unit/org/apache/cassandra/io/sstable/SSTableRewriterTest.java
index 5dca589..fa91d00 100644
--- a/test/unit/org/apache/cassandra/io/sstable/SSTableRewriterTest.java
+++ b/test/unit/org/apache/cassandra/io/sstable/SSTableRewriterTest.java
@@ -18,14 +18,16 @@
 package org.apache.cassandra.io.sstable;
 
 import java.io.File;
-import java.io.IOException;
 import java.nio.ByteBuffer;
 import java.util.*;
 import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
 
+import com.google.common.collect.Iterables;
 import com.google.common.collect.Sets;
 import org.junit.After;
 import org.junit.BeforeClass;
+import com.google.common.util.concurrent.Uninterruptibles;
 import org.junit.Test;
 
 import org.apache.cassandra.SchemaLoader;
@@ -43,6 +45,7 @@ import org.apache.cassandra.io.sstable.format.SSTableReader;
 import org.apache.cassandra.io.sstable.format.SSTableWriter;
 import org.apache.cassandra.locator.SimpleStrategy;
 import org.apache.cassandra.db.compaction.SSTableSplitter;
+import org.apache.cassandra.db.lifecycle.LifecycleTransaction;
 import org.apache.cassandra.dht.Range;
 import org.apache.cassandra.dht.Token;
 import org.apache.cassandra.metrics.StorageMetrics;
@@ -52,7 +55,6 @@ import org.apache.cassandra.utils.FBUtilities;
 import org.apache.cassandra.utils.Pair;
 
 import static org.junit.Assert.*;
-import static org.apache.cassandra.utils.Throwables.maybeFail;
 
 public class SSTableRewriterTest extends SchemaLoader
 {
@@ -83,7 +85,9 @@ public class SSTableRewriterTest extends SchemaLoader
 {
 Keyspace keyspace = Keyspace.open(KEYSPACE);
 ColumnFamilyStore cfs = keyspace.getColumnFamilyStore(CF);
-cfs.truncateBlocking();
+truncate(cfs);
+assertEquals(0, cfs.metric.liveDiskSpaceUsed.getCount());
+
 for (int j = 0; j < 100; j++)
 {
 ByteBuffer key = ByteBufferUtil.bytes(String.valueOf(j));
@@ -94,8 +98,10 @@ public class SSTableRewriterTest extends SchemaLoader
 cfs.forceBlockingFlush();
 Set<SSTableReader> sstables = new HashSet<>(cfs.getSSTables());
 assertEquals(1, sstables.size());
-try (SSTableRewriter writer = new SSTableRewriter(cfs, sstables, 1000, 
false);
- AbstractCompactionStrategy.ScannerList scanners = 
cfs.getCompactionStrategy().getScanners(sstables);)
+assertEquals(sstables.iterator().next().bytesOnDisk(), 
cfs.metric.liveDiskSpaceUsed.getCount());
+try (AbstractCompactionStrategy.ScannerList scanners = 
cfs.getCompactionStrategy().getScanners(sstables);
+ LifecycleTransaction txn = cfs.getTracker().tryModify(sstables, 
OperationType.UNKNOWN);
+ SSTableRewriter writer = new SSTableRewriter(cfs, txn, 1000, 
false);)
 {
 ISSTableScanner scanner = scanners.scanners.get(0);
 CompactionController controller = new CompactionController(cfs, 
sstables, cfs.gcBefore(System.currentTimeMillis()));
@@ -105,30 +111,29 @@ public class SSTableRewriterTest extends SchemaLoader
 AbstractCompactedRow row = new LazilyCompactedRow(controller, 
Arrays.asList(scanner.next()));
 writer.append(row);
 }
-Collection<SSTableReader> newsstables = writer.finish();
-cfs.getDataTracker().markCompactedSSTablesReplaced(sstables, 
newsstables , OperationType.COMPACTION);
+writer.finish();
 }
 SSTableDeletingTask.waitForDeletions();
-
 validateCFS(cfs);
 int filecounts = 
assertFileCounts(sstables.iterator().next().descriptor.directory.list(), 0, 0);
 assertEquals(1, filecounts);
-
+truncate(cfs);
 }
 @Test
 public void basicTest2() throws InterruptedException
 {
 Keyspace keyspace = Keyspace.open(KEYSPACE);
 ColumnFamilyStore cfs = keyspace.getColumnFamilyStore(CF);
-cfs.truncateBlocking();
+truncate(cfs);
 
 SSTableReader s = writeFile(cfs, 1000);
 cfs.addSSTable(s);
 Set<SSTableReader> sstables = new HashSet<>(cfs.getSSTables());
 assertEquals(1, sstables.size());
 SSTableRewriter.overrideOpenInterval(1000);
-try (SSTableRewriter writer = new SSTableRewriter(cfs, sstables, 1000, 
false);
- AbstractCompactionStrategy.ScannerList scanners = 
cfs.getCompactionStrategy().getScanners(sstables);)
+try (AbstractCompactionStrategy.ScannerList scanners = 
cfs.getCompactionStrategy().getScanners(sstables);
+ LifecycleTransaction txn = cfs.getTracker().tryModify(sstables, 

[5/6] cassandra git commit: Extend Transactional API to sstable lifecycle management

2015-05-22 Thread benedict
http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5a76bdb/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index d79b835..004e893 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -54,14 +54,10 @@ import org.apache.cassandra.concurrent.NamedThreadFactory;
 import org.apache.cassandra.config.CFMetaData;
 import org.apache.cassandra.config.DatabaseDescriptor;
 import org.apache.cassandra.config.Schema;
-import org.apache.cassandra.db.Cell;
-import org.apache.cassandra.db.ColumnFamilyStore;
-import org.apache.cassandra.db.DecoratedKey;
-import org.apache.cassandra.db.Keyspace;
-import org.apache.cassandra.db.OnDiskAtom;
-import org.apache.cassandra.db.SystemKeyspace;
+import org.apache.cassandra.db.*;
 import org.apache.cassandra.db.compaction.CompactionInfo.Holder;
 import org.apache.cassandra.db.index.SecondaryIndexBuilder;
+import org.apache.cassandra.db.lifecycle.LifecycleTransaction;
 import org.apache.cassandra.dht.Bounds;
 import org.apache.cassandra.dht.Range;
 import org.apache.cassandra.dht.Token;
@@ -82,12 +78,14 @@ import org.apache.cassandra.utils.UUIDGen;
 import org.apache.cassandra.utils.concurrent.OpOrder;
 import org.apache.cassandra.utils.concurrent.Refs;
 
+import static java.util.Collections.singleton;
+
 /**
  * <p>
  * A singleton which manages a private executor of ongoing compactions.
  * </p>
  * Scheduling for compaction is accomplished by swapping sstables to be 
compacted into
- * a set via DataTracker. New scheduling attempts will ignore currently 
compacting
+ * a set via Tracker. New scheduling attempts will ignore currently compacting
  * sstables.
  */
 public class CompactionManager implements CompactionManagerMBean
@@ -195,7 +193,7 @@ public class CompactionManager implements 
CompactionManagerMBean
 public boolean isCompacting(Iterable<ColumnFamilyStore> cfses)
 {
 for (ColumnFamilyStore cfs : cfses)
-if (!cfs.getDataTracker().getCompacting().isEmpty())
+if (!cfs.getTracker().getCompacting().isEmpty())
 return true;
 return false;
 }
@@ -245,22 +243,22 @@ public class CompactionManager implements 
CompactionManagerMBean
 }
 }
 
-private AllSSTableOpStatus parallelAllSSTableOperation(final 
ColumnFamilyStore cfs, final OneSSTableOperation operation) throws 
ExecutionException, InterruptedException
+private AllSSTableOpStatus parallelAllSSTableOperation(final 
ColumnFamilyStore cfs, final OneSSTableOperation operation, OperationType 
operationType) throws ExecutionException, InterruptedException
 {
-Iterable<SSTableReader> compactingSSTables = cfs.markAllCompacting();
-if (compactingSSTables == null)
-{
-logger.info("Aborting operation on {}.{} after failing to interrupt other compaction operations", cfs.keyspace.getName(), cfs.name);
-return AllSSTableOpStatus.ABORTED;
-}
-if (Iterables.isEmpty(compactingSSTables))
+try (LifecycleTransaction compacting = 
cfs.markAllCompacting(operationType);)
 {
-logger.info("No sstables for {}.{}", cfs.keyspace.getName(), cfs.name);
-return AllSSTableOpStatus.SUCCESSFUL;
-}
-try
-{
-Iterable<SSTableReader> sstables = operation.filterSSTables(compactingSSTables);
+if (compacting == null)
+{
+logger.info("Aborting operation on {}.{} after failing to interrupt other compaction operations", cfs.keyspace.getName(), cfs.name);
+return AllSSTableOpStatus.ABORTED;
+}
+if (compacting.originals().isEmpty())
+{
+logger.info("No sstables for {}.{}", cfs.keyspace.getName(), cfs.name);
+return AllSSTableOpStatus.SUCCESSFUL;
+}
+
+Iterable<SSTableReader> sstables = operation.filterSSTables(compacting.originals());
 List<Future<Object>> futures = new ArrayList<>();
 
 for (final SSTableReader sstable : sstables)
@@ -271,31 +269,30 @@ public class CompactionManager implements 
CompactionManagerMBean
 return AllSSTableOpStatus.ABORTED;
 }
 
+final LifecycleTransaction txn = 
compacting.split(singleton(sstable));
 futures.add(executor.submit(new Callable<Object>()
 {
 @Override
 public Object call() throws Exception
 {
-operation.execute(sstable);
+operation.execute(txn);
 return this;
 }
 

[4/6] cassandra git commit: Extend Transactional API to sstable lifecycle management

2015-05-22 Thread benedict
http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5a76bdb/src/java/org/apache/cassandra/db/lifecycle/LifecycleTransaction.java
--
diff --git 
a/src/java/org/apache/cassandra/db/lifecycle/LifecycleTransaction.java 
b/src/java/org/apache/cassandra/db/lifecycle/LifecycleTransaction.java
new file mode 100644
index 000..acc9747
--- /dev/null
+++ b/src/java/org/apache/cassandra/db/lifecycle/LifecycleTransaction.java
@@ -0,0 +1,511 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.db.lifecycle;
+
+import java.util.*;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Function;
+import com.google.common.base.Predicate;
+import com.google.common.collect.*;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.cassandra.db.compaction.OperationType;
+import org.apache.cassandra.io.sstable.format.SSTableReader;
+import org.apache.cassandra.io.sstable.format.SSTableReader.UniqueIdentifier;
+import org.apache.cassandra.utils.concurrent.Transactional;
+
+import static com.google.common.base.Functions.compose;
+import static com.google.common.base.Predicates.*;
+import static com.google.common.collect.ImmutableSet.copyOf;
+import static com.google.common.collect.Iterables.*;
+import static java.util.Collections.singleton;
+import static org.apache.cassandra.db.lifecycle.Helpers.*;
+import static org.apache.cassandra.db.lifecycle.View.updateCompacting;
+import static org.apache.cassandra.db.lifecycle.View.updateLiveSet;
+import static org.apache.cassandra.utils.Throwables.maybeFail;
+import static org.apache.cassandra.utils.concurrent.Refs.release;
+import static org.apache.cassandra.utils.concurrent.Refs.selfRefs;
+
+public class LifecycleTransaction extends Transactional.AbstractTransactional
+{
+private static final Logger logger = 
LoggerFactory.getLogger(LifecycleTransaction.class);
+
+/**
+ * a class that represents accumulated modifications to the Tracker.
+ * has two instances, one containing modifications that are staged (i.e. 
invisible)
+ * and one containing those logged that have been made visible through a 
call to checkpoint()
+ */
+private static class State
+{
+// readers that are either brand new, update a previous new reader, or 
update one of the original readers
+final Set<SSTableReader> update = new HashSet<>();
+// disjoint from update, represents a subset of originals that is no 
longer needed
+final Set<SSTableReader> obsolete = new HashSet<>();
+
+void log(State staged)
+{
+update.removeAll(staged.obsolete);
+update.removeAll(staged.update);
+update.addAll(staged.update);
+obsolete.addAll(staged.obsolete);
+}
+
+boolean contains(SSTableReader reader)
+{
+return update.contains(reader) || obsolete.contains(reader);
+}
+
+boolean isEmpty()
+{
+return update.isEmpty() && obsolete.isEmpty();
+}
+
+void clear()
+{
+update.clear();
+obsolete.clear();
+}
+}
+
+public final Tracker tracker;
+private final OperationType operationType;
+// the original readers this transaction was opened over, and that it 
guards
+// (no other transactions may operate over these readers concurrently)
+private final Set<SSTableReader> originals = new HashSet<>();
+// the set of readers we've marked as compacting (only updated on creation 
and in checkpoint())
+private final Set<SSTableReader> marked = new HashSet<>();
+// the identity set of readers we've ever encountered; used to ensure we 
don't accidentally revisit the
+// same version of a reader. potentially a dangerous property if there are 
reference counting bugs
+// as they won't be caught until the transaction's lifespan is over.
+private final Set<UniqueIdentifier> identities = Collections.newSetFromMap(new IdentityHashMap<UniqueIdentifier, Boolean>());
+
+// changes that have been made visible
+private final State logged = new State();
+// 

[6/7] cassandra git commit: Extend Transactional API to sstable lifecycle management

2015-05-22 Thread benedict
Extend Transactional API to sstable lifecycle management

patch by benedict; reviewed by marcus for CASSANDRA-8568


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e5a76bdb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e5a76bdb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e5a76bdb

Branch: refs/heads/trunk
Commit: e5a76bdb5fc04ffa16b8becaa7877186226c3b32
Parents: 33d71b8
Author: Benedict Elliott Smith bened...@apache.org
Authored: Thu Mar 12 10:23:35 2015 +
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Fri May 22 09:44:36 2015 +0100

--
 CHANGES.txt |   1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  | 249 --
 .../org/apache/cassandra/db/DataTracker.java| 793 ---
 .../cassandra/db/HintedHandOffManager.java  |   2 +-
 src/java/org/apache/cassandra/db/Keyspace.java  |  15 +-
 src/java/org/apache/cassandra/db/Memtable.java  |  22 +-
 .../compaction/AbstractCompactionStrategy.java  |   7 +-
 .../db/compaction/AbstractCompactionTask.java   |  19 +-
 .../db/compaction/CompactionController.java |   4 +-
 .../db/compaction/CompactionManager.java| 182 +++--
 .../cassandra/db/compaction/CompactionTask.java |  54 +-
 .../DateTieredCompactionStrategy.java   |  17 +-
 .../compaction/LeveledCompactionStrategy.java   |  23 +-
 .../db/compaction/LeveledCompactionTask.java|  11 +-
 .../db/compaction/LeveledManifest.java  |  11 +-
 .../db/compaction/SSTableSplitter.java  |  13 +-
 .../cassandra/db/compaction/Scrubber.java   |  21 +-
 .../SizeTieredCompactionStrategy.java   |  30 +-
 .../cassandra/db/compaction/Upgrader.java   |  15 +-
 .../compaction/WrappingCompactionStrategy.java  |   2 +-
 .../writers/CompactionAwareWriter.java  |  11 +-
 .../writers/DefaultCompactionWriter.java|  11 +-
 .../writers/MajorLeveledCompactionWriter.java   |  11 +-
 .../writers/MaxSSTableSizeWriter.java   |  10 +-
 .../SplittingSizeTieredCompactionWriter.java|  14 +-
 .../AbstractSimplePerColumnSecondaryIndex.java  |   4 +-
 .../db/index/SecondaryIndexManager.java |   2 +-
 .../apache/cassandra/db/lifecycle/Helpers.java  | 241 ++
 .../db/lifecycle/LifecycleTransaction.java  | 511 
 .../db/lifecycle/SSTableIntervalTree.java   |  40 +
 .../apache/cassandra/db/lifecycle/Tracker.java  | 468 +++
 .../org/apache/cassandra/db/lifecycle/View.java | 252 ++
 .../io/compress/CompressionMetadata.java|   2 +-
 .../io/sstable/IndexSummaryManager.java | 106 ++-
 .../io/sstable/SSTableDeletingTask.java |  27 +-
 .../cassandra/io/sstable/SSTableRewriter.java   | 295 ++-
 .../io/sstable/format/SSTableReader.java| 100 ++-
 .../io/sstable/format/big/BigTableWriter.java   |   6 +-
 .../cassandra/io/util/SequentialWriter.java |   2 +-
 .../cassandra/metrics/ColumnFamilyMetrics.java  |  18 +-
 .../cassandra/streaming/StreamSession.java  |   7 +-
 .../cassandra/tools/StandaloneScrubber.java |  12 +-
 .../cassandra/tools/StandaloneSplitter.java |   7 +-
 .../cassandra/tools/StandaloneUpgrader.java |   6 +-
 .../cassandra/utils/concurrent/Blocker.java |  63 ++
 .../utils/concurrent/Transactional.java |  31 +-
 .../db/compaction/LongCompactionsTest.java  |  10 +-
 test/unit/org/apache/cassandra/MockSchema.java  | 167 
 test/unit/org/apache/cassandra/Util.java|  27 +-
 .../org/apache/cassandra/db/KeyCacheTest.java   |   3 +-
 .../unit/org/apache/cassandra/db/ScrubTest.java |  58 +-
 .../db/compaction/AntiCompactionTest.java   |  51 +-
 .../compaction/CompactionAwareWriterTest.java   |  45 +-
 .../DateTieredCompactionStrategyTest.java   |   6 +-
 .../cassandra/db/lifecycle/HelpersTest.java | 158 
 .../db/lifecycle/LifecycleTransactionTest.java  | 412 ++
 .../cassandra/db/lifecycle/TrackerTest.java | 342 
 .../apache/cassandra/db/lifecycle/ViewTest.java | 202 +
 .../io/sstable/IndexSummaryManagerTest.java | 123 ++-
 .../cassandra/io/sstable/SSTableReaderTest.java |  11 +-
 .../io/sstable/SSTableRewriterTest.java | 250 +++---
 61 files changed, 3902 insertions(+), 1711 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5a76bdb/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 8b59309..ca87385 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2
+ * Extend Transactional API to sstable lifecycle management (CASSANDRA-8568)
  * (cqlsh) Add support for native protocol 4 (CASSANDRA-9399)
  * Ensure that UDF and UDAs are keyspace-isolated (CASSANDRA-9409)
  * Revert 

[6/6] cassandra git commit: Extend Transactional API to sstable lifecycle management

2015-05-22 Thread benedict
Extend Transactional API to sstable lifecycle management

patch by benedict; reviewed by marcus for CASSANDRA-8568


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e5a76bdb
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e5a76bdb
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e5a76bdb

Branch: refs/heads/cassandra-2.2
Commit: e5a76bdb5fc04ffa16b8becaa7877186226c3b32
Parents: 33d71b8
Author: Benedict Elliott Smith bened...@apache.org
Authored: Thu Mar 12 10:23:35 2015 +
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Fri May 22 09:44:36 2015 +0100

--
 CHANGES.txt |   1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  | 249 --
 .../org/apache/cassandra/db/DataTracker.java| 793 ---
 .../cassandra/db/HintedHandOffManager.java  |   2 +-
 src/java/org/apache/cassandra/db/Keyspace.java  |  15 +-
 src/java/org/apache/cassandra/db/Memtable.java  |  22 +-
 .../compaction/AbstractCompactionStrategy.java  |   7 +-
 .../db/compaction/AbstractCompactionTask.java   |  19 +-
 .../db/compaction/CompactionController.java |   4 +-
 .../db/compaction/CompactionManager.java| 182 +++--
 .../cassandra/db/compaction/CompactionTask.java |  54 +-
 .../DateTieredCompactionStrategy.java   |  17 +-
 .../compaction/LeveledCompactionStrategy.java   |  23 +-
 .../db/compaction/LeveledCompactionTask.java|  11 +-
 .../db/compaction/LeveledManifest.java  |  11 +-
 .../db/compaction/SSTableSplitter.java  |  13 +-
 .../cassandra/db/compaction/Scrubber.java   |  21 +-
 .../SizeTieredCompactionStrategy.java   |  30 +-
 .../cassandra/db/compaction/Upgrader.java   |  15 +-
 .../compaction/WrappingCompactionStrategy.java  |   2 +-
 .../writers/CompactionAwareWriter.java  |  11 +-
 .../writers/DefaultCompactionWriter.java|  11 +-
 .../writers/MajorLeveledCompactionWriter.java   |  11 +-
 .../writers/MaxSSTableSizeWriter.java   |  10 +-
 .../SplittingSizeTieredCompactionWriter.java|  14 +-
 .../AbstractSimplePerColumnSecondaryIndex.java  |   4 +-
 .../db/index/SecondaryIndexManager.java |   2 +-
 .../apache/cassandra/db/lifecycle/Helpers.java  | 241 ++
 .../db/lifecycle/LifecycleTransaction.java  | 511 
 .../db/lifecycle/SSTableIntervalTree.java   |  40 +
 .../apache/cassandra/db/lifecycle/Tracker.java  | 468 +++
 .../org/apache/cassandra/db/lifecycle/View.java | 252 ++
 .../io/compress/CompressionMetadata.java|   2 +-
 .../io/sstable/IndexSummaryManager.java | 106 ++-
 .../io/sstable/SSTableDeletingTask.java |  27 +-
 .../cassandra/io/sstable/SSTableRewriter.java   | 295 ++-
 .../io/sstable/format/SSTableReader.java| 100 ++-
 .../io/sstable/format/big/BigTableWriter.java   |   6 +-
 .../cassandra/io/util/SequentialWriter.java |   2 +-
 .../cassandra/metrics/ColumnFamilyMetrics.java  |  18 +-
 .../cassandra/streaming/StreamSession.java  |   7 +-
 .../cassandra/tools/StandaloneScrubber.java |  12 +-
 .../cassandra/tools/StandaloneSplitter.java |   7 +-
 .../cassandra/tools/StandaloneUpgrader.java |   6 +-
 .../cassandra/utils/concurrent/Blocker.java |  63 ++
 .../utils/concurrent/Transactional.java |  31 +-
 .../db/compaction/LongCompactionsTest.java  |  10 +-
 test/unit/org/apache/cassandra/MockSchema.java  | 167 
 test/unit/org/apache/cassandra/Util.java|  27 +-
 .../org/apache/cassandra/db/KeyCacheTest.java   |   3 +-
 .../unit/org/apache/cassandra/db/ScrubTest.java |  58 +-
 .../db/compaction/AntiCompactionTest.java   |  51 +-
 .../compaction/CompactionAwareWriterTest.java   |  45 +-
 .../DateTieredCompactionStrategyTest.java   |   6 +-
 .../cassandra/db/lifecycle/HelpersTest.java | 158 
 .../db/lifecycle/LifecycleTransactionTest.java  | 412 ++
 .../cassandra/db/lifecycle/TrackerTest.java | 342 
 .../apache/cassandra/db/lifecycle/ViewTest.java | 202 +
 .../io/sstable/IndexSummaryManagerTest.java | 123 ++-
 .../cassandra/io/sstable/SSTableReaderTest.java |  11 +-
 .../io/sstable/SSTableRewriterTest.java | 250 +++---
 61 files changed, 3902 insertions(+), 1711 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5a76bdb/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 8b59309..ca87385 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.2
+ * Extend Transactional API to sstable lifecycle management (CASSANDRA-8568)
  * (cqlsh) Add support for native protocol 4 (CASSANDRA-9399)
  * Ensure that UDF and UDAs are keyspace-isolated (CASSANDRA-9409)
  * Revert 

[jira] [Commented] (CASSANDRA-9449) Running ALTER TABLE cql statement asynchronously results in failure

2015-05-22 Thread Paul Praet (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14555793#comment-14555793
 ] 

Paul Praet commented on CASSANDRA-9449:
---

1) Yes:
{code}
CREATE TABLE wifidoctor.device (
columna text,
columnb text,
columnc timestamp,
columnd text,
columne text,
columnf text,
columng text,
columnh text,
PRIMARY KEY ((columna, columnb))
) WITH bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'min_threshold': '4', 'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';

cqlsh> INSERT INTO wifidoctor.device (columnA, columnB, columnC,columnD,columnE,columnF,columnG,columnH) VALUES ('a','','2015-01-01','','','','','');
InvalidRequest: code=2200 [Invalid query] message="Unknown identifier columne"

{code}

2) Yes, it does. After the restart, the INSERT query works.
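
As an aside, a minimal sketch of how one might wait for schema agreement before preparing 
statements that depend on the new columns (illustrative only; it assumes a Cluster reference 
is at hand and that the driver's Cluster.getMetadata().checkSchemaAgreement() is available):

{code}
import java.util.concurrent.TimeUnit;

import com.datastax.driver.core.Cluster;

public final class SchemaAgreement
{
    private SchemaAgreement() {}

    // Poll until every host reports the same schema version (or the timeout expires),
    // so a subsequent prepare() sees the columns added by the async ALTERs.
    public static boolean await(Cluster cluster, long timeoutMillis) throws InterruptedException
    {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline)
        {
            if (cluster.getMetadata().checkSchemaAgreement())
                return true;
            TimeUnit.MILLISECONDS.sleep(200);
        }
        return false;
    }
}
{code}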

 Running ALTER TABLE cql statement asynchronously results in failure
 ---

 Key: CASSANDRA-9449
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9449
 Project: Cassandra
  Issue Type: Bug
 Environment: Single cluster environment
Reporter: Paul Praet

 When running 'ALTER TABLE' cql statements asynchronously, we notice that 
 often certain columns are missing, causing subsequent queries to fail.
 The code snippet below can be used to reproduce the problem.
 cassandra is a com.datastax.driver.core.Session reference.
 We construct the table synchronously and then alter it (adding five columns) 
 with the cassandra async API. We synchronize to ensure the table is properly 
 altered before continuing. Preparing the statement at the bottom of the code 
 snippet often fails:
 {noformat} com.datastax.driver.core.exceptions.InvalidQueryException: Unknown 
 identifier columnf {noformat}
 {code}
  @Test
 public void testCassandraAsyncAlterTable() throws Exception {
     ResultSet rs = cassandra.execute("CREATE TABLE device ( columnA text, columnB text, columnC timestamp, PRIMARY KEY ((columnA, columnB)));");
     List<ResultSetFuture> futures = new ArrayList<>();
     futures.add(cassandra.executeAsync("ALTER TABLE device ADD columnD text;"));
     futures.add(cassandra.executeAsync("ALTER TABLE device ADD columnE text;"));
     futures.add(cassandra.executeAsync("ALTER TABLE device ADD columnF text;"));
     futures.add(cassandra.executeAsync("ALTER TABLE device ADD columnG text;"));
     futures.add(cassandra.executeAsync("ALTER TABLE device ADD columnH text;"));

     for (ResultSetFuture resultfuture : futures) { resultfuture.get(); }

     /* discard the result; only interested to see if it works or not */
     cassandra.prepare("INSERT INTO device (columnA, columnB, columnC,columnD,columnE,columnF,columnG,columnH) VALUES (?,?,?,?,?,?,?,?);");
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9449) Running ALTER TABLE cql statement asynchronously results in failure

2015-05-22 Thread Paul Praet (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Praet updated CASSANDRA-9449:
--
Description: 
When running 'ALTER TABLE' cql statements asynchronously, we notice that often 
certain columns are missing, causing subsequent queries to fail.
The code snippet below can be used to reproduce the problem.

cassandra is a com.datastax.driver.core.Session reference.
We construct the table synchronously and then alter it (adding five columns) 
with the cassandra async API. We synchronize to ensure the table is properly 
altered before continuing. Preparing the statement at the bottom of the code 
snippet often fails:
{noformat} com.datastax.driver.core.exceptions.InvalidQueryException: Unknown 
identifier columnf {noformat}

{code}
@Test
public void testCassandraAsyncAlterTable() throws Exception {
    ResultSet rs = cassandra.execute("CREATE TABLE device ( columnA text, columnB text, columnC timestamp, PRIMARY KEY ((columnA, columnB)));");

    List<ResultSetFuture> futures = new ArrayList<>();
    futures.add(cassandra.executeAsync("ALTER TABLE device ADD columnD text;"));
    futures.add(cassandra.executeAsync("ALTER TABLE device ADD columnE text;"));
    futures.add(cassandra.executeAsync("ALTER TABLE device ADD columnF text;"));
    futures.add(cassandra.executeAsync("ALTER TABLE device ADD columnG text;"));
    futures.add(cassandra.executeAsync("ALTER TABLE device ADD columnH text;"));

    for (ResultSetFuture resultfuture : futures) { resultfuture.get(); }

    /* discard the result; only interested to see if it works or not */
    cassandra.prepare("INSERT INTO device (columnA, columnB, columnC,columnD,columnE,columnF,columnG,columnH) VALUES (?,?,?,?,?,?,?,?);");
}


{code}

  was:
When running 'ALTER TABLE' cql statements asynchronously, we notice that often 
certain columns are missing, causing subsequent queries to fail.
The code snippet below can be used to reproduce the problem.

cassandra is a com.datastax.driver.core.Session reference.
We construct the table synchronously and then alter it (adding five columns) 
with the cassandra async API. We synchronize to ensure the table is properly 
altered before continuing. Preparing the statement at the bottom of the code 
snippet often fails:
{noformat} com.datastax.driver.core.exceptions.InvalidQueryException: Unknown 
identifier columnf {noformat}

{code}
@Test
public void testCassandraAsyncAlterTable() throws Exception {
    ResultSet rs = cassandra.execute("CREATE TABLE device ( columnA text, columnB text, columnC timestamp PRIMARY KEY ((columnA, columnB)));");

    List<ResultSetFuture> futures = new ArrayList<>();
    futures.add(cassandra.executeAsync("ALTER TABLE device ADD columnD text;"));
    futures.add(cassandra.executeAsync("ALTER TABLE device ADD columnE text;"));
    futures.add(cassandra.executeAsync("ALTER TABLE device ADD columnF text;"));
    futures.add(cassandra.executeAsync("ALTER TABLE device ADD columnG text;"));
    futures.add(cassandra.executeAsync("ALTER TABLE device ADD columnH text;"));

    for (ResultSetFuture resultfuture : futures) { resultfuture.get(); }

    /* discard the result; only interested to see if it works or not */
    cassandra.prepare("INSERT INTO device (columnA, columnB, columnC,columnD,columnE,columnF,columnG,columnH) VALUES (?,?,?,?,?,?,?,?);");
}


{code}


 Running ALTER TABLE cql statement asynchronously results in failure
 ---

 Key: CASSANDRA-9449
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9449
 Project: Cassandra
  Issue Type: Bug
 Environment: Single cluster environment
Reporter: Paul Praet

 When running 'ALTER TABLE' cql statements asynchronously, we notice that 
 often certain columns are missing, causing subsequent queries to fail.
 The code snippet below can be used to reproduce the problem.
 cassandra is a com.datastax.driver.core.Session reference.
 We construct the table synchronously and then alter it (adding five columns) 
 with the cassandra async API. We synchronize to ensure the table is properly 
 altered before continuing. Preparing the statement at the bottom of the code 
 snippet often fails:
 {noformat} com.datastax.driver.core.exceptions.InvalidQueryException: Unknown 
 identifier columnf {noformat}
 {code}
  @Test
 public void testCassandraAsyncAlterTable() throws Exception {
     ResultSet rs = cassandra.execute("CREATE TABLE device ( columnA text, columnB text, columnC timestamp, PRIMARY KEY ((columnA, columnB)));");
     List<ResultSetFuture> futures = new ArrayList<>();
     futures.add(cassandra.executeAsync("ALTER TABLE device ADD columnD text;"));
 futures.add(cassandra.executeAsync(ALTER 

[jira] [Commented] (CASSANDRA-7925) TimeUUID LSB should be unique per process, not just per machine

2015-05-22 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14555715#comment-14555715
 ] 

Benedict commented on CASSANDRA-7925:
-

I'm not sure we need each thread to have its own... If a client or another 
server generates a UUID, they should both provide the LSB themselves. If this 
server generates one, the act is synchronized to ensure no duplication.

I think salting clockSeqAndNode with the ClassLoader.hashCode() and PID should 
be sufficient?
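
A minimal sketch of that salting idea (not the attached patch); {{machineNode}} stands in for the existing per-machine address hash:

{code}
import java.lang.management.ManagementFactory;
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

public final class ProcessLocalSalt
{
    // Mix process-local identity (the PID via the runtime MX bean name, plus the
    // ClassLoader identity hash) into the 48-bit node portion of the LSB, so two
    // processes or classloaders on one machine cannot produce the same TimeUUID LSB.
    public static long saltedNode(long machineNode)
    {
        CRC32 crc = new CRC32();
        crc.update(ManagementFactory.getRuntimeMXBean().getName().getBytes(StandardCharsets.UTF_8)); // "pid@host"
        long salt = crc.getValue() ^ ProcessLocalSalt.class.getClassLoader().hashCode();
        return (machineNode ^ salt) & 0x0000FFFFFFFFFFFFL; // keep it to 48 bits
    }
}
{code}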


 TimeUUID LSB should be unique per process, not just per machine
 ---

 Key: CASSANDRA-7925
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7925
 Project: Cassandra
  Issue Type: Improvement
Reporter: Peter Mädel
Assignee: T Jake Luciani
 Fix For: 2.2.x

 Attachments: cassandra-uuidgen.patch


 as pointed out in 
 [CASSANDRA-7919|https://issues.apache.org/jira/browse/CASSANDRA-7919?focusedCommentId=14132529page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14132529]
  lsb collisions are also possible serverside.
 a sufficient solution would be to include references to pid and classloader 
 within lsb.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8805) runWithCompactionsDisabled only cancels compactions, which is not the only source of markCompacted

2015-05-22 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14555778#comment-14555778
 ] 

Benedict commented on CASSANDRA-8805:
-

[~carlyeks]: The general approach looks good, but could I get a proper 
(rebased) branch up? If you could get things into a ready to commit state 
with CHANGES.txt wired up as well, that would be appreciated.

I would like to move away from dealing with patch files, as it isn't conducive 
to the new commit process.

 

 runWithCompactionsDisabled only cancels compactions, which is not the only 
 source of markCompacted
 --

 Key: CASSANDRA-8805
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8805
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Benedict
Assignee: Carl Yeksigian
 Fix For: 2.1.x

 Attachments: 8805-2.1.txt


 Operations like repair that may operate over all sstables cancel compactions 
 before beginning, and fail if there are any files marked compacting after 
 doing so. Redistribution of index summaries is not a compaction, so is not 
 cancelled by this action, but does mark sstables as compacting, so such an 
 action will fail to initiate if there is an index summary redistribution in 
 progress. It seems that IndexSummaryManager needs to register itself as 
 interruptible along with compactions (AFAICT no other actions that may 
 markCompacting are not themselves compactions).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/7] cassandra git commit: Extend Transactional API to sstable lifecycle management

2015-05-22 Thread benedict
Repository: cassandra
Updated Branches:
  refs/heads/trunk 15d424e86 - d96a02a12


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5a76bdb/test/unit/org/apache/cassandra/io/sstable/SSTableRewriterTest.java
--
diff --git a/test/unit/org/apache/cassandra/io/sstable/SSTableRewriterTest.java 
b/test/unit/org/apache/cassandra/io/sstable/SSTableRewriterTest.java
index 5dca589..fa91d00 100644
--- a/test/unit/org/apache/cassandra/io/sstable/SSTableRewriterTest.java
+++ b/test/unit/org/apache/cassandra/io/sstable/SSTableRewriterTest.java
@@ -18,14 +18,16 @@
 package org.apache.cassandra.io.sstable;
 
 import java.io.File;
-import java.io.IOException;
 import java.nio.ByteBuffer;
 import java.util.*;
 import java.util.concurrent.ThreadLocalRandom;
+import java.util.concurrent.TimeUnit;
 
+import com.google.common.collect.Iterables;
 import com.google.common.collect.Sets;
 import org.junit.After;
 import org.junit.BeforeClass;
+import com.google.common.util.concurrent.Uninterruptibles;
 import org.junit.Test;
 
 import org.apache.cassandra.SchemaLoader;
@@ -43,6 +45,7 @@ import org.apache.cassandra.io.sstable.format.SSTableReader;
 import org.apache.cassandra.io.sstable.format.SSTableWriter;
 import org.apache.cassandra.locator.SimpleStrategy;
 import org.apache.cassandra.db.compaction.SSTableSplitter;
+import org.apache.cassandra.db.lifecycle.LifecycleTransaction;
 import org.apache.cassandra.dht.Range;
 import org.apache.cassandra.dht.Token;
 import org.apache.cassandra.metrics.StorageMetrics;
@@ -52,7 +55,6 @@ import org.apache.cassandra.utils.FBUtilities;
 import org.apache.cassandra.utils.Pair;
 
 import static org.junit.Assert.*;
-import static org.apache.cassandra.utils.Throwables.maybeFail;
 
 public class SSTableRewriterTest extends SchemaLoader
 {
@@ -83,7 +85,9 @@ public class SSTableRewriterTest extends SchemaLoader
 {
 Keyspace keyspace = Keyspace.open(KEYSPACE);
 ColumnFamilyStore cfs = keyspace.getColumnFamilyStore(CF);
-cfs.truncateBlocking();
+truncate(cfs);
+assertEquals(0, cfs.metric.liveDiskSpaceUsed.getCount());
+
 for (int j = 0; j < 100; j ++)
 {
 ByteBuffer key = ByteBufferUtil.bytes(String.valueOf(j));
@@ -94,8 +98,10 @@ public class SSTableRewriterTest extends SchemaLoader
 cfs.forceBlockingFlush();
 Set<SSTableReader> sstables = new HashSet<>(cfs.getSSTables());
 assertEquals(1, sstables.size());
-try (SSTableRewriter writer = new SSTableRewriter(cfs, sstables, 1000, 
false);
- AbstractCompactionStrategy.ScannerList scanners = 
cfs.getCompactionStrategy().getScanners(sstables);)
+assertEquals(sstables.iterator().next().bytesOnDisk(), 
cfs.metric.liveDiskSpaceUsed.getCount());
+try (AbstractCompactionStrategy.ScannerList scanners = 
cfs.getCompactionStrategy().getScanners(sstables);
+ LifecycleTransaction txn = cfs.getTracker().tryModify(sstables, 
OperationType.UNKNOWN);
+ SSTableRewriter writer = new SSTableRewriter(cfs, txn, 1000, 
false);)
 {
 ISSTableScanner scanner = scanners.scanners.get(0);
 CompactionController controller = new CompactionController(cfs, 
sstables, cfs.gcBefore(System.currentTimeMillis()));
@@ -105,30 +111,29 @@ public class SSTableRewriterTest extends SchemaLoader
 AbstractCompactedRow row = new LazilyCompactedRow(controller, 
Arrays.asList(scanner.next()));
 writer.append(row);
 }
-Collection<SSTableReader> newsstables = writer.finish();
-cfs.getDataTracker().markCompactedSSTablesReplaced(sstables, 
newsstables , OperationType.COMPACTION);
+writer.finish();
 }
 SSTableDeletingTask.waitForDeletions();
-
 validateCFS(cfs);
 int filecounts = 
assertFileCounts(sstables.iterator().next().descriptor.directory.list(), 0, 0);
 assertEquals(1, filecounts);
-
+truncate(cfs);
 }
 @Test
 public void basicTest2() throws InterruptedException
 {
 Keyspace keyspace = Keyspace.open(KEYSPACE);
 ColumnFamilyStore cfs = keyspace.getColumnFamilyStore(CF);
-cfs.truncateBlocking();
+truncate(cfs);
 
 SSTableReader s = writeFile(cfs, 1000);
 cfs.addSSTable(s);
 Set<SSTableReader> sstables = new HashSet<>(cfs.getSSTables());
 assertEquals(1, sstables.size());
 SSTableRewriter.overrideOpenInterval(1000);
-try (SSTableRewriter writer = new SSTableRewriter(cfs, sstables, 1000, 
false);
- AbstractCompactionStrategy.ScannerList scanners = 
cfs.getCompactionStrategy().getScanners(sstables);)
+try (AbstractCompactionStrategy.ScannerList scanners = 
cfs.getCompactionStrategy().getScanners(sstables);
+ LifecycleTransaction txn = cfs.getTracker().tryModify(sstables, 
OperationType.UNKNOWN);
+ 

[3/7] cassandra git commit: Extend Transactional API to sstable lifecycle management

2015-05-22 Thread benedict
http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5a76bdb/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java 
b/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java
index a526ec9..8029075 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableRewriter.java
@@ -20,33 +20,28 @@ package org.apache.cassandra.io.sstable;
 import java.util.*;
 
 import com.google.common.annotations.VisibleForTesting;
-import com.google.common.collect.ImmutableList;
 
 import org.apache.cassandra.config.DatabaseDescriptor;
 import org.apache.cassandra.db.ColumnFamilyStore;
-import org.apache.cassandra.db.DataTracker;
 import org.apache.cassandra.db.DecoratedKey;
 import org.apache.cassandra.db.RowIndexEntry;
 import org.apache.cassandra.db.compaction.AbstractCompactedRow;
 import org.apache.cassandra.io.sstable.format.SSTableReader;
 import org.apache.cassandra.io.sstable.format.SSTableWriter;
+import org.apache.cassandra.db.lifecycle.LifecycleTransaction;
 import org.apache.cassandra.utils.CLibrary;
-import org.apache.cassandra.utils.FBUtilities;
-import org.apache.cassandra.utils.concurrent.Refs;
 import org.apache.cassandra.utils.concurrent.Transactional;
 
-import static org.apache.cassandra.utils.Throwables.merge;
-
 /**
  * Wraps one or more writers as output for rewriting one or more readers: 
every sstable_preemptive_open_interval_in_mb
  * we look in the summary we're collecting for the latest writer for the 
penultimate key that we know to have been fully
  * flushed to the index file, and then double check that the key is fully 
present in the flushed data file.
- * Then we move the starts of each reader forwards to that point, replace them 
in the datatracker, and attach a runnable
+ * Then we move the starts of each reader forwards to that point, replace them 
in the Tracker, and attach a runnable
  * for on-close (i.e. when all references expire) that drops the page cache 
prior to that key position
  *
  * hard-links are created for each partially written sstable so that readers 
opened against them continue to work past
  * the rename of the temporary file, which is deleted once all readers against 
the hard-link have been closed.
- * If for any reason the writer is rolled over, we immediately rename and 
fully expose the completed file in the DataTracker.
+ * If for any reason the writer is rolled over, we immediately rename and 
fully expose the completed file in the Tracker.
  *
  * On abort we restore the original lower bounds to the existing readers and 
delete any temporary files we had in progress,
  * but leave any hard-links in place for the readers we opened to cleanup when 
they're finished as we would had we finished
@@ -74,26 +69,19 @@ public class SSTableRewriter extends 
Transactional.AbstractTransactional impleme
 return preemptiveOpenInterval;
 }
 
-private final DataTracker dataTracker;
 private final ColumnFamilyStore cfs;
 
 private final long maxAge;
 private long repairedAt = -1;
 // the set of final readers we will expose on commit
+private final LifecycleTransaction transaction; // the readers we are 
rewriting (updated as they are replaced)
 private final List<SSTableReader> preparedForCommit = new ArrayList<>();
-private final Set<SSTableReader> rewriting; // the readers we are rewriting (updated as they are replaced)
-private final Map<Descriptor, DecoratedKey> originalStarts = new HashMap<>(); // the start key for each reader we are rewriting
 private final Map<Descriptor, Integer> fileDescriptors = new HashMap<>(); // the file descriptors for each reader descriptor we are rewriting
 
-private SSTableReader currentlyOpenedEarly; // the reader for the most 
recent (re)opening of the target file
 private long currentlyOpenedEarlyAt; // the position (in MB) in the target 
file we last (re)opened at
 
-private final List<Finished> finishedWriters = new ArrayList<>();
-// as writers are closed from finishedWriters, their last readers are moved into discard, so that abort can cleanup
-// after us safely; we use a set so we can add in both prepareToCommit and abort
-private final Set<SSTableReader> discard = new HashSet<>();
-// true for operations that are performed without Cassandra running 
(prevents updates of DataTracker)
-private final boolean isOffline;
+private final List<SSTableWriter> writers = new ArrayList<>();
+private final boolean isOffline; // true for operations that are performed without Cassandra running (prevents updates of Tracker)
 
 private SSTableWriter writer;
 private Map<DecoratedKey, RowIndexEntry> cachedKeys = new HashMap<>();
@@ -101,15 +89,11 @@ public class SSTableRewriter extends 
Transactional.AbstractTransactional impleme
 // for testing 

[jira] [Updated] (CASSANDRA-8568) Extend Transactional API to sstable lifecycle management

2015-05-22 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-8568:

Summary: Extend Transactional API to sstable lifecycle management  (was: 
Impose new API on data tracker modifications that makes correct usage obvious 
and imposes safety)

 Extend Transactional API to sstable lifecycle management
 

 Key: CASSANDRA-8568
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8568
 Project: Cassandra
  Issue Type: Bug
Reporter: Benedict
Assignee: Benedict
 Fix For: 2.2.0 rc1


 DataTracker has become a bit of a quagmire, and not at all obvious to 
 interface with, with many subtly different modifiers. I suspect it is still 
 subtly broken, especially around error recovery.
 I propose piggy-backing on CASSANDRA-7705 to offer RAII (and GC-enforced, for 
 those situations where a try/finally block isn't possible) objects that have 
 transactional behaviour, with a few simple declarative methods that can be 
 composed to provide all of the functionality we currently need.
 See CASSANDRA-8399 for context
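
For orientation, a hedged sketch of the usage pattern this enables (mirroring the SSTableRewriter test diff elsewhere in this digest, with {{cfs}} and {{sstables}} taken from the surrounding test):

{code}
try (LifecycleTransaction txn = cfs.getTracker().tryModify(sstables, OperationType.UNKNOWN);
     SSTableRewriter writer = new SSTableRewriter(cfs, txn, 1000, false))
{
    // ... append compacted rows to the writer ...
    writer.finish(); // commits the lifecycle changes; leaving the block without a successful finish() is meant to roll them back
}
{code}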



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9463) ant test-all results incomplete when parsed

2015-05-22 Thread Michael Shuler (JIRA)
Michael Shuler created CASSANDRA-9463:
-

 Summary: ant test-all results incomplete when parsed
 Key: CASSANDRA-9463
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9463
 Project: Cassandra
  Issue Type: Test
Reporter: Michael Shuler


trunk `ant test` - 1,196 total tests
trunk `ant test-all` - 1,353 total tests

`ant test-all` runs 
test,long-test,test-compression,pig-test,test-clientutil-jar, so we should be 
getting 1196*2 (test, test-compression) + N (long-test) + 24 (pig-test) + N 
(test-clientutil-jar)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9431) Static Analysis to warn on unsafe use of Autocloseable instances

2015-05-22 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14556724#comment-14556724
 ] 

Benedict commented on CASSANDRA-9431:
-

Thanks guys for squashing these issues. I've pushed some further modifications 
[here|https://github.com/belliottsmith/cassandra/tree/fix-leaks]

* There was a bug introduced in SerializingCache, which could have double 
decremented the RefCountedMemory.
* I have a comment in CompactionManager around a piece of code that was 
previously run in a try/finally block. I'm not honestly sure what it's 
achieving, though, so I don't know which is correct.
* Where it looked possible (and helpful) to do so, I've moved the 
SuppressWarnings flag to the offending statement. This only works for some 
simple cases, but I think helps improve clarity where it can be employed.
* I removed AutoCloseable from Ref, since we should never use it as one
* I also did some random tidying of bits I thought could be simplified

It's worth noting that this static analysis is *not* perfect - it seems to miss 
some other potential holes. I'm hoping that [~iamaleksey]'s upcoming proposal 
for using lambdas will help us avoid Iterators escaping into so much of the 
codebase, and that by attacking it from both angles we'll hopefully be safe.
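
As an illustration of the statement-level {{@SuppressWarnings}} point above (a generic hedged example, not code from the branch):

{code}
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public final class ResourceSuppressionExample
{
    private static final List<FileInputStream> OPEN_STREAMS = new ArrayList<>();

    // The stream deliberately outlives this method (a cleanup task drains OPEN_STREAMS
    // and closes them), so the leak warning is suppressed on just this one declaration
    // rather than on the whole method or class.
    public static void register(File f) throws IOException
    {
        @SuppressWarnings("resource")
        FileInputStream in = new FileInputStream(f);
        OPEN_STREAMS.add(in);
    }
}
{code}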

 Static Analysis to warn on unsafe use of Autocloseable instances
 

 Key: CASSANDRA-9431
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9431
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: T Jake Luciani
 Fix For: 2.2.0 rc1


 The ideal goal would be to fail the build under any unsafe (and not annotated 
 as considered safe independently) uses of Autocloseable. It looks as though 
 eclipse (and hence, hopefully ecj) has support for this feature, so we should 
 investigate if it meets our requirements and we can get it integrated



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9461) Error when deleting from list

2015-05-22 Thread T Jake Luciani (JIRA)
T Jake Luciani created CASSANDRA-9461:
-

 Summary: Error when deleting from list
 Key: CASSANDRA-9461
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9461
 Project: Cassandra
  Issue Type: Bug
Reporter: T Jake Luciani
 Attachments: listbug.txt

I encountered this error while testing. 

{code}
org.apache.cassandra.exceptions.InvalidRequestException: Attempted to delete an 
element from a list which is null
[junit] at 
org.apache.cassandra.cql3.Lists$DiscarderByIndex.execute(Lists.java:511)
[junit] at 
org.apache.cassandra.cql3.statements.DeleteStatement.addUpdateForKey(DeleteStatement.java:86)
[junit] at 
org.apache.cassandra.cql3.statements.ModificationStatement.getMutations(ModificationStatement.java:649)
[junit] at 
org.apache.cassandra.cql3.statements.ModificationStatement.executeInternal(ModificationStatement.java:614)
[junit] at 
org.apache.cassandra.cql3.QueryProcessor.executeOnceInternal(QueryProcessor.java:326)
[junit] at 
org.apache.cassandra.cql3.CQLTester.execute(CQLTester.java:508)
[junit] at 
org.apache.cassandra.cql3.JsonTest.testFromJsonFct(JsonTest.java:362)

{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9463) ant test-all results incomplete when parsed

2015-05-22 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-9463:
--
Fix Version/s: 2.2.x
   3.x

 ant test-all results incomplete when parsed
 ---

 Key: CASSANDRA-9463
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9463
 Project: Cassandra
  Issue Type: Test
Reporter: Michael Shuler
 Fix For: 3.x, 2.2.x


 trunk `ant test` - 1,196 total tests
 trunk `ant test-all` - 1,353 total tests
 `ant test-all` runs 
 test,long-test,test-compression,pig-test,test-clientutil-jar, so we should 
 be getting 1196*2 (test, test-compression) + N (long-test) + 24 (pig-test) + 
 N (test-clientutil-jar)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9464) test-clientutil-jar broken

2015-05-22 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-9464:
--
Fix Version/s: 2.2.x
   3.x

 test-clientutil-jar broken
 --

 Key: CASSANDRA-9464
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9464
 Project: Cassandra
  Issue Type: Test
Reporter: Michael Shuler
 Fix For: 3.x, 2.2.x


 {noformat}
 20:37:37 test-clientutil-jar:
 20:37:37 [junit] Testsuite: org.apache.cassandra.serializers.ClientUtilsTest
 20:37:37 [junit] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time 
 elapsed: 0.032 sec
 20:37:37 [junit]
 20:37:37 [junit] Testcase: 
 test(org.apache.cassandra.serializers.ClientUtilsTest):Caused an ERROR
 20:37:37 [junit] org/apache/thrift/TException
 20:37:37 [junit] java.lang.NoClassDefFoundError: org/apache/thrift/TException
 20:37:37 [junit]  at 
 org.apache.cassandra.utils.UUIDGen.makeNode(UUIDGen.java:275)
 20:37:37 [junit]  at 
 org.apache.cassandra.utils.UUIDGen.makeClockSeqAndNode(UUIDGen.java:229)
 20:37:37 [junit]  at 
 org.apache.cassandra.utils.UUIDGen.clinit(UUIDGen.java:38)
 20:37:37 [junit]  at 
 org.apache.cassandra.serializers.ClientUtilsTest.test(ClientUtilsTest.java:56)
 20:37:37 [junit] Caused by: java.lang.ClassNotFoundException: 
 org.apache.thrift.TException
 20:37:37 [junit]  at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
 20:37:37 [junit]
 20:37:37 [junit]
 20:37:37 Target 'pig-test' failed with message 'The following error occurred 
 while executing this line:
 20:37:37 /home/automaton/cassandra/build.xml:1167: Some pig test(s) failed.'.
 20:37:37 [junit] Test org.apache.cassandra.serializers.ClientUtilsTest FAILED
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/3] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-05-22 Thread tylerhobbs
Merge branch 'cassandra-2.1' into cassandra-2.2

Conflicts:
CHANGES.txt
pylib/cqlshlib/formatting.py


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/321f5e82
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/321f5e82
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/321f5e82

Branch: refs/heads/trunk
Commit: 321f5e82f3083927d642416f1f51e54476225437
Parents: 4900538 7f855d1
Author: Tyler Hobbs tylerlho...@gmail.com
Authored: Fri May 22 17:40:58 2015 -0500
Committer: Tyler Hobbs tylerlho...@gmail.com
Committed: Fri May 22 17:40:58 2015 -0500

--
 CHANGES.txt  | 1 +
 pylib/cqlshlib/formatting.py | 9 -
 2 files changed, 9 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/321f5e82/CHANGES.txt
--
diff --cc CHANGES.txt
index ca87385,a4430c0..d4a8150
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,10 -1,5 +1,11 @@@
 -2.1.6
 +2.2
 + * Extend Transactional API to sstable lifecycle management (CASSANDRA-8568)
 + * (cqlsh) Add support for native protocol 4 (CASSANDRA-9399)
 + * Ensure that UDF and UDAs are keyspace-isolated (CASSANDRA-9409)
 + * Revert CASSANDRA-7807 (tracing completion client notifications) 
(CASSANDRA-9429)
 + * Add ability to stop compaction by ID (CASSANDRA-7207)
 +Merged from 2.1:
+  * (cqlsh) Better float precision by default (CASSANDRA-9224)
   * Improve estimated row count (CASSANDRA-9107)
   * Optimize range tombstone memory footprint (CASSANDRA-8603)
   * Use configured gcgs in anticompaction (CASSANDRA-9397)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/321f5e82/pylib/cqlshlib/formatting.py
--
diff --cc pylib/cqlshlib/formatting.py
index 2310fa9,2a99e23..c0c3163
--- a/pylib/cqlshlib/formatting.py
+++ b/pylib/cqlshlib/formatting.py
@@@ -14,11 -14,11 +14,11 @@@
  # See the License for the specific language governing permissions and
  # limitations under the License.
  
 -import sys
 -import re
 -import time
  import calendar
  import math
 +import re
- import time
 +import sys
++import time
  from collections import defaultdict
  from . import wcwidth
  from .displaying import colorme, FormattedValue, DEFAULT_VALUE_COLORS



[1/2] cassandra git commit: cqlsh: Improve default float precision behavior

2015-05-22 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 490053820 - 321f5e82f


cqlsh: Improve default float precision behavior

Patch by Stefania Alborghetti; reviewed by Tyler Hobbs for
CASSANDRA-9224


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7f855d11
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7f855d11
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7f855d11

Branch: refs/heads/cassandra-2.2
Commit: 7f855d113bef60808dd55735e70ec86646582de1
Parents: 744db70
Author: Stefania Alborghetti stefania.alborghe...@datastax.com
Authored: Fri May 22 17:38:46 2015 -0500
Committer: Tyler Hobbs tylerlho...@gmail.com
Committed: Fri May 22 17:38:46 2015 -0500

--
 CHANGES.txt  | 1 +
 pylib/cqlshlib/formatting.py | 8 
 2 files changed, 9 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7f855d11/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index ca12522..a4430c0 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.6
+ * (cqlsh) Better float precision by default (CASSANDRA-9224)
  * Improve estimated row count (CASSANDRA-9107)
  * Optimize range tombstone memory footprint (CASSANDRA-8603)
  * Use configured gcgs in anticompaction (CASSANDRA-9397)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7f855d11/pylib/cqlshlib/formatting.py
--
diff --git a/pylib/cqlshlib/formatting.py b/pylib/cqlshlib/formatting.py
index e9d22fd..2a99e23 100644
--- a/pylib/cqlshlib/formatting.py
+++ b/pylib/cqlshlib/formatting.py
@@ -14,6 +14,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
+import sys
 import re
 import time
 import calendar
@@ -144,6 +145,13 @@ def format_floating_point_type(val, colormap, 
float_precision, **_):
     elif math.isinf(val):
         bval = 'Infinity'
     else:
+        exponent = int(math.log10(abs(val))) if abs(val) > sys.float_info.epsilon else -sys.maxint - 1
+        if -4 <= exponent < float_precision:
+            # when this is true %g will not use scientific notation,
+            # increasing precision should not change this decision
+            # so we increase the precision to take into account the
+            # digits to the left of the decimal point
+            float_precision = float_precision + exponent + 1
         bval = '%.*g' % (float_precision, val)
     return colorme(bval, colormap, 'float')
 



[1/3] cassandra git commit: cqlsh: Improve default float precision behavior

2015-05-22 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk 4ed316eca - e36ae521b


cqlsh: Improve default float precision behavior

Patch by Stefania Alborghetti; reviewed by Tyler Hobbs for
CASSANDRA-9224


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7f855d11
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7f855d11
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7f855d11

Branch: refs/heads/trunk
Commit: 7f855d113bef60808dd55735e70ec86646582de1
Parents: 744db70
Author: Stefania Alborghetti stefania.alborghe...@datastax.com
Authored: Fri May 22 17:38:46 2015 -0500
Committer: Tyler Hobbs tylerlho...@gmail.com
Committed: Fri May 22 17:38:46 2015 -0500

--
 CHANGES.txt  | 1 +
 pylib/cqlshlib/formatting.py | 8 
 2 files changed, 9 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7f855d11/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index ca12522..a4430c0 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.6
+ * (cqlsh) Better float precision by default (CASSANDRA-9224)
  * Improve estimated row count (CASSANDRA-9107)
  * Optimize range tombstone memory footprint (CASSANDRA-8603)
  * Use configured gcgs in anticompaction (CASSANDRA-9397)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7f855d11/pylib/cqlshlib/formatting.py
--
diff --git a/pylib/cqlshlib/formatting.py b/pylib/cqlshlib/formatting.py
index e9d22fd..2a99e23 100644
--- a/pylib/cqlshlib/formatting.py
+++ b/pylib/cqlshlib/formatting.py
@@ -14,6 +14,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
+import sys
 import re
 import time
 import calendar
@@ -144,6 +145,13 @@ def format_floating_point_type(val, colormap, 
float_precision, **_):
     elif math.isinf(val):
         bval = 'Infinity'
     else:
+        exponent = int(math.log10(abs(val))) if abs(val) > sys.float_info.epsilon else -sys.maxint - 1
+        if -4 <= exponent < float_precision:
+            # when this is true %g will not use scientific notation,
+            # increasing precision should not change this decision
+            # so we increase the precision to take into account the
+            # digits to the left of the decimal point
+            float_precision = float_precision + exponent + 1
         bval = '%.*g' % (float_precision, val)
     return colorme(bval, colormap, 'float')
 



[3/3] cassandra git commit: Merge branch 'cassandra-2.2' into trunk

2015-05-22 Thread tylerhobbs
Merge branch 'cassandra-2.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e36ae521
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e36ae521
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e36ae521

Branch: refs/heads/trunk
Commit: e36ae521b2861a8834266b1871736e2e62563c2a
Parents: 4ed316e 321f5e8
Author: Tyler Hobbs tylerlho...@gmail.com
Authored: Fri May 22 17:41:18 2015 -0500
Committer: Tyler Hobbs tylerlho...@gmail.com
Committed: Fri May 22 17:41:18 2015 -0500

--
 CHANGES.txt  | 1 +
 pylib/cqlshlib/formatting.py | 9 -
 2 files changed, 9 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e36ae521/CHANGES.txt
--



[jira] [Commented] (CASSANDRA-8502) Static columns returning null for pages after first

2015-05-22 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14556954#comment-14556954
 ] 

Tyler Hobbs commented on CASSANDRA-8502:


[~slebresne] bump

 Static columns returning null for pages after first
 ---

 Key: CASSANDRA-8502
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8502
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Flavien Charlon
Assignee: Tyler Hobbs
 Fix For: 2.1.x, 2.0.x

 Attachments: 8502-2.0-v2.txt, 8502-2.0.txt, 8502-2.1-v2.txt, 
 null-static-column.txt


 When paging is used for a query containing a static column, the first page 
 contains the right value for the static column, but subsequent pages have 
 null for the static column instead of the expected value.
 Repro steps:
 - Create a table with a static column
 - Create a partition with 500 cells
 - Using cqlsh, query that partition
 Actual result:
 - You will see that first, the static column appears as expected, but if you 
 press a key after ---MORE---, the static columns will appear as null.
 See the attached file for a repro of the output.
 I am using a single node cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9451) Startup message response for unsupported protocol versions is incorrect

2015-05-22 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-9451:
---
Attachment: 9451.txt

The attached patch (and 
[branch|https://github.com/thobbs/cassandra/tree/CASSANDRA-9451]) fixes the 
handling of unsupported protocol versions and adds a unit test.

Pending cassci jobs:
* 
http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-9451-dtest/
* 
http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-9451-testall/

 Startup message response for unsupported protocol versions is incorrect
 ---

 Key: CASSANDRA-9451
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9451
 Project: Cassandra
  Issue Type: Bug
 Environment: OS X 10.9
 Cassandra 2.1.5 
Reporter: Jorge Bay
Assignee: Tyler Hobbs
  Labels: client-impacting
 Fix For: 2.1.x

 Attachments: 9451.txt


 The response to a STARTUP request with protocol v4 on a C* 2.1 host is an 
 error with an incorrect error code (0). 
 Instead of the error code being Protocol error ({{0x000A}}) it has error 
 code 0 and message (wrapped by netty): 
 {{io.netty.handler.codec.DecoderException: 
 org.apache.cassandra.transport.ProtocolException: Invalid or unsupported 
 protocol version: 4}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9464) test-clientutil-jar broken

2015-05-22 Thread Michael Shuler (JIRA)
Michael Shuler created CASSANDRA-9464:
-

 Summary: test-clientutil-jar broken
 Key: CASSANDRA-9464
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9464
 Project: Cassandra
  Issue Type: Test
Reporter: Michael Shuler


{noformat}
20:37:37 test-clientutil-jar:
20:37:37 [junit] Testsuite: org.apache.cassandra.serializers.ClientUtilsTest
20:37:37 [junit] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time 
elapsed: 0.032 sec
20:37:37 [junit]
20:37:37 [junit] Testcase: 
test(org.apache.cassandra.serializers.ClientUtilsTest):  Caused an ERROR
20:37:37 [junit] org/apache/thrift/TException
20:37:37 [junit] java.lang.NoClassDefFoundError: org/apache/thrift/TException
20:37:37 [junit]at 
org.apache.cassandra.utils.UUIDGen.makeNode(UUIDGen.java:275)
20:37:37 [junit]at 
org.apache.cassandra.utils.UUIDGen.makeClockSeqAndNode(UUIDGen.java:229)
20:37:37 [junit]at 
org.apache.cassandra.utils.UUIDGen.clinit(UUIDGen.java:38)
20:37:37 [junit]at 
org.apache.cassandra.serializers.ClientUtilsTest.test(ClientUtilsTest.java:56)
20:37:37 [junit] Caused by: java.lang.ClassNotFoundException: 
org.apache.thrift.TException
20:37:37 [junit]at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
20:37:37 [junit]
20:37:37 [junit]
20:37:37 Target 'pig-test' failed with message 'The following error occurred 
while executing this line:
20:37:37 /home/automaton/cassandra/build.xml:1167: Some pig test(s) failed.'.
20:37:37 [junit] Test org.apache.cassandra.serializers.ClientUtilsTest FAILED
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9463) ant test-all results incomplete when parsed

2015-05-22 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14556909#comment-14556909
 ] 

Michael Shuler edited comment on CASSANDRA-9463 at 5/22/15 10:15 PM:
-

The {{test-compression}} results appear to overwrite the {{test}} output:
{noformat}
(trunk)mshuler@hana:~/git/cassandra$ ant test -Dtest.name=KeyspaceTest
...
[junit] Testsuite: org.apache.cassandra.db.KeyspaceTest
[junit] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
3.403 sec
[junit]

(trunk)mshuler@hana:~/git/cassandra$ ls -l build/test/output/
total 48
-rw-r--r-- 1 mshuler mshuler 46550 May 22 16:58 
TEST-org.apache.cassandra.db.KeyspaceTest.xml

=

(trunk)mshuler@hana:~/git/cassandra$ ant test-compression 
-Dtest.name=KeyspaceTest
...
[junit] Testsuite: org.apache.cassandra.db.KeyspaceTest
[junit] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
3.205 sec
[junit]

(trunk)mshuler@hana:~/git/cassandra$ ls -l build/test/output/
total 48
-rw-r--r-- 1 mshuler mshuler 48562 May 22 16:59 
TEST-org.apache.cassandra.db.KeyspaceTest.xml
{noformat}

Aside from needing both test result xml output file sets, ideally, the XML 
written from {{test-compression}} would include within the XML test result block some sort of extra tag indicating that it was run with compression, to allow 
seeing the difference clearly. For example {{test}} outputs 
{{org.apache.cassandra.db.KeyspaceTest.testGetRowNoColumns}} and 
{{test-compression}} appends .compression so we get a test result for 
{{org.apache.cassandra.db.KeyspaceTest.testGetRowNoColumns.compression}}.


was (Author: mshuler):
The {{test-compression}} results appear to overwrite the {{test}} output:
{noformat}
(trunk)mshuler@hana:~/git/cassandra$ ant test -Dtest.name=KeyspaceTest
...
[junit] Testsuite: org.apache.cassandra.db.KeyspaceTest
[junit] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
3.403 sec
[junit]

(trunk)mshuler@hana:~/git/cassandra$ ls -l build/test/output/
total 48
-rw-r--r-- 1 mshuler mshuler 46550 May 22 16:58 
TEST-org.apache.cassandra.db.KeyspaceTest.xml

=

(trunk)mshuler@hana:~/git/cassandra$ ant test-compression 
-Dtest.name=KeyspaceTest
...
[junit] Testsuite: org.apache.cassandra.db.KeyspaceTest
[junit] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
3.205 sec
[junit]

(trunk)mshuler@hana:~/git/cassandra$ ls -l build/test/output/
total 48
-rw-r--r-- 1 mshuler mshuler 48562 May 22 16:59 
TEST-org.apache.cassandra.db.KeyspaceTest.xml
{noformat}

Aside from needing both test result xml output sets, ideally, the output from 
{{test-compression}} would get some sort of extra tag indicating that it was run with 
compression, to allow seeing the difference clearly. For example {{test}} 
outputs {{org.apache.cassandra.db.KeyspaceTest.testGetRowNoColumns}} and 
{{test-compression}} appends .compression so we get a test result for 
{{org.apache.cassandra.db.KeyspaceTest.testGetRowNoColumns.compression}}.

 ant test-all results incomplete when parsed
 ---

 Key: CASSANDRA-9463
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9463
 Project: Cassandra
  Issue Type: Test
Reporter: Michael Shuler
 Fix For: 3.x, 2.2.x


 trunk `ant test` - 1,196 total tests
 trunk `ant test-all` - 1,353 total tests
 `ant test-all` runs 
 test,long-test,test-compression,pig-test,test-clientutil-jar, so we should 
 be getting 1196*2 (test, test-compression) + N (long-test) + 24 (pig-test) + 
 N (test-clientutil-jar)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: exclude maven dependency (thru hadoop-core) of core-3.1.1 as it is provided now by ecj patch by dbrosius reviewed by bwilliams for cassandra-9410

2015-05-22 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 321f5e82f - bb1f13104


exclude maven dependency (thru hadoop-core) of core-3.1.1 as it is provided now 
 by ecj
patch by dbrosius reviewed by bwilliams for cassandra-9410


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bb1f1310
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bb1f1310
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bb1f1310

Branch: refs/heads/cassandra-2.2
Commit: bb1f1310478eab19111d17ac5509bce498d98743
Parents: 321f5e8
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Fri May 22 18:46:25 2015 -0400
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Fri May 22 18:47:58 2015 -0400

--
 build.xml | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bb1f1310/build.xml
--
diff --git a/build.xml b/build.xml
index f3dab3f..ff048a2 100644
--- a/build.xml
+++ b/build.xml
@@ -360,6 +360,7 @@
           <dependency groupId="org.apache.hadoop" artifactId="hadoop-core" version="1.0.3">
             <exclusion groupId="org.mortbay.jetty" artifactId="servlet-api"/>
             <exclusion groupId="commons-logging" artifactId="commons-logging"/>
+            <exclusion groupId="org.eclipse.jdt" artifactId="core"/>
           </dependency>
           <dependency groupId="org.apache.hadoop" artifactId="hadoop-minicluster" version="1.0.3"/>
           <dependency groupId="org.apache.pig" artifactId="pig" version="0.12.1"/>



[1/2] cassandra git commit: exclude maven dependency (thru hadoop-core) of core-3.1.1 as it is provided now by ecj patch by dbrosius reviewed by bwilliams for cassandra-9410

2015-05-22 Thread dbrosius
Repository: cassandra
Updated Branches:
  refs/heads/trunk e36ae521b - a2830ef99


exclude maven dependency (thru hadoop-core) of core-3.1.1 as it is provided now 
 by ecj
patch by dbrosius reviewed by bwilliams for cassandra-9410


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bb1f1310
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bb1f1310
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bb1f1310

Branch: refs/heads/trunk
Commit: bb1f1310478eab19111d17ac5509bce498d98743
Parents: 321f5e8
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Fri May 22 18:46:25 2015 -0400
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Fri May 22 18:47:58 2015 -0400

--
 build.xml | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bb1f1310/build.xml
--
diff --git a/build.xml b/build.xml
index f3dab3f..ff048a2 100644
--- a/build.xml
+++ b/build.xml
@@ -360,6 +360,7 @@
           <dependency groupId="org.apache.hadoop" artifactId="hadoop-core" version="1.0.3">
             <exclusion groupId="org.mortbay.jetty" artifactId="servlet-api"/>
             <exclusion groupId="commons-logging" artifactId="commons-logging"/>
+            <exclusion groupId="org.eclipse.jdt" artifactId="core"/>
           </dependency>
           <dependency groupId="org.apache.hadoop" artifactId="hadoop-minicluster" version="1.0.3"/>
           <dependency groupId="org.apache.pig" artifactId="pig" version="0.12.1"/>



[2/2] cassandra git commit: Merge branch 'cassandra-2.2' into trunk

2015-05-22 Thread dbrosius
Merge branch 'cassandra-2.2' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a2830ef9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a2830ef9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a2830ef9

Branch: refs/heads/trunk
Commit: a2830ef9939637297fdfc34fd7e6ff5913a1f953
Parents: e36ae52 bb1f131
Author: Dave Brosius dbros...@mebigfatguy.com
Authored: Fri May 22 18:48:37 2015 -0400
Committer: Dave Brosius dbros...@mebigfatguy.com
Committed: Fri May 22 18:48:37 2015 -0400

--
 build.xml | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/a2830ef9/build.xml
--



[jira] [Commented] (CASSANDRA-9463) ant test-all results incomplete when parsed

2015-05-22 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14556909#comment-14556909
 ] 

Michael Shuler commented on CASSANDRA-9463:
---

The {{test-compression}} results appear to overwrite the {{test}} output:
{noformat}
(trunk)mshuler@hana:~/git/cassandra$ ant test -Dtest.name=KeyspaceTest
...
[junit] Testsuite: org.apache.cassandra.db.KeyspaceTest
[junit] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
3.403 sec
[junit]

(trunk)mshuler@hana:~/git/cassandra$ ls -l build/test/output/
total 48
-rw-r--r-- 1 mshuler mshuler 46550 May 22 16:58 
TEST-org.apache.cassandra.db.KeyspaceTest.xml

=

(trunk)mshuler@hana:~/git/cassandra$ ant test-compression 
-Dtest.name=KeyspaceTest
...
[junit] Testsuite: org.apache.cassandra.db.KeyspaceTest
[junit] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
3.205 sec
[junit]

(trunk)mshuler@hana:~/git/cassandra$ ls -l build/test/output/
total 48
-rw-r--r-- 1 mshuler mshuler 48562 May 22 16:59 
TEST-org.apache.cassandra.db.KeyspaceTest.xml
{noformat}

Aside from needing both test result xml output sets, ideally, the output from 
{{test-compression}} would get some sort of extra tag indicating that it was run with 
compression, to allow seeing the difference clearly. For example {{test}} 
outputs {{org.apache.cassandra.db.KeyspaceTest.testGetRowNoColumns}} and 
{{test-compression}} appends .compression so we get a test result for 
{{org.apache.cassandra.db.KeyspaceTest.testGetRowNoColumns.compression}}.

 ant test-all results incomplete when parsed
 ---

 Key: CASSANDRA-9463
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9463
 Project: Cassandra
  Issue Type: Test
Reporter: Michael Shuler
 Fix For: 3.x, 2.2.x


 trunk `ant test` - 1,196 total tests
 trunk `ant test-all` - 1,353 total tests
 `ant test-all` runs 
 test,long-test,test-compression,pig-test,test-clientutil-jar, so we should 
 be getting 1196*2 (test, test-compression) + N (long-test) + 24 (pig-test) + 
 N (test-clientutil-jar)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9403) Experiment with skipping file syncs during unit tests to reduce test time

2015-05-22 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-9403:
---
Fix Version/s: 2.2.x
   3.x

 Experiment with skipping file syncs during unit tests to reduce test time
 -

 Key: CASSANDRA-9403
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9403
 Project: Cassandra
  Issue Type: Test
Reporter: Ariel Weisberg
Assignee: Ariel Weisberg
 Fix For: 3.x, 2.2.x


 Some environments have ridiculous outliers for disk syncing. 20 seconds is 
 ridiculous.
 Unit tests aren't testing crash safety, so syncing there is a pointless exercise.
 Instead we could intercept calls to sync files and check whether it looks 
 like the sync would succeed. Check that the things are not null, mapped, 
 closed, etc. Outside of unit tests it can go straight to the regular sync 
 call.
 I would also like to have the disks for unit and dtests mounted with 
 barrier=0,noatime,nodiratime to further reduce susceptibility to outliers. We 
 aren't going to recover these nodes if they crash/restart.
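
A hedged sketch of that interception idea (names such as the system property are illustrative, not the actual patch):

{code}
import java.io.IOException;
import java.nio.channels.FileChannel;

public final class SyncGuard
{
    private static final boolean SKIP_FSYNC = Boolean.getBoolean("cassandra.test.skip_sync"); // illustrative flag name

    // Route every sync through one helper: the sanity checks run either way, so a
    // closed or null channel still fails fast in tests, but the physical fsync is
    // skipped when the test-only flag is set.
    public static void force(FileChannel channel, boolean includeMetadata) throws IOException
    {
        if (channel == null || !channel.isOpen())
            throw new IOException("sync requested on an unusable channel");
        if (!SKIP_FSYNC)
            channel.force(includeMetadata);
    }
}
{code}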



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9403) Experiment with skipping file syncs during unit tests to reduce test time

2015-05-22 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-9403:
---
Reviewer: Tyler Hobbs

 Experiment with skipping file syncs during unit tests to reduce test time
 -

 Key: CASSANDRA-9403
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9403
 Project: Cassandra
  Issue Type: Test
Reporter: Ariel Weisberg
Assignee: Ariel Weisberg
 Fix For: 3.x, 2.2.x


 Some environments have ridiculous outliers for disk syncing. 20 seconds is 
 ridiculous.
 Unit tests aren't testing crash safety, so syncing there is a pointless exercise.
 Instead we could intercept calls to sync files and check whether it looks 
 like the sync would succeed. Check that the things are not null, mapped, 
 closed, etc. Outside of unit tests it can go straight to the regular sync 
 call.
 I would also like to have the disks for unit and dtests mounted with 
 barrier=0,noatime,nodiratime to further reduce susceptibility to outliers. We 
 aren't going to recover these nodes if they crash/restart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9465) No client warning on tombstone threshold

2015-05-22 Thread Adam Holmberg (JIRA)
Adam Holmberg created CASSANDRA-9465:


 Summary: No client warning on tombstone threshold
 Key: CASSANDRA-9465
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9465
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Adam Holmberg
Priority: Minor
 Fix For: 2.2.0 rc1


It appears that a client warning is not coming back for the tombstone threshold 
case. The batch warning works.

Repro:
Create a data condition with tombstone_warn_threshold < tombstones < tombstone_failure_threshold
Query the row

Expected:
Warning in server log, warning returned to client

I'm basing this expectation on what I see 
[here|https://github.com/apache/cassandra/blob/68722e7e594d228b4bf14c8cd8cbee19b50835ec/src/java/org/apache/cassandra/db/filter/SliceQueryFilter.java#L235-L247]

Observed:
Warning in server log, no warning flag in response message.
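
For reference, a hedged sketch of how the expected warning would surface to a Java driver client once it works (assumes a driver build with native protocol v4 warning support; keyspace and table names are made up):

{code}
import java.util.List;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;

public final class TombstoneWarningCheck
{
    // Returns whatever warnings the coordinator attached to the response frame;
    // for this ticket the expectation is a non-empty list mentioning tombstones.
    public static List<String> queryAndCollectWarnings(Session session)
    {
        ResultSet rs = session.execute("SELECT * FROM ks.tombstoned_table WHERE pk = 1");
        return rs.getExecutionInfo().getWarnings();
    }
}
{code}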



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9443) UFTest UFIdentificationTest are failing in the CI environment

2015-05-22 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14556778#comment-14556778
 ] 

Ariel Weisberg commented on CASSANDRA-9443:
---

I asked [~snazy] about this and he is not surprised that they time out if run 
concurrently with other tests. I think we should move them to long-test.

 UFTest  UFIdentificationTest are failing in the CI environment
 ---

 Key: CASSANDRA-9443
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9443
 Project: Cassandra
  Issue Type: Test
  Components: Tests
Reporter: Sam Tunnicliffe
Assignee: Sam Tunnicliffe
 Fix For: 2.2.0 rc1


 These 2 tests are consistently timing out, but I'm so far unable to repro 
 locally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-9461) Error when deleting from list

2015-05-22 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani resolved CASSANDRA-9461.
---
Resolution: Invalid

 Error when deleting from list
 -

 Key: CASSANDRA-9461
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9461
 Project: Cassandra
  Issue Type: Bug
Reporter: T Jake Luciani
Assignee: Tyler Hobbs
 Attachments: listbug.txt


 I encountered this error while testing. 
 {code}
 org.apache.cassandra.exceptions.InvalidRequestException: Attempted to delete 
 an element from a list which is null
 [junit]   at 
 org.apache.cassandra.cql3.Lists$DiscarderByIndex.execute(Lists.java:511)
 [junit]   at 
 org.apache.cassandra.cql3.statements.DeleteStatement.addUpdateForKey(DeleteStatement.java:86)
 [junit]   at 
 org.apache.cassandra.cql3.statements.ModificationStatement.getMutations(ModificationStatement.java:649)
 [junit]   at 
 org.apache.cassandra.cql3.statements.ModificationStatement.executeInternal(ModificationStatement.java:614)
 [junit]   at 
 org.apache.cassandra.cql3.QueryProcessor.executeOnceInternal(QueryProcessor.java:326)
 [junit]   at 
 org.apache.cassandra.cql3.CQLTester.execute(CQLTester.java:508)
 [junit]   at 
 org.apache.cassandra.cql3.JsonTest.testFromJsonFct(JsonTest.java:362)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CASSANDRA-9461) Error when deleting from list

2015-05-22 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani reopened CASSANDRA-9461:
---

 Error when deleting from list
 -

 Key: CASSANDRA-9461
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9461
 Project: Cassandra
  Issue Type: Bug
Reporter: T Jake Luciani
Assignee: Tyler Hobbs
 Attachments: listbug.txt


 I encountered this error while testing. 
 {code}
 org.apache.cassandra.exceptions.InvalidRequestException: Attempted to delete 
 an element from a list which is null
 [junit]   at 
 org.apache.cassandra.cql3.Lists$DiscarderByIndex.execute(Lists.java:511)
 [junit]   at 
 org.apache.cassandra.cql3.statements.DeleteStatement.addUpdateForKey(DeleteStatement.java:86)
 [junit]   at 
 org.apache.cassandra.cql3.statements.ModificationStatement.getMutations(ModificationStatement.java:649)
 [junit]   at 
 org.apache.cassandra.cql3.statements.ModificationStatement.executeInternal(ModificationStatement.java:614)
 [junit]   at 
 org.apache.cassandra.cql3.QueryProcessor.executeOnceInternal(QueryProcessor.java:326)
 [junit]   at 
 org.apache.cassandra.cql3.CQLTester.execute(CQLTester.java:508)
 [junit]   at 
 org.apache.cassandra.cql3.JsonTest.testFromJsonFct(JsonTest.java:362)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9232) timestamp is considered as a reserved keyword in cqlsh completion

2015-05-22 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14556912#comment-14556912
 ] 

Tyler Hobbs commented on CASSANDRA-9232:


I think the best option is to properly expose the keywords in the driver and 
use that in cqlsh so that we can avoid maintaining multiple lists.  (The driver 
will need to stay updated for {{DESCRIBE}} statements anyway.)

I suppose we should also consider tracking the keywords by Cassandra version so 
that when we add new reserved keywords, we don't treat them as reserved in 
older Cassandra versions.  However, I don't think it's necessarily a bad thing 
to preemptively treat keywords as reserved, since users will need to deal with 
that before upgrading anyway.  So, maybe always using the newest keyword list 
is okay.  What do you think?

The rest of your cleanup looks good to me so far.

 timestamp is considered as a reserved keyword in cqlsh completion
 ---

 Key: CASSANDRA-9232
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9232
 Project: Cassandra
  Issue Type: Bug
Reporter: Michaël Figuière
Assignee: Stefania
Priority: Trivial
  Labels: cqlsh
 Fix For: 3.x, 2.1.x


 cqlsh seems to treat timestamp as a reserved keyword when used as an 
 identifier:
 {code}
 cqlsh:ks1 create table t1 (int int primary key, ascii ascii, bigint bigint, 
 blob blob, boolean boolean, date date, decimal decimal, double double, float 
 float, inet inet, text text, time time, timestamp timestamp, timeuuid 
 timeuuid, uuid uuid, varchar varchar, varint varint);
 {code}
 Leads to the following completion when building an {{INSERT}} statement:
 {code}
 cqlsh:ks1 insert into t1 (int, 
 timestamp ascii   bigint  blobboolean date
 decimal double  float   inettexttime
 timeuuiduuidvarchar varint
 {code}
 timestamp is a keyword but not a reserved one and should therefore not be 
 proposed as a quoted string. It looks like this error happens only for 
 timestamp. Not a big deal of course, but it might be worth reviewing the 
 keywords treated as reserved in cqlsh, especially with the many changes 
 introduced in 3.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7558) Document users and permissions in CQL docs

2015-05-22 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14556936#comment-14556936
 ] 

Tyler Hobbs commented on CASSANDRA-7558:


+1 with one nit that can be fixed when committing: I would mention that changes 
to permissions affect existing client sessions.

 Document users and permissions in CQL docs
 --

 Key: CASSANDRA-7558
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7558
 Project: Cassandra
  Issue Type: Task
  Components: Documentation  website
Reporter: Tyler Hobbs
Assignee: Sam Tunnicliffe
Priority: Minor
 Fix For: 2.1.x, 2.0.x, 2.2.x

 Attachments: 7558-2.0.txt, 7558-2.2.txt


 The CQL3 docs don't cover {{CREATE USER}}, {{ALTER USER}}, {{DROP USER}}, 
 {{LIST USERS}}, {{GRANT}}, or {{REVOKE}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-9461) Error when deleting from list

2015-05-22 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani resolved CASSANDRA-9461.
---
Resolution: Not A Problem

Sorry, bad test.

 Error when deleting from list
 -

 Key: CASSANDRA-9461
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9461
 Project: Cassandra
  Issue Type: Bug
Reporter: T Jake Luciani
Assignee: Tyler Hobbs
 Attachments: listbug.txt


 I encountered this error while testing. 
 {code}
 org.apache.cassandra.exceptions.InvalidRequestException: Attempted to delete 
 an element from a list which is null
 [junit]   at 
 org.apache.cassandra.cql3.Lists$DiscarderByIndex.execute(Lists.java:511)
 [junit]   at 
 org.apache.cassandra.cql3.statements.DeleteStatement.addUpdateForKey(DeleteStatement.java:86)
 [junit]   at 
 org.apache.cassandra.cql3.statements.ModificationStatement.getMutations(ModificationStatement.java:649)
 [junit]   at 
 org.apache.cassandra.cql3.statements.ModificationStatement.executeInternal(ModificationStatement.java:614)
 [junit]   at 
 org.apache.cassandra.cql3.QueryProcessor.executeOnceInternal(QueryProcessor.java:326)
 [junit]   at 
 org.apache.cassandra.cql3.CQLTester.execute(CQLTester.java:508)
 [junit]   at 
 org.apache.cassandra.cql3.JsonTest.testFromJsonFct(JsonTest.java:362)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9403) Experiment with skipping file syncs during unit tests to reduce test time

2015-05-22 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14556918#comment-14556918
 ] 

Tyler Hobbs commented on CASSANDRA-9403:


Currently waiting on 2.2 test results:
* 
http://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-C-9403-2.2-dtest/
* 
http://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-C-9403-2.2-testall/

 Experiment with skipping file syncs during unit tests to reduce test time
 -

 Key: CASSANDRA-9403
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9403
 Project: Cassandra
  Issue Type: Test
Reporter: Ariel Weisberg
Assignee: Ariel Weisberg
 Fix For: 3.x, 2.2.x


 Some environments have ridiculous outliers for disk syncing. 20 seconds is 
 ridiculous.
 Unit tests aren't testing crash safety, so syncing there is a pointless exercise.
 Instead we could intercept calls to sync files and check whether it looks 
 like the sync would succeed. Check that the things are not null, mapped, 
 closed, etc. Outside of unit tests it can go straight to the regular sync 
 call.
 I would also like to have the disks for unit and dtests mounted with 
 barrier=0,noatime,nodiratime to further reduce susceptibility to outliers. We 
 aren't going to recover these nodes if they crash/restart.
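
As a rough illustration of the approach described above (intercept the sync call during tests, validate the handle, skip the expensive barrier), here is a Python monkeypatching sketch; the real change would live in Cassandra's Java I/O layer, so this is only an analogy:
{code}
# Illustration only: skip the real fsync while under test, but still fail fast
# on obviously invalid handles (closed/bad file descriptors).
import os

_real_fsync = os.fsync

def _fake_fsync(fd):
    os.fstat(fd)      # raises OSError for a closed or invalid descriptor
    # Skip the expensive durability barrier; crash safety is not under test.

def enable_fast_sync_for_tests():
    os.fsync = _fake_fsync

def restore_real_sync():
    os.fsync = _real_fsync
{code}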



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9224) Figure out a better default float precision rule for cqlsh

2015-05-22 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-9224:
---
Attachment: 9224-2.1.txt

 Figure out a better default float precision rule for cqlsh
 --

 Key: CASSANDRA-9224
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9224
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Tyler Hobbs
Assignee: Stefania
  Labels: cqlsh
 Fix For: 3.x, 2.1.x, 2.2.x

 Attachments: 9224-2.1.txt


 We currently use a {{DEFAULT_FLOAT_PRECISION}} of 5 in cqlsh with formatting 
 {{'%.*g' % (float_precision, val)}}.  In practice, this is way too low.  For 
 example, 12345.5 will show up as 12346.  Since the float precision is used 
 for cqlsh's COPY TO, it's particularly important that we maintain as much 
 precision as is practical by default.
 There are some other tricky considerations, though.  If the precision is too 
 high, python will do something like this:
 {noformat}
  >>> '%.25g' % (12345.,)
 '12345.555474711582'
 {noformat}
 That's not terrible, but it would be nice to avoid if we can.
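
To make the trade-off concrete, a small Python illustration of both failure modes and of the widen-the-precision idea (this mirrors the general approach, not necessarily the final patch):
{code}
# Demonstrates the default-precision problem and the "widen by the integer digits" idea.
import math

print('%.5g' % 12345.5)        # '12346': five significant digits drop the fraction
print('%.25g' % 12345.55555)   # prints many digits, exposing binary rounding noise

def widened(val, float_precision=5):
    # If %g would use fixed notation, widen the precision by the number of
    # digits to the left of the decimal point so the fraction survives.
    exponent = int(math.log10(abs(val))) if val else 0
    if -4 <= exponent < float_precision:
        float_precision = float_precision + exponent + 1
    return '%.*g' % (float_precision, val)

print(widened(12345.5))        # '12345.5'
{code}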



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9465) No client warning on tombstone threshold

2015-05-22 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-9465:
-
Fix Version/s: (was: 2.2.0 rc1)
   2.2.x

 No client warning on tombstone threshold
 

 Key: CASSANDRA-9465
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9465
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Adam Holmberg
Priority: Minor
 Fix For: 2.2.x


 It appears that a client warning is not coming back for the tombstone 
 threshold case. The batch warning works.
 Repro:
 Create a data condition with tombstone_warn_threshold < tombstones < 
 tombstone_failure_threshold
 Query the row
 Expected:
 Warning in server log, warning returned to client
 I'm basing this expectation on what I see 
 [here|https://github.com/apache/cassandra/blob/68722e7e594d228b4bf14c8cd8cbee19b50835ec/src/java/org/apache/cassandra/db/filter/SliceQueryFilter.java#L235-L247]
 Observed:
 Warning in server log, no warning flag in response message.
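
A hedged repro sketch of the steps above, assuming a single local node with the default cassandra.yaml thresholds (warn at 1000 tombstones, fail at 100000); the {{warnings}} attribute on the response future is an assumption about the client driver, not something confirmed by this ticket:
{code}
# Sketch: build a partition with more than tombstone_warn_threshold tombstones
# (1000 by default) but fewer than tombstone_failure_threshold (100000), then read it.
from cassandra.cluster import Cluster

session = Cluster(['127.0.0.1']).connect()
session.execute("CREATE KEYSPACE IF NOT EXISTS ks WITH replication = "
                "{'class': 'SimpleStrategy', 'replication_factor': 1}")
session.execute("CREATE TABLE IF NOT EXISTS ks.tw (k int, c int, v int, PRIMARY KEY (k, c))")

for c in range(2000):
    session.execute("INSERT INTO ks.tw (k, c, v) VALUES (0, %s, 0)", (c,))
    session.execute("DELETE FROM ks.tw WHERE k = 0 AND c = %s", (c,))   # one tombstone per row

future = session.execute_async("SELECT * FROM ks.tw WHERE k = 0")
future.result()
# Assumption: the driver surfaces protocol-level warnings on the future; the
# ticket reports that no warning flag is set in the response at all.
print(getattr(future, 'warnings', None))
{code}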



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8234) CTAS for COPY

2015-05-22 Thread Evan Chan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14556978#comment-14556978
 ] 

Evan Chan commented on CASSANDRA-8234:
--

[~schumacr]  I would recommend integrating Spark HiveContext/HiveMetadata 
support into DSE, if it's not there already.  Then you tell the Oracle DBA:

1. Fire up DSE
2. Go to Spark/Hive shell, and type in COPY (SELECT * FROM ...) TO 


 CTAS for COPY
 -

 Key: CASSANDRA-8234
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8234
 Project: Cassandra
  Issue Type: New Feature
  Components: Tools
Reporter: Robin Schumacher
 Fix For: 3.x


 Continuous request from users is the ability to do CREATE TABLE AS SELECT... 
 The COPY command can be enhanced to perform simple and customized copies of 
 existing tables to satisfy the need. 
 - Simple copy is COPY table a TO new table b.
 - Custom copy can mimic Postgres: (e.g. COPY (SELECT * FROM country WHERE 
 country_name LIKE 'A%') TO …)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-9461) Error when deleting from list

2015-05-22 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs reassigned CASSANDRA-9461:
--

Assignee: Tyler Hobbs

 Error when deleting from list
 -

 Key: CASSANDRA-9461
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9461
 Project: Cassandra
  Issue Type: Bug
Reporter: T Jake Luciani
Assignee: Tyler Hobbs
 Attachments: listbug.txt


 I encountered this error while testing. 
 {code}
 org.apache.cassandra.exceptions.InvalidRequestException: Attempted to delete 
 an element from a list which is null
 [junit]   at 
 org.apache.cassandra.cql3.Lists$DiscarderByIndex.execute(Lists.java:511)
 [junit]   at 
 org.apache.cassandra.cql3.statements.DeleteStatement.addUpdateForKey(DeleteStatement.java:86)
 [junit]   at 
 org.apache.cassandra.cql3.statements.ModificationStatement.getMutations(ModificationStatement.java:649)
 [junit]   at 
 org.apache.cassandra.cql3.statements.ModificationStatement.executeInternal(ModificationStatement.java:614)
 [junit]   at 
 org.apache.cassandra.cql3.QueryProcessor.executeOnceInternal(QueryProcessor.java:326)
 [junit]   at 
 org.apache.cassandra.cql3.CQLTester.execute(CQLTester.java:508)
 [junit]   at 
 org.apache.cassandra.cql3.JsonTest.testFromJsonFct(JsonTest.java:362)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9462) ViewTest.sstableInBounds is failing

2015-05-22 Thread Benedict (JIRA)
Benedict created CASSANDRA-9462:
---

 Summary: ViewTest.sstableInBounds is failing
 Key: CASSANDRA-9462
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9462
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Benedict
 Fix For: 3.x, 2.1.x, 2.2.x


CASSANDRA-8568 introduced new tests to cover what was DataTracker functionality 
in 2.1, and is now covered by the lifecycle package. This particular test 
indicates this method does not fulfil the expected contract, namely that more 
sstables are returned than should be.

However while looking into it I noticed it also likely has a bug (which I have 
not updated the test to cover) wherein a wrapped range will only yield the 
portion at the end of the token range, not the beginning. It looks like we may 
have call sites using this function that do not realise this, so it could be a 
serious bug, especially for repair.
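
To illustrate the bug class independently of the sstableInBounds code: a wrapped range such as (800, 100] must be split into two unwrapped pieces before any interval check, otherwise only the tail of the token ring is considered. A toy Python sketch (assuming a made-up token space of 0..999):
{code}
# Toy illustration of unwrapping a wrapped token range; not the Cassandra code.
def unwrap(start, end, min_token=0, max_token=999):
    """A range (start, end] wraps when start >= end; split it into two pieces."""
    if start < end:
        return [(start, end)]
    return [(start, max_token), (min_token - 1, end)]   # tail of the ring + wrapped head

def in_bounds(pieces, lo, hi):
    # An sstable covering tokens (lo, hi] intersects the range if it touches any piece.
    return any(hi > s and lo < e for s, e in pieces)

pieces = unwrap(800, 100)
print(pieces)                     # [(800, 999), (-1, 100)]
print(in_bounds(pieces, 10, 50))  # True; missed if only the (800, 999] tail is used
{code}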



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: cqlsh: Improve default float precision behavior

2015-05-22 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 744db7014 -> 7f855d113


cqlsh: Improve default float precision behavior

Patch by Stefania Alborghetti; reviewed by Tyler Hobbs for
CASSANDRA-9224


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7f855d11
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7f855d11
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7f855d11

Branch: refs/heads/cassandra-2.1
Commit: 7f855d113bef60808dd55735e70ec86646582de1
Parents: 744db70
Author: Stefania Alborghetti stefania.alborghe...@datastax.com
Authored: Fri May 22 17:38:46 2015 -0500
Committer: Tyler Hobbs tylerlho...@gmail.com
Committed: Fri May 22 17:38:46 2015 -0500

--
 CHANGES.txt  | 1 +
 pylib/cqlshlib/formatting.py | 8 
 2 files changed, 9 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7f855d11/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index ca12522..a4430c0 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.6
+ * (cqlsh) Better float precision by default (CASSANDRA-9224)
  * Improve estimated row count (CASSANDRA-9107)
  * Optimize range tombstone memory footprint (CASSANDRA-8603)
  * Use configured gcgs in anticompaction (CASSANDRA-9397)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7f855d11/pylib/cqlshlib/formatting.py
--
diff --git a/pylib/cqlshlib/formatting.py b/pylib/cqlshlib/formatting.py
index e9d22fd..2a99e23 100644
--- a/pylib/cqlshlib/formatting.py
+++ b/pylib/cqlshlib/formatting.py
@@ -14,6 +14,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
+import sys
 import re
 import time
 import calendar
@@ -144,6 +145,13 @@ def format_floating_point_type(val, colormap, 
float_precision, **_):
 elif math.isinf(val):
 bval = 'Infinity'
 else:
+exponent = int(math.log10(abs(val))) if abs(val) > 
sys.float_info.epsilon else -sys.maxint - 1
+if -4 <= exponent < float_precision:
+# when this is true %g will not use scientific notation,
+# increasing precision should not change this decision
+# so we increase the precision to take into account the
+# digits to the left of the decimal point
+float_precision = float_precision + exponent + 1
 bval = '%.*g' % (float_precision, val)
 return colorme(bval, colormap, 'float')
 



[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-05-22 Thread tylerhobbs
Merge branch 'cassandra-2.1' into cassandra-2.2

Conflicts:
CHANGES.txt
pylib/cqlshlib/formatting.py


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/321f5e82
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/321f5e82
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/321f5e82

Branch: refs/heads/cassandra-2.2
Commit: 321f5e82f3083927d642416f1f51e54476225437
Parents: 4900538 7f855d1
Author: Tyler Hobbs tylerlho...@gmail.com
Authored: Fri May 22 17:40:58 2015 -0500
Committer: Tyler Hobbs tylerlho...@gmail.com
Committed: Fri May 22 17:40:58 2015 -0500

--
 CHANGES.txt  | 1 +
 pylib/cqlshlib/formatting.py | 9 -
 2 files changed, 9 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/321f5e82/CHANGES.txt
--
diff --cc CHANGES.txt
index ca87385,a4430c0..d4a8150
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,10 -1,5 +1,11 @@@
 -2.1.6
 +2.2
 + * Extend Transactional API to sstable lifecycle management (CASSANDRA-8568)
 + * (cqlsh) Add support for native protocol 4 (CASSANDRA-9399)
 + * Ensure that UDF and UDAs are keyspace-isolated (CASSANDRA-9409)
 + * Revert CASSANDRA-7807 (tracing completion client notifications) 
(CASSANDRA-9429)
 + * Add ability to stop compaction by ID (CASSANDRA-7207)
 +Merged from 2.1:
+  * (cqlsh) Better float precision by default (CASSANDRA-9224)
   * Improve estimated row count (CASSANDRA-9107)
   * Optimize range tombstone memory footprint (CASSANDRA-8603)
   * Use configured gcgs in anticompaction (CASSANDRA-9397)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/321f5e82/pylib/cqlshlib/formatting.py
--
diff --cc pylib/cqlshlib/formatting.py
index 2310fa9,2a99e23..c0c3163
--- a/pylib/cqlshlib/formatting.py
+++ b/pylib/cqlshlib/formatting.py
@@@ -14,11 -14,11 +14,11 @@@
  # See the License for the specific language governing permissions and
  # limitations under the License.
  
 -import sys
 -import re
 -import time
  import calendar
  import math
 +import re
- import time
 +import sys
++import time
  from collections import defaultdict
  from . import wcwidth
  from .displaying import colorme, FormattedValue, DEFAULT_VALUE_COLORS



[jira] [Resolved] (CASSANDRA-9410) Fix? jar conflict with ecj

2015-05-22 Thread Dave Brosius (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dave Brosius resolved CASSANDRA-9410.
-
Resolution: Fixed

committed to cassandra-2.2 as commit bb1f1310478eab19111d17ac5509bce498d98743

 Fix? jar conflict with ecj
 --

 Key: CASSANDRA-9410
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9410
 Project: Cassandra
  Issue Type: Bug
  Components: Core, Hadoop
Reporter: Dave Brosius
Assignee: Dave Brosius
Priority: Trivial
 Fix For: 2.2.0 rc1

 Attachments: ecj_conflict.txt


 hadoop-core pulls in core-3.1.1.jar which is an older version of ecj-4.4.2 
 which is now used directly for UDFs. Thus there are package/class conflicts 
 between the two, when both are present on the classpath (at present)
 Made changes to remove the older core-3.1.1 dependency, and now the hadoop 
 code relies on ecj-4.4.2. The code compiles, but I'm not sure what needs to be done 
 to validate that hadoop still works properly now that it relies only on ecj-4.4.2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9306) Test coverage for cqlsh COPY

2015-05-22 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14555878#comment-14555878
 ] 

Stefania commented on CASSANDRA-9306:
-

Also, I'm guessing the following failures are caused by CqlshCopyTest setup and 
teardown class methods. 

On a branch based on 2.1, I get [this 
failure|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-9232-2.1-dtest/lastCompletedBuild/testReport/replace_address_test/TestReplaceAddress/replace_stopped_node_test/]
 which I can reproduce reliably on my box (cassandra-2.1 or cassandra-2.2 plain 
branches) by launching {{nosetests -s cqlsh_tests}}:

{code}
==
ERROR: test_eat_glass (cqlsh_tests.TestCqlsh)
--
Traceback (most recent call last):
  File /home/stefania/git/cstar/cassandra-dtest/cqlsh_tests/cqlsh_tests.py, 
line 195, in test_eat_glass
 'I can eat glass and it does not hurt me' : binascii.a2b_hex("FEEB")
  File /home/stefania/git/cstar/cassandra-dtest/cqlsh_tests/cqlsh_tests.py, 
line 170, in verify_varcharmap
 rows = cursor.execute((u"SELECT %s FROM testks.varcharmaptable WHERE 
 varcharkey= '᚛᚛ᚉᚑᚅᚔᚉᚉᚔᚋ ᚔᚈᚔ ᚍᚂᚐᚅᚑ ᚅᚔᚋᚌᚓᚅᚐ᚜';" % map_name).encode("utf-8"))
  File build/bdist.linux-x86_64/egg/cassandra/cluster.py, line 1550, in 
execute
result = future.result(timeout)
  File build/bdist.linux-x86_64/egg/cassandra/cluster.py, line 3249, in result
raise self._final_exception
TypeError: unbound method deserialize() must be called with CqlshCopyTest 
instance as first argument (got str instance instead)
-------------------- >> begin captured logging << --------------------
dtest: DEBUG: cluster ccm directory: /tmp/dtest-mI4GRt
-------------------- >> end captured logging << --------------------
{code}

The python version is 2.7.6.

 Test coverage for cqlsh COPY
 

 Key: CASSANDRA-9306
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9306
 Project: Cassandra
  Issue Type: Test
  Components: Core
Reporter: Tyler Hobbs
Assignee: Jim Witschey
  Labels: cqlsh
 Fix For: 2.1.6, 2.0.16, 2.2.0 rc1


 We need much more thorough test coverage for cqlsh's COPY TO/FROM commands.  
 There is one existing basic dtest ({{cqlsh_tests.py:TestCqlsh.test_copy_to}}) 
 that we can use as a starting point for new tests.
 The following things need to be tested:
 * Non-default delimiters
 * Null fields and non-default null markers
 * Skipping a header line
 * Explicit column ordering
 * Column names that need to be quoted
 * Every supported C* data type
 * Data that fails validation server-side
 * Wrong number of columns
 * Node going down during COPY operation
 In the non-failure cases, the tests should generally inserted data into 
 Cassandra, run COPY TO to dump the data to CSV, truncate, run COPY FROM to 
 reimport the data, and then verify that the reloaded data matches the 
 originally inserted data.
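
For the non-failure cases, the round trip could look roughly like the sketch below (plain driver plus a cqlsh subprocess; the table name, file name and row count are placeholders, not the final dtest code):
{code}
# Sketch of insert -> COPY TO -> TRUNCATE -> COPY FROM -> verify. Assumes a local
# node, cqlsh on PATH, and a pre-created table ks.t (k int PRIMARY KEY, v text).
import subprocess
from cassandra.cluster import Cluster

session = Cluster(['127.0.0.1']).connect()
original = [(i, 'value-%d' % i) for i in range(100)]
for k, v in original:
    session.execute("INSERT INTO ks.t (k, v) VALUES (%s, %s)", (k, v))

subprocess.check_call(['cqlsh', '-e', "COPY ks.t TO 'dump.csv'"])
session.execute("TRUNCATE ks.t")
subprocess.check_call(['cqlsh', '-e', "COPY ks.t FROM 'dump.csv'"])

reloaded = sorted((row.k, row.v) for row in session.execute("SELECT k, v FROM ks.t"))
assert reloaded == sorted(original)
{code}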



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9449) Running ALTER TABLE cql statement asynchronously results in failure

2015-05-22 Thread Paul Praet (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14555793#comment-14555793
 ] 

Paul Praet edited comment on CASSANDRA-9449 at 5/22/15 11:18 AM:
-

1) Yes:
{code}
cqlsh> DESCRIBE TABLE wifidoctor.device;

CREATE TABLE wifidoctor.device (
columna text,
columnb text,
columnc timestamp,
columnd text,
columne text,
columnf text,
columng text,
columnh text,
PRIMARY KEY ((columna, columnb))
) WITH bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'min_threshold': '4', 'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';

cqlsh> INSERT INTO wifidoctor.device (columnA, columnB, 
columnC,columnD,columnE,columnF,columnG,columnH) VALUES 
('a','','2015-01-01','','','','','');
InvalidRequest: code=2200 [Invalid query] message="Unknown identifier columne"

{code}

2) yes, it does. After the restart, the INSERT query works.


was (Author: praetp):
1) Yes:
{code}
CREATE TABLE wifidoctor.device (
columna text,
columnb text,
columnc timestamp,
columnd text,
columne text,
columnf text,
columng text,
columnh text,
PRIMARY KEY ((columna, columnb))
) WITH bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'min_threshold': '4', 'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';

cqlsh> INSERT INTO wifidoctor.device (columnA, columnB, 
columnC,columnD,columnE,columnF,columnG,columnH) VALUES 
('a','','2015-01-01','','','','','');
InvalidRequest: code=2200 [Invalid query] message="Unknown identifier columne"

{code}

2) yes, it does. After the restart, the INSERT query works.

 Running ALTER TABLE cql statement asynchronously results in failure
 ---

 Key: CASSANDRA-9449
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9449
 Project: Cassandra
  Issue Type: Bug
 Environment: Singe cluster environment
Reporter: Paul Praet

 When running 'ALTER TABLE' cql statements asynchronously, we notice that 
 often certain columns are missing, causing subsequent queries to fail.
 The code snippet below can be used to reproduce the problem.
 cassandra is a com.datastax.driver.core.Session reference.
 We construct the table synchronously and then alter it (adding five columns) 
 with the cassandra async API. We synchronize to ensure the table is properly 
 altered before continuing. Preparing the statement at the bottom of the code 
 snippet often fails:
 {noformat} com.datastax.driver.core.exceptions.InvalidQueryException: Unknown 
 identifier columnf {noformat}
 {code}
  @Test
 public void testCassandraAsyncAlterTable() throws Exception {
     ResultSet rs = cassandra.execute("CREATE TABLE device ( columnA text, columnB text, columnC timestamp, PRIMARY KEY ((columnA, columnB)));");
     List<ResultSetFuture> futures = new ArrayList<>();
     futures.add(cassandra.executeAsync("ALTER TABLE device ADD columnD text;"));
     futures.add(cassandra.executeAsync("ALTER TABLE device ADD columnE text;"));
     futures.add(cassandra.executeAsync("ALTER TABLE device ADD columnF text;"));
     futures.add(cassandra.executeAsync("ALTER TABLE device ADD columnG text;"));
     futures.add(cassandra.executeAsync("ALTER TABLE device ADD columnH text;"));
     for (ResultSetFuture resultfuture : futures) { resultfuture.get(); }

     /* discard the result; only interested to see if it works or not */
     cassandra.prepare("INSERT INTO device (columnA, columnB, columnC, columnD, columnE, columnF, columnG, columnH) VALUES (?,?,?,?,?,?,?,?);");
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CASSANDRA-9132) resumable_bootstrap_test can hang

2015-05-22 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita reopened CASSANDRA-9132:
---

 resumable_bootstrap_test can hang
 -

 Key: CASSANDRA-9132
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9132
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Tyler Hobbs
Assignee: Yuki Morishita
 Fix For: 2.2.0 beta 1, 2.0.15, 2.1.6

 Attachments: 9132-2.0.txt


 The {{bootstrap_test.TestBootstrap.resumable_bootstrap_test}} can hang 
 sometimes.  It looks like the following line never completes:
 {noformat}
 node3.watch_log_for("Listening for thrift clients...")
 {noformat}
 I'm not familiar enough with the recent bootstrap changes to know why that's 
 not happening.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9455) Rely on TCP keepalive vs failure detector for streaming connections

2015-05-22 Thread Omid Aladini (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omid Aladini updated CASSANDRA-9455:

Attachment: 9455.txt

 Rely on TCP keepalive vs failure detector for streaming connections
 ---

 Key: CASSANDRA-9455
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9455
 Project: Cassandra
  Issue Type: New Feature
Reporter: Omid Aladini
Assignee: Omid Aladini
 Fix For: 2.0.16

 Attachments: 9455.txt


 The patch applies the streaming-related parts of CASSANDRA-3569 into the 
 current 2.0. The rest is already backported in CASSANDRA-7560.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9440) Bootstrap fails without any hint of prior stream failure

2015-05-22 Thread Omid Aladini (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14555935#comment-14555935
 ] 

Omid Aladini commented on CASSANDRA-9440:
-

[~rkuris] Possibly related, thank you.

 Bootstrap fails without any hint of prior stream failure
 

 Key: CASSANDRA-9440
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9440
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cassandra 2.0.14
 2 DCs on EC2
Reporter: Omid Aladini

 I'm working on a cluster running Cassandra 2.0.14 and the bootstrap fails but 
 there is no prior hint of failed streams:
 {code}
  WARN [StreamReceiveTask:177] 2015-05-20 04:20:55,251 StreamResultFuture.java 
 (line 215) [Stream #0b42c640-fe03-11e4-8a6f-dd5dc9b30af4] Stream failed
 ERROR [main] 2015-05-20 04:20:55,252 CassandraDaemon.java (line 584) 
 Exception encountered during startup
 java.lang.RuntimeException: Error during boostrap: Stream failed
 at 
 org.apache.cassandra.dht.BootStrapper.bootstrap(BootStrapper.java:86)
 at 
 org.apache.cassandra.service.StorageService.bootstrap(StorageService.java:1005)
 at 
 org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:808)
 at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:621)
 at 
 org.apache.cassandra.service.StorageService.initServer(StorageService.java:510)
 at 
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:437)
 at 
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:567)
 at 
 org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:656)
 Caused by: org.apache.cassandra.streaming.StreamException: Stream failed
 at 
 org.apache.cassandra.streaming.management.StreamEventJMXNotifier.onFailure(StreamEventJMXNotifier.java:85)
 at com.google.common.util.concurrent.Futures$4.run(Futures.java:1160)
 at 
 com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
 at 
 com.google.common.util.concurrent.ExecutionList.executeListener(ExecutionList.java:156)
 at 
 com.google.common.util.concurrent.ExecutionList.execute(ExecutionList.java:145)
 at 
 com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:202)
 at 
 org.apache.cassandra.streaming.StreamResultFuture.maybeComplete(StreamResultFuture.java:216)
 at 
 org.apache.cassandra.streaming.StreamResultFuture.handleSessionComplete(StreamResultFuture.java:191)
 at 
 org.apache.cassandra.streaming.StreamSession.closeSession(StreamSession.java:377)
 at 
 org.apache.cassandra.streaming.StreamSession.maybeCompleted(StreamSession.java:662)
 at 
 org.apache.cassandra.streaming.StreamSession.taskCompleted(StreamSession.java:613)
 at 
 org.apache.cassandra.streaming.StreamReceiveTask$OnCompletionRunnable.run(StreamReceiveTask.java:143)
 at 
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)
  INFO [StorageServiceShutdownHook] 2015-05-20 04:20:55,286 Gossiper.java 
 (line 1330) Announcing shutdown
 {code}
 There are no WARN or ERROR prior to this in the log files of the 
 bootstrapping node or other nodes in the cluster. The only relevant log lines are 
 "Session with 11.22.33.44/11.22.33.44 is complete".
 Is it possible that individual stream sessions fail silently? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9455) Rely on TCP keepalive vs failure detector for streaming connections

2015-05-22 Thread Omid Aladini (JIRA)
Omid Aladini created CASSANDRA-9455:
---

 Summary: Rely on TCP keepalive vs failure detector for streaming 
connections
 Key: CASSANDRA-9455
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9455
 Project: Cassandra
  Issue Type: New Feature
Reporter: Omid Aladini
Assignee: Omid Aladini
 Fix For: 2.0.16


The patch applies the streaming-related parts of CASSANDRA-3569 into the 
current 2.0. The rest is already backported in CASSANDRA-7560.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9456) while starting cassandra using cassandra -f ; encountered an errror ERROR 11:46:42 Exception in thread Thread[MemtableFlushWriter:1,5,main]

2015-05-22 Thread naresh (JIRA)
naresh created CASSANDRA-9456:
-

 Summary: while starting cassandra using cassandra -f ; encountered 
an errror ERROR 11:46:42 Exception in thread 
Thread[MemtableFlushWriter:1,5,main]
 Key: CASSANDRA-9456
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9456
 Project: Cassandra
  Issue Type: Bug
  Components: Config
 Environment: ubuntu14.10
Reporter: naresh
 Fix For: 3.x


I am using openjdk7 and built cassandra successfully using source code from 
github (https://github.com/apache/cassandra.git).
 
After that succeeded, I set the path for cassandra 
and tried to start it using cassandra -f.

Below is the error encountered during startup:
INFO  11:46:42 Token metadata:
INFO  11:46:42 Enqueuing flush of local: 653 (0%) on-heap, 0 (0%) off-heap
INFO  11:46:42 Writing Memtable-local@1257824677(110 serialized bytes, 3 ops, 
0%/0% of on/off-heap limit)
ERROR 11:46:42 Exception in thread Thread[MemtableFlushWriter:1,5,main]
java.lang.NoClassDefFoundError: Could not initialize class com.sun.jna.Native
at 
org.apache.cassandra.utils.memory.MemoryUtil.allocate(MemoryUtil.java:82) 
~[main/:na]
at org.apache.cassandra.io.util.Memory.init(Memory.java:74) 
~[main/:na]
at org.apache.cassandra.io.util.SafeMemory.init(SafeMemory.java:32) 
~[main/:na]
at 
org.apache.cassandra.io.compress.CompressionMetadata$Writer.init(CompressionMetadata.java:274)
 ~[main/:na]
at 
org.apache.cassandra.io.compress.CompressionMetadata$Writer.open(CompressionMetadata.java:288)
 ~[main/:na]
at 
org.apache.cassandra.io.compress.CompressedSequentialWriter.init(CompressedSequentialWriter.java:75)
 ~[main/:na]
at 
org.apache.cassandra.io.util.SequentialWriter.open(SequentialWriter.java:168) 
~[main/:na]
at 
org.apache.cassandra.io.sstable.format.big.BigTableWriter.init(BigTableWriter.java:74)
 ~[main/:na]
at 
org.apache.cassandra.io.sstable.format.big.BigFormat$WriterFactory.open(BigFormat.java:107)
 ~[main/:na]
at 
org.apache.cassandra.io.sstable.format.SSTableWriter.create(SSTableWriter.java:84)
 ~[main/:na]
at 
org.apache.cassandra.db.Memtable$FlushRunnable.createFlushWriter(Memtable.java:396)
 ~[main/:na]
at 
org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:343)
 ~[main/:na]
at 
org.apache.cassandra.db.Memtable$FlushRunnable.runMayThrow(Memtable.java:328) 
~[main/:na]
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
~[main/:na]
at 
com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
 ~[guava-16.0.jar:na]
at 
org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1085)
 ~[main/:na]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
~[na:1.7.0_79]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
~[na:1.7.0_79]
at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_79]




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9132) resumable_bootstrap_test can hang

2015-05-22 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-9132:
--
Attachment: 9132-followup-2.0.txt

From CASSANDRA-9444, there is another case where streaming can get stuck: when 
the thrown IOException is wrapped in an IOError.
The previous patch didn't handle that case, so I'm attaching a follow-up.

With the patch, resumable_bootstrap_test passed 5 consecutive runs without failure.

http://cassci.datastax.com/job/yukim-9132-2-2.2-dtest/5/testReport/bootstrap_test/TestBootstrap/resumable_bootstrap_test/history/

 resumable_bootstrap_test can hang
 -

 Key: CASSANDRA-9132
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9132
 Project: Cassandra
  Issue Type: Bug
  Components: Tests
Reporter: Tyler Hobbs
Assignee: Yuki Morishita
 Fix For: 2.2.0 beta 1, 2.0.15, 2.1.6

 Attachments: 9132-2.0.txt, 9132-followup-2.0.txt


 The {{bootstrap_test.TestBootstrap.resumable_bootstrap_test}} can hang 
 sometimes.  It looks like the following line never completes:
 {noformat}
 node3.watch_log_for("Listening for thrift clients...")
 {noformat}
 I'm not familiar enough with the recent bootstrap changes to know why that's 
 not happening.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-3569) Failure detector downs should not break streams

2015-05-22 Thread Omid Aladini (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-3569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14555906#comment-14555906
 ] 

Omid Aladini commented on CASSANDRA-3569:
-

[~yukim]: Here CASSANDRA-9455

 Failure detector downs should not break streams
 ---

 Key: CASSANDRA-3569
 URL: https://issues.apache.org/jira/browse/CASSANDRA-3569
 Project: Cassandra
  Issue Type: New Feature
Reporter: Peter Schuller
Assignee: Joshua McKenzie
 Fix For: 2.1.1

 Attachments: 3569-2.0.txt, 3569_v1.txt


 CASSANDRA-2433 introduced this behavior just to get repairs to don't sit 
 there waiting forever. In my opinion the correct fix to that problem is to 
 use TCP keep alive. Unfortunately the TCP keep alive period is insanely high 
 by default on a modern Linux, so just doing that is not entirely good either.
 But using the failure detector seems nonsensical to me. We have a 
 communication method, the TCP transport, that we know is used for 
 long-running processes that you don't want to be incorrectly killed for no 
 good reason, and we are using a failure detector tuned to deciding when not 
 to send real-time-sensitive requests to nodes in order to actively kill a 
 working connection.
 So, rather than add complexity with protocol based ping/pongs and such, I 
 propose that we simply just use TCP keep alive for streaming connections and 
 instruct operators of production clusters to tweak 
 net.ipv4.tcp_keepalive_{probes,intvl} as appropriate (or whatever equivalent 
 on their OS).
 I can submit the patch. Awaiting opinions.
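
 For reference, the per-socket equivalent of those sysctls looks roughly like this (Linux-specific constants, placeholder values, not a recommendation for Cassandra's code):
{code}
# Per-socket TCP keepalive tuning on Linux; values are placeholders, not recommendations.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)       # enable keepalive
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)     # idle seconds before first probe
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)    # seconds between probes
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 6)       # failed probes before the peer is declared dead
{code}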



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-9444) resumable_bootstrap_test dtest failures

2015-05-22 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita resolved CASSANDRA-9444.
---
Resolution: Fixed

The CASSANDRA-9132 fix was not enough to stop the hang.
I will attach a further fix there.

 resumable_bootstrap_test dtest failures
 ---

 Key: CASSANDRA-9444
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9444
 Project: Cassandra
  Issue Type: Test
  Components: Tests
Reporter: Tyler Hobbs
Assignee: Yuki Morishita
 Fix For: 2.2.x


 The {{bootstrap_test.TestBootstrap.resumable_bootstrap_test}} dtest is 
 experiencing occasional failures in cassci: 
 http://cassci.datastax.com/job/cassandra-2.2_dtest/lastCompletedBuild/testReport/bootstrap_test/TestBootstrap/resumable_bootstrap_test/
 It looks like the problem is that after one of the streams fail, the 
 bootstrapping node never gets ready to accept client requests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6477) Materialized Views (was: Global Indexes)

2015-05-22 Thread Jack Krupansky (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14556055#comment-14556055
 ] 

Jack Krupansky commented on CASSANDRA-6477:
---

1. Has a decision been made on refresh modes? It sounds like the focus is on 
always consistent, as opposed to manual refresh or one-time without refresh 
or on some time interval, but is that simply the default, preferred refresh 
mode, or the only mode that will be available (initially)?

2. What happens if an MV is created for a base table that is already populated? 
Will the operation block while all existing data is propagated to the MV, or 
will that propagation happen in the background (in which case, is there a way 
to monitor its status and completion?), or is that not supported (initially)?

 Materialized Views (was: Global Indexes)
 

 Key: CASSANDRA-6477
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6477
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Reporter: Jonathan Ellis
Assignee: Carl Yeksigian
  Labels: cql
 Fix For: 3.0 beta 1


 Local indexes are suitable for low-cardinality data, where spreading the 
 index across the cluster is a Good Thing.  However, for high-cardinality 
 data, local indexes require querying most nodes in the cluster even if only a 
 handful of rows is returned.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9442) Pig tests failing in 2.2 and trunk

2015-05-22 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14556111#comment-14556111
 ] 

Philip Thompson commented on CASSANDRA-9442:


Patch at 
https://github.com/apache/cassandra/compare/trunk...ptnapoleon:cassandra-9442

 Pig tests failing in 2.2 and trunk
 --

 Key: CASSANDRA-9442
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9442
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: Philip Thompson
Assignee: Philip Thompson
 Fix For: 2.2.0 rc1


 In CQLRecordWriter, we are catching and handling a certain class of 
 exception. Unfortunately, we are still setting lastException, so hadoop 
 reports that the map reduce job has errored out and failed, even though it 
 succeeded. This simple patch fixes that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9442) Pig tests failing in 2.2 and trunk

2015-05-22 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-9442:
---
Summary: Pig tests failing in 2.2 and trunk  (was: lastException being set 
incorrectly)

 Pig tests failing in 2.2 and trunk
 --

 Key: CASSANDRA-9442
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9442
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: Philip Thompson
Assignee: Philip Thompson
 Fix For: 2.2.0 rc1


 In CQLRecordWriter, we are catching and handling a certain class of 
 exception. Unfortunately, we are still setting lastException, so hadoop 
 reports that the map reduce job has errored out and failed, even though it 
 succeeded. This simple patch fixes that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9442) Pig tests failing in 2.2 and trunk

2015-05-22 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-9442:
---
Attachment: (was: 9442.txt)

 Pig tests failing in 2.2 and trunk
 --

 Key: CASSANDRA-9442
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9442
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: Philip Thompson
Assignee: Philip Thompson
 Fix For: 2.2.0 rc1


 In CQLRecordWriter, we are catching and handling a certain class of 
 exception. Unfortunately, we are still setting lastException, so hadoop 
 reports that the map reduce job has errored out and failed, even though it 
 succeeded. This simple patch fixes that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9442) lastException being set incorrectly

2015-05-22 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14556110#comment-14556110
 ] 

Philip Thompson commented on CASSANDRA-9442:


Test results from cassci
http://cassci.datastax.com/view/Dev/view/ptnapoleon/job/ptnapoleon-cassandra-9442-testall/lastCompletedBuild/testReport/

 lastException being set incorrectly
 ---

 Key: CASSANDRA-9442
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9442
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: Philip Thompson
Assignee: Philip Thompson
 Fix For: 2.2.0 rc1

 Attachments: 9442.txt


 In CQLRecordWriter, we are catching and handling a certain class of 
 exception. Unfortunately, we are still setting lastException, so hadoop 
 reports that the map reduce job has errored out and failed, even though it 
 succeeded. This simple patch fixes that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9455) Rely on TCP keepalive vs failure detector for streaming connections

2015-05-22 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14556101#comment-14556101
 ] 

Philip Thompson commented on CASSANDRA-9455:


[~JoshuaMcKenzie], you were involved with both of the linked tickets, who 
should review this?

 Rely on TCP keepalive vs failure detector for streaming connections
 ---

 Key: CASSANDRA-9455
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9455
 Project: Cassandra
  Issue Type: New Feature
Reporter: Omid Aladini
Assignee: Omid Aladini
 Fix For: 2.0.x

 Attachments: 9455.txt


 The patch applies the streaming-related parts of CASSANDRA-3569 into the 
 current 2.0. The rest is already backported in CASSANDRA-7560.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7973) cqlsh connect error member_descriptor' object is not callable

2015-05-22 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14556105#comment-14556105
 ] 

Per Otterström commented on CASSANDRA-7973:
---

I can reproduce on 2.1.5 with Python 2.6.9. On the very same host things work 
if I upgrade to Python 2.7.7.

Am I missing something that prevents my setup from working with 2.6.x?

Relevant parts from my cassandra.yaml
---
client_encryption_options:
enabled: true
keystore: /etc/cassandra/conf/.keystore
keystore_password: ***
protocol: TLSv1
cipher_suites: 
[TLS_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256]
---

cqlshrc:
---
[ssl]
certfile = /root/.cassandra/ca.cert
validate = false

---


 cqlsh connect error member_descriptor' object is not callable
 ---

 Key: CASSANDRA-7973
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7973
 Project: Cassandra
  Issue Type: Bug
 Environment: Cassandra 2.1.0
Reporter: Digant Modha
Assignee: Philip Thompson
Priority: Minor
  Labels: cqlsh, lhf
 Fix For: 2.1.x


 When using cqlsh (Cassandra 2.1.0) with ssl, python 2.6.9. I get Connection 
 error: ('Unable to connect to any servers', {...: 
 TypeError('member_descriptor' object is not callable,)}) 
 I am able to connect from another machine using python 2.7.5.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9442) Pig tests failing in 2.2 and trunk

2015-05-22 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-9442:
---
Description: 
In CQLRecordWriter, we are catching and handling a certain class of exception. 
Unfortunately, we are still setting lastException, so hadoop reports that the 
map reduce job has errored out and failed, even though it succeeded. 

We are also having an occasional issue where the driver returns a different 
interrupt related exception than what we are checking for. I have moved the 
check logic out into a separate function, and now check for both.

  was:In CQLRecordWriter, we are catching and handling a certain class of 
exception. Unfortunately, we are still setting lastException, so hadoop reports 
that the map reduce job has errored out and failed, even though it succeeded. 
This simple patch fixes that.


 Pig tests failing in 2.2 and trunk
 --

 Key: CASSANDRA-9442
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9442
 Project: Cassandra
  Issue Type: Bug
  Components: Hadoop
Reporter: Philip Thompson
Assignee: Philip Thompson
 Fix For: 2.2.0 rc1


 In CQLRecordWriter, we are catching and handling a certain class of 
 exception. Unfortunately, we are still setting lastException, so hadoop 
 reports that the map reduce job has errored out and failed, even though it 
 succeeded. 
 We are also having an occasional issue where the driver returns a different 
 interrupt related exception than what we are checking for. I have moved the 
 check logic out into a separate function, and now check for both.
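
As a generic sketch of the pattern being fixed here (an exception that is deliberately handled must not also be recorded as the task's failure); all names below are illustrative, not the actual CQLRecordWriter code:
{code}
# Generic sketch of the pattern: a deliberately handled exception must not also be
# recorded as the job's failure. All names are illustrative, not CQLRecordWriter's.
class ExpectedInterruption(Exception):
    pass

class Writer(object):
    def __init__(self):
        self.last_exception = None       # a non-None value makes the framework fail the job

    def send(self, do_write):
        try:
            do_write()
        except ExpectedInterruption:
            pass                          # handled on purpose: leave last_exception untouched
        except Exception as e:
            self.last_exception = e       # only genuine failures should fail the job
{code}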



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9457) Empty INITCOND treated as null in aggregate

2015-05-22 Thread Olivier Michallat (JIRA)
Olivier Michallat created CASSANDRA-9457:


 Summary: Empty INITCOND treated as null in aggregate
 Key: CASSANDRA-9457
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9457
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Olivier Michallat
Assignee: Robert Stupp
Priority: Minor


Given the following test data:
{code}
cqlsh:test> create table foo(k int, v int, primary key(k,v));
cqlsh:test> insert into foo(k,v) values(1,1);
cqlsh:test> insert into foo(k,v) values(1,2);
cqlsh:test> insert into foo(k,v) values(1,3);
{code}
And the following aggregate definition:
{code}
cqlsh:test> CREATE FUNCTION cat(s text, v int)
RETURNS NULL ON NULL INPUT
RETURNS text 
LANGUAGE java
AS 'return s + v;';
cqlsh:test> CREATE AGGREGATE cats(int) SFUNC cat STYPE text INITCOND '';
{code}
The following should return '123', but it returns null:
{code}
cqlsh:test> select cats(v) from foo where k = 1;

 test.cats(v)
---
{code}
The empty INITCOND is treated as null, and the SFUNC is never called.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9455) Rely on TCP keepalive vs failure detector for streaming connections

2015-05-22 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-9455:
---
Fix Version/s: (was: 2.0.16)
   2.0.x

 Rely on TCP keepalive vs failure detector for streaming connections
 ---

 Key: CASSANDRA-9455
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9455
 Project: Cassandra
  Issue Type: New Feature
Reporter: Omid Aladini
Assignee: Omid Aladini
 Fix For: 2.0.x

 Attachments: 9455.txt


 The patch applies the streaming-related parts of CASSANDRA-3569 into the 
 current 2.0. The rest is already backported in CASSANDRA-7560.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7925) TimeUUID LSB should be unique per process, not just per machine

2015-05-22 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14556174#comment-14556174
 ] 

T Jake Luciani commented on CASSANDRA-7925:
---

bq. If this server generates one, the act is synchronized to ensure no 
duplication.

There is still an open hole: if the user specifies a timestamp, it will 
collide on the server. I understand the downside of many LSBs is that compression is 
not as helpful. However, given how we plan to use timeuuids, we should ensure 
correctness.

 TimeUUID LSB should be unique per process, not just per machine
 ---

 Key: CASSANDRA-7925
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7925
 Project: Cassandra
  Issue Type: Improvement
Reporter: Peter Mädel
Assignee: T Jake Luciani
 Fix For: 2.2.x

 Attachments: cassandra-uuidgen.patch


 as pointed out in 
 [CASSANDRA-7919|https://issues.apache.org/jira/browse/CASSANDRA-7919?focusedCommentId=14132529page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14132529]
  lsb collisions are also possible serverside.
 a sufficient solution would be to include references to pid and classloader 
 within lsb.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7925) TimeUUID LSB should be unique per process, not just per machine

2015-05-22 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14556178#comment-14556178
 ] 

Sylvain Lebresne commented on CASSANDRA-7925:
-

bq. If the user specifies a time stamp though it will collide on the server

What do you mean by that? That is, what method are you referring to that would 
allow a user to provide a timestamp that would conflict with a server side uuid?

 TimeUUID LSB should be unique per process, not just per machine
 ---

 Key: CASSANDRA-7925
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7925
 Project: Cassandra
  Issue Type: Improvement
Reporter: Peter Mädel
Assignee: T Jake Luciani
 Fix For: 2.2.x

 Attachments: cassandra-uuidgen.patch


 as pointed out in 
 [CASSANDRA-7919|https://issues.apache.org/jira/browse/CASSANDRA-7919?focusedCommentId=14132529page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14132529]
  lsb collisions are also possible serverside.
 a sufficient solution would be to include references to pid and classloader 
 within lsb.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9456) while starting cassandra using cassandra -f ; encountered an errror ERROR 11:46:42 Exception in thread Thread[MemtableFlushWriter:1,5,main]

2015-05-22 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-9456:
---
Description: 
I am using openjdk7 and built cassandra successfully using source code from 
github (https://github.com/apache/cassandra.git).
 
After that succeeded, I set the path for cassandra 
and tried to start it using cassandra -f.

Below is the error encountered during startup:
{code}
INFO  11:46:42 Token metadata:
INFO  11:46:42 Enqueuing flush of local: 653 (0%) on-heap, 0 (0%) off-heap
INFO  11:46:42 Writing Memtable-local@1257824677(110 serialized bytes, 3 ops, 
0%/0% of on/off-heap limit)
ERROR 11:46:42 Exception in thread Thread[MemtableFlushWriter:1,5,main]
java.lang.NoClassDefFoundError: Could not initialize class com.sun.jna.Native
at 
org.apache.cassandra.utils.memory.MemoryUtil.allocate(MemoryUtil.java:82) 
~[main/:na]
at org.apache.cassandra.io.util.Memory.init(Memory.java:74) 
~[main/:na]
at org.apache.cassandra.io.util.SafeMemory.init(SafeMemory.java:32) 
~[main/:na]
at 
org.apache.cassandra.io.compress.CompressionMetadata$Writer.init(CompressionMetadata.java:274)
 ~[main/:na]
at 
org.apache.cassandra.io.compress.CompressionMetadata$Writer.open(CompressionMetadata.java:288)
 ~[main/:na]
at 
org.apache.cassandra.io.compress.CompressedSequentialWriter.init(CompressedSequentialWriter.java:75)
 ~[main/:na]
at 
org.apache.cassandra.io.util.SequentialWriter.open(SequentialWriter.java:168) 
~[main/:na]
at 
org.apache.cassandra.io.sstable.format.big.BigTableWriter.init(BigTableWriter.java:74)
 ~[main/:na]
at 
org.apache.cassandra.io.sstable.format.big.BigFormat$WriterFactory.open(BigFormat.java:107)
 ~[main/:na]
at 
org.apache.cassandra.io.sstable.format.SSTableWriter.create(SSTableWriter.java:84)
 ~[main/:na]
at 
org.apache.cassandra.db.Memtable$FlushRunnable.createFlushWriter(Memtable.java:396)
 ~[main/:na]
at 
org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:343)
 ~[main/:na]
at 
org.apache.cassandra.db.Memtable$FlushRunnable.runMayThrow(Memtable.java:328) 
~[main/:na]
at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
~[main/:na]
at 
com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
 ~[guava-16.0.jar:na]
at 
org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1085)
 ~[main/:na]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
~[na:1.7.0_79]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
~[na:1.7.0_79]
at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_79]
{code}

  was:
I am using openjdk7 and build cassandra successfully using source code from 
github(https://github.com/apache/cassandra.git)
 
afer successfuly I set the path for cassandra 
and tried to start using cassandra -f 

below is the errror encountered while startup
INFO  11:46:42 Token metadata:
INFO  11:46:42 Enqueuing flush of local: 653 (0%) on-heap, 0 (0%) off-heap
INFO  11:46:42 Writing Memtable-local@1257824677(110 serialized bytes, 3 ops, 
0%/0% of on/off-heap limit)
ERROR 11:46:42 Exception in thread Thread[MemtableFlushWriter:1,5,main]
java.lang.NoClassDefFoundError: Could not initialize class com.sun.jna.Native
at 
org.apache.cassandra.utils.memory.MemoryUtil.allocate(MemoryUtil.java:82) 
~[main/:na]
at org.apache.cassandra.io.util.Memory.init(Memory.java:74) 
~[main/:na]
at org.apache.cassandra.io.util.SafeMemory.init(SafeMemory.java:32) 
~[main/:na]
at 
org.apache.cassandra.io.compress.CompressionMetadata$Writer.init(CompressionMetadata.java:274)
 ~[main/:na]
at 
org.apache.cassandra.io.compress.CompressionMetadata$Writer.open(CompressionMetadata.java:288)
 ~[main/:na]
at 
org.apache.cassandra.io.compress.CompressedSequentialWriter.init(CompressedSequentialWriter.java:75)
 ~[main/:na]
at 
org.apache.cassandra.io.util.SequentialWriter.open(SequentialWriter.java:168) 
~[main/:na]
at 
org.apache.cassandra.io.sstable.format.big.BigTableWriter.init(BigTableWriter.java:74)
 ~[main/:na]
at 
org.apache.cassandra.io.sstable.format.big.BigFormat$WriterFactory.open(BigFormat.java:107)
 ~[main/:na]
at 
org.apache.cassandra.io.sstable.format.SSTableWriter.create(SSTableWriter.java:84)
 ~[main/:na]
at 
org.apache.cassandra.db.Memtable$FlushRunnable.createFlushWriter(Memtable.java:396)
 ~[main/:na]
at 
org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:343)
 ~[main/:na]
at 
org.apache.cassandra.db.Memtable$FlushRunnable.runMayThrow(Memtable.java:328) 
~[main/:na]
at 

[jira] [Commented] (CASSANDRA-7925) TimeUUID LSB should be unique per process, not just per machine

2015-05-22 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14556184#comment-14556184
 ] 

T Jake Luciani commented on CASSANDRA-7925:
---

I'm referring to the {{UUID getTimeUUID(long when)}} call.

It's currently used for paxos. 

Also, I imagine we will still allow the clients to specify timestamps (esp for 
thrift) in CASSANDRA-7919 so the conversion to uuid would be on the server.

 TimeUUID LSB should be unique per process, not just per machine
 ---

 Key: CASSANDRA-7925
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7925
 Project: Cassandra
  Issue Type: Improvement
Reporter: Peter Mädel
Assignee: T Jake Luciani
 Fix For: 2.2.x

 Attachments: cassandra-uuidgen.patch


 as pointed out in 
 [CASSANDRA-7919|https://issues.apache.org/jira/browse/CASSANDRA-7919?focusedCommentId=14132529page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14132529]
  lsb collisions are also possible serverside.
 a sufficient solution would be to include references to pid and classloader 
 within lsb.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7925) TimeUUID LSB should be unique per process, not just per machine

2015-05-22 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14556230#comment-14556230
 ] 

T Jake Luciani commented on CASSANDRA-7925:
---

bq.  but doing so means that we will need to guarantee that 2 updates with the 
same (user provided) timestamp actually do conflict

:(

bq.  It's not like we can't change this if we really need to later.

Good point. Ok then I'll update to use the classpath and PID only.

 TimeUUID LSB should be unique per process, not just per machine
 ---

 Key: CASSANDRA-7925
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7925
 Project: Cassandra
  Issue Type: Improvement
Reporter: Peter Mädel
Assignee: T Jake Luciani
 Fix For: 2.2.x

 Attachments: cassandra-uuidgen.patch


 as pointed out in 
 [CASSANDRA-7919|https://issues.apache.org/jira/browse/CASSANDRA-7919?focusedCommentId=14132529page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14132529]
  lsb collisions are also possible serverside.
 a sufficient solution would be to include references to pid and classloader 
 within lsb.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7925) TimeUUID LSB should be unique per process, not just per machine

2015-05-22 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14556211#comment-14556211
 ] 

Sylvain Lebresne commented on CASSANDRA-7925:
-

bq. It's currently used for paxos.

Right, and that's not a problem for Paxos.

bq. Also, I imagine we will still allow the clients to specify timestamps (esp 
for thrift) in CASSANDRA-7919

We certainly will want to preserve backward compatibility (both for thrift and 
CQL), but doing so means that we will need to guarantee that 2 updates with the 
same (user provided) timestamp actually *do* conflict, no matter what 
node the update hits. So in fact, we'll probably have to hardcode an LSB to use 
for all updates with a user provided timestamp. In any case, I think anticipating 
problems for CASSANDRA-7919 is a bit premature. It's not like we can't change 
this if we really need to later.

In general, I'd prefer keeping it to a fixed LSB for a given process if 
possible: it's a tad simpler, better for compression and a bit closer to the 
timeuuid RFC imo. And as of now, I think that's good enough.

 TimeUUID LSB should be unique per process, not just per machine
 ---

 Key: CASSANDRA-7925
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7925
 Project: Cassandra
  Issue Type: Improvement
Reporter: Peter Mädel
Assignee: T Jake Luciani
 Fix For: 2.2.x

 Attachments: cassandra-uuidgen.patch


 As pointed out in 
 [CASSANDRA-7919|https://issues.apache.org/jira/browse/CASSANDRA-7919?focusedCommentId=14132529page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14132529], 
 LSB collisions are also possible server-side.
 A sufficient solution would be to include references to the PID and classloader 
 within the LSB.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9459) SecondaryIndex API redesign

2015-05-22 Thread Sam Tunnicliffe (JIRA)
Sam Tunnicliffe created CASSANDRA-9459:
--

 Summary: SecondaryIndex API redesign
 Key: CASSANDRA-9459
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9459
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sam Tunnicliffe
Assignee: Sam Tunnicliffe
 Fix For: 3.0 beta 1


For some time now the index subsystem has been a pain point and in large part 
this is due to the way that the APIs and principal classes have grown 
organically over the years. It would be a good idea to conduct a wholesale 
review of the area and see if we can come up with something a bit more coherent.

A few starting points:
* There's a lot in AbstractPerColumnSecondaryIndex & its subclasses which could 
be pulled up into SecondaryIndexSearcher (note that to an extent, this is done 
in CASSANDRA-8099).
* SecondaryIndexManager is overly complex and several of its functions should be 
simplified/re-examined. The handling of which columns are indexed and index 
selection on both the read and write paths are somewhat dense and unintuitive.
* The SecondaryIndex class hierarchy is rather convoluted and could use some 
serious rework.

There are a number of outstanding tickets which we should be able to roll into 
this higher level one as subtasks (but I'll defer doing that until getting into 
the details of the redesign):

* CASSANDRA-7771
* CASSANDRA-8103
* CASSANDRA-9041
* CASSANDRA-4458
* CASSANDRA-8505

Whilst they're not hard dependencies, I propose that this be done on top of 
both CASSANDRA-8099 and CASSANDRA-6717. The former largely because the storage 
engine changes may facilitate a friendlier index API, but also because of the 
changes to SIS mentioned above. As for 6717, the changes to schema tables there 
will help facilitate CASSANDRA-7771.
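
Not a proposal, just to make "a bit more coherent" concrete: a purely 
hypothetical Java sketch of a single index contract that owns lifecycle, column 
coverage, the write-path hook and the read-path searcher in one place, instead 
of spreading them across SecondaryIndex, SecondaryIndexManager, 
AbstractPerColumnSecondaryIndex and SecondaryIndexSearcher. Every name and type 
parameter below is an invented placeholder; it is not the API this ticket will 
actually produce.

// Hypothetical illustration only: all names here are placeholders invented for
// this sketch, not the redesigned API.
public interface IndexSketch<ColumnT, RowT, QueryT, ResultT>
{
    void initialize();                      // lifecycle: called when the index is registered
    void invalidate();                      // lifecycle: called when the index is dropped

    boolean covers(ColumnT column);         // replaces scattered "is this column indexed" checks

    void onWrite(RowT row);                 // single write-path hook

    // Read path: the index decides whether it can serve the query and returns a searcher.
    Searcher<QueryT, ResultT> searcherFor(QueryT query);

    interface Searcher<Q, R>
    {
        R search(Q query);
    }
}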




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9459) SecondaryIndex API redesign

2015-05-22 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14556335#comment-14556335
 ] 

Jonathan Ellis commented on CASSANDRA-9459:
---

bq. I propose that this be done on top of both CASSANDRA-8099 and CASSANDRA-6717

(As long as you mean the newly scope-limited 6717 and not everything pulled 
into CASSANDRA-9424.)

 SecondaryIndex API redesign
 ---

 Key: CASSANDRA-9459
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9459
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sam Tunnicliffe
Assignee: Sam Tunnicliffe
 Fix For: 3.0 beta 1


 For some time now the index subsystem has been a pain point and in large part 
 this is due to the way that the APIs and principal classes have grown 
 organically over the years. It would be a good idea to conduct a wholesale 
 review of the area and see if we can come up with something a bit more 
 coherent.
 A few starting points:
 * There's a lot in AbstractPerColumnSecondaryIndex & its subclasses which 
 could be pulled up into SecondaryIndexSearcher (note that to an extent, this 
 is done in CASSANDRA-8099).
 * SecondaryIndexManager is overly complex and several of its functions should 
 be simplified/re-examined. The handling of which columns are indexed and 
 index selection on both the read and write paths are somewhat dense and 
 unintuitive.
 * The SecondaryIndex class hierarchy is rather convoluted and could use some 
 serious rework.
 There are a number of outstanding tickets which we should be able to roll 
 into this higher level one as subtasks (but I'll defer doing that until 
 getting into the details of the redesign):
 * CASSANDRA-7771
 * CASSANDRA-8103
 * CASSANDRA-9041
 * CASSANDRA-4458
 * CASSANDRA-8505
 Whilst they're not hard dependencies, I propose that this be done on top of 
 both CASSANDRA-8099 and CASSANDRA-6717. The former largely because the 
 storage engine changes may facilitate a friendlier index API, but also 
 because of the changes to SIS mentioned above. As for 6717, the changes to 
 schema tables there will help facilitate CASSANDRA-7771.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6477) Materialized Views (was: Global Indexes)

2015-05-22 Thread Carl Yeksigian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14556181#comment-14556181
 ] 

Carl Yeksigian commented on CASSANDRA-6477:
---

1. No decision has been made on whether other refresh modes will be added, but 
the focus has only been on an eventually consistent mode.
2. The build happens in the background. In the branch under development, there 
is no way to monitor the progress other than looking at the table on each node 
that stores the MV build progress.

 Materialized Views (was: Global Indexes)
 

 Key: CASSANDRA-6477
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6477
 Project: Cassandra
  Issue Type: New Feature
  Components: API, Core
Reporter: Jonathan Ellis
Assignee: Carl Yeksigian
  Labels: cql
 Fix For: 3.0 beta 1


 Local indexes are suitable for low-cardinality data, where spreading the 
 index across the cluster is a Good Thing.  However, for high-cardinality 
 data, local indexes require querying most nodes in the cluster even if only a 
 handful of rows is returned.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9160) Migrate CQL dtests to unit tests

2015-05-22 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-9160:
--
Reviewer: Sylvain Lebresne

 Migrate CQL dtests to unit tests
 

 Key: CASSANDRA-9160
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9160
 Project: Cassandra
  Issue Type: Test
Reporter: Sylvain Lebresne
Assignee: Stefania

 We have CQL tests in 2 places: dtests and unit tests. The unit tests are 
 actually somewhat better in the sense that they have the ability to test both 
 prepared and unprepared statements at the flip of a switch. It's also better 
 to have all those tests in the same place so we can improve the test 
 framework in only one place (CASSANDRA-7959, CASSANDRA-9159, etc...). So we 
 should move the CQL dtests to the unit tests (which will be a good occasion 
 to organize them better).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9459) SecondaryIndex API redesign

2015-05-22 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14556346#comment-14556346
 ] 

Sam Tunnicliffe commented on CASSANDRA-9459:


bq. (As long as you mean the newly scope-limited 6717 and not everything pulled 
into CASSANDRA-9424.)

That is exactly what I mean

 SecondaryIndex API redesign
 ---

 Key: CASSANDRA-9459
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9459
 Project: Cassandra
  Issue Type: Improvement
Reporter: Sam Tunnicliffe
Assignee: Sam Tunnicliffe
 Fix For: 3.0 beta 1


 For some time now the index subsystem has been a pain point and in large part 
 this is due to the way that the APIs and principal classes have grown 
 organically over the years. It would be a good idea to conduct a wholesale 
 review of the area and see if we can come up with something a bit more 
 coherent.
 A few starting points:
 * There's a lot in AbstractPerColumnSecondaryIndex & its subclasses which 
 could be pulled up into SecondaryIndexSearcher (note that to an extent, this 
 is done in CASSANDRA-8099).
 * SecondaryIndexManager is overly complex and several of its functions should 
 be simplified/re-examined. The handling of which columns are indexed and 
 index selection on both the read and write paths are somewhat dense and 
 unintuitive.
 * The SecondaryIndex class hierarchy is rather convoluted and could use some 
 serious rework.
 There are a number of outstanding tickets which we should be able to roll 
 into this higher level one as subtasks (but I'll defer doing that until 
 getting into the details of the redesign):
 * CASSANDRA-7771
 * CASSANDRA-8103
 * CASSANDRA-9041
 * CASSANDRA-4458
 * CASSANDRA-8505
 Whilst they're not hard dependencies, I propose that this be done on top of 
 both CASSANDRA-8099 and CASSANDRA-6717. The former largely because the 
 storage engine changes may facilitate a friendlier index API, but also 
 because of the changes to SIS mentioned above. As for 6717, the changes to 
 schema tables there will help facilitate CASSANDRA-7771.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-9458) Race condition causing StreamSession to get stuck in WAIT_COMPLETE

2015-05-22 Thread Omid Aladini (JIRA)
Omid Aladini created CASSANDRA-9458:
---

 Summary: Race condition causing StreamSession to get stuck in 
WAIT_COMPLETE
 Key: CASSANDRA-9458
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9458
 Project: Cassandra
  Issue Type: Bug
Reporter: Omid Aladini
Priority: Critical
 Fix For: 2.0.16


I think there is a race condition in StreamSession where one side of the stream 
could get stuck in WAIT_COMPLETE although both have sent COMPLETE messages. 
Consider a scenario where node B is being bootstrapped and only receives files 
during the session:

1- During a stream session A sends some files to B and B sends no files to A.
2- Once B completes the last task (receiving), StreamSession::maybeComplete is 
invoked.
3- While B is sending the COMPLETE message via StreamSession::maybeComplete, it 
also receives the COMPLETE message from A and therefore 
StreamSession::complete() is invoked.
4- Therefore both maybeComplete() and complete() functions have branched into 
the state != State.WAIT_COMPLETE case and both set the state to WAIT_COMPLETE.
5- Now B is waiting to receive COMPLETE although it's already received it and 
nothing triggers checking the state again, until it times out after 
streaming_socket_timeout_in_ms.

In the log below:

https://gist.github.com/omidaladini/003de259958ad8dfb07e

although the node has received COMPLETE, SocketTimeoutException is thrown 
after streaming_socket_timeout_in_ms (30 minutes here).
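
To make the interleaving concrete, here is a small, purely illustrative Java 
sketch (not the attached 9458-v1.txt) of one way to close the window: record 
"COMPLETE sent" and "COMPLETE received" as sticky flags and let whichever event 
arrives second perform the close, instead of both paths observing 
state != State.WAIT_COMPLETE and then waiting. The class and method names are 
invented for this sketch.

import java.util.concurrent.atomic.AtomicBoolean;

// Purely illustrative; names are invented and this is not the attached patch.
// Both completion events set a sticky flag and then re-check, so whichever of
// the two calls happens second always performs the close, even if they
// interleave the way steps 3-4 above describe.
public final class SessionCompletionSketch
{
    private final AtomicBoolean peerComplete = new AtomicBoolean(false);   // COMPLETE received from peer
    private final AtomicBoolean localComplete = new AtomicBoolean(false);  // our COMPLETE sent
    private final AtomicBoolean closed = new AtomicBoolean(false);

    // Invoked when the peer's COMPLETE message arrives (cf. StreamSession::complete()).
    public void onPeerComplete()
    {
        peerComplete.set(true);
        maybeClose();
    }

    // Invoked when our last task finishes and we send COMPLETE (cf. StreamSession::maybeComplete()).
    public void onLocalComplete()
    {
        localComplete.set(true);
        maybeClose();
    }

    private void maybeClose()
    {
        // compareAndSet guarantees the teardown runs exactly once, by whichever caller gets here last.
        if (peerComplete.get() && localComplete.get() && closed.compareAndSet(false, true))
            closeSession();
    }

    private void closeSession()
    {
        System.out.println("stream session closed");   // stand-in for the real teardown
    }
}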



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9271) IndexSummaryManagerTest.testCompactionRace times out periodically

2015-05-22 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-9271:
--
Fix Version/s: 2.1.6

 IndexSummaryManagerTest.testCompactionRace times out periodically
 -

 Key: CASSANDRA-9271
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9271
 Project: Cassandra
  Issue Type: Test
Reporter: Ariel Weisberg
Assignee: T Jake Luciani
 Fix For: 2.1.6


 The issue is that the amount of time the test takes is highly variable due to 
 it being biased towards creating a condition where the test has to retry the 
 compaction it is attempting.
 Solution is to decrease the bias by having 
 https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/ColumnFamilyStore.java#L2522
  check every millisecond instead of every 100 milliseconds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Lower sleep time to avoid timeout of IndexSummaryManagerTest.testCompactionRace

2015-05-22 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 ae0486132 -> 4362e71e5


Lower sleep time to avoid timeout of IndexSummaryManagerTest.testCompactionRace

patch by aweisberg; reviewed by tjake for CASSANDRA-9271


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4362e71e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4362e71e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4362e71e

Branch: refs/heads/cassandra-2.1
Commit: 4362e71e55295067fd8de3a3206e0ff179bfcaf9
Parents: ae04861
Author: T Jake Luciani j...@apache.org
Authored: Fri May 22 11:48:45 2015 -0400
Committer: T Jake Luciani j...@apache.org
Committed: Fri May 22 11:48:45 2015 -0400

--
 src/java/org/apache/cassandra/db/compaction/CompactionManager.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4362e71e/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index 47bd2d6..fd28ceb 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -1484,7 +1484,7 @@ public class CompactionManager implements CompactionManagerMBean
 while (System.nanoTime() - start < delay)
 {
 if (CompactionManager.instance.isCompacting(cfss))
-Uninterruptibles.sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
+Uninterruptibles.sleepUninterruptibly(1, TimeUnit.MILLISECONDS);
 else
 break;
 }



[1/3] cassandra git commit: Lower sleep time to avoid timeout of IndexSummaryManagerTest.testCompactionRace

2015-05-22 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/trunk d96a02a12 -> 491f7dc27


Lower sleep time to avoid timeout of IndexSummaryManagerTest.testCompactionRace

patch by aweisberg; reviewed by tjake for CASSANDRA-9271


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4362e71e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4362e71e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4362e71e

Branch: refs/heads/trunk
Commit: 4362e71e55295067fd8de3a3206e0ff179bfcaf9
Parents: ae04861
Author: T Jake Luciani j...@apache.org
Authored: Fri May 22 11:48:45 2015 -0400
Committer: T Jake Luciani j...@apache.org
Committed: Fri May 22 11:48:45 2015 -0400

--
 src/java/org/apache/cassandra/db/compaction/CompactionManager.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4362e71e/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index 47bd2d6..fd28ceb 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -1484,7 +1484,7 @@ public class CompactionManager implements CompactionManagerMBean
 while (System.nanoTime() - start < delay)
 {
 if (CompactionManager.instance.isCompacting(cfss))
-Uninterruptibles.sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
+Uninterruptibles.sleepUninterruptibly(1, TimeUnit.MILLISECONDS);
 else
 break;
 }



[jira] [Updated] (CASSANDRA-9458) Race condition causing StreamSession to get stuck in WAIT_COMPLETE

2015-05-22 Thread Omid Aladini (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omid Aladini updated CASSANDRA-9458:

Attachment: 9458-v1.txt

This patch may address the race condition (in case I've actually understood the 
problem correctly! :)

 Race condition causing StreamSession to get stuck in WAIT_COMPLETE
 --

 Key: CASSANDRA-9458
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9458
 Project: Cassandra
  Issue Type: Bug
Reporter: Omid Aladini
Priority: Critical
 Attachments: 9458-v1.txt


 I think there is a race condition in StreamSession where one side of the 
 stream could get stuck in WAIT_COMPLETE although both have sent COMPLETE 
 messages. Consider a scenario that node B is being bootstrapped and it only 
 receives files during the session:
 1- During a stream session A sends some files to B and B sends no files to A.
 2- Once B completes the last task (receiving), StreamSession::maybeComplete 
 is invoked.
 3- While B is sending the COMPLETE message via StreamSession::maybeComplete, 
 it also receives the COMPLETE message from A and therefore 
 StreamSession::complete() is invoked.
 4- Therefore both maybeComplete() and complete() functions have branched into 
 the state != State.WAIT_COMPLETE case and both set the state to WAIT_COMPLETE.
 5- Now B is waiting to receive COMPLETE although it's already received it and 
 nothing triggers checking the state again, until it times out after 
 streaming_socket_timeout_in_ms.
 In the log below:
 https://gist.github.com/omidaladini/003de259958ad8dfb07e
 although the node has received COMPLETE, SocketTimeoutException is thrown 
 after streaming_socket_timeout_in_ms (30 minutes here).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-9271) IndexSummaryManagerTest.testCompactionRace times out periodically

2015-05-22 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani resolved CASSANDRA-9271.
---
Resolution: Fixed

re-committed [~aweisberg] fix in 4362e71

 IndexSummaryManagerTest.testCompactionRace times out periodically
 -

 Key: CASSANDRA-9271
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9271
 Project: Cassandra
  Issue Type: Test
Reporter: Ariel Weisberg
Assignee: T Jake Luciani
 Fix For: 2.1.6


 The issue is that the amount of time the test takes is highly variable due to 
 it being biased towards creating a condition where the test has to retry the 
 compaction it is attempting.
 Solution is to decrease the bias by having 
 https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/ColumnFamilyStore.java#L2522
  check every millisecond instead of every 100 milliseconds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/2] cassandra git commit: Lower sleep time to avoid timeout of IndexSummaryManagerTest.testCompactionRace

2015-05-22 Thread jake
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 e5a76bdb5 -> 5ab14968e


Lower sleep time to avoid timeout of IndexSummaryManagerTest.testCompactionRace

patch by aweisberg; reviewed by tjake for CASSANDRA-9271


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4362e71e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4362e71e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4362e71e

Branch: refs/heads/cassandra-2.2
Commit: 4362e71e55295067fd8de3a3206e0ff179bfcaf9
Parents: ae04861
Author: T Jake Luciani j...@apache.org
Authored: Fri May 22 11:48:45 2015 -0400
Committer: T Jake Luciani j...@apache.org
Committed: Fri May 22 11:48:45 2015 -0400

--
 src/java/org/apache/cassandra/db/compaction/CompactionManager.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/4362e71e/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
index 47bd2d6..fd28ceb 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
@@ -1484,7 +1484,7 @@ public class CompactionManager implements CompactionManagerMBean
 while (System.nanoTime() - start < delay)
 {
 if (CompactionManager.instance.isCompacting(cfss))
-Uninterruptibles.sleepUninterruptibly(100, TimeUnit.MILLISECONDS);
+Uninterruptibles.sleepUninterruptibly(1, TimeUnit.MILLISECONDS);
 else
 break;
 }



[jira] [Commented] (CASSANDRA-9347) Manually run CommitLogStress for 2.2 release

2015-05-22 Thread Alan Boudreault (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14556369#comment-14556369
 ] 

Alan Boudreault commented on CASSANDRA-9347:


[~aweisberg] Did you get a chance to take a look at enabling archiving with 
CommitLogStressTest? 

 Manually run CommitLogStress for 2.2 release
 

 Key: CASSANDRA-9347
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9347
 Project: Cassandra
  Issue Type: Test
Reporter: Ariel Weisberg
Assignee: Alan Boudreault
 Fix For: 2.2.x


 Commitlog stress runs each test for 10 seconds based on a constant. Might be 
 worth raising that to get the CL doing a little bit more work.
 Then run it in a loop on something with a fast SSD and something with a slow 
 disk for a few days and see if it fails.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9271) IndexSummaryManagerTest.testCompactionRace times out periodically

2015-05-22 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-9271:
--
Priority: Trivial  (was: Major)

 IndexSummaryManagerTest.testCompactionRace times out periodically
 -

 Key: CASSANDRA-9271
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9271
 Project: Cassandra
  Issue Type: Test
Reporter: Ariel Weisberg
Assignee: T Jake Luciani
Priority: Trivial
 Fix For: 2.1.6


 The issue is that the amount of time the test takes is highly variable due to 
 it being biased towards creating a condition where the test has to retry the 
 compaction it is attempting.
 Solution is to decrease the bias by having 
 https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/ColumnFamilyStore.java#L2522
  check every millisecond instead of every 100 milliseconds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-05-22 Thread jake
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5ab14968
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5ab14968
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5ab14968

Branch: refs/heads/cassandra-2.2
Commit: 5ab14968ec3819a94d9c3abdeb93a812c4a45b4a
Parents: e5a76bd 4362e71
Author: T Jake Luciani j...@apache.org
Authored: Fri May 22 11:57:36 2015 -0400
Committer: T Jake Luciani j...@apache.org
Committed: Fri May 22 11:57:36 2015 -0400

--
 src/java/org/apache/cassandra/db/compaction/CompactionManager.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5ab14968/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--



[2/3] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-05-22 Thread jake
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5ab14968
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5ab14968
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5ab14968

Branch: refs/heads/trunk
Commit: 5ab14968ec3819a94d9c3abdeb93a812c4a45b4a
Parents: e5a76bd 4362e71
Author: T Jake Luciani j...@apache.org
Authored: Fri May 22 11:57:36 2015 -0400
Committer: T Jake Luciani j...@apache.org
Committed: Fri May 22 11:57:36 2015 -0400

--
 src/java/org/apache/cassandra/db/compaction/CompactionManager.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/5ab14968/src/java/org/apache/cassandra/db/compaction/CompactionManager.java
--



[jira] [Updated] (CASSANDRA-9441) Reject frozen types in UDF

2015-05-22 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-9441:
--
Assignee: Benjamin Lerer  (was: Robert Stupp)

 Reject frozen types in UDF
 

 Key: CASSANDRA-9441
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9441
 Project: Cassandra
  Issue Type: Bug
Reporter: Aleksey Yeschenko
Assignee: Benjamin Lerer
 Fix For: 2.2.0 rc1


 Spin off from CASSANDRA-9186. Rationale in its comments section.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9458) Race condition causing StreamSession to get stuck in WAIT_COMPLETE

2015-05-22 Thread Omid Aladini (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omid Aladini updated CASSANDRA-9458:

Fix Version/s: 2.0.x
   2.1.x

 Race condition causing StreamSession to get stuck in WAIT_COMPLETE
 --

 Key: CASSANDRA-9458
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9458
 Project: Cassandra
  Issue Type: Bug
Reporter: Omid Aladini
Priority: Critical
 Fix For: 2.1.x, 2.0.x

 Attachments: 9458-v1.txt


 I think there is a race condition in StreamSession where one side of the 
 stream could get stuck in WAIT_COMPLETE although both have sent COMPLETE 
 messages. Consider a scenario that node B is being bootstrapped and it only 
 receives files during the session:
 1- During a stream session A sends some files to B and B sends no files to A.
 2- Once B completes the last task (receiving), StreamSession::maybeComplete 
 is invoked.
 3- While B is sending the COMPLETE message via StreamSession::maybeComplete, 
 it also receives the COMPLETE message from A and therefore 
 StreamSession::complete() is invoked.
 4- Therefore both maybeComplete() and complete() functions have branched into 
 the state != State.WAIT_COMPLETE case and both set the state to WAIT_COMPLETE.
 5- Now B is waiting to receive COMPLETE although it's already received it and 
 nothing triggers checking the state again, until it times out after 
 streaming_socket_timeout_in_ms.
 In the log below:
 https://gist.github.com/omidaladini/003de259958ad8dfb07e
 although the node has received COMPLETE, SocketTimeoutException is thrown 
 after streaming_socket_timeout_in_ms (30 minutes here).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9443) UFTest & UFIdentificationTest are failing in the CI environment

2015-05-22 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14556388#comment-14556388
 ] 

Ariel Weisberg commented on CASSANDRA-9443:
---

Not sure what the relevant difference is. I think that UFTest doesn't play well 
with others. I tried running two of them concurrently and one of them failed on 
not getting a port and the other ran 59 seconds instead of 30. It's pretty CPU 
bound so I am wondering if it is competing with another test and timing out.

I tried running UFTest and UFIdentificationTest together and they took 56 
seconds. 

Now why this is different between utest and test all is a good question. The 
Jenkins config uses the same value for -Dtest.runners=4. Maybe the tests are 
run in a different order or bucketed differently?

Either way this is yet another case for raising the timeout (how much 
programmer time have we lost to this now?) or maybe moving them to test-long.

 UFTest & UFIdentificationTest are failing in the CI environment
 ---

 Key: CASSANDRA-9443
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9443
 Project: Cassandra
  Issue Type: Test
  Components: Tests
Reporter: Sam Tunnicliffe
Assignee: Sam Tunnicliffe
 Fix For: 2.2.0 rc1


 These 2 tests are consistently timing out, but I'm so far unable to repro 
 locally.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Fix compiler warning about using _ as an identifier

2015-05-22 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 4362e71e5 -> 744db7014


Fix compiler warning about using _ as an identifier


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/744db701
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/744db701
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/744db701

Branch: refs/heads/cassandra-2.1
Commit: 744db701467e42cf19f9251d942fd9e3a4af2dd0
Parents: 4362e71
Author: Tyler Hobbs tylerlho...@gmail.com
Authored: Fri May 22 13:40:25 2015 -0500
Committer: Tyler Hobbs tylerlho...@gmail.com
Committed: Fri May 22 13:40:25 2015 -0500

--
 test/unit/org/apache/cassandra/io/util/DataOutputTest.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/744db701/test/unit/org/apache/cassandra/io/util/DataOutputTest.java
--
diff --git a/test/unit/org/apache/cassandra/io/util/DataOutputTest.java 
b/test/unit/org/apache/cassandra/io/util/DataOutputTest.java
index 7110d1d..2063e9a 100644
--- a/test/unit/org/apache/cassandra/io/util/DataOutputTest.java
+++ b/test/unit/org/apache/cassandra/io/util/DataOutputTest.java
@@ -247,7 +247,7 @@ public class DataOutputTest
 test.readInt();
 assert false;
 }
-catch (EOFException _)
+catch (EOFException exc)
 {
 }
 }



[1/2] cassandra git commit: Fix compiler warning about using _ as an identifier

2015-05-22 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 0d24b1a80 -> 490053820


Fix compiler warning about using _ as an identifier


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/744db701
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/744db701
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/744db701

Branch: refs/heads/cassandra-2.2
Commit: 744db701467e42cf19f9251d942fd9e3a4af2dd0
Parents: 4362e71
Author: Tyler Hobbs tylerlho...@gmail.com
Authored: Fri May 22 13:40:25 2015 -0500
Committer: Tyler Hobbs tylerlho...@gmail.com
Committed: Fri May 22 13:40:25 2015 -0500

--
 test/unit/org/apache/cassandra/io/util/DataOutputTest.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/744db701/test/unit/org/apache/cassandra/io/util/DataOutputTest.java
--
diff --git a/test/unit/org/apache/cassandra/io/util/DataOutputTest.java 
b/test/unit/org/apache/cassandra/io/util/DataOutputTest.java
index 7110d1d..2063e9a 100644
--- a/test/unit/org/apache/cassandra/io/util/DataOutputTest.java
+++ b/test/unit/org/apache/cassandra/io/util/DataOutputTest.java
@@ -247,7 +247,7 @@ public class DataOutputTest
 test.readInt();
 assert false;
 }
-catch (EOFException _)
+catch (EOFException exc)
 {
 }
 }


