[jira] [Updated] (CASSANDRA-8903) Super Columns exception during upgrade from C* 1.2.18 to C* 2.0.11.83
[ https://issues.apache.org/jira/browse/CASSANDRA-8903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Ramirez updated CASSANDRA-8903: - Since Version: 2.0.11 Fix Version/s: (was: 2.0.11) > Super Columns exception during upgrade from C* 1.2.18 to C* 2.0.11.83 > - > > Key: CASSANDRA-8903 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8903 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Erick Ramirez > > While in the middle of an upgrade of a 12-node cluster, the following errors > were constantly being reported in the logs: > {noformat} > ERROR [WRITE-/10.73.73.26] 2015-03-02 22:59:12,523 OutboundTcpConnection.java > (line 234) error writing to /xx.xx.xx.xx > java.lang.RuntimeException: Cannot convert filter to old super column format. > Update all nodes to Cassandra 2.0 first. > at > org.apache.cassandra.db.SuperColumns.sliceFilterToSC(SuperColumns.java:353) > at org.apache.cassandra.db.SuperColumns.filterToSC(SuperColumns.java:258) > at > org.apache.cassandra.db.RangeSliceCommandSerializer.serializedSize(RangeSliceCommand.java:284) > > at > org.apache.cassandra.db.RangeSliceCommandSerializer.serializedSize(RangeSliceCommand.java:156) > > at org.apache.cassandra.net.MessageOut.serialize(MessageOut.java:116) > at > org.apache.cassandra.net.OutboundTcpConnection.writeInternal(OutboundTcpConnection.java:251) > > at > org.apache.cassandra.net.OutboundTcpConnection.writeConnected(OutboundTcpConnection.java:203) > > at > org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:151) > {noformat} > Confirmed that code is accessing cluster via Thrift and not CQL API. > This issue is very similar to CASSANDRA-6996 but may relate to the way that > single slice and reversed is being handled. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8714) row-cache: use preloaded jemalloc w/ Unsafe
[ https://issues.apache.org/jira/browse/CASSANDRA-8714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346508#comment-14346508 ] Robert Stupp commented on CASSANDRA-8714: - Hm - the microbenchmark itself shows that neither approach has any real benefit over the other. The microbench uses direct and heap BBs. _branch_ means the patch and _trunk_ - yeah, trunk.
{noformat:title=64kB}
[java] Benchmark                                               Mode  Samples  Score    Error  Units
[java] o.a.c.t.m.SetBytesAllocationBench.branch               thrpt       10  0,957 ±  0,036  ops/us
[java] o.a.c.t.m.SetBytesAllocationBench.branch:branchDirect  thrpt       10  0,483 ±  0,013  ops/us
[java] o.a.c.t.m.SetBytesAllocationBench.branch:branchHeap    thrpt       10  0,474 ±  0,025  ops/us
[java] o.a.c.t.m.SetBytesAllocationBench.trunk                thrpt       10  0,938 ±  0,053  ops/us
[java] o.a.c.t.m.SetBytesAllocationBench.trunk:trunkDirect    thrpt       10  0,473 ±  0,024  ops/us
[java] o.a.c.t.m.SetBytesAllocationBench.trunk:trunkHeap      thrpt       10  0,466 ±  0,030  ops/us
{noformat}
{noformat:title=1kB}
[java] Benchmark                                               Mode  Samples   Score    Error  Units
[java] o.a.c.t.m.SetBytesAllocationBench.branch               thrpt       10  58,268 ±  2,936  ops/us
[java] o.a.c.t.m.SetBytesAllocationBench.branch:branchDirect  thrpt       10  30,058 ±  1,649  ops/us
[java] o.a.c.t.m.SetBytesAllocationBench.branch:branchHeap    thrpt       10  28,209 ±  1,525  ops/us
[java] o.a.c.t.m.SetBytesAllocationBench.trunk                thrpt       10  55,066 ±  2,410  ops/us
[java] o.a.c.t.m.SetBytesAllocationBench.trunk:trunkDirect    thrpt       10  28,169 ±  1,449  ops/us
[java] o.a.c.t.m.SetBytesAllocationBench.trunk:trunkHeap      thrpt       10  26,897 ±  1,206  ops/us
{noformat}
> row-cache: use preloaded jemalloc w/ Unsafe > --- > > Key: CASSANDRA-8714 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8714 > Project: Cassandra > Issue Type: Sub-task >Reporter: Robert Stupp >Assignee: Robert Stupp > Fix For: 3.0 > > Attachments: 8714-2.txt, 8714-3.txt, 8714-4.txt, 8714.txt > > > Using jemalloc via Java's {{Unsafe}} is a better alternative on Linux than > using jemalloc via JNA. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
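As a point of reference for readers who don't have the benchmark source at hand, a minimal JMH sketch along the following lines measures roughly what the scores above report: throughput of copying a byte[] payload into a direct vs. a heap ByteBuffer. The class name, parameter sizes, and setup below are illustrative assumptions, not the actual SetBytesAllocationBench.
{code}
import java.nio.ByteBuffer;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.*;

// Hypothetical stand-in for the benchmark referenced above: measures throughput
// of writing a payload into direct vs. heap ByteBuffers.
@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@State(Scope.Thread)
public class ByteBufferSetBytesBench
{
    @Param({"1024", "65536"})   // roughly the 1kB and 64kB cases above
    int payloadSize;

    byte[] payload;
    ByteBuffer direct;
    ByteBuffer heap;

    @Setup
    public void setup()
    {
        payload = new byte[payloadSize];
        ThreadLocalRandom.current().nextBytes(payload);
        direct = ByteBuffer.allocateDirect(payloadSize);
        heap = ByteBuffer.allocate(payloadSize);
    }

    @Benchmark
    public ByteBuffer directBuffer()
    {
        direct.clear();
        return direct.put(payload);   // return the buffer so JMH doesn't dead-code the write
    }

    @Benchmark
    public ByteBuffer heapBuffer()
    {
        heap.clear();
        return heap.put(payload);
    }
}
{code}
Read against the tables above, the direct and heap scores sit roughly within each other's error bounds for both payload sizes, which is consistent with the conclusion that neither the patch nor trunk has a clear edge here.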
[jira] [Created] (CASSANDRA-8903) Super Columns exception during upgrade from C* 1.2.18 to C* 2.0.11.83
Erick Ramirez created CASSANDRA-8903: Summary: Super Columns exception during upgrade from C* 1.2.18 to C* 2.0.11.83 Key: CASSANDRA-8903 URL: https://issues.apache.org/jira/browse/CASSANDRA-8903 Project: Cassandra Issue Type: Bug Components: Core Reporter: Erick Ramirez Fix For: 2.0.11 While in the middle of an upgrade of a 12-node cluster, the following errors were constantly being reported in the logs: {noformat} ERROR [WRITE-/10.73.73.26] 2015-03-02 22:59:12,523 OutboundTcpConnection.java (line 234) error writing to /xx.xx.xx.xx java.lang.RuntimeException: Cannot convert filter to old super column format. Update all nodes to Cassandra 2.0 first. at org.apache.cassandra.db.SuperColumns.sliceFilterToSC(SuperColumns.java:353) at org.apache.cassandra.db.SuperColumns.filterToSC(SuperColumns.java:258) at org.apache.cassandra.db.RangeSliceCommandSerializer.serializedSize(RangeSliceCommand.java:284) at org.apache.cassandra.db.RangeSliceCommandSerializer.serializedSize(RangeSliceCommand.java:156) at org.apache.cassandra.net.MessageOut.serialize(MessageOut.java:116) at org.apache.cassandra.net.OutboundTcpConnection.writeInternal(OutboundTcpConnection.java:251) at org.apache.cassandra.net.OutboundTcpConnection.writeConnected(OutboundTcpConnection.java:203) at org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:151) {noformat} Confirmed that code is accessing cluster via Thrift and not CQL API. This issue is very similar to CASSANDRA-6996 but may relate to the way that single slice and reversed is being handled. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
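To make the failure mode easier to picture, the toy sketch below models the behavior described in the report: while serializing a command for a pre-2.0 peer during the mixed 1.2/2.0 upgrade, any slice filter richer than a single forward slice is rejected, so a single reversed slice (the reporter's suspicion) would trip exactly this exception. All types and the conversion rule here are simplified illustrations, not the actual SuperColumns.sliceFilterToSC code.
{code}
// Illustrative only: hypothetical, simplified model of the conversion constraint.
final class SliceFilterShape
{
    final int sliceCount;     // number of column slices in the filter
    final boolean reversed;   // whether the slice is iterated in reverse

    SliceFilterShape(int sliceCount, boolean reversed)
    {
        this.sliceCount = sliceCount;
        this.reversed = reversed;
    }
}

final class OldSuperColumnFormat
{
    // Assumption for illustration: the pre-2.0 super column wire format can only
    // describe a single forward slice; anything richer fails with the logged error.
    static void checkConvertible(SliceFilterShape filter)
    {
        if (filter.sliceCount > 1 || filter.reversed)
            throw new RuntimeException("Cannot convert filter to old super column format. " +
                                       "Update all nodes to Cassandra 2.0 first.");
    }

    public static void main(String[] args)
    {
        checkConvertible(new SliceFilterShape(1, false)); // convertible
        checkConvertible(new SliceFilterShape(1, true));  // throws, as observed mid-upgrade
    }
}
{code}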
[1/2] cassandra git commit: Make EstimatedHistogram#percentile() use ceil instead of floor
Repository: cassandra Updated Branches: refs/heads/trunk 0014d929f -> 89d31f3da Make EstimatedHistogram#percentile() use ceil instead of floor patch by Carl Yeksigian; reviewed by Chris Lohfink for CASSANDRA-8883 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d4e37869 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d4e37869 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d4e37869 Branch: refs/heads/trunk Commit: d4e37869b1465b231ada7554fb7a6d5ccf43f493 Parents: 0127b69 Author: Carl Yeksigian Authored: Tue Mar 3 13:54:51 2015 -0500 Committer: Aleksey Yeschenko Committed: Tue Mar 3 21:01:14 2015 -0800 -- CHANGES.txt | 1 + .../cassandra/utils/EstimatedHistogram.java | 2 +- .../cassandra/utils/EstimatedHistogramTest.java | 52 3 files changed, 44 insertions(+), 11 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/d4e37869/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 7ce6200..748acf8 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.1.4 + * Make EstimatedHistogram#percentile() use ceil instead of floor (CASSANDRA-8883) * Fix top partitions reporting wrong cardinality (CASSANDRA-8834) * Fix rare NPE in KeyCacheSerializer (CASSANDRA-8067) * Pick sstables for validation as late as possible inc repairs (CASSANDRA-8366) http://git-wip-us.apache.org/repos/asf/cassandra/blob/d4e37869/src/java/org/apache/cassandra/utils/EstimatedHistogram.java -- diff --git a/src/java/org/apache/cassandra/utils/EstimatedHistogram.java b/src/java/org/apache/cassandra/utils/EstimatedHistogram.java index 196a3b9..a5c51c8 100644 --- a/src/java/org/apache/cassandra/utils/EstimatedHistogram.java +++ b/src/java/org/apache/cassandra/utils/EstimatedHistogram.java @@ -178,7 +178,7 @@ public class EstimatedHistogram if (buckets.get(lastBucket) > 0) throw new IllegalStateException("Unable to compute when histogram overflowed"); -long pcount = (long) Math.floor(count() * percentile); +long pcount = (long) Math.ceil(count() * percentile); if (pcount == 0) return 0; http://git-wip-us.apache.org/repos/asf/cassandra/blob/d4e37869/test/unit/org/apache/cassandra/utils/EstimatedHistogramTest.java -- diff --git a/test/unit/org/apache/cassandra/utils/EstimatedHistogramTest.java b/test/unit/org/apache/cassandra/utils/EstimatedHistogramTest.java index bbfd1c7..eebaa25 100644 --- a/test/unit/org/apache/cassandra/utils/EstimatedHistogramTest.java +++ b/test/unit/org/apache/cassandra/utils/EstimatedHistogramTest.java @@ -75,17 +75,49 @@ public class EstimatedHistogramTest @Test public void testPercentile() { -EstimatedHistogram histogram = new EstimatedHistogram(); -// percentile of empty histogram is 0 -assertEquals(0, histogram.percentile(0.99)); +{ +EstimatedHistogram histogram = new EstimatedHistogram(); +// percentile of empty histogram is 0 +assertEquals(0, histogram.percentile(0.99)); -histogram.add(1); -// percentile of histogram with just one value will return 0 except 100th -assertEquals(0, histogram.percentile(0.99)); -assertEquals(1, histogram.percentile(1.00)); +histogram.add(1); +// percentile of a histogram with one element should be that element +assertEquals(1, histogram.percentile(0.99)); + +histogram.add(10); +assertEquals(10, histogram.percentile(0.99)); +} + +{ +EstimatedHistogram histogram = new EstimatedHistogram(); + +histogram.add(1); +histogram.add(2); +histogram.add(3); +histogram.add(4); +histogram.add(5); + +assertEquals(0, histogram.percentile(0.00)); +assertEquals(3, 
histogram.percentile(0.50)); +assertEquals(3, histogram.percentile(0.60)); +assertEquals(5, histogram.percentile(1.00)); +} + +{ +EstimatedHistogram histogram = new EstimatedHistogram(); + +for (int i = 11; i <= 20; i++) +histogram.add(i); -histogram.add(10); -assertEquals(1, histogram.percentile(0.99)); -assertEquals(10, histogram.percentile(1.00)); +// Right now the histogram looks like: +//10 12 14 17 20 +// 02233 +// %: 0 20 40 70 100 +assertE
[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into trunk
Merge branch 'cassandra-2.1' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/89d31f3d Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/89d31f3d Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/89d31f3d Branch: refs/heads/trunk Commit: 89d31f3da7c4007eb71c2e0de043bcaf7e4c5a27 Parents: 0014d92 d4e3786 Author: Aleksey Yeschenko Authored: Tue Mar 3 21:02:47 2015 -0800 Committer: Aleksey Yeschenko Committed: Tue Mar 3 21:02:47 2015 -0800 -- CHANGES.txt | 1 + .../cassandra/utils/EstimatedHistogram.java | 2 +- .../cassandra/utils/EstimatedHistogramTest.java | 52 3 files changed, 44 insertions(+), 11 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/89d31f3d/CHANGES.txt -- diff --cc CHANGES.txt index d6cb9b4,748acf8..45751f1 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,65 -1,5 +1,66 @@@ +3.0 + * Add role based access control (CASSANDRA-7653, 8650, 7216, 8760, 8849) + * Avoid accessing partitioner through StorageProxy (CASSANDRA-8244, 8268) + * Upgrade Metrics library and remove depricated metrics (CASSANDRA-5657) + * Serializing Row cache alternative, fully off heap (CASSANDRA-7438) + * Duplicate rows returned when in clause has repeated values (CASSANDRA-6707) + * Make CassandraException unchecked, extend RuntimeException (CASSANDRA-8560) + * Support direct buffer decompression for reads (CASSANDRA-8464) + * DirectByteBuffer compatible LZ4 methods (CASSANDRA-7039) + * Group sstables for anticompaction correctly (CASSANDRA-8578) + * Add ReadFailureException to native protocol, respond + immediately when replicas encounter errors while handling + a read request (CASSANDRA-7886) + * Switch CommitLogSegment from RandomAccessFile to nio (CASSANDRA-8308) + * Allow mixing token and partition key restrictions (CASSANDRA-7016) + * Support index key/value entries on map collections (CASSANDRA-8473) + * Modernize schema tables (CASSANDRA-8261) + * Support for user-defined aggregation functions (CASSANDRA-8053) + * Fix NPE in SelectStatement with empty IN values (CASSANDRA-8419) + * Refactor SelectStatement, return IN results in natural order instead + of IN value list order and ignore duplicate values in partition key IN restrictions (CASSANDRA-7981) + * Support UDTs, tuples, and collections in user-defined + functions (CASSANDRA-7563) + * Fix aggregate fn results on empty selection, result column name, + and cqlsh parsing (CASSANDRA-8229) + * Mark sstables as repaired after full repair (CASSANDRA-7586) + * Extend Descriptor to include a format value and refactor reader/writer + APIs (CASSANDRA-7443) + * Integrate JMH for microbenchmarks (CASSANDRA-8151) + * Keep sstable levels when bootstrapping (CASSANDRA-7460) + * Add Sigar library and perform basic OS settings check on startup (CASSANDRA-7838) + * Support for aggregation functions (CASSANDRA-4914) + * Remove cassandra-cli (CASSANDRA-7920) + * Accept dollar quoted strings in CQL (CASSANDRA-7769) + * Make assassinate a first class command (CASSANDRA-7935) + * Support IN clause on any partition key column (CASSANDRA-7855) + * Support IN clause on any clustering column (CASSANDRA-4762) + * Improve compaction logging (CASSANDRA-7818) + * Remove YamlFileNetworkTopologySnitch (CASSANDRA-7917) + * Do anticompaction in groups (CASSANDRA-6851) + * Support user-defined functions (CASSANDRA-7395, 7526, 7562, 7740, 7781, 7929, + 7924, 7812, 8063, 7813, 7708) + * Permit configurable timestamps with cassandra-stress (CASSANDRA-7416) + * 
Move sstable RandomAccessReader to nio2, which allows using the + FILE_SHARE_DELETE flag on Windows (CASSANDRA-4050) + * Remove CQL2 (CASSANDRA-5918) + * Add Thrift get_multi_slice call (CASSANDRA-6757) + * Optimize fetching multiple cells by name (CASSANDRA-6933) + * Allow compilation in java 8 (CASSANDRA-7028) + * Make incremental repair default (CASSANDRA-7250) + * Enable code coverage thru JaCoCo (CASSANDRA-7226) + * Switch external naming of 'column families' to 'tables' (CASSANDRA-4369) + * Shorten SSTable path (CASSANDRA-6962) + * Use unsafe mutations for most unit tests (CASSANDRA-6969) + * Fix race condition during calculation of pending ranges (CASSANDRA-7390) + * Fail on very large batch sizes (CASSANDRA-8011) + * Improve concurrency of repair (CASSANDRA-6455, 8208) + * Select optimal CRC32 implementation at runtime (CASSANDRA-8614) + * Evaluate MurmurHash of Token once per query (CASSANDRA-7096) + + 2.1.4 + * Make EstimatedHistogram#percentile() use ceil instead of floor (CASSANDRA-8883) * Fix top partitions reporting wrong cardinality (CASSANDRA-8834)
cassandra git commit: Make EstimatedHistogram#percentile() use ceil instead of floor
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1 0127b6963 -> d4e37869b Make EstimatedHistogram#percentile() use ceil instead of floor patch by Carl Yeksigian; reviewed by Chris Lohfink for CASSANDRA-8883 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d4e37869 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d4e37869 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d4e37869 Branch: refs/heads/cassandra-2.1 Commit: d4e37869b1465b231ada7554fb7a6d5ccf43f493 Parents: 0127b69 Author: Carl Yeksigian Authored: Tue Mar 3 13:54:51 2015 -0500 Committer: Aleksey Yeschenko Committed: Tue Mar 3 21:01:14 2015 -0800 -- CHANGES.txt | 1 + .../cassandra/utils/EstimatedHistogram.java | 2 +- .../cassandra/utils/EstimatedHistogramTest.java | 52 3 files changed, 44 insertions(+), 11 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/d4e37869/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 7ce6200..748acf8 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.1.4 + * Make EstimatedHistogram#percentile() use ceil instead of floor (CASSANDRA-8883) * Fix top partitions reporting wrong cardinality (CASSANDRA-8834) * Fix rare NPE in KeyCacheSerializer (CASSANDRA-8067) * Pick sstables for validation as late as possible inc repairs (CASSANDRA-8366) http://git-wip-us.apache.org/repos/asf/cassandra/blob/d4e37869/src/java/org/apache/cassandra/utils/EstimatedHistogram.java -- diff --git a/src/java/org/apache/cassandra/utils/EstimatedHistogram.java b/src/java/org/apache/cassandra/utils/EstimatedHistogram.java index 196a3b9..a5c51c8 100644 --- a/src/java/org/apache/cassandra/utils/EstimatedHistogram.java +++ b/src/java/org/apache/cassandra/utils/EstimatedHistogram.java @@ -178,7 +178,7 @@ public class EstimatedHistogram if (buckets.get(lastBucket) > 0) throw new IllegalStateException("Unable to compute when histogram overflowed"); -long pcount = (long) Math.floor(count() * percentile); +long pcount = (long) Math.ceil(count() * percentile); if (pcount == 0) return 0; http://git-wip-us.apache.org/repos/asf/cassandra/blob/d4e37869/test/unit/org/apache/cassandra/utils/EstimatedHistogramTest.java -- diff --git a/test/unit/org/apache/cassandra/utils/EstimatedHistogramTest.java b/test/unit/org/apache/cassandra/utils/EstimatedHistogramTest.java index bbfd1c7..eebaa25 100644 --- a/test/unit/org/apache/cassandra/utils/EstimatedHistogramTest.java +++ b/test/unit/org/apache/cassandra/utils/EstimatedHistogramTest.java @@ -75,17 +75,49 @@ public class EstimatedHistogramTest @Test public void testPercentile() { -EstimatedHistogram histogram = new EstimatedHistogram(); -// percentile of empty histogram is 0 -assertEquals(0, histogram.percentile(0.99)); +{ +EstimatedHistogram histogram = new EstimatedHistogram(); +// percentile of empty histogram is 0 +assertEquals(0, histogram.percentile(0.99)); -histogram.add(1); -// percentile of histogram with just one value will return 0 except 100th -assertEquals(0, histogram.percentile(0.99)); -assertEquals(1, histogram.percentile(1.00)); +histogram.add(1); +// percentile of a histogram with one element should be that element +assertEquals(1, histogram.percentile(0.99)); + +histogram.add(10); +assertEquals(10, histogram.percentile(0.99)); +} + +{ +EstimatedHistogram histogram = new EstimatedHistogram(); + +histogram.add(1); +histogram.add(2); +histogram.add(3); +histogram.add(4); +histogram.add(5); + +assertEquals(0, histogram.percentile(0.00)); 
+assertEquals(3, histogram.percentile(0.50)); +assertEquals(3, histogram.percentile(0.60)); +assertEquals(5, histogram.percentile(1.00)); +} + +{ +EstimatedHistogram histogram = new EstimatedHistogram(); + +for (int i = 11; i <= 20; i++) +histogram.add(i); -histogram.add(10); -assertEquals(1, histogram.percentile(0.99)); -assertEquals(10, histogram.percentile(1.00)); +// Right now the histogram looks like: +//10 12 14 17 20 +// 02233 +// %: 0 20 40 70 100 +
[jira] [Commented] (CASSANDRA-8834) Top partitions reporting wrong cardinality
[ https://issues.apache.org/jira/browse/CASSANDRA-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346400#comment-14346400 ] Aleksey Yeschenko commented on CASSANDRA-8834: -- Committed, thanks. > Top partitions reporting wrong cardinality > -- > > Key: CASSANDRA-8834 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8834 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Chris Lohfink >Assignee: Chris Lohfink > Fix For: 2.1.4 > > Attachments: CASSANDRA-8834-v3.txt, cardinality.patch > > > It always reports a cardinality of 1. Patch also includes a try/catch around > the conversion of partition keys that isn't always handled well in thrift cfs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[1/2] cassandra git commit: Fix top partitions reporting wrong cardinality
Repository: cassandra Updated Branches: refs/heads/trunk 3f8806d23 -> 0014d929f Fix top partitions reporting wrong cardinality patch by Chris Lohfink; reviewed by Aleksey Yeschenko for CASSANDRA-8834 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0127b696 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0127b696 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0127b696 Branch: refs/heads/trunk Commit: 0127b6963ebf1c4fa407c71c12f10748b509b189 Parents: 9499f7c Author: Chris Lohfink Authored: Tue Mar 3 22:31:50 2015 -0600 Committer: Aleksey Yeschenko Committed: Tue Mar 3 20:37:17 2015 -0800 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/db/ColumnFamilyStore.java | 4 ++-- src/java/org/apache/cassandra/utils/TopKSampler.java | 6 +++--- test/unit/org/apache/cassandra/utils/TopKSamplerTest.java | 8 ++-- 4 files changed, 12 insertions(+), 7 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/0127b696/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index a90dd48..7ce6200 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.1.4 + * Fix top partitions reporting wrong cardinality (CASSANDRA-8834) * Fix rare NPE in KeyCacheSerializer (CASSANDRA-8067) * Pick sstables for validation as late as possible inc repairs (CASSANDRA-8366) * Fix commitlog getPendingTasks to not increment (CASSANDRA-8856) http://git-wip-us.apache.org/repos/asf/cassandra/blob/0127b696/src/java/org/apache/cassandra/db/ColumnFamilyStore.java -- diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java index e4531f2..9b792b6 100644 --- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java +++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java @@ -1179,7 +1179,7 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean Memtable mt = data.getMemtableFor(opGroup, replayPosition); final long timeDelta = mt.put(key, columnFamily, indexer, opGroup); maybeUpdateRowCache(key); -metric.samplers.get(Sampler.WRITES).addSample(key.getKey()); +metric.samplers.get(Sampler.WRITES).addSample(key.getKey(), key.hashCode(), 1); metric.writeLatency.addNano(System.nanoTime() - start); if(timeDelta < Long.MAX_VALUE) metric.colUpdateTimeDeltaHistogram.update(timeDelta); @@ -1915,7 +1915,7 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean columns = controller.getTopLevelColumns(Memtable.MEMORY_POOL.needToCopyOnHeap()); } if (columns != null) -metric.samplers.get(Sampler.READS).addSample(filter.key.getKey()); +metric.samplers.get(Sampler.READS).addSample(filter.key.getKey(), filter.key.hashCode(), 1); metric.updateSSTableIterated(controller.getSstablesIterated()); return columns; } http://git-wip-us.apache.org/repos/asf/cassandra/blob/0127b696/src/java/org/apache/cassandra/utils/TopKSampler.java -- diff --git a/src/java/org/apache/cassandra/utils/TopKSampler.java b/src/java/org/apache/cassandra/utils/TopKSampler.java index 29d46286..a8bd602 100644 --- a/src/java/org/apache/cassandra/utils/TopKSampler.java +++ b/src/java/org/apache/cassandra/utils/TopKSampler.java @@ -81,7 +81,7 @@ public class TopKSampler public void addSample(T item) { -addSample(item, 1); +addSample(item, item.hashCode(), 1); } /** @@ -89,7 +89,7 @@ public class TopKSampler * use the "Sampler" thread pool to record results if the sampler is enabled. 
If not * sampling this is a NOOP */ -public void addSample(final T item, final int value) +public void addSample(final T item, final long hash, final int value) { if (enabled) { @@ -107,7 +107,7 @@ public class TopKSampler try { summary.offer(item, value); -hll.offer(item); +hll.offerHashed(hash); } catch (Exception e) { logger.debug("Failure to offer sample", e); http://git-wip-us.apache.org/repos/asf/cassandra/blob/0127b696/test/unit/org/apache/cassandra/utils/TopKSamplerTest.java -- diff --git a/test/unit/org/apache/cassandra/utils/TopKSamplerTest
[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into trunk
Merge branch 'cassandra-2.1' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0014d929 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0014d929 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0014d929 Branch: refs/heads/trunk Commit: 0014d929f7504e2f41209af4d311d8105a1cd3aa Parents: 3f8806d 0127b69 Author: Aleksey Yeschenko Authored: Tue Mar 3 20:56:09 2015 -0800 Committer: Aleksey Yeschenko Committed: Tue Mar 3 20:56:09 2015 -0800 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/db/ColumnFamilyStore.java | 4 ++-- src/java/org/apache/cassandra/utils/TopKSampler.java | 6 +++--- test/unit/org/apache/cassandra/utils/TopKSamplerTest.java | 8 ++-- 4 files changed, 12 insertions(+), 7 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/0014d929/CHANGES.txt -- diff --cc CHANGES.txt index b877cbe,7ce6200..d6cb9b4 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,65 -1,5 +1,66 @@@ +3.0 + * Add role based access control (CASSANDRA-7653, 8650, 7216, 8760, 8849) + * Avoid accessing partitioner through StorageProxy (CASSANDRA-8244, 8268) + * Upgrade Metrics library and remove depricated metrics (CASSANDRA-5657) + * Serializing Row cache alternative, fully off heap (CASSANDRA-7438) + * Duplicate rows returned when in clause has repeated values (CASSANDRA-6707) + * Make CassandraException unchecked, extend RuntimeException (CASSANDRA-8560) + * Support direct buffer decompression for reads (CASSANDRA-8464) + * DirectByteBuffer compatible LZ4 methods (CASSANDRA-7039) + * Group sstables for anticompaction correctly (CASSANDRA-8578) + * Add ReadFailureException to native protocol, respond + immediately when replicas encounter errors while handling + a read request (CASSANDRA-7886) + * Switch CommitLogSegment from RandomAccessFile to nio (CASSANDRA-8308) + * Allow mixing token and partition key restrictions (CASSANDRA-7016) + * Support index key/value entries on map collections (CASSANDRA-8473) + * Modernize schema tables (CASSANDRA-8261) + * Support for user-defined aggregation functions (CASSANDRA-8053) + * Fix NPE in SelectStatement with empty IN values (CASSANDRA-8419) + * Refactor SelectStatement, return IN results in natural order instead + of IN value list order and ignore duplicate values in partition key IN restrictions (CASSANDRA-7981) + * Support UDTs, tuples, and collections in user-defined + functions (CASSANDRA-7563) + * Fix aggregate fn results on empty selection, result column name, + and cqlsh parsing (CASSANDRA-8229) + * Mark sstables as repaired after full repair (CASSANDRA-7586) + * Extend Descriptor to include a format value and refactor reader/writer + APIs (CASSANDRA-7443) + * Integrate JMH for microbenchmarks (CASSANDRA-8151) + * Keep sstable levels when bootstrapping (CASSANDRA-7460) + * Add Sigar library and perform basic OS settings check on startup (CASSANDRA-7838) + * Support for aggregation functions (CASSANDRA-4914) + * Remove cassandra-cli (CASSANDRA-7920) + * Accept dollar quoted strings in CQL (CASSANDRA-7769) + * Make assassinate a first class command (CASSANDRA-7935) + * Support IN clause on any partition key column (CASSANDRA-7855) + * Support IN clause on any clustering column (CASSANDRA-4762) + * Improve compaction logging (CASSANDRA-7818) + * Remove YamlFileNetworkTopologySnitch (CASSANDRA-7917) + * Do anticompaction in groups (CASSANDRA-6851) + * Support user-defined functions (CASSANDRA-7395, 7526, 7562, 7740, 7781, 7929, + 7924, 7812, 8063, 
7813, 7708) + * Permit configurable timestamps with cassandra-stress (CASSANDRA-7416) + * Move sstable RandomAccessReader to nio2, which allows using the + FILE_SHARE_DELETE flag on Windows (CASSANDRA-4050) + * Remove CQL2 (CASSANDRA-5918) + * Add Thrift get_multi_slice call (CASSANDRA-6757) + * Optimize fetching multiple cells by name (CASSANDRA-6933) + * Allow compilation in java 8 (CASSANDRA-7028) + * Make incremental repair default (CASSANDRA-7250) + * Enable code coverage thru JaCoCo (CASSANDRA-7226) + * Switch external naming of 'column families' to 'tables' (CASSANDRA-4369) + * Shorten SSTable path (CASSANDRA-6962) + * Use unsafe mutations for most unit tests (CASSANDRA-6969) + * Fix race condition during calculation of pending ranges (CASSANDRA-7390) + * Fail on very large batch sizes (CASSANDRA-8011) + * Improve concurrency of repair (CASSANDRA-6455, 8208) + * Select optimal CRC32 implementation at runtime (CASSANDRA-8614) + * Evaluate MurmurHash of Token once per query (CASSANDRA-7096) + + 2.1.4 + * Fix top partitions reporting wrong cardinality (CASSANDRA-8834)
[jira] [Commented] (CASSANDRA-7122) Replacement nodes have null entries in system.peers
[ https://issues.apache.org/jira/browse/CASSANDRA-7122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346399#comment-14346399 ] Sebastian Estevez commented on CASSANDRA-7122: -- [~brandon.williams] I have seen this in a couple of places recently. Could it still be happening post the 2.0.9 fix? > Replacement nodes have null entries in system.peers > --- > > Key: CASSANDRA-7122 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7122 > Project: Cassandra > Issue Type: Bug >Reporter: Richard Low >Assignee: Brandon Williams > Fix For: 1.2.17, 2.0.9, 2.1 rc1 > > Attachments: 7122.txt > > > If a node is replaced with -Dcassandra.replace_address, the new node has > mostly null entries in system.peers: > {code} > > select * from system.peers; > peer | data_center | host_id | rack | release_version | rpc_address | > schema_version | tokens > ---+-+-+--+-+-++-- > 127.0.0.3 |null |null | null |null |null | > null | {'-3074457345618258602'} > {code} > To reproduce, simply kill a node and replace it. The entries are correctly > populated if the replacement node is restarted but they are never populated > if it isn't. > I can think of at least two bad consequences of this: > 1. Drivers like Datastax java-driver use the peers table to find the > rpc_address and location info of a node. If the entires are null it assumes > rpc_address=ip and the node is in the local DC. > 2. When using GossipingPropertyFileSnitch and node won't persist the DC/rack > of another node so may not be able to locate it during restarts. > I reproduced in 1.2.15 but from inspection it looks to be present in 1.2.16 > and 2.0.7. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CASSANDRA-7122) Replacement nodes have null entries in system.peers
[ https://issues.apache.org/jira/browse/CASSANDRA-7122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14319146#comment-14319146 ] Sebastian Estevez edited comment on CASSANDRA-7122 at 3/4/15 4:54 AM: -- Hello, seeing a similar issue on a 2.0.11 cluster(2.0.11). Is it possible that this might still be an issue? Here is the table content for system.peers (with the tokens column omitted): {code} peer | data_center | host_id | preferred_ip | rack | release_version | rpc_address | schema_version | workload ---+-+--+--+---+-+-+--+--- 127.0.0.56 | Cassandra | 7fc420af-d284-48aa-ba58-2b71508995ab | null | rack1 | 2.0.11.83 | 0.0.0.0 | 9dd12e7e-07cf-395b-9d9c-05028bdedd04 | Cassandra 127.0.0.248 | Cassandra | null | null | null |null |null | null | null 127.0.0.246 | Cassandra | null | null | null |null |null | null | null 127.0.0.63 | Cassandra | null | null | null |null |null | null | null 127.0.0.104 | Cassandra | null | null | null |null |null | null | null 127.0.0.41 | Cassandra | b6a9ea10-cf58-452f-9003-87c1e183c888 | null | rack1 | 2.0.11.83 | 0.0.0.0 | 9dd12e7e-07cf-395b-9d9c-05028bdedd04 | Cassandra 127.0.0.48 | Cassandra | 5f33e073-5c71-4ba1-9d9f-7d2b396bd916 | null | rack1 | 2.0.11.83 | 0.0.0.0 | 9dd12e7e-07cf-395b-9d9c-05028bdedd04 | Cassandra 127.0.0.58 | Cassandra | 0f6c891a-b8e5-486c-a3f2-04c62d368925 | null | rack1 | 2.0.11.82 | 0.0.0.0 | 8223f2ad-fddd-3c7f-a0f3-837c598ca96b | null 127.0.0.66 | Cassandra | 8b8f0c8b-bfb5-4256-8da9-c10296f6c4e7 | null | rack1 | 2.0.11.83 | 0.0.0.0 | 9dd12e7e-07cf-395b-9d9c-05028bdedd04 | Cassandra 127.0.0.72 | Cassandra | null | null | null |null |null | null | null {code} Out of the 10 rows, the 5 with null host_id were the nodes removed on Monday. was (Author: sebastian.este...@datastax.com): Hello, seeing a similar issue on a 2.0.11 cluster(DSE 4.6). Is it possible that this might still be an issue? Here is the table content for system.peers (with the tokens column omitted): {code} peer | data_center | host_id | preferred_ip | rack | release_version | rpc_address | schema_version | workload ---+-+--+--+---+-+-+--+--- 127.0.0.56 | Cassandra | 7fc420af-d284-48aa-ba58-2b71508995ab | null | rack1 | 2.0.11.83 | 0.0.0.0 | 9dd12e7e-07cf-395b-9d9c-05028bdedd04 | Cassandra 127.0.0.248 | Cassandra | null | null | null |null |null | null | null 127.0.0.246 | Cassandra | null | null | null |null |null | null | null 127.0.0.63 | Cassandra | null | null | null |null |null | null | null 127.0.0.104 | Cassandra | null | null | null |null |null | null | null 127.0.0.41 | Cassandra | b6a9ea10-cf58-452f-9003-87c1e183c888 | null | rack1 | 2.0.11.83 | 0.0.0.0 | 9dd12e7e-07cf-395b-9d9c-05028bdedd04 | Cassandra 127.0.0.48 | Cassandra | 5f33e073-5c71-4ba1-9d9f-7d2b396bd916 | null | rack1 | 2.0.11.83 | 0.0.0.0 | 9dd12e7e-07cf-395b-9d9c-05028bdedd04 | Cassandra 127.0.0.58 | Cassandra | 0f6c891a-b8e5-486c-a3f2-04c62d368925 | null | rack1 | 2.0.11.82 | 0.0.0.0 | 8223f2ad-fddd-3c7f-a0f3-837c598ca96b | null 127.0.0.66 | Cassandra | 8b8f0c8b-bfb5-4256-8da9-c10296f6c4e7 | null | rack1 | 2.0.11.83 | 0.0.0.0 | 9dd12e7e-07cf-395b-9d9c-05028bdedd04 | Cassandra 127.0.0.72 | Cassandra | null | null | null |null |null |
cassandra git commit: Fix top partitions reporting wrong cardinality
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1 9499f7cb9 -> 0127b6963 Fix top partitions reporting wrong cardinality patch by Chris Lohfink; reviewed by Aleksey Yeschenko for CASSANDRA-8834 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0127b696 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0127b696 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0127b696 Branch: refs/heads/cassandra-2.1 Commit: 0127b6963ebf1c4fa407c71c12f10748b509b189 Parents: 9499f7c Author: Chris Lohfink Authored: Tue Mar 3 22:31:50 2015 -0600 Committer: Aleksey Yeschenko Committed: Tue Mar 3 20:37:17 2015 -0800 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/db/ColumnFamilyStore.java | 4 ++-- src/java/org/apache/cassandra/utils/TopKSampler.java | 6 +++--- test/unit/org/apache/cassandra/utils/TopKSamplerTest.java | 8 ++-- 4 files changed, 12 insertions(+), 7 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/0127b696/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index a90dd48..7ce6200 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.1.4 + * Fix top partitions reporting wrong cardinality (CASSANDRA-8834) * Fix rare NPE in KeyCacheSerializer (CASSANDRA-8067) * Pick sstables for validation as late as possible inc repairs (CASSANDRA-8366) * Fix commitlog getPendingTasks to not increment (CASSANDRA-8856) http://git-wip-us.apache.org/repos/asf/cassandra/blob/0127b696/src/java/org/apache/cassandra/db/ColumnFamilyStore.java -- diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java index e4531f2..9b792b6 100644 --- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java +++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java @@ -1179,7 +1179,7 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean Memtable mt = data.getMemtableFor(opGroup, replayPosition); final long timeDelta = mt.put(key, columnFamily, indexer, opGroup); maybeUpdateRowCache(key); -metric.samplers.get(Sampler.WRITES).addSample(key.getKey()); +metric.samplers.get(Sampler.WRITES).addSample(key.getKey(), key.hashCode(), 1); metric.writeLatency.addNano(System.nanoTime() - start); if(timeDelta < Long.MAX_VALUE) metric.colUpdateTimeDeltaHistogram.update(timeDelta); @@ -1915,7 +1915,7 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean columns = controller.getTopLevelColumns(Memtable.MEMORY_POOL.needToCopyOnHeap()); } if (columns != null) -metric.samplers.get(Sampler.READS).addSample(filter.key.getKey()); +metric.samplers.get(Sampler.READS).addSample(filter.key.getKey(), filter.key.hashCode(), 1); metric.updateSSTableIterated(controller.getSstablesIterated()); return columns; } http://git-wip-us.apache.org/repos/asf/cassandra/blob/0127b696/src/java/org/apache/cassandra/utils/TopKSampler.java -- diff --git a/src/java/org/apache/cassandra/utils/TopKSampler.java b/src/java/org/apache/cassandra/utils/TopKSampler.java index 29d46286..a8bd602 100644 --- a/src/java/org/apache/cassandra/utils/TopKSampler.java +++ b/src/java/org/apache/cassandra/utils/TopKSampler.java @@ -81,7 +81,7 @@ public class TopKSampler public void addSample(T item) { -addSample(item, 1); +addSample(item, item.hashCode(), 1); } /** @@ -89,7 +89,7 @@ public class TopKSampler * use the "Sampler" thread pool to record results if the sampler is enabled. 
If not * sampling this is a NOOP */ -public void addSample(final T item, final int value) +public void addSample(final T item, final long hash, final int value) { if (enabled) { @@ -107,7 +107,7 @@ public class TopKSampler try { summary.offer(item, value); -hll.offer(item); +hll.offerHashed(hash); } catch (Exception e) { logger.debug("Failure to offer sample", e); http://git-wip-us.apache.org/repos/asf/cassandra/blob/0127b696/test/unit/org/apache/cassandra/utils/TopKSamplerTest.java -- diff --git a/test/unit/org/apache/cassandra/utils
[jira] [Commented] (CASSANDRA-8883) Percentile computation should use ceil not floor in EstimatedHistogram
[ https://issues.apache.org/jira/browse/CASSANDRA-8883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346391#comment-14346391 ] Aleksey Yeschenko commented on CASSANDRA-8883: -- Thanks guys. I'll commit later tonight. > Percentile computation should use ceil not floor in EstimatedHistogram > -- > > Key: CASSANDRA-8883 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8883 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Chris Lohfink >Assignee: Carl Yeksigian >Priority: Minor > Fix For: 2.1.4 > > Attachments: 8883-2.1.txt > > > When computing the pcount Cassandra uses floor and the comparison with > elements is >= so given a simple example of there being a total of five > elements > {code} > // data > [1, 1, 1, 1, 1] > // offsets > [1, 2, 3, 4, 5] > {code} > Cassandra would report the 50th percentile as 2. While 3 is the more > expected value. As a comparison using numpy > {code} > import numpy as np > np.percentile(np.array([1, 2, 3, 4, 5]), 50) > ==> 3.0 > {code} > The percentiles was added in CASSANDRA-4022 but is now used a lot in metrics > Cassandra reports. I think it should error on the side on overestimating > instead of underestimating. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
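As a concrete illustration of the floor-vs-ceil difference, the standalone sketch below mimics the cumulative ">= pcount" walk described in the ticket (it is not the EstimatedHistogram code itself) on the five-element example from the description: floor lands one element too early at the 50th percentile, ceil matches numpy's answer.
{code}
// Worked example for the 5-element data set [1, 2, 3, 4, 5] from the description.
public class PercentileFloorVsCeil
{
    // Returns the value whose cumulative count first reaches pcount, mirroring
    // the ">=" comparison described in the ticket (assumption for illustration).
    static long percentile(long[] sortedValues, double percentile, boolean useCeil)
    {
        long count = sortedValues.length;
        long pcount = useCeil ? (long) Math.ceil(count * percentile)
                              : (long) Math.floor(count * percentile);
        if (pcount == 0)
            return 0;

        long elements = 0;
        for (long value : sortedValues)
        {
            elements++;             // each value contributes a count of 1
            if (elements >= pcount)
                return value;
        }
        return sortedValues[sortedValues.length - 1];
    }

    public static void main(String[] args)
    {
        long[] values = { 1, 2, 3, 4, 5 };
        // floor: pcount = floor(5 * 0.5) = 2 -> reports 2
        System.out.println(percentile(values, 0.50, false));
        // ceil:  pcount = ceil(5 * 0.5)  = 3 -> reports 3, matching numpy's 3.0
        System.out.println(percentile(values, 0.50, true));
    }
}
{code}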
[jira] [Updated] (CASSANDRA-7816) Duplicate DOWN/UP Events Pushed with Native Protocol
[ https://issues.apache.org/jira/browse/CASSANDRA-7816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-7816: - Reviewer: Tyler Hobbs > Duplicate DOWN/UP Events Pushed with Native Protocol > > > Key: CASSANDRA-7816 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7816 > Project: Cassandra > Issue Type: Bug > Components: API >Reporter: Michael Penick >Assignee: Stefania >Priority: Minor > Fix For: 2.0.13, 2.1.4 > > Attachments: cassandra_7816.txt, tcpdump_repeating_status_change.txt, > trunk-7816.txt > > > Added "MOVED_NODE" as a possible type of topology change and also specified > that it is possible to receive the same event multiple times. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CASSANDRA-7816) Duplicate DOWN/UP Events Pushed with Native Protocol
[ https://issues.apache.org/jira/browse/CASSANDRA-7816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346387#comment-14346387 ] Stefania edited comment on CASSANDRA-7816 at 3/4/15 4:40 AM: - Submitting a patch for 2.0, cassandra_7816.txt. The duplicate DOWN notification is caused by {{Gossiper.handleMajorStateChange}} passing the remote endpoint state to {{StorageService.onRestart}}, which then incorrectly comes to the conclusion that the node was not previously marked down. I changed it to receive the local state, if not null. If it is null we do not call {{onRestart}}, please confirm that this does not introduce problems (I checked all {{onStart}} implementations and it looks OK to me). The multiple UP notifications are caused by the call to {{markAlive()}} in {{Gossiper.applyStateLocally()}} when receiving multiple gossip messages. Because {{markAlive()}} only marks the node as alive after receiving an echo message (CASSANDRA-3533), there is a delay during which the node is still not marked as alive. If gossip messages are received during this period, we incorrectly call {{markAlive()}} multiple times in {{applyStateLocally()}}. I fixed it by adding a flag to {{EndpointState}} and by checking this flag in {{markAlive}}, if an echo is outstanding then we do not send another one until we've received an answer. When there is a major change, {{markAlive()}} is called on the remote state, for which this flag is not set and so we try againg sending an echo message in mark alive even if we did not receive a reply to a previous echo request. was (Author: stefania): Submitting a patch for 2.0. The duplicate DOWN notification is caused by {{Gossiper.handleMajorStateChange}} passing the remote endpoint state to {{StorageService.onRestart}}, which then incorrectly comes to the conclusion that the node was not previously marked down. I changed it to receive the local state, if not null. If it is null we do not call {{onRestart}}, please confirm that this does not introduce problems (I checked all {{onStart}} implementations and it looks OK to me). The multiple UP notifications are caused by the call to {{markAlive()}} in {{Gossiper.applyStateLocally()}} when receiving multiple gossip messages. Because {{markAlive()}} only marks the node as alive after receiving an echo message (CASSANDRA-3533), there is a delay during which the node is still not marked as alive. If gossip messages are received during this period, we incorrectly call {{markAlive()}} multiple times in {{applyStateLocally()}}. I fixed it by adding a flag to {{EndpointState}} and by checking this flag in {{markAlive}}, if an echo is outstanding then we do not send another one until we've received an answer. When there is a major change, {{markAlive()}} is called on the remote state, for which this flag is not set and so we try againg sending an echo message in mark alive even if we did not receive a reply to a previous echo request. > Duplicate DOWN/UP Events Pushed with Native Protocol > > > Key: CASSANDRA-7816 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7816 > Project: Cassandra > Issue Type: Bug > Components: API >Reporter: Michael Penick >Assignee: Stefania >Priority: Minor > Fix For: 2.0.13, 2.1.4 > > Attachments: cassandra_7816.txt, tcpdump_repeating_status_change.txt, > trunk-7816.txt > > > Added "MOVED_NODE" as a possible type of topology change and also specified > that it is possible to receive the same event multiple times. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-7816) Duplicate DOWN/UP Events Pushed with Native Protocol
[ https://issues.apache.org/jira/browse/CASSANDRA-7816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stefania updated CASSANDRA-7816: Attachment: cassandra_7816.txt Submitting a patch for 2.0. The duplicate DOWN notification is caused by {{Gossiper.handleMajorStateChange}} passing the remote endpoint state to {{StorageService.onRestart}}, which then incorrectly comes to the conclusion that the node was not previously marked down. I changed it to receive the local state, if not null. If it is null we do not call {{onRestart}}; please confirm that this does not introduce problems (I checked all {{onStart}} implementations and it looks OK to me). The multiple UP notifications are caused by the call to {{markAlive()}} in {{Gossiper.applyStateLocally()}} when receiving multiple gossip messages. Because {{markAlive()}} only marks the node as alive after receiving an echo message (CASSANDRA-3533), there is a delay during which the node is still not marked as alive. If gossip messages are received during this period, we incorrectly call {{markAlive()}} multiple times in {{applyStateLocally()}}. I fixed it by adding a flag to {{EndpointState}} and by checking this flag in {{markAlive}}: if an echo is outstanding, we do not send another one until we've received an answer. When there is a major change, {{markAlive()}} is called on the remote state, for which this flag is not set, and so we try sending an echo message again in {{markAlive}} even if we did not receive a reply to a previous echo request. > Duplicate DOWN/UP Events Pushed with Native Protocol > > > Key: CASSANDRA-7816 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7816 > Project: Cassandra > Issue Type: Bug > Components: API >Reporter: Michael Penick >Assignee: Stefania >Priority: Minor > Fix For: 2.0.13, 2.1.4 > > Attachments: cassandra_7816.txt, tcpdump_repeating_status_change.txt, > trunk-7816.txt > > > Added "MOVED_NODE" as a possible type of topology change and also specified > that it is possible to receive the same event multiple times. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
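A rough sketch of the "at most one outstanding echo" guard described above, with illustrative names only (the actual patch puts the flag on {{EndpointState}} and checks it in {{Gossiper.markAlive()}}):
{code}
import java.util.concurrent.atomic.AtomicBoolean;

// Simplified sketch of the guard described in the comment above; names are
// illustrative, not the actual Gossiper/EndpointState fields.
class EndpointStateSketch
{
    // true while an echo request has been sent and its reply is still pending
    private final AtomicBoolean echoOutstanding = new AtomicBoolean(false);

    boolean tryStartEcho()
    {
        // only the first caller wins until the reply clears the flag
        return echoOutstanding.compareAndSet(false, true);
    }

    void echoCompleted()
    {
        echoOutstanding.set(false);
    }
}

class GossiperSketch
{
    // Called each time applyStateLocally() sees the endpoint as not-yet-alive.
    void markAlive(EndpointStateSketch state)
    {
        if (!state.tryStartEcho())
            return; // an echo is already in flight; avoid queueing duplicate UP notifications

        sendEchoRequest(state);
    }

    // On the echo reply, actually mark the node UP and clear the flag.
    void onEchoReply(EndpointStateSketch state)
    {
        state.echoCompleted();
        realMarkAlive(state);
    }

    void sendEchoRequest(EndpointStateSketch state) { /* send ECHO message (elided) */ }
    void realMarkAlive(EndpointStateSketch state)   { /* notify subscribers the node is UP (elided) */ }

    public static void main(String[] args)
    {
        GossiperSketch gossiper = new GossiperSketch();
        EndpointStateSketch state = new EndpointStateSketch();

        gossiper.markAlive(state);   // sends one echo
        gossiper.markAlive(state);   // duplicate gossip message: no second echo sent
        gossiper.onEchoReply(state); // node is marked UP exactly once
    }
}
{code}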
[jira] [Updated] (CASSANDRA-8834) Top partitions reporting wrong cardinality
[ https://issues.apache.org/jira/browse/CASSANDRA-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Lohfink updated CASSANDRA-8834: - Attachment: CASSANDRA-8834-v3.txt > Top partitions reporting wrong cardinality > -- > > Key: CASSANDRA-8834 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8834 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Chris Lohfink >Assignee: Chris Lohfink > Fix For: 2.1.4 > > Attachments: CASSANDRA-8834-v3.txt, cardinality.patch > > > It always reports a cardinality of 1. Patch also includes a try/catch around > the conversion of partition keys that isn't always handled well in thrift cfs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-8834) Top partitions reporting wrong cardinality
[ https://issues.apache.org/jira/browse/CASSANDRA-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Lohfink updated CASSANDRA-8834: - Attachment: (was: CASSANDRA-8834-v2.txt) > Top partitions reporting wrong cardinality > -- > > Key: CASSANDRA-8834 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8834 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Chris Lohfink >Assignee: Chris Lohfink > Fix For: 2.1.4 > > Attachments: CASSANDRA-8834-v3.txt, cardinality.patch > > > It always reports a cardinality of 1. Patch also includes a try/catch around > the conversion of partition keys that isn't always handled well in thrift cfs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8834) Top partitions reporting wrong cardinality
[ https://issues.apache.org/jira/browse/CASSANDRA-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346383#comment-14346383 ] Chris Lohfink commented on CASSANDRA-8834: -- messed up v2 patch, v3 is with fix > Top partitions reporting wrong cardinality > -- > > Key: CASSANDRA-8834 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8834 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Chris Lohfink >Assignee: Chris Lohfink > Fix For: 2.1.4 > > Attachments: CASSANDRA-8834-v3.txt, cardinality.patch > > > It always reports a cardinality of 1. Patch also includes a try/catch around > the conversion of partition keys that isn't always handled well in thrift cfs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8834) Top partitions reporting wrong cardinality
[ https://issues.apache.org/jira/browse/CASSANDRA-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346374#comment-14346374 ] Chris Lohfink commented on CASSANDRA-8834: -- Changed it to actually use the token's hash: we're already using MD5/Murmur, which gives good enough distribution for HLL, and passing the existing hash along avoids needing the byte[] that Clearspring's Murmur implementation would require. Also removed the try/catch for the marshal exception. > Top partitions reporting wrong cardinality > -- > > Key: CASSANDRA-8834 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8834 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Chris Lohfink >Assignee: Chris Lohfink > Fix For: 2.1.4 > > Attachments: CASSANDRA-8834-v2.txt, cardinality.patch > > > It always reports a cardinality of 1. Patch also includes a try/catch around > the conversion of partition keys that isn't always handled well in thrift cfs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
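The resulting pattern - hash the partition key once and reuse that hash for both the top-k summary and the cardinality estimator - can be sketched as below. This is a hedged illustration rather than the patched TopKSampler: the class name, capacity, and precision are assumptions, and String#hashCode stands in for the token hash; only the offer/offerHashed split mirrors the change in the attached patch.
{code}
import com.clearspring.analytics.stream.StreamSummary;
import com.clearspring.analytics.stream.cardinality.HyperLogLogPlus;

// Hedged sketch of the sampling pattern: the caller already has a hash of the
// partition key, so the cardinality estimate reuses it instead of re-hashing
// the serialized key bytes.
public class HashReuseSamplerSketch<T>
{
    private final StreamSummary<T> summary = new StreamSummary<>(1000); // top-k capacity (illustrative)
    private final HyperLogLogPlus hll = new HyperLogLogPlus(14);        // precision p=14 (illustrative)

    public void addSample(T item, long precomputedHash, int count)
    {
        summary.offer(item, count);       // frequency estimate for "top partitions"
        hll.offerHashed(precomputedHash); // cardinality estimate, no extra byte[] hashing
    }

    public long cardinality()
    {
        return hll.cardinality();
    }

    public static void main(String[] args)
    {
        HashReuseSamplerSketch<String> sampler = new HashReuseSamplerSketch<>();
        for (int i = 0; i < 100; i++)
        {
            String key = "key" + (i % 10);
            sampler.addSample(key, (long) key.hashCode(), 1); // hashCode() stands in for the token hash
        }
        System.out.println(sampler.cardinality()); // ~10 distinct keys, not 1
    }
}
{code}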
[jira] [Updated] (CASSANDRA-8834) Top partitions reporting wrong cardinality
[ https://issues.apache.org/jira/browse/CASSANDRA-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Lohfink updated CASSANDRA-8834: - Attachment: CASSANDRA-8834-v2.txt > Top partitions reporting wrong cardinality > -- > > Key: CASSANDRA-8834 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8834 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Chris Lohfink >Assignee: Chris Lohfink > Fix For: 2.1.4 > > Attachments: CASSANDRA-8834-v2.txt, cardinality.patch > > > It always reports a cardinality of 1. Patch also includes a try/catch around > the conversion of partition keys that isn't always handled well in thrift cfs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8883) Percentile computation should use ceil not floor in EstimatedHistogram
[ https://issues.apache.org/jira/browse/CASSANDRA-8883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346325#comment-14346325 ] Chris Lohfink commented on CASSANDRA-8883: -- +1 lgtm, thanks! > Percentile computation should use ceil not floor in EstimatedHistogram > -- > > Key: CASSANDRA-8883 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8883 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Chris Lohfink >Assignee: Carl Yeksigian >Priority: Minor > Fix For: 2.1.4 > > Attachments: 8883-2.1.txt > > > When computing the pcount Cassandra uses floor and the comparison with > elements is >= so given a simple example of there being a total of five > elements > {code} > // data > [1, 1, 1, 1, 1] > // offsets > [1, 2, 3, 4, 5] > {code} > Cassandra would report the 50th percentile as 2. While 3 is the more > expected value. As a comparison using numpy > {code} > import numpy as np > np.percentile(np.array([1, 2, 3, 4, 5]), 50) > ==> 3.0 > {code} > The percentiles was added in CASSANDRA-4022 but is now used a lot in metrics > Cassandra reports. I think it should error on the side on overestimating > instead of underestimating. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
cassandra git commit: move strategy field to local
Repository: cassandra Updated Branches: refs/heads/trunk dd825a5f0 -> 3f8806d23 move strategy field to local Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3f8806d2 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3f8806d2 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3f8806d2 Branch: refs/heads/trunk Commit: 3f8806d233e556ae224472b3a058c2ec71468b9a Parents: dd825a5 Author: Dave Brosius Authored: Tue Mar 3 22:04:24 2015 -0500 Committer: Dave Brosius Committed: Tue Mar 3 22:04:24 2015 -0500 -- .../cassandra/service/DatacenterSyncWriteResponseHandler.java | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/3f8806d2/src/java/org/apache/cassandra/service/DatacenterSyncWriteResponseHandler.java -- diff --git a/src/java/org/apache/cassandra/service/DatacenterSyncWriteResponseHandler.java b/src/java/org/apache/cassandra/service/DatacenterSyncWriteResponseHandler.java index 81ae1f3..511a122 100644 --- a/src/java/org/apache/cassandra/service/DatacenterSyncWriteResponseHandler.java +++ b/src/java/org/apache/cassandra/service/DatacenterSyncWriteResponseHandler.java @@ -20,6 +20,7 @@ package org.apache.cassandra.service; import java.net.InetAddress; import java.util.Collection; import java.util.HashMap; +import java.util.Map; import java.util.concurrent.atomic.AtomicInteger; import org.apache.cassandra.config.DatabaseDescriptor; @@ -37,8 +38,7 @@ public class DatacenterSyncWriteResponseHandler extends AbstractWriteResponseHan { private static final IEndpointSnitch snitch = DatabaseDescriptor.getEndpointSnitch(); -private final NetworkTopologyStrategy strategy; -private final HashMap responses = new HashMap(); +private final Map responses = new HashMap(); private final AtomicInteger acks = new AtomicInteger(0); public DatacenterSyncWriteResponseHandler(Collection naturalEndpoints, @@ -52,7 +52,7 @@ public class DatacenterSyncWriteResponseHandler extends AbstractWriteResponseHan super(keyspace, naturalEndpoints, pendingEndpoints, consistencyLevel, callback, writeType); assert consistencyLevel == ConsistencyLevel.EACH_QUORUM; -strategy = (NetworkTopologyStrategy) keyspace.getReplicationStrategy(); +NetworkTopologyStrategy strategy = (NetworkTopologyStrategy) keyspace.getReplicationStrategy(); for (String dc : strategy.getDatacenters()) {
[1/2] cassandra git commit: use long math for long results
Repository: cassandra Updated Branches: refs/heads/trunk 93b365cdc -> dd825a5f0 use long math for long results Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9499f7cb Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9499f7cb Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9499f7cb Branch: refs/heads/trunk Commit: 9499f7cb98f678b6dde0c24ce87c39bc13b24ac5 Parents: 2acd05d Author: Dave Brosius Authored: Tue Mar 3 21:50:23 2015 -0500 Committer: Dave Brosius Committed: Tue Mar 3 21:50:23 2015 -0500 -- .../cassandra/io/compress/CompressionMetadata.java | 16 +++- .../apache/cassandra/io/sstable/SSTableReader.java | 2 +- .../cassandra/streaming/StreamReceiveTask.java | 2 +- .../cassandra/stress/settings/SettingsSchema.java | 2 +- 4 files changed, 10 insertions(+), 12 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/9499f7cb/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java -- diff --git a/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java b/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java index b29e259..59c5da5 100644 --- a/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java +++ b/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java @@ -38,7 +38,6 @@ import java.util.TreeSet; import com.google.common.annotations.VisibleForTesting; import com.google.common.primitives.Longs; -import org.apache.cassandra.cache.RefCountedMemory; import org.apache.cassandra.db.TypeSizes; import org.apache.cassandra.exceptions.ConfigurationException; import org.apache.cassandra.io.FSReadError; @@ -47,7 +46,6 @@ import org.apache.cassandra.io.IVersionedSerializer; import org.apache.cassandra.io.sstable.Component; import org.apache.cassandra.io.sstable.CorruptSSTableException; import org.apache.cassandra.io.sstable.Descriptor; -import org.apache.cassandra.io.sstable.SSTableWriter; import org.apache.cassandra.io.util.DataOutputPlus; import org.apache.cassandra.io.util.FileUtils; import org.apache.cassandra.io.util.Memory; @@ -181,7 +179,7 @@ public class CompressionMetadata if (chunkCount <= 0) throw new IOException("Compressed file with 0 chunks encountered: " + input); -Memory offsets = Memory.allocate(chunkCount * 8); +Memory offsets = Memory.allocate(chunkCount * 8L); for (int i = 0; i < chunkCount; i++) { @@ -248,7 +246,7 @@ public class CompressionMetadata endIndex = section.right % parameters.chunkLength() == 0 ? endIndex - 1 : endIndex; for (int i = startIndex; i <= endIndex; i++) { -long offset = i * 8; +long offset = i * 8L; long chunkOffset = chunkOffsets.getLong(offset); long nextChunkOffset = offset + 8 == chunkOffsetsSize ? 
compressedFileLength @@ -270,7 +268,7 @@ public class CompressionMetadata private final CompressionParameters parameters; private final String filePath; private int maxCount = 100; -private SafeMemory offsets = new SafeMemory(maxCount * 8); +private SafeMemory offsets = new SafeMemory(maxCount * 8L); private int count = 0; private Writer(CompressionParameters parameters, String path) @@ -288,11 +286,11 @@ public class CompressionMetadata { if (count == maxCount) { -SafeMemory newOffsets = offsets.copy((maxCount *= 2) * 8); +SafeMemory newOffsets = offsets.copy((maxCount *= 2L) * 8); offsets.close(); offsets = newOffsets; } -offsets.setLong(8 * count++, offset); +offsets.setLong(8L * count++, offset); } private void writeHeader(DataOutput out, long dataLength, int chunks) @@ -362,7 +360,7 @@ public class CompressionMetadata count = (int) (dataLength / parameters.chunkLength()); // grab our actual compressed length from the next offset from our the position we're opened to if (count < this.count) -compressedLength = offsets.getLong(count * 8); +compressedLength = offsets.getLong(count * 8L); break; default: @@ -401,7 +399,7 @@ public class CompressionMetadata assert chunks == count; writeHeader(out, dataLength, chunks); for (int i = 0 ; i < count ; i++) -out.writeLong(offsets.getLong(i *
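For context on why the commit adds the L suffix: when both operands are int, Java performs the multiplication in 32-bit arithmetic and only widens the already-overflowed result to long. A small illustrative example (the numbers are made up, not taken from the commit):
{code}
// Demonstrates why "int * int" assigned to a long can silently overflow,
// which is what the 8 -> 8L changes in CompressionMetadata guard against.
public class LongMathForLongResults
{
    public static void main(String[] args)
    {
        int chunkCount = 300_000_000;   // illustrative: a very large chunk count

        long wrong = chunkCount * 8;    // 32-bit multiply overflows, then widens
        long right = chunkCount * 8L;   // one long operand forces 64-bit multiply

        System.out.println(wrong);      // -1894967296 (overflowed)
        System.out.println(right);      // 2400000000
    }
}
{code}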
[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into trunk
Merge branch 'cassandra-2.1' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dd825a5f Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dd825a5f Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dd825a5f Branch: refs/heads/trunk Commit: dd825a5f06fdcfaff46fd0c316840ad3a45504cf Parents: 93b365c 9499f7c Author: Dave Brosius Authored: Tue Mar 3 21:54:03 2015 -0500 Committer: Dave Brosius Committed: Tue Mar 3 21:54:03 2015 -0500 -- .../cassandra/io/compress/CompressionMetadata.java | 17 +++-- .../cassandra/io/sstable/format/SSTableReader.java | 2 +- .../cassandra/streaming/StreamReceiveTask.java | 2 +- .../cassandra/stress/settings/SettingsSchema.java | 2 +- 4 files changed, 10 insertions(+), 13 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/dd825a5f/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java -- diff --cc src/java/org/apache/cassandra/io/compress/CompressionMetadata.java index f550de4,59c5da5..eef9d0c --- a/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java +++ b/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java @@@ -38,8 -38,6 +38,7 @@@ import java.util.TreeSet import com.google.common.annotations.VisibleForTesting; import com.google.common.primitives.Longs; - import org.apache.cassandra.cache.RefCountedMemory; +import org.apache.cassandra.config.DatabaseDescriptor; import org.apache.cassandra.db.TypeSizes; import org.apache.cassandra.exceptions.ConfigurationException; import org.apache.cassandra.io.FSReadError; @@@ -48,8 -46,6 +47,7 @@@ import org.apache.cassandra.io.IVersion import org.apache.cassandra.io.sstable.Component; import org.apache.cassandra.io.sstable.CorruptSSTableException; import org.apache.cassandra.io.sstable.Descriptor; +import org.apache.cassandra.io.sstable.format.Version; - import org.apache.cassandra.io.sstable.format.SSTableWriter; import org.apache.cassandra.io.util.DataOutputPlus; import org.apache.cassandra.io.util.FileUtils; import org.apache.cassandra.io.util.Memory; @@@ -269,11 -268,9 +267,10 @@@ public class CompressionMetadat private final CompressionParameters parameters; private final String filePath; private int maxCount = 100; - private SafeMemory offsets = new SafeMemory(maxCount * 8); + private SafeMemory offsets = new SafeMemory(maxCount * 8L); private int count = 0; - private Version latestVersion = DatabaseDescriptor.getSSTableFormat().info.getLatestVersion(); + private Writer(CompressionParameters parameters, String path) { this.parameters = parameters; http://git-wip-us.apache.org/repos/asf/cassandra/blob/dd825a5f/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java -- diff --cc src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java index 0b55794,000..4468d57 mode 100644,00..100644 --- a/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java +++ b/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java @@@ -1,2057 -1,0 +1,2057 @@@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.cassandra.io.sstable.format; + +import java.io.*; +import java.nio.ByteBuffer; +import java.util.*; +import java.util.concurrent.*; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicLong; + +import com.google.common.annotations.VisibleForTesting; +import com.google.common.base.Predicate; +import com.google.common.collect.Iterators; +import com.google.common.collect.Ordering; +import com.google.common.primitives.Longs; +import com.google.common.util.concurrent.RateLimiter; + +import com.clearspring.analytics.stream.cardinality.CardinalityMergeException; +import com.cl
cassandra git commit: use long math for long results
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1 2acd05d96 -> 9499f7cb9 use long math for long results Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9499f7cb Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9499f7cb Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9499f7cb Branch: refs/heads/cassandra-2.1 Commit: 9499f7cb98f678b6dde0c24ce87c39bc13b24ac5 Parents: 2acd05d Author: Dave Brosius Authored: Tue Mar 3 21:50:23 2015 -0500 Committer: Dave Brosius Committed: Tue Mar 3 21:50:23 2015 -0500 -- .../cassandra/io/compress/CompressionMetadata.java | 16 +++- .../apache/cassandra/io/sstable/SSTableReader.java | 2 +- .../cassandra/streaming/StreamReceiveTask.java | 2 +- .../cassandra/stress/settings/SettingsSchema.java | 2 +- 4 files changed, 10 insertions(+), 12 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/9499f7cb/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java -- diff --git a/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java b/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java index b29e259..59c5da5 100644 --- a/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java +++ b/src/java/org/apache/cassandra/io/compress/CompressionMetadata.java @@ -38,7 +38,6 @@ import java.util.TreeSet; import com.google.common.annotations.VisibleForTesting; import com.google.common.primitives.Longs; -import org.apache.cassandra.cache.RefCountedMemory; import org.apache.cassandra.db.TypeSizes; import org.apache.cassandra.exceptions.ConfigurationException; import org.apache.cassandra.io.FSReadError; @@ -47,7 +46,6 @@ import org.apache.cassandra.io.IVersionedSerializer; import org.apache.cassandra.io.sstable.Component; import org.apache.cassandra.io.sstable.CorruptSSTableException; import org.apache.cassandra.io.sstable.Descriptor; -import org.apache.cassandra.io.sstable.SSTableWriter; import org.apache.cassandra.io.util.DataOutputPlus; import org.apache.cassandra.io.util.FileUtils; import org.apache.cassandra.io.util.Memory; @@ -181,7 +179,7 @@ public class CompressionMetadata if (chunkCount <= 0) throw new IOException("Compressed file with 0 chunks encountered: " + input); -Memory offsets = Memory.allocate(chunkCount * 8); +Memory offsets = Memory.allocate(chunkCount * 8L); for (int i = 0; i < chunkCount; i++) { @@ -248,7 +246,7 @@ public class CompressionMetadata endIndex = section.right % parameters.chunkLength() == 0 ? endIndex - 1 : endIndex; for (int i = startIndex; i <= endIndex; i++) { -long offset = i * 8; +long offset = i * 8L; long chunkOffset = chunkOffsets.getLong(offset); long nextChunkOffset = offset + 8 == chunkOffsetsSize ? 
compressedFileLength @@ -270,7 +268,7 @@ public class CompressionMetadata private final CompressionParameters parameters; private final String filePath; private int maxCount = 100; -private SafeMemory offsets = new SafeMemory(maxCount * 8); +private SafeMemory offsets = new SafeMemory(maxCount * 8L); private int count = 0; private Writer(CompressionParameters parameters, String path) @@ -288,11 +286,11 @@ public class CompressionMetadata { if (count == maxCount) { -SafeMemory newOffsets = offsets.copy((maxCount *= 2) * 8); +SafeMemory newOffsets = offsets.copy((maxCount *= 2L) * 8); offsets.close(); offsets = newOffsets; } -offsets.setLong(8 * count++, offset); +offsets.setLong(8L * count++, offset); } private void writeHeader(DataOutput out, long dataLength, int chunks) @@ -362,7 +360,7 @@ public class CompressionMetadata count = (int) (dataLength / parameters.chunkLength()); // grab our actual compressed length from the next offset from our the position we're opened to if (count < this.count) -compressedLength = offsets.getLong(count * 8); +compressedLength = offsets.getLong(count * 8L); break; default: @@ -401,7 +399,7 @@ public class CompressionMetadata assert chunks == count; writeHeader(out, dataLength, chunks); for (int i = 0 ; i < count ; i++) -out.writeLong(offs
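The commit above (applied to cassandra-2.1 and merged up to trunk) replaces expressions such as {{chunkCount * 8}} with {{chunkCount * 8L}}. In Java, multiplying two ints is evaluated in 32-bit arithmetic and only widened to long afterwards, so the product can silently overflow before the assignment. A minimal sketch, independent of the Cassandra sources, showing the difference:

{code}
public class LongMathDemo
{
    public static void main(String[] args)
    {
        // ~300 million 8-byte offsets: the true byte count (2.4 GB) exceeds Integer.MAX_VALUE
        int chunkCount = 300_000_000;

        long intMath  = chunkCount * 8;   // int * int overflows, then widens: -1894967296
        long longMath = chunkCount * 8L;  // int * long is evaluated in 64 bits: 2400000000

        System.out.println(intMath + " vs " + longMath);
    }
}
{code}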
[jira] [Commented] (CASSANDRA-8902) Missing data files, database corruption
[ https://issues.apache.org/jira/browse/CASSANDRA-8902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346188#comment-14346188 ] Aleksey Yeschenko commented on CASSANDRA-8902: -- [~kishkaru] test against 'cassandra-2.0.13' branch, please. Thanks. > Missing data files, database corruption > --- > > Key: CASSANDRA-8902 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8902 > Project: Cassandra > Issue Type: Bug > Environment: ruby-driver 2.1.0 | C* 2.0.12 >Reporter: Kishan Karunaratne > > During a recent endurance test run of the ruby-driver (as well as a previous > run), I see many of the following exceptions thrown in the system.log in the > 2nd node (10.240.185.204): > {noformat} > ERROR [CompactionExecutor:81] 2015-02-20 22:32:33,064 CassandraDaemon.java > (line 199) Exception in thread Thread[CompactionExecutor:81,1,main] > java.lang.RuntimeException: java.io.FileNotFoundException: > /srv/performance/cass/data/duration_test1/ints/duration_test1-ints-jb-39-Data.db > (No such file or directory) > at > org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:52) > at > org.apache.cassandra.io.sstable.SSTableReader.openDataReader(SSTableReader.java:1399) > at > org.apache.cassandra.io.sstable.SSTableScanner.(SSTableScanner.java:67) > at > org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1205) > at > org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1217) > at > org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:272) > at > org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:278) > at > org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:131) > at > org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) > at > org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60) > at > org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59) > at > org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) > at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) > at java.util.concurrent.FutureTask.run(FutureTask.java:166) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:724) > Caused by: java.io.FileNotFoundException: > /srv/performance/cass/data/duration_test1/ints/duration_test1-ints-jb-39-Data.db > (No such file or directory) > at java.io.RandomAccessFile.open(Native Method) > at java.io.RandomAccessFile.(RandomAccessFile.java:233) > at > org.apache.cassandra.io.util.RandomAccessReader.(RandomAccessReader.java:58) > at > org.apache.cassandra.io.compress.CompressedRandomAccessReader.(CompressedRandomAccessReader.java:76) > at > org.apache.cassandra.io.compress.CompressedThrottledReader.(CompressedThrottledReader.java:34) > at > org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:48) > ... 17 more > {noformat} > I've checked this data directory and indeed this specific db file is missing. > This would signal a database corruption. 
> The endurance test uses a 3-node cluster run over 3 days, with a chaos rhino > randomly restarting one of the nodes. It seems like the nodes have also gone > out of sync. For example, getting a nodetool status on one node gives: > {noformat} > $ cassandra/bin/nodetool -h 10.240.61.210 status > -- Address Load Tokens Owns Host ID > Rack > DN 10.240.210.69 533.32 MB 256 32.3% > 2947fe5e-f149-4ff6-b26c-570ae72b7606 RAC1 > DN 10.240.185.204 570.86 MB 256 36.7% > 3a6e2152-c7dc-457a-a4c5-4c6f01986dd0 RAC1 > UN 10.240.61.210 877.43 MB 256 31.0% > c3b1beff-9587-4851-85a9-05a9ba6deaff RAC1 > {noformat} > While on another node it gives: > {noformat} > $ cassandra/bin/nodetool -h 10.240.210.69 status (or 10.240.185.204) > -- Address Load Tokens Owns Host ID > Rack > UN 10.240.210.69 4.83 GB256 32.3% > 2947fe5e-f149-4ff6-b26c-570ae72b7606 RAC1 > UN 10.240.185.204 4.88 GB256 36.7% > 3a6e2152-c7dc-457a-a4c5-4c6f01986dd0 RAC1 > DN 10.240.61.210 877.43 MB 256
[jira] [Commented] (CASSANDRA-8902) Missing data files, database corruption
[ https://issues.apache.org/jira/browse/CASSANDRA-8902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346185#comment-14346185 ] Kishan Karunaratne commented on CASSANDRA-8902: --- I have not tested against 2.0-HEAD. I'll launch a 3-day trial and check the status. Let me get back to you. > Missing data files, database corruption > --- > > Key: CASSANDRA-8902 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8902 > Project: Cassandra > Issue Type: Bug > Environment: ruby-driver 2.1.0 | C* 2.0.12 >Reporter: Kishan Karunaratne > > During a recent endurance test run of the ruby-driver (as well as a previous > run), I see many of the following exceptions thrown in the system.log in the > 2nd node (10.240.185.204): > {noformat} > ERROR [CompactionExecutor:81] 2015-02-20 22:32:33,064 CassandraDaemon.java > (line 199) Exception in thread Thread[CompactionExecutor:81,1,main] > java.lang.RuntimeException: java.io.FileNotFoundException: > /srv/performance/cass/data/duration_test1/ints/duration_test1-ints-jb-39-Data.db > (No such file or directory) > at > org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:52) > at > org.apache.cassandra.io.sstable.SSTableReader.openDataReader(SSTableReader.java:1399) > at > org.apache.cassandra.io.sstable.SSTableScanner.(SSTableScanner.java:67) > at > org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1205) > at > org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1217) > at > org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:272) > at > org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:278) > at > org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:131) > at > org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) > at > org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60) > at > org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59) > at > org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) > at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) > at java.util.concurrent.FutureTask.run(FutureTask.java:166) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:724) > Caused by: java.io.FileNotFoundException: > /srv/performance/cass/data/duration_test1/ints/duration_test1-ints-jb-39-Data.db > (No such file or directory) > at java.io.RandomAccessFile.open(Native Method) > at java.io.RandomAccessFile.(RandomAccessFile.java:233) > at > org.apache.cassandra.io.util.RandomAccessReader.(RandomAccessReader.java:58) > at > org.apache.cassandra.io.compress.CompressedRandomAccessReader.(CompressedRandomAccessReader.java:76) > at > org.apache.cassandra.io.compress.CompressedThrottledReader.(CompressedThrottledReader.java:34) > at > org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:48) > ... 17 more > {noformat} > I've checked this data directory and indeed this specific db file is missing. > This would signal a database corruption. 
> The endurance test uses a 3-node cluster run over 3 days, with a chaos rhino > randomly restarting one of the nodes. It seems like the nodes have also gone > out of sync. For example, getting a nodetool status on one node gives: > {noformat} > $ cassandra/bin/nodetool -h 10.240.61.210 status > -- Address Load Tokens Owns Host ID > Rack > DN 10.240.210.69 533.32 MB 256 32.3% > 2947fe5e-f149-4ff6-b26c-570ae72b7606 RAC1 > DN 10.240.185.204 570.86 MB 256 36.7% > 3a6e2152-c7dc-457a-a4c5-4c6f01986dd0 RAC1 > UN 10.240.61.210 877.43 MB 256 31.0% > c3b1beff-9587-4851-85a9-05a9ba6deaff RAC1 > {noformat} > While on another node it gives: > {noformat} > $ cassandra/bin/nodetool -h 10.240.210.69 status (or 10.240.185.204) > -- Address Load Tokens Owns Host ID > Rack > UN 10.240.210.69 4.83 GB256 32.3% > 2947fe5e-f149-4ff6-b26c-570ae72b7606 RAC1 > UN 10.240.185.204 4.88 GB256 36.7% > 3a6e2152-c7dc-457a-a4c5-4c6f01986dd0
Git Push Summary
Repository: cassandra Updated Branches: refs/heads/cassandra-2.0.13 [created] a936d7e7f
[jira] [Commented] (CASSANDRA-8902) Missing data files, database corruption
[ https://issues.apache.org/jira/browse/CASSANDRA-8902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346179#comment-14346179 ] Aleksey Yeschenko commented on CASSANDRA-8902: -- Is 2.0-HEAD (future 2.0.13) also affected? Can you repro there? > Missing data files, database corruption > --- > > Key: CASSANDRA-8902 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8902 > Project: Cassandra > Issue Type: Bug > Environment: ruby-driver 2.1.0 | C* 2.0.12 >Reporter: Kishan Karunaratne > > During a recent endurance test run of the ruby-driver (as well as a previous > run), I see many of the following exceptions thrown in the system.log in the > 2nd node (10.240.185.204): > {noformat} > ERROR [CompactionExecutor:81] 2015-02-20 22:32:33,064 CassandraDaemon.java > (line 199) Exception in thread Thread[CompactionExecutor:81,1,main] > java.lang.RuntimeException: java.io.FileNotFoundException: > /srv/performance/cass/data/duration_test1/ints/duration_test1-ints-jb-39-Data.db > (No such file or directory) > at > org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:52) > at > org.apache.cassandra.io.sstable.SSTableReader.openDataReader(SSTableReader.java:1399) > at > org.apache.cassandra.io.sstable.SSTableScanner.(SSTableScanner.java:67) > at > org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1205) > at > org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1217) > at > org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:272) > at > org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:278) > at > org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:131) > at > org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) > at > org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60) > at > org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59) > at > org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) > at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) > at java.util.concurrent.FutureTask.run(FutureTask.java:166) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:724) > Caused by: java.io.FileNotFoundException: > /srv/performance/cass/data/duration_test1/ints/duration_test1-ints-jb-39-Data.db > (No such file or directory) > at java.io.RandomAccessFile.open(Native Method) > at java.io.RandomAccessFile.(RandomAccessFile.java:233) > at > org.apache.cassandra.io.util.RandomAccessReader.(RandomAccessReader.java:58) > at > org.apache.cassandra.io.compress.CompressedRandomAccessReader.(CompressedRandomAccessReader.java:76) > at > org.apache.cassandra.io.compress.CompressedThrottledReader.(CompressedThrottledReader.java:34) > at > org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:48) > ... 17 more > {noformat} > I've checked this data directory and indeed this specific db file is missing. > This would signal a database corruption. 
> The endurance test uses a 3-node cluster run over 3 days, with a chaos rhino > randomly restarting one of the nodes. It seems like the nodes have also gone > out of sync. For example, getting a nodetool status on one node gives: > {noformat} > $ cassandra/bin/nodetool -h 10.240.61.210 status > -- Address Load Tokens Owns Host ID > Rack > DN 10.240.210.69 533.32 MB 256 32.3% > 2947fe5e-f149-4ff6-b26c-570ae72b7606 RAC1 > DN 10.240.185.204 570.86 MB 256 36.7% > 3a6e2152-c7dc-457a-a4c5-4c6f01986dd0 RAC1 > UN 10.240.61.210 877.43 MB 256 31.0% > c3b1beff-9587-4851-85a9-05a9ba6deaff RAC1 > {noformat} > While on another node it gives: > {noformat} > $ cassandra/bin/nodetool -h 10.240.210.69 status (or 10.240.185.204) > -- Address Load Tokens Owns Host ID > Rack > UN 10.240.210.69 4.83 GB256 32.3% > 2947fe5e-f149-4ff6-b26c-570ae72b7606 RAC1 > UN 10.240.185.204 4.88 GB256 36.7% > 3a6e2152-c7dc-457a-a4c5-4c6f01986dd0 RAC1 > DN 10.240.61.210 877.43 MB 256
[jira] [Commented] (CASSANDRA-8850) clean up options syntax for create/alter role
[ https://issues.apache.org/jira/browse/CASSANDRA-8850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346154#comment-14346154 ] Aleksey Yeschenko commented on CASSANDRA-8850: -- And while we are at it, *enforce* that role names are always quoted strings. No need to carry that CREATE USER mistake to CREATE ROLE, etc. > clean up options syntax for create/alter role > -- > > Key: CASSANDRA-8850 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8850 > Project: Cassandra > Issue Type: Improvement >Reporter: Sam Tunnicliffe >Assignee: Sam Tunnicliffe >Priority: Minor > Fix For: 3.0 > > Attachments: 8850-v2.txt, 8850.txt > > > {{CREATE/ALTER ROLE}} syntax would be improved by using {{WITH}} and {{AND}} > in a way more consistent with other statements. > e.g. {{CREATE ROLE foo WITH LOGIN AND SUPERUSER AND PASSWORD 'password'}} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8850) clean up options syntax for create/alter role
[ https://issues.apache.org/jira/browse/CASSANDRA-8850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346146#comment-14346146 ] Aleksey Yeschenko commented on CASSANDRA-8850: -- I've been thinking about it a bit. If we don't care too much about emulating Postgres syntactically, can we go with a more consistent (wrt the rest of C* CQL statements) option? Instead of {noformat} CREATE ROLE mike WITH PASSWORD '12345' AND NOSUPERUSER AND LOGIN {noformat} have {noformat} CREATE ROLE mike WITH PASSWORD = '12345' AND SUPERUSER = false AND LOGIN = true {noformat} Instead of {noformat} ALTER ROLE mike WITH NOSUPERUSER AND NOLOGIN {noformat} have {noformat} ALTER ROLE mike WITH SUPERUSER = false AND LOGIN = false {noformat} It also extends nicely to OPTIONS, too. What do you think? > clean up options syntax for create/alter role > -- > > Key: CASSANDRA-8850 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8850 > Project: Cassandra > Issue Type: Improvement >Reporter: Sam Tunnicliffe >Assignee: Sam Tunnicliffe >Priority: Minor > Fix For: 3.0 > > Attachments: 8850-v2.txt, 8850.txt > > > {{CREATE/ALTER ROLE}} syntax would be improved by using {{WITH}} and {{AND}} > in a way more consistent with other statements. > e.g. {{CREATE ROLE foo WITH LOGIN AND SUPERUSER AND PASSWORD 'password'}} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
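For a concrete picture of how the proposed key/value form reads once OPTIONS is included, a hypothetical statement in the suggested style (illustrative only; this follows the proposal under discussion, not committed grammar):

{noformat}
CREATE ROLE mike
  WITH PASSWORD = '12345'
   AND LOGIN = true
   AND SUPERUSER = false
   AND OPTIONS = { 'custom_option' : 'custom_value' };
{noformat}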
[jira] [Updated] (CASSANDRA-8902) Missing data files, database corruption
[ https://issues.apache.org/jira/browse/CASSANDRA-8902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kishan Karunaratne updated CASSANDRA-8902: -- Reproduced In: 2.0.12, 2.0.11 (was: 2.0.11, 2.0.12) Description: During a recent endurance test run of the ruby-driver (as well as a previous run), I see many of the following exceptions thrown in the system.log in the 2nd node (10.240.185.204): {noformat} ERROR [CompactionExecutor:81] 2015-02-20 22:32:33,064 CassandraDaemon.java (line 199) Exception in thread Thread[CompactionExecutor:81,1,main] java.lang.RuntimeException: java.io.FileNotFoundException: /srv/performance/cass/data/duration_test1/ints/duration_test1-ints-jb-39-Data.db (No such file or directory) at org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:52) at org.apache.cassandra.io.sstable.SSTableReader.openDataReader(SSTableReader.java:1399) at org.apache.cassandra.io.sstable.SSTableScanner.(SSTableScanner.java:67) at org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1205) at org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1217) at org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:272) at org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:278) at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:131) at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60) at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59) at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:724) Caused by: java.io.FileNotFoundException: /srv/performance/cass/data/duration_test1/ints/duration_test1-ints-jb-39-Data.db (No such file or directory) at java.io.RandomAccessFile.open(Native Method) at java.io.RandomAccessFile.(RandomAccessFile.java:233) at org.apache.cassandra.io.util.RandomAccessReader.(RandomAccessReader.java:58) at org.apache.cassandra.io.compress.CompressedRandomAccessReader.(CompressedRandomAccessReader.java:76) at org.apache.cassandra.io.compress.CompressedThrottledReader.(CompressedThrottledReader.java:34) at org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:48) ... 17 more {noformat} I've checked this data directory and indeed this specific db file is missing. This would signal a database corruption. The endurance test uses a 3-node cluster run over 3 days, with a chaos rhino randomly restarting one of the nodes. It seems like the nodes have also gone out of sync. 
For example, getting a nodetool status on one node gives: {noformat} $ cassandra/bin/nodetool -h 10.240.61.210 status -- Address Load Tokens Owns Host ID Rack DN 10.240.210.69 533.32 MB 256 32.3% 2947fe5e-f149-4ff6-b26c-570ae72b7606 RAC1 DN 10.240.185.204 570.86 MB 256 36.7% 3a6e2152-c7dc-457a-a4c5-4c6f01986dd0 RAC1 UN 10.240.61.210 877.43 MB 256 31.0% c3b1beff-9587-4851-85a9-05a9ba6deaff RAC1 {noformat} While on another node it gives: {noformat} $ cassandra/bin/nodetool -h 10.240.210.69 status (or 10.240.185.204) -- Address Load Tokens Owns Host ID Rack UN 10.240.210.69 4.83 GB256 32.3% 2947fe5e-f149-4ff6-b26c-570ae72b7606 RAC1 UN 10.240.185.204 4.88 GB256 36.7% 3a6e2152-c7dc-457a-a4c5-4c6f01986dd0 RAC1 DN 10.240.61.210 877.43 MB 256 31.0% c3b1beff-9587-4851-85a9-05a9ba6deaff RAC1 {noformat} In addition to seeing the missing db file (albeit a different one), I also see many occurrences of the following exception in the 3rd node (10.240.61.210): {noformat} INFO [MeteredFlusher:1] 2015-03-03 21:17:41,032 MeteredFlusher.java (line 86) Estimated 488539743 live and 265675077 flushing bytes used by all memtables ERROR [MeteredFlusher:1] 2015-03-03 21:17:41,033 CassandraDaemon.java (line 199) Exception in thread Thread[MeteredFlusher:1,5,main] java.lang.NoClassDefFoundError: org/apac
[jira] [Updated] (CASSANDRA-8850) clean up options syntax for create/alter role
[ https://issues.apache.org/jira/browse/CASSANDRA-8850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-8850: - Reviewer: Aleksey Yeschenko > clean up options syntax for create/alter role > -- > > Key: CASSANDRA-8850 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8850 > Project: Cassandra > Issue Type: Improvement >Reporter: Sam Tunnicliffe >Assignee: Sam Tunnicliffe >Priority: Minor > Fix For: 3.0 > > Attachments: 8850-v2.txt, 8850.txt > > > {{CREATE/ALTER ROLE}} syntax would be improved by using {{WITH}} and {{AND}} > in a way more consistent with other statements. > e.g. {{CREATE ROLE foo WITH LOGIN AND SUPERUSER AND PASSWORD 'password'}} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-8902) Missing data files, database corruption
[ https://issues.apache.org/jira/browse/CASSANDRA-8902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kishan Karunaratne updated CASSANDRA-8902: -- Reproduced In: 2.0.12, 2.0.11 (was: 2.0.11, 2.0.12) Description: During a recent endurance test run of the ruby-driver (as well as a previous run), I see many of the following exceptions thrown in the system.log in the 2nd node (10.240.185.204): {noformat} ERROR [CompactionExecutor:81] 2015-02-20 22:32:33,064 CassandraDaemon.java (line 199) Exception in thread Thread[CompactionExecutor:81,1,main] java.lang.RuntimeException: java.io.FileNotFoundException: /srv/performance/cass/data/duration_test1/ints/duration_test1-ints-jb-39-Data.db (No such file or directory) at org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:52) at org.apache.cassandra.io.sstable.SSTableReader.openDataReader(SSTableReader.java:1399) at org.apache.cassandra.io.sstable.SSTableScanner.(SSTableScanner.java:67) at org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1205) at org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1217) at org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:272) at org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:278) at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:131) at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60) at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59) at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:724) Caused by: java.io.FileNotFoundException: /srv/performance/cass/data/duration_test1/ints/duration_test1-ints-jb-39-Data.db (No such file or directory) at java.io.RandomAccessFile.open(Native Method) at java.io.RandomAccessFile.(RandomAccessFile.java:233) at org.apache.cassandra.io.util.RandomAccessReader.(RandomAccessReader.java:58) at org.apache.cassandra.io.compress.CompressedRandomAccessReader.(CompressedRandomAccessReader.java:76) at org.apache.cassandra.io.compress.CompressedThrottledReader.(CompressedThrottledReader.java:34) at org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:48) ... 17 more {noformat} I've checked this data directory and indeed this specific db file is missing. This would signal a database corruption. The endurance test uses a 3-node cluster run over 3 days, with a chaos rhino randomly restarting one of the nodes. It seems like the nodes have also gone out of sync. 
For example, getting a nodetool status on one node gives: {noformat} $ cassandra/bin/nodetool -h 10.240.61.210 status -- Address Load Tokens Owns Host ID Rack DN 10.240.210.69 533.32 MB 256 32.3% 2947fe5e-f149-4ff6-b26c-570ae72b7606 RAC1 DN 10.240.185.204 570.86 MB 256 36.7% 3a6e2152-c7dc-457a-a4c5-4c6f01986dd0 RAC1 UN 10.240.61.210 877.43 MB 256 31.0% c3b1beff-9587-4851-85a9-05a9ba6deaff RAC1 {noformat} While on another node it gives: {noformat} $ cassandra/bin/nodetool -h 10.240.210.69 status (or 10.240.185.204) -- Address Load Tokens Owns Host ID Rack UN 10.240.210.69 4.83 GB256 32.3% 2947fe5e-f149-4ff6-b26c-570ae72b7606 RAC1 UN 10.240.185.204 4.88 GB256 36.7% 3a6e2152-c7dc-457a-a4c5-4c6f01986dd0 RAC1 DN 10.240.61.210 877.43 MB 256 31.0% c3b1beff-9587-4851-85a9-05a9ba6deaff RAC1 {noformat} In addition to seeing the missing db file (albeit a different one), I also see many occurrences of the following exception in the 3rd node (10.240.61.210): {noformat} INFO [MeteredFlusher:1] 2015-03-03 21:17:41,032 MeteredFlusher.java (line 86) Estimated 488539743 live and 265675077 flushing bytes used by all memtables ERROR [MeteredFlusher:1] 2015-03-03 21:17:41,033 CassandraDaemon.java (line 199) Exception in thread Thread[MeteredFlusher:1,5,main] java.lang.NoClassDefFoundError: org/apac
[jira] [Commented] (CASSANDRA-8761) Make custom role options accessible from IRoleManager
[ https://issues.apache.org/jira/browse/CASSANDRA-8761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346123#comment-14346123 ] Aleksey Yeschenko commented on CASSANDRA-8761: -- I'd rather not have "options" conditional in the resulteset metadata of LIST ROLES - it does potentially make clients implement extra logic to handle both cases. Otherwise LGTM. > Make custom role options accessible from IRoleManager > - > > Key: CASSANDRA-8761 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8761 > Project: Cassandra > Issue Type: Bug >Reporter: Sam Tunnicliffe >Assignee: Sam Tunnicliffe >Priority: Minor > Fix For: 3.0 > > Attachments: 8761.txt > > > IRoleManager implementations may support custom OPTIONS arguments to CREATE & > ALTER ROLE. If supported, these custom options should be retrievable from the > IRoleManager and included in the results of LIST ROLES queries. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-8902) Missing data files, database corruption
Kishan Karunaratne created CASSANDRA-8902: - Summary: Missing data files, database corruption Key: CASSANDRA-8902 URL: https://issues.apache.org/jira/browse/CASSANDRA-8902 Project: Cassandra Issue Type: Bug Environment: ruby-driver 2.1.0 | C* 2.0.12 Reporter: Kishan Karunaratne During a recent duration test run of the ruby-driver (as well as a previous run), I see many of the following exceptions thrown in the system.log: {noformat} ERROR [CompactionExecutor:81] 2015-02-20 22:32:33,064 CassandraDaemon.java (line 199) Exception in thread Thread[CompactionExecutor:81,1,main] java.lang.RuntimeException: java.io.FileNotFoundException: /srv/performance/cass/data/duration_test1/ints/duration_test1-ints-jb-39-Data.db (No such file or directory) at org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:52) at org.apache.cassandra.io.sstable.SSTableReader.openDataReader(SSTableReader.java:1399) at org.apache.cassandra.io.sstable.SSTableScanner.(SSTableScanner.java:67) at org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1205) at org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:1217) at org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:272) at org.apache.cassandra.db.compaction.AbstractCompactionStrategy.getScanners(AbstractCompactionStrategy.java:278) at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:131) at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60) at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59) at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:724) Caused by: java.io.FileNotFoundException: /srv/performance/cass/data/duration_test1/ints/duration_test1-ints-jb-39-Data.db (No such file or directory) at java.io.RandomAccessFile.open(Native Method) at java.io.RandomAccessFile.(RandomAccessFile.java:233) at org.apache.cassandra.io.util.RandomAccessReader.(RandomAccessReader.java:58) at org.apache.cassandra.io.compress.CompressedRandomAccessReader.(CompressedRandomAccessReader.java:76) at org.apache.cassandra.io.compress.CompressedThrottledReader.(CompressedThrottledReader.java:34) at org.apache.cassandra.io.compress.CompressedThrottledReader.open(CompressedThrottledReader.java:48) ... 17 more {noformat} I've checked this data directory and indeed this specific db file is missing. This would signal a database corruption. The duration test uses a 3-node cluster. It seems like the nodes have also gone out of sync. 
For example, getting a nodetool status on one node gives: {noformat} $ cassandra/bin/nodetool -h 10.240.61.210 status -- Address Load Tokens Owns Host ID Rack DN 10.240.210.69 533.32 MB 256 32.3% 2947fe5e-f149-4ff6-b26c-570ae72b7606 RAC1 DN 10.240.185.204 570.86 MB 256 36.7% 3a6e2152-c7dc-457a-a4c5-4c6f01986dd0 RAC1 UN 10.240.61.210 877.43 MB 256 31.0% c3b1beff-9587-4851-85a9-05a9ba6deaff RAC1 {noformat} While on another node it gives: {noformat} $ cassandra/bin/nodetool -h 10.240.210.69 status (or 10.240.185.204) -- Address Load Tokens Owns Host ID Rack UN 10.240.210.69 4.83 GB256 32.3% 2947fe5e-f149-4ff6-b26c-570ae72b7606 RAC1 UN 10.240.185.204 4.88 GB256 36.7% 3a6e2152-c7dc-457a-a4c5-4c6f01986dd0 RAC1 DN 10.240.61.210 877.43 MB 256 31.0% c3b1beff-9587-4851-85a9-05a9ba6deaff RAC1 {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Issue Comment Deleted] (CASSANDRA-8067) NullPointerException in KeyCacheSerializer
[ https://issues.apache.org/jira/browse/CASSANDRA-8067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benedict updated CASSANDRA-8067: Comment: was deleted (was: Eh, it's not a burning desire to make better, so I'll leave it thanks. More than enough to do. A brief handwavy outline is that the current abstraction adds complexity by not really separating concerns; the duplication is not a problem of the cost of the work but of the cognitive burden (which is essentially why this happened in the first place). But since it's not a significant pain point, it's not worth agonising over either. ) > NullPointerException in KeyCacheSerializer > -- > > Key: CASSANDRA-8067 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8067 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Eric Leleu >Assignee: Aleksey Yeschenko > Fix For: 2.1.4 > > Attachments: 8067.txt > > > Hi, > I have this stack trace in the logs of Cassandra server (v2.1) > {code} > ERROR [CompactionExecutor:14] 2014-10-06 23:32:02,098 > CassandraDaemon.java:166 - Exception in thread > Thread[CompactionExecutor:14,1,main] > java.lang.NullPointerException: null > at > org.apache.cassandra.service.CacheService$KeyCacheSerializer.serialize(CacheService.java:475) > ~[apache-cassandra-2.1.0.jar:2.1.0] > at > org.apache.cassandra.service.CacheService$KeyCacheSerializer.serialize(CacheService.java:463) > ~[apache-cassandra-2.1.0.jar:2.1.0] > at > org.apache.cassandra.cache.AutoSavingCache$Writer.saveCache(AutoSavingCache.java:225) > ~[apache-cassandra-2.1.0.jar:2.1.0] > at > org.apache.cassandra.db.compaction.CompactionManager$11.run(CompactionManager.java:1061) > ~[apache-cassandra-2.1.0.jar:2.1.0] > at java.util.concurrent.Executors$RunnableAdapter.call(Unknown > Source) ~[na:1.7.0] > at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source) > ~[na:1.7.0] > at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0] > at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) > [na:1.7.0] > at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) > [na:1.7.0] > at java.lang.Thread.run(Unknown Source) [na:1.7.0] > {code} > It may not be critical because this error occured in the AutoSavingCache. > However the line 475 is about the CFMetaData so it may hide bigger issue... > {code} > 474 CFMetaData cfm = > Schema.instance.getCFMetaData(key.desc.ksname, key.desc.cfname); > 475 cfm.comparator.rowIndexEntrySerializer().serialize(entry, > out); > {code} > Regards, > Eric -- This message was sent by Atlassian JIRA (v6.3.4#6332)
cassandra git commit: fix CRC32Ex unit test when JDK < 1.8
Repository: cassandra Updated Branches: refs/heads/trunk b32ce687e -> 93b365cdc fix CRC32Ex unit test when JDK < 1.8 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/93b365cd Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/93b365cd Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/93b365cd Branch: refs/heads/trunk Commit: 93b365cdcff34418bdac56473997f0a6d2b2aaaf Parents: b32ce68 Author: Benedict Elliott Smith Authored: Wed Mar 4 00:26:41 2015 + Committer: Benedict Elliott Smith Committed: Wed Mar 4 00:26:41 2015 + -- test/unit/org/apache/cassandra/utils/CRC32FactoryTest.java | 9 ++--- 1 file changed, 6 insertions(+), 3 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/93b365cd/test/unit/org/apache/cassandra/utils/CRC32FactoryTest.java -- diff --git a/test/unit/org/apache/cassandra/utils/CRC32FactoryTest.java b/test/unit/org/apache/cassandra/utils/CRC32FactoryTest.java index a55fbf0..2187cb9 100644 --- a/test/unit/org/apache/cassandra/utils/CRC32FactoryTest.java +++ b/test/unit/org/apache/cassandra/utils/CRC32FactoryTest.java @@ -55,6 +55,9 @@ public class CRC32FactoryTest private void testOnce() { +if (Float.parseFloat(System.getProperty("java.version").substring(0, 3)) < 1.8) +return; + final long seed = System.nanoTime(); System.out.println("Seed is " + seed); Random r = new java.util.Random(seed); @@ -112,9 +115,9 @@ public class CRC32FactoryTest @Test public void jdkDetection() { -if (System.getProperty("java.version").startsWith("1.7")) -assertFalse(CRC32Factory.create() instanceof CRC32Factory.CRC32Ex); -else +if (Float.parseFloat(System.getProperty("java.version").substring(0, 3)) >= 1.8) assertTrue(CRC32Factory.create() instanceof CRC32Factory.CRC32Ex); +else +assertFalse(CRC32Factory.create() instanceof CRC32Factory.CRC32Ex); } }
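The patch above keys both the test body and the factory-detection assertion off the same numeric comparison on the JVM version string. A standalone sketch of that idiom, outside the Cassandra test code, assuming pre-Java-9 version strings of the form "1.7.0_76" / "1.8.0_31":

{code}
public class JdkVersionCheck
{
    public static void main(String[] args)
    {
        // "1.8.0_31".substring(0, 3) -> "1.8"; parsed as a float and compared against 1.8
        String version = System.getProperty("java.version");
        boolean atLeast18 = Float.parseFloat(version.substring(0, 3)) >= 1.8f;
        System.out.println(version + " -> JDK 1.8+? " + atLeast18);
    }
}
{code}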
cassandra git commit: Make LIST USERS display inherited superuser status
Repository: cassandra Updated Branches: refs/heads/trunk 56348ea7b -> b32ce687e Make LIST USERS display inherited superuser status patch by Sam Tunnicliffe; reviewed by Aleksey Yeschenko for CASSANDRA-8849 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b32ce687 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b32ce687 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b32ce687 Branch: refs/heads/trunk Commit: b32ce687e713de8d8535b0607c9edd9b55a7b6aa Parents: 56348ea Author: Sam Tunnicliffe Authored: Tue Mar 3 15:58:48 2015 -0800 Committer: Aleksey Yeschenko Committed: Tue Mar 3 15:58:48 2015 -0800 -- CHANGES.txt | 2 +- conf/cassandra.yaml | 8 ++ .../cassandra/auth/AuthenticatedUser.java | 89 +-- src/java/org/apache/cassandra/auth/Roles.java | 59 ++ .../org/apache/cassandra/auth/RolesCache.java | 109 +++ .../org/apache/cassandra/config/Config.java | 2 + .../cassandra/config/DatabaseDescriptor.java| 12 ++ .../statements/AuthenticationStatement.java | 20 .../cql3/statements/DropRoleStatement.java | 6 +- .../cql3/statements/ListUsersStatement.java | 6 +- .../apache/cassandra/service/ClientState.java | 2 +- 11 files changed, 200 insertions(+), 115 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/b32ce687/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index cc3658d..b877cbe 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,5 +1,5 @@ 3.0 - * Add role based access control (CASSANDRA-7653, 8650, 7216, 8760) + * Add role based access control (CASSANDRA-7653, 8650, 7216, 8760, 8849) * Avoid accessing partitioner through StorageProxy (CASSANDRA-8244, 8268) * Upgrade Metrics library and remove depricated metrics (CASSANDRA-5657) * Serializing Row cache alternative, fully off heap (CASSANDRA-7438) http://git-wip-us.apache.org/repos/asf/cassandra/blob/b32ce687/conf/cassandra.yaml -- diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml index 3643986..240f37a 100644 --- a/conf/cassandra.yaml +++ b/conf/cassandra.yaml @@ -93,6 +93,14 @@ role_manager: CassandraRoleManager # Will be disabled automatically for AllowAllAuthenticator. roles_validity_in_ms: 2000 +# Refresh interval for roles cache (if enabled). +# After this interval, cache entries become eligible for refresh. Upon next +# access, an async reload is scheduled and the old value returned until it +# completes. If roles_validity_in_ms is non-zero, then this must be +# also. +# Defaults to the same value as roles_validity_in_ms. +# roles_update_interval_in_ms: 1000 + # Validity period for permissions cache (fetching permissions can be an # expensive operation depending on the authorizer, CassandraAuthorizer is # one example). Defaults to 2000, set to 0 to disable. 
http://git-wip-us.apache.org/repos/asf/cassandra/blob/b32ce687/src/java/org/apache/cassandra/auth/AuthenticatedUser.java -- diff --git a/src/java/org/apache/cassandra/auth/AuthenticatedUser.java b/src/java/org/apache/cassandra/auth/AuthenticatedUser.java index e4a065d..ee62503 100644 --- a/src/java/org/apache/cassandra/auth/AuthenticatedUser.java +++ b/src/java/org/apache/cassandra/auth/AuthenticatedUser.java @@ -18,20 +18,10 @@ package org.apache.cassandra.auth; import java.util.Set; -import java.util.concurrent.Callable; -import java.util.concurrent.TimeUnit; import com.google.common.base.Objects; -import com.google.common.cache.CacheBuilder; -import com.google.common.cache.CacheLoader; -import com.google.common.cache.LoadingCache; -import com.google.common.util.concurrent.ListenableFuture; -import com.google.common.util.concurrent.ListenableFutureTask; -import org.apache.cassandra.concurrent.ScheduledExecutors; import org.apache.cassandra.config.DatabaseDescriptor; -import org.apache.cassandra.exceptions.RequestExecutionException; -import org.apache.cassandra.exceptions.RequestValidationException; /** * Returned from IAuthenticator#authenticate(), represents an authenticated user everywhere internally. @@ -47,9 +37,6 @@ public class AuthenticatedUser public static final String ANONYMOUS_USERNAME = "anonymous"; public static final AuthenticatedUser ANONYMOUS_USER = new AuthenticatedUser(ANONYMOUS_USERNAME); -// User-level roles cache -private static final LoadingCache> rolesCache = initRolesCache(); - // User-level permissions cache. private static final PermissionsCache permissionsCache = new Permissions
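The new cassandra.yaml comment above describes entries that are served for roles_validity_in_ms but become eligible for refresh after roles_update_interval_in_ms, with the old value returned while the reload runs. Those are the standard refresh-vs-expiry semantics of a Guava {{LoadingCache}}, the same machinery visible in the per-user cache being removed from AuthenticatedUser in this diff. A self-contained sketch of that split, with a placeholder loader rather than the real RolesCache logic; note the real code also overrides {{CacheLoader.reload}} to run the refresh on a background executor, whereas this minimal version refreshes on the reading thread:

{code}
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.TimeUnit;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

public class RolesCacheSketch
{
    public static LoadingCache<String, Set<String>> build(long validityMs, long updateIntervalMs)
    {
        return CacheBuilder.newBuilder()
                           // after this interval the next read triggers a reload
                           // (other readers keep getting the old value meanwhile)
                           .refreshAfterWrite(updateIntervalMs, TimeUnit.MILLISECONDS)
                           // after this interval the entry is dropped and the next read blocks on a full load
                           .expireAfterWrite(validityMs, TimeUnit.MILLISECONDS)
                           .build(new CacheLoader<String, Set<String>>()
                           {
                               public Set<String> load(String roleName)
                               {
                                   // placeholder: the real implementation asks the IRoleManager for granted roles
                                   return Collections.singleton(roleName);
                               }
                           });
    }
}
{code}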
[jira] [Commented] (CASSANDRA-8067) NullPointerException in KeyCacheSerializer
[ https://issues.apache.org/jira/browse/CASSANDRA-8067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14346035#comment-14346035 ] Benedict commented on CASSANDRA-8067: - Eh, it's not a burning desire to make better, so I'll leave it thanks. More than enough to do. A brief handwavy outline is that the current abstraction adds complexity by not really separating concerns; the duplication is not a problem of the cost of the work but of the cognitive burden (which is essentially why this happened in the first place). But since it's not a significant pain point, it's not worth agonising over either. > NullPointerException in KeyCacheSerializer > -- > > Key: CASSANDRA-8067 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8067 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Eric Leleu >Assignee: Aleksey Yeschenko > Fix For: 2.1.4 > > Attachments: 8067.txt > > > Hi, > I have this stack trace in the logs of Cassandra server (v2.1) > {code} > ERROR [CompactionExecutor:14] 2014-10-06 23:32:02,098 > CassandraDaemon.java:166 - Exception in thread > Thread[CompactionExecutor:14,1,main] > java.lang.NullPointerException: null > at > org.apache.cassandra.service.CacheService$KeyCacheSerializer.serialize(CacheService.java:475) > ~[apache-cassandra-2.1.0.jar:2.1.0] > at > org.apache.cassandra.service.CacheService$KeyCacheSerializer.serialize(CacheService.java:463) > ~[apache-cassandra-2.1.0.jar:2.1.0] > at > org.apache.cassandra.cache.AutoSavingCache$Writer.saveCache(AutoSavingCache.java:225) > ~[apache-cassandra-2.1.0.jar:2.1.0] > at > org.apache.cassandra.db.compaction.CompactionManager$11.run(CompactionManager.java:1061) > ~[apache-cassandra-2.1.0.jar:2.1.0] > at java.util.concurrent.Executors$RunnableAdapter.call(Unknown > Source) ~[na:1.7.0] > at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source) > ~[na:1.7.0] > at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0] > at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) > [na:1.7.0] > at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) > [na:1.7.0] > at java.lang.Thread.run(Unknown Source) [na:1.7.0] > {code} > It may not be critical because this error occured in the AutoSavingCache. > However the line 475 is about the CFMetaData so it may hide bigger issue... > {code} > 474 CFMetaData cfm = > Schema.instance.getCFMetaData(key.desc.ksname, key.desc.cfname); > 475 cfm.comparator.rowIndexEntrySerializer().serialize(entry, > out); > {code} > Regards, > Eric -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (CASSANDRA-8900) AssertionError when binding nested collection in a DELETE
[ https://issues.apache.org/jira/browse/CASSANDRA-8900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tyler Hobbs reassigned CASSANDRA-8900: -- Assignee: Tyler Hobbs > AssertionError when binding nested collection in a DELETE > - > > Key: CASSANDRA-8900 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8900 > Project: Cassandra > Issue Type: Bug >Reporter: Olivier Michallat >Assignee: Tyler Hobbs >Priority: Minor > > Running this with the Java driver: > {code} > session.execute("create table if not exists foo2(k int primary key, m > map>, int>);"); > PreparedStatement pst = session.prepare("delete m[?] from foo2 where k = 1"); > session.execute(pst.bind(ImmutableList.of(1))); > {code} > Produces a server error. Server-side stack trace: > {code} > ERROR [SharedPool-Worker-4] 2015-03-03 13:33:24,740 Message.java:538 - > Unexpected exception during request; channel = [id: 0xf9e92e61, > /127.0.0.1:58163 => /127.0.0.1:9042] > java.lang.AssertionError: null > at > org.apache.cassandra.cql3.Maps$DiscarderByKey.execute(Maps.java:381) > ~[main/:na] > at > org.apache.cassandra.cql3.statements.DeleteStatement.addUpdateForKey(DeleteStatement.java:85) > ~[main/:na] > at > org.apache.cassandra.cql3.statements.ModificationStatement.getMutations(ModificationStatement.java:654) > ~[main/:na] > at > org.apache.cassandra.cql3.statements.ModificationStatement.executeWithoutCondition(ModificationStatement.java:487) > ~[main/:na] > at > org.apache.cassandra.cql3.statements.ModificationStatement.execute(ModificationStatement.java:473) > ~[main/:na] > at > org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:238) > ~[main/:na] > at > org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:493) > ~[main/:na] > at > org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:134) > ~[main/:na] > at > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:439) > [main/:na] > at > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:335) > [main/:na] > at > io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) > [netty-all-4.0.23.Final.jar:4.0.23.Final] > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) > [netty-all-4.0.23.Final.jar:4.0.23.Final] > at > io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32) > [netty-all-4.0.23.Final.jar:4.0.23.Final] > at > io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324) > [netty-all-4.0.23.Final.jar:4.0.23.Final] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) > [na:1.7.0_60] > at > org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164) > [main/:na] > at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) > [main/:na] > at java.lang.Thread.run(Thread.java:745) [na:1.7.0_60] > {code} > A simple statement (i.e. QUERY message with values) produces the same result: > {code} > session.execute("delete m[?] from foo2 where k = 1", ImmutableList.of(1)); > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
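A side note on the quoted schema: the CREATE TABLE statement lost its angle brackets in transit ("m map>, int>"). A plausible reconstruction of the table and data in cqlsh terms, assuming the map key type is frozen<list<int>> (untested sketch; the failure itself requires the collection key to be bound as a marker, as in the Java driver snippet above):

{noformat}
CREATE TABLE IF NOT EXISTS foo2 (k int PRIMARY KEY, m map<frozen<list<int>>, int>);
INSERT INTO foo2 (k, m) VALUES (1, { [1] : 10 });
-- the failing statement binds the collection-typed map key as a marker:
-- DELETE m[?] FROM foo2 WHERE k = 1;
{noformat}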
[jira] [Commented] (CASSANDRA-8290) archiving commitlogs after restart fails
[ https://issues.apache.org/jira/browse/CASSANDRA-8290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345979#comment-14345979 ] Aleksey Yeschenko commented on CASSANDRA-8290: -- Committed, thanks. We might want to change it back in 3.0, once CASSANDRA-6809 removes segment recycling, but that's a question for another ticket. > archiving commitlogs after restart fails > - > > Key: CASSANDRA-8290 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8290 > Project: Cassandra > Issue Type: Bug > Environment: Cassandra 2.0.11 > Debian wheezy >Reporter: Manuel Lausch >Assignee: Sam Tunnicliffe >Priority: Minor > Fix For: 2.0.13, 2.1.4 > > Attachments: 8290.txt > > > After update to Cassandra 2.0.11 Cassandra mostly fails during startup while > archiving commitlogs > see logfile: > {noformat} > RROR [main] 2014-11-03 13:08:59,388 CassandraDaemon.java (line 513) Exception > encountered during startup > java.lang.RuntimeException: java.util.concurrent.ExecutionException: > java.lang.RuntimeException: java.io.IOException: Exception while executing > the command: /bin/ln > /var/lib/cassandra/commitlog/CommitLog-3-1413451666161.log > /var/lib/cassandra/archive/CommitLog-3-1413451666161.log, command error Code: > 1, command output: /bin/ln: failed to create hard link > `/var/lib/cassandra/archive/CommitLog-3-1413451666161.log': File exists > at > org.apache.cassandra.db.commitlog.CommitLogArchiver.maybeWaitForArchiving(CommitLogArchiver.java:158) > at > org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:124) > at > org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:336) > at > org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496) > at > org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585) > Caused by: java.util.concurrent.ExecutionException: > java.lang.RuntimeException: java.io.IOException: Exception while executing > the command: /bin/ln > /var/lib/cassandra/commitlog/CommitLog-3-1413451666161.log > /var/lib/cassandra/archive/CommitLog-3-1413451666161.log, command error Code: > 1, command output: /bin/ln: failed to create hard link > `/var/lib/cassandra/archive/CommitLog-3-1413451666161.log': File exists > at java.util.concurrent.FutureTask.report(FutureTask.java:122) > at java.util.concurrent.FutureTask.get(FutureTask.java:188) > at > org.apache.cassandra.db.commitlog.CommitLogArchiver.maybeWaitForArchiving(CommitLogArchiver.java:145) > ... 
4 more > Caused by: java.lang.RuntimeException: java.io.IOException: Exception while > executing the command: /bin/ln > /var/lib/cassandra/commitlog/CommitLog-3-1413451666161.log > /var/lib/cassandra/archive/CommitLog-3-1413451666161.log, command error Code: > 1, command output: /bin/ln: failed to create hard link > `/var/lib/cassandra/archive/CommitLog-3-1413451666161.log': File exists > at com.google.common.base.Throwables.propagate(Throwables.java:160) > at > org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.io.IOException: Exception while executing the command: > /bin/ln /var/lib/cassandra/commitlog/CommitLog-3-1413451666161.log > /var/lib/cassandra/archive/CommitLog-3-1413451666161.log, command error Code: > 1, command output: /bin/ln: failed to create hard link > `/var/lib/cassandra/archive/CommitLog-3-1413451666161.log': File exists > at org.apache.cassandra.utils.FBUtilities.exec(FBUtilities.java:604) > at > org.apache.cassandra.db.commitlog.CommitLogArchiver.exec(CommitLogArchiver.java:197) > at > org.apache.cassandra.db.commitlog.CommitLogArchiver.access$100(CommitLogArchiver.java:44) > at > org.apache.cassandra.db.commitlog.CommitLogArchiver$1.runMayThrow(CommitLogArchiver.java:132) > at > org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) > ... 5 more > ERROR [commitlog_archiver:1] 2014-11-03 13:08:59,388 CassandraDaemon.java > (line 199) Exception in thread Thread[commitlog_archiver:1,5,main] > java.lang.RuntimeException: java.io.IOException: Exception while executing > the command: /bin/ln > /var/lib/cassandra/commitlog/CommitLog-3-1413451666161.log > /var/lib/cassandra/archive/CommitLog-3-1413451666161.
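The root cause is that {{/bin/ln}} refuses to recreate a hard link for a segment that was already archived before the restart, so commit log recovery aborts with "File exists". The fix committed below simply switches the shipped example commands to an idempotent copy:
{noformat}
# Updated examples in conf/commitlog_archiving.properties (CASSANDRA-8290)
# Example: archive_command=/bin/cp -f %path /backup/%name
# Example: restore_command=/bin/cp -f %from %to
{noformat}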
[2/3] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1
Merge branch 'cassandra-2.0' into cassandra-2.1 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2acd05d9 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2acd05d9 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2acd05d9 Branch: refs/heads/trunk Commit: 2acd05d96778fc4d4f8ee8cef322f3624638bd8c Parents: 2d1e46e a936d7e Author: Aleksey Yeschenko Authored: Tue Mar 3 15:14:59 2015 -0800 Committer: Aleksey Yeschenko Committed: Tue Mar 3 15:14:59 2015 -0800 -- conf/commitlog_archiving.properties | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) --
[3/3] cassandra git commit: Merge branch 'cassandra-2.1' into trunk
Merge branch 'cassandra-2.1' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/56348ea7 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/56348ea7 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/56348ea7 Branch: refs/heads/trunk Commit: 56348ea7b7011d3d203d9dfaa0f5f32752771a86 Parents: 787a20f 2acd05d Author: Aleksey Yeschenko Authored: Tue Mar 3 15:15:29 2015 -0800 Committer: Aleksey Yeschenko Committed: Tue Mar 3 15:15:29 2015 -0800 -- conf/commitlog_archiving.properties | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) --
[1/2] cassandra git commit: Update example commands in commitlog_archiving.properties
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1 2d1e46e40 -> 2acd05d96 Update example commands in commitlog_archiving.properties patch by Sam Tunnicliffe; reviewed by Michael Shuler for CASSANDRA-8290 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a936d7e7 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a936d7e7 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a936d7e7 Branch: refs/heads/cassandra-2.1 Commit: a936d7e7fbbc432748d634c326b680d5063742d0 Parents: e7d802e Author: Sam Tunnicliffe Authored: Tue Mar 3 15:10:36 2015 -0800 Committer: Aleksey Yeschenko Committed: Tue Mar 3 15:12:32 2015 -0800 -- conf/commitlog_archiving.properties | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/a936d7e7/conf/commitlog_archiving.properties -- diff --git a/conf/commitlog_archiving.properties b/conf/commitlog_archiving.properties index be4692e..109a50b 100644 --- a/conf/commitlog_archiving.properties +++ b/conf/commitlog_archiving.properties @@ -27,7 +27,7 @@ # Command to execute to archive a commitlog segment # Parameters: %path => Fully qualified path of the segment to archive # %name => Name of the commit log. -# Example: archive_command=/bin/ln %path /backup/%name +# Example: archive_command=/bin/cp -f %path /backup/%name # # Limitation: *_command= expects one command with arguments. STDOUT # and STDIN or multiple commands cannot be executed. You might want @@ -37,7 +37,7 @@ archive_command= # Command to execute to make an archived commitlog live again. # Parameters: %from is the full path to an archived commitlog segment (from restore_directories) # %to is the live commitlog directory -# Example: restore_command=cp -f %from %to +# Example: restore_command=/bin/cp -f %from %to restore_command= # Directory to scan the recovery files in.
[3/5] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1
Merge branch 'cassandra-2.0' into cassandra-2.1 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2d1e46e4 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2d1e46e4 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2d1e46e4 Branch: refs/heads/trunk Commit: 2d1e46e40a2c9205ab42ebe5bddfd2bc3837f719 Parents: bef1d0c e7d802e Author: Yuki Morishita Authored: Tue Mar 3 17:13:06 2015 -0600 Committer: Yuki Morishita Committed: Tue Mar 3 17:13:06 2015 -0600 -- .../cassandra/db/compaction/LongCompactionsTest.java | 14 -- 1 file changed, 12 insertions(+), 2 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/2d1e46e4/test/long/org/apache/cassandra/db/compaction/LongCompactionsTest.java -- diff --cc test/long/org/apache/cassandra/db/compaction/LongCompactionsTest.java index 94bc09f,a21cee5..e87e336 --- a/test/long/org/apache/cassandra/db/compaction/LongCompactionsTest.java +++ b/test/long/org/apache/cassandra/db/compaction/LongCompactionsTest.java @@@ -99,7 -108,8 +108,8 @@@ public class LongCompactionsTest extend long start = System.nanoTime(); final int gcBefore = (int) (System.currentTimeMillis() / 1000) - Schema.instance.getCFMetaData(KEYSPACE1, "Standard1").getGcGraceSeconds(); + assert store.getDataTracker().markCompacting(sstables): "Cannot markCompacting all sstables"; -new CompactionTask(store, sstables, gcBefore).execute(null); +new CompactionTask(store, sstables, gcBefore, false).execute(null); System.out.println(String.format("%s: sstables=%d rowsper=%d colsper=%d: %d ms", this.getClass().getName(), sstableCount,
[2/5] cassandra git commit: Fixing LongCompactionsTest
Fixing LongCompactionsTest Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e7d802e3 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e7d802e3 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e7d802e3 Branch: refs/heads/trunk Commit: e7d802e35976d41979d77978da7d70e4f30b630a Parents: 6ee0c75 Author: Carl Yeksigian Authored: Tue Mar 3 11:17:30 2015 -0500 Committer: Yuki Morishita Committed: Tue Mar 3 16:36:39 2015 -0600 -- .../cassandra/db/compaction/LongCompactionsTest.java | 14 -- 1 file changed, 12 insertions(+), 2 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/e7d802e3/test/long/org/apache/cassandra/db/compaction/LongCompactionsTest.java -- diff --git a/test/long/org/apache/cassandra/db/compaction/LongCompactionsTest.java b/test/long/org/apache/cassandra/db/compaction/LongCompactionsTest.java index 21c6457..a21cee5 100644 --- a/test/long/org/apache/cassandra/db/compaction/LongCompactionsTest.java +++ b/test/long/org/apache/cassandra/db/compaction/LongCompactionsTest.java @@ -24,22 +24,31 @@ import java.util.concurrent.ExecutionException; import java.util.concurrent.Future; import java.util.concurrent.TimeUnit; -import org.apache.cassandra.config.Schema; +import org.junit.Before; import org.junit.Test; + import org.apache.cassandra.SchemaLoader; import org.apache.cassandra.Util; +import org.apache.cassandra.config.Schema; import org.apache.cassandra.db.*; import org.apache.cassandra.io.sstable.SSTableReader; import org.apache.cassandra.io.sstable.SSTableUtils; import org.apache.cassandra.utils.ByteBufferUtil; import org.apache.cassandra.utils.FBUtilities; - import static org.junit.Assert.assertEquals; public class LongCompactionsTest extends SchemaLoader { public static final String KEYSPACE1 = "Keyspace1"; +@Before +public void cleanupFiles() +{ +Keyspace keyspace = Keyspace.open(KEYSPACE1); +ColumnFamilyStore cfs = keyspace.getColumnFamilyStore("Standard1"); +cfs.truncateBlocking(); +} + /** * Test compaction with a very wide row. */ @@ -99,6 +108,7 @@ public class LongCompactionsTest extends SchemaLoader long start = System.nanoTime(); final int gcBefore = (int) (System.currentTimeMillis() / 1000) - Schema.instance.getCFMetaData(KEYSPACE1, "Standard1").getGcGraceSeconds(); +assert store.getDataTracker().markCompacting(sstables): "Cannot markCompacting all sstables"; new CompactionTask(store, sstables, gcBefore).execute(null); System.out.println(String.format("%s: sstables=%d rowsper=%d colsper=%d: %d ms", this.getClass().getName(),
[1/3] cassandra git commit: Update example commands in commitlog_archiving.properties
Repository: cassandra Updated Branches: refs/heads/trunk 787a20fdd -> 56348ea7b Update example commands in commitlog_archiving.properties patch by Sam Tunnicliffe; reviewed by Michael Shuler for CASSANDRA-8290 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a936d7e7 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a936d7e7 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a936d7e7 Branch: refs/heads/trunk Commit: a936d7e7fbbc432748d634c326b680d5063742d0 Parents: e7d802e Author: Sam Tunnicliffe Authored: Tue Mar 3 15:10:36 2015 -0800 Committer: Aleksey Yeschenko Committed: Tue Mar 3 15:12:32 2015 -0800 -- conf/commitlog_archiving.properties | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/a936d7e7/conf/commitlog_archiving.properties -- diff --git a/conf/commitlog_archiving.properties b/conf/commitlog_archiving.properties index be4692e..109a50b 100644 --- a/conf/commitlog_archiving.properties +++ b/conf/commitlog_archiving.properties @@ -27,7 +27,7 @@ # Command to execute to archive a commitlog segment # Parameters: %path => Fully qualified path of the segment to archive # %name => Name of the commit log. -# Example: archive_command=/bin/ln %path /backup/%name +# Example: archive_command=/bin/cp -f %path /backup/%name # # Limitation: *_command= expects one command with arguments. STDOUT # and STDIN or multiple commands cannot be executed. You might want @@ -37,7 +37,7 @@ archive_command= # Command to execute to make an archived commitlog live again. # Parameters: %from is the full path to an archived commitlog segment (from restore_directories) # %to is the live commitlog directory -# Example: restore_command=cp -f %from %to +# Example: restore_command=/bin/cp -f %from %to restore_command= # Directory to scan the recovery files in.
[2/2] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1
Merge branch 'cassandra-2.0' into cassandra-2.1 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2acd05d9 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2acd05d9 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2acd05d9 Branch: refs/heads/cassandra-2.1 Commit: 2acd05d96778fc4d4f8ee8cef322f3624638bd8c Parents: 2d1e46e a936d7e Author: Aleksey Yeschenko Authored: Tue Mar 3 15:14:59 2015 -0800 Committer: Aleksey Yeschenko Committed: Tue Mar 3 15:14:59 2015 -0800 -- conf/commitlog_archiving.properties | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) --
[1/5] cassandra git commit: Fixing LongCompactionsTest
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1 bef1d0cb0 -> 2d1e46e40 refs/heads/trunk b2dfe1be9 -> 787a20fdd Fixing LongCompactionsTest Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e7d802e3 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e7d802e3 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e7d802e3 Branch: refs/heads/cassandra-2.1 Commit: e7d802e35976d41979d77978da7d70e4f30b630a Parents: 6ee0c75 Author: Carl Yeksigian Authored: Tue Mar 3 11:17:30 2015 -0500 Committer: Yuki Morishita Committed: Tue Mar 3 16:36:39 2015 -0600 -- .../cassandra/db/compaction/LongCompactionsTest.java | 14 -- 1 file changed, 12 insertions(+), 2 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/e7d802e3/test/long/org/apache/cassandra/db/compaction/LongCompactionsTest.java -- diff --git a/test/long/org/apache/cassandra/db/compaction/LongCompactionsTest.java b/test/long/org/apache/cassandra/db/compaction/LongCompactionsTest.java index 21c6457..a21cee5 100644 --- a/test/long/org/apache/cassandra/db/compaction/LongCompactionsTest.java +++ b/test/long/org/apache/cassandra/db/compaction/LongCompactionsTest.java @@ -24,22 +24,31 @@ import java.util.concurrent.ExecutionException; import java.util.concurrent.Future; import java.util.concurrent.TimeUnit; -import org.apache.cassandra.config.Schema; +import org.junit.Before; import org.junit.Test; + import org.apache.cassandra.SchemaLoader; import org.apache.cassandra.Util; +import org.apache.cassandra.config.Schema; import org.apache.cassandra.db.*; import org.apache.cassandra.io.sstable.SSTableReader; import org.apache.cassandra.io.sstable.SSTableUtils; import org.apache.cassandra.utils.ByteBufferUtil; import org.apache.cassandra.utils.FBUtilities; - import static org.junit.Assert.assertEquals; public class LongCompactionsTest extends SchemaLoader { public static final String KEYSPACE1 = "Keyspace1"; +@Before +public void cleanupFiles() +{ +Keyspace keyspace = Keyspace.open(KEYSPACE1); +ColumnFamilyStore cfs = keyspace.getColumnFamilyStore("Standard1"); +cfs.truncateBlocking(); +} + /** * Test compaction with a very wide row. */ @@ -99,6 +108,7 @@ public class LongCompactionsTest extends SchemaLoader long start = System.nanoTime(); final int gcBefore = (int) (System.currentTimeMillis() / 1000) - Schema.instance.getCFMetaData(KEYSPACE1, "Standard1").getGcGraceSeconds(); +assert store.getDataTracker().markCompacting(sstables): "Cannot markCompacting all sstables"; new CompactionTask(store, sstables, gcBefore).execute(null); System.out.println(String.format("%s: sstables=%d rowsper=%d colsper=%d: %d ms", this.getClass().getName(),
[4/5] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1
Merge branch 'cassandra-2.0' into cassandra-2.1 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2d1e46e4 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2d1e46e4 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2d1e46e4 Branch: refs/heads/cassandra-2.1 Commit: 2d1e46e40a2c9205ab42ebe5bddfd2bc3837f719 Parents: bef1d0c e7d802e Author: Yuki Morishita Authored: Tue Mar 3 17:13:06 2015 -0600 Committer: Yuki Morishita Committed: Tue Mar 3 17:13:06 2015 -0600 -- .../cassandra/db/compaction/LongCompactionsTest.java | 14 -- 1 file changed, 12 insertions(+), 2 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/2d1e46e4/test/long/org/apache/cassandra/db/compaction/LongCompactionsTest.java -- diff --cc test/long/org/apache/cassandra/db/compaction/LongCompactionsTest.java index 94bc09f,a21cee5..e87e336 --- a/test/long/org/apache/cassandra/db/compaction/LongCompactionsTest.java +++ b/test/long/org/apache/cassandra/db/compaction/LongCompactionsTest.java @@@ -99,7 -108,8 +108,8 @@@ public class LongCompactionsTest extend long start = System.nanoTime(); final int gcBefore = (int) (System.currentTimeMillis() / 1000) - Schema.instance.getCFMetaData(KEYSPACE1, "Standard1").getGcGraceSeconds(); + assert store.getDataTracker().markCompacting(sstables): "Cannot markCompacting all sstables"; -new CompactionTask(store, sstables, gcBefore).execute(null); +new CompactionTask(store, sstables, gcBefore, false).execute(null); System.out.println(String.format("%s: sstables=%d rowsper=%d colsper=%d: %d ms", this.getClass().getName(), sstableCount,
[5/5] cassandra git commit: Merge branch 'cassandra-2.1' into trunk
Merge branch 'cassandra-2.1' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/787a20fd Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/787a20fd Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/787a20fd Branch: refs/heads/trunk Commit: 787a20fdd616944e679f82d2d38cfe671fa4c188 Parents: b2dfe1b 2d1e46e Author: Yuki Morishita Authored: Tue Mar 3 17:14:37 2015 -0600 Committer: Yuki Morishita Committed: Tue Mar 3 17:14:37 2015 -0600 -- .../cassandra/db/compaction/LongCompactionsTest.java | 14 -- 1 file changed, 12 insertions(+), 2 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/787a20fd/test/long/org/apache/cassandra/db/compaction/LongCompactionsTest.java -- diff --cc test/long/org/apache/cassandra/db/compaction/LongCompactionsTest.java index f16d094,e87e336..394f27b --- a/test/long/org/apache/cassandra/db/compaction/LongCompactionsTest.java +++ b/test/long/org/apache/cassandra/db/compaction/LongCompactionsTest.java @@@ -24,40 -24,31 +24,49 @@@ import java.util.concurrent.ExecutionEx import java.util.concurrent.Future; import java.util.concurrent.TimeUnit; - import org.apache.cassandra.io.sstable.format.SSTableReader; +import org.junit.BeforeClass; + import org.junit.Before; import org.junit.Test; + import org.apache.cassandra.SchemaLoader; -import org.apache.cassandra.Util; +import org.apache.cassandra.config.KSMetaData; import org.apache.cassandra.config.Schema; +import org.apache.cassandra.Util; +import org.apache.cassandra.exceptions.ConfigurationException; import org.apache.cassandra.db.*; -import org.apache.cassandra.io.sstable.SSTableReader; ++import org.apache.cassandra.io.sstable.format.SSTableReader; import org.apache.cassandra.io.sstable.SSTableUtils; +import org.apache.cassandra.locator.SimpleStrategy; import org.apache.cassandra.utils.ByteBufferUtil; import org.apache.cassandra.utils.FBUtilities; - import static org.junit.Assert.assertEquals; -public class LongCompactionsTest extends SchemaLoader +public class LongCompactionsTest { public static final String KEYSPACE1 = "Keyspace1"; +public static final String CF_STANDARD = "Standard1"; + +@BeforeClass +public static void defineSchema() throws ConfigurationException +{ +Map compactionOptions = new HashMap<>(); +compactionOptions.put("tombstone_compaction_interval", "1"); +SchemaLoader.prepareServer(); +SchemaLoader.createKeyspace(KEYSPACE1, +SimpleStrategy.class, +KSMetaData.optsWithRF(1), +SchemaLoader.standardCFMD(KEYSPACE1, CF_STANDARD) + .compactionStrategyOptions(compactionOptions)); +} + @Before + public void cleanupFiles() + { + Keyspace keyspace = Keyspace.open(KEYSPACE1); + ColumnFamilyStore cfs = keyspace.getColumnFamilyStore("Standard1"); + cfs.truncateBlocking(); + } + /** * Test compaction with a very wide row. */
cassandra git commit: Update example commands in commitlog_archiving.properties
Repository: cassandra Updated Branches: refs/heads/cassandra-2.0 e7d802e35 -> a936d7e7f Update example commands in commitlog_archiving.properties patch by Sam Tunnicliffe; reviewed by Michael Shuler for CASSANDRA-8290 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a936d7e7 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a936d7e7 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a936d7e7 Branch: refs/heads/cassandra-2.0 Commit: a936d7e7fbbc432748d634c326b680d5063742d0 Parents: e7d802e Author: Sam Tunnicliffe Authored: Tue Mar 3 15:10:36 2015 -0800 Committer: Aleksey Yeschenko Committed: Tue Mar 3 15:12:32 2015 -0800 -- conf/commitlog_archiving.properties | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/a936d7e7/conf/commitlog_archiving.properties -- diff --git a/conf/commitlog_archiving.properties b/conf/commitlog_archiving.properties index be4692e..109a50b 100644 --- a/conf/commitlog_archiving.properties +++ b/conf/commitlog_archiving.properties @@ -27,7 +27,7 @@ # Command to execute to archive a commitlog segment # Parameters: %path => Fully qualified path of the segment to archive # %name => Name of the commit log. -# Example: archive_command=/bin/ln %path /backup/%name +# Example: archive_command=/bin/cp -f %path /backup/%name # # Limitation: *_command= expects one command with arguments. STDOUT # and STDIN or multiple commands cannot be executed. You might want @@ -37,7 +37,7 @@ archive_command= # Command to execute to make an archived commitlog live again. # Parameters: %from is the full path to an archived commitlog segment (from restore_directories) # %to is the live commitlog directory -# Example: restore_command=cp -f %from %to +# Example: restore_command=/bin/cp -f %from %to restore_command= # Directory to scan the recovery files in.
cassandra git commit: Fixing LongCompactionsTest
Repository: cassandra Updated Branches: refs/heads/cassandra-2.0 6ee0c757c -> e7d802e35 Fixing LongCompactionsTest Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e7d802e3 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e7d802e3 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e7d802e3 Branch: refs/heads/cassandra-2.0 Commit: e7d802e35976d41979d77978da7d70e4f30b630a Parents: 6ee0c75 Author: Carl Yeksigian Authored: Tue Mar 3 11:17:30 2015 -0500 Committer: Yuki Morishita Committed: Tue Mar 3 16:36:39 2015 -0600 -- .../cassandra/db/compaction/LongCompactionsTest.java | 14 -- 1 file changed, 12 insertions(+), 2 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/e7d802e3/test/long/org/apache/cassandra/db/compaction/LongCompactionsTest.java -- diff --git a/test/long/org/apache/cassandra/db/compaction/LongCompactionsTest.java b/test/long/org/apache/cassandra/db/compaction/LongCompactionsTest.java index 21c6457..a21cee5 100644 --- a/test/long/org/apache/cassandra/db/compaction/LongCompactionsTest.java +++ b/test/long/org/apache/cassandra/db/compaction/LongCompactionsTest.java @@ -24,22 +24,31 @@ import java.util.concurrent.ExecutionException; import java.util.concurrent.Future; import java.util.concurrent.TimeUnit; -import org.apache.cassandra.config.Schema; +import org.junit.Before; import org.junit.Test; + import org.apache.cassandra.SchemaLoader; import org.apache.cassandra.Util; +import org.apache.cassandra.config.Schema; import org.apache.cassandra.db.*; import org.apache.cassandra.io.sstable.SSTableReader; import org.apache.cassandra.io.sstable.SSTableUtils; import org.apache.cassandra.utils.ByteBufferUtil; import org.apache.cassandra.utils.FBUtilities; - import static org.junit.Assert.assertEquals; public class LongCompactionsTest extends SchemaLoader { public static final String KEYSPACE1 = "Keyspace1"; +@Before +public void cleanupFiles() +{ +Keyspace keyspace = Keyspace.open(KEYSPACE1); +ColumnFamilyStore cfs = keyspace.getColumnFamilyStore("Standard1"); +cfs.truncateBlocking(); +} + /** * Test compaction with a very wide row. */ @@ -99,6 +108,7 @@ public class LongCompactionsTest extends SchemaLoader long start = System.nanoTime(); final int gcBefore = (int) (System.currentTimeMillis() / 1000) - Schema.instance.getCFMetaData(KEYSPACE1, "Standard1").getGcGraceSeconds(); +assert store.getDataTracker().markCompacting(sstables): "Cannot markCompacting all sstables"; new CompactionTask(store, sstables, gcBefore).execute(null); System.out.println(String.format("%s: sstables=%d rowsper=%d colsper=%d: %d ms", this.getClass().getName(),
[jira] [Updated] (CASSANDRA-8290) archiving commitlogs after restart fails
[ https://issues.apache.org/jira/browse/CASSANDRA-8290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-8290: - Reviewer: Michael Shuler > archiving commitlogs after restart fails > - > > Key: CASSANDRA-8290 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8290 > Project: Cassandra > Issue Type: Bug > Environment: Cassandra 2.0.11 > Debian wheezy >Reporter: Manuel Lausch >Assignee: Sam Tunnicliffe >Priority: Minor > Fix For: 2.0.13 > > Attachments: 8290.txt > > > After update to Cassandra 2.0.11 Cassandra mostly fails during startup while > archiving commitlogs > see logfile: > {noformat} > RROR [main] 2014-11-03 13:08:59,388 CassandraDaemon.java (line 513) Exception > encountered during startup > java.lang.RuntimeException: java.util.concurrent.ExecutionException: > java.lang.RuntimeException: java.io.IOException: Exception while executing > the command: /bin/ln > /var/lib/cassandra/commitlog/CommitLog-3-1413451666161.log > /var/lib/cassandra/archive/CommitLog-3-1413451666161.log, command error Code: > 1, command output: /bin/ln: failed to create hard link > `/var/lib/cassandra/archive/CommitLog-3-1413451666161.log': File exists > at > org.apache.cassandra.db.commitlog.CommitLogArchiver.maybeWaitForArchiving(CommitLogArchiver.java:158) > at > org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:124) > at > org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:336) > at > org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496) > at > org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585) > Caused by: java.util.concurrent.ExecutionException: > java.lang.RuntimeException: java.io.IOException: Exception while executing > the command: /bin/ln > /var/lib/cassandra/commitlog/CommitLog-3-1413451666161.log > /var/lib/cassandra/archive/CommitLog-3-1413451666161.log, command error Code: > 1, command output: /bin/ln: failed to create hard link > `/var/lib/cassandra/archive/CommitLog-3-1413451666161.log': File exists > at java.util.concurrent.FutureTask.report(FutureTask.java:122) > at java.util.concurrent.FutureTask.get(FutureTask.java:188) > at > org.apache.cassandra.db.commitlog.CommitLogArchiver.maybeWaitForArchiving(CommitLogArchiver.java:145) > ... 
4 more > Caused by: java.lang.RuntimeException: java.io.IOException: Exception while > executing the command: /bin/ln > /var/lib/cassandra/commitlog/CommitLog-3-1413451666161.log > /var/lib/cassandra/archive/CommitLog-3-1413451666161.log, command error Code: > 1, command output: /bin/ln: failed to create hard link > `/var/lib/cassandra/archive/CommitLog-3-1413451666161.log': File exists > at com.google.common.base.Throwables.propagate(Throwables.java:160) > at > org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.io.IOException: Exception while executing the command: > /bin/ln /var/lib/cassandra/commitlog/CommitLog-3-1413451666161.log > /var/lib/cassandra/archive/CommitLog-3-1413451666161.log, command error Code: > 1, command output: /bin/ln: failed to create hard link > `/var/lib/cassandra/archive/CommitLog-3-1413451666161.log': File exists > at org.apache.cassandra.utils.FBUtilities.exec(FBUtilities.java:604) > at > org.apache.cassandra.db.commitlog.CommitLogArchiver.exec(CommitLogArchiver.java:197) > at > org.apache.cassandra.db.commitlog.CommitLogArchiver.access$100(CommitLogArchiver.java:44) > at > org.apache.cassandra.db.commitlog.CommitLogArchiver$1.runMayThrow(CommitLogArchiver.java:132) > at > org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) > ... 5 more > ERROR [commitlog_archiver:1] 2014-11-03 13:08:59,388 CassandraDaemon.java > (line 199) Exception in thread Thread[commitlog_archiver:1,5,main] > java.lang.RuntimeException: java.io.IOException: Exception while executing > the command: /bin/ln > /var/lib/cassandra/commitlog/CommitLog-3-1413451666161.log > /var/lib/cassandra/archive/CommitLog-3-1413451666161.log, command error Code: > 1, command output: /bin/ln: failed to create hard link > `/var/lib/cassandra/archive/CommitLog-3-1413451666161.log': File exists > at com.google.com
[jira] [Commented] (CASSANDRA-8834) Top partitions reporting wrong cardinality
[ https://issues.apache.org/jira/browse/CASSANDRA-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345953#comment-14345953 ] Aleksey Yeschenko commented on CASSANDRA-8834: -- Probably a stress bug (using a partition key of the wrong type), so no need to guard against that. Another issue I see is accessing BB#array() of the partition key directly in CFS#apply(). You can't rely on it being available. > Top partitions reporting wrong cardinality > -- > > Key: CASSANDRA-8834 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8834 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Chris Lohfink >Assignee: Chris Lohfink > Fix For: 2.1.4 > > Attachments: cardinality.patch > > > It always reports a cardinality of 1. Patch also includes a try/catch around > the conversion of partition keys that isn't always handled well in thrift cfs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
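On the second point: a heap-backed {{ByteBuffer}} exposes its backing array, but a direct (off-heap) buffer does not, and even a heap buffer can be a slice with a non-zero offset, so reading {{BB#array()}} unconditionally is unsafe. A minimal sketch of the defensive pattern, using the existing {{ByteBufferUtil.getArray}} helper (variable names are illustrative, not the actual CFS#apply code):
{code}
// 'key' is the partition key ByteBuffer
final byte[] keyBytes;
if (key.hasArray() && key.arrayOffset() == 0 && key.remaining() == key.array().length)
    keyBytes = key.array();                  // heap buffer covering its whole backing array
else
    keyBytes = ByteBufferUtil.getArray(key); // copies; safe for direct or sliced buffers
{code}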
[jira] [Updated] (CASSANDRA-8901) Generalize progress reporting between tools and a server
[ https://issues.apache.org/jira/browse/CASSANDRA-8901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joshua McKenzie updated CASSANDRA-8901: --- Fix Version/s: 3.0 > Generalize progress reporting between tools and a server > > > Key: CASSANDRA-8901 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8901 > Project: Cassandra > Issue Type: Improvement >Reporter: Yuki Morishita >Assignee: Yuki Morishita >Priority: Minor > Fix For: 3.0 > > > Right now, {{nodetool repair}} uses its own method and JMX notification > message format to report progress of async operation call. As we are > expanding async call to other operations (CASSANDRA-7124), we should have > generalized way to report to clients. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-8901) Generalize progress reporting between tools and a server
[ https://issues.apache.org/jira/browse/CASSANDRA-8901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joshua McKenzie updated CASSANDRA-8901: --- Reviewer: Joshua McKenzie > Generalize progress reporting between tools and a server > > > Key: CASSANDRA-8901 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8901 > Project: Cassandra > Issue Type: Improvement >Reporter: Yuki Morishita >Assignee: Yuki Morishita >Priority: Minor > > Right now, {{nodetool repair}} uses its own method and JMX notification > message format to report progress of async operation call. As we are > expanding async call to other operations (CASSANDRA-7124), we should have > generalized way to report to clients. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-7124) Use JMX Notifications to Indicate Success/Failure of Long-Running Operations
[ https://issues.apache.org/jira/browse/CASSANDRA-7124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345899#comment-14345899 ] Yuki Morishita commented on CASSANDRA-7124: --- Sorry for the delayed response. Looking at your async {{nodetool compact}} implementation, it doesn't seem to work as expected, since you spawn one thread per column family but only track the last one. You should add a way to track all of the work, possibly using {{Futures.allAsList}}. Also, I created CASSANDRA-8901 to tackle a generalized way of async progress tracking. I'd appreciate it if you could take a look and see whether it is feasible for this work. Thanks. > Use JMX Notifications to Indicate Success/Failure of Long-Running Operations > > > Key: CASSANDRA-7124 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7124 > Project: Cassandra > Issue Type: Improvement > Components: Tools >Reporter: Tyler Hobbs >Assignee: Rajanarayanan Thottuvaikkatumana >Priority: Minor > Labels: lhf > Fix For: 3.0 > > Attachments: 7124-wip.txt, cassandra-trunk-compact-7124.txt, > cassandra-trunk-decommission-7124.txt > > > If {{nodetool cleanup}} or some other long-running operation takes too long > to complete, you'll see an error like the one in CASSANDRA-2126, so you can't > tell if the operation completed successfully or not. CASSANDRA-4767 fixed > this for repairs with JMX notifications. We should do something similar for > nodetool cleanup, compact, decommission, move, relocate, etc. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
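A rough sketch of the suggestion, assuming each per-table compaction is submitted as a Guava {{ListenableFuture}} (the names {{submitCompaction}} and {{stores}} are illustrative, not taken from the patch):
{code}
List<ListenableFuture<Object>> tasks = new ArrayList<>();
for (ColumnFamilyStore cfs : stores)
    tasks.add(submitCompaction(cfs));            // one future per column family

// Completes only when every column family has finished, and fails if any of them fails,
// so the JMX success/failure notification reflects the whole operation.
ListenableFuture<List<Object>> all = Futures.allAsList(tasks);
all.get();
{code}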
[jira] [Commented] (CASSANDRA-8901) Generalize progress reporting between tools and a server
[ https://issues.apache.org/jira/browse/CASSANDRA-8901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345889#comment-14345889 ] Yuki Morishita commented on CASSANDRA-8901: --- First attempt here: https://github.com/yukim/cassandra/tree/8901 The [first commit|https://github.com/yukim/cassandra/commit/0ed9b041def2140638051ad91fcb1de580cdcd95] adds the generalized progress framework and the [second commit|https://github.com/yukim/cassandra/commit/6164c7b9e94ccf95ceeb1529846f247b16efe1c9] uses that framework in the current async repair. I believe the repair implementation could be generalized further for use by other operations, though I did not go that far in these commits. > Generalize progress reporting between tools and a server > > > Key: CASSANDRA-8901 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8901 > Project: Cassandra > Issue Type: Improvement >Reporter: Yuki Morishita >Assignee: Yuki Morishita >Priority: Minor > > Right now, {{nodetool repair}} uses its own method and JMX notification > message format to report progress of async operation call. As we are > expanding async call to other operations (CASSANDRA-7124), we should have > generalized way to report to clients. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
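For readers following along, the underlying mechanism is plain JMX notifications emitted from an MBean and consumed by a nodetool-style client; a self-contained sketch of the idea (illustrative only, not the API in the linked branch):
{code}
import javax.management.Notification;
import javax.management.NotificationBroadcasterSupport;

public class ProgressBroadcaster extends NotificationBroadcasterSupport
{
    private long sequence = 0;

    // Emits one "progress" notification; a subscribed client renders the
    // message and userData as the operation advances.
    public void reportProgress(String operation, int completed, int total)
    {
        Notification notification = new Notification("progress", this, ++sequence,
                                                     System.currentTimeMillis(), operation);
        notification.setUserData(completed + "/" + total);
        sendNotification(notification);
    }
}
{code}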
[jira] [Created] (CASSANDRA-8901) Generalize progress reporting between tools and a server
Yuki Morishita created CASSANDRA-8901: - Summary: Generalize progress reporting between tools and a server Key: CASSANDRA-8901 URL: https://issues.apache.org/jira/browse/CASSANDRA-8901 Project: Cassandra Issue Type: Improvement Reporter: Yuki Morishita Assignee: Yuki Morishita Priority: Minor Right now, {{nodetool repair}} uses its own method and JMX notification message format to report progress of async operation call. As we are expanding async call to other operations (CASSANDRA-7124), we should have generalized way to report to clients. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8870) Tombstone overwhelming issue aborts client queries
[ https://issues.apache.org/jira/browse/CASSANDRA-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345871#comment-14345871 ] Aleksey Yeschenko commented on CASSANDRA-8870: -- They will if you update or insert null. > Tombstone overwhelming issue aborts client queries > -- > > Key: CASSANDRA-8870 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8870 > Project: Cassandra > Issue Type: Bug > Environment: cassandra 2.1.2 ubunbtu 12.04 >Reporter: Jeff Liu > > We are getting client queries timeout issues on the clients who are trying to > query data from cassandra cluster. > Nodetool status shows that all nodes are still up regardless. > Logs from client side: > {noformat} > com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) > tried for query failed (tried: > cass-chisel01.abc01.abc02.abc.abc.com/10.66.182.113:9042 > (com.datastax.driver.core.TransportException: > [cass-chisel01.tgr01.iad02.testd.nestlabs.com/10.66.182.113:9042] Connection > has been closed)) > at > com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:108) > ~[com.datastax.cassandra.cassandra-driver-core-2.1.3.jar:na] > at > com.datastax.driver.core.RequestHandler$1.run(RequestHandler.java:179) > ~[com.datastax.cassandra.cassandra-driver-core-2.1.3.jar:na] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > ~[na:1.7.0_55] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > ~[na:1.7.0_55] > at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_55] > {noformat} > Logs from cassandra/system.log > {noformat} > ERROR [HintedHandoff:2] 2015-02-23 23:46:28,410 SliceQueryFilter.java:212 - > Scanned over 10 tombstones in system.hints; query aborted (see > tombstone_failure_threshold) > ERROR [HintedHandoff:2] 2015-02-23 23:46:28,417 CassandraDaemon.java:153 - > Exception in thread Thread[HintedHandoff:2,1,main] > org.apache.cassandra.db.filter.TombstoneOverwhelmingException: null > at > org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:214) > ~[apache-cassandra-2.1.2.jar:2.1.2] > at > org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:107) > ~[apache-cassandra-2.1.2.jar:2.1.2] > at > org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:81) > ~[apache-cassandra-2.1.2.jar:2.1.2] > at > org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:69) > ~[apache-cassandra-2.1.2.jar:2.1.2] > at > org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:310) > ~[apache-cassandra-2.1.2.jar:2.1.2] > at > org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:60) > ~[apache-cassandra-2.1.2.jar:2.1.2] > at > org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1858) > ~[apache-cassandra-2.1.2.jar:2.1.2] > at > org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1666) > ~[apache-cassandra-2.1.2.jar:2.1.2] > at > org.apache.cassandra.db.HintedHandOffManager.doDeliverHintsToEndpoint(HintedHandOffManager.java:385) > ~[apache-cassandra-2.1.2.jar:2.1.2] > at > org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:344) > ~[apache-cassandra-2.1.2.jar:2.1.2] > at > org.apache.cassandra.db.HintedHandOffManager.access$400(HintedHandOffManager.java:94) > ~[apache-cassandra-2.1.2.jar:2.1.2] > at > 
org.apache.cassandra.db.HintedHandOffManager$5.run(HintedHandOffManager.java:555) > ~[apache-cassandra-2.1.2.jar:2.1.2] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > ~[na:1.7.0_55] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > ~[na:1.7.0_55] > at java.lang.Thread.run(Thread.java:745) ~[na:1.7.0_55] > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
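The point being made is that writing a null value is itself a delete, so INSERTs and UPDATEs can create tombstones even without an explicit DELETE statement; a quick illustration with the Java driver (keyspace and table names are made up):
{code}
session.execute("CREATE TABLE IF NOT EXISTS ks.t (k int PRIMARY KEY, v text)");
session.execute("INSERT INTO ks.t (k, v) VALUES (1, null)");   // writes a tombstone for column v
session.execute("UPDATE ks.t SET v = null WHERE k = 1");       // same effect: another tombstone for v
{code}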
[jira] [Commented] (CASSANDRA-8574) Gracefully degrade SELECT when there are lots of tombstones
[ https://issues.apache.org/jira/browse/CASSANDRA-8574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345850#comment-14345850 ] Jens Rantil commented on CASSANDRA-8574: I'd be fine with that solution as long as the underlying problem can be solved -- the fact that it's really hard to reliably page through results that have a large number of tombstones. > Gracefully degrade SELECT when there are lots of tombstones > --- > > Key: CASSANDRA-8574 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8574 > Project: Cassandra > Issue Type: Improvement >Reporter: Jens Rantil > Fix For: 3.0 > > > *Background:* There's lots of tooling out there to do BigData analysis on > Cassandra clusters. Examples are Spark and Hadoop, which is offered by DSE. > The problem with both of these so far is that a single partition key with > too many tombstones can make the query job fail hard. > The described scenario happens despite the user setting a rather small > FetchSize. I assume this is a common scenario if you have larger rows. > *Proposal:* To allow a CQL SELECT to gracefully degrade to only return a > smaller batch of results if there are too many tombstones. The tombstones are > ordered according to clustering key and one should be able to page through > them. Potentially: > SELECT * FROM mytable LIMIT 1000 TOMBSTONES; > would page through a maximum of 1000 tombstones, _or_ 1000 (CQL) rows. > I understand that this obviously would degrade performance, but it would at > least yield a result. > *Additional comment:* I haven't dug into Cassandra code, but conceptually I > guess this would be doable. Let me know what you think. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into trunk
Merge branch 'cassandra-2.1' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b2dfe1be Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b2dfe1be Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b2dfe1be Branch: refs/heads/trunk Commit: b2dfe1be96288bd9d15ec40cd3d20deff09ca625 Parents: ab15d8e bef1d0c Author: Aleksey Yeschenko Authored: Tue Mar 3 13:58:37 2015 -0800 Committer: Aleksey Yeschenko Committed: Tue Mar 3 13:58:37 2015 -0800 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/service/CacheService.java | 10 ++ 2 files changed, 7 insertions(+), 4 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/b2dfe1be/CHANGES.txt -- diff --cc CHANGES.txt index d8b222e,a90dd48..cc3658d --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,65 -1,5 +1,66 @@@ +3.0 + * Add role based access control (CASSANDRA-7653, 8650, 7216, 8760) + * Avoid accessing partitioner through StorageProxy (CASSANDRA-8244, 8268) + * Upgrade Metrics library and remove depricated metrics (CASSANDRA-5657) + * Serializing Row cache alternative, fully off heap (CASSANDRA-7438) + * Duplicate rows returned when in clause has repeated values (CASSANDRA-6707) + * Make CassandraException unchecked, extend RuntimeException (CASSANDRA-8560) + * Support direct buffer decompression for reads (CASSANDRA-8464) + * DirectByteBuffer compatible LZ4 methods (CASSANDRA-7039) + * Group sstables for anticompaction correctly (CASSANDRA-8578) + * Add ReadFailureException to native protocol, respond + immediately when replicas encounter errors while handling + a read request (CASSANDRA-7886) + * Switch CommitLogSegment from RandomAccessFile to nio (CASSANDRA-8308) + * Allow mixing token and partition key restrictions (CASSANDRA-7016) + * Support index key/value entries on map collections (CASSANDRA-8473) + * Modernize schema tables (CASSANDRA-8261) + * Support for user-defined aggregation functions (CASSANDRA-8053) + * Fix NPE in SelectStatement with empty IN values (CASSANDRA-8419) + * Refactor SelectStatement, return IN results in natural order instead + of IN value list order and ignore duplicate values in partition key IN restrictions (CASSANDRA-7981) + * Support UDTs, tuples, and collections in user-defined + functions (CASSANDRA-7563) + * Fix aggregate fn results on empty selection, result column name, + and cqlsh parsing (CASSANDRA-8229) + * Mark sstables as repaired after full repair (CASSANDRA-7586) + * Extend Descriptor to include a format value and refactor reader/writer + APIs (CASSANDRA-7443) + * Integrate JMH for microbenchmarks (CASSANDRA-8151) + * Keep sstable levels when bootstrapping (CASSANDRA-7460) + * Add Sigar library and perform basic OS settings check on startup (CASSANDRA-7838) + * Support for aggregation functions (CASSANDRA-4914) + * Remove cassandra-cli (CASSANDRA-7920) + * Accept dollar quoted strings in CQL (CASSANDRA-7769) + * Make assassinate a first class command (CASSANDRA-7935) + * Support IN clause on any partition key column (CASSANDRA-7855) + * Support IN clause on any clustering column (CASSANDRA-4762) + * Improve compaction logging (CASSANDRA-7818) + * Remove YamlFileNetworkTopologySnitch (CASSANDRA-7917) + * Do anticompaction in groups (CASSANDRA-6851) + * Support user-defined functions (CASSANDRA-7395, 7526, 7562, 7740, 7781, 7929, + 7924, 7812, 8063, 7813, 7708) + * Permit configurable timestamps with cassandra-stress (CASSANDRA-7416) + * Move sstable RandomAccessReader to nio2, which 
allows using the + FILE_SHARE_DELETE flag on Windows (CASSANDRA-4050) + * Remove CQL2 (CASSANDRA-5918) + * Add Thrift get_multi_slice call (CASSANDRA-6757) + * Optimize fetching multiple cells by name (CASSANDRA-6933) + * Allow compilation in java 8 (CASSANDRA-7028) + * Make incremental repair default (CASSANDRA-7250) + * Enable code coverage thru JaCoCo (CASSANDRA-7226) + * Switch external naming of 'column families' to 'tables' (CASSANDRA-4369) + * Shorten SSTable path (CASSANDRA-6962) + * Use unsafe mutations for most unit tests (CASSANDRA-6969) + * Fix race condition during calculation of pending ranges (CASSANDRA-7390) + * Fail on very large batch sizes (CASSANDRA-8011) + * Improve concurrency of repair (CASSANDRA-6455, 8208) + * Select optimal CRC32 implementation at runtime (CASSANDRA-8614) + * Evaluate MurmurHash of Token once per query (CASSANDRA-7096) + + 2.1.4 + * Fix rare NPE in KeyCacheSerializer (CASSANDRA-8067) * Pick sstables for validation as late as possible inc repairs (CASSANDRA-8366) * Fix commitlog getPendingTasks to not increment (CASSANDRA-8856) *
[1/2] cassandra git commit: Fix rare NPE in KeyCacheSerializer
Repository: cassandra Updated Branches: refs/heads/trunk ab15d8e61 -> b2dfe1be9 Fix rare NPE in KeyCacheSerializer patch by Aleksey Yeschenko; reviewed by Benedict Elliott Smith for CASSANDRA-8067 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bef1d0cb Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bef1d0cb Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bef1d0cb Branch: refs/heads/trunk Commit: bef1d0cb064faa3641fee31e1584b77ca95c9843 Parents: abc4a37 Author: Aleksey Yeschenko Authored: Tue Mar 3 13:53:14 2015 -0800 Committer: Aleksey Yeschenko Committed: Tue Mar 3 13:56:18 2015 -0800 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/service/CacheService.java | 9 ++--- 2 files changed, 7 insertions(+), 3 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/bef1d0cb/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 76c2e10..a90dd48 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.1.4 + * Fix rare NPE in KeyCacheSerializer (CASSANDRA-8067) * Pick sstables for validation as late as possible inc repairs (CASSANDRA-8366) * Fix commitlog getPendingTasks to not increment (CASSANDRA-8856) * Fix parallelism adjustment in range and secondary index queries http://git-wip-us.apache.org/repos/asf/cassandra/blob/bef1d0cb/src/java/org/apache/cassandra/service/CacheService.java -- diff --git a/src/java/org/apache/cassandra/service/CacheService.java b/src/java/org/apache/cassandra/service/CacheService.java index 1b93c2c..48c0941 100644 --- a/src/java/org/apache/cassandra/service/CacheService.java +++ b/src/java/org/apache/cassandra/service/CacheService.java @@ -467,11 +467,14 @@ public class CacheService implements CacheServiceMBean RowIndexEntry entry = CacheService.instance.keyCache.get(key); if (entry == null) return; + +CFMetaData cfm = Schema.instance.getCFMetaData(key.cfId); +if (cfm == null) +return; // the table no longer exists. + ByteBufferUtil.writeWithLength(key.key, out); -Descriptor desc = key.desc; -out.writeInt(desc.generation); +out.writeInt(key.desc.generation); out.writeBoolean(true); -CFMetaData cfm = Schema.instance.getCFMetaData(key.desc.ksname, key.desc.cfname); cfm.comparator.rowIndexEntrySerializer().serialize(entry, out); }
cassandra git commit: Fix rare NPE in KeyCacheSerializer
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1 abc4a37d0 -> bef1d0cb0 Fix rare NPE in KeyCacheSerializer patch by Aleksey Yeschenko; reviewed by Benedict Elliott Smith for CASSANDRA-8067 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bef1d0cb Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bef1d0cb Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bef1d0cb Branch: refs/heads/cassandra-2.1 Commit: bef1d0cb064faa3641fee31e1584b77ca95c9843 Parents: abc4a37 Author: Aleksey Yeschenko Authored: Tue Mar 3 13:53:14 2015 -0800 Committer: Aleksey Yeschenko Committed: Tue Mar 3 13:56:18 2015 -0800 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/service/CacheService.java | 9 ++--- 2 files changed, 7 insertions(+), 3 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/bef1d0cb/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 76c2e10..a90dd48 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.1.4 + * Fix rare NPE in KeyCacheSerializer (CASSANDRA-8067) * Pick sstables for validation as late as possible inc repairs (CASSANDRA-8366) * Fix commitlog getPendingTasks to not increment (CASSANDRA-8856) * Fix parallelism adjustment in range and secondary index queries http://git-wip-us.apache.org/repos/asf/cassandra/blob/bef1d0cb/src/java/org/apache/cassandra/service/CacheService.java -- diff --git a/src/java/org/apache/cassandra/service/CacheService.java b/src/java/org/apache/cassandra/service/CacheService.java index 1b93c2c..48c0941 100644 --- a/src/java/org/apache/cassandra/service/CacheService.java +++ b/src/java/org/apache/cassandra/service/CacheService.java @@ -467,11 +467,14 @@ public class CacheService implements CacheServiceMBean RowIndexEntry entry = CacheService.instance.keyCache.get(key); if (entry == null) return; + +CFMetaData cfm = Schema.instance.getCFMetaData(key.cfId); +if (cfm == null) +return; // the table no longer exists. + ByteBufferUtil.writeWithLength(key.key, out); -Descriptor desc = key.desc; -out.writeInt(desc.generation); +out.writeInt(key.desc.generation); out.writeBoolean(true); -CFMetaData cfm = Schema.instance.getCFMetaData(key.desc.ksname, key.desc.cfname); cfm.comparator.rowIndexEntrySerializer().serialize(entry, out); }
[jira] [Updated] (CASSANDRA-8303) Create a capability limitation framework
[ https://issues.apache.org/jira/browse/CASSANDRA-8303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-8303: - Assignee: Sam Tunnicliffe > Create a capability limitation framework > > > Key: CASSANDRA-8303 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8303 > Project: Cassandra > Issue Type: Improvement >Reporter: Anupam Arora >Assignee: Sam Tunnicliffe > Fix For: 3.0 > > > In addition to our current Auth framework that acts as a white list, and > regulates access to data, functions, and roles, it would be beneficial to > have a different, capability limitation framework, that would be orthogonal > to Auth, and would act as a blacklist. > Example uses: > - take away the ability to TRUNCATE from all users but the admin (TRUNCATE > itself would still require MODIFY permission) > - take away the ability to use ALLOW FILTERING from all users but > Spark/Hadoop (SELECT would still require SELECT permission) > - take away the ability to use UNLOGGED BATCH from everyone (the operation > itself would still require MODIFY permission) > - take away the ability to use certain consistency levels (make certain > tables LWT-only for all users, for example) > Original description: > Please provide a "strict mode" option in cassandra that will kick out any CQL > queries that are expensive, e.g. any query with ALLOWS FILTERING, > multi-partition queries, secondary index queries, etc. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-8900) AssertionError when binding nested collection in a DELETE
[ https://issues.apache.org/jira/browse/CASSANDRA-8900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Olivier Michallat updated CASSANDRA-8900: - Description: Running this with the Java driver: {code} session.execute("create table if not exists foo2(k int primary key, m map>, int>);"); PreparedStatement pst = session.prepare("delete m[?] from foo2 where k = 1"); session.execute(pst.bind(ImmutableList.of(1))); {code} Produces a server error. Server-side stack trace: {code} ERROR [SharedPool-Worker-4] 2015-03-03 13:33:24,740 Message.java:538 - Unexpected exception during request; channel = [id: 0xf9e92e61, /127.0.0.1:58163 => /127.0.0.1:9042] java.lang.AssertionError: null at org.apache.cassandra.cql3.Maps$DiscarderByKey.execute(Maps.java:381) ~[main/:na] at org.apache.cassandra.cql3.statements.DeleteStatement.addUpdateForKey(DeleteStatement.java:85) ~[main/:na] at org.apache.cassandra.cql3.statements.ModificationStatement.getMutations(ModificationStatement.java:654) ~[main/:na] at org.apache.cassandra.cql3.statements.ModificationStatement.executeWithoutCondition(ModificationStatement.java:487) ~[main/:na] at org.apache.cassandra.cql3.statements.ModificationStatement.execute(ModificationStatement.java:473) ~[main/:na] at org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:238) ~[main/:na] at org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:493) ~[main/:na] at org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:134) ~[main/:na] at org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:439) [main/:na] at org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:335) [main/:na] at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) [netty-all-4.0.23.Final.jar:4.0.23.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) [netty-all-4.0.23.Final.jar:4.0.23.Final] at io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32) [netty-all-4.0.23.Final.jar:4.0.23.Final] at io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324) [netty-all-4.0.23.Final.jar:4.0.23.Final] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [na:1.7.0_60] at org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164) [main/:na] at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) [main/:na] at java.lang.Thread.run(Thread.java:745) [na:1.7.0_60] {code} A simple statement (i.e. QUERY message with values) produces the same result: {code} session.execute("delete m[?] from foo2 where k = 1", ImmutableList.of(1)); {code} was: Running this with the Java driver: {code} session.execute("create table if not exists foo2(k int primary key, m map>, int>);"); PreparedStatement pst = session.prepare("delete m[?] from foo2 where k = 1"); session.execute(pst.bind(ImmutableList.of(1))); {code} Produces a server error. 
Server-side stack trace: {code} ERROR [SharedPool-Worker-4] 2015-03-03 13:33:24,740 Message.java:538 - Unexpected exception during request; channel = [id: 0xf9e92e61, /127.0.0.1:58163 => /127.0.0.1:9042] java.lang.AssertionError: null at org.apache.cassandra.cql3.Maps$DiscarderByKey.execute(Maps.java:381) ~[main/:na] at org.apache.cassandra.cql3.statements.DeleteStatement.addUpdateForKey(DeleteStatement.java:85) ~[main/:na] at org.apache.cassandra.cql3.statements.ModificationStatement.getMutations(ModificationStatement.java:654) ~[main/:na] at org.apache.cassandra.cql3.statements.ModificationStatement.executeWithoutCondition(ModificationStatement.java:487) ~[main/:na] at org.apache.cassandra.cql3.statements.ModificationStatement.execute(ModificationStatement.java:473) ~[main/:na] at org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:238) ~[main/:na] at org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:493) ~[main/:na] at org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:134) ~[main/:na] at org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:439) [main/:na] at org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:335) [main/:na] at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) [netty-all-4.0.23.Final.jar:4.0.23.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(Abstract
[jira] [Created] (CASSANDRA-8900) AssertionError when binding nested collection in a DELETE
Olivier Michallat created CASSANDRA-8900: Summary: AssertionError when binding nested collection in a DELETE Key: CASSANDRA-8900 URL: https://issues.apache.org/jira/browse/CASSANDRA-8900 Project: Cassandra Issue Type: Bug Reporter: Olivier Michallat Priority: Minor Running this with the Java driver: {code} session.execute("create table if not exists foo2(k int primary key, m map>, int>);"); PreparedStatement pst = session.prepare("delete m[?] from foo2 where k = 1"); session.execute(pst.bind(ImmutableList.of(1))); {code} Produces a server error. Server-side stack trace: {code} ERROR [SharedPool-Worker-4] 2015-03-03 13:33:24,740 Message.java:538 - Unexpected exception during request; channel = [id: 0xf9e92e61, /127.0.0.1:58163 => /127.0.0.1:9042] java.lang.AssertionError: null at org.apache.cassandra.cql3.Maps$DiscarderByKey.execute(Maps.java:381) ~[main/:na] at org.apache.cassandra.cql3.statements.DeleteStatement.addUpdateForKey(DeleteStatement.java:85) ~[main/:na] at org.apache.cassandra.cql3.statements.ModificationStatement.getMutations(ModificationStatement.java:654) ~[main/:na] at org.apache.cassandra.cql3.statements.ModificationStatement.executeWithoutCondition(ModificationStatement.java:487) ~[main/:na] at org.apache.cassandra.cql3.statements.ModificationStatement.execute(ModificationStatement.java:473) ~[main/:na] at org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:238) ~[main/:na] at org.apache.cassandra.cql3.QueryProcessor.processPrepared(QueryProcessor.java:493) ~[main/:na] at org.apache.cassandra.transport.messages.ExecuteMessage.execute(ExecuteMessage.java:134) ~[main/:na] at org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:439) [main/:na] at org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:335) [main/:na] at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) [netty-all-4.0.23.Final.jar:4.0.23.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) [netty-all-4.0.23.Final.jar:4.0.23.Final] at io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32) [netty-all-4.0.23.Final.jar:4.0.23.Final] at io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324) [netty-all-4.0.23.Final.jar:4.0.23.Final] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [na:1.7.0_60] at org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164) [main/:na] at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) [main/:na] at java.lang.Thread.run(Thread.java:745) [na:1.7.0_60] {code} A simple statement produces the same result: {code} session.execute("delete m[?] from foo2 where k = 1", ImmutableList.of(1)); {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8890) Enhance cassandra-env.sh to handle Java version output in case of OpenJDK icedtea"
[ https://issues.apache.org/jira/browse/CASSANDRA-8890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345770#comment-14345770 ] Sumod Pawgi commented on CASSANDRA-8890: Thanks Philip, I will take a shot at that. > Enhance cassandra-env.sh to handle Java version output in case of OpenJDK > icedtea" > -- > > Key: CASSANDRA-8890 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8890 > Project: Cassandra > Issue Type: Improvement > Components: Config > Environment: Red Hat Enterprise Linux Server release 6.4 (Santiago) >Reporter: Sumod Pawgi >Priority: Minor > Fix For: 2.1.4 > > > Where observed - > Cassandra node has OpenJDK - > java version "1.7.0_09-icedtea" > In some situations, external agents trying to monitor a C* cluster would need > to run cassandra -v command to determine the Cassandra version and would > expect a numerical output e.g. java version "1.7.0_75" as in case of Oracle > JDK. But if the cluster has OpenJDK IcedTea installed, then this condition is > not satisfied and the agents will not work correctly as the output from > "cassandra -v" is > /opt/apache/cassandra/bin/../conf/cassandra-env.sh: line 102: [: 09-icedtea: > integer expression expected > Cause - > The line which is causing this behavior is - > jvmver=`echo "$java_ver_output" | grep '[openjdk|java] version' | awk -F'"' > 'NR==1 {print $2}'` > Suggested enhancement - > If we change the line to - > jvmver=`echo "$java_ver_output" | grep '[openjdk|java] version' | awk -F'"' > 'NR==1 {print $2}' | awk 'BEGIN {FS="-"};{print $1}'`, > it will give $jvmver as - 1.7.0_09 for the above case. > Can we add this enhancement in the cassandra-env.sh? I would like to add it > myself and submit for review, but I am not familiar with C* check in process. > There might be better ways to do this, but I thought of this to be simplest > and as the edition is at the end of the line, it will be easy to reverse if > needed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
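Purely as an illustration of the string handling the proposed awk step performs (this is not part of the cassandra-env.sh patch), the same normalization expressed in Java: keep everything before the first "-" so a vendor suffix such as "-icedtea" is dropped.
{code}
// Illustrative sketch only: mirrors the proposed awk step, i.e. strip any
// "-vendor" suffix from strings like "1.7.0_09-icedtea".
public class JvmVersionNormalizer
{
    static String normalize(String jvmver)
    {
        int dash = jvmver.indexOf('-');
        return dash == -1 ? jvmver : jvmver.substring(0, dash);
    }

    public static void main(String[] args)
    {
        System.out.println(normalize("1.7.0_09-icedtea")); // prints 1.7.0_09
        System.out.println(normalize("1.7.0_75"));         // prints 1.7.0_75 (unchanged)
    }
}
{code}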
[jira] [Commented] (CASSANDRA-8657) long-test LongCompactionsTest fails
[ https://issues.apache.org/jira/browse/CASSANDRA-8657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345751#comment-14345751 ] Yuki Morishita commented on CASSANDRA-8657: --- +1 > long-test LongCompactionsTest fails > --- > > Key: CASSANDRA-8657 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8657 > Project: Cassandra > Issue Type: Test > Components: Tests >Reporter: Michael Shuler >Assignee: Carl Yeksigian >Priority: Minor > Fix For: 2.0.13, 2.1.4 > > Attachments: 8657-2.0.txt, system.log > > > Same error on 3 of the 4 tests in this suite - failure is the same for 2.0 > and 2.1 branch: > {noformat} > [junit] Testsuite: org.apache.cassandra.db.compaction.LongCompactionsTest > [junit] Tests run: 4, Failures: 3, Errors: 0, Skipped: 0, Time elapsed: > 27.294 sec > [junit] > [junit] Testcase: > testCompactionMany(org.apache.cassandra.db.compaction.LongCompactionsTest): > FAILED > [junit] > /tmp/Keyspace14247587528884809907Standard1/Keyspace1/Keyspace1-Standard1-jb-0-Data.db > is not correctly marked compacting > [junit] junit.framework.AssertionFailedError: > /tmp/Keyspace14247587528884809907Standard1/Keyspace1/Keyspace1-Standard1-jb-0-Data.db > is not correctly marked compacting > [junit] at > org.apache.cassandra.db.compaction.AbstractCompactionTask.(AbstractCompactionTask.java:49) > [junit] at > org.apache.cassandra.db.compaction.CompactionTask.(CompactionTask.java:47) > [junit] at > org.apache.cassandra.db.compaction.LongCompactionsTest.testCompaction(LongCompactionsTest.java:102) > [junit] at > org.apache.cassandra.db.compaction.LongCompactionsTest.testCompactionMany(LongCompactionsTest.java:67) > [junit] > [junit] > [junit] Testcase: > testCompactionSlim(org.apache.cassandra.db.compaction.LongCompactionsTest): > FAILED > [junit] > /tmp/Keyspace13809058557206351042Standard1/Keyspace1/Keyspace1-Standard1-jb-0-Data.db > is not correctly marked compacting > [junit] junit.framework.AssertionFailedError: > /tmp/Keyspace13809058557206351042Standard1/Keyspace1/Keyspace1-Standard1-jb-0-Data.db > is not correctly marked compacting > [junit] at > org.apache.cassandra.db.compaction.AbstractCompactionTask.(AbstractCompactionTask.java:49) > [junit] at > org.apache.cassandra.db.compaction.CompactionTask.(CompactionTask.java:47) > [junit] at > org.apache.cassandra.db.compaction.LongCompactionsTest.testCompaction(LongCompactionsTest.java:102) > [junit] at > org.apache.cassandra.db.compaction.LongCompactionsTest.testCompactionSlim(LongCompactionsTest.java:58) > [junit] > [junit] > [junit] Testcase: > testCompactionWide(org.apache.cassandra.db.compaction.LongCompactionsTest): > FAILED > [junit] > /tmp/Keyspace15276133158440321595Standard1/Keyspace1/Keyspace1-Standard1-jb-0-Data.db > is not correctly marked compacting > [junit] junit.framework.AssertionFailedError: > /tmp/Keyspace15276133158440321595Standard1/Keyspace1/Keyspace1-Standard1-jb-0-Data.db > is not correctly marked compacting > [junit] at > org.apache.cassandra.db.compaction.AbstractCompactionTask.(AbstractCompactionTask.java:49) > [junit] at > org.apache.cassandra.db.compaction.CompactionTask.(CompactionTask.java:47) > [junit] at > org.apache.cassandra.db.compaction.LongCompactionsTest.testCompaction(LongCompactionsTest.java:102) > [junit] at > org.apache.cassandra.db.compaction.LongCompactionsTest.testCompactionWide(LongCompactionsTest.java:49) > [junit] > [junit] > [junit] Test org.apache.cassandra.db.compaction.LongCompactionsTest FAILED > {noformat} > A system.log is attached from the above 
run on 2.0 HEAD. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[3/3] cassandra git commit: Merge branch 'cassandra-2.1' into trunk
Merge branch 'cassandra-2.1' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ab15d8e6 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ab15d8e6 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ab15d8e6 Branch: refs/heads/trunk Commit: ab15d8e61698809913fcf9c32817551dafefe699 Parents: 6951726 abc4a37 Author: Yuki Morishita Authored: Tue Mar 3 15:07:24 2015 -0600 Committer: Yuki Morishita Committed: Tue Mar 3 15:07:24 2015 -0600 -- CHANGES.txt | 1 + 1 file changed, 1 insertion(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/ab15d8e6/CHANGES.txt --
[2/3] cassandra git commit: Show progress of streaming in nodetool netstats
Show progress of streaming in nodetool netstats patch by Phil Yang; reviewed by yukim for CASSANDRA-8886 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d201a251 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d201a251 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d201a251 Branch: refs/heads/trunk Commit: d201a2518a9dfd0aa4fca43c919a7a99cfb46412 Parents: f6d82a5 Author: Phil Yang Authored: Tue Mar 3 14:30:11 2015 -0600 Committer: Yuki Morishita Committed: Tue Mar 3 15:02:59 2015 -0600 -- src/java/org/apache/cassandra/tools/NodeTool.java | 8 1 file changed, 4 insertions(+), 4 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/d201a251/src/java/org/apache/cassandra/tools/NodeTool.java -- diff --git a/src/java/org/apache/cassandra/tools/NodeTool.java b/src/java/org/apache/cassandra/tools/NodeTool.java index a639b45..9094fbd 100644 --- a/src/java/org/apache/cassandra/tools/NodeTool.java +++ b/src/java/org/apache/cassandra/tools/NodeTool.java @@ -626,9 +626,9 @@ public class NodeTool if (!info.receivingSummaries.isEmpty()) { if (humanReadable) -System.out.printf("Receiving %d files, %s total%n", info.getTotalFilesToReceive(), FileUtils.stringifyFileSize(info.getTotalSizeToReceive())); +System.out.printf("Receiving %d files, %s total. Already received %d files, %s total%n", info.getTotalFilesToReceive(), FileUtils.stringifyFileSize(info.getTotalSizeToReceive()), info.getTotalFilesReceived(), FileUtils.stringifyFileSize(info.getTotalSizeReceived())); else -System.out.printf("Receiving %d files, %d bytes total%n", info.getTotalFilesToReceive(), info.getTotalSizeToReceive()); +System.out.printf("Receiving %d files, %d bytes total. Already received %d files, %d bytes total%n", info.getTotalFilesToReceive(), info.getTotalSizeToReceive(), info.getTotalFilesReceived(), info.getTotalSizeReceived()); for (ProgressInfo progress : info.getReceivingFiles()) { System.out.printf("%s%n", progress.toString()); @@ -637,9 +637,9 @@ public class NodeTool if (!info.sendingSummaries.isEmpty()) { if (humanReadable) -System.out.printf("Sending %d files, %s total%n", info.getTotalFilesToSend(), FileUtils.stringifyFileSize(info.getTotalSizeToSend())); +System.out.printf("Sending %d files, %s total. Already sent %d files, %s total%n", info.getTotalFilesToSend(), FileUtils.stringifyFileSize(info.getTotalSizeToSend()), info.getTotalFilesSent(), FileUtils.stringifyFileSize(info.getTotalSizeSent())); else -System.out.printf("Sending %d files, %d bytes total%n", info.getTotalFilesToSend(), info.getTotalSizeToSend()); +System.out.printf("Sending %d files, %d bytes total. Already sent %d files, %d bytes total%n", info.getTotalFilesToSend(), info.getTotalSizeToSend(), info.getTotalFilesSent(), info.getTotalSizeSent()); for (ProgressInfo progress : info.getSendingFiles()) { System.out.printf("%s%n", progress.toString());
[jira] [Updated] (CASSANDRA-8886) nodetool netstats shows the progress of every streaming session
[ https://issues.apache.org/jira/browse/CASSANDRA-8886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuki Morishita updated CASSANDRA-8886: -- Priority: Minor (was: Major) > nodetool netstats shows the progress of every streaming session > --- > > Key: CASSANDRA-8886 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8886 > Project: Cassandra > Issue Type: Improvement > Components: Tools >Reporter: Phil Yang >Assignee: Phil Yang >Priority: Minor > Fix For: 2.1.4 > > Attachments: 8886.txt > > > Now if there is a streaming session in one node, the nodetool netstats only > shows how many files and bytes should be sent or received. I think users may > want to know how many files and bytes have been sent or received without > counting the receiving/sending files themselves. It is a very small patch to > show the progress of a session. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[1/3] cassandra git commit: add CASSANDRA-8886 to change log
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1 d201a2518 -> abc4a37d0 refs/heads/trunk 695172631 -> ab15d8e61 add CASSANDRA-8886 to change log Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/abc4a37d Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/abc4a37d Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/abc4a37d Branch: refs/heads/cassandra-2.1 Commit: abc4a37d0ae1972d73866079ad7a01eefc5220c5 Parents: d201a25 Author: Yuki Morishita Authored: Tue Mar 3 15:07:16 2015 -0600 Committer: Yuki Morishita Committed: Tue Mar 3 15:07:16 2015 -0600 -- CHANGES.txt | 1 + 1 file changed, 1 insertion(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/abc4a37d/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index c3c7a19..76c2e10 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -19,6 +19,7 @@ * Write partition size estimates into a system table (CASSANDRA-7688) * cqlsh: Fix keys() and full() collection indexes in DESCRIBE output (CASSANDRA-8154) + * Show progress of streaming in nodetool netstats (CASSANDRA-8886) Merged from 2.0: * Add offline tool to relevel sstables (CASSANDRA-8301) * Preserve stream ID for more protocol errors (CASSANDRA-8848)
[jira] [Resolved] (CASSANDRA-8886) nodetool netstats shows the progress of every streaming session
[ https://issues.apache.org/jira/browse/CASSANDRA-8886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuki Morishita resolved CASSANDRA-8886. --- Resolution: Fixed Fix Version/s: 2.1.4 Thanks for the patch! Committed with a slight change to the message (added "bytes" when the -H option is not used). > nodetool netstats shows the progress of every streaming session > --- > > Key: CASSANDRA-8886 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8886 > Project: Cassandra > Issue Type: Improvement > Components: Tools >Reporter: Phil Yang >Assignee: Phil Yang > Fix For: 2.1.4 > > Attachments: 8886.txt > > > Now if there is a streaming session in one node, the nodetool netstats only > shows how many files and bytes should be sent or received. I think users may > want to know how many files and bytes have been sent or received without > counting the receiving/sending files themselves. It is a very small patch to > show the progress of a session. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[2/3] cassandra git commit: add CASSANDRA-8886 to change log
add CASSANDRA-8886 to change log Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/abc4a37d Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/abc4a37d Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/abc4a37d Branch: refs/heads/trunk Commit: abc4a37d0ae1972d73866079ad7a01eefc5220c5 Parents: d201a25 Author: Yuki Morishita Authored: Tue Mar 3 15:07:16 2015 -0600 Committer: Yuki Morishita Committed: Tue Mar 3 15:07:16 2015 -0600 -- CHANGES.txt | 1 + 1 file changed, 1 insertion(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/abc4a37d/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index c3c7a19..76c2e10 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -19,6 +19,7 @@ * Write partition size estimates into a system table (CASSANDRA-7688) * cqlsh: Fix keys() and full() collection indexes in DESCRIBE output (CASSANDRA-8154) + * Show progress of streaming in nodetool netstats (CASSANDRA-8886) Merged from 2.0: * Add offline tool to relevel sstables (CASSANDRA-8301) * Preserve stream ID for more protocol errors (CASSANDRA-8848)
[1/3] cassandra git commit: Show progress of streaming in nodetool netstats
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1 f6d82a55f -> d201a2518 refs/heads/trunk b53313533 -> 695172631 Show progress of streaming in nodetool netstats patch by Phil Yang; reviewed by yukim for CASSANDRA-8886 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d201a251 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d201a251 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d201a251 Branch: refs/heads/cassandra-2.1 Commit: d201a2518a9dfd0aa4fca43c919a7a99cfb46412 Parents: f6d82a5 Author: Phil Yang Authored: Tue Mar 3 14:30:11 2015 -0600 Committer: Yuki Morishita Committed: Tue Mar 3 15:02:59 2015 -0600 -- src/java/org/apache/cassandra/tools/NodeTool.java | 8 1 file changed, 4 insertions(+), 4 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/d201a251/src/java/org/apache/cassandra/tools/NodeTool.java -- diff --git a/src/java/org/apache/cassandra/tools/NodeTool.java b/src/java/org/apache/cassandra/tools/NodeTool.java index a639b45..9094fbd 100644 --- a/src/java/org/apache/cassandra/tools/NodeTool.java +++ b/src/java/org/apache/cassandra/tools/NodeTool.java @@ -626,9 +626,9 @@ public class NodeTool if (!info.receivingSummaries.isEmpty()) { if (humanReadable) -System.out.printf("Receiving %d files, %s total%n", info.getTotalFilesToReceive(), FileUtils.stringifyFileSize(info.getTotalSizeToReceive())); +System.out.printf("Receiving %d files, %s total. Already received %d files, %s total%n", info.getTotalFilesToReceive(), FileUtils.stringifyFileSize(info.getTotalSizeToReceive()), info.getTotalFilesReceived(), FileUtils.stringifyFileSize(info.getTotalSizeReceived())); else -System.out.printf("Receiving %d files, %d bytes total%n", info.getTotalFilesToReceive(), info.getTotalSizeToReceive()); +System.out.printf("Receiving %d files, %d bytes total. Already received %d files, %d bytes total%n", info.getTotalFilesToReceive(), info.getTotalSizeToReceive(), info.getTotalFilesReceived(), info.getTotalSizeReceived()); for (ProgressInfo progress : info.getReceivingFiles()) { System.out.printf("%s%n", progress.toString()); @@ -637,9 +637,9 @@ public class NodeTool if (!info.sendingSummaries.isEmpty()) { if (humanReadable) -System.out.printf("Sending %d files, %s total%n", info.getTotalFilesToSend(), FileUtils.stringifyFileSize(info.getTotalSizeToSend())); +System.out.printf("Sending %d files, %s total. Already sent %d files, %s total%n", info.getTotalFilesToSend(), FileUtils.stringifyFileSize(info.getTotalSizeToSend()), info.getTotalFilesSent(), FileUtils.stringifyFileSize(info.getTotalSizeSent())); else -System.out.printf("Sending %d files, %d bytes total%n", info.getTotalFilesToSend(), info.getTotalSizeToSend()); +System.out.printf("Sending %d files, %d bytes total. Already sent %d files, %d bytes total%n", info.getTotalFilesToSend(), info.getTotalSizeToSend(), info.getTotalFilesSent(), info.getTotalSizeSent()); for (ProgressInfo progress : info.getSendingFiles()) { System.out.printf("%s%n", progress.toString());
[3/3] cassandra git commit: Merge branch 'cassandra-2.1' into trunk
Merge branch 'cassandra-2.1' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/69517263 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/69517263 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/69517263 Branch: refs/heads/trunk Commit: 695172631d80eb093ef56b71bfa9d444cb12a3c1 Parents: b533135 d201a25 Author: Yuki Morishita Authored: Tue Mar 3 15:03:16 2015 -0600 Committer: Yuki Morishita Committed: Tue Mar 3 15:03:16 2015 -0600 -- src/java/org/apache/cassandra/tools/NodeTool.java | 8 1 file changed, 4 insertions(+), 4 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/69517263/src/java/org/apache/cassandra/tools/NodeTool.java --
[jira] [Commented] (CASSANDRA-8889) CQL spec is missing doc for support of bind variables for LIMIT, TTL, and TIMESTAMP
[ https://issues.apache.org/jira/browse/CASSANDRA-8889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345711#comment-14345711 ] Jack Krupansky commented on CASSANDRA-8889: --- Thanks. The change for the special variable names looks fine, but the grammar for LIMIT, TTL, and TIMESTAMP still says "<integer>" - it needs to be "( <integer> | <variable> )". > CQL spec is missing doc for support of bind variables for LIMIT, TTL, and > TIMESTAMP > --- > > Key: CASSANDRA-8889 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8889 > Project: Cassandra > Issue Type: Bug > Components: Documentation & website >Reporter: Jack Krupansky >Assignee: Tyler Hobbs >Priority: Minor > > CASSANDRA-4450 added the ability to specify a bind variable for the integer > value of a LIMIT, TTL, or TIMESTAMP option, but the CQL spec has not been > updated to reflect this enhancement. > Also, the special predefined bind variable names are not documented in the > CQL spec: "[limit]", "[ttl]", and "[timestamp]". -- This message was sent by Atlassian JIRA (v6.3.4#6332)
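For readers unfamiliar with the feature under discussion, a minimal sketch of how the bind markers added in CASSANDRA-4450 are used from a client follows. It assumes the DataStax Java driver and a hypothetical events (id int PRIMARY KEY, payload text) table in keyspace ks; none of these appear in the ticket.
{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

// Hedged sketch: binds values for LIMIT and TTL. The contact point, keyspace
// and table are assumptions for illustration, not taken from the ticket.
public class BindMarkerExample
{
    public static void main(String[] args)
    {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("ks");

        // LIMIT bound at execution time
        PreparedStatement select = session.prepare("SELECT * FROM events LIMIT ?");
        session.execute(select.bind(100));

        // TTL bound at execution time
        PreparedStatement insert = session.prepare(
                "INSERT INTO events (id, payload) VALUES (?, ?) USING TTL ?");
        session.execute(insert.bind(1, "hello", 3600));

        cluster.close();
    }
}
{code}
With anonymous markers like these, the parameters are reported under the special names "[limit]" and "[ttl]", which is exactly what the second part of the ticket asks to have documented.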
[jira] [Updated] (CASSANDRA-7094) cqlsh: DESCRIBE is not case-insensitive
[ https://issues.apache.org/jira/browse/CASSANDRA-7094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson updated CASSANDRA-7094: --- Assignee: Philip Thompson (was: Tyler Hobbs) > cqlsh: DESCRIBE is not case-insensitive > --- > > Key: CASSANDRA-7094 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7094 > Project: Cassandra > Issue Type: Bug > Components: Tools > Environment: cassandra 1.2.16 >Reporter: Karl Mueller >Assignee: Philip Thompson >Priority: Trivial > Labels: cqlsh > > Keyspaces which are named starting with capital letters (and perhaps other > things) sometimes require double quotes and sometimes do not. > For example, describe works without quotes: > cqlsh> describe keyspace ProductGenomeLocal; > CREATE KEYSPACE "ProductGenomeLocal" WITH replication = { > 'class': 'SimpleStrategy', > 'replication_factor': '3' > }; > USE "ProductGenomeLocal"; > [...] > But use will not: > cqlsh> use ProductGenomeLocal; > Bad Request: Keyspace 'productgenomelocal' does not exist > It seems that qoutes should only really be necessary when there's spaces or > other symbols that need to be quoted. > At the least, the acceptance or failures of quotes should be consistent. > Other minor annoyance: tab expansion works in use and describe with quotes, > but will not work in either without quotes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-7094) cqlsh: DESCRIBE is not case-insensitive
[ https://issues.apache.org/jira/browse/CASSANDRA-7094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345678#comment-14345678 ] Philip Thompson commented on CASSANDRA-7094: Yup, I can take this. > cqlsh: DESCRIBE is not case-insensitive > --- > > Key: CASSANDRA-7094 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7094 > Project: Cassandra > Issue Type: Bug > Components: Tools > Environment: cassandra 1.2.16 >Reporter: Karl Mueller >Assignee: Tyler Hobbs >Priority: Trivial > Labels: cqlsh > > Keyspaces which are named starting with capital letters (and perhaps other > things) sometimes require double quotes and sometimes do not. > For example, describe works without quotes: > cqlsh> describe keyspace ProductGenomeLocal; > CREATE KEYSPACE "ProductGenomeLocal" WITH replication = { > 'class': 'SimpleStrategy', > 'replication_factor': '3' > }; > USE "ProductGenomeLocal"; > [...] > But use will not: > cqlsh> use ProductGenomeLocal; > Bad Request: Keyspace 'productgenomelocal' does not exist > It seems that qoutes should only really be necessary when there's spaces or > other symbols that need to be quoted. > At the least, the acceptance or failures of quotes should be consistent. > Other minor annoyance: tab expansion works in use and describe with quotes, > but will not work in either without quotes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-7094) cqlsh: DESCRIBE is not case-insensitive
[ https://issues.apache.org/jira/browse/CASSANDRA-7094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson updated CASSANDRA-7094: --- Fix Version/s: 2.1.4 3.0 > cqlsh: DESCRIBE is not case-insensitive > --- > > Key: CASSANDRA-7094 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7094 > Project: Cassandra > Issue Type: Bug > Components: Tools > Environment: cassandra 1.2.16 >Reporter: Karl Mueller >Assignee: Philip Thompson >Priority: Trivial > Labels: cqlsh > Fix For: 3.0, 2.1.4 > > > Keyspaces which are named starting with capital letters (and perhaps other > things) sometimes require double quotes and sometimes do not. > For example, describe works without quotes: > cqlsh> describe keyspace ProductGenomeLocal; > CREATE KEYSPACE "ProductGenomeLocal" WITH replication = { > 'class': 'SimpleStrategy', > 'replication_factor': '3' > }; > USE "ProductGenomeLocal"; > [...] > But use will not: > cqlsh> use ProductGenomeLocal; > Bad Request: Keyspace 'productgenomelocal' does not exist > It seems that qoutes should only really be necessary when there's spaces or > other symbols that need to be quoted. > At the least, the acceptance or failures of quotes should be consistent. > Other minor annoyance: tab expansion works in use and describe with quotes, > but will not work in either without quotes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-7094) cqlsh: DESCRIBE is not case-insensitive
[ https://issues.apache.org/jira/browse/CASSANDRA-7094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345673#comment-14345673 ] Tyler Hobbs commented on CASSANDRA-7094: [~philipthompson] do you want to take a stab at this? > cqlsh: DESCRIBE is not case-insensitive > --- > > Key: CASSANDRA-7094 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7094 > Project: Cassandra > Issue Type: Bug > Components: Tools > Environment: cassandra 1.2.16 >Reporter: Karl Mueller >Assignee: Tyler Hobbs >Priority: Trivial > Labels: cqlsh > > Keyspaces which are named starting with capital letters (and perhaps other > things) sometimes require double quotes and sometimes do not. > For example, describe works without quotes: > cqlsh> describe keyspace ProductGenomeLocal; > CREATE KEYSPACE "ProductGenomeLocal" WITH replication = { > 'class': 'SimpleStrategy', > 'replication_factor': '3' > }; > USE "ProductGenomeLocal"; > [...] > But use will not: > cqlsh> use ProductGenomeLocal; > Bad Request: Keyspace 'productgenomelocal' does not exist > It seems that qoutes should only really be necessary when there's spaces or > other symbols that need to be quoted. > At the least, the acceptance or failures of quotes should be consistent. > Other minor annoyance: tab expansion works in use and describe with quotes, > but will not work in either without quotes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-8883) Percentile computation should use ceil not floor in EstimatedHistogram
[ https://issues.apache.org/jira/browse/CASSANDRA-8883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-8883: -- Reviewer: Chris Lohfink [~cnlwsu] to review > Percentile computation should use ceil not floor in EstimatedHistogram > -- > > Key: CASSANDRA-8883 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8883 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Chris Lohfink >Assignee: Carl Yeksigian >Priority: Minor > Fix For: 2.1.4 > > Attachments: 8883-2.1.txt > > > When computing the pcount, Cassandra uses floor, and the comparison with > elements is >=. So given a simple example with a total of five > elements: > {code} > // data > [1, 1, 1, 1, 1] > // offsets > [1, 2, 3, 4, 5] > {code} > Cassandra would report the 50th percentile as 2, while 3 is the more > expected value. As a comparison, using numpy: > {code} > import numpy as np > np.percentile(np.array([1, 2, 3, 4, 5]), 50) > ==> 3.0 > {code} > The percentile computation was added in CASSANDRA-4022 but is now used heavily in the metrics > Cassandra reports. I think it should err on the side of overestimating > instead of underestimating. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
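To make the floor-versus-ceil difference concrete, the following self-contained sketch (not the actual EstimatedHistogram code) reproduces the example from the description:
{code}
// Hedged sketch only; it reproduces the floor-vs-ceil comparison above, not
// the real EstimatedHistogram implementation.
public class PercentileSketch
{
    static long estimate(double percentile, long[] buckets, long[] offsets, boolean useCeil)
    {
        long total = 0;
        for (long count : buckets)
            total += count;

        long pcount = useCeil ? (long) Math.ceil(percentile * total)
                              : (long) Math.floor(percentile * total);

        long elements = 0;
        for (int i = 0; i < buckets.length; i++)
        {
            elements += buckets[i];
            if (elements >= pcount)   // same >= comparison described in the ticket
                return offsets[i];
        }
        return offsets[offsets.length - 1];
    }

    public static void main(String[] args)
    {
        long[] buckets = {1, 1, 1, 1, 1}; // data from the description
        long[] offsets = {1, 2, 3, 4, 5};
        System.out.println(estimate(0.50, buckets, offsets, false)); // floor -> 2
        System.out.println(estimate(0.50, buckets, offsets, true));  // ceil  -> 3, matching numpy
    }
}
{code}
With floor the pcount is 2 and the scan stops at offset 2; with ceil it is 3 and the scan stops at offset 3, matching the numpy result.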
[jira] [Commented] (CASSANDRA-8879) Alter table on compact storage broken
[ https://issues.apache.org/jira/browse/CASSANDRA-8879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345670#comment-14345670 ] Jonathan Ellis commented on CASSANDRA-8879: --- with the cli going away in 3.0, perhaps we should allow this from cql as well as thrift, since our message is "hey thrift people, you can do everything you used to do from cli, from cqlsh only better." > Alter table on compact storage broken > - > > Key: CASSANDRA-8879 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8879 > Project: Cassandra > Issue Type: Bug >Reporter: Nick Bailey >Assignee: Tyler Hobbs > Fix For: 2.0.13 > > Attachments: 8879-2.0.txt > > > In 2.0 HEAD, alter table on compact storage tables seems to be broken. With > the following table definition, altering the column breaks cqlsh and > generates a stack trace in the log. > {noformat} > CREATE TABLE settings ( > key blob, > column1 blob, > value blob, > PRIMARY KEY ((key), column1) > ) WITH COMPACT STORAGE > {noformat} > {noformat} > cqlsh:OpsCenter> alter table settings ALTER column1 TYPE ascii ; > TSocket read 0 bytes > cqlsh:OpsCenter> DESC TABLE settings; > {noformat} > {noformat} > ERROR [Thrift:7] 2015-02-26 17:20:24,640 CassandraDaemon.java (line 199) > Exception in thread Thread[Thrift:7,5,main] > java.lang.AssertionError > >...at > >org.apache.cassandra.cql3.statements.AlterTableStatement.announceMigration(AlterTableStatement.java:198) > >...at > >org.apache.cassandra.cql3.statements.SchemaAlteringStatement.execute(SchemaAlteringStatement.java:79) > >...at > >org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:158) > >...at > >org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:175) > >...at > >org.apache.cassandra.thrift.CassandraServer.execute_cql3_query(CassandraServer.java:1958) > >...at > >org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4486) > >...at > >org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4470) > >...at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) > >...at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) > >...at > >org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:204) > >...at > >java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > >...at > >java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > >...at java.lang.Thread.run(Thread.java:724) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (CASSANDRA-8899) cqlsh - not able to get row count with select(*) for large table
[ https://issues.apache.org/jira/browse/CASSANDRA-8899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benjamin Lerer reassigned CASSANDRA-8899: - Assignee: Benjamin Lerer > cqlsh - not able to get row count with select(*) for large table > > > Key: CASSANDRA-8899 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8899 > Project: Cassandra > Issue Type: Bug > Environment: Cassandra 2.1.2 ubuntu12.04 >Reporter: Jeff Liu >Assignee: Benjamin Lerer > > I'm getting errors when running a query that looks at a large number of rows. > {noformat} > cqlsh:events> select count(*) from catalog; > count > --- > 1 > (1 rows) > cqlsh:events> select count(*) from catalog limit 11000; > count > --- > 11000 > (1 rows) > cqlsh:events> select count(*) from catalog limit 5; > errors={}, last_host=127.0.0.1 > cqlsh:events> > {noformat} > We are not able to make the select * query to get row count. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Issue Comment Deleted] (CASSANDRA-8067) NullPointerException in KeyCacheSerializer
[ https://issues.apache.org/jira/browse/CASSANDRA-8067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benedict updated CASSANDRA-8067: Comment: was deleted (was: bq. but hesitant to do that in 2.1.x Agreed) > NullPointerException in KeyCacheSerializer > -- > > Key: CASSANDRA-8067 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8067 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Eric Leleu >Assignee: Aleksey Yeschenko > Fix For: 2.1.4 > > Attachments: 8067.txt > > > Hi, > I have this stack trace in the logs of Cassandra server (v2.1) > {code} > ERROR [CompactionExecutor:14] 2014-10-06 23:32:02,098 > CassandraDaemon.java:166 - Exception in thread > Thread[CompactionExecutor:14,1,main] > java.lang.NullPointerException: null > at > org.apache.cassandra.service.CacheService$KeyCacheSerializer.serialize(CacheService.java:475) > ~[apache-cassandra-2.1.0.jar:2.1.0] > at > org.apache.cassandra.service.CacheService$KeyCacheSerializer.serialize(CacheService.java:463) > ~[apache-cassandra-2.1.0.jar:2.1.0] > at > org.apache.cassandra.cache.AutoSavingCache$Writer.saveCache(AutoSavingCache.java:225) > ~[apache-cassandra-2.1.0.jar:2.1.0] > at > org.apache.cassandra.db.compaction.CompactionManager$11.run(CompactionManager.java:1061) > ~[apache-cassandra-2.1.0.jar:2.1.0] > at java.util.concurrent.Executors$RunnableAdapter.call(Unknown > Source) ~[na:1.7.0] > at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source) > ~[na:1.7.0] > at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0] > at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) > [na:1.7.0] > at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) > [na:1.7.0] > at java.lang.Thread.run(Unknown Source) [na:1.7.0] > {code} > It may not be critical because this error occured in the AutoSavingCache. > However the line 475 is about the CFMetaData so it may hide bigger issue... > {code} > 474 CFMetaData cfm = > Schema.instance.getCFMetaData(key.desc.ksname, key.desc.cfname); > 475 cfm.comparator.rowIndexEntrySerializer().serialize(entry, > out); > {code} > Regards, > Eric -- This message was sent by Atlassian JIRA (v6.3.4#6332)
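The NPE means Schema.instance.getCFMetaData() returned null for the cached key's table, most likely because the table was dropped after the key was cached. A guard on the quoted lines might look like the sketch below; this is only an illustration, and the actual fix is whatever the attached 8067.txt does.
{code}
// Hedged sketch around the two quoted lines in KeyCacheSerializer.serialize();
// not the committed patch.
CFMetaData cfm = Schema.instance.getCFMetaData(key.desc.ksname, key.desc.cfname);
if (cfm == null)
    return; // the table no longer exists; skip serializing this key cache entry
cfm.comparator.rowIndexEntrySerializer().serialize(entry, out);
{code}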
[jira] [Commented] (CASSANDRA-8832) SSTableRewriter.abort() should be more robust to failure
[ https://issues.apache.org/jira/browse/CASSANDRA-8832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345648#comment-14345648 ] Benedict commented on CASSANDRA-8832: - bq. AFAICS the actual fix for the problem was committed as part of 7705 and this patch only adds continued processing after exceptions. Can you confirm this? Regrettably, no. This was broken _by_ 7705 unfortunately. I've included a regression that demonstrates the problem. In the event that currentlyOpenedEarly != null, and we abort, we do not close (or unmark compacting) the early opened file. bq. replaceWithFinishedReaders can also throw (e.g. due to a reference counting bug), hiding any earlier errors. It should also be wrapped in a try/merge block. I wasn't too sure about this when I wrote it, since it both shouldn't fail in the same way (has to be programmer error rather than other problems), and it itself leaves the program in a problematic state if it doesn't complete successfully. A lot of code paths need reworking to be resilient to this, and I didn't want to scope creep. However since you raise it, I've opted to fix this latter problem and also wrap it in its own try/catch as you suggest. bq. The static merge of throwables will probably be needed in many other places. Could we move it to a more generic location? Again, I was torn on writing it since I can't think of a good place to group it. I've created our own Throwables utility class, which contains only this for now. If you have a better idea for where to put it, pipe up. > SSTableRewriter.abort() should be more robust to failure > > > Key: CASSANDRA-8832 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8832 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Benedict >Assignee: Benedict > Fix For: 2.1.4 > > > This fixes a bug introduced in CASSANDRA-8124 that attempts to open early > during abort, introducing a failure risk. This patch further preempts > CASSANDRA-8690 to wrap every rollback action in a try/catch block, so that > any internal assertion checks do not actually worsen the state. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
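For context on the "static merge of throwables" being discussed, here is a minimal sketch of such a helper, assuming Java 7's suppressed-exception mechanism; the utility class actually committed for this ticket may differ in naming and detail.
{code}
// Hedged sketch: accumulate failures from a sequence of rollback actions so
// that a later failure does not hide an earlier one.
public final class Throwables
{
    private Throwables() {}

    public static Throwable merge(Throwable existing, Throwable t)
    {
        if (existing == null)
            return t;
        existing.addSuppressed(t); // keep the first error primary, attach the rest
        return existing;
    }

    // Example of wrapping each cleanup step in its own try/catch and merging:
    public static void runAll(Runnable... cleanups)
    {
        Throwable accumulated = null;
        for (Runnable cleanup : cleanups)
        {
            try
            {
                cleanup.run();
            }
            catch (Throwable t)
            {
                accumulated = merge(accumulated, t);
            }
        }
        if (accumulated != null)
            throw new RuntimeException(accumulated);
    }
}
{code}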
[jira] [Resolved] (CASSANDRA-8889) CQL spec is missing doc for support of bind variables for LIMIT, TTL, and TIMESTAMP
[ https://issues.apache.org/jira/browse/CASSANDRA-8889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tyler Hobbs resolved CASSANDRA-8889. Resolution: Fixed I've updated the docs as commit 6ee0c757c3 and pushed the updated versions to the website. Thanks! > CQL spec is missing doc for support of bind variables for LIMIT, TTL, and > TIMESTAMP > --- > > Key: CASSANDRA-8889 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8889 > Project: Cassandra > Issue Type: Bug > Components: Documentation & website >Reporter: Jack Krupansky >Assignee: Tyler Hobbs >Priority: Minor > > CASSANDRA-4450 added the ability to specify a bind variable for the integer > value of a LIMIT, TTL, or TIMESTAMP option, but the CQL spec has not been > updated to reflect this enhancement. > Also, the special predefined bind variable names are not documented in the > CQL spec: "[limit]", "[ttl]", and "[timestamp]". -- This message was sent by Atlassian JIRA (v6.3.4#6332)
svn commit: r1663774 - in /cassandra/site/publish/doc/cql3: CQL-2.0.html CQL-2.1.html
Author: tylerhobbs Date: Tue Mar 3 20:05:46 2015 New Revision: 1663774 URL: http://svn.apache.org/r1663774 Log: Update CQL3 docs for CASSANDRA-8889 Modified: cassandra/site/publish/doc/cql3/CQL-2.0.html cassandra/site/publish/doc/cql3/CQL-2.1.html Modified: cassandra/site/publish/doc/cql3/CQL-2.0.html URL: http://svn.apache.org/viewvc/cassandra/site/publish/doc/cql3/CQL-2.0.html?rev=1663774&r1=1663773&r2=1663774&view=diff == --- cassandra/site/publish/doc/cql3/CQL-2.0.html (original) +++ cassandra/site/publish/doc/cql3/CQL-2.0.html Tue Mar 3 20:05:46 2015 @@ -38,7 +38,7 @@ [quoted HTML hunk trimmed: the rendered "Prepared Statement" paragraph gains the sentence "In addition to providing column values, bind markers may be used to provide values for LIMIT, TIMESTAMP, and TTL clauses. If anonymous bind markers are used, the names for the query parameters will be [limit], [timestamp], and [ttl], respectively." ahead of the Data Definition section.]
[jira] [Commented] (CASSANDRA-8879) Alter table on compact storage broken
[ https://issues.apache.org/jira/browse/CASSANDRA-8879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345640#comment-14345640 ] Nick Bailey commented on CASSANDRA-8879: FWIW, that is essentially the case I was hitting. This was a thrift table that I know contains only ascii data and rather than deal with hex/bytes i wanted to just update the schema. I can see the argument for not allowing this since you could be shooting yourself in the foot if the actual data isn't the right type. On the other hand the user-friendliness of having to alter my schema with thrift (in not completely obvious ways) leaves something to be desired as well. Either way thats probably separate from the actual bug in this ticket (since it's broken going bytes->ascii or ascii->bytes). > Alter table on compact storage broken > - > > Key: CASSANDRA-8879 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8879 > Project: Cassandra > Issue Type: Bug >Reporter: Nick Bailey >Assignee: Tyler Hobbs > Fix For: 2.0.13 > > Attachments: 8879-2.0.txt > > > In 2.0 HEAD, alter table on compact storage tables seems to be broken. With > the following table definition, altering the column breaks cqlsh and > generates a stack trace in the log. > {noformat} > CREATE TABLE settings ( > key blob, > column1 blob, > value blob, > PRIMARY KEY ((key), column1) > ) WITH COMPACT STORAGE > {noformat} > {noformat} > cqlsh:OpsCenter> alter table settings ALTER column1 TYPE ascii ; > TSocket read 0 bytes > cqlsh:OpsCenter> DESC TABLE settings; > {noformat} > {noformat} > ERROR [Thrift:7] 2015-02-26 17:20:24,640 CassandraDaemon.java (line 199) > Exception in thread Thread[Thrift:7,5,main] > java.lang.AssertionError > >...at > >org.apache.cassandra.cql3.statements.AlterTableStatement.announceMigration(AlterTableStatement.java:198) > >...at > >org.apache.cassandra.cql3.statements.SchemaAlteringStatement.execute(SchemaAlteringStatement.java:79) > >...at > >org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:158) > >...at > >org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:175) > >...at > >org.apache.cassandra.thrift.CassandraServer.execute_cql3_query(CassandraServer.java:1958) > >...at > >org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4486) > >...at > >org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4470) > >...at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) > >...at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) > >...at > >org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:204) > >...at > >java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > >...at > >java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > >...at java.lang.Thread.run(Thread.java:724) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[2/2] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1
Merge branch 'cassandra-2.0' into cassandra-2.1 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f6d82a55 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f6d82a55 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f6d82a55 Branch: refs/heads/cassandra-2.1 Commit: f6d82a55fbf938286245c8ed510094715d0c4dc1 Parents: 3f6ad3c 6ee0c75 Author: Tyler Hobbs Authored: Tue Mar 3 14:02:47 2015 -0600 Committer: Tyler Hobbs Committed: Tue Mar 3 14:02:47 2015 -0600 -- doc/cql3/CQL.textile | 2 ++ 1 file changed, 2 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/f6d82a55/doc/cql3/CQL.textile --
[1/3] cassandra git commit: Document bind markers for TIMESTAMP, TLL, and LIMIT
Repository: cassandra Updated Branches: refs/heads/trunk fccf0b4f6 -> b53313533 Document bind markers for TIMESTAMP, TLL, and LIMIT Patch by Tyler Hobbs for CASSANDRA-8889 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6ee0c757 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6ee0c757 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6ee0c757 Branch: refs/heads/trunk Commit: 6ee0c757c387f5e55299e8f6bb433b9c6166ead2 Parents: 72c6ed2 Author: Tyler Hobbs Authored: Tue Mar 3 14:01:43 2015 -0600 Committer: Tyler Hobbs Committed: Tue Mar 3 14:01:43 2015 -0600 -- doc/cql3/CQL.textile | 2 ++ 1 file changed, 2 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/6ee0c757/doc/cql3/CQL.textile -- diff --git a/doc/cql3/CQL.textile b/doc/cql3/CQL.textile index 6085d00..cf074af 100644 --- a/doc/cql3/CQL.textile +++ b/doc/cql3/CQL.textile @@ -131,6 +131,8 @@ CQL supports _prepared statements_. Prepared statement is an optimization that a In a statement, each time a column value is expected (in the data manipulation and query statements), a @@ (see above) can be used instead. A statement with bind variables must then be _prepared_. Once it has been prepared, it can executed by providing concrete values for the bind variables. The exact procedure to prepare a statement and execute a prepared statement depends on the CQL driver used and is beyond the scope of this document. +In addition to providing column values, bind markers may be used to provide values for @LIMIT@, @TIMESTAMP@, and @TTL@ clauses. If anonymous bind markers are used, the names for the query parameters will be @[limit]@, @[timestamp]@, and @[ttl]@, respectively. + h2(#dataDefinition). Data Definition
cassandra git commit: Document bind markers for TIMESTAMP, TLL, and LIMIT
Repository: cassandra Updated Branches: refs/heads/cassandra-2.0 72c6ed288 -> 6ee0c757c Document bind markers for TIMESTAMP, TLL, and LIMIT Patch by Tyler Hobbs for CASSANDRA-8889 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6ee0c757 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6ee0c757 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6ee0c757 Branch: refs/heads/cassandra-2.0 Commit: 6ee0c757c387f5e55299e8f6bb433b9c6166ead2 Parents: 72c6ed2 Author: Tyler Hobbs Authored: Tue Mar 3 14:01:43 2015 -0600 Committer: Tyler Hobbs Committed: Tue Mar 3 14:01:43 2015 -0600 -- doc/cql3/CQL.textile | 2 ++ 1 file changed, 2 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/6ee0c757/doc/cql3/CQL.textile -- diff --git a/doc/cql3/CQL.textile b/doc/cql3/CQL.textile index 6085d00..cf074af 100644 --- a/doc/cql3/CQL.textile +++ b/doc/cql3/CQL.textile @@ -131,6 +131,8 @@ CQL supports _prepared statements_. Prepared statement is an optimization that a In a statement, each time a column value is expected (in the data manipulation and query statements), a @@ (see above) can be used instead. A statement with bind variables must then be _prepared_. Once it has been prepared, it can executed by providing concrete values for the bind variables. The exact procedure to prepare a statement and execute a prepared statement depends on the CQL driver used and is beyond the scope of this document. +In addition to providing column values, bind markers may be used to provide values for @LIMIT@, @TIMESTAMP@, and @TTL@ clauses. If anonymous bind markers are used, the names for the query parameters will be @[limit]@, @[timestamp]@, and @[ttl]@, respectively. + h2(#dataDefinition). Data Definition
[2/3] cassandra git commit: Merge branch 'cassandra-2.0' into cassandra-2.1
Merge branch 'cassandra-2.0' into cassandra-2.1 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f6d82a55 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f6d82a55 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f6d82a55 Branch: refs/heads/trunk Commit: f6d82a55fbf938286245c8ed510094715d0c4dc1 Parents: 3f6ad3c 6ee0c75 Author: Tyler Hobbs Authored: Tue Mar 3 14:02:47 2015 -0600 Committer: Tyler Hobbs Committed: Tue Mar 3 14:02:47 2015 -0600 -- doc/cql3/CQL.textile | 2 ++ 1 file changed, 2 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/f6d82a55/doc/cql3/CQL.textile --
[3/3] cassandra git commit: Merge branch 'cassandra-2.1' into trunk
Merge branch 'cassandra-2.1' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b5331353 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b5331353 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b5331353 Branch: refs/heads/trunk Commit: b533135333e14ed4b482dbdc0febae7f2ee5be6f Parents: fccf0b4 f6d82a5 Author: Tyler Hobbs Authored: Tue Mar 3 14:03:07 2015 -0600 Committer: Tyler Hobbs Committed: Tue Mar 3 14:03:07 2015 -0600 -- doc/cql3/CQL.textile | 2 ++ 1 file changed, 2 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/b5331353/doc/cql3/CQL.textile --
[1/2] cassandra git commit: Document bind markers for TIMESTAMP, TLL, and LIMIT
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1 3f6ad3c98 -> f6d82a55f Document bind markers for TIMESTAMP, TLL, and LIMIT Patch by Tyler Hobbs for CASSANDRA-8889 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6ee0c757 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6ee0c757 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6ee0c757 Branch: refs/heads/cassandra-2.1 Commit: 6ee0c757c387f5e55299e8f6bb433b9c6166ead2 Parents: 72c6ed2 Author: Tyler Hobbs Authored: Tue Mar 3 14:01:43 2015 -0600 Committer: Tyler Hobbs Committed: Tue Mar 3 14:01:43 2015 -0600 -- doc/cql3/CQL.textile | 2 ++ 1 file changed, 2 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/6ee0c757/doc/cql3/CQL.textile -- diff --git a/doc/cql3/CQL.textile b/doc/cql3/CQL.textile index 6085d00..cf074af 100644 --- a/doc/cql3/CQL.textile +++ b/doc/cql3/CQL.textile @@ -131,6 +131,8 @@ CQL supports _prepared statements_. Prepared statement is an optimization that a In a statement, each time a column value is expected (in the data manipulation and query statements), a @@ (see above) can be used instead. A statement with bind variables must then be _prepared_. Once it has been prepared, it can executed by providing concrete values for the bind variables. The exact procedure to prepare a statement and execute a prepared statement depends on the CQL driver used and is beyond the scope of this document. +In addition to providing column values, bind markers may be used to provide values for @LIMIT@, @TIMESTAMP@, and @TTL@ clauses. If anonymous bind markers are used, the names for the query parameters will be @[limit]@, @[timestamp]@, and @[ttl]@, respectively. + h2(#dataDefinition). Data Definition