[jira] [Commented] (CASSANDRA-10007) Repeated rows in paged result

2015-08-07 Thread Steve Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14662630#comment-14662630
 ] 

Steve Wang commented on CASSANDRA-10007:


Seems to work fine in dtest, as Adam and I both found to be true. 

The problem seems to be when the cluster is running on more than one node. 
There's a direct correlation between the number of nodes and the number of rows 
returned. In addition, there's an inverse relation between the fetch_size and 
the number_of_rows returned. For example, with: 

5 nodes, fetch_size = 3 i get values from select* from test.test oscillating 
between 112 and 113, when the result should be 100. 

In addition, the error only seems to occur when there isn't a limit. 



 Repeated rows in paged result
 -

 Key: CASSANDRA-10007
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10007
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Adam Holmberg
Assignee: Benjamin Lerer
  Labels: client-impacting
 Fix For: 3.x

 Attachments: paging-test.py


 We noticed an anomaly in paged results while testing against 3.0.0-alpha1. It 
 seems that unbounded selects can return rows repeated at page boundaries. 
 Furthermore, the number of repeated rows seems to dither in count across 
 consecutive runs of the same query.
 Does not reproduce on 2.2.0 and earlier.
 I also noted that this behavior only manifests on multi-node clusters.
 The attached script shows this behavior when run against 3.0.0-alpha1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9426) Provide a per-table text-text map for storing extra metadata

2015-08-07 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14662659#comment-14662659
 ] 

Aleksey Yeschenko commented on CASSANDRA-9426:
--

Well, technically, you can put anything in a text column too (:

That said, the initial goal was to not even expose this via CQL, but have it 
available for internal consumers. I suggest in v1 we don't expose it.

 Provide a per-table text-text map for storing extra metadata
 -

 Key: CASSANDRA-9426
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9426
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Aleksey Yeschenko
Assignee: Sam Tunnicliffe
  Labels: client-impacting
 Fix For: 3.0 beta 1


 For some applications that build on Cassandra it's important to be able to 
 attach extra metadata to tables, and have it be distributed via regular 
 Cassandra schema paths.
 I propose a new {{extensions maptext,text}} table param for just that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9061) Add backoff and recovery to cqlsh COPY FROM when write timeouts occur

2015-08-07 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-9061:

Reviewer: Stefania  (was: Stefania Alborghetti)

 Add backoff and recovery to cqlsh COPY FROM when write timeouts occur
 -

 Key: CASSANDRA-9061
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9061
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Tyler Hobbs
Assignee: Carl Yeksigian
Priority: Minor
  Labels: cqlsh
 Fix For: 2.1.x

 Attachments: 9061-2.1.txt, 9061-suggested.txt


 Previous versions of COPY FROM didn't handle write timeouts because it was 
 rarely fast enough for that to matter.  Now that performance has improved, 
 write timeouts are more likely to occur.  We should handle these by backing 
 off and retrying the operation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9927) Security for MaterializedViews

2015-08-07 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14662676#comment-14662676
 ] 

Paulo Motta commented on CASSANDRA-9927:


Updated branches with suggested modifications:
* [3.0 patch|https://github.com/pauloricardomg/cassandra/tree/9927-3.0]
* [trunk patch|https://github.com/pauloricardomg/cassandra/tree/9927-trunk]

Updated policies:
* {{CREATE MATERIALIZED VIEW}}: {{ALTER}} and {{SELECT}} permissions on base 
table or parent keyspace.
* {{ALTER MATERIALIZED VIEW}}: {{ALTER}} permission on base table or parent 
keyspace.
* {{DROP MATERIALIZED VIEW}}: {{DROP}} permission on base table or parent 
keyspace.
* {{SELECT}}: {{SELECT}} permissions on base table or parent keyspace.

Tests will be available shortly on the following links:
* [3.0 
dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-9927-3.0-dtest/]
* [3.0 
testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-9927-3.0-testall/]
* [trunk 
dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-9927-trunk-dtest/]
* [trunk 
testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-9927-trunk-testall/]

 Security for MaterializedViews
 --

 Key: CASSANDRA-9927
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9927
 Project: Cassandra
  Issue Type: Task
Reporter: T Jake Luciani
Assignee: Paulo Motta
  Labels: materializedviews
 Fix For: 3.0 beta 1


 We need to think about how to handle security wrt materialized views. Since 
 they are based on a source table we should possibly inherit the same security 
 model as that table.  
 However I can see cases where users would want to create different security 
 auth for different views.  esp once we have CASSANDRA-9664 and users can 
 filter out sensitive data.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10008) Upgrading SSTables fails on 2.2.0 (after upgrade from 2.1.2)

2015-08-07 Thread Chris Moos (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14662808#comment-14662808
 ] 

Chris Moos commented on CASSANDRA-10008:


FYI Tested this patch on my node experiencing the upgrade issue and it was able 
to successfully upgrade the SSTables.

 Upgrading SSTables fails on 2.2.0 (after upgrade from 2.1.2)
 

 Key: CASSANDRA-10008
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10008
 Project: Cassandra
  Issue Type: Bug
Reporter: Chris Moos
Assignee: Chris Moos
 Fix For: 2.2.x

 Attachments: CASSANDRA-10008.patch


 Running *nodetool upgradesstables* fails with the following after upgrading 
 to 2.2.0 from 2.1.2:
 {code}
 error: null
 -- StackTrace --
 java.lang.AssertionError
 at 
 org.apache.cassandra.db.lifecycle.LifecycleTransaction.checkUnused(LifecycleTransaction.java:428)
 at 
 org.apache.cassandra.db.lifecycle.LifecycleTransaction.split(LifecycleTransaction.java:408)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.parallelAllSSTableOperation(CompactionManager.java:268)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.performSSTableRewrite(CompactionManager.java:373)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.sstablesRewrite(ColumnFamilyStore.java:1524)
 at 
 org.apache.cassandra.service.StorageService.upgradeSSTables(StorageService.java:2521)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9341) IndexOutOfBoundsException on server when unlogged batch write times out

2015-08-07 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-9341:
---
Fix Version/s: (was: 2.1.x)

 IndexOutOfBoundsException on server when unlogged batch write times out
 ---

 Key: CASSANDRA-9341
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9341
 Project: Cassandra
  Issue Type: Bug
 Environment: Ubuntu 14.04 LTS 64bit
 Cassandra 2.1.5
Reporter: Nimi Wariboko Jr.
Assignee: Tyler Hobbs
Priority: Minor

 In our application (golang) we were debugging an issue that caused our entire 
 app to lockup (I think this is community-driver related, and has little to do 
 with the server).
 What caused this issue is we were rapidly sending large batches - and (pretty 
 rarely) one of these write requests would timeout. I think what may have 
 happened is the we end up writing incomplete data to the server.
 When this happens we get this response frame from the server
 This is with the native protocol version 2
 {code}
  flags=0x0 
 stream=9 
 op=ERROR 
 length=107
 Error Code: 0
 Message: java.lang.IndexOutOfBoundsException: index: 1408818, length: 
 1375797264 (expected: range(0, 1506453))
 {code}
 And in the Cassandra logs on that node:
 {code}
 ERROR [SharedPool-Worker-28] 2015-05-10 22:32:15,242 Message.java:538 - 
 Unexpected exception during request; channel = [id: 0x68d4acfb, 
 /10.129.196.41:33549 = /10.129.196.24:9042]
 java.lang.IndexOutOfBoundsException: index: 1408818, length: 1375797264 
 (expected: range(0, 1506453))
   at 
 io.netty.buffer.AbstractByteBuf.checkIndex(AbstractByteBuf.java:1143) 
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at io.netty.buffer.SlicedByteBuf.slice(SlicedByteBuf.java:155) 
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at io.netty.buffer.AbstractByteBuf.readSlice(AbstractByteBuf.java:669) 
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at org.apache.cassandra.transport.CBUtil.readValue(CBUtil.java:336) 
 ~[apache-cassandra-2.1.5.jar:2.1.5]
   at org.apache.cassandra.transport.CBUtil.readValueList(CBUtil.java:386) 
 ~[apache-cassandra-2.1.5.jar:2.1.5]
   at 
 org.apache.cassandra.transport.messages.BatchMessage$1.decode(BatchMessage.java:64)
  ~[apache-cassandra-2.1.5.jar:2.1.5]
   at 
 org.apache.cassandra.transport.messages.BatchMessage$1.decode(BatchMessage.java:45)
  ~[apache-cassandra-2.1.5.jar:2.1.5]
   at 
 org.apache.cassandra.transport.Message$ProtocolDecoder.decode(Message.java:247)
  ~[apache-cassandra-2.1.5.jar:2.1.5]
   at 
 org.apache.cassandra.transport.Message$ProtocolDecoder.decode(Message.java:235)
  ~[apache-cassandra-2.1.5.jar:2.1.5]
   at 
 io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:163)
  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787)
  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.epoll.EpollSocketChannel$EpollSocketUnsafe.epollInReady(EpollSocketChannel.java:722)
  ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:326) 
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:264) 
 ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
   at 
 io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
  

[2/2] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2015-08-07 Thread tylerhobbs
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/85098de2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/85098de2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/85098de2

Branch: refs/heads/trunk
Commit: 85098de2fdd64fb5ad930d6339dacceb206bca0e
Parents: f207869 e5e2910
Author: Tyler Hobbs tylerlho...@gmail.com
Authored: Fri Aug 7 14:19:44 2015 -0500
Committer: Tyler Hobbs tylerlho...@gmail.com
Committed: Fri Aug 7 14:19:44 2015 -0500

--
 CHANGES.txt   |  1 +
 .../db/partitions/AbstractThreadUnsafePartition.java  | 10 --
 2 files changed, 9 insertions(+), 2 deletions(-)
--




[jira] [Updated] (CASSANDRA-7645) cqlsh: show partial trace if incomplete after max_trace_wait

2015-08-07 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-7645:
---
Reviewer: Paulo Motta  (was: Tyler Hobbs)

 cqlsh: show partial trace if incomplete after max_trace_wait
 

 Key: CASSANDRA-7645
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7645
 Project: Cassandra
  Issue Type: Improvement
  Components: Tools
Reporter: Tyler Hobbs
Assignee: Carl Yeksigian
Priority: Trivial
 Fix For: 2.1.x


 If a trace hasn't completed within {{max_trace_wait}}, cqlsh will say the 
 trace is unavailable and not show anything.  It (and the underlying python 
 driver) determines when the trace is complete by checking if the {{duration}} 
 column in {{system_traces.sessions}} is non-null.  If {{duration}} is null 
 but we still have some trace events when the timeout is hit, cqlsh should 
 print whatever trace events we have along with a warning about it being 
 incomplete.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/2] cassandra git commit: Fix repeated slices on AbstractThreadUnsafePartition.SliceableIterator

2015-08-07 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk f207869ed - 85098de2f


Fix repeated slices on AbstractThreadUnsafePartition.SliceableIterator

Patch by Tyler Hobbs; reviewed by Stefania Alborghetti for CASSANDRA-10002


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e5e29104
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e5e29104
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e5e29104

Branch: refs/heads/trunk
Commit: e5e29104436fbccb9fb202c3305765ffde6f16b8
Parents: a8b8515
Author: Tyler Hobbs tylerlho...@gmail.com
Authored: Fri Aug 7 14:19:00 2015 -0500
Committer: Tyler Hobbs tylerlho...@gmail.com
Committed: Fri Aug 7 14:19:00 2015 -0500

--
 CHANGES.txt   |  1 +
 .../db/partitions/AbstractThreadUnsafePartition.java  | 10 --
 2 files changed, 9 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5e29104/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 13614cc..7549e70 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.0-beta1
+ * Fix multiple slices on RowSearchers (CASSANDRA-10002)
  * Fix bug in merging of collections (CASSANDRA-10001)
  * Optimize batchlog replay to avoid full scans (CASSANDRA-7237)
  * Repair improvements when using vnodes (CASSANDRA-5220)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5e29104/src/java/org/apache/cassandra/db/partitions/AbstractThreadUnsafePartition.java
--
diff --git 
a/src/java/org/apache/cassandra/db/partitions/AbstractThreadUnsafePartition.java
 
b/src/java/org/apache/cassandra/db/partitions/AbstractThreadUnsafePartition.java
index a716768..acdd0e2 100644
--- 
a/src/java/org/apache/cassandra/db/partitions/AbstractThreadUnsafePartition.java
+++ 
b/src/java/org/apache/cassandra/db/partitions/AbstractThreadUnsafePartition.java
@@ -325,11 +325,17 @@ public abstract class AbstractThreadUnsafePartition 
implements Partition, Iterab
 // Note that because a Slice.Bound can never sort equally to a 
Clustering, we know none of the search will
 // be a match, so we save from testing for it.
 
-final int start = -search(slice.start(), nextIdx, rows.size()) - 
1; // First index to include
+// since the binary search starts from nextIdx, the position 
returned will be an offset from nextIdx; to
+// get an absolute position, add nextIdx back in
+int searchResult = search(slice.start(), nextIdx, rows.size());
+final int start = nextIdx + (-searchResult - 1); // First index to 
include
+
 if (start = rows.size())
 return Collections.emptyIterator();
 
-final int end = -search(slice.end(), start, rows.size()) - 1; // 
First index to exclude
+// similarly, add start to the returned position
+searchResult = search(slice.end(), start, rows.size());
+final int end = start + (-searchResult - 1); // First index to 
exclude
 
 // Remember the end to speed up potential further slice search
 nextIdx = end;



[jira] [Updated] (CASSANDRA-7423) Allow updating individual subfields of UDT

2015-08-07 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-7423:
--
Assignee: Benjamin Lerer

 Allow updating individual subfields of UDT
 --

 Key: CASSANDRA-7423
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7423
 Project: Cassandra
  Issue Type: Improvement
  Components: API, Core
Reporter: Tupshin Harper
Assignee: Benjamin Lerer
  Labels: cql
 Fix For: 3.x


 Since user defined types were implemented in CASSANDRA-5590 as blobs (you 
 have to rewrite the entire type in order to make any modifications), they 
 can't be safely used without LWT for any operation that wants to modify a 
 subset of the UDT's fields by any client process that is not authoritative 
 for the entire blob. 
 When trying to use UDTs to model complex records (particularly with nesting), 
 this is not an exceptional circumstance, this is the totally expected normal 
 situation. 
 The use of UDTs for anything non-trivial is harmful to either performance or 
 consistency or both.
 edit: to clarify, i believe that most potential uses of UDTs should be 
 considered anti-patterns until/unless we have field-level r/w access to 
 individual elements of the UDT, with individual timestamps and standard LWW 
 semantics



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[4/4] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2015-08-07 Thread tylerhobbs
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8902074a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8902074a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8902074a

Branch: refs/heads/trunk
Commit: 8902074a02a692a094e7b23a50a77bf421bd691d
Parents: a8e0029 dfc1a44
Author: Tyler Hobbs tylerlho...@gmail.com
Authored: Fri Aug 7 15:51:09 2015 -0500
Committer: Tyler Hobbs tylerlho...@gmail.com
Committed: Fri Aug 7 15:51:09 2015 -0500

--
 NEWS.txt | 24 
 pylib/cqlshlib/formatting.py | 28 
 pylib/cqlshlib/util.py   | 15 +++
 3 files changed, 43 insertions(+), 24 deletions(-)
--




[2/4] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-08-07 Thread tylerhobbs
Merge branch 'cassandra-2.1' into cassandra-2.2

Conflicts:
NEWS.txt
pylib/cqlshlib/formatting.py


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ce7159e1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ce7159e1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ce7159e1

Branch: refs/heads/trunk
Commit: ce7159e102971c27fd414ce03ced2451c7498bcd
Parents: 8ffeebf 1ecc9cd
Author: Tyler Hobbs tylerlho...@gmail.com
Authored: Fri Aug 7 15:49:42 2015 -0500
Committer: Tyler Hobbs tylerlho...@gmail.com
Committed: Fri Aug 7 15:49:42 2015 -0500

--
 NEWS.txt | 24 
 pylib/cqlshlib/formatting.py | 28 
 pylib/cqlshlib/util.py   | 15 +++
 3 files changed, 43 insertions(+), 24 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ce7159e1/NEWS.txt
--
diff --cc NEWS.txt
index e9c6ef8,0b64e31..1f5e190
--- a/NEWS.txt
+++ b/NEWS.txt
@@@ -13,141 -13,35 +13,165 @@@ restore snapshots created with the prev
  'sstableloader' tool. You can upgrade the file format of your snapshots
  using the provided 'sstableupgrade' tool.
  
 +2.2
 +===
 +
 +New features
 +
 +   - Selecting columns,scalar functions, UDT fields, writetime or ttl together
 + with aggregated is now possible. The value returned for the columns,
 + scalar functions, UDT fields, writetime and ttl will be the ones for
 + the first row matching the query.
 +   - Windows is now a supported platform. Powershell execution for startup 
scripts
 + is highly recommended and can be enabled via an administrator 
command-prompt
 + with: 'powershell set-executionpolicy unrestricted'
 +   - It is now possible to do major compactions when using leveled compaction.
 + Doing that will take all sstables and compact them out in levels. The
 + levels will be non overlapping so doing this will still not be something
 + you want to do very often since it might cause more compactions for a 
while.
 + It is also possible to split output when doing a major compaction with
 + STCS - files will be split in sizes 50%, 25%, 12.5% etc of the total 
size.
 + This might be a bit better than old major compactions which created one 
big
 + file on disk.
 +   - A new tool has been added bin/sstableverify that checks for errors/bitrot
 + in all sstables.  Unlike scrub, this is a non-invasive tool.
 +   - Authentication  Authorization APIs have been updated to introduce
 + roles. Roles and Permissions granted to them are inherited, supporting
 + role based access control. The role concept supercedes that of users
 + and CQL constructs such as CREATE USER are deprecated but retained for
 + compatibility. The requirement to explicitly create Roles in Cassandra
 + even when auth is handled by an external system has been removed, so
 + authentication  authorization can be delegated to such systems in their
 + entirety.
 +   - In addition to the above, Roles are also first class resources and can 
be the
 + subject of permissions. Users (roles) can now be granted permissions on 
other
 + roles, including CREATE, ALTER, DROP  AUTHORIZE, which removesthe need 
for
 + superuser privileges in order to perform user/role management operations.
 +   - Creators of database resources (Keyspaces, Tables, Roles) are now 
automatically
 + granted all permissions on them (if the IAuthorizer implementation 
supports
 + this).
 +   - SSTable file name is changed. Now you don't have Keyspace/CF name
 + in file name. Also, secondary index has its own directory under parent's
 + directory.
 +   - Support for user-defined functions and user-defined aggregates have
 + been added to CQL.
 + 
 + IMPORTANT NOTE: user-defined functions can be used to execute
 + arbitrary and possibly evil code in Cassandra 2.2, and are
 + therefore disabled by default.  To enable UDFs edit
 + cassandra.yaml and set enable_user_defined_functions to true.
 +
 + CASSANDRA-9402 will add a security manager for UDFs in Cassandra
 + 3.0.  This will inherently be backwards-incompatible with any 2.2
 + UDF that perform insecure operations such as opening a socket or
 + writing to the filesystem.
 + 
 +   - Row-cache is now fully off-heap.
 +   - jemalloc is now automatically preloaded and used on Linux and OS-X if
 + installed.
 +   - Please ensure on Unix platforms that there is no libjnadispath.so
 + installed which is 

[3/4] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2015-08-07 Thread tylerhobbs
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dfc1a445
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dfc1a445
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dfc1a445

Branch: refs/heads/trunk
Commit: dfc1a445081315e394c55e39e0ac55875acb2b2d
Parents: 10e4781 ce7159e
Author: Tyler Hobbs tylerlho...@gmail.com
Authored: Fri Aug 7 15:50:07 2015 -0500
Committer: Tyler Hobbs tylerlho...@gmail.com
Committed: Fri Aug 7 15:50:07 2015 -0500

--
 NEWS.txt | 24 
 pylib/cqlshlib/formatting.py | 28 
 pylib/cqlshlib/util.py   | 15 +++
 3 files changed, 43 insertions(+), 24 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/dfc1a445/NEWS.txt
--



[1/4] cassandra git commit: cqlsh: Fix timestamps before 1970 on Windows

2015-08-07 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk a8e002959 - 8902074a0


cqlsh: Fix timestamps before 1970 on Windows

This also has the side effect of displaying timestamps in the UTC
timezone instead of the local timezone.

Patch by Paulo Motta; reviewed by Tyler Hobbs for CASSANDRA-1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1ecc9cd6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1ecc9cd6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1ecc9cd6

Branch: refs/heads/trunk
Commit: 1ecc9cd61e376e57a26042b5ea7e57142dd8a247
Parents: 34193ee
Author: Paulo Motta pauloricard...@gmail.com
Authored: Fri Aug 7 15:45:16 2015 -0500
Committer: Tyler Hobbs tylerlho...@gmail.com
Committed: Fri Aug 7 15:45:16 2015 -0500

--
 NEWS.txt |  4 
 pylib/cqlshlib/formatting.py | 19 ---
 pylib/cqlshlib/util.py   | 15 +++
 3 files changed, 23 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1ecc9cd6/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index dec3e99..0b64e31 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -19,10 +19,13 @@ using the provided 'sstableupgrade' tool.
 
 Upgrading
 -
+- cqlsh will now display timestamps with a UTC timezone. Previously,
+  timestamps were displayed with the local timezone.
 - Commit log files are no longer recycled by default, due to negative
   performance implications. This can be enabled again with the 
   commitlog_segment_recycling option in your cassandra.yaml 
 
+
 2.1.8
 =
 
@@ -31,6 +34,7 @@ Upgrading
 - Nothing specific to this release, but please see 2.1 if you are upgrading
   from a previous version.
 
+
 2.1.7
 =
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1ecc9cd6/pylib/cqlshlib/formatting.py
--
diff --git a/pylib/cqlshlib/formatting.py b/pylib/cqlshlib/formatting.py
index 2a99e23..37bd361 100644
--- a/pylib/cqlshlib/formatting.py
+++ b/pylib/cqlshlib/formatting.py
@@ -23,6 +23,8 @@ from collections import defaultdict
 from . import wcwidth
 from .displaying import colorme, FormattedValue, DEFAULT_VALUE_COLORS
 from cassandra.cqltypes import EMPTY
+from cassandra.util import datetime_from_timestamp
+from util import UTC
 
 unicode_controlchars_re = re.compile(r'[\x00-\x31\x7f-\xa0]')
 controlchars_re = re.compile(r'[\x00-\x31\x7f-\xff]')
@@ -175,21 +177,8 @@ def format_value_timestamp(val, colormap, time_format, 
quote=False, **_):
 formatter_for('datetime')(format_value_timestamp)
 
 def strftime(time_format, seconds):
-local = time.localtime(seconds)
-formatted = time.strftime(time_format, local)
-if local.tm_isdst != 0:
-offset = -time.altzone
-else:
-offset = -time.timezone
-if formatted[-4:] != '' or time_format[-2:] != '%z' or offset == 0:
-return formatted
-# deal with %z on platforms where it isn't supported. see CASSANDRA-4746.
-if offset  0:
-sign = '-'
-else:
-sign = '+'
-hours, minutes = divmod(abs(offset) / 60, 60)
-return formatted[:-5] + sign + '{0:0=2}{1:0=2}'.format(hours, minutes)
+tzless_dt = datetime_from_timestamp(seconds)
+return tzless_dt.replace(tzinfo=UTC()).strftime(time_format)
 
 @formatter_for('str')
 def format_value_text(val, encoding, colormap, quote=False, **_):

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1ecc9cd6/pylib/cqlshlib/util.py
--
diff --git a/pylib/cqlshlib/util.py b/pylib/cqlshlib/util.py
index bc58c8b..4273efc 100644
--- a/pylib/cqlshlib/util.py
+++ b/pylib/cqlshlib/util.py
@@ -16,6 +16,21 @@
 
 import codecs
 from itertools import izip
+from datetime import timedelta, tzinfo
+
+ZERO = timedelta(0)
+
+class UTC(tzinfo):
+UTC
+
+def utcoffset(self, dt):
+return ZERO
+
+def tzname(self, dt):
+return UTC
+
+def dst(self, dt):
+return ZERO
 
 
 def split_list(items, pred):



[jira] [Commented] (CASSANDRA-9960) UDTs still visible after drop/recreate keyspace

2015-08-07 Thread Jaroslav Kamenik (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14661857#comment-14661857
 ] 

Jaroslav Kamenik commented on CASSANDRA-9960:
-

New info - it seems, that old type data are resurrected after creation few new 
types in empty keyspace. Now I have it in those strange state, example from 
cqlsh:

cqlsh drop keyspace woc;
cqlsh CREATE KEYSPACE woc WITH REPLICATION = {'class': 'SimpleStrategy', 
'replication_factor': '1'};
cqlsh select * FROM system.schema_usertypes WHERE keyspace_name='woc' AND 
type_name='xxx';

 keyspace_name | type_name | field_names | field_types
---+---+-+-

(0 rows)
cqlsh CREATE TYPE IF NOT EXISTS woc.a(aa int);
cqlsh select * FROM system.schema_usertypes WHERE keyspace_name='woc' AND 
type_name='xxx';

 keyspace_name | type_name | field_names | field_types
---+---+-+-

(0 rows)
cqlsh CREATE TYPE IF NOT EXISTS woc.b(bb int) ;
cqlsh select * FROM system.schema_usertypes WHERE keyspace_name='woc' AND 
type_name='xxx';

 keyspace_name | type_name | field_names | 
field_types
---+---+-+
   woc |   xxx | ['xxx', 'xxx', 'xxx'] | 
['org.apache.cassandra.db.marshal.AsciiType', 
'org.apache.cassandra.db.marshal.TimeUUIDType', 
'org.apache.cassandra.db.marshal.AsciiType']



 UDTs still visible after drop/recreate keyspace
 ---

 Key: CASSANDRA-9960
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9960
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jaroslav Kamenik
Assignee: Robert Stupp
Priority: Critical
 Fix For: 2.2.x


 When deploying my app from the scratch I run sequence - drop keyspaces, 
 create keyspaces, create UDTs, create tables, generate lots of data... After 
 few cycles, randomly, cassandra ends in state, where I cannot see anything in 
 table system.schema_usertypes, when I select all rows, but queries with 
 specified keyspace_name and type_name return old values. Usually it helps to 
 restart C* and old data disapear, sometimes it needs to delete all C* data. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9960) UDTs still visible after drop/recreate keyspace

2015-08-07 Thread Jaroslav Kamenik (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14661857#comment-14661857
 ] 

Jaroslav Kamenik edited comment on CASSANDRA-9960 at 8/7/15 2:01 PM:
-

New info - it seems, that old type data are resurrected after creation few new 
types in empty keyspace. Now I have it in those strange state, example from 
cqlsh:

cqlsh drop keyspace woc;
cqlsh CREATE KEYSPACE woc WITH REPLICATION = {'class': 'SimpleStrategy', 
'replication_factor': '1'};
cqlsh select * FROM system.schema_usertypes WHERE keyspace_name='woc' AND 
type_name='xxx';

 keyspace_name | type_name | field_names | field_types
---+---+-+-

(0 rows)
cqlsh CREATE TYPE IF NOT EXISTS woc.a(aa int);
cqlsh select * FROM system.schema_usertypes WHERE keyspace_name='woc' AND 
type_name='xxx';

 keyspace_name | type_name | field_names | field_types
---+---+-+-

(0 rows)
cqlsh CREATE TYPE IF NOT EXISTS woc.b(bb int) ;
cqlsh select * FROM system.schema_usertypes WHERE keyspace_name='woc' AND 
type_name='xxx';

 keyspace_name | type_name | field_names | 
field_types
---+---+-+
   woc |   xxx | ['xxx', 'xxx', 'xxx'] | 
['org.apache.cassandra.db.marshal.AsciiType', 
'org.apache.cassandra.db.marshal.TimeUUIDType', 
'org.apache.cassandra.db.marshal.AsciiType']


Can succesfully do this again and again:)



was (Author: shinigami):
New info - it seems, that old type data are resurrected after creation few new 
types in empty keyspace. Now I have it in those strange state, example from 
cqlsh:

cqlsh drop keyspace woc;
cqlsh CREATE KEYSPACE woc WITH REPLICATION = {'class': 'SimpleStrategy', 
'replication_factor': '1'};
cqlsh select * FROM system.schema_usertypes WHERE keyspace_name='woc' AND 
type_name='xxx';

 keyspace_name | type_name | field_names | field_types
---+---+-+-

(0 rows)
cqlsh CREATE TYPE IF NOT EXISTS woc.a(aa int);
cqlsh select * FROM system.schema_usertypes WHERE keyspace_name='woc' AND 
type_name='xxx';

 keyspace_name | type_name | field_names | field_types
---+---+-+-

(0 rows)
cqlsh CREATE TYPE IF NOT EXISTS woc.b(bb int) ;
cqlsh select * FROM system.schema_usertypes WHERE keyspace_name='woc' AND 
type_name='xxx';

 keyspace_name | type_name | field_names | 
field_types
---+---+-+
   woc |   xxx | ['xxx', 'xxx', 'xxx'] | 
['org.apache.cassandra.db.marshal.AsciiType', 
'org.apache.cassandra.db.marshal.TimeUUIDType', 
'org.apache.cassandra.db.marshal.AsciiType']



 UDTs still visible after drop/recreate keyspace
 ---

 Key: CASSANDRA-9960
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9960
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jaroslav Kamenik
Assignee: Robert Stupp
Priority: Critical
 Fix For: 2.2.x


 When deploying my app from the scratch I run sequence - drop keyspaces, 
 create keyspaces, create UDTs, create tables, generate lots of data... After 
 few cycles, randomly, cassandra ends in state, where I cannot see anything in 
 table system.schema_usertypes, when I select all rows, but queries with 
 specified keyspace_name and type_name return old values. Usually it helps to 
 restart C* and old data disapear, sometimes it needs to delete all C* data. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CASSANDRA-9717) TestCommitLog segment size dtests fail on trunk

2015-08-07 Thread Ariel Weisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg reopened CASSANDRA-9717:
---

Still happening?
http://cassci.datastax.com/view/trunk/job/trunk_dtest/445/testReport/junit/commitlog_test/TestCommitLog/small_segment_size_test/

 TestCommitLog segment size dtests fail on trunk
 ---

 Key: CASSANDRA-9717
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9717
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Jim Witschey
Assignee: Jim Witschey
Priority: Blocker
 Fix For: 3.0 beta 1


 The test for the commit log segment size when the specified size is 32MB. It 
 fails for me locally and on on cassci. ([cassci 
 link|http://cassci.datastax.com/view/trunk/job/trunk_dtest/305/testReport/commitlog_test/TestCommitLog/default_segment_size_test/])
 The command to run the test by itself is {{CASSANDRA_VERSION=git:trunk 
 nosetests commitlog_test.py:TestCommitLog.default_segment_size_test}}.
 EDIT: a similar test, 
 {{commitlog_test.py:TestCommitLog.small_segment_size_test}}, also fails with 
 a similar error.
 The solution here may just be to change the expected size or the acceptable 
 error -- the result isn't far off. I'm happy to make the dtest change if 
 that's the solution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9602) EACH_QUORUM READ support needed

2015-08-07 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14662554#comment-14662554
 ] 

Aleksey Yeschenko commented on CASSANDRA-9602:
--

This would require a new native protocol version, so probably won't make it to 
3.0.

 EACH_QUORUM READ support needed
 ---

 Key: CASSANDRA-9602
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9602
 Project: Cassandra
  Issue Type: Improvement
Reporter: Scott Guminy
Assignee: Carl Yeksigian
  Labels: client-impacting, doc-impacting
 Fix For: 3.x


 EACH_QUORUM consistency for READ should be added.
 This bug https://issues.apache.org/jira/browse/CASSANDRA-3272 says it is not 
 needed ever, however I have a use case where I need it.  I think the decision 
 made was incorrect. Here's why...
  
  My application has two key pieces:
  
  # *End user actions* which add/modify data in the system.  End users 
 typically access the application from only one Data Center and only see their 
 own data
 # *Scheduled business logic tasks* which run from any node in any data 
 center.  These tasks process data added by the end users in an asynchronous 
 way
  
  *End user actions must have the highest degree of availability.*  Users must 
 always be able to add data to the system.  The data will be processed later.  
 To support this, end user actions will use *LOCAL_QUORUM Read and Write 
 Consistency*.
  
  Scheduled tasks don't need to have a high degree of availability but MUST 
 operate on the most up to date data.  *The tasks will run with EACH_QUORUM* 
 to ensure that no matter how many data centers we have, we always READ the 
 latest data.  This approach allows us some amount of fault tolerance. 
  
  The problem is that EACH_QUORUM is not a valid READ consistency level.  This 
 mean I have no alternative but to use ALL.  ALL will work, but is not the 
 best since it offers support for ZERO failures.  I would prefer EACH_QUORUM 
 since it can support some failures in each data center.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9927) Security for MaterializedViews

2015-08-07 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14662556#comment-14662556
 ] 

Paulo Motta commented on CASSANDRA-9927:


Finished initial implementation of materialized views authentication. 3.0 v1 
patch is available 
[here|https://github.com/pauloricardomg/cassandra/tree/9927-3.0] for review.

In this initial implementation, the following permissions are necessary for 
each operation:
* {{CREATE MATERIALIZED VIEW}}: {{CREATE}} and {{SELECT}} permissions on base 
table or parent keyspace.
* {{ALTER MATERIALIZED VIEW}}: {{ALTER}} permission on base table or parent 
keyspace.
* {{DROP MATERIALIZED VIEW}}: {{DROP}} permission on base table or parent 
keyspace.

Tests will be available shortly on the following links:
* [3.0 
dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-9927-3.0-dtest/]
* [3.0 
testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-9927-3.0-testall/]
* [trunk 
dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-9927-trunk-dtest/]
* [trunk 
testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-9927-trunk-testall/]

 Security for MaterializedViews
 --

 Key: CASSANDRA-9927
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9927
 Project: Cassandra
  Issue Type: Task
Reporter: T Jake Luciani
Assignee: Paulo Motta
  Labels: materializedviews
 Fix For: 3.0 beta 1


 We need to think about how to handle security wrt materialized views. Since 
 they are based on a source table we should possibly inherit the same security 
 model as that table.  
 However I can see cases where users would want to create different security 
 auth for different views.  esp once we have CASSANDRA-9664 and users can 
 filter out sensitive data.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9927) Security for MaterializedViews

2015-08-07 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14662586#comment-14662586
 ] 

Paulo Motta commented on CASSANDRA-9927:


bq. {{CREATE}} is not a table-level permission, so you should change {{CREATE 
MV}} to require {{SELECT}} on the table and {{CREATE}} on the keyspace.

Really? I saw in this 
[doc|http://docs.datastax.com/en/cql/3.1/cql/cql_reference/list_permissions_r.html]
 {{CREATE}} being listed as a table permission. If it's really forbidden, 
shouldn't we rethink that for MVs ? It allows for a more fine-grained 
authorization, since you may want to allow creation of MVs of a given table, 
but not on the whole keyspace.

bq. More importantly, you should alter SelectStatement to check for {{SELECT}} 
on the base table.

How can I miss that? My filtering algorithm for implementing the policies was 
statement.contains(MATERIALIZED VIEW). Will fix that in v2.

 Security for MaterializedViews
 --

 Key: CASSANDRA-9927
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9927
 Project: Cassandra
  Issue Type: Task
Reporter: T Jake Luciani
Assignee: Paulo Motta
  Labels: materializedviews
 Fix For: 3.0 beta 1


 We need to think about how to handle security wrt materialized views. Since 
 they are based on a source table we should possibly inherit the same security 
 model as that table.  
 However I can see cases where users would want to create different security 
 auth for different views.  esp once we have CASSANDRA-9664 and users can 
 filter out sensitive data.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[4/4] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2015-08-07 Thread tylerhobbs
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/288f2cf4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/288f2cf4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/288f2cf4

Branch: refs/heads/trunk
Commit: 288f2cf4fb8ed90c45511dc0e35b1bdbfbd5a41d
Parents: ed9343e 8c64cef
Author: Tyler Hobbs tylerlho...@gmail.com
Authored: Fri Aug 7 17:44:28 2015 -0500
Committer: Tyler Hobbs tylerlho...@gmail.com
Committed: Fri Aug 7 17:44:28 2015 -0500

--
 CHANGES.txt |   1 +
 .../cql3/statements/ModificationStatement.java  |   2 +-
 .../org/apache/cassandra/db/Clustering.java |   8 +-
 src/java/org/apache/cassandra/db/DataRange.java |  28 +-
 .../org/apache/cassandra/db/LegacyLayout.java   | 922 +++--
 .../apache/cassandra/db/PartitionColumns.java   |   6 +
 .../cassandra/db/PartitionRangeReadCommand.java |  11 +
 .../cassandra/db/RangeSliceVerbHandler.java |  29 +
 .../org/apache/cassandra/db/ReadCommand.java| 995 ++-
 .../cassandra/db/ReadCommandVerbHandler.java|   9 +-
 .../org/apache/cassandra/db/ReadResponse.java   | 250 -
 .../db/SinglePartitionReadCommand.java  |  11 +-
 src/java/org/apache/cassandra/db/Slice.java |  48 +-
 .../filter/AbstractClusteringIndexFilter.java   |  20 -
 .../db/filter/ClusteringIndexFilter.java|  20 +
 .../db/filter/ClusteringIndexNamesFilter.java   |   4 +-
 .../db/filter/ClusteringIndexSliceFilter.java   |   4 +-
 .../cassandra/db/filter/ColumnFilter.java   |   3 +
 .../apache/cassandra/db/filter/DataLimits.java  |  12 +-
 .../db/marshal/AbstractCompositeType.java   |  32 -
 .../cassandra/db/marshal/CompositeType.java |  26 +
 .../AbstractThreadUnsafePartition.java  |   2 +-
 .../db/partitions/PartitionUpdate.java  |  93 +-
 .../UnfilteredPartitionIterators.java   |  13 +-
 .../cassandra/db/rows/BTreeBackedRow.java   |  62 ++
 src/java/org/apache/cassandra/db/rows/Row.java  |  12 +
 .../apache/cassandra/net/MessagingService.java  |   4 +-
 .../cassandra/service/AbstractReadExecutor.java |   5 +-
 .../apache/cassandra/service/DataResolver.java  |   8 +-
 .../cassandra/service/DigestResolver.java   |   6 +-
 .../apache/cassandra/service/ReadCallback.java  |   4 +-
 .../apache/cassandra/service/StorageProxy.java  |   4 +-
 .../cassandra/service/StorageService.java   |   4 +-
 .../service/pager/RangeSliceQueryPager.java |   4 +-
 .../service/pager/SinglePartitionPager.java |   8 +-
 .../cassandra/thrift/CassandraServer.java   |   4 +-
 36 files changed, 2353 insertions(+), 321 deletions(-)
--




[2/4] cassandra git commit: On-wire backward compatibility for 3.0

2015-08-07 Thread tylerhobbs
http://git-wip-us.apache.org/repos/asf/cassandra/blob/8c64cefd/src/java/org/apache/cassandra/db/ReadCommand.java
--
diff --git a/src/java/org/apache/cassandra/db/ReadCommand.java 
b/src/java/org/apache/cassandra/db/ReadCommand.java
index c3f036a..913a1de 100644
--- a/src/java/org/apache/cassandra/db/ReadCommand.java
+++ b/src/java/org/apache/cassandra/db/ReadCommand.java
@@ -18,18 +18,24 @@
 package org.apache.cassandra.db;
 
 import java.io.IOException;
-import java.util.Iterator;
+import java.nio.ByteBuffer;
+import java.util.*;
+
+import com.google.common.collect.Lists;
 
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import org.apache.cassandra.config.CFMetaData;
+import org.apache.cassandra.config.ColumnDefinition;
 import org.apache.cassandra.config.DatabaseDescriptor;
 import org.apache.cassandra.config.Schema;
+import org.apache.cassandra.cql3.Operator;
 import org.apache.cassandra.db.index.SecondaryIndexSearcher;
 import org.apache.cassandra.db.filter.*;
 import org.apache.cassandra.db.rows.*;
 import org.apache.cassandra.db.partitions.*;
+import org.apache.cassandra.dht.AbstractBounds;
 import org.apache.cassandra.io.IVersionedSerializer;
 import org.apache.cassandra.io.util.DataInputPlus;
 import org.apache.cassandra.io.util.DataOutputPlus;
@@ -37,7 +43,10 @@ import org.apache.cassandra.metrics.TableMetrics;
 import org.apache.cassandra.net.MessageOut;
 import org.apache.cassandra.net.MessagingService;
 import org.apache.cassandra.service.ClientWarn;
+import org.apache.cassandra.service.StorageService;
 import org.apache.cassandra.tracing.Tracing;
+import org.apache.cassandra.utils.ByteBufferUtil;
+import org.apache.cassandra.utils.Pair;
 
 /**
  * General interface for storage-engine read commands (common to both range and
@@ -51,6 +60,10 @@ public abstract class ReadCommand implements ReadQuery
 
 public static final IVersionedSerializerReadCommand serializer = new 
Serializer();
 
+public static final IVersionedSerializerReadCommand 
legacyRangeSliceCommandSerializer = new LegacyRangeSliceCommandSerializer();
+public static final IVersionedSerializerReadCommand 
legacyPagedRangeCommandSerializer = new LegacyPagedRangeCommandSerializer();
+public static final IVersionedSerializerReadCommand 
legacyReadCommandSerializer = new LegacyReadCommandSerializer();
+
 private final Kind kind;
 private final CFMetaData metadata;
 private final int nowInSec;
@@ -72,9 +85,9 @@ public abstract class ReadCommand implements ReadQuery
 SINGLE_PARTITION (SinglePartitionReadCommand.selectionDeserializer),
 PARTITION_RANGE  (PartitionRangeReadCommand.selectionDeserializer);
 
-private SelectionDeserializer selectionDeserializer;
+private final SelectionDeserializer selectionDeserializer;
 
-private Kind(SelectionDeserializer selectionDeserializer)
+Kind(SelectionDeserializer selectionDeserializer)
 {
 this.selectionDeserializer = selectionDeserializer;
 }
@@ -251,8 +264,6 @@ public abstract class ReadCommand implements ReadQuery
 /**
  * Executes this command on the local host.
  *
- * @param cfs the store for the table queried by this command.
- *
  * @return an iterator over the result of executing this command locally.
  */
 @SuppressWarnings(resource) // The result iterator is closed upon 
exceptions (we know it's fine to potentially not close the intermediary
@@ -281,7 +292,7 @@ public abstract class ReadCommand implements ReadQuery
 // we'll probably want to optimize by pushing it down the layer 
(like for dropped columns) as it
 // would be more efficient (the sooner we discard stuff we know we 
don't care, the less useless
 // processing we do on it).
-return limits().filter(rowFilter().filter(resultIterator, 
nowInSec()), nowInSec());
+return limits().filter(updatedFilter.filter(resultIterator, 
nowInSec()), nowInSec());
 }
 catch (RuntimeException | Error e)
 {
@@ -389,7 +400,7 @@ public abstract class ReadCommand implements ReadQuery
 logger.warn(msg);
 }
 
-Tracing.trace(Read {} live and {} tombstone cells{}, new 
Object[]{ liveRows, tombstones, (warnTombstones ?  (see 
tombstone_warn_threshold) : ) });
+Tracing.trace(Read {} live and {} tombstone cells{}, 
liveRows, tombstones, (warnTombstones ?  (see tombstone_warn_threshold) : 
));
 }
 }
 };
@@ -398,12 +409,16 @@ public abstract class ReadCommand implements ReadQuery
 /**
  * Creates a message for this command.
  */
-public MessageOutReadCommand createMessage()
+public MessageOutReadCommand createMessage(int version)
 {
-// TODO: we should use different verbs for old message (RANGE_SLICE, 

[3/4] cassandra git commit: On-wire backward compatibility for 3.0

2015-08-07 Thread tylerhobbs
On-wire backward compatibility for 3.0

This adds support for mixed-version clusters with Cassandra 2.1
and 2.2.

Patch by Tyler Hobbs and Sylvain Lebresne for CASSANDRA-9704


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8c64cefd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8c64cefd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8c64cefd

Branch: refs/heads/trunk
Commit: 8c64cefd19d706003d4b33b333274dbf17c9cb34
Parents: 69f0b89
Author: Tyler Hobbs tylerlho...@gmail.com
Authored: Fri Aug 7 17:42:18 2015 -0500
Committer: Tyler Hobbs tylerlho...@gmail.com
Committed: Fri Aug 7 17:42:18 2015 -0500

--
 CHANGES.txt |   1 +
 .../cql3/statements/ModificationStatement.java  |   2 +-
 .../org/apache/cassandra/db/Clustering.java |   8 +-
 src/java/org/apache/cassandra/db/DataRange.java |  28 +-
 .../org/apache/cassandra/db/LegacyLayout.java   | 922 +++--
 .../apache/cassandra/db/PartitionColumns.java   |   6 +
 .../cassandra/db/PartitionRangeReadCommand.java |  11 +
 .../cassandra/db/RangeSliceVerbHandler.java |  29 +
 .../org/apache/cassandra/db/ReadCommand.java| 995 ++-
 .../cassandra/db/ReadCommandVerbHandler.java|   9 +-
 .../org/apache/cassandra/db/ReadResponse.java   | 250 -
 .../db/SinglePartitionReadCommand.java  |  11 +-
 src/java/org/apache/cassandra/db/Slice.java |  48 +-
 .../filter/AbstractClusteringIndexFilter.java   |  20 -
 .../db/filter/ClusteringIndexFilter.java|  20 +
 .../db/filter/ClusteringIndexNamesFilter.java   |   4 +-
 .../db/filter/ClusteringIndexSliceFilter.java   |   4 +-
 .../cassandra/db/filter/ColumnFilter.java   |   3 +
 .../apache/cassandra/db/filter/DataLimits.java  |  12 +-
 .../db/marshal/AbstractCompositeType.java   |  32 -
 .../cassandra/db/marshal/CompositeType.java |  26 +
 .../AbstractThreadUnsafePartition.java  |   2 +-
 .../db/partitions/PartitionUpdate.java  |  93 +-
 .../UnfilteredPartitionIterators.java   |  13 +-
 .../cassandra/db/rows/BTreeBackedRow.java   |  62 ++
 src/java/org/apache/cassandra/db/rows/Row.java  |  12 +
 .../apache/cassandra/net/MessagingService.java  |   4 +-
 .../cassandra/service/AbstractReadExecutor.java |   5 +-
 .../apache/cassandra/service/DataResolver.java  |   8 +-
 .../cassandra/service/DigestResolver.java   |   6 +-
 .../apache/cassandra/service/ReadCallback.java  |   4 +-
 .../apache/cassandra/service/StorageProxy.java  |   4 +-
 .../cassandra/service/StorageService.java   |   4 +-
 .../service/pager/RangeSliceQueryPager.java |   4 +-
 .../service/pager/SinglePartitionPager.java |   8 +-
 .../cassandra/thrift/CassandraServer.java   |   4 +-
 36 files changed, 2353 insertions(+), 321 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8c64cefd/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 216d3f7..0ba7b4e 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.0-beta1
+ * Support mixed-version clusters with Cassandra 2.1 and 2.2 (CASSANDRA-9704)
  * Fix multiple slices on RowSearchers (CASSANDRA-10002)
  * Fix bug in merging of collections (CASSANDRA-10001)
  * Optimize batchlog replay to avoid full scans (CASSANDRA-7237)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8c64cefd/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
index 9f2c952..5fa1842 100644
--- a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
@@ -544,7 +544,7 @@ public abstract class ModificationStatement implements 
CQLStatement
  key,
  new 
ClusteringIndexNamesFilter(clusterings, false)));
 
-MapDecoratedKey, Partition map = new HashMap();
+MapDecoratedKey, Partition map = new HashMap();
 
 SinglePartitionReadCommand.Group group = new 
SinglePartitionReadCommand.Group(commands, DataLimits.NONE);
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8c64cefd/src/java/org/apache/cassandra/db/Clustering.java
--
diff --git a/src/java/org/apache/cassandra/db/Clustering.java 
b/src/java/org/apache/cassandra/db/Clustering.java
index 7754182..a29ce65 100644
--- 

[1/4] cassandra git commit: On-wire backward compatibility for 3.0

2015-08-07 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk ed9343edf - 288f2cf4f


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8c64cefd/src/java/org/apache/cassandra/db/marshal/AbstractCompositeType.java
--
diff --git 
a/src/java/org/apache/cassandra/db/marshal/AbstractCompositeType.java 
b/src/java/org/apache/cassandra/db/marshal/AbstractCompositeType.java
index 4baf6a3..bb2fbf1 100644
--- a/src/java/org/apache/cassandra/db/marshal/AbstractCompositeType.java
+++ b/src/java/org/apache/cassandra/db/marshal/AbstractCompositeType.java
@@ -103,38 +103,6 @@ public abstract class AbstractCompositeType extends 
AbstractTypeByteBuffer
 return l.toArray(new ByteBuffer[l.size()]);
 }
 
-public static class CompositeComponent
-{
-public AbstractType? comparator;
-public ByteBuffer   value;
-
-public CompositeComponent( AbstractType? comparator, ByteBuffer 
value )
-{
-this.comparator = comparator;
-this.value  = value;
-}
-}
-
-public ListCompositeComponent deconstruct( ByteBuffer bytes )
-{
-ListCompositeComponent list = new ArrayListCompositeComponent();
-
-ByteBuffer bb = bytes.duplicate();
-readIsStatic(bb);
-int i = 0;
-
-while (bb.remaining()  0)
-{
-AbstractType comparator = getComparator(i, bb);
-ByteBuffer value = ByteBufferUtil.readBytesWithShortLength(bb);
-
-list.add( new CompositeComponent(comparator,value) );
-
-byte b = bb.get(); // Ignore; not relevant here
-++i;
-}
-return list;
-}
 
 /*
  * Escapes all occurences of the ':' character from the input, replacing 
them by \:.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8c64cefd/src/java/org/apache/cassandra/db/marshal/CompositeType.java
--
diff --git a/src/java/org/apache/cassandra/db/marshal/CompositeType.java 
b/src/java/org/apache/cassandra/db/marshal/CompositeType.java
index 01eb58f..633a994 100644
--- a/src/java/org/apache/cassandra/db/marshal/CompositeType.java
+++ b/src/java/org/apache/cassandra/db/marshal/CompositeType.java
@@ -218,6 +218,32 @@ public class CompositeType extends AbstractCompositeType
 return null;
 }
 
+public static class CompositeComponent
+{
+public ByteBuffer value;
+public byte eoc;
+
+public CompositeComponent(ByteBuffer value, byte eoc)
+{
+this.value = value;
+this.eoc = eoc;
+}
+}
+
+public static ListCompositeComponent deconstruct(ByteBuffer bytes)
+{
+ListCompositeComponent list = new ArrayList();
+ByteBuffer bb = bytes.duplicate();
+readStatic(bb);
+while (bb.remaining()  0)
+{
+ByteBuffer value = ByteBufferUtil.readBytesWithShortLength(bb);
+byte eoc = bb.get();
+list.add(new CompositeComponent(value, eoc));
+}
+return list;
+}
+
 // Extract CQL3 column name from the full column name.
 public ByteBuffer extractLastComponent(ByteBuffer bb)
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8c64cefd/src/java/org/apache/cassandra/db/partitions/AbstractThreadUnsafePartition.java
--
diff --git 
a/src/java/org/apache/cassandra/db/partitions/AbstractThreadUnsafePartition.java
 
b/src/java/org/apache/cassandra/db/partitions/AbstractThreadUnsafePartition.java
index acdd0e2..0b218f5 100644
--- 
a/src/java/org/apache/cassandra/db/partitions/AbstractThreadUnsafePartition.java
+++ 
b/src/java/org/apache/cassandra/db/partitions/AbstractThreadUnsafePartition.java
@@ -189,7 +189,7 @@ public abstract class AbstractThreadUnsafePartition 
implements Partition, Iterab
 return sliceableUnfilteredIterator(ColumnFilter.all(metadata()), 
false);
 }
 
-protected SliceableUnfilteredRowIterator 
sliceableUnfilteredIterator(ColumnFilter selection, boolean reversed)
+public SliceableUnfilteredRowIterator 
sliceableUnfilteredIterator(ColumnFilter selection, boolean reversed)
 {
 return new SliceableIterator(this, selection, reversed);
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8c64cefd/src/java/org/apache/cassandra/db/partitions/PartitionUpdate.java
--
diff --git a/src/java/org/apache/cassandra/db/partitions/PartitionUpdate.java 
b/src/java/org/apache/cassandra/db/partitions/PartitionUpdate.java
index f2e0617..bb73929 100644
--- a/src/java/org/apache/cassandra/db/partitions/PartitionUpdate.java
+++ b/src/java/org/apache/cassandra/db/partitions/PartitionUpdate.java
@@ -23,12 +23,15 @@ import java.util.*;
 
 import com.google.common.collect.Iterables;
 import 

[jira] [Created] (CASSANDRA-10021) Losing writes in a single-node cluster

2015-08-07 Thread Jeremy Schlatter (JIRA)
Jeremy Schlatter created CASSANDRA-10021:


 Summary: Losing writes in a single-node cluster
 Key: CASSANDRA-10021
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10021
 Project: Cassandra
  Issue Type: Bug
 Environment: I have been testing this against Docker's official 
Cassandra image.
Reporter: Jeremy Schlatter
 Attachments: cpp-repro.zip, go-repro.zip

I am able to reliably reproduce write losses in the following circumstances:

* Set up a single-node cluster.
* Create keyspace with SimpleStrategy, replication_factor = 1.
* Create a table with a float field.
* Send an UPDATE command to set the float value on a row.
* After the command returns, immediately send another UPDATE to set the float 
value to something _smaller_ than the first value.
* The second UPDATE is sometimes lost.

Reproduction code attached. There are two implementations: one in Go and one in 
C++. They do the same thing -- I implemented both to rule out a bug in the 
client library. For both cases, you can reproduce by doing the following:

1. docker run --name repro-cassandra --rm cassandra:2.0.14
(or any other Cassandra version)
2. Download and unzip one of the zip files, and change to its directory.
3. docker build -t repro .
4. docker run --link repro-cassandra:cassandra --rm repro

The reproduction code will repeatedly run two UPDATEs followed by a SELECT 
until it detects a lost write, and then print:

Lost a write. Got 20.50, want 10.50

This may be fixed in 2.1.8 because I have not been able to reproduce it in that 
version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9927) Security for MaterializedViews

2015-08-07 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14662567#comment-14662567
 ] 

Aleksey Yeschenko commented on CASSANDRA-9927:
--

{{CREATE}} is not a table-level permission, so you should change {{CREATE MV}} 
to require {{SELECT}} on the table and {{CREATE}} on the keyspace.

More importantly, you should alter {{SelectStatement}} to check for {{SELECT}} 
on the base table.

 Security for MaterializedViews
 --

 Key: CASSANDRA-9927
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9927
 Project: Cassandra
  Issue Type: Task
Reporter: T Jake Luciani
Assignee: Paulo Motta
  Labels: materializedviews
 Fix For: 3.0 beta 1


 We need to think about how to handle security wrt materialized views. Since 
 they are based on a source table we should possibly inherit the same security 
 model as that table.  
 However I can see cases where users would want to create different security 
 auth for different views.  esp once we have CASSANDRA-9664 and users can 
 filter out sensitive data.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/3] cassandra git commit: On-wire backward compatibility for 3.0

2015-08-07 Thread tylerhobbs
http://git-wip-us.apache.org/repos/asf/cassandra/blob/8c64cefd/src/java/org/apache/cassandra/db/ReadCommand.java
--
diff --git a/src/java/org/apache/cassandra/db/ReadCommand.java 
b/src/java/org/apache/cassandra/db/ReadCommand.java
index c3f036a..913a1de 100644
--- a/src/java/org/apache/cassandra/db/ReadCommand.java
+++ b/src/java/org/apache/cassandra/db/ReadCommand.java
@@ -18,18 +18,24 @@
 package org.apache.cassandra.db;
 
 import java.io.IOException;
-import java.util.Iterator;
+import java.nio.ByteBuffer;
+import java.util.*;
+
+import com.google.common.collect.Lists;
 
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import org.apache.cassandra.config.CFMetaData;
+import org.apache.cassandra.config.ColumnDefinition;
 import org.apache.cassandra.config.DatabaseDescriptor;
 import org.apache.cassandra.config.Schema;
+import org.apache.cassandra.cql3.Operator;
 import org.apache.cassandra.db.index.SecondaryIndexSearcher;
 import org.apache.cassandra.db.filter.*;
 import org.apache.cassandra.db.rows.*;
 import org.apache.cassandra.db.partitions.*;
+import org.apache.cassandra.dht.AbstractBounds;
 import org.apache.cassandra.io.IVersionedSerializer;
 import org.apache.cassandra.io.util.DataInputPlus;
 import org.apache.cassandra.io.util.DataOutputPlus;
@@ -37,7 +43,10 @@ import org.apache.cassandra.metrics.TableMetrics;
 import org.apache.cassandra.net.MessageOut;
 import org.apache.cassandra.net.MessagingService;
 import org.apache.cassandra.service.ClientWarn;
+import org.apache.cassandra.service.StorageService;
 import org.apache.cassandra.tracing.Tracing;
+import org.apache.cassandra.utils.ByteBufferUtil;
+import org.apache.cassandra.utils.Pair;
 
 /**
  * General interface for storage-engine read commands (common to both range and
@@ -51,6 +60,10 @@ public abstract class ReadCommand implements ReadQuery
 
 public static final IVersionedSerializerReadCommand serializer = new 
Serializer();
 
+public static final IVersionedSerializerReadCommand 
legacyRangeSliceCommandSerializer = new LegacyRangeSliceCommandSerializer();
+public static final IVersionedSerializerReadCommand 
legacyPagedRangeCommandSerializer = new LegacyPagedRangeCommandSerializer();
+public static final IVersionedSerializerReadCommand 
legacyReadCommandSerializer = new LegacyReadCommandSerializer();
+
 private final Kind kind;
 private final CFMetaData metadata;
 private final int nowInSec;
@@ -72,9 +85,9 @@ public abstract class ReadCommand implements ReadQuery
 SINGLE_PARTITION (SinglePartitionReadCommand.selectionDeserializer),
 PARTITION_RANGE  (PartitionRangeReadCommand.selectionDeserializer);
 
-private SelectionDeserializer selectionDeserializer;
+private final SelectionDeserializer selectionDeserializer;
 
-private Kind(SelectionDeserializer selectionDeserializer)
+Kind(SelectionDeserializer selectionDeserializer)
 {
 this.selectionDeserializer = selectionDeserializer;
 }
@@ -251,8 +264,6 @@ public abstract class ReadCommand implements ReadQuery
 /**
  * Executes this command on the local host.
  *
- * @param cfs the store for the table queried by this command.
- *
  * @return an iterator over the result of executing this command locally.
  */
 @SuppressWarnings(resource) // The result iterator is closed upon 
exceptions (we know it's fine to potentially not close the intermediary
@@ -281,7 +292,7 @@ public abstract class ReadCommand implements ReadQuery
 // we'll probably want to optimize by pushing it down the layer 
(like for dropped columns) as it
 // would be more efficient (the sooner we discard stuff we know we 
don't care, the less useless
 // processing we do on it).
-return limits().filter(rowFilter().filter(resultIterator, 
nowInSec()), nowInSec());
+return limits().filter(updatedFilter.filter(resultIterator, 
nowInSec()), nowInSec());
 }
 catch (RuntimeException | Error e)
 {
@@ -389,7 +400,7 @@ public abstract class ReadCommand implements ReadQuery
 logger.warn(msg);
 }
 
-Tracing.trace(Read {} live and {} tombstone cells{}, new 
Object[]{ liveRows, tombstones, (warnTombstones ?  (see 
tombstone_warn_threshold) : ) });
+Tracing.trace(Read {} live and {} tombstone cells{}, 
liveRows, tombstones, (warnTombstones ?  (see tombstone_warn_threshold) : 
));
 }
 }
 };
@@ -398,12 +409,16 @@ public abstract class ReadCommand implements ReadQuery
 /**
  * Creates a message for this command.
  */
-public MessageOutReadCommand createMessage()
+public MessageOutReadCommand createMessage(int version)
 {
-// TODO: we should use different verbs for old message (RANGE_SLICE, 

[3/3] cassandra git commit: On-wire backward compatibility for 3.0

2015-08-07 Thread tylerhobbs
On-wire backward compatibility for 3.0

This adds support for mixed-version clusters with Cassandra 2.1
and 2.2.

Patch by Tyler Hobbs and Sylvain Lebresne for CASSANDRA-9704


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8c64cefd
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8c64cefd
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8c64cefd

Branch: refs/heads/cassandra-3.0
Commit: 8c64cefd19d706003d4b33b333274dbf17c9cb34
Parents: 69f0b89
Author: Tyler Hobbs tylerlho...@gmail.com
Authored: Fri Aug 7 17:42:18 2015 -0500
Committer: Tyler Hobbs tylerlho...@gmail.com
Committed: Fri Aug 7 17:42:18 2015 -0500

--
 CHANGES.txt |   1 +
 .../cql3/statements/ModificationStatement.java  |   2 +-
 .../org/apache/cassandra/db/Clustering.java |   8 +-
 src/java/org/apache/cassandra/db/DataRange.java |  28 +-
 .../org/apache/cassandra/db/LegacyLayout.java   | 922 +++--
 .../apache/cassandra/db/PartitionColumns.java   |   6 +
 .../cassandra/db/PartitionRangeReadCommand.java |  11 +
 .../cassandra/db/RangeSliceVerbHandler.java |  29 +
 .../org/apache/cassandra/db/ReadCommand.java| 995 ++-
 .../cassandra/db/ReadCommandVerbHandler.java|   9 +-
 .../org/apache/cassandra/db/ReadResponse.java   | 250 -
 .../db/SinglePartitionReadCommand.java  |  11 +-
 src/java/org/apache/cassandra/db/Slice.java |  48 +-
 .../filter/AbstractClusteringIndexFilter.java   |  20 -
 .../db/filter/ClusteringIndexFilter.java|  20 +
 .../db/filter/ClusteringIndexNamesFilter.java   |   4 +-
 .../db/filter/ClusteringIndexSliceFilter.java   |   4 +-
 .../cassandra/db/filter/ColumnFilter.java   |   3 +
 .../apache/cassandra/db/filter/DataLimits.java  |  12 +-
 .../db/marshal/AbstractCompositeType.java   |  32 -
 .../cassandra/db/marshal/CompositeType.java |  26 +
 .../AbstractThreadUnsafePartition.java  |   2 +-
 .../db/partitions/PartitionUpdate.java  |  93 +-
 .../UnfilteredPartitionIterators.java   |  13 +-
 .../cassandra/db/rows/BTreeBackedRow.java   |  62 ++
 src/java/org/apache/cassandra/db/rows/Row.java  |  12 +
 .../apache/cassandra/net/MessagingService.java  |   4 +-
 .../cassandra/service/AbstractReadExecutor.java |   5 +-
 .../apache/cassandra/service/DataResolver.java  |   8 +-
 .../cassandra/service/DigestResolver.java   |   6 +-
 .../apache/cassandra/service/ReadCallback.java  |   4 +-
 .../apache/cassandra/service/StorageProxy.java  |   4 +-
 .../cassandra/service/StorageService.java   |   4 +-
 .../service/pager/RangeSliceQueryPager.java |   4 +-
 .../service/pager/SinglePartitionPager.java |   8 +-
 .../cassandra/thrift/CassandraServer.java   |   4 +-
 36 files changed, 2353 insertions(+), 321 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8c64cefd/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 216d3f7..0ba7b4e 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.0-beta1
+ * Support mixed-version clusters with Cassandra 2.1 and 2.2 (CASSANDRA-9704)
  * Fix multiple slices on RowSearchers (CASSANDRA-10002)
  * Fix bug in merging of collections (CASSANDRA-10001)
  * Optimize batchlog replay to avoid full scans (CASSANDRA-7237)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8c64cefd/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
--
diff --git 
a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java 
b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
index 9f2c952..5fa1842 100644
--- a/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
+++ b/src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java
@@ -544,7 +544,7 @@ public abstract class ModificationStatement implements 
CQLStatement
  key,
  new 
ClusteringIndexNamesFilter(clusterings, false)));
 
-MapDecoratedKey, Partition map = new HashMap();
+MapDecoratedKey, Partition map = new HashMap();
 
 SinglePartitionReadCommand.Group group = new 
SinglePartitionReadCommand.Group(commands, DataLimits.NONE);
 

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8c64cefd/src/java/org/apache/cassandra/db/Clustering.java
--
diff --git a/src/java/org/apache/cassandra/db/Clustering.java 
b/src/java/org/apache/cassandra/db/Clustering.java
index 7754182..a29ce65 100644
--- 

[1/3] cassandra git commit: On-wire backward compatibility for 3.0

2015-08-07 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 69f0b8936 - 8c64cefd1


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8c64cefd/src/java/org/apache/cassandra/db/marshal/AbstractCompositeType.java
--
diff --git 
a/src/java/org/apache/cassandra/db/marshal/AbstractCompositeType.java 
b/src/java/org/apache/cassandra/db/marshal/AbstractCompositeType.java
index 4baf6a3..bb2fbf1 100644
--- a/src/java/org/apache/cassandra/db/marshal/AbstractCompositeType.java
+++ b/src/java/org/apache/cassandra/db/marshal/AbstractCompositeType.java
@@ -103,38 +103,6 @@ public abstract class AbstractCompositeType extends 
AbstractTypeByteBuffer
 return l.toArray(new ByteBuffer[l.size()]);
 }
 
-public static class CompositeComponent
-{
-public AbstractType? comparator;
-public ByteBuffer   value;
-
-public CompositeComponent( AbstractType? comparator, ByteBuffer 
value )
-{
-this.comparator = comparator;
-this.value  = value;
-}
-}
-
-public ListCompositeComponent deconstruct( ByteBuffer bytes )
-{
-ListCompositeComponent list = new ArrayListCompositeComponent();
-
-ByteBuffer bb = bytes.duplicate();
-readIsStatic(bb);
-int i = 0;
-
-while (bb.remaining()  0)
-{
-AbstractType comparator = getComparator(i, bb);
-ByteBuffer value = ByteBufferUtil.readBytesWithShortLength(bb);
-
-list.add( new CompositeComponent(comparator,value) );
-
-byte b = bb.get(); // Ignore; not relevant here
-++i;
-}
-return list;
-}
 
 /*
  * Escapes all occurences of the ':' character from the input, replacing 
them by \:.

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8c64cefd/src/java/org/apache/cassandra/db/marshal/CompositeType.java
--
diff --git a/src/java/org/apache/cassandra/db/marshal/CompositeType.java 
b/src/java/org/apache/cassandra/db/marshal/CompositeType.java
index 01eb58f..633a994 100644
--- a/src/java/org/apache/cassandra/db/marshal/CompositeType.java
+++ b/src/java/org/apache/cassandra/db/marshal/CompositeType.java
@@ -218,6 +218,32 @@ public class CompositeType extends AbstractCompositeType
 return null;
 }
 
+public static class CompositeComponent
+{
+public ByteBuffer value;
+public byte eoc;
+
+public CompositeComponent(ByteBuffer value, byte eoc)
+{
+this.value = value;
+this.eoc = eoc;
+}
+}
+
+public static ListCompositeComponent deconstruct(ByteBuffer bytes)
+{
+ListCompositeComponent list = new ArrayList();
+ByteBuffer bb = bytes.duplicate();
+readStatic(bb);
+while (bb.remaining()  0)
+{
+ByteBuffer value = ByteBufferUtil.readBytesWithShortLength(bb);
+byte eoc = bb.get();
+list.add(new CompositeComponent(value, eoc));
+}
+return list;
+}
+
 // Extract CQL3 column name from the full column name.
 public ByteBuffer extractLastComponent(ByteBuffer bb)
 {

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8c64cefd/src/java/org/apache/cassandra/db/partitions/AbstractThreadUnsafePartition.java
--
diff --git 
a/src/java/org/apache/cassandra/db/partitions/AbstractThreadUnsafePartition.java
 
b/src/java/org/apache/cassandra/db/partitions/AbstractThreadUnsafePartition.java
index acdd0e2..0b218f5 100644
--- 
a/src/java/org/apache/cassandra/db/partitions/AbstractThreadUnsafePartition.java
+++ 
b/src/java/org/apache/cassandra/db/partitions/AbstractThreadUnsafePartition.java
@@ -189,7 +189,7 @@ public abstract class AbstractThreadUnsafePartition 
implements Partition, Iterab
 return sliceableUnfilteredIterator(ColumnFilter.all(metadata()), 
false);
 }
 
-protected SliceableUnfilteredRowIterator 
sliceableUnfilteredIterator(ColumnFilter selection, boolean reversed)
+public SliceableUnfilteredRowIterator 
sliceableUnfilteredIterator(ColumnFilter selection, boolean reversed)
 {
 return new SliceableIterator(this, selection, reversed);
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8c64cefd/src/java/org/apache/cassandra/db/partitions/PartitionUpdate.java
--
diff --git a/src/java/org/apache/cassandra/db/partitions/PartitionUpdate.java 
b/src/java/org/apache/cassandra/db/partitions/PartitionUpdate.java
index f2e0617..bb73929 100644
--- a/src/java/org/apache/cassandra/db/partitions/PartitionUpdate.java
+++ b/src/java/org/apache/cassandra/db/partitions/PartitionUpdate.java
@@ -23,12 +23,15 @@ import java.util.*;
 
 import com.google.common.collect.Iterables;
 import 

[jira] [Commented] (CASSANDRA-6434) Repair-aware gc grace period

2015-08-07 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14662320#comment-14662320
 ] 

Marcus Eriksson commented on CASSANDRA-6434:


rebased on 3.0 here: 
https://github.com/krummas/cassandra/commits/marcuse/6434-3.0

 Repair-aware gc grace period 
 -

 Key: CASSANDRA-6434
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6434
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: sankalp kohli
Assignee: Marcus Eriksson
 Fix For: 3.0 beta 1


 Since the reason for gcgs is to ensure that we don't purge tombstones until 
 every replica has been notified, it's redundant in a world where we're 
 tracking repair times per sstable (and repairing frequentily), i.e., a world 
 where we default to incremental repair a la CASSANDRA-5351.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8406) Add option to set max_sstable_age in fractional days in DTCS

2015-08-07 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-8406:

Labels: dtcs  (was: )

 Add option to set max_sstable_age in fractional days in DTCS
 

 Key: CASSANDRA-8406
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8406
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
  Labels: dtcs
 Fix For: 2.1.5

 Attachments: 0001-8406.patch, 0001-patch.patch


 Using days as the unit for max_sstable_age in DTCS might be too much, add 
 option to set it in seconds



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8684) Replace usage of Adler32 with CRC32

2015-08-07 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14662368#comment-14662368
 ] 

T Jake Luciani commented on CASSANDRA-8684:
---

The ChecksummedRandomAccessReaderTests are failing among others.  
http://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-CASSANDRA-8684-testall/lastCompletedBuild/testReport/org.apache.cassandra.io/ChecksummedRandomAccessReaderTest/

* Shouldn't the Digest sstable component name need to change? 
ma-9-big-Digest.adler32
 
I can see a slight improvement in read speed with compressed stress runs. So 
looks like it's a slight win.



 Replace usage of Adler32 with CRC32
 ---

 Key: CASSANDRA-8684
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8684
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Ariel Weisberg
Assignee: Ariel Weisberg
 Fix For: 3.0 beta 1

 Attachments: CRCBenchmark.java, PureJavaCrc32.java, Sample.java


 I could not find a situation in which Adler32 outperformed PureJavaCrc32 much 
 less the intrinsic from Java 8. For small allocations PureJavaCrc32 was much 
 faster probably due to the JNI overhead of invoking the native Adler32 
 implementation where the array has to be allocated and copied.
 I tested on a 65w Sandy Bridge i5 running Ubuntu 14.04 with JDK 1.7.0_71 as 
 well as a c3.8xlarge running Ubuntu 14.04.
 I think it makes sense to stop using Adler32 when generating new checksums.
 c3.8xlarge, results are time in milliseconds, lower is better
 ||Allocation size|Adler32|CRC32|PureJavaCrc32||
 |64|47636|46075|25782|
 |128|36755|36712|23782|
 |256|31194|32211|22731|
 |1024|27194|28792|22010|
 |1048576|25941|27807|21808|
 |536870912|25957|27840|21836|
 i5
 ||Allocation size|Adler32|CRC32|PureJavaCrc32||
 |64|50539|50466|26826|
 |128|37092|38533|24553|
 |256|30630|32938|23459|
 |1024|26064|29079|22592|
 |1048576|24357|27911|22481|
 |536870912|24838|28360|22853|
 Another fun fact. Performance of the CRC32 intrinsic appears to double from 
 Sandy Bridge - Haswell. Unless I am measuring something different when going 
 from Linux/Sandy to Haswell/OS X.
 The intrinsic/JDK 8 implementation also operates against DirectByteBuffers 
 better and coding against the wrapper will get that boost when run with Java 
 8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[2/3] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-08-07 Thread tylerhobbs
http://git-wip-us.apache.org/repos/asf/cassandra/blob/8ffeebff/bin/cqlsh.py
--
diff --cc bin/cqlsh.py
index 6df1d75,000..11f110d
mode 100644,00..100644
--- a/bin/cqlsh.py
+++ b/bin/cqlsh.py
@@@ -1,2668 -1,0 +1,2675 @@@
 +#!/bin/sh
 +# -*- mode: Python -*-
 +
 +# Licensed to the Apache Software Foundation (ASF) under one
 +# or more contributor license agreements.  See the NOTICE file
 +# distributed with this work for additional information
 +# regarding copyright ownership.  The ASF licenses this file
 +# to you under the Apache License, Version 2.0 (the
 +# License); you may not use this file except in compliance
 +# with the License.  You may obtain a copy of the License at
 +#
 +# http://www.apache.org/licenses/LICENSE-2.0
 +#
 +# Unless required by applicable law or agreed to in writing, software
 +# distributed under the License is distributed on an AS IS BASIS,
 +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 +# See the License for the specific language governing permissions and
 +# limitations under the License.
 +
 +:
 +# bash code here; finds a suitable python interpreter and execs this file.
 +# prefer unqualified python if suitable:
 +python -c 'import sys; sys.exit(not (0x020500b0  sys.hexversion  
0x0300))' 2/dev/null \
 + exec python $0 $@
 +for pyver in 2.6 2.7 2.5; do
 +which python$pyver  /dev/null 21  exec python$pyver $0 $@
 +done
 +echo No appropriate python interpreter found. 2
 +exit 1
 +:
 +
 +from __future__ import with_statement
 +from uuid import UUID
 +
 +description = CQL Shell for Apache Cassandra
 +version = 5.0.1
 +
 +from StringIO import StringIO
 +from contextlib import contextmanager
 +from glob import glob
 +
 +import cmd
 +import sys
 +import os
 +import time
 +import optparse
 +import ConfigParser
 +import codecs
 +import locale
 +import platform
 +import warnings
 +import csv
 +import getpass
 +from functools import partial
 +import traceback
 +
 +
 +readline = None
 +try:
 +# check if tty first, cause readline doesn't check, and only cares
 +# about $TERM. we don't want the funky escape code stuff to be
 +# output if not a tty.
 +if sys.stdin.isatty():
 +import readline
 +except ImportError:
 +pass
 +
 +CQL_LIB_PREFIX = 'cassandra-driver-internal-only-'
 +
 +CASSANDRA_PATH = os.path.join(os.path.dirname(os.path.realpath(__file__)), 
'..')
 +
 +# use bundled libs for python-cql and thrift, if available. if there
 +# is a ../lib dir, use bundled libs there preferentially.
 +ZIPLIB_DIRS = [os.path.join(CASSANDRA_PATH, 'lib')]
 +myplatform = platform.system()
 +if myplatform == 'Linux':
 +ZIPLIB_DIRS.append('/usr/share/cassandra/lib')
 +
 +if os.environ.get('CQLSH_NO_BUNDLED', ''):
 +ZIPLIB_DIRS = ()
 +
 +
 +def find_zip(libprefix):
 +for ziplibdir in ZIPLIB_DIRS:
 +zips = glob(os.path.join(ziplibdir, libprefix + '*.zip'))
 +if zips:
 +return max(zips)   # probably the highest version, if multiple
 +
 +cql_zip = find_zip(CQL_LIB_PREFIX)
 +if cql_zip:
 +ver = os.path.splitext(os.path.basename(cql_zip))[0][len(CQL_LIB_PREFIX):]
 +sys.path.insert(0, os.path.join(cql_zip, 'cassandra-driver-' + ver))
 +
 +third_parties = ('futures-', 'six-')
 +
 +for lib in third_parties:
 +lib_zip = find_zip(lib)
 +if lib_zip:
 +sys.path.insert(0, lib_zip)
 +
 +warnings.filterwarnings(ignore, r.*blist.*)
 +try:
 +import cassandra
 +except ImportError, e:
 +sys.exit(\nPython Cassandra driver not installed, or not on 
PYTHONPATH.\n
 + 'You might try pip install cassandra-driver.\n\n'
 + 'Python: %s\n'
 + 'Module load path: %r\n\n'
 + 'Error: %s\n' % (sys.executable, sys.path, e))
 +
 +from cassandra.cluster import Cluster, PagedResult
 +from cassandra.query import SimpleStatement, ordered_dict_factory
 +from cassandra.policies import WhiteListRoundRobinPolicy
 +from cassandra.protocol import QueryMessage, ResultMessage
 +from cassandra.metadata import protect_name, protect_names, protect_value, 
KeyspaceMetadata, TableMetadata, ColumnMetadata
 +from cassandra.auth import PlainTextAuthProvider
 +
 +# cqlsh should run correctly when run out of a Cassandra source tree,
 +# out of an unpacked Cassandra tarball, and after a proper package install.
 +cqlshlibdir = os.path.join(CASSANDRA_PATH, 'pylib')
 +if os.path.isdir(cqlshlibdir):
 +sys.path.insert(0, cqlshlibdir)
 +
 +from cqlshlib import cqlhandling, cql3handling, pylexotron, sslhandling
 +from cqlshlib.displaying import (RED, BLUE, CYAN, ANSI_RESET, 
COLUMN_NAME_COLORS,
 + FormattedValue, colorme)
 +from cqlshlib.formatting import format_by_type, formatter_for, 
format_value_utype
 +from cqlshlib.util import trim_if_present, get_file_encoding_bomsize
 +from cqlshlib.formatting import DateTimeFormat
 +from cqlshlib.formatting import 

[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-08-07 Thread tylerhobbs
Merge branch 'cassandra-2.1' into cassandra-2.2

Conflicts:
NEWS.txt
pylib/cqlshlib/formatting.py


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ce7159e1
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ce7159e1
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ce7159e1

Branch: refs/heads/cassandra-2.2
Commit: ce7159e102971c27fd414ce03ced2451c7498bcd
Parents: 8ffeebf 1ecc9cd
Author: Tyler Hobbs tylerlho...@gmail.com
Authored: Fri Aug 7 15:49:42 2015 -0500
Committer: Tyler Hobbs tylerlho...@gmail.com
Committed: Fri Aug 7 15:49:42 2015 -0500

--
 NEWS.txt | 24 
 pylib/cqlshlib/formatting.py | 28 
 pylib/cqlshlib/util.py   | 15 +++
 3 files changed, 43 insertions(+), 24 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ce7159e1/NEWS.txt
--
diff --cc NEWS.txt
index e9c6ef8,0b64e31..1f5e190
--- a/NEWS.txt
+++ b/NEWS.txt
@@@ -13,141 -13,35 +13,165 @@@ restore snapshots created with the prev
  'sstableloader' tool. You can upgrade the file format of your snapshots
  using the provided 'sstableupgrade' tool.
  
 +2.2
 +===
 +
 +New features
 +
 +   - Selecting columns,scalar functions, UDT fields, writetime or ttl together
 + with aggregated is now possible. The value returned for the columns,
 + scalar functions, UDT fields, writetime and ttl will be the ones for
 + the first row matching the query.
 +   - Windows is now a supported platform. Powershell execution for startup 
scripts
 + is highly recommended and can be enabled via an administrator 
command-prompt
 + with: 'powershell set-executionpolicy unrestricted'
 +   - It is now possible to do major compactions when using leveled compaction.
 + Doing that will take all sstables and compact them out in levels. The
 + levels will be non overlapping so doing this will still not be something
 + you want to do very often since it might cause more compactions for a 
while.
 + It is also possible to split output when doing a major compaction with
 + STCS - files will be split in sizes 50%, 25%, 12.5% etc of the total 
size.
 + This might be a bit better than old major compactions which created one 
big
 + file on disk.
 +   - A new tool has been added bin/sstableverify that checks for errors/bitrot
 + in all sstables.  Unlike scrub, this is a non-invasive tool.
 +   - Authentication  Authorization APIs have been updated to introduce
 + roles. Roles and Permissions granted to them are inherited, supporting
 + role based access control. The role concept supercedes that of users
 + and CQL constructs such as CREATE USER are deprecated but retained for
 + compatibility. The requirement to explicitly create Roles in Cassandra
 + even when auth is handled by an external system has been removed, so
 + authentication  authorization can be delegated to such systems in their
 + entirety.
 +   - In addition to the above, Roles are also first class resources and can 
be the
 + subject of permissions. Users (roles) can now be granted permissions on 
other
 + roles, including CREATE, ALTER, DROP  AUTHORIZE, which removesthe need 
for
 + superuser privileges in order to perform user/role management operations.
 +   - Creators of database resources (Keyspaces, Tables, Roles) are now 
automatically
 + granted all permissions on them (if the IAuthorizer implementation 
supports
 + this).
 +   - SSTable file name is changed. Now you don't have Keyspace/CF name
 + in file name. Also, secondary index has its own directory under parent's
 + directory.
 +   - Support for user-defined functions and user-defined aggregates have
 + been added to CQL.
 + 
 + IMPORTANT NOTE: user-defined functions can be used to execute
 + arbitrary and possibly evil code in Cassandra 2.2, and are
 + therefore disabled by default.  To enable UDFs edit
 + cassandra.yaml and set enable_user_defined_functions to true.
 +
 + CASSANDRA-9402 will add a security manager for UDFs in Cassandra
 + 3.0.  This will inherently be backwards-incompatible with any 2.2
 + UDF that perform insecure operations such as opening a socket or
 + writing to the filesystem.
 + 
 +   - Row-cache is now fully off-heap.
 +   - jemalloc is now automatically preloaded and used on Linux and OS-X if
 + installed.
 +   - Please ensure on Unix platforms that there is no libjnadispath.so
 + installed which 

[2/4] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-08-07 Thread tylerhobbs
Merge branch 'cassandra-2.1' into cassandra-2.2

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/67903d77
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/67903d77
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/67903d77

Branch: refs/heads/trunk
Commit: 67903d7789d88aebb524480ff9854c666ce3cace
Parents: ce7159e 1e3f03e
Author: Tyler Hobbs tylerlho...@gmail.com
Authored: Fri Aug 7 15:52:47 2015 -0500
Committer: Tyler Hobbs tylerlho...@gmail.com
Committed: Fri Aug 7 15:52:47 2015 -0500

--
 CHANGES.txt | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/67903d77/CHANGES.txt
--
diff --cc CHANGES.txt
index 0ed63f1,3b0241c..705e560
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,10 -1,6 +1,12 @@@
 -2.1.9
 +2.2.1
 + * Add checksum to saved cache files (CASSANDRA-9265)
 + * Log warning when using an aggregate without partition key (CASSANDRA-9737)
 + * Avoid grouping sstables for anticompaction with DTCS (CASSANDRA-9900)
 + * UDF / UDA execution time in trace (CASSANDRA-9723)
 + * Fix broken internode SSL (CASSANDRA-9884)
 +Merged from 2.1:
+  * (cqlsh) Fix timestamps before 1970 on Windows, always
+use UTC for timestamp display (CASSANDRA-1)
   * (cqlsh) Avoid overwriting new config file with old config
 when both exist (CASSANDRA-9777)
   * Release snapshot selfRef when doing snapshot repair (CASSANDRA-9998)



[3/3] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2015-08-07 Thread tylerhobbs
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/69f0b893
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/69f0b893
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/69f0b893

Branch: refs/heads/cassandra-3.0
Commit: 69f0b8936b660fda91b316c73de9294b898e47c8
Parents: dfc1a44 67903d7
Author: Tyler Hobbs tylerlho...@gmail.com
Authored: Fri Aug 7 15:53:04 2015 -0500
Committer: Tyler Hobbs tylerlho...@gmail.com
Committed: Fri Aug 7 15:53:04 2015 -0500

--
 CHANGES.txt | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/69f0b893/CHANGES.txt
--
diff --cc CHANGES.txt
index f5cf1b2,705e560..216d3f7
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,16 -1,12 +1,18 @@@
 -2.2.1
 +3.0.0-beta1
 + * Fix multiple slices on RowSearchers (CASSANDRA-10002)
 + * Fix bug in merging of collections (CASSANDRA-10001)
 + * Optimize batchlog replay to avoid full scans (CASSANDRA-7237)
 + * Repair improvements when using vnodes (CASSANDRA-5220)
 + * Disable scripted UDFs by default (CASSANDRA-9889)
 + * Add transparent data encryption core classes (CASSANDRA-9945)
 + * Bytecode inspection for Java-UDFs (CASSANDRA-9890)
 + * Use byte to serialize MT hash length (CASSANDRA-9792)
 +Merged from 2.2:
   * Add checksum to saved cache files (CASSANDRA-9265)
   * Log warning when using an aggregate without partition key (CASSANDRA-9737)
 - * Avoid grouping sstables for anticompaction with DTCS (CASSANDRA-9900)
 - * UDF / UDA execution time in trace (CASSANDRA-9723)
 - * Fix broken internode SSL (CASSANDRA-9884)
  Merged from 2.1:
+  * (cqlsh) Fix timestamps before 1970 on Windows, always
+use UTC for timestamp display (CASSANDRA-1)
   * (cqlsh) Avoid overwriting new config file with old config
 when both exist (CASSANDRA-9777)
   * Release snapshot selfRef when doing snapshot repair (CASSANDRA-9998)



[1/3] cassandra git commit: Add CHANGES.txt entry for CASSANDRA-10000

2015-08-07 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 dfc1a4450 - 69f0b8936


Add CHANGES.txt entry for CASSANDRA-1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1e3f03e5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1e3f03e5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1e3f03e5

Branch: refs/heads/cassandra-3.0
Commit: 1e3f03e5b7b444be3597649d416ce7db3604815a
Parents: 1ecc9cd
Author: Tyler Hobbs tylerlho...@gmail.com
Authored: Fri Aug 7 15:52:04 2015 -0500
Committer: Tyler Hobbs tylerlho...@gmail.com
Committed: Fri Aug 7 15:52:04 2015 -0500

--
 CHANGES.txt | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1e3f03e5/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index c4409c1..3b0241c 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.1.9
+ * (cqlsh) Fix timestamps before 1970 on Windows, always
+   use UTC for timestamp display (CASSANDRA-1)
  * (cqlsh) Avoid overwriting new config file with old config
when both exist (CASSANDRA-9777)
  * Release snapshot selfRef when doing snapshot repair (CASSANDRA-9998)



[4/4] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2015-08-07 Thread tylerhobbs
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ed9343ed
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ed9343ed
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ed9343ed

Branch: refs/heads/trunk
Commit: ed9343edf7d459f9d7b3aa2e6bb579d3a3963af0
Parents: 8902074 69f0b89
Author: Tyler Hobbs tylerlho...@gmail.com
Authored: Fri Aug 7 15:53:26 2015 -0500
Committer: Tyler Hobbs tylerlho...@gmail.com
Committed: Fri Aug 7 15:53:26 2015 -0500

--
 CHANGES.txt | 2 ++
 1 file changed, 2 insertions(+)
--




[1/4] cassandra git commit: Add CHANGES.txt entry for CASSANDRA-10000

2015-08-07 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk 8902074a0 - ed9343edf


Add CHANGES.txt entry for CASSANDRA-1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1e3f03e5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1e3f03e5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1e3f03e5

Branch: refs/heads/trunk
Commit: 1e3f03e5b7b444be3597649d416ce7db3604815a
Parents: 1ecc9cd
Author: Tyler Hobbs tylerlho...@gmail.com
Authored: Fri Aug 7 15:52:04 2015 -0500
Committer: Tyler Hobbs tylerlho...@gmail.com
Committed: Fri Aug 7 15:52:04 2015 -0500

--
 CHANGES.txt | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1e3f03e5/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index c4409c1..3b0241c 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 2.1.9
+ * (cqlsh) Fix timestamps before 1970 on Windows, always
+   use UTC for timestamp display (CASSANDRA-1)
  * (cqlsh) Avoid overwriting new config file with old config
when both exist (CASSANDRA-9777)
  * Release snapshot selfRef when doing snapshot repair (CASSANDRA-9998)



[2/3] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-08-07 Thread tylerhobbs
Merge branch 'cassandra-2.1' into cassandra-2.2

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/67903d77
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/67903d77
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/67903d77

Branch: refs/heads/cassandra-3.0
Commit: 67903d7789d88aebb524480ff9854c666ce3cace
Parents: ce7159e 1e3f03e
Author: Tyler Hobbs tylerlho...@gmail.com
Authored: Fri Aug 7 15:52:47 2015 -0500
Committer: Tyler Hobbs tylerlho...@gmail.com
Committed: Fri Aug 7 15:52:47 2015 -0500

--
 CHANGES.txt | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/67903d77/CHANGES.txt
--
diff --cc CHANGES.txt
index 0ed63f1,3b0241c..705e560
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,10 -1,6 +1,12 @@@
 -2.1.9
 +2.2.1
 + * Add checksum to saved cache files (CASSANDRA-9265)
 + * Log warning when using an aggregate without partition key (CASSANDRA-9737)
 + * Avoid grouping sstables for anticompaction with DTCS (CASSANDRA-9900)
 + * UDF / UDA execution time in trace (CASSANDRA-9723)
 + * Fix broken internode SSL (CASSANDRA-9884)
 +Merged from 2.1:
+  * (cqlsh) Fix timestamps before 1970 on Windows, always
+use UTC for timestamp display (CASSANDRA-1)
   * (cqlsh) Avoid overwriting new config file with old config
 when both exist (CASSANDRA-9777)
   * Release snapshot selfRef when doing snapshot repair (CASSANDRA-9998)



[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-08-07 Thread tylerhobbs
Merge branch 'cassandra-2.1' into cassandra-2.2

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/67903d77
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/67903d77
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/67903d77

Branch: refs/heads/cassandra-2.2
Commit: 67903d7789d88aebb524480ff9854c666ce3cace
Parents: ce7159e 1e3f03e
Author: Tyler Hobbs tylerlho...@gmail.com
Authored: Fri Aug 7 15:52:47 2015 -0500
Committer: Tyler Hobbs tylerlho...@gmail.com
Committed: Fri Aug 7 15:52:47 2015 -0500

--
 CHANGES.txt | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/67903d77/CHANGES.txt
--
diff --cc CHANGES.txt
index 0ed63f1,3b0241c..705e560
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,10 -1,6 +1,12 @@@
 -2.1.9
 +2.2.1
 + * Add checksum to saved cache files (CASSANDRA-9265)
 + * Log warning when using an aggregate without partition key (CASSANDRA-9737)
 + * Avoid grouping sstables for anticompaction with DTCS (CASSANDRA-9900)
 + * UDF / UDA execution time in trace (CASSANDRA-9723)
 + * Fix broken internode SSL (CASSANDRA-9884)
 +Merged from 2.1:
+  * (cqlsh) Fix timestamps before 1970 on Windows, always
+use UTC for timestamp display (CASSANDRA-1)
   * (cqlsh) Avoid overwriting new config file with old config
 when both exist (CASSANDRA-9777)
   * Release snapshot selfRef when doing snapshot repair (CASSANDRA-9998)



[3/4] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2015-08-07 Thread tylerhobbs
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/69f0b893
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/69f0b893
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/69f0b893

Branch: refs/heads/trunk
Commit: 69f0b8936b660fda91b316c73de9294b898e47c8
Parents: dfc1a44 67903d7
Author: Tyler Hobbs tylerlho...@gmail.com
Authored: Fri Aug 7 15:53:04 2015 -0500
Committer: Tyler Hobbs tylerlho...@gmail.com
Committed: Fri Aug 7 15:53:04 2015 -0500

--
 CHANGES.txt | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/69f0b893/CHANGES.txt
--
diff --cc CHANGES.txt
index f5cf1b2,705e560..216d3f7
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,16 -1,12 +1,18 @@@
 -2.2.1
 +3.0.0-beta1
 + * Fix multiple slices on RowSearchers (CASSANDRA-10002)
 + * Fix bug in merging of collections (CASSANDRA-10001)
 + * Optimize batchlog replay to avoid full scans (CASSANDRA-7237)
 + * Repair improvements when using vnodes (CASSANDRA-5220)
 + * Disable scripted UDFs by default (CASSANDRA-9889)
 + * Add transparent data encryption core classes (CASSANDRA-9945)
 + * Bytecode inspection for Java-UDFs (CASSANDRA-9890)
 + * Use byte to serialize MT hash length (CASSANDRA-9792)
 +Merged from 2.2:
   * Add checksum to saved cache files (CASSANDRA-9265)
   * Log warning when using an aggregate without partition key (CASSANDRA-9737)
 - * Avoid grouping sstables for anticompaction with DTCS (CASSANDRA-9900)
 - * UDF / UDA execution time in trace (CASSANDRA-9723)
 - * Fix broken internode SSL (CASSANDRA-9884)
  Merged from 2.1:
+  * (cqlsh) Fix timestamps before 1970 on Windows, always
+use UTC for timestamp display (CASSANDRA-1)
   * (cqlsh) Avoid overwriting new config file with old config
 when both exist (CASSANDRA-9777)
   * Release snapshot selfRef when doing snapshot repair (CASSANDRA-9998)



[jira] [Updated] (CASSANDRA-10020) Support eager retries for range queries

2015-08-07 Thread Jonathan Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Ellis updated CASSANDRA-10020:
---
  Priority: Minor  (was: Critical)
Issue Type: New Feature  (was: Bug)
   Summary: Support eager retries for range queries  (was: Range queries 
don't go on all replicas. )

 Support eager retries for range queries
 ---

 Key: CASSANDRA-10020
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10020
 Project: Cassandra
  Issue Type: New Feature
  Components: Core
Reporter: Gautam Kumar
Priority: Minor

 A simple query like `select * from table` may time out if one of the nodes 
 fail.
 We had a 4-node cassandra cluster with RF=3 and CL=LOCAL_QUORUM. 
 The range query is issued to only two as per ConsistencyLevel.java: 
 liveEndpoints.subList(0, Math.min(liveEndpoints.size(), blockFor(keyspace)));
 If a node fails amongst this sublist, the query will time out. Why don't you 
 issue range queries to all replicas? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10021) Losing writes in a single-node cluster

2015-08-07 Thread Jeremy Schlatter (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Schlatter updated CASSANDRA-10021:
-
Environment: Docker images  (was: I have been testing this against Docker's 
official Cassandra image.)

 Losing writes in a single-node cluster
 --

 Key: CASSANDRA-10021
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10021
 Project: Cassandra
  Issue Type: Bug
 Environment: Docker images
Reporter: Jeremy Schlatter
 Attachments: cpp-repro.zip, go-repro.zip


 I am able to reliably reproduce write losses in the following circumstances:
 * Set up a single-node cluster.
 * Create keyspace with SimpleStrategy, replication_factor = 1.
 * Create a table with a float field.
 * Send an UPDATE command to set the float value on a row.
 * After the command returns, immediately send another UPDATE to set the float 
 value to something _smaller_ than the first value.
 * The second UPDATE is sometimes lost.
 Reproduction code attached. There are two implementations: one in Go and one 
 in C++. They do the same thing -- I implemented both to rule out a bug in the 
 client library. For both cases, you can reproduce by doing the following:
 1. docker run --name repro-cassandra --rm cassandra:2.0.14
 (or any other Cassandra version)
 2. Download and unzip one of the zip files, and change to its directory.
 3. docker build -t repro .
 4. docker run --link repro-cassandra:cassandra --rm repro
 The reproduction code will repeatedly run two UPDATEs followed by a SELECT 
 until it detects a lost write, and then print:
 Lost a write. Got 20.50, want 10.50
 This may be fixed in 2.1.8 because I have not been able to reproduce it in 
 that version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9927) Security for MaterializedViews

2015-08-07 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14662600#comment-14662600
 ] 

Aleksey Yeschenko commented on CASSANDRA-9927:
--

The doc is for 2.0 and 2.1, it says so in the title. In 2.2 and 3.0 that is no 
longer true.

What base permissions we should require for {{CREATE MV}} is the only minor 
detail I'm not certain about. Maybe we should just treat them as indexes for 
now and require {{ALTER}} on the base table for both {{CREATE MV}} and {{DROP 
MV}}. [~beobal] ?

 Security for MaterializedViews
 --

 Key: CASSANDRA-9927
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9927
 Project: Cassandra
  Issue Type: Task
Reporter: T Jake Luciani
Assignee: Paulo Motta
  Labels: materializedviews
 Fix For: 3.0 beta 1


 We need to think about how to handle security wrt materialized views. Since 
 they are based on a source table we should possibly inherit the same security 
 model as that table.  
 However I can see cases where users would want to create different security 
 auth for different views.  esp once we have CASSANDRA-9664 and users can 
 filter out sensitive data.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-9704) On-wire backward compatibility for 8099

2015-08-07 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs resolved CASSANDRA-9704.

   Resolution: Fixed
Fix Version/s: (was: 3.0 alpha 1)
   3.0 beta 1

All dtest failures on existing tests appear to not be regressions, so I've 
committed this as {{8c64cefd19d706003d4b33b333274dbf17c9cb34}} to 3.0 and 
merged to trunk.

A few of the new upgrade dtests are still failing.  I will document those in 
CASSANDRA-9893 so that somebody can continue the work there.

 On-wire backward compatibility for 8099
 ---

 Key: CASSANDRA-9704
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9704
 Project: Cassandra
  Issue Type: Sub-task
Reporter: Sylvain Lebresne
Assignee: Tyler Hobbs
 Fix For: 3.0 beta 1

 Attachments: 9704-2.1.txt


 The currently committed patch for CASSANDRA-8099 has left backward 
 compatibility on the wire as a TODO. This ticket is to track the actual doing 
 (of which I know [~thobbs] has already done a good chunk).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9927) Security for MaterializedViews

2015-08-07 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14662609#comment-14662609
 ] 

Sam Tunnicliffe commented on CASSANDRA-9927:


bq.Maybe we should just treat them as indexes for now
sgtm for now

 Security for MaterializedViews
 --

 Key: CASSANDRA-9927
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9927
 Project: Cassandra
  Issue Type: Task
Reporter: T Jake Luciani
Assignee: Paulo Motta
  Labels: materializedviews
 Fix For: 3.0 beta 1


 We need to think about how to handle security wrt materialized views. Since 
 they are based on a source table we should possibly inherit the same security 
 model as that table.  
 However I can see cases where users would want to create different security 
 auth for different views.  esp once we have CASSANDRA-9664 and users can 
 filter out sensitive data.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9893) Fix upgrade tests from #9704 that are still failing

2015-08-07 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14662626#comment-14662626
 ] 

Tyler Hobbs commented on CASSANDRA-9893:


The three tests that appear to still be failing for me are: {{bug_5240_test}}, 
{{bug_5732_test}}, and {{edge_2i_on_complex_pk_test}}.  There may be other 
occasional failures that don't show up on every run due to different token 
assignments (only the upgraded or non-upgraded node may own a particularly 
partition).

The {{bug_5732_test}} failure is due to CASSANDRA-10006.

The new tests have a few useful environment variables that make it easier to 
debug problems.  These are documented at the top of 
{{upgrade_tests/upgrade_base.py}}.  While working on these tests, I normally 
used two cassandra directories: one for 3.0, and one for 2.1 with debug 
logging.  For example, I might run a test like this:

{noformat}
UPGRADE_MODE=normal DEBUG=true CASSANDRA_DIR=/home/thobbs/cassandra 
OLD_CASSANDRA_DIR=/home/thobbs/cassandra2 nosetests  
upgrade_tests/cql_tests.py:TestCQL.select_distinct_with_deletions_test
{noformat}

For 2.1 debug logging, I started this branch with an excessive amount of 
logging around command and response serialization: 
https://github.com/thobbs/cassandra/tree/8099-backward-compat-2.1-debug.

The {{UPGRADE_MODE}} variable is useful for seeing exactly what commands or 
responses 2.1 or 3.0 expect.  For example, to see how 2.1 handles a particular 
query, you can do:

{noformat}
UPGRADE_MODE=none KEEP_LOGS=true DEBUG=true 
CASSANDRA_DIR=/home/thobbs/cassandra OLD_CASSANDRA_DIR=/home/thobbs/cassandra2 
nosetests  
upgrade_tests/cql_tests.py:TestCQL.select_distinct_with_deletions_test
{noformat}

Note the extra {{KEEP_LOGS=true}}, as the test should  pass without the upgrade 
step.

In some cases I've had problems with the new auth schema in 3.0.  This can be 
worked around by doing something like this in {{upgrade_base.py}}:

{noformat}
node1.start(wait_for_binary_proto=True, 
jvm_args=['-Dcassandra.superuser_setup_delay_ms=100'])
{noformat}

However, I have not committed this, because I think this might cause legitimate 
problems for users.  We probably either need to do CASSANDRA-9289 or 
investigate why reconnections are still causing queries to be dropped.


 Fix upgrade tests from #9704 that are still failing
 ---

 Key: CASSANDRA-9893
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9893
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
 Fix For: 3.0.0 rc1


 The first things to do on this ticket would be to commit Tyler's branch 
 (https://github.com/thobbs/cassandra-dtest/tree/8099-backwards-compat) to the 
 dtests so cassci run them. I've had to do a few minor modifications to have 
 them run locally so someone which access to cassci should do it and make sure 
 it runs properly.
 Once we have that, we should fix any test that isn't passing. I've ran the 
 tests locally and I had 8 failures. for 2 of them, it sounds plausible that 
 they'll get fixed by the patch of CASSANDRA-9775, though that's just a guess. 
  The rest where test that timeouted without a particular error in the log, 
 and running some of them individually, they passed.  So we'll have to see if 
 it's just my machine being overly slow when running them all.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9324) Map Mutation rejected by Cassandra: IllegalArgumentException

2015-08-07 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-9324:
---
Fix Version/s: (was: 2.1.x)

 Map Mutation rejected by Cassandra: IllegalArgumentException
 

 Key: CASSANDRA-9324
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9324
 Project: Cassandra
  Issue Type: Bug
  Components: API
 Environment: Windows 7, Cassandra 2.1.5
Reporter: Mark Wick
Assignee: Tyler Hobbs
Priority: Minor

 We use a collection (mapascii,ascii) in a CQL3 table. We write into that 
 cql3 table using thrift mutations, from a c++ application. We are prototyping 
 migrating from our current Cassandra (2.0.7) to 2.1.5, and are unable to 
 write rows to this cql3 table. We have no problems when we remove the writes 
 to the map column, and all other writes succeed in this case. Cassandra is 
 rejecting our writes and we are catching a TTransportException (no more data 
 to read). The below call stack is from the Cassandra instance that is 
 rejecting the write.
 {code}
 ERROR 14:08:10 Error occurred during processing of message.
 java.lang.IllegalArgumentException: null
 at java.nio.Buffer.limit(Unknown Source) ~[na:1.7.0_71]
 at 
 org.apache.cassandra.utils.ByteBufferUtil.readBytes(ByteBufferUtil.java:543) 
 ~[apache-cassandra-2.1.5.jar:2.1.5]
 at 
 org.apache.cassandra.serializers.CollectionSerializer.readValue(CollectionSerializer.java:124)
  ~[apache-cassandra-2.1.5.jar:2.1.5]
 at 
 org.apache.cassandra.serializers.MapSerializer.validateForNativeProtocol(MapSerializer.java:80)
  ~[apache-cassandra-2.1.5.jar:2.1.5]
 at 
 org.apache.cassandra.serializers.CollectionSerializer.validate(CollectionSerializer.java:61)
  ~[apache-cassandra-2.1.5.jar:2.1.5]
 at 
 org.apache.cassandra.db.marshal.AbstractType.validate(AbstractType.java:97) 
 ~[apache-cassandra-2.1.5.jar:2.1.5]
 at 
 org.apache.cassandra.thrift.ThriftValidation.validateColumnData(ThriftValidation.java:449)
  ~[apache-cassandra-2.1.5.jar:2.1.5]
 at 
 org.apache.cassandra.thrift.ThriftValidation.validateColumnOrSuperColumn(ThriftValidation.java:318)
  ~[apache-cassandra-2.1.5.jar:2.1.5]
 at 
 org.apache.cassandra.thrift.ThriftValidation.validateMutation(ThriftValidation.java:385)
  ~[apache-cassandra-2.1.5.jar:2.1.5]
 at 
 org.apache.cassandra.thrift.CassandraServer.createMutationList(CassandraServer.java:861)
  ~[apache-cassandra-2.1.5.jar:2.1.5]
 at 
 org.apache.cassandra.thrift.CassandraServer.batch_mutate(CassandraServer.java:976)
  ~[apache-cassandra-2.1.5.jar:2.1.5]
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.getResult(Cassandra.java:3996)
  ~[apache-cassandra-thrift-2.1.5.jar:2.1.5]
 at 
 org.apache.cassandra.thrift.Cassandra$Processor$batch_mutate.getResult(Cassandra.java:3980)
  ~[apache-cassandra-thrift-2.1.5.jar:2.1.5]
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
 ~[libthrift-0.9.2.jar:0.9.2]
 at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
 ~[libthrift-0.9.2.jar:0.9.2]
 at 
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:205)
  ~[apache-cassandra-2.1.5.jar:2.1.5]
 at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) 
 [na:1.7.0_71]
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) 
 [na:1.7.0_71]
 at java.lang.Thread.run(Unknown Source) [na:1.7.0_71]{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9893) Fix upgrade tests from #9704 that are still failing

2015-08-07 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-9893:
---
Assignee: Benjamin Lerer

 Fix upgrade tests from #9704 that are still failing
 ---

 Key: CASSANDRA-9893
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9893
 Project: Cassandra
  Issue Type: Bug
Reporter: Sylvain Lebresne
Assignee: Benjamin Lerer
 Fix For: 3.0.0 rc1


 The first things to do on this ticket would be to commit Tyler's branch 
 (https://github.com/thobbs/cassandra-dtest/tree/8099-backwards-compat) to the 
 dtests so cassci run them. I've had to do a few minor modifications to have 
 them run locally so someone which access to cassci should do it and make sure 
 it runs properly.
 Once we have that, we should fix any test that isn't passing. I've ran the 
 tests locally and I had 8 failures. for 2 of them, it sounds plausible that 
 they'll get fixed by the patch of CASSANDRA-9775, though that's just a guess. 
  The rest where test that timeouted without a particular error in the log, 
 and running some of them individually, they passed.  So we'll have to see if 
 it's just my machine being overly slow when running them all.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/4] cassandra git commit: Release the self ref when opening snapshot sstable readers

2015-08-07 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/trunk 448e276b7 - 4f51341b5


Release the self ref when opening snapshot sstable readers

Patch by marcuse; reviewed by benedict for CASSANDRA-9998


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e9b975c2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e9b975c2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e9b975c2

Branch: refs/heads/trunk
Commit: e9b975c279578fe2d6f25368fe2839e1d0572371
Parents: e1bb792
Author: Marcus Eriksson marc...@apache.org
Authored: Thu Aug 6 15:22:50 2015 +0200
Committer: Marcus Eriksson marc...@apache.org
Committed: Fri Aug 7 08:20:06 2015 +0200

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/db/ColumnFamilyStore.java | 3 ++-
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e9b975c2/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9a475ea..75bdcde 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.9
+ * Release snapshot selfRef when doing snapshot repair (CASSANDRA-9998)
  * Cannot replace token does not exist - DN node removed as Fat Client 
(CASSANDRA-9871)
  * Fix handling of enable/disable autocompaction (CASSANDRA-9899)
  * Commit log segment recycling is disabled by default (CASSANDRA-9896)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e9b975c2/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index ad66f8e..6777e7a 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -2355,8 +2355,9 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 logger.debug(using snapshot sstable {}, 
entries.getKey());
 // open without tracking hotness
 sstable = SSTableReader.open(entries.getKey(), 
entries.getValue(), metadata, partitioner, true, false);
-// This is technically not necessary since it's a snapshot 
but makes things easier
 refs.tryRef(sstable);
+// release the self ref as we never add the snapshot 
sstable to DataTracker where it is otherwise released
+sstable.selfRef().release();
 }
 else if (logger.isDebugEnabled())
 {



[2/4] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-08-07 Thread marcuse
Merge branch 'cassandra-2.1' into cassandra-2.2

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bf08e663
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bf08e663
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bf08e663

Branch: refs/heads/trunk
Commit: bf08e663b7b0e88fdeb2a0cbd7b754e168dde77a
Parents: ed4fad1 e9b975c
Author: Marcus Eriksson marc...@apache.org
Authored: Fri Aug 7 08:21:47 2015 +0200
Committer: Marcus Eriksson marc...@apache.org
Committed: Fri Aug 7 08:21:47 2015 +0200

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/db/ColumnFamilyStore.java | 3 ++-
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bf08e663/CHANGES.txt
--
diff --cc CHANGES.txt
index ff0fdda,75bdcde..6e17be0
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,29 -1,9 +1,30 @@@
 -2.1.9
 +2.2.1
 + * Add checksum to saved cache files (CASSANDRA-9265)
 + * Log warning when using an aggregate without partition key (CASSANDRA-9737)
 + * Avoid grouping sstables for anticompaction with DTCS (CASSANDRA-9900)
 + * UDF / UDA execution time in trace (CASSANDRA-9723)
 + * Fix broken internode SSL (CASSANDRA-9884)
 +Merged from 2.1:
+  * Release snapshot selfRef when doing snapshot repair (CASSANDRA-9998)
   * Cannot replace token does not exist - DN node removed as Fat Client 
(CASSANDRA-9871)
   * Fix handling of enable/disable autocompaction (CASSANDRA-9899)
 - * Commit log segment recycling is disabled by default (CASSANDRA-9896)
   * Add consistency level to tracing ouput (CASSANDRA-9827)
 + * Remove repair snapshot leftover on startup (CASSANDRA-7357)
 + * Use random nodes for batch log when only 2 racks (CASSANDRA-8735)
 + * Ensure atomicity inside thrift and stream session (CASSANDRA-7757)
 + * Fix nodetool info error when the node is not joined (CASSANDRA-9031)
 +Merged from 2.0:
 + * Log when messages are dropped due to cross_node_timeout (CASSANDRA-9793)
 + * Don't track hotness when opening from snapshot for validation 
(CASSANDRA-9382)
 +
 +
 +2.2.0
 + * Allow the selection of columns together with aggregates (CASSANDRA-9767)
 + * Fix cqlsh copy methods and other windows specific issues (CASSANDRA-9795)
 + * Don't wrap byte arrays in SequentialWriter (CASSANDRA-9797)
 + * sum() and avg() functions missing for smallint and tinyint types 
(CASSANDRA-9671)
 + * Revert CASSANDRA-9542 (allow native functions in UDA) (CASSANDRA-9771)
 +Merged from 2.1:
   * Fix MarshalException when upgrading superColumn family (CASSANDRA-9582)
   * Fix broken logging for empty flushes in Memtable (CASSANDRA-9837)
   * Handle corrupt files on startup (CASSANDRA-9686)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bf08e663/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--



[jira] [Updated] (CASSANDRA-9927) Security for MaterializedViews

2015-08-07 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-9927:
-
Reviewer: Aleksey Yeschenko

 Security for MaterializedViews
 --

 Key: CASSANDRA-9927
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9927
 Project: Cassandra
  Issue Type: Task
Reporter: T Jake Luciani
Assignee: Paulo Motta
  Labels: materializedviews
 Fix For: 3.0 beta 1


 We need to think about how to handle security wrt materialized views. Since 
 they are based on a source table we should possibly inherit the same security 
 model as that table.  
 However I can see cases where users would want to create different security 
 auth for different views.  esp once we have CASSANDRA-9664 and users can 
 filter out sensitive data.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9962) WaitQueueTest is flakey

2015-08-07 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-9962:

Fix Version/s: (was: 3.x)
   3.0 beta 1
   2.2.1
   2.1.9

 WaitQueueTest is flakey
 ---

 Key: CASSANDRA-9962
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9962
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Minor
 Fix For: 2.1.9, 2.2.1, 3.0 beta 1


 While the test is a little noddy, and superfluous, it shouldn't fail even 
 vanishingly infrequently. [~aweisberg] has spotted it doing so, and I have 
 also encountered it once, so I suspect that a change in hardware/OS may have 
 made vanishingly unlikely just pretty unlikely, which is even less good. 
 Right now it depends on {{Thread.start()}} completing before the new thread 
 starts; this isn't guaranteed. This patch fixes that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9927) Security for MaterializedViews

2015-08-07 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14661378#comment-14661378
 ] 

Aleksey Yeschenko commented on CASSANDRA-9927:
--

bq. So let's go with the latter.

+1

 Security for MaterializedViews
 --

 Key: CASSANDRA-9927
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9927
 Project: Cassandra
  Issue Type: Task
Reporter: T Jake Luciani
Assignee: Paulo Motta
  Labels: materializedviews
 Fix For: 3.0 beta 1


 We need to think about how to handle security wrt materialized views. Since 
 they are based on a source table we should possibly inherit the same security 
 model as that table.  
 However I can see cases where users would want to create different security 
 auth for different views.  esp once we have CASSANDRA-9664 and users can 
 filter out sensitive data.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9925) Sorted + Unique BTree.Builder usage can use a thread-local TreeBuilder directly

2015-08-07 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-9925:

Fix Version/s: (was: 3.0.x)

 Sorted + Unique BTree.Builder usage can use a thread-local TreeBuilder 
 directly
 ---

 Key: CASSANDRA-9925
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9925
 Project: Cassandra
  Issue Type: Improvement
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Minor

 There are now a number of situations where we use a BTree.Builder that could 
 be made to go directly to the TreeBuilder, since they only perform in-order 
 unique additions to build an initial tree. This would potentially avoid a 
 number of array allocations/copies.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9998) LEAK DETECTED with snapshot/sequential repairs

2015-08-07 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14661382#comment-14661382
 ] 

Benedict commented on CASSANDRA-9998:
-

+1

 LEAK DETECTED with snapshot/sequential repairs
 --

 Key: CASSANDRA-9998
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9998
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
 Fix For: 3.x, 2.1.x, 2.2.x


 http://cassci.datastax.com/job/cassandra-2.1_dtest/lastCompletedBuild/testReport/repair_test/TestRepair/simple_sequential_repair_test/
 does not happen if I add -par to the test



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[4/4] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2015-08-07 Thread marcuse
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4f51341b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4f51341b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4f51341b

Branch: refs/heads/trunk
Commit: 4f51341b5695f718dc6b0a66ee264664a1d66094
Parents: 448e276 5285000
Author: Marcus Eriksson marc...@apache.org
Authored: Fri Aug 7 08:26:32 2015 +0200
Committer: Marcus Eriksson marc...@apache.org
Committed: Fri Aug 7 08:26:32 2015 +0200

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/db/ColumnFamilyStore.java | 3 ++-
 2 files changed, 3 insertions(+), 1 deletion(-)
--




[3/4] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2015-08-07 Thread marcuse
Merge branch 'cassandra-2.2' into cassandra-3.0

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/db/ColumnFamilyStore.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/52850009
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/52850009
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/52850009

Branch: refs/heads/trunk
Commit: 5285000908e486ae80ef1a289e8a055d65d5dd7f
Parents: 0a94c7a bf08e66
Author: Marcus Eriksson marc...@apache.org
Authored: Fri Aug 7 08:23:36 2015 +0200
Committer: Marcus Eriksson marc...@apache.org
Committed: Fri Aug 7 08:23:36 2015 +0200

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/db/ColumnFamilyStore.java | 3 ++-
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/52850009/CHANGES.txt
--
diff --cc CHANGES.txt
index a894297,6e17be0..9aca2ce
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,44 -1,6 +1,45 @@@
 -2.2.1
 +3.0.0-beta1
 + * Optimize batchlog replay to avoid full scans (CASSANDRA-7237)
 + * Repair improvements when using vnodes (CASSANDRA-5220)
 + * Disable scripted UDFs by default (CASSANDRA-9889)
 + * Add transparent data encryption core classes (CASSANDRA-9945)
 + * Bytecode inspection for Java-UDFs (CASSANDRA-9890)
 + * Use byte to serialize MT hash length (CASSANDRA-9792)
 +Merged from 2.2:
   * Add checksum to saved cache files (CASSANDRA-9265)
   * Log warning when using an aggregate without partition key (CASSANDRA-9737)
 +Merged from 2.1:
++ * Release snapshot selfRef when doing snapshot repair (CASSANDRA-9998)
 + * Cannot replace token does not exist - DN node removed as Fat Client 
(CASSANDRA-9871)
 +Merged from 2.0:
 + * Don't cast expected bf size to an int (CASSANDRA-9959)
 +
 +
 +3.0.0-alpha1
 + * Implement proper sandboxing for UDFs (CASSANDRA-9402)
 + * Simplify (and unify) cleanup of compaction leftovers (CASSANDRA-7066)
 + * Allow extra schema definitions in cassandra-stress yaml (CASSANDRA-9850)
 + * Metrics should use up to date nomenclature (CASSANDRA-9448)
 + * Change CREATE/ALTER TABLE syntax for compression (CASSANDRA-8384)
 + * Cleanup crc and adler code for java 8 (CASSANDRA-9650)
 + * Storage engine refactor (CASSANDRA-8099, 9743, 9746, 9759, 9781, 9808, 
9825,
 +   9848, 9705, 9859, 9867, 9874, 9828, 9801)
 + * Update Guava to 18.0 (CASSANDRA-9653)
 + * Bloom filter false positive ratio is not honoured (CASSANDRA-8413)
 + * New option for cassandra-stress to leave a ratio of columns null 
(CASSANDRA-9522)
 + * Change hinted_handoff_enabled yaml setting, JMX (CASSANDRA-9035)
 + * Add algorithmic token allocation (CASSANDRA-7032)
 + * Add nodetool command to replay batchlog (CASSANDRA-9547)
 + * Make file buffer cache independent of paths being read (CASSANDRA-8897)
 + * Remove deprecated legacy Hadoop code (CASSANDRA-9353)
 + * Decommissioned nodes will not rejoin the cluster (CASSANDRA-8801)
 + * Change gossip stabilization to use endpoit size (CASSANDRA-9401)
 + * Change default garbage collector to G1 (CASSANDRA-7486)
 + * Populate TokenMetadata early during startup (CASSANDRA-9317)
 + * Undeprecate cache recentHitRate (CASSANDRA-6591)
 + * Add support for selectively varint encoding fields (CASSANDRA-9499, 9865)
 + * Materialized Views (CASSANDRA-6477)
 +Merged from 2.2:
   * Avoid grouping sstables for anticompaction with DTCS (CASSANDRA-9900)
   * UDF / UDA execution time in trace (CASSANDRA-9723)
   * Fix broken internode SSL (CASSANDRA-9884)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/52850009/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --cc src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 255f9a0,5e8c521..beb2b93
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@@ -1600,9 -2363,10 +1600,10 @@@ public class ColumnFamilyStore implemen
  if (logger.isDebugEnabled())
  logger.debug(using snapshot sstable {}, 
entries.getKey());
  // open without tracking hotness
 -sstable = SSTableReader.open(entries.getKey(), 
entries.getValue(), metadata, partitioner, true, false);
 +sstable = SSTableReader.open(entries.getKey(), 
entries.getValue(), metadata, true, false);
- // This is technically not necessary since it's a 
snapshot but makes things easier
  refs.tryRef(sstable);
+ // release the self ref as we never add the snapshot 
sstable to DataTracker where it is otherwise released

[1/3] cassandra git commit: Release the self ref when opening snapshot sstable readers

2015-08-07 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 0a94c7a36 - 528500090


Release the self ref when opening snapshot sstable readers

Patch by marcuse; reviewed by benedict for CASSANDRA-9998


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e9b975c2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e9b975c2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e9b975c2

Branch: refs/heads/cassandra-3.0
Commit: e9b975c279578fe2d6f25368fe2839e1d0572371
Parents: e1bb792
Author: Marcus Eriksson marc...@apache.org
Authored: Thu Aug 6 15:22:50 2015 +0200
Committer: Marcus Eriksson marc...@apache.org
Committed: Fri Aug 7 08:20:06 2015 +0200

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/db/ColumnFamilyStore.java | 3 ++-
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e9b975c2/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9a475ea..75bdcde 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.9
+ * Release snapshot selfRef when doing snapshot repair (CASSANDRA-9998)
  * Cannot replace token does not exist - DN node removed as Fat Client 
(CASSANDRA-9871)
  * Fix handling of enable/disable autocompaction (CASSANDRA-9899)
  * Commit log segment recycling is disabled by default (CASSANDRA-9896)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e9b975c2/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index ad66f8e..6777e7a 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -2355,8 +2355,9 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 logger.debug(using snapshot sstable {}, 
entries.getKey());
 // open without tracking hotness
 sstable = SSTableReader.open(entries.getKey(), 
entries.getValue(), metadata, partitioner, true, false);
-// This is technically not necessary since it's a snapshot 
but makes things easier
 refs.tryRef(sstable);
+// release the self ref as we never add the snapshot 
sstable to DataTracker where it is otherwise released
+sstable.selfRef().release();
 }
 else if (logger.isDebugEnabled())
 {



[1/2] cassandra git commit: Release the self ref when opening snapshot sstable readers

2015-08-07 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.2 ed4fad19e - bf08e663b


Release the self ref when opening snapshot sstable readers

Patch by marcuse; reviewed by benedict for CASSANDRA-9998


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e9b975c2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e9b975c2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e9b975c2

Branch: refs/heads/cassandra-2.2
Commit: e9b975c279578fe2d6f25368fe2839e1d0572371
Parents: e1bb792
Author: Marcus Eriksson marc...@apache.org
Authored: Thu Aug 6 15:22:50 2015 +0200
Committer: Marcus Eriksson marc...@apache.org
Committed: Fri Aug 7 08:20:06 2015 +0200

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/db/ColumnFamilyStore.java | 3 ++-
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e9b975c2/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9a475ea..75bdcde 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.9
+ * Release snapshot selfRef when doing snapshot repair (CASSANDRA-9998)
  * Cannot replace token does not exist - DN node removed as Fat Client 
(CASSANDRA-9871)
  * Fix handling of enable/disable autocompaction (CASSANDRA-9899)
  * Commit log segment recycling is disabled by default (CASSANDRA-9896)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e9b975c2/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index ad66f8e..6777e7a 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -2355,8 +2355,9 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 logger.debug(using snapshot sstable {}, 
entries.getKey());
 // open without tracking hotness
 sstable = SSTableReader.open(entries.getKey(), 
entries.getValue(), metadata, partitioner, true, false);
-// This is technically not necessary since it's a snapshot 
but makes things easier
 refs.tryRef(sstable);
+// release the self ref as we never add the snapshot 
sstable to DataTracker where it is otherwise released
+sstable.selfRef().release();
 }
 else if (logger.isDebugEnabled())
 {



[2/3] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-08-07 Thread marcuse
Merge branch 'cassandra-2.1' into cassandra-2.2

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bf08e663
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bf08e663
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bf08e663

Branch: refs/heads/cassandra-3.0
Commit: bf08e663b7b0e88fdeb2a0cbd7b754e168dde77a
Parents: ed4fad1 e9b975c
Author: Marcus Eriksson marc...@apache.org
Authored: Fri Aug 7 08:21:47 2015 +0200
Committer: Marcus Eriksson marc...@apache.org
Committed: Fri Aug 7 08:21:47 2015 +0200

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/db/ColumnFamilyStore.java | 3 ++-
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bf08e663/CHANGES.txt
--
diff --cc CHANGES.txt
index ff0fdda,75bdcde..6e17be0
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,29 -1,9 +1,30 @@@
 -2.1.9
 +2.2.1
 + * Add checksum to saved cache files (CASSANDRA-9265)
 + * Log warning when using an aggregate without partition key (CASSANDRA-9737)
 + * Avoid grouping sstables for anticompaction with DTCS (CASSANDRA-9900)
 + * UDF / UDA execution time in trace (CASSANDRA-9723)
 + * Fix broken internode SSL (CASSANDRA-9884)
 +Merged from 2.1:
+  * Release snapshot selfRef when doing snapshot repair (CASSANDRA-9998)
   * Cannot replace token does not exist - DN node removed as Fat Client 
(CASSANDRA-9871)
   * Fix handling of enable/disable autocompaction (CASSANDRA-9899)
 - * Commit log segment recycling is disabled by default (CASSANDRA-9896)
   * Add consistency level to tracing ouput (CASSANDRA-9827)
 + * Remove repair snapshot leftover on startup (CASSANDRA-7357)
 + * Use random nodes for batch log when only 2 racks (CASSANDRA-8735)
 + * Ensure atomicity inside thrift and stream session (CASSANDRA-7757)
 + * Fix nodetool info error when the node is not joined (CASSANDRA-9031)
 +Merged from 2.0:
 + * Log when messages are dropped due to cross_node_timeout (CASSANDRA-9793)
 + * Don't track hotness when opening from snapshot for validation 
(CASSANDRA-9382)
 +
 +
 +2.2.0
 + * Allow the selection of columns together with aggregates (CASSANDRA-9767)
 + * Fix cqlsh copy methods and other windows specific issues (CASSANDRA-9795)
 + * Don't wrap byte arrays in SequentialWriter (CASSANDRA-9797)
 + * sum() and avg() functions missing for smallint and tinyint types 
(CASSANDRA-9671)
 + * Revert CASSANDRA-9542 (allow native functions in UDA) (CASSANDRA-9771)
 +Merged from 2.1:
   * Fix MarshalException when upgrading superColumn family (CASSANDRA-9582)
   * Fix broken logging for empty flushes in Memtable (CASSANDRA-9837)
   * Handle corrupt files on startup (CASSANDRA-9686)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bf08e663/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--



[3/3] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2015-08-07 Thread marcuse
Merge branch 'cassandra-2.2' into cassandra-3.0

Conflicts:
CHANGES.txt
src/java/org/apache/cassandra/db/ColumnFamilyStore.java


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/52850009
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/52850009
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/52850009

Branch: refs/heads/cassandra-3.0
Commit: 5285000908e486ae80ef1a289e8a055d65d5dd7f
Parents: 0a94c7a bf08e66
Author: Marcus Eriksson marc...@apache.org
Authored: Fri Aug 7 08:23:36 2015 +0200
Committer: Marcus Eriksson marc...@apache.org
Committed: Fri Aug 7 08:23:36 2015 +0200

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/db/ColumnFamilyStore.java | 3 ++-
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/52850009/CHANGES.txt
--
diff --cc CHANGES.txt
index a894297,6e17be0..9aca2ce
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,44 -1,6 +1,45 @@@
 -2.2.1
 +3.0.0-beta1
 + * Optimize batchlog replay to avoid full scans (CASSANDRA-7237)
 + * Repair improvements when using vnodes (CASSANDRA-5220)
 + * Disable scripted UDFs by default (CASSANDRA-9889)
 + * Add transparent data encryption core classes (CASSANDRA-9945)
 + * Bytecode inspection for Java-UDFs (CASSANDRA-9890)
 + * Use byte to serialize MT hash length (CASSANDRA-9792)
 +Merged from 2.2:
   * Add checksum to saved cache files (CASSANDRA-9265)
   * Log warning when using an aggregate without partition key (CASSANDRA-9737)
 +Merged from 2.1:
++ * Release snapshot selfRef when doing snapshot repair (CASSANDRA-9998)
 + * Cannot replace token does not exist - DN node removed as Fat Client 
(CASSANDRA-9871)
 +Merged from 2.0:
 + * Don't cast expected bf size to an int (CASSANDRA-9959)
 +
 +
 +3.0.0-alpha1
 + * Implement proper sandboxing for UDFs (CASSANDRA-9402)
 + * Simplify (and unify) cleanup of compaction leftovers (CASSANDRA-7066)
 + * Allow extra schema definitions in cassandra-stress yaml (CASSANDRA-9850)
 + * Metrics should use up to date nomenclature (CASSANDRA-9448)
 + * Change CREATE/ALTER TABLE syntax for compression (CASSANDRA-8384)
 + * Cleanup crc and adler code for java 8 (CASSANDRA-9650)
 + * Storage engine refactor (CASSANDRA-8099, 9743, 9746, 9759, 9781, 9808, 
9825,
 +   9848, 9705, 9859, 9867, 9874, 9828, 9801)
 + * Update Guava to 18.0 (CASSANDRA-9653)
 + * Bloom filter false positive ratio is not honoured (CASSANDRA-8413)
 + * New option for cassandra-stress to leave a ratio of columns null 
(CASSANDRA-9522)
 + * Change hinted_handoff_enabled yaml setting, JMX (CASSANDRA-9035)
 + * Add algorithmic token allocation (CASSANDRA-7032)
 + * Add nodetool command to replay batchlog (CASSANDRA-9547)
 + * Make file buffer cache independent of paths being read (CASSANDRA-8897)
 + * Remove deprecated legacy Hadoop code (CASSANDRA-9353)
 + * Decommissioned nodes will not rejoin the cluster (CASSANDRA-8801)
 + * Change gossip stabilization to use endpoit size (CASSANDRA-9401)
 + * Change default garbage collector to G1 (CASSANDRA-7486)
 + * Populate TokenMetadata early during startup (CASSANDRA-9317)
 + * Undeprecate cache recentHitRate (CASSANDRA-6591)
 + * Add support for selectively varint encoding fields (CASSANDRA-9499, 9865)
 + * Materialized Views (CASSANDRA-6477)
 +Merged from 2.2:
   * Avoid grouping sstables for anticompaction with DTCS (CASSANDRA-9900)
   * UDF / UDA execution time in trace (CASSANDRA-9723)
   * Fix broken internode SSL (CASSANDRA-9884)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/52850009/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --cc src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 255f9a0,5e8c521..beb2b93
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@@ -1600,9 -2363,10 +1600,10 @@@ public class ColumnFamilyStore implemen
  if (logger.isDebugEnabled())
  logger.debug(using snapshot sstable {}, 
entries.getKey());
  // open without tracking hotness
 -sstable = SSTableReader.open(entries.getKey(), 
entries.getValue(), metadata, partitioner, true, false);
 +sstable = SSTableReader.open(entries.getKey(), 
entries.getValue(), metadata, true, false);
- // This is technically not necessary since it's a 
snapshot but makes things easier
  refs.tryRef(sstable);
+ // release the self ref as we never add the snapshot 
sstable to DataTracker where it is otherwise 

[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-08-07 Thread marcuse
Merge branch 'cassandra-2.1' into cassandra-2.2

Conflicts:
CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bf08e663
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bf08e663
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bf08e663

Branch: refs/heads/cassandra-2.2
Commit: bf08e663b7b0e88fdeb2a0cbd7b754e168dde77a
Parents: ed4fad1 e9b975c
Author: Marcus Eriksson marc...@apache.org
Authored: Fri Aug 7 08:21:47 2015 +0200
Committer: Marcus Eriksson marc...@apache.org
Committed: Fri Aug 7 08:21:47 2015 +0200

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/db/ColumnFamilyStore.java | 3 ++-
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bf08e663/CHANGES.txt
--
diff --cc CHANGES.txt
index ff0fdda,75bdcde..6e17be0
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,29 -1,9 +1,30 @@@
 -2.1.9
 +2.2.1
 + * Add checksum to saved cache files (CASSANDRA-9265)
 + * Log warning when using an aggregate without partition key (CASSANDRA-9737)
 + * Avoid grouping sstables for anticompaction with DTCS (CASSANDRA-9900)
 + * UDF / UDA execution time in trace (CASSANDRA-9723)
 + * Fix broken internode SSL (CASSANDRA-9884)
 +Merged from 2.1:
+  * Release snapshot selfRef when doing snapshot repair (CASSANDRA-9998)
   * Cannot replace token does not exist - DN node removed as Fat Client 
(CASSANDRA-9871)
   * Fix handling of enable/disable autocompaction (CASSANDRA-9899)
 - * Commit log segment recycling is disabled by default (CASSANDRA-9896)
   * Add consistency level to tracing ouput (CASSANDRA-9827)
 + * Remove repair snapshot leftover on startup (CASSANDRA-7357)
 + * Use random nodes for batch log when only 2 racks (CASSANDRA-8735)
 + * Ensure atomicity inside thrift and stream session (CASSANDRA-7757)
 + * Fix nodetool info error when the node is not joined (CASSANDRA-9031)
 +Merged from 2.0:
 + * Log when messages are dropped due to cross_node_timeout (CASSANDRA-9793)
 + * Don't track hotness when opening from snapshot for validation 
(CASSANDRA-9382)
 +
 +
 +2.2.0
 + * Allow the selection of columns together with aggregates (CASSANDRA-9767)
 + * Fix cqlsh copy methods and other windows specific issues (CASSANDRA-9795)
 + * Don't wrap byte arrays in SequentialWriter (CASSANDRA-9797)
 + * sum() and avg() functions missing for smallint and tinyint types 
(CASSANDRA-9671)
 + * Revert CASSANDRA-9542 (allow native functions in UDA) (CASSANDRA-9771)
 +Merged from 2.1:
   * Fix MarshalException when upgrading superColumn family (CASSANDRA-9582)
   * Fix broken logging for empty flushes in Memtable (CASSANDRA-9837)
   * Handle corrupt files on startup (CASSANDRA-9686)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bf08e663/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--



cassandra git commit: Release the self ref when opening snapshot sstable readers

2015-08-07 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 e1bb79260 - e9b975c27


Release the self ref when opening snapshot sstable readers

Patch by marcuse; reviewed by benedict for CASSANDRA-9998


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e9b975c2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e9b975c2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e9b975c2

Branch: refs/heads/cassandra-2.1
Commit: e9b975c279578fe2d6f25368fe2839e1d0572371
Parents: e1bb792
Author: Marcus Eriksson marc...@apache.org
Authored: Thu Aug 6 15:22:50 2015 +0200
Committer: Marcus Eriksson marc...@apache.org
Committed: Fri Aug 7 08:20:06 2015 +0200

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/db/ColumnFamilyStore.java | 3 ++-
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e9b975c2/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9a475ea..75bdcde 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.9
+ * Release snapshot selfRef when doing snapshot repair (CASSANDRA-9998)
  * Cannot replace token does not exist - DN node removed as Fat Client 
(CASSANDRA-9871)
  * Fix handling of enable/disable autocompaction (CASSANDRA-9899)
  * Commit log segment recycling is disabled by default (CASSANDRA-9896)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e9b975c2/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index ad66f8e..6777e7a 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -2355,8 +2355,9 @@ public class ColumnFamilyStore implements 
ColumnFamilyStoreMBean
 logger.debug(using snapshot sstable {}, 
entries.getKey());
 // open without tracking hotness
 sstable = SSTableReader.open(entries.getKey(), 
entries.getValue(), metadata, partitioner, true, false);
-// This is technically not necessary since it's a snapshot 
but makes things easier
 refs.tryRef(sstable);
+// release the self ref as we never add the snapshot 
sstable to DataTracker where it is otherwise released
+sstable.selfRef().release();
 }
 else if (logger.isDebugEnabled())
 {



[jira] [Updated] (CASSANDRA-9961) cqlsh should have DESCRIBE MATERIALIZED VIEW

2015-08-07 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-9961:
-
Fix Version/s: (was: 3.x)
   3.0.x

 cqlsh should have DESCRIBE MATERIALIZED VIEW
 

 Key: CASSANDRA-9961
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9961
 Project: Cassandra
  Issue Type: Improvement
Reporter: Carl Yeksigian
Assignee: Stefania
  Labels: client-impacting, materializedviews
 Fix For: 3.0.x


 cqlsh doesn't currently produce describe output that can be used to recreate 
 a MV. Needs to add a new {{DESCRIBE MATERIALIZED VIEW}} command, and also add 
 to {{DESCRIBE KEYSPACE}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9985) Introduce our own AbstractIterator

2015-08-07 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14661392#comment-14661392
 ] 

Benedict commented on CASSANDRA-9985:
-

Right, it seems exceedingly difficult to get into, and is a worse misuse of the 
iterator than the iterator can handle. 

My main reason to not want to address it, though, is that we will be stepping 
through this {{AbstractIterator.hasNext()}} often, and I'd like it to be easy 
and attractive to do so. I had a quick go at making it remain easy to read and, 
hopefully, step through, while still catching this edge case. I've pushed a 
version that I think meets the criteria.

 Introduce our own AbstractIterator
 --

 Key: CASSANDRA-9985
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9985
 Project: Cassandra
  Issue Type: Sub-task
  Components: Core
Reporter: Benedict
Assignee: Benedict
Priority: Trivial
 Fix For: 3.0.0 rc1


 The Guava AbstractIterator not only has unnecessary method call depth, it is 
 difficult to debug without attaching source. Since it's absolutely trivial to 
 write our own, and it's used widely within the codebase, I think we should do 
 so.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10003) RowAndDeletionMergeIteratorTest is failing on 3.0 and trunk

2015-08-07 Thread Daniel Chia (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Chia updated CASSANDRA-10003:

Attachment: 0001-trunk-10003.patch

 RowAndDeletionMergeIteratorTest is failing on 3.0 and trunk
 ---

 Key: CASSANDRA-10003
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10003
 Project: Cassandra
  Issue Type: Bug
Reporter: Yuki Morishita
Priority: Blocker
 Fix For: 3.0.x

 Attachments: 0001-trunk-10003.patch


 RowAndDeletionMergeIteratorTest is falling with the followings:
 {code}
 [junit] Testcase: 
 testWithAtMostRangeTombstone(org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest):
FAILED
 [junit] expected:ROW but was:RANGE_TOMBSTONE_MARKER
 [junit] junit.framework.AssertionFailedError: expected:ROW but 
 was:RANGE_TOMBSTONE_MARKER
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.assertRow(RowAndDeletionMergeIteratorTest.java:328)
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.testWithAtMostRangeTombstone(RowAndDeletionMergeIteratorTest.java:134)
 [junit]
 [junit]
 [junit] Testcase: 
 testWithGreaterThanRangeTombstone(org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest):
   FAILED
 [junit] expected:ROW but was:RANGE_TOMBSTONE_MARKER
 [junit] junit.framework.AssertionFailedError: expected:ROW but 
 was:RANGE_TOMBSTONE_MARKER
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.assertRow(RowAndDeletionMergeIteratorTest.java:328)
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.testWithGreaterThanRangeTombstone(RowAndDeletionMergeIteratorTest.java:179)
 [junit]
 [junit]
 [junit] Testcase: 
 testWithAtMostAndGreaterThanRangeTombstone(org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest):
  FAILED
 [junit] expected:ROW but was:RANGE_TOMBSTONE_MARKER
 [junit] junit.framework.AssertionFailedError: expected:ROW but 
 was:RANGE_TOMBSTONE_MARKER
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.assertRow(RowAndDeletionMergeIteratorTest.java:328)
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.testWithAtMostAndGreaterThanRangeTombstone(RowAndDeletionMergeIteratorTest.java:207)
 [junit]
 [junit]
 [junit] Testcase: 
 testWithIncludingEndExcludingStartMarker(org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest):
FAILED
 [junit] expected:ROW but was:RANGE_TOMBSTONE_MARKER
 [junit] junit.framework.AssertionFailedError: expected:ROW but 
 was:RANGE_TOMBSTONE_MARKER
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.assertRow(RowAndDeletionMergeIteratorTest.java:328)
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.testWithIncludingEndExcludingStartMarker(RowAndDeletionMergeIteratorTest.java:257)
 [junit]
 [junit]
 [junit] Testcase: 
 testWithExcludingEndIncludingStartMarker(org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest):
FAILED
 [junit] expected:ROW but was:RANGE_TOMBSTONE_MARKER
 [junit] junit.framework.AssertionFailedError: expected:ROW but 
 was:RANGE_TOMBSTONE_MARKER
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.assertRow(RowAndDeletionMergeIteratorTest.java:328)
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.testWithExcludingEndIncludingStartMarker(RowAndDeletionMergeIteratorTest.java:297)
 {code}
 cassci started to show the failure recently, but I can go back as far as 
 {{2457599427d361314dce4833abeb5cd4915d0b06}} to see the failure on local.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10003) RowAndDeletionMergeIteratorTest is failing on 3.0 and trunk

2015-08-07 Thread Daniel Chia (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14661467#comment-14661467
 ] 

Daniel Chia commented on CASSANDRA-10003:
-

Relatedly, while looking at code during fixing this test, I came across this: 
https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/rows/RowAndDeletionMergeIterator.java#L138

I'm not sure if this is a bug, but don't we need to test if the 
partitionLevelDeletion supersedes the range tombstone deletion?

 RowAndDeletionMergeIteratorTest is failing on 3.0 and trunk
 ---

 Key: CASSANDRA-10003
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10003
 Project: Cassandra
  Issue Type: Bug
Reporter: Yuki Morishita
Assignee: Daniel Chia
Priority: Blocker
 Fix For: 3.0.x

 Attachments: 0001-trunk-10003.patch, 0002-trunk-10003.patch


 RowAndDeletionMergeIteratorTest is falling with the followings:
 {code}
 [junit] Testcase: 
 testWithAtMostRangeTombstone(org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest):
FAILED
 [junit] expected:ROW but was:RANGE_TOMBSTONE_MARKER
 [junit] junit.framework.AssertionFailedError: expected:ROW but 
 was:RANGE_TOMBSTONE_MARKER
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.assertRow(RowAndDeletionMergeIteratorTest.java:328)
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.testWithAtMostRangeTombstone(RowAndDeletionMergeIteratorTest.java:134)
 [junit]
 [junit]
 [junit] Testcase: 
 testWithGreaterThanRangeTombstone(org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest):
   FAILED
 [junit] expected:ROW but was:RANGE_TOMBSTONE_MARKER
 [junit] junit.framework.AssertionFailedError: expected:ROW but 
 was:RANGE_TOMBSTONE_MARKER
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.assertRow(RowAndDeletionMergeIteratorTest.java:328)
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.testWithGreaterThanRangeTombstone(RowAndDeletionMergeIteratorTest.java:179)
 [junit]
 [junit]
 [junit] Testcase: 
 testWithAtMostAndGreaterThanRangeTombstone(org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest):
  FAILED
 [junit] expected:ROW but was:RANGE_TOMBSTONE_MARKER
 [junit] junit.framework.AssertionFailedError: expected:ROW but 
 was:RANGE_TOMBSTONE_MARKER
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.assertRow(RowAndDeletionMergeIteratorTest.java:328)
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.testWithAtMostAndGreaterThanRangeTombstone(RowAndDeletionMergeIteratorTest.java:207)
 [junit]
 [junit]
 [junit] Testcase: 
 testWithIncludingEndExcludingStartMarker(org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest):
FAILED
 [junit] expected:ROW but was:RANGE_TOMBSTONE_MARKER
 [junit] junit.framework.AssertionFailedError: expected:ROW but 
 was:RANGE_TOMBSTONE_MARKER
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.assertRow(RowAndDeletionMergeIteratorTest.java:328)
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.testWithIncludingEndExcludingStartMarker(RowAndDeletionMergeIteratorTest.java:257)
 [junit]
 [junit]
 [junit] Testcase: 
 testWithExcludingEndIncludingStartMarker(org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest):
FAILED
 [junit] expected:ROW but was:RANGE_TOMBSTONE_MARKER
 [junit] junit.framework.AssertionFailedError: expected:ROW but 
 was:RANGE_TOMBSTONE_MARKER
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.assertRow(RowAndDeletionMergeIteratorTest.java:328)
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.testWithExcludingEndIncludingStartMarker(RowAndDeletionMergeIteratorTest.java:297)
 {code}
 cassci started to show the failure recently, but I can go back as far as 
 {{2457599427d361314dce4833abeb5cd4915d0b06}} to see the failure on local.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9129) HintedHandoff in pending state forever after upgrading to 2.0.14 from 2.0.11 and 2.0.12

2015-08-07 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14654189#comment-14654189
 ] 

Sam Tunnicliffe edited comment on CASSANDRA-9129 at 8/7/15 8:31 AM:


The attached patch adds a second ScheduledExecutor to HintedHandoffManager. 
This reverts back to something more like pre-8285 behaviour where only the 
actual hint delivery tasks are run on the executor exposed through JMX  
tpstats. 
The periodic scheduling task, plus deletions from  truncations of the hints 
table are run on this new executor, so the stats will go back to just reporting 
the number of hints delivered/pending etc. 
 

Cassci: 
[testall|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-9129-testall/]
  
[dtests|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-9129-dtest/]


was (Author: beobal):

The attached patch adds a second ScheduledExecutor to HintedHandoffManager. 
This reverts back to something more like pre-8285 behaviour where only the 
actual hint delivery tasks are run on the executor exposed through JMX  
tpstats. 
The periodic scheduling task, plus deletions from  truncations of the hints 
table are run on this new executor, so the stats will go back to just reporting 
the number of hints delivered/pending etc. 
 

Cassci: 
[testall|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-9219-testall/]
  
[dtests|http://cassci.datastax.com/view/Dev/view/beobal/job/beobal-9219-dtest/]

 HintedHandoff in pending state forever after upgrading to 2.0.14 from 2.0.11 
 and 2.0.12
 ---

 Key: CASSANDRA-9129
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9129
 Project: Cassandra
  Issue Type: Bug
 Environment: Ubuntu 12.04.5 LTS
 AWS (m3.xlarge)
 15G RAM
 4 core Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz
 Cassandra 2.0.14
Reporter: Russ Lavoie
Assignee: Sam Tunnicliffe
 Fix For: 2.0.x

 Attachments: 9129-2.0.txt


 Upgrading from Cassandra 2.0.11 or 2.0.12 to 2.0.14 I am seeing a pending 
 hinted hand off that never clears.  New hinted hand offs that go into pending 
 waiting for a node to come up clear as expected.  But 1 always remains.
 I through the following steps.
 1) stop cassandra
 2) Upgrade cassandra to 2.0.14
 3) Start cassandra
 4) nodetool tpstats
 There are no errors in the logs, to help with this issue.  I ran a few 
 nodetool commands to get some data and pasted them below:
 Below is what is shown after running nodetool status on each node in the ring
 {code}Status=Up/Down
 |/ State=Normal/Leaving/Joining/Moving
 --  Address   Load   Tokens  Owns   Host ID   Rack
 UN  NODE1  279.8 MB   256 34.9%  HOSTID   rack1
 UN  NODE2  279.79 MB  256 33.0%  HOSTID   rack1
 UN  NODE3  279.87 MB  256 32.1%  HOSTID   rack1
 {code}
 Below is what is shown after running nodetool tpstats on each node in the 
 ring showing a single HintedHandoff in pending status that never clears
 {code}
 Pool NameActive   Pending  Completed   Blocked  All 
 time blocked
 ReadStage 0 0  14550 0
  0
 RequestResponseStage  0 0 113040 0
  0
 MutationStage 0 0 168873 0
  0
 ReadRepairStage   0 0   1147 0
  0
 ReplicateOnWriteStage 0 0  0 0
  0
 GossipStage   0 0 232112 0
  0
 CacheCleanupExecutor  0 0  0 0
  0
 MigrationStage0 0  0 0
  0
 MemoryMeter   0 0  6 0
  0
 FlushWriter   0 0 38 0
  0
 ValidationExecutor0 0  0 0
  0
 InternalResponseStage 0 0  0 0
  0
 AntiEntropyStage  0 0  0 0
  0
 MemtablePostFlusher   0 0   1333 0
  0
 MiscStage 0 0  0 0
  0
 PendingRangeCalculator0 0  6 0
  0
 CompactionExecutor0 0178 0
  0
 commitlog_archiver0 0  0 0
  0
 HintedHandoff 

[jira] [Commented] (CASSANDRA-8630) Faster sequential IO (on compaction, streaming, etc)

2015-08-07 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14661554#comment-14661554
 ] 

Robert Stupp commented on CASSANDRA-8630:
-

OHC forces native byte order to prevent swapping the values. I can however 
provide a config switch in the builder to use big-endian if that helps.

 Faster sequential IO (on compaction, streaming, etc)
 

 Key: CASSANDRA-8630
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8630
 Project: Cassandra
  Issue Type: Improvement
  Components: Core, Tools
Reporter: Oleg Anastasyev
Assignee: Stefania
  Labels: compaction, performance
 Fix For: 3.x

 Attachments: 8630-FasterSequencialReadsAndWrites.txt, cpu_load.png, 
 flight_recorder_001_files.tar.gz


 When node is doing a lot of sequencial IO (streaming, compacting, etc) a lot 
 of CPU is lost in calls to RAF's int read() and DataOutputStream's write(int).
 This is because default implementations of readShort,readLong, etc as well as 
 their matching write* are implemented with numerous calls of byte by byte 
 read and write. 
 This makes a lot of syscalls as well.
 A quick microbench shows than just reimplementation of these methods in 
 either way gives 8x speed increase.
 A patch attached implements RandomAccessReader.readType and 
 SequencialWriter.writeType methods in more efficient way.
 I also eliminated some extra byte copies in CompositeType.split and 
 ColumnNameHelper.maxComponents, which were on my profiler's hotspot method 
 list during tests.
 A stress tests on my laptop show that this patch makes compaction 25-30% 
 faster  on uncompressed sstables and 15% faster for compressed ones.
 A deployment to production shows much less CPU load for compaction. 
 (I attached a cpu load graph from one of our production, orange is niced CPU 
 load - i.e. compaction; yellow is user - i.e. not compaction related tasks)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10003) RowAndDeletionMergeIteratorTest is failing on 3.0 and trunk

2015-08-07 Thread Daniel Chia (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14661425#comment-14661425
 ] 

Daniel Chia commented on CASSANDRA-10003:
-

I think there's a combination of two things that causes the weirdness:

1) There's actually an issue in calculating timestamp, where we overflow the 
(int) nowInSeconds in converting it into a timestamp. Due to bad (good?) luck, 
up until really recently, this caused timestamp to overflow to a negative 
number, which made non of the range tombstones actually shadow any of the data 
in rows.

2) I think this test really meant to test the iterator without the 
`removeShadowedData` flag set to true, but that's what the test does.

 RowAndDeletionMergeIteratorTest is failing on 3.0 and trunk
 ---

 Key: CASSANDRA-10003
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10003
 Project: Cassandra
  Issue Type: Bug
Reporter: Yuki Morishita
Priority: Blocker
 Fix For: 3.0.x


 RowAndDeletionMergeIteratorTest is falling with the followings:
 {code}
 [junit] Testcase: 
 testWithAtMostRangeTombstone(org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest):
FAILED
 [junit] expected:ROW but was:RANGE_TOMBSTONE_MARKER
 [junit] junit.framework.AssertionFailedError: expected:ROW but 
 was:RANGE_TOMBSTONE_MARKER
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.assertRow(RowAndDeletionMergeIteratorTest.java:328)
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.testWithAtMostRangeTombstone(RowAndDeletionMergeIteratorTest.java:134)
 [junit]
 [junit]
 [junit] Testcase: 
 testWithGreaterThanRangeTombstone(org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest):
   FAILED
 [junit] expected:ROW but was:RANGE_TOMBSTONE_MARKER
 [junit] junit.framework.AssertionFailedError: expected:ROW but 
 was:RANGE_TOMBSTONE_MARKER
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.assertRow(RowAndDeletionMergeIteratorTest.java:328)
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.testWithGreaterThanRangeTombstone(RowAndDeletionMergeIteratorTest.java:179)
 [junit]
 [junit]
 [junit] Testcase: 
 testWithAtMostAndGreaterThanRangeTombstone(org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest):
  FAILED
 [junit] expected:ROW but was:RANGE_TOMBSTONE_MARKER
 [junit] junit.framework.AssertionFailedError: expected:ROW but 
 was:RANGE_TOMBSTONE_MARKER
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.assertRow(RowAndDeletionMergeIteratorTest.java:328)
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.testWithAtMostAndGreaterThanRangeTombstone(RowAndDeletionMergeIteratorTest.java:207)
 [junit]
 [junit]
 [junit] Testcase: 
 testWithIncludingEndExcludingStartMarker(org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest):
FAILED
 [junit] expected:ROW but was:RANGE_TOMBSTONE_MARKER
 [junit] junit.framework.AssertionFailedError: expected:ROW but 
 was:RANGE_TOMBSTONE_MARKER
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.assertRow(RowAndDeletionMergeIteratorTest.java:328)
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.testWithIncludingEndExcludingStartMarker(RowAndDeletionMergeIteratorTest.java:257)
 [junit]
 [junit]
 [junit] Testcase: 
 testWithExcludingEndIncludingStartMarker(org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest):
FAILED
 [junit] expected:ROW but was:RANGE_TOMBSTONE_MARKER
 [junit] junit.framework.AssertionFailedError: expected:ROW but 
 was:RANGE_TOMBSTONE_MARKER
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.assertRow(RowAndDeletionMergeIteratorTest.java:328)
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.testWithExcludingEndIncludingStartMarker(RowAndDeletionMergeIteratorTest.java:297)
 {code}
 cassci started to show the failure recently, but I can go back as far as 
 {{2457599427d361314dce4833abeb5cd4915d0b06}} to see the failure on local.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9606) this query is not supported in new version

2015-08-07 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14661498#comment-14661498
 ] 

Sam Tunnicliffe commented on CASSANDRA-9606:


+1 lgtm

 this query is not supported in new version
 --

 Key: CASSANDRA-9606
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9606
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: cassandra 2.1.6
 jdk 1.7.0_55
Reporter: zhaoyan
Assignee: Benjamin Lerer
 Attachments: 9606-2.0.txt, 9606-2.1.txt, 9606-2.2.txt


 Background:
 1、create a table:
 {code}
 CREATE TABLE test (
 a int,
 b int,
 c int,
   d int,
 PRIMARY KEY (a, b, c)
 );
 {code}
 2、query by a=1 and b6
 {code}
 select * from test where a=1 and b6;
  a | b | c | d
 ---+---+---+---
  1 | 3 | 1 | 2
  1 | 3 | 2 | 2
  1 | 3 | 4 | 2
  1 | 3 | 5 | 2
  1 | 4 | 4 | 2
  1 | 5 | 5 | 2
 (6 rows)
 {code}
 3、query by page
 first page:
 {code}
 select * from test where a=1 and b6 limit 2;
  a | b | c | d
 ---+---+---+---
  1 | 3 | 1 | 2
  1 | 3 | 2 | 2
 (2 rows)
 {code}
 second page:
 {code}
 select * from test where a=1 and b6 and (b,c)  (3,2) limit 2;
  a | b | c | d
 ---+---+---+---
  1 | 3 | 4 | 2
  1 | 3 | 5 | 2
 (2 rows)
 {code}
 last page:
 {code}
 select * from test where a=1 and b6 and (b,c)  (3,5) limit 2;
  a | b | c | d
 ---+---+---+---
  1 | 4 | 4 | 2
  1 | 5 | 5 | 2
 (2 rows)
 {code}
 question:
 this query by page is ok when cassandra 2.0.8.
 but is not supported in the latest version 2.1.6
 when execute:
 {code}
 select * from test where a=1 and b6 and (b,c)  (3,2) limit 2;
 {code}
 get one error message:
 InvalidRequest: code=2200 [Invalid query] message=Column b cannot have 
 both tuple-notation inequalities and single-column inequalities: (b, c)  (3, 
 2)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-10008) Upgrading SSTables fails on 2.2.0 (after upgrade from 2.1.2)

2015-08-07 Thread Chris Moos (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14661405#comment-14661405
 ] 

Chris Moos edited comment on CASSANDRA-10008 at 8/7/15 6:46 AM:


I think I may have tracked down the issue, it seems that this happens because 
sstables from the old version and new version both exist in the transaction and 
during upgrade cancel() is used to filter out the latest versions sstables, but 
cancel() is not fully removing the SSTableReader from the transaction so 
checkUnused() fails.

Patch attached.


was (Author: chrismoos):
I think I may have tracked down the issue, it seems that this happens because 
sstables from the old version and new version both exist in the transaction and 
during upgrade cancel() is used to filter out the latest versions sstables, but 
cancel() is not fully removing the SSTableReader from the transaction.

Patch attached.

 Upgrading SSTables fails on 2.2.0 (after upgrade from 2.1.2)
 

 Key: CASSANDRA-10008
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10008
 Project: Cassandra
  Issue Type: Bug
Reporter: Chris Moos
 Fix For: 2.2.x


 Running *nodetool upgradesstables* fails with the following after upgrading 
 to 2.2.0 from 2.1.2:
 {code}
 error: null
 -- StackTrace --
 java.lang.AssertionError
 at 
 org.apache.cassandra.db.lifecycle.LifecycleTransaction.checkUnused(LifecycleTransaction.java:428)
 at 
 org.apache.cassandra.db.lifecycle.LifecycleTransaction.split(LifecycleTransaction.java:408)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.parallelAllSSTableOperation(CompactionManager.java:268)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.performSSTableRewrite(CompactionManager.java:373)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.sstablesRewrite(ColumnFamilyStore.java:1524)
 at 
 org.apache.cassandra.service.StorageService.upgradeSSTables(StorageService.java:2521)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10008) Upgrading SSTables fails on 2.2.0 (after upgrade from 2.1.2)

2015-08-07 Thread Chris Moos (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Moos updated CASSANDRA-10008:
---
Attachment: CASSANDRA-10008.patch

 Upgrading SSTables fails on 2.2.0 (after upgrade from 2.1.2)
 

 Key: CASSANDRA-10008
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10008
 Project: Cassandra
  Issue Type: Bug
Reporter: Chris Moos
 Fix For: 2.2.x

 Attachments: CASSANDRA-10008.patch


 Running *nodetool upgradesstables* fails with the following after upgrading 
 to 2.2.0 from 2.1.2:
 {code}
 error: null
 -- StackTrace --
 java.lang.AssertionError
 at 
 org.apache.cassandra.db.lifecycle.LifecycleTransaction.checkUnused(LifecycleTransaction.java:428)
 at 
 org.apache.cassandra.db.lifecycle.LifecycleTransaction.split(LifecycleTransaction.java:408)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.parallelAllSSTableOperation(CompactionManager.java:268)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.performSSTableRewrite(CompactionManager.java:373)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.sstablesRewrite(ColumnFamilyStore.java:1524)
 at 
 org.apache.cassandra.service.StorageService.upgradeSSTables(StorageService.java:2521)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10008) Upgrading SSTables fails on 2.2.0 (after upgrade from 2.1.2)

2015-08-07 Thread Chris Moos (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14661405#comment-14661405
 ] 

Chris Moos commented on CASSANDRA-10008:


I think I may have tracked down the issue, it seems that this happens because 
sstables from the old version and new version both exist in the transaction and 
during upgrade cancel() is used to filter out the latest versions sstables, but 
cancel() is not fully removing the SSTableReader from the transaction.

Patch attached.

 Upgrading SSTables fails on 2.2.0 (after upgrade from 2.1.2)
 

 Key: CASSANDRA-10008
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10008
 Project: Cassandra
  Issue Type: Bug
Reporter: Chris Moos
 Fix For: 2.2.x


 Running *nodetool upgradesstables* fails with the following after upgrading 
 to 2.2.0 from 2.1.2:
 {code}
 error: null
 -- StackTrace --
 java.lang.AssertionError
 at 
 org.apache.cassandra.db.lifecycle.LifecycleTransaction.checkUnused(LifecycleTransaction.java:428)
 at 
 org.apache.cassandra.db.lifecycle.LifecycleTransaction.split(LifecycleTransaction.java:408)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.parallelAllSSTableOperation(CompactionManager.java:268)
 at 
 org.apache.cassandra.db.compaction.CompactionManager.performSSTableRewrite(CompactionManager.java:373)
 at 
 org.apache.cassandra.db.ColumnFamilyStore.sstablesRewrite(ColumnFamilyStore.java:1524)
 at 
 org.apache.cassandra.service.StorageService.upgradeSSTables(StorageService.java:2521)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10002) Repeated slices on RowSearchers are incorrect

2015-08-07 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14661436#comment-14661436
 ] 

Stefania commented on CASSANDRA-10002:
--

Code change is +1. 

You may want to use parentheses consistently for these two lines although it is 
not required for correctness:

{code}
final int start = nextIdx + (-searchResult - 1);
[...]
final int end = start + -searchResult - 1;
{code}


 Repeated slices on RowSearchers are incorrect
 -

 Key: CASSANDRA-10002
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10002
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Tyler Hobbs
Assignee: Tyler Hobbs
 Fix For: 3.0 beta 1


 In {{AbstractThreadUnsafePartition}}, repeated {{slice()}} calls on a 
 {{RowSearcher}} can produce incorrect results.  This is caused by only 
 performing a binary search over a sublist (based on {{nextIdx}}), but not 
 taking {{nextIdx}} into account when using the search result index.
 I made a quick fix in [this 
 commit|https://github.com/thobbs/cassandra/commit/73725ea6825c9c0da1fa4986b01f39ae08130e10]
  on one of my branches, but the full fix also needs to cover 
 {{ReverseRowSearcher}} and include a test to reproduce the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[3/3] cassandra git commit: Merge branch 'cassandra-3.0' into trunk

2015-08-07 Thread benedict
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e8a5327b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e8a5327b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e8a5327b

Branch: refs/heads/trunk
Commit: e8a5327bb98e7b449f561c06a7cda43099320647
Parents: 4f51341 605bcdc
Author: Benedict Elliott Smith bened...@apache.org
Authored: Fri Aug 7 10:01:44 2015 +0200
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Fri Aug 7 10:01:44 2015 +0200

--
 .../compaction/AbstractCompactionStrategy.java  |  2 +-
 .../cassandra/db/compaction/CompactionTask.java | 11 +--
 .../DateTieredCompactionStrategy.java   |  6 ++--
 .../db/compaction/LeveledCompactionTask.java|  6 ++--
 .../db/compaction/SSTableSplitter.java  |  4 +--
 .../cassandra/db/compaction/Scrubber.java   | 32 
 .../SizeTieredCompactionStrategy.java   | 12 
 .../writers/CompactionAwareWriter.java  | 12 ++--
 .../writers/DefaultCompactionWriter.java|  9 --
 .../writers/MajorLeveledCompactionWriter.java   | 17 +--
 .../writers/MaxSSTableSizeWriter.java   | 19 ++--
 .../SplittingSizeTieredCompactionWriter.java|  2 +-
 .../cassandra/db/lifecycle/TransactionLogs.java |  2 +-
 .../cassandra/io/sstable/SSTableTxnWriter.java  |  6 ++--
 .../cassandra/tools/StandaloneSplitter.java |  7 +
 .../db/compaction/LongCompactionsTest.java  |  2 +-
 .../unit/org/apache/cassandra/db/ScrubTest.java |  7 +++--
 .../compaction/CompactionAwareWriterTest.java   |  6 ++--
 .../io/sstable/BigTableWriterTest.java  | 13 ++--
 .../io/sstable/SSTableRewriterTest.java |  2 --
 20 files changed, 110 insertions(+), 67 deletions(-)
--




[1/3] cassandra git commit: Fix split and scrub tool sstable cleanup Follow up to CASSANDRA-9978

2015-08-07 Thread benedict
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 528500090 - 605bcdcf1
  refs/heads/trunk 4f51341b5 - e8a5327bb


Fix split and scrub tool sstable cleanup
Follow up to CASSANDRA-9978

patch by stefania; reviewed by benedict for CASSANDRA-7066


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/605bcdcf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/605bcdcf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/605bcdcf

Branch: refs/heads/cassandra-3.0
Commit: 605bcdcf11f2238d6d3d95b6281c9e38cf56e533
Parents: 5285000
Author: Stefania Alborghetti stefania.alborghe...@datastax.com
Authored: Wed Aug 5 15:32:55 2015 +0800
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Fri Aug 7 10:00:56 2015 +0200

--
 .../compaction/AbstractCompactionStrategy.java  |  2 +-
 .../cassandra/db/compaction/CompactionTask.java | 11 +--
 .../DateTieredCompactionStrategy.java   |  6 ++--
 .../db/compaction/LeveledCompactionTask.java|  6 ++--
 .../db/compaction/SSTableSplitter.java  |  4 +--
 .../cassandra/db/compaction/Scrubber.java   | 32 
 .../SizeTieredCompactionStrategy.java   | 12 
 .../writers/CompactionAwareWriter.java  | 12 ++--
 .../writers/DefaultCompactionWriter.java|  9 --
 .../writers/MajorLeveledCompactionWriter.java   | 17 +--
 .../writers/MaxSSTableSizeWriter.java   | 19 ++--
 .../SplittingSizeTieredCompactionWriter.java|  2 +-
 .../cassandra/db/lifecycle/TransactionLogs.java |  2 +-
 .../cassandra/io/sstable/SSTableTxnWriter.java  |  6 ++--
 .../cassandra/tools/StandaloneSplitter.java |  7 +
 .../db/compaction/LongCompactionsTest.java  |  2 +-
 .../unit/org/apache/cassandra/db/ScrubTest.java |  7 +++--
 .../compaction/CompactionAwareWriterTest.java   |  6 ++--
 .../io/sstable/BigTableWriterTest.java  | 13 ++--
 .../io/sstable/SSTableRewriterTest.java |  2 --
 20 files changed, 110 insertions(+), 67 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/605bcdcf/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java 
b/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
index 379d3de..4279f6e 100644
--- 
a/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
+++ 
b/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
@@ -180,7 +180,7 @@ public abstract class AbstractCompactionStrategy
 
 public AbstractCompactionTask getCompactionTask(LifecycleTransaction txn, 
final int gcBefore, long maxSSTableBytes)
 {
-return new CompactionTask(cfs, txn, gcBefore, false);
+return new CompactionTask(cfs, txn, gcBefore);
 }
 
 /**

http://git-wip-us.apache.org/repos/asf/cassandra/blob/605bcdcf/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionTask.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
index 7897a1a..0bd6aae 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
@@ -52,14 +52,21 @@ public class CompactionTask extends AbstractCompactionTask
 protected static final Logger logger = 
LoggerFactory.getLogger(CompactionTask.class);
 protected final int gcBefore;
 private final boolean offline;
+private final boolean keepOriginals;
 protected static long totalBytesCompacted = 0;
 private CompactionExecutorStatsCollector collector;
 
-public CompactionTask(ColumnFamilyStore cfs, LifecycleTransaction txn, int 
gcBefore, boolean offline)
+public CompactionTask(ColumnFamilyStore cfs, LifecycleTransaction txn, int 
gcBefore)
+{
+this(cfs, txn, gcBefore, false, false);
+}
+
+public CompactionTask(ColumnFamilyStore cfs, LifecycleTransaction txn, int 
gcBefore, boolean offline, boolean keepOriginals)
 {
 super(cfs, txn);
 this.gcBefore = gcBefore;
 this.offline = offline;
+this.keepOriginals = keepOriginals;
 }
 
 public static synchronized long addToTotalBytesCompacted(long 
bytesCompacted)
@@ -224,7 +231,7 @@ public class CompactionTask extends AbstractCompactionTask
   LifecycleTransaction 
transaction,
   SetSSTableReader 
nonExpiredSSTables)
 {
-

[2/3] cassandra git commit: Fix split and scrub tool sstable cleanup Follow up to CASSANDRA-9978

2015-08-07 Thread benedict
Fix split and scrub tool sstable cleanup
Follow up to CASSANDRA-9978

patch by stefania; reviewed by benedict for CASSANDRA-7066


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/605bcdcf
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/605bcdcf
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/605bcdcf

Branch: refs/heads/trunk
Commit: 605bcdcf11f2238d6d3d95b6281c9e38cf56e533
Parents: 5285000
Author: Stefania Alborghetti stefania.alborghe...@datastax.com
Authored: Wed Aug 5 15:32:55 2015 +0800
Committer: Benedict Elliott Smith bened...@apache.org
Committed: Fri Aug 7 10:00:56 2015 +0200

--
 .../compaction/AbstractCompactionStrategy.java  |  2 +-
 .../cassandra/db/compaction/CompactionTask.java | 11 +--
 .../DateTieredCompactionStrategy.java   |  6 ++--
 .../db/compaction/LeveledCompactionTask.java|  6 ++--
 .../db/compaction/SSTableSplitter.java  |  4 +--
 .../cassandra/db/compaction/Scrubber.java   | 32 
 .../SizeTieredCompactionStrategy.java   | 12 
 .../writers/CompactionAwareWriter.java  | 12 ++--
 .../writers/DefaultCompactionWriter.java|  9 --
 .../writers/MajorLeveledCompactionWriter.java   | 17 +--
 .../writers/MaxSSTableSizeWriter.java   | 19 ++--
 .../SplittingSizeTieredCompactionWriter.java|  2 +-
 .../cassandra/db/lifecycle/TransactionLogs.java |  2 +-
 .../cassandra/io/sstable/SSTableTxnWriter.java  |  6 ++--
 .../cassandra/tools/StandaloneSplitter.java |  7 +
 .../db/compaction/LongCompactionsTest.java  |  2 +-
 .../unit/org/apache/cassandra/db/ScrubTest.java |  7 +++--
 .../compaction/CompactionAwareWriterTest.java   |  6 ++--
 .../io/sstable/BigTableWriterTest.java  | 13 ++--
 .../io/sstable/SSTableRewriterTest.java |  2 --
 20 files changed, 110 insertions(+), 67 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/605bcdcf/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java 
b/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
index 379d3de..4279f6e 100644
--- 
a/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
+++ 
b/src/java/org/apache/cassandra/db/compaction/AbstractCompactionStrategy.java
@@ -180,7 +180,7 @@ public abstract class AbstractCompactionStrategy
 
 public AbstractCompactionTask getCompactionTask(LifecycleTransaction txn, 
final int gcBefore, long maxSSTableBytes)
 {
-return new CompactionTask(cfs, txn, gcBefore, false);
+return new CompactionTask(cfs, txn, gcBefore);
 }
 
 /**

http://git-wip-us.apache.org/repos/asf/cassandra/blob/605bcdcf/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
--
diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionTask.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
index 7897a1a..0bd6aae 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionTask.java
@@ -52,14 +52,21 @@ public class CompactionTask extends AbstractCompactionTask
 protected static final Logger logger = 
LoggerFactory.getLogger(CompactionTask.class);
 protected final int gcBefore;
 private final boolean offline;
+private final boolean keepOriginals;
 protected static long totalBytesCompacted = 0;
 private CompactionExecutorStatsCollector collector;
 
-public CompactionTask(ColumnFamilyStore cfs, LifecycleTransaction txn, int 
gcBefore, boolean offline)
+public CompactionTask(ColumnFamilyStore cfs, LifecycleTransaction txn, int 
gcBefore)
+{
+this(cfs, txn, gcBefore, false, false);
+}
+
+public CompactionTask(ColumnFamilyStore cfs, LifecycleTransaction txn, int 
gcBefore, boolean offline, boolean keepOriginals)
 {
 super(cfs, txn);
 this.gcBefore = gcBefore;
 this.offline = offline;
+this.keepOriginals = keepOriginals;
 }
 
 public static synchronized long addToTotalBytesCompacted(long 
bytesCompacted)
@@ -224,7 +231,7 @@ public class CompactionTask extends AbstractCompactionTask
   LifecycleTransaction 
transaction,
   SetSSTableReader 
nonExpiredSSTables)
 {
-return new DefaultCompactionWriter(cfs, transaction, 
nonExpiredSSTables, offline);
+return new DefaultCompactionWriter(cfs, transaction, 

[jira] [Commented] (CASSANDRA-9960) UDTs still visible after drop/recreate keyspace

2015-08-07 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14661507#comment-14661507
 ] 

Robert Stupp commented on CASSANDRA-9960:
-

bq. single local cassandra, one node, no replications

That makes it much easier ;)

 UDTs still visible after drop/recreate keyspace
 ---

 Key: CASSANDRA-9960
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9960
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jaroslav Kamenik
Assignee: Robert Stupp
Priority: Critical
 Fix For: 2.2.x


 When deploying my app from the scratch I run sequence - drop keyspaces, 
 create keyspaces, create UDTs, create tables, generate lots of data... After 
 few cycles, randomly, cassandra ends in state, where I cannot see anything in 
 table system.schema_usertypes, when I select all rows, but queries with 
 specified keyspace_name and type_name return old values. Usually it helps to 
 restart C* and old data disapear, sometimes it needs to delete all C* data. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9960) UDTs still visible after drop/recreate keyspace

2015-08-07 Thread Jaroslav Kamenik (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14661417#comment-14661417
 ] 

Jaroslav Kamenik commented on CASSANDRA-9960:
-

I am using single local cassandra, one node, no replications, for development, 
and small virtualized cluster for testing, it has 5 nodes in one DC, NTS with 
rep factor 3. Experienced problem on both, after move to 2.2.

 UDTs still visible after drop/recreate keyspace
 ---

 Key: CASSANDRA-9960
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9960
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jaroslav Kamenik
Assignee: Robert Stupp
Priority: Critical
 Fix For: 2.2.x


 When deploying my app from the scratch I run sequence - drop keyspaces, 
 create keyspaces, create UDTs, create tables, generate lots of data... After 
 few cycles, randomly, cassandra ends in state, where I cannot see anything in 
 table system.schema_usertypes, when I select all rows, but queries with 
 specified keyspace_name and type_name return old values. Usually it helps to 
 restart C* and old data disapear, sometimes it needs to delete all C* data. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10011) I want to develop transactions for Cassandra and I want your feedback

2015-08-07 Thread Marek Lewandowski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14661432#comment-14661432
 ] 

Marek Lewandowski commented on CASSANDRA-10011:
---

thanks, done that
http://www.mail-archive.com/dev@cassandra.apache.org/msg08175.html

 I want to develop transactions for Cassandra and I want your feedback
 -

 Key: CASSANDRA-10011
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10011
 Project: Cassandra
  Issue Type: New Feature
Reporter: Marek Lewandowski
  Labels: features

 Hello everyone,
 *TL;DR;* I want to develop transactions (similar to those relational ones) 
 for Cassandra, I have some ideas and I'd like to hear your feedback.
 *Long story short:* I want to develop prototype of solution that features 
 transactions spanning multiple Cassandra partitions resembling those in 
 relational databases. I understand that in Cassandra's world such 
 transactions will be quite a bit different than in relational db world. That 
 prototype or research if you will, is subject of my master's thesis. 
 It's been some time since I've been dancing around that subject and 
 postponing it, but what I understood in the process is that I cannot do 
 anything useful being hidden from community's feedback. I want you to provide 
 feedback on my ideas. You also have a power of changing it completely because 
 first and foremost _I want to develop something actually useful, not only get 
 my master's degree._
 To not scare anyone with huge wall of text, for now I'll just post very brief 
 description of my ideas and longer block of text describing how I can see 
 committing and rolling back data. I have more detailed descriptions prepared 
 already, but I don't want to share 4 pages of text in one go.
 Scope of the project (assuming I'll do it alone) is an experiment, not 
 production ready solution.
 Such experiment can use any tools possible to actually perform it.
 Baseline for any idea is:
 - asynchronous execution - think Akka and actors with non blocking execution 
 and message passing 
 - doesn't have to be completely transparent for end user - solution may 
 enforce certain workflow of interacting with it and/or introduce new concepts 
 (like akka messages instead of CQL and binary protocol).
 So far after reading a lot and thinking even more I have two ideas that I'd 
 like to share.
 h3. Ideas are (with brief description, details will come later):
 h4. Idea 1: Event streaming 
 - Imagine that every modiifcation is represented by an _Event_. 
 - Imagine you can group these events into _Event Groups_. 
 - Imagine that such groups are time series data
 - Imagine you can read such groups as a stream of events (think reactive 
 stream)
 Idea is that: you don't need to lock data when you are sure there is no one 
 else to compete with.
 There is 1 guy called _Cursor_ that reads Event Stream and executes Event 
 Groups one by one advacing its position on the stream when Event Group has 
 been executed.
 Seems like a system where you have only 1 transaction at any given time, but 
 there are many areas to optimize that and to allow more than that. However 
 I'll stop here.
 h4. Idea 2: Locking data 
 - uses additional tables to acquire read/write locks
 - seperate tables to append modifications - as in Rollback: Appending to 
 seperate table.  
 - supports different isolation levels. 
 - more traditional approach, kind of translation of traditional locking to 
 cassandra reality.
 ---
 Common part of two is approach to doing commit and rollback.
 h3. Doing Rollback and commit
 I have two ideas for rollback. I like 2nd one more, because it is simpler and 
 potentially faster.
 h4. Rollback: Query rewriting
 It modifies original data, but before that it copies original data so that 
 state can be restored. Then when failure is detected, modification query can 
 be rewritten so that original data can be restored.
 Query rewriting seems like a complex functionality. I tried few simple and a 
 little bit more complex statements and in general for basic stuff algorithm 
 is not that complicated, but to support everything CQL has to offer it might 
 be hard. 
 Still such transactional system might have some restrictions over CQL 
 statements used, because first of all when someone wants to have these 
 transactions they already want something non standard.
 I will skip details of that approach for now.
 h4. Rollback: Appending to seperate table.
 Image we have table A that we want to have transactions on.
 This requires another table A_tx which has same schema as A, but has *1 more 
 clustering column* and few new columns. A_tx will be additionally clustered 
 by 

[jira] [Updated] (CASSANDRA-10003) RowAndDeletionMergeIteratorTest is failing on 3.0 and trunk

2015-08-07 Thread Daniel Chia (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Chia updated CASSANDRA-10003:

Attachment: 0002-trunk-10003.patch

Actually, looking more carefully at the contract of 
RowAndDeletionMergeIterator, it seems like it's intended to not return shadowed 
data, and the flag there is really to optimize for cases where we know data 
cannot be shadowed.

Updated the test to test this behavior better, and also added one more test 
case to verify non-shadowed rows inside a range tombstone is correctly returned 
still.

 RowAndDeletionMergeIteratorTest is failing on 3.0 and trunk
 ---

 Key: CASSANDRA-10003
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10003
 Project: Cassandra
  Issue Type: Bug
Reporter: Yuki Morishita
Assignee: Daniel Chia
Priority: Blocker
 Fix For: 3.0.x

 Attachments: 0001-trunk-10003.patch, 0002-trunk-10003.patch


 RowAndDeletionMergeIteratorTest is falling with the followings:
 {code}
 [junit] Testcase: 
 testWithAtMostRangeTombstone(org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest):
FAILED
 [junit] expected:ROW but was:RANGE_TOMBSTONE_MARKER
 [junit] junit.framework.AssertionFailedError: expected:ROW but 
 was:RANGE_TOMBSTONE_MARKER
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.assertRow(RowAndDeletionMergeIteratorTest.java:328)
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.testWithAtMostRangeTombstone(RowAndDeletionMergeIteratorTest.java:134)
 [junit]
 [junit]
 [junit] Testcase: 
 testWithGreaterThanRangeTombstone(org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest):
   FAILED
 [junit] expected:ROW but was:RANGE_TOMBSTONE_MARKER
 [junit] junit.framework.AssertionFailedError: expected:ROW but 
 was:RANGE_TOMBSTONE_MARKER
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.assertRow(RowAndDeletionMergeIteratorTest.java:328)
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.testWithGreaterThanRangeTombstone(RowAndDeletionMergeIteratorTest.java:179)
 [junit]
 [junit]
 [junit] Testcase: 
 testWithAtMostAndGreaterThanRangeTombstone(org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest):
  FAILED
 [junit] expected:ROW but was:RANGE_TOMBSTONE_MARKER
 [junit] junit.framework.AssertionFailedError: expected:ROW but 
 was:RANGE_TOMBSTONE_MARKER
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.assertRow(RowAndDeletionMergeIteratorTest.java:328)
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.testWithAtMostAndGreaterThanRangeTombstone(RowAndDeletionMergeIteratorTest.java:207)
 [junit]
 [junit]
 [junit] Testcase: 
 testWithIncludingEndExcludingStartMarker(org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest):
FAILED
 [junit] expected:ROW but was:RANGE_TOMBSTONE_MARKER
 [junit] junit.framework.AssertionFailedError: expected:ROW but 
 was:RANGE_TOMBSTONE_MARKER
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.assertRow(RowAndDeletionMergeIteratorTest.java:328)
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.testWithIncludingEndExcludingStartMarker(RowAndDeletionMergeIteratorTest.java:257)
 [junit]
 [junit]
 [junit] Testcase: 
 testWithExcludingEndIncludingStartMarker(org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest):
FAILED
 [junit] expected:ROW but was:RANGE_TOMBSTONE_MARKER
 [junit] junit.framework.AssertionFailedError: expected:ROW but 
 was:RANGE_TOMBSTONE_MARKER
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.assertRow(RowAndDeletionMergeIteratorTest.java:328)
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.testWithExcludingEndIncludingStartMarker(RowAndDeletionMergeIteratorTest.java:297)
 {code}
 cassci started to show the failure recently, but I can go back as far as 
 {{2457599427d361314dce4833abeb5cd4915d0b06}} to see the failure on local.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8630) Faster sequential IO (on compaction, streaming, etc)

2015-08-07 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14661541#comment-14661541
 ] 

Stefania commented on CASSANDRA-8630:
-

About byte ordering, it seems OHC insists on native byte ordering, which is 
little-endian on linux x86_64. Not a big problem, we can force the ordering to 
big-endian in the serializers.

However, I think this means we always pay the price of swapping bytes when 
using direct byte buffers. Here is the implementation of {{getInt()}} in 
DirectByteBuffer.java:

{code}
private int getInt(long a) {
if (unaligned) {
int x = unsafe.getInt(a);
return (nativeByteOrder ? x : Bits.swap(x));
}
return Bits.getInt(a, bigEndian);
}
{code}

Forcing byte ordering to big-endian doesn't mean {{nativeByteOrder}] becomes 
true:

{code}
public final ByteBuffer order(ByteOrder bo) {
bigEndian = (bo == ByteOrder.BIG_ENDIAN);
nativeByteOrder =
(bigEndian == (Bits.byteOrder() == ByteOrder.BIG_ENDIAN));
return this;
}
{code}

where {{Bits.byteOrder()}} return the platform endianess. 

So wouldn't we be better off forcing native byte ordering rather than 
big-endian?


 Faster sequential IO (on compaction, streaming, etc)
 

 Key: CASSANDRA-8630
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8630
 Project: Cassandra
  Issue Type: Improvement
  Components: Core, Tools
Reporter: Oleg Anastasyev
Assignee: Stefania
  Labels: compaction, performance
 Fix For: 3.x

 Attachments: 8630-FasterSequencialReadsAndWrites.txt, cpu_load.png, 
 flight_recorder_001_files.tar.gz


 When node is doing a lot of sequencial IO (streaming, compacting, etc) a lot 
 of CPU is lost in calls to RAF's int read() and DataOutputStream's write(int).
 This is because default implementations of readShort,readLong, etc as well as 
 their matching write* are implemented with numerous calls of byte by byte 
 read and write. 
 This makes a lot of syscalls as well.
 A quick microbench shows than just reimplementation of these methods in 
 either way gives 8x speed increase.
 A patch attached implements RandomAccessReader.readType and 
 SequencialWriter.writeType methods in more efficient way.
 I also eliminated some extra byte copies in CompositeType.split and 
 ColumnNameHelper.maxComponents, which were on my profiler's hotspot method 
 list during tests.
 A stress tests on my laptop show that this patch makes compaction 25-30% 
 faster  on uncompressed sstables and 15% faster for compressed ones.
 A deployment to production shows much less CPU load for compaction. 
 (I attached a cpu load graph from one of our production, orange is niced CPU 
 load - i.e. compaction; yellow is user - i.e. not compaction related tasks)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8630) Faster sequential IO (on compaction, streaming, etc)

2015-08-07 Thread Benedict (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14661560#comment-14661560
 ] 

Benedict commented on CASSANDRA-8630:
-

It would be better, from that perspective to use native order. However:

# Swapping byte order is a single cycle instruction, so is probably not a big 
deal, given how inefficient we are otherwise right now
# We would have to require that users have nodes all sharing the same 
architecture (or at least endianness), which may or may not be limiting. Or we 
could pick the common platform endianness and go little endian
# We would have to support both at least during upgrade

All told I think it's easier to stick with big endian format as is Java 
standard and standard across our codebase. But I'm not opposed to the 
alternative of providing a patch to support both, followed by an upgrade period 
and dropping support for big endian in 4.0



 Faster sequential IO (on compaction, streaming, etc)
 

 Key: CASSANDRA-8630
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8630
 Project: Cassandra
  Issue Type: Improvement
  Components: Core, Tools
Reporter: Oleg Anastasyev
Assignee: Stefania
  Labels: compaction, performance
 Fix For: 3.x

 Attachments: 8630-FasterSequencialReadsAndWrites.txt, cpu_load.png, 
 flight_recorder_001_files.tar.gz


 When node is doing a lot of sequencial IO (streaming, compacting, etc) a lot 
 of CPU is lost in calls to RAF's int read() and DataOutputStream's write(int).
 This is because default implementations of readShort,readLong, etc as well as 
 their matching write* are implemented with numerous calls of byte by byte 
 read and write. 
 This makes a lot of syscalls as well.
 A quick microbench shows than just reimplementation of these methods in 
 either way gives 8x speed increase.
 A patch attached implements RandomAccessReader.readType and 
 SequencialWriter.writeType methods in more efficient way.
 I also eliminated some extra byte copies in CompositeType.split and 
 ColumnNameHelper.maxComponents, which were on my profiler's hotspot method 
 list during tests.
 A stress tests on my laptop show that this patch makes compaction 25-30% 
 faster  on uncompressed sstables and 15% faster for compressed ones.
 A deployment to production shows much less CPU load for compaction. 
 (I attached a cpu load graph from one of our production, orange is niced CPU 
 load - i.e. compaction; yellow is user - i.e. not compaction related tasks)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8630) Faster sequential IO (on compaction, streaming, etc)

2015-08-07 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14661615#comment-14661615
 ] 

Robert Stupp commented on CASSANDRA-8630:
-

Here you go :)
ohc 0.4.1 has Java byte order for everything user-facing.
You can download the jars 
[here|https://oss.sonatype.org/content/repositories/releases/org/caffinitas/ohc/ohc-core/0.4.1/]
 and 
[here|https://oss.sonatype.org/content/repositories/releases/org/caffinitas/ohc/ohc-core-j8/0.4.1/]
  (Maven central needs some time to replicate and index).

 Faster sequential IO (on compaction, streaming, etc)
 

 Key: CASSANDRA-8630
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8630
 Project: Cassandra
  Issue Type: Improvement
  Components: Core, Tools
Reporter: Oleg Anastasyev
Assignee: Stefania
  Labels: compaction, performance
 Fix For: 3.x

 Attachments: 8630-FasterSequencialReadsAndWrites.txt, cpu_load.png, 
 flight_recorder_001_files.tar.gz


 When node is doing a lot of sequencial IO (streaming, compacting, etc) a lot 
 of CPU is lost in calls to RAF's int read() and DataOutputStream's write(int).
 This is because default implementations of readShort,readLong, etc as well as 
 their matching write* are implemented with numerous calls of byte by byte 
 read and write. 
 This makes a lot of syscalls as well.
 A quick microbench shows than just reimplementation of these methods in 
 either way gives 8x speed increase.
 A patch attached implements RandomAccessReader.readType and 
 SequencialWriter.writeType methods in more efficient way.
 I also eliminated some extra byte copies in CompositeType.split and 
 ColumnNameHelper.maxComponents, which were on my profiler's hotspot method 
 list during tests.
 A stress tests on my laptop show that this patch makes compaction 25-30% 
 faster  on uncompressed sstables and 15% faster for compressed ones.
 A deployment to production shows much less CPU load for compaction. 
 (I attached a cpu load graph from one of our production, orange is niced CPU 
 load - i.e. compaction; yellow is user - i.e. not compaction related tasks)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8630) Faster sequential IO (on compaction, streaming, etc)

2015-08-07 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14661567#comment-14661567
 ] 

Robert Stupp commented on CASSANDRA-8630:
-

I'm also +1 on option 1 (stick with big endian). ByteBuffer intrinsics are 
quite good nowadays. Providing an upgrade period allowing little and big endian 
might bring more problems and hard to detect bugs than it buys.

Do you need a patched OHC version?

 Faster sequential IO (on compaction, streaming, etc)
 

 Key: CASSANDRA-8630
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8630
 Project: Cassandra
  Issue Type: Improvement
  Components: Core, Tools
Reporter: Oleg Anastasyev
Assignee: Stefania
  Labels: compaction, performance
 Fix For: 3.x

 Attachments: 8630-FasterSequencialReadsAndWrites.txt, cpu_load.png, 
 flight_recorder_001_files.tar.gz


 When node is doing a lot of sequencial IO (streaming, compacting, etc) a lot 
 of CPU is lost in calls to RAF's int read() and DataOutputStream's write(int).
 This is because default implementations of readShort,readLong, etc as well as 
 their matching write* are implemented with numerous calls of byte by byte 
 read and write. 
 This makes a lot of syscalls as well.
 A quick microbench shows than just reimplementation of these methods in 
 either way gives 8x speed increase.
 A patch attached implements RandomAccessReader.readType and 
 SequencialWriter.writeType methods in more efficient way.
 I also eliminated some extra byte copies in CompositeType.split and 
 ColumnNameHelper.maxComponents, which were on my profiler's hotspot method 
 list during tests.
 A stress tests on my laptop show that this patch makes compaction 25-30% 
 faster  on uncompressed sstables and 15% faster for compressed ones.
 A deployment to production shows much less CPU load for compaction. 
 (I attached a cpu load graph from one of our production, orange is niced CPU 
 load - i.e. compaction; yellow is user - i.e. not compaction related tasks)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8630) Faster sequential IO (on compaction, streaming, etc)

2015-08-07 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14661625#comment-14661625
 ] 

Stefania commented on CASSANDRA-8630:
-

Thank you! :)

 Faster sequential IO (on compaction, streaming, etc)
 

 Key: CASSANDRA-8630
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8630
 Project: Cassandra
  Issue Type: Improvement
  Components: Core, Tools
Reporter: Oleg Anastasyev
Assignee: Stefania
  Labels: compaction, performance
 Fix For: 3.x

 Attachments: 8630-FasterSequencialReadsAndWrites.txt, cpu_load.png, 
 flight_recorder_001_files.tar.gz


 When node is doing a lot of sequencial IO (streaming, compacting, etc) a lot 
 of CPU is lost in calls to RAF's int read() and DataOutputStream's write(int).
 This is because default implementations of readShort,readLong, etc as well as 
 their matching write* are implemented with numerous calls of byte by byte 
 read and write. 
 This makes a lot of syscalls as well.
 A quick microbench shows than just reimplementation of these methods in 
 either way gives 8x speed increase.
 A patch attached implements RandomAccessReader.readType and 
 SequencialWriter.writeType methods in more efficient way.
 I also eliminated some extra byte copies in CompositeType.split and 
 ColumnNameHelper.maxComponents, which were on my profiler's hotspot method 
 list during tests.
 A stress tests on my laptop show that this patch makes compaction 25-30% 
 faster  on uncompressed sstables and 15% faster for compressed ones.
 A deployment to production shows much less CPU load for compaction. 
 (I attached a cpu load graph from one of our production, orange is niced CPU 
 load - i.e. compaction; yellow is user - i.e. not compaction related tasks)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8630) Faster sequential IO (on compaction, streaming, etc)

2015-08-07 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14661569#comment-14661569
 ] 

Stefania commented on CASSANDRA-8630:
-

I see, let's stick to big-endian for now. Maybe I'll do a little benchmark to 
compare if I have some time.

Robert, can you provide the configuration switch if it isn't too much trouble? 
At the moment I change the ordering in the serializers. It works but it's not 
very nice.

 Faster sequential IO (on compaction, streaming, etc)
 

 Key: CASSANDRA-8630
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8630
 Project: Cassandra
  Issue Type: Improvement
  Components: Core, Tools
Reporter: Oleg Anastasyev
Assignee: Stefania
  Labels: compaction, performance
 Fix For: 3.x

 Attachments: 8630-FasterSequencialReadsAndWrites.txt, cpu_load.png, 
 flight_recorder_001_files.tar.gz


 When node is doing a lot of sequencial IO (streaming, compacting, etc) a lot 
 of CPU is lost in calls to RAF's int read() and DataOutputStream's write(int).
 This is because default implementations of readShort,readLong, etc as well as 
 their matching write* are implemented with numerous calls of byte by byte 
 read and write. 
 This makes a lot of syscalls as well.
 A quick microbench shows than just reimplementation of these methods in 
 either way gives 8x speed increase.
 A patch attached implements RandomAccessReader.readType and 
 SequencialWriter.writeType methods in more efficient way.
 I also eliminated some extra byte copies in CompositeType.split and 
 ColumnNameHelper.maxComponents, which were on my profiler's hotspot method 
 list during tests.
 A stress tests on my laptop show that this patch makes compaction 25-30% 
 faster  on uncompressed sstables and 15% faster for compressed ones.
 A deployment to production shows much less CPU load for compaction. 
 (I attached a cpu load graph from one of our production, orange is niced CPU 
 load - i.e. compaction; yellow is user - i.e. not compaction related tasks)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9960) UDTs still visible after drop/recreate keyspace

2015-08-07 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14661607#comment-14661607
 ] 

Robert Stupp commented on CASSANDRA-9960:
-

[~shinigami], I tried to reproduce it using a unit test. But my test passes 
against 2.2.0. Do you have some more information? Any changes to cassandra.yaml?

{code:java}
@Test
public void testRecreateKs() throws Throwable
{
// note: keyspace already exists
for (int i = 0; i  100; i++)
{
execute(CREATE TYPE  + KEYSPACE_PER_TEST + .foo (a int, b text, 
c inet, d maptext,int));
for (int t = 0; t  10; t++)
{
execute(CREATE TABLE  + KEYSPACE_PER_TEST + .tab + t +  
(pk int PRIMARY KEY, udt frozenfoo, val text));
for (int row = 0; row  100; row++)
{
execute(INSERT INTO  + KEYSPACE_PER_TEST + .tab + t +  
(pk, val) VALUES (?,?), row, Integer.toBinaryString(row));
}
}

// assert UDT is in system table
assertRows(execute(SELECT keyspace_name, type_name FROM  + 
SystemKeyspace.NAME + '.' + LegacySchemaTables.USERTYPES),
   row(KEYSPACE_PER_TEST, foo));
assertRows(execute(SELECT keyspace_name, type_name FROM  + 
SystemKeyspace.NAME + '.' + LegacySchemaTables.USERTYPES +
WHERE keyspace_name=' + KEYSPACE_PER_TEST + 
' AND type_name='foo'),
   row(KEYSPACE_PER_TEST, foo));

// unconditional DROP + CREATE
dropPerTestKeyspace(false);
createPerTestKeyspace(false);

// assert UDT not in system table
assertRows(execute(SELECT keyspace_name, type_name FROM  + 
SystemKeyspace.NAME + '.' + LegacySchemaTables.USERTYPES));
assertRows(execute(SELECT keyspace_name, type_name FROM  + 
SystemKeyspace.NAME + '.' + LegacySchemaTables.USERTYPES +
WHERE keyspace_name=' + KEYSPACE_PER_TEST + 
' AND type_name='foo'));
}
}
{code}

 UDTs still visible after drop/recreate keyspace
 ---

 Key: CASSANDRA-9960
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9960
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jaroslav Kamenik
Assignee: Robert Stupp
Priority: Critical
 Fix For: 2.2.x


 When deploying my app from the scratch I run sequence - drop keyspaces, 
 create keyspaces, create UDTs, create tables, generate lots of data... After 
 few cycles, randomly, cassandra ends in state, where I cannot see anything in 
 table system.schema_usertypes, when I select all rows, but queries with 
 specified keyspace_name and type_name return old values. Usually it helps to 
 restart C* and old data disapear, sometimes it needs to delete all C* data. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9960) UDTs still visible after drop/recreate keyspace

2015-08-07 Thread Jaroslav Kamenik (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14661624#comment-14661624
 ] 

Jaroslav Kamenik commented on CASSANDRA-9960:
-

I have changed cluster name and paths to data dirs only. 

The issue doesnt appear imediatelly, but after some time, few release cycles, 
sometimes hours, sometimes days.. I have around 30 types and 150 tables and 
push few GBs of data during release, with hundreds of threads parallely.

 UDTs still visible after drop/recreate keyspace
 ---

 Key: CASSANDRA-9960
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9960
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Jaroslav Kamenik
Assignee: Robert Stupp
Priority: Critical
 Fix For: 2.2.x


 When deploying my app from the scratch I run sequence - drop keyspaces, 
 create keyspaces, create UDTs, create tables, generate lots of data... After 
 few cycles, randomly, cassandra ends in state, where I cannot see anything in 
 table system.schema_usertypes, when I select all rows, but queries with 
 specified keyspace_name and type_name return old values. Usually it helps to 
 restart C* and old data disapear, sometimes it needs to delete all C* data. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9129) HintedHandoff in pending state forever after upgrading to 2.0.14 from 2.0.11 and 2.0.12

2015-08-07 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14661897#comment-14661897
 ] 

Aleksey Yeschenko commented on CASSANDRA-9129:
--

Remove {{JMXEnabledScheduledThreadPoolExecutor}} on commit, otherwise LGTM.

 HintedHandoff in pending state forever after upgrading to 2.0.14 from 2.0.11 
 and 2.0.12
 ---

 Key: CASSANDRA-9129
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9129
 Project: Cassandra
  Issue Type: Bug
 Environment: Ubuntu 12.04.5 LTS
 AWS (m3.xlarge)
 15G RAM
 4 core Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz
 Cassandra 2.0.14
Reporter: Russ Lavoie
Assignee: Sam Tunnicliffe
 Fix For: 2.0.x

 Attachments: 9129-2.0.txt


 Upgrading from Cassandra 2.0.11 or 2.0.12 to 2.0.14 I am seeing a pending 
 hinted hand off that never clears.  New hinted hand offs that go into pending 
 waiting for a node to come up clear as expected.  But 1 always remains.
 I through the following steps.
 1) stop cassandra
 2) Upgrade cassandra to 2.0.14
 3) Start cassandra
 4) nodetool tpstats
 There are no errors in the logs, to help with this issue.  I ran a few 
 nodetool commands to get some data and pasted them below:
 Below is what is shown after running nodetool status on each node in the ring
 {code}Status=Up/Down
 |/ State=Normal/Leaving/Joining/Moving
 --  Address   Load   Tokens  Owns   Host ID   Rack
 UN  NODE1  279.8 MB   256 34.9%  HOSTID   rack1
 UN  NODE2  279.79 MB  256 33.0%  HOSTID   rack1
 UN  NODE3  279.87 MB  256 32.1%  HOSTID   rack1
 {code}
 Below is what is shown after running nodetool tpstats on each node in the 
 ring showing a single HintedHandoff in pending status that never clears
 {code}
 Pool NameActive   Pending  Completed   Blocked  All 
 time blocked
 ReadStage 0 0  14550 0
  0
 RequestResponseStage  0 0 113040 0
  0
 MutationStage 0 0 168873 0
  0
 ReadRepairStage   0 0   1147 0
  0
 ReplicateOnWriteStage 0 0  0 0
  0
 GossipStage   0 0 232112 0
  0
 CacheCleanupExecutor  0 0  0 0
  0
 MigrationStage0 0  0 0
  0
 MemoryMeter   0 0  6 0
  0
 FlushWriter   0 0 38 0
  0
 ValidationExecutor0 0  0 0
  0
 InternalResponseStage 0 0  0 0
  0
 AntiEntropyStage  0 0  0 0
  0
 MemtablePostFlusher   0 0   1333 0
  0
 MiscStage 0 0  0 0
  0
 PendingRangeCalculator0 0  6 0
  0
 CompactionExecutor0 0178 0
  0
 commitlog_archiver0 0  0 0
  0
 HintedHandoff 0 1133 0
  0
 Message type   Dropped
 RANGE_SLICE  0
 READ_REPAIR  0
 PAGED_RANGE  0
 BINARY   0
 READ 0
 MUTATION 0
 _TRACE   0
 REQUEST_RESPONSE 0
 COUNTER_MUTATION 0
 {code}
 Below is what is shown after running nodetool cfstats system.hints on all 3 
 nodes.
 {code}
 Keyspace: system
   Read Count: 0
   Read Latency: NaN ms.
   Write Count: 0
   Write Latency: NaN ms.
   Pending Tasks: 0
   Table: hints
   SSTable count: 0
   Space used (live), bytes: 0
   Space used (total), bytes: 0
   Off heap memory used (total), bytes: 0
   SSTable Compression Ratio: 0.0
   Number of keys (estimate): 0
   Memtable cell count: 0
   Memtable data size, bytes: 0
   Memtable switch count: 0
   Local read count: 0
   Local read latency: 0.000 ms
   Local write count: 0
   Local write latency: 0.000 ms
   Pending 

[jira] [Commented] (CASSANDRA-9999) Improve usage of HashMap and HashSet in NetworkTopologyStrategy

2015-08-07 Thread Tommy Stendahl (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14661908#comment-14661908
 ] 

Tommy Stendahl commented on CASSANDRA-:
---

I have attached a patch that changes how the initial size of the HashMaps and 
HashSets are calculated. It should merge to all branches without conflict.

 Improve usage of HashMap and HashSet in NetworkTopologyStrategy
 ---

 Key: CASSANDRA-
 URL: https://issues.apache.org/jira/browse/CASSANDRA-
 Project: Cassandra
  Issue Type: Improvement
Reporter: Tommy Stendahl
Assignee: Tommy Stendahl
Priority: Minor
 Attachments: .txt


 In NetworkTopologyStrategy there are HashMaps and HashSets created with a 
 specified initial size. I assume that this is done avoid having them re sized 
 then when objects are inserted to them. Unfortunately they are created with 
 the size of the expected number of object that will be inserted into them. 
 Since the default load factor is 0.75 this will almost guarantee that 
 HashMaps and HashSets are re sized, to avoid re size the initial size should 
 be set to no of objects / 0.75 +1.
 Since this is done every time calculateNaturalEndpoints() is called this 
 might have some performance impact.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10003) RowAndDeletionMergeIteratorTest is failing on 3.0 and trunk

2015-08-07 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-10003:
---
Reviewer: Benjamin Lerer

[~blerer] Can you review?

 RowAndDeletionMergeIteratorTest is failing on 3.0 and trunk
 ---

 Key: CASSANDRA-10003
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10003
 Project: Cassandra
  Issue Type: Bug
Reporter: Yuki Morishita
Assignee: Daniel Chia
Priority: Blocker
 Fix For: 3.0.x

 Attachments: 0001-trunk-10003.patch, 0002-trunk-10003.patch


 RowAndDeletionMergeIteratorTest is falling with the followings:
 {code}
 [junit] Testcase: 
 testWithAtMostRangeTombstone(org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest):
FAILED
 [junit] expected:ROW but was:RANGE_TOMBSTONE_MARKER
 [junit] junit.framework.AssertionFailedError: expected:ROW but 
 was:RANGE_TOMBSTONE_MARKER
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.assertRow(RowAndDeletionMergeIteratorTest.java:328)
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.testWithAtMostRangeTombstone(RowAndDeletionMergeIteratorTest.java:134)
 [junit]
 [junit]
 [junit] Testcase: 
 testWithGreaterThanRangeTombstone(org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest):
   FAILED
 [junit] expected:ROW but was:RANGE_TOMBSTONE_MARKER
 [junit] junit.framework.AssertionFailedError: expected:ROW but 
 was:RANGE_TOMBSTONE_MARKER
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.assertRow(RowAndDeletionMergeIteratorTest.java:328)
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.testWithGreaterThanRangeTombstone(RowAndDeletionMergeIteratorTest.java:179)
 [junit]
 [junit]
 [junit] Testcase: 
 testWithAtMostAndGreaterThanRangeTombstone(org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest):
  FAILED
 [junit] expected:ROW but was:RANGE_TOMBSTONE_MARKER
 [junit] junit.framework.AssertionFailedError: expected:ROW but 
 was:RANGE_TOMBSTONE_MARKER
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.assertRow(RowAndDeletionMergeIteratorTest.java:328)
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.testWithAtMostAndGreaterThanRangeTombstone(RowAndDeletionMergeIteratorTest.java:207)
 [junit]
 [junit]
 [junit] Testcase: 
 testWithIncludingEndExcludingStartMarker(org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest):
FAILED
 [junit] expected:ROW but was:RANGE_TOMBSTONE_MARKER
 [junit] junit.framework.AssertionFailedError: expected:ROW but 
 was:RANGE_TOMBSTONE_MARKER
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.assertRow(RowAndDeletionMergeIteratorTest.java:328)
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.testWithIncludingEndExcludingStartMarker(RowAndDeletionMergeIteratorTest.java:257)
 [junit]
 [junit]
 [junit] Testcase: 
 testWithExcludingEndIncludingStartMarker(org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest):
FAILED
 [junit] expected:ROW but was:RANGE_TOMBSTONE_MARKER
 [junit] junit.framework.AssertionFailedError: expected:ROW but 
 was:RANGE_TOMBSTONE_MARKER
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.assertRow(RowAndDeletionMergeIteratorTest.java:328)
 [junit] at 
 org.apache.cassandra.db.rows.RowAndDeletionMergeIteratorTest.testWithExcludingEndIncludingStartMarker(RowAndDeletionMergeIteratorTest.java:297)
 {code}
 cassci started to show the failure recently, but I can go back as far as 
 {{2457599427d361314dce4833abeb5cd4915d0b06}} to see the failure on local.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-10015) Create tool to debug why expired sstables are not getting dropped

2015-08-07 Thread Marcus Eriksson (JIRA)
Marcus Eriksson created CASSANDRA-10015:
---

 Summary: Create tool to debug why expired sstables are not getting 
dropped
 Key: CASSANDRA-10015
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10015
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
 Fix For: 3.x, 2.1.x, 2.0.x, 2.2.x


Sometimes fully expired sstables are not getting dropped, and it is a real pain 
to manually find out why.

A tool that outputs which sstable blocks (by having older data than the newest 
tombstone in an expired sstable) expired ones would save a lot of time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-10014) Deletions using clustering keys not reflected in MV

2015-08-07 Thread Stefan Podkowinski (JIRA)
Stefan Podkowinski created CASSANDRA-10014:
--

 Summary: Deletions using clustering keys not reflected in MV
 Key: CASSANDRA-10014
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10014
 Project: Cassandra
  Issue Type: Bug
Reporter: Stefan Podkowinski


I wrote a test to reproduce an 
[issue|http://stackoverflow.com/questions/31810841/cassandra-materialized-view-shows-stale-data/31860487]
 reported on SO and turns out this is easily reproducible. There seems to be a 
bug preventing deletes to be propagated to MVs in case a clustering key is 
used. See 
[here|https://github.com/spodkowinski/cassandra/commit/1c064523c8d8dbee30d46a03a0f58d3be97800dc]
 for test case (testClusteringKeyTombstone should fail).

It seems {{MaterializedView.updateAffectsView()}} will not consider the delete 
relevant for the view as {{partition.deletionInfo().isLive()}} will be true 
during the test. In other test cases isLive will return false, which seems to 
be the actual problem here. I'm not even sure the root cause is MV specific, 
but wasn't able to dig much deeper as I'm not familiar with the slightly 
confusing semantics around DeletionInfo.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10001) Bug in merging of collections

2015-08-07 Thread T Jake Luciani (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

T Jake Luciani updated CASSANDRA-10001:
---
Summary: Bug in merging of collections  (was: Bug in encoding of sstables)

 Bug in merging of collections
 -

 Key: CASSANDRA-10001
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10001
 Project: Cassandra
  Issue Type: Bug
Reporter: T Jake Luciani
Assignee: Stefania
Priority: Blocker
 Fix For: 3.0 beta 1


 Fixing the compaction dtest I noticed we aren't encoding map data correctly 
 in sstables.
 The following code fails from newly committed {{ 
 compaction_test.py:TestCompaction_with_SizeTieredCompactionStrategy.large_compaction_warning_test}}
 {code}
  session.execute(CREATE TABLE large(userid text PRIMARY KEY, properties 
 mapint, text) with compression = {})
 for i in range(200):  # ensures partition size larger than 
 compaction_large_partition_warning_threshold_mb   
   
   
 session.execute(UPDATE ks.large SET properties[%i] = '%s' WHERE 
 userid = 'user' % (i, get_random_word(strlen)))
 ret = session.execute(SELECT properties from ks.large where userid = 
 'user')
 assert len(ret) == 1
   self.assertEqual(200, len(ret[0][0].keys()))
 {code}
 The last assert is failing with only 91 keys.  The large values are causing 
 flushes vs staying in the memtable so the issue is somewhere in the 
 serialization of collections in sstables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9999) Improve usage of HashMap and HashSet in NetworkTopologyStrategy

2015-08-07 Thread Tommy Stendahl (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommy Stendahl updated CASSANDRA-:
--
Attachment: .txt

 Improve usage of HashMap and HashSet in NetworkTopologyStrategy
 ---

 Key: CASSANDRA-
 URL: https://issues.apache.org/jira/browse/CASSANDRA-
 Project: Cassandra
  Issue Type: Improvement
Reporter: Tommy Stendahl
Assignee: Tommy Stendahl
Priority: Minor
 Attachments: .txt


 In NetworkTopologyStrategy there are HashMaps and HashSets created with a 
 specified initial size. I assume that this is done avoid having them re sized 
 then when objects are inserted to them. Unfortunately they are created with 
 the size of the expected number of object that will be inserted into them. 
 Since the default load factor is 0.75 this will almost guarantee that 
 HashMaps and HashSets are re sized, to avoid re size the initial size should 
 be set to no of objects / 0.75 +1.
 Since this is done every time calculateNaturalEndpoints() is called this 
 might have some performance impact.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-10016) Materialized view metrics pushes out tpstats formatting

2015-08-07 Thread Sam Tunnicliffe (JIRA)
Sam Tunnicliffe created CASSANDRA-10016:
---

 Summary: Materialized view metrics pushes out tpstats formatting 
 Key: CASSANDRA-10016
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10016
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Sam Tunnicliffe
Priority: Minor
 Fix For: 3.0.0 rc1


{noformat}
Pool NameActive   Pending  Completed   Blocked  All 
time blocked
ReadStage 0 0  3 0  
   0
MutationStage 0 0  1 0  
   0
CounterMutationStage  0 0  0 0  
   0
BatchlogMutationStage 0 0  0 0  
   0
MaterializedViewMutationStage 0 0  0 0  
   0
GossipStage   0 0  0 0  
   0
RequestResponseStage  0 0  0 0  
   0
AntiEntropyStage  0 0  0 0  
   0
MigrationStage0 0  3 0  
   0
MiscStage 0 0  0 0  
   0
InternalResponseStage 0 0  0 0  
   0
ReadRepairStage   0 0  0 0  
   0

Message type   Dropped
READ 0
RANGE_SLICE  0
_TRACE   0
BATCHLOG_MUTATION0
MUTATION 0
COUNTER_MUTATION 0
REQUEST_RESPONSE 0
PAGED_RANGE  0
READ_REPAIR  0
MATERIALIZED_VIEW_MUTATION 0
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


  1   2   3   >