[jira] [Commented] (CASSANDRA-10534) CompressionInfo not being fsynced on close

2015-11-11 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15001781#comment-15001781
 ] 

Stefania commented on CASSANDRA-10534:
--

_TOC.txt_ is only used by standalone tools; in the server we simply list the 
folder contents in the CFS constructor. I could not find where the digest file 
is read, at least not in the 2.1 code. I have not checked the other components 
yet. The index and data files are definitely synced.

We should probably sync all sstable components that we write, but should we 
fix this regression here and open a new ticket for the broader change, for 
better visibility?
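
For reference, the pattern the removed call provided is just forcing the file 
channel to disk before closing the writer. A minimal, self-contained sketch of 
that sync-before-close idea (the class and file names below are illustrative 
only, not Cassandra's actual CompressionMetadata.Writer):
{code}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

// Illustrative only: shows the fsync-before-close pattern that the removed
// getChannel().force(true) call provided, not Cassandra's actual writer.
public class SyncedComponentWriter implements AutoCloseable
{
    private final FileChannel channel;

    public SyncedComponentWriter(Path path) throws IOException
    {
        this.channel = FileChannel.open(path,
                                        StandardOpenOption.CREATE,
                                        StandardOpenOption.WRITE);
    }

    public void write(ByteBuffer metadata) throws IOException
    {
        while (metadata.hasRemaining())
            channel.write(metadata);
    }

    @Override
    public void close() throws IOException
    {
        // Flush data and file metadata to the storage device before closing;
        // without this, a hard reboot can leave a 0-byte component file.
        channel.force(true);
        channel.close();
    }

    public static void main(String[] args) throws IOException
    {
        Path path = Paths.get("CompressionInfo.db.example");
        try (SyncedComponentWriter writer = new SyncedComponentWriter(path))
        {
            writer.write(ByteBuffer.wrap(new byte[]{ 1, 2, 3 }));
        }
    }
}
{code}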

> CompressionInfo not being fsynced on close
> --
>
> Key: CASSANDRA-10534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10534
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sharvanath Pathak
>Assignee: Stefania
> Fix For: 2.1.x
>
>
> I was seeing SSTable corruption due to a CompressionInfo.db file of size 0; 
> this happened multiple times in our testing with hard node reboots. After 
> some investigation it seems this file is not being fsynced, which can 
> potentially lead to data corruption. I am working with version 2.1.9.
> I checked for fsync calls using strace and found them happening for all but 
> the following components: CompressionInfo, TOC.txt and digest.sha1. All of 
> these but the CompressionInfo seem tolerable. Also, a quick look through the 
> code did not reveal any fsync calls. Moreover, I suspect commit 
> 4e95953f29d89a441dfe06d3f0393ed7dd8586df 
> (https://github.com/apache/cassandra/commit/4e95953f29d89a441dfe06d3f0393ed7dd8586df#diff-b7e48a1398e39a936c11d0397d5d1966R344)
> caused the regression, as it removed the line
> {noformat}
>  getChannel().force(true);
> {noformat}
> from CompressionMetadata.Writer.close.
> Following is the trace I saw in system.log:
> {noformat}
> INFO  [SSTableBatchOpen:1] 2015-09-29 19:24:39,170 SSTableReader.java:478 - 
> Opening 
> /var/lib/cassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-13368
>  (79 bytes)
> ERROR [SSTableBatchOpen:1] 2015-09-29 19:24:39,177 FileUtils.java:447 - 
> Exiting forcefully due to file system exception on startup, disk failure 
> policy "stop"
> org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:131)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:85)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:79)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:72)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:168)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:752) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:703) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:491) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:387) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:534) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> [na:1.7.0_80]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> [na:1.7.0_80]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_80]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_80]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
> Caused by: java.io.EOFException: null
> at 
> java.io.DataInputStream.readUnsignedShort(DataInputStream.java:340) 
> ~[na:1.7.0_80]
> at java.io.DataInputStream.readUTF(DataInputStream.java:589) 
> ~[na:1.7.0_80]
> at java.io.DataInputStream.readUTF(DataInputStream.java:564) 
> ~[na:1.7.0_80]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:106)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> ... 14 common frames omitted
> {noformat}
> Following is the result of ls on the data directory of a corrupted SSTable 
> a

[jira] [Created] (CASSANDRA-10692) Don't remove level info when doing upgradesstables

2015-11-11 Thread Marcus Eriksson (JIRA)
Marcus Eriksson created CASSANDRA-10692:
---

 Summary: Don't remove level info when doing upgradesstables
 Key: CASSANDRA-10692
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10692
 Project: Cassandra
  Issue Type: Bug
Reporter: Marcus Eriksson
Assignee: Marcus Eriksson
 Fix For: 2.1.x, 2.2.x


It seems we blow away the level info when doing upgradesstables. Introduced in 
CASSANDRA-8004.





[jira] [Commented] (CASSANDRA-7904) Repair hangs

2015-11-11 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15001562#comment-15001562
 ] 

Michael Shuler commented on CASSANDRA-7904:
---

The cassandra-2.0 branch is no longer under development. Does this still occur 
in the latest 2.1.x release (2.1.11, as of today) or in 2.2.x (currently 2.2.3)?
(Just so you're aware, the cassandra-2.1 branch will also soon be retired from 
active development, now that 3.0.0 has been released.)

[~eanujwa], if you're keen on digging around the code, starting in at least the 
2.1 branch (starting at 2.2 would be better), seeing whether you can identify 
your numbered scenarios above, and opening a new ticket on the topic with 
2.1/2.2 as the starting point would make sense, given where active development 
in Cassandra is occurring.

> Repair hangs
> 
>
> Key: CASSANDRA-7904
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7904
> Project: Cassandra
>  Issue Type: Bug
> Environment: C* 2.0.10, ubuntu 14.04, Java HotSpot(TM) 64-Bit Server, 
> java version "1.7.0_45"
>Reporter: Duncan Sands
> Attachments: ls-172.18.68.138, ls-192.168.21.13, ls-192.168.60.134, 
> ls-192.168.60.136
>
>
> Cluster of 22 nodes spread over 4 data centres.  Not used on the weekend, so 
> repair is run on all nodes (in a staggered fashion) on the weekend.  Nodetool 
> options: -par -pr.  There is usually some overlap in the repairs: repair on 
> one node may well still be running when repair is started on the next node.  
> Repair hangs for some of the nodes almost every weekend.  It hung last 
> weekend, here are the details:
> In the whole cluster, only one node had an exception since C* was last 
> restarted.  This node is 192.168.60.136 and the exception is harmless: a 
> client disconnected abruptly.
> tpstats
>   4 nodes have a non-zero value for "active" or "pending" in 
> AntiEntropySessions.  These nodes all have Active => 1 and Pending => 1.  The 
> nodes are:
>   192.168.21.13 (data centre R)
>   192.168.60.134 (data centre A)
>   192.168.60.136 (data centre A)
>   172.18.68.138 (data centre Z)
> compactionstats:
>   No compactions.  All nodes have:
> pending tasks: 0
> Active compaction remaining time :n/a
> netstats:
>   All except one node have nothing.  One node (192.168.60.131, not one of the 
> nodes listed in the tpstats section above) has (note the Responses Pending 
> value of 1):
> Mode: NORMAL
> Not sending any streams.
> Read Repair Statistics:
> Attempted: 4233
> Mismatch (Blocking): 0
> Mismatch (Background): 243
> Pool Name      Active   Pending    Completed
> Commands          n/a         0     34785445
> Responses         n/a         1     38567167
> Repair sessions
>   I looked for repair sessions that failed to complete.  On 3 of the 4 nodes 
> mentioned in tpstats above I found that they had sent merkle tree requests 
> and got responses from all but one node.  In the log file for the node that 
> failed to respond there is no sign that it ever received the request.  On 1 
> node (172.18.68.138) it looks like responses were received from every node, 
> some streaming was done, and then... nothing.  Details:
>   Node 192.168.21.13 (data centre R):
> Sent merkle trees to /172.18.33.24, /192.168.60.140, /192.168.60.142, 
> /172.18.68.139, /172.18.68.138, /172.18.33.22, /192.168.21.13 for table 
> brokers, never got a response from /172.18.68.139.  On /172.18.68.139, just 
> before this time it sent a response for the same repair session but a 
> different table, and there is no record of it receiving a request for table 
> brokers.
>   Node 192.168.60.134 (data centre A):
> Sent merkle trees to /172.18.68.139, /172.18.68.138, /192.168.60.132, 
> /192.168.21.14, /192.168.60.134 for table swxess_outbound, never got a 
> response from /172.18.68.138.  On /172.18.68.138, just before this time it 
> sent a response for the same repair session but a different table, and there 
> is no record of it receiving a request for table swxess_outbound.
>   Node 192.168.60.136 (data centre A):
> Sent merkle trees to /192.168.60.142, /172.18.68.139, /192.168.60.136 for 
> table rollups7200, never got a response from /172.18.68.139.  This repair 
> session is never mentioned in the /172.18.68.139 log.
>   Node 172.18.68.138 (data centre Z):
> The issue here seems to be repair session 
> #a55c16e1-35eb-11e4-8e7e-51c077eaf311.  It got responses for all its merkle 
> tree requests, did some streaming, but seems to have stopped after finishing 
> with one table (rollups60).  I found it as follows: it is the only repair for 
> which there is no "session completed successfully" message in the log.
> Some log file snippets are attached.




[jira] [Created] (CASSANDRA-10691) SSTABLEUPGRADE from 2.0.x -> 2.1.x appears to remove LCS level info

2015-11-11 Thread Thom Valley (JIRA)
Thom Valley created CASSANDRA-10691:
---

 Summary: SSTABLEUPGRADE from 2.0.x -> 2.1.x appears to remove LCS 
level info
 Key: CASSANDRA-10691
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10691
 Project: Cassandra
  Issue Type: Bug
  Components: Compaction
 Environment: DSE 4.8.0
RedHat 7
Java 1.8 u60
Reporter: Thom Valley


Upgraded a POC cluster from DSE 4.6.6 to DSE 4.8.0.
The cluster operated fine under a heavy workload after the upgrade.

A later attempt to run SSTABLEUPGRADE resulted in a large percentage of 
SSTables being dropped to L0 and thousands of compactions in the backlog.







[jira] [Commented] (CASSANDRA-10485) Missing host ID on hinted handoff write

2015-11-11 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15001310#comment-15001310
 ] 

Paulo Motta commented on CASSANDRA-10485:
-

bq. If we submit a hint to an endpoint that left, when will the hint be cleaned 
up and discarded? Is there a race there?

The race window is very small: the node needs to be removed exactly between 
getting the ID from TokenMetadata and the hint actually being written by the 
HintsManager. If that happens, on 2.1 and 2.2 the hint will expire by TTL after 
gc_grace_seconds. On 3.0+, a hint file might be created for the removed node 
and would need to be removed manually or via nodetool truncatehints.

Fixed a minor nit and removed the {{isMemberJoining}} assertion. Tests were 
resubmitted and look OK.
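
For readers following along, a schematic of that window (the types below are 
simplified stand-ins for illustration only, not the real 
StorageProxy/TokenMetadata/HintsService APIs):
{code}
import java.net.InetAddress;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Schematic sketch of the race discussed above; these classes are simplified
// stand-ins, not Cassandra's StorageProxy / TokenMetadata / HintsService.
public class HintRaceSketch
{
    static final Map<InetAddress, UUID> endpointToHostId = new ConcurrentHashMap<>();

    static void submitHint(InetAddress target, String mutation)
    {
        // 1. Resolve the host ID from (a stand-in for) TokenMetadata.
        UUID hostId = endpointToHostId.get(target);
        if (hostId == null)
            return; // node already gone: nothing to hint

        // 2. Race window: if the node is removed right here, the hint is
        //    still written below. On 2.1/2.2 it simply expires by TTL after
        //    gc_grace_seconds; on 3.0+ a hint file for the removed node may
        //    be left behind until truncated manually or via
        //    nodetool truncatehints.
        writeHint(hostId, mutation);
    }

    static void writeHint(UUID hostId, String mutation)
    {
        System.out.println("hint stored for " + hostId + ": " + mutation);
    }

    public static void main(String[] args) throws Exception
    {
        InetAddress node = InetAddress.getByName("127.0.0.1");
        endpointToHostId.put(node, UUID.randomUUID());
        // A concurrent removal between steps 1 and 2 reproduces the window.
        submitHint(node, "INSERT ...");
    }
}
{code}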

||2.1||2.2||3.0||trunk||
|[branch|https://github.com/apache/cassandra/compare/cassandra-2.1...pauloricardomg:2.1-10485-ultimate]|[branch|https://github.com/apache/cassandra/compare/cassandra-2.2...pauloricardomg:2.2-10485-ultimate]|[branch|https://github.com/apache/cassandra/compare/cassandra-3.0...pauloricardomg:3.0-10485-ultimate]|[branch|https://github.com/apache/cassandra/compare/trunk...pauloricardomg:trunk-10485-ultimate]|
|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-2.1-10485-ultimate-testall/lastCompletedBuild/testReport/]|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-2.2-10485-ultimate-testall/lastCompletedBuild/testReport/]|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-3.0-10485-ultimate-testall/lastCompletedBuild/testReport/]|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-trunk-10485-ultimate-testall/lastCompletedBuild/testReport/]|
|[dtests|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-2.1-10485-ultimate-dtest/lastCompletedBuild/testReport/]|[dtests|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-2.2-10485-ultimate-dtest/lastCompletedBuild/testReport/]|[dtests|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-3.0-10485-ultimate-dtest/lastCompletedBuild/testReport/]|[dtests|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-trunk-10485-ultimate-dtest/lastCompletedBuild/testReport/]|

> Missing host ID on hinted handoff write
> ---
>
> Key: CASSANDRA-10485
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10485
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Paulo Motta
>Assignee: Paulo Motta
> Fix For: 2.1.x, 2.2.x, 3.0.x
>
>
> When I restart one of them, I receive the error "Missing host ID":
> {noformat}
> WARN  [SharedPool-Worker-1] 2015-10-08 13:15:33,882 
> AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-1,5,main]: {}
> java.lang.AssertionError: Missing host ID for 63.251.156.141
> at 
> org.apache.cassandra.service.StorageProxy.writeHintForMutation(StorageProxy.java:978)
>  ~[apache-cassandra-2.1.3.jar:2.1.3]
> at 
> org.apache.cassandra.service.StorageProxy$6.runMayThrow(StorageProxy.java:950)
>  ~[apache-cassandra-2.1.3.jar:2.1.3]
> at 
> org.apache.cassandra.service.StorageProxy$HintRunnable.run(StorageProxy.java:2235)
>  ~[apache-cassandra-2.1.3.jar:2.1.3]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_60]
> at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  ~[apache-cassandra-2.1.3.jar:2.1.3]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-2.1.3.jar:2.1.3]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
> {noformat}
> If I run nodetool status, the problematic node has an ID:
> {noformat}
> UN  10.10.10.12  1.3 TB 1   ?   
> 4d5c8fd2-a909-4f09-a23c-4cd6040f338a  rack3
> {noformat}





[jira] [Resolved] (CASSANDRA-10005) Streaming not enough bytes error

2015-11-11 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita resolved CASSANDRA-10005.

   Resolution: Duplicate
Fix Version/s: (was: 2.2.x)

Closing as dupe of CASSANDRA-10012.

> Streaming not enough bytes error
> 
>
> Key: CASSANDRA-10005
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10005
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Chris Moos
>Assignee: Yuki Morishita
>Priority: Minor
>  Labels: triaged
> Attachments: deadlock.txt, errors.txt
>
>
> I'm adding a new node to the cluster, I'm seeing a bunch of the errors below, 
> and the node never joins. It looks like a deadlock.
> After looking through the code, it seems IncomingFileMessage tells the 
> session to retry on exceptions (except IOException), but the 
> CompressedInputStream thread is still running when the retry happens, and 
> the deadlock ensues. It might be best to close the StreamReader (and stop 
> the thread) if an exception occurs before retrying.
> I'm not sure why I am getting this error to begin with, though; might it have 
> something to do with not being able to upgrade my SSTables after going from 
> 2.1.2 -> 2.2.0?
> {code}
> error: null
> -- StackTrace --
> java.lang.AssertionError
> at 
> org.apache.cassandra.db.lifecycle.LifecycleTransaction.checkUnused(LifecycleTransaction.java:428)
> at 
> org.apache.cassandra.db.lifecycle.LifecycleTransaction.split(LifecycleTransaction.java:408)
> at 
> org.apache.cassandra.db.compaction.CompactionManager.parallelAllSSTableOperation(CompactionManager.java:268)
> at 
> org.apache.cassandra.db.compaction.CompactionManager.performSSTableRewrite(CompactionManager.java:373)
> at 
> org.apache.cassandra.db.ColumnFamilyStore.sstablesRewrite(ColumnFamilyStore.java:1524)
> at 
> org.apache.cassandra.service.StorageService.upgradeSSTables(StorageService.java:2521)
> {code}
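
A schematic of the mitigation suggested above, closing the per-file reader 
(and whatever background thread it owns) before the retry is requested; the 
interfaces here are simplified stand-ins, not Cassandra's actual 
StreamReader/StreamSession API:
{code}
import java.io.Closeable;
import java.io.IOException;

// Schematic of the suggestion above: make sure the per-file reader (and any
// background decompression thread it owns) is closed before a retry is
// requested. FileReaderTask is a stand-in, not Cassandra's StreamReader.
public class RetryAfterCloseSketch
{
    interface FileReaderTask extends Closeable
    {
        void read() throws Exception;
    }

    static void receiveFile(FileReaderTask reader, Runnable requestRetry)
    {
        try
        {
            reader.read();
        }
        catch (IOException e)
        {
            throw new RuntimeException(e); // socket-level failure: fail the whole session
        }
        catch (Exception e)
        {
            closeQuietly(reader);   // stop the reader before retrying,
            requestRetry.run();     // otherwise the old thread can deadlock the retry
        }
    }

    static void closeQuietly(Closeable c)
    {
        try { c.close(); } catch (IOException ignored) {}
    }

    public static void main(String[] args)
    {
        FileReaderTask failingReader = new FileReaderTask()
        {
            public void read() throws Exception { throw new IllegalStateException("bad chunk"); }
            public void close() { System.out.println("reader closed before retry"); }
        };
        receiveFile(failingReader, () -> System.out.println("retry requested"));
    }
}
{code}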





[jira] [Commented] (CASSANDRA-10680) Deal with small compression chunk size better during streaming plan setup

2015-11-11 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15001251#comment-15001251
 ] 

Yuki Morishita commented on CASSANDRA-10680:


Patch is ready for review.

| 2.1 | 2.2 | 3.0.x (3.1) |
| [branch|https://github.com/yukim/cassandra/tree/10680] | [branch|https://github.com/yukim/cassandra/tree/10680-2.2] | [branch|https://github.com/yukim/cassandra/tree/10680-3.0] |
| [testall|http://cassci.datastax.com/view/Dev/view/yukim/job/yukim-10680-testall/lastCompletedBuild/testReport/] | [testall|http://cassci.datastax.com/view/Dev/view/yukim/job/yukim-10680-2.2-testall/lastCompletedBuild/testReport/] | [testall|http://cassci.datastax.com/view/Dev/view/yukim/job/yukim-10680-3.0-testall/lastCompletedBuild/testReport/] |
| [dtest|http://cassci.datastax.com/view/Dev/view/yukim/job/yukim-10680-dtest/lastCompletedBuild/testReport/] | [dtest|http://cassci.datastax.com/view/Dev/view/yukim/job/yukim-10680-2.2-dtest/lastCompletedBuild/testReport/] | [dtest|http://cassci.datastax.com/view/Dev/view/yukim/job/yukim-10680-2.2-dtest/lastCompletedBuild/testReport/] |

> Deal with small compression chunk size better during streaming plan setup
> -
>
> Key: CASSANDRA-10680
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10680
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jeff Jirsa
>Assignee: Yuki Morishita
> Fix For: 2.1.x
>
>
> For clusters using a small compression chunk size and terabytes of data, the 
> streaming plan calculations will instantiate hundreds of millions of 
> CompressionMetadata$Chunk objects, which creates unreasonable amounts of 
> heap pressure. Rather than instantiating all of those at once, streaming 
> should instantiate only as many as needed for a single file per table at a 
> time.
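
A rough sketch of that approach, building the chunk list for one file section 
at a time instead of materializing every chunk of the plan up front 
({{Chunk}} below is a stand-in for illustration, not CompressionMetadata.Chunk):
{code}
import java.util.ArrayList;
import java.util.List;

// Sketch only: build compression chunks per file section on demand rather
// than materializing every chunk of the streaming plan at once.
public class PerFileChunkSketch
{
    static final class Chunk
    {
        final long offset;
        final int length;
        Chunk(long offset, int length) { this.offset = offset; this.length = length; }
    }

    // Chunks covering a single section of one sstable, created when that
    // file is about to be streamed and discarded afterwards.
    static List<Chunk> chunksFor(long sectionStart, long sectionEnd, int chunkLength)
    {
        List<Chunk> chunks = new ArrayList<>();
        long firstChunkOffset = sectionStart - (sectionStart % chunkLength);
        for (long offset = firstChunkOffset; offset < sectionEnd; offset += chunkLength)
            chunks.add(new Chunk(offset, chunkLength));
        return chunks;
    }

    public static void main(String[] args)
    {
        // Peak memory is bounded by the largest single file section,
        // not by the total size of the transfer.
        List<Chunk> chunks = chunksFor(0, 1 << 20, 64 * 1024);
        System.out.println(chunks.size() + " chunks for this section");
    }
}
{code}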





[07/15] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-11-11 Thread yukim
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e4875535
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e4875535
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e4875535

Branch: refs/heads/cassandra-2.2
Commit: e487553575d95cb5fcf28a98a8be6d4b8a26bced
Parents: 6bb6bb0 1c3ff92
Author: Yuki Morishita 
Authored: Wed Nov 11 16:13:58 2015 -0600
Committer: Yuki Morishita 
Committed: Wed Nov 11 16:13:58 2015 -0600

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  | 35 +++
 .../db/compaction/CompactionController.java |  5 --
 src/java/org/apache/cassandra/dht/Bounds.java   | 62 
 .../cassandra/io/sstable/SSTableRewriter.java   |  1 -
 .../cassandra/streaming/StreamReader.java   |  1 -
 .../cassandra/streaming/StreamReceiveTask.java  | 35 +++
 .../apache/cassandra/db/CounterCacheTest.java   | 45 ++
 .../org/apache/cassandra/db/RowCacheTest.java   | 51 
 .../org/apache/cassandra/dht/BoundsTest.java| 61 +++
 10 files changed, 290 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e4875535/CHANGES.txt
--
diff --cc CHANGES.txt
index 0557786,92244a0..0fcf037
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,15 -1,5 +1,16 @@@
 -2.1.12
 +2.2.4
 + * (Hadoop) fix splits calculation (CASSANDRA-10640)
 + * (Hadoop) ensure that Cluster instances are always closed (CASSANDRA-10058)
 + * (cqlsh) show partial trace if incomplete after max_trace_wait 
(CASSANDRA-7645)
 + * Use most up-to-date version of schema for system tables (CASSANDRA-10652)
 + * Deprecate memory_allocator in cassandra.yaml (CASSANDRA-10581,10628)
 + * Expose phi values from failure detector via JMX and tweak debug
 +   and trace logging (CASSANDRA-9526)
 + * Fix RangeNamesQueryPager (CASSANDRA-10509)
 + * Deprecate Pig support (CASSANDRA-10542)
 + * Reduce contention getting instances of CompositeType (CASSANDRA-10433)
 +Merged from 2.1:
+  * Invalidate cache after stream receive task is completed (CASSANDRA-10341)
   * Reject counter writes in CQLSSTableWriter (CASSANDRA-10258)
   * Remove superfluous COUNTER_MUTATION stage mapping (CASSANDRA-10605)
   * Improve json2sstable error reporting on nonexistent columns 
(CASSANDRA-10401)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e4875535/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --cc src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index d553f4d,54f6fff..2d58219
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@@ -2519,6 -2505,37 +2519,41 @@@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
  
          CacheService.instance.invalidateCounterCacheForCf(metadata.ksAndCFName);
      }
  
+     public int invalidateRowCache(Collection<Bounds<Token>> boundsToInvalidate)
+     {
+         int invalidatedKeys = 0;
 -        for (RowCacheKey key : CacheService.instance.rowCache.getKeySet())
++        for (Iterator<RowCacheKey> keyIter = CacheService.instance.rowCache.keyIterator();
++             keyIter.hasNext(); )
+         {
++            RowCacheKey key = keyIter.next();
+             DecoratedKey dk = partitioner.decorateKey(ByteBuffer.wrap(key.key));
+             if (key.ksAndCFName.equals(metadata.ksAndCFName) && Bounds.isInBounds(dk.getToken(), boundsToInvalidate))
+             {
+                 invalidateCachedRow(dk);
+                 invalidatedKeys++;
+             }
+         }
+ 
+         return invalidatedKeys;
+     }
+ 
+     public int invalidateCounterCache(Collection<Bounds<Token>> boundsToInvalidate)
+     {
+         int invalidatedKeys = 0;
 -        for (CounterCacheKey key : CacheService.instance.counterCache.getKeySet())
++        for (Iterator<CounterCacheKey> keyIter = CacheService.instance.counterCache.keyIterator();
++             keyIter.hasNext(); )
+         {
++            CounterCacheKey key = keyIter.next();
+             DecoratedKey dk = partitioner.decorateKey(ByteBuffer.wrap(key.partitionKey));
+             if (key.ksAndCFName.equals(metadata.ksAndCFName) && Bounds.isInBounds(dk.getToken(), boundsToInvalidate))
+             {
+                 CacheService.instance.counterCache.remove(key);
+                 invalidatedKeys++;
+             }
+         }
+         return invalidatedKeys;
+     }
+ 
      /**
       * @return true if @param key is contained in the row cache
       */

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e4875535/src/java/org/apache/cassandra/db/compaction/CompactionController.java

[09/15] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-11-11 Thread yukim
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e4875535
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e4875535
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e4875535

Branch: refs/heads/cassandra-3.1
Commit: e487553575d95cb5fcf28a98a8be6d4b8a26bced
Parents: 6bb6bb0 1c3ff92
Author: Yuki Morishita 
Authored: Wed Nov 11 16:13:58 2015 -0600
Committer: Yuki Morishita 
Committed: Wed Nov 11 16:13:58 2015 -0600

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  | 35 +++
 .../db/compaction/CompactionController.java |  5 --
 src/java/org/apache/cassandra/dht/Bounds.java   | 62 
 .../cassandra/io/sstable/SSTableRewriter.java   |  1 -
 .../cassandra/streaming/StreamReader.java   |  1 -
 .../cassandra/streaming/StreamReceiveTask.java  | 35 +++
 .../apache/cassandra/db/CounterCacheTest.java   | 45 ++
 .../org/apache/cassandra/db/RowCacheTest.java   | 51 
 .../org/apache/cassandra/dht/BoundsTest.java| 61 +++
 10 files changed, 290 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e4875535/CHANGES.txt
--
diff --cc CHANGES.txt
index 0557786,92244a0..0fcf037
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,15 -1,5 +1,16 @@@
 -2.1.12
 +2.2.4
 + * (Hadoop) fix splits calculation (CASSANDRA-10640)
 + * (Hadoop) ensure that Cluster instances are always closed (CASSANDRA-10058)
 + * (cqlsh) show partial trace if incomplete after max_trace_wait 
(CASSANDRA-7645)
 + * Use most up-to-date version of schema for system tables (CASSANDRA-10652)
 + * Deprecate memory_allocator in cassandra.yaml (CASSANDRA-10581,10628)
 + * Expose phi values from failure detector via JMX and tweak debug
 +   and trace logging (CASSANDRA-9526)
 + * Fix RangeNamesQueryPager (CASSANDRA-10509)
 + * Deprecate Pig support (CASSANDRA-10542)
 + * Reduce contention getting instances of CompositeType (CASSANDRA-10433)
 +Merged from 2.1:
+  * Invalidate cache after stream receive task is completed (CASSANDRA-10341)
   * Reject counter writes in CQLSSTableWriter (CASSANDRA-10258)
   * Remove superfluous COUNTER_MUTATION stage mapping (CASSANDRA-10605)
   * Improve json2sstable error reporting on nonexistent columns 
(CASSANDRA-10401)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e4875535/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --cc src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index d553f4d,54f6fff..2d58219
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@@ -2519,6 -2505,37 +2519,41 @@@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
  
          CacheService.instance.invalidateCounterCacheForCf(metadata.ksAndCFName);
      }
  
+     public int invalidateRowCache(Collection<Bounds<Token>> boundsToInvalidate)
+     {
+         int invalidatedKeys = 0;
 -        for (RowCacheKey key : CacheService.instance.rowCache.getKeySet())
++        for (Iterator<RowCacheKey> keyIter = CacheService.instance.rowCache.keyIterator();
++             keyIter.hasNext(); )
+         {
++            RowCacheKey key = keyIter.next();
+             DecoratedKey dk = partitioner.decorateKey(ByteBuffer.wrap(key.key));
+             if (key.ksAndCFName.equals(metadata.ksAndCFName) && Bounds.isInBounds(dk.getToken(), boundsToInvalidate))
+             {
+                 invalidateCachedRow(dk);
+                 invalidatedKeys++;
+             }
+         }
+ 
+         return invalidatedKeys;
+     }
+ 
+     public int invalidateCounterCache(Collection<Bounds<Token>> boundsToInvalidate)
+     {
+         int invalidatedKeys = 0;
 -        for (CounterCacheKey key : CacheService.instance.counterCache.getKeySet())
++        for (Iterator<CounterCacheKey> keyIter = CacheService.instance.counterCache.keyIterator();
++             keyIter.hasNext(); )
+         {
++            CounterCacheKey key = keyIter.next();
+             DecoratedKey dk = partitioner.decorateKey(ByteBuffer.wrap(key.partitionKey));
+             if (key.ksAndCFName.equals(metadata.ksAndCFName) && Bounds.isInBounds(dk.getToken(), boundsToInvalidate))
+             {
+                 CacheService.instance.counterCache.remove(key);
+                 invalidatedKeys++;
+             }
+         }
+         return invalidatedKeys;
+     }
+ 
      /**
       * @return true if @param key is contained in the row cache
       */

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e4875535/src/java/org/apache/cassandra/db/compaction/CompactionController.java

[10/15] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2015-11-11 Thread yukim
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0de23f20
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0de23f20
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0de23f20

Branch: refs/heads/trunk
Commit: 0de23f20ae4bd95f040017e2db653c6c1b5eabe9
Parents: 9a90e98 e487553
Author: Yuki Morishita 
Authored: Wed Nov 11 16:16:23 2015 -0600
Committer: Yuki Morishita 
Committed: Wed Nov 11 16:16:23 2015 -0600

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  | 34 +++
 .../db/compaction/CompactionController.java |  5 --
 src/java/org/apache/cassandra/dht/Bounds.java   | 62 
 .../cassandra/streaming/StreamReader.java   | 12 ++--
 .../cassandra/streaming/StreamReceiveTask.java  | 37 +++-
 .../compress/CompressedStreamReader.java|  2 +-
 .../apache/cassandra/db/CounterCacheTest.java   | 48 +++
 .../org/apache/cassandra/db/RowCacheTest.java   | 50 
 .../org/apache/cassandra/dht/BoundsTest.java| 61 +++
 10 files changed, 298 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0de23f20/CHANGES.txt
--
diff --cc CHANGES.txt
index d271c95,0fcf037..02dc249
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,50 -1,6 +1,51 @@@
 -2.2.4
 +3.0.1
 + * Keep the file open in trySkipCache (CASSANDRA-10669)
 + * Updated trigger example (CASSANDRA-10257)
 +Merged from 2.2:
   * (Hadoop) fix splits calculation (CASSANDRA-10640)
   * (Hadoop) ensure that Cluster instances are always closed (CASSANDRA-10058)
 +Merged from 2.1:
++ * Invalidate cache after stream receive task is completed (CASSANDRA-10341)
 + * Reject counter writes in CQLSSTableWriter (CASSANDRA-10258)
 + * Remove superfluous COUNTER_MUTATION stage mapping (CASSANDRA-10605)
 +
 +
 +3.0
 + * Fix AssertionError while flushing memtable due to materialized views
 +   incorrectly inserting empty rows (CASSANDRA-10614)
 + * Store UDA initcond as CQL literal in the schema table, instead of a blob 
(CASSANDRA-10650)
 + * Don't use -1 for the position of partition key in schema (CASSANDRA-10491)
 + * Fix distinct queries in mixed version cluster (CASSANDRA-10573)
 + * Skip sstable on clustering in names query (CASSANDRA-10571)
 + * Remove value skipping as it breaks read-repair (CASSANDRA-10655)
 + * Fix bootstrapping with MVs (CASSANDRA-10621)
 + * Make sure EACH_QUORUM reads are using NTS (CASSANDRA-10584)
 + * Fix MV replica filtering for non-NetworkTopologyStrategy (CASSANDRA-10634)
 + * (Hadoop) fix CIF describeSplits() not handling 0 size estimates 
(CASSANDRA-10600)
 + * Fix reading of legacy sstables (CASSANDRA-10590)
 + * Use CQL type names in schema metadata tables (CASSANDRA-10365)
 + * Guard batchlog replay against integer division by zero (CASSANDRA-9223)
 + * Fix bug when adding a column to thrift with the same name than a primary 
key (CASSANDRA-10608)
 + * Add client address argument to IAuthenticator::newSaslNegotiator 
(CASSANDRA-8068)
 + * Fix implementation of LegacyLayout.LegacyBoundComparator (CASSANDRA-10602)
 + * Don't use 'names query' read path for counters (CASSANDRA-10572)
 + * Fix backward compatibility for counters (CASSANDRA-10470)
 + * Remove memory_allocator paramter from cassandra.yaml 
(CASSANDRA-10581,10628)
 + * Execute the metadata reload task of all registered indexes on CFS::reload 
(CASSANDRA-10604)
 + * Fix thrift cas operations with defined columns (CASSANDRA-10576)
 + * Fix PartitionUpdate.operationCount()for updates with static column 
operations (CASSANDRA-10606)
 + * Fix thrift get() queries with defined columns (CASSANDRA-10586)
 + * Fix marking of indexes as built and removed (CASSANDRA-10601)
 + * Skip initialization of non-registered 2i instances, remove 
Index::getIndexName (CASSANDRA-10595)
 + * Fix batches on multiple tables (CASSANDRA-10554)
 + * Ensure compaction options are validated when updating KeyspaceMetadata 
(CASSANDRA-10569)
 + * Flatten Iterator Transformation Hierarchy (CASSANDRA-9975)
 + * Remove token generator (CASSANDRA-5261)
 + * RolesCache should not be created for any authenticator that does not 
requireAuthentication (CASSANDRA-10562)
 + * Fix LogTransaction checking only a single directory for files 
(CASSANDRA-10421)
 + * Fix handling of range tombstones when reading old format sstables 
(CASSANDRA-10360)
 + * Aggregate with Initial Condition fails with C* 3.0 (CASSANDRA-10367)
 +Merged from 2.2:
   * (cqlsh) show partial trace if incomplete after max_trace_wait 
(CASSANDRA-7645)
   * Use most up-to-date version of schema for system tables (CASSANDRA-10652)
   * Deprecate

[06/15] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-11-11 Thread yukim
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e4875535
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e4875535
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e4875535

Branch: refs/heads/trunk
Commit: e487553575d95cb5fcf28a98a8be6d4b8a26bced
Parents: 6bb6bb0 1c3ff92
Author: Yuki Morishita 
Authored: Wed Nov 11 16:13:58 2015 -0600
Committer: Yuki Morishita 
Committed: Wed Nov 11 16:13:58 2015 -0600

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  | 35 +++
 .../db/compaction/CompactionController.java |  5 --
 src/java/org/apache/cassandra/dht/Bounds.java   | 62 
 .../cassandra/io/sstable/SSTableRewriter.java   |  1 -
 .../cassandra/streaming/StreamReader.java   |  1 -
 .../cassandra/streaming/StreamReceiveTask.java  | 35 +++
 .../apache/cassandra/db/CounterCacheTest.java   | 45 ++
 .../org/apache/cassandra/db/RowCacheTest.java   | 51 
 .../org/apache/cassandra/dht/BoundsTest.java| 61 +++
 10 files changed, 290 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e4875535/CHANGES.txt
--
diff --cc CHANGES.txt
index 0557786,92244a0..0fcf037
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,15 -1,5 +1,16 @@@
 -2.1.12
 +2.2.4
 + * (Hadoop) fix splits calculation (CASSANDRA-10640)
 + * (Hadoop) ensure that Cluster instances are always closed (CASSANDRA-10058)
 + * (cqlsh) show partial trace if incomplete after max_trace_wait 
(CASSANDRA-7645)
 + * Use most up-to-date version of schema for system tables (CASSANDRA-10652)
 + * Deprecate memory_allocator in cassandra.yaml (CASSANDRA-10581,10628)
 + * Expose phi values from failure detector via JMX and tweak debug
 +   and trace logging (CASSANDRA-9526)
 + * Fix RangeNamesQueryPager (CASSANDRA-10509)
 + * Deprecate Pig support (CASSANDRA-10542)
 + * Reduce contention getting instances of CompositeType (CASSANDRA-10433)
 +Merged from 2.1:
+  * Invalidate cache after stream receive task is completed (CASSANDRA-10341)
   * Reject counter writes in CQLSSTableWriter (CASSANDRA-10258)
   * Remove superfluous COUNTER_MUTATION stage mapping (CASSANDRA-10605)
   * Improve json2sstable error reporting on nonexistent columns 
(CASSANDRA-10401)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e4875535/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --cc src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index d553f4d,54f6fff..2d58219
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@@ -2519,6 -2505,37 +2519,41 @@@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
  
          CacheService.instance.invalidateCounterCacheForCf(metadata.ksAndCFName);
      }
  
+     public int invalidateRowCache(Collection<Bounds<Token>> boundsToInvalidate)
+     {
+         int invalidatedKeys = 0;
 -        for (RowCacheKey key : CacheService.instance.rowCache.getKeySet())
++        for (Iterator<RowCacheKey> keyIter = CacheService.instance.rowCache.keyIterator();
++             keyIter.hasNext(); )
+         {
++            RowCacheKey key = keyIter.next();
+             DecoratedKey dk = partitioner.decorateKey(ByteBuffer.wrap(key.key));
+             if (key.ksAndCFName.equals(metadata.ksAndCFName) && Bounds.isInBounds(dk.getToken(), boundsToInvalidate))
+             {
+                 invalidateCachedRow(dk);
+                 invalidatedKeys++;
+             }
+         }
+ 
+         return invalidatedKeys;
+     }
+ 
+     public int invalidateCounterCache(Collection<Bounds<Token>> boundsToInvalidate)
+     {
+         int invalidatedKeys = 0;
 -        for (CounterCacheKey key : CacheService.instance.counterCache.getKeySet())
++        for (Iterator<CounterCacheKey> keyIter = CacheService.instance.counterCache.keyIterator();
++             keyIter.hasNext(); )
+         {
++            CounterCacheKey key = keyIter.next();
+             DecoratedKey dk = partitioner.decorateKey(ByteBuffer.wrap(key.partitionKey));
+             if (key.ksAndCFName.equals(metadata.ksAndCFName) && Bounds.isInBounds(dk.getToken(), boundsToInvalidate))
+             {
+                 CacheService.instance.counterCache.remove(key);
+                 invalidatedKeys++;
+             }
+         }
+         return invalidatedKeys;
+     }
+ 
      /**
       * @return true if @param key is contained in the row cache
       */

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e4875535/src/java/org/apache/cassandra/db/compaction/CompactionController.java

[01/15] cassandra git commit: Invalidate row/counter cache after stream receive task is completed

2015-11-11 Thread yukim
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 6bad57fc3 -> 1c3ff9242
  refs/heads/cassandra-2.2 6bb6bb005 -> e48755357
  refs/heads/cassandra-3.0 9a90e9894 -> 0de23f20a
  refs/heads/cassandra-3.1 1fe90d34b -> 0cafccfc5
  refs/heads/trunk 7d6dbf897 -> 186efefe8


Invalidate row/counter cache after stream receive task is completed

patch by Paulo Motta; reviewed by yukim for CASSANDRA-10341


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1c3ff924
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1c3ff924
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1c3ff924

Branch: refs/heads/cassandra-2.1
Commit: 1c3ff9242a0bfc5c544c69f68ee7b17a464a5ab3
Parents: 6bad57f
Author: Paulo Motta 
Authored: Wed Nov 11 13:26:22 2015 -0600
Committer: Yuki Morishita 
Committed: Wed Nov 11 15:52:37 2015 -0600

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  | 31 ++
 .../db/compaction/CompactionController.java |  5 --
 src/java/org/apache/cassandra/dht/Bounds.java   | 62 
 .../cassandra/io/sstable/SSTableRewriter.java   |  1 -
 .../cassandra/streaming/StreamReader.java   |  1 -
 .../cassandra/streaming/StreamReceiveTask.java  | 36 
 .../apache/cassandra/db/CounterCacheTest.java   | 45 ++
 .../org/apache/cassandra/db/RowCacheTest.java   | 61 +--
 .../org/apache/cassandra/dht/BoundsTest.java| 61 +++
 10 files changed, 291 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1c3ff924/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index fa2017a..92244a0 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.12
+ * Invalidate cache after stream receive task is completed (CASSANDRA-10341)
  * Reject counter writes in CQLSSTableWriter (CASSANDRA-10258)
  * Remove superfluous COUNTER_MUTATION stage mapping (CASSANDRA-10605)
  * Improve json2sstable error reporting on nonexistent columns 
(CASSANDRA-10401)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1c3ff924/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 906e18c..54f6fff 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -2505,6 +2505,37 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
 
         CacheService.instance.invalidateCounterCacheForCf(metadata.ksAndCFName);
     }
 
+    public int invalidateRowCache(Collection<Bounds<Token>> boundsToInvalidate)
+    {
+        int invalidatedKeys = 0;
+        for (RowCacheKey key : CacheService.instance.rowCache.getKeySet())
+        {
+            DecoratedKey dk = partitioner.decorateKey(ByteBuffer.wrap(key.key));
+            if (key.ksAndCFName.equals(metadata.ksAndCFName) && Bounds.isInBounds(dk.getToken(), boundsToInvalidate))
+            {
+                invalidateCachedRow(dk);
+                invalidatedKeys++;
+            }
+        }
+
+        return invalidatedKeys;
+    }
+
+    public int invalidateCounterCache(Collection<Bounds<Token>> boundsToInvalidate)
+    {
+        int invalidatedKeys = 0;
+        for (CounterCacheKey key : CacheService.instance.counterCache.getKeySet())
+        {
+            DecoratedKey dk = partitioner.decorateKey(ByteBuffer.wrap(key.partitionKey));
+            if (key.ksAndCFName.equals(metadata.ksAndCFName) && Bounds.isInBounds(dk.getToken(), boundsToInvalidate))
+            {
+                CacheService.instance.counterCache.remove(key);
+                invalidatedKeys++;
+            }
+        }
+        return invalidatedKeys;
+    }
+
     /**
      * @return true if @param key is contained in the row cache
      */

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1c3ff924/src/java/org/apache/cassandra/db/compaction/CompactionController.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/CompactionController.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionController.java
index f8ff163..35d0832 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionController.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionController.java
@@ -189,11 +189,6 @@ public class CompactionController implements AutoCloseable
         return min;
     }
 
-    public void invalidateCachedRow(DecoratedKey key)
-    {
-        cfs.invalidateCachedRow(key

[02/15] cassandra git commit: Invalidate row/counter cache after stream receive task is completed

2015-11-11 Thread yukim
Invalidate row/counter cache after stream receive task is completed

patch by Paulo Motta; reviewed by yukim for CASSANDRA-10341


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1c3ff924
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1c3ff924
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1c3ff924

Branch: refs/heads/cassandra-2.2
Commit: 1c3ff9242a0bfc5c544c69f68ee7b17a464a5ab3
Parents: 6bad57f
Author: Paulo Motta 
Authored: Wed Nov 11 13:26:22 2015 -0600
Committer: Yuki Morishita 
Committed: Wed Nov 11 15:52:37 2015 -0600

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  | 31 ++
 .../db/compaction/CompactionController.java |  5 --
 src/java/org/apache/cassandra/dht/Bounds.java   | 62 
 .../cassandra/io/sstable/SSTableRewriter.java   |  1 -
 .../cassandra/streaming/StreamReader.java   |  1 -
 .../cassandra/streaming/StreamReceiveTask.java  | 36 
 .../apache/cassandra/db/CounterCacheTest.java   | 45 ++
 .../org/apache/cassandra/db/RowCacheTest.java   | 61 +--
 .../org/apache/cassandra/dht/BoundsTest.java| 61 +++
 10 files changed, 291 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1c3ff924/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index fa2017a..92244a0 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.12
+ * Invalidate cache after stream receive task is completed (CASSANDRA-10341)
  * Reject counter writes in CQLSSTableWriter (CASSANDRA-10258)
  * Remove superfluous COUNTER_MUTATION stage mapping (CASSANDRA-10605)
  * Improve json2sstable error reporting on nonexistent columns 
(CASSANDRA-10401)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1c3ff924/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 906e18c..54f6fff 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -2505,6 +2505,37 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
 
         CacheService.instance.invalidateCounterCacheForCf(metadata.ksAndCFName);
     }
 
+    public int invalidateRowCache(Collection<Bounds<Token>> boundsToInvalidate)
+    {
+        int invalidatedKeys = 0;
+        for (RowCacheKey key : CacheService.instance.rowCache.getKeySet())
+        {
+            DecoratedKey dk = partitioner.decorateKey(ByteBuffer.wrap(key.key));
+            if (key.ksAndCFName.equals(metadata.ksAndCFName) && Bounds.isInBounds(dk.getToken(), boundsToInvalidate))
+            {
+                invalidateCachedRow(dk);
+                invalidatedKeys++;
+            }
+        }
+
+        return invalidatedKeys;
+    }
+
+    public int invalidateCounterCache(Collection<Bounds<Token>> boundsToInvalidate)
+    {
+        int invalidatedKeys = 0;
+        for (CounterCacheKey key : CacheService.instance.counterCache.getKeySet())
+        {
+            DecoratedKey dk = partitioner.decorateKey(ByteBuffer.wrap(key.partitionKey));
+            if (key.ksAndCFName.equals(metadata.ksAndCFName) && Bounds.isInBounds(dk.getToken(), boundsToInvalidate))
+            {
+                CacheService.instance.counterCache.remove(key);
+                invalidatedKeys++;
+            }
+        }
+        return invalidatedKeys;
+    }
+
     /**
      * @return true if @param key is contained in the row cache
      */

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1c3ff924/src/java/org/apache/cassandra/db/compaction/CompactionController.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/CompactionController.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionController.java
index f8ff163..35d0832 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionController.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionController.java
@@ -189,11 +189,6 @@ public class CompactionController implements AutoCloseable
         return min;
     }
 
-    public void invalidateCachedRow(DecoratedKey key)
-    {
-        cfs.invalidateCachedRow(key);
-    }
-
     public void close()
     {
         overlappingSSTables.release();

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1c3ff924/src/java/org/apache/cassandra/dht/Bounds.java
--
diff --git a/src/ja

[15/15] cassandra git commit: Merge branch 'cassandra-3.1' into trunk

2015-11-11 Thread yukim
Merge branch 'cassandra-3.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/186efefe
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/186efefe
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/186efefe

Branch: refs/heads/trunk
Commit: 186efefe869995ba2ff1a4dfe861d240dd4ac5b7
Parents: 7d6dbf8 0cafccf
Author: Yuki Morishita 
Authored: Wed Nov 11 16:16:58 2015 -0600
Committer: Yuki Morishita 
Committed: Wed Nov 11 16:16:58 2015 -0600

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  | 34 +++
 .../db/compaction/CompactionController.java |  5 --
 src/java/org/apache/cassandra/dht/Bounds.java   | 62 
 .../cassandra/streaming/StreamReader.java   | 12 ++--
 .../cassandra/streaming/StreamReceiveTask.java  | 37 +++-
 .../compress/CompressedStreamReader.java|  2 +-
 .../apache/cassandra/db/CounterCacheTest.java   | 48 +++
 .../org/apache/cassandra/db/RowCacheTest.java   | 50 
 .../org/apache/cassandra/dht/BoundsTest.java| 61 +++
 10 files changed, 298 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/186efefe/CHANGES.txt
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/186efefe/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--



[14/15] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.1

2015-11-11 Thread yukim
Merge branch 'cassandra-3.0' into cassandra-3.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0cafccfc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0cafccfc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0cafccfc

Branch: refs/heads/trunk
Commit: 0cafccfc5ce80fc91817de6a5cd74702836de508
Parents: 1fe90d3 0de23f2
Author: Yuki Morishita 
Authored: Wed Nov 11 16:16:44 2015 -0600
Committer: Yuki Morishita 
Committed: Wed Nov 11 16:16:44 2015 -0600

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  | 34 +++
 .../db/compaction/CompactionController.java |  5 --
 src/java/org/apache/cassandra/dht/Bounds.java   | 62 
 .../cassandra/streaming/StreamReader.java   | 12 ++--
 .../cassandra/streaming/StreamReceiveTask.java  | 37 +++-
 .../compress/CompressedStreamReader.java|  2 +-
 .../apache/cassandra/db/CounterCacheTest.java   | 48 +++
 .../org/apache/cassandra/db/RowCacheTest.java   | 50 
 .../org/apache/cassandra/dht/BoundsTest.java| 61 +++
 10 files changed, 298 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0cafccfc/CHANGES.txt
--



[12/15] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2015-11-11 Thread yukim
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0de23f20
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0de23f20
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0de23f20

Branch: refs/heads/cassandra-3.0
Commit: 0de23f20ae4bd95f040017e2db653c6c1b5eabe9
Parents: 9a90e98 e487553
Author: Yuki Morishita 
Authored: Wed Nov 11 16:16:23 2015 -0600
Committer: Yuki Morishita 
Committed: Wed Nov 11 16:16:23 2015 -0600

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  | 34 +++
 .../db/compaction/CompactionController.java |  5 --
 src/java/org/apache/cassandra/dht/Bounds.java   | 62 
 .../cassandra/streaming/StreamReader.java   | 12 ++--
 .../cassandra/streaming/StreamReceiveTask.java  | 37 +++-
 .../compress/CompressedStreamReader.java|  2 +-
 .../apache/cassandra/db/CounterCacheTest.java   | 48 +++
 .../org/apache/cassandra/db/RowCacheTest.java   | 50 
 .../org/apache/cassandra/dht/BoundsTest.java| 61 +++
 10 files changed, 298 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0de23f20/CHANGES.txt
--
diff --cc CHANGES.txt
index d271c95,0fcf037..02dc249
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,50 -1,6 +1,51 @@@
 -2.2.4
 +3.0.1
 + * Keep the file open in trySkipCache (CASSANDRA-10669)
 + * Updated trigger example (CASSANDRA-10257)
 +Merged from 2.2:
   * (Hadoop) fix splits calculation (CASSANDRA-10640)
   * (Hadoop) ensure that Cluster instances are always closed (CASSANDRA-10058)
 +Merged from 2.1:
++ * Invalidate cache after stream receive task is completed (CASSANDRA-10341)
 + * Reject counter writes in CQLSSTableWriter (CASSANDRA-10258)
 + * Remove superfluous COUNTER_MUTATION stage mapping (CASSANDRA-10605)
 +
 +
 +3.0
 + * Fix AssertionError while flushing memtable due to materialized views
 +   incorrectly inserting empty rows (CASSANDRA-10614)
 + * Store UDA initcond as CQL literal in the schema table, instead of a blob 
(CASSANDRA-10650)
 + * Don't use -1 for the position of partition key in schema (CASSANDRA-10491)
 + * Fix distinct queries in mixed version cluster (CASSANDRA-10573)
 + * Skip sstable on clustering in names query (CASSANDRA-10571)
 + * Remove value skipping as it breaks read-repair (CASSANDRA-10655)
 + * Fix bootstrapping with MVs (CASSANDRA-10621)
 + * Make sure EACH_QUORUM reads are using NTS (CASSANDRA-10584)
 + * Fix MV replica filtering for non-NetworkTopologyStrategy (CASSANDRA-10634)
 + * (Hadoop) fix CIF describeSplits() not handling 0 size estimates 
(CASSANDRA-10600)
 + * Fix reading of legacy sstables (CASSANDRA-10590)
 + * Use CQL type names in schema metadata tables (CASSANDRA-10365)
 + * Guard batchlog replay against integer division by zero (CASSANDRA-9223)
 + * Fix bug when adding a column to thrift with the same name than a primary 
key (CASSANDRA-10608)
 + * Add client address argument to IAuthenticator::newSaslNegotiator 
(CASSANDRA-8068)
 + * Fix implementation of LegacyLayout.LegacyBoundComparator (CASSANDRA-10602)
 + * Don't use 'names query' read path for counters (CASSANDRA-10572)
 + * Fix backward compatibility for counters (CASSANDRA-10470)
 + * Remove memory_allocator paramter from cassandra.yaml 
(CASSANDRA-10581,10628)
 + * Execute the metadata reload task of all registered indexes on CFS::reload 
(CASSANDRA-10604)
 + * Fix thrift cas operations with defined columns (CASSANDRA-10576)
 + * Fix PartitionUpdate.operationCount()for updates with static column 
operations (CASSANDRA-10606)
 + * Fix thrift get() queries with defined columns (CASSANDRA-10586)
 + * Fix marking of indexes as built and removed (CASSANDRA-10601)
 + * Skip initialization of non-registered 2i instances, remove 
Index::getIndexName (CASSANDRA-10595)
 + * Fix batches on multiple tables (CASSANDRA-10554)
 + * Ensure compaction options are validated when updating KeyspaceMetadata 
(CASSANDRA-10569)
 + * Flatten Iterator Transformation Hierarchy (CASSANDRA-9975)
 + * Remove token generator (CASSANDRA-5261)
 + * RolesCache should not be created for any authenticator that does not 
requireAuthentication (CASSANDRA-10562)
 + * Fix LogTransaction checking only a single directory for files 
(CASSANDRA-10421)
 + * Fix handling of range tombstones when reading old format sstables 
(CASSANDRA-10360)
 + * Aggregate with Initial Condition fails with C* 3.0 (CASSANDRA-10367)
 +Merged from 2.2:
   * (cqlsh) show partial trace if incomplete after max_trace_wait 
(CASSANDRA-7645)
   * Use most up-to-date version of schema for system tables (CASSANDRA-10652)
   * D

[08/15] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-11-11 Thread yukim
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e4875535
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e4875535
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e4875535

Branch: refs/heads/cassandra-3.0
Commit: e487553575d95cb5fcf28a98a8be6d4b8a26bced
Parents: 6bb6bb0 1c3ff92
Author: Yuki Morishita 
Authored: Wed Nov 11 16:13:58 2015 -0600
Committer: Yuki Morishita 
Committed: Wed Nov 11 16:13:58 2015 -0600

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  | 35 +++
 .../db/compaction/CompactionController.java |  5 --
 src/java/org/apache/cassandra/dht/Bounds.java   | 62 
 .../cassandra/io/sstable/SSTableRewriter.java   |  1 -
 .../cassandra/streaming/StreamReader.java   |  1 -
 .../cassandra/streaming/StreamReceiveTask.java  | 35 +++
 .../apache/cassandra/db/CounterCacheTest.java   | 45 ++
 .../org/apache/cassandra/db/RowCacheTest.java   | 51 
 .../org/apache/cassandra/dht/BoundsTest.java| 61 +++
 10 files changed, 290 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e4875535/CHANGES.txt
--
diff --cc CHANGES.txt
index 0557786,92244a0..0fcf037
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,15 -1,5 +1,16 @@@
 -2.1.12
 +2.2.4
 + * (Hadoop) fix splits calculation (CASSANDRA-10640)
 + * (Hadoop) ensure that Cluster instances are always closed (CASSANDRA-10058)
 + * (cqlsh) show partial trace if incomplete after max_trace_wait 
(CASSANDRA-7645)
 + * Use most up-to-date version of schema for system tables (CASSANDRA-10652)
 + * Deprecate memory_allocator in cassandra.yaml (CASSANDRA-10581,10628)
 + * Expose phi values from failure detector via JMX and tweak debug
 +   and trace logging (CASSANDRA-9526)
 + * Fix RangeNamesQueryPager (CASSANDRA-10509)
 + * Deprecate Pig support (CASSANDRA-10542)
 + * Reduce contention getting instances of CompositeType (CASSANDRA-10433)
 +Merged from 2.1:
+  * Invalidate cache after stream receive task is completed (CASSANDRA-10341)
   * Reject counter writes in CQLSSTableWriter (CASSANDRA-10258)
   * Remove superfluous COUNTER_MUTATION stage mapping (CASSANDRA-10605)
   * Improve json2sstable error reporting on nonexistent columns 
(CASSANDRA-10401)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e4875535/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --cc src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index d553f4d,54f6fff..2d58219
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@@ -2519,6 -2505,37 +2519,41 @@@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
  
          CacheService.instance.invalidateCounterCacheForCf(metadata.ksAndCFName);
      }
  
+     public int invalidateRowCache(Collection<Bounds<Token>> boundsToInvalidate)
+     {
+         int invalidatedKeys = 0;
 -        for (RowCacheKey key : CacheService.instance.rowCache.getKeySet())
++        for (Iterator<RowCacheKey> keyIter = CacheService.instance.rowCache.keyIterator();
++             keyIter.hasNext(); )
+         {
++            RowCacheKey key = keyIter.next();
+             DecoratedKey dk = partitioner.decorateKey(ByteBuffer.wrap(key.key));
+             if (key.ksAndCFName.equals(metadata.ksAndCFName) && Bounds.isInBounds(dk.getToken(), boundsToInvalidate))
+             {
+                 invalidateCachedRow(dk);
+                 invalidatedKeys++;
+             }
+         }
+ 
+         return invalidatedKeys;
+     }
+ 
+     public int invalidateCounterCache(Collection<Bounds<Token>> boundsToInvalidate)
+     {
+         int invalidatedKeys = 0;
 -        for (CounterCacheKey key : CacheService.instance.counterCache.getKeySet())
++        for (Iterator<CounterCacheKey> keyIter = CacheService.instance.counterCache.keyIterator();
++             keyIter.hasNext(); )
+         {
++            CounterCacheKey key = keyIter.next();
+             DecoratedKey dk = partitioner.decorateKey(ByteBuffer.wrap(key.partitionKey));
+             if (key.ksAndCFName.equals(metadata.ksAndCFName) && Bounds.isInBounds(dk.getToken(), boundsToInvalidate))
+             {
+                 CacheService.instance.counterCache.remove(key);
+                 invalidatedKeys++;
+             }
+         }
+         return invalidatedKeys;
+     }
+ 
  /**
   * @return true if @param key is contained in the row cache
   */
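
A note for readers skimming the hunk above: the two new methods walk every key
in the global row/counter cache and drop the entries of this table whose token
falls inside any of the supplied bounds. As a rough, hypothetical illustration
of how a caller that has just received sstables could use them (the helper
class, the "received" collection and the exact imports are assumptions against
the 2.1/2.2 tree, not code from this patch):

{code}
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

import org.apache.cassandra.db.ColumnFamilyStore;
import org.apache.cassandra.dht.Bounds;
import org.apache.cassandra.dht.Token;
import org.apache.cassandra.io.sstable.SSTableReader;

// Hypothetical helper, not part of the patch: invalidate cached rows/counters
// that newly streamed sstables may have made stale.
public final class CacheInvalidationSketch
{
    public static void invalidateForReceivedSSTables(ColumnFamilyStore cfs,
                                                     Collection<SSTableReader> received)
    {
        // One Bounds<Token> per sstable, spanning its first and last partition key.
        List<Bounds<Token>> boundsToInvalidate = new ArrayList<>();
        for (SSTableReader sstable : received)
            boundsToInvalidate.add(new Bounds<>(sstable.first.getToken(), sstable.last.getToken()));

        // The new ColumnFamilyStore methods do the per-key containment check.
        int rows = cfs.invalidateRowCache(boundsToInvalidate);
        int counters = cfs.invalidateCounterCache(boundsToInvalidate);
        System.out.println("invalidated " + rows + " row and " + counters + " counter cache entries");
    }
}
{code}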

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e4875535/src/java/org/apache/cassandra/db/compaction/CompactionController.java

[03/15] cassandra git commit: Invalidate row/counter cache after stream receive task is completed

2015-11-11 Thread yukim
Invalidate row/counter cache after stream receive task is completed

patch by Paulo Motta; reviewed by yukim for CASSANDRA-10341


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1c3ff924
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1c3ff924
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1c3ff924

Branch: refs/heads/trunk
Commit: 1c3ff9242a0bfc5c544c69f68ee7b17a464a5ab3
Parents: 6bad57f
Author: Paulo Motta 
Authored: Wed Nov 11 13:26:22 2015 -0600
Committer: Yuki Morishita 
Committed: Wed Nov 11 15:52:37 2015 -0600

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  | 31 ++
 .../db/compaction/CompactionController.java |  5 --
 src/java/org/apache/cassandra/dht/Bounds.java   | 62 
 .../cassandra/io/sstable/SSTableRewriter.java   |  1 -
 .../cassandra/streaming/StreamReader.java   |  1 -
 .../cassandra/streaming/StreamReceiveTask.java  | 36 
 .../apache/cassandra/db/CounterCacheTest.java   | 45 ++
 .../org/apache/cassandra/db/RowCacheTest.java   | 61 +--
 .../org/apache/cassandra/dht/BoundsTest.java| 61 +++
 10 files changed, 291 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1c3ff924/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index fa2017a..92244a0 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.12
+ * Invalidate cache after stream receive task is completed (CASSANDRA-10341)
  * Reject counter writes in CQLSSTableWriter (CASSANDRA-10258)
  * Remove superfluous COUNTER_MUTATION stage mapping (CASSANDRA-10605)
  * Improve json2sstable error reporting on nonexistent columns 
(CASSANDRA-10401)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1c3ff924/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 906e18c..54f6fff 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -2505,6 +2505,37 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
 
         CacheService.instance.invalidateCounterCacheForCf(metadata.ksAndCFName);
     }
 
+    public int invalidateRowCache(Collection<Bounds<Token>> boundsToInvalidate)
+    {
+        int invalidatedKeys = 0;
+        for (RowCacheKey key : CacheService.instance.rowCache.getKeySet())
+        {
+            DecoratedKey dk = partitioner.decorateKey(ByteBuffer.wrap(key.key));
+            if (key.ksAndCFName.equals(metadata.ksAndCFName) && Bounds.isInBounds(dk.getToken(), boundsToInvalidate))
+            {
+                invalidateCachedRow(dk);
+                invalidatedKeys++;
+            }
+        }
+
+        return invalidatedKeys;
+    }
+
+    public int invalidateCounterCache(Collection<Bounds<Token>> boundsToInvalidate)
+    {
+        int invalidatedKeys = 0;
+        for (CounterCacheKey key : CacheService.instance.counterCache.getKeySet())
+        {
+            DecoratedKey dk = partitioner.decorateKey(ByteBuffer.wrap(key.partitionKey));
+            if (key.ksAndCFName.equals(metadata.ksAndCFName) && Bounds.isInBounds(dk.getToken(), boundsToInvalidate))
+            {
+                CacheService.instance.counterCache.remove(key);
+                invalidatedKeys++;
+            }
+        }
+        return invalidatedKeys;
+    }
+
 /**
  * @return true if @param key is contained in the row cache
  */

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1c3ff924/src/java/org/apache/cassandra/db/compaction/CompactionController.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/CompactionController.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionController.java
index f8ff163..35d0832 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionController.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionController.java
@@ -189,11 +189,6 @@ public class CompactionController implements AutoCloseable
 return min;
 }
 
-public void invalidateCachedRow(DecoratedKey key)
-{
-cfs.invalidateCachedRow(key);
-}
-
 public void close()
 {
 overlappingSSTables.release();

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1c3ff924/src/java/org/apache/cassandra/dht/Bounds.java
--
diff --git a/src/java/org/a
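
(The Bounds.java hunk is cut off above. As a rough, hypothetical stand-in for
the containment helper the ColumnFamilyStore code calls, Bounds.isInBounds,
something along these lines would do; this is an illustration, not the patch's
actual implementation.)

{code}
// Hypothetical stand-in: true if the token falls inside at least one of the
// supplied bounds (Bounds is closed on both ends, so contains() is inclusive).
public static <T extends RingPosition<T>> boolean isInBounds(T token, Iterable<Bounds<T>> bounds)
{
    for (Bounds<T> b : bounds)
        if (b.contains(token))
            return true;
    return false;
}
{code}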

[11/15] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2015-11-11 Thread yukim
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0de23f20
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0de23f20
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0de23f20

Branch: refs/heads/cassandra-3.1
Commit: 0de23f20ae4bd95f040017e2db653c6c1b5eabe9
Parents: 9a90e98 e487553
Author: Yuki Morishita 
Authored: Wed Nov 11 16:16:23 2015 -0600
Committer: Yuki Morishita 
Committed: Wed Nov 11 16:16:23 2015 -0600

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  | 34 +++
 .../db/compaction/CompactionController.java |  5 --
 src/java/org/apache/cassandra/dht/Bounds.java   | 62 
 .../cassandra/streaming/StreamReader.java   | 12 ++--
 .../cassandra/streaming/StreamReceiveTask.java  | 37 +++-
 .../compress/CompressedStreamReader.java|  2 +-
 .../apache/cassandra/db/CounterCacheTest.java   | 48 +++
 .../org/apache/cassandra/db/RowCacheTest.java   | 50 
 .../org/apache/cassandra/dht/BoundsTest.java| 61 +++
 10 files changed, 298 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0de23f20/CHANGES.txt
--
diff --cc CHANGES.txt
index d271c95,0fcf037..02dc249
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,50 -1,6 +1,51 @@@
 -2.2.4
 +3.0.1
 + * Keep the file open in trySkipCache (CASSANDRA-10669)
 + * Updated trigger example (CASSANDRA-10257)
 +Merged from 2.2:
   * (Hadoop) fix splits calculation (CASSANDRA-10640)
   * (Hadoop) ensure that Cluster instances are always closed (CASSANDRA-10058)
 +Merged from 2.1:
++ * Invalidate cache after stream receive task is completed (CASSANDRA-10341)
 + * Reject counter writes in CQLSSTableWriter (CASSANDRA-10258)
 + * Remove superfluous COUNTER_MUTATION stage mapping (CASSANDRA-10605)
 +
 +
 +3.0
 + * Fix AssertionError while flushing memtable due to materialized views
 +   incorrectly inserting empty rows (CASSANDRA-10614)
 + * Store UDA initcond as CQL literal in the schema table, instead of a blob 
(CASSANDRA-10650)
 + * Don't use -1 for the position of partition key in schema (CASSANDRA-10491)
 + * Fix distinct queries in mixed version cluster (CASSANDRA-10573)
 + * Skip sstable on clustering in names query (CASSANDRA-10571)
 + * Remove value skipping as it breaks read-repair (CASSANDRA-10655)
 + * Fix bootstrapping with MVs (CASSANDRA-10621)
 + * Make sure EACH_QUORUM reads are using NTS (CASSANDRA-10584)
 + * Fix MV replica filtering for non-NetworkTopologyStrategy (CASSANDRA-10634)
 + * (Hadoop) fix CIF describeSplits() not handling 0 size estimates 
(CASSANDRA-10600)
 + * Fix reading of legacy sstables (CASSANDRA-10590)
 + * Use CQL type names in schema metadata tables (CASSANDRA-10365)
 + * Guard batchlog replay against integer division by zero (CASSANDRA-9223)
 + * Fix bug when adding a column to thrift with the same name than a primary 
key (CASSANDRA-10608)
 + * Add client address argument to IAuthenticator::newSaslNegotiator 
(CASSANDRA-8068)
 + * Fix implementation of LegacyLayout.LegacyBoundComparator (CASSANDRA-10602)
 + * Don't use 'names query' read path for counters (CASSANDRA-10572)
 + * Fix backward compatibility for counters (CASSANDRA-10470)
 + * Remove memory_allocator paramter from cassandra.yaml 
(CASSANDRA-10581,10628)
 + * Execute the metadata reload task of all registered indexes on CFS::reload 
(CASSANDRA-10604)
 + * Fix thrift cas operations with defined columns (CASSANDRA-10576)
 + * Fix PartitionUpdate.operationCount()for updates with static column 
operations (CASSANDRA-10606)
 + * Fix thrift get() queries with defined columns (CASSANDRA-10586)
 + * Fix marking of indexes as built and removed (CASSANDRA-10601)
 + * Skip initialization of non-registered 2i instances, remove 
Index::getIndexName (CASSANDRA-10595)
 + * Fix batches on multiple tables (CASSANDRA-10554)
 + * Ensure compaction options are validated when updating KeyspaceMetadata 
(CASSANDRA-10569)
 + * Flatten Iterator Transformation Hierarchy (CASSANDRA-9975)
 + * Remove token generator (CASSANDRA-5261)
 + * RolesCache should not be created for any authenticator that does not 
requireAuthentication (CASSANDRA-10562)
 + * Fix LogTransaction checking only a single directory for files 
(CASSANDRA-10421)
 + * Fix handling of range tombstones when reading old format sstables 
(CASSANDRA-10360)
 + * Aggregate with Initial Condition fails with C* 3.0 (CASSANDRA-10367)
 +Merged from 2.2:
   * (cqlsh) show partial trace if incomplete after max_trace_wait 
(CASSANDRA-7645)
   * Use most up-to-date version of schema for system tables (CASSANDRA-10652)
   * D

[05/15] cassandra git commit: Invalidate row/counter cache after stream receive task is completed

2015-11-11 Thread yukim
Invalidate row/counter cache after stream receive task is completed

patch by Paulo Motta; reviewed by yukim for CASSANDRA-10341


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1c3ff924
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1c3ff924
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1c3ff924

Branch: refs/heads/cassandra-3.1
Commit: 1c3ff9242a0bfc5c544c69f68ee7b17a464a5ab3
Parents: 6bad57f
Author: Paulo Motta 
Authored: Wed Nov 11 13:26:22 2015 -0600
Committer: Yuki Morishita 
Committed: Wed Nov 11 15:52:37 2015 -0600

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  | 31 ++
 .../db/compaction/CompactionController.java |  5 --
 src/java/org/apache/cassandra/dht/Bounds.java   | 62 
 .../cassandra/io/sstable/SSTableRewriter.java   |  1 -
 .../cassandra/streaming/StreamReader.java   |  1 -
 .../cassandra/streaming/StreamReceiveTask.java  | 36 
 .../apache/cassandra/db/CounterCacheTest.java   | 45 ++
 .../org/apache/cassandra/db/RowCacheTest.java   | 61 +--
 .../org/apache/cassandra/dht/BoundsTest.java| 61 +++
 10 files changed, 291 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1c3ff924/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index fa2017a..92244a0 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.12
+ * Invalidate cache after stream receive task is completed (CASSANDRA-10341)
  * Reject counter writes in CQLSSTableWriter (CASSANDRA-10258)
  * Remove superfluous COUNTER_MUTATION stage mapping (CASSANDRA-10605)
  * Improve json2sstable error reporting on nonexistent columns 
(CASSANDRA-10401)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1c3ff924/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 906e18c..54f6fff 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -2505,6 +2505,37 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
 
         CacheService.instance.invalidateCounterCacheForCf(metadata.ksAndCFName);
     }
 
+    public int invalidateRowCache(Collection<Bounds<Token>> boundsToInvalidate)
+    {
+        int invalidatedKeys = 0;
+        for (RowCacheKey key : CacheService.instance.rowCache.getKeySet())
+        {
+            DecoratedKey dk = partitioner.decorateKey(ByteBuffer.wrap(key.key));
+            if (key.ksAndCFName.equals(metadata.ksAndCFName) && Bounds.isInBounds(dk.getToken(), boundsToInvalidate))
+            {
+                invalidateCachedRow(dk);
+                invalidatedKeys++;
+            }
+        }
+
+        return invalidatedKeys;
+    }
+
+    public int invalidateCounterCache(Collection<Bounds<Token>> boundsToInvalidate)
+    {
+        int invalidatedKeys = 0;
+        for (CounterCacheKey key : CacheService.instance.counterCache.getKeySet())
+        {
+            DecoratedKey dk = partitioner.decorateKey(ByteBuffer.wrap(key.partitionKey));
+            if (key.ksAndCFName.equals(metadata.ksAndCFName) && Bounds.isInBounds(dk.getToken(), boundsToInvalidate))
+            {
+                CacheService.instance.counterCache.remove(key);
+                invalidatedKeys++;
+            }
+        }
+        return invalidatedKeys;
+    }
+
 /**
  * @return true if @param key is contained in the row cache
  */

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1c3ff924/src/java/org/apache/cassandra/db/compaction/CompactionController.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/CompactionController.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionController.java
index f8ff163..35d0832 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionController.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionController.java
@@ -189,11 +189,6 @@ public class CompactionController implements AutoCloseable
 return min;
 }
 
-public void invalidateCachedRow(DecoratedKey key)
-{
-cfs.invalidateCachedRow(key);
-}
-
 public void close()
 {
 overlappingSSTables.release();

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1c3ff924/src/java/org/apache/cassandra/dht/Bounds.java
--
diff --git a/src/ja

[13/15] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.1

2015-11-11 Thread yukim
Merge branch 'cassandra-3.0' into cassandra-3.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/0cafccfc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/0cafccfc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/0cafccfc

Branch: refs/heads/cassandra-3.1
Commit: 0cafccfc5ce80fc91817de6a5cd74702836de508
Parents: 1fe90d3 0de23f2
Author: Yuki Morishita 
Authored: Wed Nov 11 16:16:44 2015 -0600
Committer: Yuki Morishita 
Committed: Wed Nov 11 16:16:44 2015 -0600

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  | 34 +++
 .../db/compaction/CompactionController.java |  5 --
 src/java/org/apache/cassandra/dht/Bounds.java   | 62 
 .../cassandra/streaming/StreamReader.java   | 12 ++--
 .../cassandra/streaming/StreamReceiveTask.java  | 37 +++-
 .../compress/CompressedStreamReader.java|  2 +-
 .../apache/cassandra/db/CounterCacheTest.java   | 48 +++
 .../org/apache/cassandra/db/RowCacheTest.java   | 50 
 .../org/apache/cassandra/dht/BoundsTest.java| 61 +++
 10 files changed, 298 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/0cafccfc/CHANGES.txt
--



[04/15] cassandra git commit: Invalidate row/counter cache after stream receive task is completed

2015-11-11 Thread yukim
Invalidate row/counter cache after stream receive task is completed

patch by Paulo Motta; reviewed by yukim for CASSANDRA-10341


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1c3ff924
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1c3ff924
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1c3ff924

Branch: refs/heads/cassandra-3.0
Commit: 1c3ff9242a0bfc5c544c69f68ee7b17a464a5ab3
Parents: 6bad57f
Author: Paulo Motta 
Authored: Wed Nov 11 13:26:22 2015 -0600
Committer: Yuki Morishita 
Committed: Wed Nov 11 15:52:37 2015 -0600

--
 CHANGES.txt |  1 +
 .../apache/cassandra/db/ColumnFamilyStore.java  | 31 ++
 .../db/compaction/CompactionController.java |  5 --
 src/java/org/apache/cassandra/dht/Bounds.java   | 62 
 .../cassandra/io/sstable/SSTableRewriter.java   |  1 -
 .../cassandra/streaming/StreamReader.java   |  1 -
 .../cassandra/streaming/StreamReceiveTask.java  | 36 
 .../apache/cassandra/db/CounterCacheTest.java   | 45 ++
 .../org/apache/cassandra/db/RowCacheTest.java   | 61 +--
 .../org/apache/cassandra/dht/BoundsTest.java| 61 +++
 10 files changed, 291 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/1c3ff924/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index fa2017a..92244a0 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.12
+ * Invalidate cache after stream receive task is completed (CASSANDRA-10341)
  * Reject counter writes in CQLSSTableWriter (CASSANDRA-10258)
  * Remove superfluous COUNTER_MUTATION stage mapping (CASSANDRA-10605)
  * Improve json2sstable error reporting on nonexistent columns 
(CASSANDRA-10401)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1c3ff924/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
--
diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java 
b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
index 906e18c..54f6fff 100644
--- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
+++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java
@@ -2505,6 +2505,37 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean
 
         CacheService.instance.invalidateCounterCacheForCf(metadata.ksAndCFName);
     }
 
+    public int invalidateRowCache(Collection<Bounds<Token>> boundsToInvalidate)
+    {
+        int invalidatedKeys = 0;
+        for (RowCacheKey key : CacheService.instance.rowCache.getKeySet())
+        {
+            DecoratedKey dk = partitioner.decorateKey(ByteBuffer.wrap(key.key));
+            if (key.ksAndCFName.equals(metadata.ksAndCFName) && Bounds.isInBounds(dk.getToken(), boundsToInvalidate))
+            {
+                invalidateCachedRow(dk);
+                invalidatedKeys++;
+            }
+        }
+
+        return invalidatedKeys;
+    }
+
+    public int invalidateCounterCache(Collection<Bounds<Token>> boundsToInvalidate)
+    {
+        int invalidatedKeys = 0;
+        for (CounterCacheKey key : CacheService.instance.counterCache.getKeySet())
+        {
+            DecoratedKey dk = partitioner.decorateKey(ByteBuffer.wrap(key.partitionKey));
+            if (key.ksAndCFName.equals(metadata.ksAndCFName) && Bounds.isInBounds(dk.getToken(), boundsToInvalidate))
+            {
+                CacheService.instance.counterCache.remove(key);
+                invalidatedKeys++;
+            }
+        }
+        return invalidatedKeys;
+    }
+
 /**
  * @return true if @param key is contained in the row cache
  */

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1c3ff924/src/java/org/apache/cassandra/db/compaction/CompactionController.java
--
diff --git 
a/src/java/org/apache/cassandra/db/compaction/CompactionController.java 
b/src/java/org/apache/cassandra/db/compaction/CompactionController.java
index f8ff163..35d0832 100644
--- a/src/java/org/apache/cassandra/db/compaction/CompactionController.java
+++ b/src/java/org/apache/cassandra/db/compaction/CompactionController.java
@@ -189,11 +189,6 @@ public class CompactionController implements AutoCloseable
 return min;
 }
 
-public void invalidateCachedRow(DecoratedKey key)
-{
-cfs.invalidateCachedRow(key);
-}
-
 public void close()
 {
 overlappingSSTables.release();

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1c3ff924/src/java/org/apache/cassandra/dht/Bounds.java
--
diff --git a/src/ja

[jira] [Commented] (CASSANDRA-8505) Invalid results are returned while secondary index are being build

2015-11-11 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15001195#comment-15001195
 ] 

Sam Tunnicliffe commented on CASSANDRA-8505:


We could certainly do that in a utest (and we have plenty of tests with such 
custom indexes), but not in a dtest, as it would require the custom index to be 
on the classpath. Naturally, a utest won't exercise the distributed side of 
things, but it's still better than no testing, so +1

> Invalid results are returned while secondary index are being build
> --
>
> Key: CASSANDRA-8505
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8505
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 2.2.x, 3.0.x
>
>
> If you request an index creation and then execute a query that use the index 
> the results returned might be invalid until the index is fully build. This is 
> caused by the fact that the table column will be marked as indexed before the 
> index is ready.
> The following unit tests can be use to reproduce the problem:
> {code}
> @Test
> public void testIndexCreatedAfterInsert() throws Throwable
> {
> createTable("CREATE TABLE %s (a int, b int, c int, primary key((a, 
> b)))");
> execute("INSERT INTO %s (a, b, c) VALUES (0, 0, 0);");
> execute("INSERT INTO %s (a, b, c) VALUES (0, 1, 1);");
> execute("INSERT INTO %s (a, b, c) VALUES (0, 2, 2);");
> execute("INSERT INTO %s (a, b, c) VALUES (1, 0, 3);");
> execute("INSERT INTO %s (a, b, c) VALUES (1, 1, 4);");
> 
> createIndex("CREATE INDEX ON %s(b)");
> 
> assertRows(execute("SELECT * FROM %s WHERE b = ?;", 1),
>row(0, 1, 1),
>row(1, 1, 4));
> }
> 
> @Test
> public void testIndexCreatedBeforeInsert() throws Throwable
> {
> createTable("CREATE TABLE %s (a int, b int, c int, primary key((a, 
> b)))");
> createIndex("CREATE INDEX ON %s(b)");
> 
> execute("INSERT INTO %s (a, b, c) VALUES (0, 0, 0);");
> execute("INSERT INTO %s (a, b, c) VALUES (0, 1, 1);");
> execute("INSERT INTO %s (a, b, c) VALUES (0, 2, 2);");
> execute("INSERT INTO %s (a, b, c) VALUES (1, 0, 3);");
> execute("INSERT INTO %s (a, b, c) VALUES (1, 1, 4);");
> assertRows(execute("SELECT * FROM %s WHERE b = ?;", 1),
>row(0, 1, 1),
>row(1, 1, 4));
> }
> {code}
> The first test will fail while the second will work. 
> In my opinion the first test should reject the request as invalid (as if the 
> index was not existing) until the index is fully build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8505) Invalid results are returned while secondary index are being build

2015-11-11 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15001168#comment-15001168
 ] 

Tyler Hobbs commented on CASSANDRA-8505:


It occurred to me that we could also create a custom secondary index that 
delayed the build completion, either by waiting for some sort of signal or by 
sleeping.
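
A rough sketch of that idea (the class and method names here are hypothetical
test scaffolding, not the secondary index API): the custom index parks its
build on a latch, and the test releases it only after asserting what queries
do while the build is still pending.

{code}
import java.util.concurrent.CountDownLatch;

// Hypothetical test-only gate: a custom index build task would call awaitGo()
// from its build step, keeping the index "building" until the test says go.
public final class SlowIndexBuildGate
{
    private static final CountDownLatch GO = new CountDownLatch(1);

    // Called from the (hypothetical) index build task.
    public static void awaitGo() throws InterruptedException
    {
        GO.await();
    }

    // Called from the test once it has checked query behaviour mid-build.
    public static void release()
    {
        GO.countDown();
    }
}
{code}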

> Invalid results are returned while secondary index are being build
> --
>
> Key: CASSANDRA-8505
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8505
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 2.2.x, 3.0.x
>
>
> If you request an index creation and then execute a query that use the index 
> the results returned might be invalid until the index is fully build. This is 
> caused by the fact that the table column will be marked as indexed before the 
> index is ready.
> The following unit tests can be use to reproduce the problem:
> {code}
> @Test
> public void testIndexCreatedAfterInsert() throws Throwable
> {
> createTable("CREATE TABLE %s (a int, b int, c int, primary key((a, 
> b)))");
> execute("INSERT INTO %s (a, b, c) VALUES (0, 0, 0);");
> execute("INSERT INTO %s (a, b, c) VALUES (0, 1, 1);");
> execute("INSERT INTO %s (a, b, c) VALUES (0, 2, 2);");
> execute("INSERT INTO %s (a, b, c) VALUES (1, 0, 3);");
> execute("INSERT INTO %s (a, b, c) VALUES (1, 1, 4);");
> 
> createIndex("CREATE INDEX ON %s(b)");
> 
> assertRows(execute("SELECT * FROM %s WHERE b = ?;", 1),
>row(0, 1, 1),
>row(1, 1, 4));
> }
> 
> @Test
> public void testIndexCreatedBeforeInsert() throws Throwable
> {
> createTable("CREATE TABLE %s (a int, b int, c int, primary key((a, 
> b)))");
> createIndex("CREATE INDEX ON %s(b)");
> 
> execute("INSERT INTO %s (a, b, c) VALUES (0, 0, 0);");
> execute("INSERT INTO %s (a, b, c) VALUES (0, 1, 1);");
> execute("INSERT INTO %s (a, b, c) VALUES (0, 2, 2);");
> execute("INSERT INTO %s (a, b, c) VALUES (1, 0, 3);");
> execute("INSERT INTO %s (a, b, c) VALUES (1, 1, 4);");
> assertRows(execute("SELECT * FROM %s WHERE b = ?;", 1),
>row(0, 1, 1),
>row(1, 1, 4));
> }
> {code}
> The first test will fail while the second will work. 
> In my opinion the first test should reject the request as invalid (as if the 
> index was not existing) until the index is fully build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9085) Bind JMX to localhost unless explicitly configured otherwise

2015-11-11 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15001071#comment-15001071
 ] 

Ariel Weisberg edited comment on CASSANDRA-9085 at 11/11/15 8:59 PM:
-

I've had to deal with the RMI GC issue before. There are properties you can set 
to have the GCs occur less often/never so you can allow explicit gc for things 
like DBB memory. I also think it's longer than 30 seconds by default?

This is where I last had to tackle it https://issues.voltdb.com/browse/ENG-5856

We actually don't want to disable explicit GC, as it has valid uses for 
reclaiming direct byte buffers. The alternative is an OOM that maybe didn't 
have to happen.

If you are very confident in your DBB handling and deallocate explicitly then 
maybe it isn't an issue but it makes me uncomfortable.
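
For reference, the properties being alluded to are the standard RMI distributed
GC interval settings; a hedged example (the one-hour values are illustrative,
not a recommendation):

{noformat}
# Make the RMI DGC's periodic System.gc() calls rare (values in ms)
-Dsun.rmi.dgc.server.gcInterval=3600000
-Dsun.rmi.dgc.client.gcInterval=3600000

# Deliberately NOT setting -XX:+DisableExplicitGC, since explicit GC is still
# wanted as a fallback for reclaiming direct (DBB) memory, per the above.
{noformat}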


was (Author: aweisberg):
I've had to deal with the RMI GC issue before. There are properties you can set 
to have the GCs occur less often/never so you can allow explicit gc for things 
like DBB memory. I also think it's longer than 30 seconds by default?

This is where I last had to tackle it https://issues.voltdb.com/browse/ENG-5856

We actually don't want to enable explicit GC as it has valid uses for 
reclaiming direct byte buffers.

> Bind JMX to localhost unless explicitly configured otherwise
> 
>
> Key: CASSANDRA-9085
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9085
> Project: Cassandra
>  Issue Type: Bug
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
>Priority: Critical
> Fix For: 2.0.14, 2.1.4
>
>
> Cassandra's default JMX config can lead to someone executing arbitrary code:  
> see http://www.mail-archive.com/user@cassandra.apache.org/msg41819.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-7904) Repair hangs

2015-11-11 Thread Anuj Wadehra (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15001053#comment-15001053
 ] 

Anuj Wadehra edited comment on CASSANDRA-7904 at 11/11/15 8:58 PM:
---

[#Aleksey Yeschenko] I am sorry, but I think this issue must be reopened. We 
are facing it in 2.0.14. You marked it a duplicate of CASSANDRA-7909, which was 
fixed in 2.0.11, so the problem should not still exist in 2.0.14.

We have 2 DCs with 3 nodes each, at remote locations with 10GBps connectivity. 
We are able to complete repair on 5 nodes. On only one node in DC2 we are 
unable to complete repair (-par -pr); it always hangs. The node sends merkle 
tree requests, but one or more nodes in DC1 (remote) never show that they sent 
the merkle tree reply to the requesting node.
Repair hangs indefinitely. 

After increasing request_timeout_in_ms on the affected node, we were able to 
run repair successfully on one of two occasions.

I analyzed some code in OutboundTcpConnection.java of 2.0.14 and see multiple 
possible issues there:
1. The scenario where two consecutive merkle tree requests fail is not handled. 
No exception is printed in the logs in such a case, tpstats does not show the 
repair messages as dropped, and repair hangs indefinitely.
2. Only an IOException leads to a retry of a request. If a runtime exception 
occurs, no retry is done and the exception is logged at DEBUG instead of ERROR. 
Repair would hang here too.
3. If the isTimeOut method always returns false for a non-droppable message 
such as a merkle tree request (verb=REPAIR_MESSAGE), why does increasing the 
request timeout solve the problem for many people ([#Duncan Sands], 
[#Razi Khaja] and me)? Is the logic broken? (See the sketch after this list.)
4. Increasing the request timeout can only be a temporary workaround, not a 
fix. A root cause analysis and a permanent fix are needed.
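
A hedged illustration of the pattern being questioned in point 3 (not the
actual OutboundTcpConnection source): if the age check is gated on the message
being droppable, a non-droppable verb such as REPAIR_MESSAGE can never expire
on this path, and request_timeout_in_ms would not change its behaviour here.

{code}
// Hypothetical shape of the check discussed above, for illustration only.
final class TimeoutCheckSketch
{
    static boolean isTimedOut(boolean droppable, long enqueuedAtMillis, long timeoutMillis)
    {
        // Non-droppable messages (droppable == false) never time out here,
        // regardless of how large or small timeoutMillis is.
        return droppable && (System.currentTimeMillis() - enqueuedAtMillis) > timeoutMillis;
    }
}
{code}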
 


was (Author: eanujwa):
[#Aleksey Yeschenko] I am sorry. I think that this issue must be reopened. We 
are facing this issue in 2.0.14. You have marked it a duplicate of 
CASSANDRA-7909 which was fixed in 2.0.11 so the issue must not be there in 
2.0.14.

We have 2 DCs at remote locations with 10GBps connectivity.On only one node in 
DC2, we are unable to complete repair (-par -pr) as it always hangs. Node sends 
Merkle Tree requests, but one or more nodes in DC1 (remote) never show that 
they sent the merkle tree reply to requesting node.
Repair hangs infinitely. 

After increasing request_timeout_in_ms on affected node, we were able to 
successfully run repair on one of the two occassions.

I analyzed some code in OutboundTcpConnection.java of 2.0.14 and see multiple 
possible issues there:
1. Scenario where 2 consecutive merkle tree requests fail is not handled. No 
Exception is printed in logs in such a case, tpstats also dont display repair 
messages as dropped and repair will hang infinitely.
2. Only IOException leads to retry of a request. In case some Runtime Exception 
occurs, no retry is done and exception is written at DEBUG instead of ERROR. 
Repair should hang here too.
3. When isTimeOut method always returns false for non-droppable message such as 
Merkle Tree Request(verb=REPAIR_MESSAGE),why increasing request timeout is 
solving problem of many people -[#Duncan Sands],[#Razi Khaja] and me. Is the 
logic broken?
4. Increasing request timeout can only be a temporary workaround not a fix. 
Root Cause Analysis of problem and permanent fix is needed.
 

> Repair hangs
> 
>
> Key: CASSANDRA-7904
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7904
> Project: Cassandra
>  Issue Type: Bug
> Environment: C* 2.0.10, ubuntu 14.04, Java HotSpot(TM) 64-Bit Server, 
> java version "1.7.0_45"
>Reporter: Duncan Sands
> Attachments: ls-172.18.68.138, ls-192.168.21.13, ls-192.168.60.134, 
> ls-192.168.60.136
>
>
> Cluster of 22 nodes spread over 4 data centres.  Not used on the weekend, so 
> repair is run on all nodes (in a staggered fashion) on the weekend.  Nodetool 
> options: -par -pr.  There is usually some overlap in the repairs: repair on 
> one node may well still be running when repair is started on the next node.  
> Repair hangs for some of the nodes almost every weekend.  It hung last 
> weekend, here are the details:
> In the whole cluster, only one node had an exception since C* was last 
> restarted.  This node is 192.168.60.136 and the exception is harmless: a 
> client disconnected abruptly.
> tpstats
>   4 nodes have a non-zero value for "active" or "pending" in 
> AntiEntropySessions.  These nodes all have Active => 1 and Pending => 1.  The 
> nodes are:
>   192.168.21.13 (data centre R)
>   192.168.60.134 (data centre A)
>   192.168.60.136 (data centre A)
>   172.18.68.138 (data centre Z)
> compactionstats:
>   No compactions.  All nodes have:
> pending tasks: 0
> Active comp

[jira] [Comment Edited] (CASSANDRA-9085) Bind JMX to localhost unless explicitly configured otherwise

2015-11-11 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15001071#comment-15001071
 ] 

Ariel Weisberg edited comment on CASSANDRA-9085 at 11/11/15 8:57 PM:
-

I've had to deal with the RMI GC issue before. There are properties you can set 
to have the GCs occur less often/never so you can allow explicit gc for things 
like DBB memory. I also think it's longer than 30 seconds by default?

This is where I last had to tackle it https://issues.voltdb.com/browse/ENG-5856

We actually don't want to enable explicit GC as it has valid uses for 
reclaiming direct byte buffers.


was (Author: aweisberg):
I've had to deal with the RMI GC issue before. There are properties you can set 
to have the GCs occur less often/never so you can allow explicit gc for things 
like DBB memory. I also think it's longer than 30 seconds by default?

This is where I last had to tackle it https://issues.voltdb.com/browse/ENG-5856

> Bind JMX to localhost unless explicitly configured otherwise
> 
>
> Key: CASSANDRA-9085
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9085
> Project: Cassandra
>  Issue Type: Bug
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
>Priority: Critical
> Fix For: 2.0.14, 2.1.4
>
>
> Cassandra's default JMX config can lead to someone executing arbitrary code:  
> see http://www.mail-archive.com/user@cassandra.apache.org/msg41819.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9085) Bind JMX to localhost unless explicitly configured otherwise

2015-11-11 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15001071#comment-15001071
 ] 

Ariel Weisberg commented on CASSANDRA-9085:
---

I've had to deal with the RMI GC issue before. There are properties you can set 
to have the GCs occur less often/never so you can allow explicit gc for things 
like DBB memory. I also think it's longer than 30 seconds by default?

This is where I last had to tackle it https://issues.voltdb.com/browse/ENG-5856

> Bind JMX to localhost unless explicitly configured otherwise
> 
>
> Key: CASSANDRA-9085
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9085
> Project: Cassandra
>  Issue Type: Bug
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
>Priority: Critical
> Fix For: 2.0.14, 2.1.4
>
>
> Cassandra's default JMX config can lead to someone executing arbitrary code:  
> see http://www.mail-archive.com/user@cassandra.apache.org/msg41819.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10690) Secondary index does not process deletes unless columns are specified

2015-11-11 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-10690:

Since Version: 3.0.0 rc1  (was: 3.0 beta 1)

> Secondary index does not process deletes unless columns are specified
> -
>
> Key: CASSANDRA-10690
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10690
> Project: Cassandra
>  Issue Type: Bug
>  Components: index
>Reporter: Tyler Hobbs
> Fix For: 3.0.1, 3.1
>
>
> The new secondary index API does not notify indexes of single-row or slice 
> deletions unless specific columns are deleted.  I believe the problem is that 
> in {{SecondaryIndexManager.newUpdateTransaction()}}, we skip indexes unless 
> {{index.indexes(update.columns())}}.  When no columns are specified in the 
> deletion, {{update.columns()}} is empty, which causes all indexes to be 
> skipped.
> I think the correct fix is to do something like this in the 
> {{ModificationStatement}} constructor:
> {code}
> if (type == StatementType.DELETE && modifiedColumns.isEmpty())
> modifiedColumns = cfm.partitionColumns();
> {code}
> However, I'm not sure if that may have unintended side-effects.  What do you 
> think, [~slebresne]?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-7904) Repair hangs

2015-11-11 Thread Anuj Wadehra (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15001053#comment-15001053
 ] 

Anuj Wadehra edited comment on CASSANDRA-7904 at 11/11/15 8:51 PM:
---

[#Aleksey Yeschenko] I am sorry. I think that this issue must be reopened. We 
are facing this issue in 2.0.14. You have marked it a duplicate of 
CASSANDRA-7909 which was fixed in 2.0.11 so the issue must not be there in 
2.0.14.

We have 2 DCs at remote locations with 10GBps connectivity.On only one node in 
DC2, we are unable to complete repair (-par -pr) as it always hangs. Node sends 
Merkle Tree requests, but one or more nodes in DC1 (remote) never show that 
they sent the merkle tree reply to requesting node.
Repair hangs infinitely. 

After increasing request_timeout_in_ms on affected node, we were able to 
successfully run repair on one of the two occassions.

I analyzed some code in OutboundTcpConnection.java of 2.0.14 and see multiple 
possible issues there:
1. Scenario where 2 consecutive merkle tree requests fail is not handled. No 
Exception is printed in logs in such a case, tpstats also dont display repair 
messages as dropped and repair will hang infinitely.
2. Only IOException leads to retry of a request. In case some Runtime Exception 
occurs, no retry is done and exception is written at DEBUG instead of ERROR. 
Repair should hang here too.
3. When isTimeOut method always returns false for non-droppable message such as 
Merkle Tree Request(verb=REPAIR_MESSAGE),why increasing request timeout is 
solving problem of many people -[#Duncan Sands],[#Razi Khaja] and me. Is the 
logic broken?
4. Increasing request timeout can only be a temporary workaround not a fix. 
Root Cause Analysis of problem and permanent fix is needed.
 


was (Author: eanujwa):
[#Aleksey Yeschenko] I am sorry. I think that this issue must be reopened. We 
are facing this issue in 2.0.14. You have marked it a duplicate of 
CASSANDRA-7909 which was fixed in 2.0.11 so the issue must not be there in 
2.0.14.

We have 2 DCs at remote locations with 10GBps connectivity.On only one node in 
DC2, we are unable to complete repair (-par -pr) as it always hangs. Node sends 
Merkle Tree requests, but one or more nodes in DC1 (remote) never show that 
they sent the merkle tree reply to requesting node.
Repair hangs infinitely. 

After increasing request_timeout_in_ms on affected node, we were able to 
successfully run repair on one of the two occassions.

I analyzed some code in OutboundTcpConnection.java of 2.0.14 and see multiple 
possible issues there:
1. Scenario where 2 consecutive merkle tree requests fail is not handled. No 
Exception is printed in logs in such a case, tpstats also dont display repair 
messages as dropped and repair will hang infinitely.
2. Only IOException leads to retry of a request. In case some Runtime Exception 
occurs, no retry is done and exception is written at DEBUG instead of ERROR. 
Repair should hang here too.
3. When isTimeOut method always returns false for non-droppable message such as 
Merkle Tree Request(verb=REPAIR_MESSAGE),why increasing request timeout is 
solving problem of many people? Is the logic broken?

Exception handling must be improved. Its impossible to troubleshoot such issue 
in PROD, as no relevant error is logged.

> Repair hangs
> 
>
> Key: CASSANDRA-7904
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7904
> Project: Cassandra
>  Issue Type: Bug
> Environment: C* 2.0.10, ubuntu 14.04, Java HotSpot(TM) 64-Bit Server, 
> java version "1.7.0_45"
>Reporter: Duncan Sands
> Attachments: ls-172.18.68.138, ls-192.168.21.13, ls-192.168.60.134, 
> ls-192.168.60.136
>
>
> Cluster of 22 nodes spread over 4 data centres.  Not used on the weekend, so 
> repair is run on all nodes (in a staggered fashion) on the weekend.  Nodetool 
> options: -par -pr.  There is usually some overlap in the repairs: repair on 
> one node may well still be running when repair is started on the next node.  
> Repair hangs for some of the nodes almost every weekend.  It hung last 
> weekend, here are the details:
> In the whole cluster, only one node had an exception since C* was last 
> restarted.  This node is 192.168.60.136 and the exception is harmless: a 
> client disconnected abruptly.
> tpstats
>   4 nodes have a non-zero value for "active" or "pending" in 
> AntiEntropySessions.  These nodes all have Active => 1 and Pending => 1.  The 
> nodes are:
>   192.168.21.13 (data centre R)
>   192.168.60.134 (data centre A)
>   192.168.60.136 (data centre A)
>   172.18.68.138 (data centre Z)
> compactionstats:
>   No compactions.  All nodes have:
> pending tasks: 0
> Active compaction remaining time :n/a
> netstats:
>   All except one node have nothing.  One node (192.168.60.131, not one

[jira] [Commented] (CASSANDRA-10690) Secondary index does not process deletes unless columns are specified

2015-11-11 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15001067#comment-15001067
 ] 

Tyler Hobbs commented on CASSANDRA-10690:
-

It looks like this bug was introduced by CASSANDRA-10220 (commit 
{{c3bc85641bc3297d151cdd983666544dd240f941}}).

> Secondary index does not process deletes unless columns are specified
> -
>
> Key: CASSANDRA-10690
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10690
> Project: Cassandra
>  Issue Type: Bug
>  Components: index
>Reporter: Tyler Hobbs
> Fix For: 3.0.1, 3.1
>
>
> The new secondary index API does not notify indexes of single-row or slice 
> deletions unless specific columns are deleted.  I believe the problem is that 
> in {{SecondaryIndexManager.newUpdateTransaction()}}, we skip indexes unless 
> {{index.indexes(update.columns())}}.  When no columns are specified in the 
> deletion, {{update.columns()}} is empty, which causes all indexes to be 
> skipped.
> I think the correct fix is to do something like this in the 
> {{ModificationStatement}} constructor:
> {code}
> if (type == StatementType.DELETE && modifiedColumns.isEmpty())
> modifiedColumns = cfm.partitionColumns();
> {code}
> However, I'm not sure if that may have unintended side-effects.  What do you 
> think, [~slebresne]?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-7904) Repair hangs

2015-11-11 Thread Anuj Wadehra (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15001053#comment-15001053
 ] 

Anuj Wadehra edited comment on CASSANDRA-7904 at 11/11/15 8:44 PM:
---

[#Aleksey Yeschenko] I am sorry. I think that this issue must be reopened. We 
are facing this issue in 2.0.14. You have marked it a duplicate of 
CASSANDRA-7909 which was fixed in 2.0.11 so the issue must not be there in 
2.0.14.

We have 2 DCs at remote locations with 10GBps connectivity.On only one node in 
DC2, we are unable to complete repair (-par -pr) as it always hangs. Node sends 
Merkle Tree requests, but one or more nodes in DC1 (remote) never show that 
they sent the merkle tree reply to requesting node.
Repair hangs infinitely. 

After increasing request_timeout_in_ms on affected node, we were able to 
successfully run repair on one of the two occassions.

I analyzed some code in OutboundTcpConnection.java of 2.0.14 and see multiple 
possible issues there:
1. Scenario where 2 consecutive merkle tree requests fail is not handled. No 
Exception is printed in logs in such a case, tpstats also dont display repair 
messages as dropped and repair will hang infinitely.
2. Only IOException leads to retry of a request. In case some Runtime Exception 
occurs, no retry is done and exception is written at DEBUG instead of ERROR. 
Repair should hang here too.
3. When isTimeOut method always returns false for non-droppable message such as 
Merkle Tree Request(verb=REPAIR_MESSAGE),why increasing request timeout is 
solving problem of many people? Is the logic broken?

Exception handling must be improved. Its impossible to troubleshoot such issue 
in PROD, as no relevant error is logged.


was (Author: eanujwa):
[#Aleksey Yeschenko] I am sorry. I think that this issue must be reopened. We 
are facing this issue in 2.0.14. You have marked it a duplicate of 
CASSANDRA-7909 which was fixed in 2.0.11 so the issue must not be there 2.0.14.

We have 2 DCs at remote locations with 10GBps connectivity.On only one node in 
DC2, we are unable to complete repair (-par -pr) as it always hangs. Node sends 
Merkle Tree requests, but one or more nodes in DC1 (remote) never show that 
they sent the merkle tree reply to requesting node.
Repair hangs infinitely. 

After increasing request_timeout_in_ms on affected node, we were able to 
successfully run repair on one of the two occassions.

I analyzed some code in OutboundTcpConnection.java of 2.0.14 and see multiple 
possible issues there:
1. Scenario where 2 consecutive merkle tree requests fail is not handled. No 
Exception is printed in logs in such a case, tpstats also dont display repair 
messages as dropped and repair will hang infinitely.
2. Only IOException leads to retry of a request. In case some Runtime Exception 
occurs, no retry is done and exception is written at DEBUG instead of ERROR. 
Repair should hang here too.
3. When isTimeOut method always returns false for non-droppable message such as 
Merkle Tree Request(verb=REPAIR_MESSAGE),why increasing request timeout is 
solving problem of many people? Is the logic broken?

Exception handling must be improved. Its impossible to troubleshoot such issue 
in PROD, as no relevant error is logged.

> Repair hangs
> 
>
> Key: CASSANDRA-7904
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7904
> Project: Cassandra
>  Issue Type: Bug
> Environment: C* 2.0.10, ubuntu 14.04, Java HotSpot(TM) 64-Bit Server, 
> java version "1.7.0_45"
>Reporter: Duncan Sands
> Attachments: ls-172.18.68.138, ls-192.168.21.13, ls-192.168.60.134, 
> ls-192.168.60.136
>
>
> Cluster of 22 nodes spread over 4 data centres.  Not used on the weekend, so 
> repair is run on all nodes (in a staggered fashion) on the weekend.  Nodetool 
> options: -par -pr.  There is usually some overlap in the repairs: repair on 
> one node may well still be running when repair is started on the next node.  
> Repair hangs for some of the nodes almost every weekend.  It hung last 
> weekend, here are the details:
> In the whole cluster, only one node had an exception since C* was last 
> restarted.  This node is 192.168.60.136 and the exception is harmless: a 
> client disconnected abruptly.
> tpstats
>   4 nodes have a non-zero value for "active" or "pending" in 
> AntiEntropySessions.  These nodes all have Active => 1 and Pending => 1.  The 
> nodes are:
>   192.168.21.13 (data centre R)
>   192.168.60.134 (data centre A)
>   192.168.60.136 (data centre A)
>   172.18.68.138 (data centre Z)
> compactionstats:
>   No compactions.  All nodes have:
> pending tasks: 0
> Active compaction remaining time :n/a
> netstats:
>   All except one node have nothing.  One node (192.168.60.131, not one of the 
> nodes listed in the tpstats section above) has (n

[jira] [Commented] (CASSANDRA-8505) Invalid results are returned while secondary index are being build

2015-11-11 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15001055#comment-15001055
 ] 

Sam Tunnicliffe commented on CASSANDRA-8505:


bq. What about accepting either ReadFailureException or the complete, correct 
result? If index building got faster we might stop hitting the 
ReadFailureException case, but at least the test wouldn't flap.

Yes absolutely, though even then we're not going to be certain exactly what's 
being tested (if at all) - e.g. if the index building gets faster but we also 
introduce a regression with the {{ReadFailureException}}, we'd never know. But 
like I say, I can't think of anything better.

> Invalid results are returned while secondary index are being build
> --
>
> Key: CASSANDRA-8505
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8505
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 2.2.x, 3.0.x
>
>
> If you request an index creation and then execute a query that use the index 
> the results returned might be invalid until the index is fully build. This is 
> caused by the fact that the table column will be marked as indexed before the 
> index is ready.
> The following unit tests can be use to reproduce the problem:
> {code}
> @Test
> public void testIndexCreatedAfterInsert() throws Throwable
> {
> createTable("CREATE TABLE %s (a int, b int, c int, primary key((a, 
> b)))");
> execute("INSERT INTO %s (a, b, c) VALUES (0, 0, 0);");
> execute("INSERT INTO %s (a, b, c) VALUES (0, 1, 1);");
> execute("INSERT INTO %s (a, b, c) VALUES (0, 2, 2);");
> execute("INSERT INTO %s (a, b, c) VALUES (1, 0, 3);");
> execute("INSERT INTO %s (a, b, c) VALUES (1, 1, 4);");
> 
> createIndex("CREATE INDEX ON %s(b)");
> 
> assertRows(execute("SELECT * FROM %s WHERE b = ?;", 1),
>row(0, 1, 1),
>row(1, 1, 4));
> }
> 
> @Test
> public void testIndexCreatedBeforeInsert() throws Throwable
> {
> createTable("CREATE TABLE %s (a int, b int, c int, primary key((a, 
> b)))");
> createIndex("CREATE INDEX ON %s(b)");
> 
> execute("INSERT INTO %s (a, b, c) VALUES (0, 0, 0);");
> execute("INSERT INTO %s (a, b, c) VALUES (0, 1, 1);");
> execute("INSERT INTO %s (a, b, c) VALUES (0, 2, 2);");
> execute("INSERT INTO %s (a, b, c) VALUES (1, 0, 3);");
> execute("INSERT INTO %s (a, b, c) VALUES (1, 1, 4);");
> assertRows(execute("SELECT * FROM %s WHERE b = ?;", 1),
>row(0, 1, 1),
>row(1, 1, 4));
> }
> {code}
> The first test will fail while the second will work. 
> In my opinion the first test should reject the request as invalid (as if the 
> index was not existing) until the index is fully build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7904) Repair hangs

2015-11-11 Thread Anuj Wadehra (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15001053#comment-15001053
 ] 

Anuj Wadehra commented on CASSANDRA-7904:
-

[#Aleksey Yeschenko] I am sorry, but I think this issue must be reopened. We 
are facing it in 2.0.14. You marked it a duplicate of CASSANDRA-7909, which was 
fixed in 2.0.11, so the problem should not still exist in 2.0.14.

We have 2 DCs at remote locations with 10GBps connectivity. On only one node in 
DC2 we are unable to complete repair (-par -pr); it always hangs. The node 
sends merkle tree requests, but one or more nodes in DC1 (remote) never show 
that they sent the merkle tree reply to the requesting node.
Repair hangs indefinitely. 

After increasing request_timeout_in_ms on the affected node, we were able to 
run repair successfully on one of two occasions.

I analyzed some code in OutboundTcpConnection.java of 2.0.14 and see multiple 
possible issues there:
1. The scenario where two consecutive merkle tree requests fail is not handled. 
No exception is printed in the logs in such a case, tpstats does not show the 
repair messages as dropped, and repair hangs indefinitely.
2. Only an IOException leads to a retry of a request. If a runtime exception 
occurs, no retry is done and the exception is logged at DEBUG instead of ERROR. 
Repair would hang here too.
3. If the isTimeOut method always returns false for a non-droppable message 
such as a merkle tree request (verb=REPAIR_MESSAGE), why does increasing the 
request timeout solve the problem for many people? Is the logic broken?

Exception handling must be improved. It is impossible to troubleshoot such an 
issue in PROD, as no relevant error is logged.

> Repair hangs
> 
>
> Key: CASSANDRA-7904
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7904
> Project: Cassandra
>  Issue Type: Bug
> Environment: C* 2.0.10, ubuntu 14.04, Java HotSpot(TM) 64-Bit Server, 
> java version "1.7.0_45"
>Reporter: Duncan Sands
> Attachments: ls-172.18.68.138, ls-192.168.21.13, ls-192.168.60.134, 
> ls-192.168.60.136
>
>
> Cluster of 22 nodes spread over 4 data centres.  Not used on the weekend, so 
> repair is run on all nodes (in a staggered fashion) on the weekend.  Nodetool 
> options: -par -pr.  There is usually some overlap in the repairs: repair on 
> one node may well still be running when repair is started on the next node.  
> Repair hangs for some of the nodes almost every weekend.  It hung last 
> weekend, here are the details:
> In the whole cluster, only one node had an exception since C* was last 
> restarted.  This node is 192.168.60.136 and the exception is harmless: a 
> client disconnected abruptly.
> tpstats
>   4 nodes have a non-zero value for "active" or "pending" in 
> AntiEntropySessions.  These nodes all have Active => 1 and Pending => 1.  The 
> nodes are:
>   192.168.21.13 (data centre R)
>   192.168.60.134 (data centre A)
>   192.168.60.136 (data centre A)
>   172.18.68.138 (data centre Z)
> compactionstats:
>   No compactions.  All nodes have:
> pending tasks: 0
> Active compaction remaining time :n/a
> netstats:
>   All except one node have nothing.  One node (192.168.60.131, not one of the 
> nodes listed in the tpstats section above) has (note the Responses Pending 
> value of 1):
> Mode: NORMAL
> Not sending any streams.
> Read Repair Statistics:
> Attempted: 4233
> Mismatch (Blocking): 0
> Mismatch (Background): 243
> Pool NameActive   Pending  Completed
> Commandsn/a 0   34785445
> Responses   n/a 1   38567167
> Repair sessions
>   I looked for repair sessions that failed to complete.  On 3 of the 4 nodes 
> mentioned in tpstats above I found that they had sent merkle tree requests 
> and got responses from all but one node.  In the log file for the node that 
> failed to respond there is no sign that it ever received the request.  On 1 
> node (172.18.68.138) it looks like responses were received from every node, 
> some streaming was done, and then... nothing.  Details:
>   Node 192.168.21.13 (data centre R):
> Sent merkle trees to /172.18.33.24, /192.168.60.140, /192.168.60.142, 
> /172.18.68.139, /172.18.68.138, /172.18.33.22, /192.168.21.13 for table 
> brokers, never got a response from /172.18.68.139.  On /172.18.68.139, just 
> before this time it sent a response for the same repair session but a 
> different table, and there is no record of it receiving a request for table 
> brokers.
>   Node 192.168.60.134 (data centre A):
> Sent merkle trees to /172.18.68.139, /172.18.68.138, /192.168.60.132, 
> /192.168.21.14, /192.168.60.134 for table swxess_outbound, never got a 
> response from /172.18.68.138.  On /172.18.68.138, just before this time it 
> se

[jira] [Commented] (CASSANDRA-10646) crash_during_decommission_test dtest fails on windows

2015-11-11 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15001048#comment-15001048
 ] 

Paulo Motta commented on CASSANDRA-10646:
-

Closing with merge of PR.

> crash_during_decommission_test dtest fails on windows
> -
>
> Key: CASSANDRA-10646
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10646
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Paulo Motta
> Fix For: 3.1
>
>
> {{topology_test.py:TestTopology.crash_during_decommission_test}} flaps on 
> C* 3.0 on Windows:
> http://cassci.datastax.com/view/win32/job/cassandra-3.0_dtest_win32/100/testReport/topology_test/TestTopology/crash_during_decommission_test/history/
> Since this test raises 2 errors on failure, there are 2 histories on CassCI 
> for it:
> http://cassci.datastax.com/view/win32/job/cassandra-3.0_dtest_win32/100/testReport/topology_test/TestTopology/crash_during_decommission_test_2/history/
> It looks like it fails because of contention over the temporary file where 
> {{cassandra.env}} is stored:
> http://cassci.datastax.com/view/win32/job/cassandra-3.0_dtest_win32/101/testReport/junit/topology_test/TestTopology/crash_during_decommission_test/
> Looks like this happens when {{nodetool status}} is called, since 
> {{nodetool}} sources {{cassandra-env.sh}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10689) java.lang.OutOfMemoryError: Direct buffer memory

2015-11-11 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15001046#comment-15001046
 ] 

Jeff Jirsa commented on CASSANDRA-10689:


The server likely logged an exception, too 

> java.lang.OutOfMemoryError: Direct buffer memory
> 
>
> Key: CASSANDRA-10689
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10689
> Project: Cassandra
>  Issue Type: Bug
>Reporter: mlowicki
>
> {code}
> ERROR [SharedPool-Worker-63] 2015-11-11 17:53:16,161 
> JVMStabilityInspector.java:117 - JVM state determined to be unstable.  
> Exiting forcefully due to:
> java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:658) ~[na:1.7.0_80]
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123) 
> ~[na:1.7.0_80]
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306) 
> ~[na:1.7.0_80]
> at sun.nio.ch.Util.getTemporaryDirectBuffer(Util.java:174) 
> ~[na:1.7.0_80]
> at sun.nio.ch.IOUtil.read(IOUtil.java:195) ~[na:1.7.0_80]
> at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:149) 
> ~[na:1.7.0_80]
> at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.decompressChunk(CompressedRandomAccessReader.java:104)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:81)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]  
> at 
> org.apache.cassandra.io.util.RandomAccessReader.seek(RandomAccessReader.java:310)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.io.util.PoolingSegmentedFile.getSegment(PoolingSegmentedFile.java:64)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.getFileDataInput(SSTableReader.java:1894)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.columniterator.IndexedSliceReader.setToRowStart(IndexedSliceReader.java:107)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.columniterator.IndexedSliceReader.<init>(IndexedSliceReader.java:83)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.columniterator.SSTableSliceIterator.createReader(SSTableSliceIterator.java:65)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.columniterator.SSTableSliceIterator.<init>(SSTableSliceIterator.java:42)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:246)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:270)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:62)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1994)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1837)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:353) 
> ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:85)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:47) 
> ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:64) 
> ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_80]
> at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-2.1.11.jar:2.1.11]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
> {code}
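
Not part of the ticket, but useful context for the trace above: the allocation 
happens in sun.nio.ch.Util.getTemporaryDirectBuffer, which caches temporary 
direct buffers per thread, so many SharedPool workers reading large compressed 
chunks can plausibly exhaust the direct-memory budget. A minimal diagnostic 
sketch (assuming it runs inside the JVM being inspected, or is adapted to poll 
the same MXBeans over JMX) for watching the "direct" pool against 
-XX:MaxDirectMemorySize:

{code}
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.util.List;

// Prints the NIO buffer pools; the "direct" pool is the one that backs
// java.lang.OutOfMemoryError: Direct buffer memory.
public class DirectBufferReport
{
    public static void main(String[] args)
    {
        List<BufferPoolMXBean> pools = ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class);
        for (BufferPoolMXBean pool : pools)
            System.out.printf("%-8s count=%d used=%d bytes capacity=%d bytes%n",
                              pool.getName(), pool.getCount(), pool.getMemoryUsed(), pool.getTotalCapacity());
    }
}
{code}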



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9085) Bind JMX to localhost unless explicitly configured otherwise

2015-11-11 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15001045#comment-15001045
 ] 

T Jake Luciani commented on CASSANDRA-9085:
---

We only add that for the local RMI JMX server.  The reason is that the RMI server 
forces a GC every 30 seconds, and that is the only way to disable it.

If you want to support forced GCs you should just secure remote JMX properly, as 
the docs direct.
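
For reference, a minimal client-side sketch of what securing JMX looks like in 
practice, assuming password authentication has been enabled on the standard JMX 
port 7199; the host, user name and password below are placeholders, not values 
from this ticket:

{code}
import java.util.HashMap;
import java.util.Map;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Connects to a password-protected JMX endpoint instead of an open remote connector.
public class SecureJmxClient
{
    public static void main(String[] args) throws Exception
    {
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
        Map<String, Object> env = new HashMap<>();
        env.put(JMXConnector.CREDENTIALS, new String[] { "jmxUser", "jmxPassword" });

        JMXConnector connector = JMXConnectorFactory.connect(url, env);
        try
        {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            System.out.println("MBeans visible: " + connection.getMBeanCount());
        }
        finally
        {
            connector.close();
        }
    }
}
{code}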

> Bind JMX to localhost unless explicitly configured otherwise
> 
>
> Key: CASSANDRA-9085
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9085
> Project: Cassandra
>  Issue Type: Bug
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
>Priority: Critical
> Fix For: 2.0.14, 2.1.4
>
>
> Cassandra's default JMX config can lead to someone executing arbitrary code:  
> see http://www.mail-archive.com/user@cassandra.apache.org/msg41819.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10582) CorruptSSTableException should print the SS Table Name

2015-11-11 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-10582:

Fix Version/s: (was: 3.1)
   2.2.x
   2.1.x

> CorruptSSTableException should print the SS Table Name
> --
>
> Key: CASSANDRA-10582
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10582
> Project: Cassandra
>  Issue Type: Bug
> Environment: Azure
>Reporter: Anubhav Kale
>Priority: Minor
> Fix For: 2.1.x, 2.2.x
>
> Attachments: 
> 0001-Add-file-path-to-CorruptSSTableException-message.patch
>
>
> We should print the SS Table name that's being reported as corrupt to help 
> with quick recovery.
> INFO  16:32:15  Opening 
> /mnt/cassandra/data/exchangecf/udsuserhourlysnapshot-d1260590711511e587125dc4955cc492/exchangecf-udsuserhourlysnapshot-ka-21214
>  (23832772 bytes)
> INFO  16:32:15  Opening 
> /mnt/cassandra/data/exchangecf/udsuserhourlysnapshot-d1260590711511e587125dc4955cc492/exchangecf-udsuserhourlysnapshot-ka-18398
>  (149675 bytes)
> INFO  16:32:15  Opening 
> /mnt/cassandra/data/exchangecf/udsuserhourlysnapshot-d1260590711511e587125dc4955cc492/exchangecf-udsuserhourlysnapshot-ka-23707
>  (18270 bytes)
> INFO  16:32:15  Opening 
> /mnt/cassandra/data/exchangecf/udsuserhourlysnapshot-d1260590711511e587125dc4955cc492/exchangecf-udsuserhourlysnapshot-ka-13656
>  (814588 bytes)
> ERROR 16:32:15  Exiting forcefully due to file system exception on startup, 
> disk failure policy "stop"
> org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.<init>(CompressionMetadata.java:131)
>  ~[apache-cassandra-2.1.9-SNAPSHOT.jar:2.1.9-SNAPSHOT]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:85)
>  ~[apache-cassandra-2.1.9-SNAPSHOT.jar:2.1.9-SNAPSHOT]
> at 
> org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:79)
>  ~[apache-cassandra-2.1.9-SNAPSHOT.jar:2.1.9-SNAPSHOT]
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:72)
>  ~[apache-cassandra-2.1.9-SNAPSHOT.jar:2.1.9-SNAPSHOT]
> at



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10582) CorruptSSTableException should print the SS Table Name

2015-11-11 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-10582:

Attachment: 0001-Add-file-path-to-CorruptSSTableException-message.patch

> CorruptSSTableException should print the SS Table Name
> --
>
> Key: CASSANDRA-10582
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10582
> Project: Cassandra
>  Issue Type: Bug
> Environment: Azure
>Reporter: Anubhav Kale
>Priority: Minor
> Fix For: 3.1
>
> Attachments: 
> 0001-Add-file-path-to-CorruptSSTableException-message.patch
>
>
> We should print the SS Table name that's being reported as corrupt to help 
> with quick recovery.
> INFO  16:32:15  Opening 
> /mnt/cassandra/data/exchangecf/udsuserhourlysnapshot-d1260590711511e587125dc4955cc492/exchangecf-udsuserhourlysnapshot-ka-21214
>  (23832772 bytes)
> INFO  16:32:15  Opening 
> /mnt/cassandra/data/exchangecf/udsuserhourlysnapshot-d1260590711511e587125dc4955cc492/exchangecf-udsuserhourlysnapshot-ka-18398
>  (149675 bytes)
> INFO  16:32:15  Opening 
> /mnt/cassandra/data/exchangecf/udsuserhourlysnapshot-d1260590711511e587125dc4955cc492/exchangecf-udsuserhourlysnapshot-ka-23707
>  (18270 bytes)
> INFO  16:32:15  Opening 
> /mnt/cassandra/data/exchangecf/udsuserhourlysnapshot-d1260590711511e587125dc4955cc492/exchangecf-udsuserhourlysnapshot-ka-13656
>  (814588 bytes)
> ERROR 16:32:15  Exiting forcefully due to file system exception on startup, 
> disk failure policy "stop"
> org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.<init>(CompressionMetadata.java:131)
>  ~[apache-cassandra-2.1.9-SNAPSHOT.jar:2.1.9-SNAPSHOT]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:85)
>  ~[apache-cassandra-2.1.9-SNAPSHOT.jar:2.1.9-SNAPSHOT]
> at 
> org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:79)
>  ~[apache-cassandra-2.1.9-SNAPSHOT.jar:2.1.9-SNAPSHOT]
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:72)
>  ~[apache-cassandra-2.1.9-SNAPSHOT.jar:2.1.9-SNAPSHOT]
> at



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (CASSANDRA-10582) CorruptSSTableException should print the SS Table Name

2015-11-11 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan reopened CASSANDRA-10582:
-

> CorruptSSTableException should print the SS Table Name
> --
>
> Key: CASSANDRA-10582
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10582
> Project: Cassandra
>  Issue Type: Bug
> Environment: Azure
>Reporter: Anubhav Kale
>Priority: Minor
> Fix For: 3.1
>
>
> We should print the SS Table name that's being reported as corrupt to help 
> with quick recovery.
> INFO  16:32:15  Opening 
> /mnt/cassandra/data/exchangecf/udsuserhourlysnapshot-d1260590711511e587125dc4955cc492/exchangecf-udsuserhourlysnapshot-ka-21214
>  (23832772 bytes)
> INFO  16:32:15  Opening 
> /mnt/cassandra/data/exchangecf/udsuserhourlysnapshot-d1260590711511e587125dc4955cc492/exchangecf-udsuserhourlysnapshot-ka-18398
>  (149675 bytes)
> INFO  16:32:15  Opening 
> /mnt/cassandra/data/exchangecf/udsuserhourlysnapshot-d1260590711511e587125dc4955cc492/exchangecf-udsuserhourlysnapshot-ka-23707
>  (18270 bytes)
> INFO  16:32:15  Opening 
> /mnt/cassandra/data/exchangecf/udsuserhourlysnapshot-d1260590711511e587125dc4955cc492/exchangecf-udsuserhourlysnapshot-ka-13656
>  (814588 bytes)
> ERROR 16:32:15  Exiting forcefully due to file system exception on startup, 
> disk failure policy "stop"
> org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.<init>(CompressionMetadata.java:131)
>  ~[apache-cassandra-2.1.9-SNAPSHOT.jar:2.1.9-SNAPSHOT]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:85)
>  ~[apache-cassandra-2.1.9-SNAPSHOT.jar:2.1.9-SNAPSHOT]
> at 
> org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:79)
>  ~[apache-cassandra-2.1.9-SNAPSHOT.jar:2.1.9-SNAPSHOT]
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:72)
>  ~[apache-cassandra-2.1.9-SNAPSHOT.jar:2.1.9-SNAPSHOT]
> at



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10689) java.lang.OutOfMemoryError: Direct buffer memory

2015-11-11 Thread mlowicki (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15001037#comment-15001037
 ] 

mlowicki commented on CASSANDRA-10689:
--

Running {{scrub}} on nodes with corrupted blocks gives:
{code}
root@db7:~# time nodetool scrub sync entity2



error: null
-- StackTrace --
java.io.EOFException
at java.io.DataInputStream.readByte(DataInputStream.java:267)
at 
sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:214)
at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:161)
at com.sun.jmx.remote.internal.PRef.invoke(Unknown Source)
at javax.management.remote.rmi.RMIConnectionImpl_Stub.invoke(Unknown 
Source)
at 
javax.management.remote.rmi.RMIConnector$RemoteMBeanServerConnection.invoke(RMIConnector.java:1022)
at 
javax.management.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:292)
at com.sun.proxy.$Proxy7.scrub(Unknown Source)
at org.apache.cassandra.tools.NodeProbe.scrub(NodeProbe.java:247)
at org.apache.cassandra.tools.NodeProbe.scrub(NodeProbe.java:266)
at org.apache.cassandra.tools.NodeTool$Scrub.execute(NodeTool.java:1277)
at 
org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:289)
at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:203)


real    11m38.347s
user    0m2.356s
sys     0m0.168s
{code}

> java.lang.OutOfMemoryError: Direct buffer memory
> 
>
> Key: CASSANDRA-10689
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10689
> Project: Cassandra
>  Issue Type: Bug
>Reporter: mlowicki
>
> {code}
> ERROR [SharedPool-Worker-63] 2015-11-11 17:53:16,161 
> JVMStabilityInspector.java:117 - JVM state determined to be unstable.  
> Exiting forcefully due to:
> java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:658) ~[na:1.7.0_80]
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123) 
> ~[na:1.7.0_80]
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306) 
> ~[na:1.7.0_80]
> at sun.nio.ch.Util.getTemporaryDirectBuffer(Util.java:174) 
> ~[na:1.7.0_80]
> at sun.nio.ch.IOUtil.read(IOUtil.java:195) ~[na:1.7.0_80]
> at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:149) 
> ~[na:1.7.0_80]
> at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.decompressChunk(CompressedRandomAccessReader.java:104)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:81)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]  
> at 
> org.apache.cassandra.io.util.RandomAccessReader.seek(RandomAccessReader.java:310)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.io.util.PoolingSegmentedFile.getSegment(PoolingSegmentedFile.java:64)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.getFileDataInput(SSTableReader.java:1894)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.columniterator.IndexedSliceReader.setToRowStart(IndexedSliceReader.java:107)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.columniterator.IndexedSliceReader.<init>(IndexedSliceReader.java:83)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.columniterator.SSTableSliceIterator.createReader(SSTableSliceIterator.java:65)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.columniterator.SSTableSliceIterator.<init>(SSTableSliceIterator.java:42)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:246)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:270)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:62)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1994)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1837)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:353) 
> ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadC

[jira] [Commented] (CASSANDRA-8505) Invalid results are returned while secondary index are being build

2015-11-11 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15001018#comment-15001018
 ] 

Benjamin Lerer commented on CASSANDRA-8505:
---

I think we do not really have a choice. I could easily reproduce the problem 
with a unit test on my machine but CI is usually much slower.
To reproduce it with cqlsh I had to put a break point in the index-building task.

> Invalid results are returned while secondary index are being build
> --
>
> Key: CASSANDRA-8505
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8505
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 2.2.x, 3.0.x
>
>
> If you request an index creation and then execute a query that uses the index, 
> the results returned might be invalid until the index is fully built. This is 
> caused by the fact that the table column will be marked as indexed before the 
> index is ready.
> The following unit tests can be use to reproduce the problem:
> {code}
> @Test
> public void testIndexCreatedAfterInsert() throws Throwable
> {
> createTable("CREATE TABLE %s (a int, b int, c int, primary key((a, 
> b)))");
> execute("INSERT INTO %s (a, b, c) VALUES (0, 0, 0);");
> execute("INSERT INTO %s (a, b, c) VALUES (0, 1, 1);");
> execute("INSERT INTO %s (a, b, c) VALUES (0, 2, 2);");
> execute("INSERT INTO %s (a, b, c) VALUES (1, 0, 3);");
> execute("INSERT INTO %s (a, b, c) VALUES (1, 1, 4);");
> 
> createIndex("CREATE INDEX ON %s(b)");
> 
> assertRows(execute("SELECT * FROM %s WHERE b = ?;", 1),
>row(0, 1, 1),
>row(1, 1, 4));
> }
> 
> @Test
> public void testIndexCreatedBeforeInsert() throws Throwable
> {
> createTable("CREATE TABLE %s (a int, b int, c int, primary key((a, 
> b)))");
> createIndex("CREATE INDEX ON %s(b)");
> 
> execute("INSERT INTO %s (a, b, c) VALUES (0, 0, 0);");
> execute("INSERT INTO %s (a, b, c) VALUES (0, 1, 1);");
> execute("INSERT INTO %s (a, b, c) VALUES (0, 2, 2);");
> execute("INSERT INTO %s (a, b, c) VALUES (1, 0, 3);");
> execute("INSERT INTO %s (a, b, c) VALUES (1, 1, 4);");
> assertRows(execute("SELECT * FROM %s WHERE b = ?;", 1),
>row(0, 1, 1),
>row(1, 1, 4));
> }
> {code}
> The first test will fail while the second will work. 
> In my opinion the first test should reject the request as invalid (as if the 
> index did not exist) until the index is fully built.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7217) Native transport performance (with cassandra-stress) drops precipitously past around 1000 threads

2015-11-11 Thread Joshua McKenzie (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joshua McKenzie updated CASSANDRA-7217:
---
Assignee: Ariel Weisberg  (was: Ryan McGuire)

> Native transport performance (with cassandra-stress) drops precipitously past 
> around 1000 threads
> -
>
> Key: CASSANDRA-7217
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7217
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Benedict
>Assignee: Ariel Weisberg
>  Labels: performance, stress, triaged
> Fix For: 3.1
>
>
> This is obviously bad. Let's figure out why it's happening and put a stop to 
> it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10690) Secondary index does not process deletes unless columns are specified

2015-11-11 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15001001#comment-15001001
 ] 

Tyler Hobbs commented on CASSANDRA-10690:
-

By the way, this doesn't visibly affect the built-in secondary indexes due to 
stale entry handling.  However, it will negatively affect performance for 
indexes on regularly-deleted columns.  Additionally, this may break custom 
secondary indexes entirely.

> Secondary index does not process deletes unless columns are specified
> -
>
> Key: CASSANDRA-10690
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10690
> Project: Cassandra
>  Issue Type: Bug
>  Components: index
>Reporter: Tyler Hobbs
> Fix For: 3.0.1, 3.1
>
>
> The new secondary index API does not notify indexes of single-row or slice 
> deletions unless specific columns are deleted.  I believe the problem is that 
> in {{SecondaryIndexManager.newUpdateTransaction()}}, we skip indexes unless 
> {{index.indexes(update.columns())}}.  When no columns are specified in the 
> deletion, {{update.columns()}} is empty, which causes all indexes to be 
> skipped.
> I think the correct fix is to do something like this in the 
> {{ModificationStatement}} constructor:
> {code}
> if (type == StatementType.DELETE && modifiedColumns.isEmpty())
> modifiedColumns = cfm.partitionColumns();
> {code}
> However, I'm not sure if that may have unintended side-effects.  What do you 
> think, [~slebresne]?
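
To make the skip concrete, here is a hypothetical, self-contained illustration 
(stand-in types, not the real SecondaryIndexManager API) of why an empty column 
set means no index is ever notified:

{code}
import java.util.Collections;
import java.util.Set;

public class IndexSkipIllustration
{
    // Stand-in for the per-index "does this update concern me?" check.
    interface Index
    {
        boolean indexes(Set<String> updatedColumns);
    }

    static boolean anyIndexNotified(Iterable<Index> indexes, Set<String> updatedColumns)
    {
        for (Index index : indexes)
            if (index.indexes(updatedColumns))
                return true;   // at least one index would receive the update
        return false;          // every index is skipped
    }

    public static void main(String[] args)
    {
        Index onB = columns -> columns.contains("b");   // an index on column b

        // A row deletion that names no columns presents an empty column set:
        System.out.println(anyIndexNotified(Collections.singleton(onB), Collections.<String>emptySet())); // false
        System.out.println(anyIndexNotified(Collections.singleton(onB), Collections.singleton("b")));     // true
    }
}
{code}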



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[05/22] cassandra git commit: Fix NPE in Gossip handleStateNormal

2015-11-11 Thread jmckenzie
Fix NPE in Gossip handleStateNormal

Patch by stefania; reviewed by jknighton for CASSANDRA-10089


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6bad57fc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6bad57fc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6bad57fc

Branch: refs/heads/cassandra-3.0
Commit: 6bad57fc3cf967838a220d8402db37ed9a5b3b4e
Parents: 3674ad9
Author: Stefania Alborghetti 
Authored: Wed Nov 11 15:02:26 2015 -0500
Committer: Joshua McKenzie 
Committed: Wed Nov 11 15:02:26 2015 -0500

--
 .../org/apache/cassandra/gms/EndpointState.java |  76 ++---
 .../apache/cassandra/gms/FailureDetector.java   |   7 +-
 src/java/org/apache/cassandra/gms/Gossiper.java |  47 +++---
 .../apache/cassandra/gms/VersionedValue.java|   5 +
 .../cassandra/service/StorageService.java   |  65 
 .../apache/cassandra/gms/EndpointStateTest.java | 159 +++
 .../cassandra/locator/CloudstackSnitchTest.java |   8 +-
 .../apache/cassandra/locator/EC2SnitchTest.java |   4 +-
 .../locator/GoogleCloudSnitchTest.java  |   8 +-
 9 files changed, 283 insertions(+), 96 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6bad57fc/src/java/org/apache/cassandra/gms/EndpointState.java
--
diff --git a/src/java/org/apache/cassandra/gms/EndpointState.java 
b/src/java/org/apache/cassandra/gms/EndpointState.java
index 1029374..3e29295 100644
--- a/src/java/org/apache/cassandra/gms/EndpointState.java
+++ b/src/java/org/apache/cassandra/gms/EndpointState.java
@@ -18,7 +18,11 @@
 package org.apache.cassandra.gms;
 
 import java.io.*;
+import java.util.Collections;
+import java.util.EnumMap;
 import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
 
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -27,8 +31,6 @@ import org.apache.cassandra.db.TypeSizes;
 import org.apache.cassandra.io.IVersionedSerializer;
 import org.apache.cassandra.io.util.DataOutputPlus;
 
-import org.cliffc.high_scale_lib.NonBlockingHashMap;
-
 /**
  * This abstraction represents both the HeartBeatState and the 
ApplicationState in an EndpointState
  * instance. Any state for a given endpoint can be retrieved from this 
instance.
@@ -42,7 +44,7 @@ public class EndpointState
     public final static IVersionedSerializer<EndpointState> serializer = new EndpointStateSerializer();
 
     private volatile HeartBeatState hbState;
-    final Map<ApplicationState, VersionedValue> applicationState = new NonBlockingHashMap<ApplicationState, VersionedValue>();
+    private final AtomicReference<Map<ApplicationState, VersionedValue>> applicationState;
 
     /* fields below do not get serialized */
     private volatile long updateTimestamp;
@@ -50,7 +52,13 @@ public class EndpointState
 
     EndpointState(HeartBeatState initialHbState)
     {
+        this(initialHbState, new EnumMap<ApplicationState, VersionedValue>(ApplicationState.class));
+    }
+
+    EndpointState(HeartBeatState initialHbState, Map<ApplicationState, VersionedValue> states)
+    {
         hbState = initialHbState;
+        applicationState = new AtomicReference<Map<ApplicationState, VersionedValue>>(new EnumMap<>(states));
         updateTimestamp = System.nanoTime();
         isAlive = true;
     }
@@ -68,21 +76,37 @@ public class EndpointState
 
     public VersionedValue getApplicationState(ApplicationState key)
     {
-        return applicationState.get(key);
+        return applicationState.get().get(key);
     }
 
-    /**
-     * TODO replace this with operations that don't expose private state
-     */
-    @Deprecated
-    public Map<ApplicationState, VersionedValue> getApplicationStateMap()
+    public Set<Map.Entry<ApplicationState, VersionedValue>> states()
+    {
+        return applicationState.get().entrySet();
+    }
+
+    public void addApplicationState(ApplicationState key, VersionedValue value)
     {
-        return applicationState;
+        addApplicationStates(Collections.singletonMap(key, value));
    }
 
-    void addApplicationState(ApplicationState key, VersionedValue value)
+    public void addApplicationStates(Map<ApplicationState, VersionedValue> values)
    {
-        applicationState.put(key, value);
+        addApplicationStates(values.entrySet());
+    }
+
+    public void addApplicationStates(Set<Map.Entry<ApplicationState, VersionedValue>> values)
+    {
+        while (true)
+        {
+            Map<ApplicationState, VersionedValue> orig = applicationState.get();
+            Map<ApplicationState, VersionedValue> copy = new EnumMap<>(orig);
+
+            for (Map.Entry<ApplicationState, VersionedValue> value : values)
+                copy.put(value.getKey(), value.getValue());
+
+            if (applicationState.compareAndSet(orig, copy))
+                return;
+        }
    }
 
     /* getters and setters */
@@ -116,7 +140,7 @@ public class EndpointState
 
     public String toString()
     {
-        return "EndpointState: HeartBeatState = " + hbState + ", AppStateMap = " + applicationState;
+        return "EndpointState: HeartBeatState = " + hbState + ", AppStateMap = " + applicationState.get();
 

[12/22] cassandra git commit: 10089 - 2.2 patch

2015-11-11 Thread jmckenzie
10089 - 2.2 patch


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6bb6bb00
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6bb6bb00
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6bb6bb00

Branch: refs/heads/trunk
Commit: 6bb6bb005197c33fa94026d472ff78d4f36613cc
Parents: 87fe1e0
Author: Stefania Alborghetti 
Authored: Wed Nov 11 15:04:25 2015 -0500
Committer: Joshua McKenzie 
Committed: Wed Nov 11 15:04:25 2015 -0500

--
 .../org/apache/cassandra/gms/EndpointState.java |  76 ++---
 .../apache/cassandra/gms/FailureDetector.java   |   7 +-
 src/java/org/apache/cassandra/gms/Gossiper.java |  47 +++---
 .../apache/cassandra/gms/VersionedValue.java|   5 +
 .../cassandra/service/StorageService.java   |  65 
 .../apache/cassandra/gms/EndpointStateTest.java | 159 +++
 .../cassandra/locator/CloudstackSnitchTest.java |   4 +-
 .../apache/cassandra/locator/EC2SnitchTest.java |   4 +-
 .../locator/GoogleCloudSnitchTest.java  |   4 +-
 9 files changed, 283 insertions(+), 88 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6bb6bb00/src/java/org/apache/cassandra/gms/EndpointState.java
--
diff --git a/src/java/org/apache/cassandra/gms/EndpointState.java 
b/src/java/org/apache/cassandra/gms/EndpointState.java
index 0e6985a..931da8d 100644
--- a/src/java/org/apache/cassandra/gms/EndpointState.java
+++ b/src/java/org/apache/cassandra/gms/EndpointState.java
@@ -18,7 +18,11 @@
 package org.apache.cassandra.gms;
 
 import java.io.*;
+import java.util.Collections;
+import java.util.EnumMap;
 import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
 
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -27,8 +31,6 @@ import org.apache.cassandra.db.TypeSizes;
 import org.apache.cassandra.io.IVersionedSerializer;
 import org.apache.cassandra.io.util.DataOutputPlus;
 
-import org.cliffc.high_scale_lib.NonBlockingHashMap;
-
 /**
  * This abstraction represents both the HeartBeatState and the 
ApplicationState in an EndpointState
  * instance. Any state for a given endpoint can be retrieved from this 
instance.
@@ -42,7 +44,7 @@ public class EndpointState
     public final static IVersionedSerializer<EndpointState> serializer = new EndpointStateSerializer();
 
     private volatile HeartBeatState hbState;
-    final Map<ApplicationState, VersionedValue> applicationState = new NonBlockingHashMap<ApplicationState, VersionedValue>();
+    private final AtomicReference<Map<ApplicationState, VersionedValue>> applicationState;
 
     /* fields below do not get serialized */
     private volatile long updateTimestamp;
@@ -50,7 +52,13 @@ public class EndpointState
 
     EndpointState(HeartBeatState initialHbState)
     {
+        this(initialHbState, new EnumMap<ApplicationState, VersionedValue>(ApplicationState.class));
+    }
+
+    EndpointState(HeartBeatState initialHbState, Map<ApplicationState, VersionedValue> states)
+    {
         hbState = initialHbState;
+        applicationState = new AtomicReference<Map<ApplicationState, VersionedValue>>(new EnumMap<>(states));
         updateTimestamp = System.nanoTime();
         isAlive = true;
     }
@@ -68,21 +76,37 @@ public class EndpointState
 
     public VersionedValue getApplicationState(ApplicationState key)
    {
-        return applicationState.get(key);
+        return applicationState.get().get(key);
     }
 
-    /**
-     * TODO replace this with operations that don't expose private state
-     */
-    @Deprecated
-    public Map<ApplicationState, VersionedValue> getApplicationStateMap()
+    public Set<Map.Entry<ApplicationState, VersionedValue>> states()
+    {
+        return applicationState.get().entrySet();
+    }
+
+    public void addApplicationState(ApplicationState key, VersionedValue value)
    {
-        return applicationState;
+        addApplicationStates(Collections.singletonMap(key, value));
    }
 
-    void addApplicationState(ApplicationState key, VersionedValue value)
+    public void addApplicationStates(Map<ApplicationState, VersionedValue> values)
    {
-        applicationState.put(key, value);
+        addApplicationStates(values.entrySet());
+    }
+
+    public void addApplicationStates(Set<Map.Entry<ApplicationState, VersionedValue>> values)
+    {
+        while (true)
+        {
+            Map<ApplicationState, VersionedValue> orig = applicationState.get();
+            Map<ApplicationState, VersionedValue> copy = new EnumMap<>(orig);
+
+            for (Map.Entry<ApplicationState, VersionedValue> value : values)
+                copy.put(value.getKey(), value.getValue());
+
+            if (applicationState.compareAndSet(orig, copy))
+                return;
+        }
    }
 
     /* getters and setters */
@@ -133,7 +157,7 @@ public class EndpointState
 
     public String toString()
     {
-        return "EndpointState: HeartBeatState = " + hbState + ", AppStateMap = " + applicationState;
+        return "EndpointState: HeartBeatState = " + hbState + ", AppStateMap = " + applicationState.get();
     }
 }
 
@@ -146,12 +170,12 @@ class EndpointStateSerializer implements 
IVersionedSeriali

[08/22] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-11-11 Thread jmckenzie
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/87fe1e09
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/87fe1e09
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/87fe1e09

Branch: refs/heads/trunk
Commit: 87fe1e09f15b373fd74473dddee12e289287b7aa
Parents: 9fc957c 6bad57f
Author: Joshua McKenzie 
Authored: Wed Nov 11 15:03:24 2015 -0500
Committer: Joshua McKenzie 
Committed: Wed Nov 11 15:03:24 2015 -0500

--

--




[18/22] cassandra git commit: 10089 - 3.0 patch

2015-11-11 Thread jmckenzie
10089 - 3.0 patch


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9a90e989
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9a90e989
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9a90e989

Branch: refs/heads/cassandra-3.0
Commit: 9a90e9894e9e079058876cf2b16a47d29ba0a32a
Parents: 30eecb2
Author: Stefania Alborghetti 
Authored: Wed Nov 11 15:05:35 2015 -0500
Committer: Joshua McKenzie 
Committed: Wed Nov 11 15:05:35 2015 -0500

--
 .../org/apache/cassandra/gms/EndpointState.java |  76 ++---
 .../apache/cassandra/gms/FailureDetector.java   |   7 +-
 src/java/org/apache/cassandra/gms/Gossiper.java |  47 +++---
 .../apache/cassandra/gms/VersionedValue.java|   5 +
 .../cassandra/service/StorageService.java   |  61 ---
 .../apache/cassandra/gms/EndpointStateTest.java | 159 +++
 .../cassandra/locator/CloudstackSnitchTest.java |   4 +-
 .../apache/cassandra/locator/EC2SnitchTest.java |   4 +-
 .../locator/GoogleCloudSnitchTest.java  |   4 +-
 9 files changed, 282 insertions(+), 85 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9a90e989/src/java/org/apache/cassandra/gms/EndpointState.java
--
diff --git a/src/java/org/apache/cassandra/gms/EndpointState.java 
b/src/java/org/apache/cassandra/gms/EndpointState.java
index d1c023a..70f2a68 100644
--- a/src/java/org/apache/cassandra/gms/EndpointState.java
+++ b/src/java/org/apache/cassandra/gms/EndpointState.java
@@ -18,7 +18,11 @@
 package org.apache.cassandra.gms;
 
 import java.io.*;
+import java.util.Collections;
+import java.util.EnumMap;
 import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
 
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -26,8 +30,6 @@ import org.apache.cassandra.db.TypeSizes;
 import org.apache.cassandra.io.IVersionedSerializer;
 import org.apache.cassandra.io.util.DataInputPlus;
 import org.apache.cassandra.io.util.DataOutputPlus;
-import org.cliffc.high_scale_lib.NonBlockingHashMap;
-
 /**
  * This abstraction represents both the HeartBeatState and the 
ApplicationState in an EndpointState
  * instance. Any state for a given endpoint can be retrieved from this 
instance.
@@ -41,7 +43,7 @@ public class EndpointState
     public final static IVersionedSerializer<EndpointState> serializer = new EndpointStateSerializer();
 
     private volatile HeartBeatState hbState;
-    final Map<ApplicationState, VersionedValue> applicationState = new NonBlockingHashMap<ApplicationState, VersionedValue>();
+    private final AtomicReference<Map<ApplicationState, VersionedValue>> applicationState;
 
     /* fields below do not get serialized */
     private volatile long updateTimestamp;
@@ -49,7 +51,13 @@ public class EndpointState
 
     EndpointState(HeartBeatState initialHbState)
     {
+        this(initialHbState, new EnumMap<ApplicationState, VersionedValue>(ApplicationState.class));
+    }
+
+    EndpointState(HeartBeatState initialHbState, Map<ApplicationState, VersionedValue> states)
+    {
         hbState = initialHbState;
+        applicationState = new AtomicReference<Map<ApplicationState, VersionedValue>>(new EnumMap<>(states));
         updateTimestamp = System.nanoTime();
         isAlive = true;
     }
@@ -67,21 +75,37 @@ public class EndpointState
 
     public VersionedValue getApplicationState(ApplicationState key)
    {
-        return applicationState.get(key);
+        return applicationState.get().get(key);
     }
 
-    /**
-     * TODO replace this with operations that don't expose private state
-     */
-    @Deprecated
-    public Map<ApplicationState, VersionedValue> getApplicationStateMap()
+    public Set<Map.Entry<ApplicationState, VersionedValue>> states()
+    {
+        return applicationState.get().entrySet();
+    }
+
+    public void addApplicationState(ApplicationState key, VersionedValue value)
    {
-        return applicationState;
+        addApplicationStates(Collections.singletonMap(key, value));
    }
 
-    void addApplicationState(ApplicationState key, VersionedValue value)
+    public void addApplicationStates(Map<ApplicationState, VersionedValue> values)
    {
-        applicationState.put(key, value);
+        addApplicationStates(values.entrySet());
+    }
+
+    public void addApplicationStates(Set<Map.Entry<ApplicationState, VersionedValue>> values)
+    {
+        while (true)
+        {
+            Map<ApplicationState, VersionedValue> orig = applicationState.get();
+            Map<ApplicationState, VersionedValue> copy = new EnumMap<>(orig);
+
+            for (Map.Entry<ApplicationState, VersionedValue> value : values)
+                copy.put(value.getKey(), value.getValue());
+
+            if (applicationState.compareAndSet(orig, copy))
+                return;
+        }
    }
 
     /* getters and setters */
@@ -132,7 +156,7 @@ public class EndpointState
 
     public String toString()
     {
-        return "EndpointState: HeartBeatState = " + hbState + ", AppStateMap = " + applicationState;
+        return "EndpointState: HeartBeatState = " + hbState + ", AppStateMap = " + applicationState.get();
     }
 }
 
@@ -145,12 +169,12 @@ cl

[17/22] cassandra git commit: 10089 - 3.0 patch

2015-11-11 Thread jmckenzie
10089 - 3.0 patch


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9a90e989
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9a90e989
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9a90e989

Branch: refs/heads/cassandra-3.1
Commit: 9a90e9894e9e079058876cf2b16a47d29ba0a32a
Parents: 30eecb2
Author: Stefania Alborghetti 
Authored: Wed Nov 11 15:05:35 2015 -0500
Committer: Joshua McKenzie 
Committed: Wed Nov 11 15:05:35 2015 -0500

--
 .../org/apache/cassandra/gms/EndpointState.java |  76 ++---
 .../apache/cassandra/gms/FailureDetector.java   |   7 +-
 src/java/org/apache/cassandra/gms/Gossiper.java |  47 +++---
 .../apache/cassandra/gms/VersionedValue.java|   5 +
 .../cassandra/service/StorageService.java   |  61 ---
 .../apache/cassandra/gms/EndpointStateTest.java | 159 +++
 .../cassandra/locator/CloudstackSnitchTest.java |   4 +-
 .../apache/cassandra/locator/EC2SnitchTest.java |   4 +-
 .../locator/GoogleCloudSnitchTest.java  |   4 +-
 9 files changed, 282 insertions(+), 85 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9a90e989/src/java/org/apache/cassandra/gms/EndpointState.java
--
diff --git a/src/java/org/apache/cassandra/gms/EndpointState.java 
b/src/java/org/apache/cassandra/gms/EndpointState.java
index d1c023a..70f2a68 100644
--- a/src/java/org/apache/cassandra/gms/EndpointState.java
+++ b/src/java/org/apache/cassandra/gms/EndpointState.java
@@ -18,7 +18,11 @@
 package org.apache.cassandra.gms;
 
 import java.io.*;
+import java.util.Collections;
+import java.util.EnumMap;
 import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
 
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -26,8 +30,6 @@ import org.apache.cassandra.db.TypeSizes;
 import org.apache.cassandra.io.IVersionedSerializer;
 import org.apache.cassandra.io.util.DataInputPlus;
 import org.apache.cassandra.io.util.DataOutputPlus;
-import org.cliffc.high_scale_lib.NonBlockingHashMap;
-
 /**
  * This abstraction represents both the HeartBeatState and the 
ApplicationState in an EndpointState
  * instance. Any state for a given endpoint can be retrieved from this 
instance.
@@ -41,7 +43,7 @@ public class EndpointState
     public final static IVersionedSerializer<EndpointState> serializer = new EndpointStateSerializer();
 
     private volatile HeartBeatState hbState;
-    final Map<ApplicationState, VersionedValue> applicationState = new NonBlockingHashMap<ApplicationState, VersionedValue>();
+    private final AtomicReference<Map<ApplicationState, VersionedValue>> applicationState;
 
     /* fields below do not get serialized */
     private volatile long updateTimestamp;
@@ -49,7 +51,13 @@ public class EndpointState
 
     EndpointState(HeartBeatState initialHbState)
     {
+        this(initialHbState, new EnumMap<ApplicationState, VersionedValue>(ApplicationState.class));
+    }
+
+    EndpointState(HeartBeatState initialHbState, Map<ApplicationState, VersionedValue> states)
+    {
         hbState = initialHbState;
+        applicationState = new AtomicReference<Map<ApplicationState, VersionedValue>>(new EnumMap<>(states));
         updateTimestamp = System.nanoTime();
         isAlive = true;
     }
@@ -67,21 +75,37 @@ public class EndpointState
 
     public VersionedValue getApplicationState(ApplicationState key)
    {
-        return applicationState.get(key);
+        return applicationState.get().get(key);
     }
 
-    /**
-     * TODO replace this with operations that don't expose private state
-     */
-    @Deprecated
-    public Map<ApplicationState, VersionedValue> getApplicationStateMap()
+    public Set<Map.Entry<ApplicationState, VersionedValue>> states()
+    {
+        return applicationState.get().entrySet();
+    }
+
+    public void addApplicationState(ApplicationState key, VersionedValue value)
    {
-        return applicationState;
+        addApplicationStates(Collections.singletonMap(key, value));
    }
 
-    void addApplicationState(ApplicationState key, VersionedValue value)
+    public void addApplicationStates(Map<ApplicationState, VersionedValue> values)
    {
-        applicationState.put(key, value);
+        addApplicationStates(values.entrySet());
+    }
+
+    public void addApplicationStates(Set<Map.Entry<ApplicationState, VersionedValue>> values)
+    {
+        while (true)
+        {
+            Map<ApplicationState, VersionedValue> orig = applicationState.get();
+            Map<ApplicationState, VersionedValue> copy = new EnumMap<>(orig);
+
+            for (Map.Entry<ApplicationState, VersionedValue> value : values)
+                copy.put(value.getKey(), value.getValue());
+
+            if (applicationState.compareAndSet(orig, copy))
+                return;
+        }
    }
 
     /* getters and setters */
@@ -132,7 +156,7 @@ public class EndpointState
 
     public String toString()
     {
-        return "EndpointState: HeartBeatState = " + hbState + ", AppStateMap = " + applicationState;
+        return "EndpointState: HeartBeatState = " + hbState + ", AppStateMap = " + applicationState.get();
     }
 }
 
@@ -145,12 +169,12 @@ cl

[03/22] cassandra git commit: Fix NPE in Gossip handleStateNormal

2015-11-11 Thread jmckenzie
Fix NPE in Gossip handleStateNormal

Patch by stefania; reviewed by jknighton for CASSANDRA-10089


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6bad57fc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6bad57fc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6bad57fc

Branch: refs/heads/cassandra-2.2
Commit: 6bad57fc3cf967838a220d8402db37ed9a5b3b4e
Parents: 3674ad9
Author: Stefania Alborghetti 
Authored: Wed Nov 11 15:02:26 2015 -0500
Committer: Joshua McKenzie 
Committed: Wed Nov 11 15:02:26 2015 -0500

--
 .../org/apache/cassandra/gms/EndpointState.java |  76 ++---
 .../apache/cassandra/gms/FailureDetector.java   |   7 +-
 src/java/org/apache/cassandra/gms/Gossiper.java |  47 +++---
 .../apache/cassandra/gms/VersionedValue.java|   5 +
 .../cassandra/service/StorageService.java   |  65 
 .../apache/cassandra/gms/EndpointStateTest.java | 159 +++
 .../cassandra/locator/CloudstackSnitchTest.java |   8 +-
 .../apache/cassandra/locator/EC2SnitchTest.java |   4 +-
 .../locator/GoogleCloudSnitchTest.java  |   8 +-
 9 files changed, 283 insertions(+), 96 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6bad57fc/src/java/org/apache/cassandra/gms/EndpointState.java
--
diff --git a/src/java/org/apache/cassandra/gms/EndpointState.java 
b/src/java/org/apache/cassandra/gms/EndpointState.java
index 1029374..3e29295 100644
--- a/src/java/org/apache/cassandra/gms/EndpointState.java
+++ b/src/java/org/apache/cassandra/gms/EndpointState.java
@@ -18,7 +18,11 @@
 package org.apache.cassandra.gms;
 
 import java.io.*;
+import java.util.Collections;
+import java.util.EnumMap;
 import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
 
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -27,8 +31,6 @@ import org.apache.cassandra.db.TypeSizes;
 import org.apache.cassandra.io.IVersionedSerializer;
 import org.apache.cassandra.io.util.DataOutputPlus;
 
-import org.cliffc.high_scale_lib.NonBlockingHashMap;
-
 /**
  * This abstraction represents both the HeartBeatState and the 
ApplicationState in an EndpointState
  * instance. Any state for a given endpoint can be retrieved from this 
instance.
@@ -42,7 +44,7 @@ public class EndpointState
     public final static IVersionedSerializer<EndpointState> serializer = new EndpointStateSerializer();
 
     private volatile HeartBeatState hbState;
-    final Map<ApplicationState, VersionedValue> applicationState = new NonBlockingHashMap<ApplicationState, VersionedValue>();
+    private final AtomicReference<Map<ApplicationState, VersionedValue>> applicationState;
 
     /* fields below do not get serialized */
     private volatile long updateTimestamp;
@@ -50,7 +52,13 @@ public class EndpointState
 
     EndpointState(HeartBeatState initialHbState)
     {
+        this(initialHbState, new EnumMap<ApplicationState, VersionedValue>(ApplicationState.class));
+    }
+
+    EndpointState(HeartBeatState initialHbState, Map<ApplicationState, VersionedValue> states)
+    {
         hbState = initialHbState;
+        applicationState = new AtomicReference<Map<ApplicationState, VersionedValue>>(new EnumMap<>(states));
         updateTimestamp = System.nanoTime();
         isAlive = true;
     }
@@ -68,21 +76,37 @@ public class EndpointState
 
     public VersionedValue getApplicationState(ApplicationState key)
    {
-        return applicationState.get(key);
+        return applicationState.get().get(key);
     }
 
-    /**
-     * TODO replace this with operations that don't expose private state
-     */
-    @Deprecated
-    public Map<ApplicationState, VersionedValue> getApplicationStateMap()
+    public Set<Map.Entry<ApplicationState, VersionedValue>> states()
+    {
+        return applicationState.get().entrySet();
+    }
+
+    public void addApplicationState(ApplicationState key, VersionedValue value)
    {
-        return applicationState;
+        addApplicationStates(Collections.singletonMap(key, value));
    }
 
-    void addApplicationState(ApplicationState key, VersionedValue value)
+    public void addApplicationStates(Map<ApplicationState, VersionedValue> values)
    {
-        applicationState.put(key, value);
+        addApplicationStates(values.entrySet());
+    }
+
+    public void addApplicationStates(Set<Map.Entry<ApplicationState, VersionedValue>> values)
+    {
+        while (true)
+        {
+            Map<ApplicationState, VersionedValue> orig = applicationState.get();
+            Map<ApplicationState, VersionedValue> copy = new EnumMap<>(orig);
+
+            for (Map.Entry<ApplicationState, VersionedValue> value : values)
+                copy.put(value.getKey(), value.getValue());
+
+            if (applicationState.compareAndSet(orig, copy))
+                return;
+        }
    }
 
     /* getters and setters */
@@ -116,7 +140,7 @@ public class EndpointState
 
     public String toString()
     {
-        return "EndpointState: HeartBeatState = " + hbState + ", AppStateMap = " + applicationState;
+        return "EndpointState: HeartBeatState = " + hbState + ", AppStateMap = " + applicationState.get();
 

[10/22] cassandra git commit: 10089 - 2.2 patch

2015-11-11 Thread jmckenzie
10089 - 2.2 patch


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6bb6bb00
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6bb6bb00
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6bb6bb00

Branch: refs/heads/cassandra-3.0
Commit: 6bb6bb005197c33fa94026d472ff78d4f36613cc
Parents: 87fe1e0
Author: Stefania Alborghetti 
Authored: Wed Nov 11 15:04:25 2015 -0500
Committer: Joshua McKenzie 
Committed: Wed Nov 11 15:04:25 2015 -0500

--
 .../org/apache/cassandra/gms/EndpointState.java |  76 ++---
 .../apache/cassandra/gms/FailureDetector.java   |   7 +-
 src/java/org/apache/cassandra/gms/Gossiper.java |  47 +++---
 .../apache/cassandra/gms/VersionedValue.java|   5 +
 .../cassandra/service/StorageService.java   |  65 
 .../apache/cassandra/gms/EndpointStateTest.java | 159 +++
 .../cassandra/locator/CloudstackSnitchTest.java |   4 +-
 .../apache/cassandra/locator/EC2SnitchTest.java |   4 +-
 .../locator/GoogleCloudSnitchTest.java  |   4 +-
 9 files changed, 283 insertions(+), 88 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6bb6bb00/src/java/org/apache/cassandra/gms/EndpointState.java
--
diff --git a/src/java/org/apache/cassandra/gms/EndpointState.java 
b/src/java/org/apache/cassandra/gms/EndpointState.java
index 0e6985a..931da8d 100644
--- a/src/java/org/apache/cassandra/gms/EndpointState.java
+++ b/src/java/org/apache/cassandra/gms/EndpointState.java
@@ -18,7 +18,11 @@
 package org.apache.cassandra.gms;
 
 import java.io.*;
+import java.util.Collections;
+import java.util.EnumMap;
 import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
 
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -27,8 +31,6 @@ import org.apache.cassandra.db.TypeSizes;
 import org.apache.cassandra.io.IVersionedSerializer;
 import org.apache.cassandra.io.util.DataOutputPlus;
 
-import org.cliffc.high_scale_lib.NonBlockingHashMap;
-
 /**
  * This abstraction represents both the HeartBeatState and the 
ApplicationState in an EndpointState
  * instance. Any state for a given endpoint can be retrieved from this 
instance.
@@ -42,7 +44,7 @@ public class EndpointState
     public final static IVersionedSerializer<EndpointState> serializer = new EndpointStateSerializer();
 
     private volatile HeartBeatState hbState;
-    final Map<ApplicationState, VersionedValue> applicationState = new NonBlockingHashMap<ApplicationState, VersionedValue>();
+    private final AtomicReference<Map<ApplicationState, VersionedValue>> applicationState;
 
     /* fields below do not get serialized */
     private volatile long updateTimestamp;
@@ -50,7 +52,13 @@ public class EndpointState
 
     EndpointState(HeartBeatState initialHbState)
     {
+        this(initialHbState, new EnumMap<ApplicationState, VersionedValue>(ApplicationState.class));
+    }
+
+    EndpointState(HeartBeatState initialHbState, Map<ApplicationState, VersionedValue> states)
+    {
         hbState = initialHbState;
+        applicationState = new AtomicReference<Map<ApplicationState, VersionedValue>>(new EnumMap<>(states));
         updateTimestamp = System.nanoTime();
         isAlive = true;
     }
@@ -68,21 +76,37 @@ public class EndpointState
 
     public VersionedValue getApplicationState(ApplicationState key)
    {
-        return applicationState.get(key);
+        return applicationState.get().get(key);
     }
 
-    /**
-     * TODO replace this with operations that don't expose private state
-     */
-    @Deprecated
-    public Map<ApplicationState, VersionedValue> getApplicationStateMap()
+    public Set<Map.Entry<ApplicationState, VersionedValue>> states()
+    {
+        return applicationState.get().entrySet();
+    }
+
+    public void addApplicationState(ApplicationState key, VersionedValue value)
    {
-        return applicationState;
+        addApplicationStates(Collections.singletonMap(key, value));
    }
 
-    void addApplicationState(ApplicationState key, VersionedValue value)
+    public void addApplicationStates(Map<ApplicationState, VersionedValue> values)
    {
-        applicationState.put(key, value);
+        addApplicationStates(values.entrySet());
+    }
+
+    public void addApplicationStates(Set<Map.Entry<ApplicationState, VersionedValue>> values)
+    {
+        while (true)
+        {
+            Map<ApplicationState, VersionedValue> orig = applicationState.get();
+            Map<ApplicationState, VersionedValue> copy = new EnumMap<>(orig);
+
+            for (Map.Entry<ApplicationState, VersionedValue> value : values)
+                copy.put(value.getKey(), value.getValue());
+
+            if (applicationState.compareAndSet(orig, copy))
+                return;
+        }
    }
 
     /* getters and setters */
@@ -133,7 +157,7 @@ public class EndpointState
 
     public String toString()
     {
-        return "EndpointState: HeartBeatState = " + hbState + ", AppStateMap = " + applicationState;
+        return "EndpointState: HeartBeatState = " + hbState + ", AppStateMap = " + applicationState.get();
     }
 }
 
@@ -146,12 +170,12 @@ class EndpointStateSerializer implements 
IVersione

[22/22] cassandra git commit: Merge branch 'cassandra-3.1' into trunk

2015-11-11 Thread jmckenzie
Merge branch 'cassandra-3.1' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7d6dbf89
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7d6dbf89
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7d6dbf89

Branch: refs/heads/trunk
Commit: 7d6dbf897cd14e6c5811a0588f25e5c11385a9fd
Parents: 55811e5 1fe90d3
Author: Joshua McKenzie 
Authored: Wed Nov 11 15:06:23 2015 -0500
Committer: Joshua McKenzie 
Committed: Wed Nov 11 15:06:23 2015 -0500

--
 .../org/apache/cassandra/gms/EndpointState.java |  76 ++---
 .../apache/cassandra/gms/FailureDetector.java   |   7 +-
 src/java/org/apache/cassandra/gms/Gossiper.java |  47 +++---
 .../apache/cassandra/gms/VersionedValue.java|   5 +
 .../cassandra/service/StorageService.java   |  61 ---
 .../apache/cassandra/gms/EndpointStateTest.java | 159 +++
 .../cassandra/locator/CloudstackSnitchTest.java |   4 +-
 .../apache/cassandra/locator/EC2SnitchTest.java |   4 +-
 .../locator/GoogleCloudSnitchTest.java  |   4 +-
 9 files changed, 282 insertions(+), 85 deletions(-)
--




[13/22] cassandra git commit: 10089 - 2.2 patch

2015-11-11 Thread jmckenzie
10089 - 2.2 patch


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6bb6bb00
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6bb6bb00
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6bb6bb00

Branch: refs/heads/cassandra-2.2
Commit: 6bb6bb005197c33fa94026d472ff78d4f36613cc
Parents: 87fe1e0
Author: Stefania Alborghetti 
Authored: Wed Nov 11 15:04:25 2015 -0500
Committer: Joshua McKenzie 
Committed: Wed Nov 11 15:04:25 2015 -0500

--
 .../org/apache/cassandra/gms/EndpointState.java |  76 ++---
 .../apache/cassandra/gms/FailureDetector.java   |   7 +-
 src/java/org/apache/cassandra/gms/Gossiper.java |  47 +++---
 .../apache/cassandra/gms/VersionedValue.java|   5 +
 .../cassandra/service/StorageService.java   |  65 
 .../apache/cassandra/gms/EndpointStateTest.java | 159 +++
 .../cassandra/locator/CloudstackSnitchTest.java |   4 +-
 .../apache/cassandra/locator/EC2SnitchTest.java |   4 +-
 .../locator/GoogleCloudSnitchTest.java  |   4 +-
 9 files changed, 283 insertions(+), 88 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6bb6bb00/src/java/org/apache/cassandra/gms/EndpointState.java
--
diff --git a/src/java/org/apache/cassandra/gms/EndpointState.java 
b/src/java/org/apache/cassandra/gms/EndpointState.java
index 0e6985a..931da8d 100644
--- a/src/java/org/apache/cassandra/gms/EndpointState.java
+++ b/src/java/org/apache/cassandra/gms/EndpointState.java
@@ -18,7 +18,11 @@
 package org.apache.cassandra.gms;
 
 import java.io.*;
+import java.util.Collections;
+import java.util.EnumMap;
 import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
 
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -27,8 +31,6 @@ import org.apache.cassandra.db.TypeSizes;
 import org.apache.cassandra.io.IVersionedSerializer;
 import org.apache.cassandra.io.util.DataOutputPlus;
 
-import org.cliffc.high_scale_lib.NonBlockingHashMap;
-
 /**
  * This abstraction represents both the HeartBeatState and the 
ApplicationState in an EndpointState
  * instance. Any state for a given endpoint can be retrieved from this 
instance.
@@ -42,7 +44,7 @@ public class EndpointState
     public final static IVersionedSerializer<EndpointState> serializer = new EndpointStateSerializer();
 
     private volatile HeartBeatState hbState;
-    final Map<ApplicationState, VersionedValue> applicationState = new NonBlockingHashMap<ApplicationState, VersionedValue>();
+    private final AtomicReference<Map<ApplicationState, VersionedValue>> applicationState;
 
     /* fields below do not get serialized */
     private volatile long updateTimestamp;
@@ -50,7 +52,13 @@ public class EndpointState
 
     EndpointState(HeartBeatState initialHbState)
     {
+        this(initialHbState, new EnumMap<ApplicationState, VersionedValue>(ApplicationState.class));
+    }
+
+    EndpointState(HeartBeatState initialHbState, Map<ApplicationState, VersionedValue> states)
+    {
         hbState = initialHbState;
+        applicationState = new AtomicReference<Map<ApplicationState, VersionedValue>>(new EnumMap<>(states));
         updateTimestamp = System.nanoTime();
         isAlive = true;
     }
@@ -68,21 +76,37 @@ public class EndpointState
 
     public VersionedValue getApplicationState(ApplicationState key)
    {
-        return applicationState.get(key);
+        return applicationState.get().get(key);
     }
 
-    /**
-     * TODO replace this with operations that don't expose private state
-     */
-    @Deprecated
-    public Map<ApplicationState, VersionedValue> getApplicationStateMap()
+    public Set<Map.Entry<ApplicationState, VersionedValue>> states()
+    {
+        return applicationState.get().entrySet();
+    }
+
+    public void addApplicationState(ApplicationState key, VersionedValue value)
    {
-        return applicationState;
+        addApplicationStates(Collections.singletonMap(key, value));
    }
 
-    void addApplicationState(ApplicationState key, VersionedValue value)
+    public void addApplicationStates(Map<ApplicationState, VersionedValue> values)
    {
-        applicationState.put(key, value);
+        addApplicationStates(values.entrySet());
+    }
+
+    public void addApplicationStates(Set<Map.Entry<ApplicationState, VersionedValue>> values)
+    {
+        while (true)
+        {
+            Map<ApplicationState, VersionedValue> orig = applicationState.get();
+            Map<ApplicationState, VersionedValue> copy = new EnumMap<>(orig);
+
+            for (Map.Entry<ApplicationState, VersionedValue> value : values)
+                copy.put(value.getKey(), value.getValue());
+
+            if (applicationState.compareAndSet(orig, copy))
+                return;
+        }
    }
 
     /* getters and setters */
@@ -133,7 +157,7 @@ public class EndpointState
 
     public String toString()
     {
-        return "EndpointState: HeartBeatState = " + hbState + ", AppStateMap = " + applicationState;
+        return "EndpointState: HeartBeatState = " + hbState + ", AppStateMap = " + applicationState.get();
     }
 }
 
@@ -146,12 +170,12 @@ class EndpointStateSerializer implements 
IVersione

[16/22] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2015-11-11 Thread jmckenzie
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/30eecb23
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/30eecb23
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/30eecb23

Branch: refs/heads/trunk
Commit: 30eecb23dcd3a785d40b924ebe923831f5276795
Parents: 49c9c01 6bb6bb0
Author: Joshua McKenzie 
Authored: Wed Nov 11 15:05:11 2015 -0500
Committer: Joshua McKenzie 
Committed: Wed Nov 11 15:05:11 2015 -0500

--

--




[19/22] cassandra git commit: 10089 - 3.0 patch

2015-11-11 Thread jmckenzie
10089 - 3.0 patch


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9a90e989
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9a90e989
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9a90e989

Branch: refs/heads/trunk
Commit: 9a90e9894e9e079058876cf2b16a47d29ba0a32a
Parents: 30eecb2
Author: Stefania Alborghetti 
Authored: Wed Nov 11 15:05:35 2015 -0500
Committer: Joshua McKenzie 
Committed: Wed Nov 11 15:05:35 2015 -0500

--
 .../org/apache/cassandra/gms/EndpointState.java |  76 ++---
 .../apache/cassandra/gms/FailureDetector.java   |   7 +-
 src/java/org/apache/cassandra/gms/Gossiper.java |  47 +++---
 .../apache/cassandra/gms/VersionedValue.java|   5 +
 .../cassandra/service/StorageService.java   |  61 ---
 .../apache/cassandra/gms/EndpointStateTest.java | 159 +++
 .../cassandra/locator/CloudstackSnitchTest.java |   4 +-
 .../apache/cassandra/locator/EC2SnitchTest.java |   4 +-
 .../locator/GoogleCloudSnitchTest.java  |   4 +-
 9 files changed, 282 insertions(+), 85 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/9a90e989/src/java/org/apache/cassandra/gms/EndpointState.java
--
diff --git a/src/java/org/apache/cassandra/gms/EndpointState.java b/src/java/org/apache/cassandra/gms/EndpointState.java
index d1c023a..70f2a68 100644
--- a/src/java/org/apache/cassandra/gms/EndpointState.java
+++ b/src/java/org/apache/cassandra/gms/EndpointState.java
@@ -18,7 +18,11 @@
 package org.apache.cassandra.gms;
 
 import java.io.*;
+import java.util.Collections;
+import java.util.EnumMap;
 import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
 
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -26,8 +30,6 @@ import org.apache.cassandra.db.TypeSizes;
 import org.apache.cassandra.io.IVersionedSerializer;
 import org.apache.cassandra.io.util.DataInputPlus;
 import org.apache.cassandra.io.util.DataOutputPlus;
-import org.cliffc.high_scale_lib.NonBlockingHashMap;
-
 /**
  * This abstraction represents both the HeartBeatState and the ApplicationState in an EndpointState
  * instance. Any state for a given endpoint can be retrieved from this instance.
@@ -41,7 +43,7 @@ public class EndpointState
     public final static IVersionedSerializer<EndpointState> serializer = new EndpointStateSerializer();
 
     private volatile HeartBeatState hbState;
-    final Map<ApplicationState, VersionedValue> applicationState = new NonBlockingHashMap<ApplicationState, VersionedValue>();
+    private final AtomicReference<Map<ApplicationState, VersionedValue>> applicationState;
 
     /* fields below do not get serialized */
     private volatile long updateTimestamp;
@@ -49,7 +51,13 @@ public class EndpointState
 
     EndpointState(HeartBeatState initialHbState)
     {
+        this(initialHbState, new EnumMap<ApplicationState, VersionedValue>(ApplicationState.class));
+    }
+
+    EndpointState(HeartBeatState initialHbState, Map<ApplicationState, VersionedValue> states)
+    {
         hbState = initialHbState;
+        applicationState = new AtomicReference<Map<ApplicationState, VersionedValue>>(new EnumMap<>(states));
         updateTimestamp = System.nanoTime();
         isAlive = true;
     }
@@ -67,21 +75,37 @@ public class EndpointState
 
     public VersionedValue getApplicationState(ApplicationState key)
     {
-        return applicationState.get(key);
+        return applicationState.get().get(key);
     }
 
-    /**
-     * TODO replace this with operations that don't expose private state
-     */
-    @Deprecated
-    public Map<ApplicationState, VersionedValue> getApplicationStateMap()
+    public Set<Map.Entry<ApplicationState, VersionedValue>> states()
+    {
+        return applicationState.get().entrySet();
+    }
+
+    public void addApplicationState(ApplicationState key, VersionedValue value)
     {
-        return applicationState;
+        addApplicationStates(Collections.singletonMap(key, value));
     }
 
-    void addApplicationState(ApplicationState key, VersionedValue value)
+    public void addApplicationStates(Map<ApplicationState, VersionedValue> values)
     {
-        applicationState.put(key, value);
+        addApplicationStates(values.entrySet());
+    }
+
+    public void addApplicationStates(Set<Map.Entry<ApplicationState, VersionedValue>> values)
+    {
+        while (true)
+        {
+            Map<ApplicationState, VersionedValue> orig = applicationState.get();
+            Map<ApplicationState, VersionedValue> copy = new EnumMap<>(orig);
+
+            for (Map.Entry<ApplicationState, VersionedValue> value : values)
+                copy.put(value.getKey(), value.getValue());
+
+            if (applicationState.compareAndSet(orig, copy))
+                return;
+        }
     }
 
     /* getters and setters */
@@ -132,7 +156,7 @@ public class EndpointState
 
     public String toString()
     {
-        return "EndpointState: HeartBeatState = " + hbState + ", AppStateMap = " + applicationState;
+        return "EndpointState: HeartBeatState = " + hbState + ", AppStateMap = " + applicationState.get();
     }
 }
 
@@ -145,12 +169,12 @@ class Endp
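The core of the patch above is a copy-on-write map guarded by an AtomicReference: readers always see an immutable snapshot via applicationState.get(), while writers copy the snapshot into a fresh EnumMap, apply their changes, and publish it with compareAndSet, retrying if another writer won the race. A minimal, self-contained sketch of that pattern (illustrative only; stand-in enum and value types, not Cassandra's classes):

{code}
import java.util.Collections;
import java.util.EnumMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

public class CopyOnWriteStateDemo
{
    enum Key { STATUS, LOAD }

    // Readers see an immutable snapshot; writers never mutate a published map.
    private final AtomicReference<Map<Key, String>> state =
            new AtomicReference<Map<Key, String>>(new EnumMap<Key, String>(Key.class));

    public String get(Key key)
    {
        return state.get().get(key);
    }

    public void putAll(Map<Key, String> updates)
    {
        while (true)
        {
            Map<Key, String> orig = state.get();
            Map<Key, String> copy = new EnumMap<>(orig);
            copy.putAll(updates);

            // Publish atomically; if another thread raced us, retry on its newer snapshot.
            if (state.compareAndSet(orig, copy))
                return;
        }
    }

    public static void main(String[] args)
    {
        CopyOnWriteStateDemo demo = new CopyOnWriteStateDemo();
        demo.putAll(Collections.singletonMap(Key.STATUS, "NORMAL"));
        System.out.println(demo.get(Key.STATUS)); // NORMAL
    }
}
{code}

The EnumMap copy is cheap for a small enum key space, and because a snapshot is either fully published or not published at all, concurrent readers can never observe a partially applied update.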

[02/22] cassandra git commit: Fix NPE in Gossip handleStateNormal

2015-11-11 Thread jmckenzie
Fix NPE in Gossip handleStateNormal

Patch by stefania; reviewed by jknighton for CASSANDRA-10089


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6bad57fc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6bad57fc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6bad57fc

Branch: refs/heads/cassandra-3.1
Commit: 6bad57fc3cf967838a220d8402db37ed9a5b3b4e
Parents: 3674ad9
Author: Stefania Alborghetti 
Authored: Wed Nov 11 15:02:26 2015 -0500
Committer: Joshua McKenzie 
Committed: Wed Nov 11 15:02:26 2015 -0500

--
 .../org/apache/cassandra/gms/EndpointState.java |  76 ++---
 .../apache/cassandra/gms/FailureDetector.java   |   7 +-
 src/java/org/apache/cassandra/gms/Gossiper.java |  47 +++---
 .../apache/cassandra/gms/VersionedValue.java|   5 +
 .../cassandra/service/StorageService.java   |  65 
 .../apache/cassandra/gms/EndpointStateTest.java | 159 +++
 .../cassandra/locator/CloudstackSnitchTest.java |   8 +-
 .../apache/cassandra/locator/EC2SnitchTest.java |   4 +-
 .../locator/GoogleCloudSnitchTest.java  |   8 +-
 9 files changed, 283 insertions(+), 96 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6bad57fc/src/java/org/apache/cassandra/gms/EndpointState.java
--
diff --git a/src/java/org/apache/cassandra/gms/EndpointState.java b/src/java/org/apache/cassandra/gms/EndpointState.java
index 1029374..3e29295 100644
--- a/src/java/org/apache/cassandra/gms/EndpointState.java
+++ b/src/java/org/apache/cassandra/gms/EndpointState.java
@@ -18,7 +18,11 @@
 package org.apache.cassandra.gms;
 
 import java.io.*;
+import java.util.Collections;
+import java.util.EnumMap;
 import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
 
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -27,8 +31,6 @@ import org.apache.cassandra.db.TypeSizes;
 import org.apache.cassandra.io.IVersionedSerializer;
 import org.apache.cassandra.io.util.DataOutputPlus;
 
-import org.cliffc.high_scale_lib.NonBlockingHashMap;
-
 /**
  * This abstraction represents both the HeartBeatState and the ApplicationState in an EndpointState
  * instance. Any state for a given endpoint can be retrieved from this instance.
@@ -42,7 +44,7 @@ public class EndpointState
     public final static IVersionedSerializer<EndpointState> serializer = new EndpointStateSerializer();
 
     private volatile HeartBeatState hbState;
-    final Map<ApplicationState, VersionedValue> applicationState = new NonBlockingHashMap<ApplicationState, VersionedValue>();
+    private final AtomicReference<Map<ApplicationState, VersionedValue>> applicationState;
 
     /* fields below do not get serialized */
     private volatile long updateTimestamp;
@@ -50,7 +52,13 @@ public class EndpointState
 
     EndpointState(HeartBeatState initialHbState)
     {
+        this(initialHbState, new EnumMap<ApplicationState, VersionedValue>(ApplicationState.class));
+    }
+
+    EndpointState(HeartBeatState initialHbState, Map<ApplicationState, VersionedValue> states)
+    {
         hbState = initialHbState;
+        applicationState = new AtomicReference<Map<ApplicationState, VersionedValue>>(new EnumMap<>(states));
         updateTimestamp = System.nanoTime();
         isAlive = true;
     }
@@ -68,21 +76,37 @@ public class EndpointState
 
     public VersionedValue getApplicationState(ApplicationState key)
     {
-        return applicationState.get(key);
+        return applicationState.get().get(key);
     }
 
-    /**
-     * TODO replace this with operations that don't expose private state
-     */
-    @Deprecated
-    public Map<ApplicationState, VersionedValue> getApplicationStateMap()
+    public Set<Map.Entry<ApplicationState, VersionedValue>> states()
+    {
+        return applicationState.get().entrySet();
+    }
+
+    public void addApplicationState(ApplicationState key, VersionedValue value)
     {
-        return applicationState;
+        addApplicationStates(Collections.singletonMap(key, value));
     }
 
-    void addApplicationState(ApplicationState key, VersionedValue value)
+    public void addApplicationStates(Map<ApplicationState, VersionedValue> values)
     {
-        applicationState.put(key, value);
+        addApplicationStates(values.entrySet());
+    }
+
+    public void addApplicationStates(Set<Map.Entry<ApplicationState, VersionedValue>> values)
+    {
+        while (true)
+        {
+            Map<ApplicationState, VersionedValue> orig = applicationState.get();
+            Map<ApplicationState, VersionedValue> copy = new EnumMap<>(orig);
+
+            for (Map.Entry<ApplicationState, VersionedValue> value : values)
+                copy.put(value.getKey(), value.getValue());
+
+            if (applicationState.compareAndSet(orig, copy))
+                return;
+        }
     }
 
     /* getters and setters */
@@ -116,7 +140,7 @@ public class EndpointState
 
     public String toString()
     {
-        return "EndpointState: HeartBeatState = " + hbState + ", AppStateMap = " + applicationState;
+        return "EndpointState: HeartBeatState = " + hbState + ", AppStateMap = " + applicationState.get();
 

[15/22] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2015-11-11 Thread jmckenzie
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/30eecb23
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/30eecb23
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/30eecb23

Branch: refs/heads/cassandra-3.1
Commit: 30eecb23dcd3a785d40b924ebe923831f5276795
Parents: 49c9c01 6bb6bb0
Author: Joshua McKenzie 
Authored: Wed Nov 11 15:05:11 2015 -0500
Committer: Joshua McKenzie 
Committed: Wed Nov 11 15:05:11 2015 -0500

--

--




[14/22] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2015-11-11 Thread jmckenzie
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/30eecb23
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/30eecb23
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/30eecb23

Branch: refs/heads/cassandra-3.0
Commit: 30eecb23dcd3a785d40b924ebe923831f5276795
Parents: 49c9c01 6bb6bb0
Author: Joshua McKenzie 
Authored: Wed Nov 11 15:05:11 2015 -0500
Committer: Joshua McKenzie 
Committed: Wed Nov 11 15:05:11 2015 -0500

--

--




[09/22] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-11-11 Thread jmckenzie
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/87fe1e09
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/87fe1e09
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/87fe1e09

Branch: refs/heads/cassandra-3.0
Commit: 87fe1e09f15b373fd74473dddee12e289287b7aa
Parents: 9fc957c 6bad57f
Author: Joshua McKenzie 
Authored: Wed Nov 11 15:03:24 2015 -0500
Committer: Joshua McKenzie 
Committed: Wed Nov 11 15:03:24 2015 -0500

--

--




[07/22] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-11-11 Thread jmckenzie
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/87fe1e09
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/87fe1e09
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/87fe1e09

Branch: refs/heads/cassandra-3.1
Commit: 87fe1e09f15b373fd74473dddee12e289287b7aa
Parents: 9fc957c 6bad57f
Author: Joshua McKenzie 
Authored: Wed Nov 11 15:03:24 2015 -0500
Committer: Joshua McKenzie 
Committed: Wed Nov 11 15:03:24 2015 -0500

--

--




[21/22] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.1

2015-11-11 Thread jmckenzie
Merge branch 'cassandra-3.0' into cassandra-3.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1fe90d34
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1fe90d34
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1fe90d34

Branch: refs/heads/trunk
Commit: 1fe90d34bb7282df0b383289b13a9a190162ce4a
Parents: 6f7b389 9a90e98
Author: Joshua McKenzie 
Authored: Wed Nov 11 15:06:08 2015 -0500
Committer: Joshua McKenzie 
Committed: Wed Nov 11 15:06:08 2015 -0500

--
 .../org/apache/cassandra/gms/EndpointState.java |  76 ++---
 .../apache/cassandra/gms/FailureDetector.java   |   7 +-
 src/java/org/apache/cassandra/gms/Gossiper.java |  47 +++---
 .../apache/cassandra/gms/VersionedValue.java|   5 +
 .../cassandra/service/StorageService.java   |  61 ---
 .../apache/cassandra/gms/EndpointStateTest.java | 159 +++
 .../cassandra/locator/CloudstackSnitchTest.java |   4 +-
 .../apache/cassandra/locator/EC2SnitchTest.java |   4 +-
 .../locator/GoogleCloudSnitchTest.java  |   4 +-
 9 files changed, 282 insertions(+), 85 deletions(-)
--




[20/22] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.1

2015-11-11 Thread jmckenzie
Merge branch 'cassandra-3.0' into cassandra-3.1


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1fe90d34
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1fe90d34
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1fe90d34

Branch: refs/heads/cassandra-3.1
Commit: 1fe90d34bb7282df0b383289b13a9a190162ce4a
Parents: 6f7b389 9a90e98
Author: Joshua McKenzie 
Authored: Wed Nov 11 15:06:08 2015 -0500
Committer: Joshua McKenzie 
Committed: Wed Nov 11 15:06:08 2015 -0500

--
 .../org/apache/cassandra/gms/EndpointState.java |  76 ++---
 .../apache/cassandra/gms/FailureDetector.java   |   7 +-
 src/java/org/apache/cassandra/gms/Gossiper.java |  47 +++---
 .../apache/cassandra/gms/VersionedValue.java|   5 +
 .../cassandra/service/StorageService.java   |  61 ---
 .../apache/cassandra/gms/EndpointStateTest.java | 159 +++
 .../cassandra/locator/CloudstackSnitchTest.java |   4 +-
 .../apache/cassandra/locator/EC2SnitchTest.java |   4 +-
 .../locator/GoogleCloudSnitchTest.java  |   4 +-
 9 files changed, 282 insertions(+), 85 deletions(-)
--




[04/22] cassandra git commit: Fix NPE in Gossip handleStateNormal

2015-11-11 Thread jmckenzie
Fix NPE in Gossip handleStateNormal

Patch by stefania; reviewed by jknighton for CASSANDRA-10089


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6bad57fc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6bad57fc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6bad57fc

Branch: refs/heads/trunk
Commit: 6bad57fc3cf967838a220d8402db37ed9a5b3b4e
Parents: 3674ad9
Author: Stefania Alborghetti 
Authored: Wed Nov 11 15:02:26 2015 -0500
Committer: Joshua McKenzie 
Committed: Wed Nov 11 15:02:26 2015 -0500

--
 .../org/apache/cassandra/gms/EndpointState.java |  76 ++---
 .../apache/cassandra/gms/FailureDetector.java   |   7 +-
 src/java/org/apache/cassandra/gms/Gossiper.java |  47 +++---
 .../apache/cassandra/gms/VersionedValue.java|   5 +
 .../cassandra/service/StorageService.java   |  65 
 .../apache/cassandra/gms/EndpointStateTest.java | 159 +++
 .../cassandra/locator/CloudstackSnitchTest.java |   8 +-
 .../apache/cassandra/locator/EC2SnitchTest.java |   4 +-
 .../locator/GoogleCloudSnitchTest.java  |   8 +-
 9 files changed, 283 insertions(+), 96 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6bad57fc/src/java/org/apache/cassandra/gms/EndpointState.java
--
diff --git a/src/java/org/apache/cassandra/gms/EndpointState.java b/src/java/org/apache/cassandra/gms/EndpointState.java
index 1029374..3e29295 100644
--- a/src/java/org/apache/cassandra/gms/EndpointState.java
+++ b/src/java/org/apache/cassandra/gms/EndpointState.java
@@ -18,7 +18,11 @@
 package org.apache.cassandra.gms;
 
 import java.io.*;
+import java.util.Collections;
+import java.util.EnumMap;
 import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
 
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -27,8 +31,6 @@ import org.apache.cassandra.db.TypeSizes;
 import org.apache.cassandra.io.IVersionedSerializer;
 import org.apache.cassandra.io.util.DataOutputPlus;
 
-import org.cliffc.high_scale_lib.NonBlockingHashMap;
-
 /**
  * This abstraction represents both the HeartBeatState and the ApplicationState in an EndpointState
  * instance. Any state for a given endpoint can be retrieved from this instance.
@@ -42,7 +44,7 @@ public class EndpointState
     public final static IVersionedSerializer<EndpointState> serializer = new EndpointStateSerializer();
 
     private volatile HeartBeatState hbState;
-    final Map<ApplicationState, VersionedValue> applicationState = new NonBlockingHashMap<ApplicationState, VersionedValue>();
+    private final AtomicReference<Map<ApplicationState, VersionedValue>> applicationState;
 
     /* fields below do not get serialized */
     private volatile long updateTimestamp;
@@ -50,7 +52,13 @@ public class EndpointState
 
     EndpointState(HeartBeatState initialHbState)
     {
+        this(initialHbState, new EnumMap<ApplicationState, VersionedValue>(ApplicationState.class));
+    }
+
+    EndpointState(HeartBeatState initialHbState, Map<ApplicationState, VersionedValue> states)
+    {
         hbState = initialHbState;
+        applicationState = new AtomicReference<Map<ApplicationState, VersionedValue>>(new EnumMap<>(states));
         updateTimestamp = System.nanoTime();
         isAlive = true;
     }
@@ -68,21 +76,37 @@ public class EndpointState
 
     public VersionedValue getApplicationState(ApplicationState key)
    {
-        return applicationState.get(key);
+        return applicationState.get().get(key);
     }
 
-    /**
-     * TODO replace this with operations that don't expose private state
-     */
-    @Deprecated
-    public Map<ApplicationState, VersionedValue> getApplicationStateMap()
+    public Set<Map.Entry<ApplicationState, VersionedValue>> states()
+    {
+        return applicationState.get().entrySet();
+    }
+
+    public void addApplicationState(ApplicationState key, VersionedValue value)
     {
-        return applicationState;
+        addApplicationStates(Collections.singletonMap(key, value));
     }
 
-    void addApplicationState(ApplicationState key, VersionedValue value)
+    public void addApplicationStates(Map<ApplicationState, VersionedValue> values)
     {
-        applicationState.put(key, value);
+        addApplicationStates(values.entrySet());
+    }
+
+    public void addApplicationStates(Set<Map.Entry<ApplicationState, VersionedValue>> values)
+    {
+        while (true)
+        {
+            Map<ApplicationState, VersionedValue> orig = applicationState.get();
+            Map<ApplicationState, VersionedValue> copy = new EnumMap<>(orig);
+
+            for (Map.Entry<ApplicationState, VersionedValue> value : values)
+                copy.put(value.getKey(), value.getValue());
+
+            if (applicationState.compareAndSet(orig, copy))
+                return;
+        }
     }
 
     /* getters and setters */
@@ -116,7 +140,7 @@ public class EndpointState
 
     public String toString()
     {
-        return "EndpointState: HeartBeatState = " + hbState + ", AppStateMap = " + applicationState;
+        return "EndpointState: HeartBeatState = " + hbState + ", AppStateMap = " + applicationState.get();
     }
 }
 
@

[06/22] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2015-11-11 Thread jmckenzie
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/87fe1e09
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/87fe1e09
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/87fe1e09

Branch: refs/heads/cassandra-2.2
Commit: 87fe1e09f15b373fd74473dddee12e289287b7aa
Parents: 9fc957c 6bad57f
Author: Joshua McKenzie 
Authored: Wed Nov 11 15:03:24 2015 -0500
Committer: Joshua McKenzie 
Committed: Wed Nov 11 15:03:24 2015 -0500

--

--




[01/22] cassandra git commit: Fix NPE in Gossip handleStateNormal

2015-11-11 Thread jmckenzie
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 3674ad9da -> 6bad57fc3
  refs/heads/cassandra-2.2 9fc957cf3 -> 6bb6bb005
  refs/heads/cassandra-3.0 49c9c01f5 -> 9a90e9894
  refs/heads/cassandra-3.1 6f7b38987 -> 1fe90d34b
  refs/heads/trunk 55811e561 -> 7d6dbf897


Fix NPE in Gossip handleStateNormal

Patch by stefania; reviewed by jknighton for CASSANDRA-10089


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6bad57fc
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6bad57fc
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6bad57fc

Branch: refs/heads/cassandra-2.1
Commit: 6bad57fc3cf967838a220d8402db37ed9a5b3b4e
Parents: 3674ad9
Author: Stefania Alborghetti 
Authored: Wed Nov 11 15:02:26 2015 -0500
Committer: Joshua McKenzie 
Committed: Wed Nov 11 15:02:26 2015 -0500

--
 .../org/apache/cassandra/gms/EndpointState.java |  76 ++---
 .../apache/cassandra/gms/FailureDetector.java   |   7 +-
 src/java/org/apache/cassandra/gms/Gossiper.java |  47 +++---
 .../apache/cassandra/gms/VersionedValue.java|   5 +
 .../cassandra/service/StorageService.java   |  65 
 .../apache/cassandra/gms/EndpointStateTest.java | 159 +++
 .../cassandra/locator/CloudstackSnitchTest.java |   8 +-
 .../apache/cassandra/locator/EC2SnitchTest.java |   4 +-
 .../locator/GoogleCloudSnitchTest.java  |   8 +-
 9 files changed, 283 insertions(+), 96 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6bad57fc/src/java/org/apache/cassandra/gms/EndpointState.java
--
diff --git a/src/java/org/apache/cassandra/gms/EndpointState.java b/src/java/org/apache/cassandra/gms/EndpointState.java
index 1029374..3e29295 100644
--- a/src/java/org/apache/cassandra/gms/EndpointState.java
+++ b/src/java/org/apache/cassandra/gms/EndpointState.java
@@ -18,7 +18,11 @@
 package org.apache.cassandra.gms;
 
 import java.io.*;
+import java.util.Collections;
+import java.util.EnumMap;
 import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
 
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -27,8 +31,6 @@ import org.apache.cassandra.db.TypeSizes;
 import org.apache.cassandra.io.IVersionedSerializer;
 import org.apache.cassandra.io.util.DataOutputPlus;
 
-import org.cliffc.high_scale_lib.NonBlockingHashMap;
-
 /**
  * This abstraction represents both the HeartBeatState and the ApplicationState in an EndpointState
  * instance. Any state for a given endpoint can be retrieved from this instance.
@@ -42,7 +44,7 @@ public class EndpointState
     public final static IVersionedSerializer<EndpointState> serializer = new EndpointStateSerializer();
 
     private volatile HeartBeatState hbState;
-    final Map<ApplicationState, VersionedValue> applicationState = new NonBlockingHashMap<ApplicationState, VersionedValue>();
+    private final AtomicReference<Map<ApplicationState, VersionedValue>> applicationState;
 
     /* fields below do not get serialized */
     private volatile long updateTimestamp;
@@ -50,7 +52,13 @@ public class EndpointState
 
     EndpointState(HeartBeatState initialHbState)
     {
+        this(initialHbState, new EnumMap<ApplicationState, VersionedValue>(ApplicationState.class));
+    }
+
+    EndpointState(HeartBeatState initialHbState, Map<ApplicationState, VersionedValue> states)
+    {
         hbState = initialHbState;
+        applicationState = new AtomicReference<Map<ApplicationState, VersionedValue>>(new EnumMap<>(states));
         updateTimestamp = System.nanoTime();
         isAlive = true;
     }
@@ -68,21 +76,37 @@ public class EndpointState
 
     public VersionedValue getApplicationState(ApplicationState key)
     {
-        return applicationState.get(key);
+        return applicationState.get().get(key);
     }
 
-    /**
-     * TODO replace this with operations that don't expose private state
-     */
-    @Deprecated
-    public Map<ApplicationState, VersionedValue> getApplicationStateMap()
+    public Set<Map.Entry<ApplicationState, VersionedValue>> states()
+    {
+        return applicationState.get().entrySet();
+    }
+
+    public void addApplicationState(ApplicationState key, VersionedValue value)
     {
-        return applicationState;
+        addApplicationStates(Collections.singletonMap(key, value));
     }
 
-    void addApplicationState(ApplicationState key, VersionedValue value)
+    public void addApplicationStates(Map<ApplicationState, VersionedValue> values)
     {
-        applicationState.put(key, value);
+        addApplicationStates(values.entrySet());
+    }
+
+    public void addApplicationStates(Set<Map.Entry<ApplicationState, VersionedValue>> values)
+    {
+        while (true)
+        {
+            Map<ApplicationState, VersionedValue> orig = applicationState.get();
+            Map<ApplicationState, VersionedValue> copy = new EnumMap<>(orig);
+
+            for (Map.Entry<ApplicationState, VersionedValue> value : values)
+                copy.put(value.getKey(), value.getValue());
+
+            if (applicationState.compareAndSet(orig, copy))
+                return;
+        }
     }
 
     /* getters and setters */
@@ -116,7 +140,7 @@

[11/22] cassandra git commit: 10089 - 2.2 patch

2015-11-11 Thread jmckenzie
10089 - 2.2 patch


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6bb6bb00
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6bb6bb00
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6bb6bb00

Branch: refs/heads/cassandra-3.1
Commit: 6bb6bb005197c33fa94026d472ff78d4f36613cc
Parents: 87fe1e0
Author: Stefania Alborghetti 
Authored: Wed Nov 11 15:04:25 2015 -0500
Committer: Joshua McKenzie 
Committed: Wed Nov 11 15:04:25 2015 -0500

--
 .../org/apache/cassandra/gms/EndpointState.java |  76 ++---
 .../apache/cassandra/gms/FailureDetector.java   |   7 +-
 src/java/org/apache/cassandra/gms/Gossiper.java |  47 +++---
 .../apache/cassandra/gms/VersionedValue.java|   5 +
 .../cassandra/service/StorageService.java   |  65 
 .../apache/cassandra/gms/EndpointStateTest.java | 159 +++
 .../cassandra/locator/CloudstackSnitchTest.java |   4 +-
 .../apache/cassandra/locator/EC2SnitchTest.java |   4 +-
 .../locator/GoogleCloudSnitchTest.java  |   4 +-
 9 files changed, 283 insertions(+), 88 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6bb6bb00/src/java/org/apache/cassandra/gms/EndpointState.java
--
diff --git a/src/java/org/apache/cassandra/gms/EndpointState.java b/src/java/org/apache/cassandra/gms/EndpointState.java
index 0e6985a..931da8d 100644
--- a/src/java/org/apache/cassandra/gms/EndpointState.java
+++ b/src/java/org/apache/cassandra/gms/EndpointState.java
@@ -18,7 +18,11 @@
 package org.apache.cassandra.gms;
 
 import java.io.*;
+import java.util.Collections;
+import java.util.EnumMap;
 import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicReference;
 
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -27,8 +31,6 @@ import org.apache.cassandra.db.TypeSizes;
 import org.apache.cassandra.io.IVersionedSerializer;
 import org.apache.cassandra.io.util.DataOutputPlus;
 
-import org.cliffc.high_scale_lib.NonBlockingHashMap;
-
 /**
  * This abstraction represents both the HeartBeatState and the ApplicationState in an EndpointState
  * instance. Any state for a given endpoint can be retrieved from this instance.
@@ -42,7 +44,7 @@ public class EndpointState
     public final static IVersionedSerializer<EndpointState> serializer = new EndpointStateSerializer();
 
     private volatile HeartBeatState hbState;
-    final Map<ApplicationState, VersionedValue> applicationState = new NonBlockingHashMap<ApplicationState, VersionedValue>();
+    private final AtomicReference<Map<ApplicationState, VersionedValue>> applicationState;
 
     /* fields below do not get serialized */
     private volatile long updateTimestamp;
@@ -50,7 +52,13 @@ public class EndpointState
 
     EndpointState(HeartBeatState initialHbState)
     {
+        this(initialHbState, new EnumMap<ApplicationState, VersionedValue>(ApplicationState.class));
+    }
+
+    EndpointState(HeartBeatState initialHbState, Map<ApplicationState, VersionedValue> states)
+    {
         hbState = initialHbState;
+        applicationState = new AtomicReference<Map<ApplicationState, VersionedValue>>(new EnumMap<>(states));
         updateTimestamp = System.nanoTime();
         isAlive = true;
     }
@@ -68,21 +76,37 @@ public class EndpointState
 
     public VersionedValue getApplicationState(ApplicationState key)
     {
-        return applicationState.get(key);
+        return applicationState.get().get(key);
     }
 
-    /**
-     * TODO replace this with operations that don't expose private state
-     */
-    @Deprecated
-    public Map<ApplicationState, VersionedValue> getApplicationStateMap()
+    public Set<Map.Entry<ApplicationState, VersionedValue>> states()
+    {
+        return applicationState.get().entrySet();
+    }
+
+    public void addApplicationState(ApplicationState key, VersionedValue value)
     {
-        return applicationState;
+        addApplicationStates(Collections.singletonMap(key, value));
     }
 
-    void addApplicationState(ApplicationState key, VersionedValue value)
+    public void addApplicationStates(Map<ApplicationState, VersionedValue> values)
     {
-        applicationState.put(key, value);
+        addApplicationStates(values.entrySet());
+    }
+
+    public void addApplicationStates(Set<Map.Entry<ApplicationState, VersionedValue>> values)
+    {
+        while (true)
+        {
+            Map<ApplicationState, VersionedValue> orig = applicationState.get();
+            Map<ApplicationState, VersionedValue> copy = new EnumMap<>(orig);
+
+            for (Map.Entry<ApplicationState, VersionedValue> value : values)
+                copy.put(value.getKey(), value.getValue());
+
+            if (applicationState.compareAndSet(orig, copy))
+                return;
+        }
     }
 
     /* getters and setters */
@@ -133,7 +157,7 @@ public class EndpointState
 
     public String toString()
     {
-        return "EndpointState: HeartBeatState = " + hbState + ", AppStateMap = " + applicationState;
+        return "EndpointState: HeartBeatState = " + hbState + ", AppStateMap = " + applicationState.get();
     }
 }
 
@@ -146,12 +170,12 @@ class EndpointStateSerializer implements 
IVersione

[jira] [Commented] (CASSANDRA-9085) Bind JMX to localhost unless explicitly configured otherwise

2015-11-11 Thread Brian Hawkins (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000984#comment-15000984
 ] 

Brian Hawkins commented on CASSANDRA-9085:
--

I know this was done a while ago, but why was -XX:+DisableExplicitGC added to 
the JVM options as part of this fix?  Can you explain the relevance of that 
switch to this JMX change?  We are unable to trigger GC because of this switch.

> Bind JMX to localhost unless explicitly configured otherwise
> 
>
> Key: CASSANDRA-9085
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9085
> Project: Cassandra
>  Issue Type: Bug
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
>Priority: Critical
> Fix For: 2.0.14, 2.1.4
>
>
> Cassandra's default JMX config can lead to someone executing arbitrary code:  
> see http://www.mail-archive.com/user@cassandra.apache.org/msg41819.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-10690) Secondary index does not process deletes unless columns are specified

2015-11-11 Thread Tyler Hobbs (JIRA)
Tyler Hobbs created CASSANDRA-10690:
---

 Summary: Secondary index does not process deletes unless columns 
are specified
 Key: CASSANDRA-10690
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10690
 Project: Cassandra
  Issue Type: Bug
  Components: index
Reporter: Tyler Hobbs
 Fix For: 3.0.1, 3.1


The new secondary index API does not notify indexes of single-row or slice 
deletions unless specific columns are deleted.  I believe the problem is that 
in {{SecondaryIndexManager.newUpdateTransaction()}}, we skip indexes unless 
{{index.indexes(update.columns())}}.  When no columns are specified in the 
deletion, {{update.columns()}} is empty, which causes all indexes to be skipped.

I think the correct fix is to do something like this in the 
{{ModificationStatement}} constructor:

{code}
if (type == StatementType.DELETE && modifiedColumns.isEmpty())
modifiedColumns = cfm.partitionColumns();
{code}

However, I'm not sure if that may have unintended side-effects.  What do you 
think, [~slebresne]?
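For illustration only (stand-in types, not the actual SecondaryIndexManager code): the skip condition described above boils down to filtering indexes by the update's column set, so an empty column set selects no indexes at all.

{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class IndexSkipDemo
{
    // Stand-in for an index that declares which columns it covers.
    static class Index
    {
        final Set<String> covered;
        Index(String... cols) { this.covered = new HashSet<>(Arrays.asList(cols)); }

        // Mirrors the idea behind index.indexes(update.columns()).
        boolean indexes(Set<String> updateColumns)
        {
            for (String c : updateColumns)
                if (covered.contains(c))
                    return true;
            return false;
        }
    }

    static List<Index> indexesFor(Set<String> updateColumns, List<Index> all)
    {
        List<Index> selected = new ArrayList<>();
        for (Index idx : all)
            if (idx.indexes(updateColumns))   // empty updateColumns -> nothing selected
                selected.add(idx);
        return selected;
    }

    public static void main(String[] args)
    {
        List<Index> all = Collections.singletonList(new Index("b"));
        // Column-level delete: column "b" is named, so the index is notified.
        System.out.println(indexesFor(new HashSet<>(Arrays.asList("b")), all).size());  // 1
        // Row or slice delete: no columns named, so every index is skipped.
        System.out.println(indexesFor(Collections.<String>emptySet(), all).size());     // 0
    }
}
{code}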



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-7408) System hints corruption - dataSize ... would be larger than file

2015-11-11 Thread Jeff Griffith (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000950#comment-15000950
 ] 

Jeff Griffith edited comment on CASSANDRA-7408 at 11/11/15 7:46 PM:


no problem [~iamaleksey]  i seem to recall this being related to an issue i 
reported separately that was fixed where a short integer was overflowing. 
pretty sure it's all good now.


was (Author: jeffery.griffith):
no problem [~iamaleksey]  i seem to recall this being related to an issue i 
reported separately where a short integer was overflowing. pretty sure it's all 
good now.

> System hints corruption - dataSize ... would be larger than file
> 
>
> Key: CASSANDRA-7408
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7408
> Project: Cassandra
>  Issue Type: Bug
> Environment: RHEL 6.5
> Cassandra 1.2.16
> RF=3
> Thrift
>Reporter: Jeff Griffith
>
> I've found several unresolved JIRA tickets related to SSTable corruption but 
> not sure if they apply to the case we are seeing in system/hints. We see 
> periodic exceptions such as:
> {noformat}
> dataSize of 144115248479299639 starting at 17209 would be larger than file 
> /home/y/var/cassandra/data/system/hints/system-hints-ic-219-Data.db length 
> 35542
> {noformat}
> Is there something we could possibly be doing from the application to cause 
> this sort of corruption? We also see it on some of our own column families 
> also some *negative* lengths which are presumably a similar corruption.
> {noformat}
> ERROR [HintedHandoff:57] 2014-06-17 17:08:04,690 CassandraDaemon.java (line 
> 191) Exception in thread Thread[HintedHandoff:57,1,main]
> java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.IOException: 
> dataSize of 144115248479299639 starting at 17209 would be larger than file 
> /home/y/var/cassandra/data/system/hints/system-hints-ic-219-Data.db length 
> 35542
> at 
> org.apache.cassandra.db.HintedHandOffManager.doDeliverHintsToEndpoint(HintedHandOffManager.java:441)
> at 
> org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:282)
> at 
> org.apache.cassandra.db.HintedHandOffManager.access$300(HintedHandOffManager.java:90)
> at 
> org.apache.cassandra.db.HintedHandOffManager$4.run(HintedHandOffManager.java:508)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.IOException: 
> dataSize of 144115248479299639 starting at 17209 would be larger than file 
> /home/y/var/cassandra/data/system/hints/system-hints-ic-219-Data.db length 
> 35542
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:188)
> at 
> org.apache.cassandra.db.HintedHandOffManager.doDeliverHintsToEndpoint(HintedHandOffManager.java:437)
> ... 6 more
> Caused by: org.apache.cassandra.io.sstable.CorruptSSTableException: 
> java.io.IOException: dataSize of 144115248479299639 starting at 17209 would 
> be larger than file 
> /home/y/var/cassandra/data/system/hints/system-hints-ic-219-Data.db length 
> 35542
> at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.(SSTableIdentityIterator.java:167)
> at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.(SSTableIdentityIterator.java:83)
> at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.(SSTableIdentityIterator.java:69)
> at 
> org.apache.cassandra.io.sstable.SSTableScanner$KeyScanningIterator.next(SSTableScanner.java:180)
> at 
> org.apache.cassandra.io.sstable.SSTableScanner$KeyScanningIterator.next(SSTableScanner.java:155)
> at 
> org.apache.cassandra.io.sstable.SSTableScanner.next(SSTableScanner.java:142)
> at 
> org.apache.cassandra.io.sstable.SSTableScanner.next(SSTableScanner.java:38)
> at 
> org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:145)
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:122)
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:96)
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
> at 
> org.apache.cassandra.db.compaction.CompactionT

[jira] [Commented] (CASSANDRA-10445) Cassandra-stress throws max frame size error when SSL certification is enabled

2015-11-11 Thread Charles Crawford (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000961#comment-15000961
 ] 

Charles Crawford commented on CASSANDRA-10445:
--

I am having the same issue. Any updates on a workaround or fix?

> Cassandra-stress throws max frame size error when SSL certification is enabled
> --
>
> Key: CASSANDRA-10445
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10445
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sam Goldberg
>  Labels: stress
> Fix For: 2.1.x
>
>
> Running cassandra-stress when SSL is enabled gives the following error and 
> does not finish executing:
> {quote}
> cassandra-stress write n=100
> Exception in thread "main" java.lang.RuntimeException: 
> org.apache.thrift.transport.TTransportException: Frame size (352518912) 
> larger than max length (15728640)!
> at 
> org.apache.cassandra.stress.settings.StressSettings.getRawThriftClient(StressSettings.java:144)
> at 
> org.apache.cassandra.stress.settings.StressSettings.getRawThriftClient(StressSettings.java:110)
> at 
> org.apache.cassandra.stress.settings.SettingsSchema.createKeySpacesThrift(SettingsSchema.java:111)
> at 
> org.apache.cassandra.stress.settings.SettingsSchema.createKeySpaces(SettingsSchema.java:59)
> at 
> org.apache.cassandra.stress.settings.StressSettings.maybeCreateKeyspaces(StressSettings.java:205)
> at org.apache.cassandra.stress.StressAction.run(StressAction.java:55)
> at org.apache.cassandra.stress.Stress.main(Stress.java:109)
> {quote}
> I was able to reproduce this issue consistently via the following steps:
> 1) Spin up 3 node cassandra cluster running 2.1.8
> 2) Perform cassandra-stress write n=100
> 3) Everything works!
> 4) Generate keystore and truststore for each node in the cluster and 
> distribute appropriately 
> 5) Modify cassandra.yaml on each node to enable SSL:
> client_encryption_options:
> enabled: true
> keystore: /
> # require_client_auth: false
> # Set trustore and truststore_password if require_client_auth is true
> truststore:  /
> truststore_password: 
> # More advanced defaults below:
> protocol: ssl
> 6) Restart each node.
> 7) Perform cassandra-stress write n=100
> 8) Get Frame Size error, cassandra-stress fails
> This may be related to CASSANDRA-9325.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9947) nodetool verify is broken

2015-11-11 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000955#comment-15000955
 ] 

Ariel Weisberg commented on CASSANDRA-9947:
---

The fact that we never validate checksums on uncompressed data on reads creates 
problems for repair even before verify is run. We can propagate corrupted data 
because the merkle tree is going to detect the corruption and attempt to 
propagate it without validating the checksum of the corrupt data.

Right now scrub isn't going to validate checksums on uncompressed files, based 
on my reading, so scrubbing won't improve the situation. I also don't see how 
scrub can fix a corrupted compressed table since the checksum is not per 
record; it covers an arbitrary 64k page. You could try to parse the page 
anyway, but that is not what is currently done since the reader will just 
throw an exception if you try. Corrupted sstables work fine in the regular read 
path because the index points you to a valid place to start reading from, but 
that won't work for a sequential walk through the file.

It seems to me like we are shuffling deck chairs on the titanic once we allow 
repair to propagate corrupted data. You could say the same about returning 
corrupted data to user queries since those can be used to propagate the 
corruption back into C* at all replicas.

If there are corruption-handling flows we want to have, it might make sense 
to create some test cases for the various file formats and see what the 
existing code actually does. My suspicion is that sequential access is going to 
fail in the compressed case and blindly succeed in the uncompressed case.

We also need to nail down fix versions since coalescing to something that works 
might not be possible/worthwhile against existing formats. And while we are at 
it maybe we should nail down file formats we are more happy with in terms of 
being flexible about block sizes, implementing a page cache etc.
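To make the read-path gap described at the top of this comment concrete, here is a minimal sketch (not Cassandra code) of per-block checksum validation on read: recompute a CRC32 over each block and compare it to the stored digest before handing the bytes to anything downstream, instead of trusting the data and letting repair stream it onward.

{code}
import java.util.zip.CRC32;

public class BlockChecksumDemo
{
    // Returns the block only if its recomputed CRC32 matches the stored value.
    static byte[] readValidated(byte[] block, long storedCrc) throws java.io.IOException
    {
        CRC32 crc = new CRC32();
        crc.update(block, 0, block.length);
        if (crc.getValue() != storedCrc)
            throw new java.io.IOException("Checksum mismatch: block is corrupt");
        return block;
    }

    public static void main(String[] args) throws Exception
    {
        byte[] data = "some sstable block".getBytes("UTF-8");
        CRC32 crc = new CRC32();
        crc.update(data, 0, data.length);
        long stored = crc.getValue();

        readValidated(data, stored);            // passes
        data[0] ^= 0x01;                        // flip a bit to simulate corruption
        try { readValidated(data, stored); }
        catch (java.io.IOException e) { System.out.println(e.getMessage()); }
    }
}
{code}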

> nodetool verify is broken
> -
>
> Key: CASSANDRA-9947
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9947
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Jonathan Ellis
>Priority: Critical
> Fix For: 2.2.4
>
>
> Raised these issues on CASSANDRA-5791, but didn't revert/re-open, so they 
> were ignored:
> We mark sstables that fail verification as unrepaired, but that's not going 
> to do what you think.  What it means is that the local node will use that 
> sstable in the next repair, but other nodes will not. So all we'll end up 
> doing is streaming whatever data we can read from it, to the other replicas.  
> If we could magically mark whatever sstables correspond on the remote nodes, 
> to the data in the local sstable, that would work, but we can't.
> IMO what we should do is:
> *scrub, because it's quite likely we'll fail reading from the sstable 
> otherwise and
> *full repair across the data range covered by the sstable
> Additionally,
> * I'm not sure that keeping "extended verify" code around is worth it. Since 
> the point is to work around not having a checksum, we could just scrub 
> instead. This is slightly more heavyweight but it would be a one-time cost 
> (scrub would build a new checksum) and we wouldn't have to worry about 
> keeping two versions of almost-the-same-code in sync.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7408) System hints corruption - dataSize ... would be larger than file

2015-11-11 Thread Jeff Griffith (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000950#comment-15000950
 ] 

Jeff Griffith commented on CASSANDRA-7408:
--

no problem [~iamaleksey]  i seem to recall this being related to an issue i 
reported separately where a short integer was overflowing. pretty sure it's all 
good now.
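For readers unfamiliar with the error above: the "dataSize ... would be larger than file ... length" message comes from a sanity check that compares a length field deserialized from the row header against the actual file size, so a few corrupted (or overflowed) bytes in that field surface as an absurdly large dataSize. A rough, hypothetical sketch of such a check (not the actual Cassandra code):

{code}
import java.io.IOException;

public class DataSizeCheckDemo
{
    // Hypothetical stand-in for the row-level sanity check behind the error above.
    static void checkDataSize(long dataSize, long startPosition, long fileLength, String file) throws IOException
    {
        if (dataSize < 0 || startPosition + dataSize > fileLength)
            throw new IOException("dataSize of " + dataSize + " starting at " + startPosition
                                  + " would be larger than file " + file + " length " + fileLength);
    }

    public static void main(String[] args) throws IOException
    {
        checkDataSize(79, 0, 35542, "system-hints-ic-219-Data.db");  // fine
        // A handful of corrupted bytes read as a long easily produce a huge value.
        try { checkDataSize(144115248479299639L, 17209, 35542, "system-hints-ic-219-Data.db"); }
        catch (IOException e) { System.out.println(e.getMessage()); }
    }
}
{code}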

> System hints corruption - dataSize ... would be larger than file
> 
>
> Key: CASSANDRA-7408
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7408
> Project: Cassandra
>  Issue Type: Bug
> Environment: RHEL 6.5
> Cassandra 1.2.16
> RF=3
> Thrift
>Reporter: Jeff Griffith
>
> I've found several unresolved JIRA tickets related to SSTable corruption but 
> not sure if they apply to the case we are seeing in system/hints. We see 
> periodic exceptions such as:
> {noformat}
> dataSize of 144115248479299639 starting at 17209 would be larger than file 
> /home/y/var/cassandra/data/system/hints/system-hints-ic-219-Data.db length 
> 35542
> {noformat}
> Is there something we could possibly be doing from the application to cause 
> this sort of corruption? We also see it on some of our own column families 
> also some *negative* lengths which are presumably a similar corruption.
> {noformat}
> ERROR [HintedHandoff:57] 2014-06-17 17:08:04,690 CassandraDaemon.java (line 
> 191) Exception in thread Thread[HintedHandoff:57,1,main]
> java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.IOException: 
> dataSize of 144115248479299639 starting at 17209 would be larger than file 
> /home/y/var/cassandra/data/system/hints/system-hints-ic-219-Data.db length 
> 35542
> at 
> org.apache.cassandra.db.HintedHandOffManager.doDeliverHintsToEndpoint(HintedHandOffManager.java:441)
> at 
> org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:282)
> at 
> org.apache.cassandra.db.HintedHandOffManager.access$300(HintedHandOffManager.java:90)
> at 
> org.apache.cassandra.db.HintedHandOffManager$4.run(HintedHandOffManager.java:508)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.IOException: 
> dataSize of 144115248479299639 starting at 17209 would be larger than file 
> /home/y/var/cassandra/data/system/hints/system-hints-ic-219-Data.db length 
> 35542
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:188)
> at 
> org.apache.cassandra.db.HintedHandOffManager.doDeliverHintsToEndpoint(HintedHandOffManager.java:437)
> ... 6 more
> Caused by: org.apache.cassandra.io.sstable.CorruptSSTableException: 
> java.io.IOException: dataSize of 144115248479299639 starting at 17209 would 
> be larger than file 
> /home/y/var/cassandra/data/system/hints/system-hints-ic-219-Data.db length 
> 35542
> at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.(SSTableIdentityIterator.java:167)
> at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.(SSTableIdentityIterator.java:83)
> at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.(SSTableIdentityIterator.java:69)
> at 
> org.apache.cassandra.io.sstable.SSTableScanner$KeyScanningIterator.next(SSTableScanner.java:180)
> at 
> org.apache.cassandra.io.sstable.SSTableScanner$KeyScanningIterator.next(SSTableScanner.java:155)
> at 
> org.apache.cassandra.io.sstable.SSTableScanner.next(SSTableScanner.java:142)
> at 
> org.apache.cassandra.io.sstable.SSTableScanner.next(SSTableScanner.java:38)
> at 
> org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:145)
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:122)
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:96)
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:145)
> at 
> org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.db.c

[jira] [Commented] (CASSANDRA-8684) Replace usage of Adler32 with CRC32

2015-11-11 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000924#comment-15000924
 ] 

Ariel Weisberg commented on CASSANDRA-8684:
---

Thanks for the heads up. We are pretty happy with where CRC32 is, and I hear it 
is also going to get faster in JDK 9. It will definitely be nice for people 
running older versions of Cassandra.
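For anyone who wants to reproduce a rough version of the numbers quoted below, a simple (not JMH-quality) timing sketch over the JDK's built-in Checksum implementations:

{code}
import java.util.Random;
import java.util.zip.Adler32;
import java.util.zip.CRC32;
import java.util.zip.Checksum;

public class ChecksumBench
{
    static long time(Checksum c, byte[] data, int iterations)
    {
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++)
        {
            c.reset();
            c.update(data, 0, data.length);
        }
        return (System.nanoTime() - start) / 1_000_000;  // elapsed ms
    }

    public static void main(String[] args)
    {
        byte[] data = new byte[1024];
        new Random(42).nextBytes(data);
        int iterations = 5_000_000;

        // Warm up both implementations before timing.
        time(new CRC32(), data, 100_000);
        time(new Adler32(), data, 100_000);

        System.out.println("CRC32:   " + time(new CRC32(), data, iterations) + " ms");
        System.out.println("Adler32: " + time(new Adler32(), data, iterations) + " ms");
    }
}
{code}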

> Replace usage of Adler32 with CRC32
> ---
>
> Key: CASSANDRA-8684
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8684
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
> Fix For: 3.0 beta 1
>
> Attachments: CRCBenchmark.java, PureJavaCrc32.java, Sample.java
>
>
> I could not find a situation in which Adler32 outperformed PureJavaCrc32 much 
> less the intrinsic from Java 8. For small allocations PureJavaCrc32 was much 
> faster probably due to the JNI overhead of invoking the native Adler32 
> implementation where the array has to be allocated and copied.
> I tested on a 65w Sandy Bridge i5 running Ubuntu 14.04 with JDK 1.7.0_71 as 
> well as a c3.8xlarge running Ubuntu 14.04.
> I think it makes sense to stop using Adler32 when generating new checksums.
> c3.8xlarge, results are time in milliseconds, lower is better
> ||Allocation size|Adler32|CRC32|PureJavaCrc32||
> |64|47636|46075|25782|
> |128|36755|36712|23782|
> |256|31194|32211|22731|
> |1024|27194|28792|22010|
> |1048576|25941|27807|21808|
> |536870912|25957|27840|21836|
> i5
> ||Allocation size|Adler32|CRC32|PureJavaCrc32||
> |64|50539|50466|26826|
> |128|37092|38533|24553|
> |256|30630|32938|23459|
> |1024|26064|29079|22592|
> |1048576|24357|27911|22481|
> |536870912|24838|28360|22853|
> Another fun fact. Performance of the CRC32 intrinsic appears to double from 
> Sandy Bridge -> Haswell. Unless I am measuring something different when going 
> from Linux/Sandy to Haswell/OS X.
> The intrinsic/JDK 8 implementation also operates against DirectByteBuffers 
> better and coding against the wrapper will get that boost when run with Java 
> 8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8505) Invalid results are returned while secondary index are being build

2015-11-11 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000927#comment-15000927
 ] 

Tyler Hobbs commented on CASSANDRA-8505:


bq. It would be good to have some test coverage of this, although the best I 
could come up with is a dtest which inserts many rows, then adds the index and 
queries immediately expecting ReadFailureException, which is fairly lame and 
fragile.

What about accepting either {{ReadFailureException}} or the complete, correct 
result?  If index building got faster we might stop hitting the 
{{ReadFailureException}} case, but at least the test wouldn't flap.
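A hedged sketch of that idea, written in the style of the CQLTester-based tests quoted below (it assumes the same execute/assertRows/row helpers and is not an actual dtest):

{code}
createIndex("CREATE INDEX ON %s(b)");

// Accept either outcome so the test does not flap if index builds get faster:
// while the index is still building we expect the read to be rejected, and once
// it is built we expect the complete, correct result.
try
{
    assertRows(execute("SELECT * FROM %s WHERE b = ?;", 1),
               row(0, 1, 1),
               row(1, 1, 4));
}
catch (ReadFailureException e)
{
    // acceptable: the index was not ready yet
}
{code}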

> Invalid results are returned while secondary index are being build
> --
>
> Key: CASSANDRA-8505
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8505
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 2.2.x, 3.0.x
>
>
> If you request an index creation and then execute a query that use the index 
> the results returned might be invalid until the index is fully build. This is 
> caused by the fact that the table column will be marked as indexed before the 
> index is ready.
> The following unit tests can be use to reproduce the problem:
> {code}
> @Test
> public void testIndexCreatedAfterInsert() throws Throwable
> {
> createTable("CREATE TABLE %s (a int, b int, c int, primary key((a, 
> b)))");
> execute("INSERT INTO %s (a, b, c) VALUES (0, 0, 0);");
> execute("INSERT INTO %s (a, b, c) VALUES (0, 1, 1);");
> execute("INSERT INTO %s (a, b, c) VALUES (0, 2, 2);");
> execute("INSERT INTO %s (a, b, c) VALUES (1, 0, 3);");
> execute("INSERT INTO %s (a, b, c) VALUES (1, 1, 4);");
> 
> createIndex("CREATE INDEX ON %s(b)");
> 
> assertRows(execute("SELECT * FROM %s WHERE b = ?;", 1),
>row(0, 1, 1),
>row(1, 1, 4));
> }
> 
> @Test
> public void testIndexCreatedBeforeInsert() throws Throwable
> {
> createTable("CREATE TABLE %s (a int, b int, c int, primary key((a, 
> b)))");
> createIndex("CREATE INDEX ON %s(b)");
> 
> execute("INSERT INTO %s (a, b, c) VALUES (0, 0, 0);");
> execute("INSERT INTO %s (a, b, c) VALUES (0, 1, 1);");
> execute("INSERT INTO %s (a, b, c) VALUES (0, 2, 2);");
> execute("INSERT INTO %s (a, b, c) VALUES (1, 0, 3);");
> execute("INSERT INTO %s (a, b, c) VALUES (1, 1, 4);");
> assertRows(execute("SELECT * FROM %s WHERE b = ?;", 1),
>row(0, 1, 1),
>row(1, 1, 4));
> }
> {code}
> The first test will fail while the second will work. 
> In my opinion the first test should reject the request as invalid (as if the 
> index was not existing) until the index is fully build.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-10582) CorruptSSTableException should print the SS Table Name

2015-11-11 Thread Jeremy Hanna (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000904#comment-15000904
 ] 

Jeremy Hanna edited comment on CASSANDRA-10582 at 11/11/15 7:09 PM:


++1 can this be applied to 2.1+ as it's simply for the log and debugging 
purposes?  It would be nice to even go back to 2.0, but 2.1 would be great at 
this point.


was (Author: jeromatron):
+1 can this be applied to 2.1+ as it's simply for the log and debugging 
purposes?  It would be nice to even go back to 2.0, but 2.1 would be great at 
this point.

> CorruptSSTableException should print the SS Table Name
> --
>
> Key: CASSANDRA-10582
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10582
> Project: Cassandra
>  Issue Type: Bug
> Environment: Azure
>Reporter: Anubhav Kale
>Priority: Minor
> Fix For: 3.1
>
>
> We should print the SS Table name that's being reported as corrupt to help 
> with quick recovery.
> INFO  16:32:15  Opening 
> /mnt/cassandra/data/exchangecf/udsuserhourlysnapshot-d1260590711511e587125dc4955cc492/exchangecf-udsuserhourlysnapshot-ka-21214
>  (23832772 bytes)
> INFO  16:32:15  Opening 
> /mnt/cassandra/data/exchangecf/udsuserhourlysnapshot-d1260590711511e587125dc4955cc492/exchangecf-udsuserhourlysnapshot-ka-18398
>  (149675 bytes)
> INFO  16:32:15  Opening 
> /mnt/cassandra/data/exchangecf/udsuserhourlysnapshot-d1260590711511e587125dc4955cc492/exchangecf-udsuserhourlysnapshot-ka-23707
>  (18270 bytes)
> INFO  16:32:15  Opening 
> /mnt/cassandra/data/exchangecf/udsuserhourlysnapshot-d1260590711511e587125dc4955cc492/exchangecf-udsuserhourlysnapshot-ka-13656
>  (814588 bytes)
> ERROR 16:32:15  Exiting forcefully due to file system exception on startup, 
> disk failure policy "stop"
> org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:131)
>  ~[apache-cassandra-2.1.9-SNAPSHOT.jar:2.1.9-SNAPSHOT]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:85)
>  ~[apache-cassandra-2.1.9-SNAPSHOT.jar:2.1.9-SNAPSHOT]
> at 
> org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:79)
>  ~[apache-cassandra-2.1.9-SNAPSHOT.jar:2.1.9-SNAPSHOT]
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:72)
>  ~[apache-cassandra-2.1.9-SNAPSHOT.jar:2.1.9-SNAPSHOT]
> at



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10582) CorruptSSTableException should print the SS Table Name

2015-11-11 Thread Jeremy Hanna (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000904#comment-15000904
 ] 

Jeremy Hanna commented on CASSANDRA-10582:
--

+1 can this be applied to 2.1+ as it's simply for the log and debugging 
purposes?  It would be nice to even go back to 2.0, but 2.1 would be great at 
this point.

> CorruptSSTableException should print the SS Table Name
> --
>
> Key: CASSANDRA-10582
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10582
> Project: Cassandra
>  Issue Type: Bug
> Environment: Azure
>Reporter: Anubhav Kale
>Priority: Minor
> Fix For: 3.1
>
>
> We should print the SS Table name that's being reported as corrupt to help 
> with quick recovery.
> INFO  16:32:15  Opening 
> /mnt/cassandra/data/exchangecf/udsuserhourlysnapshot-d1260590711511e587125dc4955cc492/exchangecf-udsuserhourlysnapshot-ka-21214
>  (23832772 bytes)
> INFO  16:32:15  Opening 
> /mnt/cassandra/data/exchangecf/udsuserhourlysnapshot-d1260590711511e587125dc4955cc492/exchangecf-udsuserhourlysnapshot-ka-18398
>  (149675 bytes)
> INFO  16:32:15  Opening 
> /mnt/cassandra/data/exchangecf/udsuserhourlysnapshot-d1260590711511e587125dc4955cc492/exchangecf-udsuserhourlysnapshot-ka-23707
>  (18270 bytes)
> INFO  16:32:15  Opening 
> /mnt/cassandra/data/exchangecf/udsuserhourlysnapshot-d1260590711511e587125dc4955cc492/exchangecf-udsuserhourlysnapshot-ka-13656
>  (814588 bytes)
> ERROR 16:32:15  Exiting forcefully due to file system exception on startup, 
> disk failure policy "stop"
> org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:131)
>  ~[apache-cassandra-2.1.9-SNAPSHOT.jar:2.1.9-SNAPSHOT]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:85)
>  ~[apache-cassandra-2.1.9-SNAPSHOT.jar:2.1.9-SNAPSHOT]
> at 
> org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:79)
>  ~[apache-cassandra-2.1.9-SNAPSHOT.jar:2.1.9-SNAPSHOT]
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:72)
>  ~[apache-cassandra-2.1.9-SNAPSHOT.jar:2.1.9-SNAPSHOT]
> at



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10689) java.lang.OutOfMemoryError: Direct buffer memory

2015-11-11 Thread mlowicki (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000901#comment-15000901
 ] 

mlowicki commented on CASSANDRA-10689:
--

After upgrading from 2.1.9 to 2.1.11 two days ago I'm getting lots of:
{code}
WARN  [SharedPool-Worker-28] 2015-11-11 19:01:22,409 
AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
Thread[SharedPool-Worker-28,5,main]: {}
org.apache.cassandra.io.sstable.CorruptSSTableException: 
org.apache.cassandra.io.compress.CorruptBlockException: 
(/var/lib/cassandra/data2/sync/entity2-e24b5040199b11e5a30f75bb514ae072/sync-entity2-ka-392603-Data.db):
 corruption detected, chunk at 11612338 of length 156219476.
at 
org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:85)
 ~[apache-cassandra-2.1.11.jar:2.1.11]
at 
org.apache.cassandra.io.util.RandomAccessReader.seek(RandomAccessReader.java:310)
 ~[apache-cassandra-2.1.11.jar:2.1.11]
at 
org.apache.cassandra.io.util.PoolingSegmentedFile.getSegment(PoolingSegmentedFile.java:64)
 ~[apache-cassandra-2.1.11.jar:2.1.11]
at 
org.apache.cassandra.io.sstable.SSTableReader.getFileDataInput(SSTableReader.java:1894)
 ~[apache-cassandra-2.1.11.jar:2.1.11]
at 
org.apache.cassandra.db.columniterator.IndexedSliceReader.setToRowStart(IndexedSliceReader.java:107)
 ~[apache-cassandra-2.1.11.jar:2.1.11]
at 
org.apache.cassandra.db.columniterator.IndexedSliceReader.(IndexedSliceReader.java:83)
 ~[apache-cassandra-2.1.11.jar:2.1.11]
at 
org.apache.cassandra.db.columniterator.SSTableSliceIterator.createReader(SSTableSliceIterator.java:65)
 ~[apache-cassandra-2.1.11.jar:2.1.11]
at 
org.apache.cassandra.db.columniterator.SSTableSliceIterator.(SSTableSliceIterator.java:42)
 ~[apache-cassandra-2.1.11.jar:2.1.11]
at 
org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:246)
 ~[apache-cassandra-2.1.11.jar:2.1.11]
at 
org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62)
 ~[apache-cassandra-2.1.11.jar:2.1.11]
at 
org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:270)
 ~[apache-cassandra-2.1.11.jar:2.1.11]
at 
org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:62)
 ~[apache-cassandra-2.1.11.jar:2.1.11]
at 
org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1994)
 ~[apache-cassandra-2.1.11.jar:2.1.11]
at 
org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1837)
 ~[apache-cassandra-2.1.11.jar:2.1.11]
at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:353) 
~[apache-cassandra-2.1.11.jar:2.1.11]
at 
org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:85)
 ~[apache-cassandra-2.1.11.jar:2.1.11]
at 
org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:47) 
~[apache-cassandra-2.1.11.jar:2.1.11]
at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:64) 
~[apache-cassandra-2.1.11.jar:2.1.11]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
~[na:1.7.0_80]
at 
org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
 ~[apache-cassandra-2.1.11.jar:2.1.11]
at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[apache-cassandra-2.1.11.jar:2.1.11]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
Caused by: org.apache.cassandra.io.compress.CorruptBlockException: 
(/var/lib/cassandra/data2/sync/entity2-e24b5040199b11e5a30f75bb514ae072/sync-entity2-ka-392603-Data.db):
 corruption detected, chunk at 11612338 of length 156219476.
at 
org.apache.cassandra.io.compress.CompressedRandomAccessReader.decompressChunk(CompressedRandomAccessReader.java:116)
 ~[apache-cassandra-2.1.11.jar:2.1.11]
at 
org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:81)
 ~[apache-cassandra-2.1.11.jar:2.1.11]
... 21 common frames omitted
Caused by: java.io.IOException: Compressed lengths mismatch
at 
org.apache.cassandra.io.compress.LZ4Compressor.uncompress(LZ4Compressor.java:98)
 ~[apache-cassandra-2.1.11.jar:2.1.11]
at 
org.apache.cassandra.io.compress.CompressedRandomAccessReader.decompressChunk(CompressedRandomAccessReader.java:112)
 ~[apache-cassandra-2.1.11.jar:2.1.11]
... 22 common frames omitted
{code}

This happens on 3 out of 7 nodes in one data center.

> java.lang.OutOfMemoryError: Direct buffer memory
> 
>
> Key: CASSANDRA-10689
> URL: https://issues.apache.org/jira/brows

[jira] [Updated] (CASSANDRA-10689) java.lang.OutOfMemoryError: Direct buffer memory

2015-11-11 Thread mlowicki (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mlowicki updated CASSANDRA-10689:
-
Reproduced In: 2.1.11
Fix Version/s: (was: 2.1.11)

> java.lang.OutOfMemoryError: Direct buffer memory
> 
>
> Key: CASSANDRA-10689
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10689
> Project: Cassandra
>  Issue Type: Bug
>Reporter: mlowicki
>
> {code}
> ERROR [SharedPool-Worker-63] 2015-11-11 17:53:16,161 
> JVMStabilityInspector.java:117 - JVM state determined to be unstable.  
> Exiting forcefully due to:
> java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:658) ~[na:1.7.0_80]
> at java.nio.DirectByteBuffer.(DirectByteBuffer.java:123) 
> ~[na:1.7.0_80]
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306) 
> ~[na:1.7.0_80]
> at sun.nio.ch.Util.getTemporaryDirectBuffer(Util.java:174) 
> ~[na:1.7.0_80]
> at sun.nio.ch.IOUtil.read(IOUtil.java:195) ~[na:1.7.0_80]
> at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:149) 
> ~[na:1.7.0_80]
> at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.decompressChunk(CompressedRandomAccessReader.java:104)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:81)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]  
> at 
> org.apache.cassandra.io.util.RandomAccessReader.seek(RandomAccessReader.java:310)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.io.util.PoolingSegmentedFile.getSegment(PoolingSegmentedFile.java:64)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.getFileDataInput(SSTableReader.java:1894)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.columniterator.IndexedSliceReader.setToRowStart(IndexedSliceReader.java:107)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.columniterator.IndexedSliceReader.(IndexedSliceReader.java:83)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.columniterator.SSTableSliceIterator.createReader(SSTableSliceIterator.java:65)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.columniterator.SSTableSliceIterator.(SSTableSliceIterator.java:42)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:246)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:270)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:62)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1994)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1837)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:353) 
> ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:85)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:47) 
> ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:64) 
> ~[apache-cassandra-2.1.11.jar:2.1.11]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_80]
> at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  ~[apache-cassandra-2.1.11.jar:2.1.11]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-2.1.11.jar:2.1.11]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-10582) CorruptSSTableException should print the SS Table Name

2015-11-11 Thread Wei Deng (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000885#comment-15000885
 ] 

Wei Deng edited comment on CASSANDRA-10582 at 11/11/15 6:52 PM:


It appears that we have fixed this issue in the uber commit for CASSANDRA-8099 
(https://github.com/apache/cassandra/commit/a991b64811f4d6adb6c7b31c0df52288eb06cf19).
 See this diff:

{noformat}
diff CorruptSSTableException.java /code/cassandra-trunk/src/java/org/apache/cassandra/io/sstable
28c28
< super(cause);
---
> super("Corrupted: " + path, cause);
{noformat}

I'd argue that this is important enough (for troubleshooting critical 
conditions and recovering from them quickly) to back-port to the 2.1 branch; 
it also presents almost no risk, as it's just a more helpful error message.


was (Author: weideng):
It appears that we have fixed this issue in the uber commit for #8099 
(https://github.com/apache/cassandra/commit/a991b64811f4d6adb6c7b31c0df52288eb06cf19).
 See this diff:

diff CorruptSSTableException.java 
/code/cassandra-trunk/src/java/org/apache/cassandra/io/sstable
28c28
< super(cause);
---
> super("Corrupted: " + path, cause);

I'd argue that this is important enough (for troubleshooting critical 
conditions and recovering from them quickly) to back-port this code to 2.1 
branch, also it presents almost no risk as it's just a more helpful error 
message.

> CorruptSSTableException should print the SS Table Name
> --
>
> Key: CASSANDRA-10582
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10582
> Project: Cassandra
>  Issue Type: Bug
> Environment: Azure
>Reporter: Anubhav Kale
>Priority: Minor
> Fix For: 3.1
>
>
> We should print the SS Table name that's being reported as corrupt to help 
> with quick recovery.
> INFO  16:32:15  Opening 
> /mnt/cassandra/data/exchangecf/udsuserhourlysnapshot-d1260590711511e587125dc4955cc492/exchangecf-udsuserhourlysnapshot-ka-21214
>  (23832772 bytes)
> INFO  16:32:15  Opening 
> /mnt/cassandra/data/exchangecf/udsuserhourlysnapshot-d1260590711511e587125dc4955cc492/exchangecf-udsuserhourlysnapshot-ka-18398
>  (149675 bytes)
> INFO  16:32:15  Opening 
> /mnt/cassandra/data/exchangecf/udsuserhourlysnapshot-d1260590711511e587125dc4955cc492/exchangecf-udsuserhourlysnapshot-ka-23707
>  (18270 bytes)
> INFO  16:32:15  Opening 
> /mnt/cassandra/data/exchangecf/udsuserhourlysnapshot-d1260590711511e587125dc4955cc492/exchangecf-udsuserhourlysnapshot-ka-13656
>  (814588 bytes)
> ERROR 16:32:15  Exiting forcefully due to file system exception on startup, 
> disk failure policy "stop"
> org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:131)
>  ~[apache-cassandra-2.1.9-SNAPSHOT.jar:2.1.9-SNAPSHOT]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:85)
>  ~[apache-cassandra-2.1.9-SNAPSHOT.jar:2.1.9-SNAPSHOT]
> at 
> org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:79)
>  ~[apache-cassandra-2.1.9-SNAPSHOT.jar:2.1.9-SNAPSHOT]
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:72)
>  ~[apache-cassandra-2.1.9-SNAPSHOT.jar:2.1.9-SNAPSHOT]
> at



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10582) CorruptSSTableException should print the SS Table Name

2015-11-11 Thread Wei Deng (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000885#comment-15000885
 ] 

Wei Deng commented on CASSANDRA-10582:
--

It appears that we have fixed this issue in the uber commit for #8099 
(https://github.com/apache/cassandra/commit/a991b64811f4d6adb6c7b31c0df52288eb06cf19).
 See this diff:

{noformat}
diff CorruptSSTableException.java /code/cassandra-trunk/src/java/org/apache/cassandra/io/sstable
28c28
< super(cause);
---
> super("Corrupted: " + path, cause);
{noformat}

I'd argue that this is important enough (for troubleshooting critical 
conditions and recovering from them quickly) to back-port to the 2.1 branch; 
it also presents almost no risk, as it's just a more helpful error message.
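
For illustration, a minimal sketch of what that one-line change amounts to (the 
class name and field here are stand-ins, not the actual 2.1 source):

{code}
import java.io.File;

// Stand-in for org.apache.cassandra.io.sstable.CorruptSSTableException, showing
// only the effect of the diff above: the offending file is named in the message.
public class CorruptSSTableExceptionSketch extends RuntimeException
{
    public final File path;

    public CorruptSSTableExceptionSketch(Throwable cause, File path)
    {
        // Before the change: super(cause) -- the file name was lost.
        // After the change: "Corrupted: <path>" appears in the log and stack trace.
        super("Corrupted: " + path, cause);
        this.path = path;
    }
}
{code}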

> CorruptSSTableException should print the SS Table Name
> --
>
> Key: CASSANDRA-10582
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10582
> Project: Cassandra
>  Issue Type: Bug
> Environment: Azure
>Reporter: Anubhav Kale
>Priority: Minor
> Fix For: 3.1
>
>
> We should print the SS Table name that's being reported as corrupt to help 
> with quick recovery.
> INFO  16:32:15  Opening 
> /mnt/cassandra/data/exchangecf/udsuserhourlysnapshot-d1260590711511e587125dc4955cc492/exchangecf-udsuserhourlysnapshot-ka-21214
>  (23832772 bytes)
> INFO  16:32:15  Opening 
> /mnt/cassandra/data/exchangecf/udsuserhourlysnapshot-d1260590711511e587125dc4955cc492/exchangecf-udsuserhourlysnapshot-ka-18398
>  (149675 bytes)
> INFO  16:32:15  Opening 
> /mnt/cassandra/data/exchangecf/udsuserhourlysnapshot-d1260590711511e587125dc4955cc492/exchangecf-udsuserhourlysnapshot-ka-23707
>  (18270 bytes)
> INFO  16:32:15  Opening 
> /mnt/cassandra/data/exchangecf/udsuserhourlysnapshot-d1260590711511e587125dc4955cc492/exchangecf-udsuserhourlysnapshot-ka-13656
>  (814588 bytes)
> ERROR 16:32:15  Exiting forcefully due to file system exception on startup, 
> disk failure policy "stop"
> org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:131)
>  ~[apache-cassandra-2.1.9-SNAPSHOT.jar:2.1.9-SNAPSHOT]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:85)
>  ~[apache-cassandra-2.1.9-SNAPSHOT.jar:2.1.9-SNAPSHOT]
> at 
> org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:79)
>  ~[apache-cassandra-2.1.9-SNAPSHOT.jar:2.1.9-SNAPSHOT]
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:72)
>  ~[apache-cassandra-2.1.9-SNAPSHOT.jar:2.1.9-SNAPSHOT]
> at



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8684) Replace usage of Adler32 with CRC32

2015-11-11 Thread Nitsan Wakart (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000884#comment-15000884
 ] 

Nitsan Wakart commented on CASSANDRA-8684:
--

Please note that an Adler32 intrinsic is coming in JDK9.

> Replace usage of Adler32 with CRC32
> ---
>
> Key: CASSANDRA-8684
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8684
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
> Fix For: 3.0 beta 1
>
> Attachments: CRCBenchmark.java, PureJavaCrc32.java, Sample.java
>
>
> I could not find a situation in which Adler32 outperformed PureJavaCrc32 much 
> less the intrinsic from Java 8. For small allocations PureJavaCrc32 was much 
> faster probably due to the JNI overhead of invoking the native Adler32 
> implementation where the array has to be allocated and copied.
> I tested on a 65w Sandy Bridge i5 running Ubuntu 14.04 with JDK 1.7.0_71 as 
> well as a c3.8xlarge running Ubuntu 14.04.
> I think it makes sense to stop using Adler32 when generating new checksums.
> c3.8xlarge, results are time in milliseconds, lower is better
> ||Allocation size|Adler32|CRC32|PureJavaCrc32||
> |64|47636|46075|25782|
> |128|36755|36712|23782|
> |256|31194|32211|22731|
> |1024|27194|28792|22010|
> |1048576|25941|27807|21808|
> |536870912|25957|27840|21836|
> i5
> ||Allocation size|Adler32|CRC32|PureJavaCrc32||
> |64|50539|50466|26826|
> |128|37092|38533|24553|
> |256|30630|32938|23459|
> |1024|26064|29079|22592|
> |1048576|24357|27911|22481|
> |536870912|24838|28360|22853|
> Another fun fact. Performance of the CRC32 intrinsic appears to double from 
> Sandy Bridge -> Haswell. Unless I am measuring something different when going 
> from Linux/Sandy to Haswell/OS X.
> The intrinsic/JDK 8 implementation also operates against DirectByteBuffers 
> better and coding against the wrapper will get that boost when run with Java 
> 8.
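
For anyone who wants to reproduce the comparison above on their own hardware, a 
rough JDK-only sketch (the allocation size and iteration count are arbitrary 
choices, and absolute timings will differ from the tables above):

{code}
import java.util.concurrent.ThreadLocalRandom;
import java.util.zip.Adler32;
import java.util.zip.CRC32;
import java.util.zip.Checksum;

public class ChecksumComparison
{
    public static void main(String[] args)
    {
        byte[] data = new byte[1024];              // allocation size under test
        ThreadLocalRandom.current().nextBytes(data);

        // Run each twice so the second pass is measured after JIT warm-up.
        for (int pass = 0; pass < 2; pass++)
        {
            time("CRC32  ", new CRC32(), data);
            time("Adler32", new Adler32(), data);
        }
    }

    private static void time(String name, Checksum checksum, byte[] data)
    {
        long start = System.nanoTime();
        long sink = 0;
        for (int i = 0; i < 1_000_000; i++)
        {
            checksum.reset();
            checksum.update(data, 0, data.length);
            sink += checksum.getValue();           // keep the JIT from eliding the loop
        }
        System.out.printf("%s %d ms (accumulated %d)%n",
                          name, (System.nanoTime() - start) / 1_000_000, sink);
    }
}
{code}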



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10249) Make buffered read size configurable

2015-11-11 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000881#comment-15000881
 ] 

Aleksey Yeschenko commented on CASSANDRA-10249:
---

[~tobert] Sure, I'm okay with committing to just 2.1 and 2.2. The provided 2.1 
patch doesn't merge with 2.2, though, so I need that version.

Thanks.

> Make buffered read size configurable
> 
>
> Key: CASSANDRA-10249
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10249
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Albert P Tobey
>Assignee: Albert P Tobey
> Fix For: 2.1.x, 2.2.x, 3.0.x
>
> Attachments: Screenshot 2015-09-11 09.32.04.png, Screenshot 
> 2015-09-11 09.34.10.png, patched-2.1.9-dstat-lvn10.png, 
> stock-2.1.9-dstat-lvn10.png, yourkit-screenshot.png
>
>
> On read workloads, Cassandra 2.1 reads drastically more data than it emits 
> over the network. This causes problems throughout the system by wasting disk 
> IO and causing unnecessary GC.
> I have reproduced the issue on clusters and locally with a single instance. 
> The only requirement to reproduce the issue is enough data to blow through 
> the page cache. The default schema and data size with cassandra-stress is 
> sufficient for exposing the issue.
> With stock 2.1.9 I regularly observed anywhere from a 300:1 to 500:1 
> disk:network ratio. That is to say, for 1MB/s of network IO, Cassandra was 
> doing 300-500MB/s of disk reads, saturating the drive.
> After applying this patch for standard IO mode 
> https://gist.github.com/tobert/10c307cf3709a585a7cf the ratio fell to around 
> 100:1 on my local test rig. Latency improved considerably and GC became a lot 
> less frequent.
> I tested with 512-byte reads as well, but got the same performance, which 
> makes sense since all HDDs and SSDs made in the last few years have a 4K block 
> size (many of them lie and say 512).
> I'm re-running the numbers now and will post them tomorrow.
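
For context on what "buffered read size" means here, a minimal sketch of a read 
path whose buffer size is a parameter rather than a constant (the 4096-byte 
value and the command-line usage are illustrative; this is not the attached 
patch):

{code}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class BufferedReadSketch
{
    // Each read pulls at most bufferSize bytes from disk, so a smaller buffer
    // means less read amplification when the rows being served are small.
    static long readAll(String file, int bufferSize) throws IOException
    {
        long total = 0;
        try (FileChannel channel = FileChannel.open(Paths.get(file), StandardOpenOption.READ))
        {
            ByteBuffer buffer = ByteBuffer.allocate(bufferSize);
            while (channel.read(buffer) != -1)
            {
                buffer.flip();
                total += buffer.remaining();
                buffer.clear();
            }
        }
        return total;
    }

    public static void main(String[] args) throws IOException
    {
        // 4096 matches the physical block size of most current drives, versus
        // the much larger fixed buffer this ticket is about making configurable.
        System.out.println(readAll(args[0], 4096) + " bytes read");
    }
}
{code}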



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10538) Assertion failed in LogFile when disk is full

2015-11-11 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000871#comment-15000871
 ] 

Ariel Weisberg commented on CASSANDRA-10538:


How does Throwables.perform handle AssertionError? It looks like it swallows it? 
Seems like AssertionError shouldn't be caught and should be allowed to 
terminate the JVM.

To make sure I understand the fix: the issue was that we marked something 
committed in memory even when committing (or aborting) failed to persist to disk 
because the disk was full. The fix is to write to disk first and then update 
memory, and if writing to disk for the commit fails we can hit the abort path, 
which can then fail as well.

Or is this hitting abort, and abort failing, as you would expect given that the 
disk is full and the transaction probably can't complete successfully?
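
To spell out the ordering being described, a toy sketch with invented names 
(not the actual LogFile/LogTransaction code):

{code}
import java.io.IOException;

// Toy model of "persist to disk first, then mark committed in memory",
// including the abort fallback that can itself fail when the disk is full.
class TxnSketch
{
    private boolean completed = false;     // flipped only after the disk write succeeds

    void commit(DiskLog log) throws IOException
    {
        try
        {
            log.appendCommitRecord();      // throws when the disk is full
        }
        catch (IOException e)
        {
            abort(log);                    // the abort path can fail for the same reason
            throw e;
        }
        completed = true;
    }

    void abort(DiskLog log) throws IOException
    {
        if (completed)
            throw new AssertionError("Already completed!");
        log.appendAbortRecord();
        completed = true;
    }

    interface DiskLog
    {
        void appendCommitRecord() throws IOException;
        void appendAbortRecord() throws IOException;
    }
}
{code}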

> Assertion failed in LogFile when disk is full
> -
>
> Key: CASSANDRA-10538
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10538
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stefania
>Assignee: Stefania
> Fix For: 3.x
>
> Attachments: 
> ma_txn_compaction_67311da0-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_696059b0-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_8ac58b70-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_8be24610-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_95500fc0-72b4-11e5-9eb9-b14fa4bbe709.log, 
> ma_txn_compaction_a41caa90-72b4-11e5-9eb9-b14fa4bbe709.log
>
>
> [~carlyeks] was running a stress job which filled up the disk. At the end of 
> the system logs there are several assertion errors:
> {code}
> ERROR [CompactionExecutor:1] 2015-10-14 20:46:55,467 CassandraDaemon.java:195 
> - Exception in thread Thread[CompactionExecutor:1,1,main]
> java.lang.RuntimeException: Insufficient disk space to write 2097152 bytes
> at 
> org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.getWriteDirectory(CompactionAwareWriter.java:156)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.writers.MaxSSTableSizeWriter.realAppend(MaxSSTableSizeWriter.java:77)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.writers.CompactionAwareWriter.append(CompactionAwareWriter.java:110)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:182)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:61)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:220)
>  ~[main/:na]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_40]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_40]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_40]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_40]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_40]
> INFO  [IndexSummaryManager:1] 2015-10-14 21:10:40,099 
> IndexSummaryManager.java:257 - Redistributing index summaries
> ERROR [IndexSummaryManager:1] 2015-10-14 21:10:42,275 
> CassandraDaemon.java:195 - Exception in thread 
> Thread[IndexSummaryManager:1,1,main]
> java.lang.AssertionError: Already completed!
> at org.apache.cassandra.db.lifecycle.LogFile.abort(LogFile.java:221) 
> ~[main/:na]
> at 
> org.apache.cassandra.db.lifecycle.LogTransaction.doAbort(LogTransaction.java:376)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.abort(Transactional.java:144)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.lifecycle.LifecycleTransaction.doAbort(LifecycleTransaction.java:259)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.abort(Transactional.java:144)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.abort(Transactional.java:193)
>  ~[main/:na]
> at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.close(Transactional.java:158)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.IndexSummaryManager.redistributeSummaries(IndexSummaryManager.java:242)
>  ~[main/:na]
> at 
> org.apache.cassandra.io.sstable.IndexSummaryManager$1.runMayThrow(IndexSummaryManager.java:134

[jira] [Commented] (CASSANDRA-10249) Make buffered read size configurable

2015-11-11 Thread Al Tobey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000868#comment-15000868
 ] 

Al Tobey commented on CASSANDRA-10249:
--

Can do for 2.2. Last time I looked at master/3.0 it had moved to an auto-tuning 
approach and probably does not need this patch. I'll take a look anyway to see 
where things are after I get my C* dev environment set up again.

> Make buffered read size configurable
> 
>
> Key: CASSANDRA-10249
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10249
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Albert P Tobey
>Assignee: Albert P Tobey
> Fix For: 2.1.x, 2.2.x, 3.0.x
>
> Attachments: Screenshot 2015-09-11 09.32.04.png, Screenshot 
> 2015-09-11 09.34.10.png, patched-2.1.9-dstat-lvn10.png, 
> stock-2.1.9-dstat-lvn10.png, yourkit-screenshot.png
>
>
> On read workloads, Cassandra 2.1 reads drastically more data than it emits 
> over the network. This causes problems throughout the system by wasting disk 
> IO and causing unnecessary GC.
> I have reproduced the issue on clusters and locally with a single instance. 
> The only requirement to reproduce the issue is enough data to blow through 
> the page cache. The default schema and data size with cassandra-stress is 
> sufficient for exposing the issue.
> With stock 2.1.9 I regularly observed anywhere from a 300:1 to 500:1 
> disk:network ratio. That is to say, for 1MB/s of network IO, Cassandra was 
> doing 300-500MB/s of disk reads, saturating the drive.
> After applying this patch for standard IO mode 
> https://gist.github.com/tobert/10c307cf3709a585a7cf the ratio fell to around 
> 100:1 on my local test rig. Latency improved considerably and GC became a lot 
> less frequent.
> I tested with 512-byte reads as well, but got the same performance, which 
> makes sense since all HDDs and SSDs made in the last few years have a 4K block 
> size (many of them lie and say 512).
> I'm re-running the numbers now and will post them tomorrow.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8879) Alter table on compact storage broken

2015-11-11 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-8879:
-
Priority: Minor  (was: Major)

> Alter table on compact storage broken
> -
>
> Key: CASSANDRA-8879
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8879
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Nick Bailey
>Assignee: Aleksey Yeschenko
>Priority: Minor
> Fix For: 2.1.x
>
> Attachments: 8879-2.0.txt
>
>
> In 2.0 HEAD, alter table on compact storage tables seems to be broken. With 
> the following table definition, altering the column breaks cqlsh and 
> generates a stack trace in the log.
> {noformat}
> CREATE TABLE settings (
>   key blob,
>   column1 blob,
>   value blob,
>   PRIMARY KEY ((key), column1)
> ) WITH COMPACT STORAGE
> {noformat}
> {noformat}
> cqlsh:OpsCenter> alter table settings ALTER column1 TYPE ascii ;
> TSocket read 0 bytes
> cqlsh:OpsCenter> DESC TABLE settings;
> {noformat}
> {noformat}
> ERROR [Thrift:7] 2015-02-26 17:20:24,640 CassandraDaemon.java (line 199) 
> Exception in thread Thread[Thrift:7,5,main]
> java.lang.AssertionError
> >...at 
> >org.apache.cassandra.cql3.statements.AlterTableStatement.announceMigration(AlterTableStatement.java:198)
> >...at 
> >org.apache.cassandra.cql3.statements.SchemaAlteringStatement.execute(SchemaAlteringStatement.java:79)
> >...at 
> >org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:158)
> >...at 
> >org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:175)
> >...at 
> >org.apache.cassandra.thrift.CassandraServer.execute_cql3_query(CassandraServer.java:1958)
> >...at 
> >org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4486)
> >...at 
> >org.apache.cassandra.thrift.Cassandra$Processor$execute_cql3_query.getResult(Cassandra.java:4470)
> >...at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
> >...at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
> >...at 
> >org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:204)
> >...at 
> >java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >...at 
> >java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >...at java.lang.Thread.run(Thread.java:724)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7190) Add schema to snapshot manifest

2015-11-11 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-7190:
-
Fix Version/s: (was: 2.1.x)
   3.x

> Add schema to snapshot manifest
> ---
>
> Key: CASSANDRA-7190
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7190
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Jonathan Ellis
>Assignee: Max Barnash
>Priority: Minor
> Fix For: 3.x
>
>
> followup from CASSANDRA-6326



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7190) Add schema to snapshot manifest

2015-11-11 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-7190:
-
Assignee: Max Barnash

> Add schema to snapshot manifest
> ---
>
> Key: CASSANDRA-7190
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7190
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Jonathan Ellis
>Assignee: Max Barnash
>Priority: Minor
> Fix For: 3.x
>
>
> followup from CASSANDRA-6326



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-10689) java.lang.OutOfMemoryError: Direct buffer memory

2015-11-11 Thread mlowicki (JIRA)
mlowicki created CASSANDRA-10689:


 Summary: java.lang.OutOfMemoryError: Direct buffer memory
 Key: CASSANDRA-10689
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10689
 Project: Cassandra
  Issue Type: Bug
Reporter: mlowicki
 Fix For: 2.1.11


{code}
ERROR [SharedPool-Worker-63] 2015-11-11 17:53:16,161 
JVMStabilityInspector.java:117 - JVM state determined to be unstable.  Exiting 
forcefully due to:

java.lang.OutOfMemoryError: Direct buffer memory

at java.nio.Bits.reserveMemory(Bits.java:658) ~[na:1.7.0_80]

at java.nio.DirectByteBuffer.(DirectByteBuffer.java:123) 
~[na:1.7.0_80]

at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306) 
~[na:1.7.0_80]

at sun.nio.ch.Util.getTemporaryDirectBuffer(Util.java:174) 
~[na:1.7.0_80]

at sun.nio.ch.IOUtil.read(IOUtil.java:195) ~[na:1.7.0_80]

at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:149) 
~[na:1.7.0_80]

at 
org.apache.cassandra.io.compress.CompressedRandomAccessReader.decompressChunk(CompressedRandomAccessReader.java:104)
 ~[apache-cassandra-2.1.11.jar:2.1.11]

at 
org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:81)
 ~[apache-cassandra-2.1.11.jar:2.1.11]  

at 
org.apache.cassandra.io.util.RandomAccessReader.seek(RandomAccessReader.java:310)
 ~[apache-cassandra-2.1.11.jar:2.1.11]

at 
org.apache.cassandra.io.util.PoolingSegmentedFile.getSegment(PoolingSegmentedFile.java:64)
 ~[apache-cassandra-2.1.11.jar:2.1.11]

at 
org.apache.cassandra.io.sstable.SSTableReader.getFileDataInput(SSTableReader.java:1894)
 ~[apache-cassandra-2.1.11.jar:2.1.11]

at 
org.apache.cassandra.db.columniterator.IndexedSliceReader.setToRowStart(IndexedSliceReader.java:107)
 ~[apache-cassandra-2.1.11.jar:2.1.11]

at 
org.apache.cassandra.db.columniterator.IndexedSliceReader.(IndexedSliceReader.java:83)
 ~[apache-cassandra-2.1.11.jar:2.1.11]

at 
org.apache.cassandra.db.columniterator.SSTableSliceIterator.createReader(SSTableSliceIterator.java:65)
 ~[apache-cassandra-2.1.11.jar:2.1.11]

at 
org.apache.cassandra.db.columniterator.SSTableSliceIterator.(SSTableSliceIterator.java:42)
 ~[apache-cassandra-2.1.11.jar:2.1.11]

at 
org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:246)
 ~[apache-cassandra-2.1.11.jar:2.1.11]

at 
org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62)
 ~[apache-cassandra-2.1.11.jar:2.1.11]

at 
org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:270)
 ~[apache-cassandra-2.1.11.jar:2.1.11]

at 
org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:62)
 ~[apache-cassandra-2.1.11.jar:2.1.11]

at 
org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1994)
 ~[apache-cassandra-2.1.11.jar:2.1.11]

at 
org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1837)
 ~[apache-cassandra-2.1.11.jar:2.1.11]

at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:353) 
~[apache-cassandra-2.1.11.jar:2.1.11]

at 
org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:85)
 ~[apache-cassandra-2.1.11.jar:2.1.11]

at 
org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:47) 
~[apache-cassandra-2.1.11.jar:2.1.11]

at 
org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:64) 
~[apache-cassandra-2.1.11.jar:2.1.11]

at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
~[na:1.7.0_80]

at 
org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
 ~[apache-cassandra-2.1.11.jar:2.1.11]

at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
[apache-cassandra-2.1.11.jar:2.1.11]

at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10534) CompressionInfo not being fsynced on close

2015-11-11 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000786#comment-15000786
 ] 

Ariel Weisberg commented on CASSANDRA-10534:


I don't see why we would ever not sync the other files. Are they really not 
necessary for the sstable to be readable? If they are required, then they need 
to be synced as well; otherwise we are going to take actions based on the 
sstable being durable/readable when it isn't really readable.
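
For reference, the sync-before-close pattern at issue, sketched with plain NIO 
rather than Cassandra's writer classes (file name and contents are made up):

{code}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class SyncOnCloseSketch
{
    public static void main(String[] args) throws IOException
    {
        try (FileChannel channel = FileChannel.open(Paths.get("component.sketch"),
                                                    StandardOpenOption.CREATE,
                                                    StandardOpenOption.WRITE))
        {
            channel.write(ByteBuffer.wrap("metadata".getBytes(StandardCharsets.UTF_8)));
            // Without this force(), a hard reboot can leave a zero-length or truncated
            // file on disk even though close() returned normally.
            channel.force(true);
        }
    }
}
{code}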

> CompressionInfo not being fsynced on close
> --
>
> Key: CASSANDRA-10534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10534
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sharvanath Pathak
>Assignee: Stefania
> Fix For: 2.1.x
>
>
> I was seeing SSTable corruption due to a CompressionInfo.db file of size 0, 
> this happened multiple times in our testing with hard node reboots. After 
> some investigation it seems like these file is not being fsynced, and that 
> can potentially lead to data corruption. I am working with version 2.1.9.
> I checked for fsync calls using strace, and found them happening for all but 
> the following components: CompressionInfo, TOC.txt and digest.sha1. All of 
> these but the CompressionInfo seem tolerable. Also a quick look through the 
> code did not reveal any fsync calls. Moreover, I suspect the commit  
> 4e95953f29d89a441dfe06d3f0393ed7dd8586df 
> (https://github.com/apache/cassandra/commit/4e95953f29d89a441dfe06d3f0393ed7dd8586df#diff-b7e48a1398e39a936c11d0397d5d1966R344)
>  has caused the regression, which removed the line
> {noformat}
>  getChannel().force(true);
> {noformat}
> from CompressionMetadata.Writer.close.
> Following is the trace I saw in system.log:
> {noformat}
> INFO  [SSTableBatchOpen:1] 2015-09-29 19:24:39,170 SSTableReader.java:478 - 
> Opening 
> /var/lib/cassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-13368
>  (79 bytes)
> ERROR [SSTableBatchOpen:1] 2015-09-29 19:24:39,177 FileUtils.java:447 - 
> Exiting forcefully due to file system exception on startup, disk failure 
> policy "stop"
> org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.EOFException
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:131)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.create(CompressionMetadata.java:85)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.metadata(CompressedSegmentedFile.java:79)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.util.CompressedPoolingSegmentedFile$Builder.complete(CompressedPoolingSegmentedFile.java:72)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:168)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:752) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:703) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:491) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:387) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> org.apache.cassandra.io.sstable.SSTableReader$4.run(SSTableReader.java:534) 
> ~[apache-cassandra-2.1.9.jar:2.1.9]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> [na:1.7.0_80]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> [na:1.7.0_80]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  [na:1.7.0_80]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_80]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_80]
> Caused by: java.io.EOFException: null
> at 
> java.io.DataInputStream.readUnsignedShort(DataInputStream.java:340) 
> ~[na:1.7.0_80]
> at java.io.DataInputStream.readUTF(DataInputStream.java:589) 
> ~[na:1.7.0_80]
> at java.io.DataInputStream.readUTF(DataInputStream.java:564) 
> ~[na:1.7.0_80]
> at 
> org.apache.cassandra.io.compress.CompressionMetadata.(CompressionMetadata.java:106)
>  ~[apache-cassandra-2.1.9.jar:2.1.9]
> ... 14 common frames omitted
> {noformat}
> Following is the result of ls on the data directory of a corrupted SSTable 
> after the hard reboot:
> {noformat}
> $ ls -l 
> /var/lib/cassandra/data/system/sstable_activity-5a1ff

[jira] [Comment Edited] (CASSANDRA-6061) Rewrite TokenMetadata

2015-11-11 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000740#comment-15000740
 ] 

Ariel Weisberg edited comment on CASSANDRA-6061 at 11/11/15 5:48 PM:
-

As a start, just make it COW so readers can pull a copy and not have it change 
underneath them while they are using it, causing NPEs and assertions to fire. 
Right now it's very error prone, and in --CASSANDRA-6061-- CASSANDRA-10485 we 
have had to go through a couple of iterations to have it not accidentally access 
TMD in a racy way.

That doesn't fix everything since any concurrent changes to TMD result in all 
readers who have persisted state based on the old world view needing to take 
corrective action. Maybe have listeners and instead of getting access to TMD as 
a global singleton receive updated references as they are created so subsystems 
can reference local copies and then on state transitions take whatever actions 
are necessary.

Hints for instance might need to drop stale hints (not sure how this is handled 
now).


was (Author: aweisberg):
As a start just make it COW so readers can pull a copy and not have it change 
underneath whiles they are using it causing NPEs and assertions to fire. Right 
now it's very error prone and in CASSANDRA-6061 we have had to go through a 
couple of iterations to have it not accidentally access TMD in a racy way.

That doesn't fix everything since any concurrent changes to TMD result in all 
readers who have persisted state based on the old world view needing to take 
corrective action. Maybe have listeners and instead of getting access to TMD as 
a global singleton receive updated references as they are created so subsystems 
can reference local copies and then on state transitions take whatever actions 
are necessary.

Hints for instance might need to drop stale hints (not sure how this is handled 
now).

> Rewrite TokenMetadata
> -
>
> Key: CASSANDRA-6061
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6061
> Project: Cassandra
>  Issue Type: Task
>Reporter: Jonathan Ellis
>Priority: Minor
>
> Feels like this "mostly works" but is generally fragile (see: shuffle).
> Would be good to get a fresh perspective on it and see if we can do better.
> Bonus would be, ability to bootstrap multiple nodes w/o Two Minute Rule.  
> Probably would involve using LWT on pending ranges state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8838) Resumable bootstrap streaming

2015-11-11 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000771#comment-15000771
 ] 

Yuki Morishita commented on CASSANDRA-8838:
---

So the problem happens when the bootstrapping node goes down for some reason and 
stays down for more than RING_DELAY.
In that case, yes, hints are not stored, because the node is evicted from TMD.

We have a flag to do a normal bootstrap in that case 
(-Dcassandra.reset_bootstrap_progress).

> Resumable bootstrap streaming
> -
>
> Key: CASSANDRA-8838
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8838
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Yuki Morishita
>Assignee: Yuki Morishita
>Priority: Minor
>  Labels: dense-storage
> Fix For: 2.2.0 beta 1
>
>
> This allows the bootstrapping node to avoid being streamed data it has already 
> received.
> The bootstrapping node records received keyspace/ranges as each stream session 
> completes. When some sessions with other nodes fail, bootstrapping fails as 
> well, but the next time it re-bootstraps, already received keyspace/ranges are 
> skipped instead of being streamed again.
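
A toy model of the bookkeeping the description refers to (invented types and 
strings; in the real code the progress is persisted so it survives a restart, 
rather than being held in memory):

{code}
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy model of resumable bootstrap bookkeeping: remember which ranges have
// already arrived per keyspace, and on a retry only request the missing ones.
class BootstrapProgressSketch
{
    private final Map<String, Set<String>> received = new HashMap<>();

    void markReceived(String keyspace, String range)
    {
        Set<String> ranges = received.get(keyspace);
        if (ranges == null)
        {
            ranges = new HashSet<>();
            received.put(keyspace, ranges);
        }
        ranges.add(range);
    }

    Set<String> stillMissing(String keyspace, Set<String> wanted)
    {
        Set<String> missing = new HashSet<>(wanted);
        Set<String> done = received.get(keyspace);
        missing.removeAll(done == null ? Collections.<String>emptySet() : done);
        return missing;
    }
}
{code}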



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8505) Invalid results are returned while secondary index are being build

2015-11-11 Thread Sam Tunnicliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000767#comment-15000767
 ] 

Sam Tunnicliffe commented on CASSANDRA-8505:


I think the various index states can be reduced to a simple ready/not ready 
check. What's more unless we intend to change the established behaviour fairly 
significantly, once an index moves to a ready state it never moves back to 
being not ready. The only times when we modify the status in the system table 
are when the index is removed (in which case we have no problem with being able 
to query using it) or during a rebuild. In the latter case though, we probably 
shouldn't reject queries (and we don't currently), as an index rebuild is 
incremental. That is, we don't scrap the existing index tables and rebuild 
> everything from scratch, we just write new index SSTables to supersede the old 
ones. So although it's certainly possible to get incorrect results during a 
rebuild (because of missing/stale entries), the results only get more correct 
as the rebuild progresses. Changing this so that all queries against that index 
return errors until all rebuilds complete seems like a step backwards. It seems 
more reasonable to reject queries until the initial build has been performed, 
as per the example in the description, but this only requires a simple boolean 
to track state between instantiating/registering the index and its initial 
build task completing (if one is required). 

It would be good to have some test coverage of this, although the best I could 
come up with is a dtest which inserts many rows, then adds the index and 
queries immediately expecting ReadFailureException, which is fairly lame and 
fragile.

A couple of points specific to the 3.0 patch:

* The fix for CASSANDRA-10595 has been lost. If an index doesn't register 
itself in {{createIndex}}, don't ask it for an initialization task, just set 
{{initialBuildTask == null}}. 
* {{SIM::reloadIndex}} has changed since the patch was created (due to 
CASSANDRA-10604) - I think that no changes to this method are now required. I 
did notice though that the current implementation actually makes a redundant 
call to {{getMetadataReloadTask}}, so if you could fix that while you're here, 
that'd be great.

bq. Secondary indexes and their built/not built status are node-local. 
Consequently, it is not possible to know on a coordinator node if the index is 
fully built. It can be built on the coordinator but still building on other 
nodes

For future reference on this point, we also have CASSANDRA-9967 which has a 
very similar intent.
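
A sketch of the "simple boolean" gate suggested above (hypothetical names; not 
the actual SecondaryIndexManager API):

{code}
import java.util.concurrent.atomic.AtomicBoolean;

// Queries are rejected only until the initial build finishes; subsequent
// rebuilds leave the flag alone, matching the incremental-rebuild behaviour
// discussed above.
class IndexReadinessSketch
{
    private final AtomicBoolean queryable = new AtomicBoolean(false);

    void onIndexRegistered(boolean initialBuildRequired)
    {
        if (!initialBuildRequired)
            queryable.set(true);           // nothing to build, so queryable right away
    }

    void onInitialBuildComplete()
    {
        queryable.set(true);
    }

    void checkQueryable(String indexName)
    {
        if (!queryable.get())
            throw new IllegalStateException("Index " + indexName + " has not finished its initial build");
    }
}
{code}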

> Invalid results are returned while secondary index are being build
> --
>
> Key: CASSANDRA-8505
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8505
> Project: Cassandra
>  Issue Type: Bug
>  Components: Coordination
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
> Fix For: 2.2.x, 3.0.x
>
>
> If you request an index creation and then execute a query that use the index 
> the results returned might be invalid until the index is fully build. This is 
> caused by the fact that the table column will be marked as indexed before the 
> index is ready.
> The following unit tests can be use to reproduce the problem:
> {code}
> @Test
> public void testIndexCreatedAfterInsert() throws Throwable
> {
> createTable("CREATE TABLE %s (a int, b int, c int, primary key((a, 
> b)))");
> execute("INSERT INTO %s (a, b, c) VALUES (0, 0, 0);");
> execute("INSERT INTO %s (a, b, c) VALUES (0, 1, 1);");
> execute("INSERT INTO %s (a, b, c) VALUES (0, 2, 2);");
> execute("INSERT INTO %s (a, b, c) VALUES (1, 0, 3);");
> execute("INSERT INTO %s (a, b, c) VALUES (1, 1, 4);");
> 
> createIndex("CREATE INDEX ON %s(b)");
> 
> assertRows(execute("SELECT * FROM %s WHERE b = ?;", 1),
>row(0, 1, 1),
>row(1, 1, 4));
> }
> 
> @Test
> public void testIndexCreatedBeforeInsert() throws Throwable
> {
> createTable("CREATE TABLE %s (a int, b int, c int, primary key((a, 
> b)))");
> createIndex("CREATE INDEX ON %s(b)");
> 
> execute("INSERT INTO %s (a, b, c) VALUES (0, 0, 0);");
> execute("INSERT INTO %s (a, b, c) VALUES (0, 1, 1);");
> execute("INSERT INTO %s (a, b, c) VALUES (0, 2, 2);");
> execute("INSERT INTO %s (a, b, c) VALUES (1, 0, 3);");
> execute("INSERT INTO %s (a, b, c) VALUES (1, 1, 4);");
> assertRows(execute("SELECT * FROM %s WHERE b = ?;", 1),
>row(0, 1, 1),
>row(1, 1, 4));
> }
> {code}
> The first test will fail while the second will work. 
> In my o

[jira] [Commented] (CASSANDRA-6091) Better Vnode support in hadoop/pig

2015-11-11 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000768#comment-15000768
 ] 

Aleksey Yeschenko commented on CASSANDRA-6091:
--

[~michaelsembwever] I don't know why it was moved to {{Testing}}, and I don't 
know if it's still relevant. Sorry for the annoying delay.

At this point it will most likely not go into 2.1.x and 2.2.x (or 3.0.x), but, 
if still relevant for 3.x, might go into 3.2. Can you have a look/cook a proper 
patch, if so?

> Better Vnode support in hadoop/pig
> --
>
> Key: CASSANDRA-6091
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6091
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Alex Liu
>Assignee: mck
>Priority: Minor
> Attachments: cassandra-2.0-6091.txt, cassandra-2.1-6091.txt, 
> trunk-6091.txt
>
>
> CASSANDRA-6084 shows there are some issues when running hadoop/pig jobs if 
> vnodes are enabled. Also, the hadoop performance of vnode-enabled nodes is bad 
> because there are so many splits.
> The idea is to combine vnode splits into bigger pseudo-splits so that hadoop/pig 
> jobs work as if vnodes were disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6061) Rewrite TokenMetadata

2015-11-11 Thread Joel Knighton (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000766#comment-15000766
 ] 

Joel Knighton commented on CASSANDRA-6061:
--

I think Ariel meant to link to [CASSANDRA-10485]. I agree that TokenMetadata is 
fragile in its current state; in another ticket, we just got done reworking the 
application state map inside endpoint states to COW.  In general, I've found 
this whole subsystem prone to races and think a rework needs careful 
consideration.

I think COW is a reasonable solution in the interim.
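
For concreteness, a bare-bones illustration of the copy-on-write shape being 
discussed (invented names and a much-simplified value type; not TokenMetadata 
itself):

{code}
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.atomic.AtomicReference;

// Readers take one immutable snapshot and use it for the whole operation, so a
// concurrent update can never change the view underneath them mid-read.
class CowRingStateSketch
{
    private final AtomicReference<Map<String, UUID>> tokenToHost =
            new AtomicReference<Map<String, UUID>>(Collections.<String, UUID>emptyMap());

    Map<String, UUID> snapshot()
    {
        return tokenToHost.get();          // never mutated, safe to iterate
    }

    synchronized void addEndpoint(String token, UUID hostId)
    {
        Map<String, UUID> copy = new HashMap<>(tokenToHost.get());   // copy-on-write
        copy.put(token, hostId);
        tokenToHost.set(Collections.unmodifiableMap(copy));
    }
}
{code}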

> Rewrite TokenMetadata
> -
>
> Key: CASSANDRA-6061
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6061
> Project: Cassandra
>  Issue Type: Task
>Reporter: Jonathan Ellis
>Priority: Minor
>
> Feels like this "mostly works" but is generally fragile (see: shuffle).
> Would be good to get a fresh perspective on it and see if we can do better.
> Bonus would be, ability to bootstrap multiple nodes w/o Two Minute Rule.  
> Probably would involve using LWT on pending ranges state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-6091) Better Vnode support in hadoop/pig

2015-11-11 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-6091:
-
Priority: Minor  (was: Major)

> Better Vnode support in hadoop/pig
> --
>
> Key: CASSANDRA-6091
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6091
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Alex Liu
>Assignee: mck
>Priority: Minor
> Attachments: cassandra-2.0-6091.txt, cassandra-2.1-6091.txt, 
> trunk-6091.txt
>
>
> CASSANDRA-6084 shows there are some issues when running hadoop/pig jobs if 
> vnodes are enabled. Also, the hadoop performance of vnode-enabled nodes is bad 
> because there are so many splits.
> The idea is to combine vnode splits into bigger pseudo-splits so that hadoop/pig 
> jobs work as if vnodes were disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-6091) Better Vnode support in hadoop/pig

2015-11-11 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-6091:
-
Issue Type: Improvement  (was: Bug)

> Better Vnode support in hadoop/pig
> --
>
> Key: CASSANDRA-6091
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6091
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Alex Liu
>Assignee: mck
> Attachments: cassandra-2.0-6091.txt, cassandra-2.1-6091.txt, 
> trunk-6091.txt
>
>
> CASSANDRA-6084 shows there are some issues when running hadoop/pig jobs if 
> vnodes are enabled. Also, the hadoop performance of vnode-enabled nodes is bad 
> because there are so many splits.
> The idea is to combine vnode splits into bigger pseudo-splits so that hadoop/pig 
> jobs work as if vnodes were disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10485) Missing host ID on hinted handoff write

2015-11-11 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000754#comment-15000754
 ] 

Ariel Weisberg commented on CASSANDRA-10485:


[There is an extra ')' in this trace 
statement|https://github.com/apache/cassandra/compare/cassandra-3.0...pauloricardomg:3.0-10485-ultimate#diff-71f06c193f5b5e270cf8ac695164f43aR2492]

Tests look good as near as I can tell.

Why is isMemberJoining better than just using getHostID being null as an 
indicator of whether the hint should be written?

> Missing host ID on hinted handoff write
> ---
>
> Key: CASSANDRA-10485
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10485
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Paulo Motta
>Assignee: Paulo Motta
> Fix For: 2.1.x, 2.2.x, 3.0.x
>
>
> when I restart one of them I receive the error "Missing host ID":
> {noformat}
> WARN  [SharedPool-Worker-1] 2015-10-08 13:15:33,882 
> AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-1,5,main]: {}
> java.lang.AssertionError: Missing host ID for 63.251.156.141
> at 
> org.apache.cassandra.service.StorageProxy.writeHintForMutation(StorageProxy.java:978)
>  ~[apache-cassandra-2.1.3.jar:2.1.3]
> at 
> org.apache.cassandra.service.StorageProxy$6.runMayThrow(StorageProxy.java:950)
>  ~[apache-cassandra-2.1.3.jar:2.1.3]
> at 
> org.apache.cassandra.service.StorageProxy$HintRunnable.run(StorageProxy.java:2235)
>  ~[apache-cassandra-2.1.3.jar:2.1.3]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_60]
> at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  ~[apache-cassandra-2.1.3.jar:2.1.3]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-2.1.3.jar:2.1.3]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
> {noformat}
> If I made nodetool status, the problematic node has ID:
> {noformat}
> UN  10.10.10.12  1.3 TB 1   ?   
> 4d5c8fd2-a909-4f09-a23c-4cd6040f338a  rack3
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-8642) Cassandra crashed after stress test of write

2015-11-11 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko resolved CASSANDRA-8642.
--
   Resolution: Cannot Reproduce
Fix Version/s: (was: 2.1.x)

> Cassandra crashed after stress test of write
> 
>
> Key: CASSANDRA-8642
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8642
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra 2.1.2, single node cluster, Ubuntu, 8 core 
> CPU, 16GB memory (heapsize 8G), Vmware virtual machine.
>Reporter: ZhongYu
>Assignee: Philip Thompson
> Attachments: QQ拼音截图未命名.png
>
>
> While I was performing a write stress test using YCSB, Cassandra crashed. I 
> looked at the logs, and here is the last and only entry:
> {code}
> WARN  [SharedPool-Worker-25] 2015-01-18 17:35:16,611 
> AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread 
> Thread[SharedPool-Worker-25,5,main]: {}
> java.lang.InternalError: a fault occurred in a recent unsafe memory access 
> operation in compiled Java code
> at 
> org.apache.cassandra.utils.concurrent.OpOrder$Group.isBlockingSignal(OpOrder.java:302)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.utils.memory.MemtableAllocator$SubAllocator.allocate(MemtableAllocator.java:177)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.utils.memory.SlabAllocator.allocate(SlabAllocator.java:82)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.utils.memory.ContextAllocator.allocate(ContextAllocator.java:57)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.utils.memory.ContextAllocator.clone(ContextAllocator.java:47)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.utils.memory.MemtableBufferAllocator.clone(MemtableBufferAllocator.java:61)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at org.apache.cassandra.db.Memtable.put(Memtable.java:174) 
> ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1126) 
> ~[apache-cassandra-2.1.2.jar:2.1.2]
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:388) 
> ~[apache-cassandra-2.1.2.jar:2.1.2]
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:351) 
> ~[apache-cassandra-2.1.2.jar:2.1.2]
> at org.apache.cassandra.db.Mutation.apply(Mutation.java:214) 
> ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.service.StorageProxy$7.runMayThrow(StorageProxy.java:999)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> org.apache.cassandra.service.StorageProxy$LocalMutationRunnable.run(StorageProxy.java:2117)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_71]
> at 
> org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164)
>  ~[apache-cassandra-2.1.2.jar:2.1.2]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) 
> [apache-cassandra-2.1.2.jar:2.1.2]
> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8838) Resumable bootstrap streaming

2015-11-11 Thread Brandon Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000741#comment-15000741
 ] 

Brandon Williams commented on CASSANDRA-8838:
-

Write survey mode works fine as long as the surveying node is alive... once 
it's gone, as Paulo noted, it will be removed after ring_delay because it was a 
fat client (all bootstrapping nodes are fat clients).

> Resumable bootstrap streaming
> -
>
> Key: CASSANDRA-8838
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8838
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Yuki Morishita
>Assignee: Yuki Morishita
>Priority: Minor
>  Labels: dense-storage
> Fix For: 2.2.0 beta 1
>
>
> This allows the bootstrapping node not to be streamed already received data.
> The bootstrapping node records received keyspace/ranges as one stream session 
> completes. When some sessions with other nodes fail, bootstrapping fails 
> also, though next time it re-bootstraps, already received keyspace/ranges are 
> skipped to be streamed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6061) Rewrite TokenMetadata

2015-11-11 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000740#comment-15000740
 ] 

Ariel Weisberg commented on CASSANDRA-6061:
---

As a start, just make it COW so readers can pull a copy and not have it change 
underneath them while they are using it, causing NPEs and assertions to fire. 
Right now it's very error prone, and in CASSANDRA-6061 we have had to go through 
a couple of iterations to have it not accidentally access TMD in a racy way.

That doesn't fix everything since any concurrent changes to TMD result in all 
readers who have persisted state based on the old world view needing to take 
corrective action. Maybe have listeners and instead of getting access to TMD as 
a global singleton receive updated references as they are created so subsystems 
can reference local copies and then on state transitions take whatever actions 
are necessary.

Hints for instance might need to drop stale hints (not sure how this is handled 
now).

> Rewrite TokenMetadata
> -
>
> Key: CASSANDRA-6061
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6061
> Project: Cassandra
>  Issue Type: Task
>Reporter: Jonathan Ellis
>Priority: Minor
>
> Feels like this "mostly works" but is generally fragile (see: shuffle).
> Would be good to get a fresh perspective on it and see if we can do better.
> Bonus would be, ability to bootstrap multiple nodes w/o Two Minute Rule.  
> Probably would involve using LWT on pending ranges state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

