[jira] [Commented] (CASSANDRA-12910) SASI: calculatePrimary() always returns null

2016-12-06 Thread Corentin Chary (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15728043#comment-15728043
 ] 

Corentin Chary commented on CASSANDRA-12910:


That's totally right. Thanks for the detailed explanation.

> SASI: calculatePrimary() always returns null
> 
>
> Key: CASSANDRA-12910
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12910
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
>Reporter: Corentin Chary
>Assignee: Corentin Chary
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 0002-sasi-fix-calculatePrimary.patch
>
>
> While investigating performance issues with SASI 
> (https://github.com/criteo/biggraphite/issues/174 if you want to know more), I 
> ended up finding calculatePrimary() in QueryController.java, which apparently 
> should return the "primary index".
> It lacks documentation, and I'm unsure what the "primary index" should be, 
> but apparently this function never returns one because primaryIndexes.size() 
> is always 0.
> https://github.com/apache/cassandra/blob/81f6c784ce967fadb6ed7f58de1328e713eaf53c/src/java/org/apache/cassandra/index/sasi/plan/QueryController.java#L237
> I'm unsure whether the proper fix is checking whether the collection is empty 
> or reversing the operator (selecting the index with higher cardinality versus 
> the one with lower cardinality).
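
A simplified model of the two candidate fixes, for illustration only (the real QueryController works on SASI index objects; the Index class, its field names, and the min-cardinality choice below are assumptions, not the actual Cassandra API):

```java
import java.util.Comparator;
import java.util.List;

// Hypothetical stand-in for a SASI index with a cardinality estimate.
class Index
{
    final String name;
    final long estimatedRows;
    Index(String name, long estimatedRows) { this.name = name; this.estimatedRows = estimatedRows; }
}

public class PrimarySelection
{
    // Candidate fix 1: guard the empty case explicitly so a non-null "primary"
    // is returned whenever at least one candidate index exists.
    // Candidate fix 2 (operator direction): prefer the lowest-cardinality,
    // i.e. most selective, index.
    static Index calculatePrimary(List<Index> candidates)
    {
        if (candidates.isEmpty())
            return null;
        return candidates.stream()
                         .min(Comparator.comparingLong(i -> i.estimatedRows))
                         .get();
    }

    public static void main(String[] args)
    {
        List<Index> candidates = List.of(new Index("age_idx", 1_000_000),
                                         new Index("email_idx", 10));
        System.out.println(calculatePrimary(candidates).name); // email_idx (most selective)
    }
}
```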



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12883) Remove support for non-JavaScript UDFs

2016-12-06 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-12883:
-
Status: Patch Available  (was: Open)

The provided patch removes support for "plug-in" JSR 223 script languages, i.e. 
only Java and JavaScript (Nashorn) remain supported. The documentation is also 
updated.

||trunk|[branch|https://github.com/apache/cassandra/compare/trunk...snazy:12883-remove-jsr223-trunk]|[testall|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-12883-remove-jsr223-trunk-testall/lastSuccessfulBuild/]|[dtest|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-12883-remove-jsr223-trunk-dtest/lastSuccessfulBuild/]


> Remove support for non-JavaScript UDFs
> --
>
> Key: CASSANDRA-12883
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12883
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 4.0
>
>
> As recently reported in the user mailing list, JSR-223 languages other than 
> JavaScript no longer work since version 3.0.
> The reason is that the sandbox implemented in CASSANDRA-9402 restricts the 
> use of "evil" packages, classes and functions. Unfortunately, even "non-evil" 
> packages from JSR-223 providers are blocked.
> In order to get a JSR-223 provider working fine, we need to allow JSR-223 
> provider specific packages and also allow specific runtime permissions.
> The fact that "arbitrary" JSR-223 providers no longer work since 3.0 and that 
> this was only reported recently means that this functionality (i.e. non-JavaScript 
> JSR-223 UDFs) is obviously not used.
> Therefore I propose to remove support for UDFs that do not use Java or 
> JavaScript in 4.0. This will also allow scripted UDFs to be specialized on 
> Nashorn and to make more extensive use of its security features, although 
> these are limited. (Clarification: this ticket is just about removing that 
> support.)
> Also want to point out that we never "officially" supported UDFs that are not 
> Java or JavaScript.
> Sample error message:
> {code}
> Traceback (most recent call last):
>   File "/usr/bin/cqlsh.py", line 1264, in perform_simple_statement
> result = future.result()
>   File 
> "/usr/share/cassandra/lib/cassandra-driver-internal-only-3.5.0.post0-d8d0456.zip/cassandra-driver-3.5.0.post0-d8d0456/cassandra/cluster.py",
>  line 3650, in result
> raise self._final_exception
> FunctionFailure: Error from server: code=1400 [User Defined Function failure] 
> message="execution of 'e.test123[bigint]' failed: 
> java.security.AccessControlException: access denied: 
> ("java.lang.RuntimePermission" 
> "accessClassInPackage.org.python.jline.console")
> {code}





[jira] [Updated] (CASSANDRA-13011) heap exhaustion when cleaning table with wide partitions and a secondary index attached to it

2016-12-06 Thread Milan Majercik (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Milan Majercik updated CASSANDRA-13011:
---
Description: 
We have a table with rather wide partitions and a secondary index attached to 
it. When we tried to clean unused data on a node after expansion of our cluster 
by issuing the {{nodetool cleanup}} command, we observed a heap exhaustion issue. 
The culprit appears to be the method 
{{org.apache.cassandra.db.compaction.CompactionManager.CleanupStrategy.Full.cleanup}}
 as it tries to remove related secondary index entries. The method first 
populates a list with all cells belonging to the given partition...
{code:java}
while (row.hasNext())
{
OnDiskAtom column = row.next();

if (column instanceof Cell && 
cfs.indexManager.indexes((Cell) column))
{
if (indexedColumnsInRow == null)
indexedColumnsInRow = new ArrayList<>();

indexedColumnsInRow.add((Cell) column);
}
}
{code}

... and then submits it to the index manager for removal.
{code:java}
// acquire memtable lock here because secondary index 
deletion may cause a race. See CASSANDRA-3712
try (OpOrder.Group opGroup = 
cfs.keyspace.writeOrder.start())
{
cfs.indexManager.deleteFromIndexes(row.getKey(), 
indexedColumnsInRow, opGroup);
}
{code}

After imposing a limit on the list size and implementing a form of pagination, 
the cleanup worked fine.
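
The batching approach can be sketched as follows; this is a hypothetical illustration (the generic deleteInBatches helper and the batch size are assumptions, not the actual patch):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class PagedCleanup
{
    // Flush indexed cells to the deleter in fixed-size batches instead of
    // accumulating every cell of a wide partition in one list, so heap usage
    // stays bounded by the batch size. Returns the number of batches flushed.
    static <T> int deleteInBatches(Iterable<T> cells, int batchSize, Consumer<List<T>> deleter)
    {
        List<T> batch = new ArrayList<>(batchSize);
        int flushed = 0;
        for (T cell : cells)
        {
            batch.add(cell);
            if (batch.size() == batchSize)
            {
                deleter.accept(batch); // stands in for indexManager.deleteFromIndexes(...)
                flushed++;
                batch = new ArrayList<>(batchSize);
            }
        }
        if (!batch.isEmpty())
        {
            deleter.accept(batch);
            flushed++;
        }
        return flushed;
    }

    public static void main(String[] args)
    {
        List<Integer> cells = new ArrayList<>();
        for (int i = 0; i < 2500; i++)
            cells.add(i);
        System.out.println(deleteInBatches(cells, 1000, batch -> {})); // 3
    }
}
```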

> heap exhaustion when cleaning table with wide partitions and a secondary 
> index attached to it
> -
>
> Key: CASSANDRA-13011
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13011
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Milan Majercik
>
> We have a table with rather wide partitions and a secondary index attached to 
> it. When we tried to clean unused data on a node after expansion of our cluster 
> by issuing the {{nodetool cleanup}} command, we observed a heap exhaustion issue. 
> The culprit appears to be the method 
> {{org.apache.cassandra.db.compaction.CompactionManager.CleanupStrategy.Full.cleanup}}
>  as it tries to remove related secondary index entries. The method first 
> populates a list with all cells belonging to the given partition...
> {code:java}
> while (row.hasNext())
> {
> OnDiskAtom column = row.next();
> if (column instanceof Cell && 
> cfs.indexManager.indexes((Cell) column))
> {
> if (indexedColumnsInRow == null)
> indexedColumnsInRow = new ArrayList<>();
> indexedColumnsInRow.add((Cell) column);
> }
> }
> {code}
> ... and then submits it to the index manager for removal.
> {code:java}
> // acquire memtable lock here because secondary index 
> deletion may cause a race. See CASSANDRA-3712
> try (OpOrder.Group opGroup = 
> cfs.keyspace.writeOrder.start())
> {
> cfs.indexManager.deleteFromIndexes(row.getKey(), 
> indexedColumnsInRow, opGroup);
> }
> {code}
> After imposing a limit on the list size and implementing a form of pagination, 
> the cleanup worked fine.





[jira] [Created] (CASSANDRA-13011) heap exhaustion when cleaning table with wide partitions and a secondary index attached to it

2016-12-06 Thread Milan Majercik (JIRA)
Milan Majercik created CASSANDRA-13011:
--

 Summary: heap exhaustion when cleaning table with wide partitions 
and a secondary index attached to it
 Key: CASSANDRA-13011
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13011
 Project: Cassandra
  Issue Type: Bug
  Components: Compaction
Reporter: Milan Majercik








[jira] [Comment Edited] (CASSANDRA-12673) Nodes cannot see each other in multi-DC, non-EC2 environment with two-interface nodes due to outbound node-to-node connection binding to private interface

2016-12-06 Thread Milan Majercik (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15727952#comment-15727952
 ] 

Milan Majercik edited comment on CASSANDRA-12673 at 12/7/16 7:07 AM:
-

I also did run all relevant unit and distributed tests, meaning all tests 
related to streaming.


was (Author: mmajercik):
I did run all relevant unit and distributed tests, meaning all tests related to 
streaming.

> Nodes cannot see each other in multi-DC, non-EC2 environment with 
> two-interface nodes due to outbound node-to-node connection binding to 
> private interface
> --
>
> Key: CASSANDRA-12673
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12673
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Multi-DC, non-EC2 environment with two-interface nodes
>Reporter: Milan Majercik
>Priority: Minor
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> We have a two-DC cluster in a non-EC2 environment, with each node having two 
> interfaces: one using private addresses for intra-DC communication and the 
> other using public addresses for inter-DC communication. After the 
> configuration setup needed for this kind of environment, we observed that the 
> nodes cannot see each other.
> The configuration changes made for this purpose are as follows:
> *listen_address*: bound to private interface
> *broadcast_address*: bound to public address
> *listen_on_broadcast_address*: true
> *endpoint_snitch*: GossipingPropertyFileSnitch
> *prefer_local*=true (in cassandra-rackdc.properties)
> Upon restart, a Cassandra node contacts other nodes at their public addresses, 
> which is essential for making contact with foreign data centers. After 
> exhaustive investigation we found that Cassandra binds outbound node-to-node 
> connections to the private interface (the one specified in listen_address), 
> which poses a problem for our environment as these data centers _do not allow 
> connections from the private interface to the public network_.
> A portion of cassandra code responsible for local binding of outbound 
> connections can be found in method 
> {{org.apache.cassandra.net.OutboundTcpConnectionPool.newSocket}}:
> {code}
> if (!Config.getOutboundBindAny())
> channel.bind(new 
> InetSocketAddress(FBUtilities.getLocalAddress(), 0));
> {code}
> After we commented out these two lines and deployed cassandra.jar across the 
> cluster, the nodes were able to see each other and everything appeared to be 
> working fine, including the two-DC setup.
> Do you think it's possible to remove these two lines without negative 
> consequences? Alternatively, if the local binding serves some specific 
> purpose of which I'm ignorant would it be possible to make it configurable?
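
A sketch of the configurable alternative asked about above; the system property name here is illustrative only (the quoted code already consults Config.getOutboundBindAny(), so the real change may just be a matter of wiring that flag through):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.nio.channels.SocketChannel;

public class OutboundBindSketch
{
    // Open an outbound channel, binding the source address to listenAddress
    // unless the (illustrative) property cassandra.outbound_bind_any is set,
    // in which case the OS routing table picks the source interface -- the
    // effect the reporter obtained by commenting the bind out.
    static SocketChannel newSocket(InetAddress listenAddress)
    {
        try
        {
            SocketChannel channel = SocketChannel.open();
            if (!Boolean.getBoolean("cassandra.outbound_bind_any"))
                channel.bind(new InetSocketAddress(listenAddress, 0)); // current behavior
            return channel;
        }
        catch (IOException e)
        {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) throws IOException
    {
        SocketChannel bound = newSocket(InetAddress.getLoopbackAddress());
        System.out.println(bound.socket().getLocalSocketAddress());
        bound.close();
    }
}
```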





[jira] [Commented] (CASSANDRA-12673) Nodes cannot see each other in multi-DC, non-EC2 environment with two-interface nodes due to outbound node-to-node connection binding to private interface

2016-12-06 Thread Milan Majercik (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15727952#comment-15727952
 ] 

Milan Majercik commented on CASSANDRA-12673:


I did run all relevant unit and distributed tests, meaning all tests related to 
streaming.

> Nodes cannot see each other in multi-DC, non-EC2 environment with 
> two-interface nodes due to outbound node-to-node connection binding to 
> private interface
> --
>
> Key: CASSANDRA-12673
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12673
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
> Environment: Multi-DC, non-EC2 environment with two-interface nodes
>Reporter: Milan Majercik
>Priority: Minor
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> We have a two-DC cluster in a non-EC2 environment, with each node having two 
> interfaces: one using private addresses for intra-DC communication and the 
> other using public addresses for inter-DC communication. After the 
> configuration setup needed for this kind of environment, we observed that the 
> nodes cannot see each other.
> The configuration changes made for this purpose are as follows:
> *listen_address*: bound to private interface
> *broadcast_address*: bound to public address
> *listen_on_broadcast_address*: true
> *endpoint_snitch*: GossipingPropertyFileSnitch
> *prefer_local*=true (in cassandra-rackdc.properties)
> Upon restart, a Cassandra node contacts other nodes at their public addresses, 
> which is essential for making contact with foreign data centers. After 
> exhaustive investigation we found that Cassandra binds outbound node-to-node 
> connections to the private interface (the one specified in listen_address), 
> which poses a problem for our environment as these data centers _do not allow 
> connections from the private interface to the public network_.
> A portion of cassandra code responsible for local binding of outbound 
> connections can be found in method 
> {{org.apache.cassandra.net.OutboundTcpConnectionPool.newSocket}}:
> {code}
> if (!Config.getOutboundBindAny())
> channel.bind(new 
> InetSocketAddress(FBUtilities.getLocalAddress(), 0));
> {code}
> After we commented out these two lines and deployed cassandra.jar across the 
> cluster, the nodes were able to see each other and everything appeared to be 
> working fine, including the two-DC setup.
> Do you think it's possible to remove these two lines without negative 
> consequences? Alternatively, if the local binding serves some specific 
> purpose of which I'm ignorant would it be possible to make it configurable?





[jira] [Commented] (CASSANDRA-13006) Disable automatic heap dumps on OOM error

2016-12-06 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15727672#comment-15727672
 ] 

Jeremiah Jordan commented on CASSANDRA-13006:
-

[~cnlwsu] This isn't proposing we remove heap dumps, just that we honor the JVM 
flags for making and storing them. I think that is probably a good idea.

We should still default our flags to create them in the env files, but users 
should have control over whether they are written out, beyond hacks like 
setting the output path to a read-only location.

> Disable automatic heap dumps on OOM error
> -
>
> Key: CASSANDRA-13006
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13006
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: anmols
>Priority: Minor
>
> With CASSANDRA-9861, a change was added to enable collecting heap dumps by 
> default if the process encountered an OOM error. These heap dumps are stored 
> in the Apache Cassandra home directory unless configured otherwise (see 
> [Cassandra Support 
> Document|https://support.datastax.com/hc/en-us/articles/204225959-Generating-and-Analyzing-Heap-Dumps]
>  for this feature).
>  
> The creation and storage of heap dumps aids debugging and investigative 
> workflows, but is not desirable in a production environment, where these 
> heap dumps may occupy a large amount of disk space and require manual 
> intervention for cleanup. 
>  
> Managing heap dumps on out-of-memory errors and configuring their paths are 
> already available as JVM options. The current behavior conflicts with the 
> Boolean JVM flag HeapDumpOnOutOfMemoryError. 
>  
> A patch is proposed here that makes the heap dump on OOM error honor the 
> HeapDumpOnOutOfMemoryError flag. Users who still want to generate heap dumps 
> on OOM errors can set the -XX:+HeapDumpOnOutOfMemoryError JVM option.
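
The flag in question is a standard, runtime-writable HotSpot option, which can be verified with the stock management API (this snippet only demonstrates the flag's behavior; it is not part of the proposed patch):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class HeapDumpFlag
{
    public static void main(String[] args)
    {
        HotSpotDiagnosticMXBean diag =
            ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // HeapDumpOnOutOfMemoryError is writable at runtime, so honoring it
        // costs users nothing: it can even be toggled on a live JVM.
        diag.setVMOption("HeapDumpOnOutOfMemoryError", "true");
        System.out.println(diag.getVMOption("HeapDumpOnOutOfMemoryError").getValue()); // true
    }
}
```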





[jira] [Updated] (CASSANDRA-13010) nodetool compactionstats should say which disk a compaction is writing to

2016-12-06 Thread Jeremiah Jordan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-13010:

Issue Type: New Feature  (was: Bug)

> nodetool compactionstats should say which disk a compaction is writing to
> -
>
> Key: CASSANDRA-13010
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13010
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jon Haddad
>  Labels: lhf
>






[jira] [Updated] (CASSANDRA-13008) Add vm.max_map_count StartupCheck

2016-12-06 Thread Jay Zhuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Zhuang updated CASSANDRA-13008:
---
Summary: Add vm.max_map_count StartupCheck  (was: Add vm.max_map_count 
startup StartupCheck)

> Add vm.max_map_count StartupCheck
> -
>
> Key: CASSANDRA-13008
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13008
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Jay Zhuang
>Priority: Minor
> Fix For: 3.0.x
>
> Attachments: 13008-3.0.txt
>
>
> It's recommended to set {{vm.max_map_count}} to 1048575 (CASSANDRA-3563).
> When max_map_count is too low, the JVM throws an OOM exception that is hard 
> to link to the real issue of vm.max_map_count.
> The problem happened when we tried to remove one node: all the other nodes in 
> the cluster crashed, as each node was trying to load more local SSTable files 
> for streaming.
> I would suggest adding a StartupCheck for max_map_count; at the least it 
> could give a warning message to help debugging.
> {code}
> ERROR [STREAM-IN-] JVMStabilityInspector.java:140 - JVM state determined to 
> be unstable.  Exiting forcefully due to:
> java.lang.OutOfMemoryError: Map failed
> at sun.nio.ch.FileChannelImpl.map0(Native Method) ~[na:1.8.0_112]
> at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:937) 
> ~[na:1.8.0_112]
> at org.apache.cassandra.io.util.ChannelProxy.map(ChannelProxy.java:152) 
> ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.io.util.MmappedRegions$State.add(MmappedRegions.java:280)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.io.util.MmappedRegions$State.access$400(MmappedRegions.java:216)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.io.util.MmappedRegions.updateState(MmappedRegions.java:173)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.io.util.MmappedRegions.<init>(MmappedRegions.java:70) 
> ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.io.util.MmappedRegions.<init>(MmappedRegions.java:58) 
> ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.io.util.MmappedRegions.map(MmappedRegions.java:96) 
> ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.io.util.CompressedSegmentedFile.<init>(CompressedSegmentedFile.java:47)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.complete(CompressedSegmentedFile.java:132)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:177)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.io.util.SegmentedFile$Builder.buildData(SegmentedFile.java:193)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.openFinal(BigTableWriter.java:276)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.access$600(BigTableWriter.java:50)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter$TransactionalProxy.doPrepare(BigTableWriter.java:313)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.prepareToCommit(Transactional.java:173)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.finish(Transactional.java:184)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.io.sstable.format.SSTableWriter.finish(SSTableWriter.java:213)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.io.sstable.SimpleSSTableMultiWriter.finish(SimpleSSTableMultiWriter.java:56)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.streaming.StreamReceiveTask.received(StreamReceiveTask.java:109)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.streaming.StreamSession.receive(StreamSession.java:599) 
> ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.streaming.StreamSession.messageReceived(StreamSession.java:482)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:296)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_112]
> {code}
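
The proposed check could look roughly like this; a hypothetical sketch (the threshold value comes from the ticket, while the method names and message format are assumptions):

```java
public class MaxMapCountCheck
{
    // Recommended by CASSANDRA-3563; Linux exposes the current value
    // in /proc/sys/vm/max_map_count.
    static final long RECOMMENDED = 1048575;

    // Pure decision logic: warn when the sysctl is below the recommendation.
    static String check(long maxMapCount)
    {
        return maxMapCount < RECOMMENDED
             ? "WARN: vm.max_map_count " + maxMapCount + " is below recommended " + RECOMMENDED
             : "ok";
    }

    public static void main(String[] args) throws java.io.IOException
    {
        java.nio.file.Path sysctl = java.nio.file.Paths.get("/proc/sys/vm/max_map_count");
        if (java.nio.file.Files.exists(sysctl))
            System.out.println(check(Long.parseLong(java.nio.file.Files.readString(sysctl).trim())));
        else
            System.out.println("not Linux; nothing to check");
    }
}
```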





[jira] [Commented] (CASSANDRA-12649) Add BATCH metrics

2016-12-06 Thread Alwyn Davis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15727484#comment-15727484
 ] 

Alwyn Davis commented on CASSANDRA-12649:
-

[~blerer] No concerns from me, thanks for identifying the CAS issue!

> Add BATCH metrics
> -
>
> Key: CASSANDRA-12649
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12649
> Project: Cassandra
>  Issue Type: Wish
>Reporter: Alwyn Davis
>Assignee: Alwyn Davis
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 12649-3.x-v2.patch, 12649-3.x.patch, 
> stress-batch-metrics.tar.gz, stress-trunk.tar.gz, trunk-12649.txt
>
>
> To identify causes of load on a cluster, it would be useful to have some 
> additional metrics:
> * *Mutation size distribution:* I believe this would be relevant when 
> tracking the performance of unlogged batches.
> * *Logged / Unlogged Partitions per batch distribution:* This would also give 
> a count of batch types processed. Multiple distinct tables in a batch would 
> just be counted as separate partitions.
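
The partitions-per-batch count described above could be computed along these lines; an illustrative sketch (the Mutation type and the table-plus-key partition identity are assumptions, not the patch's actual types):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class BatchPartitionCount
{
    record Mutation(String table, String partitionKey) {}

    // Distinct (table, partition key) pairs: repeated writes to one partition
    // count once, while distinct tables count as separate partitions.
    static int partitionsPerBatch(List<Mutation> batch)
    {
        Set<Mutation> distinct = new HashSet<>(batch); // records compare by value
        return distinct.size();
    }

    public static void main(String[] args)
    {
        List<Mutation> batch = List.of(new Mutation("users", "k1"),
                                       new Mutation("users", "k1"),   // same partition
                                       new Mutation("events", "k1")); // separate table
        System.out.println(partitionsPerBatch(batch)); // 2
    }
}
```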





[jira] [Updated] (CASSANDRA-13010) nodetool compactionstats should say which disk a compaction is writing to

2016-12-06 Thread Jeremy Hanna (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremy Hanna updated CASSANDRA-13010:
-
Labels: lhf  (was: )

> nodetool compactionstats should say which disk a compaction is writing to
> -
>
> Key: CASSANDRA-13010
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13010
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jon Haddad
>  Labels: lhf
>






[jira] [Commented] (CASSANDRA-12649) Add BATCH metrics

2016-12-06 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15727459#comment-15727459
 ] 

Aleksey Yeschenko commented on CASSANDRA-12649:
---

Looks all good to me. Could fix if/else brackets formatting in 
{{updatePartitionsPerBatchMetrics()}} if you feel like it, but the logic seems 
correct.

> Add BATCH metrics
> -
>
> Key: CASSANDRA-12649
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12649
> Project: Cassandra
>  Issue Type: Wish
>Reporter: Alwyn Davis
>Assignee: Alwyn Davis
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 12649-3.x-v2.patch, 12649-3.x.patch, 
> stress-batch-metrics.tar.gz, stress-trunk.tar.gz, trunk-12649.txt
>
>
> To identify causes of load on a cluster, it would be useful to have some 
> additional metrics:
> * *Mutation size distribution:* I believe this would be relevant when 
> tracking the performance of unlogged batches.
> * *Logged / Unlogged Partitions per batch distribution:* This would also give 
> a count of batch types processed. Multiple distinct tables in a batch would 
> just be counted as separate partitions.





[jira] [Commented] (CASSANDRA-11115) Thrift removal

2016-12-06 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15727443#comment-15727443
 ] 

Aleksey Yeschenko commented on CASSANDRA-11115:
---

9. {{UnfilteredRowIteratorWithLowerBound}}, {{nowInSec}} field is now unused
10. {{KeysSearcher}} now has 2 new unused imports

Have looked through everything, don't see any issues, only the listed nits.

Have three more spots to go through again, first thing tomorrow:

1. In {{LegacyLayout}}, treatment of the removed {{readAllAsDynamic}} arg as 
always false.
2. {{CqlInputFormat}}/{{ConfigHelper}}: whether or not nulls were valid/legal 
before for job range start/end keys/tokens.
3. Make another pass to convince myself that the new {{TokenRange}} for 
{{describeRing()}} does exactly the same thing.

Other than that I'm done, all LGTM.

> Thrift removal
> --
>
> Key: CASSANDRA-11115
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11115
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
> Fix For: 4.0
>
>
> Thrift removal [has been announced for 
> 4.0|http://mail-archives.apache.org/mod_mbox/cassandra-user/201601.mbox/%3ccaldd-zgagnldu3pqbd6wp0jb0x73qjdr9phpxmmo+gq+2e5...@mail.gmail.com%3E].
>  This ticket is meant to serve as a general task for that removal, but also 
> to track issue related to that, either things that we should do in 3.x to 
> make that removal as smooth as possible, or sub-tasks that it makes sense to 
> separate.





[jira] [Created] (CASSANDRA-13010) nodetool compactionstats should say which disk a compaction is writing to

2016-12-06 Thread Jon Haddad (JIRA)
Jon Haddad created CASSANDRA-13010:
--

 Summary: nodetool compactionstats should say which disk a 
compaction is writing to
 Key: CASSANDRA-13010
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13010
 Project: Cassandra
  Issue Type: Bug
Reporter: Jon Haddad








[jira] [Updated] (CASSANDRA-13009) Speculative retry bugs

2016-12-06 Thread Simon Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Zhou updated CASSANDRA-13009:
---
Status: Patch Available  (was: Open)

We should have some unit tests for SpeculatingReadExecutor and 
AbstractReadExecutor#getReadExecutor. I'll try to add some after the initial 
review.

> Speculative retry bugs
> --
>
> Key: CASSANDRA-13009
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13009
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Simon Zhou
>Assignee: Simon Zhou
> Fix For: 3.0.11
>
> Attachments: CASSANDRA-13009-v1.patch
>
>
> There are a few issues with speculative retry:
> 1. Time unit bugs. These are from ColumnFamilyStore (v3.0.10):
> The left hand side is in nanos, as the name suggests, while the right hand 
> side is in millis.
> {code}
> sampleLatencyNanos = DatabaseDescriptor.getReadRpcTimeout() / 2;
> {code}
> Here coordinatorReadLatency is already in nanos and we shouldn't multiply the 
> value by 1000. This was a regression in 8896a70 when we switched metrics 
> libraries and the two libraries use different time units.
> {code}
> sampleLatencyNanos = (long) 
> (metric.coordinatorReadLatency.getSnapshot().getValue(retryPolicy.threshold())
>  * 1000d);
> {code}
> 2. Confusing overload protection and retry delay. As the name 
> "sampleLatencyNanos" suggests, it should be used to keep the actually sampled 
> read latency. However, we assign it the retry threshold in the case of 
> CUSTOM. Then we compare the retry threshold with read timeout (defaults to 
> 5000ms). This means, if we use speculative_retry=10ms for the table, we won't 
> be able to avoid being overloaded. We should compare the actual read latency 
> with the read timeout for overload protection. See line 450 of 
> ColumnFamilyStore.java and line 279 of AbstractReadExecutor.java.
> My proposals are:
> a. We use sampled p99 delay and compare it with a customizable threshold 
> (-Dcassandra.overload.threshold) for overload detection.
> b. Introduce another variable retryDelayNanos for waiting time before retry. 
> This is the value from table setting (PERCENTILE or CUSTOM).





[jira] [Updated] (CASSANDRA-13009) Speculative retry bugs

2016-12-06 Thread Simon Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Zhou updated CASSANDRA-13009:
---
Attachment: CASSANDRA-13009-v1.patch

> Speculative retry bugs
> --
>
> Key: CASSANDRA-13009
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13009
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Simon Zhou
>Assignee: Simon Zhou
> Fix For: 3.0.11
>
> Attachments: CASSANDRA-13009-v1.patch
>
>
> There are a few issues with speculative retry:
> 1. Time unit bugs. These are from ColumnFamilyStore (v3.0.10):
> The left hand side is in nanos, as the name suggests, while the right hand 
> side is in millis.
> {code}
> sampleLatencyNanos = DatabaseDescriptor.getReadRpcTimeout() / 2;
> {code}
> Here coordinatorReadLatency is already in nanos and we shouldn't multiply the 
> value by 1000. This was a regression in 8896a70 when we switched metrics 
> libraries and the two libraries use different time units.
> {code}
> sampleLatencyNanos = (long) 
> (metric.coordinatorReadLatency.getSnapshot().getValue(retryPolicy.threshold())
>  * 1000d);
> {code}
> 2. Confusing overload protection and retry delay. As the name 
> "sampleLatencyNanos" suggests, it should be used to keep the actually sampled 
> read latency. However, we assign it the retry threshold in the case of 
> CUSTOM. Then we compare the retry threshold with read timeout (defaults to 
> 5000ms). This means, if we use speculative_retry=10ms for the table, we won't 
> be able to avoid being overloaded. We should compare the actual read latency 
> with the read timeout for overload protection. See line 450 of 
> ColumnFamilyStore.java and line 279 of AbstractReadExecutor.java.
> My proposals are:
> a. We use sampled p99 delay and compare it with a customizable threshold 
> (-Dcassandra.overload.threshold) for overload detection.
> b. Introduce another variable retryDelayNanos for waiting time before retry. 
> This is the value from table setting (PERCENTILE or CUSTOM).





[jira] [Updated] (CASSANDRA-13009) Speculative retry bugs

2016-12-06 Thread Simon Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Zhou updated CASSANDRA-13009:
---
Description: 
There are a few issues with speculative retry:

1. Time unit bugs. These are from ColumnFamilyStore (v3.0.10):

The left hand side is in nanos, as the name suggests, while the right hand side 
is in millis.
{code}
sampleLatencyNanos = DatabaseDescriptor.getReadRpcTimeout() / 2;
{code}

Here coordinatorReadLatency is already in nanos and we shouldn't multiply the 
value by 1000. This was a regression in 8896a70 when we switched metrics 
libraries and the two libraries use different time units.

{code}
sampleLatencyNanos = (long) 
(metric.coordinatorReadLatency.getSnapshot().getValue(retryPolicy.threshold()) 
* 1000d);
{code}


2. Confusing overload protection and retry delay. As the name 
"sampleLatencyNanos" suggests, it should be used to keep the actually sampled 
read latency. However, we assign it the retry threshold in the case of CUSTOM. 
Then we compare the retry threshold with read timeout (defaults to 5000ms). 
This means, if we use speculative_retry=10ms for the table, we won't be able to 
avoid being overloaded. We should compare the actual read latency with the read 
timeout for overload protection. See line 450 of ColumnFamilyStore.java and 
line 279 of AbstractReadExecutor.java.

My proposals are:
a. We use sampled p99 delay and compare it with a customizable threshold 
(-Dcassandra.overload.threshold) for overload detection.
b. Introduce another variable retryDelayNanos for waiting time before retry. 
This is the value from table setting (PERCENTILE or CUSTOM).
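The two time-unit fixes in point 1 can be sketched as follows. This is a hypothetical illustration, not the actual patch: only the field name sampleLatencyNanos comes from ColumnFamilyStore, and the method names are made up.

```java
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of the two time-unit fixes described above.
class SpeculativeRetrySketch {
    static long sampleLatencyNanos;

    // First bug: getReadRpcTimeout() returns millis, so the value must be
    // converted before being stored in a nanos-named field.
    static void fromReadRpcTimeout(long readRpcTimeoutMillis) {
        sampleLatencyNanos = TimeUnit.MILLISECONDS.toNanos(readRpcTimeoutMillis / 2);
    }

    // Second bug: the metrics snapshot already reports nanos, so the stray
    // "* 1000d" multiplier is simply dropped.
    static void fromLatencySnapshot(double snapshotValueNanos) {
        sampleLatencyNanos = (long) snapshotValueNanos;
    }
}
```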


  was:
There are a few issues with speculative retry:

1. Time unit bugs. These are from ColumnFamilyStore (v3.0.10):

The left hand side is in nanos, as the name suggests, while the right hand side 
is in millis.
{code}
sampleLatencyNanos = DatabaseDescriptor.getReadRpcTimeout() / 2;
{code}

Here coordinatorReadLatency is already in nanos, so we shouldn't multiply the 
value by 1000. This was a regression in 8896a70 when we switched metrics 
libraries, and the two libraries use different time units.

{code}
sampleLatencyNanos = (long) 
(metric.coordinatorReadLatency.getSnapshot().getValue(retryPolicy.threshold()) 
* 1000d);
{code}


2. Confusing overload protection and retry delay. As the name 
"sampleLatencyNanos" suggests, it should be used to keep the actually sampled 
delay. However, we assign it the retry threshold in the case of CUSTOM. Then we 
compare the retry threshold with read timeout (defaults to 5000ms). This means, 
if we use speculative_retry=10ms for the table, we won't be able to avoid being 
overloaded. We should compare the actual read latency with the read timeout for 
overload protection.

My proposals are:
a. We use the sampled p99 delay for overload protection (do not retry if the 
latency is already very high) and introduce a retry delay for the waiting time 
before retry.
b. Allow users to set an overload threshold through the JVM option 
-Dcassandra.overload.threshold. The default value remains the read timeout (5000ms).



> Speculative retry bugs
> --
>
> Key: CASSANDRA-13009
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13009
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Simon Zhou
>Assignee: Simon Zhou
> Fix For: 3.0.11
>
>
> There are a few issues with speculative retry:
> 1. Time unit bugs. These are from ColumnFamilyStore (v3.0.10):
> The left hand side is in nanos, as the name suggests, while the right hand 
> side is in millis.
> {code}
> sampleLatencyNanos = DatabaseDescriptor.getReadRpcTimeout() / 2;
> {code}
> Here coordinatorReadLatency is already in nanos, so we shouldn't multiply the 
> value by 1000. This was a regression in 8896a70 when we switched metrics 
> libraries, and the two libraries use different time units.
> {code}
> sampleLatencyNanos = (long) 
> (metric.coordinatorReadLatency.getSnapshot().getValue(retryPolicy.threshold())
>  * 1000d);
> {code}
> 2. Confusing overload protection and retry delay. As the name 
> "sampleLatencyNanos" suggests, it should be used to keep the actually sampled 
> read latency. However, we assign it the retry threshold in the case of 
> CUSTOM. Then we compare the retry threshold with read timeout (defaults to 
> 5000ms). This means, if we use speculative_retry=10ms for the table, we won't 
> be able to avoid being overloaded. We should compare the actual read latency 
> with the read timeout for overload protection. See line 450 of 
> ColumnFamilyStore.java and line 279 of AbstractReadExecutor.java.
> My proposals are:
> a. We use sampled p99 delay and compare it with a customizable threshold 
> (-Dcassandra.overload.threshold) for overload detection.
> b. Introduce another variable retryDelayNanos for waiting time before retry. 
> This is the value from table setting (PERCENTILE or CUSTOM).

[jira] [Created] (CASSANDRA-13009) Speculative retry bugs

2016-12-06 Thread Simon Zhou (JIRA)
Simon Zhou created CASSANDRA-13009:
--

 Summary: Speculative retry bugs
 Key: CASSANDRA-13009
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13009
 Project: Cassandra
  Issue Type: Bug
Reporter: Simon Zhou
Assignee: Simon Zhou
 Fix For: 3.0.11


There are a few issues with speculative retry:

1. Time unit bugs. These are from ColumnFamilyStore (v3.0.10):

The left hand side is in nanos, as the name suggests, while the right hand side 
is in millis.
{code}
sampleLatencyNanos = DatabaseDescriptor.getReadRpcTimeout() / 2;
{code}

Here coordinatorReadLatency is already in nanos, so we shouldn't multiply the 
value by 1000. This was a regression in 8896a70 when we switched metrics 
libraries, and the two libraries use different time units.

{code}
sampleLatencyNanos = (long) 
(metric.coordinatorReadLatency.getSnapshot().getValue(retryPolicy.threshold()) 
* 1000d);
{code}


2. Confusing overload protection and retry delay. As the name 
"sampleLatencyNanos" suggests, it should be used to keep the actually sampled 
delay. However, we assign it the retry threshold in the case of CUSTOM. Then we 
compare the retry threshold with read timeout (defaults to 5000ms). This means, 
if we use speculative_retry=10ms for the table, we won't be able to avoid being 
overloaded. We should compare the actual read latency with the read timeout for 
overload protection.

My proposals are:
a. We use the sampled p99 delay for overload protection (do not retry if the 
latency is already very high) and introduce a retry delay for the waiting time 
before retry.
b. Allow users to set an overload threshold through the JVM option 
-Dcassandra.overload.threshold. The default value remains the read timeout (5000ms).
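Proposal (b) could be sketched roughly like this. Only the property name comes from the proposal; reading it via System.getProperty and the class/method names are assumptions for illustration.

```java
// Sketch of proposal (b): an overload threshold taken from the JVM option
// -Dcassandra.overload.threshold, falling back to the read timeout.
class OverloadThresholdSketch {
    static long overloadThresholdMillis(long readTimeoutMillis) {
        String prop = System.getProperty("cassandra.overload.threshold");
        return (prop == null) ? readTimeoutMillis : Long.parseLong(prop);
    }
}
```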






[jira] [Resolved] (CASSANDRA-12967) Spec for Cassandra RPM Build

2016-12-06 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler resolved CASSANDRA-12967.

   Resolution: Fixed
Fix Version/s: 4.0
   3.10
   2.2.9
   2.1.17

Committed RPM builds to 2.1, 2.2, 3.0, 3.11, 3.X, and trunk.
https://lists.apache.org/thread.html/71c3d7fa37baba3a720a7a2c5eca0b0da4fcfeaf1bc9eecdbc45cd99@%3Ccommits.cassandra.apache.org%3E

I'll work on how I might fit this into releases, but at the moment, users can 
certainly build RPMs from the in-tree {{redhat}} directories.

> Spec for Cassandra RPM Build
> 
>
> Key: CASSANDRA-12967
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12967
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Packaging
>Reporter: Bhuvan Rawal
>Assignee: Michael Shuler
> Fix For: 2.1.17, 2.2.9, 3.0.11, 3.10, 4.0
>
>
> Currently there is no RPM packaging for cassandra community. We tried 
> creating a RPM build for cassandra 3.0.9 using cassandra spec found in 
> https://github.com/riptano/cassandra-rpm, but we found some incompatibilities 
> as the spec was meant for 2.X versions. 
> It would be great to have a community RPM build for cassandra, with the 
> recommended cassandra recommendations and tools.





[14/36] cassandra git commit: Add redhat RPM build directory

2016-12-06 Thread mshuler
Add redhat RPM build directory


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e7b9c6ae
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e7b9c6ae
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e7b9c6ae

Branch: refs/heads/cassandra-3.X
Commit: e7b9c6aebd205af18aee1c705435c04b6f83b893
Parents: 0ecef31
Author: Michael Shuler 
Authored: Fri Dec 2 17:08:01 2016 -0600
Committer: Michael Shuler 
Committed: Tue Dec 6 17:27:03 2016 -0600

--
 redhat/README.md   |  31 
 redhat/cassandra   |  94 +++
 redhat/cassandra.conf  |   4 +
 redhat/cassandra.in.sh |  30 
 redhat/cassandra.spec  | 177 
 redhat/default |   7 ++
 6 files changed, 343 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e7b9c6ae/redhat/README.md
--
diff --git a/redhat/README.md b/redhat/README.md
new file mode 100644
index 000..0b2ab0d
--- /dev/null
+++ b/redhat/README.md
@@ -0,0 +1,31 @@
+# Apache Cassandra rpmbuild
+
+### Requirements:
+- The build system needs to have Apache Cassandra `ant artifacts` build dependencies installed.
+- Since Apache Cassandra depends on Python 2.7, the earliest version supported is RHEL/CentOS 7.0.
+
+### Step 1:
+- Build and copy sources to build tree:
+```
+ant artifacts
+cp build/apache-cassandra-*-src.tar.gz $RPM_BUILD_DIR/SOURCES/
+```
+
+### Step 2:
+- Since there is no version specified in the SPEC file, one needs to be passed at `rpmbuild` time (example with 4.0):
+```
+rpmbuild --define="version 4.0" -ba redhat/cassandra.spec
+```
+
+- RPM files can be found in their respective build tree directories:
+```
+ls -l $RPM_BUILD_DIR/{SRPMS,RPMS}/
+```
+
+### Hint:
+- Don't build packages as root..
+```
+# this makes your RPM_BUILD_DIR = ~/rpmbuild
+mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
+echo '%_topdir %(echo $HOME)/rpmbuild' > ~/.rpmmacros
+```

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e7b9c6ae/redhat/cassandra
--
diff --git a/redhat/cassandra b/redhat/cassandra
new file mode 100644
index 000..3e59534
--- /dev/null
+++ b/redhat/cassandra
@@ -0,0 +1,94 @@
+#!/bin/bash
+#
+# /etc/init.d/cassandra
+#
+# Startup script for Cassandra
+# 
+# chkconfig: 2345 20 80
+# description: Starts and stops Cassandra
+
+. /etc/rc.d/init.d/functions
+
+export CASSANDRA_HOME=/usr/share/cassandra
+export CASSANDRA_CONF=/etc/cassandra/conf
+export CASSANDRA_INCLUDE=$CASSANDRA_HOME/cassandra.in.sh
+export CASSANDRA_OWNR=cassandra
+NAME="cassandra"
+log_file=/var/log/cassandra/cassandra.log
+pid_file=/var/run/cassandra/cassandra.pid
+lock_file=/var/lock/subsys/$NAME
+CASSANDRA_PROG=/usr/sbin/cassandra
+
+# The first existing directory is used for JAVA_HOME if needed.
+JVM_SEARCH_DIRS="/usr/lib/jvm/jre /usr/lib/jvm/jre-1.7.* /usr/lib/jvm/java-1.7.*/jre"
+
+# Read configuration variable file if it is present
+[ -r /etc/default/$NAME ] && . /etc/default/$NAME
+
+# If JAVA_HOME has not been set, try to determine it.
+if [ -z "$JAVA_HOME" ]; then
+# If java is in PATH, use a JAVA_HOME that corresponds to that. This is
+# both consistent with how the upstream startup script works, and with
+# the use of alternatives to set a system JVM (as is done on Debian and
+# Red Hat derivatives).
+java="`/usr/bin/which java 2>/dev/null`"
+if [ -n "$java" ]; then
+java=`readlink --canonicalize "$java"`
+JAVA_HOME=`dirname "\`dirname \$java\`"`
+else
+# No JAVA_HOME set and no java found in PATH; search for a JVM.
+for jdir in $JVM_SEARCH_DIRS; do
+if [ -x "$jdir/bin/java" ]; then
+JAVA_HOME="$jdir"
+break
+fi
+done
+# if JAVA_HOME is still empty here, punt.
+fi
+fi
+JAVA="$JAVA_HOME/bin/java"
+export JAVA_HOME JAVA
+
+case "$1" in
+start)
+# Cassandra startup
+echo -n "Starting Cassandra: "
+su $CASSANDRA_OWNR -c "$CASSANDRA_PROG -p $pid_file" > $log_file 2>&1
+retval=$?
+[ $retval -eq 0 ] && touch $lock_file
+echo "OK"
+;;
+stop)
+# Cassandra shutdown
+echo -n "Shutdown Cassandra: "
+su $CASSANDRA_OWNR -c "kill `cat $pid_file`"
+retval=$?
+[ $retval -eq 0 ] && rm -f $lock_file
+for t in `seq 40`; do $0 status > /dev/null 2>&1 && sleep 0.5 || break; done
+# Adding a sleep here to give jmx time to wind down (CASSANDRA-4483). Not ideal...
+# Adam Holmberg suggests this, but that would break if the jmx port is changed
+   

[24/36] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2016-12-06 Thread mshuler
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2e93398c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2e93398c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2e93398c

Branch: refs/heads/cassandra-3.X
Commit: 2e93398ce715268d6ccefc8589a2d4435e0302aa
Parents: 6d770c4 3ec7cb0
Author: Michael Shuler 
Authored: Tue Dec 6 17:33:14 2016 -0600
Committer: Michael Shuler 
Committed: Tue Dec 6 17:33:14 2016 -0600

--

--




[33/36] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2016-12-06 Thread mshuler
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/78f609a8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/78f609a8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/78f609a8

Branch: refs/heads/cassandra-3.11
Commit: 78f609a8bd227788a03415aa7b4b48aa87775567
Parents: bad1980 dd8244c
Author: Michael Shuler 
Authored: Tue Dec 6 17:34:08 2016 -0600
Committer: Michael Shuler 
Committed: Tue Dec 6 17:34:08 2016 -0600

--

--




[25/36] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2016-12-06 Thread mshuler
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2e93398c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2e93398c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2e93398c

Branch: refs/heads/cassandra-2.2
Commit: 2e93398ce715268d6ccefc8589a2d4435e0302aa
Parents: 6d770c4 3ec7cb0
Author: Michael Shuler 
Authored: Tue Dec 6 17:33:14 2016 -0600
Committer: Michael Shuler 
Committed: Tue Dec 6 17:33:14 2016 -0600

--

--




[32/36] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2016-12-06 Thread mshuler
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/78f609a8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/78f609a8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/78f609a8

Branch: refs/heads/trunk
Commit: 78f609a8bd227788a03415aa7b4b48aa87775567
Parents: bad1980 dd8244c
Author: Michael Shuler 
Authored: Tue Dec 6 17:34:08 2016 -0600
Committer: Michael Shuler 
Committed: Tue Dec 6 17:34:08 2016 -0600

--

--




[11/36] cassandra git commit: Add redhat RPM build directory

2016-12-06 Thread mshuler
Add redhat RPM build directory


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6d770c4f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6d770c4f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6d770c4f

Branch: refs/heads/trunk
Commit: 6d770c4ff56fbdd195bc4f89e64cb69df5b3cb7c
Parents: a449e8f
Author: Michael Shuler 
Authored: Fri Dec 2 18:15:33 2016 -0600
Committer: Michael Shuler 
Committed: Tue Dec 6 17:26:34 2016 -0600

--
 redhat/README.md   |  31 
 redhat/cassandra   |  94 +++
 redhat/cassandra.conf  |   4 +
 redhat/cassandra.in.sh |  30 
 redhat/cassandra.spec  | 176 
 redhat/default |   7 ++
 6 files changed, 342 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6d770c4f/redhat/README.md
--
diff --git a/redhat/README.md b/redhat/README.md
new file mode 100644
index 000..0b2ab0d
--- /dev/null
+++ b/redhat/README.md
@@ -0,0 +1,31 @@
+# Apache Cassandra rpmbuild
+
+### Requirements:
+- The build system needs to have Apache Cassandra `ant artifacts` build dependencies installed.
+- Since Apache Cassandra depends on Python 2.7, the earliest version supported is RHEL/CentOS 7.0.
+
+### Step 1:
+- Build and copy sources to build tree:
+```
+ant artifacts
+cp build/apache-cassandra-*-src.tar.gz $RPM_BUILD_DIR/SOURCES/
+```
+
+### Step 2:
+- Since there is no version specified in the SPEC file, one needs to be passed at `rpmbuild` time (example with 4.0):
+```
+rpmbuild --define="version 4.0" -ba redhat/cassandra.spec
+```
+
+- RPM files can be found in their respective build tree directories:
+```
+ls -l $RPM_BUILD_DIR/{SRPMS,RPMS}/
+```
+
+### Hint:
+- Don't build packages as root..
+```
+# this makes your RPM_BUILD_DIR = ~/rpmbuild
+mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
+echo '%_topdir %(echo $HOME)/rpmbuild' > ~/.rpmmacros
+```

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6d770c4f/redhat/cassandra
--
diff --git a/redhat/cassandra b/redhat/cassandra
new file mode 100644
index 000..3e59534
--- /dev/null
+++ b/redhat/cassandra
@@ -0,0 +1,94 @@
+#!/bin/bash
+#
+# /etc/init.d/cassandra
+#
+# Startup script for Cassandra
+# 
+# chkconfig: 2345 20 80
+# description: Starts and stops Cassandra
+
+. /etc/rc.d/init.d/functions
+
+export CASSANDRA_HOME=/usr/share/cassandra
+export CASSANDRA_CONF=/etc/cassandra/conf
+export CASSANDRA_INCLUDE=$CASSANDRA_HOME/cassandra.in.sh
+export CASSANDRA_OWNR=cassandra
+NAME="cassandra"
+log_file=/var/log/cassandra/cassandra.log
+pid_file=/var/run/cassandra/cassandra.pid
+lock_file=/var/lock/subsys/$NAME
+CASSANDRA_PROG=/usr/sbin/cassandra
+
+# The first existing directory is used for JAVA_HOME if needed.
+JVM_SEARCH_DIRS="/usr/lib/jvm/jre /usr/lib/jvm/jre-1.7.* /usr/lib/jvm/java-1.7.*/jre"
+
+# Read configuration variable file if it is present
+[ -r /etc/default/$NAME ] && . /etc/default/$NAME
+
+# If JAVA_HOME has not been set, try to determine it.
+if [ -z "$JAVA_HOME" ]; then
+# If java is in PATH, use a JAVA_HOME that corresponds to that. This is
+# both consistent with how the upstream startup script works, and with
+# the use of alternatives to set a system JVM (as is done on Debian and
+# Red Hat derivatives).
+java="`/usr/bin/which java 2>/dev/null`"
+if [ -n "$java" ]; then
+java=`readlink --canonicalize "$java"`
+JAVA_HOME=`dirname "\`dirname \$java\`"`
+else
+# No JAVA_HOME set and no java found in PATH; search for a JVM.
+for jdir in $JVM_SEARCH_DIRS; do
+if [ -x "$jdir/bin/java" ]; then
+JAVA_HOME="$jdir"
+break
+fi
+done
+# if JAVA_HOME is still empty here, punt.
+fi
+fi
+JAVA="$JAVA_HOME/bin/java"
+export JAVA_HOME JAVA
+
+case "$1" in
+start)
+# Cassandra startup
+echo -n "Starting Cassandra: "
+su $CASSANDRA_OWNR -c "$CASSANDRA_PROG -p $pid_file" > $log_file 2>&1
+retval=$?
+[ $retval -eq 0 ] && touch $lock_file
+echo "OK"
+;;
+stop)
+# Cassandra shutdown
+echo -n "Shutdown Cassandra: "
+su $CASSANDRA_OWNR -c "kill `cat $pid_file`"
+retval=$?
+[ $retval -eq 0 ] && rm -f $lock_file
+for t in `seq 40`; do $0 status > /dev/null 2>&1 && sleep 0.5 || break; done
+# Adding a sleep here to give jmx time to wind down (CASSANDRA-4483). Not ideal...
+# Adam Holmberg suggests this, but that would break if the jmx port is changed
+# 

[03/36] cassandra git commit: Add redhat RPM build directory

2016-12-06 Thread mshuler
Add redhat RPM build directory


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3ec7cb0c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3ec7cb0c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3ec7cb0c

Branch: refs/heads/cassandra-2.2
Commit: 3ec7cb0ca2cd8214233a890d4e7275f223a984c6
Parents: 079029a
Author: Michael Shuler 
Authored: Fri Dec 2 21:01:53 2016 -0600
Committer: Michael Shuler 
Committed: Tue Dec 6 17:26:04 2016 -0600

--
 redhat/README.md   |  31 
 redhat/cassandra   |  94 
 redhat/cassandra.conf  |   4 +
 redhat/cassandra.in.sh |  30 
 redhat/cassandra.spec  | 174 
 redhat/default |   7 ++
 6 files changed, 340 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3ec7cb0c/redhat/README.md
--
diff --git a/redhat/README.md b/redhat/README.md
new file mode 100644
index 000..0b2ab0d
--- /dev/null
+++ b/redhat/README.md
@@ -0,0 +1,31 @@
+# Apache Cassandra rpmbuild
+
+### Requirements:
+- The build system needs to have Apache Cassandra `ant artifacts` build dependencies installed.
+- Since Apache Cassandra depends on Python 2.7, the earliest version supported is RHEL/CentOS 7.0.
+
+### Step 1:
+- Build and copy sources to build tree:
+```
+ant artifacts
+cp build/apache-cassandra-*-src.tar.gz $RPM_BUILD_DIR/SOURCES/
+```
+
+### Step 2:
+- Since there is no version specified in the SPEC file, one needs to be passed at `rpmbuild` time (example with 4.0):
+```
+rpmbuild --define="version 4.0" -ba redhat/cassandra.spec
+```
+
+- RPM files can be found in their respective build tree directories:
+```
+ls -l $RPM_BUILD_DIR/{SRPMS,RPMS}/
+```
+
+### Hint:
+- Don't build packages as root..
+```
+# this makes your RPM_BUILD_DIR = ~/rpmbuild
+mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
+echo '%_topdir %(echo $HOME)/rpmbuild' > ~/.rpmmacros
+```

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3ec7cb0c/redhat/cassandra
--
diff --git a/redhat/cassandra b/redhat/cassandra
new file mode 100644
index 000..3e59534
--- /dev/null
+++ b/redhat/cassandra
@@ -0,0 +1,94 @@
+#!/bin/bash
+#
+# /etc/init.d/cassandra
+#
+# Startup script for Cassandra
+# 
+# chkconfig: 2345 20 80
+# description: Starts and stops Cassandra
+
+. /etc/rc.d/init.d/functions
+
+export CASSANDRA_HOME=/usr/share/cassandra
+export CASSANDRA_CONF=/etc/cassandra/conf
+export CASSANDRA_INCLUDE=$CASSANDRA_HOME/cassandra.in.sh
+export CASSANDRA_OWNR=cassandra
+NAME="cassandra"
+log_file=/var/log/cassandra/cassandra.log
+pid_file=/var/run/cassandra/cassandra.pid
+lock_file=/var/lock/subsys/$NAME
+CASSANDRA_PROG=/usr/sbin/cassandra
+
+# The first existing directory is used for JAVA_HOME if needed.
+JVM_SEARCH_DIRS="/usr/lib/jvm/jre /usr/lib/jvm/jre-1.7.* /usr/lib/jvm/java-1.7.*/jre"
+
+# Read configuration variable file if it is present
+[ -r /etc/default/$NAME ] && . /etc/default/$NAME
+
+# If JAVA_HOME has not been set, try to determine it.
+if [ -z "$JAVA_HOME" ]; then
+# If java is in PATH, use a JAVA_HOME that corresponds to that. This is
+# both consistent with how the upstream startup script works, and with
+# the use of alternatives to set a system JVM (as is done on Debian and
+# Red Hat derivatives).
+java="`/usr/bin/which java 2>/dev/null`"
+if [ -n "$java" ]; then
+java=`readlink --canonicalize "$java"`
+JAVA_HOME=`dirname "\`dirname \$java\`"`
+else
+# No JAVA_HOME set and no java found in PATH; search for a JVM.
+for jdir in $JVM_SEARCH_DIRS; do
+if [ -x "$jdir/bin/java" ]; then
+JAVA_HOME="$jdir"
+break
+fi
+done
+# if JAVA_HOME is still empty here, punt.
+fi
+fi
+JAVA="$JAVA_HOME/bin/java"
+export JAVA_HOME JAVA
+
+case "$1" in
+start)
+# Cassandra startup
+echo -n "Starting Cassandra: "
+su $CASSANDRA_OWNR -c "$CASSANDRA_PROG -p $pid_file" > $log_file 2>&1
+retval=$?
+[ $retval -eq 0 ] && touch $lock_file
+echo "OK"
+;;
+stop)
+# Cassandra shutdown
+echo -n "Shutdown Cassandra: "
+su $CASSANDRA_OWNR -c "kill `cat $pid_file`"
+retval=$?
+[ $retval -eq 0 ] && rm -f $lock_file
+for t in `seq 40`; do $0 status > /dev/null 2>&1 && sleep 0.5 || break; done
+# Adding a sleep here to give jmx time to wind down (CASSANDRA-4483). Not ideal...
+# Adam Holmberg suggests this, but that would break if the jmx port is changed
+  

[21/36] cassandra git commit: Add redhat RPM build directory

2016-12-06 Thread mshuler
Add redhat RPM build directory


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bad19807
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bad19807
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bad19807

Branch: refs/heads/cassandra-3.X
Commit: bad1980739a566189ba65cf9618cf5a2f0c157f4
Parents: bed3def
Author: Michael Shuler 
Authored: Tue Dec 6 17:32:07 2016 -0600
Committer: Michael Shuler 
Committed: Tue Dec 6 17:32:07 2016 -0600

--
 redhat/README.md   |  31 
 redhat/cassandra   |  94 +++
 redhat/cassandra.conf  |   4 +
 redhat/cassandra.in.sh |  30 
 redhat/cassandra.spec  | 178 
 redhat/default |   7 ++
 6 files changed, 344 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bad19807/redhat/README.md
--
diff --git a/redhat/README.md b/redhat/README.md
new file mode 100644
index 000..0b2ab0d
--- /dev/null
+++ b/redhat/README.md
@@ -0,0 +1,31 @@
+# Apache Cassandra rpmbuild
+
+### Requirements:
+- The build system needs to have Apache Cassandra `ant artifacts` build dependencies installed.
+- Since Apache Cassandra depends on Python 2.7, the earliest version supported is RHEL/CentOS 7.0.
+
+### Step 1:
+- Build and copy sources to build tree:
+```
+ant artifacts
+cp build/apache-cassandra-*-src.tar.gz $RPM_BUILD_DIR/SOURCES/
+```
+
+### Step 2:
+- Since there is no version specified in the SPEC file, one needs to be passed at `rpmbuild` time (example with 4.0):
+```
+rpmbuild --define="version 4.0" -ba redhat/cassandra.spec
+```
+
+- RPM files can be found in their respective build tree directories:
+```
+ls -l $RPM_BUILD_DIR/{SRPMS,RPMS}/
+```
+
+### Hint:
+- Don't build packages as root..
+```
+# this makes your RPM_BUILD_DIR = ~/rpmbuild
+mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
+echo '%_topdir %(echo $HOME)/rpmbuild' > ~/.rpmmacros
+```

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bad19807/redhat/cassandra
--
diff --git a/redhat/cassandra b/redhat/cassandra
new file mode 100644
index 000..3e59534
--- /dev/null
+++ b/redhat/cassandra
@@ -0,0 +1,94 @@
+#!/bin/bash
+#
+# /etc/init.d/cassandra
+#
+# Startup script for Cassandra
+# 
+# chkconfig: 2345 20 80
+# description: Starts and stops Cassandra
+
+. /etc/rc.d/init.d/functions
+
+export CASSANDRA_HOME=/usr/share/cassandra
+export CASSANDRA_CONF=/etc/cassandra/conf
+export CASSANDRA_INCLUDE=$CASSANDRA_HOME/cassandra.in.sh
+export CASSANDRA_OWNR=cassandra
+NAME="cassandra"
+log_file=/var/log/cassandra/cassandra.log
+pid_file=/var/run/cassandra/cassandra.pid
+lock_file=/var/lock/subsys/$NAME
+CASSANDRA_PROG=/usr/sbin/cassandra
+
+# The first existing directory is used for JAVA_HOME if needed.
+JVM_SEARCH_DIRS="/usr/lib/jvm/jre /usr/lib/jvm/jre-1.7.* /usr/lib/jvm/java-1.7.*/jre"
+
+# Read configuration variable file if it is present
+[ -r /etc/default/$NAME ] && . /etc/default/$NAME
+
+# If JAVA_HOME has not been set, try to determine it.
+if [ -z "$JAVA_HOME" ]; then
+# If java is in PATH, use a JAVA_HOME that corresponds to that. This is
+# both consistent with how the upstream startup script works, and with
+# the use of alternatives to set a system JVM (as is done on Debian and
+# Red Hat derivatives).
+java="`/usr/bin/which java 2>/dev/null`"
+if [ -n "$java" ]; then
+java=`readlink --canonicalize "$java"`
+JAVA_HOME=`dirname "\`dirname \$java\`"`
+else
+# No JAVA_HOME set and no java found in PATH; search for a JVM.
+for jdir in $JVM_SEARCH_DIRS; do
+if [ -x "$jdir/bin/java" ]; then
+JAVA_HOME="$jdir"
+break
+fi
+done
+# if JAVA_HOME is still empty here, punt.
+fi
+fi
+JAVA="$JAVA_HOME/bin/java"
+export JAVA_HOME JAVA
+
+case "$1" in
+start)
+# Cassandra startup
+echo -n "Starting Cassandra: "
+su $CASSANDRA_OWNR -c "$CASSANDRA_PROG -p $pid_file" > $log_file 2>&1
+retval=$?
+[ $retval -eq 0 ] && touch $lock_file
+echo "OK"
+;;
+stop)
+# Cassandra shutdown
+echo -n "Shutdown Cassandra: "
+su $CASSANDRA_OWNR -c "kill `cat $pid_file`"
+retval=$?
+[ $retval -eq 0 ] && rm -f $lock_file
+for t in `seq 40`; do $0 status > /dev/null 2>&1 && sleep 0.5 || break; done
+# Adding a sleep here to give jmx time to wind down (CASSANDRA-4483). Not ideal...
+# Adam Holmberg suggests this, but that would break if the jmx port is changed
+   

[23/36] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2016-12-06 Thread mshuler
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2e93398c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2e93398c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2e93398c

Branch: refs/heads/cassandra-3.11
Commit: 2e93398ce715268d6ccefc8589a2d4435e0302aa
Parents: 6d770c4 3ec7cb0
Author: Michael Shuler 
Authored: Tue Dec 6 17:33:14 2016 -0600
Committer: Michael Shuler 
Committed: Tue Dec 6 17:33:14 2016 -0600

--

--




[06/36] cassandra git commit: Add redhat RPM build directory

2016-12-06 Thread mshuler
Add redhat RPM build directory


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3ec7cb0c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3ec7cb0c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3ec7cb0c

Branch: refs/heads/cassandra-3.11
Commit: 3ec7cb0ca2cd8214233a890d4e7275f223a984c6
Parents: 079029a
Author: Michael Shuler 
Authored: Fri Dec 2 21:01:53 2016 -0600
Committer: Michael Shuler 
Committed: Tue Dec 6 17:26:04 2016 -0600

--
 redhat/README.md   |  31 
 redhat/cassandra   |  94 
 redhat/cassandra.conf  |   4 +
 redhat/cassandra.in.sh |  30 
 redhat/cassandra.spec  | 174 
 redhat/default |   7 ++
 6 files changed, 340 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3ec7cb0c/redhat/README.md
--
diff --git a/redhat/README.md b/redhat/README.md
new file mode 100644
index 000..0b2ab0d
--- /dev/null
+++ b/redhat/README.md
@@ -0,0 +1,31 @@
+# Apache Cassandra rpmbuild
+
+### Requirements:
+- The build system needs to have Apache Cassandra `ant artifacts` build dependencies installed.
+- Since Apache Cassandra depends on Python 2.7, the earliest version supported is RHEL/CentOS 7.0.
+
+### Step 1:
+- Build and copy sources to build tree:
+```
+ant artifacts
+cp build/apache-cassandra-*-src.tar.gz $RPM_BUILD_DIR/SOURCES/
+```
+
+### Step 2:
+- Since there is no version specified in the SPEC file, one needs to be passed at `rpmbuild` time (example with 4.0):
+```
+rpmbuild --define="version 4.0" -ba redhat/cassandra.spec
+```
+
+- RPM files can be found in their respective build tree directories:
+```
+ls -l $RPM_BUILD_DIR/{SRPMS,RPMS}/
+```
+
+### Hint:
+- Don't build packages as root..
+```
+# this makes your RPM_BUILD_DIR = ~/rpmbuild
+mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
+echo '%_topdir %(echo $HOME)/rpmbuild' > ~/.rpmmacros
+```

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3ec7cb0c/redhat/cassandra
--
diff --git a/redhat/cassandra b/redhat/cassandra
new file mode 100644
index 000..3e59534
--- /dev/null
+++ b/redhat/cassandra
@@ -0,0 +1,94 @@
+#!/bin/bash
+#
+# /etc/init.d/cassandra
+#
+# Startup script for Cassandra
+# 
+# chkconfig: 2345 20 80
+# description: Starts and stops Cassandra
+
+. /etc/rc.d/init.d/functions
+
+export CASSANDRA_HOME=/usr/share/cassandra
+export CASSANDRA_CONF=/etc/cassandra/conf
+export CASSANDRA_INCLUDE=$CASSANDRA_HOME/cassandra.in.sh
+export CASSANDRA_OWNR=cassandra
+NAME="cassandra"
+log_file=/var/log/cassandra/cassandra.log
+pid_file=/var/run/cassandra/cassandra.pid
+lock_file=/var/lock/subsys/$NAME
+CASSANDRA_PROG=/usr/sbin/cassandra
+
+# The first existing directory is used for JAVA_HOME if needed.
+JVM_SEARCH_DIRS="/usr/lib/jvm/jre /usr/lib/jvm/jre-1.7.* /usr/lib/jvm/java-1.7.*/jre"
+
+# Read configuration variable file if it is present
+[ -r /etc/default/$NAME ] && . /etc/default/$NAME
+
+# If JAVA_HOME has not been set, try to determine it.
+if [ -z "$JAVA_HOME" ]; then
+# If java is in PATH, use a JAVA_HOME that corresponds to that. This is
+# both consistent with how the upstream startup script works, and with
+# the use of alternatives to set a system JVM (as is done on Debian and
+# Red Hat derivatives).
+java="`/usr/bin/which java 2>/dev/null`"
+if [ -n "$java" ]; then
+java=`readlink --canonicalize "$java"`
+JAVA_HOME=`dirname "\`dirname \$java\`"`
+else
+# No JAVA_HOME set and no java found in PATH; search for a JVM.
+for jdir in $JVM_SEARCH_DIRS; do
+if [ -x "$jdir/bin/java" ]; then
+JAVA_HOME="$jdir"
+break
+fi
+done
+# if JAVA_HOME is still empty here, punt.
+fi
+fi
+JAVA="$JAVA_HOME/bin/java"
+export JAVA_HOME JAVA
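The JAVA_HOME detection above resolves the `java` binary on `PATH` and takes its directory's parent. That derivation can be illustrated standalone; the path below is hypothetical, and the `readlink --canonicalize` step is skipped since no real binary exists here.

```shell
#!/bin/bash
# Standalone illustration of the init script's JAVA_HOME derivation:
# given a (resolved) java binary path, JAVA_HOME is two directories up.
# The path is a hypothetical example, not taken from the source.
java_bin="/usr/lib/jvm/java-1.7.0/jre/bin/java"
JAVA_HOME="$(dirname "$(dirname "$java_bin")")"
JAVA="$JAVA_HOME/bin/java"
echo "$JAVA_HOME"   # /usr/lib/jvm/java-1.7.0/jre
```

The nested backslash-escaped backticks in the original (`` JAVA_HOME=`dirname "\`dirname \$java\`"` ``) express the same two `dirname` calls; `$(...)` nesting avoids the escaping.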
+
+case "$1" in
+start)
+# Cassandra startup
+echo -n "Starting Cassandra: "
+su $CASSANDRA_OWNR -c "$CASSANDRA_PROG -p $pid_file" > $log_file 2>&1
+retval=$?
+[ $retval -eq 0 ] && touch $lock_file
+echo "OK"
+;;
+stop)
+# Cassandra shutdown
+echo -n "Shutdown Cassandra: "
+su $CASSANDRA_OWNR -c "kill `cat $pid_file`"
+retval=$?
+[ $retval -eq 0 ] && rm -f $lock_file
+for t in `seq 40`; do $0 status > /dev/null 2>&1 && sleep 0.5 || break; done
+# Adding a sleep here to give jmx time to wind down (CASSANDRA-4483). Not ideal...
+# Adam Holmberg suggests this, but that would break if the jmx port is changed
+ 
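The shutdown branch above polls `$0 status` up to 40 times at half-second intervals (roughly 20 seconds) and stops waiting as soon as the process is gone. A minimal sketch of that wait loop, with `status` stubbed to report "still running" for the first three polls and the sleep shortened so it runs quickly:

```shell
#!/bin/bash
# Sketch of the init script's shutdown wait loop. The status function is
# a stub (an assumption, not part of the source): it succeeds (process
# still running) for the first 3 calls, then fails (process gone).
polls=0
status() {
    polls=$((polls + 1))
    [ "$polls" -le 3 ]
}
# Same shape as the init script: poll, sleep while running, break once gone.
for t in $(seq 40); do
    status > /dev/null 2>&1 && sleep 0.01 || break
done
echo "stopped after $polls polls"   # 4th poll sees the process gone
```

The fourth call to `status` fails, so the loop breaks well before the 40-iteration cap, mirroring how the real script returns early once Cassandra has exited.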

[27/36] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-12-06 Thread mshuler
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dd8244c5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dd8244c5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dd8244c5

Branch: refs/heads/trunk
Commit: dd8244c5e0ca0570381d75b82d0c35a3ad167d0f
Parents: e7b9c6a 2e93398
Author: Michael Shuler 
Authored: Tue Dec 6 17:33:48 2016 -0600
Committer: Michael Shuler 
Committed: Tue Dec 6 17:33:48 2016 -0600

--

--




[26/36] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2016-12-06 Thread mshuler
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2e93398c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2e93398c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2e93398c

Branch: refs/heads/trunk
Commit: 2e93398ce715268d6ccefc8589a2d4435e0302aa
Parents: 6d770c4 3ec7cb0
Author: Michael Shuler 
Authored: Tue Dec 6 17:33:14 2016 -0600
Committer: Michael Shuler 
Committed: Tue Dec 6 17:33:14 2016 -0600

--

--




[16/36] cassandra git commit: Add redhat RPM build directory

2016-12-06 Thread mshuler
Add redhat RPM build directory


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/034b6986
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/034b6986
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/034b6986

Branch: refs/heads/cassandra-3.X
Commit: 034b69866995b18b58cac5ef8f458394ea5bc911
Parents: 6b6bc6a
Author: Michael Shuler 
Authored: Fri Dec 2 15:30:44 2016 -0600
Committer: Michael Shuler 
Committed: Tue Dec 6 17:27:28 2016 -0600

--
 redhat/README.md   |  31 
 redhat/cassandra   |  94 +++
 redhat/cassandra.conf  |   4 +
 redhat/cassandra.in.sh |  30 
 redhat/cassandra.spec  | 178 
 redhat/default |   7 ++
 6 files changed, 344 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/034b6986/redhat/README.md
--
diff --git a/redhat/README.md b/redhat/README.md
new file mode 100644
index 000..0b2ab0d
--- /dev/null
+++ b/redhat/README.md
@@ -0,0 +1,31 @@
+# Apache Cassandra rpmbuild
+
+### Requirements:
+- The build system needs to have Apache Cassandra `ant artifacts` build dependencies installed.
+- Since Apache Cassandra depends on Python 2.7, the earliest version supported is RHEL/CentOS 7.0.
+
+### Step 1:
+- Build and copy sources to build tree:
+```
+ant artifacts
+cp build/apache-cassandra-*-src.tar.gz $RPM_BUILD_DIR/SOURCES/
+```
+
+### Step 2:
+- Since there is no version specified in the SPEC file, one needs to be passed at `rpmbuild` time (example with 4.0):
+```
+rpmbuild --define="version 4.0" -ba redhat/cassandra.spec
+```
+
+- RPM files can be found in their respective build tree directories:
+```
+ls -l $RPM_BUILD_DIR/{SRPMS,RPMS}/
+```
+
+### Hint:
+- Don't build packages as root..
+```
+# this makes your RPM_BUILD_DIR = ~/rpmbuild
+mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
+echo '%_topdir %(echo $HOME)/rpmbuild' > ~/.rpmmacros
+```

http://git-wip-us.apache.org/repos/asf/cassandra/blob/034b6986/redhat/cassandra
--
diff --git a/redhat/cassandra b/redhat/cassandra
new file mode 100644
index 000..3e59534
--- /dev/null
+++ b/redhat/cassandra
@@ -0,0 +1,94 @@
+#!/bin/bash
+#
+# /etc/init.d/cassandra
+#
+# Startup script for Cassandra
+# 
+# chkconfig: 2345 20 80
+# description: Starts and stops Cassandra
+
+. /etc/rc.d/init.d/functions
+
+export CASSANDRA_HOME=/usr/share/cassandra
+export CASSANDRA_CONF=/etc/cassandra/conf
+export CASSANDRA_INCLUDE=$CASSANDRA_HOME/cassandra.in.sh
+export CASSANDRA_OWNR=cassandra
+NAME="cassandra"
+log_file=/var/log/cassandra/cassandra.log
+pid_file=/var/run/cassandra/cassandra.pid
+lock_file=/var/lock/subsys/$NAME
+CASSANDRA_PROG=/usr/sbin/cassandra
+
+# The first existing directory is used for JAVA_HOME if needed.
+JVM_SEARCH_DIRS="/usr/lib/jvm/jre /usr/lib/jvm/jre-1.7.* /usr/lib/jvm/java-1.7.*/jre"
+
+# Read configuration variable file if it is present
+[ -r /etc/default/$NAME ] && . /etc/default/$NAME
+
+# If JAVA_HOME has not been set, try to determine it.
+if [ -z "$JAVA_HOME" ]; then
+# If java is in PATH, use a JAVA_HOME that corresponds to that. This is
+# both consistent with how the upstream startup script works, and with
+# the use of alternatives to set a system JVM (as is done on Debian and
+# Red Hat derivatives).
+java="`/usr/bin/which java 2>/dev/null`"
+if [ -n "$java" ]; then
+java=`readlink --canonicalize "$java"`
+JAVA_HOME=`dirname "\`dirname \$java\`"`
+else
+# No JAVA_HOME set and no java found in PATH; search for a JVM.
+for jdir in $JVM_SEARCH_DIRS; do
+if [ -x "$jdir/bin/java" ]; then
+JAVA_HOME="$jdir"
+break
+fi
+done
+# if JAVA_HOME is still empty here, punt.
+fi
+fi
+JAVA="$JAVA_HOME/bin/java"
+export JAVA_HOME JAVA
+
+case "$1" in
+start)
+# Cassandra startup
+echo -n "Starting Cassandra: "
+su $CASSANDRA_OWNR -c "$CASSANDRA_PROG -p $pid_file" > $log_file 2>&1
+retval=$?
+[ $retval -eq 0 ] && touch $lock_file
+echo "OK"
+;;
+stop)
+# Cassandra shutdown
+echo -n "Shutdown Cassandra: "
+su $CASSANDRA_OWNR -c "kill `cat $pid_file`"
+retval=$?
+[ $retval -eq 0 ] && rm -f $lock_file
+for t in `seq 40`; do $0 status > /dev/null 2>&1 && sleep 0.5 || break; done
+# Adding a sleep here to give jmx time to wind down (CASSANDRA-4483). Not ideal...
+# Adam Holmberg suggests this, but that would break if the jmx port is changed
+   

[19/36] cassandra git commit: Add redhat RPM build directory

2016-12-06 Thread mshuler
Add redhat RPM build directory


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bad19807
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bad19807
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bad19807

Branch: refs/heads/trunk
Commit: bad1980739a566189ba65cf9618cf5a2f0c157f4
Parents: bed3def
Author: Michael Shuler 
Authored: Tue Dec 6 17:32:07 2016 -0600
Committer: Michael Shuler 
Committed: Tue Dec 6 17:32:07 2016 -0600

--
 redhat/README.md   |  31 
 redhat/cassandra   |  94 +++
 redhat/cassandra.conf  |   4 +
 redhat/cassandra.in.sh |  30 
 redhat/cassandra.spec  | 178 
 redhat/default |   7 ++
 6 files changed, 344 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bad19807/redhat/README.md
--
diff --git a/redhat/README.md b/redhat/README.md
new file mode 100644
index 000..0b2ab0d
--- /dev/null
+++ b/redhat/README.md
@@ -0,0 +1,31 @@
+# Apache Cassandra rpmbuild
+
+### Requirements:
+- The build system needs to have Apache Cassandra `ant artifacts` build dependencies installed.
+- Since Apache Cassandra depends on Python 2.7, the earliest version supported is RHEL/CentOS 7.0.
+
+### Step 1:
+- Build and copy sources to build tree:
+```
+ant artifacts
+cp build/apache-cassandra-*-src.tar.gz $RPM_BUILD_DIR/SOURCES/
+```
+
+### Step 2:
+- Since there is no version specified in the SPEC file, one needs to be passed at `rpmbuild` time (example with 4.0):
+```
+rpmbuild --define="version 4.0" -ba redhat/cassandra.spec
+```
+
+- RPM files can be found in their respective build tree directories:
+```
+ls -l $RPM_BUILD_DIR/{SRPMS,RPMS}/
+```
+
+### Hint:
+- Don't build packages as root..
+```
+# this makes your RPM_BUILD_DIR = ~/rpmbuild
+mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
+echo '%_topdir %(echo $HOME)/rpmbuild' > ~/.rpmmacros
+```

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bad19807/redhat/cassandra
--
diff --git a/redhat/cassandra b/redhat/cassandra
new file mode 100644
index 000..3e59534
--- /dev/null
+++ b/redhat/cassandra
@@ -0,0 +1,94 @@
+#!/bin/bash
+#
+# /etc/init.d/cassandra
+#
+# Startup script for Cassandra
+# 
+# chkconfig: 2345 20 80
+# description: Starts and stops Cassandra
+
+. /etc/rc.d/init.d/functions
+
+export CASSANDRA_HOME=/usr/share/cassandra
+export CASSANDRA_CONF=/etc/cassandra/conf
+export CASSANDRA_INCLUDE=$CASSANDRA_HOME/cassandra.in.sh
+export CASSANDRA_OWNR=cassandra
+NAME="cassandra"
+log_file=/var/log/cassandra/cassandra.log
+pid_file=/var/run/cassandra/cassandra.pid
+lock_file=/var/lock/subsys/$NAME
+CASSANDRA_PROG=/usr/sbin/cassandra
+
+# The first existing directory is used for JAVA_HOME if needed.
+JVM_SEARCH_DIRS="/usr/lib/jvm/jre /usr/lib/jvm/jre-1.7.* /usr/lib/jvm/java-1.7.*/jre"
+
+# Read configuration variable file if it is present
+[ -r /etc/default/$NAME ] && . /etc/default/$NAME
+
+# If JAVA_HOME has not been set, try to determine it.
+if [ -z "$JAVA_HOME" ]; then
+# If java is in PATH, use a JAVA_HOME that corresponds to that. This is
+# both consistent with how the upstream startup script works, and with
+# the use of alternatives to set a system JVM (as is done on Debian and
+# Red Hat derivatives).
+java="`/usr/bin/which java 2>/dev/null`"
+if [ -n "$java" ]; then
+java=`readlink --canonicalize "$java"`
+JAVA_HOME=`dirname "\`dirname \$java\`"`
+else
+# No JAVA_HOME set and no java found in PATH; search for a JVM.
+for jdir in $JVM_SEARCH_DIRS; do
+if [ -x "$jdir/bin/java" ]; then
+JAVA_HOME="$jdir"
+break
+fi
+done
+# if JAVA_HOME is still empty here, punt.
+fi
+fi
+JAVA="$JAVA_HOME/bin/java"
+export JAVA_HOME JAVA
+
+case "$1" in
+start)
+# Cassandra startup
+echo -n "Starting Cassandra: "
+su $CASSANDRA_OWNR -c "$CASSANDRA_PROG -p $pid_file" > $log_file 2>&1
+retval=$?
+[ $retval -eq 0 ] && touch $lock_file
+echo "OK"
+;;
+stop)
+# Cassandra shutdown
+echo -n "Shutdown Cassandra: "
+su $CASSANDRA_OWNR -c "kill `cat $pid_file`"
+retval=$?
+[ $retval -eq 0 ] && rm -f $lock_file
+for t in `seq 40`; do $0 status > /dev/null 2>&1 && sleep 0.5 || break; done
+# Adding a sleep here to give jmx time to wind down (CASSANDRA-4483). Not ideal...
+# Adam Holmberg suggests this, but that would break if the jmx port is changed
+# 

[01/36] cassandra git commit: Add redhat RPM build directory

2016-12-06 Thread mshuler
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 079029a44 -> 3ec7cb0ca
  refs/heads/cassandra-2.2 a449e8f70 -> 2e93398ce
  refs/heads/cassandra-3.0 0ecef3154 -> dd8244c5e
  refs/heads/cassandra-3.11 bed3def9a -> 78f609a8b
  refs/heads/cassandra-3.X 6b6bc6a36 -> edf54aef9
  refs/heads/trunk 3ea0579d8 -> c582336b8


Add redhat RPM build directory


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3ec7cb0c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3ec7cb0c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3ec7cb0c

Branch: refs/heads/cassandra-2.1
Commit: 3ec7cb0ca2cd8214233a890d4e7275f223a984c6
Parents: 079029a
Author: Michael Shuler 
Authored: Fri Dec 2 21:01:53 2016 -0600
Committer: Michael Shuler 
Committed: Tue Dec 6 17:26:04 2016 -0600

--
 redhat/README.md   |  31 
 redhat/cassandra   |  94 
 redhat/cassandra.conf  |   4 +
 redhat/cassandra.in.sh |  30 
 redhat/cassandra.spec  | 174 
 redhat/default |   7 ++
 6 files changed, 340 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3ec7cb0c/redhat/README.md
--
diff --git a/redhat/README.md b/redhat/README.md
new file mode 100644
index 000..0b2ab0d
--- /dev/null
+++ b/redhat/README.md
@@ -0,0 +1,31 @@
+# Apache Cassandra rpmbuild
+
+### Requirements:
+- The build system needs to have Apache Cassandra `ant artifacts` build dependencies installed.
+- Since Apache Cassandra depends on Python 2.7, the earliest version supported is RHEL/CentOS 7.0.
+
+### Step 1:
+- Build and copy sources to build tree:
+```
+ant artifacts
+cp build/apache-cassandra-*-src.tar.gz $RPM_BUILD_DIR/SOURCES/
+```
+
+### Step 2:
+- Since there is no version specified in the SPEC file, one needs to be passed at `rpmbuild` time (example with 4.0):
+```
+rpmbuild --define="version 4.0" -ba redhat/cassandra.spec
+```
+
+- RPM files can be found in their respective build tree directories:
+```
+ls -l $RPM_BUILD_DIR/{SRPMS,RPMS}/
+```
+
+### Hint:
+- Don't build packages as root..
+```
+# this makes your RPM_BUILD_DIR = ~/rpmbuild
+mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
+echo '%_topdir %(echo $HOME)/rpmbuild' > ~/.rpmmacros
+```

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3ec7cb0c/redhat/cassandra
--
diff --git a/redhat/cassandra b/redhat/cassandra
new file mode 100644
index 000..3e59534
--- /dev/null
+++ b/redhat/cassandra
@@ -0,0 +1,94 @@
+#!/bin/bash
+#
+# /etc/init.d/cassandra
+#
+# Startup script for Cassandra
+# 
+# chkconfig: 2345 20 80
+# description: Starts and stops Cassandra
+
+. /etc/rc.d/init.d/functions
+
+export CASSANDRA_HOME=/usr/share/cassandra
+export CASSANDRA_CONF=/etc/cassandra/conf
+export CASSANDRA_INCLUDE=$CASSANDRA_HOME/cassandra.in.sh
+export CASSANDRA_OWNR=cassandra
+NAME="cassandra"
+log_file=/var/log/cassandra/cassandra.log
+pid_file=/var/run/cassandra/cassandra.pid
+lock_file=/var/lock/subsys/$NAME
+CASSANDRA_PROG=/usr/sbin/cassandra
+
+# The first existing directory is used for JAVA_HOME if needed.
+JVM_SEARCH_DIRS="/usr/lib/jvm/jre /usr/lib/jvm/jre-1.7.* /usr/lib/jvm/java-1.7.*/jre"
+
+# Read configuration variable file if it is present
+[ -r /etc/default/$NAME ] && . /etc/default/$NAME
+
+# If JAVA_HOME has not been set, try to determine it.
+if [ -z "$JAVA_HOME" ]; then
+# If java is in PATH, use a JAVA_HOME that corresponds to that. This is
+# both consistent with how the upstream startup script works, and with
+# the use of alternatives to set a system JVM (as is done on Debian and
+# Red Hat derivatives).
+java="`/usr/bin/which java 2>/dev/null`"
+if [ -n "$java" ]; then
+java=`readlink --canonicalize "$java"`
+JAVA_HOME=`dirname "\`dirname \$java\`"`
+else
+# No JAVA_HOME set and no java found in PATH; search for a JVM.
+for jdir in $JVM_SEARCH_DIRS; do
+if [ -x "$jdir/bin/java" ]; then
+JAVA_HOME="$jdir"
+break
+fi
+done
+# if JAVA_HOME is still empty here, punt.
+fi
+fi
+JAVA="$JAVA_HOME/bin/java"
+export JAVA_HOME JAVA
+
+case "$1" in
+start)
+# Cassandra startup
+echo -n "Starting Cassandra: "
+su $CASSANDRA_OWNR -c "$CASSANDRA_PROG -p $pid_file" > $log_file 2>&1
+retval=$?
+[ $retval -eq 0 ] && touch $lock_file
+echo "OK"
+;;
+stop)
+# Cassandra shutdown
+echo -n "Shutdown Cassandra: "
+su $CASSANDRA_OWNR -c "kill `cat $pid_file`"
+  

[18/36] cassandra git commit: Add redhat RPM build directory

2016-12-06 Thread mshuler
Add redhat RPM build directory


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/829edc92
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/829edc92
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/829edc92

Branch: refs/heads/trunk
Commit: 829edc9248b23ea4a81d07fd106ed68b46ce8a0f
Parents: 3ea0579
Author: Michael Shuler 
Authored: Fri Dec 2 15:17:52 2016 -0600
Committer: Michael Shuler 
Committed: Tue Dec 6 17:28:05 2016 -0600

--
 redhat/README.md   |  31 
 redhat/cassandra   |  94 +++
 redhat/cassandra.conf  |   4 +
 redhat/cassandra.in.sh |  30 
 redhat/cassandra.spec  | 178 
 redhat/default |   7 ++
 6 files changed, 344 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/829edc92/redhat/README.md
--
diff --git a/redhat/README.md b/redhat/README.md
new file mode 100644
index 000..0b2ab0d
--- /dev/null
+++ b/redhat/README.md
@@ -0,0 +1,31 @@
+# Apache Cassandra rpmbuild
+
+### Requirements:
+- The build system needs to have Apache Cassandra `ant artifacts` build dependencies installed.
+- Since Apache Cassandra depends on Python 2.7, the earliest version supported is RHEL/CentOS 7.0.
+
+### Step 1:
+- Build and copy sources to build tree:
+```
+ant artifacts
+cp build/apache-cassandra-*-src.tar.gz $RPM_BUILD_DIR/SOURCES/
+```
+
+### Step 2:
+- Since there is no version specified in the SPEC file, one needs to be passed at `rpmbuild` time (example with 4.0):
+```
+rpmbuild --define="version 4.0" -ba redhat/cassandra.spec
+```
+
+- RPM files can be found in their respective build tree directories:
+```
+ls -l $RPM_BUILD_DIR/{SRPMS,RPMS}/
+```
+
+### Hint:
+- Don't build packages as root..
+```
+# this makes your RPM_BUILD_DIR = ~/rpmbuild
+mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
+echo '%_topdir %(echo $HOME)/rpmbuild' > ~/.rpmmacros
+```

http://git-wip-us.apache.org/repos/asf/cassandra/blob/829edc92/redhat/cassandra
--
diff --git a/redhat/cassandra b/redhat/cassandra
new file mode 100644
index 000..3e59534
--- /dev/null
+++ b/redhat/cassandra
@@ -0,0 +1,94 @@
+#!/bin/bash
+#
+# /etc/init.d/cassandra
+#
+# Startup script for Cassandra
+# 
+# chkconfig: 2345 20 80
+# description: Starts and stops Cassandra
+
+. /etc/rc.d/init.d/functions
+
+export CASSANDRA_HOME=/usr/share/cassandra
+export CASSANDRA_CONF=/etc/cassandra/conf
+export CASSANDRA_INCLUDE=$CASSANDRA_HOME/cassandra.in.sh
+export CASSANDRA_OWNR=cassandra
+NAME="cassandra"
+log_file=/var/log/cassandra/cassandra.log
+pid_file=/var/run/cassandra/cassandra.pid
+lock_file=/var/lock/subsys/$NAME
+CASSANDRA_PROG=/usr/sbin/cassandra
+
+# The first existing directory is used for JAVA_HOME if needed.
+JVM_SEARCH_DIRS="/usr/lib/jvm/jre /usr/lib/jvm/jre-1.7.* /usr/lib/jvm/java-1.7.*/jre"
+
+# Read configuration variable file if it is present
+[ -r /etc/default/$NAME ] && . /etc/default/$NAME
+
+# If JAVA_HOME has not been set, try to determine it.
+if [ -z "$JAVA_HOME" ]; then
+# If java is in PATH, use a JAVA_HOME that corresponds to that. This is
+# both consistent with how the upstream startup script works, and with
+# the use of alternatives to set a system JVM (as is done on Debian and
+# Red Hat derivatives).
+java="`/usr/bin/which java 2>/dev/null`"
+if [ -n "$java" ]; then
+java=`readlink --canonicalize "$java"`
+JAVA_HOME=`dirname "\`dirname \$java\`"`
+else
+# No JAVA_HOME set and no java found in PATH; search for a JVM.
+for jdir in $JVM_SEARCH_DIRS; do
+if [ -x "$jdir/bin/java" ]; then
+JAVA_HOME="$jdir"
+break
+fi
+done
+# if JAVA_HOME is still empty here, punt.
+fi
+fi
+JAVA="$JAVA_HOME/bin/java"
+export JAVA_HOME JAVA
+
+case "$1" in
+start)
+# Cassandra startup
+echo -n "Starting Cassandra: "
+su $CASSANDRA_OWNR -c "$CASSANDRA_PROG -p $pid_file" > $log_file 2>&1
+retval=$?
+[ $retval -eq 0 ] && touch $lock_file
+echo "OK"
+;;
+stop)
+# Cassandra shutdown
+echo -n "Shutdown Cassandra: "
+su $CASSANDRA_OWNR -c "kill `cat $pid_file`"
+retval=$?
+[ $retval -eq 0 ] && rm -f $lock_file
+for t in `seq 40`; do $0 status > /dev/null 2>&1 && sleep 0.5 || break; done
+# Adding a sleep here to give jmx time to wind down (CASSANDRA-4483). Not ideal...
+# Adam Holmberg suggests this, but that would break if the jmx port is changed
+# 

[13/36] cassandra git commit: Add redhat RPM build directory

2016-12-06 Thread mshuler
Add redhat RPM build directory


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e7b9c6ae
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e7b9c6ae
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e7b9c6ae

Branch: refs/heads/cassandra-3.11
Commit: e7b9c6aebd205af18aee1c705435c04b6f83b893
Parents: 0ecef31
Author: Michael Shuler 
Authored: Fri Dec 2 17:08:01 2016 -0600
Committer: Michael Shuler 
Committed: Tue Dec 6 17:27:03 2016 -0600

--
 redhat/README.md   |  31 
 redhat/cassandra   |  94 +++
 redhat/cassandra.conf  |   4 +
 redhat/cassandra.in.sh |  30 
 redhat/cassandra.spec  | 177 
 redhat/default |   7 ++
 6 files changed, 343 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e7b9c6ae/redhat/README.md
--
diff --git a/redhat/README.md b/redhat/README.md
new file mode 100644
index 000..0b2ab0d
--- /dev/null
+++ b/redhat/README.md
@@ -0,0 +1,31 @@
+# Apache Cassandra rpmbuild
+
+### Requirements:
+- The build system needs to have Apache Cassandra `ant artifacts` build dependencies installed.
+- Since Apache Cassandra depends on Python 2.7, the earliest version supported is RHEL/CentOS 7.0.
+
+### Step 1:
+- Build and copy sources to build tree:
+```
+ant artifacts
+cp build/apache-cassandra-*-src.tar.gz $RPM_BUILD_DIR/SOURCES/
+```
+
+### Step 2:
+- Since there is no version specified in the SPEC file, one needs to be passed at `rpmbuild` time (example with 4.0):
+```
+rpmbuild --define="version 4.0" -ba redhat/cassandra.spec
+```
+
+- RPM files can be found in their respective build tree directories:
+```
+ls -l $RPM_BUILD_DIR/{SRPMS,RPMS}/
+```
+
+### Hint:
+- Don't build packages as root..
+```
+# this makes your RPM_BUILD_DIR = ~/rpmbuild
+mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
+echo '%_topdir %(echo $HOME)/rpmbuild' > ~/.rpmmacros
+```

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e7b9c6ae/redhat/cassandra
--
diff --git a/redhat/cassandra b/redhat/cassandra
new file mode 100644
index 000..3e59534
--- /dev/null
+++ b/redhat/cassandra
@@ -0,0 +1,94 @@
+#!/bin/bash
+#
+# /etc/init.d/cassandra
+#
+# Startup script for Cassandra
+# 
+# chkconfig: 2345 20 80
+# description: Starts and stops Cassandra
+
+. /etc/rc.d/init.d/functions
+
+export CASSANDRA_HOME=/usr/share/cassandra
+export CASSANDRA_CONF=/etc/cassandra/conf
+export CASSANDRA_INCLUDE=$CASSANDRA_HOME/cassandra.in.sh
+export CASSANDRA_OWNR=cassandra
+NAME="cassandra"
+log_file=/var/log/cassandra/cassandra.log
+pid_file=/var/run/cassandra/cassandra.pid
+lock_file=/var/lock/subsys/$NAME
+CASSANDRA_PROG=/usr/sbin/cassandra
+
+# The first existing directory is used for JAVA_HOME if needed.
+JVM_SEARCH_DIRS="/usr/lib/jvm/jre /usr/lib/jvm/jre-1.7.* /usr/lib/jvm/java-1.7.*/jre"
+
+# Read configuration variable file if it is present
+[ -r /etc/default/$NAME ] && . /etc/default/$NAME
+
+# If JAVA_HOME has not been set, try to determine it.
+if [ -z "$JAVA_HOME" ]; then
+# If java is in PATH, use a JAVA_HOME that corresponds to that. This is
+# both consistent with how the upstream startup script works, and with
+# the use of alternatives to set a system JVM (as is done on Debian and
+# Red Hat derivatives).
+java="`/usr/bin/which java 2>/dev/null`"
+if [ -n "$java" ]; then
+java=`readlink --canonicalize "$java"`
+JAVA_HOME=`dirname "\`dirname \$java\`"`
+else
+# No JAVA_HOME set and no java found in PATH; search for a JVM.
+for jdir in $JVM_SEARCH_DIRS; do
+if [ -x "$jdir/bin/java" ]; then
+JAVA_HOME="$jdir"
+break
+fi
+done
+# if JAVA_HOME is still empty here, punt.
+fi
+fi
+JAVA="$JAVA_HOME/bin/java"
+export JAVA_HOME JAVA
+
+case "$1" in
+start)
+# Cassandra startup
+echo -n "Starting Cassandra: "
+su $CASSANDRA_OWNR -c "$CASSANDRA_PROG -p $pid_file" > $log_file 2>&1
+retval=$?
+[ $retval -eq 0 ] && touch $lock_file
+echo "OK"
+;;
+stop)
+# Cassandra shutdown
+echo -n "Shutdown Cassandra: "
+su $CASSANDRA_OWNR -c "kill `cat $pid_file`"
+retval=$?
+[ $retval -eq 0 ] && rm -f $lock_file
+for t in `seq 40`; do $0 status > /dev/null 2>&1 && sleep 0.5 || break; done
+# Adding a sleep here to give jmx time to wind down (CASSANDRA-4483). Not ideal...
+# Adam Holmberg suggests this, but that would break if the jmx port is changed
+  

[04/36] cassandra git commit: Add redhat RPM build directory

2016-12-06 Thread mshuler
Add redhat RPM build directory


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3ec7cb0c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3ec7cb0c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3ec7cb0c

Branch: refs/heads/cassandra-3.0
Commit: 3ec7cb0ca2cd8214233a890d4e7275f223a984c6
Parents: 079029a
Author: Michael Shuler 
Authored: Fri Dec 2 21:01:53 2016 -0600
Committer: Michael Shuler 
Committed: Tue Dec 6 17:26:04 2016 -0600

--
 redhat/README.md   |  31 
 redhat/cassandra   |  94 
 redhat/cassandra.conf  |   4 +
 redhat/cassandra.in.sh |  30 
 redhat/cassandra.spec  | 174 
 redhat/default |   7 ++
 6 files changed, 340 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3ec7cb0c/redhat/README.md
--
diff --git a/redhat/README.md b/redhat/README.md
new file mode 100644
index 000..0b2ab0d
--- /dev/null
+++ b/redhat/README.md
@@ -0,0 +1,31 @@
+# Apache Cassandra rpmbuild
+
+### Requirements:
+- The build system needs to have Apache Cassandra `ant artifacts` build dependencies installed.
+- Since Apache Cassandra depends on Python 2.7, the earliest version supported is RHEL/CentOS 7.0.
+
+### Step 1:
+- Build and copy sources to build tree:
+```
+ant artifacts
+cp build/apache-cassandra-*-src.tar.gz $RPM_BUILD_DIR/SOURCES/
+```
+
+### Step 2:
+- Since there is no version specified in the SPEC file, one needs to be passed at `rpmbuild` time (example with 4.0):
+```
+rpmbuild --define="version 4.0" -ba redhat/cassandra.spec
+```
+
+- RPM files can be found in their respective build tree directories:
+```
+ls -l $RPM_BUILD_DIR/{SRPMS,RPMS}/
+```
+
+### Hint:
+- Don't build packages as root..
+```
+# this makes your RPM_BUILD_DIR = ~/rpmbuild
+mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
+echo '%_topdir %(echo $HOME)/rpmbuild' > ~/.rpmmacros
+```

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3ec7cb0c/redhat/cassandra
--
diff --git a/redhat/cassandra b/redhat/cassandra
new file mode 100644
index 000..3e59534
--- /dev/null
+++ b/redhat/cassandra
@@ -0,0 +1,94 @@
+#!/bin/bash
+#
+# /etc/init.d/cassandra
+#
+# Startup script for Cassandra
+# 
+# chkconfig: 2345 20 80
+# description: Starts and stops Cassandra
+
+. /etc/rc.d/init.d/functions
+
+export CASSANDRA_HOME=/usr/share/cassandra
+export CASSANDRA_CONF=/etc/cassandra/conf
+export CASSANDRA_INCLUDE=$CASSANDRA_HOME/cassandra.in.sh
+export CASSANDRA_OWNR=cassandra
+NAME="cassandra"
+log_file=/var/log/cassandra/cassandra.log
+pid_file=/var/run/cassandra/cassandra.pid
+lock_file=/var/lock/subsys/$NAME
+CASSANDRA_PROG=/usr/sbin/cassandra
+
+# The first existing directory is used for JAVA_HOME if needed.
+JVM_SEARCH_DIRS="/usr/lib/jvm/jre /usr/lib/jvm/jre-1.7.* /usr/lib/jvm/java-1.7.*/jre"
+
+# Read configuration variable file if it is present
+[ -r /etc/default/$NAME ] && . /etc/default/$NAME
+
+# If JAVA_HOME has not been set, try to determine it.
+if [ -z "$JAVA_HOME" ]; then
+# If java is in PATH, use a JAVA_HOME that corresponds to that. This is
+# both consistent with how the upstream startup script works, and with
+# the use of alternatives to set a system JVM (as is done on Debian and
+# Red Hat derivatives).
+java="`/usr/bin/which java 2>/dev/null`"
+if [ -n "$java" ]; then
+java=`readlink --canonicalize "$java"`
+JAVA_HOME=`dirname "\`dirname \$java\`"`
+else
+# No JAVA_HOME set and no java found in PATH; search for a JVM.
+for jdir in $JVM_SEARCH_DIRS; do
+if [ -x "$jdir/bin/java" ]; then
+JAVA_HOME="$jdir"
+break
+fi
+done
+# if JAVA_HOME is still empty here, punt.
+fi
+fi
+JAVA="$JAVA_HOME/bin/java"
+export JAVA_HOME JAVA
+
+case "$1" in
+start)
+# Cassandra startup
+echo -n "Starting Cassandra: "
+su $CASSANDRA_OWNR -c "$CASSANDRA_PROG -p $pid_file" > $log_file 2>&1
+retval=$?
+[ $retval -eq 0 ] && touch $lock_file
+echo "OK"
+;;
+stop)
+# Cassandra shutdown
+echo -n "Shutdown Cassandra: "
+su $CASSANDRA_OWNR -c "kill `cat $pid_file`"
+retval=$?
+[ $retval -eq 0 ] && rm -f $lock_file
+for t in `seq 40`; do $0 status > /dev/null 2>&1 && sleep 0.5 || break; done
+# Adding a sleep here to give jmx time to wind down (CASSANDRA-4483). Not ideal...
+# Adam Holmberg suggests this, but that would break if the jmx port is changed
+  

[28/36] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-12-06 Thread mshuler
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dd8244c5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dd8244c5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dd8244c5

Branch: refs/heads/cassandra-3.0
Commit: dd8244c5e0ca0570381d75b82d0c35a3ad167d0f
Parents: e7b9c6a 2e93398
Author: Michael Shuler 
Authored: Tue Dec 6 17:33:48 2016 -0600
Committer: Michael Shuler 
Committed: Tue Dec 6 17:33:48 2016 -0600

--

--




[05/36] cassandra git commit: Add redhat RPM build directory

2016-12-06 Thread mshuler
Add redhat RPM build directory


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3ec7cb0c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3ec7cb0c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3ec7cb0c

Branch: refs/heads/trunk
Commit: 3ec7cb0ca2cd8214233a890d4e7275f223a984c6
Parents: 079029a
Author: Michael Shuler 
Authored: Fri Dec 2 21:01:53 2016 -0600
Committer: Michael Shuler 
Committed: Tue Dec 6 17:26:04 2016 -0600

--
 redhat/README.md   |  31 
 redhat/cassandra   |  94 
 redhat/cassandra.conf  |   4 +
 redhat/cassandra.in.sh |  30 
 redhat/cassandra.spec  | 174 
 redhat/default |   7 ++
 6 files changed, 340 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3ec7cb0c/redhat/README.md
--
diff --git a/redhat/README.md b/redhat/README.md
new file mode 100644
index 000..0b2ab0d
--- /dev/null
+++ b/redhat/README.md
@@ -0,0 +1,31 @@
+# Apache Cassandra rpmbuild
+
+### Requirements:
+- The build system needs to have Apache Cassandra `ant artifacts` build dependencies installed.
+- Since Apache Cassandra depends on Python 2.7, the earliest version supported is RHEL/CentOS 7.0.
+
+### Step 1:
+- Build and copy sources to build tree:
+```
+ant artifacts
+cp build/apache-cassandra-*-src.tar.gz $RPM_BUILD_DIR/SOURCES/
+```
+
+### Step 2:
+- Since there is no version specified in the SPEC file, one needs to be passed at `rpmbuild` time (example with 4.0):
+```
+rpmbuild --define="version 4.0" -ba redhat/cassandra.spec
+```
+
+- RPM files can be found in their respective build tree directories:
+```
+ls -l $RPM_BUILD_DIR/{SRPMS,RPMS}/
+```
+
+### Hint:
+- Don't build packages as root..
+```
+# this makes your RPM_BUILD_DIR = ~/rpmbuild
+mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
+echo '%_topdir %(echo $HOME)/rpmbuild' > ~/.rpmmacros
+```

http://git-wip-us.apache.org/repos/asf/cassandra/blob/3ec7cb0c/redhat/cassandra
--
diff --git a/redhat/cassandra b/redhat/cassandra
new file mode 100644
index 000..3e59534
--- /dev/null
+++ b/redhat/cassandra
@@ -0,0 +1,94 @@
+#!/bin/bash
+#
+# /etc/init.d/cassandra
+#
+# Startup script for Cassandra
+# 
+# chkconfig: 2345 20 80
+# description: Starts and stops Cassandra
+
+. /etc/rc.d/init.d/functions
+
+export CASSANDRA_HOME=/usr/share/cassandra
+export CASSANDRA_CONF=/etc/cassandra/conf
+export CASSANDRA_INCLUDE=$CASSANDRA_HOME/cassandra.in.sh
+export CASSANDRA_OWNR=cassandra
+NAME="cassandra"
+log_file=/var/log/cassandra/cassandra.log
+pid_file=/var/run/cassandra/cassandra.pid
+lock_file=/var/lock/subsys/$NAME
+CASSANDRA_PROG=/usr/sbin/cassandra
+
+# The first existing directory is used for JAVA_HOME if needed.
+JVM_SEARCH_DIRS="/usr/lib/jvm/jre /usr/lib/jvm/jre-1.7.* /usr/lib/jvm/java-1.7.*/jre"
+
+# Read configuration variable file if it is present
+[ -r /etc/default/$NAME ] && . /etc/default/$NAME
+
+# If JAVA_HOME has not been set, try to determine it.
+if [ -z "$JAVA_HOME" ]; then
+    # If java is in PATH, use a JAVA_HOME that corresponds to that. This is
+    # both consistent with how the upstream startup script works, and with
+    # the use of alternatives to set a system JVM (as is done on Debian and
+    # Red Hat derivatives).
+    java="`/usr/bin/which java 2>/dev/null`"
+    if [ -n "$java" ]; then
+        java=`readlink --canonicalize "$java"`
+        JAVA_HOME=`dirname "\`dirname \$java\`"`
+    else
+        # No JAVA_HOME set and no java found in PATH; search for a JVM.
+        for jdir in $JVM_SEARCH_DIRS; do
+            if [ -x "$jdir/bin/java" ]; then
+                JAVA_HOME="$jdir"
+                break
+            fi
+        done
+        # if JAVA_HOME is still empty here, punt.
+    fi
+fi
+JAVA="$JAVA_HOME/bin/java"
+export JAVA_HOME JAVA
+
+case "$1" in
+    start)
+        # Cassandra startup
+        echo -n "Starting Cassandra: "
+        su $CASSANDRA_OWNR -c "$CASSANDRA_PROG -p $pid_file" > $log_file 2>&1
+        retval=$?
+        [ $retval -eq 0 ] && touch $lock_file
+        echo "OK"
+        ;;
+    stop)
+        # Cassandra shutdown
+        echo -n "Shutdown Cassandra: "
+        su $CASSANDRA_OWNR -c "kill `cat $pid_file`"
+        retval=$?
+        [ $retval -eq 0 ] && rm -f $lock_file
+        for t in `seq 40`; do $0 status > /dev/null 2>&1 && sleep 0.5 || break; done
+        # Adding a sleep here to give jmx time to wind down (CASSANDRA-4483). Not ideal...
+        # Adam Holmberg suggests this, but that would break if the jmx port is changed
+# 

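The JAVA_HOME derivation in the init script above takes the resolved `java` binary path and strips two path components (`bin/java`). A minimal stand-alone sketch of that step; the sample path is an assumption for illustration only, not taken from the patch:

```shell
#!/bin/sh
# Sketch of the init script's JAVA_HOME derivation: given a resolved
# `java` binary path, JAVA_HOME is the directory two levels up.
java_path="/usr/lib/jvm/jre/bin/java"   # assumed path, for illustration only
JAVA_HOME=$(dirname "$(dirname "$java_path")")
echo "$JAVA_HOME"
```

For `/usr/lib/jvm/jre/bin/java` this yields `/usr/lib/jvm/jre`, which is why the script later invokes `$JAVA_HOME/bin/java`.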
[02/36] cassandra git commit: Add redhat RPM build directory

2016-12-06 Thread mshuler
Add redhat RPM build directory


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3ec7cb0c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3ec7cb0c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3ec7cb0c

Branch: refs/heads/cassandra-3.X
Commit: 3ec7cb0ca2cd8214233a890d4e7275f223a984c6
Parents: 079029a
Author: Michael Shuler 
Authored: Fri Dec 2 21:01:53 2016 -0600
Committer: Michael Shuler 
Committed: Tue Dec 6 17:26:04 2016 -0600

--
 redhat/README.md   |  31 
 redhat/cassandra   |  94 
 redhat/cassandra.conf  |   4 +
 redhat/cassandra.in.sh |  30 
 redhat/cassandra.spec  | 174 
 redhat/default |   7 ++
 6 files changed, 340 insertions(+)
--



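The bounded shutdown wait in the init script (`for t in \`seq 40\`; do $0 status ... || break; done`) polls the service status up to 40 times with a short sleep between polls. A stand-alone sketch of that pattern, with a stubbed status check standing in for `$0 status` (the stub and its "3 polls" behavior are assumptions, purely for illustration):

```shell
#!/bin/sh
# Stand-alone sketch of the init script's bounded shutdown wait.
# status_check is a stub standing in for `$0 status`; here it reports
# "still running" for the first 3 polls, then "stopped".
polls=0
status_check() {
    [ "$polls" -lt 3 ]      # succeeds (service "running") while polls < 3
}
for t in $(seq 40); do
    status_check || break   # stop polling once the service is down
    polls=$((polls + 1))
    sleep 0.1               # the real script sleeps 0.5s between polls
done
echo "polled $polls times before shutdown was observed"
```

The `seq 40` bound keeps the init script from hanging forever if the JVM never exits, while the per-iteration sleep gives JMX time to wind down (CASSANDRA-4483).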
[08/36] cassandra git commit: Add redhat RPM build directory

2016-12-06 Thread mshuler
Add redhat RPM build directory


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6d770c4f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6d770c4f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6d770c4f

Branch: refs/heads/cassandra-2.2
Commit: 6d770c4ff56fbdd195bc4f89e64cb69df5b3cb7c
Parents: a449e8f
Author: Michael Shuler 
Authored: Fri Dec 2 18:15:33 2016 -0600
Committer: Michael Shuler 
Committed: Tue Dec 6 17:26:34 2016 -0600

--
 redhat/README.md   |  31 
 redhat/cassandra   |  94 +++
 redhat/cassandra.conf  |   4 +
 redhat/cassandra.in.sh |  30 
 redhat/cassandra.spec  | 176 
 redhat/default |   7 ++
 6 files changed, 342 insertions(+)
--



[34/36] cassandra git commit: Merge branch 'cassandra-3.11' into cassandra-3.X

2016-12-06 Thread mshuler
Merge branch 'cassandra-3.11' into cassandra-3.X


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/edf54aef
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/edf54aef
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/edf54aef

Branch: refs/heads/cassandra-3.X
Commit: edf54aef93182e16f0feb8e7584661ffd7dd3e67
Parents: 034b698 78f609a
Author: Michael Shuler 
Authored: Tue Dec 6 17:34:25 2016 -0600
Committer: Michael Shuler 
Committed: Tue Dec 6 17:34:25 2016 -0600

--

--




[15/36] cassandra git commit: Add redhat RPM build directory

2016-12-06 Thread mshuler
Add redhat RPM build directory


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e7b9c6ae
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e7b9c6ae
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e7b9c6ae

Branch: refs/heads/trunk
Commit: e7b9c6aebd205af18aee1c705435c04b6f83b893
Parents: 0ecef31
Author: Michael Shuler 
Authored: Fri Dec 2 17:08:01 2016 -0600
Committer: Michael Shuler 
Committed: Tue Dec 6 17:27:03 2016 -0600

--
 redhat/README.md   |  31 
 redhat/cassandra   |  94 +++
 redhat/cassandra.conf  |   4 +
 redhat/cassandra.in.sh |  30 
 redhat/cassandra.spec  | 177 
 redhat/default |   7 ++
 6 files changed, 343 insertions(+)
--



[10/36] cassandra git commit: Add redhat RPM build directory

2016-12-06 Thread mshuler
Add redhat RPM build directory


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6d770c4f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6d770c4f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6d770c4f

Branch: refs/heads/cassandra-3.0
Commit: 6d770c4ff56fbdd195bc4f89e64cb69df5b3cb7c
Parents: a449e8f
Author: Michael Shuler 
Authored: Fri Dec 2 18:15:33 2016 -0600
Committer: Michael Shuler 
Committed: Tue Dec 6 17:26:34 2016 -0600

--
 redhat/README.md   |  31 
 redhat/cassandra   |  94 +++
 redhat/cassandra.conf  |   4 +
 redhat/cassandra.in.sh |  30 
 redhat/cassandra.spec  | 176 
 redhat/default |   7 ++
 6 files changed, 342 insertions(+)
--



[35/36] cassandra git commit: Merge branch 'cassandra-3.11' into cassandra-3.X

2016-12-06 Thread mshuler
Merge branch 'cassandra-3.11' into cassandra-3.X


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/edf54aef
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/edf54aef
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/edf54aef

Branch: refs/heads/trunk
Commit: edf54aef93182e16f0feb8e7584661ffd7dd3e67
Parents: 034b698 78f609a
Author: Michael Shuler 
Authored: Tue Dec 6 17:34:25 2016 -0600
Committer: Michael Shuler 
Committed: Tue Dec 6 17:34:25 2016 -0600

--

--




[07/36] cassandra git commit: Add redhat RPM build directory

2016-12-06 Thread mshuler
Add redhat RPM build directory


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6d770c4f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6d770c4f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6d770c4f

Branch: refs/heads/cassandra-3.11
Commit: 6d770c4ff56fbdd195bc4f89e64cb69df5b3cb7c
Parents: a449e8f
Author: Michael Shuler 
Authored: Fri Dec 2 18:15:33 2016 -0600
Committer: Michael Shuler 
Committed: Tue Dec 6 17:26:34 2016 -0600

--
 redhat/README.md   |  31 
 redhat/cassandra   |  94 +++
 redhat/cassandra.conf  |   4 +
 redhat/cassandra.in.sh |  30 
 redhat/cassandra.spec  | 176 
 redhat/default |   7 ++
 6 files changed, 342 insertions(+)
--



[29/36] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-12-06 Thread mshuler
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dd8244c5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dd8244c5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dd8244c5

Branch: refs/heads/cassandra-3.11
Commit: dd8244c5e0ca0570381d75b82d0c35a3ad167d0f
Parents: e7b9c6a 2e93398
Author: Michael Shuler 
Authored: Tue Dec 6 17:33:48 2016 -0600
Committer: Michael Shuler 
Committed: Tue Dec 6 17:33:48 2016 -0600

--

--




[17/36] cassandra git commit: Add redhat RPM build directory

2016-12-06 Thread mshuler
Add redhat RPM build directory


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/034b6986
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/034b6986
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/034b6986

Branch: refs/heads/trunk
Commit: 034b69866995b18b58cac5ef8f458394ea5bc911
Parents: 6b6bc6a
Author: Michael Shuler 
Authored: Fri Dec 2 15:30:44 2016 -0600
Committer: Michael Shuler 
Committed: Tue Dec 6 17:27:28 2016 -0600

--
 redhat/README.md   |  31 
 redhat/cassandra   |  94 +++
 redhat/cassandra.conf  |   4 +
 redhat/cassandra.in.sh |  30 
 redhat/cassandra.spec  | 178 
 redhat/default |   7 ++
 6 files changed, 344 insertions(+)
--



[12/36] cassandra git commit: Add redhat RPM build directory

2016-12-06 Thread mshuler
Add redhat RPM build directory


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e7b9c6ae
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e7b9c6ae
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e7b9c6ae

Branch: refs/heads/cassandra-3.0
Commit: e7b9c6aebd205af18aee1c705435c04b6f83b893
Parents: 0ecef31
Author: Michael Shuler 
Authored: Fri Dec 2 17:08:01 2016 -0600
Committer: Michael Shuler 
Committed: Tue Dec 6 17:27:03 2016 -0600

--
 redhat/README.md   |  31 
 redhat/cassandra   |  94 +++
 redhat/cassandra.conf  |   4 +
 redhat/cassandra.in.sh |  30 
 redhat/cassandra.spec  | 177 
 redhat/default |   7 ++
 6 files changed, 343 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e7b9c6ae/redhat/README.md
--
diff --git a/redhat/README.md b/redhat/README.md
new file mode 100644
index 000..0b2ab0d
--- /dev/null
+++ b/redhat/README.md
@@ -0,0 +1,31 @@
+# Apache Cassandra rpmbuild
+
+### Requirements:
+- The build system needs to have Apache Cassandra `ant artifacts` build 
dependencies installed.
+- Since Apache Cassandra depends on Python 2.7, the earliest version supported 
is RHEL/CentOS 7.0.
+
+### Step 1:
+- Build and copy sources to build tree:
+```
+ant artifacts
+cp build/apache-cassandra-*-src.tar.gz $RPM_BUILD_DIR/SOURCES/
+```
+
+### Step 2:
+- Since there is no version specified in the SPEC file, one needs to be passed 
at `rpmbuild` time (example with 4.0):
+```
+rpmbuild --define="version 4.0" -ba redhat/cassandra.spec
+```
+
+- RPM files can be found in their respective build tree directories:
+```
+ls -l $RPM_BUILD_DIR/{SRPMS,RPMS}/
+```
+
+### Hint:
+- Don't build packages as root..
+```
+# this makes your RPM_BUILD_DIR = ~/rpmbuild
+mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
+echo '%_topdir %(echo $HOME)/rpmbuild' > ~/.rpmmacros
+```

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e7b9c6ae/redhat/cassandra
--
diff --git a/redhat/cassandra b/redhat/cassandra
new file mode 100644
index 000..3e59534
--- /dev/null
+++ b/redhat/cassandra
@@ -0,0 +1,94 @@
+#!/bin/bash
+#
+# /etc/init.d/cassandra
+#
+# Startup script for Cassandra
+# 
+# chkconfig: 2345 20 80
+# description: Starts and stops Cassandra
+
+. /etc/rc.d/init.d/functions
+
+export CASSANDRA_HOME=/usr/share/cassandra
+export CASSANDRA_CONF=/etc/cassandra/conf
+export CASSANDRA_INCLUDE=$CASSANDRA_HOME/cassandra.in.sh
+export CASSANDRA_OWNR=cassandra
+NAME="cassandra"
+log_file=/var/log/cassandra/cassandra.log
+pid_file=/var/run/cassandra/cassandra.pid
+lock_file=/var/lock/subsys/$NAME
+CASSANDRA_PROG=/usr/sbin/cassandra
+
+# The first existing directory is used for JAVA_HOME if needed.
+JVM_SEARCH_DIRS="/usr/lib/jvm/jre /usr/lib/jvm/jre-1.7.* /usr/lib/jvm/java-1.7.*/jre"
+
+# Read configuration variable file if it is present
+[ -r /etc/default/$NAME ] && . /etc/default/$NAME
+
+# If JAVA_HOME has not been set, try to determine it.
+if [ -z "$JAVA_HOME" ]; then
+# If java is in PATH, use a JAVA_HOME that corresponds to that. This is
+# both consistent with how the upstream startup script works, and with
+# the use of alternatives to set a system JVM (as is done on Debian and
+# Red Hat derivatives).
+java="`/usr/bin/which java 2>/dev/null`"
+if [ -n "$java" ]; then
+java=`readlink --canonicalize "$java"`
+JAVA_HOME=`dirname "\`dirname \$java\`"`
+else
+# No JAVA_HOME set and no java found in PATH; search for a JVM.
+for jdir in $JVM_SEARCH_DIRS; do
+if [ -x "$jdir/bin/java" ]; then
+JAVA_HOME="$jdir"
+break
+fi
+done
+# if JAVA_HOME is still empty here, punt.
+fi
+fi
+JAVA="$JAVA_HOME/bin/java"
+export JAVA_HOME JAVA
+
+case "$1" in
+start)
+# Cassandra startup
+echo -n "Starting Cassandra: "
+su $CASSANDRA_OWNR -c "$CASSANDRA_PROG -p $pid_file" > $log_file 2>&1
+retval=$?
+[ $retval -eq 0 ] && touch $lock_file
+echo "OK"
+;;
+stop)
+# Cassandra shutdown
+echo -n "Shutdown Cassandra: "
+su $CASSANDRA_OWNR -c "kill `cat $pid_file`"
+retval=$?
+[ $retval -eq 0 ] && rm -f $lock_file
+for t in `seq 40`; do $0 status > /dev/null 2>&1 && sleep 0.5 || break; done
+# Adding a sleep here to give jmx time to wind down (CASSANDRA-4483). Not ideal...
+# Adam Holmberg suggests this, but that would break if the jmx port is changed

[36/36] cassandra git commit: Merge branch 'cassandra-3.X' into trunk

2016-12-06 Thread mshuler
Merge branch 'cassandra-3.X' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c582336b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c582336b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c582336b

Branch: refs/heads/trunk
Commit: c582336b815aac63c3c1c5e81508248b5f38e432
Parents: 829edc9 edf54ae
Author: Michael Shuler 
Authored: Tue Dec 6 17:34:36 2016 -0600
Committer: Michael Shuler 
Committed: Tue Dec 6 17:34:36 2016 -0600

--

--




[31/36] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11

2016-12-06 Thread mshuler
Merge branch 'cassandra-3.0' into cassandra-3.11


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/78f609a8
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/78f609a8
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/78f609a8

Branch: refs/heads/cassandra-3.X
Commit: 78f609a8bd227788a03415aa7b4b48aa87775567
Parents: bad1980 dd8244c
Author: Michael Shuler 
Authored: Tue Dec 6 17:34:08 2016 -0600
Committer: Michael Shuler 
Committed: Tue Dec 6 17:34:08 2016 -0600

--

--




[30/36] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0

2016-12-06 Thread mshuler
Merge branch 'cassandra-2.2' into cassandra-3.0


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dd8244c5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dd8244c5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dd8244c5

Branch: refs/heads/cassandra-3.X
Commit: dd8244c5e0ca0570381d75b82d0c35a3ad167d0f
Parents: e7b9c6a 2e93398
Author: Michael Shuler 
Authored: Tue Dec 6 17:33:48 2016 -0600
Committer: Michael Shuler 
Committed: Tue Dec 6 17:33:48 2016 -0600

--

--




[22/36] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2

2016-12-06 Thread mshuler
Merge branch 'cassandra-2.1' into cassandra-2.2


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2e93398c
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2e93398c
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2e93398c

Branch: refs/heads/cassandra-3.0
Commit: 2e93398ce715268d6ccefc8589a2d4435e0302aa
Parents: 6d770c4 3ec7cb0
Author: Michael Shuler 
Authored: Tue Dec 6 17:33:14 2016 -0600
Committer: Michael Shuler 
Committed: Tue Dec 6 17:33:14 2016 -0600

--

--




[09/36] cassandra git commit: Add redhat RPM build directory

2016-12-06 Thread mshuler
Add redhat RPM build directory


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6d770c4f
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6d770c4f
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6d770c4f

Branch: refs/heads/cassandra-3.X
Commit: 6d770c4ff56fbdd195bc4f89e64cb69df5b3cb7c
Parents: a449e8f
Author: Michael Shuler 
Authored: Fri Dec 2 18:15:33 2016 -0600
Committer: Michael Shuler 
Committed: Tue Dec 6 17:26:34 2016 -0600

--
 redhat/README.md   |  31 
 redhat/cassandra   |  94 +++
 redhat/cassandra.conf  |   4 +
 redhat/cassandra.in.sh |  30 
 redhat/cassandra.spec  | 176 
 redhat/default |   7 ++
 6 files changed, 342 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6d770c4f/redhat/README.md
--
diff --git a/redhat/README.md b/redhat/README.md
new file mode 100644
index 000..0b2ab0d
--- /dev/null
+++ b/redhat/README.md
@@ -0,0 +1,31 @@
+# Apache Cassandra rpmbuild
+
+### Requirements:
+- The build system needs to have Apache Cassandra `ant artifacts` build dependencies installed.
+- Since Apache Cassandra depends on Python 2.7, the earliest version supported is RHEL/CentOS 7.0.
+
+### Step 1:
+- Build and copy sources to build tree:
+```
+ant artifacts
+cp build/apache-cassandra-*-src.tar.gz $RPM_BUILD_DIR/SOURCES/
+```
+
+### Step 2:
+- Since there is no version specified in the SPEC file, one needs to be passed at `rpmbuild` time (example with 4.0):
+```
+rpmbuild --define="version 4.0" -ba redhat/cassandra.spec
+```
+
+- RPM files can be found in their respective build tree directories:
+```
+ls -l $RPM_BUILD_DIR/{SRPMS,RPMS}/
+```
+
+### Hint:
+- Don't build packages as root.
+```
+# this makes your RPM_BUILD_DIR = ~/rpmbuild
+mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
+echo '%_topdir %(echo $HOME)/rpmbuild' > ~/.rpmmacros
+```

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6d770c4f/redhat/cassandra
--
diff --git a/redhat/cassandra b/redhat/cassandra
new file mode 100644
index 000..3e59534
--- /dev/null
+++ b/redhat/cassandra
@@ -0,0 +1,94 @@
+#!/bin/bash
+#
+# /etc/init.d/cassandra
+#
+# Startup script for Cassandra
+# 
+# chkconfig: 2345 20 80
+# description: Starts and stops Cassandra
+
+. /etc/rc.d/init.d/functions
+
+export CASSANDRA_HOME=/usr/share/cassandra
+export CASSANDRA_CONF=/etc/cassandra/conf
+export CASSANDRA_INCLUDE=$CASSANDRA_HOME/cassandra.in.sh
+export CASSANDRA_OWNR=cassandra
+NAME="cassandra"
+log_file=/var/log/cassandra/cassandra.log
+pid_file=/var/run/cassandra/cassandra.pid
+lock_file=/var/lock/subsys/$NAME
+CASSANDRA_PROG=/usr/sbin/cassandra
+
+# The first existing directory is used for JAVA_HOME if needed.
+JVM_SEARCH_DIRS="/usr/lib/jvm/jre /usr/lib/jvm/jre-1.7.* /usr/lib/jvm/java-1.7.*/jre"
+
+# Read configuration variable file if it is present
+[ -r /etc/default/$NAME ] && . /etc/default/$NAME
+
+# If JAVA_HOME has not been set, try to determine it.
+if [ -z "$JAVA_HOME" ]; then
+# If java is in PATH, use a JAVA_HOME that corresponds to that. This is
+# both consistent with how the upstream startup script works, and with
+# the use of alternatives to set a system JVM (as is done on Debian and
+# Red Hat derivatives).
+java="`/usr/bin/which java 2>/dev/null`"
+if [ -n "$java" ]; then
+java=`readlink --canonicalize "$java"`
+JAVA_HOME=`dirname "\`dirname \$java\`"`
+else
+# No JAVA_HOME set and no java found in PATH; search for a JVM.
+for jdir in $JVM_SEARCH_DIRS; do
+if [ -x "$jdir/bin/java" ]; then
+JAVA_HOME="$jdir"
+break
+fi
+done
+# if JAVA_HOME is still empty here, punt.
+fi
+fi
+JAVA="$JAVA_HOME/bin/java"
+export JAVA_HOME JAVA
+
+case "$1" in
+start)
+# Cassandra startup
+echo -n "Starting Cassandra: "
+su $CASSANDRA_OWNR -c "$CASSANDRA_PROG -p $pid_file" > $log_file 2>&1
+retval=$?
+[ $retval -eq 0 ] && touch $lock_file
+echo "OK"
+;;
+stop)
+# Cassandra shutdown
+echo -n "Shutdown Cassandra: "
+su $CASSANDRA_OWNR -c "kill `cat $pid_file`"
+retval=$?
+[ $retval -eq 0 ] && rm -f $lock_file
+for t in `seq 40`; do $0 status > /dev/null 2>&1 && sleep 0.5 || break; done
+# Adding a sleep here to give jmx time to wind down (CASSANDRA-4483). Not ideal...
+# Adam Holmberg suggests this, but that would break if the jmx port is changed

[20/36] cassandra git commit: Add redhat RPM build directory

2016-12-06 Thread mshuler
Add redhat RPM build directory


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bad19807
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bad19807
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bad19807

Branch: refs/heads/cassandra-3.11
Commit: bad1980739a566189ba65cf9618cf5a2f0c157f4
Parents: bed3def
Author: Michael Shuler 
Authored: Tue Dec 6 17:32:07 2016 -0600
Committer: Michael Shuler 
Committed: Tue Dec 6 17:32:07 2016 -0600

--
 redhat/README.md   |  31 
 redhat/cassandra   |  94 +++
 redhat/cassandra.conf  |   4 +
 redhat/cassandra.in.sh |  30 
 redhat/cassandra.spec  | 178 
 redhat/default |   7 ++
 6 files changed, 344 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/bad19807/redhat/README.md
--
diff --git a/redhat/README.md b/redhat/README.md
new file mode 100644
index 000..0b2ab0d
--- /dev/null
+++ b/redhat/README.md
@@ -0,0 +1,31 @@
+# Apache Cassandra rpmbuild
+
+### Requirements:
+- The build system needs to have Apache Cassandra `ant artifacts` build dependencies installed.
+- Since Apache Cassandra depends on Python 2.7, the earliest version supported is RHEL/CentOS 7.0.
+
+### Step 1:
+- Build and copy sources to build tree:
+```
+ant artifacts
+cp build/apache-cassandra-*-src.tar.gz $RPM_BUILD_DIR/SOURCES/
+```
+
+### Step 2:
+- Since there is no version specified in the SPEC file, one needs to be passed at `rpmbuild` time (example with 4.0):
+```
+rpmbuild --define="version 4.0" -ba redhat/cassandra.spec
+```
+
+- RPM files can be found in their respective build tree directories:
+```
+ls -l $RPM_BUILD_DIR/{SRPMS,RPMS}/
+```
+
+### Hint:
+- Don't build packages as root.
+```
+# this makes your RPM_BUILD_DIR = ~/rpmbuild
+mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
+echo '%_topdir %(echo $HOME)/rpmbuild' > ~/.rpmmacros
+```

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bad19807/redhat/cassandra
--
diff --git a/redhat/cassandra b/redhat/cassandra
new file mode 100644
index 000..3e59534
--- /dev/null
+++ b/redhat/cassandra
@@ -0,0 +1,94 @@
+#!/bin/bash
+#
+# /etc/init.d/cassandra
+#
+# Startup script for Cassandra
+# 
+# chkconfig: 2345 20 80
+# description: Starts and stops Cassandra
+
+. /etc/rc.d/init.d/functions
+
+export CASSANDRA_HOME=/usr/share/cassandra
+export CASSANDRA_CONF=/etc/cassandra/conf
+export CASSANDRA_INCLUDE=$CASSANDRA_HOME/cassandra.in.sh
+export CASSANDRA_OWNR=cassandra
+NAME="cassandra"
+log_file=/var/log/cassandra/cassandra.log
+pid_file=/var/run/cassandra/cassandra.pid
+lock_file=/var/lock/subsys/$NAME
+CASSANDRA_PROG=/usr/sbin/cassandra
+
+# The first existing directory is used for JAVA_HOME if needed.
+JVM_SEARCH_DIRS="/usr/lib/jvm/jre /usr/lib/jvm/jre-1.7.* /usr/lib/jvm/java-1.7.*/jre"
+
+# Read configuration variable file if it is present
+[ -r /etc/default/$NAME ] && . /etc/default/$NAME
+
+# If JAVA_HOME has not been set, try to determine it.
+if [ -z "$JAVA_HOME" ]; then
+# If java is in PATH, use a JAVA_HOME that corresponds to that. This is
+# both consistent with how the upstream startup script works, and with
+# the use of alternatives to set a system JVM (as is done on Debian and
+# Red Hat derivatives).
+java="`/usr/bin/which java 2>/dev/null`"
+if [ -n "$java" ]; then
+java=`readlink --canonicalize "$java"`
+JAVA_HOME=`dirname "\`dirname \$java\`"`
+else
+# No JAVA_HOME set and no java found in PATH; search for a JVM.
+for jdir in $JVM_SEARCH_DIRS; do
+if [ -x "$jdir/bin/java" ]; then
+JAVA_HOME="$jdir"
+break
+fi
+done
+# if JAVA_HOME is still empty here, punt.
+fi
+fi
+JAVA="$JAVA_HOME/bin/java"
+export JAVA_HOME JAVA
+
+case "$1" in
+start)
+# Cassandra startup
+echo -n "Starting Cassandra: "
+su $CASSANDRA_OWNR -c "$CASSANDRA_PROG -p $pid_file" > $log_file 2>&1
+retval=$?
+[ $retval -eq 0 ] && touch $lock_file
+echo "OK"
+;;
+stop)
+# Cassandra shutdown
+echo -n "Shutdown Cassandra: "
+su $CASSANDRA_OWNR -c "kill `cat $pid_file`"
+retval=$?
+[ $retval -eq 0 ] && rm -f $lock_file
+for t in `seq 40`; do $0 status > /dev/null 2>&1 && sleep 0.5 || break; done
+# Adding a sleep here to give jmx time to wind down (CASSANDRA-4483). Not ideal...
+# Adam Holmberg suggests this, but that would break if the jmx port is changed

[jira] [Updated] (CASSANDRA-8398) Expose time spent waiting in thread pool queue

2016-12-06 Thread Dikang Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dikang Gu updated CASSANDRA-8398:
-
Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

committed as {{6b6bc6a36c623b8074f0fb27c656b8c26c27cd7e}}

> Expose time spent waiting in thread pool queue 
> ---
>
> Key: CASSANDRA-8398
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8398
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: T Jake Luciani
>Assignee: Dikang Gu
>Priority: Minor
>  Labels: lhf
> Fix For: 3.12
>
> Attachments: tpstats.png
>
>
> We are missing an important source of latency in our system, the time waiting 
> to be processed by thread pools.  We should add a metric for this so someone 
> can easily see how much time is spent just waiting to be processed.
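The metric the ticket asks for boils down to stamping each task with its enqueue time and recording the delta when the task finally runs. A minimal standalone sketch of that idea, with a plain `LongAdder` standing in for the codahale `Timer` the actual patch uses (class and method names here are illustrative, not Cassandra's internals):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Illustrative sketch of per-verb queue-wait accounting (CASSANDRA-8398 idea).
public class QueueWaitSketch
{
    static final ConcurrentHashMap<String, LongAdder> waitMillis = new ConcurrentHashMap<>();

    static void addQueueWaitTime(String verb, long timeTaken)
    {
        if (timeTaken < 0)
            return; // approximate clocks can go backwards; drop bogus samples
        waitMillis.computeIfAbsent(verb, k -> new LongAdder()).add(timeTaken);
    }

    public static void main(String[] args) throws Exception
    {
        long enqueueTime = System.currentTimeMillis(); // stamped when the task is created
        Thread.sleep(20);                              // task sits in the "queue"
        addQueueWaitTime("MUTATION", System.currentTimeMillis() - enqueueTime);
        addQueueWaitTime("MUTATION", -5);              // negative sample is ignored
        System.out.println(waitMillis.get("MUTATION").sum() >= 10);
    }
}
```

The real patch does the stamping in `MessageDeliveryTask`'s constructor and records into a per-verb `Timer` named `<verb>-WaitLatency`.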



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12768) CQL often queries static columns unnecessarily

2016-12-06 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15727047#comment-15727047
 ] 

Tyler Hobbs commented on CASSANDRA-12768:
-

The latest commits and test runs look good to me, so +1 on committing those.

However, I do still want to figure out what's up with the behavior in {{returnStaticContentOnPartitionWithNoRows()}} in 3.x vs 3.0. [~blerer] can you open a new ticket for that if needed, and if not, explain why here?

> CQL often queries static columns unnecessarily
> --
>
> Key: CASSANDRA-12768
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12768
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sylvain Lebresne
>Assignee: Sylvain Lebresne
> Fix For: 3.0.x, 3.x
>
>
> While looking at CASSANDRA-12694 (which isn't directly related, but some of 
> the results in this ticket are explained by this), I realized that CQL was 
> always querying static columns even in cases where this is unnecessary.
> More precisely, for reasons long described elsewhere, we have to query all 
> the columns for a row (we have optimizations, see CASSANDRA-10657, but they 
> don't change that general fact) to be able to distinguish between the case 
> where a row doesn't exist from when it exists but has no values for the 
> columns selected by the query. *However*, this really only extends to 
> "regular" columns (static columns play no role in deciding whether a 
> particular row exists or not), but the implementation in 3.x, which is in 
> {{ColumnFilter}}, still always queries all static columns.
> We shouldn't do that, and it's arguably a performance regression from 2.x, 
> which is why I'm tentatively marking this a bug for the 3.0 line. It's a 
> tiny bit scary for 3.0 though, so I'm really asking for other opinions and 
> I'd be happy to stick to 3.x.
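The existence rule the description relies on can be shown with a toy model: a row "exists" iff it has at least one live regular cell, so static cells never need to be fetched for that decision. All types and names below are made up for illustration, not Cassandra's internals:

```java
import java.util.List;

// Toy model of the row-existence rule: statics never make a row exist.
public class StaticExistenceSketch
{
    record Cell(String column, boolean isStatic, Object value) {}

    static boolean rowExists(List<Cell> cells)
    {
        // Only live regular cells count toward existence.
        return cells.stream().anyMatch(c -> !c.isStatic() && c.value() != null);
    }

    public static void main(String[] args)
    {
        List<Cell> onlyStatic = List.of(new Cell("s", true, "static-value"));
        List<Cell> withRegular = List.of(new Cell("s", true, "static-value"),
                                         new Cell("r", false, 42));
        System.out.println(rowExists(onlyStatic));  // false: statics don't count
        System.out.println(rowExists(withRegular)); // true: has a regular cell
    }
}
```

Since statics are irrelevant to this check, a column filter only needs to include them when the query actually selects them.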



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-8398) Expose time spent waiting in thread pool queue

2016-12-06 Thread Dikang Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dikang Gu updated CASSANDRA-8398:
-
Status: Ready to Commit  (was: Patch Available)

> Expose time spent waiting in thread pool queue 
> ---
>
> Key: CASSANDRA-8398
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8398
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: T Jake Luciani
>Assignee: Dikang Gu
>Priority: Minor
>  Labels: lhf
> Fix For: 3.12
>
> Attachments: tpstats.png
>
>
> We are missing an important source of latency in our system, the time waiting 
> to be processed by thread pools.  We should add a metric for this so someone 
> can easily see how much time is spent just waiting to be processed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[1/3] cassandra git commit: Expose time spent waiting in thread pool queue

2016-12-06 Thread dikang
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.X f1423806e -> 6b6bc6a36
  refs/heads/trunk 48591489d -> 3ea0579d8


Expose time spent waiting in thread pool queue

Patch by Dikang Gu; reviewed by T Jake Luciani for CASSANDRA-8398


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6b6bc6a3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6b6bc6a3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6b6bc6a3

Branch: refs/heads/cassandra-3.X
Commit: 6b6bc6a36c623b8074f0fb27c656b8c26c27cd7e
Parents: f142380
Author: Dikang Gu 
Authored: Wed Nov 30 13:00:49 2016 -0800
Committer: Dikang Gu 
Committed: Tue Dec 6 15:02:31 2016 -0800

--
 CHANGES.txt |  1 +
 .../cassandra/metrics/MessagingMetrics.java | 16 ++
 .../cassandra/net/MessageDeliveryTask.java  |  6 
 .../org/apache/cassandra/tools/NodeProbe.java   | 14 +
 .../tools/nodetool/stats/TpStatsHolder.java | 13 
 .../tools/nodetool/stats/TpStatsPrinter.java| 17 +--
 .../cassandra/net/MessagingServiceTest.java | 31 
 7 files changed, 96 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6b6bc6a3/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index bddd823..8b2bed7 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.12
+ * Expose time spent waiting in thread pool queue (CASSANDRA-8398)
 * Conditionally update index built status to avoid unnecessary flushes (CASSANDRA-12969)
 * NoReplicationTokenAllocator should work with zero replication factor (CASSANDRA-12983)
 * cqlsh auto completion: refactor definition of compaction strategy options (CASSANDRA-12946)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6b6bc6a3/src/java/org/apache/cassandra/metrics/MessagingMetrics.java
--
diff --git a/src/java/org/apache/cassandra/metrics/MessagingMetrics.java 
b/src/java/org/apache/cassandra/metrics/MessagingMetrics.java
index e126c93..5f640b9 100644
--- a/src/java/org/apache/cassandra/metrics/MessagingMetrics.java
+++ b/src/java/org/apache/cassandra/metrics/MessagingMetrics.java
@@ -38,11 +38,13 @@ public class MessagingMetrics
private static final MetricNameFactory factory = new DefaultNameFactory("Messaging");
 public final Timer crossNodeLatency;
public final ConcurrentHashMap<String, Timer> dcLatency;
+public final ConcurrentHashMap<String, Timer> queueWaitLatency;
 
 public MessagingMetrics()
 {
crossNodeLatency = Metrics.timer(factory.createMetricName("CrossNodeLatency"));
 dcLatency = new ConcurrentHashMap<>();
+queueWaitLatency = new ConcurrentHashMap<>();
 }
 
 public void addTimeTaken(InetAddress from, long timeTaken)
@@ -56,4 +58,18 @@ public class MessagingMetrics
 timer.update(timeTaken, TimeUnit.MILLISECONDS);
 crossNodeLatency.update(timeTaken, TimeUnit.MILLISECONDS);
 }
+
+public void addQueueWaitTime(String verb, long timeTaken)
+{
+if (timeTaken < 0)
+// the measurement is not accurate, ignore the negative timeTaken
+return;
+
+Timer timer = queueWaitLatency.get(verb);
+if (timer == null)
+{
+timer = queueWaitLatency.computeIfAbsent(verb, k -> Metrics.timer(factory.createMetricName(verb + "-WaitLatency")));
+}
+timer.update(timeTaken, TimeUnit.MILLISECONDS);
+}
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6b6bc6a3/src/java/org/apache/cassandra/net/MessageDeliveryTask.java
--
diff --git a/src/java/org/apache/cassandra/net/MessageDeliveryTask.java 
b/src/java/org/apache/cassandra/net/MessageDeliveryTask.java
index c91e9da..c7fc991 100644
--- a/src/java/org/apache/cassandra/net/MessageDeliveryTask.java
+++ b/src/java/org/apache/cassandra/net/MessageDeliveryTask.java
@@ -24,6 +24,7 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import org.apache.cassandra.db.filter.TombstoneOverwhelmingException;
+import org.apache.cassandra.db.monitoring.ApproximateTime;
 import org.apache.cassandra.exceptions.RequestFailureReason;
 import org.apache.cassandra.gms.Gossiper;
 import org.apache.cassandra.index.IndexNotAvailableException;
@@ -35,17 +36,22 @@ public class MessageDeliveryTask implements Runnable
 
 private final MessageIn message;
 private final int id;
+private final long enqueueTime;
 
 public MessageDeliveryTask(MessageIn message, int id)
 {
 assert message != null;
   

[3/3] cassandra git commit: Merge branch 'cassandra-3.X' into trunk

2016-12-06 Thread dikang
Merge branch 'cassandra-3.X' into trunk

* cassandra-3.X:
  Expose time spent waiting in thread pool queue


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3ea0579d
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3ea0579d
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3ea0579d

Branch: refs/heads/trunk
Commit: 3ea0579d89d9a6297d4f94d97728892b5a953489
Parents: 4859148 6b6bc6a
Author: Dikang Gu 
Authored: Tue Dec 6 15:03:39 2016 -0800
Committer: Dikang Gu 
Committed: Tue Dec 6 15:04:24 2016 -0800

--
 CHANGES.txt |  1 +
 .../cassandra/metrics/MessagingMetrics.java | 16 ++
 .../cassandra/net/MessageDeliveryTask.java  |  6 
 .../org/apache/cassandra/tools/NodeProbe.java   | 14 +
 .../tools/nodetool/stats/TpStatsHolder.java | 13 
 .../tools/nodetool/stats/TpStatsPrinter.java| 17 +--
 .../cassandra/net/MessagingServiceTest.java | 31 
 7 files changed, 96 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/3ea0579d/CHANGES.txt
--
diff --cc CHANGES.txt
index 40407bc,8b2bed7..07177a4
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,13 -1,5 +1,14 @@@
 +4.0
 + * Remove pre-3.0 compatibility code for 4.0 (CASSANDRA-12716)
 + * Add column definition kind to dropped columns in schema (CASSANDRA-12705)
 + * Add (automate) Nodetool Documentation (CASSANDRA-12672)
 + * Update bundled cqlsh python driver to 3.7.0 (CASSANDRA-12736)
 + * Reject invalid replication settings when creating or altering a keyspace (CASSANDRA-12681)
 + * Clean up the SSTableReader#getScanner API wrt removal of RateLimiter (CASSANDRA-12422)
 +
 +
  3.12
+  * Expose time spent waiting in thread pool queue (CASSANDRA-8398)
   * Conditionally update index built status to avoid unnecessary flushes (CASSANDRA-12969)
   * NoReplicationTokenAllocator should work with zero replication factor (CASSANDRA-12983)
   * cqlsh auto completion: refactor definition of compaction strategy options (CASSANDRA-12946)



[2/3] cassandra git commit: Expose time spent waiting in thread pool queue

2016-12-06 Thread dikang
Expose time spent waiting in thread pool queue

Patch by Dikang Gu; reviewed by T Jake Luciani for CASSANDRA-8398


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6b6bc6a3
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6b6bc6a3
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6b6bc6a3

Branch: refs/heads/trunk
Commit: 6b6bc6a36c623b8074f0fb27c656b8c26c27cd7e
Parents: f142380
Author: Dikang Gu 
Authored: Wed Nov 30 13:00:49 2016 -0800
Committer: Dikang Gu 
Committed: Tue Dec 6 15:02:31 2016 -0800

--
 CHANGES.txt |  1 +
 .../cassandra/metrics/MessagingMetrics.java | 16 ++
 .../cassandra/net/MessageDeliveryTask.java  |  6 
 .../org/apache/cassandra/tools/NodeProbe.java   | 14 +
 .../tools/nodetool/stats/TpStatsHolder.java | 13 
 .../tools/nodetool/stats/TpStatsPrinter.java| 17 +--
 .../cassandra/net/MessagingServiceTest.java | 31 
 7 files changed, 96 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/6b6bc6a3/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index bddd823..8b2bed7 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.12
+ * Expose time spent waiting in thread pool queue (CASSANDRA-8398)
 * Conditionally update index built status to avoid unnecessary flushes (CASSANDRA-12969)
 * NoReplicationTokenAllocator should work with zero replication factor (CASSANDRA-12983)
 * cqlsh auto completion: refactor definition of compaction strategy options (CASSANDRA-12946)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6b6bc6a3/src/java/org/apache/cassandra/metrics/MessagingMetrics.java
--
diff --git a/src/java/org/apache/cassandra/metrics/MessagingMetrics.java 
b/src/java/org/apache/cassandra/metrics/MessagingMetrics.java
index e126c93..5f640b9 100644
--- a/src/java/org/apache/cassandra/metrics/MessagingMetrics.java
+++ b/src/java/org/apache/cassandra/metrics/MessagingMetrics.java
@@ -38,11 +38,13 @@ public class MessagingMetrics
private static final MetricNameFactory factory = new DefaultNameFactory("Messaging");
 public final Timer crossNodeLatency;
public final ConcurrentHashMap<String, Timer> dcLatency;
+public final ConcurrentHashMap<String, Timer> queueWaitLatency;
 
 public MessagingMetrics()
 {
crossNodeLatency = Metrics.timer(factory.createMetricName("CrossNodeLatency"));
 dcLatency = new ConcurrentHashMap<>();
+queueWaitLatency = new ConcurrentHashMap<>();
 }
 
 public void addTimeTaken(InetAddress from, long timeTaken)
@@ -56,4 +58,18 @@ public class MessagingMetrics
 timer.update(timeTaken, TimeUnit.MILLISECONDS);
 crossNodeLatency.update(timeTaken, TimeUnit.MILLISECONDS);
 }
+
+public void addQueueWaitTime(String verb, long timeTaken)
+{
+if (timeTaken < 0)
+// the measurement is not accurate, ignore the negative timeTaken
+return;
+
+Timer timer = queueWaitLatency.get(verb);
+if (timer == null)
+{
+timer = queueWaitLatency.computeIfAbsent(verb, k -> Metrics.timer(factory.createMetricName(verb + "-WaitLatency")));
+}
+timer.update(timeTaken, TimeUnit.MILLISECONDS);
+}
 }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6b6bc6a3/src/java/org/apache/cassandra/net/MessageDeliveryTask.java
--
diff --git a/src/java/org/apache/cassandra/net/MessageDeliveryTask.java 
b/src/java/org/apache/cassandra/net/MessageDeliveryTask.java
index c91e9da..c7fc991 100644
--- a/src/java/org/apache/cassandra/net/MessageDeliveryTask.java
+++ b/src/java/org/apache/cassandra/net/MessageDeliveryTask.java
@@ -24,6 +24,7 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import org.apache.cassandra.db.filter.TombstoneOverwhelmingException;
+import org.apache.cassandra.db.monitoring.ApproximateTime;
 import org.apache.cassandra.exceptions.RequestFailureReason;
 import org.apache.cassandra.gms.Gossiper;
 import org.apache.cassandra.index.IndexNotAvailableException;
@@ -35,17 +36,22 @@ public class MessageDeliveryTask implements Runnable
 
 private final MessageIn message;
 private final int id;
+private final long enqueueTime;
 
 public MessageDeliveryTask(MessageIn message, int id)
 {
 assert message != null;
 this.message = message;
 this.id = id;
+this.enqueueTime = ApproximateTime.currentTimeMillis();
 }
 
 public void 

[jira] [Updated] (CASSANDRA-13005) Cassandra TWCS is not removing fully expired tables

2016-12-06 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-13005:
---
Labels: twcs  (was: )

> Cassandra TWCS is not removing fully expired tables
> ---
>
> Key: CASSANDRA-13005
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13005
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Cassandra 3.0.9
> Java HotSpot(TM) 64-Bit Server VM version 25.112-b15 (Java version 
> 1.8.0_112-b15)
> Linux 3.16
>Reporter: Christian Esken
>  Labels: twcs
> Attachments: sstablemetadata-empty-type-that-is-3GB.txt
>
>
> I have a table where all columns are stored with TTL of maximum 4 hours. 
> Usually TWCS compaction properly removes expired data via tombstone 
> compaction and also removes fully expired tables. The number of SSTables has 
> been nearly constant for weeks. Good.
> The problem: Suddenly TWCS does not remove old SSTables any longer. They are 
> being recreated frequently (judging from the file creation timestamp), but 
> the number of tables is growing. Analysis and actions taken so far:
> - sstablemetadata shows strange data, as if the table is completely empty.
> - sstabledump throws an Exception when running it on such a SSTable
> - Even triggering a manual major compaction will not remove the old 
> SSTables. To be more precise: They are recreated with new id and timestamp 
> (not sure whether they are identical as I cannot inspect content due to the 
> sstabledump crash)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-13005) Cassandra TWCS is not removing fully expired tables

2016-12-06 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726936#comment-15726936
 ] 

Jeff Jirsa commented on CASSANDRA-13005:


I'm unsure how you're suddenly missing components. However, a lot of factors go into what can/can't be safely deleted. One of the most common reasons TWCS doesn't drop a fully expired table is overlapping files. There exists a tool, sstableexpiredblockers, meant to help diagnose such a problem. Are you able to run it on that table?



> Cassandra TWCS is not removing fully expired tables
> ---
>
> Key: CASSANDRA-13005
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13005
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: Cassandra 3.0.9
> Java HotSpot(TM) 64-Bit Server VM version 25.112-b15 (Java version 
> 1.8.0_112-b15)
> Linux 3.16
>Reporter: Christian Esken
> Attachments: sstablemetadata-empty-type-that-is-3GB.txt
>
>
> I have a table where all columns are stored with TTL of maximum 4 hours. 
> Usually TWCS compaction properly removes expired data via tombstone 
> compaction and also removes fully expired tables. The number of SSTables has 
> been nearly constant for weeks. Good.
> The problem: Suddenly TWCS does not remove old SSTables any longer. They are 
> being recreated frequently (judging from the file creation timestamp), but 
> the number of tables is growing. Analysis and actions taken so far:
> - sstablemetadata shows strange data, as if the table is completely empty.
> - sstabledump throws an Exception when running it on such a SSTable
> - Even triggering a manual major compaction will not remove the old 
> SSTable's. To be more precise: They are recreated with new id and timestamp 
> (not sure whether they are identical as I cannot inspect content due to the 
> sstabledump crash)





[jira] [Commented] (CASSANDRA-13005) Cassandra TWCS is not removing fully expired tables

2016-12-06 Thread Chris Lohfink (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726865#comment-15726865
 ] 

Chris Lohfink commented on CASSANDRA-13005:
---

We should maybe open a separate ticket to produce a better error message for 
sstabledump when components are missing.






[jira] [Updated] (CASSANDRA-13008) Add vm.max_map_count startup StartupCheck

2016-12-06 Thread Jay Zhuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Zhuang updated CASSANDRA-13008:
---
Attachment: 13008-3.0.txt

> Add vm.max_map_count startup StartupCheck
> -
>
> Key: CASSANDRA-13008
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13008
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Jay Zhuang
>Priority: Minor
> Fix For: 3.0.x
>
> Attachments: 13008-3.0.txt
>
>
> It's recommended to set {{vm.max_map_count}} to 1048575 (CASSANDRA-3563).
> When max_map_count is too low, Cassandra throws an OOM exception that is hard 
> to link back to the real vm.max_map_count issue.
> The problem happened when we tried to remove one node: all the other nodes in 
> the cluster crashed, as each node was trying to load more local SSTable files 
> for streaming.
> I would suggest adding a StartupCheck for max_map_count; at the least it 
> could give a warning message to help debugging.
> {code}
> ERROR [STREAM-IN-] JVMStabilityInspector.java:140 - JVM state determined to 
> be unstable.  Exiting forcefully due to:
> java.lang.OutOfMemoryError: Map failed
> at sun.nio.ch.FileChannelImpl.map0(Native Method) ~[na:1.8.0_112]
> at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:937) 
> ~[na:1.8.0_112]
> at org.apache.cassandra.io.util.ChannelProxy.map(ChannelProxy.java:152) 
> ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.io.util.MmappedRegions$State.add(MmappedRegions.java:280)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.io.util.MmappedRegions$State.access$400(MmappedRegions.java:216)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.io.util.MmappedRegions.updateState(MmappedRegions.java:173)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.io.util.MmappedRegions.<init>(MmappedRegions.java:70) 
> ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.io.util.MmappedRegions.<init>(MmappedRegions.java:58) 
> ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.io.util.MmappedRegions.map(MmappedRegions.java:96) 
> ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.io.util.CompressedSegmentedFile.<init>(CompressedSegmentedFile.java:47)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.complete(CompressedSegmentedFile.java:132)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:177)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.io.util.SegmentedFile$Builder.buildData(SegmentedFile.java:193)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.openFinal(BigTableWriter.java:276)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.access$600(BigTableWriter.java:50)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter$TransactionalProxy.doPrepare(BigTableWriter.java:313)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.prepareToCommit(Transactional.java:173)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.finish(Transactional.java:184)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.io.sstable.format.SSTableWriter.finish(SSTableWriter.java:213)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.io.sstable.SimpleSSTableMultiWriter.finish(SimpleSSTableMultiWriter.java:56)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.streaming.StreamReceiveTask.received(StreamReceiveTask.java:109)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.streaming.StreamSession.receive(StreamSession.java:599) 
> ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.streaming.StreamSession.messageReceived(StreamSession.java:482)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:296)
>  ~[apache-cassandra-3.0.10.jar:3.0.10]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_112]
> {code}
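A minimal sketch of the kind of startup check being proposed (hypothetical code, not the attached patch; the class name, recommended constant, and warning text are all invented for illustration): read the sysctl value from procfs on Linux and warn when it is below the recommended value.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical sketch of a vm.max_map_count startup check. Not the
// actual patch attached to this ticket.
public class MaxMapCountCheck
{
    // recommended in CASSANDRA-3563
    static final long RECOMMENDED = 1048575;

    public static boolean isSufficient(long maxMapCount)
    {
        return maxMapCount >= RECOMMENDED;
    }

    public static void check()
    {
        Path proc = Paths.get("/proc/sys/vm/max_map_count");
        if (!Files.exists(proc))
            return; // non-Linux platforms expose no such file
        try
        {
            long value = Long.parseLong(Files.readAllLines(proc).get(0).trim());
            if (!isSufficient(value))
                System.err.printf("vm.max_map_count %d is below the recommended %d; " +
                                  "mmap-heavy workloads may fail with OutOfMemoryError: Map failed%n",
                                  value, RECOMMENDED);
        }
        catch (IOException | NumberFormatException e)
        {
            // best effort only; never fail startup on the check itself
        }
    }
}
```

The check deliberately only warns: the right threshold depends on data volume, so failing startup outright would be too strict.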





[jira] [Updated] (CASSANDRA-13008) Add vm.max_map_count startup StartupCheck

2016-12-06 Thread Jay Zhuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Zhuang updated CASSANDRA-13008:
---
Attachment: (was: 13008-3.0.txt)






[jira] [Updated] (CASSANDRA-13008) Add vm.max_map_count startup StartupCheck

2016-12-06 Thread Jay Zhuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Zhuang updated CASSANDRA-13008:
---
Attachment: 13008-3.0.txt






[jira] [Updated] (CASSANDRA-13008) Add vm.max_map_count startup StartupCheck

2016-12-06 Thread Jay Zhuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Zhuang updated CASSANDRA-13008:
---
Status: Patch Available  (was: Open)

Please review:
https://github.com/cooldoger/cassandra/commits/13008-3.0






[jira] [Updated] (CASSANDRA-13008) Add vm.max_map_count startup StartupCheck

2016-12-06 Thread Jay Zhuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Zhuang updated CASSANDRA-13008:
---
Status: Open  (was: Patch Available)






[jira] [Updated] (CASSANDRA-13008) Add vm.max_map_count startup StartupCheck

2016-12-06 Thread Jay Zhuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Zhuang updated CASSANDRA-13008:
---
Fix Version/s: 3.0.x
Reproduced In: 3.0.10
   Status: Patch Available  (was: Open)






[jira] [Commented] (CASSANDRA-8398) Expose time spent waiting in thread pool queue

2016-12-06 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726640#comment-15726640
 ] 

T Jake Luciani commented on CASSANDRA-8398:
---

+1

> Expose time spent waiting in thread pool queue 
> ---
>
> Key: CASSANDRA-8398
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8398
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: T Jake Luciani
>Assignee: Dikang Gu
>Priority: Minor
>  Labels: lhf
> Fix For: 3.12
>
> Attachments: tpstats.png
>
>
> We are missing an important source of latency in our system, the time waiting 
> to be processed by thread pools.  We should add a metric for this so someone 
> can easily see how much time is spent just waiting to be processed.
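The metric can be sketched by wrapping submitted tasks (an illustrative sketch with invented names, not the actual patch, which wires this into Cassandra's metrics and tpstats): record the enqueue time at submission and the delta once the task actually starts running.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of measuring thread-pool queue wait time.
public class QueueWaitSketch
{
    // Wrap a task so that the time between submission and execution
    // start is accumulated into totalWaitNanos.
    public static Runnable measured(Runnable task, AtomicLong totalWaitNanos)
    {
        final long enqueuedAt = System.nanoTime();
        return () -> {
            totalWaitNanos.addAndGet(System.nanoTime() - enqueuedAt);
            task.run();
        };
    }

    public static void main(String[] args) throws Exception
    {
        AtomicLong waited = new AtomicLong();
        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.submit(measured(() -> {}, waited));
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("queue wait ns: " + waited.get());
    }
}
```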





[jira] [Commented] (CASSANDRA-12979) checkAvailableDiskSpace doesn't update expectedWriteSize when reducing thread scope

2016-12-06 Thread Jon Haddad (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726632#comment-15726632
 ] 

Jon Haddad commented on CASSANDRA-12979:


https://github.com/apache/cassandra/pull/87

> checkAvailableDiskSpace doesn't update expectedWriteSize when reducing thread 
> scope
> ---
>
> Key: CASSANDRA-12979
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12979
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jon Haddad
>Assignee: Jon Haddad
> Fix For: 2.2.9, 3.0.11, 4.0, 3.x
>
>
> If a compaction occurs that looks like it'll take up more space than 
> remaining disk available, the compaction manager attempts to reduce the scope 
> of the compaction by calling {{reduceScopeForLimitedSpace()}} repeatedly.  
> Unfortunately, the while loop passes the {{estimatedWriteSize}} calculated 
> from the original call to {{hasAvailableDiskSpace}}, so the comparisons that 
> are done will always be against the size of the original compaction, rather 
> than the reduced scope one.
> Full method below:
> {code}
> protected void checkAvailableDiskSpace(long estimatedSSTables, long 
> expectedWriteSize)
> {
> if(!cfs.isCompactionDiskSpaceCheckEnabled() && compactionType == 
> OperationType.COMPACTION)
> {
> logger.info("Compaction space check is disabled");
> return;
> }
> while (!getDirectories().hasAvailableDiskSpace(estimatedSSTables, 
> expectedWriteSize))
> {
> if (!reduceScopeForLimitedSpace())
> throw new RuntimeException(String.format("Not enough space 
> for compaction, estimated sstables = %d, expected write size = %d", 
> estimatedSSTables, expectedWriteSize));
>   
> }
> }
> {code}
> I'm proposing to recalculate the {{estimatedSSTables}} and 
> {{expectedWriteSize}} after each iteration of {{reduceScopeForLimitedSpace}}. 
>  
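The proposed fix can be simulated in isolation: recompute the expected write size inside the loop instead of reusing the value computed before it. Everything below (the sstable sizes, the available-space constant, the helper names) is a toy stand-in for the real compaction code, not the actual patch:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of the proposed recalculation. The original bug was comparing
// against an expectedWriteSize computed once, before the loop, so the
// check never saw the effect of reduceScopeForLimitedSpace().
public class DiskSpaceCheck {
    static long availableBytes = 100;
    static List<Long> sstableSizes = new ArrayList<>(Arrays.asList(60L, 50L, 40L));

    static long expectedWriteSize() {
        return sstableSizes.stream().mapToLong(Long::longValue).sum();
    }

    static boolean reduceScopeForLimitedSpace() {
        if (sstableSizes.size() <= 1)
            return false;
        sstableSizes.remove(sstableSizes.size() - 1); // drop one sstable from scope
        return true;
    }

    static void checkAvailableDiskSpace() {
        // Recalculate on every iteration, so the check tracks the reduced scope.
        while (expectedWriteSize() > availableBytes) {
            if (!reduceScopeForLimitedSpace())
                throw new RuntimeException("Not enough space for compaction");
        }
    }

    public static void main(String[] args) {
        checkAvailableDiskSpace();
        System.out.println("compacting " + sstableSizes.size()
                           + " sstable(s), " + expectedWriteSize() + " bytes");
    }
}
```

Here the scope shrinks from 150 bytes to 60 bytes across two iterations before the check passes; with the stale pre-loop value, the loop would have reduced scope until {{reduceScopeForLimitedSpace}} returned false and then thrown.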



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-12979) checkAvailableDiskSpace doesn't update expectedWriteSize when reducing thread scope

2016-12-06 Thread Sotirios Delimanolis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726613#comment-15726613
 ] 

Sotirios Delimanolis edited comment on CASSANDRA-12979 at 12/6/16 8:29 PM:
---

+1

We hit this issue recently. A huge set of sstables couldn't get compacted. 
We've been running a version of this patch (in 2.2 and just some added logging) 
in production for a couple of days and it unblocks these compactions.

I suggest you open a separate ticket for the {{RuntimeException}}, though. 
Nothing is currently set up to handle it, and it doesn't even get logged. 
I assume that's why this issue wasn't identified sooner.


was (Author: s_delima):
+1

We hit this issue recently. A huge set of sstables couldn't get compacted. 
We've been running a version of this patch (just more logging) in production 
for a couple of days and it unblocks these compactions.

I suggest you open a separate ticket for the {{RuntimeException}}, though. 
Nothing is currently set up to handle it right now. It doesn't even get logged. 
I assume that's why this issue wasn't identified sooner.

> checkAvailableDiskSpace doesn't update expectedWriteSize when reducing thread 
> scope
> ---
>
> Key: CASSANDRA-12979
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12979
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jon Haddad
>Assignee: Jon Haddad
> Fix For: 2.2.9, 3.0.11, 4.0, 3.x
>
>
> If a compaction occurs that looks like it'll take up more space than 
> remaining disk available, the compaction manager attempts to reduce the scope 
> of the compaction by calling {{reduceScopeForLimitedSpace()}} repeatedly.  
> Unfortunately, the while loop passes the {{estimatedWriteSize}} calculated 
> from the original call to {{hasAvailableDiskSpace}}, so the comparisons that 
> are done will always be against the size of the original compaction, rather 
> than the reduced scope one.
> Full method below:
> {code}
> protected void checkAvailableDiskSpace(long estimatedSSTables, long 
> expectedWriteSize)
> {
> if(!cfs.isCompactionDiskSpaceCheckEnabled() && compactionType == 
> OperationType.COMPACTION)
> {
> logger.info("Compaction space check is disabled");
> return;
> }
> while (!getDirectories().hasAvailableDiskSpace(estimatedSSTables, 
> expectedWriteSize))
> {
> if (!reduceScopeForLimitedSpace())
> throw new RuntimeException(String.format("Not enough space 
> for compaction, estimated sstables = %d, expected write size = %d", 
> estimatedSSTables, expectedWriteSize));
>   
> }
> }
> {code}
> I'm proposing to recalculate the {{estimatedSSTables}} and 
> {{expectedWriteSize}} after each iteration of {{reduceScopeForLimitedSpace}}. 
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12979) checkAvailableDiskSpace doesn't update expectedWriteSize when reducing thread scope

2016-12-06 Thread Sotirios Delimanolis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726613#comment-15726613
 ] 

Sotirios Delimanolis commented on CASSANDRA-12979:
--

+1

We hit this issue recently. A huge set of sstables couldn't get compacted. 
We've been running a version of this patch (just more logging) in production 
for a couple of days and it unblocks these compactions.

I suggest you open a separate ticket for the {{RuntimeException}}, though. 
Nothing is currently set up to handle it, and it doesn't even get logged. 
I assume that's why this issue wasn't identified sooner.

> checkAvailableDiskSpace doesn't update expectedWriteSize when reducing thread 
> scope
> ---
>
> Key: CASSANDRA-12979
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12979
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jon Haddad
>Assignee: Jon Haddad
> Fix For: 2.2.9, 3.0.11, 4.0, 3.x
>
>
> If a compaction occurs that looks like it'll take up more space than 
> remaining disk available, the compaction manager attempts to reduce the scope 
> of the compaction by calling {{reduceScopeForLimitedSpace()}} repeatedly.  
> Unfortunately, the while loop passes the {{estimatedWriteSize}} calculated 
> from the original call to {{hasAvailableDiskSpace}}, so the comparisons that 
> are done will always be against the size of the original compaction, rather 
> than the reduced scope one.
> Full method below:
> {code}
> protected void checkAvailableDiskSpace(long estimatedSSTables, long 
> expectedWriteSize)
> {
> if(!cfs.isCompactionDiskSpaceCheckEnabled() && compactionType == 
> OperationType.COMPACTION)
> {
> logger.info("Compaction space check is disabled");
> return;
> }
> while (!getDirectories().hasAvailableDiskSpace(estimatedSSTables, 
> expectedWriteSize))
> {
> if (!reduceScopeForLimitedSpace())
> throw new RuntimeException(String.format("Not enough space 
> for compaction, estimated sstables = %d, expected write size = %d", 
> estimatedSSTables, expectedWriteSize));
>   
> }
> }
> {code}
> I'm proposing to recalculate the {{estimatedSSTables}} and 
> {{expectedWriteSize}} after each iteration of {{reduceScopeForLimitedSpace}}. 
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8398) Expose time spent waiting in thread pool queue

2016-12-06 Thread Dikang Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726545#comment-15726545
 ] 

Dikang Gu commented on CASSANDRA-8398:
--

[~tjake], FYI, the unit tests/dtests are done; there are several test failures, 
but they seem unrelated to this ticket.

> Expose time spent waiting in thread pool queue 
> ---
>
> Key: CASSANDRA-8398
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8398
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: T Jake Luciani
>Assignee: Dikang Gu
>Priority: Minor
>  Labels: lhf
> Fix For: 3.12
>
> Attachments: tpstats.png
>
>
> We are missing an important source of latency in our system, the time waiting 
> to be processed by thread pools.  We should add a metric for this so someone 
> can easily see how much time is spent just waiting to be processed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-13008) Add vm.max_map_count startup StartupCheck

2016-12-06 Thread Jay Zhuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Zhuang updated CASSANDRA-13008:
---
Description: 
It's recommended to set {{vm.max_map_count}} to 1048575 (CASSANDRA-3563).
When max_map_count is too low, Cassandra throws an OOM exception, which is hard 
to link back to vm.max_map_count. The problem happened when we tried to remove 
one node: all the other nodes in the cluster crashed, as each node was trying 
to load more local SSTable files for streaming.

I would suggest adding a StartupCheck for max_map_count; at the least it could 
give a warning message to help debugging.

{code}
ERROR [STREAM-IN-] JVMStabilityInspector.java:140 - JVM state determined to be 
unstable.  Exiting forcefully due to:
java.lang.OutOfMemoryError: Map failed
at sun.nio.ch.FileChannelImpl.map0(Native Method) ~[na:1.8.0_112]
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:937) ~[na:1.8.0_112]
at org.apache.cassandra.io.util.ChannelProxy.map(ChannelProxy.java:152) 
~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.io.util.MmappedRegions$State.add(MmappedRegions.java:280) 
~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.io.util.MmappedRegions$State.access$400(MmappedRegions.java:216)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.io.util.MmappedRegions.updateState(MmappedRegions.java:173)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.io.util.MmappedRegions.(MmappedRegions.java:70) 
~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.io.util.MmappedRegions.(MmappedRegions.java:58) 
~[apache-cassandra-3.0.10.jar:3.0.10]
at org.apache.cassandra.io.util.MmappedRegions.map(MmappedRegions.java:96) 
~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.io.util.CompressedSegmentedFile.(CompressedSegmentedFile.java:47)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.io.util.CompressedSegmentedFile$Builder.complete(CompressedSegmentedFile.java:132)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:177)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.io.util.SegmentedFile$Builder.buildData(SegmentedFile.java:193)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.io.sstable.format.big.BigTableWriter.openFinal(BigTableWriter.java:276)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.io.sstable.format.big.BigTableWriter.access$600(BigTableWriter.java:50)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.io.sstable.format.big.BigTableWriter$TransactionalProxy.doPrepare(BigTableWriter.java:313)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.prepareToCommit(Transactional.java:173)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.finish(Transactional.java:184)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.io.sstable.format.SSTableWriter.finish(SSTableWriter.java:213)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.io.sstable.SimpleSSTableMultiWriter.finish(SimpleSSTableMultiWriter.java:56)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.streaming.StreamReceiveTask.received(StreamReceiveTask.java:109)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.streaming.StreamSession.receive(StreamSession.java:599) 
~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.streaming.StreamSession.messageReceived(StreamSession.java:482)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
at 
org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:296)
 ~[apache-cassandra-3.0.10.jar:3.0.10]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_112]
{code}
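A startup check along these lines could read {{/proc/sys/vm/max_map_count}} and warn when it is below the recommended value. This is only a sketch assuming a Linux host; the class and method names are illustrative, not the eventual patch:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch of a vm.max_map_count startup check. The recommended value and
// the /proc path come from the description above; everything else is a
// hypothetical illustration.
public class MaxMapCountCheck {
    static final long RECOMMENDED = 1048575L;
    static final Path PROC = Paths.get("/proc/sys/vm/max_map_count");

    static boolean isSufficient(long maxMapCount) {
        return maxMapCount >= RECOMMENDED;
    }

    public static void main(String[] args) throws Exception {
        if (!Files.exists(PROC)) {
            // Not Linux, or /proc is unavailable; nothing to check.
            System.out.println("vm.max_map_count check skipped");
            return;
        }
        long value = Long.parseLong(new String(Files.readAllBytes(PROC)).trim());
        if (!isSufficient(value))
            System.out.println("WARN: vm.max_map_count " + value
                + " is below the recommended " + RECOMMENDED
                + "; mmap-heavy operations may fail with"
                + " java.lang.OutOfMemoryError: Map failed");
    }
}
```

Surfacing the warning at startup would make the link between a low max_map_count and the later "Map failed" OOM much easier to spot.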

  was:
It's recommended to set `vm.max_map_count` to 1048575 (CASSANDRA-3563)
When the max_map_count is low, it throws OOM exception, which is hard to link 
to the real issue of vm.max_map_count.
The problem happened when we tried to remove one node, all the other nodes in 
cluster crashed. As each node was trying to load more local SSTable files for 
streaming.

I would suggest to add a StartupCheck for max_map_count, at least it could give 
a warning message to help the debug.

```
ERROR [MemtableFlushWriter:109] 2016-11-14 00:22:40,598 
JVMStabilityInspector.java:117 - JVM state determined to be unstable.  Exiting 
forcefully due to:
java.lang.OutOfMemoryError: Map failed
at sun.nio.ch.FileChannelImpl.map0(Native Method) ~[na:1.8.0_112]
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:937) ~[na:1.8.0_112]
at org.apache.cassandra.io.util.ChannelProxy.map(ChannelProxy.java:152) 

[jira] [Commented] (CASSANDRA-13008) Add vm.max_map_count startup StartupCheck

2016-12-06 Thread Jay Zhuang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726492#comment-15726492
 ] 

Jay Zhuang commented on CASSANDRA-13008:


I'm working on a fix for this.

> Add vm.max_map_count startup StartupCheck
> -
>
> Key: CASSANDRA-13008
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13008
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Jay Zhuang
>Priority: Minor
>
> It's recommended to set `vm.max_map_count` to 1048575 (CASSANDRA-3563).
> When max_map_count is too low, Cassandra throws an OOM exception, which is 
> hard to link back to vm.max_map_count. The problem happened when we tried to 
> remove one node: all the other nodes in the cluster crashed, as each node was 
> trying to load more local SSTable files for streaming.
> I would suggest adding a StartupCheck for max_map_count; at the least it 
> could give a warning message to help debugging.
> ```
> ERROR [MemtableFlushWriter:109] 2016-11-14 00:22:40,598 
> JVMStabilityInspector.java:117 - JVM state determined to be unstable.  
> Exiting forcefully due to:
> java.lang.OutOfMemoryError: Map failed
> at sun.nio.ch.FileChannelImpl.map0(Native Method) ~[na:1.8.0_112]
> at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:937) 
> ~[na:1.8.0_112]
> at org.apache.cassandra.io.util.ChannelProxy.map(ChannelProxy.java:152) 
> ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.io.util.MmappedSegmentedFile$Builder.complete(MmappedSegmentedFile.java:377)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:188)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:179)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.openFinal(BigTableWriter.java:344)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter.access$800(BigTableWriter.java:56)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableWriter$TransactionalProxy.doPrepare(BigTableWriter.java:385)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.prepareToCommit(Transactional.java:169)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.finish(Transactional.java:179)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.io.sstable.format.SSTableWriter.finish(SSTableWriter.java:205)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:412)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.db.Memtable$FlushRunnable.runMayThrow(Memtable.java:361) 
> ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
>  ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1139)
>  ~[apache-cassandra-2.2.5.jar:2.2.5]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_112]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  ~[na:1.8.0_112]
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_112]
> ```  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-13008) Add vm.max_map_count startup StartupCheck

2016-12-06 Thread Jay Zhuang (JIRA)
Jay Zhuang created CASSANDRA-13008:
--

 Summary: Add vm.max_map_count startup StartupCheck
 Key: CASSANDRA-13008
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13008
 Project: Cassandra
  Issue Type: Improvement
  Components: Configuration
Reporter: Jay Zhuang
Priority: Minor


It's recommended to set `vm.max_map_count` to 1048575 (CASSANDRA-3563).
When max_map_count is too low, Cassandra throws an OOM exception, which is hard 
to link back to vm.max_map_count. The problem happened when we tried to remove 
one node: all the other nodes in the cluster crashed, as each node was trying 
to load more local SSTable files for streaming.

I would suggest adding a StartupCheck for max_map_count; at the least it could 
give a warning message to help debugging.

```
ERROR [MemtableFlushWriter:109] 2016-11-14 00:22:40,598 
JVMStabilityInspector.java:117 - JVM state determined to be unstable.  Exiting 
forcefully due to:
java.lang.OutOfMemoryError: Map failed
at sun.nio.ch.FileChannelImpl.map0(Native Method) ~[na:1.8.0_112]
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:937) ~[na:1.8.0_112]
at org.apache.cassandra.io.util.ChannelProxy.map(ChannelProxy.java:152) 
~[apache-cassandra-2.2.5.jar:2.2.5]
at 
org.apache.cassandra.io.util.MmappedSegmentedFile$Builder.complete(MmappedSegmentedFile.java:377)
 ~[apache-cassandra-2.2.5.jar:2.2.5]
at 
org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:188)
 ~[apache-cassandra-2.2.5.jar:2.2.5]
at 
org.apache.cassandra.io.util.SegmentedFile$Builder.complete(SegmentedFile.java:179)
 ~[apache-cassandra-2.2.5.jar:2.2.5]
at 
org.apache.cassandra.io.sstable.format.big.BigTableWriter.openFinal(BigTableWriter.java:344)
 ~[apache-cassandra-2.2.5.jar:2.2.5]
at 
org.apache.cassandra.io.sstable.format.big.BigTableWriter.access$800(BigTableWriter.java:56)
 ~[apache-cassandra-2.2.5.jar:2.2.5]
at 
org.apache.cassandra.io.sstable.format.big.BigTableWriter$TransactionalProxy.doPrepare(BigTableWriter.java:385)
 ~[apache-cassandra-2.2.5.jar:2.2.5]
at 
org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.prepareToCommit(Transactional.java:169)
 ~[apache-cassandra-2.2.5.jar:2.2.5]
at 
org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.finish(Transactional.java:179)
 ~[apache-cassandra-2.2.5.jar:2.2.5]
at 
org.apache.cassandra.io.sstable.format.SSTableWriter.finish(SSTableWriter.java:205)
 ~[apache-cassandra-2.2.5.jar:2.2.5]
at 
org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:412)
 ~[apache-cassandra-2.2.5.jar:2.2.5]
at 
org.apache.cassandra.db.Memtable$FlushRunnable.runMayThrow(Memtable.java:361) 
~[apache-cassandra-2.2.5.jar:2.2.5]
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
~[apache-cassandra-2.2.5.jar:2.2.5]
at 
com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
 ~[guava-16.0.jar:na]
at 
org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1139)
 ~[apache-cassandra-2.2.5.jar:2.2.5]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
~[na:1.8.0_112]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
~[na:1.8.0_112]
at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_112]
```  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-13007) Make compaction more testable

2016-12-06 Thread Jon Haddad (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726434#comment-15726434
 ] 

Jon Haddad commented on CASSANDRA-13007:


I didn't know about compaction stress, that's good to know.

What I'm looking to improve here is the testability of compaction's edge cases. 
For instance, trying to verify with a unit test that the right thing happens 
when you have an almost-full disk with more sstables than space 
(CASSANDRA-12979) and {{reduceScopeForLimitedSpace}} needs to work correctly is 
kind of a nightmare. There wasn't a test for it before, and given the direct 
calls to the file system that eventually happen, I'm not sure how you would do 
it properly. Even if you came up with a testing harness, you'd destroy any 
chance of running concurrent tests, since you have to fill your disk up (who 
knows how many errors that would trigger).

> Make compaction more testable
> -
>
> Key: CASSANDRA-13007
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13007
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jon Haddad
>
> Compaction is written in a manner that makes it difficult to unit test edge 
> cases.  I'm opening this up as a blanket issue, hopefully we can get enough 
> requirements in here to make some decisions to improve the testability and 
> maintainability of compaction code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12910) SASI: calculatePrimary() always returns null

2016-12-06 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-12910:

Status: Ready to Commit  (was: Patch Available)

> SASI: calculatePrimary() always returns null
> 
>
> Key: CASSANDRA-12910
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12910
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
>Reporter: Corentin Chary
>Assignee: Corentin Chary
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 0002-sasi-fix-calculatePrimary.patch
>
>
> While investigating performance issues with SASI 
> (https://github.com/criteo/biggraphite/issues/174 if you want to know more) I 
> ended up finding calculatePrimary() in QueryController.java, which apparently 
> should return the "primary index".
> It lacks documentation, and I'm unsure what the "primary index" should be, 
> but apparently this function never returns one because primaryIndexes.size() 
> is always 0.
> https://github.com/apache/cassandra/blob/81f6c784ce967fadb6ed7f58de1328e713eaf53c/src/java/org/apache/cassandra/index/sasi/plan/QueryController.java#L237
> I'm unsure if the proper fix is checking if the collection is empty or 
> reversing the operator (selecting the index with higher cardinality versus 
> the one with lower cardinality).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-12910) SASI: calculatePrimary() always returns null

2016-12-06 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726019#comment-15726019
 ] 

Alex Petrov edited comment on CASSANDRA-12910 at 12/6/16 7:20 PM:
--

I've used a simple test to verify, and I think your fix is good, although I do 
not see a good way to test it without exposing a lot of internals for now.

So what happens is that in {{getView}}, instead of querying all SSTables, we're 
going to query only those which would (possibly) yield results for the 
expression involving the fewest sstables. The order does not matter, as it's 
going to be changed later in {{RangeIntersectionIterator}}, depending on the 
size of the token trees. 

I've tested the patch on our CI: 

|[3.X|https://github.com/ifesdjeen/cassandra/tree/12910-3.x]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12910-3.x-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12910-3.x-dtest/]|
|[trunk|https://github.com/ifesdjeen/cassandra/tree/12910-trunk-v2]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12910-trunk-v2-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12910-trunk-v2-dtest/]|


was (Author: ifesdjeen):
I've used a simple test to verify and I think your fix is good. Although I do 
not see a good way to test it without exposing a lot of internals for now.

So what happens is in {{getView}} instead of querying all SSTables, we're going 
to query only those which would (possibly) yield results for the expression 
that involves the least amount of sstables. The order does not matter, as it's 
going to be changed later in {{RangeIntersectionIterator}}, depending on the 
size of token trees. 

I've tested the patch on our CI: 

|[3.X|https://github.com/ifesdjeen/cassandra/tree/12910-3.x]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12910-3.x-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12910-3.x-dtest/]|
|[trunk-v2|https://github.com/ifesdjeen/cassandra/tree/12910-trunk-v2]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12910-trunk-v2-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12910-trunk-v2-dtest/]|

> SASI: calculatePrimary() always returns null
> 
>
> Key: CASSANDRA-12910
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12910
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
>Reporter: Corentin Chary
>Assignee: Corentin Chary
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 0002-sasi-fix-calculatePrimary.patch
>
>
> While investigating performance issues with SASI 
> (https://github.com/criteo/biggraphite/issues/174 if you want to know more) I 
> ended up finding calculatePrimary() in QueryController.java, which apparently 
> should return the "primary index".
> It lacks documentation, and I'm unsure what the "primary index" should be, 
> but apparently this function never returns one because primaryIndexes.size() 
> is always 0.
> https://github.com/apache/cassandra/blob/81f6c784ce967fadb6ed7f58de1328e713eaf53c/src/java/org/apache/cassandra/index/sasi/plan/QueryController.java#L237
> I'm unsure if the proper fix is checking if the collection is empty or 
> reversing the operator (selecting the index with higher cardinality versus 
> the one with lower cardinality).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-12910) SASI: calculatePrimary() always returns null

2016-12-06 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726019#comment-15726019
 ] 

Alex Petrov edited comment on CASSANDRA-12910 at 12/6/16 7:20 PM:
--

I've used a simple test to verify, and I think your fix is good, although I do 
not see a good way to test it without exposing a lot of internals for now.

So what happens is that in {{getView}}, instead of querying all SSTables, we're 
going to query only those which would (possibly) yield results for the 
expression involving the fewest sstables. The order does not matter, as it's 
going to be changed later in {{RangeIntersectionIterator}}, depending on the 
size of the token trees. 

I've tested the patch on our CI: 

|[3.X|https://github.com/ifesdjeen/cassandra/tree/12910-3.x]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12910-3.x-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12910-3.x-dtest/]|
|[trunk-v2|https://github.com/ifesdjeen/cassandra/tree/12910-trunk-v2]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12910-trunk-v2-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12910-trunk-v2-dtest/]|


was (Author: ifesdjeen):
I've used a simple test to verify and I think your fix is good. Although I do 
not see a good way to test it without exposing a lot of internals for now.

So what happens is in {{getView}} instead of querying all SSTables, we're going 
to query only those which would (possibly) yield results for the expression 
that involves the least amount of sstables. The order does not matter, as it's 
going to be changed later in {{RangeIntersectionIterator}}, depending on the 
size of token trees. 

I've tested the patch on our CI: 

|[3.X|https://github.com/ifesdjeen/cassandra/tree/12910-3.x]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12910-3.x-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12910-3.x-dtest/]|
|[trunk|https://github.com/ifesdjeen/cassandra/tree/12910-trunk]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12910-trunk-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12910-trunk-dtest/]|


> SASI: calculatePrimary() always returns null
> 
>
> Key: CASSANDRA-12910
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12910
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
>Reporter: Corentin Chary
>Assignee: Corentin Chary
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 0002-sasi-fix-calculatePrimary.patch
>
>
> While investigating performance issues with SASI 
> (https://github.com/criteo/biggraphite/issues/174 if you want to know more) I 
> ended up finding calculatePrimary() in QueryController.java, which apparently 
> should return the "primary index".
> It lacks documentation, and I'm unsure what the "primary index" should be, 
> but apparently this function never returns one because primaryIndexes.size() 
> is always 0.
> https://github.com/apache/cassandra/blob/81f6c784ce967fadb6ed7f58de1328e713eaf53c/src/java/org/apache/cassandra/index/sasi/plan/QueryController.java#L237
> I'm unsure if the proper fix is checking if the collection is empty or 
> reversing the operator (selecting the index with higher cardinality versus 
> the one with lower cardinality).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-12910) SASI: calculatePrimary() always returns null

2016-12-06 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726419#comment-15726419
 ] 

Alex Petrov edited comment on CASSANDRA-12910 at 12/6/16 7:18 PM:
--

I can see no correlation between filled columns in rows and this patch. 

Let's say there are two sstables: 

{code}
| a | b | c |
| 1 | 1 | 1 |
| 2 | 2 | 2 |
| 3 | 3 | 3 |

| a | b | c |
| 4 | 4 | 4 |
| 5 | 5 | 2 |
{code}

With {{PRIMARY KEY a}}, when querying {{SELECT * FROM tbl WHERE b = 5 AND c = 
2}}: results for column {{b}} are only in the second sstable, while results for 
column {{c}} are in both the first and the second sstable. Since this is an 
{{AND}} query, we can conclude that querying the second sstable alone is enough 
to obtain all necessary results, so we pick the index on column {{b}} as 
primary and, instead of using indexes over two sstables, use indexes for only 
one sstable, as specified 
[here|https://github.com/ifesdjeen/cassandra/blob/8a64718d8447029584e24b3a5b75cde70e835dd7/src/java/org/apache/cassandra/index/sasi/plan/QueryController.java#L208-L212].
 
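The selection logic in the example above can be sketched as picking, among the indexed expressions, the one whose index view covers the fewest sstables. The map-based model below is a deliberate simplification (column name mapped to the set of sstables whose index may contain matches), not the actual {{QueryController}} code:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Sketch: for an AND query, the expression backed by the fewest sstables
// bounds the set of sstables that must be consulted, so it is chosen as
// the "primary" index. Hypothetical model, not Cassandra's internals.
public class PrimaryIndexChoice {
    static String calculatePrimary(Map<String, Set<String>> sstablesPerColumn) {
        String primary = null;
        int fewest = Integer.MAX_VALUE;
        for (Map.Entry<String, Set<String>> e : sstablesPerColumn.entrySet()) {
            if (e.getValue().size() < fewest) {
                fewest = e.getValue().size();
                primary = e.getKey();
            }
        }
        return primary;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> view = new LinkedHashMap<>();
        // b = 5 only matches in the second sstable; c = 2 matches in both.
        view.put("b", new HashSet<>(Arrays.asList("sstable-2")));
        view.put("c", new HashSet<>(Arrays.asList("sstable-1", "sstable-2")));
        System.out.println("primary: " + calculatePrimary(view)); // prints "primary: b"
    }
}
```

Reversing the comparison (picking the most sstables) would be the other candidate fix mentioned in the ticket; the comment above explains why the fewest-sstables choice is the intended one.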


was (Author: ifesdjeen):
I can see no correlation between filled columns in rows and this patch. 

Let's say there are two sstables: 

{code}
| a | b | c |
| 1  | 1 | 1 |
| 2 | 2 | 2 |
| 3 | 3 | 3 |

| a | b | c |
| 4  | 4 | 4 |
| 5 | 5 | 2 |
{code}

With a {{PRIMARY KEY a}} . When querying for {{SELECT * FROM tbl WHERE b = 5 
AND c = 2}}. Now, results for the column {{b}} are only in the second sstable. 
Results for the column {{c}} are both in the first and in second sstable. Since 
we're doing {{AND}} query, we can conclude that in order to obtain all 
necessary results, it will be enough to query the second sstable, so we're 
picking the index on the column {{b}} as primary and instead of using indexes 
over two sstables, are using indexes for only one sstable, as specified 
[here|https://github.com/ifesdjeen/cassandra/blob/8a64718d8447029584e24b3a5b75cde70e835dd7/src/java/org/apache/cassandra/index/sasi/plan/QueryController.java#L208-L212].
 

> SASI: calculatePrimary() always returns null
> 
>
> Key: CASSANDRA-12910
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12910
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
>Reporter: Corentin Chary
>Assignee: Corentin Chary
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 0002-sasi-fix-calculatePrimary.patch
>
>
> While investigating performance issues with SASI 
> (https://github.com/criteo/biggraphite/issues/174 if you want to know more) I 
> ended up finding calculatePrimary() in QueryController.java, which apparently 
> should return the "primary index".
> It lacks documentation, and I'm unsure what the "primary index" should be, 
> but apparently this function never returns one because primaryIndexes.size() 
> is always 0.
> https://github.com/apache/cassandra/blob/81f6c784ce967fadb6ed7f58de1328e713eaf53c/src/java/org/apache/cassandra/index/sasi/plan/QueryController.java#L237
> I'm unsure if the proper fix is checking if the collection is empty or 
> reversing the operator (selecting the index with higher cardinality versus 
> the one with lower cardinality).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12910) SASI: calculatePrimary() always returns null

2016-12-06 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726419#comment-15726419
 ] 

Alex Petrov commented on CASSANDRA-12910:
-

I can see no correlation between filled columns in rows and this patch. 

Let's say there are two sstables: 

{code}
| a | b | c |
| 1  | 1 | 1 |
| 2 | 2 | 2 |
| 3 | 3 | 3 |

| a | b | c |
| 4  | 4 | 4 |
| 5 | 5 | 2 |
{code}

With {{PRIMARY KEY (a)}}, consider the query {{SELECT * FROM tbl WHERE b = 5 
AND c = 2}}. Results for the column {{b}} are only in the second sstable, while 
results for the column {{c}} are in both the first and the second sstable. Since 
we're doing an {{AND}} query, we can conclude that querying the second sstable 
is enough to obtain all necessary results, so we pick the index on the column 
{{b}} as primary and use indexes over only one sstable instead of two, as specified 
[here|https://github.com/ifesdjeen/cassandra/blob/8a64718d8447029584e24b3a5b75cde70e835dd7/src/java/org/apache/cassandra/index/sasi/plan/QueryController.java#L208-L212].
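The selection rule described above — among the {{AND}}'ed expressions, pick the one whose index spans the fewest sstables and restrict the view to those — can be sketched as a toy model. This is illustrative Python only; the function name and data shapes are not Cassandra's actual API:

```python
# Toy model of picking a "primary" expression for an AND query: choose the
# expression whose index touches the fewest sstables, since rows outside
# that set cannot satisfy the conjunction. Names are illustrative only.

def choose_primary(expression_views):
    """expression_views maps an expression to the set of sstables whose
    index may contain matches for it."""
    return min(expression_views, key=lambda e: len(expression_views[e]))

views = {
    "b = 5": {"sstable-2"},               # b = 5 only appears in sstable 2
    "c = 2": {"sstable-1", "sstable-2"},  # c = 2 appears in both sstables
}

primary = choose_primary(views)
print(primary)         # b = 5
print(views[primary])  # {'sstable-2'}: only one sstable needs querying
```

As the comment notes, the order of intersection is decided later (in {{RangeIntersectionIterator}}); this step only shrinks the set of sstables whose indexes get consulted at all.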
 

> SASI: calculatePrimary() always returns null
> 
>
> Key: CASSANDRA-12910
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12910
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
>Reporter: Corentin Chary
>Assignee: Corentin Chary
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 0002-sasi-fix-calculatePrimary.patch
>
>
> While investigating performance issues with SASI 
> (https://github.com/criteo/biggraphite/issues/174 if you want to know more) I 
> ended up finding calculatePrimary() in QueryController.java, which apparently 
> should return the "primary index".
> It lacks documentation, and I'm unsure what the "primary index" should be, 
> but apparently this function never returns one because primaryIndexes.size() 
> is always 0.
> https://github.com/apache/cassandra/blob/81f6c784ce967fadb6ed7f58de1328e713eaf53c/src/java/org/apache/cassandra/index/sasi/plan/QueryController.java#L237
> I'm unsure if the proper fix is checking if the collection is empty or 
> reversing the operator (selecting the index with higher cardinality versus 
> the one with lower cardinality).





[jira] [Commented] (CASSANDRA-13007) Make compaction more testable

2016-12-06 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726408#comment-15726408
 ] 

T Jake Luciani commented on CASSANDRA-13007:


CASSANDRA-11844 in case you didn't know about it.

> Make compaction more testable
> -
>
> Key: CASSANDRA-13007
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13007
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jon Haddad
>
> Compaction is written in a manner that makes it difficult to unit test edge 
> cases.  I'm opening this up as a blanket issue, hopefully we can get enough 
> requirements in here to make some decisions to improve the testability and 
> maintainability of compaction code.





[jira] [Updated] (CASSANDRA-12979) checkAvailableDiskSpace doesn't update expectedWriteSize when reducing thread scope

2016-12-06 Thread Nate McCall (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nate McCall updated CASSANDRA-12979:

Fix Version/s: 4.0

> checkAvailableDiskSpace doesn't update expectedWriteSize when reducing thread 
> scope
> ---
>
> Key: CASSANDRA-12979
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12979
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jon Haddad
>Assignee: Jon Haddad
> Fix For: 2.2.9, 3.0.11, 4.0, 3.x
>
>
> If a compaction occurs that looks like it'll take up more space than 
> remaining disk available, the compaction manager attempts to reduce the scope 
> of the compaction by calling {{reduceScopeForLimitedSpace()}} repeatedly.  
> Unfortunately, the while loop passes the {{estimatedWriteSize}} calculated 
> from the original call to {{hasAvailableDiskSpace}}, so the comparisons that 
> are done will always be against the size of the original compaction, rather 
> than the reduced scope one.
> Full method below:
> {code}
> protected void checkAvailableDiskSpace(long estimatedSSTables, long 
> expectedWriteSize)
> {
> if(!cfs.isCompactionDiskSpaceCheckEnabled() && compactionType == 
> OperationType.COMPACTION)
> {
> logger.info("Compaction space check is disabled");
> return;
> }
> while (!getDirectories().hasAvailableDiskSpace(estimatedSSTables, 
> expectedWriteSize))
> {
> if (!reduceScopeForLimitedSpace())
> throw new RuntimeException(String.format("Not enough space 
> for compaction, estimated sstables = %d, expected write size = %d", 
> estimatedSSTables, expectedWriteSize));
>   
> }
> }
> {code}
> I'm proposing to recalculate the {{estimatedSSTables}} and 
> {{expectedWriteSize}} after each iteration of {{reduceScopeForLimitedSpace}}. 
>  





[jira] [Created] (CASSANDRA-13007) Make compaction more testable

2016-12-06 Thread Jon Haddad (JIRA)
Jon Haddad created CASSANDRA-13007:
--

 Summary: Make compaction more testable
 Key: CASSANDRA-13007
 URL: https://issues.apache.org/jira/browse/CASSANDRA-13007
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jon Haddad


Compaction is written in a manner that makes it difficult to unit test edge 
cases.  I'm opening this up as a blanket issue, hopefully we can get enough 
requirements in here to make some decisions to improve the testability and 
maintainability of compaction code.





[jira] [Updated] (CASSANDRA-12979) checkAvailableDiskSpace doesn't update expectedWriteSize when reducing thread scope

2016-12-06 Thread Jon Haddad (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jon Haddad updated CASSANDRA-12979:
---
Status: Patch Available  (was: Open)

See 
https://github.com/apache/cassandra/compare/trunk...rustyrazorblade:12979_fix_check_disk_space?expand=1

> checkAvailableDiskSpace doesn't update expectedWriteSize when reducing thread 
> scope
> ---
>
> Key: CASSANDRA-12979
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12979
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jon Haddad
>Assignee: Jon Haddad
> Fix For: 2.2.9, 3.0.11, 3.x
>
>
> If a compaction occurs that looks like it'll take up more space than 
> remaining disk available, the compaction manager attempts to reduce the scope 
> of the compaction by calling {{reduceScopeForLimitedSpace()}} repeatedly.  
> Unfortunately, the while loop passes the {{estimatedWriteSize}} calculated 
> from the original call to {{hasAvailableDiskSpace}}, so the comparisons that 
> are done will always be against the size of the original compaction, rather 
> than the reduced scope one.
> Full method below:
> {code}
> protected void checkAvailableDiskSpace(long estimatedSSTables, long 
> expectedWriteSize)
> {
> if(!cfs.isCompactionDiskSpaceCheckEnabled() && compactionType == 
> OperationType.COMPACTION)
> {
> logger.info("Compaction space check is disabled");
> return;
> }
> while (!getDirectories().hasAvailableDiskSpace(estimatedSSTables, 
> expectedWriteSize))
> {
> if (!reduceScopeForLimitedSpace())
> throw new RuntimeException(String.format("Not enough space 
> for compaction, estimated sstables = %d, expected write size = %d", 
> estimatedSSTables, expectedWriteSize));
>   
> }
> }
> {code}
> I'm proposing to recalculate the {{estimatedSSTables}} and 
> {{expectedWriteSize}} after each iteration of {{reduceScopeForLimitedSpace}}. 
>  





[jira] [Commented] (CASSANDRA-12979) checkAvailableDiskSpace doesn't update expectedWriteSize when reducing thread scope

2016-12-06 Thread Jon Haddad (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726381#comment-15726381
 ] 

Jon Haddad commented on CASSANDRA-12979:


I've written a small patch against trunk to deal with this issue.  A couple 
notes.

* I moved the calculation of the space taken up by this compaction into 
{{checkAvailableDiskSpace()}}.
* I improved the logging in {{reduceScopeForLimitedSpace()}} to print out the disk 
space that was estimated / required. This is what initially led me to find this 
issue.
* {{checkAvailableDiskSpace()}} no longer accepts the estimates as args since 
they're recalculated internally.
* As far as I can tell there's no way to test this (it wasn't tested before). 
I'm creating follow-up tickets for refactorings that will help test these sorts 
of edge cases.
* This bug has existed at least since 2.2, and probably before that. The patch 
is trivial enough to be backported.

I consider this a critical fix for anyone who's running low on disk space, as 
compactions completely freeze up without this patch, so I'd be happy to supply 
patches for 2.1 -> 3.x.

For initial review: 

https://github.com/apache/cassandra/compare/trunk...rustyrazorblade:12979_fix_check_disk_space?expand=1
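The essence of the fix — recompute the size estimate from the current (possibly reduced) set of sstables on every pass of the loop, rather than reusing the value computed before the loop — can be illustrated with a toy simulation. The sizes and the drop-largest-first reduction policy below are made up for illustration and are not Cassandra's actual logic:

```python
# Toy simulation: shrink a compaction until its estimated output fits on
# disk. The estimate is recomputed on every iteration, which is the point
# of the fix; comparing against the stale pre-loop estimate would never
# observe the reduced scope.

def check_available_disk_space(sstable_sizes, free_space):
    sstables = sorted(sstable_sizes)
    while sum(sstables) > free_space:  # estimate recomputed each pass
        if len(sstables) <= 1:
            raise RuntimeError("Not enough space for compaction")
        sstables.pop()  # stand-in for reduceScopeForLimitedSpace()
    return sstables

# A 100-unit compaction with only 40 units free shrinks to [10, 30].
print(check_available_disk_space([10, 30, 60], free_space=40))  # [10, 30]
```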

> checkAvailableDiskSpace doesn't update expectedWriteSize when reducing thread 
> scope
> ---
>
> Key: CASSANDRA-12979
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12979
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jon Haddad
>Assignee: Jon Haddad
> Fix For: 2.2.9, 3.0.11, 3.x
>
>
> If a compaction occurs that looks like it'll take up more space than 
> remaining disk available, the compaction manager attempts to reduce the scope 
> of the compaction by calling {{reduceScopeForLimitedSpace()}} repeatedly.  
> Unfortunately, the while loop passes the {{estimatedWriteSize}} calculated 
> from the original call to {{hasAvailableDiskSpace}}, so the comparisons that 
> are done will always be against the size of the original compaction, rather 
> than the reduced scope one.
> Full method below:
> {code}
> protected void checkAvailableDiskSpace(long estimatedSSTables, long 
> expectedWriteSize)
> {
> if(!cfs.isCompactionDiskSpaceCheckEnabled() && compactionType == 
> OperationType.COMPACTION)
> {
> logger.info("Compaction space check is disabled");
> return;
> }
> while (!getDirectories().hasAvailableDiskSpace(estimatedSSTables, 
> expectedWriteSize))
> {
> if (!reduceScopeForLimitedSpace())
> throw new RuntimeException(String.format("Not enough space 
> for compaction, estimated sstables = %d, expected write size = %d", 
> estimatedSSTables, expectedWriteSize));
>   
> }
> }
> {code}
> I'm proposing to recalculate the {{estimatedSSTables}} and 
> {{expectedWriteSize}} after each iteration of {{reduceScopeForLimitedSpace}}. 
>  





[jira] [Updated] (CASSANDRA-12979) checkAvailableDiskSpace doesn't update expectedWriteSize when reducing thread scope

2016-12-06 Thread Jon Haddad (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jon Haddad updated CASSANDRA-12979:
---
Fix Version/s: 3.x
   3.0.11
   2.2.9

> checkAvailableDiskSpace doesn't update expectedWriteSize when reducing thread 
> scope
> ---
>
> Key: CASSANDRA-12979
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12979
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jon Haddad
>Assignee: Jon Haddad
> Fix For: 2.2.9, 3.0.11, 3.x
>
>
> If a compaction occurs that looks like it'll take up more space than 
> remaining disk available, the compaction manager attempts to reduce the scope 
> of the compaction by calling {{reduceScopeForLimitedSpace()}} repeatedly.  
> Unfortunately, the while loop passes the {{estimatedWriteSize}} calculated 
> from the original call to {{hasAvailableDiskSpace}}, so the comparisons that 
> are done will always be against the size of the original compaction, rather 
> than the reduced scope one.
> Full method below:
> {code}
> protected void checkAvailableDiskSpace(long estimatedSSTables, long 
> expectedWriteSize)
> {
> if(!cfs.isCompactionDiskSpaceCheckEnabled() && compactionType == 
> OperationType.COMPACTION)
> {
> logger.info("Compaction space check is disabled");
> return;
> }
> while (!getDirectories().hasAvailableDiskSpace(estimatedSSTables, 
> expectedWriteSize))
> {
> if (!reduceScopeForLimitedSpace())
> throw new RuntimeException(String.format("Not enough space 
> for compaction, estimated sstables = %d, expected write size = %d", 
> estimatedSSTables, expectedWriteSize));
>   
> }
> }
> {code}
> I'm proposing to recalculate the {{estimatedSSTables}} and 
> {{expectedWriteSize}} after each iteration of {{reduceScopeForLimitedSpace}}. 
>  





[jira] [Commented] (CASSANDRA-8398) Expose time spent waiting in thread pool queue

2016-12-06 Thread Dikang Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726199#comment-15726199
 ] 

Dikang Gu commented on CASSANDRA-8398:
--

Yes, [~mshuler] helped me hook to cassci last month. Here is the unit 
test/dtests link

|[patch | 
https://github.com/DikangGu/cassandra/commit/447a58a75019998214eb16dfef90387da9c05583]|
 [unit test | 
https://cassci.datastax.com/view/Dev/view/DikangGu/job/DikangGu-CASSANDRA-8398-trunk-testall/]|
 [dtest | 
https://cassci.datastax.com/view/Dev/view/DikangGu/job/DikangGu-CASSANDRA-8398-trunk-dtest/]|

> Expose time spent waiting in thread pool queue 
> ---
>
> Key: CASSANDRA-8398
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8398
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: T Jake Luciani
>Assignee: Dikang Gu
>Priority: Minor
>  Labels: lhf
> Fix For: 3.12
>
> Attachments: tpstats.png
>
>
> We are missing an important source of latency in our system, the time waiting 
> to be processed by thread pools.  We should add a metric for this so someone 
> can easily see how much time is spent just waiting to be processed.
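The metric being proposed — the time a task spends sitting in the executor's queue before a worker picks it up — can be sketched as follows (illustrative Python, not the Cassandra implementation):

```python
# Toy sketch of a queue-wait metric: stamp each task when it is submitted
# and record the elapsed time when a worker finally starts running it.
import time
from concurrent.futures import ThreadPoolExecutor

queue_wait_times = []

def submit_with_wait_metric(executor, fn, *args):
    enqueued = time.monotonic()
    def wrapped(*a):
        queue_wait_times.append(time.monotonic() - enqueued)  # queue wait
        return fn(*a)
    return executor.submit(wrapped, *args)

# With a single worker, later tasks visibly accumulate queue-wait time.
with ThreadPoolExecutor(max_workers=1) as ex:
    futures = [submit_with_wait_metric(ex, lambda x: x * 2, i) for i in range(4)]
    results = [f.result() for f in futures]

print(results)  # [0, 2, 4, 6]
```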





[jira] [Comment Edited] (CASSANDRA-13006) Disable automatic heap dumps on OOM error

2016-12-06 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726095#comment-15726095
 ] 

Benjamin Lerer edited comment on CASSANDRA-13006 at 12/6/16 5:14 PM:
-

[~anmolsharma.141] Out of curiosity, how often do your nodes die of an OOM 
error? 


was (Author: blerer):
[~anmolsharma.141]] Out of curiosity, how often do your nodes dies of an OOM 
error? 

> Disable automatic heap dumps on OOM error
> -
>
> Key: CASSANDRA-13006
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13006
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: anmols
>Priority: Minor
>
> With CASSANDRA-9861, a change was added to enable collecting heap dumps by 
> default if the process encountered an OOM error. These heap dumps are stored 
> in the Apache Cassandra home directory unless configured otherwise (see 
> [Cassandra Support 
> Document|https://support.datastax.com/hc/en-us/articles/204225959-Generating-and-Analyzing-Heap-Dumps]
>  for this feature).
>  
> The creation and storage of heap dumps aids debugging and investigative 
> workflows, but is not desirable for a production environment where these 
> heap dumps may occupy a large amount of disk space and require manual 
> intervention for cleanups. 
>  
> Managing heap dumps on out-of-memory errors and configuring the paths for 
> these heap dumps are available as standard JVM options. The current behavior 
> conflicts with the Boolean JVM flag HeapDumpOnOutOfMemoryError. 
>  
> A patch can be proposed here that would make the heap dump on OOM error honor 
> the HeapDumpOnOutOfMemoryError flag. Users who would want to still generate 
> heap dumps on OOM errors can set the -XX:+HeapDumpOnOutOfMemoryError JVM 
> option.
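For operators who do want dumps after such a change, the opt-in might look like the following fragment. The exact file (jvm.options vs. cassandra-env.sh) and the dump path are assumptions, shown only to illustrate the two JVM flags involved:

```shell
# Opt back in to heap dumps on OOM and choose where they land
# (the path below is illustrative).
JVM_OPTS="$JVM_OPTS -XX:+HeapDumpOnOutOfMemoryError"
JVM_OPTS="$JVM_OPTS -XX:HeapDumpPath=/var/lib/cassandra/heapdumps"
```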





[jira] [Commented] (CASSANDRA-13006) Disable automatic heap dumps on OOM error

2016-12-06 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726095#comment-15726095
 ] 

Benjamin Lerer commented on CASSANDRA-13006:


[~anmolsharma.141] Out of curiosity, how often do your nodes die of an OOM 
error? 

> Disable automatic heap dumps on OOM error
> -
>
> Key: CASSANDRA-13006
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13006
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: anmols
>Priority: Minor
>
> With CASSANDRA-9861, a change was added to enable collecting heap dumps by 
> default if the process encountered an OOM error. These heap dumps are stored 
> in the Apache Cassandra home directory unless configured otherwise (see 
> [Cassandra Support 
> Document|https://support.datastax.com/hc/en-us/articles/204225959-Generating-and-Analyzing-Heap-Dumps]
>  for this feature).
>  
> The creation and storage of heap dumps aids debugging and investigative 
> workflows, but is not desirable for a production environment where these 
> heap dumps may occupy a large amount of disk space and require manual 
> intervention for cleanups. 
>  
> Managing heap dumps on out-of-memory errors and configuring the paths for 
> these heap dumps are available as standard JVM options. The current behavior 
> conflicts with the Boolean JVM flag HeapDumpOnOutOfMemoryError. 
>  
> A patch can be proposed here that would make the heap dump on OOM error honor 
> the HeapDumpOnOutOfMemoryError flag. Users who would want to still generate 
> heap dumps on OOM errors can set the -XX:+HeapDumpOnOutOfMemoryError JVM 
> option.





[jira] [Commented] (CASSANDRA-12910) SASI: calculatePrimary() always returns null

2016-12-06 Thread Corentin Chary (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726086#comment-15726086
 ] 

Corentin Chary commented on CASSANDRA-12910:


Note that this only has a positive effect on tables that are really sparse, it 
won't change anything if all your columns are filled in all your rows.

> SASI: calculatePrimary() always returns null
> 
>
> Key: CASSANDRA-12910
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12910
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
>Reporter: Corentin Chary
>Assignee: Corentin Chary
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 0002-sasi-fix-calculatePrimary.patch
>
>
> While investigating performance issues with SASI 
> (https://github.com/criteo/biggraphite/issues/174 if you want to know more) I 
> ended up finding calculatePrimary() in QueryController.java, which apparently 
> should return the "primary index".
> It lacks documentation, and I'm unsure what the "primary index" should be, 
> but apparently this function never returns one because primaryIndexes.size() 
> is always 0.
> https://github.com/apache/cassandra/blob/81f6c784ce967fadb6ed7f58de1328e713eaf53c/src/java/org/apache/cassandra/index/sasi/plan/QueryController.java#L237
> I'm unsure if the proper fix is checking if the collection is empty or 
> reversing the operator (selecting the index with higher cardinality versus 
> the one with lower cardinality).





[jira] [Commented] (CASSANDRA-12915) SASI: Index intersection can be very inefficient

2016-12-06 Thread Corentin Chary (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726079#comment-15726079
 ] 

Corentin Chary commented on CASSANDRA-12915:


{code}
CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 
'replication_factor': '1'}  AND durable_writes = true;

CREATE TABLE test.test (
r text PRIMARY KEY,
a text,
b text,
c text,
data text
) WITH bloom_filter_fp_chance = 0.01
AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
AND comment = ''
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32', 'min_threshold': '4'}
AND compression = {'chunk_length_in_kb': '64', 'class': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND crc_check_chance = 1.0
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99PERCENTILE';
CREATE CUSTOM INDEX test_a_idx ON test.test (a) USING 
'org.apache.cassandra.index.sasi.SASIIndex' WITH OPTIONS = {'analyzer_class': 
'org.apache.cassandra.index.sasi.analyzer.NonTokenizingAnalyzer', 
'case_sensitive': 'true'};
CREATE CUSTOM INDEX test_c_idx ON test.test (c) USING 
'org.apache.cassandra.index.sasi.SASIIndex' WITH OPTIONS = {'analyzer_class': 
'org.apache.cassandra.index.sasi.analyzer.NonTokenizingAnalyzer', 
'case_sensitive': 'true'};
CREATE CUSTOM INDEX test_b_idx ON test.test (b) USING 
'org.apache.cassandra.index.sasi.SASIIndex' WITH OPTIONS = {'analyzer_class': 
'org.apache.cassandra.index.sasi.analyzer.NonTokenizingAnalyzer', 
'case_sensitive': 'true'};
{code}

{code}
$ cat > generate.py
import sys
import random

def main(args):
    n = int(args[1])

    for i in xrange(n):
        a = '0'
        b = i % 10
        c = i % (n / 10) + random.randint(0, 10)
        print("%d,%s,%d,%d,%d" % (i, a, b, c, i))

if __name__ == '__main__':
    main(sys.argv)
$ python generate.py 200 > test.csv
{code}


{code}
COPY test.test FROM 'test.csv'  WITH MAXBATCHSIZE = 100 AND MAXATTEMPTS = 10 
AND MAXINSERTERRORS = 99;
SELECT * FROM test.test WHERE a = '0' AND c = '44859' ALLOW FILTERING;
{code}

3.X - with logs but without optimisations - 20ms (according to tracing)

{code}
TRACE [ReadStage-2] 2016-12-06 17:58:08,743 QueryPlan.java:60 - Analyzing 
org.apache.cassandra.config.CFMetaData@450c6f26[cfId=04e98180-bbb7-11e6-9680-cf420076f59b,ksName=test,cfName=test,flags=[COMPOUND],params=TableParams{comment=,
 read_repair_chance=0.0, dclocal_read_repair_chance=0.1, 
bloom_filter_fp_chance=0.01, crc_check_chance=1.0, gc_grace_seconds=864000, 
default_time_to_live=0, memtable_flush_period_in_ms=0, min_index_interval=128, 
max_index_interval=2048, speculative_retry=99PERCENTILE, caching={'keys' : 
'ALL', 'rows_per_partition' : 'NONE'}, 
compaction=CompactionParams{class=org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy,
 options={max_threshold=32, min_threshold=4}}, 
compression=org.apache.cassandra.schema.CompressionParams@8f72cea3, 
extensions={}, cdc=false},comparator=comparator(),partitionColumns=[[] | [a b c 
data]],partitionKeyColumns=[r],clusteringColumns=[],keyValidator=org.apache.cassandra.db.marshal.UTF8Type,columnMetadata=[a,
 r, b, c, 
data],droppedColumns={},triggers=[],indexes=[org.apache.cassandra.schema.IndexMetadata@78b5523d[id=2e3c652f-e00c-3bd3-b2e9-90de1c170900,name=test_a_idx,kind=CUSTOM,options={analyzer_class=org.apache.cassandra.index.sasi.analyzer.NonTokenizingAnalyzer,
 case_sensitive=true, class_name=org.apache.cassandra.index.sasi.SASIIndex, 
target=a}], 
org.apache.cassandra.schema.IndexMetadata@4e4f20d2[id=6b00489b-7010-396e-9348-9f32f5167f88,name=test_b_idx,kind=CUSTOM,options={analyzer_class=org.apache.cassandra.index.sasi.analyzer.NonTokenizingAnalyzer,
 case_sensitive=true, class_name=org.apache.cassandra.index.sasi.SASIIndex, 
target=b}], 
org.apache.cassandra.schema.IndexMetadata@681c0362[id=45fcb286-b87a-3d18-a04b-b899a9880c91,name=test_c_idx,kind=CUSTOM,options={analyzer_class=org.apache.cassandra.index.sasi.analyzer.NonTokenizingAnalyzer,
 case_sensitive=true, class_name=org.apache.cassandra.index.sasi.SASIIndex, 
target=c}]]] [a = 0, c = 44859]
TRACE [ReadStage-2] 2016-12-06 17:58:08,744 QueryController.java:289 - 
Expressions: [Expression{name: c, op: EQ, lower: (44859, true), upper: (44859, 
true), exclusions: []}, Expression{name: a, op: EQ, lower: (0, true), upper: 
(0, true), exclusions: []}], Op: AND
TRACE [ReadStage-2] 2016-12-06 17:58:08,745 QueryController.java:321 - Final 
view: {Expression{name: a, op: EQ, lower: (0, true), upper: (0, true), 
exclusions: []}=[SSTableIndex(column: a, SSTable: 

[jira] [Commented] (CASSANDRA-12910) SASI: calculatePrimary() always returns null

2016-12-06 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726019#comment-15726019
 ] 

Alex Petrov commented on CASSANDRA-12910:
-

I've used a simple test to verify and I think your fix is good. Although I do 
not see a good way to test it without exposing a lot of internals for now.

So what happens is in {{getView}} instead of querying all SSTables, we're going 
to query only those which would (possibly) yield results for the expression 
that involves the least amount of sstables. The order does not matter, as it's 
going to be changed later in {{RangeIntersectionIterator}}, depending on the 
size of token trees. 

I've tested the patch on our CI: 

|[3.X|https://github.com/ifesdjeen/cassandra/tree/12910-3.x]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12910-3.x-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12910-3.x-dtest/]|
|[trunk|https://github.com/ifesdjeen/cassandra/tree/12910-trunk]|[utest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12910-trunk-testall/]|[dtest|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-12910-trunk-dtest/]|


> SASI: calculatePrimary() always returns null
> 
>
> Key: CASSANDRA-12910
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12910
> Project: Cassandra
>  Issue Type: Bug
>  Components: sasi
>Reporter: Corentin Chary
>Assignee: Corentin Chary
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 0002-sasi-fix-calculatePrimary.patch
>
>
> While investigating performance issues with SASI 
> (https://github.com/criteo/biggraphite/issues/174 if you want to know more) I 
> ended up finding calculatePrimary() in QueryController.java, which apparently 
> should return the "primary index".
> It lacks documentation, and I'm unsure what the "primary index" should be, 
> but apparently this function never returns one because primaryIndexes.size() 
> is always 0.
> https://github.com/apache/cassandra/blob/81f6c784ce967fadb6ed7f58de1328e713eaf53c/src/java/org/apache/cassandra/index/sasi/plan/QueryController.java#L237
> I'm unsure if the proper fix is checking if the collection is empty or 
> reversing the operator (selecting the index with higher cardinality versus 
> the one with lower cardinality).





[jira] [Commented] (CASSANDRA-8398) Expose time spent waiting in thread pool queue

2016-12-06 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15725999#comment-15725999
 ] 

Michael Shuler commented on CASSANDRA-8398:
---

I just sent [~dikanggu] an email with details.

> Expose time spent waiting in thread pool queue 
> ---
>
> Key: CASSANDRA-8398
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8398
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: T Jake Luciani
>Assignee: Dikang Gu
>Priority: Minor
>  Labels: lhf
> Fix For: 3.12
>
> Attachments: tpstats.png
>
>
> We are missing an important source of latency in our system, the time waiting 
> to be processed by thread pools.  We should add a metric for this so someone 
> can easily see how much time is spent just waiting to be processed.





[jira] [Comment Edited] (CASSANDRA-12859) Column-level permissions

2016-12-06 Thread Boris Melamed (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15725984#comment-15725984
 ] 

Boris Melamed edited comment on CASSANDRA-12859 at 12/6/16 4:36 PM:


I've got a working implementation with unit tests and dtests. 
Planning to prepare a clean commit for review by tomorrow.
Still to do, at this stage: 
Testing:
- A few more tests (special chars in columns names, MVs)

Implementation and testing:
- LIST PERMISSIONS.
- Purging of column constraints after dropping columns.


was (Author: bmel):
I've got a working implementation with unit tests and dtests. 
Planning to prepare a clean commit for review, by tomorrow.
Still to do, at this stage: 
Testing:
- A few more tests (special chars in columns names, MVs)
Implementation and testing:
- LIST PERMISSIONS.
- Purging of column constraints after dropping columns.

> Column-level permissions
> 
>
> Key: CASSANDRA-12859
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12859
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Core, CQL
>Reporter: Boris Melamed
>  Labels: doc-impacting
> Attachments: Cassandra Proposal - Column-level permissions v2.docx, 
> Cassandra Proposal - Column-level permissions.docx
>
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> h4. Here is a draft of: 
> Cassandra Proposal - Column-level permissions.docx (attached)
> h4. Quoting the 'Overview' section:
> The purpose of this proposal is to add column-level (field-level) permissions 
> to Cassandra. It is my intent to soon start implementing this feature in a 
> fork, and to submit a pull request once it’s ready.
> h4. Motivation
> Cassandra already supports permissions on keyspace and table (column family) 
> level. Sources:
> * http://www.datastax.com/dev/blog/role-based-access-control-in-cassandra
> * https://cassandra.apache.org/doc/latest/cql/security.html#data-control
> At IBM, we have use cases in the area of big data analytics where 
> column-level access permissions are also a requirement. All industry RDBMS 
> products support this level of permission control, and regulators expect it 
> from all data-based systems.
> h4. Main day-one requirements
> # Extend CQL (Cassandra Query Language) to be able to optionally specify a 
> list of individual columns, in the {{GRANT}} statement. The relevant 
> permission types are: {{MODIFY}} (for {{UPDATE}} and {{INSERT}}) and 
> {{SELECT}}.
> # Persist the optional information in the appropriate system table 
> ‘system_auth.role_permissions’.
> # Enforce the column access restrictions during execution. Details:
> #* Should fit with the existing permission propagation down a role chain.
> #* Proposed message format when a user’s roles give access to the queried 
> table but not to all of the selected, inserted, or updated columns:
>   "User %s has no %s permission on column %s of table %s"
> #* Error will report only the first checked column. 
> Nice to have: list all inaccessible columns.
> #* Error code is the same as for table access denial: 2100.
> h4. Additional day-one requirements
> # Reflect the column-level permissions in statements of type 
> {{LIST ALL PERMISSIONS OF someuser;}}
> # When columns are dropped or renamed, trigger purging or adapting of their 
> permissions
> # Performance should not degrade in any significant way.
> # Backwards compatibility
> #* Permission enforcement for DBs created before the upgrade should continue 
> to work with the same behavior after upgrading to a version that allows 
> column-level permissions.
> #* Previous CQL syntax will remain valid, and have the same effect as before.
> h4. Documentation
> * 
> https://cassandra.apache.org/doc/latest/cql/security.html#grammar-token-permission
> * Feedback request: any others?
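
As a rough illustration of the first day-one requirement, the extended CQL might look something like the following. This is only a sketch: the column-list syntax, keyword placement, and the example keyspace/table/role names are assumptions, not the final grammar from the attached proposal.

```cql
-- Hypothetical syntax sketch: grant SELECT on two specific columns only.
GRANT SELECT ('name', 'salary') ON TABLE corp.employees TO analyst_role;

-- MODIFY covers both INSERT and UPDATE; here it would apply to one column.
GRANT MODIFY ('salary') ON TABLE corp.employees TO payroll_role;

-- Existing table-level syntax remains valid and keeps its current meaning.
GRANT SELECT ON TABLE corp.employees TO auditor_role;
```

Under this sketch, a SELECT touching any column outside the granted set would fail with the proposed message format, "User %s has no %s permission on column %s of table %s", using the same error code as table access denial (2100).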



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12859) Column-level permissions

2016-12-06 Thread Boris Melamed (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15725984#comment-15725984
 ] 

Boris Melamed commented on CASSANDRA-12859:
---

I've got a working implementation with unit tests and dtests. 
Planning to prepare a clean commit for review, by tomorrow.
Still to do, at this stage: 
Testing:
- A few more tests (special chars in column names, MVs)
Implementation and testing:
- LIST PERMISSIONS.
- Purging of column constraints after dropping columns.



[jira] [Commented] (CASSANDRA-12859) Column-level permissions

2016-12-06 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15725974#comment-15725974
 ] 

Jeremiah Jordan commented on CASSANDRA-12859:
-

bq. 5.  Granting permissions on columns is additive, not replacing. This may 
indeed be more intuitive. For now, however, one cannot specify columns on 
REVOKE. Therefore, the only way to revoke a column permission is to revoke that 
permission on the whole table and then to grant the permission on the 
previously included columns except c1.

That is very user-unfriendly. If GRANT is going to be additive rather than 
replacing, then REVOKE needs to optionally take columns as well, so that it 
is subtractive.
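
A column-scoped REVOKE along those lines might look like this (purely illustrative syntax, mirroring the hypothetical column-list form of the proposed GRANT; names are made up):

```cql
-- Grants are additive: after these two statements the role can read
-- name, salary, and bonus.
GRANT SELECT ('name', 'salary') ON TABLE corp.employees TO analyst_role;
GRANT SELECT ('bonus') ON TABLE corp.employees TO analyst_role;

-- A subtractive REVOKE would remove a single column from the granted set
-- without dropping the permission on the remaining columns.
REVOKE SELECT ('bonus') ON TABLE corp.employees FROM analyst_role;
```

Without such a column-scoped REVOKE, the only way to withdraw access to one column is to revoke the table-level permission and re-grant every other column individually.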


