[jira] [Commented] (CASSANDRA-12481) dtest failure in cqlshlib.test.test_cqlsh_output.TestCqlshOutput.test_describe_keyspace_output

2016-08-19 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15429245#comment-15429245
 ] 

Stefania commented on CASSANDRA-12481:
--

May be related to CASSANDRA-12315?

> dtest failure in 
> cqlshlib.test.test_cqlsh_output.TestCqlshOutput.test_describe_keyspace_output
> --
>
> Key: CASSANDRA-12481
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12481
> Project: Cassandra
>  Issue Type: Test
>Reporter: Craig Kodman
>Assignee: DS Test Eng
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.0_cqlsh_tests/29/testReport/cqlshlib.test.test_cqlsh_output/TestCqlshOutput/test_describe_keyspace_output
> {code}
> Error Message
> errors={'127.0.0.1': 'Client request timeout. See 
> Session.execute[_async](timeout)'}, last_host=127.0.0.1
> {code}
> http://cassci.datastax.com/job/cassandra-3.0_cqlsh_tests/lastCompletedBuild/cython=no,label=ctool-lab/testReport/cqlshlib.test.test_cqlsh_output/TestCqlshOutput/test_describe_keyspace_output/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6216) Level Compaction should persist last compacted key per level

2016-08-19 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15429243#comment-15429243
 ] 

Marcus Eriksson commented on CASSANDRA-6216:


If we want to start compaction in level n, maybe we could find the newest (by 
creation or modification time) file in level n+1 and use its last token when 
finding the start point in level n?
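For illustration, a minimal sketch of that idea (hypothetical SSTable accessors, not the actual LeveledManifest code):

{code}
// Sketch only: recover a "last compacted" token for level n after a restart by
// looking at the newest file in level n+1 (lastModifiedTime(), lastToken() and
// firstToken() are illustrative names).
SSTable newestInNextLevel = Collections.max(
        getLevel(n + 1),
        Comparator.comparingLong(SSTable::lastModifiedTime));   // or creation time
Token resumeFrom = newestInNextLevel.lastToken();

// pick the first sstable in level n that starts after that token, wrapping if needed
for (SSTable candidate : getLevel(n))   // assumed sorted by first token
    if (candidate.firstToken().compareTo(resumeFrom) > 0)
        return candidate;
{code}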

> Level Compaction should persist last compacted key per level
> 
>
> Key: CASSANDRA-6216
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6216
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: sankalp kohli
>Priority: Minor
>  Labels: compaction, lcs
> Attachments: JIRA-6216.diff
>
>
> Level compaction does not persist the last compacted key per level. This is 
> important for higher levels. 
> The sstables with higher tokens and in higher levels won't get a chance to 
> compact as the last compacted key will get reset after a restart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11841) Add keep-alive to stream protocol

2016-08-19 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15429101#comment-15429101
 ] 

Paulo Motta commented on CASSANDRA-11841:
-

Thanks for review!

bq. Ninja fix though needs to be updated

oops, fixed the above branches and resubmitted the tests (will update when CI 
looks good).

> Add keep-alive to stream protocol
> -
>
> Key: CASSANDRA-11841
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11841
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Paulo Motta
>Assignee: Paulo Motta
> Fix For: 3.x
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6216) Level Compaction should persist last compacted key per level

2016-08-19 Thread Dikang Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15429086#comment-15429086
 ] 

Dikang Gu commented on CASSANDRA-6216:
--

I'd like to work on this. I'll see if I can store the last compacted keys in 
CFMetaData; does that sound like the right direction?

> Level Compaction should persist last compacted key per level
> 
>
> Key: CASSANDRA-6216
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6216
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: sankalp kohli
>Priority: Minor
>  Labels: compaction, lcs
> Attachments: JIRA-6216.diff
>
>
> Level compaction does not persist the last compacted key per level. This is 
> important for higher levels. 
> The sstables with higher tokens and in higher levels won't get a chance to 
> compact as the last compacted key will get reset after a restart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-2848) Make the Client API support passing down timeouts

2016-08-19 Thread Geoffrey Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-2848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Yu updated CASSANDRA-2848:
---
Attachment: 2848-trunk-v2.txt

I'm attaching a second version of the patch that incorporates the changes in 
CASSANDRA-12256.

*TL;DR:* The timeout is represented as an {{OptionalLong}} that is encoded in 
{{QueryOptions}}. It is passed all the way to the replica nodes on reads 
through {{ReadCommand}}, but is only kept on the coordinator for writes.


The optional client-specified timeout is decoded as part of {{QueryOptions}}. 
Since this timeout may or may not be specified by a client, I opted to use an 
{{OptionalLong}} in an effort to make it clearer in the code that this is 
optional. I’ve gated the use of the new timeout flag (and encoding the timeout) 
to protocol v5 and above.

On the read path, the timeout is kept within the {{ReadCommand}} and referenced 
in the {{ReadCallback.awaitResults()}}. It is also serialized within the 
{{ReadCommand}} so that replica nodes can use it when setting the monitoring 
time in {{ReadCommandVerbHandler}}. Of course, because the time when the query 
started is not propagated to the replicas, this will only enforce the timeout 
from when the {{MessageIn}} was constructed.

On the write path, the timeout is just passed through the call stack into the 
{{AbstractWriteResponseHandler}}/{{AbstractPaxosCallback}} where it is 
referenced in the respective {{await()}} calls.
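To make the shape of this concrete, here is a rough sketch of the read-path fallback (names like {{getTimeoutInMillis()}} are illustrative; the actual change is in the attached patch):

{code}
// Sketch only: an optional client-supplied timeout falls back to the server
// default when the client did not specify one.
OptionalLong clientTimeout = queryOptions.getTimeoutInMillis();   // hypothetical accessor
long timeoutMillis = clientTimeout.orElse(DatabaseDescriptor.getReadRpcTimeout());

// ReadCallback.awaitResults() then waits at most timeoutMillis before raising a read timeout.
{code}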

I had investigated the possibility of passing the timeout to the replicas on 
the write path. To do so we'd need to incorporate it into the outgoing 
internode message when making a write, meaning placing it into {{Mutation}} or 
otherwise creating some sort of wrapper around a mutation that can hold the 
timeout. It seemed like this would be a very invasive change for minimal gain, 
considering that being able to abort an in-progress write didn't seem as useful 
as aborting an in-progress read.

This still requires a version bump in the internode protocol to support the 
change in serialization of {{ReadCommand}} (I haven't touched 
{{MessagingService.current_version}} yet, though). If we don't want to wait 
till 4.0, we can delay this part of the patch and just retain the custom 
timeout on the coordinator (i.e. don't serialize the timeout). Once the branch 
for 4.0 is available, we can modify the serialization to allow us to pass the 
timeout to the replicas.

I'd also like to include some dtests for this, namely to just validate which 
timeout is being used on the coordinator. Is the accepted practice for doing 
something like this to log something and assert for the presence of the log 
entry? I want to avoid relying on the actual timeout observed since that can 
cause the test to be flaky.

> Make the Client API support passing down timeouts
> -
>
> Key: CASSANDRA-2848
> URL: https://issues.apache.org/jira/browse/CASSANDRA-2848
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Chris Goffinet
>Assignee: Geoffrey Yu
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 2848-trunk-v2.txt, 2848-trunk.txt
>
>
> Having a max server RPC timeout is good for the worst case, but many applications 
> that have middleware in front of Cassandra might have stricter timeout 
> requirements. In a fail-fast environment, if my application, starting at say 
> the front-end, only has 20ms to process a request, and it must connect to X 
> services down the stack, by the time it hits Cassandra we might only have 
> 10ms. I propose we provide the ability to optionally specify the timeout on 
> each call we do.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8616) sstable tools may result in commit log segments be written

2016-08-19 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15429059#comment-15429059
 ] 

Yuki Morishita commented on CASSANDRA-8616:
---

[~snazy] the problem comes from opening the schema in the tools. They need to 
access the schema by opening the {{system_schema}} keyspace, which creates a 
commit log. I tried to patch the commit log part, but various "online" features 
described above came up after that. CASSANDRA-9587 can be useful here.

> sstable tools may result in commit log segments be written
> --
>
> Key: CASSANDRA-8616
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8616
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Tyler Hobbs
>Assignee: Yuki Morishita
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: 8161-2.0.txt
>
>
> There was a report of sstable2json causing commitlog segments to be written 
> out when run.  I haven't attempted to reproduce this yet, so that's all I 
> know for now.  Since sstable2json loads the conf and schema, I'm thinking 
> that it may inadvertently be triggering the commitlog code.
> sstablescrub, sstableverify, and other sstable tools have the same issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11067) Improve SASI syntax

2016-08-19 Thread Dave Brosius (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15425619#comment-15425619
 ] 

Dave Brosius edited comment on CASSANDRA-11067 at 8/19/16 11:07 PM:


[~xedin]

nitpick

PrefixTermTree.search(Expression e)

sanity checks for e == null, but then dereferences it anyway via 
super.search(e).

maybe just return emptySet() when e == null
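
For illustration, a minimal sketch of that guard (the method shape and return type are assumptions, not the actual PrefixTermTree code):

{code}
public Set<SSTableIndex> search(Expression e)
{
    if (e == null)
        return Collections.emptySet();   // bail out instead of passing a null Expression on

    // ... existing prefix/range handling ...
    return super.search(e);
}
{code}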


was (Author: dbrosius):
nitpick

PrefixTermTree.search(Expression e)

sanity checks for e == null, but then dereferences it anyway, here  
super.search(e)

maybe just return emptySet() on e == null

> Improve SASI syntax
> ---
>
> Key: CASSANDRA-11067
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11067
> Project: Cassandra
>  Issue Type: Task
>  Components: CQL
>Reporter: Jonathan Ellis
>Assignee: Pavel Yaskevich
>  Labels: client-impacting, sasi
> Fix For: 3.4
>
>
> I think everyone agrees that a LIKE operator would be ideal, but that's 
> probably not in scope for an initial 3.4 release.
> Still, I'm uncomfortable with the initial approach of overloading = to mean 
> "satisfies index expression."  The problem is that it will be very difficult 
> to back out of this behavior once people are using it.
> I propose adding a new operator in the interim instead.  Call it MATCHES, 
> maybe.  With the exact same behavior that SASI currently exposes, just with a 
> separate operator rather than being rolled into =.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12208) Estimated droppable tombstones given by sstablemetadata counts tombstones that aren't actually "droppable"

2016-08-19 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15429002#comment-15429002
 ] 

Yuki Morishita commented on CASSANDRA-12208:


+1.

One nit: use {{new Option(null, GCGS_KEY, ...)}} instead so we can see the long 
option name ("{{--gc_grace_seconds}}") in the help output.
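
For reference, a sketch of the suggested form (assuming commons-cli, with an illustrative description string):

{code}
// Long-option form so "--gc_grace_seconds" shows up in the help output.
// GCGS_KEY is assumed to hold "gc_grace_seconds"; the description text is illustrative.
options.addOption(new Option(null, GCGS_KEY, true,
                             "GC grace seconds to assume when calculating droppable tombstones"));
{code}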

> Estimated droppable tombstones given by sstablemetadata counts tombstones 
> that aren't actually "droppable"
> --
>
> Key: CASSANDRA-12208
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12208
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Thanh
>Assignee: Marcus Eriksson
>Priority: Minor
> Fix For: 3.0.x, 3.x
>
>
> => "Estimated droppable tombstones" given by *sstablemetadata* counts 
> tombstones that aren't actually "droppable"
> To be clear, the "Estimated droppable tombstones" calculation counts 
> tombstones that have not yet passed gc_grace_seconds as droppable tombstones, 
> which is unexpected, since such tombstones aren't droppable.
> To observe the problem:
> Create a table using the default gc_grace_seconds (the default gc_grace_seconds 
> is 86400, i.e. 1 day).
> Populate the table with a couple of records.
> Do a delete.
> Do a "nodetool flush" to flush the memtable to disk.
> Do an "sstablemetadata " to get the metadata of the sstable you just 
> created by doing the flush, and observe that the Estimated droppable 
> tombstones is greater than 0.0 (the actual value depends on the total number of 
> inserts/updates/deletes that you did before triggering the flush).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11752) histograms/metrics in 2.2 do not appear recency biased

2016-08-19 Thread Dave Brosius (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15428987#comment-15428987
 ] 

Dave Brosius commented on CASSANDRA-11752:
--

[~eperott] DecayingEstimatedHistogramReservoir.rescale does

long newValue = Math.round((decayingBuckets.get(i) / rescaleFactor));

i.e. it rounds a long. Did you want to do something else, double division 
perhaps?
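
To illustrate the difference (assuming both operands are longs, as the comment implies):

{code}
long bucket = 7, rescaleFactor = 2;

long v1 = Math.round(bucket / rescaleFactor);            // long division truncates first: v1 == 3
long v2 = Math.round((double) bucket / rescaleFactor);   // double division, then rounding: v2 == 4
{code}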

> histograms/metrics in 2.2 do not appear recency biased
> --
>
> Key: CASSANDRA-11752
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11752
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Chris Burroughs
>Assignee: Per Otterström
>  Labels: metrics
> Fix For: 2.2.8, 3.0.9, 3.8
>
> Attachments: 11752-2.2-v2.txt, 11752-2.2.txt, boost-metrics.png, 
> c-jconsole-comparison.png, c-metrics.png, default-histogram.png, 
> server-patch-v2.png
>
>
> In addition to upgrading to metrics3, CASSANDRA-5657 switched to using a 
> custom histogram implementation.  After upgrading to Cassandra 2.2, 
> histograms/timer metrics are now suspiciously flat.  To be useful for 
> graphing and alerting, metrics need to be biased towards recent events.
> I have attached images that I think illustrate this.
>  * The first two are a comparison between latency observed by a C* 2.2 (us) 
> cluster showing very flat lines and a client (using metrics 2.2.0, ms) 
> showing server performance problems.  We can't rule out with total certainty 
> that something else isn't the cause (that's why we measure from both the 
> client & server) but they very rarely disagree.
>  * The 3rd image compares jconsole viewing of metrics on a 2.2 and 2.1 
> cluster over several minutes.  Not a single digit changed on the 2.2 cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11841) Add keep-alive to stream protocol

2016-08-19 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15428953#comment-15428953
 ] 

Yuki Morishita commented on CASSANDRA-11841:


Thanks, trunk patch looks good to me.

The ninja fix needs to be updated though, since you omitted {{String.format}} from 
{{logger.error}} ("%s" is for the former; you need "{}" for the latter).
So it needs to be:

{code}
logger.error(String.format("Error while reading from socket from %s.", 
socket.getRemoteSocketAddress()), t);
{code}
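
Alternatively (just a sketch; this relies on SLF4J treating a trailing {{Throwable}} argument specially), the {{String.format}} call could be dropped entirely:

{code}
logger.error("Error while reading from socket from {}.", socket.getRemoteSocketAddress(), t);
{code}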

> Add keep-alive to stream protocol
> -
>
> Key: CASSANDRA-11841
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11841
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Paulo Motta
>Assignee: Paulo Motta
> Fix For: 3.x
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11841) Add keep-alive to stream protocol

2016-08-19 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-11841:
---
Fix Version/s: 3.x
   Status: Open  (was: Patch Available)

> Add keep-alive to stream protocol
> -
>
> Key: CASSANDRA-11841
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11841
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Paulo Motta
>Assignee: Paulo Motta
> Fix For: 3.x
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12490) Add sequence distribution type to cassandra stress

2016-08-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-12490:

Reviewer: Stefania

> Add sequence distribution type to cassandra stress
> --
>
> Key: CASSANDRA-12490
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12490
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Ben Slater
>Assignee: Ben Slater
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 12490-trunk.patch
>
>
> When using the write command, cassandra stress sequentially generates seeds. 
> This ensures generated values don't overlap (unless the sequence wraps), 
> providing a more predictable number of inserted records (and generating a base 
> set of data without wasted writes).
> When using a yaml stress spec there is no sequence distribution available. 
> I think it would be useful to have this for doing an initial load of data for 
> testing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12367) Add an API to request the size of a CQL partition

2016-08-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-12367:

Reviewer: Marcus Eriksson

> Add an API to request the size of a CQL partition
> -
>
> Key: CASSANDRA-12367
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12367
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Geoffrey Yu
>Assignee: Geoffrey Yu
>Priority: Minor
> Fix For: 3.x
>
> Attachments: 12367-trunk.txt
>
>
> It would be useful to have an API that we could use to get the total 
> serialized size of a CQL partition, scoped by keyspace and table, on disk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12466) Use different build directories for Eclipse and Ant

2016-08-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-12466:

Reviewer: T Jake Luciani

> Use different build directories for Eclipse and Ant
> ---
>
> Key: CASSANDRA-12466
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12466
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>Priority: Trivial
> Fix For: 3.x
>
> Attachments: 12466-trunk.patch
>
>
> Currently {{ant generate-eclipse-files}} will create an Eclipse project that 
> will write compiled classes into the {{build/classes/main/}} output directory, 
> just as ant does. This causes some issues in practice:
> * Eclipse doesn't recognize changes to compiled classes when compiling with 
> ant. You'll have to manually refresh the project and Eclipse acts weird if 
> you forget to do so.
> * Cleaning the project from Eclipse will remove the {{hotspot_compile}} file 
> and {{ant jar}} won't work any longer as this file is required with ant. 
> Eclipse will be happy to delete it during clean but won't be able to generate 
> it.
> * Tests run by ant may pick up classes compiled by Eclipse that may use 
> different build settings and produce different results for testing
> My suggestion is to simply configure Eclipse to use {{build/classes/eclipse}} 
> instead and leave the ant classes alone. See trivial patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12423) Cells missing from compact storage table after upgrading from 2.1.9 to 3.7

2016-08-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-12423:

Reviewer: Tyler Hobbs

> Cells missing from compact storage table after upgrading from 2.1.9 to 3.7
> --
>
> Key: CASSANDRA-12423
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12423
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Tomasz Grabiec
>Assignee: Stefania
> Attachments: 12423.tar.gz
>
>
> Schema:
> {code}
> create table ks1.test ( id int, c1 text, c2 text, v int, primary key (id, c1, 
> c2)) with compact storage and compression = {'sstable_compression': ''};
> {code}
> sstable2json before upgrading:
> {code}
> [
> {"key": "1",
>  "cells": [["","0",1470761440040513],
>["a","asd",2470761440040513,"t",1470764842],
>["asd:","0",1470761451368658],
>["asd:asd","0",1470761449416613]]}
> ]
> {code}
> Query result with 2.1.9:
> {code}
> cqlsh> select * from ks1.test;
>  id | c1  | c2   | v
> +-+--+---
>   1 | | null | 0
>   1 | asd |  | 0
>   1 | asd |  asd | 0
> (3 rows)
> {code}
> Query result with 3.7:
> {code}
> cqlsh> select * from ks1.test;
>  id | 6331 | 6332 | v
> +--+--+---
>   1 |  | null | 0
> (1 rows)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12478) cassandra stress still uses CFMetaData.compile()

2016-08-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-12478:

Reviewer: Paulo Motta

> cassandra stress still uses CFMetaData.compile()
> 
>
> Key: CASSANDRA-12478
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12478
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Denis Ranger
>  Labels: stress
> Fix For: 3.0.x
>
> Attachments: 
> 0001-Replaced-using-CFMetaData.compile-in-cassandra-stres.patch
>
>
> Using CFMetaData.compile() on a client tool causes permission problems. To 
> reproduce:
> * Start cassandra under user _cassandra_
> * Run {{chmod -R go-rwx /var/lib/cassandra}} to deny access to other users.
> * Use a non-root user to run {{cassandra-stress}} 
> This produces an access denied message on {{/var/lib/cassandra/commitlog}}.
> The attached fix uses client-mode functionality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12311) Propagate TombstoneOverwhelmingException to the client

2016-08-19 Thread Geoffrey Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15428853#comment-15428853
 ] 

Geoffrey Yu commented on CASSANDRA-12311:
-

Thanks for all the help as well! :)

> Propagate TombstoneOverwhelmingException to the client
> --
>
> Key: CASSANDRA-12311
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12311
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Geoffrey Yu
>Assignee: Geoffrey Yu
>Priority: Minor
>  Labels: client-impacting, doc-impacting
> Fix For: 3.10
>
> Attachments: 12311-dtest.txt, 12311-trunk-v2.txt, 12311-trunk-v3.txt, 
> 12311-trunk-v4.txt, 12311-trunk-v5.txt, 12311-trunk.txt
>
>
> Right now if a data node fails to perform a read because it ran into a 
> {{TombstoneOverwhelmingException}}, it only responds back to the coordinator 
> node with a generic failure. Under this scheme, the coordinator won't be able 
> to know exactly why the request failed and subsequently the client only gets 
> a generic {{ReadFailureException}}. It would be useful to inform the client 
> that their read failed because we read too many tombstones. We should have 
> the data nodes reply with a failure type so the coordinator can pass this 
> information to the client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12311) Propagate TombstoneOverwhelmingException to the client

2016-08-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-12311:

   Resolution: Fixed
Fix Version/s: (was: 4.x)
   3.10
   Status: Resolved  (was: Awaiting Feedback)

Okay, finally got a good dtest run.  Everything looks good, so +1.

Committed as {{39df31a06a35b221f55f17ed20947a1a2e33ee1a}} to trunk.  I've 
opened a [dtest pull 
request|https://github.com/riptano/cassandra-dtest/pull/1263] for your changes 
there, which should be merged soon.

Thanks [~geoffxy]!

> Propagate TombstoneOverwhelmingException to the client
> --
>
> Key: CASSANDRA-12311
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12311
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Geoffrey Yu
>Assignee: Geoffrey Yu
>Priority: Minor
>  Labels: client-impacting, doc-impacting
> Fix For: 3.10
>
> Attachments: 12311-dtest.txt, 12311-trunk-v2.txt, 12311-trunk-v3.txt, 
> 12311-trunk-v4.txt, 12311-trunk-v5.txt, 12311-trunk.txt
>
>
> Right now if a data node fails to perform a read because it ran into a 
> {{TombstoneOverwhelmingException}}, it only responds back to the coordinator 
> node with a generic failure. Under this scheme, the coordinator won't be able 
> to know exactly why the request failed and subsequently the client only gets 
> a generic {{ReadFailureException}}. It would be useful to inform the client 
> that their read failed because we read too many tombstones. We should have 
> the data nodes reply with a failure type so the coordinator can pass this 
> information to the client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


cassandra git commit: Add error code map to read/write failure responses

2016-08-19 Thread tylerhobbs
Repository: cassandra
Updated Branches:
  refs/heads/trunk fa1131679 -> 39df31a06


Add error code map to read/write failure responses

Patch by Geoffrey Yu; reviewed by Tyler Hobbs for CASSANDRA-12311


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/39df31a0
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/39df31a0
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/39df31a0

Branch: refs/heads/trunk
Commit: 39df31a06a35b221f55f17ed20947a1a2e33ee1a
Parents: fa11316
Author: Geoffrey Yu 
Authored: Fri Aug 19 16:04:39 2016 -0500
Committer: Tyler Hobbs 
Committed: Fri Aug 19 16:04:39 2016 -0500

--
 CHANGES.txt |   2 +
 doc/native_protocol_v5.spec |  27 -
 doc/source/operating/error_codes.txt|  31 +
 .../exceptions/ReadFailureException.java|   7 +-
 .../exceptions/RequestFailureException.java |  19 ++-
 .../exceptions/RequestFailureReason.java|  51 
 .../exceptions/WriteFailureException.java   |   7 +-
 .../apache/cassandra/hints/HintsDispatcher.java |   3 +-
 .../net/IAsyncCallbackWithFailure.java  |   4 +-
 .../cassandra/net/MessageDeliveryTask.java  |  16 +++
 .../org/apache/cassandra/net/MessageIn.java |  26 
 .../apache/cassandra/net/MessagingService.java  |   4 +-
 .../cassandra/net/ResponseVerbHandler.java  |   2 +-
 .../cassandra/repair/AnticompactionTask.java|   3 +-
 .../apache/cassandra/repair/SnapshotTask.java   |   3 +-
 .../service/AbstractWriteResponseHandler.java   |  10 +-
 .../cassandra/service/ActiveRepairService.java  |   3 +-
 .../service/BatchlogResponseHandler.java|   6 +-
 .../apache/cassandra/service/ReadCallback.java  |  11 +-
 .../apache/cassandra/service/StorageProxy.java  |  19 +--
 .../org/apache/cassandra/transport/CBUtil.java  |  29 -
 .../transport/messages/ErrorMessage.java|  43 ++-
 .../cassandra/transport/ErrorMessageTest.java   | 121 +++
 23 files changed, 408 insertions(+), 39 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/39df31a0/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 0e1e118..c123d17 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 3.10
+ * Extend read/write failure messages with a map of replica addresses
+   to error codes in the v5 native protocol (CASSANDRA-12311)
  * Fix rebuild of SASI indexes with existing index files (CASSANDRA-12374)
  * Let DatabaseDescriptor not implicitly startup services (CASSANDRA-9054)
  * Fix clustering indexes in presence of static columns in SASI 
(CASSANDRA-12378)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/39df31a0/doc/native_protocol_v5.spec
--
diff --git a/doc/native_protocol_v5.spec b/doc/native_protocol_v5.spec
index edf3093..bbde714 100644
--- a/doc/native_protocol_v5.spec
+++ b/doc/native_protocol_v5.spec
@@ -204,6 +204,7 @@ Table of Contents
 
 [int]  A 4 bytes integer
 [long] A 8 bytes integer
+[byte] A 1 byte unsigned integer
 [short]A 2 bytes unsigned integer
 [string]   A [short] n, followed by n bytes representing an UTF-8
string.
@@ -229,6 +230,9 @@ Table of Contents
[byte] representing the IP address (in practice n can only 
be
either 4 (IPv4) or 16 (IPv6)), following by one [int]
representing the port.
+[inetaddr] An IP address (without a port) to a node. It consists of one
+   [byte] n, that represents the address size, followed by n
+   [byte] representing the IP address.
 [consistency]  A consistency level specification. This is a [short]
representing a consistency level with the following
correspondance:
@@ -1088,7 +1092,7 @@ Table of Contents
responded. Otherwise, the value is != 0.
 0x1300Read_failure: A non-timeout exception during a read request. The 
rest
   of the ERROR message body will be
-
+
   where:
  is the [consistency] level of the query having triggered
  the exception.
@@ -1096,8 +1100,12 @@ Table of Contents
answered the request.
  is an [int] representing the number of replicas 
whose
acknowledgement is required to achieve .
- is an [int] representing the number of nodes that
-  experience a failure while executing the request.
+ 

[jira] [Commented] (CASSANDRA-11373) Cancelled compaction leading to infinite loop in compaction strategy getNextBackgroundTask

2016-08-19 Thread Nimi Wariboko Jr. (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15428786#comment-15428786
 ] 

Nimi Wariboko Jr. commented on CASSANDRA-11373:
---

I still run into this issue regularly on 3.5. I'm not sure how to reproduce it, but 
my trace logs spam `LeveledManifest.java:556 - Choosing candidates for L0` 
for a given table, and the only consistent way to fix it is to change the 
compaction strategy from LCS to STS.

> Cancelled compaction leading to infinite loop in compaction strategy 
> getNextBackgroundTask
> --
>
> Key: CASSANDRA-11373
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11373
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Eduard Tudenhoefner
>Assignee: Marcus Eriksson
>  Labels: lcs
> Fix For: 2.2.6, 3.0.5, 3.5
>
> Attachments: jstack.txt
>
>
> Our test is basically running *nodetool repair* on specific keyspaces (such 
> as keyspace1) and the test is also triggering *nodetool compact keyspace1 
> standard1* in the background. 
> And so it looks like running major compactions & repairs lead to that issue 
> when using *LCS*.
> Below is an excerpt from the *thread dump* (the rest is attached)
> {code}
> "CompactionExecutor:2" #33 daemon prio=1 os_prio=4 tid=0x7f5363e64f10 
> nid=0x3c4e waiting for monitor entry [0x7f53340d8000]
>java.lang.Thread.State: BLOCKED (on object monitor)
>   at 
> org.apache.cassandra.db.compaction.CompactionStrategyManager.handleNotification(CompactionStrategyManager.java:252)
>   - waiting to lock <0x0006c9362c80> (a 
> org.apache.cassandra.db.compaction.CompactionStrategyManager)
>   at 
> org.apache.cassandra.db.lifecycle.Tracker.notifySSTableRepairedStatusChanged(Tracker.java:434)
>   at 
> org.apache.cassandra.db.compaction.CompactionManager.performAnticompaction(CompactionManager.java:550)
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$7.runMayThrow(CompactionManager.java:465)
>   at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
>Locked ownable synchronizers:
>   - <0x0006c9362ca8> (a 
> java.util.concurrent.ThreadPoolExecutor$Worker)
> "CompactionExecutor:1" #32 daemon prio=1 os_prio=4 tid=0x7f5363e618b0 
> nid=0x3c4d runnable [0x7f5334119000]
>java.lang.Thread.State: RUNNABLE
>   at com.google.common.collect.Iterators$7.computeNext(Iterators.java:650)
>   at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>   at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
>   at com.google.common.collect.Iterators.addAll(Iterators.java:361)
>   at com.google.common.collect.Iterables.addAll(Iterables.java:354)
>   at 
> org.apache.cassandra.db.compaction.LeveledManifest.getCandidatesFor(LeveledManifest.java:589)
>   at 
> org.apache.cassandra.db.compaction.LeveledManifest.getCompactionCandidates(LeveledManifest.java:349)
>   - locked <0x0006d0f7a6a8> (a 
> org.apache.cassandra.db.compaction.LeveledManifest)
>   at 
> org.apache.cassandra.db.compaction.LeveledCompactionStrategy.getNextBackgroundTask(LeveledCompactionStrategy.java:98)
>   - locked <0x0006d0f7a568> (a 
> org.apache.cassandra.db.compaction.LeveledCompactionStrategy)
>   at 
> org.apache.cassandra.db.compaction.CompactionStrategyManager.getNextBackgroundTask(CompactionStrategyManager.java:95)
>   - locked <0x0006c9362c80> (a 
> org.apache.cassandra.db.compaction.CompactionStrategyManager)
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:257)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> *CPU usage is at 100%*
> {code}
> top -p 15386
> top - 12:12:40 up

[jira] [Updated] (CASSANDRA-6216) Level Compaction should persist last compacted key per level

2016-08-19 Thread sankalp kohli (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sankalp kohli updated CASSANDRA-6216:
-
Assignee: (was: sankalp kohli)

> Level Compaction should persist last compacted key per level
> 
>
> Key: CASSANDRA-6216
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6216
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: sankalp kohli
>Priority: Minor
>  Labels: compaction, lcs
> Attachments: JIRA-6216.diff
>
>
> Level compaction does not persist the last compacted key per level. This is 
> important for higher levels. 
> The sstables with higher tokens and in higher levels won't get a chance to 
> compact as the last compacted key will get reset after a restart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6216) Level Compaction should persist last compacted key per level

2016-08-19 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15428769#comment-15428769
 ] 

sankalp kohli commented on CASSANDRA-6216:
--

I did not fix it; not sure if someone else has fixed it.

> Level Compaction should persist last compacted key per level
> 
>
> Key: CASSANDRA-6216
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6216
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: sankalp kohli
>Priority: Minor
>  Labels: compaction, lcs
> Attachments: JIRA-6216.diff
>
>
> Level compaction does not persist the last compacted key per level. This is 
> important for higher levels. 
> The sstables with higher tokens and in higher levels won't get a chance to 
> compact as the last compacted key will get reset after a restart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11635) test-clientutil-jar unit test fails

2016-08-19 Thread Michael Shuler (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Shuler updated CASSANDRA-11635:
---
Fix Version/s: (was: 3.0.9)
   (was: 3.8)
   3.x
   3.0.x

> test-clientutil-jar unit test fails
> ---
>
> Key: CASSANDRA-11635
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11635
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Michael Shuler
>Assignee: Sylvain Lebresne
>  Labels: unittest
> Fix For: 2.2.8, 3.0.x, 3.x
>
>
> {noformat}
> test-clientutil-jar:
> [junit] Testsuite: org.apache.cassandra.serializers.ClientUtilsTest
> [junit] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 0.314 sec
> [junit] 
> [junit] Testcase: test(org.apache.cassandra.serializers.ClientUtilsTest): 
>   Caused an ERROR
> [junit] org/apache/cassandra/utils/SigarLibrary
> [junit] java.lang.NoClassDefFoundError: 
> org/apache/cassandra/utils/SigarLibrary
> [junit] at org.apache.cassandra.utils.UUIDGen.hash(UUIDGen.java:328)
> [junit] at 
> org.apache.cassandra.utils.UUIDGen.makeNode(UUIDGen.java:307)
> [junit] at 
> org.apache.cassandra.utils.UUIDGen.makeClockSeqAndNode(UUIDGen.java:256)
> [junit] at 
> org.apache.cassandra.utils.UUIDGen.(UUIDGen.java:39)
> [junit] at 
> org.apache.cassandra.serializers.ClientUtilsTest.test(ClientUtilsTest.java:56)
> [junit] Caused by: java.lang.ClassNotFoundException: 
> org.apache.cassandra.utils.SigarLibrary
> [junit] at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> [junit] at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> [junit] at 
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
> [junit] at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> [junit] 
> [junit] 
> [junit] Test org.apache.cassandra.serializers.ClientUtilsTest FAILED
> BUILD FAILED
> {noformat}
> I'll see if I can find a spot where this passes, but it appears to have been 
> failing for a long time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6216) Level Compaction should persist last compacted key per level

2016-08-19 Thread Wei Deng (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15428713#comment-15428713
 ] 

Wei Deng commented on CASSANDRA-6216:
-

Isn't this already implemented? If so, this should be closed with a proper fixVer 
number.

> Level Compaction should persist last compacted key per level
> 
>
> Key: CASSANDRA-6216
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6216
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: sankalp kohli
>Assignee: sankalp kohli
>Priority: Minor
>  Labels: compaction, lcs
> Attachments: JIRA-6216.diff
>
>
> Level compaction does not persist the last compacted key per level. This is 
> important for higher levels. 
> The sstables with higher tokens and in higher levels won't get a chance to 
> compact as the last compacted key will get reset after a restart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-6323) Create new sstables in the highest possible level

2016-08-19 Thread Wei Deng (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Deng updated CASSANDRA-6323:

Labels: compaction lcs  (was: compaction)

> Create new sstables in the highest possible level
> -
>
> Key: CASSANDRA-6323
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6323
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jonathan Ellis
>Assignee: Minh Do
>Priority: Minor
>  Labels: compaction, lcs
> Fix For: 2.1.x
>
>
> See PickLevelForMemTableOutput here: 
> https://github.com/google/leveldb/blob/master/db/version_set.cc#L493
> (Moving from CASSANDRA-5936)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12420) Duplicated Key in IN clause with a small fetch size will run forever

2016-08-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs updated CASSANDRA-12420:

Status: Open  (was: Patch Available)

> Duplicated Key in IN clause with a small fetch size will run forever
> 
>
> Key: CASSANDRA-12420
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12420
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: cassandra 2.1.14, driver 2.1.7.1
>Reporter: ZhaoYang
>Assignee: ZhaoYang
> Fix For: 2.1.x
>
> Attachments: CASSANDRA-12420.patch
>
>
> This can be easily reproduced when the fetch size is smaller than the correct 
> number of rows.
> A table has 2 partition key columns, 1 clustering key, 1 regular column.
> >Select select = QueryBuilder.select().from("ks", "cf");
> >select.where().and(QueryBuilder.eq("a", 1));
> >select.where().and(QueryBuilder.in("b", Arrays.asList(1, 1, 1)));
> >select.setFetchSize(5);
> For now we deduplicate the keys on the client side, but it's better to fix 
> this inside Cassandra.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12510) Decommission process should raise an error flag when nodes in a DC don't have any nodes to stream data to

2016-08-19 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15428678#comment-15428678
 ] 

Jeff Jirsa commented on CASSANDRA-12510:


In short: we should guard against a decommission that would drop the number of 
replicas below the configured RF, and allow --force to continue if it is intentional.
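
A rough sketch of what that guard might look like (pseudocode with hypothetical helpers, not actual StorageService code):

{code}
// Before streaming ranges away, check every keyspace whose replication mentions this DC.
for (String keyspace : nonSystemKeyspaces())                       // hypothetical helper
{
    int rf = replicationFactorFor(keyspace, localDatacenter());    // hypothetical helper
    int remaining = liveNodesIn(localDatacenter()) - 1;            // after this node leaves
    if (remaining < rf && !force)
        throw new UnsupportedOperationException(
            String.format("Decommissioning would leave %d node(s) in %s for keyspace %s (RF=%d); " +
                          "use --force to proceed anyway", remaining, localDatacenter(), keyspace, rf));
}
{code}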


> Decommission process should raise an error flag when nodes in a DC don't have 
> any nodes to stream data to
> -
>
> Key: CASSANDRA-12510
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12510
> Project: Cassandra
>  Issue Type: Bug
>  Components: Streaming and Messaging
> Environment: C* version 3.3
>Reporter: Atin Sood
>Priority: Minor
>
> Steps to replicate :
> - Create a 3 node cluster in DC1 and create a keyspace test_keyspace with 
> table test_table with replication strategy NetworkTopologyStrategy , DC1=3 . 
> Populate some data into this table.
> - Add 5 more nodes to this cluster, but in DC2. Also do not alter the 
> keyspace to add the new DC2 to replication (this is intentional and the 
> reason why the bug shows up). So the desc keyspace should still list 
> NetworkTopologyStrategy with DC1=3 as RF
> - As expected, this will now be an 8 node cluster with 3 nodes in DC1 and 5 in 
> DC2
> - Now start decommissioning the nodes in DC1. Note that the decommission runs 
> fine on all the 3 nodes, but since the new nodes are in DC2 and the RF for 
> keyspace is restricted to DC1, the new 5 nodes won't get any data.
> - You will now end up with a 5 node cluster which has no data from the 
> decommissioned 3 nodes, hence ending up with data loss.
> I do understand that this problem could have been avoided if we perform an 
> alter stmt and add DC2 replication before adding the 5 nodes. But the fact 
> that decommission ran fine on the 3 nodes in DC1 without complaining that 
> there were no nodes to stream their data to seems a little disconcerting. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12508) nodetool repair returns status code 0 for some errors

2016-08-19 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15428661#comment-15428661
 ] 

Blake Eggleston commented on CASSANDRA-12508:
-

| 3.0 | [branch|https://github.com/bdeggleston/cassandra/tree/12508-3.0] | 
[dtest|http://cassci.datastax.com/view/Dev/view/bdeggleston/job/bdeggleston-12508-3.0-dtest/]
 | 
[testall|http://cassci.datastax.com/view/Dev/view/bdeggleston/job/bdeggleston-12508-3.0-testall/]
 |
| trunk | [branch|https://github.com/bdeggleston/cassandra/tree/12508-trunk] | 
[dtest|http://cassci.datastax.com/view/Dev/view/bdeggleston/job/bdeggleston-12508-trunk-dtest/]
 | 
[testall|http://cassci.datastax.com/view/Dev/view/bdeggleston/job/bdeggleston-12508-trunk-testall/]
 |

> nodetool repair returns status code 0 for some errors
> -
>
> Key: CASSANDRA-12508
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12508
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>
> For instance, when specifying hosts that don’t exist, an error message is 
> logged, but the return code is zero.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12510) Decommission process should raise an error flag when nodes in a DC don't have any nodes to stream data to

2016-08-19 Thread Atin Sood (JIRA)
Atin Sood created CASSANDRA-12510:
-

 Summary: Decommission process should raise an error flag when 
nodes in a DC don't have any nodes to stream data to
 Key: CASSANDRA-12510
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12510
 Project: Cassandra
  Issue Type: Bug
  Components: Streaming and Messaging
 Environment: C* version 3.3
Reporter: Atin Sood
Priority: Minor


Steps to replicate :

- Create a 3 node cluster in DC1 and create a keyspace test_keyspace with table 
test_table with replication strategy NetworkTopologyStrategy , DC1=3 . Populate 
some data into this table.

- Add 5 more nodes to this cluster, but in DC2. Also do not alter the keyspace 
to add the new DC2 to replication (this is intentional and the reason why the 
bug shows up). So the desc keyspace should still list NetworkTopologyStrategy 
with DC1=3 as RF

- As expected, this will now be an 8 node cluster with 3 nodes in DC1 and 5 in 
DC2

- Now start decommissioning the nodes in DC1. Note that the decommission runs 
fine on all the 3 nodes, but since the new nodes are in DC2 and the RF for 
keyspace is restricted to DC1, the new 5 nodes won't get any data.

- You will now end up with a 5 node cluster which has no data from the 
decommissioned 3 nodes, hence ending up with data loss.

I do understand that this problem could have been avoided if we perform an 
alter stmt and add DC2 replication before adding the 5 nodes. But the fact that 
decommission ran fine on the 3 nodes in DC1 without complaining that there were 
no nodes to stream their data to seems a little disconcerting. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12509) Shutdown process triggered twice if the node is drained

2016-08-19 Thread Alex Petrov (JIRA)
Alex Petrov created CASSANDRA-12509:
---

 Summary: Shutdown process triggered twice if the node is 
drained
 Key: CASSANDRA-12509
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12509
 Project: Cassandra
  Issue Type: Bug
Reporter: Alex Petrov


If the node is drained, the {{StorageService#drain}} 
[method|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/StorageService.java#L4212]
 is called, which triggers shutdown of the mutation stage, messaging service, 
compaction, batchlog etc. At the end of this process, the node is moved to 
{{DRAINED}} status with the process still running. 

When the JVM is shut down, the JVM shutdown hooks are run; these are registered 
during server initialisation: 
{{Runtime.getRuntime().addShutdownHook(drainOnShutdown);}} 
[here|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/StorageService.java#L575-L636].
 

I noticed this behaviour while reviewing [CASSANDRA-12461]: if we'd like to add 
custom pre- and post-shutdown hooks, it most likely makes sense to run them once 
(or the user might expect such behaviour). 

Is this behaviour correct? Should we run the whole shutdown process twice, or just 
once in "drain" and no-op during JVM shutdown?  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12508) nodetool repair returns status code 0 for some errors

2016-08-19 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15428618#comment-15428618
 ] 

Blake Eggleston commented on CASSANDRA-12508:
-

If I'm asking nodetool to do something and there's nothing to do, that's one 
thing, and I'd expect 0. If I'm asking nodetool to do something it can't, I'd 
expect an error. In this case, C* even sends an error notification to nodetool, 
it's just not handled.

> nodetool repair returns status code 0 for some errors
> -
>
> Key: CASSANDRA-12508
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12508
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>
> For instance, when specifying hosts that don’t exist, an error message is 
> logged, but the return code is zero.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12239) Add mshuler's key FE4B2BDA to dist/cassandra/KEYS

2016-08-19 Thread Michael Shuler (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15428634#comment-15428634
 ] 

Michael Shuler commented on CASSANDRA-12239:


[~urandom] do you have any thoughts on this topic?

Up to now, the committers that have acted as Release Manager have been added to 
{{KEYS}}, since these are the users building the release artifacts uploaded to 
Nexus. These same committers have also signed the debian repository Release 
file.

If multiple people share a common gpg secret key to sign the artifacts and deb 
repo, that key will also need to be added to KEYS, and the deb repo installation 
instructions need to be updated to import the key. A similar security concern 
could be raised for a shared common key. OS vendors like Debian/Ubuntu/Red Hat 
use system automation to sign package releases, validating trusted signatures 
from uploaders, so that users have a single key or a small number of keys to 
maintain, updating a key package along the way as needed.

I looked through a bunch of different Apache project {{KEYS}} and also found 
some examples of installation artifact validation instructions those projects 
provide. All the ones I looked at had {{KEYS}} files containing committers' 
keys. I did not find one that had something that looked like a common shared 
key. I did not exhaustively look at every Apache project, but enough to get a 
general idea of how a lot of other projects use {{KEYS}}.

Example: https://archive.apache.org/dist/httpd/ - the readme section at the end 
has a verification tidbit that instructs to use the KEYS file 
https://archive.apache.org/dist/httpd/KEYS which contains committers' public 
keys in the same manner as Cassandra does currently.

I did not find an Apache project best practices document on how to handle this 
situation on packaging, but it appears that Cassandra has been following how 
other projects proceed.

As for the "list of debian devs" Sylvain's seeing in my patch, as well as our 
existing {{KEYS}} file, those are signatures on mine and Eric's keys.

We should work this out prior to my publicly releasing any artifacts; 
otherwise, one of the existing committers in KEYS should do the releases.

> Add mshuler's key FE4B2BDA to dist/cassandra/KEYS
> -
>
> Key: CASSANDRA-12239
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12239
> Project: Cassandra
>  Issue Type: Task
>  Components: Packaging
>Reporter: Michael Shuler
>Assignee: Michael Shuler
> Fix For: 3.x
>
> Attachments: KEYS+mshuler.diff.txt
>
>
> I've started working on packaging with the 3.8 release and signed the staging 
> artifacts with FE4B2BDA. This key will need to be added for the debian 
> repository signature to function correctly, if it's released as-is, or 
> perhaps [~tjake] will need to re-sign the release. Users will need to also 
> fetch this new key and add to {{apt-key}}.
> {{KEYS}} patch attached.
> Assigned to myself, but I am not sure exactly where {{KEYS}} lives - in svn 
> somewhere or a direct upload? :)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-12315) dtest failure in cqlsh_tests.cqlsh_tests.TestCqlsh.test_client_warnings

2016-08-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs resolved CASSANDRA-12315.
-
Resolution: Cannot Reproduce

The multiplexer still hasn't been able to reproduce with a few hundred runs, 
even after reducing the extra logging.  I'm going to close this as Cannot 
Reproduce for now.  If we see another problem similar to this again, we can 
reopen.

> dtest failure in cqlsh_tests.cqlsh_tests.TestCqlsh.test_client_warnings
> ---
>
> Key: CASSANDRA-12315
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12315
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Sean McCarthy
>Assignee: Tyler Hobbs
>  Labels: cqlsh, dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_dtest/1317/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_client_warnings
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/tools.py", line 290, in wrapped
> f(obj)
>   File "/home/automaton/cassandra-dtest/cqlsh_tests/cqlsh_tests.py", line 
> 1424, in test_client_warnings
> self.assertEqual(len(stderr), 0, "Failed to execute cqlsh: 
> {}".format(stderr))
>   File "/usr/lib/python2.7/unittest/case.py", line 513, in assertEqual
> assertion_func(first, second, msg=msg)
>   File "/usr/lib/python2.7/unittest/case.py", line 506, in _baseAssertEqual
> raise self.failureException(msg)
> 'Failed to execute cqlsh: :3:OperationTimedOut: errors={\'127.0.0.1\': 
> \'Client request timeout. See Session.execute[_async](timeout)\'}, 
> last_host=127.0.0.1\n:5:InvalidRequest: Error from server: code=2200 
> [Invalid query] message="Keyspace \'client_warnings\' does not 
> exist"\n:7:InvalidRequest: Error from server: code=2200 [Invalid 
> query] message="No keyspace has been specified. USE a keyspace, or explicitly 
> specify keyspace.tablename"
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12420) Duplicated Key in IN clause with a small fetch size will run forever

2016-08-19 Thread Tyler Hobbs (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15428621#comment-15428621
 ] 

Tyler Hobbs commented on CASSANDRA-12420:
-

You are correct that this wouldn't break compatibility with other 2.1.x nodes, 
but it could cause problems during upgrades to 3.x.  In 3.x, another optional 
field (remainingInPartition) has already been added to the end of the 
PagingState serialization format.  Deserialization of the PagingState is only 
conditional on the native protocol version, not on the Cassandra version of 
other nodes in the cluster, so we can't safely introduce this change.

In CASSANDRA-6706, the decision was made to continue returning duplicate 
results in 2.1 when there are duplicate {{IN}} values in order to not make a 
(potentially) breaking change in a bugfix release.  However, this ticket 
represents a pretty big motivation to change that even in 2.1.  So, I'm 
thinking that we should go ahead and make 2.1 behave like 2.2 and 3.x and not 
return duplicate results in order to avoid this.

[~blerer] do you agree with the above?

> Duplicated Key in IN clause with a small fetch size will run forever
> 
>
> Key: CASSANDRA-12420
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12420
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: cassandra 2.1.14, driver 2.1.7.1
>Reporter: ZhaoYang
>Assignee: ZhaoYang
> Fix For: 2.1.x
>
> Attachments: CASSANDRA-12420.patch
>
>
> This can be easily reproduced and fetch size is smaller than the correct 
> number of rows.
> A table has 2 partition key, 1 clustering key, 1 column.
> >Select select = QueryBuilder.select().from("ks", "cf");
> >select.where().and(QueryBuilder.eq("a", 1));
> >select.where().and(QueryBuilder.in("b", Arrays.asList(1, 1, 1)));
> >select.setFetchSize(5);
> Now we put a distinct method in client side to eliminate the duplicated key, 
> but it's better to fix inside Cassandra.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11906) Unstable JVM due too many files when anticompacting big LCS tables

2016-08-19 Thread Sean McCarthy (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15428610#comment-15428610
 ] 

Sean McCarthy commented on CASSANDRA-11906:
---

First attempt at reproducing this:
- 3-node cluster
- stressed 100M using LCS; 7 GB of data per node
- 299 sstables of 25 MB each
- repair seemed to work out fine.

Trying again with 50 GB of data per node and 50 MB sstables.

> Unstable JVM due too many files when anticompacting big LCS tables
> --
>
> Key: CASSANDRA-11906
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11906
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stefano Ortolani
>Assignee: Sean McCarthy
> Fix For: 3.0.x
>
>
> I have recently moved from C* 2.1.x to C* 3.0.6. The setup is quite 
> heavy:
>   - 13 nodes with spinning disks
>   - ~120 GB of data per node
>   - 50% of CFs are LCS and have quite wide rows.
>   - 2/3 CFs with LCS have more than 200 SStables
> Incremental repairs do not seem to play really well with that.
> I have been running some tests node by node by using the -pr option:
> {code:xml}
> nodetool -h localhost repair -pr keyscheme
> {code}
> and to my surprise the whole process takes quite some time (1 hour
> minimum, 8 hours if I haven't been repairing for 5/6 days).
> Yesterday I tried to run the command with the -seq option so as to 
> decrease the number of simultaneous compactions. After a while
> the node on which I was running the repair simply died during
> the anticompaction phase with the following
> exception in the logs.
> {code:xml}
> ERROR [metrics-graphite-reporter-1-thread-1] 2016-05-25 21:54:21,868 
> ScheduledReporter.java:119 - RuntimeException thrown from 
> GraphiteReporter#report. Exception was suppressed.
> java.lang.RuntimeException: Failed to list files in 
> /data/cassandra/data/keyschema/columnfamily-3996ce80b7ac11e48a9b6776bf484396
>   at 
> org.apache.cassandra.db.lifecycle.LogAwareFileLister.list(LogAwareFileLister.java:57)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.lifecycle.LifecycleTransaction.getFiles(LifecycleTransaction.java:547)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.Directories$SSTableLister.filter(Directories.java:691)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.Directories$SSTableLister.listFiles(Directories.java:662)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.Directories$TrueFilesSizeVisitor.(Directories.java:981)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.Directories.getTrueAllocatedSizeIn(Directories.java:893)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.Directories.trueSnapshotsSize(Directories.java:883) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.trueSnapshotsSize(ColumnFamilyStore.java:2332)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.metrics.TableMetrics$32.getValue(TableMetrics.java:637) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.metrics.TableMetrics$32.getValue(TableMetrics.java:634) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.reportGauge(GraphiteReporter.java:281)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.report(GraphiteReporter.java:158)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.ScheduledReporter.report(ScheduledReporter.java:162) 
> ~[metrics-core-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.ScheduledReporter$1.run(ScheduledReporter.java:117) 
> ~[metrics-core-3.1.0.jar:3.1.0]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_91]
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
> [na:1.8.0_91]
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  [na:1.8.0_91]
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  [na:1.8.0_91]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_91]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_91]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
> Caused by: java.lang.RuntimeException: java.nio.file.FileSystemException: 
> /data/cassandra/data/keyschema/columnfamily-3996ce80b7ac11e48a9b6776bf484396/ma_txn_anticompactionafterrepair_f20b50d0-22bd-11e6-970f-6f22464f4624.log:
>  Too many op

[jira] [Comment Edited] (CASSANDRA-12508) nodetool repair returns status code 0 for some errors

2016-08-19 Thread Chris Lohfink (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15428604#comment-15428604
 ] 

Chris Lohfink edited comment on CASSANDRA-12508 at 8/19/16 6:35 PM:


Doesn't {{0}} represent the "nothing to do here, complete" case, where for a 
node that doesn't exist it would be the correct response? It also returns zero 
for things like a keyspace with RF=1, or when all the ranges are empty. I've 
always expected the 0 returned here; otherwise, subscribing to a command id 
under which nothing will happen would just idle forever.


was (Author: cnlwsu):
Doesn't {{0}} represent the "nothing to do here, complete" case, where for a 
node that doesn't exist it would be the correct response? It also returns zero 
for things like a keyspace with RF=1, or when all the ranges are empty. I've 
always expected the 0 returned here.

> nodetool repair returns status code 0 for some errors
> -
>
> Key: CASSANDRA-12508
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12508
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>
> For instance, when specifying hosts that don’t exist, an error message is 
> logged, but the return code is zero.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12508) nodetool repair returns status code 0 for some errors

2016-08-19 Thread Chris Lohfink (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15428604#comment-15428604
 ] 

Chris Lohfink commented on CASSANDRA-12508:
---

Doesn't {{0}} represent the "nothing to do here, complete" case, where for a 
node that doesn't exist it would be the correct response? It also returns zero 
for things like a keyspace with RF=1, or when all the ranges are empty. I've 
always expected the 0 returned here.

> nodetool repair returns status code 0 for some errors
> -
>
> Key: CASSANDRA-12508
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12508
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Blake Eggleston
>Assignee: Blake Eggleston
>
> For instance, when specifying hosts that don’t exist, an error message is 
> logged, but the return code is zero.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12508) nodetool repair returns status code 0 for some errors

2016-08-19 Thread Blake Eggleston (JIRA)
Blake Eggleston created CASSANDRA-12508:
---

 Summary: nodetool repair returns status code 0 for some errors
 Key: CASSANDRA-12508
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12508
 Project: Cassandra
  Issue Type: Bug
Reporter: Blake Eggleston
Assignee: Blake Eggleston


For instance, when specifying hosts that don’t exist, an error message is 
logged, but the return code is zero.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12461) Add hooks to StorageService shutdown

2016-08-19 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15428541#comment-15428541
 ] 

Alex Petrov commented on CASSANDRA-12461:
-

bq. It is a great way to introduce stupid bugs and the performance benefits 
here would be nonexistent.

I was actually advocating for correctness. With a volatile variable we won't be 
able to guarantee exactly-once semantics (we could with locking, but it's 
usually much harder). If you're still against it even in that case, I'll take 
another look at the locking version you implemented, since I think we should 
guard the boolean variable, as that would already protect access to the lists. 
Either way works for me; we should just make sure that the list accessors (both 
add and remove) disallow modifying the lists after the shutdown hook has run.

bq. I only smoke-tested things on a shutdown; I'm wondering if that error has 
to do with the shutdown hook for logback being called twice. I'll test that 
case today.

As far as I remember, this error showed up during a "normal" shutdown 
(interrupt). During drain (I tested by adding some println hooks), I just saw 
all the messages duplicated (which is clear from the logic), but I'd argue that 
running the whole process twice is not required. And if we do run the hooks, we 
most likely should run them only once (?), which would be during a "normal" 
shutdown.
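
A minimal sketch of the guarded registration both comments describe, assuming 
hypothetical names ({{PostShutdownHooks.add}}, {{runOnce}}) rather than anything 
from the attached patch: registration and execution share one monitor, so a 
hook can never be added after the hooks have started running, and the hooks run 
exactly once.

{code:java}
import java.util.ArrayList;
import java.util.List;

// Illustration only: class and method names are hypothetical, not the names used in the patch.
public final class PostShutdownHooks
{
    private static final List<Runnable> hooks = new ArrayList<>();
    private static boolean shutdown = false;

    // Registration and execution share one monitor, so a hook cannot slip in
    // after the hooks have started running (the interleaving described above).
    public static synchronized boolean add(Runnable hook)
    {
        if (shutdown)
            return false;   // tell the caller the hook will never run
        return hooks.add(hook);
    }

    public static synchronized boolean remove(Runnable hook)
    {
        return !shutdown && hooks.remove(hook);
    }

    // Runs the hooks exactly once, whether triggered by drain() or by the JVM shutdown hook.
    public static synchronized void runOnce()
    {
        if (shutdown)
            return;
        shutdown = true;
        for (Runnable hook : hooks)
        {
            try
            {
                hook.run();
            }
            catch (Throwable t)
            {
                t.printStackTrace();   // one failing hook should not stop the rest
            }
        }
        hooks.clear();
    }
}
{code}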

> Add hooks to StorageService shutdown
> 
>
> Key: CASSANDRA-12461
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12461
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Anthony Cozzie
>Assignee: Anthony Cozzie
> Attachments: 
> 0001-CASSANDRA-12461-add-C-support-for-shutdown-runnables.patch
>
>
> The JVM will usually run shutdown hooks in parallel.  This can lead to 
> synchronization problems between Cassandra, services that depend on it, and 
> services it depends on.  This patch adds some simple support for shutdown 
> hooks to StorageService.
> This should nearly solve CASSANDRA-12011



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12492) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x.cql3_non_compound_range_tombstones_test

2016-08-19 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15428530#comment-15428530
 ] 

Russ Hatch commented on CASSANDRA-12492:


100 runs without failure.

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x.cql3_non_compound_range_tombstones_test
> 
>
> Key: CASSANDRA-12492
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12492
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: Russ Hatch
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.9_dtest_upgrade/25/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x/cql3_non_compound_range_tombstones_test
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 
> 1562, in cql3_non_compound_range_tombstones_test
> ThriftConsistencyLevel.ALL)
>   File "/home/automaton/cassandra-dtest/thrift_bindings/v22/Cassandra.py", 
> line 1175, in batch_mutate
> self.recv_batch_mutate()
>   File "/home/automaton/cassandra-dtest/thrift_bindings/v22/Cassandra.py", 
> line 1201, in recv_batch_mutate
> raise result.te
> "TimedOutException(acknowledged_by=1, paxos_in_progress=None, 
> acknowledged_by_batchlog=None)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-12492) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x.cql3_non_compound_range_tombstones_test

2016-08-19 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch resolved CASSANDRA-12492.

Resolution: Cannot Reproduce

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x.cql3_non_compound_range_tombstones_test
> 
>
> Key: CASSANDRA-12492
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12492
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: Russ Hatch
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.9_dtest_upgrade/25/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x/cql3_non_compound_range_tombstones_test
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 
> 1562, in cql3_non_compound_range_tombstones_test
> ThriftConsistencyLevel.ALL)
>   File "/home/automaton/cassandra-dtest/thrift_bindings/v22/Cassandra.py", 
> line 1175, in batch_mutate
> self.recv_batch_mutate()
>   File "/home/automaton/cassandra-dtest/thrift_bindings/v22/Cassandra.py", 
> line 1201, in recv_batch_mutate
> raise result.te
> "TimedOutException(acknowledged_by=1, paxos_in_progress=None, 
> acknowledged_by_batchlog=None)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11866) nodetool repair does not obey the column family parameter when -st and -et are provided (subrange repair)

2016-08-19 Thread Brian Wawok (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15428514#comment-15428514
 ] 

Brian Wawok commented on CASSANDRA-11866:
-

[~pauloricardomg] Pull request here: 
https://github.com/riptano/cassandra-dtest/pull/1259

> nodetool repair does not obey the column family parameter when -st and -et 
> are provided (subrange repair)
> -
>
> Key: CASSANDRA-11866
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11866
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Red Hat Enterprise Linux Server release 6.7 (Santiago) 
> x86_64
>Reporter: Shiva Venkateswaran
>Assignee: Vinay Chella
>  Labels: newbie
> Fix For: 2.1.x
>
> Attachments: 0001-CASSANDRA-11866-dtest.patch, 11866-2.1.txt
>
>
> Command 1: Repairs all the CFs in ADL_GLOBAL keyspace and ignores the 
> parameter AssetModifyTimes_data used to restrict the CFs
> Executing: /aladdin/local/apps/apache-cassandra-2.1.8a/bin/nodetool -h 
> localhost -p 7199 -u user-pw ** repair ADL_GLOBAL AssetModifyTimes_data 
> -st 205279477618143669 -et 230991685737746901 -par
> [2016-05-20 17:31:39,116] Starting repair command #9, repairing 1 ranges for 
> keyspace ADL_GLOBAL (parallelism=PARALLEL, full=true)
> [2016-05-20 17:32:21,568] Repair session 3cae2530-1ed2-11e6-b490-d9df6932c7cf 
> for range (205279477618143669,230991685737746901] finished
> Command 2: Repairs all the CFs in ADL_GLOBAL keyspace and ignores the 
> parameter AssetModifyTimes_data used to restrict the CFs
> Executing: /aladdin/local/apps/apache-cassandra-2.1.8a/bin/nodetool -h 
> localhost -p 7199 -u controlRole -pw ** repair -st 205279477618143669 -et 
> 230991685737746901 -par -- ADL_GLOBAL AssetModifyTimes_data
> [2016-05-20 17:36:34,473] Starting repair command #10, repairing 1 ranges for 
> keyspace ADL_GLOBAL (parallelism=PARALLEL, full=true)
> [2016-05-20 17:37:15,365] Repair session ecb996d0-1ed2-11e6-b490-d9df6932c7cf 
> for range (205279477618143669,230991685737746901] finished
> [2016-05-20 17:37:15,365] Repair command #10 finished
> Command 3: Repairs only the CF ADL3Test1_data in keyspace ADL_GLOBAL
> Executing: /aladdin/local/apps/apache-cassandra-2.1.8a/bin/nodetool -h 
> localhost -p 7199 -u controlRole -pw ** repair -- ADL_GLOBAL 
> ADL3Test1_data
> [2016-05-20 17:38:35,781] Starting repair command #11, repairing 1043 ranges 
> for keyspace ADL_GLOBAL (parallelism=SEQUENTIAL, full=true)
> [2016-05-20 17:42:32,682] Repair session 3c8af050-1ed3-11e6-b490-d9df6932c7cf 
> for range (6241639152751626129,6241693909092643958] finished
> [2016-05-20 17:42:32,683] Repair session 3caf1a20-1ed3-11e6-b490-d9df6932c7cf 
> for range (-7096993048358106082,-7095000706885780850] finished
> [2016-05-20 17:42:32,683] Repair session 3ccfc180-1ed3-11e6-b490-d9df6932c7cf 
> for range (-7218939248114487080,-7218289345961492809] finished
> [2016-05-20 17:42:32,683] Repair session 3cf21690-1ed3-11e6-b490-d9df6932c7cf 
> for range (-5244794756638190874,-5190307341355030282] finished
> [2016-05-20 17:42:32,683] Repair session 3d126fd0-1ed3-11e6-b490-d9df6932c7cf 
> for range (3551629701277971766,321736534916502] finished
> [2016-05-20 17:42:32,683] Repair session 3d32f020-1ed3-11e6-b490-d9df6932c7cf 
> for range (-8139355591560661944,-8127928369093576603] finished
> [2016-05-20 17:42:32,683] Repair session 3d537070-1ed3-11e6-b490-d9df6932c7cf 
> for range (7098010153980465751,7100863011896759020] finished
> [2016-05-20 17:42:32,683] Repair session 3d73f0c0-1ed3-11e6-b490-d9df6932c7cf 
> for range (1004538726866173536,1008586133746764703] finished
> [2016-05-20 17:42:32,683] Repair session 3d947110-1ed3-11e6-b490-d9df6932c7cf 
> for range (5770817093573726645,5771418910784831587] finished
> .
> .
> .
> [2016-05-20 17:42:32,732] Repair command #11 finished



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12279) nodetool repair hangs on non-existant table

2016-08-19 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15428505#comment-15428505
 ] 

Paulo Motta commented on CASSANDRA-12279:
-

Thanks for the patch, [~myamaguchi1016]! The patch looks good, but it doesn't 
apply to 2.2, and this bug also affects 2.2. Can you also provide a patch based 
on the cassandra-2.2 branch (and check whether the trunk patch will apply to 
cassandra-3.0)? You can go ahead and change the nomenclature from columnfamily 
to table where applicable in the 2.2 patch.

I created a dtest to reproduce this and submitted a [pull 
request|https://github.com/riptano/cassandra-dtest/pull/1258].

Also, if you could generate your patch with {{git format-patch}}, that would be 
preferable. Thanks.

> nodetool repair hangs on non-existant table
> ---
>
> Key: CASSANDRA-12279
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12279
> Project: Cassandra
>  Issue Type: Bug
> Environment: Linux Ubuntu, Openjdk
>Reporter: Benjamin Roth
>Priority: Minor
>  Labels: lhf
> Attachments: CASSANDRA-12279-trunk.patch, new_result_example.txt, 
> org_result_example.txt
>
>
> If nodetool repair is called with a table that does not exist, it hangs 
> infinitely without any error message or logs.
> E.g.
> nodetool repair foo bar
> Keyspace foo exists but table bar does not



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12279) nodetool repair hangs on non-existant table

2016-08-19 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-12279:

Assignee: Masataka Yamaguchi
  Status: Open  (was: Patch Available)

> nodetool repair hangs on non-existant table
> ---
>
> Key: CASSANDRA-12279
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12279
> Project: Cassandra
>  Issue Type: Bug
> Environment: Linux Ubuntu, Openjdk
>Reporter: Benjamin Roth
>Assignee: Masataka Yamaguchi
>Priority: Minor
>  Labels: lhf
> Attachments: CASSANDRA-12279-trunk.patch, new_result_example.txt, 
> org_result_example.txt
>
>
> If nodetool repair is called with a table that does not exist, it hangs 
> infinitely without any error message or logs.
> E.g.
> nodetool repair foo bar
> Keyspace foo exists but table bar does not



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12461) Add hooks to StorageService shutdown

2016-08-19 Thread Anthony Cozzie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15428470#comment-15428470
 ] 

Anthony Cozzie commented on CASSANDRA-12461:


Thanks for your review, Alex!  First, the nits:

I am -1 on any sort of synchronization optimization.  It is a great way to 
introduce stupid bugs and the performance benefits here would be nonexistent. 
For example, I'm not sure your patch works there: suppose we have the following 
interleaving:
  * addHook reads the AtomicBoolean, it is false
  * shutdownHook sets the AtomicBoolean to true
  * shutdownHook runs the hooks
  * addHook adds the hook
I don't even know if this analysis is correct; I just don't want to think about 
it.

List add doesn't really seem to return anything useful according to the javadoc 
"Lists that support this operation may place limitations on what elements may 
be added to this list. In particular, some lists will refuse to add null 
elements, and others will impose restrictions on the type of elements that may 
be added. List classes should clearly specify in their documentation any 
restrictions on what elements may be added.".  So I am OK either way.

OK, the more serious stuff:  I did not understand what drain() was doing.  We 
should definitely not run post-shutdown hooks and turn off logging and such 
until the very end.  Probably the simplest thing would be to clear the 
pre-shutdown hook list after calling the hooks in drain() so they are not 
called twice.

I only smoke-tested things on a shutdown; I'm wondering if that error has to do 
with the shutdown hook for logback being called twice. I'll test that case 
today.

> Add hooks to StorageService shutdown
> 
>
> Key: CASSANDRA-12461
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12461
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Anthony Cozzie
>Assignee: Anthony Cozzie
> Attachments: 
> 0001-CASSANDRA-12461-add-C-support-for-shutdown-runnables.patch
>
>
> The JVM will usually run shutdown hooks in parallel.  This can lead to 
> synchronization problems between Cassandra, services that depend on it, and 
> services it depends on.  This patch adds some simple support for shutdown 
> hooks to StorageService.
> This should nearly solve CASSANDRA-12011



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9875) Rebuild from targeted replica

2016-08-19 Thread Geoffrey Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Yu updated CASSANDRA-9875:
---
Status: Open  (was: Patch Available)

> Rebuild from targeted replica
> -
>
> Key: CASSANDRA-9875
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9875
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Geoffrey Yu
>Priority: Minor
>  Labels: lhf
> Fix For: 3.x
>
> Attachments: 9875-trunk.txt
>
>
> Nodetool rebuild command will rebuild all the token ranges handled by the 
> endpoint. Sometimes we want to rebuild only a certain token range. We should 
> add this ability to rebuild command. We should also add the ability to stream 
> from a given replica.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9875) Rebuild from targeted replica

2016-08-19 Thread Geoffrey Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15428458#comment-15428458
 ] 

Geoffrey Yu commented on CASSANDRA-9875:


No worries about the delay! Yeah, I totally agree: adding a host whitelist 
would be a better interface. I somehow missed the source filter in the 
{{RangeStreamer}}. I'll take a look, make the changes, and add a dtest.
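
A rough sketch of what a whitelist source filter could look like; the 
{{shouldInclude(InetAddress)}} shape is an assumption here, not taken from 
{{RangeStreamer}} or from the attached patch:

{code:java}
import java.net.InetAddress;
import java.util.Set;

// Hypothetical whitelist filter: only stream from replicas the operator explicitly named.
public class WhitelistSourceFilter
{
    private final Set<InetAddress> allowedSources;

    public WhitelistSourceFilter(Set<InetAddress> allowedSources)
    {
        this.allowedSources = allowedSources;
    }

    public boolean shouldInclude(InetAddress endpoint)
    {
        return allowedSources.contains(endpoint);
    }
}
{code}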

> Rebuild from targeted replica
> -
>
> Key: CASSANDRA-9875
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9875
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Assignee: Geoffrey Yu
>Priority: Minor
>  Labels: lhf
> Fix For: 3.x
>
> Attachments: 9875-trunk.txt
>
>
> Nodetool rebuild command will rebuild all the token ranges handled by the 
> endpoint. Sometimes we want to rebuild only a certain token range. We should 
> add this ability to rebuild command. We should also add the ability to stream 
> from a given replica.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12329) Unreleased Resource: Sockets

2016-08-19 Thread Yuki Morishita (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15428456#comment-15428456
 ] 

Yuki Morishita commented on CASSANDRA-12329:


[~arunkumar] Thanks for the patch.

Here are my review comments:

- We probably want to catch exceptions other than IOException in this case. For 
example, {{setEnabledProtocols}} can throw {{IllegalArgumentException}}.
- We have similar {{getSocket}} methods in the same {{SSLFactory}} class. Can 
you also patch those?
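
A sketch of that shape, reusing the method quoted in the description below: the 
configuration calls move inside a try block, and anything they throw (including 
runtime exceptions such as {{IllegalArgumentException}}) closes the socket 
before the exception propagates. This is an illustration, not the attached patch:

{code:java}
// Sketch only: relies on the surrounding SSLFactory helpers (createSSLContext,
// filterCipherSuites, ACCEPTED_PROTOCOLS) shown in the quoted code below.
public static SSLServerSocket getServerSocket(EncryptionOptions options, InetAddress address, int port) throws IOException
{
    SSLContext ctx = createSSLContext(options, true);
    SSLServerSocket serverSocket = (SSLServerSocket) ctx.getServerSocketFactory().createServerSocket();
    try
    {
        serverSocket.setReuseAddress(true);
        String[] suites = filterCipherSuites(serverSocket.getSupportedCipherSuites(), options.cipher_suites);
        serverSocket.setEnabledCipherSuites(suites);
        serverSocket.setNeedClientAuth(options.require_client_auth);
        serverSocket.setEnabledProtocols(ACCEPTED_PROTOCOLS);  // may throw IllegalArgumentException
        serverSocket.bind(new InetSocketAddress(address, port), 500);
        return serverSocket;
    }
    catch (IOException | RuntimeException e)
    {
        try
        {
            serverSocket.close();
        }
        catch (IOException suppressed)
        {
            // the original failure is the one worth reporting
        }
        throw e;
    }
}
{code}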

> Unreleased Resource: Sockets
> 
>
> Key: CASSANDRA-12329
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12329
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Eduardo Aguinaga
>Assignee: Arunkumar M
>  Labels: easyfix, newbie, patch
> Fix For: 3.0.x
>
> Attachments: 12329-3.0.txt
>
>
> Overview:
> In May through June of 2016 a static analysis was performed on version 3.0.5 
> of the Cassandra source code. The analysis included an automated analysis 
> using HP Fortify v4.21 SCA and a manual analysis utilizing SciTools 
> Understand v4. The results of that analysis includes the issue below.
> Issue:
> Sockets are low-level resources that must be explicitly released so 
> subsequent callers will have access to previously used sockets. In the file 
> SSLFactory.java on line 62 an SSL server socket is acquired and eventually 
> returned to the caller on line 69.
> If an exception is thrown by any of the code between lines 62 and 69, the 
> socket acquired on line 62 will not be released for subsequent reuse.
> {code:java}
> SSLFactory.java, lines 59-70:
> 59 public static SSLServerSocket getServerSocket(EncryptionOptions options, 
> InetAddress address, int port) throws IOException
> 60 {
> 61 SSLContext ctx = createSSLContext(options, true);
> 62 SSLServerSocket serverSocket = 
> (SSLServerSocket)ctx.getServerSocketFactory().createServerSocket();
> 63 serverSocket.setReuseAddress(true);
> 64 String[] suites = 
> filterCipherSuites(serverSocket.getSupportedCipherSuites(), 
> options.cipher_suites);
> 65 serverSocket.setEnabledCipherSuites(suites);
> 66 serverSocket.setNeedClientAuth(options.require_client_auth);
> 67 serverSocket.setEnabledProtocols(ACCEPTED_PROTOCOLS);
> 68 serverSocket.bind(new InetSocketAddress(address, port), 500);
> 69 return serverSocket;
> 70 }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12329) Unreleased Resource: Sockets

2016-08-19 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-12329:
---
Status: Open  (was: Patch Available)

> Unreleased Resource: Sockets
> 
>
> Key: CASSANDRA-12329
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12329
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Eduardo Aguinaga
>Assignee: Arunkumar M
>  Labels: easyfix, newbie, patch
> Fix For: 3.0.x
>
> Attachments: 12329-3.0.txt
>
>
> Overview:
> In May through June of 2016 a static analysis was performed on version 3.0.5 
> of the Cassandra source code. The analysis included an automated analysis 
> using HP Fortify v4.21 SCA and a manual analysis utilizing SciTools 
> Understand v4. The results of that analysis includes the issue below.
> Issue:
> Sockets are low-level resources that must be explicitly released so 
> subsequent callers will have access to previously used sockets. In the file 
> SSLFactory.java on line 62 an SSL server socket is acquired and eventually 
> returned to the caller on line 69.
> If an exception is thrown by any of the code between lines 62 and 69, the 
> socket acquired on line 62 will not be released for subsequent reuse.
> {code:java}
> SSLFactory.java, lines 59-70:
> 59 public static SSLServerSocket getServerSocket(EncryptionOptions options, 
> InetAddress address, int port) throws IOException
> 60 {
> 61 SSLContext ctx = createSSLContext(options, true);
> 62 SSLServerSocket serverSocket = 
> (SSLServerSocket)ctx.getServerSocketFactory().createServerSocket();
> 63 serverSocket.setReuseAddress(true);
> 64 String[] suites = 
> filterCipherSuites(serverSocket.getSupportedCipherSuites(), 
> options.cipher_suites);
> 65 serverSocket.setEnabledCipherSuites(suites);
> 66 serverSocket.setNeedClientAuth(options.require_client_auth);
> 67 serverSocket.setEnabledProtocols(ACCEPTED_PROTOCOLS);
> 68 serverSocket.bind(new InetSocketAddress(address, port), 500);
> 69 return serverSocket;
> 70 }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11866) nodetool repair does not obey the column family parameter when -st and -et are provided (subrange repair)

2016-08-19 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-11866:

Assignee: Vinay Chella

> nodetool repair does not obey the column family parameter when -st and -et 
> are provided (subrange repair)
> -
>
> Key: CASSANDRA-11866
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11866
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Red Hat Enterprise Linux Server release 6.7 (Santiago) 
> x86_64
>Reporter: Shiva Venkateswaran
>Assignee: Vinay Chella
>  Labels: newbie
> Fix For: 2.1.x
>
> Attachments: 0001-CASSANDRA-11866-dtest.patch, 11866-2.1.txt
>
>
> Command 1: Repairs all the CFs in ADL_GLOBAL keyspace and ignores the 
> parameter AssetModifyTimes_data used to restrict the CFs
> Executing: /aladdin/local/apps/apache-cassandra-2.1.8a/bin/nodetool -h 
> localhost -p 7199 -u user-pw ** repair ADL_GLOBAL AssetModifyTimes_data 
> -st 205279477618143669 -et 230991685737746901 -par
> [2016-05-20 17:31:39,116] Starting repair command #9, repairing 1 ranges for 
> keyspace ADL_GLOBAL (parallelism=PARALLEL, full=true)
> [2016-05-20 17:32:21,568] Repair session 3cae2530-1ed2-11e6-b490-d9df6932c7cf 
> for range (205279477618143669,230991685737746901] finished
> Command 2: Repairs all the CFs in ADL_GLOBAL keyspace and ignores the 
> parameter AssetModifyTimes_data used to restrict the CFs
> Executing: /aladdin/local/apps/apache-cassandra-2.1.8a/bin/nodetool -h 
> localhost -p 7199 -u controlRole -pw ** repair -st 205279477618143669 -et 
> 230991685737746901 -par -- ADL_GLOBAL AssetModifyTimes_data
> [2016-05-20 17:36:34,473] Starting repair command #10, repairing 1 ranges for 
> keyspace ADL_GLOBAL (parallelism=PARALLEL, full=true)
> [2016-05-20 17:37:15,365] Repair session ecb996d0-1ed2-11e6-b490-d9df6932c7cf 
> for range (205279477618143669,230991685737746901] finished
> [2016-05-20 17:37:15,365] Repair command #10 finished
> Command 3: Repairs only the CF ADL3Test1_data in keyspace ADL_GLOBAL
> Executing: /aladdin/local/apps/apache-cassandra-2.1.8a/bin/nodetool -h 
> localhost -p 7199 -u controlRole -pw ** repair -- ADL_GLOBAL 
> ADL3Test1_data
> [2016-05-20 17:38:35,781] Starting repair command #11, repairing 1043 ranges 
> for keyspace ADL_GLOBAL (parallelism=SEQUENTIAL, full=true)
> [2016-05-20 17:42:32,682] Repair session 3c8af050-1ed3-11e6-b490-d9df6932c7cf 
> for range (6241639152751626129,6241693909092643958] finished
> [2016-05-20 17:42:32,683] Repair session 3caf1a20-1ed3-11e6-b490-d9df6932c7cf 
> for range (-7096993048358106082,-7095000706885780850] finished
> [2016-05-20 17:42:32,683] Repair session 3ccfc180-1ed3-11e6-b490-d9df6932c7cf 
> for range (-7218939248114487080,-7218289345961492809] finished
> [2016-05-20 17:42:32,683] Repair session 3cf21690-1ed3-11e6-b490-d9df6932c7cf 
> for range (-5244794756638190874,-5190307341355030282] finished
> [2016-05-20 17:42:32,683] Repair session 3d126fd0-1ed3-11e6-b490-d9df6932c7cf 
> for range (3551629701277971766,321736534916502] finished
> [2016-05-20 17:42:32,683] Repair session 3d32f020-1ed3-11e6-b490-d9df6932c7cf 
> for range (-8139355591560661944,-8127928369093576603] finished
> [2016-05-20 17:42:32,683] Repair session 3d537070-1ed3-11e6-b490-d9df6932c7cf 
> for range (7098010153980465751,7100863011896759020] finished
> [2016-05-20 17:42:32,683] Repair session 3d73f0c0-1ed3-11e6-b490-d9df6932c7cf 
> for range (1004538726866173536,1008586133746764703] finished
> [2016-05-20 17:42:32,683] Repair session 3d947110-1ed3-11e6-b490-d9df6932c7cf 
> for range (5770817093573726645,5771418910784831587] finished
> .
> .
> .
> [2016-05-20 17:42:32,732] Repair command #11 finished



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12499) Row cache does not cache partitions on tables without clustering keys

2016-08-19 Thread Jeff Jirsa (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa updated CASSANDRA-12499:
---
Labels: Performance  (was: )

> Row cache does not cache partitions on tables without clustering keys
> -
>
> Key: CASSANDRA-12499
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12499
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jeff Jirsa
>  Labels: Performance
>
> {code}
> MLSEA-JJIRSA01:~ jjirsa$ ccm start
> MLSEA-JJIRSA01:~ jjirsa$ echo "DESCRIBE TABLE test.test; " | ccm node1 cqlsh
> CREATE TABLE test.test (
> id int PRIMARY KEY,
> v text
> ) WITH bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': '100'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> MLSEA-JJIRSA01:~ jjirsa$ ccm node1 nodetool info | grep Row
> Row Cache  : entries 0, size 0 bytes, capacity 100 MiB, 0 hits, 0 
> requests, NaN recent hit rate, 0 save period in seconds
> MLSEA-JJIRSA01:~ jjirsa$ echo "INSERT INTO test.test(id,v) VALUES(1, 'a'); " 
> | ccm node1 cqlsh
> MLSEA-JJIRSA01:~ jjirsa$ echo "SELECT * FROM test.test WHERE id=1; " | ccm 
> node1 cqlsh
>  id | v
> +---
>   1 | a
> (1 rows)
> MLSEA-JJIRSA01:~ jjirsa$ ccm node1 nodetool info | grep Row
> Row Cache  : entries 0, size 0 bytes, capacity 100 MiB, 0 hits, 0 
> requests, NaN recent hit rate, 0 save period in seconds
> MLSEA-JJIRSA01:~ jjirsa$ echo "SELECT * FROM test.test WHERE id=1; " | ccm 
> node1 cqlsh
>  id | v
> +---
>   1 | a
> (1 rows)
> MLSEA-JJIRSA01:~ jjirsa$ ccm node1 nodetool info | grep Row
> Row Cache  : entries 0, size 0 bytes, capacity 100 MiB, 0 hits, 0 
> requests, NaN recent hit rate, 0 save period in seconds
> MLSEA-JJIRSA01:~ jjirsa$
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12279) nodetool repair hangs on non-existant table

2016-08-19 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-12279:

Reviewer: Paulo Motta

> nodetool repair hangs on non-existant table
> ---
>
> Key: CASSANDRA-12279
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12279
> Project: Cassandra
>  Issue Type: Bug
> Environment: Linux Ubuntu, Openjdk
>Reporter: Benjamin Roth
>Priority: Minor
>  Labels: lhf
> Attachments: CASSANDRA-12279-trunk.patch, new_result_example.txt, 
> org_result_example.txt
>
>
> If nodetool repair is called with a table that does not exist, it hangs 
> infinitely without any error message or logs.
> E.g.
> nodetool repair foo bar
> Keyspace foo exists but table bar does not



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11866) nodetool repair does not obey the column family parameter when -st and -et are provided (subrange repair)

2016-08-19 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-11866:

  Tester: Brian Wawok  (was: Shiva Venkateswaran)
Reviewer: Paulo Motta

> nodetool repair does not obey the column family parameter when -st and -et 
> are provided (subrange repair)
> -
>
> Key: CASSANDRA-11866
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11866
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Red Hat Enterprise Linux Server release 6.7 (Santiago) 
> x86_64
>Reporter: Shiva Venkateswaran
>  Labels: newbie
> Fix For: 2.1.x
>
> Attachments: 0001-CASSANDRA-11866-dtest.patch, 11866-2.1.txt
>
>
> Command 1: Repairs all the CFs in ADL_GLOBAL keyspace and ignores the 
> parameter AssetModifyTimes_data used to restrict the CFs
> Executing: /aladdin/local/apps/apache-cassandra-2.1.8a/bin/nodetool -h 
> localhost -p 7199 -u user-pw ** repair ADL_GLOBAL AssetModifyTimes_data 
> -st 205279477618143669 -et 230991685737746901 -par
> [2016-05-20 17:31:39,116] Starting repair command #9, repairing 1 ranges for 
> keyspace ADL_GLOBAL (parallelism=PARALLEL, full=true)
> [2016-05-20 17:32:21,568] Repair session 3cae2530-1ed2-11e6-b490-d9df6932c7cf 
> for range (205279477618143669,230991685737746901] finished
> Command 2: Repairs all the CFs in ADL_GLOBAL keyspace and ignores the 
> parameter AssetModifyTimes_data used to restrict the CFs
> Executing: /aladdin/local/apps/apache-cassandra-2.1.8a/bin/nodetool -h 
> localhost -p 7199 -u controlRole -pw ** repair -st 205279477618143669 -et 
> 230991685737746901 -par -- ADL_GLOBAL AssetModifyTimes_data
> [2016-05-20 17:36:34,473] Starting repair command #10, repairing 1 ranges for 
> keyspace ADL_GLOBAL (parallelism=PARALLEL, full=true)
> [2016-05-20 17:37:15,365] Repair session ecb996d0-1ed2-11e6-b490-d9df6932c7cf 
> for range (205279477618143669,230991685737746901] finished
> [2016-05-20 17:37:15,365] Repair command #10 finished
> Command 3: Repairs only the CF ADL3Test1_data in keyspace ADL_GLOBAL
> Executing: /aladdin/local/apps/apache-cassandra-2.1.8a/bin/nodetool -h 
> localhost -p 7199 -u controlRole -pw ** repair -- ADL_GLOBAL 
> ADL3Test1_data
> [2016-05-20 17:38:35,781] Starting repair command #11, repairing 1043 ranges 
> for keyspace ADL_GLOBAL (parallelism=SEQUENTIAL, full=true)
> [2016-05-20 17:42:32,682] Repair session 3c8af050-1ed3-11e6-b490-d9df6932c7cf 
> for range (6241639152751626129,6241693909092643958] finished
> [2016-05-20 17:42:32,683] Repair session 3caf1a20-1ed3-11e6-b490-d9df6932c7cf 
> for range (-7096993048358106082,-7095000706885780850] finished
> [2016-05-20 17:42:32,683] Repair session 3ccfc180-1ed3-11e6-b490-d9df6932c7cf 
> for range (-7218939248114487080,-7218289345961492809] finished
> [2016-05-20 17:42:32,683] Repair session 3cf21690-1ed3-11e6-b490-d9df6932c7cf 
> for range (-5244794756638190874,-5190307341355030282] finished
> [2016-05-20 17:42:32,683] Repair session 3d126fd0-1ed3-11e6-b490-d9df6932c7cf 
> for range (3551629701277971766,321736534916502] finished
> [2016-05-20 17:42:32,683] Repair session 3d32f020-1ed3-11e6-b490-d9df6932c7cf 
> for range (-8139355591560661944,-8127928369093576603] finished
> [2016-05-20 17:42:32,683] Repair session 3d537070-1ed3-11e6-b490-d9df6932c7cf 
> for range (7098010153980465751,7100863011896759020] finished
> [2016-05-20 17:42:32,683] Repair session 3d73f0c0-1ed3-11e6-b490-d9df6932c7cf 
> for range (1004538726866173536,1008586133746764703] finished
> [2016-05-20 17:42:32,683] Repair session 3d947110-1ed3-11e6-b490-d9df6932c7cf 
> for range (5770817093573726645,5771418910784831587] finished
> .
> .
> .
> [2016-05-20 17:42:32,732] Repair command #11 finished



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11866) nodetool repair does not obey the column family parameter when -st and -et are provided (subrange repair)

2016-08-19 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15428428#comment-15428428
 ] 

Paulo Motta edited comment on CASSANDRA-11866 at 8/19/16 4:45 PM:
--

Dtest looks good, thanks! One minor nit is that the test hangs on 2.2+ due 
to CASSANDRA-12279 since table {{standard2}} does not exist. [~brian.wawok] 
While that is not fixed, could you just change it to {{counter1}} (which is 
created by stress) and open a pull request to 
[cassandra-dtest|https://github.com/riptano/cassandra-dtest]?

Meanwhile since the patch is very trivial and only affects 2.1 I will mark this 
as ready to commit. I prepared the patch for commit 
[here|https://github.com/pauloricardomg/cassandra/tree/2.1-11866].


was (Author: pauloricardomg):
Dtest looks good, thanks! One minor nit is that the test hangs on 2.2+ due 
to CASSANDRA-12279 since table {{standard2}} does not exist. While that is not 
fixed, could you just change it to {{counter1}} (which is created by stress) 
and open a pull request to 
[cassandra-dtest|https://github.com/riptano/cassandra-dtest]?

Meanwhile since the patch is very trivial and only affects 2.1 I will mark this 
as ready to commit. I prepared the patch for commit 
[here|https://github.com/pauloricardomg/cassandra/tree/2.1-11866].

> nodetool repair does not obey the column family parameter when -st and -et 
> are provided (subrange repair)
> -
>
> Key: CASSANDRA-11866
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11866
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Red Hat Enterprise Linux Server release 6.7 (Santiago) 
> x86_64
>Reporter: Shiva Venkateswaran
>  Labels: newbie
> Fix For: 2.1.x
>
> Attachments: 0001-CASSANDRA-11866-dtest.patch, 11866-2.1.txt
>
>
> Command 1: Repairs all the CFs in ADL_GLOBAL keyspace and ignores the 
> parameter AssetModifyTimes_data used to restrict the CFs
> Executing: /aladdin/local/apps/apache-cassandra-2.1.8a/bin/nodetool -h 
> localhost -p 7199 -u user-pw ** repair ADL_GLOBAL AssetModifyTimes_data 
> -st 205279477618143669 -et 230991685737746901 -par
> [2016-05-20 17:31:39,116] Starting repair command #9, repairing 1 ranges for 
> keyspace ADL_GLOBAL (parallelism=PARALLEL, full=true)
> [2016-05-20 17:32:21,568] Repair session 3cae2530-1ed2-11e6-b490-d9df6932c7cf 
> for range (205279477618143669,230991685737746901] finished
> Command 2: Repairs all the CFs in ADL_GLOBAL keyspace and ignores the 
> parameter AssetModifyTimes_data used to restrict the CFs
> Executing: /aladdin/local/apps/apache-cassandra-2.1.8a/bin/nodetool -h 
> localhost -p 7199 -u controlRole -pw ** repair -st 205279477618143669 -et 
> 230991685737746901 -par -- ADL_GLOBAL AssetModifyTimes_data
> [2016-05-20 17:36:34,473] Starting repair command #10, repairing 1 ranges for 
> keyspace ADL_GLOBAL (parallelism=PARALLEL, full=true)
> [2016-05-20 17:37:15,365] Repair session ecb996d0-1ed2-11e6-b490-d9df6932c7cf 
> for range (205279477618143669,230991685737746901] finished
> [2016-05-20 17:37:15,365] Repair command #10 finished
> Command 3: Repairs only the CF ADL3Test1_data in keyspace ADL_GLOBAL
> Executing: /aladdin/local/apps/apache-cassandra-2.1.8a/bin/nodetool -h 
> localhost -p 7199 -u controlRole -pw ** repair -- ADL_GLOBAL 
> ADL3Test1_data
> [2016-05-20 17:38:35,781] Starting repair command #11, repairing 1043 ranges 
> for keyspace ADL_GLOBAL (parallelism=SEQUENTIAL, full=true)
> [2016-05-20 17:42:32,682] Repair session 3c8af050-1ed3-11e6-b490-d9df6932c7cf 
> for range (6241639152751626129,6241693909092643958] finished
> [2016-05-20 17:42:32,683] Repair session 3caf1a20-1ed3-11e6-b490-d9df6932c7cf 
> for range (-7096993048358106082,-7095000706885780850] finished
> [2016-05-20 17:42:32,683] Repair session 3ccfc180-1ed3-11e6-b490-d9df6932c7cf 
> for range (-7218939248114487080,-7218289345961492809] finished
> [2016-05-20 17:42:32,683] Repair session 3cf21690-1ed3-11e6-b490-d9df6932c7cf 
> for range (-5244794756638190874,-5190307341355030282] finished
> [2016-05-20 17:42:32,683] Repair session 3d126fd0-1ed3-11e6-b490-d9df6932c7cf 
> for range (3551629701277971766,321736534916502] finished
> [2016-05-20 17:42:32,683] Repair session 3d32f020-1ed3-11e6-b490-d9df6932c7cf 
> for range (-8139355591560661944,-8127928369093576603] finished
> [2016-05-20 17:42:32,683] Repair session 3d537070-1ed3-11e6-b490-d9df6932c7cf 
> for range (7098010153980465751,7100863011896759020] finished
> [2016-05-20 17:42:32,683] Repair session 3d73f0c0-1ed3-11e6-b490-d9df6932c7cf 
> for range (1004538726866173536,1008586133746764703] finished
> [2016-05-20 17:42:32,683] Repair session 3d947110-1ed3-1

[jira] [Updated] (CASSANDRA-11866) nodetool repair does not obey the column family parameter when -st and -et are provided (subrange repair)

2016-08-19 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-11866:

Status: Ready to Commit  (was: Patch Available)

> nodetool repair does not obey the column family parameter when -st and -et 
> are provided (subrange repair)
> -
>
> Key: CASSANDRA-11866
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11866
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Red Hat Enterprise Linux Server release 6.7 (Santiago) 
> x86_64
>Reporter: Shiva Venkateswaran
>  Labels: newbie
> Fix For: 2.1.x
>
> Attachments: 0001-CASSANDRA-11866-dtest.patch, 11866-2.1.txt
>
>
> Command 1: Repairs all the CFs in ADL_GLOBAL keyspace and ignores the 
> parameter AssetModifyTimes_data used to restrict the CFs
> Executing: /aladdin/local/apps/apache-cassandra-2.1.8a/bin/nodetool -h 
> localhost -p 7199 -u user-pw ** repair ADL_GLOBAL AssetModifyTimes_data 
> -st 205279477618143669 -et 230991685737746901 -par
> [2016-05-20 17:31:39,116] Starting repair command #9, repairing 1 ranges for 
> keyspace ADL_GLOBAL (parallelism=PARALLEL, full=true)
> [2016-05-20 17:32:21,568] Repair session 3cae2530-1ed2-11e6-b490-d9df6932c7cf 
> for range (205279477618143669,230991685737746901] finished
> Command 2: Repairs all the CFs in ADL_GLOBAL keyspace and ignores the 
> parameter AssetModifyTimes_data used to restrict the CFs
> Executing: /aladdin/local/apps/apache-cassandra-2.1.8a/bin/nodetool -h 
> localhost -p 7199 -u controlRole -pw ** repair -st 205279477618143669 -et 
> 230991685737746901 -par -- ADL_GLOBAL AssetModifyTimes_data
> [2016-05-20 17:36:34,473] Starting repair command #10, repairing 1 ranges for 
> keyspace ADL_GLOBAL (parallelism=PARALLEL, full=true)
> [2016-05-20 17:37:15,365] Repair session ecb996d0-1ed2-11e6-b490-d9df6932c7cf 
> for range (205279477618143669,230991685737746901] finished
> [2016-05-20 17:37:15,365] Repair command #10 finished
> Command 3: Repairs only the CF ADL3Test1_data in keyspace ADL_GLOBAL
> Executing: /aladdin/local/apps/apache-cassandra-2.1.8a/bin/nodetool -h 
> localhost -p 7199 -u controlRole -pw ** repair -- ADL_GLOBAL 
> ADL3Test1_data
> [2016-05-20 17:38:35,781] Starting repair command #11, repairing 1043 ranges 
> for keyspace ADL_GLOBAL (parallelism=SEQUENTIAL, full=true)
> [2016-05-20 17:42:32,682] Repair session 3c8af050-1ed3-11e6-b490-d9df6932c7cf 
> for range (6241639152751626129,6241693909092643958] finished
> [2016-05-20 17:42:32,683] Repair session 3caf1a20-1ed3-11e6-b490-d9df6932c7cf 
> for range (-7096993048358106082,-7095000706885780850] finished
> [2016-05-20 17:42:32,683] Repair session 3ccfc180-1ed3-11e6-b490-d9df6932c7cf 
> for range (-7218939248114487080,-7218289345961492809] finished
> [2016-05-20 17:42:32,683] Repair session 3cf21690-1ed3-11e6-b490-d9df6932c7cf 
> for range (-5244794756638190874,-5190307341355030282] finished
> [2016-05-20 17:42:32,683] Repair session 3d126fd0-1ed3-11e6-b490-d9df6932c7cf 
> for range (3551629701277971766,321736534916502] finished
> [2016-05-20 17:42:32,683] Repair session 3d32f020-1ed3-11e6-b490-d9df6932c7cf 
> for range (-8139355591560661944,-8127928369093576603] finished
> [2016-05-20 17:42:32,683] Repair session 3d537070-1ed3-11e6-b490-d9df6932c7cf 
> for range (7098010153980465751,7100863011896759020] finished
> [2016-05-20 17:42:32,683] Repair session 3d73f0c0-1ed3-11e6-b490-d9df6932c7cf 
> for range (1004538726866173536,1008586133746764703] finished
> [2016-05-20 17:42:32,683] Repair session 3d947110-1ed3-11e6-b490-d9df6932c7cf 
> for range (5770817093573726645,5771418910784831587] finished
> .
> .
> .
> [2016-05-20 17:42:32,732] Repair command #11 finished



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11866) nodetool repair does not obey the column family parameter when -st and -et are provided (subrange repair)

2016-08-19 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15428428#comment-15428428
 ] 

Paulo Motta commented on CASSANDRA-11866:
-

Dtest looks good, thanks! One minor nit is that the test hangs on 2.2+ due 
to CASSANDRA-12279 since table {{standard2}} does not exist. While that is not 
fixed, could you just change it to {{counter1}} (which is created by stress) 
and open a pull request to 
[cassandra-dtest|https://github.com/riptano/cassandra-dtest]?

Meanwhile since the patch is very trivial and only affects 2.1 I will mark this 
as ready to commit. I prepared the patch for commit 
[here|https://github.com/pauloricardomg/cassandra/tree/2.1-11866].

> nodetool repair does not obey the column family parameter when -st and -et 
> are provided (subrange repair)
> -
>
> Key: CASSANDRA-11866
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11866
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: Red Hat Enterprise Linux Server release 6.7 (Santiago) 
> x86_64
>Reporter: Shiva Venkateswaran
>  Labels: newbie
> Fix For: 2.1.x
>
> Attachments: 0001-CASSANDRA-11866-dtest.patch, 11866-2.1.txt
>
>
> Command 1: Repairs all the CFs in ADL_GLOBAL keyspace and ignores the 
> parameter AssetModifyTimes_data used to restrict the CFs
> Executing: /aladdin/local/apps/apache-cassandra-2.1.8a/bin/nodetool -h 
> localhost -p 7199 -u user-pw ** repair ADL_GLOBAL AssetModifyTimes_data 
> -st 205279477618143669 -et 230991685737746901 -par
> [2016-05-20 17:31:39,116] Starting repair command #9, repairing 1 ranges for 
> keyspace ADL_GLOBAL (parallelism=PARALLEL, full=true)
> [2016-05-20 17:32:21,568] Repair session 3cae2530-1ed2-11e6-b490-d9df6932c7cf 
> for range (205279477618143669,230991685737746901] finished
> Command 2: Repairs all the CFs in ADL_GLOBAL keyspace and ignores the 
> parameter AssetModifyTimes_data used to restrict the CFs
> Executing: /aladdin/local/apps/apache-cassandra-2.1.8a/bin/nodetool -h 
> localhost -p 7199 -u controlRole -pw ** repair -st 205279477618143669 -et 
> 230991685737746901 -par -- ADL_GLOBAL AssetModifyTimes_data
> [2016-05-20 17:36:34,473] Starting repair command #10, repairing 1 ranges for 
> keyspace ADL_GLOBAL (parallelism=PARALLEL, full=true)
> [2016-05-20 17:37:15,365] Repair session ecb996d0-1ed2-11e6-b490-d9df6932c7cf 
> for range (205279477618143669,230991685737746901] finished
> [2016-05-20 17:37:15,365] Repair command #10 finished
> Command 3: Repairs only the CF ADL3Test1_data in keyspace ADL_GLOBAL
> Executing: /aladdin/local/apps/apache-cassandra-2.1.8a/bin/nodetool -h 
> localhost -p 7199 -u controlRole -pw ** repair -- ADL_GLOBAL 
> ADL3Test1_data
> [2016-05-20 17:38:35,781] Starting repair command #11, repairing 1043 ranges 
> for keyspace ADL_GLOBAL (parallelism=SEQUENTIAL, full=true)
> [2016-05-20 17:42:32,682] Repair session 3c8af050-1ed3-11e6-b490-d9df6932c7cf 
> for range (6241639152751626129,6241693909092643958] finished
> [2016-05-20 17:42:32,683] Repair session 3caf1a20-1ed3-11e6-b490-d9df6932c7cf 
> for range (-7096993048358106082,-7095000706885780850] finished
> [2016-05-20 17:42:32,683] Repair session 3ccfc180-1ed3-11e6-b490-d9df6932c7cf 
> for range (-7218939248114487080,-7218289345961492809] finished
> [2016-05-20 17:42:32,683] Repair session 3cf21690-1ed3-11e6-b490-d9df6932c7cf 
> for range (-5244794756638190874,-5190307341355030282] finished
> [2016-05-20 17:42:32,683] Repair session 3d126fd0-1ed3-11e6-b490-d9df6932c7cf 
> for range (3551629701277971766,321736534916502] finished
> [2016-05-20 17:42:32,683] Repair session 3d32f020-1ed3-11e6-b490-d9df6932c7cf 
> for range (-8139355591560661944,-8127928369093576603] finished
> [2016-05-20 17:42:32,683] Repair session 3d537070-1ed3-11e6-b490-d9df6932c7cf 
> for range (7098010153980465751,7100863011896759020] finished
> [2016-05-20 17:42:32,683] Repair session 3d73f0c0-1ed3-11e6-b490-d9df6932c7cf 
> for range (1004538726866173536,1008586133746764703] finished
> [2016-05-20 17:42:32,683] Repair session 3d947110-1ed3-11e6-b490-d9df6932c7cf 
> for range (5770817093573726645,5771418910784831587] finished
> .
> .
> .
> [2016-05-20 17:42:32,732] Repair command #11 finished



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-12497) COPY ... TO STDOUT regression in 2.2.7

2016-08-19 Thread Tyler Hobbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tyler Hobbs reassigned CASSANDRA-12497:
---

Assignee: Tyler Hobbs

> COPY ... TO STDOUT regression in 2.2.7
> --
>
> Key: CASSANDRA-12497
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12497
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Max Bowsher
>Assignee: Tyler Hobbs
>
> Cassandra 2.2.7 introduces a regression over 2.2.6 breaking COPY ... TO 
> STDOUT.
> In pylib/cqlshlib/copyutil.py, in CopyTask.__init__, self.printmsg is 
> conditionally defined to EITHER a module level function accepting arguments 
> (msg, eol=, encoding=), OR a lambda accepting arguments only (_, eol=).
> Consequently, when the lambda is in use (which requires COPY ... TO STDOUT 
> without --debug), any attempt to call CopyTask.printmsg with an encoding 
> parameter causes an exception.
> This occurs in ExportTask.run, thus rendering all COPY ... TO STDOUT without 
> --debug broken.
> The fix is to update the lambda's arguments to include encoding, or better, 
> replace it with a module-level function defined next to printmsg, so that 
> people realize the two argument lists must be kept in sync.
> The regression was introduced in this commit:
> commit 5de9de1f5832f2a0e92783e2f4412874423e6e15
> Author: Tyler Hobbs 
> Date:   Thu May 5 11:33:35 2016 -0500
> cqlsh: Handle non-ascii chars in error messages
> 
> Patch by Tyler Hobbs; reviewed by Paulo Motta for CASSANDRA-11626
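
As an editorial illustration, here is a minimal, self-contained Python sketch of the mismatch described above. The names ({{printmsg}}, {{CopyTask}}, {{swallowmsg}}) mirror the report but this is not the actual copyutil.py source, and the fix shown (a module-level no-op kept in sync with the real printer) is just one of the options the reporter suggests.

{code}
import sys

# Module-level printer described in the report: (msg, eol=, encoding=).
def printmsg(msg, eol='\n', encoding='utf8'):
    sys.stdout.write(msg + eol)
    sys.stdout.flush()

class CopyTask(object):
    def __init__(self, debug=False):
        if debug:
            self.printmsg = printmsg
        else:
            # Shape of the regression: this lambda drops the encoding keyword,
            # so self.printmsg(msg, encoding=...) raises TypeError for plain
            # COPY ... TO STDOUT (i.e. without --debug).
            self.printmsg = lambda _, eol='\n': None

# One possible fix, as suggested above: a module-level no-op whose signature
# is defined next to printmsg and kept in sync with it.
def swallowmsg(msg, eol='\n', encoding='utf8'):
    pass

task = CopyTask(debug=False)
try:
    task.printmsg('Exported 10 rows.', encoding='utf8')  # reproduces the TypeError
except TypeError as exc:
    print('broken: %s' % exc)

task.printmsg = swallowmsg
task.printmsg('Exported 10 rows.', encoding='utf8')      # fine once signatures match
{code}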



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-11031) MultiTenant : support “ALLOW FILTERING" for Partition Key

2016-08-19 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15428384#comment-15428384
 ] 

Alex Petrov edited comment on CASSANDRA-11031 at 8/19/16 4:18 PM:
--

[~jasonstack] I've fixed most of the problems with the dtests by now. [Unit 
tests|https://cassci.datastax.com/view/Dev/view/ifesdjeen/job/ifesdjeen-11031-trunk-testall/]
 look good. However, the ordering was not correct in the limit tests, and the {{PER PARTITION 
LIMIT}} tests and tests with an {{ORDER BY}} clause were absent. Could you please 
add those? You can use [my 
branch|https://github.com/ifesdjeen/cassandra-dtest/tree/11031-trunk] as a base 
for further commits.

Please use {{assert_all}} with {{ignore_order}} if a query returns more than 
one partition, as their order is not guaranteed. {{LIMIT}} tests are quite 
tricky. The easiest approach is probably to make sure you know which partition 
you're going to get back first; that way the order will be consistent. 
Otherwise you have to compare against the contents of all possible partitions.
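
For concreteness, a hedged sketch of the kind of dtest assertion meant here; it assumes the cassandra-dtest framework ({{assert_all}} from its assertions helper, a {{prepare()}} that returns a session) and uses an invented keyspace/table, so treat it as an illustration rather than a ready-made test.

{code}
from assertions import assert_all   # cassandra-dtest helper; import path may differ

def filtering_on_partition_key_test(self):
    session = self.prepare()        # dtest-style helper assumed to return a session
    session.execute("CREATE TABLE ks.t (pk int, ck int, v int, PRIMARY KEY (pk, ck))")
    for pk in (1, 2):
        for ck in (0, 1):
            session.execute("INSERT INTO ks.t (pk, ck, v) VALUES (%d, %d, %d)" % (pk, ck, ck))

    # More than one partition comes back, so the partition order is not
    # guaranteed: compare with ignore_order instead of relying on row positions.
    assert_all(session,
               "SELECT pk, ck, v FROM ks.t WHERE pk > 0 ALLOW FILTERING",
               [[1, 0, 0], [1, 1, 1], [2, 0, 0], [2, 1, 1]],
               ignore_order=True)
{code}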


was (Author: ifesdjeen):
[~jasonstack] I've fixed most of problems with dtests by now. Unit tests look 
good. However, ordering was not correct in limit tests,{{PER PARTITION LIMIT}} 
tests and test with {{ORDER BY}} clause were absent. Could you please add those?

Please, use {{assert_all}} with {{ignore_order}} if results return more than 
one partition as their order is not guaranteed. {{LIMIT}} tests are quite 
tricky. The easiest would probably be to make sure you know which partition 
you're going to be getting back first, this way order will be consistent. 
Otherwise you have to compare with contents of all possible partitions.

> MultiTenant : support “ALLOW FILTERING" for Partition Key
> -
>
> Key: CASSANDRA-11031
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11031
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: ZhaoYang
>Assignee: ZhaoYang
>Priority: Minor
> Fix For: 3.x
>
>
> Currently, Allow Filtering only works for secondary Index column or 
> clustering columns. And it's slow, because Cassandra will read all data from 
> SSTABLE from hard-disk to memory to filter.
> But we can support allow filtering on Partition Key, as far as I know, 
> Partition Key is in memory, so we can easily filter them, and then read 
> required data from SSTable.
> This will be similar to "Select * from table", which scans through the entire cluster.
> CREATE TABLE multi_tenant_table (
>   tenant_id text,
>   pk2 text,
>   c1 text,
>   c2 text,
>   v1 text,
>   v2 text,
>   PRIMARY KEY ((tenant_id,pk2),c1,c2)
> ) ;
> Select * from multi_tenant_table where tenant_id = "datastax" allow filtering;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11906) Unstable JVM due too many files when anticompacting big LCS tables

2016-08-19 Thread Sean McCarthy (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15428385#comment-15428385
 ] 

Sean McCarthy commented on CASSANDRA-11906:
---

I am working on the reproduction now.

> Unstable JVM due too many files when anticompacting big LCS tables
> --
>
> Key: CASSANDRA-11906
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11906
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stefano Ortolani
>Assignee: Sean McCarthy
> Fix For: 3.0.x
>
>
> I have recently moved from C* 2.1.x to C* 3.0.6. The setup is quite 
> heavy:
>   - 13 nodes with spinning disks
>   - ~120 GB of data per node
>   - 50% of CFs are LCS and have quite wide rows.
>   - 2/3 CFs with LCS have more than 200 SStables
> Incremental repairs do not seem to play really well with that.
> I have been running some tests node by node by using the -pr option:
> {code:xml}
> nodetool -h localhost repair -pr keyscheme
> {code}
> and to my surprise the whole process takes quite some time (1 hour
> minimum, 8 hours if I haven't been repairing for 5/6 days).
> Yesterday I tried to run the command with the -seq option so as to 
> decrease the number of simultaneous compactions. After a while
> the node on which I was running the repair simply died during
> the anticompaction phase with the following
> exception in the logs.
> {code:xml}
> ERROR [metrics-graphite-reporter-1-thread-1] 2016-05-25 21:54:21,868 
> ScheduledReporter.java:119 - RuntimeException thrown from 
> GraphiteReporter#report. Exception was suppressed.
> java.lang.RuntimeException: Failed to list files in 
> /data/cassandra/data/keyschema/columnfamily-3996ce80b7ac11e48a9b6776bf484396
>   at 
> org.apache.cassandra.db.lifecycle.LogAwareFileLister.list(LogAwareFileLister.java:57)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.lifecycle.LifecycleTransaction.getFiles(LifecycleTransaction.java:547)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.Directories$SSTableLister.filter(Directories.java:691)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.Directories$SSTableLister.listFiles(Directories.java:662)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.Directories$TrueFilesSizeVisitor.<init>(Directories.java:981)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.Directories.getTrueAllocatedSizeIn(Directories.java:893)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.Directories.trueSnapshotsSize(Directories.java:883) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.trueSnapshotsSize(ColumnFamilyStore.java:2332)
>  ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.metrics.TableMetrics$32.getValue(TableMetrics.java:637) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> org.apache.cassandra.metrics.TableMetrics$32.getValue(TableMetrics.java:634) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.reportGauge(GraphiteReporter.java:281)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.report(GraphiteReporter.java:158)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.ScheduledReporter.report(ScheduledReporter.java:162) 
> ~[metrics-core-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.ScheduledReporter$1.run(ScheduledReporter.java:117) 
> ~[metrics-core-3.1.0.jar:3.1.0]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_91]
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
> [na:1.8.0_91]
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  [na:1.8.0_91]
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  [na:1.8.0_91]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_91]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_91]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
> Caused by: java.lang.RuntimeException: java.nio.file.FileSystemException: 
> /data/cassandra/data/keyschema/columnfamily-3996ce80b7ac11e48a9b6776bf484396/ma_txn_anticompactionafterrepair_f20b50d0-22bd-11e6-970f-6f22464f4624.log:
>  Too many open files
>   at org.apache.cassandra.io.util.FileUtils.readLines(FileUtils.java:622) 
> ~[apache-cassandra-3.0.6.jar:3.0.6]
>   at java.util.stream.Collectors.lambda$toMap$

[jira] [Commented] (CASSANDRA-11031) MultiTenant : support “ALLOW FILTERING" for Partition Key

2016-08-19 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15428384#comment-15428384
 ] 

Alex Petrov commented on CASSANDRA-11031:
-

[~jasonstack] I've fixed most of the problems with the dtests by now. Unit tests look 
good. However, the ordering was not correct in the limit tests, and the {{PER PARTITION LIMIT}} 
tests and tests with an {{ORDER BY}} clause were absent. Could you please add those?

Please use {{assert_all}} with {{ignore_order}} if a query returns more than 
one partition, as their order is not guaranteed. {{LIMIT}} tests are quite 
tricky. The easiest approach is probably to make sure you know which partition 
you're going to get back first; that way the order will be consistent. 
Otherwise you have to compare against the contents of all possible partitions.

> MultiTenant : support “ALLOW FILTERING" for Partition Key
> -
>
> Key: CASSANDRA-11031
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11031
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: ZhaoYang
>Assignee: ZhaoYang
>Priority: Minor
> Fix For: 3.x
>
>
> Currently, Allow Filtering only works for secondary Index column or 
> clustering columns. And it's slow, because Cassandra will read all data from 
> SSTABLE from hard-disk to memory to filter.
> But we can support allow filtering on Partition Key, as far as I know, 
> Partition Key is in memory, so we can easily filter them, and then read 
> required data from SSTable.
> This will be similar to "Select * from table", which scans through the entire cluster.
> CREATE TABLE multi_tenant_table (
>   tenant_id text,
>   pk2 text,
>   c1 text,
>   c2 text,
>   v1 text,
>   v2 text,
>   PRIMARY KEY ((tenant_id,pk2),c1,c2)
> ) ;
> Select * from multi_tenant_table where tenant_id = "datastax" allow filtering;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-10848) Upgrade paging dtests involving deletion flap on CassCI

2016-08-19 Thread Russ Hatch (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Russ Hatch resolved CASSANDRA-10848.

Resolution: Fixed

> Upgrade paging dtests involving deletion flap on CassCI
> ---
>
> Key: CASSANDRA-10848
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10848
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Assignee: Russ Hatch
>  Labels: dtest
> Fix For: 3.0.x, 3.x
>
>
> A number of dtests in the {{upgrade_tests.paging_tests}} that involve 
> deletion flap with the following error:
> {code}
> Requested pages were not delivered before timeout.
> {code}
> This may just be an effect of CASSANDRA-10730, but it's worth having a look 
> at separately. Here are some examples of tests flapping in this way:



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12505) Authentication failure for new nodes

2016-08-19 Thread Mike (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike updated CASSANDRA-12505:
-
Description: 
I added new data center with one node to a cluster. The cluster currently has 5 
nodes in 3 data centers. I changed the replication factor of system_auth 
keyspace according to the number of nodes. I run nodetool repair and rebuild on 
new nodes, but I cannot login into the new nodes. cqlsh reports the following 
error:
{noformat}
Connection error: ('Unable to connect to any servers', {'ip': 
AuthenticationFailed(u'Failed to authenticate to ip: code=0100 [Bad 
credentials] message="Username and/or password are incorrect"',)})
{noformat}

I tried using both the SimpleStrategy and NetworkTopologyStrategy for the 
system_auth keyspace without changes. I am still able to login into old nodes. 
nodetool status reports that every node owns 100% of the system_auth keyspace. 
The log file does not show any errors.

  was:
I added new data center with one node to a cluster. The cluster currently has 5 
nodes in 3 data centers. I changed the replication factor of system_auth 
keyspace according to the number of nodes. I run nodetool repair and rebuild on 
new nodes, but I cannot login to the new nodes. cqlsh reports the following 
error:
{noformat}
Connection error: ('Unable to connect to any servers', {'ip': 
AuthenticationFailed(u'Failed to authenticate to ip: code=0100 [Bad 
credentials] message="Username and/or password are incorrect"',)})
{noformat}

I tried using both the SimpleStrategy and NetworkTopologyStrategy for the 
system_auth keyspace without changes. I am still able to login into old nodes. 
nodetool status reports that every node owns 100% of the system_auth keyspace. 
The log file does not show any errors.


> Authentication failure for new nodes
> 
>
> Key: CASSANDRA-12505
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12505
> Project: Cassandra
>  Issue Type: Bug
> Environment: Ubuntu 16.04
>Reporter: Mike
>Priority: Trivial
>
> I added new data center with one node to a cluster. The cluster currently has 
> 5 nodes in 3 data centers. I changed the replication factor of system_auth 
> keyspace according to the number of nodes. I run nodetool repair and rebuild 
> on new nodes, but I cannot login into the new nodes. cqlsh reports the 
> following error:
> {noformat}
> Connection error: ('Unable to connect to any servers', {'ip': 
> AuthenticationFailed(u'Failed to authenticate to ip: code=0100 [Bad 
> credentials] message="Username and/or password are incorrect"',)})
> {noformat}
> I tried using both the SimpleStrategy and NetworkTopologyStrategy for the 
> system_auth keyspace without changes. I am still able to login into old 
> nodes. nodetool status reports that every node owns 100% of the system_auth 
> keyspace. The log file does not show any errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12505) Authentication failure for new nodes

2016-08-19 Thread Mike (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike updated CASSANDRA-12505:
-
Priority: Trivial  (was: Critical)

> Authentication failure for new nodes
> 
>
> Key: CASSANDRA-12505
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12505
> Project: Cassandra
>  Issue Type: Bug
> Environment: Ubuntu 16.04
>Reporter: Mike
>Priority: Trivial
>
> I added new data center with one node to a cluster. The cluster currently has 
> 5 nodes in 3 data centers. I changed the replication factor of system_auth 
> keyspace according to the number of nodes. I run nodetool repair and rebuild 
> on new nodes, but I cannot login to the new nodes. cqlsh reports the 
> following error:
> {noformat}
> Connection error: ('Unable to connect to any servers', {'ip': 
> AuthenticationFailed(u'Failed to authenticate to ip: code=0100 [Bad 
> credentials] message="Username and/or password are incorrect"',)})
> {noformat}
> I tried using both the SimpleStrategy and NetworkTopologyStrategy for the 
> system_auth keyspace without changes. I am still able to login into old 
> nodes. nodetool status reports that every node owns 100% of the system_auth 
> keyspace. The log file does not show any errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12505) Authentication failure for new nodes

2016-08-19 Thread Mike (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15428377#comment-15428377
 ] 

Mike commented on CASSANDRA-12505:
--

I haven't found this mentioned in any Cassandra documentation on adding a new node/DC, 
but after I ran nodetool repair -full, I was able to log in to the new node.
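
For readers hitting the same symptom, a hedged sketch of the sequence described in this ticket, using the DataStax Python driver; the contact point, data-center names and replication factors are placeholders, and the repair step still has to be run with nodetool on the affected node(s).

{code}
from cassandra.cluster import Cluster

cluster = Cluster(['10.0.0.1'])   # placeholder contact point
session = cluster.connect()

# Give system_auth replicas in every data center (the values here are examples).
session.execute("""
    ALTER KEYSPACE system_auth
    WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3, 'dc2': 1}
""")
cluster.shutdown()

# As the reporter observed, the new node only accepts logins once the auth data
# actually reaches it, e.g. by running on that node:
#   nodetool repair -full system_auth
{code}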

> Authentication failure for new nodes
> 
>
> Key: CASSANDRA-12505
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12505
> Project: Cassandra
>  Issue Type: Bug
> Environment: Ubuntu 16.04
>Reporter: Mike
>Priority: Critical
>
> I added new data center with one node to a cluster. The cluster currently has 
> 5 nodes in 3 data centers. I changed the replication factor of system_auth 
> keyspace according to the number of nodes. I run nodetool repair and rebuild 
> on new nodes, but I cannot login to the new nodes. cqlsh reports the 
> following error:
> {noformat}
> Connection error: ('Unable to connect to any servers', {'ip': 
> AuthenticationFailed(u'Failed to authenticate to ip: code=0100 [Bad 
> credentials] message="Username and/or password are incorrect"',)})
> {noformat}
> I tried using both the SimpleStrategy and NetworkTopologyStrategy for the 
> system_auth keyspace without changes. I am still able to login into old 
> nodes. nodetool status reports that every node owns 100% of the system_auth 
> keyspace. The log file does not show any errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10848) Upgrade paging dtests involving deletion flap on CassCI

2016-08-19 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15428379#comment-15428379
 ] 

Russ Hatch commented on CASSANDRA-10848:


Given the current (happy) state of upgrade tests, I think this issue is safe to 
resolve. If there turns out to be a recurrence I think it would be better 
handled by a fresh issue for clarity.

> Upgrade paging dtests involving deletion flap on CassCI
> ---
>
> Key: CASSANDRA-10848
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10848
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>Assignee: Russ Hatch
>  Labels: dtest
> Fix For: 3.0.x, 3.x
>
>
> A number of dtests in the {{upgrade_tests.paging_tests}} that involve 
> deletion flap with the following error:
> {code}
> Requested pages were not delivered before timeout.
> {code}
> This may just be an effect of CASSANDRA-10730, but it's worth having a look 
> at separately. Here are some examples of tests flapping in this way:



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-12505) Authentication failure for new nodes

2016-08-19 Thread Mike (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike resolved CASSANDRA-12505.
--
Resolution: Fixed

> Authentication failure for new nodes
> 
>
> Key: CASSANDRA-12505
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12505
> Project: Cassandra
>  Issue Type: Bug
> Environment: Ubuntu 16.04
>Reporter: Mike
>Priority: Critical
>
> I added new data center with one node to a cluster. The cluster currently has 
> 5 nodes in 3 data centers. I changed the replication factor of system_auth 
> keyspace according to the number of nodes. I run nodetool repair and rebuild 
> on new nodes, but I cannot login to the new nodes. cqlsh reports the 
> following error:
> {noformat}
> Connection error: ('Unable to connect to any servers', {'ip': 
> AuthenticationFailed(u'Failed to authenticate to ip: code=0100 [Bad 
> credentials] message="Username and/or password are incorrect"',)})
> {noformat}
> I tried using both the SimpleStrategy and NetworkTopologyStrategy for the 
> system_auth keyspace without changes. I am still able to login into old 
> nodes. nodetool status reports that every node owns 100% of the system_auth 
> keyspace. The log file does not show any errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12506) nodetool compactionhistory in it's output should have timestamp in human readable format

2016-08-19 Thread Kenneth Failbus (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kenneth Failbus updated CASSANDRA-12506:

Description: 
While running nodetool compactionhistory the output shows id and other columns. 
I wanted to also see the timestamp for each id that got executed in human 
readable format
So, e.g. in the output if column id can be preceded by human readable timestamp 
format, it will help in understanding when a particular compaction ran and it's 
impact on the system resources.

  was:
While running nodetool compactionhistory the output shows id and other columns. 
I wanted to also see the timestamp for each id that got executed in human 
readable format
So, e.g. in the output if column id can be preceded by timestamp, it will help 
in understanding when a particular compaction ran and it's impact on the system 
resources.


> nodetool compactionhistory in it's output should have timestamp in human 
> readable format
> 
>
> Key: CASSANDRA-12506
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12506
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
> Environment: AWS
>Reporter: Kenneth Failbus
>Priority: Minor
>
> While running nodetool compactionhistory the output shows id and other 
> columns. I wanted to also see the timestamp for each id that got executed in 
> human readable format
> So, e.g. in the output if column id can be preceded by human readable 
> timestamp format, it will help in understanding when a particular compaction 
> ran and it's impact on the system resources.
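
As a rough illustration of what is being asked for, a small Python sketch that renders an epoch-millisecond {{compacted_at}} value (the raw form the current output exposes next to each id) in human-readable form; the sample value is invented.

{code}
from datetime import datetime

compacted_at_ms = 1471612800000   # invented sample value from compactionhistory output
human_readable = datetime.utcfromtimestamp(compacted_at_ms / 1000.0)
print(human_readable.strftime('%Y-%m-%dT%H:%M:%S UTC'))   # -> 2016-08-19T13:20:00 UTC
{code}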



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12506) nodetool compactionhistory in it's output should have timestamp in human readable form

2016-08-19 Thread Kenneth Failbus (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kenneth Failbus updated CASSANDRA-12506:

Description: 
While running nodetool compactionhistory the output shows id and other columns. 
I wanted to also see the timestamp for each id that got executed in human 
readable format
So, e.g. in the output if column id can be preceded by timestamp, it will help 
in understanding when a particular compaction ran and it's impact on the system 
resources.

  was:
While running nodetool compactionhistory the output shows id and other columns. 
I wanted to also see the timestamp for each id that got executed.
So, e.g. in the output if column id can be preceded by timestamp, it will help 
in understanding when a particular compaction ran and it's impact on the system 
resources.


> nodetool compactionhistory in it's output should have timestamp in human 
> readable form
> --
>
> Key: CASSANDRA-12506
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12506
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
> Environment: AWS
>Reporter: Kenneth Failbus
>Priority: Minor
>
> While running nodetool compactionhistory the output shows id and other 
> columns. I wanted to also see the timestamp for each id that got executed in 
> human readable format
> So, e.g. in the output if column id can be preceded by timestamp, it will 
> help in understanding when a particular compaction ran and it's impact on the 
> system resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12506) nodetool compactionhistory in it's output should have timestamp in human readable form

2016-08-19 Thread Kenneth Failbus (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kenneth Failbus updated CASSANDRA-12506:

Summary: nodetool compactionhistory in it's output should have timestamp in 
human readable form  (was: nodetool compactionhistory in it's output should 
have timestamp)

> nodetool compactionhistory in it's output should have timestamp in human 
> readable form
> --
>
> Key: CASSANDRA-12506
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12506
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
> Environment: AWS
>Reporter: Kenneth Failbus
>Priority: Minor
>
> While running nodetool compactionhistory the output shows id and other 
> columns. I wanted to also see the timestamp for each id that got executed.
> So, e.g. in the output if column id can be preceded by timestamp, it will 
> help in understanding when a particular compaction ran and it's impact on the 
> system resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12506) nodetool compactionhistory in it's output should have timestamp in human readable format

2016-08-19 Thread Kenneth Failbus (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kenneth Failbus updated CASSANDRA-12506:

Summary: nodetool compactionhistory in it's output should have timestamp in 
human readable format  (was: nodetool compactionhistory in it's output should 
have timestamp in human readable form)

> nodetool compactionhistory in it's output should have timestamp in human 
> readable format
> 
>
> Key: CASSANDRA-12506
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12506
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
> Environment: AWS
>Reporter: Kenneth Failbus
>Priority: Minor
>
> While running nodetool compactionhistory the output shows id and other 
> columns. I wanted to also see the timestamp for each id that got executed in 
> human readable format
> So, e.g. in the output if column id can be preceded by timestamp, it will 
> help in understanding when a particular compaction ran and it's impact on the 
> system resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12492) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x.cql3_non_compound_range_tombstones_test

2016-08-19 Thread Russ Hatch (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15428344#comment-15428344
 ] 

Russ Hatch commented on CASSANDRA-12492:


multiplexing here: 
https://cassci.datastax.com/view/Parameterized/job/parameterized_dtest_multiplexer/268/

> dtest failure in 
> upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x.cql3_non_compound_range_tombstones_test
> 
>
> Key: CASSANDRA-12492
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12492
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: Russ Hatch
>  Labels: dtest
> Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, 
> node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.9_dtest_upgrade/25/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x/cql3_non_compound_range_tombstones_test
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 
> 1562, in cql3_non_compound_range_tombstones_test
> ThriftConsistencyLevel.ALL)
>   File "/home/automaton/cassandra-dtest/thrift_bindings/v22/Cassandra.py", 
> line 1175, in batch_mutate
> self.recv_batch_mutate()
>   File "/home/automaton/cassandra-dtest/thrift_bindings/v22/Cassandra.py", 
> line 1201, in recv_batch_mutate
> raise result.te
> "TimedOutException(acknowledged_by=1, paxos_in_progress=None, 
> acknowledged_by_batchlog=None)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-12493) dtest failure in auth_test.TestAuth.conditional_create_drop_user_test

2016-08-19 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson resolved CASSANDRA-12493.
-
Resolution: Cannot Reproduce

500 runs, no repro

> dtest failure in auth_test.TestAuth.conditional_create_drop_user_test
> -
>
> Key: CASSANDRA-12493
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12493
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: Philip Thompson
>  Labels: dtest
>
> example failure:
> http://cassci.datastax.com/job/trunk_novnode_dtest/458/testReport/auth_test/TestAuth/conditional_create_drop_user_test
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
> testMethod()
>   File "/home/automaton/cassandra-dtest/auth_test.py", line 348, in 
> conditional_create_drop_user_test
> self.prepare()
>   File "/home/automaton/cassandra-dtest/auth_test.py", line 978, in prepare
> n = self.wait_for_any_log(self.cluster.nodelist(), 'Created default 
> superuser', 25)
>   File "/home/automaton/cassandra-dtest/dtest.py", line 760, in 
> wait_for_any_log
> found = node.grep_log(pattern, filename=filename)
>   File "/home/automaton/ccm/ccmlib/node.py", line 347, in grep_log
> with open(os.path.join(self.get_path(), 'logs', filename)) as f:
> "[Errno 2] No such file or directory: 
> '/tmp/dtest-XmnSYI/test/node1/logs/system.log'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12503) Structure for netstats output format (JSON, YAML)

2016-08-19 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-12503:
---
 Assignee: Hiroki Watanabe
 Reviewer: Yuki Morishita
Fix Version/s: 3.x

> Structure for netstats output format (JSON, YAML)
> -
>
> Key: CASSANDRA-12503
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12503
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Hiroki Watanabe
>Assignee: Hiroki Watanabe
>Priority: Minor
> Fix For: 3.x
>
> Attachments: new_receiving.def, new_receiving.json, 
> new_receiving.yaml, new_sending.def, new_sending.json, new_sending.yaml, 
> old_receiving.def, old_sending.def, trunk.patch
>
>
> As with nodetool tpstats and tablestats (CASSANDRA-12035), nodetool netstats 
> should also support useful output formats such as JSON or YAML, so we 
> implemented it. 
> Please review the attached patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12486) Structure for compactionhistory output (JSON, YAML)

2016-08-19 Thread Yuki Morishita (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Morishita updated CASSANDRA-12486:
---
 Assignee: Masataka Yamaguchi
 Reviewer: Yuki Morishita
Fix Version/s: 3.x

> Structure for compactionhistory output (JSON, YAML)
> ---
>
> Key: CASSANDRA-12486
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12486
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Masataka Yamaguchi
>Assignee: Masataka Yamaguchi
>Priority: Minor
> Fix For: 3.x
>
> Attachments: CASSANDRA-12486-trunk.patch, 
> compactionhistory_result.json, compactionhistory_result.txt, 
> compactionhistory_result.yaml
>
>
> As with nodetool tpstats and tablestats (CASSANDRA-12035), nodetool 
> compactionhistory should also support useful output formats such as JSON or 
> YAML, so we implemented it. 
> Please review the attached patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11701) [windows] dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_reading_with_skip_and_max_rows

2016-08-19 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15428305#comment-15428305
 ] 

Paulo Motta commented on CASSANDRA-11701:
-

Sorry for the delay here. Patch LGTM and seems like it's going to fix this 
race, marking as ready to commit. Thanks!

> [windows] dtest failure in 
> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_reading_with_skip_and_max_rows
> -
>
> Key: CASSANDRA-11701
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11701
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: Stefania
>  Labels: dtest, windows
>
> looks to be an assertion problem, so could be test or cassandra related:
> e.g.:
> {noformat}
> 1 != 331
> {noformat}
> http://cassci.datastax.com/job/trunk_dtest_win32/404/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_reading_with_skip_and_max_rows
> Failed on CassCI build trunk_dtest_win32 #404



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11701) [windows] dtest failure in cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_reading_with_skip_and_max_rows

2016-08-19 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-11701:

Status: Ready to Commit  (was: Patch Available)

> [windows] dtest failure in 
> cqlsh_tests.cqlsh_copy_tests.CqlshCopyTest.test_reading_with_skip_and_max_rows
> -
>
> Key: CASSANDRA-11701
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11701
> Project: Cassandra
>  Issue Type: Test
>Reporter: Russ Hatch
>Assignee: Stefania
>  Labels: dtest, windows
>
> looks to be an assertion problem, so could be test or cassandra related:
> e.g.:
> {noformat}
> 1 != 331
> {noformat}
> http://cassci.datastax.com/job/trunk_dtest_win32/404/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_reading_with_skip_and_max_rows
> Failed on CassCI build trunk_dtest_win32 #404



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9054) Let DatabaseDescriptor not implicitly startup services

2016-08-19 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15428261#comment-15428261
 ] 

Jeremiah Jordan commented on CASSANDRA-9054:


+1

> Let DatabaseDescriptor not implicitly startup services
> --
>
> Key: CASSANDRA-9054
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9054
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jeremiah Jordan
>Assignee: Robert Stupp
> Fix For: 3.10
>
>
> Right now to get at Config stuff you go through DatabaseDescriptor.  But when 
> you instantiate DatabaseDescriptor it actually opens system tables and such, 
> which triggers commit log replays, and other things if the right flags aren't 
> set ahead of time.  This makes getting at config stuff from tools annoying, 
> as you have to be very careful about instantiation orders.
> It would be nice if we could break DatabaseDescriptor up into multiple 
> classes, so that getting at config stuff from tools wasn't such a pain.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-9054) Let DatabaseDescriptor not implicitly startup services

2016-08-19 Thread Jeremiah Jordan (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15427156#comment-15427156
 ] 

Jeremiah Jordan edited comment on CASSANDRA-9054 at 8/19/16 2:29 PM:
-

Was just looking through this.
{{AuthConfig.applyAuthz()}} is called from both 
{{DatabaseDescriptor.daemonInitialization()}} and 
{{CassandraDaemon.applyConfig()}}, which means it gets called twice, as 
{{applyConfig}} also calls {{daemonInitialization}}

Also, a nit on naming: {{applyAuthz}} should probably just be called {{apply}} or 
{{applyAuth}}; we usually use "Authz" to mean specifically Authori*z*ation, and 
that method sets up both Authorization and Authentication.


was (Author: jjordan):
Was just looking through this.
{{AuthConfig.applyAuthz()}} is called from both 
{{DatabaseDescriptor.daemonInitialization()}} and 
{{CassandraDaemon.applyConfig()}}, which means it gets called twice, as 
{{applyConfig}} also calls {{daemonInitialization}}

Also a nit on naming {{applyAuthz} should probably just be called {{apply}} or 
{{applyAuth}} we usually use "Authz" to mean specifically Authori*z*ation, and 
that method sets up both Authorization and Authentication.

> Let DatabaseDescriptor not implicitly startup services
> --
>
> Key: CASSANDRA-9054
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9054
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jeremiah Jordan
>Assignee: Robert Stupp
> Fix For: 3.10
>
>
> Right now to get at Config stuff you go through DatabaseDescriptor.  But when 
> you instantiate DatabaseDescriptor it actually opens system tables and such, 
> which triggers commit log replays, and other things if the right flags aren't 
> set ahead of time.  This makes getting at config stuff from tools annoying, 
> as you have to be very careful about instantiation orders.
> It would be nice if we could break DatabaseDescriptor up into multiple 
> classes, so that getting at config stuff from tools wasn't such a pain.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[Cassandra Wiki] Trivial Update of "ThirdPartySupport" by DaveBrosius

2016-08-19 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for 
change notification.

The "ThirdPartySupport" page has been changed by DaveBrosius:
https://wiki.apache.org/cassandra/ThirdPartySupport?action=diff&rev1=48&rev2=49

  {{http://www.decisivelabs.com.au/img/platforms/instaclustr-l...@2x.png}} 
[[https://www.instaclustr.com/?cid=casspp|Instaclustr]] provides managed Apache 
Cassandra hosting on Amazon Web Services. Instaclustr dramatically reduces 
administration overheads and support costs by providing automated deployment, 
backups, cluster balancing and performance tuning.
  
  
- 
-  /!\ '''Edit conflict - other version:''' 
  
{{https://opencredo.com/wp-content/uploads/2013/07/OpenCredo-Logo-Alt-CMYK-Process-Converted-300x72.png}}
 [[https://opencredo.com|OpenCredo]] is a pragmatic hands-on software and 
devOps consultancy with a wealth of experience in open source technologies. We 
are Datastax Certified experts and have been working with Cassandra since 2012. 
And so through our real-world experience, we can provide expertise in both 
Apache Cassandra and DataStax Enterprise. Contact us at i...@opencredo.com
  
  


[jira] [Created] (CASSANDRA-12507) Wanted to know if there will be any future work in supporting hybrid snitches

2016-08-19 Thread Kenneth Failbus (JIRA)
Kenneth Failbus created CASSANDRA-12507:
---

 Summary: Wanted to know if there will be any future work in 
supporting hybrid snitches
 Key: CASSANDRA-12507
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12507
 Project: Cassandra
  Issue Type: Wish
  Components: Core
Reporter: Kenneth Failbus
Priority: Minor


I have a mix of cloud technologies on which I run a Cassandra cluster. I wanted to 
know if I can leverage EC2MRS as the snitch on the EC2 cloud while leveraging other 
snitch types in the other cloud environments, e.g. using GPFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12506) nodetool compactionhistory in it's output should have timestamp

2016-08-19 Thread Kenneth Failbus (JIRA)
Kenneth Failbus created CASSANDRA-12506:
---

 Summary: nodetool compactionhistory in it's output should have 
timestamp
 Key: CASSANDRA-12506
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12506
 Project: Cassandra
  Issue Type: Improvement
  Components: Compaction
 Environment: AWS
Reporter: Kenneth Failbus
Priority: Minor


While running nodetool compactionhistory the output shows id and other columns. 
I wanted to also see the timestamp for each id that got executed.
So, e.g. in the output if column id can be preceded by timestamp, it will help 
in understanding when a particular compaction ran and it's impact on the system 
resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12505) Authentication failure for new nodes

2016-08-19 Thread Mike (JIRA)
Mike created CASSANDRA-12505:


 Summary: Authentication failure for new nodes
 Key: CASSANDRA-12505
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12505
 Project: Cassandra
  Issue Type: Bug
 Environment: Ubuntu 16.04
Reporter: Mike
Priority: Critical


I added new data center with one node to a cluster. The cluster currently has 5 
nodes in 3 data centers. I changed the replication factor of system_auth 
keyspace according to the number of nodes. I run nodetool repair and rebuild on 
new nodes, but I cannot login to the new nodes. cqlsh reports the following 
error:
{noformat}
Connection error: ('Unable to connect to any servers', {'ip': 
AuthenticationFailed(u'Failed to authenticate to ip: code=0100 [Bad 
credentials] message="Username and/or password are incorrect"',)})
{noformat}

I tried using both the SimpleStrategy and NetworkTopologyStrategy for the 
system_auth keyspace without changes. I am still able to login into old nodes. 
nodetool status reports that every node owns 100% of the system_auth keyspace. 
The log file does not show any errors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11031) MultiTenant : support “ALLOW FILTERING" for Partition Key

2016-08-19 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15428166#comment-15428166
 ] 

Benjamin Lerer commented on CASSANDRA-11031:


[~jasonstack] Sorry, I got stuck on some other patches.
I will do the review as soon as Alex is happy with his latest changes.

> MultiTenant : support “ALLOW FILTERING" for Partition Key
> -
>
> Key: CASSANDRA-11031
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11031
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: ZhaoYang
>Assignee: ZhaoYang
>Priority: Minor
> Fix For: 3.x
>
>
> Currently, Allow Filtering only works for secondary Index column or 
> clustering columns. And it's slow, because Cassandra will read all data from 
> SSTABLE from hard-disk to memory to filter.
> But we can support allow filtering on Partition Key, as far as I know, 
> Partition Key is in memory, so we can easily filter them, and then read 
> required data from SSTable.
> This will be similar to "Select * from table", which scans through the entire cluster.
> CREATE TABLE multi_tenant_table (
>   tenant_id text,
>   pk2 text,
>   c1 text,
>   c2 text,
>   v1 text,
>   v2 text,
>   PRIMARY KEY ((tenant_id,pk2),c1,c2)
> ) ;
> Select * from multi_tenant_table where tenant_id = "datastax" allow filtering;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11031) MultiTenant : support “ALLOW FILTERING" for Partition Key

2016-08-19 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15428162#comment-15428162
 ] 

Alex Petrov commented on CASSANDRA-11031:
-

After the test run for the latest changes I've noticed that the dtests are mostly 
incorrect, since they expect a particular partition order and sometimes even 
use LIMIT. That cannot work, because we cannot guarantee the positioning and order 
of partitions in a slice query without re-ordering them on the coordinator (which 
does not happen in this case).

> MultiTenant : support “ALLOW FILTERING" for Partition Key
> -
>
> Key: CASSANDRA-11031
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11031
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: ZhaoYang
>Assignee: ZhaoYang
>Priority: Minor
> Fix For: 3.x
>
>
> Currently, Allow Filtering only works for secondary Index column or 
> clustering columns. And it's slow, because Cassandra will read all data from 
> SSTABLE from hard-disk to memory to filter.
> But we can support allow filtering on Partition Key, as far as I know, 
> Partition Key is in memory, so we can easily filter them, and then read 
> required data from SSTable.
> This will be similar to "Select * from table", which scans through the entire cluster.
> CREATE TABLE multi_tenant_table (
>   tenant_id text,
>   pk2 text,
>   c1 text,
>   c2 text,
>   v1 text,
>   v2 text,
>   PRIMARY KEY ((tenant_id,pk2),c1,c2)
> ) ;
> Select * from multi_tenant_table where tenant_id = "datastax" allow filtering;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[Cassandra Wiki] Update of "ThirdPartySupport" by Danielle Blake

2016-08-19 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for 
change notification.

The "ThirdPartySupport" page has been changed by Danielle Blake:
https://wiki.apache.org/cassandra/ThirdPartySupport?action=diff&rev1=47&rev2=48

  {{http://www.decisivelabs.com.au/img/platforms/instaclustr-l...@2x.png}} 
[[https://www.instaclustr.com/?cid=casspp|Instaclustr]] provides managed Apache 
Cassandra hosting on Amazon Web Services. Instaclustr dramatically reduces 
administration overheads and support costs by providing automated deployment, 
backups, cluster balancing and performance tuning.
  
  
+ 
+  /!\ '''Edit conflict - other version:''' 
  
{{https://opencredo.com/wp-content/uploads/2013/07/OpenCredo-Logo-Alt-CMYK-Process-Converted-300x72.png}}
 [[https://opencredo.com|OpenCredo]] is a pragmatic hands-on software and 
devOps consultancy with a wealth of experience in open source technologies. We 
are Datastax Certified experts and have been working with Cassandra since 2012. 
And so through our real-world experience, we can provide expertise in both 
Apache Cassandra and DataStax Enterprise. Contact us at i...@opencredo.com
  
  


[Cassandra Wiki] Update of "ThirdPartySupport" by Danielle Blake

2016-08-19 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for 
change notification.

The "ThirdPartySupport" page has been changed by Danielle Blake:
https://wiki.apache.org/cassandra/ThirdPartySupport?action=diff&rev1=47&rev2=48

  {{http://www.decisivelabs.com.au/img/platforms/instaclustr-l...@2x.png}} 
[[https://www.instaclustr.com/?cid=casspp|Instaclustr]] provides managed Apache 
Cassandra hosting on Amazon Web Services. Instaclustr dramatically reduces 
administration overheads and support costs by providing automated deployment, 
backups, cluster balancing and performance tuning.
  
  
+ 
+  /!\ '''Edit conflict - other version:''' 
  
{{https://opencredo.com/wp-content/uploads/2013/07/OpenCredo-Logo-Alt-CMYK-Process-Converted-300x72.png}}
 [[https://opencredo.com|OpenCredo]] is a pragmatic hands-on software and 
devOps consultancy with a wealth of experience in open source technologies. We 
are Datastax Certified experts and have been working with Cassandra since 2012. 
And so through our real-world experience, we can provide expertise in both 
Apache Cassandra and DataStax Enterprise. Contact us at i...@opencredo.com
  
  


[jira] [Commented] (CASSANDRA-10993) Make read and write requests paths fully non-blocking, eliminate related stages

2016-08-19 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15428042#comment-15428042
 ] 

Jason Brown commented on CASSANDRA-10993:
-

bq. Isn't this a good example of a case where we can start with an off the 
shelf API (Rx, or Reactor) and optimize later?

I think this is a good point of departure, and I agree with [~jbellis]'s 
comments. TBH, I'm a bit worried about the time investment to actually build 
the custom solution (note: this is not a jab at anyone's abilities or anything, 
but recognizing the complexity vs. available labor vs. time to ship). If we can 
hit a large percentage of our target performance goal by using an 
off-the-shelf library, and swapping to a custom solution remains reasonable 
after that, I think we should start with the off-the-shelf library to deliver 
value to our users sooner.

> Make read and write requests paths fully non-blocking, eliminate related 
> stages
> ---
>
> Key: CASSANDRA-10993
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10993
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Coordination, Local Write-Read Paths
>Reporter: Aleksey Yeschenko
>Assignee: Tyler Hobbs
> Fix For: 3.x
>
> Attachments: 10993-reads-no-evloop-integration-six-node-stress.svg, 
> tpc-benchmarks-2.txt, tpc-benchmarks.txt
>
>
> Building on work done by [~tjake] (CASSANDRA-10528), [~slebresne] 
> (CASSANDRA-5239), and others, convert read and write request paths to be 
> fully non-blocking, to enable the eventual transition from SEDA to TPC 
> (CASSANDRA-10989)
> Eliminate {{MUTATION}}, {{COUNTER_MUTATION}}, {{VIEW_MUTATION}}, {{READ}}, 
> and {{READ_REPAIR}} stages, move read and write execution directly to Netty 
> context.
> For lack of decent async I/O options on Linux, we’ll still have to retain an 
> extra thread pool for serving read requests for data not residing in our page 
> cache (CASSANDRA-5863), however.
> Implementation-wise, we only have two options available to us: explicit FSMs 
> and chained futures. Fibers would be the third, and easiest option, but 
> aren’t feasible in Java without resorting to direct bytecode manipulation 
> (ourselves or using [quasar|https://github.com/puniverse/quasar]).
> I have seen 4 implementations based on chained futures/promises now - three 
> in Java and one in C++ - and I’m not convinced that it’s the optimal (or 
> sane) choice for representing our complex logic - think 2i quorum read 
> requests with timeouts at all levels, read repair (blocking and 
> non-blocking), and speculative retries in the mix, {{SERIAL}} reads and 
> writes.
> I’m currently leaning towards an implementation based on explicit FSMs, and 
> intend to provide a prototype - soonish - for comparison with 
> {{CompletableFuture}}-like variants.
> Either way the transition is a relatively boring straightforward refactoring.
> There are, however, some extension points on both write and read paths that 
> we do not control:
> - authorisation implementations will have to be non-blocking. We have control 
> over built-in ones, but for any custom implementation we will have to execute 
> them in a separate thread pool
> - 2i hooks on the write path will need to be non-blocking
> - any trigger implementations will not be allowed to block
> - UDFs and UDAs
> We are further limited by API compatibility restrictions in the 3.x line, 
> forbidding us to alter, or add any non-{{default}} interface methods to those 
> extension points, so these pose a problem.
> Depending on logistics, expecting to get this done in time for 3.4 or 3.6 
> feature release.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12461) Add hooks to StorageService shutdown

2016-08-19 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-12461:

Status: Awaiting Feedback  (was: Open)

> Add hooks to StorageService shutdown
> 
>
> Key: CASSANDRA-12461
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12461
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Anthony Cozzie
>Assignee: Anthony Cozzie
> Attachments: 
> 0001-CASSANDRA-12461-add-C-support-for-shutdown-runnables.patch
>
>
> The JVM will usually run shutdown hooks in parallel.  This can lead to 
> synchronization problems between Cassandra, services that depend on it, and 
> services it depends on.  This patch adds some simple support for shutdown 
> hooks to StorageService.
> This should nearly solve CASSANDRA-12011



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12461) Add hooks to StorageService shutdown

2016-08-19 Thread Alex Petrov (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Petrov updated CASSANDRA-12461:

Status: Open  (was: Patch Available)

> Add hooks to StorageService shutdown
> 
>
> Key: CASSANDRA-12461
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12461
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Anthony Cozzie
>Assignee: Anthony Cozzie
> Attachments: 
> 0001-CASSANDRA-12461-add-C-support-for-shutdown-runnables.patch
>
>
> The JVM will usually run shutdown hooks in parallel.  This can lead to 
> synchronization problems between Cassandra, services that depend on it, and 
> services it depends on.  This patch adds some simple support for shutdown 
> hooks to StorageService.
> This should nearly solve CASSANDRA-12011



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12461) Add hooks to StorageService shutdown

2016-08-19 Thread Alex Petrov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15428040#comment-15428040
 ] 

Alex Petrov commented on CASSANDRA-12461:
-

Hi [~acoz]. I have several minor suggestions: 

  * run post-shutdown hooks after setting the mode
  * disallow removing hooks if already in shutdown hook
  * return boolean yielded by `add` of the list
  * currently shutdown hooks will run twice on a drained node: once during the 
node drain and a second time during the actual shutdown; is that intended / 
known? Should we even run the shutdown process a second time after the node 
is already drained? Or can we use an atomic boolean and ensure exactly one run?
  * it might be possible to avoid synchronisation by using an atomic boolean 
instead of a volatile + synchronised block 

I also suggest working further on fixing [CASSANDRA-12011], since currently it 
results in

{code}
LOGBACK: No context given for 
ch.qos.logback.core.hook.DelayingShutdownHook@5bdd8803
{code}

which in my opinion requires some investigation. I've started reading up on it 
and it seems that,
for instance, Spring Boot is doing it through the context shutdown.

Since logging isn't going to be available for post-shutdown hooks, it'd be hard 
to use them,
as you cannot see their side effects.

I've pushed a dirty copy of my suggestions 
[here|https://github.com/ifesdjeen/cassandra/commits/12461-trunk]. It might be 
simpler to look at the code :)

> Add hooks to StorageService shutdown
> 
>
> Key: CASSANDRA-12461
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12461
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Anthony Cozzie
>Assignee: Anthony Cozzie
> Attachments: 
> 0001-CASSANDRA-12461-add-C-support-for-shutdown-runnables.patch
>
>
> The JVM will usually run shutdown hooks in parallel.  This can lead to 
> synchronization problems between Cassandra, services that depend on it, and 
> services it depends on.  This patch adds some simple support for shutdown 
> hooks to StorageService.
> This should nearly solve CASSANDRA-12011



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-12504) BatchlogManager is shut down twice during drain

2016-08-19 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania reassigned CASSANDRA-12504:


Assignee: Stefania

> BatchlogManager is shut down twice during drain
> ---
>
> Key: CASSANDRA-12504
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12504
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Alex Petrov
>Assignee: Stefania
>Priority: Minor
>
> {{BatchlogManager}} is shut down twice during drain in the {{StorageService}}, once 
> [here|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/StorageService.java#L4216]
>  and once 
> [here|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/StorageService.java#L4285].
> The first shutdown was already there but the last one seems to have been 
> added 
> [here|https://github.com/apache/cassandra/commit/53a177a9150586e56408f25c959f75110a2997e7#diff-b76a607445d53f18a98c9df14323c7ddR3913].
>  It seems to be harmless, as it’s not required in stages in-between, so 
> second run would be a no-op.
> Following the logic of the other shutdown hooks, the first one is the right place 
> for it to be (right before {{HintsService}}).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12504) BatchlogManager is shut down twice during drain

2016-08-19 Thread Alex Petrov (JIRA)
Alex Petrov created CASSANDRA-12504:
---

 Summary: BatchlogManager is shut down twice during drain
 Key: CASSANDRA-12504
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12504
 Project: Cassandra
  Issue Type: Bug
Reporter: Alex Petrov
Priority: Minor


{{BatchlogManager}} is shut down twice during drain in the {{StorageService}}, once 
[here|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/StorageService.java#L4216]
 and once 
[here|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/StorageService.java#L4285].

The first shutdown was already there but the last one seems to have been added 
[here|https://github.com/apache/cassandra/commit/53a177a9150586e56408f25c959f75110a2997e7#diff-b76a607445d53f18a98c9df14323c7ddR3913].
It seems to be harmless, as it’s not required in stages in-between, so the second 
run would be a no-op.

Following the logic of the other shutdown hooks, the first one is the right place 
for it to be (right before {{HintsService}}).
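
For illustration only, a sketch of an idempotent shutdown guard showing why the 
second run is a no-op (hypothetical class and method names; this is not the 
actual {{BatchlogManager}} API):

{code}
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical service, for illustration only.
public class ExampleBatchlogService
{
    private final AtomicBoolean isShutdown = new AtomicBoolean(false);

    public void shutdownAndWait()
    {
        // The second and any subsequent calls return immediately, so being
        // invoked both from drain() and from the final shutdown path is
        // harmless, even though only one call site is actually needed.
        if (!isShutdown.compareAndSet(false, true))
            return;

        // ... stop executors, flush pending work, release resources ...
    }
}
{code}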



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-12199) Config class uses boxed types but DD exposes primitive types

2016-08-19 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp reassigned CASSANDRA-12199:


Assignee: Robert Stupp

> Config class uses boxed types but DD exposes primitive types
> 
>
> Key: CASSANDRA-12199
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12199
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
>  Labels: lhf
> Fix For: 3.x
>
>
> {{Config}} class contains a lot of properties that are defined using boxed 
> types - ({{Config.dynamic_snitch_update_interval_in_ms}}) but the 
> corresponding get-methods in {{DatabaseDescriptor}} require them to be 
> non-null. That means setting such properties to {{null}} will lead to NPEs anyway.
> Proposal:
> * Identify all properties that use boxed values and have a default value 
> (e.g. {{public Integer rpc_port = 9160;}})
> * Refactor those to use primitive types
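
A simplified illustration of the problem and the proposal (only {{rpc_port}} 
comes from the ticket text; the class bodies below are made up and are not the 
real {{Config}} / {{DatabaseDescriptor}} code):

{code}
// Simplified illustration only -- not the real Config / DatabaseDescriptor classes.
class Config
{
    // Boxed type with a default value: the YAML can still null it out
    // and the compiler will not complain.
    public Integer rpc_port = 9160;

    // Proposal: use the primitive instead, so a null can never sneak in.
    // public int rpc_port = 9160;
}

class DatabaseDescriptor
{
    private static final Config conf = new Config();

    // Auto-unboxing throws a NullPointerException here if rpc_port was nulled,
    // so the boxed field does not actually make the getter null-safe.
    public static int getRpcPort()
    {
        return conf.rpc_port;
    }
}
{code}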



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12154) "SELECT * FROM foo LIMIT ;" does not error out

2016-08-19 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-12154:
-
Reviewer: Robert Stupp

> "SELECT * FROM foo LIMIT ;" does not error out
> --
>
> Key: CASSANDRA-12154
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12154
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Robert Stupp
>Assignee: Mahdi Mohammadi
>Priority: Minor
>  Labels: lhf
>
> We found out that {{SELECT * FROM foo LIMIT ;}} is silently accepted and 
> executed, but it should not be.
> We haven't dug deeper into why that is possible (it's not a big issue IMO), 
> but it is strange. It seems {{LIMIT}} isn't parsed as {{K_LIMIT}}, because 
> otherwise it would require an int argument.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-12092) dtest failure in consistency_test.TestAccuracy.test_simple_strategy_counters

2016-08-19 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15427952#comment-15427952
 ] 

Stefania commented on CASSANDRA-12092:
--

I was finally able to reproduce one 
[failure|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-dtest-multiplex/23/]
 with debug logs; see 
[^jenkins-stef1927-dtest-multiplex-23_logs.000.tar.gz] attached.

It seems that when the test fails, a digest arrives before the local read 
completes at CL ONE, and this triggers async repair due to the digest mismatch 
if the other node has a different counter value, which is expected. However, I 
haven't understood why neither of the two nodes returns the up-to-date counter 
from the local data response, since at least one of them should have applied 
the mutation that it received from the leader - given that we write at 
CL.QUORUM. So I'm still investigating.

> dtest failure in consistency_test.TestAccuracy.test_simple_strategy_counters
> 
>
> Key: CASSANDRA-12092
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12092
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: Stefania
>  Labels: dtest
> Attachments: jenkins-stef1927-dtest-multiplex-23_logs.000.tar.gz, 
> node1.log, node2.log, node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_dtest/484/testReport/consistency_test/TestAccuracy/test_simple_strategy_counters
> Failed on CassCI build cassandra-2.1_dtest #484
> {code}
> Standard Error
> Traceback (most recent call last):
>   File "/home/automaton/cassandra-dtest/consistency_test.py", line 514, in run
> valid_fcn(v)
>   File "/home/automaton/cassandra-dtest/consistency_test.py", line 497, in 
> validate_counters
> check_all_sessions(s, n, c)
>   File "/home/automaton/cassandra-dtest/consistency_test.py", line 490, in 
> check_all_sessions
> "value of %s at key %d, instead got these values: %s" % (write_nodes, 
> val, n, results)
> AssertionError: Failed to read value from sufficient number of nodes, 
> required 2 nodes to have a counter value of 1 at key 200, instead got these 
> values: [0, 0, 1]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12092) dtest failure in consistency_test.TestAccuracy.test_simple_strategy_counters

2016-08-19 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-12092:
-
Attachment: jenkins-stef1927-dtest-multiplex-23_logs.000.tar.gz

> dtest failure in consistency_test.TestAccuracy.test_simple_strategy_counters
> 
>
> Key: CASSANDRA-12092
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12092
> Project: Cassandra
>  Issue Type: Test
>Reporter: Sean McCarthy
>Assignee: Stefania
>  Labels: dtest
> Attachments: jenkins-stef1927-dtest-multiplex-23_logs.000.tar.gz, 
> node1.log, node2.log, node3.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-2.1_dtest/484/testReport/consistency_test/TestAccuracy/test_simple_strategy_counters
> Failed on CassCI build cassandra-2.1_dtest #484
> {code}
> Standard Error
> Traceback (most recent call last):
>   File "/home/automaton/cassandra-dtest/consistency_test.py", line 514, in run
> valid_fcn(v)
>   File "/home/automaton/cassandra-dtest/consistency_test.py", line 497, in 
> validate_counters
> check_all_sessions(s, n, c)
>   File "/home/automaton/cassandra-dtest/consistency_test.py", line 490, in 
> check_all_sessions
> "value of %s at key %d, instead got these values: %s" % (write_nodes, 
> val, n, results)
> AssertionError: Failed to read value from sufficient number of nodes, 
> required 2 nodes to have a counter value of 1 at key 200, instead got these 
> values: [0, 0, 1]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12503) Structure for netstats output format (JSON, YAML)

2016-08-19 Thread Hiroki Watanabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hiroki Watanabe updated CASSANDRA-12503:

Component/s: Tools

> Structure for netstats output format (JSON, YAML)
> -
>
> Key: CASSANDRA-12503
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12503
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Hiroki Watanabe
>Priority: Minor
> Attachments: new_receiving.def, new_receiving.json, 
> new_receiving.yaml, new_sending.def, new_sending.json, new_sending.yaml, 
> old_receiving.def, old_sending.def, trunk.patch
>
>
> As with nodetool tpstats and tablestats (CASSANDRA-12035), nodetool netstats 
> should also support useful output formats such as JSON or YAML, so we 
> implemented it. 
> Please review the attached patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-12503) Structure for netstats output format (JSON, YAML)

2016-08-19 Thread Hiroki Watanabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hiroki Watanabe updated CASSANDRA-12503:

Status: Patch Available  (was: Open)

> Structure for netstats output format (JSON, YAML)
> -
>
> Key: CASSANDRA-12503
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12503
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Hiroki Watanabe
>Priority: Minor
> Attachments: new_receiving.def, new_receiving.json, 
> new_receiving.yaml, new_sending.def, new_sending.json, new_sending.yaml, 
> old_receiving.def, old_sending.def, trunk.patch
>
>
> As with nodetool tpstats and tablestats (CASSANDRA-12035), nodetool netstats 
> should also support useful output formats such as JSON or YAML, so we 
> implemented it. 
> Please review the attached patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-12503) Structure for netstats output format (JSON, YAML)

2016-08-19 Thread Hiroki Watanabe (JIRA)
Hiroki Watanabe created CASSANDRA-12503:
---

 Summary: Structure for netstats output format (JSON, YAML)
 Key: CASSANDRA-12503
 URL: https://issues.apache.org/jira/browse/CASSANDRA-12503
 Project: Cassandra
  Issue Type: Improvement
Reporter: Hiroki Watanabe
Priority: Minor
 Attachments: new_receiving.def, new_receiving.json, 
new_receiving.yaml, new_sending.def, new_sending.json, new_sending.yaml, 
old_receiving.def, old_sending.def, trunk.patch

As with nodetool tpstats and tablestats (CASSANDRA-12035), nodetool netstats 
should also support useful output formats such as JSON or YAML, so we 
implemented it. 
Please review the attached patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[Cassandra Wiki] Trivial Update of "ContributorsGroup" by DaveBrosius

2016-08-19 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for 
change notification.

The "ContributorsGroup" page has been changed by DaveBrosius:
https://wiki.apache.org/cassandra/ContributorsGroup?action=diff&rev1=63&rev2=64

   * ChrisBurroughs
   * daniels
   * Danielle Blake
-  * DanielleBlake
   * EricEvans
   * ErnieHershey
   * FlipKromer

