[jira] [Comment Edited] (CASSANDRA-14393) Incorrect view updates

2018-06-27 Thread Alexander Ivakov (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16525917#comment-16525917
 ] 

Alexander Ivakov edited comment on CASSANDRA-14393 at 6/28/18 5:52 AM:
---

The above appears to be a design feature implemented in CASSANDRA-11500. See 
point c) in:

{{view row exists if any of following is true:

a. base row pk has live livenessInfo(timestamp) and base row pk satisfies 
view's filter conditions if any.
b. or one of base row columns selected in view has live timestamp (via update) 
and base row pk satisfies view's filter conditions if any. this is handled by 
existing mechanism of liveness and tombstone since all info are included in 
view row
c. or one of base row columns not selected in view has live timestamp (via 
update) and base row pk satisfies view's filter conditions if any. Those 
unselected columns' timestamp/ttl/cell-deletion info are not currently stored 
on view row.}}

from here: 
https://issues.apache.org/jira/browse/CASSANDRA-11500?focusedCommentId=16082241&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16082241

When a base table row is updated, the corresponding view row has its (primary 
key) liveness info updated to that of the update, including TTL and local 
deletion time. This leads to the behaviour seen here when the update has a 
longer TTL than the row itself: the (view) row is marked with the update's TTL 
and remains live for that long, even if the data in the row has already 
expired. Note that this only affects the view; the base table row liveness is 
not altered (only INSERTs can set row liveness on tables). Hence, querying the 
table returns no results, while querying the view returns a row with empty 
(expired) data cells.

Note also that DELETE does not do the above, i.e. it does not update the row 
liveness and TTL of the MV row.
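
To make the mechanism concrete, here is a small, self-contained Java sketch 
(simplified stand-in types, not Cassandra's actual LivenessInfo or view-update 
code) of how merging liveness by highest timestamp lets an UPDATE's longer TTL 
keep the view row live after the base data has expired:

{code:java}
// Simplified model: row liveness is (timestamp, expiration time), and on
// merge the liveness with the higher timestamp wins - so an UPDATE with a
// long TTL can extend the view row's life past the base data's TTL.
public class ViewLivenessSketch
{
    static final class Liveness
    {
        final long timestamp;       // write time
        final long expiresAtMillis; // derived from the write's TTL

        Liveness(long timestamp, long expiresAtMillis)
        {
            this.timestamp = timestamp;
            this.expiresAtMillis = expiresAtMillis;
        }

        Liveness merge(Liveness other)
        {
            return timestamp >= other.timestamp ? this : other;
        }

        boolean isLive(long nowMillis)
        {
            return nowMillis < expiresAtMillis;
        }
    }

    public static void main(String[] args)
    {
        long now = System.currentTimeMillis();
        Liveness insert = new Liveness(1, now + 5_000);     // INSERT ... USING TTL 5
        Liveness update = new Liveness(2, now + 1_000_000); // UPDATE ... USING TTL 1000

        Liveness viewRow = insert.merge(update);
        // 10s later the base data (TTL 5) is gone, but the view row's
        // liveness - taken from the higher-timestamp UPDATE - is still live.
        System.out.println(viewRow.isLive(now + 10_000)); // prints: true
    }
}
{code}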

Perhaps [~jasonstack] can comment?


was (Author: alex.ivakov):
The above appears to be a design feature implemented in CASSANDRA-11500. See 
point c) in:

{{view row exists if any of following is true:

a. base row pk has live livenessInfo(timestamp) and base row pk satisfies 
view's filter conditions if any.
b. or one of base row columns selected in view has live timestamp (via update) 
and base row pk satisfies view's filter conditions if any. this is handled by 
existing mechanism of liveness and tombstone since all info are included in 
view row
c. or one of base row columns not selected in view has live timestamp (via 
update) and base row pk satisfies view's filter conditions if any. Those 
unselected columns' timestamp/ttl/cell-deletion info are not currently stored 
on view row.}}

from here: 
https://issues.apache.org/jira/browse/CASSANDRA-11500?focusedCommentId=16082241&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16082241

When a base table row is updated, the corresponding view row has its (primary 
key) liveness info updated to that of the update, including TTL and local 
deletion time. This leads to the behaviour seen here when the update has a 
longer TTL than the row itself: the row is marked with the update's TTL and 
remains live for that long, even if the data in the row has already expired. 
Note that this only affects the view; the base table row liveness is not 
altered (only INSERTs can set row liveness on tables). Hence, querying the 
table returns no results, while querying the view returns a row with empty 
(expired) data cells.

Note also that DELETE does not do the above, i.e. it does not update the row 
liveness and TTL of the MV row.

Perhaps [~jasonstack] can comment?

> Incorrect view updates
> --
>
> Key: CASSANDRA-14393
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14393
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views
>Reporter: Duarte Nunes
>Priority: Major
>  Labels: materializedviews
>
> Consider the following:
> {noformat}
> create table t (p int, c int, v1 int, v2 int, primary key(p, c));
> create materialized view mv as select p, c, v1 from t 
> where p is not null and c is not null primary key (c, p);
> insert into t (p, c, v1, v2) VALUES(1, 1, 1, 1) using ttl 5;
> update t using ttl 1000 set v2 = 1 where p = 1 and c = 1;
> delete v2 from t where p = 1 and c = 1;
> // Wait 5 seconds
> select * from mv;
> c | p | v1
> ---+---+--
> 1 | 1 | null{noformat}
> The view row should be dead after 5 seconds, but it is not.
> This is because the liveness info calculated when deleting v2 is based on the 
> base table update liveness info, which has the timestamp of the first insert 
> statement. That liveness info is shadowed by the liveness info created in the 
> update, which has a higher timestamp.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

[jira] [Commented] (CASSANDRA-14393) Incorrect view updates

2018-06-27 Thread Alexander Ivakov (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16525917#comment-16525917
 ] 

Alexander Ivakov commented on CASSANDRA-14393:
--

The above appears to be a design feature implemented in CASSANDRA-11500. See 
point c) in:

{{view row exists if any of following is true:

a. base row pk has live livenessInfo(timestamp) and base row pk satisfies 
view's filter conditions if any.
b. or one of base row columns selected in view has live timestamp (via update) 
and base row pk satisfies view's filter conditions if any. this is handled by 
existing mechanism of liveness and tombstone since all info are included in 
view row
c. or one of base row columns not selected in view has live timestamp (via 
update) and base row pk satisfies view's filter conditions if any. Those 
unselected columns' timestamp/ttl/cell-deletion info are not currently stored 
on view row.}}

from here: 
https://issues.apache.org/jira/browse/CASSANDRA-11500?focusedCommentId=16082241&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16082241

When a base table row is updated, the corresponding view row has its (primary 
key) liveness info updated to that of the update, including TTL and local 
deletion time. This leads to the behaviour seen here when the update has a 
longer TTL than the row itself: the row is marked with the update's TTL and 
remains live for that long, even if the data in the row has already expired. 
Note that this only affects the view; the base table row liveness is not 
altered (only INSERTs can set row liveness on tables). Hence, querying the 
table returns no results, while querying the view returns a row with empty 
(expired) data cells.

Note also that DELETE does not do the above, i.e. it does not update the row 
liveness and TTL of the MV row.

Perhaps [~jasonstack] can comment?

> Incorrect view updates
> --
>
> Key: CASSANDRA-14393
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14393
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views
>Reporter: Duarte Nunes
>Priority: Major
>  Labels: materializedviews
>
> Consider the following:
> {noformat}
> create table t (p int, c int, v1 int, v2 int, primary key(p, c));
> create materialized view mv as select p, c, v1 from t 
> where p is not null and c is not null primary key (c, p);
> insert into t (p, c, v1, v2) VALUES(1, 1, 1, 1) using ttl 5;
> update t using ttl 1000 set v2 = 1 where p = 1 and c = 1;
> delete v2 from t where p = 1 and c = 1;
> // Wait 5 seconds
> select * from mv;
> c | p | v1
> ---+---+--
> 1 | 1 | null{noformat}
> The view row should be dead after 5 seconds, but it is not.
> This is because the liveness info calculated when deleting v2 is based on the 
> base table update liveness info, which has the timestamp of the first insert 
> statement. That liveness info is shadowed by the liveness info created in the 
> update, which has a higher timestamp.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-11559) Enhance node representation

2018-06-27 Thread Alex Lourie (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-11559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16414721#comment-16414721
 ] 

Alex Lourie edited comment on CASSANDRA-11559 at 6/28/18 4:55 AM:
--

I've done some work on this ticket and it's available at 
[https://github.com/apache/cassandra/compare/trunk...alourie:CASSANDRA-11559#files_bucket].
 A few pointers:
 * I've refactored the InetAddressAndPort class into a VirtualEndpoint class 
across the whole codebase (this accounts for the majority of the code changes; 
see the sketch after this list).
 * I've added a UUID field to hold the endpoint's hostID value, plus 
additional methods for working with it.
 * I've reworked TokenMetadata so that the separate maps for UUID-host 
references are no longer needed: keeping just a set of endpoints is enough to 
hold both the address data and the hostID data, and to look up hosts by ID or 
vice versa.
 * I've reworked SystemKeyspace to also acknowledge hostIDs where significant 
(in storing/fetching local and peer data), and to only create a new local id 
when requested (in most cases only when the node is created for the first 
time, but this is also useful for tests that require initiating multiple 
"nodes" on the same machine).
 * I've added a field in DatabaseDescriptor to mark that SystemKeyspace is 
ready to be read. This is required for many unit tests that set up clusters 
"on the fly" and for further endpoint information discovery during the test 
run.
 * I've updated the required unit tests to properly utilise the new object, 
and to initialise and clean up others as required.
 * I've updated the code in some other locations to incorporate this change, 
which often makes it simpler.
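
As a rough illustration of the shape of the new object (a simplified sketch 
with hypothetical names, not the actual patch), an endpoint carries both its 
address and its hostID, so a single collection supports lookup in either 
direction:

{code:java}
import java.net.InetSocketAddress;
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Simplified sketch of a VirtualEndpoint/Endpoint-style value: it pairs the
// network address with the host's UUID, so metadata structures can keep one
// collection of endpoints and still resolve either key.
public final class EndpointSketch
{
    public final InetSocketAddress address;
    public final UUID hostId;

    public EndpointSketch(InetSocketAddress address, UUID hostId)
    {
        this.address = address;
        this.hostId = hostId;
    }

    // A registry holding only endpoints can answer both kinds of lookup.
    public static final class Registry
    {
        private final Map<UUID, EndpointSketch> byId = new HashMap<>();
        private final Map<InetSocketAddress, EndpointSketch> byAddress = new HashMap<>();

        public void add(EndpointSketch e)
        {
            byId.put(e.hostId, e);
            byAddress.put(e.address, e);
        }

        public EndpointSketch byHostId(UUID id) { return byId.get(id); }
        public EndpointSketch byAddress(InetSocketAddress a) { return byAddress.get(a); }
    }
}
{code}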

The current state is that everything _seems_ to be working and the unit tests 
pass ([https://circleci.com/gh/alourie/cassandra/123]).

The complication that comes out of this work is with building unit tests - the 
host ID is now kept in multiple structures:
 * the VirtualEndpoint object itself, when instantiated
 * SystemKeyspace.localHost (queries the DB)
 * SystemKeyspace.peersInfo (queries the DB)
 * the TokenMetadata lists (such as allEndpoints, tokenMap, etc.)
 * the Gossip.instance.endpointState maps (the specific endpoint is added, 
including the uuid)
 * FBUtilities, which also keeps a local reference once fetched

As a result, when creating tests, one needs to update or clear the 
hostID-related information in all relevant places; otherwise tests fail with 
really confusing messages (in most cases because an endpoint comparison 
happens in some thread and the UUIDs don't match), such as "no seeds found", 
"host cannot be contacted", or various kinds of timeouts and NPEs. 
Additionally, when SystemKeyspace becomes ready to be read within a test flow, 
the DatabaseDescriptor.canReadSystemKeyspace field needs to be set to true so 
that the UUID is fetched from SystemKeyspace.

Additionally, at the moment we keep EndpointState separately from this object 
(in Gossip). Considering that this VirtualEndpoint can now include basically 
any information about the endpoint, it may as well incorporate its own state, 
so that all handling of an endpoint's network/state information lives in one 
place. This should simplify things further and allow removing a lot of code.

[~aweisberg] - you have done the previous move away from InetAddress 
representation to InetAddressAndPort, which this current patch changes 
considerably. I'd love your feedback on this.

Any and all feedback is very welcome.


was (Author: alourie):
I've done some work on this ticket and it's available at 
[https://github.com/apache/cassandra/compare/trunk...alourie:CASSANDRA-11559#files_bucket].
 A few pointers:
 * I've refactored the InetAddressAndPort class into a VirtualEndpoint class 
across the whole codebase (this accounts for the majority of the code 
changes).
 * I've added a UUID field to hold the endpoint's hostID value, plus 
additional methods for working with it.
 * I've reworked TokenMetadata so that the separate maps for UUID-host 
references are no longer needed: keeping just a set of endpoints is enough to 
hold both the address data and the hostID data, and to look up hosts by ID or 
vice versa.
 * I've reworked SystemKeyspace to also acknowledge hostIDs where significant 
(in storing/fetching local and peer data), and to only create a new local id 
when requested (in most cases only when the node is created for the first 
time, but this is also useful for tests that require initiating multiple 
"nodes" on the same machine).
 * I've added a field in DatabaseDescriptor to mark that SystemKeyspace is 
ready to be read. This is required for many unit tests that set up clusters 
"on the fly" and for further endpoint information discovery during the test 
run.
 * I've updated 

[jira] [Commented] (CASSANDRA-11559) Enhance node representation

2018-06-27 Thread Alex Lourie (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-11559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16525912#comment-16525912
 ] 

Alex Lourie commented on CASSANDRA-11559:
-

[~bdeggleston] I've rebased the work on the recent trunk and changed the name 
of the new object to Endpoint. I'd welcome any other feedback.

[~aweisberg] If you have the time, I'd really appreciate your take on this.

> Enhance node representation
> ---
>
> Key: CASSANDRA-11559
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11559
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Distributed Metadata
>Reporter: Paulo Motta
>Assignee: Alex Lourie
>Priority: Minor
>
> We currently represent nodes as {{InetAddress}} objects on {{TokenMetadata}}, 
> which causes difficulties when replacing a node with the same address (see 
> CASSANDRA-8523 and CASSANDRA-9244).
> Since CASSANDRA-4120 we index hosts by {{UUID}} in gossip, so I think it's 
> time to move that representation to {{TokenMetadata}}.
> I propose representing nodes as {{InetAddress, UUID}} pairs on 
> {{TokenMetadata}}, encapsulated in a {{VirtualNode}} interface, so it will 
> be backward compatible with the current representation, while still allowing us 
> to enhance it in the future with additional metadata (and improved vnode 
> handling) if needed.
> This change will probably affect interfaces of internal classes like 
> {{TokenMetadata}} and {{AbstractReplicationStrategy}}, so I'd like to hear 
> from integrators and other developers if it's possible to change these 
> without major hassle or if we need to wait until 4.0.
> Besides updating {{TokenMetadata}} and {{AbstractReplicationStrategy}} (and 
> subclasses),  we will also need to replace all {{InetAddress}} uses with 
> {{VirtualNode.getEndpoint()}} calls on {{StorageService}} and related classes 
> and tests. We would probably already be able to replace some 
> {{TokenMetadata.getHostId(InetAddress endpoint)}} calls with 
> {{VirtualNode.getHostId()}}.
> While we will still be dealing with {{InetAddress}} on {{StorageService}} in 
> this initial stage, in the future I think we should pass {{VirtualNode}} 
> instances around and only translate from {{VirtualNode}} to {{InetAddress}} 
> in the network layer.
> Public interfaces like {{IEndpointSnitch}} will not be affected by this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-14525) streaming failure during bootstrap makes new node into inconsistent state

2018-06-27 Thread Jaydeepkumar Chovatia (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16525850#comment-16525850
 ] 

Jaydeepkumar Chovatia edited comment on CASSANDRA-14525 at 6/28/18 2:55 AM:


[~KurtG] I think there was a bug in my previous patch, in which it would not 
start the native transport in the normal scenario if {{isSurveyMode}} is 
{{true}}.
 Also, I've discovered another bug in the current open source code: if 
{{isSurveyMode}} is {{true}} and streaming fails (i.e. {{isBootstrapMode}} is 
{{true}}), one can still call {{nodetool join}} without {{nodetool bootstrap 
resume}} and have that node join the ring (a sketch of the missing guard 
follows below).
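
Roughly, the missing check can be pictured like this (a simplified, standalone 
sketch with hypothetical names - not the actual patch or StorageService code):

{code:java}
// Sketch of the join-time guard described above: a node in write-survey
// mode whose bootstrap streaming failed must resume (and finish) bootstrap
// before "nodetool join" may move it into the ring.
public class JoinGuardSketch
{
    private boolean isSurveyMode = true;    // started with write survey enabled
    private boolean isBootstrapMode = true; // still true: streaming failed

    public void joinRing()
    {
        if (isSurveyMode && isBootstrapMode)
            throw new IllegalStateException(
                "Cannot join the ring: bootstrap has not completed; " +
                "run 'nodetool bootstrap resume' first.");
        isSurveyMode = false;
        // ... proceed to announce the NORMAL state and serve clients ...
    }
}
{code}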

 

I've taken care of this bug and your review comments; please find the updated 
patch here:
||trunk||3.0||2.x||
|[!https://circleci.com/gh/jaydeepkumar1984/cassandra/tree/14525-trunk.svg?style=svg!|https://circleci.com/gh/jaydeepkumar1984/cassandra/76]|[!https://circleci.com/gh/jaydeepkumar1984/cassandra/tree/14525-3.0.svg?style=svg!|https://circleci.com/gh/jaydeepkumar1984/cassandra/73]|[!https://circleci.com/gh/jaydeepkumar1984/cassandra/tree/14525-2.2.svg?style=svg!|https://circleci.com/gh/jaydeepkumar1984/cassandra/74]|
|[patch|https://github.com/apache/cassandra/compare/trunk...jaydeepkumar1984:14525-trunk]|[patch|https://github.com/apache/cassandra/compare/trunk...jaydeepkumar1984:14525-3.0]|[patch|https://github.com/apache/cassandra/compare/trunk...jaydeepkumar1984:14525-2.2]|


was (Author: chovatia.jayd...@gmail.com):
[~KurtG] I think there was a bug in my previous patch, in which it would not 
start the native transport in the normal scenario if {{isSurveyMode}} is 
{{true}}.
 Also, I've discovered another bug in the current open source code: if 
{{isSurveyMode}} is {{true}} and streaming fails (i.e. {{isBootstrapMode}} is 
{{true}}), one can still call {{nodetool join}} without {{nodetool bootstrap 
resume}} and have that node join the ring.

 

I've taken care of this bug and your review comments; please find the updated 
patch here:
||trunk||3.0||2.x||
|[!https://circleci.com/gh/jaydeepkumar1984/cassandra/tree/14525-trunk.svg?style=svg!|https://circleci.com/gh/jaydeepkumar1984/cassandra/76]|[!https://circleci.com/gh/jaydeepkumar1984/cassandra/tree/14525-3.0.svg?style=svg!|https://circleci.com/gh/jaydeepkumar1984/cassandra/73]|[!https://circleci.com/gh/jaydeepkumar1984/cassandra/tree/14525-2.2.svg?style=svg!|https://circleci.com/gh/jaydeepkumar1984/cassandra/74]|
|[patch|https://github.com/apache/cassandra/compare/trunk...jaydeepkumar1984:14525-trunk]|[patch|https://github.com/apache/cassandra/compare/trunk...jaydeepkumar1984:14525-3.0]|[patch|https://github.com/apache/cassandra/compare/trunk...jaydeepkumar1984:14525-2.2]|

> streaming failure during bootstrap makes new node into inconsistent state
> -
>
> Key: CASSANDRA-14525
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14525
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jaydeepkumar Chovatia
>Assignee: Jaydeepkumar Chovatia
>Priority: Major
> Fix For: 4.0, 2.2.x, 3.0.x
>
>
> If bootstrap fails for a newly joining node (the most common reason being a 
> streaming failure), then the Cassandra state remains {{joining}}, which is 
> fine, but Cassandra also enables the native transport, which makes the 
> overall state inconsistent. This further causes a NullPointerException if 
> auth is enabled on the new node; please find reproducible steps here:
> For example, if bootstrap fails due to streaming errors like
> {quote}java.util.concurrent.ExecutionException: 
> org.apache.cassandra.streaming.StreamException: Stream failed
>  at 
> com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:299)
>  ~[guava-18.0.jar:na]
>  at 
> com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:286)
>  ~[guava-18.0.jar:na]
>  at 
> com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116) 
> ~[guava-18.0.jar:na]
>  at 
> org.apache.cassandra.service.StorageService.bootstrap(StorageService.java:1256)
>  [apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:894)
>  [apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:660)
>  [apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:573)
>  [apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:330) 
> [apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> 

[jira] [Commented] (CASSANDRA-14525) streaming failure during bootstrap makes new node into inconsistent state

2018-06-27 Thread Jaydeepkumar Chovatia (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16525850#comment-16525850
 ] 

Jaydeepkumar Chovatia commented on CASSANDRA-14525:
---

[~KurtG] I think there was a bug in my previous patch, in which it would not 
start the native transport in the normal scenario if {{isSurveyMode}} is 
{{true}}.
 Also, I've discovered another bug in the current open source code: if 
{{isSurveyMode}} is {{true}} and streaming fails (i.e. {{isBootstrapMode}} is 
{{true}}), one can still call {{nodetool join}} without {{nodetool bootstrap 
resume}} and have that node join the ring.

I've taken care of this bug and your review comments; please find the updated 
patch here:
||trunk||3.0||2.x||
|[!https://circleci.com/gh/jaydeepkumar1984/cassandra/tree/14525-trunk.svg?style=svg!|https://circleci.com/gh/jaydeepkumar1984/cassandra/76]|[!https://circleci.com/gh/jaydeepkumar1984/cassandra/tree/14525-3.0.svg?style=svg!|https://circleci.com/gh/jaydeepkumar1984/cassandra/73]|[!https://circleci.com/gh/jaydeepkumar1984/cassandra/tree/14525-2.2.svg?style=svg!|https://circleci.com/gh/jaydeepkumar1984/cassandra/74]|
|[patch|https://github.com/apache/cassandra/compare/trunk...jaydeepkumar1984:14525-trunk]|[patch|https://github.com/apache/cassandra/compare/trunk...jaydeepkumar1984:14525-3.0]|[patch|https://github.com/apache/cassandra/compare/trunk...jaydeepkumar1984:14525-2.2]|

> streaming failure during bootstrap makes new node into inconsistent state
> -
>
> Key: CASSANDRA-14525
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14525
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jaydeepkumar Chovatia
>Assignee: Jaydeepkumar Chovatia
>Priority: Major
> Fix For: 4.0, 2.2.x, 3.0.x
>
>
> If bootstrap fails for a newly joining node (the most common reason being a 
> streaming failure), then the Cassandra state remains {{joining}}, which is 
> fine, but Cassandra also enables the native transport, which makes the 
> overall state inconsistent. This further causes a NullPointerException if 
> auth is enabled on the new node; please find reproducible steps here:
> For example, if bootstrap fails due to streaming errors like
> {quote}java.util.concurrent.ExecutionException: 
> org.apache.cassandra.streaming.StreamException: Stream failed
>  at 
> com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:299)
>  ~[guava-18.0.jar:na]
>  at 
> com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:286)
>  ~[guava-18.0.jar:na]
>  at 
> com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116) 
> ~[guava-18.0.jar:na]
>  at 
> org.apache.cassandra.service.StorageService.bootstrap(StorageService.java:1256)
>  [apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:894)
>  [apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:660)
>  [apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:573)
>  [apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:330) 
> [apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:567)
>  [apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:695) 
> [apache-cassandra-3.0.16.jar:3.0.16]
>  Caused by: org.apache.cassandra.streaming.StreamException: Stream failed
>  at 
> org.apache.cassandra.streaming.management.StreamEventJMXNotifier.onFailure(StreamEventJMXNotifier.java:85)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>  at com.google.common.util.concurrent.Futures$6.run(Futures.java:1310) 
> ~[guava-18.0.jar:na]
>  at 
> com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:457)
>  ~[guava-18.0.jar:na]
>  at 
> com.google.common.util.concurrent.ExecutionList.executeListener(ExecutionList.java:156)
>  ~[guava-18.0.jar:na]
>  at 
> com.google.common.util.concurrent.ExecutionList.execute(ExecutionList.java:145)
>  ~[guava-18.0.jar:na]
>  at 
> com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:202)
>  ~[guava-18.0.jar:na]
>  at 
> org.apache.cassandra.streaming.StreamResultFuture.maybeComplete(StreamResultFuture.java:211)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.streaming.StreamResultFuture.handleSessionComplete(StreamResultFuture.java:187)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> 

[jira] [Comment Edited] (CASSANDRA-14492) Test use of thousands separators and comma decimal separators

2018-06-27 Thread Patrick Bannister (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16525752#comment-16525752
 ] 

Patrick Bannister edited comment on CASSANDRA-14492 at 6/28/18 2:10 AM:


The dtest test_number_separators_round_trip() in 
cqlsh_tests/cqlsh_copy_tests.py already tests both of these features. I'm 
looking further into why the initial coverage analysis didn't catch that.


was (Author: ptbannister):
The dtest test_number_separators_round_trip() in 
cqlsh_tests/cqlsh_copy_tests.py already tests both of these features. However, 
since the test occurs in the context of COPY statements, it uses a different 
code path: there's a separate integer formatting function used for COPY TO 
statements, implemented in cqlshlib/copyutil.py, instead of the formatting 
functions in cqlshlib/formatting.py.

It's interesting that we've implemented datatype formatting in more than one 
place.

> Test use of thousands separators and comma decimal separators
> -
>
> Key: CASSANDRA-14492
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14492
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Patrick Bannister
>Priority: Major
>  Labels: cqlsh, test
> Fix For: 4.x
>
>
> Coverage analysis showed no coverage for functions related to displaying 
> numbers with thousands separators ("$100,000,000,000" instead of 
> "$1000") and displaying numbers with custom decimal separators 
> ("3,1415927" instead of "3.1415927").
> We should add a test that displays numbers like this, or identify an existing 
> test that does it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14492) Test use of thousands separators and comma decimal separators

2018-06-27 Thread Patrick Bannister (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16525752#comment-16525752
 ] 

Patrick Bannister commented on CASSANDRA-14492:
---

The dtest test_number_separators_round_trip() in 
cqlsh_tests/cqlsh_copy_tests.py already tests both of these features. However, 
since the test occurs in the context of COPY statements, it uses a different 
code path: there's a separate integer formatting function used for COPY TO 
statements, implemented in cqlshlib/copyutil.py, instead of the formatting 
functions in cqlshlib/formatting.py.

It's interesting that we've implemented datatype formatting in more than one 
place.
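
For reference, the two display behaviours under test - grouping (thousands) 
separators and a comma decimal separator - amount to locale-style number 
formatting. A plain-Java illustration (not cqlsh's implementation, which lives 
in cqlshlib):

{code:java}
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;

// Plain-Java illustration of the two features under test: thousands
// separators and a comma used as the decimal separator.
public class SeparatorsExample
{
    public static void main(String[] args)
    {
        DecimalFormatSymbols symbols = new DecimalFormatSymbols();
        symbols.setGroupingSeparator(',');
        symbols.setDecimalSeparator(',');

        DecimalFormat grouped = new DecimalFormat("#,##0", symbols);
        System.out.println(grouped.format(100000000000L)); // 100,000,000,000

        DecimalFormat decimal = new DecimalFormat("0.#######", symbols);
        System.out.println(decimal.format(3.1415927));     // 3,1415927
    }
}
{code}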

> Test use of thousands separators and comma decimal separators
> -
>
> Key: CASSANDRA-14492
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14492
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Patrick Bannister
>Priority: Major
>  Labels: cqlsh, test
> Fix For: 4.x
>
>
> Coverage analysis showed no coverage for functions related to displaying 
> numbers with thousands separators ("$100,000,000,000" instead of 
> "$1000") and displaying numbers with custom decimal separators 
> ("3,1415927" instead of "3.1415927").
> We should add a test that displays numbers like this, or identify an existing 
> test that does it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-14541) Order of warning and custom payloads is unspecified in the protocol specification

2018-06-27 Thread Jeff Jirsa (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Jirsa reassigned CASSANDRA-14541:
--

Assignee: Avi Kivity

> Order of warning and custom payloads is unspecified in the protocol 
> specification
> -
>
> Key: CASSANDRA-14541
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14541
> Project: Cassandra
>  Issue Type: Bug
>  Components: Documentation and Website
>Reporter: Avi Kivity
>Assignee: Avi Kivity
>Priority: Trivial
> Attachments: 
> v1-0001-Document-order-of-tracing-warning-and-custom-payl.patch
>
>
> Section 2.2 of the protocol specification documents the types of tracing, 
> warning, and custom payloads, but does not document their order in the body.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14541) Order of warning and custom payloads is unspecified in the protocol specification

2018-06-27 Thread Jeff Jirsa (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16525733#comment-16525733
 ] 

Jeff Jirsa commented on CASSANDRA-14541:


[~ifesdjeen] - you've spent some time on protocol / driver, is this something 
you can glance at?

> Order of warning and custom payloads is unspecified in the protocol 
> specification
> -
>
> Key: CASSANDRA-14541
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14541
> Project: Cassandra
>  Issue Type: Bug
>  Components: Documentation and Website
>Reporter: Avi Kivity
>Priority: Trivial
> Attachments: 
> v1-0001-Document-order-of-tracing-warning-and-custom-payl.patch
>
>
> Section 2.2 of the protocol specification documents the types of tracing, 
> warning, and custom payloads, but does not document their order in the body.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-14544) Report why native_transport_port fails to bind

2018-06-27 Thread Ariel Weisberg (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16525699#comment-16525699
 ] 

Ariel Weisberg edited comment on CASSANDRA-14544 at 6/27/18 10:43 PM:
--

Committed as 
[85ceec8855683b8bf71e009c8ed102ec91d85a41|https://github.com/apache/cassandra/commit/85ceec8855683b8bf71e009c8ed102ec91d85a41].
 Thanks!


was (Author: aweisberg):
Committed as 
[85ceec8855683b8bf71e009c8ed102ec91d85a41|https://github.com/apache/cassandra/commit/85ceec8855683b8bf71e009c8ed102ec91d85a41].
 Thanks!

> Report why native_transport_port fails to bind
> --
>
> Key: CASSANDRA-14544
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14544
> Project: Cassandra
>  Issue Type: Bug
>Reporter: James Roper
>Assignee: James Roper
>Priority: Major
> Fix For: 4.0
>
>
> On line 164 of {{org/apache/cassandra/transport/Server.java}}, the cause of a 
> failure to bind to the server port is swallowed:
> [https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/src/java/org/apache/cassandra/transport/Server.java#L163-L164]
> {code:java}
> if (!bindFuture.awaitUninterruptibly().isSuccess())
> throw new IllegalStateException(String.format("Failed to bind 
> port %d on %s.", socket.getPort(), socket.getAddress().getHostAddress()));
> {code}
> So we're told that the bind failed, but we're left guessing as to why. The 
> cause of the bind failure should be passed to the {{IllegalStateException}}, 
> so that we can then proceed with debugging, like so:
> {code:java}
> if (!bindFuture.awaitUninterruptibly().isSuccess())
> throw new IllegalStateException(String.format("Failed to bind 
> port %d on %s.", socket.getPort(), socket.getAddress().getHostAddress()),
> bindFuture.cause());
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14544) Report why native_transport_port fails to bind

2018-06-27 Thread Ariel Weisberg (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-14544:
---
Resolution: Fixed
Status: Resolved  (was: Ready to Commit)

Committed as 
[85ceec8855683b8bf71e009c8ed102ec91d85a41|https://github.com/apache/cassandra/commit/85ceec8855683b8bf71e009c8ed102ec91d85a41].
 Thanks!

> Report why native_transport_port fails to bind
> --
>
> Key: CASSANDRA-14544
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14544
> Project: Cassandra
>  Issue Type: Bug
>Reporter: James Roper
>Assignee: James Roper
>Priority: Major
> Fix For: 4.0
>
>
> On line 164 of {{org/apache/cassandra/transport/Server.java}}, the cause of a 
> failure to bind to the server port is swallowed:
> [https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/src/java/org/apache/cassandra/transport/Server.java#L163-L164]
> {code:java}
> if (!bindFuture.awaitUninterruptibly().isSuccess())
> throw new IllegalStateException(String.format("Failed to bind 
> port %d on %s.", socket.getPort(), socket.getAddress().getHostAddress()));
> {code}
> So we're told that the bind failed, but we're left guessing as to why. The 
> cause of the bind failure should be passed to the {{IllegalStateException}}, 
> so that we can then proceed with debugging, like so:
> {code:java}
> if (!bindFuture.awaitUninterruptibly().isSuccess())
> throw new IllegalStateException(String.format("Failed to bind 
> port %d on %s.", socket.getPort(), socket.getAddress().getHostAddress()),
> bindFuture.cause());
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



cassandra git commit: Report why native_transport_port fails to bind

2018-06-27 Thread aweisberg
Repository: cassandra
Updated Branches:
  refs/heads/trunk 06209037e -> 85ceec885


Report why native_transport_port fails to bind

Patch by James Roper; Review by Dinesh Joshi for CASSANDRA-14544


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/85ceec88
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/85ceec88
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/85ceec88

Branch: refs/heads/trunk
Commit: 85ceec8855683b8bf71e009c8ed102ec91d85a41
Parents: 0620903
Author: James Roper 
Authored: Tue Jun 26 18:18:36 2018 -0400
Committer: Ariel Weisberg 
Committed: Wed Jun 27 18:41:05 2018 -0400

--
 CHANGES.txt | 1 +
 src/java/org/apache/cassandra/transport/Server.java | 3 ++-
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/85ceec88/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index e99c9ea..df2db42 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0
+ * Report why native_transport_port fails to bind (CASSANDRA-14544)
  * Optimize internode messaging protocol (CASSANDRA-14485)
  * Internode messaging handshake sends wrong messaging version number 
(CASSANDRA-14540)
  * Add a virtual table to expose active client connections (CASSANDRA-14458)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/85ceec88/src/java/org/apache/cassandra/transport/Server.java
--
diff --git a/src/java/org/apache/cassandra/transport/Server.java 
b/src/java/org/apache/cassandra/transport/Server.java
index 8ef137c..45146c4 100644
--- a/src/java/org/apache/cassandra/transport/Server.java
+++ b/src/java/org/apache/cassandra/transport/Server.java
@@ -161,7 +161,8 @@ public class Server implements CassandraDaemon.Server
 
 ChannelFuture bindFuture = bootstrap.bind(socket);
 if (!bindFuture.awaitUninterruptibly().isSuccess())
-throw new IllegalStateException(String.format("Failed to bind port 
%d on %s.", socket.getPort(), socket.getAddress().getHostAddress()));
+throw new IllegalStateException(String.format("Failed to bind port 
%d on %s.", socket.getPort(), socket.getAddress().getHostAddress()),
+bindFuture.cause());
 
 connectionTracker.allChannels.add(bindFuture.channel());
 isRunning.set(true);


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14544) Report why native_transport_port fails to bind

2018-06-27 Thread Ariel Weisberg (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-14544:
---
Summary: Report why native_transport_port fails to bind  (was: Server.java 
swallows the reason why binding failed)

> Report why native_transport_port fails to bind
> --
>
> Key: CASSANDRA-14544
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14544
> Project: Cassandra
>  Issue Type: Bug
>Reporter: James Roper
>Assignee: James Roper
>Priority: Major
> Fix For: 4.0
>
>
> On line 164 of {{org/apache/cassandra/transport/Server.java}}, the cause of a 
> failure to bind to the server port is swallowed:
> [https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/src/java/org/apache/cassandra/transport/Server.java#L163-L164]
> {code:java}
> if (!bindFuture.awaitUninterruptibly().isSuccess())
> throw new IllegalStateException(String.format("Failed to bind 
> port %d on %s.", socket.getPort(), socket.getAddress().getHostAddress()));
> {code}
> So we're told that the bind failed, but we're left guessing as to why. The 
> cause of the bind failure should be passed to the {{IllegalStateException}}, 
> so that we can then proceed with debugging, like so:
> {code:java}
> if (!bindFuture.awaitUninterruptibly().isSuccess())
> throw new IllegalStateException(String.format("Failed to bind 
> port %d on %s.", socket.getPort(), socket.getAddress().getHostAddress()),
> bindFuture.cause());
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14543) Hinted handoff to replay purgeable tombstones

2018-06-27 Thread Aleksey Yeschenko (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16525672#comment-16525672
 ] 

Aleksey Yeschenko commented on CASSANDRA-14543:
---

Replaying *just* the tombstones might be safe-ish, but it only helps with your 
issue in a very narrow time window. And there will be a price to pay for it: 
hint dispatch will have to become less efficient if we end up inspecting and 
filtering every mutation (sketched below).

So, all in all, I'm not a fan of the suggested change; I'm with [~KurtG] on 
this one.
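
For a sense of the per-mutation work this implies, here is a simplified, 
standalone sketch (toy types, not Cassandra's Mutation/Hint classes) of 
filtering a hint down to its tombstones once gc_grace_seconds has passed:

{code:java}
import java.util.List;
import java.util.stream.Collectors;

// Toy model of the filtering cost under discussion: once a hint is older
// than gc_grace_seconds, replaying only its tombstones means walking every
// cell of every hinted mutation and dropping the live ones.
public class HintFilterSketch
{
    record Cell(String name, long timestamp, boolean isTombstone) {}

    static List<Cell> tombstonesOnly(List<Cell> mutationCells)
    {
        return mutationCells.stream()
                            .filter(Cell::isTombstone)
                            .collect(Collectors.toList());
    }

    public static void main(String[] args)
    {
        List<Cell> hinted = List.of(new Cell("name", 1L, false),  // live write
                                    new Cell("name", 2L, true));  // the deletion
        // Past GCGS we would dispatch only the tombstone and never live data,
        // so deleted data still cannot be resurrected.
        System.out.println(tombstonesOnly(hinted));
    }
}
{code}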

> Hinted handoff to replay purgeable tombstones 
> --
>
> Key: CASSANDRA-14543
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14543
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jay Zhuang
>Priority: Minor
>
> Hinted-handoff currently only dispatches and applies the mutations that are 
> within GCGS: 
> [{{Hint.java:97}}|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/hints/Hint.java#L97].
>  Which is to make sure it won't resurrect any deleted data.
> But replaying tombstones should be safe, it could reduce the chance to have 
> [un-repairable inconsistent 
> data|https://lists.apache.org/thread.html/2d3d39d960143d4d2146ed2530821504ff855e832713dec7d0afd8ac@%3Cdev.cassandra.apache.org%3E].
> Here is the user scenario it tries to fix:
> {noformat}
> 1. Create a 3 nodes cluster
> 2. Create a table with small gc_grace_seconds (for reproducing purpose):
> CREATE KEYSPACE foo WITH replication = {'class': 'SimpleStrategy',
> 'replication_factor': 3};
> CREATE TABLE foo.bar (
> id int PRIMARY KEY,
> name text
> ) WITH gc_grace_seconds=30;
> 3. Insert data with consistency all:
> INSERT INTO foo.bar (id, name) VALUES(1, 'cstar');
> 4. stop 1 node
> $ ccm node2 stop
> 5. Delete the data with consistency quorum:
> DELETE FROM foo.bar WHERE id=1;
> 6. Wait 30 seconds and then start node2:
> $ ccm node2 start
> {noformat}
> Now, node2 has the data, while node1/node3 have the purgeable tombstone. 
> This triggers read repair every time, which sends data from node2 to 
> node1/node3 but repairs nothing.
> With purgeable-tombstone hinted handoff, it will at least dispatch the 
> tombstone and delete the data on node2. It won't fix the root cause, but it 
> reduces the chance of hitting this issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14544) Server.java swallows the reason why binding failed

2018-06-27 Thread Dinesh Joshi (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Joshi updated CASSANDRA-14544:
-
Status: Ready to Commit  (was: Patch Available)

> Server.java swallows the reason why binding failed
> --
>
> Key: CASSANDRA-14544
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14544
> Project: Cassandra
>  Issue Type: Bug
>Reporter: James Roper
>Assignee: James Roper
>Priority: Major
> Fix For: 4.0
>
>
> On line 164 of {{org/apache/cassandra/transport/Server.java}}, the cause of a 
> failure to bind to the server port is swallowed:
> [https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/src/java/org/apache/cassandra/transport/Server.java#L163-L164]
> {code:java}
> if (!bindFuture.awaitUninterruptibly().isSuccess())
> throw new IllegalStateException(String.format("Failed to bind 
> port %d on %s.", socket.getPort(), socket.getAddress().getHostAddress()));
> {code}
> So we're told that the bind failed, but we're left guessing as to why. The 
> cause of the bind failure should be passed to the {{IllegalStateException}}, 
> so that we can then proceed with debugging, like so:
> {code:java}
> if (!bindFuture.awaitUninterruptibly().isSuccess())
> throw new IllegalStateException(String.format("Failed to bind 
> port %d on %s.", socket.getPort(), socket.getAddress().getHostAddress()),
> bindFuture.cause());
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14544) Server.java swallows the reason why binding failed

2018-06-27 Thread Dinesh Joshi (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16525664#comment-16525664
 ] 

Dinesh Joshi commented on CASSANDRA-14544:
--

So... after struggling with {{MessagingServiceTest}} failures on CircleCI, I 
was able to determine that the failures on CircleCI are unrelated to this 
patch. It seems CircleCI containers are not isolated and the failure is due to 
multiple tests attempting to listen on the same IP/Port combination 
simultaneously.

Anyway, I'm +1 on this patch. [~aweisberg] could you please help commit this 
patch?

> Server.java swallows the reason why binding failed
> --
>
> Key: CASSANDRA-14544
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14544
> Project: Cassandra
>  Issue Type: Bug
>Reporter: James Roper
>Assignee: James Roper
>Priority: Major
> Fix For: 4.0
>
>
> On line 164 of {{org/apache/cassandra/transport/Server.java}}, the cause of a 
> failure to bind to the server port is swallowed:
> [https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/src/java/org/apache/cassandra/transport/Server.java#L163-L164]
> {code:java}
> if (!bindFuture.awaitUninterruptibly().isSuccess())
> throw new IllegalStateException(String.format("Failed to bind 
> port %d on %s.", socket.getPort(), socket.getAddress().getHostAddress()));
> {code}
> So we're told that the bind failed, but we're left guessing as to why. The 
> cause of the bind failure should be passed to the {{IllegalStateException}}, 
> so that we can then proceed with debugging, like so:
> {code:java}
> if (!bindFuture.awaitUninterruptibly().isSuccess())
> throw new IllegalStateException(String.format("Failed to bind 
> port %d on %s.", socket.getPort(), socket.getAddress().getHostAddress()),
> bindFuture.cause());
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14534) cqlsh_tests/cqlsh_tests.py::TestCqlsh::test_describe is flaky

2018-06-27 Thread Patrick Bannister (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16525500#comment-16525500
 ] 

Patrick Bannister commented on CASSANDRA-14534:
---

This test is working now in development for CASSANDRA-10190.

> cqlsh_tests/cqlsh_tests.py::TestCqlsh::test_describe is flaky
> -
>
> Key: CASSANDRA-14534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14534
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: CQL
> Environment: trunk running on Ubuntu 16.04 LTS
>Reporter: Patrick Bannister
>Priority: Minor
>  Labels: cqlsh, dtest, flaky-test, test
> Fix For: 4.x
>
>
> *Summary:* test_describe in cqlsh_tests/cqlsh_tests.py::TestCqlsh is flaky. 
> The test is written to expect the objects of a keyspace to be 
> lexicographically sorted in DESCRIBE KEYSPACE output. However, this almost 
> never happens, because the sort order of objects in DESCRIBE KEYSPACE output 
> is arbitrary. I plan to modify the test to check for expected output without 
> requiring lexicographic sorting of items of the same type; this work would be 
> rolled up under other work in CASSANDRA-10190.
> This is happening in the Cassandra Python driver, in cassandra.metadata, in 
> the KeyspaceMetadata and TableMetadata classes. KeyspaceMetadata and 
> TableMetadata store their child objects in Python dictionaries, and when 
> generating DESCRIBE output, they visit their child objects in the order 
> returned by the values() function of each dictionary. The Python dictionary 
> values() function returns items in an arbitrary order, so they will not 
> necessarily come back sorted lexicographically as expected by the test.
> A simple fix for the test would be to change the way it checks for the 
> expected output so that the order of objects of the same type is no longer 
> important.
> As currently written, the test creates a keyspace like so:
>  
> {code:java}
> CREATE KEYSPACE test WITH REPLICATION = {'class' : 'SimpleStrategy', 
> 'replication_factor' : 1 };
> CREATE TABLE test.users ( userid text PRIMARY KEY, firstname text, lastname 
> text, age int);
> CREATE INDEX myindex ON test.users (age);
> CREATE INDEX "QuotedNameIndex" on test.users (firstName);
> CREATE TABLE test.test (id int, col int, val text, PRIMARY KEY(id, col));
> CREATE INDEX ON test.test (col);
> CREATE INDEX ON test.test (val){code}
>  
> It expects the output of DESCRIBE KEYSPACE test to appear in the following 
> order:
>  
> {code:java}
> CREATE KEYSPACE test...
> CREATE TABLE test.test...
> CREATE INDEX test_col_idx...
> CREATE INDEX test_val_idx...
> CREATE TABLE test.users...
> CREATE INDEX "QuotedNameIndex"...
> CREATE INDEX myindex...{code}
> But as described above, tables aren't sorted lexicographically, and a table's 
> indexes aren't sorted lexicographically, so output for table test.users can 
> come before output for table test.test, and output for index test_val_idx can 
> come before output for index test_col_idx. The planned change to the test 
> would make it so that the test passes regardless of the order of these like 
> objects.
> If lexicographic sorting of objects of like type actually is a requirement 
> for DESCRIBE KEYSPACE, then we could fix this by modifying cqlsh 
> (duplicative, but simple), or by modifying the Cassandra Python driver's 
> cassandra/metadata.py script file (less duplicative but more difficult to 
> distribute).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Resolved] (CASSANDRA-14534) cqlsh_tests/cqlsh_tests.py::TestCqlsh::test_describe is flaky

2018-06-27 Thread Patrick Bannister (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Patrick Bannister resolved CASSANDRA-14534.
---
Resolution: Implemented

> cqlsh_tests/cqlsh_tests.py::TestCqlsh::test_describe is flaky
> -
>
> Key: CASSANDRA-14534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14534
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: CQL
> Environment: trunk running on Ubuntu 16.04 LTS
>Reporter: Patrick Bannister
>Priority: Minor
>  Labels: cqlsh, dtest, flaky-test, test
> Fix For: 4.x
>
>
> *Summary:* test_describe in cqlsh_tests/cqlsh_tests.py::TestCqlsh is flaky. 
> The test is written to expect the objects of a keyspace to be 
> lexicographically sorted in DESCRIBE KEYSPACE output. However, this almost 
> never happens, because the sort order of objects in DESCRIBE KEYSPACE output 
> is arbitrary. I plan to modify the test to check for expected output without 
> requiring lexicographic sorting of items of the same type; this work would be 
> rolled up under other work in CASSANDRA-10190.
> This is happening in the Cassandra Python driver, in cassandra.metadata, in 
> the KeyspaceMetadata and TableMetadata classes. KeyspaceMetadata and 
> TableMetadata store their child objects in Python dictionaries, and when 
> generating DESCRIBE output, they visit their child objects in the order 
> returned by the values() function of each dictionary. The Python dictionary 
> values() function returns items in an arbitrary order, so they will not 
> necessarily come back sorted lexicographically as expected by the test.
> A simple fix for the test would be to change the way it checks for the 
> expected output so that the order of objects of the same type is no longer 
> important.
> As currently written, the test creates a keyspace like so:
>  
> {code:java}
> CREATE KEYSPACE test WITH REPLICATION = {'class' : 'SimpleStrategy', 
> 'replication_factor' : 1 };
> CREATE TABLE test.users ( userid text PRIMARY KEY, firstname text, lastname 
> text, age int);
> CREATE INDEX myindex ON test.users (age);
> CREATE INDEX "QuotedNameIndex" on test.users (firstName);
> CREATE TABLE test.test (id int, col int, val text, PRIMARY KEY(id, col));
> CREATE INDEX ON test.test (col);
> CREATE INDEX ON test.test (val){code}
>  
> It expects the output of DESCRIBE KEYSPACE test to appear in the following 
> order:
>  
> {code:java}
> CREATE KEYSPACE test...
> CREATE TABLE test.test...
> CREATE INDEX test_col_idx...
> CREATE INDEX test_val_idx...
> CREATE TABLE test.users...
> CREATE INDEX "QuotedNameIndex"...
> CREATE INDEX myindex...{code}
> But as described above, tables aren't sorted lexicographically, and a table's 
> indexes aren't sorted lexicographically, so output for table test.users can 
> come before output for table test.test, and output for index test_val_idx can 
> come before output for index test_col_idx. The planned change to the test 
> would make it so that the test passes regardless of the order of these like 
> objects.
> If lexicographic sorting of objects of like type actually is a requirement 
> for DESCRIBE KEYSPACE, then we could fix this by modifying cqlsh 
> (duplicative, but simple), or by modifying the Cassandra Python driver's 
> cassandra/metadata.py script file (less duplicative but more difficult to 
> distribute).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14534) cqlsh_tests/cqlsh_tests.py::TestCqlsh::test_describe is flaky

2018-06-27 Thread Patrick Bannister (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Patrick Bannister updated CASSANDRA-14534:
--
Issue Type: Sub-task  (was: Test)
Parent: CASSANDRA-10190

> cqlsh_tests/cqlsh_tests.py::TestCqlsh::test_describe is flaky
> -
>
> Key: CASSANDRA-14534
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14534
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: CQL
> Environment: trunk running on Ubuntu 16.04 LTS
>Reporter: Patrick Bannister
>Priority: Minor
>  Labels: cqlsh, dtest, flaky-test, test
> Fix For: 4.x
>
>
> *Summary:* test_describe in cqlsh_tests/cqlsh_tests.py::TestCqlsh is flaky. 
> The test is written to expect the objects of a keyspace to be 
> lexicographically sorted in DESCRIBE KEYSPACE output. However, this almost 
> never happens, because the sort order of objects in DESCRIBE KEYSPACE output 
> is arbitrary. I plan to modify the test to check for expected output without 
> requiring lexicographic sorting of items of the same type; this work would be 
> rolled up under other work in CASSANDRA-10190.
> This is happening in the Cassandra Python driver, in cassandra.metadata, in 
> the KeyspaceMetadata and TableMetadata classes. KeyspaceMetadata and 
> TableMetadata store their child objects in Python dictionaries, and when 
> generating DESCRIBE output, they visit their child objects in the order 
> returned by the values() function of each dictionary. The Python dictionary 
> values() function returns items in an arbitrary order, so they will not 
> necessarily come back sorted lexicographically as expected by the test.
> A simple fix for the test would be to change the way it checks for the 
> expected output so that the order of objects of the same type is no longer 
> important.
> As currently written, the test creates a keyspace like so:
>  
> {code:java}
> CREATE KEYSPACE test WITH REPLICATION = {'class' : 'SimpleStrategy', 
> 'replication_factor' : 1 };
> CREATE TABLE test.users ( userid text PRIMARY KEY, firstname text, lastname 
> text, age int);
> CREATE INDEX myindex ON test.users (age);
> CREATE INDEX "QuotedNameIndex" on test.users (firstName);
> CREATE TABLE test.test (id int, col int, val text, PRIMARY KEY(id, col));
> CREATE INDEX ON test.test (col);
> CREATE INDEX ON test.test (val){code}
>  
> It expects the output of DESCRIBE KEYSPACE test to appear in the following 
> order:
>  
> {code:java}
> CREATE KEYSPACE test...
> CREATE TABLE test.test...
> CREATE INDEX test_col_idx...
> CREATE INDEX test_val_idx...
> CREATE TABLE test.users...
> CREATE INDEX "QuotedNameIndex"...
> CREATE INDEX myindex...{code}
> But as described above, tables aren't sorted lexicographically, and a table's 
> indexes aren't sorted lexicographically, so output for table test.users can 
> come before output for table test.test, and output for index test_val_idx can 
> come before output for index test_col_idx. The planned change to the test 
> would make it so that the test passes regardless of the order of these like 
> objects.
> If lexicographic sorting of objects of like type actually is a requirement 
> for DESCRIBE KEYSPACE, then we could fix this by modifying cqlsh 
> (duplicative, but simple), or by modifying the Cassandra Python driver's 
> cassandra/metadata.py module (less duplicative but more difficult to 
> distribute).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14466) Enable Direct I/O

2018-06-27 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16525494#comment-16525494
 ] 

ASF GitHub Bot commented on CASSANDRA-14466:


Github user aweisberg closed the pull request at:

https://github.com/apache/cassandra/pull/232


> Enable Direct I/O 
> --
>
> Key: CASSANDRA-14466
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14466
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Local Write-Read Paths
>Reporter: Mulugeta Mammo
>Priority: Major
> Attachments: direct_io.patch
>
>
> Hi,
> JDK 10 introduced a new API for Direct IO that enables applications to bypass 
> the file system cache and potentially improve performance. Details of this 
> feature can be found at [https://bugs.openjdk.java.net/browse/JDK-8164900].
> This patch uses the JDK 10 API to enable Direct IO for the Cassandra read 
> path. By default, we have disabled this feature, but it can be enabled using 
> a new configuration parameter, enable_direct_io_for_read_path. We have 
> conducted a Cassandra read-only stress test and measured a throughput gain of 
> up to 60% on flash drives.
> The patch requires JDK 10 Cassandra Support - 
> https://issues.apache.org/jira/browse/CASSANDRA-9608 
> Please review the patch and let us know your feedback.
> Thanks,
> [^direct_io.patch]
>  
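For readers unfamiliar with the JDK 10 API referenced above, here is a minimal,
self-contained sketch of a Direct I/O read. This is illustrative only, not the
attached patch; the class name and the alignment handling are assumptions.

{code:java}
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import com.sun.nio.file.ExtendedOpenOption;

public class DirectIoReadSketch
{
    public static void main(String[] args) throws Exception
    {
        Path path = Paths.get(args[0]);
        // Direct I/O requires block-aligned offsets, lengths and buffer addresses.
        int blockSize = (int) Files.getFileStore(path).getBlockSize(); // JDK 10+
        try (FileChannel channel = FileChannel.open(path, StandardOpenOption.READ, ExtendedOpenOption.DIRECT))
        {
            // Over-allocate, then take a block-aligned slice (JDK 9+).
            ByteBuffer buffer = ByteBuffer.allocateDirect(blockSize * 2).alignedSlice(blockSize);
            int read = channel.read(buffer, 0); // aligned read, bypassing the page cache
            System.out.println("read " + read + " bytes via Direct I/O");
        }
    }
}
{code}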



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14544) Server.java swallows the reason why binding failed

2018-06-27 Thread Dinesh Joshi (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Joshi updated CASSANDRA-14544:
-
Status: Patch Available  (was: Open)

> Server.java swallows the reason why binding failed
> --
>
> Key: CASSANDRA-14544
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14544
> Project: Cassandra
>  Issue Type: Bug
>Reporter: James Roper
>Assignee: James Roper
>Priority: Major
> Fix For: 4.0
>
>
> On line 164 of {{org/apache/cassandra/transport/Server.java}}, the cause of a 
> failure to bind to the server port is swallowed:
> [https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/src/java/org/apache/cassandra/transport/Server.java#L163-L164]
> {code:java}
> if (!bindFuture.awaitUninterruptibly().isSuccess())
>     throw new IllegalStateException(String.format("Failed to bind port %d on %s.", socket.getPort(), socket.getAddress().getHostAddress()));
> {code}
> So we're told that the bind failed, but we're left guessing as to why. The 
> cause of the bind failure should be passed to the {{IllegalStateException}}, 
> so that we can then proceed with debugging, like so:
> {code:java}
> if (!bindFuture.awaitUninterruptibly().isSuccess())
>     throw new IllegalStateException(String.format("Failed to bind port %d on %s.", socket.getPort(), socket.getAddress().getHostAddress()), bindFuture.cause());
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14544) Server.java swallows the reason why binding failed

2018-06-27 Thread Ariel Weisberg (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ariel Weisberg updated CASSANDRA-14544:
---
Fix Version/s: 4.0

> Server.java swallows the reason why binding failed
> --
>
> Key: CASSANDRA-14544
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14544
> Project: Cassandra
>  Issue Type: Bug
>Reporter: James Roper
>Assignee: James Roper
>Priority: Major
> Fix For: 4.0
>
>
> On line 164 of {{org/apache/cassandra/transport/Server.java}}, the cause of a 
> failure to bind to the server port is swallowed:
> [https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/src/java/org/apache/cassandra/transport/Server.java#L163-L164]
> {code:java}
> if (!bindFuture.awaitUninterruptibly().isSuccess())
>     throw new IllegalStateException(String.format("Failed to bind port %d on %s.", socket.getPort(), socket.getAddress().getHostAddress()));
> {code}
> So we're told that the bind failed, but we're left guessing as to why. The 
> cause of the bind failure should be passed to the {{IllegalStateException}}, 
> so that we can then proceed with debugging, like so:
> {code:java}
> if (!bindFuture.awaitUninterruptibly().isSuccess())
>     throw new IllegalStateException(String.format("Failed to bind port %d on %s.", socket.getPort(), socket.getAddress().getHostAddress()), bindFuture.cause());
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14544) Server.java swallows the reason why binding failed

2018-06-27 Thread Dinesh Joshi (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16525458#comment-16525458
 ] 

Dinesh Joshi commented on CASSANDRA-14544:
--

Thanks, [~jroper] - the tests are running: 
[https://circleci.com/gh/dineshjoshi/workflows/cassandra/tree/throw-cause]

> Server.java swallows the reason why binding failed
> --
>
> Key: CASSANDRA-14544
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14544
> Project: Cassandra
>  Issue Type: Bug
>Reporter: James Roper
>Assignee: James Roper
>Priority: Major
>
> On line 164 of {{org/apache/cassandra/transport/Server.java}}, the cause of a 
> failure to bind to the server port is swallowed:
> [https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/src/java/org/apache/cassandra/transport/Server.java#L163-L164]
> {code:java}
> if (!bindFuture.awaitUninterruptibly().isSuccess())
>     throw new IllegalStateException(String.format("Failed to bind port %d on %s.", socket.getPort(), socket.getAddress().getHostAddress()));
> {code}
> So we're told that the bind failed, but we're left guessing as to why. The 
> cause of the bind failure should be passed to the {{IllegalStateException}}, 
> so that we can then proceed with debugging, like so:
> {code:java}
> if (!bindFuture.awaitUninterruptibly().isSuccess())
>     throw new IllegalStateException(String.format("Failed to bind port %d on %s.", socket.getPort(), socket.getAddress().getHostAddress()), bindFuture.cause());
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-10726) Read repair inserts should not be blocking

2018-06-27 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-10726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16525356#comment-16525356
 ] 

ASF GitHub Bot commented on CASSANDRA-10726:


Github user bdeggleston commented on the issue:

https://github.com/apache/cassandra/pull/94
  
@tedcarroll this PR is out of date. See the patch on 
https://issues.apache.org/jira/browse/CASSANDRA-10726


> Read repair inserts should not be blocking
> --
>
> Key: CASSANDRA-10726
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10726
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Richard Low
>Assignee: Blake Eggleston
>Priority: Major
> Fix For: 4.x
>
>
> Today, if there’s a digest mismatch in a foreground read repair, the insert 
> to update out of date replicas is blocking. This means, if it fails, the read 
> fails with a timeout. If a node is dropping writes (maybe it is overloaded or 
> the mutation stage is backed up for some other reason), all reads to a 
> replica set could fail. Further, replicas dropping writes get more out of 
> sync so will require more read repair.
> The comment on the code for why the writes are blocking is:
> {code}
> // wait for the repair writes to be acknowledged, to minimize impact on any replica that's
> // behind on writes in case the out-of-sync row is read multiple times in quick succession
> {code}
> but the bad side effect is that reads time out. Either the writes should not 
> be blocking or we should return success for the read even if the write times 
> out.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Issue Comment Deleted] (CASSANDRA-10726) Read repair inserts should not be blocking

2018-06-27 Thread Blake Eggleston (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-10726:

Comment: was deleted

(was: Github user tedcarroll commented on the issue:

https://github.com/apache/cassandra/pull/94
  
This patch contains the potential for a ConcurrentModificationException.  
The HashMap at line 47 of DataResolver.java should probably be a 
ConcurrentHashMap.
)

> Read repair inserts should not be blocking
> --
>
> Key: CASSANDRA-10726
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10726
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Richard Low
>Assignee: Blake Eggleston
>Priority: Major
> Fix For: 4.x
>
>
> Today, if there’s a digest mismatch in a foreground read repair, the insert 
> to update out of date replicas is blocking. This means, if it fails, the read 
> fails with a timeout. If a node is dropping writes (maybe it is overloaded or 
> the mutation stage is backed up for some other reason), all reads to a 
> replica set could fail. Further, replicas dropping writes get more out of 
> sync so will require more read repair.
> The comment on the code for why the writes are blocking is:
> {code}
> // wait for the repair writes to be acknowledged, to minimize impact on any replica that's
> // behind on writes in case the out-of-sync row is read multiple times in quick succession
> {code}
> but the bad side effect is that reads time out. Either the writes should not 
> be blocking or we should return success for the read even if the write times 
> out.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Issue Comment Deleted] (CASSANDRA-10726) Read repair inserts should not be blocking

2018-06-27 Thread Blake Eggleston (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-10726:

Comment: was deleted

(was: Github user bdeggleston commented on the issue:

https://github.com/apache/cassandra/pull/94
  
@tedcarroll this PR is out of date. See the patch on 
https://issues.apache.org/jira/browse/CASSANDRA-10726
)

> Read repair inserts should not be blocking
> --
>
> Key: CASSANDRA-10726
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10726
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Richard Low
>Assignee: Blake Eggleston
>Priority: Major
> Fix For: 4.x
>
>
> Today, if there’s a digest mismatch in a foreground read repair, the insert 
> to update out of date replicas is blocking. This means, if it fails, the read 
> fails with a timeout. If a node is dropping writes (maybe it is overloaded or 
> the mutation stage is backed up for some other reason), all reads to a 
> replica set could fail. Further, replicas dropping writes get more out of 
> sync so will require more read repair.
> The comment on the code for why the writes are blocking is:
> {code}
> // wait for the repair writes to be acknowledged, to minimize impact on any replica that's
> // behind on writes in case the out-of-sync row is read multiple times in quick succession
> {code}
> but the bad side effect is that reads time out. Either the writes should not 
> be blocking or we should return success for the read even if the write times 
> out.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Issue Comment Deleted] (CASSANDRA-10726) Read repair inserts should not be blocking

2018-06-27 Thread Blake Eggleston (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-10726:

Comment: was deleted

(was: Github user tedcarroll commented on a diff in the pull request:

https://github.com/apache/cassandra/pull/94#discussion_r198575430
  
--- Diff: src/java/org/apache/cassandra/service/DataResolver.java ---
@@ -40,18 +40,26 @@
 import org.apache.cassandra.exceptions.ReadTimeoutException;
 import org.apache.cassandra.net.*;
 import org.apache.cassandra.tracing.Tracing;
-import org.apache.cassandra.utils.FBUtilities;
+import org.apache.cassandra.utils.Pair;
 
 public class DataResolver extends ResponseResolver
 {
-@VisibleForTesting
-final List repairResults = Collections.synchronizedList(new ArrayList<>());
+private final Map> repairResponseRequestMap = new HashMap<>();
--- End diff --

Should be a ConcurrentHashMap.
)

> Read repair inserts should not be blocking
> --
>
> Key: CASSANDRA-10726
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10726
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Richard Low
>Assignee: Blake Eggleston
>Priority: Major
> Fix For: 4.x
>
>
> Today, if there’s a digest mismatch in a foreground read repair, the insert 
> to update out of date replicas is blocking. This means, if it fails, the read 
> fails with a timeout. If a node is dropping writes (maybe it is overloaded or 
> the mutation stage is backed up for some other reason), all reads to a 
> replica set could fail. Further, replicas dropping writes get more out of 
> sync so will require more read repair.
> The comment on the code for why the writes are blocking is:
> {code}
> // wait for the repair writes to be acknowledged, to minimize impact on any replica that's
> // behind on writes in case the out-of-sync row is read multiple times in quick succession
> {code}
> but the bad side effect is that reads time out. Either the writes should not 
> be blocking or we should return success for the read even if the write times 
> out.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14467) Add option to sanity check tombstones on reads/compaction

2018-06-27 Thread Ariel Weisberg (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16525347#comment-16525347
 ] 

Ariel Weisberg commented on CASSANDRA-14467:


Committed but still set to patch available?

> Add option to sanity check tombstones on reads/compaction
> -
>
> Key: CASSANDRA-14467
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14467
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Minor
> Fix For: 4.x
>
>
> We should add an option to do a quick sanity check of tombstones on reads + 
> compaction. It should either log the error or throw an exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-10726) Read repair inserts should not be blocking

2018-06-27 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-10726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16525324#comment-16525324
 ] 

ASF GitHub Bot commented on CASSANDRA-10726:


Github user tedcarroll commented on a diff in the pull request:

https://github.com/apache/cassandra/pull/94#discussion_r198575430
  
--- Diff: src/java/org/apache/cassandra/service/DataResolver.java ---
@@ -40,18 +40,26 @@
 import org.apache.cassandra.exceptions.ReadTimeoutException;
 import org.apache.cassandra.net.*;
 import org.apache.cassandra.tracing.Tracing;
-import org.apache.cassandra.utils.FBUtilities;
+import org.apache.cassandra.utils.Pair;
 
 public class DataResolver extends ResponseResolver
 {
-@VisibleForTesting
-final List repairResults = Collections.synchronizedList(new ArrayList<>());
+private final Map> repairResponseRequestMap = new HashMap<>();
--- End diff --

Should be a ConcurrentHashMap.
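(For illustration, a minimal sketch of the concern, not the patch itself: a map
that is mutated as repair responses arrive on different threads needs a
concurrent implementation. The class and field names below are hypothetical.)

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: callbacks may invoke onResponse() from different
// threads, so a plain HashMap could throw ConcurrentModificationException
// or corrupt its internal state; ConcurrentHashMap avoids both.
class RepairResponseTracker
{
    private final Map<Integer, String> responses = new ConcurrentHashMap<>();

    void onResponse(int id, String replica)
    {
        responses.put(id, replica); // thread-safe without external locking
    }
}
{code}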


> Read repair inserts should not be blocking
> --
>
> Key: CASSANDRA-10726
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10726
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Richard Low
>Assignee: Blake Eggleston
>Priority: Major
> Fix For: 4.x
>
>
> Today, if there’s a digest mismatch in a foreground read repair, the insert 
> to update out of date replicas is blocking. This means, if it fails, the read 
> fails with a timeout. If a node is dropping writes (maybe it is overloaded or 
> the mutation stage is backed up for some other reason), all reads to a 
> replica set could fail. Further, replicas dropping writes get more out of 
> sync so will require more read repair.
> The comment on the code for why the writes are blocking is:
> {code}
> // wait for the repair writes to be acknowledged, to minimize impact on any replica that's
> // behind on writes in case the out-of-sync row is read multiple times in quick succession
> {code}
> but the bad side effect is that reads time out. Either the writes should not 
> be blocking or we should return success for the read even if the write times 
> out.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-10726) Read repair inserts should not be blocking

2018-06-27 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-10726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16525311#comment-16525311
 ] 

ASF GitHub Bot commented on CASSANDRA-10726:


Github user tedcarroll commented on the issue:

https://github.com/apache/cassandra/pull/94
  
This patch contains the potential for a ConcurrentModificationException.  
The HashMap at line 47 of DataResolver.java should probably be a 
ConcurrentHashMap.


> Read repair inserts should not be blocking
> --
>
> Key: CASSANDRA-10726
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10726
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Richard Low
>Assignee: Blake Eggleston
>Priority: Major
> Fix For: 4.x
>
>
> Today, if there’s a digest mismatch in a foreground read repair, the insert 
> to update out of date replicas is blocking. This means, if it fails, the read 
> fails with a timeout. If a node is dropping writes (maybe it is overloaded or 
> the mutation stage is backed up for some other reason), all reads to a 
> replica set could fail. Further, replicas dropping writes get more out of 
> sync so will require more read repair.
> The comment on the code for why the writes are blocking is:
> {code}
> // wait for the repair writes to be acknowledged, to minimize impact on any replica that's
> // behind on writes in case the out-of-sync row is read multiple times in quick succession
> {code}
> but the bad side effect is that reads time out. Either the writes should not 
> be blocking or we should return success for the read even if the write times 
> out.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14423) SSTables stop being compacted

2018-06-27 Thread Marcus Eriksson (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524923#comment-16524923
 ] 

Marcus Eriksson commented on CASSANDRA-14423:
-

the patches lgtm

> SSTables stop being compacted
> -
>
> Key: CASSANDRA-14423
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14423
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Kurt Greaves
>Assignee: Kurt Greaves
>Priority: Blocker
> Fix For: 2.2.13, 3.0.17, 3.11.3
>
>
> So seeing a problem in 3.11.0 where SSTables are being lost from the view and 
> not being included in compactions/as candidates for compaction. It seems to 
> get progressively worse until there's only 1-2 SSTables in the view which 
> happen to be the most recent SSTables and thus compactions completely stop 
> for that table.
> The SSTables seem to still be included in reads, just not compactions.
> The issue can be fixed by restarting C*, as it will reload all SSTables into 
> the view, but this is only a temporary fix. User defined/major compactions 
> still work - it's not clear if they include the result back in the view, but 
> this is not a good workaround.
> This also results in a discrepancy between SSTable count and SSTables in 
> levels for any table using LCS.
> {code:java}
> Keyspace : xxx
> Read Count: 57761088
> Read Latency: 0.10527088681224288 ms.
> Write Count: 2513164
> Write Latency: 0.018211106398149903 ms.
> Pending Flushes: 0
> Table: xxx
> SSTable count: 10
> SSTables in each level: [2, 0, 0, 0, 0, 0, 0, 0, 0]
> Space used (live): 894498746
> Space used (total): 894498746
> Space used by snapshots (total): 0
> Off heap memory used (total): 11576197
> SSTable Compression Ratio: 0.6956629530569777
> Number of keys (estimate): 3562207
> Memtable cell count: 0
> Memtable data size: 0
> Memtable off heap memory used: 0
> Memtable switch count: 87
> Local read count: 57761088
> Local read latency: 0.108 ms
> Local write count: 2513164
> Local write latency: NaN ms
> Pending flushes: 0
> Percent repaired: 86.33
> Bloom filter false positives: 43
> Bloom filter false ratio: 0.0
> Bloom filter space used: 8046104
> Bloom filter off heap memory used: 8046024
> Index summary off heap memory used: 3449005
> Compression metadata off heap memory used: 81168
> Compacted partition minimum bytes: 104
> Compacted partition maximum bytes: 5722
> Compacted partition mean bytes: 175
> Average live cells per slice (last five minutes): 1.0
> Maximum live cells per slice (last five minutes): 1
> Average tombstones per slice (last five minutes): 1.0
> Maximum tombstones per slice (last five minutes): 1
> Dropped Mutations: 0
> {code}
> Also for STCS we've confirmed that SSTable count will be different to the 
> number of SSTables reported in the Compaction Buckets. In the below example 
> there's only 3 SSTables in a single bucket - no more are listed for this 
> table. Compaction thresholds haven't been modified for this table and it's a 
> very basic KV schema.
> {code:java}
> Keyspace : yyy
> Read Count: 30485
> Read Latency: 0.06708991307200263 ms.
> Write Count: 57044
> Write Latency: 0.02204061776873992 ms.
> Pending Flushes: 0
> Table: yyy
> SSTable count: 19
> Space used (live): 18195482
> Space used (total): 18195482
> Space used by snapshots (total): 0
> Off heap memory used (total): 747376
> SSTable Compression Ratio: 0.7607394576769735
> Number of keys (estimate): 116074
> Memtable cell count: 0
> Memtable data size: 0
> Memtable off heap memory used: 0
> Memtable switch count: 39
> Local read count: 30485
> Local read latency: NaN ms
> Local write count: 57044
> Local write latency: NaN ms
> Pending flushes: 0
> Percent repaired: 79.76
> Bloom filter false positives: 0
> Bloom filter false ratio: 0.0
> Bloom filter space used: 690912
> Bloom filter off heap memory used: 690760
> Index summary off heap memory used: 54736
> Compression metadata off heap memory used: 1880
> Compacted partition minimum bytes: 73
> Compacted partition maximum bytes: 124
> Compacted partition mean bytes: 96
> Average live cells per slice (last five minutes): NaN
> Maximum live cells per slice (last five minutes): 0
> Average tombstones per slice (last five minutes): NaN
> Maximum tombstones per slice (last five minutes): 0
> Dropped Mutations: 0 
> {code}
> {code:java}
> Apr 27 03:10:39 cassandra[9263]: TRACE o.a.c.d.c.SizeTieredCompactionStrategy 
> Compaction buckets are 
> 

[jira] [Comment Edited] (CASSANDRA-14423) SSTables stop being compacted

2018-06-27 Thread Kurt Greaves (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524885#comment-16524885
 ] 

Kurt Greaves edited comment on CASSANDRA-14423 at 6/27/18 10:51 AM:


[~krummas] I've suggested that we at the very least provide a flag to skip 
anti-compaction from full repairs before, but it was all deemed too 
[complicated|https://issues.apache.org/jira/browse/CASSANDRA-13885?focusedCommentId=16206922=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16206922].
 Regardless, I think a flag would be perfectly fine and it's still desirable so 
at least people can go back to at the very least running full repairs 
successfully without having to worry about SSTables being marked repaired. 
However, I don't think we can go and change the default behaviour, purely 
because people _could_ still be running full repairs on earlier versions of 
3.x/3.0 before this bug came along.


was (Author: kurtg):
[~krummas] I've suggested that we at the very least provide a flag to skip 
anti-compaction from full repairs before, but it was all deemed too 
complicated. Regardless, I think a flag would be perfectly fine and it's still 
desirable so at least people can go back to at the very least running full 
repairs successfully without having to worry about SSTables being marked 
repaired. However, I don't think we can go and change the default behaviour, 
purely because people _could_ still be running full repairs on earlier versions 
of 3.x/3.0 before this bug came along.

> SSTables stop being compacted
> -
>
> Key: CASSANDRA-14423
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14423
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Kurt Greaves
>Assignee: Kurt Greaves
>Priority: Blocker
> Fix For: 2.2.13, 3.0.17, 3.11.3
>
>
> So seeing a problem in 3.11.0 where SSTables are being lost from the view and 
> not being included in compactions/as candidates for compaction. It seems to 
> get progressively worse until there's only 1-2 SSTables in the view which 
> happen to be the most recent SSTables and thus compactions completely stop 
> for that table.
> The SSTables seem to still be included in reads, just not compactions.
> The issue can be fixed by restarting C*, as it will reload all SSTables into 
> the view, but this is only a temporary fix. User defined/major compactions 
> still work - it's not clear if they include the result back in the view, but 
> this is not a good workaround.
> This also results in a discrepancy between SSTable count and SSTables in 
> levels for any table using LCS.
> {code:java}
> Keyspace : xxx
> Read Count: 57761088
> Read Latency: 0.10527088681224288 ms.
> Write Count: 2513164
> Write Latency: 0.018211106398149903 ms.
> Pending Flushes: 0
> Table: xxx
> SSTable count: 10
> SSTables in each level: [2, 0, 0, 0, 0, 0, 0, 0, 0]
> Space used (live): 894498746
> Space used (total): 894498746
> Space used by snapshots (total): 0
> Off heap memory used (total): 11576197
> SSTable Compression Ratio: 0.6956629530569777
> Number of keys (estimate): 3562207
> Memtable cell count: 0
> Memtable data size: 0
> Memtable off heap memory used: 0
> Memtable switch count: 87
> Local read count: 57761088
> Local read latency: 0.108 ms
> Local write count: 2513164
> Local write latency: NaN ms
> Pending flushes: 0
> Percent repaired: 86.33
> Bloom filter false positives: 43
> Bloom filter false ratio: 0.0
> Bloom filter space used: 8046104
> Bloom filter off heap memory used: 8046024
> Index summary off heap memory used: 3449005
> Compression metadata off heap memory used: 81168
> Compacted partition minimum bytes: 104
> Compacted partition maximum bytes: 5722
> Compacted partition mean bytes: 175
> Average live cells per slice (last five minutes): 1.0
> Maximum live cells per slice (last five minutes): 1
> Average tombstones per slice (last five minutes): 1.0
> Maximum tombstones per slice (last five minutes): 1
> Dropped Mutations: 0
> {code}
> Also for STCS we've confirmed that SSTable count will be different to the 
> number of SSTables reported in the Compaction Buckets. In the below example 
> there's only 3 SSTables in a single bucket - no more are listed for this 
> table. Compaction thresholds haven't been modified for this table and it's a 
> very basic KV schema.
> {code:java}
> Keyspace : yyy
> Read Count: 30485
> Read Latency: 0.06708991307200263 ms.
> Write Count: 57044
> Write Latency: 0.02204061776873992 ms.
> Pending Flushes: 0
> Table: yyy
> SSTable count: 19
> Space used (live): 18195482
> Space used (total): 18195482
> Space used by snapshots (total): 0
> Off heap memory used (total): 747376
> 

[jira] [Comment Edited] (CASSANDRA-14543) Hinted handoff to replay purgeable tombstones

2018-06-27 Thread Kurt Greaves (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523302#comment-16523302
 ] 

Kurt Greaves edited comment on CASSANDRA-14543 at 6/27/18 10:48 AM:


{quote}But hints-handoff will only happen once, and we know the target node is 
missing that deletion. It may not need it if the tombstone is repaired from 
another node within GCGS, otherwise, it's the best (only) way to delete the 
data on that node.
{quote}
-Well, it could create more unnecessary read-repair in the following scenario 
(which is the only case where HH of purgeable tombstones comes into play):-
 -3 nodes, A, B, C.-
 # -Insert as per your example-
 # -Node B goes down-
 # -Delete partition-
 # -GCGS passes-
 # -A and C compact away partition deletion-
 # -B comes back up-
 # -A/C HH tombstone to B-

-Any further reads for that partition will now cause a RR where the tombstone 
is not propagated-

^ clearly NFI what I'm talking about.

But really, we'd be only addressing the case where a deletion is performed 
within the HH window and then the node stays down until GCGS passes. This seems 
like a really narrow use case here, especially because if a node is down for 
GCGS you're going to have problems anyway (unless there's something I'm missing 
here).


was (Author: kurtg):
{quote}But hints-handoff will only happen once, and we know the target node is 
missing that deletion. It may not need it if the tombstone is repaired from 
another node within GCGS, otherwise, it's the best (only) way to delete the 
data on that node.
{quote}
Well, it could create more unnecessary read-repair in the following scenario 
(which is the only case where HH of purgeable tombstones comes into play):
 3 nodes, A, B, C.
 # Insert as per your example
 # Node B goes down
 # Delete partition
 # GCGS passes
 # A and C compact away partition deletion
 # B comes back up
 # A/C HH tombstone to B
 
Any further reads for that partition will now cause a RR where the tombstone is 
not propagated

But really, we'd be only addressing the case where a deletion is performed 
within the HH window and then the node stays down until GCGS passes. This seems 
like a really narrow use case here, especially because if a node is down for 
GCGS you're going to have problems anyway (unless there's something I'm missing 
here).

> Hinted handoff to replay purgeable tombstones 
> --
>
> Key: CASSANDRA-14543
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14543
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jay Zhuang
>Priority: Minor
>
> Hinted-handoff currently only dispatches and applies the mutations that are 
> within GCGS: 
> [{{Hint.java:97}}|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/hints/Hint.java#L97].
>  This is to make sure it won't resurrect any deleted data.
> But replaying tombstones should be safe; it could reduce the chance of having 
> [un-repairable inconsistent 
> data|https://lists.apache.org/thread.html/2d3d39d960143d4d2146ed2530821504ff855e832713dec7d0afd8ac@%3Cdev.cassandra.apache.org%3E].
> Here is the user scenario it tries to fix:
> {noformat}
> 1. Create a 3 nodes cluster
> 2. Create a table with small gc_grace_seconds (for reproducing purpose):
> CREATE KEYSPACE foo WITH replication = {'class': 'SimpleStrategy',
> 'replication_factor': 3};
> CREATE TABLE foo.bar (
> id int PRIMARY KEY,
> name text
> ) WITH gc_grace_seconds=30;
> 3. Insert data with consistency all:
> INSERT INTO foo.bar (id, name) VALUES(1, 'cstar');
> 4. stop 1 node
> $ ccm node2 stop
> 5. Delete the data with consistency quorum:
> DELETE FROM foo.bar WHERE id=1;
> 6. Wait 30 seconds and then start node2:
> $ ccm node2 start
> {noformat}
> Now, node2 has the data, node1/node3 have the purgeable tombstone. It 
> triggers RR every time which sends data from node2 to node1/node3 but repairs 
> nothing.
> With purgeable-tombstone hinted handoff, it will at least dispatch the 
> tombstone and delete the data on node2. It won't fix the root cause but 
> reduces the chance of hitting this issue.
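(For context, a simplified sketch of the kind of GCGS gate described above; the
real check lives in Hint.java and also folds in the mutation's smallest GCGS
and the hint's TTL, so treat the names and arithmetic here as assumptions.)

{code:java}
import java.util.concurrent.TimeUnit;

// Simplified illustration of gating hint replay on gc_grace_seconds:
// once "now" passes creation time + GCGS, the hint is silently dropped,
// which is why purgeable tombstones are never dispatched today.
final class HintGateSketch
{
    static boolean shouldApply(long creationTimeMillis, int gcGraceSeconds, long nowMillis)
    {
        long expirationMillis = creationTimeMillis + TimeUnit.SECONDS.toMillis(gcGraceSeconds);
        return nowMillis < expirationMillis;
    }
}
{code}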



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14543) Hinted handoff to replay purgeable tombstones

2018-06-27 Thread Kurt Greaves (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524896#comment-16524896
 ] 

Kurt Greaves commented on CASSANDRA-14543:
--

[~jay.zhuang], right, sorry, my misunderstanding. But unless I'm mistaken, this 
still only helps the case where a node is down for the entirety of GCGS, 
correct? Because that's the only way you'll ever hint a purgeable tombstone. In 
which case you'll be in for a real bad time anyway and hinting tombstones from 
a small portion of the downtime isn't going to give any real benefit?

> Hinted handoff to replay purgeable tombstones 
> --
>
> Key: CASSANDRA-14543
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14543
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jay Zhuang
>Priority: Minor
>
> Hinted-handoff currently only dispatches and applies the mutations that are 
> within GCGS: 
> [{{Hint.java:97}}|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/hints/Hint.java#L97].
>  This is to make sure it won't resurrect any deleted data.
> But replaying tombstones should be safe; it could reduce the chance of having 
> [un-repairable inconsistent 
> data|https://lists.apache.org/thread.html/2d3d39d960143d4d2146ed2530821504ff855e832713dec7d0afd8ac@%3Cdev.cassandra.apache.org%3E].
> Here is the user scenario it tries to fix:
> {noformat}
> 1. Create a 3 nodes cluster
> 2. Create a table with small gc_grace_seconds (for reproducing purpose):
> CREATE KEYSPACE foo WITH replication = {'class': 'SimpleStrategy',
> 'replication_factor': 3};
> CREATE TABLE foo.bar (
> id int PRIMARY KEY,
> name text
> ) WITH gc_grace_seconds=30;
> 3. Insert data with consistency all:
> INSERT INTO foo.bar (id, name) VALUES(1, 'cstar');
> 4. stop 1 node
> $ ccm node2 stop
> 5. Delete the data with consistency quorum:
> DELETE FROM foo.bar WHERE id=1;
> 6. Wait 30 seconds and then start node2:
> $ ccm node2 start
> {noformat}
> Now, node2 has the data, node1/node3 have the purgeable tombstone. It 
> triggers RR every time which sends data from node2 to node1/node3 but repairs 
> nothing.
> With purgeable-tombstone hinted handoff, it will at least dispatch the 
> tombstone and delete the data on node2. It won't fix the root cause but 
> reduces the chance of hitting this issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14423) SSTables stop being compacted

2018-06-27 Thread Kurt Greaves (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524885#comment-16524885
 ] 

Kurt Greaves commented on CASSANDRA-14423:
--

[~krummas] I've suggested that we at the very least provide a flag to skip 
anti-compaction from full repairs before, but it was all deemed too 
complicated. Regardless, I think a flag would be perfectly fine and it's still 
desirable so at least people can go back to at the very least running full 
repairs successfully without having to worry about SSTables being marked 
repaired. However, I don't think we can go and change the default behaviour, 
purely because people _could_ still be running full repairs on earlier versions 
of 3.x/3.0 before this bug came along.

> SSTables stop being compacted
> -
>
> Key: CASSANDRA-14423
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14423
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Kurt Greaves
>Assignee: Kurt Greaves
>Priority: Blocker
> Fix For: 2.2.13, 3.0.17, 3.11.3
>
>
> So seeing a problem in 3.11.0 where SSTables are being lost from the view and 
> not being included in compactions/as candidates for compaction. It seems to 
> get progressively worse until there's only 1-2 SSTables in the view which 
> happen to be the most recent SSTables and thus compactions completely stop 
> for that table.
> The SSTables seem to still be included in reads, just not compactions.
> The issue can be fixed by restarting C*, as it will reload all SSTables into 
> the view, but this is only a temporary fix. User defined/major compactions 
> still work - it's not clear if they include the result back in the view, but 
> this is not a good workaround.
> This also results in a discrepancy between SSTable count and SSTables in 
> levels for any table using LCS.
> {code:java}
> Keyspace : xxx
> Read Count: 57761088
> Read Latency: 0.10527088681224288 ms.
> Write Count: 2513164
> Write Latency: 0.018211106398149903 ms.
> Pending Flushes: 0
> Table: xxx
> SSTable count: 10
> SSTables in each level: [2, 0, 0, 0, 0, 0, 0, 0, 0]
> Space used (live): 894498746
> Space used (total): 894498746
> Space used by snapshots (total): 0
> Off heap memory used (total): 11576197
> SSTable Compression Ratio: 0.6956629530569777
> Number of keys (estimate): 3562207
> Memtable cell count: 0
> Memtable data size: 0
> Memtable off heap memory used: 0
> Memtable switch count: 87
> Local read count: 57761088
> Local read latency: 0.108 ms
> Local write count: 2513164
> Local write latency: NaN ms
> Pending flushes: 0
> Percent repaired: 86.33
> Bloom filter false positives: 43
> Bloom filter false ratio: 0.0
> Bloom filter space used: 8046104
> Bloom filter off heap memory used: 8046024
> Index summary off heap memory used: 3449005
> Compression metadata off heap memory used: 81168
> Compacted partition minimum bytes: 104
> Compacted partition maximum bytes: 5722
> Compacted partition mean bytes: 175
> Average live cells per slice (last five minutes): 1.0
> Maximum live cells per slice (last five minutes): 1
> Average tombstones per slice (last five minutes): 1.0
> Maximum tombstones per slice (last five minutes): 1
> Dropped Mutations: 0
> {code}
> Also for STCS we've confirmed that SSTable count will be different to the 
> number of SSTables reported in the Compaction Buckets. In the below example 
> there's only 3 SSTables in a single bucket - no more are listed for this 
> table. Compaction thresholds haven't been modified for this table and it's a 
> very basic KV schema.
> {code:java}
> Keyspace : yyy
> Read Count: 30485
> Read Latency: 0.06708991307200263 ms.
> Write Count: 57044
> Write Latency: 0.02204061776873992 ms.
> Pending Flushes: 0
> Table: yyy
> SSTable count: 19
> Space used (live): 18195482
> Space used (total): 18195482
> Space used by snapshots (total): 0
> Off heap memory used (total): 747376
> SSTable Compression Ratio: 0.7607394576769735
> Number of keys (estimate): 116074
> Memtable cell count: 0
> Memtable data size: 0
> Memtable off heap memory used: 0
> Memtable switch count: 39
> Local read count: 30485
> Local read latency: NaN ms
> Local write count: 57044
> Local write latency: NaN ms
> Pending flushes: 0
> Percent repaired: 79.76
> Bloom filter false positives: 0
> Bloom filter false ratio: 0.0
> Bloom filter space used: 690912
> Bloom filter off heap memory used: 690760
> Index summary off heap memory used: 54736
> Compression metadata off heap memory used: 1880
> Compacted partition minimum bytes: 73
> Compacted partition maximum 

[jira] [Commented] (CASSANDRA-14525) streaming failure during bootstrap makes new node into inconsistent state

2018-06-27 Thread Kurt Greaves (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524862#comment-16524862
 ] 

Kurt Greaves commented on CASSANDRA-14525:
--

Mostly looks good; however, we still leave write_survey mode after a resumed 
bootstrap completes even when we were started in write survey mode. Also, just 
noticed: because we hackily re-use isSurveyMode when resuming a bootstrap, we 
always log the following message regardless of whether we were in write survey 
mode originally or not.
{code}Leaving write survey mode and joining ring at operator request{code}
I think at this point we could solve these two problems by simply calling 
{{finishJoiningRing}} explicitly when we successfully bootstrap after a resume 
in {{resumeBootstrap}}, rather than indirectly through {{joinRing}}, and also 
handle write_survey in the same place.

Also, another small nit: can we change the spelling of {{bootstraped}} to 
{{bootstrapped}} in the exception messages?
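(A rough sketch of the suggested flow, using the method and field names from
the comment above; treat the surrounding structure as an assumption, not the
actual StorageService code.)

{code:java}
// Hypothetical shape of the fix in resumeBootstrap(): on a successful
// resumed bootstrap, finish joining the ring explicitly instead of
// routing through joinRing(), and respect write survey mode.
void onResumedBootstrapComplete()
{
    if (!isSurveyMode)
        finishJoiningRing();
    // else: remain in write survey mode until the operator requests a join
}
{code}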

> streaming failure during bootstrap makes new node into inconsistent state
> -
>
> Key: CASSANDRA-14525
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14525
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Jaydeepkumar Chovatia
>Assignee: Jaydeepkumar Chovatia
>Priority: Major
> Fix For: 4.0, 2.2.x, 3.0.x
>
>
> If bootstrap fails for a newly joining node (the most common reason is a 
> streaming failure), then Cassandra remains in the {{joining}} state, which is 
> fine, but Cassandra also enables the native transport, which makes the overall 
> state inconsistent. This further causes a NullPointerException if auth is 
> enabled on the new node; reproducible steps follow:
> For example if bootstrap fails due to streaming errors like
> {quote}java.util.concurrent.ExecutionException: 
> org.apache.cassandra.streaming.StreamException: Stream failed
>  at 
> com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:299)
>  ~[guava-18.0.jar:na]
>  at 
> com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:286)
>  ~[guava-18.0.jar:na]
>  at 
> com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116) 
> ~[guava-18.0.jar:na]
>  at 
> org.apache.cassandra.service.StorageService.bootstrap(StorageService.java:1256)
>  [apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:894)
>  [apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:660)
>  [apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.service.StorageService.initServer(StorageService.java:573)
>  [apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:330) 
> [apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:567)
>  [apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:695) 
> [apache-cassandra-3.0.16.jar:3.0.16]
>  Caused by: org.apache.cassandra.streaming.StreamException: Stream failed
>  at 
> org.apache.cassandra.streaming.management.StreamEventJMXNotifier.onFailure(StreamEventJMXNotifier.java:85)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>  at com.google.common.util.concurrent.Futures$6.run(Futures.java:1310) 
> ~[guava-18.0.jar:na]
>  at 
> com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:457)
>  ~[guava-18.0.jar:na]
>  at 
> com.google.common.util.concurrent.ExecutionList.executeListener(ExecutionList.java:156)
>  ~[guava-18.0.jar:na]
>  at 
> com.google.common.util.concurrent.ExecutionList.execute(ExecutionList.java:145)
>  ~[guava-18.0.jar:na]
>  at 
> com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:202)
>  ~[guava-18.0.jar:na]
>  at 
> org.apache.cassandra.streaming.StreamResultFuture.maybeComplete(StreamResultFuture.java:211)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.streaming.StreamResultFuture.handleSessionComplete(StreamResultFuture.java:187)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.streaming.StreamSession.closeSession(StreamSession.java:440)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.streaming.StreamSession.onError(StreamSession.java:540) 
> ~[apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:307)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>  at 
> 

[jira] [Commented] (CASSANDRA-14423) SSTables stop being compacted

2018-06-27 Thread Marcus Eriksson (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524758#comment-16524758
 ] 

Marcus Eriksson commented on CASSANDRA-14423:
-

Should we stop doing anticompaction at all after full repairs instead? Clearly 
no one does {{--full}} repairs right now and letting users do non-incremental 
full repairs might be good until 4.0 (CASSANDRA-9143).

> SSTables stop being compacted
> -
>
> Key: CASSANDRA-14423
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14423
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Kurt Greaves
>Assignee: Kurt Greaves
>Priority: Blocker
> Fix For: 2.2.13, 3.0.17, 3.11.3
>
>
> So seeing a problem in 3.11.0 where SSTables are being lost from the view and 
> not being included in compactions/as candidates for compaction. It seems to 
> get progressively worse until there's only 1-2 SSTables in the view which 
> happen to be the most recent SSTables and thus compactions completely stop 
> for that table.
> The SSTables seem to still be included in reads, just not compactions.
> The issue can be fixed by restarting C*, as it will reload all SSTables into 
> the view, but this is only a temporary fix. User defined/major compactions 
> still work - it's not clear if they include the result back in the view, but 
> this is not a good workaround.
> This also results in a discrepancy between SSTable count and SSTables in 
> levels for any table using LCS.
> {code:java}
> Keyspace : xxx
> Read Count: 57761088
> Read Latency: 0.10527088681224288 ms.
> Write Count: 2513164
> Write Latency: 0.018211106398149903 ms.
> Pending Flushes: 0
> Table: xxx
> SSTable count: 10
> SSTables in each level: [2, 0, 0, 0, 0, 0, 0, 0, 0]
> Space used (live): 894498746
> Space used (total): 894498746
> Space used by snapshots (total): 0
> Off heap memory used (total): 11576197
> SSTable Compression Ratio: 0.6956629530569777
> Number of keys (estimate): 3562207
> Memtable cell count: 0
> Memtable data size: 0
> Memtable off heap memory used: 0
> Memtable switch count: 87
> Local read count: 57761088
> Local read latency: 0.108 ms
> Local write count: 2513164
> Local write latency: NaN ms
> Pending flushes: 0
> Percent repaired: 86.33
> Bloom filter false positives: 43
> Bloom filter false ratio: 0.0
> Bloom filter space used: 8046104
> Bloom filter off heap memory used: 8046024
> Index summary off heap memory used: 3449005
> Compression metadata off heap memory used: 81168
> Compacted partition minimum bytes: 104
> Compacted partition maximum bytes: 5722
> Compacted partition mean bytes: 175
> Average live cells per slice (last five minutes): 1.0
> Maximum live cells per slice (last five minutes): 1
> Average tombstones per slice (last five minutes): 1.0
> Maximum tombstones per slice (last five minutes): 1
> Dropped Mutations: 0
> {code}
> Also for STCS we've confirmed that SSTable count will be different to the 
> number of SSTables reported in the Compaction Buckets. In the below example 
> there's only 3 SSTables in a single bucket - no more are listed for this 
> table. Compaction thresholds haven't been modified for this table and it's a 
> very basic KV schema.
> {code:java}
> Keyspace : yyy
> Read Count: 30485
> Read Latency: 0.06708991307200263 ms.
> Write Count: 57044
> Write Latency: 0.02204061776873992 ms.
> Pending Flushes: 0
> Table: yyy
> SSTable count: 19
> Space used (live): 18195482
> Space used (total): 18195482
> Space used by snapshots (total): 0
> Off heap memory used (total): 747376
> SSTable Compression Ratio: 0.7607394576769735
> Number of keys (estimate): 116074
> Memtable cell count: 0
> Memtable data size: 0
> Memtable off heap memory used: 0
> Memtable switch count: 39
> Local read count: 30485
> Local read latency: NaN ms
> Local write count: 57044
> Local write latency: NaN ms
> Pending flushes: 0
> Percent repaired: 79.76
> Bloom filter false positives: 0
> Bloom filter false ratio: 0.0
> Bloom filter space used: 690912
> Bloom filter off heap memory used: 690760
> Index summary off heap memory used: 54736
> Compression metadata off heap memory used: 1880
> Compacted partition minimum bytes: 73
> Compacted partition maximum bytes: 124
> Compacted partition mean bytes: 96
> Average live cells per slice (last five minutes): NaN
> Maximum live cells per slice (last five minutes): 0
> Average tombstones per slice (last five minutes): NaN
> Maximum tombstones per slice (last five minutes): 0
> Dropped Mutations: 0 
> {code}
> 

[jira] [Updated] (CASSANDRA-14545) dtests: fix pytest.raises argument names

2018-06-27 Thread Stefan Podkowinski (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Podkowinski updated CASSANDRA-14545:
---
Description: 
I've been through a couple of [dtest 
results|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/580/#showFailuresLink]
 lately and noticed some interpreter errors regarding how we call 
pytest.raises. The 
[reference|https://docs.pytest.org/en/latest/assert.html#assertions-about-expected-exceptions]
 is pretty clear on what the correct arguments would be, but I still want to make 
sure we're not working on different pytest versions. 
[~mkjellman] can you quickly check the following inconsistencies and look at my 
patch (msg->message, matches->match)?
{noformat}
git show 49b2dda4 |egrep 'raises.*, m' {noformat}

  was:
I've been through a couple of [dtest 
results|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/580/#showFailuresLink]
 lately and noticed some interpreter errors regarding how we call 
pytest.raises. The 
[reference|https://docs.pytest.org/en/latest/assert.html#assertions-about-expected-exceptions]
 is pretty clear on what the correct arguments would be, but I still want to make 
sure we're not working on different pytest versions. 
 [~bdeggleston], can you quickly check the following inconsistencies and look 
at my patch (msg->message, matches->match)?
{noformat}
git show 49b2dda4 |egrep 'raises.*, m' {noformat}


> dtests: fix pytest.raises argument names
> 
>
> Key: CASSANDRA-14545
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14545
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>Priority: Major
> Attachments: CASSANDRA-14545.patch
>
>
> I've been through a couple of [dtest 
> results|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/580/#showFailuresLink]
>  lately and noticed some interpreter errors regarding how we call 
> pytest.raises. The 
> [reference|https://docs.pytest.org/en/latest/assert.html#assertions-about-expected-exceptions]
>  is pretty clear on what the correct arguments would be, but I still want to 
> make sure we're not working on different pytest versions. 
> [~mkjellman] can you quickly check the following inconsistencies and look at 
> my patch (msg->message, matches->match)?
> {noformat}
> git show 49b2dda4 |egrep 'raises.*, m' {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14545) dtests: fix pytest.raises argument names

2018-06-27 Thread Stefan Podkowinski (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Podkowinski updated CASSANDRA-14545:
---
Status: Patch Available  (was: Open)

> dtests: fix pytest.raises argument names
> 
>
> Key: CASSANDRA-14545
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14545
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>Priority: Major
> Attachments: CASSANDRA-14545.patch
>
>
> I've been through a couple of [dtest 
> results|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/580/#showFailuresLink]
>  lately and noticed some interpreter errors regarding how we call 
> pytest.raises. The 
> [reference|https://docs.pytest.org/en/latest/assert.html#assertions-about-expected-exceptions]
>  is pretty clear on what the correct arguments would be, but I still want to 
> make sure we're not working on different pytest versions. 
> [~mkjellman] can you quickly check the following inconsistencies and look at 
> my patch (msg->message, matches->match)?
> {noformat}
> git show 49b2dda4 |egrep 'raises.*, m' {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14545) dtests: fix pytest.raises argument names

2018-06-27 Thread Stefan Podkowinski (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Podkowinski updated CASSANDRA-14545:
---
Attachment: CASSANDRA-14545.patch

> dtests: fix pytest.raises argument names
> 
>
> Key: CASSANDRA-14545
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14545
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>Priority: Major
> Attachments: CASSANDRA-14545.patch
>
>
> I've been through a couple of [dtest 
> results|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/580/#showFailuresLink]
>  lately and noticed some interpreter errors regarding how we call 
> pytest.raises. The 
> [reference|https://docs.pytest.org/en/latest/assert.html#assertions-about-expected-exceptions]
>  is pretty clear on what the correct arguments should be, but I still want 
> to make sure we're not working with different pytest versions. 
>  [~bdeggleston], can you quickly check the following inconsistencies and look 
> at my patch (msg->message, matches->match)?
> {noformat}
> git show 49b2dda4 |egrep 'raises.*, m' {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-14545) dtests: fix pytest.raises argument names

2018-06-27 Thread Stefan Podkowinski (JIRA)
Stefan Podkowinski created CASSANDRA-14545:
--

 Summary: dtests: fix pytest.raises argument names
 Key: CASSANDRA-14545
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14545
 Project: Cassandra
  Issue Type: Bug
  Components: Testing
Reporter: Stefan Podkowinski
Assignee: Stefan Podkowinski


I've been through a couple of [dtest 
results|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/580/#showFailuresLink]
 lately and noticed some interpreter errors regarding how we call 
pytest.raises. The 
[reference|https://docs.pytest.org/en/latest/assert.html#assertions-about-expected-exceptions]
 is pretty clear on what the correct arguments should be, but I still want to 
make sure we're not working with different pytest versions. 
 [~bdeggleston], can you quickly check the following inconsistencies and look 
at my patch (msg->message, matches->match)?
{noformat}
git show 49b2dda4 |egrep 'raises.*, m' {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



cassandra-dtest git commit: Fix typo in write_failures_test: enocde() -> encode()

2018-06-27 Thread spod
Repository: cassandra-dtest
Updated Branches:
  refs/heads/master b76b49e6a -> 5276d89b6


Fix typo in write_failures_test: enocde() -> encode()


Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/5276d89b
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/5276d89b
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/5276d89b

Branch: refs/heads/master
Commit: 5276d89b64593f2809ed050ef8aefbf7cd10eb18
Parents: b76b49e
Author: Stefan Podkowinski 
Authored: Wed Jun 27 08:42:17 2018 +0200
Committer: Stefan Podkowinski 
Committed: Wed Jun 27 08:42:17 2018 +0200

--
 write_failures_test.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/5276d89b/write_failures_test.py
--
diff --git a/write_failures_test.py b/write_failures_test.py
index 67bb22a..ece6245 100644
--- a/write_failures_test.py
+++ b/write_failures_test.py
@@ -224,7 +224,7 @@ class TestWriteFailures(Tester):
 with pytest.raises(self.expected_expt):
 client.insert('key1'.encode(),
   thrift_types.ColumnParent('mytable'),
-  thrift_types.Column('value'.encode(), 'Value 1'.enocde(), 0),
+  thrift_types.Column('value'.encode(), 'Value 1'.encode(), 0),
   thrift_types.ConsistencyLevel.ALL)
 
 client.transport.close()

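As an aside (not part of the commit): a misspelled method name like this only 
fails at runtime, when the call is actually reached, not at import or 
collection time, which is why the typo could survive until this test executed. 
A minimal sketch of how it surfaces:
{noformat}
import pytest

def test_enocde_typo_raises_at_runtime():
    # str has no enocde() method, so the call fails with an
    # AttributeError whose message names the missing attribute.
    with pytest.raises(AttributeError, match="enocde"):
        'Value 1'.enocde()
{noformat}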

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org