[jira] [Comment Edited] (CASSANDRA-14468) "Unable to parse targets for index" on upgrade to Cassandra 3.0.10-3.0.16

2018-07-23 Thread Jordan West (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16553675#comment-16553675
 ] 

Jordan West edited comment on CASSANDRA-14468 at 7/24/18 1:40 AM:
--

[~wadey], sorry for taking so long to get to this but I finally had some time 
today. I agree with your assessment so far but unfortunately don’t have much to 
add. It looks like CASSANDRA-12516 fixed what the cache was keyed on but not 
all the {{getInterned}} call sites. Indeed the {{type}} column is the CQL 
(value) type. Further, we no longer have the comparator after 
{{LegacySchemaMigrator}} runs (of note, {{LegacySchemaMigrator}} does use 
{{getInterned}} as intended but since we lose the comparator that only makes 
things worse)*.

[~iamaleksey], do you have any thoughts on this since you reported the original 
issue?

\* I’m actually just getting familiar with this code but I think [~jasobrown] 
referred this ticket to me because of the initial relation to 2i


was (Author: jrwest):
[~wadey], sorry for taking so long to get to this but I finally had some time 
today. I agree with your assessment so far but unfortunately don’t have much to 
add. It looks like CASSANDRA-12516 fixed what the cache was keyed on but not 
all the {{getInterned}} call sites. Indeed the {{type}} column is the CQL 
(value) type. Further, we no longer have the comparator after 
{{LegacySchemaMigrator}} runs (of note, {{LegacySchemaMigrator}} does use 
{{getInterned}} as intended but since we lose the comparator that only makes 
things worse)*.

[~iamaleksey], do you have any thoughts on this since you reported the original 
issue?

* I’m actually just getting familiar with this code but I think [~jasobrown] 
referred this ticket to me because of the initial relation to 2i

> "Unable to parse targets for index" on upgrade to Cassandra 3.0.10-3.0.16
> -
>
> Key: CASSANDRA-14468
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14468
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Wade Simmons
>Priority: Major
> Attachments: data.tar.gz
>
>
> I am attempting to upgrade from Cassandra 2.2.10 to 3.0.16. I am getting this 
> error:
> {code}
> org.apache.cassandra.exceptions.ConfigurationException: Unable to parse 
> targets for index idx_foo ("666f6f")
>   at 
> org.apache.cassandra.index.internal.CassandraIndex.parseTarget(CassandraIndex.java:800)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>   at 
> org.apache.cassandra.index.internal.CassandraIndex.indexCfsMetadata(CassandraIndex.java:747)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.scrubDataDirectories(ColumnFamilyStore.java:645)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:251) 
> [apache-cassandra-3.0.16.jar:3.0.16]
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:569)
>  [apache-cassandra-3.0.16.jar:3.0.16]
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:697) 
> [apache-cassandra-3.0.16.jar:3.0.16]
> {code}
> It looks like this might be related to CASSANDRA-14104 that was just added to 
> 3.0.16 






[jira] [Comment Edited] (CASSANDRA-14468) "Unable to parse targets for index" on upgrade to Cassandra 3.0.10-3.0.16

2018-07-23 Thread Jordan West (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16553675#comment-16553675
 ] 

Jordan West edited comment on CASSANDRA-14468 at 7/24/18 1:40 AM:
--

[~wadey], sorry for taking so long to get to this but I finally had some time 
today. I agree with your assessment so far but unfortunately don’t have much to 
add. It looks like CASSANDRA-12516 fixed what the cache was keyed on but not 
all the {{getInterned}} call sites. Indeed the {{type}} column is the CQL 
(value) type. Further, we no longer have the comparator after 
{{LegacySchemaMigrator}} runs (of note, {{LegacySchemaMigrator}} does use 
{{getInterned}} as intended but since we lose the comparator that only makes 
things worse)*.

[~iamaleksey], do you have any thoughts on this since you reported the original 
issue?

* I’m actually just getting familiar with this code but I think [~jasobrown] 
referred this ticket to me because of the initial relation to 2i


was (Author: jrwest):
[~wadey], sorry for taking so long to get to this but I finally had some time 
today. I agree with your assessment so far but unfortunately don’t have much to 
add. It looks like CASSANDRA-12516 fixed what the cache was keyed on but not 
all the {{getInterned}} call sites. Indeed the {{type}} column is the CQL 
(value) type. Further, we no longer have the comparator after 
{{LegacySchemaMigrator}} runs (of note, {{LegacySchemaMigrator}} does use 
{{getInterned}} as intended but since we lose the comparator that only makes 
things worse)*. 

Aleksey, do you have any thoughts on this since you reported the original issue?

\* I’m actually just getting familiar with this code but I think [~jasobrown] 
referred this ticket to me because of the initial relation to 2i

> "Unable to parse targets for index" on upgrade to Cassandra 3.0.10-3.0.16
> -
>
> Key: CASSANDRA-14468
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14468
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Wade Simmons
>Priority: Major
> Attachments: data.tar.gz
>
>
> I am attempting to upgrade from Cassandra 2.2.10 to 3.0.16. I am getting this 
> error:
> {code}
> org.apache.cassandra.exceptions.ConfigurationException: Unable to parse 
> targets for index idx_foo ("666f6f")
>   at 
> org.apache.cassandra.index.internal.CassandraIndex.parseTarget(CassandraIndex.java:800)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>   at 
> org.apache.cassandra.index.internal.CassandraIndex.indexCfsMetadata(CassandraIndex.java:747)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.scrubDataDirectories(ColumnFamilyStore.java:645)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:251) 
> [apache-cassandra-3.0.16.jar:3.0.16]
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:569)
>  [apache-cassandra-3.0.16.jar:3.0.16]
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:697) 
> [apache-cassandra-3.0.16.jar:3.0.16]
> {code}
> It looks like this might be related to CASSANDRA-14104 that was just added to 
> 3.0.16 






[jira] [Commented] (CASSANDRA-14468) "Unable to parse targets for index" on upgrade to Cassandra 3.0.10-3.0.16

2018-07-23 Thread Jordan West (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16553675#comment-16553675
 ] 

Jordan West commented on CASSANDRA-14468:
-

[~wadey], sorry for taking so long to get to this but I finally had some time 
today. I agree with your assessment so far but unfortunately don’t have much to 
add. It looks like CASSANDRA-12516 fixed what the cache was keyed on but not 
all the {{getInterned}} call sites. Indeed the {{type}} column is the CQL 
(value) type. Further, we no longer have the comparator after 
{{LegacySchemaMigrator}} runs (of note, {{LegacySchemaMigrator}} does use 
{{getInterned}} as intended but since we lose the comparator that only makes 
things worse)*. 

Aleksey, do you have any thoughts on this since you reported the original issue?

\* I’m actually just getting familiar with this code but I think [~jasobrown] 
referred this ticket to me because of the initial relation to 2i

> "Unable to parse targets for index" on upgrade to Cassandra 3.0.10-3.0.16
> -
>
> Key: CASSANDRA-14468
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14468
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Wade Simmons
>Priority: Major
> Attachments: data.tar.gz
>
>
> I am attempting to upgrade from Cassandra 2.2.10 to 3.0.16. I am getting this 
> error:
> {code}
> org.apache.cassandra.exceptions.ConfigurationException: Unable to parse 
> targets for index idx_foo ("666f6f")
>   at 
> org.apache.cassandra.index.internal.CassandraIndex.parseTarget(CassandraIndex.java:800)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>   at 
> org.apache.cassandra.index.internal.CassandraIndex.indexCfsMetadata(CassandraIndex.java:747)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.scrubDataDirectories(ColumnFamilyStore.java:645)
>  ~[apache-cassandra-3.0.16.jar:3.0.16]
>   at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:251) 
> [apache-cassandra-3.0.16.jar:3.0.16]
>   at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:569)
>  [apache-cassandra-3.0.16.jar:3.0.16]
>   at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:697) 
> [apache-cassandra-3.0.16.jar:3.0.16]
> {code}
> It looks like this might be related to CASSANDRA-14104 that was just added to 
> 3.0.16 






[jira] [Commented] (CASSANDRA-14556) Optimize streaming path in Cassandra

2018-07-23 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16553565#comment-16553565
 ] 

ASF GitHub Bot commented on CASSANDRA-14556:


Github user dineshjoshi commented on a diff in the pull request:

https://github.com/apache/cassandra/pull/239#discussion_r204586299
  
--- Diff: src/java/org/apache/cassandra/db/streaming/CassandraOutgoingFile.java ---
@@ -114,13 +155,51 @@ public void write(StreamSession session, DataOutputStreamPlus out, int version)
         CassandraStreamHeader.serializer.serialize(header, out, version);
         out.flush();
 
-        CassandraStreamWriter writer = header.compressionInfo == null ?
-                                       new CassandraStreamWriter(sstable, header.sections, session) :
-                                       new CompressedCassandraStreamWriter(sstable, header.sections,
-                                                                           header.compressionInfo, session);
+        IStreamWriter writer;
+        if (shouldStreamFullSSTable())
+        {
+            writer = new CassandraBlockStreamWriter(sstable, session, components);
+        }
+        else
+        {
+            writer = (header.compressionInfo == null) ?
+                     new CassandraStreamWriter(sstable, header.sections, session) :
+                     new CompressedCassandraStreamWriter(sstable, header.sections,
+                                                         header.compressionInfo, session);
+        }
         writer.write(out);
     }
 
+    @VisibleForTesting
+    public boolean shouldStreamFullSSTable()
+    {
+        return isFullSSTableTransfersEnabled && isFullyContained;
+    }
+
+    @VisibleForTesting
+    public boolean fullyContainedIn(List<Range<Token>> normalizedRanges, SSTableReader sstable)
+    {
+        if (normalizedRanges == null)
+            return false;
+
+        RangeOwnHelper rangeOwnHelper = new RangeOwnHelper(normalizedRanges);
+        try (KeyIterator iter = new KeyIterator(sstable.descriptor, sstable.metadata()))
+        {
+            while (iter.hasNext())
+            {
+                DecoratedKey key = iter.next();
+                try
+                {
+                    rangeOwnHelper.check(key);
+                } catch(RuntimeException e)
--- End diff --

@iamaleksey thank you for the useful feedback. I did discuss this with @krummas, and I believe that while there was room for improvement, the thinking at the time was that the benefits would outweigh the cost. I looked through the codebase, and this was the best way to definitively verify range containment, as I was going for correctness. That said, what you suggest is obviously better. I am concerned about scope creep in this PR, though. Would it be OK if we address it as part of a separate PR?

It would also be useful if we could design the effective range computation and its storage in the metadata; I am not sure what sort of gotchas I might run into.


> Optimize streaming path in Cassandra
> 
>
> Key: CASSANDRA-14556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14556
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Major
>  Labels: Performance
> Fix For: 4.x
>
>
> During streaming, Cassandra reifies the sstables into objects. This creates 
> unnecessary garbage and slows down the whole streaming process as some 
> sstables can be transferred as a whole file rather than individual 
> partitions. The objective of the ticket is to detect when a whole sstable can 
> be transferred and skip the object reification. We can also use a zero-copy 
> path to avoid bringing data into user-space on both sending and receiving 
> side.






[jira] [Commented] (CASSANDRA-14556) Optimize streaming path in Cassandra

2018-07-23 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16553429#comment-16553429
 ] 

ASF GitHub Bot commented on CASSANDRA-14556:


Github user aweisberg commented on a diff in the pull request:

https://github.com/apache/cassandra/pull/239#discussion_r204562307
  
--- Diff: src/java/org/apache/cassandra/db/streaming/CassandraOutgoingFile.java ---
@@ -114,13 +155,51 @@ public void write(StreamSession session, DataOutputStreamPlus out, int version)
         CassandraStreamHeader.serializer.serialize(header, out, version);
         out.flush();
 
-        CassandraStreamWriter writer = header.compressionInfo == null ?
-                                       new CassandraStreamWriter(sstable, header.sections, session) :
-                                       new CompressedCassandraStreamWriter(sstable, header.sections,
-                                                                           header.compressionInfo, session);
+        IStreamWriter writer;
+        if (shouldStreamFullSSTable())
+        {
+            writer = new CassandraBlockStreamWriter(sstable, session, components);
+        }
+        else
+        {
+            writer = (header.compressionInfo == null) ?
+                     new CassandraStreamWriter(sstable, header.sections, session) :
+                     new CompressedCassandraStreamWriter(sstable, header.sections,
+                                                         header.compressionInfo, session);
+        }
         writer.write(out);
     }
 
+    @VisibleForTesting
+    public boolean shouldStreamFullSSTable()
+    {
+        return isFullSSTableTransfersEnabled && isFullyContained;
+    }
+
+    @VisibleForTesting
+    public boolean fullyContainedIn(List<Range<Token>> normalizedRanges, SSTableReader sstable)
+    {
+        if (normalizedRanges == null)
+            return false;
+
+        RangeOwnHelper rangeOwnHelper = new RangeOwnHelper(normalizedRanges);
+        try (KeyIterator iter = new KeyIterator(sstable.descriptor, sstable.metadata()))
+        {
+            while (iter.hasNext())
+            {
+                DecoratedKey key = iter.next();
+                try
+                {
+                    rangeOwnHelper.check(key);
+                } catch(RuntimeException e)
--- End diff --

I mistakenly thought this index was a sampled index, not a full index. Requiring a comparison of every partition key in every sstable for the entire data set seems like a big regression for some use cases.

I was trying and failing to find the reasoning for why we switched to this.


> Optimize streaming path in Cassandra
> 
>
> Key: CASSANDRA-14556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14556
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Major
>  Labels: Performance
> Fix For: 4.x
>
>
> During streaming, Cassandra reifies the sstables into objects. This creates 
> unnecessary garbage and slows down the whole streaming process as some 
> sstables can be transferred as a whole file rather than individual 
> partitions. The objective of the ticket is to detect when a whole sstable can 
> be transferred and skip the object reification. We can also use a zero-copy 
> path to avoid bringing data into user-space on both sending and receiving 
> side.






[jira] [Commented] (CASSANDRA-14556) Optimize streaming path in Cassandra

2018-07-23 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16553343#comment-16553343
 ] 

ASF GitHub Bot commented on CASSANDRA-14556:


Github user iamaleksey commented on a diff in the pull request:

https://github.com/apache/cassandra/pull/239#discussion_r204537184
  
--- Diff: src/java/org/apache/cassandra/db/streaming/ComponentInfo.java ---
@@ -0,0 +1,97 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.cassandra.db.streaming;
+
+import java.io.IOException;
+
+import org.apache.commons.lang3.builder.EqualsBuilder;
+import org.apache.commons.lang3.builder.HashCodeBuilder;
+import org.apache.cassandra.db.TypeSizes;
+import org.apache.cassandra.io.IVersionedSerializer;
+import org.apache.cassandra.io.sstable.Component;
+import org.apache.cassandra.io.util.DataInputPlus;
+import org.apache.cassandra.io.util.DataOutputPlus;
+
+public class ComponentInfo
+{
+    final Component.Type type;
+    final long length;
+
+    public ComponentInfo(Component.Type type, long length)
+    {
+        assert length >= 0 : "Component length cannot be negative";
+        this.type = type;
+        this.length = length;
+    }
+
+    @Override
+    public String toString()
+    {
+        return "ComponentInfo{" +
+               "type=" + type +
+               ", length=" + length +
+               '}';
+    }
+
+    public boolean equals(Object o)
--- End diff --

It's generally considered to be a bad practice to implement `equals()` and 
`hashCode()` unless that class is stored in a set or a map - or an upstream 
implementation of such. Otherwise it's just confusing boilerplate (confusing 
because it implies that the class is used in ways it clearly isn't).

In this case, there is a `List<ComponentInfo>` field in 
`CassandraStreamHeader`, which has an `equals()`/`hashCode()` implementation, 
which on the surface justifies these. But those, in turn, are actually dead 
code. So what we should do is remove the implementations of `equals()` and 
`hashCode()` here, and do the same in `CassandraStreamHeader`, being good 
citizens.


> Optimize streaming path in Cassandra
> 
>
> Key: CASSANDRA-14556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14556
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Major
>  Labels: Performance
> Fix For: 4.x
>
>
> During streaming, Cassandra reifies the sstables into objects. This creates 
> unnecessary garbage and slows down the whole streaming process as some 
> sstables can be transferred as a whole file rather than individual 
> partitions. The objective of the ticket is to detect when a whole sstable can 
> be transferred and skip the object reification. We can also use a zero-copy 
> path to avoid bringing data into user-space on both sending and receiving 
> side.






[jira] [Commented] (CASSANDRA-14556) Optimize streaming path in Cassandra

2018-07-23 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16553342#comment-16553342
 ] 

ASF GitHub Bot commented on CASSANDRA-14556:


Github user iamaleksey commented on a diff in the pull request:

https://github.com/apache/cassandra/pull/239#discussion_r204538045
  
--- Diff: src/java/org/apache/cassandra/db/streaming/ComponentInfo.java ---
@@ -0,0 +1,97 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.cassandra.db.streaming;
+
+import java.io.IOException;
+
+import org.apache.commons.lang3.builder.EqualsBuilder;
+import org.apache.commons.lang3.builder.HashCodeBuilder;
+import org.apache.cassandra.db.TypeSizes;
+import org.apache.cassandra.io.IVersionedSerializer;
+import org.apache.cassandra.io.sstable.Component;
+import org.apache.cassandra.io.util.DataInputPlus;
+import org.apache.cassandra.io.util.DataOutputPlus;
+
+public class ComponentInfo
--- End diff --

Now, if you look carefully at all current uses of `ComponentInfo`, you'll 
see that it's only ever being used as an element of lists, and never as a 
separate entity. As such it would be cleaner - and nicer to work with - to 
implement a class, say, `ComponentManifest` that would have an ordered map of 
`Component.Type` to `long` size, and expose the ordered keyset, with 
serializers for the whole manifest. As it's written now, ser/deser code is 
leaking outside, while it could instead be nicely encapsulated here.
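
To make the suggestion concrete, here is a minimal sketch of the shape such a {{ComponentManifest}} could take, assuming an insertion-ordered map and using {{String}} in place of Cassandra's {{Component.Type}} to keep it self-contained; the names and the omission of the versioned serializer are illustrative only, not the actual implementation:

{code}
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Set;

// Illustrative only: a manifest owning the component -> length mapping, so the
// ordering and (de)serialization concerns live in one place instead of leaking
// out of the stream header as a list of per-component entries.
public final class ComponentManifest
{
    // LinkedHashMap preserves insertion order, giving a stable, ordered keyset.
    private final LinkedHashMap<String, Long> sizes;

    public ComponentManifest(LinkedHashMap<String, Long> sizes)
    {
        this.sizes = new LinkedHashMap<>(sizes);
    }

    public Set<String> components()
    {
        return Collections.unmodifiableSet(sizes.keySet());
    }

    public long sizeOf(String component)
    {
        Long size = sizes.get(component);
        if (size == null)
            throw new IllegalArgumentException("Unknown component: " + component);
        return size;
    }

    public long totalSize()
    {
        long total = 0;
        for (long size : sizes.values())
            total += size;
        return total;
    }
}
{code}

The idea is that the stream header would then carry a single manifest object, with the manifest's own serializer handling the per-component entries.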


> Optimize streaming path in Cassandra
> 
>
> Key: CASSANDRA-14556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14556
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Major
>  Labels: Performance
> Fix For: 4.x
>
>
> During streaming, Cassandra reifies the sstables into objects. This creates 
> unnecessary garbage and slows down the whole streaming process as some 
> sstables can be transferred as a whole file rather than individual 
> partitions. The objective of the ticket is to detect when a whole sstable can 
> be transferred and skip the object reification. We can also use a zero-copy 
> path to avoid bringing data into user-space on both sending and receiving 
> side.






[jira] [Commented] (CASSANDRA-14556) Optimize streaming path in Cassandra

2018-07-23 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16553322#comment-16553322
 ] 

ASF GitHub Bot commented on CASSANDRA-14556:


Github user iamaleksey commented on a diff in the pull request:

https://github.com/apache/cassandra/pull/239#discussion_r204531887
  
--- Diff: src/java/org/apache/cassandra/db/streaming/CassandraOutgoingFile.java ---
@@ -114,13 +155,51 @@ public void write(StreamSession session, DataOutputStreamPlus out, int version)
         CassandraStreamHeader.serializer.serialize(header, out, version);
         out.flush();
 
-        CassandraStreamWriter writer = header.compressionInfo == null ?
-                                       new CassandraStreamWriter(sstable, header.sections, session) :
-                                       new CompressedCassandraStreamWriter(sstable, header.sections,
-                                                                           header.compressionInfo, session);
+        IStreamWriter writer;
+        if (shouldStreamFullSSTable())
+        {
+            writer = new CassandraBlockStreamWriter(sstable, session, components);
+        }
+        else
+        {
+            writer = (header.compressionInfo == null) ?
+                     new CassandraStreamWriter(sstable, header.sections, session) :
+                     new CompressedCassandraStreamWriter(sstable, header.sections,
+                                                         header.compressionInfo, session);
+        }
         writer.write(out);
     }
 
+    @VisibleForTesting
+    public boolean shouldStreamFullSSTable()
+    {
+        return isFullSSTableTransfersEnabled && isFullyContained;
+    }
+
+    @VisibleForTesting
+    public boolean fullyContainedIn(List<Range<Token>> normalizedRanges, SSTableReader sstable)
+    {
+        if (normalizedRanges == null)
+            return false;
+
+        RangeOwnHelper rangeOwnHelper = new RangeOwnHelper(normalizedRanges);
+        try (KeyIterator iter = new KeyIterator(sstable.descriptor, sstable.metadata()))
+        {
+            while (iter.hasNext())
+            {
+                DecoratedKey key = iter.next();
+                try
+                {
+                    rangeOwnHelper.check(key);
+                } catch(RuntimeException e)
--- End diff --

On a more general note, this is potentially quite an expensive thing to do, 
especially for big sstables with skinny partitions, and in some cases this will 
introduce a performance regression.

The whole optimisation is realistically only useful for bootstrap, decom, 
and rebuild, with LCS (which is still plenty useful and impactful and worth 
having). But it wouldn't normally kick in for regular repairs because of the 
full-cover requirement, and it won't normally kick in for STCS until 
CASSANDRA-10540 (range aware compaction) is implemented. In those cases having 
to read through the whole primary index is a perf regression that we shouldn't 
allow to happen.

The easiest way to avoid it would be to store sstable's effective token 
ranges in sstable metadata in relation to the node's ranges, making this check 
essentially free. Otherwise we should probably disable complete sstable 
streaming for STCS tables, at least until CASSANDRA-10540 is implemented. That 
however wouldn't address the regression to regular streaming, so keeping ranges 
in the metadata would be my preferred way to go.
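
As a rough sketch of why metadata-backed ranges make the check cheap, here is the kind of containment test that becomes possible, using simplified stand-ins for Cassandra's token and range types (none of these names come from the actual codebase, and wraparound is ignored):

{code}
import java.util.List;

// Simplified stand-ins for illustration; these are not Cassandra's Token/Range
// types, and wraparound ranges are ignored to keep the sketch short.
final class TokenSpan
{
    final long left;   // exclusive lower bound
    final long right;  // inclusive upper bound

    TokenSpan(long left, long right)
    {
        this.left = left;
        this.right = right;
    }

    boolean contains(TokenSpan other)
    {
        return left <= other.left && other.right <= right;
    }
}

final class SSTableSpan
{
    // Hypothetical: the min/max token covered by the sstable, as it would be
    // recorded in sstable metadata at write time.
    final TokenSpan covered;

    SSTableSpan(TokenSpan covered)
    {
        this.covered = covered;
    }

    // O(number of ranges) instead of O(number of partition keys): the sstable
    // is fully contained if some owned, normalized range covers its whole span.
    // Conservative: a span straddling two owned ranges simply falls back to the
    // regular per-partition streaming path.
    boolean fullyContainedIn(List<TokenSpan> normalizedOwnedRanges)
    {
        for (TokenSpan owned : normalizedOwnedRanges)
            if (owned.contains(covered))
                return true;
        return false;
    }
}
{code}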


> Optimize streaming path in Cassandra
> 
>
> Key: CASSANDRA-14556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14556
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Major
>  Labels: Performance
> Fix For: 4.x
>
>
> During streaming, Cassandra reifies the sstables into objects. This creates 
> unnecessary garbage and slows down the whole streaming process as some 
> sstables can be transferred as a whole file rather than individual 
> partitions. The objective of the ticket is to detect when a whole sstable can 
> be transferred and skip the object reification. We can also use a zero-copy 
> path to avoid bringing data into user-space on both sending and receiving 
> side.






[jira] [Commented] (CASSANDRA-9608) Support Java 11

2018-07-23 Thread Robert Stupp (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16553228#comment-16553228
 ] 

Robert Stupp commented on CASSANDRA-9608:
-

Needs a change in ccm - [PR for ccm|https://github.com/riptano/ccm/pull/680]. 
Not sure which branch (master or cassandra-test) in the ccm repo is the "right" 
one for PRs.

Updating line 3 in {{requirements.txt}} in dtests to {{-e 
git+https://github.com/snazy/ccm.git@9608-jvm-options#egg=ccm}} fixes that as 
well.

> Support Java 11
> ---
>
> Key: CASSANDRA-9608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9608
> Project: Cassandra
>  Issue Type: Task
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 4.x
>
> Attachments: jdk_9_10.patch
>
>
> This ticket is intended to group all issues found to support Java 9 in the 
> future.
> From what I've found out so far:
> * Maven dependency {{com.sun:tools:jar:0}} via cobertura cannot be resolved. 
> It can be easily solved using this patch:
> {code}
> - artifactId="cobertura"/>
> + artifactId="cobertura">
> +  
> +
> {code}
> * Another issue is that {{sun.misc.Unsafe}} no longer contains the methods 
> {{monitorEnter}} + {{monitorExit}}. These methods are used by 
> {{o.a.c.utils.concurrent.Locks}} which is only used by 
> {{o.a.c.db.AtomicBTreeColumns}}.
> I don't intend to start working on this yet, since Java 9 is at too early a 
> development phase.






[jira] [Commented] (CASSANDRA-9608) Support Java 11

2018-07-23 Thread Robert Stupp (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16553160#comment-16553160
 ] 

Robert Stupp commented on CASSANDRA-9608:
-

Um - yea. That's ccm complaining that {{jvm.options}} doesn't exist... Let me 
look into that.

> Support Java 11
> ---
>
> Key: CASSANDRA-9608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9608
> Project: Cassandra
>  Issue Type: Task
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 4.x
>
> Attachments: jdk_9_10.patch
>
>
> This ticket is intended to group all issues found to support Java 9 in the 
> future.
> From what I've found out so far:
> * Maven dependency {{com.sun:tools:jar:0}} via cobertura cannot be resolved. 
> It can be easily solved using this patch:
> {code}
> - artifactId="cobertura"/>
> + artifactId="cobertura">
> +  
> +
> {code}
> * Another issue is that {{sun.misc.Unsafe}} no longer contains the methods 
> {{monitorEnter}} + {{monitorExit}}. These methods are used by 
> {{o.a.c.utils.concurrent.Locks}} which is only used by 
> {{o.a.c.db.AtomicBTreeColumns}}.
> I don't intend to start working on this yet, since Java 9 is at too early a 
> development phase.






[jira] [Comment Edited] (CASSANDRA-14557) Consider adding default and required keyspace replication options

2018-07-23 Thread Joseph Lynch (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16553108#comment-16553108
 ] 

Joseph Lynch edited comment on CASSANDRA-14557 at 7/23/18 4:54 PM:
---

[~KurtG], I believe the current state of the patch is to provide the options and implement {{SimpleStrategy}}; I believe we need CASSANDRA-14303 to add support for {{NetworkTopologyStrategy}}. For CASSANDRA-14303, I think the default would apply only if no datacenters were given in the declaration at all, e.g. { {{'class': 'NetworkTopologyStrategy'}} } would be equivalent to { {{'class': 'NetworkTopologyStrategy', 'default_datacenter_replication': 3}} }. The add-a-datacenter flow is covered in CASSANDRA-14303, but briefly, it relies on the auto-expansion happening only at {{CREATE}} or {{ALTER}} time. If you don't supply {{default_datacenter_replication}} but do supply explicit datacenters, everything works the same as in the past; if you do supply it, then any datacenters that exist but are not specified in the {{CREATE/ALTER}} statement get that default value added. The minimum validation catches e.g. { {{'class': 'NetworkTopologyStrategy', 'default_datacenter_replication': 1}} }.

Removing a datacenter is slightly trickier because of the minimum RF validation (to remove a datacenter in the presence of {{default_datacenter_replication}} you have to provide the excluded datacenter with RF=0), but I think it will still work because the validation happens after we template out the replication map.

 


was (Author: jolynch):
[~KurtG] I believe the current state of the patch is to provide the options and 
implement {{SimpleStrategy}}. We need CASSANDRA-14303 I believe to add support 
for {{NetworkTopologyStrategy}}. I think for CASSANDRA-14303 the default would 
only apply only if no datacenters were given in the declaration at all, e.g. 
{{{'class': 'NetworkTopologyStrategy'}}} would be equivalent to {{{'class': 
'NetworkTopologyStrategy', 'default_datacenter_replication': 3}}}. The adding a 
datacenter flow is covered in CASSANDRA-14303 but briefly it relies on the 
autoexpansion only to happen at {{CREATE}} or {{ALTER}} time. If you don't 
supply {{default_datacenter_replication}} but do supply explicit datacenters 
everything works the same as in the past, if you do supply it then any 
datacenters that exist but are not specified in the {{CREATE/ALTER}} statement 
get that default value added. The minimum validation catches e.g. {{{'class': 
'NetworkTopologyStrategy', 'default_datacenter_replication': 1}}}

Removing a datacenter is slightly trickier because of the minimum rf validation 
(to remove a datacenter in the presence of {{default_datacenter_replication}} 
you have to provide an excluded datacenter with RF=0), but I think it will 
still work because the validation happens after we template out the replication 
map.

 

> Consider adding default and required keyspace replication options
> -
>
> Key: CASSANDRA-14557
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14557
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Sumanth Pasupuleti
>Assignee: Sumanth Pasupuleti
>Priority: Minor
>  Labels: 4.0-feature-freeze-review-requested
> Fix For: 4.0
>
> Attachments: 14557-trunk.txt
>
>
> Ending up with a keyspace of RF=1 is unfortunately pretty easy in C* right 
> now - the system_auth table for example is created with RF=1 (to take into 
> account single node setups afaict from CASSANDRA-5112), and a user can 
> further create a keyspace with RF=1 posing availability and streaming risks 
> (e.g. rebuild).
> I propose we add two configuration options in cassandra.yaml:
>  # {{default_keyspace_rf}} (default: 1) - If replication factors are not 
> specified, use this number.
>  # {{required_minimum_keyspace_rf}} (default: unset) - Prevent users from 
> creating a keyspace with an RF less than what is configured
> These settings could further be re-used to:
>  * Provide defaults for new keyspaces created with SimpleStrategy or 
> NetworkTopologyStrategy (CASSANDRA-14303)
>  * Make the automatic token [allocation 
> algorithm|https://issues.apache.org/jira/browse/CASSANDRA-13701?focusedCommentId=16095662&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16095662]
>  interface more intuitive allowing easy use of the new token allocation 
> algorithm.
> At the end of the day, if someone really wants to allow RF=1, they simply 
> don’t set the setting. For backwards compatibility the default remains 1 and 
> C* would create with RF=1, and would default to current behavior of allowing 
> any RF on keyspaces.




[jira] [Commented] (CASSANDRA-14557) Consider adding default and required keyspace replication options

2018-07-23 Thread Joseph Lynch (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16553108#comment-16553108
 ] 

Joseph Lynch commented on CASSANDRA-14557:
--

[~KurtG], I believe the current state of the patch is to provide the options and implement {{SimpleStrategy}}; I believe we need CASSANDRA-14303 to add support for {{NetworkTopologyStrategy}}. For CASSANDRA-14303, I think the default would apply only if no datacenters were given in the declaration at all, e.g. {{{'class': 'NetworkTopologyStrategy'}}} would be equivalent to {{{'class': 'NetworkTopologyStrategy', 'default_datacenter_replication': 3}}}. The add-a-datacenter flow is covered in CASSANDRA-14303, but briefly, it relies on the auto-expansion happening only at {{CREATE}} or {{ALTER}} time. If you don't supply {{default_datacenter_replication}} but do supply explicit datacenters, everything works the same as in the past; if you do supply it, then any datacenters that exist but are not specified in the {{CREATE/ALTER}} statement get that default value added. The minimum validation catches e.g. {{{'class': 'NetworkTopologyStrategy', 'default_datacenter_replication': 1}}}.

Removing a datacenter is slightly trickier because of the minimum RF validation (to remove a datacenter in the presence of {{default_datacenter_replication}} you have to provide the excluded datacenter with RF=0), but I think it will still work because the validation happens after we template out the replication map.
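
For illustration, here is a sketch of the expand-then-validate ordering described above, treating the replication options as a plain map. Only the {{default_datacenter_replication}} option name comes from this discussion; the class names and the assumption that the {{'class'}} entry is handled elsewhere are mine:

{code}
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Illustrative only. Assumes the replication 'class' entry has already been
// stripped, so the map holds per-datacenter RFs plus (optionally) the default.
final class ReplicationDefaults
{
    static final String DEFAULT_DC_RF = "default_datacenter_replication";

    // Expansion happens at CREATE/ALTER time: every known datacenter that was
    // not named explicitly gets the default RF, then the default entry itself
    // is dropped so only concrete per-DC settings remain on the keyspace.
    static Map<String, String> expand(Map<String, String> options, Set<String> knownDatacenters)
    {
        Map<String, String> expanded = new HashMap<>(options);
        String defaultRf = expanded.remove(DEFAULT_DC_RF);
        if (defaultRf != null)
            for (String dc : knownDatacenters)
                expanded.putIfAbsent(dc, defaultRf);
        return expanded;
    }

    // Validation runs after expansion, so an explicit RF=0 (the
    // remove-a-datacenter case) is still visible and can be special-cased.
    static void validateMinimumRf(Map<String, String> perDcRf, int requiredMinimumRf)
    {
        for (Map.Entry<String, String> e : perDcRf.entrySet())
        {
            int rf = Integer.parseInt(e.getValue());
            if (rf != 0 && rf < requiredMinimumRf)
                throw new IllegalArgumentException("RF " + rf + " for datacenter " + e.getKey()
                                                   + " is below the required minimum " + requiredMinimumRf);
        }
    }
}
{code}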

 

> Consider adding default and required keyspace replication options
> -
>
> Key: CASSANDRA-14557
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14557
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Sumanth Pasupuleti
>Assignee: Sumanth Pasupuleti
>Priority: Minor
>  Labels: 4.0-feature-freeze-review-requested
> Fix For: 4.0
>
> Attachments: 14557-trunk.txt
>
>
> Ending up with a keyspace of RF=1 is unfortunately pretty easy in C* right 
> now - the system_auth table for example is created with RF=1 (to take into 
> account single node setups afaict from CASSANDRA-5112), and a user can 
> further create a keyspace with RF=1 posing availability and streaming risks 
> (e.g. rebuild).
> I propose we add two configuration options in cassandra.yaml:
>  # {{default_keyspace_rf}} (default: 1) - If replication factors are not 
> specified, use this number.
>  # {{required_minimum_keyspace_rf}} (default: unset) - Prevent users from 
> creating a keyspace with an RF less than what is configured
> These settings could further be re-used to:
>  * Provide defaults for new keyspaces created with SimpleStrategy or 
> NetworkTopologyStrategy (CASSANDRA-14303)
>  * Make the automatic token [allocation 
> algorithm|https://issues.apache.org/jira/browse/CASSANDRA-13701?focusedCommentId=16095662&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16095662]
>  interface more intuitive allowing easy use of the new token allocation 
> algorithm.
> At the end of the day, if someone really wants to allow RF=1, they simply 
> don’t set the setting. For backwards compatibility the default remains 1 and 
> C* would create with RF=1, and would default to current behavior of allowing 
> any RF on keyspaces.






[jira] [Commented] (CASSANDRA-14556) Optimize streaming path in Cassandra

2018-07-23 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16553033#comment-16553033
 ] 

ASF GitHub Bot commented on CASSANDRA-14556:


Github user iamaleksey commented on a diff in the pull request:

https://github.com/apache/cassandra/pull/239#discussion_r204458523
  
--- Diff: src/java/org/apache/cassandra/config/DatabaseDescriptor.java ---
@@ -2260,6 +2260,20 @@ public static int getStreamingConnectionsPerHost()
         return conf.streaming_connections_per_host;
     }
 
+    public static boolean isFullSSTableTransfersEnabled()
+    {
+        if (conf.server_encryption_options.enabled || conf.server_encryption_options.optional)
+        {
+            logger.debug("Internode encryption enabled. Disabling zero copy SSTable transfers for streaming.");
--- End diff --

Nobody will ever see this at `debug` level. We should at minimum `warn` if 
`streaming_zerocopy_sstables_enabled` and internode encryption are both 
enabled at the same time.


> Optimize streaming path in Cassandra
> 
>
> Key: CASSANDRA-14556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14556
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Major
>  Labels: Performance
> Fix For: 4.x
>
>
> During streaming, Cassandra reifies the sstables into objects. This creates 
> unnecessary garbage and slows down the whole streaming process as some 
> sstables can be transferred as a whole file rather than individual 
> partitions. The objective of the ticket is to detect when a whole sstable can 
> be transferred and skip the object reification. We can also use a zero-copy 
> path to avoid bringing data into user-space on both sending and receiving 
> side.






[jira] [Commented] (CASSANDRA-14556) Optimize streaming path in Cassandra

2018-07-23 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16553030#comment-16553030
 ] 

ASF GitHub Bot commented on CASSANDRA-14556:


Github user iamaleksey commented on a diff in the pull request:

https://github.com/apache/cassandra/pull/239#discussion_r204457225
  
--- Diff: src/java/org/apache/cassandra/db/streaming/CassandraBlockStreamReader.java ---
@@ -0,0 +1,184 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.cassandra.db.streaming;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.List;
+import java.util.Set;
+
+import com.google.common.base.Throwables;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.cassandra.db.ColumnFamilyStore;
+import org.apache.cassandra.db.DecoratedKey;
+import org.apache.cassandra.db.Directories;
+import org.apache.cassandra.db.SerializationHeader;
+import org.apache.cassandra.db.lifecycle.LifecycleTransaction;
+import org.apache.cassandra.io.sstable.Component;
+import org.apache.cassandra.io.sstable.Descriptor;
+import org.apache.cassandra.io.sstable.SSTableMultiWriter;
+import org.apache.cassandra.io.sstable.format.SSTableFormat;
+import org.apache.cassandra.io.sstable.format.Version;
+import org.apache.cassandra.io.sstable.format.big.BigTableBlockWriter;
+import org.apache.cassandra.io.util.DataInputPlus;
+import org.apache.cassandra.schema.TableId;
+import org.apache.cassandra.streaming.ProgressInfo;
+import org.apache.cassandra.streaming.StreamReceiver;
+import org.apache.cassandra.streaming.StreamSession;
+import org.apache.cassandra.streaming.messages.StreamMessageHeader;
+import org.apache.cassandra.utils.Collectors3;
+import org.apache.cassandra.utils.FBUtilities;
+
+/**
+ * CassandraBlockStreamReader reads SSTable off the wire and writes it to disk.
+ */
+public class CassandraBlockStreamReader implements IStreamReader
+{
+    private static final Logger logger = LoggerFactory.getLogger(CassandraBlockStreamReader.class);
+    protected final TableId tableId;
+    protected final StreamSession session;
+    protected final int sstableLevel;
--- End diff --

It has taken me some time (and @krummas's help) to prove that this wasn't a 
correctness issue, but at best this is confusing/misleading code.

We extract `sstableLevel` from the header, but don't use it anywhere. 
Instead, since we stream `StatsMetadata` directly, we also inherit the level 
from there - regardless of whether `CassandraOutgoingStream.keepSSTableLevel` 
is set to `true`. If `LeveledManifest.canAddSSTable` check didn't exist, we'd 
be in trouble here. For clarity, I would probably look at that flag, and 
explicitly reset the level to `L0` if `keepSSTableLevel` is set to `false`.
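
Something like the following, with hypothetical names, is all that is being suggested:

{code}
// Hypothetical sketch, not the actual patch: make the received level explicit
// instead of relying on whatever level happens to arrive in StatsMetadata.
final class StreamedLevel
{
    static int effectiveSSTableLevel(boolean keepSSTableLevel, int streamedLevel)
    {
        return keepSSTableLevel ? streamedLevel : 0; // reset to L0 when levels are not preserved
    }
}
{code}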

P.S. What's the deal with all these `protected` fields?


> Optimize streaming path in Cassandra
> 
>
> Key: CASSANDRA-14556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14556
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Major
>  Labels: Performance
> Fix For: 4.x
>
>
> During streaming, Cassandra reifies the sstables into objects. This creates 
> unnecessary garbage and slows down the whole streaming process as some 
> sstables can be transferred as a whole file rather than individual 
> partitions. The objective of the ticket is to detect when a whole sstable can 
> be transferred and skip the object reification. We can also use a zero-copy 
> path to avoid bringing data into user-space on both sending and receiving 
> side.




[jira] [Commented] (CASSANDRA-14495) Memory Leak /High Memory usage post 3.11.2 upgrade

2018-07-23 Thread Abdul Patel (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16553017#comment-16553017
 ] 

Abdul Patel commented on CASSANDRA-14495:
-

No GC logs, and we have to reboot the cluster every 2 weeks. Has anyone else faced 
this issue? Is it better to wait for 3.11.3?

> Memory Leak /High Memory usage post 3.11.2 upgrade
> --
>
> Key: CASSANDRA-14495
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14495
> Project: Cassandra
>  Issue Type: Bug
>  Components: Metrics
>Reporter: Abdul Patel
>Priority: Major
> Attachments: cas_heap.txt
>
>
> Hi All,
>  
> I recently upgraded my non-prod Cassandra cluster (4 nodes, single DC) from 
> 3.10 to 3.11.2.
> No issues were reported, apart from nodetool info showing 80% heap usage.
> I initially had 16GB memory on each node; later I bumped it up to 20GB and 
> rebooted all nodes.
> I waited a week and now again I see memory usage of more than 80%, 16GB+.
> This means some memory leak is happening over time.
> Has anyone faced such an issue, or do we have any workaround? My 3.11.2 
> upgrade rollout has been halted because of this bug.
> ===
> ID : 65b64f5a-7fe6-4036-94c8-8da9c57718cc
> Gossip active  : true
> Thrift active  : true
> Native Transport active: true
> Load   : 985.24 MiB
> Generation No  : 1526923117
> Uptime (seconds)   : 1097684
> Heap Memory (MB)   : 16875.64 / 20480.00
> Off Heap Memory (MB)   : 20.42
> Data Center    : DC7
> Rack   : rac1
> Exceptions : 0
> Key Cache  : entries 3569, size 421.44 KiB, capacity 100 MiB, 
> 7931933 hits, 8098632 requests, 0.979 recent hit rate, 14400 save period in 
> seconds
> Row Cache  : entries 0, size 0 bytes, capacity 0 bytes, 0 hits, 0 
> requests, NaN recent hit rate, 0 save period in seconds
> Counter Cache  : entries 0, size 0 bytes, capacity 50 MiB, 0 hits, 0 
> requests, NaN recent hit rate, 7200 save period in seconds
> Chunk Cache    : entries 2361, size 147.56 MiB, capacity 3.97 GiB, 
> 2412803 misses, 72594047 requests, 0.967 recent hit rate, NaN microseconds 
> miss latency
> Percent Repaired   : 99.88086234106282%
> Token  : (invoke with -T/--tokens to see all 256 tokens)






[jira] [Commented] (CASSANDRA-14556) Optimize streaming path in Cassandra

2018-07-23 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16552971#comment-16552971
 ] 

ASF GitHub Bot commented on CASSANDRA-14556:


Github user iamaleksey commented on a diff in the pull request:

https://github.com/apache/cassandra/pull/239#discussion_r204437695
  
--- Diff: src/java/org/apache/cassandra/db/streaming/CassandraOutgoingFile.java ---
@@ -114,13 +155,51 @@ public void write(StreamSession session, DataOutputStreamPlus out, int version)
         CassandraStreamHeader.serializer.serialize(header, out, version);
         out.flush();
 
-        CassandraStreamWriter writer = header.compressionInfo == null ?
-                                       new CassandraStreamWriter(sstable, header.sections, session) :
-                                       new CompressedCassandraStreamWriter(sstable, header.sections,
-                                                                           header.compressionInfo, session);
+        IStreamWriter writer;
+        if (shouldStreamFullSSTable())
+        {
+            writer = new CassandraBlockStreamWriter(sstable, session, components);
+        }
+        else
+        {
+            writer = (header.compressionInfo == null) ?
+                     new CassandraStreamWriter(sstable, header.sections, session) :
+                     new CompressedCassandraStreamWriter(sstable, header.sections,
+                                                         header.compressionInfo, session);
+        }
         writer.write(out);
     }
 
+    @VisibleForTesting
+    public boolean shouldStreamFullSSTable()
+    {
+        return isFullSSTableTransfersEnabled && isFullyContained;
+    }
+
+    @VisibleForTesting
+    public boolean fullyContainedIn(List<Range<Token>> normalizedRanges, SSTableReader sstable)
+    {
+        if (normalizedRanges == null)
+            return false;
+
+        RangeOwnHelper rangeOwnHelper = new RangeOwnHelper(normalizedRanges);
+        try (KeyIterator iter = new KeyIterator(sstable.descriptor, sstable.metadata()))
+        {
+            while (iter.hasNext())
+            {
+                DecoratedKey key = iter.next();
+                try
+                {
+                    rangeOwnHelper.check(key);
+                } catch(RuntimeException e)
--- End diff --

Catching `RuntimeException` is really not the way we should be using 
`RangeOwnHelper` here. Can you refactor `RangeOwnHelper` to introduce a method 
that would return a `boolean` instead?
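
For instance, a boolean-returning variant could look like the sketch below; this is a simplified stand-in, and the {{isOwned}} name plus the internals are assumptions rather than the existing {{RangeOwnHelper}} API:

{code}
import java.util.List;

// Simplified stand-in to illustrate the suggested refactoring: expose a
// boolean-returning check so callers do not have to catch RuntimeException
// for the expected "key is not owned" case.
final class RangeOwnHelperSketch
{
    private final List<long[]> normalizedRanges; // each entry: {left, right}, assumed sorted and non-overlapping

    RangeOwnHelperSketch(List<long[]> normalizedRanges)
    {
        this.normalizedRanges = normalizedRanges;
    }

    // Existing style: throws when the token falls outside the owned ranges.
    void check(long token)
    {
        if (!isOwned(token))
            throw new IllegalStateException("Token " + token + " is outside the owned ranges");
    }

    // Suggested addition: the same walk, but reporting the result instead of throwing.
    boolean isOwned(long token)
    {
        for (long[] range : normalizedRanges)
            if (range[0] < token && token <= range[1])
                return true;
        return false;
    }
}
{code}

The caller in {{fullyContainedIn}} could then return {{false}} on the first key that is not owned, instead of catching {{RuntimeException}}.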


> Optimize streaming path in Cassandra
> 
>
> Key: CASSANDRA-14556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14556
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Major
>  Labels: Performance
> Fix For: 4.x
>
>
> During streaming, Cassandra reifies the sstables into objects. This creates 
> unnecessary garbage and slows down the whole streaming process as some 
> sstables can be transferred as a whole file rather than individual 
> partitions. The objective of the ticket is to detect when a whole sstable can 
> be transferred and skip the object reification. We can also use a zero-copy 
> path to avoid bringing data into user-space on both sending and receiving 
> side.






[jira] [Commented] (CASSANDRA-14289) Document sstable tools

2018-07-23 Thread Hannu Kröger (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16552962#comment-16552962
 ] 

Hannu Kröger commented on CASSANDRA-14289:
--

Just my 2 eurocents: I think it would be best to update the sstable documentation 
in the tools themselves as well, but ultimately I hope the documentation on the 
site will have a more detailed description of everything you can do with each 
tool, the considerations involved, and when you should or can run it. That level 
of documentation might be a bit much if you just run "sstabletool --help".

> Document sstable tools
> --
>
> Key: CASSANDRA-14289
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14289
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Documentation and Website
>Reporter: Hannu Kröger
>Priority: Major
> Attachments: gen-sstable-docs.py, sstabledocs.tar.gz
>
>
> Following tools are missing in the documentation of cassandra tools on the 
> documentation site (http://cassandra.apache.org/doc/latest/tools/index.html):
>  * sstabledump
>  * sstableexpiredblockers
>  * sstablelevelreset
>  * sstableloader
>  * sstablemetadata
>  * sstableofflinerelevel
>  * sstablerepairedset
>  * sstablescrub
>  * sstablesplit
>  * sstableupgrade
>  * sstableutil
>  * sstableverify






[jira] [Commented] (CASSANDRA-9608) Support Java 11

2018-07-23 Thread Jason Brown (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16552956#comment-16552956
 ] 

Jason Brown commented on CASSANDRA-9608:


bq. The best compromise of approaches here is probably the ReentrantLock 
approach, as it is simplest and has the best behavioural profile, with the only 
cost being a modest increase in heap for contended objects (which are typically 
heavy already, else the chance of contention is low)

I agree. And thanks for your input here, [~benedict]. I appreciate having 
another PoV on this subject.

[~snazy] I've been [running this branch on 
circleci|https://circleci.com/gh/jasobrown/workflows/cassandra/tree/9608-circleci],
 and the dtests have a lot of failures (~120 or so). Can you take a look? Not 
sure if it's an environmental problem or what (haven't had a chance to dig in 
yet).

> Support Java 11
> ---
>
> Key: CASSANDRA-9608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9608
> Project: Cassandra
>  Issue Type: Task
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 4.x
>
> Attachments: jdk_9_10.patch
>
>
> This ticket is intended to group all issues found to support Java 9 in the 
> future.
> From what I've found out so far:
> * Maven dependency {{com.sun:tools:jar:0}} via cobertura cannot be resolved. 
> It can be easily solved using this patch:
> {code}
> - artifactId="cobertura"/>
> + artifactId="cobertura">
> +  
> +
> {code}
> * Another issue is that {{sun.misc.Unsafe}} no longer contains the methods 
> {{monitorEnter}} + {{monitorExit}}. These methods are used by 
> {{o.a.c.utils.concurrent.Locks}} which is only used by 
> {{o.a.c.db.AtomicBTreeColumns}}.
> I don't intend to start working on this yet, since Java 9 is at too early a 
> development phase.






[jira] [Commented] (CASSANDRA-9608) Support Java 11

2018-07-23 Thread Benedict (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16552758#comment-16552758
 ] 

Benedict commented on CASSANDRA-9608:
-

bq. ReentrantLock: Oh, you propose RL to be pre-allocated

Vs

bq. Why do you suppose this to be the case? It only needs to be allocated on 
demand, but then must persist after it has been allocated.

bq. However, I feel like it's beyond the scope of this ticket

Agreed.  Perhaps we should go with a simple solution and then file a separate 
ticket to ensure no regression, as any of the approaches here will regress in 
some way.

The best compromise of approaches here is probably the ReentrantLock approach, 
as it is simplest and has the best behavioural profile, with the only cost 
being a modest increase in heap for contended objects (which are typically 
heavy already, else the chance of contention is low)

The other locking approaches are either more work and inferior to alternative 
(wait or lock free) approaches we could invest the time in, or have worse 
characteristics under contention (ie simple condition)
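
For readers following along, a minimal sketch of what "allocated on demand, but 
then persists after it has been allocated" could look like for the ReentrantLock 
option (class, field and method names here are invented for illustration and are 
not from any patch on this ticket):

{code:java}
import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;
import java.util.concurrent.locks.ReentrantLock;

public abstract class LazyLockBase
{
    // null until the first acquire(); once allocated, the lock is retained
    private volatile ReentrantLock lock;

    private static final AtomicReferenceFieldUpdater<LazyLockBase, ReentrantLock> lockUpdater =
        AtomicReferenceFieldUpdater.newUpdater(LazyLockBase.class, ReentrantLock.class, "lock");

    protected void acquire()
    {
        ReentrantLock l = lock;
        if (l == null)
        {
            ReentrantLock created = new ReentrantLock();
            // only one thread wins the CAS; losers re-read the published lock
            l = lockUpdater.compareAndSet(this, null, created) ? created : lock;
        }
        l.lock();
    }

    protected void release()
    {
        lock.unlock();
    }
}
{code}

The quiescent cost after contention is then just the retained lock object, which 
is the tradeoff described in the table elsewhere in this thread.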

> Support Java 11
> ---
>
> Key: CASSANDRA-9608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9608
> Project: Cassandra
>  Issue Type: Task
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 4.x
>
> Attachments: jdk_9_10.patch
>
>
> This ticket is intended to group all issues found to support Java 9 in the 
> future.
> From what I've found out so far:
> * Maven dependency {{com.sun:tools:jar:0}} via cobertura cannot be resolved. 
> It can be easily solved using this patch:
> {code}
> - artifactId="cobertura"/>
> + artifactId="cobertura">
> +  
> +
> {code}
> * Another issue is that {{sun.misc.Unsafe}} no longer contains the methods 
> {{monitorEnter}} + {{monitorExit}}. These methods are used by 
> {{o.a.c.utils.concurrent.Locks}} which is only used by 
> {{o.a.c.db.AtomicBTreeColumns}}.
> I don't mind to start working on this yet since Java 9 is in a too early 
> development phase.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-9608) Support Java 11

2018-07-23 Thread Jason Brown (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16552754#comment-16552754
 ] 

Jason Brown commented on CASSANDRA-9608:


[~benedict] Where do the Inflating Locks in your table come from? Are those 
what you are thinking about when you earlier mentioned "a property using a 
special inflated lock object, that can be used for synchronisation until there 
is no contention, and the last owning thread sets the property to null on 
completion." Or something else all together?

> Support Java 11
> ---
>
> Key: CASSANDRA-9608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9608
> Project: Cassandra
>  Issue Type: Task
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 4.x
>
> Attachments: jdk_9_10.patch
>
>
> This ticket is intended to group all issues found to support Java 9 in the 
> future.
> From what I've found out so far:
> * Maven dependency {{com.sun:tools:jar:0}} via cobertura cannot be resolved. 
> It can be easily solved using this patch:
> {code}
> - artifactId="cobertura"/>
> + artifactId="cobertura">
> +  
> +
> {code}
> * Another issue is that {{sun.misc.Unsafe}} no longer contains the methods 
> {{monitorEnter}} + {{monitorExit}}. These methods are used by 
> {{o.a.c.utils.concurrent.Locks}} which is only used by 
> {{o.a.c.db.AtomicBTreeColumns}}.
> I don't mind to start working on this yet since Java 9 is in a too early 
> development phase.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-9608) Support Java 11

2018-07-23 Thread Jason Brown (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16552736#comment-16552736
 ] 

Jason Brown commented on CASSANDRA-9608:


bq. a stack of waiting mutations that can be merged on read or next write. 
Everybody makes progress.

That is an interesting idea, and I'd be interested in hearing more (and/or 
having a proposal/discussion on the dev@ ML). However, I feel like it's beyond 
the scope of this ticket. Going to try to absorb your previous comment now (the 
one with that table).

> Support Java 11
> ---
>
> Key: CASSANDRA-9608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9608
> Project: Cassandra
>  Issue Type: Task
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 4.x
>
> Attachments: jdk_9_10.patch
>
>
> This ticket is intended to group all issues found to support Java 9 in the 
> future.
> From what I've found out so far:
> * Maven dependency {{com.sun:tools:jar:0}} via cobertura cannot be resolved. 
> It can be easily solved using this patch:
> {code}
> - artifactId="cobertura"/>
> + artifactId="cobertura">
> +  
> +
> {code}
> * Another issue is that {{sun.misc.Unsafe}} no longer contains the methods 
> {{monitorEnter}} + {{monitorExit}}. These methods are used by 
> {{o.a.c.utils.concurrent.Locks}} which is only used by 
> {{o.a.c.db.AtomicBTreeColumns}}.
> I don't mind to start working on this yet since Java 9 is in a too early 
> development phase.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-9608) Support Java 11

2018-07-23 Thread Robert Stupp (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16552735#comment-16552735
 ] 

Robert Stupp commented on CASSANDRA-9608:
-

ReentrantLock: Oh, you propose RL to be pre-allocated. Well, I'm actually not a 
big fan of pre-allocating two more objects for the case where we don't need 
pessimistic locking.

> Support Java 11
> ---
>
> Key: CASSANDRA-9608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9608
> Project: Cassandra
>  Issue Type: Task
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 4.x
>
> Attachments: jdk_9_10.patch
>
>
> This ticket is intended to group all issues found to support Java 9 in the 
> future.
> From what I've found out so far:
> * Maven dependency {{com.sun:tools:jar:0}} via cobertura cannot be resolved. 
> It can be easily solved using this patch:
> {code}
> - artifactId="cobertura"/>
> + artifactId="cobertura">
> +  
> +
> {code}
> * Another issue is that {{sun.misc.Unsafe}} no longer contains the methods 
> {{monitorEnter}} + {{monitorExit}}. These methods are used by 
> {{o.a.c.utils.concurrent.Locks}} which is only used by 
> {{o.a.c.db.AtomicBTreeColumns}}.
> I don't mind to start working on this yet since Java 9 is in a too early 
> development phase.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-13701) Lower default num_tokens

2018-07-23 Thread Kurt Greaves (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-13701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16552732#comment-16552732
 ] 

Kurt Greaves commented on CASSANDRA-13701:
--

Possibly, but it's doubling up on meanings for a single property. I can see a 
case where your default is not necessarily the same RF as your biggest keyspace 
(which is what you want to point the allocation algorithm at), and it's not 
exactly clear how it would work with multi-DC setups and different RFs per DC. I 
think DataStax did it right by moving to specifying the RF, including multi-DC, 
in the yaml.

> Lower default num_tokens
> 
>
> Key: CASSANDRA-13701
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13701
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Chris Lohfink
>Priority: Minor
>
> For reasons highlighted in CASSANDRA-7032, the high number of vnodes is not 
> necessary. It is very expensive for operational processes and scanning. It's 
> come up a lot, and it's now pretty standard and well known within the community 
> to always reduce num_tokens. We should just lower the default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14557) Consider adding default and required keyspace replication options

2018-07-23 Thread Kurt Greaves (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16552725#comment-16552725
 ] 

Kurt Greaves commented on CASSANDRA-14557:
--

How does this work when adding a new DC? Will it be the default RF in each DC?

> Consider adding default and required keyspace replication options
> -
>
> Key: CASSANDRA-14557
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14557
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Sumanth Pasupuleti
>Assignee: Sumanth Pasupuleti
>Priority: Minor
>  Labels: 4.0-feature-freeze-review-requested
> Fix For: 4.0
>
> Attachments: 14557-trunk.txt
>
>
> Ending up with a keyspace of RF=1 is unfortunately pretty easy in C* right 
> now - the system_auth keyspace, for example, is created with RF=1 (to account 
> for single-node setups, afaict from CASSANDRA-5112), and a user can further 
> create a keyspace with RF=1, posing availability and streaming risks 
> (e.g. rebuild).
> I propose we add two configuration options in cassandra.yaml:
>  # {{default_keyspace_rf}} (default: 1) - If replication factors are not 
> specified, use this number.
>  # {{required_minimum_keyspace_rf}} (default: unset) - Prevent users from 
> creating a keyspace with an RF less than what is configured
> These settings could further be re-used to:
>  * Provide defaults for new keyspaces created with SimpleStrategy or 
> NetworkTopologyStrategy (CASSANDRA-14303)
>  * Make the automatic token [allocation 
> algorithm|https://issues.apache.org/jira/browse/CASSANDRA-13701?focusedCommentId=16095662&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16095662]
>  interface more intuitive allowing easy use of the new token allocation 
> algorithm.
> At the end of the day, if someone really wants to allow RF=1, they simply 
> don’t set the setting. For backwards compatibility the default remains 1, so 
> C* would still create keyspaces with RF=1 and would keep the current behavior 
> of allowing any RF on keyspaces.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-9608) Support Java 11

2018-07-23 Thread Benedict (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16552722#comment-16552722
 ] 

Benedict commented on CASSANDRA-9608:
-

Stepping back for a moment, there’s also another option that I proposed some 
time ago that avoids parking altogether, and is probably preferable - a stack 
of waiting mutations that can be merged on read or next write.  Everybody makes 
progress.
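
As a purely illustrative aside (not a proposal from this ticket, and all names 
invented), the general shape of such a merge stack could be a lock-free push 
with a drain on the next read or write:

{code:java}
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Consumer;

// Illustrative only: writers push pending updates without blocking; the next
// reader or writer drains and merges everything that accumulated before it.
final class MergeStack<T>
{
    private static final class Node<T>
    {
        final T value;
        final Node<T> next;
        Node(T value, Node<T> next) { this.value = value; this.next = next; }
    }

    private final AtomicReference<Node<T>> head = new AtomicReference<>();

    // Lock-free push: every writer makes progress, nobody parks.
    void push(T pending)
    {
        Node<T> h;
        do
        {
            h = head.get();
        }
        while (!head.compareAndSet(h, new Node<>(pending, h)));
    }

    // Drain everything pushed so far (newest first in this sketch) and merge it,
    // e.g. on read or on the next write; real code would care about ordering.
    void drainAndMerge(Consumer<T> merge)
    {
        for (Node<T> n = head.getAndSet(null); n != null; n = n.next)
            merge.accept(n.value);
    }
}
{code}

The point of the pattern is that writers never park: they publish their pending 
update and move on, leaving the merge to whichever thread touches the partition 
next.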

> Support Java 11
> ---
>
> Key: CASSANDRA-9608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9608
> Project: Cassandra
>  Issue Type: Task
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 4.x
>
> Attachments: jdk_9_10.patch
>
>
> This ticket is intended to group all issues found to support Java 9 in the 
> future.
> From what I've found out so far:
> * Maven dependency {{com.sun:tools:jar:0}} via cobertura cannot be resolved. 
> It can be easily solved using this patch:
> {code}
> - artifactId="cobertura"/>
> + artifactId="cobertura">
> +  
> +
> {code}
> * Another issue is that {{sun.misc.Unsafe}} no longer contains the methods 
> {{monitorEnter}} + {{monitorExit}}. These methods are used by 
> {{o.a.c.utils.concurrent.Locks}} which is only used by 
> {{o.a.c.db.AtomicBTreeColumns}}.
> I don't mind to start working on this yet since Java 9 is in a too early 
> development phase.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-9608) Support Java 11

2018-07-23 Thread Jason Brown (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16552717#comment-16552717
 ] 

Jason Brown commented on CASSANDRA-9608:


Leaving aside the java 11 {{AtomicBTreePartition}} for a moment, I've completed 
a second pass at reviewing the entire patch. I've added a few minor comments, 
but on the whole my review is complete. Need to rerun circleci for tests and 
resolve the {{AtomicBTreePartition}} before blessing the patch as a whole.

> Support Java 11
> ---
>
> Key: CASSANDRA-9608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9608
> Project: Cassandra
>  Issue Type: Task
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 4.x
>
> Attachments: jdk_9_10.patch
>
>
> This ticket is intended to group all issues found to support Java 9 in the 
> future.
> From what I've found out so far:
> * Maven dependency {{com.sun:tools:jar:0}} via cobertura cannot be resolved. 
> It can be easily solved using this patch:
> {code}
> - artifactId="cobertura"/>
> + artifactId="cobertura">
> +  
> +
> {code}
> * Another issue is that {{sun.misc.Unsafe}} no longer contains the methods 
> {{monitorEnter}} + {{monitorExit}}. These methods are used by 
> {{o.a.c.utils.concurrent.Locks}} which is only used by 
> {{o.a.c.db.AtomicBTreeColumns}}.
> I don't mind to start working on this yet since Java 9 is in a too early 
> development phase.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-14325) Java executable check succeeds despite no java on PATH

2018-07-23 Thread Kurt Greaves (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Greaves updated CASSANDRA-14325:
-
Assignee: Angelo Polo
  Status: Patch Available  (was: Open)

> Java executable check succeeds despite no java on PATH
> --
>
> Key: CASSANDRA-14325
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14325
> Project: Cassandra
>  Issue Type: Bug
>  Components: Lifecycle
>Reporter: Angelo Polo
>Assignee: Angelo Polo
>Priority: Minor
> Attachments: bin_cassandra.patch
>
>
> The check -z $JAVA on line 102 of bin/cassandra currently always succeeds if 
> JAVA_HOME is not set since in this case JAVA gets set directly to 'java'. The 
> error message "_Unable to find java executable. Check JAVA_HOME and PATH 
> environment variables._" will never be echoed on a PATH misconfiguration. If 
> java isn't on the PATH the failure will instead occur on line 95 of 
> cassandra-env.sh at the java version check.
> It would be better to check consistently for the java executable in one place 
> in bin/cassandra. Also we don't want users to mistakenly think they have a 
> java version problem when they in fact have a PATH problem.
> See proposed patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14328) Invalid metadata has been detected for role

2018-07-23 Thread Kurt Greaves (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16552714#comment-16552714
 ] 

Kurt Greaves commented on CASSANDRA-14328:
--

[~prnvjndl] If this issue is still occurring can you send the output of:
{code}
select * from system_auth.roles where role = 'utorjwcnruzzlzafxffgyqmlvkxiqcgb'
{code}

You can remove any credentials if you see fit.
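
For context, the {{NullPointerException}} in the trace comes from reading a 
boolean flag that is null or missing in the returned roles row. A tiny, purely 
hypothetical illustration of the failure mode (the column name and row shape 
here are assumptions for illustration, not Cassandra's actual code):

{code:java}
import java.util.HashMap;
import java.util.Map;

// Purely illustrative: a null value where a boolean flag is expected is what
// turns into the NullPointerException inside getBoolean() in the trace above.
final class RoleFlagExample
{
    // Column name "can_login" is an assumption for this sketch.
    static boolean canLogin(Map<String, Object> roleRow)
    {
        Object flag = roleRow.get("can_login");
        return flag != null && (Boolean) flag; // treat a missing flag as "cannot log in"
    }

    public static void main(String[] args)
    {
        Map<String, Object> brokenRow = new HashMap<>();
        brokenRow.put("role", "utorjwcnruzzlzafxffgyqmlvkxiqcgb"); // flag column missing
        System.out.println(canLogin(brokenRow)); // false instead of an NPE
    }
}
{code}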

> Invalid metadata has been detected for role
> ---
>
> Key: CASSANDRA-14328
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14328
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
>Reporter: Pranav Jindal
>Priority: Major
>
> Cassandra Version : 3.10
> One node was replaced and was successfully up and working but CQL-SH fails 
> with error.
>  
> CQL-SH error:
>  
> {code:java}
> Connection error: ('Unable to connect to any servers', {'10.180.0.150': 
> AuthenticationFailed('Failed to authenticate to 10.180.0.150: Error from 
> server: code= [Server error] message="java.lang.RuntimeException: Invalid 
> metadata has been detected for role utorjwcnruzzlzafxffgyqmlvkxiqcgb"',)})
> {code}
>  
> Cassandra server ERROR:
> {code:java}
> WARN [Native-Transport-Requests-1] 2018-03-20 13:37:17,894 
> CassandraRoleManager.java:96 - An invalid value has been detected in the 
> roles table for role utorjwcnruzzlzafxffgyqmlvkxiqcgb. If you are unable to 
> login, you may need to disable authentication and confirm that values in that 
> table are accurate
> ERROR [Native-Transport-Requests-1] 2018-03-20 13:37:17,895 Message.java:623 
> - Unexpected exception during request; channel = [id: 0xdfc3604f, 
> L:/10.180.0.150:9042 - R:/10.180.0.150:51668]
> java.lang.RuntimeException: Invalid metadata has been detected for role 
> utorjwcnruzzlzafxffgyqmlvkxiqcgb
> at 
> org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:99)
>  ~[apache-cassandra-3.10.jar:3.10]
> at 
> org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:82)
>  ~[apache-cassandra-3.10.jar:3.10]
> at 
> org.apache.cassandra.auth.CassandraRoleManager.getRoleFromTable(CassandraRoleManager.java:528)
>  ~[apache-cassandra-3.10.jar:3.10]
> at 
> org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:503)
>  ~[apache-cassandra-3.10.jar:3.10]
> at 
> org.apache.cassandra.auth.CassandraRoleManager.canLogin(CassandraRoleManager.java:310)
>  ~[apache-cassandra-3.10.jar:3.10]
> at org.apache.cassandra.service.ClientState.login(ClientState.java:271) 
> ~[apache-cassandra-3.10.jar:3.10]
> at 
> org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:80)
>  ~[apache-cassandra-3.10.jar:3.10]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:517)
>  [apache-cassandra-3.10.jar:3.10]
> at 
> org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:410)
>  [apache-cassandra-3.10.jar:3.10]
> at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:366)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:35)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
> at 
> io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:357)
>  [netty-all-4.0.39.Final.jar:4.0.39.Final]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_121]
> at 
> org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162)
>  [apache-cassandra-3.10.jar:3.10]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) 
> [apache-cassandra-3.10.jar:3.10]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
> Caused by: java.lang.NullPointerException: null
> at 
> org.apache.cassandra.cql3.UntypedResultSet$Row.getBoolean(UntypedResultSet.java:273)
>  ~[apache-cassandra-3.10.jar:3.10]
> at 
> org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:88)
>  ~[apache-cassandra-3.10.jar:3.10]
> ... 16 common frames omitted
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-9608) Support Java 11

2018-07-23 Thread Benedict (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16552702#comment-16552702
 ] 

Benedict commented on CASSANDRA-9608:
-

{quote}I was racking my brain to see if there's an alternative to allocating an 
object on-demand
{quote}
Well, it would be possible to implement our own variant of the JDK's internal 
locking, which has a shared pool of locks that are allocated to a given object 
on demand, but this would have to be done in unsafe territory.  This would be 
quite possible too, but it's more involved.  I would be happy to have a crack 
at it, but that's probably several days' work instead of a couple of hours.
{quote}A {{ReentrantLock}} would need to be allocated and used 
"pessimistically" up-front - i.e. for every instance of 
{{AtomicBTreePartition}}, so some non-negligible overhead to what we have now.
{quote}
Why do you suppose this to be the case? It only needs to be allocated on 
demand, but then must persist after it has been allocated. The current 
proposals have the following characteristics (which are approximately true; I 
haven't been absolutely thorough, particularly for ReentrantLock, and have 
fudged some context-specific things):
||Operation||Simple Condition||ReentrantLock||Inflating Lock 1||Inflating Lock 2||
|Quiescent memory overhead (after contention)|0 bytes|60 bytes|0 bytes|32 bytes|
|First Uncontended lock()|1x CAS + 24-byte allocation|60-byte allocation|1x CAS + 32-byte allocation (or 48-byte, depending on choices)|1x CAS + 64-byte allocation (or 80-byte, depending on choices)|
|Future Uncontended lock()|1x CAS + 24-byte allocation|1x CAS|1x CAS + 32-byte allocation|1x CAS + 32-byte allocation|
|First Contended lock()|2x CAS + 112-byte allocation|3x CAS + 32-byte allocation|1x CAS + 32-byte allocation|1x CAS + 32-byte allocation|
|Future Contended lock()|1.5x CAS (only hop tail once per two additions) + 64-byte allocation|3x CAS + 32-byte allocation|1x CAS + 32-byte allocation|1x CAS + 32-byte allocation|
|Release|2x volatile write; 1x volatile write + 1x unpark *_per_* waiting thread; _if > 1 waiting thread: will incur all contended lock() costs again_|2x volatile write; 1x CAS + 1x unpark *_if_* waiting thread|1x volatile write; 1x volatile write + 1x unpark *_if_* waiting thread; 1x CAS if no waiting threads (to deallocate safely)|1x volatile write; 1x volatile write + 1x unpark *_if_* waiting thread|

There are other possible variants with different tradeoffs, for instance 
reducing allocation costs for uncontended lock() in option (2) at a possible 
modest increase in CPU and implementation complexity. Again, I disclaim any 
modest inaccuracies, as I didn't want to spend too long putting this together.
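
For illustration only, the "inflating lock" columns above roughly correspond to 
a pattern like the following sketch (simplified, not reentrant, and not code 
from this ticket): the monitor object exists only while the lock is held, and 
the releasing thread sets the field back to null.

{code:java}
import java.util.concurrent.atomic.AtomicReference;

// Illustrative sketch of an "inflating" lock: a small monitor object is only
// allocated while the lock is held, and the releasing thread nulls it out again.
final class InflatingLock
{
    private final AtomicReference<Object> inflated = new AtomicReference<>();

    void lock() throws InterruptedException
    {
        while (true)
        {
            Object m = inflated.get();
            if (m == null)
            {
                if (inflated.compareAndSet(null, new Object()))
                    return; // uncontended path: one CAS, one small allocation
            }
            else
            {
                synchronized (m)
                {
                    // only wait while this monitor is still the published one
                    if (inflated.get() == m)
                        m.wait();
                }
            }
        }
    }

    void unlock()
    {
        // assumes the caller currently holds the lock
        Object m = inflated.getAndSet(null); // last owner sets the property back to null
        synchronized (m)
        {
            m.notifyAll(); // waiters wake up and retry the CAS
        }
    }
}
{code}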

 

> Support Java 11
> ---
>
> Key: CASSANDRA-9608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9608
> Project: Cassandra
>  Issue Type: Task
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 4.x
>
> Attachments: jdk_9_10.patch
>
>
> This ticket is intended to group all issues found to support Java 9 in the 
> future.
> From what I've found out so far:
> * Maven dependency {{com.sun:tools:jar:0}} via cobertura cannot be resolved. 
> It can be easily solved using this patch:
> {code}
> - artifactId="cobertura"/>
> + artifactId="cobertura">
> +  
> +
> {code}
> * Another issue is that {{sun.misc.Unsafe}} no longer contains the methods 
> {{monitorEnter}} + {{monitorExit}}. These methods are used by 
> {{o.a.c.utils.concurrent.Locks}} which is only used by 
> {{o.a.c.db.AtomicBTreeColumns}}.
> I don't mind to start working on this yet since Java 9 is in a too early 
> development phase.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Reopened] (CASSANDRA-14401) Attempted serializing to buffer exceeded maximum of 65535 bytes

2018-07-23 Thread Kurt Greaves (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Greaves reopened CASSANDRA-14401:
--

I was going to close this but realised it's possibly still a bug in 
Cassandra. 
[~arokhin], do you have an example schema and query string that causes this? 
Also, what plugin are you using, if that applies?
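
For context, the assertion in the description comes from serializing a value 
behind a two-byte (unsigned short) length prefix, which caps it at 65535 bytes; 
the 67661-byte filter value in the trace exceeds that. A simplified sketch of 
the constraint (not the project's actual method):

{code:java}
import java.io.DataOutput;
import java.io.IOException;
import java.nio.ByteBuffer;

// Sketch of the constraint behind the assertion: a two-byte (unsigned short)
// length prefix can only describe values up to 65535 bytes.
final class ShortLengthSketch
{
    static final int MAX_UNSIGNED_SHORT = 0xFFFF; // 65535

    static void writeWithShortLength(ByteBuffer value, DataOutput out) throws IOException
    {
        int length = value.remaining();
        if (length > MAX_UNSIGNED_SHORT)
            throw new AssertionError("Attempted serializing to buffer exceeded maximum of "
                                     + MAX_UNSIGNED_SHORT + " bytes: " + length);
        out.writeShort(length); // only 16 bits available for the length
        byte[] bytes = new byte[length];
        value.duplicate().get(bytes);
        out.write(bytes);
    }
}
{code}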

>  Attempted serializing to buffer exceeded maximum of 65535 bytes
> 
>
> Key: CASSANDRA-14401
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14401
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Artem Rokhin
>Priority: Major
>
> Cassandra version: 3.11.2
> 3-node cluster 
> The following exception appears on all 3 nodes, and after a while the cluster 
> becomes unresponsive. 
>  
> {code}
> java.lang.AssertionError: Attempted serializing to buffer exceeded maximum of 
> 65535 bytes: 67661
>  at 
> org.apache.cassandra.utils.ByteBufferUtil.writeWithShortLength(ByteBufferUtil.java:309)
>  ~[apache-cassandra-3.11.2.jar:3.11.2]
>  at 
> org.apache.cassandra.db.filter.RowFilter$Expression$Serializer.serialize(RowFilter.java:547)
>  ~[apache-cassandra-3.11.2.jar:3.11.2]
>  at 
> org.apache.cassandra.db.filter.RowFilter$Serializer.serialize(RowFilter.java:1143)
>  ~[apache-cassandra-3.11.2.jar:3.11.2]
>  at 
> org.apache.cassandra.db.ReadCommand$Serializer.serialize(ReadCommand.java:726)
>  ~[apache-cassandra-3.11.2.jar:3.11.2]
>  at 
> org.apache.cassandra.db.ReadCommand$Serializer.serialize(ReadCommand.java:683)
>  ~[apache-cassandra-3.11.2.jar:3.11.2]
>  at 
> org.apache.cassandra.io.ForwardingVersionedSerializer.serialize(ForwardingVersionedSerializer.java:45)
>  ~[apache-cassandra-3.11.2.jar:3.11.2]
>  at org.apache.cassandra.net.MessageOut.serialize(MessageOut.java:120) 
> ~[apache-cassandra-3.11.2.jar:3.11.2]
>  at 
> org.apache.cassandra.net.OutboundTcpConnection.writeInternal(OutboundTcpConnection.java:385)
>  [apache-cassandra-3.11.2.jar:3.11.2]
>  at 
> org.apache.cassandra.net.OutboundTcpConnection.writeConnected(OutboundTcpConnection.java:337)
>  [apache-cassandra-3.11.2.jar:3.11.2]
>  at 
> org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:263)
>  [apache-cassandra-3.11.2.jar:3.11.2]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Resolved] (CASSANDRA-14401) Attempted serializing to buffer exceeded maximum of 65535 bytes

2018-07-23 Thread Kurt Greaves (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Greaves resolved CASSANDRA-14401.
--
Resolution: Invalid

>  Attempted serializing to buffer exceeded maximum of 65535 bytes
> 
>
> Key: CASSANDRA-14401
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14401
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Artem Rokhin
>Priority: Major
>
> Cassandra version: 3.11.2
> 3-node cluster 
> The following exception appears on all 3 nodes, and after a while the cluster 
> becomes unresponsive. 
>  
> {code}
> java.lang.AssertionError: Attempted serializing to buffer exceeded maximum of 
> 65535 bytes: 67661
>  at 
> org.apache.cassandra.utils.ByteBufferUtil.writeWithShortLength(ByteBufferUtil.java:309)
>  ~[apache-cassandra-3.11.2.jar:3.11.2]
>  at 
> org.apache.cassandra.db.filter.RowFilter$Expression$Serializer.serialize(RowFilter.java:547)
>  ~[apache-cassandra-3.11.2.jar:3.11.2]
>  at 
> org.apache.cassandra.db.filter.RowFilter$Serializer.serialize(RowFilter.java:1143)
>  ~[apache-cassandra-3.11.2.jar:3.11.2]
>  at 
> org.apache.cassandra.db.ReadCommand$Serializer.serialize(ReadCommand.java:726)
>  ~[apache-cassandra-3.11.2.jar:3.11.2]
>  at 
> org.apache.cassandra.db.ReadCommand$Serializer.serialize(ReadCommand.java:683)
>  ~[apache-cassandra-3.11.2.jar:3.11.2]
>  at 
> org.apache.cassandra.io.ForwardingVersionedSerializer.serialize(ForwardingVersionedSerializer.java:45)
>  ~[apache-cassandra-3.11.2.jar:3.11.2]
>  at org.apache.cassandra.net.MessageOut.serialize(MessageOut.java:120) 
> ~[apache-cassandra-3.11.2.jar:3.11.2]
>  at 
> org.apache.cassandra.net.OutboundTcpConnection.writeInternal(OutboundTcpConnection.java:385)
>  [apache-cassandra-3.11.2.jar:3.11.2]
>  at 
> org.apache.cassandra.net.OutboundTcpConnection.writeConnected(OutboundTcpConnection.java:337)
>  [apache-cassandra-3.11.2.jar:3.11.2]
>  at 
> org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:263)
>  [apache-cassandra-3.11.2.jar:3.11.2]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Resolved] (CASSANDRA-14424) Gossip EchoMessages not being handled somewhere after node restart

2018-07-23 Thread Kurt Greaves (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Greaves resolved CASSANDRA-14424.
--
Resolution: Duplicate

> Gossip EchoMessages not being handled somewhere after node restart
> --
>
> Key: CASSANDRA-14424
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14424
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra 3.11.2 - brand new ring - 18 nodes.
> ubuntu 16.04
> AWS - cross AZ, with GossipingPropertyFileSnitch setting the rack to the AZs.
>Reporter: Jason Harvey
>Priority: Major
> Fix For: 3.11.x, 4.x
>
>
> Noticing this behaviour on a brand new 3.11.2 ring:
>  # Restart a random node in the ring.
>  # When that node comes back up, around 30% of the time it sees a single 
> other node down. No other node in the ring sees that node is down.
>  # After 10-20 minutes, the DOWN node suddenly appears UP to the restarted 
> node.
>  
> After digging through tracing logs, here's what I know:
>  
> The node seen as DOWN has not gone down, but simply hasn't been seen as UP 
> yet. The restarted node is attempting to `markAlive()` the target node. 
> Relevant logs from the restarted node's POV:
>  
> {{INFO [GossipStage:1] 2018-04-27 14:03:50,950 Gossiper.java:1053 - Node 
> /10.0.225.147 has restarted, now UP}}
>  {{INFO [GossipStage:1] 2018-04-27 14:03:50,969 StorageService.java:2292 - 
> Node /10.0.225.147 state jump to NORMAL}}
>  {{INFO [HANDSHAKE-/10.0.225.147] 2018-04-27 14:03:50,976 
> OutboundTcpConnection.java:560 - Handshaking version with /10.0.225.147}}
>  {{INFO [GossipStage:1] 2018-04-27 14:03:50,977 TokenMetadata.java:479 - 
> Updating topology for /10.0.225.147}}
>  {{INFO [GossipStage:1] 2018-04-27 14:03:50,977 TokenMetadata.java:479 - 
> Updating topology for /10.0.225.147}}
>  
> (note that despite the Gossip seeing the DOWN node as 'UP', nodetool status 
> still shows it as 'DOWN', as markAlive has not completed, and will not 
> actually be seen as 'UP' for 20 more minutes)
>  
> The restarted node is repeatedly sending Echo messages to the DOWN node as 
> part of the `markAlive()` call. The DOWN node is receiving those, and claims 
> to be sending a response. However, the restarted node is not marking the DOWN 
> node as UP even after the DOWN node sends the Echo response.
>  
> Relevant logs from the restarted node's POV:
>  
> {{TRACE [GossipStage:1] 2018-04-27 14:11:28,792 MessagingService.java:945 - 
> 10.0.103.45 sending ECHO to 99248@/10.0.225.147}}
> {{TRACE [GossipTasks:1] 2018-04-27 14:11:29,792 MessagingService.java:945 - 
> 10.0.103.45 sending GOSSIP_DIGEST_SYN to 99631@/10.0.225.147}}
> {{TRACE [GossipStage:1] 2018-04-27 14:11:29,792 MessagingService.java:945 - 
> 10.0.103.45 sending ECHO to 99632@/10.0.225.147}}
> {{TRACE [GossipStage:1] 2018-04-27 14:11:29,793 MessagingService.java:945 - 
> 10.0.103.45 sending GOSSIP_DIGEST_ACK2 to 99633@/10.0.225.147}}
> {{TRACE [GossipStage:1] 2018-04-27 14:11:29,793 MessagingService.java:945 - 
> 10.0.103.45 sending ECHO to 99635@/10.0.225.147}}
> {{TRACE [GossipStage:1] 2018-04-27 14:11:31,794 MessagingService.java:945 - 
> 10.0.103.45 sending ECHO to 100348@/10.0.225.147}}
> {{TRACE [GossipStage:1] 2018-04-27 14:11:33,750 MessagingService.java:945 - 
> 10.0.103.45 sending ECHO to 101157@/10.0.225.147}}
> {{TRACE [GossipStage:1] 2018-04-27 14:11:35,412 MessagingService.java:945 - 
> 10.0.103.45 sending ECHO to 101753@/10.0.225.147}}
>  
>  
> Relevant logs from the DOWN node's POV:
>  
> {{TRACE [GossipStage:1] 2018-04-27 14:18:16,500 EchoVerbHandler.java:39 - 
> Sending a EchoMessage reply /10.0.103.45}}
>  {{TRACE [GossipStage:1] 2018-04-27 14:18:16,500 MessagingService.java:945 - 
> 10.0.225.147 sending REQUEST_RESPONSE to 328389@/10.0.103.45}}
> {{TRACE [GossipStage:1] 2018-04-27 14:18:17,679 EchoVerbHandler.java:39 - 
> Sending a EchoMessage reply /10.0.103.45}}
>  {{TRACE [GossipStage:1] 2018-04-27 14:18:17,679 MessagingService.java:945 - 
> 10.0.225.147 sending REQUEST_RESPONSE to 329412@/10.0.103.45}}
> {{TRACE [GossipStage:1] 2018-04-27 14:18:18,680 EchoVerbHandler.java:39 - 
> Sending a EchoMessage reply /10.0.103.45}}
>  {{TRACE [GossipStage:1] 2018-04-27 14:18:18,680 MessagingService.java:945 - 
> 10.0.225.147 sending REQUEST_RESPONSE to 330185@/10.0.103.45}}
>  
>  
> The metrics on the restarted node show that the MessagingService has a large 
> number of TimeoutsPerHost for the DOWN node, and all other nodes have 0 
> timeouts.
>  
>  
> Eventually, `realMarkAlive()` is called and the restarted node finally sees 
> DOWN node as coming up, and it spams several UP messages when this happens:
>  
>  
> {{INFO [RequestResponseStage-7] 2018-04-27 14:19:27,210 Gossiper.java:1019 - 
> InetAddress /10.0.225.147 is now UP}}
>  {{IN

[jira] [Updated] (CASSANDRA-14476) ShortType and ByteType are incorrectly considered variable-length types

2018-07-23 Thread Kurt Greaves (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Greaves updated CASSANDRA-14476:
-
Labels: lhf  (was: )

> ShortType and ByteType are incorrectly considered variable-length types
> ---
>
> Key: CASSANDRA-14476
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14476
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Vladimir Krivopalov
>Priority: Minor
>  Labels: lhf
>
> The AbstractType class has a method valueLengthIfFixed() that returns -1 for 
> data types with a variable length and a positive value for types with a fixed 
> length. This is primarily used for efficient serialization and 
> deserialization. 
>  
> It turns out that there is an inconsistency in types ShortType and ByteType 
> as those are in fact fixed-length types (2 bytes and 1 byte, respectively) 
> but they don't have the valueLengthIfFixed() method overloaded and it returns 
> -1 as if they were of variable length.
>  
> It would be good to fix that at some appropriate point, for example, when 
> introducing a new version of SSTables format, to keep the meaning of the 
> function consistent across data types. Saving some bytes in serialized format 
> is a minor but pleasant bonus.
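
A purely illustrative sketch of the kind of override the description is asking 
for (names are suffixed with "Sketch" to make clear this is not a patch; as the 
description notes, the on-disk change would still need to be gated on a new 
sstable format version):

{code:java}
// Illustrative only: a fixed-length type reports its width instead of -1,
// which is what the description suggests for ShortType (2 bytes) and ByteType (1 byte).
abstract class AbstractTypeSketch
{
    // -1 means "variable length" in this sketch, mirroring the description above
    public int valueLengthIfFixed()
    {
        return -1;
    }
}

final class ShortTypeSketch extends AbstractTypeSketch
{
    @Override
    public int valueLengthIfFixed()
    {
        return 2; // a short is always two bytes on disk
    }
}

final class ByteTypeSketch extends AbstractTypeSketch
{
    @Override
    public int valueLengthIfFixed()
    {
        return 1; // a byte is always one byte on disk
    }
}
{code}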



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14483) Bootstrap stream fails with Configuration exception merging remote schema

2018-07-23 Thread Kurt Greaves (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16552665#comment-16552665
 ] 

Kurt Greaves commented on CASSANDRA-14483:
--

Was that the only error? The ColumnFamily ID mismatch may not be related to the 
streaming failure. Did you try resuming the bootstrap after the failure, and did 
it complete?

> Bootstrap stream fails with Configuration exception merging remote schema
> -
>
> Key: CASSANDRA-14483
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14483
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
>Reporter: Yongxin Cen
>Priority: Major
> Fix For: 3.11.2
>
>
> I configured the yaml file for a seed node, started it up, connected with cqlsh, and ran
> create keyspace kong with replication = 
> \{'class':'SimpleStrategy','replication_factor':2};
> create user kong with password 'xxx';
> and created tables in keyspace kong.
>  
> Then, on another Cassandra node, I pointed to the seed and started the 
> Cassandra service on the new node.
> Running "nodetool status kong" shows the new node owns ?, while the seed owns 100%.
> Running "nodetool bootstrap resume" gives: 
> Resuming bootstrap
>     [2018-05-31 04:15:57,807] prepare with IP_Seed complete (progress: 0%)
>     [2018-05-31 04:15:57,921] received file system_auth/roles (progress: 50%)
>     [2018-05-31 04:15:57,960] session with IP_Seed complete (progress: 50%)
>     [2018-05-31 04:15:57,965] Stream failed
>     [2018-05-31 04:15:57,966] Error during bootstrap: Stream failed
>     [2018-05-31 04:15:57,966] Resume bootstrap complete
> At the end of /var/log/cassandra/cassandra.log, there are errors:
> ERROR [InternalResponseStage:2] 2018-05-31 00:02:30,559 MigrationTask.java:95 
> - Configuration exception merging remote schema
> org.apache.cassandra.exceptions.ConfigurationException: Column family ID 
> mismatch (found cce68250-63d6-11e8-b887-09f7d93c2253; expected 
> 41679dd0-2804-11e8-a8d4-cd6631f48e81)
>     at 
> org.apache.cassandra.config.CFMetaData.validateCompatibility(CFMetaData.java:941)
>  ~[apache-cassandra-3.11.2.jar:3.11.2]
>     at org.apache.cassandra.config.CFMetaData.apply(CFMetaData.java:895) 
> ~[apache-cassandra-3.11.2.jar:3.11.2]
>     at org.apache.cassandra.config.Schema.updateTable(Schema.java:687) 
> ~[apache-cassandra-3.11.2.jar:3.11.2]
>     at 
> org.apache.cassandra.schema.SchemaKeyspace.updateKeyspace(SchemaKeyspace.java:1464)
>  ~[apache-cassandra-3.11.2.jar:3.11.2]
>     at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchema(SchemaKeyspace.java:1420)
>  ~[apache-cassandra-3.11.2.jar:3.11.2]
>     at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchema(SchemaKeyspace.java:1389)
>  ~[apache-cassandra-3.11.2.jar:3.11.2]
>     at 
> org.apache.cassandra.schema.SchemaKeyspace.mergeSchemaAndAnnounceVersion(SchemaKeyspace.java:1366)
>  ~[apache-cassandra-3.11.2.jar:3.11.2]
>     at 
> org.apache.cassandra.service.MigrationTask$1.response(MigrationTask.java:91) 
> ~[apache-cassandra-3.11.2.jar:3.11.2]
>     at 
> org.apache.cassandra.net.ResponseVerbHandler.doVerb(ResponseVerbHandler.java:53)
>  [apache-cassandra-3.11.2.jar:3.11.2]
>     at 
> org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:66) 
> [apache-cassandra-3.11.2.jar:3.11.2]
>     at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_161]
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [na:1.8.0_161]
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [na:1.8.0_161]
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [na:1.8.0_161]
>     at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81)
>  [apache-cassandra-3.11.2.jar:3.11.2]
>     at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_161]
>     ERROR [main] 2018-05-31 00:02:58,417 StorageService.java:1524 - Error 
> while waiting on bootstrap to complete. Bootstrap will have to be restarted.
>     java.util.concurrent.ExecutionException: 
> org.apache.cassandra.streaming.StreamException: Stream failed
>         at 
> com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:299)
>  ~[guava-18.0.jar:na]
>         at 
> com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:286)
>  ~[guava-18.0.jar:na]
>         at 
> com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116) 
> ~[guava-18.0.jar:na]
>         at 
> org.apache.cassandra.service.StorageService.bootstrap(StorageService.java:1519)
>  [apache-cassandra-3.11.2.jar:3.11.2]
>  

[jira] [Commented] (CASSANDRA-9608) Support Java 11

2018-07-23 Thread Jason Brown (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16552655#comment-16552655
 ] 

Jason Brown commented on CASSANDRA-9608:


[~snazy] Yeah, that's more or less what I was thinking as well, wrt CAS'ing a 
field to allocate. Will look at it more whilst I finish up the second-pass 
review.

> Support Java 11
> ---
>
> Key: CASSANDRA-9608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9608
> Project: Cassandra
>  Issue Type: Task
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 4.x
>
> Attachments: jdk_9_10.patch
>
>
> This ticket is intended to group all issues found to support Java 9 in the 
> future.
> From what I've found out so far:
> * Maven dependency {{com.sun:tools:jar:0}} via cobertura cannot be resolved. 
> It can be easily solved using this patch:
> {code}
> - artifactId="cobertura"/>
> + artifactId="cobertura">
> +  
> +
> {code}
> * Another issue is that {{sun.misc.Unsafe}} no longer contains the methods 
> {{monitorEnter}} + {{monitorExit}}. These methods are used by 
> {{o.a.c.utils.concurrent.Locks}} which is only used by 
> {{o.a.c.db.AtomicBTreeColumns}}.
> I don't mind to start working on this yet since Java 9 is in a too early 
> development phase.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14584) insert if not exists, with replication factor of 2 doesn't work

2018-07-23 Thread arik (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16552647#comment-16552647
 ] 

arik commented on CASSANDRA-14584:
--

I'm not asking for QUORUM.

I'm just doing insert if not exists.

As far as I understand, I can use a keyspace RF of 2 with a single-node cluster.

All other commands work ok.

 

> insert if not exists, with replication factor of 2 doesn't work
> ---
>
> Key: CASSANDRA-14584
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14584
> Project: Cassandra
>  Issue Type: Bug
>Reporter: arik
>Priority: Major
>
> Running with a single node cluster.
> My keyspace has a replication factor of 2.
> Insert if not exists doesn't work on that setup.
> Produces the following error:
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:720)
>  Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All 
> host(s) tried for query failed (tried: cassandra-service/10.23.251.29:9042 
> (com.datastax.driver.core.exceptions.UnavailableException: Not enough 
> replicas available for query at consistency QUORUM (2 required but only 1 
> alive))) at 
> com.datastax.driver.core.RequestHandler.reportNoMoreHosts(RequestHandler.java:223)
>  at 
> com.datastax.driver.core.RequestHandler.access$1200(RequestHandler.java:41) 
> at 
> com.datastax.driver.core.RequestHandler$SpeculativeExecution.findNextHostAndQuery(RequestHandler.java:309)
>  at 
> com.datastax.driver.core.RequestHandler$SpeculativeExecution.retry(RequestHandler.java:477)
>  at 
> com.datastax.driver.core.RequestHandler$SpeculativeExecution.processRetryDecision(RequestHandler.java:455)
>  at 
> com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:686)
>  at 
> com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1091)
>  at 
> com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1008)
>  at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
>  at 
> io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
>  at 
> io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
>  at 
> io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
>  at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
>  at 
> io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
>  at 
> com.datastax.driver.core.InboundTrafficMeter.channelRead(InboundTrafficMeter.java:29)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
>  at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1273) at 
> io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1084) at 
> io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489)
>  at 
> io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428)
>  at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)
>  at 
>

[jira] [Commented] (CASSANDRA-9608) Support Java 11

2018-07-23 Thread Robert Stupp (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16552632#comment-16552632
 ] 

Robert Stupp commented on CASSANDRA-9608:
-

{quote}I assume this is to handle e.g. the removal of 
Unsafe.monitorEnter/monitorExit?
{quote}
Right.
{quote}could go with the {{volatile ReentrantLock}}
{quote}
A {{ReentrantLock}} would need to be allocated and used "pessimistically" 
up-front - i.e. for every instance of {{AtomicBTreePartition}}, so some 
non-negligible overhead to what we have now.

We could, as [~benedict] proposed, use some "special" handling; I gave it [a 
try in this 
commit|https://github.com/snazy/cassandra/commit/41ea5eb96c95d7896b452df7ae228ecccd95c660].
 I haven't tested it though, but it's based on {{SimpleCondition}} instead of 
{{Lock}}, because that one is a) already in the code base and b) doesn't 
require an explicit {{lock()}}, and therefore avoids the "pessimistic" 
allocation and use in every case.
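
To make the shape of that idea concrete, here is a rough, hypothetical sketch of 
CAS-publishing a one-shot latch instead of pre-allocating a lock per partition; 
it uses a plain {{java.util.concurrent.CountDownLatch}} purely for illustration, 
whereas the commit above is based on the in-tree {{SimpleCondition}}:

{code:java}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicReference;

// Rough sketch: nothing is pre-allocated per object; a latch is created only
// when a thread actually enters the critical section, and discarded on exit.
final class OnDemandExclusion
{
    private final AtomicReference<CountDownLatch> current = new AtomicReference<>();

    void enter() throws InterruptedException
    {
        CountDownLatch mine = new CountDownLatch(1);
        while (true)
        {
            if (current.compareAndSet(null, mine))
                return; // we own the critical section
            CountDownLatch theirs = current.get();
            if (theirs != null)
                theirs.await(); // wait for the current owner, then retry the CAS
        }
    }

    void exit()
    {
        CountDownLatch mine = current.getAndSet(null);
        mine.countDown(); // release any waiters; they will retry the CAS
    }
}
{code}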

> Support Java 11
> ---
>
> Key: CASSANDRA-9608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9608
> Project: Cassandra
>  Issue Type: Task
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 4.x
>
> Attachments: jdk_9_10.patch
>
>
> This ticket is intended to group all issues found to support Java 9 in the 
> future.
> From what I've found out so far:
> * Maven dependency {{com.sun:tools:jar:0}} via cobertura cannot be resolved. 
> It can be easily solved using this patch:
> {code}
> - artifactId="cobertura"/>
> + artifactId="cobertura">
> +  
> +
> {code}
> * Another issue is that {{sun.misc.Unsafe}} no longer contains the methods 
> {{monitorEnter}} + {{monitorExit}}. These methods are used by 
> {{o.a.c.utils.concurrent.Locks}} which is only used by 
> {{o.a.c.db.AtomicBTreeColumns}}.
> I don't mind to start working on this yet since Java 9 is in a too early 
> development phase.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14584) insert if not exists, with replication factor of 2 doesn't work

2018-07-23 Thread Kurt Greaves (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16552633#comment-16552633
 ] 

Kurt Greaves commented on CASSANDRA-14584:
--

You'll need multiple nodes if you want to actually have RF=2 and do QUORUM. You 
should either add more nodes or drop your RF to 1.
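
For context, {{INSERT ... IF NOT EXISTS}} is a lightweight transaction, and the 
Paxos rounds behind it need a quorum of replicas regardless of the statement's 
own consistency level. The arithmetic is simple (a tiny illustration, not driver 
or server code):

{code:java}
// QUORUM requires floor(RF/2) + 1 live replicas, so RF=2 needs 2 of them;
// a single-node cluster can therefore never satisfy it.
final class QuorumMath
{
    static int quorumFor(int replicationFactor)
    {
        return replicationFactor / 2 + 1;
    }

    public static void main(String[] args)
    {
        System.out.println(quorumFor(1)); // 1 -> fine on a single node
        System.out.println(quorumFor(2)); // 2 -> needs two live replicas
        System.out.println(quorumFor(3)); // 2
    }
}
{code}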

> insert if not exists, with replication factor of 2 doesn't work
> ---
>
> Key: CASSANDRA-14584
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14584
> Project: Cassandra
>  Issue Type: Bug
>Reporter: arik
>Priority: Major
>
> Running with a single node cluster.
> My keyspace has a replication factor of 2.
> Insert if not exists doesn't work on that setup.
> Produces the following error:
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:720)
>  Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All 
> host(s) tried for query failed (tried: cassandra-service/10.23.251.29:9042 
> (com.datastax.driver.core.exceptions.UnavailableException: Not enough 
> replicas available for query at consistency QUORUM (2 required but only 1 
> alive))) at 
> com.datastax.driver.core.RequestHandler.reportNoMoreHosts(RequestHandler.java:223)
>  at 
> com.datastax.driver.core.RequestHandler.access$1200(RequestHandler.java:41) 
> at 
> com.datastax.driver.core.RequestHandler$SpeculativeExecution.findNextHostAndQuery(RequestHandler.java:309)
>  at 
> com.datastax.driver.core.RequestHandler$SpeculativeExecution.retry(RequestHandler.java:477)
>  at 
> com.datastax.driver.core.RequestHandler$SpeculativeExecution.processRetryDecision(RequestHandler.java:455)
>  at 
> com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:686)
>  at 
> com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1091)
>  at 
> com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1008)
>  at 
> io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
>  at 
> io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
>  at 
> io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
>  at 
> io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
>  at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
>  at 
> io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
>  at 
> com.datastax.driver.core.InboundTrafficMeter.channelRead(InboundTrafficMeter.java:29)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
>  at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
>  at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1273) at 
> io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1084) at 
> io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489)
>  at 
> io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428)
>  at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)
>  at 
> io.netty.channel.AbstractChan

[jira] [Commented] (CASSANDRA-14556) Optimize streaming path in Cassandra

2018-07-23 Thread Aleksey Yeschenko (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16552625#comment-16552625
 ] 

Aleksey Yeschenko commented on CASSANDRA-14556:
---

[~jjirsa] It's already there, see {{StatsMetadata.hasLegacyCounterShards}}, as 
mentioned in the previous comment.

> Optimize streaming path in Cassandra
> 
>
> Key: CASSANDRA-14556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14556
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Streaming and Messaging
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Major
>  Labels: Performance
> Fix For: 4.x
>
>
> During streaming, Cassandra reifies the sstables into objects. This creates 
> unnecessary garbage and slows down the whole streaming process, since some 
> sstables could instead be transferred as a whole file rather than as individual 
> partitions. The objective of this ticket is to detect when a whole sstable can 
> be transferred and skip the object reification. We can also use a zero-copy 
> path to avoid bringing data into user space on both the sending and receiving 
> sides.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-9608) Support Java 11

2018-07-23 Thread Jason Brown (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-9608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16552613#comment-16552613
 ] 

Jason Brown commented on CASSANDRA-9608:


{quote}I assume this is to handle e.g. the removal of 
Unsafe.monitorEnter/monitorExit?
{quote}

Ahh, sorry for the lack of context, [~benedict], but yes, you've hit the nail 
on the head.

I was racking my brain to see if there's an alternative to allocating an object 
on demand, which of course requires contending on assignment to a shared field 
in {{AtomicBTreePartitionBase}}, and then contending on the allocated object. 
(Which is what {{Unsafe.monitorEnter()}} gave us a way around.) I see we're not 
gonna be that lucky.

bq. Or we could implement some static helper methods to help us lock against a 
property using a special inflated lock object, that can be used for 
synchronisation until there is no contention, and the last owning thread sets 
the property to null on completion

I'd be interested in a sample of this as, tbqh, I don't understand what you are 
proposing here - but I don't want you to invest a lot of time just for my 
edification. I suspect, at a minimum, we could go with the {{volatile 
ReentrantLock}} for now, but we could consider [~benedict]'s idea if it's 
reasonable (and doesn't burden him too much). wdyt, [~snazy]?
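
To make the discussion concrete, here is a rough, untested sketch of one way to read that 
proposal: a lock object inflated on demand with a CAS, used for mutual exclusion while 
there is contention, and set back to null by the last owner. It is sketched as a base 
class rather than static helpers for brevity; the names ({{InflatableLockBase}}, 
{{acquire}}, {{release}}) are hypothetical and this is not [~benedict]'s actual design.

{code}
import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;
import java.util.concurrent.locks.ReentrantLock;

public abstract class InflatableLockBase
{
    // the "property" being locked against; null means unlocked and uncontended
    private volatile ReentrantLock lock;

    private static final AtomicReferenceFieldUpdater<InflatableLockBase, ReentrantLock> LOCK_UPDATER =
        AtomicReferenceFieldUpdater.newUpdater(InflatableLockBase.class, ReentrantLock.class, "lock");

    protected final ReentrantLock acquire()
    {
        while (true)
        {
            ReentrantLock current = lock;
            if (current == null)
            {
                // try to inflate a fresh lock; losing the CAS just means someone else inflated first
                ReentrantLock fresh = new ReentrantLock();
                if (LOCK_UPDATER.compareAndSet(this, null, fresh))
                    current = fresh;
                else
                    continue;
            }
            current.lock();
            // the published lock may have been deflated/replaced between the read and lock();
            // only proceed if we still hold the published instance
            if (lock == current)
                return current;
            current.unlock();
        }
    }

    protected final void release(ReentrantLock acquired)
    {
        // deflate when nobody is queued, so the uncontended case keeps no lock object around
        if (!acquired.hasQueuedThreads())
            LOCK_UPDATER.compareAndSet(this, acquired, null);
        acquired.unlock();
    }
}
{code}

The point being that once contention dies down the shared field goes back to null, so the 
common uncontended case doesn't carry a long-lived lock object per partition.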

> Support Java 11
> ---
>
> Key: CASSANDRA-9608
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9608
> Project: Cassandra
>  Issue Type: Task
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 4.x
>
> Attachments: jdk_9_10.patch
>
>
> This ticket is intended to group all issues found to support Java 9 in the 
> future.
> From what I've found out so far:
> * Maven dependency {{com.sun:tools:jar:0}} via cobertura cannot be resolved. 
> It can be easily solved using this patch:
> {code}
> -<dependency groupId="net.sourceforge.cobertura" artifactId="cobertura"/>
> +<dependency groupId="net.sourceforge.cobertura" artifactId="cobertura">
> +  <exclusion groupId="com.sun" artifactId="tools"/>
> +</dependency>
> {code}
> * Another issue is that {{sun.misc.Unsafe}} no longer contains the methods 
> {{monitorEnter}} + {{monitorExit}}. These methods are used by 
> {{o.a.c.utils.concurrent.Locks}} which is only used by 
> {{o.a.c.db.AtomicBTreeColumns}}.
> I don't plan to start working on this yet, since Java 9 is still at too early 
> a stage of development.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-14584) insert if not exists, with replication factor of 2 doesn't work

2018-07-23 Thread arik (JIRA)
arik created CASSANDRA-14584:


 Summary: insert if not exists, with replication factor of 2 
doesn't work
 Key: CASSANDRA-14584
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14584
 Project: Cassandra
  Issue Type: Bug
Reporter: arik


Running with a single-node cluster.

My keyspace has a replication factor of 2.

Insert if not exists doesn't work on that setup.

It produces the following error:

org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:720)
 Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All 
host(s) tried for query failed (tried: cassandra-service/10.23.251.29:9042 
(com.datastax.driver.core.exceptions.UnavailableException: Not enough replicas 
available for query at consistency QUORUM (2 required but only 1 alive))) at 
com.datastax.driver.core.RequestHandler.reportNoMoreHosts(RequestHandler.java:223)
 at com.datastax.driver.core.RequestHandler.access$1200(RequestHandler.java:41) 
at 
com.datastax.driver.core.RequestHandler$SpeculativeExecution.findNextHostAndQuery(RequestHandler.java:309)
 at 
com.datastax.driver.core.RequestHandler$SpeculativeExecution.retry(RequestHandler.java:477)
 at 
com.datastax.driver.core.RequestHandler$SpeculativeExecution.processRetryDecision(RequestHandler.java:455)
 at 
com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:686)
 at 
com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1091)
 at 
com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1008)
 at 
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
 at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
 at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
 at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
 at 
io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
 at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
 at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
 at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
 at 
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
 at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
 at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
 at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
 at 
io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
 at 
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
 at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
 at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
 at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
 at 
io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
 at 
com.datastax.driver.core.InboundTrafficMeter.channelRead(InboundTrafficMeter.java:29)
 at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
 at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
 at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
 at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1273) at 
io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1084) at 
io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489)
 at 
io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428)
 at 
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)
 at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
 at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
 at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
 at 
io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
 at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
 at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannel
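
For what it's worth, the {{UnavailableException}} above reports that the conditional write 
needs QUORUM of the keyspace's 2 replicas but only 1 node is alive, which a single-node 
cluster cannot satisfy. Below is a minimal sketch, using the same driver the trace comes 
from, of aligning the replication factor with the cluster size; the keyspace, table, and 
contact point names are illustrative only.

{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

// Hypothetical workaround sketch: on a single-node cluster, drop the keyspace's
// replication factor to 1 so that QUORUM (and the LWT commit) can be satisfied.
public final class SingleNodeLwtExample
{
    public static void main(String[] args)
    {
        try (Cluster cluster = Cluster.builder().addContactPoint("cassandra-service").build();
             Session session = cluster.connect())
        {
            // keyspace "ks" and table "ks.users" are illustrative only
            session.execute("ALTER KEYSPACE ks WITH replication = "
                          + "{'class': 'SimpleStrategy', 'replication_factor': 1}");
            session.execute("INSERT INTO ks.users (id, name) VALUES (1, 'arik') IF NOT EXISTS");
        }
    }
}
{code}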

[jira] [Created] (CASSANDRA-14583) [DTEST] fix write_failures_test.py::TestWriteFailures::test_thrift

2018-07-23 Thread Marcus Eriksson (JIRA)
Marcus Eriksson created CASSANDRA-14583:
---

 Summary: [DTEST] fix 
write_failures_test.py::TestWriteFailures::test_thrift
 Key: CASSANDRA-14583
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14583
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson


It seems the test needs a {{WITH COMPACT STORAGE}} clause to avoid failing like this:
{code}
write_failures_test.py::TestWriteFailures::test_thrift swapoff: Not superuser.
01:23:57,245 ccm DEBUG Log-watching thread starting.

INTERNALERROR> Traceback (most recent call last):
INTERNALERROR>   File 
"/home/cassandra/cassandra/venv/lib/python3.6/site-packages/_pytest/main.py", 
line 178, in wrap_session
INTERNALERROR> session.exitstatus = doit(config, session) or 0
INTERNALERROR>   File 
"/home/cassandra/cassandra/venv/lib/python3.6/site-packages/_pytest/main.py", 
line 215, in _main
INTERNALERROR> config.hook.pytest_runtestloop(session=session)
INTERNALERROR>   File 
"/home/cassandra/cassandra/venv/lib/python3.6/site-packages/pluggy/__init__.py",
 line 617, in __call__
INTERNALERROR> return self._hookexec(self, self._nonwrappers + 
self._wrappers, kwargs)
INTERNALERROR>   File 
"/home/cassandra/cassandra/venv/lib/python3.6/site-packages/pluggy/__init__.py",
 line 222, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR>   File 
"/home/cassandra/cassandra/venv/lib/python3.6/site-packages/pluggy/__init__.py",
 line 216, in 
INTERNALERROR> firstresult=hook.spec_opts.get('firstresult'),
INTERNALERROR>   File 
"/home/cassandra/cassandra/venv/lib/python3.6/site-packages/pluggy/callers.py", 
line 201, in _multicall
INTERNALERROR> return outcome.get_result()
INTERNALERROR>   File 
"/home/cassandra/cassandra/venv/lib/python3.6/site-packages/pluggy/callers.py", 
line 76, in get_result
INTERNALERROR> raise ex[1].with_traceback(ex[2])
INTERNALERROR>   File 
"/home/cassandra/cassandra/venv/lib/python3.6/site-packages/pluggy/callers.py", 
line 180, in _multicall
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR>   File 
"/home/cassandra/cassandra/venv/lib/python3.6/site-packages/_pytest/main.py", 
line 236, in pytest_runtestloop
INTERNALERROR> item.config.hook.pytest_runtest_protocol(item=item, 
nextitem=nextitem)
INTERNALERROR>   File 
"/home/cassandra/cassandra/venv/lib/python3.6/site-packages/pluggy/__init__.py",
 line 617, in __call__
INTERNALERROR> return self._hookexec(self, self._nonwrappers + 
self._wrappers, kwargs)
INTERNALERROR>   File 
"/home/cassandra/cassandra/venv/lib/python3.6/site-packages/pluggy/__init__.py",
 line 222, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR>   File 
"/home/cassandra/cassandra/venv/lib/python3.6/site-packages/pluggy/__init__.py",
 line 216, in 
INTERNALERROR> firstresult=hook.spec_opts.get('firstresult'),
INTERNALERROR>   File 
"/home/cassandra/cassandra/venv/lib/python3.6/site-packages/pluggy/callers.py", 
line 201, in _multicall
INTERNALERROR> return outcome.get_result()
INTERNALERROR>   File 
"/home/cassandra/cassandra/venv/lib/python3.6/site-packages/pluggy/callers.py", 
line 76, in get_result
INTERNALERROR> raise ex[1].with_traceback(ex[2])
INTERNALERROR>   File 
"/home/cassandra/cassandra/venv/lib/python3.6/site-packages/pluggy/callers.py", 
line 180, in _multicall
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR>   File 
"/home/cassandra/cassandra/venv/lib/python3.6/site-packages/flaky/flaky_pytest_plugin.py",
 line 81, in pytest_runtest_protocol
INTERNALERROR> self.runner.pytest_runtest_protocol(item, nextitem)
INTERNALERROR>   File 
"/home/cassandra/cassandra/venv/lib/python3.6/site-packages/_pytest/runner.py", 
line 64, in pytest_runtest_protocol
INTERNALERROR> runtestprotocol(item, nextitem=nextitem)
INTERNALERROR>   File 
"/home/cassandra/cassandra/venv/lib/python3.6/site-packages/_pytest/runner.py", 
line 79, in runtestprotocol
INTERNALERROR> reports.append(call_and_report(item, "call", log))
INTERNALERROR>   File 
"/home/cassandra/cassandra/venv/lib/python3.6/site-packages/flaky/flaky_pytest_plugin.py",
 line 120, in call_and_report
INTERNALERROR> report = hook.pytest_runtest_makereport(item=item, call=call)
INTERNALERROR>   File 
"/home/cassandra/cassandra/venv/lib/python3.6/site-packages/pluggy/__init__.py",
 line 617, in __call__
INTERNALERROR> return self._hookexec(self, self._nonwrappers + 
self._wrappers, kwargs)
INTERNALERROR>   File 
"/home/cassandra/cassandra/venv/lib/python3.6/site-packages/pluggy/__init__.py",
 line 222, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR>   File 
"/home/cassandra/cassandra/venv/lib/python3.6/site-packages/pluggy/__init__.py",
 line 216, in 
INTERNALERROR> firstresult=hook.spec_opts.get('firstresult'),
INTERNALERRO

[jira] [Updated] (CASSANDRA-14583) [DTEST] fix write_failures_test.py::TestWriteFailures::test_thrift

2018-07-23 Thread Marcus Eriksson (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-14583:

Issue Type: Bug  (was: Improvement)

> [DTEST] fix write_failures_test.py::TestWriteFailures::test_thrift
> --
>
> Key: CASSANDRA-14583
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14583
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Marcus Eriksson
>Priority: Major
>
> It seems the test needs a {{WITH COMPACT STORAGE}} clause to avoid failing like this:
> {code}
> write_failures_test.py::TestWriteFailures::test_thrift swapoff: Not superuser.
> 01:23:57,245 ccm DEBUG Log-watching thread starting.
> INTERNALERROR> Traceback (most recent call last):
> INTERNALERROR>   File 
> "/home/cassandra/cassandra/venv/lib/python3.6/site-packages/_pytest/main.py", 
> line 178, in wrap_session
> INTERNALERROR> session.exitstatus = doit(config, session) or 0
> INTERNALERROR>   File 
> "/home/cassandra/cassandra/venv/lib/python3.6/site-packages/_pytest/main.py", 
> line 215, in _main
> INTERNALERROR> config.hook.pytest_runtestloop(session=session)
> INTERNALERROR>   File 
> "/home/cassandra/cassandra/venv/lib/python3.6/site-packages/pluggy/__init__.py",
>  line 617, in __call__
> INTERNALERROR> return self._hookexec(self, self._nonwrappers + 
> self._wrappers, kwargs)
> INTERNALERROR>   File 
> "/home/cassandra/cassandra/venv/lib/python3.6/site-packages/pluggy/__init__.py",
>  line 222, in _hookexec
> INTERNALERROR> return self._inner_hookexec(hook, methods, kwargs)
> INTERNALERROR>   File 
> "/home/cassandra/cassandra/venv/lib/python3.6/site-packages/pluggy/__init__.py",
>  line 216, in 
> INTERNALERROR> firstresult=hook.spec_opts.get('firstresult'),
> INTERNALERROR>   File 
> "/home/cassandra/cassandra/venv/lib/python3.6/site-packages/pluggy/callers.py",
>  line 201, in _multicall
> INTERNALERROR> return outcome.get_result()
> INTERNALERROR>   File 
> "/home/cassandra/cassandra/venv/lib/python3.6/site-packages/pluggy/callers.py",
>  line 76, in get_result
> INTERNALERROR> raise ex[1].with_traceback(ex[2])
> INTERNALERROR>   File 
> "/home/cassandra/cassandra/venv/lib/python3.6/site-packages/pluggy/callers.py",
>  line 180, in _multicall
> INTERNALERROR> res = hook_impl.function(*args)
> INTERNALERROR>   File 
> "/home/cassandra/cassandra/venv/lib/python3.6/site-packages/_pytest/main.py", 
> line 236, in pytest_runtestloop
> INTERNALERROR> item.config.hook.pytest_runtest_protocol(item=item, 
> nextitem=nextitem)
> INTERNALERROR>   File 
> "/home/cassandra/cassandra/venv/lib/python3.6/site-packages/pluggy/__init__.py",
>  line 617, in __call__
> INTERNALERROR> return self._hookexec(self, self._nonwrappers + 
> self._wrappers, kwargs)
> INTERNALERROR>   File 
> "/home/cassandra/cassandra/venv/lib/python3.6/site-packages/pluggy/__init__.py",
>  line 222, in _hookexec
> INTERNALERROR> return self._inner_hookexec(hook, methods, kwargs)
> INTERNALERROR>   File 
> "/home/cassandra/cassandra/venv/lib/python3.6/site-packages/pluggy/__init__.py",
>  line 216, in 
> INTERNALERROR> firstresult=hook.spec_opts.get('firstresult'),
> INTERNALERROR>   File 
> "/home/cassandra/cassandra/venv/lib/python3.6/site-packages/pluggy/callers.py",
>  line 201, in _multicall
> INTERNALERROR> return outcome.get_result()
> INTERNALERROR>   File 
> "/home/cassandra/cassandra/venv/lib/python3.6/site-packages/pluggy/callers.py",
>  line 76, in get_result
> INTERNALERROR> raise ex[1].with_traceback(ex[2])
> INTERNALERROR>   File 
> "/home/cassandra/cassandra/venv/lib/python3.6/site-packages/pluggy/callers.py",
>  line 180, in _multicall
> INTERNALERROR> res = hook_impl.function(*args)
> INTERNALERROR>   File 
> "/home/cassandra/cassandra/venv/lib/python3.6/site-packages/flaky/flaky_pytest_plugin.py",
>  line 81, in pytest_runtest_protocol
> INTERNALERROR> self.runner.pytest_runtest_protocol(item, nextitem)
> INTERNALERROR>   File 
> "/home/cassandra/cassandra/venv/lib/python3.6/site-packages/_pytest/runner.py",
>  line 64, in pytest_runtest_protocol
> INTERNALERROR> runtestprotocol(item, nextitem=nextitem)
> INTERNALERROR>   File 
> "/home/cassandra/cassandra/venv/lib/python3.6/site-packages/_pytest/runner.py",
>  line 79, in runtestprotocol
> INTERNALERROR> reports.append(call_and_report(item, "call", log))
> INTERNALERROR>   File 
> "/home/cassandra/cassandra/venv/lib/python3.6/site-packages/flaky/flaky_pytest_plugin.py",
>  line 120, in call_and_report
> INTERNALERROR> report = hook.pytest_runtest_makereport(item=item, 
> call=call)
> INTERNALERROR>   File 
> "/home/cassandra/cassandra/venv/lib/python3.6/site-packages/pluggy/__init__.py",
>  line 617, in __call__
> INTERNALERROR> return self._hookexec(self, self._nonwrappe