Hello,
I am seeing streaming errors while adding new nodes (in the same DC) to the
cluster.
ERROR [STREAM-IN-/x.x.x.x] 2017-04-11 23:09:29,318 StreamSession.java:512 -
[Stream #a8d56c70-1f0b-11e7-921e-61bb8bdc19bb] Streaming error occurred
java.io.IOException: CF *465ed8d0-086c-11e6-9744-2900b5a9a
Right! Another reason why I just stick with sequential decommissions. Maybe
someone here could shed some light on what happens under the covers if
parallel decommissions are kicked off.
-- Jacob Shadix
On Tue, Apr 11, 2017 at 12:55 PM, benjamin roth wrote:
I did not test it but I'd bet that parallel decommission will lead to
inconsistencies.
Each decommission results in range movements and range reassignments which
become effective after a successful decommission.
If you start several decommissions at once, I guess the calculated
reassignments are in
Are you using vnodes? I typically do one-by-one as the decommission will
create additional load/network activity streaming data to the other nodes
as the token ranges are reassigned.
-- Jacob Shadix
On Sat, Apr 8, 2017 at 10:55 AM, Vlad wrote:
> Hi,
>
> how multiple nodes should be decommission
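The one-by-one approach recommended above can be driven by a small script. This is a sketch only: `node1`/`node2` are hypothetical host names, and it defaults to a dry run that just prints the commands it would issue.

```shell
#!/bin/sh
# Dry-run driver for strictly sequential decommissions.
# node1/node2 are placeholder host names; set DRY_RUN=0 to actually run it.
DRY_RUN=1
done_hosts=""
for host in node1 node2; do
  if [ "$DRY_RUN" -eq 1 ]; then
    echo "would run: nodetool -h $host decommission"
  else
    # nodetool decommission blocks until streaming finishes, so the next
    # iteration starts only after this node has fully left the ring.
    nodetool -h "$host" decommission
  fi
  done_hosts="$done_hosts$host "
done
echo "processed: $done_hosts"
```

Because `nodetool decommission` does not return until its streaming completes, a plain loop like this is inherently sequential, which avoids the concurrent range-reassignment problem discussed above.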
We run an 8-node Cassandra v2.1.16 cluster (4 nodes in each of two discrete
datacentres) and we're currently investigating a problem whereby restarting
Cassandra on a node resulted in the filling of Eden/Survivor/Old and
frequent GCs.
http://imgur.com/a/OR1dk
This hammered reads from our application
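When a node shows that pattern, the generations can be watched directly with the stock JDK `jstat` tool. A sketch, assuming a single Cassandra process on the box (the pid lookup is an assumption, not something from the thread):

```shell
#!/bin/sh
# Sample heap-generation occupancy on a live node with jstat.
# Assumes exactly one Cassandra process; CassandraDaemon is its main class.
pid=$(pgrep -f CassandraDaemon | head -n 1)
if [ -n "$pid" ]; then
  # S0/S1 = survivor spaces, E = Eden, O = old gen (all % used);
  # YGC/FGC = young/full GC counts. Take 5 samples, 1 second apart.
  jstat -gcutil "$pid" 1000 5
else
  echo "no Cassandra process found"
fi
```

A steadily climbing O column together with rising FGC counts matches the Eden/Survivor/Old filling described above.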
We have included the IPV6 address with scope GLOBAL, and not IPV6 with
SCOPE LINK in the YAML and TOPOLOGY files.
inet6 addr: 2001: *** : ** : ** : * : * : : Scope:Global
inet6 addr: fe80 :: *** : : : Scope:Link
Not sure if this might be of relevance to the issue you a
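To double-check which address actually has global scope (the one that belongs in the YAML per the advice above), the interface output can be filtered on the scope field. A sketch against sample `ifconfig`-style output; the addresses are documentation placeholders, not real ones:

```shell
#!/bin/sh
# Pick out the global-scope IPv6 address from ifconfig-style output.
# $sample stands in for `ifconfig eth0` and uses fake addresses.
sample='inet6 addr: 2001:db8:1234::1/64 Scope:Global
inet6 addr: fe80::1/64 Scope:Link'
global_addr=$(printf '%s\n' "$sample" | awk '/Scope:Global/ { print $3 }')
echo "global address: $global_addr"
```

The `fe80::/10` link-local address (Scope:Link) is the one to avoid in cassandra.yaml and the topology files.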
From: sai krishnam raju potturi
No luck at all here. Yes, I had commented out that line (and also
I got a similar error, and commenting out the below line helped.
JVM_OPTS="$JVM_OPTS -Djava.net.preferIPv4Stack=true"
Did you also include "rpc_interface_prefer_ipv6: true" in the YAML file?
thanks
Sai
On Tue, Apr 11, 2017 at 6:37 AM, Martijn Pieters wrote:
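For reference, the preference settings mentioned in this exchange live in cassandra.yaml. A sketch, assuming a dual-stack interface named eth0 (the interface name is a placeholder):

```yaml
# cassandra.yaml: when an interface carries both an IPv4 and an IPv6
# address, prefer the IPv6 one (by default the first IPv4 address wins).
listen_interface: eth0
listen_interface_prefer_ipv6: true
rpc_interface: eth0
rpc_interface_prefer_ipv6: true
```

Note that the `*_interface` options are mutually exclusive with `listen_address`/`rpc_address`; the `*_prefer_ipv6` flags only apply when the interface form is used.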
I have a table containing a column `foo` which is a string holding JSON.
I have a class called `Foo` which maps to `foo_json` and can be serialized
/ deserialized using Jackson.
Is it possible to define the column as `private Foo foo` rather than
`private String foo` and manually deserializing it
Thanks for your reply. Yes, it would be nice to know the root cause.
Now running a full repair. Hopefully this will solve the problem.
On Tue, Apr 11, 2017 at 9:43 AM, Roland Otta
wrote:
I’m having issues getting a single-node Cassandra cluster to run on an
Ubuntu 16.04 VM with only IPv6 available. I’m running Oracle Java 8
(8u121-1~webupd8~2) and Cassandra 3.10 (installed via the
http://www.apache.org/dist/cassandra/debian packages).
I consistently get a “Protocol family
"system_auth" not my table.
On 04/11/2017 07:12 AM, Oskar Kjellin wrote:
> You changed to 6 nodes because you were running out of disk? But you
> still replicate 100% to all so you don't gain anything
>
>
>
> On 10 Apr 2017, at 13:48, Cogumelos Maravilha
> <cogumelosmaravi...@sapo.pt> wro
well .. that's pretty much the same as what we saw in our environment (cassandra 3.7).
in our case a full repair fixed the issues.
but no doubt .. it would be more satisfying to know the root cause for that
issue
br,
roland
On Mon, 2017-04-10 at 19:12 +0200, George Sigletos wrote:
In 3 out of 5 nodes o
hi,
sometimes we have the problem that we have hinted handoffs (for example
because of network problems between 2 DCs) that do not get processed
even after the connection problem between the DCs recovers. Some of the
files stay in the hints directory until we restart the node that
contains the hints
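Before restarting the node, it may be worth trying the hint-related nodetool subcommands. A dry-run sketch (it only prints the commands), assuming a reasonably recent nodetool that has these subcommands:

```shell
#!/bin/sh
# Dry run of the nodetool knobs for stuck hints: print, don't execute.
log=""
run() {
  log="$log$1;"
  echo "would run: nodetool $1"
}
run statushandoff   # is hinted handoff running or paused on this node?
run resumehandoff   # resume delivery if it was paused
# Last resort: delete the stuck hint files. The data in those hints is
# lost, so a repair of the affected ranges should follow.
run truncatehints
```

If `truncatehints` is used, a repair between the two DCs afterwards restores the consistency the dropped hints would have provided.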