> 3.7).
>
> In our case a full repair fixed the issues.
> But no doubt it would be more satisfying to know the root cause of
> that issue.
>
> br,
> roland
>
>
> On Mon, 2017-04-10 at 19:12 +0200, George Sigletos wrote:
>
> In 3 out of 5 nodes of our ne
s that stopped streaming and working but have not finished.
> What does nodetool netstats output for your newly built nodes?
>
> br,
> roland
>
>
> On Mon, 2017-04-10 at 17:15 +0200, George Sigletos wrote:
>
> Hello,
>
> We recently added a new datacenter to our cluster
Hello,
We recently added a new datacenter to our cluster and ran "nodetool rebuild
-- " on all 5 new nodes, one by one.
After this process finished we noticed there is data missing from the new
datacenter, although it exists on the current one.
How would that be possible? Should I maybe have
kely your
>> problem here.
>>
>> On 14 March 2017 at 18:58, George Sigletos <sigle...@textkernel.nl>
>> wrote:
>>
>>> To give a complete picture, my node has actually two network interfaces:
>>> eth0 for 192.168.xx.xx and eth1 for 10.179.xx.xx
>
To give a complete picture, my node has actually two network interfaces:
eth0 for 192.168.xx.xx and eth1 for 10.179.xx.xx
On Tue, Mar 14, 2017 at 7:46 PM, George Sigletos <sigle...@textkernel.nl>
wrote:
> Hello,
>
> I am trying to change the IP of a live node (I am not replaci
Hello,
I am trying to change the IP of a live node (I am not replacing a dead
one).
So I stop the service on my node (not a seed node), I change the IP from
192.168.xx.xx to 10.179.xx.xx, and modify "listen_address" and
"rpc_address" in the cassandra.yaml, while I also set auto_bootstrap:
false.
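The steps described above amount to a small cassandra.yaml change. A sketch, using the addresses from this thread (the comments are my reading of the procedure, not quoted from it):

```yaml
# cassandra.yaml (fragment) - sketch of the IP change described above
listen_address: 10.179.xx.xx   # was 192.168.xx.xx
rpc_address: 10.179.xx.xx      # was 192.168.xx.xx
auto_bootstrap: false          # node already holds its data; do not re-bootstrap
```

After restarting, the other nodes should pick up the new address via gossip, since the node keeps its host ID and token ownership; running `nodetool status` on a peer can confirm the change took.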
Even when I set a lower request-timeout in order to trigger a timeout,
there is still no WARN or ERROR in the logs.
On Wed, Sep 28, 2016 at 8:22 PM, George Sigletos <sigle...@textkernel.nl>
wrote:
> Hi Joaquin,
>
> Unfortunately neither WARN nor ERROR was found in the system logs across the
acktraces that you
> see?
>
> Cheers,
>
> Joaquin Casares
> Consultant
> Austin, TX
>
> Apache Cassandra Consulting
> http://www.thelastpickle.com
>
> On Wed, Sep 28, 2016 at 12:43 PM, George Sigletos <sigle...@textkernel.nl>
> wrote:
>
>> Thank
> In older versions you cannot control when this call will time out; it is
> fairly normal that it does!
>
>
> On Wed, Sep 28, 2016 at 12:50 PM, George Sigletos <sigle...@textkernel.nl>
> wrote:
>
>> Hello,
>>
>> I keep executing a TRUNCATE command on a
Hello,
I keep executing a TRUNCATE command on an empty table and it throws
OperationTimedOut randomly:
cassandra@cqlsh> truncate test.mytable;
OperationTimedOut: errors={}, last_host=cassiebeta-01
cassandra@cqlsh> truncate test.mytable;
OperationTimedOut: errors={}, last_host=cassiebeta-01
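TRUNCATE has to touch every node, so it often outlives the client-side timeout even when the server eventually completes it. Since truncating is idempotent, a client-side retry is a common workaround. A minimal sketch; the exception class here is a stand-in for the driver's real `cassandra.OperationTimedOut`:

```python
import time

class OperationTimedOut(Exception):
    """Stand-in for the driver's cassandra.OperationTimedOut."""

def run_with_retry(operation, retries=3, delay=1.0):
    """Retry an idempotent operation (e.g. a TRUNCATE) when it times out."""
    for attempt in range(retries):
        try:
            return operation()
        except OperationTimedOut:
            if attempt == retries - 1:
                raise  # give up after the final attempt
            time.sleep(delay)
```

With the real driver you would also raise the client timeout in the first place, e.g. `Session.default_timeout` or cqlsh's `--request-timeout` option, so fewer retries are needed.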
.jar:na]
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2195)
~[guava-16.0.jar:na]
On Tue, Sep 20, 2016 at 11:12 AM, George Sigletos <sigle...@textkernel.nl>
wrote:
> I am also getting the same error:
> cqlsh -u cassandra -p cassandra
>
> Connecti
I am also getting the same error:
cqlsh -u cassandra -p cassandra
Connection error: ('Unable to connect to any servers', {'':
OperationTimedOut('errors=Timed out creating connection (5 seconds),
last_host=None',)})
But it is not consistent. Sometimes I manage to connect. It is random.
Using
t; Alain Rodriguez - al...@thelastpickle.com
> France
>
> The Last Pickle - Apache Cassandra Consulting
> http://www.thelastpickle.com
>
> 2016-05-20 17:54 GMT+02:00 George Sigletos <sigle...@textkernel.nl>:
>
>> Hello,
>>
>> Using withLocalDC="myLocalDC"
, George Sigletos <sigle...@textkernel.nl>
wrote:
> No luck unfortunately. It seems that the connection to the destination
> node was lost.
>
> However there was progress compared to the previous times. A lot more data
> was streamed.
>
> (From source node)
> INFO [G
(ConnectionHandler.java:257)
~[apache-cassandra-2.1.14.jar:2.1.14]
at java.lang.Thread.run(Unknown Source) [na:1.7.0_79]
INFO [SharedPool-Worker-1] 2016-05-28 17:54:59,612 Gossiper.java:993 -
InetAddress /54.172.235.227 is now UP
On Fri, May 27, 2016 at 5:37 PM, George Sigletos <si
e your 2.1.13 nodes to 2.1.14 first, or add
>> the new nodes using 2.1.13, and upgrade after.
>>
>> On Fri, May 27, 2016 at 8:41 AM, George Sigletos <sigle...@textkernel.nl>
>> wrote:
>>
>> >>>> ERROR [STREAM-IN-/192.168.1.141] 2016-05-26 09:08
first, or add
> the new nodes using 2.1.13, and upgrade after.
>
> On Fri, May 27, 2016 at 8:41 AM, George Sigletos <sigle...@textkernel.nl>
> wrote:
>
> >>>> ERROR [STREAM-IN-/192.168.1.141] 2016-05-26 09:08:05,027
> >>>> StreamSession.java:50
:05 PM, George Sigletos <sigle...@textkernel.nl>
wrote:
> The time the first streaming failure occurs varies from a few hours to 1+
> day.
>
> We also experience slowness problems with the destination node on Amazon.
> Rebuild is slow. That may also contribute to the problem.
entire rebuild process?
>>
>
>> On Thu, May 26, 2016 at 12:17 AM, Paulo Motta <pauloricard...@gmail.com>
>> wrote:
>>
>>> If increasing or disabling streaming_socket_timeout_in_ms on the source
>>> node does not fix it, you may want to have a look
information:
> https://docs.datastax.com/en/cassandra/2.0/cassandra/troubleshooting/trblshootIdleFirewall.html
>
> This will ultimately be fixed by CASSANDRA-11841, which adds keep-alive to
> the streaming protocol.
>
> 2016-05-25 18:09 GMT-03:00 George Sigletos <sigle...@textkern
the stream session is failed. The workaround is to set a larger
>> streaming_socket_timeout_in_ms; the new default will be 86400000 ms (1
>> day).
>>
>> We are addressing this on
>> https://issues.apache.org/jira/browse/CASSANDRA-11839.
>>
>> 2016-05-25 16:42 GMT-03:00 George Si
<pauloricard...@gmail.com>
wrote:
> This is the log of the destination/rebuilding node; you need to check the
> error message on the stream source node (192.168.1.140).
>
>
> 2016-05-25 15:22 GMT-03:00 George Sigletos <sigle...@textkernel.nl>:
>
>> Hello,
>&
Hello,
Here is additional stack trace from system.log:
ERROR [STREAM-IN-/192.168.1.140] 2016-05-24 22:44:57,704
StreamSession.java:620 - [Stream #2c290460-20d4-11e6-930f-1b05ac77baf9]
Remote peer 192.168.1.140 failed stream session.
ERROR [STREAM-OUT-/192.168.1.140] 2016-05-24 22:44:57,705
your keyspace? If yes, can you check the
> cassandra-rackdc.properties of this new node?
>
> https://issues.apache.org/jira/browse/CASSANDRA-8279
>
>
> Regards,
> Mike Yeap
>
> On Wed, May 25, 2016 at 2:31 PM, George Sigletos <sigle...@textkernel.nl>
> wrote:
I am getting this error repeatedly while I am trying to add a new DC
consisting of one node in AWS to my existing cluster. I have tried 5 times
already. Running Cassandra 2.1.13
I have also set:
streaming_socket_timeout_in_ms: 360
in all of my nodes
Does anybody have any idea how this can be
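For reference, a cassandra.yaml fragment with the 1-day streaming socket timeout that the ticket discussed earlier in this thread (CASSANDRA-11839) proposed as the new default; the exact value is taken from that discussion, not from this message:

```yaml
# cassandra.yaml (fragment) - set on every node involved in streaming,
# then restart; 86400000 ms = 1 day, the default proposed in CASSANDRA-11839
streaming_socket_timeout_in_ms: 86400000
```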
Hello,
Using withLocalDC="myLocalDC" and withUsedHostsPerRemoteDc>0 will guarantee
that you will connect to one of the nodes in "myLocalDC",
but DOES NOT guarantee that your read/write request will be acknowledged by
a "myLocalDC" node. It may well be acknowledged by a remote DC node as
well,
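A toy model of that distinction, using the standard quorum arithmetic (quorum = replicas // 2 + 1). This is an illustration I wrote for this point, not driver code; `acks_required` is a hypothetical name:

```python
def acks_required(consistency, replicas_by_dc, local_dc):
    """How many replica acks a write needs, and from which DCs they may come.

    QUORUM counts replicas across all DCs, so remote-DC replicas can be
    among the acknowledging nodes even when the coordinator is local.
    LOCAL_QUORUM only ever counts replicas in the coordinator's own DC.
    """
    if consistency == "LOCAL_QUORUM":
        n = replicas_by_dc[local_dc]
        return n // 2 + 1, {local_dc}
    if consistency == "QUORUM":
        n = sum(replicas_by_dc.values())
        return n // 2 + 1, set(replicas_by_dc)
    raise ValueError(consistency)
```

With RF 3 in each of two DCs, QUORUM needs 4 acks and may count remote replicas, while LOCAL_QUORUM needs 2, all local. So pinning the coordinator with withLocalDC only gives local guarantees when paired with a LOCAL_* consistency level.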
Unfortunately DataStax decided to discontinue OpsCenter for open source
Cassandra, starting from version 2.2.
Pity.
On Wed, Jan 6, 2016 at 6:00 PM, Michael Shuler
wrote:
> On 01/06/2016 10:55 AM, Michael Shuler wrote:
> > On 01/06/2016 01:47 AM, Wills Feng wrote:
> >>
Hello,
We had a similar problem where we needed to migrate data from one cluster
to another.
We ended up using Spark to accomplish this. It is fast and reliable, but
some downtime was still required.
We minimized the downtime by doing a first full run and then running
incremental updates.
Kind
On Mon, Dec 21, 2015 at 12:53 PM, Noorul Islam K M <noo...@noorul.com>
wrote:
> George Sigletos <sigle...@textkernel.nl> writes:
>
> > Hello,
> >
> > We had a similar problem where we needed to migrate data from one cluster
> > to another.
> >
> >
You can use sstable2json to create a JSON dump of your keyspace and then load
> this JSON into your keyspace in the new cluster using the json2sstable
> utility.
>
> On Tue, Dec 1, 2015 at 3:06 AM, Robert Coli <rc...@eventbrite.com> wrote:
>
>> On Thu, Nov 19, 2015 at 7:01 AM, George Siglet
Hello,
We would like to migrate one keyspace from a 6-node cluster to a 3-node one.
Since an individual node does not contain all the data, we should run
sstableloader 6 times, once for each node of our cluster.
To be precise, do "nodetool flush " then run sstableloader -d <3
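The per-node procedure sketched above might look like the following; keyspace, table, paths, and target hosts are placeholders, not values from this thread:

```shell
# On each of the 6 source nodes, flush memtables to disk first
nodetool flush my_keyspace

# Then stream that node's sstables to the new cluster
# (-d takes the target cluster's contact points)
sstableloader -d new-node-1,new-node-2,new-node-3 \
    /var/lib/cassandra/data/my_keyspace/my_table
```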
Hello,
I have been frequently receiving those warnings:
java.lang.IllegalArgumentException: Mutation of 35141120 bytes is too large
for the maxiumum size of 33554432
at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:221)
~[apache-cassandra-2.1.9.jar:2.1.9]
at
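The 33554432-byte limit in that message is not arbitrary: the commit log rejects any mutation larger than half a segment, so a 32 MiB limit implies commitlog_segment_size_in_mb is set to 64 on this cluster. A quick check of the arithmetic:

```python
# The commit log rejects mutations larger than half a segment, so a
# reported limit of 33554432 bytes (32 MiB) implies
# commitlog_segment_size_in_mb: 64 on this cluster.
segment_size_mb = 64
max_mutation = segment_size_mb * 1024 * 1024 // 2
print(max_mutation)             # 33554432, matching the log line
print(35141120 > max_mutation)  # True: this mutation gets rejected
```

The fixes are therefore either to send smaller batches/rows or to raise commitlog_segment_size_in_mb, keeping the halving rule in mind.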
REQUEST_RESPONSE 0
COUNTER_MUTATION 0
On Tue, Oct 6, 2015 at 5:35 PM, Kiran mk <coolkiran2...@gmail.com> wrote:
> Do you see more dropped mutation messages in nodetool tpstats output.
> On Oct 6, 2015 7:51 PM, "George Sigletos" <sigle...@textker
I'm also facing problems regarding corrupt sstables and also couldn't run
sstablescrub successfully.
I restarted my nodes with disk failure policy "best_effort", then ran
"nodetool scrub "
Once done I removed the corrupt sstables manually and started a repair
On Thu, Oct 1, 2015 at 7:27 PM,
Hello again and sorry for the late response,
Still having problems with upgrading from 2.1.8 to 2.1.9.
I decided to start the problematic nodes with "disk_failure_policy:
best_effort"
Currently running "nodetool scrub "
Then removing the corrupted sstables and planning to run repair
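As a command sketch of that sequence (the keyspace name is a placeholder; the yaml line goes in cassandra.yaml before restarting):

```shell
# 1. In cassandra.yaml, set: disk_failure_policy: best_effort, then restart
# 2. Scrub online (or run offline sstablescrub while the node is stopped)
nodetool scrub my_keyspace
# 3. Remove any sstables scrub could not fix, restart, then repair
nodetool repair my_keyspace
```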
Hello,
I tried to upgrade two of our clusters from 2.1.8 to 2.1.9. In some, but
not all nodes, I got errors about corrupt sstables when restarting. I
downgraded back to 2.1.8 for now.
Has anybody else faced the same problem? Should sstablescrub fix the
problem? I didn't try that yet.
Kind