Is it OK to run this?
https://blog.mozilla.org/it/2012/06/30/mysql-and-the-leap-second-high-cpu-and-the-fix/
Seeing high CPU consumption for the Cassandra process.
A reboot of the machine worked.
From: nair...@outlook.com
To: user@cassandra.apache.org
Subject: Cassandra leap second
Date: Wed, 1 Jul 2015 02:54:53 +
Is it OK to run this?
https://blog.mozilla.org/it/2012/06/30/mysql-and-the-leap-second-high-cpu-and-the-fix/
Seeing high CPU
That is a GREAT lead! So it looks like I can't add a few nodes of the new
version to the cluster, let it settle down, and then upgrade the rest?
On Tue, Jun 30, 2015 at 11:58 AM, Alain RODRIGUEZ arodr...@gmail.com
wrote:
Would it matter that I'm mixing cassandra versions?
From:
Looks like you have a "too many open files" issue. Increase the ulimit for the user.
If you are starting the Cassandra daemon as user cassandra, increase the
ulimit for that user.
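To check what open-files limit the current session actually has (before and after changing it), here is a quick sketch using Python's standard resource module (Linux/macOS only):

```python
import resource

# The soft limit is what the process is actually held to; the hard limit is
# the ceiling up to which the user may raise the soft limit.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"max open files: soft={soft} hard={hard}")
```

If the soft limit is low (1024 is a common default), raising it persistently for the cassandra user typically means adding nofile entries under /etc/security/limits.conf (or /etc/security/limits.d/) and logging in again; the right value depends on your workload.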
On Jun 30, 2015, at 21:16, Neha Trivedi nehajtriv...@gmail.com wrote:
Hello,
I have a 4 node cluster with
Thanks Arun ! I will try and get back !
On Wed, Jul 1, 2015 at 10:32 AM, Arun arunsi...@gmail.com wrote:
Looks like you have a "too many open files" issue. Increase the ulimit for the
user.
If you are starting the Cassandra daemon as user cassandra, increase
the ulimit for that user.
On
Hi,
I have a cluster which had 4 datacenters running 2.0.12. Last week one of
the datacenters was decommissioned using nodetool decommission on each of
the servers in turn. This seemed to work fine until one of the nodes
started appearing in the logs of all of the remaining servers with messages
Hi all,
I configured a Cassandra Cluster (3 nodes).
Then I created a KEYSPACE:
cqlsh> CREATE KEYSPACE test WITH REPLICATION = { 'class' : 'SimpleStrategy',
'replication_factor' : 2 };
and a table:
cqlsh> CREATE TABLE chiamate_stabili (chiave TEXT PRIMARY KEY, valore
BLOB);
I inserted
Hi David,
What does a nodetool describecluster output look like ?
My guess is that you might have a schema version desynchronisation. If you
see a node with a different schema version, you might want to try
nodetool resetlocalschema (reset the node's local schema and resync it).
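To spot such a desynchronisation quickly, here is a hedged sketch that groups hosts by schema version from describecluster-style output (the sample text below is illustrative, not captured from a real cluster):

```python
# Sketch: detect schema disagreement from `nodetool describecluster`-style text.
sample = """\
Cluster Information:
    Name: Test Cluster
    Snitch: org.apache.cassandra.locator.SimpleSnitch
    Partitioner: org.apache.cassandra.dht.Murmur3Partitioner
    Schema versions:
        86afa796-d883-3932-aa73-6b017cef0d19: [10.0.0.1, 10.0.0.3]
        2207c2a9-f598-3971-986b-2926e09e239d: [10.0.0.2]
"""

def schema_versions(text):
    """Map each schema version UUID to the list of hosts reporting it."""
    versions = {}
    for line in text.splitlines():
        line = line.strip()
        # Only the "uuid: [host, host]" lines end with a closing bracket.
        if ":" in line and line.endswith("]"):
            uuid, hosts = line.split(":", 1)
            versions[uuid] = [h.strip() for h in hosts.strip(" []").split(",")]
    return versions

versions = schema_versions(sample)
if len(versions) > 1:
    print(f"schema disagreement across {len(versions)} versions: {versions}")
```

A healthy cluster shows exactly one schema version; more than one entry means some node(s) need a resync.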
You asked for
Hi Moreno,
Which consistency level are you using? If you're using ONE, that may make
sense, as, depending on the partitioning and on which node coordinates the
query, different values may be returned.
Hope it helps.
Regards
Carlos Alonso | Software Engineer | @calonso
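Carlos's point can be shown with a toy model (names and values invented): each replica stores a (value, timestamp) pair, a read at ONE contacts a single replica, and a read at QUORUM contacts a majority and keeps the newest timestamp:

```python
# Two replicas of the same row (RF=2); node2 hasn't received the latest write.
replicas = {
    "node1": ("new-value", 200),
    "node2": ("old-value", 100),
}

def read(contacted):
    """Coordinator-side conflict resolution: last write (highest timestamp) wins."""
    answers = [replicas[n] for n in contacted]
    return max(answers, key=lambda a: a[1])[0]

print(read(["node2"]))            # CL ONE hitting the lagging replica: stale read
print(read(["node1", "node2"]))   # CL QUORUM with RF=2 reads both: newest wins
```

At ONE the answer depends entirely on which replica the coordinator happens to ask; at QUORUM the read set always includes at least one replica that saw the write (when the write also met QUORUM).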
Can you try two more tests:
1) Write the way you are, perform a repair on all nodes, then read the way
you are.
wipe data
2) Write with CL quorum, read with CL quorum.
On Tue, Jun 30, 2015 at 8:34 AM, Mauri Moreno Cesare
morenocesare.ma...@italtel.com wrote:
Hi all,
I configured a
Hi Carlos,
first I used consistency ONE, then ALL
(I retried with ALL in order to check whether the problem would disappear).
Thanks
Moreno
From: Carlos Alonso [mailto:i...@mrcalonso.com]
Sent: Tuesday, 30 June 2015 15:24
To: user@cassandra.apache.org
Subject: Re: Insert (and delete) data loss?
Hi David,
Are you sure you ran the repair entirely (9 days + repair logs ok on
opscenter server) before adding the 10th node ? This is important to avoid
potential data loss ! Did you set auto_bootstrap to true on this 10th node ?
C*heers,
Alain
2015-06-29 14:54 GMT+02:00 David CHARBONNIER
Thank you Jason!
OK, I will try with QUORUM and, if the problem is still present, I'll run the
"nodetool repair" command.
(But is the use of "nodetool repair" good practice in a production
environment?)
Moreno
From: Jason Kushmaul [mailto:jkushm...@rocketfuelinc.com]
Sent: Tuesday, 30 June
Hello,
I have a 4 node cluster with SimpleSnitch.
Cassandra version: 2.1.3
I am trying to add a new node (cassandra 2.1.7) and I get the following
error.
ERROR [STREAM-IN-] 2015-06-30 05:13:48,516 JVMStabilityInspector.java:94 -
JVM state determined to be unstable. Exiting forcefully due
Arun,
I am logging on to the server as root and running "sudo service cassandra start".
regards
Neha
On Wed, Jul 1, 2015 at 11:00 AM, Neha Trivedi nehajtriv...@gmail.com
wrote:
Thanks Arun ! I will try and get back !
On Wed, Jul 1, 2015 at 10:32 AM, Arun arunsi...@gmail.com wrote:
Looks like
I was having exactly the same issue with the same version. Check your seed list
and make sure it contains only live nodes. I know that seeds are only read
when Cassandra starts, but updating the seed list to live nodes and then doing a
rolling restart fixed this issue for me.
I hope this
Problem is still present ☹
First I changed the ConsistencyLevel in the Java client code (from ONE to QUORUM).
After changing the ConsistencyLevel I needed to reconfigure the Cassandra
cluster properly
(following the "Cassandra Calculator for Dummies", I changed the keyspace's
replication factor from 2 to 3):
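The arithmetic behind that change can be sketched in a few lines (quorum = floor(RF/2) + 1): with RF=2 a QUORUM of 2 is effectively ALL, so a single down replica fails the request, while RF=3 keeps the read/write overlap and tolerates one failure:

```python
def quorum(rf):
    # Majority of replicas: floor(rf / 2) + 1.
    return rf // 2 + 1

for rf in (2, 3):
    q = quorum(rf)
    print(f"RF={rf}: quorum={q}, read/write overlap={q + q > rf}, "
          f"tolerates {rf - q} replica(s) down")
```

The W + R > RF condition is what guarantees a QUORUM read always intersects the replicas that acknowledged a QUORUM write.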
Agreed, Tyler. I think it's our application's problem. If the client returns a failed
write in spite of retries, the application must have a rollback mechanism to make
sure the old state is restored. The write may have failed because the CL was
not met even though one node successfully wrote. Cassandra won't
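A minimal sketch of the rollback idea described above (all names hypothetical; a plain dict stands in for the database): retry the write a few times and, if it never succeeds, issue a compensating write restoring the captured old state:

```python
class WriteTimeout(Exception):
    """Stands in for a driver-level write timeout."""

# Simulated flaky backend: fails the next `fail["n"]` writes.
fail = {"n": 2}

def unreliable_write(store, key, value):
    if fail["n"] > 0:
        fail["n"] -= 1
        raise WriteTimeout()
    store[key] = value

def write_with_rollback(store, key, new_value, attempts=3):
    old_value = store.get(key)          # capture state for compensation
    for _ in range(attempts):
        try:
            unreliable_write(store, key, new_value)
            return True
        except WriteTimeout:
            continue
    # All retries failed: compensating write. As noted in the thread, a
    # timed-out write may still have landed on some replicas, which is
    # exactly why a compensating write (not just "give up") is needed.
    store[key] = old_value
    return False

store = {"k": "old"}
print(write_with_rollback(store, "k", "new"), store["k"])  # succeeds on 3rd try
```

In a real client the compensating write should itself be retried, and (per Tyler's point below) the old value must have been read before the failed write, not after.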
Hi All,
I have a general understanding of how Cassandra works and basic knowledge
of Python, but I want to conduct a session in which I take the audience
from intermediate to advanced. Which contents would you recommend for
the workshop with Python? If you will send me
I think these scenarios are still possible even when we are writing at
QUORUM, if we have dropped mutations in our cluster.
It was very strange in our case: we had RF=3 and READ/WRITE
CL=QUORUM; we had dropped mutations for a long time, but we never faced any
scenario like scenario 1 when
Another option is Brian's cassandra loader:
https://github.com/brianmhess/cassandra-loader
All the best,
http://www.datastax.com/
Sebastián Estévez
Solutions Architect | 954 905 8615 | sebastian.este...@datastax.com
I appreciate the thoughts! My issue is that it seems to work perfectly
until the node goes away. Would it matter that I'm mixing Cassandra
versions (2.1.4 and 2.1.5)?
On Tue, Jun 30, 2015 at 5:23 AM, Alain RODRIGUEZ arodr...@gmail.com wrote:
Hi David,
What does a nodetool describecluster
On Tue, Jun 30, 2015 at 12:27 PM, Anuj Wadehra anujw_2...@yahoo.co.in
wrote:
Agreed, Tyler. I think it's our application's problem. If the client returns a failed
write in spite of retries, the application must have a rollback mechanism to
make sure the old state is restored. The write may have failed because of the
You might want to take a look at CQLSSTableWriter[1] in the Cassandra
source tree.
http://www.datastax.com/dev/blog/using-the-cassandra-bulk-loader-updated
On Tue, Jun 30, 2015 at 1:18 PM, Umut Kocasaraç ukocasa...@gmail.com
wrote:
Hi,
I want to change clustering order column of my table. As
Hi,
I want to change the clustering order column of my table. As far as I know it
is not possible to use the ALTER command, so I have created a new table and I
would like to move the data from the old table to this one.
I am using Cassandra 2.0.7 and there is almost 100 GB of data in the table. Is
there any easy method to
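One generic approach (a sketch under the assumption that you page through the old table by token range and re-insert each row into the new table; the driver calls are omitted) is to split the Murmur3 token ring into contiguous slices and issue one ranged SELECT per slice:

```python
# Murmur3Partitioner tokens span [-2**63, 2**63 - 1].
MIN_TOKEN, MAX_TOKEN = -2**63, 2**63 - 1

def token_slices(n):
    """Split the full token ring into n contiguous (start, end] slices."""
    span = MAX_TOKEN - MIN_TOKEN
    bounds = [MIN_TOKEN + span * i // n for i in range(n + 1)]
    bounds[-1] = MAX_TOKEN  # guard against integer-division rounding
    return list(zip(bounds, bounds[1:]))

# Each slice then becomes a query like:
#   SELECT ... FROM old_table WHERE token(pk) > ? AND token(pk) <= ?
slices = token_slices(16)
print(len(slices), slices[0])
```

Slicing the ring keeps each query bounded and lets several workers copy ranges in parallel, which matters at the 100 GB scale mentioned above.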
Hi all,
To quote Sebastian Estevez in one recent thread: "You said you ran a
nodetool drain before the restart, but your logs show commitlogs replayed.
That does not add up..." The docs seem to generally agree with this: if you
did `nodetool drain` before restarting your node there shouldn't be any
Would it matter that I'm mixing cassandra versions?
From:
http://docs.datastax.com/en/upgrade/doc/upgrade/datastax_enterprise/upgrdLim.html
General upgrade limitations
Do not run nodetool repair.
Do not enable new features.
Do not issue these types of queries during a rolling restart: DDL,
Looking at the debug log, I see
[2015-06-29 23:38:11] [main] DEBUG CqlRecordReader - cqlQuery SELECT
wpid,value FROM qarth_catalog_dev.product_v1 WHERE token(wpid) > ?
AND token(wpid) <= ? LIMIT 10
[2015-06-29 23:38:11] [main] DEBUG CqlRecordReader - created
On Tue, Jun 30, 2015 at 11:59 AM, Dan Kinder dkin...@turnitin.com wrote:
Is this unusual or the same thing others see? Is `nodetool drain` really
supposed to wait until all memtables are flushed and commitlogs are deleted
before it returns?
nodetool drain *should* work the way you suggest -
On Tue, Jun 30, 2015 at 11:16 AM, Tyler Hobbs ty...@datastax.com wrote:
Correct, if you get a WriteTimeout error, you don't know if any replicas
have written the data or not. It's even possible that all replicas wrote
the data but didn't respond to the coordinator in time. I suspect most