C*
When I create a keyspace with pycassa on a multi-node cluster, it takes
some time before all the nodes know about the keyspace.
So, if I do this:
import random
from pycassa.system_manager import SystemManager, SIMPLE_STRATEGY

sm = SystemManager(random.choice(server_list))
sm.create_keyspace(keyspace, SIMPLE_STRATEGY, {'replication_factor': '1'})
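One way to cope with the propagation delay, assuming access to the Thrift `describe_schema_versions` call that pycassa's underlying client exposes, is to poll until every reachable node reports the same schema version before touching the new keyspace. A minimal sketch, with the cluster call stubbed out as a plain callable (the helper name and the stub are illustrative, not pycassa API):

```python
import time

def wait_for_schema_agreement(get_versions, timeout=10.0, interval=0.5):
    """Poll until all reachable nodes report one schema version.

    `get_versions` is any callable returning a dict of
    schema-version -> list of node addresses (the shape Thrift's
    describe_schema_versions returns). Returns True on agreement,
    False if the timeout expires first.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        versions = get_versions()
        # Down nodes are reported under the marker key 'UNREACHABLE';
        # ignore them when counting distinct schema versions.
        live = [v for v in versions if v != 'UNREACHABLE']
        if len(live) == 1:
            return True
        time.sleep(interval)
    return False

# Stubbed demonstration: both nodes already agree on one version.
agreed = wait_for_schema_agreement(
    lambda: {'75eece10-bf48-11e3': ['10.0.0.1', '10.0.0.2']})
```

With a real cluster you would pass a closure around the system manager's schema-version call instead of the stub, and only proceed to use the keyspace once it returns True.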
Thanks all for the help.
I ran the traffic over the weekend; surprisingly, my heap was doing OK
(around 5.7 GB of 8 GB), but GC activity went nuts and dropped the throughput. I
will probably increase the number of nodes.
The other interesting thing I noticed was that there were some objects with
Is setting live ratio to a minimum of 1.0 instead of X supposed to be rare?
Because we're getting it fairly consistently.
On Sat, Jun 1, 2013 at 8:58 PM, Darren Smythe darren1...@gmail.com wrote:
If the amount of remaining time for compaction keeps going up, does this
point to an overloaded node?
On Mon, Jun 3, 2013 at 8:54 AM, Darren Smythe darren1...@gmail.com wrote:
Is setting live ratio to a minimum of 1.0 instead of X supposed to be rare?
Because we're getting it fairly consistently.
Do you have working JNA? If so, my understanding is that message
should be relatively rare.
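For context, the live ratio Cassandra logs here is its estimate of in-heap memtable bytes per serialized byte; a raw value below 1.0 is implausible (live objects can't take less heap than their serialized form), so the estimate is clamped up to 1.0 and that warning is emitted. A sketch of the clamp as I understand it from the 1.x Memtable code (constants assumed, not copied from the source):

```python
# Assumed bounds on the memtable live-ratio estimate.
MIN_SANE_LIVE_RATIO = 1.0
MAX_SANE_LIVE_RATIO = 64.0

def sane_live_ratio(heap_bytes, serialized_bytes):
    """Clamp the measured heap-per-serialized-byte ratio to a sane range.

    A raw ratio below 1.0 triggers the "setting live ratio to minimum
    of 1.0 instead of X" warning from the thread above.
    """
    raw = heap_bytes / serialized_bytes
    return min(MAX_SANE_LIVE_RATIO, max(MIN_SANE_LIVE_RATIO, raw))
```

Seeing the warning consistently means the heap measurement keeps coming out smaller than the serialized size, which, per the reply above, is worth cross-checking against whether JNA is working.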
Hello
I am trying to find the source code, or the algorithm, that Cassandra uses for
storing and loading data. Is there any reference for that?
Regards,
Mahmood
C*,
Is it considered normal for cassandra to experience this error:
ERROR [NonPeriodicTasks:1] 2013-06-03 18:17:05,374 SSTableDeletingTask.java (line 72) Unable to delete
/raid0/cassandra/data/KEYSPACE/CF/KEYSPACE-CF-ic-19-Data.db (it will be
removed on server restart; we'll also retry after
On Mon, Jun 3, 2013 at 11:57 AM, John R. Frank j...@mit.edu wrote:
Is it considered normal for cassandra to experience this error:
ERROR [NonPeriodicTasks:1] 2013-06-03 18:17:05,374 SSTableDeletingTask.java
(line 72) Unable to delete
/raid0/cassandra/data/KEYSPACE/CF/KEYSPACE-CF-ic-19-Data.db
I am a bit confused when using the consistency level for multi datacenter
setup. Following is my setup:
I have 4 nodes; they are set up as follows:
Node 1 DC 1 - N1DC1
Node 2 DC 1 - N2DC1
Node 1 DC 2 - N1DC2
Node 2 DC 2 - N2DC2
I set up a delay between the two datacenters (DC1 and DC2), around 1
What happens when you use CL=TWO?
Dean
From: srmore comom...@gmail.com
Reply-To: user@cassandra.apache.org
To: user@cassandra.apache.org
Date: Monday, June 3, 2013 2:09 PM
Also, we had to put a fix into Cassandra so it removed slow nodes from the
list of nodes to read from. With that fix our QUORUM (not LOCAL_QUORUM) started
working again, and it would easily take the other DC's nodes out of the
read list for you as well. I need to circle back to with my
With CL=TWO it appears that one node randomly picks a node from the other
datacenter to get the data, i.e. one node in the datacenter consistently
underperforms.
On Mon, Jun 3, 2013 at 3:21 PM, Hiller, Dean dean.hil...@nrel.gov wrote:
What happens when you use CL=TWO?
Dean
From: srmore
We observed that as well; please let us know what you find out, it would be
extremely helpful. There is also a property that you can play with to
take care of slow nodes:
*dynamic_snitch_badness_threshold*.
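For reference, that setting lives in cassandra.yaml alongside the other dynamic snitch knobs; 0.1 is the long-standing default, and the values below are only a sketch of the relevant fragment, not a recommendation:

```yaml
# How much worse (as a fraction of the latency score) the preferred
# replica must perform before the dynamic snitch routes reads away
# from it. 0.1 means "route around a node once it scores 10% worse
# than the best alternative".
dynamic_snitch_badness_threshold: 0.1
dynamic_snitch_update_interval_in_ms: 100
dynamic_snitch_reset_interval_in_ms: 600000
```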
What's your replication factor? Do you have RF=2 on both datacenters?
On Mon, Jun 3, 2013 at 10:09 PM, srmore comom...@gmail.com wrote:
I am a bit confused when using the consistency level for multi datacenter
setup. Following is my setup:
I have 4 nodes the way these are set up are
Node 1
Yup, RF is 2 for both the datacenters.
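With RF=2 in each DC (4 replicas total), the arithmetic explains the cross-DC reads: a plain QUORUM is a strict majority of all replicas, floor(4/2)+1 = 3, which can never be satisfied inside one DC and so always waits on at least one remote response; LOCAL_QUORUM with RF=2 needs floor(2/2)+1 = 2, i.e. both local replicas. A quick check:

```python
def quorum(replicas):
    # Cassandra's quorum: a strict majority of the replica count.
    return replicas // 2 + 1

total_rf = 2 + 2          # RF=2 in each of the two datacenters
print(quorum(total_rf))   # QUORUM across both DCs -> 3 replicas
print(quorum(2))          # LOCAL_QUORUM within one DC -> 2 replicas
```

This also shows why CL=TWO behaves oddly here: two responses can come from any two replicas, so the coordinator is free to involve the remote DC whenever the snitch ranks a remote node ahead of a local one.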
On Mon, Jun 3, 2013 at 3:36 PM, Sylvain Lebresne sylv...@datastax.com wrote:
What's your replication factor? Do you have RF=2 on both datacenters?
On Mon, Jun 3, 2013 at 10:09 PM, srmore comom...@gmail.com wrote:
I am a bit confused when using the
Our badness threshold is currently 0.1 (just checked). Our website used to get
slow whenever a node was slow, until we rolled out our own patch.
Dean
From: srmore comom...@gmail.com
Reply-To: user@cassandra.apache.org
Hi all
I am using Puppet to push the cassandra.yaml file, which has the seed nodes
hardcoded. Going forward I don't want to hardcode the seed nodes; I plan to
maintain a list of seed nodes. Since I have a cluster in place, I would populate
this list for now to start with, and next time when I add a
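For context, the hardcoded list being templated lives under `seed_provider` in cassandra.yaml; the addresses below are placeholders, not values from the thread:

```yaml
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      # Comma-separated seed addresses; every node normally
      # carries the same list.
      - seeds: "10.0.0.1,10.0.0.2"
```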
All the documentation that I have read about Cassandra says to keep the
same list of seeds on every node in the cluster. Without this, you can end up
with fragmentation within your cluster, where nodes don't know about other nodes
in the cluster. In your case, sure, the nodes will be in the
@Faraaz check out the comment by Aaron Morton here:
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Seed-Nodes-td6077958.html
Having the same seed list on every node is a good idea, but it is not strictly necessary.
In your case, sure, the nodes will be in the cluster for 10
minutes, but what about sporadic
After some more investigation, it does not appear to be a CL issue. Every
time I start up the node in the other datacenter with the 1 sec delay, my
throughput starts degrading, even with CL=ONE and CL=LOCAL_QUORUM.
I will put the logs on debug, investigate more, and report back my
findings.