> With a single node I get 3K for cassandra 1.0.12 and 1.2.12. So I suspect
> there is some network chatter. I have started looking at the sources, hoping
> to find something.
1.2 is pretty stable; I doubt there is anything in there that makes it run
slower than 1.0. It’s probably something in y…
Quote from
http://www.datastax.com/dev/blog/performance-improvements-in-cassandra-1-2
*"Murmur3Partitioner is NOT compatible with RandomPartitioner, so if you’re
upgrading and using the new cassandra.yaml file, be sure to change the
partitioner back to RandomPartitioner"*
On Thu, Dec 12, 2013 at 11:15 AM, J. Ryan Earl wrote:
> Why did you switch to RandomPartitioner away from Murmur3Partitioner?
> Have you tried with Murmur3?
>
>
>1. # partitioner: org.apache.cassandra.dht.Murmur3Partitioner
>2. partitioner: org.apache.cassandra.dht.RandomPartitioner
Why did you switch to RandomPartitioner away from Murmur3Partitioner? Have
you tried with Murmur3?
1. # partitioner: org.apache.cassandra.dht.Murmur3Partitioner
2. partitioner: org.apache.cassandra.dht.RandomPartitioner
On Fri, Dec 6, 2013 at 10:36 AM, srmore wrote:
On Wed, Dec 11, 2013 at 10:49 PM, Aaron Morton wrote:
> It is the write latency, read latency is ok. Interestingly the latency is low
> when there is one node. When I join other nodes the latency drops about 1/3.
> To be specific, when I start sending traffic to the other nodes the latency
> for all the nodes increases, if I stop traffic to other n…
Thanks Aaron
On Wed, Dec 11, 2013 at 8:15 PM, Aaron Morton wrote:
> Changed memtable_total_space_in_mb to 1024 still no luck.
>
> Reducing memtable_total_space_in_mb will increase the frequency of
> flushing to disk, which will create more for compaction to do and result in
> increased IO.
>
> You should return it to the default.
> Changed memtable_total_space_in_mb to 1024 still no luck.
Reducing memtable_total_space_in_mb will increase the frequency of flushing to
disk, which will create more for compaction to do and result in increased IO.
You should return it to the default.
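If you want to watch that pressure while traffic is running, nodetool tpstats
and nodetool compactionstats show the flush and compaction backlogs, and the
same counters are available over JMX. Something like the rough sketch below
should work; the host/port and the bean/attribute names are just what jconsole
shows on a 1.2-era node, so treat them as assumptions and verify against your
build.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class CompactionBacklog {
    public static void main(String[] args) throws Exception {
        // Default Cassandra JMX port is 7199; change host/port for your node.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // Compactions waiting to run (assumed bean name, as listed by jconsole)
            Object pendingCompactions = mbs.getAttribute(
                    new ObjectName("org.apache.cassandra.db:type=CompactionManager"),
                    "PendingTasks");
            // Memtable flushes queued behind the flush writer (assumed bean name)
            Object pendingFlushes = mbs.getAttribute(
                    new ObjectName("org.apache.cassandra.internal:type=FlushWriter"),
                    "PendingTasks");
            System.out.println("Pending compactions: " + pendingCompactions);
            System.out.println("Pending flushes:     " + pendingFlushes);
        } finally {
            connector.close();
        }
    }
}

If the smaller memtable_total_space_in_mb is the cause, those pending counts
should climb while you are sending traffic and drain once it stops.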
Changed memtable_total_space_in_mb to 1024 still no luck.
On Fri, Dec 6, 2013 at 11:05 AM, Vicky Kak wrote:
> Can you set the memtable_total_space_in_mb value? It is defaulting to 1/3 of
> the heap, which is 8/3 ~ 2.6 GB in capacity.
>
> http://www.datastax.com/dev/blog/whats-new-in-cassandra-1-0-improved-memory-and-disk-space-management
Not long: Uptime (seconds) : 6828
Token: 56713727820156410577229101238628035242
ID : c796609a-a050-48df-bf56-bb09091376d9
Gossip active: true
Thrift active: true
Native Transport active: false
Load : 49.71 GB
Generation No: 1386344053
For how long had the server been up: hours, days, months?
On Fri, Dec 6, 2013 at 10:41 PM, srmore wrote:
> Looks like I am spending some time in GC.
Looks like I am spending some time in GC.
java.lang:type=GarbageCollector,name=ConcurrentMarkSweep
CollectionTime = 51707;
CollectionCount = 103;
java.lang:type=GarbageCollector,name=ParNew
CollectionTime = 466835;
CollectionCount = 21315;
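If those counters cover roughly the same window as the ~6828 seconds of uptime
reported above, ParNew alone accounts for about 466835 / 6828000 ≈ 7% of
wall-clock time (roughly three collections per second at ~22 ms each), with CMS
adding under 1%, so GC could well be part of the added latency.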
On Fri, Dec 6, 2013 at 9:58 AM, Jason Wee wrote:
Can you set the memtable_total_space_in_mb value? It is defaulting to 1/3 of
the heap, which is 8/3 ~ 2.6 GB in capacity.
http://www.datastax.com/dev/blog/whats-new-in-cassandra-1-0-improved-memory-and-disk-space-management
The flushing of 2.6 GB to the disk might slow the performance if it happens
frequently, may…
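As a quick sanity check of that arithmetic, the sketch below just divides the
JVM's reported max heap by three; with -Xmx8G it prints something close to the
~2.6 GB figure (approximate, since maxMemory() is usually a bit under the
configured -Xmx).

public class MemtableDefaultEstimate {
    public static void main(String[] args) {
        // Default memtable_total_space_in_mb is one third of the heap, per the
        // linked post; with an 8 GB heap that is roughly 8192 / 3 ~ 2730 MB.
        long maxHeapMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Approx. default memtable_total_space_in_mb: " + maxHeapMb / 3);
    }
}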
On Fri, Dec 6, 2013 at 9:59 AM, Vicky Kak wrote:
> You have passed the JVM configurations and not the cassandra
> configurations which is in cassandra.yaml.
>
Apologies, I was tuning the JVM and that's what was on my mind.
Here are the cassandra settings http://pastebin.com/uN42GgYT
You have passed the JVM configurations and not the cassandra configurations
which is in cassandra.yaml.
The spikes are not that significant in our case and we are running the
cluster with 1.7 gb heap.
Are these spikes causing any issue at your end?
On Fri, Dec 6, 2013 at 9:10 PM, srmore wrote:
Hi srmore,
Perhaps you could use jconsole and connect to the JVM using JMX. Then, under
the MBeans tab, start inspecting the GC metrics.
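If clicking through jconsole gets tedious, a small program can read the same
java.lang:type=GarbageCollector beans remotely. A rough sketch is below; the
host and port are placeholders (7199 is the default Cassandra JMX port).

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class GcStats {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // Same MBeans jconsole lists under java.lang:type=GarbageCollector
            for (GarbageCollectorMXBean gc : ManagementFactory.getPlatformMXBeans(
                    mbs, GarbageCollectorMXBean.class)) {
                System.out.printf("%s: CollectionCount=%d CollectionTime=%d ms%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            }
        } finally {
            connector.close();
        }
    }
}

Sampling CollectionTime twice and taking the difference gives GC time per
interval, which is easier to compare against the latency spikes than the
cumulative totals.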
/Jason
On Fri, Dec 6, 2013 at 11:40 PM, srmore wrote:
On Fri, Dec 6, 2013 at 9:32 AM, Vicky Kak wrote:
> Hard to say much without knowing about the cassandra configurations.
>
The cassandra configuration is
-Xms8G
-Xmx8G
-Xmn800m
-XX:+UseParNewGC
-XX:+UseConcMarkSweepGC
-XX:+CMSParallelRemarkEnabled
-XX:SurvivorRatio=4
-XX:MaxTenuringThreshold=2
-X…
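For what it's worth, with -Xmn800m and -XX:SurvivorRatio=4 the young generation
works out to roughly 533 MB of eden plus two ~133 MB survivor spaces (800 MB
split 4:1:1), and -XX:MaxTenuringThreshold=2 promotes anything that survives
two minor collections. Under a heavy write load that combination can push a lot
of short-lived objects into the old generation, which would show up as the
ParNew/CMS time discussed above.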
Hard to say much without knowing about the cassandra configurations.
Yes, compactions/GCs could spike the CPU; I had similar behavior with my
setup.
-VK
On Fri, Dec 6, 2013 at 7:40 PM, srmore wrote:
> We have a 3 node cluster running cassandra 1.2.12; they are pretty big
> machines, 64G RAM wit…