throughput.
Regards,
Eric
From: DuyHai Doan [mailto:doanduy...@gmail.com]
Sent: Wednesday, September 24, 2014 00:10
To: user@cassandra.apache.org
Subject: Re: CPU consumption of Cassandra
Nice catch Daniel. The comment from Sylvain explains a lot !
On Tue, Sep 23, 2014 at 11:33 PM, Daniel Chia danc...@coursera.org wrote:
If I had to guess, it could be due in part to inefficiencies in 2.0
it is probably not a solution in my
case. :(
Regards,
Eric
From: Chris Lohfink [mailto:clohf...@blackbirdit.com]
Sent: Monday, September 22, 2014 22:03
To: user@cassandra.apache.org
Subject: Re: CPU consumption of Cassandra
It's going to depend a lot on your data model, but 5-6k is on the low end of what I would expect. N=RF=2 is not really something I would recommend. That said, 93GB is not much data, so the bottleneck may exist more in your data model, queries, or client.
What profiler are you using?
From: Chris Lohfink [clohf...@blackbirdit.com]
Sent: Tuesday, September 23, 2014 19:23
To: user@cassandra.apache.org
Subject: Re: CPU consumption of Cassandra
Well, first off, you shouldn't run the stress tool on the node you're testing. Give it its own box.
With RF=N=2 you're essentially testing a single machine locally, which isn't the best indicator long term (there are optimizations available when reading data that's local).
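To illustrate the advice above, a minimal sketch of driving load from a dedicated box; `node1`/`node2` are placeholder hostnames, and the flag syntax shown is the 2.1-era `cassandra-stress` tool (the 2.0 tool used different options):

```shell
# Run from a separate load-generator machine, never on a cluster node.
# Placeholder hostnames; adjust counts/threads to your workload.
cassandra-stress write n=1000000 -node node1,node2
cassandra-stress read n=1000000 -node node1,node2 -rate threads=50
```

Keeping the load generator off the cluster nodes ensures its own CPU cost doesn't get mixed into the Cassandra CPU numbers being measured.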
the difference of CPU consumption?
Regards,
Eric
Hi,
I'm currently testing Cassandra 2.0.9 (and, since last week, 2.1) under some read-heavy load...
I have 2 Cassandra nodes (RF: 2) running under CentOS 6 with 16GB of RAM and 8 cores.
I have around 93GB of data per node (one 300GB disk with a SAS interface and a rotational speed of
Eric,
We have a new stress tool to help you share your schema for wider benchmarking. See
http://www.datastax.com/dev/blog/improved-cassandra-2-1-stress-tool-benchmark-any-schema
If you wouldn't mind creating a yaml for your schema I would be happy to take a look.
-Jake
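For readers unfamiliar with the profile format the blog post describes, a rough sketch of what such a yaml might look like; the keyspace, table, and column names here are invented placeholders, not Eric's actual schema:

```yaml
# Illustrative cassandra-stress user profile (2.1+). All names below are
# made-up examples -- replace with your real schema before benchmarking.
keyspace: stresscql
keyspace_definition: |
  CREATE KEYSPACE stresscql WITH replication =
    {'class': 'SimpleStrategy', 'replication_factor': 2};
table: eventdata
table_definition: |
  CREATE TABLE eventdata (
    key text,
    ts timestamp,
    value blob,
    PRIMARY KEY (key, ts)
  );
columnspec:
  - name: key
    size: fixed(32)
  - name: value
    size: uniform(64..512)
insert:
  partitions: fixed(1)
queries:
  readone:
    cql: SELECT * FROM eventdata WHERE key = ? LIMIT 10
```

A profile like this would then be driven with something like `cassandra-stress user profile=schema.yaml ops(insert=1,readone=3)`, which mixes writes and reads in a 1:3 ratio.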
On Mon, Sep 22,