It will allow several memtables to queue up, and then it will block
further writes so it doesn't exhaust memory while it flushes.
Perhaps that is what you are seeing. You should check the flush queue
sizes with JMX or nodeprobe tpstats.
On Fri, Feb 26, 2010 at 5:25 AM, Boris Shulman wrote:
> What will be the implications of the fact that cassandra can't keep up
> with the writes? Will the memtables be queued in memory until they are
> flushed?
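For reference, a minimal way to watch those queues from the command line, assuming the 0.5-era tool (it was renamed nodetool in later releases) and a node reachable on the default JMX port:

    bin/nodeprobe -host 127.0.0.1 tpstats

In the output, watch the Pending column of the flush-related pools; a pending count that keeps climbing means flushes are falling behind the incoming writes.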
On Fri, Feb 26, 2010 at 4:49 AM, Boris Shulman wrote:
> I did some analysis using iostat and vmstat, and these are the results
> when the node freezes (I'm not running on a VM; I'm running on a 2-CPU,
> 8-core machine with 12G RAM):
>
> Device:   rrqm/s  wrqm/s    r/s    w/s  rsec/s    wsec/s avgrq-sz avgqu-sz   await  svctm  %util
> sda         0.00 9791.20   0.00  93.60    0.00  94080.00  1005.13   140.46 1589.95  10.69 100.02
What will be the implications of the fact that cassandra can't keep up
with the writes? Will the memtables be queued in memory until they are
flushed?
On Thu, Feb 25, 2010 at 4:56 PM, Jonathan Ellis wrote:
> Are you swapping?
> http://spyced.blogspot.com/2010/01/linux-performance-basics.html
>
> otherwise there's something wrong w/ your vm (?), disk i/o doesn't
> block incoming writes in cassandra
I did some analysis using iostat and vmstat, and these are the results
when the node freezes (I'm not running on a VM; I'm running on a 2-CPU,
8-core machine with 12G RAM):

Device:   rrqm/s  wrqm/s    r/s    w/s  rsec/s    wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda         0.00 9791.20   0.00  93.60    0.00  94080.00  1005.13   140.46 1589.95  10.69 100.02
does that mean that the disk is the bottleneck here?
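A rough reading of those iostat columns (the layout above assumes the standard iostat -x output, reproducible with something like iostat -x 5):

    wrqm/s   9791.20  -> ~9800 write requests/s merged into ~94 large writes/s (w/s)
    wsec/s  94080.00  -> ~46 MB/s written (512-byte sectors)
    avgqu-sz  140.46  -> ~140 requests queued at the device on average
    await    1589.95  -> a request waits ~1.6 s on average
    %util     100.02  -> the device is busy essentially 100% of the time

So yes: the device is saturated with writes for the duration of the freeze.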
On Thu, 25 Feb 2010 08:56:25 -0600 Jonathan Ellis wrote:
JE> Are you swapping?
JE> http://spyced.blogspot.com/2010/01/linux-performance-basics.html
JE> otherwise there's something wrong w/ your vm (?), disk i/o doesn't
JE> block incoming writes in cassandra
If the user has enough memory, can t
Are you swapping?
http://spyced.blogspot.com/2010/01/linux-performance-basics.html
otherwise there's something wrong w/ your vm (?), disk i/o doesn't
block incoming writes in cassandra
On Thu, Feb 25, 2010 at 8:49 AM, Boris Shulman wrote:
> I don't think it is a GC-related issue. There is no correlation between
> the gc times and the freeze times.
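On the swapping question, a minimal check on a stock Linux box is to watch the si/so columns of vmstat while a freeze is happening:

    vmstat 5
    # si = KB/s swapped in from disk, so = KB/s swapped out;
    # sustained nonzero values during a freeze mean the heap is being paged.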
I don't think it is a GC-related issue. There is no correlation between
the gc times and the freeze times. Moreover, I don't see any gc activity
that lasts for more than 0.03 sec. But there is a correlation with the
disk flush operations: I've noticed that the system freezes each time
my commit log
Then you should check the GC timing with the -verbose:gc option (see
http://wiki.apache.org/cassandra/RunningCassandra for how to modify the
jvm options) and look for a correlation.
On Thu, Feb 25, 2010 at 8:09 AM, Boris Shulman wrote:
> In these tests I perform only write operations, no reads.
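For reference, a sketch of that change, assuming the JVM options live in cassandra.in.sh as in the stock scripts (-verbose:gc and the Print flags are standard HotSpot options, and appending to JVM_OPTS like this is just one way to do it):

    JVM_OPTS="$JVM_OPTS \
            -verbose:gc \
            -XX:+PrintGCDetails \
            -XX:+PrintGCTimeStamps"

Then compare the timestamps of any long pauses in the GC log against the freeze times.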
In these tests I perform only write operations, no reads.
On Thu, Feb 25, 2010 at 4:07 PM, Jonathan Ellis wrote:
> The only kind of "freeze" that makes sense there is that your reads are
> i/o-bound and the extra disk activity is killing you. In that case the
> fix is to add more RAM, or give less to the JVM so the OS can use more
> for buffer cache.
The only kind of "freeze" that makes sense there is that your reads are
i/o-bound and the extra disk activity is killing you. In that case the
fix is to add more RAM, or give less to the JVM so the OS can use more
for buffer cache.
On Thu, Feb 25, 2010 at 8:01 AM, Boris Shulman wrote:
> In my case the cassandra node freezes while a memtable flush or a
> compaction is running.
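To make that trade-off concrete (numbers assumed from the original post's 16G box with a 10G heap): dropping the heap in cassandra.in.sh, e.g.

    -Xms4G \
    -Xmx4G \

leaves roughly 12G of the machine for the kernel's buffer cache, which is what actually serves hot SSTable reads; the JVM only needs enough headroom for memtables, caches, and compaction.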
In my case the cassandra node freezes while a memtable flush or a
compaction is running. How can I tune the cassandra configuration to
avoid this behavior? I've tried both a large memtable size (1G) and a
small one (128M), but in every case I get some sort of freeze.
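The relevant knobs, assuming the 0.5-era storage-conf.xml names (later releases renamed them), look something like:

    <MemtableSizeInMB>128</MemtableSizeInMB>
    <MemtableObjectCountInMillions>0.3</MemtableObjectCountInMillions>
    <MemtableFlushAfterMinutes>60</MemtableFlushAfterMinutes>

Note that whichever threshold trips first triggers the flush, so raising MemtableSizeInMB to 1G changes nothing if the object-count threshold fires long before the size one.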
On Wed, Feb 24, 2010 at 8:46 PM, Santal Li wrote:
> BTW: Somebody on my team told me that if the data managed by cassandra gets too
> huge (>15x the heap space), it will cause performance issues. Is this true?
It really has more to do with what your hot data set is than absolute size.
Once any syst
Thank you, that helps.
I have about 150G of data on each node, so I set the heap to 8G, just
to give cassandra enough space to cache the key index.
Reducing the heap size seems worth trying, as does splitting one
cassandra instance into 2 sub-nodes on one physical server,
On Fri, Feb 19, 2010 at 7:40 PM, Santal Li wrote:
> I hit almost the same thing as you. When I run benchmark write tests,
> sometimes one Cassandra node will freeze, and the other nodes will consider
> it down and then back up after 30+ seconds. I am using 5 nodes, each with 8G
> of memory for the java heap.
>
> From my investigation
haproxy should be fine.
normal GCs aren't a problem; you don't need to worry about those. what
is a problem is when you shove more requests into cassandra than it
can handle, so it tries to GC to get enough memory to handle them,
then you shove even more requests in, so it GCs again, and it spirals
out of control.
I'm still in the experimentation stage, so perhaps forgive this hypothetical
question/idea. I am planning to load balance by putting haproxy in front of
the cassandra cluster. First of all, is that a bad idea?
Secondly, if I have high enough replication and # of nodes, is it possible
and a good idea to
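For what it's worth, a minimal TCP-mode sketch of that haproxy setup, assuming the default Thrift port 9160 and made-up node addresses:

    listen cassandra 0.0.0.0:9160
        mode tcp
        balance roundrobin
        server node1 10.0.0.1:9160 check
        server node2 10.0.0.2:9160 check

Since any node can coordinate a request, plain round-robin over the Thrift port is reasonable; just note that the check here only verifies the TCP port is open, not that the node is actually healthy.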
the GC options are as below:
JVM_OPTS=" \
-ea \
-Xms2G \
-Xmx8G \
-XX:SurvivorRatio=8 \
-XX:TargetSurvivorRatio=90 \
-XX:+AggressiveOpts \
-XX:+UseParNewGC \
-XX:+UseConcMarkSweepGC \
-XX:+CMSParallelRemarkEnabled \
-XX:+
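One variation often suggested for a CMS setup like the above (an assumption on my part, not something posted in this thread): pin the heap and start CMS earlier, so the collector isn't racing a heap that is both growing and filling up:

    -Xms8G \
    -Xmx8G \
    -XX:CMSInitiatingOccupancyFraction=75 \
    -XX:+UseCMSInitiatingOccupancyOnly \

With -Xms2G/-Xmx8G as posted, the JVM also pauses to grow the heap on the way up.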
Are you using the old deb package? Because that had broken GC settings.
On Fri, Feb 19, 2010 at 10:40 PM, Santal Li wrote:
> I hit almost the same thing as you. When I run benchmark write tests,
> sometimes one Cassandra node will freeze, and the other nodes will consider
> it down and then back up after 30+ seconds.
I hit almost the same thing as you. When I run benchmark write tests,
sometimes one Cassandra node will freeze, and the other nodes will consider
it down and then back up after 30+ seconds. I am using 5 nodes, each with 8G
of memory for the java heap.
From my investigation, it was caused by the GC threads, because I start the
On Tue, Feb 16, 2010 at 6:25 AM, Boris Shulman wrote:
> Hello, I'm running some benchmarks on 2 cassandra nodes, each running
> on an 8-core machine with 16G RAM and 10G for the Java heap. I've noticed
> that during benchmarks with numerous writes cassandra just freezes for
> several minutes (in those benchmarks I'm writing batches of 10 columns
> with 1K of data each for ev
Hello, I'm running some benchmarks on 2 cassandra nodes, each running
on an 8-core machine with 16G RAM and 10G for the Java heap. I've noticed
that during benchmarks with numerous writes cassandra just freezes for
several minutes (in those benchmarks I'm writing batches of 10 columns
with 1K of data each for ev
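For concreteness, a rough sketch of the kind of batch write being described, using the Thrift batch_insert call of the 0.5/0.6-era API (the generated classes moved packages between releases, Keyspace1/Standard1 are the stock sample schema, and the host, key, and column names here are made up):

    import java.util.*;
    import org.apache.cassandra.thrift.*;            // org.apache.cassandra.service in 0.5
    import org.apache.thrift.protocol.TBinaryProtocol;
    import org.apache.thrift.transport.TSocket;

    public class BatchWriteBench {
        public static void main(String[] args) throws Exception {
            TSocket socket = new TSocket("localhost", 9160);
            Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(socket));
            socket.open();

            byte[] payload = new byte[1024];         // 1K of data per column
            long ts = System.currentTimeMillis();

            // one batch: 10 columns under a single key
            List<ColumnOrSuperColumn> cols = new ArrayList<ColumnOrSuperColumn>();
            for (int i = 0; i < 10; i++) {
                ColumnOrSuperColumn cosc = new ColumnOrSuperColumn();
                cosc.setColumn(new Column(("col" + i).getBytes(), payload, ts));
                cols.add(cosc);
            }
            Map<String, List<ColumnOrSuperColumn>> cfmap =
                Collections.singletonMap("Standard1", cols);

            client.batch_insert("Keyspace1", "key42", cfmap, ConsistencyLevel.ONE);
            socket.close();
        }
    }

Writing batches like this in a tight loop from several client threads is exactly the load that backs up the flush queue discussed earlier in the thread.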