> *From: *Mikhail Strebkov
> *Reply-To: *"user@cassandra.apache.org"
> *Date: *Wednesday, December 9, 2
> 8G is probably too small for a G1 heap. Raise your heap or try CMS instead.
>
> 71% of your heap is collections – may be a weird data model quirk, but try
> CMS first and see if that behaves better.
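The CMS suggestion above maps to a few lines in cassandra-env.sh. A minimal sketch, assuming stock Cassandra 2.1 packaging; the heap sizes are illustrative, and you should check these flags against the defaults your version actually ships:

```shell
# cassandra-env.sh fragment: replace the G1 flags with CMS, per the advice above.
# Sizes below are examples only; tune MAX_HEAP_SIZE and HEAP_NEWSIZE for your workload.
MAX_HEAP_SIZE="8G"
HEAP_NEWSIZE="2G"          # young gen; CMS needs this set explicitly

JVM_OPTS="$JVM_OPTS -XX:+UseParNewGC"
JVM_OPTS="$JVM_OPTS -XX:+UseConcMarkSweepGC"
JVM_OPTS="$JVM_OPTS -XX:+CMSParallelRemarkEnabled"
JVM_OPTS="$JVM_OPTS -XX:CMSInitiatingOccupancyFraction=75"
JVM_OPTS="$JVM_OPTS -XX:+UseCMSInitiatingOccupancyOnly"
```

These CMS flags mirror what stock cassandra-env.sh enabled by default before G1 was common; remove any `-XX:+UseG1GC` line at the same time so the two collectors are not both requested.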
Hi everyone,
While upgrading our 5-machine cluster from DSE 4.7.1 (Cassandra 2.1.8) to
DSE 4.8.2 (Cassandra 2.1.11), one of the nodes fails to start with an
OutOfMemoryError.
We're using HotSpot 64-Bit Server VM 1.8.0_45 and the G1 garbage collector
with an 8 GiB heap.
Average node size is
We had the same issue with a huge number of sstables on this version and on 2.1.3.
After updating to 2.1.8 the issue slowly faded out (it took a long time for
Cassandra to compact thousands of sstables).
On Mon, Jul 27, 2015 at 4:05 AM, Peer, Oded oded.p...@rsa.com wrote:
It’s noticeable from the
On Tue, Jul 14, 2015 at 4:40 PM, Mikhail Strebkov streb...@gmail.com
wrote:
Looks like it dies with OOM:
https://gist.github.com/kluyg/03785041e16333015c2c
On Tue, Jul 14, 2015 at 12:01 PM, Mikhail Strebkov streb...@gmail.com
wrote:
OpsCenter 5.1.3 and datastax-agent-5.1.3-standalone.jar
On Tue, Jul 14, 2015 at 12:00 PM, Sebastian Estevez
sebastian.este
Hi everyone,
Recently I've noticed that most of the nodes have OpsCenter agents running
at 300% CPU. Each node has 4 cores, so the agents are using 75% of the total
available CPU.
We're running 5 nodes with open-source Cassandra 2.1.8 in AWS using a
community AMI. The OpsCenter version is 5.1.3. We're using
fixed the issue.
Hi Saladi,
Recently I faced a similar problem, I had a lot of CFs to fix, so I wrote
this: https://github.com/kluyg/cassandra-schema-fix
I think it can be useful to you.
Kind regards,
Mikhail
On Mon, Jul 13, 2015 at 11:51 AM, Saladi Naidu naidusp2...@yahoo.com
wrote:
Sebastian,
Thank you so
Hi Kevin,
Here is what we use; it works for us in production:
https://gist.github.com/kluyg/46ae3dee9000a358edf9
To unit test it, you'll need to check that your custom retry policy returns
the RetryDecision you want for the inputs.
To verify that it works in production, you can wrap it in a
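The unit-test idea above can be sketched directly. The enum and the `SimpleRetryPolicy` class below are illustrative stand-ins that mimic the shape of the driver's retry-policy callback, not the actual DataStax driver API; the assertion style carries over unchanged to a real policy like the one in the gist:

```java
// Illustrative stand-in for the driver's retry decision; the real driver's
// RetryPolicy callbacks return a decision object in the same way.
enum RetryDecision { RETRY, RETHROW }

// Hypothetical policy: retry a read timeout once when enough replicas
// answered but the data was not retrieved; otherwise rethrow to the caller.
class SimpleRetryPolicy {
    private final int maxRetries;

    SimpleRetryPolicy(int maxRetries) {
        this.maxRetries = maxRetries;
    }

    RetryDecision onReadTimeout(int requiredResponses, int receivedResponses,
                                boolean dataRetrieved, int nbRetry) {
        if (nbRetry >= maxRetries) {
            return RetryDecision.RETHROW;   // give up after maxRetries attempts
        }
        if (receivedResponses >= requiredResponses && !dataRetrieved) {
            return RetryDecision.RETRY;     // replicas are alive, worth one retry
        }
        return RetryDecision.RETHROW;
    }
}
```

The test then just calls the callback directly with each input combination and checks the returned decision; no cluster or driver mocking is needed for that part.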
We have observed the same issue in our production Cassandra cluster (5 nodes in
one DC). We use Cassandra 2.1.3 (I joined the list too late to realize we
shouldn't use 2.1.x yet) on Amazon machines (created from a community AMI).
In addition to count variations of 5 to 10%, we observe
It is open sourced but works only with C* 1.x as far as I know.
Mikhail
On Tuesday, January 27, 2015, Mohammed Guller moham...@glassbeam.com
wrote:
I believe Aegisthus is open sourced.
Mohammed
*From:* Jan [mailto:cne...@yahoo.com]
, 2014 at 4:30 PM, Robert Coli rc...@eventbrite.com wrote:
On Tue, Dec 30, 2014 at 3:12 PM, Mikhail Strebkov streb...@gmail.com
wrote:
We have a table in our production Cassandra that is spread across 11369
SSTables. The average SSTable count for the other tables is around 15, and
the read latency
, 3:836, 4:122, }
On Wed, Dec 31, 2014 at 10:11 AM, Robert Coli rc...@eventbrite.com wrote:
On Wed, Dec 31, 2014 at 12:01 AM, Mikhail Strebkov streb...@gmail.com
wrote:
I set compaction_throughput_mb_per_sec to 0 and restarted Cassandra.
You can also set this online w/ nodetool, fyi
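The online route mentioned above is a one-liner, assuming `nodetool` is on the node's PATH; the value is in MB/s, and 0 disables throttling just like the yaml setting does:

```shell
# Disable compaction throttling without a restart (equivalent to
# compaction_throughput_mb_per_sec: 0 in cassandra.yaml).
nodetool setcompactionthroughput 0

# Watch the pending compactions drain.
nodetool compactionstats
```

Unlike the yaml change, this setting does not survive a restart, so keep the cassandra.yaml edit too if you want it to be permanent.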
I see, well that's what I expected, but it should still improve read
latency, since it will reduce the number of disk seeks per row request. Is
my assumption correct?
On Wed, Dec 31, 2014 at 11:51 AM, Robert Coli rc...@eventbrite.com wrote:
On Wed, Dec 31, 2014 at 11:35 AM, Mikhail Strebkov
Hi,
We have a table in our production Cassandra that is spread across 11369
SSTables. The average SSTable count for the other tables is around 15, and
the read latency for them is much smaller.
I tried to run a manual compaction (nodetool compact my_keyspace my_table)
but then the node starts spending