Got it.
Thank you for your help, Jon and Jeff!

> Is there a reason why you’re picking Cassandra for this dataset?
The decision wasn’t made by me; I guess C* was chosen because significant growth 
was planned.

Regards,
Kyrill

From: Jeff Jirsa <jji...@gmail.com>
Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Date: Wednesday, March 6, 2019 at 22:19
To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Subject: Re: Maximum memory usage reached

Also, that particular logger is for the internal chunk / page cache. If it 
can’t allocate from within that pool, it’ll just use a normal ByteBuffer.

It’s not really a problem, but if you see performance suffer, upgrade to the latest 
3.11.4; there was a bit of a performance improvement in the case where that cache 
fills up.
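
For reference, that pool is sized by file_cache_size_in_mb in cassandra.yaml; when 
unset it defaults to 512MB (or 1/4 of the heap, whichever is smaller), which is where 
the 512.000MiB in your log lines comes from. If you ever want to give it more room, 
it’s a one-line change. A minimal sketch, assuming the stock package layout (the path 
and the size are just illustrations):

    # cassandra.yaml (often /etc/cassandra/cassandra.yaml, depending on the install)
    # Off-heap buffer pool used for reading sstable chunks; unset means 512MB here.
    file_cache_size_in_mb: 1024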

--
Jeff Jirsa


On Mar 6, 2019, at 11:40 AM, Jonathan Haddad <j...@jonhaddad.com> wrote:
That’s not an error. To the left of the log message is the severity level: INFO.
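
If you want to double-check that nothing is actually failing, filtering the log down 
to warnings and errors is usually enough. A quick sketch, assuming the default 
packaged log location (adjust the path for your install):

    # show only WARN and ERROR entries
    grep -E 'WARN|ERROR' /var/log/cassandra/system.log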

Generally, I don’t recommend running Cassandra with only 2GB of RAM, or for small 
datasets that can easily fit in memory. Is there a reason why you’re picking 
Cassandra for this dataset?
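
If you do stick with Cassandra and can give the nodes more memory, the heap is set in 
conf/cassandra-env.sh (or via jvm.options). A minimal sketch; the sizes below are 
only an illustration, not a sizing recommendation:

    # conf/cassandra-env.sh: set both values together, per the comments in that file
    MAX_HEAP_SIZE="4G"
    HEAP_NEWSIZE="800M"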

On Thu, Mar 7, 2019 at 8:04 AM Kyrylo Lebediev <klebed...@conductor.com> wrote:
Hi All,

We have a tiny 3-node cluster:
C* version 3.9 (I know 3.11 is more stable, but we can’t upgrade immediately)
HEAP_SIZE is 2G
JVM options are default
All settings in cassandra.yaml are default (file_cache_size_in_mb is not set)

Data per node – just ~1 GB

We’re getting the following error messages:

DEBUG [CompactionExecutor:87412] 2019-03-06 11:00:13,545 CompactionTask.java:150 - Compacting (ed4a4d90-4028-11e9-adc0-230e0d6622df) [/cassandra/data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/mc-23248-big-Data.db:level=0, /cassandra/data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/mc-23247-big-Data.db:level=0, /cassandra/data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/mc-23246-big-Data.db:level=0, /cassandra/data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/mc-23245-big-Data.db:level=0, ]
DEBUG [CompactionExecutor:87412] 2019-03-06 11:00:13,582 CompactionTask.java:230 - Compacted (ed4a4d90-4028-11e9-adc0-230e0d6622df) 4 sstables to [/cassandra/data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/mc-23249-big,] to level=0.  6.264KiB to 1.485KiB (~23% of original) in 36ms.  Read Throughput = 170.754KiB/s, Write Throughput = 40.492KiB/s, Row Throughput = ~106/s.  194 total partitions merged to 44.  Partition merge counts were {1:18, 4:44, }
INFO  [IndexSummaryManager:1] 2019-03-06 11:00:22,007 IndexSummaryRedistribution.java:75 - Redistributing index summaries
INFO  [pool-1-thread-1] 2019-03-06 11:11:24,903 NoSpamLogger.java:91 - Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB
INFO  [pool-1-thread-1] 2019-03-06 11:26:24,926 NoSpamLogger.java:91 - Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB
INFO  [pool-1-thread-1] 2019-03-06 11:41:25,010 NoSpamLogger.java:91 - Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB
INFO  [pool-1-thread-1] 2019-03-06 11:56:25,018 NoSpamLogger.java:91 - Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB

What’s interesting is that the “Maximum memory usage reached” messages appear every 
15 minutes.
A reboot temporarily resolves the issue, but it reappears after some time.

We checked, and there are no huge partitions (max partition size is ~2 MB).

How can such a small amount of data cause this issue?
How can we debug it further?
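
Happy to collect and share more diagnostics if that helps. For example (a rough 
sketch; the keyspace name is just a placeholder):

    # per-table stats, including "Compacted partition maximum bytes"
    nodetool tablestats my_keyspace

    # overall node view: load, heap and off-heap usage
    nodetool info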


Regards,
Kyrill


--
Jon Haddad
http://www.rustyrazorblade.com
twitter: rustyrazorblade
