Re: "Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB"

2019-12-04 Thread Reid Pinchback
Probably helps to think of how swap actually functions.  It has a valid place, 
so long as the behavior of the kernel and the OOM killer are understood.

You can have a lot of cold pages that have nothing at all to do with C*.  If 
you look at where memory goes, it isn’t surprising to see things that the 
kernel finds it can page out, leaving RAM for better things.  I’ve seen crond 
soak up a lot of memory, and Dell’s assorted memory-bloated tooling, for 
example. Anything that is truly cold, swap is your friend because those things 
are infrequently used… swapping them in and out leaves more memory on average 
for what you want.  However, that’s not huge numbers, that could be something 
like a half gig of RAM kept routinely free, depending on the assorted tooling 
you have as a baseline install for servers.

If swap exists to avoid the OOM killer on truly active processes, the returns 
there diminish rapidly. Within seconds you’ll find you can’t even ssh into a 
box to investigate. In something like a traditional database it’s worth the 
pain because there are multiple child processes to the rdbms, and the OOM 
killer preferentially targets big process families.  Databases can go into a 
panic if you toast a child, and you have a full-blown recovery on your hands.  
Fortunately the more mature databases give you knobs for memory tuning, like 
being able to pin particular tables in memory if they are critical; anything 
not pinned (via madvise I believe) can get tossed when under pressure.

The situation is a bit different with C*.  By design, you have replicas that 
the clients automatically find, and things like speculative retry cause 
processing to skip over the slowpokes. The better-slow-than-dead argument seems 
more tenuous to me here than for an rdbms.  And if you have an SLA based on 
latency, you’ll never meet it if you have page faults happening during memory 
references in the JVM. So if you have swappiness enabled, probably best to keep 
it tuned low.  That way a busy C* JVM hopefully is one of the last victims in 
the race to shove pages to swap.
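
For concreteness, the kind of kernel tuning meant here, as a minimal sketch (the value is illustrative, not a recommendation):

  # check the current setting
  cat /proc/sys/vm/swappiness

  # keep the kernel reluctant to swap; low but non-zero, so truly cold pages can still be paged out
  sudo sysctl vm.swappiness=1

  # persist across reboots
  echo 'vm.swappiness = 1' | sudo tee /etc/sysctl.d/99-swappiness.conf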



From: Shishir Kumar 
Reply-To: "user@cassandra.apache.org" 
Date: Wednesday, December 4, 2019 at 8:04 AM
To: "user@cassandra.apache.org" 
Subject: Re: "Maximum memory usage reached (512.000MiB), cannot allocate chunk 
of 1.000MiB"

Message from External Sender
Correct. Normally one should avoid this, as performance might degrade, but 
system will not die (until process gets paged out).

In production we haven't done this (just changed mmap_index_only). We have an 
environment which gets used for customer to train/beta test that grows rapidly. 
Investing on infra do not make sense from cost prospective, so swap as option.

But here if environment is up running it will be interesting to understand what 
is consuming memory and is infra sized correctly.

-Shishir
On Wed, 4 Dec 2019, 16:13 Hossein Ghiyasi Mehr <ghiyasim...@gmail.com> wrote:
"3. Though Datastax do not recommended and recommends Horizontal scale, so 
based on your requirement alternate old fashion option is to add swap space."
Hi Shishir,
swap isn't recommended by DataStax!

---
VafaTech.com - A Total Solution for Data Gathering & Analysis
---


On Tue, Dec 3, 2019 at 5:53 PM Shishir Kumar <shishirroy2...@gmail.com> wrote:
Options: Assuming model and configurations are good and Data size per node less 
than 1 TB (though no such Benchmark).

1. Infra scale for memory
2. Try to change disk_access_mode to mmap_index_only.
In this case you should not have any in memory DB tables.
3. Though Datastax do not recommended and recommends Horizontal scale, so based 
on your requirement alternate old fashion option is to add swap space.

-Shishir

On Tue, 3 Dec 2019, 15:52 John Belliveau <belliveau.j...@gmail.com> wrote:
Reid,

I've only been working with Cassandra for 2 years, and this echoes my 
experience as well.

Regarding the cache use, I know every use case is different, but have you 
experimented and found any performance benefit to increasing its size?

Thanks,
John Belliveau

On Mon, Dec 2, 2019, 11:07 AM Reid Pinchback <rpinchb...@tripadvisor.com> wrote:
Rahul, if my memory of this is correct, that particular logging message is 
noisy, the cache is pretty much always used to its limit (and why not, it’s a 
cache, no point in using less than you have).

No matter what value you set, you’ll just change the “reached (….)” part of it. 
 I think what would help you more is to work with the team(s) that have apps 
depending upon C* and decide what your performance SLA is with them.  If you 
are meeting your SLA, you don’t care about noisy messages.  If you aren’t 
meeting your SLA, then the noisy messages become sources of ideas to look at.


Re: "Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB"

2019-12-04 Thread Shishir Kumar
Correct. Normally one should avoid this, as performance might degrade, but
the system will not die (until the process gets paged out).

In production we haven't done this (we just changed disk_access_mode to
mmap_index_only). We have an environment used by customers to train/beta
test, and it grows rapidly. Investing in infra there does not make sense
from a cost perspective, so swap is an option.

But here, if the environment is up and running, it will be interesting to
understand what is consuming memory and whether the infra is sized correctly.

-Shishir

On Wed, 4 Dec 2019, 16:13 Hossein Ghiyasi Mehr, 
wrote:

> "3. Though Datastax do not recommended and recommends Horizontal scale, so
> based on your requirement alternate old fashion option is to add swap
> space."
> Hi Shishir,
> swap isn't recommended by DataStax!
>
> *---*
> *VafaTech.com - A Total Solution for Data Gathering & Analysis*
> *---*
>
>
> On Tue, Dec 3, 2019 at 5:53 PM Shishir Kumar 
> wrote:
>
>> Options: Assuming model and configurations are good and Data size per
>> node less than 1 TB (though no such Benchmark).
>>
>> 1. Infra scale for memory
>> 2. Try to change disk_access_mode to mmap_index_only.
>> In this case you should not have any in memory DB tables.
>> 3. Though Datastax do not recommended and recommends Horizontal scale, so
>> based on your requirement alternate old fashion option is to add swap space.
>>
>> -Shishir
>>
>> On Tue, 3 Dec 2019, 15:52 John Belliveau, 
>> wrote:
>>
>>> Reid,
>>>
>>> I've only been working with Cassandra for 2 years, and this echoes my
>>> experience as well.
>>>
>>> Regarding the cache use, I know every use case is different, but have
>>> you experimented and found any performance benefit to increasing its size?
>>>
>>> Thanks,
>>> John Belliveau
>>>
>>>
>>> On Mon, Dec 2, 2019, 11:07 AM Reid Pinchback 
>>> wrote:
>>>
>>>> Rahul, if my memory of this is correct, that particular logging message
>>>> is noisy, the cache is pretty much always used to its limit (and why not,
>>>> it’s a cache, no point in using less than you have).
>>>>
>>>>
>>>>
>>>> No matter what value you set, you’ll just change the “reached (….)”
>>>> part of it.  I think what would help you more is to work with the team(s)
>>>> that have apps depending upon C* and decide what your performance SLA is
>>>> with them.  If you are meeting your SLA, you don’t care about noisy
>>>> messages.  If you aren’t meeting your SLA, then the noisy messages become
>>>> sources of ideas to look at.
>>>>
>>>>
>>>>
>>>> One thing you’ll find out pretty quickly.  There are a lot of knobs you
>>>> can turn with C*, too many to allow for easy answers on what you should
>>>> do.  Figure out what your throughput and latency SLAs are, and you’ll know
>>>> when to stop tuning.  Otherwise you’ll discover that it’s a rabbit hole you
>>>> can dive into and not come out of for weeks.
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> *From: *Hossein Ghiyasi Mehr 
>>>> *Reply-To: *"user@cassandra.apache.org" 
>>>> *Date: *Monday, December 2, 2019 at 10:35 AM
>>>> *To: *"user@cassandra.apache.org" 
>>>> *Subject: *Re: "Maximum memory usage reached (512.000MiB), cannot
>>>> allocate chunk of 1.000MiB"
>>>>
>>>>
>>>>
>>>> *Message from External Sender*
>>>>
>>>> It may be helpful:
>>>> https://thelastpickle.com/blog/2018/08/08/compression_performance.html
>>>>
>>>> It's complex. Simple explanation, cassandra keeps sstables in memory
>>>> based on chunk size and sstable parts. It manage loading new sstables to
>>>> memory based on requests on different sstables correctly . You should be
>>>> worry about it (sstables loaded in memory)

Re: "Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB"

2019-12-04 Thread Hossein Ghiyasi Mehr
"3. Though Datastax do not recommended and recommends Horizontal scale, so
based on your requirement alternate old fashion option is to add swap
space."
Hi Shishir,
swap isn't recommended by DataStax!

*---*
*VafaTech.com - A Total Solution for Data Gathering & Analysis*
*---*


On Tue, Dec 3, 2019 at 5:53 PM Shishir Kumar 
wrote:

> Options: Assuming model and configurations are good and Data size per node
> less than 1 TB (though no such Benchmark).
>
> 1. Infra scale for memory
> 2. Try to change disk_access_mode to mmap_index_only.
> In this case you should not have any in memory DB tables.
> 3. Though Datastax do not recommended and recommends Horizontal scale, so
> based on your requirement alternate old fashion option is to add swap space.
>
> -Shishir
>
> On Tue, 3 Dec 2019, 15:52 John Belliveau, 
> wrote:
>
>> Reid,
>>
>> I've only been working with Cassandra for 2 years, and this echoes my
>> experience as well.
>>
>> Regarding the cache use, I know every use case is different, but have you
>> experimented and found any performance benefit to increasing its size?
>>
>> Thanks,
>> John Belliveau
>>
>>
>> On Mon, Dec 2, 2019, 11:07 AM Reid Pinchback 
>> wrote:
>>
>>> Rahul, if my memory of this is correct, that particular logging message
>>> is noisy, the cache is pretty much always used to its limit (and why not,
>>> it’s a cache, no point in using less than you have).
>>>
>>>
>>>
>>> No matter what value you set, you’ll just change the “reached (….)” part
>>> of it.  I think what would help you more is to work with the team(s) that
>>> have apps depending upon C* and decide what your performance SLA is with
>>> them.  If you are meeting your SLA, you don’t care about noisy messages.
>>> If you aren’t meeting your SLA, then the noisy messages become sources of
>>> ideas to look at.
>>>
>>>
>>>
>>> One thing you’ll find out pretty quickly.  There are a lot of knobs you
>>> can turn with C*, too many to allow for easy answers on what you should
>>> do.  Figure out what your throughput and latency SLAs are, and you’ll know
>>> when to stop tuning.  Otherwise you’ll discover that it’s a rabbit hole you
>>> can dive into and not come out of for weeks.
>>>
>>>
>>>
>>>
>>>
>>> *From: *Hossein Ghiyasi Mehr 
>>> *Reply-To: *"user@cassandra.apache.org" 
>>> *Date: *Monday, December 2, 2019 at 10:35 AM
>>> *To: *"user@cassandra.apache.org" 
>>> *Subject: *Re: "Maximum memory usage reached (512.000MiB), cannot
>>> allocate chunk of 1.000MiB"
>>>
>>>
>>>
>>> *Message from External Sender*
>>>
>>> It may be helpful:
>>> https://thelastpickle.com/blog/2018/08/08/compression_performance.html
>>>
>>> It's complex. Simple explanation, cassandra keeps sstables in memory
>>> based on chunk size and sstable parts. It manage loading new sstables to
>>> memory based on requests on different sstables correctly . You should be
>>> worry about it (sstables loaded in memory)
>>>
>>>
>>> *VafaTech.com - A Total Solution for Data Gathering & Analysis*
>>>
>>>
>>>
>>>
>>>
>>> On Mon, Dec 2, 2019 at 6:18 PM Rahul Reddy 
>>> wrote:
>>>
>>> Thanks Hossein,
>>>
>>>
>>>
>>> How does the chunks are moved out of memory (LRU?) if it want to make
>>> room for new requests to get chunks?if it has mechanism to clear chunks
>>> from cache what causes to cannot allocate chunk? Can you point me to any
>>> documention?
>>>
>>>
>>>
>>> On Sun, Dec 1, 2019, 12:03 PM Hossein Ghiyasi Mehr <
>>> ghiyasim...@gmail.com> wrote:
>>>
>>> Chunks are part of sstables. When there is enough space in memory to
>>> cache them, read performance will increase if application requests it
>>> again.
>>>
>>>
>>>
>>> Your real answer is application dependent. For example write heavy
>>> applications are different than read heavy or read-write heavy. Real time
>>> applications are different than time series data environments and ... .
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Sun, Dec 1, 2019 at 7:09 PM Rahul Reddy 
>>> wrote:
>>>
>>> Hello,
>>>
>>>
>>>
>>> We are seeing memory usage reached 512 mb and cannot allocate 1MB.  I
>>> see this because file_cache_size_mb by default set to 512MB.
>>>
>>>
>>>
>>> Datastax document recommends to increase the file_cache_size.
>>>
>>>
>>>
>>> We have 32G over all memory allocated 16G to Cassandra. What is the
>>> recommended value in my case. And also when does this memory gets filled up
>>> frequent does nodeflush helps in avoiding this info messages?
>>>
>>>


Re: "Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB"

2019-12-03 Thread Reid Pinchback
John, anything I’ll say will be as a collective ‘we’ since it has been a team 
effort here at Trip, and I’ve just been the hired gun to help out a bit. I’m 
more of a Postgres and Java guy so filter my answers accordingly.

I can’t say we saw as much relevance to tuning chunk cache size, as we did to 
do everything possible to migrate things off-heap.  I haven’t worked with 2.x 
so I don’t know how much these options changed, but in 3.11.x anyways, you 
definitely can migrate a fair bit off-heap.  Our first use case was 3 9’s 
sensitive on latency, which turns out to be a rough go for C* particularly if 
the data model is a bit askew from C*’s sweet spot, as was true for us.  The 
deeper merkle trees introduced somewhere in the 3.0.x series, I think, were a 
bane of our existence; we back-patched the 4.0 work to tune the tree height so 
that we weren’t OOMing nodes during reaper repair runs.
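
For reference, the kind of 3.11.x knobs that “migrate things off-heap” usually touches, as a hedged cassandra.yaml sketch (the setting names are the stock 3.11 ones; the values are placeholders, not what we actually ran):

  # cassandra.yaml (3.11.x) -- illustrative values only
  memtable_allocation_type: offheap_objects   # keep memtable cell data off the JVM heap
  memtable_offheap_space_in_mb: 2048          # cap on off-heap memtable space
  file_cache_size_in_mb: 512                  # chunk cache / buffer pool, already off-heap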

As to Shishir’s notion of using swap, because latency mattered to us, we had 
RAM headroom on the boxes.  We couldn’t use it all without pushing on something 
that was hurting us on 3 9’s.  C* is like this over-constrained problem space 
when it comes to tuning, poking in one place resulted in a twitch somewhere 
else, and we had to see which twitches worked out in our favour. If, like us, 
you have RAM headroom, you’re unlikely to care about swap for obvious reasons.  
All you really need is enough room for the O/S file buffer cache.

Tuning related to I/O and file buffer cache mattered a fair bit.  As did GC 
tuning obviously.  Personally, if I were to look at swap as helpful, I’d be 
debating with myself if the sstables should just remain uncompressed in the 
first place.  After all, swap space is disk space so holding 
compressed+uncompressed at the same time would only make sense if the storage 
footprint was large but the hot data in use was routinely much smaller… yet 
stuck around long enough in a cold state that the kernel would target it to 
swap out.  That’s a lot of if’s to line up to your benefit.  When it comes to a 
system running based on garbage collection, I get skeptical of how effectively 
the O/S will determine what is good to swap. Most of the JVM memory in C* 
churns at a rate that you wouldn’t want swap i/o to combine with if you cared 
about latency.  Not everybody cares about tight variance on latency though, so 
there can be other rationales for tuning that would result in different 
conclusions from ours.

I might have more definitive statements to make in the upcoming months, I’m in 
the midst of putting together my own test cluster for more controlled analysis 
on C* and Kafka tuning. Tuning live environments I’ve found makes it hard to 
control the variables enough for my satisfaction. It can feel like a game of 
empirical whack-a-mole.


From: Shishir Kumar 
Reply-To: "user@cassandra.apache.org" 
Date: Tuesday, December 3, 2019 at 9:23 AM
To: "user@cassandra.apache.org" 
Subject: Re: "Maximum memory usage reached (512.000MiB), cannot allocate chunk 
of 1.000MiB"

Message from External Sender
Options: Assuming model and configurations are good and Data size per node less 
than 1 TB (though no such Benchmark).

1. Infra scale for memory
2. Try to change disk_access_mode to mmap_index_only.
In this case you should not have any in memory DB tables.
3. Though Datastax do not recommended and recommends Horizontal scale, so based 
on your requirement alternate old fashion option is to add swap space.

-Shishir

On Tue, 3 Dec 2019, 15:52 John Belliveau <belliveau.j...@gmail.com> wrote:
Reid,

I've only been working with Cassandra for 2 years, and this echoes my 
experience as well.

Regarding the cache use, I know every use case is different, but have you 
experimented and found any performance benefit to increasing its size?

Thanks,
John Belliveau

On Mon, Dec 2, 2019, 11:07 AM Reid Pinchback <rpinchb...@tripadvisor.com> wrote:
Rahul, if my memory of this is correct, that particular logging message is 
noisy, the cache is pretty much always used to its limit (and why not, it’s a 
cache, no point in using less than you have).

No matter what value you set, you’ll just change the “reached (….)” part of it. 
 I think what would help you more is to work with the team(s) that have apps 
depending upon C* and decide what your performance SLA is with them.  If you 
are meeting your SLA, you don’t care about noisy messages.  If you aren’t 
meeting your SLA, then the noisy messages become sources of ideas to look at.

One thing you’ll find out pretty quickly.  There are a lot of knobs you can 
turn with C*, too many to allow for easy answers on what you should do.  Figure 
out what your throughput and latency SLAs are, and you’ll know when to stop 
tuning.  Otherwise you’ll discover that it’s a rabbit hole you can dive into 
and not come out of for weeks.



Re: "Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB"

2019-12-03 Thread Shishir Kumar
Options, assuming the model and configurations are good and data size per node
is less than 1 TB (though there is no such benchmark):

1. Scale the infra for memory.
2. Try changing disk_access_mode to mmap_index_only (a sketch of the setting is
below). In this case you should not have any in-memory DB tables.
3. Though DataStax does not recommend it and recommends horizontal scaling
instead, based on your requirements an alternate old-fashioned option is to add
swap space.
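
A sketch of option 2, assuming 3.x; disk_access_mode is not listed in the stock
cassandra.yaml, so it has to be added explicitly (verify against your version):

  # cassandra.yaml -- illustrative only
  # 'mmap_index_only' memory-maps just the index files; data files fall back to buffered reads
  disk_access_mode: mmap_index_only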

-Shishir

On Tue, 3 Dec 2019, 15:52 John Belliveau,  wrote:

> Reid,
>
> I've only been working with Cassandra for 2 years, and this echoes my
> experience as well.
>
> Regarding the cache use, I know every use case is different, but have you
> experimented and found any performance benefit to increasing its size?
>
> Thanks,
> John Belliveau
>
>
> On Mon, Dec 2, 2019, 11:07 AM Reid Pinchback 
> wrote:
>
>> Rahul, if my memory of this is correct, that particular logging message
>> is noisy, the cache is pretty much always used to its limit (and why not,
>> it’s a cache, no point in using less than you have).
>>
>>
>>
>> No matter what value you set, you’ll just change the “reached (….)” part
>> of it.  I think what would help you more is to work with the team(s) that
>> have apps depending upon C* and decide what your performance SLA is with
>> them.  If you are meeting your SLA, you don’t care about noisy messages.
>> If you aren’t meeting your SLA, then the noisy messages become sources of
>> ideas to look at.
>>
>>
>>
>> One thing you’ll find out pretty quickly.  There are a lot of knobs you
>> can turn with C*, too many to allow for easy answers on what you should
>> do.  Figure out what your throughput and latency SLAs are, and you’ll know
>> when to stop tuning.  Otherwise you’ll discover that it’s a rabbit hole you
>> can dive into and not come out of for weeks.
>>
>>
>>
>>
>>
>> *From: *Hossein Ghiyasi Mehr 
>> *Reply-To: *"user@cassandra.apache.org" 
>> *Date: *Monday, December 2, 2019 at 10:35 AM
>> *To: *"user@cassandra.apache.org" 
>> *Subject: *Re: "Maximum memory usage reached (512.000MiB), cannot
>> allocate chunk of 1.000MiB"
>>
>>
>>
>> *Message from External Sender*
>>
>> It may be helpful:
>> https://thelastpickle.com/blog/2018/08/08/compression_performance.html
>>
>> It's complex. Simple explanation, cassandra keeps sstables in memory
>> based on chunk size and sstable parts. It manage loading new sstables to
>> memory based on requests on different sstables correctly . You should be
>> worry about it (sstables loaded in memory)
>>
>>
>> *VafaTech.com - A Total Solution for Data Gathering & Analysis*
>>
>>
>>
>>
>>
>> On Mon, Dec 2, 2019 at 6:18 PM Rahul Reddy 
>> wrote:
>>
>> Thanks Hossein,
>>
>>
>>
>> How does the chunks are moved out of memory (LRU?) if it want to make
>> room for new requests to get chunks?if it has mechanism to clear chunks
>> from cache what causes to cannot allocate chunk? Can you point me to any
>> documention?
>>
>>
>>
>> On Sun, Dec 1, 2019, 12:03 PM Hossein Ghiyasi Mehr 
>> wrote:
>>
>> Chunks are part of sstables. When there is enough space in memory to
>> cache them, read performance will increase if application requests it
>> again.
>>
>>
>>
>> Your real answer is application dependent. For example write heavy
>> applications are different than read heavy or read-write heavy. Real time
>> applications are different than time series data environments and ... .
>>
>>
>>
>>
>>
>>
>>
>> On Sun, Dec 1, 2019 at 7:09 PM Rahul Reddy 
>> wrote:
>>
>> Hello,
>>
>>
>>
>> We are seeing memory usage reached 512 mb and cannot allocate 1MB.  I see
>> this because file_cache_size_mb by default set to 512MB.
>>
>>
>>
>> Datastax document recommends to increase the file_cache_size.
>>
>>
>>
>> We have 32G over all memory allocated 16G to Cassandra. What is the
>> recommended value in my case. And also when does this memory gets filled up
>> frequent does nodeflush helps in avoiding this info messages?
>>
>>


Re: "Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB"

2019-12-03 Thread John Belliveau
Reid,

I've only been working with Cassandra for 2 years, and this echoes my
experience as well.

Regarding the cache use, I know every use case is different, but have you
experimented and found any performance benefit to increasing its size?

Thanks,
John Belliveau


On Mon, Dec 2, 2019, 11:07 AM Reid Pinchback 
wrote:

> Rahul, if my memory of this is correct, that particular logging message is
> noisy, the cache is pretty much always used to its limit (and why not, it’s
> a cache, no point in using less than you have).
>
>
>
> No matter what value you set, you’ll just change the “reached (….)” part
> of it.  I think what would help you more is to work with the team(s) that
> have apps depending upon C* and decide what your performance SLA is with
> them.  If you are meeting your SLA, you don’t care about noisy messages.
> If you aren’t meeting your SLA, then the noisy messages become sources of
> ideas to look at.
>
>
>
> One thing you’ll find out pretty quickly.  There are a lot of knobs you
> can turn with C*, too many to allow for easy answers on what you should
> do.  Figure out what your throughput and latency SLAs are, and you’ll know
> when to stop tuning.  Otherwise you’ll discover that it’s a rabbit hole you
> can dive into and not come out of for weeks.
>
>
>
>
>
> *From: *Hossein Ghiyasi Mehr 
> *Reply-To: *"user@cassandra.apache.org" 
> *Date: *Monday, December 2, 2019 at 10:35 AM
> *To: *"user@cassandra.apache.org" 
> *Subject: *Re: "Maximum memory usage reached (512.000MiB), cannot
> allocate chunk of 1.000MiB"
>
>
>
> *Message from External Sender*
>
> It may be helpful:
> https://thelastpickle.com/blog/2018/08/08/compression_performance.html
>
> It's complex. Simple explanation, cassandra keeps sstables in memory based
> on chunk size and sstable parts. It manage loading new sstables to memory
> based on requests on different sstables correctly . You should be worry
> about it (sstables loaded in memory)
>
>
> *VafaTech.com - A Total Solution for Data Gathering & Analysis*
>
>
>
>
>
> On Mon, Dec 2, 2019 at 6:18 PM Rahul Reddy 
> wrote:
>
> Thanks Hossein,
>
>
>
> How does the chunks are moved out of memory (LRU?) if it want to make room
> for new requests to get chunks?if it has mechanism to clear chunks from
> cache what causes to cannot allocate chunk? Can you point me to any
> documention?
>
>
>
> On Sun, Dec 1, 2019, 12:03 PM Hossein Ghiyasi Mehr 
> wrote:
>
> Chunks are part of sstables. When there is enough space in memory to cache
> them, read performance will increase if application requests it again.
>
>
>
> Your real answer is application dependent. For example write heavy
> applications are different than read heavy or read-write heavy. Real time
> applications are different than time series data environments and ... .
>
>
>
>
>
>
>
> On Sun, Dec 1, 2019 at 7:09 PM Rahul Reddy 
> wrote:
>
> Hello,
>
>
>
> We are seeing memory usage reached 512 mb and cannot allocate 1MB.  I see
> this because file_cache_size_mb by default set to 512MB.
>
>
>
> Datastax document recommends to increase the file_cache_size.
>
>
>
> We have 32G over all memory allocated 16G to Cassandra. What is the
> recommended value in my case. And also when does this memory gets filled up
> frequent does nodeflush helps in avoiding this info messages?
>
>


Re: "Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB"

2019-12-02 Thread Jeff Jirsa
This would be true except that the pretty print for the log message is done
before the logging rate limiter is applied, so if you see MiB instead of a
raw byte count, you're PROBABLY spending a ton of time in string formatting
within the read path.

This is fixed in 3.11.3 (
https://issues.apache.org/jira/browse/CASSANDRA-14416 )


On Mon, Dec 2, 2019 at 8:07 AM Reid Pinchback 
wrote:

> Rahul, if my memory of this is correct, that particular logging message is
> noisy, the cache is pretty much always used to its limit (and why not, it’s
> a cache, no point in using less than you have).
>
>
>
> No matter what value you set, you’ll just change the “reached (….)” part
> of it.  I think what would help you more is to work with the team(s) that
> have apps depending upon C* and decide what your performance SLA is with
> them.  If you are meeting your SLA, you don’t care about noisy messages.
> If you aren’t meeting your SLA, then the noisy messages become sources of
> ideas to look at.
>
>
>
> One thing you’ll find out pretty quickly.  There are a lot of knobs you
> can turn with C*, too many to allow for easy answers on what you should
> do.  Figure out what your throughput and latency SLAs are, and you’ll know
> when to stop tuning.  Otherwise you’ll discover that it’s a rabbit hole you
> can dive into and not come out of for weeks.
>
>
>
>
>
> *From: *Hossein Ghiyasi Mehr 
> *Reply-To: *"user@cassandra.apache.org" 
> *Date: *Monday, December 2, 2019 at 10:35 AM
> *To: *"user@cassandra.apache.org" 
> *Subject: *Re: "Maximum memory usage reached (512.000MiB), cannot
> allocate chunk of 1.000MiB"
>
>
>
> *Message from External Sender*
>
> It may be helpful:
> https://thelastpickle.com/blog/2018/08/08/compression_performance.html
>
> It's complex. Simple explanation, cassandra keeps sstables in memory based
> on chunk size and sstable parts. It manage loading new sstables to memory
> based on requests on different sstables correctly . You should be worry
> about it (sstables loaded in memory)
>
>
> *VafaTech.com - A Total Solution for Data Gathering & Analysis*
>
>
>
>
>
> On Mon, Dec 2, 2019 at 6:18 PM Rahul Reddy 
> wrote:
>
> Thanks Hossein,
>
>
>
> How does the chunks are moved out of memory (LRU?) if it want to make room
> for new requests to get chunks?if it has mechanism to clear chunks from
> cache what causes to cannot allocate chunk? Can you point me to any
> documention?
>
>
>
> On Sun, Dec 1, 2019, 12:03 PM Hossein Ghiyasi Mehr 
> wrote:
>
> Chunks are part of sstables. When there is enough space in memory to cache
> them, read performance will increase if application requests it again.
>
>
>
> Your real answer is application dependent. For example write heavy
> applications are different than read heavy or read-write heavy. Real time
> applications are different than time series data environments and ... .
>
>
>
>
>
>
>
> On Sun, Dec 1, 2019 at 7:09 PM Rahul Reddy 
> wrote:
>
> Hello,
>
>
>
> We are seeing memory usage reached 512 mb and cannot allocate 1MB.  I see
> this because file_cache_size_mb by default set to 512MB.
>
>
>
> Datastax document recommends to increase the file_cache_size.
>
>
>
> We have 32G over all memory allocated 16G to Cassandra. What is the
> recommended value in my case. And also when does this memory gets filled up
> frequent does nodeflush helps in avoiding this info messages?
>
>


Re: "Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB"

2019-12-02 Thread Reid Pinchback
Rahul, if my memory of this is correct, that particular logging message is 
noisy, the cache is pretty much always used to its limit (and why not, it’s a 
cache, no point in using less than you have).

No matter what value you set, you’ll just change the “reached (….)” part of it. 
 I think what would help you more is to work with the team(s) that have apps 
depending upon C* and decide what your performance SLA is with them.  If you 
are meeting your SLA, you don’t care about noisy messages.  If you aren’t 
meeting your SLA, then the noisy messages become sources of ideas to look at.

One thing you’ll find out pretty quickly.  There are a lot of knobs you can 
turn with C*, too many to allow for easy answers on what you should do.  Figure 
out what your throughput and latency SLAs are, and you’ll know when to stop 
tuning.  Otherwise you’ll discover that it’s a rabbit hole you can dive into 
and not come out of for weeks.


From: Hossein Ghiyasi Mehr 
Reply-To: "user@cassandra.apache.org" 
Date: Monday, December 2, 2019 at 10:35 AM
To: "user@cassandra.apache.org" 
Subject: Re: "Maximum memory usage reached (512.000MiB), cannot allocate chunk 
of 1.000MiB"

Message from External Sender
It may be helpful: 
https://thelastpickle.com/blog/2018/08/08/compression_performance.html
It's complex. Simple explanation, cassandra keeps sstables in memory based on 
chunk size and sstable parts. It manage loading new sstables to memory based on 
requests on different sstables correctly . You should be worry about it 
(sstables loaded in memory)

VafaTech.com - A Total Solution for Data Gathering & Analysis


On Mon, Dec 2, 2019 at 6:18 PM Rahul Reddy <rahulreddy1...@gmail.com> wrote:
Thanks Hossein,

How does the chunks are moved out of memory (LRU?) if it want to make room for 
new requests to get chunks?if it has mechanism to clear chunks from cache what 
causes to cannot allocate chunk? Can you point me to any documention?

On Sun, Dec 1, 2019, 12:03 PM Hossein Ghiyasi Mehr <ghiyasim...@gmail.com> wrote:
Chunks are part of sstables. When there is enough space in memory to cache 
them, read performance will increase if application requests it again.

Your real answer is application dependent. For example write heavy applications 
are different than read heavy or read-write heavy. Real time applications are 
different than time series data environments and ... .



On Sun, Dec 1, 2019 at 7:09 PM Rahul Reddy <rahulreddy1...@gmail.com> wrote:
Hello,

We are seeing memory usage reached 512 mb and cannot allocate 1MB.  I see this 
because file_cache_size_mb by default set to 512MB.

Datastax document recommends to increase the file_cache_size.

We have 32G over all memory allocated 16G to Cassandra. What is the recommended 
value in my case. And also when does this memory gets filled up frequent does 
nodeflush helps in avoiding this info messages?


Re: "Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB"

2019-12-02 Thread Hossein Ghiyasi Mehr
It may be helpful:
https://thelastpickle.com/blog/2018/08/08/compression_performance.html
It's complex. A simple explanation: Cassandra keeps sstables in memory
based on chunk size and sstable parts. It manages loading new sstables into
memory correctly, based on the requests hitting different sstables. You
should keep an eye on it (the sstables loaded in memory).

*VafaTech.com - A Total Solution for Data Gathering & Analysis*


On Mon, Dec 2, 2019 at 6:18 PM Rahul Reddy  wrote:

> Thanks Hossein,
>
> How does the chunks are moved out of memory (LRU?) if it want to make room
> for new requests to get chunks?if it has mechanism to clear chunks from
> cache what causes to cannot allocate chunk? Can you point me to any
> documention?
>
> On Sun, Dec 1, 2019, 12:03 PM Hossein Ghiyasi Mehr 
> wrote:
>
>> Chunks are part of sstables. When there is enough space in memory to
>> cache them, read performance will increase if application requests it again.
>>
>> Your real answer is application dependent. For example write heavy
>> applications are different than read heavy or read-write heavy. Real time
>> applications are different than time series data environments and ... .
>>
>>
>>
>> On Sun, Dec 1, 2019 at 7:09 PM Rahul Reddy 
>> wrote:
>>
>>> Hello,
>>>
>>> We are seeing memory usage reached 512 mb and cannot allocate 1MB.  I
>>> see this because file_cache_size_mb by default set to 512MB.
>>>
>>> Datastax document recommends to increase the file_cache_size.
>>>
>>> We have 32G over all memory allocated 16G to Cassandra. What is the
>>> recommended value in my case. And also when does this memory gets filled up
>>> frequent does nodeflush helps in avoiding this info messages?
>>>
>>


Re: "Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB"

2019-12-02 Thread Rajsekhar Mallick
Hello Rahul,

I would request Hossein to correct me if I am wrong. Below is how it works

How will an application/database read something from the disk?
A request comes in for a read -> the application code internally invokes
system calls -> these kernel-level system calls schedule a job with the
I/O scheduler -> the data is then read and returned by the device
drivers -> the data fetched from disk is accumulated in a memory location
(file buffer) until the entire read operation is complete -> then, I
guess, the data is uncompressed -> processed inside the JVM as Java
objects -> handed over to the application logic to transmit it over the
network interface.

This is my understanding of file_cache_size_in_mb: basically, caching disk
data in the file system cache. The alert you are getting is an INFO-level
log. I would recommend trying to understand why this cache is filling up so
fast. Increasing the cache size is a solution, but as I remember there is
some impact if it is increased. I faced a similar issue and increased the
cache size; eventually the increased size started falling short as well.

You have the right question about how the cache is being recycled. If you
find an answer, do post it. But that is something Cassandra doesn't have
control over (that is what I understand). Investigating your reads, to see
whether a lot of data is being read to satisfy a few queries, might be
another way to start troubleshooting.
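
One low-effort way to watch whether that cache is actually churning, assuming a
3.6+ node where the chunk cache shows up in nodetool output (exact field names
vary a bit by version):

  # look at size vs. capacity and the recent hit rate
  nodetool info | grep -i 'chunk cache'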

Thanks,
Rajsekhar








On Mon, 2 Dec, 2019, 8:18 PM Rahul Reddy,  wrote:

> Thanks Hossein,
>
> How does the chunks are moved out of memory (LRU?) if it want to make room
> for new requests to get chunks?if it has mechanism to clear chunks from
> cache what causes to cannot allocate chunk? Can you point me to any
> documention?
>
> On Sun, Dec 1, 2019, 12:03 PM Hossein Ghiyasi Mehr 
> wrote:
>
>> Chunks are part of sstables. When there is enough space in memory to
>> cache them, read performance will increase if application requests it again.
>>
>> Your real answer is application dependent. For example write heavy
>> applications are different than read heavy or read-write heavy. Real time
>> applications are different than time series data environments and ... .
>>
>>
>>
>> On Sun, Dec 1, 2019 at 7:09 PM Rahul Reddy 
>> wrote:
>>
>>> Hello,
>>>
>>> We are seeing memory usage reached 512 mb and cannot allocate 1MB.  I
>>> see this because file_cache_size_mb by default set to 512MB.
>>>
>>> Datastax document recommends to increase the file_cache_size.
>>>
>>> We have 32G over all memory allocated 16G to Cassandra. What is the
>>> recommended value in my case. And also when does this memory gets filled up
>>> frequent does nodeflush helps in avoiding this info messages?
>>>
>>


Re: "Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB"

2019-12-02 Thread Rahul Reddy
Thanks Hossein,

How are chunks moved out of memory (LRU?) when it wants to make room
for new requests to get chunks? If it has a mechanism to clear chunks from
the cache, what causes "cannot allocate chunk"? Can you point me to any
documentation?

On Sun, Dec 1, 2019, 12:03 PM Hossein Ghiyasi Mehr 
wrote:

> Chunks are part of sstables. When there is enough space in memory to cache
> them, read performance will increase if application requests it again.
>
> Your real answer is application dependent. For example write heavy
> applications are different than read heavy or read-write heavy. Real time
> applications are different than time series data environments and ... .
>
>
>
> On Sun, Dec 1, 2019 at 7:09 PM Rahul Reddy 
> wrote:
>
>> Hello,
>>
>> We are seeing memory usage reached 512 mb and cannot allocate 1MB.  I see
>> this because file_cache_size_mb by default set to 512MB.
>>
>> Datastax document recommends to increase the file_cache_size.
>>
>> We have 32G over all memory allocated 16G to Cassandra. What is the
>> recommended value in my case. And also when does this memory gets filled up
>> frequent does nodeflush helps in avoiding this info messages?
>>
>


Re: "Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB"

2019-12-01 Thread Hossein Ghiyasi Mehr
Chunks are part of sstables. When there is enough space in memory to cache
them, read performance will increase if the application requests them again.

Your real answer is application dependent. For example, write-heavy
applications are different from read-heavy or read-write-heavy ones. Real-time
applications are different from time-series data environments, and so on.



On Sun, Dec 1, 2019 at 7:09 PM Rahul Reddy  wrote:

> Hello,
>
> We are seeing memory usage reached 512 mb and cannot allocate 1MB.  I see
> this because file_cache_size_mb by default set to 512MB.
>
> Datastax document recommends to increase the file_cache_size.
>
> We have 32G over all memory allocated 16G to Cassandra. What is the
> recommended value in my case. And also when does this memory gets filled up
> frequent does nodeflush helps in avoiding this info messages?
>


"Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB"

2019-12-01 Thread Rahul Reddy
Hello,

We are seeing "memory usage reached 512 MiB" and "cannot allocate chunk of
1 MiB". I see this because file_cache_size_in_mb is set to 512 MB by default.

The DataStax documentation recommends increasing the file_cache_size.

We have 32 GB of overall memory and have allocated 16 GB to Cassandra. What is
the recommended value in my case? Also, why does this memory get filled up so
frequently, and does nodetool flush help in avoiding these info messages?
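
For reference, the knob in question as it would appear in cassandra.yaml; the
values below are just the defaults as I understand them, not a recommendation,
and the second setting is present only if your version still has it:

  # default is the smaller of 512 MB and 1/4 of the heap
  file_cache_size_in_mb: 512
  # if the off-heap buffer pool is exhausted, allocate on heap instead of failing
  buffer_pool_use_heap_if_exhausted: true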


Commit sync issues and maximum memory usage reached messages seen in system log

2019-05-09 Thread Rajsekhar Mallick
Hello team,

I am observing the below warn and info messages in system.log:

1. Info log: maximum memory usage reached (1.000GiB), cannot allocate chunk
of 1 MiB.

I tried increasing file_cache_size_in_mb in cassandra.yaml from 512 to 1024,
but this message still shows up in the logs.

2. Warn log: Out of 25 commit log syncs over the past 223 second with
average duration of 36.28 ms, 2 have exceeded the configured commit
interval by an average of 200.44ms

Commitlog sync is periodic
Commitlog_sync_period_in_ms is set to 10 seconds
Commitlog_segment_size_in_mb is set to 32
Commitlog_total_size_in_mb is set to 1024
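
For reference, those settings as they would appear in cassandra.yaml (a sketch;
I believe the stock name for the last one is commitlog_total_space_in_mb):

  commitlog_sync: periodic
  commitlog_sync_period_in_ms: 10000
  commitlog_segment_size_in_mb: 32
  commitlog_total_space_in_mb: 1024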

Kindly comment on what I may conclude from the above logs.


Re: Maximum memory usage reached

2019-03-07 Thread Kyrylo Lebediev
Got it.
Thank you for helping me, Jon, Jeff!

> Is there a reason why you’re picking Cassandra for this dataset?
Decision wasn’t made by myself, I guess C* was chosen because some huge growth 
was planned.

Regards,
Kyrill

From: Jeff Jirsa 
Reply-To: "user@cassandra.apache.org" 
Date: Wednesday, March 6, 2019 at 22:19
To: "user@cassandra.apache.org" 
Subject: Re: Maximum memory usage reached

Also, that particular logger is for the internal chunk / page cache. If it 
can’t allocate from within that pool, it’ll just use a normal bytebuffer.

It’s not really a problem, but if you see performance suffer, upgrade to latest 
3.11.4, there was a bit of a perf improvement in the case where that cache 
fills up.

--
Jeff Jirsa


On Mar 6, 2019, at 11:40 AM, Jonathan Haddad <j...@jonhaddad.com> wrote:
That’s not an error. To the left of the log message is the severity, level INFO.

Generally, I don’t recommend running Cassandra on only 2GB ram or for small 
datasets that can easily fit in memory. Is there a reason why you’re picking 
Cassandra for this dataset?

On Thu, Mar 7, 2019 at 8:04 AM Kyrylo Lebediev <klebed...@conductor.com> wrote:
Hi All,

We have a tiny 3-node cluster
C* version 3.9 (I know 3.11 is better/stable, but can’t upgrade immediately)
HEAP_SIZE is 2G
JVM options are default
All setting in cassandra.yaml are default (file_cache_size_in_mb not set)

Data per node – just ~ 1Gbyte

We’re getting following errors messages:

DEBUG [CompactionExecutor:87412] 2019-03-06 11:00:13,545 
CompactionTask.java:150 - Compacting (ed4a4d90-4028-11e9-adc0-230e0d6622df) 
[/cassandra/data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/mc-23248-big-Data.db:level=0,
 
/cassandra/data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/mc-23247-big-Data.db:level=0,
 
/cassandra/data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/mc-23246-big-Data.db:level=0,
 
/cassandra/data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/mc-23245-big-Data.db:level=0,
 ]
DEBUG [CompactionExecutor:87412] 2019-03-06 11:00:13,582 
CompactionTask.java:230 - Compacted (ed4a4d90-4028-11e9-adc0-230e0d6622df) 4 
sstables to 
[/cassandra/data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/mc-23249-big,]
 to level=0.  6.264KiB to 1.485KiB (~23% of original) in 36ms.  Read Throughput 
= 170.754KiB/s, Write Throughput = 40.492KiB/s, Row Throughput = ~106/s.  194 
total partitions merged to 44.  Partition merge counts were {1:18, 4:44, }
INFO  [IndexSummaryManager:1] 2019-03-06 11:00:22,007 
IndexSummaryRedistribution.java:75 - Redistributing index summaries
INFO  [pool-1-thread-1] 2019-03-06 11:11:24,903 NoSpamLogger.java:91 - Maximum 
memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB
INFO  [pool-1-thread-1] 2019-03-06 11:26:24,926 NoSpamLogger.java:91 - Maximum 
memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB
INFO  [pool-1-thread-1] 2019-03-06 11:41:25,010 NoSpamLogger.java:91 - Maximum 
memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB
INFO  [pool-1-thread-1] 2019-03-06 11:56:25,018 NoSpamLogger.java:91 - Maximum 
memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB

What’s interesting that “Maximum memory usage reached” messages appears each 15 
minutes.
Reboot temporary solve the issue, but it then appears again after some time

Checked, there are no huge partitions (max partition size is ~2Mbytes )

How such small amount of data may cause this issue?
How to debug this issue further?


Regards,
Kyrill


--
Jon Haddad
http://www.rustyrazorblade.com
twitter: rustyrazorblade


Re: Maximum memory usage reached

2019-03-06 Thread Jeff Jirsa
Also, that particular logger is for the internal chunk / page cache. If it 
can’t allocate from within that pool, it’ll just use a normal bytebuffer. 

It’s not really a problem, but if you see performance suffer, upgrade to latest 
3.11.4, there was a bit of a perf improvement in the case where that cache 
fills up.


-- 
Jeff Jirsa


> On Mar 6, 2019, at 11:40 AM, Jonathan Haddad  wrote:
> 
> That’s not an error. To the left of the log message is the severity, level 
> INFO. 
> 
> Generally, I don’t recommend running Cassandra on only 2GB ram or for small 
> datasets that can easily fit in memory. Is there a reason why you’re picking 
> Cassandra for this dataset?
> 
>> On Thu, Mar 7, 2019 at 8:04 AM Kyrylo Lebediev  
>> wrote:
>> Hi All,
>> 
>>  
>> 
>> We have a tiny 3-node cluster
>> 
>> C* version 3.9 (I know 3.11 is better/stable, but can’t upgrade immediately)
>> 
>> HEAP_SIZE is 2G
>> 
>> JVM options are default
>> 
>> All setting in cassandra.yaml are default (file_cache_size_in_mb not set)
>> 
>>  
>> 
>> Data per node – just ~ 1Gbyte
>> 
>>  
>> 
>> We’re getting following errors messages:
>> 
>>  
>> 
>> DEBUG [CompactionExecutor:87412] 2019-03-06 11:00:13,545 
>> CompactionTask.java:150 - Compacting (ed4a4d90-4028-11e9-adc0-230e0d6622df) 
>> [/cassandra/data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/mc-23248-big-Data.db:level=0,
>>  
>> /cassandra/data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/mc-23247-big-Data.db:level=0,
>>  
>> /cassandra/data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/mc-23246-big-Data.db:level=0,
>>  
>> /cassandra/data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/mc-23245-big-Data.db:level=0,
>>  ]
>> 
>> DEBUG [CompactionExecutor:87412] 2019-03-06 11:00:13,582 
>> CompactionTask.java:230 - Compacted (ed4a4d90-4028-11e9-adc0-230e0d6622df) 4 
>> sstables to 
>> [/cassandra/data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/mc-23249-big,]
>>  to level=0.  6.264KiB to 1.485KiB (~23% of original) in 36ms.  Read 
>> Throughput = 170.754KiB/s, Write Throughput = 40.492KiB/s, Row Throughput = 
>> ~106/s.  194 total partitions merged to 44.  Partition merge counts were 
>> {1:18, 4:44, }
>> 
>> INFO  [IndexSummaryManager:1] 2019-03-06 11:00:22,007 
>> IndexSummaryRedistribution.java:75 - Redistributing index summaries
>> 
>> INFO  [pool-1-thread-1] 2019-03-06 11:11:24,903 NoSpamLogger.java:91 - 
>> Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB
>> 
>> INFO  [pool-1-thread-1] 2019-03-06 11:26:24,926 NoSpamLogger.java:91 - 
>> Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB
>> 
>> INFO  [pool-1-thread-1] 2019-03-06 11:41:25,010 NoSpamLogger.java:91 - 
>> Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB
>> 
>> INFO  [pool-1-thread-1] 2019-03-06 11:56:25,018 NoSpamLogger.java:91 - 
>> Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB
>> 
>>  
>> 
>> What’s interesting that “Maximum memory usage reached” messages appears each 
>> 15 minutes.
>> 
>> Reboot temporary solve the issue, but it then appears again after some time
>> 
>>  
>> 
>> Checked, there are no huge partitions (max partition size is ~2Mbytes )
>> 
>>  
>> 
>> How such small amount of data may cause this issue?
>> 
>> How to debug this issue further?
>> 
>>  
>> 
>>  
>> 
>> Regards,
>> 
>> Kyrill
>> 
>>  
>> 
>>  
>> 
> -- 
> Jon Haddad
> http://www.rustyrazorblade.com
> twitter: rustyrazorblade


Re: Maximum memory usage reached

2019-03-06 Thread Jonathan Haddad
That’s not an error. To the left of the log message is the severity, level
INFO.

Generally, I don’t recommend running Cassandra on only 2GB ram or for small
datasets that can easily fit in memory. Is there a reason why you’re
picking Cassandra for this dataset?

On Thu, Mar 7, 2019 at 8:04 AM Kyrylo Lebediev 
wrote:

> Hi All,
>
>
>
> We have a tiny 3-node cluster
>
> C* version 3.9 (I know 3.11 is better/stable, but can’t upgrade
> immediately)
>
> HEAP_SIZE is 2G
>
> JVM options are default
>
> All setting in cassandra.yaml are default (file_cache_size_in_mb not set)
>
>
>
> Data per node – just ~ 1Gbyte
>
>
>
> We’re getting following errors messages:
>
>
>
> DEBUG [CompactionExecutor:87412] 2019-03-06 11:00:13,545
> CompactionTask.java:150 - Compacting (ed4a4d90-4028-11e9-adc0-230e0d6622df)
> [/cassandra/data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/mc-23248-big-Data.db:level=0,
> /cassandra/data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/mc-23247-big-Data.db:level=0,
> /cassandra/data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/mc-23246-big-Data.db:level=0,
> /cassandra/data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/mc-23245-big-Data.db:level=0,
> ]
>
> DEBUG [CompactionExecutor:87412] 2019-03-06 11:00:13,582
> CompactionTask.java:230 - Compacted (ed4a4d90-4028-11e9-adc0-230e0d6622df)
> 4 sstables to
> [/cassandra/data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/mc-23249-big,]
> to level=0.  6.264KiB to 1.485KiB (~23% of original) in 36ms.  Read
> Throughput = 170.754KiB/s, Write Throughput = 40.492KiB/s, Row Throughput =
> ~106/s.  194 total partitions merged to 44.  Partition merge counts were
> {1:18, 4:44, }
>
> INFO  [IndexSummaryManager:1] 2019-03-06 11:00:22,007
> IndexSummaryRedistribution.java:75 - Redistributing index summaries
>
> INFO  [pool-1-thread-1] 2019-03-06 11:11:24,903 NoSpamLogger.java:91 -
> Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB
>
> INFO  [pool-1-thread-1] 2019-03-06 11:26:24,926 NoSpamLogger.java:91 -
> Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB
>
> INFO  [pool-1-thread-1] 2019-03-06 11:41:25,010 NoSpamLogger.java:91 -
> Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB
>
> INFO  [pool-1-thread-1] 2019-03-06 11:56:25,018 NoSpamLogger.java:91 -
> Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB
>
>
>
> What’s interesting that “Maximum memory usage reached” messages appears
> each 15 minutes.
>
> Reboot temporary solve the issue, but it then appears again after some time
>
>
>
> Checked, there are no huge partitions (max partition size is ~2Mbytes )
>
>
>
> How such small amount of data may cause this issue?
>
> How to debug this issue further?
>
>
>
>
>
> Regards,
>
> Kyrill
>
>
>
>
>
-- 
Jon Haddad
http://www.rustyrazorblade.com
twitter: rustyrazorblade


Maximum memory usage reached

2019-03-06 Thread Kyrylo Lebediev
Hi All,

We have a tiny 3-node cluster
C* version 3.9 (I know 3.11 is better/stable, but can’t upgrade immediately)
HEAP_SIZE is 2G
JVM options are default
All settings in cassandra.yaml are default (file_cache_size_in_mb not set)

Data per node – just ~ 1Gbyte

We’re getting the following error messages:

DEBUG [CompactionExecutor:87412] 2019-03-06 11:00:13,545 
CompactionTask.java:150 - Compacting (ed4a4d90-4028-11e9-adc0-230e0d6622df) 
[/cassandra/data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/mc-23248-big-Data.db:level=0,
 
/cassandra/data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/mc-23247-big-Data.db:level=0,
 
/cassandra/data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/mc-23246-big-Data.db:level=0,
 
/cassandra/data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/mc-23245-big-Data.db:level=0,
 ]
DEBUG [CompactionExecutor:87412] 2019-03-06 11:00:13,582 
CompactionTask.java:230 - Compacted (ed4a4d90-4028-11e9-adc0-230e0d6622df) 4 
sstables to 
[/cassandra/data/data/system/sstable_activity-5a1ff267ace03f128563cfae6103c65e/mc-23249-big,]
 to level=0.  6.264KiB to 1.485KiB (~23% of original) in 36ms.  Read Throughput 
= 170.754KiB/s, Write Throughput = 40.492KiB/s, Row Throughput = ~106/s.  194 
total partitions merged to 44.  Partition merge counts were {1:18, 4:44, }
INFO  [IndexSummaryManager:1] 2019-03-06 11:00:22,007 
IndexSummaryRedistribution.java:75 - Redistributing index summaries
INFO  [pool-1-thread-1] 2019-03-06 11:11:24,903 NoSpamLogger.java:91 - Maximum 
memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB
INFO  [pool-1-thread-1] 2019-03-06 11:26:24,926 NoSpamLogger.java:91 - Maximum 
memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB
INFO  [pool-1-thread-1] 2019-03-06 11:41:25,010 NoSpamLogger.java:91 - Maximum 
memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB
INFO  [pool-1-thread-1] 2019-03-06 11:56:25,018 NoSpamLogger.java:91 - Maximum 
memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB

What’s interesting is that the “Maximum memory usage reached” message appears 
every 15 minutes.
A reboot temporarily solves the issue, but it then appears again after some time

Checked, there are no huge partitions (max partition size is ~2Mbytes )

How can such a small amount of data cause this issue?
How can I debug this further?


Regards,
Kyrill




RE: Maximum memory usage

2019-02-10 Thread Kenneth Brotman
Can we see the “nodetool tablestats” for the biggest table as well?
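
For anyone following along, the commands being asked for look roughly like
this; the keyspace and table names are placeholders:

  nodetool tablestats my_keyspace.my_table
  nodetool tablehistograms my_keyspace my_table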

 

From: Kenneth Brotman [mailto:kenbrot...@yahoo.com.INVALID] 
Sent: Sunday, February 10, 2019 7:21 AM
To: user@cassandra.apache.org
Subject: RE: Maximum memory usage

 

Okay, that’s at the moment it was calculated.  Still need to see histograms.

 

From: Rahul Reddy [mailto:rahulreddy1...@gmail.com] 
Sent: Sunday, February 10, 2019 7:09 AM
To: user@cassandra.apache.org
Subject: Re: Maximum memory usage

 

Thanks Kenneth,

 

110mb is the biggest partition in our db

 

On Sun, Feb 10, 2019, 9:55 AM Kenneth Brotman  
wrote:

Rahul,

 

Those partitions are tiny.  Could you give us the table histograms for the 
biggest tables.

 

Thanks,

 

Kenneth Brotman

 

From: Rahul Reddy [mailto:rahulreddy1...@gmail.com] 
Sent: Sunday, February 10, 2019 6:43 AM
To: user@cassandra.apache.org
Subject: Re: Maximum memory usage

 

```
Percentile  SSTables  Write Latency  Read Latency  Partition Size  Cell Count
                      (micros)       (micros)      (bytes)
50%         1.00      24.60          219.34        258             4
75%         1.00      24.60          379.02        446             8
95%         1.00      35.43          379.02        924             17
98%         1.00      51.01          379.02        1109            24
99%         1.00      61.21          379.02        1331            29
Min         0.00      8.24           126.94        104             0
Max         1.00      152.32         379.02        8409007         152321
```

 

On Wed, Feb 6, 2019, 8:34 PM Kenneth Brotman  
wrote:

Can you give us the “nodetool tablehistograms”

 

Kenneth Brotman

 

From: Rahul Reddy [mailto:rahulreddy1...@gmail.com] 
Sent: Wednesday, February 06, 2019 6:19 AM
To: user@cassandra.apache.org
Subject: Maximum memory usage

 

Hello,

 

I see maximum memory usage alerts in my system.log a couple of times a day, logged as
INFO. So far I haven't seen any issue with the db. Why are those messages logged in
system.log? Do they have any impact on reads/writes? And what needs to be looked at?

 

INFO  [RMI TCP Connection(170917)-127.0.0.1] 2019-02-05 23:15:47,408 
NoSpamLogger.java:91 - Maximum memory usage reached (512.000MiB), cannot 
allocate chunk of 1.000MiB

 

Thanks in advance



RE: Maximum memory usage

2019-02-10 Thread Kenneth Brotman
Okay, that’s at the moment it was calculated.  Still need to see histograms.

 

From: Rahul Reddy [mailto:rahulreddy1...@gmail.com] 
Sent: Sunday, February 10, 2019 7:09 AM
To: user@cassandra.apache.org
Subject: Re: Maximum memory usage

 

Thanks Kenneth,

 

110mb is the biggest partition in our db

 

On Sun, Feb 10, 2019, 9:55 AM Kenneth Brotman  
wrote:

Rahul,

 

Those partitions are tiny.  Could you give us the table histograms for the 
biggest tables.

 

Thanks,

 

Kenneth Brotman

 

From: Rahul Reddy [mailto:rahulreddy1...@gmail.com] 
Sent: Sunday, February 10, 2019 6:43 AM
To: user@cassandra.apache.org
Subject: Re: Maximum memory usage

 

```
Percentile  SSTables     Write Latency      Read Latency    Partition Size      Cell Count
                              (micros)          (micros)           (bytes)
50%             1.00             24.60            219.34               258               4
75%             1.00             24.60            379.02               446               8
95%             1.00             35.43            379.02               924              17
98%             1.00             51.01            379.02              1109              24
99%             1.00             61.21            379.02              1331              29
Min             0.00              8.24            126.94               104               0
Max             1.00            152.32            379.02           8409007          152321
```

 

On Wed, Feb 6, 2019, 8:34 PM Kenneth Brotman  
wrote:

Can you give us the “nodetool tablehistograms”

 

Kenneth Brotman

 

From: Rahul Reddy [mailto:rahulreddy1...@gmail.com] 
Sent: Wednesday, February 06, 2019 6:19 AM
To: user@cassandra.apache.org
Subject: Maximum memory usage

 

Hello,

 

I see maximum memory usage alerts in my system.log a couple of times a day, logged as
INFO. So far I haven't seen any issue with the db. Why are those messages logged in
system.log? Do they have any impact on reads/writes? And what needs to be looked at?

 

INFO  [RMI TCP Connection(170917)-127.0.0.1] 2019-02-05 23:15:47,408 
NoSpamLogger.java:91 - Maximum memory usage reached (512.000MiB), cannot 
allocate chunk of 1.000MiB

 

Thanks in advance



Re: Maximum memory usage

2019-02-10 Thread Rahul Reddy
Thanks Kenneth,

110mb is the biggest partition in our db

On Sun, Feb 10, 2019, 9:55 AM Kenneth Brotman 
wrote:

> Rahul,
>
>
>
> Those partitions are tiny.  Could you give us the table histograms for the
> biggest tables.
>
>
>
> Thanks,
>
>
>
> Kenneth Brotman
>
>
>
> *From:* Rahul Reddy [mailto:rahulreddy1...@gmail.com]
> *Sent:* Sunday, February 10, 2019 6:43 AM
> *To:* user@cassandra.apache.org
> *Subject:* Re: Maximum memory usage
>
>
>
> ```
> Percentile  SSTables     Write Latency      Read Latency    Partition Size      Cell Count
>                               (micros)          (micros)           (bytes)
> 50%             1.00             24.60            219.34               258               4
> 75%             1.00             24.60            379.02               446               8
> 95%             1.00             35.43            379.02               924              17
> 98%             1.00             51.01            379.02              1109              24
> 99%             1.00             61.21            379.02              1331              29
> Min             0.00              8.24            126.94               104               0
> Max             1.00            152.32            379.02           8409007          152321
> ```
>
>
>
> On Wed, Feb 6, 2019, 8:34 PM Kenneth Brotman 
> wrote:
>
> Can you give us the “nodetool tablehistograms”
>
>
>
> Kenneth Brotman
>
>
>
> *From:* Rahul Reddy [mailto:rahulreddy1...@gmail.com]
> *Sent:* Wednesday, February 06, 2019 6:19 AM
> *To:* user@cassandra.apache.org
> *Subject:* Maximum memory usage
>
>
>
> Hello,
>
>
>
> I see maximum memory usage alerts in my system.log a couple of times a day, logged
> as INFO. So far I haven't seen any issue with the db. Why are those messages logged
> in system.log? Do they have any impact on reads/writes? And what needs to be looked at?
>
>
>
> INFO  [RMI TCP Connection(170917)-127.0.0.1] 2019-02-05 23:15:47,408
> NoSpamLogger.java:91 - Maximum memory usage reached (512.000MiB), cannot
> allocate chunk of 1.000MiB
>
>
>
> Thanks in advance
>
>


Re: Maximum memory usage

2019-02-10 Thread Rahul Reddy
On one of the other DBs, which has ~100 MB partitions*, running out of memory happens very
frequently.
```
Percentile  SSTables     Write Latency      Read Latency    Partition Size      Cell Count
                              (micros)          (micros)           (bytes)
50%             0.00              0.00              0.00            182785            2759
75%             0.00              0.00              0.00            943127           14237
95%             0.00              0.00              0.00           4866323           73457
98%             0.00              0.00              0.00          12108970          152321
99%             0.00              0.00              0.00          17436917          219342
Min             0.00              0.00              0.00               150               2
Max             0.00              0.00              0.00         107964792         1358102
```



On Thu, Feb 7, 2019, 2:29 AM dinesh.jo...@yahoo.com.INVALID
 wrote:

> Are you running any nodetool commands during that period? IIRC, this is a
> log entry emitted by the BufferPool. It may be harmless unless it's happening
> very often or logging an OOM.
>
> Dinesh
>
>
> On Wednesday, February 6, 2019, 6:19:42 AM PST, Rahul Reddy <
> rahulreddy1...@gmail.com> wrote:
>
>
> Hello,
>
> I see maximum memory usage alerts in my system.log a couple of times a day, logged
> as INFO. So far I haven't seen any issue with the db. Why are those messages logged
> in system.log? Do they have any impact on reads/writes? And what needs to be looked at?
>
> INFO  [RMI TCP Connection(170917)-127.0.0.1] 2019-02-05 23:15:47,408
> NoSpamLogger.java:91 - Maximum memory usage reached (512.000MiB), cannot
> allocate chunk of 1.000MiB
>
> Thanks in advance
>


RE: Maximum memory usage

2019-02-10 Thread Kenneth Brotman
Rahul,

 

Those partitions are tiny.  Could you give us the table histograms for the 
biggest tables?

 

Thanks,

 

Kenneth Brotman

 

From: Rahul Reddy [mailto:rahulreddy1...@gmail.com] 
Sent: Sunday, February 10, 2019 6:43 AM
To: user@cassandra.apache.org
Subject: Re: Maximum memory usage

 

```
Percentile  SSTables     Write Latency      Read Latency    Partition Size      Cell Count
                              (micros)          (micros)           (bytes)
50%             1.00             24.60            219.34               258               4
75%             1.00             24.60            379.02               446               8
95%             1.00             35.43            379.02               924              17
98%             1.00             51.01            379.02              1109              24
99%             1.00             61.21            379.02              1331              29
Min             0.00              8.24            126.94               104               0
Max             1.00            152.32            379.02           8409007          152321
```

 

On Wed, Feb 6, 2019, 8:34 PM Kenneth Brotman  
wrote:

Can you give us the “nodetool tablehistograms”

 

Kenneth Brotman

 

From: Rahul Reddy [mailto:rahulreddy1...@gmail.com] 
Sent: Wednesday, February 06, 2019 6:19 AM
To: user@cassandra.apache.org
Subject: Maximum memory usage

 

Hello,

 

I see maximum memory usage alerts in my system.log a couple of times a day, logged as
INFO. So far I haven't seen any issue with the db. Why are those messages logged in
system.log? Do they have any impact on reads/writes? And what needs to be looked at?

 

INFO  [RMI TCP Connection(170917)-127.0.0.1] 2019-02-05 23:15:47,408 
NoSpamLogger.java:91 - Maximum memory usage reached (512.000MiB), cannot 
allocate chunk of 1.000MiB

 

Thanks in advance



Re: Maximum memory usage

2019-02-10 Thread Rahul Reddy
No, I'm not running any nodetool commands. It happens 2 to 3 times a day.

On Thu, Feb 7, 2019, 2:29 AM dinesh.jo...@yahoo.com.INVALID
 wrote:

> Are you running any nodetool commands during that period? IIRC, this is a
> log entry emitted by the BufferPool. It may be harmless unless it's happening
> very often or logging an OOM.
>
> Dinesh
>
>
> On Wednesday, February 6, 2019, 6:19:42 AM PST, Rahul Reddy <
> rahulreddy1...@gmail.com> wrote:
>
>
> Hello,
>
> I see maximum memory usage alerts in my system.log a couple of times a day, logged
> as INFO. So far I haven't seen any issue with the db. Why are those messages logged
> in system.log? Do they have any impact on reads/writes? And what needs to be looked at?
>
> INFO  [RMI TCP Connection(170917)-127.0.0.1] 2019-02-05 23:15:47,408
> NoSpamLogger.java:91 - Maximum memory usage reached (512.000MiB), cannot
> allocate chunk of 1.000MiB
>
> Thanks in advance
>


Re: Maximum memory usage

2019-02-10 Thread Rahul Reddy
```
Percentile  SSTables     Write Latency      Read Latency    Partition Size      Cell Count
                              (micros)          (micros)           (bytes)
50%             1.00             24.60            219.34               258               4
75%             1.00             24.60            379.02               446               8
95%             1.00             35.43            379.02               924              17
98%             1.00             51.01            379.02              1109              24
99%             1.00             61.21            379.02              1331              29
Min             0.00              8.24            126.94               104               0
Max             1.00            152.32            379.02           8409007          152321
```

On Wed, Feb 6, 2019, 8:34 PM Kenneth Brotman 
wrote:

> Can you give us the “nodetool tablehistograms”
>
>
>
> Kenneth Brotman
>
>
>
> *From:* Rahul Reddy [mailto:rahulreddy1...@gmail.com]
> *Sent:* Wednesday, February 06, 2019 6:19 AM
> *To:* user@cassandra.apache.org
> *Subject:* Maximum memory usage
>
>
>
> Hello,
>
>
>
> I see maximum memory usage alerts in my system.log a couple of times a day, logged
> as INFO. So far I haven't seen any issue with the db. Why are those messages logged
> in system.log? Do they have any impact on reads/writes? And what needs to be looked at?
>
>
>
> INFO  [RMI TCP Connection(170917)-127.0.0.1] 2019-02-05 23:15:47,408
> NoSpamLogger.java:91 - Maximum memory usage reached (512.000MiB), cannot
> allocate chunk of 1.000MiB
>
>
>
> Thanks in advance
>


Re: Maximum memory usage

2019-02-06 Thread dinesh.jo...@yahoo.com.INVALID
Are you running any nodetool commands during that period? IIRC, this is a log 
entry emitted by the BufferPool. It may be harmless unless it's happening very 
often or logging an OOM.
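A rough way to check both conditions from the logs (a sketch; the default package log path is assumed):

```
# how often the buffer pool hits its cap
grep -c 'Maximum memory usage reached' /var/log/cassandra/system.log

# any heavier symptoms around the same time (OOMs, long GC pauses)
grep -iE 'OutOfMemory|GCInspector' /var/log/cassandra/system.log | tail -n 20
```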
Dinesh 

On Wednesday, February 6, 2019, 6:19:42 AM PST, Rahul Reddy 
 wrote:  
 
 Hello,
I see maximum memory usage alerts in my system.log a couple of times a day, logged as
INFO. So far I haven't seen any issue with the db. Why are those messages logged in
system.log? Do they have any impact on reads/writes? And what needs to be looked at?
INFO  [RMI TCP Connection(170917)-127.0.0.1] 2019-02-05 23:15:47,408 
NoSpamLogger.java:91 - Maximum memory usage reached (512.000MiB), cannot 
allocate chunk of 1.000MiB

Thanks in advance  

RE: Maximum memory usage

2019-02-06 Thread Kenneth Brotman
Can you give us the “nodetool tablehistograms” output?

 

Kenneth Brotman

 

From: Rahul Reddy [mailto:rahulreddy1...@gmail.com] 
Sent: Wednesday, February 06, 2019 6:19 AM
To: user@cassandra.apache.org
Subject: Maximum memory usage

 

Hello,

 

I see maximum memory usage alerts in my system.log a couple of times a day, logged as
INFO. So far I haven't seen any issue with the db. Why are those messages logged in
system.log? Do they have any impact on reads/writes? And what needs to be looked at?

 

INFO  [RMI TCP Connection(170917)-127.0.0.1] 2019-02-05 23:15:47,408 
NoSpamLogger.java:91 - Maximum memory usage reached (512.000MiB), cannot 
allocate chunk of 1.000MiB

 

Thanks in advance



Maximum memory usage

2019-02-06 Thread Rahul Reddy
Hello,

I see maximum memory usage alerts in my system.log a couple of times a day, logged
as INFO. So far I haven't seen any issue with the db. Why are those messages logged
in system.log? Do they have any impact on reads/writes? And what needs to be looked at?

INFO  [RMI TCP Connection(170917)-127.0.0.1] 2019-02-05 23:15:47,408
NoSpamLogger.java:91 - Maximum memory usage reached (512.000MiB), cannot
allocate chunk of 1.000MiB

Thanks in advance


Re: How to identify which table causing Maximum Memory usage limit

2018-06-11 Thread Nitan Kainth
Sorry, I didn't mean to hijack the thread. I have seen similar issues and have
always ignored them because they weren't really causing any problems. But I am
really curious about how to find these.

On Mon, Jun 11, 2018 at 9:45 AM, Nitan Kainth  wrote:

> thanks Martin.
>
> The 99th-percentile partition size is even across all tables; only the Max is
> higher, in every table.
>
> The question is: how do I identify which table is throwing this "Maximum
> memory usage reached (512.000MiB)" message?
>
> On Mon, Jun 11, 2018 at 5:59 AM, Martin Mačura  wrote:
>
>> Hi,
>> we've had this issue with large partitions (100 MB and more).  Use
>> nodetool tablehistograms to find partition sizes for each table.
>>
>> If you have enough heap space to spare, try increasing this parameter:
>> file_cache_size_in_mb: 512
>>
>> There's also the following parameter, but I did not test the impact yet:
>> buffer_pool_use_heap_if_exhausted: true
>>
>>
>> Regards,
>>
>> Martin
>>
>>
>> On Tue, Jun 5, 2018 at 3:54 PM, learner dba
>>  wrote:
>> > Hi,
>> >
>> > We see this message often, cluster has multiple keyspaces and column
>> > families;
>> > How do I know which CF is causing this?
>> > Or it could be something else?
>> > Do we need to worry about this message?
>> >
>> > INFO  [CounterMutationStage-1] 2018-06-05 13:36:35,983
>> NoSpamLogger.java:91
>> > - Maximum memory usage reached (512.000MiB), cannot allocate chunk of
>> > 1.000MiB
>> >
>> >
>>
>> -
>> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
>> For additional commands, e-mail: user-h...@cassandra.apache.org
>>
>>
>


Re: How to identify which table causing Maximum Memory usage limit

2018-06-11 Thread Nitan Kainth
thanks Martin.

The 99th-percentile partition size is even across all tables; only the Max is
higher, in every table.

The question is: how do I identify which table is throwing this "Maximum
memory usage reached (512.000MiB)" message?
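As far as I know, the buffer pool that emits this message is shared across all tables, so the log line itself will never name one. The practical approach is to compare per-table partition sizes and start with the largest. A sketch, with placeholder keyspace/table names:

```
# max compacted partition size for every table in a keyspace
nodetool tablestats my_keyspace | grep -E 'Table:|Compacted partition maximum bytes'

# then drill into the suspects
nodetool tablehistograms my_keyspace my_biggest_table
```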

On Mon, Jun 11, 2018 at 5:59 AM, Martin Mačura  wrote:

> Hi,
> we've had this issue with large partitions (100 MB and more).  Use
> nodetool tablehistograms to find partition sizes for each table.
>
> If you have enough heap space to spare, try increasing this parameter:
> file_cache_size_in_mb: 512
>
> There's also the following parameter, but I did not test the impact yet:
> buffer_pool_use_heap_if_exhausted: true
>
>
> Regards,
>
> Martin
>
>
> On Tue, Jun 5, 2018 at 3:54 PM, learner dba
>  wrote:
> > Hi,
> >
> > We see this message often, cluster has multiple keyspaces and column
> > families;
> > How do I know which CF is causing this?
> > Or it could be something else?
> > Do we need to worry about this message?
> >
> > INFO  [CounterMutationStage-1] 2018-06-05 13:36:35,983
> NoSpamLogger.java:91
> > - Maximum memory usage reached (512.000MiB), cannot allocate chunk of
> > 1.000MiB
> >
> >
>
> -
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>
>


Re: How to identify which table causing Maximum Memory usage limit

2018-06-11 Thread Martin Mačura
Hi,
we've had this issue with large partitions (100 MB and more).  Use
nodetool tablehistograms to find partition sizes for each table.

If you have enough heap space to spare, try increasing this parameter:
file_cache_size_in_mb: 512

There's also the following parameter, but I did not test the impact yet:
buffer_pool_use_heap_if_exhausted: true
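Put together as a sketch (placeholder names, illustrative values; the yaml changes need a node restart):

```
# find oversized partitions per table
nodetool tablehistograms my_keyspace my_table

# cassandra.yaml
#   file_cache_size_in_mb: 1024              # larger off-heap buffer pool, if RAM allows
#   buffer_pool_use_heap_if_exhausted: true  # allocate on-heap when the pool is exhausted (impact untested, as noted above)
```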


Regards,

Martin


On Tue, Jun 5, 2018 at 3:54 PM, learner dba
 wrote:
> Hi,
>
> We see this message often, cluster has multiple keyspaces and column
> families;
> How do I know which CF is causing this?
> Or it could be something else?
> Do we need to worry about this message?
>
> INFO  [CounterMutationStage-1] 2018-06-05 13:36:35,983 NoSpamLogger.java:91
> - Maximum memory usage reached (512.000MiB), cannot allocate chunk of
> 1.000MiB
>
>

-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org



How to identify which table causing Maximum Memory usage limit

2018-06-05 Thread learner dba
Hi,
We see this message often; the cluster has multiple keyspaces and column families.
How do I know which CF is causing this? Or could it be something else? Do we
need to worry about this message?

INFO  [CounterMutationStage-1] 2018-06-05 13:36:35,983 NoSpamLogger.java:91 - 
Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB



Re: Maximum memory usage reached in cassandra!

2017-04-03 Thread Mark Rose
You may have better luck switching to G1GC and using a much larger
heap (16 to 30GB). 4GB is likely too small for your amount of data,
especially if you have a lot of sstables. Then try increasing
file_cache_size_in_mb further.
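A sketch of what that might look like on a 3.x node (example values only; tune for your hardware and workload, and avoid setting the young generation size explicitly when using G1):

```
# cassandra-env.sh
#   MAX_HEAP_SIZE="16G"
#   (leave HEAP_NEWSIZE / -Xmn unset with G1)

# jvm.options: swap the CMS flags for G1
#   -XX:+UseG1GC
#   -XX:MaxGCPauseMillis=500

# cassandra.yaml
#   file_cache_size_in_mb: 2048
```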

Cheers,
Mark

On Tue, Mar 28, 2017 at 3:01 AM, Mokkapati, Bhargav (Nokia -
IN/Chennai)  wrote:
> Hi Cassandra users,
>
>
>
> I am getting “Maximum memory usage reached (536870912 bytes), cannot
> allocate chunk of 1048576 bytes” . As a remedy I have changed the off heap
> memory usage limit cap i.e file_cache_size_in_mb parameter in cassandra.yaml
> from 512 to 1024.
>
>
>
> But now again the increased limit got filled up and throwing a message
> “Maximum memory usage reached (1073741824 bytes), cannot allocate chunk of
> 1048576 bytes”
>
>
>
> This issue occurs while index summary redistribution is happening; the
> Cassandra nodes are still UP, but read requests are failing from the
> application side.
>
>
>
> My configuration details are as below:
>
>
>
> 5 node cluster , each node with 68 disks, each disk is 3.7 TB
>
>
>
> Total CPU cores - 8
>
>
>
> total  Mem:377G
>
> used  265G
>
> free   58G
>
> shared  378M
>
> buff/cache 53G
>
> available 104G
>
>
>
> MAX_HEAP_SIZE is 4GB
>
> file_cache_size_in_mb: 1024
>
>
>
> memtable heap space is commented in yaml file as below:
>
> # memtable_heap_space_in_mb: 2048
>
> # memtable_offheap_space_in_mb: 2048
>
>
>
> Can anyone please suggest the solution for this issue. Thanks in advance !
>
>
>
> Thanks,
>
> Bhargav M
>
>
>
>
>
>
>
>


Maximum memory usage reached in cassandra!

2017-03-28 Thread Mokkapati, Bhargav (Nokia - IN/Chennai)
Hi Cassandra users,

I am getting "Maximum memory usage reached (536870912 bytes), cannot allocate 
chunk of 1048576 bytes" . As a remedy I have changed the off heap memory usage 
limit cap i.e file_cache_size_in_mb parameter in cassandra.yaml from 512 to 
1024.

But now again the increased limit got filled up and throwing a message "Maximum 
memory usage reached (1073741824 bytes), cannot allocate chunk of 1048576 bytes"

This issue occurring when redistribution of index's happening ,due to this 
Cassandra nodes are still UP but read requests are getting failed from 
application side.
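Since the message lines up with index summary redistribution, it may be worth checking how that is configured on the node (a sketch; the setting names are from the stock cassandra.yaml, path assumed):

```
# redistribution cadence and summary capacity
grep -E 'index_summary_(capacity_in_mb|resize_interval_in_minutes)' /etc/cassandra/cassandra.yaml
```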

My configuration details are as below:

5 node cluster , each node with 68 disks, each disk is 3.7 TB

Total CPU cores - 8

total  Mem:377G
used  265G
free   58G
shared  378M
buff/cache 53G
available 104G

MAX_HEAP_SIZE is 4GB
file_cache_size_in_mb: 1024

The memtable heap space settings are commented out in the yaml file, as below:
# memtable_heap_space_in_mb: 2048
# memtable_offheap_space_in_mb: 2048

Can anyone please suggest a solution for this issue? Thanks in advance!

Thanks,
Bhargav M