Thanks. It looks like part of the earlier thread got truncated.

I already applied Erick's recommendations, and they seem to have worked,
reducing RAM consumption by around 50%.

Regarding cheap memory and hardware: we are already running 96GB boxes, and
getting multiple larger ones might be difficult at this point. Hence I wanted
to understand the cons of disabling mmap for data files.
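
For reference, the knob I'm referring to is disk_access_mode in
cassandra.yaml. As I understand it, the three values of interest are:

    disk_access_mode: auto             # default: mmap both data and index files
    disk_access_mode: mmap_index_only  # mmap index files only; data via buffered I/O
    disk_access_mode: standard         # no mmap at all; buffered I/O for everything

(only one of those lines would actually be set, of course).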

Besides degraded read performance, wouldn't disabling mmap put more pressure
on heap memory, which might cause frequent GCs and OOM errors at some point?
Whatever is currently served by mmap would instead have to be read onto the
heap before being processed and stored further.
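
To watch that pressure directly, rather than inferring it from the process
RSS (which mmap inflates anyway), I've been looking at what nodetool already
reports on each node:

    nodetool info      # includes "Heap Memory (MB)" and "Off heap memory used (MB)"
    nodetool gcstats   # GC pause totals and collection counts since the last call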

Also, we've disabled swap on the hosts, as recommended for performance, so
Cassandra won't be able to fall back on that either if memory starts to fill
up.
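
(For completeness, swap was disabled the usual way, roughly:

    sudo swapoff --all   # turn swap off immediately
    # and remove/comment the swap entry in /etc/fstab so it stays off on reboot

so there really is nothing for the kernel to spill into.)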

On Tue, 3 Aug, 2021, 6:33 pm Jim Shaw, <jxys...@gmail.com> wrote:

> I think Erick posted https://community.datastax.com/questions/6947/, which
> explains it very clearly.
>
> We hit the same issue, but only on a huge table during an upgrade, and we
> changed the setting back once the upgrade was done.
> My understanding is that the choice depends on your use case: if you are
> chasing high performance on a big table, go with the default and increase
> memory capacity; hardware is cheaper nowadays.
>
> Thanks,
> Jim
>
> On Mon, Aug 2, 2021 at 7:12 PM Amandeep Srivastava <
> amandeep.srivastava1...@gmail.com> wrote:
>
>> Can anyone please help with the above questions? To summarise:
>>
>> 1) What is the impact of using mmap only for indices, besides a
>> degradation in read performance?
>> 2) Why does the off-heap memory consumed during a Cassandra full repair
>> remain occupied for 12+ hours after the repair completes, and is there a
>> manual or configuration-driven way to clear it earlier?
>>
>> Thanks,
>> Aman
>>
>> On Thu, 29 Jul, 2021, 6:47 pm Amandeep Srivastava, <
>> amandeep.srivastava1...@gmail.com> wrote:
>>
>>> Hi Erick,
>>>
>>> Limiting mmap to indices only seems to have resolved the issue; max RAM
>>> usage stayed at 60% this time. Could you please point me to the known
>>> limitations of this setting? For starters, I can see read performance
>>> dropping by up to 30% (CASSANDRA-8464
>>> <https://issues.apache.org/jira/browse/CASSANDRA-8464>).
>>>
>>> Also, could you please shed some light on the extended questions in my
>>> earlier email?
>>>
>>> Thanks a lot.
>>>
>>> Regards,
>>> Aman
>>>
>>> On Thu, Jul 29, 2021 at 12:52 PM Amandeep Srivastava <
>>> amandeep.srivastava1...@gmail.com> wrote:
>>>
>>>> Thanks, Bowen. I don't think that's the issue, but yes, I can try
>>>> upgrading to 3.11.5 and limiting the merkle tree size to bring down the
>>>> memory utilization.
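>>>>
>>>> If I'm reading the 3.11.5 change right (I believe it's CASSANDRA-14096),
>>>> capping the merkle tree memory would look something like this in
>>>> cassandra.yaml (128 is just an illustrative value; when unset the
>>>> default is derived from the heap size, as far as I know):
>>>>
>>>>     repair_session_space_in_mb: 128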
>>>>
>>>> Thanks, Erick, let me try that.
>>>>
>>>> Can someone please share documentation on the internal workings of full
>>>> repairs, if any exists? I want to understand the separate roles of heap
>>>> and off-heap memory during the process.
>>>>
>>>> Also, in my case, once the nodes reach 95% memory usage, they stay there
>>>> for almost 10-12 hours after the repair completes, before falling back
>>>> to 65%. Any pointers on what might be holding off-heap memory for so
>>>> long, and can anything be done to clear it earlier?
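>>>>
>>>> One thing I plan to try, to see where that lingering memory sits, is the
>>>> JVM's own native memory tracking (this assumes the node was started with
>>>> -XX:NativeMemoryTracking=summary added to jvm.options):
>>>>
>>>>     jcmd <cassandra-pid> VM.native_memory summary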
>>>>
>>>> Thanks,
>>>> Aman
>>>>
>>>
>>> --
>>> Regards,
>>> Aman
>>>
>>
