Missed the heap part; not sure why that is happening.

On Tue, Aug 3, 2021 at 8:59 AM manish khandelwal <
manishkhandelwa...@gmail.com> wrote:

> mmap is used for faster reads, and as you rightly guessed, you might see
> read performance degradation without it. If you are seeing high memory
> usage after repairs due to mmapped files, the only way to reduce it is to
> trigger some other process which requires memory. *mmapped* files sit in
> the kernel's buffer/cache, which is released as soon as some other process
> requests memory from the kernel. The kernel does not waste the effort of
> evicting pages until a request for the resource actually comes in.
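>
> A quick way to watch this on Linux (an illustration only; dropping caches
> is safe for correctness but will make the next reads cold, and it needs
> root):
>
>     free -h                                     # note the buff/cache column
>     echo 1 | sudo tee /proc/sys/vm/drop_caches  # evict clean page cache
>     free -h                                     # buff/cache shrinks immediately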
>
>
>
> On Tue, Aug 3, 2021 at 4:42 AM Amandeep Srivastava <
> amandeep.srivastava1...@gmail.com> wrote:
>
>> Can anyone please help with the above questions? To summarise:
>>
>> 1) What is the impact of using mmap only for indices, besides a
>> degradation in read performance?
>> 2) Why does the off-heap memory consumed during a Cassandra full repair
>> remain occupied for 12+ hours after the repair completes, and is there a
>> manual or configuration-driven way to clear it earlier?
>>
>> Thanks,
>> Aman
>>
>> On Thu, 29 Jul, 2021, 6:47 pm Amandeep Srivastava, <
>> amandeep.srivastava1...@gmail.com> wrote:
>>
>>> Hi Erick,
>>>
>>> Limiting mmap to index files only seems to have resolved the issue. The
>>> max RAM usage remained at 60% this time. Could you please point me to the
>>> limitations of setting this param? For starters, I can see read
>>> performance getting reduced by up to 30% (CASSANDRA-8464
>>> <https://issues.apache.org/jira/browse/CASSANDRA-8464>).
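>>>
>>> For anyone following along, the change was (roughly) the disk_access_mode
>>> setting, which is absent from the default cassandra.yaml, so I added it at
>>> the top level and did a rolling restart:
>>>
>>>     # cassandra.yaml
>>>     disk_access_mode: mmap_index_only   # mmap index files only; data files use buffered I/O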
>>>
>>> Also, it would help if you could shed some light on the extended
>>> questions in my earlier email.
>>>
>>> Thanks a lot.
>>>
>>> Regards,
>>> Aman
>>>
>>> On Thu, Jul 29, 2021 at 12:52 PM Amandeep Srivastava <
>>> amandeep.srivastava1...@gmail.com> wrote:
>>>
>>>> Thanks, Bowen - I don't think that's an issue, but yes, I can try
>>>> upgrading to 3.11.5 and limiting the merkle tree size to bring down the
>>>> memory utilization.
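>>>>
>>>> (If I have read CASSANDRA-14096 correctly - this is my understanding,
>>>> not something I have tested - 3.11.5 caps merkle tree memory via a new
>>>> cassandra.yaml knob, along the lines of:)
>>>>
>>>>     # cassandra.yaml, 3.11.5+
>>>>     repair_session_space_in_mb: 256   # upper bound on memory for merkle trees per repair session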
>>>>
>>>> Thanks, Erick, let me try that.
>>>>
>>>> Can someone please share documentation on the internal functioning of
>>>> full repairs, if any exists? I wanted to understand the separate roles of
>>>> heap and off-heap memory during the process.
>>>>
>>>> Also, in my case, once the nodes reach 95% memory usage, they stay there
>>>> for almost 10-12 hours after the repair is complete before falling back
>>>> to 65%. Any pointers on what might be consuming the off-heap memory for
>>>> so long, and whether something can be done to clear it earlier?
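>>>>
>>>> For context, this is how I have been separating Cassandra-tracked
>>>> off-heap from kernel page cache (commands are illustrative; the grep
>>>> just picks the off-heap line out of nodetool's output):
>>>>
>>>>     nodetool info | grep -i 'off heap'  # bloom filters, index summaries, compression metadata
>>>>     free -h                             # buff/cache column = mmapped SSTables held by the kernel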
>>>>
>>>> Thanks,
>>>> Aman
>>>>
>>>>
>>>>
>>>
>>> --
>>> Regards,
>>> Aman
>>>
>>
