Hi Ju Hyung,
On 2019/8/25 19:06, Ju Hyung Park wrote:
> Hi Chao,
>
> On Sat, Aug 24, 2019 at 12:52 AM Chao Yu wrote:
> It's not intentional. I failed to reproduce this issue; could you add some
> logs to track why we stop urgent GC even though there are still dirty
> segments?
>
> I'm pretty sure you can reproduce this issue quite easily.
Hi Chao,
On Sat, Aug 24, 2019 at 12:52 AM Chao Yu wrote:
> It's not intentional. I failed to reproduce this issue; could you add some
> logs to track why we stop urgent GC even though there are still dirty
> segments?
I'm pretty sure you can reproduce this issue quite easily.
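For example, something like this is how I hit it (sdb1 is just my device;
the sysfs node is named after the block device, so adjust the path):

  echo 1 > /sys/fs/f2fs/sdb1/gc_urgent    # switch to urgent GC mode
  # wait until GC activity dies down, then check the segment stats;
  # in my case the debugfs status file still reports dirty segments:
  grep Dirty /sys/kernel/debug/f2fs/status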
I can see this
Hi Ju Hyung,
Sorry for the delay.
On 2019-8-16 23:37, Ju Hyung Park wrote:
> Hi Chao,
>
> On Thu, Aug 15, 2019 at 3:49 PM Chao Yu wrote:
> I doubt that before triggering urgent GC, the system has dirty data in
> memory; then when you trigger `sync`, GCed data and dirty data are flushed
> to devices together. If we write dirty data with the out-of-place update
> model, it may make
Hi Chao,
On Thu, Aug 15, 2019 at 3:49 PM Chao Yu wrote:
> I doubt that before triggering urgent GC, the system has dirty data in
> memory; then when you trigger `sync`, GCed data and dirty data are flushed
> to devices together. If we write dirty data with the out-of-place update
> model, it may make
>
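If that's the explanation, flushing first should separate the two effects
(again assuming the volume is sdb1):

  sync                                    # flush pre-existing dirty data
  grep Dirty /proc/meminfo                # should now be close to zero
  echo 1 > /sys/fs/f2fs/sdb1/gc_urgent    # any Dirty growth after this
                                          # would come from GC itself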
Hi.
I'm reporting some strangeness with gc_urgent.
When running gc_urgent, I can see the dirty memory reported in
/proc/meminfo continuously increasing until GC cannot find any more
segments to clean.
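Here is roughly how I am watching it (assuming the f2fs volume is sdb1;
adjust the sysfs path for your device):

  echo 1 > /sys/fs/f2fs/sdb1/gc_urgent    # switch GC to urgent mode
  watch grep Dirty /proc/meminfo          # Dirty keeps growing while GC runs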
I thought FG_GC writes are flushed.
And after GC ends, if I do `sync` and run gc_urgent