Basically the idea looks right to me.
1. But we need finer granularity to control the contribution to OOM badness.
When a TTM buffer resides in VRAM rather than being evicted to system
memory, its size should not be counted towards badness (a small sketch of
the placement check is below, after point 2). But I think this is not easy
to implement.
2. If the TTM buffer (GTT here) is mapped to user space for CPU access, I am
not quite sure whether the kernel already accounts for the buffer size.
If it does, the size would be counted a second time by your patches.
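
Regarding point 1, just to illustrate what I mean: the placement check itself
looks simple, roughly like the following. This is an untested sketch, the
helper name is made up, and it assumes the current placement is visible in
bo->mem.mem_type as in today's TTM:

    #include <drm/ttm/ttm_bo_api.h>
    #include <drm/ttm/ttm_placement.h>

    /* Only buffers currently backed by system memory should contribute to
     * OOM badness; buffers resident in VRAM put no pressure on system RAM. */
    static bool ttm_bo_counts_for_badness(struct ttm_buffer_object *bo)
    {
            return bo->mem.mem_type != TTM_PL_VRAM;
    }

The hard part is not the check, but updating the per-file badness every time
a buffer migrates between VRAM and GTT, which is why I think it is not easy
to implement.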
So I am wondering whether we can count the TTM buffer size into:
struct mm_rss_stat {
        atomic_long_t count[NR_MM_COUNTERS];
};
which is maintained by the kernel based on the CPU VM (page tables).
Something like this (note the counters are in units of pages, not bytes):

When a GTT allocation succeeds:
        add_mm_counter(vma->vm_mm, MM_ANONPAGES, num_pages);
When the GTT buffer is swapped out:
        decrease MM_ANONPAGES first, then
        add_mm_counter(vma->vm_mm, MM_SWAPENTS, num_pages); /* or MM_SHMEMPAGES,
        or add a new item */

and always keep the corresponding item in mm_rss_stat up to date.
With that, we can keep the accounting accurate whenever the buffer state
changes; a rough sketch of the idea is below.
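
Here is a rough, untested sketch of what I have in mind. The helper names are
made up; it assumes we can hold the owning process' mm_struct (e.g. grabbed
from current->mm when the BO is created) and that num_pages is the buffer
size in pages:

    #include <linux/mm.h>

    /* Called when the GTT (system memory) backing store is populated. */
    static void ttm_account_gtt_populate(struct mm_struct *mm,
                                         unsigned long num_pages)
    {
            add_mm_counter(mm, MM_ANONPAGES, num_pages);
    }

    /* Called when the backing store is swapped out to shmem/swap. */
    static void ttm_account_gtt_swapout(struct mm_struct *mm,
                                        unsigned long num_pages)
    {
            /* Drop the resident count first ... */
            add_mm_counter(mm, MM_ANONPAGES, -(long)num_pages);
            /* ... then account it as swapped out (or MM_SHMEMPAGES, or a
             * new counter dedicated to GPU buffers). */
            add_mm_counter(mm, MM_SWAPENTS, num_pages);
    }

    /* The reverse when the buffer is swapped back in; on free we would
     * subtract from whichever counter the buffer currently sits in. */
    static void ttm_account_gtt_swapin(struct mm_struct *mm,
                                       unsigned long num_pages)
    {
            add_mm_counter(mm, MM_SWAPENTS, -(long)num_pages);
            add_mm_counter(mm, MM_ANONPAGES, num_pages);
    }

Since oom_badness() already looks at get_mm_rss() plus MM_SWAPENTS, buffers
accounted this way should show up in badness without a new file_ops callback,
if I am not mistaken.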
What do you think about that?
And are there any side effects to this approach?
Thanks
Roger (Hongbo.He)
-----Original Message-----
From: dri-devel [mailto:[email protected]] On Behalf Of
Andrey Grodzovsky
Sent: Friday, January 19, 2018 12:48 AM
To: [email protected]; [email protected];
[email protected]; [email protected]
Cc: Koenig, Christian <[email protected]>
Subject: [RFC] Per file OOM badness
Hi, this series is a revised version of an RFC sent by Christian König a few
years ago. The original RFC can be found at
https://lists.freedesktop.org/archives/dri-devel/2015-September/089778.html
This is the same idea and I've just addressed his concern from the original RFC
and switched to a callback into file_ops instead of a new member in struct file.
Thanks,
Andrey