Please follow Michal's suggestion to update the commit message.
After that:
Reviewed-by: Shakeel Butt
> ---
> kernel/fork.c | 15 ++-
> 1 file changed, 10 insertions(+), 5 deletions(-)
>
> diff --git a/kernel/fork.c b/kernel/fork.c
> index d66cd1014211..6e2201fe
On Tue, Mar 2, 2021 at 1:34 AM Michal Hocko wrote:
>
[snip]
> > Yeah, imprecision may
> > not be a problem. But if this is what we did deliberately, I think that
> > it is better to add a comment there. Thanks.
>
> Yes the comment is quite confusing. I suspect it meant to say
> /* All
On Tue, Mar 2, 2021 at 1:44 AM Michal Hocko wrote:
>
> On Mon 01-03-21 17:16:29, Mike Kravetz wrote:
> > On 3/1/21 9:23 AM, Michal Hocko wrote:
> > > On Mon 01-03-21 08:39:22, Shakeel Butt wrote:
> > >> On Mon, Mar 1, 2021 at 7:57 AM Michal Hocko wrote:
> >
and mem_cgroup_finish_swapin_page()
completes the charging process. So, the kernel starts the charging
process of the page for swapin with mem_cgroup_charge_swapin_page(),
adds the page to the swapcache and on success completes the charging
process with mem_cgroup_finish_swapin_page().
Signed-off-by: Shakeel
On Tue, Mar 2, 2021 at 1:19 PM Mike Kravetz wrote:
>
> On 3/2/21 6:29 AM, Michal Hocko wrote:
> > On Tue 02-03-21 06:11:51, Shakeel Butt wrote:
> >> On Tue, Mar 2, 2021 at 1:44 AM Michal Hocko wrote:
> >>>
> >>> On Mon 01-03-21 17:16:29, Mike Kravetz
On Thu, Mar 4, 2021 at 7:48 AM Johannes Weiner wrote:
>
> On Wed, Mar 03, 2021 at 05:42:29PM -0800, Shakeel Butt wrote:
> > Currently the kernel adds the page, allocated for swapin, to the
> > swapcache before charging the page. This is fine but now we want a
> > per-me
h adding nr_deferred cleaner and more readable and
> make review easier. Also remove the "memcg_" prefix.
>
> Acked-by: Vlastimil Babka
> Acked-by: Kirill Tkhai
> Acked-by: Roman Gushchin
> Signed-off-by: Yang Shi
Reviewed-by: Shakeel Butt
On Tue, Feb 16, 2021 at 4:13 PM Yang Shi wrote:
>
> Using kvfree_rcu() to free the old shrinker_maps instead of call_rcu().
> We don't have to define a dedicated callback for call_rcu() anymore.
>
> Signed-off-by: Yang Shi
> ---
> mm/vmscan.c | 7 +--
> 1 file changed, 1 insertion(+), 6
o[nid]->shrinker_info. And the later patch
> will add more dereference places.
>
> So extract the dereference into a helper to make the code more readable. No
> functional change.
>
> Acked-by: Roman Gushchin
> Acked-by: Kirill Tkhai
> Acked-by: Vlastimil Babka
> Signed-off-by: Yang Shi
Reviewed-by: Shakeel Butt
r_max is also used by
> iterating the
> bit map.
>
> Acked-by: Kirill Tkhai
> Acked-by: Roman Gushchin
> Acked-by: Vlastimil Babka
> Signed-off-by: Yang Shi
Reviewed-by: Shakeel Butt
ed-by: Vlastimil Babka
> Signed-off-by: Yang Shi
Reviewed-by: Shakeel Butt
ameter
>
> Signed-off-by: Yang Shi
Reviewed-by: Shakeel Butt
On Mon, Mar 8, 2021 at 12:30 PM Yang Shi wrote:
>
> On Mon, Mar 8, 2021 at 11:12 AM Shakeel Butt wrote:
> >
> > On Tue, Feb 16, 2021 at 4:13 PM Yang Shi wrote:
> > >
> > > Currently the number of deferred objects is per shrinker, but some
> > > sla
On Mon, Mar 8, 2021 at 12:22 PM Yang Shi wrote:
>
> On Mon, Mar 8, 2021 at 8:49 AM Roman Gushchin wrote:
> >
> > On Sun, Mar 07, 2021 at 10:13:04PM -0800, Shakeel Butt wrote:
> > > On Tue, Feb 16, 2021 at 4:13 PM Yang Shi wrote:
> > > >
> > > &g
On Tue, Feb 16, 2021 at 4:13 PM Yang Shi wrote:
>
> Currently the number of deferred objects is per shrinker, but some slabs,
> for example,
> vfs inode/dentry cache, are per memcg; this would result in poor isolation
> among memcgs.
>
> The deferred objects typically are generated by
CONFIG_MEMCG or memcg
> is disabled
> by kernel command line, then shrinker's SHRINKER_MEMCG_AWARE flag would be
> cleared.
> This makes the implementation of this patch simpler.
>
> Acked-by: Vlastimil Babka
> Reviewed-by: Kirill Tkhai
> Acked-by: Roman Gushchin
> Signed-off-by: Yang Shi
Reviewed-by: Shakeel Butt
> Signed-off-by: Yang Shi
Reviewed-by: Shakeel Butt
On Mon, Mar 1, 2021 at 4:12 AM Michal Hocko wrote:
>
> On Fri 26-02-21 16:00:30, Shakeel Butt wrote:
> > On Fri, Feb 26, 2021 at 3:14 PM Mike Kravetz
> > wrote:
> > >
> > > Cc: Michal
> > >
> > > On 2/26/21 2:44 PM, Shakeel Butt
On Mon, Mar 1, 2021 at 7:57 AM Michal Hocko wrote:
>
> On Mon 01-03-21 07:10:11, Shakeel Butt wrote:
> > On Mon, Mar 1, 2021 at 4:12 AM Michal Hocko wrote:
> > >
> > > On Fri 26-02-21 16:00:30, Shakeel Butt wrote:
> > > > On Fri, Feb 26, 20
which is from memory.stat. Fix it by using mod_lruvec_page_state instead
> of mod_node_page_state.
>
> Fixes: 6a486c0ad4dc ("mm, sl[ou]b: improve memory accounting")
> Signed-off-by: Muchun Song
Reviewed-by: Shakeel Butt
On Mon, Feb 22, 2021 at 9:55 PM Shakeel Butt wrote:
[snip]
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -596,6 +596,9 @@ static inline bool mem_cgroup_below_min(struct mem_cgroup
> *memcg)
> }
>
> int mem_cgroup_charge(struct page *page,
t in the interrupt context) is ignored. This is not what we want.
> So fix it.
>
> Fixes: 37d5985c003d ("mm: kmem: prepare remote memcg charging infra for
> interrupt contexts")
> Signed-off-by: Muchun Song
Good catch.
Cc: sta...@vger.kernel.org
Reviewed-by: S
he limit")), we can again allow __GFP_NOFAIL allocations to trigger
memcg oom-kill. This will make memcg oom behavior closer to page
allocator oom behavior.
Signed-off-by: Shakeel Butt
---
mm/memcontrol.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/mm/memcontrol.c b/mm/m
Replace the implicit checking of root memcg with explicit root memcg
checking i.e. !css->parent with mem_cgroup_is_root().
Signed-off-by: Shakeel Butt
---
mm/memcontrol.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index dcb5665ae
On Fri, Mar 5, 2021 at 8:25 AM Johannes Weiner wrote:
>
[...]
> I'd also rename cgroup_memory_noswap to cgroup_swapaccount - to match
> the commandline and (hopefully) make a bit clearer what it effects.
Do we really need to keep supporting "swapaccount=0"? Is swap
page_counter really a
On Fri, Mar 5, 2021 at 9:37 AM David Hildenbrand wrote:
>
> On 04.03.21 01:03, Shakeel Butt wrote:
> > On Wed, Mar 3, 2021 at 3:34 PM Suren Baghdasaryan wrote:
> >>
> >> On Wed, Mar 3, 2021 at 3:17 PM Shakeel Butt wrote:
> >>>
> >>&
On Fri, Mar 5, 2021 at 8:25 AM Johannes Weiner wrote:
>
> On Fri, Mar 05, 2021 at 12:06:31AM -0800, Hugh Dickins wrote:
> > On Wed, 3 Mar 2021, Shakeel Butt wrote:
> >
> > > Currently the kernel adds the page, allocated for swapin, to the
> > > swapcache be
On Fri, Mar 5, 2021 at 1:26 PM Shakeel Butt wrote:
>
> Currently the kernel adds the page, allocated for swapin, to the
> swapcache before charging the page. This is fine but now we want a
> per-memcg swapcache stat which is essential for folks who wants to
> transparently migrate
nfo(memcg and flag) of
> the memcg needs to be set to the tail pages.
>
> Signed-off-by: Zhou Guanghui
Reviewed-by: Shakeel Butt
rged.
>
> Therefore, the memcg of the tail page needs to be set when splitting the page.
>
> Signed-off-by: Zhou Guanghui
Reviewed-by: Shakeel Butt
>
> The cgroup v2 documentation covers it, but the description is missing for cgroup v1.
>
> Signed-off-by: Yang Shi
Reviewed-by: Shakeel Butt
On Wed, Apr 7, 2021 at 4:55 AM Michal Hocko wrote:
>
> On Mon 05-04-21 11:18:48, Bharata B Rao wrote:
> > Hi,
> >
> > When running 1 (more-or-less-empty-)containers on a bare-metal Power9
> > server(160 CPUs, 2 NUMA nodes, 256G memory), it is seen that memory
> > consumption increases quite a
Hi Tim,
On Mon, Apr 5, 2021 at 11:08 AM Tim Chen wrote:
>
> Traditionally, all memory is DRAM. Some DRAM might be closer/faster than
> others NUMA wise, but a byte of media has about the same cost whether it
> is close or far. But, with new memory tiers such as Persistent Memory
> (PMEM).
imping along.
>
> [ We used to do this with the original res_counter, where it was a
> more straight-forward correction inside the spinlock section. I
> didn't carry it forward into the lockless page counters for
> simplicity, but it turns out this is quite useful in practice. ]
>
> Signed-off-by: Johannes Weiner
Reviewed-by: Shakeel Butt
On Wed, Apr 7, 2021 at 2:47 PM Daniel Xu wrote:
>
> There currently does not exist a way to answer the question: "What is in
> the page cache?". There are various heuristics and counters but nothing
> that can tell you anything like:
>
> * 3M from /home/dxu/foo.txt
> * 5K from ...
> * etc.
On Thu, Apr 8, 2021 at 11:01 AM Yang Shi wrote:
>
> On Thu, Apr 8, 2021 at 10:19 AM Shakeel Butt wrote:
> >
> > Hi Tim,
> >
> > On Mon, Apr 5, 2021 at 11:08 AM Tim Chen wrote:
> > >
> > > Traditionally, all memory is DRAM. Some DRAM mi
On Thu, Apr 8, 2021 at 4:52 AM Michal Hocko wrote:
>
[...]
>
> What I am trying to say (and I have brought that up when demotion has been
> discussed at LSFMM) is that the implementation shouldn't be PMEM aware.
> The specific technology shouldn't be imprinted into the interface.
> Fundamentally
On Thu, Apr 8, 2021 at 1:50 PM Yang Shi wrote:
>
[...]
> >
> > The low and min limits have semantics similar to the v1's soft limit
> > for this situation i.e. letting the low priority job occupy top tier
> > memory and depending on reclaim to take back the excess top tier
> > memory use of such
ent. Introduce a new function obj_cgroup_uncharge_mod_state()
> that combines them with a single irq_save/irq_restore cycle.
>
> Signed-off-by: Waiman Long
Reviewed-by: Shakeel Butt
On Sat, Apr 10, 2021 at 9:16 AM Ilias Apalodimas
wrote:
>
> Hi Matthew
>
> On Sat, Apr 10, 2021 at 04:48:24PM +0100, Matthew Wilcox wrote:
> > On Sat, Apr 10, 2021 at 12:37:58AM +0200, Matteo Croce wrote:
> > > This is needed by the page_pool to avoid recycling a page not allocated
> > > via
d-by: Johannes Weiner
Reviewed-by: Shakeel Butt
r
> >
> > Since this patch is somewhat independent of the rest of the series,
> > you may want to put it in the very beginning, or even submit it
> > separately, to keep the main series as compact as possible. Reviewers
> > can be more hesitant to get involved with larger series ;)
>
> OK. I will gather all the cleanup patches into a separate series.
> Thanks for your suggestion.
That would be best.
For this patch:
Reviewed-by: Shakeel Butt
a WARN_ON_ONCE in the page_counter_cancel(). Who knows if it
> will trigger? So it is better to fix it.
>
> Signed-off-by: Muchun Song
> Acked-by: Johannes Weiner
Reviewed-by: Shakeel Butt
to allow either of the two parameters to be set to null. This
> makes mod_memcg_lruvec_state() equivalent to mod_memcg_state() if lruvec
> is null.
>
> Signed-off-by: Waiman Long
Similar to Roman's suggestion: instead of what this patch is doing the
'why' would be better in the changelog.
Reviewed-by: Shakeel Butt
On Fri, Apr 9, 2021 at 4:26 PM Tim Chen wrote:
>
>
> On 4/8/21 4:52 AM, Michal Hocko wrote:
>
> >> The top tier memory used is reported in
> >>
> >> memory.toptier_usage_in_bytes
> >>
> >> The amount of top tier memory usable by each cgroup without
> >> triggering page reclaim is controlled by
On Sun, Apr 18, 2021 at 10:12 PM Ilias Apalodimas
wrote:
>
> On Wed, Apr 14, 2021 at 01:09:47PM -0700, Shakeel Butt wrote:
> > On Wed, Apr 14, 2021 at 12:42 PM Jesper Dangaard Brouer
> > wrote:
> > >
> > [...]
> > > > >
> > > &
On Sun, Apr 18, 2021 at 11:07 PM Muchun Song wrote:
>
> On Mon, Apr 19, 2021 at 8:01 AM Waiman Long wrote:
> >
> > There are two issues with the current refill_obj_stock() code. First of
> > all, when nr_bytes reaches over PAGE_SIZE, it calls drain_obj_stock() to
> > atomically flush out
On Mon, Apr 19, 2021 at 8:43 AM Ilias Apalodimas
wrote:
>
[...]
> > Pages mapped into the userspace have their refcnt elevated, so the
> > page_ref_count() check by the drivers indicates to not reuse such
> > pages.
> >
>
> When tcp_zerocopy_receive() is invoked it will call
>
ff-by: Waiman Long
Reviewed-by: Shakeel Butt
Proposal: Provide memory guarantees to userspace oom-killer.
Background:
Issues with kernel oom-killer:
1. Very conservative and prefer to reclaim. Applications can suffer
for a long time.
2. Borrows the context of the allocator which can be resource limited
(low sched priority or limited CPU
On Mon, Apr 12, 2021 at 11:58 PM Muchun Song wrote:
>
> The css_set_lock is used to guard the list of inherited objcgs. So there
> is no need to uncharge kernel memory under css_set_lock. Just move it
> out of the lock.
>
> Signed-off-by: Muchun Song
Reviewed-by: Shakeel Butt
is also
> impossible for the two to run in parallel. So xchg() is unnecessary
> and it is enough to use WRITE_ONCE().
>
> Signed-off-by: Muchun Song
> Acked-by: Johannes Weiner
Reviewed-by: Shakeel Butt
On Wed, Apr 14, 2021 at 6:52 AM Rik van Riel wrote:
>
> On Wed, 2021-04-14 at 16:27 +0800, Huang, Ying wrote:
> > Yu Zhao writes:
> >
> > > On Wed, Apr 14, 2021 at 12:15 AM Huang, Ying
> > > wrote:
> > > >
> > > NUMA Optimization
> > > -
> > > Support NUMA policies and per-node
local variable of @pgdat. So mem_cgroup_page_lruvec() does not
> need the pgdat parameter. Just remove it to simplify the code.
>
> Signed-off-by: Muchun Song
> Acked-by: Johannes Weiner
Reviewed-by: Shakeel Butt
roup_disabled() and
> CONFIG_MEMCG.
>
> Signed-off-by: Muchun Song
> Acked-by: Johannes Weiner
Reviewed-by: Shakeel Butt
On Wed, Apr 14, 2021 at 12:42 PM Jesper Dangaard Brouer
wrote:
>
[...]
> > >
> > > Can this page_pool be used for TCP RX zerocopy? If yes then PageType
> > > can not be used.
> >
> > Yes it can, since it's going to be used as your default allocator for
> > payloads, which might end up on an SKB.
On Mon, Apr 19, 2021 at 11:46 PM Michal Hocko wrote:
>
> On Mon 19-04-21 18:44:02, Shakeel Butt wrote:
[...]
> > memory.min. However a new allocation from userspace oom-killer can
> > still get stuck in the reclaim and policy rich oom-killer do trigger
> > new allocations
On Thu, Apr 15, 2021 at 10:16 PM Muchun Song wrote:
>
> lruvec_holds_page_lru_lock() doesn't check anything about locking and is
> used to check whether the page belongs to the lruvec. So rename it to
> page_matches_lruvec().
>
> Signed-off-by: Muchun Song
Reviewed-by: Shakeel Butt
ime when __mod_obj_stock_state() is called leads to an actual call to
> mod_objcg_state() after initial boot. When doing parallel kernel build,
> the figure was about 16% (21894614 out of 139780628). So caching the
> vmstat data reduces the number of calls to mod_objcg_state() by more
> than 80%.
>
> Signed-off-by: Waiman Long
Reviewed-by: Shakeel Butt
> Acked-by: Roman Gushchin
Reviewed-by: Shakeel Butt
On Mon, Apr 12, 2021 at 3:55 PM Waiman Long wrote:
>
> Most kmem_cache_alloc() calls are from user context. With instrumentation
> enabled, the measured amount of kmem_cache_alloc() calls from non-task
> context was about 0.01% of the total.
>
> The irq disable/enable sequence used in this case
On Mon, Apr 12, 2021 at 4:10 PM Shakeel Butt wrote:
>
> On Mon, Apr 12, 2021 at 3:55 PM Waiman Long wrote:
> >
> > Most kmem_cache_alloc() calls are from user context. With instrumentation
> > enabled, the measured amount of kmem_cache_alloc() calls from non-task
&g
CCing more folks.
On Fri, Feb 12, 2021 at 9:14 AM Muchun Song wrote:
>
> On cgroup v2, the swap counter charges the actual number of swap entries.
> If a swap cache page is charged successfully, we then uncharge
> the swap counter. That is wrong on cgroup v2, because the swap
> entry has not been freed.
>
>
consumed swap when shared
> pages are partially swapped back in. This in turn allows a cgroup to
> consume more swap than its configured limit intends.
>
> Add the do_memsw_account() check back to fix this problem.
> ---
>
> > Fixes: 2d1c498072de ("mm: memcontrol: make
On Fri, Feb 12, 2021 at 10:48 PM Muchun Song wrote:
>
> On Sat, Feb 13, 2021 at 2:57 AM Shakeel Butt wrote:
> >
> > CCing more folks.
> >
> > On Fri, Feb 12, 2021 at 9:14 AM Muchun Song
> > wrote:
> > >
> > > The swap charges the actua
On Tue, Feb 16, 2021 at 5:25 PM David Rientjes wrote:
>
> On Tue, 16 Feb 2021, Michal Hocko wrote:
>
> > > Hugepages can be preallocated to avoid unpredictable allocation latency.
> > > If we run into 4k page shortage, the kernel can trigger OOM even though
> > > there were free hugepages. When
ed by physical capacity. This in turn allows cgroups to
> significantly overconsume their allotted swap space.
>
> Add the do_memsw_account() check back to fix this problem.
>
> Fixes: 2d1c498072de ("mm: memcontrol: make swap tracking an integral part of
> memory control"
+Cc Roman
On Fri, Feb 5, 2021 at 2:49 AM Michal Hocko wrote:
>
[snip]
> > > > Also, css_get is enough because page
> > > > has a reference to the memcg.
> > >
> > > tryget used to be there to guard against offlined memcg but we have
> > > concluded this is impossible in this path. tryget stayed
On Fri, Feb 5, 2021 at 10:31 AM Johannes Weiner wrote:
>
> On Fri, Feb 05, 2021 at 11:32:24AM +0100, Michal Hocko wrote:
> > On Fri 05-02-21 17:14:30, Muchun Song wrote:
> > > On Fri, Feb 5, 2021 at 4:36 PM Michal Hocko wrote:
> > > >
> > > > On Fri 05-02-21 14:27:19, Muchun Song wrote:
> > > >
unmount, the css moves back to the default hierarchy. Annotate
> rebind_subsystems() to move the root css linkage along between roots.
>
> Signed-off-by: Johannes Weiner
> Reviewed-by: Roman Gushchin
Reviewed-by: Shakeel Butt
t_for_each_entry().
>
> There is only one caller of uncharge_list(). So just fold it into
> mem_cgroup_uncharge_list() and remove it.
>
> Signed-off-by: Muchun Song
Reviewed-by: Shakeel Butt
global one.
Signed-off-by: Shakeel Butt
---
mm/list_lru.c | 12 ++--
1 file changed, 2 insertions(+), 10 deletions(-)
diff --git a/mm/list_lru.c b/mm/list_lru.c
index fe230081690b..6f067b6b935f 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -373,21 +373,13 @@ static void memcg_destroy_lis
accounting code and then we would
need to add an additional parameter to tell it not to touch the NR_SWAPCACHE
stat, as that code path bypasses the swapcache.
This patch adds a memcg charging API explicitly for swapin pages and
cleans up do_swap_page() to not set and reset the PageSwapCache bit.
Signed-off-by: Shakeel
On Fri, Feb 19, 2021 at 2:44 PM Shakeel Butt wrote:
[snip]
> mode change 100644 => 100755 scripts/cc-version.sh
[snip
> diff --git a/scripts/cc-version.sh b/scripts/cc-version.sh
> old mode 100644
> new mode 100755
Please ignore these unintended mode changes. I will remove th
On Fri, Feb 19, 2021 at 4:34 PM Johannes Weiner wrote:
>
> On Fri, Feb 19, 2021 at 02:44:05PM -0800, Shakeel Butt wrote:
> > Currently the kernel adds the page, allocated for swapin, to the
> > swapcache before charging the page. This is fine but now we want a
> > per-me
On Wed, Dec 20, 2023 at 01:45:01PM -0800, Mina Almasry wrote:
> Add the netmem_ref type, an abstraction for network memory.
>
> To add support for new memory types to the net stack, we must first
> abstract the current memory type. Currently parts of the net stack
> use struct page directly:
>
>
On Wed, Dec 20, 2023 at 01:45:02PM -0800, Mina Almasry wrote:
> diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
> index 65d1f6755f98..3180a54b2c68 100644
> --- a/net/kcm/kcmsock.c
> +++ b/net/kcm/kcmsock.c
> @@ -636,9 +636,15 @@ static int kcm_write_msgs(struct kcm_sock *kcm)
>
age.
>
> Signed-off-by: Mina Almasry
Reviewed-by: Shakeel Butt
On Thu, Jan 4, 2024 at 1:44 PM Jakub Kicinski wrote:
>
[...]
>
> You seem to be trying hard to make struct netmem a thing.
> Perhaps you have a reason I'm not getting?
Mina already went with your suggestion and that is fine. To me, struct
netmem is more aesthetically aligned with the existing
On Tue, Apr 02, 2024 at 09:50:54AM +0800, Ubisectech Sirius wrote:
> > On Mon, Apr 01, 2024 at 03:04:46PM +0800, Ubisectech Sirius wrote:
> > Hello.
> > We are Ubisectech Sirius Team, the vulnerability lab of China ValiantSec.
> > Recently, our team has discovered an issue in Linux kernel 6.7.
The following commit has been merged into the sched/core branch of tip:
Commit-ID: df77430639c9cf73559bac0f25084518bf9a812d
Gitweb:
https://git.kernel.org/tip/df77430639c9cf73559bac0f25084518bf9a812d
Author: Shakeel Butt
AuthorDate: Sun, 21 Mar 2021 13:51:56 -07:00