then the
switch probably doesn't happen at this moment (and I guess
stock->cached_objcg and stock->cached can be independent to some extent,
so the old memcg in one needn't be the old memcg in the other).
In conclusion
Reviewed-by: Michal Koutný
Michal
Hello Chen.
On Fri, Dec 18, 2020 at 02:17:55PM +0800, Chen Zhou
wrote:
> When mounting a cgroup hierarchy with disabled controller in cgroup v1,
> all available controllers will be attached.
Not sure if I understand the situation -- have you observed a v1
controller attached to a hierarchy
re almost
identical. In order to reduce duplication, factor out the common code in a
similar fashion to what we already do for other threadgroup/task functions.
No functional changes are intended.
Suggested-by: Hao Lee
Signed-off-by: Michal Koutný
---
kernel/cgroup/cgro
On Thu, Jan 14, 2021 at 10:08:19PM +0800, chenzhou
wrote:
> In this case, at the beginning of function check_cgroupfs_options(), the mask
> ctx->subsys_mask will be 0. And if we mount without 'none' and 'name='
> options,
> then in check_cgroupfs_options(), the flag ctx->all_ss will be set,
es - high,
> + GFP_KERNEL, true);
Although I was also initially confused by throwing 'reclaimed' info
away, the patch makes sense to me given the reasoning.
It is
Reviewed-by: Michal Koutný
As for the discussed unsuccessful retries, I'd keep it a separate chan
h correct
Reviewed-by: Michal Koutný
> The behavior was changed since commit f5dfb5315d34 ("cgroup: take
> options parsing into ->parse_monolithic()"), will add this as Fixes.
Thanks.
Michal
On Fri, Jan 15, 2021 at 05:37:17PM +0800, Chen Zhou
wrote:
> [...]
> kernel/cgroup/cgroup-v1.c | 3 +++
> 1 file changed, 3 insertions(+)
Reviewed-by: Michal Koutný
On Thu, Oct 01, 2020 at 01:27:13PM -0400, Johannes Weiner
wrote:
> The activation code is the only path where page migration is not
> excluded. Because unlike with page state statistics, we don't really
> mind a race when counting an activation event.
Thanks for the explanation. I see why the
On Thu, Aug 06, 2020 at 09:37:17PM -0700, Roman Gushchin wrote:
> In general, yes. But in this case I think it wouldn't be a good idea:
> most often cgroups are created by a centralized daemon (systemd),
> which is usually located in the root cgroup. Even if it's located not in
> the root cgroup,
Hi.
On Wed, Sep 30, 2020 at 05:27:10PM -0700, Roman Gushchin wrote:
> @@ -369,8 +371,12 @@ enum page_memcg_data_flags {
> */
> static inline struct mem_cgroup *page_memcg(struct page *page)
> {
> + unsigned long memcg_data = page->memcg_data;
> +
> VM_BUG_ON_PAGE(PageSlab(page),
Hello.
On Wed, Oct 14, 2020 at 08:07:49PM +0100, Richard Palethorpe
wrote:
> SLAB objects which outlive their memcg are moved to their parent
> memcg where they may be uncharged. However if they are moved to the
> root memcg, uncharging will result in negative page counter values as
> root has
On Fri, Oct 16, 2020 at 10:53:08AM -0400, Johannes Weiner
wrote:
> The central try_charge() function charges recursively all the way up
> to and including the root.
Except for use_hierarchy=0 (which is the case here, as Richard
wrote). The reparenting is hence somewhat incompatible with
On Fri, Oct 16, 2020 at 04:05:21PM +0100, Richard Palethorpe
wrote:
> I don't know if that could happen without reparenting. I suppose if
> use_hierarchy=1 then actually this patch will result in root being
> overcharged, so perhaps it should also check for use_hierarchy?
Right, you'd need to
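The distinction discussed above can be sketched with a toy model (not the kernel's actual try_charge(); all struct and function names here are invented for illustration): with use_hierarchy=1 a charge propagates to every ancestor up to and including the root, while with use_hierarchy=0 only the charged memcg itself is affected.

```c
#include <assert.h>
#include <stddef.h>

/* Toy model only -- not kernel code. With use_hierarchy=1 a charge is
 * propagated to every ancestor up to and including the root; with
 * use_hierarchy=0 only the charged memcg itself is affected. */
struct toy_memcg {
	struct toy_memcg *parent;
	int use_hierarchy;
	long pages;
};

static void toy_charge(struct toy_memcg *memcg, long nr_pages)
{
	struct toy_memcg *iter = memcg;

	do {
		iter->pages += nr_pages;
		iter = iter->use_hierarchy ? iter->parent : NULL;
	} while (iter);
}

/* Demonstration helper: charge a child and report what reached root. */
static long toy_root_pages_after_child_charge(int use_hierarchy)
{
	struct toy_memcg root = { NULL, 1, 0 };
	struct toy_memcg child = { &root, use_hierarchy, 0 };

	toy_charge(&child, 4);
	return root.pages;
}
```

Under this model, a child's charge reaches the root only in the hierarchical case, which is why reparenting objects into the root can leave its counters inconsistent when use_hierarchy=0.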
Hi Shakeel.
On Tue, Jul 07, 2020 at 10:02:50AM -0700, Shakeel Butt
wrote:
> > Well, I was talking about memory.low. It is not meant only to protect
> > from the global reclaim. It can be used for balancing memory reclaim
> > from _any_ external memory pressure source. So it is somehow related
On Tue, Aug 11, 2020 at 09:55:27AM -0700, Roman Gushchin wrote:
> As I said, there are 2 problems with charging systemd (or a similar daemon):
> 1) It often belongs to the root cgroup.
This doesn't hold for systemd (if we agree that systemd is the most
common case).
> 2) OOMing or failing some
On Tue, Aug 11, 2020 at 12:32:28PM -0700, Roman Gushchin wrote:
> If we'll limit init.slice (where systemd seems to reside), as you suggest,
> we'll eventually create thrashing in init.slice, followed by OOM.
> I struggle to see how it makes the life of a user better?
> [...]
> The problem is that
Hi.
On Mon, Oct 19, 2020 at 03:28:45PM -0700, Roman Gushchin wrote:
> Currently the root memory cgroup is never charged directly, but
> if an ancestor cgroup is charged, the charge is propagated up to the
s/ancestor/descendant/
> The root memory cgroup doesn't show the charge to a user, neither
Hi.
On Tue, Oct 20, 2020 at 06:52:08AM +0100, Richard Palethorpe
wrote:
> I don't think that is relevant as we get the memcg from objcg->memcg
> which is set during reparenting. I suppose however, we can determine if
> the objcg was reparented by inspecting memcg->objcg.
+1
> If we just check
Hi.
On Tue, Nov 10, 2020 at 07:11:28AM -0800, Shakeel Butt
wrote:
> > The problem is that cgroup_subsys_on_dfl(memory_cgrp_subsys)'s return value
> > can change at any particular moment.
The switch can happen only when a singular (i.e. root-only) hierarchy
exists. (Or it could if
On Mon, Sep 02, 2019 at 04:02:57PM -0700, Suren Baghdasaryan
wrote:
> > +static inline void cpu_uclamp_print(struct seq_file *sf,
> > + enum uclamp_id clamp_id)
> > [...]
> > + rcu_read_lock();
> > + tg = css_tg(seq_css(sf));
> > + util_clamp =
On Thu, Aug 08, 2019 at 04:08:10PM +0100, Patrick Bellasi
wrote:
> Well, if I've understood your comment in the previous message correctly, I
> would say that at this stage we don't need RCU locks at all.
Agreed.
> Reason being that cpu_util_update_eff() gets called only from
> cpu_uclamp_write()
On Thu, Aug 08, 2019 at 04:10:21PM +0100, Patrick Bellasi
wrote:
> Not sure I get what you mean here: I'm currently exposing uclamp to
> both v1 and v2 hierarchies.
The cpu controller has a different API for v1 and v2 hierarchies. My
question, reworded, is: are the new knobs exposed in the legacy API
sn't affect how the hierarchical mode is working,
> which is the only sane and truly supported mode now.
I agree with the patch and you can add
Reviewed-by: Michal Koutný
However, it effectively switches any users of root.use_hierarchy=0 (if there
are any, watching the coun
On Mon, Jul 08, 2019 at 09:43:54AM +0100, Patrick Bellasi
wrote:
> Since it's possible for a cpu.uclamp.min value to be bigger than the
> cpu.uclamp.max value, ensure local consistency by restricting each
> "protection"
> (i.e. min utilization) with the corresponding "limit" (i.e. max
>
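The local-consistency rule from the quoted changelog can be sketched as follows (a toy model, not the scheduler code; the function name is invented): the requested protection is capped by the corresponding limit.

```c
#include <assert.h>

/* Toy sketch of the rule above: the effective "protection"
 * (uclamp.min) is restricted by the corresponding "limit"
 * (uclamp.max), so a min larger than max is clamped down. */
static unsigned int toy_effective_uclamp_min(unsigned int req_min,
					     unsigned int req_max)
{
	return req_min > req_max ? req_max : req_min;
}
```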
On Mon, Jul 08, 2019 at 09:43:55AM +0100, Patrick Bellasi
wrote:
> +static void uclamp_update_root_tg(void)
> +{
> +	struct task_group *tg = &root_task_group;
> +
> +	uclamp_se_set(&tg->uclamp_req[UCLAMP_MIN],
> +		      sysctl_sched_uclamp_util_min, false);
> +
On Mon, Jul 08, 2019 at 09:43:56AM +0100, Patrick Bellasi
wrote:
> This mimics what already happens for a task's CPU affinity mask when the
> task is also in a cpuset, i.e. cgroup attributes are always used to
> restrict per-task attributes.
If I am not mistaken, when a sched_setaffinity(2) call is
Hello Patrick.
I took a look at your series and I've posted some notes to your patches.
One applies more to the series overall -- I see there is enum uclamp_id
defined, but in many places (local variables, function args) int or
unsigned int is used. Besides the inconsistency, I think it'd be nice
On Tue, Jul 16, 2019 at 03:07:06PM +0100, Patrick Bellasi
wrote:
> That note comes from the previous review cycle and it's based on a
> request from Tejun to align uclamp behaviors with the way the
> delegation model is supposed to work.
I saw and hopefully understood that reasoning --
On Tue, Jul 16, 2019 at 03:34:17PM +0100, Patrick Bellasi
wrote:
> > cpu_util_update_eff internally calls css_for_each_descendant_pre() so
> > this should be protected with rcu_read_lock().
>
> Right, good catch! Will add in v12.
When I responded to your other patch, it occurred to me that
On Tue, Jul 16, 2019 at 03:34:35PM +0100, Patrick Bellasi
wrote:
> Am I missing something?
No, it's rather my misinterpretation of the syscall semantics.
> Otherwise, I think the changelog sentence you quoted is just
> misleading.
It certainly misled me into thinking about the sched_setattr
Hello Yun.
On Fri, Jul 12, 2019 at 06:10:24PM +0800, 王贇
wrote:
> Forgive me, but I have no idea how to combine this
> with memory cgroup's locality hierarchical update...
> The parent memory cgroup does not have influence on mems_allowed
> of its children, correct?
I'd recommend looking at the
Hello Song.
On Wed, Apr 10, 2019 at 07:43:35PM +, Song Liu
wrote:
> The load level above is measured as requests-per-second.
>
> When there is no side workload, the system has about 45% busy CPU with
> load level of 1.0; and about 75% busy CPU at load level of 1.5.
>
> The saturation
On Tue, Apr 09, 2019 at 04:40:03PM -0400, Joel Savitz
wrote:
> $ grep Cpus /proc/$$/status
> Cpus_allowed: ff
> Cpus_allowed_list: 0-7
(a)
> $ taskset -p 4 $$
> pid 19202's current affinity mask: f
> pid 19202's new affinity mask: 4
>
> $ grep
On Thu, Jul 18, 2019 at 07:17:43PM +0100, Patrick Bellasi
wrote:
> +static ssize_t cpu_uclamp_min_write(struct kernfs_open_file *of,
> + char *buf, size_t nbytes,
> + loff_t off)
> +{
> [...]
> +static ssize_t
On Thu, Jul 18, 2019 at 07:17:45PM +0100, Patrick Bellasi
wrote:
> The clamp values are not tunable at the level of the root task group.
> That's for two main reasons:
>
> - the root group represents "system resources" which are always
>entirely available from the cgroup standpoint.
>
>
Hello Song, and my apologies for the late reply.
I understand the motivation for the headroom attribute is to achieve
side-load throttling before the CPU is fully saturated, since your
measurements show that something else gets saturated earlier than the
CPU and causes growth of the observed latency.
The
On Tue, Jul 16, 2019 at 11:40:35AM +0800, 王贇
wrote:
> By doing 'cat /sys/fs/cgroup/cpu/CGROUP_PATH/cpu.numa_stat', we see new
> output line heading with 'exectime', like:
>
> exectime 311900 407166
What you present are times aggregated over CPUs in the NUMA nodes; this
seems a bit lossy
On Tue, Jul 16, 2019 at 10:41:36AM +0800, 王贇
wrote:
> Actually whatever the memory node sets or cpu allow sets is, it will
> take effect on task's behavior regarding memory location and cpu
> location, while the locality only care about the results rather than
> the sets.
My previous response
On Fri, Aug 02, 2019 at 10:08:48AM +0100, Patrick Bellasi
wrote:
> +static ssize_t cpu_uclamp_write(struct kernfs_open_file *of, char *buf,
> + size_t nbytes, loff_t off,
> + enum uclamp_id clamp_id)
> +{
> + struct uclamp_request req;
On Fri, Aug 02, 2019 at 10:08:49AM +0100, Patrick Bellasi
wrote:
> @@ -7095,6 +7149,7 @@ static ssize_t cpu_uclamp_write(struct kernfs_open_file
> *of, char *buf,
> if (req.ret)
> return req.ret;
>
> + mutex_lock(&uclamp_mutex);
> rcu_read_lock();
>
> tg =
On Fri, Aug 02, 2019 at 10:08:47AM +0100, Patrick Bellasi
wrote:
> Patrick Bellasi (6):
> sched/core: uclamp: Extend CPU's cgroup controller
> sched/core: uclamp: Propagate parent clamps
> sched/core: uclamp: Propagate system defaults to root group
> sched/core: uclamp: Use TG's clamps
Hi.
On Thu, Sep 26, 2019 at 05:55:29PM -0700, Mina Almasry
wrote:
> My guess is that a new controller needs to support cgroups-v2, which
> is fine. But can a new controller also support v1? Or is there a
> requirement that new controllers support *only* v2? I need whatever
> solution here to
On Thu, Sep 05, 2019 at 02:45:44PM -0700, Roman Gushchin wrote:
> Roman Gushchin (14):
> [...]
> mm: memcg/slab: use one set of kmem_caches for all memory cgroups
From that commit's message:
> 6) obsoletes kmem.slabinfo cgroup v1 interface file, as there are
> no per-memcg kmem_caches
On Wed, Oct 02, 2019 at 10:00:07PM +0900, Suleiman Souhlal
wrote:
> kmem.slabinfo has been absolutely invaluable for debugging, in my experience.
> I am however not aware of any automation based on it.
My experience is the same. However, the point is that this has been
exposed for ages, so the
Hi (and apology for relatively late reply).
On Tue, Sep 10, 2019 at 09:08:55AM -0700, Tejun Heo wrote:
> I can implement the switching if so.
I see the "conflict" is solved by the switching.
> Initially, I put them under block device sysfs but it was too clumsy
> with different config file
Simplify task migration by being oblivious to its PID during
migration. This also allows individual threads to be migrated easily.
The change brings no functional change and prepares the ground for
thread-granularity migration tests.
Signed-off-by: Michal Koutný
---
tools/testing/selftests
Add two new tests that verify that thread and threadgroup migrations
work as expected.
Signed-off-by: Michal Koutný
---
tools/testing/selftests/cgroup/Makefile | 2 +-
tools/testing/selftests/cgroup/cgroup_util.c | 26
tools/testing/selftests/cgroup/cgroup_util.h | 2 +
tools
test_core tests various cgroup creation/removal and task migration
paths. Run the tests repeatedly with interfering noise (for lockdep
checks). Currently, forking and subsystem enable/disable switching are
the implemented noise types.
Signed-off-by: Michal Koutný
---
tools/testing/selftests
We no longer take cgroup_mutex in cgroup_exit and the exiting tasks are
not moved to init_css_set, reflect that in several comments to prevent
confusion.
Signed-off-by: Michal Koutný
---
kernel/cgroup/cgroup.c | 29 +
1 file changed, 9 insertions(+), 20 deletions
Hello.
The important part is patch 02, where the reasoning is.
The rest is mostly auxiliary, split out into separate commits for
better readability.
The patches are based on v5.3.
Michal
Michal Koutný (5):
cgroup: Update comments about task exit path
cgroup: Optimize single thread
only the case of self-migration by writing "0" into
cgroup.threads. For simplicity, we always take cgroup_threadgroup_rwsem
with numeric PIDs.
This change improves migration dependent workload performance similar
to per-signal_struct state.
Signed-off-by: Michal Koutný
---
kernel/cgroup
(+CC cgro...@vger.kernel.org)
On Thu, Aug 08, 2019 at 12:40:02PM -0700, Mina Almasry
wrote:
> We have developers interested in using hugetlb_cgroups, and they have
> expressed
> dissatisfaction regarding this behavior.
I assume you still want to enforce a limit on a particular group and the
On Fri, Oct 04, 2019 at 03:11:04PM -0700, Roman Gushchin wrote:
> An inode which is getting dirty for the first time is associated
> with the wb structure (look at __inode_attach_wb()). It can later
> be switched to another wb under some conditions (e.g. some other
> cgroup is writing a lot of
Hello.
On Tue, Jun 23, 2020 at 11:45:14AM -0700, Roman Gushchin wrote:
> Because the size of memory cgroup internal structures can dramatically
> exceed the size of object or page which is pinning it in the memory, it's
> not a good idea to simple ignore it. It actually breaks the isolation
>
Hi.
On Tue, May 05, 2020 at 04:04:31PM +0200, Christian Brauner
wrote:
> -SYSCALL_DEFINE2(setns, int, fd, int, nstype)
> +SYSCALL_DEFINE2(setns, int, fd, int, flags)
> [...]
> - file = proc_ns_fget(fd);
> - if (IS_ERR(file))
> - return PTR_ERR(file);
> + int err = 0;
>
On Wed, Jun 24, 2020 at 01:54:56PM +0200, Christian Brauner
wrote:
> Yep, I already have a fix for this in my tree based on a previous
> report from LTP.
Perfect. (Sorry for the noise then.)
Thanks,
Michal
nification is also aligning FilePmdMapped with others.)
Signed-off-by: Michal Koutný
---
fs/proc/task_mmu.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index dbda4499a859..5066b0251ed8 100644
--- a/fs/proc/task_mmu.c
+++
Signed-off-by: Michal Koutný
---
Documentation/admin-guide/cgroup-v2.rst | 4
1 file changed, 4 insertions(+)
diff --git a/Documentation/admin-guide/cgroup-v2.rst
b/Documentation/admin-guide/cgroup-v2.rst
index 94bdff4f9e09..47f9f056e66f 100644
--- a/Documentation/admin-guide/cgroup-v2
Signed-off-by: Michal Koutný
---
Documentation/admin-guide/cgroup-v2.rst | 24 +---
1 file changed, 21 insertions(+), 3 deletions(-)
diff --git a/Documentation/admin-guide/cgroup-v2.rst
b/Documentation/admin-guide/cgroup-v2.rst
index d09471aa7443..94bdff4f9e09 100644
patch just makes docs indefinite until the idea is implemented.
Michal Koutný (3):
docs: cgroup: Explain reclaim protection target
docs: cgroup: Note about sibling relative reclaim protection
docs: cgroup: No special handling of unpopulated memcgs
Documentation/admin-guide/cgroup-v
inner-node constraint may be added later.)
Signed-off-by: Michal Koutný
---
Documentation/admin-guide/cgroup-v2.rst | 3 ---
1 file changed, 3 deletions(-)
diff --git a/Documentation/admin-guide/cgroup-v2.rst
b/Documentation/admin-guide/cgroup-v2.rst
index 47f9f056e66f..3d62922c4
Thanks for digging through this.
On Fri, May 24, 2019 at 11:33:55AM -0400, Joel Savitz
wrote:
> It is a bit ambiguous, but I performed no action on the task's cpuset
> nor did I offline any cpus at point (a).
So did you do any operation that left you with
cpu_active_mask & 0xf0 == 0
?
(If
On Wed, Jun 05, 2019 at 01:49:35PM +0200, Juri Lelli
wrote:
> Existing code comes with a comment saying the "we don't support RT-tasks
> being in separate groups".
I'm also inclined to think this check is not completely correct.
This guard also prevents enabling the cpu controller on the unified hierarchy
sem)) in find_extend_vma could be
triggered as
#
Signed-off-by: Michal Koutný
---
When I was attempting to reduce usage of mmap_sem I came across this
unprotected access and increased number of its holders :-/
I'm not sure whether there is a real concurrent writer at this early
stages (I conside
On Wed, Jun 12, 2019 at 10:00:34AM -0700, Matthew Wilcox
wrote:
> On Wed, Jun 12, 2019 at 04:28:11PM +0200, Michal Koutný wrote:
> > - /* N.B. passed_fileno might not be initialized? */
> > +
>
> Why did you delete this comment?
The variable got removed in
d20894
On Tue, May 28, 2019 at 02:10:37PM +0200, Michal Koutný
wrote:
> Although, on v1 we will lose the "no longer affine to..." message
> (which is what happens in your demo IIUC).
FWIW, I was wrong, off by one 'state' transition. So the patch doesn't
cause change in messaging (not tested though).
sem)) in find_extend_vma could be
triggered as
#
Cc: Matthew Wilcox
Reviewed-by: Cyrill Gorcunov
Signed-off-by: Michal Koutný
---
fs/binfmt_elf.c | 10 +-
fs/exec.c | 3 ++-
2 files changed, 11 insertions(+), 2 deletions(-)
diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
in
On Wed, Jun 05, 2019 at 04:20:03PM +0200, Michal Koutný
wrote:
> I considered relaxing the check to non-root cgroups only, however, as
> your example shows, it doesn't prevent reaching the avoided state by
> other paths. I'm not that familiar with RT sched to tell whether
> RT-pr
Hello.
I see suspicious asymmetry, in the current mainline:
> WRITE_ONCE(memcg->memory.emin, effective_protection(usage, parent_usage,
> READ_ONCE(memcg->memory.min),
> READ_ONCE(parent->memory.emin),
>
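The quoted effective_protection() call can be approximated by a heavily simplified sketch (a toy model with invented names; the real kernel function also distributes the parent's effective protection proportionally among siblings based on usage, which is omitted here): a child's effective protection can exceed neither its own memory.min setting nor the parent's effective protection.

```c
#include <assert.h>

/* Simplified sketch of hierarchical reclaim protection (not the real
 * effective_protection(), which also apportions the parent's effective
 * protection among siblings): the child's effective min is bounded by
 * both its own setting and the parent's effective value. */
static unsigned long toy_effective_min(unsigned long own_min,
				       unsigned long parent_emin)
{
	return own_min < parent_emin ? own_min : parent_emin;
}
```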
truct")
avoided the coarse use of mmap_sem in similar situations.
get_cmdline can also use arg_lock instead of mmap_sem when it reads the
boundaries.
Signed-off-by: Michal Koutný
---
mm/util.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/util.c b/mm/util.c
index d559bd
Hi.
I see this discussion somewhat faded away since the previous year.
There was rework [1] that reduced (ab)use of mmap_sem in prctl
functions.
Actually, there still remains the down_write() in prctl_set_mm.
I considered at least replacing it with the mm_struct.arg_lock +
down_read() but then
On Wed, Apr 17, 2019 at 03:41:52PM +0200, Michal Hocko
wrote:
> Don't we need to use the lock in prctl_set_mm as well then?
Correct. The patch alone just moves the race from
get_cmdline/prctl_set_mm_map to get_cmdline/prctl_set_mm.
arg_lock could be used in prctl_set_mm but the better idea
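The scheme discussed here can be illustrated with a userspace toy (a pthread mutex standing in for the kernel spinlock; the struct and function names merely mirror the mm_struct fields and are invented for this sketch): the argument boundaries are read under the dedicated arg_lock rather than the address-space-wide mmap_sem.

```c
#include <pthread.h>

/* Toy userspace illustration: snapshot arg_start/arg_end under the
 * dedicated arg_lock instead of taking the whole-address-space
 * mmap_sem. Not kernel code. */
struct toy_mm {
	pthread_mutex_t arg_lock;
	unsigned long arg_start, arg_end;
};

static unsigned long toy_cmdline_len(struct toy_mm *mm)
{
	unsigned long start, end;

	pthread_mutex_lock(&mm->arg_lock);	/* consistent snapshot */
	start = mm->arg_start;
	end = mm->arg_end;
	pthread_mutex_unlock(&mm->arg_lock);

	return end - start;
}

/* Demonstration helper. */
static unsigned long toy_cmdline_len_demo(void)
{
	struct toy_mm mm = { PTHREAD_MUTEX_INITIALIZER, 0x1000, 0x1400 };

	return toy_cmdline_len(&mm);
}
```

The point of the finer-grained lock is that writers of the boundaries (prctl_set_mm and friends) and readers (get_cmdline) only contend on this small lock, not on the semaphore serializing the whole VMA tree.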
On Wed, Apr 17, 2019 at 03:38:41PM +0300, Cyrill Gorcunov
wrote:
> I've a bit vague memory what we've ended up with, but iirc there was
> a problem with brk() syscall or similar. Then I think we left everything
> as is.
Was this related to the removal of non PR_SET_MM_MAP operations too?
Do you
ming basic
arguments validation.
Signed-off-by: Michal Koutný
---
kernel/sys.c | 12 +---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/kernel/sys.c b/kernel/sys.c
index 12df0e5434b8..bbce0f26d707 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -2125,8 +2125,12 @@
Hi,
making this holder of mmap_sem killable for the sake of /proc/...
diagnostics was an idea I was pondering too. However, I think the
approach of pretending we read 0 bytes is not correct. The API would IMO
need to be extended to allow passing a result such as EINTR to the end
caller.
Why
d in mm_struct")
Cc: Yang Shi
Cc: Mateusz Guzik
CC: Cyrill Gorcunov
Co-developed-by: Laurent Dufour
Signed-off-by: Laurent Dufour
Signed-off-by: Michal Koutný
---
kernel/sys.c | 10 --
mm/util.c| 4 ++--
2 files changed, 10 insertions(+), 4 deletions(-)
diff --git a/ker
tch should not change any behavior, it is mere refactoring for
following patch.
v1, v2: ---
v3: Remove unused mm variable from validate_prctl_map_addr
CC: Kirill Tkhai
CC: Cyrill Gorcunov
Signed-off-by: Michal Koutný
Reviewed-by: Kirill Tkhai
---
kernel/
unused variable mm
Michal Koutný (2):
prctl_set_mm: Refactor checks from validate_prctl_map
prctl_set_mm: downgrade mmap_sem to read lock
kernel/sys.c | 56
mm/util.c| 4 ++--
2 files changed, 30 insertions(+), 30 deletions
Hello.
(Apologies for the late reply.) I've aggregated the two previously discussed
patches into one series and, based on responses, made some changes summarized below.
v2
- insert a patch refactoring validate_prctl_map
- move find_vma out of the arg_lock critical section
Michal Koutný (3):
mm
truct")
avoided the coarse use of mmap_sem in similar situations.
get_cmdline can also use arg_lock instead of mmap_sem when it reads the
boundaries.
Fixes: 88aa7cc688d4 ("mm: introduce arg_lock to protect arg_start|end and
env_start|end in mm_struct")
Cc: Yang Shi
Cc: Mateusz Guzik
v2: call find_vma without arg_lock held
CC: Cyrill Gorcunov
CC: Laurent Dufour
Signed-off-by: Michal Koutný
---
kernel/sys.c | 10 --
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/kernel/sys.c b/kernel/sys.c
index e1acb444d7b0..641fda756575 100644
--- a/kernel/sys.c
++
tch should not change any behavior, it is mere refactoring for
following patch.
CC: Kirill Tkhai
CC: Cyrill Gorcunov
Signed-off-by: Michal Koutný
---
kernel/sys.c | 45 -
1 file changed, 20 insertions(+), 25 deletions(-)
diff --git a/kernel/sys.c b/ke
On Tue, Apr 30, 2019 at 01:45:17PM +0300, Cyrill Gorcunov
wrote:
> It setups these parameters unconditionally. I need to revisit
> this moment. Technically (if only I'm not missing something
> obvious) we might have a race here with prctl setting up new
> params, but this should be harmless
sumers of
__access_remote_vm() (they won't actually handle -EINTR correctly without
further changes). This beats my original idea in simplicity.
Reviewed-by: Michal Koutný
Michal
Reviewed-by: Michal Koutný
On Tue, Mar 30, 2021 at 11:00:36AM +0200, Arnd Bergmann wrote:
> Would it be possible to enclose most or all of kernel/cgroup/cgroup.c
> in an #ifdef CGROUP_SUBSYS_COUNT block?
Even without any controllers, there can still be named hierarchies (v1)
or the default hierarchy (v2) (for instance) for
Hi Vipin.
On Thu, Mar 04, 2021 at 03:19:45PM -0800, Vipin Sharma
wrote:
> arch/x86/kvm/svm/sev.c| 65 +-
> arch/x86/kvm/svm/svm.h| 1 +
> include/linux/cgroup_subsys.h | 4 +
> include/linux/misc_cgroup.h | 130 +++
> init/Kconfig | 14 ++
>
Hello.
On Sun, Mar 07, 2021 at 07:48:40AM -0500, Tejun Heo wrote:
> Vipin, thank you very much for your persistence and patience.
Yes, and thanks for taking my remarks into account.
> Michal, as you've been reviewing the series, can you please take
> another look and ack them if you don't find
On Fri, Mar 12, 2021 at 11:07:14AM -0800, Vipin Sharma
wrote:
> We should be fine without atomic64_t because we are using unsigned
> long and not 64 bit explicitly. This will work on both 32 and 64 bit
> machines.
I see.
> But I will add READ_ONCE and WRITE_ONCE because of potential chances of
On Fri, Mar 12, 2021 at 09:49:26AM -0800, Vipin Sharma
wrote:
> I will add some more information in the cover letter of the next version.
Thanks.
> Each one coming up with their own interaction is a duplicate effort
> when they all need similar thing.
Could this be expressed as a new BPF hook
On Mon, Mar 15, 2021 at 07:41:00PM -0400, Johannes Weiner
wrote:
> Switch to the atomic variant, cgroup_rstat_irqsafe().
Congratulations(?), the first use of cgroup_rstat_irqsafe().
Reviewed-by: Michal Koutný
> cfs_rq_of(se)->on_list wouldn't hold, so the patch
certainly isn't finished.
Signed-off-by: Michal Koutný
---
kernel/sched/fair.c | 41 ++---
kernel/sched/sched.h | 4 +---
2 files changed, 7 insertions(+), 38 deletions(-)
diff --git a/kernel/sched/fa
Hello.
(Sorry for necroposting, found this upstream reference only now.)
On Mon, Apr 20, 2020 at 03:04:53PM +0800, Muchun Song
wrote:
> /* Time spent by the tasks of the CPU accounting group executing in ... */
> @@ -339,7 +340,7 @@ void cpuacct_charge(struct task_struct *tsk, u64 cputime)
>
Hello.
On Thu, Feb 18, 2021 at 11:55:47AM -0800, Vipin Sharma
wrote:
> This patch is creating a new misc cgroup controller for allocation and
> tracking of resources which are not abstract like other cgroup
> controllers.
Please don't refer to this as "allocation" anywhere, that has a specific
On Thu, Feb 18, 2021 at 11:55:48AM -0800, Vipin Sharma
wrote:
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> [...]
> +#ifndef CONFIG_KVM_AMD_SEV
> +/*
> + * When this config is not defined, SEV feature is not supported and APIs in
> + * this file are not used but this file still
On Tue, Feb 23, 2021 at 05:24:23PM +0800, Muchun Song
wrote:
> mm/slab_common.c | 4 ++--
> mm/slub.c| 8
> 2 files changed, 6 insertions(+), 6 deletions(-)
Reviewed-by: Michal Koutný
The function has no current users and is a remnant of the kdbus
enthusiasm era, 857a2beb09ab ("cgroup: implement
task_cgroup_path_from_hierarchy()"). Drop it to eliminate unused code.
Suggested-by: Romain Perier
Signed-off-by: Michal Koutný
---
in
On Wed, Feb 24, 2021 at 08:57:36PM -0800, Vipin Sharma
wrote:
> This function is meant for hot unplug functionality too.
Then I'm wondering if the current form is sufficient, i.e. the generic
controller can hardly implement preemption but possibly it should
prevent any additional charges of the
On Thu, Feb 25, 2021 at 11:28:46AM -0800, Vipin Sharma
wrote:
> My approach here is that it is the responsibility of the caller to:
> 1. Check the return value and proceed accordingly.
> 2. Ideally, let all of the usage be 0 before deactivating this resource
>by setting capacity to 0
If the
Hello.
IIUC, the premise is that the tasks that have different cookies imply
they would never share a core.
On Thu, Apr 01, 2021 at 03:10:12PM +0200, Peter Zijlstra wrote:
> The cgroup interface now uses a 'core_sched' file, which still takes 0,1. It
> is
> however changed such that you can
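The premise stated above (tasks with different cookies never share a core) reduces to a simple comparison; the following toy sketch uses invented struct and function names, not the scheduler's:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy sketch of the core-scheduling premise: two tasks may run on
 * sibling SMT threads of one core only if their cookies match.
 * Names are invented for this illustration. */
struct toy_task {
	unsigned long core_cookie;
};

static bool toy_can_share_core(const struct toy_task *a,
			       const struct toy_task *b)
{
	return a->core_cookie == b->core_cookie;
}

/* Demonstration helper. */
static bool toy_share_demo(unsigned long cookie_a, unsigned long cookie_b)
{
	struct toy_task a = { cookie_a }, b = { cookie_b };

	return toy_can_share_core(&a, &b);
}
```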