On Thu, Feb 17, 2022 at 2:06 PM Alex Deucher wrote:
>
> On Thu, Feb 17, 2022 at 2:04 PM Nick Desaulniers
> wrote:
> >
> >
> > Alex,
> > Has AMD been able to set up clang builds, yet?
>
> No. I think some individual teams do, but it's never been integrated
> into our larger CI systems as of yet a
On Thu, Nov 18, 2021 at 11:33 PM Alexei Starovoitov
wrote:
>
> On Thu, Nov 18, 2021 at 03:28:40PM -0500, Kenny Ho wrote:
> > + for_each_possible_cpu(cpu) {
> > + /* allocate first, connect the cgroup later */
> > + events[i] = perf_event_create_
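The hunk above is cut off by the archive; a minimal sketch of the pattern it appears to follow (allocate one kernel counter per possible CPU first, attach the cgroup afterwards) might look like the following. The attr argument, the events[] array, and the NULL task/handler parameters are assumptions; only for_each_possible_cpu() and perf_event_create_kernel_counter() are existing kernel APIs.

	#include <linux/cpumask.h>
	#include <linux/err.h>
	#include <linux/perf_event.h>

	static struct perf_event *events[NR_CPUS];

	/* Sketch: allocate a per-CPU kernel counter array; cgroup wiring comes later. */
	static int alloc_percpu_events(struct perf_event_attr *attr)
	{
		int cpu, i = 0;

		for_each_possible_cpu(cpu) {
			/* allocate first, connect the cgroup later */
			events[i] = perf_event_create_kernel_counter(attr, cpu, NULL,
								     NULL, NULL);
			if (IS_ERR(events[i]))
				return PTR_ERR(events[i]);	/* cleanup omitted */
			i++;
		}
		return 0;
	}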
[2] https://lwn.net/Articles/679074/
[3]
https://www.linuxplumbersconf.org/event/4/contributions/291/attachments/313/528/Linux_Plumbers_Conference_2019.pdf
[4] https://linuxplumbersconf.org/event/11/contributions/899/
Kenny Ho (4):
cgroup, perf: Add ability to connect to perf cgroup fr
Change-Id: Ie2580c3a71e2a5116551879358cb5304b04d3838
Signed-off-by: Kenny Ho
---
include/linux/trace_events.h | 9 +
kernel/trace/bpf_trace.c | 28
2 files changed, 37 insertions(+)
diff --git a/include/linux/trace_events.h b/include/linux
This provides the ability to allocate cgroup-specific perf_events from
bpf-cgroup in a later patch.
Change-Id: I13aa7f3dfc2883ba3663c0b94744a6169504bbd8
Signed-off-by: Kenny Ho
---
include/linux/cgroup.h | 2 ++
include/linux/perf_event.h | 2 ++
kernel/cgroup/cgroup.c | 4 ++--
kernel
On Fri, May 7, 2021 at 12:54 PM Daniel Vetter wrote:
>
> SRIOV is kinda by design vendor specific. You set up the VF endpoint, it
> shows up, it's all hw+fw magic. Nothing for cgroups to manage here at all.
Right, so in theory you just use the device cgroup with the VF endpoints.
> All I meant is
On Fri, May 7, 2021 at 4:59 AM Daniel Vetter wrote:
>
> Hm I missed that. I feel like time-sliced-of-a-whole gpu is the easier gpu
> cgroups controller to get started, since it's much closer to other cgroups
> that control bandwidth of some kind. Whether it's i/o bandwidth or compute
> bandwidth is
Sorry for the late reply (I have been working on other stuff.)
On Fri, Feb 5, 2021 at 8:49 AM Daniel Vetter wrote:
>
> So I agree that on one side CU mask can be used for low-level quality
> of service guarantees (like the CLOS cache stuff on intel cpus as an
> example), and that's going to be ra
n, Feb 01, 2021 at 11:51:07AM -0500, Kenny Ho wrote:
> > On Mon, Feb 1, 2021 at 9:49 AM Daniel Vetter wrote:
> > > - there's been a pile of cgroups proposal to manage gpus at the drm
> > > subsystem level, some by Kenny, and frankly this at least looks a bit
> >
hat.
No Daniel, this is a quick *draft* to get a conversation going. BPF was
actually a path suggested by Tejun back in 2018, so I think you are
mischaracterizing this quite a bit.
"2018-11-20 Kenny Ho:
To put the questions in more concrete terms, let's say a user wants to
expose certain part of a gpu to a par
On Tue, Nov 3, 2020 at 4:04 PM Alexei Starovoitov
wrote:
>
> On Tue, Nov 03, 2020 at 02:19:22PM -0500, Kenny Ho wrote:
> > On Tue, Nov 3, 2020 at 12:43 AM Alexei Starovoitov
> > wrote:
> > > On Mon, Nov 2, 2020 at 9:39 PM Kenny Ho wrote:
>
> Sounds like either
On Tue, Nov 3, 2020 at 12:43 AM Alexei Starovoitov
wrote:
> On Mon, Nov 2, 2020 at 9:39 PM Kenny Ho wrote:
> pls don't top post.
My apologies.
> > Cgroup awareness is desired because the intent
> > is to use this for resource management as well (potentially along with
>
wrote:
>
> On Mon, Nov 02, 2020 at 02:23:02PM -0500, Kenny Ho wrote:
> > Adding a few more emails from get_maintainer.pl and bumping this
> > thread since there hasn't been any comments so far. Is this too
> > crazy? Am I missing something fundamental?
>
> sorry
Adding a few more emails from get_maintainer.pl and bumping this
thread since there haven't been any comments so far. Is this too
crazy? Am I missing something fundamental?
Regards,
Kenny
On Wed, Oct 7, 2020 at 11:24 AM Kenny Ho wrote:
>
> This is a skeleton implementation to invi
more useful for specific
device
Signed-off-by: Kenny Ho
---
fs/ioctl.c | 5 +++
include/linux/bpf-cgroup.h | 14
include/linux/bpf_types.h | 2 ++
include/uapi/linux/bpf.h | 8 +
kernel/bpf/cgroup.c| 66 ++
kerne
support today? How would you support low-jitter/low-latency
sharing of a single GPU if you have whatever hardware support you need
today?
Regards,
Kenny
> > On Tue, Apr 14, 2020 at 9:26 AM Daniel Vetter wrote:
> > >
> > > On Tue, Apr 14, 2020 at 3:14 PM Kenny Ho wrote:
>
gestion,
if not...question 2.)
2) If spatial sharing is required to support GPU HPC use cases, what
would you implement if you have the hardware support today?
Regards,
Kenny
On Tue, Apr 14, 2020 at 9:26 AM Daniel Vetter wrote:
>
> On Tue, Apr 14, 2020 at 3:14 PM Kenny Ho wrote:
> >
your switching cost is zero.) As a drm co-maintainer, are you
suggesting GPU has no place in the HPC use case?
Regards,
Kenny
On Tue, Apr 14, 2020 at 8:52 AM Daniel Vetter wrote:
>
> On Tue, Apr 14, 2020 at 2:47 PM Kenny Ho wrote:
> > On Tue, Apr 14, 2020 at 8:20 AM Daniel Vetter wr
Hi Daniel,
On Tue, Apr 14, 2020 at 8:20 AM Daniel Vetter wrote:
> My understanding from talking with a few other folks is that
> the cpumask-style CU-weight thing is not something any other gpu can
> reasonably support (and we have about 6+ of those in-tree)
How does Intel plan to support the Su
Hi,
On Mon, Apr 13, 2020 at 4:54 PM Tejun Heo wrote:
>
> Allocations definitely are acceptable and it's not a pre-requisite to have
> work-conserving control first either. Here, given the lack of consensus in
> terms of what even constitute resource units, I don't think it'd be a good
> idea to c
work-conserving
implementation first, especially when we have users asking for such
functionality?
Regards,
Kenny
On Mon, Apr 13, 2020 at 3:11 PM Tejun Heo wrote:
>
> Hello, Kenny.
>
> On Tue, Mar 24, 2020 at 02:49:27PM -0400, Kenny Ho wrote:
> > Can you elaborate more on what are the mi
Hi Tejun,
Can you elaborate more on what the missing pieces are?
Regards,
Kenny
On Tue, Mar 24, 2020 at 2:46 PM Tejun Heo wrote:
>
> On Tue, Mar 17, 2020 at 12:03:20PM -0400, Kenny Ho wrote:
> > What's your thoughts on this latest series?
>
> My overall impression is th
Hi Tejun,
What are your thoughts on this latest series?
Regards,
Kenny
On Wed, Feb 26, 2020 at 2:02 PM Kenny Ho wrote:
>
> This is a submission for the introduction of a new cgroup controller for the
> drm subsystem following a series of RFCs [v1, v2, v3, v4]
>
> Changes fr
Set allocation limit for /dev/dri/card1 to 1GB
echo "226:1 1g" > gpu.buffer.total.max
Set allocation limit for /dev/dri/card0 to 512MB
echo "226:0 512m" > gpu.buffer.total.max
Change-Id: Id3265bbd0fafe84a16b59617df79bd32196160be
Signed-off-by: Kenn
gpu.buffer.count.stats
A read-only flat-keyed file which exists on all cgroups. Each
entry is keyed by the drm device's major:minor.
Total number of GEM buffers allocated.
Change-Id: Iad29bdf44390dbcee07b1e72ea0ff811aa3b9dcd
Signed-off-by: Kenny Ho
---
Document
gpu.buffer.peak.stats
A read-only flat-keyed file which exists on all cgroups. Each
entry is keyed by the drm device's major:minor.
Largest (high water mark) GEM buffer allocated in bytes.
Change-Id: I40fe4c13c1cea8613b3e04b802f3e1f19eaab4fc
Signed-off-by: Ken
y memparse
(such as k, m, g) can be used.
Set largest allocation for /dev/dri/card1 to 4MB
echo "226:1 4m" > gpu.buffer.peak.max
Change-Id: I5ab3fb4a442b6cbd5db346be595897c90217da69
Signed-off-by: Kenny Ho
---
Documentation/admin-guide/cgroup-v2.rst | 18 +
defined by the drmcg the kfd process belongs to.
Change-Id: I2930e76ef9ac6d36d0feb81f604c89a4208e6614
Signed-off-by: Kenny Ho
---
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h| 4 +
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 29
drivers/gpu/drm/amd/amdkfd/kfd_chardev.c | 7
applies to the root cgroup since it can be
created before DRM devices are available. The drmcg controller will go
through all existing drm cgroups and initialize them with the new device
accordingly.
Change-Id: I64e421d8dfcc22ee8282cc1305960e20c2704db7
Signed-off-by: Kenny Ho
---
drivers/gpu/drm
list Enumeration of the subdevices
= ==
Change-Id: Idde0ef9a331fd67bb9c7eb8ef9978439e6452488
Signed-off-by: Kenny Ho
---
Documentation/admin-guide/cgroup-v2.rst | 21 +++
include/drm/drm_cgroup.h| 3 +
include/linux/cg
type for the migrated task.
Change-Id: I0ce7c4e5a04c31bd0f8d9853a383575d4bc9a3fa
Signed-off-by: Kenny Ho
---
include/drm/drm_drv.h | 10
kernel/cgroup/drm.c | 58 +++
2 files changed, 68 insertions(+)
diff --git a/include/drm/drm_drv.h b/include
virtualization.)
Change-Id: Ia90aed8c4cb89ff20d8216a903a765655b44fc9a
Signed-off-by: Kenny Ho
---
Documentation/admin-guide/cgroup-v2.rst | 18 -
Documentation/cgroup-v1/drm.rst | 1 +
include/linux/cgroup_drm.h | 92 +
include/linux/cgroup_subsys.h
evice-plugin
[8] https://github.com/kubernetes/kubernetes/issues/52757
Kenny Ho (11):
cgroup: Introduce cgroup for drm subsystem
drm, cgroup: Bind drm and cgroup subsystem
drm, cgroup: Initialize drmcg properties
drm, cgroup: Add total GEM buffer allocation stats
drm, cgroup: Add
is keyed by the drm device's major:minor.
Total GEM buffer allocation in bytes.
Change-Id: Ibc1f646ca7dbc588e2d11802b156b524696a23e7
Signed-off-by: Kenny Ho
---
Documentation/admin-guide/cgroup-v2.rst | 50 +-
drivers/gpu/drm/drm_gem.c | 9 ++
incl
Since the drm subsystem can be compiled as a module and drm devices can
be added and removed during run time, add several functions to bind the
drm subsystem as well as drm devices with drmcg.
Two pairs of functions:
drmcg_bind/drmcg_unbind - used to bind/unbind the drm subsystem to the
cgroup sub
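As a rough illustration of the bind/unbind split described above (the cgroup controller is built in, drm can be a module, so the module hands callbacks to the controller at load time), a minimal sketch might look like this; the drmcg_ops table, its members, and the locking are assumptions rather than the patch's actual interface.

	#include <linux/mutex.h>

	struct drmcg_ops {
		void (*enumerate_devices)(void);	/* walk existing drm minors */
		void (*release_device)(int minor);	/* drop per-device drmcg state */
	};

	static DEFINE_MUTEX(drmcg_mutex);
	static const struct drmcg_ops *drmcg_active_ops;

	/* Called by drm.ko at module init: the drm subsystem is now available. */
	void drmcg_bind(const struct drmcg_ops *ops)
	{
		mutex_lock(&drmcg_mutex);
		drmcg_active_ops = ops;
		mutex_unlock(&drmcg_mutex);
	}

	/* Called by drm.ko at module exit: stop calling into the module. */
	void drmcg_unbind(void)
	{
		mutex_lock(&drmcg_mutex);
		drmcg_active_ops = NULL;
		mutex_unlock(&drmcg_mutex);
	}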
Thanks, I will take a look.
Regards,
Kenny
On Wed, Feb 19, 2020 at 1:38 PM Johannes Weiner wrote:
>
> On Wed, Feb 19, 2020 at 11:28:48AM -0500, Kenny Ho wrote:
> > On Wed, Feb 19, 2020 at 11:18 AM Johannes Weiner wrote:
> > >
> > > Yes, I'd go with abs
On Wed, Feb 19, 2020 at 11:18 AM Johannes Weiner wrote:
>
> Yes, I'd go with absolute units when it comes to memory, because it's
> not a renewable resource like CPU and IO, and so we do have cliff
> behavior around the edge where you transition from ok to not-enough.
>
> memory.low is a bit in fl
Hi Tejun,
On Fri, Feb 14, 2020 at 2:17 PM Tejun Heo wrote:
>
> I have to agree with Daniel here. My apologies if I weren't clear
> enough. Here's one interface I can think of:
>
> * compute weight: The same format as io.weight. Proportional control
>of gpu compute.
>
> * memory low: Please
a cgroup, they
> > would set count=5. Per the documentation in this patch: "Some DRM
> > devices may only support lgpu as anonymous resources. In such case,
> > the significance of the position of the set bits in list will be
> > ignored." What Intel
ignored." What Intel does with the user expressed configuration of "5
out of 100" is entirely up to Intel (time slice if you like, change to
specific EUs later if you like, or make it driver configurable to
support both if you like.)
Regards,
Kenny
>
> On Fri, Feb 14, 2020
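To make the "count versus bitmask" interpretation above concrete, here is a minimal sketch of how a driver could normalize the two forms; the function, the 100-bit capacity, and the anonymous flag are illustrative assumptions, not part of the patch.

	#include <linux/bitmap.h>
	#include <linux/types.h>

	#define LGPU_CAPACITY 100	/* assumed capacity, for illustration only */

	/*
	 * If the device only supports anonymous lgpus, the position of the set
	 * bits is ignored and only the requested count matters; otherwise the
	 * user's bitmask is honoured as-is.
	 */
	static void lgpu_effective_mask(unsigned long *dst, const unsigned long *user,
					unsigned int count, bool anonymous)
	{
		if (anonymous) {
			bitmap_zero(dst, LGPU_CAPACITY);
			bitmap_set(dst, 0, count > LGPU_CAPACITY ? LGPU_CAPACITY : count);
		} else {
			bitmap_copy(dst, user, LGPU_CAPACITY);
		}
	}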
the drmcg the kfd process belongs to.
Change-Id: I2930e76ef9ac6d36d0feb81f604c89a4208e6614
Signed-off-by: Kenny Ho
---
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h| 4 +
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 29
drivers/gpu/drm/amd/amdkfd/kfd_chardev.c | 6 +
drivers
llocation after
considering the relationship between the cgroups and their
configurations in drm.lgpu.
Change-Id: Idde0ef9a331fd67bb9c7eb8ef9978439e6452488
Signed-off-by: Kenny Ho
---
Documentation/admin-guide/cgroup-v2.rst | 80 ++
include/drm/drm_cgroup.h|
Set allocation limit for /dev/dri/card1 to 1GB
echo "226:1 1g" > drm.buffer.total.max
Set allocation limit for /dev/dri/card0 to 512MB
echo "226:0 512m" > drm.buffer.total.max
Change-Id: Id3265bbd0fafe84a16b59617df79bd32196160be
Signed-off-by: Kenn
type for the migrated task.
Change-Id: I0ce7c4e5a04c31bd0f8d9853a383575d4bc9a3fa
Signed-off-by: Kenny Ho
---
include/drm/drm_drv.h | 10
kernel/cgroup/drm.c | 59 ++-
2 files changed, 68 insertions(+), 1 deletion(-)
diff --git a/include/drm
3-10/
[7] https://github.com/RadeonOpenCompute/k8s-device-plugin
[8] https://github.com/kubernetes/kubernetes/issues/52757
Kenny Ho (11):
cgroup: Introduce cgroup for drm subsystem
drm, cgroup: Bind drm and cgroup subsystem
drm, cgroup: Initialize drmcg properties
drm, cgroup: Add total GEM bu
virtualization.)
Change-Id: Ia90aed8c4cb89ff20d8216a903a765655b44fc9a
Signed-off-by: Kenny Ho
---
Documentation/admin-guide/cgroup-v2.rst | 18 -
Documentation/cgroup-v1/drm.rst | 1 +
include/linux/cgroup_drm.h | 92 +
include/linux/cgroup_subsys.h
drm.buffer.peak.stats
A read-only flat-keyed file which exists on all cgroups. Each
entry is keyed by the drm device's major:minor.
Largest (high water mark) GEM buffer allocated in bytes.
Change-Id: I40fe4c13c1cea8613b3e04b802f3e1f19eaab4fc
Signed-off-by: Ken
applies to the root cgroup since it can be
created before DRM devices are available. The drmcg controller will go
through all existing drm cgroups and initialize them with the new device
accordingly.
Change-Id: I64e421d8dfcc22ee8282cc1305960e20c2704db7
Signed-off-by: Kenny Ho
---
drivers/gpu/drm
Since the drm subsystem can be compiled as a module and drm devices can
be added and removed during run time, add several functions to bind the
drm subsystem as well as drm devices with drmcg.
Two pairs of functions:
drmcg_bind/drmcg_unbind - used to bind/unbind the drm subsystem to the
cgroup sub
y memparse
(such as k, m, g) can be used.
Set largest allocation for /dev/dri/card1 to 4MB
echo "226:1 4m" > drm.buffer.peak.max
Change-Id: I5ab3fb4a442b6cbd5db346be595897c90217da69
Signed-off-by: Kenny Ho
---
Documentation/admin-guide/cgroup-v2.rst | 18 +
is keyed by the drm device's major:minor.
Total GEM buffer allocation in bytes.
Change-Id: Ibc1f646ca7dbc588e2d11802b156b524696a23e7
Signed-off-by: Kenny Ho
---
Documentation/admin-guide/cgroup-v2.rst | 50 +-
drivers/gpu/drm/drm_gem.c | 9 ++
incl
drm.buffer.count.stats
A read-only flat-keyed file which exists on all cgroups. Each
entry is keyed by the drm device's major:minor.
Total number of GEM buffers allocated.
Change-Id: Iad29bdf44390dbcee07b1e72ea0ff811aa3b9dcd
Signed-off-by: Kenny Ho
---
Document
obvious to debug. (I want to write this down so I don't
forget also... :) I should probably have some dmesg output for situations
like this.) Thanks!
Regards,
Kenny
On Mon, Dec 2, 2019 at 5:05 PM Greathouse, Joseph
wrote:
>
> > -Original Message-
> > From: Kenny Ho
> > S
On Tue, Oct 1, 2019 at 10:30 AM Michal Koutný wrote:
> On Thu, Aug 29, 2019 at 02:05:24AM -0400, Kenny Ho wrote:
> > drm.buffer.default
> > A read-only flat-keyed file which exists on the root cgroup.
> > Each entry is keyed by the drm
On Tue, Oct 1, 2019 at 10:31 AM Michal Koutný wrote:
> On Thu, Aug 29, 2019 at 02:05:19AM -0400, Kenny Ho wrote:
> > +struct cgroup_subsys drm_cgrp_subsys = {
> > + .css_alloc = drmcg_css_alloc,
> > + .css_free = drmcg_css_free,
> > +
Reducing audience since this is AMD specific.
On Tue, Oct 8, 2019 at 3:11 PM Kuehling, Felix wrote:
>
> On 2019-08-29 2:05 a.m., Kenny Ho wrote:
> > The number of logical gpu (lgpu) is defined to be the number of compute
> > unit (CU) for a device. The lgpu allocation lim
wrote:
> > On 2019-08-29 2:05 a.m., Kenny Ho wrote:
> > > drm.lgpu
> > > A read-write nested-keyed file which exists on all cgroups.
> > > Each entry is keyed by the DRM device's major:minor.
> > >
> > > lgpu st
On Thu, Sep 5, 2019 at 4:32 PM Daniel Vetter wrote:
>
*snip*
> drm_dev_unregister gets called on hotunplug, so your cgroup-internal
> tracking won't get out of sync any more than the drm_minor list gets
> out of sync with drm_devices. The trouble with drm_minor is just that
> cgroup doesn't track
On Thu, Sep 5, 2019 at 4:06 PM Daniel Vetter wrote:
>
> On Thu, Sep 5, 2019 at 8:28 PM Kenny Ho wrote:
> >
> > (resent in plain text mode)
> >
> > Hi Daniel,
> >
> > This is the previous patch relevant to this discussion:
> > https://patchwork.free
ter wrote:
>
> On Tue, Sep 03, 2019 at 04:43:45PM -0400, Kenny Ho wrote:
> > On Tue, Sep 3, 2019 at 4:12 PM Daniel Vetter wrote:
> > > On Tue, Sep 3, 2019 at 9:45 PM Kenny Ho wrote:
> > > > On Tue, Sep 3, 2019 at 3:57 AM Daniel Vetter wrote:
> > > > >
2019 at 04:43:45PM -0400, Kenny Ho wrote:
> > On Tue, Sep 3, 2019 at 4:12 PM Daniel Vetter wrote:
> > > On Tue, Sep 3, 2019 at 9:45 PM Kenny Ho wrote:
> > > > On Tue, Sep 3, 2019 at 3:57 AM Daniel Vetter
> wrote:
> > > > > Iterating over mi
On Tue, Sep 3, 2019 at 4:12 PM Daniel Vetter wrote:
> On Tue, Sep 3, 2019 at 9:45 PM Kenny Ho wrote:
> > On Tue, Sep 3, 2019 at 3:57 AM Daniel Vetter wrote:
> > > Iterating over minors for cgroups sounds very, very wrong. Why do we care
> > > whether a buffer was al
On Tue, Sep 3, 2019 at 3:57 AM Daniel Vetter wrote:
>
> On Thu, Aug 29, 2019 at 02:05:18AM -0400, Kenny Ho wrote:
> > To allow other subsystems to iterate through all stored DRM minors and
> > act upon them.
> >
> > Also exposes drm_minor_acquire and drm_minor_rele
On Tue, Sep 3, 2019 at 5:20 AM Daniel Vetter wrote:
>
> On Tue, Sep 3, 2019 at 10:24 AM Koenig, Christian
> wrote:
> >
> > Am 03.09.19 um 10:02 schrieb Daniel Vetter:
> > > On Thu, Aug 29, 2019 at 02:05:17AM -0400, Kenny Ho wrote:
> > >> With this RFC v
Hi Tejun,
Thanks for looking into this. I can definitely help where I can and I
am sure other experts will jump in if I start misrepresenting the
reality :) (as Daniel already has done.)
Regarding your points, my understanding is that there isn't really a
TTM vs GEM situation anymore (there is
point, so this patch set
> here switched from a dynamic approach to just assuming the worst and
> reserving some memory for page tables.
>
> Regards,
> Christian.
>
> Am 02.09.19 um 16:07 schrieb Kenny Ho:
> > Hey Christian,
> >
> > Can you go into details a
Hey Christian,
Can you go into a bit more detail on how and why this doesn't
work well anymore? (such as its relationship with per-VM BOs?) I am
curious to learn more because I was reading into this chunk of code
earlier. Is this something that the Shrinker API can help with?
Regards,
Ken
't have a distinction which domain you need to evict stuff from.
>
> Regards,
> Christian.
>
> Am 29.08.19 um 16:07 schrieb Kenny Ho:
>
> Thanks for the feedback Christian. I am still digging into this one. Daniel
> suggested leveraging the Shrinker API for the functio
straightforward as far as I understand it currently.)
Regards,
Kenny
On Thu, Aug 29, 2019 at 3:08 AM Koenig, Christian
wrote:
> Am 29.08.19 um 08:05 schrieb Kenny Ho:
> > Allow DRM TTM memory manager to register a work_struct, such that, when
> > a drmcgrp is under memory pressure, memory re
set bits
in list will be ignored.
This lgpu resource supports the 'allocation' resource
distribution model.
Change-Id: I1afcacf356770930c7f925df043e51ad06ceb98e
Signed-off-by: Kenny Ho
---
Documentation/admin-guide/cgroup-v2.rst | 46
includ
226:2 bytes_in_period=9223372036854775807 avg_bytes_per_us=65536
Change-Id: Ie573491325ccc16535bb943e7857f43bd0962add
Signed-off-by: Kenny Ho
---
drivers/gpu/drm/ttm/ttm_bo.c | 7 +
include/drm/drm_cgroup.h | 19 +++
include/linux/cgroup_drm.h | 16 ++
kernel/cgroup/drm.c | 319 +
stats
A read-only flat-keyed file which exists on all cgroups. Each
entry is keyed by the drm device's major:minor.
Total number of evictions.
Change-Id: Ice2c4cc845051229549bebeb6aa2d7d6153bdf6a
Signed-off-by: Kenny Ho
---
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 3 +-
d
drm.buffer.peak.stats
A read-only flat-keyed file which exists on all cgroups. Each
entry is keyed by the drm device's major:minor.
Largest (high water mark) GEM buffer allocated in bytes.
Change-Id: I79e56222151a3d33a76a61ba0097fe93ebb3449f
Signed-off-by: Ken
type for the migrated task.
Change-Id: I68187a72818b855b5f295aefcb241cda8ab63b00
Signed-off-by: Kenny Ho
---
include/drm/drm_drv.h | 10
kernel/cgroup/drm.c | 57 +++
2 files changed, 67 insertions(+)
diff --git a/include/drm/drm_drv.h b/include
y memparse
(such as k, m, g) can be used.
Set largest allocation for /dev/dri/card1 to 4MB
echo "226:1 4m" > drm.buffer.peak.max
Change-Id: I0830d56775568e1cf215b56cc892d5e7945e9f25
Signed-off-by: Kenny Ho
---
Documentation/admin-guide/cgroup-v2.rst | 18
is keyed by the drm device's major:minor.
Total GEM buffer allocation in bytes.
Change-Id: I9d662ec50d64bb40a37dbf47f018b2f3a1c033ad
Signed-off-by: Kenny Ho
---
Documentation/admin-guide/cgroup-v2.rst | 50 +-
drivers/gpu/drm/drm_gem.c | 9 ++
incl
drm.buffer.count.stats
A read-only flat-keyed file which exists on all cgroups. Each
entry is keyed by the drm device's major:minor.
Total number of GEM buffers allocated.
Change-Id: Id3e1809d5fee8562e47a7d2b961688956d844ec6
Signed-off-by: Kenny Ho
---
Document
Change-Id: I7988e28a453b53140b40a28c176239acbc81d491
Signed-off-by: Kenny Ho
---
drivers/gpu/drm/ttm/ttm_bo.c | 7 ++
include/drm/drm_cgroup.h | 17 +
include/linux/cgroup_drm.h | 2 +
kernel/cgroup/drm.c | 135 +++
4 files changed, 161
the drmcg the kfd process belongs to.
Change-Id: I69a57452c549173a1cd623c30dc57195b3b6563e
Signed-off-by: Kenny Ho
---
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h| 4 +
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 21 +++
drivers/gpu/drm/amd/amdkfd/kfd_chardev.c | 6 +
drivers/gpu
to the root cgroup since it can be
created before DRM devices are available. The drmcg controller will go
through all existing drm cgroups and initialize them with the new device
accordingly.
Change-Id: I908ee6975ea0585e4c30eafde4599f87094d8c65
Signed-off-by: Kenny Ho
---
drivers/gpu/drm
Allow the DRM TTM memory manager to register a work_struct, such that, when
a drmcgrp is under memory pressure, memory reclaiming can be triggered
immediately.
Change-Id: I25ac04e2db9c19ff12652b88ebff18b44b2706d8
Signed-off-by: Kenny Ho
---
drivers/gpu/drm/ttm/ttm_bo.c| 49
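A minimal sketch of the registration/trigger pair described in this patch's summary, assuming TTM supplies the work item and the controller only queues it; the structure and function names are guesses, not the actual code.

	#include <linux/workqueue.h>

	struct drmcg_device {
		struct work_struct *reclaim_work;	/* supplied by TTM at init */
	};

	/* TTM registers its reclaim work when it sets up a device's memory manager. */
	void drmcg_register_reclaim_work(struct drmcg_device *ddev,
					 struct work_struct *work)
	{
		ddev->reclaim_work = work;
	}

	/* The cgroup controller calls this when a drmcgrp exceeds its memory limit. */
	void drmcg_signal_pressure(struct drmcg_device *ddev)
	{
		if (ddev->reclaim_work)
			schedule_work(ddev->reclaim_work);
	}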
github.com/kubernetes/kubernetes/issues/52757
Kenny Ho (16):
drm: Add drm_minor_for_each
cgroup: Introduce cgroup for drm subsystem
drm, cgroup: Initialize drmcg properties
drm, cgroup: Add total GEM buffer allocation stats
drm, cgroup: Add peak GEM buffer allocation stats
drm, cgroup: Add
: I7c4b67ce6b31f06d1037b03435386ff5b8144ca5
Signed-off-by: Kenny Ho
---
drivers/gpu/drm/drm_drv.c | 19 +++
drivers/gpu/drm/drm_internal.h | 4
include/drm/drm_drv.h | 4
3 files changed, 23 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/drm_drv.c b/drivers/gpu/drm/drm_drv.c
index
Set allocation limit for /dev/dri/card1 to 1GB
echo "226:1 1g" > drm.buffer.total.max
Set allocation limit for /dev/dri/card0 to 512MB
echo "226:0 512m" > drm.buffer.total.max
Change-Id: I96e0b7add4d331ed8bb267b3c9243d360c6e9903
Signed-off-by: Kenn
virtualization.)
Change-Id: I6830d3990f63f0c13abeba29b1d330cf28882831
Signed-off-by: Kenny Ho
---
Documentation/admin-guide/cgroup-v2.rst | 18 -
Documentation/cgroup-v1/drm.rst | 1 +
include/linux/cgroup_drm.h | 92 +
include/linux/cgroup_subsys.h
usage
== ==
Reading returns the following::
226:0 system=0 tt=0 vram=0 priv=0
226:1 system=0 tt=9035776 vram=17768448 priv=16809984
226:2 system=0 tt=9035776 vram=17768448 priv=16809984
Change-Id: I986e44533848f66411465bdd52105e78105a709a
Signed-off-by: Kenny Ho
---
in
On Thu, Jun 27, 2019 at 3:24 AM Daniel Vetter wrote:
> Another question I have: What about HMM? With the device memory zone
> the core mm will be a lot more involved in managing that, but I also
> expect that we'll have classic buffer-based management for a long time
> still. So these need to work
On Thu, Jun 27, 2019 at 2:11 AM Daniel Vetter wrote:
> I feel like a better approach would be to add a cgroup for the various
> engines on the gpu, and then also account all the sdma (or whatever the
> name of the amd copy engines is again) usage by ttm_bo moves to the right
> cgroup. I think tha
On Thu, Jun 27, 2019 at 5:24 PM Daniel Vetter wrote:
> On Thu, Jun 27, 2019 at 02:42:43PM -0400, Kenny Ho wrote:
> > Um... I am going to get a bit philosophical here and suggest that the
> > idea of sharing (especially uncontrolled sharing) is inherently at odd
> > with co
On Thu, Jun 27, 2019 at 2:01 AM Daniel Vetter wrote:
>
> btw reminds me: I guess it would be good to have a per-type .total
> read-only exposed, so that userspace has an idea of how much there is?
> ttm is trying to be agnostic to the allocator that's used to manage a
> memory type/resource, so do
On Thu, Jun 27, 2019 at 1:43 AM Daniel Vetter wrote:
>
> On Wed, Jun 26, 2019 at 06:41:32PM -0400, Kenny Ho wrote:
> > So without the sharing restriction and some kind of ownership
> > structure, we will have to migrate/change the owner of the buffer when
> > the cgroup
On Wed, Jun 26, 2019 at 12:25 PM Daniel Vetter wrote:
>
> On Wed, Jun 26, 2019 at 11:05:20AM -0400, Kenny Ho wrote:
> > The bandwidth is measured by keeping track of the amount of bytes moved
> > by ttm within a time period. We defined two types of bandwidth: burst
> >
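A minimal sketch of the accounting idea quoted above: each ttm move adds its size to a per-cgroup counter, and closing the accounting window derives an average rate. The field names mirror the bytes_in_period/avg_bytes_per_us keys shown elsewhere in the series, but the helpers themselves are assumptions.

	#include <linux/math64.h>
	#include <linux/types.h>

	struct drmcg_bw {
		u64 bytes_in_period;	/* bytes moved since the period started */
		u64 period_us;		/* length of the accounting window */
		u64 avg_bytes_per_us;	/* rate derived at the end of the period */
	};

	/* Called from the ttm move path with the size of the buffer being moved. */
	static void drmcg_bw_account_move(struct drmcg_bw *bw, u64 bytes)
	{
		bw->bytes_in_period += bytes;
	}

	/* Called at each period boundary to roll the window over. */
	static void drmcg_bw_end_period(struct drmcg_bw *bw)
	{
		if (bw->period_us)
			bw->avg_bytes_per_us = div64_u64(bw->bytes_in_period,
							 bw->period_us);
		bw->bytes_in_period = 0;
	}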
On Wed, Jun 26, 2019 at 12:12 PM Daniel Vetter wrote:
>
> On Wed, Jun 26, 2019 at 11:05:18AM -0400, Kenny Ho wrote:
> > drm.memory.stats
> > A read-only nested-keyed file which exists on all cgroups.
> > Each entry is keyed by the drm de
11:05:22AM -0400, Kenny Ho wrote:
> > Allow DRM TTM memory manager to register a work_struct, such that, when
> > a drmcgrp is under memory pressure, memory reclaiming can be triggered
> > immediately.
> >
> > Change-Id: I25ac04e2db9c19ff12652b88ebff18b4
On Wed, Jun 26, 2019 at 5:41 PM Daniel Vetter wrote:
> On Wed, Jun 26, 2019 at 05:27:48PM -0400, Kenny Ho wrote:
> > On Wed, Jun 26, 2019 at 12:05 PM Daniel Vetter wrote:
> > > So what happens when you start a lot of threads all at the same time,
> > > allocating gem b
On Wed, Jun 26, 2019 at 5:04 PM Daniel Vetter wrote:
> On Wed, Jun 26, 2019 at 10:37 PM Kenny Ho wrote:
> > (sending again, I keep missing the reply-all in gmail.)
> You can make it the default somewhere in the gmail options.
Um... interesting, my option was actually not set (neit
On Wed, Jun 26, 2019 at 12:05 PM Daniel Vetter wrote:
>
> > drm.buffer.default
> > A read-only flat-keyed file which exists on the root cgroup.
> > Each entry is keyed by the drm device's major:minor.
> >
> > Default limits on the total GEM buffer allocation in bytes.
>
> D
(sending again, I keep missing the reply-all in gmail.)
On Wed, Jun 26, 2019 at 11:56 AM Daniel Vetter wrote:
>
> Why the separate, explicit registration step? I think a simpler design for
> drivers would be that we set up cgroups if there's anything to be
> controlled, and then for GEM drivers t
On Wed, Jun 26, 2019 at 11:49 AM Daniel Vetter wrote:
>
> Bunch of naming bikesheds
I appreciate the suggestions, naming is hard :).
> > +#include
> > +
> > +struct drmcgrp {
>
> drm_cgroup for more consistency how we usually call these things.
I was hoping to keep the symbol short if possible