Re: [RFC PATCH v3 09/11] drm, cgroup: Add per cgroup bw measure and control

2019-07-02 Thread Daniel Vetter
On Fri, Jun 28, 2019 at 03:49:28PM -0400, Kenny Ho wrote:
> On Thu, Jun 27, 2019 at 2:11 AM Daniel Vetter  wrote:
> > I feel like a better approach would be to add a cgroup for the various
> > engines on the gpu, and then also account all the sdma (or whatever the
> > name of the amd copy engines is again) usage by ttm_bo moves to the right
> > cgroup.  I think that's a more meaningful limitation. For direct thrashing
> > control I think there's both not enough information available in the
> > kernel (you'd need some performance counters to watch how much bandwidth
> > userspace batches/CS are wasting), and I don't think the ttm eviction
> > logic is ready to step over all the priority inversion issues this will
> > bring up. Managing sdma usage otoh will be a lot more straightforward (but
> > still has all the priority inversion problems, but in the scheduler that
> > might be easier to fix perhaps with the explicit dependency graph - in the
> > i915 scheduler we already have priority boosting afaiui).
> My concern with hooking into the engine / lower level is that the
> engine may not be process/cgroup aware.  So the bandwidth tracking is

Why is the engine not process aware? Thus far all command submission I'm
aware of is done by a real process from userspace ... we should be able to
track these with cgroups perfectly.

> per device.  I am also wondering if this is potentially a case of
> perfect getting in the way of good.  While ttm_bo_handle_move_mem
> may not track everything, it is still a key function for a lot of the
> memory operations.  Also, if the programming model is designed to
> bypass the kernel then I am not sure there is anything the kernel
> can do.  (Things like kernel-bypass network stacks come to mind.)  All
> that said, I will certainly dig deeper into the topic.

The problem is that there's no full bypass of the kernel; any reasonable
workload will need both. But if you only control one side of the bandwidth
usage, you're not really controlling anything.

Also, this is uapi: perfect is pretty much the bar we need to clear; any
mistake will hurt us for the next 10 years at least :-)

btw if you haven't read it yet: The lwn article about the new block io
controller is pretty interesting. I think you're trying to solve a similar
problem here:

https://lwn.net/SubscriberLink/792256/e66982524fa9477b/

Cheers, Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

Re: [RFC PATCH v3 09/11] drm, cgroup: Add per cgroup bw measure and control

2019-06-28 Thread Kenny Ho
On Thu, Jun 27, 2019 at 2:11 AM Daniel Vetter  wrote:
> I feel like a better approach would be to add a cgroup for the various
> engines on the gpu, and then also account all the sdma (or whatever the
> name of the amd copy engines is again) usage by ttm_bo moves to the right
> cgroup.  I think that's a more meaningful limitation. For direct thrashing
> control I think there's both not enough information available in the
> kernel (you'd need some performance counters to watch how much bandwidth
> userspace batches/CS are wasting), and I don't think the ttm eviction
> logic is ready to step over all the priority inversion issues this will
> bring up. Managing sdma usage otoh will be a lot more straightforward (but
> still has all the priority inversion problems, but in the scheduler that
> might be easier to fix perhaps with the explicit dependency graph - in the
> i915 scheduler we already have priority boosting afaiui).
My concern with hooking into the engine / lower level is that the
engine may not be process/cgroup aware.  So the bandwidth tracking is
per device.  I am also wondering if this is potentially a case of
perfect getting in the way of good.  While ttm_bo_handle_move_mem
may not track everything, it is still a key function for a lot of the
memory operations.  Also, if the programming model is designed to
bypass the kernel then I am not sure there is anything the kernel
can do.  (Things like kernel-bypass network stacks come to mind.)  All
that said, I will certainly dig deeper into the topic.

Regards,
Kenny

Re: [RFC PATCH v3 09/11] drm, cgroup: Add per cgroup bw measure and control

2019-06-27 Thread Daniel Vetter
On Thu, Jun 27, 2019 at 12:34:05AM -0400, Kenny Ho wrote:
> On Wed, Jun 26, 2019 at 12:25 PM Daniel Vetter  wrote:
> >
> > On Wed, Jun 26, 2019 at 11:05:20AM -0400, Kenny Ho wrote:
> > > The bandwidth is measured by keeping track of the number of bytes moved
> > > by ttm within a time period.  We define two types of bandwidth: burst
> > > and average.  Average bandwidth is calculated by dividing the total
> > > number of bytes moved within a cgroup by the lifetime of the cgroup.
> > > Burst bandwidth is similar except that the byte and time measurements are
> > > reset after a user-configurable period.
> >
> > So I'm not too sure exposing this is a great idea, at least depending upon
> > what you're trying to do with it. There's a few concerns here:
> >
> > - I think bo movement stats might be useful, but they're not telling you
> >   everything. Applications can also copy data themselves and put buffers
> >   where they want them, especially with more explicit apis like vk.
> >
> > - which kind of moves are we talking about here? Eviction related bo moves
> >   seem not counted here, and if you have lots of gpus with funny
> >   interconnects you might also get other kinds of moves, not just system
> >   ram <-> vram.
> Eviction moves are counted, but I think I placed the delay in the wrong
> place (the tracking of bytes moved is in the previous patch, in
> ttm_bo_handle_move_mem, which is common to all moves as far as I can
> tell.)
> 
> > - What happens if we slow down, but someone else needs to evict our
> >   buffers/move them (ttm is atm not great at this, but Christian König is
> >   working on patches). I think there's lots of priority inversion
> >   potential here.
> >
> > - If the goal is to avoid thrashing the interconnects, then this isn't the
> >   full picture by far - apps can use copy engines and explicit placement,
> >   again that's how vulkan at least is supposed to work.
> >
> > I guess these all boil down to: What do you want to achieve here? The
> > commit message doesn't explain the intended use-case of this.
> Thrashing prevention is the intent.  I am not familiar with Vulkan so
> I will have to get back to you on that.  I don't know how those
> explicit placements translate into the kernel.  At this stage, I think
> it's still worthwhile to have this as a resource even if some
> applications bypass the kernel.  I certainly welcome more feedback on
> this topic.

The trouble with thrashing prevention like this is that either you don't
limit all the bo moves, and then you don't count everything; or you limit
them all, and then you create priority inversions in the ttm eviction
handler, essentially rate-limiting everyone who's thrashing. Or at least
you run the risk of that happening.

Not what you want I think :-)

I also think that the blkcg people are still trying to figure out how to
make this work fully reliably (it's the same problem really), and a
critical piece is knowing/estimating the overall bandwidth. Without that
the admin can't really do anything meaningful. The problem with that is
you don't know it here: not just because of vk, but because any userspace
that has buffers in the pci gart uses the same interconnect just as part
of its rendering job. So if your goal is to guarantee some minimal amount
of bo move bandwidth, then this won't work, because you have no idea how
much bandwidth there even is for bo moves.

Getting thrashing limited is very hard.

I feel like a better approach would be to add a cgroup for the various
engines on the gpu, and then also account all the sdma (or whatever the
name of the amd copy engines is again) usage by ttm_bo moves to the right
cgroup. I think that's a more meaningful limitation. For direct thrashing
control I think there's both not enough information available in the
kernel (you'd need some performance counters to watch how much bandwidth
userspace batches/CS are wasting), and I don't think the ttm eviction
logic is ready to step over all the priority inversion issues this will
bring up. Managing sdma usage otoh will be a lot more straightforward (but
still has all the priority inversion problems, but in the scheduler that
might be easier to fix perhaps with the explicit dependency graph - in the
i915 scheduler we already have priority boosting afaiui).
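
Very rough sketch below of the shape I have in mind - every name in it
(struct drmcg, drmcg_charge_engine, DRMCG_ENGINE_SDMA) is made up for
illustration and is not an existing API in this series or any driver:

/* Illustrative only: charge the copy-engine (sdma) traffic that a
 * ttm_bo move generates to the cgroup owning the buffer, i.e. the same
 * per-engine bucket its own userspace copy submissions would be
 * charged to.  All names here are hypothetical. */
#include <stdint.h>

struct drmcg;                           /* opaque per-cgroup drm state */

enum drmcg_engine {
	DRMCG_ENGINE_GFX,
	DRMCG_ENGINE_COMPUTE,
	DRMCG_ENGINE_SDMA,              /* copy engine used for bo moves */
};

/* hypothetical charge hook: one counter/limit per (cgroup, engine) */
void drmcg_charge_engine(struct drmcg *cg, enum drmcg_engine engine,
			 uint64_t bytes);

/* would be called from the driver's bo-move path, so kernel-initiated
 * moves and userspace copies get limited by the same knob */
static inline void account_bo_move(struct drmcg *owner, uint64_t move_bytes)
{
	drmcg_charge_engine(owner, DRMCG_ENGINE_SDMA, move_bytes);
}

The point being that kernel-driven bo moves and userspace copy jobs end
up accounted against the same per-engine budget.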
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

Re: [RFC PATCH v3 09/11] drm, cgroup: Add per cgroup bw measure and control

2019-06-26 Thread Kenny Ho
On Wed, Jun 26, 2019 at 12:25 PM Daniel Vetter  wrote:
>
> On Wed, Jun 26, 2019 at 11:05:20AM -0400, Kenny Ho wrote:
> > The bandwidth is measured by keeping track of the number of bytes moved
> > by ttm within a time period.  We define two types of bandwidth: burst
> > and average.  Average bandwidth is calculated by dividing the total
> > number of bytes moved within a cgroup by the lifetime of the cgroup.
> > Burst bandwidth is similar except that the byte and time measurements are
> > reset after a user-configurable period.
>
> So I'm not too sure exposing this is a great idea, at least depending upon
> what you're trying to do with it. There's a few concerns here:
>
> - I think bo movement stats might be useful, but they're not telling you
>   everything. Applications can also copy data themselves and put buffers
>   where they want them, especially with more explicit apis like vk.
>
> - which kind of moves are we talking about here? Eviction related bo moves
>   seem not counted here, and if you have lots of gpus with funny
>   interconnects you might also get other kinds of moves, not just system
>   ram <-> vram.
Eviction moves are counted, but I think I placed the delay in the wrong
place (the tracking of bytes moved is in the previous patch, in
ttm_bo_handle_move_mem, which is common to all moves as far as I can
tell.)

> - What happens if we slow down, but someone else needs to evict our
>   buffers/move them (ttm is atm not great at this, but Christian König is
>   working on patches). I think there's lots of priority inversion
>   potential here.
>
> - If the goal is to avoid thrashing the interconnects, then this isn't the
>   full picture by far - apps can use copy engines and explicit placement,
>   again that's how vulkan at least is supposed to work.
>
> I guess these all boil down to: What do you want to achieve here? The
> commit message doesn't explain the intended use-case of this.
Thrashing prevention is the intent.  I am not familiar with Vulkan so
I will have to get back to you on that.  I don't know how those
explicit placements translate into the kernel.  At this stage, I think
it's still worthwhile to have this as a resource even if some
applications bypass the kernel.  I certainly welcome more feedback on
this topic.

Regards,
Kenny

Re: [RFC PATCH v3 09/11] drm, cgroup: Add per cgroup bw measure and control

2019-06-26 Thread Daniel Vetter
On Wed, Jun 26, 2019 at 11:05:20AM -0400, Kenny Ho wrote:
> The bandwidth is measured by keeping track of the number of bytes moved
> by ttm within a time period.  We define two types of bandwidth: burst
> and average.  Average bandwidth is calculated by dividing the total
> number of bytes moved within a cgroup by the lifetime of the cgroup.
> Burst bandwidth is similar except that the byte and time measurements are
> reset after a user-configurable period.
> 
> The bandwidth control is best effort since it is done on a per move
> basis instead of per byte.  The bandwidth is limited by delaying the
> move of a buffer.  The bandwidth limit can be exceeded when the next
> move is larger than the remaining allowance.
> 
> drm.burst_bw_period_in_us
> A read-write flat-keyed file which exists on the root cgroup.
> Each entry is keyed by the drm device's major:minor.
> 
> Length of a period used to measure burst bandwidth, in us.
> One period per device.
> 
> drm.burst_bw_period_in_us.default
> A read-only flat-keyed file which exists on the root cgroup.
> Each entry is keyed by the drm device's major:minor.
> 
> Default length of a period in us (one per device.)
> 
> drm.bandwidth.stats
> A read-only nested-keyed file which exists on all cgroups.
> Each entry is keyed by the drm device's major:minor.  The
> following nested keys are defined.
> 
>   ================= ======================================
>   burst_byte_per_us Burst bandwidth
>   avg_bytes_per_us  Average bandwidth
>   moved_byte        Amount of bytes moved within a period
>   accum_us          Amount of time accumulated in a period
>   total_moved_byte  Bytes moved within the cgroup lifetime
>   total_accum_us    Cgroup lifetime in us
>   byte_credit       Available byte credit to limit avg bw
>   ================= ======================================
> 
> Reading returns the following::
>
> 226:1 burst_byte_per_us=23 avg_bytes_per_us=0 moved_byte=2244608
> accum_us=95575 total_moved_byte=45899776 total_accum_us=201634590
> byte_credit=13214278590464
> 226:2 burst_byte_per_us=10 avg_bytes_per_us=219 moved_byte=430080
> accum_us=39350 total_moved_byte=65518026752 total_accum_us=298337721
> byte_credit=9223372036854644735
> 
> drm.bandwidth.high
> A read-write nested-keyed file which exists on all cgroups.
> Each entry is keyed by the drm device's major:minor.  The
> following nested keys are defined.
> 
>   ================= =======================================
>   bytes_in_period   Burst limit per period in bytes
>   avg_bytes_per_us  Average bandwidth limit in bytes per us
>   ================= =======================================
> 
> Reading returns the following::
> 
> 226:1 bytes_in_period=9223372036854775807 avg_bytes_per_us=65536
> 226:2 bytes_in_period=9223372036854775807 avg_bytes_per_us=65536
> 
> drm.bandwidth.default
> A read-only nested-keyed file which exists on the root cgroup.
> Each entry is keyed by the drm device's major:minor.  The
> following nested keys are defined.
> 
>   ================= ========================================
>   bytes_in_period   Default burst limit per period in bytes
>   avg_bytes_per_us  Default average bw limit in bytes per us
>   ================= ========================================
> 
> Reading returns the following::
> 
> 226:1 bytes_in_period=9223372036854775807 avg_bytes_per_us=65536
> 226:2 bytes_in_period=9223372036854775807 avg_bytes_per_us=65536
> 
> Change-Id: Ie573491325ccc16535bb943e7857f43bd0962add
> Signed-off-by: Kenny Ho 

So I'm not too sure exposing this is a great idea, at least depending upon
what you're trying to do with it. There's a few concerns here:

- I think bo movement stats might be useful, but they're not telling you
  everything. Applications can also copy data themselves and put buffers
  where they want them, especially with more explicit apis like vk.

- which kind of moves are we talking about here? Eviction related bo moves
  seem not counted here, and if you have lots of gpus with funny
  interconnects you might also get other kinds of moves, not just system
  ram <-> vram.

- What happens if we slow down, but someone else needs to evict our
  buffers/move them (ttm is atm not great at this, but Christian König is
  working on patches). I think there's lots of priority inversion
  potential here.

- If the goal is to avoid thrashing the interconnects, then this isn't the
  full picture by far - apps can use copy engines and explicit placement,
  again that's how vulkan at least is supposed to work.

I guess these all boil down to: What do you want to achieve here? The
commit message doesn't explain the intended use-case of this.

[RFC PATCH v3 09/11] drm, cgroup: Add per cgroup bw measure and control

2019-06-26 Thread Kenny Ho
The bandwidth is measured by keeping track of the number of bytes moved
by ttm within a time period.  We define two types of bandwidth: burst
and average.  Average bandwidth is calculated by dividing the total
number of bytes moved within a cgroup by the lifetime of the cgroup.
Burst bandwidth is similar except that the byte and time measurements are
reset after a user-configurable period.

The bandwidth control is best effort since it is done on a per move
basis instead of per byte.  The bandwidth is limited by delaying the
move of a buffer.  The bandwidth limit can be exceeded when the next
move is larger than the remaining allowance.
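
For illustration, here is a rough userspace-style sketch of the two
bandwidth definitions and of the best-effort credit check described
above.  The field names mirror the nested keys of drm.bandwidth.stats
documented below; the struct and helpers themselves are made up for
this sketch and are not the actual layout in kernel/cgroup/drm.c.

/* Sketch only: mirrors the burst/average definitions above using the
 * drm.bandwidth.stats key names; not the real kernel/cgroup/drm.c code. */
#include <stdint.h>

struct bw_sample {
	uint64_t moved_byte;        /* bytes moved in the current period */
	uint64_t accum_us;          /* time accumulated in the current period */
	uint64_t total_moved_byte;  /* bytes moved over the cgroup lifetime */
	uint64_t total_accum_us;    /* cgroup lifetime in us */
};

/* average bandwidth: lifetime bytes divided by cgroup lifetime */
static uint64_t avg_bytes_per_us(const struct bw_sample *s)
{
	return s->total_accum_us ? s->total_moved_byte / s->total_accum_us : 0;
}

/* burst bandwidth: same ratio, but moved_byte/accum_us are reset every
 * burst_bw_period_in_us */
static uint64_t burst_byte_per_us(const struct bw_sample *s)
{
	return s->accum_us ? s->moved_byte / s->accum_us : 0;
}

/* best-effort limiting: a move is admitted whenever any credit is left,
 * so one move larger than the remaining credit can overshoot the limit */
static int try_charge_move(int64_t *byte_credit, uint64_t move_size)
{
	if (*byte_credit <= 0)
		return 0;                       /* caller should delay the move */
	*byte_credit -= (int64_t)move_size;     /* may go negative: overshoot */
	return 1;
}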

drm.burst_bw_period_in_us
A read-write flat-keyed file which exists on the root cgroup.
Each entry is keyed by the drm device's major:minor.

Length of a period used to measure burst bandwidth, in us.
One period per device.

drm.burst_bw_period_in_us.default
A read-only flat-keyed file which exists on the root cgroup.
Each entry is keyed by the drm device's major:minor.

Default length of a period in us (one per device.)

drm.bandwidth.stats
A read-only nested-keyed file which exists on all cgroups.
Each entry is keyed by the drm device's major:minor.  The
following nested keys are defined.

  ================= ======================================
  burst_byte_per_us Burst bandwidth
  avg_bytes_per_us  Average bandwidth
  moved_byte        Amount of bytes moved within a period
  accum_us          Amount of time accumulated in a period
  total_moved_byte  Bytes moved within the cgroup lifetime
  total_accum_us    Cgroup lifetime in us
  byte_credit       Available byte credit to limit avg bw
  ================= ======================================

Reading returns the following::

226:1 burst_byte_per_us=23 avg_bytes_per_us=0 moved_byte=2244608
accum_us=95575 total_moved_byte=45899776 total_accum_us=201634590
byte_credit=13214278590464
226:2 burst_byte_per_us=10 avg_bytes_per_us=219 moved_byte=430080
accum_us=39350 total_moved_byte=65518026752 total_accum_us=298337721
byte_credit=9223372036854644735

drm.bandwidth.high
A read-write nested-keyed file which exists on all cgroups.
Each entry is keyed by the drm device's major:minor.  The
following nested keys are defined.

  ================= =======================================
  bytes_in_period   Burst limit per period in bytes
  avg_bytes_per_us  Average bandwidth limit in bytes per us
  ================= =======================================

Reading returns the following::

226:1 bytes_in_period=9223372036854775807 avg_bytes_per_us=65536
226:2 bytes_in_period=9223372036854775807 avg_bytes_per_us=65536

drm.bandwidth.default
A read-only nested-keyed file which exists on the root cgroup.
Each entry is keyed by the drm device's major:minor.  The
following nested keys are defined.

  ================= ========================================
  bytes_in_period   Default burst limit per period in bytes
  avg_bytes_per_us  Default average bw limit in bytes per us
  ================= ========================================

Reading returns the following::

226:1 bytes_in_period=9223372036854775807 avg_bytes_per_us=65536
226:2 bytes_in_period=9223372036854775807 avg_bytes_per_us=65536
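
As a usage illustration (the cgroup path and the 226:0 major:minor below
are placeholders, not taken from this patch), capping a cgroup's average
move bandwidth comes down to writing the avg_bytes_per_us key documented
above into its drm.bandwidth.high file:

/* Example only: limit one cgroup's average bo-move bandwidth to roughly
 * 64 MiB/s (67108864 bytes / 1000000 us ~ 67 bytes per us).  The cgroup
 * path and the 226:0 device key are placeholders for this sketch. */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/sys/fs/cgroup/gpu-batch/drm.bandwidth.high", "w");

	if (!f)
		return 1;
	fprintf(f, "226:0 avg_bytes_per_us=67\n");
	fclose(f);
	return 0;
}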

Change-Id: Ie573491325ccc16535bb943e7857f43bd0962add
Signed-off-by: Kenny Ho 
---
 drivers/gpu/drm/ttm/ttm_bo.c |   7 +
 include/drm/drm_cgroup.h |  13 ++
 include/linux/cgroup_drm.h   |  14 ++
 kernel/cgroup/drm.c  | 309 ++-
 4 files changed, 340 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index e9f70547f0ad..f06c2b9d8a4a 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -36,6 +36,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -1176,6 +1177,12 @@ int ttm_bo_validate(struct ttm_buffer_object *bo,
 * Check whether we need to move buffer.
 */
	if (!ttm_bo_mem_compat(placement, &bo->mem, &new_flags)) {
+   unsigned int move_delay = drmcgrp_get_mem_bw_period_in_us(bo);
+   move_delay /= 2000; /* check every half period in ms*/
+   while (bo->bdev->ddev != NULL && !drmcgrp_mem_can_move(bo)) {
+   msleep(move_delay);
+   }
+
ret = ttm_bo_move_buffer(bo, placement, ctx);
if (ret)
return ret;
diff --git a/include/drm/drm_cgroup.h b/include/drm/drm_cgroup.h
index 48ab5450cf17..9b1dbd6a4eca 100644
--- a/include/drm/drm_cgroup.h
+++